\section{\textbf{Introduction and Statement of Results}} \subsection{Partitions and Overpartitions} A {\it{partition}} of a positive integer $n$ is a nonincreasing sequence of positive integers that sum to $n$. The total number of partitions of $n$ is denoted by $p(n)$. One can also consider partitions where the parts are restricted to a specific set $S$ of integers and let $p(n;S)$ denote the number of partitions of $n$ into parts from $S$. For example, consider the set $$S=\{1,2,5,8\}.$$ Then $p(5;S)=4$ since the partitions of $5$ with parts from $S$ are $$5,2+2+1, 2+1+1+1,1+1+1+1+1.$$ The generating function for this type of partition is given by \begin{align}\label{set} \sum_{n=0}^{\infty}{p(n;S)q^n}=\prod_{n\in S}\frac{1}{1-q^n}. \end{align} Note that $S$ can be a multiset where repeated numbers are treated independently. For example, let $S=\{1,2_{1},2_{2},3_{1},3_{2}\}$; then the partitions of $n=4$ into parts from $S$ are $$3_{1}+1,3_{2}+1,2_{1}+2_{1},2_{1}+2_{2},2_{2}+2_{2},2_{1}+1+1,2_{2}+1+1,1+1+1+1.$$ Thus, $p(4;S)=8.$ Note that repeated numbers in a multiset are given an intrinsic order in terms of their subscripts. An {\it{overpartition}} of a positive integer $n$ is a partition of $n$ in which the first occurrence of a part may be overlined. We denote the number of overpartitions of $n$ by $\overline{p}(n)$ and define $\overline{p}(0):=1$. For example, when $n=3$ we see that $\overline{p}(3)=8,$ with overpartitions given by $$3,\overline{3}, 2+1, \overline{2}+1, 2+\overline{1}, \overline{2}+\overline{1}, 1+1+1, \overline{1}+1+1.$$ An overpartition can be interpreted as a pair of partitions, one into distinct parts (corresponding to the overlined parts) and the other unrestricted. Thus, we see that the generating function for overpartitions is given by \begin{equation} \overline{P}(q):=\sum_{n=0}^{\infty}\overline{p}(n)q^{n}=\prod_{n=1}^{\infty} \frac{1+q^n}{1-q^n}= 1+2q+4q^{2}+8q^{3}+14q^{4}+ \cdots.
\end{equation} Overpartitions have been studied extensively by Corteel, Lovejoy, Osburn, Bringmann, Mahlburg, Hirschhorn, Sellers, and many other mathematicians. For example, see \cite{bringmann2008rank}, \cite{corteel2004overpartitions}, \cite{hirschhorn2005arithmetic}, \cite{hirschhorn}, \cite{hirschhorn2006arithmetic}, \cite{lovejoy2003gordon}, \cite{lovejoy2004overpartition}, \cite{lovejoy2008rank} and \cite{mahlburg2004overpartition} to mention a few. The well-known Jacobi triple product identity \cite{andrews1965simple} is given by \begin{align}\label{jacobi1} \prod_{n=1}^{\infty}{(1-q^{2n})(1+zq^{2n-1})(1+z^{-1} q^{2n-1})}=\sum_{n=-\infty}^{\infty}z^n q^{n^2}, \end{align} which converges when $z\not =0$ and $|q|<1$. Letting $z=1$ in \eqref{jacobi1}, one obtains one of Ramanujan's classical theta functions \begin{align}\label{mo1} \phi(q):=\sum_{n=-\infty}^{\infty}{q^{n^2}}=\prod_{n=1}^{\infty}{(1-q^{2n})}{(1+q^{2n-1})^2}. \end{align} Replacing $q$ by $-q$ in \eqref{mo1}, we get \begin{align}\label{mo2} \phi(-q)=\sum_{n=-\infty}^{\infty}{(-1)^n q^{n^2}}=\prod_{n=1}^{\infty}{(1-q^{2n})}{(1-q^{2n-1})^2} =\prod_{n=1}^{\infty}{\frac{(1-q^{n})}{(1+q^{n})}}=\frac{1}{\overline{P}(q)}. \end{align} Note that $\phi(q)$ can be written as \begin{align*} \phi(q)=1+2\sum_{n=1}^{\infty}q^{n^2}. \end{align*} Thus, the generating function of overpartitions has the following $2$-adic expansion, \begin{align}\label{GenParOne} \overline{P}(q)=&\frac{1}{\phi(-q)}=\frac{1}{1+2\sum_{n=1}^{\infty}(-1)^{n^2}q^{n^2}}\nonumber\\ =&1+\sum_{k=1}^{\infty}{2^k(-1)^k\left(\sum_{n=1}^{\infty}(-1)^{n^2}q^{n^2}\right)^k}\nonumber\\ =&1+\sum_{k=1}^{\infty}{2^k}\sum_{n=1}^{\infty}\sum_{n_{1}^2+\dots+n_{k}^2=n}{(-1)^{n+k}q^n}\nonumber\\ =&1+\sum_{k=1}^{\infty}{2^k}\sum_{n=1}^{\infty}{(-1)^{n+k}c_{k}(n)q^n}, \end{align} where $c_{k}(n)$ denotes the number of representations of $n$ as a sum of $k$ squares of positive integers. Here we have used that $n_{1}+\dots+n_{k}\equiv n_{1}^{2}+\dots+n_{k}^{2}=n\pmod{2}$, so the sign $(-1)^{n_{1}+\dots+n_{k}}$ arising from expanding the $k$th power equals $(-1)^{n}$.
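The expansion \eqref{GenParOne} is easy to check numerically. The following Python sketch (the truncation bound $N$ and helper names are ours, purely for illustration) expands the product $\prod_{n\geq 1}(1+q^n)/(1-q^n)$ and verifies that, modulo $16$, $\overline{p}(n)$ agrees with the $k=1,2,3$ terms of \eqref{GenParOne}.

```python
# Numerical sanity check of the 2-adic expansion (GenParOne); the truncation
# bound N and the helper names are illustrative, not from the paper.
N = 40

def series_mul(a, b):
    """Multiply two power series given as coefficient lists, truncated at N."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(0, N + 1 - i):
                c[i + j] += ai * b[j]
    return c

# Coefficients of prod_{k>=1} (1+q^k)/(1-q^k); each factor is 1 + 2q^k + 2q^{2k} + ...
P = [1] + [0] * N
for k in range(1, N + 1):
    factor = [2 if m > 0 and m % k == 0 else 0 for m in range(N + 1)]
    factor[0] = 1
    P = series_mul(P, factor)

assert P[:5] == [1, 2, 4, 8, 14]  # matches 1 + 2q + 4q^2 + 8q^3 + 14q^4 + ...

def c(k, n):
    """Number of representations of n as an ordered sum of k positive squares."""
    if k == 0:
        return 1 if n == 0 else 0
    return sum(c(k - 1, n - m * m) for m in range(1, int(n**0.5) + 1))

# Modulo 16, only the k = 1, 2, 3 terms of (GenParOne) survive.
for n in range(1, N + 1):
    partial = sum(2**k * (-1)**(n + k) * c(k, n) for k in range(1, 4))
    assert (P[n] - partial) % 16 == 0
```

A similar check modulo $2^{j}$ only requires the terms with $k<j$, since $2^{k}\equiv 0\pmod{2^{j}}$ for $k\geq j$.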
Several overpartition congruences modulo small powers of $2$ have been found using the $2$-adic expansion formula \eqref{GenParOne}. For example, Mahlburg \cite{mahlburg2004overpartition} proves that \begin{align*} \{n\in \mathbb{N}\;|\;\overline{p}(n)\equiv 0\pmod{64}\} \end{align*} is a set of density 1\footnote{The sequence $A$ of positive integers $a_{1}<a_{2}<\cdots$ has density $\delta(A)$ if ${\delta}(A)=\underset{n \to \infty}{\lim} \frac{A(n)}{n}$ exists, where $A(n)$ denotes the number of elements of $A$ that are at most $n$. For more details about arithmetic density of integers, one may see \cite{niven1951asymptotic}. }. Later, Kim \cite{kim2008overpartition} generalized Mahlburg's result modulo $128$. Furthermore, Mahlburg conjectures \cite{mahlburg2004overpartition} that for any integer $k\geq 1$, \begin{align*} \overline{p}(n)\equiv 0\pmod{2^k}, \end{align*} for almost all integers $n$. \subsection{Plane Partitions and Plane Overpartitions} As a natural generalization of partitions, MacMahon \cite{andrews} defines a {\it{plane partition}} of $n$ as a two-dimensional array $\pi=(\pi_{ij})_{i,j \geq 1 }$ of nonnegative integers $\pi_{ij}$, with $i$ indexing rows and $j$ indexing columns, that are weakly decreasing in both rows and columns and for which $|\pi|:=\sum{\pi_{ij}}=n<\infty$. Corteel, Savelief and Vuleti{\'c} \cite{corteelplane} define {\it{plane overpartitions}} as a generalization of overpartitions as follows. \begin{defn}[Corteel, Savelief, Vuleti{\'c}, \cite{corteelplane}] A plane overpartition is a plane partition where \begin{enumerate} \item in each row the last occurrence of an integer can be overlined or not and all the other occurrences of this integer in the row are not overlined, and \item in each column the first occurrence of an integer can be overlined or not and all the other occurrences of this integer in the column are overlined. \end{enumerate} \end{defn} Plane overpartitions can be represented in the form of Ferrers-Young diagrams.
For example, $$ { \begin{tikzpicture}[scale=1] \boxWithLabel{0.5}{8}{4}{5} \boxWithLabel{0.5}{8.5}{4}{4} \boxWithLabel{0.5}{9}{4}{$\bar{4}$} \boxWithLabel{0.5}{9.5}{4}{3} \boxWithLabel{0.5}{10}{4}{$\bar{1}$} \boxWithLabel{0.5}{8}{3.5}{3} \boxWithLabel{0.5}{8.5}{3.5}{2} \boxWithLabel{0.5}{9}{3.5}{1} \boxWithLabel{0.5}{8}{3}{2} \boxWithLabel{0.5}{8.5}{3}{$\bar{2}$} \boxWithLabel{0.5}{9}{3}{$\bar{1}$} \boxWithLabel{0.5}{8}{2.5}{1} \boxWithLabel{0.5}{8.5}{2.5}{1} \boxWithLabel{0.5}{8}{2}{$\bar{1}$} \end{tikzpicture} } $$ is a plane overpartition of $31$. The total number of plane overpartitions of $n$ is denoted by $\overline{pl}(n)$. For example, the $16$ plane overpartitions of $3$ are as follows, $$ { \begin{tikzpicture}[scale=1] \boxWithLabel{0.5}{0}{4}{3} \boxWithLabel{0.5}{1}{4}{$\bar{3}$} \boxWithLabel{0.5}{2}{4}{2} \boxWithLabel{0.5}{2.5}{4}{1} \boxWithLabel{0.5}{3.5}{4}{$\bar{2}$} \boxWithLabel{0.5}{4}{4}{1} \boxWithLabel{0.5}{5}{4}{2} \boxWithLabel{0.5}{5.5}{4}{$\bar{1}$} \boxWithLabel{0.5}{6.5}{4}{$\bar{2}$} \boxWithLabel{0.5}{7}{4}{$\bar{1}$} \boxWithLabel{0.5}{8}{4}{1} \boxWithLabel{0.5}{8.5}{4}{1} \boxWithLabel{0.5}{9}{4}{1} \boxWithLabel{0.5}{10}{4}{1} \boxWithLabel{0.5}{10.5}{4}{1} \boxWithLabel{0.5}{11}{4}{$\bar{1}$} \boxWithLabel{0.5}{2}{3}{2} \boxWithLabel{0.5}{2}{2.5}{1} \boxWithLabel{0.5}{3.5}{3}{$\bar{2}$} \boxWithLabel{0.5}{3.5}{2.5}{1} \boxWithLabel{0.5}{5}{3}{2} \boxWithLabel{0.5}{5}{2.5}{$\bar{1}$} \boxWithLabel{0.5}{6.5}{3}{$\bar{2}$} \boxWithLabel{0.5}{6.5}{2.5}{$\bar{1}$} \boxWithLabel{0.5}{8}{3}{1} \boxWithLabel{0.5}{8.5}{3}{1} \boxWithLabel{0.5}{8}{2.5}{$\bar{1}$} \boxWithLabel{0.5}{10}{3}{1} \boxWithLabel{0.5}{10.5}{3}{$\bar{1}$} \boxWithLabel{0.5}{10}{2.5}{$\bar{1}$} \boxWithLabel{0.5}{8}{1.5}{1} \boxWithLabel{0.5}{8}{1}{$\bar{1}$} \boxWithLabel{0.5}{8}{0.5}{$\bar{1}$} \boxWithLabel{0.5}{10}{1.5}{$\bar{1}$} \boxWithLabel{0.5}{10}{1}{$\bar{1}$} \boxWithLabel{0.5}{10}{0.5}{$\bar{1}$} \end{tikzpicture} } $$ Corteel, Savelief and
Vuleti\'c \cite{corteelplane} use various methods to obtain the following generating function for plane overpartitions, \begin{equation}\label{GenPl} \overline{PL}(q):=\sum_{n=0}^{\infty}{\overline{pl}(n)q^n}=\prod_{n=1}^{\infty} \frac{(1+q^n)^{n}}{(1-q^n)^{n}}. \end{equation} Using the notation of Lovejoy and Mallet \cite{lovejoyncolor}, the generating function of plane overpartitions is also known as the generating function of $n$-color overpartitions. An $n$-color partition is a partition in which a part of size $n$ may appear in $n$ colors, with parts ordered first according to size and then according to color\footnote{We note that this is a different definition from what is often meant by $n$-color partition, in which each part, regardless of its size, may appear in one of $n$ colors.}. For example, there are 6 $n$-color partitions of 3, \begin{align*} 3_{3}, 3_{2},3_{1},2_{2}+1_{1},2_{1}+ 1_{1},1_{1}+1_{1}+1_{1}. \end{align*} An $n$-color overpartition is defined similarly to be an $n$-color partition in which the final occurrence of a part $n_{j}$ may be overlined. For example, there are 16 $n$-color overpartitions of 3, $$3_{3}, 3_{2}, 3_{1}, \overline{3}_{3}, \overline{3}_{2}, \overline{3}_{1}, 2_{2}+1_{1}, \overline{2}_{2}+1_{1}, {2}_{2}+\overline{1}_{1}, \overline{2}_{2}+\overline{1}_{1}, {2}_{1}+{1}_{1},\overline{2}_{1}+{1}_{1}, {2}_{1}+\overline{1}_{1},\overline{2}_{1}+\overline{1}_{1},{1}_{1}+{1}_{1}+{1}_{1},{1}_{1}+{1}_{1}+\overline{1}_{1}.$$ In \cite{ali1}, the author defines a restricted form of plane overpartitions called {\it{$k$-rowed plane overpartitions}}, namely plane overpartitions with at most $k$ rows. The total number of $k$-rowed plane overpartitions of $n$ is denoted by $\overline{pl}_{k}(n)$ and we define $\overline{pl}_{k}(0):=1$. The generating function is given by the following lemma.
\begin{lemma}[Al-Saedi, \cite{ali1}]\label{lemma3} For a fixed positive integer $k$, the generating function for $k$-rowed plane overpartitions is given by \begin{equation}\label{PLOvkGen} \overline{PL}_{k}(q):=\sum_{n=0}^{\infty}{\overline{pl}_{k}(n)q^{n}} =\prod_{n=1}^{\infty}{\frac{(1+q^{n})^{\text{min}\{k,n\}}}{(1-q^{n})^{\text{min}\{k,n\}}}}. \end{equation} \end{lemma} The author proves in \cite{ali1} that for all $n\geq 0$, \begin{align*} \overline{pl}_{4}(4n+1)+\overline{pl}_{4}(4n+2)+\overline{pl}_{4}(4n+3)\equiv 0\pmod{4}. \end{align*} \subsection{Main Results}\label{MainResultSec} In this section, we state the main results of this paper. First, we start with results that involve plane and restricted plane overpartition congruences modulo $4$ and $8$. Then, we state a few results for overpartition congruences modulo $8$ and congruence relations modulo $8$ between overpartitions and plane overpartitions. Recall that $\overline{p}_{o}(n)$ denotes the number of overpartitions of a positive integer $n$ into odd parts and that $\overline{p}_{o}(0)=1.$ \begin{theorem}\label{PlOverPaTh} For every integer $n\geq 1,$ \[ \overline{pl}(n)\equiv \overline{p}_{o}(n)\equiv \begin{cases} 2\pmod{4} &\text{if $n$ is a square or twice a square},\\ 0\pmod{4} &\text{otherwise}. \ \end{cases} \] \end{theorem} For an integer $n$ and a prime $p$, let $ord_{p}(n)$ denote the largest nonnegative integer such that $p^{ord_{p}(n)}|n.$ The following theorem gives a congruence relation modulo $4$ between $\overline{pl}(n)$ and $ord_{p}(n)$ for each odd prime $p|n.$ \begin{theorem}\label{OvPa} For any integer $n>1,$ \begin{equation}\label{opcon1} \overline{pl}(n)\equiv 2\cdot \prod_{odd\; prime\; p|n}{\left(ord_{p}(n)+1\right)}\pmod{4}. \end{equation} \end{theorem} The next theorem gives a systematic pattern of congruences modulo $4$ for even-rowed plane overpartitions.
\begin{theorem}\label{th11} Let $k\geq 2$ be a positive even integer, $S_{k}:=\{j\;|\; j\;\mbox{odd}\;, 1\leq j \leq k-1\}$ and $\ell$ be the least common multiple of the integers in $S_{k}$. Then for any odd prime $p<k$, $1\leq r\leq ord_{p}(\ell),$ and $n\geq 1,$ \[ \overline{pl}_{k}(\ell n+p^r)\equiv \begin{cases} 0\pmod{4} &\text{if $r$ is odd},\\ 2\pmod{4} &\text{if $r$ is even}. \ \end{cases} \] Moreover, for all $n\geq 1,$ \[ \overline{pl}_{k}(\ell n)\equiv \begin{cases} 0\pmod{4} &\text{if $k\equiv 0\pmod{4}$},\\ 2\pmod{4} &\text{if $k\equiv 2\pmod{4}$}. \ \end{cases} \] \end{theorem} In addition, we prove the following theorem, which gives an equivalence modulo $4$ between the $k$-rowed plane overpartition function for odd integers $k$ and the overpartition function. \begin{theorem}\label{th22} Let $k$ be a nonnegative integer. Then, for all $n\geq 0,$ \begin{align} \overline{pl}_{2k+1}(2n+1)\equiv \overline{p}(2n+1)\pmod{4}\label{id1}. \end{align} \end{theorem} The next result gives a pattern of congruences modulo $4$ between $\overline{pl}_{k}(n)$ and $\overline{p}(n)$ for odd $k$. \begin{theorem}\label{th333} Let $k\geq 2$ and $\ell$ be the least common multiple of all positive even integers $\leq 2k$. Then for all integers $n\geq 1,$ \begin{align}\label{th333eq1} \overline{pl}_{2k+1}(\ell n+2^j)\equiv \overline{p}(\ell n+2^j)\pmod{4}, \end{align} where $j\geq 2, j\equiv 0\pmod{2}$ and $2^{j-1}\leq k.$ Moreover, if $k\equiv 0\pmod{2}$, then for all integers $n\geq 0$ \begin{align}\label{th333eq2} \overline{pl}_{2k+1}(\ell n)\equiv \overline{p}(\ell n)\pmod{4}. \end{align} \end{theorem} The next theorem gives a few examples of $4$- and $8$-rowed plane overpartition congruences modulo $8$. One may find more of this type using similar methods of proof.
\begin{theorem}\label{ThmMod8} For all integers $n\geq 1,$ \begin{align}\label{c11} \overline{pl}_{4}(12n)\equiv 0\pmod{8}, \end{align} \begin{align}\label{c22} \overline{pl}_{4}(6n+3)\equiv 0\pmod{8}, \end{align} \begin{align}\label{c33} \overline{pl}_{8}(210n)\equiv 0\pmod{8}, \end{align} \begin{align}\label{c34} \overline{pl}_{8}(210n+3)\equiv 0\pmod{8}, \end{align} \begin{align}\label{c35} \overline{pl}_{8}(210n+9)\equiv 0\pmod{8}, \end{align} \begin{align}\label{c37} \overline{pl}_{8}(210n+105)\equiv 0\pmod{8}. \end{align} \end{theorem} The next result gives a useful overpartition congruence modulo $8.$ \begin{theorem}\label{CongOverMod8} The following holds for all odd nonsquare integers $n\geq 0,$ \begin{align*} \overline{p}(n)\equiv 0\pmod{8}. \end{align*} \end{theorem} For $k=5$, we obtain the following equivalences modulo $8$ for plane overpartitions with at most $5$ rows. \begin{theorem}\label{5rowed} The following holds for all $n\geq 0,$ \begin{align}\label{Con1Pl5} \overline{pl}_{5}(12n+1)\equiv \overline{p}(12n+1)\pmod{8}, \end{align} \begin{align}\label{ConPl5} \overline{pl}_{5}(12n+5)\equiv \overline{p}(12n+5)\pmod{8}. \end{align} \end{theorem} The rest of this paper is organized as follows. In Section \ref{pre}, we review some preliminaries needed in the proofs of the main theorems, including a useful theorem of Kwong \cite{kwong1} which we will apply to prove some of the identities in Theorem \ref{ThmMod8}. In Section \ref{MainProofs}, we present the proofs of the main results in this paper, and we give some applications of these results. In Section \ref{Remark}, we conclude with final remarks. \section{\textbf{Preliminaries}}\label{pre} In this section, we discuss the periodicity of a certain type of $q$-series, their minimum periods modulo integers, and how to find such periods.
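The notion of minimum period can be illustrated computationally before the general theory is developed. The following Python sketch (a brute-force search over a finite range; the bounds and names are ours) computes $p(n;\{1,3\})$, the number of partitions of $n$ into parts $1$ and $3$, and finds the minimum periods of this sequence modulo $4$ and modulo $8$; the values found, $12$ and $24$, agree with the formula in Kwong's theorem stated in this section.

```python
# Brute-force illustration of minimum periods modulo prime powers; the bounds
# and names below are ours.  We compute the coefficients of 1/((1-q)(1-q^3)),
# i.e. the number of partitions of n into parts 1 and 3.
TERMS = 200
coeffs = [1] + [0] * (TERMS - 1)
for part in (1, 3):                      # standard restricted-partition recurrence
    for n in range(part, TERMS):
        coeffs[n] += coeffs[n - part]

def min_period(a, mod):
    """Smallest d with a(n+d) == a(n) (mod `mod`) over the computed range."""
    for d in range(1, len(a) // 2):
        if all((a[n + d] - a[n]) % mod == 0 for n in range(len(a) - d)):
            return d
    return None

# Kwong's theorem predicts pi_{2^N} = 2^(N + b_2(S) - 1) * m_2(S) = 2^N * 3
# for S = {1, 3}, since b_2(S) = 1 (as 2^1 >= 2^0 + 2^0) and m_2(S) = 3.
assert min_period(coeffs, 4) == 12   # 2^2 * 3
assert min_period(coeffs, 8) == 24   # 2^3 * 3
```

A finite search like this cannot by itself certify a period beyond the computed range; it is Kwong's theorem that guarantees the periods found here persist for all $n$.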
Kwong and others have carried out extensive studies on the periodicity of certain rational functions, including partition generating functions; for example, see \cite{kwong3}, \cite{kwong1}, \cite{kwong2}, \cite{newman}, and \cite{nijenhuis1987}. We will apply a result of Kwong \cite{kwong1} that provides a systematic formula for the minimum period modulo prime powers of such periodic series. Let $$A(q)=\sum_{n=0}^{\infty}{\alpha(n)q^{n}} \in \mathbb{Z}[[q]]$$ be a formal power series with integer coefficients, let $d$ and $\ell$ be positive integers, and let $\gamma$ be a nonnegative integer. We say $A(q)$ is {\it{periodic}} with period $d$ modulo $\ell$ if, for all $n\geq \gamma$, $$\alpha(n+d)\equiv \alpha(n)\pmod{\ell}.$$ The smallest such period for $A(q)$, denoted $\pi_{\ell}(A)$, is called the {\it{minimum period of}} $A(q)$ modulo $\ell$. $A(q)$ is called {\it{purely periodic}} if $\gamma=0$. In this work, periodic always means purely periodic. For example, consider the $q$-series $A(q)=\sum_{n\geq 0}{\alpha(n)q^n}$ which generates the sequence $\alpha(n):=4n+1$ for all $n\geq 0$. Note that $\alpha(n+2k)-\alpha(n)=8k\equiv 0\pmod{8}$ for all $n\geq 0$ and $k\geq 1$, so $A(q)$ is periodic modulo $8$ and for each $k$, there is a period of length $2k$. On the other hand, $\alpha(n+1)-\alpha(n)=4\not\equiv 0\pmod{8}$, so the minimum period modulo $8$ is $\pi_{8}(A)=2$. Before we state the result of Kwong \cite{kwong1}, we give some necessary definitions.\\ For an integer $n$ and prime $\ell$, define $ord_{\ell}(n)$ to be the unique nonnegative integer such that $$\ell^{ord_{\ell}(n)}\cdot m=n,$$ where $m$ is an integer and $\ell\nmid m$. In addition, we call $m$ the {\it{$\ell$-free part}} of $n$. For a finite multiset of positive integers $S$, we define $m_{\ell}(S)$ to be the $\ell$-free part of $lcm\{n|n\in S\}$, and $b_{\ell}(S)$ to be the least nonnegative integer such that $$\ell^{b_{\ell}(S)}\geq \sum_{n\in S}\ell^{ord_{\ell}(n)}.$$ We now state Kwong's theorem.
\begin{theorem}[Kwong, \cite{kwong1}]\label{kwong} Fix a prime $\ell$, and a finite multiset $S$ of positive integers. Then for any positive integer $N$, $$A(q)=\sum_{n=0}^{\infty} {p(n;S)q^{n}}$$ is periodic modulo $\ell^{N}$, with minimum period $$\pi_{\ell^{N}}(A)=\ell^{N+b_{\ell}(S)-1}\cdot m_{\ell}(S).$$ \end{theorem} For example, let $S=\{1_{1},1_{2}, 2_{1},2_{2},2_{3},4_{1},4_{2},5\}.$ Then $p(n;S)$ is generated by the following $q$-series \begin{align*} A(q):=\sum_{n=0}^{\infty}{p(n;S)q^n}=\prod_{n\in S}{\frac{1}{(1-q^n)}}=\frac{1}{(1-q)^2(1-q^2)^3(1-q^4)^2(1-q^5)}. \end{align*} Letting $\ell=2$ in Theorem \ref{kwong}, we compute \begin{align*} 2^{b_{2}(S)}\geq \sum_{n\in S} 2^{ord_{2}(n)}=2\cdot 2^{0}+3\cdot 2^{1}+2\cdot 2^2+ 2^0=17. \end{align*} Thus $b_{2}(S)=5, \;lcm \{n: n\in S\}= 20$, and hence $m_{2}(S)= 5.$ Using Theorem \ref{kwong}, for a positive integer $N$, the minimum period of $A(q)$ modulo $2^N$ is $\pi_{2^N}(A)=2^{N+4}\cdot 5.$ Theorem \ref{kwong} was used by the author in \cite{ali1} to establish a method for obtaining various partition-theoretic congruences by verifying that they hold for a finite number of values. This work generalized a result of Mizuhara, Sellers, and Swisher \cite{periodic}. The following lemma has a flavor of the periodicity of restricted partitions. It is an application of Theorem \ref{kwong} and will be used in the proof of Theorem \ref{ThmMod8}. \begin{lemma}\label{LemPart} Let $a,b,c\geq 2$ be integers such that $a,b$ and $c$ are pairwise relatively prime. Let $M_{c}$ be the number of pairs of positive integers $(n,m)\in \mathbb{N}^2$ with $an+bm=c$, where $M_{c}:=0$ if no such pair exists. Then, \begin{align*} M_{c}=p(c;\{a,b\}), \end{align*} where $p(c;\{a,b\})$ is the number of partitions of $c$ into parts from the set $\{a,b\}$.
Moreover, for every integer $N\geq 1$ and any prime $\ell$, \begin{align*} M_{c+\pi_{\ell^N}}\equiv M_{c}\pmod{\ell^N}, \end{align*} where $\pi_{\ell^N}$ is the minimum period modulo $\ell^N$ of the $q$-series \begin{align}\label{qser} \sum_{n=0}^{\infty}p(n;\{a,b\})q^n=\frac{1}{(1-q^a)(1-q^b)} \end{align} which generates the partitions with parts from $\{a,b\}$. \end{lemma} \begin{proof} Note that if there are two positive integers $n$ and $m$ such that $an+bm=c$, then $c$ can be partitioned into parts from $\{a,b\}$ as follows \begin{align*} \underbrace{a+\dots+a}_{n\text{-times}}\vphantom{1}+\underbrace{b+\dots+b}_{m\text{-times}}\vphantom{1}=c. \end{align*} Thus, any pair of positive integers $n$ and $m$ that satisfy $an+bm=c$ corresponds to a partition of $c$ into parts from $\{a,b\}$. Conversely, since $gcd(a,b)=gcd(a,c)=gcd(b,c)=1,$ any such partition of $c$ must involve both $a$ and $b$, and hence any corresponding integers $n$ and $m$ must be positive. By considering all such pairs $(n,m)$, we then obtain \begin{align*} M_{c}=p(c;\{a,b\}). \end{align*} By Theorem \ref{kwong}, the $q$-series \eqref{qser} is periodic modulo $\ell^N$ for any integer $N\geq 1$ and any prime $\ell$, with minimum period $\pi_{\ell^N}=\ell^{N+b_{\ell}(\{a,b\})-1}\cdot m_{\ell}(\{a,b\})$, which yields that \begin{align*} M_{c+\pi_{\ell^N}}=p(c+\pi_{\ell^N};\{a,b\})\equiv p(c;\{a,b\})= M_{c}\pmod{\ell^N}. \end{align*} \end{proof} Also, the following lemma will be used in the proof of Theorem \ref{ThmMod8}. \begin{lemma}\label{lemma(ab)} Let $a,b,c \in \mathbb{N}$ such that $gcd(a,b)=1$. Then there are $c-1$ pairs of positive integers $(n,m)$ such that $an+bm=abc$. \end{lemma} \begin{proof} Suppose that $an+bm=abc$. Then $an=abc-bm$ and so $b|an$, and since $gcd(a,b)=1$, we must have $b|n$. So $n=bN$ for some $N\in \mathbb{N}$. Similarly, $a|m$ and so $m=aM$ for some $M\in \mathbb{N}$. We see then that $abN+abM=abc$ and thus $N+M=c$.
Hence, a pair $(n,m)\in \mathbb{N}^2$ satisfies $an+bm=abc$ if and only if there is a corresponding pair $(N,M)\in \mathbb{N}^2$ such that $N+M=c$. Note that there are $c-1$ pairs $(N,M)\in \mathbb{N}^2$ such that $N+M=c$ since the possible ways are $1+(c-1), 2+(c-2),\dots,(c-1)+1$. \end{proof} Throughout, we define the formal power series \begin{align}\label{eqf} f(q):=\frac{1+q}{1-q}=1+2q+2q^2+2q^3+\cdots. \end{align} Note that for every integer $n\geq 1$, \begin{align*} f(q^n)\equiv \frac{1-q^n}{1-q^n} \equiv 1\pmod{2}. \end{align*} Thus, we obtain \begin{align*} \sum_{n=0}^{\infty}{\overline{pl}(n)q^n}=\prod_{n=1}^{\infty} \frac{(1+q^n)^{n}}{(1-q^n)^{n}} =\prod_{n=1}^{\infty}{f(q^n)^n}\equiv 1\pmod{2}, \end{align*} and \begin{equation} \sum_{n=0}^{\infty}{\overline{pl}_{k}(n)q^{n}} =\prod_{n=1}^{\infty}{\frac{(1+q^{n})^{\text{min}\{k,n\}}}{(1-q^{n})^{\text{min}\{k,n\}}}}=\prod_{n=1}^{\infty}f(q^n)^{\text{min}\{k,n\}}\equiv 1\pmod{2}. \end{equation} \begin{lemma}\label{MainLemma} For all $k\geq 1$, \begin{align}\label{LemEq} \left(1+2S(q)\right)^{2^k}&\equiv 1\pmod{2^{k+1}}, \end{align} where $S(q)\in \mathbb{Z}[[q]]$ is a $q$-series with integer coefficients. \end{lemma} \begin{proof} We induct on $k$. For $k=1,$ \eqref{LemEq} holds since $\left(1+2S(q)\right)^{2}=1+4S(q)+4S(q)^{2}\equiv 1\pmod{4}.$ Now suppose that \eqref{LemEq} is true for $1\leq j\leq k-1$. Then by induction there is a $q$-series $T(q)\in \mathbb{Z}[[q]]$ such that $\left(1+2S(q)\right)^{2^{k-1}}=1+2^{k}T(q).$ Thus, \begin{align*} \left(1+2S(q)\right)^{2^k}&=\left((1+2S(q))^{2^{k-1}}\right)^{2}\\ &=\left(1+2^kT(q)\right)^{2}\\ &\equiv 1\pmod{2^{k+1}}, \end{align*} as desired. \end{proof} The following lemma is a very useful tool in the proofs of the main results. \begin{lemma}\label{lem1} For all integers $n,k\geq 1$, \begin{align*} f(q^n)^{2^{k}}\equiv 1\pmod{2^{k+1}}. \end{align*} \end{lemma} \begin{proof} Let $S(q):=\sum_{m\geq 1}q^m$.
We observe that $S(q)=\frac{q}{1-q}$, and so \begin{align}\label{eqf2} f(q^n)=\frac{1+q^n}{1-q^n}=1+\frac{2q^n}{1-q^n}=1+2S(q^n). \end{align} The conclusion then follows by Lemma \ref{MainLemma}. \end{proof} Overpartition congruences modulo small powers of $2$ can be derived from the following fact proved by Hirschhorn and Sellers [\cite{hirschhorn2006arithmetic}, Theorem $2.1$], which states \begin{align}\label{PhiThm} \overline{P}(q)=\phi(q)\overline{P}(q^2)^2. \end{align} Iterating \eqref{PhiThm} yields [\cite{hirschhorn2006arithmetic}, Theorem $2.2$] \begin{align*} \overline{P}(q)=\phi(q)\;\phi^2(q^2)\;\phi^4(q^4)\;\phi^8(q^8)\cdots. \end{align*} Thus, \begin{align*} \sum_{n=0}^{\infty}\overline{p}(n)q^{n}=\left(1+2\sum_{n\geq 1}{q^{n^2}}\right) \left(1+2\sum_{n\geq 1}{q^{2n^2}}\right)^2 \left(1+2\sum_{n\geq 1}{q^{4n^2}}\right)^4 \left(1+2\sum_{n\geq 1}{q^{8n^2}}\right)^8\cdots. \end{align*} By Lemma \ref{MainLemma}, we observe that for all $k\geq 1,$ \begin{align}\label{GenParTwo} \phi(q)^{2^k}\equiv 1\pmod{2^{k+1}}. \end{align} Thus, by \eqref{GenParOne} and \eqref{GenParTwo}, we obtain the following general equivalence modulo $2^{k},$ for $k\geq 2,$ \begin{align}\label{GenConOver} \sum_{n=0}^{\infty}\overline{p}(n)q^{n}\equiv \prod_{j=0}^{k-2}\left(\phi(q^{2^j})\right)^{2^j}\equiv 1+\sum_{j=1}^{k-1}{2^j}\sum_{n=1}^{\infty}{(-1)^{n+j}c_{j}(n)q^n}\pmod{2^{k}}. \end{align} For the case $k=2$, we obtain \begin{align}\label{OverPartMod2} \sum_{n=0}^{\infty}\overline{p}(n)q^{n}\equiv\phi(q)\equiv 1+2\sum_{n=1}^{\infty}{q^{n^2}}\pmod{4}, \end{align} which yields that for each nonsquare integer $n\geq 1,$ \begin{align}\label{NonSqMod4} \overline{p}(n)\equiv 0\pmod{4}. \end{align} Hirschhorn and Sellers \cite{hirschhorn2005arithmetic} employed elementary dissection techniques on the generating function $\overline{P}(q)$ of overpartitions and derived a set of overpartition congruences modulo small powers of $2$.
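The dichotomy encoded in \eqref{OverPartMod2} and \eqref{NonSqMod4} is simple to confirm numerically. The following Python sketch (the truncation bound $N$ is ours) expands $\overline{P}(q)$ and checks that $\overline{p}(n)\equiv 2\pmod{4}$ when $n\geq 1$ is a perfect square, and $\overline{p}(n)\equiv 0\pmod{4}$ otherwise.

```python
# Check of (OverPartMod2)/(NonSqMod4): for n >= 1, the overpartition number
# is 2 mod 4 when n is a perfect square and 0 mod 4 otherwise.
# The truncation bound N is ours, chosen only for illustration.
from math import isqrt

N = 300
pbar = [1] + [0] * N
for k in range(1, N + 1):
    # multiply by (1+q^k)/(1-q^k) = 1 + 2*sum_{j>=1} q^{jk}
    new = list(pbar)
    for i, ai in enumerate(pbar):
        if ai:
            for m in range(i + k, N + 1, k):
                new[m] += 2 * ai
    pbar = new

for n in range(1, N + 1):
    assert pbar[n] % 4 == (2 if isqrt(n) ** 2 == n else 0)
```

The same expansion, reduced modulo $8$ instead of $4$, can be used to test Theorem \ref{CongOverMod8} on odd nonsquare $n$.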
For example, they prove that for all $n\geq 0,$ \begin{align*} &\overline{p}(9n + 6)\equiv 0\pmod{8},\\ &\overline{p}(8n+7)\equiv 0\pmod{64}. \end{align*} For a modulus that is not a power of $2$, Hirschhorn and Sellers \cite{hirschhorn} prove the first infinite family of congruences for $\overline{p}(n)$ modulo $12$ by showing first that for all $n\geq 0$ and all $\alpha \geq 0,$ \begin{align*} \overline{p}(9^\alpha(27n+18))\equiv 0\pmod{3}. \end{align*} Together with the fact that $9^\alpha(27n+18)$ is nonsquare for all $n\geq 0, \alpha \geq 0,$ and with the help of \eqref{NonSqMod4}, it follows that for all $\alpha, n\geq 0$, \begin{align}\label{seller1} \overline{p}(9^\alpha(27n+18))\equiv 0\pmod{12}. \end{align} Several other examples of overpartition congruences have been found; one may refer to the work of Chen and Xia \cite{chen2013proof}, Fortin, Jacob and Mathieu \cite{fortin2005jagged}, Treneer \cite{treneer2006congruences} and Wang \cite{wang2014another}. Now, let $\overline{p}_{o}(n)$ denote the number of overpartitions of $n$ into odd parts. The generating function for $\overline{p}_{o}(n)$ \cite{hirschhorn2006arithmetic} is given by \begin{align}\label{OverOddGen} \overline{P}_{o}(q):=\sum_{n=0}^{\infty}{\overline{p}_{o}(n)q^n}=\prod_{n=1}^{\infty}\frac{1+q^{2n-1}}{1-q^{2n-1}}. \end{align} Similar to \eqref{PhiThm}, the generating function $\overline{P}_{o}(q)$ can be written as [\cite{hirschhorn2006arithmetic}, Theorem $2.3$] \begin{align}\label{PhiOdd} \overline{P}_{o}(q)=\phi(q)\overline{P}(q^2), \end{align} and the iteration of \eqref{PhiOdd} yields [\cite{hirschhorn2006arithmetic}, Theorem $2.4$] \begin{align} \overline{P}_{o}(q)=\phi(q) \phi(q^2) \phi^2(q^4) \phi^4(q^8)\cdots. \end{align} For modulus $4$, we then easily get \begin{align*} \sum_{n=0}^{\infty}\overline{p}_{o}(n)q^{n}\equiv \phi(q)\phi(q^2)\equiv 1+2\sum_{n\geq 1}{q^{n^2}}+2\sum_{n\geq 1}{q^{2n^2}}\pmod{4}.
\end{align*} As a consequence, Hirschhorn and Sellers obtain Theorem $2.3$ of \cite{hirschhorn2006arithmetic} as follows. \begin{theorem}[Hirschhorn, Sellers, \cite{hirschhorn2006arithmetic}]\label{OverOddGenTh} For every integer $n\geq 1,$ \[ \overline{p}_{o}(n)\equiv \begin{cases} 2\pmod{4} &\text{if $n$ is a square or twice a square},\\ 0\pmod{4} &\text{otherwise}. \ \end{cases} \] \end{theorem} Similar to \eqref{GenConOver}, we have the following general equivalence modulo $2^{k}$ for all $k\geq 2,$ \begin{align*} \sum_{n=0}^{\infty}\overline{p}_{o}(n)q^{n}\equiv \phi(q)\; \phi(q^2)\; \phi^2(q^4)\cdots \phi(q^{2^{k-1}})^{2^{k-2}}\pmod{2^{k}}. \end{align*} Later, we will revisit the equivalences \eqref{GenConOver}, \eqref{NonSqMod4}, and Theorem \ref{OverOddGenTh}. \section{\textbf{Proofs of Main Results and Some Corollaries}}\label{MainProofs} We now present proofs of our main results stated in Section \ref{MainResultSec}. In addition, we give several corollaries. \begin{proof}[\textbf{Proof of Theorem \ref{PlOverPaTh}}] By \eqref{GenPl} and Lemma \ref{lem1}, we observe that the generating function for plane overpartitions satisfies \begin{align*} \sum_{n=0}^{\infty}{\overline{pl}(n)q^n}=&\prod_{n=1}^{\infty} \frac{(1+q^n)^{n}}{(1-q^n)^{n}}=\prod_{n=1}^{\infty}f(q^n)^n\\ =&\prod_{n=1}^{\infty}f(q^{2n})^{2n}f(q^{2n-1})^{2n-1}\\ \equiv& \prod_{n=1}^{\infty}f(q^{2n-1})\pmod{4}\\ =&\prod_{n=1}^{\infty}\frac{(1+q^{2n-1})}{(1-q^{2n-1})}\\ =& \sum_{n=0}^{\infty}{\overline{p}_{o}(n)q^{n}}. \end{align*} Thus for all $n\geq 1,$ \begin{align*} \overline{pl}(n)\equiv \overline{p}_{o}(n)\pmod{4}. \end{align*} By Theorem \ref{OverOddGenTh}, for all $n\geq 1,$ \[ \overline{p}_{o}(n)\equiv \begin{cases} 2\pmod{4} &\text{if $n$ is a square or twice a square},\\ 0\pmod{4} &\text{otherwise}, \ \end{cases} \] and the result follows. \end{proof} \begin{corollary} The following holds for all $n\geq 0,$ \begin{align*} \overline{pl}(4n+3)\equiv 0\pmod{4}.
\end{align*} \end{corollary} \begin{proof} Note that for all $n\geq 0$, $4n+3$ is not a square since positive odd squares are $1$ modulo $4$. Also, $4n+3$ is odd, so it cannot be twice a square. The result then follows by Theorem \ref{PlOverPaTh}. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{OvPa}}] Following the same procedure as in the proof of Theorem \ref{PlOverPaTh}, we note that \begin{alignat}{3}\label{divisor} \sum_{n=0}^{\infty}{\overline{pl}(n)q^{n}} & \equiv f(q) \cdot f(q^3)\cdot f(q^{5})\cdots \pmod{4}\nonumber\\ &\equiv \left( 1+2\sum_{m\geq 1}{q^m}\right)\cdot \left( 1+2\sum_{m\geq 1}{q^{3m}}\right)\cdot \left( 1+2\sum_{m\geq 1}{q^{5m}}\right)\cdots \pmod{4} \nonumber \\ &\equiv 1+2\sum_{m\geq 1}{q^m}+2\sum_{m\geq 1}{q^{3m}}+2\sum_{m\geq 1}{q^{5m}}+\cdots \pmod{4}\nonumber\\ &\equiv 1+2\sum_{m\geq 1}{(q^m+q^{3m}+q^{5m}+\cdots)} \pmod{4}. \end{alignat} Now for any integer $n>1,$ by the fundamental theorem of arithmetic, $n$ can be written as a product of prime powers. Thus, \begin{align}\label{factor} n=2^{\alpha_{0}}p_{1}^{\alpha_{1}}\cdots p_{k}^{\alpha_{k}}, \end{align} where the $p_{i}$ are distinct odd primes and $\alpha_{0},\alpha_{i}$ are nonnegative integers for each $i=1,\dots,k$. Thus $ord_{p_{i}}(n)=\alpha_{i}$ for each $i=1,\dots,k.$ Note that the term $q^n$ will occur in the series \begin{align*} \sum_{m\geq 1}{(q^m+q^{3m}+q^{5m}+\cdots)} \end{align*} when $m=n/d$ in $q^{dm}$, where $d$ is an odd divisor of $n$. In terms of the prime factorization of $n$ in \eqref{factor}, the number of odd divisors of $n$ is given by $$\prod_{i=1}^{k}{\left(\alpha_{i}+1 \right)}=\prod_{i=1}^{k}{\left(ord_{p_{i}}(n)+1\right)}=\prod_{odd\; prime\; p|n}{\left(ord_{p}(n)+1\right)}.$$ Thus the coefficient of $q^n$ in \eqref{divisor} is then given by $$2\cdot \prod_{odd\; prime\; p|n}{\left(ord_{p}(n)+1\right)}.$$ \end{proof} We now prove Theorem \ref{th11}. \begin{proof}[\textbf{Proof of Theorem \ref{th11}}] Let $k\geq 2$ be even.
We first observe that the generating function of the $k$-rowed plane overpartitions can be rewritten modulo $4$ using \eqref{PLOvkGen} and Lemma \ref{lem1} to obtain \begin{alignat*}{3} \sum_{n=0}^{\infty}{\overline{pl}_{k}(n)q^{n}} &= \prod_{n=1}^{\infty}{\frac{(1+q^{n})^{\text{min}\{k,n\}}}{(1-q^{n})^{\text{min}\{k,n\}}}} =\prod_{n=1}^{\infty}f(q^n)^{\text{min}\{k,n\}}\\ &= f(q) f(q^2)^2\cdots f(q^{k-1})^{k-1}\cdot \prod_{n\geq k}{f(q^n)^k} \\ & \equiv f(q) f(q^3)\cdots f(q^{k-1})\pmod{4}\\ &\equiv \left( 1+2\sum_{n\geq 1}{q^n}\right)\cdot \left( 1+2\sum_{n\geq 1}{q^{3n}}\right)\cdots \left( 1+2\sum_{n\geq 1}{q^{(k-1)n}}\right) \pmod{4}\\ &\equiv 1+2\sum_{n\geq 1}{q^n}+2\sum_{n\geq 1}{q^{3n}}+\cdots +2\sum_{n\geq 1}{q^{(k-1)n}}\pmod{4}. \end{alignat*} Thus, we see that \begin{alignat*}{3} \sum_{n=0}^{\infty}{\overline{pl}_{k}(n)q^{n}}& \equiv 1+2\sum_{i\in S_{k}}\sum_{n\geq 1}{q^{in}}\pmod{4}\\ & \equiv 1+2\sum_{i\in S_{k}}\sum_{in\not\equiv 0(\text{mod} \;\ell )}{q^{in}}+2\sum_{i\in S_{k}}\sum_{n\geq 1}{q^{\ell n}}\pmod{4}\\ & \equiv 1+2\sum_{i\in S_{k}}\sum_{in\not\equiv 0(\text{mod} \;\ell )}{q^{in}}+2|S_{k}|\sum_{n\geq 1}{q^{\ell n}}\pmod{4}\\ & \equiv 1+2\sum_{i\in S_{k}}\sum_{in\not\equiv 0(\text{mod} \;\ell )}{q^{in}}+k\sum_{n\geq 1}{q^{\ell n}} \pmod{4}, \end{alignat*} where the last congruence is obtained using the fact that $|S_{k}|=k/2$. Thus, we obtain that \[ \overline{pl}_{k}(\ell n)\equiv \begin{cases} 0\pmod{4} &\text{if $k\equiv 0\pmod{4}$}\\ 2\pmod{4} &\text{if $k\equiv 2\pmod{4}$.} \ \end{cases} \] Now, for a prime $p \in S_{k}$ and $s:=ord_{p}(\ell)$, we let \begin{align*} \sum_{n\geq 1}{\alpha(n)q^n}:=\sum_{n\geq 1}{\left(q^{n}+q^{pn}+q^{p^{2}n}+\dots +q^{p^{s}n}\right)}. \end{align*} For any $m\geq 1$ and $1\leq r\leq s$, the term $q^{m\ell+p^r}$ will occur in the above series when $n=m\ell+p^r,\frac{m\ell+p^r}{p}, \dots, \frac{m\ell+p^r}{p^r},$ arising from the terms $q^{n}, q^{pn}, \dots, q^{p^{r}n},$ respectively.
The term $q^{m\ell+p^r}$ cannot be obtained from $\sum_{n\geq 1}{q^{p^in}}$ for $i>r$, since $p^i$ does not divide $m\ell+p^r$. Thus \[ \alpha(m\ell+p^r) =r+1\equiv \begin{cases} 0\pmod{2} &\text{if $r$ is odd}\\ 1\pmod{2} &\text{if $r$ is even}.\ \end{cases} \] Observe that \begin{align*} \sum_{j\in S_{k}}\sum_{n\geq 1}{q^{jn}} &=\sum_{n\geq 1}{\left(q^n+q^{pn}+\dots+q^{p^{s}n}\right)}+\sum_{j\in S_{k}-\{p^{i}:0\leq i\leq s\}}\sum_{n\geq 1}{q^{jn}}\\ &=\sum_{n\geq 1}{\alpha(n)q^n}+\sum_{j\in S_{k}-\{p^{i}:0\leq i\leq s\}}\sum_{n\geq 1}{q^{jn}}. \end{align*} Thus, we obtain \begin{align}\label{plk} \sum_{n=0}^{\infty}{\overline{pl}_{k}(n)q^{n}}\equiv 1+2\sum_{n\geq 1}{\alpha(n)q^n}+2\sum_{j\in S_{k}-\{p^{i}:0\leq i\leq s\}}\sum_{n\geq 1}{q^{jn}} \pmod{4}. \end{align} Also, we note that for all $n,m\geq 1$, if $j\in S_{k}-\{p^{i}:0\leq i\leq s\},$ then $jn \not = \ell m+p^{r}$ for all $1\leq r\leq s$. Otherwise, there would be positive integers $n_{0}, m_{0}$ such that $jn_{0} = \ell m_{0}+p^{r}$. Since $j$ divides $\ell$ by the choice of $\ell$, $j$ must then divide $p^{r}$, which contradicts that $j\not =p^{i}$ for all $0\leq i\leq s$. Thus terms of the form $q^{\ell n+p^{r}}$ will arise in $\sum_{j\in S_{k}}\sum_{n\geq 1} q^{jn}$ only from $\sum_{n\geq 1} \alpha(n) q^n$. Now, if we extract the terms of the form $q^{\ell n+p^{r}}$ in \eqref{plk} and reindex, we find that \begin{align*} \sum_{n\geq 1}{\overline{pl}_{k}(\ell n+p^{r})q^{n}}\equiv 2\cdot \sum_{n\geq 1}{\alpha(\ell n+p^{r}) q^{n}} \equiv 2(r+1)\sum_{n\geq 1}{q^n} \pmod{4}. \end{align*} Thus, modulo $4$, \[ \ \overline{pl}_{k}(\ell n+p^{r})\equiv 2\alpha(\ell n+p^r) =2(r+1)\equiv \begin{cases} 0\pmod{4} &\text{if $r$ is odd,}\\ 2\pmod{4} &\text{if $r$ is even}.\ \end{cases} \] \end{proof} As an application of Theorem \ref{th11}, we give a few examples in the following corollary.
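The mod $4$ congruences above are easy to test numerically. The following Python sketch (illustrative only and not part of the proof; the helper names are ours) expands the generating function \eqref{PLOvkGen} as a truncated integer power series, using $f(q^t)=1+2q^{t}+2q^{2t}+\cdots$, and checks the cases $k=4$ (with $\ell=3$) and $k=6$ (with $\ell=15$) of Theorem \ref{th11}.

```python
# Illustrative sanity check of the k-rowed congruences modulo 4.
# The generating function expanded below is that of \eqref{PLOvkGen}:
#   sum_n pl_k(n) q^n = prod_{t>=1} f(q^t)^{min(k,t)},  f(q) = (1+q)/(1-q).
N = 91  # truncation order of the power series

def mul(a, b):
    """Product of two integer power series truncated at order N."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def f_step(t):
    """Truncated series of f(q^t) = 1 + 2 q^t + 2 q^{2t} + ..."""
    return [1 if i == 0 else (2 if i % t == 0 else 0) for i in range(N)]

def pl_rows(k):
    """Truncated series of sum_n pl_k(n) q^n for k-rowed plane overpartitions."""
    s = [0] * N
    s[0] = 1
    for t in range(1, N):
        for _ in range(min(k, t)):
            s = mul(s, f_step(t))
    return s

pl4, pl6 = pl_rows(4), pl_rows(6)
# pl_4(3n) == 0 (mod 4) and pl_6(15n) == 2 (mod 4) for n >= 1, as proved above.
```

The check is purely a spot test on the truncated series; it does not replace the proof.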
\begin{corollary} The following hold for all $n\geq 1$, \begin{align*} \overline{pl}_{4}(3n)\equiv 0\pmod{4}, \end{align*} \begin{align*} \overline{pl}_{6}(15n+b)\equiv 0\pmod{4}, \;\; for\;\; b\in \{3,5\}, \end{align*} \begin{align*} \overline{pl}_{6}(15n)\equiv 2\pmod{4}, \end{align*} \begin{align*} \overline{pl}_{8}(105n+b)\equiv 0\pmod{4}, \;\; for\;\; b\in \{0,3,5,7\}, \end{align*} \begin{align*} \overline{pl}_{10}(315n+b)\equiv 0\pmod{4}, \;\; for\;\; b\in \{3,5,7\}, \end{align*} \begin{align*} \overline{pl}_{10}(315n+b)\equiv 2\pmod{4}, \;\; for\;\; b\in \{0,9\}, \end{align*} \begin{align*} \overline{pl}_{12}(3465n+b)\equiv 0\pmod{4}, \;\; for\;\; b\in \{0,3,5,7,11\}, \end{align*} \begin{align*} \overline{pl}_{12}(3465n+9)\equiv 2\pmod{4}. \end{align*} \end{corollary} \begin{proof} For the first congruence, letting $k=4$, we have that $S_{4}=\{1,3\}$ and $\ell=3$. Since $k\equiv 0\pmod{4}$, by Theorem \ref{th11}, for all $n\geq 1$ \begin{equation*} \overline{pl}_{4}(3n)\equiv 0 \pmod{4}. \end{equation*} Now to see the second and third congruences, let $k=6$, so that $S_{6}=\{1,3,5\}$ and $\ell=15$. The only primes in $S_{6}$ are 3 and 5, with $ord_{p}(\ell)=1$ for $p=3,5$. Hence $r=1$ is the only choice for $1\leq r\leq ord_{p}(\ell)$. Thus by Theorem \ref{th11}, for all $n\geq 1,$ \begin{alignat*}{2} &\overline{pl}_{6}(15n+3)&\equiv 0&\pmod{4},\\ &\overline{pl}_{6}(15n+5)&\equiv 0&\pmod{4}. \end{alignat*} Moreover, $k=6\equiv 2 \pmod{4}$, which yields that for all $n\geq 1,$ \begin{align*} \overline{pl}_{6}(15n)\equiv 2\pmod{4}. \end{align*} The rest of the identities can be proved similarly. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{th22}}] Clearly, for $k=0$, $\overline{pl}_{1}(n)= \overline{p}(n)$ for all $n\geq 0.$ Now, for $k\geq 1$, we first define \begin{align*} g(q):=\frac{1}{f(q)}, \end{align*} and note by Lemma \ref{lem1} and \eqref{eqf2} that \begin{align}\label{Theq} g(q)\equiv f(q)=1+2\sum_{n\geq 1}{q^n}\pmod{4}.
\end{align} Now recall the generating function of $(2k+1)$-rowed plane overpartitions \eqref{PLOvkGen}: \begin{alignat*}{3} \sum_{n=0}^{\infty}{\overline{pl}_{2k+1}(n)q^{n}} &=\prod_{n=1}^{\infty}{\frac{(1+q^{n})^{\text{min}\{2k+1,n\}}}{(1-q^{n})^{\text{min}\{2k+1,n\}}}}= \prod_{n=1}^{\infty}{f(q^n)^{\text{min}\{2k+1,n\}}}& \\ &= f(q) f(q^2)^2\cdots f(q^{2k})^{2k}\cdot \prod_{n\geq 2k+1}{f(q^n)^{2k+1}}& \\ &\equiv f(q) f(q^3)\cdots f(q^{2k-1})\cdot \prod_{n\geq 2k+1}{f(q^n)} \pmod{4},& \end{alignat*} where the last congruence is by Lemma \ref{lem1}. Thus, we have by \eqref{Theq} that \begin{align*} \sum_{n=0}^{\infty}{\overline{pl}_{2k+1}(n)q^{n}} & \equiv g(q^2) g(q^4) g(q^6)\cdots g(q^{2k}) \cdot \prod_{n=1}^{\infty}{f(q^n)}\pmod{4} &\\ & \equiv f(q^2) f(q^4) f(q^6)\cdots f(q^{2k}) \cdot \prod_{n=1}^{\infty}{f(q^n)}\pmod{4} &\\ & \equiv \left( 1+2\sum_{n\geq 1}{q^{2n}}\right)\cdot \left( 1+2\sum_{n\geq 1}{q^{4n}}\right)\cdot \left( 1+2\sum_{n\geq 1}{q^{6n}}\right)\cdots &\\ &\;\;\;\;\left( 1+2\sum_{n\geq 1}{q^{2kn}}\right) \cdot \left(\sum_{n\geq 0}{\overline{p}(n)q^n}\right)\pmod{4}.& \end{align*} Note that for all $n\geq 1, \overline{p}(n)\equiv 0\pmod{2},$ and hence $2\overline{p}(n)\equiv 0\pmod{4}.$ Consequently, \begin{alignat}{2}\label{eqth11} \sum_{n=0}^{\infty}{\overline{pl}_{2k+1}(n)q^{n}}&\equiv \left(1+2 \sum_{n\geq 1}{\left(q^{2n}+q^{4n}+ q^{6n}+\cdots +q^{2kn}\right)}\right) \cdot \left( \sum_{n\geq 0}{\overline{p}(n)q^n}\right) \pmod{4}\nonumber\\ &\equiv 2 \sum_{n\geq 1}{\left(q^{2n}+q^{4n}+q^{6n}+ \cdots +q^{2kn}\right)} +\sum_{n\geq 0}{\overline{p}(n) q^n} \pmod{4}. \end{alignat} Thus, for all $k\geq 0, n\geq 0$, $$\overline{pl}_{2k+1}(2n+1)\equiv \overline{p}(2n+1)\pmod{4},$$ as desired. \end{proof} \begin{corollary}\label{CoroPl1} The following holds for every integer $n\geq 0,$ \begin{align} \overline{pl}(2n+1)\equiv \overline{p}(2n+1)\pmod{4}.
\end{align} \end{corollary} \begin{proof} Note that for every integer $n\geq 1$, the plane overpartitions of $n$ have at most $n$ rows. Thus, we obtain for any $k\geq n$, $$\overline{pl}(n)=\overline{pl}_{k}(n).$$ By Theorem \ref{th22}, for $k\geq n,$ \begin{align*} \overline{pl}(2n+1)=\overline{pl}_{2k+1}(2n+1)\equiv \overline{p}(2n+1)\pmod{4}. \end{align*} \end{proof} The next result gives an infinite family of restricted plane overpartition congruences modulo $4$. \begin{corollary}\label{cor1} For all $k, n\geq 0,$ and $\alpha\geq 0$, \begin{align*} \overline{pl}(9^{\alpha}(54n + 45))\equiv \overline{pl}_{2k+1}(9^{\alpha}(54n + 45))\equiv 0 \pmod{4}. \end{align*} \end{corollary} \begin{proof} Recall that in \cite{hirschhorn}, Hirschhorn and Sellers show that $9^{\alpha}(27n + 18)$ is nonsquare for all $\alpha, n\geq 0$. Thus by \eqref{NonSqMod4}, we have $\overline{p}(9^{\alpha}(27n + 18))\equiv 0 \pmod{4}$ for all $n\geq 0$ and $\alpha\geq 0$. For any odd integer $n$, $9^{\alpha}(27n + 18)$ is odd. Replacing the odd integer $n$ by $2n+1$, the result follows by Theorem \ref{th22} and Corollary \ref{CoroPl1}. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{th333}}] Recall from the proof of Theorem \ref{th22} and \eqref{eqth11} that \begin{align*} \sum_{n=0}^{\infty}{\overline{pl}_{2k+1}(n)q^{n}}\equiv 2 \sum_{n\geq 1}{\left(q^{2n}+q^{4n}+q^{6n}+ \cdots +q^{2kn}\right)} +\sum_{n\geq 0}{\overline{p}(n) q^n} \pmod{4}. \end{align*} We note that if $d$ is an even integer with $2\leq d\leq 2k$ that is not a power of $2$, then $d\mid \ell$ while $d\nmid 2^{j}$, so $d\nmid \ell m+2^j$; moreover, $2^{i}\nmid \ell m+2^j$ for $i>j$. Thus, we get for all $m\geq 1$, the term $q^{\ell m+2^j}$ will occur in the series $$2\sum_{n\geq 1}{\left(q^{2n}+q^{4n}+ q^{6n}+\cdots +q^{2kn}\right)},$$ only when $n=(\ell m+2^j)/2,(\ell m+2^j)/4, (\ell m+2^j)/8,\ldots,(\ell m+2^j)/2^j,$ arising from the terms $q^{2n}, q^{4n},q^{8n},\ldots,q^{2^j n},$ respectively. Thus, the coefficient of $q^{\ell m+2^j}$ in the above series is $2\sum_{i=1}^{j}1=2j\equiv 0\pmod{4}$ since $j\equiv 0\pmod{2}$.
Therefore, for all $n\geq 1$, $$\overline{pl}_{2k+1}(\ell n+2^j)\equiv \overline{p}(\ell n+2^j)\pmod{4},$$ as desired for \eqref{th333eq1}. To prove \eqref{th333eq2}, since $k\equiv 0\pmod{2}$, we replace $k$ by $2k$ in \eqref{eqth11} to obtain \begin{align} \sum_{n=0}^{\infty}{\overline{pl}_{4k+1}(n)q^{n}}\equiv 2 \sum_{n\geq 1}{\left(q^{2n}+q^{4n}+ \cdots +q^{4kn}\right)} +\sum_{n\geq 0}{\overline{p}(n) q^n} \pmod{4}. \end{align} Note that for all $m\geq 1$, the term $q^{\ell m}$ will occur in the series $$2\sum_{n\geq 1}{\left(q^{2n}+q^{4n}+ \cdots +q^{4kn}\right)},$$ when $n=\ell m/2,\ell m/4,\ldots,\ell m/4k,$ arising from the terms $q^{2n}, q^{4n},\ldots,q^{4kn},$ respectively. Thus, the coefficient of $q^{\ell m}$ in the above series is $2\sum_{i=1}^{2k}1=4k\equiv 0\pmod{4}$. Therefore, for all $n\geq 0$, $$\overline{pl}_{4k+1}(\ell n)\equiv \overline{p}(\ell n)\pmod{4},$$ where $\ell$ here is the least common multiple of all even positive integers $\leq 4k.$ \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{ThmMod8}}] Observe that by \eqref{PLOvkGen} and Lemma \ref{lem1}, we have that \begin{align*} \sum_{n=0}^{\infty}{\overline{pl}_{4}(n)q^{n}} &= f(q) f(q^2)^2 f(q^3)^3 \prod_{n\geq 4}{f(q^n)^4} & &\\ &\equiv f(q) f(q^2)^2 f(q^3)^3\pmod{8} & &\\ & \equiv \left( 1+2\sum_{n\geq 1}{q^{n}}\right)\cdot \left( 1+2\sum_{n\geq 1}{q^{2n}}\right)^2\cdot \left( 1+2\sum_{n\geq 1}{q^{3n}}\right)^3 \pmod{8}.&& \end{align*} Thus, \begin{align}\label{gen2} \sum_{n=0}^{\infty}{\overline{pl}_{4}(n)q^{n}}&\equiv 1+2\sum_{n\geq 1}{q^n}+4\sum_{n\geq 1}{q^{2n}} +6\sum_{n\geq 1}{q^{3n}}+&&\\ &\;\;\;\;\;4\sum_{m,n\geq 1}{q^{2(n+m)}}+4\sum_{m,n\geq 1}{q^{3(n+m)}}+4\sum_{m,n\geq 1}{q^{n+3m}}\pmod{8}.&&\nonumber \end{align} For any $k\geq 1$, the term $q^{12k}$ will occur in the series $$\sum_{n\geq 1}{q^n}, \sum_{n\geq 1}{q^{2n}}, \sum_{n\geq 1}{q^{3n}}$$ when $n=12k, 6k, 4k$, arising from the terms $q^{n}, q^{2n},q^{3n}$, respectively.
Also, the term $q^{12k}$ will occur in the series \begin{align}\label{se1} \sum_{m,n\geq 1}{q^{2(n+m)}}, \sum_{m,n\geq 1}{q^{3(n+m)}}, \sum_{m,n\geq 1}{q^{n+3m}} \end{align} when $n+m=6k$, $n+m=4k$, and $n+3m=12k$, arising from the terms $q^{2(n+m)}, q^{3(n+m)},q^{n+3m}$, respectively. We use Lemma \ref{lemma(ab)} to count the appearances of $q^{12k}$ in the three series of \eqref{se1} and catalog the results in the following table. \FloatBarrier \begin{table}[H] \caption{Coefficients of $q^{12k}$ in the series of \eqref{se1}} \centering \begin{tabular}{c c c c c c} \hline\hline $an+bm=abc$ & $a$ & $b$ & $c$& $j$& coefficient of $q^{abcj}$ in $\sum_{n,m\geq 1}q^{j(an+bm)}$\\ [0.5ex] \hline $n+m=6k$ & $1$ & $1$ & $6k$ & $2$ & $6k-1$ \\ $n+m=4k$ & $1$ & $1$ & $4k$ & $3$ & $4k-1$ \\ $n+3m=12k$ & $1$ & $3$ & $4k$ & $1$ & $4k-1$ \\[1ex] \hline \end{tabular} \label{table1} \end{table} \noindent Thus by Table \ref{table1}, the coefficient of $q^{12k}$ in the series on the right hand side of \eqref{gen2} is $$2+4+6+4(6k-1)+4(4k-1)+4(4k-1)\equiv 0\pmod{8},$$ which proves (\ref{c11}).\\ To prove (\ref{c22}), we observe that for any $k\geq 1$, the term $q^{6k+3}$ will occur in the series $$\sum_{n\geq 1}{q^n}, \sum_{n\geq 1}{q^{3n}}$$ from \eqref{gen2} when $n=6k+3$ and $n=2k+1$, arising from the terms $q^{n}, q^{3n}$, respectively. Also, the term $q^{6k+3}$ will occur in the series \begin{align}\label{ss2} \sum_{m,n\geq 1}{q^{3(n+m)}}, \sum_{m,n\geq 1}{q^{n+3m}} \end{align} from \eqref{gen2} when $n+m=2k+1$ and $n+3m=6k+3$, arising from the terms $q^{3(n+m)},q^{n+3m}$, respectively. However, the term $q^{6k+3}$ does not occur in the series \begin{align} \sum_{n\geq 1}{q^{2n}}, \sum_{m,n\geq 1}{q^{2(n+m)}}, \end{align} because $6k+3$ is not divisible by $2$ for any integer $k\geq 1$.
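The lattice-point counts that Lemma \ref{lemma(ab)} supplies in closed form can also be confirmed by brute force. The sketch below (illustrative only; the helper \texttt{count} is ours) checks the counts used for the coefficients of $q^{12k}$ and $q^{6k+3}$.

```python
# Brute-force check of the number of pairs (n, m) with n, m >= 1 and
# a*n + b*m = c, the counts that enter the tables for q^{12k} and q^{6k+3}.
def count(a, b, c):
    """Number of pairs (n, m) of positive integers with a*n + b*m = c."""
    return sum(1 for m in range(1, c // b + 1)
               if c - b * m > 0 and (c - b * m) % a == 0)

for k in range(1, 10):
    assert count(1, 1, 6 * k) == 6 * k - 1    # n + m = 6k
    assert count(1, 1, 4 * k) == 4 * k - 1    # n + m = 4k
    assert count(1, 3, 12 * k) == 4 * k - 1   # n + 3m = 12k
    assert count(1, 1, 2 * k + 1) == 2 * k    # n + m = 2k + 1
    assert count(1, 3, 6 * k + 3) == 2 * k    # n + 3m = 6k + 3
```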
Again, we use Lemma \ref{lemma(ab)} to count the number of occurrences of $q^{6k+3}$ in the series of \eqref{ss2} in the following table. \FloatBarrier \begin{table}[H] \caption{Coefficients of $q^{6k+3}$ in the series of \eqref{ss2}} \centering \begin{tabular}{c c c c c c} \hline\hline $an+bm=abc$ & $a$ & $b$ & $c$& $j$& coefficient of $q^{abcj}$ in $\sum_{n,m\geq 1}q^{j(an+bm)}$\\ [0.5ex] \hline $n+m=2k+1$ & $1$ & $1$ & $2k+1$ & $3$ & $2k$ \\ $n+3m=6k+3$ & $1$ & $3$ & $2k+1$ & $1$ & $2k$ \\[1ex] \hline \end{tabular} \label{table6k+3} \end{table} \noindent Thus by Table \ref{table6k+3}, the coefficient of $q^{6k+3}$ in the series on the right hand side of \eqref{gen2} is $$2+6+4\cdot 2k+4\cdot 2k\equiv 0\pmod{8},$$ which proves (\ref{c22}). We now prove (\ref{c34}), while (\ref{c33}) can be proved similarly with less effort. We observe by \eqref{PLOvkGen} and Lemma \ref{lem1} that \begin{align*} \sum_{n=0}^{\infty}{\overline{pl}_{8}(n)q^{n}} &= f(q)f(q^2)^2 f(q^3)^3 f(q^4)^4 f(q^5)^5 f(q^6)^6 f(q^7)^7 \prod_{n\geq 8}{f(q^n)^8} & &\\ &\equiv f(q)\; f(q^2)^2\; f(q^3)^3\; f(q^5)\; f(q^6)^2\; f(q^7)^3 \pmod{8} &&\\ & \equiv \left( 1+2\sum_{n\geq 1}{q^{n}}\right)\cdot \left( 1+2\sum_{n\geq 1}{q^{2n}}\right)^2\cdot \left( 1+2\sum_{n\geq 1}{q^{3n}}\right)^3 \cdot &&\\ &\;\;\;\;\left(1+2\sum_{n\geq 1}{q^{5n}}\right)\cdot \left( 1+2\sum_{n\geq 1}{q^{6n}}\right)^2\cdot \left( 1+2\sum_{n\geq 1}{q^{7n}}\right)^3\pmod{8}.&& \end{align*} Thus we have \begin{align*} \sum_{n=0}^{\infty}{\overline{pl}_{8}(n)q^{n}} &\equiv 1+2\sum_{n\geq 1}{q^n}+4\sum_{n\geq 1}{q^{2n}} +6\sum_{n\geq 1}{q^{3n}}+2\sum_{n\geq 1}{q^{5n}}+4\sum_{n\geq 1}{q^{6n}}+6\sum_{n\geq 1}{q^{7n}}+&&\\ &\;\;\;\;\;4\sum_{m,n\geq 1}{q^{2(n+m)}}+4\sum_{m,n\geq 1}{q^{3(n+m)}}+4\sum_{m,n\geq 1}{q^{6(n+m)}}+4\sum_{m,n\geq 1}{q^{7(n+m)}}+&&\\ &\;\;\;\;4\sum_{m,n\geq 1}{q^{n+3m}}+4\sum_{m,n\geq 1}{q^{n+5m}}+4\sum_{m,n\geq 1}{q^{n+7m}}+&&\\ &\;\;\;\;4\sum_{m,n\geq 1}{q^{3n+5m}}+4\sum_{m,n\geq
1}{q^{3n+7m}}+4\sum_{m,n\geq 1}{q^{5n+7m}}\pmod{8}.&& \end{align*} For any $k\geq 1$, the term $q^{210k+3}$ will occur in the series $$\sum_{n\geq 1}{q^n}, \sum_{n\geq 1}{q^{3n}}$$ when $n=210k+3$ and $n=70k+1$, arising from the terms $q^{n}$ and $q^{3n}$, respectively. Also, the term $q^{210k+3}$ will occur in the series \begin{align*} &\sum_{m,n\geq 1}{q^{3(n+m)}},\sum_{m,n\geq 1}{q^{n+3m}},\sum_{m,n\geq 1}{q^{n+5m}},\\ &\sum_{m,n\geq 1}{q^{n+7m}},\sum_{m,n\geq 1}{q^{3n+5m}},\sum_{m,n\geq 1}{q^{3n+7m}},\sum_{m,n\geq 1}{q^{5n+7m}}, \end{align*} when each of $3(n+m)$, $n+3m$, $n+5m$, $n+7m$, $3n+5m$, $3n+7m$, $5n+7m$ equals $210k+3$, arising from the terms $$ q^{3(n+m)}, q^{n+3m},q^{n+5m}, q^{n+7m}, q^{3n+5m}, q^{3n+7m}, q^{5n+7m},$$ respectively. Since $210k+3$ is not divisible by $2$, $5$, $6$, or $7$, the term $q^{210k+3}$ will not occur in any of the following $q$-series, $$\sum_{n\geq 1}{q^{2n}},\sum_{n\geq 1}{q^{5n}},\sum_{n\geq 1}{q^{6n}},\sum_{n\geq 1}{q^{7n}},\sum_{m,n\geq 1}{q^{2(n+m)}},\sum_{m,n\geq 1}{q^{6(n+m)}},\sum_{m,n\geq 1}{q^{7(n+m)}} .$$ \noindent Again, by applying Lemma \ref{lemma(ab)}, the appearances of $q^{210k+3}$ in the series \begin{align}\label{ss33} \sum_{m,n\geq 1}{q^{3(n+m)}},\sum_{m,n\geq 1}{q^{n+3m}}, \end{align} are given in the following table. \FloatBarrier \begin{table}[H] \caption{Coefficients of $q^{210k+3}$ in the series of \eqref{ss33}} \centering \begin{tabular}{c c c c c c} \hline\hline $an+bm=abc$ & $a$ & $b$ & $c$& $j$& coefficient of $q^{abcj}$ in $\sum_{n,m\geq 1}q^{j(an+bm)}$\\ [0.5ex] \hline $n+m=70k+1$ & $1$ & $1$ & $70k+1$ & $3$ & $70k$ \\ $n+3m=210k+3$ & $1$ & $3$ & $70k+1$& $1$ & $70k$ \\[1ex] \hline \end{tabular} \label{table21} \end{table} Now for $n+5m=210k+3$ and $n+7m=210k+3$, we have the following enumerations $$5\cdot 1+(210k-5+3),5\cdot 2+(210k-10+3),\dots, 5\cdot 42k+3,$$ $$7\cdot 1+(210k-7+3),7\cdot 2+(210k-14+3),\dots, 7\cdot 30k+3.$$ Thus, there are $42k$ and $30k$ pairs $(n,m)$ with $n+5m=210k+3$ and $n+7m=210k+3$, respectively.
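Counts of representations $5n+7m=c$ can be enumerated directly as well. The following sketch (illustrative only; the helper \texttt{M} is ours) confirms the residues of $M_{210k+3}$ modulo $8$ that the case analysis below derives from Theorem \ref{kwong}, together with the period $280$ of $1/((1-q^5)(1-q^7))$ modulo $8$.

```python
# Direct enumeration of M_c = #{(n, m) : n, m >= 1, 5n + 7m = c}.
def M(c):
    return sum(1 for m in range(1, c // 7 + 1)
               if c - 7 * m > 0 and (c - 7 * m) % 5 == 0)

# Residues of M_{210k+3} modulo 8, keyed by k mod 4:
residues = {0: 0, 1: 6, 2: 4, 3: 2}
for k in range(1, 13):
    assert M(210 * k + 3) % 8 == residues[k % 4]

# Period 280 modulo 8, matching pi_8(A) = 280 from Theorem kwong:
assert all(M(c + 280) % 8 == M(c) % 8 for c in range(100, 400))
```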
If $3n+5m=210k+3$, then $5m=210k+3-3n$, and so $3$ divides $m$. Thus, counting for $3n+5m=210k+3$ is equivalent to counting for $3n+15m=210k+3$, which is equivalent to $n+5m=70k+1$, and the latter has the following enumerations $$5\cdot 1+(70k-5+1),5\cdot 2+(70k-10+1),\dots, 5\cdot 14k+1.$$ Hence, we obtain $14k$ possible pairs $n$ and $m$ such that $3n+5m=210k+3$. Similarly, we have $10k$ pairs of positive integers $m$ and $n$ such that $3n+7m=210k+3$. Thus, the following table catalogs the coefficients of the term $q^{210k+3}$ in the following series \begin{align}\label{ss44} \sum_{m,n\geq 1}{q^{n+5m}},\sum_{m,n\geq 1}{q^{n+7m}},\sum_{m,n\geq 1}{q^{3n+5m}},\sum_{m,n\geq 1}{q^{3n+7m}}. \end{align} \FloatBarrier \begin{table}[H] \caption{Coefficients of $q^{210k+3}$ in the series of \eqref{ss44}} \centering \begin{tabular}{c c c c} \hline\hline $an+bm=210k+3$ & $a$ & $b$ & coefficient of $q^{210k+3}$ in $\sum_{n,m\geq 1}q^{an+bm}$\\ [0.5ex] \hline $n+5m=210k+3$ & $1$ & $5$ & $42k$\\% inserting body of the table $n+7m=210k+3$ & $1$ & $7$ & $30k$\\ $3n+5m=210k+3$ & $3$ & $5$ & $14k$\\ $3n+7m=210k+3$ & $3$ & $7$ & $10k$ \\[1ex] \hline \end{tabular} \label{table321} \end{table} Now, we only need to check the coefficient of $q^{210k+3}$ in the series $\sum_{m,n\geq 1}{q^{5n+7m}}$. Note that the integers $a=5, b=7$ and $c=210k+3$ satisfy the desired conditions of Lemma \ref{LemPart}. Thus $M_{210k+3}$ is the number of possible pairs of positive integers $(n,m)$ such that $5n+7m=210k+3$, and $$M_{210k+3}\equiv M_{210k+3+\pi_{8}}\pmod{8},$$ where $\pi_{8}$ is the minimum period modulo $8$ of the following $q$-series \begin{align*} A(q) :=\sum_{n=0}^{\infty}{p(n;S)q^{n}}=\frac{1}{(1-q^5)(1-q^7)}. \end{align*} Letting $S=\{5,7\}$, $\ell=2$, and $N=3$ in Theorem \ref{kwong} gives $\pi_{8}=\pi_{8}(A)=280$. In other words, for all $k\geq 0$, \begin{align*} M_{210k+3+\pi_{8}}=p(210k+3+\pi_{8}(A);S)\equiv p(210k+3;S)=M_{210k+3} \pmod{8}.
\end{align*} If we let $k=4j$ where $j\in \mathbb{N}$, then we observe by the periodicity of $A(q)$ that $$M_{210k+3}=p(210k+3;S)=p(3+3j\cdot \pi_{8}(A);S)\equiv p(3;S)\pmod{8}=0\pmod{8}.$$ By a similar argument for $k=4j-1,4j-2,4j-3$, we obtain the following: \[ \ M_{210k+3} = p(210k+3;S)\equiv \begin{cases} p(3;S)\;\;\;\;\pmod{8}=0\pmod{8} &\text{if $k=4j=4,8,12,\dots$}\\ p(73;S)\;\;\pmod{8}=2\pmod{8} &\text{if $k=4j-1=3,7,11,\dots$}\\ p(143;S)\pmod{8}=4\pmod{8} &\text{if $k=4j-2=2,6,10,\dots$}\\ p(213;S)\pmod{8}=6\pmod{8} &\text{if $k=4j-3=1,5,9,\dots$}\ \end{cases} \] By summing all coefficients of $q^{210k+3}$ and using Tables \ref{table21} and \ref{table321}, we get $$2+6+4\cdot (70k+70k+42k+30k+14k+10k+M_{210k+3})\equiv 0 \pmod{8},$$ which proves (\ref{c34}). The identities \eqref{c35} and \eqref{c37} can be proved using the same technique. However, for the sake of completeness, we show in Tables \ref{table210k+9} and \ref{table210+105} the corresponding coefficients of the terms $q^{210k+9}$ and $q^{210k+105}$ modulo $8$ in the generating function of $8$-rowed plane overpartitions, $\sum_{n=0}^{\infty}{\overline{pl}_{8}(n)q^n}$.
\FloatBarrier \begin{table}[H] \caption{Coefficients of $q^{210k+9}, q^{210k+105}$ modulo $8$ in $c\sum_{n\geq 1}{q^{jn}}$ } \centering \begin{tabular}{c c c c c } \hline\hline $c\sum_{n\geq 1}q^{jn}$ & coefficient of& $q^{210k+9},$& $q^{210k+105}$ &in $c\sum_{n\geq 1}q^{jn}$\\ [0.5ex] \hline $2\sum_{n\geq 1}q^n$ & & $2$ &$2$&\\ $4\sum_{n\geq 1}q^{2n}$ && $0$ &$0$&\\ $6\sum_{n\geq 1}q^{3n}$ && $6$ &$6$&\\ $2\sum_{n\geq 1}q^{5n}$ && $0$ &$2$&\\ $4\sum_{n\geq 1}q^{6n}$ && $0$ &$0$&\\ $6\sum_{n\geq 1}q^{7n}$ && $0$ &$6$&\\[1ex] \hline \end{tabular} \label{table210k+9} \end{table} \FloatBarrier \begin{table}[H] \caption{Coefficients of $q^{210k+9}, q^{210k+105}$ modulo $8$ in $c\sum_{n,m \geq 1}{q^{j(an+bm)}}$ } \centering \begin{tabular}{c c c c c } \hline\hline $c\sum_{n,m\geq 1}q^{j(an+bm)}$ & coefficient of& $q^{210k+9},$& $q^{210k+105}$ &in $c\sum_{n,m\geq 1}q^{j(an+bm)}$\\ [0.5ex] \hline $4\sum_{n,m\geq 1}q^{2(n+m)}$ && $0$ &$0$&\\ $4\sum_{n,m\geq 1}q^{3(n+m)}$ && $4(70k+2)$ &$4(70k+34)$&\\ $4\sum_{n,m\geq 1}q^{6(n+m)}$ && $0$ &$0$&\\ $4\sum_{n,m\geq 1}q^{7(n+m)}$ && $0$ &$4(30k+14)$&\\ $4\sum_{n,m\geq 1}q^{n+3m}$ && $4(70k+2)$ &$4(70k+34)$&\\ $4\sum_{n,m\geq 1}q^{n+5m}$ && $4(42k+1)$ &$4(42k+20)$&\\ $4\sum_{n,m\geq 1}q^{n+7m}$ && $4(30k+1)$ &$4(30k+14)$&\\ $4\sum_{n,m\geq 1}q^{3n+5m}$ && $4\cdot 14k$ &$4(14k+6)$&\\ $4\sum_{n,m\geq 1}q^{3n+7m}$ && $4\cdot 10k$ &$4(10k+4)$&\\ $4\sum_{n,m\geq 1}q^{5n+7m}$ && $4\cdot M_{210k+9}$ &$4(6k+2)$&\\[1ex] \hline \end{tabular} \label{table210+105} \end{table} \indent Now, we only need to show that the number $M_{210k+9}$ is even.
Arguing as above and applying Lemma \ref{LemPart} with $a=5$, $b=7$, and $c=210k+9$, we obtain the following: \[ \ M_{210k+9} = p(210k+9;S)\equiv \begin{cases} p(9;S)\;\;\;\;\pmod{8}=0\pmod{8} &\text{if $k=4j=4,8,12,\dots$}\\ p(79;S)\;\;\pmod{8}=2\pmod{8} &\text{if $k=4j-1=3,7,11,\dots$}\\ p(149;S)\pmod{8}=4\pmod{8} &\text{if $k=4j-2=2,6,10,\dots$}\\ p(219;S)\pmod{8}=6\pmod{8} &\text{if $k=4j-3=1,5,9,\dots$}\ \end{cases} \] Thus, since $M_{210k+9}$ is even, the coefficient of $q^{210k+9}$ modulo $8$ in $\sum_{n=0}^{\infty}{\overline{pl}_{8}(n)q^n}$ is given by summing all coefficients of $q^{210k+9}$ in Tables \ref{table210k+9} and \ref{table210+105}, so we obtain $$2+6+4\cdot \left(70k+2+70k+2+42k+1+30k+1+14k+10k+M_{210k+9}\right)\equiv 0 \pmod{8},$$ as desired for the identity \eqref{c35}. For the identity \eqref{c37}, by Tables \ref{table210k+9} and \ref{table210+105}, the corresponding coefficient modulo $8$ of $q^{210k+105}$ in the series $\sum_{n=0}^{\infty}{\overline{pl}_{8}(n)q^n}$ is congruent to \begin{align*} 16+4(272k+128)\equiv 0\pmod{8}. \end{align*} \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{CongOverMod8}}] We recall from \eqref{GenConOver} that \begin{align*} \sum_{n=0}^{\infty}\overline{p}(n)q^{n}\equiv \prod_{j=0}^{k-2}\left(\phi(q^{2^j})\right)^{2^j}\pmod{2^{k}}, \end{align*} where as in \eqref{mo1}, \begin{align*} \phi(q)=\sum_{n=-\infty}^{\infty}{q^{n^2}}=1+2\sum_{n=1}^{\infty}{q^{n^2}}. \end{align*} For the case $k=3$, \begin{align}\label{GenOvCongaMod8} \sum_{n=0}^{\infty}\overline{p}(n)q^{n}\equiv\phi(q)\cdot \phi(q^2)^2 \equiv 1+2\sum_{n\geq 1}{q^{n^2}}+4\sum_{n\geq 1}{q^{2n^2}}+4\sum_{n,m\geq 1}{q^{2(n^2+m^2)}}\pmod{8}. \end{align} If $n$ is a nonsquare odd integer, then $n$ cannot be written as $m^2,2m^2,$ or $2(m^2+k^2)$ for any $m,k\geq 1$. Thus by \eqref{GenOvCongaMod8}, the result follows.
\end{proof} As a consequence, we obtain the following result, which gives an infinite family of overpartition congruences modulo $8$. \begin{corollary}\label{infam} For any integers $\alpha\geq 3$ and $\beta\geq 0,$ the following holds for each $n\geq 0,$ \begin{align*} \overline{p}(2^\alpha 3^\beta n+5)\equiv 0\pmod{8}. \end{align*} \end{corollary} \begin{proof} Clearly, for $\alpha\geq 3$ and $\beta\geq 0,$ we have that $ 2^\alpha 3^\beta n+5$ is an odd integer for each $n\geq 0$. Suppose that there is a positive integer $m$ such that $2^\alpha 3^\beta n+5=(2m+1)^2$. Thus, we obtain $2^{\alpha-2} 3^\beta n+1=m(m+1)$. We know that $m(m+1)$ is even, which contradicts the fact that $2^{\alpha-2} 3^\beta n+1$ is odd since $\alpha-2\geq 1$. Thus no such $m$ exists, and $2^\alpha 3^\beta n+5$ is not an odd square. The result then follows by Theorem \ref{CongOverMod8}. \end{proof} Next, we obtain a result of Hirschhorn and Sellers \cite{hirschhorn2005arithmetic}. \begin{corollary}\label{OvCon4n+3} The following holds for all $n\geq 0,$ \begin{align} \overline{p}(4n+3)\equiv 0\pmod{8}. \end{align} \end{corollary} \begin{proof} Similar to the proof of Corollary \ref{infam}, $4n+3$ is a nonsquare odd integer for all $n\geq 0$. The result then follows by Theorem \ref{CongOverMod8}.
\end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{5rowed}}] By Lemma \ref{lem1} and the fact, from \eqref{OverPartMod2}, that $4\overline{p}(n)\equiv 0\pmod{8}$ for every integer $n\geq 1$, we have \begin{align*} \sum_{n=0}^{\infty}{\overline{pl}_{5}(n)q^{n}}&=f(q) f(q^2)^2 f(q^3)^3 f(q^4)^4 \prod_{n\geq 5}{f(q^n)^5}\\ &\equiv f(q^2) \; f(q^3)^2 \; f(q^4)^3 \; \prod_{n=1}^{\infty}f(q^n)\pmod{8} \\ & \equiv \left( 1+2\sum_{n\geq 1}{q^{2n}}\right)\left( 1+2\sum_{n\geq 1}{q^{3n}}\right)^2 \left( 1+2\sum_{n\geq 1}{q^{4n}}\right)^3 \left(1+\sum_{n\geq 1}{\overline{p}(n)q^n}\right)\pmod{8}\\ &\equiv 1+2\sum_{n\geq 1}{q^{2n}}+4\sum_{n\geq 1}{q^{3n}}+6\sum_{n\geq 1}{q^{4n}}\\ &\;\;\;\;\;\;+4\sum_{n,m\geq 1}{q^{3(n+m)}}+4\sum_{n,m\geq 1}{q^{4(n+m)}}+4\sum_{n,m\geq 1}{q^{2n+4m}}\\ &\;\;\;\;\;\;+\sum_{n\geq 1}{\overline{p}(n)q^n}+2\sum_{n,m\geq 1}{\overline{p}(n)q^{n+2m}}+6\sum_{n,m\geq 1}{\overline{p}(n)q^{n+4m}}\pmod{8}. \end{align*} We observe that \begin{align*} 2\sum_{n,m\geq 1}{\overline{p}(n)q^{n+2m}}+6\sum_{n,m\geq 1}{\overline{p}(n)q^{n+4m}}&=2\sum_{n,m\geq 1}{\overline{p}(n)q^{n+4m-2}}+8\sum_{n,m\geq 1}{\overline{p}(n)q^{n+4m}}\\ &\equiv 2\sum_{n,m\geq 1}{\overline{p}(n)q^{n+4m-2}}\pmod{8}. \end{align*} Thus, we obtain \begin{align}\label{EqPL5} \sum_{n=0}^{\infty}{\overline{pl}_{5}(n)q^{n}} &\equiv 1+\sum_{n\geq 1}{\left(2q^{2n}+4q^{3n}+6{q^{4n}}\right)}+4\sum_{n,m\geq 1}{\left(q^{3(n+m)}+q^{4(n+m)}+q^{2(n+2m)}\right)}\nonumber\\ &\;\;\;\;+\sum_{n\geq 1}{\overline{p}(n)q^n}+2\sum_{n,m\geq 1}{\overline{p}(n)q^{n+4m-2}}\pmod{8}. \end{align} Note that $12k+1$ is not divisible by $2$, $3$, or $4$.
So for any $k\geq 1$, the term $q^{12k+1}$ will occur only in the series $$\sum_{n\geq 1}{\overline{p}(n)q^n}, \sum_{n,m\geq 1}{\overline{p}(n)q^{n+4m-2}},$$ when $n=12k+1$ and $n=12k+1-(4m-2)$ for $m=1,\dots,3k$, arising from the terms $q^{n}$ and $q^{n+4m-2}$, respectively. Hence, the coefficient of $q^{12k+1}$ in the series on the right hand side of \eqref{EqPL5} is then given by \begin{align*} \overline{p}(12k+1)+2\sum_{m=1}^{3k}\overline{p}(12k-4m+3). \end{align*} Note that by Corollary \ref{OvCon4n+3}, for all $k\geq 1$ and $m=1,\dots,3k$, we have $$\overline{p}(12k-4m+3)=\overline{p}(4(3k-m)+3)\equiv 0\pmod{8}.$$ Thus, \begin{align*} \sum_{m=1}^{3k}\overline{p}(12k-4m+3)\equiv 0\pmod{8}. \end{align*} Therefore, for all $k\geq 1$, \begin{align*} \overline{pl}_{5}(12k+1)\equiv \overline{p}(12k+1)+ 2\sum_{m=1}^{3k}\overline{p}(12k-4m+3)\equiv\overline{p}(12k+1)\pmod{8}. \end{align*} For the case $k=0,$ $$\overline{pl}_{5}(1)\equiv \overline{p}(1)\pmod{8}.$$ Thus, for every integer $k\geq 0$, \begin{align*} \overline{pl}_{5}(12k+1)\equiv \overline{p}(12k+1)\pmod{8}, \end{align*} as desired for \eqref{Con1Pl5}. The congruence \eqref{ConPl5} can be proved similarly. \end{proof} Lastly, we end this section by combining Theorem \ref{5rowed} and Corollary \ref{infam} to obtain the following infinite family of $5$-rowed plane overpartition congruences modulo $8$. \begin{corollary}\label{LastThm} For any integers $\alpha \geq 3$ and $\beta \geq 1$, the following holds for all $n\geq 0,$ \begin{align} \overline{pl}_{5}(2^\alpha 3^\beta n+5)\equiv 0\pmod{8}. \end{align} \end{corollary} \begin{proof} Note that by Theorem \ref{5rowed}, for all $n\geq 0,$ \begin{align*} \overline{pl}_{5}(2^\alpha 3^\beta n+5)=\overline{pl}_{5}(12(2^{\alpha-2} 3^{\beta-1} n)+5)\equiv \overline{p}(12(2^{\alpha-2} 3^{\beta-1} n)+5)\pmod{8}. \end{align*} The rest follows by Corollary \ref{infam}.
\end{proof} \section{\textbf{Concluding Remarks}}\label{Remark} We close this paper with a few comments and ideas. We established several examples of plane and restricted plane overpartition congruences modulo $4$ and $8$. Often, our technique is based on applying Lemma \ref{lem1} up to a small power of $2$, then collecting the coefficients of certain terms modulo the desired power of $2$. Lemma \ref{lem1} can be a very powerful tool to find and prove additional congruences modulo powers of $2$ for any partition function that involves products containing functions of the form $f(q^n)^m$ where $f$ is defined by $f(q)=\frac{1+q}{1-q}$. For example, the overpartition function has this property. Based on computational evidence, we conjecture that for each integer $r\geq 1$ and each $k\geq 1$, there exist infinitely many integers $n$ such that \begin{align}\label{tackle} \overline{pl}_{k}(n)\equiv 0\pmod{2^r}. \end{align} If this holds, then for infinitely many integers $n$, \begin{align} \overline{pl}(n)\equiv 0\pmod{2^r}. \end{align} Lemma \ref{lem1} might be a powerful tool to tackle such congruences as \eqref{tackle}. We note that Theorems \ref{th22} and \ref{5rowed} suggest there might be other arithmetic relations between plane overpartitions and overpartitions that are worth investigating. Furthermore, computational evidence suggests that there is a relation modulo powers of $2$ between overpartitions and restricted plane overpartitions. Thus, we conjecture that for each $r\geq 1$ and each $k\geq 1$, there exist infinitely many integers $n$ such that \begin{align*} \overline{pl}_{k}(n)\equiv \overline{p}(n) \pmod{2^r}. \end{align*} Another approach to establishing congruences for plane overpartitions modulo powers of $2$ is to look for an iteration formula for plane overpartitions similar to that of overpartitions given by Theorem $2.2$ of \cite{hirschhorn2006arithmetic}.
That is, consider \begin{align*} \overline{P}(q)=\phi(q)\;\phi^2(q^2)\;\phi^4(q^4)\;\phi^8(q^8)\cdots, \end{align*} and let \begin{align*} G_{n}(q):=\prod_{i=n+1}^{\infty}{\frac{1+q^i}{1-q^i}}=\prod_{i=n+1}^{\infty}{f(q^{i})}. \end{align*} Thus the generating function for plane overpartitions can be rewritten as \begin{align*} \overline{PL}(q)&=\overline{P}(q)\cdot G_{1}(q) \cdot G_{2}(q)\cdot G_{3}(q) \cdots\\ &=\prod_{n=1}^{\infty}{\phi(q^{2^{n-1}})^{2^{n-1}} G_{n}(q)}. \end{align*} Investigating properties of $G_{n}(q)$ might yield congruences modulo higher powers of $2$ for plane overpartitions. \section{\textbf{Acknowledgements}} This work is a part of my PhD thesis written at Oregon State University. I would like to express my sincere gratitude to my advisor Professor Holly Swisher for her guidance and helpful suggestions. Also, I would like to thank Professor James Sellers for helpful discussions and encouragement that motivated this work. \bibliographystyle{plain}
\section{Introduction} Whether it is due to Galactic dust or synchrotron, to cosmological backgrounds such as the Cosmic Microwave Background (CMB), or to the Cosmic Infrared Background (CIB), which traces the integrated radiation of unresolved galaxies, diffuse emission is omnipresent in infrared and millimetric observations. The angular power spectrum of this radiation is one of the main tools used to constrain the structure of the interstellar medium, the clustering of IR galaxies (CIB), or the cosmological parameters (CMB). In short, its estimation requires Fourier transforming the image and averaging the modulus square of the Fourier amplitudes into frequency bins. However, the image has both boundaries and often masked regions (e.g. to remove bright point sources) that induce power aliasing and bias the estimation of the power spectrum if not accounted for properly. The effect becomes quite significant when the signal has a steep power spectrum such as $k^{-3}$, similar to that measured for Galactic cirrus emission \citep{mamd_07}, or even steeper than $k^{-4}$ as for CMB anisotropy on angular scales smaller than a few arcmin \citep[e.g.][]{acbar,quad_t}. To account for non-periodic boundaries, \cite{das} proposed an original apodizing technique that helps us to deconvolve the estimated power spectrum from that of the observed patch boundaries. These authors also mitigate the impact of holes by applying a pre-whitening technique to data in real space. In the context of CMB anisotropy, \cite{master} developed the {\small M{\tiny ASTER}}~method that allows us to correct for mask effects on the output binned power spectrum. They analyze data across the full sky and account for the sky curvature. Instead of classical Fourier analysis, they project the data onto spherical harmonics and go through the algebra of \emph{pseudo} angular power spectra (see Sect.~\ref{se:ps_def}).
This idea has been successfully used in several experiments \citep[e.g.][]{boomerang,archeops_t1} and is also the basis of more refined algorithms used in e.g.~WMAP \citep{hinshaw} and Archeops \citep{archeops_t2}. However, direct use of {\small M{\tiny ASTER}}~in the context of infrared observations with a resolution of typically a few arcsec requires us to estimate Legendre polynomials up to orders $\ell$ of 10,000 or more, for which current recurrences and integration methods are numerically unstable. Other techniques developed in the context of CMB anisotropy, such as maximum likelihood estimation \citep{bjk}, could be transposed to high angular resolution maps, but the numerical cost $\propto n_{pix}^3$ is prohibitive for common applications when the analysis pipeline requires Monte Carlo simulations. This paper aims to transpose the \emph{pseudo}-spectrum approach pioneered by {\small M{\tiny ASTER}}~to high angular resolution observations and a classical Fourier analysis in the context of the flat sky approximation. Its originality compared to other approaches is that it works exclusively in discrete space and therefore avoids the complexity of resampling the data and integrating Bessel functions. Our algorithm was nicknamed {\small P{\tiny OKER}}, for ``P. Of $k$ EstimatoR''. The paper is organized as follows. Section~\ref{se:ps_def} provides the definitions and algebra involved in {\small P{\tiny OKER}}~and Sect.~\ref{se:applis} shows its applications to simulations of various astrophysical component spectra and a complex mask. Detailed derivations of our results are presented in the appendices. Although this work focuses on temperature power spectrum estimation, we also show in Appendices \ref{se:app_eb} and \ref{se:app_te} how the formalism can be generalized to the case of polarization.
\section{Power spectrum estimation for an incomplete observation of the sky} \label{se:ps_def} We first briefly state the limits of the flat sky approximation, then recall the definitions of both the power spectrum and \emph{pseudo}-power spectrum of data in the context of continuous Fourier transforms. Finally, we consider their counterpart in discrete space and the implementation of {\small P{\tiny OKER}}. \subsection{Flat sky approximation} Projecting an observed fraction of the sky onto a plane rather than a sphere alters the image properties in a way that depends on the specific reprojection scheme (e.g. gnomonic, tangential, cylindrical, etc.). The angular power spectrum of the data as measured on the projection plane therefore differs from that on the sphere. Two comments can be made at this stage. First, as long as the observation patch does not span more than a few degrees, as in most infrared experiments, the distortions are very small. For a gnomonic projection for instance, an observation point at an angular distance $\theta$ from the map center (the point tangent to the sphere) is projected at a distance $\tan(\theta)$ rather than $\theta$. The relative difference between the two is only $2.6\times 10^{-3}$ for a map of 10-degree diameter, i.e., the projected map is stretched by 46~arcsec in each direction. As long as this remains small compared to the angular scales over which the angular power spectrum is estimated, the distortion can be neglected \citep[e.g.][]{pryke}. In this work, we stay well above this limit by considering square maps of only $5\times 5$~deg$^2$ and 2~arcmin resolution and therefore neglect distortion effects. Second, the impact of the projection on the estimation of the power spectrum is equivalent to a transfer function that can be estimated and corrected for when dealing with the convolution kernel (Eq.~\ref{eq:ps2cl_1}), as detailed in Sect.~\ref{se:transf_func}.
Both of these reasons allow us to estimate the angular power spectrum of a map built in the flat sky approximation limit. \subsection{Continuous Fourier analysis and masked data} On a flat two-dimensional (2D) surface, a scalar field $T_{\vec{r}}$ depending on the direction of observation ${\vec{r}}$ is represented in Fourier space by \begin{eqnarray} T_{\vec{k}} & = & \int_{\mathbb{R}^2} d{\vec{r}} T_{\vec{r}} e^{-i{\vec{k}}\cdot{\vec{r}}}, \label{eq:ft_def1} \\ T_{\vec{r}} & = & \int_{\mathbb{R}^2}\frac{d{\vec{k}}}{(2\pi)^2} T_{\vec{k}} e^{i{\vec{k}}\cdot{\vec{r}}}. \label{eq:ft_def2} \end{eqnarray} \noindent For a random isotropic process, the 2D power spectrum $\mathcal{P}_{\vec{k}}$ is defined as \begin{equation} \langle T_{\vec{k}} T^*_{{\vec{k}}^\prime}\rangle \equiv \mathcal{P}_{\vec{k}}\delta_{{\vec{k}}-{\vec{k}}^\prime}, \label{eq:cont_2dps_def} \end{equation} \noindent where the brackets denote the statistical average. The one-dimensional (1D) power spectrum is defined as the azimuthal average \begin{equation} P_k \equiv \frac{1}{2\pi}\int_0^{2\pi} d\theta\; T_{\vec{k}} T^*_{\vec{k}}\;, \label{eq:1d_ps_def} \end{equation} \noindent where $P_k$ is the physical quantity of interest that we want to reconstruct. It is the Fourier transform of the two-point correlation function. If the process is isotropic, the 2D and 1D power spectra are related by \begin{equation} \langle T_{\vec{k}} T^*_{{\vec{k}}^\prime}\rangle \equiv \mathcal{P}_{\vec{k}} \delta_{{\vec{k}}-{\vec{k}}^\prime} = (2\pi)^2 P_k \delta_{{\vec{k}}-{\vec{k}}^\prime}. \label{eq:ps_def} \end{equation} In the following, we omit the 1D and 2D qualifiers to improve readability and use the term power spectrum in both cases, unless the difference needs to be emphasized. In practice, however, the integrals of Eqs.~(\ref{eq:ft_def1}, \ref{eq:ft_def2}) cannot run up to infinity, simply because of the limited size of the observation patch.
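Before turning to masked data, here is a concrete numpy illustration of the azimuthal average of Eq.~(\ref{eq:1d_ps_def}) on gridded data. The function name and binning choices are ours, not part of {\small P{\tiny OKER}}; modes beyond the Nyquist frequency (the grid corners) are discarded, in the spirit of the Nyquist-range restriction discussed below.

```python
import numpy as np

def power_spectrum_1d(image, nbins=16):
    """Azimuthally averaged power spectrum of a square map:
    average |T_k|^2 over rings of constant |k|, in pixel
    frequency units. Corners beyond Nyquist (|k| > 0.5) are dropped."""
    n = image.shape[0]
    tk = np.fft.fft2(image) / n**2           # DFT amplitudes
    p2d = np.abs(tk)**2                      # 2D power
    kx = np.fft.fftfreq(n)                   # frequencies in [-0.5, 0.5)
    k = np.sqrt(kx[None, :]**2 + kx[:, None]**2)
    edges = np.linspace(0.0, 0.5, nbins + 1)
    idx = np.digitize(k.ravel(), edges) - 1  # bin index of each mode
    pk = np.array([p2d.ravel()[idx == b].mean() if np.any(idx == b) else 0.0
                   for b in range(nbins)])
    return edges, pk
```

For white noise the resulting binned spectrum is approximately flat, as expected for an uncorrelated field.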
This is accounted for by a weight function $W_{\vec{r}}$ applied to the data. Its simplest form is unity on the data and zero outside the observation range or where strong sources are masked out. More subtle choices, such as inverse noise variance weighting or apodization (cf.~Sect.~\ref{se:applis}), are usually used. Instead of the true Fourier amplitudes, we are then bound to measure the amplitudes of the masked data, a.k.a. the \emph{pseudo}-amplitudes \begin{equation} \hat{T}_{\vec{k}} = \int_{\mathbb{R}^2} d{\vec{r}} T_{\vec{r}} W_{\vec{r}} e^{-i{\vec{k}}\cdot{\vec{r}}} \label{eq:ft_mask}. \end{equation} \noindent Equation~(\ref{eq:1d_ps_def}) applied to the \emph{pseudo}-amplitudes gives the 1D \emph{pseudo}-power spectrum \begin{equation} \hat{P}_k = \int_0^\infty k_1dk_1\;K_{kk_1}P_{k_1}, \label{eq:pps} \end{equation} \noindent where $K_{kk_1}$ is the mixing matrix that depends on the weighting function $W_{\vec{r}}$. To determine the signal power spectrum, we need to solve this equation for $P_{k_1}$. A detailed derivation of the analytic solution can be found in \cite{master}. The impact of the instrumental beam, the pixel window function, the projection algorithm, the scanning, and the data processing can be accounted for in this formalism as transfer functions and incorporated in the definition of $K_{kk_1}$ (Sect.~\ref{se:transf_func}). In the next two subsections, we focus on mask effects and the global picture of our algorithm. \subsection{Discrete Fourier analysis and {\small P{\tiny OKER}}} Any data set is by construction discretely sampled. Computing the quantities defined in the previous section requires mathematical interpolation and/or resampling of these data and appropriate integration tools, especially if the underlying data power spectrum is steep, as for Galactic cirrus for which $P(k)\propto k^{-3}$ \citep{mamd_07}.
Rather than dealing with these difficulties, we keep the native pixelized description of the data and work completely in discrete space. We use the discrete Fourier transform (hereafter DFT) as provided by data analysis software. For a map of scalar quantity $D_{\mu \nu}$ and size $N_x\times N_y$ pixels, it is defined as \begin{eqnarray} D_{mn} & = & \frac{1}{N_xN_y}\sum_{\mu,\nu} D_{\mu\nu} e^{-2i\pi(\mu m/N_x+\nu n/N_y)}, \label{eq:dft}\\ D_{\mu\nu} & = & \sum_{m,n} D_{mn} e^{+2i\pi(\mu m/N_x+\nu n/N_y)}. \end{eqnarray} Throughout this work, although we denote quantities in direct and Fourier space by the same name, Greek indices denote pixel indices in real space whereas roman indices refer to amplitudes in Fourier space. Unless stated otherwise, sums over $\mu$ and $m$ (resp. $\nu$ and $n$) run from 0 to $N_x-1$ (resp. $N_y-1$), and $\Delta\theta$ is the angular resolution of the map in radians. For a given wave-vector ${\vec{k}}_{mn}$, labeled by the $m$ and $n$ indices, the corresponding norm is denoted by $k_{mn} = (2\pi/\Delta\theta)\sqrt{(m^\prime/N_x)^2+(n^\prime/N_y)^2}$, with $m^\prime=m$ if $m\leq N_x/2$ and $m^\prime=N_x-m$ if $m>N_x/2$ (and similarly for $n^\prime$). This convention ensures that on small angular scales $k$ matches the multipole $\ell$ used in the description of CMB anisotropy. The Nyquist mode is $\pi/\Delta\theta$. It is well known that the DFT slightly differs from the theoretical continuous Fourier transform, hence $D_{mn}$ does not strictly equal $T_{{\vec{k}}_{mn}}$. In particular, the DFT contains amplitudes for modes ${\vec{k}}_{mn}$ of modulus larger than the Nyquist mode $\pi/\Delta\theta$, but only along some directions $\theta_{mn}$ (see Fig.~\ref{fig:nyquist}). It is therefore not possible to integrate Eq.~(\ref{eq:1d_ps_def}) over the full range $\theta \in [0,2\pi]$ for such modes, and so the 1D power spectrum is undefined outside the Nyquist range.
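The $k_{mn}$ convention above can be sketched in a few lines of numpy (the function name is ours):

```python
import numpy as np

def k_grid(nx, ny, dtheta):
    """Norms k_mn of the DFT wave vectors, with the folding convention
    m' = m for m <= Nx/2 and m' = Nx - m otherwise (same for n),
    so that k matches the CMB multipole ell on small angular scales.
    dtheta is the pixel size in radians."""
    m = np.arange(nx)
    n = np.arange(ny)
    mp = np.where(m <= nx // 2, m, nx - m)   # folded index m'
    nq = np.where(n <= ny // 2, n, ny - n)   # folded index n'
    return 2.0 * np.pi / dtheta * np.sqrt(
        (mp[:, None] / nx)**2 + (nq[None, :] / ny)**2)
```

For a square map, the largest value along an axis is the Nyquist mode $\pi/\Delta\theta$, and the grid is symmetric under $m \to N_x - m$.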
In the following, we therefore restrict ourselves to the Nyquist range for power spectrum estimation. We note, however, that mathematical sums implied in the following may still run over the full range of pixels or DFT amplitude indices. The direct DFT of the masked data results from the convolution of the DFT amplitudes by a kernel $K_{m,m'}^{n,n'}$ that depends only on the mask DFT amplitudes (Appendix \ref{se:fsl_dft}). If the data $D$ consist of signal $T$ and noise $N$, we have \begin{equation} \langle |\hat{D}_{mn}|^2 \rangle = \sum_{m'n'} |K_{m,m'}^{n,n'}|^2 |T_{m'n'}|^2 + \langle\hat{N}_{mn}\rangle\;, \label{eq:ps2cl} \end{equation} \noindent which is the transcription in discrete space of Eq.~(\ref{eq:pps}). The rapid oscillations of the convolution kernel introduce strong correlations between spatial frequencies and make its inversion numerically intractable. (\emph{Pseudo}-)Power spectra are therefore estimated on some frequency band-powers (labeled $b$ hereafter). The binning operator reads \begin{figure} \begin{center} \includegraphics[clip, angle=0, scale = 0.4]{nyquist.eps} \caption{Map of the Fourier modes of the worked examples of Sect.~\ref{se:applis}. The inner circle delimits the Nyquist range. Modes that lie on the outer circle are examples of modes of larger modulus than $k_{Nyquist}$. For these modes, not all directions are sampled in the Fourier plane (dashes represent the missing modes).} \label{fig:nyquist} \end{center} \end{figure} \begin{equation} R_b^{mn} = \left\{ \begin{array}{ll} \frac{k_{mn}^\beta}{\Xi_b}&\;{\rm if}\; k^b_{low} \leq k_{mn} < k^{b+1}_{low}\\ 0&\;{\rm otherwise}\end{array}\right. , \label{eq:pmat} \end{equation} \noindent where $k^b_{low}$ is the mode of lowest modulus that belongs to bin $b$ and $\Xi_b$ is the number of wave vectors ${\vec{k}}_{mn}$ that fall into this bin.
The reciprocal operator that relates the theoretical value of the 1D binned power spectrum $P_b$ to its value at ${\vec{k}}_{mn}$ is \begin{equation} Q_{mn}^b = \left\{ \begin{array}{ll} \frac{1}{k_{mn}^\beta}&\;{\rm if}\; k^b_{low} \leq k_{mn} < k^{b+1}_{low}\\ 0&\;{\rm otherwise}\end{array}\right. . \label{eq:qmat} \end{equation} Although not strictly required, results may be improved when the spectral index $\beta$ is chosen so that $k^\beta P_k$ is as flat as possible\footnote{In the case of CMB, $\beta \simeq 2$ is the equivalent of the standard $\ell(\ell+1)$ prefactor that flattens the spectrum up to $\ell \sim 2000$.}. In the case of the cosmic infrared background anisotropy, $\beta \simeq 1$ \citep{planck_cib_2011}. The binned \emph{pseudo}-power spectrum is therefore given by \begin{equation} \hat{P}_b=\sum_{m,n\in b}R^{mn}_b|\hat{T}_{mn}|^2, \label{eq:r} \end{equation} \noindent and the data power spectrum is related to its binned value $P_b$ via \begin{equation} |T_{m'n'}|^2 \simeq Q_{m'n'}^{b'}P_{b'}. \label{eq:q} \end{equation} \noindent With such binned quantities, Eq. (\ref{eq:ps2cl}) reads \begin{equation} \langle\hat{P}_b\rangle \simeq \sum_{b'} M_{bb'} P_{b'} + \langle \hat{N}_{b}\rangle, \label{eq:mbb} \end{equation} \noindent where \begin{equation} M_{bb'} = \sum_{m,n\in b}\sum_{m',n'\in b'}R_b^{mn} |K_{m,m'}^{n,n'}|^2 Q_{m'n'}^{b'}. \label{eq:mbb_def} \end{equation} \noindent An unbiased estimate of the binned angular power spectrum of the signal is thus given by \begin{equation} \tilde{P}_b \simeq \sum_{b'}M^{-1}_{bb'}\left( \hat{P}_{b'} -\langle\hat{N}_{b'}\rangle\right). \label{eq:pb_res} \end{equation} \noindent It can indeed be easily verified that $\langle \tilde{P}_b \rangle = P_b$. Uncertainties in $\tilde{P}_b$ come from sampling and noise variance that are estimated via Monte Carlo simulations as described in the next section. 
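A minimal numpy sketch of the binning operators $R_b^{mn}$ and $Q_{mn}^b$ of Eqs.~(\ref{eq:pmat}) and (\ref{eq:qmat}) follows (the function name is ours). Note that for a pure power law $P_k \propto k^{-\beta}$, applying $R$ and then $Q$ is exact, which is the sense in which Eq.~(\ref{eq:q}) is only an approximation in general.

```python
import numpy as np

def bin_operators(k, edges, beta):
    """Binning operator R_b^{mn} = k^beta / Xi_b inside bin b, and its
    reciprocal Q_{mn}^b = k^{-beta}, zero outside the bin.
    k: 2D array of mode norms; edges: bin boundaries k^b_low."""
    nb = len(edges) - 1
    R = np.zeros((nb,) + k.shape)
    Q = np.zeros(k.shape + (nb,))
    for b in range(nb):
        inbin = (k >= edges[b]) & (k < edges[b + 1])
        xi = inbin.sum()                      # Xi_b: modes in bin b
        if xi:
            R[b][inbin] = k[inbin]**beta / xi
            Q[..., b][inbin] = k[inbin]**(-beta)
    return R, Q
```

Applying $R$ to a spectrum $|T_{mn}|^2 = k_{mn}^{-\beta}$ returns exactly 1 in every non-empty bin, and $Q$ then reconstructs the input spectrum.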
\subsection{Statistical uncertainties \label{sec:stat}} \begin{figure} \begin{center} \includegraphics[clip, angle=0, scale = 0.4]{mask_raw.eps} \includegraphics[clip, angle=0, scale = 0.4]{mask.eps} \caption{\emph{Top:} Mask applied to the simulated data. This mask is 1 where data are available and 0 outside the observation patch and where bright sources have been masked out. \emph{Bottom:} Same mask with apodized boundaries but still with the same number of masked pixels.} \label{fig:masks} \end{center} \end{figure} Statistical uncertainties in $P_b$ come from signal sampling variance and noise variance. They are estimated via Monte Carlo simulations. For each realization, a map of signal+noise is produced (see Sect.~\ref{se:algo}) and treated in the same way as described for the data in the previous section to give an estimate, $\tilde{P}_b$, with the same statistical properties as that of the true data. Altogether, these simulations provide the uncertainties in our estimate. The covariance matrix of $\tilde{P}_b$ is \begin{equation} {\bf C}_{bb'} = \big< \left(\tilde{P}_b - \langle \tilde{P}_b\rangle_{\text{MC}}\right)\left(\tilde{P}_{b'} - \langle \tilde{P}_{b'}\rangle_{\text{MC}}\right)\big>_{\text{MC}}, \label{eq:cov_mat} \end{equation} \noindent where $\langle\cdot\rangle_{\text{MC}}$ denotes the Monte-Carlo averaging. The error bar in each $\tilde{P}_b$ is \begin{equation} \sigma_{\tilde{P}_b} = \sqrt{{\bf C}_{bb}}, \label{eq:sigma_pb} \end{equation} \noindent and the bin-bin correlation matrix is given by its standard definition \begin{equation} {\bf \Xi}_{bb'}=\frac{\mathbf{C}_{bb'}}{\sqrt{{\bf C}_{bb}{\bf C}_{b'b'}}}. \label{eq:xcorr} \end{equation} \subsection{Beam, map making and transfer functions \label{se:transf_func}} The above sections describe the main features of the algorithm that provides an unbiased estimate of the power spectrum of data \emph{projected onto a map}. 
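Returning to the statistical uncertainties, the Monte Carlo covariance of Eqs.~(\ref{eq:cov_mat})--(\ref{eq:xcorr}) amounts to the following numpy sketch (the function name is ours), taking the binned estimates $\tilde{P}_b$ of each realization stacked as rows:

```python
import numpy as np

def mc_covariance(pb_samples):
    """Bin-bin covariance, error bars, and correlation matrix from a
    set of Monte Carlo estimates pb_samples of shape (n_mc, n_bins)."""
    d = pb_samples - pb_samples.mean(axis=0)   # deviations from MC mean
    C = d.T @ d / pb_samples.shape[0]          # covariance C_{bb'}
    sigma = np.sqrt(np.diag(C))                # error bar per bin
    corr = C / np.outer(sigma, sigma)          # correlation Xi_{bb'}
    return C, sigma, corr
```

For independent bins, the off-diagonal correlation terms vanish as the number of realizations grows, with a scatter of order $1/\sqrt{n_{\rm MC}}$.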
This power spectrum may however differ from the \emph{signal} power spectrum. The map making process may indeed alter the statistical properties of the signal, together with data filtering and the convolution by the instrumental beam. For instance, the pixelization caused by the map making is equivalent to a convolution in real space by a square kernel and therefore translates into a multiplication in Fourier space by a $\mathrm{sinc}$ factor. In the case of CMB anisotropy, for which power spectrum estimation has been most extensively studied, to a good approximation the transfer function $F_k$ of the map making and the data processing together reduces to a function of $k$ that multiplies the data power spectrum $P_k$. The determination of $F_k$ is performed by a set of Monte Carlo simulations of data processing and map making. The beam smearing effect is also described by a multiplicative function $B_k$. In the present framework, it is possible to be even more precise and to account for the exact beam shape and orientation, because the beam can be completely described by its Fourier coefficients $B_{mn}$ rather than its approximated annular average $B_k$. This may be of particular relevance for small fields over which the scanning directions are approximately constant, which increases the effect of beam asymmetry. The map making together with the filtering transfer function is also likely to be more accurately represented by a function of both Fourier indices $F_{mn}$, such that Eq.~(\ref{eq:ps2cl}) becomes \begin{eqnarray} \langle |\hat{D}_{mn}|^2 \rangle =&& \sum_{m'n'} |K_{m,m'}^{n,n'}|^2 F_{m'n'}B_{m'n'}P_{{\vec{k}}_{m'n'}} \nonumber \\ && + \langle\hat{N}_{mn}\rangle. \label{eq:ps2cl_1} \end{eqnarray} \noindent These new contributions can be incorporated in the definition of the convolution kernel $K_{m,m'}^{n,n'}$ such that no additional modification of the algorithm is needed from Eq.~(\ref{eq:ps2cl}) onward.
\subsection{Algorithm \label{se:algo}} An outline of the algorithm is presented in Fig.~\ref{fig:chart}. To perform the complete process of power spectrum and statistical uncertainty estimation, one needs: \renewcommand{\labelenumi}{(\alph{enumi})} \begin{enumerate} \item A tool to simulate the sky that is observed by the instrument given an input angular power spectrum. \item A tool to simulate the instrument observations given the simulated sky. \item The data processing pipeline that derives from these observations the map from which the user can then estimate the angular power spectrum. The pipeline includes optical beam smearing, time domain filtering, data flagging, map making, etc. This tool is required to determine the transfer function of the map making and data processing (Eq.~\ref{eq:ps2cl_1}). \item A tool to compute the power spectrum of a 2D map and to bin it into a set of predefined bins with a weight that may be a function of $k$. \item A tool to compute $M_{bb'}$, which involves the computation of the convolution kernel $K_{m,m'}^{n,n'}$. This is actually the longest part of the process because it is an $N_{pix}^2$ operation, but it only needs to be done once. \end{enumerate} All these tools are provided in the {\small P{\tiny OKER}}~library\footnote{{\tt http://www.ias.u-psud.fr/poker}. This library makes use of some of the HEALPix programs \citep{healpix}.}\footnote{The sky simulator provided in the library works directly in flat space. Indeed, we can anticipate neither the user's preferred reprojection scheme from the sphere to the plane nor their data processing pipeline and map making. Furthermore, simulating directly a flat sky of known power spectrum is a convenient first stage to optimize the binning and apodization scheme before running the full-fledged Monte Carlo simulations.}.
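As a toy illustration of the zero-padding and \emph{pseudo}-amplitude computation used by the algorithm described next (the function name and default padding factor are ours):

```python
import numpy as np

def pseudo_amplitudes(patch, mask, factor=1.5):
    """Embed a masked observation patch into a larger zero-padded map
    and return the DFT amplitudes of the masked data (the
    pseudo-amplitudes). `factor` is the padding ratio, typically
    between 1.2 and 2."""
    ny, nx = patch.shape
    Ny, Nx = int(factor * ny), int(factor * nx)
    big = np.zeros((Ny, Nx))
    big[:ny, :nx] = patch * mask             # masked data, zero outside
    return np.fft.fft2(big) / (Nx * Ny)      # DFT normalization of Eq. (dft)
```

With this normalization, the DC amplitude is the mean of the masked data over the large patch, which is the mode one may discard when inverting the mode-mixing matrix.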
The algorithm can be summarized as follows: \renewcommand{\labelenumi}{\arabic{enumi}.} \begin{enumerate} \item Insert the observed sky patch of size $N_x \times N_y$ pixels into a ``large patch'' $(N_x^\prime \times N_y^\prime)$ and pad it with zeros. This will allow for the correction of aliasing by scales larger than the observed sky. The size of the patch and the amount of zero padding both have to be determined by the user. A factor from 1.2 to 2 is enough in most cases. A compromise must be found between the uncertainty on the large scales that the user tries to estimate and the uncertainty associated with the unknown power on these large scales that needs to be assumed for the simulations. It is also possible to apodize the observation patch to limit large-scale aliasing (see Sect.~\ref{se:applis} for more details) and improve the bin-to-bin decorrelation at high $k$. \item Define a binning for the estimated power spectrum on the large patch. Typically, modes sampled by the data set are the DC level and modes between $k_{min} = 2\pi/\Delta\theta/\max(N_x,N_y)$ and the Nyquist mode $k_c = \pi/\Delta\theta$. The minimum bandwidth of the bins may be chosen as $\sim 2k_{min}$. \item Determine the noise \emph{pseudo}-power spectrum -- $\langle \hat{N}_b\rangle$ of Eq.~(\ref{eq:mbb}). If it cannot be determined analytically, perform a set of Monte Carlo realizations of noise-only maps (with (a)) and compute the power spectrum with (b) of the masked maps inserted into the large patch. The average of these Monte Carlo realizations gives $\langle\hat{N}_b\rangle$. \item Run a set of noise-free simulations of the observed and reprojected sky to determine the transfer function $F_{mn}$ of the data processing pipeline -- Eq.~(\ref{eq:ps2cl_1}). \item Compute $M_{bb'}$ with (c) -- Eqs.~(\ref{eq:pmat}, \ref{eq:qmat}, \ref{eq:mbb_def}). This operation scales as $N_p^2$ but it only needs to be done once.
The implementation proposed in the {\small P{\tiny OKER}}~library can be run on a multiprocessor machine. \item Compute the \emph{pseudo}-power spectrum of the masked data on the large patch, $\hat{P}_b$, with (b) -- Eq.~(\ref{eq:raw_ft}). \item Apply Eq.~(\ref{eq:pb_res}) to obtain the binned power spectrum of the data $P_b$. This equation can be solved with any suitable linear algebra method. Note that $M_{bb'}$ can be rather small and its inversion straightforward with standard numerical tools, such that Eq.~(\ref{eq:pb_res}) can be computed as is. At this stage, it may be useful to discard the first bin of the matrix, which describes the coupling of the DC level of the map to the mask and is therefore irrelevant for a power spectrum analysis, but tends to alter the conditioning of $M_{bb'}$. \item Determine the statistical error bars associated with this estimate. To do so, perform a set of Monte Carlo realizations of signal + noise. The input spectrum required for these simulations can be a smooth interpolation of the binned power spectrum determined at the previous step. For each realization, compute the \emph{pseudo}-power spectrum (using (b) on the masked data embedded in the large patch), subtract $\langle \hat{N}_b\rangle$, and then solve Eq.~(\ref{eq:pb_res}). This provides a set of random realizations of $\tilde{P}_b$. The error bars and the bin-to-bin covariance matrix are then given by Eqs.~(\ref{eq:cov_mat},\ref{eq:sigma_pb}). \end{enumerate} \begin{figure} \begin{center} \includegraphics[clip, angle=0, scale = 0.4]{map_dust.eps} \includegraphics[clip, angle=0, scale = 0.4]{map_cmb.eps} \caption{Typical maps of dust with a $k^{-3}$ power spectrum (top) and CMB temperature (bottom). The square is the outline of the observed patch. It is extracted from a larger simulated map to ensure non-periodic boundary conditions.
Masked data appear in white.} \label{fig:maps} \end{center} \end{figure} \section{Worked example \label{se:applis}} \begin{figure} \begin{center} \includegraphics[clip, angle=0, scale = 0.3]{dust_rawholes_pk_result.eps} \includegraphics[clip, angle=0, scale = 0.4]{dust_rawholes_x_corr_lego.eps} \caption{\emph{Dust ($k^{-3}$)}. Comparison between the input theoretical power spectrum (black) and the average result of {\small P{\tiny OKER}}~(red) applied to 500 signal+noise simulations. The ``naive'' approach (blue, see text) is also shown for reference. Error bars in the top plots are those associated with the data (i.e. those of a single realization). The square line shows the binned theoretical power spectrum to which {\small P{\tiny OKER}}'s average result should be compared. The bottom plots show the ratio of the reconstructed binned power spectrum to the input theoretical binned power spectrum (the bias), and the displayed error bar is that of the average of the Monte Carlo realizations (in other words, the error bar of the top plot divided by $\sqrt{500}$). Altogether, these plots show that {\small P{\tiny OKER}}~is unbiased. The mask used in this case is that of the top plot of Fig.~\ref{fig:masks}, with 1 where data are available and 0 elsewhere.} \label{fig:pk_dust_rawholes} \end{center} \end{figure} {\small P{\tiny OKER}}~was applied to real data to measure the cosmic infrared background anisotropy in the Planck-HFI data \citep{planck_cib_2011}. The whole data processing and how it is accounted for with {\small P{\tiny OKER}}~is described in detail in that paper. We do not replicate this analysis here but instead present complementary examples on simulated data with steeper power spectra and a more complex mask. It is in this context that mask aliasing effects are the strongest. We assume that the observation patch is a square of 100 pixels on a side, each pixel being 2~arcmin.
These parameters are chosen so that they sample a range of angular modes over which the CMB temperature power spectrum exhibits peaks and a varying slope. Note however that the map resolution can be arbitrarily high, because fast Fourier transform algorithms work in dimensionless units. To force non-periodic boundary conditions, we extract the patch from a map that is 50\% larger, and the simulation is performed on the latter. Finally, we draw random holes across the observation patch to mimic point source masking. We consider two types of signal. In the first case, we assume that the data are represented by a pure power law spectrum $k^{-3}$ typical of Galactic dust emission. In the second case, we assume that the data are CMB with a standard $\Lambda$CDM power spectrum. At these angular scales, the slope of the CMB power spectrum varies from $\sim k^{-2}$ to even steeper than $k^{-6}$ and exhibits oscillations. Fig.~\ref{fig:maps} shows an example of these simulated data. The result of {\small P{\tiny OKER}}~applied to each case is presented in Figs.~\ref{fig:pk_dust_rawholes} and \ref{fig:pk_cmb_rawholes}. In the case of dust, we choose a binning index $\beta=3$ as defined in Eq.~(\ref{eq:qmat}), and in the case of CMB, we make no assumptions, \emph{i.e.} we choose $\beta=0$. We also show what the direct Fourier transform of the observed patch without any further correction would give, to illustrate the magnitude of the effect corrected by {\small P{\tiny OKER}}. Note that this reference estimate, labeled ``naive $P(k)$'', is not the \emph{pseudo}-power spectrum of the data in the sense of Sect.~\ref{se:ps_def}. Indeed, it is not computed on the whole map from which the observation patch is extracted and padded with zeros. The bottom plots of Figs.~\ref{fig:pk_dust_rawholes} and \ref{fig:pk_cmb_rawholes} show the bin-to-bin correlation matrix of each estimate. In the case of dust, the correlations are small ($\sim 15\%$).
This is not the case for the simulated CMB, for which there is strong bin-to-bin correlation, although the power spectrum remains unbiased. This correlation is due to large-scale aliasing induced by the holes in the mask and shows up so significantly because, at high $k$, the CMB spectrum is very steep. A way to improve on this is to apodize the mask around the edges and the holes left by point-source masking (Fig.~\ref{fig:masks}). In this work, we simply use a Gaussian kernel with a FWHM of twice the map resolution to smooth the edges. The same analysis as before is performed with this mask and the results are presented in the right hand side of Fig.~\ref{fig:pk_cmb_rawholes}. On this occasion, the bin-to-bin correlation is significantly reduced, although the sampling variance is slightly increased at low $k$ owing to the effective reduction in the observation area. A more efficient way of performing this apodization is described in \cite{xpure}. Finally, on larger angular scales, there is a slight bias in the recovery of the CMB power spectrum. This does not occur in the case of dust because, for a pure power-law spectrum, Eq.~(\ref{eq:q}) is an equality. This is no longer the case for a CMB spectrum, whose average slope varies with $k$ and which exhibits peaks: no binning could faithfully represent such a spectrum. However, the remaining bias is negligible relative to the statistical error bar in the data. If we force the simulated CMB to have a constant power spectrum over frequency bins, the recovery is unbiased. There is no general prescription regarding the definition of the binning and the apodization. They must however be chosen with care, because the bin-to-bin residual correlation may lead to residual ringing (mask aliasing) in the data power spectrum (considered as a single random realization), even if the estimator is, on average, unbiased.
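For reference, the Gaussian smoothing of the mask edges mentioned above can be sketched as follows. Our implementation choice here is a circular convolution in Fourier space with a normalized Gaussian kernel; the function name is ours.

```python
import numpy as np

def apodize_mask(mask, fwhm_pix):
    """Smooth the sharp edges of a binary mask with a Gaussian kernel
    of given FWHM in pixels (the text uses FWHM = 2 pixels).
    The convolution is performed in Fourier space (circular)."""
    ny, nx = mask.shape
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    y = np.fft.fftfreq(ny) * ny               # pixel offsets with wraparound
    x = np.fft.fftfreq(nx) * nx
    r2 = y[:, None]**2 + x[None, :]**2
    kern = np.exp(-0.5 * r2 / sigma**2)
    kern /= kern.sum()                        # normalized kernel
    return np.real(np.fft.ifft2(np.fft.fft2(mask) * np.fft.fft2(kern)))
```

Since the kernel is positive and normalized, the apodized mask stays within $[0,1]$ up to floating-point error, and a mask of ones is left unchanged.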
\begin{figure*} \begin{center} \includegraphics[clip, angle=0, scale = 0.32]{cmb_rawholes_pk_result.eps} \includegraphics[clip, angle=0, scale = 0.32]{cmb_apodholes_pk_result.eps} \includegraphics[clip, angle=0, scale = 0.4]{cmb_rawholes_x_corr_lego.eps} \includegraphics[clip, angle=0, scale = 0.4]{cmb_apodholes_x_corr_lego.eps} \caption{\emph{Left}: same as Fig.~\ref{fig:pk_dust_rawholes} in the case of CMB. {\small P{\tiny OKER}}'s estimation is unbiased but mask aliasing induces strong correlations between bins at high $k$, where the power spectrum is very steep. \emph{Right}: this time, the mask with apodized boundaries (bottom of Fig.~\ref{fig:masks}) is used. High-$k$ bin-to-bin correlations are significantly reduced. Apodization however reduces the effective observed fraction of the sky and therefore slightly increases error bars at low $k$ compared to the plots on the left. Note that although apodization also improves the ``naive'' estimate, it remains incompatible with the input spectrum for almost every bin, by more than the $1\sigma$ error of a single realization.} \label{fig:pk_cmb_rawholes} \end{center} \end{figure*} \section{Conclusion} We have developed a tool that provides an unbiased estimate of the angular power spectrum of diffuse emission in the flat sky approximation limit, for arbitrarily high resolutions and complex masks. {\small P{\tiny OKER}}~corrects for mask aliasing effects, even in the context of steep power spectra, and provides a way to estimate statistical error bars and bin-to-bin correlations. It complements tools developed in the context of spherical sky and potentially full sky surveys \citep[e.g.][]{master}, but at the moment for lower angular resolutions. {\small P{\tiny OKER}}~also complements other methods in the flat sky approximation such as \cite{das}. {\small P{\tiny OKER}}~can readily be generalized to polarization power-spectra estimation.
To date, experiments that have measured polarized diffuse emission \citep{dasi,kogut,ponthieu2005,quad_pol2008,chiang2010,bierman2011} have been closely related to CMB experiments and studied observation patches of a few to a hundred percent of the sky and angular resolutions larger than a few arcmin. Optimal tools have been developed to measure the polarization power spectra in this context \citep[and references therein]{chon,smith2006,smith_zalda,xpure} and it is unlikely that {\small P{\tiny OKER}}~will bring something significantly new to the analysis of these observations. It is however expected that smaller, deeper, and higher-resolution polarized surveys will happen in the future, for which {\small P{\tiny OKER}}~might be an interesting approach. One of the main features that should then be addressed is the ability of {\small P{\tiny OKER}}~to correct for $E-B$ leakage. Although we postpone the detailed studies of {\small P{\tiny OKER}}'s properties regarding polarized power spectra estimation to future work, we provide the formalism in Appendices \ref{se:app_eb} and \ref{se:app_te} for the sake of completeness. All the software used in this work is publicly available\footnote{{\tt http://www.ias.u-psud.fr/poker}}. \acknowledgements{We thank E.~Hivon, O.~Dor\'e, S.~Prunet, K.~Benabed and T.~Rodet for fruitful discussions. J.~Grain was supported by the ``Groupement d'Int\'er\^et Scientifique (GIS) Physique des 2 Infinis (P2I)''. This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.} \bibliographystyle{aa}
\section{Introduction} \label{sec:intro} Deep reinforcement learning (DRL) algorithms have achieved many successes in solving problems with high-dimensional state and action spaces~\cite{mnih2015human,arulkumaran2017deep}. However, DRL's performance is limited by its sample (in)efficiency. It is often impractical to collect millions of training samples as required by standard DRL algorithms. One way to tame this problem is to leverage additional human guidance by following the general paradigm of Human-in-the-Loop Reinforcement Learning (HIRL)~\cite{ijcai2019-884}. It often allows the agent to achieve better performance and higher sample efficiency. One popular form of human guidance in HIRL is binary evaluative feedback~\cite{knox2009interactively}, in which humans provide a ``good'' or ``bad'' judgment for a queried state-action pair. This framework allows non-expert humans to provide feedback, but its sample efficiency could be further improved by asking humans to provide stronger guidance. For example, binary feedback does not tell the agent why it made a mistake. If humans can explain the ``why'' behind the evaluative feedback, then it is possible to further improve the sample efficiency and performance. Taking the training of an autonomous driving agent as a motivating example, humans can ``explain'' the correct action of ``apply-brake'' by pointing out to the agent that the ``STOP'' sign is an essential signal. One way to convey this information is through \emph{saliency information}, in which humans highlight the important (salient) regions of the visual environment state. The visual explanation indicates which visual features matter the most for the decision in the current state. Note that requiring human trainers to provide explanations for their evaluations does not necessarily require any more expertise on their part than is needed for providing only binary feedback.
In our driving agent example, the human trainers may not know things like the optimal angle of the steering wheel; however, we can expect them to be able to tell whether an observed action is good or bad, and what visual objects matter for that decision. \begin{wrapfigure}{R}{0.5\linewidth} \begin{center} \centerline{\includegraphics[width=1.0\linewidth]{images/flow.png}} \caption{Overview of EXPAND. The agent queries the human with a sampled trajectory for binary evaluative feedback on the action and saliency annotation on the state. The Perturbation Module supplements the saliency explanation by perturbing irrelevant regions. The agent then consumes the integrated feedback and updates its parameters. This loop continues, with feedback queried every $N_f$ episodes, until the agent is trained. The domain shown for "Visual Explanation" is \emph{Pixel Taxi}. } \label{fig:flow} \end{center} \vspace{-0.5cm} \end{wrapfigure} In fact, the use of human visual explanation has been investigated by recent works in several supervised learning tasks \cite{schramowski2020making, ross2017right, rieger2020interpretations}. They show that the generalization of a convolutional neural network can be improved by forcing the model to output the same saliency map as humans; in other words, by forcing the model to make the right prediction for the right reasons. However, it remains unclear how to effectively incorporate the domain knowledge in visual explanations into deep reinforcement learning. In this work, we present EXPAND, which leverages EXPlanation AugmeNted feeDback (Fig.~\ref{fig:flow}) to support efficient human guidance for deep reinforcement learning. This raises two immediate challenges: (i) how to make it easy for humans to provide the visual explanations, and (ii) how to amplify the sparse human feedback. 
EXPAND employs a novel context-aware data augmentation method, which amplifies the difference between relevant and irrelevant regions in human explanation by applying multiple perturbations to irrelevant parts. To reduce the human effort needed to provide visual explanations, EXPAND uses off-the-shelf object detectors and trackers to allow humans to give explanations in terms of salient objects in the visual state, which are then automatically converted to saliency regions to be used by the RL system. We show that EXPAND agents require fewer interactions with the environment (\emph{environment sample efficiency}) and over 30\% fewer human signals (\emph{human feedback sample efficiency}) by leveraging human visual explanation. We highlight the main contributions of EXPAND below: \begin{itemize} \item This is the first work that leverages human visual explanation in human-in-the-loop reinforcement learning tasks. We show that EXPAND is the state-of-the-art method for learning from human evaluative feedback in terms of environment sample efficiency and human feedback sample efficiency. \item We benchmark our context-aware data augmentation against standard context-agnostic data augmentation techniques. Our experimental results help shed light on the limitations of existing data augmentations and can benefit future research in accelerating RL with augmented data. \item We show that human visual explanation in a sequential decision-making task can be collected in a low-effort and semi-automated way by using an off-the-shelf object detector and tracker. \end{itemize} We note that EXPAND agents have applicability beyond improving the efficiency of human-in-the-loop RL with sparse environment rewards. In particular, they can be equally useful in accepting human guidance to align the RL system to human preferences. 
They can be used to incorporate rewards from human feedback, for example, when the environment reward is not defined upfront, or to accommodate additional constraints that are not reflected in the existing reward signal. EXPAND, thus, can also be useful in aligning the agent's objectives with those of the humans in the loop \cite{knox2009interactively, bobu2021feature}. Finally, EXPAND's approach of allowing humans to provide their explanations and guidance in terms of objects of relevance to them, and automatically converting those to saliency regions, is an instance of the general approach for explainable and advisable AI systems that use symbols as a \textit{lingua franca} for communicating with humans, independent of whether they use symbolic reasoning internally \cite{kambhampati2022symbols}. \section{Related Work} \label{sec:Related Work} Leveraging human guidance for RL tasks has been extensively studied in the context of imitation learning (learning from demonstration) \cite{schaal1997learning}, inverse reinforcement learning \cite{ng2000algorithms, abbeel2004apprenticeship}, reward shaping \cite{ng1999policy}, learning from human preference \cite{christiano2017deep, lee2021pebble}, and learning from saliency information provided by humans \cite{zhang2020human}. Surveys on these topics are provided by \cite{ijcai2019-884, wirth2017survey}. Compared to these approaches, learning from human evaluative feedback has the advantage of placing minimal demands both on the human trainer's expertise and on the means of providing guidance (e.g., no complex and expensive equipment setup is required). Representative works include the TAMER framework \cite{knox2009interactively,warnell2018deep} and the COACH framework \cite{macglashan2017interactive, arumugam2019deep}. The TAMER+RL framework extends TAMER by learning from both human evaluative feedback and the environment reward signal \cite{knox2010combining,knox2012reinforcement}. 
DQN-TAMER further augments TAMER+RL by utilizing deep neural networks to learn in high-dimensional state spaces \cite{arakawa2018dqn}. Several approaches have been proposed to extract more information from human feedback by taking into account the complexities of human feedback-providing behavior. \citeauthor{loftin2014learning} speed up learning by adapting to different feedback-providing strategies~\cite{loftin2014learning,loftin2016learning}; the Advice framework \cite{griffith2013policy, cederborg2015policy} treats human feedback as direct policy labels and uses a probabilistic model to learn from inconsistent feedback. Although these approaches better utilize human feedback with improved modeling of human behaviors, weak supervision is still a fundamental problem in the evaluative feedback approach. Human explanatory information has also been explored previously. The main challenge of using human explanation is to translate high-level human linguistic feedback into representations that can be understood by the agents. As an early attempt, \citeauthor{thomaz2006reinforcement} allow humans to give anticipatory guidance rewards and point out the object tied to the reward \cite{thomaz2006reinforcement}. However, they assume the availability of an object-oriented representation of the state. \citeauthor{krening2016learning} and \citeauthor{yeh2018bridging} resort to human explanatory advice in natural language \cite{krening2016learning, yeh2018bridging}. Still, they assume a recognition model that can understand concepts in human explanation and recognize objects. In this work, we bridge the vocabulary gap between the humans and the agent by taking human visual explanations in the form of saliency maps. Other works use human gaze data as human saliency information, collected with sophisticated eye trackers, to help agents with decision making in an imitation learning setting \cite{zhang2018agil,zhang2019atari,kim2020using}. 
They require human data to be collected offline in advance. In contrast, this work is closer to \citeauthor{arakawa2018dqn}, \citeauthor{xiao2020fresh}, and \citeauthor{cui2020empathic}, where human trainers are required to be more actively involved during training \cite{arakawa2018dqn,xiao2020fresh, cui2020empathic}. Recent works in supervised learning also use human visual explanation to extend the human-machine communication channel \cite{ross2017right, rieger2020interpretations, teso2019explanatory}. The visual explanation is used as domain knowledge to improve the performance of machine learning models. The way we exploit human explanation via state perturbation can be viewed as a novel form of data augmentation. Data augmentation techniques are widely used in computer vision tasks \cite{lecun1998gradient,shorten2019survey} and have recently been applied to deep RL tasks \cite{kostrikov2020image, laskin2020reinforcement, raileanu2020automatic, mitrovic2020representation}. The \emph{explanatory interactive learning} framework \cite{teso2019explanatory} also leverages human visual explanation with data augmentation. However, it focuses only on classification tasks with low-dimensional representations. \section{Problem Setup} \label{sec:Problem-Setup} We intend to verify the hypothesis that the combined use of visual explanation and binary evaluative feedback can boost an RL agent's performance. We have an agent $M$ that interacts with an environment $\mathcal{E}$ through a sequence of actions, states, and rewards. Following standard practice in deep RL to make states Markovian, $k$ ($k=4$) preprocessed consecutive image observations are stacked together to form a state $s_t = [x_{t-(k-1)}, ... , x_{t-1}, x_{t}]$ \cite{mnih2015human}. At each time-step $t$, the agent can select an action from the set of all possible actions $\mathcal{A} = \{1,...,K\}$ for the state $s_t \in \mathcal{S}$. Then the agent receives a reward $r_t \in \mathcal{R}$ from the environment $\mathcal{E}$. 
This sequence of interactions ends when the agent achieves its goal or when the time-step limit is reached. This formulation follows the standard Markov decision process framework in RL research~\cite{sutton2018reinforcement}. The agent's goal is to learn a policy function $\pi$, a mapping from a state to an action, that maximizes the expected return. For a deep Q-Learning agent, the policy $\pi$ is approximated by the Q function $Q(s, a)$, which represents the maximum expected return for taking action $a$ at state $s$. Additionally, we assume a human trainer who provides binary evaluative feedback $\mathcal{H} = (h_1, h_2, ... h_n)$ that conveys their assessment of the queried state-action pairs given the agent trajectory. We define the feedback as $h_t = (x_t^{h}, b_t^{h}, x_t, a_t, s_t)$, where $b_t^{h} \in \{-1,1\}$ is a binary "bad" or "good" feedback provided for action $a_t$, and $x_t^{h} = \{Box_{1}, ... ,Box_{m}\}$ is a saliency map for the image $x_t$ in state $s_t$. $Box_{i}$ is a tuple $(x,y,w,h)$ giving the top-left pixel coordinates $x$ and $y$, the width $w$, and the height $h$ of a bounding box annotated on the observation image $x_t$. \section{Method} \label{sec:method} EXPAND aims to improve data efficiency and performance with explanation-augmented feedback. In the following, we first describe how EXPAND simultaneously learns from both the environment reward and binary evaluative feedback. We then introduce our novel context-aware data augmentation that utilizes the domain knowledge in human explanation. Algorithm \ref{algo:trainloop} in Appendix \ref{sec:appendix-expand-worlflow} presents the train-interaction loop of EXPAND. Within an episode, the agent interacts with the environment and stores its transition experiences. Every few episodes, it collects human feedback queried on a trajectory sampled from the most recent policy. 
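The $k$-frame state construction described in the problem setup can be sketched as follows (a minimal illustration with names of our own choosing, not the authors' implementation):

```python
from collections import deque

import numpy as np

K = 4  # number of stacked frames, as in the setup above


class FrameStacker:
    """Maintains s_t = [x_{t-(k-1)}, ..., x_t] as a rolling window."""

    def __init__(self, k=K):
        self.frames = deque(maxlen=k)

    def reset(self, first_obs):
        # Pad the window with copies of the first observation.
        self.frames.clear()
        for _ in range(self.frames.maxlen):
            self.frames.append(first_obs)
        return np.stack(self.frames)  # s_0, shape (k, H, W)

    def step(self, next_obs):
        # Drop x_{t-k}, append x_t.
        self.frames.append(next_obs)
        return np.stack(self.frames)  # s_t = [x_{t-(k-1)}, ..., x_t]
```

The deque with `maxlen=k` keeps the window bounded automatically, so each environment step only appends the newest preprocessed frame.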
All the human feedback is stored in a feedback buffer similar to the replay buffer in off-policy DRL~\cite{mnih2015human}. In a single training step, the weights of the RL agent are updated twice: first using sampled environment data, as in the usual RL update, and then using human feedback data. In our experiments, both EXPAND and the baselines follow the same train-interaction loop. \subsection{Learning from Environment Reward and Binary Feedback} \label{sec:Reinforcement Learning Module} The underlying RL algorithm of EXPAND can be any off-policy Q-learning-based algorithm. In this work, we use Deep Q-Networks \cite{mnih2015human} combined with multi-step returns, reward clipping, soft-target network updates, and prioritized experience replay \cite{schaul2016prioritized}. We refer to this RL approach as Efficient DQN. To learn from binary evaluative feedback, we propose a new method that does not require additional parameters to explicitly approximate human feedback. We use the advantage value to formulate the feedback loss function, called the \emph{advantage loss}. The advantage value is the difference between the Q-value of the action upon which the feedback was given and the Q-value of the current optimal action calculated by the neural network. Given the agent's current policy $\pi$, state $s$, and action $a$ for which the feedback is given, the advantage value is defined as: \small \begin{equation} \begin{split} A^{\pi}(s,a) & = Q^{\pi}(s,a) - V^{\pi}(s) = Q^{\pi}(s,a) - Q^{\pi}(s, \pi(s)) \end{split} \end{equation} \normalsize Hence, the advantage value quantifies the possible (dis)advantage the agent would have if some other action were chosen instead of the current best. It can be viewed as the agent's judgment on the optimality of an action. Positive feedback means the human trainer expects the advantage value of the annotated action to be zero. 
Therefore, we define a loss function, the advantage loss, which forces the network to have the same judgment on the optimality of an action as the human trainer. Intuitively, we penalize the policy approximator when an action marked "good" is not chosen as the best action, or when an action marked "bad" is chosen as the best action. For feedback $h = (x^{h}, b^{h}, x, a, s)$, when the label is "good", i.e., $b^{h}=1$, we expect the network to output a target value $\hat{A}(s, a) = 0$, so the loss can be defined as $|\hat{A}(s, a) - A^{\pi}(s, a)| = Q^{\pi}(s, \pi(s)) - Q^{\pi}(s,a)$. When the label is "bad", i.e., $b^{h}=-1$, we expect the network to output an advantage value $A^{\pi}(s, a) < 0$. Since here we do not have a specific target value for $A^{\pi}(s, a)$, we resort to the idea of large margin classification loss \cite{piot2014boosted}, which forces $Q^{\pi}(s, a)$ to be at least a margin $l_{m}$ lower than the Q-value of the second best action, i.e., $\max_{a' \neq a}Q^{\pi}(s, a')$. One advantage of this interpretation of human feedback is that it directly shapes the Q-values with the feedback information and does not require additional parameters to model human feedback. This kind of large margin loss has been shown to be effective in practice by previous works such as DQfD \cite{hester2018deep}. 
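The two cases described above can be sketched numerically as follows (a NumPy sketch assuming a greedy policy $\pi(s) = \arg\max_a Q(s,a)$; the function name and margin value are illustrative, not the authors' code):

```python
import numpy as np


def advantage_loss(q_values, action, label, margin=0.05):
    """Advantage loss for a single human feedback (illustrative sketch).

    q_values: 1-D array of Q(s, .) over all actions
    action:   index of the action the feedback was given on
    label:    +1 ("good") or -1 ("bad")
    margin:   l_m, the margin used in the "bad" case
    """
    advantage = q_values[action] - np.max(q_values)  # A(s, a) <= 0
    if label == 1:
        # "good": target A(s, a) = 0, so loss = Q(s, pi(s)) - Q(s, a)
        return -advantage
    # "bad": no loss if the action is already suboptimal (A < 0);
    # otherwise push Q(s, a) at least `margin` below the second-best action.
    if advantage < 0:
        return 0.0
    second_best = np.max(np.delete(q_values, action))
    return q_values[action] - (second_best - margin)
```

For instance, with `q_values = [1.0, 2.0, 0.5]`, a "good" label on action 0 yields a loss of 1.0, while a "bad" label on the greedy action 1 yields a positive margin penalty.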
Formally, for human feedback $h = (x^{h}, b^{h}, x, a, s)$ and the corresponding advantage value $A_{s,a} = A^{\pi}(s,a)$, the advantage loss is: \small \begin{equation} \label{eq:advantage loss} \begin{split} L_{A}(s, a, h) = & L_{A}^{Good}(s, a, h) + L_{A}^{Bad}(s, a, h) \end{split} \end{equation} \normalsize where \small \begin{equation} \begin{split} & L_{A}^{Good}(s, a, h; b^h=1) = \begin{cases} 0 & \text{; $A_{s,a}=0$} \\ Q^{\pi}(s, \pi(s)) - Q^{\pi}(s,a) & \text{; otherwise}\\ \end{cases} \end{split} \end{equation} \normalsize and \small \begin{equation} \begin{split} & L_{A}^{Bad}(s, a, h; b^h=-1) = \begin{cases} 0 & \text{; $A_{s,a}<0$} \\ Q^{\pi}(s, a) - (\max_{a' \neq a}Q^{\pi}(s, a')-l_{m}) & \text{; $A_{s,a}=0$}\\ \end{cases} \end{split} \end{equation} \normalsize Note that the COACH framework~\cite{macglashan2017interactive} also interprets human feedback as the advantage function. In COACH's formulation, the advantage value is exactly the advantage term in the policy gradient equation: human trainers are supposed to provide positive/negative policy-dependent feedback only when the agent performs an action better/worse than that of the current policy. Thus, such a formulation is restricted to on-policy policy-gradient methods. In contrast, the advantage function in EXPAND aims to capture the relative utility of all the actions. Here human feedback is direct policy advice, as in the Advice framework \cite{griffith2013policy}, indicating whether an action is preferable regardless of the agent's current policy. This property makes the advantage loss in EXPAND a better fit for off-policy value-based methods. \subsection{Leveraging Human Visual Explanation} \label{sec:leverage-visual-explanation} Human visual explanation informs the agent about which parts of the state matter for making the right decision. These "parts" of the state could be specific regions or objects. 
In EXPAND, each saliency map consists of a set of bounding boxes over the images, marking a region's importance in making the decision. The intuition is that the agent's internal representation must correctly capture the relevant features before it can make an optimal decision. Based on this, we propose a novel context-aware data augmentation technique, which applies multiple perturbations to irrelevant visual regions in the image and forces the model to be invariant under these transformations. The key idea here is that manipulations of irrelevant regions should not affect the agent's policy. \begin{wrapfigure}{R}{0.5\linewidth} \centering \subfloat[]{\includegraphics[width=0.4\linewidth]{images/causaldiag.png}\label{fig:aug-causal-graph}} \subfloat[]{\includegraphics[width=0.4\linewidth]{images/blurfig.png}\label{fig:f2}} \caption{(a) Modeling the representation learning problem with data augmentation using a causal graph. Figure from \cite{mitrovic2020representation}. (b) Example state of Pixel-Taxi; the top image represents the original state and the bottom represents a sample augmented state.} \end{wrapfigure} To gain more insight into how our data augmentation method benefits policy learning, we follow the causal-graph interpretation of representation learning with data augmentation visualized in Fig. \ref{fig:aug-causal-graph} \cite{mitrovic2020representation}. In this model, each image observation $X$ can be expressed by \emph{content} variables and \emph{style} variables. The content variable $C$ contains all the information necessary to make the correct prediction, while the style variable $S$ contains all other information that does not influence the current downstream task. That is, only the content is causally related to the current downstream target of interest $Y$, and the content $C$ is independent of the style $S$. The goal of representation learning is to accurately approximate the invariant part (content) and ignore the varying parts (style). 
Based on this formulation, data augmentation is essentially a way to emulate style variability by performing interventions on the style variables $S$. However, since the choice of data augmentations implicitly defines which aspects of the data are designated as style and which as content, existing data augmentation methods in RL are limited to conservative transformations, such as random translation or random cropping, to make sure the content information $C$ is preserved. Previous works show that transformations altering the content $C$ can even be detrimental; these are referred to as aggressive augmentations in \cite{purushwalkam2020demystifying}. Unlike standard data augmentations in RL, EXPAND obtains prior domain knowledge about the state content from human explanation. Hence, we can resort to a wider range of transformations, and thereby more informatively highlight the content variable $C$, by applying various transformations only to regions marked as irrelevant (the style variable $S$) while keeping the relevant parts (the content variable $C$) unchanged. Figure \ref{fig:f2} shows an example state from the Pixel-Taxi domain, where the top image is the original state with the gray cell as the taxi, red as the passenger, and black as the destination. The bottom image shows an example image used by EXPAND, which contrastively highlights the taxi and passenger by performing a Gaussian blur over the remainder of the image observation. This illustrates why the proposed context-aware augmentation can be more informative than standard data augmentations. We choose Gaussian blurring to perturb the irrelevant regions, as was also done in a previous Explainable RL work \cite{greydanus2017visualizing}, because it effectively perturbs objects in the image without introducing new information. 
As a counterexample, transformations like \emph{cutout} might significantly change the state content in environments where small black blocks have a particular semantic meaning. Formally, given feedback $h = (x^{h}, b^{h}, x, a, s)$, we need to convert the state $s$ into augmented states $f(s)$ with perturbations on irrelevant regions. Let $\mathbb{M}(x,i,j)$ denote a mask over relevant regions, where $x(i,j)$ denotes the pixel at index $(i,j)$ of image $x$. We then have: \small \begin{equation} \mathbb{M}(x,i,j) = \begin{cases} 1 & \text{if $(i, j)$ lies in some $Box \in x^{h}$} \\ 0 & \text{otherwise}\\ \end{cases} \end{equation} \begin{equation} \phi(x,\mathbb{M},i,j) = x \odot \mathbb{M}(x,i,j) + G(x, \sigma_{G})\odot (1 - \mathbb{M}(x,i,j)) \end{equation} \normalsize Then we can perturb pixel $(i,j)$ in image observation $x$ according to the mask $\mathbb{M}$ using the function $\phi$ defined above, where $\odot$ is the Hadamard product and the function $G(x, \sigma_{G})$ is the Gaussian blur of the observation $x$; relevant pixels are kept intact while irrelevant pixels are replaced by their blurred counterparts. Hence, we can obtain an augmented state $f(s)$ with perturbed irrelevant regions by applying $\phi(x, \mathbb{M})$ to each stacked frame $x$ with the corresponding mask $\mathbb{M}$. We use the context-aware data augmentation in the following two ways: \begin{enumerate} \item For any human feedback $h = (x^{h}, b^{h}, x, a, s)$, we train the model by calculating the advantage loss with the original data $h$ as well as the augmented feedback $h' = (x^{h}, b^{h}, x, a, f(s))$. Note that one human feedback can be augmented into multiple feedbacks by varying the parameters of $f$. In EXPAND, we use Gaussian perturbations of various filter sizes and variances (see Appendix \ref{sec:appendix-gaussian-setting} for detailed settings). 
\item To encourage the RL model's internal representation to accurately capture relevant visual features and ignore the irrelevant parts, we enforce an explicit \emph{invariance constraint} via an auxiliary regularization loss: \small \begin{equation} \label{eq:value invariance loss} L_{I} = \frac{1}{g}\sum_{i = 1}^{g}\frac{1}{|\mathcal{A}|}\sum_{a\in \mathcal{A}}\left\Vert Q(s, a) - Q(f_i(s), a) \right\Vert_2 \end{equation} \normalsize where $\mathcal{A}$ is the action set and $g$ is the number of perturbations. \end{enumerate} \subsection{Combining Feedback and Explanation Losses} \label{sec:Combining-Feedback-Losses} We linearly combine all the losses to obtain the overall feedback loss: \small \begin{equation} \label{eq:feedback-loss} L_{F} = \lambda_{A}L_{A} + \lambda_{I}L_{I} \end{equation} \normalsize where $\lambda_{A}$ and $\lambda_{I}$ are the weights of the advantage loss and the invariance loss, respectively. In all experiments, we set $\lambda_{A}$ to 1.0 without doing hyper-parameter tuning. We set $\lambda_{I}$ to 0.1, as suggested in \cite{raileanu2020automatic}, which also applies an invariance constraint on the RL model but with standard data augmentations. The agent is trained with the usual DQN loss as well as $L_{F}$. Note that in EXPAND, the advantage loss is computed with both the original human feedback and the augmented human feedback. We refer to a baseline agent that does not utilize human visual explanation (trained only with the usual DQN loss and the advantage loss on original human feedback) as DQN-Feedback. \subsection{Collecting Human Visual Explanation} \begin{wrapfigure}{R}{0.4\linewidth} \begin{center} \centerline{\includegraphics[width=0.7\linewidth]{images/enduro-tracking.png}} \caption{An example of object detection in the car driving game Enduro. 
Users were allowed to remove or add other bounding regions at will.} \label{fig:detection-enduro} \end{center} \vspace{-0.3cm} \end{wrapfigure} Though human visual explanation offers a powerful way to make human-agent interaction more natural and effective, it might impose an excessive workload if we required the trainer to annotate the image for every single query. To account for possible overheads and reduce human effort, we design an object-oriented interface that enables the human to provide guidance by effortlessly pointing out the labels of salient objects. Note that this design is in line with the idea of using a "symbolic" interface in human-advisable AI, which argues that humans are more comfortable with symbol-level communication, and the agents are free to learn their own internal representation as long as the user interface can ground the human advice (e.g., to pixel space) \cite{kambhampati2022symbols}. Technically, our object-oriented interface contains a tracking and detection module, with which the human trainers only need to annotate at the beginning or when the tracker/detector works imperfectly. For example, in the car driving game Enduro (Fig. \ref{fig:detection-enduro}), all the lanes and cars are automatically highlighted and tracked, so the human trainers only need to deselect irrelevant objects in the image. A screenshot of our video-player-like interface can be found in Appendix \ref{sec:appendix-user-study}. Trainers can start/pause the replay of the queried trajectory and provide explanations by directly drawing bounding boxes on the screen. The trainers can also adjust the "video" frame rate based on their needs. Similar to Deep-TAMER~\cite{warnell2018deep}, we also applied each received feedback and explanation to frames displayed between 2 and 0.2 seconds before the feedback occurred -- we assume that within this 1.8-second window, the salient regions/objects remain the same. 
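Putting the pieces of the Method section together, the relevance mask built from the bounding boxes, the blur-based perturbation that keeps annotated regions intact, and the invariance loss $L_I$ can be sketched as follows (a NumPy/SciPy sketch with illustrative names; the actual implementation operates on batched stacked frames):

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def relevance_mask(shape, boxes):
    """Mask M: 1 inside a human-annotated box, 0 elsewhere.

    Each box is an (x, y, w, h) tuple with a top-left corner,
    as in the feedback definition above.
    """
    mask = np.zeros(shape)
    for (x, y, w, h) in boxes:
        mask[y:y + h, x:x + w] = 1.0
    return mask


def perturb(frame, mask, sigma=3.0):
    """phi: keep relevant pixels, Gaussian-blur the irrelevant rest."""
    blurred = gaussian_filter(frame, sigma)  # G(x, sigma_G)
    return frame * mask + blurred * (1.0 - mask)


def invariance_loss(q_fn, state, aug_states):
    """L_I: per-action Q-value gap, averaged over actions and the
    g augmented copies of the state."""
    q = q_fn(state)
    return np.mean([np.mean(np.abs(q - q_fn(s_aug)))
                    for s_aug in aug_states])
```

Generating several augmented copies with different `sigma` values mirrors the paper's use of Gaussian perturbations of various filter sizes and variances.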
\section{Experimental Evaluation} \label{sec:Experiments} Our experimental evaluation aims to answer the following questions. \begin{enumerate} \item Does the use of human explanation improve environment and feedback sample efficiency? \item Does EXPAND utilize the human explanation better than other baselines? \item Is context-aware data augmentation more informative than standard data augmentation? \end{enumerate} To answer the first question, we compare EXPAND to DQN-Feedback and to DQN-TAMER, an HIRL algorithm that combines TAMER with deep RL~\cite{arakawa2018dqn}. For the second question, we compare EXPAND with two other explanatory interactive learning methods adapted from supervised learning, namely Ex-AGIL and Attention-Align. Ex-AGIL is adapted from AGIL \cite{zhang2018agil}, which trains a separate attention prediction network to generate visual explanations for unseen states. The predicted attention is used as a mask to filter out irrelevant information in the image observation, and the masked state is then fed as input to the policy network. Attention-Align uses an auxiliary \emph{attention alignment loss} similar to the loss functions for supervised learning in \cite{saran2020efficiently, schramowski2020making, rieger2020interpretations, ross2017right}. It penalizes the agent when its attention heatmap does not align with the human visual explanation. To efficiently obtain a differentiable attention heatmap of the model, we use the Explainable RL algorithm FLS \cite{nikulin2019free}. The saliency map produced by FLS-DQN is essentially the model's prediction of whether a pixel should be included in a bounding box. Hence, the attention alignment loss here is defined as the mean squared error between the agent's prediction and the human visual explanation. The implementation details of the baselines can be found in Appendix \ref{sec:appendix-baseline-implement}. 
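The attention alignment loss used by the Attention-Align baseline reduces to a pixel-wise mean squared error between the model's saliency prediction and the human mask, which can be sketched as (illustrative NumPy sketch; the names are our own):

```python
import numpy as np


def attention_align_loss(predicted_saliency, human_mask):
    """MSE between the model's per-pixel saliency prediction and the
    human explanation mask (1 inside annotated boxes, 0 outside)."""
    return np.mean((predicted_saliency - human_mask) ** 2)
```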
Finally, to answer the third question, we replace the context-aware data augmentation in EXPAND with standard augmentations that do not use human saliency information. The context-agnostic augmentations we compare to include Gaussian blurring and random cropping, which resulted in state-of-the-art sample efficiency in recent works \cite{laskin2020reinforcement, kostrikov2020image}. Note that EXPAND only augments states that were queried to the human trainer; hence, for this comparison, the context-agnostic methods only augment the states for which the system received human feedback. We conducted experiments on five tasks: Pixel-Taxi (Fig.~\ref{fig:f2}) and four Atari games. The Pixel-Taxi domain is similar to the Taxi domain widely used in RL research~\cite{dietterich2000hierarchical}. It is a grid-world setup in which the taxi agent occupies one grid cell at a time, and passengers (denoted by different colored dots) occupy some other cells. The taxi agent's goal is to pick up and transport the correct passenger to the destination. To force the agent to learn passenger identities instead of memorizing their "locations", we randomize the passengers' positions at the beginning of each episode. A reward is given only when the taxi drops off the correct passenger at the destination cell. In addition, we choose four Atari games with default settings: Pong, Asterix, Montezuma's Revenge, and Enduro. The original Enduro game can last indefinitely, which would make human training impractical. Therefore, in Enduro, our goal is to teach the agent to overtake as many cars as possible within 1000 environment steps (an episode); hence we denote this task as Enduro-1000. In Montezuma's Revenge, we train the agent to solve the first room within 1000 environment steps per episode; hence we denote this task as MR Level 1 or simply MR. In all experiments, both EXPAND and the baselines use Efficient DQN as the underlying RL approach. 
Efficient DQN uses the same DQN network architecture designed by \citeauthor{mnih2015human} \citeyearpar{mnih2015human}. Details on the architecture and hyperparameters can be found in Appendix \ref{sec:hyper-parameters}. Following the standard pre-processing~\cite{mnih2015human}, each frame is converted from RGB format to grayscale and resized to $84 \times 84$. The input pixel values are normalized to the range $[0, 1]$. During training, we start with an $\epsilon$-greedy policy ($\epsilon=1.0$) and reduce $\epsilon$ by a factor of $\lambda_{\epsilon}$ at the end of each episode until it reaches 0.01. All the reported results are averaged over 5 random seeds. Algorithm \ref{algo:trainloop} describes the steps for obtaining human feedback: every $N_f$ ($N_f=4$) episodes, we sample one trajectory and query the user for binary evaluative feedback as well as a visual explanation. Active querying, although preferable, is left for future experiments, since the goal of this work is to demonstrate whether human explanations can effectively augment binary feedback. When collecting feedback, we allow humans to watch the queried trajectories and provide feedback at will (the human can choose not to provide feedback for some queried states). \subsection{Evaluation using Synthetic Feedback and Explanation from Oracle} \begin{figure*}[ht] \centering \includegraphics[width=1\linewidth]{images/score.png} \caption{The smoothed learning curves of EXPAND and the baselines. The solid lines show the mean score over 5 random seeds. The shaded regions represent the standard error of the mean. In Pixel-Taxi, the score is a running average over the last 20 rollouts. Our method (EXPAND, in red) outperforms the baselines in all the tasks.} \label{fig:oracle-result} \vspace{-0.2cm} \end{figure*} To perform a systematic analysis, we first conducted experiments comprising 5 runs of each algorithm with a synthetic oracle. 
A synthetic oracle allows us to run a large number of experiments and give consistent feedback across different runs, providing fair and systematic comparisons between different approaches \cite{griffith2013policy,arakawa2018dqn}. The oracle uses trained models from \emph{Atari Zoo} \cite{such2019an} for the Atari games, and a trained DQN model for Pixel-Taxi. To annotate the "relevant" regions on the image observation, we use hand-coded models to highlight the taxi cell, the destination cell, and the target passenger in Pixel-Taxi; the two paddles and the ball in Atari-Pong; and the player vehicle, lanes, and other vehicles in front of the agent in Enduro-1000. In both MR and Asterix, the oracle highlighted the agent and other regions (the monster, ladder, and key in MR; the enemies and target in Asterix), depending on whether they are spatially close to the agent. (See Fig. \ref{fig:detection-enduro} and Appendix \ref{sec:synthetic-feedback-explanation} for other examples.) \textbf{Improvements on Environment and Feedback Sample Efficiency : } Fig.~\ref{fig:oracle-result} compares the environment sample efficiency as well as the performance of EXPAND (in red) and the baselines. To answer the first question posed in Section \ref{sec:Experiments}, the plot clearly shows that EXPAND outperforms the HIRL baselines (DQN-Feedback and DQN-TAMER) by a large margin across all the tasks except MR, where EXPAND was consistently on par with DQN-Feedback. In Pixel-Taxi and Pong, EXPAND is able to learn a near-optimal policy with 35\% fewer environment samples and feedback samples. In Enduro-1000, EXPAND manages to consistently obtain a score over 60 using 80k samples, compared to the 120k samples used by DQN-TAMER, an over 30\% improvement. In Asterix and MR, EXPAND achieved a considerably higher score than DQN-TAMER. 
Finally, since human feedback is obtained at fixed intervals, an improvement in environment sample efficiency (the x-axis in Fig.~\ref{fig:oracle-result}) implies a corresponding improvement in feedback sample efficiency. \textbf{EXPAND versus other Explanatory Interactive Learning methods:} \begin{figure*}[ht] \centering \includegraphics[width=1\linewidth]{images/extensive_experiments_full.png} \caption{Comparison of EXPAND with other perturbation techniques and attention-based Interactive Learning methods.} \label{fig:extensive-experiment} \vspace{-0.1cm} \end{figure*} As mentioned earlier, to answer the second question posed in Section \ref{sec:Experiments}, we compare EXPAND with Ex-AGIL and Attention-Align. From Fig. \ref{fig:extensive-experiment}, we can observe that Ex-AGIL improves on the baseline DQN-Feedback only in Pong, while in the other, more visually complex environments, the auxiliary attention prediction harms performance. This highlights two issues with approximating human attention with an additional model: first, the approximation may fail for unseen states; second, the visual explanation is not used as a strong signal to directly regularize the policy network. On the other hand, the second baseline, Attention-Align, fails to learn a usable policy in all five tasks. A potential reason for its failure is a misalignment between the attention prediction objective and the reward-seeking objective. This indicates that this type of attention alignment loss might not be suitable for a less stable learning system like deep RL. In contrast, data-augmentation methods for RL have been empirically validated over the years, suggesting that EXPAND's methodology is more stable. \textbf{EXPAND versus Context-Agnostic Data Augmentations:} We made additional comparisons between EXPAND and standard data augmentations to verify our hypothesis that context-aware data augmentation can be more informative than context-agnostic augmentations.
As expected, Gaussian blurring and random cropping fail to help the agent (Fig.~\ref{fig:extensive-experiment}). Interestingly, the two context-agnostic data augmentations even degrade performance in some tasks, since they add unnecessary complexity when the agent tries to infer the human trainer's intent behind the binary evaluative feedback. This contrasts with EXPAND's methodology, which contrastively highlights the relevant regions. This result suggests that while standard data augmentation helps RL by providing more data to prevent overfitting \cite{kostrikov2020image}, its informativeness can be further improved by incorporating domain knowledge, as in EXPAND. \subsection{Evaluation with Human in the Loop} So far, we have presented results using synthetic oracles. In this section, we present results with a human trainer. We run this experiment on Enduro-1000. The objective of this user study is to address the following questions: \begin{enumerate} \item Can EXPAND perform well with feedback and explanations from a human trainer, considering that human feedback can be sparser and noisier than synthetic feedback? \item Is there a low-cost way to collect human visual explanations in sequential decision tasks? \end{enumerate} In the experiment, the agent queried the human every 10 episodes. We limit the annotation time for each query to around 5 minutes, so the total interaction time is at most 30 minutes. Each algorithm was run 3 times with different random seeds. Within the 30-minute interaction time, in the baseline DQN-Feedback, where human trainers only need to give binary evaluative feedback, the trainers provided 2405 binary feedback signals on average. In EXPAND, the human trainers provided 2026 feedback-explanation pairs on average within the same time limit. The difference in the number of feedback signals is small, suggesting that the cost of providing visual explanations is low.
From Fig.~\ref{fig:oracle-result}, we can observe that EXPAND significantly outperforms DQN-Feedback in the user study, suggesting that the use of human explanation can also improve sample complexity given the same wall-clock interaction time. \subsection{Ablation Study} \begin{figure*}[ht] \centering \includegraphics[width=1\linewidth]{images/ablation_full.png} \caption{Ablation experiments analyzing the effect of each individual loss term on performance. The results verify that the combination of both loss terms leads to the best performance.} \label{fig:ablation} \vspace{-0.1cm} \end{figure*} To gain more insight into the losses in EXPAND, we conducted an ablation study on two variants of EXPAND, which use the augmented data either only to compute the invariance loss (EXPAND-no-augmented-advantage) or only in the advantage loss (EXPAND-no-invariance). Fig.~\ref{fig:ablation} shows that both variants of EXPAND eventually converged to near-optimal policies. We note that EXPAND-no-invariance (in purple) and EXPAND-no-augmented-advantage (in light green) perform significantly better than DQN-Feedback, highlighting that each saliency loss alone provides significant improvements over the baseline. Finally, combining these losses (EXPAND, in red) boosts performance even further, indicating that combining the invariance constraint with the augmented-data advantage loss is the better approach. \section*{Acknowledgement} This research is supported in part by ONR grants N00014-16-1-2892, N00014-18-1-2442, N00014-18-1-2840, N00014-9-1-2119, AFOSR grant FA9550-18-1-0067, DARPA SAIL-ON grant W911NF-19-2-0006 and a JP Morgan AI Faculty Research grant. Ruohan Zhang's work on this paper was done when he was a PhD student at The University of Texas at Austin.
\section{Conclusion \& Future Work} \label{sec:Conclusion} In this work, we presented a novel method to integrate human visual explanations with binary evaluations of agents' actions in a human-in-the-loop RL paradigm. We show that our proposed method, EXPAND, outperforms previous methods in terms of environment sample efficiency, and shows promising results for human feedback efficiency. Future work can experiment with different types of perturbations beyond Gaussian blurs, including domain-dependent perturbations that involve object manipulation. Finally, as we mentioned at the end of the introduction, EXPAND agents are an example of AI agents that can take human advice in terms of a {\em symbolic lingua franca} \cite{kambhampati2022symbols}, and are a promising approach to the more general problem of human-advisable RL systems that can be aligned with human preferences through symbolic advice. \section{The Overall Workflow of EXPAND} \label{sec:appendix-expand-worlflow} \begin{algorithm} \caption{Train - Interaction Loop} \label{algo:trainloop} \begin{algorithmic} \STATE {\bfseries Result:} Trained Eff. DQN agent M \STATE {\bfseries Input:} Eff. DQN agent M with randomly-initialized weights $\theta$, replay buffer $\mathcal{D}$, human feedback buffer $\mathcal{D}_h$, feedback frequency $N_{f}$, total episodes $N_{e}$, maximum number of environment steps per episode $T$, update interval $b$ \STATE {\bfseries Begin} \FOR{$i=1$ {\bfseries to} $N_{e}$} \FOR{$t=1$ {\bfseries to} $T$} \STATE Observe state $s$\; \STATE Sample action $a$ from current policy $\pi$ with $\epsilon$-greedy exploration, observe reward $r$ and next state $s'$, and store $(s, a, r, s')$ in $\mathcal{D}$\; \IF{ $t$ mod $b == 0$} \STATE Sample a mini-batch of transitions from $\mathcal{D}$ with prioritization\; \STATE Perform standard DQN update with sampled environment data\; \STATE Sample a mini-batch of human feedback from $\mathcal{D}_h$\; \STATE Update Eff.
DQN with human feedback data\; \ENDIF \ENDFOR \IF{ $i$ mod $N_{f} == 0$} \STATE Obtain the last trajectory $\tau$ from $\mathcal{D}$\; \STATE Query $\tau$ to obtain feedback $\mathcal{H}_i$\; \STATE Append $\mathcal{H}_i$ to buffer $\mathcal{D}_h$\; \ENDIF \ENDFOR \STATE {\bfseries End} \end{algorithmic} \end{algorithm} \section{Settings of Gaussian Blurring in EXPAND} \label{sec:appendix-gaussian-setting} \begin{wrapfigure}{R}{0.6\linewidth} \centering \includegraphics[width=1\linewidth]{images/num_augmentations.png} \caption{Learning curves of the variants of EXPAND with different numbers of augmentations.} \label{fig:num-augmentations} \end{wrapfigure} In EXPAND, we augment each human-evaluated state into 5 perturbed states. To verify that 5 is sufficient, we also experimented with the number of augmentations per state required to get the best performance. Figure \ref{fig:num-augmentations} shows a comparison when the number of augmentations is varied among $\{1,5,12\}$ for Pixel Taxi and Pong using a synthetic oracle. The plots suggest that increasing the number of augmentations yields only slight performance gains, and therefore setting the number of perturbations to 5 for EXPAND is appropriate.
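Concretely, each perturbation blurs the whole frame and then restores the pixels inside the human-annotated regions, so only context-irrelevant pixels are degraded. A minimal sketch of this step (our own illustration assuming a boolean mask and \texttt{scipy}; the default sigmas here follow the 5-augmentation setting, and the filter size is left to \texttt{scipy}'s default truncation for simplicity):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perturb(frame, mask, sigma):
    """Blur everything outside the annotated region.

    frame: 2-D grayscale array with values in [0, 1]
    mask:  boolean array, True where the trainer highlighted the frame
    """
    blurred = gaussian_filter(frame, sigma=sigma)
    # keep annotated pixels intact, replace the rest with their blurred values
    return np.where(mask, frame, blurred)

def augment(frame, mask, sigmas=(2, 5, 10, 5, 10)):
    """Produce one perturbed copy per filter setting (5 by default)."""
    return [perturb(frame, mask, s) for s in sigmas]
```

Because the annotated pixels are identical across all augmented copies, an invariance loss can require the agent's outputs to agree on them.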
The settings of the Gaussian blurring filters are listed below: \renewcommand\labelitemii{$\circ$} \begin{itemize} \setlength\itemsep{0em} \item 1 Augmentation: \begin{itemize} \setlength\itemsep{0em} \item filter size: 5, $\sigma$: 5 \end{itemize} \item 5 Augmentations: \begin{itemize} \setlength\itemsep{0em} \item filter size: 5, $\sigma$: 2 \item filter size: 5, $\sigma$: 5 \item filter size: 5, $\sigma$: 10 \item filter size: 11, $\sigma$: 5 \item filter size: 11, $\sigma$: 10 \end{itemize} \item 12 Augmentations: \begin{itemize} \setlength\itemsep{0em} \item filter size: 5, $\sigma$: 2 \item filter size: 5, $\sigma$: 5 \item filter size: 5, $\sigma$: 10 \item filter size: 7, $\sigma$: 3 \item filter size: 7, $\sigma$: 5 \item filter size: 7, $\sigma$: 10 \item filter size: 9, $\sigma$: 3 \item filter size: 9, $\sigma$: 5 \item filter size: 9, $\sigma$: 10 \item filter size: 11, $\sigma$: 3 \item filter size: 11, $\sigma$: 5 \item filter size: 11, $\sigma$: 10 \end{itemize} \end{itemize} \section{Implementation Details} \label{sec:appendix-baseline-implement} \subsection{Ex-AGIL} \begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{images/agil_architecture.png} \caption{AGIL network architectures. The upper one is the attention prediction network, and the bottom one is the policy network.} \label{fig:agil-architecture} \end{figure} AGIL \cite{zhang2018agil} was designed to utilize saliency maps collected via human gaze. In this work, human salient information is instead provided by annotated visual explanations. Though the source of the explanatory information is slightly different, we can expect that AGIL with some minor changes (called Ex-AGIL) should also work well when the gaze input is replaced with visual explanations. Note that we do not aim to compare the two different ways of gathering human salient information, but rather focus on which method better utilizes human visual explanations. The network architectures are shown in Fig. \ref{fig:agil-architecture}.
Two modifications were made to AGIL: first, rather than training the attention network and the policy network sequentially as in supervised behavioral cloning, both networks are updated on every DQN update step. Second, unlike human gaze, a human-annotated saliency map (bounding boxes) does not induce a probability distribution. Hence, we view the output of the attention network as a prediction of whether a pixel should be included in a human-annotated bounding box. Accordingly, to train the network, we applied a simple mean-squared error between the predicted saliency maps and the human visual explanations, similar to the explanation loss terms in \cite{rieger2020interpretations, schramowski2020making}: \small \begin{equation} L_{explanation} = \frac{1}{N}\sum_{i=1}^{N}(A_i - e_i)^2 \end{equation} \normalsize where $e_i$ is the predicted saliency mask for state $s_i$ and $A_i$ is the preprocessed (resized and stacked) human explanation binary mask, in which 1 means the pixel is highlighted in the human explanation. The policy network is trained with the same losses as DQN-Feedback (the standard DQN loss plus the advantage feedback loss), except that the states are masked with the predicted saliency map in the way depicted in Fig. \ref{fig:agil-architecture}. \subsection{Attention-Align} Unlike Ex-AGIL, Attention-Align aims to leverage human visual explanations without training a separate saliency prediction network. It uses the same mean-squared error loss as Ex-AGIL, but $e_i$ is obtained with an interpretable reinforcement learning method. Here, we choose to use FLS (Free Lunch Saliency) \cite{nikulin2019free}, which produces built-in saliency maps by adding a self-attention module that only allows selected features to pass through. More specifically, to get the agent explanation $e_i$, FLS applies a transposed convolution to the neuron activations of the attention layer.
This allows the explanation loss to be computed more efficiently than with other methods, such as computing the Jacobian of the input image or bilinearly upscaling the attention activations. The Eff. DQN is then jointly trained with the standard DQN loss, the feedback (advantage) loss, and the explanation loss. Note that the weight of the explanation loss is set to 0.1, as suggested in previous works \cite{rieger2020interpretations, schramowski2020making}. \subsection{Context-Agnostic Data Augmentation} We compared EXPAND to two context-agnostic data augmentations, namely random cropping and Gaussian blurring. Random cropping pads the four sides of each 84 $\times$ 84 frame by 4 pixels and then crops back to the original 84 $\times$ 84 size. Gaussian blurring applies a 23 $\times$ 23 square Gaussian kernel with standard deviation sampled uniformly in (2, 10). Note that in EXPAND we use a fixed set of Gaussian filters, which can be more efficient with respect to wall-clock time. We also experimented with context-agnostic Gaussian blurring with fixed filters. Our results show no significant performance difference between fixed and randomly sampled filters. \section{Synthetic Feedback and Explanation} \label{sec:synthetic-feedback-explanation} \begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{images/example-explanation-full.png} \caption{Examples of visual explanations for (left to right) Pixel-Taxi, Pong, Asterix, Enduro-1000, and MR Level 1.} \label{fig:synthetic-feedback} \end{figure} We evaluated EXPAND against the baselines using an oracle. To generate the synthetic visual explanations, we use a hard-coded program to highlight the relevant regions. In Pixel Taxi, the relevant objects are the agent (gray block), the passenger (red block) and the destination (black block). In Pong, we highlight the ball and the two paddles. In Asterix, the closest target(s) and the enemies in nearby lane(s) are highlighted.
In Enduro-1000, we highlight the lanes, the player car, and cars in front of the player. In MR, candidate regions/objects include the agent, monster, key, platforms, and ladders. Certain regions are highlighted depending on the agent's location. Examples of synthetic visual explanations can be found in Fig.~\ref{fig:synthetic-feedback}. \section{Hyper-parameters} \label{sec:hyper-parameters} \begin{itemize} \setlength\itemsep{0em} \item Convolutional channels per layer: [32, 64, 64] \item Convolutional kernel sizes per layer: [8, 4, 3] \item Convolutional strides per layer: [4, 2, 1] \item Convolutional padding per layer: [0, 0, 0] \item Fully connected layer hidden units: [512, number of actions] \item Update interval: 4 \item Discount factor: 0.99 \item Replay buffer size: 50,000 \item Batch size: 64 \item Feedback buffer size: 50,000 \item Feedback batch size: 64 \item Learning rate: 0.0001 \item Optimizer: Adam \item Prioritized replay exponent $\alpha=0.6$ \item Prioritized replay importance sampling exponent $\beta=0.4$ \item Advantage loss margin $l_m=0.05$ \item Rewards: clip to [-1, 1] \item Multi-step returns: $n=5$ \item $\epsilon$ episodic decay factor $\lambda_{\epsilon}$: 0.99 in Pixel Taxi, 0.9 in Atari games \end{itemize} \section{User Study} \label{sec:appendix-user-study} \begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{images/human-interface.png} \caption{Web interface to collect saliency and binary feedback from human trainers. This example uses a frame from Pong.} \label{fig:human-interface} \end{figure} Fig.~\ref{fig:human-interface} shows the web interface used to conduct the user experiment with three graduate students as the human subjects. Before the experiment began, the trainers were briefed about the usage of this interface. The trainers were asked to provide their consent to use any information they provide for any analysis and experiments.
The feedback interface, Fig.~\ref{fig:human-interface}, is divided into two columns: the information pane (left) and the control pane (right). On the information pane, users can adjust the frame rate at which they view the ``video'' made from the state transitions. Users can also view bounding-box-related information, such as the boxes they have drawn and the boxes suggested by the implemented object tracker. On the right pane, users see the frame on which they give their saliency feedback; the agent's current action is shown in the information pane. For their convenience, we provided controls such as \emph{Pause}, \emph{Play}, skip to \emph{Next} frame, go to \emph{Previous} frame, \emph{Clear} all bounding boxes on the frame, \emph{Save} feedback, and \emph{Finish} the current feedback session. To provide binary feedback on the agent's actions, users press the keyboard keys ``A'' for good feedback, ``S'' for bad feedback, and ``D'' for no feedback. They provide saliency explanations by clicking at the required position on the image frame and dragging to create a rectangular selection. \section*{Checklist} \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{} \item Did you describe the limitations of your work? \answerYes{} \item Did you discuss any potential negative societal impacts of your work? \answerYes{} \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{Please see Appendix \ref{appendix:social-impact} for details.} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerNA{} \item Did you include complete proofs of all theoretical results?
\answerNA{} \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerYes{} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{} \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerYes{} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{} \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{} \item Did you mention the license of the assets? \answerNA{} \item Did you include any new assets either in the supplemental material or as a URL? \answerNA{} \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA{} \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA{} \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerYes{} \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerYes{} \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{} \end{enumerate} \end{enumerate}
\section{Introduction} The analysis of binary star systems is a very important part of astrophysics. Calculations of their orbits allow us to directly determine the masses of their components, which in turn lets us estimate other physical parameters. Moreover, the analysis of eclipsing binary systems can provide absolute values for numerous physical parameters of stars, which are essential for testing stellar structure and evolutionary models. In particular, SB2 eclipsing binary systems allow us to directly determine those parameters with the requisite accuracy \citep{torres10}. From both photometric and spectroscopic data, we can measure very accurate distance-independent stellar parameters such as stellar masses, radii, luminosities and effective temperatures. In this paper we present the first determination of the physical and orbital parameters of the double-lined detached eclipsing binary system from the \emph{All Sky Automated Survey} (ASAS) identified as ASAS J180057-2333.8 (hereafter ASAS1800) by \cite{poj02}. The star is also cataloged as TYC 6842-1399-1 and 2MASS J18005707-2333420, and classified as a detached binary system in the ACVS catalogue. Its V magnitude at maximum brightness is 10.19 \citep{poj02} and the amplitude of photometric variations in the $V$-band is 0.47 mag. It has a circular orbit with a period of 269 days. The system contains two evolved giants. It is located in the Galactic disc ($b=-0^{\circ}\!\!.2$) and has not been spectroscopically analyzed before. Such systems are very rarely found \citep[e.g.][]{hel15} in our Galaxy. Although it is relatively young, the binary system is composed of well-detached bright giant stars. Late-type eclipsing binary systems are among the best candidates for distance determinations \citep{pie13, thompson01}.
The precision of our distance determination to this eclipsing binary (total error of $\sim 4\%$) rivals those obtained from interferometric parallaxes of Galactic masers \citep[e.g.][their Table 1]{Xu12} at comparable distances. In this paper we focus on a precise determination of the physical parameters of the system, its distance and space kinematic properties, and a discussion of its evolutionary status. The absolute dimensions of both components are used in the discussion of ASAS1800's evolutionary status. We begin with a presentation of the data collection and analysis, followed by a description of our results. In the last section we present our conclusions. \section{OBSERVATIONS} \subsection{Photometry} For our analysis of ASAS1800 we used the archival $V$-band and $I$-band photometry from the ACVS \citep{poj00}. A total of 887 and 266 measurements were obtained in the $V$-band and the $I$-band, respectively, and the light-curve coverage for this system is complete in both filters. The primary eclipse is total. The time span of the observations is 3189 days (JD 2451949 to JD 2455138) for the $V$-band and 1675 days (JD 2452282 to JD 2453957) for the $I$-band. The magnitude in the $K$-band was taken from the 2MASS catalogue and is K = 5.917 mag \citep{cutri03}. The observation was made at an orbital phase of $\phi$ = 0.125, well separated from either eclipse. \subsection{Spectroscopy} The high-resolution spectra were collected with the ESO 3.6 m telescope at La Silla Observatory, Chile, equipped with the HARPS spectrograph, as well as with the Euler 1.2 m telescope at La Silla, Chile, equipped with the CORALIE spectrograph. The resolution of the CORALIE spectrograph is $\sim$50,000. The HARPS spectrograph was used in the EGGS mode at a resolution of $\sim$80,000. For our analysis we used 14 spectra in total, 12 of which were taken with HARPS and 2 with CORALIE.
\section{ANALYSIS AND RESULTS} In order to derive absolute physical and orbital parameters for the system, we used the Wilson-Devinney code (WD), version 2007 \citep{wil07, wilson71, wilson79, wilson90}, equipped with the automated differential correction (DC) optimizing subroutine and a Monte Carlo simulation package. The WD code allows us to simultaneously solve multiband light curves and radial velocities, which is recommended as the best way to obtain a consistent model of a binary system \citep{wil07}. We also used the RaVeSpAn software written by \cite{pil12} for measuring radial velocities, as well as for the spectrum disentangling used in the further analysis. \subsection{Radial Velocities} The RaVeSpAn code uses the Broadening Function formalism \citep{ruc92, ruc99} to measure the radial velocities of the components of the binary. Templates were selected from the synthetic library of LTE spectra of \citet{coelho05}. We calculated the components' radial velocities over the wavelength range 4360 to 6800 \AA, excluding atmospheric and strong hydrogen lines. The resulting radial velocity curve is presented in Fig.~\ref{fig1}, and the measured radial velocities are presented in Tab.~\ref{tab:velocities}. The components differ in systemic velocity, and we applied a correction of $v_{sys} = -175$ m s$^{-1}$ to the radial velocities of the secondary component in order to obtain an accurate radial velocity solution. Such a shift can be caused either by a differential gravitational redshift between the stars or by large-scale convective motions \citep[e.g.][]{torres09}. We also determined the rotational velocities of both components in order to compare them with the expected synchronous velocities. We determined the rotational velocities using the RaVeSpAn code, by fitting rotationally broadened profiles. The measured broadenings are $v_{M_1}$ = 11.02 $\pm$ 0.16 km s$^{-1}$ and $v_{M_2}$ = 14.84 $\pm$ 0.28 km s$^{-1}$.
To determine $v\sin{i}$ we also had to take into account the macroturbulence and instrumental profile contributions to the measured broadenings. In order to estimate those values we used the relations presented in \citet[][see their Equation 1]{mas08} and \cite{taked08} ($v_{mt}$ = 0.42$\zeta_{RT}$). The measured velocity is linked with the rotational velocity through the relation: \begin{equation} \label{macro} v_M^2 = v_{rot}^2 + (0.42\zeta_{RT})^2 + v_{ip}^2 \end{equation} where $\zeta_{RT}$ is the radial-tangential macroturbulence and $v_{ip}$ is the instrumental profile broadening. We estimated the macroturbulence to be $v_{mt1}$ = 3.20 km s$^{-1}$ and $v_{mt2}$ = 2.61 km s$^{-1}$. The instrumental profile broadening was estimated to be $v_{ip}$ = 2.25 km s$^{-1}$, assuming the resolution of the HARPS spectrograph in EGGS mode to be R = 80 000 and using a relation given in \citet{taked08}. With these assumptions, we estimated the rotational velocities to be $v_1\sin{i}$ = 10.31 $\pm$ 1.16 km s$^{-1}$ and $v_2\sin{i}$ = 14.44 $\pm$ 1.28 km s$^{-1}$. Assuming that the rotation axes are perpendicular to the orbit and that the rotation is synchronized with the orbital motion, the expected equatorial rotational velocities are $v_1$ = 9.79 km s$^{-1}$ and $v_2$ = 12.71 km s$^{-1}$. We calculated $v$ using the formula: \begin{equation} \label{v_rot} v = \frac{2 \pi R}{P} \end{equation} where $R$ is the radius of a component and $P$ is the orbital period of the system. The measured rotational velocities are consistent with the expected synchronous velocities to within 0.5$\sigma$ and 1.4$\sigma$ for the primary and secondary components, respectively. We conclude that the components rotate synchronously. \begin{table} \caption{Radial velocity measurements.
Typical uncertainty is 40 m s$^{-1}$.} \centering \begin{tabular}{p{0.22\linewidth}p{0.19\linewidth}p{0.2\linewidth}p{0.19\linewidth}} \hline HJD & V$_1$ & V$_2$ &Instrument\\ --2450000 & (km s$^{-1}$) & (km s$^{-1}$) &\\ \hline \hline 5448.48918 & -36.806 & 1.770 &HARPS\\ 5449.55107 & -37.554 & 2.520 &HARPS\\ 5467.51220 & -47.416 & 12.664 &HARPS\\ 5470.49095 & -48.694 & 13.910 &HARPS\\ 5478.53899 & -51.214 & 16.369 &HARPS\\ 5479.51462 & -51.498 & 16.601 &HARPS\\ 5499.50211 & -51.982 & 17.304 &CORALIE\\ 5500.49960 & -51.831 & 17.005 &CORALIE\\ 5502.51109 & -51.492 & 16.653 &HARPS\\ 6214.50228 & -3.458 & -31.415 &HARPS\\ 6448.68949 & 15.524 & -50.835 &HARPS\\ 6449.83022 & 15.196 & -50.472 &HARPS\\ 6450.84722 & 14.856 & -50.261 &HARPS\\ 6553.63729 & -50.404 & 15.572 &HARPS\\ \hline \end{tabular} \centering \label{tab:velocities} \end{table} \begin{figure} \centering \includegraphics[width=84mm] {asas1800_velt.eps} \caption{Radial velocity curve solution for ASAS1800 from the WD code. Filled black circles represent measurements of the primary, and empty circles measurements of the secondary. The empty triangle stands for the measurement of the secondary observed at HJD 2456214.50228, which was not taken into account.} \label{fig1} \end{figure} \subsection{Spectral disentangling and atmospheric analysis} \label{sec:dis} The spectral disentangling was done using the method outlined by \cite{gon06}. We used the two-step method described in detail in \cite{gra14} to derive properly renormalized disentangled spectra. These spectra were used for deriving the basic atmospheric parameters: the effective temperature $T_{\rm eff}$, gravity $\log{g}$, microturbulence $v_t$, and metallicity [Fe/H], assuming local thermodynamic equilibrium and using the program MOOG \citep{sne73}. Details of the method are given in \cite{mar08} and the line list in \cite{vil10}. Values of these parameters are presented in Tab.~\ref{tab.atmo}.
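As a numerical cross-check, the quoted $v\sin{i}$ values follow from inverting Eq.~(\ref{macro}) with the measured broadenings, macroturbulent velocities, and instrumental profile given above (an illustrative sketch, not the original analysis code):

```python
import math

V_IP = 2.25  # instrumental profile broadening in km/s (HARPS, EGGS mode)

def v_sin_i(v_measured, v_mt, v_ip=V_IP):
    """Invert v_M^2 = v_rot^2 + (0.42*zeta_RT)^2 + v_ip^2, where
    v_mt = 0.42*zeta_RT is the macroturbulent broadening (all in km/s)."""
    return math.sqrt(v_measured**2 - v_mt**2 - v_ip**2)

v1 = v_sin_i(11.02, 3.20)  # primary:   ~10.3 km/s
v2 = v_sin_i(14.84, 2.61)  # secondary: ~14.4 km/s
```

The results agree with the quoted 10.31 and 14.44 km s$^{-1}$ to within rounding.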
The derived effective temperatures of both components were used in the WD model and to calculate the interstellar reddening. \begin{table} \caption{Atmospheric parameters of the components.} \begin{tabular}{p{0.22\linewidth}p{0.19\linewidth}p{0.2\linewidth}p{0.19\linewidth}} \hline Component & $T_{eff}$ [K] &[Fe/H] & log$g$\\ \hline Primary & 4535 $\pm$ 70 & -0.14 $\pm$ 0.1 & 1.88\\ Secondary & 4240 $\pm$ 70 & -0.27 $\pm$ 0.1 & 1.93\\ \hline \end{tabular} \centering \label{tab.atmo} \end{table} Taking into account the metallicity determination uncertainty and the additional inaccuracies connected with spectral disentangling and renormalization, the difference between the metallicities of the two components can be neglected (Tab.~\ref{tab.atmo}). Therefore, we can assume that the components have a common metallicity within the margin of error. In our analysis of the evolutionary status of ASAS1800 we assumed the metallicity of the system to be equal to the metallicity of the primary component, [Fe/H] = $-0.14$ dex (see Section~\ref{evolutionary}). \subsection{Interstellar extinction} \label{sec:ext} We estimated the reddening based on several calibrations of the $T_{\rm eff}$ - $(V - K)$ colour relation \citep{ben98,alo99,hou00,ram05,gon09,cas10,mas06}, using the values of $T_{\rm eff}$ presented in Tab.~\ref{tab.atmo}. We determined E(B - V) = 0.525 $\pm$ 0.035 mag. The errors on E(B - V) result from the accuracy of our effective temperature determination (see Tab.~\ref{tab.atmo}) and from the accuracy of the adopted effective temperature - colour calibrations. We also determined the interstellar extinction from the calibration of the effective temperature - $(V - I)$ colour relation. From the effective temperature - $(V - I)$ colour calibrations of \cite{wor11} we estimated the intrinsic $(V - I)$ colours of each component. These colours were then compared with the observed colours of the components to estimate the E(V - I) extinction.
We estimated the reddening E(V - I) for both components and then transformed it to E(B - V) using: \begin{equation} \label{red} E(B-V)=\frac{E(V-I)}{1.399} \end{equation} The mean value of the reddening was E(B - V) = 0.609 $\pm$ 0.042. Finally, we used the extinction maps of \cite{schle98}, with the recalibration of \cite{schla11}, to estimate the reddening in the direction of ASAS1800. The total foreground reddening in this direction is E(B - V) = 26.597 mag. Since the reddening to ASAS1800 is only a fraction of this number, we have to assume a distribution of dust within the Milky Way and know the distance to our system. A simple axisymmetric model of the exponential disc gives the density of matter within the Galaxy: \begin{equation} \label{ro} \rho(r, z) = \rho_0 \exp (-r/ r_d -|z|/z_d) \end{equation} where $r_d$ and $z_d$ are the disc scale length and height, respectively. We adopted the following values from \cite{drim01}: the Sun's height above the Galactic plane $h_0$ = 0.015 kpc, $r_d$ = 3.2 kpc and $z_d$ = 0.135 kpc. The Galactic coordinates of ASAS1800 are $l = 6.37^\circ$ and $b = -0.23^\circ$. Moreover, we assumed the solar distance to the Milky Way centre to be $R_0$ = 8.3 kpc \citep{gil09} and the distance to ASAS1800 to be D = 2.16 kpc (Section~\ref{distance}). We assumed a Milky Way disc truncation at $D_{outer}$ = 20 kpc. Writing $r$ and $z$ as functions of the distance $d$ from the Sun in the direction of ASAS1800, we obtain $r(d) = \sqrt{R_0^2 + d^2 - 2R_0d\cos l}$ and $z(d) = |h_0 + d \sin b|$. Substituting these functions into Eq.~\ref{ro} we obtain the relation $\rho = \rho(d)$. We numerically integrated this relation along the line of sight twice: from 0 to D, corresponding to the reddening of the eclipsing binary, and from 0 to $D_{outer}$, corresponding to the foreground reddening. The ratio of the two integrals, scaled by the foreground reddening, gives E(B - V)$_{ASAS1800}$ = 0.440 $\pm$ 0.057 mag. We adopt a final value of E(B - V) = 0.52 $\pm$ 0.07, the mean value from all our estimates.
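The sight-line integration described above can be sketched with a simple trapezoidal quadrature (our own illustration of the method, using the parameter values quoted in the text; the resulting fraction is sensitive to the adopted disc parameters and truncation):

```python
import math

# disc-model parameters from the text (all distances in kpc)
R0, RD, ZD, H0 = 8.3, 3.2, 0.135, 0.015
L, B = math.radians(6.37), math.radians(-0.23)
D, D_OUTER = 2.16, 20.0

def rho(d):
    """Exponential-disc density (up to rho_0) at distance d along the sight line."""
    r = math.sqrt(R0**2 + d**2 - 2 * R0 * d * math.cos(L))
    z = abs(H0 + d * math.sin(B))
    return math.exp(-r / RD - z / ZD)

def integral(dmax, n=20000):
    """Trapezoidal integral of rho from 0 to dmax."""
    h = dmax / n
    s = 0.5 * (rho(0.0) + rho(dmax)) + sum(rho(i * h) for i in range(1, n))
    return s * h

# fraction of the sight-line dust column lying in front of ASAS1800
fraction = integral(D) / integral(D_OUTER)
```

Scaling the total foreground reddening by this fraction gives the reddening in front of the binary.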
The error is a combination of both statistical and systematic errors, dominated by the statistical error. \subsection{Modeling} The WD code is based on Roche lobe geometry and employs a sophisticated treatment of stellar surface physics. It fits a geometric model of a detached eclipsing binary to a light curve in order to establish the parameters of the system and its components. The orbital period and the moment of primary minimum were derived from the ACVS data. We measure P = 269.363 days and $T_0$ = 2452728.5. The moment of the primary minimum (T$_0$) was later adjusted during further analysis. The average out-of-eclipse magnitudes were established taking into account all of the observational data outside of the minima. We measure V = 10.319 mag and I = 8.231 mag. Since the primary eclipse is total, we were also able to directly determine the magnitudes of the components: V$_S$=11.097 and I$_S$=8.935 for the secondary, and V$_P$ = 11.061 and I$_P$ = 9.071 for the primary. We refer to the primary component as the star which is being eclipsed in the deeper, primary minimum. We simultaneously fitted two light curves, in the $I$-band and $V$-band, as well as the radial velocity curves. The input parameters for the DC subroutine were chosen as described in \cite{gra12}. When using the Wilson-Devinney code, it is important to carefully define which parameters are adjustable in order to arrive at the best-fitting model. In our analysis we decided to adjust the orbital semi-major axis ($a$), systematic radial velocity ($\gamma$), the orbital inclination ($i$), the average surface temperature of the secondary component ($T_2$), the modified surface potential of both components ($\Omega_1$, $\Omega_2$), the mass ratio ($q = M_2/M_1$), time of the primary minimum ($T_0$), the observed orbital period ($P_{obs}$), and the relative luminosity of the primary component in the two bands ($L1_V$, $L1_I$).
To set the effective temperature scale for each component, we ran the WD code with an initially assumed temperature of T$_1$ = 4700 K, corresponding to a spectral type of K0 III. From the preliminary solutions obtained from the WD code we derived approximate surface gravities for the components of the binary of log $g_1$ = 1.7 and log $g_2$ = 1.4, as well as luminosity ratios in the $V$, $I$ and $K$-bands. The resulting luminosity ratios together with the reddening E(B-V) (Section~\ref{sec:ext}) were used to obtain the dereddened $(V - K)$ colour index. The 2MASS magnitudes were converted to the Johnson photometric system using updated transformation equations from \cite{car01}\footnote{\texttt{http://www.astro.caltech.edu/$\sim$jmc/2mass/v3/\\transformations/}} and \cite{bes88} (Tab.~\ref{tab:converted}). Knowing the approximate log $g_1$ = 1.7, the dereddened $(V - K)$ = 2.61 and assuming [Fe/H] = 0 dex, we were able to estimate preliminary effective temperatures of the components based on the calibrations given by \cite{wor11}. The resulting effective temperature was set as the temperature of the primary, $T_1$ = 4550 K. We used this value as a starting point for our analysis and then iterated to find the best solution for both the $V$-band and $I$-band light curves using the LC subroutine of the WD code. All free parameters were adjusted at the same time. The albedo and gravity brightening parameters were set to 0.5 and 0.32 respectively, which are appropriate values for this kind of star \citep{lucy67}. To compute the limb darkening coefficients we used the logarithmic law of \cite{kling70}. These coefficients were calculated internally by the WD code during each iteration of DC using tabulated data computed by \cite{van93}. Additionally, we calculated models using the linear and square-root limb darkening laws. However, that resulted in a slightly worse fit to the light curves, changing the stellar parameters of the system by less than 0.5\%.
Thus, we adopted the solution obtained with fixed coefficients of the logarithmic limb darkening law \citep{pie13}. At the end of the fitting procedure we additionally adjusted the third light ($I_3$) to determine its impact on the solution. Formally, the solution suggested an unphysical value for $I_3$, and we therefore set $I_3$ = 0 in our final solution. The solution, especially the luminosity ratio of the components, was used to renormalize the disentangled spectra. Subsequently, the atmospheric analysis was performed in order to obtain a better estimation of the temperatures of the components and their metallicities (see Section~\ref{sec:dis}). We derived effective temperatures of $T_1$ = 4535 $\pm$ 70 K and $T_2$ = 4240 $\pm$ 70 K and metallicities of [Fe/H]$_1$ = -0.14 $\pm$ 0.1 dex and [Fe/H]$_2$ = -0.27 $\pm$ 0.1 dex. We then adopted $T_1$ as the new effective temperature of the primary component and we repeated the fitting using the DC subroutine of the WD code. \begin{figure} \centering \includegraphics[width=84mm] {asas1800_liti.eps} \caption{The $I$-band light curve of ASAS1800 together with the solution from the WD code.} \label{fig2} \end{figure} \begin{figure} \centering \includegraphics[width=84mm] {asas1800-litV.eps} \caption{The $V$-band light curve of ASAS1800 together with the solution from the WD code. } \label{fig3} \end{figure} The $I$-band light curve solution obtained with the WD code is presented in Figure~\ref{fig2}, the $V$-band solution in Figure~\ref{fig3}, and the parameters are summarized in Table~\ref{table:results}. \begin{table} \caption{Photometric and orbital parameters obtained with the Wilson--Devinney code.} \begin{tabular}{p{0.63\linewidth}p{0.26\linewidth}} \hline Parameter & WD result \\ \hline \hline Orbital inclination $i$ (deg) & 88.67 $\pm$ 0.21 \\ Orbital eccentricity $e$ & 0.0 (fixed)\\ Sec.
temperature $T_2$ (K) & 4211 $\pm$ 13\\ Fractional radius $r_1$ & 0.1387 $\pm$ 0.0020 \\ Fractional radius $r_2$ & 0.1800 $\pm$ 0.0012\\ $(r_1+r_2)$ & 0.3187 $\pm$ 0.0012\\ $k = r_2/r_1$ & 1.2976 $\pm$ 0.0104\\ Observed period $P_{obs}$ (day) & 269.496 $\pm$ 0.014\\ $(L2/L1)_V$ & 0.9850 $\pm$ 0.0054\\ $(L2/L1)_I$ & 1.1650 $\pm$ 0.0072 \\ $(L2/L1)_J$ & 1.3379\\ $(L2/L1)_K$ & 1.5249\\ $T_0$ (JD-2450000) & 2728.82 $\pm$ 0.06\\ Semimajor axis $a$ (R$_\odot$) & 375.72 $\pm$ 0.37\\ Systemic velocity $\gamma$ (km s$^{-1}$) & $-$17.625 $\pm$ 0.021\\ Prim. velocity semi-amplitude $K_1$ (km s$^{-1}$) & 35.11 $\pm$ 0.10\\ Sec. velocity semi-amplitude $K_2$ (km s$^{-1}$) & 35.38 $\pm$ 0.10\\ Mass ratio $q$ & 0.992 $\pm$ 0.003 \\ RV rms$_1$ (km s$^{-1}$) & 0.056\\ RV rms$_2$ (km s$^{-1}$) & 0.042\\ \hline \end{tabular} \label{table:results} \end{table} \begin{table} \caption{Physical properties of ASAS1800.} \centering \begin{tabular}{p{0.3\linewidth}p{0.30\linewidth}p{0.24\linewidth}} \hline Property & The Primary & The Secondary \\ \hline \hline Spectral type & K1 II & K4 II\\ $V^a$ (mag) & 11.061 & 11.098 \\ $V\!-\!I^a$ (mag) & 1.991 & 2.162 \\ $V\!-\!K^a$ (mag) & 4.124 & 4.598 \\ $J\!-\!K^a$ (mag) & 1.087 & 1.229 \\ Radius ($R_{\odot}$) & 52.12 $\pm$ 1.38 & 67.63 $\pm$ 1.40\\ Mass ($M_{\odot}$) & 4.914 $\pm$ 0.021 & 4.875 $\pm$ 0.021\\ log $g$ (cgs) & 1.696 $\pm$ 0.023 & 1.466 $\pm$ 0.018 \\ $T_{\rm eff}$ (K) & 4535$^b$ $\pm$ 80 & 4211$^c$ $\pm$ 80 \\ $v \sin i$ (km s$^{-1}$) & 10.31 $\pm$ 1.16 & 14.44 $\pm$ 1.28 \\ Luminosity ($L_{\odot}$) & 1031 $\pm$ 91 & 1290 $\pm$ 111\\ $M_{\rm bol}$ (mag) & $-$2.80 & $-$3.05\\ $M_{\rm v}$ (mag) & $-$2.33 & $-$2.32 \\ $[$Fe/H$]^b$ & $-$0.14 $\pm$ 0.1 & $-$0.27 $\pm$ 0.1\\ \hline $E(B\!-\!V)$ & 0.525 $\pm$ 0.07 \\ Distance (pc) & 2142.5 $\pm$ 63.5 (stat.) & $\pm$ 53.3 (syst.)
\\ \hline \multicolumn{3}{l}{$^a$ -- observed; $^b$ -- atmospheric analysis; $^c$ -- WD solution} \end{tabular} \centering \label{tab:results_final} \end{table} \begin{table*} \centering \caption{Error budget of the distance modulus of ASAS1800.} \begin{tabular}{@{}ccccccclcc@{}} \hline Type of error & $(m - M)$ & $\sigma$A & $\sigma$(MonteCarlo) & $\sigma$diBenedetto & $\sigma E(B-V)$ & $\sigma V$ & $\sigma K$ & $(L_2/L_1)_K$ & Combined Error \\ &(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag) &(mag) \\ \hline \hline \textbf{Statistical}&11.655&0.003&0.049&--&0.024$^1$&0.0241&0.019& -&\textbf{0.063}\\ \textbf{Systematic} &11.655&--&--&0.043&--&\multicolumn{2}{c}{0.03}&0.01&\textbf{0.053} \\ \hline \multicolumn{10}{l}{$^1$ -- combination of statistical and systematic errors} \end{tabular} \centering \label{tab:error} \end{table*} \subsection{Absolute Dimensions} Table~\ref{tab:results_final} gives astrophysical data about the two components. The physical radii of the stars result from the relation $R = r \cdot a$, where $r$ is the fractional radius listed in Table~\ref{table:results}. The masses are derived from the equations: \begin{equation} M_1\,[M_\odot] = 1.34068 \cdot 10^{-2}\, \frac{1}{1+q}\,\frac{a^3\,[R_\odot]}{P^2\,[{\rm d}]} \end{equation} \begin{equation} M_2\,[M_\odot] = M_1 \cdot q \end{equation} where $a$ is the semi-major axis, $q$ is the mass ratio, and $P$ is the orbital period. The observed individual magnitudes in the $V$- and $I$-bands, and hence the $V\!-\!I$ colours, were derived directly from the flat-bottomed minimum, during which only the secondary component contributes light. We used bolometric corrections from \cite{alo99} to convert $V$-band magnitudes into bolometric magnitudes.
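The mass determination can be checked directly against Kepler's third law. In the sketch below the numerical coefficient is derived from physical constants (the adopted constant values are assumptions) rather than copied from the text, and the orbital elements are those of Table~\ref{table:results}:

```python
import math

# Physical constants (SI); assumed nominal values
G     = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
R_SUN = 6.957e8          # m
DAY   = 86400.0          # s

# Kepler's third law rewritten for a in solar radii and P in days:
# M_tot [M_sun] = C * a^3 / P^2
C = 4.0 * math.pi**2 * R_SUN**3 / (G * M_SUN * DAY**2)

a, P, q = 375.72, 269.496, 0.992   # semi-major axis [R_sun], period [d], mass ratio
M1 = C * a**3 / (P**2 * (1.0 + q))
M2 = q * M1
```

The result reproduces the tabulated masses (4.914 and 4.875 $M_\odot$) to within a fraction of a per cent, the small difference reflecting the exact constants adopted.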
\begin{table*} \centering \caption{Out-of-eclipse magnitudes of ASAS1800.} \begin{tabular}{@{}ccccc@{}} \hline & V (mag) & I (mag)& J (mag) & K (mag) \\ \hline \hline ASAS1800 & 10.319 $\pm$ 0.024 & 8.231 $\pm$ 0.015 & 7.178 $\pm$ 0.030$^1$ & 5.942 $\pm$ 0.030$^1$ \\ Reference & this work (ASAS) & this work (ASAS) & \cite{cutri03} (2MASS) & \cite{cutri03} (2MASS) \\ \hline \multicolumn{5}{r}{$^1$ -- transformed to the Johnson photometric system using equations from \cite{bes88} and \cite{car01}} \end{tabular} \centering \label{tab:converted} \end{table*} \subsection{Evolutionary status of ASAS1800} \label{evolutionary} In this section we compare the physical parameters of ASAS1800 (Tab.~\ref{tab:results_final}) with the results of stellar evolution calculations. We assume that the components of the system have a common metallicity, equal to the metallicity of the primary ($-0.14$), or possibly $1\sigma$ higher ($-0.04$). As we show below, lower metallicities lead to serious disagreement between the models and observations. In this initial study we also assume that the masses of ASAS1800, as determined in Tab.~\ref{tab:results_final}, are exact. In Fig.~\ref{fig.parsec} we plot the PARSEC isochrones \citep{bres12} for the two metallicities considered. Each isochrone was selected to minimize a $\chi^2$ function including the luminosities, effective temperatures and radii of the two components. Model values were calculated at the mass points corresponding to the masses of the ASAS1800 components (filled circle and filled square for primary/secondary in Fig.~\ref{fig.parsec}). The comparison with the PARSEC isochrones places the system at the early phase of core helium burning. The agreement with the isochrones is not satisfactory, however. It is better for the higher metallicity, but the primary component is still under-luminous and the secondary component too cool and/or under-luminous.
We note, however, that the PARSEC isochrones are available for only one fixed set of overshooting parameters, which strongly affect the evolutionary calculations. Below, we express the extent of overshooting as a fraction of the local pressure scale height, $\beta\times H_p$. Both the overshooting from the hydrogen-burning core during main sequence evolution ($\beta_{\rm H}$) and the overshooting from the convective envelope ($\beta_{\rm env}$) affect the extent and the luminosity of the helium burning loops \citep{alongi91}. In the PARSEC models, in the mass range considered, these are fixed at $\beta_{\rm H}=0.5$ and $\beta_{\rm env}=0.7$ {\it across} the border of the convective zone determined with the Schwarzschild criterion. In the calculations described below, the extent of overshooting is measured {\it above/below} the border of the convective region, which is a more common approach. The resulting overshoot parameters roughly correspond to half of those adopted in PARSEC \cite[see discussion in section 2.6 in][]{bres12}. To explore the effect of the overshooting on stellar evolutionary tracks we used \textsf{MESA star} -- a publicly available stellar evolution code \cite[release 6208;][]{pax11,pax13}. Details of the code setup will be described elsewhere (Smolec et al., in prep.); here we summarize the most important settings. We use OPAL opacities and adopt the \cite{asp09} solar distribution of heavy elements. Convection is modelled with the mixing length formalism \citep{boh58} with the mixing length parameter resulting from a calibration of the solar model ($\alpha=1.78$). The convective boundaries are determined with the Schwarzschild criterion. We account for overshooting above the border of the hydrogen-burning core, above the border of the helium-burning core ($\beta_{\rm He}=0.01$, fixed), and below the border of the convective envelope. We neglect rotation, element diffusion (except in the solar calibration), and mass loss.
For each component of ASAS1800 we fix the mass (Tab.~\ref{tab:results_final}) and compute the evolution from the pre-main sequence until the late AGB phase. Our small model grid consists of two metallicity values, $-0.14$ and $-0.04$, eight values of $\beta_{\rm H}$, $\beta_{\rm H}\in[0.1,\ 0.12,\ 0.14,\ 0.16,\ 0.18,\ 0.20,\ 0.22,\ 0.24]$, and two values of $\beta_{\rm env}$, $\beta_{\rm env}\in[0,\ 0.35]$. Along each pair of tracks (for the two components) we determined the models that minimize the $\chi^2$ function, including the effective temperatures, luminosities and radii of the two components, at the same age. We first assume that the two components experienced the same extent of mixing at the edge of the hydrogen-burning core during their evolution, and hence have the same value of $\beta_{\rm H}$, and then allow for a difference in $\beta_{\rm H}$. In Figs.~\ref{fig.mesa1} and \ref{fig.mesa2} we show our best solutions for the two assumptions described. In the four panels of these figures we show the models with (top) and without (bottom) overshooting from the convective envelope ($\beta_{\rm env}=0.35$ or $\beta_{\rm env}=0$) and adopting the lower (left) and higher (right) metallicity values (${\rm [Fe/H]}=-0.14$ or ${\rm [Fe/H]}=-0.04$). We first analyze the models assuming the same values of $\beta_{\rm H}$ for the primary and secondary, Fig.~\ref{fig.mesa1}. For the higher metallicity (right panels) the helium burning loops become less luminous and the tracks shift toward lower effective temperatures. Consequently, the higher metallicity (${\rm [Fe/H]}=-0.04$), together with the smaller extent of overshooting from the hydrogen-burning core, mitigates the problems of an under-luminous primary and a too cool/under-luminous secondary, noted in the analysis of the PARSEC isochrones. The inclusion of the envelope overshoot in the models has two apparent effects on the tracks. It increases the vertical extent of the loops and decreases their overall luminosity.
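The grid search described above -- picking, along each pair of tracks, the pair of models at a common age that minimizes the joint $\chi^2$ -- can be sketched with toy tracks (the track values below are illustrative, not actual MESA output; the observables and errors mirror Tab.~\ref{tab:results_final}):

```python
def chi2(model, obs, err):
    """Error-weighted sum of squared residuals over (Teff, L, R)."""
    return sum(((m - o) / e) ** 2 for m, o, e in zip(model, obs, err))

def best_common_age(track1, track2, obs1, obs2, err):
    """Each track maps age -> (Teff, L, R); minimize the joint chi^2 of
    both components evaluated at the same age."""
    ages = sorted(set(track1) & set(track2))
    return min(ages, key=lambda t: chi2(track1[t], obs1, err)
                                   + chi2(track2[t], obs2, err))

# Toy tracks sampled at three common ages (hypothetical numbers)
track_p = {100: (4600, 950, 48.0), 101: (4535, 1030, 52.1), 102: (4450, 1120, 57.0)}
track_s = {100: (4350, 1180, 63.0), 101: (4211, 1290, 67.6), 102: (4080, 1420, 73.0)}
obs_p, obs_s = (4535, 1031, 52.12), (4211, 1290, 67.63)
errors = (80, 100, 1.4)

best_age = best_common_age(track_p, track_s, obs_p, obs_s, errors)
```

In the real grid the tracks are finely sampled in age, but the selection logic is the same.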
Hence, in the models including envelope overshooting, a larger extent of overshooting at the hydrogen-burning core is possible. We note that the matter that overshoots the convective envelope boundary faces a stabilizing stratification gradient. Whether significant mixing is possible in such a case is a subject of debate \citep{bres12,pietri04}. Modelling of evolved binary systems offers the best opportunity to test mixing scenarios at the bottom of the convective envelope. The best models displayed in Fig.~\ref{fig.mesa1} are those with overshooting from the convective envelope ($\beta_{\rm env}=0.35$) and $\beta_{\rm H}=0.18$ (Fig.~\ref{fig.mesa1}, top). The solutions assuming $\beta_{\rm H}=0.16$ are only slightly worse. When envelope overshoot is neglected, lower values of $\beta_{\rm H}$ ($\approx 0.14-0.16$) are necessary. Clearly, the best models have the higher metallicity. The inferred age of the system, given in Fig.~\ref{fig.mesa1}, is very similar for all models: log(age) lies within a narrow range from 8.007 to 8.021. As expected, the age is slightly larger for the higher metallicity models (see e.g. \citealt{salaris06}). Also, the larger the extent of overshooting from the hydrogen-burning core, the longer the main-sequence evolution. We note that for the best models in Fig.~\ref{fig.mesa1} we nearly match the luminosity of the secondary, but that the primary component is still {\it before} the observed position. Hence, we can get a much better agreement between the models and observations by assuming different values of overshooting from the hydrogen-burning core during the main-sequence evolution. A higher value of $\beta_{\rm H}$ for the secondary slows down its evolution (extends the main-sequence phase) and allows a much better match of the system with observations at the helium burning phase -- Fig.~\ref{fig.mesa2}. An overshooting parameter higher by $0.02-0.04$ for the secondary allows very good fits.
The best fits are obtained for the higher metallicity models (as in Fig.~\ref{fig.mesa1}). This time the models that neglect the overshooting from the convective envelope seem slightly better. The inferred ages are very similar to those reported in Fig.~\ref{fig.mesa1}. Our model that matches the observations best (Fig.~\ref{fig.mesa2}, bottom right) assumes $\beta_{\rm H}=0.16$ for the primary and $\beta_{\rm H}=0.18$ for the secondary and places the system at the early helium burning phase. The different extent of overshooting adopted for the components of the eclipsing binary system, with nearly equal masses and metallicities of the components, might appear unjustified. We note, however, that an overshooting parameter expresses our ignorance about all kinds of mixing processes that may occur at the edge of the convective core. In particular, rotation leads to additional mixing, the extent and efficiency of which depend on the rotation rate. We see no reason to assume that the initial rotation rate was the same for the two stars. We conclude that ASAS1800 is at an early phase of core helium burning. The age of the system is slightly larger than 100 million years. The models favour a metallicity close to the solar value. \begin{figure*}[p] \centering \includegraphics[width=12.8cm]{solHRisoONLY.eps} \caption{PARSEC isochrones for two metallicities, ${\rm [Fe/H]}=-0.14$ (left panel) and ${\rm [Fe/H]}=-0.04$ (right panel). The locations of the primary and secondary components are marked with circles and squares, respectively. Filled symbols refer to the fitted isochrones and the empty ones to our measurements. } \label{fig.parsec} \end{figure*} \begin{figure*}[p] \centering \includegraphics[width=12.2cm]{solHRpaper_eq.eps} \caption{MESA tracks computed for the primary ($M=4.914M_\odot$, red, solid line) and secondary ($M=4.875M_\odot$, blue, dashed line) components of ASAS1800, assuming the same values of overshooting from the hydrogen burning core for the primary and the secondary.
Models that match the observational constraints best (at the same age) are marked with filled circles/squares for the primary/secondary. Models in the top two panels include convective envelope overshoot (neglected in the bottom panels). Metallicity is equal to ${\rm [Fe/H]}=-0.14$ (left panels) or ${\rm [Fe/H]}=-0.04$ (right panels).} \label{fig.mesa1} \end{figure*} \begin{figure*}[p] \centering \includegraphics[width=12.2cm]{solHRpaper_diff.eps} \caption{The same as Fig.~\ref{fig.mesa1} but for models assuming different values of overshooting from the hydrogen burning core for the primary and secondary.} \label{fig.mesa2} \end{figure*} \subsection{Tidal evolution of the system} \label{tidal} The observations indicate that the orbit of ASAS1800 is circular and that the rotation of both components is fully synchronized with the orbital period, meaning that any memory of the initial values of these parameters has by now been entirely erased. To check whether this agrees with the predictions of the tidal evolution theory of binary stars, we assume that the binary evolves as an isolated system with conserved total mass and angular momentum. The tidal circularization and synchronization time scales are (\citealt{zahn89}, \citealt{mei06}) \begin{equation} \tau_{\rm{circ}} = \frac{t_f}{21\lambda_{\rm{circ}}q(1+q)}\left(\frac{a}{R}\right)^8\,, \end{equation} \begin{equation} \tau_{\rm{sync}} = \frac{It_f}{6\lambda_{\rm{sync}}q^2MR^2}\left(\frac{a}{R}\right)^6\,, \end{equation} where $M$, $R$ and $I$ are respectively the mass, radius and moment of inertia of the tidally distorted component, $q$ is the mass ratio (with $M$ in the denominator), $t_f$ is the viscous dissipation time, and $\lambda_{\rm{circ}}$ and $\lambda_{\rm{sync}}$ are constants that depend on the internal structure of a star. Both time scales depend on a high power of the ratio of the stellar separation to the stellar radius.
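Dividing the two expressions above, $t_f$ and the $\lambda$ constants cancel (assuming $\lambda_{\rm circ} \approx \lambda_{\rm sync}$), and the ratio of the time scales reduces to a simple function of $a/R$. A sketch with $I \approx 0.15MR^2$:

```python
def timescale_ratio(a_over_R, q=1.0, i_factor=0.15):
    """tau_circ / tau_sync for I = i_factor * M R^2, assuming
    lambda_circ ~ lambda_sync (t_f cancels in the ratio)."""
    return 6.0 * q * a_over_R**2 / (21.0 * (1.0 + q) * i_factor)

# Secondary component: a = 375.72 R_sun, R = 67.63 R_sun
ratio = timescale_ratio(375.72 / 67.63)
```

For the secondary this gives a ratio of about 29; the smaller primary, with its larger $a/R$, gives an even larger value.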
This ratio was of the order of $10^2$ when the binary was on the main sequence (MS), resulting in both time scales being much longer than the MS lifetime of the components. Thus, the mutual tidal interaction at that evolutionary stage can be neglected. A substantially stronger interaction is expected once both components have moved to the red giant region. Presently, they are both past the red giant tip, burning helium in their cores. We note that the ratio of the two time scales for our binary is $\tau_{\rm{circ}}/\tau_{\rm{sync}} \approx 25-30$ for $q \approx 1, \lambda_{\rm{circ}} \approx \lambda_{\rm{sync}}$ and $I \approx 0.15MR^2$ \citep{rut88}. That means that by the time the orbit becomes circularized, the components already rotate synchronously. To follow the eccentricity change, detailed calculations of the circularization rate for an evolving binary are needed. Such calculations have been performed by several authors for different kinds of systems, and upper limits for the periods of fully circularized binaries were obtained. We use the data from the paper by \citet{ver95}, who calculated the limiting period values for binaries composed of giants. For giant masses corresponding to ASAS1800 the limiting period is equal to 616 d (see their Table~1). Because the period of ASAS1800, equal to 269 d, is significantly shorter than that value, we can conclude that the zero eccentricity of its orbit is to be expected. This conclusion can additionally be verified by a direct estimate of the absolute value of $\tau_{\rm{circ}}$. For this purpose we use an approximation given by \citet{ver95} \begin{equation} \frac{1}{\tau_{\rm{circ}}} \equiv \left|\frac{\rm{d}\ln{e}}{\rm{d}t}\right| \approx 3.4f\left(\frac{T_e}{4500}\right)^{4/3}M_{env}^{2/3}M^{-1}\left(\frac{R}{a}\right)^8 \quad \rm{yr}^{-1}\,. \end{equation} We assumed $q \approx 1$.
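Evaluating this approximation numerically with the representative values adopted in the text ($f \approx 1$, $M_{env} \approx M$, $R/a \approx 0.2$, $T_e \approx (T_1+T_2)/2$) gives an order-of-magnitude check; halving the single-component result to account for tides raised on both components is our reading of the procedure:

```python
def tau_circ_years(Te, M, M_env, R_over_a, f=1.0):
    """Circularization timescale [yr] from the Verbunt & Phinney
    approximation above (valid for q ~ 1); masses in solar units."""
    rate = (3.4 * f * (Te / 4500.0) ** (4.0 / 3.0)
            * M_env ** (2.0 / 3.0) / M * R_over_a ** 8)
    return 1.0 / rate

tau_one  = tau_circ_years(Te=(4535 + 4211) / 2, M=4.9, M_env=4.9, R_over_a=0.2)
tau_both = tau_one / 2.0   # tides on both components act simultaneously
```

The result is close to $10^5$ yr, consistent with the estimate quoted in the text.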
If we additionally assume that the characteristic value of $R/a \approx 0.2$ over the giant phase, the convective envelope mass $M_{env} \approx M$, which is a good approximation for first-ascent giants, $T_e \approx (T_1+T_2)/2$ and $f \approx 1$ \citep{zahn89}, we obtain $\tau_{\rm{circ}} \approx 10^5$ years if tides on both components are taken into account. This is 1-1.5 orders of magnitude shorter than the lifetime of each component of ASAS1800 in the red giant phase, so the orbit was efficiently circularized soon after the stars reached the red giant branch. Because the synchronization time scale is still much shorter, as shown above, the rotation of both components was synchronized even faster. \subsection{Distance to the system} \label{distance} To derive the distance we followed the prescriptions given in \cite{gra12, gra14}. We used the $V$-band surface brightness (SB) - $(V\!-\!K)$ colour calibration measured by \cite{ben05} for Galactic late-type giant stars. The angular diameter of a star can be estimated using the formula: \begin{equation} \phi [{\rm mas}] = 10^{0.2(S - m_0)} \end{equation} where $S$ is the surface brightness in a given band and $m_0$ is the dereddened magnitude of the star in this band. We can then directly derive the distance to the star by scaling the angular diameter: \begin{equation} d [pc] = 9.2984 \cdot \frac{R [R_{\odot}]}{\phi [{\rm mas}]} \end{equation} The resulting distance to ASAS1800 is d = 2142.5 $\pm$ 63.5 (stat.) $\pm$ 53.3 (syst.) pc (Tab.~\ref{tab:results_final}). The main contribution to the statistical uncertainty comes from random errors connected with the light curve modelling by the WD code (due to the relatively large dispersion of the ASAS light curves) and from infrared photometry errors. Thus there is significant room for improvement in the derived distance once high-accuracy photometry becomes available. The main contribution to the systematic error comes from the SB calibration itself.
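The two distance relations above can be combined into a simple consistency check. The surface brightness and dereddened magnitude below are illustrative values chosen to reproduce the published distance for the primary, not actual outputs of the di Benedetto (2005) calibration; the final lines also convert proper motions to transverse velocities using the standard factor 4.74:

```python
def angular_diameter_mas(S, m0):
    """phi [mas] = 10^(0.2 (S - m0)); S = surface brightness,
    m0 = dereddened magnitude in the same band."""
    return 10.0 ** (0.2 * (S - m0))

def distance_pc(R, phi_mas):
    """d [pc] = 9.2984 * R[R_sun] / phi[mas]."""
    return 9.2984 * R / phi_mas

# Illustrative (hypothetical) inputs for the primary, R = 52.12 R_sun
phi = angular_diameter_mas(S=6.221, m0=9.449)
d = distance_pc(52.12, phi)

# With the distance in hand, proper motions [mas/yr] convert to
# transverse velocities [km/s] via v_t = 4.74 * mu * d[kpc]
v_l = 4.74 * 1.0 * (d / 1000.0)     # mu_l cos b =  1.0 mas/yr
v_b = 4.74 * (-0.5) * (d / 1000.0)  # mu_b       = -0.5 mas/yr
```

The recovered velocities round to the (10, -5) km s$^{-1}$ quoted in the kinematics discussion.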
The total error budget is presented in Tab.~\ref{tab:error}. \subsection{Space position and velocity} The proper motion of the star is $\mu_{\alpha}\cos{\delta}=+0.9\pm1.75$ mas yr$^{-1}$ and $\mu_{\delta}=0.67\pm1.71$ mas yr$^{-1}$, derived as a weighted mean from three catalogues: PPMXL \citep{Roe10}, UCAC4 \citep{zach13}, and SPM4.0 \citep{girard11}. This proper motion in Galactic coordinates is $(\mu_l\cos{b},\mu_b)=(1.0 \pm 2.3, -0.5 \pm 0.7)$ mas yr$^{-1}$, using the prescription given by \cite{pol13}. The calculated distance corresponds to a transverse velocity in Galactic coordinates of $(10\pm24,-5\pm7)$ km s$^{-1}$. To calculate the Galactic space velocity components we used the equations given in \cite{jon87} and obtained $(u,v,w)=(-19\pm3,8\pm24,-4\pm7)$ km s$^{-1}$. This velocity is not corrected for the solar motion with respect to the Local Standard of Rest (LSR). Taking into account the peculiar solar motion $(U_\odot,V_\odot,W_\odot) = (11.1\pm1.0,12.2\pm2.0,7.3\pm0.5)$ km s$^{-1}$ \citep{sch10} and the circular speed of the LSR in the Galaxy, $V_c=238\pm9$ km s$^{-1}$, from \cite{sch12}, we obtain Galacto-centric velocity components of ASAS1800 at the position of the Sun of $(U_1,V_1,W_1)=(-8\pm4,258\pm26,3\pm7)$ km s$^{-1}$, where the errors are dominated by the proper motion uncertainties. The Galactic space position of the star with respect to the Sun is $(X,Y,Z)=(2.12,0.26,-0.05)$ kpc. The Galacto-centric distance of ASAS1800 is $6.16\pm0.40$ kpc \citep[assuming a distance to the Galactic center $R_0=8.28 \pm 0.38$ kpc from][]{gil09} and the Galacto-centric longitude is $\beta = 2^{\circ}\!\!.5\pm0^{\circ}\!\!.3$, placing the star in the Sagittarius-Carina arm \citep[e.g.][their Figure 3]{sak12}. \section{SUMMARY AND CONCLUSIONS} We have obtained stellar parameters for the eclipsing binary ASAS J180057-2333.8. We measure a distance to the system of 2.14 $\pm$ 0.06 (stat.) $\pm$ 0.05 (syst.) kpc.
The accuracy of the distance determination is 4\%, slightly worse than that of distances obtained with the same method for LMC/SMC binaries. This is due to the much higher interstellar extinction, the somewhat lower quality of the photometric light curve, and the transformations of infrared magnitudes between photometric systems, all leading to larger errors in the absolute dimensions and the final distance. With better photometry and with an improved surface brightness - colour relation it should be possible to measure 1.5 -- 2\% distances to such individual systems. In a recent series of papers \citep{pie09,gra12,gra14} we have shown that a precision of 3\% is already routinely attainable for carefully selected late-type eclipsing binaries. As such they are a useful tool to probe the structure and the kinematics of the Galaxy. This technique is also an important and, moreover, independent way of testing the future distance and parallax determinations which will be made by the GAIA mission. Our results also demonstrate the strength of using observations of well-detached eclipsing binary systems in the testing of stellar evolution theory. Several such systems, with well-determined physical parameters, are known \cite[e.g.][]{pie13}. Evolutionary calculations of evolved stars are sensitive to many parameters, however, and definite conclusions require a thorough study, which is ongoing (Smolec et al., in prep.). \section*{Acknowledgements} We would like to thank the staff of the ESO La Silla observatory for their support during the observations. We also gratefully acknowledge financial support for this work from the Polish National Science Centre grants OPUS DEC-2013/09/B/ST9/01551 and DEC-2011/03/B/ST9/02573 and the TEAM subsidy from the Foundation for Polish Science (FNP). In this work we used the SIMBAD database. W.G., G.P.
and D.G. gratefully acknowledge financial support for this work from the BASAL Centro de Astrofisica y Tecnologias Afines (CATA) PFB-06/2007, and from the Millennium Institute of Astrophysics (MAS) of the Iniciativa Cientifica Milenio del Ministerio de Economia, Fomento y Turismo de Chile, project IC120009. K.S. acknowledges the financial support from the National Science Centre under the grant DEC-2011/03/B/ST9/03299. RIA acknowledges funding from the Swiss National Science Foundation. RIA, WG and GP acknowledge the support of the Munich Institute for Astro- and Particle Physics (MIAPP) of the DFG cluster of excellence "Origin and Structure of the Universe".
\section{Introduction} \label{sec:introduction} In conventional quantum mechanics hermitian operators are used to describe closed quantum systems. These operators allow only for real eigenvalues, which can represent physical observables. Since systems in the real world are hardly ever completely isolated, the environment must be taken into account. Due to a lack of knowledge about the actual layout of the environment of a system, or because the environment is too complicated to be completely calculated, one can effectively describe such systems as open quantum systems as long as the interaction with the environment is known. The Hamiltonians of such effective descriptions are often no longer hermitian. The interaction with the environment, e.g.\ gain and loss of probability amplitude, i.e.\ of the wave function, can be expressed by complex potentials \cite{Moiseyev2011a}. These Hamiltonians in general do not have a real eigenvalue spectrum. A special class of non-hermitian operators was investigated by Bender and Boettcher in 1998 \cite{Bender98}. For certain parameter ranges these operators also have purely real eigenvalue spectra. The origin of this special property can be traced back to the $\mathcal{PT}$-symmetry of the operator, where the $\mathcal{PT}$-operator consists of the parity operator $\mathcal{P}$ and the time reversal operator $\mathcal{T}$. The parity operator exchanges $\hat x \rightarrow -\hat x$ and $\hat p \rightarrow -\hat p$. The time reversal operator replaces $\hat p \rightarrow -\hat p$ and $\ii \rightarrow -\ii$. A $\mathcal{PT}$-symmetric system has a Hamiltonian which fulfils $[H, \mathcal{PT}] = 0$. For a system with \begin{equation} H = -\Delta + V \end{equation} the position space representation of the potential must obey the condition \begin{equation} V(x) = V^*(-x), \label{eq:ptsymmetrycondition} \end{equation} i.e.\ the real part of the potential must be an even function of the spatial coordinate and the imaginary part must be an odd function.
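Condition \eref{eq:ptsymmetrycondition} is straightforward to test numerically for a sampled potential. A minimal sketch (the cubic imaginary term echoes the classic Bender-Boettcher example; the unit coefficients are chosen purely for illustration):

```python
def is_pt_symmetric(V, xs, tol=1e-12):
    """Check the condition V(x) = V*(-x) on a symmetric sample grid."""
    return all(abs(V(x) - V(-x).conjugate()) < tol for x in xs)

# Symmetric grid of sample points
xs = [0.1 * k for k in range(-50, 51)]

V_pt  = lambda x: complex(x*x, x*x*x)   # even real part, odd imaginary part
V_not = lambda x: complex(x*x, x*x)     # even imaginary part: not PT-symmetric
```

The check directly encodes the even/odd decomposition stated above: only potentials with an even real part and an odd imaginary part pass.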
$\mathcal{PT}$-symmetric systems have been studied theoretically for quantum systems \cite{qm1,qm2,Mehri,Bender1999a}. However, the concept of $\mathcal{PT}$-symmetry is not restricted to quantum mechanics. Indeed, the experimental breakthrough was achieved in optical wave guides by R\"uter \etal \cite{Rueter10} when in such a system the effects of $\mathcal{PT}$-symmetry and $\mathcal{PT}$-symmetry breaking were observed. This has led to a still increasing interest in the topic \cite{PhysRevA.88.053817, Deffner2015, Albeverio2015, Mostafazadeh2013b}, and $\mathcal{PT}$-symmetric systems have also been studied in microwave cavities \cite{Bittner2012a}, electronic devices \cite{Schindler2011a,Schindler2012a}, and in optical \cite{0305-4470-38-9-L03, Guo09, ramezani10, musslimani08a, optic1, optic2, Makris2010a, makris08, Chong2011, Peng2014} systems. Also in quantum mechanics the stationary Schr\"odinger equation was solved for scattering solutions \cite{qm2} and bound states \cite{Mehri}. Note that it was shown in \cite{Dast2014} that the characteristic $\mathcal{PT}$-symmetric properties are still found when a many-particle description is used. In \cite{Klaiman08} it was suggested that $\mathcal{PT}$-symmetry could also be realized in quantum systems, namely in Bose-Einstein condensates (BECs). The BEC would be captured in a symmetric double-well potential where particles are gained in one well and lost in the other. This loss and gain can then be described by a complex potential coupling the system to the environment. 
The time-independent solutions (see \ref{s:tigpe}) of such a $\mathcal{PT}$-symmetric double-well system can in the simplest possible case \cite{Graefe12} be described by the matrix \begin{eqnarray} \left( \begin{array}{cc} -g | \psi_1 |^2 - \ii \gamma & v \\ v & -g | \psi_2 |^2 + \ii \gamma \end{array} \right) \left( \begin{array}{c} \psi_1 \\ \psi_2 \end{array} \right) = \mu \left( \begin{array}{c} \psi_1 \\ \psi_2 \end{array} \right), \label{eq:matrix2} \end{eqnarray} where $\psi_1$ and $\psi_2$ represent the occupations of the two wells with atoms in the condensed phase and $\mu$ is the chemical potential. This description can be derived from a non-hermitian representation of a many-particle Bose-Hubbard dimer \cite{Graefe08a}. The off-diagonal elements $v$ of the matrix describe the couplings between the wave functions in the two potential wells. The diagonal contains a nonlinear entry introducing the particle-particle interaction described by an s-wave scattering process. Its strength can be changed via the parameter $g$, which is proportional to the s-wave scattering length and can be varied physically in the vicinity of Feshbach resonances. In comparison to the original model from \cite{Graefe12} the replacement $g \rightarrow - g$ is introduced to be consistent with the other models introduced later on. In addition the diagonal contains an imaginary term with the parameter $\gamma$. This term models a particle gain in one well and a particle loss in the other. This gain and loss is provided by the (not further described) environment. The wave functions consist of two complex values and contain no spatial information. Therefore the parity operator $\mathcal{P}$, which normally exchanges $\hat x$ with $-\hat x$, exchanges $\psi_1$ with $\psi_2$ and vice versa. It is also assumed that the potential wells are isolated enough such that the nonlinear interaction between $\psi_1$ and $\psi_2$ can be neglected. 
\begin{figure} \noindent\includegraphics[width=0.99\textwidth]{fig1} \caption{ Analytic solutions for the chemical potential \eref{eq:matrix2ana} of the two-dimensional matrix model described in \eref{eq:matrix2}. The coupling strength $v=1$, and the nonlinearities $g = 0$ in a), $g = 1.4$ in b) and $g = 2.6$ in c) are used. The analytically continued solutions are plotted using dashed lines.} \label{fig:2dmatrix} \end{figure} The system \eref{eq:matrix2} is solved analytically \cite{Graefe12} for wave function vectors $\psi$ which are normalized to one. The chemical potential reads \begin{eqnarray} \mu_s &= -\frac{g}{2} \pm \sqrt{v^2 - \gamma^2}, \nonumber \\ \mu_a &= -g \pm \gamma \sqrt{\frac{4v^2}{g^2+4\gamma^2}-1}. \label{eq:matrix2ana} \end{eqnarray} The values $\mu_s$ in \eref{eq:matrix2ana} are the $\mathcal{PT}$-symmetric solutions, and the $\mathcal{PT}$-broken solutions of the system are denoted $\mu_a$. All solutions are shown in \fref{fig:2dmatrix}. For small $\gamma$ the system without nonlinearity ($g = 0$) shows only $\mathcal{PT}$-symmetric states with real chemical potential $\mu \in \mathbb{R}$ as can be observed in \fref{fig:2dmatrix}a. These states pass through a tangent bifurcation at $\gamma = \gamma_c = 1$, and two $\mathcal{PT}$-broken states emerge. For $\gamma > \gamma_c$ only $\mathcal{PT}$-broken states with a complex chemical potential $\mu \in \mathbb{C}$ exist. For a nonlinearity $g > 0$ the bifurcation in which the two $\mathcal{PT}$-broken states are created moves to a smaller value of $\gamma$ on one of the $\mathcal{PT}$-symmetric branches (compare \fref{fig:2dmatrix}b). A pitchfork bifurcation is formed. Thus for nonzero values of $g$ there is an additional parameter region for $\gamma$, in which $\mathcal{PT}$-symmetric and $\mathcal{PT}$-broken states exist simultaneously. 
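The symmetric solutions $\mu_s$ can be verified directly: for the normalized ansatz $\psi = (\ee^{\ii\varphi}, \ee^{-\ii\varphi})/\sqrt{2}$ the imaginary part of \eref{eq:matrix2} fixes $\sin 2\varphi = -\gamma/v$ (a short calculation of our own, assuming $|\gamma| \le v$), and the residual of the full nonlinear equation then vanishes. A numerical sketch with arbitrarily chosen parameter values:

```python
import numpy as np

def residual(psi, mu, g, v, gamma):
    """Residual of the 2x2 nonlinear eigenvalue problem of the matrix model."""
    M = np.array([[-g * abs(psi[0])**2 - 1j * gamma, v],
                  [v, -g * abs(psi[1])**2 + 1j * gamma]])
    return M @ psi - mu * psi

g, v, gamma = 1.4, 1.0, 0.6            # illustrative values with |gamma| < v
phi = 0.5 * np.arcsin(-gamma / v)      # relative phase of the PT-symmetric state
psi = np.array([np.exp(1j * phi), np.exp(-1j * phi)]) / np.sqrt(2.0)
mu = -g / 2.0 + np.sqrt(v**2 - gamma**2)   # mu_s with the plus sign
err = float(np.max(np.abs(residual(psi, mu, g, v, gamma))))
print(err)  # vanishes up to rounding errors
```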
When the nonlinearity is increased even further ($g > 2$) we see in \fref{fig:2dmatrix}c that the pitchfork bifurcation is no longer present and the $\mathcal{PT}$-broken states exist for all values of $\gamma$. A thorough examination of the bifurcation structure and of the associated exceptional points can be found in \cite{Heiss13a}. The matrix model does not take the spatial extension of the system into account. In general BECs can be described by the nonlinear Gross-Pitaevskii equation \cite{Pitaevskii03a}. Often $\delta$ functions have been used to gain a deeper insight \cite{Mostafazadeh2006a,Jones2008a,Mostafazadeh2009a,qm2,Mehri, Mayteevarunyoo2008a,Rapedius08a,Wit08a,Fassari2012a,Demiralp2005a, Jones1999a,Ahmed2001a,Uncu2008a}. A simple model that includes spatial effects therefore describes the potential with two $\delta$ functions \cite{Cartarius12b}. In this system two $\delta$-wells exist at the positions $x = \pm b$. While both of these wells have the same real depth they possess antisymmetric imaginary parts. That is, one well has a particle gain and the other has an equally strong particle drain. The potential fulfils the $\mathcal{PT}$-symmetry condition \eref{eq:ptsymmetrycondition} and the corresponding Gross-Pitaevskii equation is \begin{eqnarray} \fl -\psi''(x) -\left[ (1+\ii\gamma)\delta(x+b) +(1-\ii\gamma)\delta(x-b) \right] \psi(x)- g | \psi(x)|^2 \psi(x) = \mu \psi(x) \label{eq:delta2}. \end{eqnarray} In this system $\mathcal{PT}$-symmetric solutions and $\mathcal{PT}$-symmetry breaking were found. In \cite{Dast13, Haag14} a similar two-well system was examined in much greater detail by using a more realistic potential well shape. 
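As a sanity check for later numerical treatments, the hermitian linear limit $g = 0$, $\gamma = 0$ of \eref{eq:delta2} can be solved almost analytically: matching $\cosh(\kappa x)$ between the wells to decaying exponentials outside (an elementary calculation of our own) yields the condition $\kappa(1 + \tanh \kappa b) = 1$ for the even bound state, with $\mu = -\kappa^2$. A minimal sketch:

```python
import numpy as np
from scipy.optimize import brentq

# Even bound state of -psi'' - [delta(x+b) + delta(x-b)] psi = mu psi,
# i.e. the g = 0, gamma = 0 limit of the double-delta system. Matching
# cosh(kappa x) between the wells to decaying exponentials outside gives
#   kappa * (1 + tanh(kappa * b)) = 1,   mu = -kappa**2 .
b = 1.1                                    # positions x = +-b of the wells
f = lambda k: k * (1.0 + np.tanh(k * b)) - 1.0
kappa = brentq(f, 1e-6, 1.0)               # sign change brackets the root
mu = -kappa**2
print(kappa, mu)
```

The full nonlinear, non-hermitian problem does not admit such a closed matching condition and requires the numerical treatment described below.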
The Gross-Pitaevskii equation of such a BEC can be written as \begin{eqnarray} (-\Delta + V(x) - g|\psi(x,t)|^2) \psi = \mu \psi \label{eq:gauss2} \end{eqnarray} with the complex potential \begin{eqnarray} V(x) = \frac{1}{4} x^2 + V_0^{\rm G} \ee^{-\sigma x^2} + \ii \gamma x \ee^{-\rho x^2} \quad{\rm with}~ \rho = \frac{\sigma}{2 \ln(4 V_0^{\rm G} \sigma)} \label{eq:gauss2_pot} \end{eqnarray} which confines the BEC in a harmonic trap divided into two wells by a Gaussian potential barrier. The parameter $\rho$ is chosen in such a way that the maximal coupling between the subsystems occurs at the minima of the potential wells. The stationary states show the same general behaviour as those in the matrix model. All descriptions so far used complex potentials to effectively describe the environment. Therefore only the $\mathcal{PT}$-symmetric part of the whole system was described in detail while the concrete layout of the environment itself was not specified. We will now discuss how it might be possible to embed such a $\mathcal{PT}$-symmetric two-well system into a larger hermitian system and therefore explicitly include the environment into our description. As a first step in this direction a hermitian four-well model was used \cite{Kreibich2013a, Kreibich2014}, where the double-well with in- and outgoing particle fluxes is achieved by embedding it into the larger system. The two outer wells have time-dependent adjustable parameters, namely the potential depth and the coupling strength to the inner wells. By lowering and raising these wells a particle gain and loss in the two inner wells can be obtained, which exactly corresponds to the loss and gain in the non-hermitian two-well model. However, the $\mathcal{PT}$-symmetric subsystem of the inner wells loses its properties when the well which provides the particle gain is depleted. 
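The choice of $\rho$ in \eref{eq:gauss2_pot} can be checked numerically: a short calculation of our own shows that the real part of the potential has its minima at $x^2 = \ln(4 V_0^{\rm G}\sigma)/\sigma$, while the coupling profile $|x\, \ee^{-\rho x^2}|$ is maximal at $x^2 = 1/(2\rho)$; the given $\rho$ makes both positions coincide. A sketch with illustrative parameter values:

```python
import numpy as np
from scipy.optimize import minimize_scalar

V0, sigma = 4.0, 0.5                       # illustrative trap parameters
rho = sigma / (2.0 * np.log(4.0 * V0 * sigma))

# right minimum of the real part x^2/4 + V0 * exp(-sigma x^2)
res = minimize_scalar(lambda x: 0.25 * x**2 + V0 * np.exp(-sigma * x**2),
                      bounds=(0.1, 10.0), method='bounded')
x_min = res.x

# maximum of the coupling profile x * exp(-rho x^2)
x_peak = 1.0 / np.sqrt(2.0 * rho)
print(x_min, x_peak)   # the two positions coincide
```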
A second possible realization was suggested in \cite{Single2014}, where the wave function of a double-well potential was coupled to additional unbound wave functions (e.g. one ingoing and one outgoing) connecting the gain and loss of the system with a reservoir. These auxiliary wave functions replace the previously unknown environment of the system. In this paper we propose an additional way of realizing a $\mathcal{PT}$-symmetric two-well system by extending the approach used in \cite{Single2014}. We couple two \emph{stationary} bound wave functions, each of which has the shape of the wave function of the corresponding $\mathcal{PT}$-symmetric system, such that their combination results in a hermitian system. The influx into one subsystem originates from the other and vice versa. By tuning the coupling strength between the two systems we will be able to control the gain and loss in the subsystems. In contrast to \cite{Single2014} our systems are closed and do not require incoming or outgoing wave functions or time-dependent potentials. We will show that for suitable states the subsystems are indeed $\mathcal{PT}$-symmetric; however, $\mathcal{PT}$-symmetry breaking can also be observed. In \sref{sec:2} a four-dimensional matrix model will be constructed similar to the model \eref{eq:matrix2} to observe the general structure of the eigenstates and to determine their $\mathcal{PT}$-symmetric properties. For this model analytical solutions can be found. In the next step a Hamiltonian is constructed to combine two subsystems with a spatial resolution in one dimension for the wave function similar to the double-$\delta$-potential used in \eref{eq:delta2}. In these systems effects which depend on the shape of the wave functions can be observed. We will examine which level of detail is necessary to capture the $\mathcal{PT}$-symmetric properties of the system and the bifurcation structure. 
Since a model with double-$\delta$-potentials is only a rough approximation of the reality we will also introduce an additional system. This system is constructed by coupling two subsystems of the form \eref{eq:gauss2}. It not only has an expanded wave function, which resolves spatial information, but also possesses more realistic extended potential wells. In addition the coupling between the two modes takes place over an extended area of space and is not confined to the locations of the $\delta$-wells. Subsequently we will compare the results obtained with the different descriptions in \sref{sec:3}. We will also compare the bifurcation structure with the model \eref{eq:matrix2}. In addition the influence of the phase difference between the two subsystems on the stationary states will be determined. A summary and discussion of the results is given in \sref{sec:4}. \section{Coupling of two two-well potentials in one hermitian system} \label{sec:2} \begin{figure} \noindent\includegraphics[width=0.99\textwidth]{fig2} \caption{ This sketch illustrates how two double-well subsystems are combined into a closed hermitian system. The coupling and description of the wells is given with a varying degree of detail for the different systems discussed in this paper.} \label{fig:model} \end{figure} In \fref{fig:model} the layout of two coupled two-well systems is sketched. The two subsystems are labelled A and B and each contains two wells with the labels 1 and 2. In the drawing the potentials of the wells are extended. This corresponds to an ansatz as shown in \eref{eq:gauss2} and \eref{eq:gauss2_pot} and will be one of the systems studied in this work. Each of the wells is coupled to its counterpart in the other subsystem. The coupling strength is described by the parameter $\gamma$. Since the strength of the in- and outcoupling is also determined by the wave function of the other subsystem, $\mathcal{PT}$-symmetry can only exist for both subsystems. 
There is no $\mathcal{PT}$-symmetry for arbitrary states but only for states with an appropriate symmetry. As mentioned in the introduction we will investigate the setup in three different degrees of detail. There exist various other systems which have four distinguished modes. A family of such systems named plaquettes was examined \cite{1751-8121-45-44-444021, Yang:14}. These systems are seen as a first step towards building $\mathcal{PT}$-symmetric lattice systems \cite{PhysRevA.85.033825, abdullaev11}. The plaquettes exist in various configurations which differ in the coupling between the sites. In contrast to the model proposed in this paper the gain and loss in these plaquettes is still provided by non-hermitian terms. \subsection{Four-dimensional matrix model} In a first step we construct the four-dimensional hermitian matrix model. Therefore we place two matrices of the shape \eref{eq:matrix2} on the main diagonal blocks in our new matrix $M$ and remove the terms which couple the system to the environment. They are replaced with coupling terms in the off-diagonal $2\times2$ matrix-blocks, i.e. \begin{eqnarray} M = \left( \begin{array}{cc|cc} -g | \psi_{{\rm A},1} |^2 & v & - \ii \gamma & 0 \\ v & -g | \psi_{{\rm A},2} |^2 & 0 & + \ii \gamma \\ \hline +\ii \gamma & 0 & -g | \psi_{{\rm B},1} |^2 & v \\ 0 & -\ii \gamma & v & -g | \psi_{{\rm B},2} |^2 \end{array} \right) \label{eq:matrix4} \end{eqnarray} with the wave function \begin{equation} \psi = \left( \psi_{{\rm A},1} , \psi_{{\rm A},2} , \psi_{{\rm B},1} , \psi_{{\rm B},2} \right). \end{equation} The elements of $\psi$ are four complex values with no information about the spatial extension of the wave function. The first two values $\psi_{A,1}, \psi_{A,2} \in \mathbb{C}$ represent the wave function amplitudes in the double-well potential of subsystem A while the values $\psi_{B,1}, \psi_{B,2} \in \mathbb{C}$ represent the amplitudes in the subsystem B. 
Therefore in this context the parity operator $\mathcal{P}$ exchanges $\psi_{{\rm A},1}$ with $\psi_{{\rm A},2}$ and $\psi_{{\rm B},1}$ with $\psi_{{\rm B},2}$. The two diagonal submatrices will form our subsystems A and B, each with two wells indicated by the indices 1 and 2. The first well of subsystem A is coupled via $M_{1,3} = -\ii\gamma$ to the first well in subsystem B. The first well of subsystem B is coupled via $M_{3,1} = \ii\gamma$ to $\psi_{{\rm A},1}$, therefore keeping the matrix hermitian. The second wells are coupled in a similar manner but with opposite signs. Note that the coupling terms do not yet guarantee a symmetric gain and loss in a subsystem since the gain and loss depend also on the value of the wave function of the other mode. The coupling between the potential wells in one subsystem is done via the parameter $v$. The parameter $g$ still describes the particle-particle scattering in one well, but no scattering arising from the overlap of the wave functions of different wells is taken into account. The time-independent equation describing the stationary states of the complete system reads \begin{eqnarray} M \psi = \mu \psi \label{eq:matrix} \end{eqnarray} with real eigenvalues $\mu \in \mathbb{R}$ because the matrix $M$ is hermitian. 
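A quick numerical check illustrates that the complete system is indeed closed: the matrix \eref{eq:matrix4} is hermitian for every state $\psi$, independent of the nonlinear diagonal entries. The state and parameter values below are arbitrary:

```python
import numpy as np

def build_M(psi, g, v, gamma):
    """Four-dimensional matrix model for psi = (psi_A1, psi_A2, psi_B1, psi_B2)."""
    d = -g * np.abs(psi)**2                       # nonlinear diagonal entries
    return np.array([[d[0],        v,          -1j * gamma, 0          ],
                     [v,           d[1],        0,          1j * gamma ],
                     [1j * gamma,  0,           d[2],       v          ],
                     [0,          -1j * gamma,  v,          d[3]       ]])

psi = np.array([0.6 + 0.2j, 0.5 - 0.3j, 0.4 + 0.1j, 0.7j])   # arbitrary state
M = build_M(psi, g=1.4, v=1.0, gamma=0.5)
hermitian = bool(np.allclose(M, M.conj().T))
print(hermitian)   # hermitian for every psi, hence real mu for the full system
```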
Since we are also interested in the $\mathcal{PT}$-symmetric properties of the subsystems we extend \eref{eq:matrix} to \begin{eqnarray} M \psi = \left( \begin{array}{cc} M_{\rm A} & M_{\rm C} \\ M_{\rm C}^* & M_{\rm B} \end{array} \right) \left( \begin{array}{c} \psi_{{\rm A}} \\ \psi_{{\rm B}} \\ \end{array} \right) = \left( \begin{array}{c} \mu_A \psi_{{\rm A}} \\ \mu_B \psi_{{\rm B}} \\ \end{array} \right) \label{eq:complexmu} \end{eqnarray} with independent eigenvalues $\mu_i \in \mathbb{C}$ for both subsystems and \begin{eqnarray} M_{\rm C} = \left(\begin{array}{cc} -\ii \gamma & 0 \\ 0 & \ii \gamma \end{array}\right) ~, ~~ M_i = \left( \begin{array}{cc} -g | \psi_{i,1} |^2 & v \\ v & -g | \psi_{i,2} |^2 \end{array}\right) \end{eqnarray} for $i = \rm A, B$. We can interpret \eref{eq:complexmu} as two separate equations for both subsystems where the gain and loss is provided by the other subsystem via the matrix $M_{\rm C}$. For $\mu_{\rm A, B} \in \mathbb{C}$ this also allows for $\mathcal{PT}$-broken states where the norm of the subsystems is no longer maintained, but is increased or decreased. Such solutions are therefore non-stationary states, but because the particle number of the whole system is conserved, there is no unlimited exponential growth or decay possible. Therefore these solutions describe only the onset of their growing or decaying temporal evolution. Only states with $\mu_{\rm A} = \mu_{\rm B} \in \mathbb{R}$ are stationary $\mathcal{PT}$-symmetric solutions. For $\mu_{\rm A} = \mu_{\rm B}^*$ \eref{eq:complexmu} leads to solutions where the gain and loss of subsystem A (represented by $\Im \mu_A$) is compensated by the loss and gain of subsystem B ($\Im \mu_B$). Therefore the total particle number is indeed conserved. We can parametrize the ansatz of the wave function for this model and reduce the parameter count by removing a global phase. 
Solutions exist for different ratios of the probability amplitude between the two subsystems, but they may not exist over the whole range of the parameters. To simplify the equations we choose to restrict the norm of each subsystem to one. This leads to the ansatz \begin{eqnarray} \psi = \left( \begin{array}{c} \psi_{{\rm A},1} \\ \psi_{{\rm A},2} \\ \psi_{{\rm B},1} \\ \psi_{{\rm B},2} \\ \end{array} \right) = \left( \begin{array}{l} \cos\theta_{\rm A} e^{+\ii \varphi_{\rm A}} \\ \sin\theta_{\rm A} e^{-\ii \varphi_{\rm A}} \\ \cos\theta_{\rm B} e^{+\ii \varphi_{\rm B} + \ii \varphi_{\rm rel}} \\ \sin\theta_{\rm B} e^{-\ii \varphi_{\rm B} + \ii \varphi_{\rm rel}} \end{array} \right) \label{eq:matrixansatz} \end{eqnarray} with the parameters $\theta_A$ and $\theta_B$ determining the distribution of the probability amplitude of the wave function on the two potential wells in one subsystem and the parameters $\varphi_A$ and $\varphi_B$ describing the phase difference. The parameter $\varphi_{\rm rel}$ defines the phase difference between the two subsystems. By applying additional symmetry restrictions and thereby reducing the parameter count even further, analytical solutions of \eref{eq:complexmu} can be obtained and are presented in \sref{sec:3}. All other solutions can be gained numerically by applying a multidimensional root search. \subsection{Model with a spatial resolution of the wave function} We want to know if the basic description provided by the matrix model is sufficient to capture the $\mathcal{PT}$-symmetric properties and the bifurcation structure of the system or if a more detailed description is necessary. We do this in two steps. First we allow for the more detailed information of a spatially resolved wave function but retain the concept of isolated coupling points. The double-$\delta$ system keeps the mathematical and numeric intricacy at bay but still provides a spatial resolution for the wave function. 
Therefore we combine two systems with $\delta$-potentials \eref{eq:delta2} which describe each subsystem in one spatial dimension. The subsystems are then coupled at the positions of the $\delta$-wells $x = \pm b$. The depth of the potentials is controlled by $V_0^{\rm D}$. Both the depth $V_0^{\rm D}$ and the distance $2b$ between the wells correspond to the coupling parameter $v$ in the matrix model. The coupling strength between the two subsystems is controlled by $\gamma$ and is only present at the two points $x = \pm b$, i.e.\ the potential has no spatial extension. The dimensionless coupled Gross-Pitaevskii equations read \begin{eqnarray} \fl \left[ -\frac{\partial^2}{\partial x^2} -g |\psi_{\rm A}|^2 + V_0^{\rm D} (\delta(x-b) + \delta(x+b)) \right] \psi_A \nonumber \\ + \ii \gamma \left[ \delta(x-b)\psi_B(b) - \delta(x+b)\psi_B(-b) \right] = \mu_A \psi_A, \nonumber\\ \fl \left[ -\frac{\partial^2}{\partial x^2} -g |\psi_{\rm B}|^2 + V_0^{\rm D} (\delta(x-b) + \delta(x+b)) \right] \psi_B \nonumber \\ - \ii \gamma \left[ \delta(x-b)\psi_A(b) - \delta(x+b)\psi_A(-b) \right] = \mu_B \psi_B, \label{eq:deltaGPEb} \end{eqnarray} with the same physical interpretation of $\mu_A$ and $\mu_B$ as in \eref{eq:complexmu} for the matrix model. Stationary states of the system are calculated numerically exactly by integrating the wave functions outward from $x = 0$ and by imposing the appropriate boundary conditions on the wave functions. We require the wave functions to approach zero for $x \rightarrow \pm \infty$. For numerical purposes it is sufficient for the wave functions to have small values at $x = \pm x_{\rm max}$, \begin{eqnarray} \psi_{\rm A, B}(\pm x_{\rm max}) \approx 0. \label{eq:deltainf} \end{eqnarray} An additional condition can be required for the norm of the wave function. In agreement with the normalized ansatz \eref{eq:matrixansatz} in the matrix model we search for solutions that fulfil \begin{eqnarray} ||\psi_{\rm A, B}||^2 = 1. 
\label{eq:deltanorm} \end{eqnarray} Both wave functions are chosen to be real at $x = 0$. This fixes the global phase and sets the phase difference between the two modes at $x = 0$ to $\varphi_{\rm rel} = 0$. The 10 (real) free parameters are $\Re \mu_{A,B}$, $\Im \mu_{A,B}$, $\Re\psi_{\rm A,B}(0)$, $\Re \psi'_{\rm A,B}(0)$ and $\Im \psi'_{\rm A,B}(0)$. They are chosen such that the 10 (real) conditions, i.e.\ the norm \eref{eq:deltanorm} and the boundary conditions at $x = \pm x_{\rm max}$ \eref{eq:deltainf}, are fulfilled. Note that there are no constraints on the $\mu_{A,B}$. We will see that for stationary $\mathcal{PT}$-symmetric solutions the result is $\mu_A = \mu_B \in \mathbb{R}$. This is not a constraint on the root search. \subsection{Model with a spatial resolution of both the potential well and the coupling} We consider an additional system and remove a further restriction, viz.\ the point-like coupling approach, by duplicating the system from \eref{eq:gauss2}, where the wells are formed by a harmonic trap and divided by a Gaussian potential barrier. This not only provides us with a system with much more realistic potential wells but also allows us to extend the coupling of the two subsystems over the whole space. The time-independent GPEs of the system read \begin{eqnarray} \Big( -\frac{\partial^2}{\partial x^2} - g | \psi_{\rm A} |^2 + \frac{1}{4} x^2 + V_0^{\rm G} \ee^{-\sigma x^2} &\Big) \psi_{\rm A} + \ii \gamma x \ee^{-\rho x^2} \psi_{\rm B} &= \mu_A \psi_{\rm A}, \nonumber\\ \Big( -\frac{\partial^2}{\partial x^2} - \underbrace{g | \psi_{\rm B} |^2}_{\rm contact} + \underbrace{\frac{1}{4} x^2 + V_0^{\rm G} \ee^{-\sigma x^2}}_{\rm trap} &\Big) \psi_{\rm B} - \underbrace{\ii \gamma x \ee^{-\rho x^2} \psi_{\rm A}}_{\rm coupling} &= \mu_B \psi_{\rm B}. 
\label{eq:gauss} \end{eqnarray} The parameter $V_0^{\rm G}$ controls the height of the potential barrier between the two wells in one subsystem and together with the width $\sigma$ of the barrier it relates to the coupling strength $v$ in the matrix model. Again the coupling between the two subsystems is controlled by a parameter labelled $\gamma$. To solve this equation we use an ansatz of coupled Gaussian functions (compare \cite{Rau10a,Rau10b}), \begin{equation} \psi = \sum_{i=A,B \atop j=1,2} \psi_{i,j} = \sum_{i=A,B \atop j=1,2} \exp\left({a_{i, j} x^2 + b_{i,j} x + c_{i,j}}\right). \label{eq:gauss_ansatz} \end{equation} We use four wave functions, two for each subsystem ($i=A,B$) and place one in each well ($j=1,2$). Again we place restrictions on our ansatz. We require that the norm of each subsystem is one, which reduces our parameter set by two. In addition we require \begin{eqnarray} \Im c_{A, 1} = \varphi_A, \qquad& \Im c_{B, 1} = \varphi_{\rm B} + \varphi_{\rm rel}, \nonumber\\ \Im c_{A, 2} = -\varphi_A, \qquad& \Im c_{B, 2} = -\varphi_{\rm B} + \varphi_{\rm rel} \end{eqnarray} with a constant $\varphi_{\rm rel}$ determining the phase difference between the two modes, and again reducing the parameter set by two. Therefore from the 24 parameters $a_{i,j},b_{i,j},c_{i,j} \in \mathbb{C}$ 20 free parameters remain and must be determined such that adequate solutions are found. With these constraints the ansatz is consistent with the ansatz for the matrix model and the system with the double-$\delta$ potential. 
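For illustration, the ansatz \eref{eq:gauss_ansatz} for one subsystem can be evaluated on a grid and the normalization constraint enforced numerically. The Gaussian parameters below are arbitrary placeholders, not converged variational parameters:

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]

def gauss(a, b, c):
    """One basis function exp(a x^2 + b x + c) of the coupled-Gaussian ansatz."""
    return np.exp(a * x**2 + b * x + c)

# Subsystem A: one Gaussian per well (j = 1, 2); placeholder parameters only.
psi_a = gauss(-0.5 + 0.1j, -1.2, 0.2j) + gauss(-0.5 - 0.1j, 1.2, -0.2j)
psi_a = psi_a / np.sqrt(np.sum(np.abs(psi_a)**2) * dx)   # enforce ||psi_A||^2 = 1
norm = float(np.sum(np.abs(psi_a)**2) * dx)
print(norm)
```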
To obtain solutions of \eref{eq:gauss} we apply the time-dependent variational principle \cite{McLachlan1964a} to the time-dependent GPEs \begin{eqnarray} \Big( -\frac{\partial^2}{\partial x^2} - g | \psi_{\rm A} |^2 + \frac{1}{4} x^2 + V_0^{\rm G} \ee^{-\sigma x^2} &\Big) \psi_{\rm A} + \ii \gamma x \ee^{-\rho x^2} \psi_{\rm B} &= \ii \frac{\partial}{\partial t} \psi_{\rm A}, \nonumber\\ \Big( -\frac{\partial^2}{\partial x^2} - \underbrace{g | \psi_{\rm B} |^2}_{\rm contact} + \underbrace{\frac{1}{4} x^2 + V_0^{\rm G} \ee^{-\sigma x^2}}_{\rm trap} &\Big) \psi_{\rm B} - \underbrace{\ii \gamma x \ee^{-\rho x^2} \psi_{\rm A}}_{\rm coupling} &= \ii \frac{\partial}{\partial t} \psi_{\rm B}. \label{eq:gauss_tdvp} \end{eqnarray} We search a parameter set for our ansatz, which minimizes the difference between the left-hand and right-hand side of the equation, viz.\ we determine the minimum of the functional \begin{eqnarray} I = \left\Vert H \psi - \ii \phi \right\Vert^2. \end{eqnarray} In this procedure $\psi(t)$ is kept constant for a given point in time and $\dot \psi = \phi$ is varied to minimize $I$. Since the wave function $\psi(z(t))$ is not varied we require that the parameters $z = \{a_{i,j}, b_{i,j}, c_{i,j}\}$ do not change. A variation with respect to $\dot z$ leads to the equations of motion for the variational parameters, which follow from \begin{eqnarray} \left< \frac{\partial\psi}{\partial z} \middle| \dot \psi - \ii H \psi\right> = 0. \end{eqnarray} A more elaborate explanation of the method can be found in \cite{Dast13}. With a numerical root search we can now determine those states which satisfy the 20 conditions \begin{eqnarray} 0 = \dot a_{i, j}, 0 = \dot b_{i,j}, \\ \mu_i = \ii \dot c_{i,1}^* = \ii \dot c_{i,2}^* \Rightarrow 0 = \dot c_{i,1} - \dot c_{i,2} ~{\rm with}~ i = A,B. \end{eqnarray} For $\mathcal{PT}$-symmetric solutions the chemical potentials of the subsystems will fulfil $\mu_A = \mu_B \in \mathbb{R}$. 
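The stationarity conditions can be illustrated with a one-Gaussian toy reduction of our own (bare trap $x^2/4$ with $g = \gamma = 0$ and $V_0^{\rm G} = 0$), for which the variational equations of motion follow by inserting the ansatz into the time-dependent equation and comparing powers of $x$:

```python
import numpy as np
from scipy.optimize import brentq

# Toy reduction (our own check, not the full coupled problem):
# psi = exp(a x^2 + c) in the bare trap x^2/4. Inserting the ansatz into
# i d/dt psi = (-d^2/dx^2 + x^2/4) psi and comparing powers of x yields
#   i da/dt = -4 a^2 + 1/4 ,   i dc/dt = -2 a .
# Stationarity requires da/dt = 0; the chemical potential is mu = i dc/dt.
a = brentq(lambda a: -4.0 * a**2 + 0.25, -1.0, -1e-3)   # normalizable root a < 0
mu = -2.0 * a                                           # mu = i dc/dt = -2a
print(a, mu)   # a = -1/4 and mu = 1/2, the exact ground-state energy of this trap
```

Since a single Gaussian solves the linear harmonic problem exactly, the variational equations reproduce the exact ground state here; for the full ansatz \eref{eq:gauss_ansatz} the conditions must be solved with a numerical root search.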
\section{\texorpdfstring{$\mathcal{PT}$}{PT}-symmetric properties and bifurcation structure of the systems} \label{sec:3} First we will examine analytical solutions of the matrix model. The bifurcation structure of these solutions and their $\mathcal{PT}$-symmetric properties will be discussed. Furthermore the differences and similarities between this four-dimensional hermitian matrix model and the two-dimensional matrix model with imaginary potential will be examined. In the next step the results obtained with the matrix model will be compared with the spatially extended models. Also the influence of the phase difference between the two modes will be investigated. \subsection{Bifurcation structure and \texorpdfstring{$\mathcal{PT}$}{PT}-symmetric properties of the matrix model} To obtain analytical solutions we have to impose some constraints on the ansatz of the wave function of the matrix model \eref{eq:complexmu}. $\mathcal{PT}$-symmetric solutions must fulfil the condition \eref{eq:ptsymmetrycondition} which for this matrix model results in \begin{eqnarray} \psi_{j, 1} &= \psi_{j, 2}^* \quad{\rm with}~ j=A, B \label{eq:ptcondition2} \end{eqnarray} and \begin{eqnarray} \psi_{A, i} &= \psi_{B, i}^* \quad{\rm with}~ i=1, 2. \label{eq:ptcondition3} \end{eqnarray} This ensures that the particle loss in one system is compensated by the other. These restrictions lead to the ansatz \begin{eqnarray} \psi = \frac{1}{\sqrt{2}} \left( \ee^{\ii\varphi}, \ee^{-\ii\varphi}, \ee^{-\ii\varphi}, \ee^{\ii\varphi} \right) \label{eq:ptsymansatz} \end{eqnarray} with which we obtain an analytical expression for the chemical potentials of two $\mathcal{PT}$-symmetric solutions \begin{eqnarray} \mu = -\frac{g}{2}\pm \sqrt{v^2 + \gamma^2}. \label{eq:analytic_sym} \end{eqnarray} A more detailed calculation is given in \ref{s:analyticalsolutions}. 
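The result \eref{eq:analytic_sym} can be verified numerically: with the relative phase fixed by $2\varphi = -\arctan(\gamma/v)$ (a short calculation of our own) the ansatz \eref{eq:ptsymansatz} solves the four-dimensional model exactly with $\mu = -g/2 + \sqrt{v^2+\gamma^2}$. A sketch with arbitrarily chosen parameters:

```python
import numpy as np

g, v, gamma = 1.5, 1.0, 0.8                  # illustrative values

# Relative phase of the PT-symmetric state (our own short calculation):
phi = -0.5 * np.arctan2(gamma, v)
psi = np.array([np.exp(1j * phi), np.exp(-1j * phi),
                np.exp(-1j * phi), np.exp(1j * phi)]) / np.sqrt(2.0)

d = -g * np.abs(psi)**2                      # nonlinear diagonal entries
M = np.array([[d[0],        v,          -1j * gamma, 0          ],
              [v,           d[1],        0,          1j * gamma ],
              [1j * gamma,  0,           d[2],       v          ],
              [0,          -1j * gamma,  v,          d[3]       ]])

mu = -g / 2.0 + np.sqrt(v**2 + gamma**2)     # plus sign of the analytic result
err = float(np.max(np.abs(M @ psi - mu * psi)))
print(err)   # vanishes up to rounding errors
```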
\begin{figure} \noindent\includegraphics[width=0.99\textwidth]{fig3n} \caption{ Analytical solutions for the chemical potential of \eref{eq:matrix} are shown. The $\mathcal{PT}$-symmetric states are denoted by $s_1$ and $s_2$. $\mathcal{PT}$-broken states are labelled with $a_1$ and $a_2$. Solutions of the effective system \eref{eq:ef2dmatrix} are labelled with $r_i$. The states $r_2$ and $r_3$ only exist for $|g| > 2$ and therefore appear only in figure b. The coupling strength is set to $v=1$. In a) the nonlinearity is set to $g = 1.5$ while in b) it is set to $g = 3.5$. The pitchfork bifurcation between the states $a_{1,2}$ and $s_2$ in a) is labelled with $\rm B_P$ and occurs at $\gamma \approx 0.882$. The tangent bifurcation between states $r_2$ and $r_3$ in b) is marked by $\rm B_T$. The analytically continued solutions are plotted using lighter colours.} \label{fig:matrix} \end{figure} The solutions are plotted in \fref{fig:matrix} and labelled $s_1$ and $s_2$. For different values of $g$ the solutions are shifted up or down. For increasing values of $\gamma$ the difference of the values of the chemical potential of the two states is increased. $\mathcal{PT}$-broken states do not need to obey condition \eref{eq:ptcondition2} but \eref{eq:ptcondition3} still must be fulfilled since the influx and outflux between subsystem A and B must be equal. Therefore the ansatz for these states reads \begin{eqnarray} \psi = \left( \cos\theta \ee^{\ii\varphi}, \sin\theta \ee^{-\ii\varphi}, \cos\theta \ee^{-\ii\varphi}, \sin\theta \ee^{\ii\varphi} \right) \label{eq:ptbrokenansatz} \end{eqnarray} with $\mu_A = \mu_B^*$ . The calculation in \ref{s:analyticalsolutions} yields the analytical expressions for the chemical potentials \begin{eqnarray} \mu_{\rm A} = \mu_{\rm B}^* = -\frac{g}{2} \left( 2 {\color{green}\mp} \sqrt{P+\frac{\gamma^2}{v^2}P^2}-P \right) ~{\rm with}~ P = \frac{1}{2} {\color{red}\pm} \frac{\sqrt{g^2 + 16 \gamma^2}}{2g}. 
\label{eq:analytic_asym} \end{eqnarray} Note that the ${\color{green}\mp}$ and ${\color{red}\pm}$ are independent and we therefore obtain four expressions \eref{eq:analytic_asym} for $\mathcal{PT}$-broken states. However two of these solutions only exist in an analytically continued system (see \fref{fig:matrix}). For $|g| < 2v$ the state $s_2$ passes through a pitchfork bifurcation at \begin{eqnarray} \gamma_c = \sqrt{\frac{4v^4}{g^2} - v^2}, \end{eqnarray} in which $a_1$ and $a_2$ are created. For $\gamma > \gamma_c$ these two states have the same $\Re \mu_{\rm A}$ but a complex conjugate $\Im \mu_{\rm A}$. This means that one of the states gains particles in subsystem A while in subsystem B it is depleted, and vice versa. The pitchfork bifurcation occurs at smaller values of $\gamma_c$ for an increasing nonlinearity $g$ until for $|g| = 2v$ the value of $\gamma_c$ reaches zero. For values of $|g| > 2v$ the bifurcation between $a_{1,2}$ and $s_2$ no longer occurs and the states $a_{1,2}$ exist independent of $s_2$ for all $\gamma$. For $g < 0$ the bifurcation occurs not with the state $s_2$ but with $s_1$. Thus we have shown that $\mathcal{PT}$-symmetric states exist for the closed four-dimensional hermitian matrix model and $\mathcal{PT}$-symmetry breaking can be observed. Besides these states there is another class of states in the four-dimensional matrix model. Wave functions which fulfil the condition \begin{eqnarray} \psi_{{\rm A},i} = -\ii\psi_{{\rm B},i} ~{\rm with}~ i = 1,2 ~{\rm and}~ \psi_{{\rm A},i}, \ii\psi_{{\rm B},i}\in \mathbb{R} \end{eqnarray} lead to decoupled equations for $\psi_A$ and $\psi_B$ and result in the effective two-dimensional model \begin{eqnarray} \left( \begin{array}{cc} -g | \psi_1 |^2 - \gamma & v \\ v & -g | \psi_2 |^2 + \gamma \end{array} \right) \left( \begin{array}{c} \psi_1 \\ \psi_2 \end{array} \right) = \mu \left( \begin{array}{c} \psi_1 \\ \psi_2 \end{array} \right) ~{\rm and}~ \psi_{1,2} \in \mathbb{R}. 
\label{eq:ef2dmatrix} \end{eqnarray} These states effectively describe a double-well system with a real potential, where one potential well is lowered and the other is raised by the value of $\gamma$. As expected we find that the amplitude of the wave function in the higher well is lower than in the other. For $\gamma = 0$ the system crosses over to a symmetric double-well model with no coupling and therefore we can see in \fref{fig:matrix}a that the state $r_1$ merges with the state $s_1$ and the state $r_4$ merges with the state $s_2$. For values $g > 2v$ the bifurcation between the $\mathcal{PT}$-symmetric and $\mathcal{PT}$-broken states no longer exists and two new states $r_2$ and $r_3$ emerge. Now at $\gamma = 0$ the states $r_1$ and $s_1$ as well as $r_2$ and $s_2$ become equal. Also $r_3$, $r_4$ and $s_4$ merge. For increasing $\gamma$ the states $r_2$ and $r_3$ vanish in a tangent bifurcation. The method used to solve \eref{eq:ef2dmatrix} is described in \ref{s:analyticalsolutions}. We can compare the results of the four-dimensional matrix model in \fref{fig:matrix} with those of the two-dimensional matrix model shown in \fref{fig:2dmatrix}. It is immediately clear that our system shows a new and richer bifurcation scenario which differs from the two-dimensional matrix model. While the $\mathcal{PT}$-symmetric eigenvalues of the states in the two-dimensional system approach each other for increasing coupling strengths $\gamma$ until they merge in a tangent bifurcation, in our system the eigenvalues increase in distance for larger values of $\gamma$ and no bifurcation between the two states $s_{1,2}$ occurs. However, some generic features remain the same. In both cases the $\mathcal{PT}$-symmetric state $s_2$ with a real $\mu$ passes through a pitchfork bifurcation, out of which the $\mathcal{PT}$-broken states with complex $\mu$ emerge. 
For both models this bifurcation moves to smaller values of $\gamma$ until, for a critical value of the nonlinearity $g$, the bifurcation vanishes and the $\mathcal{PT}$-symmetric and $\mathcal{PT}$-broken states never coincide. One advantage of using a matrix model compared to systems with a more realistic spatially extended description is that the matrix model gives an overview of all possible effects in a system while remaining straightforward to calculate. Also the knowledge about symmetry properties and existence of states gained from the matrix model can help in finding states in the more complicated models, e.g.\ by choosing appropriate initial values for a root search. Since we want to concentrate our investigation on the $\mathcal{PT}$-symmetric properties of the subsystems, we will not further investigate the states $r_i$. \subsection{Comparison of the matrix model and the model with a spatial resolution of the wave function} \begin{figure} \noindent\includegraphics[width=0.99\textwidth]{fig4n} \caption{ Chemical potential $\mu = \mu_{\rm A} = \mu_{\rm B}^*$ for the matrix model \eref{eq:complexmu} (blue dashed lines). The parameters of the matrix used for all three plots are $g_0 = 2.75$, $v = 0.28$ and $\gamma_0 = 1.27$. The shift of the chemical potential of the matrix model is $\Delta\mu = -0.17$. For both figures a) and b) the phase difference $\varphi_{\rm rel}$ was set to zero. Figure a) was calculated for a nonlinearity of $g = 1.5$. The different states are denoted by $s_{1,2}$, $a_{1,2}$. In plot b) a nonlinearity of $g = 2.0$ was used. For plot c) the same nonlinearity as in plot a) was used but the phase difference was set to $\varphi_{\rm rel} = 0.03$. The figure also contains the results for the double-$\delta$-system (red solid lines). For the coupling of the two subsystems $V_0^{\rm D}$ was set to $1.0$ and the $\delta$-potentials were located at $b = \pm1.1$. The same nonlinearities as for the matrix model were used.
In figure a) the parameters for which the wave functions are shown in \fref{fig:deltawave} are marked by green circles. A pitchfork bifurcation between the states $s_2$ and $a_{1,2}$ is denoted by $\rm B_P$. An additional cusp bifurcation appearing in the case $\varphi_{\rm rel} \neq 0$ is marked by $\rm B_C$.} \label{fig:deltamatrix} \end{figure} \begin{figure} \noindent\includegraphics[width=0.99\textwidth]{fig5n} \caption{ Wave functions of the double-$\delta$ potential system for the parameter sets marked in \fref{fig:deltamatrix}a. a) Wave function of the $\mathcal{PT}$-symmetric ground state. b) Wave function of the $\mathcal{PT}$-symmetric excited state. In c) the broken symmetry of the $\mathcal{PT}$-broken state can be recognized.} \label{fig:deltawave} \end{figure} The results of the system with the double-$\delta$ potentials are given in \fref{fig:deltamatrix} in comparison with those of the matrix model. To be able to compare the two models the parameters in the matrix model are replaced by $g \rightarrow {g}/{g_0}$ and $\gamma \rightarrow {\gamma}/{\gamma_0}$. Also a shift $\Delta\mu$ in the chemical potential is introduced. Then the parameters $\gamma_0$, $g_0$, $v$ and $\Delta\mu$ are fitted to the results of the double-$\delta$ model. How these parameters are connected to the extended model can be seen in \ref{s:matrixmodelderivation}. In contrast to the matrix model the double-$\delta$ system includes spatial properties of the wave functions. In \fref{fig:deltawave} the wave functions for the parameters marked in \fref{fig:deltamatrix} are shown. One can clearly observe the non-differentiability of the wave functions at the locations $x = \pm b$ of the $\delta$-potentials. It is also clearly visible that the states with complex chemical potential are $\mathcal{PT}$-broken (see \fref{fig:deltawave}c).
The two wave functions for the subsystems A and B fulfil the condition $\psi_{\rm A}(x) = \psi^*_{\rm B}(x)$ which ensures that the loss and gain in each subsystem are balanced by the gain and loss in the other subsystem and the $\mathcal{PT}$-symmetry of the potential is maintained. Furthermore the wave function of the ground state (\fref{fig:deltawave}a) is much more localized in the potential wells than the wave function of the excited state (\fref{fig:deltawave}b). When we compare the solutions of the matrix model with those of the model with the double-$\delta$ potential we observe that the qualitative bifurcation structure of the states is the same for both models but some quantitative deviations can be seen. Before we continue our investigation of the cause of these differences in \sref{sec:3.3} we will take a look at the influence of the phase difference $\varphi_{\rm rel}$ between the two subsystems. To examine the influence of the phase difference on the bifurcation scenario we show in \fref{fig:deltamatrix}c the case in which the phase difference between the subsystems is set to $\varphi_{\rm rel} = 0.03$. The pitchfork bifurcation ${\rm B}_{\rm P}$ in \fref{fig:deltamatrix}c turns into a cusp bifurcation ${\rm B}_{\rm C}$. While the central ($\mathcal{PT}$-symmetric) state $s_1$ exists on both sides of the bifurcation point, the two outer ($\mathcal{PT}$-broken) states $a_{1,2}$ are created in the bifurcation of \fref{fig:deltamatrix}a. In the cusp bifurcation of \fref{fig:deltamatrix}c one of the outer states (depending on the sign of $\varphi_{\rm rel}$) merges with the central state and the other outer state performs a continuous transition to the central state for smaller values of $\gamma$. Also the $\mathcal{PT}$-symmetry of all states is broken. The asymmetry of the central state increases for increasing values of $\varphi_{\rm rel}$.
If we introduce the phase difference $\exp(\ii \varphi_{\rm rel})$ between the two subsystems explicitly into the stationary GPE \eref{eq:matrixansatz} for the matrix model, we obtain for the subsystem A \begin{eqnarray} \mu_A \psi_{\rm A, 1} &= - g | \psi_{\rm A, 1} |^2 \psi_{\rm A, 1} + v \psi_{\rm A, 2} + \sin(\varphi_{\rm rel}) \gamma \psi_{\rm B, 1} - \ii \cos(\varphi_{\rm rel}) \gamma \psi_{\rm B, 1}, \nonumber \\ \mu_A \psi_{\rm A, 2} &= - g | \psi_{\rm A, 2} |^2 \psi_{\rm A, 2} + v \psi_{\rm A, 1} - \underbrace{\sin(\varphi_{\rm rel}) \gamma \psi_{\rm B, 2}}_{\rm asym.\ pot.} + \underbrace{\ii \cos(\varphi_{\rm rel}) \gamma \psi_{\rm B, 2}}_{\rm gain\ or\ loss}. \end{eqnarray} We see that a phase difference between the two subsystems leads to different contributions to the real and imaginary part of the effective potential of each subsystem. The real part of the effective potential can therefore become asymmetric (this not only depends on the phase difference $\varphi_{\rm rel}$ but also on the phase value of the wave function in the other subsystem). The influence of an asymmetric double-well potential on the bifurcation structure has been discussed previously \cite{PhysRevE.74.056608}. For an asymmetric potential there is no longer a pitchfork bifurcation but a tangent bifurcation. We can compare this to the well-known normal forms of two-parameter bifurcation theory \cite{Kuznetsov2004}. The normal form of the cusp bifurcation is \begin{eqnarray} 0 = \dot x = f_{\rm C}(x) = \beta + \alpha x - x^3, \label{eq:normalcusp} \end{eqnarray} with the bifurcation parameters $\alpha$ and $\beta$. In our model the role of the second parameter $\beta$ is taken by the phase difference $\varphi_{\rm rel}$ between the two subsystems. A constant $\varphi_{\rm rel} = 0$ (which is equivalent to $\beta = 0$) defines a line in the $\varphi_{\rm rel}$-$\gamma$ parameter space. On this line the pitchfork bifurcation scenario emerges.
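The correspondence with the normal form \eref{eq:normalcusp} can be checked numerically. The following short sketch, with freely chosen values of $\alpha$ and $\beta$ that are not related to the physical parameters, counts the real equilibria of $f_{\rm C}$:

```python
import numpy as np

def equilibria(alpha, beta):
    """Real roots of the cusp normal form 0 = beta + alpha*x - x**3."""
    roots = np.roots([-1.0, 0.0, alpha, beta])
    return np.sort(roots[np.abs(roots.imag) < 1e-7].real)

# beta = 0 (phi_rel = 0): pitchfork scenario on the symmetry line
print(equilibria(-1.0, 0.0))   # one equilibrium below the bifurcation
print(equilibria(+1.0, 0.0))   # three symmetric equilibria above it

# beta != 0 (phi_rel != 0): three coexisting but asymmetric equilibria
print(equilibria(+1.0, 0.05))
```

For $\beta = 0$ the number of equilibria changes symmetrically from one to three (pitchfork), while a small $\beta \neq 0$ leaves three coexisting but asymmetric equilibria, matching the qualitative change of the bifurcation scenario discussed above.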
We have seen that the phase difference between the two modes is critical to obtain a $\mathcal{PT}$-symmetric system, and the breaking of this symmetry changes the bifurcation structure. Only for $\varphi_{\rm rel} = 0$ are $\mathcal{PT}$-symmetric states observed. \subsection{Comparison of the models and usefulness of the matrix model} \label{sec:3.3} \begin{figure}[t] \noindent\includegraphics[width=0.90\textwidth]{fig6n} \caption{ Comparison of the eigenvalues of the matrix model from \eref{eq:matrix} (blue dashed lines) with the eigenvalues of the system \eref{eq:gauss} (red solid lines), in which the BEC is trapped in a smooth harmonic potential separated into two wells by a Gaussian potential barrier. The fit parameters for the matrix model are $g_0 = 2.78$, $v = 0.043$ and $\gamma_0 = 0.92$ and are used for all cases a)-c). The chemical potential of the matrix model is shifted by $\Delta\mu = 2.463$. The height of the Gaussian potential barrier in system \eref{eq:gauss} is $V_0^{\rm G} = 0.25$ with the width $\sigma = 0.5$. Figures a) and c) contain the results for $g = 0.2$, while figure b) is plotted for $g = 0.3$. In figure c) the phase difference is non-zero ($\varphi_{\rm rel}=0.03$). } \label{fig:gaussmatrix} \end{figure} In the system \eref{eq:gauss} the two modes are coupled over a spatially extended range and therefore the continuous change of the phase in the wave functions may play a role. In \fref{fig:gaussmatrix} we show the stationary states of the matrix model \eref{eq:matrix} in comparison with those of the smooth potential system \eref{eq:gauss}. The parameters of the matrix model ($g_0$, $\gamma_0$ and $v$) and a shift of the chemical potential $\Delta\mu$ were adjusted to the solution of the model \eref{eq:gauss} but remained the same for all calculations in \fref{fig:gaussmatrix} with different values for $g$ and $\varphi_{\rm rel}$.
In \ref{s:matrixmodelderivation} it is shown how the discrete matrix model can be derived from a continuous model. Again we see a pitchfork bifurcation (\fref{fig:gaussmatrix}a) in the lower state which, for increasing values of the nonlinearity $g$, moves to smaller values of $\gamma$. The two new states created in this bifurcation are non-stationary ($\mu_{\rm A, B} \not\in \mathbb{R}$) $\mathcal{PT}$-broken states. By further increasing $g$ the value of $\gamma$ at which the bifurcation occurs moves to even smaller values until it reaches $\gamma = 0$. Thus the qualitative behaviour is exactly the same as in the two previously investigated models. It is generic for the coupled double-well structure. If the phase between the two subsystems is changed to a non-zero value, the pitchfork bifurcation from \fref{fig:gaussmatrix}a changes into a cusp bifurcation (compare \fref{fig:gaussmatrix}c). This is the same behaviour as observed in \fref{fig:deltamatrix} for the double-$\delta$-potential. No change of the bifurcation structure or the $\mathcal{PT}$-symmetric properties due to the extended coupling is observed. However, as can be seen in \fref{fig:gaussmatrix}a-c the agreement with the matrix model is nearly perfect and much better than the agreement between the matrix model and the model with the $\delta$-potential wells. \begin{figure} \noindent\includegraphics[width=0.95\textwidth]{fig7} \caption{ Ground state and mirrored excited state ($\mu_{\rm mirror} = \mu_0 - \mu$). The states are not symmetric. Figures a) and b) show the results for the Gaussian model \eref{eq:gauss} with $g = 0.2$ and $\mu_0 = 4.854$ and $\mu_0 = 4.2733$, respectively. Figures c) and d) show the results of the double-$\delta$ model \eref{eq:deltaGPEb} with $g = 2.0$ and $\mu_0 = -4.5$ and $\mu_0=-1.1$, respectively. In the Gaussian model the height of the potential barrier between the two wells in each subsystem is changed.
For a) the barrier height is $V_0^{\rm G} = 4.0$, for b) it is $V_0^{\rm G} = 2.5$. In the case of the $\delta$-model the (real) depth of the potentials is changed from $V_0^{\rm D} = 1.0$ in c) to $V_0^{\rm D} = 2.5$ in d).} \label{fig:locality} \end{figure} Taking a closer look at the states of the matrix model one discovers that the upper and lower states are symmetric with respect to $-{g}/{2}$ as can be seen in \eref{eq:analytic_sym}. This is no longer true for the models with a spatial description. To make this asymmetry visible we examine \fref{fig:locality} in which one state is mirrored onto the other, e.g. for one state \begin{eqnarray} \mu_{\rm mirror} = \mu_0 - \mu \end{eqnarray} is plotted and $\mu_0$ is the average value of the chemical potential of both states at $\gamma = 0$. One observes that the deviation is much more pronounced in the model with $\delta$-wells than in the smooth potential from \eref{eq:gauss}. \begin{table}[t] \caption{ \label{tab:parameter} Fit parameters of the matrix model used for the comparison with the spatially extended models in figures \ref{fig:deltamatrix} and \ref{fig:gaussmatrix}.}\lineup \begin{indented} \item[] \begin{tabular}{@{}l|llll|ll|ll} \br Comparison with & $ g_0 $ & $ v $ & $\gamma_0$ & $\Delta\mu$ & $ V_0^{\rm G} $ & $\sigma$ & $ V_0^{\rm D} $ & $b$ \\ \mr double-$\delta$ model & $2.75$ & $0.28$ & $1.27$ & $-0.17$ & --- & --- & $1.0$ & $1.1$ \\ smooth potential & $2.78$ & $0.043$ & $0.92$ & \m$2.463$ & $2.5$ & $0.5$ & --- & --- \\ \br \end{tabular} \end{indented} \end{table} In the comparison of the fit parameters $g_0$, $v$ and $\gamma_0$ (see \tref{tab:parameter}), one parameter with vastly different values is evident. The coupling strength $v$ of the two potential wells in the $\delta$-potential model case is approximately $6.5$ times larger than in the case of the harmonic trap with the Gaussian potential barrier.
This means that the separation of the two wells is much less pronounced due to shallower wells in the case of the $\delta$-potential. This leads to wave functions which are not as localized as in the case of the smooth potential. Therefore the contribution of the overlap of the wave functions, which was negligible for the smooth potential, increases. The matrix model is not capable of describing the nonlinear interaction between wave functions of different modes. Only the nonlinear scattering process in the same well is taken into account. For further investigation one can increase the distance between the wells or deepen them. One might expect that the stationary states then would be in better agreement with the matrix model. We compare the model with smooth potentials for different barrier heights (\fref{fig:locality}a and \fref{fig:locality}b). For a lower potential barrier the asymmetry of the two states becomes more pronounced. The same is true for the $\delta$-model (\fref{fig:locality}c and \fref{fig:locality}d). \begin{figure} \noindent\includegraphics[width=0.95\textwidth]{fig8} \caption{ Wave functions for the ground and excited states in the Gaussian model for different potential barriers (in a) $V_0^{\rm G} = 2.5$, in b) $V_0^{\rm G} = 4.0$) for a nonlinearity of $g = 0.2$. The overlap of the Gaussians at $x = 0$ is much higher for the lower potential barrier in a) and for the excited states. } \label{fig:wavefct} \end{figure} The wave functions for the different parameter sets are shown in \fref{fig:wavefct}. Here the probability density of the ground and excited state for the smooth potential model with different heights for the potential barrier can be seen. One observes a higher probability density in the overlap region around $x=0$ for the excited states. This overlap increases for a lower potential barrier.
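The role of the overlap can be illustrated in a toy picture: if the wave function in each well is approximated by a normalized Gaussian of width $\sigma$ centred at $x = \pm a$ (an assumption made purely for illustration; the actual wave functions are not of this simple form), the overlap integral is $\langle\psi_{\rm L}|\psi_{\rm R}\rangle = \exp(-a^2/\sigma^2)$, which grows rapidly when the localization weakens. A minimal numerical sketch, with hypothetical widths:

```python
import numpy as np

def overlap(a, sigma, x_max=20.0, n=20001):
    """Overlap <psi_L|psi_R> of two normalized Gaussians of width sigma
    centred at x = -a and x = +a, by simple numerical quadrature."""
    x = np.linspace(-x_max, x_max, n)
    dx = x[1] - x[0]
    norm = (np.pi * sigma**2) ** -0.25
    psi_l = norm * np.exp(-(x + a)**2 / (2.0 * sigma**2))
    psi_r = norm * np.exp(-(x - a)**2 / (2.0 * sigma**2))
    return float(np.sum(psi_l * psi_r) * dx)

# Hypothetical separation 2a = 2.2 (as for the delta wells at b = +-1.1)
# and two illustrative localization widths
print(overlap(1.1, 0.3))   # well localized: overlap nearly negligible
print(overlap(1.1, 0.8))   # weak localization: substantial overlap
```

The same qualitative trend is what distinguishes the well-separated smooth-potential case from the shallower $\delta$-wells discussed above.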
Thus, we can conclude that the matrix model captures all relevant information of the bifurcation scenario and the $\mathcal{PT}$-symmetric properties as long as the different potential wells are sufficiently separated. A larger overlap leads to quantitative changes and the loss of a mirror symmetry of pairs of values for the chemical potential in the ($\mu$, $\gamma$)-diagram, however, it does not affect the generic structure of the states. \section{Summary and Outlook} \label{sec:4} For an experimental realization of a $\mathcal{PT}$-symmetric double-well potential the description of a physical environment which implements the gain and loss of a complex potential is an important prerequisite. By combining two double-well subsystems into one closed hermitian system we have found such a realization. For the four-dimensional matrix model without a phase difference between the two subsystems analytical solutions for all $\mathcal{PT}$-symmetric and $\mathcal{PT}$-broken states were found. Although the four-dimensional matrix model showed a new and different bifurcation scenario in comparison with the two-dimensional matrix model from \cite{Graefe12} some generic features remained the same. The matrix model showed the same qualitative bifurcation scenario as the two spatially extended models. Deviations could be observed when the two wells of the systems were not isolated enough such that the wave functions in each well had a significant overlap between the wells. In this case the solutions from the systems with a spatially resolved wave function differed from those of the matrix model. A larger overlap leads to quantitative changes and the loss of a mirror symmetry of pairs of energy eigenvalues in the $(\mu, \gamma)$-diagram, however, it does not affect the generic structure of the states. The influence of the phase difference between the two subsystems was also examined. 
While the coupling strength $\gamma$ between the two subsystems took the role of one bifurcation parameter, the phase difference $\varphi_{\rm rel}$ took the role of another, leading to a two-parameter cusp bifurcation. This bifurcation degenerated for $\varphi_{\rm rel} = 0$ into a pitchfork bifurcation. Only in this case could $\mathcal{PT}$-symmetric states be observed, which makes the phase difference between the subsystems critical for the $\mathcal{PT}$-symmetric properties of the system. The matrix model can be investigated further. Under the assumption that the two wells of the system are sufficiently isolated the matrix model reduces the description of the system to a low number of key parameters. Therefore the analytically accessible matrix model of this paper could be helpful to gain more insight into the behaviour of coupled BECs. In particular a similar approach to realize a $\mathcal{PT}$-symmetric quantum system via the coupling of two condensate wave functions was studied in \cite{Single2014} and revealed complicated stability properties. This system should also be representable in our four-mode description such that analytic expressions should be obtainable. \ack This work was supported by Deutsche Forschungsgemeinschaft.
\section{Summary} Seismic inversion helps geophysicists build accurate reservoir models for exploration and production purposes. Deep learning-based seismic inversion works by training a neural network to learn a mapping from seismic data to rock properties using well log data as the labels. However, well logs are often very limited in number due to the high cost of drilling wells. Machine learning models can suffer from overfitting and poor generalization if trained on limited data. In such cases, well log data from other surveys can provide much needed useful information for better generalization. We propose a learning scheme where we simultaneously train two network architectures, each on a different dataset. By placing a soft constraint on the weight similarity between the two networks, we make them learn from each other where useful for better generalization performance on their respective datasets. Using less than 3$\%$ of the available training data, we were able to achieve an average $r^{2}$ coefficient of 0.8399 on the acoustic impedance pseudologs of the SEAM dataset via joint learning with the Marmousi dataset. \section{Introduction} Seismic inversion refers to the process of estimating rock properties in the subsurface. This allows geophysicists to build accurate reservoir models for hydrocarbon exploration and production. While these properties can be measured directly at the well locations, they must be estimated using seismic data at the non-well locations. Classical seismic inversion usually works by starting with a smooth model of subsurface properties and forward modeling it to generate synthetic seismic data. The synthetic seismic is compared to the actual seismic and the difference between the two is used to update the model parameters. A detailed overview of classical seismic inversion methods is provided by \citet{Veeken2004SeismicIM}.
\\ Deep learning, a subset of machine learning, has in the recent past led to groundbreaking advancements in the fields of image classification \citep{Krizhevsky2017}, object detection \citep{objdetect}, image segmentation \citep{segmentation}, image and video captioning \citep{captioning}, speech recognition \citep{speech}, and machine translation \citep{DBLP:conf/emnlp/ChoMGBBSB14}. The success of deep learning in computer vision and natural language processing domains has of late inspired geophysicists to replicate these successes in the field of seismic interpretation. Machine learning has been used to solve problems in salt body delineation \citep{haibinSaltbodyDetection, AsjadSaltDetection, AmirSaltDetection}, fault detection \citep{haibinFaultDetection, HaibinFaultDetection2}, facies classification \citep{YazeedFaciesClassification, YazeedFaciesWeakClassification}, and structural similarity based seismic image retrieval and segmentation \citep{YazeedStructurelabelPrediction}. Recently, there has been a lot of interest in developing deep learning-based workflows for seismic inversion. \citet{BiswasPhysicsGuidedCNN, DasCNNInversion} used Convolutional Neural Networks (CNNs) for estimating Acoustic and Elastic Impedance from seismic data. \cite{motazRNN1} and \cite{mustafaTCN} introduced sequence modelling-based neural networks based on Recurrent Neural Networks (RNNs) and a Temporal Convolutional Network (TCN), respectively, for the estimation of various rock properties from seismic data. They demonstrated that such networks were more capable of learning temporal relationships between seismic traces for efficient rock property estimation. \citet{motazSemiSupervisedAcoustic, motazSemiSupervisedElastic} also showed how incorporating the forward model into the network architecture resulted in an implicit regularization of the network, thereby improving the quality of property estimations.
All such methods are based upon learning a mapping from seismic data to well log measurements at the well positions, and then using the learned mapping to estimate the properties at the non-well positions. One limitation with such approaches is that they require a lot of labeled training data to achieve satisfactory generalization performance. However, most surveys have only a limited number of wells, due to the high cost of drilling them. This makes machine learning models prone to overfitting if trained on such limited well log data. One way of overcoming this is to use knowledge gained from learning on well logs from other surveys in estimating rock properties on the target survey. Transfer learning is a very popular machine learning framework that uses knowledge from a source dataset while training a machine learning model on the target dataset. It has been shown to help achieve better generalization performance and quicker convergence on the target dataset. It also results in less effort being expended to manually label training examples in the target survey, especially when it is costly and time consuming. For a comprehensive review of transfer learning methodologies that have been used in the past, refer to \citep{transferlearning}. In this paper, we propose a transfer learning scheme for seismic inversion that jointly learns on multiple datasets using identical copies of the same network architecture. In addition to optimizing the losses on their respective datasets, we also impose a soft constraint on the weights of the network copies to be similar to each other. This effectively results in a knowledge sharing scheme where the two networks are learning from each other where it is mutually beneficial while being able at the same time to adapt to the specific nature of their respective dataset. 
\section{Methodology} \subsection{2-D Temporal Convolutional Network} As mentioned before, our algorithm employs two dimensional Temporal Convolutional Blocks for estimating rock properties from seismic data. The architecture, shown in Figure~\ref{fig:architecture}, consists of a feature extractor module that uses a series of 2-D convolutional kernels to extract increasingly abstract features from the input seismic image. The convolutional kernels in this module use an exponentially increasing dilation factor in depth while staying constant in width. Using increasingly dilated convolutions in depth allows us to model input seismic data temporally for efficiently capturing long term dependencies, leading to better estimation of the desired rock property. The kernel being 2-D allows us to inject spatial context awareness into our network estimations. The output of the feature extractor block is fed simultaneously into a Regression module and a Reconstruction module. The latter is responsible for reconstructing the input seismic image while the former outputs the desired rock property. This is an example of multi-task learning via representation sharing, where multiple tasks (output estimation and input reconstruction in this case) are learnt simultaneously in the hope that the network can learn more generalizable feature representations, leading to better performance on all tasks. This is especially the case when the tasks are highly related to each other. \begin{figure*}[htbp] \centering \includegraphics[width=2\columnwidth]{architecture.png} \caption{The architecture uses a series of 2-D Temporal Convolutional Blocks to extract features from the input. The input is a 2-D patch of seismic data centered at the well position.
The output of the Feature Extractor is fed simultaneously into the regression module and the reconstruction module for the estimation of the rock property and the reconstruction of the seismic input, respectively.} \label{fig:architecture} \end{figure*} \subsection{Soft Weight Sharing} As discussed before, another major component of our deep learning-based seismic inversion algorithm is learning from related datasets for rock property prediction. This is achieved by simultaneously training identical copies of the same architecture, one for each dataset. Each network receives a batch of input training examples from its respective dataset, processes them to get the outputs, and uses the corresponding ground-truths to backpropagate the losses through the network to update network weights. In addition to this, we also force the network weights in all corresponding layers to be close to each other in the L2 norm sense. By doing this, we effectively bias the networks to search the parameter space for a solution where the architecture will generalize better to inputs sampled from different distributions. However, by not constraining the weights to be exactly the same, each copy of the architecture is also free to find the optimal set of weights for its dataset in the vicinity of this solution space. Moreover, in the situation where the two datasets are very different from each other and learning on one will not help the other, the networks can choose to not learn from each other at all. The process is illustrated in Figure~\ref{fig:weight_sharing}. Consider the two networks to be represented by $\mathcal{F}$ and $\mathcal{G}$ respectively. Both $\mathcal{F}$ and $\mathcal{G}$ consist of trainable weights organized into a set of $L$ convolutional layers. Consider $\theta_{A}^{l}$ to be the weight tensor in the $l$-th layer in network $A$, where $l\in [0, L-1]$.
Then both $\mathcal{F}$ and $\mathcal{G}$ can be represented as follows: \begin{equation} \mathcal{F} = [\theta_{F}^{0}, \theta_{F}^{1},\cdots, \theta_{F}^{L-1}] \label{eq:network1} \end{equation} \begin{equation} \mathcal{G} = [\theta_{G}^{0}, \theta_{G}^{1},\cdots, \theta_{G}^{L-1}] \label{eq:network2} \end{equation} The Weight Mismatch Loss is then defined as: \begin{equation} l_{WML} = \sum_{l=0}^{L-1} \|\theta_{\mathcal{F}}^{l} - \theta_{\mathcal{G}}^{l}\|_{2}^{2} \label{eq:weight_mismatch} \end{equation} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{weight_sharing.png} \caption{The two architectures are constrained to have weights in all corresponding layers close to each other in the L2 norm sense.} \label{fig:weight_sharing} \end{figure} \subsection{Network Training} Consider $D_{1} = \{X_{1}, Y_{1}\}$ and $D_{2} = \{X_{2}, Y_{2}\}$ to represent two datasets, where the subscript refers to the dataset. $X = \{x^{1}, ..., x^{N}| x^{i} \in \mathbb{R}^{d \times m}\}$ represents the collection of $N$ seismic images in a dataset, where each $x^{i}$ is a $d\times m$ dimensional image. $d$ refers to the depth of the image while $m$ is the width. $Y = \{y^{1}, ..., y^{N}|y^{i}\in\mathbb{R}^{d}\}$ refers to collection of well log properties corresponding to each $x^{i} \in X$, where each $y^{i}$ is a $d$ dimensional rock property trace. 
A batch of seismic images from each dataset is processed by its respective network to get the estimated well properties, $\hat{y}^{i}$, as well as the reconstructed seismic images, $\hat{x}^{i}$, as shown below: \begin{equation} \hat{y}_{1}^{i}, \hat{x}_{1}^{i} = \mathcal{F}_{\Theta}(x_{1}^{i}) \label{eq:4} \end{equation} \begin{equation} \hat{y}_{2}^{i}, \hat{x}_{2}^{i} = \mathcal{G}_{\Theta}(x_{2}^{i}) \label{eq:5} \end{equation} The regression and reconstruction losses are then defined as: \begin{equation} l_{reg} = \frac{1}{N_{1}}\sum_{i=1}^{N_{1}}\|\hat{y}_{1}^{i} - y_{1}^{i}\|_{2}^{2} + \frac{1}{N_{2}}\sum_{i=1}^{N_{2}}\|\hat{y}_{2}^{i} - y_{2}^{i}\|_{2}^{2} \end{equation} and \begin{equation} l_{recon} = \frac{1}{N_{1}}\sum_{i=1}^{N_{1}}\|\hat{x}_{1}^{i} - x_{1}^{i}\|_{2}^{2} + \frac{1}{N_{2}}\sum_{i=1}^{N_{2}}\|\hat{x}_{2}^{i} - x_{2}^{i}\|_{2}^{2} \end{equation} where $N_{1}$ and $N_{2}$ are the batch sizes in the two datasets. The total loss is then obtained as: \begin{equation} \textrm{Total Loss} = l_{reg} + l_{recon} + \alpha\times l_{WML}, \end{equation} where $\alpha$ is a hyperparameter that controls the influence of the weight mismatch loss on the training of the two networks. Over each training iteration, the loss obtained above is backpropagated through both networks and the weights updated to reduce the training error at the next iteration. If $\alpha$ is set too high, it forces the networks to look for a common solution that may not be optimal for either dataset individually. If $\alpha$ is set too low, it makes the training of the two networks effectively independent of each other. An intermediate value for $\alpha$ results in the two networks learning from each other when it is useful for optimization on their own datasets, and ignoring each other when knowledge sharing is not useful.
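To make the effect of the weight mismatch term concrete, the following toy sketch replaces each network by a single scalar weight and each dataset by a hypothetical one-dimensional regression problem (all values are made up; this is not the 2-D temporal convolutional architecture used in this work):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "datasets" with slightly different ground-truth weights,
# standing in (very loosely) for the two surveys; all values made up.
w_true_1, w_true_2 = 2.0, 2.5
X1 = rng.normal(size=64); Y1 = w_true_1 * X1
X2 = rng.normal(size=64); Y2 = w_true_2 * X2

def train_jointly(alpha, steps=2000, lr=0.01):
    """Gradient descent on  MSE_1 + MSE_2 + alpha * (w1 - w2)**2,
    the scalar analogue of the weight mismatch loss l_WML."""
    w1 = w2 = 0.0
    for _ in range(steps):
        g1 = np.mean(2.0 * (w1 * X1 - Y1) * X1) + 2.0 * alpha * (w1 - w2)
        g2 = np.mean(2.0 * (w2 * X2 - Y2) * X2) + 2.0 * alpha * (w2 - w1)
        w1, w2 = w1 - lr * g1, w2 - lr * g2
    return w1, w2

w1_soft, w2_soft = train_jointly(alpha=1.0)  # soft weight sharing
w1_free, w2_free = train_jointly(alpha=0.0)  # independent training
```

With $\alpha = 0$ each model recovers its own optimum; a finite $\alpha$ pulls the two weight values toward each other without forcing them to be identical, mirroring the soft constraint described above.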
\section{Results and Discussion} We demonstrate our workflow for the estimation of Acoustic Impedance from poststack, migrated seismic data on the open source Marmousi and SEAM datasets. We set up a network architecture as shown in Figure~\ref{fig:architecture} for each dataset, and train them jointly for 900 epochs. We use ADAM \citep{kingma2014adam} as the optimization algorithm, which adaptively sets the learning rate during the progression of training. We also impose a weight decay constraint of 0.001 which helps prevent overfitting by constraining the L2 norm of the network weights. We uniformly sample 12 acoustic impedance pseudologs and their corresponding seismic trace data from the crossline section in SEAM located at North 23900m. For Marmousi, we sample uniformly 51 acoustic impedance pseudologs and the corresponding seismic data in the dataset. Marmousi is a synthetic dataset with the seismic data generated by simple convolutional forward modeling, while SEAM contains migrated seismic data made to simulate real world acquisition conditions and artifacts. This makes it a much harder dataset to learn on, especially with only a limited number of pseudologs available. We use a greater number of pseudologs in Marmousi to provide the network training on SEAM with sufficient information to learn from. The results of this training scheme are illustrated in Figure~\ref{fig:seam}. One can clearly see that our algorithm is able to delineate, with sufficient detail, the vertical variations in acoustic impedance, especially in the left half of the section. The top of the salt dome has also been marked out to a high degree of accuracy, despite us not having access to many pseudologs there. We have also been able to mark out the top and the bottom of the high impedance arch occurring around a depth of 12000m to a reasonable degree of detail. One can see that the estimations get noisier in the bottom-right portion of the section.
This is to be expected, since the seismic data in these regions is extremely weak and sometimes not sensitive at all to variations in acoustic impedance. Despite this, our algorithm is still able to capture the general increasing trend in acoustic impedance values well. Figure~\ref{fig:traces} shows individual acoustic impedance traces extracted from both the estimated and ground truth acoustic impedance sections at select positions and overlaid on top of each other. As explained before, the two largely agree with each other for the most part, except around a depth of 12000m, where a sudden jump in the acoustic impedance value coincides with weak seismic amplitudes in the seismic section. The $r^{2}$ coefficient, also called the coefficient of determination, measures the goodness of fit of a model. Given a set of observed values and a corresponding set of predicted values, the $r^{2}$ coefficient indicates how well the predicted values approximate the real ones; a value of 1 indicates a perfect fit. The average $r^{2}$ coefficient between the estimated and ground truth acoustic impedance sections on SEAM is 0.8399, which indicates that our model captures acoustic impedance well, given that it was trained with only 12 samples, around 2$\%$ of the total available training data. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{seam_AI.png} \caption{Estimated acoustic impedance section (top) vs.\ the ground truth (bottom).} \label{fig:seam} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{trace_plots.png} \caption{Trace plots for select positions in both the estimated and ground-truth acoustic impedance sections.} \label{fig:traces} \end{figure} \section{Conclusion} In this work, we demonstrate a deep learning-based seismic inversion workflow where we jointly train identical copies of a neural network on two different datasets.
We show how, by placing a soft constraint on the network weights to be similar, we allow the transfer of useful knowledge from one dataset to the other while simultaneously letting the networks adapt to their specific datasets. An important implication of this workflow is that one need not use a large number of training examples in either dataset, since shared information between the two serves to compensate. Another implication of this work is that the workflow can be scaled to any number of datasets. We demonstrate the utility of this approach by estimating acoustic impedance on the SEAM dataset, although the methodology is equally valid for other rock properties. \bibliographystyle{seg}
\section{Introduction} \label{sec:intro} \vspace*{-0.3cm} At large momentum transfer the hard scattering approach (HSA) \cite{lep:80} provides a scheme to calculate exclusive processes. Observables are described as convolutions of hadronic wave functions, which embody soft non-perturbative physics, and hard scattering amplitudes $T_H$ to be calculated from perturbative QCD. In most cases only the contribution from the lowest-order pQCD approach in the collinear approximation using valence Fock states only (termed the standard HSA) has been worked out. Applications of the standard HSA to space-like exclusive reactions, such as the magnetic form factor of the nucleon, the pion form factor or Compton scattering off protons, revealed that the results are only in fair agreement with experiment if hadronic wave functions are used that are strongly concentrated in the end-point regions where one of the quark momentum fractions, $x$, tends to zero. As has been pointed out by several authors (e.g.\ \cite{isg:89,rad:91}), the results obtained from such wave functions are dominated by contributions from the end-point regions where perturbative QCD cannot readily be applied. Hence, despite the agreement with experiment, the predictions of the standard HSA are theoretically inconsistent for such wave functions. It should also be stressed that the large momentum transfer behaviour of the helicity-flip controlled Pauli form factor of the proton remains unexplained within the standard HSA. Applications of the HSA to time-like exclusive processes fail in most cases (e.g.\ $G_M$, $F_{\pi}$, $\gamma\gamma\to p\bar{p}$). The predictions for the integrated $\gamma\gamma\to \pi\pi$ cross-section ($|\cos{\theta}| \leq 0.6$) are in fair agreement with the data, whereas the predictions for the angular distribution fail. Exclusive charmonium decays constitute another class of time-like reactions.
If the end-point region concentrated wave functions are employed again, the standard HSA provides results in fair agreement with the data in many cases. It should be noted that in most calculations of exclusive charmonium decays \cite{dun:80} $\alpha_s$ values of the order of $0.2 - 0.3$ are employed. Such values do not match $\alpha_s$ evaluated at the charm quark mass, the characteristic scale for these decays ($\alpha_s (m_c=1.5 \,{\rm GeV}) = 0.37$ in one-loop approximation with $\Lambda_{QCD}=200 \,{\rm MeV}$). Since high powers of $\alpha_s$ are involved in charmonium decays, a large factor of uncertainty is hidden in the predictions. Constraining the pion wave function \cite{jak:96,kro:96} from the recent precise data on the $\pi\gamma$ transition form factor \cite{cleo:95}, one observes an order-of-magnitude discrepancy between data and HSA predictions for charmonium decays into two pions. In \cite{bol:96} contributions from the $c\bar{c}g$ Fock state are suggested as the solution of this puzzle. I am going to discuss these topics in my talk. I shall also discuss the large momentum transfer behaviour of the pion form factor in the light of the new information on the pion's wave function. \section{The $\pi$-$\gamma$ transition form factor} \label{sec:pigaff} \vspace*{-0.3cm} The apparent success of the end-point concentrated wave functions, in spite of the theoretical inconsistencies, prevented progress in understanding hard exclusive reactions for some time. Recently, with the advent of the CLEO data on the $\pi\gamma$ transition form factor $F_{\pi\gamma}$ \cite{cleo:95}, the situation has changed.
The leading twist result for that form factor\footnote{ The pion mass as well as the light current quark masses are neglected throughout.}, including $\alpha_s$-corrections, reads \cite{lep:80} \begin{equation} \label{leadtwisteq} F_{\pi\gamma}(Q^2) = \frac{\sqrt 2}{3}\,\langle x^{-1}\rangle\frac{f_\pi}{Q^2}\; [\,1+\frac{\alpha_s(\mu_R)}{2\pi} K_{\pi\gamma}(Q^2,\mu_R) + {\cal O}(\alpha_s^2)\,]. \end{equation} $f_{\pi}$ is the usual pion decay constant (130.7 \,{\rm MeV}) and $\mu_R$ represents the renormalization scale. The function $K_{\pi\gamma}$ has been calculated by Braaten \cite{bra:83} in the $\overline{MS}$ scheme. $\langle x^{-1}\rangle$ is the $1/x$ moment of the pion distribution amplitude, $\phi$, which represents the light-cone wave function of the pion integrated over transverse quark momenta, ${\bf k}_{\perp}$, up to a factorization scale, $\mu_F$, of order $Q$. The distribution amplitude\ can be expanded upon Gegenbauer polynomials, $C_n^{3/2}$, the eigenfunctions of the evolution kernel for mesons \cite{lep:80} \begin{equation} \label{evoleq} \phi_{\pi}(x,\mu_F)=\phi_{AS}(x)\left[1+ \sum^\infty_{n=2,4,...}B_n(\mu_0)\left( \frac{\alpha_s\left(\mu_F\right)}{\alpha_s\left(\mu_0\right)}\right) ^{\gamma_n}\,C_n^{3/2}(2x-1)\right] \end{equation} where the asymptotic distribution amplitude\ is $\phi_{AS}(x)=6x(1-x)$. The $1/x$ moment of the distribution amplitude\ reads \begin{equation} \langle x^{-1}\rangle=3\left[1+\sum^\infty_{n=2,4,...}B_n(\mu_0) \left(\frac{\alpha_s(\mu_F)}{\alpha_s(\mu_0)}\right)^{\gamma_n}\right] = 3\left[1+\sum^\infty_{n=2,4,...} B_n(\mu_F)\right]. \label{leadmomeq} \end{equation} The process-in\-depen\-dent ex\-pan\-sion coefficients $B_n$ embody the soft physics; they are not calculable at present. $\mu_0$ is a typical hadronic scale, actually $\mu_0=0.5 \,{\rm GeV}$. 
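As a numerical sanity check of (\ref{leadmomeq}), the $1/x$ moment can be evaluated by direct quadrature for a truncated Gegenbauer expansion; the short Python sketch below is illustrative only and not part of the analysis.

```python
def phi(x, b2):
    # truncated expansion: phi_AS(x) * [1 + b2 * C_2^{3/2}(2x - 1)]
    c2 = 1.5 * (5.0 * (2.0 * x - 1.0) ** 2 - 1.0)
    return 6.0 * x * (1.0 - x) * (1.0 + b2 * c2)

def inv_moment(b2, n=100000):
    # midpoint-rule evaluation of the integral of phi(x)/x over [0, 1]
    h = 1.0 / n
    return h * sum(phi((i + 0.5) * h, b2) / ((i + 0.5) * h) for i in range(n))
```

With $B_2=0$ this reproduces $\langle x^{-1}\rangle=3$ for the asymptotic distribution amplitude, while $B_2=2/3$ (Chernyak-Zhitnitsky) gives 5, in agreement with the closed form $3(1+B_2)$.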
Since the anomalous dimensions, $\gamma_n$, are positive fractional numbers increasing with $n$ (e.g.\ $\gamma_2=50/81$), any distribution amplitude\ evolves into the asymptotic distribution amplitude\ for $\ln{Q^2}\to\infty$; higher order terms are gradually suppressed. Hence, the limiting behaviour of the transition form factor is \begin{equation} \label{asy} F_{\pi\gamma} \longrightarrow \sqrt{2} f_{\pi} /Q^2 \end{equation} which is a parameter-free QCD prediction \cite{wal:74}. As comparison with the CLEO data \cite{cleo:95} reveals, the limiting behaviour is approached from below. At 8 \,{\rm GeV}$^2$ the data deviate by only about $15 \%$ from (\ref{asy}) (see Fig.\ref{fig:pigaff}). \begin{figure}[t] \[ \psfig{figure=pgraulfs_fig1.ps,% bbllx=20pt,bblly=270pt,bburx=570pt,bbury=660pt,% width=8cm,clip=} \] \vspace*{-1.0cm} \caption[dummy2]{The scaled $\pi\gamma$ transition form factor vs.\ $Q^2$. The solid (dashed) line represents the results obtained with the modified HSA using the asymptotic (Chernyak-Zhitnitsky) wave function. The evolution of the Chernyak-Zhitnitsky wave function is taken into account. The dotted line represents the limiting behaviour $\sqrt 2f_\pi$. Data are taken from \cite{cleo:95,cello:91}.} \label{fig:pigaff} \end{figure} In order to give a quantitative estimate of the allowed deviations from the asymptotic distribution amplitude, one may assume that $B_2$ is the only non-zero expansion coefficient in (\ref{evoleq}). The truncated series suffices to parametrize small deviations. Moreover, it has the advantage of interpolating smoothly between the asymptotic distribution amplitude\ and the frequently used Chernyak-Zhitnitsky distribution amplitude\ \cite{che:82} ($B_2=2/3$; $C_2^{3/2}(\xi)=3/2(5\xi^2-1)$). For large momentum transfer the assumption on the expansion coefficients is justified by the properties of the anomalous dimensions $\gamma_n$.
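To quantify the suppression of the higher-order terms, one may evolve $B_2$ with the one-loop coupling. The sketch below uses $\gamma_2=50/81$, $\mu_0=0.5\,{\rm GeV}$ and $\Lambda_{QCD}=200\,{\rm MeV}$ from the text; a fixed $n_f=4$ is an assumed simplification (flavour thresholds are ignored).

```python
import math

def alpha_s(mu, lam=0.2, nf=4):
    # one-loop running coupling with fixed nf (no flavour thresholds)
    return 12.0 * math.pi / ((33.0 - 2.0 * nf) * math.log(mu**2 / lam**2))

def b2_evolved(b2_mu0, mu_f, mu0=0.5, gamma2=50.0 / 81.0):
    # leading-order evolution of the second Gegenbauer coefficient
    return b2_mu0 * (alpha_s(mu_f) / alpha_s(mu0)) ** gamma2
```

As a consistency check, the same routine gives $\alpha_s(1.5\,{\rm GeV})\approx 0.37$, the one-loop value quoted in the introduction; at $\mu_F=\sqrt{8}\,{\rm GeV}$ the coefficient $B_2$ is reduced to roughly half of its value at $\mu_0$, illustrating the slow approach to the asymptotic distribution amplitude.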
In \cite{kro:96} it is shown that the leading twist, lowest order pQCD result (\ref{leadtwisteq}) nicely fits the CLEO data for $B_2^{LO}(\mu_0)=-0.39\, \pm 0.05$. Using Braaten's result for $K_{\pi\gamma}$ \cite{bra:83} that, choosing $\mu_F=\mu_R=Q$, reads \begin{equation} \label{corr} K_{\pi\gamma} = -\frac{10}{3}\;\frac{1-59/72\,B_2 (Q^2)} {1+B_2 (Q^2)}\, , \end{equation} one finds $B_2^{NLO}(\mu_0)=-0.17\, \pm 0.05$ from a fit to the CLEO data. Braaten's analysis is, however, incomplete insofar as only the $\alpha_s$ corrections to the hard scattering amplitude have been considered, while the corresponding corrections to the kernel of the evolution equation for the pion's distribution amplitude were ignored. As M\"uller \cite{mue:95} has shown recently, next-to-leading order evolution provides logarithmic modifications in the end-point regions for any distribution amplitude, i.e.\ for the asymptotic one too. An estimate however reveals that the modifications of the evolution behaviour in next-to-leading order are very small and can safely be neglected. To summarize, the $F_{\pi\gamma}$ form factor requires a distribution amplitude\ in a leading twist analysis that is narrower than the asymptotic one in the momentum transfer region of a few \,{\rm GeV}$^2$. The Chernyak-Zhitnitsky distribution amplitude\ is in clear conflict with the data and should, therefore, be discarded\footnote{ In \cite{BHL:83,cao:96} a modification of the pion wave function is proposed where the distribution amplitude\ is multiplied by the exponential $\exp\left[-m_q^2a_{\pi}^2/x(1-x)\right]$. The parameter $m_q$ represents a constituent quark mass of, say, $330\,{\rm MeV}$. Since the exponential substantially deviates from unity only in the end-point regions, it leads to a strong additional suppression in the case of the Chernyak-Zhitnitsky distribution amplitude\ ($\langle x^{-1}\rangle$ changes from a value of 5 to 3.71 at the scale $\mu_0$).
For narrow distribution amplitudes\ ($B_2\leq 0$), on the other hand, the exponential has only a minor bearing on the results for $F_{\pi\gamma}$.}. Recently a modified HSA has been proposed by Botts, Li and Sterman \cite{bot:89} in which transverse degrees of freedom as well as Sudakov suppressions are taken into account. This approach has the advantage of strongly suppressed end-point regions. Hence, the perturbative contributions can be calculated self-consistently in the sense that the bulk of the perturbative contribution is accumulated in regions of reasonably small values of the strong coupling constant. It is to be stressed that the effects of the transverse degrees of freedom taken into account in the modified HSA represent soft contributions of higher-twist type. Still, modified HSA calculations are restricted to the dominant (valence) Fock state. Another advantage of the modified HSA is that the renormalization scale can be chosen in such a way that large logs from higher order perturbation theory are eliminated. Such a choice of the renormalization scale is accompanied by $\alpha_s$ singularities in the end-point regions which are, however, compensated by the Sudakov factor in the modified HSA. Singularities produced by the evolution of the wave function are also cancelled by the Sudakov factor. Adapting the modified HSA to the case of $\pi\gamma$ transitions, one can write the corresponding form factor as \cite{jak:96,kro:96} \begin{equation} F_{\pi\gamma}(Q^2)=\int {\rm d} x\,\frac{{\rm d}^2{\bf b}}{4\pi} \hat\Psi_{\pi}(x,-{\bf b},\mu_F)\, \hat T_H\left(x,{\bf b},Q\right)\,\exp\left[-S\left(x,b,Q\right)\right] \label{fpgeq} \end{equation} up to $\cal {O}$($\alpha_s$, $k^2_{\perp}/Q^2$) corrections. ${\bf b}$ is the quark-antiquark separation and is canonically conjugate to the usual transverse momentum ${\bf k}_{\perp}$.
The use of the transverse configuration space is mandatory because the Sudakov exponent $S$ is only known in that space \cite{bot:89}. The Sudakov exponent comprises those gluonic radiative corrections not taken into account in the evolution of the wave function. $\hat T_H$ is the Fourier transform of the lowest order momentum space hard scattering amplitude. It reads \begin{equation} \hat T_H\left(x,{\bf b},Q\right)=\frac{2}{\sqrt 3\pi}K_0 \left(\sqrt{1-x}\,Q\,b\right) \label{hattheq} \end{equation} where $K_0$ is the modified Bessel function of order zero. Due to the properties of the Sudakov exponent any contribution is damped asymptotically, i.e. for $\ln (Q^2/\mu_0^2)\to\infty$, except those from configurations with small quark-antiquark separations and, as can be shown, the limiting behaviour (\ref{asy}) emerges. $b$ plays the role of an infrared cut-off; it sets up the interface between non-perturbative soft gluon contributions - still contained in the hadronic wave function - and perturbative soft gluon contributions accounted for by the Sudakov factor. Hence, the factorization scale $\mu_F$ is to be taken as $1/b$. Finally, $\hat\Psi_{\pi}$ is the Fourier transform of the momentum space (light-cone) wave function of the pion for which a Gaussian ${\bf k}_{\perp}$-dependence is employed \begin{equation} \label{gaussian} \Psi_{\pi}\left(x,{\bf k}_{\perp};\mu_F\right) = \frac{f_\pi}{2\sqrt 6}\, \phi_{\pi}(x,\mu_F) N\exp{\left(-a^2_{\pi}(\mu_F)\frac{k_{\perp}^2}{x(1-x)}\right)}. \end{equation} Here $N=16\pi^2 a^2_{\pi}/(x(1-x))$ and, for a distribution amplitude\ with $B_n=0$ for $n\geq 4$, $a_{\pi}=1/(\pi f_{\pi} \sqrt{8(1+B_2)})$. The $\pi^0\to \gamma\gamma$ constraint \cite{BHL:83} is automatically satisfied for that choice of the transverse size parameter $a_{\pi}$. $\Psi_{\pi}$ represents a soft wave function, i.e.\ a full wave function with its perturbative tail removed from it.
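For orientation, the transverse size parameter follows directly from the expression above; a one-line numerical check (with $f_\pi=130.7\,{\rm MeV}$ as quoted earlier, working in GeV units):

```python
import math

F_PI = 0.1307  # pion decay constant in GeV

def a_pi(b2=0.0):
    # transverse size parameter of the Gaussian wave function
    return 1.0 / (math.pi * F_PI * math.sqrt(8.0 * (1.0 + b2)))
```

For $B_2=0$ this gives $a_\pi \approx 0.86\,{\rm GeV}^{-1}$.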
For $B_2=0$ the wave function (\ref{gaussian}) leads to a valence Fock state probability of 0.25 and a r.m.s.\ radius of $0.42$ fm. Using the wave function (\ref{gaussian}) in a modified HSA calculation, one finds excellent agreement with the CLEO \cite{cleo:95} and CELLO \cite{cello:91} data above $Q^2 \simeq 1\;\,{\rm GeV}^2$ for $B_2(\mu_0) =-0.006 \pm 0.014$ \cite{jak:96,kro:96} (see Fig.\ref{fig:pigaff}). Hence, the asymptotic wave function, i.e.\ the asymptotic distribution amplitude\ combined with the Gaussian ${\bf k}_{\perp}$-dependence, works very well if the modified HSA is used. A similar analysis of the $\eta\gamma$ and the $\eta'\gamma$ transition form factors has been carried out in \cite{jak:96}. There it is important to take into account mass corrections and the $\eta - \eta'$ mixing. The results of that analysis are in excellent agreement with the available data including the recent CLEO data \cite{cleo:95}. The values of the $\eta - \eta'$ mixing angle and the decay constants are calculated in \cite{jak:96} to be $\theta_P=-18^{\circ} \pm 2^{\circ}$, $f_{\eta}= 175 \pm 10 \,{\rm MeV}$ and $f_{\eta'}= 95\pm 6 \,{\rm MeV}$, respectively. The $\eta_c\gamma$ transition form factor can be analyzed in the same manner. Estimates of that form factor can be found in \cite{aur:96}. With the advent of the CLEO data the $\pi\gamma$ transition form factor attracted much interest and, besides \cite{jak:96,kro:96}, many papers have been devoted to its analysis elucidating various aspects of it \cite{cao:96,ong:95,rad:95,ani:96,ans:95}. Particularly interesting is the generalization of (\ref{leadtwisteq}) to the case of two virtual photons.
In the standard HSA and again with $B_n=0$ for $n\geq 4$, the $\pi\gamma^*$ transition form factor reads \cite{bra:83} \begin{eqnarray} F_{\pi\gamma^*}(Q^2,\omega)&=& \sqrt{2}\frac{f_{\pi}}{Q^2} \frac{1}{(1-\omega)^3} \Big\{ [ 1 - \omega^2 + 2\omega \ln{\omega}] \nonumber\\ && \times [1 + \frac{1+28\omega+\omega^2}{(1-\omega)^2} B_2(\mu_F)] + 10 \omega \ln{\omega} B_2(\mu_F)\Big \} \label{two} \end{eqnarray} where $\omega=Q'^{2}/Q^2$. The larger one of the two photon virtualities is denoted by $Q^2$, the smaller one by $Q'^2$. The factorization scale may be chosen as $\mu_F= Q \sqrt{1+\omega}$. $\alpha_s$-corrections to $F_{\pi\gamma^*}$ can be found in \cite{bra:83} and an estimate of power corrections in \cite{gor:89}. The treatment of $\pi\gamma^*$ transitions within the modified HSA is a straightforward generalization of (\ref{fpgeq}) \cite{ong:95}. Interestingly, the $F_{\pi\gamma^*}$ form factor still behaves as $Q^{-2}$ at large $Q^2$. This is to be contrasted with the $Q^{-2}Q'^{-2}$ behaviour of the vector meson dominance model \cite{kes:93}. In the limit $\omega\to 1$, (\ref{two}) simplifies to \begin{equation} F_{\pi\gamma^*}=\frac{\sqrt{2}}{3}\frac{f_{\pi}}{Q^2} \left [ 1 + \frac{1}{2}(1 - \omega) (1 - 12 B_2(\mu_F)) \right ]. \label{kappa} \end{equation} The limiting behaviour of the form factor for $\omega=1$, which is strictly independent of the form of the distribution amplitude, has also been derived from QCD sum rules \cite{nov:84}. In \cite{ans:95} the triangle diagram is analyzed with the most general form of the $\pi q \bar{q}$ vertex. The result obtained for $F_{\pi\gamma^*}$ in that paper is similar to (\ref{two}) provided $B_2$ is put to zero in (\ref{two}). The differences between the two results are strongest at $\omega=0$ (about $9\%$) while both results coincide at $\omega=1$.
\section{Pionic decays of charmonium} \label{sec:charmonium} \vspace*{-0.3cm} In view of the results for $F_{\pi\gamma}$ a fresh analysis of the decays $\chi_{cJ}\to \pi\pi$ is in order. Using the information on the $\pi$ wave function obtained from the analysis of $F_{\pi\gamma}$, one finds the following values for the partial widths \begin{equation} \label{shsa} \Gamma (\chi_{c0(2)}\to\pi^+\pi^-)\, =\, 0.872\; (0.011)\, \,{\rm keV} \end{equation} within the standard HSA \cite{bol:96}. As usual the renormalization and the factorization scales are identified in that calculation and put equal to the $c$-quark mass. The parameter describing the $\chi_{cJ}$ state is the derivative $R'_P(0)$ of the non-relativistic $c\bar{c}$ wave function at the origin (in coordinate space) appropriate for the dominant Fock state of the $\chi_{cJ}$, a $c\bar{c}$ pair in a colour-singlet state with quantum numbers ${}^{2S+1}L_J={}^3P_J$. $m_c=1.5\;\,{\rm GeV}$ and, of course, the leading order standard HSA value -0.39 for $B_2(\mu_0)$ are chosen as well as $R'_P(0)=0.22\,{\rm GeV}^{5/2}$ which is consistent with a global fit of charmonium parameters \cite{man:95} as well as with results for charmonium radii from potential models \cite{buc:81}. In \cite{bol:96} the modified HSA is also used to calculate the $\chi_{cJ}\to \pi\pi$ decay widths. Taking $B_2=0$ and the other parameters as quoted above, one finds \begin{equation} \label{mhsa} \Gamma (\chi_{c0(2)}\to\pi^+\pi^-)\, =\, 8.22\; (0.41)\, \,{\rm keV}. \end{equation} For comparison the experimental data as quoted in \cite{pdg} and reported in a recent paper of the BES collaboration \cite{bes} are \begin{eqnarray} \label{dat} \Gamma (\chi_{c0}\to\pi^+\pi^-)&=& 105\; \pm 30\phantom{.3} \,{\rm keV} \;({\rm PDG}),\nonumber \\ && 62.3\pm 17.3 \,{\rm keV} \;({\rm BES}), \nonumber \\ \Gamma (\chi_{c2}\to\pi^+\pi^-)& =& 3.8\;\;\pm 2.0\phantom{3} \,{\rm keV} \;({\rm PDG}),\nonumber \\ && 3.04 \pm 0.73 \,{\rm keV}\; ({\rm BES}). 
\end{eqnarray} One notes that both the theoretical results, (\ref{shsa}) and (\ref{mhsa}), fail by at least an order of magnitude. To assess the uncertainties of the theoretical results one may vary the parameters $m_c$, $B_2$ and $\Lambda_{QCD}$. However, even if the parameters are pushed to their extreme values, the predicted rates remain well below the data. Thus, one has to conclude that calculations based on the assumption that the $\chi_{cJ}$ is a pure $c\bar{c}$ state are not sufficient to explain the observed rates. The necessary corrections would have to be larger than the leading terms. A new mechanism is therefore called for. Recently, the importance of higher Fock states in understanding the production and the {\em inclusive} decays of charmonium has been pointed out \cite{bod:95}. It is therefore tempting to regard the inclusion of contributions from the $|c\bar{c}_8 (^3S_1)g\rangle$ Fock state in {\em exclusive} $\chi_{cJ}$ decays as the solution to the failure of the HSA. The usual higher Fock state suppression by powers of $1/Q^2$ \cite{far:73}, where $Q=m_c$ in the present case, does not occur here, as a simple dimensional argument reveals: the colour-singlet and octet contributions to the decay amplitude behave as \begin{equation} \label{powers} M^{(c)}_J \sim f_{\pi}^2 f^{(c)}_J m_c^{-n_{c}}. \end{equation} The singlet decay constant, $f^{(1)}_J$, represents the derivative of a two-particle coordinate space wave function at the origin. Hence it is of dimension GeV$^2$. The octet decay constant, $f^{(8)}_J$, as a three-particle coordinate space wave function at the origin, is also of dimension GeV$^2$. Since $M^{(c)}_J$ is of dimension GeV, $n_{c}=3$ in both cases. Note that the $\chi_{c J}$ decay constants may also depend on $m_c$. Obviously, the colour-octet contribution will also play an important role in the case of the $\chi_{b J}$ decays.
In \cite{bol:96} the colour-octet contributions to the exclusive $\chi_{cJ}$ decays are estimated by calculating the hard scattering amplitude from the set of Feynman graphs shown in Fig.\ \ref{fig:col} and convoluting it with the asymptotic pion wave function. \begin{figure}[t] \unitlength 1mm \begin{picture}(160,100) \put( 5,30){\psfig{figure=coloctgraphs1.ps,% bbllx=10pt,bblly=335pt,bburx=580pt,bbury=810pt,% width=6.8cm,clip=} } \put(80,30){\psfig{figure=coloctgraphs2.ps,% bbllx=10pt,bblly=335pt,bburx=580pt,bbury=810pt,% width=6.8cm,clip=} } \end{picture} \caption{Representatives of the various groups of colour-octet decay graphs. \label{fig:col}} \vspace{-0.3cm} \end{figure} The colour-octet and singlet contributions are to be added coherently. The $\chi_{cJ}\to \pi\pi$ decay widths are given in terms of a single non-perturbative parameter $\kappa$ which approximately accounts for the soft physics in the colour-octet contribution. A fit to the data \cite{pdg,bes} yields $\kappa=0.16 \,{\rm GeV}^2$ (with $m_c=1.5$ GeV; $\Lambda_{QCD}=0.2$ GeV) and the widths \begin{equation} \label{so} \Gamma (\chi_{c0(2)}\to\pi^+\pi^-)\, =\, 49.85\; (3.54)\, \,{\rm keV}. \end{equation} Comparison with (\ref{dat}) reveals that the inclusion of the colour-octet mechanism brings predictions and data in generally good agreement. The value found for the parameter $\kappa$ has a reasonable interpretation in terms of charmonium properties and the mean transverse momentum of the quarks inside the pions. Results for the decays into pairs of uncharged pions are presented in Table~\ref{tab:pionzero}. The quoted results for the pionic charmonium decays refer to a calculation within the standard HSA, but similarly good results are found when the modified HSA is used \cite{bol:97}. The only soft parameter appearing in the latter calculation is the octet-decay constant $f_J^{(8)}$ of the charmonium state.
\begin{table} \vspace*{0.5cm} \begin{center} \begin{tabular}{|c|r|r|r|r|} \hline & \multicolumn{2}{|c|}{$\Gamma(\chi_{c J} \to \pi^0\pi^0)\,$[keV] } & \multicolumn{2}{|c|}{${\rm BR}(\chi_{c J} \to \pi^0\pi^0)\,$[\%]} \\ \hline $B_2 = 0$ & $25.7$ & $1.81$ & $0.18$ & $0.091$ \\ \hline Exp.\ PDG \cite{pdg} & $42 \pm 18$ & $2.2 \pm 0.6$ & $0.31 \pm 0.06$ & $0.110 \pm 0.028$ \\ \hline \end{tabular} \caption[dummy]{Decay widths and branching ratios of $\chi_{c J} \rightarrow \pi^0 \pi^0$ (colour-octet contributions included; $m_c=1.5\,$GeV, $\Lambda_{QCD}=0.2\,$GeV).} \label{tab:pionzero} \end{center} \end{table} Thus it seems that the colour-octet mechanism leads to a satisfactory explanation of the decay rates of the $\chi_{cJ}$ into two pions. Of course, that mechanism has to pass more tests in exclusive reactions before this issue can be considered as being settled. \section{The $\pi$ form factor} \label{sec:pion ff} \vspace*{-0.3cm} Let us now turn to the case of the $\pi$ form factor and discuss the implications of the constraints on the pion's wave function obtained from the $\pi\gamma$ analysis. The leading twist result for the pion form factor can be brought into a form similar to (\ref{leadtwisteq}) \begin{equation} \label{leadtwist} F_{\pi}(Q^2) = \frac{8\pi}{9}\,\langle x^{-1}\rangle^2 \frac{f_\pi^2}{Q^2}\;\alpha_s(\mu_R)\; [\,1+\frac{\alpha_s(\mu_R)}{2\pi} K_{\pi}(Q^2,\mu_R) + {\cal O}(\alpha_s^2)\,]. \end{equation} Choosing $\mu_F=\mu_R$ and using the value of $B_2^{LO}(\mu_0)$ determined in the leading twist analysis of the $F_{\pi\gamma}$ as well as a value of, say, 0.4 for $\alpha_s$, one obtains the result $0.097 \,{\rm GeV}^2/Q^2$ for $F_{\pi}$ to lowest order pQCD. That result is much smaller than the admittedly poor experimental result \cite{beb:76}: $F_{\pi}=0.35\pm 0.10\,{\rm GeV}^2/Q^2$. The $\alpha_s$-corrections are too small to account for that discrepancy \cite{fie:81}. 
The modified HSA likewise provides too small a perturbative contribution \cite{jak:93}. It is important to remember at this point that, formally, the perturbative contribution to the pion form factor represents the overlap of the large momentum tails of the initial and final state wave functions. But the form factor also gets a contribution from the overlap of the soft wave functions $\hat\Psi_{\pi}$. That contribution, frequently termed the Feynman contribution, is customarily assumed to be negligible already at momentum transfers as low as a few GeV$^2$\footnote{ At large $Q^2$ the Feynman contribution is suppressed by $1/Q^2$ as compared to the perturbative contribution.}. Examining the validity of that presumption by estimating the Feynman contribution from the asymptotic wave function (\ref{gaussian}), one finds results of appropriate magnitude to fill the gap between the perturbative contribution and the data of \cite{beb:76}. The results exhibit a broad flat maximum which, for momentum transfers between $3$ and about $15$ GeV$^2$, simulates the dimensional counting behaviour. For a wave function based on the Chernyak-Zhitnitsky distribution amplitude, on the other hand, the Feynman contribution exceeds the data significantly \cite{jak:96,jak:93}. Large Feynman contributions have also been found by other authors \cite{isg:89,KisWan:93}. Thus, the small size of the perturbative contribution to the elastic form factor finds a comforting although model-dependent explanation, a fact which was pointed out by Isgur and Llewellyn Smith \cite{isg:89} a long time ago. Of course this line of reasoning is based on the assumption that the data of \cite{beb:76} are essentially correct. A comment concerning the pion form factor in the time-like region is in order. In the standard HSA the predictions for the form factor in the time-like and the space-like region are identical.
The experimental information on the time-like form factor comes from two sources, $e^+\,e^-\to \pi^+\pi^-$ and $J/\Psi \to\pi^+\pi^-$, which provides, to a very good approximation\footnote{ The contribution from $c\bar{c}$ transitions into the light quarks via three gluons cancels to zero if quark masses are neglected \cite{BL}.}, the form factor at $s=M^2_{J/\Psi}$. Although the $e^+\,e^-$ annihilation data of Bollini et al.\ \cite{Bol75} suffer from low statistics, they agree very well with the result obtained from the $J/\Psi$ decay. Combining both data sets, one finds $|F_{\pi}|=0.93\pm 0.08\,{\rm GeV}^2/s$ in the momentum transfer range between 2 and 10 GeV$^2$, a value which is roughly a factor of 3 larger than the space-like data. The modified HSA can account for that large ratio of the time-like over space-like form factors \cite{GP} although the perturbative contributions to the form factor in both regions are too small. The structure function of the pion offers another possibility to test the wave function against data. As has been pointed out in \cite{BHL:83} the parton distribution functions are determined by the Fock state wave functions. Since each Fock state contributes through the modulus squared of its wave function integrated over transverse momenta up to $Q$ and over all fractions $x$ except those pertaining to the type of parton considered, the contribution from the valence Fock state should not exceed the data of the valence quark structure function. As discussed in \cite{jak:96,HuaMaShe:94} the asymptotic wave function respects this inequality while the Chernyak-Zhitnitsky one again fails dramatically. \newpage \section{Summary} \label{sec:concl} \vspace*{-0.3cm} The study of hard exclusive reactions is an interesting and challenging subject.
The standard HSA, i.e.\ the valence Fock state contribution in collinear approximation to lowest order perturbative QCD, while asymptotically correct (at least for form factors), does not lead to a consistent description of the data. In many cases the predicted perturbative contributions to particular exclusive reactions are much smaller than the data. The observed spin effects do not find a comforting explanation. In some reactions agreement between prediction and experiment is found, although at the expense of dominant contributions from the soft end-point regions, rendering the perturbative analysis inconsistent. From a detailed analysis of the $\pi\gamma$ transition form factor it turned out that the pion distribution amplitude\ is close to the asymptotic form. Strongly end-point concentrated distribution amplitudes\ are obsolete and the ostensible successes in describing the large momentum transfer behaviour of the pion form factor in the space-like region and charmonium decays into two pions with such distribution amplitudes\ must therefore be dismissed. In view of these observations it seems that higher Fock state and/or higher twist contributions have to be included in the analysis of exclusive reactions. However, not much is known about them as yet. We are lacking systematic investigations of such contributions to exclusive reactions. The colour-octet model for exclusive charmonium decays is discussed in this talk as an example of such contributions. \vspace*{0.5cm}
\section{Introduction}\label{sec:intro} Fundamental progress in understanding the properties of galaxies, star clusters and stellar populations comes from the comparisons between observed photometry and synthetic photometry derived from stellar evolution codes. It has become common practice to infer properties such as star formation rate (SFR), star formation history (SFH), age, metallicity, redshift, and stellar mass from photometry. Despite the limits of theoretical modeling of stellar populations \citep[such as uncertainties with dust, stellar evolution, and the stellar initial mass function (IMF); see][]{conroy1, conroy2, conroy3}, synthetic libraries have reached a degree of precision that allows accurate estimates of these parameters -- although sometimes with degeneracy -- in massive galaxies and clusters. However, observations reveal a higher complexity in lower mass systems, where scaling relations that apply to more massive systems cannot be trivially extrapolated \citep[e.g.,][]{lee2007, weisz}. Moreover, in lower mass systems, the limited number of stars that are present invalidates the basic assumption used by most of the currently available codes for synthetic photometry (such as \textsc{starburst99} \citep[SB99;][]{starburst99}; PEGASE \citep{pegase}; and GALEV \citep{galev}): that the IMF is fully sampled at all times. Violation of this assumption leads to stochastic variations in photometric properties that these codes do not fully capture. For example, in globular clusters, the simplest observed stellar populations, failure to account for sampling effects can lead to a dramatic overestimate of the contributions of blue horizontal branch and AGB stars to the integrated light. As a result, correct estimates of globular cluster ages and metallicities based on their integrated light are possible only if one correctly accounts for stochasticity \citep{colucci2011}.
Moreover, in weakly star forming regions, stochastic effects can mimic those of a varying IMF. Indeed, recent observations in the low SFR regime have led to serious consideration of a varying IMF \citep{pflamm2008, hoversten, meurer, lee2009}. However, a fully self-consistent model of stochasticity, allowing for a full range of parameters such as differing degrees of stellar clustering, metallicities, stellar tracks, input IMFs and CMFs, and SFHs, has not been available to test the null hypothesis of a non-varying but stochastically sampled IMF. These considerations apply not only to the dwarf galaxies studied by \cite{lee2009} but also to the outer regions of galaxies such as XUV disks \citep{boissier2007, thilker} and outlying \ion{H}{2} regions \citep{werk2008,gogarten}, where stochasticity becomes crucial in the interpretation of inferred SFRs and SFHs. While the number of studies that use Monte Carlo approaches to address problems on scales of clusters and galaxies is growing, a general purpose tool to study photometry in clusters and galaxies has not previously been available. To fill this need, we have created SLUG, a code to allow proper study of the stochastic star formation regime at a range of scales from individual star clusters to entire galaxies. SLUG provides a variety of tools for studying the stochastic regime, such as the ability to create catalogs of clusters including their individual IMFs and photometric properties, color-magnitude diagrams (CMDs) of entire galaxies where we keep track of the photometry of every star, as well as integrated photometry of entire composite populations. This paper, the first of a series, focuses on the methods used in the code along with several tests to demonstrate that we are reliably reproducing observations and other synthetic photometry predictions. We then demonstrate the use of this code in the stochastic regime.
In a companion paper \citep{mikiletter}, we use SLUG to demonstrate that, once random sampling is included, a stochastic non-varying IMF can reproduce the observed variation of the H$\alpha$/FUV ratio in dwarf galaxies without resorting to modifications of the IMF. In the second paper of the series (da Silva et al., in prep.) we will explore in detail the implications of stochastic star formation with clustering. Further work will apply this code to a variety of astrophysical questions, such as understanding SFR calibrations in the stochastic regime and further study of other claims of a varying IMF. The layout of the paper is as follows: $\S$\ref{sec:stoch} presents an introduction to stochasticity and its effects on the luminosity of stellar populations; $\S$\ref{sec:tech} gives a detailed description of the SLUG algorithm; $\S$\ref{sec:tests} discusses various tests of the code; $\S$\ref{sec:inaction} presents the code's outputs in the stochastic regime; finally, $\S$\ref{sec:summary} summarizes the results. \section{What is Stochasticity?}\label{sec:stoch} Many astrophysical studies require creation of synthetic photometry of galaxies and other collections of stars in order to compare with observations. In this section we present a discussion of the various effects of stochasticity and the regimes in which they are important. \subsection{Coeval Stellar Populations} The standard procedure for calculating the luminosity from a coeval population of stars used by the most popular implementations (such as SB99) is as follows. To find the luminosity per unit mass of a coeval population in some band $\beta$ at a time $t$ after formation ($\ell_{\beta, coeval}(t)$), one simply integrates the luminosity per unit mass of each star in that band as a function of mass and time ($\ell_\beta(m,t)$) weighted by the distribution of stellar masses (i.e.,
the IMF) $\frac{dN}{d\ln m}$: \begin{equation} \ell_{\beta, coeval}(t)=\int_{m_{min}}^{m_{max}} \ell_\beta (m, t)\frac{dN}{d\ln m}dm. \end{equation} Note that here we use a normalization of the IMF such that $\int_{m_{min}}^{m_{max}} \frac{dN}{d\ln m}dm=1$. By performing this integral, these models assume an infinitely well-sampled IMF. As a result, the above formula is mass-independent, meaning that $\ell_{\beta,coeval}$ can be scaled according to the total amount of stellar mass in a population (i.e. the luminosity of a mass $M$ of stars is simply $M \ell_{\beta,coeval}$). Thus a given amount of mass $M$ will have a 1-to-1 mapping to a particular luminosity $L$. However, for small stellar populations, the assumption of continuous sampling breaks down and effects of stochasticity can become important. Specifically, stochastic effects create a statistical dispersion in the luminosities that result from a given mass $M$ of stars, arising entirely from the probabilistic sampling of the mass distribution of stars. This is because each realization of a given mass $M$ is built up with a different distribution of stellar masses which, due to the non-linear dependence of luminosity on stellar mass, yields a different luminosity. We call this type of stochastic process {\it sampling} stochasticity. Perhaps the most important manifestation of sampling stochasticity is undersampling of the upper end of the IMF. Since the IMF is steeply declining with increasing stellar mass, the probability that a low-mass population contains a very massive star is small. As a result, the IMF in a low mass population with few stars can appear truncated and have less luminosity than a fully-sampled assumption would have predicted. This is due to the strongly superlinear dependence of luminosity on stellar mass. One can roughly estimate the mass below which this effect is insignificant by calculating the expectation value of obtaining a star above a given mass.
We do so following the formalism of \cite{elmegreen00}, who finds that the total mass ($M$) required to expect a single star above a mass $m$ is \begin{equation} M\sim 3\times10^3\, M_\odot \left(\frac{m}{100 M_\odot}\right)^{1.35}. \end{equation} This statement is clearly dependent on one's choice of IMF. \cite{elmegreen00} uses a Salpeter IMF with a lower limit of 0.3 $M_\odot$ and no upper limit. If one imposes an upper limit to the stellar mass function, this relation turns over and asymptotically approaches the limit. However, for order-of-magnitude purposes here, we neglect such considerations. This result implies that in order to reasonably expect even a single 120 $M_\odot$ star\footnote{Due to limitations of stellar evolutionary tracks, this is the highest stellar mass SLUG can model and is a reasonable guess for the highly uncertain absolute stellar mass limit. While some \citep[e.g.,][]{figer2005} suggest a value of $\sim 120$--$150 M_\odot$, others \citep{crowther} suggest it may be as high as 300 $M_\odot$.}, one would need at least a total mass sampled of approximately $10^4 M_\odot\equiv M_{trunc}$. Thus this IMF truncation effect of sampling stochasticity can be ignored for \emph{coeval} populations with masses $\gg$ $M_{trunc}$. For further discussion of the limits of stochastic sampling, we recommend \cite{cervino1} and \cite{cervino2}. For specific consideration of H$\alpha$ luminosity (one of the features of a stellar population most sensitive to stochasticity), see \cite{cervino3}. Another manner in which stochastic sampling can manifest in coeval populations is for stars going through particularly short-lived and luminous phases of evolution after they leave the main sequence (e.g., AGB and blue horizontal branch stars). Since these phases are short, only a very narrow range of masses is undergoing one of them at any given time. Thus the exact sampling within that mass range can have a large impact on the number of stars within that phase.
As a result, a non-infinite population of stars can have additional random scatter in luminosity even if $M>M_{trunc}$. This effect is more important in populations with little ongoing star formation relative to their stellar mass (otherwise new stars dominate the photometric properties of the population), and at specific ages when these post-main sequence populations contribute significantly to the luminosity of the population \citep{colucci2011}. \begin{figure*} \epsscale{0.75} \plotone{f1.eps} \caption{A schematic flow-chart describing the algorithm of the SLUG code. Note that for the case of unclustered star formation, the cluster mass is drawn from the IMF and the population step is skipped as the single star is treated as part of a disrupted cluster for the remainder of the code. Note this is updated from \cite{miki}.} \label{fig:schematic} \end{figure*} \subsection{Composite Stellar Populations}\label{sec:comp} In order to characterize a more complicated star formation history, SB99 and other such schemes integrate over the coeval populations discussed above to find the luminosity of all stars in a given band at a time $\tau$: \begin{equation} L_{\beta, total}(\tau)=\int_{-\infty}^\tau \mbox{SFR}(t) \ell_{\beta,coeval}(\tau-t)dt. \end{equation} Such a treatment makes two key assumptions: (1) each of the summed coeval populations is large enough to ignore the effects of sampling stochasticity and (2) the SFR is continuously sampled as well. These assumptions can quickly break down for sufficiently low SFRs. To illustrate this point, consider a galaxy forming stars at a constant rate. In order for the IMF not to be truncated within some time interval $dt$, there need to be at least $M_{trunc}$ worth of stars formed in that interval.
For the SFR to be considered reasonably well sampled, $dt$ must be much smaller than the evolutionary timescales of any of the stars, which are $\approx 10^6$ yr for the massive stars that generally dominate the light in an actively star-forming system. Thus these assumptions require \begin{equation}\label{eqt:dt} dt=\frac{M_{trunc}}{\mbox{SFR}} \ll 10^6 \mbox{yr}. \end{equation} \begin{deluxetable}{cl} \tablewidth{0pc} \tablecaption{Input Parameters} \tablehead{\colhead{Parameter} & \colhead{Description}} \startdata \cutinhead{Controlling the Physics} IMF & stellar initial mass function; can choose \\ &Kroupa, Salpeter, Chabrier, IGIMF, or\\ & an arbitrary slope \vspace{1 mm}\\ CMF & cluster mass function, can change\\ &slope, minimum and maximum mass\vspace{1 mm}\\ Stellar Evolutionary & library of models used for stellar evolution\\ Tracks &\vspace{1 mm}\\ Metallicity & metallicity of the stellar population\vspace{1 mm}\\ Stellar Atmosphere & which scheme and models are used for SEDs\vspace{1 mm}\\ Stellar Wind Model & which wind model is used for SEDs\vspace{1 mm}\\ Fraction of stars in & mass fraction of stars formed in clusters\\ clusters &\vspace{1 mm}\\ \cutinhead{Controlling the Simulation} Maximum time & how long the simulation is run\vspace{1 mm}\\ SFH & can be arbitrary\vspace{1 mm}\\ Seed & random seed used for simulation\vspace{1 mm}\\ \cutinhead{Controlling Output} Time step & time between code outputs\vspace{1 mm}\\ Fluxes & choose which fluxes to output\vspace{1 mm}\\ Colors & which colors to use for CMDs\vspace{1 mm}\\ CMD output & choice of number of bins and\\ parameters&range of color and luminosity for each CMD\vspace{1 mm}\\ Cluster output? & set to print output for each cluster\vspace{1 mm}\\ IMF output? & set to output IMF histograms for each cluster\vspace{1 mm}\\ \enddata \label{table:inputs} \end{deluxetable} Thus these effects can only be ignored for SFRs consistently $\gg 10^{-2} M_\odot \mbox{yr}^{-1}\equiv\mbox{SFR}_{temp}$. 
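The sampling-stochasticity argument above can be illustrated with a short Monte Carlo sketch. The power-law sampler, the toy mass-luminosity relation $L \propto m^{3.5}$, and the population size below are our own illustrative assumptions (SLUG itself uses full stellar tracks, not a single power law); the last two lines simply restate the order-of-magnitude arithmetic behind $M_{trunc}$ and SFR$_{temp}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_population(m_target, alpha=2.35, m_lo=1.0, m_hi=120.0):
    """Draw stellar masses from dN/dm ~ m^-alpha (inverse CDF) until the
    target mass is reached; for simplicity the final, overshooting star
    is always kept here."""
    a = 1.0 - alpha
    masses, total = [], 0.0
    while total < m_target:
        u = rng.random()
        m = (m_lo**a + u * (m_hi**a - m_lo**a)) ** (1.0 / a)
        masses.append(m)
        total += m
    return np.array(masses)

# Toy mass-luminosity relation L ~ m^3.5 (solar units) -- a stand-in for
# real stellar tracks, used only to illustrate the scatter.
lum = lambda m: m**3.5

# Many realizations of the same total mass yield different luminosities.
L = np.array([lum(draw_population(500.0)).sum() for _ in range(300)])
scatter = L.std() / L.mean()   # large fractional scatter at 500 Msun

# Order-of-magnitude thresholds quoted in the text:
m_trunc = 3e3 * (120.0 / 100.0) ** 1.35  # a few x 10^3 Msun, rounded up to ~10^4
sfr_temp = 1e4 / 1e6                     # M_trunc / 10^6 yr = 10^-2 Msun/yr
```

Because the luminosity is dominated by the rare most-massive stars drawn, the fractional scatter at $500\,M_\odot$ is of order unity, while it vanishes in the fully-sampled limit.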
However, this \emph{temporal} stochasticity is amplified when one considers that stars are believed to be formed in discrete collections known as clusters. As a result, the clumping in time of star formation in clusters can produce stochastic effects even in regions with SFRs higher than SFR$_{temp}$. In this case the characteristic mass in Equation \ref{eqt:dt} is replaced with a mass characteristic of the clusters being drawn (discussed further in da Silva et al., in prep.; \citealt{mikiletter}). The conditions required to treat a stellar population as continuous (as opposed to stochastically sampled) break down in a variety of astrophysical environments such as dwarf galaxies \citep[e.g.,][]{lee2009}, low star formation rate regions in the outskirts of galaxies \citep[e.g.,][]{boissier07, miki08, bigiel10}, and low surface brightness galaxies \citep[e.g.,][]{boissier08}. \section{Technique}\label{sec:tech} \subsection{Overview} Here we present a brief overview of the code; each step is described in detail in the subsequent sections. SLUG simulates star formation according to the scheme presented in Figure~\ref{fig:schematic}. We create collections of star clusters obeying a user-defined cluster mass function (CMF) (which can include a given mass fraction of stars not formed in clusters), SFH, IMF, and choice of stellar evolutionary tracks, which we call a ``galaxy''. A description of the parameters that users can vary is provided in Table~\ref{table:inputs}. These galaxies are built up ($\S$\ref{sec:imf}) by first drawing the mass of an individual cluster from a CMF. This cluster's mass is then filled up with stars according to an IMF. The age of the cluster is drawn from a distribution weighted by the given SFH. Each of the stars within the cluster is evolved using a stellar evolutionary track combined with a model spectral energy distribution (SED) to determine a variety of integrated fluxes corresponding to commonly used photometric filters ($\S$\ref{sec:seds}).
At a given set of time steps, these fluxes are summed over each star cluster. The clusters are then disrupted according to the prescription of \cite{fall2009}. Disrupted clusters have their fluxes added to a ``field'' population while surviving clusters have their properties stored individually. The code repeats this process until a stellar mass equal to the integral of the provided SFH is created. The code outputs a variety of files that keep track of the properties of the stars, clusters, and total integrated stellar populations. Table~\ref{table:output} provides a short description of each available output file. All outputs are parsed and transformed into binary FITS tables. The code is open source and written in C++ with wrapping and parsing routines written in IDL. This entire process can be controlled through an IDL graphical user interface (see Figure \ref{fig:gui}) or from either the UNIX or IDL command line. The IDL routines are wrapped in packages for use with the IDL virtual machine\footnote{which is available for free from http://www.ittvis.com/language/en-us/productsservices/idl/idlmodules/idlvirtualmachine.aspx} for those without IDL licenses. For a full manual on how to use the code, visit the SLUG website at http://sites.google.com/site/runslug/.
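The control flow just described (and sketched in Figure~\ref{fig:schematic}) can be condensed into a few lines of pseudo-SLUG. Every ingredient below -- the CMF draw, the IMF fill, the toy flux, the constant SFH, and the survival rule -- is a simplified stand-in of our own invention, kept only to show how the pieces fit together, not SLUG's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy stand-ins for the real ingredients (CMF, IMF fill, tracks/SEDs)
draw_cluster_mass = lambda: 10 ** rng.uniform(2, 5)                  # "CMF"
fill_with_stars = lambda m: rng.uniform(1.0, 120.0, max(1, int(m / 60.0)))  # "IMF"
flux = lambda m_star, age: (m_star ** 3.5) * np.exp(-age / 1e7)      # toy band flux

def run_galaxy(m_target, t_now=1e7):
    """Form clusters until the SFH mass budget is met; sum disrupted
    clusters into a 'field' population, keep survivors individually."""
    field_flux, clusters, m_formed = 0.0, [], 0.0
    while m_formed < m_target:
        m_cl = draw_cluster_mass()
        stars = fill_with_stars(m_cl)
        age = rng.uniform(0.0, t_now)          # age weighted by a constant SFH
        f = flux(stars, age).sum()
        if rng.random() < min(1.0, 1e6 / max(age, 1.0)):  # survival ~ 1/age
            clusters.append((m_cl, age, f))    # surviving cluster kept individually
        else:
            field_flux += f                    # disrupted -> "field" population
        m_formed += stars.sum()
    return clusters, field_flux

clusters, field_flux = run_galaxy(1e5)
```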
\begin{deluxetable*}{cc} \tablewidth{0pc} \tablecaption{SLUG Output Files} \tablehead{\colhead{Name} & \colhead{Description} } \startdata Histogram & a 2d histogram of the user's choice of color-magnitude diagram(s)\\ & of every star in the ``galaxy'' at each timestep\\ Cluster & mass, fluxes, most massive star, number of stars, and \\ &age of each undisrupted cluster at each timestep\\ IMF & a histogram of the IMF of each cluster that appears in the Cluster file\\ Integral & the total flux of the entire ``galaxy'' at each timestep\\ Miscellaneous & the total stellar mass actually formed,\\ & as well as the actual SFH and CMF of the simulation\\ \enddata \label{table:output} \end{deluxetable*} \begin{figure} \plotone{f2.eps} \caption{ IDL GUI interface for running the code. The code may also be called via the UNIX or IDL command lines.} \label{fig:gui} \end{figure} \subsection{Cluster Creation}\label{sec:imf} \begin{figure} \plotone{f3.eps} \caption{Examples of star formation histories averaged over 1 Myr bins for simulations with varying input constant SFRs of 0.0001--100 $M_\odot$ yr$^{-1}$. The dotted lines show the input SFR. The average SFR of the simulation in each case is within 2, 0.2, and $<$0.02 percent of the input for $10^{-4}$,$10^{-3}$, and $>10^{-2}$ $M_\odot$ yr$^{-1}$ respectively. SFRs of zero are masked. } \label{fig:sfh} \end{figure} Most stars are thought to be born in star clusters \citep{ladalada} and the distribution of star cluster masses appears to obey a power law distribution, where observations \citep[e.g.,][]{zhang99,ladalada,fall09,chandar10} and theory \citep[e.g.,][]{fallkrummatz} suggest that the index ($\beta$) of the power law $dN/dM\propto M^{-\beta}$ is approximately 2. SLUG allows for both clustered and unclustered star formation. The user can choose what fraction of all stellar mass they wish to form in star clusters. If the code is forming clusters, the CMF's power law slope as well as its upper and lower bounds can be varied.
If unclustered star formation is desired, the stars' masses are drawn individually from an IMF and treated as a disrupted ``cluster'' of one star for the remainder of the code. The initial masses of stars are drawn from an IMF. Choices of IMF\footnote{IMFs are truncated at 0.08$M_\odot$ due to lack of lower mass stellar tracks} currently are \cite{chabrier}, \cite{Kroupa}, \cite{Salpeter}, a user-defined arbitrary power law, and the recently proposed IGIMF \citep{igimf0,pflamm2008}. While the Chabrier, Kroupa, Salpeter, and power law IMFs are implemented as a standard probability density function of stellar masses, the IGIMF has additional features that require different treatment (see Appendix \ref{sec:igimf}). Regardless of the choice of IMF, we draw stars until the total mass of the star cluster is built up. Since the cumulative mass of the randomly drawn stars never exactly equals the mass of the cluster, a question arises as to whether to keep the last star added. This last star increases the mass of the cluster above the cluster mass drawn from the CMF. We determine whether or not to keep that star in the cluster based on whether keeping the star in makes the total mass of stars closer to the mass drawn from the CMF than leaving it out\footnote{The effects of different sampling methods and their dependence on the CMF are studied in detail by \cite{haas2010}. Our method is identical to their `stop-nearest' method.}. Independent of its mass, the age of the cluster relative to the galaxy is assigned in a probabilistic manner weighted by the SFH (which can be arbitrary) such that the SFH is reproduced on average. This produces a scatter in the SFHs for even a given ``constant'' SFR. Thus SLUG's definition of a galaxy with a constant SFR is not a galaxy where the SFR is constant at every individual time\footnote{A constant SFR cannot be instantaneously constant because stars form in discrete units of mass.
For example, when a star is born, the instantaneous SFR is infinite, thus we must turn to a more probabilistic interpretation of the SFR.}, but rather a galaxy that produces an amount of stars over a time $dt$ equal to SFR$\times dt$ which are distributed in clusters whose ages are drawn from a uniform distribution. This interpretation of the SFR and its implications are discussed in more detail in da Silva et al. (in prep.). Clusters are born until the total mass of stars formed is equal to the integral of the SFH. As with the problem of populating a cluster with stars, a galaxy will never be filled to exactly its given mass with an integer number of clusters. Therefore we apply the same condition for populating the galaxy as we do the clusters: we add until we exceed the mass and keep the final cluster only if keeping it brings the total galaxy mass closer to the desired value. As a result, the average SFR over the entire simulation of a particular galaxy can be higher or lower than the input value. This effect is small for most regimes, but very rare drawings of the CMF at low SFRs can produce mild departures. We emphasize that this is not the effect of any error associated with the code but rather is the necessary result of our interpretation of what a SFR means. We demonstrate the results of this procedure in Figure \ref{fig:sfh}. The figure shows that, while lower average SFRs tend to produce larger fractional scatter in the instantaneous SFR, significant scatter remains until SFRs exceed 10 $M_\odot$ yr$^{-1}$. This scatter is a direct result of the finite size of clusters. To clarify with an example, consider that a $10^7M_\odot$ cluster (when averaged over 1 Myr bins, as for the curves shown in Figure \ref{fig:sfh}) will appear as a deviant peak for all but the highest SFRs, where the contribution of that individual cluster is drowned out by enough other clusters.
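As an illustration, the `stop-nearest' rule and the SFH-weighted age assignment described above might be sketched as follows. The power-law IMF sampler and the discretized constant SFH are our own simplified stand-ins, not SLUG's actual tabulated IMFs.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_imf(alpha=2.35, m_lo=1.0, m_hi=120.0):
    """One stellar mass from dN/dm ~ m^-alpha on [m_lo, m_hi] (inverse CDF)."""
    a = 1.0 - alpha
    return (m_lo**a + rng.random() * (m_hi**a - m_lo**a)) ** (1.0 / a)

def fill_cluster(m_cl):
    """`Stop-nearest' sampling: draw stars until the target mass is exceeded,
    then keep the overshooting star only if that leaves the total closer to
    the mass drawn from the CMF."""
    stars, total = [], 0.0
    while True:
        m = sample_imf()
        if total + m >= m_cl:
            if abs(total + m - m_cl) < abs(total - m_cl):
                stars.append(m)
            return np.array(stars)
        stars.append(m)
        total += m

# Cluster ages are assigned probabilistically, weighted by the (arbitrary,
# here discretized) SFH, so that the input SFH is reproduced on average.
t_grid = np.linspace(0.0, 500e6, 501)   # yr
sfh = np.ones_like(t_grid)              # constant SFR as an example
age = rng.choice(t_grid, p=sfh / sfh.sum())
```

By construction, stop-nearest leaves the cluster within one stellar mass of its target, which is why the galaxy-level mass (and hence the realized average SFR) can drift slightly from the input value.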
We note that in this release of the code all stars in a cluster are treated as having identically the same age. While observations suggest a scatter of several Myr \citep{palla99, jeffries, hosokawa}, the mass dependence of this scatter is unclear. Given the uncertainties, and that the intracluster age scatter is at most a few Myr, we choose to neglect this effect for now but plan to implement it in the future. \subsection{Stellar Tracks, SEDs, and Broad Band Photometry}\label{sec:seds} Given the mass and age of each star, we need to determine its properties for a variety of observables. Our method uses many of the same algorithms found in \textsc{SB99} \citep{starburst99, sb992} to create a set of tables from which SLUG interpolates. These tables are constructed in advance so they need not be computed at run time. Our first step is to determine the physical properties of each star. To this end, we make use of a variety of stellar evolutionary models. Modifying the \textsc{SB99} source code, we were able to obtain the full range of stellar tracks available to \textsc{SB99} (see Table~\ref{table:stellarprop}). In the future we plan to implement a wider range of stellar tracks including those from \cite{eldridge} and the BaSTI library \citep{basti1, basti2}. We supplement the Geneva tracks with the Padova+AGB tracks for stars in the mass range 0.15-0.8 $M_\odot$. These models provide luminosities, gravities, chemical compositions, and effective temperatures at discrete intervals in the evolution of a discrete number of stellar masses. We then need to map these physical properties to stellar atmospheres in order to estimate the spectral energy distributions of the stars. Our code allows users to choose from one of five possible \textsc{SB99} algorithms for modeling the atmospheres.
We implement all four prescriptions of stellar winds available in \textsc{SB99} (Maeder, empirical, theoretical, and Elson), which affect the SEDs for Wolf-Rayet stars for some regimes and prescriptions. It is important to note that the SB99 algorithms match SEDs to tracks with a nearest neighbor approach and not through interpolation. Therefore there can be some discreteness in the output SEDs. Future work will include removal of this effect. With SEDs in hand, we can convolve with filters to determine the photometry of each point in our stellar tracks. For this step we include the effects of nebular continuum (free-free, free-bound, and two-photon processes) as implemented in SB99, but neglect nebular line emission for this first release of the code. (For a discussion of the importance of nebular continuum for the SEDs, see \citealt{nebcont}. Also see \citealt{nebcont3} and \citealt{nebcont2}.) The full list of available filters is presented in Table~\ref{table:filters}. We also integrate the SED to determine the bolometric luminosity as well as to calculate Q(H$^{0}$), the number of hydrogen ionizing photons emitted per second. One can convert Q(H$^{0}$) to H$\alpha$ luminosity with a simple conversion assuming case B recombination (our notation follows \citealt{agn2}). \begin{align} L_{H\alpha}&=(1-f_{esc})(1-f_{dust})\mbox{Q(H}^0\mbox{)}\left(\frac{\alpha_{H\alpha}^{eff}}{\alpha_B}\right)h\nu_{H\alpha} \nonumber\\ &\approx 1.37\times10^{-12}(1-f_{esc})(1-f_{dust})\mbox{Q(H}^0\mbox{)}\mbox{ ergs/s} \end{align} where $f_{esc}$ is the escape fraction (thought to be between 0.05 \citep{boselli} and 0.4 \citep{hirashita}) and $f_{dust}$ represents the fraction of ionizing photons absorbed by dust grains \citep[e.g., see appendix of][who suggest a value of 0.37]{mckeewilliams97}.
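For reference, the conversion above amounts to a one-line function. The default arguments of zero are placeholders for illustration, not SLUG defaults; in practice one would supply the adopted $f_{esc}$ and $f_{dust}$.

```python
def l_halpha(q_h0, f_esc=0.0, f_dust=0.0):
    """H-alpha luminosity in erg/s from the ionizing photon rate Q(H0) in
    photons/s, assuming case B recombination: 1.37e-12 erg of H-alpha per
    surviving ionizing photon, after escape and dust corrections."""
    return 1.37e-12 * (1.0 - f_esc) * (1.0 - f_dust) * q_h0
```

For example, with $f_{dust}=0.37$ (the value suggested in the appendix of \citealt{mckeewilliams97}) the H$\alpha$ output is reduced by 37\% relative to the dust-free case.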
To better characterize the ionizing luminosity we also keep track of Q(He$^0$) and Q(He$^1$) which represent the numbers of ionizing photons in the \ion{He}{1} and \ion{He}{2} continua respectively. \begin{deluxetable*}{cc} \tablewidth{0pc} \tablecaption{Stellar Properties} \tablehead{\colhead{Parameter} & \colhead{Allowed Values} } \startdata {\bf Tracks} & Geneva STD\tablenotemark{a}, Geneva High\tablenotemark{a}, Padova STD\tablenotemark{b}, Padova AGB\tablenotemark{b}\\ {\bf Metallicity\tablenotemark{c} } & 0.0004-0.50 \\ {\bf SEDs} & Planck\tablenotemark{d}, Lejeune\tablenotemark{e}, Lejeune+Sch\tablenotemark{f}, Lejeune+SMI\tablenotemark{g}, Pau+SMI\tablenotemark{h}\\ {\bf Wind Models} & Maeder\tablenotemark{i}, Empirical\tablenotemark{i}, Theoretical\tablenotemark{i}, Elson\tablenotemark{i} \\ \enddata \label{table:stellarprop} \tablenotetext{a}{\cite{meynet1994} and references therein} \tablenotetext{b}{\cite{padova} and references therein} \tablenotetext{c}{solar is 0.20} \tablenotetext{d}{simple blackbody SED} \tablenotetext{e}{\cite{lejeune1,lejeune2}} \tablenotetext{f}{same as e, but for stars with strong winds use \cite{schmutz}} \tablenotetext{g}{same as e, but for stars with strong winds use \cite{hillier}} \tablenotetext{h}{same as g, but use \cite{pauldrach} for O stars} \tablenotetext{i}{\cite{sb99winds}} \end{deluxetable*} The above steps allow us to create a discrete two-dimensional table for each flux band where one axis represents stellar mass, the other represents time, and the value of the table is the logarithm of the flux in that band at the appropriate mass and time. Our tables are created through use of the isochrone synthesis method such that our results are stable against the numerical issues that arise from a fixed mass approach \citep{isochrone}. 
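A minimal sketch of how such a precomputed two-dimensional log-flux table can be interpolated at run time is shown below. The grid spacing and the toy table values are invented for illustration only; they do not correspond to any real track set or band.

```python
import numpy as np

# toy table: rows indexed by log10(mass/Msun), columns by log10(age/yr);
# entries are log10(flux) -- invented values standing in for real tracks
log_m = np.linspace(0.0, 2.0, 21)
log_t = np.linspace(5.0, 9.0, 41)
table = np.add.outer(3.5 * log_m, -0.1 * (log_t - 5.0))

def log_flux(m, t):
    """Bilinear interpolation in (log m, log t) on the precomputed table."""
    x, y = np.log10(m), np.log10(t)
    i = int(np.clip(np.searchsorted(log_m, x) - 1, 0, len(log_m) - 2))
    j = int(np.clip(np.searchsorted(log_t, y) - 1, 0, len(log_t) - 2))
    fm = (x - log_m[i]) / (log_m[i + 1] - log_m[i])
    ft = (y - log_t[j]) / (log_t[j + 1] - log_t[j])
    return ((1 - fm) * (1 - ft) * table[i, j] + fm * (1 - ft) * table[i + 1, j]
            + (1 - fm) * ft * table[i, j + 1] + fm * ft * table[i + 1, j + 1])
```

Interpolating in the logarithm of the flux keeps the scheme well behaved across the many orders of magnitude spanned by stellar luminosities.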
\begin{deluxetable}{cc} \tablewidth{0pc} \tablecaption{Broad Band Filters} \tablehead{\colhead{Filter\qquad\qquad \qquad} & \colhead{Reference}} \startdata NUV \qquad \qquad\qquad& 1 \\ FUV \qquad \qquad\qquad& 1 \\ $u$ \qquad\qquad \qquad& 2 \\ $g$\qquad \qquad\qquad & 2 \\ $r$ \qquad\qquad\qquad& 2\\ $i$\qquad \qquad\qquad & 2 \\ $z$ \qquad \qquad\qquad& 2 \\ J\qquad \qquad\qquad & 3 \\ H\qquad \qquad\qquad & 3 \\ K\qquad \qquad\qquad & 3 \\ U\qquad \qquad\qquad& 4\\ B \qquad \qquad\qquad& 4 \\ V\qquad \qquad\qquad& 4\\ R \qquad \qquad\qquad& 4 \\ I \qquad \qquad\qquad& 4 \\ Q(H$^0$)\qquad \qquad\qquad & 5 \\ Q(He$^0$) \qquad \qquad\qquad& 5 \\ Q(He$^1$) \qquad \qquad\qquad& 5 \\ $L_{\rm Bol}$ \qquad \qquad\qquad&6\\ \enddata \tablenotetext{1}{\cite{galexfilt}} \tablenotetext{2}{\cite{sdssfilt}} \tablenotetext{3}{\cite{2mass}} \tablenotetext{4}{\cite{forsfilt}} \tablenotetext{5}{Obtained by integrating SED blueward of 912, 504, and 208 \AA\ for Q(H$^0$), Q(He$^0$), Q(He$^1$) respectively. } \tablenotetext{6}{Given by stellar evolutionary tracks.} \label{table:filters} \end{deluxetable} \begin{figure*} \plotone{f4.eps} \caption{Comparison of observed young star clusters from \cite{larsen} (black points) to SLUG models of clusters $>10^4 M_\odot$ (blue triangles). The orange curve shows the trajectory of a SB99 $10^5M_\odot$ cluster. Data are omitted from upper left panel as the ages are not present in the \cite{larsen} catalog. Arrows denote the extinction vector for $A_V=0.5$ mag \citep[created following appendix B of][]{schlegel}.} \label{fig:clusphot} \end{figure*} \subsection{Evaluating the Stellar Properties} To determine the properties of a given star of any mass at any given time, we first determine if the star is still alive. This is done by an interpolation in time to find the minimum mass of a dead star ($m_{death}$) at a given time according to our stellar evolution models (where we call a star ``dead" if it no longer has entries in our stellar tracks). 
If the star is less massive than $m_{death}$, we interpolate our model tables to determine the flux in a given filter within 0.01 dex. For computational speed, there are a variety of approximations and restrictions we are forced to implement. The current scheme only allows ages up to 1 Gyr for the stellar tracks (to be expanded in later releases of the code). We do not evolve stars less massive than 0.9 $M_\odot$ (a number which can be changed by the user). These stars do not evolve past the main sequence for the current maximum age of the code of 1 Gyr, so these stars are treated as having their zero-age main sequence (ZAMS) properties. Due to limitations of the stellar tracks, we treat the photometric properties of all stars less massive than 0.156 $M_\odot$ identically to those of 0.156 $M_\odot$ stars. For many purposes, more massive stars dominate the light in the bands such that this approximation is reasonable. The tracks also impose a $120 M_\odot$ upper mass limit on stars. Currently, we neglect the effects of binary stellar evolution \citep[see][]{eldridge}, which may have an impact on the derived results by producing a bluer population with a reduced number of red supergiants and increased age range of Wolf-Rayet stars. \subsection{Cluster Disruption} If the user chooses to form stars in star clusters, we randomly disrupt our clusters in a mass-independent way such that $dN/d\tau\propto \tau^{-1}$ \citep[following][]{fall2009}. We start cluster disruption 1 Myr after the cluster forms. This results in 90\% of star clusters being disrupted for each factor of 10 in age after 1 Myr. Stars in disrupted clusters still have their photometry calculated for the integrated properties of the galaxy and are tracked in a set of ``field'' variables and outputs.
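A sketch of this mass-independent disruption rule: for $dN/d\tau\propto\tau^{-1}$, the surviving fraction at age $\tau > 1$ Myr is $(1\,{\rm Myr})/\tau$, which the snippet below applies probabilistically. This is a simplified reading of the prescription for illustration, not SLUG's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
T0 = 1e6   # yr; disruption begins 1 Myr after formation

def survives(age_yr):
    """Mass-independent disruption with dN/dtau ~ 1/tau: the surviving
    fraction is T0/age for age > T0, i.e. 90% of clusters are lost per
    factor of 10 in age."""
    age_yr = np.asarray(age_yr, dtype=float)
    p = np.minimum(1.0, T0 / age_yr)
    return rng.random(age_yr.shape) < p

# ~10% of 10 Myr-old clusters survive; ~1% would survive to 100 Myr
frac_10myr = survives(np.full(100000, 1e7)).mean()
```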
\section{Validating Tests}\label{sec:tests} \begin{deluxetable}{cc} \tablewidth{0pc} \tablecaption{Fiducial Inputs} \tablehead{\colhead{Parameter} & \colhead{Fiducial Value}} \startdata Time step & $10^6$ yr\\ Maximum time & $10^9$ yr\\ IMF & 1-120$M_\odot$; slope=-2.35\\ CMF & $20-10^7 M_\odot$; slope=-2\\ Stellar Evolutionary Tracks & Padova+AGB\\ Metallicity & Solar; $Z=0.20$\\ Stellar Atmosphere & Lej+Smi\\ Stellar Wind Model & Maeder\\ Fraction of stars in clusters & 100\%\\ \enddata \label{table:fiducial} \end{deluxetable} In this section we present a variety of tests to validate the outputs of SLUG. For these tests we make use of a set of fiducial parameters presented in Table~\ref{table:fiducial} unless otherwise noted\footnote{While the preferred SEDs for \textsc{SB99} are the Pau+Smi atmospheres, we find that the Pauldrach models are far too discrete. Therefore while we provide the Pau+Smi atmospheres, we recommend the Lej+Smi.}\footnote{Since we aim to test SLUG rather than to perform a study of the effects that the multiple parameters have on the luminosity distributions, we choose widely adopted values.}. To emphasize that SLUG can be applied at different scales, we arrange these tests in order of scale, starting with individual clusters and then considering integrated properties of entire galaxies in the well-sampled regime. \subsection{Photometry of Clusters}\label{sec:clusphot} To demonstrate that SLUG reproduces properties of observed clusters, we turn to the catalog of young star clusters compiled in \cite{larsen}. To reproduce the clusters we modify our fiducial IMF to extend down to 0.08 $M_\odot$ and run a SLUG model with a SFR of 1$M_\odot$ yr$^{-1}$ for 500 Myr, evaluated every 10 Myr. Note that the SFR does not directly affect the CMF or the properties of the clusters, only the number of clusters in existence at a given time.
We show the results of this exercise in Figure~\ref{fig:clusphot} where we find remarkable agreement between the models and the data. As is clear from the figure, we are able to reproduce both the location and spread of most of the observed data. Clusters that fall outside the locus of SLUG models can easily be reproduced when one accounts for a modest amount of reddening (see reddening vector). \begin{figure*} \epsscale{0.9} \plotone{f5.eps} \caption{ (\emph{top}) Here we present the birthline as first discussed by \cite{corbelli}. Black points and crosses are data from that paper with circle-crosses denoting their `clean' sample. Blue data points are clusters from SLUG. We see that our models are in relatively good agreement with observations. (\emph{bottom}) We present overlays demonstrating the average IMFs in each region of the birthline plot.} \label{fig:birthline} \end{figure*} \subsection{Cluster Birthline} Another test of the photometry of clusters is to compare their H$\alpha$ luminosity to their bolometric luminosity. Work by \cite{corbelli} has shown that newly born clusters lie along a birthline in this parameter space. In Fig.~\ref{fig:birthline} we compare the same models as Section \ref{sec:clusphot} (assuming $f_{esc}$ and $f_{dust}=0$) with their observational data and find good agreement. Our theoretical predictions differ slightly in the tilt of the locus of points from those by \cite{corbelli}, since we characterize the properties of our stars in a different manner (making use of stellar tracks rather than fitting formulae). To better demonstrate the origin of the birthline we also make use of SLUG's ability to keep track of the IMF of each individual cluster (see bottom panel of Figure~\ref{fig:birthline}). Here we can see that the birthline from left to right forms a sequence of progressively more well-sampled upper ends of the IMF.
Extremely rare deviants exist below the birthline, in which more very massive ($>100M_\odot$) stars are drawn than average. Note that these rare clusters consisting of essentially isolated O stars have also been reported in the Milky Way \citep{dewit04,dewit05} and the SMC \citep{oey04,lamb10} in numbers consistent with stochastic sampling of the IMF. \subsection{Comparison with \textsc{SB99} }\label{sec:sb99} A third obvious comparison for SLUG is \textsc{SB99} itself. Since SB99 is widely used, it serves as a benchmark for SLUG. Indeed, one of the motivations for making use of the SB99 tracks and SED matching algorithms is that our code should be able to exactly reproduce SB99 if we select input parameters that place us in the continuously-sampled regime. To that end we now present a variety of tests where we compare to SB99 to demonstrate that we can reproduce their results in this regime (that of a galaxy-sized number of stars). \begin{figure} \epsscale{1.} \plotone{f6.eps} \caption{A comparison of SLUG and \textsc{SB99} simulations of an instantaneous burst of $10^9 M_\odot$. We find good agreement between the two in both the absolute normalization of the fluxes as well as the time-dependent behavior. FUV and $r$ band fluxes are presented in units of ergs/s/Hz while Q(H$^0$) is in units of photons/s. In the $L_r$ panel, one can see the effects of the discrete SED matching techniques implemented by SB99 in the age ranges of 12-18 Myr and 20-22 Myr.} \label{fig:sb99} \end{figure} \begin{figure} \epsscale{1.} \plotone{f7.eps} \caption{A comparison of SLUG and SB99 photometry for an instantaneous burst of $10^9M_\odot$, evaluated at the ages indicated in each panel. The grey solid line represents the output spectrum from SB99 for such a population. The filled color circles show the SB99 integrated fluxes for the filters available to SLUG.
The black $\times$'s mark the SLUG photometry for the well-sampled model described in Section \ref{sec:sb99}.} \label{fig:sb99sed} \end{figure} To compare the outputs of both SB99 and SLUG, we choose an instantaneous burst of star formation to demonstrate the matching of the codes in both amplitude and time. We run a \textsc{SB99} model similar to our fiducial model (i.e. IMF slope of -2.35 from 1-120 $M_\odot$, solar metallicity, Padova+AGB tracks, Lej+Smi SEDs, and Maeder stellar wind models). To meaningfully compare with SB99 we must choose SLUG input parameters such that we are evaluating a population where SB99's approximations are valid. We therefore draw a very large instantaneous population of $10^9M_\odot$. To nullify any possible effects of our procedure for populating the clusters, we ensure all clusters are very large by modifying the fiducial CMF to a restricted range ($10^6-2\times10^6 M_\odot$). We present the results in Fig.~\ref{fig:sb99}. It is evident that we are able to accurately reproduce \textsc{SB99} in the well-sampled regime for integrated ``galaxy'' properties. We match both the amplitude and time evolution in all photometric bands. This can also be seen by looking at the full SEDs. In Figure~\ref{fig:sb99sed}, we present photometry for all 15 of the flux bands available for SLUG and compare with the spectra and integrated photometry produced by SB99 at a variety of time steps. Again we are able to fully reproduce the photometric properties in the well-sampled regime from FUV to $K$-band. In both these demonstrations, SLUG matches SB99 within 0.026 dex for all fluxes at all times.
\subsection{Effects on Coeval Populations } Recent studies \citep[e.g.,][]{angst} have shown the wealth of information that can be obtained using resolved CMDs of stars within a galaxy. For comparison with such studies in the stochastic regime, SLUG produces binned two-dimensional histograms that keep track of the user's choice of color magnitude diagrams. Such diagrams allow us to directly characterize the effects of stochasticity in a coeval population. In Figure~\ref{fig:angst}, we compare CMDs produced by SLUG for a $10^5 M_\odot$ instantaneous burst to \begin{figure} \epsscale{0.8} \plotone{f8.eps} \caption{CMDs of an instantaneous $10^5 M_\odot$ burst population at the ages indicated in each panel. Only stars more massive than 0.9 $M_\odot$ are binned in the CMDs. The dotted lines show the corresponding theoretical isochrones. The SLUG CMD has been convolved with a circular top-hat PSF solely to improve visibility. The color bar denotes the number of stars in that region of the diagram. } \label{fig:angst} \end{figure} the theoretical isochrones from which they are produced. Aside from demonstrating that we accurately reproduce the tracks, we are able to see the effects of stochasticity in leaving rapid phases of evolution unpopulated. Note that SLUG is capable of producing such diagrams for any given SFH. \subsection{Effects on Composite Populations } \begin{figure*} \begin{minipage}[b]{0.3\linewidth} \centering \epsscale{1.3} \plotone{f9a.eps} \end{minipage} \begin{minipage}[b]{0.3\linewidth} \epsscale{1.3} \plotone{f9b.eps} \end{minipage} \begin{minipage}[b]{0.3\linewidth} \epsscale{1.3} \plotone{f9c.eps} \end{minipage} \caption{ $R$-band, FUV, and ionizing photon luminosities vs. time for galaxies with constant SFRs of 1, $10^{-1}$, and $10^{-2}\ M_\odot$ yr$^{-1}$ as indicated. $R$-band and FUV luminosities are in units of erg s$^{-1}$ Hz$^{-1}$.
We compare a fully sampled realization from \textsc{SB99} (solid black lines) with 100, 500, and 1000 realizations from SLUG for SFRs of 1, $10^{-1}$, and $10^{-2}\ M_\odot$ yr$^{-1}$ respectively. The SLUG models are represented by their mean (black dash-dotted line), median (colored dashed line) and 5-95 percentile range (filled color region). Our SLUG models were set to only output every 10 million years. Note that the y-axis in each panel has been chosen to match the SFR, but always spans the same logarithmic interval. } \label{fig:sb99stoch} \end{figure*} \begin{figure*} \begin{minipage}[b]{0.3\linewidth} \centering \epsscale{1.3} \plotone{f10a.eps} \end{minipage} \begin{minipage}[b]{0.3\linewidth} \epsscale{1.3} \plotone{f10b.eps} \end{minipage} \begin{minipage}[b]{0.3\linewidth} \epsscale{1.3} \plotone{f10c.eps} \end{minipage} \caption{Same as Figure \ref{fig:sb99stoch}, but this time made with unclustered star formation, and using lower SFRs. Note that the third panel of Figure \ref{fig:sb99stoch} uses the same SFR as the first panel of this figure. These figures were constructed with 100, 500, and 1000 realizations at SFRs of $10^{-2}$, $10^{-3}$, and $10^{-4}$ $M_\odot$ yr$^{-1}$ respectively.} \label{fig:unclustered} \end{figure*} \begin{figure*} \begin{minipage}[b]{0.3\linewidth} \centering \epsscale{1.3} \plotone{f11a.eps} \end{minipage} \begin{minipage}[b]{0.3\linewidth} \epsscale{1.3} \plotone{f11b.eps} \end{minipage} \begin{minipage}[b]{0.3\linewidth} \epsscale{1.3} \plotone{f11c.eps} \end{minipage} \begin{minipage}[b]{0.3\linewidth} \centering \epsscale{1.3} \plotone{f11d.eps} \end{minipage} \begin{minipage}[b]{0.3\linewidth} \epsscale{1.3} \plotone{f11e.eps} \end{minipage} \begin{minipage}[b]{0.3\linewidth} \epsscale{1.3} \plotone{f11f.eps} \end{minipage} \caption{Solid lines show the evolution of Q(H$^0$) and $R$ band luminosity for individual simulations with clustered star formation with SFRs of 1, $10^{-1}$, and $10^{-2}$ $M_\odot$ yr$^{-1}$.
Dashed lines show the SB99 prediction. Note that the y-axis in each panel has been chosen to match the SFR, but always spans the same logarithmic interval.} \label{fig:tracks} \end{figure*} While individual clusters of stars can be treated as coeval, larger systems are intrinsically built of composite populations. One of the most basic composite populations one can consider is a galaxy forming stars at a constant star formation rate. As discussed in Section \ref{sec:comp}, the value of the SFR will have a significant impact on the effects of stochasticity. To demonstrate the differences that stochasticity makes, we compare SLUG realizations to those of a well-sampled SB99 model. In Figure \ref{fig:sb99stoch}, we first examine the luminosities for SFRs of 1, $10^{-1}$, and $10^{-2}$ $M_\odot$ yr$^{-1}$ with our fiducial values for the CMF and cluster mass fraction. For each SFR, we show the mean and median of the SLUG runs along with the 5 and 95 percentiles. One can clearly see an increase in fractional scatter as one decreases the SFR, which can be attributed to burstier SFHs that result from the grouping of coeval stars into massive clusters. This scatter appears at higher SFRs than predicted by our naive discussion in Section \ref{sec:comp} as a direct result of the clustering. In fact, nearly all of the scatter seen in Figure \ref{fig:sb99stoch} is a result of the clustering rather than sampling of individual clusters. This is most clearly demonstrated by Figure \ref{fig:unclustered}, which shows similar simulations but with completely unclustered star formation. Without clustering, the $10^{-2}$ $M_\odot$ yr$^{-1}$ models have approximately an order of magnitude less scatter in the 5-95 percentile range of the log of the luminosity.
We see that the unclustered stochastic effects behave as predicted in Section \ref{sec:comp}, where the fractional scatter is small for SFRs $\sim 10^{-2}$ $M_\odot$ yr$^{-1}$ and quickly increases as the SFR decreases \citep[also discussed in][]{mikiletter}. For a demonstration of the effects of clustering, we present the tracks of a subset of individual stochastic realizations of clustered star formation in Figure \ref{fig:tracks}. One can see that the Q(H$^0$) curves are less uniform than the $R$ luminosity. This is a direct result of the sensitivity of Q(H$^0$) to the youngest, most massive stars. One can also see that the scatter increases with decreasing SFR, as expected. This will be discussed further in da Silva et al. (in prep.), where we elaborate on the effects of stochastic star formation when one includes clusters. \section{Summary}\label{sec:summary} We introduce SLUG, a new code that correctly accounts for the effects of stochasticity (with caveats discussed in the text) by populating galaxies with stars and clusters of stars and then following their evolution using stellar evolutionary tracks. Cluster disruption is taken into account and a variety of outputs are created. We present a series of tests comparing SLUG to observations and other theoretical predictions. SLUG is able to reproduce the photometric properties of clusters from the \cite{larsen} catalog as well as the \cite{corbelli} birthline. It can also reproduce the results of SB99 in the well-sampled regime. Finally, we present SLUG outputs in the stochastic regime and demonstrate the flexibility of the code to address a variety of astrophysical problems with its range of possible outputs. SLUG is a publicly available code, and can be found at http://sites.google.com/site/runslug/. \acknowledgements R.L.dS. is partially supported by an NSF CAREER grant (AST-0548180). The work of R.L.dS. is supported under a National Science Foundation Graduate Research Fellowship.
MRK acknowledges support from: an Alfred P.~Sloan Fellowship; the National Science Foundation through grants AST-0807739 and CAREER-0955300; and NASA through Astrophysics Theory and Fundamental Physics grant NNX09AK31G and a {\it Spitzer Space Telescope} theoretical research program grant. We would like to thank J. X. Prochaska for help in reading and providing input on the early stages of this manuscript. We would like to thank F. Bigiel for encouraging us to create SLUG and useful conversations with J. Eldridge, C. Weidner, R. Bernstein, and J. Colucci. \clearpage
\section{Introduction} \label{s:intro} Studying the statistical properties of the galaxy distribution allows one to probe the structure of overdense regions today, learning about galaxy formation and cosmology. We observe significant clumping in this large-scale structure (LSS), which is commonly characterized by a series of $n$-point\ correlation functions \citep[reviewed in][]{peebles:80}. Observational evidence is in line with predictions of a dark-energy dominated cold dark matter ($\ensuremath{\Lambda\mathrm{CDM}}$) model \citep{komatsu:09,sanchez:09,reid:10}. However, there is a large conceptual hurdle between following the evolution of mass densities in gravitational collapse \citep[e.g. ][]{lss_review} and that realized by galaxy positions. A priori, there is little reason to believe a one-to-one correspondence exists between mass overdensities and galaxy positions; complex galaxy formation processes such as merging and feedback should have significant contributions. For example, recent results from the Sloan Digital Sky Survey \citep[SDSS; ][]{york:00} in \citet{zehavi:05,zehavi:10} show clustering varies with galaxy luminosity and color. This discrepancy between the observed ``light'' in galaxies relative to the predicted ``mass'' clustering is often described as \emph{galaxy-mass bias}. The parameterization of galaxy-mass bias enables a two-pronged approach to probe both cosmology and galaxy formation. On one side, we map the clustering of galaxies to that of the underlying mass distribution, allowing us to understand and constrain cosmology. Alternatively, the parameterization of the bias itself encodes useful information concerning galaxy formation processes. This approach distills observational data from hundreds of thousands of galaxies available in modern surveys, such as the two-degree field galaxy redshift survey \citep[2dFGRS;][]{2dFGRS} and the SDSS, into a significantly smaller and more manageable form.
Most observational evidence exploits the two-point correlation function (2PCF), the first in the series of $n$-point\ functions (or equivalently, the power spectrum in Fourier space). However, the 2PCF represents only a portion of the available information. Measurements of higher order moments, such as the three-point correlation function (3PCF), allow a more complete picture of the galaxy distribution. The statistical strength of higher order information might rival that of two-point statistics \citep{sefusatti:05}, as well as break model degeneracies describing cosmology and galaxy bias \citep{zheng:07,kulkarni:07}. Previous analyses have estimated the 3PCF from modern galaxy redshift surveys, including work on the 2dFGRS \citep{jing:04,wang:04,gaztanaga:05} and results from SDSS data \citep{kayo:04,nichol:06,kulkarni:07,gaztanaga:09,marin:11}. Related higher order statistics have also been measured for these datasets \citep{verde:02,pan:05,hikage:05,nishimichi:07}. This work is the second of two papers analyzing the reduced 3PCF on SDSS galaxy samples. The first paper \citep{mcbride:10} focused on the details of the measurements we analyze here, as well as clustering differences due to galaxy luminosity and color. This paper utilizes the configuration dependence to constrain non-linear galaxy-mass bias parameters in the local bias model \citep{fry:93}, and the properties of the errors necessary for quantitative analyses. The local bias model is a simple approach to characterize galaxy-mass bias. Alternative descriptions exist based on the halo model \citep[reviewed in][]{cooray:02}, which form phenomenological models with a wider range of parameters. Two widely used formulations include the halo occupation distribution \citep[HOD;][]{berlind:02} and the conditional luminosity function \citep[CLF;][]{yang:03,vdB:03}.
Such formulations exist for the 3PCF; however, the accuracy of the model predictions is not as well determined as for the 2PCF when compared with data \citep[see][]{takada:03,wang:04,fosalba:05}. A significant advantage of HOD modeling is the ability to use well determined measurements of the small scales for constraints (the non-linear regime in gravitational perturbation theory). Understanding the projected 3PCF, $Q_{proj}$, a major component of this work, provides a critical link to obtain reliable measurements at these smaller scales from observational galaxy samples. However, by using this simple prescription for galaxy-mass bias, we can investigate the effects of binning and covariance resolution in a quantitative analysis with a clear and simple model, where the implications for bias and cosmology of higher order moments are better studied. An important part of our analysis is comparing results from the projected 3PCF with the more commonly used redshift space measurements. This paper is organized as follows. We discuss the SDSS data, simulations, and mock galaxy catalogs in \S\ref{s:data}. We review the theory and methods of our analysis in \S\ref{s:methods}. We constrain the non-linear galaxy mass bias parameters in \S\ref{s:bias}. In \S\ref{s:ev}, we investigate clustering properties contained in the eigenvectors of the 3PCF covariance matrix. We perform a detailed examination of the quality of error estimation in \S\ref{s:errors}. We discuss our results and compare to related analyses in \S\ref{s:disc}. Finally, we review our main conclusions in \S\ref{s:summary}. Unless otherwise specified, we assume a flat $\ensuremath{\Lambda\mathrm{CDM}}$ cosmology where $\ensuremath{\Omega_{\mathrm{m}}} = 0.3$, $\ensuremath{\Omega_{\Lambda}} = 0.7 $, and $H_o = 100 \, h {\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}}$, used to convert redshift to physical distances.
\section{Data} \label{s:data} \subsection{SDSS Galaxy Samples} \label{s:sdss} \begin{table*} \centering \begin{tabular}{lccccc} \hline \multicolumn{6}{c}{\bfseries Specifics of SDSS galaxy samples} \\ \multirow{2}{*}{Sample} & Absolute & \multirow{2}{*}{Redshift} & Volume & Number of & Density \\ & Magnitude & & $\; h^{-3}{\mathrm{Gpc}}^{3}$ & Galaxies & $10^{-3} \; h^{3}{\mathrm{Mpc}}^{-3} $ \\ \hline \hline BRIGHT & $M_{r} < -21.5$ & $0.010 $ - $0.210 $ & $0.1390$ & $ 37,875$ & $0.272$ \\ LSTAR & $-21.5 < M_{r} < -20.5$ & $0.053 $ - $0.138 $ & $0.0391$ & $106,823$ & $2.732$ \\ \hline \end{tabular} \caption[Volume-limited galaxy catalogs from the SDSS DR6]{ The magnitude range, redshift limits, volume, total number of galaxies, and completeness corrected number density are shown for the galaxy samples constructed from the SDSS DR6 spectroscopic catalog. We selected these samples by cuts in redshift and corrected (K-correction and passive evolution) absolute $r$-band magnitude to create volume-limited selections. See details in \citet{mcbride:10}. } \label{t:gal_samples} \end{table*} The SDSS has revolutionized many fields in astronomy, obtaining images and spectra covering nearly a quarter of the sky by utilizing a dedicated 2.5 meter telescope at Apache Point Observatory in New Mexico \citep{gunn:98,gunn:06,york:00,stoughton:02}. Our galaxy samples and details of the measurements are fully described in a companion paper \citep{mcbride:10}. Briefly, we use galaxy data with spectroscopically determined redshifts, defined as the Main galaxy sample \citep{strauss:02}. We conduct our analysis of clustering measurements using galaxies from DR6 \citep{sdss_dr6}, and define samples from a refined parent catalog: the New York University Value-Added Galaxy Catalog \citep[NYU-VAGC;][]{vagc}. We analyze two samples: a BRIGHT sample where $M_r < -21.5$ and LSTAR with $-21.5 < M_r < -20.5$. 
We do not analyze the FAINT sample presented in the companion paper \citep{mcbride:10}, as the errors suffer from small volume effects. We tabulate properties, such as the redshift range, number of objects, volume and completeness corrected number density in Table~\ref{t:gal_samples}. Our absolute $r$-band magnitudes use the NYU-VAGC convention defined to represent values at $z=0.1$ \citep[see details in][]{vagc}. We note these as $M_r$ for simplicity, which refer to $M_{\band{0.1}{r}} - 5 \log h$. Radial distances and absolute magnitudes are calculated using a flat \ensuremath{\Lambda\mathrm{CDM}}\ cosmology with $ \ensuremath{\Omega_{\mathrm{m}}} = 0.3 $ and $ \ensuremath{\Omega_{\Lambda}} = 1 - \ensuremath{\Omega_{\mathrm{m}}} $. \subsection{Hubble Volume Simulation} \label{ss:hvs} To estimate the clustering of mass in the late time \ensuremath{\Lambda\mathrm{CDM}}\ cosmology, we analyze cosmological $N$-body\ simulations. We use the \emph{Hubble Volume} (HV) simulations \citep{colberg:00,evrard:02} that were completed by the Virgo Consortium. We utilize the lightcone output with \ensuremath{\Lambda\mathrm{CDM}}\ cosmology: ($\ensuremath{\Omega_{\mathrm{m}}} = 0.3$, $\ensuremath{\Omega_{\Lambda}} = 0.7$, $h = 0.7$, $\sigma_8 = 0.9$), where $H_o = 100 \, h {\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}}$. The HV simulation consists of $1000^3$ particles in a box of $(3000 \; h^{-1}\mathrm{Mpc})^3$ volume, resulting in a particle mass of $m_\mathrm{part} = 2.2 \times 10^{12} \; h^{-1}M_{\odot}$. The particles start from an initial redshift of $z_{init} = 35$, and are evolved to the current time using a Plummer softened gravitational potential with a softening length of $0.1 \; h^{-1}\mathrm{Mpc}$. We use the same simulation output as presented in our companion paper \citep{mcbride:10}. Here, we briefly review our postprocessing of the simulation data for completeness.
We include redshift distortions in the mass field by distorting the position according to the peculiar velocity of the dark matter particle. We trim particles to match the identical volume of the corresponding SDSS samples, including the non-trivial angular geometry of SDSS data. Finally, we randomly downsample the number of dark matter particles to make the computational time of the analysis more manageable \citep[discussed further in ][]{mcbride:10}. \subsection{Mock Galaxy Catalogs} \label{ss:mock_desc} We analyze mock galaxy catalogs created to match some of the SDSS galaxy data. These mock catalogs were constructed from 49 independent $N$-body\ simulations, initiated with different random phases and evolved from a single cosmology: ($\Omega_m = 0.27, \; \Omega_\Lambda = 0.73, \; h=0.72, \; \sigma_8 = 0.9$). While these differ slightly from our assumed cosmology for the data, we expect the differences to be minor, with no significant implications on our analysis. Each of the 49 realizations had randomized phases where the initial conditions were generated from 2nd order Lagrangian perturbation theory \citep{scoccimarro:98,crocce:06}. These simulations each consist of $640^3$ particles that we evolved using Gadget2 \citep{gadget2} from an initial redshift of $z_i=49$ to the present epoch. The box side-length of $1280 \; h^{-1}\mathrm{Mpc}$ contained enough volume to exactly match the brightest galaxy sample after applying the SDSS geometry. These simulations have been used in various other studies \citep[e.g.][]{tinker:08,manera:10}. The galaxy mocks were created by populating dark matter halos with galaxies by applying the HOD model in \citet{tinker:05} with parameter values defined to represent $M_r < -21.5$ and $\sigma_8=0.9$. The halos were identified using a friends-of-friends algorithm \citep{fof} applying a linking length of $b = 0.2$ in units of the mean interparticle separation.
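The halo-population step described above can be illustrated with a toy HOD draw: one central galaxy above a halo-mass threshold plus a Poisson-distributed number of satellites. The functional form and parameter values here are purely illustrative, not those of \citet{tinker:05}:

```python
import numpy as np

def populate_halo(M_halo, M_min=10**12.7, M1=10**14.0, alpha=1.0, rng=None):
    """Toy HOD draw (masses in h^-1 Msun): a central galaxy if the halo
    exceeds M_min, plus a Poisson satellite count with mean (M/M1)^alpha."""
    rng = rng or np.random.default_rng()
    n_cen = 1 if M_halo >= M_min else 0
    n_sat = rng.poisson((M_halo / M1) ** alpha) if n_cen else 0
    return n_cen + n_sat
```

Applying such a draw to every halo in a friends-of-friends catalog yields a mock galaxy population whose clustering can then be measured exactly as for the data.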
The least massive halos contained $33$ particles, a minimum mass capable of hosting the faintest galaxies in the BRIGHT galaxy sample. Given the mass resolution of these simulations, less massive halos necessary to host galaxies in the LSTAR galaxy sample could not be identified. Therefore, we could only obtain reliable mock galaxy catalogs corresponding to the BRIGHT sample. \section{Theory \& Methods} \label{s:methods} The $n$-point\ correlation functions remain a standard description of the complexity seen in large-scale structure \citep[LSS;][]{peebles:80}. In terms of the fractional overdensity ($\delta$) about the mean density ($\bar{\rho}$), \begin{equation} \label{eq:overdensity} \delta(\vec{x}) = \frac{\rho(\vec{x})}{\bar{\rho}} - 1 \; , \end{equation} we characterize the two-point correlation function (2PCF) and three-point correlation function (3PCF) as: \begin{equation} \label{eq:2pcf_delta} \xi(r_{12}) = \langle \delta(\vec{x}_1) \delta(\vec{x}_2) \rangle \; . \end{equation} \begin{equation} \label{eq:3pcf_delta} \zeta(r_{12}, r_{23}, r_{31}) = \langle \delta(\vec{x}_1) \delta(\vec{x}_2) \delta(\vec{x}_3) \rangle \; . \end{equation} We make the standard assumption of a homogeneous and isotropic distribution, and report clustering amplitudes dependent on the magnitude of the separation vector, e.g. $r_{12} = |\vec{x}_1 - \vec{x}_2|$. Motivated by the \emph{hierarchical ansatz} \citep{peebles:80} and gravitational perturbation theory \citep{lss_review} we use the \emph{reduced} 3PCF: \begin{equation} \label{eq:Q} Q(r_{12}, r_{23}, r_{31}) = \frac{ \zeta(r_{12}, r_{23}, r_{31}) } {\xi_{12} \xi_{23} + \xi_{23} \xi_{31} + \xi_{31} \xi_{12} }\; . \end{equation} This ``ratio statistic'' remains close to unity at all scales, and to leading order is insensitive to both time evolution and cosmology \citep[reviewed in][]{lss_review}. 
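The reduced statistic of \eqref{eq:Q} is a direct ratio of the connected 3PCF to the cyclic sum of 2PCF products; as a one-line illustration:

```python
def reduced_3pcf(zeta, xi12, xi23, xi31):
    """Reduced 3PCF Q: the connected three-point function normalized
    by the cyclic sum of two-point function products."""
    return zeta / (xi12 * xi23 + xi23 * xi31 + xi31 * xi12)
```

In the hierarchical picture, $\zeta$ is roughly the sum of the three $\xi\xi$ products, which is why $Q$ stays near unity.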
Redshift distortions impact measurements of clustering by altering the line-of-sight radial distance estimate, as we are unable to distinguish the galaxy's peculiar velocity from the Hubble flow \citep[reviewed in][]{hamilton:98}. We refer to the theoretical non-distorted distances as \emph{real} space, commonly denoted with $r$. Distances that include the redshift distortion (e.g. observational distances) are in \emph{redshift space}, denoted with $s$. We decompose the redshift space distance into line-of-sight ($\pi$) and projected separation ($r_p$) such that $ s = (\pi^2 + r_p^2)^{1/2} $. With this separation, the anisotropic distortion is primarily contained in the $\pi$ coordinate. We minimize the impact of redshift distortions by estimating the correlation function binned in both $r_p$ and $\pi$ and integrating along the line of sight, resulting in the projected correlation function \citep{davis:83}: \begin{equation} \label{eq:wp} w_p(r_p) = 2 \int_{0}^{\pi_\mathrm{max}} \xi(r_p, \pi) \mathrm{d} \pi \; . \end{equation} The projected 3PCF and its reduced form have analogous definitions: \begin{multline} \label{eq:zeta_proj} \zeta_{proj}(r_{p12}, r_{p23}, r_{p31}) = \\ \iint \zeta(r_{p12}, r_{p23}, r_{p31}, \pi_{12}, \pi_{23}) \mathrm{d} \pi_{12} \mathrm{d} \pi_{23} \; , \end{multline} \begin{multline} \label{eq:Qproj} Q_{proj}(r_{p12}, r_{p23}, r_{p31}) = \\ \frac{ \zeta_{proj}(r_{p12}, r_{p23}, r_{p31}) } { w_{p12} w_{p23} + w_{p23} w_{p31} + w_{p31} w_{p12} } \; . \end{multline} The measurements we analyze set $\pi_\mathrm{max} = 20 \; h^{-1}\mathrm{Mpc}$. We find this sufficiently deep to recover correlated structure and minimize redshift distortions, while not being overly expensive to calculate \citep[see detailed discussion in appendix of ][]{mcbride:10}. The full 3PCF is a function of three variables that characterize both the size and shape of triplets.
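For binned measurements, the line-of-sight integral of \eqref{eq:wp} reduces to a weighted sum over $\pi$ bins; a minimal sketch (function name ours):

```python
import numpy as np

def project_xi(xi_grid, pi_edges):
    """Projected correlation function w_p(r_p) = 2 * integral of
    xi(r_p, pi) d(pi), approximated as a sum over line-of-sight bins.
    xi_grid has shape (n_rp, n_pi); pi_edges has length n_pi + 1."""
    dpi = np.diff(pi_edges)              # line-of-sight bin widths
    return 2.0 * np.sum(xi_grid * dpi, axis=1)
```

The double integral defining $\zeta_{proj}$ in \eqref{eq:zeta_proj} follows the same pattern with one sum per line-of-sight coordinate.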
We parameterize the 3PCF by ($r_1$, $r_2$, $\theta$), where $r_1$ and $r_2$ represent two sides of a triangle (simplified notation from $r_{12}$ and $r_{23}$), and $\theta$ defines the opening angle between these sides. However, our measurements are estimated in bins defined by ($r_{12}, r_{23}, r_{31}$). We convert $r_{31}$ to $\theta$ using the cosine rule \citep[as detailed in][]{mcbride:10}. The 3PCF remains sensitive to the exact choice of binning scheme, which can mask or distort the expected signal \citep{GS05,marin:08,mcbride:10}. We choose a bin-width as a fraction, $f$, of the measured scale, $r$, such that $\Delta_r = f \times r$ and a bin at $r$ represents $(r - \frac{\Delta_r}{2}, r + \frac{\Delta_r}{2})$. We always use the reduced 3PCF as a function of three variables, $Q(r_1, r_2, \theta)$, but we simplify our notation by sometimes referring to it as $Q(\theta)$ or even $Q$. If the amplitude of $Q(\theta)$ varies significantly with $\theta$, we refer to this as \emph{strong} configuration dependence, in contrast to little or no variation for a \emph{weak} configuration dependence. We define the scale of triangles by $r_1$, and choose configurations such that $r_2 = 2 r_1$. This results in $r_3$ varying in size from $r_3 = r_2 - r_1$ when $\theta = 0$ to $r_3 = r_2 + r_1$ when $\theta = \pi$. \subsection{Galaxy-Mass Bias} \label{ss:bias_intro} We can consider galaxies to be a \emph{biased} realization of the \ensuremath{\Lambda\mathrm{CDM}}\ mass field. In the local bias model \citep{fry:93}, the galaxy over-density, $\delta_g$, can be connected to the mass over-density, $\delta_m$, by a non-linear Taylor series expansion: \begin{equation} \label{eq:bias_mod} \delta_g = \sum_k \frac{ b_k }{k!} \delta^k_{m} \approx b_1 \delta_{m} + \frac{b_2}{2} \delta^2_{m} \; . \end{equation} This relation describes the mapping between galaxy and mass by simple scalar values, to second order: the linear ($b_1$) and quadratic ($b_2$) bias.
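The cosine-rule conversion from the third side $r_{31}$ to the opening angle $\theta$ described in the parameterization above can be sketched as (function name ours):

```python
import numpy as np

def opening_angle(r12, r23, r31):
    """Opening angle theta between triangle sides r12 and r23,
    recovered from the third side via the cosine rule:
    r31^2 = r12^2 + r23^2 - 2 r12 r23 cos(theta)."""
    c = (r12**2 + r23**2 - r31**2) / (2.0 * r12 * r23)
    return np.arccos(np.clip(c, -1.0, 1.0))  # clip guards binning round-off
```

For the $r_2 = 2 r_1$ configurations used here, $r_{31} = r_2 - r_1$ maps to $\theta = 0$ and $r_{31} = r_2 + r_1$ to $\theta = \pi$, as stated in the text.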
With measurements on galaxy $n$-point correlation functions, the clustering of galaxies is linked to mass clustering via the bias parameters. The 2PCF can be used to constrain the linear bias by equating the correlation function between galaxies, $\xi_g$, to that of dark matter, $\xi_m$, such that \begin{equation} \label{eq:bias2pt} \xi_g(r) = b_1^2 \, \xi_{m}(r) \; . \end{equation} The 3PCF is the lowest order correlation function that shows leading order sensitivity to the quadratic bias term. The analog to \eqref{eq:bias2pt} for the connected 3PCF is written \begin{align} \label{eq:bias3pt} \zeta_g(r_{12},r_{23},r_{31}) = & \: b_1^3 \zeta_m(r_{12},r_{23},r_{31}) \: + \\ & \: b_1^2 b_2 \left[ \xi_{12} \xi_{23} + \xi_{12} \xi_{31} + \xi_{31} \xi_{23} \right] \; , \nonumber \end{align} where $\xi_{12} = \xi_m(r_{12})$, etc. This simplifies for the reduced 3PCF where we denote the bias parameters as $B = b_1$ and $C = b_2 / b_1$: \begin{equation} \label{eq:biasQ} Q_g(r_{12},r_{23},\theta) = \frac{1}{B} \big[ \; Q_{m}(r_{12},r_{23},\theta) + C \; \big] \; . \end{equation} We have changed notation slightly in \eqref{eq:biasQ}, replacing $r_{31}$ with $\theta$, the opening angle between the two sides $r_{12}$ and $r_{23}$, as we discussed above. A multiplicative factor such as $B$ can dampen ($B>1$) or enhance ($B<1$) the configuration dependence of $Q_m$ as seen from the galaxy distribution, whereas the value of $C$ will produce an offset. We see that $B$ and $C$ are partially degenerate in this model. If $Q(\theta)$ shows no configuration dependence, two parameters are used to describe a shift in amplitude. However, this degeneracy can be removed when the 3PCF exhibits a shape dependence \citep{fry:94}. Even with the degeneracy broken, the values of $B$ and $C$ could show a strong correlation. \subsection{Estimating the Covariance Matrix} \label{ss:covar_est} We measure the correlation between measurements by empirically calculating the covariance matrix. 
Given a number of realizations, $N$, a fractional error on $Q$ can be written as \begin{equation}\label{eq:del} \Delta_i^k = \frac{ Q_i^k - \bar{Q}_i }{ \sigma_i } \; , \end{equation} for each realization ($k$) and bin ($i$) given a mean value ($\bar{Q}_i$) and variance ($\sigma_i^2$) for each bin over all realizations. We use $Q$ as a general placeholder for any measured statistic (2PCF, 3PCF, etc.). We construct the normalized covariance matrix using the standard unbiased estimator: \begin{equation}\label{eq:cov} \mathcal{C}_{ij} = \frac{1}{N - 1} \sum_{k=1}^{N}\Delta_i^k \Delta_j^k \; . \end{equation} Equation~\ref{eq:cov} assumes that each realization is independent. In practice, a number of mock galaxy catalogs can be used to make this a tractable approach. If mock catalogs appropriate to the galaxy sample are not available, a covariance matrix can be estimated from the data itself, such as the commonly employed jackknife re-sampling \citep{lupton:01}. Since jackknife samples are not independent realizations, we compute the covariance by: \begin{equation}\label{eq:cov_jack} \mathcal{C}^{(jack)}_{ij} = \frac{(N - 1)^2}{N} \mathcal{C}_{ij} = \frac{N - 1}{N} \sum_{k=1}^{N}\Delta_i^k \Delta_j^k \; , \end{equation} where $\mathcal{C}_{ij}$ denotes the typical unbiased estimator of the covariance when computed on $N$ jackknife samples. \subsection{Eigenmode Analysis} \label{ss:ema} We constrain galaxy-mass bias parameters using the information in the full covariance matrix. We utilize an \emph{eigenmode} analysis \citep{scoccimarro:00}, an equivalent method to a principal component analysis (PCA) on the measurement covariance matrix. This method was tested in detail for the galaxy-mass bias of the 3PCF using simulated data in \citet{GS05}. The basic idea is to isolate the primary contributing eigenmodes of the reduced 3PCF based on the structure of the normalized covariance matrix.
This allows one to trim unresolved modes and perform a fit in a basis which minimizes the non-Gaussianity of the residuals. To summarize, the covariance matrix can be cast in terms of a singular value decomposition (SVD), \begin{equation}\label{eq:svd} \bm{\mathcal{C}} = \bm{U} \; \bm{\Sigma} \; \bm{V}^T \quad ; \quad \Sigma_{ij} = \lambda_{i}^2 \delta_{ij} \; , \end{equation} where $\delta_{ij}$ is the Kronecker delta function making $\bm \Sigma$ a diagonal matrix containing the singular values, $\lambda_i^2$. The matrices $\bm{U}$ and $\bm{V}$ are orthogonal rotations to diagonalize the covariance into $\bm{\Sigma}$ where $\bm{V}^T$ denotes the transpose of $\bm{V}$. Applying the SVD to the covariance matrix yields a rotation into a basis where the eigenmodes are uncorrelated (i.e. the covariance matrix becomes diagonal). The resulting rotation matrix can be directly applied to our signal forming the \emph{$Q$-eigenmodes}, \begin{equation}\label{eq:q-eigenmodes} \widehat{Q}_i = \sum_j U_{ij} \frac{ Q_j } { \sigma_j } \; . \end{equation} The singular values provide a weight on the importance of each eigenvector. Specifically, a multiplicative factor of $1/\lambda_i^2$ is applied when $\mathcal{C}$ is inverted. With this feature in mind, we define the signal-to-noise ratio as \begin{equation}\label{eq:s2n} \left(\!\frac{S}{N}\!\right)_i = \left| \frac{ \widehat{Q}_i }{ \lambda_i } \right| \; . \end{equation} We note that this $S/N$ estimate is a \emph{lower} bound on the true $S/N$ due to the SVD. To remove noise and avoid numerical instabilities, we trim eigenmodes corresponding to low singular values. \citet{GS05} suggest keeping eigenmodes resolved better than the sampling error in the covariance matrix. Since our covariance matrices are normalized (i.e.
the diagonal elements are set to one), the singular values are directly related to sampling error, and we require the so-called ``dominant modes'' \citep{GS05} to satisfy: \begin{equation}\label{eq:svcut} \lambda^2_i > \sqrt{ 2 / N } \;, \end{equation} where $N$ refers to the number of samples used to estimate the covariance matrix. The advantage of using this eigenmode analysis for fitting is threefold. First, it correctly incorporates the correlation between measurement bins. Second, by performing the fit in the rotated basis of the eigenmodes, the residuals of the fit are more Gaussian and the degrees of freedom are properly addressed (e.g. a fit using 3 eigenmodes really only fits 3 numbers). Finally, using only dominant modes removes artifacts due to noise in the estimated covariance matrices. For example, when using the full covariance but not trimming any modes, noise can cause a fit to converge on incorrect values with artificially small errors (and falsely high $S/N$). This effect becomes worse as the covariance becomes less resolved. Conversely, fitting over dominant eigenmodes helps to eliminate any problems from unresolved parts of the error estimation \citep[Figure 13]{GS05}, and has the benefit of dealing with singular covariance matrices. \section{Galaxy-Mass Bias} \label{s:bias} \begin{figure} \centering \includegraphics[angle=0,width=\linewidth]{bias_vl_mt21_5_ev_cmp6.eps} \caption[Galaxy Mass Bias Constraints: $ M_r < -21.5 $]{ Constraints on the galaxy-mass bias parameters using the $M_r < -21.5$ galaxy sample and the HV simulation for mass estimates. The left column corresponds to fits using $Q_z(\theta)$ (redshift space) with the right column fit using $Q_{proj}(\theta)$ (projected space). The top and bottom panels represent individual fits with triangles of $r_1 = 6$ and $9 \; h^{-1}\mathrm{Mpc}$ as indicated. The middle panels are a joint fit using both triangles.
There are two points of comparison marked: an unbiased result with $(B=1.0;C=0.0)$ and only non-linear bias $(B=1.0;C=-0.3)$. The contours denote the $1, 2$ and $3 \sigma$ levels from the $\Delta \chi^2$ distribution of two parameters. } \label{f:bias_mt21_5} \end{figure} \begin{figure} \centering \includegraphics[angle=0,width=\linewidth]{bias_vl_mb20_5_f10_ev_cmp6.eps} \caption[Galaxy Mass Bias Constraints: $-21.5 < M_r < -20.5$]{ Analogous to Figure~\ref{f:bias_mt21_5}, but for $-21.5 < M_r < -20.5$ galaxies. } \label{f:bias_mb20_5} \end{figure} \begin{figure} \centering \includegraphics[angle=270,width=\linewidth]{sigQ_fit_s6a9q2tb15f25_dr6fix_vl_mt21_5_spat_ev.eps} \includegraphics[angle=270,width=\linewidth]{sigQ_fit_s6a9q2tb15f25_dr6fix_vl_mt21_5_proj_ev.eps} \caption[SDSS 3PCF with Best Fit Bias Parameters: $M_r < -21.5$]{ The reduced 3PCF for the $M_r < -21.5$ sample showing the mass scaled to the ``best fit'' galaxy-mass bias parameters. The top two panels correspond to redshift space, and the bottom two to projected space. From left to right, the scale of the triangle increases as noted. The red (dashed) line represents an individual fit only to that triangle scale, and the blue (dotted) line shows a joint fit between both scales. } \label{f:Qfit_mt21_5} \end{figure} \begin{figure} \centering \includegraphics[angle=270,width=\linewidth]{sigQ_fit_s6a9q2tb15f10_dr6fix_vl_mb20_5_spat_ev.eps} \includegraphics[angle=270,width=\linewidth]{sigQ_fit_s6a9q2tb15f10_dr6fix_vl_mb20_5_proj_ev.eps} \caption[SDSS 3PCF with Best Fit Bias Parameters: $-21.5 < M_r < -20.5$]{ Like Figure~\ref{f:Qfit_mt21_5} but for the $-21.5 < M_r < -20.5$ sample. The reduced 3PCF showing the mass scaled to the ``best fit'' galaxy-mass bias parameters. The top two panels correspond to redshift space, and the bottom two to projected space. From left to right, the scale of the triangle increases as noted. 
The red (dashed) line represents an individual fit only to that triangle scale, and the blue (dotted) line shows a joint fit between both scales. } \label{f:Qfit_mb20_5} \end{figure} We want to constrain the galaxy-mass bias described by \eqref{eq:bias_mod} using the full configuration dependence of the reduced 3PCF in the quasi-linear regime. For the galaxy data, we use measurements in both redshift space, $Q_z(\theta)$, and projected space, $Q_{proj}(\theta)$, as presented in \citet{mcbride:10}. We estimate the bias parameters by comparing to mass estimates obtained from dark matter particles in the HV simulation (see \S\ref{ss:hv}). We expect redshift distortions to affect the bias relation, an effect we only partially address \citep{scoccimarro:99}. In particular, we account for the effects of redshift distortions by applying a redshift-space distortion to the dark matter particle positions based on their velocities for our mass measurement. However, this is not completely sufficient as redshift distortions alter the bias relation in \eqref{eq:biasQ}, especially for $Q_z$ \citep{scoccimarro:99,scoccimarro:01}. We expect that $Q_{proj}$ will be predominantly unaffected and roughly equivalent to real space measurements for this parameterization \citep[see e.g.][]{zheng:04}. We restrict our analysis to scales above $6 \; h^{-1}\mathrm{Mpc}$, corresponding to $Q(r_1, r_2, \theta)$ with $r_1 = 6$ and $9 \; h^{-1}\mathrm{Mpc}$ with configurations having $r_2 = 2 r_1$. We let $\theta$ vary between $0$ and $\pi$ using $15$ bins, as detailed in \citet{mcbride:10}. We investigate galaxy-mass bias in two samples where the covariance is well determined: BRIGHT ($M_r < -21.5$) and LSTAR ($-21.5 < M_r < -20.5$) as listed in Table~\ref{t:gal_samples}. We remove the least significant eigenmodes during the fit by applying the criterion in \eqref{eq:svcut}. We discuss possible effects of using a different number of modes in \S\ref{s:errors}.
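The pipeline just described (normalized covariance, SVD, trimming via \eqref{eq:svcut}, and rotation into eigenmodes per \eqref{eq:q-eigenmodes}) can be sketched as follows; the jackknife realizations here are mock Gaussian data standing in for the actual measurements:

```python
import numpy as np

# Minimal sketch of the eigenmode analysis; the Q(theta) realizations
# are mock Gaussian data, not the SDSS measurements.
rng = np.random.default_rng(0)
N, nbins = 30, 15
Q_k = 1.0 + 0.1 * rng.normal(size=(N, nbins))  # mock realizations

Q_bar = Q_k.mean(axis=0)
sigma = Q_k.std(axis=0, ddof=1)
Delta = (Q_k - Q_bar) / sigma         # fractional errors, Eq. (del)

# Normalized covariance, Eq. (cov); for jackknife samples this gets
# rescaled by (N-1)^2 / N as in Eq. (cov_jack).
C = (Delta.T @ Delta) / (N - 1)

# SVD: C = U Sigma V^T, with singular values lambda_i^2, Eq. (svd)
U, sv, Vt = np.linalg.svd(C)

# Keep only "dominant" modes resolved above sampling error, Eq. (svcut)
dominant = sv > np.sqrt(2.0 / N)

# Rotate the signal into the uncorrelated Q-eigenmodes, Eq. (q-eigenmodes)
Q_hat = U.T @ (Q_bar / sigma)
s2n = np.abs(Q_hat / np.sqrt(sv))     # signal-to-noise, Eq. (s2n)
```

A fit would then be performed over the entries of `Q_hat` selected by `dominant`, with the remaining modes discarded.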
For each galaxy sample, we perform six independent fits: a series of three different scales for measurements in both redshift and projected space. We use the full configuration dependence for triangles with $r_1 = 6$ and $9 \; h^{-1}\mathrm{Mpc}$, as well as a joint fit using both scales. For the joint fit, we estimate the full combined covariance matrix to correctly account for overlap and correlation and use the same eigenmode analysis. This changes the number of available modes from $15$ in the individual fits to $30$ modes for the combined joint fit. We estimate the covariance for these samples using $30$ jackknife samples, where our jackknife regions have equal unmasked area on the sky and use the full redshift distribution of the observational galaxy sample \citep{mcbride:10}. \subsection{Constraining Non-Linear Local Bias} \label{ss:nlbias} We constrain the galaxy-mass bias using a maximum likelihood approach by calculating a simple $\chi^2$ statistic where the likelihood $\mathcal{L} \propto \exp( -\chi^2 / 2 ) $ and \begin{eqnarray} \label{eq:chi2} \chi^2 &=& \vec{\Delta}^T \bm{\mathcal{C}}^{-1} \vec{\Delta} \; , \nonumber \\ \Delta_i &=& \frac{Q_i - Q_i^{(t)}}{ \sigma_i } \; . \end{eqnarray} We determine the theoretical model, $Q^{(t)}$, by scaling the mass measurement from the HV simulation, $Q_m$, with bias parameters $B$ and $C$ as per \eqref{eq:biasQ}. We evaluate $\mathcal{L}$ on a grid using the ranges: $B = 0.1 \ldots 3.0$ and $C = -1.5 \ldots 1.5$ with a step-size of $0.01$. We tested for discrepancies using a factor of $10$ finer spacing between grid elements with no significant differences to the fitted results. We first examine the BRIGHT sample ($M_r < -21.5$), with the likelihood space of the six 2-parameter fits displayed in Figure~\ref{f:bias_mt21_5}. We include contours for Gaussian $1,2$ and $3\sigma$ levels which identify regions of probabilities for $68.3, 95.5$ and $99.7$\%.
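This grid evaluation of \eqref{eq:chi2} can be sketched as follows; the mock $Q_m(\theta)$, error bars, and diagonal stand-in for the inverse covariance are illustrative assumptions (the actual fit uses the eigenmode-trimmed measured covariance):

```python
import numpy as np

# Hedged sketch of the grid-based chi^2 fit of Eqs. (chi2) and (biasQ).
# Q_m, sigma, and Cinv are toy stand-ins, not the measurements.
nbins = 15
theta = np.linspace(0.0, np.pi, nbins)
Q_m = 0.7 + 0.3 * np.cos(theta) ** 2     # mock mass Q(theta)
B_true, C_true = 1.1, -0.2               # values the mock data encode
Q_g = (Q_m + C_true) / B_true            # mock galaxy Q(theta), Eq. (biasQ)
sigma = np.full(nbins, 0.05)
Cinv = np.eye(nbins)                     # diagonal stand-in inverse covariance

# Grid from the text: B in [0.1, 3.0], C in [-1.5, 1.5], step 0.01
B_grid = np.arange(0.1, 3.0 + 1e-9, 0.01)
C_grid = np.arange(-1.5, 1.5 + 1e-9, 0.01)
chi2 = np.empty((B_grid.size, C_grid.size))
for i, B in enumerate(B_grid):
    for j, Cq in enumerate(C_grid):
        Qt = (Q_m + Cq) / B              # model Q^(t)
        d = (Q_g - Qt) / sigma
        chi2[i, j] = d @ Cinv @ d

i0, j0 = np.unravel_index(chi2.argmin(), chi2.shape)
B_fit, C_fit = B_grid[i0], C_grid[j0]    # recovers B_true, C_true here
```

Since the mock data are noiseless, the minimum lands on the input $(B, C)$; with real data the $\Delta\chi^2$ surface around the minimum provides the contours.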
We calculate these from the $\Delta\chi^2$ distribution for a 2-parameter fit (i.e. two degrees-of-freedom), with corresponding values of 2.3, 6.2, and 11.8 from the best fit value. We include two reference points for comparison: the \emph{unbiased} result where $(B=1.0;C=0.0)$, along with a potential negative quadratic bias term accounting for the entire galaxy bias $(B=1.0;C=-0.3)$ \citep[similar comparison to Figure 5 in][]{gaztanaga:05}. We can clearly see the degeneracy between $B$ and $C$ in Figure~\ref{f:bias_mt21_5}, visible as the elongated diagonal contour. Larger values of $B$ remain likely with larger values of $C$, consistent with our expectation of degeneracy by inspecting the bias relation in \eqref{eq:biasQ}. The errors are notably larger for the projected space measurements, which also show lower overall $S/N$ values. This results from the larger uncertainties in the projected measurements \citep{mcbride:10}. Since the scale $r_p$ represents a projection that incorporates larger scales (determined by the line-of-sight integration $\pi_{max}$), projected measurements are more sensitive to the dominant uncertainty from cosmic variance that increases with scale. In all cases, the \emph{unbiased} $(B=1;C=0)$ model is excluded at greater than a $2\sigma$ level. To see the success of the fit ``by eye'', we plot the 3PCF for dark matter, galaxies and best fit scaled model for this sample in Figure~\ref{f:Qfit_mt21_5}. Both the ``individual'' fits and ``combined'' joint fit produce models that match the data well. Next, we fit the galaxy-mass bias parameters using the LSTAR sample ($-21.5 < M_r < -20.5$). This sample spans a unit bin in magnitude, and consists only of galaxies fainter than the previous bright sample. The results of the fit with likelihood contours are shown in Figure~\ref{f:bias_mb20_5}. The uncertainties appear reduced in size -- a striking difference with respect to the BRIGHT sample in Figure~\ref{f:bias_mt21_5}.
In addition, the slope of the ``line of degeneracy'' between $B$ and $C$ has shifted. We reason that this is in part due to the increased statistical significance of the larger sample, as both the measurements and covariance are better resolved. Due to the higher number density of galaxies, we re-measured the 3PCF using a finer binning scheme (fractional bin-width of $f=0.1$ as opposed to $f=0.25$, see comparison in Appendix~\ref{s:binning}). With the finer binning, we see a stronger configuration dependence, which will alter the degeneracy between $B$ and $C$. We note that many of the best fit $B$ values appear smaller, which we expect for a fainter sample \citep{zehavi:05,zehavi:10}. The same line of reasoning suggests that the ``unbiased'' model $(B=1;C=0)$ should be more likely to fit. As before, we plot the respective best fit model in comparison with the dark matter and galaxy 3PCF in Figure~\ref{f:Qfit_mb20_5}. There is a smaller difference between HV (mass) and galaxy measurements, as this sample is fainter. We notice some noise in the HV measurement for $Q_{proj}$, making the model not quite as smooth. We note that by eye, $Q_z$ on larger scales indicates a slight bias for the combined fit, with the model undershooting the data and $1\sigma$ uncertainties. Significant off-diagonal structure in the covariance matrix can produce a fit where ``chi-by-eye'' suggests a poor fit. Since the $r_1 = 6 \; h^{-1}\mathrm{Mpc}$ measurements in $Q_z$ have much smaller errors, these scales drive the fit, making measurements with $r_1 = 9 \; h^{-1}\mathrm{Mpc}$ appear a poor match to the ``best fit'' model. We summarize the results of our two parameter constraints for the BRIGHT and LSTAR sample in Table~\ref{t:bias}. The BRIGHT sample ($M_r < -21.5$) represents galaxies with $r$-band magnitudes significantly brighter than $\ensuremath{L_{\ast}}$ where $M_r \sim -20.4$ \citep{blanton:03}.
We typically consider $\ensuremath{L_{\ast}}$ galaxies to have a linear bias, i.e. where $B \sim 1$, and we might expect this brighter sample to have a larger $B$ value. The constraints from projected measurements appear to follow this logic; the best fit values on $Q_{proj}$ in the fainter LSTAR sample ($-21.5 < M_r < -20.5$) are lower with $B \sim 1$. Redshift space measurements, $Q_z$, appear consistent with $B \sim 1$ for all fits, but at the same time values of $C$ are lower, reflecting the degeneracy of the $B$ and $C$ parameters. The reduced $\chi^2_{\nu}$ values show an acceptable fit in almost all cases; the exceptions are the two $Q_z$ fits using the $r_1 = 6 \; h^{-1}\mathrm{Mpc}$ triangles for the LSTAR sample. Consequently, the joint fit appears to be the poorest match in Figure~\ref{f:Qfit_mb20_5}. The \ensuremath{\Delta\chi^2}\ in Table~\ref{t:bias} quantifies how far an unbiased model lies from the best fit parameters. We find an unbiased model is ruled out for the BRIGHT galaxy sample at greater than $4.8\sigma$ in redshift space and $2.6\sigma$ in projected space. We cannot conclude the same for the LSTAR sample, which is largely consistent with an unbiased model. We generally consider bright galaxies to be more biased \citep{zehavi:05,zehavi:10}. The LSTAR sample is a magnitude bin around $\ensuremath{L_{\ast}}$ and fainter than the BRIGHT sample, and we expect a better consistency with the unbiased model. \begin{table*} \centering \begin{tabular}{lcccccc} \hline \hline \multicolumn{7}{c}{\bfseries Galaxy-Mass Bias Parameters from SDSS} \\ \hline Measurement & Scales ($h^{-1}\mathrm{Mpc}$) & B & C & $\chi^2_\nu$ & D.o.F.
& unbiased $\Delta\chi^2$ \\ \hline \hline BRIGHT-z & 6-18 & $ 1.03_{-0.08}^{+0.11} $ & $ -0.25_{-0.06}^{+0.08} $ & 1.48 & 6-2 & $118.86$ ($10.7\sigma$) \\ BRIGHT-proj & 6-18 & $ 1.27_{-0.21}^{+0.30} $ & $ -0.03_{-0.14}^{+0.19} $ & 0.78 & 6-2 & $ 16.43$ ($ 3.6\sigma$) \\ BRIGHT-z & 6-27 & $ 1.04_{-0.06}^{+0.06} $ & $ -0.24_{-0.05}^{+0.05} $ & 0.83 & 9-2 & $132.54$ ($11.3\sigma$) \\ BRIGHT-proj & 6-27 & $ 1.20_{-0.14}^{+0.21} $ & $ -0.06_{-0.11}^{+0.15} $ & 0.45 & 10-2 & $ 18.70$ ($ 3.9\sigma$) \\ BRIGHT-z & 9-27 & $ 1.01_{-0.09}^{+0.10} $ & $ -0.22_{-0.08}^{+0.09} $ & 0.60 & 4-2 & $ 26.90$ ($ 4.8\sigma$) \\ BRIGHT-proj & 9-27 & $ 1.23_{-0.22}^{+0.34} $ & $ -0.02_{-0.18}^{+0.27} $ & 0.34 & 5-2 & $ 9.44$ ($ 2.6\sigma$) \\ \hline LSTAR-z & 6-18 & $ 1.03_{-0.07}^{+0.09} $ & $ -0.22_{-0.08}^{+0.10} $ &13.47 & 3-2 & $ 28.07$ ($ 4.9\sigma$) \\ LSTAR-proj & 6-18 & $ 1.10_{-0.11}^{+0.13} $ & $ -0.01_{-0.14}^{+0.16} $ & 0.85 & 4-2 & $ 3.00$ ($ 1.2\sigma$) \\ LSTAR-z & 6-27 & $ 0.96_{-0.07}^{+0.08} $ & $ -0.30_{-0.08}^{+0.08} $ & 3.22 & 5-2 & $ 45.85$ ($ 6.5\sigma$) \\ LSTAR-proj & 6-27 & $ 1.03_{-0.11}^{+0.15} $ & $ -0.14_{-0.12}^{+0.16} $ & 1.07 & 7-2 & $ 5.86$ ($ 1.9\sigma$) \\ LSTAR-z & 9-27 & $ 1.04_{-0.09}^{+0.11} $ & $ -0.07_{-0.14}^{+0.16} $ & 0.07 & 3-2 & $ 2.37$ ($ 1.0\sigma$) \\ LSTAR-proj & 9-27 & $ 1.03_{-0.13}^{+0.19} $ & $ -0.09_{-0.15}^{+0.19} $ & 1.75 & 4-2 & $ 1.93$ ($ 0.9\sigma$) \\ \hline \end{tabular} \caption[Galaxy-Mass Bias Parameters from SDSS DR6]{ The two-parameter best fit galaxy-mass bias parameters, using \eqref{eq:biasQ} with the configuration dependence in the reduced 3PCF from SDSS DR6 galaxy samples in comparison with dark matter clustering from the Hubble volume simulation. The fits are performed separately on two galaxy samples BRIGHT ($M_r < -21.5$) and LSTAR ($-21.5 < M_r < -20.5$) using measurements in redshift space (denoted with ``z'') as well as projected space (``proj''). 
The second column lists the range of scales used for the respective fit. The errors are marginalized $1\sigma$ bounds calculated by the range within $\Delta\chi^2 \le 1$ from the best fit value. The quality of the best fit value is stated with the reduced chi-square $\chi^2_{\nu} = \chi^2 / \text{D.o.F.}$. The degrees of freedom (D.o.F.) correspond to the number of eigenmodes used minus the number of parameters ($2$ for all these fits). The last column lists the $\Delta\chi^2$ value to quantify the likelihood of an ``unbiased'' model matching the data, i.e. $(B=1;C=0)$, with a likelihood expressed in the number of $\sigma$ given by the standard Gaussian assumption for the $\Delta\chi^2$ distribution. } \label{t:bias} \end{table*} \begin{table*} \centering \begin{tabular}{lccccc} \hline \hline \multicolumn{6}{c}{\bfseries Galaxy-Mass Bias without Quadratic Term } \\ \hline Measurement & Scales ($h^{-1}\mathrm{Mpc}$) & B & $\chi^2_\nu$ & D.o.F. & $\Delta\chi^2$ from best fit \\ \hline \hline BRIGHT-z & 6-18 & $ 1.34_{-0.04}^{+0.04} $ & 2.42 & 6-1 & $ 6.16$ ($ 2.0\sigma$)\\ BRIGHT-proj & 6-18 & $ 1.31_{-0.09}^{+0.11} $ & 0.63 & 6-1 & $ 0.04$ ($ 0.0\sigma$)\\ BRIGHT-z & 6-27 & $ 1.30_{-0.03}^{+0.03} $ & 2.23 & 9-1 & $ 12.01$ ($ 3.0\sigma$)\\ BRIGHT-proj & 6-27 & $ 1.27_{-0.07}^{+0.09} $ & 0.42 & 10-1 & $ 0.16$ ($ 0.1\sigma$)\\ BRIGHT-z & 9-27 & $ 1.24_{-0.06}^{+0.06} $ & 1.85 & 4-1 & $ 4.35$ ($ 1.6\sigma$)\\ BRIGHT-proj & 9-27 & $ 1.25_{-0.09}^{+0.11} $ & 0.26 & 5-1 & $ 0.01$ ($ 0.0\sigma$)\\ \hline LSTAR-z & 6-18 & $ 1.21_{-0.04}^{+0.05} $ & 8.68 & 3-1 & $ 3.90$ ($ 1.5\sigma$)\\ LSTAR-proj & 6-18 & $ 1.11_{-0.06}^{+0.07} $ & 0.57 & 4-1 & $ 0.01$ ($ 0.0\sigma$)\\ LSTAR-z & 6-27 & $ 1.23_{-0.04}^{+0.04} $ & 4.59 & 5-1 & $ 8.70$ ($ 2.5\sigma$)\\ LSTAR-proj & 6-27 & $ 1.15_{-0.07}^{+0.07} $ & 1.02 & 7-1 & $ 0.76$ ($ 0.4\sigma$)\\ LSTAR-z & 9-27 & $ 1.08_{-0.05}^{+0.06} $ & 0.14 & 3-1 & $ 0.21$ ($ 0.1\sigma$)\\ LSTAR-proj & 9-27 & $ 1.11_{-0.08}^{+0.09} $ & 1.25 & 4-1 & $
0.25$ ($ 0.1\sigma$)\\ \hline \end{tabular} \caption[Galaxy-Mass Bias without a Quadratic Term]{ The single-parameter best fits for galaxy-mass bias using \eqref{eq:biasQ} where we constrain $C=0$. We fit the configuration dependence of the reduced 3PCF from SDSS DR6 galaxy samples in comparison with dark matter clustering from the Hubble volume simulation. Fits are performed separately on two galaxy samples BRIGHT ($M_r < -21.5$) and LSTAR ($-21.5 < M_r < -20.5$) using measurements in redshift space (denoted with ``z'') as well as projected space (``proj''). The second column lists the range of scales used for the respective fit. The errors are marginalized $1\sigma$ bounds calculated by the range within $\Delta\chi^2 \le 1$ from the best fit value. The quality of the best fit value is stated with the reduced chi-square $\chi^2_{\nu} = \chi^2 / \text{D.o.F}$. The degrees of freedom (D.o.F.) correspond to the number of eigenmodes used minus the number of parameters (in this case, just one). The last column lists the $\Delta\chi^2$ value to quantify the difference in likelihood of this model with $C=0$ compared with the best fit of a two-parameter fit (i.e. Table~\ref{t:bias}). } \label{t:linbias} \end{table*} \subsection{Non-zero Quadratic Bias?} \label{ss:linbias} With our two parameter likelihood space, we can investigate the statistical significance of a non-zero quadratic bias term ($b_2$ which is encapsulated in $C = b_2 / b_1$). We use the same configuration dependence of the 3PCF, and the measured covariance, but restrict the two parameter fit such that $C = 0$. We evaluate the best fit $B$, the quality of the fit (via the reduced $\chi^2_\nu$) as well as the $\Delta\chi^2$ for the best two-parameter fit, which we present in Table~\ref{t:linbias}. For the BRIGHT sample, we notice the $B$ values are equivalent across both $Q_{proj}$ and $Q_{z}$ for the same scales on the same sample. 
Since we removed the degeneracy (as $C$ is zero), this behavior makes sense and is in agreement with the measurements. We note that the typical $B$ values are larger for the BRIGHT sample, and lower for the fainter LSTAR sample. For $Q_{proj}$, our constraints find little statistical significance for a non-zero quadratic bias term; the likelihood difference is small and a linear bias term is sufficient to quantify the bias for both the BRIGHT and LSTAR samples. Overall, measurements in redshift space ($Q_z$) more strongly suggest that $C \ne 0$, especially when using the smaller scale triangles ($r_1 = 6 \; h^{-1}\mathrm{Mpc}$). \subsection{Relative Bias} \label{ss:brel} \begin{figure} \centering \includegraphics[angle=0,width=0.85\linewidth]{brel_2pcf.eps} \caption[Relative Bias in 2PCF]{ The relative bias $b^{(2)}_{rel} = \sqrt{\xi_{BRIGHT} / \xi_{LSTAR}}$ using measurements in redshift (red 'x' symbols) and projected space (blue diamonds). We calculate the uncertainties by propagating 1$\sigma$ values from the 2PCF. The dotted and dashed lines display results from the best fit bias terms at the largest scales ($9-27 \; h^{-1}\mathrm{Mpc}$). The bold lines indicate values from the two-parameter fit (Table~\ref{t:bias}), and the faint lines show the best linear fit (Table~\ref{t:linbias}). } \label{f:brel_2pcf} \end{figure} \begin{figure} \centering \includegraphics[angle=0,width=0.85\linewidth]{brel_Qeq.eps} \caption[Relative Bias in $Q_{eq}(r)$]{ Analogous to Figure~\ref{f:brel_2pcf} but for the 3PCF. The relative bias $b^{(3)}_{rel} = Q_{LSTAR} / Q_{BRIGHT}$ using measurements of equilateral 3PCF in redshift (red 'x' symbols) and projected space (blue diamonds). We calculate the uncertainties by propagating 1$\sigma$ values from the 3PCF. The dotted and dashed lines display results using the best fit bias terms at the largest scales ($9-27 \; h^{-1}\mathrm{Mpc}$).
The bold lines indicate values from the two-parameter fit (Table~\ref{t:bias}), and the faint lines show the best linear fit (quadratic bias constrained to be zero; Table~\ref{t:linbias}). } \label{f:brel_Qeq} \end{figure} The \emph{relative} bias characterizes the relative clustering strength between different galaxy samples -- an alternative to the ``absolute'' galaxy-mass bias constrained previously. Relative bias is insensitive to cosmology and does not require assumptions to determine mass clustering. We can use the relative bias to check consistency with linear and quadratic bias parameters obtained above in \S\ref{s:bias}. For the 2PCF, the relative bias is simply: \begin{equation} \label{eq:brel_2pcf} b^{(2)}_{rel} = \sqrt{ \frac{\xi_{BRIGHT}}{ \xi_{LSTAR} } } \; , \end{equation} where $\xi$ can refer to redshift or projected space measurements. We show $b^{(2)}_{rel}$ from the 2PCF in Figure~\ref{f:brel_2pcf}, using the linear bias parameters obtained from the best two-parameter fit (i.e. Table~\ref{t:bias}). Both redshift space and projected measurements agree and produce a flat relative bias, even at non-linear scales below a few $h^{-1}\mathrm{Mpc}$. Two obvious discrepancies arise when comparing observational data to ``best fit'' values. First, neither the redshift nor the projected space fits appear to match the data. Earlier we noted a substantial degeneracy between the linear and quadratic bias terms. The quadratic bias term is accounting for more of the clustering bias when we constrain with $Q(\theta)$, which is not noticeable in the 2PCF. This suggests we underpredict values of linear bias, either just for the BRIGHT sample or in unequal portions for both. Second, there is a significant difference between these two estimates given the same galaxy samples, although the projected measurement appears closer to agreement. Let us consider the relative bias of the reduced 3PCF.
Since $Q$ is proportional to $1/B$, we define the relation \begin{equation} \label{eq:brel_Q} b^{(3)}_{rel} = \frac{ Q_{LSTAR} }{ Q_{BRIGHT} } \; . \end{equation} Figure~\ref{f:brel_Qeq} presents the relative bias of our DR6 galaxies for the equilateral 3PCF ($Q_{eq}$) \citep{mcbride:10}. We note that $Q_{eq}$ is related but not identical to the configuration dependence measurements of $Q(\theta)$ used for constraining $B$ and $C$. Looking at Figure~\ref{f:brel_Qeq}, we see an obvious difference with respect to the 2PCF: the much larger uncertainties. However, the predicted $b^{(3)}_{rel}$ from the galaxy-mass bias constraints appear much more consistent with the measurements, as opposed to the 2PCF. The 3PCF results agree with the observational data, and show a much smaller discrepancy between redshift and projected space. The quadratic bias term ($C$) can properly account for the clustering difference that was missing in the relative bias of the 2PCF. \subsection{Implications for Cosmology: $\sigma_8$} \label{ss:s8} \begin{table}[b] \centering \begin{tabular}{lccc} \hline \hline \multicolumn{4}{c}{\bfseries Implied values of $\sigma_8$} \\ \hline Measurement & Scales ($h^{-1}\mathrm{Mpc}$) & B & $\sigma_8$ \\ \hline \hline BRIGHT-z & 9-27 & $ 1.24_{-0.06}^{+0.06} $ & 0.96-1.13 \\ BRIGHT-proj & 9-27 & $ 1.25_{-0.09}^{+0.11} $ & 1.02-1.12 \\ \hline LSTAR-z & 9-27 & $ 1.08_{-0.05}^{+0.06} $ & 0.88-0.97 \\ LSTAR-proj & 9-27 & $ 1.11_{-0.08}^{+0.09} $ & 0.83-0.97 \\ \hline \end{tabular} \caption[Implied values of $\sigma_8$ from galaxy-mass bias]{ We use galaxy-mass bias constraints from the configuration dependence of the 3PCF, $Q(\theta)$, with measurements of the 2PCF, to estimate the implied values of $\sigma_8$ via \eqref{eq:bias2pt_s8}. We use the largest triangle configurations for our two samples, and the one-parameter constraints on $B$.
The range of $\sigma_8$ does not represent formal uncertainties; we calculate values from the range of uncertainties stated in $B$, neglecting additional errors from the 2PCF. For reference, WMAP-5 (with SN and BAO) suggests $\sigma_8 = 0.82$ \citep{komatsu:09}. } \label{t:sigma8} \end{table} Better understanding galaxy-mass bias, or at least accurately parameterizing it, allows one to ``calibrate out'' the effects of galaxies and infer properties of the underlying mass distribution to constrain cosmology. We can use our estimates of bias to probe the mass variance in spheres of $8 \; h^{-1}\mathrm{Mpc}$ radius, a common normalization of the amplitude of the matter power spectrum, $P(k)$. The theoretical $\sigma_8$ is linearly extrapolated from a very early epoch until today, \begin{equation}\label{eq:sigma8} \sigma_8^2 = 4 \pi \int_0^\infty W^2(k,R = 8 \; h^{-1}\mathrm{Mpc}) P_{lin}(k) \frac{k^2 dk}{(2\pi)^3} \; , \end{equation} where $W(k,R)$ is a top-hat window function in Fourier space for mode $k$ and smoothing radius $R$, and $P_{lin}(k)$ is the linear power spectrum. In terms of our fitting formula on the 3PCF in \eqref{eq:biasQ}, we expand the bias relation for the 2PCF to highlight its dependence on $\sigma_8$ \begin{equation} \label{eq:bias2pt_s8} \xi_g(r) = B^2 \left(\frac{\sigma_8}{0.9}\right)^2 \xi_{m}(r) \; . \end{equation} Formally, the mass 2PCF already encodes a value of $\sigma_8$. As $\xi_m$ scales linearly with a change in the square of $\sigma_8$, we include an explicit scaling factor to account for a difference in $\sigma_8$ between the underlying mass of the observed galaxy distribution and that assumed in our estimate of mass clustering from $N$-body\ results. In our case, we use dark matter from the HV simulation where $\sigma_8 = 0.9$, explaining the denominator on the right hand side of \eqref{eq:bias2pt_s8}.
We can see that an incorrect assumption of $\sigma_8$ in the estimate of mass will directly translate into a different value of the best $B$ describing galaxies. Even if we use the above relation, \eqref{eq:bias2pt_s8}, $B$ and $\sigma_8$ are completely degenerate when solely considering the 2PCF. By using the additional information available in the configuration dependence of the reduced 3PCF, we obtain a value of $B$ that is independent of $\sigma_8$, and breaks the degeneracy between the two parameters. Formally, this is only true to leading order, as loop corrections in $Q(\theta)$ will add cosmological dependence which we neglect in this analysis. With an independent value of $B$ from \eqref{eq:biasQ}, we estimate $\sigma_8$ by utilizing the 2PCF in \eqref{eq:bias2pt_s8}. Ideally, we could construct a three-parameter fit to jointly constrain $B$, $C$ and $\sigma_8$ \citep[e.g.][on 2dFGRS data]{pan:05}. Or, as a further extension, we could jointly fit over several samples, since they each have the same underlying $\sigma_8$. However, this additional complexity is beyond the scope of this analysis as our current uncertainties would yield poor constraints on $\sigma_8$. We simply estimate the value of $\sigma_8$ implied by the best fit bias parameters. We restrict this estimate to the largest scale triangles ($r_1 = 9 \; h^{-1}\mathrm{Mpc}$) to ensure we approach the linear regime (i.e. the scales we are most confident with using the local bias model). Given our analysis of the relative bias, we use the larger $B$ values where we constrain $C=0$. We present these estimates in Table~\ref{t:sigma8}.
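The inversion behind these estimates is direct: solving \eqref{eq:bias2pt_s8} for $\sigma_8$ given a value of $B$ from the 3PCF fit. A minimal sketch, where the clustering ratio $\xi_g/\xi_m$ is an illustrative placeholder rather than a measured value:

```python
# Sketch of inverting Eq. (bias2pt_s8): xi_g = B^2 (sigma_8 / 0.9)^2 xi_m,
# so sigma_8 = 0.9 * sqrt(xi_g / xi_m) / B.
# B is taken from the C = 0 fit (BRIGHT-z, 9-27 scales); the clustering
# ratio xi_ratio is an illustrative placeholder, not the measured value.
B = 1.24
xi_ratio = 2.10          # assumed xi_g / xi_m at a fixed scale
sigma8 = 0.9 * xi_ratio**0.5 / B
```

Propagating the $1\sigma$ range of $B$ through this expression (while neglecting the 2PCF errors) gives the ranges quoted in Table~\ref{t:sigma8}.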
\section{Eigenvectors of the 3PCF Covariance Matrix} \label{s:ev} \begin{figure} \centering \includegraphics[angle=270,width=0.425\linewidth]{ev3_vl_mt21_5_s6_q2_tb15_f25_spat.eps} \includegraphics[angle=270,width=0.425\linewidth]{ev3_vl_mt21_5_s6_q2_tb15_f25_proj.eps} \includegraphics[angle=270,width=0.425\linewidth]{ev3_vl_mt21_5_s9_q2_tb15_f25_spat.eps} \includegraphics[angle=270,width=0.425\linewidth]{ev3_vl_mt21_5_s9_q2_tb15_f25_proj.eps} \caption[Top 3 Eigenvectors for $M_r < -21.5$]{ Top three eigenvectors (EVs) chosen from the normalized covariance matrix in the $M_r < -21.5$ galaxy sample. The sign of the EV is arbitrary. The first EV (solid black) shows equal weights for all bins. The second (dashed red) and third (dotted blue) EV display the configuration difference between perpendicular and co-linear triangles as well as the scale variation as the scale of the third side increases. } \label{f:ev3_mt21_5} \end{figure} \begin{figure} \centering \includegraphics[angle=270,width=0.425\linewidth]{ev3_vl_mb20_5_s6_q2_tb15_f10_spat.eps} \includegraphics[angle=270,width=0.425\linewidth]{ev3_vl_mb20_5_s6_q2_tb15_f10_proj.eps} \includegraphics[angle=270,width=0.425\linewidth]{ev3_vl_mb20_5_s9_q2_tb15_f10_spat.eps} \includegraphics[angle=270,width=0.425\linewidth]{ev3_vl_mb20_5_s9_q2_tb15_f10_proj.eps} \caption[Top 3 Eigenvectors for $-21.5 < M_r < -20.5$]{ Like Figure~\ref{f:ev3_mt21_5} but for the $-21.5 < M_r < -20.5$ galaxy sample. The top three eigenvectors (EVs) chosen from the normalized covariance matrix. The sign of the EV is arbitrary. The first EV (solid black) shows equal weights for all bins. The second (dashed red) and third (dotted blue) EV display the configuration difference between perpendicular and co-linear triangles as well as the scale variation as the scale of the third side increases. 
} \label{f:ev3_mb20_5} \end{figure} A point that is often overlooked is that the covariance matrix itself is a measurement of clustering rather than simply a means of quantifying uncertainty. It exhibits increased sensitivity to higher order terms \citep[for a concise review see][]{szapudi:09} with the covariance of the 2PCF being leading order sensitive up to fourth order, and the 3PCF up to sixth order. We investigate the structure of the normalized covariance matrices by examining the eigenvectors (EVs), or principal components, obtained by a singular value decomposition. The EVs are contained in the $\bm{U}$ and $\bm{V}$ matrices from \eqref{eq:svd} and \eqref{eq:q-eigenmodes}. The first EV is associated with the largest singular value (SV), and accounts for the largest variance in the normalized covariance matrix (i.e. most of the observed structure); the second EV corresponds to the next largest SV, and so on. If the covariance matrix resolves predominantly ``true'' signal, the first EVs should characterize this structure whereas the lower-ranked EVs encapsulate noise. While the amplitudes of the EVs are not significant without the corresponding SV, they do represent the variation between bins in an orthogonal basis where the full covariance is a simple linear combination. We show the top three EVs for the BRIGHT and LSTAR galaxy samples in Figures~\ref{f:ev3_mt21_5} and \ref{f:ev3_mb20_5}, respectively. In all cases there appear to be consistent features in the eigenmodes. The first EV represents weighting all bins equally. Typically, the second EV highlights the difference between ``perpendicular'' and ``co-linear'' configurations. Finally, the third EV tracks a roughly monotonic change from small to large $\theta$, possibly accounting for the scale difference of the continually increasing third side of the triangle. We point out that this structure evident in observational galaxy samples agrees well with theoretical predictions from simulations in \citet{GS05}.
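The eigenmode extraction described above can be reproduced with a standard SVD. A self-contained sketch follows; the correlation matrix here is hypothetical, chosen so the leading eigenvector mimics the ``equal weight'' first mode seen in our measurements:

```python
import numpy as np

def top_eigenvectors(cov, n_modes=3):
    """Normalize a covariance matrix to unit diagonal and return the
    leading eigenvectors (principal components) and singular values."""
    d = np.sqrt(np.diag(cov))
    norm_cov = cov / np.outer(d, d)
    # for a symmetric matrix the columns of U are the eigenvectors,
    # with singular values sorted in decreasing order
    U, S, Vt = np.linalg.svd(norm_cov)
    return U[:, :n_modes], S[:n_modes]

# hypothetical covariance: strong uniform correlation across 15 bins
n = 15
cov = 0.8 * np.ones((n, n)) + 0.2 * np.eye(n)
evs, svs = top_eigenvectors(cov)
print(svs)
```

For this toy matrix the first eigenvector is the uniform vector with all entries $1/\sqrt{15}$, directly analogous to the equal-weight first EV in Figures~\ref{f:ev3_mt21_5} and \ref{f:ev3_mb20_5}.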
Remember, these EVs are obtained by deconstructing just the \emph{normalized} covariance matrix. We do not always see a clear separation between the second and third EVs. As the full structure is a linear combination of all modes, the configuration dependence and scale variation effects could be combined. The SVs of the two effects are essentially equivalent for our measurements, making their numerical distinction in the SVD somewhat arbitrary. This is not a concern, as it appears that the linear combination of these two EVs is consistent with our interpretation (even if they are mixed). Less significant EVs show less coherent structure, consistent with noisy modes in the covariance (as we would expect). By examining the EVs of the covariance matrices, we note structure consistent with measurements of the reduced 3PCF (see Figures~\ref{f:Qfit_mt21_5} and \ref{f:Qfit_mb20_5}). Observing this structure provides supporting evidence that we have signal dominated estimates of the covariance matrix. This justifies our approach of using a combination of the most significant eigenmodes in a quantitative comparison to galaxy-mass bias models, as we did in \S\ref{s:bias}. \begin{figure*} \centering \includegraphics[angle=270,width=0.85\linewidth]{diagerrb_dr5bright.eps} \caption[Absolute errors using different error estimates]{ We compare the $1\sigma$ absolute (diagonal) errors of the reduced 3PCF obtained by using different methods of estimation: independent mock catalogs or jackknife resampling as denoted. These measurements correspond to the BRIGHT galaxy sample. } \label{f:diagerr} \end{figure*} \section{Quality of Error Estimation} \label{s:errors} \begin{figure*} \centering \includegraphics[angle=0,width=0.85\linewidth]{covar_resid_dr5_bright_vl21_5_cmp20.eps} \caption[Comparison of covariance using different error estimates]{ We present the normalized covariance matrices and residuals of the error estimation for large triangles ($9-27 \; h^{-1}\mathrm{Mpc}$). 
The left and right columns pertain to redshift and projected space respectively. We estimate errors using $15$, $30$, $49$, and $105$ jackknife regions and compare with results from $49$ mocks from independent $N$-body\ simulations. The solid, dashed and dotted contours in the normalized covariance correspond to values of $0.70$, $0.85$, and $0.99$, respectively. } \label{f:covar_cmp20} \end{figure*} \begin{figure} \centering \includegraphics[angle=0,width=0.425\linewidth]{ev3_dr5bright_cmp3z.eps} \qquad \includegraphics[angle=0,width=0.425\linewidth]{ev3_dr5bright_cmp3p.eps} \caption[Top 3 Eigenvectors of normalized covariance]{ Top three eigenvectors (EVs) chosen from the normalized covariance matrix for the different error estimates for $Q_z(\theta)$ and $Q_{proj}(\theta)$. The sign of the EV is arbitrary. The first EV (left panels) shows equal weights between all bins. The second (middle) and third (right) EV display the configuration difference between perpendicular and co-linear triangles, as well as the scale variation as the scale of the third side increases. } \label{f:ev3} \end{figure} \begin{figure} \centering \includegraphics[angle=270,width=\linewidth]{sv_dr5_bright_vl21_5_cmp.eps} \caption[Singular values of the covariance matrices]{ The singular values (SV), or eigenvalues, obtained from the singular value decomposition (SVD) of the normalized covariance matrix for each of our error estimates. Larger values of the SV correspond to more statistically significant eigenmodes in the structure of the covariance. } \label{f:sv} \end{figure} \begin{figure} \centering \includegraphics[angle=270,width=\linewidth]{sn_dr5_bright_vl21_5_cmp.eps} \caption[S/N ratio for each eigenmode]{ The signal-to-noise ratio for each eigenmode, ordered in terms of importance. The total signal-to-noise of a measurement is calculated by adding each individual eigenmode in quadrature. 
} \label{f:s2n} \end{figure} \begin{figure} \centering \includegraphics[angle=270,width=\linewidth]{sntot_dr5_bright_vl21_5_cmp.eps} \caption[Cumulative S/N ratio for the eigenmodes]{ The cumulative signal-to-noise ratio for each eigenmode, ordered in terms of importance. The cumulative total is calculated by summing in quadrature the more significant modes. } \label{f:s2ntot} \end{figure} \begin{figure*} \centering \includegraphics[angle=0,width=0.425\linewidth]{evsubspace_dr5bright_s9f25_spat.eps} \qquad \includegraphics[angle=0,width=0.425\linewidth]{evsubspace_dr5bright_s9f25_proj.eps} \caption[Eigenvector subspace comparison: jackknife vs mocks]{ We compute the compatibility of the \emph{subspace} contained in a series of eigenvectors such that the y-axis can be interpreted as the fractional "match" between the spaces the eigenvectors probe. A value of $1.0$ means no mismatch in the space they probe, and $0.0$ means no overlap (i.e. orthogonal). The left panel uses the eigenvectors of the redshift space covariance matrix determined by the $49$ mocks as the reference value. The right plot does the same as the left, but uses measurements in projected space. The comparison is cumulative (eigenmode 3 means the sum of the first 3 modes). } \label{f:evsubspace} \end{figure*} \begin{figure} \centering \includegraphics[angle=0,width=0.85\linewidth]{evsubspace_dr5bright_s9f25_ZvP.eps} \caption[Eigenvector subspace comparison: redshift vs projected]{ We use the subspace comparison of eigenvectors to estimate the difference in space probed between similar numbers of eigenmodes in redshift and projected covariance matrices for each of the error estimates. } \label{f:evsubspace_ZvP} \end{figure} We rely heavily on the structure of the error covariance matrix for constraints on galaxy-mass bias. 
We noticed the observed structure in the covariance is qualitatively similar to clustering measurements (in \S\ref{s:ev}), but it remains unclear if this structure is affected by our jackknife re-sampling estimation of the covariance. Higher orders add complexity and increased sensitivity to systematics, even with a ``ratio statistic'' such as the reduced 3PCF where the error sensitivity is canceled to first order. We must investigate the error resolution of jackknife resampling on the 3PCF, as tests on the 2PCF for angular correlation functions \citep{scranton:02} or redshift-space SDSS clustering \citep{zehavi:02,zehavi:05,zehavi:10} cannot be assumed to be sufficient. An alternative method to estimate errors uses a series of independent realizations of artificial galaxies, ideally created to match observational limitations such as the volume and geometry of the SDSS galaxy samples. We created 49 independent galaxy mock catalogs based on independent $N$-body\ simulations that have appropriate resolution to match the BRIGHT SDSS sample ($M_r < -21.5$). We use these independent mocks to estimate errors and compare with those obtained from jackknife re-sampling of \emph{the data}. This exercise should provide an idea of how effective jackknife re-sampling is for resolving the errors on the 3PCF. The BRIGHT galaxy sample has the lowest number density, with the fewest galaxies over the largest volume. To help protect against undersampled measurements due to low bin counts \citep[see discussion in ][]{mcbride:10}, we restrict the comparison to the configuration dependence of the larger triangles ($r_1 = 9 \; h^{-1}\mathrm{Mpc}$ sides). We estimate the covariance matrix for the BRIGHT sample using different numbers of jackknife samples on \emph{the data}, specifically using $15$, $30$, $49$ and $105$ jackknife regions.
Again, these jackknife samples are created from the observational data and not from the mock galaxy catalogs, where each jackknife region is selected to maintain equal unmasked area \citep[same method as detailed in ][]{mcbride:10}. Since we measure $15$ bins for $Q(\theta)$, we require at least $15$ jackknife samples to prevent a singular covariance matrix. We use twice this number ($30$) and then use the same number as the number of mocks ($49$). The final value corresponds to the number of unique off-diagonal elements in the symmetric covariance matrix: $15(15-1)/2 = 105$. We caution that as we increase the number of jackknife samples, we decrease the respective volume of each jackknife region, which might subtly bias jackknife estimates (e.g. underestimate cosmic variance). First, we investigate the magnitude of the absolute (diagonal) errors of the reduced 3PCF. Since we use normalized covariance matrices, differences in the absolute errors might not be noticeable in the covariance structure. The $1\sigma$ absolute errors are shown in Figure~\ref{f:diagerr}. We see little difference between any of the methods, and the uncertainty typically ranges between $0.1$ and $0.15$. For each of our methods, we estimate the normalized covariance in both redshift and projected space, as depicted in Figure~\ref{f:covar_cmp20}, and include the distribution of residuals. We note that jackknife re-sampling methods appear to underestimate the correlation in all cases, but the general structure looks comparable. More samples generally produce a smoother, and more correlated, covariance matrix. However, not even the $105$ jackknife sample estimate reproduces the correlation in the $49$ mocks. We consider the distribution of residuals an important metric in evaluating the reliability of the resulting covariance, which we include in Figure~\ref{f:covar_cmp20}. Ideally, the covariance matrix accounts for all ``connections'' between bins only if the residuals are reasonably Gaussian.
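The two empirical covariance estimators compared above take a standard form. A minimal sketch, under the assumption that each measurement of $Q(\theta)$ is stored as a row of a (samples $\times$ bins) array; the sample sizes and scatter below are hypothetical:

```python
import numpy as np

def mock_covariance(mocks):
    """Unbiased sample covariance from N independent mock realizations:
    Cov_ij = 1/(N-1) sum_k (x_ki - mean_i)(x_kj - mean_j)."""
    x = np.asarray(mocks, dtype=float)
    d = x - x.mean(axis=0)
    return d.T @ d / (x.shape[0] - 1)

def jackknife_covariance(jk_samples):
    """Delete-one jackknife covariance from N re-samplings of the data:
    Cov_ij = (N-1)/N sum_k (x_ki - mean_i)(x_kj - mean_j).
    With fewer samples than bins the matrix is singular, which is why
    at least 15 regions are needed for 15 Q(theta) bins."""
    x = np.asarray(jk_samples, dtype=float)
    n = x.shape[0]
    d = x - x.mean(axis=0)
    return (n - 1.0) / n * (d.T @ d)

# toy Q(theta) measurements: 49 mocks and 30 jackknife regions, 15 bins
rng = np.random.default_rng(1)
C_mock = mock_covariance(0.8 + 0.05 * rng.standard_normal((49, 15)))
C_jk = jackknife_covariance(0.8 + 0.05 * rng.standard_normal((30, 15)))
print(C_mock.shape, C_jk.shape)
```

The $(N-1)/N$ jackknife prefactor compensates for the strong correlation between delete-one samples, in contrast to the $1/(N-1)$ normalization appropriate for independent mocks.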
We notice a skew in several of the jackknife re-samplings, with a tail extending to lower values. As discussed in \citet{mcbride:10}, this is a consequence of cosmic variance within jackknife samples. A few rare structures affect the 3PCF; when they are excluded by a jackknife region, the $Q(\theta)$ of the entire sample drops. The mock estimate shows a slight skew in the positive direction from the same effect. In mocks, when a rare structure exists in the probed volume, the 3PCF rises, producing a rare high measurement. The eigenmode analysis we utilize relies on signal being the dominant contribution to the structure of the covariance matrix (as opposed to noise). Noise is commonly expected to be an independent or \emph{diagonal} contribution. Similar to \S\ref{s:ev}, we examine the eigenvectors (EVs) of the covariance matrix to provide insight into the structure. By using the singular value decomposition (SVD), the eigenvectors are ordered from largest to least amount of variance explained in the covariance matrix. The first three EVs are shown in Figure~\ref{f:ev3} for both redshift and projected space. Similar structure appears in each of them, which we interpret as follows. The first EV represents the general measurement, with all eigenmodes equally weighted. The second EV shows the difference between ``collapsed'' and ``perpendicular'' configurations. Finally, the third EV represents a scale dependence as the third side of the triplet ranges from $9 \; h^{-1}\mathrm{Mpc}$ at $\theta \sim 0$ to $27 \; h^{-1}\mathrm{Mpc}$ at $\theta \sim \pi$. In some of the estimates, the shapes of the second and third EVs appear either combined or transposed. Since the full measurement is a linear combination of all EVs, this lack of separation makes sense. In these cases, the statistical significance of the two EVs remains similar. This interpretation of the structure follows the analysis by \citet{GS05} for $N$-body\ simulations.
The less significant eigenvectors (which we do not show) appear random, with the lowest being contributions from noise or numerical instabilities. We identify the significance of the eigenmodes by inspecting the singular values (SVs) shown in Figure~\ref{f:sv}. The SV can be understood as an ``importance weighting'' of each eigenmode, and the figure shows a rapid decline of significance for each eigenmode. The first three eigenvectors cumulatively account for over $99.9\%$ of the variance in the normalized covariance matrix. The signal-to-noise ratio ($S/N$) of each eigenmode is shown in Figure~\ref{f:s2n}, as calculated by \eqref{eq:s2n}. The mocks in both redshift and projected space depict a slow decline in $S/N$ over the first few eigenmodes, supportive of our interpretation of relative significance. This trend is not as clear in the jackknife estimates for redshift space, although it appears consistent in projected space. We see the first half of the modes appear resolved, with well-behaved $S/N$. For the least significant eigenmodes, the noisier error estimates using the fewest jackknife samples show unrealistically high $S/N$ ratios (especially in the case of $15$ jackknife regions). The total $S/N$ would increase dramatically and artificially if we included these noise-dominated modes. In these cases, using the full covariance (i.e. including all modes) would be a mistake. To make the point clearer, we examine the cumulative $S/N$ ratio in Figure~\ref{f:s2ntot}, where we identify rapid upturns in the total $S/N$ as an artificial consequence of noise. Several curves in Figure~\ref{f:s2ntot} do not appear problematic with this test, and show steady behavior across all modes.
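A common definition of the per-eigenmode signal-to-noise, which we sketch here, projects the data vector onto each eigenvector and normalizes by the square root of its singular value; we do not reproduce \eqref{eq:s2n} verbatim, and the covariance below is a toy stand-in:

```python
import numpy as np

def eigenmode_sn(data, cov):
    """Signal-to-noise of each eigenmode: project the data vector onto
    the eigenvectors of the covariance and divide by sqrt(eigenvalue).
    The cumulative total adds the kept modes in quadrature."""
    U, S, _ = np.linalg.svd(cov)
    proj = U.T @ np.asarray(data, dtype=float)
    sn = np.abs(proj) / np.sqrt(S)
    return sn, np.sqrt(np.cumsum(sn**2))

# toy example: a constant signal with strongly correlated errors
n = 15
cov = 0.01 * (0.5 * np.ones((n, n)) + 0.5 * np.eye(n))
sn, sn_cum = eigenmode_sn(np.full(n, 0.8), cov)
print(sn[:3])
```

For this correlated toy covariance nearly all of the $S/N$ lives in the first (equal-weight) mode; including the remaining modes barely changes the quadrature total, illustrating why trimming noisy modes leaves the total $S/N$ stable when the covariance is well resolved.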
The amplitudes of the $S/N$ ratios of the $49$ mocks and the $105$ jackknife samples are consistent, but the $S/N$ ratio does not appear to change monotonically with the number of jackknife samples, which suggests a complex relationship between the best $S/N$ and an optimal number of jackknife regions. We can compare the \emph{subspace} that a set of eigenvectors probes between two error estimates. The formalism is the same as discussed in \citet[ see section 4]{yip:04}, which results in a fractional ``compatibility'' between a collection of eigenvectors. Intuitively, this is the matrix equivalent of the vector dot product, where two orthogonal unit vectors would have a vector subspace of $0$ (no compatibility) and two identical unit vectors would result in $1$. We use the covariance of the $49$ mocks as ``truth'', and test the fractional compatibility of the jackknife estimates for covariance in $Q_z(\theta)$ and $Q_{proj}(\theta)$ shown in Figure~\ref{f:evsubspace}. When all the eigenmodes are considered, the \emph{subspace} becomes the full space and the comparison yields unity by construction. We notice the projected measurements never drop below $75\%$ compatibility. After the first few eigenmodes, redshift space shows a similar agreement. With the exception of the $15$ jackknife sample estimate, the 3 eigenmode mark appears $90\%$ compatible or better in all cases. This quantifies our argument about the top three EVs in Figure~\ref{f:ev3}, where the second and third eigenvectors appear different (predominantly in redshift space), but their linear combination remains consistent with each other. Remember, this comparison only considers the compatibility of the \emph{direction} of each eigenvector, and not their relative strengths (i.e. SVs). We evaluate the subspace compatibility on the normalized covariance matrix between redshift and projected space estimates. For each method, we show the fractional comparison in Figure~\ref{f:evsubspace_ZvP}.
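The subspace comparison reduces to a Frobenius-norm overlap between the two sets of eigenvectors. A sketch follows; normalizing by the number of modes $M$ is our reading of the \citet{yip:04} formalism, and the vectors below are synthetic:

```python
import numpy as np

def subspace_compatibility(U1, U2):
    """Fractional overlap between the subspaces spanned by two sets of
    orthonormal eigenvectors (stored as columns): ||U1^T U2||_F^2 / M.
    Returns 1.0 for identical subspaces and 0.0 for orthogonal ones."""
    M = U1.shape[1]
    overlap = U1.T @ U2
    return np.sum(overlap**2) / M

# identical subspaces expressed in a rotated basis still give 1.0,
# which is the point: only the spanned space matters, not the basis
U = np.linalg.qr(np.random.default_rng(2).standard_normal((15, 3)))[0]
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
print(subspace_compatibility(U, U @ R))
```

This basis invariance is exactly why mixed or transposed second and third eigenvectors can still yield high compatibility: their linear combinations span the same plane.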
The mock estimates show the most compatibility across all eigenmodes, showing agreement at $\sim 85\%$ or better. With all estimates, we find that the combination of the first three eigenmodes remains a compatible subspace, above $90\%$, if we again exempt the $15$-jackknife sample estimate (which shows less than $70\%$ compatibility). We caution that the resolution of errors and the choice of binning scheme relate in a non-trivial manner, which is discussed in additional detail by \citet{mcbride:10}. We chose ``large'' bins ($f=0.25$) to ensure a smooth, signal-dominant structure in the covariance matrix. Overall, this error comparison supports our claim that accurate results can still be obtained even with less-than-optimal error estimation such as jackknife re-sampling. \section{Discussion} \label{s:disc} We utilize the configuration dependence of the 3PCF in redshift and projected space to constrain galaxy-mass bias parameters in the local bias model. We find that galaxies are biased tracers of mass, with brighter galaxies corresponding to increased bias. These results are consistent with detailed analysis of SDSS galaxies from the 2PCF \citep{zehavi:05,zehavi:10} which quantifies how bias increases clustering for brighter galaxy samples. Our results indicate that a linear bias model yields reasonable approximations to the observations, in agreement with \citet{hikage:05}. However, a non-linear bias model produces slightly better agreement, and yields lower reduced chi-square values ($\chi^2_\nu$ in Tables~\ref{t:bias} and \ref{t:linbias}). We notice a strong correlation between linear and quadratic bias, as expected from inspection of \eqref{eq:biasQ}, and consistent with measurements of SDSS galaxies using the bispectrum \citep{nishimichi:07}. We find that our redshift space measurements predict significantly negative quadratic bias with a linear bias near one.
This effect was seen in a similar analysis conducted on 2dFGRS galaxies \citep{gaztanaga:05}. Interestingly, we find projected measurements suggest a larger linear bias with near-zero quadratic bias for the same samples, suggesting a possible systematic effect from redshift distortions in this simple bias model. We examined the relative bias in \S\ref{ss:brel}. We find supporting evidence that the brighter galaxy sample is a \emph{more biased} realization using both the 2PCF and 3PCF, consistent with other analyses of SDSS data \citep{zehavi:02,zehavi:05,zehavi:10}. Relative bias provides a consistency check on the ``absolute'' galaxy-mass bias parameters we constrain, suggesting a combination of linear and quadratic bias terms is consistent with observations. However, the relative bias of the 2PCF suggests that our two-parameter bias model fits underpredict the value of linear bias necessary to explain the observations. Again, we see a hint that constraints from projected measurements appear to be less affected -- although we caution that this trend has weak statistical significance given the larger uncertainties in projected space. We obtain reasonable projections for $\sigma_8$ by using our linear bias values from fits on $Q(\theta)$ in conjunction with the 2PCF. We estimate the values of $\sigma_8$ to be between $0.83$ and $1.13$ based on the BRIGHT ($M_r < -21.5$) and LSTAR ($-21.5 < M_r < -20.5$) galaxy samples. The values we obtain are contingent on a specific model of mass clustering, where we have chosen to use $N$-body\ simulations (specifically the Hubble Volume \ensuremath{\Lambda\mathrm{CDM}}\ results), and redshift distortions (which we include through velocity information to distort particle positions in the HV simulation). For comparison, constraints of $\sigma_8$ from a joint analysis of the cosmic microwave background (CMB), supernova data (SN) and baryon acoustic oscillations (BAO) find $\sigma_8 = 0.82$ \citep{komatsu:09}.
Our lower values are in good agreement with these constraints. Our high-end values appear too large, but our results are in reasonable agreement with an analysis of a related statistic, the monopole moment of the 3PCF, where they find best-fit $\sigma_8$ values between $0.9$ and $1.07$ \citep[see Table~3 in][]{pan:05} using 2dFGRS galaxies \citep{2dFGRS}. Although the value of $\sigma_8$ we obtain is comparable with results from 2dFGRS, the specific bias values will not be, as the 2dF targets a different galaxy selection than our SDSS samples. If we underestimate the value of linear bias, effectively $B$ here, \eqref{eq:bias2pt_s8} shows that the implied value of $\sigma_8$ will be overestimated. This might explain the larger values of our estimates in comparison to WMAP analyses. Our projections for $\sigma_8$ use clustering measurements between $9-27 \; h^{-1}\mathrm{Mpc}$ and exploit only the configuration dependence of $Q(\theta)$. This is a much smaller slice of data that is significantly different from either the monopole measurement (which utilizes a larger range of scales without configuration dependence) or WMAP results (which combine an immense amount of data from both CMB and LSS analyses). We do not intend this analysis to compete with these constraints, but rather to help illuminate the role of galaxy-mass bias in future constraints of $\sigma_8$ using the 3PCF. Understanding the properties of measurement errors and the impact of empirical methods of estimating the covariance is a critical component necessary for quantitative constraints. Recent studies have performed such comparisons for lower order statistics, such as the work by \citet{norberg:09}. We compared several properties of 3PCF covariance matrices estimated from jackknife re-sampling to those constructed from many realizations of independent galaxy mock catalogs. While we noted some concerning discrepancies, we found these typically affected only the least significant eigenmodes.
We found many similarities between the covariance estimates, including physical descriptions for the first three eigenmodes, which account for an overwhelming majority of the variance. We established the need to trim noisy, unresolved modes from the covariance. When trimmed, and the eigenmode analysis is properly utilized, we noted only a few significant differences, mostly in the case of $15$ jackknife samples. We conclude that our use of $30$ jackknife samples does not significantly affect our analysis. \section{Summary} \label{s:summary} We analyze measurements of the configuration dependence of the reduced 3PCF for two SDSS galaxy samples that were first presented in \citet{mcbride:10}. In both redshift and projected space, we characterize the differences between the observed galaxy clustering and that predicted by the non-linear mass evolution in the \ensuremath{\Lambda\mathrm{CDM}}\ Hubble Volume simulation. Here, we summarize our main results: \begin{itemize} \item We demonstrate that brighter galaxies remain a more biased tracer of the mass field by constraining the linear and quadratic galaxy-mass bias parameters using a maximum likelihood analysis on scales between $6$ and $27 \; h^{-1}\mathrm{Mpc}$. Conservatively using scales above $9 \; h^{-1}\mathrm{Mpc}$, the BRIGHT sample is biased at greater than $2\sigma$ and the fainter LSTAR shows no significant bias, in general agreement with expectations from previous analyses of SDSS galaxies \citep{zehavi:05,zehavi:10}. The bias parameters and their significance are summarized in Table~\ref{t:bias}. \item We resolve the degeneracy between the linear and quadratic bias terms, which helps to explain the weak luminosity dependence observed in the reduced 3PCF. \item We find a linear bias model appears sufficient to explain the measurements of the 3PCF by re-fitting the linear bias while constraining the quadratic bias at zero (results reported in Table~\ref{t:linbias}).
However, we find the two-parameter fit is preferred in our likelihood analysis, as it yields a lower chi-square in the best-fit value. \item The relative bias between samples of different luminosities (which is independent of the mass predictions), as well as the cosmological implications for values of $\sigma_8$, show general consistency with previous analyses. Inspection of our results suggests that the linear bias values obtained without a quadratic bias term are preferred. This suggests that two-parameter bias constraints might underpredict the linear bias. \item We decompose the structure of the normalized covariance matrix as an alternative view into clustering properties of our samples. The eigenvectors of the first three dominant modes show coherent structure consistent with variations seen in the $Q(\theta)$ measurements, supporting our claim that the covariance is signal-dominated and sufficiently resolved. \item We find that jackknife re-sampling methods cannot reproduce the correlation seen in a 3PCF covariance matrix estimated from many realizations of mock galaxy catalogs. By performing a detailed comparison of the properties and structure of the errors, we identify that noisy, unresolved modes introduce significant discrepancies. We find that using an eigenmode analysis can mitigate the differences and conclude that our analysis should not be significantly affected by less-than-ideal methods of error estimation. \item Comparing results between redshift space and projected measurements implies a potential systematic bias on values from the redshift space analysis when scales below $9 \; h^{-1}\mathrm{Mpc}$ are included, which have been utilized in other comparable analyses. Since the small-scale measurements contain more constraining power than larger scales, they drive the likelihood analysis even when larger scales are considered.
\item On scales above $9 \; h^{-1}\mathrm{Mpc}$, the statistical significance of constraints from redshift space analyses appear stronger than those found in analyses of projected measurements. We attribute this result to the increased uncertainties of the projected 3PCF, which mixes in larger scales (with larger errors) due to the line-of-sight projection. When considered with the results of \citet{mcbride:10}, which finds the projected 3PCF recovers configuration dependence at small scales lost in redshift space, a combination of redshift space analysis at large scales and projected measurements at small scales would form a nice complement in future analyses. \end{itemize} \acknowledgments We thank many in the SDSS collaboration, where active discussion helped to refine this work. We would like to specifically acknowledge valuable input from Istv\'an Szapudi, David H. Weinberg, Zheng Zheng, Robert Nichol, Robert E. Smith, Andrew Zentner, and the detailed discussions on error estimates with Idit Zehavi. We thank August Evrard and J\"{o}rg Colberg for kindly providing data and assistance with the Hubble Volume (HV) simulation. The HV simulation was carried out by the Virgo Supercomputing Consortium using computers based at the Computing Centre of the Max-Planck Society in Garching and at the Edinburgh parallel Computing Centre. J.~G. and the development of {\it N}tropy\ was funded by NASA Advanced Information Systems Research Program grant NNG05GA60G. A.~J.~C. acknowledges partial support from DOE grant DE-SC0002607, NSF grant AST 0709394, and parallel application development under NSF IIS-0844580. This research was supported in part by the National Science Foundation through TeraGrid resources provided by NCSA (Mercury) and the PSC (BigBen) under grant numbers TG-AST060027N and TG-AST060028N. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. 
Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is \texttt{http://www.sdss.org/}. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. \begin{appendix} \section{Effects of Binning} \label{s:binning} As an example of the effects of bin-size on galaxy-mass bias constraints, we re-analyze the LSTAR galaxy sample ($-21.5 < M_r < -20.5$) using the two fractional bin-widths: $f=0.1$ and $f=0.25$. First, we ignore the structure of the covariance matrix and show constraints using all the bins, assuming perfect independence, in the left panel of Figure~\ref{f:bin_bias}. While unphysical, this illustration allows one to probe the effect of shape differences in the 3PCF measurements without considering the resolution of the covariance matrix. Since larger bins smooth the configuration dependence, we expect a larger degeneracy between $B$ and $C$, which is apparent. We see the best-fit values (symbols) stay within the respective $1\sigma$ contours, but just barely.
Remember, this approach uses all modes and the exact same input data, suggesting that binning can result in a $1\sigma$ systematic bias. \begin{figure*} \centering \includegraphics[angle=0,width=0.425\linewidth]{bias_vl_mb20_5_diag_fcmp_contour4.eps} \qquad \includegraphics[angle=0,width=0.425\linewidth]{bias_vl_mb20_5_ev_fcmp_contour4.eps} \caption[Differences in Galaxy-Mass bias due to binning]{ Analogous to the Galaxy-Mass bias constraints in \S\ref{s:bias}, we show the constraints on $B$ and $C$ using the same data for our fiducial binning scheme with fractional bin-size of $f=0.1$ (solid red contours) and larger $f=0.25$ (black dotted). On the left, we neglect any overlap as well as the covariance, assume independent diagonal errors, and use the full 15 bins. On the right, we utilize the full covariance and only fit the dominant modes in an eigenmode analysis. The contours correspond to the $1\sigma$ and $2\sigma$ confidence levels from the $\Delta\chi^2$ surface. We use the LSTAR galaxy sample. } \label{f:bin_bias} \end{figure*} For the right panel of Figure~\ref{f:bin_bias}, we consider the full covariance as well as improvements obtained by using the eigenmode analysis in the galaxy-mass bias constraints. First, we notice the error contours appear less stretched, in accord with our expectations for a non-diagonal covariance matrix. In most cases (excepting $Q_z$ for $r_1 = 6 \; h^{-1}\mathrm{Mpc}$), the area of the contours appears equal in size or even decreased for the larger $f=0.25$ measurements in contrast to the diagonal case. This makes sense, as the lower variance measurements of $f=0.25$ appear better resolved as long as there are enough remaining modes to constrain two parameters. The best fit values appear discrepant, especially at the lower scales ($6-18 \; h^{-1}\mathrm{Mpc}$) where they disagree at more than a $1\sigma$ significance. While this causes some concern, it is not as drastic as the diagonal case.
As the eigenmode analysis trims modes, it excludes information and the same input data produces a different statistical representation. In light of this effect, a $1\sigma$ difference becomes a statistical difference of analysis rather than a significant systematic effect. In summary, we find lower galaxy-mass bias parameters with larger bin-widths, a potential artificial bias on the galaxy-mass parameters due to over-smoothing. Since we gain very little additional constraining power with the $f=0.25$ bin-width, we argue the $f=0.10$ bin-width represents the more conservative choice. Although the $f=0.10$ scheme represents smaller bins, they are still quite large and adequately resolve structure in the covariance. \end{appendix}
\section{Background \& Related Work} For explanations in models of code, we focus on local explanation techniques that produce explanations of a specific instance of the input space for a prediction given by the model. Further, given the variety of model families used at Facebook, we focus on model-agnostic techniques. We consider a model to be a learned black-box function $f$ that maps an input $x \in X \subset R^d$ to an output $f(x) \in Y, f : R^d \mapsto Y$. In local explainability, we want to formulate an explanation function $g$ that explains why $f$ predicted $f(x)$ for a particular point $x$. Several methods have been proposed to provide explanations for model predictions. We can broadly characterize them as white-box (attention- and gradient-based) and perturbation-based mechanisms. \subsection{White-box Explanation Mechanisms} Attention-based mechanisms can provide a window into how a model operates. We can extract weights from attention layers and conveniently attach them to each input token~\cite{clark:19, galassi:20, li:16, vashishth:19}. However, recent work has called into question whether using attention weights as feature attribution actually amounts to an explanation of model behavior, showing that different weights can lead to the same prediction~\cite{jain:19}. Other work has even shown that we can systematically manipulate attention in a network while still retaining the same prediction~\cite{pruthi:20}. Other work has proposed leveraging the dynamics of gradients to diagnose where the model pays the most ``attention". One of the recent gradient-based mechanisms for finding local explanations is integrated gradients~\cite{sundararajan:17}, where the idea is to create interpolations of the input and evaluate the model on those interpolated inputs. Unfortunately, unlike image data, code does not lend itself easily to interpolation.
For instance, there is no meaningful token to be had by arithmetically combining the embeddings of a pad token and a keyword token. \subsection{Perturbation-based Explanation Mechanisms} Perturbation-based mechanisms remove or replace a subset of features in the input space and track the score differential of the model prediction. The aggregated score difference over many samples is then attributed to the features involved in the perturbation. We explored several perturbation-based mechanisms, including SHAP, with unsatisfactory results. Local surrogate methods, such as LIME~\cite{lime}, generate a new dataset consisting of perturbed samples and the corresponding predictions of the existing model. From our experience, these methods perform poorly because perturbations to source code inputs must adhere to strict semantic guidelines such that the resulting source code retains the structure of a working program. We found counterfactual explanations, a related concept for finding explanations, to be a better fit for models of source code. There have been several papers that investigate interpretation of models of code through perturbation: \begin{itemize} \item Understanding Neural Code Intelligence Through Program Simplification, FSE'21: Uses delta-debugging to reduce a program up until the model changes its mind, i.e., produces a "minimal" program that still has the same prediction as the original. The paper dismisses robustness issues for models of source code, Section 7.2 last paragraph: \emph{"For linguistic expressions, input perturbations are usually less obvious: while certain words (such as stop words) may safely be removed without altering the meaning of a sentence, more general changes quickly risk producing very different inputs. Recent input-related methods rely on synonym datasets and swap out similar names to ensure that they generate semantically similar phrases ...
Our work shows that, at least for current models and tasks, this is significantly less of a concern in software engineering, where many tokens can be removed with little consequence"} \item Probing Model Signal-Awareness via Prediction-Preserving Input Minimization, FSE'21 \item PyExplainer: Explaining the Predictions of Just-In-Time Defect Models, ASE'21 $\mapsto$ focuses on tabular data, not source code perturbation; uses a variation of LIME \item Autofocus: Interpreting attention-based neural networks by code perturbation, ASE'19 \end{itemize} \subsection{Counterfactual Explanations} Counterfactuals in NLP. Paper links below items as comments. \begin{itemize} \item Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models \item Explaining NLP Models via Minimal Contrastive Editing (MICE) \item Generate Your Counterfactuals: Towards Controlled Counterfactual Generation for Text \item Generating Plausible Counterfactual Explanations for Deep Transformers in Financial Text Classification \item Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation \item Mask and Infill: Applying Masked Language Model to Sentiment Transfer \end{itemize} \subsection{Robustness and Adversarial Examples} Mix of adversarial examples of code, robustness, and a few papers investigating the relationship between robustness and counterfactuals. Paper links below items as comments. Adv.
Examples of Source Code: \begin{itemize} \item Adversarial examples for models of code~\cite{yefet:20} \item STRATA: Simple, Gradient-Free Attacks for Models of Code \item Towards the Unification and Robustness of Perturbation and Gradient Based Explanations \item Adversarial Robustness for Code \item Generating adversarial examples for holding robustness of source code processing models, AAAI'20~\cite{zhang:20} \item Semantic robustness of models of source code~\cite{semanticrobustness} \item On the generalizability of Neural Program Models with respect to semantic-preserving program transformations \end{itemize} Connections between Robustness and Counterfactuals \begin{itemize} \item On the Connections between Counterfactual Explanations and Adversarial Examples~\cite{pawelczyk:21} \item Semantics and explanation: why counterfactual explanations produce adversarial examples in deep neural networks \item The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations, IJCAI'19~\cite{laugel:19} \item The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples -- quite philosophical, potentially out of scope \end{itemize} \subsection*{MLM vs. Hand-crafted Perturbation} What is the nature of the code perturbations produced through MLM? We identified three distinct categories during the course of our qualitative analysis. Note that the examples we provide in the following are similar to the ones we found in our investigation to be representative of the identified categories, but are not actually part of our codebase. \begin{enumerate} \item \emph{Specialized Concept to Generalized Concept:} This is the most common kind of perturbation we have encountered, in various contexts. A very specialized token is replaced with a more generic or generalized version of it.
We found several examples of this in our sample: in web development, replacing a specialized component tag name with a more generic tag name (\texttt{<UIButton>} $\longmapsto$ \texttt{<div>}); a specific class replaced with a more generic class in the type hierarchy\\(\texttt{DistributedTask} $\longmapsto$ \texttt{GeneralTask}); and similarly with function calls (\texttt{genNativeRenderTea()} $\longmapsto$ \texttt{gen()}). Similar patterns continue in different contexts. \item \emph{Variable Name/Function Name Correspondence:} Often, when the return value of a function invocation is assigned to a variable, function and variable name tend to have a correspondence. If that was not already the case in the original instance, this kind of perturbation replaces either the function or the variable name so that the two correspond (e.g., \texttt{prime = is\_prime(x)}). \todo{ID: I don't understand this.} \item \emph{Maximal Contrast:} Some perturbations, albeit the minority we found in our sample, seemingly follow no pattern. However, they still achieve their goal of providing what we call a ``maximal contrast" between the original input and the counterfactual. \todo{Isn't this a good justification for using MLM in the first place? i.e., it can find useful counterfactuals that are not easy to capture with patterns. This is presented as a bug but seems like a feature :) } \end{enumerate} For some downstream tasks, MLM is certainly part of an end-to-end solution to produce counterfactuals in a human-in-the-loop system. For other tasks, where the quality of the counterfactuals that MLM provides varies, it is still a good tool for exploring what kinds of perturbations we can turn into hand-crafted operators to apply in a production setting. \section{Context of this work} \label{sec:context} \paragraph*{\textbf{BERT-based models of code}} Deep-learning architectures drawn from NLP have become commonplace in software engineering~\cite{code2vec, codesummary1, codesummary2}.
Among these, large-scale language models such as BERT~\cite{bert} have recently shown state-of-the-art performance in NLP tasks, and correspondingly, the software engineering community has adopted these architectures for code-related purposes, e.g. in CodeBERT~\cite{codebert}. Since our models fall in this category, we give a very brief background here. The idea behind BERT (and CodeBERT) is to \emph{pre}-train a sequence model using self-supervision. This is done by giving a neural network the training task of predicting one or more tokens in the sequence that are purposely masked out. The network's training objective is to predict the masked-out tokens correctly. This is popularly known as a \emph{masked language model}, or MLM. The purpose of the MLM pre-training is to learn a generic embedding of a token sequence; as in LSTMs, the hidden state after the last token in a sequence is taken to be the embedding of the entire sequence. While pre-training is generic, a second step, called \emph{fine tuning}, is applied to customize the network to a specific task at hand. Here one uses a supervised dataset (i.e., language fragments together with the labels that humans associate with them) to turn the network into a classifier for a specific task. For instance, in a sentiment analysis application, a (binary) label could be whether the token sequence reflects a positive sentiment. \paragraph*{\textbf{Automated code review}} At companies small and large (including Facebook), there is considerable interest in more thorough and more automated code review, with the hope of giving feedback to developers early. Whenever a developer submits a code commit (aka a ``diff''), a series of automated analyses are carried out in addition to a human code review. These automated analyses try to find both stylistic inconsistencies and, to the extent possible, functional and non-functional issues in code.
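To make the masked-language-model objective described above concrete, the following sketch constructs a single MLM training pair. It is purely illustrative Python, not our production pipeline; the toy token sequence, mask rate, and \texttt{[MASK]} token are assumptions for the example.

```python
import random

def make_mlm_example(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Build one MLM training pair: the corrupted input, and per position
    the original token the network must recover (None = not scored)."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            masked.append(mask_token)  # hide the token from the network...
            labels.append(tok)         # ...but keep it as the training target
        else:
            masked.append(tok)
            labels.append(None)        # position not scored by the loss
    return masked, labels

# Toy code-token sequence (illustrative, not a real training instance).
tokens = "return await $store_success -> genUIElementToRender ( )".split()
masked, labels = make_mlm_example(tokens, mask_rate=0.3, seed=1)
```

During pre-training, the network sees \texttt{masked} and is penalized only at the positions where \texttt{labels} holds the hidden original token.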
Complementing traditional code analyses are ML-based analyses, which serve a variety of purposes and are increasingly used in software engineering tasks in industrial practice~\cite{ai-in-se}. Here we illustrate three of these analyses (models), using a common small example. Our models are based on BERT, described above. \begin{listing}[!t] \begin{minted} [ frame=lines, framesep=2mm, baselinestretch=1.2, bgcolor=LightGray, fontsize=\footnotesize, linenos, startinline ]{php} private async function storeAndDisplayDialog( SomeContext $vc, SomeContent $content, - ): Awaitable<SomethingStoreHandle> { + ): Awaitable<SomeUIElement> { - $store_handle = await SomethingStore::genStoreHandle($vc); + $store_handle = await SomethingStore::genHandle($vc); + ... other code ... + $store_success = await $store_handle->store( + $store_handle, + $content, + ); - return $store_handle; + return await $store_success->genUIElementToRender(); } \end{minted} \caption{A code change that we use as a running example. The added lines are marked with a "+" and the deleted lines with a "-".} \label{lst:example} \end{listing} Listing~\ref{lst:example} represents a code change that could be made in a diff (for simplicity, we omit other features such as the commit message). It implements a function that stores a piece of content in a data store and returns a UI dialog element to signify that the operation has been successful. By convention, the added lines are shown with a "+" and the deleted lines are shown with a "-". The unchanged lines are shown for context, which is also important for the model to train well. We now discuss the three downstream tasks while referring to Listing~\ref{lst:example}. We illustrate several aspects of counterfactual explanations in the first of these tasks, and keep the other two brief.
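As a rough illustration of how a diff such as the running example can be presented to a sequence model, the sketch below flattens diff lines into a single token stream while tagging context, added, and deleted lines. The \texttt{<ctx>}/\texttt{<add>}/\texttt{<del>} markers and the whitespace tokenizer are illustrative assumptions, not our production encoding.

```python
def serialize_diff(diff_lines):
    """Flatten a diff into one token stream, tagging each line as context,
    addition, or deletion so a sequence model can distinguish them.
    Markers and tokenization are illustrative, not the real scheme."""
    tag = {"+": "<add>", "-": "<del>"}
    tokens = []
    for line in diff_lines:
        if line[:1] in tag:
            marker, body = tag[line[0]], line[1:]
        else:
            marker, body = "<ctx>", line   # unchanged line kept as context
        tokens.append(marker)
        tokens.extend(body.split())        # naive whitespace tokenization
    return tokens

diff = [
    "  private async function storeAndDisplayDialog(",
    "- return $store_handle;",
    "+ return await $store_success->genUIElementToRender();",
]
seq = serialize_diff(diff)
```

The resulting sequence is what a BERT-style encoder would consume; in practice a subword tokenizer would further split identifiers.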
\paragraph*{\textbf{Performance Prediction.}} In the diff shown above, replacing line 6 (an optimized operation for store handles) with line 7 (an unoptimized version for generic handles) could cause a performance problem. Lines 8-11 are also added, which could independently cause performance problems. It is not known for certain at code review time whether this diff will cause a performance regression, because the code has not been deployed in a real environment yet. In support of an intelligent, automated code review, a BERT-based predictive model (trained on a curated set of past performance regressions) provides an educated guess at whether this diff is a likely performance regression. In this example, suppose that the model's prediction is ``yes". In the status quo, the developer has to start a complex investigation process that involves hypothesis building, statically inspecting potential culprits, and expensive testing with delta-debugging that involves benchmarks~\cite{baltes:15}. If it then turns out that the machine learning prediction was a false positive, it could lead to frustration, as is well known from existing experience with deploying any kind of uncertain program analysis~\cite{christakis:16, sadowski:18}. \begin{listing}[!t] \begin{minted} [ frame=lines, framesep=2mm, baselinestretch=1.2, bgcolor=LightGray, fontsize=\footnotesize, escapeinside=@@, startinline ]{php} - $store_handle = await SomethingStore::genStoreHandle($vc); + $store_handle = await @\colorbox{red}{SomethingStore::genHandle(\$vc)}@; @\colorbox{green}{SomethingStore::genSimple(\$vc)}@ + ... other code ... \end{minted} \caption{Counterfactual explanation of the model's prediction that the code change in Listing~\ref{lst:example} will cause a performance regression. The explanation says that if we replace the red part with green, the model will no longer make that prediction.
We refer to such replacements as \emph{perturbations.}} \label{lst:counterfactual} \end{listing} To help the developer gain better insight about the model's prediction, our system automatically generates a counterfactual explanation, as shown in Listing~\ref{lst:counterfactual}. This counterfactual states that had \texttt{genStoreHandle} been replaced with \texttt{genSimple} instead of \texttt{genHandle}, then the model would \emph{not} have predicted a performance regression. Such feedback is useful to the developer, as it highlights that \texttt{genHandle} is the likely culprit and allows them to ignore changes to the other parts of the code. Thus, the part elided as "... other code ..." can be ignored by the developer when trying to figure out how to address this automated feedback. \paragraph*{Multiple Blame Points} In many cases, the model's prediction does not depend on a single line of the diff, or even on a consecutive fragment of code. Rather, there may be multiple parts of the diff that cause the model to make a certain prediction. For instance, Listing~\ref{lst:multipoint} shows a scenario where the counterfactual involves multiple parts of the diff. For example, if the procedure \texttt{store} relies on an optimization performed by \texttt{genStoreHandle}, then it would make sense that the model's prediction is informed by \emph{both} of these function calls. Observe that techniques based on delta debugging~\cite{simplification} might not be able to pinpoint blame points in non-contiguous pieces of code, for example due to the "... other code ..." in this example. \begin{listing}[!t] \begin{minted} [ frame=lines, framesep=2mm, baselinestretch=1.2, bgcolor=LightGray, fontsize=\footnotesize, escapeinside=@@, startinline ]{php} - $store_handle = await SomethingStore::genStoreHandle($vc); + $store_handle = await @\colorbox{red}{SomethingStore::genHandle(\$vc)}@; @\colorbox{green}{SomethingStore::genSimple(\$vc)}@ + ... other code ...
+ $store_success = await @\colorbox{red}{\$store\_handle->store}@( @\colorbox{green}{\$store\_handle->probe}@ + $store_handle, + $content, + ); \end{minted} \caption{A counterfactual with two changes that must be made together for the model to change its prediction.} \label{lst:multipoint} \end{listing} \paragraph*{Preserving in-distribution} As illustrated by the previous two examples, our method produces counterfactuals that look ``natural" --- that is, the replacements proposed by our technique \emph{could} have plausibly occurred in the diff. This is an important aspect of this work, as candidate explanations that are implausible often cause the model to ``change its mind" due to robustness issues in the model. For example, consider the candidate explanation shown in Listing~\ref{lst:ood}. In this case, replacing the return value by \texttt{await 5} yields a nonsensical code snippet that comes from a different distribution than the one our model has been trained on. Thus, the model is \emph{not} expected to make reliable predictions for such code and can \emph{erroneously} predict that the diff (after perturbation) does not have a performance regression. In this case, the out-of-distribution nature of the perturbation results in an ``explanation" that does not blame the true culprit. We want to avoid such out-of-distribution perturbations. \begin{comment} \begin{listing}[!t] \begin{minted} [ frame=lines, framesep=2mm, baselinestretch=1.2, bgcolor=LightGray, fontsize=\footnotesize, escapeinside=@@, startinline ]{php} - $store_handle = await SomethingStore::genStoreHandle($vc); + $store_handle = await @\colorbox{red}{SomethingStore::genHandle(\$vc)}@; @\colorbox{green}{5}@ + ... other code ... \end{minted} \caption{An out-of-distribution change} \label{lst:ood} \end{listing} \end{comment} \begin{listing}[b] \begin{minted} [ frame=lines, framesep=2mm, baselinestretch=1.2, bgcolor=LightGray, fontsize=\footnotesize, escapeinside=@@, startinline ]{php} + ...
other code ... - return $store_handle; + return await @\colorbox{red}{\$store\_success->genUIElementToRender();}@; @\colorbox{green}{5}@ \end{minted} \caption{A counterfactual based on an out-of-distribution code change.} \label{lst:ood} \end{listing} \paragraph*{\textbf{Testplan Screenshot Prediction.}} So far, we have highlighted salient aspects of our approach in the context of a model for performance prediction. However, such counterfactuals are more broadly useful across different downstream tasks. Next, we illustrate how explanations can be useful in the context of \emph{testplan screenshot prediction}. When submitting a diff for review, the developer also has to provide a testplan that details how exactly that particular diff should be tested. The testplan usually includes instructions on how to run automated tests that exercise the code change, as well as the results of those tests. A testplan that concerns UI functionality should also include a screenshot; otherwise, it is not a good-quality testplan. Calling out poor-quality testplans is another check that code review is expected to perform. At Facebook, we use a predictive model for testplan quality that indicates whether a testplan should contain a screenshot of the introduced functionality. Such a model is trained over a curated dataset of diffs with manually-specified binary labels. For instance, in our running example, the model might predict that the testplan should require a screenshot, possibly because the functionality that is being changed involves displaying and rendering UI elements (line 14 in the original example). Our proposed method also helps justify such testplan-related predictions by producing the counterfactual explanation shown in Listing~\ref{lst:testplan}.
With such an explanation, the developer gets a clear indication that the prediction is related to the call to \texttt{genUIElementToRender}, absolving her from the burden of manually inspecting irrelevant parts of a (potentially very large) diff. \begin{listing}[!t] \begin{minted} [ frame=lines, framesep=2mm, baselinestretch=1.2, bgcolor=LightGray, fontsize=\footnotesize, escapeinside=@@, startinline ]{php} + $store_handle = await SomethingStore::genHandle($vc); + ... other code ... + $store_success = await $store_handle->store( + $store_handle, + $content, + ); - return $store_handle; + return await $store_success->@\colorbox{red}{genUIElementToRender}@(); @\colorbox{green}{getValue}@ } \end{minted} \caption{A counterfactual for the testplan screenshot prediction model} \label{lst:testplan} \end{listing} \begin{listing}[t] \begin{minted} [ frame=lines, framesep=2mm, baselinestretch=1.2, bgcolor=LightGray, fontsize=\footnotesize, escapeinside=@@, startinline ]{php} private async function storeAndDisplayDialog( SomeContext $vc, SomeContent @\colorbox{red}{\$content}@, @\colorbox{green}{\$count}@ ... + ... other code ... + $store_success = await $store_handle->store( + $store_handle, + @\colorbox{red}{\$content}@, @\colorbox{green}{\$count}@ + ); \end{minted} \caption{A counterfactual for the taint propagation detection model} \label{lst:taint} \end{listing} \paragraph*{\textbf{Taint Propagation Detection.}} Next, we discuss the usefulness of counterfactuals in the context of a model that predicts whether a diff introduces data flow that may propagate tainted information. Going back to our running example, a model trained for this task might flag the diff as suspect because the added code stores some content in a data store (lines 9-12 in the original code example). However, given that the diff makes many other changes, it may not be apparent to the developer why the model makes this prediction.
Our technique could also be useful in this context by producing the counterfactual shown in Listing~\ref{lst:taint}. This explanation informs the developer that the model based its decision on the use of the variable name \texttt{content}. \section{Discussion} \subsubsection*{\textbf{Deployment Considerations}} While most research on explanatory systems discusses how to provide faithful explanations for predictions, it implicitly requires those predictions to be true-positives. Most research papers discussing explanations do not even consider false-positives in their evaluation scenarios. However, in the context of deploying explanations as part of large-scale machine learning deployments, considering false-positives is vital to the success of the model. This is especially true in the context of learned models for code quality, where developers are rightly disillusioned with the lack of certainty and soundness of developer tools, leading them to chase ghosts in the light of wrong predictions. That is why we put an emphasis on evaluating our approach not only on a subjective consideration of ``usefulness", but also on whether (and how) an explanation instills trust in both correct and wrong predictions. Another aspect we want to consider for deployment is enabling interactive use of our generated counterfactuals to allow for exploration of decision boundaries (Figure~\ref{fig:robustvscounterfactual}). \subsubsection*{\textbf{Interactive Counterfactual Exploration}} Using generative models to produce counterfactuals incurs non-negligible runtime costs, ranging from around 30 seconds for smaller diffs up to 10 minutes on the very large side of the diff size spectrum. In our current setting (Section~\ref{sec:context}), we can allow ourselves to run counterfactual generation offline and attach the results as part of the code review. However, we envision counterfactual explanations becoming part of an interactive exploration process.
While part of making such an interactive experience possible is certainly performance engineering, we may have to think of other creative ways to make counterfactual generation with language models more responsive. A possibility we want to explore is leveraging the traces of token replacements produced in the offline search to learn a neural model that mimics the MLM filling with much faster inference times. \subsubsection*{\textbf{Limitations and Scope}} We developed and applied the proposed approach in the context of workflows and tools within Facebook. While the internal workflow may have its peculiarities, we generally think that the mode of submitting commits for code review is widely established both in wider industry and open-source projects. Also, while we run our experiments on large transformer (i.e., BERT-like) models, our approach and algorithm are model-agnostic and only require repeated access to the label and score information that most statistical learning models provide. Nevertheless, other kinds of models could exhibit other characteristics when it comes to the kinds of counterfactuals that are produced. \section{Experiments} \label{sec:results} To explore the effectiveness of our proposed approach, we perform a series of experiments based on models and tasks in the context of a large deployment scenario at Facebook. In what follows, we provide a brief description of the underlying setting, including machine learning models and downstream tasks, and then describe the experiment methodology to answer our research questions. \subsection{Setting} The models and tasks we employ in our experiments take as input a diff (e.g., akin to pull-requests in open source projects) and assign a task-dependent probability to the code changes (context, additions, deletions) of that diff. Each model has been pre-trained on millions of diffs at Facebook (with no particular downstream task in mind) and then fine-tuned for specific tasks.
To evaluate our approach, we focus on the downstream tasks described in Section~\ref{sec:context}, namely performance regression, testplan quality (whether a screenshot is warranted), and detecting whether there is a taint propagation issue. \subsection{Research Questions and Methodology} The goal of our experimental evaluation is to address the following research questions. \paragraph{\textbf{RQ1: How often do users find counterfactual explanations for models of code helpful?}} We conduct a study with 3 software engineers and research scientists (we will collectively call them ``users" going forward) from different stakeholder teams within Facebook. We randomly sample 30 instances from the validation dataset used during training of these models. For each instance, we produce counterfactual explanations using our approach. We then ask the participants whether they found the explanation to be useful for understanding the prediction. \paragraph{\textbf{RQ2: How do users utilize counterfactual explanations to discern between true-positive and false-positive predictions in models of code?}} We also ask the study participants to assess whether they think the prediction is a true-positive or a false-positive. They follow a think-aloud protocol in which they are encouraged to verbalize their thought process such that we can capture the rationale of their decision. We qualitatively analyze their responses and report on their experience with utilizing explanations. \paragraph{\textbf{RQ3: Are counterfactual explanations for models of code aligned with human rationales provided by domain experts?}} We perform a case study based on an internal dataset where a team of domain experts for the \emph{taint propagation detection} task had provided human rationales for code changes, which we compare to our generated counterfactual explanations.
We randomly sample a set of 30 diffs and filter out instances where we cannot find counterfactual explanations perturbing at most 5 tokens\footnote{Anecdotally, we can say that perturbing any more than 5 tokens will make an explanation practically useless.}. We are eventually left with 11 instances for our analysis. Since our research questions involve human assessment of the counterfactual explanations, we use up to 5 explanations per diff regardless of the number of explanations generated. The ones shown to users are chosen based on the model's prediction likelihood for the counterfactual. That is, the lower the probability of the positive label from the model, the bigger the influence of the perturbation, and the better the explanation. \subsection{Results} \subsubsection*{\textbf{RQ1: Usefulness}} Our users found the explanations useful or very useful in 83.3\% (25/30) of cases. Very useful explanations made up 30\% (9/30) of cases. They found only 16.6\% (5/30) of the explanations not useful (or were indifferent about them). When analyzing these cases together with the rationales given by the participants, we found that this mostly had to do with explanations that introduced irrational perturbations. There are several kinds of perturbations that were considered irrational in the study. For instance, it was noted that a perturbation did not make sense because a method invocation on a class was perturbed into a method that does not exist in that class. A particular explanation was considered not very useful because the counterfactual introduced \emph{too many} perturbations. Even though we aim for sparse explanations, this can happen due to our goal of \emph{consistency} (see Section~\ref{sec:formative}): if we perturb one particular token (e.g., a type) and it occurs multiple times in the input, we perturb all instances of it. Perturbations that occurred in non-actionable parts of the input were also initially dismissed and deemed not useful.
However, upon reflection, users noted that such explanations are still useful for models where contextual cues (i.e., where a change occurs) are sometimes more important than the change itself. This reinforces the perspective on feasibility and actionability we observed in our formative study (Section~\ref{sec:formative}). \subsubsection*{\textbf{RQ2: Discerning TP vs FP}} In addition to eliciting usefulness signals, we wanted to observe how our explanations can help users discern whether a prediction is a true-positive or a false-positive. Our users were able to accurately determine the underlying ground truth in 86.6\% (26/30) of cases. We noticed a distinction in how participants arrived at their interpretation of the explanations. In true-positive instances, participants noted that the explanation guided them to parts of the code that were aligned with their mental model. More specifically, the explanation reinforced the hypothesis that the prediction had induced. (One participant did note that while this line of reasoning seems sound, it could be prone to confirmation bias.) In false-positive instances, participants noted that the strongest signal not to trust the prediction was the level of unreasonableness of the explanation. If the explanation pointed to irrelevant parts of the input or introduced an irrational perturbation, it was a sign that the prediction could probably not be trusted. Overall, our qualitative analysis showed that the counterfactual explanations provided our study participants with intuition and confidence in understanding exactly where the model is picking up its signal. Whether that signal was reasonable or not helped them decide to trust or discard the prediction. \begin{table}[h!]
\caption{Experiment overview summarizing the quantitative results of RQ1 and RQ2} \label{tab:results} \begin{tabular}{@{}lll@{}} \toprule \textbf{User Study} & \textbf{TP/FP Guess} & \textbf{Usefulness} \\ \midrule Overall & 86.6\% Accuracy & 83.3\% useful / 16.6\% not \\ Performance & 85\% Accuracy & 85\% useful / 15\% not \\ Testplan Screenshot & 90\% Accuracy & 80\% useful / 20\% not \\ \bottomrule \end{tabular} \end{table} \subsubsection*{\textbf{RQ3: Correspondence with Human Rationale}} We investigate to what extent the generated explanations align with the rationale provided by human experts. To determine this alignment, two of the authors compare the generated explanations to the rationales. Specifically, we map the natural language description to tokens of the source code. We consider an explanation to match the rationale if their token attributions overlap by at least 50\%. In our sample, $\sim$90\% (10/11) of the instance explanations (at least one of the top 5 offered) aligned with the human rationale. Even in the one case we deemed not to align, while the top 5 explanations by our scoring did not match the human rationale, two among the top 10 did. In addition to analyzing alignment, we wanted to see how our approach compares to a representative baseline from the literature. Thus, we also generate explanations that are based on occlusion (i.e., token removal instead of replacement) using the SEDC algorithm~\cite{martensprovost}, a greedy approach that successively removes token combinations from the input until the prediction flips. Occlusion has been used as a simple, yet effective, baseline in related work in NLP~\cite{hotflip, li_nlp:16, feng:18}.
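To make the baseline concrete, the following is a minimal sketch of an SEDC-style greedy occlusion search. The classifier \texttt{toy\_prob} is a hypothetical stand-in, and this simplified variant removes one token at a time rather than searching over token combinations as the full algorithm does.

```python
def sedc_occlusion(tokens, prob_positive, max_size=5):
    """SEDC-style greedy occlusion: repeatedly remove the token whose
    removal lowers the positive-class probability the most, until the
    prediction flips (probability below 0.5) or the budget is exhausted.
    Returns the removed tokens as the explanation, or None."""
    remaining = list(tokens)
    removed = []
    for _ in range(max_size):
        if not remaining:
            break
        # Evaluate every single-token removal on the current input.
        best = min(range(len(remaining)),
                   key=lambda i: prob_positive(remaining[:i] + remaining[i + 1:]))
        removed.append(remaining.pop(best))
        if prob_positive(remaining) < 0.5:
            return removed
    return None

# Toy classifier (hypothetical): flags inputs containing both keywords.
def toy_prob(tokens):
    return 0.9 if ("await" in tokens and "for" in tokens) else 0.1

print(sedc_occlusion(["async", "await", "for", "task"], toy_prob))  # -> ['await']
```

Removing either keyword alone already flips the toy prediction, so the search terminates with a one-token explanation.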
Table~\ref{tab:quantresults} provides a quantitative overview of the results of our approach (CFEX) and the baseline (SEDC), including: number of explanations generated (\textbf{\#Exp}), size of the diff measured in number of tokens (\textbf{Size}), and summary statistics on the size of explanations in number of tokens (\textbf{Avg, Min, Max}). We compare both approaches by determining a winner (\textbf{Wins}) based on the following criteria: First, the approach needs to produce an explanation that aligns with the human rationale. If both approaches generate aligning explanations, the shorter explanation wins (fewer attributed tokens). If the explanation size is the same, it is considered a tie. In our results, we observe wins or ties in $\sim$90\% (10/11) of cases for CFEX and $\sim$18\% (2/11) for SEDC. This indicates that CFEX produces explanations that are more aligned with human rationales while also producing shorter explanations than the baseline technique does. Overall, we were able to see that counterfactual explanations generated through perturbations proposed by an MLM are \emph{useful}, can help users discern false- and true-positives, and seem likely to align with human rationale.
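The alignment criterion from RQ3 can be sketched as a simple token-overlap check. This is a simplified reading of the 50\% criterion; in the study, the mapping from natural-language rationales to source tokens was done manually by the authors.

```python
def aligns(explanation_tokens, rationale_tokens, threshold=0.5):
    """An explanation matches the human rationale if at least `threshold`
    of its attributed tokens also occur in the rationale's token set."""
    exp, rat = set(explanation_tokens), set(rationale_tokens)
    if not exp:
        return False
    return len(exp & rat) / len(exp) >= threshold

# Hypothetical example: 2 of 3 attributed tokens appear in the rationale.
print(aligns(["getTime", "await", "push"], ["await", "getTime"]))  # -> True
```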
\begin{table}[h] \caption{Overview of generated counterfactual explanations (CFEX) and explanations generated by token removal (SEDC)} \label{tab:quantresults} \begin{tabular}{ll|llllll} & & \textbf{\# Exp} & \textbf{Size} & \textbf{Avg} & \textbf{Min} & \textbf{Max} & \textbf{Wins} \\ \hline 1 & CFEX & 1 & 234 & 4 & 4 & 4 & \textbf{x} \\ & SEDC & - & & & & & \\ \hline 2 & CFEX & 7 & 143 & 5 & 5 & 5 & \textbf{x} \\ & SEDC & 2 & 143 & 4.5 & 4 & 5 & \textbf{x} \\ \hline 3 & CFEX & 23 & 274 & 3.83 & 3 & 4 & \\ & SEDC & 13 & 274 & 4 & 4 & 4 & \textbf{x} \\ \hline 4 & CFEX & 11 & 307 & 2.91 & 2 & 3 & \textbf{x} \\ & SEDC & 11 & 307 & 3.64 & 1 & 4 & \\ \hline 5 & CFEX & 8 & 48 & 2.75 & 2 & 3 & \textbf{x} \\ & SEDC & 2 & 48 & 3 & 3 & 3 & \\ \hline 6 & CFEX & 75 & 315 & 4 & 4 & 4 & \textbf{x} \\ & SEDC & - & & & & & \\ \hline 7 & CFEX & 13 & 96 & 3.62 & 3 & 4 & \textbf{x} \\ & SEDC & - & & & & & \\ \hline 8 & CFEX & 27 & 219 & 2.96 & 2 & 3 & \textbf{x} \\ & SEDC & 22 & 219 & 3.95 & 3 & 4 & \\ \hline 9 & CFEX & 88 & 292 & 2.77 & 2 & 3 & \textbf{x} \\ & SEDC & 30 & 292 & 3 & 3 & 3 & \\ \hline 10 & CFEX & 124 & 301 & 2.95 & 2 & 3 & \textbf{x} \\ & SEDC & - & & & & & \\ \hline 11 & CFEX & 15 & 117 & 2.27 & 1 & 3 & \textbf{x} \\ & SEDC & 22 & 117 & 2.23 & 1 & 3 & \end{tabular} \end{table} \section{Desiderata for Counterfactual Explanations for Code} \label{sec:formative} The series of examples presented in Section~\ref{sec:context} hopefully conveys the idea that the results of ML models can be made more useful, persuasive, and actionable if accompanied by a counterfactual explanation. However, when designing explanatory systems, there are several criteria and pitfalls to consider, some of which have been discussed in the literature~\cite{pitfalls}. Additionally, since our method targets models of code, we need to take into account specific considerations of software engineers submitting their changes for code review. For this purpose, we conduct a formative study with three software engineers who are domain experts in the downstream tasks that we investigate in this paper. Our goal is to understand the desiderata of explanation systems as part of the software engineering process.
In this study, we present users with the predictions of the model for specific tasks and ask them to provide a human rationale for each prediction. We also show them different counterfactual explanations and ask them for feedback. Overall, we discuss around 10 predictions with each participant. At this point in our research, we were mostly interested in open-ended, qualitative feedback to understand the frustrations and confusions of software engineers and assess overall room for improvement. This study informed our design decisions, which in turn influenced our problem formulation, implementation, and experiments. In what follows, we briefly summarize our findings, many of which are consistent with similar findings in different (non-code related) contexts. \paragraph{\textbf{Plausibility and actionability.}} Plausibility and actionability are regularly mentioned terms in the literature on counterfactual explanations~\cite{face, cf_survey1} and algorithmic recourse~\cite{karimi:21, rawal:20}. Plausibility means that we should not generate counterfactuals that are unrealistic or could not believably be part of the original input. Similarly to plausibility constraints in other domains, a code counterfactual is plausible if the code retains its naturalness as a result of the modification. We showed Listing~\ref{lst:counterfactual} as a plausible counterfactual, and Listing~\ref{lst:ood} as one that is \emph{im}plausible. Another concern frequently brought up in our study is that of \emph{actionability}, i.e., does the explanation show actionable recourse options? Pragmatically, we could take into account actionability constraints in our problem formulation and only allow counterfactuals that modify actionable features of the input. For example, in the case of code diffs, we could restrict modifications to only the added lines in the diff.
However, after more discussions with our formative study participants, we realized that deleted lines and context matter just as much to the model --- otherwise we could have trained the model only on added lines in the first place, making it a lot more myopic and consequently imprecise. Therefore, we decided to \emph{not} restrict which parts of the input to perturb, with the rationale that the model designer can always later apply a filter to the generated counterfactuals in a post-processing step. Listing~\ref{lst:taint} showed a counterfactual in which one of the perturbations is outside the lines changed in the diff. \begin{comment} \begin{listing}[H] \begin{minted} [ frame=lines, framesep=2mm, baselinestretch=1.2, bgcolor=LightGray, fontsize=\footnotesize, escapeinside=@@, startinline ]{php} ... lines in context... - @\colorbox{red}{efficientCall}@() @\colorbox{green}{unremarkableCall}@ + inefficientCall() \end{minted} \caption{An unintuitive counterfactual} \label{lst:unintuitive} \end{listing} \end{comment} \paragraph{\textbf{Consistency.}} Another important finding from our formative study is that counterfactuals that modify \emph{different occurrences} of the \emph{same} identifier inconsistently are neither particularly useful nor plausible. For example, consider a diff involving a class called ``SomethingStore''. If the modification renames this class in the import statement but not elsewhere in the code, the modified code neither makes sense nor leads to useful counterfactuals. Based on this finding, we decided to restrict the code modifications we consider when generating counterfactuals to those that are \emph{consistent}. That is, our approach only allows input modifications that change the same identifier consistently throughout the code diff. \paragraph{\textbf{Proximity.}} Multiple formative study participants brought up that the distance between multiple perturbations matters to the comprehensibility of explanations.
Perturbations that were part of the explanation of one instance, but that were ``too spread out'', were confusing and not really helpful (e.g., an explanation that added perturbations in the beginning of the diff and at the far end of the diff). We use this finding to design a heuristic to select between multiple candidate counterfactuals. In particular, when our method finds multiple explanations, we prefer those where the modifications are not too far apart. \section{Introduction} With the rapid advances in deep learning over the last decade, complex machine learning models have increasingly made their way into software engineering. In particular, models based on deep neural networks are now routinely used in code-related tasks such as bug detection~\cite{deepbugs}, auto-completion~\cite{autocomplete}, type inference~\cite{typewriter}, code summarization~\cite{codesummary1, codesummary2}, and more. While deep learning models are remarkably accurate and generally quite effective, one growing concern about their adoption is \emph{interpretability}. In particular, because deep neural networks are extremely complex, it is often very difficult to understand why they make the predictions that they do. This issue of interpretability is a particularly relevant concern in software engineering since the outputs of machine learning models are routinely used to make predictions about code quality and non-functional properties of code. Even worse, because the predictions of these models are often consumed by software developers to change or improve an existing code base, it is particularly important that developers are able to understand \emph{why} machine learning models make certain predictions. To provide a more concrete illustration, consider a machine learning model that has been trained to detect whether some piece of code contains a security vulnerability~\cite{securityvulnerability}.
If the developer does not understand \emph{why} the model thinks a piece of code is vulnerable, they may fail to take a true vulnerability seriously. Furthermore, even when they do take the model prediction seriously, they might not be able to take remedial action if they cannot understand why something is considered to be a vulnerability. This paper takes a step towards improving the interpretability of machine learning models used in code for \emph{practical downstream applications.} Motivated by the shortcomings of existing methods like LIME~\cite{lime}, SHAP~\cite{shap}, and attention-based methods~\cite{vaswani2017attention} in the context of source code, we develop a new technique for generating \emph{counterfactual explanations} for models of code. Such explanations demonstrate how the model's prediction \emph{would} have changed had the program been modified---or, \emph{perturbed}---in a certain way. We believe that counterfactual explanations are particularly useful in the context of software engineering, as they reveal not only \emph{which} parts of the code the model is paying attention to, but also how to \emph{change} the code so that the model makes a different prediction. This mode of so-called contrastive reasoning through counterfactuals is also aligned with how humans explain complex decisions~\cite{miller:19}. In order to generate useful counterfactual explanations for code, we have conducted a formative study involving software engineers using different machine learning models at Facebook. Informed by the findings from this formative study, we then formalize what a counterfactual explanation entails in the context of code and present an algorithm for generating such explanations. At the heart of our technique lies a mechanism for modifying the program such that the resulting code is ``natural''.
This ability is extremely important because perturbations that result in ``unnatural'' code can confuse the model by generating programs that come from a different distribution that the model has \emph{not} been trained on. Thus, such modifications often reveal robustness problems in the model as opposed to yielding useful counterfactual explanations. Furthermore, counterfactual explanations that result in ``unnatural'' code are also not useful to developers because they do not provide meaningful insights into what a proper contrastive example that flips the model prediction looks like. We show an example of this in Section~\ref{sec:context}. In this paper, we propose using \emph{masked language models} (MLMs) as a way of generating meaningful counterfactual explanations. Our proposed technique ``masks'' a small set $S$ of tokens in the original program and uses an MLM to predict a new set of tokens $S'$ that can be used to replace $S$. This results in a ``natural''-looking perturbation of the original program for which the model can make meaningful predictions, as the perturbed program comes from the same distribution that the model has been trained on. Our method then systematically builds a search space of perturbations and tests whether a perturbation results in a change of prediction for the relevant downstream task; if so, it is returned as a counterfactual explanation. We conduct a series of experiments in the context of three machine learning models at Facebook that are applied to source code diffs: performance regression prediction, testplan screenshot prediction, and taint propagation detection. We conduct a user study with 3 software engineers and research scientists at Facebook to see whether counterfactual explanations help them discern true positive from false positive predictions (they can in 86\% of cases), and how useful they find these explanations (useful in 83\% of cases).
We also assess how our explanations align with human rationales for predictions provided by domain experts. For our study sample, our generated counterfactual explanations correspond to the human rationale in over 90\% of cases. \paragraph{\textbf{Contributions}} This paper makes the following contributions: \begin{itemize} \item We present a method of generating counterfactual explanations for ML models that predict certain properties of code or code changes. These properties are typically checked by humans during code review, so if ML models were to be used in place of humans, an explanation of the model predictions is necessary. \item We present desiderata for counterfactual explanations in the context of source code that are \emph{useful} to the end user who has to consume the predictions of the ML models, as well as the counterfactual explanation offered along with it. \item We have applied counterfactual explanation generation to three distinct real-world code quality machine learning models. We show empirical results on the usefulness of these explanations to the users of these models. We also show, quantitatively, that counterfactual explanations have a better correspondence with human-generated explanations, compared to a previously presented perturbation-based explanation technique~\cite{martensprovost}. \end{itemize} The paper is organized as follows. Section~\ref{sec:context} gives the context of this work in terms of the model family we use, as well as the tasks for which these models are trained. We walk the reader through examples of counterfactual explanations. Section~\ref{sec:formative} describes informally the requirements for useful counterfactual explanations; these are based on interactions with prospective users. Section~\ref{sec:problem-def} tackles the problem of explanation generation formally, and presents our core algorithm. Section~\ref{sec:results} describes our experimental setting and the results.
\section{The Need for Model Explanation} Let us consider a learned binary classification model that takes as input a ``diff'', i.e., a commit (a series of code additions, deletions, and context code lines around the changes), and returns the probability that the commit will introduce a performance problem~\cite{cito:19, weber:21, deepperf}. \begin{minted} [ frame=lines, framesep=2mm, baselinestretch=1.2, bgcolor=LightGray, fontsize=\footnotesize, linenos ]{Javascript}
async function handleTask(taskManager, taskId) {
  const starter = await taskManager.getStarted()
+ const startTime = await taskManager.getTime()
  if(starter.endTime < startTime) {
-   starter.requestNextTask = false
+   starter.requestNextTask = true
+   handleCanceledTask(taskManager, taskId)
  }
+ starter.push(taskId)
+ starter.dispatch()
}

...

+ async function editTask(taskManager, taskId, data) {
+   task = taskManager.find(taskId)
+   task.cancel()
+   handleCanceledTask(taskManager, taskId)
+   task.update(taskId, data)
+   task.resetTime()
+   handleTask(taskId)
+ }

...
+ async function handleCanceledTask(taskManager, cancelTaskId) {
+   for(task of taskManager.all()) {
+     if(task.id == cancelTaskId) {
+       taskManager.prune(task)
+     }
+   }
+ }
\end{minted} The code example above represents a larger commit that was labeled as potentially introducing a performance regression by such a machine learning model. The developer of this commit is confused at first, as they are not immediately aware of anything that even has the potential of introducing such an issue. In the current status quo, the developer has to start a complex investigation process that involves hypothesis building, statically inspecting potential culprits, and expensive testing with delta-debugging that involves benchmarks~\cite{baltes:15}.
If it then turns out that the machine learning prediction was a false positive, it could lead to frustration well known from existing experiences when deploying any kind of uncertain program analysis~\cite{tricorder, christakis:16, sadowski:18}. To avoid unnecessary grievances, we introduce our system that generates counterfactual explanations. We produce a minimally different commit that would \emph{not} be classified as a performance regression. Let us revisit two scenarios of how our explanations bring value in this situation. \paragraph{\textbf{Scenario 1: False Positive Explanation}:} Our explanation attributes the prediction to Line 3 in our commit: when changing \texttt{await taskManager.getTime()} to a call to a different method, \texttt{find}, the classifier changes its mind. The developer is now guided to investigate this method invocation. They inspect the \texttt{getTime} method and can verify that there is nothing that can cause a performance regression. They conclude that the machine learning model must have picked up on a spurious correlation (potentially involving the \texttt{await} keyword indicating an asynchronous call). The explanation has enabled the developer to understand the inner workings of the model and move on quickly, not having wasted time on a false-positive. \paragraph{\textbf{Scenario 2: True Positive Explanation}:} Let us consider a case in which the prediction is a true positive. In this case, the explanation points to line 28 in our commit: when changing \texttt{prune(task)} to a different method, \texttt{update(task)}, the classifier changes its mind. The developer again is guided in their exploration, and this time the inspection reveals that the \texttt{prune} method is indeed the culprit. The developer can now revisit the design of their code contribution. \\ In both cases, the explanation achieved its goal.
It provided guidance to the developer by pointing out similar, plausible scenarios in which the model changed its mind. \section{Preliminary Observation: Naive Perturbations expose Robustness Issues} We explore the efficacy of existing perturbation-based methods and find that rather than providing explanations, they expose model robustness issues. We applied two naive perturbation-based explanation methods, SEDC and SHAP, to an internal autocomplete model based on an LSTM. We sample N=X instances from a public dataset (ETH dataset?) and apply the explanation methods to them. We report on the distributions of explanations found with both approaches. Two authors of this paper qualitatively analyze each of the explanations and label whether an explanation is reasonable or an indication of a robustness issue instead. Initial experimentation showed that simply removing a set of tokens does not lead to satisfying explanations. This phenomenon can be observed both in natural language and source code tasks. Let us have a look at two very unsatisfying explanations for a canonical NLP dataset, IMDB. When using naive explanation methods, we were able to see that removing either the tokens ['by', '.'] or ['by', '+'] leads the model to predict differently (without altering the semantics of the input). In our autocomplete model, we similarly see that removing syntactic tokens leads to a significant change in prediction. Removing \emph{['=', '[']} leads to the model predicting the wrong token and the original prediction not even being among the Top-3 predictions any more. Rather than finding explanations, we have now actually found something akin to adversarial examples known from research in model robustness.
Reflecting on this finding, we found that the definition of adversarial examples differs in only one particular aspect: What is a (minimal) set of \emph{imperceptible} changes to a specific input, such that the classification of that input changes? This has a different meaning for different kinds of input spaces. For images, the perturbation should be imperceptible to the human eye. For text, the perturbation often replaces features with synonyms~\cite{riberio:18}. For source code, the perturbation could include operations such as alpha renaming~\cite{semanticrobustness}. The goal is testing the semantic robustness of the models: would the model predict differently given two semantically equivalent input instances? \section{Counterfactual Explanations for Models of Code} In this section, we formalize the problem of generating counterfactual explanations for code and describe our algorithm. \subsection{Problem Formulation}\label{sec:problem-def} Our problem formulation utilizes the concept of a \emph{perturbation}, which is a transformation $\pi$ that can be applied to a program. Given a program $P$ and a perturbation $\pi$, we write $\pi(P)$ to denote the resulting program obtained by applying $\pi$ to $P$.
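Concretely, over a tokenized program, a perturbation can be represented as a mapping from token positions to replacement tokens. The following is a minimal sketch, not the paper's implementation; the token names are illustrative.

```python
def apply_perturbation(tokens, perturbation):
    """Apply a perturbation pi, given as {position: replacement_token},
    to a tokenized program P, yielding the perturbed program pi(P)."""
    return [perturbation.get(i, tok) for i, tok in enumerate(tokens)]

# pi(P): flip the literal at position 4.
tokens = ["starter", ".", "requestNextTask", "=", "false"]
print(apply_perturbation(tokens, {4: "true"}))
# -> ['starter', '.', 'requestNextTask', '=', 'true']
```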
Given a machine learning model $\mathcal{M}$ trained on a set $S \sim \mathcal{D}$ for some task $t$ and a program $P \sim \mathcal{D}$, a \emph{counterfactual explanation} is a perturbation $\pi$ of $P$ satisfying the following criteria: \begin{enumerate} \item The model makes different predictions for the original program and the perturbed version, i.e., $\mathcal{M}(P) \neq \mathcal{M}(\pi(P))$ \item The ground truths for $P$ and $\pi(P)$ are different for task $t$, i.e., $\mathcal{G}(P) \neq \mathcal{G}(\pi(P))$ \item The perturbed program $\pi(P)$ is ``natural'', meaning that it comes from the same distribution as the original program $P$ (i.e., $\pi(P) \sim \mathcal{D}$) \end{enumerate} Note that our definition of counterfactual explanation is quite similar to that of \emph{adversarial examples}~\cite{yefet:20, semanticrobustness} in that both are defined as small perturbations $\pi$ that cause the model to ``change its mind'' (i.e., condition (1) from above). However, there are two key differences between counterfactual explanations and adversarial examples that are outlined in conditions (2) and (3) in the above definition. First, adversarial examples are inputs that are designed to intentionally fool the model -- that is, from a human's perspective, the prediction result for $P$ and $\pi(P)$ should be exactly the same. In contrast, a perturbation only makes sense as a counterfactual explanation if $P$ and $\pi(P)$ are semantically different for task $t$ from the user's perspective. That is, the ground truth predictions for $P$ and $\pi(P)$, denoted by $\mathcal{G}(P)$ and $\mathcal{G}(\pi(P))$ respectively, should be different, as stipulated in condition (2) of our definition. In addition, a second key difference between counterfactual explanations and adversarial examples is whether they are ``natural''.
In particular, since adversarial examples are specifically crafted to fool the model, there is no requirement that they are drawn from the same distribution that the model has been trained on. On the other hand, for a perturbation $\pi$ to make sense as an explanation of the model's behavior as opposed to a \emph{robustness} issue, the perturbed program needs to be drawn from the same distribution as the training data, as stipulated in condition (3). We illustrate the differences between counterfactuals and adversarial examples in Figure~\ref{fig:robustvscounterfactual} in the context of a classification problem with four classes, depicted by different colors. As we go out from the center, the data gets increasingly out of distribution. If we want to show an adversarial example, we try a perturbation that pulls the input out of distribution to make the model mispredict, but we want to retain the same ground truth. If we want to find a counterfactual, we try to perturb such that the ground truth label does change (as well as the model prediction) while staying within distribution. \paragraph{\textbf{Discussion.}} While our problem formulation stipulates that a counterfactual should flip the \emph{ground truth} prediction, our algorithm for generating counterfactuals can never enforce condition (2), as we do not have access to the ground truth. In fact, an explanation generated by \emph{any} technique (including ours) could violate condition (2). However, \emph{assuming} that the model is quite accurate for inputs drawn from the target distribution (i.e., $P \sim\mathcal{D}$), we have $\mathcal{M}(P) \approx \mathcal{G}(P)$, so conditions (1) and (2) become interchangeable. Thus, for models with high accuracy, an algorithm that ensures conditions (1) and (3) is unlikely to generate counterfactual explanations that violate condition (2). 
Additionally, the contrapositive of the above observation suggests a way of using counterfactuals to diagnose a model's mispredictions. In particular, suppose that an algorithm produces a counterfactual $\pi$ where the user thinks $\mathcal{G}(P)$ should be the same as $\mathcal{G}(\pi(P))$. In this case, assuming that the counterfactual generation algorithm enforces conditions (1) and (3), the above assumption that $\mathcal{M}(P) \approx \mathcal{G}(P)$ is violated. Thus, in practice, we have observed a strong correlation between ``nonsensical'' counterfactuals and model mispredictions. Said differently, if the algorithm produces a counterfactual that does not make sense to users, this can serve as a strong hint that the model's prediction is wrong. As an example of this way of diagnosing model mispredictions, revisit Listing~\ref{lst:example}. Suppose, hypothetically, that Listing~\ref{lst:testplan} were offered as the (only) counterfactual for the performance model's prediction of a potential regression. Suppose further that a human judge immediately sees that the unperturbed and the perturbed code should have the same ground truth, because the UI-related call has nothing to do with performance. In that case, we can conclude that the model's prediction of a performance regression was a misprediction. \begin{figure} \includegraphics[width=0.8\columnwidth, clip]{counterfactuals-bigger.pdf} \caption{The two colors show different classes. As we go out from the center, the data gets more out of distribution. Two perturbations are shown: one that exposes a robustness issue, and the other that is useful for counterfactual explanation.
The colors of the small circles show the class that the model predicts.} \label{fig:robustvscounterfactual} \end{figure} \subsection{MLM-Based Perturbations for Code}\label{sec:mlm} Our problem formulation does not specify the exact nature of perturbations; however, a counterfactual generation algorithm needs to consider a universe of possible modifications to a given code snippet. To ensure that our counterfactual explanations satisfy the naturalness criterion (i.e., condition (3)), we propose generating perturbations using \emph{masked language modeling (MLM)}~\cite{bert}. At a high level, the idea is to replace each token with a blank (``mask'') and use MLM to come up with a plausible replacement for the original token. In more detail, our approach first tokenizes the input program $P$ and produces an indexed sequential representation $\mathcal{T}(P) = \{p_i\}_{i=1}^{N}$. Then, given a specific token $p_j$ that we want to perturb, we produce a new sequence $\mathcal{T}(P') = \langle p_1, \ldots, p_{j-1}, \texttt{[MASK]}, p_{j+1}, \ldots, p_N \rangle$ and use MLM to produce a probability distribution for possible instantiations of \texttt{[MASK]}. Since MLM leverages all other tokens in the sequence to make predictions, it can produce more natural perturbations than other (more traditional) language models that only leverage preceding tokens in the sequence. \begin{comment} \subsection{Generative Natural Counterfactuals} To produce counterfactuals that adhere to the ``naturalness'' criteria and are thus still part of our distribution, we leverage a masked language model, a technique where some tokens are replaced with ``masks'' and are replaced by likely variants by the model. Formally, we take an input program $P \sim \mathcal{D}$ and tokenize that produces an indexed sequential representation $\mathcal{T}(P) = \{p_i\}_{i=1}^{N}$. The next step is to convert the input into a masked form, introducing a \texttt{[MASK]} token at each position we want to perturb.
Specifically, given a tokenized program $\mathcal{T}(P) = \langle p_1, \ldots, p_j, \ldots p_N \rangle$, introducing a masking token over the $j$-th token would yield $\mathcal{T}(P') = \langle p_1, \ldots, p_{j-1}, \texttt{[MASK]}, p_{j+1}, \ldots p_N \rangle$. An MLM models the probability $p(p_j | \langle p_1, \ldots, p_{j-1}, \texttt{[MASK]}, p_{j+1}, \ldots p_N \rangle)$ and is able to leverage context both before and after the masking token to produce more accurate predictions than traditional language models, which can only use preceding context. While we use mask filling in our experiments, our approach works with any technique that produces plausible replacements for a particular token (or set of tokens) in the input (e.g., code2vec~\cite{code2vec}). \end{comment} \subsection{Algorithm for Generating Counterfactuals} We now describe our algorithm for generating counterfactual explanations. Our algorithm takes as input a program $P$ and a machine learning model $\mathcal{M}$ and returns a set of counterfactual explanations $E$. Each explanation $e \in E$ is a mapping from a token $t_i \in P$ to a new token $t_i'$ such that replacing every $t_i$ with $t_i'$ causes the model $\mathcal{M}$ to change its prediction. At a high level, our algorithm explores explanations of increasing size, where the size of an explanation is the number of replaced tokens. Specifically, Algorithm~\ref{alg:generatecounterfactual} maintains a set (called \emph{explore}) of \emph{candidate replacements}, where each candidate is a set of tokens to be replaced. Then, in each iteration, it grows the most promising candidate replacement by one additional token (Line 6) to generate a new set of candidate replacements called \emph{new\_candidates} (Line 7). Then, for each candidate $c$ in this set, the algorithm invokes the procedure \textsf{FindCounterfactual} (Algorithm~\ref{alg:exists_cf}) to test whether there exists a counterfactual explanation whose domain is exactly $c$.
If so, all explanations returned by \textsf{FindCounterfactual} are added to the set \emph{explanations} (line 11). On the other hand, if there is no counterfactual whose domain is $c$, we add $c$ to the set \emph{explore} so that the domain of the explanation can be grown in subsequent iterations (line 13). Note that, since smaller counterfactual explanations are always preferable, there is no point in adding $c$ to \emph{explore} if \textsf{FindCounterfactual} returns a non-empty set. Algorithm~\ref{alg:exists_cf} shows the implementation of the \textsf{FindCounterfactual} procedure. At a high level, this method uses the masked language modeling technique described in Section~\ref{sec:mlm} to find plausible replacements for \emph{all} tokens in $c$ (line 4). That is, given a set of tokens $p_1, \ldots, p_n$, we replace each $p_i$ in this set with a mask and ask the language model to generate a set $S$ of plausible replacements for the tokens $p_1, \ldots, p_n$. Then, for each replacement $\langle p_1', \ldots, p_n'\rangle$ in set $S$, we check whether substituting each $p_i$ with $p_i'$ \emph{actually} causes the model $\mathcal{M}$ to change its prediction (line 7). If so, we add the mapping $[p_1 \mapsto p_1', \ldots, p_n \mapsto p_n']$ to the set of generated counterfactual explanations (line 8).
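To make the two procedures concrete, the following is a minimal Python sketch of the search (illustrative only, not our actual implementation). The `model` and `mlm_propose` callables are stand-ins: any masked language model that proposes replacement tokens for a masked position would fit the latter interface, and \textsf{Choose} is approximated here by simple FIFO order.

```python
from itertools import product

def find_counterfactual(tokens, model, positions, mlm_propose):
    """Sketch of FindCounterfactual: try MLM-proposed replacements for the
    tokens at `positions` and keep those that flip the model's prediction."""
    explanations = []
    original = model(tokens)
    # One candidate list per masked position, as proposed by the (stub) MLM.
    candidates = [mlm_propose(tokens, j) for j in positions]
    for combo in product(*candidates):
        perturbed = list(tokens)
        for j, new_tok in zip(positions, combo):
            perturbed[j] = new_tok
        if model(perturbed) != original:  # prediction flipped: counterfactual
            explanations.append(dict(zip(positions, combo)))
    return explanations

def generate_counterfactuals(tokens, model, mlm_propose, iters=3):
    """Sketch of GenerateCounterfactualExplanations: grow candidate token
    sets one position at a time, keeping unsuccessful sets for later growth."""
    explore = [frozenset()]  # start from the empty replacement set
    explanations = []
    for _ in range(iters):
        if not explore:
            break
        best = explore.pop(0)  # FIFO stand-in for Choose(explore)
        for i in range(len(tokens)):
            if i in best:
                continue
            c = best | {i}
            found = find_counterfactual(tokens, model, sorted(c), mlm_propose)
            if found:
                # Smaller explanations are preferable, so c is not grown further.
                explanations.extend(found)
            else:
                explore.append(c)  # no counterfactual yet: grow c later
    return explanations
```

For instance, with a toy model that flags any token list containing a particular identifier, the search returns the single-token mapping that replaces that identifier.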
\begin{algorithm}[t] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \DontPrintSemicolon \Input{Input program $P$, Model $\mathcal{M}$: $\mathcal{D} \mapsto \{T, F\} \times \mathcal{R}$ returning a classification and a score, iteration count \texttt{ITER}} \Output{Set of counterfactual explanations that lead to prediction changes} \BlankLine $explore = \{\varnothing\}$\\ $tokens = \{ t_i~|~t_i \in \mathcal{T}(P) \}$\\ explanations = $\varnothing$ \\ \For{$iter \leftarrow 1$ \KwTo \texttt{ITER}} { $c_{best} = \mathsf{Choose}(explore)$\\ new\_candidates = $\{ c_{best} \cup \{t_i\}~|~t_i \in (tokens \setminus c_{best}) \}$\\ \For{$c$ in new\_candidates}{ $E = \mathsf{FindCounterfactual}(P, \mathcal{M}, c)$\\ \If{$E \neq \varnothing$} { explanations.add($E$) } \Else { explore.add($c$) } } } return explanations \caption{GenerateCounterfactualExplanations} \label{alg:generatecounterfactual} \end{algorithm} \begin{algorithm}[t] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \DontPrintSemicolon \Input{Input program $P$, Model $\mathcal{M}$, Token or token combination to perturb $\langle p_1, \ldots, p_k \rangle$ where $p_i \in \mathcal{T}(P)$\\} \Output{Set $E$ of counterfactual explanations, each mapping the perturbed tokens to replacements that change the model's prediction (empty if none exists)} \BlankLine $E = \varnothing$ \\ $S= \mathsf{MLM}(P, \langle p_1, \ldots, p_k \rangle)$\\ \For{$\langle p_1', \ldots, p_k' \rangle \in S$} { $P' = P[p_1'/p_1, \ldots, p_k'/p_k]$ \\ \If{$\mathcal{M}(P) \neq \mathcal{M}(P')$ } { $E = E \cup [p_1 \mapsto p_1', \ldots, p_k \mapsto p_k'] $ } } return $E$ \caption{FindCounterfactual} \label{alg:exists_cf} \end{algorithm} \section{Related Work} There has been significant recent interest in improving the interpretability of machine learning models. Efforts in this space can be broadly classified into two categories, namely \emph{local} and \emph{global} interpretability. Local explainability techniques aim to provide justification for predictions on a specific input~\cite{lime,shap,anchors}.
On the other hand, global explainability techniques try to explain the behavior of the \emph{whole} model, for instance by constructing a simpler surrogate model that emulates the original model~\cite{global1,global2}. Since our goal in this work is to provide justifications for individual predictions, this work falls under \emph{local explanation} techniques. In what follows, we give a more detailed overview of relevant work in this space. \paragraph{White-box techniques for local explanations} Techniques for generating local explanations can be further classified as being either \emph{white-box} or \emph{black-box}. As their name indicates, white-box techniques are customized to specific ML models and exploit the internals of the model when generating explanations. A common approach to white-box interpretability of deep learning is through so-called \emph{attention mechanisms}, where the idea is to use the weights of attention layers inside the network to determine the importance of each input token~\cite{clark:19, galassi:20, li:16, vashishth:19}. However, recent work has shown that different weights can lead to the same prediction and has called into question whether attention-based mechanisms are actually meaningful as explanations~\cite{jain:19}. Similarly, other work has shown that it is possible to systematically manipulate attention while still retaining the same prediction~\cite{pruthi:20}. Another popular white-box technique for generating local explanations is \emph{integrated gradients}~\cite{sundararajan:17}. The high-level idea behind this method is to create interpolations of the input and evaluate the model on those interpolated inputs. Unfortunately, unlike image data, code does not lend itself easily to such interpolation. For instance, there is no meaningful token that can be obtained by combining the embedding of a pad token and that of a keyword token.
In fact, we initially investigated using integrated gradients for generating local explanations for models of code, but we were not successful in generating useful explanations. This is, presumably, because unlike images, there is no natural interpolation between a zero token and the token in the input text. \paragraph{Perturbation-based explanation mechanisms} Another common approach for generating local explanations is through perturbation-based mechanisms such as LIME~\cite{lime} and SHAP~\cite{shap}. These techniques remove or replace a subset of the features in the input space and track the score differential of the model's prediction. The aggregated score difference over many samples is then attributed to the features involved in the perturbation. While these techniques can be applied to multiple types of ML models, they do not generate counterfactual explanations. Instead, they highlight the input features that are most important for the model's prediction. In the realm of understanding source code models, recent work uses delta-debugging techniques to reduce a program to a minimal set of statements that still preserves the initial model prediction~\cite{simplification, suneja:21}. The intuition here is that the remaining statements are essentially the important signal being picked up by the model. Another effort empirically shows that attention scores in neural networks are highly correlated with code perturbations (statement removals in the source code input)~\cite{autofocus}. However, these works have not investigated (or considered) the effects of token removal that may lead to out-of-distribution inputs. These in turn can lead to unreliable prediction outcomes~\cite{datasetshift} and misleading attention scores~\cite{jain:19}. \paragraph{Counterfactual explanations} Another common approach to local interpretability is through the generation of counterfactuals.
These techniques are closely related to the previously discussed perturbation-based mechanisms, but they come with a stronger guarantee, namely that the perturbations are \emph{guaranteed} to change the model's prediction. The generation of counterfactual explanations has received significant attention in the NLP community~\cite{martensprovost,polyjuice,explain-nlp,generate,generative}. In the simplest case, these counterfactuals can be generated by deleting words from the input text~\cite{martensprovost} or via rewrite rules such as adding negations or shuffling words~\cite{polyjuice}. Similar to our goal of generating natural-looking programs, these techniques also aim to generate \emph{fluent} text that is grammatically correct. Among the counterfactual generation techniques, perhaps the most relevant to ours is the \emph{minimal contrastive editing (MiCE)} technique of~\cite{explain-nlp}. Specifically, they train an \emph{editor} to predict words to edit and use a generative model (called the \emph{predictor}) to predict replacements for the chosen words. \paragraph{Interpretability of SE models.} There has also been recent interest in improving the interpretability of models used in software engineering~\cite{simplification,md,autofocus,probing}. Two of these efforts~\cite{simplification,probing} propose to simplify the code while retaining the model prediction. Another effort, called AutoFocus~\cite{autofocus}, aims to rate and visualize the relative importance of different code elements by using a combination of attention layers in the neural network and deleting statements in the program. Another recent effort~\cite{md} aims for global interpretability and helps model developers by identifying which types of inputs the model performs poorly on. We believe that the counterfactual explanation generation technique proposed in this paper complements all of these efforts on improving SE model interpretability.
\section{Introduction} \label{sec:intro} Looking into the history of computer systems, computation has been migrating between terminal clients and central servers, leading to different designs: ``fat client and thin server'' or ``thin client and fat server''. The shifts of this trend are mostly driven by changes in usage patterns, advances in hardware and software technologies, and even new business models. Nowadays, both clients and servers are rather fat in terms of their processing power and storage capacity, but the fact is that they still fail to keep pace with the ever-growing demands of end users. Meanwhile, network devices have been quickly evolving and growing their capabilities \cite{telefonica:nfv, telefonica:cartablanco, telefonica:unica}. Quite different from a decade ago, these powerful middle boxes are no longer the simple network devices that used to only know how to forward packets. They have complicated structures, highly optimised algorithms, and powerful processing and storage capabilities comparable even to end devices. Since these network devices are mostly underutilised, there is an obvious trend that more and more data and computation are migrating into networks. Such migration has been accelerated by the following facts in both directions, namely from clouds to networks and from end-user devices to networks. First, many popular Internet services are cloud-based and often rely on a persistent and stable connection for access. However, both connectivity and latency pose significant challenges to quality of service, especially in a challenged environment. To improve service availability and reduce latency, big content providers often resort to content-distribution networks (CDN) or deploy their own datacenters co-located with ISP networks. Second, the emergence of User Generated Content (UGC) has further triggered another dramatic shift in usage patterns on the Internet.
Huge amounts of content are constantly generated and consumed on mobile devices. Processing and storing such overwhelming information, combined with users' increasing on-line activities, gives birth to various mobile applications, most of which require a significant amount of computation on users' devices. Given current battery technology, mobile devices are severely energy constrained. Much prior work proposed to offload computation-intensive tasks into a network to extend battery life \cite{6195845, Cuervo:2010:MMS:1814433.1814441}. Third, even for ISPs themselves, their own network services have started migrating from specialised servers to their networks with the adoption of the NFV (Network Function Virtualisation) paradigm. For example, Telefonica is shifting 30\% of their infrastructure to NFV technologies by the end of 2016~\cite{telefonica:nfv, telefonica:cartablanco, telefonica:unica}. Other providers such as AT\&T~\cite{att:nfv}, Vodafone~\cite{ericsson:nfv}, NTT Docomo~\cite{nttdocomo:nfv} and China Mobile~\cite{chinamobile:nfv} are following similar strategies. ISPs' networks, especially those at the edges, have transformed into an ideal place for both storing data and performing computation, which collectively provide \textit{services} to their users. Following previous information-centric networking (ICN) proposals \cite{jacobson:ccn, Dannewitz:2013:NII:2459510.2459643, 10033435}, service-enabled ICN designs \cite{Nordstrom:2012:SES:2228298.2228308, 6882684, Sathiaseelan:2015:SSC:2753488.2753490, freedman2010service, Braun:2013:SNE:2480362.2480475} have clearly started gaining research interest in the community. Because service execution consumes multiple resources on a router, and in particular demands CPU cycles for computation-intensive tasks, it introduces a new type of ``congestion'' in a network which we refer to as \textit{``computation congestion''}.
Different from conventional traffic congestion, which is avoided through the cooperation of both communication ends, in-network services do not necessarily impose a point-to-point paradigm. Also different from the classic load-balancing problem in the cloud, which often has a regular structure (i.e., regular network topology, central coordination, homogeneous configurations, uniform demands, etc.), the situation in an ISP network is more complicated: 1) the underlying topology is not regular; 2) the node configurations can be heterogeneous; 3) the demand distribution is highly skewed, hence the resources in a neighbourhood need to be well utilised; 4) central coordination is often expensive and reduces the responsiveness of a node. The emerging ephemeral in-network services call for a thorough investigation of ``computation congestion control'' in order to effectively distribute service load within a neighbourhood. In this paper, we study two basic control strategies and propose a fully distributed algorithm called C3PO (Computation Congestion Control PrOactive) built atop the proactive control strategy. Our preliminary evaluations with various realistic settings show that the proposed algorithm has low complexity, exploits neighbourhood resources well, and is very responsive to dynamic workloads. \section{Related Work} \label{sec:related} ICN is a clean-slate redesign of the current Internet to build network infrastructure around content. It abandons the classic point-to-point communication paradigm and applies two basic design principles in its architecture: 1) accessing content by name and 2) universal caching. Originally, the notion of information in prior ICN proposals~\cite{jacobson:ccn, Dannewitz:2013:NII:2459510.2459643, 10033435} only referred to static content. As cloud computing and virtualisation technologies become mature enough, more computation is pushed towards edge networks.
The definition of information has therefore naturally been extended to include both computation and data, which is also referred to as services in most recent work \cite{Nordstrom:2012:SES:2228298.2228308, 6882684, Sathiaseelan:2015:SSC:2753488.2753490, freedman2010service, Braun:2013:SNE:2480362.2480475}. Such service-enabled ICN systems can be considered an inevitable evolution of the ICN paradigm in order to catch up with the growing demands from edge networks and improve quality of service. Since service execution consumes different resources, both computation and traffic congestion can potentially happen in a network. Traditional congestion control targets traffic load. The solutions usually either try to reduce the transmission rate or take advantage of multiple paths \cite{Han:2006:MTJ:1217687.1217696, 662909}. In practice, all the solutions rely on the cooperation of both ends of a transmission. In the ICN context, congestion needs to be controlled in a hop-by-hop fashion and can be ameliorated by caching to some extent \cite{wangeffects, wong:globecom2012}. Load balancing, scheduling, and resource management are classic problems in high-performance computing (HPC) clusters. The topic has gained a lot of attention recently due to the popularity of cloud computing, virtualisation, and big-data frameworks. Fully centralised control \cite{Hindman:2011:MPF:1972457.1972488, Schwarzkopf:2013:OFS:2465351.2465386} is a popular solution at the moment, and control theory has been shown to be an effective tool to dynamically allocate resources \cite{Kalyvianaki:2014:ARP:2642710.2626290}. As mentioned, there are distinctive differences between a cloud environment and an ISP edge network regarding stability, homogeneity of configuration, regularity of topology, etc. Most cloud jobs execute for longer periods and often access a lot of data, hence they can tolerate long scheduling delays.
The maturity of virtualisation technologies (e.g., Xen, Linux containers, unikernels \cite{Barham:2003:XAV:1165389.945462, Soltesz:2007:COS:1272998.1273025, Madhavapeddy:2013:URV:2557963.2566628}) combined with edge computing will undoubtedly lead us to an era where countless services reside in an ISP network, dynamically created and destroyed on demand to perform various tasks. In such a context, previous highly centralised solutions designed for cloud-based services will fail to scale and to provide responsive control over such high-volume and asymmetrically distributed demands. To the best of our knowledge, very little work has been done to address this challenge. In this paper, we focus on these ephemeral and computation-intensive services and pursue a low-complexity, distributed, self-adaptive, and responsive solution. \section{Proposed Solution} \label{sec:sol} We start this section with two fundamental control strategies, followed by a basic workload analysis of a service router, based on which we propose a proactive strategy to avoid computation congestion. Then we present the actual algorithm (C3PO) with implementation details. \subsection{Two Basic Strategies} \label{sec:strategy} Service execution consumes both CPU and memory, as well as other resources such as bandwidth. Herein we focus on the first two, since they are usually the most dominant resources. The goal of load balancing is achieved by strategically dropping or forwarding computational tasks to other nodes to avoid being overloaded. However, instead of distributing load uniformly over all available nodes, a service should preferably be executed as close to a client as possible to minimise the induced latency.
Centralised coordination is not ideal in a practical deployment (especially outside datacenters) for obvious reasons: 1) a central solver needs global knowledge of all the nodes in a network; 2) the optimal strategy needs to be recalculated periodically given the dynamic nature of a network and its traffic; 3) there is a single point of failure; 4) there might be only marginal improvement over a smartly designed heuristic. Therefore, we study two basic strategies in this paper. \begin{itemize} \item \textbf{Passive Control}: with this strategy, a node tries to execute as many services as possible before being overloaded. Whenever a service request arrives, it is executed by default given enough resources. If the node is overloaded, the request is passed to the next hop along the path to a server, or dropped if the current node is already the last-hop node in the ISP network. \item \textbf{Proactive Control}: with this strategy, a node executes services conservatively to avoid being overloaded. To do so, a node estimates the request arrival rate, with which it can further estimate the potential resource consumption. If the estimate shows that the node may be overloaded, it executes only some of the requests and forwards the rest to the next-hop neighbour with the lightest load. This strategy requires exchanging state information within a node's one-hop neighbourhood. \end{itemize} Because of its simple logic, the passive strategy has a very straightforward implementation. Clients benefit from minimised service latency when no nodes are overloaded, since a service gets executed immediately at an edge router. For the proactive strategy, the implementation relies on how the estimate is made, which we detail in the following. Despite being conservative, we still aim to keep the latency low. \subsection{Workload Analysis} \label{sec:workload} A node $n$ receives service requests either from directly connected clients or from neighbours.
We assume that a node $n$ has CPU capacity $c'$ and memory capacity $m'$. For a specific service $f_j$, we denote its average CPU and memory consumption by $c_j$ and $m_j$ respectively. In practice, both can be easily measured by tracking a service execution. We also assume the execution time of $f_j$ follows an exponential distribution with mean value $t_j$. The requests for service $f_j$ can be viewed as a Poisson process with arrival rate $\lambda_j$. We can easily recognise that the process is a typical \textit{birth-death process}. Because the joint process of multiple Poisson processes is also Poisson, the aggregated requests of all services form another well-defined birth-death process with birth rate $\lambda = \sum_{\forall j} \lambda_j$ and death rate $\mu = \sum_{\forall j} \frac{1}{t_j}$. We herein focus on this aggregate request stream. In order to calculate the average workload, for any given time, we need to estimate the average number of simultaneously running services on node $n$, denoted by $l$. This is equivalent to calculating the average queue length in a simple $M/M/1$ queueing system, where the clients in the queue represent the services running concurrently on a node, by applying a multiprogramming model. Herein we consider a stable system where $\lambda < \mu$, which prevents the queue from growing infinitely long and overloading the node. We will show later how the proactive strategy is able to keep the system stable. We have assumed that one CPU is allocated for service execution, hence we use the $M/M/1$ model in this paper to simplify the discussion. However, the analysis can easily be extended to the $M/M/C$ model to analyse a multi-core system. Let $p_j$ denote the normalised popularity of $f_j$ derived from all the requests observed by $n$; then $p_j = \frac{\lambda_j}{\lambda}$ and note that $\sum_{\forall j} p_j = 1$ by definition.
The average CPU consumption is $c'' = \sum_{\forall j} p_j \times c_j$ and the average memory consumption is $m'' = \sum_{\forall j} p_j \times m_j$. If we let $\rho = \frac{\lambda}{\mu}$ (i.e., the utilisation rate), then we have $l = \frac{\rho}{1 - \rho}$ by applying a stationary analysis to the $M/M/1$ model. Therefore we can calculate the overall workload induced by executing services in a straightforward way: namely $l \times c''$ for CPU load and $l \times m''$ for memory load. \subsection{Probabilistic Execution} \label{sec:prob} To avoid overloading a node, we need to make sure the workload is less than $n$'s actual capacity. As we have shown, the workload is directly controlled by the queue length $l$, which can be further tuned by probabilistically selecting some requests in a stream to execute and forwarding the rest to the next hop. For each service request, node $n$ executes it with probability $q$, decided by drawing a number uniformly from $[0,1]$ and executing the request if the draw is below $q$. According to basic queueing theory, the resulting sub-process forms another well-defined birth-death process, with a new birth rate $q \times \lambda$ and the same death rate $\mu$. Therefore the new sub-process has a utilisation rate equal to $q \times \rho$. To calculate $q$, we can simply perform the following derivation by requiring the induced load (e.g., for CPU) $l \times c''$ to be less than the capacity $c'$. \begin{align} l \times c'' < c' & \Longrightarrow \frac{q \times \rho}{1 - q \times \rho} \times c'' < c' \\ & \Longrightarrow \rho \times q < \frac{c'}{c' + c''} \\ & \Longrightarrow q < \frac{c'}{c' + c''} \times \frac{\mu}{\lambda} \end{align} The formula has a very intuitive explanation: if services can be executed faster on average (i.e., higher death rate $\mu$), node $n$ increases $q$ in order to serve more requests by maintaining a longer queue; otherwise $n$ decreases $q$ to reduce the queue length.
If requests arrive faster (i.e., higher birth rate $\lambda$), the node also decreases $q$ to keep the number of simultaneously running services low. Similarly, we can perform the same calculation for the memory constraint $m'$. Eventually, we set $q$ with the following formula. \begin{align} & q = \min\{ \min\{ \frac{c'}{c' + c''}, \frac{m'}{m' + m''} \} \times \frac{\mu}{\lambda}, 1\} \label{eq:0} \end{align} The formula above essentially indicates that the final $q$ is decided by the first bottleneck in the system, either CPU or memory in our case. Also, $q$ is capped at $1$, indicating that an underutilised system will simply accept all the requests. \subsection{Proactive Control} \label{sec:congestion} We present an implementation of proactive control in Algorithm~\ref{algo:1}, namely \textit{C3PO} -- Computation Congestion Control (PrOactive). The algorithm consists of two major functions: \textbf{on\_arrival($\cdot$)} (lines 1--10) is called whenever a service request arrives, and \textbf{on\_complete($\cdot$)} (lines 12--21) is called whenever a service execution completes. The notations used in the algorithm follow the same definitions as those in the previous text. By keeping track of the CPU usage $c''$, memory usage $m''$, execution rate $\mu$, and request arrival rate $\lambda$, the previous analysis shows how to control the workload by tuning the execution probability $q$. However, maintaining a complete history of these statistics can be very expensive. In the actual implementation, we use four circular buffers of size $k$: 1) buf$_\lambda$ for the timestamps of the most recently arrived requests; 2) buf$_\mu$ for the execution times of the most recently finished services; 3) buf$_{c''}$ and 4) buf$_{m''}$ for the CPU and memory usage of the most recently finished services.
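The buffer bookkeeping and the capped execution probability (the bottleneck ratio scaled by $\mu/\lambda$, capped at 1) can be sketched as follows. This is an illustrative Python sketch under assumed parameter semantics, not our implementation; in particular, plain arithmetic means over the buffers stand in for the smoothed updates used in the algorithm.

```python
import random
from collections import deque

class ProactiveNode:
    """Illustrative sketch of proactive probabilistic execution on one node.
    cpu_cap/mem_cap are c'/m'; the deques play the role of the four
    circular buffers of size k (deque with maxlen discards the oldest)."""

    def __init__(self, cpu_cap, mem_cap, k=32):
        self.cpu_cap, self.mem_cap = cpu_cap, mem_cap
        self.arrivals = deque(maxlen=k)    # buf_lambda: recent request timestamps
        self.exec_times = deque(maxlen=k)  # buf_mu: recent execution times
        self.cpu_use = deque(maxlen=k)     # buf_c'': recent per-service CPU usage
        self.mem_use = deque(maxlen=k)     # buf_m'': recent per-service memory usage

    def _mean_rate(self):
        # Arrival rate = (number of intervals) / (time spanned by the buffer).
        if len(self.arrivals) < 2:
            return 0.0
        span = self.arrivals[-1] - self.arrivals[0]
        return (len(self.arrivals) - 1) / span if span > 0 else 0.0

    def exec_probability(self):
        lam = self._mean_rate()
        if lam == 0 or not self.exec_times:
            return 1.0  # underutilised / no data yet: accept everything
        mu = 1.0 / (sum(self.exec_times) / len(self.exec_times))
        c2 = sum(self.cpu_use) / len(self.cpu_use)   # average c''
        m2 = sum(self.mem_use) / len(self.mem_use)   # average m''
        bottleneck = min(self.cpu_cap / (self.cpu_cap + c2),
                         self.mem_cap / (self.mem_cap + m2))
        return min(bottleneck * mu / lam, 1.0)       # q capped at 1

    def on_arrival(self, timestamp):
        """Return True to execute locally, False to forward to a neighbour."""
        self.arrivals.append(timestamp)
        return random.random() < self.exec_probability()
```

For example, with capacities $c' = m' = 1$, a mean execution time of $0.1$ (so $\mu = 10$), average consumptions $c'' = 0.5$ and $m'' = 0.25$, and an arrival rate of $20$, the CPU is the bottleneck and $q = \frac{1}{1.5} \times \frac{10}{20} = \frac{1}{3}$.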
\begin{algorithm}[!tb] \caption{C3PO - A Distributed Proactive Computation Congestion Control for In-Network Services} \label{algo:1} \begin{algorithmic}[1] \STATE{void \textbf{on\_arrival} (request $r$):} \STATE{\quad buf$_\lambda$[$i$] $\leftarrow$ timestamp ($r$)} \STATE{\quad $\lambda \leftarrow$ mean\_rate (buf$_\lambda$)} \STATE{\quad $\Delta \lambda \leftarrow$ max$(0, \lambda - \lambda')$} \STATE{\quad $\lambda \leftarrow \lambda + \Delta \lambda$} \STATE{\quad $q \leftarrow$ \textbf{eq.\ref{eq:0}} ($\lambda, \mu, c', c'', m', m''$)} \STATE{\quad \textbf{if} draw\_uniform ([0,1]) $ < q$ \textbf{then} execute ($r$)} \STATE{\quad \textbf{else} forward\_to\_lightest\_load\_node ($r$)} \STATE{\quad $i \leftarrow (i+1)$ mod $k$} \STATE{\quad \textbf{if} $i == 0$ \textbf{then} $\lambda' \leftarrow 0.5 \times $($\lambda' + \lambda - \Delta \lambda$)} \STATE \STATE{void \textbf{on\_complete} (service $s$):} \STATE{\quad buf$_\mu$[$i$] $\leftarrow$ execution\_time ($s$)} \STATE{\quad buf$_{c''}$[$i$] $\leftarrow$ cpu\_consumption ($s$)} \STATE{\quad buf$_{m''}$[$i$] $\leftarrow$ memory\_consumption ($s$)} \STATE{\quad $i \leftarrow (i+1)$ mod $k$} \STATE{\quad \textbf{if} $i == 0$ \textbf{then}} \STATE{\quad \quad $\mu \leftarrow 0.5 \times $($\mu +$ mean(buf$_{\mu}$)$^{-1}$)} \STATE{\quad \quad $c'' \leftarrow 0.5 \times $($c'' +$ mean (buf$_{c''}$))} \STATE{\quad \quad $m'' \leftarrow 0.5 \times $($m'' +$ mean (buf$_{m''}$))} \STATE{\quad forward\_result ($s$)} \end{algorithmic} \end{algorithm} With these four circular buffers, we can calculate the recent values of the parameters in eq.~\ref{eq:0}. We use fixed-size buffers instead of a fixed time window to prevent the memory usage of Algorithm~\ref{algo:1} from depending on the service arrival/completion rate. The parameter $k$ represents a trade-off between stability and responsiveness.
Larger $k$ leads to more stable estimates, whereas smaller $k$ makes the strategy more responsive to changes in the two metrics (i.e., $\lambda$ and $\mu$). Lines 2--6 calculate the execution probability $q$. The algorithm also maintains a variable $\lambda'$ for the average arrival rate of the previous $k$ requests, so that we can calculate the variation in $\lambda$ as $\Delta \lambda = \lambda - \lambda'$. Lines 4 and 5 are worth emphasising: when $\Delta \lambda > 0$, indicating an increase in the request arrival rate, C3PO enters a conservative mode. In conservative mode, C3PO updates $q$ at line 6 by plugging $(\lambda + \Delta \lambda)$ into eq.~\ref{eq:0} as the arrival rate rather than the original $\lambda$. In this way, C3PO ``pessimistically'' estimates that the arrival rate will keep increasing at the same rate $\Delta \lambda$ in the near future. If $\Delta \lambda \leq 0$, C3PO operates in normal mode. In some sense, ``being proactive'' is achieved by ``being conservative'' when noticing a potential increase in resource consumption. Although $\lambda$ needs to be calculated at every request arrival (line 3), we can optimise the performance by using another variable $x$ to keep track of the running sum of the arrival intervals. If we further let $y \leftarrow \text{buf}_\lambda[(i+1) \text{ mod } k] - \text{buf}_\lambda[i]$ and $z \leftarrow \text{timestamp}(r) - \text{buf}_\lambda[(i-1) \text{ mod } k]$ before performing line 2, then the mean rate can be calculated as $\lambda \leftarrow (k - 1) / (x - y + z)$. Because $x$, $y$, and $z$ can all be updated in $\mathcal{O}(1)$, this reduces the complexity of the ``mean\_rate'' function from $\mathcal{O}(k)$ to $\mathcal{O}(1)$ by avoiding traversing all the timestamps in buf$_\lambda$. Parameters other than $\lambda$ are updated only periodically in both functions (lines 10, 18--20). We apply an ARMA (AutoRegressive Moving Average) model with an exponential mean when updating these parameters.
Both the history and the recent measurement are given equal weight 0.5. To keep the code short and easy to understand, we did not perform further optimisations in Algorithm \ref{algo:1}. \section{Preliminary Evaluation} \label{sec:eval} In our evaluations, we study how the different strategies impact load distribution as well as latency, drop rate, and responsiveness to jitter. We test three strategies (None, Passive, and Proactive) on both synthetic and realistic topologies using the Icarus simulator \cite{icarus-simutools14}. In most simulations, we use a Poisson request stream with arrival rate $\lambda = 1000$; increasing the request rate means introducing more load into the network. All simulations are performed at least 50 times to guarantee that the reported results are representative. To simplify the presentation, we assume CPU is the first bottleneck in the system for computation-intensive services, and only present results for the Exodus network \cite{SpringN:Rocketfuel} in the following. \subsection{Exploiting the Neighbourhood} \label{sec:neighbour} \begin{figure} [!htp] \includegraphics[width=8.5cm]{cpuload_grid} \caption{An illustration of the different behaviours of Passive and Proactive control on a grid topology. A client connects to the router at $(0,0)$ while a server connects to the router at $(9,9)$. Proactive is more capable of utilising the nearby resources within its neighbourhood, leading to better load balancing and smaller latency. (Yellow indicates high load.)} \label{fig:1} \end{figure} Before evaluating on a realistic topology, Figure~\ref{fig:1} provides a basic example to illustrate the fundamental differences between the passive and proactive strategies. Understanding these differences will help us in analysing the results that follow. The experiment is performed on a $10 \times 10$ grid where each router connects to all its adjacent neighbours.
For passive control in the first row, since the server is deployed at the top-right corner, the load is distributed along the path towards the server as we increase the request rate from $0.25 \lambda$ to $\lambda$. For proactive control, the load is distributed quite differently: services are given high priority to be executed on nearby neighbours. This brings two immediate benefits. First, a network with proactive control is able to absorb more load; in comparison, with a workload of $3 \lambda$, a large number of requests are dropped by the router at (9,9) if passive control is used. Second, because services are likely to be executed on nearby routers, the induced latency tends to be shorter with proactive control. Especially when edge routers are overloaded, the distance between the execution point and the client grows much more slowly with proactive control than with passive control, as the figure shows. Moreover, being able to effectively exploit neighbourhood resources can significantly benefit QoS due to the strong temporal and spatial locality in usage patterns \cite{Wang:2015:PUS:2810156.2810162}. \subsection{Scalability to Workload} \label{sec:workload} Figure \ref{fig:2} shows the results of the three strategies (one per row) under three workloads (one per column) on the Exodus network. The average load of each node is normalised by its CPU capacity, and only the 50 most heavily loaded nodes are presented, in decreasing order. \begin{figure} [!htp] \includegraphics[width=8.5cm]{isp_exodus_load} \caption{Comparison of three control strategies (one per row) on the Exodus ISP network; the load is increased step by step in each column. The $x$-axis is the node index and the $y$-axis is the load. The top $50$ most heavily loaded nodes are presented in decreasing order.
Notations in the figure: $\tau$: average load; $\phi$: average latency (in $ms$); $\psi$: ratio of dropped requests.} \label{fig:2} \end{figure} Examining the first column, we see that all three strategies behave identically when the network is underutilised with a workload of $\lambda$: the most heavily loaded node uses only about 60\% of its total capacity. However, as we increase the load to $4 \lambda$ and $8 \lambda$, the three strategies exhibit quite different behaviours. For none control in the first row, the figures retain a similar shape. Since no load is distributed and a node simply drops all requests when overloaded, none control leads to a drop rate of over 54\% under a load of $8 \lambda$. For passive control in the second row, both the head and tail parts are fatter than for none control, indicating that more load is absorbed by the network and distributed across different routers. This can also be verified by checking the average load in the figure: under a load of $8 \lambda$, passive control increases the average load of the network from $0.2305$ to $0.3202$ compared to none control. However, over $36\%$ of requests are still dropped at the last-hop router. This can be explained by the well-known small-world effect, which makes the network diameter short in general, so that there are only limited resources along any random path. Among all the experiments, a network with proactive control always absorbs all the load, leading to the highest average load in the network, which in turn indicates the highest utilisation rate. As the workload increases from $\lambda$ to $8 \lambda$, the average load also increases by the same factor. One very distinct characteristic, easily noticed in the last two figures of the third row, is that the load distribution has a very heavy tail. This is attributed to the proactive strategy's capability of offloading services to its neighbours.
It is also worth pointing out that we only measured the latency of successfully executed services, which further explains why none control has the smallest latency: a service is executed immediately at an edge router connected to a client, but more than half of the requests are simply dropped and not counted at all. Compared to the passive strategy, the proactive strategy achieves shorter latency. Further investigation on other ISP topologies shows that this improvement in latency grows even larger on bigger networks. \subsection{Responsiveness to Jitters} \label{sec:jitter} \begin{figure} [!htp] \includegraphics[width=9cm]{jitter} \caption{Comparison of two control strategies using a simple line topology: client $\rightarrow$ router $n_1$ $\rightarrow$ router $n_2$ $\rightarrow$ server. Two jitters are injected at times 40 ms and 70 ms. The $x$-axis is time (ms) and the $y$-axis is normalised load. Red numbers represent the average load during a jitter period.} \label{fig:3} \end{figure} To study how a control strategy responds to a sudden increase in workload (a.k.a.\ jitter), we perform another experiment using a simple line topology: client $\rightarrow$ router $n_1$ $\rightarrow$ router $n_2$ $\rightarrow$ server. The client maintains a stable flow with request rate $\lambda$ and injects two 10-millisecond jitters (of rate $6\lambda$) at times 40 ms and 70 ms, respectively. The first two rows in figure \ref{fig:3} show the time series of the workload on the two routers under the passive strategy, namely PAS $n_1$ and PAS $n_2$. Similarly, the last two rows correspond to the two routers under proactive control, namely PRO $n_1$ and PRO $n_2$. The two right columns zoom in on the moments when each jitter begins (at 40 and 70 ms, respectively). For passive control, the first router PAS $n_1$ takes most of the load (i.e., 88\%) and exhibits consistent behaviour across both jitters.
However, the routers using proactive control show an interesting variation when handling the two jitters. For the first jitter, although router PRO $n_1$ successfully offloads 31.8\% of the load to PRO $n_2$, it also experiences high load for a period of 2 ms (i.e., 40--42 ms). After the first jitter, PRO $n_1$ enters a conservative mode; therefore, when the second jitter arrives, the load curve on PRO $n_1$ is much flatter and the load peak does not appear at all, since it proactively offloads more tasks to PRO $n_2$. As a result, PRO $n_2$ absorbs about 36.7\% of the load during the second jitter. Even after the two jitters, PRO $n_1$ remains in conservative mode until 130 ms, which explains the small amount of load continuously transferred to PRO $n_2$ during this period. After 130 ms, PRO $n_1$ returns to its normal mode. Technically, the mode shift occurs because all the jitter timestamps have been purged from the circular buffer buf$_\lambda$ by the constant request stream. By examining the second and third columns, we gain an even better understanding of what actually happens when a jitter arrives. For both jitters, proactive control responds faster than passive control, since the load curve on the second router starts rising earlier and faster. For the second jitter, proactive control responds even faster because it is already in conservative mode. With passive control, by contrast, PAS $n_2$ only starts taking load at 74 ms, 4 ms after the second jitter arrives at PAS $n_1$. To summarise, our evaluations have clearly shown that proactive control possesses the following attractive properties, which make it an ideal solution for balancing computation load in an ISP network: 1) it is fully distributed, requiring only loose cooperation with one-hop neighbours; 2) it effectively utilises resources in a neighbourhood; 3) it responds quickly to workload jitters. \section{Conclusion} \label{sec:conclusion} We studied and evaluated two control strategies in this paper.
Based on proactive control, we designed a fully distributed, low-complexity, and responsive load controller to avoid potential computation congestion when executing in-network services. Our preliminary results showed that the proposed solution, C3PO, can effectively take advantage of available resources in a neighbourhood to balance the service load and further reduce service latency and request drop rate. As future work, we plan to extend the current NDN platform \cite{jacobson:ccn} to implement C3PO, and to perform a more thorough evaluation after a realistic deployment in a production network. In addition, we assumed that the network has enough storage to host all the services. Although in practice a simple LRU algorithm can be used when a cache is full, a more careful investigation of how caching strategies impact the performance of service execution is needed. \bibliographystyle{IEEEtran}
\section{Introduction} Extremum seeking (ES) is a well-established technique for finding the extremal value of an unknown mapping \mbox{$F(\vec{x})$}. The quantity $\vec{x}$ corresponds to an input, which needs to be adjusted in such a way that it converges to a minimum of the objective function $F$. The map, including an adjustment policy of $\vec{x}$ such that it converges to the minimizer, is called an \textit{extremum seeking system}. The only information available to this system consists of measurements of the objective function, whereas its analytical expression is unknown. As a consequence, gradient-based techniques are not suitable for this kind of problem. The idea behind ES is to recover an approximate gradient on a slower time scale by exciting the system via periodic dither signals on a faster time scale. The slow movement of the system along the recovered gradient of $F$ is referred to as the \textit{learning dynamics} (LD). Former research on this topic focused mainly on the stability analysis of such ES schemes, whereas the quantification of this recovered gradient received little attention. The first local stability proof of a special ES scheme was given in \cite{Krstic00}, and later extended to non-local stability by \cite{Tan06}, where it was also noted that ES approximates the gradient of $F$. A novel view on ES systems was introduced by \cite{Duerr13,Duerr17}, who also showed stability for their setup. Common to these results is that they establish so-called practical stability of the feedback scheme with respect to its parameters. This notion implies that, for convex objective functions, there exists for any set of initial conditions a choice of parameters of the ES controller such that the ES system converges to an arbitrarily small neighborhood of a minimizer. As mentioned above, these results are able to explain convergence to the minimizer when considering convex functions.
However, global convergence in the presence of local extrema is a topic that has rarely been addressed in former research. Both \cite{Tan09} and \cite{Nesic13} presented adapted ES schemes which, under some additional (and possibly hard to verify) assumptions, achieve practical global stability with respect to their parameters, despite the local extrema. However, it has been observed that standard ES schemes also achieve global optimization for certain parameter choices \cite{Tan06}. For a setup as used in \cite{Duerr13}, the LD pass through a local minimum if the frequency $\omega$ of the dither signals is chosen rather low. In \cite{Michalowsky16}, an explicit description of the recovered gradient was given for a particular scalar system within this setup, treated with needle-shaped dithers. This result gave rise to the interpretation that the LD can be described by a gradient descent on a function other than $F$, which is parametrized by $\omega$ and will be called $L_\omega$. From this emerges the following question regarding the observation mentioned above: does $L_\omega$ become globally convex for certain parameter choices $\omega$, although $F$ is a non-convex function, such that the LD converge to the global minimizer? The main contribution of this paper is to give an explicit description of the recovered gradient, and thus the LD, for general ES systems, valid for any parameter choice. This result contributes to the existing theory in three different ways. First, the paper extends the results of \cite{Michalowsky16} to the use of general dithers and vector fields as considered in \cite{Grushkovskaya17}, and to multidimensional ES. We show that for all of these extensions, the LD approximately move along an \textit{averaged} gradient of $F$, as is the case for the needle-shaped inputs considered in \cite{Michalowsky16}. This is equivalent to viewing the recovered gradient as an averaged gradient of $F$.
Second, we extend the theory of ES as introduced by \cite{Duerr13} by stating an explicit discrete-time recursion describing the LD of ES systems of this structure. Furthermore, our results explicitly state the learning descent direction with an uncertainty depending on $\omega$, whereas the existing theory did not give an interpretable estimate of this direction. Third, we provide a new perspective on the analysis of global convergence of ES with non-convex maps: we do not aim to prove practical global stability of a new algorithm under possibly restrictive assumptions; instead, the explicit quantification of the LD allows us to analyze the global LD of such systems for any given parameters, and to examine when and why the LD follow the gradient of a convex function despite $F$ being non-convex. The paper is organized as follows. In Section \ref{methods}, we introduce some notation and the necessary background on state-transition matrices and variational calculus. Section \ref{mainresults} contains the problem statement and the main theoretical results, while a simulative verification of these statements is given in Section \ref{simulations}. Our work is summarized in Section \ref{conclusion}. This article is an extended version of \cite{Wildhagen18} and augments its contents by an introduction to variational calculus for multidimensional systems, a characterization of the recovered gradient for the multidimensional case, and an additional numerical example. \section{Preliminaries and Methods} \label{methods} \subsection{Notation} We denote by $\mathbb{R}$ the set of real numbers and by $\mathbb{N}_+$ the set of positive natural numbers other than $0$. Let $\mathcal{C}^p$ denote the set of $p$ times continuously differentiable functions. We denote by $\nabla f = \begin{bmatrix} \frac{\partial f}{\partial x_1}& \dots & \frac{\partial f}{\partial x_n}\end{bmatrix}^\top$ the gradient of a function $f:\mathbb{R}^n\rightarrow\mathbb{R}, \; f\in \mathcal{C}^1$.
We consider the Landau notation, i.e., for $f,g:\mathbb{R}^n\rightarrow\mathbb{R}$ write $f(\vec{x})=\mathcal{O}(g(\vec{x}))$, meaning that there exist some $M>0$ and $\delta>0$ such that $||f(\vec{x})||\le M||g(\vec{x})||$ for all $||\vec{x}||\le \delta$. Note that when \mbox{$f_1=\mathcal{O}(g)$} and $f_2=\mathcal{O}(g)$ with $M,\delta$, it also holds that \mbox{$f_1+f_2=\mathcal{O}(2g)$} with $M,\delta$, because of the triangle inequality. We denote $\vec{0}_i$ a vector in $\mathbb{R}^i$ where each entry is $0$, $\vec{I}$ the unit matrix, $a_i$ the $i$-th entry of a vector $\vec{a}$, and $B_i$ the $i$-th diagonal entry of a square matrix $\vec{B}$. \subsection{State-Transition Matrix} For a linear, time-varying system $\dot{\vec{x}}(t)=\vec{A}(t)\vec{x}(t)$, \mbox{$\vec{x}(t)\in\mathbb{R}^n$}, the state-transition matrix (STM) relates the solutions at different time points $\vec{x}(t)=\vec{\Phi}(t,t_0)\vec{x}(t_0)$. If $n=1$ and $A$ is locally integrable, it is defined by \begin{equation} \Phi(t,t_0) = \exp\left(\int_{t_0}^{t} A(\tau) \mathrm{d}\tau\right). \end{equation} Important properties are the so-called semi-group property $\vec{\Phi}(t,t_0)=\vec{\Phi}(t,t_1)\vec{\Phi}(t_1,t_0)$ and the differentiation property $\frac{\mathrm{d}}{\mathrm{d} t_0}\vec{\Phi}(t,t_0) = -\vec{\Phi}(t,t_0)\vec{A}(t_0)$. \subsection{Variational Calculus}\label{VariationalCalculus} This section presents well-established results on the effect of so-called needle variations on the trajectories of dynamical systems, adapted to our argumentations. An exhaustive treatment of needle variations can be found in \cite{Liberzon11}. \subsubsection{General Perturbation} Consider the nonlinear system \begin{equation} \dot{\vec{x}}(t) = g_1(\vec{x}(t)) \vec{u}_1(t) + g_2(\vec{x}(t)) \vec{u}_2(t) \label{system_var} \end{equation} with $\vec{x}(t)\in\mathbb{R}^n$, $g_{1},g_{2}:\mathbb{R}^n \rightarrow \mathbb{R}$ and $\vec{u}_{1},\vec{u}_{2}:\mathbb{R} \rightarrow \mathbb{R}^n$. 
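As a quick numerical illustration of these two STM properties, the scalar case can be checked directly by approximating the integral with quadrature. This is a self-contained sketch; the choice $A(t)=\cos t$ is an illustrative assumption of ours.

```python
import math

def Phi(t, t0, A=math.cos, n=10000):
    """Scalar state-transition matrix Phi(t, t0) = exp(int_{t0}^t A(tau) dtau),
    with the integral approximated by the composite trapezoidal rule.
    A(t) = cos(t) is an arbitrary illustrative choice."""
    h = (t - t0) / n
    s = 0.5 * (A(t0) + A(t))
    for i in range(1, n):
        s += A(t0 + i * h)
    return math.exp(s * h)

# Semi-group property: Phi(t, t0) = Phi(t, t1) * Phi(t1, t0)
lhs = Phi(2.0, 0.0)
rhs = Phi(2.0, 1.0) * Phi(1.0, 0.0)

# Differentiation property: d/dt0 Phi(t, t0) = -Phi(t, t0) * A(t0),
# checked here by a central finite difference in t0
t, t0, eps = 2.0, 0.5, 1e-5
fd = (Phi(t, t0 + eps) - Phi(t, t0 - eps)) / (2 * eps)
an = -Phi(t, t0) * math.cos(t0)
```

For $A(t)=\cos t$ the exact value is $\Phi(t,t_0)=\exp(\sin t-\sin t_0)$, so both properties can be verified against a closed form.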
Suppose that $g_{1},g_{2} \in \mathcal{C}^1$ and $\vec{u}_{1},\vec{u}_{2}$ are piecewise continuous, such that local existence and uniqueness of the solution of \eqref{system_var} is guaranteed. Denote $\vec{x}^*(t)$ the solution of \eqref{system_var}, when some nominal input trajectory $\vec{u}_1(t)=\vec{u}_1^*(t)$, and $\vec{u}_2(t)=\vec{u}_2^*(t)$ is applied to the system. We call $\vec{x}^*(t)$ the nominal solution of \eqref{system_var}. Next, we study the effects on the solution of \eqref{system_var}, when the nominal inputs are perturbed by a so-called needle or Pontryagin-McShane variation. We consider a perturbation in $\vec{u}_2$ only, such that the perturbed input is defined by \begin{equation} \vec{u}_1(t) = \vec{u}_1^*(t), \quad \vec{u}_2(t) = \begin{cases} \vec{u}_2^*(t) & t \notin [\bar{t},\bar{t}+\epsilon] \\ \vec{\alpha} & t \in [\bar{t},\bar{t}+\epsilon] \end{cases}, \label{input_pert} \end{equation} where $\vec{\alpha}= [ \alpha_1,\ldots,\alpha_n ]^\top \in \mathbb{R}^n$ and $\bar{t},\epsilon>0$. \begin{figure} \centering \input{Abbildungen/needleVariation.tex} \caption{The needle variation (left) and the nominal and perturbed trajectories (right).} \label{needleVar} \end{figure} As illustrated in Fig. \ref{needleVar}, the nominal input is perturbed over an interval of length $\epsilon$, starting at $\bar{t}$, and held on some constant value $\vec{\alpha}$ in this time period. The solution of \eqref{system_var}, when the perturbed input \eqref{input_pert} is applied, will be denoted by $\vec{x}(t)$. As Fig. \ref{needleVar} suggests, $\vec{x}(t)$ will deviate from $\vec{x}^*(t)$ on $[\bar{t},\bar{t}+\epsilon]$, and then ``run parallel'' to the nominal solution in the following. By several Taylor expansions (see \cite{Liberzon11} for details), one obtains that the perturbed trajectory evolves according to \begin{equation} \vec{x}(t) = \vec{x}^*(t) + \epsilon \vec{v}(t) + \mathcal{O}(\epsilon^2), \quad \forall t \ge \bar{t}+\epsilon. 
\label{perturbed_sol} \end{equation} The quantity $\vec{v}(t) \in \mathbb{R}^n$ is the so-called variational variable, which evolves according to the variational equation \begin{equation} \dot{\vec{v}}(t) = \underbrace{ \vec{u}_1^*(t) \nabla g_1(\vec{x}^*(t))^\top}_{\eqqcolon \vec{A}(\vec{x}^*(t),t)} \label{vareq_var} \vec{v}(t), \quad \forall t \ge \bar{t}+\epsilon, \end{equation} and when $\vec{u}_2^*(\bar{t}+\epsilon)=\vec{0}$, has initial condition \begin{equation} \vec{v}(\bar{t}+\epsilon) = g_{2}(\vec{x}^*(\bar{t}+\epsilon))\vec{\alpha}. \label{var_IC} \end{equation} Note that the variational equation \eqref{vareq_var} is equivalent to a linearization of \eqref{system_var} around $\vec{x}^*(t)$, $\vec{u}_1^*(t)$, and $\vec{u}_2^*(t)$. \subsubsection{Perturbation and Nominal Input in Single Dimension} \label{SpecPertInp} Consider the perturbed input \eqref{input_pert} with \begin{equation} \vec{\alpha} = [ \vec{0}_{\ell-1} , \alpha_{\ell} , \vec{0}_{n-\ell} ]^\top, \quad \ell \in \{1,\ldots,n\}, \end{equation} i.e., the perturbation acts only in the single dimension $\ell$. Then, the initial condition of the variational variable reads from \eqref{var_IC} \begin{equation} \vec{v}(\bar{t}+\epsilon) = [\vec{0}_{\ell-1} , g_{2}(\vec{x}^*(\bar{t}+\epsilon))\alpha_{\ell} , \vec{0}_{n-\ell}]^\top. \label{IC_sing} \end{equation} Additionally, let the nominal input $u_{1i}^*(t) = 0$ for all $i \neq \ell$ in some time period $t \in [\bar{t}+\epsilon,t_f]$. From the variational equation \eqref{vareq_var}, it follows that $\dot{v}_i(t)= 0$ for all $i\neq \ell$ for $t \in [\bar{t}+\epsilon,t_f]$, such that these components do not show any dynamic behavior. Furthermore, $v_i(\bar{t}+\epsilon)=0$ for all $i\neq \ell$ from \eqref{IC_sing}, such that it holds \begin{equation} v_i(t)=0 \quad \forall i\neq \ell, \quad t \in[\bar{t}+\epsilon,t_f]. 
\end{equation} As a result, they have no effect on the component $v_{\ell}(t)$ and we obtain \begin{align} \dot{v}_{\ell}(t) = u_{1\ell}^*(t) \frac{\partial g_{1}}{\partial x_\ell}(\vec{x}^*(t)) v_{\ell}(t), \quad t \in[\bar{t}+\epsilon,t_f]. \end{align} This will be of importance in the proof of Theorem \ref{TheoremSI}. \section{Main Results} \label{mainresults} \subsection{Scalar Extremum Seeking} Extremum seeking (ES) offers a systematic approach to address the optimization problem \begin{equation} \min F(x), \label{optprob} \end{equation} with $x\in\mathbb{R}$, the nonlinear map $F: \mathbb{R}\rightarrow\mathbb{R}$, $F\in \mathcal{C}^1$, without any gradient information being available. In the following, we consider the class of ES systems introduced by \cite{Duerr13} \begin{equation} \dot{x}(t) = g_1(F(x(t)))u_1(t) + g_2(F(x(t)))u_2(t), \; x(0) = x_0, \label{system} \end{equation} where $g_1,g_2: \mathbb{R}\rightarrow\mathbb{R}$, $g_1,g_2\in \mathcal{C}^1$. In \cite{Duerr17}, it was discussed that, apart from technical differences, the setups considered in \cite{Krstic00,Tan06} can be represented by this class as well. The dither functions $u_1,u_2: \mathbb{R}\rightarrow\mathbb{R}$ are assumed to be $T$-periodic with $T=\frac{2\pi}{\omega}$. Note that in the following, both $\omega$ and $T$ will be used, although they contain the same information. A typical approach is to choose the dithers \begin{equation} u_1(t) = \sqrt{\omega}\sin(\omega t), \quad u_2(t) = \sqrt{\omega}\cos(\omega t), \label{sincos} \end{equation} and the vector fields $g_1,g_2$ such that their Lie bracket \begin{equation} [g_1,g_2](F) = \frac{\partial g_2(F)}{\partial F}g_1(F)-\frac{\partial g_1(F)}{\partial F}g_2(F) \eqqcolon - g_0(F) \label{LieBr} \end{equation} is $1$. Then for $\omega\rightarrow\infty$, the trajectories of \eqref{system} follow those of the gradient flow system $\dot{\bar{x}}=-\frac{1}{2}\nabla F(\bar{x})$ arbitrarily close \cite{Duerr13,Grushkovskaya17}. 
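As a minimal numerical sketch of this behavior, the following simulates a scalar system of the form \eqref{system} with the trigonometric dithers \eqref{sincos} and the illustrative choice $g_1(F)=1$, $g_2(F)=F$, for which the Lie bracket \eqref{LieBr} equals $1$. The objective $F(x)=(x-1)^2$ and all parameter values are our assumptions, not taken from the paper. Sampling the state at period multiples $x(kT)$ exposes the learning dynamics, which drift towards the minimizer $x=1$ in line with the gradient-flow limit stated above.

```python
import math

def F(x):
    # Illustrative convex objective with minimizer x = 1
    return (x - 1.0) ** 2

def rhs(t, x, omega):
    # dx/dt = g1(F(x)) u1(t) + g2(F(x)) u2(t) with g1 = 1, g2 = F
    u1 = math.sqrt(omega) * math.sin(omega * t)
    u2 = math.sqrt(omega) * math.cos(omega * t)
    return 1.0 * u1 + F(x) * u2

def simulate(x0, omega, periods, steps_per_period=400):
    """Integrate the ES system with classical RK4 and return the state
    sampled at period multiples x(kT), i.e., the learning dynamics."""
    T = 2.0 * math.pi / omega
    dt = T / steps_per_period
    x, t, samples = x0, 0.0, [x0]
    for _ in range(periods):
        for _ in range(steps_per_period):
            k1 = rhs(t, x, omega)
            k2 = rhs(t + dt / 2, x + dt / 2 * k1, omega)
            k3 = rhs(t + dt / 2, x + dt / 2 * k2, omega)
            k4 = rhs(t + dt, x + dt * k3, omega)
            x += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += dt
        samples.append(x)
    return samples

ld = simulate(x0=2.5, omega=100.0, periods=80)
```

Here the sampled sequence `ld` approximately follows the averaged flow and ends in a small neighborhood of the minimizer; repeating the run with a lower $\omega$ enlarges this neighborhood, consistent with practical stability.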
As a result, $x$ converges to a solution of \eqref{optprob}. Of course, $\omega\rightarrow\infty$ can never be achieved in practice. However, also for finite $\omega$, \eqref{system} is observed to move on average along a recovered gradient of $F$. This movement is referred to as the learning dynamics (LD). Since the recovered gradient is not exact, the LD generally minimize a function other than $F$, called $L_\omega$. The LD can be carved out from $x(t)$ by neglecting the periodic oscillations induced by the dithers, i.e., by regarding only the system state at $T$-multiples $x(kT), \;k\in\mathbb{N}_+$. Then, the LD of \eqref{system} can be described as a gradient descent recursion on $L_\omega$ with fixed step size \begin{equation} x(kT) = x((k-1)T) + \nabla L_\omega (x((k-1)T)). \label{grad_descent} \end{equation} As detailed above, the LD have been found to move along $\nabla F$ for $\omega\rightarrow\infty$, whereas for finite $\omega$, the LD have not been explicitly quantified so far. The main purpose of this paper is thus to give an explicit description of the gradient descent direction $\nabla L_\omega$, valid for any $\omega$. Thereby, the LD, and the function $L_\omega$ that they effectively minimize, are characterized. \begin{figure}[h] \centering \input{Abbildungen/u1u21Dgen.tex} \caption{Trigonometric dithers $u_1(t)$ and $u_2(t)$ which comply with \textit{A1}-\textit{A3}. Note that the individual needles in the sampled dither $\bar{u}_2(t)$ form needle pairs of opposite sign (illustrated by matching colors).} \label{u1u2} \end{figure} We assume that the following holds for the dither functions: \begin{enumerate} \item[\textit{A1:}] $u_1,u_2$ are piecewise continuous and bounded. \item[\textit{A2:}] The function $u_1$ is point-symmetric to $(\frac{T}{2},0)$, i.e., it holds that $u_1(t)=-u_1(T-t)$ for all $t \in [0,T]$. \item[\textit{A3:}] For $u_2$ it holds that $u_2(t)=-u_2(\frac{T}{2}+t)$ for all $t \in [0,\frac{T}{2}]$.
\end{enumerate} \begin{remark} Note that \textit{A2} and \textit{A3} imply that $u_1$ and $u_2$ have zero mean on $[0,T]$. \end{remark} \begin{remark} We presume that \textit{A1}-\textit{A3} are mild conditions for dithers commonly considered in ES. For example, they are fulfilled by the well-known trigonometric dithers \eqref{sincos}, but also by the square-wave or sawtooth dithers proposed in \cite{Tan08}. \end{remark} The idea is now to sample and approximate the dither $u_2$ by needle-shaped functions. We restrict the sampling interval $\epsilon$, i.e., the length of the individual needles, to even divisors of $T$. This means that if the sampling interval is $\epsilon = \frac{T}{2N}, \; N\in \mathbb{N}_+$, then $u_2(t)$ is approximated by $2N$ needles in the interval $[0,T)$. The sampled dither function is thus \begin{equation} \bar{u}_2(t)=u_2(i\epsilon), \; t\in[(i-1)\epsilon, i\epsilon), \; i=1,\ldots,2N. \label{u2GU} \end{equation} Because of \textit{A3} and the even number of samples, for every needle in the time interval $[0,\frac{T}{2})$, there is a corresponding needle with the same amplitude but opposite sign in $[\frac{T}{2},T)$, such that we can extract ``needle pairs'' out of $\bar{u}_2(t)$. This is illustrated in Fig. \ref{u1u2}, where the same-colored areas form needle pairs of opposite sign. This fact will be crucial in establishing Theorem \ref{TheoremGU}. The following theorem explicitly relates the solutions of \eqref{system} at times $t=0$ and $t=T$, and thereby quantifies the gradient recovered by ES. \begin{theorem} Suppose that Assumptions \textit{A1}-\textit{A3} on the dither functions $u_1,u_2$ hold. Let $x^*(t)$ denote the solution of \eqref{system} when $u_1(t)$ and $u_2(t) \equiv 0$ are applied, i.e., $x^*(t)$ fulfills \begin{equation} \dot{x}^*(t)=g_1(F(x^*(t)))u_1(t), \: x^*(0)=x_0. \label{nomsystem_GU} \end{equation} Assume that $x^*(t)$ exists on $[0,T]$.
Let $\Phi(t,t_0)$ be the STM corresponding to the time-varying variational equation \begin{equation} \dot{v}(t)=u_1(t)\frac{\partial g_1}{\partial F}(F(x^*(t)))\frac{\partial F}{\partial x}(x^*(t)) v(t) \label{vareqGU} \end{equation} with initial time $t_0$, and let $g_1,g_2$ satisfy \eqref{LieBr}. Consider system \eqref{system}, where $u_1(t)$, and $\bar{u}_2(t)$ as in \eqref{u2GU}, are applied. Then \begin{align} &x(T) = x_0 + \mathcal{O}(T^2) \label{xTNfinGU} \\ &\hspace{-3pt}+ \epsilon\hspace{-1pt}\sum_{i=1}^{N} u_2(i\epsilon)\hspace{-6pt}\int\displaylimits_{i\epsilon}^{\frac{T}{2}-i\epsilon}\hspace{-2pt} \frac{\partial F}{\partial x}(x^*(\tau)) \Phi(0,\tau) u_1(\tau) g_0(F(x^*(\tau))) \mathrm{d}\tau. \nonumber \end{align} Moreover, \begin{align} &\lim_{N \rightarrow \infty} x(T) = x_0 + \mathcal{O}(T^2) \label{xTNinfGU}\\ &+ \int\displaylimits_{0}^{\frac{T}{2}} u_2(t) \hspace{-4pt} \int\displaylimits_{t}^{\frac{T}{2}-t} \hspace{-1pt} \frac{\partial F}{\partial x}(x^*(\tau)) \Phi(0,\tau) u_1(\tau) g_0(F(x^*(\tau))) \mathrm{d}\tau \mathrm{d}t. \nonumber \end{align} \label{TheoremGU} \end{theorem} The proof can be found in Appendix \ref{AppProof1}. Note that \eqref{xTNinfGU} characterizes the solution of \eqref{system} when $u_2(t)$ is applied, since $\lim_{N\rightarrow\infty}\bar{u}_2(t)=u_2(t)$ due to \textit{A1}. A continuation of \eqref{xTNinfGU} gives a gradient descent recursion of the form \eqref{grad_descent}. Because $x^*(t)$ depends only on its initial condition \mbox{$x^*(T_k)=x(T_k)$} (with $T_k=(k-1)T$), $\nabla L_\omega$ is given by \eqref{xTNinfGU} as \begin{align} &\nabla L_\omega (x(T_k)) = \mathcal{O}(T^2) \label{grad_approx} \\ &+ \hspace{-8pt}\int\displaylimits_{T_k}^{T_k+\frac{T}{2}}\hspace{-8pt}u_2(t)\hspace{-6pt}\int\displaylimits_{t}^{\frac{T}{2}-t} \frac{\partial F}{\partial x}(x^*(\tau)) \Phi(T_k,\tau) u_1(\tau) g_0(F(x^*(\tau))) \mathrm{d}\tau \mathrm{d}t. 
\nonumber \end{align} Consequently, Theorem \ref{TheoremGU} gives an approximate quantification of the gradient descent direction $\nabla L_\omega$, and thus describes the LD of system \eqref{system}. This result is independent of the parameter $\omega$ and of convexity properties of the function $F$. The theoretical insight into the mechanics of ES that can be gained from \eqref{xTNinfGU} is extremely valuable. It shows that the LD evolve along a weighted averaged gradient of the function $F$, even for very general dithers $u_1,u_2$. The weighting factors of the gradient are $\Phi(0,\tau)$, $u_1(\tau)$ and $g_0(F(x^*(\tau)))$. Note that for many vector fields $g_1,g_2$ commonly considered in ES (e.g. $g_1=F,g_2=1$ or $g_1=\sin(F), g_2=\cos(F)$), it holds that $g_0(F)=1$ \cite{Grushkovskaya17}, such that this factor even drops out of the integral. The inner integral (which corresponds to the averaged gradient) is then weighted with the second dither $u_2(t)$ and averaged once again. This observation gives rise to the interpretation that, when choosing the input period $T$ large enough, the LD of the ES system might ``even out'' local minima in $F$. As a result, the LD converge to the global minimum instead of getting stuck in local minima. This phenomenon is indeed observed in practice, as will be seen later. \begin{remark} Standard ES analysis already indicates that the right-hand side of the ``average'' ES system corresponds to an averaged version of the static objective function (see e.g. \cite{Krstic00,Tan06}). In contrast to our results, however, this method does not address the \textit{explicit} solution of the \textit{original} ES system. \end{remark} \begin{remark} Note that $\nabla L_\omega (x(T_k))$ in \eqref{grad_approx} depends on $x^*(t)$, the solution of a nonlinear differential equation. Therefore, it cannot be computed directly and an approximate numerical solution must be obtained instead.
This fact indeed does not diminish the valuable insight gained from \eqref{grad_approx}. \end{remark} \begin{remark} Theorem \ref{TheoremGU} also includes the case of needle-shaped inputs treated in \cite{Michalowsky16}. There are only two needles of length $\epsilon$, such that the rest term in \eqref{xTNfinGU} is estimated to $\mathcal{O}(\epsilon^2)$. \end{remark} \subsection{\fontdimen2\font=3pt Multidimensional Extremum Seeking with Sequential Dither} In this section, the characterization of the LD of ES systems is extended to the multidimensional case. Consider the multidimensional system \begin{equation} \dot{\vec{x}}(t) = g_1(F(\vec{x}(t)))\vec{u}_1(t) + g_2(F(\vec{x}(t)))\vec{u}_2(t), \; \vec{x}(0)=\vec{x}_0 \label{system_mult}, \end{equation} with $\vec{x}(t)\in\mathbb{R}^n$, $F:\mathbb{R}^n\rightarrow\mathbb{R}, F\in\mathcal{C}^1$, \mbox{$g_1,g_2: \mathbb{R}\rightarrow\mathbb{R}$} and the $nT$-periodic dither functions \mbox{$\vec{u}_1,\vec{u}_2: \mathbb{R}\rightarrow\mathbb{R}^n$}. Here, we consider the special multidimensional sequence, where a general scalar dither $u_1,u_2$ is applied sequentially in all dimensions, while the other dithers are zero meanwhile. This sequence, shown in Fig. \ref{u1u2mult}, is defined by \begin{equation} u_{ji}(t) = \begin{cases} u_j(t-(i-1)T) & t\in[(i-1)T,iT) \\ 0 & \text{else} \end{cases}, \; \substack{j=1,2 \\ i=1,\ldots,n}, \label{u_mult} \end{equation} with $\vec{u}_j(t)=[u_{j1}(t),\ldots,u_{jn}(t)]^\top, \; j=1,2$. Again, the scalar $u_1,u_2$ need to fulfill \textit{A1}-\textit{A3}. \begin{figure}[h] \centering \input{Abbildungen/u1lu2lsequential.tex} \caption{The multidimensional dither sequences $u_{1i}(t)$ (solid) and $u_{2i}(t)$ (dashed) for $n=3$. The scalar dithers $u_1,u_2$ are merely applied sequentially in all dimensions.} \label{u1u2mult} \end{figure} \\ The following theorem characterizes the LD of system \eqref{system_mult} when treated with this sequential dither. 
\begin{theorem} Suppose that Assumptions \textit{A1}-\textit{A3} on the scalar dithers $u_1,u_2$ hold. Let $\ell\in\{1,\ldots,n\}$. Denote $\vec{x}^*(t)$ the solution of \eqref{system_mult} when $\vec{u}_1(t)$ is as defined in \eqref{u_mult} and \mbox{$\vec{u}_2(t) \equiv \vec{0}$}, and assume that $\vec{x}^*(t)$ exists on $[0,\ell T]$. Let \mbox{$\Phi_i(t,t_0),\Phi_i:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$} be the STM corresponding to the time-varying variational equation \begin{equation} \dot{v}_i(t) = u_{1i}(t)\frac{\partial g_1}{\partial F}(F(\vec{x}^*(t)))\frac{\partial F}{\partial x_i}(\vec{x}^*(t)) v_i(t) \label{vareqSI} \end{equation} with initial time $t_0$, and let $g_1,g_2$ satisfy \eqref{LieBr}. Consider the multidimensional system \eqref{system_mult}, where $\vec{u}_1(t)$ and $\vec{u}_2(t)$ are applied as defined in \eqref{u_mult}. Then, \begin{align} &\vec{x}(\ell T) = \vec{x}_0 + \mathcal{O}(\ell^2 T^2) \label{xnTSI} \\ &+ \begin{bmatrix} \hspace{-57pt}\int\displaylimits_{0}^{\frac{T}{2}} u_2(t) \int\displaylimits_{t}^{\frac{T}{2}-t}\Big( \frac{\partial F}{\partial x_1}(\vec{x}^*(\tau)) \Phi_1(0,\tau) \\ \hspace{100pt} \cdot u_1(\tau) g_0(F(\vec{x}^*(\tau))) \Big) \mathrm{d}\tau \mathrm{d}t \\ \vdots \\ \hspace{-10pt}\int\displaylimits_{0}^{\frac{T}{2}} u_2(t) \int\displaylimits_{t}^{\frac{T}{2}-t} \Big( \frac{\partial F}{\partial x_\ell}(\vec{x}^*(T_\ell+\tau)) \Phi_\ell(T_\ell,T_\ell+\tau) \\ \hspace{79pt} \cdot u_1(\tau) g_0(F(\vec{x}^*(T_\ell+\tau))) \Big) \mathrm{d}\tau \mathrm{d}t \\ \vec{0}_{n-\ell} \end{bmatrix}\hspace{-2pt}. \nonumber \end{align} \label{TheoremSI} \end{theorem} \begin{sketchofproof} The proof of Theorem \ref{TheoremSI} follows closely the lines of the proof of Theorem \ref{TheoremGU}. The main idea is again to sample the input functions in dimension $i$ by an even number of needles in the interval $[(i-1)T,iT)$. 
Consider $\vec{\Phi}(t,t_0)$, the STM corresponding to \begin{equation} \vec{\dot{v}}(t) = \vec{u}_1(t)\frac{\partial g_1}{\partial F}(F(\vec{x}^*(t))) \nabla F(\vec{x}^*(t))^\top \vec{v}(t). \end{equation} With the fact that the needles are applied in one dimension at a time, and the principles of variational calculus given in Section \ref{VariationalCalculus}, one can express the solution of \eqref{system_mult} after $\ell T$ as \begin{align} &\vec{x}(\ell T) = \vec{x}_0 + \mathcal{O}(\ell T^2) \label{xnTSI1}\\ &+ \epsilon \sum_{i=1}^{\ell} \sum_{j=1}^{2N} u_2(j\epsilon) \vec{\Phi}(\ell T,T_{i}+j\epsilon) \begin{bmatrix} \vec{0}_{i-1} \\ g_2(\vec{x}^*(T_{i}+j\epsilon)) \\ \vec{0}_{n-i} \end{bmatrix}. \nonumber \end{align} Because of the point-symmetries in $\vec{u}_1(t)$, it holds that $\vec{\Phi}(T_{i+1},T_i)=\vec{I}$ for all $i$. With the semi-group property of the STM, and the results presented in Section \ref{SpecPertInp}, one can replace $\vec{\Phi}(\ell T,T_{i}+j\epsilon)$ in \eqref{xnTSI1} by the scalar $\Phi_i(T_{i+1},T_{i}+j\epsilon)$. Again, we apply the symmetry property of $u_2(t)$ and write the $\mathcal{O}(\epsilon)$ terms as an integral. Letting $\epsilon\rightarrow 0$ proves \eqref{xnTSI}. \end{sketchofproof} Theorem \ref{TheoremSI} shows that using the special sequential sequence as defined in \eqref{u_mult}, a component-wise and decoupled weighting and averaging of the gradient is performed. Furthermore, formula \eqref{xnTSI} reveals that the LD move primarily along the dimension where the scalar input was applied (as the integral term indicates). However, due to the non-vanishing remainder, the system moves slightly along the other dimensions as well. \section{Numerical Evaluation} \label{simulations} In this section, we compare the simulated LD $x(kT)_\mathrm{Sim}$ of an ES system with the result of the recursion \eqref{grad_descent} and \eqref{grad_approx}, denoted by $x(kT)_\mathrm{Rec}$. 
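To make this comparison concrete, the first-order term of \eqref{xTNinfGU} can be evaluated numerically. The sketch below is our own illustration (not the implementation used for the figures): it evaluates the double integral over one period for a scalar system with $F(x)=\frac{1}{2}x^2$, $g_1(F)=F$ and $g_2(F)=-a$, for which $g_0(F)=-a$ by \eqref{LieBr}; the grid size and the forward-Euler discretization are arbitrary choices. For small $T$, the resulting step should approach $-\frac{aT}{2}F'(x_0)$, i.e., one step of the limiting gradient flow.

```python
import numpy as np

# First-order term of (xTNinfGU) for dx/dt = F(x)u1(t) + g2*u2(t) with
# F(x) = x^2/2, g1(F) = F, g2(F) = -a; then dg1/dF = 1 and g0(F) = -a.
a, x0, T = 5.0, 1.8, 0.001
omega = 2 * np.pi / T
u1 = lambda t: np.sqrt(omega) * np.sin(omega * t)
u2 = lambda t: np.sqrt(omega) * np.cos(omega * t)

M = 4000                                  # grid on [0, T/2]
tau = np.linspace(0.0, T / 2, M)
dtau = tau[1] - tau[0]

# nominal solution x*(t) (u2 switched off) via forward Euler
xs = np.empty(M)
xs[0] = x0
for k in range(M - 1):
    xs[k + 1] = xs[k] + dtau * (xs[k] ** 2 / 2) * u1(tau[k])

# Phi(0,tau) = exp(-int_0^tau u1(s) (dg1/dF) F'(x*(s)) ds), with F'(x) = x
phi = np.exp(-np.cumsum(u1(tau) * xs) * dtau)

# inner integrand F'(x*) Phi(0,.) u1 g0 and its cumulative integral
inner = xs * phi * u1(tau) * (-a)
cum = np.concatenate(([0.0], np.cumsum(0.5 * (inner[1:] + inner[:-1]) * dtau)))

# outer integral over t, using int_t^{T/2-t} = cum(T/2-t) - cum(t)
step = float(np.sum(u2(tau) * (cum[::-1] - cum)) * dtau)
prediction = -a * T / 2 * x0              # small-T gradient flow step
```

Iterating such steps, $x_{k+1}=x_k+\text{step}$, gives a recursion of the kind used for $x(kT)_\mathrm{Rec}$.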
We consider the system from \cite{Duerr13} \begin{equation} \dot{x}(t) = F(x(t))\sqrt{\omega}\sin(\omega t) - a\sqrt{\omega}\cos(\omega t), \label{system_example} \end{equation} with initial condition $x(0)=1.8$. The parameter $\omega=\frac{2\pi}{T}$ will be varied to illustrate its various effects. \begin{figure}[h] \centering \input{Abbildungen/error1.tex} \caption{The absolute error between the simulated LD and the recursion (dashed), and the size of the tube around the LD (solid), with $F_1(x)$, $a=5$.} \label{error} \end{figure} \begin{example} Consider \eqref{system_example} together with the quadratic function $F_1(x)=\frac{1}{2}x^2$. Fig. \ref{error} shows the absolute error $|x(kT)_\mathrm{Rec}-x(kT)_\mathrm{Sim}|$ between the simulated LD and the recursion for system \eqref{system_example} for various $T$. The chosen period times differ by a factor of $10$, as do the errors at a certain time point $t$. This observation matches the theory very well, e.g., consider $T_1=10 T_2$. Then using the larger period time $T_1$ leads to an error at time $T_1$ of $\mathcal{O}(T_1^2)$, whereas performing the recursion $10$ times with $T_2$ causes an error at time $T_1$ of $\mathcal{O}(10T_2^2)=\mathcal{O}(0.1T_1^2)$. In \cite{Duerr13}, it was verified that the LD move in a tube around the gradient flow system. Observe from Fig. \ref{error} that the recursion gives a much more accurate estimation of the ES system's LD. \end{example} \begin{figure}[h] \centering \input{Abbildungen/F.tex} \input{Abbildungen/xkT1D.tex} \input{Abbildungen/Lomega.tex} \caption{The non-convex test function $F_2(x)$ (top), the simulated LD $x(kT)_\mathrm{Sim}$ (solid) and the recursion $x(kT)_\mathrm{Rec}$ (dashed) with $F_2(x)$ (middle), and the simulated $L_\omega(x)$ corresponding to $F_2(x)$ (bottom), $a=20$.} \label{xkT1D} \end{figure} \begin{example} This example shall demonstrate the recursion's ability to represent the simulated LD with non-convex functions.
Consider the function $F_2(x)$ as depicted in Fig. \ref{xkT1D} (top). It has a sharp local minimum between $x_0$ and its global minimum at $x=0$, and is a quadratic function elsewhere. In Fig. \ref{xkT1D} (middle), the simulated LD and the recursion are depicted. For $T=0.1,0.01$, both the simulation and the recursion converge globally, and for $T=0.0001$, both get stuck in the local minimum. However, for $T=0.001$, the simulation passes through the local minimum while the recursion gets stuck. We can infer that in borderline cases, when the ES system barely converges globally, the recursion's informative value is limited due to its intrinsic uncertainty. Nonetheless, for clearer cases, the recursion displays the actual LD very well. Fig. \ref{xkT1D} (bottom) shows $L_\omega(x)$ generated by the simulated $x(kT)_\mathrm{Sim}$, where \eqref{system_example} was treated with $F_2(x)$. The function $L_\omega(x)$ was numerically integrated using the Euler method from \eqref{grad_descent} with a normalized scaling. This example illustrates the property that $L_\omega(x)$ can become convex for certain $\omega$ although $F_2(x)$ is not. It can be observed that for higher $T$, $L_\omega(x)$ is a convex function, whereas for small $T$, it shows a local minimum similar to $F_2(x)$. Consequently, when starting to the right of this local minimum, $x(kT)_\mathrm{Sim}$ does not converge near the global minimizer of $F_2(x)$. \end{example} \begin{figure}[h] \centering \input{Abbildungen/xkT2D.tex} \caption{The two-dimensional LD using $F_3(\vec{x})$ and $T=0.01$. The value of $F_3(\vec{x})$ is color-coded from yellow (high) to green (low), $a=10$.} \label{xkT2D} \end{figure} \begin{example} The third example is devoted to the two-dimensional extension of \eqref{system_example}, where the one-dimensional sine-waves are applied sequentially according to \eqref{u_mult}. 
The initial condition $\vec{x}(0) = [1.8,1.8]^\top$ and the convex function $F_3(\vec{x})=\frac{1}{2}(x_1^2+x_2^2)$ are used. In Fig. \ref{xkT2D}, simulation results of this setup are depicted. The staircase-shaped LD predicted by \eqref{xnTSI} are clearly visible. Over the first few time periods, the recursion gives a very precise approximation of the LD. However, the approximation error accumulates, so that the recursion becomes less and less precise over time. Note that this observation does not contradict our main results. Nonetheless, the recursion converges near the minimizer at the origin, just like the simulated LD. \end{example} \section{Summary} \label{conclusion} In this paper, we gave an explicit recursion for the LD of scalar ES systems with static maps. This recursion approximately quantifies the gradient information recovered by ES, and reveals that it corresponds to an averaged gradient of the objective function. As this property holds without strong restrictions on the objective function, the recursion is also able to represent convergence of the LD to the global minimum, despite the presence of local minima. Furthermore, we presented a special multidimensional dither sequence and showed that an ES system, treated with this sequence, moves along an averaged gradient as well. Finally, we illustrated and verified our results in simulations. Since dynamic maps are generally considered in ES, an extension of the presented analysis to this case seems worthwhile. \section{Appendix} \subsection{Proof of Theorem \ref{TheoremGU}} \label{AppProof1} \begin{proof} The proof relies on the principles of variational calculus presented in Section \ref{methods}.
We define \begin{equation} \bar{u}_2^{(j)}(t) = \begin{cases} \bar{u}_2(t) & t\in[0,(j-1)\epsilon) \\ 0 & t\notin [0,(j-1)\epsilon) \end{cases}, \; j=1,\ldots,2N+1, \end{equation} consisting of the first $(j-1)$ needles and denote \begin{itemize} \item $x^{(j)}(t)$ the solution of \eqref{system} when $u_1(t)$ and $\bar{u}_2^{(j)}(t)$ are applied, \item $v^{(j)}(t)$ the variational variable which describes the variation of $x^{(j+1)}(t)$ from $x^{(j)}(t)$. \end{itemize} Consider system \eqref{system}, where $\bar{u}_2^{(j)}(t)$ is applied, and its solution $x^{(j)}(t)$. Now apply $\bar{u}_2^{(j+1)}(t)$. Then, $\bar{u}_2^{(j)}(t)$ corresponds to the nominal input that is perturbed on \mbox{$t\in[(j-1)\epsilon,j\epsilon)$}, and $x^{(j)}(t)$ to the nominal trajectory, such that \begin{equation} x^{(j+1)}(t)=x^{(j)}(t)+\epsilon v^{(j)}(t)+\mathcal{O}(\epsilon^2). \label{xvarproof} \end{equation} The variational variable $v^{(j)}(t)$ evolves as \begin{equation} \dot{v}^{(j)}(t) = u_1(t)\frac{\partial g_1}{\partial F}(F(x^{(j)}(t)))\frac{\partial F}{\partial x}(x^{(j)}(t)) v^{(j)}(t) \label{vareqproof} \end{equation} and has initial condition \mbox{$v^{(j)}(j\epsilon) = g_2(F(x^{(j)}(j\epsilon)))u_2(j\epsilon)$}, since $\bar{u}_2^{(j)}(j\epsilon)=0$. The proof is subdivided into four parts. In the first part, we will derive a representation of the solution of \eqref{system} after one dither period $T$ via variational calculus, where the terms linear in $\epsilon$ will be expressed in terms of STMs. Second, we will derive a useful symmetry property of these STMs. Using this, the terms of order $\mathcal{O}(\epsilon)$ will be expressed as an integral to prove \eqref{xTNfinGU} in the third part. In the last part, we let the length of the needles tend to zero to show \eqref{xTNinfGU}.
\textbf{1):} We first show by induction that when $\bar{u}_2^{(j+1)}(t)$ is applied to the system, the solution of \eqref{system} is \begin{align} x^{(j+1)}(t) &= x^*(t) + \epsilon \sum_{i=1}^{j} \Phi(t,i\epsilon) g_2(F(x^*(i\epsilon)))u_2(i\epsilon) \nonumber \\ &+\sum_{i=1}^{j} \Big(R_i(\epsilon)+S_i(\epsilon)\Big), \label{xjinductionGU} \end{align} for $t\in[j\epsilon,T]$, where each of the remainders $R_i(\epsilon)=\mathcal{O}(\epsilon^2)$ and each $S_i(\epsilon)=\mathcal{O}(2(i-1)\epsilon^2)$. \textit{Step 1: Basis} By the principles of variational calculus presented in Section \ref{methods}, applying $\bar{u}_2^{(2)}(t)$, we obtain for the solution of \eqref{system} \begin{equation} x^{(2)}(t) = \underbrace{x^{(1)}(t)}_{=x^*(t)} + \epsilon v^{(1)}(t) + R_1(\epsilon), \end{equation} for $t\in [\epsilon,T]$. For the remainder it holds that $R_1(\epsilon)=\mathcal{O}(\epsilon^2)$. As $x^{(1)}(t)=x^*(t)$, the variational variable $v^{(1)}(t)$ fulfills \eqref{vareqGU} with initial condition $v^{(1)}(\epsilon)=g_2(F(x^*(\epsilon)))u_2(\epsilon)$. Since $\Phi(t,t_0)$ is the STM of \eqref{vareqGU}, we can express $x^{(2)}(t)$ by \begin{equation} x^{(2)}(t) = x^*(t) + \epsilon\Phi(t,\epsilon)g_2(F(x^*(\epsilon)))u_2(\epsilon) + R_1(\epsilon), \label{xepsilon} \end{equation} for $t\in [\epsilon,T]$, which is \eqref{xjinductionGU} for $j=1$, since $S_1(\epsilon)=\mathcal{O}(2(1-1)\epsilon^2)=0$. \textit{Step 2: Inductive Step} Assume that \eqref{xjinductionGU} holds when $\bar{u}_2^{(j)}(t)$ is applied, i.e., that the solution $x^{(j)}(t)$ reads \begin{align} x^{(j)}(t) &= x^*(t) + \epsilon \sum_{i=1}^{j-1} \Phi(t,i\epsilon) g_2(F(x^*(i\epsilon)))u_2(i\epsilon) \nonumber \\ &+\sum_{i=1}^{j-1}\Big(R_i(\epsilon)+S_i(\epsilon)\Big), \label{xjinductionGU2} \end{align} for $t\in[(j-1)\epsilon,T]$. Consider $v^{(j)*}(t)$, which fulfills \eqref{vareqGU} and has initial condition $v^{(j)*}(j\epsilon) = g_2(F(x^*(j\epsilon)))u_2(j\epsilon)$. Next, we apply $\bar{u}_2^{(j+1)}(t)$.
With Lemma \ref{LemmaMultNeed} (see Appendix \ref{AppLemma1}), equation \eqref{xvarproof} becomes \begin{align} &x^{(j+1)}(t) = x^{(j)}(t) + \epsilon v^{(j)*}(t) + \underbrace{\epsilon v_R^{(j)}(\epsilon)}_{\eqqcolon S_j(\epsilon)} + R_j(\epsilon) \nonumber \\ &\stackrel{\eqref{xjinductionGU2}}{=} x^*(t) + \epsilon \sum_{i=1}^{j-1} \Phi(t,i\epsilon) g_2(F(x^*(i\epsilon)))u_2(i\epsilon)+\epsilon v^{(j)*}(t) \nonumber \\ &+\sum_{i=1}^{j-1}\Big(R_i(\epsilon)+S_i(\epsilon)\Big)+R_j(\epsilon)+S_j(\epsilon), \end{align} for $t \in[j\epsilon,T]$. Again, it holds that $R_j(\epsilon)=\mathcal{O}(\epsilon^2)$. Note that $x^{(j)}(t) = x^*(t) + \mathcal{O}((j-1)\epsilon)$, because according to \eqref{xjinductionGU}, each of the summands in the first sum is $\mathcal{O}(\epsilon)$. With Lemma \ref{LemmaMultNeed}, it holds that \mbox{$v_R^{(j)}(\epsilon)=\mathcal{O}(2(j-1)\epsilon)$} and thus \mbox{$S_j(\epsilon) = \epsilon v_R^{(j)}(\epsilon)=\mathcal{O}(2(j-1)\epsilon^2)$}. Using the STM of \eqref{vareqGU}, we obtain \begin{align} x^{(j+1)}(t) &= x^*(t) + \epsilon \sum_{i=1}^{j} \Phi(t,i\epsilon) g_2(F(x^*(i\epsilon)))u_2(i\epsilon) \nonumber \\ &+\sum_{i=1}^{j}\Big(R_i(\epsilon)+S_i(\epsilon)\Big), \end{align} for $t\in[j\epsilon,T]$, which is exactly \eqref{xjinductionGU}. Using \eqref{xjinductionGU}, we can express the solution of \eqref{system} at $t=T$, when $\bar{u}_2^{(2N+1)}(t)=\bar{u}_2(t)$ was applied to the system, as \begin{align} x(T) &= x^{(2N+1)}(T) \nonumber \\ &= x^*(T) + \epsilon \sum_{i=1}^{2N} \Phi(T,i\epsilon) g_2(F(x^*(i\epsilon)))u_2(i\epsilon) \nonumber \\ &+\sum_{i=1}^{2N}\Big(R_i(\epsilon)+S_i(\epsilon)\Big). \label{xTGU1} \end{align} In the next step, we will give a precise assessment of the remainder terms in \eqref{xTGU1}. We have established that the individual terms can be estimated as $R_i(\epsilon)=\mathcal{O}(\epsilon^2)$ and \mbox{$S_i(\epsilon)=\mathcal{O}(2(i-1)\epsilon^2)$}. 
Taking the summation of the remainder terms into account, the overall remainder is assessed as \begin{align} \sum_{i=1}^{2N} \Big(R_i(\epsilon) + S_i(\epsilon)\Big) &= \mathcal{O}(\big(2N+\sum_{i=1}^{2N}2(i-1)\big)\epsilon^2) \nonumber \\ &= \mathcal{O}((2N)^2\epsilon^2) \stackrel{\epsilon = \frac{T}{2N}}{=} \mathcal{O}(T^2). \end{align} Recall that because of \textit{A3} and the even number of needles, the sampled dither $\bar{u}_2(t)$ is composed of ``needle pairs'' of the same amplitude, but opposite sign. In the following, we will formalize this idea and carve out its effect on $x(T)$. First, we split the sum in \eqref{xTGU1} \begin{align} x(T) &= x^*(T) + \epsilon \sum_{i=1}^{N} \Phi(T,i\epsilon) g_2(F(x^*(i\epsilon)))u_2(i\epsilon) \nonumber \\ &+\epsilon \sum_{i=N+1}^{2N} \Phi(T,i\epsilon) g_2(F(x^*(i\epsilon)))u_2(i\epsilon)+\mathcal{O}(T^2), \end{align} then use that $u_2(i\epsilon)=-u_2(i\epsilon-\frac{T}{2})$ in the second sum due to \textit{A3}, and perform an index shift to obtain \begin{align} x(T) &= x^*(T) + \epsilon \sum_{i=1}^{N} u_2(i\epsilon)\Big( \Phi(T,i\epsilon) g_2(F(x^*(i\epsilon))) \nonumber \\ &- \Phi(T,\tfrac{T}{2}+i\epsilon) g_2(F(x^*(\tfrac{T}{2}+i\epsilon))) \Big) +\mathcal{O}(T^2). \label{xTGU2} \end{align} \textbf{2):} The second part of the proof is dedicated to finding a simpler form of the STMs occurring in \eqref{xTGU2}. Because of \textit{A2}, it holds for the dither that $u_1(t)=-u_1(T-t)$. As a consequence, the nominal solution goes along the vector field $g_1(F(x^*(t)))u_1(t)$ in $[0,\frac{T}{2}]$, and back along the same vector field in $[\frac{T}{2},T]$. Then the nominal solution fulfills \begin{equation} x^*(t)=x^*(T-t).
\label{xsymmetryGU} \end{equation} Using this symmetry property and $u_1(t)=-u_1(T-t)$ on the definition of $\Phi(t,t_0)$ yields \begin{align} &\Phi(t,t_0) =\exp\left( \int_{t_0}^{t} u_1(\tau)\frac{\partial g_1}{\partial F}(F(x^*(\tau))) \frac{\partial F}{\partial x}(x^*(\tau)) \mathrm{d}\tau \right) \nonumber \\ &\stackrel{\phantom{s\coloneqq T-\tau}}{=} \exp\left( \int_{T-t_0}^{T-t} u_1(s)\frac{\partial g_1}{\partial F}(F(x^*(s))) \frac{\partial F}{\partial x}(x^*(s)) \mathrm{d}s \right) \nonumber \\ &\stackrel{\phantom{s\coloneqq T-\tau}}{=} \Phi(T-t,T-t_0). \label{PhisymmetryGU} \end{align} The relation \eqref{PhisymmetryGU} and the semi-group property of the STM is used to establish that, if $0\le i\epsilon\le\frac{T}{2}$, \begin{align} \Phi(T,i\epsilon)&\stackrel{\phantom{\eqref{PhisymmetryGU}}}{=}\Phi(T,\tfrac{T}{2})\Phi(\tfrac{T}{2},i\epsilon) \nonumber \\ &\stackrel{\eqref{PhisymmetryGU}}{=} \Phi(0,\tfrac{T}{2})\Phi(\tfrac{T}{2},i\epsilon) = \Phi(0,i\epsilon) \label{PhiGU1} \end{align} and \begin{equation} \Phi(T,\tfrac{T}{2}+i\epsilon)\stackrel{\eqref{PhisymmetryGU}}{=} \Phi(0,\tfrac{T}{2}-i\epsilon). \label{PhiGU2} \end{equation} \textbf{3):} In the third section of the proof, we combine the results of the first two sections to come up with \eqref{xTNfinGU}. 
We use the identities \eqref{xsymmetryGU}, \eqref{PhiGU1} and \eqref{PhiGU2} in \eqref{xTGU2} to write \begin{align} &x(T) = \underbrace{x^*(T)}_{\stackrel{\eqref{xsymmetryGU}}{=}x^*(0)=x_0}+\mathcal{O}(T^2) \nonumber \\ &+ \epsilon \sum_{i=1}^{N} u_2(i\epsilon)\bigg( \underbrace{\Phi(T,i\epsilon)}_{\stackrel{\eqref{PhiGU1}}{=}\Phi(0,i\epsilon)} g_2(F(x^*(i\epsilon))) \nonumber \\ &\hspace{67pt}- \underbrace{\Phi(T,\tfrac{T}{2}+i\epsilon)}_{\stackrel{\eqref{PhiGU2}}{=}\Phi(0,\tfrac{T}{2}-i\epsilon)} g_2(F(\underbrace{x^*(\tfrac{T}{2}+i\epsilon)}_{\stackrel{\eqref{xsymmetryGU}}{=}x^*(\tfrac{T}{2}-i\epsilon)})) \bigg) \nonumber \\ &= x_0 + \mathcal{O}(T^2) \label{xTGU4} \\ &- \epsilon\sum_{i=1}^{N} u_2(i\epsilon)\int\displaylimits_{i\epsilon}^{\frac{T}{2}-i\epsilon} \frac{\mathrm{d}}{\mathrm{d} \tau}\bigg(\Phi(0,\tau)g_2(F(x^*(\tau)))\bigg)\mathrm{d}\tau. \nonumber \end{align} Note that writing the term linear in $\epsilon$ as an integral was only possible as there existed ``needle pairs'' of opposite sign. Next, we concentrate on simplifying the integral. Using the product rule, the differentiation property of the STM, and the chain rule, the integral becomes \begin{align} &\hspace{-3pt}\int\displaylimits_{i\epsilon}^{\frac{T}{2}-i\epsilon} \hspace{-6pt} \Phi(0,\tau) \frac{\mathrm{d} g_2}{\mathrm{d} \tau}(F(x^*(\tau))) \hspace{-2pt} + \hspace{-2pt} \frac{\mathrm{d}}{\mathrm{d} \tau}\big(\Phi(0,\tau)\big)g_2(F(x^*(\tau))) \mathrm{d}\tau \nonumber \\ &=\int\displaylimits_{i\epsilon}^{\frac{T}{2}-i\epsilon} \Phi(0,\tau) \frac{\partial g_2}{\partial F}(F(x^*(\tau)))\frac{\partial F}{\partial x}(x^*(\tau))\dot{x}^*(\tau) \\ & \hspace{0pt} -\Phi(0,\hspace{-1pt}\tau) u_1(\tau)\hspace{-1pt} \frac{\partial g_1}{\partial F}\hspace{-1pt}(F(x^*(\tau)\hspace{-1pt})\hspace{-1pt})\frac{\partial F}{\partial x}(x^*(\tau)\hspace{-1pt}) g_2(F(x^*(\tau)\hspace{-1pt})\hspace{-1pt}) \mathrm{d}\tau. 
\nonumber \end{align} The term $\dot{x}^*(\tau)$ is the right-hand side of the nominal differential equation \eqref{nomsystem_GU}. Using this, we can write the integral \begin{equation} \int\displaylimits_{i\epsilon}^{\frac{T}{2}-i\epsilon} \frac{\partial F}{\partial x}(x^*(\tau))\Phi(0,\tau)u_1(\tau)\underbrace{\bigg(\frac{\partial g_2}{\partial F}g_1-\frac{\partial g_1}{\partial F}g_2\bigg)}_{\stackrel{\eqref{LieBr}}{=}-g_0(F(x^*(\tau)))} \mathrm{d}\tau, \label{int_final} \end{equation} omitting some arguments. Using \eqref{int_final} in \eqref{xTGU4} proves \eqref{xTNfinGU}. \textbf{4):} In the fourth part of the proof, we perform the limit process of letting $N$ tend to infinity. We define \begin{equation} t_i \coloneqq i\epsilon \end{equation} to express the limit of the first-order term in \eqref{xTNfinGU} as \begin{align} \lim_{N\rightarrow \infty} \sum_{i=1}^{N} &\bigg( (t_i-t_{i-1}) u_2(t_i) \label{riemann}\\ &\cdot\int\displaylimits_{t_i}^{\frac{T}{2}-t_i} \frac{\partial F}{\partial x}(x^*(\tau)) \Phi(0,\tau) u_1(\tau) g_0(F(x^*(\tau))) \mathrm{d}\tau \bigg). \nonumber \end{align} Note that as $u_2$ is continuous and bounded due to \textit{A1}, this is the limit of a Riemann sum \cite{Trench03} with partition $\{t_i\}$, which converges to the Riemann integral. Therefore, the first-order term \eqref{riemann} becomes \begin{equation} \int\displaylimits_{0}^{\frac{T}{2}} u_2(t) \int\displaylimits_{t}^{\frac{T}{2}-t} \frac{\partial F}{\partial x}(x^*(\tau)) \Phi(0,\tau) u_1(\tau) g_0(F(x^*(\tau))) \mathrm{d}\tau \mathrm{d}t, \end{equation} which gives \eqref{xTNinfGU} and completes the proof. \end{proof} \subsection{Auxiliary Lemma \ref{LemmaMultNeed}} \label{AppLemma1} \begin{lemma} Consider $v^{(j)}(t)$ and $v^{(j)*}(t)$ as defined in the proof of Theorem \ref{TheoremGU}. Suppose that \mbox{$x^{(j)}(t)=x^*(t)+\mathcal{O}(i\epsilon)$}.
Then, \begin{equation} v^{(j)}(t) = v^{(j)*}(t) + v^{(j)}_R(\epsilon), \end{equation} where it holds for the remainder $v^{(j)}_R(\epsilon)=\mathcal{O}(2i\epsilon)$. \label{LemmaMultNeed} \end{lemma} \begin{proof} We perform a Taylor expansion of the system matrix of \eqref{vareqproof}, and the initial condition of $v^{(j)}(t)$ about $x^*$, where we use that $x^{(j)}=x^*+\mathcal{O}(i\epsilon)$. This yields \begin{align} A(x^{(j)},t) &= A(x^*+\mathcal{O}(i\epsilon),t) = A(x^*,t) + \mathcal{O}(i\epsilon), \\ g_2(x^{(j)}) &= g_2(x^*+\mathcal{O}(i\epsilon)) = g_2(x^*) + \mathcal{O}(i\epsilon). \end{align} As a result, the solution of the variational equation is \begin{align} v^{(j)}(t) = \exp\bigg(\int_{t_0}^t &A(x^*(\tau),\tau) + \mathcal{O}(i\epsilon) \mathrm{d}\tau \bigg) \nonumber\\ &\cdot\Big(g_2(x^*(j\epsilon))+ \mathcal{O}(i\epsilon)\Big) u_2(j\epsilon). \end{align} Using the Taylor expansion for $\exp\left(\int_{t_0}^t \mathcal{O}(i\epsilon) \mathrm{d}\tau \right)$ gives \begin{align} &v^{(j)}(t) = \exp\left(\int_{t_0}^t A(x^*(\tau),\tau)\mathrm{d}\tau \right)(1+\mathcal{O}(i\epsilon)) \nonumber\\ &\hspace{80pt} \cdot\Big(g_2(x^*(j\epsilon))+\mathcal{O}(i\epsilon)\Big) u_2(j\epsilon) \nonumber \\ &=\exp\left(\int_{t_0}^t A(x^*(\tau),\tau)\mathrm{d}\tau\right) g_2(x^*(j\epsilon))u_2(j\epsilon) + \mathcal{O}(2i\epsilon). \end{align} We note that the first term corresponds to $v^{(j)*}(t)$ and the second term fulfills $v^{(j)}_R(\epsilon)=\mathcal{O}(2i\epsilon)$. \end{proof} \bibliographystyle{IEEEtran} \section{Introduction} Extremum seeking (ES) is a well-established technique to find the extremal value of an unknown mapping \mbox{$F(\vec{x})$}. The quantity $\vec{x}$ corresponds to an input, which needs to be adjusted in such a way that it converges to a minimum of the objective function $F$. The map, together with an adjustment policy for $\vec{x}$ that drives it to the minimizer, is called an \textit{extremum seeking system}.
The only information available to this system consists of measurements of the objective function, whereas its analytical expression is unknown. As a consequence, gradient-based techniques are not suitable for this kind of problem. The idea behind ES is to recover an approximate gradient on a slower time scale by exciting the system via periodic dither signals on a faster time scale. The slow movement of the system along the recovered gradient of $F$ is referred to as the \textit{learning dynamics} (LD). Previous research on this topic focused mainly on the stability analysis of such ES schemes, whereas the quantification of this recovered gradient has received little attention. The first local stability proof of a special ES scheme was given in \cite{Krstic00}, and later extended to non-local stability by \cite{Tan06}, where it was also noted that ES approximates the gradient of $F$. A novel view on ES systems was introduced by \cite{Duerr13,Duerr17}, who also showed stability for their setup. Common to these results is that they establish so-called practical stability of the feedback scheme with respect to its parameters. This notion implies that, for convex objective functions, there exists for any set of initial conditions a choice of parameters of the ES controller, such that the ES system converges to an arbitrarily small neighborhood of a minimizer. As mentioned above, these results are able to explain convergence to the minimizer when considering convex functions. However, global convergence in the presence of local extrema is a topic that has rarely been addressed in former research. Both \cite{Tan09} and \cite{Nesic13} presented adapted ES schemes which, under some additional (and possibly hard to verify) assumptions, achieve practical global stability with respect to their parameters, despite the local extrema. However, it has been observed that standard ES schemes also achieve global optimization for a certain choice of parameters \cite{Tan06}.
For a setup as used in \cite{Duerr13}, the LD pass through a local minimum if the frequency $\omega$ of the dither signals is chosen rather low. In \cite{Michalowsky16}, an explicit description of the recovered gradient was given for a particular scalar system within this setup, treated with needle-shaped dithers. This result gave rise to the interpretation that the LD can be described by a gradient descent on a function other than $F$, which is parametrized by $\omega$ and will be called $L_\omega$. This raises the following question regarding the observation mentioned above: does $L_\omega$ become globally convex for certain parameter choices $\omega$, even though $F$ is a non-convex function, such that the LD converge to the global minimizer? The main contribution of this paper is to give an explicit description of the recovered gradient and thus the LD for general ES systems, which is valid for any parameter choice. This result contributes to the existing theory in three different ways. First, the paper extends the results of \cite{Michalowsky16} to the use of general dithers and vector fields as considered in \cite{Grushkovskaya17}, and to multidimensional ES. We show that for all of these extensions, the LD approximately move along an \textit{averaged} gradient of $F$, as is the case for needle-shaped inputs considered in \cite{Michalowsky16}. This is equivalent to viewing the recovered gradient as an averaged gradient of $F$. Second, we extend the theory of ES as introduced by \cite{Duerr13}, by stating an explicit discrete-time recursion describing the LD of ES systems in this structure. Furthermore, our results explicitly state the learning descent direction with an uncertainty depending on $\omega$, whereas the existing theory did not give an interpretable estimate of this direction.
Third, we provide a new perspective on the analysis of global convergence of ES with non-convex maps: we do not aim to prove practical global stability of a new algorithm under some possibly restrictive assumptions; instead, the explicit quantification of the LD will allow us to analyze the global LD of such systems for any given parameters, and to examine when and why the LD follow the gradient of a convex function, despite $F$ being non-convex. The paper is organized as follows. In Section \ref{methods}, we introduce some notation and the necessary background on state-transition matrices and variational calculus. Section \ref{mainresults} contains the problem statement and the theoretic main results, while a simulative verification of these statements is given in Section \ref{simulations}. Our work is summarized in Section \ref{conclusion}. This article is an extended version of \cite{Wildhagen18} and augments its contents by an introduction to variational calculus for multidimensional systems, a characterization of the recovered gradient for the multidimensional case, and an additional numerical example. \section{Preliminaries and Methods} \label{methods} \subsection{Notation} We denote by $\mathbb{R}$ the set of real numbers and by $\mathbb{N}_+$ the set of positive natural numbers. Let $\mathcal{C}^p$ denote the set of $p$ times continuously differentiable functions. We denote $\nabla f = \begin{bmatrix} \frac{\partial f}{\partial x_1}& \dots & \frac{\partial f}{\partial x_n}\end{bmatrix}^\top$ the gradient of a function $f:\mathbb{R}^n\rightarrow\mathbb{R}, \; f\in \mathcal{C}^1$. We use the Landau notation, i.e., for $f,g:\mathbb{R}^n\rightarrow\mathbb{R}$ write $f(\vec{x})=\mathcal{O}(g(\vec{x}))$, meaning that there exist some $M>0$ and $\delta>0$ such that $||f(\vec{x})||\le M||g(\vec{x})||$ for all $||\vec{x}||\le \delta$.
Note that when \mbox{$f_1=\mathcal{O}(g)$} and $f_2=\mathcal{O}(g)$ with $M,\delta$, it also holds that \mbox{$f_1+f_2=\mathcal{O}(2g)$} with $M,\delta$, because of the triangle inequality. We denote $\vec{0}_i$ a vector in $\mathbb{R}^i$ where each entry is $0$, $\vec{I}$ the unit matrix, $a_i$ the $i$-th entry of a vector $\vec{a}$, and $B_i$ the $i$-th diagonal entry of a square matrix $\vec{B}$. \subsection{State-Transition Matrix} For a linear, time-varying system $\dot{\vec{x}}(t)=\vec{A}(t)\vec{x}(t)$, \mbox{$\vec{x}(t)\in\mathbb{R}^n$}, the state-transition matrix (STM) relates the solutions at different time points $\vec{x}(t)=\vec{\Phi}(t,t_0)\vec{x}(t_0)$. If $n=1$ and $A$ is locally integrable, it is defined by \begin{equation} \Phi(t,t_0) = \exp\left(\int_{t_0}^{t} A(\tau) \mathrm{d}\tau\right). \end{equation} Important properties are the so-called semi-group property $\vec{\Phi}(t,t_0)=\vec{\Phi}(t,t_1)\vec{\Phi}(t_1,t_0)$ and the differentiation property $\frac{\mathrm{d}}{\mathrm{d} t_0}\vec{\Phi}(t,t_0) = -\vec{\Phi}(t,t_0)\vec{A}(t_0)$. \subsection{Variational Calculus}\label{VariationalCalculus} This section presents well-established results on the effect of so-called needle variations on the trajectories of dynamical systems, adapted to our line of argument. An exhaustive treatment of needle variations can be found in \cite{Liberzon11}. \subsubsection{General Perturbation} Consider the nonlinear system \begin{equation} \dot{\vec{x}}(t) = g_1(\vec{x}(t)) \vec{u}_1(t) + g_2(\vec{x}(t)) \vec{u}_2(t) \label{system_var} \end{equation} with $\vec{x}(t)\in\mathbb{R}^n$, $g_{1},g_{2}:\mathbb{R}^n \rightarrow \mathbb{R}$ and $\vec{u}_{1},\vec{u}_{2}:\mathbb{R} \rightarrow \mathbb{R}^n$. Suppose that $g_{1},g_{2} \in \mathcal{C}^1$ and $\vec{u}_{1},\vec{u}_{2}$ are piecewise continuous, such that local existence and uniqueness of the solution of \eqref{system_var} are guaranteed.
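The semi-group and differentiation properties of the STM are easy to verify numerically in the scalar case. In the following sketch, $A(t)=\sin(t)$ is an arbitrary illustrative choice, so that the integral has the closed form $\int_{t_0}^{t}\sin(s)\,\mathrm{d}s=\cos(t_0)-\cos(t)$.

```python
import numpy as np

# scalar STM Phi(t,t0) = exp(int_{t0}^t A(s) ds) for the illustrative A(t) = sin(t)
def stm(t, t0):
    return np.exp(np.cos(t0) - np.cos(t))

t, t1, t0 = 2.0, 1.2, 0.3

# semi-group property: Phi(t,t0) = Phi(t,t1) * Phi(t1,t0)
lhs = stm(t, t0)
rhs = stm(t, t1) * stm(t1, t0)

# differentiation property: d/dt0 Phi(t,t0) = -Phi(t,t0) * A(t0),
# checked against a central finite difference
h = 1e-6
dnum = (stm(t, t0 + h) - stm(t, t0 - h)) / (2 * h)
dana = -stm(t, t0) * np.sin(t0)
```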
Denote $\vec{x}^*(t)$ the solution of \eqref{system_var} when the nominal input trajectories $\vec{u}_1(t)=\vec{u}_1^*(t)$ and $\vec{u}_2(t)=\vec{u}_2^*(t)$ are applied to the system. We call $\vec{x}^*(t)$ the nominal solution of \eqref{system_var}. Next, we study the effects on the solution of \eqref{system_var} when the nominal inputs are perturbed by a so-called needle or Pontryagin-McShane variation. We consider a perturbation in $\vec{u}_2$ only, such that the perturbed input is defined by \begin{equation} \vec{u}_1(t) = \vec{u}_1^*(t), \quad \vec{u}_2(t) = \begin{cases} \vec{u}_2^*(t) & t \notin [\bar{t},\bar{t}+\epsilon] \\ \vec{\alpha} & t \in [\bar{t},\bar{t}+\epsilon] \end{cases}, \label{input_pert} \end{equation} where $\vec{\alpha}= [ \alpha_1,\ldots,\alpha_n ]^\top \in \mathbb{R}^n$ and $\bar{t},\epsilon>0$. \begin{figure} \centering \input{Abbildungen/needleVariation.tex} \caption{The needle variation (left) and the nominal and perturbed trajectories (right).} \label{needleVar} \end{figure} As illustrated in Fig. \ref{needleVar}, the nominal input is perturbed over an interval of length $\epsilon$, starting at $\bar{t}$, and held at the constant value $\vec{\alpha}$ in this time period. The solution of \eqref{system_var}, when the perturbed input \eqref{input_pert} is applied, will be denoted by $\vec{x}(t)$. As Fig. \ref{needleVar} suggests, $\vec{x}(t)$ will deviate from $\vec{x}^*(t)$ on $[\bar{t},\bar{t}+\epsilon]$, and then ``run parallel'' to the nominal solution thereafter. By several Taylor expansions (see \cite{Liberzon11} for details), one obtains that the perturbed trajectory evolves according to \begin{equation} \vec{x}(t) = \vec{x}^*(t) + \epsilon \vec{v}(t) + \mathcal{O}(\epsilon^2), \quad \forall t \ge \bar{t}+\epsilon.
\label{perturbed_sol} \end{equation} The quantity $\vec{v}(t) \in \mathbb{R}^n$ is the so-called variational variable, which evolves according to the variational equation \begin{equation} \dot{\vec{v}}(t) = \underbrace{ \vec{u}_1^*(t) \nabla g_1(\vec{x}^*(t))^\top}_{\eqqcolon \vec{A}(\vec{x}^*(t),t)} \label{vareq_var} \vec{v}(t), \quad \forall t \ge \bar{t}+\epsilon, \end{equation} and, when $\vec{u}_2^*(\bar{t}+\epsilon)=\vec{0}$, has the initial condition \begin{equation} \vec{v}(\bar{t}+\epsilon) = g_{2}(\vec{x}^*(\bar{t}+\epsilon))\vec{\alpha}. \label{var_IC} \end{equation} Note that the variational equation \eqref{vareq_var} is equivalent to a linearization of \eqref{system_var} around $\vec{x}^*(t)$, $\vec{u}_1^*(t)$, and $\vec{u}_2^*(t)$. \subsubsection{Perturbation and Nominal Input in Single Dimension} \label{SpecPertInp} Consider the perturbed input \eqref{input_pert} with \begin{equation} \vec{\alpha} = [ \vec{0}_{\ell-1} , \alpha_{\ell} , \vec{0}_{n-\ell} ]^\top, \quad \ell \in \{1,\ldots,n\}, \end{equation} i.e., the perturbation acts only in the single dimension $\ell$. Then, from \eqref{var_IC}, the initial condition of the variational variable reads \begin{equation} \vec{v}(\bar{t}+\epsilon) = [\vec{0}_{\ell-1} , g_{2}(\vec{x}^*(\bar{t}+\epsilon))\alpha_{\ell} , \vec{0}_{n-\ell}]^\top. \label{IC_sing} \end{equation} Additionally, let the nominal input $u_{1i}^*(t) = 0$ for all $i \neq \ell$ on some time interval $t \in [\bar{t}+\epsilon,t_f]$. From the variational equation \eqref{vareq_var}, it follows that $\dot{v}_i(t)= 0$ for all $i\neq \ell$ for $t \in [\bar{t}+\epsilon,t_f]$, such that these components do not show any dynamic behavior. Furthermore, $v_i(\bar{t}+\epsilon)=0$ for all $i\neq \ell$ from \eqref{IC_sing}, so that \begin{equation} v_i(t)=0 \quad \forall i\neq \ell, \quad t \in[\bar{t}+\epsilon,t_f].
\end{equation} As a result, they have no effect on the component $v_{\ell}(t)$ and we obtain \begin{align} \dot{v}_{\ell}(t) = u_{1\ell}^*(t) \frac{\partial g_{1}}{\partial x_\ell}(\vec{x}^*(t)) v_{\ell}(t), \quad t \in[\bar{t}+\epsilon,t_f]. \end{align} This will be of importance in the proof of Theorem \ref{TheoremSI}. \section{Main Results} \label{mainresults} \subsection{Scalar Extremum Seeking} Extremum seeking (ES) offers a systematic approach to address the optimization problem \begin{equation} \min F(x), \label{optprob} \end{equation} with $x\in\mathbb{R}$, the nonlinear map $F: \mathbb{R}\rightarrow\mathbb{R}$, $F\in \mathcal{C}^1$, without any gradient information being available. In the following, we consider the class of ES systems introduced by \cite{Duerr13} \begin{equation} \dot{x}(t) = g_1(F(x(t)))u_1(t) + g_2(F(x(t)))u_2(t), \; x(0) = x_0, \label{system} \end{equation} where $g_1,g_2: \mathbb{R}\rightarrow\mathbb{R}$, $g_1,g_2\in \mathcal{C}^1$. In \cite{Duerr17}, it was discussed that, apart from technical differences, the setups considered in \cite{Krstic00,Tan06} can be represented by this class as well. The dither functions $u_1,u_2: \mathbb{R}\rightarrow\mathbb{R}$ are assumed to be $T$-periodic with $T=\frac{2\pi}{\omega}$. Note that in the following, both $\omega$ and $T$ will be used, although they contain the same information. A typical approach is to choose the dithers \begin{equation} u_1(t) = \sqrt{\omega}\sin(\omega t), \quad u_2(t) = \sqrt{\omega}\cos(\omega t), \label{sincos} \end{equation} and the vector fields $g_1,g_2$ such that their Lie bracket \begin{equation} [g_1,g_2](F) = \frac{\partial g_2(F)}{\partial F}g_1(F)-\frac{\partial g_1(F)}{\partial F}g_2(F) \eqqcolon - g_0(F) \label{LieBr} \end{equation} satisfies $g_0(F)=1$. Then for $\omega\rightarrow\infty$, the trajectories of \eqref{system} follow those of the gradient flow system $\dot{\bar{x}}=-\frac{1}{2}\nabla F(\bar{x})$ arbitrarily closely \cite{Duerr13,Grushkovskaya17}.
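This averaging behavior is easy to observe numerically. The following minimal sketch (our own illustration, not the cited authors' code) simulates such an ES system with $g_1(F)=F$, $g_2(F)=-1$, the quadratic map $F(x)=\frac{1}{2}x^2$, and trigonometric dithers, and compares the result with the corresponding gradient flow $\bar{x}(t)=x_0 e^{-t/2}$:

```python
import math

# Minimal simulation sketch (illustration only): the scalar ES system
#   dx/dt = F(x) sqrt(w) sin(w t) - sqrt(w) cos(w t),
# i.e. g1(F) = F, g2(F) = -1, with the quadratic map F(x) = x^2 / 2.
# For large w its trajectory should shadow the gradient flow
#   d(xbar)/dt = -(1/2) F'(xbar), i.e. xbar(t) = x0 exp(-t/2).

def F(x):
    return 0.5 * x * x

def rhs(t, x, w):
    sw = math.sqrt(w)
    return F(x) * sw * math.sin(w * t) - sw * math.cos(w * t)

def simulate(x0, w, t_end, steps_per_period=500):
    """Integrate the ES system with classical RK4 steps."""
    T = 2.0 * math.pi / w
    dt = T / steps_per_period
    n = round(t_end / dt)
    x = x0
    for i in range(n):
        t = i * dt
        k1 = rhs(t, x, w)
        k2 = rhs(t + dt / 2, x + dt / 2 * k1, w)
        k3 = rhs(t + dt / 2, x + dt / 2 * k2, w)
        k4 = rhs(t + dt, x + dt * k3, w)
        x += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0, w = 1.8, 2.0 * math.pi * 100.0   # dither period T = 0.01
x_es = simulate(x0, w, t_end=2.0)
x_gf = x0 * math.exp(-2.0 / 2.0)     # gradient flow reference at t = 2
print(abs(x_es - x_gf))              # small for large w
```

The deviation shrinks as $\omega$ grows, consistent with the gradient-flow limit stated above.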
As a result, $x$ converges to a solution of \eqref{optprob}. Of course, $\omega\rightarrow\infty$ can never be achieved in practice. However, also for finite $\omega$, the state of \eqref{system} is observed to move on average along a recovered gradient of $F$. This movement is referred to as learning dynamics (LD). Since the recovered gradient is not exact, the LD generally minimize a function other than $F$, called $L_\omega$. The LD can be carved out from $x(t)$ by neglecting the periodic oscillations induced by the dithers, i.e., by regarding only the system state at multiples of $T$, $x(kT), \;k\in\mathbb{N}_+$. Then, the LD of \eqref{system} can be described as a gradient descent recursion on $L_\omega$ with fixed step size \begin{equation} x(kT) = x((k-1)T) + \nabla L_\omega (x((k-1)T)). \label{grad_descent} \end{equation} As detailed above, the LD have been found to move along $\nabla F$ for $\omega\rightarrow\infty$, whereas for finite $\omega$, the LD have not been explicitly quantified so far. The main purpose of this paper is thus to give an explicit description of the gradient descent direction $\nabla L_\omega$, valid for any $\omega$. Thereby, the LD and the function $L_\omega$ that the LD effectively minimize are characterized. \begin{figure}[h] \centering \input{Abbildungen/u1u21Dgen.tex} \caption{Trigonometric dithers $u_1(t)$ and $u_2(t)$ which comply with \textit{A1}-\textit{A3}. Note that the individual needles in the sampled dither $\bar{u}_2(t)$ form needle pairs of opposite sign (illustrated by matching colors).} \label{u1u2} \end{figure} We assume that the following holds for the dither functions: \begin{enumerate} \item[\textit{A1:}] $u_1,u_2$ are piecewise continuous and bounded. \item[\textit{A2:}] The function $u_1$ is point-symmetric to $(\frac{T}{2},0)$, i.e., it holds that $u_1(t)=-u_1(T-t)$ for all $t \in [0,T]$. \item[\textit{A3:}] For $u_2$ it holds that $u_2(t)=-u_2(\frac{T}{2}+t)$ for all $t \in [0,\frac{T}{2}]$.
\end{enumerate} \begin{remark} Note that \textit{A2} and \textit{A3} imply that $u_1$ and $u_2$ have zero mean on $[0,T]$. \end{remark} \begin{remark} We presume that \textit{A1}-\textit{A3} are mild conditions for dithers commonly considered in ES. For example, they are fulfilled by the well-known trigonometric dithers \eqref{sincos}, but also by the square-wave or sawtooth dithers proposed in \cite{Tan08}. \end{remark} The idea is now to sample and approximate the dither $u_2$ by needle-shaped functions. We restrict the sampling interval $\epsilon$, i.e., the length of the individual needles, such that $\frac{T}{\epsilon}$ is an even number. This means that if the sampling interval is $\epsilon = \frac{T}{2N}, \; N\in \mathbb{N}_+$, then $u_2(t)$ is approximated by $2N$ needles in the interval $[0,T)$. The sampled dither function is thus \begin{equation} \bar{u}_2(t)=u_2(i\epsilon), \; t\in[(i-1)\epsilon, i\epsilon), \; i=1,\ldots,2N. \label{u2GU} \end{equation} Because of \textit{A3} and the even number of samples, for every needle in the time interval $[0,\frac{T}{2})$, there is a corresponding needle with the same amplitude but opposite sign in $[\frac{T}{2},T)$, such that we can extract ``needle pairs'' out of $\bar{u}_2(t)$. This is illustrated in Fig. \ref{u1u2}, where the same-colored areas form needle pairs of opposite sign. This fact will become crucial in establishing Theorem \ref{TheoremGU}. The following theorem explicitly relates the solutions of \eqref{system} at times $t=0$ and $t=T$, and thereby quantifies the gradient recovered by ES. \begin{theorem} Suppose that Assumptions \textit{A1}-\textit{A3} on the dither functions $u_1,u_2$ hold. Let $x^*(t)$ denote the solution of \eqref{system} when $u_1(t)$ and $u_2(t) \equiv 0$ are applied, i.e., $x^*(t)$ fulfills \begin{equation} \dot{x}^*(t)=g_1(F(x^*(t)))u_1(t), \: x^*(0)=x_0. \label{nomsystem_GU} \end{equation} Assume that $x^*(t)$ exists on $[0,T]$.
Let $\Phi(t,t_0)$ be the STM corresponding to the time-varying variational equation \begin{equation} \dot{v}(t)=u_1(t)\frac{\partial g_1}{\partial F}(F(x^*(t)))\frac{\partial F}{\partial x}(x^*(t)) v(t) \label{vareqGU} \end{equation} with initial time $t_0$, and let $g_1,g_2$ satisfy \eqref{LieBr}. Consider system \eqref{system}, where $u_1(t)$, and $\bar{u}_2(t)$ as in \eqref{u2GU}, are applied. Then \begin{align} &x(T) = x_0 + \mathcal{O}(T^2) \label{xTNfinGU} \\ &\hspace{-3pt}+ \epsilon\hspace{-1pt}\sum_{i=1}^{N} u_2(i\epsilon)\hspace{-6pt}\int\displaylimits_{i\epsilon}^{\frac{T}{2}-i\epsilon}\hspace{-2pt} \frac{\partial F}{\partial x}(x^*(\tau)) \Phi(0,\tau) u_1(\tau) g_0(F(x^*(\tau))) \mathrm{d}\tau. \nonumber \end{align} Moreover, \begin{align} &\lim_{N \rightarrow \infty} x(T) = x_0 + \mathcal{O}(T^2) \label{xTNinfGU}\\ &+ \int\displaylimits_{0}^{\frac{T}{2}} u_2(t) \hspace{-4pt} \int\displaylimits_{t}^{\frac{T}{2}-t} \hspace{-1pt} \frac{\partial F}{\partial x}(x^*(\tau)) \Phi(0,\tau) u_1(\tau) g_0(F(x^*(\tau))) \mathrm{d}\tau \mathrm{d}t. \nonumber \end{align} \label{TheoremGU} \end{theorem} The proof can be found in Appendix \ref{AppProof1}. Note that \eqref{xTNinfGU} characterizes the solution of \eqref{system} when $u_2(t)$ is applied, since $\lim_{N\rightarrow\infty}\bar{u}_2(t)=u_2(t)$ due to \textit{A1}. A continuation of \eqref{xTNinfGU} gives a gradient descent recursion of the form \eqref{grad_descent}. Because $x^*(t)$ depends only on its initial condition \mbox{$x^*(T_k)=x(T_k)$} (with $T_k=(k-1)T$), $\nabla L_\omega$ is given by \eqref{xTNinfGU} as \begin{align} &\nabla L_\omega (x(T_k)) = \mathcal{O}(T^2) \label{grad_approx} \\ &+ \hspace{-8pt}\int\displaylimits_{T_k}^{T_k+\frac{T}{2}}\hspace{-8pt}u_2(t)\hspace{-6pt}\int\displaylimits_{t}^{\frac{T}{2}-t} \frac{\partial F}{\partial x}(x^*(\tau)) \Phi(T_k,\tau) u_1(\tau) g_0(F(x^*(\tau))) \mathrm{d}\tau \mathrm{d}t. 
\nonumber \end{align} Consequently, Theorem \ref{TheoremGU} gives an approximate quantification of the gradient descent direction $\nabla L_\omega$, and thus describes the LD of system \eqref{system}. This result is independent of the parameter $\omega$ and of convexity properties of the function $F$. The theoretical insight into the mechanics of ES gained from \eqref{xTNinfGU} is valuable: it shows that the LD evolve along a weighted averaged gradient of the function $F$, even for very general dithers $u_1,u_2$. The weighting factors of the gradient are $\Phi(0,\tau)$, $u_1(\tau)$ and $g_0(F(x^*(\tau)))$. Note that for many vector fields $g_1,g_2$ commonly considered in ES (e.g. $g_1=F,g_2=1$ or $g_1=\sin(F), g_2=\cos(F)$), it holds that $g_0(F)=1$ \cite{Grushkovskaya17}, so that this factor drops out of the integral entirely. The inner integral (which corresponds to the averaged gradient) is then weighted with the second dither $u_2(t)$ and averaged once again. This observation gives rise to the interpretation that when choosing the input period $T$ large enough, the LD of the ES system might ``even out'' local minima in $F$. As a result, the LD converge to the global minimum instead of getting stuck in local minima. This phenomenon is indeed observed in practice, as demonstrated in Section \ref{simulations}. \begin{remark} Standard ES analysis already indicates that the right-hand side of the ``average'' ES system corresponds to an averaged version of the static objective function (see e.g. \cite{Krstic00,Tan06}). In contrast to our results, however, this method does not address the \textit{explicit} solution of the \textit{original} ES system. \end{remark} \begin{remark} Note that $\nabla L_\omega (x(T_k))$ in \eqref{grad_approx} depends on $x^*(t)$, the solution of a nonlinear differential equation. Therefore, it cannot be computed directly and an approximate numerical solution must be obtained instead.
This, however, does not diminish the insight gained from \eqref{grad_approx}. \end{remark} \begin{remark} Theorem \ref{TheoremGU} also includes the case of needle-shaped inputs treated in \cite{Michalowsky16}. In that case there are only two needles of length $\epsilon$, so that the remainder term in \eqref{xTNfinGU} reduces to $\mathcal{O}(\epsilon^2)$. \end{remark} \subsection{\fontdimen2\font=3pt Multidimensional Extremum Seeking with Sequential Dither} In this section, the characterization of the LD of ES systems is extended to the multidimensional case. Consider the multidimensional system \begin{equation} \dot{\vec{x}}(t) = g_1(F(\vec{x}(t)))\vec{u}_1(t) + g_2(F(\vec{x}(t)))\vec{u}_2(t), \; \vec{x}(0)=\vec{x}_0 \label{system_mult}, \end{equation} with $\vec{x}(t)\in\mathbb{R}^n$, $F:\mathbb{R}^n\rightarrow\mathbb{R}, F\in\mathcal{C}^1$, \mbox{$g_1,g_2: \mathbb{R}\rightarrow\mathbb{R}$} and the $nT$-periodic dither functions \mbox{$\vec{u}_1,\vec{u}_2: \mathbb{R}\rightarrow\mathbb{R}^n$}. Here, we consider a special multidimensional sequence in which general scalar dithers $u_1,u_2$ are applied sequentially, one dimension at a time, while the dithers in all other dimensions are zero. This sequence, shown in Fig. \ref{u1u2mult}, is defined by \begin{equation} u_{ji}(t) = \begin{cases} u_j(t-(i-1)T) & t\in[(i-1)T,iT) \\ 0 & \text{else} \end{cases}, \; \substack{j=1,2 \\ i=1,\ldots,n}, \label{u_mult} \end{equation} with $\vec{u}_j(t)=[u_{j1}(t),\ldots,u_{jn}(t)]^\top, \; j=1,2$. Again, the scalar dithers $u_1,u_2$ need to fulfill \textit{A1}-\textit{A3}. \begin{figure}[h] \centering \input{Abbildungen/u1lu2lsequential.tex} \caption{The multidimensional dither sequences $u_{1i}(t)$ (solid) and $u_{2i}(t)$ (dashed) for $n=3$. The scalar dithers $u_1,u_2$ are merely applied sequentially in all dimensions.} \label{u1u2mult} \end{figure} \\ The following theorem characterizes the LD of system \eqref{system_mult} when treated with this sequential dither.
\begin{theorem} Suppose that Assumptions \textit{A1}-\textit{A3} on the scalar dithers $u_1,u_2$ hold. Let $\ell\in\{1,\ldots,n\}$. Denote $\vec{x}^*(t)$ the solution of \eqref{system_mult} when $\vec{u}_1(t)$ is as defined in \eqref{u_mult} and \mbox{$\vec{u}_2(t) \equiv \vec{0}$}, and assume that $\vec{x}^*(t)$ exists on $[0,\ell T]$. Let \mbox{$\Phi_i(t,t_0),\Phi_i:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$} be the STM corresponding to the time-varying variational equation \begin{equation} \dot{v}_i(t) = u_{1i}(t)\frac{\partial g_1}{\partial F}(F(\vec{x}^*(t)))\frac{\partial F}{\partial x_i}(\vec{x}^*(t)) v_i(t) \label{vareqSI} \end{equation} with initial time $t_0$, and let $g_1,g_2$ satisfy \eqref{LieBr}. Consider the multidimensional system \eqref{system_mult}, where $\vec{u}_1(t)$ and $\vec{u}_2(t)$ are applied as defined in \eqref{u_mult}. Then, \begin{align} &\vec{x}(\ell T) = \vec{x}_0 + \mathcal{O}(\ell^2 T^2) \label{xnTSI} \\ &+ \begin{bmatrix} \hspace{-57pt}\int\displaylimits_{0}^{\frac{T}{2}} u_2(t) \int\displaylimits_{t}^{\frac{T}{2}-t}\Big( \frac{\partial F}{\partial x_1}(\vec{x}^*(\tau)) \Phi_1(0,\tau) \\ \hspace{100pt} \cdot u_1(\tau) g_0(F(\vec{x}^*(\tau))) \Big) \mathrm{d}\tau \mathrm{d}t \\ \vdots \\ \hspace{-10pt}\int\displaylimits_{0}^{\frac{T}{2}} u_2(t) \int\displaylimits_{t}^{\frac{T}{2}-t} \Big( \frac{\partial F}{\partial x_\ell}(\vec{x}^*(T_\ell+\tau)) \Phi_\ell(T_\ell,T_\ell+\tau) \\ \hspace{79pt} \cdot u_1(\tau) g_0(F(\vec{x}^*(T_\ell+\tau))) \Big) \mathrm{d}\tau \mathrm{d}t \\ \vec{0}_{n-\ell} \end{bmatrix}\hspace{-2pt}. \nonumber \end{align} \label{TheoremSI} \end{theorem} \begin{sketchofproof} The proof of Theorem \ref{TheoremSI} follows closely the lines of the proof of Theorem \ref{TheoremGU}. The main idea is again to sample the input functions in dimension $i$ by an even number of needles in the interval $[(i-1)T,iT)$. 
Consider $\vec{\Phi}(t,t_0)$, the STM corresponding to \begin{equation} \vec{\dot{v}}(t) = \vec{u}_1(t)\frac{\partial g_1}{\partial F}(F(\vec{x}^*(t))) \nabla F(\vec{x}^*(t))^\top \vec{v}(t). \end{equation} With the fact that the needles are applied in one dimension at a time, and the principles of variational calculus given in Section \ref{VariationalCalculus}, one can express the solution of \eqref{system_mult} after $\ell T$ as \begin{align} &\vec{x}(\ell T) = \vec{x}_0 + \mathcal{O}(\ell T^2) \label{xnTSI1}\\ &+ \epsilon \sum_{i=1}^{\ell} \sum_{j=1}^{2N} u_2(j\epsilon) \vec{\Phi}(\ell T,T_{i}+j\epsilon) \begin{bmatrix} \vec{0}_{i-1} \\ g_2(\vec{x}^*(T_{i}+j\epsilon)) \\ \vec{0}_{n-i} \end{bmatrix}. \nonumber \end{align} Because of the point-symmetries in $\vec{u}_1(t)$, it holds that $\vec{\Phi}(T_{i+1},T_i)=\vec{I}$ for all $i$. With the semi-group property of the STM, and the results presented in Section \ref{SpecPertInp}, one can replace $\vec{\Phi}(\ell T,T_{i}+j\epsilon)$ in \eqref{xnTSI1} by the scalar $\Phi_i(T_{i+1},T_{i}+j\epsilon)$. Again, we apply the symmetry property of $u_2(t)$ and write the $\mathcal{O}(\epsilon)$ terms as an integral. Letting $\epsilon\rightarrow 0$ proves \eqref{xnTSI}. \end{sketchofproof} Theorem \ref{TheoremSI} shows that using the special sequential sequence as defined in \eqref{u_mult}, a component-wise and decoupled weighting and averaging of the gradient is performed. Furthermore, formula \eqref{xnTSI} reveals that the LD move primarily along the dimension where the scalar input was applied (as the integral term indicates). However, due to the non-vanishing remainder, the system moves slightly along the other dimensions as well. \section{Numerical Evaluation} \label{simulations} In this section, we compare the simulated LD $x(kT)_\mathrm{Sim}$ of an ES system with the result of the recursion \eqref{grad_descent} and \eqref{grad_approx}, denoted by $x(kT)_\mathrm{Rec}$. 
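How such a comparison can be set up is sketched below (a toy illustration under assumed choices, namely $F(x)=\frac{1}{2}x^2$, $g_1(F)=F$, $g_2(F)=-1$, and trigonometric dithers; it is not the recursion \eqref{grad_approx} itself): the LD are extracted by sampling the simulated state at multiples of $T$, and each sampled increment is compared with the first-order gradient step $-\frac{T}{2}F'(x(kT))$ of the gradient flow.

```python
import math

# Toy sketch (illustration only, assumed setup: F(x) = x^2/2,
# g1(F) = F, g2(F) = -1, trigonometric dithers): extract the LD by
# sampling the ES state at multiples of T and compare each increment
# with the first-order gradient step -(T/2) F'(x(kT)).

def rhs(t, x, w):
    sw = math.sqrt(w)
    return 0.5 * x * x * sw * math.sin(w * t) - sw * math.cos(w * t)

def es_period(x0, w, n_sub=500):
    """Advance the ES system by one dither period T (RK4 steps)."""
    dt = (2.0 * math.pi / w) / n_sub
    x = x0
    for i in range(n_sub):
        t = i * dt   # dithers are T-periodic, so the phase may restart at 0
        k1 = rhs(t, x, w)
        k2 = rhs(t + dt / 2, x + dt / 2 * k1, w)
        k3 = rhs(t + dt / 2, x + dt / 2 * k2, w)
        k4 = rhs(t + dt, x + dt * k3, w)
        x += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

w = 2.0 * math.pi * 100.0
T = 2.0 * math.pi / w
x = 1.8
for k in range(5):
    x_next = es_period(x, w)
    grad_step = -(T / 2.0) * x          # -(T/2) F'(x(kT)) for F = x^2/2
    print(k, x_next - x, grad_step)     # increments nearly agree
    x = x_next
```

The sampled increments track the gradient step up to the $\mathcal{O}(T^2)$ remainder discussed in the main results.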
We consider the system from \cite{Duerr13} \begin{equation} \dot{x}(t) = F(x(t))\sqrt{\omega}\sin(\omega t) - a\sqrt{\omega}\cos(\omega t), \label{system_example} \end{equation} with initial condition $x(0)=1.8$. The parameter $\omega=\frac{2\pi}{T}$ will be varied to illustrate its various effects. \begin{figure}[h] \centering \input{Abbildungen/error1.tex} \caption{The absolute error between the simulated LD and the recursion (dashed), and the size of the tube around the LD (solid), with $F_1(x)$, $a=5$.} \label{error} \end{figure} \begin{example} Consider \eqref{system_example} together with the quadratic function $F_1(x)=\frac{1}{2}x^2$. Fig. \ref{error} shows the absolute error $|x(kT)_\mathrm{Rec}-x(kT)_\mathrm{Sim}|$ between the simulated LD and the recursion for system \eqref{system_example} for various $T$. The chosen period times differ by a factor of $10$, as do the errors at any fixed time point $t$. This observation matches the theory very well, e.g., consider $T_1=10 T_2$. Then using the larger period time $T_1$ leads to an error at time $T_1$ of $\mathcal{O}(T_1^2)$, whereas performing the recursion $10$ times with $T_2$ causes an error at time $T_1$ of $\mathcal{O}(10T_2^2)=\mathcal{O}(0.1T_1^2)$. In \cite{Duerr13}, the LD were shown to remain in a tube around the gradient flow system. Observe from Fig. \ref{error} that the recursion gives a much more accurate estimate of the ES system's LD. \end{example} \begin{figure}[h] \centering \input{Abbildungen/F.tex} \input{Abbildungen/xkT1D.tex} \input{Abbildungen/Lomega.tex} \caption{The non-convex test function $F_2(x)$ (top), the simulated LD $x(kT)_\mathrm{Sim}$ (solid) and the recursion $x(kT)_\mathrm{Rec}$ (dashed) with $F_2(x)$ (middle), and the simulated $L_\omega(x)$ corresponding to $F_2(x)$ (bottom), $a=20$.} \label{xkT1D} \end{figure} \begin{example} This example demonstrates the recursion's ability to represent the simulated LD for non-convex functions.
Consider the function $F_2(x)$ as depicted in Fig. \ref{xkT1D} (top). It has a sharp local minimum between $x_0$ and its global minimum at $x=0$, and is a quadratic function elsewhere. In Fig. \ref{xkT1D} (middle), the simulated LD and the recursion are depicted. For $T=0.1,0.01$, both the simulation and the recursion converge globally, and for $T=0.0001$, both get stuck in the local minimum. However, for $T=0.001$, the simulation passes through the local minimum while the recursion gets stuck. We can infer that in borderline cases, when the ES system barely converges globally, the recursion's informative value is limited due to its intrinsic uncertainty. Nonetheless, for clearer cases, the recursion displays the actual LD very well. Fig. \ref{xkT1D} (bottom) shows $L_\omega(x)$ generated by the simulated $x(kT)_\mathrm{Sim}$, where \eqref{system_example} was treated with $F_2(x)$. The function $L_\omega(x)$ was numerically integrated using the Euler method from \eqref{grad_descent} with a normalized scaling. This example illustrates the property that $L_\omega(x)$ can become convex for certain $\omega$ although $F_2(x)$ is not. It can be observed that for higher $T$, $L_\omega(x)$ is a convex function, whereas for small $T$, it shows a local minimum similar to $F_2(x)$. Consequently, when starting to the right of this local minimum, $x(kT)_\mathrm{Sim}$ does not converge near the global minimizer of $F_2(x)$. \end{example} \begin{figure}[h] \centering \input{Abbildungen/xkT2D.tex} \caption{The two-dimensional LD using $F_3(\vec{x})$ and $T=0.01$. The value of $F_3(\vec{x})$ is color-coded from yellow (high) to green (low), $a=10$.} \label{xkT2D} \end{figure} \begin{example} The third example is devoted to the two-dimensional extension of \eqref{system_example}, where the one-dimensional sine-waves are applied sequentially according to \eqref{u_mult}. 
The initial condition $\vec{x}(0) = [1.8,1.8]^\top$ and the convex function $F_3(\vec{x})=\frac{1}{2}(x_1^2+x_2^2)$ are used. In Fig. \ref{xkT2D}, simulation results of this setup are depicted. The staircase-shaped LD predicted by \eqref{xnTSI} are clearly visible. Over the first few time periods, the recursion gives a very precise approximation of the LD. However, the approximation error accumulates, so that the recursion becomes less and less precise over time. Note that this observation does not contradict our main results. Nonetheless, the recursion converges near the minimizer at the origin, just like the simulated LD. \end{example} \section{Summary} \label{conclusion} In this paper, we gave an explicit recursion for the LD of scalar ES systems with static maps. This recursion approximately quantifies the gradient information recovered by ES, and reveals that it corresponds to an averaged gradient of the objective function. As this property holds without strong restrictions on the objective function, the recursion is also able to represent convergence of the LD to the global minimum, despite the presence of local minima. Furthermore, we presented a special multidimensional dither sequence and showed that an ES system, treated with this sequence, moves along an averaged gradient as well. Finally, we illustrated and verified our results in simulations. Since ES is generally concerned with dynamic maps, an extension of the presented analysis to this case seems worthwhile. \section{Appendix} \subsection{Proof of Theorem \ref{TheoremGU}} \label{AppProof1} \begin{proof} The proof relies on the principles of variational calculus presented in Section \ref{VariationalCalculus}.
We define \begin{equation} \bar{u}_2^{(j)}(t) = \begin{cases} \bar{u}_2(t) & t\in[0,(j-1)\epsilon) \\ 0 & t\notin [0,(j-1)\epsilon) \end{cases}, \; j=1,\ldots,2N+1, \end{equation} consisting of the first $(j-1)$ needles and denote \begin{itemize} \item $x^{(j)}(t)$ the solution of \eqref{system} when $u_1(t)$ and $\bar{u}_2^{(j)}(t)$ are applied, \item $v^{(j)}(t)$ the variational variable which describes the variation of $x^{(j+1)}(t)$ from $x^{(j)}(t)$. \end{itemize} Consider system \eqref{system}, where $\bar{u}_2^{(j)}(t)$ is applied, and its solution $x^{(j)}(t)$. Now apply $\bar{u}_2^{(j+1)}(t)$. Then, $\bar{u}_2^{(j)}(t)$ corresponds to the nominal input that is perturbed on \mbox{$t\in[(j-1)\epsilon,j\epsilon)$}, and $x^{(j)}(t)$ to the nominal trajectory, such that \begin{equation} x^{(j+1)}(t)=x^{(j)}(t)+\epsilon v^{(j)}(t)+\mathcal{O}(\epsilon^2). \label{xvarproof} \end{equation} The variational variable $v^{(j)}(t)$ evolves as \begin{equation} \dot{v}^{(j)}(t) = u_1(t)\frac{\partial g_1}{\partial F}(F(x^{(j)}(t)))\frac{\partial F}{\partial x}(x^{(j)}(t)) v^{(j)}(t) \label{vareqproof} \end{equation} and has initial condition \mbox{$v^{(j)}(j\epsilon) = g_2(F(x^{(j)}(j\epsilon)))u_2(j\epsilon)$}, since $\bar{u}_2^{(j)}(j\epsilon)=0$. The proof is subdivided into four parts. In the first part, we will derive a representation of the solution of \eqref{system} after one dither period $T$ via variational calculus, where the terms linear in $\epsilon$ will be expressed in terms of STMs. Second, we will derive a useful symmetry property of these STMs. Using this, the terms of order $\mathcal{O}(\epsilon)$ will be expressed as an integral to prove \eqref{xTNfinGU} in the third part. In the last part, we let the length of the needles tend to zero to show \eqref{xTNinfGU}.
\textbf{1):} We first show by induction that when $\bar{u}_2^{(j+1)}(t)$ is applied to the system, the solution of \eqref{system} is \begin{align} x^{(j+1)}(t) &= x^*(t) + \epsilon \sum_{i=1}^{j} \Phi(t,i\epsilon) g_2(F(x^*(i\epsilon)))u_2(i\epsilon) \nonumber \\ &+\sum_{i=1}^{j} \Big(R_i(\epsilon)+S_i(\epsilon)\Big), \label{xjinductionGU} \end{align} for $t\in[j\epsilon,T]$, where each of the remainders $R_i(\epsilon)=\mathcal{O}(\epsilon^2)$ and each $S_i(\epsilon)=\mathcal{O}(2(i-1)\epsilon^2)$. \textit{Step 1: Basis} By the principles of variational calculus presented in Section \ref{VariationalCalculus}, applying $\bar{u}_2^{(2)}(t)$, we obtain for the solution of \eqref{system} \begin{equation} x^{(2)}(t) = \underbrace{x^{(1)}(t)}_{=x^*(t)} + \epsilon v^{(1)}(t) + R_1(\epsilon), \end{equation} for $t\in [\epsilon,T]$. For the remainder it holds $R_1(\epsilon)=\mathcal{O}(\epsilon^2)$. As $x^{(1)}(t)=x^*(t)$, the variational variable $v^{(1)}(t)$ fulfills \eqref{vareqGU} with initial condition $v^{(1)}(\epsilon)=g_2(F(x^*(\epsilon)))u_2(\epsilon)$. Since $\Phi(t,t_0)$ is the STM of \eqref{vareqGU}, we can express $x^{(2)}(t)$ by \begin{equation} x^{(2)}(t) = x^*(t) + \epsilon\Phi(t,\epsilon)g_2(F(x^*(\epsilon)))u_2(\epsilon) + R_1(\epsilon), \label{xepsilon} \end{equation} for $t\in [\epsilon,T]$, which is \eqref{xjinductionGU} for $j=1$, since $S_1(\epsilon)=\mathcal{O}(2(1-1)\epsilon^2)=0$. \textit{Step 2: Inductive Step} Assume that \eqref{xjinductionGU} holds when $\bar{u}_2^{(j)}(t)$ is applied, i.e., that the solution $x^{(j)}(t)$ reads \begin{align} x^{(j)}(t) &= x^*(t) + \epsilon \sum_{i=1}^{j-1} \Phi(t,i\epsilon) g_2(F(x^*(i\epsilon)))u_2(i\epsilon) \nonumber \\ &+\sum_{i=1}^{j-1}\Big(R_i(\epsilon)+S_i(\epsilon)\Big), \label{xjinductionGU2} \end{align} for $t\in[(j-1)\epsilon,T]$. Consider $v^{(j)*}(t)$, which fulfills \eqref{vareqGU} and has initial condition $v^{(j)*}(j\epsilon) = g_2(F(x^*(j\epsilon)))u_2(j\epsilon)$. Next, we apply $\bar{u}_2^{(j+1)}(t)$.
With Lemma \ref{LemmaMultNeed} (see Appendix \ref{AppLemma1}), equation \eqref{xvarproof} becomes \begin{align} &x^{(j+1)}(t) = x^{(j)}(t) + \epsilon v^{(j)*}(t) + \underbrace{\epsilon v_R^{(j)}(\epsilon)}_{\eqqcolon S_j(\epsilon)} + R_j(\epsilon) \nonumber \\ &\stackrel{\eqref{xjinductionGU2}}{=} x^*(t) + \epsilon \sum_{i=1}^{j-1} \Phi(t,i\epsilon) g_2(F(x^*(i\epsilon)))u_2(i\epsilon)+\epsilon v^{(j)*}(t) \nonumber \\ &+\sum_{i=1}^{j-1}\Big(R_i(\epsilon)+S_i(\epsilon)\Big)+R_j(\epsilon)+S_j(\epsilon), \end{align} for $t \in[j\epsilon,T]$. Again, it holds that $R_j(\epsilon)=\mathcal{O}(\epsilon^2)$. Note that $x^{(j)}(t) = x^*(t) + \mathcal{O}((j-1)\epsilon)$, because according to \eqref{xjinductionGU}, each of the summands in the first sum is $\mathcal{O}(\epsilon)$. With Lemma \ref{LemmaMultNeed}, it holds that \mbox{$v_R^{(j)}(\epsilon)=\mathcal{O}(2(j-1)\epsilon)$} and thus \mbox{$S_j(\epsilon) = \epsilon v_R^{(j)}(\epsilon)=\mathcal{O}(2(j-1)\epsilon^2)$}. Using the STM of \eqref{vareqGU}, we obtain \begin{align} x^{(j+1)}(t) &= x^*(t) + \epsilon \sum_{i=1}^{j} \Phi(t,i\epsilon) g_2(F(x^*(i\epsilon)))u_2(i\epsilon) \nonumber \\ &+\sum_{i=1}^{j}\Big(R_i(\epsilon)+S_i(\epsilon)\Big), \end{align} for $t\in[j\epsilon,T]$, which is exactly \eqref{xjinductionGU}. Using \eqref{xjinductionGU}, we can express the solution of \eqref{system} at $t=T$, when $\bar{u}_2^{(2N+1)}(t)=\bar{u}_2(t)$ was applied to the system, as \begin{align} x(T) &= x^{(2N+1)}(T) \nonumber \\ &= x^*(T) + \epsilon \sum_{i=1}^{2N} \Phi(T,i\epsilon) g_2(F(x^*(i\epsilon)))u_2(i\epsilon) \nonumber \\ &+\sum_{i=1}^{2N}\Big(R_i(\epsilon)+S_i(\epsilon)\Big). \label{xTGU1} \end{align} In the next step, we will give a precise assessment of the remainder terms in \eqref{xTGU1}. We have established that the individual terms can be estimated as $R_i(\epsilon)=\mathcal{O}(\epsilon^2)$ and \mbox{$S_i(\epsilon)=\mathcal{O}(2(i-1)\epsilon^2)$}. 
Taking the summation of the remainder terms into account, the overall remainder is assessed as \begin{align} \sum_{i=1}^{2N} \Big(R_i(\epsilon) + S_i(\epsilon)\Big) &= \mathcal{O}(\big(2N+\sum_{i=1}^{2N}2(i-1)\big)\epsilon^2) \nonumber \\ &= \mathcal{O}((2N)^2\epsilon^2) \stackrel{\epsilon = \frac{T}{2N}}{=} \mathcal{O}(T^2). \end{align} Recall that because of \textit{A3} and the even number of needles, the sampled dither $\bar{u}_2(t)$ is comprised of ``needle pairs'' of the same amplitude but opposite sign. In the following, we will formalize this idea and carve out its effect on $x(T)$. First we split the sum in \eqref{xTGU1} \begin{align} x(T) &= x^*(T) + \epsilon \sum_{i=1}^{N} \Phi(T,i\epsilon) g_2(F(x^*(i\epsilon)))u_2(i\epsilon) \nonumber \\ &+\epsilon \sum_{i=N+1}^{2N} \Phi(T,i\epsilon) g_2(F(x^*(i\epsilon)))u_2(i\epsilon)+\mathcal{O}(T^2), \end{align} then use that $u_2(i\epsilon)=-u_2(i\epsilon-\frac{T}{2})$ in the second sum due to \textit{A3}, and perform an index shift to obtain \begin{align} x(T) &= x^*(T) + \epsilon \sum_{i=1}^{N} u_2(i\epsilon)\Big( \Phi(T,i\epsilon) g_2(F(x^*(i\epsilon))) \nonumber \\ &- \Phi(T,\tfrac{T}{2}+i\epsilon) g_2(F(x^*(\tfrac{T}{2}+i\epsilon))) \Big) +\mathcal{O}(T^2). \label{xTGU2} \end{align} \textbf{2):} The second part of the proof is dedicated to finding a simpler form of the STMs occurring in \eqref{xTGU2}. Because of \textit{A2}, it holds for the dither that $u_1(t)=-u_1(T-t)$. As a consequence, the nominal solution goes along the vector field $g_1(F(x^*(t)))u_1(t)$ in $[0,\frac{T}{2}]$, and back along the same vector field in $[\frac{T}{2},T]$. Then the nominal solution fulfills \begin{equation} x^*(t)=x^*(T-t).
\label{xsymmetryGU} \end{equation} Using this symmetry property and $u_1(t)=-u_1(T-t)$ on the definition of $\Phi(t,t_0)$ yields \begin{align} &\Phi(t,t_0) =\exp\left( \int_{t_0}^{t} u_1(\tau)\frac{\partial g_1}{\partial F}(F(x^*(\tau))) \frac{\partial F}{\partial x}(x^*(\tau)) \mathrm{d}\tau \right) \nonumber \\ &\stackrel{\phantom{s\coloneqq T-\tau}}{=} \exp\left( \int_{T-t_0}^{T-t} u_1(s)\frac{\partial g_1}{\partial F}(F(x^*(s))) \frac{\partial F}{\partial x}(x^*(s)) \mathrm{d}s \right) \nonumber \\ &\stackrel{\phantom{s\coloneqq T-\tau}}{=} \Phi(T-t,T-t_0). \label{PhisymmetryGU} \end{align} The relation \eqref{PhisymmetryGU} and the semi-group property of the STM is used to establish that, if $0\le i\epsilon\le\frac{T}{2}$, \begin{align} \Phi(T,i\epsilon)&\stackrel{\phantom{\eqref{PhisymmetryGU}}}{=}\Phi(T,\tfrac{T}{2})\Phi(\tfrac{T}{2},i\epsilon) \nonumber \\ &\stackrel{\eqref{PhisymmetryGU}}{=} \Phi(0,\tfrac{T}{2})\Phi(\tfrac{T}{2},i\epsilon) = \Phi(0,i\epsilon) \label{PhiGU1} \end{align} and \begin{equation} \Phi(T,\tfrac{T}{2}+i\epsilon)\stackrel{\eqref{PhisymmetryGU}}{=} \Phi(0,\tfrac{T}{2}-i\epsilon). \label{PhiGU2} \end{equation} \textbf{3):} In the third section of the proof, we combine the results of the first two sections to come up with \eqref{xTNfinGU}. 
We use the identities \eqref{xsymmetryGU}, \eqref{PhiGU1} and \eqref{PhiGU2} in \eqref{xTGU2} to write \begin{align} &x(T) = \underbrace{x^*(T)}_{\stackrel{\eqref{xsymmetryGU}}{=}x^*(0)=x_0}+\mathcal{O}(T^2) \nonumber \\ &+ \epsilon \sum_{i=1}^{N} u_2(i\epsilon)\bigg( \underbrace{\Phi(T,i\epsilon)}_{\stackrel{\eqref{PhiGU1}}{=}\Phi(0,i\epsilon)} g_2(F(x^*(i\epsilon))) \nonumber \\ &\hspace{67pt}- \underbrace{\Phi(T,\tfrac{T}{2}+i\epsilon)}_{\stackrel{\eqref{PhiGU2}}{=}\Phi(0,\tfrac{T}{2}-i\epsilon)} g_2(F(\underbrace{x^*(\tfrac{T}{2}+i\epsilon)}_{\stackrel{\eqref{xsymmetryGU}}{=}x^*(\tfrac{T}{2}-i\epsilon)})) \bigg) \nonumber \\ &= x_0 + \mathcal{O}(T^2) \label{xTGU4} \\ &- \epsilon\sum_{i=1}^{N} u_2(i\epsilon)\int\displaylimits_{i\epsilon}^{\frac{T}{2}-i\epsilon} \frac{\mathrm{d}}{\mathrm{d} \tau}\bigg(\Phi(0,\tau)g_2(F(x^*(\tau)))\bigg)\mathrm{d}\tau. \nonumber \end{align} Note that writing the term linear in $\epsilon$ as an integral was only possible as there existed ``needle pairs'' of opposite sign. Next, we concentrate on simplifying the integral. Using the product rule, the differentiation property of the STM, and the chain rule, the integral becomes \begin{align} &\hspace{-3pt}\int\displaylimits_{i\epsilon}^{\frac{T}{2}-i\epsilon} \hspace{-6pt} \Phi(0,\tau) \frac{\mathrm{d} g_2}{\mathrm{d} \tau}(F(x^*(\tau))) \hspace{-2pt} + \hspace{-2pt} \frac{\mathrm{d}}{\mathrm{d} \tau}\big(\Phi(0,\tau)\big)g_2(F(x^*(\tau))) \mathrm{d}\tau \nonumber \\ &=\int\displaylimits_{i\epsilon}^{\frac{T}{2}-i\epsilon} \Phi(0,\tau) \frac{\partial g_2}{\partial F}(F(x^*(\tau)))\frac{\partial F}{\partial x}(x^*(\tau))\dot{x}^*(\tau) \\ & \hspace{0pt} -\Phi(0,\hspace{-1pt}\tau) u_1(\tau)\hspace{-1pt} \frac{\partial g_1}{\partial F}\hspace{-1pt}(F(x^*(\tau)\hspace{-1pt})\hspace{-1pt})\frac{\partial F}{\partial x}(x^*(\tau)\hspace{-1pt}) g_2(F(x^*(\tau)\hspace{-1pt})\hspace{-1pt}) \mathrm{d}\tau. 
\nonumber \end{align} The term $\dot{x}^*(\tau)$ is the right-hand side of the nominal differential equation \eqref{nomsystem_GU}. Using this, we can write the integral \begin{equation} \int\displaylimits_{\epsilon}^{\frac{T}{2}-\epsilon} \frac{\partial F}{\partial x}(x^*(\tau))\Phi(0,\tau)u_1(\tau)\underbrace{\bigg(\frac{\partial g_2}{\partial F}g_1-\frac{\partial g_1}{\partial F}g_2\bigg)}_{\stackrel{\eqref{LieBr}}{=}-g_0(F(x^*(\tau)))} \mathrm{d}\tau, \label{int_final} \end{equation} omitting some arguments. Using \eqref{int_final} in \eqref{xTGU4} proves \eqref{xTNfinGU}. \textbf{4):} In the fourth part of the proof, we perform the limit process of letting $N$ tend to infinity. We define \begin{equation} t_i \coloneqq i\epsilon \end{equation} to express the limit of the first-order term in \eqref{xTNfinGU} as \begin{align} \lim_{N\rightarrow \infty} \sum_{i=1}^{N} &\bigg( (t_i-t_{i-1}) u_2(t_i) \label{riemann}\\ &\cdot\int\displaylimits_{t_i}^{\frac{T}{2}-t_i} \frac{\partial F}{\partial x}(x^*(\tau)) \Phi(0,\tau) u_1(\tau) g_0(F(x^*(\tau))) \mathrm{d}\tau \bigg). \nonumber \end{align} Note that as $u_2$ is continuous and bounded due to \textit{A1}, this is the limit of a Riemann sum \cite{Trench03} with partition $\{t_i\}$, which converges to the Riemann integral. Therefore, the first-order term \eqref{riemann} becomes \begin{equation} \int\displaylimits_{0}^{\frac{T}{2}} u_2(t) \int\displaylimits_{t}^{\frac{T}{2}-t} \frac{\partial F}{\partial x}(x^*(\tau)) \Phi(0,\tau) u_1(\tau) g_0(F(x^*(\tau))) \mathrm{d}\tau \mathrm{d}t, \end{equation} which gives \eqref{xTNinfGU} and completes the proof. \end{proof} \subsection{Auxiliary Lemma \ref{LemmaMultNeed}} \label{AppLemma1} \begin{lemma} Consider $v^{(j)}(t)$ and $v^{(j)*}(t)$ as defined in the proof of Theorem \ref{TheoremGU}. Suppose that \mbox{$x^{(j)}(t)=x^*(t)+\mathcal{O}(i\epsilon)$}. 
Then, \begin{equation} v^{(j)}(t) = v^{(j)*}(t) + v^{(j)}_R(\epsilon), \end{equation} where the remainder satisfies $v^{(j)}_R(\epsilon)=\mathcal{O}(2i\epsilon)$. \label{LemmaMultNeed} \end{lemma} \begin{proof} We perform a Taylor expansion of the system matrix of \eqref{vareqproof} and of the initial condition of $v^{(j)}(t)$ about $x^*$, where we use that $x^{(j)}=x^*+\mathcal{O}(i\epsilon)$. This yields \begin{align} A(x^{(j)},t) &= A(x^*+\mathcal{O}(i\epsilon),t) = A(x^*,t) + \mathcal{O}(i\epsilon), \\ g_2(x^{(j)}) &= g_2(x^*+\mathcal{O}(i\epsilon)) = g_2(x^*) + \mathcal{O}(i\epsilon). \end{align} As a result, the solution of the variational equation is \begin{align} v^{(j)}(t) = \exp\bigg(\int_{t_0}^t &A(x^*(\tau),\tau) + \mathcal{O}(i\epsilon) \mathrm{d}\tau \bigg) \nonumber\\ &\cdot\Big(g_2(x^*(j\epsilon))+ \mathcal{O}(i\epsilon)\Big) u_2(j\epsilon). \end{align} Using the Taylor expansion for $\exp\left(\int_{t_0}^t \mathcal{O}(i\epsilon) \mathrm{d}\tau \right)$ gives \begin{align} &v^{(j)}(t) = \exp\left(\int_{t_0}^t A(x^*(\tau),\tau)\mathrm{d}\tau \right)(1+\mathcal{O}(i\epsilon)) \nonumber\\ &\hspace{80pt} \cdot\Big(g_2(x^*(j\epsilon))+\mathcal{O}(i\epsilon)\Big) u_2(j\epsilon) \nonumber \\ &=\exp\left(\int_{t_0}^t A(x^*(\tau),\tau)\mathrm{d}\tau\right) g_2(x^*(j\epsilon))u_2(j\epsilon) + \mathcal{O}(2i\epsilon). \end{align} We note that the first term corresponds to $v^{(j)*}(t)$ and the remainder term satisfies $v^{(j)}_R(\epsilon)=\mathcal{O}(2i\epsilon)$. \end{proof} \bibliographystyle{IEEEtran}
\section{Introduction} The second {\em Fermi} Large Area Telescope (LAT) catalog \citep{nolan12_2fgl} contains 1,298 identified or associated sources, of which 84\% are Active Galactic Nuclei (AGN) of some flavor or another, mostly blazars. Of the 575 unidentified sources in this catalog, 27\% have since been associated with blazars based on analysis of their infrared (IR) colors, as observed by the {\em Wide Field Infrared Survey Explorer (WISE)} \citep{massaro12_IR}. Blazars dominate the $\gamma$-ray sky in terms of sheer number of sources. \subsection{Basic Blazar Physics} Blazars are thought to be powered by accretion onto supermassive ($M\sim 10^6-10^9\ M_{\odot}$) black holes at the center of what seem to be almost entirely elliptical galaxies \citep[e.g.,][]{bahcall97,boyce98,urry00}. Jets are produced perpendicular to the accretion disk, probably through magnetic fields wound up by the spin of the black hole \citep{blandford77}. The jets are closely aligned to our line of sight, the defining property of blazars. The jets move at speeds close to the speed of light, $c$, with Lorentz factors $\Gamma = (1-\beta^2)^{-1/2}\sim 10$, where the jet speed $v=\beta c$. These high jet speeds can be inferred from several pieces of evidence: their extreme radio surface brightnesses, which would require extreme energy densities if produced by synchrotron from stationary sources \citep{jones74_1,jones74_2}; the superluminal apparent speeds ($v_{app}=\beta_{app}c$) of components seen with radio very long baseline interferometry \citep[VLBI; e.g.,][]{lister09}; and the detection of rapid $\gamma$-ray flares, which implies the source must be moving with $\Gamma \gg 1$ to avoid $\gamma\g$ attenuation \citep[e.g.,][]{dondi95}. With a small angle, $\theta$, to the line of sight they have Doppler factors given by $\delta = [\Gamma(1-\beta\cos\theta)]^{-1}$.
The observed $\nu F_\nu$ synchrotron flux ($f_{sy}$) of a rapidly moving source compared to what its flux would be if it were stationary ($f_{sy}^{\prime}$) is given by $f_{sy}=\delta^4 f_{sy}^{\prime}$. A blazar jet with fiducial values $\Gamma= 10$ and $\theta=1/\Gamma$ will have $\delta = 10$ and so will be $10^4$ times brighter than what it would be if it were stationary. The radiation is said to be beamed in the direction of the jet's motion, and this accounts for the extreme brightness of blazars. The rapid variability observed in blazars at all wavelengths, from GHz radio frequencies to TeV $\gamma$-rays, implies emission from a compact region. If a compact region of plasma (the ``blob'') is assumed to be a sphere with radius $R^\prime$ in the frame co-moving with the blob, then the variability timescale ($t_v$; the approximate time it takes the flux to double) and light travel-time arguments give the constraint \begin{eqnarray} \label{size} R^\prime \le \delta c t_v/(1+z) = 3\times10^{15}\ \delta_1 t_{v,4} (1+z)^{-1}\ \mathrm{cm}\ . \end{eqnarray} Here and everywhere, primed quantities refer to the co-moving frame. I have used the notation that $A_x = 10^x A$ and all variables are in Gaussian/cgs units unless otherwise stated. VLBI imaging of blazar jets often reveals individual knots \citep[e.g.][]{piner04,lister09}, further evidence that jets consist of discrete components. Electrons are accelerated, probably by shocks internal to the jet, to form power-law distributions, $N(\gamma^{\prime})\propto \gamma^{\prime -p}$. In a magnetic field these electrons emit synchrotron radiation, which almost certainly is responsible for the low-energy emission in blazars, peaking in the infrared through X-ray. The $\gamma$-ray emission from blazars is less clear but probably originates from Compton scattering either of the synchrotron radiation (synchrotron self-Compton or SSC) or some external radiation field (external Compton or EC).
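As a numerical aside, the fiducial beaming numbers and the size constraint of Equation (\ref{size}) above can be checked directly (the function and variable names below are illustrative):

```python
import math

def doppler(Gamma, theta):
    """Doppler factor delta = [Gamma (1 - beta cos(theta))]^(-1)."""
    beta = math.sqrt(1.0 - 1.0 / Gamma**2)
    return 1.0 / (Gamma * (1.0 - beta * math.cos(theta)))

c = 3.0e10           # speed of light [cm/s]
Gamma = 10.0         # fiducial bulk Lorentz factor
delta = doppler(Gamma, 1.0 / Gamma)   # viewing angle theta = 1/Gamma

print(round(delta, 2))    # ~10: delta ~ Gamma when theta = 1/Gamma
print(f"{delta**4:.1e}")  # ~1e4 beaming boost of the nu F_nu flux

# Equation (size): R' <= delta c t_v / (1+z), with t_v = 10^4 s, z = 0
t_v, z = 1.0e4, 0.0
print(f"{delta * c * t_v / (1.0 + z):.1e} cm")  # ~3e15 cm
```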
The external radiation field could be from a thermal accretion disk, a broad line region (BLR), or a dust torus. It is also possible there could be a $\gamma$-ray component from emission by protons accelerated in the jet. Both leptonic and hadronic emission models in blazars are reviewed by \citet{boett07,boett12}. \subsection{Classification} Blazars are sub-divided into Flat Spectrum Radio Quasars (FSRQs) and BL Lacertae objects based on their optical spectrum, with sources with weak or absent broad emission lines being BL Lacs, and those with stronger broad emission lines being FSRQs. Blazars are further classified based on $\nu_{pk}^{sy}$, the frequency of their synchrotron peak in a $\nu F_{\nu}$ representation. Most recently, they were classified by \citet{abdo10_sed} as low synchrotron peaked (LSP) if $\nu_{pk}^{sy} < 10^{14}\ \mathrm{Hz}$, intermediate synchrotron peaked (ISP) if $10^{14}\ \mathrm{Hz}<\nu_{pk}^{sy}<10^{15}\ \mathrm{Hz}$, or high synchrotron peaked (HSP) if $\nu_{pk}^{sy}>10^{15}\ \mathrm{Hz}$. Almost all FSRQs are LSPs \citep{ackermann11_2lac}. BL Lacs are generally thought to be the aligned counterpart to FR Is, while FSRQs are generally thought to be the aligned counterpart to FR IIs \citep[e.g.,][]{urry95}, although some exceptions exist \citep[e.g.,][]{landt04}. \section{Blazar Sequence} \subsection{The Origin of the Sequence} \label{sequence_origin} One of the great accomplishments of twentieth century astrophysics is the understanding of stars. We now understand their power source, how much radiation they produce and their spectra, how this depends on their mass and chemical composition, and how it evolves with time. It is worth taking the time to think about the question: How is it that we understand stars so well, yet we understand blazars so poorly?
Why do we not have a good understanding of how blazars' emission and spectra depend on fundamental parameters (black hole mass, black hole spin, or other parameters), how they evolve with time, and so forth? Stars are isotropic emitters, and appear mostly the same no matter which direction one is looking at them. For blazars, this is obviously not the case. Stars tend to have relatively constant emission on human timescales, or, if they are variable, the variability is predictable (e.g., Cepheid variables or RR Lyrae stars). Blazars are highly variable at all wavelengths across the electromagnetic spectrum on time scales as short as hours or even minutes \citep[e.g.,][]{aharonian07_2155}, and the variability is apparently stochastic. Globular clusters played an important role in the understanding of stars, since one can safely assume that all of the stars in the cluster have been created at about the same time. There is no similar method for figuring out the relative ages of blazars. Finally, one can determine the composition, temperature, and density of stellar photospheres from their optical spectra; as the jets of blazars are fully-ionized, spectral lines are not expected, and they have no similar diagnostic. One of the most useful tools in stellar astrophysics is the Hertzsprung-Russell diagram, which relates the luminosity to the optical spectral type (related to temperature and color) of stars and includes the very prominent main sequence, on which stars spend a large fraction of their lifetimes. This diagram has led to enormous success in the understanding of stars, so that one is greatly tempted to find a similar diagram for blazars.
The possible discovery of a ``blazar main sequence'' or ``blazar sequence'' was made by \citet{fossati98}, combining three samples of blazars: a sample of FSRQs \citep[from the 2 Jy sample of][]{wall85}, a radio-selected sample of BL Lacs \citep[from the 1 Jy sample of][]{kuhr81}, and an X-ray selected sample of BL Lacs \citep[from the Einstein Slew Survey;][]{elvis92}. They found three parameters that appeared to be well-correlated with the peak of the blazar synchrotron component: the 5 GHz radio luminosity, the luminosity at the peak of the synchrotron component, and the ``$\gamma$-ray dominance'', i.e., the ratio of the $\gamma$-ray luminosity (as measured by EGRET) and the peak luminosity of the synchrotron component. Could one or all of these sequences hold the same place in blazar phenomenology that the stellar main sequence holds in stellar phenomenology? \citet{ghisellini98} provided a physical explanation for the correlations, or sequence, found by \citet{fossati98}. For nonthermal electrons accelerated as power-laws and allowed to escape a region of size $R^\prime$ and cool through synchrotron and Compton losses, a ``cooling break'' will be found in the electron distribution at electron Lorentz factor given by \begin{eqnarray} \label{gc} \gamma_c^\prime = \frac{ 3 m_e c^2 }{ 4 c \sigma_{\rm T} u_{tot}^{\prime} t^\prime_{esc} }\ , \end{eqnarray} where $m_e=9.1\times10^{-28}$\ g is the electron mass, $\sigma_{\rm T}=6.65\times10^{-25}\ \mathrm{cm}^2$ is the Thomson cross section, $t^\prime_{esc}\cong R^\prime/c$ is the escape timescale, and $u_{tot}^\prime$ is the total energy density in the frame of the relativistic blob, given by the sum of the Poynting flux ($u_B^\prime$), synchrotron ($u_{sy}^\prime$), and external radiation field ($u^\prime_{ext} \cong \Gamma^2 u_{ext}$) energy densities. Note that all primed quantities are in the frame co-moving with the jet blob.
The cooling Lorentz factor $\gamma^{\prime}_c$ will be associated with a peak in the synchrotron spectrum of the source in a $\nu F_\nu$ representation observed at frequency \begin{eqnarray} \label{nupk} \nu_{pk}^{sy} = 3.7\times10^6\ \gamma_c^{\prime 2}\ \left(\frac{B}{\mathrm{G}}\right)\ \frac{\delta}{1+z}\ \mathrm{Hz} \end{eqnarray} \citep[e.g.,][]{tavecchio98} where $B$ is the magnetic field in the blob. For objects that have weak external radiation fields, so that $u^\prime_B \gg u^\prime_{ext}$, and neglecting $u^\prime_{sy}$, Equations (\ref{gc}) and (\ref{nupk}) give \begin{eqnarray} \nu_{pk}^{sy} \cong 2.2\times10^{15}\ B_0^{-3}\ \delta_1\ (1+z)^{-1}\ \ R_{15.5}^{\prime -2}\ \mathrm{Hz}\ , \end{eqnarray} where I have chosen fiducial values for all quantities. These objects will be HSPs. Objects with a strong external radiation field from the broad line region (BLR), which dominates over $u^\prime_B$ and $u^\prime_{sy}$, will have peak synchrotron frequencies given by \begin{eqnarray} \nu_{pk}^{sy} \cong 3.2\times10^{12}\ \ B_0\ \delta_1^{-3}\ (1+z)^{-1}\ \ R_{15.5}^{\prime -2}\ u_{ext,-2}^{-2}\ \mathrm{Hz} \end{eqnarray} where I assumed that $\delta = \Gamma$. These objects will be LSPs. It turns out that so far all blazars with high synchrotron peaks are BL Lacs (without strong broad emission lines by definition), while FSRQs with strong emission lines are almost entirely LSPs. Note, however, that there are a significant number of BL Lacs which are LSPs. Objects with stronger line emission would also be expected to have greater $\gamma$-ray dominances, due to scattering of the external radiation field. \citet{ghisellini98} thus predicted a sequence of blazars, ranging from low-power, high-peaked, lineless objects with low $\gamma$-ray dominance to, as the external radiation field increases, low-peaked objects with high $\gamma$-ray dominance and strong broad emission lines.
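The fiducial HSP estimate above can be reproduced numerically from Equations (\ref{gc}) and (\ref{nupk}), assuming $u_{tot}^\prime \approx u_B^\prime = B^2/8\pi$ and neglecting $u_{sy}^\prime$ and $u_{ext}^\prime$; a short sketch in cgs units:

```python
import math

# cgs constants
m_e     = 9.109e-28   # electron mass [g]
c       = 2.998e10    # speed of light [cm/s]
sigma_T = 6.652e-25   # Thomson cross section [cm^2]

# fiducial HSP case of the text
B, delta, z, R = 1.0, 10.0, 0.0, 10**15.5   # [G], -, -, [cm]
u_B   = B**2 / (8.0 * math.pi)              # magnetic energy density [erg/cm^3]
t_esc = R / c                               # escape timescale [s]

# Eq. (gc): cooling Lorentz factor
gamma_c = 3.0 * m_e * c**2 / (4.0 * c * sigma_T * u_B * t_esc)

# Eq. (nupk): observed synchrotron peak frequency
nu_pk = 3.7e6 * gamma_c**2 * B * delta / (1.0 + z)
print(f"gamma_c ~ {gamma_c:.0f}, nu_pk ~ {nu_pk:.1e} Hz")  # ~2e15 Hz: an HSP
```

The result agrees with the quoted $\sim 2.2\times10^{15}$ Hz to within the rounding of the constants.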
\citet{boett02_seq} suggested the ``blazar sequence'' is evolutionary, with FSRQs being young objects; as the circum-nuclear material is accreted, the broad emission lines weaken, the accretion rate decreases, and the sources become older BL Lac objects. However, the correlations found by \citet{fossati98} have not always been found in subsequent studies \citep{padovani03,nieppola06}, although they have in others \citep[e.g.,][]{chen11,finke13}. Furthermore, an alternative explanation was provided by \citet{giommi02,giommi05,giommi12_selection}. In their scenario, the sequence is a result of a selection effect: luminous blazars with high synchrotron peaks will have their spectral lines totally swamped by the nonthermal continuum, making a redshift measurement impossible. Without a redshift, it is not possible to determine their luminosities, and so they are not included in statistical tests between luminosity and $\nu_{pk}^{sy}$. What is the explanation for the blazar sequence? Is it a physical effect \citep{ghisellini98} or a selection effect \citep{giommi02}? \subsection{More Recent Work} \begin{figure} \vspace{3.mm} \includegraphics[width=60mm,angle=270]{padovani_fig} \caption{ The sum $L_{pk}^{sy} + L_{pk}^{C}$ versus $\nu_{pk}^{sy}$ for a number of blazars. \citet{rau12} found the redshifts for the four luminous, high-peaked, high $z$ objects shown in red. Figure taken from \citet{padovani12}. } \label{padovani_fig} \end{figure} \vspace{2.2mm} \citet{rau12} have constrained the redshifts of a number of high $z$ BL Lacs. Four of these do seem to have high $\nu_{pk}^{sy}$ and are very luminous \citep[see Fig.\ \ref{padovani_fig};][]{padovani12}. This would seem to support the argument that the blazar sequence is the result of a selection, rather than physical, effect. In the {\em Fermi} era, however, it is possible to look at not just the synchrotron component, but also the $\gamma$-ray component, presumably the result of Compton scattering.
Both \citet{meyer12} and \citet{ghisellini12} pointed out that these four sources are not out of the ordinary on a $\gamma$-ray ``blazar sequence,'' where one plots the LAT spectral index, $\Gamma_\gamma$ (a proxy for the peak of the $\gamma$-ray component), against the LAT $\gamma$-ray luminosity, and they are perfectly consistent with other LAT $\gamma$-ray sources (see Fig.\ \ref{meyer12_fig}). However, it is certainly possible that in the future, as more redshifts are measured and constrained, sources with high $L_\gamma$ and low $\Gamma_\gamma$ will be found. \begin{figure} \vspace{4.mm} \includegraphics[width=75mm]{meyer12_fig1} \caption{ The LAT $\gamma$-ray luminosity versus LAT spectral energy index ($\alpha = \Gamma_\gamma - 1$) from \citet{meyer12}. FSRQ sources are shown in red, BL Lacs are shown in blue, and the sources from \citet{rau12} and \citet{padovani12} are shown as asterisks. } \label{meyer12_fig} \end{figure} \vspace{2.2mm} \begin{figure} \vspace{4.mm} \includegraphics[width=75mm]{CD_bat_02} \caption{Compton dominance (i.e., $L_{pk}^C/L_{pk}^{sy}$) versus peak synchrotron frequency. Filled circles represent FSRQs, empty circles represent BL Lacs, and filled squares represent objects which do not have an unambiguous classification. Rightward-pointing triangles represent BL Lacs with unknown redshifts, for which $\nu_{pk}^{sy}$ is a lower limit. Figure taken from \citet{finke13}.} \label{CD} \end{figure} \vspace{1.2mm} An often overlooked part of the blazar sequence as found by \citet{fossati98} is the $\gamma$-ray dominance, i.e., the ratio of $L_\gamma$ to the peak synchrotron luminosity ($L_{pk}^{sy}$). This and a similar quantity, the Compton dominance, $A_C \equiv L_{pk}^C / L_{pk}^{sy}$ (where $L_{pk}^C$ is the luminosity at the Compton peak), are redshift-independent. Also, $\nu_{pk}^{sy}$ is only weakly dependent on redshift, by a factor $(1+z)$, i.e., a factor of a few.
A plot of $A_C$ versus $\nu_{pk}^{sy}$ is shown in Fig.\ \ref{CD}, from a subset of sources in the second LAT AGN catalog \citep{ackermann11_2lac}, including sources which do not have known redshifts. It is clear a correlation exists, and this is confirmed with the Spearman and Kendall tests \citep{finke13}. It seems that this aspect of the blazar sequence has a physical origin, and is not the result of a selection effect. In the future, the luminosity-peak frequency relations could be improved with new redshift measurements and constraints \citep[e.g.,][]{shaw13}. Then it should be possible to determine if these aspects of the sequence are physical as well. As an alternative to the physical scenario described by \citet{ghisellini98} and in \S\ \ref{sequence_origin}, \citet{meyer11} proposed another physical scenario, based on updated data from a number of sources. In their scenario, the difference between BL Lacs and FSRQs is the former have jet structure with velocity (or Lorentz factor) gradients, either perpendicular or parallel to the direction of motion. FSRQs, according to \citet{meyer11}, do not have these gradients; they have a single Lorentz factor for the entire jet, or at least the radiatively important parts. There is indeed ample evidence for different Lorentz factors in BL Lacs and FRIs \citep[e.g.,][]{chiaberge00,chiaberge01,abdo10_cena}. The lack of $\gamma$-ray detected FRIIs hints that FRIIs/FSRQs do not share this jet structure \citep{grandi12}, however, see \citet{boett09_decel} for evidence of jet deceleration in an FSRQ. \section{Curvature in LAT Spectra} \label{curvature} After the launch of {\em Fermi}, while the spacecraft was still in its post-launch commissioning and checkout phase, the FSRQ 3C~454.3 was detected by the LAT in an extreme bright state \citep{tosti08_atel}. 
The source reached a flux of $F(>100\ \mathrm{MeV}) > 10^{-5}\ \mathrm{ph}\ \mathrm{cm}^{-2}\ \mathrm{s}^{-1}$ and its spectrum showed an obvious curvature (i.e., a deviation from a single power-law), which was best-fit by a broken power-law \citep{abdo09_3c454.3} with break energy $\sim 2$\ GeV. This source flared on several more occasions \citep{ackermann10_3c454.3,abdo11_3c454.3}, always exhibiting a spectral break during bright states. The energy of the break varied by no more than a factor of $\sim 3$, while the flux varied by as much as a factor of 10 \citep{abdo11_3c454.3}. This spectral curvature has been found in other blazars as well, although a broken power-law is not always preferred over a log-parabola fit, which has one less free parameter \citep{abdo10_latsed}. The cause of the break is not clear but there are several possible explanations. {\em A combination of several scattering components.} \citet{finke10_3c454} noted that, based on the shape of the optical and $\gamma$-ray spectra, the Compton scattering of more than one seed photon source was needed to explain the overall spectral energy distribution (SED) of 3C~454.3. The particularly soft spectra above the break requires that this scattering be done in the Klein-Nishina (KN) regime. This model requires that scattering occur within the BLR, and a wind model for the BLR in order to explain the relative stability of the break energy. {\em Photoabsorption of $\gamma$-rays with BLR photons}. \citet{pout10} \citep[see also][]{stern11} pointed out that He II Ly$\alpha$ and recombination photons are at the right energy (54.4 eV and 40.8 eV, respectively) to absorb $\gamma$-ray photons at $\sim 5$\ GeV, about the same energy as the spectral breaks observed. This model would also require the $\gamma$-ray emitting region to be within the BLR. {\em Compton scattering of BLR Ly$\alpha$ photons}. 
For the scattering of Ly$\alpha$ photons ($E_*=10.2$\ eV), the KN regime will emerge at energies above \begin{eqnarray} E_{KN} \approx 1.2\ (E_*/10.2\ \mathrm{eV})^{-1}\ \mathrm{GeV}\ , \end{eqnarray} approximately in agreement with the observed break energy \citep{ackermann10_3c454.3}. Fits with this model using power-law electron distributions failed to reproduce the observed LAT spectra \citep{ackermann10_3c454.3}; however, fits using a log-parabola electron distribution were able to reproduce the $\gamma$-rays \citep{cerruti12}. Naturally, this model would also require the $\gamma$-ray emitting region to be within the BLR. \begin{figure} \vspace{3.mm} \includegraphics[width=80mm]{0537_SED_09A} \caption{SED of the FSRQ PKS~0537$-$441 from \citet{dammando13}. The spectral curvature in the IR/optical in the high state could indicate that the $\gamma$-ray curvature is the result of curvature in the electron distribution. } \label{0537sed} \end{figure} \vspace{2.2mm} {\em Curvature in the electron distribution}. This is the explanation originally favored by \citet{abdo09_3c454.3}. If there is curvature in the electron distribution that produces the $\gamma$-rays, presumably from Compton scattering, this would naturally be reflected in the LAT spectrum as well. In this scenario, one would expect the curvature in the electron distribution to cause a curvature in the synchrotron emission from the same electrons, which would appear in the IR/optical. Indeed, observations of PKS 0537$-$441 do show this curvature \citep[][see Fig.\ \ref{0537sed}]{dammando13}. This explanation would not require scattering to take place in the BLR, as dust torus photons could be the seed photon source for scattering. This scenario raises the question: what is the cause of the break in the electron distribution?
\section{Location of the $\gamma$-ray Emitting Region} \label{region} Many of the scenarios described in \S\ \ref{curvature} require the $\gamma$-rays to be produced within the BLR. However, it is not clear that this is the case. Optical and $\gamma$-ray flares are often associated with the rotation of polarization angles, the slow increase in radio flux, and the ejection of superluminal components from the core as seen at 43 GHz \citep[e.g.,][]{marscher08,marscher10}. According to \citet{marscher12}, 2/3 of $\gamma$-ray flares are associated with the ejection of a superluminal component, indicating the $\gamma$-ray flares are coincident with the 43 GHz core. There are two arguments that the 43 GHz core is located, and the $\gamma$-ray flares originate, at distances greater than a few pc, outside the BLR. (1) Using the observed radius of the 43 GHz core ($R_{core}$), and assuming a conical jet with a half opening angle $\alpha$ \citep[measured, e.g., by][]{jorstad05}, one can determine the distance of the core from the base of the jet, $r=R_{core}/\alpha$ \citep{agudo11}. (2) The $\gamma$-ray flares occur in the same region as the much slower radio outbursts and/or polarization angle swings lasting tens of days \citep[$\Delta t$;][]{marscher10,orienti13}. The distance associated with the light travel time of these radio outbursts or polarization swings assuming $\theta\ll 1$ is \begin{eqnarray} r \ge 1.0\ \Delta t_{6}\ \delta_1\Gamma_1\ (1+z)^{-1}\ \mathrm{pc}\ . \end{eqnarray} On the other hand, the rapid $\gamma$-ray variability observed in blazars such as 3C~454.3 \citep[$\sim 3$\ hours;][]{tavecchio10}, PKS~1510$-$089 \citep[$\sim 1$\ hour;][]{brown13,saito13}, 4C~21.35 \citep[$\sim 10$\ minutes; source also known as PKS 1222+21;][]{aleksic11}, and PKS~2155$-$304 \citep[$\sim 5$\ minutes;][]{aharonian07_2155} limits the size of the emitting region by Equation (\ref{size}).
If the emitting region takes up the entire cross section of a conical jet, then it should be at a distance \begin{eqnarray} r \le 0.1\ \delta_1\ t_{v,4}\ \alpha_{-2}^{-1}\ (1+z)^{-1}\ \mathrm{pc}\ \end{eqnarray} from the base of the jet. Based on scaling relations found from reverberation mapping, the typical BLR region for FSRQs is $r_{BLR}\sim 0.1\ \mathrm{pc}$ \citep[e.g.,][]{bentz06}. 4C~21.35 was detected to have flux-doubling timescales of $\sim 10$\ minutes, as measured by MAGIC, out to 400 GeV. The $\gamma\g$ optical depth is \begin{eqnarray} \tau_{\gamma\g} = \int^\infty_{\max[r,r_{BLR}]} d\ell\ \sigma_{\gamma\g}\ u_{BLR}/E_*\ . \end{eqnarray} We can estimate the $\gamma\g$ cross section $\sigma_{\gamma\g}\cong \sigma_{\rm T}/3$, and $u_{BLR}\cong$ constant for $r<r_{BLR}$. I will use $E_* = 10.2$\ eV, i.e., for Ly$\alpha$. If $r<r_{BLR}$ then \begin{eqnarray} \tau_{\gamma\g} \cong 40\ u_{BLR,-2}\ r_{BLR,17.5} (E_{*}/10.2\ \mathrm{eV})^{-1}\ , \end{eqnarray} so $\gamma$-rays with energies above the threshold energy, about $50\ \mathrm{GeV}\ (E_*/10.2\ \mathrm{eV})^{-1}$, will clearly not be able to escape the BLR. Several ways to avoid $\gamma\g$ attenuation have been suggested, such as energy transport through neutron beams \citep{dermer12_21.35}, or $\gamma$-ray conversion to axions \citep{tavecchio12_axion}. Otherwise, the $\gamma$-rays from this source must be produced by a small fraction of the jet cross section at $\ge 4$\ pc from the black hole, outside the BLR. If the $\gamma$-ray emitting region is within the BLR, the seed photons for Compton scattering are likely to be at higher energies than they would be if the emitting region were outside the BLR, where lower-energy dust torus photons would serve as the seed photon source. Due to KN effects, the Compton cooling will be different in these different cases.
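As a quick check of the opacity estimate $\tau_{\gamma\g}\cong 40$ given above, the fiducial values can be plugged in directly (cgs units):

```python
sigma_T = 6.652e-25     # Thomson cross section [cm^2]
eV      = 1.602e-12     # erg per eV

# fiducial BLR values of the text
u_BLR  = 1.0e-2         # BLR energy density [erg/cm^3]
r_BLR  = 10**17.5       # BLR radius [cm]
E_star = 10.2 * eV      # Ly-alpha photon energy [erg]

# tau ~ (sigma_T/3) * (u_BLR / E_star) * r_BLR for r < r_BLR
tau = (sigma_T / 3.0) * (u_BLR / E_star) * r_BLR
print(round(tau))  # ~40: the BLR is opaque above the gamma-gamma threshold
```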
\citet{dotson12} suggest that, because of this effect, detailed study of the $\gamma$-ray light curves could distinguish the seed photon source energy, and hence, the location of the emitting region. \section{The End of the One-Zone Leptonic Model?} One-zone leptonic models (1ZLMs), where the lower energy emission is produced by synchrotron radiation, and the higher energy emission is produced by Compton scattering with the same electron population (SSC or EC), have been the standard for fitting multi-wavelength blazar SEDs. However, lately the multi-wavelength coverage has become complete enough that in many cases these models do not provide adequate fits to blazar SEDs. These include 3C~279 \citep[LSP FSRQ;][]{boett09_3c279}, PKS~2005$-$489 \citep[HSP BL Lac;][]{abram11_2005}, AO~0235+164 \citep[LSP BL Lac;][]{ackermann12_0235}, 1ES 0414+009 \citep[HSP BL Lac;][]{aliu12}, PKS~1510$-$089 \citep[LSP FSRQ;][]{nalew12_1510}, and AP Lib [LSP BL Lac; Abramowski et al.\ 2013, in preparation]. What is the next step in blazar modeling? I list three broad categories of models: {\em Multi-zone models.} It has been known for some time that the flat radio spectra (index $\alpha_r\approx0$) of blazars are almost certainly explained by the superposition of several self-absorbed components \citep{konigl81}, so these models are perhaps the most obvious. The hard TeV spectra seen in some blazars, such as 1ES~1101$-$232, led \citet{boett08} to suggest the TeV emission was produced by Compton-scattering cosmic microwave background (CMB) photons in the kpc-scale jet. Several models have been motivated by the contradictory clues for the location of the $\gamma$-ray emitting region, as described in \S\ \ref{region}. These typically include a smaller region at a large distance from the black hole, with one or more other regions accounting for the slower radio emission \citep[e.g.,][]{marscher10_multizone,tavecchio11_21.35,nalew12_1510}.
Inhomogeneous jets have also been explored by \citet{graff08} and \citet{joshi11}. {\em Hadronic models.} Blazars have long been a candidate for the production of ultra-high energy cosmic rays (UHECRs), a hypothesis that was recently strengthened by the correlation of UHECRs observed by the Auger observatory with local AGN \citep[e.g.,][]{abraham07}. This has motivated blazar emission models where the $\gamma$-rays come from processes originating from protons and cosmic rays accelerated in the jet \citep[e.g.,][]{muecke03}. Variability in hadronic models is difficult to model, although progress has been made recently by \citet{boett12}. A neutral beam model was recently proposed by \citet{dermer12_21.35} to explain the rapidly varying very-high energy (VHE) emission from 4C~21.35 (see \S\ \ref{region}). {\em Intergalactic cascade models.} If blazars are sources of UHECRs, the particles that escape the jet could interact with the extragalactic background light (EBL) from stars, dust, and the CMB, to produce cascade VHE $\gamma$-rays \citep{essey10_1,essey10_2}. In this case the VHE emission would not be variable, and would be expected to be disconnected from the rest of the SED. This is a simple prediction that could be used to test this hypothesis. Secondary $\gamma$-rays could also be produced from VHE $\gamma$-rays that interact with the EBL to produce $e^+/e^-$ pairs. These pairs could then in turn Compton-scatter CMB photons, producing $\gamma$-rays in the LAT bandpass. This creates another component that needs to be taken into account in spectral modeling of blazars \citep{davezac07,tavecchio11_igmfmodel}. The problem with alternatives to the 1ZLM is that the addition of free parameters means that no matter what model is used, one will almost certainly be able to adjust the parameters to fit the data. Both theoretical and observational advances are needed to advance our understanding of these sources.
\begin{acknowledgments} I would like to thank C.\ Dermer, S.\ Razzaque, and B.\ Giebels for discussions on topics presented in this proceeding. I would also like to thank the conference organizers for the opportunity to speak at the {\em Fourth Fermi Symposium}, and for organizing an interesting and enjoyable conference. The {\em Fermi} LAT Collaboration acknowledges support from a number of agencies and institutes for both development and the operation of the LAT as well as scientific data analysis. These include NASA and DOE in the United States, CEA/Irfu and IN2P3/CNRS in France, ASI and INFN in Italy, MEXT, KEK, and JAXA in Japan, and the K.~A.~Wallenberg Foundation, the Swedish Research Council and the National Space Board in Sweden. Additional support from INAF in Italy and CNES in France for science analysis during the operations phase is also gratefully acknowledged. \end{acknowledgments} \bigskip
\subsection{Quivers and relations} Let $Q$ be a finite quiver with a set of vertices $Q_0$ and a set of arrows $Q_1$, and let $s, t : Q_1 \to Q_0$ be the maps associating to each arrow $\alpha$ its source $s(\alpha)$ and its target $t(\alpha)$. A path $w$ of length $l$ is a sequence of $l$ arrows $\alpha_1 \dots \alpha_l$ such that $t(\alpha_i)=s(\alpha_{i+1})$. We put $s(w)=s(\alpha_1)$ and $t(w)=t(\alpha_l)$. For any vertex $x$ we consider $e_x$ the trivial path of length zero and we put $s(e_x)=t(e_x)=x$. A cycle is a non-trivial path $w$ such that $s(w)=t(w)$. The corresponding path algebra $kQ$ is the $k$-vector space with basis the set of paths in $Q$; the product $ww'$ of two basis elements $w$ and $w'$ is the concatenation of their sequences of arrows if they form a path (namely, if $t(w)=s(w')$), and zero otherwise. The trivial paths $\{ e_x \mid x \in Q_0 \}$ form a complete set of orthogonal idempotents of $kQ$. Let $F$ be the two-sided ideal of $kQ$ generated by the arrows of $Q$. A two-sided ideal $I$ is said to be \textit{admissible} if there exists an integer $m \geq 2$ such that $F^m \subseteq I \subseteq F^2$. The pair $(Q,I)$ is called a \textit{bound quiver}. It is well known that if $A$ is a basic, connected, finite dimensional algebra over an algebraically closed field $k$, then there exists a unique finite quiver $Q$ and a surjective morphism of $k$-algebras $\nu: kQ \to A$, which is not unique in general, with $I_\nu=\Ker \nu$ admissible. The pair $(Q,I_\nu)$ is called a \textit{presentation} of $A$. We denote by $kQ(x,y)$ the subspace of $kQ$ with basis the set of paths from $x$ to $y$, $I_\nu(x,y)=I_\nu \cap kQ(x,y)$ and $A(x,y)=kQ(x,y)/I_\nu(x,y)$. 
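As a computational aside (illustrative code, not part of the paper; the quiver $1 \to 2 \to 3$ and the arrow names are chosen only for the example), the product of paths just described can be sketched as follows, with the zero product represented by \texttt{None}:

```python
# Sketch: paths in a quiver as arrow sequences, with the path-algebra
# product "concatenate if composable, zero otherwise".

class Path:
    def __init__(self, source, target, arrows=()):
        self.source, self.target, self.arrows = source, target, tuple(arrows)

    def __mul__(self, other):
        # w * w' is the concatenation w w' when t(w) = s(w'), else zero (None).
        if self.target != other.source:
            return None
        return Path(self.source, other.target, self.arrows + other.arrows)

    def __eq__(self, other):
        return (isinstance(other, Path)
                and (self.source, self.target, self.arrows)
                == (other.source, other.target, other.arrows))

# Quiver 1 --a--> 2 --b--> 3; e_x are the trivial paths of length zero.
e1, e2, e3 = Path(1, 1), Path(2, 2), Path(3, 3)
a, b = Path(1, 2, ("a",)), Path(2, 3, ("b",))

ab = a * b             # the length-2 path ab from 1 to 3
print(ab.arrows)       # ('a', 'b')
print(b * a)           # None: t(b) = 3 != 1 = s(a), so the product is zero
print((e1 * a) == a)   # True: trivial paths act as local units
```

On basis elements the trivial paths behave as the orthogonal idempotents described above: $e_x w = w$ when $s(w)=x$ and $e_x e_y = 0$ for $x \neq y$.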
\subsection{Incidence algebras}\label{incidencia} An incidence algebra ${\Ia}(\Sigma)$ is a subalgebra of the algebra $M_n(k)$ of square matrices over $k$ with elements $(x_{ij}) \in M_n(k)$ satisfying $x_{ij}=0$ if $i \not \geq j$, where $\geq$ is a partial order making $\Sigma= \{ 1, \dots , n\}$ a {\it poset} (partially ordered set). Incidence algebras can equivalently be viewed as path algebras of quivers with relations in the following way. Let $Q$ be a finite quiver without oriented cycles and such that for each arrow $ x \stackrel{\alpha}{\rightarrow} y \in Q_1$ there is no oriented path other than $\alpha$ joining $x$ to $y$. These quivers are called {\it ordered}. The set $Q_0$ of vertices of $Q$ is then a finite poset $(\Sigma, \geq)$ as follows: $x \geq y$ if and only if there exists an oriented path from $x$ to $y$. Conversely, if $\Sigma$ is a finite poset, we construct a quiver $Q$ with the set of vertices $\Sigma$, and with an arrow from $x$ to $y$ if and only if $x>y$ and there is no $u \in Q_0$ such that $x>u>y$. In other words, $Q$ is the Hasse diagram of the poset $\Sigma$. Clearly we obtain in this way an ordered quiver and a bijection between finite posets and ordered quivers. Let us consider $kQ$ the path algebra of $Q$ and $I$ the {\it parallel ideal} of $kQ$, that is, $I$ is the two-sided ideal of $kQ$ generated by all the differences $\gamma - \delta$ where $\gamma$ and $\delta$ are {\it parallel paths} (that is, $\gamma$ and $\delta$ have the same starting and ending points). The algebra ${\Ia}(\Sigma) = kQ/I$ is the {\it incidence algebra} of the poset $\Sigma$ associated to the ordered quiver $Q$. \subsection{Fundamental group} Let $(Q,I)$ be a connected bound quiver. For $x,y \in Q_0$, a relation $\rho = \sum_{i=1}^m \lambda_i w_i$ in $I(x,y)$ is called {\it minimal} if, for every non-empty proper subset $J \subset \{ 1,2, \dots,m\}$, we have $\sum_{j \in J} \lambda_j w_j \notin I(x,y)$. 
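Returning briefly to the construction of \S~\ref{incidencia}: passing from a finite poset to its ordered quiver amounts to computing covering relations, which can be sketched as follows (illustrative code, not from the paper; the divisibility poset is an assumed example):

```python
# Sketch: the Hasse diagram of a finite poset, i.e. the ordered quiver
# with an arrow x -> y iff x > y and there is no u with x > u > y.

def hasse_arrows(elements, gt):
    """gt(x, y) == True iff x > y in the strict partial order."""
    arrows = []
    for x in elements:
        for y in elements:
            if gt(x, y) and not any(gt(x, u) and gt(u, y) for u in elements):
                arrows.append((x, y))
    return sorted(arrows)

# Example: {1, 2, 3, 6} ordered by divisibility, x > y iff y properly divides x.
elems = [1, 2, 3, 6]
gt = lambda x, y: x != y and x % y == 0
print(hasse_arrows(elems, gt))   # [(2, 1), (3, 1), (6, 2), (6, 3)]
```

Note that the comparable pair $6 > 1$ produces no arrow, since $6 > 2 > 1$; the order relation is recovered from the quiver as "there exists an oriented path".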
A relation $\rho$ is called \textit{monomial} if $m=1$, and \textit{binomial} if $m=2$. For an arrow $\alpha \in Q_1$, we denote by $\alpha^{-1}$ its formal inverse and put $s(\alpha^{-1})=t(\alpha)$ and $t(\alpha^{-1})=s(\alpha)$. A walk from $x$ to $y$ in $Q$ is a formal composition $\alpha_1^{\epsilon_1}\alpha_2^{\epsilon_2} \dots \alpha_r^{\epsilon_r}$ (where $\alpha_i \in Q_1$, $\epsilon_i=\pm 1$ for $1\leq i\leq r$, and $t(\alpha_i^{\epsilon_i})=s(\alpha_{i+1}^{\epsilon_{i+1}})$) starting at $x$ and ending at $y$. Let $\approx$ be the smallest equivalence relation on the set of all walks in $Q$ such that: \begin{itemize} \item[a)] If $\alpha: x \to y$ is an arrow then $\alpha^{-1}\alpha \approx e_y$ and $\alpha \alpha^{-1} \approx e_x$; \item[b)] If $\rho = \sum_{i=1}^m \lambda_iw_i$ is a minimal relation then $w_i \approx w_j$ for all $1 \leq i,j \leq m$; \item[c)] If $u \approx v$ then $wuw' \approx wvw'$ whenever these compositions make sense. \end{itemize} Let $x_0 \in Q_0$ be arbitrary. The set $\pi_1(Q,I,x_0)$ of equivalence classes of all the closed walks starting and ending at $x_0$ has a group structure. Since $(Q,I)$ is connected, the group $\pi_1(Q,I,x_0)$ does not depend, up to isomorphism, on the choice of the base point $x_0$. We denote it simply by $\pi_1 (Q,I)$ and call it the {\it fundamental group} of $(Q,I)$, see~\cite{AP}. \subsection{Hochschild cohomology: a convenient resolution for $A=kQ/I$} We recall that the Hochschild cohomology groups ${\HH}^i(A)$ of an algebra $A$ are the groups $\Ext_{A^e}^i(A,A)$. We refer the reader to~\cite{CE,H,R} for more general results. To compute the Hochschild cohomology groups of $A=kQ/I$ we will use a convenient projective resolution of $A$ as $A$-bimodule given in~\cite{C}. Let $E$ be the subalgebra of $A$ generated by the set of trivial paths $\{e_x \vert x \in Q_0 \}$; note that $E$ is semisimple and $A=E \oplus \rad A$ as $E$-bimodule. 
Let $\rad A ^{\otimes n}$ denote the $n$-fold tensor product of $\rad A$ with itself over $E$, with $\rad A^{\otimes 0}=E$. The complex $$ \cdots \to A \otimes_E \rad A^{\otimes 2} \otimes_E A \stackrel{b_2}{\to} A \otimes_E \rad A \otimes_E A \stackrel{b_1}{\to} A \otimes_E A \stackrel{b_0}{\to} A \to 0$$ with $$b_n(a_0 \otimes \dots \otimes a_{n+1}) = \sum_{i=0}^{n} {(-1)}^i a_0\otimes \dots \otimes a_i a_{i+1} \otimes \dots \otimes a_{n+1}$$ is a projective resolution of $A$ as $A$-bimodule. There is a natural isomorphism $\Hom_{A^e}(A \otimes_E \rad A^{\otimes n} \otimes_E A, A) \simeq \Hom_{E^e}(\rad A^{\otimes n}, A)$, and the corresponding boundary map $b^n: \Hom_{E^e}(\rad A^{\otimes n}, A) \to \Hom_{E^e}(\rad A^{\otimes n+1}, A)$ is given by \begin{equation*} \begin{split} (b^0f)(a_0) & =a_0f(1)-f(1)a_0,\\ (b^nf)(a_0 \otimes \dots \otimes a_{n}) &= a_0f(a_1 \otimes \dots \otimes a_{n}) \\ & \quad + \sum_{i=1}^{n} {(-1)}^{i}f(a_0\otimes \dots \otimes a_{i-1} a_{i} \otimes \dots \otimes a_{n}) \\ & \quad +{(-1)}^{n+1} f(a_0 \otimes \dots \otimes a_{n-1})a_{n}. \end{split} \end{equation*} \subsection{Hochschild cohomology and simplicial cohomology} \label{posetincidencia} Let $A={\Ia}(\Sigma)$ be an incidence algebra associated to a poset $\Sigma$. The simplicial complex associated to $\Sigma$ is defined as follows: $SC_n=SC_n(\Sigma)$ is the $k$-vector space with basis the set $\{ s_0 > s_1 > \dots > s_n \vert s_i \in \Sigma\}$. The complex computing the cohomology groups $\SH^n(\Sigma,k)$ is the following $$ 0 \to \Hom_k(SC_0,k) \stackrel{B^0}{\to} \Hom_k(SC_1,k) \stackrel{B^1}{\to} \Hom_k(SC_2,k) \to \cdots$$ with $$(B^{n-1} f)(s_0> \dots > s_n) = \sum_{i=0}^n (-1)^i f(s_0> s_1 > \dots > \hat{s_i} > \dots > s_n).$$ In~\cite{GS,C} it was shown that $\SH^n(\Sigma,k)$ and ${\HH}^n(A)$ are isomorphic. 
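The simplicial coboundary $B^n$ above is easy to experiment with. The following sketch (illustrative code, not from the paper; the chosen poset is an assumed example) builds the $n$-chains $s_0 > \dots > s_n$ of a small poset and checks the cochain-complex identity $B^{n}B^{n-1}=0$:

```python
from fractions import Fraction
from itertools import combinations

# Sketch: the simplicial cochain complex of a finite poset.  An n-chain is
# a tuple (s0, ..., sn) with s0 > ... > sn; a cochain is a dict mapping
# chains to scalars, and the coboundary B alternately omits one element.

def chains(elements, gt, n):
    # elements must be listed in decreasing order, so that the tuples
    # produced by combinations are automatically descending.
    return [c for c in combinations(elements, n + 1)
            if all(gt(c[i], c[i + 1]) for i in range(n))]

def coboundary(f, elements, gt, n):
    """Send a cochain f on n-chains to B f on (n+1)-chains."""
    Bf = {}
    for c in chains(elements, gt, n + 1):
        Bf[c] = sum(((-1) ** i) * f.get(c[:i] + c[i + 1:], 0)
                    for i in range(n + 2))
    return Bf

# Example poset: the chain 3 > 2 > 1 > 0 with the usual order on integers.
elems, gt = [3, 2, 1, 0], (lambda x, y: x > y)
f = {c: Fraction(c[0] * 5 + 1, 3) for c in chains(elems, gt, 0)}  # arbitrary 0-cochain
Bf = coboundary(f, elems, gt, 0)
BBf = coboundary(Bf, elems, gt, 1)
print(all(v == 0 for v in BBf.values()))   # True: B^1 B^0 = 0
```

For a 1-chain $(x, y)$ the sketch gives $Bf(x,y) = f(y) - f(x)$, matching the formula for $B^0$ above, and the final check confirms that the alternating-sign omissions cancel pairwise.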
Moreover, there is an explicit isomorphism between the complexes computing these cohomology groups, which is defined as follows: note that $A={\Ia}(\Sigma)$ is an incidence algebra, and we are considering tensor products over $E$, thus $$\rad A^{\otimes n}= \oplus_{s_0> s_1 > \dots > s_n} A(s_0, s_1) \otimes_k A(s_1, s_2) \otimes_k \dots \otimes_k A(s_{n-1}, s_n),$$ with $\dim_k A(s_{i-1}, s_i)=1$ for all $i$ with $1\leq i \leq n$. Taking a basis element $(\overline w_1, \dots , \overline w_n)$ in $A(s_0, s_1) \otimes_k A(s_1, s_2) \otimes_k \dots \otimes_k A(s_{n-1}, s_n)$, the maps $$\varepsilon_n: \Hom_k(SC_n,k) \to \Hom_{E^e}(\rad A^{\otimes n}, A)$$ given by $\varepsilon_n(f)(\overline w_1, \dots , \overline w_n)= f(s_0> s_1 > \dots > s_n)\overline{w_1 \cdots w_n}$ commute with the corresponding boundary maps and induce the desired isomorphism of complexes. \section{Associated incidence algebra} \subsection{The associated incidence algebra} \label{poset} Let $(Q,I_\nu)$ be a presentation of an algebra $A$, that is, $\nu: kQ \to A$ is a surjective morphism and $I_\nu=\Ker \nu$. We associate to $(Q,I_\nu)$ an incidence algebra $A(\Sigma_\nu)$, where $\Sigma_\nu$ is the poset defined as follows (see~\cite{Re2}): let $P(Q)$ be the set of paths of $Q$, let $P(Q,I_\nu)=P(Q)/\sim$ be the set of equivalence classes, where $\sim$ is the smallest equivalence relation on $P(Q)$ satisfying: \begin{itemize} \item[a)] If $\rho = \sum_{i=1}^m \lambda_iw_i \in I_\nu$ is a minimal relation then $w_i \sim w_j$ for all $1 \leq i,j \leq m$; \item[b)] If $u \sim v$ then $wuw' \sim wvw'$ whenever these compositions make sense. \end{itemize} We denote by $[u]$ the equivalence class of a path $u$. Observe that these are the conditions used in the definition of the fundamental group $\pi_1(Q, I_\nu)$ if we replace walks by paths. Let $\Sigma_\nu$ be the set of equivalence classes in $P(Q, I_\nu)$ that do not contain paths in $I_\nu$. 
Equivalent paths share source and target, and the equivalence relation is compatible with concatenation; this allows us to define $s([w])=[e_{s(w)}]$, $t([w])=[e_{t(w)}]$ and $[u][v]=[uv]$ whenever the composition makes sense and it is not equivalent to a path in $I_\nu$. The set $\Sigma_\nu$ is a poset, with $[w] \geq [w']$ if and only if there exist $[u],[v] \in \Sigma_\nu$ such that $[w']=[u][w][v]$, see~\cite{Re2}. \begin{example}\label{ejemplo} Consider the quiver \[ \xymatrix{1 \ar@/^/[r]^{\alpha}\ar@/_/[r]_{\beta} & 2 \ar[r]^\gamma & 3}\] and the ideals $I_1= \langle \alpha \gamma \rangle$, $I_2= \langle (\alpha - \beta) \gamma \rangle$. The bound quivers $(Q,I_1)$ and $(Q,I_2)$ are presentations of the same algebra $A=kQ/I_1$, and the corresponding Hasse diagrams are given by \[\Sigma_1: \xymatrix{[e_1] \ar[d] \ar[dr] & [e_2] \ar[d] \ar[dl] \ar[dr] & [e_3] \ar[d]\\ [\alpha] & [\beta] \ar[d] & [\gamma] \ar[dl]\\ & [\beta \gamma] } \qquad \qquad \Sigma_2: \xymatrix{[e_1] \ar[d] \ar[dr] & [e_2] \ar[d] \ar[dl] \ar[dr] & [e_3] \ar[d]\\ [\alpha] \ar[dr] & [\beta] \ar[d] & [\gamma] \ar[dl]\\ & [\beta \gamma] }\] \end{example} \section{Homotopy coherent and compatible presentations} In order to prove the main results in Section \ref{main} we must consider presentations satisfying two particular conditions: homotopy coherence and right (left) compatibility. \begin{definition}\label{P1} A presentation $(Q,I)$ is called \textit{homotopy coherent} if for any paths $w, w'$ in $Q$ with $w \sim w'$, we have $w \in I$ if and only if $w' \in I$. \end{definition} This condition is necessary to construct the morphism of complexes computing $\HH^*(A(\Sigma_\nu))$ and $\HH^*(A)$, see Step 3 and Step 4 in Section \ref{4}. \begin{remark} If $(Q,I_\nu)$ is homotopy coherent then $\Sigma_\nu$ is just the set of equivalence classes of non-zero paths. \end{remark} \begin{proposition} Let $(Q,I_\nu)$ be a presentation with $I_\nu$ generated by monomial or binomial relations. 
Then $(Q,I_\nu)$ is homotopy coherent. \end{proposition} \begin{proof} Let $w,w'$ be paths in $Q$ such that $w \sim w'$. If $I_\nu$ is generated by monomial relations, then $w=w'$. If not, there exists a finite sequence of paths $w_0=w, w_1, \dots, w_r=w'$ such that, for all $i$ with $0 \leq i < r$, $w_i=u_i\rho_iv_i$ and $w_{i+1}=u_i\gamma_iv_i$, with $\rho_i, \gamma_i$ appearing in a minimal relation in $I_\nu$ and $u_i, v_i, \rho_i, \gamma_i$ paths in $Q$. So $\lambda_i \rho_i + \mu_i \gamma_i \in I_\nu$ for some $\lambda_i, \mu_i \in k\setminus \{0\}$. This implies that $\lambda_i w_i + \mu_i w_{i+1} \in I_\nu$ and hence $w_i \in I_\nu$ if and only if $w_{i+1} \in I_\nu$. \end{proof} Recall that an algebra $A$ is said to be \textit{schurian} if $\dim A(x,y) \leq 1$ for any $x,y \in Q_0$. \begin{corollary} Schurian algebras, incidence algebras and monomial algebras admit homotopy coherent presentations. \end{corollary} \begin{proof} These classes of algebras admit presentations $(Q,I_\nu)$ with $I_\nu$ generated by monomial or binomial relations. \end{proof} \begin{definition}\label{P2} A presentation $(Q,I_\nu)$ is called \textit{right compatible} if for any $s > s' \in SC_1(\Sigma_\nu)$ we can choose a path $u(s, s') \in Q$ such that \begin{itemize} \item [(i)] $s' = [v] \ s \ [u(s,s')]$ for some class $[v]$; \item [(ii)] for any $s > s' > s''$ in $SC_2(\Sigma_\nu)$, $u(s,s'') \sim u(s,s')u(s',s'')$. \end{itemize} \end{definition} The dual condition, with paths $v(s,s')$ composing on the left, defines \textit{left compatible} presentations; for the sake of brevity, we leave its explicit statement to the reader.\\ The following result will be used in an essential way in the proof of Lemma \ref{left}, and shows the necessity of assuming left or right compatibility. 
\begin{lemma} If $(Q,I_\nu)$ is a right compatible presentation then there exists a family $\{ u(s, s') \in Q \vert s > s' \in SC_1(\Sigma_\nu)\}$ such that if $$[u(s_0, s_1)], \dots, [u(s_{n-1}, s_n)]$$ is the sequence associated to $s_0 > \cdots > s_n$ then $$[u(s_0, s_1)], \dots, [u(s_{i-1}, s_i)u(s_i,s_{i+1})], \dots, [u(s_{n-1}, s_n)]$$ is the sequence associated to $s_0 > \cdots > \hat{s_i} > \cdots > s_n$. \end{lemma} \begin{proof} It follows by induction. \end{proof} \begin{example} The presentations $(Q,I_1)$ and $(Q,I_2)$ presented in Example \ref{ejemplo} are right compatible, where the corresponding paths are given by \begin{itemize} \item [(i)] for $(Q,I_1)$, \[\begin{array}{llllllll} & u([e_1],[\alpha])=\alpha, \quad & u([e_1],[\beta])=\beta, \quad & u([e_2],[\alpha])=e_2, \quad & u([e_2],[\beta])=e_2 \\ & u([e_2],[\gamma])=\gamma, & u([e_3],[\gamma])=e_3, & u([\beta], [\beta \gamma])=\gamma, & u([\gamma], [\beta\gamma])=e_3; \end{array}\] \item [(ii)] for $(Q,I_2)$, \[ \begin{array}{llllllllll} & u([e_1],[\alpha])=\alpha, \quad & u([e_1],[\beta])=\beta, \quad & u([e_2],[\alpha])=e_2, \quad & u([e_2],[\beta])=e_2 \\ & u([e_2],[\gamma])=\gamma, & u([e_3],[\gamma])=e_3, & u([\beta], [\beta \gamma])=\gamma, & u([\gamma], [\beta\gamma])=e_3, \\ & u([\alpha],[\beta \gamma])=\gamma. \end{array}\] \end{itemize} However the presentation $(Q,I_2)$ is not left compatible: \[v([e_2],[\alpha])=\alpha, \quad v([e_2],[\beta])=\beta, \quad v([\alpha], [\beta \gamma])=e_1 \quad \mbox{and} \quad v([\beta], [\beta\gamma])=e_1.\] On the other hand, $v([\alpha], [\beta \gamma])v([e_2],[\alpha])= \alpha$ and $v([\beta], [\beta\gamma])v([e_2],[\beta])=\beta$, hence we have no choice for $v([e_2],[\beta\gamma])$ satisfying the dual of Definition \ref{P2}(ii) since $\alpha \not \sim \beta$. 
\end{example} \begin{example} \label{ejemploNo} Consider the quiver \[ \xymatrix{0 \ar[r]^{\delta} & 1 \ar@/^/[r]^{\alpha}\ar@/_/[r]_{\beta} & 2 \ar[r]^\gamma & 3}\] and the ideal $I= \langle \delta ( \alpha - \beta) , (\alpha - \beta) \gamma \rangle$. The presentation $(Q,I)$ is neither left nor right compatible. As in the previous example, we have no choice for $v([e_2],[\beta\gamma])$ satisfying the dual of Definition \ref{P2}(ii) and, dually, we have no choice for $u([e_1],[\delta \beta])$ satisfying Definition \ref{P2}(ii). \end{example} \begin{proposition} \begin{itemize} \item[] \item [(a)] Monomial algebras admit right compatible presentations; \item [(b)] Schurian algebras admit right compatible presentations; \item [(c)] Incidence algebras admit right compatible presentations. \end{itemize} \end{proposition} \begin{proof} \begin{itemize} \item[] \item [(a)] The relation $\sim$ is just equality for monomial algebras; \item [(b)] Parallel non-zero paths in a schurian algebra are linearly dependent and hence for any $s>s' \in SC_1(\Sigma_\nu)$ we have that $s' = [v] \ s \ [u]$, with $[u]$ and $[v]$ uniquely determined; \item [(c)] Incidence algebras are schurian. \end{itemize} \end{proof} \section{Main results} \label{main} \subsection{Morphism ${\HH}^n(A(\Sigma_\nu))\to {\HH}^n(A)$}\label{4} In order to compare the cohomology groups ${\HH}^n(A(\Sigma_\nu))$ and ${\HH}^n(A)$, we will define a morphism of complexes \[\Phi^*: \Hom_k(SC_*(\Sigma_\nu),k) \to \Hom_{E^e}(\rad A^{\otimes *}, A),\] and this will be done in several steps. Recall that $F$ is the two-sided ideal of $kQ$ generated by the arrows, so $F$ is a $k$-vector space with basis the set of all non-trivial paths in $Q$. For any $n \geq 0$, let $F^{\otimes n}$ be the $n$-fold tensor product of $F$ with itself over $E$, with ${F}^{\otimes 0}=E$. \bigskip \textit{Step 1}. Definition of $T_n : {F}^{\otimes n} \to SC_n(\Sigma_\nu)$. \textit{Step 2}. 
Definition of $\partial_{n+1} : {F}^{\otimes n+1} \to {F}^{\otimes n}$ and $\delta_{n+1} : SC_{n+1}(\Sigma_\nu) \to SC_n(\Sigma_\nu)$. \textit{Step 3}. Proof of the commutativity of the diagram \[ \xymatrix{ {F}^{\otimes n+1} \ar[rr]^{T_{n+1}} \ar[d]^{\partial_{n+1}} & & SC_{n+1}(\Sigma_\nu) \ar[d]^{\delta_{n+1}} \\ {F}^{\otimes n} \ar[rr]^{T_n} & & SC_n(\Sigma_\nu) } \] \textit{Step 4}. Description of the morphism $\Phi^n : \Hom_k(SC_n,k) \to \Hom_{E^e} (\rad A^{\otimes n}, A)$. \bigskip \noindent \textit{Step 1}. Let $T_n : {F}^{\otimes n} \to SC_n(\Sigma_\nu)$ be the $k$-linear map defined inductively by: \begin{itemize} \item [(i)] $T_0(e_i)=[e_i]$; \item [(ii)] For any path $w$ in $F$, put \[T_1(w) = [e_{s(w)}]>[w] - [e_{t(w)}]>[w] \] if $[w] \in \Sigma_\nu$, and zero otherwise; \item [(iii)] For any basis element $(w_1, \dots, w_n)$ in ${F}^{\otimes n}$, put \begin{multline*} T_n(w_1, \dots, w_n) \\ = \left[ T_{n-1}(w_1, \dots, w_{n-1}) + (-1)^n T_{n-1}(w_2, \dots, w_n) \right] > [w_1 \cdots w_n] \end{multline*} if $[w_1 \dots w_{n}] \in \Sigma_\nu$, and zero otherwise. \end{itemize} Note that we put $\left[ \sum_{i} \lambda_i \ s_0^i > \cdots > s_{n-1}^i \right] > s_n := \sum_{i} \lambda_i \ s_0^i > \cdots > s_{n-1}^i > s_n$. \medskip \noindent \textit{Step 2}. 
For $n \geq 0$, let $\partial_{n+1} : {F}^{\otimes n+1} \to {F}^{\otimes n}$ be the $k$-linear map defined by \begin{equation*} \begin{split} \partial_1 (w) & =e_{t(w)}-e_{s(w)}, \\ \partial_{n+1} (w_0, \dots , w_n) & = (w_1, \dots , w_n) + \widetilde{\partial_{n+1}} (w_0, \dots , w_n) + (-1)^{n+1} (w_0, \dots , w_{n-1}) \\ & = (w_1, \dots , w_n) + \sum_{i=1}^{n} (-1)^i (w_0, \dots, w_{i-1}w_i, \dots , w_n) \\ & \quad + (-1)^{n+1} (w_0, \dots , w_{n-1}), \end{split} \end{equation*} and let $\delta_{n+1} : SC_{n+1}(\Sigma_\nu) \to SC_n(\Sigma_\nu)$ be the $k$-linear map defined by \[\delta_{n+1} (s_0 > \cdots > s_{n+1}) = \sum_{i=0}^{n+1} (-1)^i s_0 > \cdots > \hat{s_i} > \cdots > s_{n+1}.\] \medskip \noindent \textit{Step 3}. From now on, we will consider homotopy coherent presentations $(Q,I)$. We stress that, under this assumption, $T_n$ is compatible with the equivalence relation used to define $\Sigma_\nu$, that is, if $w_i \sim u_i$ for all $i$ with $1 \leq i \leq n$ then $T_n(w_1, \dots, w_n) = T_n(u_1, \dots, u_n)$. \begin{remark}\label{trivial} If $(Q,I_\nu)$ is homotopy coherent and $[u],[v] \in \Sigma_\nu$ are such that $[u]=[v][u]$ then $v$ is a trivial path. In fact, if $v$ is a path of positive length, $u \sim vu$ implies that $u \sim v^m u$ and, for $m$ sufficiently large, the path $v^m u$ belongs to the admissible ideal $I_\nu$, a contradiction. \end{remark} \begin{lemma}\label{h} Let $(Q,I_\nu)$ be a homotopy coherent presentation of $A$. Then \\ $T_{n} \partial_{n+1} (w_0, \dots , w_n)= \delta_{n+1} T_{n+1}(w_0, \dots , w_n)$, for any $n \geq 0$ and $(w_0, \dots , w_n)$ a basis element in $F^{\otimes n+1}$ with $w_0 \cdots w_n \not \in I_\nu$. \end{lemma} \begin{proof} Observe that $w_0 \cdots w_n \not \in I_\nu$ implies that $w_1 \cdots w_n \not \in I_\nu$ and $w_0 \cdots w_{n-1} \not \in I_\nu$, and the homotopy coherence of the presentation implies that $[w_0 \dots w_{n}] \in \Sigma_\nu$ if and only if $w_0 \cdots w_n \not \in I_\nu$. 
A direct computation shows that $T_0 \partial_1 (w) = [e_{t(w)}]-[e_{s(w)}] = \delta_1 T_1 (w)$ and \begin{equation*} \begin{split} T_1 \partial_2 (w_0, w_1) & = T_1(w_1)-[e_{s(w_0)}]>[w_0w_1] + [e_{t(w_1)}]>[w_0w_1] +T_1(w_0) \\ & = \delta_2 T_2(w_0, w_1) \end{split} \end{equation*} for any $w \not \in I_\nu$, $w_0w_1 \not \in I_\nu$. Now we proceed by induction, assuming that the desired equality holds for any $j$ such that $0 \leq j < n$, $n>1$. \begin{equation*}\begin{split} & T_n \partial_{n+1} (w_0, \dots , w_n)\\ & \quad = T_n(w_1, \dots, w_n) - T_n(w_0w_1, \dots, w_n) + (-1)^n T_n(w_0, \dots, w_{n-1}w_n) \\ & \qquad + (-1)^{n+1}T_n(w_0, \dots, w_{n-1}) - T_n (w_0,\widetilde{\partial_{n-1}} (w_1, \dots, w_{n-1}),w_n) \\ & \quad = T_n(w_1, \dots, w_n) - T_n(w_0w_1, \dots, w_n) + (-1)^n T_n(w_0, \dots, w_{n-1}w_n) \\ & \qquad + (-1)^{n+1}T_n(w_0, \dots, w_{n-1}) \\ & \qquad - T_{n-1}(w_0, \widetilde{\partial_{n-1}} (w_1, \dots, w_{n-1})) > [w_0 \cdots w_n] \\ & \qquad - (-1)^{n} T_{n-1}(\widetilde{\partial_{n-1}} (w_1, \dots, w_{n-1}),w_n) > [w_0 \cdots w_n] \\ & \quad = T_n(w_1, \dots, w_n) - T_n(w_0w_1, \dots, w_n) + (-1)^n T_n(w_0, \dots, w_{n-1}w_n) \\ & \qquad +(-1)^{n+1}T_n(w_0, \dots, w_{n-1}) \\ & \qquad + \left[ T_{n-1} \partial_{n} (w_0, \dots, w_{n-1}) + (-1)^{n+1} T_{n-1} \partial_{n} (w_1, \dots , w_{n-1},w_n) \right] > [w_0 \cdots w_n] \\ & \qquad + \left[-T_{n-1}(w_1, \dots, w_{n-1}) + (-1)^n T_{n-1} (w_2, \cdots, w_n)\right] > [w_0 \cdots w_n] \\ & \qquad + \left[T_{n-1}(w_0w_1, \dots, w_{n-1}) - T_{n-1} (w_1, \dots , w_{n-1}w_n)\right] > [w_0 \cdots w_n] \\ & \qquad + \left[ (-1)^{n+1} T_{n-1}(w_0, \dots , w_{n-2}) + T_{n-1} (w_1, \dots , w_{n-1})\right] > [w_0 \cdots w_n]. 
\end{split} \end{equation*} By the inductive hypothesis and using the inductive definition of $T_n$ we have \begin{equation*} \begin{split} & T_n \partial_{n+1} (w_0, \dots , w_n) \\ & \quad = T_n(w_1, \dots , w_n) +(-1)^{n+1}T_n(w_0, \dots , w_{n-1}) \\ & \qquad + \left[ \delta_{n}T_{n} (w_0, \dots , w_{n-1}) + (-1)^{n+1} \delta_{n}T_{n} (w_1, \dots , w_n) \right] > [w_0 \cdots w_n] \\ & \quad = \delta_{n+1} \left( T_n(w_0, \dots , w_{n-1})> [w_0 \cdots w_n]\right)\\ & \qquad + (-1)^{n+1} \delta_{n+1} \left(T_n (w_1, \dots , w_n) > [w_0 \cdots w_n] \right)\\ & \quad = \delta_{n+1} T_{n+1}(w_0, \dots , w_n). \end{split} \end{equation*} \end{proof} \bigskip \noindent \textit{Step 4}. We are now in a position to describe the morphisms \[\Phi^n : \Hom_k(SC_n(\Sigma_\nu),k) \to \Hom_{E^e} (\rad A^{\otimes n}, A).\] Note that $\rad A \simeq F/I_\nu$ and $\rad A^{\otimes n} \simeq F^{\otimes n}/R$ with \[R= \sum_{s=0}^{n-1} F^{\otimes s} \otimes_E I_\nu \otimes_E F^{\otimes n-s-1}.\] For any $w \in F$ we write $\overline w$ for the class of $w$ modulo $I_\nu$. For any $f$ in $\Hom_k(SC_n(\Sigma_\nu),k)$ let $\widetilde \Phi^n (f) : F^{\otimes n} \to A$ be the $k$-linear map defined by \[\widetilde \Phi^n(f) (w_1, \dots , w_n) = f(T_n(w_1, \dots , w_n)) \overline{w_1 \cdots w_n},\] where $(w_1, \dots , w_n)$ is a basis element in $F^{\otimes n}$. \medskip Let $(w_1, \dots , \rho, \dots , w_n) \in F^{\otimes s} \otimes_E I_\nu \otimes F^{\otimes n-s-1}$. If $\rho$ is a path, it is clear that $\widetilde \Phi^n(f)(w_1, \dots , \rho, \dots , w_n)=0$. If $\rho= \sum_{i=1}^m \lambda_i v_i$ is a minimal relation, $m > 1$, then \[T_n (w_1, \dots , v_i, \dots , w_n) = T_n (w_1, \dots , v_1, \dots , w_n)\] for any $i$ with $1 \leq i \leq m$. 
Hence \begin{equation*} \begin{split} \widetilde \Phi^n(f)(w_1, \dots , \rho, \dots , w_n) & = \sum_{i=1}^m \lambda_i \widetilde \Phi^n(f)(w_1, \dots , v_i, \dots, w_n) \\ &= \sum_{i=1}^m \lambda_i f(T_n(w_1, \dots, v_i, \dots, w_n)) \overline{w_1 \cdots v_i \cdots w_n} \\ &= f(T_n(w_1, \dots, v_1, \dots, w_n)) \sum_{i=1}^m \lambda_i \overline{w_1 \cdots v_i \cdots w_n} = 0. \end{split} \end{equation*} The ideal $I_\nu$ is generated by paths and minimal relations, so $\widetilde \Phi^n(f)(R)=0$, and then $\widetilde \Phi^n$ induces a map $\Phi^n(f): \rad A^{\otimes n} \to A$ given by \begin{equation*} \begin{split} \Phi^0(f)(e_i) & = f(T_0(e_i))e_i= f([e_i])e_i, \\ \Phi^n(f) (\overline w_1, \dots , \overline w_n)& = f(T_n(w_1, \dots , w_n)) \overline{w_1 \cdots w_n}. \end{split} \end{equation*} \begin{proposition} If $(Q,I_\nu)$ is a homotopy coherent presentation of $A$, the map $\Phi^* : \Hom_k(SC_*(\Sigma_\nu),k) \to \Hom_{E^e} (\rad A^{\otimes *}, A)$ is a morphism of complexes and, hence, it induces morphisms ${\HH}(\Phi^n): {\HH}^n(A(\Sigma_\nu)) \to {\HH}^n(A)$, for any $n \geq 0$. \end{proposition} \begin{proof} We have to show that, for any $n \geq 0$, the diagram \[\xymatrix{ \Hom_k(SC_n(\Sigma_\nu),k) \ar[r]^{\Phi^n} \ar[d]^{B^n} & \Hom_{E^e} (\rad A^{\otimes n}, A) \ar[d]^{b^n} \\ \Hom_k(SC_{n+1}(\Sigma_\nu),k) \ar[r]^{\Phi^{n+1}} & \Hom_{E^e} (\rad A^{\otimes n+1}, A) } \] is commutative. A direct computation shows that \begin{equation*} \begin{split} \Phi^{n+1}(B^{n}f) (\overline w_0, \dots , \overline w_n) & = (B^{n}f)(T_{n+1}(w_0, \dots , w_n)) \overline{w_0 \cdots w_n}\\ & = f(\delta_{n+1} T_{n+1}(w_0, \dots , w_n)) \overline{w_0 \cdots w_n} \end{split} \end{equation*} and \[(b^n \Phi^n(f)) (\overline w_0, \dots , \overline w_n) = f(T_n \partial_{n+1}(w_0, \dots , w_n)) \overline{w_0 \cdots w_n}.\] The desired equality follows from Lemma~\ref{h}. 
\end{proof} \subsection{The complex $\Ker \Phi^*$} We will show that the complex $\Ker \Phi^*$ is exact, by constructing a contraction homotopy $S_n: \Ker \Phi^n \to \Ker \Phi^{n-1}$. Observe that \begin{equation*} \begin{split} \Ker \Phi^n & = \{ f \in \Hom_k(SC_n(\Sigma_\nu),k) : f(T_n(w_1, \dots , w_n)) \overline{w_1 \cdots w_n} = 0, \\ & \qquad \quad \mbox{for any basis element $(w_1, \dots , w_n) \in F^{\otimes n}$}\} \\ & = \{ f \in \Hom_k(SC_n(\Sigma_\nu),k) : f(T_n(w_1, \dots , w_n)) = 0, \\ & \qquad \quad \mbox{for any basis element $(w_1, \dots , w_n) \in F^{\otimes n} \setminus R $}\}. \end{split} \end{equation*} In order to construct the homotopy, the chosen presentation $(Q,I_\nu)$ of $A$ must be left or right compatible, see Definition \ref{P2}. In this case we choose a family \[\{ u(s, s') \in Q \vert s > s' \in SC_1(\Sigma_\nu)\}\] satisfying Definition \ref{P2} and let $G^0 : SC_0(\Sigma_\nu) \to SC_1(\Sigma_\nu)$ be the map given by \[ G^0([w])= \begin{cases} 0 & \mbox{if $w \in E$}, \\ [e_{t(w)}] > [w] & \mbox{if $w \in F$}. \end{cases} \] Let $G^{n}: SC_{n}(\Sigma_\nu) \to SC_{n+1}(\Sigma_\nu)$, for $n > 0$, be defined inductively in the following way: for any $s_0 > \cdots > s_n$ in $SC_n(\Sigma_\nu)$, denote \[u_i = u(s_{i-1}, s_{i}), \qquad \Omega= \Omega(s_0 > \cdots > s_n)=\{ i : u_i \in E , 0< i \leq n\},\] and put \begin{itemize} \item [(i)] $G^{n} (s_0 > \cdots > s_n) = G^{n-1}(s_0 > \cdots > s_{n-1})> s_n$ if $\Omega \not = \emptyset$ or $[u_1 \cdots u_n] = s_n$; \item [(ii)] $G^{n} (s_0 > \cdots > s_n) = \left[ G^{n-1}(s_0 > \cdots > s_{n-1}) +(-1)^{n} T_{n}(u_1, \dots, u_n) \right] > s_n$ otherwise. \end{itemize} \medskip In order to prove that the complex $\Ker \Phi^*$ admits a contraction homotopy, we need the following lemma. \begin{lemma} \label{left} Let $(Q, I_\nu)$ be a homotopy coherent, right compatible presentation of $A$. 
Then \begin{itemize} \item [(i)] $ \delta_1 G^0 ([w])= [w]- T_0(e_{t(w)})$, for any $[w] \in SC_0(\Sigma_\nu)$; \item [(ii)] $(\delta_{n+1} G^{n} + G^{n-1} \delta_n )(s_0 > \cdots > s_n)= s_0 > \cdots > s_n$ if $\Omega(s_0 > \cdots > s_n) \not = \emptyset$; \item [(iii)] $(\delta_{n+1} G^{n} + G^{n-1} \delta_n )(s_0 > \cdots > s_n)= s_0 > \cdots > s_n - T_{n}(u_1, \dots, u_n)$, otherwise. \end{itemize} \end{lemma} \begin{proof} A direct computation shows that $\delta_1 G^0 ([w])= [w]- T_0(e_{t(w)})$ and that the assertion is true for $\delta_2 G^1 + G^0 \delta_1$. Now assume that $n > 1$ and proceed by induction. Let $s_0 > \cdots > s_n \in SC_n(\Sigma_\nu)$ and consider the following cases: \begin{itemize} \item [ a)] $\Omega(s_0 > \cdots > s_n) = \emptyset$, $[u_1 \cdots u_n] \not = s_n$; \item [ b)] $\Omega(s_0 > \cdots > s_n) = \emptyset$, $[u_1 \cdots u_n] = s_n$; \item [ c)] \begin{itemize} \item [1)] $\Omega(s_0 > \cdots > s_n) = \{ n \}$; \item [2)] $\Omega(s_0 > \cdots > s_n) = \{ i \}$, $0< i < n$; \item [3)] $\Omega(s_0 > \cdots > s_n)$ has at least two elements. \end{itemize} \end{itemize} \textit{Case} (a). If $s_0 > \cdots > s_n$ is such that $\Omega(s_0 > \cdots > s_n) = \emptyset$ and $[u_1 \cdots u_n] \not = s_n$, we have \begin{equation*} \begin{split} & \delta_{n+1} G^{n} (s_0 > \cdots > s_n) \\ & \quad = \delta_{n+1} (G^{n-1}(s_0 > \cdots > s_{n-1}) > s_n) + (-1)^{n} \delta_{n+1} (T_{n}(u_1, \dots, u_n) > s_n) \\ & \quad = \delta_{n} G^{n-1}(s_0 > \cdots > s_{n-1}) > s_n + (-1)^{n+1} G^{n-1}(s_0 > \cdots > s_{n-1}) \\ & \qquad + (-1)^{n} \delta_{n} T_{n}(u_1, \dots, u_n) > s_n + (-1)^n (-1)^{n+1} T_{n}(u_1, \dots, u_n). 
\end{split} \end{equation*} Using the inductive hypothesis and Lemma~\ref{h} we get \begin{equation*} \begin{split} & \delta_{n+1} G^{n} (s_0 > \cdots > s_n) \\ & \quad= s_0 > \cdots > s_n - G^{n-2} \delta_{n-1} (s_0 > \cdots > s_{n-1}) > s_n \\ & \qquad - T_{n-1}(u_1 , \dots, u_{n-1}) > s_n + (-1)^{n+1} G^{n-1}(s_0 > \cdots > s_{n-1}) \\ & \qquad + (-1)^{n} T_{n-1} \partial_{n}(u_1, \dots, u_n) > s_n - T_{n}(u_1, \dots, u_n) \\ & \quad= s_0 > \cdots > s_n - T_{n}(u_1, \dots, u_n) - G^{n-1} \delta_n(s_0 > \cdots > s_n). \end{split} \end{equation*} \textit{Case} (b). If $s_0 > \cdots > s_n$ is such that $\Omega(s_0 > \cdots > s_n) = \emptyset$ and $[u_1 \cdots u_n] = s_n$ then \begin{equation*} \begin{split} & \delta_{n+1} G^{n} (s_0 > \cdots > s_n) \\ & \quad = \delta_{n+1} (G^{n-1}(s_0 > \cdots > s_{n-1}) > s_n) \\ & \quad = \delta_{n} G^{n-1}(s_0 > \cdots > s_{n-1}) > s_n + (-1)^{n+1} G^{n-1}(s_0 > \cdots > s_{n-1}) \\ & \quad = s_0 > \cdots > s_n - G^{n-2} \delta_{n-1} (s_0 > \cdots > s_{n-1}) > s_n \\ & \qquad - T_{n-1}(u_1 , \dots, u_{n-1}) > s_n + (-1)^{n+1} G^{n-1}(s_0 > \cdots > s_{n-1}). \end{split} \end{equation*} On the other hand \begin{equation*} \begin{split} & G^{n-1} \delta_{n}(s_0 > \cdots > s_n) \\ & \qquad = G^{n-1}(\delta_{n-1} (s_0 > \cdots > s_{n-1}) > s_n) + (-1)^{n} G^{n-1}(s_0 > \cdots > s_{n-1}) \\ & \qquad = G^{n-2}\delta_{n-1} (s_0 > \cdots > s_{n-1}) > s_n + (-1)^{n-1} T_{n-1}(u_2, \dots, u_n) > s_n \\ & \qquad \quad + (-1)^{n} G^{n-1}(s_0 > \cdots > s_{n-1}). \end{split} \end{equation*} Hence \begin{equation*} \begin{split} &(\delta_{n+1} G^{n}+ G^{n-1} \delta_{n})(s_0 > \cdots > s_n) \\ & \quad = s_0 > \cdots > s_n - \left[ T_{n-1}(u_1 , \dots, u_{n-1}) + (-1)^n T_{n-1}(u_2, \dots, u_n)\right] > s_n \\ & \quad = s_0 > \cdots > s_n - T_n(u_1, \dots , u_n). \end{split} \end{equation*} \textit{Case} (c). 
If $s_0 > \cdots > s_n$ is such that $\Omega(s_0 > \cdots > s_n) \not = \emptyset$ then \begin{equation*} \begin{split} & \delta_{n+1} G^{n} (s_0 > \cdots > s_n) \\ & \quad = \delta_{n+1} (G^{n-1}(s_0 > \cdots > s_{n-1}) > s_n) \\ & \quad = \delta_{n} G^{n-1}(s_0 > \cdots > s_{n-1}) > s_n + (-1)^{n+1} G^{n-1}(s_0 > \cdots > s_{n-1}). \end{split} \end{equation*} \textit{Case} (c 1). If $\Omega(s_0 > \cdots > s_n) = \{ n \}$ then $u(s_{n-2},s_n)=u_{n-1}$ and \[\Omega(s_0 > \cdots> \hat s_j > \cdots > s_n) = \begin{cases} \emptyset \quad & \mbox{if $j=n-1,n$}, \\ \{ n \} & \mbox{otherwise}. \end{cases}\] Moreover $[u_1 \cdots u_{n-1}] \not = s_n$. In fact, if $[u_1 \cdots u_{n-1}] = s_n$, using Remark~\ref{trivial} we deduce that $s_n=s_{n-1}$, a contradiction. So \begin{equation*} \begin{split} & G^{n-1} \delta_{n}(s_0 > \cdots > s_n) \\ & \quad = G^{n-1}(\delta_{n-1} (s_0 > \cdots > s_{n-1}) > s_n) + (-1)^{n} G^{n-1}(s_0 > \cdots > s_{n-1}) \\ & \quad = G^{n-2}\delta_{n-1} (s_0 > \cdots > s_{n-1}) > s_n + T_{n-1}(u_1, \dots, u_{n-1}) > s_n \\ & \qquad + (-1)^{n} G^{n-1}(s_0 > \cdots > s_{n-1}). \end{split} \end{equation*} \textit{Case} (c 2). If $\Omega(s_0 > \cdots > s_n) = \{ i \}$, $0< i < n$, then \[\Omega(s_0 > \cdots> \hat s_j > \cdots > s_n) = \begin{cases} \emptyset \quad & \mbox{if $j=i-1,i$}, \\ \{ i \} & \mbox{otherwise}, \end{cases}\] \begin{multline*} G^{n-1}(s_0> \cdots > \hat s_{i-1} > \cdots > s_n) - G^{n-1}(s_0> \cdots > \hat s_i > \cdots > s_n) \\ = \left[ G^{n-2}(s_0> \cdots > \hat s_{i-1} > \cdots > s_{n-1}) - G^{n-2}(s_0> \cdots > \hat s_i > \cdots > s_{n-1})\right] >s_n \end{multline*} and so \begin{equation*} \begin{split} & G^{n-1} \delta_{n}(s_0 > \cdots > s_n) \\ & \quad = G^{n-1}(\delta_{n-1} (s_0 > \cdots > s_{n-1}) > s_n) + (-1)^{n} G^{n-1}(s_0 > \cdots > s_{n-1}) \\ & \quad = G^{n-2}\delta_{n-1} (s_0 > \cdots > s_{n-1}) > s_n + (-1)^{n} G^{n-1}(s_0 > \cdots > s_{n-1}). \end{split} \end{equation*} \textit{Case} (c 3). 
If $\Omega(s_0 > \cdots > s_n)$ has at least two elements then \[\Omega(s_0 > \cdots > \hat s_i > \cdots > s_n) \not = \emptyset\] and \begin{equation*} \begin{split} & G^{n-1} \delta_{n}(s_0 > \cdots > s_n) \\ & \quad = G^{n-1}(\delta_{n-1} (s_0 > \cdots > s_{n-1}) > s_n) + (-1)^{n} G^{n-1}(s_0 > \cdots > s_{n-1}) \\ & \quad = G^{n-2}\delta_{n-1} (s_0 > \cdots > s_{n-1}) > s_n + (-1)^{n} G^{n-1}(s_0 > \cdots > s_{n-1}). \end{split} \end{equation*} Then, in cases (c 2) and (c 3), \begin{multline*} (\delta_{n+1} G^{n}+G^{n-1} \delta_{n}) (s_0 > \cdots > s_n) \\ = \left[ (\delta_{n} G^{n-1}+G^{n-2}\delta_{n-1})(s_0 > \cdots > s_{n-1}) \right] > s_n \end{multline*} and in case (c 1) \begin{multline*} (\delta_{n+1} G^{n}+G^{n-1} \delta_{n}) (s_0 > \cdots > s_n) \\ = \left[ (\delta_{n} G^{n-1}+G^{n-2}\delta_{n-1})(s_0 > \cdots > s_{n-1}) + T_{n-1}(u_1, \dots, u_{n-1}) \right] > s_n. \end{multline*} Hence the assertion follows by induction. \end{proof} Note that $G^{n}T_n =0$: a direct computation proves this for $n=0,1$, and an inductive procedure completes the proof since \[T_n(w_1, \dots, w_n) = [e_{s(w_1)}]>[w_1]> \cdots >[w_1 \cdots w_n] + \mbox{simplices with $\Omega \not = \emptyset$}\] and hence \begin{equation*} \begin{split} & G^n T_n (w_1, \dots, w_n) \\ & \quad = G^n ( \left[ T_{n-1} (w_1, \dots, w_{n-1}) + (-1)^n T_{n-1} (w_2, \dots, w_n) \right] > [w_1 \cdots w_n])\\ & \quad = G^{n-1} ( T_{n-1} (w_1, \dots, w_{n-1}) +(-1)^n T_{n-1} (w_2, \dots, w_n)) > [w_1 \cdots w_n]. \end{split} \end{equation*} So, for any $f \in \Ker \Phi^{n+1}$, the composition $f G^{n}$ belongs to $\Ker \Phi^{n}$. Let \[S_{n+1}: \Ker \Phi^{n+1} \to \Ker \Phi^{n}\] be the map defined by $S_{n+1}(f)= f G^{n}$. \begin{proposition} Let $(Q, I_\nu)$ be a homotopy coherent, right compatible presentation of $A$. The map $S_{n+1}: \Ker \Phi^{n+1} \to \Ker \Phi^{n}$ is a contraction homotopy. 
\end{proposition} \begin{proof} It follows immediately from the previous lemma since \begin{equation*} \begin{split} (S_1B^0)(f) & = B^0 f G^ 0 = f \delta_1 G^0=f, \quad \mbox{and} \\ (S_{n+1} B^{n} + B^{n-1} S_{n})(f)&= B^{n} f G^{n} + B^{n-1} f G^{n-1}= f \delta_{n+1} G^n + f G^{n-1} \delta_{n}=f \end{split} \end{equation*} because $f T_{0}=0=f T_{n}$. \end{proof} \begin{theorem}\label{teorema} Let $(Q,I_\nu)$ be a homotopy coherent, right compatible presentation of an algebra $A$. If $\Phi^{n-1}$ is a surjective morphism, then ${\HH}(\Phi^n): {\HH}^n(A(\Sigma_\nu)) \to {\HH}^n(A)$ is an injective morphism. \end{theorem} \begin{proof} It follows by diagram chasing of elements in the commutative diagram of complexes computing the corresponding cohomology groups. \end{proof} Monomial algebras without non-zero oriented cycles, schurian algebras and incidence algebras satisfy the assumptions of the following corollary. \begin{corollary}\label{46} Let $(Q,I_\nu)$ be a homotopy coherent, right compatible presentation of an algebra $A$. If $\dim_k A(x,x)=1$ for any $x \in Q_0$ then the morphism ${\HH}(\Phi^1): {\HH}^1(A(\Sigma_\nu)) \to {\HH}^1(A)$ is injective. \end{corollary} \begin{proof} The proof follows from the previous theorem by observing that the assumed hypotheses imply that $\Phi^0$ is a surjective map. \end{proof} \begin{corollary} \label{caso1} If $A$ is an incidence algebra, then ${\HH}(\Phi^n): {\HH}^n(A(\Sigma_\nu)) \to {\HH}^n(A)$ is an isomorphism for any $n \geq 0$. \end{corollary} \begin{proof} The assertion is clear for $n=0$. Since $A={\Ia}(\Sigma)$ is an incidence algebra, recall from Section \ref{posetincidencia} that \[\rad A^{\otimes n}= \oplus_{s_0> s_1 > \dots > s_n} A(s_0, s_1) \otimes_k A(s_1, s_2) \otimes_k \dots \otimes_k A(s_{n-1}, s_n),\] with $\dim_k A(s_{i-1}, s_i)=1$ for all $i$ with $1\leq i \leq n$. 
Taking a set of basis elements $(\overline w_1, \dots , \overline w_n)$ in $A(s_0, s_1) \otimes_k A(s_1, s_2) \otimes_k \dots \otimes_k A(s_{n-1}, s_n)$, the maps $g: \rad A^{\otimes n} \to A$ defined by \[g(\overline v_1, \dots , \overline v_n)= \begin{cases} \lambda \overline{ w_1 \cdots w_n} \quad & \mbox{if $(\overline v_1, \dots , \overline v_n)=(\overline w_1, \dots , \overline w_n)$},\\ 0 & \mbox{otherwise}, \end{cases}\] form a basis of the $k$-vector space $\Hom_{E^e} (\rad A^{\otimes n}, A)$. Take $f \in \Hom_k(SC_n(\Sigma_\nu),k)$ defined as follows: \[f(s_0 > \cdots > s_n)= \begin{cases} \lambda \quad & \mbox{if $s_0 > \cdots > s_n = [e_{s(w_1)}]>[w_1]>\cdots > [w_1 \cdots w_n]$},\\ 0 & \mbox{otherwise}. \end{cases}\] Now $\Phi^n(f)=g$ and hence $\Phi^{n}$ is a surjective map for any $n \geq 0$. Then we get the short exact sequence of the complexes \[0 \to \Ker (\Phi^*) \to \Hom_k(SC_*(\Sigma_\nu),k) \to \Hom_{E^e}(\rad A^{\otimes *},A) \to 0,\] so that we get the long exact sequence \[\dots \to {\HH}^n(\Ker \Phi^*) \to {\HH}^n(A(\Sigma_\nu)) \to {\HH}^n(A) \to {\HH}^{n+1}(\Ker \Phi^*) \to \dots\] \end{proof} \section{Examples} \begin{example} Consider the presentations $(Q,I_1)$ and $(Q,I_2)$ of the algebra $A=kQ/I_1$ given by \[ Q: \xymatrix{1 \ar@/^/[r]^{\alpha}\ar@/_/[r]_{\beta} & 2 \ar[r]^\gamma & 3, }\] $I_1= < \alpha \gamma> $ and $I_2= < (\alpha - \beta) \gamma> $, presented in Example~\ref{ejemplo}. 
Using \cite{IZ} we construct the reduced posets $\overline \Sigma_1, \overline \Sigma_2$, as described in the Introduction, \[\overline \Sigma_1: \xymatrix{[e_1] \ar[d] \ar[dr] & [e_2] \ar[d] \ar[dl] & [e_3] \ar[dl]\\ [\alpha] & [\beta\gamma] & } \qquad \qquad \overline \Sigma_2: \xymatrix{[e_1] \ar[dr] & [e_2] \ar[d] & [e_3] \ar[dl]\\ & [\beta \gamma] }\] and from \cite[2.5,2.2]{GR} and \cite[5.3]{H} we get \[{\HH}^i(A(\Sigma_1)) = \begin{cases} k \quad &\mbox{if $i=0, 1$}, \\ 0 \quad &\mbox{otherwise} \end{cases}, \qquad {\HH}^i(A(\Sigma_2)) = \begin{cases} k \quad &\mbox{if $i=0$}, \\ 0 \quad &\mbox{otherwise}. \end{cases}\] From \cite[5.3,1.6]{H} we get \[{\HH}^i(A) = \begin{cases} k \quad &\mbox{if $i=0$}, \\ k^2 \quad &\mbox{if $i=1$}, \\ 0 \quad &\mbox{otherwise}. \end{cases} \] \end{example} Now we present families of algebras where the non-vanishing of some Hochschild cohomology groups can be deduced. \begin{example} Consider the quiver \[ \xymatrix{1 \ar @/^/ @{-} [rr]^{\alpha_1}\ar @/_/ @{-} [rr]_{\alpha_n} & \vdots & 2,}\] with arrows $\alpha_1, \dots, \alpha_n$ with any orientation, and let $A=kQ/F^2$. The quiver of the corresponding incidence algebra $A(\Sigma)$ is given by \[\xymatrix{ [1] \ar[d] \ar[rd] \ar[drrr] & & & [2] \ar[d] \ar[dll] \ar[dlll] \\ [\alpha_1] & [\alpha_2] & \dots & [\alpha_n]} \] and from~\cite[1.6]{H} we get \[{\HH}^i(A(\Sigma)) = \begin{cases} k \quad &\mbox{if $i=0$}, \\ k^{n-1} \quad &\mbox{if $i=1$}, \\ 0 \quad &\mbox{otherwise}. \end{cases}\] Hence $\dim_k {\HH}^1(A) \geq n-1$ by Corollary \ref{46}. If $\alpha_1, \cdots, \alpha_n$ share starting and ending points, then from~\cite[1.6]{H} we get \[{\HH}^i(A) = \begin{cases} k \quad &\mbox{if $i=0$}; \\ k^{n^2-1} \quad &\mbox{if $i=1$}; \\ 0\quad &\mbox{otherwise}. 
\end{cases}\] The particular case $n=2$, $\alpha_1,\alpha_2$ with opposite orientations, has been considered in~\cite{C2} and \[{\HH}^i(A) = \begin{cases} k \quad &\mbox{if $i=0, 4s, 4s+1$}; \\ 0\quad &\mbox{otherwise}. \end{cases} \] \end{example} The previous example shows that the injective morphism described in Theorem~\ref{teorema} cannot be expected to be an isomorphism in general. It would be nice to determine all the algebras that admit a distinguished presentation making the mentioned morphism an isomorphism. \begin{example} Let $Q_n$ be the quiver \[\xymatrix{ 1 \ar @/^/ [r]^{\alpha_1}\ar @/_/ [r]_{\beta_1} & 2 \ar @/^/ [r]^{\alpha_2}\ar @/_/ [r]_{\beta_2} & 3 \ar @/^/ [r]^{\alpha_3}\ar @/_/ [r]_{\beta_3} & \cdots \ar @/^/ [r]^{\alpha_{n-1}}\ar @/_/ [r]_{\beta_{n-1}} & n}\] and let $A_n=kQ_n/F^2$. The quiver of the corresponding incidence algebra $A(\Sigma_n)$ is given by \[\xymatrix{ 1 \ar[d] \ar[rd] & & 2 \ar[dll] \ar[dl] \ar[d] \ar[dr] & & 3 \ar[dll] \ar[dl] \ar[d] & \cdots & n \ar[dl] \ar[d]\\ [\alpha_1] & [\beta_1] & [\alpha_2] & [\beta_2] & [\alpha_{3}] & \cdots \ [\alpha_{n-1}] & [\beta_{n-1}]} \] and from~\cite[1.6]{H} we get \[{\HH}^i(A(\Sigma_n)) = \begin{cases} k \quad &\mbox{if $i=0$}; \\ k^{n-1} \quad &\mbox{if $i=1$}; \\ 0\quad &\mbox{otherwise}. \end{cases}\] Hence $\dim_k {\HH}^1(A_n) \geq n-1$ by Corollary \ref{46}. \end{example}
\section{Introduction} Recently, considerable progress has been made in the application of digital techniques to experimental physics, which makes it possible to perform milestone physics experiments even in student laboratories. A good example is the Perrin experiment [1], considered the first to directly prove the atomic structure of matter. However, its verification in university laboratories [2],[3],[4],[5] may meet some difficulties due to the small statistics one collects (see e.g. [2],[4],[5]). The linear dependence between the average square displacement $\langle r^{2}\rangle$ of a particle undergoing Brownian motion in a medium and the observation time $t$, as required by the Einstein-Smoluchowski diffusion law, often becomes very problematic.\\ It is therefore essential to examine the minimal statistics (number of tracked particles) one should take into account, within the limited observation time, to reveal the major feature of the diffusion law. We propose an analytical model which can also easily be simulated numerically. The aim of this model is to investigate how the results for $\langle r^{2}\rangle$ versus $t$ depend on the statistics and what the scaling range of the expected linear relationship is. This study should help to set up the experiment properly as well as to analyze the obtained results more correctly.\\ In the next section we present the model, which directly reflects the physics behind the Perrin experiment. In section 3 the results of a numerical simulation of this model are described, and the main features of the diffusion relation, together with its scaling range, are revealed for various numbers of observed particles. To avoid the problem of small statistics causing departures from the strict power-law behavior, we introduce and discuss the idea of Artificially Increased Statistics (AIS) in section 4. This method is then applied both to the results of the numerical simulation and to some experimental data. 
We argue that the method may significantly decrease the level of statistical noise in the data, leading to much better agreement with the linear dependence of diffusion theory. In the last section a summary of the obtained results is given. \section{Description of the Model} The most popular derivations of the diffusion law in viscous media come from Einstein [6], Langevin [7] and Smoluchowski [8]. Here we propose another approach, based on time-series analysis combined with the average time $\tau$ between consecutive collisions of the tracked mesoscopic particles with other particles in the liquid (i.e. $\tau$ has the meaning of the average time between collisions which significantly change the motion of the tracked object). Such an approach seems to be closer to the spirit of the original Perrin experiment [1]. \\ Let the trajectory of the observed particle of mass $m$ moving in $d$-dimensional space be $x^{\alpha}(t)$, where $\alpha =1,2,...,d$. We assume $x^{\alpha}(t)$ to be a discrete $d$-dimensional time series with constant spacing $\tau$ in time ($t = 0, \tau, 2\tau,...,N\tau$). The obvious notation \begin{equation}\label{eq1} x^{\alpha}(k\tau) = x^{\alpha}_{k}, k = 1, 2, ..., N \end{equation} and \begin{equation}\label{eq2} \Delta x^{\alpha}_{k} = x^{\alpha}_{k+1}-x^{\alpha}_{k} \end{equation} will be applied, where $\Delta x^{\alpha}_{k}$ is the instantaneous displacement of the particle at $t=k\tau$. 
For the stationary, integer Brownian motion (no displacement correlation) with no drift one has for large $n$: \begin{equation}\label{eq3} \langle\Delta x^{\alpha}_{i}\rangle_{n} = 0 \end{equation} and \begin{equation}\label{eq4} \langle\Delta x^{\alpha}_{i}\Delta x^{\alpha}_{j}\rangle_{n} = \delta_{ij}{({\sigma}^{\alpha}_{i})}^2 \end{equation} where $\langle .\rangle_n$ is the average taken over the ensemble of $n$ tracked particles and ${({\sigma} ^{\alpha}_{i})}^2=\sigma^{2}$ is the variance of the instantaneous displacements, i.e.: \begin{equation}\label{eq5} \langle (\Delta x^{\alpha}_{i})^{2}\rangle_{n} = \sigma^{2} \end{equation} The total mean squared displacement $\langle r^{2}\rangle_{n}$ of particles from their initial positions after $N$ collisions can be easily calculated with the help of Eq.~(5): \begin{equation}\label{eq6} \langle \Delta r^{2}\rangle_{n} = \langle\sum\limits^{d}_{\alpha=1}(\sum^{N}_{i=1}\Delta x^{\alpha}_{i})^{2}\rangle_{n} = \frac{d\sigma^{2}}{\tau}t \end{equation} In order to calculate $\sigma^{2}$ let us notice that \begin{equation}\label{eq7} \Delta x^{\alpha}_{i} = \tau\langle v^{\alpha}_{i}\rangle_{\tau} \end{equation} with $\langle v^{\alpha}_{i}\rangle_{\tau}$ being the average velocity of the particle over the $i$-th interval between collisions. 
Hence from Eqs.~(5) and (7): \begin{equation}\label{eq8} \sigma^{2} = \tau^{2}\langle\langle v^{\alpha}_{i}\rangle^{2}_{\tau}\rangle_{n} \end{equation} The equipartition theorem establishes the connection of microscopic quantities with the absolute temperature $T$ and the Boltzmann constant $k$: \begin{equation}\label{eq9} \frac{1}{2}m\langle\langle v^{\alpha}_{i}\rangle^{2}_{\tau}\rangle_{n} = \frac{1}{2}kT \end{equation} Therefore Eq.~(6) reads: \begin{equation}\label{eq10} \langle\Delta r^{2}\rangle_{n} = (\frac{dkT}{m}\tau)t \end{equation} The above formula is the standard diffusion law with the diffusion constant \begin{equation}\label{eqx11} D = \frac{dkT}{m}\tau \end{equation} expressed in terms of $\tau$.\\ Usually one writes $D$ in terms of the liquid viscosity $\eta$ as \begin{equation}\label{eqx12} D = \frac{dkT}{\alpha} \end{equation} where $\alpha = 6\pi \varrho \eta$ (Stokes law), with $\varrho$ the radius of the considered mesoscopic particles.\\ Hence one gets a simple relation between the parameter $\tau$ of the model and the macroscopic quantities $m$, $\alpha$: \begin{equation}\label{eq13} \tau = \frac{m}{\alpha} \end{equation} Thus the model, besides reproducing the known diffusion law, also estimates the average time $\tau$ elapsing between consecutive collisions in the system as a simple function of macroscopically measured quantities. This time can be taken as the input parameter in the numerical study of the Perrin experiment, which is done in the next section. \section{Numerical Simulation of the Perrin Experiment} The solution in Eq.~(10) can be checked via numerical simulation of Brownian motion in a viscous medium. In fact this simulation is the only way one can find the sufficient statistics, i.e. the number of tracked particles in the ensemble one should observe in a real experiment to obtain results confirming the linear relation. 
If the sufficient-statistics requirement is not satisfied, one observes significant departures from the linear behavior $\langle r^2\rangle_{n}\sim t$ (see e.g. Refs.~[2], [4]).\\ We simulated all time series $\{x^{\alpha}_i\}$ in $d=2$ dimensions, the case usually discussed by experimentalists. The time series were built in the well-known iterative way \begin{equation}\label{eqx14} x^{\alpha}_{i+1}= x^{\alpha}_{i} + \Delta x^{\alpha}_{i} \end{equation} \begin{equation}\label{eq15} r^{2}_{i} = {(x^{1}_{i})}^{2} + {(x^{2}_{i})}^{2} \end{equation} where the displacements have been generated as random Gaussian numbers $N(0,\sigma)$, with the standard deviation $\sigma = \tau{(kT/m)}^{1/2}$ obtained from Eqs.~(8),(9). All simulations were performed for the case of diffusion in pure water $(\eta = 1.00\times {10}^{-3}~Pa\cdot s)$, room temperature $T = 295~K$, $m = 4.28\times {10}^{-16}~kg$ and $\varrho = 425~nm$, which roughly corresponds to the parameters of the real Perrin experiment.\\ The essential task to be done at the very beginning was to determine the scaling range $\lambda$ of the discussed linear dependence as a function of the number of tracked particles $n$. This was done for bunches of simulated trajectories, varying the number of observed particles in the range $n = 10 \div 500$. A bunch of twenty trajectories was investigated for each $n$ in the above range. Examples of just five runs in each bunch (for the clarity of the figure we do not show all the runs) are pictured in Fig.~1 a-d. Hence we have found the scaling-range relation revealed in Fig.~2. The best fit gives \begin{equation}\label{eqx16} \lambda \sim n^\beta \end{equation} where $\beta = 0.51 \pm 0.04$ and the uncertainty comes from the statistics.\\ Let us notice that if the number of observed particles does not exceed $10$, the linear dependence $\langle r^2\rangle_{n}\sim t$ can be confirmed only for observation times $t<3~s$! 
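The simulation scheme of Eqs.~(14)--(15) can be sketched in a few lines of Python. This is only a minimal illustration of the procedure under the parameters quoted above (NumPy is assumed; the function names and the vectorized layout are ours, not the code used for the figures):

```python
import numpy as np

# Physical parameters quoted in the text: pure water at room temperature.
k_B = 1.38e-23              # Boltzmann constant [J/K]
T = 295.0                   # temperature [K]
eta = 1.00e-3               # water viscosity [Pa s]
m = 4.28e-16                # particle mass [kg]
varrho = 425e-9             # particle radius [m]

alpha = 6 * np.pi * varrho * eta      # Stokes friction coefficient
tau = m / alpha                       # Eq. (13): mean time between collisions
sigma = tau * np.sqrt(k_B * T / m)    # Eqs. (8)-(9): step standard deviation

def simulate(n_particles, n_steps, d=2, rng=None):
    """Iterate Eq. (14): x_{i+1} = x_i + dx_i, with dx_i drawn from N(0, sigma)."""
    rng = np.random.default_rng() if rng is None else rng
    steps = rng.normal(0.0, sigma, size=(n_particles, n_steps, d))
    return np.cumsum(steps, axis=1)   # trajectories x^alpha_i

def mean_r2(trajectories):
    """<r^2>_n of Eq. (15), averaged over the ensemble of tracked particles."""
    return (trajectories ** 2).sum(axis=2).mean(axis=0)
```

By Eq.~(6) the resulting curve should grow linearly with slope $d\sigma^{2}/\tau$; in practice the trajectory is sampled at a much coarser interval (the text uses $\Delta t = 10^{5}\tau \sim 0.01$ s), which only rescales the variance of the recorded steps.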
Such a short scaling range makes any analysis based on longer observation times (as the authors of Ref.~[4] did) simply incorrect. Having determined the scaling range, we may proceed to calculate the diffusion constant value and its expected standard deviation from the mean. Such an analysis was done for the simulated trajectories mentioned above. Some chosen cases (again, for the clarity of the graph we do not show all of them) with maximal and minimal values of $D$ for every $n$ are shown in Fig.~3 a-d. All results for the mean $D$ values and their standard deviations as a function of $n$ are presented in Fig.~4. Hence we see that a final result within $10\%$ of the expected theoretical value can be found only if one considers an ensemble of $n \geq 50$ particles. \section{Analysis of Results with Artificially Increased Statistics} The results of the previous section suggest that to get reasonable agreement with the predictions of the diffusion law one should collect data in the real experiment from at least $n\sim 50$ particles. In many less professional labs (e.g. student labs) such a requirement is virtually impossible to satisfy, mainly because of the limited time duration of the data collection if no sophisticated computerized apparatus is used. Below we present an idea that helps to overcome this difficulty. We call it Artificially Increased Statistics (AIS).\\ The main idea of AIS is to build the statistics of consecutive displacements from the very small number of available trajectories, counting the displacements not only from the initial starting point $(x^1_0, x^2_0) = (0, 0)$ but varying the starting point along the whole single-particle trajectory. Thus any momentary position of the particle, say $(x^1_k, x^2_k)$, $k=1,2,...,N$, is a starting point for collecting the statistics of all subsequent displacements, i.e. 
$(\Delta x^1_{l-k}, \Delta x^2_{l-k}), l>k$, where $\Delta x^{\alpha}_{l-k} = x^{\alpha}_l - x^{\alpha}_k$ is the $\alpha$-th component of the $(l-k)$-step displacement. In this way, for a time series of length $N$ we have $N-m$ data for the $m$-step displacements instead of the single displacement usually taken into account. The statistics are then averaged in the usual way over all considered (observed) particles. Thus, even if $n$ is small, the overall number of data entering the statistics is large enough to fulfil the expectation of the linear law. Let us now look at the results of the application of AIS to the simulated Brownian motion as well as to pure experimental data from real experiments. In Fig.~5a-b we present the bunch of squared displacements in time taken for the statistics of $n=10$ (a) and $n=50$ (b) particles worked out with the AIS procedure. The comparison with the ``naked'' data from Fig.~3a-b shows a tremendous difference. Although the scaling range after the AIS lifting does not seem to change much, the linear dependence $\langle r^2\rangle \sim t$ is now much more convincing. In fact the comparison of the diffusion constants $D$ obtained from the ``naked'' analysis and from the data lifted by AIS shows an approximately 7 times smaller uncertainty in the $D$ \begin{table}[ht] \caption{A comparison of the diffusion constant values found as best fits before and after the AIS procedure. Each item comes from a different simulation of an n = 10 or n = 50 particle run. The scaling range is fixed according to Fig.~1a,b with the sampling time interval $\Delta \, t = 10^{5} \tau \sim 0.01 s$.} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{$D(\mu\, m^2 \, s^{-1} )$} \cr \cline{2-5} & \multicolumn{2}{c|}{n=10 particles} & \multicolumn{2}{c|}{n=50 particles} \cr \hline \hline Run nr & Before AIS & AIS data & Before AIS & AIS data \cr & (``naked'' data) && (``naked'' data) & \cr &&&& \cr \hline 1. & 1.04 & 1.83 & 1.92 & 1.92 \cr \hline 2. 
& 2.07 & 1.76 & 1.90 & 1.95 \cr \hline 3. & 1.28 & 1.90 & 1.88 & 1.95 \cr \hline 4. & 1.65 & 2.15 & 2.29 & 1.83 \cr \hline 5. & 3.23 & 2.00 & 2.24 & 2.16 \cr \hline 6. & 0.96 & 1.92 & 2.00 & 2.03 \cr \hline 7. & 2.73 & 1.95 & 1.73 & 2.05 \cr \hline 8. & 4.01 & 2.23 & 1.74 & 1.86 \cr \hline 9. & 1.70 & 1.82 & 2.64 & 2.00 \cr \hline 10. & 1.46 & 1.99 & 1.99 & 1.89 \cr \hline \hline $\langle D \rangle$ & 2.00 & 1.96 & 2.03 & 1.96 \cr \hline $\sigma_{D}$ & 1.0 & 0.15 & 0.28 & 0.10 \cr \hline \end{tabular} \end{center} \end{table} evaluation in the case of the $n=10$ statistics (see Table 1). The corresponding result for the $n=50$ case is improved by about a factor of $3$.\\ We have also calculated the mean absolute error (MAE), defined as \begin{equation} \delta_{MAE}=\frac{1}{N}\sum\limits^{N}_{k=1}|D_k - D_{th}| \end{equation} where $D_{th}$ is the theoretical value of the diffusion coefficient ($D_{th}=2.01~\mu m^2/s$ for the considered diffusion process) and the sum is taken over all simulated runs.\\ For the sample of $10$ runs with and without AIS one obtains for $n=10$ particles $\delta_{MAE}(n=10)=0.80~\mu m^2s^{-1}$, decreasing to $\delta^{AIS}_{MAE}(n=10)=0.13~\mu m^2s^{-1}$ when AIS is switched on. The corresponding results for $n=50$ particles are (in $\mu m^2s^{-1}$) $\delta_{MAE}(n=50)=0.20$ and $\delta^{AIS}_{MAE}(n=50)=0.09$, respectively.\\ The positive effect of the AIS procedure can also be seen directly in pure experimental data. We show in Fig.~6a the data taken from Ref.~[4] for the diffusion of $n=5$ spherical latex particles in pure water. One gets a much better correspondence with the linear dependence when the AIS procedure is applied to these experimental points, as is clearly revealed in Fig.~6b. The obtained best fit for the diffusion constant now corresponds closely to the expected theoretical value $D_{th}=2.01~\mu m^2s^{-1}$, which is not the case for the fit obtained by the authors of Ref.~[4]. 
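In terms of arrays, the AIS estimator is just the lag average familiar from time-series analysis: every momentary position serves as a starting point, so a series of length $N$ contributes $N-m$ data to the $m$-step displacement statistics. The sketch below (our own illustration; trajectories are stored as an array of shape (particles, steps, dimensions)) contrasts it with the ``naked'' estimator that uses the initial point only:

```python
import numpy as np

def msd_naked(trajs):
    """'Naked' statistics: squared distance from the initial point only,
    i.e. one datum per particle per lag."""
    disp = trajs - trajs[:, :1, :]
    return (disp ** 2).sum(axis=2).mean(axis=0)

def msd_ais(trajs):
    """AIS statistics: for lag m, average the squared displacement
    x_{k+m} - x_k over all N - m admissible starting points k,
    then over all particles."""
    n, N, d = trajs.shape
    out = np.zeros(N)
    for m in range(1, N):
        disp = trajs[:, m:, :] - trajs[:, :-m, :]
        out[m] = (disp ** 2).sum(axis=2).mean()
    return out
```

Both estimators are then fitted linearly inside the scaling range; AIS only changes the amount of data entering each lag, not the expected slope.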
\section{Conclusions} The proper determination of the scaling range for the linear dependence $\langle r^2\rangle \sim t$ is crucial in the data analysis. We argued that this scaling range behaves like $\lambda \sim n^\beta$, where the constant $\beta$ was determined to be $\beta \sim 0.5$. The numerical simulation shows that for the case of mesoscopic particles diffusing in water the scaling range for $n \sim 10$ particles is as short as $\lambda \leq 3s$. For $n<10$ this scaling range is difficult to determine at all. In many papers this fact is ignored, which gives misleading results.\\ However, even if one remains within the scaling-range regime, the results of simulated runs are not always statistically repeatable if too small a statistics is considered. The minimal number of tracked particles needed to reveal the diffusion law is $n\geq 50$. One may nevertheless find reasonable correspondence between theoretical predictions and experimental results even for a smaller number of tracked particles if the idea of AIS is applied. In this paper we have described this idea and have shown how it works for simulated data as well as for data taken from real experiments. It turns out that with the AIS analysis one may get results within $10\%$ of the expected theoretical value of the diffusion constant tracking just a few mesoscopic objects. The corresponding input data without AIS give a much bigger uncertainty, of the order of $50\%$ (see Fig.~4). The same applies when the MAE is calculated. To decrease the uncertainty to the former level of $10\%$ one has to track roughly ten times more objects!\\ We have checked that for $n=50$ tracked particles the AIS procedure decreases the statistical uncertainty in $D$ from about $15\%$ (the ``naked'' data case) to $\sim 5\%$. Simultaneously $\delta_{MAE}$ drops by about half (from $0.20~\mu m^2s^{-1}$ to $0.09~\mu m^2s^{-1}$). 
The AIS procedure is here less impressive than in the $n=10$ case but still shows a significant improvement in the data results.\\ In this way it is quite possible to collect data giving a very good prediction for the diffusion constant even in less professional labs, where one is not able to measure simultaneously signals coming from a bigger number of objects. Hence, other important physical constants (like e.g. the Boltzmann constant $k$ or the Avogadro number $N_A$) can be deduced with high accuracy, which is often the crucial point of such experiments.\\ We also performed simulations for liquids with other viscosities. The same final conclusions as for the case of water can be formulated. Because of the very similar results we do not show them explicitly in this paper, but we believe that in any case such numerical simulations should be carried out before the actual experiment is planned.
\section{Introduction} The author~\cite{mygroupoidalgebra,mygroupoidarxiv} associated to a commutative ring with unit $\Bbbk$ and an \'etale groupoid $\mathscr G$ with locally compact Hausdorff and totally disconnected unit space a $\Bbbk$-algebra $\Bbbk\mathscr G$, which is a discrete analogue of the groupoid $C^*$-algebra of $\mathscr G$~\cite{Renault,Paterson,Exel}. This class of algebras includes group algebras, inverse semigroup algebras and Leavitt path algebras. Further study of these algebras has occurred in~\cite{operatorguys1,operatorsimple1,operatorsimple2,groupoidbundles,GroupoidMorita}. Recently, Nekrashevych~\cite{Nekgrpd} has used \'etale groupoid algebras to construct finitely generated simple algebras of quadratic growth over any base field. In this paper we study simplicity, primitivity and semiprimitivity of group\-oid algebras. In particular, an extension of the famous result of Amitsur on semiprimitivity of group algebras in characteristic $0$ over fields which are not algebraic over $\mathbb Q$ is obtained for groupoid algebras. Applications are then presented to inverse semigroup algebras. In particular, we recover results of Munn~\cite{MunnSemiprim,MunnSemiprim2,MunnAlgebraSurvey} and Domanov~\cite{Domanov} on semiprimitivity of inverse semigroup algebras with simpler, and more conceptual proofs. A primitivity result of Munn~\cite{Munnprimitive} is also recovered. We give a partial answer to a question of Munn as to which contracted inverse semigroup algebras are simple~\cite{MunnAlgebraSurvey}. Namely, we characterize all inverse semigroups with Hausdorff universal groupoid whose contracted semigroup algebra is simple. This includes all $0$-$E$-unitary inverse semigroups. We also note that the semiprimitivity of Leavitt path algebras over any base field~\cite{Leavittsemiprimitivity} is an immediate consequence of our results. 
The primitivity results for Leavitt path algebras of~\cite{Leavittprimitivity} can also be rederived from our results, although we do not do so here. The main observation is that condition (L) corresponds to effectiveness of the corresponding \'etale groupoid and that the further condition needed for primitivity amounts to the groupoid having a dense orbit. The paper is organized as follows. First we recall basics on \'etale groupoids, inverse semigroups and their algebras. Then we discuss the extension of the simplicity/uniqueness results of~\cite{operatorsimple1} to arbitrary rings (see the historical discussion below for connections with~\cite{operatorsimple2}). This is followed by our main results on primitivity and semiprimitivity. We then discuss inverse semigroup algebras and how topological properties of groupoids of germs relate to dynamical properties of inverse semigroup actions (see the historical discussion below for connections with~\cite{ExelPardo}). Then our \'etale groupoid results are applied to inverse semigroup algebras. \subsubsection*{Historical note} This paper began when the author read~\cite{operatorsimple1} and realized it could be used to make progress on an old question of Munn~\cite{MunnAlgebraSurvey}. The author was able to remove the assumption that the base field was $\mathbb C$ and obtained results on minimality and effectiveness of tight groupoids of inverse semigroups. These results were announced at the workshop ``Semigroups and Applications'' (Uppsala, August 2012). Since then the simplicity results were obtained independently by Clark and Edie-Michell~\cite{operatorsimple2} (and submitted before we wrote up our results). The results on primitivity and semiprimitivity were obtained in 2013/2014 and presented at the ``Partial Actions and Representations Symposium'' (Gramado, May 2014) and the ``Fields Institute Workshop on Groups, Rings and Group Rings'' (July 2014). 
As we were finalizing this paper for submission, Exel and Pardo placed on ArXiv the paper~\cite{ExelPardo}, which contains quite a bit of overlap with our results on tight groupoids that were announced in Uppsala and Gramado, and which appear in the final section of this paper. The work in~\cite{ExelPardo} was obtained independently of our work and some of it was mentioned in the meeting at Gramado in connection with self-similar group actions. \section{Groupoids, inverse semigroups and their algebras} This section contains preliminaries about groupoids, inverse semigroups and their algebras. Lawson~\cite{Lawson} is the definitive reference for inverse semigroup theory. For \'etale groupoids, we recommend~\cite{Renault,Exel,Paterson}. Algebras of ample groupoids were introduced in~\cite{mygroupoidalgebra}; see also~\cite{mygroupoidarxiv} for some additional results not included in~\cite{mygroupoidalgebra} as well as~\cite{operatorguys1,operatorsimple1,operatorsimple2}. \subsection{Inverse semigroups} An \emph{inverse semigroup} is a semigroup $S$ such that, for all $s\in S$, there exists a unique $s^*\in S$ with $ss^*s=s$ and $s^*ss^*=s^*$. Notice that $s^*s,ss^*$ are idempotents. Also, note that $(st)^*=t^*s^*$. Idempotents of $S$ commute and so the set $E(S)$ of idempotents is a subsemigroup. Moreover, it is a meet semilattice with respect to the ordering $e\leq f$ if $ef=e$. In fact, $S$ itself is ordered by $s\leq t$ if $s=te$ for some idempotent $e\in E(S)$, or equivalently $s=ft$ for some $f\in E(S)$. This partial order is compatible with multiplication and stable under the involution. We put $s^{\downarrow}=\{t\in S\mid t\leq s\}$ and $s^{\uparrow}=\{t\in S\mid t\geq s\}$. If $e\in E(S)$, then $G_e=\{s\in S\mid s^*s=e=ss^*\}$ is a group called the \emph{maximal subgroup} of $S$ at $e$. It is the group of units of the monoid $eSe$. All groups are inverse semigroups, as are all (meet) semilattices. 
A semidirect product $E\rtimes G$ of a group $G$ and a semilattice $E$ is also an inverse semigroup. If $X$ is a topological space, then the set of all homeomorphisms between open subsets of $X$ is an inverse semigroup $I_X$ under the usual composition of partial functions. An inverse semigroup $S$ has a zero element $z$ if $zs=z=sz$ for all $s\in S$. Zero elements are unique when they exist and will often be denoted $0$. The zero element of $I_X$ is the empty partial bijection. By an action of an inverse semigroup $S$ on a space $X$, we mean a homomorphism $\theta\colon S\to I_X$ such that if we put $X_e=\mathrm{dom}(\theta(e))$, then \[\bigcup_{e\in E(S)}X_e= X.\] This last condition is a non-degeneracy condition and implies, for instance, that a group must act by homeomorphisms. We write $\mathrm{Fix}(s)$ for the fixed-point set of $s$ and we put \begin{equation}\label{defineXs} X_s=\bigcup_{\{e\in E(S)\mid e\leq s\}}X_e. \end{equation} Note that if $s$ is idempotent, then both definitions of $X_e$ agree and so there is no ambiguity. Trivially, $X_s\subseteq \mathrm{Int}(\mathrm{Fix}(s))$ because $s$ fixes $X_e$ pointwise if $e\leq s$ is an idempotent. \begin{Prop}\label{p:faithful} If $S$ is an inverse semigroup acting faithfully on a space $X$ and if the $X_e$ with $e\in E(S)$ form a basis for the topology on $X$, then $X_s=\mathrm{Int}(\mathrm{Fix}(s))$. \end{Prop} \begin{proof} Clearly, $X_s\subseteq\mathrm{Fix}(s)$ and since $X_s$ is open, it consists of interior points. Conversely, let $x$ be an interior point of $\mathrm{Fix}(s)$ and suppose that $X_e$ is a basic neighborhood of $x$ with $X_e\subseteq \mathrm{Fix}(s)\subseteq X_{s^*s}$. Then we deduce that $e\leq s^*s$ by faithfulness of the action and that $sex=x=ex$ for all $x\in X_e=X_{(se)^*(se)}$. Thus $se=e$, that is, $e\leq s$, by faithfulness. Therefore, $x\in X_s$. 
\end{proof} A \emph{congruence} on an inverse semigroup $S$ is an equivalence relation $\equiv$ such that $s\equiv s'$ implies $us\equiv us'$ and $sv\equiv s'v$ for all $u,v\in S$. An inverse semigroup $S$ is called \emph{congruence-free} if the only congruences on $S$ are the equality relation and the universal relation. For example, a group is congruence-free if and only if it is simple. We consider neither the trivial inverse semigroup, nor the empty inverse semigroup to be congruence-free. An inverse semigroup $S$ is \emph{$E$-unitary} if $s\geq e$ with $e\in E(S)$ implies that $s\in E(S)$. An inverse semigroup $S$ with zero is called \emph{$0$-$E$-unitary} (or \emph{$E^*$-unitary}) if $s\geq e$ with $e\in E(S)\setminus \{0\}$ implies $s\in E(S)$. We shall say that an inverse semigroup $S$ is \emph{Hausdorff} if, for all $s,t\in S$, the set $s^\downarrow\cap t^\downarrow$ is finitely generated as a lower set, that is, there is a finite set $F$ such that $x\leq s,t$ if and only if $x\leq u$ for some $u\in F$. (The term weak semilattice is used for this in~\cite{mygroupoidalgebra}.) It is known that $E$-unitary and $0$-$E$-unitary inverse semigroups are Hausdorff~\cite{mygroupoidalgebra} (in fact, this follows directly from Proposition~\ref{p:Hausdorffsemichar} below). The reason for the terminology Hausdorff will become apparent later. \begin{Prop}\label{p:Hausdorffsemichar} An inverse semigroup $S$ is Hausdorff if and only if the lower set $(s^*s)^\downarrow\cap s^\downarrow$ of $E(S)$ is finitely generated for all $s\in S$. \end{Prop} \begin{proof} Necessity is clear. For sufficiency, suppose that $s,t\in S$ and put $u=ts^*$. Then we claim that the mapping $e\mapsto es$ provides an order isomorphism $(u^*u)^{\downarrow}\cap u^\downarrow\to s^\downarrow\cap t^\downarrow$ with inverse $x\mapsto xs^*$. The proposition will then follow. 
From $u^*u= st^*ts^*$ we have that $e\leq u^*u,u$ implies that $es\leq us=ts^*s\leq t$ and $es\leq s$ (because $e$ is idempotent). Conversely, if $x\leq s,t$, then $xs^*\leq ss^*,ts^*$ and hence $xs^*\leq u$ and is an idempotent. Thus $xs^*\leq u^*u$. It remains to show these are inverse mappings. Note that $e\leq u$ implies that $e=ts^*f$ for some idempotent $f$. Then $ess^*=ts^*fss^*=ts^*ss^*f=ts^*f=e$. Conversely, if $x\leq s,t$, then $x=sf$ with $f\in E(S)$ and so $xs^*s=sfs^*s=sf=x$. This completes the proof. \end{proof} If $\Bbbk$ is a commutative ring with unit, then the semigroup algebra $\Bbbk S$ of an inverse semigroup $S$ is defined in the usual way as a $\Bbbk$-algebra with basis $S$ and multiplication extending that of $S$ via the distributive law. If $S$ is an inverse semigroup with zero element $z$, then the contracted semigroup algebra is $\Bbbk_0S=\Bbbk S/\Bbbk z$. It amounts to identifying the zero of $S$ with the zero of $\Bbbk$ and it is universal for representations of $S$ into $\Bbbk$-algebras that preserve the zero element. Occasionally, we shall require the notion of a \emph{generalized boolean algebra}, that is, a relatively complemented distributive lattice with a bottom element. Generalized boolean algebras are, up to isomorphism, ideals in boolean algebras. \subsection{\'Etale groupoids} In this paper, compactness will include the Hausdorff axiom. However, we do not require locally compact spaces to be Hausdorff. A topological groupoid $\mathscr G=(\mathscr G\skel 0,\mathscr G\skel 1)$ is \emph{\'etale} if its domain map $s$ (or, equivalently, its range map $t$) is a local homeomorphism. In this case, identifying objects with identity arrows, we have that $\mathscr G\skel 0$ is an open subspace of $\mathscr G\skel 1$ and the multiplication map is a local homeomorphism. Details can be found in~\cite{Paterson,resendeetale,Exel}.
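Before turning to the topology, a finite toy example may be helpful. The Python sketch below is ours and purely illustrative: it builds the action groupoid of $\mathbb Z/2$ acting on the discrete three-point space $\{0,1,2\}$ by swapping $0$ and $1$ (any groupoid over a discrete space is trivially \'etale), and checks the groupoid axioms directly. All names in the code are our own encoding, not notation from the paper.

```python
# Action groupoid for Z/2 acting on X = {0, 1, 2}, where the nontrivial
# element swaps 0 and 1 and fixes 2.  Arrows are pairs (g, x) with
# s(g, x) = x and t(g, x) = g.x; composition adds group elements.

X = [0, 1, 2]
G = [0, 1]                       # Z/2, written additively

def act(g, x):
    return x if (g == 0 or x == 2) else 1 - x

def s(a): return a[1]
def t(a): return act(*a)

def mul(a, b):                   # defined when s(a) = t(b)
    assert s(a) == t(b)
    return ((a[0] + b[0]) % 2, s(b))

def inv(a):
    return (a[0], t(a))          # in Z/2 every element is its own inverse

ARROWS = [(g, x) for g in G for x in X]

# a^{-1} a and a a^{-1} are the identity arrows at s(a) and t(a)
for a in ARROWS:
    assert mul(inv(a), a) == (0, s(a))
    assert mul(a, inv(a)) == (0, t(a))

# two loops are based at the fixed point 2, but only one at the point 0
assert len([a for a in ARROWS if s(a) == t(a) == 2]) == 2
assert [a for a in ARROWS if s(a) == t(a) == 0] == [(0, 0)]
```

The loops counted at the end anticipate the isotropy groups discussed below: the point fixed by the action carries a copy of $\mathbb Z/2$, while points on the free orbit carry only the identity.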
Following~\cite{Paterson}, an \'etale groupoid is called \emph{ample} if its unit space $\mathscr G\skel 0$ is locally compact Hausdorff with a basis of compact open subsets. We shall say that an ample groupoid $\mathscr G$ is Hausdorff if $\mathscr G\skel 1$ is Hausdorff. A \emph{local bisection} of an \'etale groupoid $\mathscr G$ is an open subset $U\subseteq \mathscr G\skel 1$ such that $s|_U$ and $t|_U$ are homeomorphisms. The local bisections form a basis for the topology on $\mathscr G\skel 1$~\cite{Exel}. The set $\mathop{\mathrm{Bis}}\nolimits(\mathscr G)$ of local bisections is an inverse monoid under the binary operation \[UV = \{uv\mid u\in U,\ v\in V,\ s (u)=t (v)\}.\] The semigroup inverse is given by $U^* = \{u^{-1}\mid u\in U\}$ and $E(\mathop{\mathrm{Bis}}\nolimits(\mathscr G))=\mathop{\mathrm{Bis}}\nolimits(\mathscr G\skel 0)$. The inverse monoid $\mathop{\mathrm{Bis}}\nolimits(\mathscr G)$ acts on $\mathscr G\skel 0$ by partial homeomorphisms by putting \[U\cdot x=\begin{cases}y, & \text{if there is $g\in U$ with $s(g)=x,t(g)=y$}\\ \text{undefined}, & \text{else.}\end{cases}\] The set $\mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$ of compact open local bisections (which should be thought of as the set of local bisections with compact support) is an inverse subsemigroup of $\mathop{\mathrm{Bis}}\nolimits(\mathscr G)$ (it is a submonoid if and only if $\mathscr G\skel 0$ is compact)~\cite{Paterson}. Note that $\mathscr G$ is ample if and only if $\mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$ is a basis for the topology on $\mathscr G\skel 1$~\cite{Exel,Paterson}. 
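As a sanity check on the product $UV$ of local bisections, one can compute in a finite discrete groupoid, where a local bisection is simply a set of arrows on which $s$ and $t$ are both injective. The Python sketch below is our illustration, using an ad hoc encoding of the pair groupoid (an assumption of the example, not a construction from the paper):

```python
# The pair groupoid on {0, 1, 2}: an arrow (i, j) goes from j to i, with
# s(i, j) = j, t(i, j) = i and (i, j)(j, k) = (i, k).  A local bisection
# is a set of arrows on which s and t are both injective.

def s(g): return g[1]
def t(g): return g[0]

def is_bisection(U):
    return len({s(g) for g in U}) == len(U) == len({t(g) for g in U})

def prod(U, V):
    """UV = {uv | u in U, v in V, s(u) = t(v)}."""
    return {(t(u), s(v)) for u in U for v in V if s(u) == t(v)}

def star(U):
    return {(s(g), t(g)) for g in U}

U = {(0, 1), (1, 2)}             # the partial map 1 -> 0, 2 -> 1
V = {(1, 0), (2, 2)}

assert is_bisection(U) and is_bisection(V)
assert is_bisection(prod(U, V))          # products of bisections are bisections
assert prod(prod(U, star(U)), U) == U    # U U* U = U: an inverse semigroup
assert all(s(g) == t(g) for g in prod(star(U), U))  # U*U consists of units
```

The final assertion reflects the identification of $E(\mathop{\mathrm{Bis}}\nolimits(\mathscr G))$ with the open subsets of the unit space.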
The \emph{isotropy subgroupoid} of a groupoid $\mathscr G=(\mathscr G\skel 0,\mathscr G\skel 1)$ is the subgroupoid $\mathop{\mathrm{Is}}(\mathscr G)$ with $\mathop{\mathrm{Is}}(\mathscr G)\skel 0=\mathscr G\skel 0$ and \[\mathop{\mathrm{Is}}(\mathscr G)\skel 1=\{g\in \mathscr G\skel 1\mid s(g)=t(g)\}.\] The \emph{isotropy group} of $x\in \mathscr G\skel 0$ is the group \[G_x=\{g\in \mathscr G\skel 1\mid s(g)=x=t(g)\}.\] An \'etale groupoid is said to be \emph{effective} if $\mathscr G\skel 0=\mathrm{Int}(\mathop{\mathrm{Is}}(\mathscr G)\skel 1)$. It is well known, and easy to prove, that an ample groupoid $\mathscr G$ is effective if and only if the natural action of $\mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$ on $\mathscr G\skel 0$ is faithful. If $x\in \mathscr G\skel 0$, then the \emph{orbit} $\mathcal O_x$ of $x$ consists of all $y\in \mathscr G\skel 0$ such that there is an arrow $g$ with $s(g)=x$ and $t(g)=y$. The orbits form a partition of $\mathscr G\skel 0$. If $\mathscr G$ is ample, then the orbits of $\mathscr G$ are precisely the orbits for the natural action of $\mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$ on $\mathscr G\skel 0$. A subset $X\subseteq \mathscr G\skel 0$ is \emph{invariant} if it is a union of orbits. Equivalently, $X$ is invariant if and only if it is invariant under the natural action of $\mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$ on $\mathscr G\skel 0$. An \'etale groupoid is said to be \emph{minimal} if $\mathscr G\skel 0$ has no proper, non-empty closed invariant subsets, or equivalently, if each orbit is dense. A key example of an \'etale groupoid is that of a groupoid of germs. Let $S$ be an inverse semigroup acting on a locally compact Hausdorff space $X$. The groupoid of germs $\mathscr G=S\ltimes X$ is defined as follows. One puts $\mathscr G\skel 0=X$ and $\mathscr G\skel 1=\{(s,x)\in S\times X\mid x\in X_{s^*s}\}/{\sim}$ where $(s,x)\sim (t,y)$ if and only if $x=y$ and there exists $u\leq s,t$ with $x\in X_{u^*u}$. 
Note that if $S$ is a group, then there are no identifications. The $\sim$-class of an element $(s,x)$ is denoted $[s,x]$. The topology on $\mathscr G\skel 1$ has as a basis all sets of the form $(s,U)$ where $U\subseteq X_{s^*s}$ is open and $(s,U) = \{[s,x]\mid x\in U\}$. Note that if $[t,x]\in (s,U)$, then $(t,V)\subseteq (s,U)$ for some open neighborhood $V$ with $x\in V\subseteq U$. Indeed, since $[t,x]=[s,x]$, there exists $u\leq s,t$ with $x\in X_{u^*u}$. It follows that if $V=U\cap X_{u^*u}\cap X_{t^*t}$, then $x\in V$ and $[t,y]=[s,y]$ for all $y\in V$. Thus each arrow $[t,x]$ has a basis of neighborhoods of the form $(t,U)$ with $U\subseteq X_{t^*t}$. One puts $s([s,x])=x$, $t([s,x])=sx$ and defines $[s,ty][t,y]=[st,y]$. Inversion is given by $[s,x]^{-1} = [s^*,sx]$. Note that $(s,X_{s^*s})\in \mathop{\mathrm{Bis}}\nolimits(S\ltimes X)$ and if $X_{s^*s}$ is compact, then $(s,X_{s^*s})\in \mathop{\mathrm{Bis}}\nolimits_c(S\ltimes X)$. See~\cite{Exel,Paterson} for details. The following criterion generalizes a result from~\cite{mygroupoidalgebra}. It was first observed by the author (unpublished) under the assumption that the domains were compact open and then it was observed by R.~Exel and E.~Pardo that clopen suffices (private communication). \begin{Prop}\label{p:hausdorffcondition} Let $S$ be an inverse semigroup acting on a locally compact Hausdorff space $X$ and suppose that $X_e$ is clopen for all $e\in E(S)$. Then $S\ltimes X$ is Hausdorff if and only if, for each $s\in S$, the set $X_s$ is closed. In particular, if $S$ is Hausdorff, then so is $S\ltimes X$. \end{Prop} \begin{proof} Note that $X_s\subseteq X_{s^*s}$. Suppose first that $S\ltimes X$ is Hausdorff and let $x\in X\setminus X_s$. If $x\notin X_{s^*s}$, then $X\setminus X_{s^*s}$ is a neighborhood of $x$ disjoint from $X_s$. So assume that $x\in X_{s^*s}$. Then we claim that $[s,x]\neq [s^*s,x]$. Indeed, if $(s,x)\sim (s^*s,x)$, then we can find $u\leq s,s^*s$ with $x\in X_{u^*u}$.
But $u\leq s^*s$ implies that $u\in E(S)$ and $X_u=X_{u^*u}$. Therefore, we have $x\in X_s$, a contradiction. Thus we can find disjoint basic neighborhoods $(s,U)$ and $(s^*s,V)$ of $[s,x]$ and $[s^*s,x]$ respectively. Then $U\cap V$ is a neighborhood of $x$ and we claim it is disjoint from $X_s$. Indeed, if $y\in U\cap V\cap X_s$, then $y\in X_e$ for some idempotent $e\leq s$. But then $e\leq s,s^*s$ and $y\in X_e$, whence $[s,y]=[s^*s,y]\in (s,U)\cap (s^*s,V)$, a contradiction. We conclude $X_s$ is closed. Conversely, suppose that $X_s$ is closed for all $s\in S$. Let $[s,x]\neq [t,y]$ be arrows of $S\ltimes X$. If $x\neq y$, then we can choose disjoint neighborhoods $U,V$ of $x,y$ in $X$, respectively. We may assume without loss of generality that $U\subseteq X_{s^*s}$ and $V\subseteq X_{t^*t}$. Then $(s,U)$ and $(t,V)$ are disjoint neighborhoods of $[s,x]$ and $[t,y]$ respectively. So assume next that $x=y$ and put $u=s^*t$. We claim that $x\notin X_u$. Indeed, if $x\in X_u$, then $x\in X_e$ for some idempotent $e\leq s^*t$. Put $z=se$ and write $e=s^*tf$ with $f\in E(S)$. Then $z^*z=s^*se=s^*ss^*tf=s^*tf=e$ and so $x\in X_{z^*z}$. Clearly $z\leq s$. But since $e\leq u$, we have $z=se\leq su=ss^*t\leq t$. Thus $[s,x]=[t,x]$, a contradiction. Since $X_u$ is closed, there is a neighborhood $U$ of $x$ with $U\subseteq X\setminus X_u$. Without loss of generality, we may assume that $U\subseteq X_{s^*s}\cap X_{t^*t}$. We claim that $(s,U)$ and $(t,U)$ are disjoint neighborhoods of $[s,x]$ and $[t,x]$, respectively. Indeed, suppose that $[s,x']=[t,x']$ belongs to $(s,U)\cap (t,U)$. Then there exists $w\in S$ with $x'\in X_{w^*w}$ and $w\leq s,t$. Then $w^*w\leq s^*t=u$ and so $x'\in X_u\cap U$, a contradiction. This completes the proof of the first statement. For the final statement, observe that the union in \eqref{defineXs} is finite if $S$ is Hausdorff (because $\{e\in E(S)\mid e\leq s\}=(s^*s)^{\downarrow} \cap s^{\downarrow}$) and so $X_s$ is closed.
\end{proof} We remark that since $X_s\subseteq X_{s^*s}$, if $X_{s^*s}$ is compact, then $X_s$ is closed if and only if it is compact. If $X$ is a Hausdorff space with a basis of compact open sets, then $S\ltimes X$ will be an ample groupoid~\cite{mygroupoidalgebra}. \subsection{\'Etale groupoid algebras} Fix now a commutative ring with unit $\Bbbk$. The author~\cite{mygroupoidalgebra} associated a $\Bbbk$-algebra $\Bbbk \mathscr G$ to each ample groupoid $\mathscr G$ as follows. We define $\Bbbk\mathscr G$ to be the $\Bbbk$-span in $\Bbbk^{\mathscr G\skel 1}$ of the characteristic functions $\chi_U$ of compact open subsets $U$ of $\mathscr G\skel 1$. It is shown in~\cite[Proposition~4.3]{mygroupoidalgebra} that $\Bbbk \mathscr G$ is spanned by the elements $\chi_U$ with $U\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$. If $\mathscr G\skel 1$ is Hausdorff, then $\Bbbk \mathscr G$ consists of the continuous $\Bbbk$-valued functions on $\mathscr G\skel 1$ with compact support where $\Bbbk$ is endowed with the discrete topology. Convolution is defined on $\Bbbk \mathscr G$ by \[\varphi\ast \psi(g)=\sum_{s(h)=s(g)}\varphi(gh^{-1})\psi(h).\] The finiteness of this sum is proved in~\cite{mygroupoidalgebra}. The fact that the convolution belongs to $\Bbbk \mathscr G$ rests on the computation $\chi_U\ast \chi_V=\chi_{UV}$ for $U,V\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$~\cite{mygroupoidalgebra}. Note that since $\mathscr G\skel 0$ is open in $\mathscr G\skel 1$, it follows that $\Bbbk \mathscr G\skel 0$ (where we view $\mathscr G\skel 0$ as a groupoid consisting of identities) is a subalgebra of $\Bbbk \mathscr G$. Also observe that $\Bbbk \mathscr G\skel 0$ is just the ring of $\Bbbk$-valued continuous functions with compact support on $\mathscr G\skel 0$ with pointwise multiplication, and hence is commutative. 
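As a quick illustration of the convolution (ours, not from the paper), consider a finite discrete groupoid, where $\Bbbk\mathscr G$ consists of all $\Bbbk$-valued functions on arrows; the identity $\chi_U\ast\chi_V=\chi_{UV}$ can then be verified directly. The following Python sketch uses the pair groupoid on $\{0,1\}$, with our own ad hoc encoding:

```python
# Convolution on the pair groupoid of {0, 1}: arrows (i, j) with
# s(i, j) = j, t(i, j) = i, inversion (i, j) -> (j, i) and
# composition (i, j)(j, k) = (i, k).

def s(g): return g[1]
def t(g): return g[0]
def inv(g): return (g[1], g[0])

def mul(g, h):
    assert s(g) == t(h)          # composability
    return (t(g), s(h))

ARROWS = [(i, j) for i in range(2) for j in range(2)]

def conv(phi, psi):
    """(phi * psi)(g) = sum over h with s(h) = s(g) of phi(g h^{-1}) psi(h)."""
    return {g: sum(phi[mul(g, inv(h))] * psi[h]
                   for h in ARROWS if s(h) == s(g))
            for g in ARROWS}

def chi(U):
    return {g: (1 if g in U else 0) for g in ARROWS}

U = {(0, 1)}                     # local bisections of the pair groupoid
V = {(0, 0), (1, 1)}
UV = {mul(u, v) for u in U for v in V if s(u) == t(v)}

assert conv(chi(U), chi(V)) == chi(UV)               # chi_U * chi_V = chi_{UV}
assert conv(chi(U), chi({(1, 0)})) == chi({(0, 0)})  # chi_U * chi_{U*}
```

Restricting the functions to the identity arrows recovers the commutative pointwise product on $\Bbbk\mathscr G\skel 0$ mentioned above.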
The algebra $\mathbb C\mathscr G$, in the case where $\mathscr G$ is Hausdorff, has been further studied in~\cite{operatorguys1,operatorsimple1}; see also~\cite{operatorsimple2}. Morita equivalence with applications to Leavitt path algebras~\cite{Leavittsemiprimitivity} was studied in~\cite{GroupoidMorita} and a sheaf representation of $\Bbbk\mathscr G$-modules was obtained in~\cite{groupoidbundles}. Recently, the growth of \'etale groupoid algebras was studied in~\cite{Nekgrpd}. The algebra $\Bbbk\mathscr G$ admits an involution $\varphi\mapsto \check{\varphi}$ where $\check{\varphi}(g)=\varphi(g^{-1})$ and so dual notions like right and left primitivity are equivalent for $\Bbbk\mathscr G$. Hence we shall work alternatively with right and left modules as we find convenient. The algebra $\Bbbk\mathscr G$ is unital if and only if $\mathscr G\skel 0$ is compact, but it always has local units~\cite{mygroupoidalgebra,groupoidbundles}. \section{Simplicity of groupoid algebras} The results of this section were first obtained in~\cite{operatorsimple1} for Hausdorff groupoids over $\Bbbk=\mathbb C$ (and the norm on $\mathbb C$ was used in the proof). Here we give proofs that avoid using the norm, drop the Hausdorff condition whenever possible and consider more general ground rings. Our techniques are different as well, in that we use the Sch\"utzenberger representations of~\cite{mygroupoidalgebra}. These results were announced by the author at the workshop ``Semigroups and Applications'' (Uppsala, August 2012), but never previously published. Similar results were obtained independently in the meantime in~\cite{operatorsimple2}. Nonetheless, we produce here a complete proof of the simplicity criterion as we shall need some of the intermediary results in the sequel. \begin{Lemma}\label{cutdowntochar} Let $\mathscr G$ be an ample groupoid and $\Bbbk$ a commutative ring with unit.
Suppose that $0\neq \varphi\in \Bbbk \mathscr G\skel 0$ and $I$ is an ideal of $\Bbbk \mathscr G$ containing $\varphi$. Then $I$ contains a non-zero element of the form $k\cdot \chi_U$ where $k\in \Bbbk$ and $U\subseteq \mathscr G\skel 0$ is compact open. \end{Lemma} \begin{proof} Let $k\in \Bbbk$ be a non-zero element in the image of $\varphi$. Then $U=\varphi^{-1} (k)$ is a non-empty compact open subset of $\mathscr G\skel 0$ and $\varphi\ast \chi_U = k\cdot \chi_U$ is in $I$. \end{proof} The following lemma (and its dual) will also be useful. \begin{Lemma}\label{hitright} Let $\varphi\in \Bbbk\mathscr G$ and $U\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$. Suppose that $h\in U$ and $s(g)=t(h)$. Then \[\varphi\ast \chi_U(gh)=\varphi(g)\] and so in particular is non-zero if $\varphi(g)\neq 0$. \end{Lemma} \begin{proof} The unique element $y\in U$ with $s(y)=s(gh)$ is $h$ and so \[\varphi\ast \chi_U(gh)=\sum_{s(y)=s(gh)}\varphi(ghy^{-1})\chi_U(y)=\varphi(g)\] as required. \end{proof} Now we prove an analogue of the Cuntz-Krieger uniqueness theorem in the context of Hausdorff ample groupoids. The proof is based on the idea of~\cite{operatorsimple1}, but we avoid using the norm. A similar result is embedded in the proof of the main result of~\cite{operatorsimple2}. \begin{Prop}\label{effectivecase} Let $\mathscr G$ be an effective Hausdorff ample groupoid and $\Bbbk$ a commutative ring with unit. Suppose that $I$ is a non-zero ideal of $\Bbbk \mathscr G$. Then $I$ contains a non-zero element of the form $k\cdot \chi_U$ with $k\in \Bbbk\setminus \{0\}$ and $U\subseteq \mathscr G\skel 0$ a non-empty compact open subset. \end{Prop} \begin{proof} Let $\varphi\in I\setminus \{0\}$ and suppose that $\varphi(g)\neq 0$ with $g\in \mathscr G\skel 1$. Let $U\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$ contain $g^{-1}$ and put $x=t(g)$. Then $\varphi\ast \chi_U(x) =\varphi(g)\neq 0$ by Lemma~\ref{hitright}. 
Thus there is an element $\psi\in I$ with $\psi|_{\mathscr G\skel 0}\neq 0$ (take $\psi=\varphi\ast \chi_U$ as above). As $\mathscr G\skel 0$ is clopen in $\mathscr G\skel 1$ by the Hausdorff property, we have that $\psi|_{\mathscr G\skel 0}\in \Bbbk \mathscr G\skel 0$. Let $K$ be the support of $\Psi=\psi-\psi|_{\mathscr G\skel 0}$. Then $K\subseteq \mathscr G\skel 1\setminus \mathscr G\skel 0$ is compact open. Write $\psi|_{\mathscr G\skel 0} = \sum_{i=1}^m k_i\cdot \chi_{U_i}$ with the $U_i$ disjoint compact open subsets of $\mathscr G\skel 0$ and the $k_i\in \Bbbk\setminus \{0\}$. By~\cite[Lemma~3.1]{operatorsimple1}, the effectiveness of $\mathscr G$ implies that there is a non-empty open subset $V\subseteq U_1$ with $VKV= \emptyset$. Since $\mathscr G\skel 0$ has a basis of compact open sets, we may assume that $V$ is compact. We claim that $k_1\cdot \chi_V=\chi_V\ast \psi\ast \chi_V\in I$. Indeed, Lemma~\ref{hitright} and its dual imply $\chi_V\ast \Psi\ast \chi_V=0$ because $VKV=\emptyset$. Thus $\chi_V\ast \psi\ast \chi_V= \chi_V\ast \psi|_{\mathscr G\skel 0}\ast \chi_V=k_1\cdot \chi_V$ is in $I$, as required. \end{proof} Our next result removes the Hausdorff condition from a result of~\cite{operatorsimple1} and works over any base ring. See also~\cite{operatorsimple2} where more or less the same result is obtained in a slightly different way. Let $\mathscr G$ be an ample groupoid and $\Bbbk$ a commutative ring with unit. If $x\in \mathscr G\skel 0$, we put $L_x=s^{-1}(x)$. There is a $\Bbbk \mathscr G$-module structure on $\Bbbk L_x$ given by \[\varphi\cdot t=\sum_{y\in L_x}\varphi(yt^{-1})y=\sum_{s(g)=t(t)}\varphi(g)gt\] for $t\in L_x$. Moreover, if $U\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$, \begin{equation*}\label{schutzformula} \chi_U\cdot t=\begin{cases}ht, & \text{if}\ h\in U,\ s(h)=t(t)\\ 0, & \text{if}\ t(t)\notin s(U)=U^*U\end{cases} \end{equation*} for $t\in L_x$. Note that if $t(t)\in U^*U$, then $h$ above is unique.
Also the isotropy group $G_x$ of $\mathscr G$ at $x$ acts freely on the right of $L_x$ and $\Bbbk L_x$ is a $\Bbbk \mathscr G$-$\Bbbk G_x$-bimodule. See~\cite[Proposition~7.8]{mygroupoidalgebra} for details. If $V$ is a $\Bbbk G_x$-module, then $\mathop{\mathrm{Ind}}\nolimits_x(V)=\Bbbk L_x\otimes_{\Bbbk G_x} V$ is a $\Bbbk\mathscr G$-module. Moreover, the functor $\mathop{\mathrm{Ind}}\nolimits_x$ is exact and sends (semi)simple modules to (semi)simple modules (see~\cite[Proposition~7.19]{mygroupoidalgebra} for the latter property). The annihilator ideal of $\Bbbk L_x$ is a proper ideal because if $U$ is a compact open neighborhood of $x$ in $\mathscr G\skel 0$, then $\chi_U\cdot x=x$. If we give $\Bbbk$ the trivial $\Bbbk G_x$-module structure, then $\mathop{\mathrm{Ind}}\nolimits_x(\Bbbk)\cong \Bbbk \mathcal O_x$ with the action given by \[\varphi\cdot u=\sum_{s(g)=u}\varphi(g)t(g)\] for $u\in \mathcal O_x$. In particular, if $U\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$, then \[\chi_U\cdot u=\begin{cases} U\cdot u, & \text{if}\ u\in U^*U\\ 0, & \text{else.}\end{cases}\] If $x\in U$ with $U\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G\skel 0)$, then $\chi_U\cdot x=x$ and so the annihilator of $\Bbbk \mathcal O_x$ is proper. \begin{Prop}\label{minimal} Let $\mathscr G$ be an ample groupoid and $\Bbbk$ a commutative ring with unit. Then $\mathscr G$ is minimal if and only if $\chi_U$ generates $\Bbbk \mathscr G$ as an ideal for every non-empty compact open subset $U\subseteq \mathscr G\skel 0$. \end{Prop} \begin{proof} Suppose first that $\chi_U$ generates $\Bbbk \mathscr G$ as an ideal for all non-empty compact open subsets $U\subseteq \mathscr G\skel 0$. Let $x\in \mathscr G\skel 0$. The annihilator of $\Bbbk L_x$ is a proper ideal in $\Bbbk\mathscr G$ and therefore $\chi_U$ does not annihilate $\Bbbk L_x$. Thus there exists $t\in L_x$ with $\chi_U\cdot t\neq 0$. This means that $t(t)\in U$ and so $U\cap \mathcal O_x\neq \emptyset$.
We conclude that all orbits are dense and hence $\mathscr G$ is minimal. Next suppose that $\mathscr G$ is minimal and let $I$ be the ideal generated by $\chi_U$. Let $S=\{V\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)\mid \chi_V\in I\}$. Then $S$ is an ideal of $\mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$ and $E(S)$ is a generalized boolean algebra (since $\chi_{V_1\cup V_2}=\chi_{V_1}+\chi_{V_2}-\chi_{V_1}\ast \chi_{V_2}$ and $\chi_{V_1\setminus V_2}=\chi_{V_1}-\chi_{V_1}\ast\chi_{V_2}$ for $V_1,V_2\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G\skel 0)$). We claim that $E(S)=\mathop{\mathrm{Bis}}\nolimits_c(\mathscr G\skel 0)$. Indeed, if $x\in \mathscr G\skel 0$, then since $\mathscr G$ is minimal we have that $U\cap \mathcal O_x$ is non-empty. Suppose that $y\in U\cap \mathcal O_x$ and $g\in \mathscr G\skel 1$ with $t(g)=y$ and $s(g)=x$. Let $V\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$ with $g\in V$. Then $x\in (UV)^*UV\in E(S)$. Thus $E(S)$ contains a compact open neighborhood of each point of $\mathscr G\skel 0$. Also $E(S)$ is an order ideal in $\mathop{\mathrm{Bis}}\nolimits_c(\mathscr G\skel 0)$. Thus $E(S)$ contains a basis of compact open subsets of $\mathscr G\skel 0$. But then since $E(S)$ is closed under finite unions, we conclude $E(S)=\mathop{\mathrm{Bis}}\nolimits_c(\mathscr G\skel 0)$. But no proper ideal in an inverse semigroup can contain all the idempotents and so $S=\mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$, whence $I=\Bbbk \mathscr G$. \end{proof} The following theorem generalizes the results of~\cite{operatorsimple1} and was obtained in~\cite{operatorsimple2} using a similar method. \begin{Thm}\label{simple} Let $\mathscr G$ be an ample groupoid and $\Bbbk$ a field. If $\Bbbk \mathscr G$ is simple, then $\mathscr G$ is effective and minimal. The converse holds if $\mathscr G$ is Hausdorff. \end{Thm} \begin{proof} Suppose first that $\Bbbk \mathscr G$ is simple. Minimality is immediate from Proposition~\ref{minimal}. 
To show that $\mathscr G$ is effective, it suffices to prove that if $U\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G)$ is contained in $\mathop{\mathrm{Is}}(\mathscr G)$, then it is contained in $\mathscr G\skel 0$. Of course, we may assume that $U$ is non-empty. Because $U$ is contained in the isotropy subgroupoid, it follows that $\chi_U-\chi_{U^*U}$ annihilates each of the modules $\Bbbk \mathcal O_x$ with $x\in \mathscr G\skel 0$. But these modules have proper annihilator ideals and hence are faithful by simplicity of $\Bbbk\mathscr G$. We conclude that $U=U^*U$ and hence $U\subseteq \mathscr G\skel 0$. Assume next that $\mathscr G$ is effective, minimal and Hausdorff. If $I$ is a non-zero ideal of $\Bbbk \mathscr G$, then by Proposition~\ref{effectivecase} it contains an element of the form $k\cdot \chi_U$ with $k\in \Bbbk\setminus \{0\}$ and $\emptyset\neq U\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G\skel 0)$; since $\Bbbk$ is a field, it follows that $\chi_U\in I$. We conclude from Proposition~\ref{minimal} that $I=\Bbbk \mathscr G$. This completes the proof. \end{proof} We leave it as an open question whether the Hausdorff condition is really needed in the converse (we suspect that it is). Basically it boils down to whether Proposition~\ref{effectivecase} is true when $\mathscr G$ is not Hausdorff. \begin{Cor}\label{simplehaus} Let $\mathscr G$ be a Hausdorff ample groupoid and $\Bbbk$ a field. Then $\Bbbk \mathscr G$ is simple if and only if $\mathscr G$ is effective and minimal. \end{Cor} We end this section with two further results on effective Hausdorff ample groupoids. The first result, characterizing the center of an effective Hausdorff groupoid, generalizes a result of~\cite{operatorsimple2}. \begin{Prop} Let $\mathscr G$ be an effective Hausdorff ample groupoid and $\Bbbk$ a commutative ring with unit. Then the center $Z(\Bbbk \mathscr G)$ of $\Bbbk \mathscr G$ consists of those continuous functions with compact support $\varphi\colon \mathscr G\skel 0\to \Bbbk$ which are constant on orbits.
In particular, if $\mathscr G\skel 0$ has a dense orbit, then the equality \[Z(\Bbbk \mathscr G)=\begin{cases} \Bbbk\cdot \chi_{\mathscr G\skel 0}, & \text{if}\ \mathscr G\skel 0\ \text{is compact}\\ 0, & \text{else}\end{cases}\] holds. \end{Prop} \begin{proof} By~\cite[Proposition~4.13]{mygroupoidalgebra}, $\varphi\in Z(\Bbbk \mathscr G)$ if and only if $\varphi$ is supported on $\mathop{\mathrm{Is}}(\mathscr G)$ and $\varphi(gzg^{-1})=\varphi(z)$ whenever $s(g)=t(z)=s(z)$. Since the support of $\varphi$ is open (because $\mathscr G$ is Hausdorff), we in fact have that the support of $\varphi$ is contained in $\mathscr G\skel 0$ because $\mathscr G$ is effective. It is now immediate that $Z(\Bbbk \mathscr G)$ consists of those $\varphi\in \Bbbk\mathscr G\skel 0$ which are constant on orbits. The final statement is clear since if $\mathcal O$ is a dense orbit and $\varphi\in Z(\Bbbk \mathscr G)$, then $\varphi$ is constant on $\mathcal O$ and hence on $\ov {\mathcal O}=\mathscr G\skel 0$. \end{proof} We record here another result for effective Hausdorff groupoids. \begin{Prop} Let $\mathscr G$ be an effective Hausdorff ample groupoid and $\Bbbk$ a commutative ring with unit. Then $\Bbbk \mathscr G\skel 0$ is a maximal commutative subalgebra of $\Bbbk\mathscr G$. \end{Prop} \begin{proof} Clearly $\Bbbk \mathscr G\skel 0$ is a commutative subalgebra. Suppose that $0\neq \varphi\in \Bbbk\mathscr G$ centralizes $\Bbbk \mathscr G\skel 0$. Because $\mathscr G$ is Hausdorff, $\varphi^{-1}(\Bbbk\setminus \{0\})$ is compact open. By effectiveness, it suffices to show that the support of $\varphi$ is contained in $\mathop{\mathrm{Is}}(\mathscr G)$. Suppose $\varphi(g)\neq 0$ with $s(g)\neq t(g)$. Then there exists $U\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G\skel 0)$ such that $s(g)\in U$ and $t(g)\notin U$. But then $\varphi\ast \chi_U(g)=\varphi(g)\neq 0$ and $\chi_U\ast \varphi(g)=0$, a contradiction. This completes the proof.
\end{proof} \section{Primitivity and semiprimitivity} Recall that a ring is \emph{primitive} if it has a faithful simple module and \emph{semiprimitive} if it has a faithful semisimple module (cf.~\cite{LamBook}). We investigate primitivity and semiprimitivity of ample groupoid algebras. This will allow us to recover results of Domanov~\cite{Domanov} and Munn~\cite{MunnSemiprim,MunnSemiprim2,MunnAlgebraSurvey} for inverse semigroup algebras (with easier proofs) and the semiprimitivity of Leavitt path algebras~\cite{Leavittsemiprimitivity}. We also extend to ample groupoid algebras a result of Amitsur~\cite{Amitsur} stating that a group algebra over a field of characteristic $0$ that is not algebraic over $\mathbb Q$ is semiprimitive. \subsection{Semiprimitivity} We first establish that effective Hausdorff groupoids always have semiprimitive algebras over a semiprimitive base ring. \begin{Prop}\label{semiprimitivityforeffective} Let $\mathscr G$ be a Hausdorff effective ample groupoid and $\Bbbk$ a commutative ring with unit. Then $\Bbbk \mathscr G$ is semiprimitive whenever $\Bbbk$ is semiprimitive. \end{Prop} \begin{proof} Assume $\Bbbk$ is semiprimitive and let $V$ be a faithful semisimple $\Bbbk$-module. Consider the $\Bbbk \mathscr G$-module $M=\bigoplus_{x\in \mathscr G\skel 0} \mathop{\mathrm{Ind}}\nolimits_x(V)$ where we view $V$ as a $\Bbbk G_x$-module via the trivial action of $G_x$. Then $M$ is a semisimple module by~\cite[Proposition~7.19]{mygroupoidalgebra}. Suppose that $M$ is not faithful. Then its annihilator contains an element of the form $k\cdot \chi_U$ with $k\in \Bbbk\setminus \{0\}$ and $U\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G\skel 0)$ non-empty by Proposition~\ref{effectivecase}. Let $v\in V$ with $kv\neq 0$ and let $x\in U$. Then $k\cdot \chi_U(x\otimes v) = x\otimes kv\neq 0$ as $\mathop{\mathrm{Ind}}\nolimits_x(V)= \bigoplus_{y\in \mathcal O_x}y\otimes V$ as a $\Bbbk$-module.
This contradiction shows that $M$ is faithful. \end{proof} Let $\mathscr G$ be an ample groupoid and $\Bbbk$ a commutative ring with unit. Let us say that $X\subseteq \mathscr G\skel 0$ is \emph{$\Bbbk$-dense} if, for all $0\neq \varphi\in \Bbbk \mathscr G$, there exists $g\in \mathscr G\skel 1$ such that: \begin{enumerate} \item $s(g)\in X$; \item $\varphi(g)\neq 0$. \end{enumerate} Notice that if $X$ is $\Bbbk$-dense, then so is each $Y\supseteq X$. Also note that the condition $s(g)\in X$ could be replaced with $t(g)\in X$ by considering $\check{\varphi}$. The following proposition justifies the terminology. \begin{Prop}\label{kdense} Let $\mathscr G$ be an ample groupoid and $X\subseteq \mathscr G\skel 0$. \begin{enumerate} \item If $X$ is $\Bbbk$-dense, then $X$ is dense in $\mathscr G\skel 0$. \item If $\mathscr G$ is Hausdorff and $X$ is dense in $\mathscr G\skel 0$, then $X$ is $\Bbbk$-dense. \end{enumerate} \end{Prop} \begin{proof} Assume that $X$ is $\Bbbk$-dense and $U\subseteq \mathscr G\skel 0$ is non-empty compact open. Then $\chi_U(x)\neq 0$ for some $x\in X$ and hence $U\cap X\neq \emptyset$. Thus $X$ is dense in $\mathscr G\skel 0$. Suppose that $\mathscr G$ is Hausdorff and $X$ is dense. Let $0\neq \varphi\in \Bbbk\mathscr G$. Then $V=\varphi^{-1} (\Bbbk\setminus \{0\})$ is compact open and hence $s(V)$ is open. Thus there exists $x\in X\cap s(V)$, that is, there exists $g\in \mathscr G\skel 1$ with $\varphi(g)\neq 0$ and $s(g)\in X$. \end{proof} Example~\ref{examplecliff} below shows that $\Bbbk$-density can be different from density for non-Hausdorff groupoids. In order to prove our main semiprimitivity result, we need a condition guaranteeing that a given non-zero element of $\Bbbk\mathscr G$ does not annihilate a module induced from an isotropy group. \begin{Lemma}\label{l:induceup} Let $\Bbbk$ be a commutative ring with unit and $\mathscr G$ an ample groupoid. Let $x\in \mathscr G\skel 0$ and let $V_x$ be a faithful $\Bbbk G_x$-module.
Suppose that $\varphi\in \Bbbk\mathscr G$ satisfies $\varphi(g)\neq 0$ for some $g\in \mathscr G\skel 1$ with $s(g)\in \mathcal O_x$. Then $\varphi\cdot \mathop{\mathrm{Ind}}\nolimits_x(V_x)\neq 0$. \end{Lemma} \begin{proof} For each $y$ in the orbit $\mathcal O_x$ of $x$, choose an arrow $h_y\colon x\to y$. Set \[a=\sum_{s (z)=s(g),t(z)=t(g)}\varphi(z)(h_{t(g)}^{-1} zh_{s(g)})\in \Bbbk G_x.\] Note that $g$ is the unique element $z$ with $s(z)=s(g)$, $t(z)=t(g)$ and $h_{t(g)}^{-1} zh_{s(g)}=h_{t(g)}^{-1} gh_{s(g)}$ and so the coefficient of $h_{t(g)}^{-1} gh_{s(g)}$ in $a$ is $\varphi(g)\neq 0$, whence $a\neq 0$. Because $V_x$ is a faithful $\Bbbk G_x$-module, we can choose $v\in V_x$ with $av\neq 0$. Recall that \[\mathop{\mathrm{Ind}}\nolimits_x(V_x)=\Bbbk L_x\otimes_{\Bbbk G_x} V_x=\bigoplus_{y\in \mathcal O_x}h_y\otimes V_x\] where the direct sum decomposition is as a $\Bbbk$-module~\cite{mygroupoidalgebra}. Now we compute \begin{align*} \varphi\cdot (h_{s(g)}\otimes v) &= \sum_{s(z)=s(g)} \varphi(z)zh_{s(g)}\otimes v \\ &= \sum_{s(z)= s(g)}\varphi(z)h_{t(z)}(h_{t(z)}^{-1} zh_{s(g)})\otimes v \\ &= \sum_{s(z)=s(g)}h_{t(z)}\otimes \varphi(z)(h_{t(z)}^{-1} zh_{s(z)})v \\ &= \sum_{y\in \mathcal O_x}h_y\otimes \sum_{s(z)=s(g),t(z)=y}\varphi(z)(h_y^{-1} zh_{s(g)})v \\ &= h_{t(g)}\otimes av+ {}\\ & \qquad\quad \sum_{y\in \mathcal O_x\setminus \{t(g)\}}h_y\otimes \sum_{s(z)=s(g),t(z)=y}\varphi(z)(h_y^{-1} zh_{s(g)})v\\ & \neq 0 \end{align*} by choice of $v$. \end{proof} We now prove that if the algebras of sufficiently many isotropy groups are semiprimitive over $\Bbbk$, then so is the groupoid algebra. This is a generalization of a result of Domanov~\cite{Domanov} for inverse semigroups. \begin{Thm}\label{t:semiprimitivity} Let $\mathscr G$ be an ample groupoid and $\Bbbk$ a commutative ring with unit. Suppose that the set $X$ of $x\in \mathscr G\skel 0$ such that $\Bbbk G_x$ is semiprimitive is $\Bbbk$-dense. Then $\Bbbk \mathscr G$ is semiprimitive. 
\end{Thm} \begin{proof} Choose a faithful semisimple module $V_x$ for $\Bbbk G_x$ for each $x\in X$ and put $V=\bigoplus_{x\in X} \mathop{\mathrm{Ind}}\nolimits_x(V_x)$. Then $V$ is semisimple by~\cite[Proposition~7.19]{mygroupoidalgebra}. We need to show that it is faithful. Let $0\neq \varphi\in \Bbbk \mathscr G$ and let $g\in \mathscr G\skel 1$ with $\varphi(g)\neq 0$ and $x=s(g)\in X$. Then $\varphi\cdot \mathop{\mathrm{Ind}}\nolimits_x(V_x)\neq 0$ by Lemma~\ref{l:induceup} and hence $\varphi\cdot V\neq 0$. This proves that $V$ is faithful and hence $\Bbbk\mathscr G$ is semiprimitive. \end{proof} The groupoid used in~\cite{GroupoidMorita} to realize Leavitt path algebras has the property that each isotropy group is either trivial or infinite cyclic. Hence one has that the isotropy groups have semiprimitive algebras over any base field. We thus recover the following result of~\cite{Leavittsemiprimitivity}. \begin{Cor} Leavitt path algebras are semiprimitive over any base field. \end{Cor} The following extends a celebrated result of Amitsur~\cite{Amitsur} to ample groupoid algebras. \begin{Cor} Let $\mathscr G$ be an ample groupoid and let $\Bbbk$ be a field of characteristic $0$ that is not algebraic over $\mathbb Q$. Then $\Bbbk\mathscr G$ is semiprimitive. \end{Cor} \begin{proof} By a well-known result of Amitsur~\cite{Amitsur}, $\Bbbk G$ is semiprimitive for any group $G$. Since $\mathscr G\skel 0$ is obviously $\Bbbk$-dense, we conclude that $\Bbbk \mathscr G$ is semiprimitive by Theorem~\ref{t:semiprimitivity}. \end{proof} In Example~\ref{e:needkdense} below, we shall show that $\Bbbk$-density cannot be relaxed to just density in Theorem~\ref{t:semiprimitivity}. Our next goal is to generalize a result of Munn~\cite{MunnSemiprim} from inverse semigroups to groupoids. In fact, his result is trivial in the groupoid context. \begin{Prop}\label{p:isolated} Let $\mathscr G$ be an ample groupoid and $\Bbbk$ a commutative ring with unit. 
Let $U\subseteq \mathscr G\skel 0$ be compact open and let $\mathscr G|_U$ be the full open subgroupoid with object set $U$. If $\Bbbk\mathscr G$ is (semi)primitive, then so is $\Bbbk \mathscr G|_U$. In particular, if $x\in \mathscr G\skel 0$ is an isolated point, then $\Bbbk G_x$ is (semi)primitive whenever $\Bbbk\mathscr G$ is (semi)primitive. \end{Prop} \begin{proof} It is shown in~\cite{groupoidbundles} that $\Bbbk\mathscr G|_U$ is the corner $\chi_U\cdot \Bbbk\mathscr G\cdot \chi_U$. As a corner in a (semi)primitive ring is (semi)primitive (cf.~\cite[Corollary~21.13]{LamBook}), the result follows. \end{proof} Combining Proposition~\ref{p:isolated} with Theorem~\ref{t:semiprimitivity}, we obtain the following corollary. \begin{Cor}\label{c:isolatedsemi} Let $\mathscr G$ be an ample groupoid and $\Bbbk$ a commutative ring with unit. Suppose that $X\subseteq \mathscr G\skel 0$ is a $\Bbbk$-dense set of isolated points. Then $\Bbbk\mathscr G$ is semiprimitive if and only if $\Bbbk G_x$ is semiprimitive for all $x\in X$. \end{Cor} \subsection{Primitivity} Now we consider primitivity of ample groupoid algebras. First we show that if $\Bbbk\mathscr G$ is primitive, then $\mathscr G$ admits a $\Bbbk$-dense orbit (and hence a dense orbit). Let $\mathscr G$ be an \'etale groupoid. A \emph{$\mathscr G$-sheaf} consists of a space $E$, a local homeomorphism $p\colon E\to \mathscr G\skel 0$ and an action map $E\times_{\mathscr G\skel 0} \mathscr G\skel 1\to E$ (where the fiber product is with respect to $p$ and $t$), denoted $(e,g)\mapsto eg$, satisfying the following axioms: \begin{itemize} \item $ep(e)=e$ for all $e\in E$; \item $p(eg)=s(g)$ whenever $p(e)=t(g)$; \item $(eg)h=e(gh)$ whenever $p(e)=t(g)$ and $s(g)=t(h)$.
\end{itemize} If $\Bbbk$ is a commutative ring with unit, then a \emph{$\mathscr G$-sheaf of $\Bbbk$-modules} is a $\mathscr G$-sheaf $(E,p)$ together with a $\Bbbk$-module structure on each stalk $E_x=p^{-1}(x)$ such that: \begin{itemize} \item the zero section, denoted $0$, sending $x\in \mathscr G\skel 0$ to the zero of $E_x$ is continuous; \item addition $E\times_{\mathscr G\skel 0} E\to E$ is continuous; \item scalar multiplication $\Bbbk\times E\to E$ is continuous; \item for each $g\in \mathscr G\skel 1$, the map $R_g\colon E_{t(g)}\to E_{s(g)}$ given by $R_g(e) = eg$ is $\Bbbk$-linear; \end{itemize} where $\Bbbk$ has the discrete topology in the third item. Note that the first three conditions are equivalent to $(E,p)$ being a sheaf of $\Bbbk$-modules over $\mathscr G\skel 0$. If $(E,p)$ is a $\mathscr G$-sheaf of $\Bbbk$-modules, then $\Gamma_c(E,p)$ denotes the $\Bbbk$-module of global sections of $p$ with compact support. There is a right $\Bbbk\mathscr G$-module structure on $\Gamma_c(E,p)$ given by \[(s\varphi)(x) = \sum_{s(g)=x} \varphi(g)s(t(g))g=\sum_{s(g)=x} \varphi(g)R_g(s(t(g))). \] It is shown in~\cite{groupoidbundles} that every unitary right $\Bbbk\mathscr G$-module is isomorphic to $\Gamma_c(E,p)$ for some $\mathscr G$-sheaf $(E,p)$ of $\Bbbk$-modules. We recall that a module $M$ is unitary if $M\cdot \Bbbk\mathscr G=M$. \begin{Prop}\label{p:denseorbit} Let $\mathscr G$ be an ample groupoid and $\Bbbk$ a field. Suppose that $\Bbbk\mathscr G$ is primitive. Then $\mathscr G$ has a $\Bbbk$-dense orbit. \end{Prop} \begin{proof} Let $M$ be a faithful simple right $\Bbbk\mathscr G$-module. Then $M$ is unitary and hence $M\cong \Gamma_c(E,p)$ for some $\mathscr G$-sheaf of $\Bbbk$-vector spaces $(E,p)$~\cite{groupoidbundles}. Since $M\neq 0$, we may choose $x\in \mathscr G\skel 0$ such that the stalk $E_x$ is non-zero. We claim that the orbit $\mathcal O_x$ is $\Bbbk$-dense. Indeed, let $X=t^{-1}(\mathcal O_x)$ and let $I$ be the set of all $\varphi\in \Bbbk\mathscr G$ that vanish on $X$. Then $I$ is a right ideal.
If $I=0$, then $\mathcal O_x$ is $\Bbbk$-dense. So suppose that $I\neq 0$. Because $M$ is faithful, we can find $t\in \Gamma_c(E,p)$ with $tI\neq 0$. Then $tI=\Gamma_c(E,p)$ by simplicity. So if $s\in \Gamma_c(E,p)$, then $s=t\varphi$ for some $\varphi\in I$. But then, \[s(x)=(t\varphi)(x)=\sum_{s(g)=x}\varphi(g)t(t(g))g=0\] because $\varphi$ vanishes on $X$ and $s(g)=x$ implies $g\in X$. Thus every element of $\Gamma_c(E,p)$ vanishes at $x$, contradicting $E_x\neq 0$, since $p$ is a local homeomorphism and so every element of $E_x$ is the value at $x$ of some compactly supported global section. We conclude that $\mathcal O_x$ is $\Bbbk$-dense. \end{proof} We now give several situations under which the converse holds. \begin{Thm} Let $\mathscr G$ be an effective Hausdorff ample groupoid and $\Bbbk$ a field. Then $\Bbbk\mathscr G$ is primitive if and only if $\mathscr G$ has a dense orbit. \end{Thm} \begin{proof} It is necessary for $\mathscr G$ to have a dense orbit by Proposition~\ref{p:denseorbit} and Proposition~\ref{kdense}. For sufficiency, assume that $\mathscr G$ has a dense orbit $\mathcal O_x$. We claim that $\Bbbk \mathcal O_x=\mathop{\mathrm{Ind}}\nolimits_x(\Bbbk)$ is a faithful simple module (where $\Bbbk$ has the trivial $G_x$-action). We know that it is simple by~\cite[Proposition~7.19]{mygroupoidalgebra}. If the annihilator is non-zero, then by Proposition~\ref{effectivecase} it contains an element of the form $\chi_U$ with $U\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr G\skel 0)$. By density, there exists $y\in \mathcal O_x\cap U$. Then $\chi_Uy=y\neq 0$. This contradiction shows that $\Bbbk\mathscr G$ is primitive. \end{proof} Next we show that if there is a $\Bbbk$-dense orbit whose isotropy group has a primitive algebra, then the groupoid has a primitive algebra. \begin{Thm}\label{t:primitivity} Let $\mathscr G$ be an ample groupoid and $\Bbbk$ a field. Suppose that $x\in \mathscr G\skel 0$ is such that $\mathcal O_x$ is $\Bbbk$-dense. If $\Bbbk G_x$ is primitive, then $\Bbbk\mathscr G$ is primitive. The converse holds if $x$ is an isolated point.
\end{Thm} \begin{proof} The final statement follows from Proposition~\ref{p:isolated}. Suppose that $\Bbbk G_x$ is primitive and that $V_x$ is a faithful simple $\Bbbk G_x$-module. Then $\mathop{\mathrm{Ind}}\nolimits_x(V_x)$ is simple by~\cite[Proposition~7.19]{mygroupoidalgebra} and faithful by Lemma~\ref{l:induceup} and the definition of $\Bbbk$-density. \end{proof} Inverse semigroups with zero are constructed in~\cite{Munntwoexamples} whose associated ample groupoids have simple (hence primitive) algebras over $\mathbb F_p$, but none of whose isotropy groups has a semiprimitive algebra over $\mathbb F_p$ (in fact, they contain non-zero nilpotent ideals). \section{Applications to inverse semigroups} In this section, we apply the results of the previous sections to obtain new results about inverse semigroup algebras, as well as simpler and more conceptual proofs of old results. \subsection{The universal groupoid} First we recall the construction of the universal groupoid $\mathscr U(S)$ of an inverse semigroup and the contracted universal groupoid $\mathscr U_0(S)$ for an inverse semigroup with zero. See~\cite{Exel,Paterson,mygroupoidalgebra} for details. A \emph{character} of a semilattice $E$ is a non-zero homomorphism $\theta\colon E\to \{0,1\}$ where $\{0,1\}$ is a semilattice under multiplication. The \emph{spectrum} of $E$ is the space $\widehat E$ of characters of $E$, topologized as a subspace of $\{0,1\}^E$. Note that $\widehat E$ is Hausdorff with a basis of compact open sets. Indeed, if we put $D(e)=\{\theta\in \widehat E\mid \theta(e)=1\}$ for $e\in E$, then the sets of the form $D(e)\cap D(e_1)^c\cap\cdots \cap D(e_n)^c$ form a basis of compact open sets for the topology, where $X^c$ denotes the complement of $X$.
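To make the topology on $\widehat E$ concrete, we include a small illustrative example (a routine verification, not needed in the sequel). Let $E=\{e_1>e_2>e_3>\cdots\}$ be an infinite descending chain. A character $\theta$ is determined by $\theta^{-1}(1)$, which must be a non-empty upward closed subsemilattice of $E$, that is, either $\{e_1,\ldots,e_n\}$ for some $n\geq 1$ or all of $E$; write $\theta_n$ and $\theta_\infty$, respectively, for the corresponding characters. Then
\[D(e_m)=\{\theta_n\mid n\geq m\}\cup \{\theta_\infty\}\qquad \text{and}\qquad D(e_m)\cap D(e_{m+1})^c=\{\theta_m\},\]
so each $\theta_m$ is an isolated point of $\widehat E$, whereas the basic neighborhoods of $\theta_\infty$ are exactly the sets $D(e_m)$. Thus $\widehat E$ is homeomorphic to the one-point compactification of a countably infinite discrete space, with $\theta_\infty$ the point at infinity.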
If $e\in E$, then the \emph{principal character} $\chi_e\colon E\to \{0,1\}$ is defined by \[\chi_e(f)=\begin{cases} 1, & \text{if}\ f\geq e\\ 0, & \text{else.}\end{cases}\] The principal characters are dense in $\widehat E$. If $E$ has a zero element, then a character $\theta$ is called \emph{proper} if $\theta(0)=0$, or equivalently $\theta\neq \chi_0$. The set of proper characters will be denoted $\widehat E_0$. Notice that $D(0)=\{\chi_0\}$ and so $\chi_0$ is always an isolated point. Let $S$ be an inverse semigroup. Then $S$ acts on $\widehat{E(S)}$. The domain of the action of $s$ is $D(s^*s)$. If $\theta\in D(s^*s)$, then $(s\theta)(e) = \theta(s^*es)$. If $S$ has a zero, then $\widehat{E(S)}_0$ is invariant under $S$. The \emph{universal groupoid} of $S$ is $\mathscr U(S)=S\ltimes \widehat{E(S)}$. Note that $[s,\chi_{s^*s}]\in (t,D(t^*t))$ if and only if $s\leq t$ and that the isotropy group $G_{\chi_e}$ of a principal character is isomorphic to the maximal subgroup $G_e$ (cf.~\cite{mygroupoidalgebra}). It is known that $\mathscr U(S)$ is Hausdorff if and only if $S$ is Hausdorff~\cite[Theorem~5.17]{mygroupoidalgebra}. If $S$ has a zero, we put $\mathscr U_0(S)=S\ltimes \widehat{E(S)}_0$ and call it the \emph{contracted universal groupoid} of $S$. The following theorem is fundamental to the subject. \begin{Thm}\label{t:isothm} Let $S$ be an inverse semigroup and $\Bbbk$ a commutative ring with unit. Then $\Bbbk S\cong \Bbbk\mathscr U(S)$. The isomorphism sends $s\in S$ to $\chi_{(s,D(s^*s))}$. If $S$ has a zero, then $\Bbbk_0S\cong \Bbbk\mathscr U_0(S)$. \end{Thm} \begin{proof} The first isomorphism is proved in~\cite[Theorem~6.3]{mygroupoidalgebra}. For the second isomorphism, note that $0$ is sent to $\chi_{(0,D(0))}$ and so $\Bbbk_0S\cong \Bbbk\mathscr U(S)/\Bbbk \chi_{(0,D(0))}$.
But $(0,D(0))$ just consists of the unit $\chi_0$ and hence $\Bbbk \chi_{(0,D(0))}$ is the kernel of the restriction map $\Bbbk\mathscr U(S)\to \Bbbk\mathscr U_0(S)$, and the latter is a surjective homomorphism because $\mathscr U_0(S)$ is a clopen full subgroupoid of $\mathscr U(S)$ whose unit space is a union of orbits. \end{proof} \subsubsection{Ultrafilters and tight characters} Let $E$ be a semilattice. A \emph{filter} is a non-empty subsemigroup $\mathcal F\subseteq E$ with the property that $e\geq f$ with $f\in \mathcal F$ implies $e\in \mathcal F$. The characters of $E$ are exactly the characteristic functions $\chi_{\mathcal F}$ of filters $\mathcal F$. Let $E$ be a semilattice with zero. The proper characters of $E$ are in bijection with proper filters (those not containing $0$). A maximal proper filter is called an \emph{ultrafilter}. Denote by $UF(E)$ the subspace of $\widehat E_0$ consisting of those $\theta\in \widehat E$ with $\theta^{-1} (1)$ an ultrafilter. If $S$ is an inverse semigroup with zero, then $UF(E(S))$ is invariant under $S$~\cite{Exel}. This led Exel to study the closure of $UF(E(S))$ in $\widehat{E(S)}_0$. Let $E$ be a semilattice with zero and $e\in E$. A finite subset $F\subseteq e^{\downarrow}$ is a \emph{cover} of $e$ if $zf=0$ for all $f\in F$ implies that $ze=0$. Equivalently, $F$ is a cover of $e$ if $0\neq z\leq e$ implies that $zf\neq 0$ for some $f\in F$. Note that the empty set is a cover of $0$. Following Exel~\cite{Exel} (but using a reformulation of Lawson~\cite{Lawsontight}), we say that a filter $\mathcal F$ is \emph{tight} if $e\in \mathcal F$ and $F$ a cover of $e$ implies $F\cap \mathcal F\neq \emptyset$. Note that a tight filter must be proper because $\emptyset$ covers $0$. We say that a character $\theta$ is \emph{tight} if $\theta^{-1}(1)$ is a tight filter, or equivalently, \[\theta(e)=\bigvee_{f\in F}\theta(f)\] whenever $F$ is a cover of $e$.
This is also equivalent to \[\prod_{f\in F}(\theta(e)-\theta(f))=0\] for each cover $F$ of $e$. The space of tight characters is denoted $\widehat E_T$. Any ultrafilter is tight and Exel proved~\cite{Exel} that $\widehat E_T$ is the closure of $UF(E)$. In particular, $\widehat{E(S)}_T$ is invariant for an inverse semigroup $S$ with zero. We put $\mathscr U_T(S)=S\ltimes \widehat{E(S)}_T$ and call it the \emph{universal tight groupoid} of $S$. Our next goal is to give a presentation of the algebra of the universal tight groupoid under the assumption that $S$ is Hausdorff. We shall use without comment that the idempotents of a commutative ring form a generalized Boolean algebra via $e\vee f=e+f-ef$, $e\wedge f=ef$ and $e\setminus f=e-ef$. If $\psi\colon S\to A$ is a homomorphism to a $\Bbbk$-algebra $A$, we say that $\psi$ is \emph{tight} if $\psi(e)=\bigvee_{f\in F}\psi(f)$ whenever $F$ is a cover of $e$. \begin{Prop}\label{p:presentation} Let $S$ be a Hausdorff inverse semigroup and let $X\subseteq \widehat{E(S)}$ be a closed invariant subspace. If $\mathscr G=\mathscr U(S)|_X$ and $\Bbbk$ is a commutative ring with unit, then \[\Bbbk\mathscr G\cong \Bbbk S/\langle \prod_{i=1}^n(e-e_i)\mid e_i\leq e, D(e)\cap D(e_1)^c\cap\cdots \cap D(e_n)^c\cap X=\emptyset\rangle.\] \end{Prop} \begin{proof} Since $\mathscr G$ is a closed full subgroupoid of $\mathscr U(S)$, with unit space a union of orbits, there is a surjective homomorphism $F\colon \Bbbk\mathscr U(S)\to \Bbbk\mathscr G$ given by $F(\psi)=\psi|_{\mathscr G\skel 1}$. Let $I=\ker F$. We claim that $I$ is generated as an ideal by the $\chi_U$ such that $U\subseteq \widehat {E(S)}$ is compact open and $U\cap X=\emptyset$. Indeed, if $\varphi\in I$, then because $\mathscr U(S)$ is Hausdorff, we have that $\varphi^{-1}(\Bbbk\setminus 0)$ is compact open and hence $U=s(\varphi^{-1}(\Bbbk\setminus 0))$ is compact open. Moreover, $\varphi=\varphi\ast \chi_U$. If $x\in U$, then $x=s(g)$ with $\varphi(g)\neq 0$.
Since $X$ is invariant and $\varphi|_{\mathscr G\skel 1}=0$, we conclude that $x\notin X$. Thus $U\cap X=\emptyset$. This proves the claim. If $U\subseteq \widehat{E(S)}$ is compact open with $U\cap X=\emptyset$, then $U=\bigcup_{i=1}^nB_i$ where $B_i$ are basic compact open subsets of $\widehat {E(S)}$ (which necessarily satisfy $B_i\cap X=\emptyset$) and hence $\chi_U=\bigvee_{i=1}^n \chi_{B_i}$. We deduce that $I$ is generated by the $\chi_B$ with $B$ a basic compact open subset of $\widehat{E(S)}$ with $B\cap X=\emptyset$. Such a basic neighborhood is of the form $B=D(e)\cap D(e_1)^c\cap\cdots \cap D(e_n)^c$ where $e_1,\ldots, e_n\leq e$. Then $\chi_B=\prod_{i=1}^n (\chi_{D(e)}-\chi_{D(e_i)})$. Under the isomorphism $\Bbbk S\to \Bbbk\mathscr U(S)$, we have that $\chi_B$ is the image of $\prod_{i=1}^n(e-e_i)$ and so the proposition follows. \end{proof} As a corollary, we obtain the following result. \begin{Cor}\label{c:tightpres} Let $S$ be a Hausdorff inverse semigroup with zero and let $\Bbbk$ be a commutative ring with unit. Then \begin{align*} \Bbbk \mathscr U_T(S) &\cong \Bbbk S/\langle e-\bigvee F\mid F\ \text{covers}\ e\rangle\\ &\cong \Bbbk S/\langle \prod_{f\in F}(e-f)\mid F\ \text{covers}\ e\rangle. \end{align*} Hence the map $S\to \Bbbk\mathscr U_T(S)$ given by $s\mapsto \chi_{(s,D(s^*s)\cap \widehat{E(S)}_T)}$ is the universal tight homomorphism from $S$ into a $\Bbbk$-algebra. \end{Cor} \begin{proof} If $e_1,\ldots, e_n$ (where possibly $n=0$) is a cover of $e$, then $D(e)\cap D(e_1)^c\cap\cdots\cap D(e_n)^c$ cannot contain a tight character. Conversely, if $e_1,\ldots, e_n$ is not a cover of $e$, then there exists $z$ with $0\neq z\leq e$ and $ze_i=0$ for $i=1,\ldots,n$. Let $\mathcal F$ be an ultrafilter containing $z$ (such exists by Zorn's lemma). Then $e\in \mathcal F$ and $e_1,\ldots, e_n\notin \mathcal F$. Thus $\chi_{\mathcal F}\in D(e)\cap D(e_1)^c\cap\cdots\cap D(e_n)^c\cap \widehat{E(S)}_T$ because ultrafilters are tight. 
The result follows from Proposition~\ref{p:presentation}. \end{proof} Note that since the empty set is a cover of $0$, it follows that $\Bbbk S\to \Bbbk\mathscr U_T(S)$ factors through the contracted semigroup algebra $\Bbbk_0S$. Since graph inverse semigroups are $0$-$E$-unitary and hence Hausdorff, it is immediate from the corollary and~\cite[Proposition~3.10]{JonesLawson} that if $S$ is a graph inverse semigroup of a directed graph in which the in-degree of every vertex is finite and at least $2$, then $\Bbbk \mathscr U_T(S)$ is isomorphic to the Leavitt path algebra corresponding to the same graph. \subsection{Groupoids of germs from a dynamical viewpoint} Let $S$ be an inverse semigroup acting on a locally compact Hausdorff space $X$. We characterize the orbits and effectiveness of $\mathscr G=S\ltimes X$ in terms of the dynamics of the action of $S$ on $X$. The first proposition shows that the orbits of $S$ and $S\ltimes X$ are the same. \begin{Prop}\label{orbits} Let $S$ be an inverse semigroup acting on a space $X$. If $x\in X$, then $\mathcal O_x=\{sx\mid s\in S, x\in X_{s^*s}\}$. \end{Prop} \begin{proof} This is immediate from the fact that $[s,x]$ is an arrow if and only if $x\in X_{s^*s}$ and that $s([s,x])=x$ and $t([s,x])=sx$. \end{proof} We say that the action of $S$ on $X$ is \emph{minimal} if there are no proper, non-empty closed $S$-invariant subspaces, or equivalently, each orbit under $S$ is dense. We then have the following corollary of Proposition~\ref{orbits}. \begin{Cor}\label{c:minimalactionofinv} Let $S$ be an inverse semigroup acting on a space $X$. Then $S\ltimes X$ is minimal if and only if the action of $S$ on $X$ is minimal. \end{Cor} Next we consider effectiveness. \begin{Prop}\label{p:effectiveprop} Let $S$ be an inverse semigroup acting on a space $X$. Then $S\ltimes X$ is effective if and only if $X_s=\mathrm{Int}(\mathrm{Fix}(s))$ for all $s\in S$. \end{Prop} \begin{proof} Let $\mathscr G=S\ltimes X$. 
Recall that $X_s\subseteq \mathrm{Int}(\mathrm{Fix}(s))$ always holds. Suppose that $\mathscr G$ is effective and let $x\in \mathrm{Int}(\mathrm{Fix}(s))$. Note that $[s,y]\in \mathop{\mathrm{Is}}(\mathscr G)$ for all $y\in \mathrm{Fix}(s)$. Let $U$ be an open neighborhood of $x$ contained in $\mathrm{Fix}(s)$. Then $(s,U)$ is an open neighborhood of $[s,x]$ contained in $\mathop{\mathrm{Is}}(\mathscr G)$ and hence, by effectiveness, is contained in $\mathscr G\skel 0$. In particular, $[s,x]$ is an identity and so there is an idempotent $e\leq s$ with $x\in X_e$. Thus $x\in X_s$. Conversely, suppose that $X_s=\mathrm{Int}(\mathrm{Fix}(s))$ and that $[s,x]\in \mathrm{Int}(\mathop{\mathrm{Is}}(\mathscr G))$. Let $(s,U)$ be a basic neighborhood of $[s,x]$ contained in $\mathop{\mathrm{Is}}(\mathscr G)$. Then $sy=t([s,y])=s([s,y])=y$ for all $y\in U$ and so $x\in U\subseteq \mathrm{Fix}(s)$. Thus $x\in \mathrm{Int}(\mathrm{Fix}(s))=X_s$ and so $x\in X_e$ for some $e\leq s$. But then $[s,x]=[e,x]$ is an identity and so $\mathscr G$ is effective. \end{proof} \begin{Cor}\label{c:faithfulimplieseffective} Let $S$ be an inverse semigroup acting faithfully on a space $X$ such that the $X_e$ with $e\in E(S)$ form a basis for the topology on $X$. Then $S\ltimes X$ is effective. \end{Cor} \begin{proof} This follows from Proposition~\ref{p:effectiveprop} and Proposition~\ref{p:faithful}. \end{proof} Proposition~\ref{p:effectiveprop} has a simpler formulation for $E$-unitary and $0$-$E$-unitary inverse semigroups. \begin{Cor}\label{c:topologicallyfree} Suppose that $\theta\colon S\to I_X$ is an action of an inverse semigroup on a space $X$. If $S$ is $E$-unitary, or $S$ is $0$-$E$-unitary and $\theta(0)=0$, then $S\ltimes X$ is effective if and only if $\mathrm{Int}(\mathrm{Fix}(s))=\emptyset$ for all $s\in S\setminus E(S)$. \end{Cor} \begin{proof} We handle just the $0$-$E$-unitary case, as the other case is simpler.
Note that if $S$ is $0$-$E$-unitary and if $X_0=\emptyset$, then $X_s=\emptyset$ for any $s\in S\setminus E(S)$. The result is now immediate from Proposition~\ref{p:effectiveprop}. \end{proof} We now have the following theorem as a consequence of Corollary~\ref{simplehaus}. \begin{Thm}\label{t:simplefromaction} Let $S$ be an inverse semigroup and $\Bbbk$ a field. Suppose that $S$ acts on a Hausdorff space $X$ with a basis of compact open sets such that $X_s$ is clopen for all $s\in S$. Setting $\mathscr G=S\ltimes X$, one has that $\Bbbk \mathscr G$ is simple if and only if the action of $S$ is minimal and $X_s=\mathrm{Int}(\mathrm{Fix}(s))$ for all $s\in S$. \end{Thm} \begin{proof} We have that $\mathscr G$ is Hausdorff by Proposition~\ref{p:hausdorffcondition}. In light of Corollary~\ref{c:minimalactionofinv} and Proposition~\ref{p:effectiveprop}, the result follows from Corollary~\ref{simplehaus}. \end{proof} \subsection{Simplicity of contracted inverse semigroup algebras} If $S$ is a non-trivial inverse semigroup and $\Bbbk$ is a field, then $\Bbbk S$ is never simple because $\Bbbk$ is a homomorphic image via the mapping $s\mapsto 1$ for $s\in S$. But if $S$ is an inverse semigroup with zero, then the contracted semigroup algebra $\Bbbk_0S$ can be simple. It was observed by Munn that a necessary condition is that $S$ be congruence-free, since if $\equiv$ is a non-trivial congruence on $S$, then there is an induced surjective homomorphism $\Bbbk_0S\to \Bbbk_0[S/{\equiv}]$ which has a non-trivial kernel. But even congruence-free inverse semigroups with zero can have non-simple contracted semigroup algebras. For instance, the polycyclic inverse monoid on $2$ generators~\cite{Lawson} is congruence-free and its algebra has as a quotient the Leavitt path algebra associated to a rose with $2$ petals. Munn asked~\cite{MunnAlgebraSurvey} for a characterization of when a congruence-free inverse semigroup with zero has a simple contracted semigroup algebra.
We provide an answer to this question under the additional assumption that the inverse semigroup is Hausdorff. In particular, this applies to $0$-$E$-unitary inverse semigroups. We also show that the algebra of the universal tight groupoid of a congruence-free Hausdorff inverse semigroup is always simple. Recall that an inverse semigroup $S$ with zero is called \emph{$0$-simple} if it contains no proper, non-zero ideals, i.e., $SsS=S$ for all $s\neq 0$. An inverse semigroup~$S$ is called \emph{fundamental} if every non-trivial congruence identifies some pair of idempotents. A semilattice $E$ with zero is called \emph{$0$-disjunctive} if for all $0<e<f$, there exists $0<e'<f$ such that $ee'=0$. It is well known~\cite{petrich} that an inverse semigroup $S$ with zero is congruence-free if and only if it is $0$-simple, fundamental and $E(S)$ is $0$-disjunctive. Moreover, $S$ is fundamental if and only if the centralizer of $E(S)$ is $E(S)$~\cite{Lawson}, or equivalently, $s^*es=e$ for all idempotents $e\leq s$ implies $s\in E(S)$. If every maximal subgroup of $S$ is trivial, then $S$ is fundamental. \begin{Lemma}\label{l:ultrafilter} Let $S$ be an inverse semigroup with zero and suppose that $S$ is fundamental and $E(S)$ is $0$-disjunctive. Then $S\ltimes UF(E(S))$ is effective. \end{Lemma} \begin{proof} Let $K_e=UF(E(S))\cap D(e)$ for $e\in E(S)$. Then it is easy to see that the $K_e$ with $e\in E(S)$ form a basis for the topology on $UF(E(S))$ (cf.~\cite{Lawsontight}). Indeed, if $\mathcal F$ is an ultrafilter with $e\in \mathcal F$ and $e_1,\ldots, e_n\notin \mathcal F$, then by maximality it follows, for $i=1,\ldots, n$, that $e_ie_i'=0$ for some $e_i'\in \mathcal F$. Taking $e'=ee_1'\cdots e_n'$, we have $e'\in \mathcal F$, $e'\leq e$ and $e'e_i=0$ for $i=1,\ldots, n$. Thus $K_{e'}\subseteq D(e)\cap D(e_1)^c\cap\cdots \cap D(e_n)^c\cap UF(E(S))$ with $\chi_{\mathcal F}\in K_{e'}$.
Therefore, by Corollary~\ref{c:faithfulimplieseffective}, it suffices to prove that $S$ acts faithfully on $UF(E(S))$. Note first that $e\mapsto K_e$ is injective. Indeed, observe that $K_e=\emptyset$ if and only if $e=0$, since any non-zero idempotent is contained in an ultrafilter by Zorn's lemma. Suppose that $K_e=K_f$ with $e,f\in E(S)$ non-zero and distinct. Without loss of generality, assume $e\nleq f$. Then $K_{ef}=K_e$ and $ef< e$. Also $ef\neq 0$ because $K_e=K_{ef}$ is non-empty. By the definition of $0$-disjunctive, there exists $0<e'<e$ such that $efe'=0$. If $\mathcal F$ is an ultrafilter containing $e'$, then $e\in \mathcal F$ and $ef\notin\mathcal F$. This contradicts $K_e=K_{ef}$ and hence the assumption that $K_e=K_f$. It follows that if $\theta\colon S\to I_{UF(E(S))}$ is the action homomorphism, then $\theta$ is idempotent separating. As every idempotent separating homomorphism from a fundamental inverse semigroup is injective, we conclude that $S$ acts faithfully on $UF(E(S))$. This completes the proof. \end{proof} As a corollary, we obtain an effectiveness result for $\mathscr U_T(S)$. \begin{Cor}\label{c:tightiseffective} Let $S$ be a fundamental inverse semigroup such that $E(S)$ is $0$-disjunctive. Suppose, moreover, that $\mathscr U_T(S)$ is Hausdorff (e.g., if $S$ is Hausdorff). Then $\mathscr U_T(S)$ is effective. \end{Cor} \begin{proof} Let $X=\widehat{E(S)}_T$ and suppose that $\theta\in X$ belongs to $\mathrm{Int}(\mathrm{Fix}(s))$. Then $\theta=\lim \theta_{\alpha}$ where $\{\theta_{\alpha}\}_{\alpha\in D}$ is a net in $UF(E(S))$. Since $\theta$ is an interior point of $\mathrm{Fix}(s)$, $\theta_{\alpha}\in \mathrm{Int}(\mathrm{Fix}(s))$ for all $\alpha$ sufficiently large. By Lemma~\ref{l:ultrafilter} and Proposition~\ref{p:effectiveprop} applied to $UF(E(S))$, we conclude that $\theta_{\alpha}\in X_s$ for $\alpha$ sufficiently large. But $X_s$ is closed by Proposition~\ref{p:hausdorffcondition} and thus $\theta\in X_s$.
We conclude that $\mathscr U_T(S)$ is effective by Proposition~\ref{p:effectiveprop}. \end{proof} Next we prove that if $S$ is $0$-simple, then $\mathscr U_T(S)$ is minimal. \begin{Prop}\label{p:minimality} Let $S$ be a $0$-simple inverse semigroup. Then $\mathscr U_T(S)$ is minimal. \end{Prop} \begin{proof} We must show that all orbits of $S$ on $X=\widehat{E(S)}_T$ are dense. Since $UF(E(S))$ is dense in $X$, it suffices to show that each orbit contains $UF(E(S))$ in its closure. Let $\theta\in X$ and $\varphi\in UF(E(S))$. Let $X_e\cap X_{e_1}^c\cap\cdots \cap X_{e_n}^c$ be a basic neighborhood of $\varphi$. Since $\mathcal F=\varphi^{-1}(1)$ is an ultrafilter and does not contain the $e_i$, there exists $e'\in E(S)$ with $\varphi(e')=1$, $e'\leq e$ and $e'e_i=0$ for $i=1,\ldots, n$ (cf.~the proof of Lemma~\ref{l:ultrafilter}). Suppose that $\theta(f)=1$; note that $f\neq 0$ because $\theta$ is proper. Since $S$ is $0$-simple, we have $f=se't$ with $s,t\in S$. Put $z=e'tf$. Then $sz=f$ and $zf=z$ and so $z^*z=z^*zf=fz^*z=szz^*z=sz=f$ and $e'zz^*=e'e'tfz^*=e'tfz^*=zz^*$ and so $zz^*\leq e'$. Therefore $\theta\in D(f)=D(z^*z)$ and \[(z\theta)(e')\geq (z\theta)(zz^*)=\theta(z^*zz^*z)=\theta(f)=1.\] Thus $z\theta\in X_{e'}\subseteq X_e\cap X_{e_1}^c\cap\cdots \cap X_{e_n}^c$. We conclude that $\ov{\mathcal O_{\theta}}\supseteq \ov{UF(E(S))}=X$, as required. \end{proof} The next corollary is one of our principal applications to inverse semigroups. \begin{Cor}\label{c:simplicitytight} Let $S$ be a congruence-free inverse semigroup and $\Bbbk$ a field. Suppose, moreover, that $\mathscr U_T(S)$ is Hausdorff (e.g., if $S$ is Hausdorff). Then $\Bbbk \mathscr U_T(S)$ is simple. \end{Cor} \begin{proof} Since congruence-free inverse semigroups are fundamental, $0$-simple and have $0$-disjunctive semilattices of idempotents, this is immediate from Corollary~\ref{simplehaus}, Corollary~\ref{c:tightiseffective} and Proposition~\ref{p:minimality}.
\end{proof} We are now ready to characterize Hausdorff inverse semigroups with zero whose contracted semigroup algebras are simple, which is another main result of this paper. Let us say that an inverse semigroup $S$ with zero is \emph{tight} if $0\neq e\in E(S)$ and $F$ a cover of $e$ implies $e\in F$. Equivalently, $S$ is tight if each proper principal character of $E(S)$ is tight, that is, $\mathscr U_0(S)=\mathscr U_T(S)$. Notice that $S$ is tight if and only if $E(S)$ is tight. \begin{Thm}\label{t:simpleinvalg} Let $S$ be a Hausdorff inverse semigroup with zero and $\Bbbk$ a field. Then the contracted semigroup algebra $\Bbbk_0S$ is simple if and only if $S$ is congruence-free and tight. \end{Thm} \begin{proof} Suppose first that $\Bbbk_0S$ is simple. We already observed that $S$ must be congruence-free. By Corollary~\ref{simplehaus} and the isomorphism $\Bbbk_0S\cong \Bbbk\mathscr U_0(S)$, we must have that $\mathscr U_0(S)$ is minimal. Since $\widehat{E(S)}_T$ is a closed invariant subspace, we deduce that all proper characters of $S$ are tight and hence $S$ is tight. Conversely, if $S$ is tight, then because the proper principal characters are dense in $\widehat{E(S)}_0$ and tight, we deduce that $\widehat{E(S)}_0=\widehat{E(S)}_T$ and hence $\mathscr U_0(S)=\mathscr U_T(S)$. Therefore, we have $\Bbbk_0S\cong \Bbbk\mathscr U_0(S)\cong \Bbbk\mathscr U_T(S)$ and the result follows from Corollary~\ref{c:simplicitytight}. \end{proof} \begin{Cor}\label{c:simpleeunit} Let $S$ be a $0$-$E$-unitary inverse semigroup and $\Bbbk$ a field. Then the contracted semigroup algebra $\Bbbk_0S$ is simple if and only if $S$ is congruence-free and tight. \end{Cor} Let us consider an example. \begin{Example} If $X$ is a set with $|X|\geq 2$, then the polycyclic inverse monoid $P_X$~\cite{Lawson} is $0$-$E$-unitary and congruence-free. If $|X|<\infty$, then $\{x^*x\mid x\in X\}$ covers $1$ and so $P_X$ is not tight.
Moreover, $\Bbbk\mathscr U_T(P_X)$ is the Leavitt algebra associated to a graph with one vertex and $|X|$ loops. If $|X|$ is infinite, then $P_X$ is tight. For if $ww^*$ is an idempotent (with $w$ a word over $X$) and if $wx_1(wx_1)^*,\ldots, wx_n(wx_n)^*$ are idempotents strictly below $ww^*$, then choosing $x\in X$ different from the first letters of $x_1,\ldots, x_n$, we have that $0\neq wx(wx)^*<ww^*$ and $wx(wx)^*(wx_i)(wx_i)^*=0$. Thus $P_X$ is tight and therefore $\Bbbk_0 P_X$ is simple for any field $\Bbbk$ (as is well known). \end{Example} \subsection{Primitivity and semiprimitivity of inverse semigroup algebras} In this section, we apply our results on primitivity and semiprimitivity of groupoid algebras to inverse semigroups. We first give our promised example for which $\Bbbk$-density differs from density. \begin{Example}\label{examplecliff} Let $S$ be the following inverse monoid. It consists of elements $\{a,e,0,1,2,3,\ldots\}$ where $e,0,1,2,3,\ldots$ are idempotents, $0$ is a multiplicative zero, $e$ is the identity, $ij=0$ for $i\neq j$ with $i,j\geq 1$, $a^2=e$ and $ai=i=ia$ for all $i\geq 0$. Each character of $E(S)$ is principal. The universal groupoid $\mathscr U(S)$ is isomorphic to a group bundle with unit space $E(S)$ where $E(S)$ is given the topology of the one-point compactification of the discrete space $E(S)\setminus \{e\}$. The isotropy group at $e$ is $\{a,e\}$ and is trivial at all other objects. A subset containing $a$ is open if and only if it contains all but finitely many elements of $E(S)$. In particular, all neighborhoods of $e$ and $a$ intersect non-trivially and so $\mathscr U(S)$ is not Hausdorff. Notice that $E(S)\setminus \{e\}$ is dense in $\mathscr U(S)\skel 0$. However, it is not $\Bbbk$-dense for any commutative ring with unit $\Bbbk$. Let $U=\{a\}\cup (E(S)\setminus\{e\})$. Then $U\in \mathop{\mathrm{Bis}}\nolimits_c(\mathscr U(S))$ and hence $\varphi=\chi_{E(S)}-\chi_U = \delta_e-\delta_a$ belongs to $\Bbbk \mathscr U(S)$.
There is no element $g\in \mathscr U(S)\skel 1$ with $\varphi(g)\neq 0$ and $s(g)\in E(S)\setminus \{e\}$. Thus $E(S)\setminus \{e\}$ is not $\Bbbk$-dense. \end{Example} Our next proposition shows that the principal characters of an inverse semigroup are $\Bbbk$-dense for any base ring $\Bbbk$. \begin{Prop}\label{p:idempotentsstronglydense} Let $S$ be an inverse semigroup and $\Bbbk$ a commutative ring with unit. Then the set of principal characters of $E(S)$ is $\Bbbk$-dense in $\mathscr U(S)$. \end{Prop} \begin{proof} Recall that there is an isomorphism $\theta\colon \Bbbk S\to \Bbbk \mathscr U(S)$ given by sending $s\in S$ to $\chi_{(s,D(s^*s))}$. Suppose that $\varphi$ is the image under $\theta$ of $\sum_{i=1}^nc_is_i$ with $c_1,\ldots,c_n\in \Bbbk\setminus \{0\}$. Without loss of generality we may assume that $s_1$ is maximal among $s_1,\ldots,s_n$ in the natural partial order. Then the arrow $[s_1,\chi_{s_1^*s_1}]$ belongs to $(s_1,D(s_1^*s_1))$ but not to $(s_i,D(s_i^*s_i))$, for $i\neq 1$, by maximality of $s_1$. Thus $\varphi([s_1,\chi_{s_1^*s_1}])\neq 0$ and $s([s_1,\chi_{s_1^*s_1}])= \chi_{s_1^*s_1}$. This proves that the principal characters are $\Bbbk$-dense. \end{proof} Next we recover Domanov's theorem~\cite{Domanov} as a special case of our groupoid results. \begin{Cor}[Domanov] Let $S$ be an inverse semigroup and $\Bbbk$ a commutative ring with unit. Suppose that $\Bbbk G_e$ is semiprimitive for each idempotent $e\in E(S)$. Then $\Bbbk S$ is semiprimitive. This applies, in particular, if $\Bbbk$ is a field of characteristic $0$ that is not algebraic over $\mathbb Q$. \end{Cor} \begin{proof} If $\chi_e$ is the principal character associated to $e\in E(S)$, then $G_e$ is the isotropy group at $\chi_e$. Since the principal characters are $\Bbbk$-dense in $\mathscr U(S)$ by Proposition~\ref{p:idempotentsstronglydense}, the result follows from Theorem~\ref{t:semiprimitivity} and the isomorphism $\Bbbk S\cong \Bbbk\mathscr U(S)$. 
\end{proof} We now give an example showing that $\Bbbk$-density cannot be relaxed to just density in Theorem~\ref{t:semiprimitivity}. \begin{Example}\label{e:needkdense} Let $S$ be the inverse monoid from Example~\ref{examplecliff} and let $\mathbb F_2$ be the $2$-element field. The set of principal characters associated to the non-identity idempotents is dense in $\widehat{E(S)}$ and the corresponding isotropy groups are trivial. Thus $\mathbb F_2G_x$ is semiprimitive for $x$ in a dense subset of $\mathscr U(S)\skel 0$. But $\mathbb F_2S$ is a commutative ring with unit and the element $e-a$ is nilpotent (because $(e-a)^2=e-2a+e=0$). Thus $\mathbb F_2S$ is not semiprimitive. This shows that for non-Hausdorff groupoids, we need to work with $\Bbbk$-density rather than just density. \end{Example} A semilattice $E$ is \emph{pseudofinite}~\cite{MunnSemiprim} if, for all $e\in E$, the set of elements strictly below $e$ is finitely generated as a lower set. In~\cite[Proposition~2.5]{mygroupoidalgebra}, it was shown that this is equivalent to the principal characters being isolated points of $\widehat{E}$. Indeed, if $e_1,\ldots, e_n$ generate the lower set of elements strictly below $e$, then $D(e)\cap D(e_1)^c\cap\cdots\cap D(e_n)^c=\{\chi_e\}$. The following corollary is the main result of~\cite{MunnSemiprim}. \begin{Cor}[Munn] Let $S$ be an inverse semigroup and $\Bbbk$ a commutative ring with unit. If $E(S)$ is pseudofinite, then $\Bbbk S$ is semiprimitive if and only if $\Bbbk G_e$ is semiprimitive for each idempotent $e$. \end{Cor} \begin{proof} The principal characters form a $\Bbbk$-dense set of isolated points in $\mathscr U(S)$ and their isotropy groups are precisely the maximal subgroups. Corollary~\ref{c:isolatedsemi} and the isomorphism $\Bbbk S\cong \Bbbk\mathscr U(S)$ yield the required result. \end{proof} As another corollary, we obtain the following result for inverse semigroup algebras.
Recall that an inverse semigroup $S$ (with zero) is \emph{($0$-)bisimple} if it contains a unique (non-zero) $\mathscr D$-class, that is, the principal characters (except $\chi_0$) of $\mathscr U(S)$ belong to a single orbit. \begin{Cor} Let $S$ be a ($0$-)bisimple inverse semigroup with maximal subgroup $G$ of its unique (non-zero) $\mathscr D$-class and let $\Bbbk$ be a field. If $\Bbbk G$ is primitive, then so is the (contracted) semigroup algebra of $S$. The converse holds if $E(S)$ is pseudofinite. \end{Cor} \begin{proof} By ($0$-)bisimplicity, the principal characters form a $\Bbbk$-dense orbit. They are isolated points if $E(S)$ is pseudofinite. The result follows from Theorem~\ref{t:primitivity}. \end{proof} In particular, we obtain the main result of~\cite{Munnprimitive}. \begin{Cor} Let $S$ be a $0$-bisimple inverse semigroup with trivial maximal subgroup of its unique non-zero $\mathscr D$-class. Then the contracted semigroup algebra of $S$ is primitive over any field. \end{Cor} Munn constructed inverse semigroups with zero whose contracted semigroup algebras are simple over $\mathbb F_p$ (hence primitive) such that none of their maximal subgroups has a semiprimitive algebra over $\mathbb F_p$~\cite{Munntwoexamples}. His examples have the further property that none of the isotropy groups of $\mathscr U(S)$ have semiprimitive algebras over $\mathbb F_p$, as is easily checked.
\section{Introduction} Detection of charged particles provides important information about the reaction mechanisms induced by accelerated heavy ions. For light charged particles (LCPs) such as protons, deuterons and alphas, solid-state silicon detectors provide excellent energy, timing and position resolution. Moreover, due to the higher density of the detecting medium, solid-state silicon detectors are better able to stop high energy charged particles. Solid-state detectors are, however, prone to permanent radiation damage and are therefore inadequate for heavy ion detection. Common problems with highly ionizing particles for these detectors are pulse height defect and plasma delay. Corrections in the measurement are then required to get correct information about the energy and rise time of the interacting radiation.\\ In a highly ionizing environment, on the other hand, gas detectors offer better options \cite{as92}. These detectors permit operation at high count rates and in high dose environments, and the gas can be recycled to maintain the purity of the detecting medium. Gas detectors for charged particles are primarily of two types: 1) low pressure multiwire counters, which show excellent position and timing resolutions \cite{ba02}, and 2) relatively high pressure ionization or proportional chambers for the identification and energy measurement of the various heavy ions emitted from a nuclear reaction.\\ Ionization chambers generally operate at reduced electric field values of $E/P$ $\sim$ 1 to 2 Volts/(cm$\times$Torr), where $E$ is the applied electric field and $P$ is the gas pressure. In a typical design of an ionization chamber \cite{kn00}, a transverse field is applied. This field arrangement proves to be disadvantageous, as the associated pulse height becomes a function of the position at which the incident particle impinges \cite{kn00}.
Although this problem is solved by the introduction of a Frisch grid to the ionization chamber, in an axial field configuration a Frisch grid is not always useful.\\ The present work describes the performance of a $\Delta E$ ionization chamber working in the axial field configuration. The advantage of using an axial field in a $\Delta E$ detector over the more usual transverse field has been discussed in ref [4,5]. Since then there have been no studies or developments on this form of gas detector, and extensive studies of the different parameters are sparse in the literature. In this paper we have made a detailed study of the different operating parameters of our detector. The gas pressure, bias voltage and window foil thickness of the chamber have been optimized. Earlier studies [4,5] have given little or no attention to the effect of the anode structure on the detector performance. A parallel wire anode structure has been used for the first time to achieve very high transparency, and its possible effects on the energy resolution have been addressed.\\ \section{Construction of the detector} \begin{figure} \centering \includegraphics[width=4.5in]{fig-1} \caption{Cross-sectional view of the gas ionization chamber} \label{fig_cons} \end{figure} Fig.1 shows the construction of the detector used in the present studies. The body of the detector is made of aluminium with an active area of 45$\times$45 mm$^2$. The window is an aluminized mylar foil mounted by grease (non-outgassing at pressures down to 10$^{-6}$ Torr) and held by an O-ring in a brass flange. The window foil is maintained at ground potential by contact with the screws of the brass flange. The anode structure is located in the middle of the chamber and is mounted on a brass plate of active area 34$\times$34 mm$^2$ and thickness 2 mm. The plate is provided with a hole of diameter 20 mm.
The anode structure is in the form of conducting parallel thin wires or a mesh and is mounted on a 1.5 mm thin copper clad G10 board (PCB) of area 30$\times$30 mm$^2$ with a square hole of 20$\times$20 mm$^2$. The anode assembly is fixed with teflon screws to the central brass plate, which is stepped down by the PCB thickness. Different thin wire structures (parallel or crossed) in the anode are used to study their effect on the energy resolution of the detector. In order to maintain the uniformity of the electric field gradient along the incident particle path, two additional brass plates are kept at half the anode voltage at 10 mm on either side of the anode. In this way the separation between the plates is made small compared to their lengths, so that the effect of a non-uniform electric field can be ignored. The guard plates are of the same thickness and area as the central plate, except that their holes are kept empty. The 20 mm holes of the guard plates and that of the anode are coaxial with the 5 mm hole of the window. The separation of the first guard plate from the window and of the last guard plate from the exit is 9.5 mm. Since the detector will ultimately be used as a $\Delta E$ detector, the anode is centrally located so that the active volume extends on both sides of the anode plane.\\ The voltage to the anode is provided by a kovar seal and the divided voltage to the plates is provided by a breeder (resistive) circuit placed inside the chamber. The plates with applied voltage are isolated from ground by G10 spacers, which cover the inner walls perpendicular to the window and exit sides. The body of the detector at the window (including the window foil) and at the exit side is at ground. This ensures that the field lines lie along the axis of the chamber. The DC current path and other details of the equivalent circuit describing the resistive gradient are shown in fig.2. Calibrated resistances of value 50 M$\Omega$ are used to make the resistive gradient.
\begin{figure} \centering \includegraphics[width=4.5in]{fig-2} \caption{Equivalent circuit describing the resistive gradient and all the ground paths. The details of the preamplifier (Ortec 142IH) circuit are available in the manual.} \label{fig_cir} \end{figure} Resistive noise in our case was negligible. The noise level at the preamplifier stage was within 15 mV. One side of the detector is flanged with a viton O-ring for dismantling and testing. The gas inlet-outlet and the provision for evacuation of the detector are shown in figure 1. The system is pumped down to 10$^{-3}$ Torr vacuum and charged with gas at the desired pressure. The detector is then ready to be tested.\\ \section{Experiment and discussion of results} The detector performance was tested with a $^{252}$Cf $\alpha$-source. The $\alpha$-source was collimated to fall on the entrance window, which is 5 mm in diameter. The collimation and the thickness of the window foil play an important role in the resolution of the detector. We used a collimation of 1 mm and mylar window foils of two different thicknesses. The detector was filled with gas at 60-600 Torr pressure. Two different gases, Ar(90$\%$)-CH$_4$(10$\%$) and isobutane, were used to compare the performance of the chamber in each case. The output of the detector was fed to a charge sensitive preamplifier (ORTEC 142IH), then to a spectroscopy amplifier (ORTEC 672) and finally to a 2K ADC MCA (ORTEC maestro-32). Fig.3 shows the acquired energy-loss spectrum of $\alpha$ particles (84$\%$ 6.12 MeV and 15.7$\%$ 6.08 MeV) from a collimated $^{252}$Cf source of strength 10 $\mu$Ci.\\ \begin{figure} \centering \includegraphics[width=4.5in]{fig-3} \caption{The energy-loss spectrum of $\alpha$-particles (84$\%$ 6.12 MeV and 15.7$\%$ 6.08 MeV) from a collimated $^{252}$Cf source.
The energy resolution is 6.8$\%$ FWHM.} \label{fig_alpha} \end{figure} \\ \begin{figure} \centering \includegraphics[width=3.5in]{fig-4} \caption{Variation of energy resolution with {\bf (a)} anode voltage at a fixed gas pressure (160 Torr) for two different types of gas, Isobutane and Ar(90$\%$)-CH$_4$(10$\%$) (P10), and {\bf (b)} with gas pressure at a fixed anode voltage (200 V).} \label{fig_para} \end{figure} The different parameters of the detector were optimized to obtain the best possible energy resolution with the collimated alpha spectrum. The energy calibration for the $\Delta E$ detector was performed with the $^{241}$Am and $^{252}$Cf $\alpha$-particle (5.485 MeV and 6.12 MeV respectively) energy losses and their correlation with the pulse heights. The stopping power of $\alpha$ particles in gases is well accounted for by the Bethe-Bloch formula \cite{pa66}. The mean energy losses (centroids) in the gas and mylar window were thus determined from the Bethe-Bloch formula (code SRIM \cite{zi66}) with the density of the gas scaled by the ratio of the desired to the normal gas pressure (760 Torr). The centroid of the data was obtained by an in-built fitting software of the MCA. This method of energy calibration for $\Delta E$ detectors using simulated energy loss has been adopted in studies with solid-state detectors \cite{al00}. Two different window foils of thickness 1.5 and 8 $\mu$m were examined. For the thicker foil the resolution varied between 9-13$\%$ FWHM, whereas for the 1.5 $\mu$m window the resolution was much better (6-8$\%$ FWHM). A possible reason for this is that the higher straggling in the thicker window foil degrades the resolution of the detector. As the area of the window aperture is small, no noticeable effect of window deformation due to pressure differences on either side of the window was observed. As far as the working gas is concerned, better performance was obtained with isobutane in comparison to Ar(90$\%$)-CH$_4$(10$\%$).
This is depicted in fig. 4(a). A reason for this may be the higher energy loss per unit pressure in isobutane \cite{ja83}. The optimum gas pressure was found to be 160-200 Torr and the optimum bias voltage 200-250 Volts. We show in fig.4 (b) the region of optimum gas pressure (160-250 Torr) at 200 Volts bias. At lower gas pressures (60 - 100 Torr) the recombination effect is overcome earlier, though the resolution is poorer than at higher pressures. The statistical fluctuation in the number of electron-ion pairs created is larger at lower pressure due to the lower energy loss (hence poorer resolution). At higher pressure the resolution is mainly limited by increased recombination [4,10]. The energy loss of the alpha particle in the gas and the FWHM of the observed peak, in energy units, are given in table 1 at an anode voltage $V=200$ Volts and a shaping time of 3 $\mu$s. The operating reduced electric field ($E/P$) was found to be between 1-2 Volts/(cm$\times$Torr). Gaussian shaping for optimum signal-to-noise ratio was used, and the optimum amplifier shaping time was found to be 3 $\mu$s (the charge collection time for our detector is about 400 ns). The resolution degraded at both higher and lower shaping times.
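As a numerical cross-check, the percentage resolution implied by the table 1 entries can be recomputed directly from the FWHM and the mean energy loss. The sketch below is illustrative only, with the (energy loss, FWHM) pairs transcribed from table 1:

```python
# Percentage resolution = 100 * FWHM / (mean energy loss),
# recomputed from the (energy loss, FWHM) pairs of table 1 (in keV).
table1 = {
    "Isobutane": [(365.3, 47.03), (973.6, 68.15), (1217.6, 90.56),
                  (1582.8, 130.48), (2466.0, 369.90)],
    "P10": [(697.1, 55.77), (871.3, 74.06), (1132.8, 107.61)],
}

for gas, rows in table1.items():
    for de, fwhm in rows:
        print(f"{gas:9s}: dE = {de:7.1f} keV -> "
              f"resolution = {100 * fwhm / de:4.1f}% FWHM")
```

The best values (7.0$\%$ for isobutane at 160 Torr, 8.0$\%$ for P10) reproduce the 6-8$\%$ FWHM range quoted above for the 1.5 $\mu$m window, while the 405 Torr entry shows the degradation attributed to recombination at higher pressure.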
The detector could handle about 500 counts per second with good resolution.\\ \begin{figure} \centering \includegraphics[width=4.5in]{fig-5} \caption{Plot of the variation of the percentage potential shift per unit field difference ($\Delta\phi$) with opacity ($p$) for parallel and crossed wire meshes.} \label{fig_ef} \end{figure} \begin{table} \renewcommand{\arraystretch}{1.3} \caption{Energy loss and resolution in keV are given at different gas pressures for Isobutane and P10 gases.} \label{table_example} \centering \begin{tabular}{|c||c||c||c||c|} \hline & \multicolumn{2}{c||}{Isobutane} & \multicolumn{2}{c|}{P10}\\ \cline{2-5} Gas Pressure & Energy Loss & FWHM(expt) & Energy Loss & FWHM(expt)\\ (Torr) & (keV) & (keV) & (keV) & (keV)\\ \hline 60 & 365.3 & 47.03 & 261.3 & -\\ \hline 160 & 973.6 & 68.15 & 697.1 & 55.77\\ \hline 200 & 1217.6 & 90.56 & 871.3 & 74.06\\ \hline 260 & 1582.8 & 130.48 & 1132.8 & 107.61\\ \hline 405 & 2466.0 & 369.90 & 1764.4 & -\\ \hline \end{tabular} \end{table} The effect of the anode structure on the energy resolution has not been addressed by earlier workers [4,5]. In ref \cite{ba89} an electro-formed nickel mesh with high transparency (97$\%$) was used, but no justification was given for choosing such an anode structure. F. H. Read et al.~\cite{re98} have made an extensive computational study of the electrostatic problems involving a mesh sandwiched between plates on either side kept at different voltages. The effects of a mesh made of a) parallel ultra-thin round wires and b) crossed round wires have been studied. According to this work, the potential on the mesh is modified depending on the structure and transparency of the mesh.
The fluctuation of potential on the mesh ($\phi_m$) is defined as \begin{eqnarray} \phi_m&=&\Delta {\cal{E}} \Delta \phi\\ \Delta \phi &=&s\chi_m \end{eqnarray} where $\Delta \cal{E}$ is the difference in electric fields on either side of the mesh, $\Delta \phi$ denotes the potential shift per unit field difference, $\chi_m$ is a dimensionless parameter that depends on the transparency (opacity) of the mesh and $s$ is the separation between any two adjacent wires in the mesh \cite{re98}. The percentage variation of the potential shift per unit field difference ($\Delta \phi$ $\%$) with opacity ($p$) for crossed and parallel wires is displayed in figure 5. It is clearly seen that at very high transparency (small opacity) $\Delta \phi$ is very large, and the mean voltage on the mesh will shift considerably even if $\Delta \cal{E}$ is small. (For the present electrode configuration the magnitude of the electric field is the same but changes sign on either side of the anode.) At the same transparency, however, this shift is reduced by a factor of 2 for a crossed mesh in comparison to a mesh of parallel wires (figure 5). This is due to the reduction of the parameter $\chi_m$ by this factor in the crossed case \cite{re98}. It is therefore interesting to study how the fluctuation of potential (if any) and the structure of the anode affect the resolution of the detector.\\ Anode structures made of parallel and crossed ultra-thin wires were used in the present ionization detector. For the crossed anode structure an electro-formed Nickel mesh with a transparency of 89$\%$, acquired from Precision e-forming, USA, was used. For this mesh the expected potential shift is only 1$\%$ for our detector configuration. The best resolution obtained with this anode structure was about 7.5$\%$ FWHM. The anode structure made of parallel wires was self-fabricated with gold plated platinum wires. The wire diameter was kept at 50 $\mu$m and the wire spacing at 2 mm. The transparency in this case was 97.5$\%$.
A resolution of 7.3$\%$ FWHM was obtained with this anode structure. Though for the latter anode structure $\Delta \phi$ could be as high as 10$\%$, we did not observe any significant effect of this increase on the resolution. Instead, the effect of transparency could be seen through the improvement in resolution with increasing transparency. The reason for the better resolution with more transparent electrodes is the reduced input capacitance at the preamplifier. The opacity of a crossed mesh is roughly twice that of parallel wires for the same ratio of wire diameter to wire spacing. Therefore it is more convenient to work with parallel wires, which can be fabricated to very high transparency. In order to test the effect of transparency on resolution we used another anode structure where the wire diameter was taken to be 12.5 $\mu$m, keeping the wire spacing at 2 mm as before. The transparency in this case was even higher (99.4$\%$). Though the expected potential fluctuation on the wires is very high in this case, we found an improvement in resolution, which was within 7.0$\%$ FWHM.\\ In the extreme case of a blank anode, i.e. the anode frame made of a brass plate with the 20 mm hole at the center, no mesh or wires were used. In this case, however, no improvement in resolution was observed. This structure also required a higher amplifier shaping time (6 $\mu$s) for the best possible resolution (8$\%$). This is because with the blank-hole anode the charge collection is not as efficient as with an anode with conductors in the hole. The use of a highly transparent mesh as the anode is therefore suitable for improving the resolution. This can be achieved more easily with a parallel arrangement of anode wires than with a crossed structure.
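The transparencies quoted above follow from simple wire geometry. A minimal sketch (standard geometric open-area estimates for thin round wires of diameter $d_w$ at pitch $s$; these estimates are generic, not taken from ref \cite{re98}):

```python
def transparency_parallel(d_w, s):
    """Open-area fraction of an array of parallel round wires of
    diameter d_w at pitch s (same units): 1 - d_w/s."""
    return 1.0 - d_w / s

def transparency_crossed(d_w, s):
    """Open-area fraction of a crossed (square) mesh: (1 - d_w/s)**2,
    so its opacity is roughly twice that of a parallel array when
    d_w/s is small."""
    return (1.0 - d_w / s) ** 2

# Anode structures used in the text (dimensions in mm):
print(f"{100 * transparency_parallel(0.050, 2.0):.1f}")   # 50 um wires, 2 mm pitch -> 97.5
print(f"{100 * transparency_parallel(0.0125, 2.0):.1f}")  # 12.5 um wires, 2 mm pitch -> 99.4
```

For small opacity $(1-d_w/s)^2\approx 1-2d_w/s$, which is the factor-of-two opacity statement above.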
In all the measurements, fresh gas was used for each different anode structure so that the relative error between any two measurements (due to gas impurity) is negligible.\\ \begin{figure} \centering \includegraphics[width=4.5in]{fig-6} \caption{The energy-loss spectrum of spontaneous fission fragments from a $^{252}$Cf source. The heavy fragments on the left have higher intensities than the lighter fragments.} \label{fig_fission} \end{figure} The performance of the detector for heavy ions was studied by recording the $\Delta E$ spectrum of the spontaneous fission fragments from $^{252}$Cf. The acquired spectrum is displayed in fig.6. The light (on the right) and heavy fragments (on the left) are observed to be reasonably separated. The optimum anode structure was the one with 12.5 $\mu$m parallel thin wires at a gas pressure of 60 Torr. At higher pressures the separation degraded, possibly due to increased recombination. The peak to valley ratio was seen to improve with collimation of the source. The fission $E$ spectrum parameters for a solid-state detector described in ref \cite{kn00} are evaluated for the present detector and given in table 2. The $^{252}$Cf fission spectrum has been studied in \cite{aj72} with a gas $\Delta E$ chamber, where the best peak to valley ratio is 2.8. However, in \cite{aj72} the source is placed inside the detector volume and the window effect is avoided. In \cite{sa75} the $^{252}$Cf spectrum has been measured with a gas $E$ detector and the peak to valley ratios for light and heavy fragments are quoted as 2.25 and 2.07 respectively. These parameters in the present case are thus within reasonable limits. It should, however, be noted that the intensities of the light and heavy fragments interchange in the recorded $\Delta E$ spectrum in comparison to a typical $E$ spectrum of the fission fragments. The heavy fragments (left) have higher intensity than the lighter fragments.
This is owing to the increased energy straggling of the lighter fragments compared to the heavier ones, as studied in ref \cite{sy71}. A similar shift in the heavy and light groups has also been observed in \cite{aj72}, but at much higher pressures. In-beam studies with a gas $\Delta E$ detector in conjunction with a stopping solid-state detector were not done, as they have been pursued in detail by several workers [15-19].\\ \begin{table} \renewcommand{\arraystretch}{1.3} \caption{Parameters of the $^{252}$Cf $\Delta E$ fission fragment spectrum. The definitions of the different parameters are the same as in ref \cite{kn00}.} \label{table_example} \centering \begin{tabular}{|c||c|} \hline Spectrum Parameter & Values\\ \hline $N_H/N_V$ & 3.8\\ \hline $N_L/N_V$ & 2.2\\ \hline $N_H/N_L$ & 1.7\\ \hline $\Delta$$L/(L-H)$ & 0.3\\ \hline $\Delta$$H/(L-H)$ & 0.27\\ \hline $(H-HS)/(L-H)$ & 0.53\\ \hline $(L-LS)/(L-H)$ & 0.69\\ \hline $(LS-HS)/(L-H)$ & 2.23\\ \hline \end{tabular} \end{table} There are, however, certain limitations on the detector performance. For example, the charge induced by the moving electrons ($q_{ind}^{(-)}$) and positive ions ($q_{ind}^{(+)}$) is given by the Shockley-Ramo theorem [3,20,21] as \begin{eqnarray} q_{ind}^{(-)}&=&-\frac{e}{W}\int_{0}^{d}\frac{dE(z)}{dz} [\phi_{w}(d)- \phi_{w}(z)]dz \\ q_{ind}^{(+)}&=&-\frac{e}{W}\int_{0}^{d}\frac{dE(z)}{dz} [\phi_{w}(z)- \phi_{w}(0)]dz \end{eqnarray} where $e$ is the electronic charge, $W$ is the mean energy for ionization, $\frac{dE(z)}{dz}$ is the stopping power of the incident particle in the gas, $\phi_{w}(z)$ is the weighting potential at any point $z$ along the particle track, and $d$ is the anode-cathode separation. If the particle stops inside the detector active length, the upper limit in the integrals should be replaced by the range of the particle.\\ The total induced charge is a result of the motion of all the ionization that occurs along the track between the cathode and anode.
As can be seen from equations (3) and (4), in the electron sensitive operation \cite{kn00} of the detector, the anode signal is not proportional to the energy deposited by the particle (due to the $z$ dependence of $\phi_{w}$). This problem can be reduced if a Frisch grid is placed between the cathode and anode. The weighting potential is then suppressed between the cathode and grid ($\phi_w=0$ for $0<z<b$) so that equation (3) reduces to \begin{eqnarray} \nonumber q_{ind}^{(-)}&=&-\frac{e}{W}\phi_{w}(d)\int_{0}^{b}\frac{dE(z)}{dz}\, dz \\ &-&\frac{e}{W}\int_{b}^{d}\frac{dE(z)}{dz} [\phi_{w}(d)-\phi_{w}(z)]dz \end{eqnarray} The proportionality to energy loss is thus ensured in the region between the cathode and grid. However, the source of non-uniformity (the second term of (5)) cannot be completely eliminated in an axial mode, but it can be reduced by keeping the separation between grid and anode small. In this work we have used a very simple electrode structure, keeping the holes in the guard plates empty. The weighting potential for the present electrode configuration ($b/d=0.5$) is calculated using the formalism of ref \cite{ja89} (subject to the boundary conditions of the present case) and depicted in fig.7. The weighting potential is plotted along the axis of the detector, where the distortion in the region between the cathode and the guard plate due to the hole in the plate is maximum. This effect reduces as one moves away from the axis. The effect of a guard plate without a hole, which entirely suppresses $\phi_w$ between the cathode and grid, is shown for comparison. In the practical case a transparent mesh has to be used. However, the use of an additional mesh besides the anode may add background to the detector spectrum due to unwanted scattering of the incident particles \cite{zu82}.
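The induced-charge expression (3) can also be explored numerically. The sketch below integrates it with a constant stopping power $S$ for two illustrative weighting potentials (a linear one and an artificially distorted quadratic one; neither is the actual $\phi_w$ of fig.7, and $e/W$ is set to unity) and shows that the signal then scales linearly with the deposited energy irrespective of the form of $\phi_w$:

```python
def induced_charge(S, phi_w, d=1.0, n=10000):
    """Trapezoidal-rule evaluation of eq. (3) with dE/dz = S constant
    and e/W = 1:  q = -S * integral_0^d [phi_w(d) - phi_w(z)] dz."""
    h = d / n
    zs = [i * h for i in range(n + 1)]
    f = [S * (phi_w(d) - phi_w(z)) for z in zs]
    return -h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])

linear = lambda z: z         # idealized parallel-plate weighting potential
distorted = lambda z: z * z  # an artificial, non-linear weighting potential

for phi in (linear, distorted):
    # doubling S (i.e. the deposited energy Sd) doubles the signal
    print(induced_charge(2.0, phi) / induced_charge(1.0, phi))  # 2.0 in both cases
```

Only ratios are meaningful here; the absolute normalization depends on $e/W$ and on the shape of $\phi_w$, which is fixed by the detector geometry.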
Thus in an axial chamber the Frisch grid is not as completely effective as in a transverse field chamber.\\ \begin{figure} \centering \includegraphics[width=4.5in]{fig-7} \caption{Plot of the weighting potential $\phi_w$ in the present electrode geometry (solid line) along the axis of the detector. The dotted line represents the weighting potential for the case of a guard plate with no hole.} \label{fig_fission} \end{figure} If, however, the Bragg curve of the particle is approximately constant over the detector length, equation (3) reduces to \begin{equation} q_{ind}^{(-)} \approx - \frac{e}{W}S\int_{0}^{d} [1- \phi_{w}(z)]dz \end{equation} where $S$ is the constant stopping power. The result of the integration will always be a function of $d$ (i.e., independent of $z$), irrespective of the form of $\phi_w(z)$. Thus the measured signal will be proportional to the energy lost ($Sd$) by the particle in the detector active length. In the present work the Bragg curves for a 6.12 MeV $\alpha$ particle and for the $^{252}$Cf fission fragments are almost constant over the cathode-anode gap (20 mm), and so the detector functions satisfactorily as a $\Delta E$ device. \section{Summary and conclusion} We have fabricated a gas ionization chamber working in the axial charge collection configuration using a parallel plate geometry. The different parameters, such as the gas type and pressure, anode voltage and anode structure, have been optimized. In particular, the effect of the anode structure has been studied. A mesh with a parallel array of round wires was found to give better energy resolution. Higher transparency is found to give better resolution, although a larger fluctuation of the mesh voltage is associated with it. Therefore, a parallel array of 12.5 $\mu$m round gold plated wires will be used in future to get better resolution. \section{Acknowledgement} The author (S.A.) would like to thank Ms. S. Bhattacharyya, Prof. S. Saha, Mr. Dulal Ghosal, Mr. S.
Chakraborty and SINP workshop for their help at different stages of this work.\\
\section{Introduction}\label{sec:intro} Theorem provers are software systems that can find or check proofs for conjectures given in some logic. Research in theorem proving systems started with Newell and Simon's ``Logic Theorist'' of 1955~\cite{NewSim:ltmcips56} -- one of the earliest systems in the then-emerging field of Artificial Intelligence -- and has led to a succession of systems since. Today, more than 60 years later, the CADE ATP system competition~\cite{CASC} attracts 15-20 systems annually. Automated reasoning systems have applications ranging from the verification of mathematical results, via program synthesis/verification and the Semantic Web, all the way to the discovery of unfair trading rules in dark pools of investment banks. Theorem provers are complex software systems that have pushed the envelope of artificial intelligence and programming, and as such they constitute important cultural artefacts that carry within them the beginnings of many aspects of computing we take for granted today. To name just one example: the programming language ML ((Proof) Meta-Language), which heavily influenced modern typed functional programming languages, was introduced by Robin Milner as the meta-language of the LCF theorem prover. Its type system was motivated by the idea that proofs could be programmed, if the type of proofs can only contain logically valid proofs. With the ongoing wave of retirements of the original principal investigators there is a good chance that these systems will be lost when their group servers are shut down. The following incident is unfortunately quite typical. When -- ten days after Herbert Simon's passing in February 2001 -- the author tried to find a copy of the source code of the Logic Theorist in Simon's scientific estate at CMU, all tapes and printouts had already been discarded -- only the written materials and notes were being catalogued in the CMU library.
Fortunately, report P-868 of the Rand Corporation~\cite{NewSim:ltmcips56}, where the program was conceived, contains the full printout of the code. Otherwise we would only be able to read about this seminal program, but not be able to study the artefact itself. In other cases, we may not have been so lucky; see~\cite{tpmuseum:tpbl:on} for a list of theorem provers believed lost. This is a great loss to the culture of our discipline, which is in danger of becoming marginalized by the hype waves rolling through AI and computing. Without the systems as preserved cultural artefacts, future historians will have difficulties studying the history of science and engineering. \section{A Museum of Theorem Prover Source Code and Artefacts} This article reports on an initiative started by the author in spring 2016 to help conserve the source code of theorem provers: the ``theorem prover museum'', a collection of GitHub repositories with the source code of systems, together with a web site that presents them and organizes the process of acquiring more. The term ``museum'' in the title may sound a bit ambitious, since the exhibition and didactic interpretation of the theorem provers are beyond the scope of the initiative (and perhaps the abilities of the founder). But the foremost function of any museum is the conservation of artefacts, which is what the ``theorem prover museum'' project intends to do. Once the source code is preserved, historians of science and engineering can start to do research on it and create multiple user interfaces to present it to the public. Note that it is not the purpose of the museum to keep the theorem proving systems running (in many cases the compilers and dependencies have moved on, making this very difficult), but only to archive the source code for academic study. This is a well-considered design decision, taken to lower the barrier to archiving systems here. Again, once the source code is preserved -- i.e.
made public by the original authors -- other enthusiasts can possibly revive it. Indeed this has already happened, triggered by the act of exposing the source in the museum. \section{Realizing the Museum} The actual ``theorem prover museum'' consists of a simple web site at \url{https://theoremprover-museum.github.io/} that features cards with short profiles for theorem provers (see Figure~\ref{fig:cards}), depending on their museum status. The front page of the museum is the index of museum systems, i.e. systems that are no longer actively maintained but for which a code repository exists. The repositories are collected in the GitHub organisation \texttt{theoremprover-museum} \url{https://github.com/theoremprover-museum}. An increasing number of systems already have repositories (git or other); for these we are working towards automatically maintaining a local mirror repository in the museum -- just to keep the systems safe. \begin{figure}[ht]\centering \includegraphics[width=\textwidth]{cards.png} \caption{Three Theorem Prover Cards in the Museum}\label{fig:cards} \end{figure} Additionally, the museum contains various administrative pages that collect systems, e.g. a list of ``most wanted systems'', a list of ``theorem provers believed lost''~\cite{tpmuseum:tpbl:on}, and a list of ``active systems''. Once in a while, a request for the source code of a system that has fallen below the radar of the community is met with an exasperated reply like ``but Ontic lives!!!'' (David McAllester in 2016). All of these pages are statically generated from a central data file \texttt{provers.yml}~\cite{tpmuseum:data:on}, which keeps nested key/value data in YAML. This file can be extended by a simple pull request and has proven to be a low-maintenance solution. Since the initiative was started, the museum has gained the source code of 38 systems, which form a cross-section of the discipline.
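To make the data-driven setup concrete, here is a minimal Python mock-up of the static card generation from \texttt{provers.yml}. The schema (name/status/language/repo fields) and the entries are illustrative guesses, not the museum's actual data layout:

```python
# Sketch of static card generation from a provers.yml-like data file.
# The field names and entries below are hypothetical examples, not the
# actual museum schema; a YAML loader would normally produce this list.
provers = [
    {"name": "ExampleProver", "status": "museum", "language": "Common Lisp",
     "repo": "https://github.com/theoremprover-museum/example"},
    {"name": "StillActive", "status": "active", "language": "OCaml",
     "repo": None},
]

CARD = '<div class="card"><h3>{name}</h3><p>{language}</p><p>{link}</p></div>'


def render_index(entries, status="museum"):
    """Render profile cards for all systems with the given museum status."""
    cards = []
    for entry in entries:
        if entry["status"] != status:
            continue
        link = entry["repo"] or "source not yet archived"
        cards.append(CARD.format(name=entry["name"],
                                 language=entry["language"], link=link))
    return "\n".join(cards)


print(render_index(provers))
```

Extending the site then amounts to adding one dictionary-like YAML entry per system, which is what makes the pull-request workflow low-maintenance.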
The systems span a period of 50 years, and the code ranges from machine language to high-level languages like OCaml. Even though the museum has some of the iconic systems of the field -- along with some of the more obscure ones -- it does not, unfortunately, constitute a fully representative sample yet. More contributions and hunting down of system sources are still needed for that. The concept of the theorem prover museum is compatible with the Software Heritage Initiative~\cite{SoftwareHeritage:on}, and its particular GitHub-based implementation contributes to it automatically, since the SHI indexes GitHub repositories and the museum adds content that was unreachable to the SHI before. The \textsf{swMath} information system for mathematical software~\cite{swMath:on} lists the museum as one of its special categories~\cite{swMath:tpmuseum:on}. This links systems to their traces in the mathematical literature -- unfortunately, much of the theorem proving literature is in Computer Science conferences, which are only partially tracked in the underlying \textsf{zbMATH} abstracting service~\cite{zbMATH:on}, and CS does not have a comparable system. Even so, the \textsf{swMath} pages provide valuable additional information for the museum systems. \section{Related Initiatives and Resources} We list other public resources that may give further information: \begin{itemize} \item there is a small literature on the history of automated reasoning; it includes~\cite{Bibel:ehpad07} on the early history up to 1970 and~\cite{RobVor:hoar01} for the next 30 years. \item the Encyclopedia of Proof Systems~\cite{Wolzenlogel-Paleo:teps17} collects the proof systems that are mechanized by theorem provers. \item the Wikipedia pages on automated theorem provers and proof assistants keep lists of systems. \item the program verification and synthesis community keeps a systems list~\cite{vsstp:on} that also contains a section on theorem provers.
\end{itemize} \section{Conclusion \& Call for Contributions}\label{sec:concl} We have presented an initiative for conserving the sources of historic theorem prover systems, i.e. systems that are no longer actively developed and in danger of loss. The theorem prover museum is now fully functional as a system and has attracted various entries. Even though it has been well received, it needs contributions from the community: curators who chase down sources, talk to retired researchers who might know about the whereabouts of source code, and even go to the basement and lug up dusty magnetic tapes. In short, the Indiana Jones types of Automated Reasoning -- without the ``stealing from indigenous cultures'' part. But most importantly, we need the individual researchers who, when they realize that they have moved on from a project, routinely submit its sources to the theorem prover museum, just as we submit a paper to a journal. The theorem prover museum gives them a place to do this and thus to contribute to the immaterial legacy of our research field. \paragraph{Acknowledgements} I am grateful to many colleagues from the automated reasoning community; amongst all contributors I would like to single out William Farmer, who submitted the first prover, IMPS, to the museum; Tom Wiesing, who helped me with the web page; J\"org Siekmann and Wolfgang Bibel, who were supportive of the idea from its inception; Mike Gordon and Konrad Slind, who chased down early versions of the HOL provers; and finally Randy Pollack, who after enduring more than a dozen reminders finally dug up the LEGO source code and contributed it. \printbibliography \end{document}
\section{Introduction} Skyrmions are extended field configurations that behave as new particle degrees of freedom. Initially, they were proposed as a description of baryons within an Effective Field Theory (EFT) description of strong interactions containing only the pion fields~\cite{Skyrme:1961vq, Witten:1979kh, Adkins:1983ya}. Since the pions can be viewed as pseudo-Goldstone bosons arising from the breaking of an $SU(2)$ symmetry, the original setting can be directly applied to the electroweak sector, in the limit in which the Higgs field is infinitely massive and the gauge bosons are decoupled, so that only the $SU(2)$ would-be Goldstone bosons are present~\cite{Ellis:2012cs}. Electroweak skyrmions have been shown to survive under certain conditions in more realistic settings in which these limits are partially removed, even though their removal can destroy the topological protection that skyrmions enjoyed in the first place~\cite{Ambjorn:1984bb}. The purpose of ref.~\cite{Criado:2020zwu} and of this work is to consider skyrmions in the full electroweak theory, including the effects of both the gauge fields and a dynamical Higgs boson. As in the original Skyrme setting, the existence of skyrmions in the Standard Model (SM) Lagrangian is forbidden by Derrick's theorem~\cite{Derrick:1964ww}, but they can be stabilized by including higher-order effective operators. Since the discovery of the Higgs~\cite{ATLAS:2012yve, CMS:2012qbp}, two effective descriptions of the electroweak sector have emerged: the Standard Model EFT (SMEFT), in which the scalars furnish a linear representation of the electroweak symmetry group; and the Higgs Effective Field Theory (HEFT), in which the realization of this symmetry is non-linear. The SMEFT version is studied in ref.~\cite{Criado:2020zwu}. In this paper, we focus on the HEFT framework, which we find to be better suited for the description of skyrmions because of the non-trivial topology of its scalar sector.
In section~\ref{sec:theory}, we briefly introduce the relevant sector of the HEFT, discuss the differences with the SMEFT and with the approximations that have previously been taken, and introduce the topological numbers that characterize its field configurations. In section~\ref{sec:results}, we study the existence of skyrmions numerically in the presence of different combinations of HEFT operators. In section~\ref{sec:pheno}, we consider the phenomenological consequences of skyrmions and of the operators that generate them. This allows us to obtain constraints on the parameter space, in which we include positivity bounds. We summarize our conclusions in section~\ref{sec:conclusions}. \section{Theory} \label{sec:theory} The relevant degrees of freedom for skyrmions in the electroweak sector are the $SU(2)$ gauge bosons $W^a_\mu$, the would-be Goldstone bosons $G^a$, and the Higgs boson $h$. We neglect the effects of the $U(1)_Y$ gauge sector. The Higgs is invariant under $SU(2)$ gauge transformations, while the Goldstones are collected in a non-linear representation \begin{equation} U = \exp \frac{i \sigma^a G^a}{\sqrt{2} v} \in SU(2), \end{equation} with no relation to the Higgs singlet field $h$.\footnote{This is to be contrasted with the more restrictive linear realization where $h(x)$ and $U(x)$ are assembled into the Higgs doublet $ \phi = \frac{1}{\sqrt{2}}\left(v +h\right) U \begin{pmatrix} 0 \\ 1 \end{pmatrix}$.} We write the effective Lagrangian as \begin{equation} \mathcal{L} = \sum_i F_i(h/v) \mathcal{Q}_i, \qquad \qquad F_i(\eta) = \sum_{n=0}^\infty c_{i,n} \eta^n, \end{equation} where the $\mathcal{Q}_i$ are monomials in $W_{\mu\nu}$, $U$, $h$ and their covariant derivatives, with $h$ appearing only through its derivatives; the HEFT cut-off scale $\Lambda$ enters through the power counting of the coefficients $c_{i,n}$ below.
That is, schematically \begin{equation} \mathcal{Q}_i \sim h^{\mathrm{H}_i} (W_{\mu\nu})^{\mathrm{W}_i} U^{\mathrm{U}_i} D^{\mathrm{D}_i}, \end{equation} where $\mathrm{H}_i$, $\mathrm{W}_i$, $\mathrm{U}_i$ and $\mathrm{D}_i$ are, respectively, the number of Higgs fields, field-strength tensors, $U$ insertions, and covariant derivatives contained in $\mathcal{Q}_i$. In this setting, $\mathrm{D}_i$ corresponds to the general chiral dimension~\cite{Buchalla:2018yce}. We adopt a power counting based on the chiral dimension, in which each $c_{i,n}$ coefficient is of order $\Lambda^{2 - \mathrm{D}_i}$, multiplied by the power of $v$ necessary for the coefficient to have the correct energy dimensions. Thus, terms with higher chiral dimension are suppressed by higher powers of $v/\Lambda$. We keep terms with chiral dimension up to 4, and impose custodial symmetry, which is needed for configurations in the spherical ansatz to give spherically symmetric contributions to the energy, as described in section~\ref{sec:spherical-ansatz}. A list of all relevant operators $\mathcal{Q}_i$ is given in table~\ref{tab:operators}, partially following the notation of ref.~\cite{Buchalla:2013rka}, where angle brackets $\trace{\cdot}$ denote a trace and $L_\mu = i U D_\mu U^\dagger$.
\begin{table} \centering \begin{tabular}{ccc} \toprule Name & Operator & Radial energy density $\rho_i$ in spherical ansatz \\ \midrule $\mathcal{Q}_1$ & 1 & $-\frac{r^2}{e^2 v^4}$ \\ $\mathcal{Q}_h$ & $\partial_\mu h \partial^\mu h$ & $\frac{r^2}{2} (\eta')^2$ \\ $\mathcal{Q}_U$ & $\trace{D_\mu U^\dagger D^\mu U}$ & $\frac{2}{v^2} \left(f_1^2 + f_2^2 + \frac{r^2}{2} b^2\right)$ \\ $\mathcal{Q}_{X h 2}$ & $\trace{W_{\mu\nu} W^{\mu\nu}}$ & \small $-8 e^2 \Big[ (f_1' - 2 b f_2)^2 + (f_2' + (2 f_1 - 1) b)^2 + \frac{2}{r^2} (f_1^2 + f_2^2 - f_1)^2 \Big]$ \\ $\mathcal{Q}_{X h 5}$ & $\epsilon^{\mu\nu\rho\sigma} \trace{W_{\mu\nu} W_{\rho\sigma}}$ & $0$ \\ $\mathcal{Q}_{X U 8}$ & $i\trace{W_{\mu\nu} [L^\mu, L^\nu]}$ & \small $\frac{16 e^2}{2 r^2} \Big[ (f_1^2 + f_2^2)(f_1^2 + f_2^2 - f_1 + 2 r^2 b^2) - b r^2 (f_2 f_1' - f_1 f_2' + b f_1) \Big]$ \\ $\mathcal{Q}_{X U 11}$ & $i\epsilon^{\mu\nu\rho\sigma} \trace{W_{\mu\nu} [L_\rho, L_\sigma]}$ & $0$ \\ $\mathcal{Q}_{D1}$ & $\trace{L_\mu L^\mu}^2$ & $-\frac{4 e^2}{r^2} \left[2 (f_1^2 + f_2^2) + r^2 b^2\right]^2$ \\ $\mathcal{Q}_{D2}$ & $\trace{L_\mu L_\nu} \trace{L^\mu L^\nu}$ & $-\frac{4 e^2}{r^2} \left[2 (f_1^2 + f_2^2)^2 + r^4 b^4\right]$ \\ $\mathcal{Q}_{D7}$ & $\trace{L_\mu L^\mu} \partial_\nu h \partial^\nu h$ & $-e^2 v^2 (\eta')^2 \left[2 (f_1^2 + f_2^2) + r^2 b^2\right]$ \\ $\mathcal{Q}_{D8}$ & $\trace{L_\mu L_\nu} \partial^\mu h \partial^\nu h$ & $- e^2 v^2 (\eta')^2 r^2 b^2$ \\ $\mathcal{Q}_{D11}$ & $(\partial_\mu h \partial^\mu h)^2$ & $-\frac{e^2 v^4}{4} (\eta')^4 r^2$ \\ \bottomrule \end{tabular} \caption{Custodial-invariant operators $\mathcal{Q}_i$ containing the Higgs only through derivatives, of order up to $\Lambda^0$, together with their contribution to the radial energy density $\rho_i$ in the spherical ansatz, defined in eq.~\eqref{eq:radial-density}. 
\label{tab:operators}} \end{table} The relevant sector of the SM Lagrangian is given by the chiral dimension 2 operators, with \begin{gather} F_1(h/v) = V(h) = \lambda v^4 \left((h/v)^2 + (h/v)^3 + \frac{1}{4}(h/v)^4\right) = \lambda \left(v^2 h^2 + v h^3 + \frac{h^4}{4}\right), \\ F_h(h/v) = \frac{1}{2}, \qquad F_{Xh2}(h/v) = -\frac{1}{2 g^2}, \qquad F_U(h/v) = \frac{v^2}{4} \left(1 + \frac{h}{v}\right)^2. \end{gather} Deviations from the SM are encoded in modifications of any of the $F_i(h)$. Derrick's theorem forbids the existence of solitons in the SM. A necessary condition for them to exist is that higher-derivative terms are present. The original term proposed by Skyrme~\cite{Skyrme:1961vq} to stabilize skyrmions can be written in the HEFT Lagrangian as \begin{equation} \mathcal{L}_{\mathrm{Sk}} = -\, \frac{1}{16 e^2} (\mathcal{Q}_{D1} - \mathcal{Q}_{D2}), \label{eq:skyrme-term} \end{equation} that is, setting $F_{D1}(h/v) = -F_{D2}(h/v) = -\,1 / (16 e^2)$. In the chiral dimension power-counting, the size of the coefficient is given by $e \sim \Lambda / (4 v)$. The theory $\mathcal{L}_{\text{SM}} + \mathcal{L}_{\text{Sk}}$ is thus a candidate for the stabilization of skyrmions. Two limits of it have been previously studied in the literature: \begin{enumerate} \item[\textbf{A.}] Frozen Higgs. This corresponds to $m_h \to \infty$, which implies that the Higgs is set to its vev, $h = 0$, everywhere. \item[\textbf{B.}] No gauge fields. This is obtained when the $SU(2)$ gauge coupling vanishes, $g \to 0$. In this limit, the coefficient of the $\trace{W_{\mu\nu} W^{\mu\nu}}$ term becomes large, and the gauge fields are forced to approach a pure gauge configuration in order to minimize the energy. One can then gauge them away. The only degrees of freedom left are the Goldstone bosons and the Higgs. \end{enumerate} Taking both limits leads to a theory with only the Goldstone bosons as dynamical degrees of freedom, which has been studied in, e.g. ref.~\cite{Adkins:1983ya}.
Limit \textbf{A} has been considered in ref.~\cite{Ambjorn:1984bb}, while limit \textbf{B} has been considered in ref.~\cite{Kitano:2016ooc}. In any of these limits, and in the full theory, the Skyrme term can be generalized by allowing other linear combinations of the $\mathcal{Q}_{D1}$ and $\mathcal{Q}_{D2}$ operators. This has been done in the case where both limits are taken, in ref.~\cite{Ellis:2012cs}, and in limit \textbf{B}, in ref.~\cite{Kitano:2017zqw}. \medskip In ref.~\cite{Criado:2020zwu} skyrmions were studied in the full theory, without assuming either of the two limits above. This was done within the SMEFT framework, in which the electroweak symmetry is realized linearly. The purpose of the present paper is to continue this program in the non-linear realization. Ultimately, the existence of skyrmions turns out to be much harder to establish in the SMEFT than in the HEFT, as discussed below. Ref.~\cite{Hamada:2021oqm}, which appeared during the preparation of this work, has a similar scope. \medskip In limit \textbf{B}, the theory contains stable field configurations separated from the vacuum by an infinite energy barrier. This fact can be understood from a topological point of view. To have \emph{finite} energy, the scalar fields must satisfy the following boundary conditions: \begin{equation} \lim_{|\mathbf{x}| \to \infty} h(x) = 0, \qquad \lim_{|\mathbf{x}| \to \infty} U(x) = 1_{2 \times 2}, \end{equation} which means that all directions towards infinity can be identified with a single point, effectively compactifying space into $S^3$. Thus, the fields can be viewed as an $S^3 \to \mathbb{R} \times S^3$ mapping. We can then define a topological charge, the winding number for the $U : S^3 \to S^3$ part of the mapping: \begin{equation} n_U = \frac{1}{24 \pi^2} \epsilon_{ijk} \int d^3x \trace{L_i L_j L_k}. \end{equation} This is a homotopy invariant of $U$, and therefore it can never change under smooth time evolution.
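As a concrete illustration of $n_U$: for a hedgehog configuration $U = \exp(i F(r)\, \hat{x} \cdot \vec\sigma)$ with $F(0) = \pi$ and $F(\infty) = 0$, the volume integral collapses to a radial one, $n_U = -(2/\pi) \int_0^\infty F' \sin^2 F \, dr$, where the overall sign depends on the convention for $L_\mu$ and is fixed here so that this profile carries $n_U = +1$. A short numerical sketch with the illustrative profile $F(r) = \pi e^{-r}$ (not a solution of any field equation):

```python
import numpy as np


def trapezoid(y, x):
    """Simple trapezoidal rule (avoids version-dependent numpy helpers)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)


# Hedgehog profile F(r) with F(0) = pi and F(infinity) = 0; the
# exponential shape is an illustrative choice.
r = np.linspace(0.0, 40.0, 20001)
F = np.pi * np.exp(-r)
dF = -np.pi * np.exp(-r)  # analytic derivative of the profile

# Radial reduction of the winding-number integral for the hedgehog
# ansatz, with the sign convention chosen so this profile gives n_U = +1.
n_U = -(2.0 / np.pi) * trapezoid(dF * np.sin(F) ** 2, r)

print(f"n_U = {n_U:.4f}")
```

The result depends only on the boundary values of $F$ (the substitution $u = F(r)$ turns the integral into $(2/\pi)\int_0^\pi \sin^2 u \, du = 1$), which is exactly the homotopy-invariance property stated above.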
However, this number is only well defined when the target space of the scalar sector has the topology $\mathbb{R} \times S^3$. This is true generically both in the full HEFT and in limit \textbf{B}, but it ceases to be so in the particular case of the SMEFT, in which the Lagrangian becomes independent of $U(x)$ when $h(x) = -v$ (see footnote 1 above). One can then identify all points with this value of $h$, turning the scalar manifold into $\mathbb{R}^4 \cong \mathbb{C}^2$. The scalar degrees of freedom are thus collected into an $SU(2)$ doublet $\phi$. In general, the topology of the static configurations of $\phi$ cannot be characterized in terms of the number $n_U$, since $U$ is only defined through $ \phi = \frac{1}{\sqrt{2}}\left(v +h\right) U\cdot (0\,1)^T$ when $\phi \neq 0$ everywhere.\footnote{Even if $\phi = 0$ only at an isolated point $p$, $U$ becomes a mapping $S^3 - \{p\} \cong \mathbb{R}^3 \to S^3$, and all such mappings are homotopically equivalent.} One can recover a well-defined $n_U$ in the SMEFT by taking the frozen Higgs limit \textbf{A}, which disallows $h(x) = -v$ and forces the scalars to be in the submanifold $S^3$. As noted earlier, we will not follow this route in this paper; we will instead use the HEFT formulation of the theory, where $h$ and $U$ are independent, without imposing limits \textbf{A} and \textbf{B}. \medskip The inclusion of gauge fields destroys the topological protection of $n_U \neq 0$ configurations from decaying into the vacuum. However, the $\trace{W_{ij} W^{ij}}$ term in the energy induces a finite-energy barrier between configurations in which $W_\mu$ is a pure gauge, $W_i = \mathcal{U} \partial_i \mathcal{U}^\dagger$, $W_0=0$, possibly making them metastable. In order to describe this, we use the Chern-Simons number \begin{equation} n_{\text{CS}} = \frac{1}{16 \pi^2} \epsilon_{ijk} \int d^3x \trace{ W_i W_{jk} + \frac{2i}{3} W_i W_j W_k }.
\end{equation} For a pure-gauge $W_i = \mathcal{U} \partial_i \mathcal{U}^\dagger$, $n_{\text{CS}}$ is the integer winding number of the gauge transformation $\mathcal{U} (\mathbf{x}): S^3 \to S^3$. A skyrmion is a field configuration for which $n_U$ and $n_{\text{CS}}$ differ by (approximately\footnote{Due to metastability.}) one unit. We thus define the skyrmion number as \begin{equation} n_{\text{Sk}} = n_U - n_{\text{CS}}. \end{equation} While $n_U$ and $n_{\text{CS}}$ are not gauge invariant, $n_{\text{Sk}}$ is, because $n_U$ and $n_{\text{CS}}$ change by the same integer under a large gauge transformation. An anti-skyrmion is similarly a configuration where $n_{\text{Sk}} \simeq -1$, and multi-skyrmions have $|n_{\text{Sk}}| > 1$. A CP transformation changes the sign of the skyrmion number. \section{Skyrmion configurations and energy landscape} \label{sec:results} \subsection{The energy functional in the spherical ansatz} \label{sec:spherical-ansatz} We parametrize the space of static configurations of the fields $W^a_\mu$, $U$ and $h$ in the $W_0=0$ gauge by means of 4 real functions of one variable: $f_1$, $f_2$, $b$ and $\eta$. We do so by further imposing the unitary gauge $U (\mathbf{x})= 1_{2 \times 2}$ and the spherical ansatz: \begin{equation} W_i(\mathbf{x}) = v e \tau_a \left( \epsilon_{ija} n_j \frac{f_1(r)}{r} + (\delta_{ia} - n_i n_a) \frac{f_2(r)}{r} + n_i n_a b(r) \right), \qquad h(\mathbf{x}) = \frac{v}{\sqrt{2}} \eta(r), \end{equation} where $\tau_a$ are the Pauli matrices, $n_i = x_i / |\mathbf{x}|$, $r = v e |\mathbf{x}|$, and $e$ is a parameter we will adjust as a function of Wilson coefficients. 
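The ansatz ties custodial $SU(2)$ rotations to spatial rotations: for $M \in SU(2)$ with corresponding spatial rotation $R$, one has $M W_i(\mathbf{x}) M^\dagger = R_{ij} W_j(R^{-1}\mathbf{x})$. A numerical spot-check of this covariance, with $ve = 1$ and illustrative profile functions (not solutions of the field equations):

```python
import numpy as np

# Pauli matrices tau_a, stacked as a (3, 2, 2) complex array.
tau = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)

# Levi-Civita tensor eps[i, j, k].
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# Illustrative smooth profiles; with v*e = 1 the radial variable is |x|.
f1 = lambda r: r**2 * np.exp(-r)
f2 = lambda r: 0.5 * r**2 * np.exp(-r)
b = lambda r: 0.3 * r * np.exp(-r)


def W(x):
    """Spherical-ansatz gauge field W_i(x) as three 2x2 matrices."""
    r = np.linalg.norm(x)
    n = x / r
    T = (np.einsum('ija,j->ia', eps, n) * f1(r) / r
         + (np.eye(3) - np.outer(n, n)) * f2(r) / r
         + np.outer(n, n) * b(r))
    return np.einsum('ia,abc->ibc', T, tau)


def covariance_error(alpha, x):
    """max |M W_i(x) M^dag - R_ij W_j(R^-1 x)| for a z-axis rotation.

    M = exp(-i alpha tau_3 / 2); the matching spatial rotation (in the
    convention of this ansatz) is by -alpha about the z axis.
    """
    M = np.diag([np.exp(-1j * alpha / 2), np.exp(1j * alpha / 2)])
    c, s = np.cos(alpha), np.sin(alpha)
    R = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
    lhs = np.einsum('ab,ibc,cd->iad', M, W(x), M.conj().T)
    rhs = np.einsum('ij,jbc->ibc', R, W(R.T @ x))
    return np.max(np.abs(lhs - rhs))


print(covariance_error(0.7, np.array([0.3, -0.5, 0.8])))
```

The residual should sit at machine precision for any angle and any point, which is what makes invariance under spatial rotations and custodial symmetry interchangeable for this ansatz.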
The energy density in this ansatz is spherically symmetric when all interactions are invariant under custodial symmetry.\footnote{Indeed, if one takes any $M \in SU(2)$ and $R$ its representation as a spatial rotation, one has $M W_i(\mathbf{x}) M^\dagger = R_{ij} \, W_j(R^{-1} \mathbf{x})$, so invariance under spatial rotations and under custodial symmetry become equivalent. } We can then write the energy as \begin{equation} E = \frac{4 \pi v}{e} \int_0^\infty dr \sum_i F_i(\eta) \rho_i, \label{eq:radial-density} \end{equation} where the contributions $\rho_i$ to the radial energy density of each $\mathcal{Q}_i$ operator are given in table~\ref{tab:operators}. Requiring that the energy is finite and that the fields are regular at the origin gives rise to the following boundary conditions: \begin{gather} f_1(0) = f_1'(0) = f_2(0) = f_2'(0) - b(0) = \eta'(0) = 0, \\ f_1(\infty) = f_2(\infty) = b(\infty) = \eta(\infty) = 0. \end{gather} Since we have fixed the unitary gauge, the skyrmion number is just $n_{\text{Sk}} = - n_{\text{CS}}$. For convenience, we define \begin{equation} n_W = \frac{i}{24 \pi^2} \epsilon_{ijk} \int d^3x \trace{W_i W_j W_k} = \frac{2}{\pi} \int_0^\infty dr \; b (f_1^2 + f_2^2), \end{equation} which agrees with $n_{\text{CS}}$ at integer values. Thus, skyrmions and anti-skyrmions will be found at $n_W \simeq -1$ and $n_W \simeq 1$, respectively. CP symmetry, which takes one into the other, is given here by $f_1 \to f_1$, $f_2 \to - f_2$, $b \to -b$, $\eta \to \eta$. All the operators we consider are invariant under this transformation. This is because the two operators that violate CP vanish for static field configurations. Thus, the static-configuration energy functional is invariant under $n_W \to - n_W$.
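The radial formula for $n_W$ and the CP transformation just quoted can be exercised numerically; the trial profiles below are illustrative, chosen only to satisfy the boundary conditions ($f_1, f_2 \sim r^2$ and $b \sim r$ near the origin, all decaying at infinity):

```python
import numpy as np


def trapezoid(y, x):
    """Simple trapezoidal rule (avoids version-dependent numpy helpers)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)


def n_W(f1, f2, b, r):
    """Radial form n_W = (2/pi) * integral of b (f1^2 + f2^2) dr."""
    return (2.0 / np.pi) * trapezoid(b * (f1 ** 2 + f2 ** 2), r)


# Illustrative trial profiles compatible with the boundary conditions.
r = np.linspace(0.0, 40.0, 4001)
f1 = r ** 2 * np.exp(-r)
f2 = 0.5 * r ** 2 * np.exp(-r)
b = 0.3 * r * np.exp(-r)

nw = n_W(f1, f2, b, r)
nw_cp = n_W(f1, -f2, -b, r)  # CP: f1 -> f1, f2 -> -f2, b -> -b

print(nw, nw_cp)
```

Since the integrand $b(f_1^2 + f_2^2)$ is odd under the CP transformation, $n_W$ flips sign exactly for any profiles, consistent with the $n_W \to -n_W$ symmetry of the static energy functional.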
\subsection{The Skyrme term} We focus first on the case in which \begin{equation} -c_{D1,0} = c_{D2,0} \equiv \frac{1}{16 e^2}, \label{eq:skyrme-coeffs} \end{equation} with the rest of the non-SM coefficients in the HEFT Lagrangian set to zero. The last equality is to be understood as fixing the free parameter $e$ of the ansatz. This corresponds to the original Skyrme term, given in eq.~\eqref{eq:skyrme-term}. The total energy functional is given by \begin{equation} E = \frac{4 \pi v}{e} \int_0^\infty dr \left[ \rho_{\text{SM}} + (f_1^2 + f_2^2) \left(b^2 + \frac{f_1^2 + f_2^2}{2 r^2}\right) \right], \end{equation} where $\rho_{\text{SM}}$ is the contribution from the SM. We shall now describe the field configurations and energy landscape that arise in this setting. We study them using the method described in appendix~\ref{sec:numerical-method}. We display two example configurations for $e = 1.8$ and different values of $n_W$ in figure~\ref{fig:configs}. In figure~\ref{fig:E-vs-nW}, we show the minimal energy as a function of $n_W$, for different values of $e$. For $e > e_{\text{crit}} \simeq 0.9$, we find a finite-energy barrier separating the skyrmion, with $n_W \simeq 1$, and the vacuum at $n_W = 0$. This barrier disappears below $e_{\text{crit}}$. Thus, the skyrmion solution exists only when $e > e_{\text{crit}}$ and is a metastable configuration. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{figures/config_040} \includegraphics[width=0.49\textwidth]{figures/config_080} \caption{Minimal energy configurations for $e = 1.2$, $-c_{D1,0} = c_{D2,0} = 1/(16 e^2)$, and $n_W = 0.4$ (left) or $n_W = 0.8$ (right).} \label{fig:configs} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/E_vs_nW} \caption{Minimal energy as a function of $n_W$, for $-c_{D1,0} = c_{D2, 0} = 1/(16 e^2)$ and different values of $e$.
The finite-energy barrier disappears around $e = 0.9$.} \label{fig:E-vs-nW} \end{figure} The energy $E$ of the local minimum is the skyrmion mass. We find that the normalized energy $e M_{\text{Sk}} / (4 \pi v)$ is approximately constant, with a value of $3.3$ at $e = e_{\text{crit}}$, and a limiting value of $3$ as $e \to \infty$, so the skyrmion mass is given by \begin{equation} M_{\text{Sk}}|_{e \simeq e_{\text{crit}}} \simeq \frac{41 v}{e}, \qquad \qquad M_{\text{Sk}}|_{e \to \infty} \simeq \frac{38 v}{e}. \end{equation} The maximum of $M_{\text{Sk}}$ is reached at $e = e_{\text{crit}}$: \begin{equation} M_{\text{Sk}} \leq M_{\text{Sk}}|_{e = e_{\text{crit}}} \simeq \SI{11}{TeV}. \end{equation} In figure~\ref{fig:M-vs-e}, we show this behaviour and compare it to the case in which no gauge fields are present, labelled limit \textbf{B} in section~\ref{sec:theory}. The curves are similar for large $e$. This is to be expected, since a large value of $e$ makes the $\trace{W_{\mu\nu} W^{\mu\nu}}$ term dominant, with similar effects as taking $g \to 0$, which is limit \textbf{B}. However, some differences arise at small $e$. Just above $e_{\text{crit}}$, the mass of the skyrmion in the full theory is slightly lower than in limit \textbf{B}. This is because the $n_W = 1$ configuration is no longer topologically fixed in the full theory, and so $n_W$ can move to another value with lower energy. For $e < e_{\text{crit}}$, skyrmions become unstable in the full theory, but nothing changes in limit \textbf{B}, as they are still topologically protected.
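The quoted numbers can be cross-checked with a few lines of arithmetic, using $v \approx \SI{246}{GeV}$:

```python
from math import pi

v = 0.246     # electroweak vev in TeV
e_crit = 0.9  # critical coupling found numerically in the text

# e * M_Sk / (4 pi v) is ~3.3 at e_crit and ~3 as e -> infinity:
print(3.3 * 4 * pi)  # ~41.5, i.e. M_Sk ~ 41 v / e near e_crit
print(3.0 * 4 * pi)  # ~37.7, i.e. M_Sk ~ 38 v / e as e -> infinity

# The maximal skyrmion mass is reached at e = e_crit:
M_max = 3.3 * 4 * pi * v / e_crit
print(f"M_Sk upper bound ~ {M_max:.1f} TeV")  # ~11 TeV
```

This reproduces both coefficients in the mass formula and the quoted $\SI{11}{TeV}$ upper bound.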
\begin{figure} \centering \includegraphics[width=0.6\textwidth]{figures/E_vs_e} \caption{Skyrmion mass $M_{\text{Sk}}$ as a function of $e$ for the full theory and for limit \textbf{B}.} \label{fig:M-vs-e} \end{figure} For the height of the barrier, the energy of the local maximum near $n_W = 1/2$, we find that \begin{equation} E_{\text{barrier}}|_{e = e_{\text{crit}}} \simeq \SI{11}{TeV}, \qquad \text{and } E_{\text{barrier}}|_{e \to \infty} \simeq \SI{10}{TeV}. \end{equation} We also define the radius of the skyrmion $R_{\text{Sk}}$ by averaging over the $n_W$ density as \begin{equation} R^2_{\text{Sk}} = \frac{i}{24 \pi^2} \epsilon_{ijk} \int d^3x \; |\mathbf{x}|^2 \, \trace{W_i W_j W_k} = \frac{2}{\pi (v e)^2} \int dr \; r^2 \, b \, (f_1^2 + f_2^2). \end{equation} We find that \begin{equation} R_{\text{Sk}}|_{e = e_{\text{crit}}} \simeq \frac{1.4}{v e}, \qquad \qquad R_{\text{Sk}}|_{e \to \infty} \simeq \frac{1.9}{v e}. \end{equation} \medskip \subsection{Skyrmion stabilisation from other operators in HEFT} We consider here the possibility that skyrmions are stabilized by some operator from table~\ref{tab:operators} other than $\mathcal{Q}_{D1} - \mathcal{Q}_{D2}$. Some of these operators can be discarded for this purpose from general considerations: $\mathcal{Q}_1$, $\mathcal{Q}_h$ and $\mathcal{Q}_U$, by Derrick's theorem; and all operators containing a field-strength tensor can also be neglected, since they vanish when the gauge fields are set to a pure gauge configuration. There are five remaining operators that can contribute: the $\mathcal{Q}_{Di}$ in table~\ref{tab:operators}. We now consider turning on one $c_{Di,n}$ coefficient at a time while fixing the others to zero. We find that none of them is capable of stabilizing skyrmions except for $c_{D1,0}$ and $c_{D2,0}$. Indeed, for all the others, the radial energy density $\rho_i$ is multiplied by some monomial in $\eta$ or $\eta'$.
One can then take $\eta = 0$ everywhere, which implies $F_i(\eta) \rho_i = 0$, and then skyrmions become unstable by Derrick's theorem. We have checked this numerically in several examples. It remains to study the skyrmions generated by $c_{D1,0}$ and $c_{D2,0}$. It turns out that both individually, as well as some of their linear combinations, generate meta-stable skyrmions. We parametrize the space of linear combinations with two parameters $e$ and $\theta$, with the former to be used as the corresponding parameter in the spherical ansatz: \begin{equation} c_{D1,0} = \frac{\sqrt{2}}{16 e^2} \cos \theta, \qquad \qquad c_{D2,0} = \frac{\sqrt{2}}{16 e^2} \sin \theta. \end{equation} The Skyrme term is recovered for $\theta = 3 \pi / 4$. In terms of these parameters, the non-SM contribution to the radial energy reads \begin{equation} c_{D1,0} \rho_{D1} + c_{D2, 0} \rho_{D2} = - \frac{\cos \theta}{4 r^2} \left[ 2 (f_1^2 + f_2^2) r^2 b^2 + 2 (2 + \tan\theta) (f_1^2 + f_2^2)^2 + (1 + \tan\theta) r^4 b^4 \right] \end{equation} This is positive everywhere if and only if $\cos \theta \leq 0$ and $\tan\theta \geq -1$, or, equivalently $3 \pi / 4 \leq \theta \leq 3 \pi / 2$. Numerically, we find that skyrmions are stabilized in a slightly wider range:\footnote{The region determined by these values agrees with the one obtained in ref.~\cite{Ellis:2012cs} for the case in which both limit \textbf{A} and \textbf{B} are taken.} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/e_crit_vs_theta} \includegraphics[width=0.45\textwidth]{figures/E_vs_theta} \caption{Left: $e_{\text{crit}}$ as a function of $\theta$. 
Right: skyrmion mass $M_{\text{Sk}}$ as a function of $\theta$, for $e = 3$.} \label{fig:eE-vs-theta} \end{figure} \begin{figure} \centering \includegraphics[width=0.55\textwidth]{figures/E_vs_nW_vs_theta} \caption{Minimal energy as a function of $n_W$, for $e = 1.8$ and $\theta = 3 \pi / 4, \pi, 3 \pi / 2$.} \label{fig:E-vs-nW-vs-theta} \end{figure} \begin{equation} 0.71 \pi \simeq \theta_{\text{min}} \leq \theta \leq \theta_{\text{max}} \simeq 1.6 \pi, \end{equation} for $e > e_{\text{crit}}(\theta)$, where $e_{\text{crit}}(\theta)$ is a $\theta$-dependent critical value of $e$, which we show in the left panel of figure~\ref{fig:eE-vs-theta}. The skyrmion mass also depends on $\theta$ for constant $e$, with $M_{\text{Sk}} = 0$ at $\theta = \theta_{\text{max}}$. We show this in the right panel of figure~\ref{fig:eE-vs-theta}. The normalized mass $e M_{\text{Sk}} / (4 \pi v)$ has little variation with $e$, as was the case for $\theta = 3 \pi / 4$. In figure~\ref{fig:E-vs-nW-vs-theta}, we display the energy profile for $e = 1.8 > \max_\theta e_{\text{crit}}(\theta)$, and different values of $\theta$. Finally, in figure~\ref{fig:coeffs} we show the region of $(c_{D1,0}, c_{D2,0})$ space where meta-stable skyrmions exist, and the values of the masses of the skyrmions inside it, which are given approximately by \begin{equation} M_{\text{Sk}} \simeq (\SI{30}{TeV}) \cdot \left[\tan(\theta_{\text{max}}) c_{D1,0} - c_{D2,0}\right]^{1/2}. \end{equation} The radius is similarly given by \begin{equation} R_{\text{Sk}} \simeq (\SI{20}{TeV^{-1}}) \cdot \left[\tan(\theta_{\text{max}}) c_{D1,0} - c_{D2,0}\right]^{1/2}. \end{equation} The condition $e > e_{\text{crit}}(\theta)$ is just a $\theta$-independent upper bound on the skyrmion mass, $M_{\text{Sk}} < \SI{11}{TeV}$.
The region where skyrmions exist in the $(c_{D1,0}, c_{D2,0})$ plane is thus determined by \begin{equation} c_{D2,0} < \tan(\theta_{\text{min}}) c_{D1,0}, \qquad \qquad 0 < \tan(\theta_{\text{max}}) c_{D1,0} - c_{D2,0} \lesssim 0.13. \end{equation} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{figures/coeffs_simple} \caption{Skyrmion mass $M_{\text{Sk}}$ as a function of $c_{D1, 0}$ and $c_{D2, 0}$.} \label{fig:coeffs} \end{figure} Although the rest of the $c_{i, n}$ coefficients are not enough by themselves to stabilize skyrmions, they may have effects on the configurations generated by $c_{D1,0}$ and $c_{D2,0}$. Figure~\ref{fig:densities} shows the contribution of each $\mathcal{Q}_i$ to the energy density in the configuration with $\theta = 3 \pi / 4$ and $e = 1.8$. The contributions from the operators not included in the generation of the configuration are negligible compared to the total energy. This means that whenever the $c_{i,n}$ coefficients are chosen so that their contribution is positive, they will not change the skyrmion configuration in a significant way. However, they might be chosen so that their contribution to the energy is arbitrarily negative, destabilizing the skyrmion. We find numerically that this happens when $c_{D8,0} = 1$, for example. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{figures/densities_stab} \includegraphics[width=0.49\textwidth]{figures/densities_nonstab} \caption{Radial energy densities $\rho_i$ in the skyrmion configuration with $\theta = 3 \pi / 4$ and $e = 1.8$. The operators in the left plot are included in the calculation of the skyrmion configuration.
The ones in the right plot are computed once this configuration is obtained and fixed.} \label{fig:densities} \end{figure} \section{Phenomenology} \label{sec:pheno} \subsection{Collider signals} The process of electroweak skyrmion production is similar to that of the electroweak instanton, as it is a $B + L$-violating transition over a barrier of a few TeV. As such, it is expected to be exponentially suppressed, even at energies above the potential barrier~\cite{DasBakshi:2020ejz, Banks:1990zb}. Thus, it is unlikely that this process will take place at colliders. However, one can indirectly study the existence of skyrmions through other effects of the operators that generate them. The two skyrmion-stabilizing operators $\mathcal{Q}_{D1}$ and $\mathcal{Q}_{D2}$ induce an anomalous quartic gauge coupling (aQGC) while preserving the SM triple gauge coupling. Most LHC searches for aQGC~\cite{CMS:2014mra,ATLAS:2015ify,ATLAS:2017vqm,ATLAS:2017bon,CMS:2019qfk,CMS:2020gfh,CMS:2020fqz} use a parametrization in terms of dimension-8 SMEFT operators which was first proposed in ref.~\cite{Eboli:2006wa}. This set of operators was corrected in ref.~\cite{Eboli:2016kko} by introducing missing operators and removing redundant ones in order for them to form a basis. The space of operators with four covariant derivatives was shown to have dimension 3. However, the experimental searches with the strongest constraint on this space~\cite{CMS:2019qfk,CMS:2020gfh} give their results in terms of only two operators, coming from an incomplete set of ref.~\cite{Eboli:2006wa}: \begin{equation} \mathcal{L}_{S} = \frac{f_{S0}}{\Lambda^4} (D_\mu \phi^\dagger D_\nu \phi) (D^\mu \phi^\dagger D^\nu \phi) + \frac{f_{S1}}{\Lambda^4} (D_\mu \phi^\dagger D^\mu \phi) (D_\nu \phi^\dagger D^\nu \phi). \end{equation} Therefore, their results cannot be used in general to constrain the full 3-dimensional space of Wilson coefficients.
Only when the measured final state uniquely selects one aQGC vertex ($WWWW$, $WWZZ$ or $ZZZZ$) can the results in the incomplete set be translated into the complete EFT basis, as shown in ref.~\cite{Rauch:2016pai}. Following this reference, we obtain limits over $c_{D1,0}$ and $c_{D2,0}$ (denoted $\alpha_5$ and $\alpha_4$ there) from the 95\% CL limits over $f_{S0}$ and $f_{S1}$ found in ref.~\cite{CMS:2020gfh} individually for $WW$ and $WZ$ production at $\sqrt{s} = \SI{13}{TeV}$ and $\int L dt = \SI{137}{fb^{-1}}$. $WW$ production comes from the $WWWW$ vertex, and the limits and conversion are given by \begin{gather} \num{-2.7e-3} \leq 2 c_{D1, 0} + c_{D2, 0} = \frac{v^4 f_{S1}}{8 \Lambda^4} \leq \num{2.9e-3}, \\ \num{-8.2e-3} \leq c_{D2, 0} = \frac{v^4 f_{S0}}{8 \Lambda^4} \leq \num{8.9e-3}, \end{gather} whereas for $WZ$ production, which comes from the $WWZZ$ vertex, they are \begin{gather} \num{-1.3e-3} \leq c_{D1, 0} = \frac{v^4 f_{S1}}{16 \Lambda^4} \leq \num{1.3e-3}, \\ \num{-1.9e-3} \leq c_{D2, 0} = \frac{v^4 f_{S0}}{16 \Lambda^4} \leq \num{1.9e-3}. \end{gather} We show these limits in figure~\ref{fig:coeffs-limits}. We point out that the experimental bounds in ref.~\cite{ATLAS:2016nmw} are presented in terms of a basis for the 2-dimensional custodial-invariant subspace of the 3-dimensional space of aQGC operators containing only covariant derivatives, and are thus directly translatable to our setting. However, they are weaker than the ones we have obtained, and they are not shown in figure~\ref{fig:coeffs-limits}. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figures/coeffs_limits} \caption{Dashed lines: 95\% CL limits on $c_{D1,0}$ and $c_{D2,0}$ at $\sqrt{s} = \SI{13}{TeV}$, $\int dt L = \SI{35.9}{fb^{-1}}$ from CMS~\cite{CMS:2019qfk}, using data from $WZ$ production (red) and $WW$ production (green). Transparent shaded blue region: excluded by positivity bounds.
Color-gradient region: allowed values of the coefficients for the existence of skyrmions, from the numerical calculations in this work. The coloring represents the skyrmion mass. The solid black-line perimeter encloses the triangle of allowed values of the coefficients that support a skyrmion.} \label{fig:coeffs-limits} \end{figure} \subsection{Positivity bounds} The space of Wilson coefficients can also be constrained theoretically by imposing general principles such as unitarity, locality and causality. The bounds obtained in this way are known as positivity bounds~\cite{Adams:2006sv}, and can be interpreted as necessary conditions for the existence of a UV completion of the EFT in question. In the HEFT, causality implies that~\cite{Distler:2006if, Fabbrichesi:2015hsa, Zhang:2018shp, Bi:2019phv} \begin{equation} c_{D1, 0} + c_{D2, 0} > 0, \qquad \qquad c_{D2, 0} > 0. \end{equation} These inequalities also arise in the chiral Lagrangian without gauge bosons~\cite{Jenkins:2006ia}. The region excluded by them is shown in blue in figure~\ref{fig:coeffs-limits}. It follows that skyrmions can only exist in the angular region $\theta_{\text{min}} \leq \theta < 3 \pi / 4$. Combining this fact with the experimental limits gives an upper bound on the mass of the skyrmion: \begin{equation} M_{\text{Sk}} \lesssim \SI{1.6}{TeV}. \end{equation} \subsection{Dark matter} Similarly to skyrmion production, skyrmion decay is a $B + L$-violating process which is expected to be exponentially suppressed. The skyrmion lifetime is thus likely longer than the age of the universe, opening the possibility of skyrmions being Dark Matter (DM) candidates. We use the following order-of-magnitude estimate of the freeze-out skyrmion density~\cite{Criado:2020zwu} \begin{equation} \Omega_{\text{Sk}} h^2 \simeq \frac{\SI{3e-27}{cm^3 s^{-1}}}{\left<\sigma_{\text{ann}} \text{v}\right>}, \qquad \qquad \sigma_{\text{ann}} \simeq \pi R_{\text{Sk}}^2, \qquad \qquad \text{v} \simeq 1/2.
\end{equation} Requiring that the skyrmion density is at most the total DM density, $\Omega_{\text{Sk}} h^2 \lesssim 0.1$, results in a lower limit on the skyrmion mass \begin{equation} M_{\text{Sk}} \gtrsim \SI{60}{GeV}. \end{equation} This limit would be saturated if skyrmions formed all of the DM. \section{Conclusions} \label{sec:conclusions} We have studied the skyrmion configurations that arise in the HEFT. We have found that a meta-stable configuration with skyrmion number close to one exists whenever the coefficients $c_{D1, 0}$ and $c_{D2, 0}$ lie on the strip \begin{equation*} c_{D2, 0} \leq \tan(\theta_{\text{min}}) c_{D1, 0}, \qquad \qquad 0 < \tan(\theta_{\text{max}}) c_{D1, 0} - c_{D2, 0} \lesssim 0.13. \end{equation*} The mass of this skyrmion is given by $M_{\text{Sk}} = (\SI{30}{TeV}) \cdot [\tan(\theta_{\text{max}}) c_{D1, 0} - c_{D2, 0}]^{1/2}$. It is separated from the trivial vacuum by an energy barrier of about $\SI{11}{TeV}$. This value also represents the maximal theoretical $M_{\text{Sk}}$, as above it the barrier disappears. Since skyrmions are unlikely to be created at colliders, we have focused on the experimental signals of the operators that stabilize them. LHC searches for aQGC put bounds of order $\num{e-3}$ on both coefficients. Combining these bounds with positivity constraints, we have found that the allowed parameter space for skyrmions is the triangle \begin{equation} 1 < c_{D2, 0} / c_{D1, 0} \leq \tan(\theta_{\text{min}}), \qquad \qquad -\num{1.3e-3} \leq c_{D1, 0} \leq 0. \end{equation} This allowed us to obtain a stronger upper bound on the mass of the skyrmion, of about $\SI{1.6}{TeV}$. Skyrmions are also expected to be long-lived, so they contribute to the DM density. By assuming that their abundance is generated by the freeze-out mechanism and adopting a simple approximation for the skyrmion annihilation cross-section, we have computed an order-of-magnitude lower bound on the skyrmion mass, of $\SI{60}{GeV}$. \medskip
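The $\SI{60}{GeV}$ bound can be reproduced with a short order-of-magnitude computation. This is a sketch of our own (not the authors' code): it assumes the relation $R_{\text{Sk}} \simeq (2/3)\, M_{\text{Sk}} / \text{TeV}^2$ implied by the approximate mass and radius formulas above, together with standard unit conversions ($\hbar c \approx \SI{1.973e-14}{GeV.cm}$, $c \approx \SI{3e10}{cm/s}$):

```python
import math

HBARC_CM = 1.973e-14  # converts a length in GeV^-1 to cm
C_CM_S = 2.998e10     # speed of light in cm/s

def omega_h2(mass_GeV):
    """Order-of-magnitude freeze-out density Omega_Sk h^2 for a skyrmion of
    given mass, using R_Sk = (2/3) M_Sk / TeV^2 (our reading of the quoted
    M_Sk and R_Sk formulas), sigma_ann = pi R_Sk^2 and v = 1/2."""
    R_per_GeV = (2.0 / 3.0) * mass_GeV / 1e6        # radius in GeV^-1
    R_cm = R_per_GeV * HBARC_CM                     # radius in cm
    sigma_v = math.pi * R_cm**2 * 0.5 * C_CM_S      # <sigma_ann v> in cm^3/s
    return 3e-27 / sigma_v

# Scan upwards for the mass where Omega_Sk h^2 drops below 0.1:
m = 1.0
while omega_h2(m) > 0.1:
    m += 1.0
print(m)  # ~60 GeV with these round numbers
```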
\section{Introduction} Most businesses thrive on the effective use of event logs and process records. The ability to predict the nature of an unseen event in a business process can have very useful applications \cite{breuker2016comprehensible}. This can enable more efficient customer service and facilitate the development of improved work plans for companies. The domain of process mining deals with combining a wide range of classical model-based predictive techniques along with traditional data-analysis techniques~\cite{van2016data}. A process can be a representation of any set of activities that take place in a business enterprise; for example, the procedure for obtaining a financial document, the steps involved in a complaint-registering system, etc. Business-process mining, in general, deals with the analysis of the sequence of events produced during the execution of such processes~\cite{castellanos2004comprehensive,maggi2014predictive,marquez2017predictive}. Even though the classical approach of depicting event logs is with the help of process graphs \cite{agrawal1998mining,van2003workflow}, Pasquadibisceglie et al.~\cite{pasquadibisceglie2019using}, Tax et al.~\cite{tax2017predictive}, Taymouri et al.~\cite{taymouri2020predictive}, and others have recently applied deep-learning techniques like Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) networks, and Generative Adversarial Nets (GANs) for the task of predictive process mining. The deep-learning based models obtained results that outperformed traditional models. Inspired by these works and taking into consideration the graph nature of processes, we aim to model event logs as graph structures and apply different Graph Neural Network (GNN) models to such data structures. GNNs have shown superior results for the vertex-classification task~\cite{DBLP:conf/iclr/KipfW17}, link-prediction task~\cite{zhang2018link}, and recommender systems~\cite{ying2018graph}.
In this work, we use a new representation for the event-log data and investigate the performance of different variants of a Graph Convolutional Network (GCN)~\cite{DBLP:conf/iclr/KipfW17} as a successful example of GNNs. We compare the GCN model among others with the CNN and LSTM models, along with a Multi-Layer Perceptron (MLP) and classical process-mining techniques~\cite{breuker2016comprehensible,van2011time}. In contrast to the existing body of research~\cite{pasquadibisceglie2019using,evermann2016deep,camargo2019learning,tax2017predictive,taymouri2020predictive,lin2019mm}, we analyze how the performance of the models for business-process prediction depends on the stage of a process. The results show that the next activity type and timestamp prediction depend strongly on the model and also on whether an early, mid, or late stage of the process is considered. Furthermore, we observe from our experiments that the MLP is a strong baseline and in many cases outperforms more advanced neural networks like LSTMs, GCNs, and CNNs. The MLP model achieves a maximum of 82\% accuracy in predicting the next event type, and a minimum mean absolute error of 1.3229 days for predicting the timestamp of the next event. Below, we discuss related works in business-process mining. Section~\ref{sec:exp_app} introduces our experimental apparatus, datasets and pre-processing, as well as our GCN-based models. Sections~\ref{sec:results} and~\ref{sec:results_comparison} highlight the major results from the experiments, followed by a discussion in Section~\ref{sec:discussions}, before we conclude.
\section{Related Works} \label{sec:rel_works} Business-process mining deals with several prediction tasks like predicting the next activity type~\cite{becker2014designing,tax2017predictive,pasquadibisceglie2019using,evermann2016deep,breuker2016comprehensible}, the timestamp of the next event in the process~\cite{tax2017predictive,van2011time}, the overall outcome of a given process~\cite{taylor2017customer}, or the time remaining until the completion of a given process instance \cite{rogge2013prediction}. There is a large body of algorithms for these process-mining tasks~\cite{breuker2016comprehensible,van2011time}. In the context of this work, we focus on the first two of the aforementioned predictive tasks, namely, predicting the nature and timestamp of the next event in a given process. We reconsider the results from the classical methods and compare them with the latest developments in business-process mining using deep learning. There has been a recent shift towards deep-learning models for the task of predictive business-process monitoring. Tax et al.~\cite{tax2017predictive} proposed to use a Recurrent Neural Network (RNN) architecture with Long Short-Term Memory (LSTM) for the task of predicting the next activity and timestamp, the remaining cycle time, and the sequence of remaining events in a case. Their model was able to capture the temporal properties of the data and improve on the results obtained from traditional process-mining techniques. The main motivation for using an LSTM model was to obtain results that were consistent for a range of tasks and datasets. The LSTM architecture of Tax et al. could also be extended to the task of predicting the case outcome. Camargo et al.~\cite{camargo2019learning} and Lin et al.~\cite{lin2019mm} both use LSTM models, too. The former predicts the next event, including its timestamp and the associated resource pool; the latter predicts the next event, including its attributes. Evermann et al.
\cite{evermann2016deep} also used RNNs for the task of predicting the next event on two real-life datasets. Their system architecture involved two hidden RNN layers using basic LSTM cells. Pasquadibisceglie et al. \cite{pasquadibisceglie2019using} used Convolutional Neural Networks (CNN) for the task of predictive process analytics. An image-like data-engineering approach was used to model the event logs and obtain results from benchmark datasets. In order to adapt a CNN for process-mining tasks, a novel technique of transforming temporal data into a spatial structure similar to images was introduced. The CNN results improve over the accuracy scores obtained by Tax et al.'s LSTM~\cite{tax2017predictive} for the task of predicting the next event. Scarselli et al. \cite{scarselli2008graph} introduced Graph Neural Networks (GNNs) as a new deep-learning technique that could efficiently perform feature extraction. Especially in the last year, GNNs have gained widespread attention and use in different domains. Wu et al. \cite{wu2020comprehensive} provided a comprehensive survey of GNNs. They categorize the different GNN architectures into Graph Convolutional Networks (GCNs, also called ConvGNNs), Spatio-temporal Graph Neural Networks (STGNNs), Recurrent Graph Neural Networks (RecGNNs), and Graph Autoencoders (GAEs). Esser et al. \cite{a3a7ca89d76a435ca35751963ce60f18} discussed the advantages of using graph structures to model event logs. Performing process-mining tasks by modelling the relationships between events and case instances as process graphs has been a widely accepted approach \cite{maruster2002process,van2007business}. Recently, Taymouri et al. \cite{taymouri2020predictive} have used Generative Adversarial Nets (GANs) for predicting the next activity and its timestamp.
In a minimax game between discriminator and generator, both consisting of RNNs in an LSTM architecture and feedforward neural networks, a prediction is made of the next step, including the event type and its timestamp. Taymouri et al. used different models, each trained over a specific length of sub-sequences of processes, modeled by the parameter $k$. For example, a value of $k=20$ means that sub-sequences of length $20$ are used for training, and testing is applied to process steps $21$, $22$, $23$, and so on until the end of the process. Other works used features from unstructured data like texts in deep-learning architectures to improve the process-prediction task. Ding et al.~\cite{ding2015deep} demonstrate how a deep-learning model using events extracted from texts improves predictions in the stock-market domain. For business-process modelling, Teinemaa et al.~\cite{teinemaa2016predictive} improve the performance of predictive business models by using text-mining techniques on the unstructured data present in event logs. In this work, we aim to combine traditional process mining from event graphs along with deep-learning techniques like GCNs to achieve a better performance in predictive business-process monitoring. We evaluate each of the model variants at different stages of a process, determined by quartiles of the number of events in a case and normalized quarters computed over the case durations. This provides a more detailed understanding of the models' performance. \section{Experimental Apparatus} \label{sec:exp_app} We introduce the datasets used in this work and the methodology adopted for representing the feature vectors corresponding to each row in the dataset. Following this, a mathematical formulation of graphs and the specific case of process graphs is provided, which lays the foundation to understand a Graph Convolutional Network. We conclude with a description of the procedure and metrics adopted for this work.
\subsection{Datasets} \label{sec:datasets} We use two well-known benchmark event-log datasets, namely Helpdesk and BPI12 (W). These two representative datasets have been chosen as they are used by the models we want to compare with, namely the CNN by Pasquadibisceglie et al.~\cite{pasquadibisceglie2019using}, LSTMs from Camargo et al.~\cite{camargo2019learning} and Tax et al.~\cite{tax2017predictive}, and the GAN from Taymouri et al.~\cite{taymouri2020predictive}. Thus, the datasets are best suited for comparing the different deep-learning architectures. All datasets are characterised by three columns: ``Case ID'' (the process-case identifier), ``Activity ID'' (the event-type identifier), and the ``Complete Timestamp'' denoting the time at which a particular event took place. Table~\ref{table:data-analysis} shows an overview of the datasets. \begin{table}[!h] \centering \caption{Overview of the datasets used} \label{table:data-analysis} \begin{tabular}{l|rr} \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Attribute}}} & \multicolumn{2}{c}{\textbf{Dataset}} \\ \cline{2-3} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Helpdesk} & \multicolumn{1}{c}{BPI12(W)} \\ \hline No. of events & \multicolumn{1}{r|}{13,710} & 72,413 \\ No. of process cases & \multicolumn{1}{r|}{3,804} & 9,658 \\ No. of activity types & \multicolumn{1}{r|}{9} & 6 \\ Avg. case duration (sec.) & \multicolumn{1}{r|}{22,475} & 1,364 \\ Avg. no. of events per case & \multicolumn{1}{r|}{3.604} & 7.498 \end{tabular} \end{table} \paragraph{Helpdesk dataset} This dataset presents event logs obtained at the helpdesk of an Italian software company.\footnote{\url{https://data.mendeley.com/datasets/39bp3vv62t/1}} The events in the log correspond to the activities associated with different process instances of a ticket-management scenario. It is a database of 13,710 events related to 3,804 different process instances. There are 9 activity types, i.\,e., classes in the dataset.
Each process contains events from a list of nine unique activities involved in the process. A typical process instance spans events from inserting a new ticket until it is closed or resolved. Table \ref{table:data-analysis} shows the average case duration and the number of activities per case. \paragraph{BPIC'12 (Sub-process W) dataset} The Business-Process Intelligence Challenge (BPIC'12) dataset\footnote{\url{https://www.win.tue.nl/bpi/doku.php?id=2012:challenge&redirect=1id=2012/challenge}} contains event logs of a process consisting of three sub-processes, and is in itself a relatively large dataset. As described in \cite{tax2017predictive} and \cite{pasquadibisceglie2019using}, only completed events that are executed manually are taken into consideration for predictive analysis. This dataset, called BPI12 (W), includes 72,413 events from 9,658 process instances. Each event in a process belongs to one of 6 activity types involved in a process instance, i.\,e., a process case. The activities denote the steps involved in the application procedure for financial needs, like personal loans and overdrafts. \subsection{Graphs and Graph Convolutional Layer} \label{subsec:GCN} A graph can be represented as $G = (V,E)$, where \textit{V} is the set of vertices and \textit{E} denotes the edges present between the vertices~\cite{wu2020comprehensive}. An edge between vertex \textit{i} and vertex \textit{j} is denoted as \textit{e$_{ij}$} $\in$ \textit{E}. A graph can be either directed or undirected, depending on the nature of the interaction between the vertices. In addition, a graph may be characterized by vertex attributes or edge attributes, which in simple terms are feature vectors associated with that particular vertex or edge. The adjacency matrix of a graph is an \textit{$n \times n$} matrix with \textit{A$_{ij}$} = 1 if e$_{ij}$ $\in$ \textit{E} and \textit{A$_{ij}$} = 0 if e$_{ij}$ $\notin$ \textit{E}, where \textit{n} is the number of vertices in the graph.
A degree matrix is a diagonal matrix which stores the degree of each vertex, i.\,e., the number of edges attached to that vertex. A GCN layer operates by calculating a hidden embedding vector for each node of the graph. It calculates this hidden vector by combining each node's feature vector with the adjacency matrix of the graph, by the equation (Kipf and Welling~\cite{DBLP:conf/iclr/KipfW17}): \begin{equation} f(X,A,W) = \sigma (D^{-1}AXW), \label{equation:gcn_kipfwelling} \end{equation} where \textit{X} is the input-feature matrix containing the feature vector for each of the vertices, \textit{A} is the adjacency matrix of the graph, \textit{D} is the degree matrix, \textit{W} is a learnable weight matrix, and $\sigma$ is the activation function of that layer. In \eqref{equation:gcn_kipfwelling}, the product $D^{-1}A$ represents an attempt to normalize the adjacency matrix. However, this product normalizes $A$ only from one side; an alternative symmetric normalisation is preferred \cite{DBLP:conf/iclr/KipfW17}, changing the GCN layer's operation to: \begin{equation} \label{equation:d_normal} f(X,A,W) = \sigma (D^{-\frac{1}{2}}AD^{-\frac{1}{2}}XW) \end{equation} Note that in all the models used in this work, the GCN layer calculations are done as described in \eqref{equation:d_normal}. For the model variants described in Section \ref{subsec:model_variants} involving the Laplacian matrix, the adjacency matrix ($A$) in \eqref{equation:d_normal} is replaced by the corresponding unnormalized Laplacian. \subsection{Data Pre-processing} \label{section:feat_vec} The timestamp corresponding to each event in the dataset can be used to derive a feature-vector representation for each row in the data. The approach introduced in \cite{tax2017predictive} has been used to obtain an initial feature vector consisting of the following four elements: 1. The time since the previous event in the case. 2.~The time since the case started. 3. The time since midnight. 4.
The day of the week for the event. All four values are treated as real-valued durations. This results in a 4-element feature vector for every row in the dataset. The drawback of this kind of representation is that it treats each event in a case independently. In order to overcome this drawback, it was necessary for the feature vector of every event to have a history of the other events that had already occurred for that particular Case ID. Hence, a new comprehensive feature-vector representation was introduced. In this work, each entry in a dataset is assigned a matrix representation (\emph{X}) whose dimensions depend on the dataset which is considered. The number of \emph{rows} in \emph{X} can be obtained by identifying the unique entries in the `Activity ID' column, i.\,e., the unique activity types as shown in Table~\ref{table:data-analysis}, or can be visually identified as the number of vertices in the process graphs for each of the datasets (Figure \ref{fig:dfg}). Let us denote this value by `\textit{num$\_$of$\_$nodes}' for ease of representation. As can be observed from Table \ref{table:data-analysis}, \textit{num$\_$of$\_$nodes} is 9 for the Helpdesk dataset and 6 for the BPI'12 (W) dataset. The number of \emph{columns} in \emph{X} corresponds to the length of the initial feature vector, i.\,e., 4. This results in a matrix of size `\textit{$num\_of\_nodes \times 4$}' for each data entry. The matrix \emph{X} is first initialized with zeroes. Each row of $X$ stores the 4-element feature vector of the most recent event whose Activity ID corresponds to that row index, for the current Case ID. For example, the first row stores the 4-element feature vector for the event with Activity ID equal to 1, and so on. One approximation that we have used in this step is that if an event corresponding to a particular Activity ID has occurred more than once in a case, we use the feature vector for only the most-recent occurrence of that event.
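This construction can be sketched as follows (an illustrative sketch with hypothetical helper and variable names, not the authors' code; a 4-element feature vector per event is assumed, as above):

```python
import numpy as np

def build_feature_matrix(case_events, num_of_nodes):
    """Per-event feature matrix X (num_of_nodes x 4) for a case prefix:
    row a holds the 4-element feature vector of the most recent occurrence
    of Activity ID a; activities not yet seen keep all-zero rows.

    case_events: list of (activity_id, feature_vec) in chronological order,
                 with activity_id in 0..num_of_nodes-1.
    """
    X = np.zeros((num_of_nodes, 4))
    for activity_id, feature_vec in case_events:
        X[activity_id] = feature_vec  # later occurrences overwrite earlier ones
    return X

# Toy case prefix in which activity 0 occurs twice:
prefix = [(0, [0.0, 0.0, 10.0, 1.0]),
          (2, [5.0, 5.0, 15.0, 1.0]),
          (0, [3.0, 8.0, 18.0, 1.0])]
X = build_feature_matrix(prefix, num_of_nodes=6)
print(X[0])  # row 0 holds only the most-recent occurrence of activity 0
print(X[1])  # activity 1 never occurred: all zeros
```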
In scenarios where events with a particular Activity ID have not occurred yet in a given case, the feature matrix will hence just store a vector with zeroes corresponding to that Activity ID. This method of representation gives each row of the Helpdesk dataset a $9 \times 4$ matrix, and each row of the BPI'12 (W) dataset a $6 \times 4$ matrix. The motivation behind choosing such a representation is to facilitate the computation involved in a Graph Convolutional Layer, as explained in Section \ref{subsec:GCN}. For a given row, the Activity ID of the next event and the time since current event are taken as the target labels for the event-predictor and the time-predictor, respectively. \subsection{Process Graphs as Input to GCNs} \begin{figure} [!h] \centering \includegraphics[width=0.49\linewidth]{dfg_helpdesk.jpg} \includegraphics[width=0.49\linewidth]{dfg_BPI.jpg} \caption{Directly-follows graphs generated for the Helpdesk dataset (left) and BPI'12 (W) dataset (right) using PM4Py. The vertices represent the unique Activity IDs (i.\,e., activity types) along with their frequencies denoted in brackets. The numbers on the directed edges denote the frequency of directly-follows relations.} \label{fig:dfg} \end{figure} \begin{figure*} [!h] \centering \includegraphics[width=1\linewidth]{sys_arch.png} \caption{Graph Convolutional Network architecture for the event type and timestamp predictor. The value for \textit{n} in the last layer denotes the number of classes for the event predictor and 1 for the time predictor. } \label{fig:system_arch} \end{figure*} Process discovery from event logs can be achieved using different traditional process-mining techniques. In this work, we have used an inductive mining approach with Directly-Follows Graphs (DFGs) to represent the processes extracted from each of the datasets. The choice is motivated by the simplicity and efficiency with which the entire data can be represented in the form of a graph. 
A Directly-Follows Graph for an Event Log \textit{L} is denoted as \cite{van2016data}: $G(L) = (A_L,\mapsto _L,A_L^{start},A_L^{end})$, where A$_L$ is the set of activities in \textit{L}, with A$_L^{start}$ and A$_L^{end}$ denoting the sets of start and end activities, respectively. $\mapsto _L$ denotes the directly-follows operation, which exists between two events if and only if there is a case instance in which the source event is followed by the target event. The vertices in the graph represent the unique activities present in the event log, and a directed edge exists if there is a directly-follows relation between the corresponding vertices. The number of directly-follows relations that exist between two vertices is denoted by the weight of the corresponding edge. Berti et al.~\cite{DBLP:journals/corr/abs-1905-06169} presented a process-mining tool for Python called PM4Py. The Directly-Follows Graphs for both datasets (considering all the events/rows) were visualised using the PM4Py package, as shown in Figure \ref{fig:dfg}. Consider the following binary adjacency matrix for the process graph generated, as an example, from the BPI'12 (W) dataset: {\small \[ B_{BPI'12(W)} = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 0 & 1\\ 0 & 1 & 1 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 & 0 & 1\\ 0 & 0 & 1 & 0 & 1 & 1\\ 0 & 1 & 0 & 1 & 1 & 1\\ \end{bmatrix} \] } This \textit{$6 \times 6$} matrix needs to be normalized as per Equation \eqref{equation:d_normal} to be used in a GCN. The elements of the diagonal degree matrix can be numerically computed as a row-wise sum of the above matrix. The dimensions of the normalized matrix (\textit{$6 \times 6$}) and the dimensions of \emph{X} (\textit{$6 \times 4$} for the BPI'12 (W) dataset) make it compatible for matrix multiplication in the GCN layer. In general, the normalized adjacency matrix has dimensions \textit{$num\_of\_nodes \times num\_of\_nodes$} and \emph{X} the dimensions \textit{$num\_of\_nodes \times 4$}.
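The symmetric normalization of \eqref{equation:d_normal} and one GCN-layer forward pass can be sketched in a few lines of NumPy (a minimal illustration using the $B_{BPI'12(W)}$ matrix above; variable names are ours, not the authors' implementation):

```python
import numpy as np

# Binary adjacency matrix of the BPI'12 (W) directly-follows graph (B above).
A = np.array([[1, 1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0, 1],
              [0, 1, 1, 0, 1, 0],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1],
              [0, 1, 0, 1, 1, 1]], dtype=float)

def sym_normalize(A):
    """Symmetric normalization D^{-1/2} A D^{-1/2}; the diagonal degree
    matrix is obtained as the row-wise sum of A, as described in the text."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def gcn_layer(X, A, W, activation=np.tanh):
    """One GCN layer: sigma(D^{-1/2} A D^{-1/2} X W)."""
    return activation(sym_normalize(A) @ X @ W)

rng = np.random.default_rng(0)
X = rng.random((6, 4))  # a 6 x 4 per-event feature matrix (data pre-processing above)
W = rng.random((4, 1))  # learnable weight matrix of size 4 x 1
H = gcn_layer(X, A, W)
print(H.shape)  # (6, 1)
```

The shapes match the text: a $6 \times 6$ normalized adjacency times a $6 \times 4$ feature matrix times a $4 \times 1$ weight matrix yields one hidden value per vertex.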
\subsection{Procedure} The network depicted in Figure \ref{fig:system_arch} shows the architecture of the GCN model that learns the next Activity ID and the timestamp of the next activity. The overall structure constructed for this work consists of a Graph Convolutional Layer followed by a sequential block of three fully-connected layers, with Dropout applied after the GCN layer and before the last fully-connected layer. The weight matrix (W) in the GCN layer is of size $4 \times 1$. The Event Predictor Network has \textit{tanh} activation for the first two fully-connected layers and softmax activation at the last layer. Cross-entropy loss is used during training. The Timestamp Predictor Network, on the other hand, consists of \textit{ReLU} activation for the first two layers and a linear activation function at the last layer. The training process uses the Mean Absolute Error as the loss function. An Adam optimizer \cite{DBLP:journals/corr/KingmaB14} is used for training all variants. In line with the training procedure of prior studies~\cite{tax2017predictive,pasquadibisceglie2019using}, each of the datasets is divided into train (2/3) and test sets (1/3). We use 20\% of the training set as validation set during the training process. The validation set is randomly sampled from the training set in each of the five experimental runs. Note that the chronological order of the datasets has been preserved during the train-test splitting. One row is taken at a time during training, resulting in a mini-batch size of 1. The final results after evaluation on the test set are reported as an average over 5 runs. \begin{table*}[t!] \centering \caption{Accuracy for next-event prediction at different stages of a process (indicated by quartiles based on the number of events and quarters based on normalising the case duration). Standard deviations (SD) have been omitted as they are very low ($<0.06$).
} \label{table:accuracy_results} \begin{tabular}{c|c|cccc|cccc|c} \multirow{3}{*}{\textbf{Dataset}} & \multirow{3}{*}{\textbf{Model}} & \multicolumn{8}{c|}{\textbf{Accuracy for Event Prediction}} & \multirow{3}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Overall\\ accuracy\end{tabular}}} \\ \cline{3-10} & & \multicolumn{4}{c|}{\textbf{Quartiles based on Events}} & \multicolumn{4}{c|}{\textbf{Quarters based on Duration}} & \\ \cline{3-10} & & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2}} & \multicolumn{1}{c|}{\textbf{3}} & \textbf{4} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2}} & \multicolumn{1}{c|}{\textbf{3}} & \textbf{4} & \\ \hline \multirow{5}{*}{Helpdesk} & GCN$_W$ & 0.7288 & 0.6888 & 0.7634 & 0.9419 & 0.7499 & 0.5508 & 0.5940 & 0.8951 & 0.7954 \\ & GCN$_B$ & 0.7266 & 0.6778 & 0.7475 & 0.8973 & 0.7418 & 0.5410 & 0.5590 & 0.8561 & 0.7731 \\ & GCN$_{LB}$ & 0.7270 & 0.6837 & 0.7729 & 0.9108 & 0.7523 & 0.5492 & 0.5819 & 0.8722 & 0.7863 \\ & GCN$_{LW}$ & 0.6681 & 0.6922 & 0.7665 & 0.9167 & 0.7389 & 0.5508 & 0.5723 & 0.8803 & 0.7830 \\ & MLP & \textbf{0.7297} & \textbf{0.7031} & \textbf{0.8110} & \textbf{0.9642} & \textbf{0.7677} & \textbf{0.6082} & \textbf{0.6446} & \textbf{0.9212} & \textbf{0.8201} \\ \hline \multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}BPI'12 \\ (W)\end{tabular}} & GCN$_W$ & 0.6964 & 0.7397 & 0.8011 & 0.4303 & 0.7247 & 0.8802 & 0.7869 & 0.4493 & 0.6484 \\ & GCN$_B$ & 0.7329 & 0.7487 & 0.8039 & 0.3936 & 0.7424 & 0.8819 & 0.7933 & 0.4251 & 0.6473 \\ & GCN$_{LB}$ & \textbf{0.7381} & \textbf{0.7587} & \textbf{0.8111} & 0.4077 & \textbf{0.7579} & \textbf{0.8961} & 0.7883 & 0.4329 & \textbf{0.6569} \\ & GCN$_{LW}$ & 0.7366 & 0.7542 & 0.8050 & 0.4028 & 0.7552 & 0.8827 & 0.7882 & 0.4279 & 0.6525 \\ & MLP & 0.6554 & 0.7369 & 0.8058 & \textbf{0.4792} & 0.7006 & 0.8818 & \textbf{0.8001} & \textbf{0.4888} & 0.6559 \end{tabular} \end{table*} \begin{table*}[t!] 
\centering \caption{MAE values (in days) for predicting the timestamp of the next-event at different stages of a process (indicated by quartiles based on the number of events and quarters based on normalising the case duration). SDs omitted as they are very low ($<0.2$).} \label{table:mae_results} \begin{tabular}{c|c|llll|llll|l} \multirow{3}{*}{\textbf{Dataset}} & \multirow{3}{*}{\textbf{Model}} & \multicolumn{8}{c|}{\textbf{MAE (in days) for Time Prediction}} & \multicolumn{1}{c}{\multirow{3}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Overall\\ MAE \\ (days)\end{tabular}}}} \\ \cline{3-10} & & \multicolumn{4}{c|}{\textbf{Quartiles based on Events}} & \multicolumn{4}{c|}{\textbf{Quarters based on Duration}} & \multicolumn{1}{c}{} \\ \cline{3-10} & & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2}} & \multicolumn{1}{c|}{\textbf{3}} & \multicolumn{1}{c|}{\textbf{4}} & \multicolumn{1}{c|}{\textbf{1}} & \multicolumn{1}{c|}{\textbf{2}} & \multicolumn{1}{c|}{\textbf{3}} & \multicolumn{1}{c|}{\textbf{4}} & \multicolumn{1}{c}{} \\ \hline \multirow{5}{*}{Helpdesk} & GCN$_W$ & 2.2955 & \textbf{2.8397} & 4.1637 & 0.3340 & 3.6811 & 6.4332 & 3.6726 & 0.1806 & 2.3346 \\ & GCN$_B$ & 2.2993 & 2.8577 & 4.1483 & \textbf{0.3143} & 3.6958 & 6.3667 & 3.4909 & \textbf{0.1768} & 2.3298 \\ & GCN$_{LB}$ & 2.2973 & 2.8474 & 4.1085 & 0.3433 & 3.6744 & 6.2572 & 3.5081 & 0.2020 & 2.3250 \\ & GCN$_{LW}$ & 2.2950 & 2.8470 & \textbf{4.0661} & 0.3323 & 3.6651 & 6.1060 & \textbf{3.2253} & 0.2195 & \textbf{2.3095} \\ & MLP & \textbf{2.2948} & 2.9030 & 4.1969 & 0.3445 & \textbf{3.5724} & \textbf{5.688} & 5.0011 & 0.3572 & 2.3661 \\ \hline \multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}BPI'12 \\ (W)\end{tabular}} & GCN$_W$ & \textbf{1.0956} & 1.5503 & 1.6047 & 1.1491 & 1.7064 & 2.4116 & 1.7891 & 0.4943 & 1.3468 \\ & GCN$_B$ & 1.1134 & 1.6109 & 1.6877 & 1.1449 & 1.7666 & 2.5344 & 1.8986 & 0.4548 & 1.3837 \\ & GCN$_{LB}$ & 1.1114 & 1.6043 & 1.6775 & 1.1359 & 1.7530 & 2.5318 & 1.8997 & 
\textbf{0.4495} & 1.3765 \\ & GCN$_{LW}$ & 1.1069 & 1.5900 & 1.6632 & 1.1437 & 1.7528 & 2.4998 & 1.8530 & 0.4618 & 1.3710 \\ & MLP & 1.0966 & \textbf{1.5224} & \textbf{1.5587} & \textbf{1.1288} & \textbf{1.6529} & \textbf{2.3617} & \textbf{1.7134} & 0.5276 & \textbf{1.3229} \end{tabular} \end{table*} \subsection{GCN Model Variants and MLP Baseline} \label{subsec:model_variants} We introduce four GCN variants of this general architecture and an MLP-only variant for the experiments carried out in this work. \paragraph{GCN$_W$ (GCN with Weighted Adjacency Matrix)} \label{subsub: weighted} The adjacency matrix of the process graph depicted in Figure~\ref{fig:dfg} is computed. Rather than the traditional approach of using binary entries (as in $B_{BPI'12(W)}$), in this variant the adjacency matrix stores the weights of the edges of the process graph. The normalization procedure given in Eq.~\eqref{equation:d_normal} is then applied to this adjacency matrix in the GCN layer. \paragraph{GCN$_B$ (GCN with Binary Adjacency Matrix)} This variant uses the binary adjacency matrix shown in the previous section (see example: $B_{BPI'12(W)}$). The degree matrix is computed, from which a symmetrically normalized adjacency matrix is obtained. The main motivation for using both the binary and weighted variants of the adjacency matrix is that GCN$_B$ is heavily influenced by outliers, whereas GCN$_{W}$ might be biased by frequency differences between common connections in the DFG. \paragraph{GCN$_{LW}$ (GCN with Laplacian Transform of Weighted Adjacency Matrix)} The Laplacian matrix~\cite{godsil2013algebraic} of a graph is $L = D - A$, where \textit{D} is the degree matrix and \textit{A} is the adjacency matrix. In this variant, \textit{A} corresponds to the weighted adjacency matrix.
The Laplacian matrix is then used for all computations within the Graph Convolutional layer as follows: $f(X,A,W) = \sigma (D^{-\frac{1}{2}}(D-A)D^{-\frac{1}{2}}XW)$. \paragraph{GCN$_{LB}$ (GCN with Laplacian Transform of Binary Adjacency Matrix)} This variant is equivalent to the previous one, except that it uses the binary adjacency matrix instead of the weighted adjacency matrix to compute the Laplacian matrix. \paragraph{MLP (Multi-Layer Perceptron)} In order to understand whether the GCN layer adds any significant change to the performance, we used a variant consisting only of the three fully-connected layers (omitting the GCN layer). This model also serves as a baseline for the other architectures compared. The feature matrix (\emph{X}) was flattened and given as input to the fully-connected layers. As in the other variants, Dropout is used before the last layer. Hence, the dimension of the input vector of the MLP was \textit{$number\_of\_nodes \times number\_of\_features$}. \subsection{Measures} Each row is associated with two labels: the next activity type and the time (in seconds) after which the next event in that case takes place. As in \cite{tax2017predictive}, an additional label is added to denote the end of a case. \paragraph{Next Activity and Timestamp} The quality of the next-activity prediction is measured in terms of the accuracy of predicting the correct label. For timestamp prediction, we use the Mean Absolute Error (MAE) calculated in days. \paragraph{Quartiles based on Events} We evaluate the performance of each variant at different quartiles. The quartiles for each case instance are computed based on the number of events: for each case instance, its full list of events is split into four (approximately) equal quartiles, based on the order in which the events occurred in that case instance.
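The quartile assignment described above can be sketched as follows. This is an illustrative reading, not the authors' code; the function name and the 0-based indexing are assumptions, and integer arithmetic produces the "approximately equal" parts:

```python
# Illustrative sketch (not the authors' code): assign each event of a case
# to one of four quartiles based on its position in the case's event
# sequence. Integer arithmetic yields approximately equal parts.

def event_quartile(event_index: int, num_events: int) -> int:
    """Return the quartile (1..4) of the 0-based `event_index` within a
    case containing `num_events` events."""
    return (event_index * 4) // num_events + 1

# Example: a case with 10 events yields quartile sizes 3, 2, 3, 2.
quartiles = [event_quartile(i, 10) for i in range(10)]
# quartiles == [1, 1, 1, 2, 2, 3, 3, 3, 4, 4]
```

The per-quartile accuracies in Table \ref{table:accuracy_results} can then be obtained by grouping predictions by this quartile index.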
\paragraph{Quarters based on Unit Length Time} We normalize the full case duration to unit length and divide it into four equidistant intervals, to make a comparison along the time axis between cases and datasets possible. Thus, each case instance's full duration is divided by 4, and the case's events are assigned to the four intervals based on their individual finishing timestamps. In contrast to the quartiles based on events above, these temporal quarters divide the process events according to their natural distribution over time. \section{Results of Predicting Event Types and Time at Different Stages for GCNs and MLP} \label{sec:results} \label{sec:q-results} We describe per dataset the results for the GCN and MLP models based on the quartiles over events and the quarters of unit-length time. Subsequently, we compare the performance of the deep-learning architectures CNN, LSTM, GCN, and GAN with the MLP and classical approaches. \subsection{Helpdesk Dataset} \paragraph{Optimization} Each of the model variants was initially run with different learning rates for the Adam optimizer. The learning rate with the best performance was chosen for each variant. For all the GCN variants, the best performance for the timestamp predictor was obtained with a learning rate of 0.001. For the event predictor, GCN$_{LW}$ gave the best performance at a learning rate of 0.001, and all other GCN variants performed best at 0.0001. For the MLP model, both tasks gave the best results at a learning rate of 0.0001. The model corresponding to the best validation loss is saved for each variant and then evaluated on the same test set. \paragraph{Results} The accuracy values for the event-prediction task are presented in Table \ref{table:accuracy_results}. The Mean Absolute Error (in days) achieved on the test set by the models saved for the different variants is shown in Table \ref{table:mae_results}.
It can be observed from Tables~\ref{table:accuracy_results} and \ref{table:mae_results} that the MLP model outperforms all other variants for the event-prediction task, in all individual quartiles/quarters as well as in overall performance. Among all model variants, the maximum overall accuracy of 82.01\% for the event predictor is obtained by the MLP. The minimum overall MAE of 2.3095 days was achieved by the GCN$_{LW}$ variant. \subsection{BPI'12 (W) Dataset} \paragraph{Optimization} The same optimization procedure as for the Helpdesk dataset was used. The timestamp predictor for all variants gave the best results with a learning rate of 0.0001. This is also the preferred learning rate for the event predictor in all variants, except GCN$_{B}$ and MLP (where it is 0.00001). The computation of quartiles over events is also the same as before. \paragraph{Results} The accuracy and MAE values for the BPI'12 (W) dataset are presented in Tables~\ref{table:accuracy_results} and~\ref{table:mae_results}. The MLP model outperforms all other variants in the time-prediction task in most of the scenarios, achieving an overall minimum MAE of 1.3229 days. The results of the event predictor show slight variations: the best performance at individual quartiles and quarters is shown by GCN$_{LB}$ and MLP in different instances. The highest overall accuracy, 65.69\%, is achieved by GCN$_{LB}$. \begin{table*}[t!]
\centering \caption{Comparison of the different models with other reported results on the same benchmark datasets} \begin{threeparttable} \label{table:compare} \begin{tabular}{l|l|l|l|l} \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{2}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Accuracy for \\ Event Prediction\end{tabular}}} & \multicolumn{2}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}MAE (in days) for \\ Time Prediction\end{tabular}}} \\ \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\textit{Helpdesk}} & \multicolumn{1}{c|}{\textit{BPI'12 (W)}} & \multicolumn{1}{c|}{\textit{Helpdesk}} & \multicolumn{1}{c}{\textit{BPI'12 (W)}} \\ \hline CNN~\cite{pasquadibisceglie2019using} & 0.7393 & \textbf{0.7817} & N/A & N/A \\ \hline LSTM (Evermann et al.)~\cite{evermann2016deep} & N/A & 0.623 & N/A & N/A \\ LSTM (Camargo et al.)~\cite{camargo2019learning} & 0.789 & 0.778 & N/A & N/A \\ LSTM (Tax et al.)~\cite{tax2017predictive} & 0.7123 & 0.7600 & 3.75 & 1.56 \\ \hline GCN$_W$ & 0.7954 & 0.6484 & 2.3346 & 1.3468 \\ GCN$_B$ & 0.7731 & 0.6473 & 2.3298 & 1.3837 \\ GCN$_{LB}$ & 0.7863 & 0.6569 & 2.3250 & 1.3765 \\ GCN$_{LW}$ & 0.7830 & 0.6525 & \textbf{2.3095} & 1.3710 \\ \hline MLP & \textbf{0.8201} & 0.6559 & 2.3661 & \textbf{1.3229} \\ \hline Breuker et al. \cite{breuker2016comprehensible} & N/A & 0.719 & N/A & N/A\\ WMP Van der Aalst et al. 
\cite{van2011time} & N/A & N/A & 5.67 & 1.91 \\ \hline \hline GAN+LSTM~\cite{taymouri2020predictive} ($k=2$) \tnote{a}& 0.8668 & 0.7535 & 1.6434 & 1.4004\\ GAN+LSTM~\cite{taymouri2020predictive} ($k=4$) \tnote{a}& 0.8657 & 0.8009 & 1.1505 & 1.1611 \\ GAN+LSTM~\cite{taymouri2020predictive} ($k=6$) \tnote{a} & 0.8976 & 0.8298 & 0.8864 & 0.9390 \\ GAN+LSTM~\cite{taymouri2020predictive} ($k=16$) \tnote{a} & N/A & 0.9019 & N/A & 0.4274 \\ GAN+LSTM~\cite{taymouri2020predictive} ($k=30$) \tnote{a} & N/A & 0.9290 & N/A & 0.3399 \\ \hline LSTM (Lin et al.)~\cite{lin2019mm} \tnote{b} & 0.916 & N/A & N/A & N/A \\ \hline \end{tabular} \begin{tablenotes} \small{ \item a) Our reruns of the code adapted to fit the evaluation strategy of the CNN, LSTM, GCN, and MLP for fair comparison. Note that models are based on a specific $k$ value, i.\,e., they only predict cases of length $k+1$ or longer. \item b) Code was not available. Thus the number cannot be independently confirmed.} \end{tablenotes} \end{threeparttable} \end{table*} \section{Results of Comparing Deep-Learning Variants of CNN, LSTM, GCN, and GAN} \label{sec:results_comparison} We compare the performance of the different deep-learning variants of CNN, LSTM, GCN, and GAN with the MLP and classical approaches. As mentioned in Section \ref{sec:rel_works}, the tasks of event prediction and timestamp prediction have also been explored in various other works, using other techniques. Table~\ref{table:compare} compiles the best results reported in other works and compares them with the results obtained from our GCNs as documented in Section~\ref{sec:q-results}. The values for the GAN by Taymouri et al.~\cite{taymouri2020predictive} were obtained after rerunning the original code with the necessary changes to make it comparable with the other results. This was necessary since the original paper by Taymouri et al.~\cite{taymouri2020predictive} reported only weighted average measures over different case lengths (\emph{k} values).
Also, their train-test split ratio was originally 80:20; we changed it to 66:33, matching the other models and ours~\cite{tax2017predictive,pasquadibisceglie2019using}. The source code for the model introduced by Lin et al.~\cite{lin2019mm} was not available online. Hence, their results have been included in Table~\ref{table:compare} as a separate block. For the classical process-mining model reported by Van der Aalst et al. \cite{van2011time}, we have used the values obtained from the experiments conducted by Tax et al.~\cite{tax2017predictive} on the current datasets. It can be observed from Table \ref{table:compare} that all the model variants introduced in this work perform well in comparison to previous models for the time-prediction task. For the event-prediction task, we have mixed results. On the Helpdesk dataset, all the GCN model variants outperform two LSTM models~\cite{tax2017predictive,camargo2019learning} and the CNN model~\cite{pasquadibisceglie2019using}, but fail to outperform the improved LSTM model introduced by Lin et al. \cite{lin2019mm}. Our models perform poorly on the BPI'12 (W) dataset for event prediction. Regarding the GAN+LSTM~\cite{taymouri2020predictive}, the results show that it is generally a strong performer. It has to be noted, however, that its training procedure is fundamentally different from the other models due to the use of the parameter $k$: subsequences of the processes of length $k$ are used for training, and subsequences of length $k+1$, $k+2$, etc. are used for testing. Thus, the result for, e.\,g., $k=30$ on the BPI'12 (W) dataset only considers the few process cases of length 31 or more. \section{Discussion} \label{sec:discussions} Our experiments show that a simple MLP is able to outperform more sophisticated architectures such as LSTMs and CNNs. However, the MLP does not emerge as the best performer in all of the experiments.
Possible factors behind this performance are the improved feature-vector representation and the fact that the number of classes in the event-prediction task is not that high (9+1 classes for the Helpdesk dataset and 6+1 classes for the BPI'12 (W) dataset). Thus, the simple MLP models were able to effectively learn the correlations between input features and target labels. Our analyses at different quartiles based on the number of events and quarters based on unit-length time show that automated process-prediction results vary at different stages of a business process. For example, on the Helpdesk dataset, the accuracy of event prediction continuously improves over the quartiles based on events. However, for the BPI'12 (W) dataset, it surprisingly improves only until the third quartile and then suddenly drops in the last quartile. A similar observation can be made for MAE over both quartiles based on events and quarters based on duration: here, the scores continuously increase (MAE gets worse), until they drop in the last quartile. Quartiles over events and quarters over unit-length time model two different things. Quarters better reflect the performance as a unit-length progression over time, but can be negatively influenced by a skewed event distribution. Quartiles, in contrast, distribute the events equally. Future experiments would need to be conducted to explain this varying behaviour between datasets and measures. A potential risk to the validity of these results stems from one of the assumptions made during the pre-processing stage: where there were recurring events of the same type in a case, we only included that event type's most recent occurrence. Particularly in the BPI'12 (W) dataset, there are cases where the same event occurs many times.
To understand how this assumption might have affected the results, the same experiments were performed on a different version of the BPI'12 (W) dataset, which had reduced instances of an event following itself~\cite{tax2017predictive}. The results obtained were very similar to those on the original dataset. Comparing the different models has in general been very difficult, due to different train-test split ratios and different training procedures. Following~\cite{tax2017predictive,pasquadibisceglie2019using}, we have used $2/3$ of the data for training and $1/3$ for testing, while preserving the chronological nature of the data. Other works like \cite{breuker2016comprehensible,camargo2019learning} have also used a ratio comparable to ours, namely 70:30 for training and testing. Only the GAN model \cite{taymouri2020predictive} had originally used an 80:20 split, and the work carried out by Lin et al.~\cite{lin2019mm} split the data in a 7:2:1 ratio. Since the GAN code is available, we adapted it to the same train-test split and reran it with $25$ epochs, as stated in the paper, for different values of $k$. The code for the LSTM by Lin et al. is not available, as also noted by Taymouri et al.~\cite{taymouri2020predictive}, and thus their results cannot be independently confirmed. However, this study includes three other strong LSTM models, which are directly comparable. A key difference of the GAN model is its training procedure, which involves windows of different case lengths (the \emph{k} values), whereas our training procedure does not differentiate between case lengths. For example, the GAN model with $k=30$ is trained on subsequences of length 30 from the processes in the BPI'12 (W) dataset. For testing, only the remaining few process cases of length 31, 32, etc. are used.
Thus, the GAN results~\cite{taymouri2020predictive} cannot be compared directly to any of the other models, which are designed to make predictions on cases of any length, but are reported in Table~\ref{table:compare} for completeness. The major impact of this work lies in the observation that there is no silver-bullet method when it comes to business-process prediction. It can be observed that the MLP is a strong baseline and in many cases outperforms complex neural networks like the LSTM, GCN, and CNN. However, interestingly, there are cases where the MLP performs comparably poorly, such as predicting the activity type in the BPI'12 (W) dataset. There have been other works which report similar behaviour of an MLP baseline for classification tasks~\cite{DBLP:conf/um/GalkeMVS18,IJCNN-GalkeEtAl-2021,DBLP:conf/jcdl/MaiGS18}. Thus, an interesting direction for future work is to understand why MLPs perform well on certain datasets, outperforming strong models, while their performance is low on other datasets. It would also be interesting to look into other ways of representing the feature vector. \section{Conclusions} \label{sec:conclusions} Our experiments show that the MLP is a strong baseline for the tasks of event prediction and time prediction in business processes. However, the MLP is not the clear best-performing model overall. Furthermore, the detailed analyses at different quartiles based on the number of events and quarters based on unit-length time show that automated process-prediction results vary at different stages of a business process. Hence, care must be taken when evaluating and applying business-process prediction models. The source code for this work is available at: \url{https://github.com/ishwarvenugopal/GCN-ProcessPrediction} \begingroup \setstretch{0.9} \setlength\bibitemsep{0pt} \printbibliography \endgroup \end{document}
\section*{INTRODUCTION}\label{sec:intro} \par Seagrass meadows are one of the most valuable structural elements in marine coastal areas, since they provide ecosystem services in the form of nutrient supply, refuge, and nursery grounds to many species of fishes and invertebrates \cite{Costanza_1997}. In addition to supporting marine biodiversity, they create architectural structure as benthic producers, contribute to water quality and sediment stabilization \cite{Orth_2006}, protect coastlines from strong waves \cite{FONSECA1992565,S_nchez_Gonz_lez_2011}, and are responsible for globally significant carbon sequestration \cite{bg-2-1-2005}. Moreover, seagrass beds support fisheries and provide livelihoods for millions of people in coastal communities \cite{Watson_1993}. Despite their relevance, seagrass ecosystems are under global threat due to anthropogenic impact \cite{Orth_2006,Hughes_2009}. Eutrophication, coastal development, water pollution, increased mooring activity, competition with invasive species, and global warming are some of the many causes that have led seagrasses to experience an accelerating decline in the last decades \cite{Waycott:2009aa}. \vspace{0.3cm} \par The alarming decline of seagrass populations worldwide requires immediate and strategic actions for marine coastal protection. Mathematical models provide a theoretical framework to study the most relevant mechanisms that govern the dynamics of these ecosystems, and they can predict possible future scenarios under different environmental conditions. They can also evaluate the resilience of the existing population to stress factors and identify tipping points, where a slight change in conditions may cause irreversible loss. Thus, mathematical models constitute an essential tool to assess the health of ecosystems and to support more informed decisions for the sustainable management of marine coastal areas.
An agent-based spatial model for seagrasses and other organisms that exhibit clonal growth was proposed in \cite{TS2005}. This model collects detailed information on each shoot and, by iterating a set of empirically-based clonal growth rules, reproduces the non-linearities in the dynamics of a meadow \cite{Sintes_2006}. The applications of the numerical simulations produced by the model are diverse, ranging from assessing the CO$_2$ capture potential of seagrasses \cite{CDuarteCO2_2013} to estimating the age of living meadows \cite{Arnaud_Haond_2012}. A macroscopic description of the agent-based model was recently proposed to predict the spontaneous formation of spatial patterns in seagrass meadows at the seascape level \cite{Ruiz-Reynes:2017aa}. In this model, the change in the seagrass density obeys a set of coupled first-order partial differential equations that include non-linear local and non-local interaction terms. By forgoing the detailed information on shoot and ramet development, this macroscopic model works efficiently at larger spatial scales, and it can be used to study the underlying mechanisms behind self-organisation \cite{Ruiz_Reyn_s_2019, Ruiz_Reyn_s2_2020, sulfurs}. \vspace{0.3cm} \par Current models in the literature have not considered interactions among different species. Seagrass ecosystems are commonly constituted by several interacting species that form spatial distributions ranging from perfectly separated domains to mixed meadows. The study of interspecific interactions is key to identifying the main processes that shape the distribution of the seascape. For instance, the introduction of invasive species that compete for resources with native ones can drastically affect the habitat conditions \cite{norton1976sargassum,de_Vill_le_1995, Al_s_2016}. The tropicalization of temperate latitudes due to global warming aggravates the spread of exotic species that are more resilient to higher water temperatures \cite{Rius_2014, Verg_s_2014,Wesselmann_2021}.
Therefore, the addition of inter-specific interactions to the present numerical model is necessary to predict the dynamics of seagrasses under different global warming scenarios, since each species has a different response to thermal stress \cite{Savva_2018, Collier_2011}. \vspace{0.3cm} \par In this work, we introduce a generalization of the agent-based model \cite{TS2005} that includes a cross-interaction term among clonal species, either seagrasses or seaweeds. This interaction is implemented in the local interaction term through coupling parameters that quantify the strength of the interaction between any pair of species. These parameters cannot be directly measured and need to be inferred indirectly from field observations. We test the model and explore the role of the coupling for the specific case of the seagrass-seaweed interaction between {\it Cymodocea nodosa} and {\it Caulerpa prolifera}. These two species commonly form mixed meadows, and several studies have shown that they negatively influence each other \cite{TUYA20131, PEREZRUZAFA2012101}. In this article, we propose a systematic way to fix the interaction parameters of the model and reproduce field observations of mixed meadows of {\it C. nodosa} and {\it C. prolifera} in the Alfacs Bay (Ebro River Delta). \vspace{0.3cm} \section*{MATERIALS AND METHODS} \subsection*{Numerical Model} \par In this article we propose a numerical model to study the interactions among different species of seagrasses. Our model follows clonal growth rules similar to those formulated in \cite{TS2005, Sintes_2006} for non-interacting seagrasses, which we briefly summarize here.
In their model, the development of clonal networks is simulated using a set of ecologically relevant parameters that can be easily derived from empirical observations, such as: the rhizome elongation rate $[v]$, which sets the horizontal spread of the clone; the branching rate $[\nu_0]$, which controls the capacity of the clone to form dense networks; the branching angle $[\phi]$, which determines the efficiency of the space occupation; the spacer length $[\delta]$, which measures the length of the piece of rhizome between consecutive shoots; and the shoot mortality rate $[\mu]$ \cite{BELLTOMLIN,1bb9fcb800914db2af54226c5b443c5c}. The simulation starts by placing a seed (a shoot carrying an apical meristem) and assigning to it a randomly oriented unitary vector $\hat u$ that sets the direction of growth of the rhizome. At each iteration, the following steps are repeated: \begin{enumerate} \item A rhizome, originating in a randomly selected apex, is proposed to grow from its current position, $\vec{r}_0$, to $\vec{r}=\vec{r}_0+\delta \hat u$. The proposal is accepted if no other shoot is present within an exclusion area of radius $\sigma < \delta$ centered at $\vec{r}$. The value of $\sigma$ is set to avoid multiple shoots occupying the same position and to preserve the shoot density reported in natural stands of the species. Then, the apex is relocated at $\vec{r}$, where a new shoot will develop. In this process, the direction of growth, $\hat u$, does not change. \item Time is increased as $\Delta t= \delta / (v N_a(t))$, where $N_a(t)$ is the number of apices at time $t$. \item A new branch, holding a growing apex, may develop at $\vec{r}$ with a probability $p_{\nu}(t)=\nu_0 \Delta t N_a(t)$. The new branch will extend along a new unitary vector $\hat u'$ forming an angle $\phi$ with $\hat u$, on the right or left side of $\hat u$, randomly chosen. Only one branch is possible at the position where the apex is located.
\item Within this time step, a number of shoots are removed from the meadow with a probability $p_{\mu}(t)=\mu \Delta t / N_s(t)$, where $N_s(t)$ is the number of shoots at time $t$. Shoot mortality is assumed to be an age-independent event \cite{Duarte_1994}. \end{enumerate} In this work, an alternative and more efficient exclusion procedure is implemented. A characteristic value of the shoot density, $\rho_{max}$, is set for each of the competing species according to empirical observations. A square grid, representing the transects placed in the meadow, is superimposed on top of the continuum space. The grid spacing is set to $20 \, cm$. When a rhizome is proposed to extend into a cell whose density has reached its saturation value $\rho_{max}$, it will not advance in that iteration. This method generates the same results as the exclusion-area principle and substantially reduces the computing time. \vspace{0.3cm} \par The application of the above-mentioned growth rules leads to a homogeneous spatial development of the clones that strongly depends on the balance between the branching and the shoot mortality rates \cite{TS2005}, but is not able to reproduce the reported self-organized marine vegetation patterns, such as stripes or fairy circles \cite{MMS12306,Borum2013, Frederiksen_2004, Pasqualini1999EnvironmentalII, Ruiz-Reynes:2017aa, van_der_Heide_2010}. This problem has been addressed by assuming non-linear density-dependent growth rates that include facilitation and competitive interactions \cite{Ruiz-Reynes:2017aa}.
Although the pattern formation in single-species meadows is directly related to the presence of non-local interactions, in the case of interacting species, and for the sake of simplicity, we will assume a local density-dependent branching rate of the form: \begin{equation} \nu(\rho)=\nu_0+\alpha \hat\rho \left (1-\hat\rho \right ), \label{singleeq} \end{equation} where $\nu_0$ is the intrinsic branching rate that depends on external factors such as temperature or irradiance \cite{Duarte1989}. The local density dependence includes two terms: a facilitation term, assumed to be linear, that results from the positive contribution of neighbouring plants, which dissipate the wave energy and prevent shoot removal; and a non-linear competitive term that determines the environmental carrying capacity. The reduction in the branching rate results from the local competition for natural resources, such as $CO_2$, and from self-shading in dense meadows \cite{Invers1997EffectsOP}. $\hat\rho = \rho/\rho_{max}$ is the normalized local density, and $\alpha$ is a coefficient that controls the strength of the interaction. Given the parabolic shape of the interaction, the growth of over- and under-populated areas is penalized, whereas growth in regions around an optimal density $\rho = \rho_{max}/2$ is favoured. The shoot mortality rate, $\mu$, is kept fixed. \vspace{0.3cm} \par Equation \eqref{singleeq} can be easily extended to consider the interaction among $N$ species as follows. We define a normalized local density for species $i=1, \ldots, N$ as: \begin{equation} \hat \rho_i={1\over \rho_{max,i}}{\left(\rho_i + \sum\limits_{j \ne i}\gamma_{ij} \rho_j\right)} \,, \label{rhodef} \end{equation} where $\rho_{max,i}$ is the saturation density of the $i$-th species and $\gamma_{ij}$ is the coupling coefficient between species $i$ and $j$. The higher its value, the more species $i$ is affected by the presence of species $j$, and vice versa.
Setting $\gamma_{ij}=0$ recovers the single-species case. This normalized density $\hat \rho_i$ is used in Equation \eqref{singleeq} to evaluate the branching rate of the $i$-th species, $\nu_i(\rho_i)$. Once the system is initialized with a random distribution of seeds of the competing species, the simulation proceeds as follows: \begin{enumerate} \item Since species have different characteristic growing times, $\tau _i={\delta_i/ v_i}$, at each iteration one of the species is selected with probability $p_i = \tau_i / \left ( \sum\limits_{i} \tau_i \right )$. \item The rhizome that originates in the $n^{th}$ apex of the $i$-th species, randomly selected, is proposed to extend over a distance $\delta_i \hat u_i^{(n)}$. \item The apex is relocated to its new position and a new shoot develops only if the normalized local density in the corresponding cell, given by Equation \eqref{rhodef}, fulfils $\hat \rho_i < 1$. \item Time is increased by $\Delta t = \delta_i/(v_i N_a^T(t))$, where $N_a^T(t)$ is the total number of apices of all species at time $t$. \item A new branch with a growing apex will develop according to the branching rate $\nu_i(\rho_i)=\nu_{0i}+\alpha_i \hat\rho_i (1-\hat\rho_i)$ with probability $p_{\nu,i}(t)=\nu_i(\rho_i) \Delta t N_{a,i}(t)$, with $N_{a,i}(t)$ the number of apices of the $i$-th species. \item During this time step, a number of shoots of the $i$-th species are removed with probability $p_{\mu,i}(t)=\mu_i \Delta t / N_{s,i}(t)$, with $N_{s,i}(t)$ the number of shoots of the $i$-th species. \end{enumerate} The coupling coefficients $\gamma_{ij}$ cannot be fixed by direct measurements and must be inferred indirectly by comparing the outcomes of the model to field observations. In this work, we consider the seagrass-seaweed interaction between {\it C. nodosa} and {\it C. prolifera}, which will be used as a test case for the proposed model.
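As a minimal sketch of Equations \eqref{singleeq} and \eqref{rhodef}, the density-dependent branching rate with cross-species coupling can be written as follows. All parameter values below are illustrative, not the fitted ones:

```python
# Minimal sketch of Eqs. (1)-(2): density-dependent branching rate for
# species i with cross-species coupling. Parameter values are illustrative.

def normalized_density(i, densities, rho_max, gamma):
    """Eq. (2): normalized local density of species i in one grid cell.
    `gamma[i][j]` is the coupling coefficient of species i to species j."""
    cross = sum(gamma[i][j] * densities[j]
                for j in range(len(densities)) if j != i)
    return (densities[i] + cross) / rho_max[i]

def branching_rate(i, densities, rho_max, gamma, nu0, alpha):
    """Eq. (1): nu_i = nu0_i + alpha_i * rho_hat_i * (1 - rho_hat_i)."""
    rho_hat = normalized_density(i, densities, rho_max, gamma)
    return nu0[i] + alpha[i] * rho_hat * (1.0 - rho_hat)

# Two species with symmetric coupling gamma = 0.5 and equal parameters:
rho_max = [1800.0, 1800.0]           # shoots/m^2
gamma = [[0.0, 0.5], [0.5, 0.0]]
nu0, alpha = [2.0, 2.0], [4.0, 4.0]  # illustrative rates

# At densities (600, 600) the effective density is rho_hat = 0.5, the
# optimum of the parabola, so the rate peaks at nu0 + alpha/4 = 3.0.
nu = branching_rate(0, [600.0, 600.0], rho_max, gamma, nu0, alpha)
```

The parabolic shape penalizes both over- and under-populated cells, as discussed after Equation \eqref{singleeq}.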
The set of parameters used as model inputs for each of the selected species is presented in Table \ref{tab:param} in the Supplementary Material \ref{sec:clonalparam}, according to averaged experimental observations. The saturation density for both species is set to $\rho_{max} = 1800$ shoots$/m^{2}$. In the Supplementary Material \ref{sec:hyst}, we have studied the response of the model proposed above for a single species, taking into account only the local self-interaction term, i.e., Eq. \eqref{rhodef} with $\gamma_{ij}=0$. We find that the system tends to bi-stable density states that strongly depend on the initial conditions. In the rest of this work, we will consider mortalities $\mu_i < \nu_{0,i}$. This condition ensures that the species are out of the bi-stability region. \vspace{0.3cm} \par For two species ($N=2$), the number of coupling parameters reduces to two: $\gamma_{12}$ and $\gamma_{21}$. We will determine the values of these coefficients such that our simulations reproduce the experimental observations. In our simulations we have used a system of size $L \times L$, with $L= 20 \, m $, and periodic boundary conditions. The results have been averaged over 15 independent realizations. \subsection*{Experimental design} \par In order to evaluate the coupling parameters, the model outcome will be compared to the observed shoot densities in mixed meadows of {\it C. nodosa} and {\it C. prolifera}. The field observation was conducted in the Alfacs Bay (Ebro River Delta), a shallow embayment (up to 6 m deep) with an estimated surface area of $50\, km^2$, located in the Northwestern Mediterranean. The fieldwork was performed on the northern shore of the embayment, which is covered by an extensive seagrass meadow of {\it Cymodocea nodosa}.
The northern part of the bay receives seasonal freshwater inputs as runoff from rice paddy fields, with high nutrient and organic matter concentrations, as well as suspended materials that increase water turbidity and, therefore, light deprivation, as compared to the southern part of the bay \cite{PEREZ1994249}. To avoid seasonal variability, plant sampling took place within a short period of time in two consecutive years (June 2018 and June 2019), when the seasonal peak in seagrass biomass and shoot density occurs \cite{2014ECSS..142...23M}. Sampling points were randomly selected within a depth range and distributed along approximately 10 km of seagrass meadow (see Appendix \ref{sec:map} for a map of the study site). At each sampling point, we collected with a hand-held corer (15 cm internal diameter) all the plant and algae biomass to a depth in the sediment of about 30 cm \cite{PerezMateoAlcoverroRomero}. Each sample was rinsed thoroughly in situ with seawater to eliminate the sediment, and the organic material was sealed in plastic bags. Samples from 2018 were immediately frozen and transported to the laboratory, where they were conserved at $-25^\circ$C until processing. Laboratory procedures involved separating the plant and algae material of each sample into the different fractions (shoots, roots and rhizomes for {\it Cymodocea nodosa}, and fronds, rhizoids and roots for {\it Caulerpa prolifera}) and counting the shoots and fronds, respectively. In 2019, in order to optimize the sampling effort, the shoot/frond density was counted directly on board, and no samples were transported to the laboratory. \vspace{0.4cm} \section*{RESULTS} \subsection*{Symmetric interaction between two species} \par We consider a scenario in which two species, {\it C. nodosa} and {\it C. prolifera}, coexist in the same region. For simplicity, we start by assuming symmetric couplings between them, i.e., $\gamma_{12}=\gamma_{21} =\gamma$.
Our main result in this section is the identification of two distinct regions in the space of parameters: $ \gamma > 1$ and $ \gamma < 1$. From the definition of the coupling coefficients in \eqref{rhodef}, we observe that $ \gamma > 1$ implies that the interaction between species is stronger than the self-interaction. In this case, we expect meadows to separate into different domains to minimize the competition. The opposite happens for $\gamma < 1$, where the competition is minimized when the species mix. In the following, we study the outcomes of our mathematical model in these different regimes. All the simulations have been initialized with a random distribution of seeds that ensures a rather homogeneous spatial distribution. The initial shoot density for both species is $\rho_{0,C_n} =\rho_{0,C_p} = 25\, m^{-2}$, where the subindexes $C_n$ and $C_p$ stand for {\it C. nodosa} and {\it C. prolifera}, respectively. The coefficient $\alpha$ has been set to $\alpha_{C_n}=\alpha_{C_p}=4$. The value of the shoot mortality rate for {\it C. nodosa} is fixed to its average field observation value, $\mu_{C_n}=0.92 \, yr^{-1}$ (Table \ref{tab:param}), whereas the one for {\it C. prolifera} has been varied in the region $\mu_{C_p} < \nu_{0,C_p}$ in order to investigate the change in the population of shoots and their spatial distribution. Here, we summarize the most relevant results of our simulations: \vspace{0.4cm} \begin{itemize} \item[-] {$\large\boldsymbol{\gamma > 1}$}: In this case, the competition between shoots of different species is strengthened, which favors the spatial segregation of shoots into different domains. The results for $\gamma = 2$ are shown in Figure \ref{fig:domain}. As expected, soon after the homogeneous initial condition has been set ($t< 5$ yr), well-defined domains separated by clear fronts emerge. For a {\it C. prolifera} shoot mortality rate $\mu_{C_p} > 0.73\, yr^{-1}$, the area covered by {\it C.
prolifera} shrinks, and the space left is occupied by {\it C. nodosa}, which becomes the dominant species. The situation is reversed for $\mu_{C_p} < 0.7\, yr^{-1}$, in which case {\it C. prolifera} colonizes the whole meadow. The coexistence of different species is not possible and, after a transient period characterized by the spatial segregation of species, whose duration depends on $\mu_{C_p}$, the stable steady-state solution corresponds to a homogeneous mono-specific meadow. \item[-] {$\large\boldsymbol{\gamma < 1}$}: The competition between species weakens, which favors the formation of mixed meadows. The results for $\gamma = 0.5$ are illustrated in Figure \ref{fig:mixed05}. The steady-state behaviour is quickly achieved ($t < 2\, yr$), as seen in Figure \ref{fig:mixed05}(a). In this regime, the averaged shoot densities for the different species and different values of the shoot mortality rate of {\it C. prolifera}, $\mu_{C_p}$, are shown in Figure \ref{fig:mixed05}(b). Interestingly, the saturation densities of {\it C. nodosa} vs. {\it C. prolifera} in these mixed meadows follow a linear relationship (see Fig. \ref{fig:mixed05}(c)). The best fit to the data gives a slope of $-0.48\pm 0.01$. This result indicates that the frond density of {\it C. prolifera} increases by roughly two fronds for every {\it C. nodosa} shoot lost. A representative snapshot of a mixed meadow after $t=10\, yr$ of growth at $\mu_{C_p}=0.65\, yr^{-1}$ is shown in Figure \ref{fig:mixed05}(d). \item[-] {$\large\boldsymbol{\gamma = 1}$}: This is the limit case between the two previous regions (Figure \ref{fig:domain1}). In this case, shoots of different species compete equally. The analysis of the saturation densities as a function of $\mu_{C_p}$ (Figure \ref{fig:domain1}(b)) shows that the coexistence region is extremely narrow in comparison with the one found for $\gamma = 0.5$ (Figure \ref{fig:mixed05}(b)).
As a consequence, mixed meadows, which can last hundreds of years (see Figure \ref{fig:domain1}(a) for $\mu_{C_p}=0.70\, yr^{-1}$), are a transient state towards a homogeneous mono-specific meadow. \end{itemize} \begin{SCfigure}\centering \includegraphics[width=0.65\textwidth]{sym2.eps} \caption{\label{fig:domain} \small Change in the spatial organization of {\it C. nodosa} and {\it C. prolifera} in a mixed meadow. The coupling coefficients are $\normalsize\boldsymbol{\gamma_{12}=\gamma_{21}=2}$. The shoot mortality rate for {\it C. nodosa} is $\mu_{C_n}=0.92\, yr^{-1}$, whereas the one for {\it C. prolifera} changes from $\mu_{C_p}=0.77$ to $0.70 \, yr^{-1}$, from top to bottom. Different snapshots are taken at $t=50,\,300\,yr$. Regions colored in blue (red) are dominated by the presence of {\it C. nodosa} ({\it C. prolifera}), and the color green represents regions of coexistence of both species. The color bar shows the difference $\rho_{Cp}-\rho_{Cn}$, in units of shoots$/m^2$. In the right column, the change in the average population of shoots for both species is shown.} \end{SCfigure} \begin{SCfigure}\centering \includegraphics[width=0.6\textwidth]{sym05.eps} \caption{\label{fig:mixed05} \small Mixed meadows of {\it C. nodosa} and {\it C. prolifera} with a coupling coefficient $\normalsize\boldsymbol{\gamma_{12}=\gamma_{21}=0.5}$. (a) Change in the density profile for selected values of $\mu_{C_p}$. Red lines: data for {\it C. prolifera}; blue lines: results for {\it C. nodosa}. (b) The saturation shoot densities of both species vs. $\mu_{C_p}$. (c) The shoot density of {\it C. nodosa} vs. {\it C. prolifera} in mixed meadows for different values of $\mu_{C_p}$. (d) A snapshot of the meadow taken in the steady-state regime at $t=10\, yr$ for a {\it C. prolifera} mortality rate $\mu_{C_p} = 0.65\, yr^{-1}$. The predominance of green color represents a mixed meadow solution.
The color bar shows the difference $\rho_{Cp}-\rho_{Cn}$, in units of shoots$/m^2$.} \end{SCfigure} \begin{figure}[H]\centering \includegraphics[width=1.0\textwidth]{sym1.eps} \caption{\label{fig:domain1} \small {\it C. nodosa} and {\it C. prolifera} meadows for coupling coefficients $\normalsize\boldsymbol{\gamma_{12}=\gamma_{21}=1}$. (Left) Snapshots of the three possible states: mono-specific meadows of {\it C. prolifera} in red ($\mu_{C_p}=0.64\, yr^{-1}$) or {\it C. nodosa} in blue ($\mu_{C_p}=0.76\, yr^{-1}$) (top row), and mixed meadows in green ($\mu_{C_p}=0.70\, yr^{-1}$) (bottom row). The color bar shows the difference $\rho_{Cp}-\rho_{Cn}$, in units of shoots$/m^2$. (a) Change in the density profile of both species for selected values of $\mu_{C_p}$. Mixed meadows are found to be a transitory state. (b) Saturation shoot densities for both species vs. $\mu_{C_p}$.} \end{figure} \subsection*{Experimental analysis and model validation} \par In this section, we study the response of the model to non-symmetric coupling coefficients $\gamma_{12}\neq \gamma_{21} $. One of our main observations is that the results from the symmetric case generalize: the formation of mixed meadows is guaranteed when the coupling coefficients are in the range $0<\gamma_{ij}<1$. In Fig. \ref{fig:slopeslope}, we plot the shoot densities of {\it C. nodosa} ($\rho_{Cn}$) vs. {\it C. prolifera} ($\rho_{Cp}$) for different combinations of interaction parameters in $0<\gamma_{ij}<1$. We find that their steady states are mixed meadow solutions that follow linear relations of the type $\rho_{Cn}={\cal A} \rho_{Cp}+{\cal B}$, as in the symmetric case (Fig. \ref{fig:mixed05}(c)). In Fig. \ref{fig:slopeslope}(a), we kept the mortality of {\em C. nodosa} ($\mu_{C_n}$) fixed, while varying the mortality of {\em C. prolifera} ($\mu_{C_p}$).
Interestingly, we observe that the coupling coefficient $\gamma_{12}$ governs the magnitude of the slope ${\cal A}$, following the relation $\gamma_{12} \sim -{\cal A}$. This relation is expected from the linear stability analysis of the model at equilibrium (see Supplementary Material \ref{sec:hom}). For small values of $\gamma_{12}$, the density $\rho_{Cn}$ is less affected by a change in the mortality $\mu_{C_p}$, resulting in a much smaller slope ${\cal A}$. Changing the other coefficient, $\gamma_{21}$, translates the points along the same line, since a decrease in $\gamma_{21}$ favors the growth of {\em C. prolifera} at the expense of {\em C. nodosa}, and vice versa. An analogous analysis can be done for Fig. \ref{fig:slopeslope}(b), where we kept the mortality $\mu_{C_p}$ fixed and varied $\mu_{C_n}$. Here, the coefficient $\gamma_{21}$ controls the slope of the linear regression. In this case, it follows the inverse relation $\gamma_{21} \sim -{\cal A}^{-1}$, and the coefficient $\gamma_{12}$ does not alter the slope. \begin{figure}[H]\centering \includegraphics[width=1.05\textwidth]{slopeslope.eps} \caption{\small \label{fig:slopeslope} Sensitivity analysis of the model with respect to the coupling parameters $\gamma_{12},\,\gamma_{21}<1$. We plot the shoot density of {\it C. nodosa} vs. {\it C. prolifera} in stable mixed meadows. Each color represents a different combination of the coupling coefficients. Each data point corresponds to a different value of the {\it C. prolifera} mortality rate, $\mu_{Cp}$, in (a), and of the {\it C. nodosa} mortality rate, $\mu_{Cn}$, in (b). The different sets of points fit linear regressions of slope ${\cal A}$, with an error not higher than $2\%$ in any of the cases.} \end{figure} \par After this analysis of the model behavior, it is possible to determine the coupling coefficients $\gamma_{ij}$ in real meadows by comparing the results of our simulations with the experimental data. We measured the averaged shoot density in meadows of {\it C.
nodosa} and {\it C. prolifera} at 106 sampling points in the Alfacs Bay (Ebro River Delta, Spain), shown with purple dots in Figure \ref{fig:dades}. The presence of mixed meadows of these two species indicates values of the couplings $\gamma_{12}, \, \gamma_{21}<1$. Although the data are highly scattered, it is possible to find linear relationships between the shoot densities of {\it C. nodosa} vs. {\it C. prolifera}: $\rho_{Cn}={\cal A} \rho_{Cp}+{\cal B}$. We use the least-squares method to fit the data \cite{Watson_1967}. This method minimizes the errors of one of the data sets and considers the other as a control variable with no error. If we choose to minimize the errors in the measurements of $\rho_{Cn}$, the best fit to the data gives a slope of ${\cal A}=-0.47 \pm 0.11$ (solid blue line). This slope, within its error, is in very good agreement with the model outcome for $\gamma_{12} \sim 0.5$, according to Fig. \ref{fig:slopeslope}(a). The linear regression that minimizes the errors in $\rho_{Cp}$ has a slope of ${\cal A}=- 3.3 \pm 0.7 $ (solid red line), which coincides with the simulations when $\gamma_{21}\sim 0.3$ (Fig. \ref{fig:slopeslope}(b)). Therefore, the numerical model nicely reproduces the field data for $\gamma_{12}= 0.5$ and $\gamma_{21}= 0.3$. For these coupling coefficients, the simulations with fixed $\mu_{Cn}$ and varying $\mu_{Cp}$ generate the dotted blue line, and similarly for the dotted red line, obtained by keeping $\mu_{Cp}$ constant and varying $\mu_{Cn}$. The match between model and data is one of our main results; it establishes $\gamma_{ij}$ as the key parameter connecting the numerical model with the experimental data, providing information on the strength of the interaction between species. \begin{figure}[H]\centering \subfigure{\includegraphics[width=0.75\textwidth]{datamodelv2.eps}} \caption{\label{fig:dades} \small {Comparison between the model outcome and the experimental data}.
The purple dots correspond to experimental measurements of shoot densities of {\it C. nodosa} and {\it C. prolifera} in mixed meadows in the Alfacs Bay (Ebro River Delta, Spain). The solid lines correspond to two types of linear fits of the experimental data, minimizing either the errors in the measurements of the density of {\it C. nodosa} (solid blue line) or the errors in {\it C. prolifera} (solid red line). The dotted lines are the results of our simulations assuming coupling parameters $\normalsize\boldsymbol{\gamma_{12}=0.5,\, \gamma_{21}=0.3}$. The dotted blue line is reproduced by keeping the mortality of {\it C. nodosa} fixed, while varying the mortality of {\it C. prolifera}, and vice-versa for the dotted red line. For a better comparison with the field measurements, the maximum shoot density has been adjusted to the average values found in the Alfacs Bay location ($\rho_{max,Cn}=2300\,m^{-2}$ and $\rho_{max,Cp}=1900\,m^{-2}$). } \end{figure} \par In all previous analyses we assumed an initial condition consisting of a mixed meadow with the same amount of seeds of both species homogeneously distributed over the available space. We now analyze the sensitivity of the results to the initial condition, considering the case in which patches of the two species are initially placed apart from each other. In a square region of size $L^2=40\times40\,m^2$, we locate a homogeneous density of seeds along two vertical stripes, $1\,m$ wide, one of {\it C. nodosa}, centered at $x= -10\,m$, and another of {\it C. prolifera}, centered at $x= 10\,m$ (Figure \ref{fig:ci} ($t= 0\,yr$)). During a short period of time, two separated mono-specific meadows develop. Shortly after that, the two domains collide, forming a clear front (Figure \ref{fig:ci} ($t \sim 8 \,yr$)). At this stage, the two species have reached their maximum shoot density values. This front quickly dilutes, giving rise to a mixed meadow of {\it C. prolifera} and {\it C.
nodosa} (Figure \ref{fig:ci} ($t \sim 19 \,yr$)). The final state of the system is comparable to that obtained assuming a homogeneous initial distribution of seeds (see Figure \ref{fig:mixed05}(d)). \begin{figure}[H]\centering \includegraphics[width=0.85\textwidth]{front.eps} \caption{\label{fig:ci} \small (Left) Spatial distribution, at different time steps, of two competing species: {\it C. nodosa} (blue) and {\it C. prolifera} (red), starting from two well-defined striped patches separated from each other. As time evolves, both species cover all the available space, forming a mixed meadow (green). The color bar shows the difference $\rho_{Cp}-\rho_{Cn}$, in units of shoots$/m^2$. The interaction coefficients are set to $ \gamma_{12}=0.5$ and $\gamma_{21}= 0.3$, and the mortality rates to $\mu_{C_p} = 0.65\, yr^{-1}$ and $\mu_{C_n} = 0.92\, yr^{-1}$ (Right).} \end{figure} \section*{DISCUSSION} \par In this paper we present the results of an agent-based model that investigates the growth dynamics of clonal organisms (like seagrasses and seaweeds) in a scenario of species competition. The interaction among species has been implemented through a coupling parameter, $\gamma_{ij}$, placed in the local interaction term (Eq. \eqref{rhodef}), that quantifies the strength of the interaction between any pair of species. We found that the value of this coupling parameter determines the model outcome, which ranges from separated, well-defined mono-specific domains to mixed meadows. A comparison between the model outcome and field measurements, conducted in the Alfacs Bay, where large meadows of {\it C. nodosa} and {\it C. prolifera} are present, allowed us to determine the coupling coefficients between these two species ($\gamma_{12}=0.5$, $\gamma_{21}= 0.3$); this is one of the main findings of this paper (Fig. \ref{fig:dades}).
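The two complementary linear fits used above (minimizing the errors in $\rho_{Cn}$, or in $\rho_{Cp}$) amount to regressing one density on the other and vice versa. A minimal sketch, with made-up data lying exactly on a line for illustration:

```python
def ols_slope(x, y):
    """Least-squares slope of y against x (all error attributed to y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Illustrative densities (shoots/m^2) on the line rho_Cn = -0.5 rho_Cp + 2000:
rho_cp = [0.0, 1000.0, 2000.0]
rho_cn = [2000.0, 1500.0, 1000.0]

slope_cn = ols_slope(rho_cp, rho_cn)        # errors minimized in rho_Cn
slope_cp = 1.0 / ols_slope(rho_cn, rho_cp)  # errors minimized in rho_Cp
```

For noiseless data both slopes coincide ($-0.5$ here); with scattered field data they differ, as in the $-0.47$ and $-3.3$ fits reported above.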
In our simulations, the choice of the coupling parameters $\gamma_{ij}$ is decisive, since they reflect the type and magnitude of the interaction between species: $\gamma_{12}$ controls the influence of {\it C. prolifera} over {\it C. nodosa}, and vice-versa for $\gamma_{21}$. The asymmetry found in the results is a clear indication that the negative effect of the presence of {\it C. prolifera} over {\it C. nodosa} is higher than vice-versa. The fact that both coefficients are in the $\gamma_{ij}<1$ regime implies that each species competes less with the shoots of the other species than with its own. Therefore, in order to minimize the competition, the species self-organize and form mixed meadows. This effect can plausibly be explained by several biologically relevant mechanisms related to the anatomy, metabolism, or other properties of the plants. Although {\it C. prolifera} and {\it C. nodosa} have comparable clonal growth rates (Table \ref{tab:param}), they have very different root lengths: while {\it C. nodosa} extends into the sediment to a mean depth of $14.1\, cm$, the roots of {\it C. prolifera} only elongate an average of $3\, cm$ \cite{Duarte_1998,Bedinger_2013}. This characteristic allows the shoots of the two species to absorb nutrients from different soil regions, making self-competition higher than the interspecific competition and favoring the appearance of mixed meadows. Therefore, our numerical analysis leads us to conjecture that the difference in root lengths between species is one of the main mechanisms that mediate the formation of mixed meadows of {\it C. nodosa} and {\it C. prolifera}. \vspace{0.3cm} \par Our analysis is consistent with previous results found in the literature. In \cite{TUYA20131}, shoot densities in mixed meadows of {\it C. nodosa} and {\it C. prolifera} were collected at 16 sites across Gran Canaria island (Spain, Atlantic Ocean) in the summer season, when the meadows should be at their optimal growth.
The best linear fit to their data ($ \rho_{1} = {\cal A} \rho_{2} + {\cal B}$), minimizing the errors in the measurements of {\it C. nodosa}, led to a slope ${\cal A}=-0.71 \pm 0.35$, which is consistent with our findings (Fig. \ref{fig:dades}). Despite the similarities found, different slopes can be expected due to the very different conditions between the locations of the two experiments: the sites in our experiment were located in the Ebro River Delta, which is characterized by very calm, shallow, nutrient-rich waters, while the sites in \cite{TUYA20131} were on Gran Canaria island, whose coastline lacks pronounced geographical features and is very exposed to winds and currents. \vspace{0.3cm} \par In \cite{TUYA20131}, an experiment was also performed to establish the direct effect of {\it C. prolifera} over {\it C. nodosa}. In mixed meadows, $100\%$ of the fronds of {\it C. prolifera} were removed. Eight months later, the density of shoots of {\it C. nodosa} had increased $1.5$-fold. This result is also consistent with our findings. When the two species are allowed to grow separately, setting appropriate initial conditions such as those depicted in Fig. \ref{fig:ci}, {\it C. nodosa} patches reach a maximum density of $\rho_{Cn}=2000\,m^{-2}$ (maximum of the solid blue line at $t \sim 8 \,yr$). Once the two species meet and interact, a stable mixed meadow of {\it C. prolifera} and {\it C. nodosa} develops, with densities $\rho_{Cn}=1480\,m^{-2}$ and $\rho_{Cp}=1270 \,m^{-2}$. Thus, isolated meadows of {\it C. nodosa} have an average shoot density $1.35$-times higher than in mixed meadows. Therefore, our model supports the observations made in \cite{TUYA20131}, where it is concluded that the appearance of {\it C. prolifera} partially contributes to the demise of the population of {\it C. nodosa}. Also, in \cite{PEREZRUZAFA2012101}, the changes of seagrass beds in the Mar Menor coastal lagoon (Spain, Mediterranean Sea) were analyzed.
They observed a decrease and almost total loss of {\it C. nodosa} in the deeper areas of the lagoon ($2-7\,m$) after a colonization event of {\it C. prolifera} in the early 1970s. Our simulations support the hypothesis that the invasion of {\it C. prolifera} negatively influences the abundance of {\it C. nodosa}. Still, this cannot solely explain the loss of {\it C. nodosa}, and we should consider other effects that deteriorate the seagrass, such as light scarcity due to new trophic conditions \cite{PEREZRUZAFA2012101,Terrados_1998}. In \cite{BELANDO2021103415}, the most recent study of the distribution of {\it C. nodosa} and {\it C. prolifera} in the Mar Menor, the presence of mixed meadows was studied in the deeper areas of the lagoon (below $-4\,m$). Their data do not show a negative correlation between the biomass of the two macrophytes, which is in disagreement with our findings in Fig. \ref{fig:dades}. The discrepancy between the two observations might be related to the different environmental conditions among the study sites, and it should be further investigated. \vspace{0.3cm} \par Although our model has focused on two species ({\it C. nodosa} and {\it C. prolifera}) as a case study to test our hypothesis, it can be generalized to $N$ species of seagrasses and other clonal plants. We have proposed a way to implement the cross-species interactions through the local interaction term (Eq. \eqref{rhodef}), and we have identified a procedure to determine the coupling coefficients between species, $\gamma_{ij}$, assuming a linear relationship $\rho_{i} = {\cal A}_i \rho_{j} + {\cal B}_i$ with $\gamma_{ij} \sim -{\cal A}_i$. In the case of $N$ interacting species in mixed meadows, we would have $N$ data sets of shoot densities that need to be fitted pair-wise to linear regressions in order to fix the $N(N-1)$ coupling coefficients.
\section*{ACKNOWLEDGMENTS} We are especially grateful to Neus Sanmart{\'i}, Javier Romero, Marta P{\'e}rez, and Jordi Boada for their collaboration in the field observations and data collection. E.LL. and T.S. acknowledge the Research Grants: PRD2018/18-2 funded by LIET from the D. Gral. d'Innovaci{\'o} i Recerca (CAIB), RTI2018-095441-B-C22 funded by MCIN/AEI/10.13039/501100011033 and by European Regional Development Fund - "A way of making Europe", and MDM-2017-0711 funded by MCIN/AEI/10.13039/501100011033. N.M. and E.M. also acknowledge the Spanish Ministry of Science, Innovation and Universities (SuMaEco RTI2018-095441-B-C21). \section*{AUTHORSHIP} NM, EM, ELL, and TS conceived this study. ELL developed the models, produced the figures, and analyzed the data. EM carried out the field observations, and wrote the experimental methodology section. NM, EM, ELL, and TS analyzed the results. ELL wrote the initial draft and all coauthors contributed to editing the manuscript.
\section{Introduction} Social stratification refers to the classification of individuals into groups or classes based on shared socio-economic or power conditions within a society \cite{Grusky}. A characteristic feature of stratified societies is that individuals tend to interact more strongly with others in their own group. This tendency has been observed in class endogamy \cite{Belding}, scientific communities and citations \cite{Lehman}, population biology \cite{Vasquez}, human capital \cite{Martins}, opinion formation \cite{Martin}, epidemic dynamics \cite{Masuda}, and economic exchanges between banks \cite{Inaoka}. Recently, the effects of social stratification on the wealth distribution of a system of interacting economic agents have been studied \cite{Laguna}. In this model, agents behave as particles in a gas and can interact with each other at random, as in most models that have been proposed for economic exchange \cite{Yakovenko,Chattarjee,Slanina}. However, many real social and economic systems can be described as complex networks, such as small-world networks and scale-free networks \cite{Watts,Barabasi,Newman}. Some models have considered economic dynamics on networks; for example, Refs.~\cite{Pianegonda1} and \cite{Garlaschelli} studied the effects of the network topology on wealth distributions, while Ref.~\cite{Ausloos} proposed a model of a closed market on a fixed network with free flow of goods and money. In this paper, we study the effects of the topology of a network on the collective behavior of a system subject to stratified economic exchanges. Our model, based on the interaction dynamics in a stratified society proposed by Laguna et al. \cite{Laguna}, is presented in Sec.~2. The inclusion of a spatial support allows us to employ concepts from the dynamics of spatiotemporal systems in economic systems.
Our results indicate that the size of the local neighborhood plays an important role in achieving an equitable distribution of wealth in systems possessing stratified economic exchange. Conclusions are presented in Sec.~3. \section{The Model} We consider a network defined by following the algorithm of construction of small-world networks originally proposed by Watts and Strogatz \cite{Watts}. We start from a regular ring with $N$ nodes, where each node is connected to its $k$ nearest neighbors, $k$ being an even number. Then, each connection is rewired at random with probability $p$ to any other node in the network. After the rewiring process, the number of elements coupled to each node -- which we call neighbors of that node -- may vary, but the total number of links in the network is constant and equal to $Nk/2$. The condition $\log N \leq k \leq N$ is employed to ensure that no node is isolated after the rewiring process, which results in a connected graph. For $p = 0$, the network corresponds to a regular ring, while for $p = 1$ the resulting network is completely random. With this algorithm, a small-world network is formed for values of the probability in the intermediate range \cite{Watts}. A small-world network is characterized by a high degree of clustering, as in a regular lattice, and a small characteristic path length compared to the size of the system. We consider a population of $N$ interacting agents placed at the nodes of this network. At a discrete time $t$, an agent $i$ ($i=1,\ldots,N$) is characterized by a wealth $w_i(t)\geq 0$ and a fixed risk aversion factor $\beta_i$, where the values $\beta_i$ are randomly and uniformly distributed in the interval $[0,1]$. The quantity $(1-\beta_i)$ measures the fraction of wealth that agent $i$ is willing to risk in an economic interaction \cite{Chattarjee,Chakrabarti,Iglesias}. The initial values $w_i(0)$ are uniformly distributed at random in the interval $w_i(0) \in [0,W]$.
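The network construction just described can be sketched in Python. This is a minimal illustration of the Watts-Strogatz rewiring (the data structures and the seeded generator are our own choices):

```python
import random

def small_world(N, k, p, seed=0):
    """Ring of N nodes, each linked to its k nearest neighbours (k even);
    every link is rewired with probability p, avoiding self-loops and
    duplicate links, so the total number of links stays N*k/2."""
    rng = random.Random(seed)
    ring = [(i, (i + d) % N) for i in range(N) for d in range(1, k // 2 + 1)]
    graph = {frozenset(e) for e in ring}
    for i, j in ring:
        if rng.random() < p:
            graph.discard(frozenset((i, j)))
            new = rng.randrange(N)
            # redraw until the new endpoint is neither i nor an existing link
            while new == i or frozenset((i, new)) in graph:
                new = rng.randrange(N)
            graph.add(frozenset((i, new)))
    return graph
```

With $p=0$ the regular ring is left intact, $p=1$ gives a completely random graph, and intermediate $p$ yields the small-world regime described above.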
We assume that the total wealth of the system, $W_T=\sum_i w_i(t)$, is conserved. For simplicity, we assume that the stratification of economic classes is uniform, i.e., all classes have the same width, denoted by a parameter $u$. Thus, agents $i$ and $j$ belong to the same economic class if they satisfy the condition $|w_i(t) - w_j(t)| < u$. Stratified economic exchange means that only agents belonging to the same economic class may interact. As a consequence of these interactions, the wealth of the agents in the system will change. At each time step $t$, the dynamics of the system is defined by iterating the following steps: \begin{enumerate} \item Choose an agent $i$ at random. \item Choose randomly an agent $j \neq i$ from the set of neighbors of agent $i$, i.e., $j \in [ i-k/2, i+k/2 ]$. \item Check if they belong to the same economic class, i.e., \begin{equation} |w_i(t) - w_j(t)| < u. \end{equation} Repeat steps (1) and (2) until the condition in step (3) is satisfied. \item Compute the amount of wealth $\Delta w(t)$ to be exchanged between agents $i$ and $j$, defined as \begin{equation} \Delta w(t) = \mbox{min}[(1-\beta_i)w_i(t);(1-\beta_j)w_j(t)]. \end{equation} \item Calculate the probability $r$ of favoring the agent that has less wealth between $i$ and $j$ at time $t$, defined as \cite{Laguna,Iglesias} \begin{equation} r=\frac{1}{2} + f \times \frac{|w_i(t) - w_j(t)|}{w_i(t) + w_j(t)}, \end{equation} where the parameter $f \in [0,1/2]$. \item Assign the quantity $\Delta w(t)$ with probability $r$ to the agent having less wealth and with probability $(1-r)$ to the agent with greater wealth between $i$ and $j$. \end{enumerate} The parameter $f$ describes the probability of favoring the poorer of the two agents when they interact. For $f=0$ both agents have equal probability of receiving the amount $\Delta w(t)$ in the exchange, while for $f=1/2$ the agent with less wealth has the highest probability of receiving this amount.
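A single exchange event (steps 4-6 above) can be sketched as follows. This fragment is illustrative only, and assumes the class condition of step (3) has already been checked by the caller:

```python
import random

def exchange(w, beta, i, j, f, rng=random):
    """One wealth exchange between same-class agents i and j.
    The amount dw moves from the loser to the winner, so total wealth
    is conserved; dw <= (1-beta)*w for both, so no wealth goes negative."""
    dw = min((1.0 - beta[i]) * w[i], (1.0 - beta[j]) * w[j])
    r = 0.5 + f * abs(w[i] - w[j]) / (w[i] + w[j])
    poor, rich = (i, j) if w[i] <= w[j] else (j, i)
    # with probability r the poorer agent wins the amount dw
    winner, loser = (poor, rich) if rng.random() < r else (rich, poor)
    w[winner] += dw
    w[loser] -= dw
```

Conservation of $W_T$ holds because $\Delta w$ is only transferred within the pair, and non-negativity because $\Delta w$ never exceeds the fraction of wealth either agent put at risk.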
In a typical simulation following these dynamical rules, and after a transient time, this dynamical network reaches a stationary state where the total wealth $W_T$ has been redistributed between the agents. The spatial localization of the interacting economic agents allows us to view this system as a spatiotemporal dynamical system. Figure~\ref{fig1} shows the spatiotemporal patterns of wealth arising in a network with $k=2$ and $p=0$, corresponding to a regular one-dimensional lattice with periodic boundary conditions, for different values of the parameters. In analogy to many nonlinear spatiotemporal dynamical systems \cite{Manneville}, this network of economic agents can exhibit three basic states depending on parameter values: a stationary, coherent or laminar state (top panel), where the wealth of each agent $i$ maintains a constant value; an intermittent state (center panel), characterized by the coexistence of coherent and irregular domains evolving in space and time; and a turbulent state (bottom panel), where the wealth values change irregularly in both space and time. \begin{figure}[h] \begin{center} \includegraphics[scale=0.17,angle=0]{Fig-1a} \vspace{0.1cm} \includegraphics[scale=0.17,angle=0]{Fig-1b} \vspace{0.1cm} \includegraphics[scale=0.17,angle=0]{Fig-1c} \end{center} \caption{Spatiotemporal patterns in a one-dimensional lattice with $k=2$, size $N=50$ and $W=1$, after discarding $5000$ time steps. The vertical axis describes the ordered position $i$ of the agents in the lattice, increasing from bottom to top. The horizontal axis represents time, increasing from left to right. The wealths $w_i(t)$ evolving in time are represented by a color code. The color palette goes from light gray (the poorest agent) to dark gray (the richest agent). Top: laminar state; $u=10$, $f=0.001$. Center: spatiotemporal intermittent state; $u=3$, $f=0.4$.
Bottom: turbulent state; $u=30$, $f=0.4$.} \label{fig1} \end{figure} To characterize the transition from the laminar to the turbulent state, via spatiotemporal intermittency, we employ the average wealth exchanged over long times, a quantity that we call the activity of the system and define as \begin{equation} \label{activity} A=\frac{1}{T-\tau}\sum_{t=\tau}^{T} \Delta w(t), \end{equation} where $\tau$ is a transient number of steps that are discarded before taking the average. The laminar phase is associated with the value $A=0$, where no transactions take place in the asymptotic state of the system, while the turbulent phase is characterized by $A >0$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.39,angle=90]{Fig-2a} \vspace{0.3cm} \includegraphics[scale=0.39,angle=90]{Fig-2b} \end{center} \caption{(a) Activity as a function of $u$ in a regular lattice ($p=0$) with fixed $k=4$, for different values of $f$. The curves correspond to $f=0.5$ (diamonds); $f=0.3$ (circles); and $f=0.1$ (squares). (b) Activity as a function of $f$ in a regular lattice with $k=4$, for $u=1$ (squares), and $u=30$ (circles).} \label{fig2} \end{figure} In our calculations, we have fixed the following parameter values: size $N=10^4$, $\tau=2\times 10^4$, $T=10^8$, and $W=1$. Each value of the statistical quantities shown has been averaged over $100$ realizations of initial conditions. Figure~\ref{fig2}(a) shows the activity in the system as a function of the width of the economic classes $u$ for different values of the parameter $f$. The transition from the laminar phase to the turbulent state occurs around $u\approx W=1$ in all cases. When the width $u$ reaches the maximum initial wealth of the agents, exchanges may take place in every neighborhood, and this is reflected in the increase in the activity in the system.
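Given any routine that performs one exchange and returns $\Delta w(t)$, the activity defined above is a plain time average over the post-transient steps. The `step` callable below is a stand-in for such a routine, not the paper's code:

```python
def activity(step, T, tau):
    """Activity A: wealth exchanged per step, averaged over the
    T - tau steps remaining after the transient (tau < T assumed)."""
    total = 0.0
    for t in range(T):
        dw = step()          # one exchange, returning Delta w(t)
        if t >= tau:
            total += dw
    return total / (T - tau)
```

A laminar asymptotic state, in which `step()` eventually always returns zero, yields $A=0$; turbulent states give $A>0$.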
For $u > W$, interactions continue to occur in the entire system and the total wealth exchanged reaches the maximum amount allowed by the favoring parameter $f$. Thus, the activity in the system reaches an almost constant value in this region, for a given value of $f$. On the other hand, Figure~\ref{fig2}(b) shows the activity in the system as a function of $f$. The increment in $f$ enhances the transfer of wealth from richer to poorer agents, which narrows the wealth differences between neighbors. Therefore, the probability that neighboring agents belong to the same economic class increases, and so does the probability that they exchange wealth. As a consequence, the activity in the system increases with increasing $f$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.39,angle=90]{Fig-3a} \vspace{0.3cm} \includegraphics[scale=0.39,angle=90]{Fig-3b} \end{center} \caption{(a) Activity as a function of $k$, on a regular lattice with $p=0$, fixed $f=0.5$, for $u=1$ (squares) and $u=10$ (circles). (b) Activity as a function of $p$, on a network with $k=4$, for $f=0.5$ (triangles), $f=0.3$ (circles), and $f=0.1$ (squares).} \label{fig3} \end{figure} To explore the effects of the network topology on the collective properties of the system, we show in Figure~\ref{fig3}(a) the activity as a function of the size of the neighborhood $k$ in the network, for different values of $u$. The range of the local interaction, given by $k$, has little effect on the activity. Similarly, Figure~\ref{fig3}(b) shows the activity as a function of the rewiring probability in the network, for fixed $k=4$. We see that the exchange activity in the system is practically unaffected by the topological properties of the network, represented by $k$ and $p$. Thus, the parameters of the dynamics, $f$ and $u$, are more relevant for the increase in the activity in the system than the topological parameters of the underlying network.
An important variable in economic dynamics is the Gini coefficient, a statistical quantity that measures the degree of inequality in the wealth distribution in a system, defined as \cite{Gini} \begin{equation} G(t)=\frac{1}{2N}\frac{\sum_{i,j=1}^N |w_i(t) - w_j(t)|}{\sum_{i=1}^N w_i(t)}. \end{equation} A perfectly equitable distribution of wealth at time $t$, where $w_i(t)=w_j(t), \forall i,j$, yields a value $G(t)=0$. The other extreme, where one agent has the total wealth $\sum_{i=1}^N w_i(t)$, corresponds to a value $G(t)=1$ in the large-$N$ limit. The random, uniform distribution of wealth used as initial condition has $G(0) \approx 0$, and the average initial wealth per agent is $\langle w_i(0)\rangle=0.5$. Figure~\ref{fig4}(a) shows the asymptotic, statistically stationary Gini coefficient as a function of the width of the economic classes $u$, for different values of the parameter $f$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.39,angle=90]{Fig-4a} \vspace{0.3cm} \includegraphics[scale=0.39,angle=90]{Fig-4b} \end{center} \caption{(a) Gini coefficient at $t=10^8$ as a function of $u$ with fixed $k=4$ for different $f$. The curves correspond to $f=0.5$ (circles); $f=0.3$ (squares); and $f=0.1$ (triangles). (b) Gini coefficient at $t=10^8$ as a function of the parameter $f$ for different values of $k$ and fixed $u=30$. The curves correspond to $k=2$ (squares); $k=4$ (circles); and $k=N-1$ (triangles).} \label{fig4} \end{figure} For small values of $u$, there is a small probability of interaction between neighbors, and therefore the initial random, uniform distribution of wealth with $G\approx 0$ is maintained in the system, manifested in a low value of $G$. As $u$ increases, the transfer of wealth between neighbors also increases, producing a redistribution of wealth reflected in the increase of the Gini coefficient.
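The definition above translates directly into code; this is a straightforward sketch of the paper's formula, quadratic in the number of agents:

```python
def gini(w):
    """Gini coefficient of a wealth list, following the definition
    G = sum_{i,j} |w_i - w_j| / (2 N sum_i w_i)."""
    N = len(w)
    total = sum(w)
    pair_diffs = sum(abs(wi - wj) for wi in w for wj in w)  # all ordered pairs
    return pair_diffs / (2 * N * total)
```

Equal wealths give $G=0$ exactly, while a single agent holding everything gives $G=(N-1)/N$, which approaches $1$ for large $N$, consistent with the two extremes discussed above.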
A maximum of $G$ occurs around $u \approx W=1$, when each agent can initially interact with its neighbors, and therefore a greater variation with respect to the initial uniform distribution of wealth occurs in the system. For larger values of $u$, all local interactions are allowed initially. In this regime, a redistribution of wealth should occur as the probability $f$ of favoring the poorest agents is incremented. This can be seen in Figure~\ref{fig4}(a) as a decrease in the values of $G$, for $u>W$, as $f$ increases. Figure~\ref{fig4}(b) shows the Gini coefficient as a function of the probability $f$, for different sizes of the neighborhood $k$. The values of $G$ are almost constant for small values of $f$, but they decrease for larger values of $f$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.39,angle=90]{Fig-5a} \vspace{0.3cm} \includegraphics[scale=0.39,angle=90]{Fig-5b} \end{center} \caption{(a) Gini coefficient at $t=10^8$ as a function of $k$ on a regular lattice with $p=0$, for $f=0.1$, $u=1$ (squares) and $u=10$ (circles). (b) Gini coefficient at $t=10^8$ as a function of the rewiring probability $p$ on a network with $k=4$, with fixed $u=10$ and $f=0.5$ (diamonds), $f=0.3$ (circles), and $f=0.1$ (squares).} \label{fig5} \end{figure} In order to study the influence of the topology of the network on the distribution of wealth, Figure~\ref{fig5}(a) displays $G$ as a function of $k$, on a regular lattice with $p=0$, for different values of $u$. Increasing the number of neighbors $k$ contributes to an increase in the inequality of the wealth distribution, as measured by $G$. Note that $G$ tends to an asymptotic, large value as $k \rightarrow N-1$, corresponding to a fully connected network, i.e., one where any agent can interact with any other, and the notion of spatial location is lost. This corresponds to the most commonly studied situations in models of economic exchange \cite{Laguna}.
Increasing the spatial range of the interactions, represented by $k$, implies both an increment in the clustering coefficient and a decrease in the characteristic path length of the network. To see which of these two topological properties of the network is more relevant for the variation of the Gini coefficient observed in Figure~\ref{fig5}(a), we plot in Figure~\ref{fig5}(b) $G$ as a function of the rewiring probability $p$, for different values of the parameter $f$. Note that there is little change in the values of $G$ as $p$ increases, in comparison to the larger variation experienced by $G$ when $k$ is increased in Figure~\ref{fig5}(a). The characteristic path length in the network decreases in both cases, but the clustering coefficient does not increase over the range of values of $p$ shown in Figure~\ref{fig5}(b) \cite{Watts}. Thus, the increment in the Gini coefficient observed in Figure~\ref{fig5}(a) can be mainly attributed to the increase in the clustering coefficient of the network when $k$ is varied. In other words, the size of the neighborhood is more relevant for the occurrence of an equitable distribution of wealth than the presence of long-range connections in a system subject to a stratified economic exchange. \section{Conclusions} The inclusion of a network or a spatial location for interacting economic agents allows the use of concepts from spatiotemporal dynamical systems in economic models. We have considered a model of stratified economic exchange defined on a network and have shown that different spatiotemporal patterns can occur as the parameters of the system are varied. We have characterized these patterns as laminar, intermittent and turbulent, employing analogies from spatiotemporal dynamical systems. We have characterized the transition from a laminar state to a turbulent state through the activity of the system, which measures the average wealth exchanged in the asymptotic regime of the system.
This quantity depends mainly on the dynamical parameters $u$ and $f$. Similarly, the Gini coefficient, which characterizes the inequality in the distribution of wealth, depends on the parameters $u$ and $f$. For large values of $u$, increasing $f$ increases the activity but decreases the Gini coefficient. Thus, high levels of economic exchange activity are associated with low values of the Gini coefficient, i.e., with more equitable distributions of wealth in the system. The topology of the underlying network has little effect on the activity of the system $A$. In contrast, the Gini coefficient $G$ increases when the range of the interactions, represented by $k$, is increased. We have shown that the relevant topological property of the network that influences the behavior of $G$ is the clustering coefficient, rather than the characteristic path length of the network. Figure~\ref{fig5} shows that a reduction of the Gini coefficient in a system subject to a dynamics of stratified economic exchange may be achieved by reducing the size of the neighborhood of the interacting agents. Our results add support to the view of local interactions as a relevant ingredient that can have important consequences in the collective behavior of economic models. \section*{Acknowledgments} This work was supported by grant C-1692-10-05-B from Consejo de Desarrollo Cient\'ifico, Human\'istico y Tecnol\'ogico of Universidad de Los Andes, Venezuela. M.~G.~C. acknowledges support from project 490440/2007-0, CNPq-PROSUL, Brazil.
\section{Introduction} \label{sec:intro} Determining the equation of state of dense matter (EoS) -- the relation between the pressure $P$ and energy-density $\rho$ beyond the nuclear saturation energy-density $\rho_{\rm sat} \sim 2.4\tee{14}\cgsdensity$ -- is an important goal of fundamental physics and astrophysics, with far-reaching implications. Observations of neutron stars (NSs) offer extraordinary tools to investigate dense matter properties, which are complementary to experimental studies~\citep[e.g.,][]{lattimer07,kramer08,lattimer10,lattimer2013,hebeler13}. For instance, macroscopic properties of NSs, such as masses, radii, moments of inertia or tidal deformabilities, provide constraints on dense matter at energy-densities beyond $\rho_{\rm sat}$ \citep[e.g.,][]{lattimer90,lattimer05,Flanagan2008,lattimer10,abbott18}. A variety of methods exist to constrain the EoS from NSs. Besides electromagnetic observations described below, the recent observation of the gravitational wave signal from a NS-NS merger and its electromagnetic counterpart has been analyzed to better constrain the stiffness of matter inside NSs. Specifically, the signal GW~170817, detected by the LIGO and Virgo gravitational wave (GW) detectors on 2017 August 17th, resulted in constraints on the tidal deformability of the NSs from the quadrupole moment in the space-time surrounding the NS merger \citep{abbott17}. Following the discovery of GW~170817, several articles proposed constraints on the EoS and the radius of these NSs using information from the GW signal and the simultaneous GRB~170817, as well as the associated kilonova AT~2017gfo (e.g., \citealt{bauswein17,radice18,annala18,raithel18,Tews2018,de18}, with the most recent one from \citealt{abbott18}). The conclusions of these papers are consistent, although the real quantitative information extracted from this first ever detection may not yet compete with nuclear physics knowledge~\citep{Tews2018}.
The future of this detection method is however promising and will certainly constrain present EoS models. All other methods to constrain the EoS make use of electromagnetic observations of NSs. More generally, they rely on mass \mbox{$M_{\rm NS}$}\ and radius \mbox{$R_{\rm NS}$}\ measurements (or other related properties). For example, the modelling of the pulse profile of millisecond pulsars (MSPs) can provide measurements of \mbox{$M_{\rm NS}$}\ and \mbox{$R_{\rm NS}$}\ \citep[e.g.,][]{bogdanov07,bogdanov13}. The currently operating \textit{Neutron Star Interior Composition ExploreR} (\textit{NICER}) is routinely observing MSPs with this aim \citep{gendreau16,gendreau17}. Measuring the moment of inertia of pulsars in binary systems, via spin-orbit coupling effects in radio timing observations, is also being envisaged to constrain the EoS \citep[e.g.,][]{lattimer05,kramer08}. Finally, the thermal emission from NSs provides a promising technique to obtain \mbox{$M_{\rm NS}$}\ and \mbox{$R_{\rm NS}$}{} \citep[see ][for recent reviews]{miller13,heinke13,ozel16b}. While this could, in principle, be achieved with all cooling NSs, some of them may be affected by systematic uncertainties that may alter the measurements. For example, the spectral modeling of X-ray dim isolated NSs may be complicated by uncertainties about their atmospheres, their magnetic field $B\sim\ee{11-12}\mbox{$\,G$}$, and the presence of X-ray pulsations indicating a non-uniform surface emission \citep[e.g.,][]{pons02}; which may require phase-resolved spectroscopy \citep{hambaryan17}. Similarly, central compact objects (CCOs) are likely affected by the same effects, although not all CCOs show pulsations \citep[e.g.,][]{klochkov15}. The cooling tails of Type-I bursts from NSs in X-ray binaries have also been used for EoS constraints \citep[e.g.,][]{suleimanov11b,nattila16}.
Furthermore, when these bursts reach the Eddington flux, the peak flux provides an additional observable with which to break the degeneracy between \mbox{$M_{\rm NS}$}\ and \mbox{$R_{\rm NS}$}{} \citep[e.g.][]{ozel10,guver13}. However, difficulties may arise from the spectral modeling with a Planck function and the use of a color correction from theoretical atmosphere models \citep[see ][for discussions]{guver12a,guver12b,kajava14,nattila16,ozel16a}. To remedy these issues, recent work fitted such atmosphere models to each spectrum during the cooling tail of the NS 4U~1702--429 to obtain \mbox{$M_{\rm NS}$}\ and \mbox{$R_{\rm NS}$}\ measurements \citep{nattila17}, instead of relying on color corrections. All methods using thermally emitting NSs require precise knowledge of the source distances. For this reason, quiescent low-mass X-ray binaries (qLMXBs) located inside globular clusters (GCs) have provided reliable constraints on the EoS. The distances to GCs can be measured independently with uncertainties of $\sim$5--10\% \citep{harris96,harris10}, compared to the $\sim$30--50\% uncertainties of LMXBs in the field of the Galaxy. Furthermore, qLMXBs present other advantages that we describe in Section~\ref{sec:thermal}. While initially these sources were analyzed individually to constrain the EoS \citep[e.g.,][]{heinke06a,webb07,guillot11a}, it has become clear in recent years that statistical analyses combining multiple qLMXBs would provide more useful constraints on dense matter \citep{guillot13,guillot14,guillot16b,lattimer14,ozel16a,bogdanov16,steiner18}. This article presents one such analysis in which the spectra of a sample of qLMXBs are simultaneously analyzed to constrain the EoS.
Because the red-shifted radius, measured from the modelling of the observed spectrum, depends on both the gravitational mass and the physical radius, a simultaneous analysis of several qLMXB sources can help break degeneracies between these two properties of NSs, assuming these objects are governed by the same \mbox{\mns--\rns}\ relation, i.e., the same EoS. This can in turn be used to infer the properties of the dense matter in NSs. For practical reasons, such a method requires parameterizing the EoS, i.e., representing it as a function of some parameters either in \mbox{\mns--\rns}\ space, or in $P$--$\rho$ space. Previous work used analytical parameterizations, such as a toy-model constant-\mbox{$R_{\rm NS}$}\ \citep{guillot13,guillot14,guillot16b} or piecewise polytrope representations\footnote{A sequence of connected power laws, $P=k_i\rho^{\gamma_i}$, where $i$ typically runs up to 3 or 5 \citep[e.g.,][]{read09,raithel16}.}~\citep{lattimer14,ozel16a,bogdanov16,steiner18}. In this work, we employ a representation of the EoS based on nuclear physics empirical parameters. The model is presented in \cite{margueron18a,margueron18b} and offers the possibility to easily incorporate nuclear physics knowledge. In Section~\ref{sec:thermal}, we summarize the characteristics of qLMXBs and present the reasons that make them ideal sources for EoS constraints. We also describe the data reduction and spectral extraction of our qLMXBs sample, as well as the surface emission model of these NSs. Section~\ref{sec:eos} summarizes various aspects of the EoS meta-model of \cite{margueron18a,margueron18b} that we used to fit our spectral data of qLMXBs. Section~\ref{sec:mcmc} presents the Markov-Chain Monte Carlo (MCMC) approach used to find the best-fit EoS model to the qLMXBs spectra and Section~\ref{sec:results} presents the results, and compares them with previous constraints on the EoS. Finally, the conclusions in Section~\ref{sec:conclusions} summarize this work.
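The piecewise-polytrope parameterization mentioned in the footnote is simple to evaluate once continuity of the pressure is imposed at the dividing densities. The sketch below uses arbitrary illustrative segment boundaries and indices, not the values adopted in the cited analyses:

```python
def piecewise_polytrope(rho, rho_divs, gammas, k0):
    """Pressure P = k_i * rho**gamma_i on each density segment.
    Continuity at each dividing density rho_d fixes
    k_{i+1} = k_i * rho_d**(gamma_i - gamma_{i+1}).
    len(gammas) must equal len(rho_divs) + 1; k0 sets the first segment."""
    k = k0
    for i, rho_d in enumerate(rho_divs):
        if rho < rho_d:
            return k * rho ** gammas[i]
        k *= rho_d ** (gammas[i] - gammas[i + 1])   # match P at the boundary
    return k * rho ** gammas[-1]
```

With two segments joined at $\rho=2$ (in arbitrary units) and $\gamma_1=2$, $\gamma_2=3$, the pressure is continuous across the joint by construction.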
\section{Thermal emission from quiescent low-mass X-ray binaries} \label{sec:thermal} In this section, we detail our present understanding of qLMXB thermal emission in GCs as well as host GC distance measurements. We also give details on our X-ray spectral data analysis and spectral model. \subsection{Low-mass X-ray binaries in quiescence} \label{sec:qlmxb} The surface emission from NSs in qLMXBs is now routinely used to obtain measurements of \mbox{$M_{\rm NS}$}\ and \mbox{$R_{\rm NS}$}. While during outbursts the accreted matter dominates the X-ray emission, the thermal emission from the surface of the NS becomes visible in quiescence. The source of this thermal emission is internal, and originates from the heat deposited by nuclear reactions in the crust during accretion episodes \citep[e.g.,][]{haensel08}. As this emission, reprocessed by the NS outer layers, is observed in the X-rays and modeled with realistic atmosphere models \citep{zavlin96,heinke06a,ho09,haakonsen12}, one can measure the red-shifted temperature and the size of the emission area. In this way, the X-ray spectra of qLMXBs provide a measurement of \mbox{$R_{\infty}$}, defined as: \begin{equation} \mbox{$R_{\infty}$} = \mbox{$R_{\rm NS}$} \left(1+z\right) = \mbox{$R_{\rm NS}$} \left(1-\frac{2 G \mbox{$M_{\rm NS}$}}{\mbox{$R_{\rm NS}$} c^{2}}\right)^{-1/2}. \end{equation} This requires knowing the distance to the source, and qLMXBs located in GCs have provided \mbox{$R_{\rm NS}$}\ measurements since their distances can be independently and rather precisely measured (see Section~\ref{sec:distances}). The qLMXBs inside GCs also have the additional advantage of exhibiting remarkable flux stability at all timescales \citep{heinke06a,guillot11a,servillat12,heinke14}.
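For a quick numerical check of the relation above, a sketch with rounded cgs constants (not the fitting code used in the analysis):

```python
def r_infinity(m_ns_msun, r_ns_km):
    """R_inf = R_NS * (1 - 2 G M / (R c^2))**(-1/2),
    with M in solar masses and R in km; returns R_inf in km."""
    G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
    c = 2.998e10        # speed of light, cm s^-1
    msun = 1.989e33     # solar mass, g
    compactness = 2.0 * G * m_ns_msun * msun / (r_ns_km * 1e5 * c * c)
    return r_ns_km / (1.0 - compactness) ** 0.5
```

For a canonical $M_{\rm NS}=1.4\,M_\odot$, $R_{\rm NS}=10$~km star, this gives $R_\infty\approx 13$~km, illustrating why the apparent radius measured at infinity exceeds the physical radius.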
While LMXBs in the field of the Galaxy often exhibit flux variability, attributed to changes in the non-thermal and/or thermal components \citep[e.g.,][]{rutledge02a,campana04a}, which complicate the spectral modeling, the spectra of known qLMXBs located in GCs are purely thermal, without signs of non-thermal emission \citep[e.g.,][]{guillot13}. Overall, this reinforces the scenario in which we are observing the uncontaminated thermal cooling of NSs. \begin{deluxetable*}{lccccccrr} \tablecaption{Observational information on the 7 qLMXB sources considered in our analysis. \label{tab:sources}} \tablehead{ \colhead{Globular} & \colhead{R.A.\tablenotemark{a}} & \colhead{Decl.\tablenotemark{a}} & \colhead{XMM Exp.} & \colhead{Chandra Exp.} & \colhead{S/N} & \colhead{Group\tablenotemark{b}} & \colhead{Distances} & \colhead{Distances [8]} \\ \colhead{Cluster host}& \colhead{(J2000)} & \colhead{(J2000)} & \colhead{time (ks)} & \colhead{time (ks)} & & & \colhead{\emph{Dist \#1} (kpc)} & \colhead{\emph{Dist \#2} (kpc)} } \startdata 47Tuc (X-7) & 00:24:03.53 & --72:04:52.2 & 0 & 181 & 122 & A,A' & $4.53 \pm 0.08$ [1] & $4.50 \pm 0.06$ \\ M28 & 18:24:32.84 & --24:52:08.4 & 0 & 327 & 113 & A,A' & $5.5 \pm 0.3$ [2,3] & $5.50 \pm 0.13$ \\ NGC~6397 & 17:40:41.50 & --53:40:04.6 & 0 & 340 & 82 & A,A' & $2.51 \pm 0.07 $ [4] & $2.30 \pm 0.05$\\ \mbox{$\omega$\,Cen}{} & 13:26:19.78 & --47:29:10.9 & 36 & 291 & 49 & B,B' & $4.59 \pm 0.08$ [5] & $5.20 \pm 0.09$ \\ M13 & 16:41:43.75 & +36:27:57.7 & 29 & 55 & 36 & B,A' & $7.1 \pm 0.62$ [6] & $7.10\pm0.10$ \\ M30 & 21:40:22.16 & --23:10:45.9 & 0 & 49 & 32 & B,B' & $8.2 \pm0.62$ [6] & $8.10 \pm 0.12$ \\ NGC~6304 & 17:14:32.96 & --29:27:48.1 & 0 & 97 & 28 & B,B' & $6.22 \pm 0.26$ [7] & $5.90 \pm 0.14$ \\ \enddata \tablenotetext{a}{Coordinates of the qLMXB in each of the GC.} \tablenotetext{b}{The groups A and B denote the sources with a high S/N ($>60$) and lower S/N ($<60$), respectively. 
The groups A' and B' denote the sources for which we obtain a peaked and flat posterior distribution of the NS mass, respectively (see Section~\ref{sec:results} for more details).} \tablecomments{All distance uncertainties are given at the 1$\sigma$ confidence level. References: [1] \cite{bogdanov16}; [2] \cite{harris10} (with uncertainties estimated in [3] \citealt{servillat12}); [4] \cite{heinke14}; [5] \cite{watkins13}; [6] \cite{omalley17}; [7] \cite{recioblanco05}; [8] \cite{gaia18}, for which the distances were obtained from the individual $X$, $Y$, $Z$ coordinate values, as given in their Table~C.3, using $r_{\rm GC,\odot} = \sqrt{X^2+Y^2+Z^2}$.} \end{deluxetable*} Another advantage of NSs in qLMXBs over other sub-groups of NSs for the purpose of radius measurements is the relatively straightforward modeling of their emergent spectra. While the atmospheric composition of isolated NSs may be uncertain \citep[e.g.,][]{burwitz03,ho09}, the atmosphere of NSs in LMXBs consists of a single-composition layer of a fully ionized light element. Since the accreted matter settles gravitationally within 10--100\mbox{$\,{\rm sec}$}\ \citep{alcock80,bildsten92}, the outermost layer of a transiently accreting NS is thought to be composed of the lightest accreted element, usually hydrogen (H). Moreover, the magnetic fields of these old sources are thought to be weak, as supported by the fact that their presumed descendants, millisecond pulsars \citep{alpar82,bhattacharya91,tauris06}, have inferred dipole fields $B\sim10^{8}$--$10^{9}\mbox{$\,G$}$, compared to $10^{11}$--$10^{12}$\mbox{$\,G$}\ for the younger, ``classical'' pulsars, which have not undergone accretion. Such low $B$-fields do not affect the emergent spectrum, and it can therefore be assumed that the NS atmosphere is non-magnetic.
For these reasons, H-atmosphere models, and in some cases Helium (He) atmosphere models (see below), have been used to fit the spectra of the NS in qLMXBs and extract measurements of \mbox{$M_{\rm NS}$}\ and \mbox{$R_{\rm NS}$}. It is generally accepted that the atmosphere of a NS in a qLMXB is composed of pure H, since the atmospheric composition would be that of the lightest element present in the companion star. Unless the companion is completely devoid of H, the matter transferred onto the NS will contain some H, and therefore the material present in the outermost layer will be H. Diffusive burning of H into He may happen in the hot photosphere, but this is expected to happen on timescales of $10^{3}$--$10^{4}$ yrs \citep{chang04,chang10}, whereas the atmosphere (of thickness $\sim1\mbox{$\,{\rm cm}$}$ and mass $M_{\rm atm}\sim 10^{-20}\mbox{$\,{\rm M}_\odot$}$, \citealt{bogdanov16}) can rapidly be replenished by H matter from the stellar companion, even at very low accretion rates of $\sim10^{-13}\unit{\mbox{$\,{\rm M}_\odot$}\mbox{$\,{\rm yr^{-1}}$}}$. More importantly, observational evidence demonstrated the presence of H in the qLMXB systems in 47~Tuc~X-5 \citep{edmonds02} and 47~Tuc~X-7~\citep{bogdanov16}, and in the GC \mbox{$\omega$\,Cen}{} \citep{haggard04}. Searches for \mbox{H$\alpha$}\ emission at the position of the qLMXB in NGC~6397 were unsuccessful, only placing upper limits on the equivalent width of the spectral line and thus on the accretion rate \citep{heinke14}. It was therefore argued that this qLMXB was devoid of H, and the authors advocated a He atmosphere instead. This conclusion was supported by the low \mbox{$R_{\rm NS}$}\ found from the spectral analyses with a H atmosphere ($\sim 8\hbox{$\,{\rm km}$}$, in the earlier work of \citealt{guillot11a,guillot13}), while a He atmosphere resulted in a \mbox{$R_{\rm NS}$}\ value compatible with that of other NSs \citep{heinke14}.
However, the stellar companion was only detected in the $R$-band, and limits on its photometric colors made it compatible with both the white-dwarf sequence and the main sequence of the host GC. Nevertheless, as discussed below, the proper modeling of pile-up (an instrumental effect, \citealt{davis01}) in the \textit{Chandra X-ray Observatory}\ spectra of this qLMXB is sufficient to yield radii in the range\footnote{It was demonstrated that pile-up effects, even at the 1\%-level, can significantly shift the peak of the thermal spectrum to higher energies, and therefore result in underestimated radii \citep{bogdanov16}.} 10--11\hbox{$\,{\rm km}$}. With all these considerations in mind, qLMXBs located inside GCs are ideal objects that provide a well-understood scenario to measure the radii of NSs. As mentioned above, obtaining constraints on the EoS from qLMXBs requires combining them into a statistical analysis. Here, we analyze the spectra of the qLMXB in the GCs M13 (NGC\,6205), M28 (NGC\,6626), M30 (NGC\,7099), NGC~6304, NGC~6397, \mbox{$\omega$\,Cen}\ (NGC\,5139), and 47 Tuc~(NGC\,104)~X-7. We excluded 47 Tuc~X-5 because of its eclipses, its flux variability, and variable line-of-sight absorption, which make the spectral modelling rather uncertain \citep{bogdanov16}. Some information about these sources is detailed in Table~\ref{tab:sources}. \subsection{On the distances of globular clusters} \label{sec:distances} In this paper, we work with a set of distances obtained from heterogeneous methods, including dynamical \citep{watkins13} and photometric (see the other references in Table~\ref{tab:sources}) distance measurements. In most cases, these are recent measurements, or measurements discussed in previous qLMXB analyses, which we used for convenient comparison \citep[e.g.,][]{bogdanov16}. The distances used and their uncertainties are listed in Table~\ref{tab:sources} as \emph{Dist \#1}.
To evaluate the impact of the choice of distances, we also considered distances from a more uniform set of measurements. The determination of accurate astrometric distances to large samples of GCs has now become a tangible reality, thanks to the exquisite data provided by the European Space Agency's (ESA's) \textit{Gaia}{} space mission \citep{gaia2016}. Within the framework of \textit{Gaia}'s Data Release 2 \citep[DR2;][]{gaiadr2}, trigonometric parallaxes have already become available for large numbers of stars belonging to dozens of GCs. Still, as discussed in detail by \citet{pancino-gaia} and more recently also emphasized by \citet{gaia-babusiaux}, systematic uncertainties still preclude the determination of reliable distances based on the available \textit{Gaia}{} data for such crowded fields as Galactic GCs~-- even though, by the end of the mission, GC distances that are accurate to within the 1\% level can be expected \citep{pancino-gaia}. Confronting the \textit{Gaia}-DR2 data with distances from the literature, as independently compiled in the \citet{harris96,harris10} catalog, a relatively small systematic offset, at the level of 0.029\,mas, was found \citep{gaia18}, in the sense that parallaxes derived by \textit{Gaia}{} are smaller than those implied by the distances given in \citet{harris10}. In any case, at this stage, \citet{gaia18,gaia-babusiaux} use the latter distances, as opposed to those implied by the \textit{Gaia}{} parallaxes, in their analyses of the Hertzsprung-Russell diagram and GC orbits. Using the \citet{harris10} distances, \citet{gaia18} rederived the $X$, $Y$, $Z$ coordinates of the GCs with respect to the Sun, given the improved positional information obtained by the \textit{Gaia}{} mission. For our uniform set of distance measurements to the seven GCs studied here (\emph{Dist \#2}), we used the distances calculated from the $X$, $Y$, $Z$ coordinates given in \citet{gaia18}.
We note that these distances are in most cases consistent with those of \emph{Dist \#1}, albeit with smaller uncertainties. The most significant difference between the two sets is for the GC \mbox{$\omega$\,Cen}{}, although it has been noted that the dynamical measurement for this cluster \citep{watkins13} may suffer from systematics. Finally, we note that \cite{chen18} reported a distance to 47~Tuc of $4.45\pm0.01\pm0.12\mbox{$\,{\rm kpc}$}$ (statistical and systematic uncertainties) obtained from a careful treatment of the \textit{Gaia}{}-DR2 parallaxes. This result is fully consistent with the values used in our sets \emph{Dist \#1} and \emph{Dist \#2}. Using these two sets allows us to study the impact of the distance choices on the analyses of X-ray spectra of thermally-emitting NSs. \subsection{X-ray spectral data analysis and spectral model} \label{sec:spec_model} The processing of the \textit{XMM-Newton}\ and \textit{Chandra}\ data sets is performed with the \emph{XMMSASv15.0} \citep{gabriel04} and \emph{CIAO v4.8} \citep{fruscione06}, respectively, following their respective standard procedures. The spectra are created from flare-filtered event files, by extracting counts in circular regions. Background spectra are chosen from circular regions near the qLMXB, on the same CCD chip, and devoid of other sources. Finally, we grouped energy channels to ensure a minimum of 20 counts per bin. A detailed description of the data preparation is available in \cite{guillot13}, and here we follow similar data reduction recipes. The analysis of the qLMXB spectra is performed with \emph{PyXSPEC}, the Python interface to the fitting package \emph{XSPEC} \citep{arnaud96}. This allows us to employ an MCMC approach to sample the parameter space, as described in Section~\ref{sec:mcmc}. The spectral model used is the NS H atmosphere model {\tt nsatmos} \citep{heinke06a}, modulated by absorption of soft X-rays by the interstellar medium. 
For the Galactic absorption, we used the recent model {\tt tbabs} \citep{wilms00}. We also add a power-law component to account for possible excess of counts above 2\mbox{$\,{\rm keV}$}\ that may originate from non-thermal emission. The exponent of this power law is fixed to 1.5, and we fit for the normalization. As will be shown below, the contribution of this power-law component is consistent with being null for all qLMXBs. A pile-up component is also added for all \textit{Chandra}\ spectra, even those qLMXBs with low count rates inducing a pile-up fraction $\lesssim 1\%$. As was pointed out by \cite{bogdanov16}, uncorrected pile-up, even at low pile-up fraction $\sim 1\%$, can significantly bias the radius measurement. Specifically, for NGC~6397, the low \mbox{$R_{\rm NS}$}\ obtained with H atmosphere models was a consequence of the unmodelled pile-up of photons in the X-ray spectra. In summary, for each NS qLMXB in our sample, the spectral parameters of the model are: \begin{itemize} \item the parameter $\alpha$ in the {\tt pileup} model, \item the column density of neutral hydrogen \mbox{$N_{\rm H}$}, from the {\tt tbabs} model, \item the NS surface temperature \mbox{$kT_{\rm eff}$}\ in the {\tt nsatmos} model, \item the NS mass in the {\tt nsatmos} model, \item the NS radius in the {\tt nsatmos} model, \item the NS distance (set as a prior; see Table~\ref{tab:sources}) in the {\tt nsatmos} model, \item the power-law normalization (model {\tt powerlaw} with fixed $\Gamma=1.5$). \end{itemize} In addition, multiplicative constants are used to account for absolute flux cross-calibration uncertainties between different detectors (\textit{XMM}-pn, \textit{XMM}-MOS, and \textit{Chandra}). 
Therefore, for sources with spectra obtained with multiple detectors, multiplicative constants are added to the spectral model, as commonly done\footnote{In those cases, the constant for \textit{XMM}-pn is fixed to unity, while the ones for the \textit{XMM}-MOS and \textit{Chandra}\ spectra, $C_1$ and $C_2$ respectively, are fitted parameters.}. In this work, all NSs are assumed to be described by the same EoS. Consequently, their masses and radii are tied together by the parameterized EoS described in the following section. \section{The dense matter equation of state} \label{sec:eos} For the present analysis, the dense matter EoS is provided by the meta-modeling described in~\cite{margueron18a,margueron18b}, instead of the toy-model constant-\mbox{$R_{\rm NS}$}\ representation of the EoS \citep{guillot13,guillot14,guillot16b} or the polytropes \citep{steiner13,ozel16a,steiner18} used in previous works. The meta-modeling employed here is able to accurately reproduce existing nucleonic EoSs and smoothly interpolate between them. It is based on a Taylor expansion in the baryon density $n=n_n+n_p$, where $n_n$ and $n_p$ are the neutron and proton densities, around the nuclear saturation density $n_\mathrm{sat}\approx 0.16$~fm$^{-3}$. Note that the nuclear saturation density is expressed as baryon number per unit volume and corresponds to the energy density $\rho_\mathrm{sat}$ introduced previously. Such an approach is realistic up to 3--4 $n_{\rm sat}$, where one could expect the onset of new degrees of freedom (hyperons, quarks, pion condensation, etc.). This meta-model may therefore break down for high-mass NSs (at around or above 2\mbox{$\,{\rm M}_\odot$}). Fortunately, such high masses do not seem to be favored by the present analysis for the present sources. For completeness, we briefly describe our modeling of the crust and the core of the NSs in this section.
\subsection{Equation of state for cold catalyzed neutron stars} \label{sec:eosmodel} Our EoS spans from the outer crust of NSs down to their dense core. We consider the HP94 model for the outer crust, which represents it as a Coulomb lattice of spherical nuclei immersed in a gas of electrons \citep{haensel94}. In this model, the nuclear masses are the experimental ones when available, supplemented by a theoretical mass formula~\citep{moeller92} for the more exotic nuclei. The inner crust starts when the density reaches $3.285\times 10^{11}$~g~cm$^{-3}$, and there we consider the tabulated SLY EoS~\citep{douchin01} obtained from a Compressible Liquid Drop Model based on the Skyrme interaction SLy4~\citep{chabanat98}. A test of the sensitivity to the crust EoS can be performed by replacing the SLY EoS by another one, such as the FPS one. These two tabulated EoSs can be downloaded from the following website\footnote{http://www.ioffe.ru/astro/NSG/NSEOS/}. For numerical reasons, the transition between the crust and the core is bridged with a log$\rho$--log$P$ cubic spline matching the values and derivatives at both boundaries. The two boundaries are taken to be $n_{\rm sat}/10$ for the lower bound and $n_{\rm sat}$ for the upper one. The sensitivity of this procedure to the choice of the boundaries is found to be small. Its impact on the total NS radius is less than 100~m, which is much smaller than current measurement uncertainties~\citep{margueron18b}. In this work, we consider that the NS interior is made of purely nucleonic matter, whose properties are obtained from the extrapolation of the known saturation properties of nuclear matter.
These properties are encoded in the so-called empirical parameters of nuclear matter, defined as the coefficients of the series expansion in terms of the density parameter $x=(n-n_{\rm sat})/(3n_{\rm sat})$ of the energy per particle in symmetric matter, \begin{equation} e_{\rm sat} = E_{\rm sat}+\dfrac{1}{2}K_{\rm sat}x^2+\dfrac{1}{3!}Q_{\rm sat}x^3+\dfrac{1}{4!}Z_{\rm sat}x^4+... \quad , \end{equation} and of the symmetry energy per particle \begin{equation} e_{\rm sym} = E_{\rm sym}+L_{\rm sym}x+\dfrac{1}{2}K_{\rm sym}x^2+\dfrac{1}{3!}Q_{\rm sym}x^3+\dfrac{1}{4!}Z_{\rm sym}x^4+... , \end{equation} where the symmetry energy $e_{\rm sym}$ is defined as the isospin polarization energy \begin{equation} e_{\rm sym} = \frac{1}{2} \left.\frac{\partial^2 e}{\partial \delta^2}\right|_{\delta=0} \quad , \end{equation} and where $\delta=(n_n-n_p)/(n_n+n_p)$ is the isospin asymmetry parameter and $e(n,\delta)$ is the nuclear energy per particle. $E_{\rm sat}$ and $E_{\rm sym}$ are the saturation energy and the symmetry energy at the saturation density $n_{\rm sat}$. $L_{\rm sym}$ is the slope of the symmetry energy; since saturation is an equilibrium point, there is no corresponding slope term for the energy per particle in symmetric matter. $K_{\rm sat/sym}$ stands for the curvature, $Q_{\rm sat/sym}$ for the skewness, and $Z_{\rm sat/sym}$ for the kurtosis of the energy per particle in symmetric matter and of the symmetry energy, respectively. The values of these empirical parameters are determined from experimental measurements, with different accuracies. Reviews of their experimental determination can be found in~\cite{margueron18a} and in references therein. \begin{deluxetable*}{lcccccccccccc}[t] \centering \tablecaption{Standard values and domain of variation of the empirical parameters considered in this analysis; taken from \cite{margueron18a}. See Section~\ref{sec:eosmodel} for the description of the parameters. \label{tab:empParam}} \tablecolumns{13} \tablewidth{0pt} \tablehead{ \colhead{Emp.
param.} & \colhead{$E_{\rm sat}$} & \colhead{$E_{\rm sym}$} & \colhead{$n_{\rm sat}$} & \colhead{$L_{\rm sym}$} & \colhead{$K_{\rm sat}$} & \colhead{$K_{\rm sym}$} & \colhead{$Q_{\rm sat}$} & \colhead{$Q_{\rm sym}$} & \colhead{$Z_{\rm sat}$} & \colhead{$Z_{\rm sym}$} & \colhead{$m^*$} & \colhead{$\Delta m^*$} \\ & \colhead{(MeV)} & \colhead{(MeV)} & \colhead{(fm$^{-3}$)} & \colhead{(MeV)} & \colhead{(MeV)} & \colhead{(MeV)} & \colhead{(MeV)} & \colhead{(MeV)} & \colhead{(MeV)} & \colhead{(MeV)} & \colhead{($m_N$)} & \colhead{($m_N$)} } \startdata Standard & -15.8 & 32.0 & 0.155 & 60 & 230 & -100 & 300 & 0 & -500 & -500 & 0.75 & 0.1 \\ Variation & -- & -- & -- & 20--90 & -- & -400--200 & -1300--1900 & -- & -- & -- & -- & -- \\ \enddata \end{deluxetable*} We consider the meta-modeling ELFc proposed in~\cite{margueron18a}, which is based on the decomposition of the nuclear energy per particle in terms of a kinetic term $t$ and a potential term $v$, as \begin{equation} e(n,\delta) = t(n,\delta)+v(n,\delta) \quad . \end{equation} The kinetic energy is defined as that of the Fermi gas plus medium corrections to the bare mass (encoded in the parameters $\kappa_{\rm sat/sym}$), \begin{equation} t(n,\delta)=\frac{t_{\rm sat}}{2}\left(\frac{n}{n_{\rm sat}}\right)^{2/3}\Big[\left(1+\kappa_{\rm sat}\frac{n}{n_{\rm sat}}\right)f_1(\delta)+\kappa_{\rm sym}\frac{n}{n_{\rm sat}}f_2(\delta)\Big], \end{equation} where $t_{\rm sat}=3\hbar^2/(10m)(3\pi^2/2)^{2/3}n_{\rm sat}^{2/3}$, $m$ is the nucleon mass, and the functions $f_{1/2}$ are defined as, \begin{eqnarray} f_1(\delta) &=&(1+\delta)^{5/3}+(1-\delta)^{5/3},\\ f_2(\delta) &=&\Big[(1+\delta)^{5/3}-(1-\delta)^{5/3}\Big]\delta. 
\end{eqnarray} The potential term is expressed as, \begin{equation} v(n,\delta) = \sum^N_{\alpha=0} \left(v_\alpha^{\rm is} + \delta^2 v_\alpha^{\rm iv} \right)\frac{x^\alpha}{\alpha !} u(x), \end{equation} where the function $u(x)$ takes into account the corrections due to the truncation $N$ at low density, as \begin{equation} u(x)=1-(-3x)^{N+1-\alpha}\exp(-b n/n_{\rm sat}). \end{equation} Fixing $b=10\ln 2\approx 6.93$, as in \cite{margueron18a}, implies that the function $u$ converges quickly to $1$ as the density increases from 0. It ensures that $v(n,\delta)\rightarrow 0$ for $n\rightarrow 0$ for any order $N$. The larger $N$, the smaller the correction $u(x)$. The parameters $v_\alpha^{\rm is/iv}$ entering into the series expansion of the potential term have a one-to-one relation with the empirical parameters. The ability of this meta-modeling to reproduce existing EoS increases as the order $N$ increases. For $N=4$, the meta-modeling can very accurately (at the \% accuracy, in the worst case) reproduce binding energy, pressure, and sound velocity of a large number of existing EoS up to $4n_{\rm sat}$, as shown in \cite{margueron18a}. In the present work, we use the flexibility of the meta-modeling to sample the parameter space of the empirical parameters using an MCMC approach. The range of variation for each of the empirical parameters considered in this analysis is given in Table~\ref{tab:empParam}. We fix the value of the lowest-order empirical parameters at saturation density to be: $E_{\rm sat}=-15.8$~MeV, $E_{\rm sym}=32$~MeV, $n_{\rm sat}=0.155$~fm$^{-3}$ and $K_{\rm sat}=230$~MeV. The parameters $\kappa_{\rm sat/sym}$ are adjusted so that the Landau mass in symmetric matter is $m^*/m=0.75$ and the splitting between the neutron and proton Landau masses $(m^*_n-m^*_p)/m$ in neutron matter is 0.1 (see Table~\ref{tab:empParam}). 
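To make these expansions concrete, the following minimal sketch (not the authors' code) evaluates the quadratic-in-$\delta$ approximation $e(n,\delta)\simeq e_{\rm sat}(x)+\delta^2 e_{\rm sym}(x)$ using the standard empirical values of Table~\ref{tab:empParam}; the kinetic/potential decomposition and the low-density correction $u(x)$ are omitted for brevity.

```python
# Minimal sketch of the empirical Taylor expansions (quadratic
# approximation in the isospin asymmetry delta); standard values
# from the table, in MeV (n_sat in fm^-3).
from math import factorial

E_SAT, K_SAT, Q_SAT, Z_SAT = -15.8, 230.0, 300.0, -500.0
E_SYM, L_SYM, K_SYM, Q_SYM, Z_SYM = 32.0, 60.0, -100.0, 0.0, -500.0
N_SAT = 0.155

def e_sat(x):
    # No linear term: n_sat is an equilibrium point of symmetric matter.
    return (E_SAT + K_SAT * x**2 / 2 + Q_SAT * x**3 / factorial(3)
            + Z_SAT * x**4 / factorial(4))

def e_sym(x):
    return (E_SYM + L_SYM * x + K_SYM * x**2 / 2
            + Q_SYM * x**3 / factorial(3) + Z_SYM * x**4 / factorial(4))

def e(n, delta):
    """Energy per particle e(n, delta) ~ e_sat(x) + delta^2 e_sym(x)."""
    x = (n - N_SAT) / (3.0 * N_SAT)
    return e_sat(x) + delta**2 * e_sym(x)

print(e(N_SAT, 0.0))  # symmetric matter at saturation: -15.8 MeV
print(e(N_SAT, 1.0))  # pure neutron matter at n_sat: -15.8 + 32.0 = 16.2 MeV
```

At saturation, the sketch recovers $E_{\rm sat}$ for symmetric matter and $E_{\rm sat}+E_{\rm sym}$ for pure neutron matter, as expected from the definitions above.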
The \mbox{\mns--\rns}\ relation is known to be mostly influenced by the empirical parameters $L_{\rm sym}$, $K_{\rm sym}$, and $Q_{\rm sat}$, since the EoS in the density range going from $n_{\rm sat}$ to approximately $3n_{\rm sat}$ depends most strongly on them \citep{margueron18b}. $L_{\rm sym}$ and $K_{\rm sym}$ (respectively, $Q_{\rm sat}$) control the density dependence of the symmetry energy (respectively, the energy per particle in symmetric matter) above saturation density. The higher-order empirical parameters are poorly known, but they impact the EoS only at higher densities. They could in principle be deduced from \mbox{$M_{\rm NS}$}\ and \mbox{$R_{\rm NS}$}\ measurements for high-mass NSs. \textsl{A priori}, we do not know which region of NS masses will be reached by our analysis. Anticipating our results, however, we find that the NS masses do not exceed 1.5--1.6~\mbox{$\,{\rm M}_\odot$}, which implies that the central densities of these NSs are not very large, and the meta-model can reasonably be applied. \subsection{The effect of the empirical parameters on the \mbox{\mns--\rns}\ relation} \label{sec:nuclmodels} We illustrate here the impact of the empirical parameters $L_{\rm sym}$, $K_{\rm sym}$ and $Q_{\rm sat}$ on the \mbox{\mns--\rns}\ relation. Since the rotation of the sources studied here is unknown, we consider non-rotating NS models, whose \mbox{\mns--\rns}\ relation for a given EoS is obtained by solving the well-known Tolman-Oppenheimer-Volkoff (TOV) equations \citep{tolman39,oppenheimer39}. Only if the spin frequency is larger than 300~Hz (period\,$<3$~ms) could rotational effects bias the \mbox{$R_{\rm NS}$}\ measurements \citep{morsink07}. For a NS with a spin frequency of 600~Hz, the non-rotating radius would be underestimated by 2--5\%, depending on the NS size \citep{baubock13}.
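For illustration, the TOV integration can be sketched in a few lines; this is a stand-alone toy example (not the pipeline used in this work), with a $\Gamma=2$ polytrope standing in for the meta-model EoS, in geometrized units $G=c=\mbox{$\,{\rm M}_\odot$}=1$.

```python
# Toy TOV integration (illustrative only): Gamma=2 polytrope,
# geometrized units G = c = Msun = 1; 1 length unit = 1.4766 km.
import math

K, GAMMA = 100.0, 2.0     # standard polytropic test-case parameters
KM_PER_UNIT = 1.4766      # G*Msun/c^2 in km

def eps_of_p(p):
    """Total energy density for P = K rho^Gamma (rho: rest-mass density)."""
    rho = (p / K) ** (1.0 / GAMMA)
    return rho + p / (GAMMA - 1.0)

def tov(rho_c, dr=1.0e-3):
    """Euler integration of the TOV equations; returns (M [Msun], R [km])."""
    p = K * rho_c ** GAMMA
    p_surf = 1.0e-10 * p              # pressure cutoff defining the surface
    r = dr
    m = (4.0 / 3.0) * math.pi * r**3 * eps_of_p(p)
    while p > p_surf:
        eps = eps_of_p(p)
        dpdr = -(eps + p) * (m + 4.0 * math.pi * r**3 * p) / (r * (r - 2.0 * m))
        m += 4.0 * math.pi * r**2 * eps * dr
        p += dpdr * dr
        r += dr
    return m, r * KM_PER_UNIT

M, R = tov(1.28e-3)  # central density of a standard ~1.4 Msun test star
```

For this standard test configuration the integration returns a mass near 1.4\mbox{$\,{\rm M}_\odot$}\ and a radius of roughly 14~km; in the actual analysis, the polytropic stand-in is replaced by the meta-model EoS and the solver is called once per walker and iteration.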
The EoS selection criteria are causality, positiveness of the symmetry energy, and compatibility with a maximum mass above $1.9\mbox{$\,{\rm M}_\odot$}$. This mass limit corresponds approximately to the $2\sigma$ lower limits of the measurements for PSR~J1614--2230, $1.908\pm0.016\mbox{$\,{\rm M}_\odot$}$ \citep{demorest10,fonseca16,arzoumanian18}, and PSR~J0348+0432, $2.01\pm0.04\mbox{$\,{\rm M}_\odot$}$ \citep{antoniadis13}. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{EOS_LKQ.png} \vspace{-0.4cm} \caption{Effect on the \mbox{\mns--\rns}\ relations of varying the EoS parameters $L_{\rm sym}$, $K_{\rm sym}$ and $Q_{\rm sat}$. In the top panel, the value of $Q_{\rm sat}$ is fixed to 300\mbox{$\,{\rm MeV}$}, while $L_{\rm sym}$ takes the values specified in the legend, and $K_{\rm sym}$ is varied from $-400\mbox{$\,{\rm MeV}$}$ to $200\mbox{$\,{\rm MeV}$}$ in steps of $120\mbox{$\,{\rm MeV}$}$ (increasing $K_{\rm sym}$ from left to right). The EoSs with the lowest values of $K_{\rm sym}$ are not plotted when they do not satisfy the selection criteria (see Section~\ref{sec:eos} for details). The points joined by the dashed and dotted lines are models with central densities $2n_{\rm sat}$ and $3n_{\rm sat}$, respectively. In the bottom panel, $L_{\rm sym}$ also takes the values specified in the legend, $K_{\rm sym}$ is fixed to $-85\mbox{$\,{\rm MeV}$}$, while $Q_{\rm sat}$ varies from $1900\mbox{$\,{\rm MeV}$}$ down to $-500\mbox{$\,{\rm MeV}$}$ in steps of $-600\mbox{$\,{\rm MeV}$}$. As in the top panel, the sets of the three parameters which do not satisfy the selection criteria are not plotted. Here, only the $2n_{\rm sat}$ central-density points are shown, as some of the EoSs displayed do not reach a central density of $3n_{\rm sat}$.} \label{fig:MR1} \end{figure} Figure~\ref{fig:MR1} shows the effect of varying the empirical parameters $L_{\rm sym}$, $K_{\rm sym}$ and $Q_{\rm sat}$ on the \mbox{\mns--\rns}\ relation. Specifically, an increase of any of these three parameters shifts the high-mass part of the \mbox{\mns--\rns}\ relation to larger radii. For clarity, only two parameters are varied in each of the top and bottom panels -- the third parameter being kept fixed ($Q_{\rm sat}=300\mbox{$\,{\rm MeV}$}$ in the top panel, and $K_{\rm sym}=-85\mbox{$\,{\rm MeV}$}$ in the bottom panel). There are four groups of curves corresponding to the same value of $L_{\rm sym}$ and coinciding for very low mass NSs ($\mbox{$M_{\rm NS}$}<0.2\mbox{$\,{\rm M}_\odot$}$). As \mbox{$M_{\rm NS}$}\ increases, the central density increases as well, since we consider only the stable branch, and the different values of $K_{\rm sym}$ separate the \mbox{\mns--\rns}\ curves associated with the different EoSs. Overall, varying $L_{\rm sym}$, $K_{\rm sym}$ and $Q_{\rm sat}$ over the whole range allowed by nuclear physics, together with the requirement of supporting a $1.9\mbox{$\,{\rm M}_\odot$}$ NS, yields radii between 11.5 and 14.2~km at 1.4\mbox{$\,{\rm M}_\odot$}. The effect of varying the parameter $Q_{\rm sat}$ is most noticeable for \mbox{$M_{\rm NS}$}\ above 1.0--1.2\mbox{$\,{\rm M}_\odot$}. Being of higher order in the density expansion, $Q_{\rm sat}$ influences the EoS at high density only, or equivalently at high \mbox{$M_{\rm NS}$}\ only. Depending on the value of $Q_{\rm sat}$, the EoS can be stiffer at high density, as reflected in the curves which rise almost vertically, or softer at high density, letting the \mbox{\mns--\rns}\ curves populate the low-\mbox{$R_{\rm NS}$}\ region at high \mbox{$M_{\rm NS}$}. There is, however, a limitation in the radii which can be explored with nucleonic EoSs.
As suggested in~\cite{margueron18b}, low-mass NSs with $\mbox{$R_{\rm NS}$}<11$~km (at $\sim 1.4\mbox{$\,{\rm M}_\odot$}$) cannot be described by nucleonic EoS models that respect causality and support a 1.9\mbox{$\,{\rm M}_\odot$}\ NS. While various EoSs may pass through a given point in the \mbox{\mns--\rns}\ diagram, their paths are different. Breaking the degeneracy between different EoSs thus requires the knowledge of a set of \mbox{\mns--\rns}\ points, as distant from each other as possible. In conclusion, the empirical parameters $L_{\rm sym}$, $K_{\rm sym}$, and $Q_{\rm sat}$ allow the exploration of a wide domain of \mbox{$M_{\rm NS}$}\ and \mbox{$R_{\rm NS}$}\ along various paths. Therefore, it may be possible to constrain the values of these parameters by confronting them with the observational data from the thermal X-ray emission of NSs. \section{Confronting the equation of state with the data} \label{sec:mcmc} In this section, we detail the methodology of our analysis: we employ an MCMC approach with the stretch-move algorithm \citep{goodman10} to consistently analyze the seven qLMXB sources, with the nuclear matter EoS meta-modeling included directly in the analysis. As a result, the astrophysical (NS properties) and nuclear physics (EoS) parameters are adjusted together, without over-constraining one or the other. We can thus solve the so-called inverse problem and obtain constraints on the EoS properties directly from the data analysis. This is the first time that the thermal emission from NSs is analyzed in this manner. \subsection{MCMC approach with the stretch-move algorithm} \label{sec:gw} For all the cases considered here, the priors on the parameters are chosen so as to minimize any \textsl{a priori} assumption on the parameter distributions.
All astrophysical parameters (except the distances to the sources) are sampled with uniform distributions within the boundaries allowed by the spectral model (defined in \texttt{Xspec}). The distances $D$ are strongly coupled to the NS radii and effective surface temperatures. Letting these parameters explore uniform priors would enormously increase the uncertainties in the analysis. For the two sets of distances presented in Table~\ref{tab:sources}, we limit the qLMXB distances to Gaussian priors given by the central values and $1\sigma$ uncertainties listed. To do so, we add to the likelihood $\chi^2$ a penalty for each source $i$, proportional to the squared difference between the MCMC-sampled distance $D_{\rm{mcmc,i}}$ and the measured value $D_{\rm{data,i}}$ (from Table~\ref{tab:sources}), weighted by the standard deviation $\sigma_i$. The distance penalty reads $\chi^2_D= \sum_{i=0}^{N}\chi^2_{D,i}$, where the $\chi^2_{D,i}$ for each source are given by: \[ \chi^2_{D,i}=\dfrac{ \left( D_{\rm{mcmc,i}}-D_{\rm{data,i}}\right)^2 }{ \left(\sigma_i\right)^2} \, . \] The MCMC approach permits efficient sampling of our high-dimensional parameter space: 49 parameters in total, including 3 nuclear physics EoS parameters, plus 6 astrophysical parameters per qLMXB (those listed in Section~\ref{sec:spec_model}, except for the radii, which are obtained from the sampled EoS parameters and NS masses after solving the TOV equations), plus 4 multiplicative normalization constants (for the cross-calibration between \textit{XMM}-pn, \textit{XMM}-MOS and \textit{Chandra}, for the qLMXBs in M13 and \mbox{$\omega$\,Cen}).
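Schematically, the distance penalty is a one-liner; the $(D_{\rm mcmc}, D_{\rm data}, \sigma)$ values below are purely illustrative, not those of Table~\ref{tab:sources}.

```python
# Gaussian-prior distance penalty added to the fit statistic;
# the (d_mcmc, d_data, sigma) triplets below are illustrative only.
def distance_penalty(d_mcmc, d_data, sigma):
    """chi^2_{D,i} contribution of one source (distances in kpc)."""
    return ((d_mcmc - d_data) / sigma) ** 2

sources = [(4.61, 4.53, 0.08),   # hypothetical source 1
           (2.35, 2.30, 0.05)]   # hypothetical source 2
chi2_D = sum(distance_penalty(*s) for s in sources)
# each term is a 1-sigma offset, so chi2_D ~ 2.0
```

Each source thus contributes one quadratic term to the total fit statistic, pulling the sampled distance toward its measured value.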
We use the python \texttt{emcee} package \citep{foremanmackey13} with the stretch-move algorithm \citep{goodman10}, which we applied as follows (see also the flow-chart in Figure~\ref{fig:flowchart}): \begin{itemize} \item Step 0: a large number of chains or ``walkers'' are initialized, each one corresponding to a random point in the multi-dimensional parameter space defined by the set of parameters described above. We use 426 walkers (a multiple of the number of CPU cores available for our study). \item Step 1: we solve the TOV equations for each walker, providing 426 \mbox{\mns--\rns}\ relations at each iteration. \item Step 2: for each walker, the sampled masses of the seven NSs are associated with seven calculated radii according to the \mbox{\mns--\rns}\ relation. Using those \mbox{$M_{\rm NS}$}\ and \mbox{$R_{\rm NS}$}\ and the other astrophysical parameters, we calculate the global $\chi^2$ between the emission models (NS atmosphere) and the data for the seven NSs. \item Step 3: given the calculated probability (the likelihood multiplied by the Gaussian distance priors mentioned above), the evolution of the walkers in the parameter space is decided according to the stretch-move algorithm. To determine the new position, each walker is randomly paired with another and will move along the line joining the two current points in the parameter space. The amount of ``stretch'' is determined by the scale parameter (the only adjustable parameter in this algorithm), which has been set to $a=2.0$ as prescribed in \cite{goodman10}. The new position is accepted or rejected depending on its probability. For more details about the stretch-move algorithm, see \cite{goodman10,foremanmackey13}. \item Step 4: Steps 1 to 3 are repeated numerous times until the walkers have converged to the region of the parameter space resulting in the highest likelihood, or minimum $\chi^2$ value.
\item Step 5: When the MCMC loop stops, the statistical posterior distributions are calculated and marginalized to create the outputs. \end{itemize} Before running the code on the full data set and with the most general meta-modeling in section~\ref{sec:results}, we first test it considering the constant-\mbox{$R_{\rm NS}$}\ toy-model. In addition to its simplicity, this test is interesting since it allows us to compare with results that have already been reported in the literature. \begin{figure} \centering \begin{tikzpicture}[node distance=2cm] \node (start) [startstop] {Start}; \node (in1) [io, below of=start, text width=1.5cm,yshift=0.2cm ] {Load X-ray spectra of 7 qLMXBs}; \node (in2) [io, right of=in1, yshift=0.5cm, xshift=1.3cm, text width=2.25cm] {Initiate set of MCMC walkers}; \node (in3) [io, below of=in2, yshift=0.8cm,text width=1.7cm] {MCMC step}; \node (proc1a) [process, below of=in3, xshift=-1cm, yshift=0.0cm, text width=1.5cm] {Sample NS$_{i}$ parameters ($N_{H,i}$, $T_{i}$, $M_{i}$, ...)}; \node (proc1b) [process, right of=proc1a, yshift=-0.2cm, text width=1.5cm] {Sample EoS parameters ($L_{\rm sym}$, $K_{\rm sym}$, $Q_{\rm sat}$,...)}; \node (proc1c) [model, below of=proc1b, text width=1.5cm, yshift=-0.2cm] {EoS Model}; \node (proc1d) [process, below of=proc1c, text width=1.5cm, yshift=0.2cm] {Solve TOV to get $M$--$R$ relation}; \node (proc1e) [process, left of=proc1d, text width=1.5cm] {Given $M_{i}$, get $R_{i}$ from $M$--$R$ relation}; \node (proc1f) [model, below of=proc1e, text width=1.5cm, yshift=0.3cm] {Spectral model}; \node (proc2) [process, below of=proc1f, text width=2.5cm, yshift=0.5cm] {Compare to X-ray data. 
Get likelihood}; \node (dec1) [decision, below of=proc2, yshift=0.0cm, text width=1cm] {Continue MCMC}; \node (proc2a) [process, right of=dec1, xshift=1cm, text width=1.5cm] {New MCMC step}; \node (proc2b) [process, below of=dec1, text width=2cm, yshift=0.0cm] {Performs statistical distributions}; \node (out1) [io, below of=proc2b,text width=3cm, yshift=0.2cm] {Outputs: corner-plots, marginalized distributions, $M$--$R$ probabilities, ...}; \node (stop) [startstop, below of=out1,text width=2cm, yshift=0.3cm] {Stop}; \draw [arrow] (start) -- (in1); \draw [arrow] (start) -| (in2); \draw [arrow] (in2) -- (in3); \draw [arrow] (in1) |- (proc2); \draw [arrow] (in3) -- (proc1a); \draw [arrow] (in3) -- (proc1b); \draw [arrow] (proc1a) -- (proc1e); \draw [arrow] (proc1b) -- (proc1c); \draw [arrow] (proc1c) -- (proc1d); \draw [arrow] (proc1d) -- (proc1e); \draw [arrow] (proc1e) -- (proc1f); \draw [arrow] (proc1f) -- (proc2); \draw [arrow] (proc2) -- (dec1); \draw [arrow] (dec1) -- node[anchor=north east] {yes} (proc2a); \draw [arrow] (proc2a) |- (in3); \draw [arrow] (dec1) -- node[anchor=south west] {no} (proc2b); \draw [arrow] (proc2b) -- (out1); \draw [arrow] (out1) -- (stop); \end{tikzpicture} \caption{Flowchart of the global fit to the data (X-ray spectra of 7 qLMXBs) with a set of walkers, and using MCMC and stretch-move algorithm (see text for more details). Note that the EoS model is implemented inside the observational analysis to provide consistent $MR$ relations.\label{fig:flowchart}} \end{figure} \subsection{Tests using a constant radius toy model} We first consider the constant-\mbox{$R_{\rm NS}$}\ model \citep{guillot13}, which assumes that all NSs have the same radius, i.e., that the EoS is represented in \mbox{\mns--\rns}\ space by a vertical line in which \mbox{$R_{\rm NS}$}\ is independent of \mbox{$M_{\rm NS}$}\ (which remain as free parameters). 
This is a simple toy-model approximation motivated by the observation that most nucleonic EoSs (those consistent with 2\mbox{$\,{\rm M}_\odot$}) predict a rather weak dependence of the radius on \mbox{$M_{\rm NS}$}\ between 1\mbox{$\,{\rm M}_\odot$}\ and 2\mbox{$\,{\rm M}_\odot$}. The purpose of this toy-model is mainly to test our code and MCMC approach. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{R_histogram.png} \caption{Marginalized posterior probability distributions of the radius obtained for the constant-\mbox{$R_{\rm NS}$}\ toy-model used in the MCMC test runs (with the two sets of distances).} \label{fig:radConst} \end{figure} After running the analysis described in Section~\ref{sec:gw}, we obtain the results shown in Figure~\ref{fig:radConst}, i.e., the \mbox{$R_{\rm NS}$}\ posterior distributions (the same for all seven NSs) considering the distance sets \emph{Dist \#1} and \emph{Dist \#2}, marginalized over the other parameters of the model. For the two distance sets, the radius distributions are $\mbox{$R_{\rm NS}$}=11.09^{+0.38}_{-0.36}\hbox{$\,{\rm km}$}$ (\emph{Dist \#1}, for $\mbox{$\chi^2_\nu$}=1.06$) and $\mbox{$R_{\rm NS}$}=11.04^{+0.39}_{-0.35}\hbox{$\,{\rm km}$}$ (\emph{Dist \#2}, for $\mbox{$\chi^2_\nu$}=1.07$). These values are consistent with the recent results of \cite{guillot16b}, but at odds with older results \citep[e.g.][]{guillot13, guillot14}. The differences are likely due to the inclusion of new sources (47~Tuc~X-7) and new data, the use of recent distance measurements, the improvement of the analysis (e.g., the new absorption model \texttt{tbabs}), and the inclusion of the pile-up correction model for all sources (including those with a $\sim1\%$ pile-up fraction, see Section~\ref{sec:qlmxb}). Overall, we find a radius distribution that is easier to reconcile with the nuclear physics models of Section~\ref{sec:eos}, i.e., having non-negligible probabilities for a NS radius larger than about 11\hbox{$\,{\rm km}$}.
However, a large fraction of the posterior probability distribution is located below 11\hbox{$\,{\rm km}$}, in conflict with nuclear physics expectations \citep[e.g.,][]{margueron18b,Tews2018} as well as with our illustrative Figure~\ref{fig:MR1}. For instance, one could deduce from these figures that $\mbox{$R_{\rm NS}$}\lesssim 11\hbox{$\,{\rm km}$}$ requires $L_{\rm sym}\lesssim 20\mbox{$\,{\rm MeV}$}$, which contradicts nuclear physics expectations \citep{lattimer2013}. In the following section, we address the question of the compatibility between the thermal emission modeling and the nuclear EoS by including the meta-model directly in the global spectral data analysis. In this way, we show that there is no inconsistency between the observational data and the nuclear EoS, and we extract an estimate of the nuclear EoS parameters. \section{Results} \label{sec:results} In this section, the main results of our novel approach are presented and discussed. \subsection{Framework} We remind the reader that the main features of our work are that i) we simultaneously fit seven NS qLMXB sources, ii) we impose the same EoS on all these sources, and iii) we treat the EoS and the astrophysical model parameters equally. Only a few nuclear EoS parameters are taken as free parameters. We recall that the nuclear meta-modeling is governed by a set of empirical parameters (see Section~\ref{sec:eos}). Some of these empirical parameters can be well constrained by nuclear experiments \citep[see the discussion in][]{margueron18a}, and they are kept fixed in the present analysis: $n_{\rm sat}$, $E_{\rm sat}$, $E_{\rm sym}$, and $K_{\rm sat}$ (values in Table~\ref{tab:empParam}). The more influential and less well-known parameters, $L_{\rm sym}$, $K_{\rm sym}$ and $Q_{\rm sat}$, are fitted in our analysis.
The values of $K_{\rm sym}$ and $Q_{\rm sat}$ are currently unknown, while there exist constraints on $L_{\rm sym}$ from nuclear experiments and nuclear theoretical predictions. These constraints indicate that $L_{\rm sym}$ has a value around 50~MeV with an uncertainty of about \ppm10~MeV \citep{lattimer2013}. We therefore incorporate this knowledge from nuclear physics by considering a Gaussian prior on $L_{\rm sym}$ centered at 50~MeV with a width of 10~MeV. Since $K_{\rm sym}$ and $Q_{\rm sat}$ are unknown, we consider uniform distributions in the wide ranges listed in Table~\ref{tab:empParam}. The higher-order empirical parameters, $Q_{\rm sym}$ and $Z_{\rm sat/sym}$, are not known. However, since they influence only the high-density part of the EoS, they will not be tightly constrained by the present analysis. Therefore, they can be fixed to the values listed in Table~\ref{tab:empParam} \citep{margueron18b}. \subsection{Main results} \label{sec:mainresults} The MCMC routine (described in Section~\ref{sec:mcmc}) was run considering the seven qLMXB sources mentioned above. We have considered the chains that converged to the global minimum, excluding a few percent (1--5\%) stuck at higher \mbox{$\chi^2$}\ values (typically at reduced \mbox{$\chi^2$}\ above 10). We tested for the presence of these ``stuck chains'' with repeated iterations of the exact same MCMC run. In each case, the minimum best-fit \mbox{$\chi^2$}\ is always found to be the same, and a small fraction of chains remains in the high-\mbox{$\chi^2$}\ parts of the parameter space. After 150,000 iterations, the reduced \mbox{$\chi^2$}\ distribution is centered around $1.10\pm 0.02$, for 1126 degrees of freedom, and the best fit corresponds to $\mbox{$\chi^2_\nu$}=1.08$, giving a null hypothesis probability of 3.1\%.
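The convergence cut described above can be sketched as follows; the arrays are synthetic stand-ins for the actual chains.

```python
# Sketch of the cut on stuck walkers: drop those whose final reduced
# chi^2 stays above a threshold (synthetic data for illustration).
import numpy as np

def keep_converged(chains, red_chi2, threshold=10.0):
    """chains: (n_walkers, n_steps, n_params); red_chi2: final reduced
    chi^2 per walker. Returns only the converged walkers."""
    return chains[red_chi2 < threshold]

rng = np.random.default_rng(0)
chains = rng.normal(size=(426, 1000, 3))       # 426 walkers, as in the text
red_chi2 = np.full(426, 1.1)
red_chi2[:5] = 12.0                            # a few stuck walkers
print(keep_converged(chains, red_chi2).shape)  # (421, 1000, 3)
```

Boolean masking along the first axis keeps the per-walker histories intact, so the retained chains can be marginalized directly.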
\begin{figure} \centering \includegraphics[width=\columnwidth]{gaia_Empirical_parameters.png} \caption{Marginalized posterior probability distributions and correlations of the empirical parameters $L_{\rm sym}$, $K_{\rm sym}$ and $Q_{\rm sat}$. On the two-dimensional correlation plots, the contours indicate the 1, 2, and 3$\sigma$ confidence areas. On the one-dimensional posterior distributions, the dashed vertical lines show the 68\% and 90\% quantiles around the median values. Here, all seven qLMXBs are included, the prior on $L_{\rm sym}=50\pm10\mbox{$\,{\rm MeV}$}$ is considered, and the distances are determined from the set \emph{Dist \#2}. } \label{fig:lkq} \end{figure} The marginalized posterior probabilities for the empirical EoS parameters $L_{\rm sym}$, $K_{\rm sym}$ and $Q_{\rm sat}$ are shown in Figure~\ref{fig:lkq}. We observe from the marginalized distributions that $L_{\rm sym}$ peaks at lower values than the one imposed by the prior ($50\pm10\mbox{$\,{\rm MeV}$}$), but remains consistent with it: $L_{\rm sym}=37.2^{+9.2}_{-8.9}\mbox{$\,{\rm MeV}$}$. This somewhat reflects the tension driving the fit towards low radii at low masses (see Figure~\ref{fig:MR1} and related discussion). The empirical parameter $K_{\rm sym}=-85^{+82}_{-70}\mbox{$\,{\rm MeV}$}$ is rather well constrained compared to the uniform prior, showing that this parameter is important for our data set. Notice that it is also remarkably compatible with the value $-100\pm 100\mbox{$\,{\rm MeV}$}$ extracted from analyses of chiral effective field theory (EFT) calculations \citep{margueron18a}. Finally, the empirical parameter $Q_{\rm sat}=318^{+673}_{-366}\mbox{$\,{\rm MeV}$}$ is less constrained, but there is a preference for the lower values of the uniform prior distribution. The values of the empirical parameters for this run are reported in the first row of Table~\ref{tab:res1}.
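The quoted central values and uncertainties correspond to the median and 68\% quantiles of the marginalized samples; schematically, with a mock Gaussian chain standing in for the real $L_{\rm sym}$ posterior:

```python
# Median and 68% credible bounds from marginalized MCMC samples;
# the mock Gaussian chain below is illustrative, not the real posterior.
import numpy as np

def median_interval(samples, level=0.68):
    lo, med, hi = np.quantile(samples, [(1 - level) / 2, 0.5, (1 + level) / 2])
    return med, med - lo, hi - med  # value, lower error, upper error

rng = np.random.default_rng(1)
mock_lsym = rng.normal(37.2, 9.0, size=200_000)  # mock L_sym chain (MeV)
med, minus, plus = median_interval(mock_lsym)
# med ~ 37.2; minus, plus ~ 9 (0.994 sigma each side for a Gaussian)
```

The same quantile extraction applies to any of the marginalized one-dimensional distributions shown in the corner plot.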
We point out that, despite the rather large uncertainties on the empirical parameters $K_{\rm sym}$ and $Q_{\rm sat}$, this is the first time that these parameters are extracted from data. The correlations among empirical parameters are also visible in Figure~\ref{fig:lkq}. There is a weak anti-correlation between $L_{\rm sym}$ and $K_{\rm sym}$ and a stronger anti-correlation between $K_{\rm sym}$ and $Q_{\rm sat}$. These correlations reflect the causality and stability requirements, implying, for instance, that a large value of $K_{\rm sym}$ must be compensated by a small value of $L_{\rm sym}$ or of $Q_{\rm sat}$ to limit the upper bound on the sound velocity, and vice versa for the lower bound. The anti-correlation between $L_{\rm sym}$ and $K_{\rm sym}$ was already found in~\cite{margueron18b}, but the empirical parameters $L_{\rm sym}$/$K_{\rm sym}$ and $Q_{\rm sat}$ were found to be correlated for stiff EoSs (if the direct Urca process occurs for $\mbox{$M_{\rm NS}$} < 2\mbox{$\,{\rm M}_\odot$}$), while for soft EoSs (no direct Urca possible for $\mbox{$M_{\rm NS}$} < 2\mbox{$\,{\rm M}_\odot$}$) no correlations were found. The anti-correlation between $K_{\rm sym}$ and $Q_{\rm sat}$ is therefore a new feature coming from the fit to the thermal X-ray emission. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{MR_density_maps.png} \caption{\mbox{\mns--\rns}\ posterior probability distributions considering all the seven qLMXB sources, a prior on $L_{\rm sym}$ and on the distances from the set \emph{Dist \#1} (upper panel) and \emph{Dist \#2} (lower panel).
The 50\%, 90\% and 99\% confidence levels are represented, as well as the constraints from AT2017gfo \citep{bauswein17} and from GW170817 \citep{annala18,abbott18,Tews2018}.} \label{fig:lkqMR} \end{figure} The \mbox{\mns--\rns}\ posterior probability distributions corresponding to the MCMC runs with the distances of set \emph{Dist \#1} (upper panel) and set \emph{Dist \#2} (lower panel) are displayed in Figure~\ref{fig:lkqMR}. It is reassuring to notice that the \mbox{\mns--\rns}\ posterior probability distribution is almost insensitive to the set of distances considered, as was also observed for the constant-\mbox{$R_{\rm NS}$}\ test runs (Figure~\ref{fig:radConst}). The global features of the probability distribution are the same for the two distance sets: the radius that we obtain is between $\sim11.5$ and $13.0\hbox{$\,{\rm km}$}$ for a 1.4\mbox{$\,{\rm M}_\odot$}\ NS, with a preference for low masses, although the 90\% credible intervals are compatible with 2\mbox{$\,{\rm M}_\odot$}. In Figure~\ref{fig:indiv_fr1_7} (top panels), we present the 90\% credible interval \mbox{\mns--\rns}\ posterior distributions of individual sources, obtained with the distance sets \emph{Dist \#2} (panel a) and \emph{Dist \#1} (panel b). Most sources have credible intervals that reach $\sim1.9\mbox{$\,{\rm M}_\odot$}$ or higher. In a few cases, the 90\% credible intervals only reach masses around 1.4--1.5\mbox{$\,{\rm M}_\odot$}, which appears to favor low-mass NSs. However, this is compatible with the current distribution of MSP masses \citep{antoniadis16,ozel16b}, which descend from LMXBs; we note that the lowest known NS mass is 1.174\ppm0.004\mbox{$\,{\rm M}_\odot$}\ (for PSR~J0453+1559; \citealt{martinez15}). Therefore, at the moment, there are no discrepancies with our current knowledge of NS formation mechanisms and their expected masses. We note that the masses of the individual sources are not constrained as well as the radius (Figure~\ref{fig:indiv_fr1_7}).
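The weaker constraints on the masses reflect that the quantity most directly encoded in a qLMXB spectrum is the apparent radius $R_{\infty}=R_{\rm NS}\left(1-2GM_{\rm NS}/(R_{\rm NS}c^{2})\right)^{-1/2}$, which traces a curve rather than a point in \mbox{\mns--\rns}\ space. The short sketch below illustrates this degeneracy; the specific $(M,R)$ pairs are chosen purely for illustration and are not taken from our posteriors.

```python
import math

G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
C = 2.998e8        # speed of light (m s^-1)
M_SUN = 1.989e30   # solar mass (kg)

def r_infinity(m_ns, r_ns):
    """Apparent radius R_inf = R / sqrt(1 - 2GM/(R c^2)).

    m_ns in solar masses, r_ns in km; returns km.
    """
    r_m = r_ns * 1.0e3
    compactness = 2.0 * G * m_ns * M_SUN / (r_m * C**2)
    return r_ns / math.sqrt(1.0 - compactness)

# Two illustrative stars with nearly identical apparent radii:
print(r_infinity(1.4, 12.4))   # ~15.2 km
print(r_infinity(1.6, 11.75))  # ~15.2 km as well
```

Both pairs yield $R_{\infty}\simeq15.2\hbox{$\,{\rm km}$}$, so a perfect $R_{\infty}$ measurement alone cannot distinguish them; breaking the degeneracy requires either an independent mass measurement or an EoS parameterization linking masses and radii.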
This is inherent to the method used in the present work, in which the measurable physical quantity is \mbox{$R_{\infty}$}. The constraints on the radius emerge from the combination of 1) the general shape in \mbox{\mns--\rns}-space of most EoS models in our nucleonic parameterization, and 2) the shape of the quantity \mbox{$R_{\infty}$}\ extracted from qLMXB spectra (see previous works, e.g., \citealt{guillot11a, bogdanov16, shaw18} for the \mbox{\mns--\rns}\ constraints from single qLMXBs, for which a significant portion of the \mbox{\mns--\rns}\ contours appear at constant \mbox{$R_{\rm NS}$}). Therefore, unless the mass of the NS is measured independently, the degeneracy between \mbox{$M_{\rm NS}$}\ and \mbox{$R_{\rm NS}$}\ from observations of qLMXBs can only be minimized by the implementation of an EoS parameterization, as done in this work. Other events involving NSs enable independent measurements of \mbox{$M_{\rm NS}$}\ and \mbox{$R_{\rm NS}$}. For example, type-I X-ray bursts with photospheric radius expansion provide two observables (the Eddington flux and the cooling tail normalization, which both depend on \mbox{$M_{\rm NS}$}\ and \mbox{$R_{\rm NS}$}; see \citealt{ozel16b} for a review). The GW signals of two merging NSs, on the other hand, provide measurements of the merging masses and of the tidal deformability, which can be used to derive the radius \citep[e.g.,][]{abbott18,de18}. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{density_maps_all.pdf} \caption{({\it top}) 90\% credible contours of the posterior probability distributions in \mbox{\mns--\rns}\ space for the individual sources in the case of Frameworks~\#1 (a) and \#2 (b). Note that for visibility, the legend indicating the individual sources is split between the panels (a) and (b). ({\it bottom}) 90\% credible contours of the posterior probability distributions in \mbox{\mns--\rns}\ space for Frameworks~\#1 to \#4 (panel c) and \#5 to \#7 (panel d).
See Table~\ref{tab:res1} for the full results. The constraints from GW170817 \citep{abbott18} are also shown.} \label{fig:indiv_fr1_7} \end{figure*} The 50\%, 90\% and 99\% contours resulting from the present analysis can also be compared with the constraints from AT2017gfo \citep{bauswein17} and from GW170817 \citep{annala18,abbott18,Tews2018}. There is a good agreement between these different constraints. Nonetheless, the width of the distribution obtained from the present work appears narrower than that from analyses of GW170817, indicative of more restrictive constraints on the NS radius; but this could also be due to the fact that we do not consider phase transitions in the meta-modeling of our work. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{therm-eos-rho.pdf} \vspace{-0.8cm} \caption{Boundary contours obtained for the pressure $P$ as a function of the energy density $\rho$, considering all seven qLMXB sources, the $L_{\rm sym}$ prior and the distances from \emph{Dist \#1} (orange band with solid contour) and \emph{Dist \#2} (purple band with dashed contour). The green band with dotted contour represents the prediction of the meta-model (MM) constrained by chiral EFT calculations in nuclear matter and the observed maximum mass of NSs. There is a good overlap between the observed and the MM predictions for the EoS.} \label{fig:lkqeos} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{therm-eos-vs-esym.pdf} \vspace{-0.5cm} \caption{Boundary contours obtained for the sound velocity (left panel) and the symmetry energy (right panel) as a function of the energy density, for the same cases presented in Figure~\ref{fig:lkqeos}. See text for more discussion.} \label{fig:lkqeos-vs-esym} \end{figure} The most likely EoS properties can also be deduced from our MCMC analyses, since the EoSs are described by empirical parameters.
In Figure~\ref{fig:lkqeos}, we show the boundaries of the relation between the total pressure $P$ and the energy density $\rho$ resulting from our analysis, considering the nucleon and lepton contributions in $\beta$-equilibrium and for the two distance sets \emph{Dist \#1} and \emph{Dist \#2}. As noted before, the two distance sets do not significantly affect the most likely EoSs defined by those boundaries. Our predictions for the EoS are contrasted with a prediction based on different constraints, labelled as MM in Figure~\ref{fig:lkqeos}. The meta-model MM is constrained by quantum Monte Carlo predictions in low-density nuclear matter up to $n_{\rm sat}$ and based on two- and three-nucleon forces from the chiral EFT Hamiltonians given in~\cite{Tews2018b}. The extrapolation beyond $n_{\rm sat}$ is controlled by causality and stability requirements, as well as by the positivity of the symmetry energy and the maximum observed NS masses, as in~\cite{Tews2018}. There is a good overlap between the EoS deduced from our analysis and the one from the MM analysis. It is, however, interesting to note that the intersection between the bands generated from our analysis and from the MM prediction could potentially further reduce the possibilities for the EoS. \begin{deluxetable*}{lllllllllll}[t] \centering \tablecaption{Distribution of empirical parameters $L_{\rm sym}$, $K_{\rm sym}$ and $Q_{\rm sat}$ for various cases. \\ Group A contains high S/N sources (peaked masses): NGC6397, 47-Tuc, M28.\\ Group B contains low S/N sources (flat masses): \mbox{$\omega$\,Cen}, NGC6304, M13 and M30.\\ Group A$^\prime$ contains sources with peaked masses: NGC6397, 47-Tuc, M28 and M13.\\ Group B$^\prime$ contains sources with almost-flat masses: \mbox{$\omega$\,Cen}, NGC6304 and M30.
See text for more details.\label{tab:res1}} \tablecolumns{11} \tablewidth{0pt} \tablehead{ \colhead{Framework} & \colhead{Sources} & \colhead{Distances} & \colhead{prior} & \colhead{$L_{\rm sym}$} & \colhead{$K_{\rm sym}$} & \colhead{$Q_{\rm sat}$} & \colhead{$R_{1.45}$} & \colhead{\mbox{$\chi^2_\nu$}} & \colhead{nb. of} & \colhead{d.o.f.} \\ & & & \colhead{$L_{\rm sym}$} & \colhead{(MeV)} & \colhead{(MeV)} & \colhead{(MeV)} & \colhead{(km)} & & \colhead{param.} & } \startdata 1 & all & \emph{Dist \#2} & yes & 37.2$^{+9.2}_{-8.9}$ & -85$^{+82}_{-70}$ & 318$^{+673}_{-366}$ & $12.35\pm 0.37$ & 1.08 & 49 & 1126\\ 2 & all & \emph{Dist \#1} & yes & 38.3$^{+9.1}_{-8.9}$ & -91$^{+85}_{-71}$ & 353$^{+696}_{-484}$ & $12.42\pm 0.34$ & 1.07 & 49 & 1126 \\ \hline 3 & all & \emph{Dist \#1} & yes & 38.6$^{+9.2}_{-8.7}$ & -95$^{+80}_{-36}$ & $300$ & $12.25\pm 0.30$ & 1.07 & 48 & 1127 \\ 4 & all & \emph{Dist \#1} & no & 27.2$^{+10.9}_{-5.3}$ & -59$^{+103}_{-74}$ & 408$^{+735}_{-430}$ & $12.37\pm 0.30$ & 1.07 & 49 & 1126 \\ \hline 5 & all/47-Tuc & \emph{Dist \#1} & yes & 43.4$^{+9.7}_{-9.3}$ & -66$^{+137}_{-102}$ & 622$^{+763}_{-560}$ & $12.57\pm 0.41$ & 1.08 & 43 & 700 \\ 6 & all/NGC6397 & \emph{Dist \#1} & yes & 42.6$^{+9.9}_{-9.5}$ & -77$^{+129}_{-96}$ & 623$^{+757}_{-544}$ & $12.58\pm 0.40$ & 1.09 & 43 & 961 \\ 7 & all/M28 & \emph{Dist \#1} & yes & 42.5$^{+9.5}_{-9.5}$ & -80$^{+124}_{-91}$ & $597^{+717}_{-510}$ & $12.46\pm 0.37$ & 1.07 & 43 & 846 \\ \hline 8 & A & \emph{Dist \#2} & yes & 38.6$^{+9.4}_{-8.9}$ & -91$^{+81}_{-76}$ & 343$^{+805}_{-431}$ & $12.18\pm 0.29$ & 1.04 & 21 & 874 \\ 9 & A$^\prime$ & \emph{Dist \#2} & yes & 37.5$^{+9.0}_{-8.9}$ & -88$^{+76}_{-70}$ & 263$^{+764}_{-361}$ & $12.22\pm 0.32$ & 1.06 & 29 & 945 \\ 10 & B & \emph{Dist \#2} & yes & 49.12$^{+10.0}_{-10.0}$ & -6.66$^{+137}_{-138}$ & 804$^{+709}_{-675}$ & $12.88\pm 0.43$ & 1.19 & 28 & 255 \\ 11 & B$^\prime$ & \emph{Dist \#2} & yes & 50.3$^{+9.8}_{-9.6}$ & -1$^{+134}_{-143}$ & 881$^{+671}_{-705}$ & 
$12.98\pm 0.40$ & 1.18 & 23 & 178 \\ \enddata \end{deluxetable*} In Figure~\ref{fig:lkqeos-vs-esym}, we show both the squared sound speed in units of the speed of light, $(v_s/c)^2$, and the symmetry energy, $e_{\rm sym}$, as functions of the energy density $\rho$. The coloured bands in Figure~\ref{fig:lkqeos-vs-esym} are the same as in Figure~\ref{fig:lkqeos}. Here, too, one can see that there is little impact of the distance sets on $(v_s/c)^2$ and $e_{\rm sym}$, as for the general EoS shown in Figure~\ref{fig:lkqeos}. Similarly to Figure~\ref{fig:lkqeos}, there is also a good overlap of our predictions with the MM ones. \begin{deluxetable}{cccc} \tablecaption{Distributions of all the model parameters with quantiles corresponding to the 98\% credible interval, except the empirical parameters given in Table~\ref{tab:res1}, for the reference calculation (distances of \emph{Dist \#2}, prior on $L_{\rm sym}$, variation over the three empirical parameters $L_{\rm sym}$, $K_{\rm sym}$ and $Q_{\rm sat}$).\label{tab:results}} \tabletypesize{\scriptsize} \tablecolumns{4} \tablehead{ \colhead{Parameter} & \colhead{Source} & \colhead{\emph{Dist \#1}} & \colhead{\emph{Dist \#2}} } \startdata & M13 & 0.15 -- 0.96 & 0.15 -- 0.95 \\ & \mbox{$\omega$\,Cen} & 0.23 -- 0.97 & 0.20 -- 0.97 \\ & 47Tuc & 0.15 -- 0.65 & 0.16 -- 0.67 \\ pile-up $\alpha$ & M28 & 0.38 -- 0.54 & 0.39 -- 0.55 \\ & M30 & 0.27 -- 0.97 & 0.24 -- 0.97 \\ & NGC6304 & 0.20 -- 0.97 & 0.20 -- 0.97 \\ & NGC6397 & 0.33 -- 0.81 & 0.38 -- 0.85 \\ \hline & M13 & 1.56 -- 4.44 & 2.47 -- 5.43 \\ & \mbox{$\omega$\,Cen} & 17.19 -- 21.01 & 15.76 -- 19.32 \\ & 47Tuc & 2.86 -- 4.64 & 2.93 -- 4.76 \\ $N_{\rm{H}}\left(\unit{10^{20} cm^{-2}}\right)$ & M28 & 35.17 -- 37.98 & 35.34 -- 37.88 \\ & M30 & 3.01 -- 6.37 & 3.18 -- 6.47 \\ & NGC6304 & 44.96 -- 56.61 & 45.98 -- 58.13 \\ & NGC6397 & 16.60 -- 17.73 & 17.38 -- 19.44 \\ \hline & M13 & 79.06 -- 90.42 & 76.04 -- 86.57 \\ & \mbox{$\omega$\,Cen} & 70.48 -- 80.96 &
73.62 -- 86.05 \\ & 47Tuc & 104.73 -- 112.71 & 104.54 -- 112.79 \\ $kT_{\rm{eff}}\,\left(\rm{eV}\right)$ & M28 & 109.01 -- 118.76 & 108.49 -- 116.35 \\ & M30 & 86.69 -- 99.99 & 86.51 -- 99.17 \\ & NGC6304 & 91.93 -- 108.39 & 90.26 -- 105.36 \\ & NGC6397 & 62.87 -- 68.88 & 61.16 -- 65.88 \\ \hline & M13 & 0.77 -- 1.95 & 0.74 -- 1.95 \\ & \mbox{$\omega$\,Cen} & 0.80 -- 2.01 & 0.85 -- 2.06 \\ & 47Tuc & 0.66 -- 1.44 & 0.67 -- 1.47 \\ $M_{\rm{NS}}\,(\mbox{$\,{\rm M}_\odot$})$ & M28 & 0.70 -- 1.51 & 0.68 -- 1.43 \\ & M30 & 0.78 -- 2.00 & 0.80 -- 1.99 \\ & NGC6304 & 0.87 -- 2.07 & 0.85 -- 2.06 \\ & NGC6397 & 0.72 -- 1.62 & 0.70 -- 1.47 \\ \hline & M13 & 7.72 -- 8.50 & 7.07 -- 7.19 \\ & \mbox{$\omega$\,Cen} & 4.50 -- 4.65 & 5.18 -- 5.25 \\ & 47Tuc & 4.50 -- 4.65 & 4.47 -- 4.59 \\ $D\,\left(\rm{kpc}\right)$ & M28 & 5.48 -- 5.89 & 5.467 -- 5.65 \\ & M30 & 8.02 -- 8.77 & 8.06 -- 8.21 \\ & NGC6304 & 6.15 -- 6.43 & 5.86 -- 6.01 \\ & NGC6397 & 2.49 -- 2.56 & 2.29 -- 2.34 \\ \hline & M13 & 4.55 -- 15.57 & 5.64 -- 16.48 \\ & \mbox{$\omega$\,Cen} & 1.21 -- 5.20 & 0.89 -- 4.87 \\ & 47Tuc & 1.15 -- 11.50 & 1.17 -- 12.44 \\ $N_{\rm{pl}}$ & M28 & 2.30 -- 10.63 & 2.23 -- 10.32 \\ & M30 & 4.01 -- 19.99 & 4.16 -- 19.54 \\ & NGC6304 & 3.87 -- 15.78 & 4.42 -- 15.66 \\ & NGC6397 & 2.83 -- 8.19 & 2.89 -- 8.30 \\ \hline $C_1$ & M13 & 0.89 -- 1.09 & 0.89 -- 1.09 \\ & \mbox{$\omega$\,Cen} & 0.99 -- 1.16 & 0.99 -- 1.18 \\ $C_2$ & M13 & 0.79 -- 0.96 & 0.78 -- 0.99 \\ & \mbox{$\omega$\,Cen}& 0.96 -- 1.12 & 0.96 -- 1.12 \\ \enddata \tablecomments{$C_1$ and $C_2$ are the multiplicative coefficients that account for absolute flux cross-calibration uncertainties between the \textit{XMM}-pn, \textit{XMM}-MOS and \textit{Chandra}\ detectors (see Section \ref{sec:spec_model}). } \end{deluxetable} The posterior ranges at 98\% confidence for the qLMXB emission model parameters are given in Table~\ref{tab:results} for the two distance sets considered here, \emph{Dist \#1} and \emph{Dist \#2}.
First, we note that all parameters resulting from the \emph{Dist \#1} run are consistent with those of \emph{Dist \#2}. The small differences observed between the results of these two runs are not significant -- only the posterior distributions of the seven distances differ, since they are driven by the imposed priors. The NS temperatures and masses are consistent with previously reported values \citep{guillot13,guillot14,heinke14,bogdanov16}. Interestingly, none of the NSs studied have masses exceeding $\sim 2.1\mbox{$\,{\rm M}_\odot$}$ at 98\% confidence. The best-fit absorption values \mbox{$N_{\rm H}$}\ are also consistent with the expected values in the direction of the host GCs (see, e.g., the neutral H maps of \citealt{dickey90,kalberla05}). Finally, we note that, although we allowed for non-thermal emission by including a power-law component in the spectral model, the power-law normalizations $N_{\rm pl}$ obtained are consistent with zero. Although this might not be readily obvious from the quantile ranges provided in Table~\ref{tab:results}, the seven posterior distributions (not shown in the paper) do indeed have non-zero probabilities at $N_{\rm pl}=0$. This lends further evidence for the absence of non-thermal emission in these objects. \subsection{Sensitivity analysis} This section presents a sensitivity analysis of our results, in which modifications of the main framework are tested: reducing the number of empirical parameters to vary, changing the set of distances considered (already largely explored in the previous subsection), or reducing the number of qLMXB sources considered.
We report in Table~\ref{tab:res1} the global results of the sensitivity analysis, where the impact of the changes is given for a few parameters: the EoS empirical parameters $L_{\rm sym}$, $K_{\rm sym}$ and $Q_{\rm sat}$, the radius $R_{1.45}$ for a $1.45\mbox{$\,{\rm M}_\odot$}$ NS, and the best $\mbox{$\chi^2_\nu$}$. We also give the number of fitting parameters and the number of degrees of freedom (d.o.f.) for each run. The rows of Table~\ref{tab:res1} represent the various frameworks considered. The first two rows represent the framework already explored, considering all seven qLMXB sources, the two sets of distances \emph{Dist \#1} and \emph{Dist \#2}, the prior on $L_{\rm sym}$, and the variation of the three empirical parameters. They are considered hereafter as our reference results, around which we slightly perturb the framework to extract the sensitivity of this reference to small corrections. In a first approach for the sensitivity analysis, we modify the treatment of the EoS parameters and priors. For Framework~\#3, we reduce the number of free EoS parameters by fixing $Q_{\rm sat}=300\mbox{$\,{\rm MeV}$}$. The impact on the centroid and width for $L_{\rm sym}$, $K_{\rm sym}$ and $R_{1.45}$ is marginal. The minimum $\mbox{$\chi^2_\nu$}$ changes minimally. For Framework~\#4, we replace the Gaussian prior on $L_{\rm sym}$ by a uniform prior ranging from 20 to 120 MeV. The change in the minimum $\mbox{$\chi^2_\nu$}$ is marginal, indicating that the fit statistic is not affected by adding/removing the prior on $L_{\rm sym}$. Furthermore, removing the prior on $L_{\rm sym}$ somewhat reduces the posterior value, which should have the effect of decreasing the overall radius (as indicated by Figure~\ref{fig:lkqMR}). However, we observe that $R_{1.45}$ remains unchanged (marginal decrease, see Table~\ref{tab:res1}), because lower values of $L_{\rm sym}$ are compensated by larger values of $K_{\rm sym}$ and $Q_{\rm sat}$.
This emerges from the anti-correlation between $L_{\rm sym}$ and $K_{\rm sym}$, as discussed in Section~\ref{sec:mainresults}, and reported in \cite{margueron18b}. This observed compensation between the empirical parameters originates from the fact that, in our meta-model, $e_{\rm sym}$ is constrained to positive values up to a threshold central density that produces 1.9\mbox{$\,{\rm M}_\odot$}\ NSs. Therefore, if $L_{\rm sym}$ becomes too low, the other parameters (mostly $K_{\rm sym}$) re-adjust to satisfy the condition on $e_{\rm sym}$. In the spectral analyses of this work, this translates into a stable average radius (Table~\ref{tab:res1}), as required by the observational data. In a second approach, we analyse the sensitivity of the result to the modification of the qLMXB source set by removing a single qLMXB. In Frameworks~\#5, \#6, and \#7, we removed 47Tuc~X-7, NGC~6397, or M28, respectively. Since these sources have the largest signal-to-noise ratios (S/N), their removal allows us to check to what extent they drive the results. While marginal, there are indeed some systematic effects: $L_{\rm sym}$ is increased by about 5--6\mbox{$\,{\rm MeV}$}, $K_{\rm sym}$ by 10--20\mbox{$\,{\rm MeV}$}, the value for $Q_{\rm sat}$ is almost doubled, and the radius $R_{1.45}$ is increased by up to 0.23\hbox{$\,{\rm km}$}. These systematic corrections remain inside the original uncertainty estimated for the reference results (Frameworks~\#1 and \#2). These different approaches are summarized in Figure~\ref{fig:indiv_fr1_7} (bottom), which shows the 90\% credible interval of the \mbox{\mns--\rns}\ posterior distributions of Frameworks \#1 to \#7, demonstrating that they broadly overlap in the 12--13\hbox{$\,{\rm km}$}\ radius range. In a third approach for the sensitivity analysis, we split the qLMXB sources into the different groups ($A$, $B$) and ($A^\prime$, $B^\prime$).
The groups $A$ and $B$ are defined with respect to the S/N ($A$ for S/N $>$ 60, $B$ otherwise, see Table~\ref{tab:sources}). The groups $A^\prime$ and $B^\prime$ are defined with respect to the posterior mass distribution ($A^\prime$ if the posterior mass distribution is well-peaked, $B^\prime$ if it is almost flat). There is a close correspondence between the S/N and the posterior mass distribution ($A=A^\prime$ and $B=B^\prime$), except for the source M13, which has a low S/N but a well-peaked mass distribution (see Table~\ref{tab:sources}). As a consequence, the results for the groups $A$ and $A^\prime$, as well as $B$ and $B^\prime$, are almost identical. The groups $A$ and $A^\prime$ prefer lower values of $L_{\rm sym}$, $K_{\rm sym}$ and $Q_{\rm sat}$, comparable to the reference results. They favor lower radii $R_{1.45}\approx 12.2\pm 0.3$~km. By contrast, the groups $B$ and $B^\prime$ tend to increase the values for $L_{\rm sym}$, $K_{\rm sym}$, $Q_{\rm sat}$ and $R_{1.45}$ to values that are still compatible with the uncertainty of the reference results, albeit with some tension. Naturally, the uncertainty on these values is also increased, especially for the parameter $K_{\rm sym}$ and for the radius $R_{1.45}$. We also note that, for the groups $B$ and $B^\prime$, the $L_{\rm sym}$ values are essentially identical to the prior given on that parameter ($L_{\rm sym}=50\pm10\mbox{$\,{\rm MeV}$}$), implying that these two groups have little weight in the constraints on $L_{\rm sym}$. As a conclusion of this sensitivity analysis, we can state that our reference results are only marginally impacted by small changes in the crucial input parameters such as the distance set, the number of free EoS parameters, and the selection of qLMXB sources. In addition, we identified a group of qLMXB sources with low S/N (subsets $B$ and $B^\prime$), which do not contribute significantly to the constraints on the empirical parameters, especially $L_{\rm sym}$.
These are the qLMXBs in \mbox{$\omega$\,Cen}, M13, M30, and NGC~6304. An improvement in the analysis of the qLMXB thermal emission will require more statistics, especially for these sources. \subsection{Comparison with previous work} \label{sec:comparison} Since the seminal papers of \cite{brown98} and \cite{rutledge02a}, the thermal emission from qLMXBs has been analyzed by several authors in order to better constrain the properties of matter at high density. Over the years, atmosphere models have been improved \citep[e.g.,][]{heinke06a,haakonsen12} and the number of sources used in the analysis has increased \citep{guillot14,bogdanov16}. The theoretical description of the EoS has also been improved, from the unconstrained case where masses and radii are considered independently of each other (i.e., directly extracted from \mbox{$R_{\infty}$}\ measurements, e.g., \citealt{heinke06a,guillot11a}) to more consistent approaches. In a first attempt to consistently analyse several qLMXB sources combined, a constant-radius EoS model was proposed, inspired by the qualitative behaviour of most of the nuclear EoSs~\citep{guillot13,guillot16b}. Because these early results did not consider a full treatment of the pile-up instrumental effects in the {\em Chandra}\ data (which are significant even at low pile-up fractions, \citealt{bogdanov16}), we only compare our results to the most recent ones, in which qLMXBs are analyzed including the effects of pile-up and which rely on inputs similar to ours. Recently, \cite{steiner18} found that the radius of a $1.4\mbox{$\,{\rm M}_\odot$}$ NS is most likely between 10.4 and 13.7\hbox{$\,{\rm km}$}\ at the 68\% confidence level, considering all cases tested in that work. Assuming a pure H atmosphere for all objects, they found \mbox{$R_{\rm NS}$}\ in the range 11.2--12.3\hbox{$\,{\rm km}$}, which is consistent with our results.
In comparison, the interval of possible radii in the present work is narrower, since we disregarded the possible occurrence of a strong phase transition. For $L_{\rm sym}$, \cite{steiner18} found 38.94--58.09\mbox{$\,{\rm MeV}$}, which is also consistent with our findings, albeit with a larger uncertainty band. However, the main difference between our analysis and that of \cite{steiner18} is that we have implemented the EoS parameters in the fitting procedure, while \cite{steiner18} determine a \mbox{\mns--\rns}\ posterior probability independently of the EoS and, in a second step, fit different EoS scenarios to this posterior result. It is reassuring to find that our results agree. In another analysis, \cite{ozel16a} analyzed the thermal emission of the same sources as ours, except 47 Tuc~X-7, in addition to data from six type-I X-ray bursts. They found radii between 10.1 and 11.1\hbox{$\,{\rm km}$}, for masses ranging from 1 to 2\mbox{$\,{\rm M}_\odot$}, which is smaller than our estimate. In a more recent analysis, \cite{bogdanov16} included the same twelve sources as \cite{ozel16b} with the addition of 47 Tuc~X-5 and X-7, and found radii ranging from 9.9 to 11.2\hbox{$\,{\rm km}$}. These two analyses favor a rather soft EoS, at odds with our results. There are still some differences between these analyses and ours: (1) they used different values for the distances; (2) they included the X-ray burst data, which we did not; and (3) they used polytropes to parameterize the EoS. Another main difference from our work is that they deduced the radii of NSs from the marginalized posterior mass distributions (as in \cite{steiner18}), while in our case the radii are calculated consistently with the masses for each considered EoS.
In our analysis, we have shown that, without nuclear physics inputs, the constant-\mbox{$R_{\rm NS}$}\ approximation prefers radii of $\sim 11.1\pm0.4\hbox{$\,{\rm km}$}$, consistent with the estimates in \cite{ozel16a} and \cite{steiner18}, whereas, when the nuclear EoS and a prior on the empirical parameter $L_{\rm sym}$ are included, the radius can increase up to $\sim$12.0--12.5\hbox{$\,{\rm km}$}. This demonstrates the advantage of fitting the thermal emission model parameters together with those of the EoS. Recently, \cite{nattila17} performed the first direct atmosphere-model spectral analysis of five hard-state type-I X-ray burst cooling tails from the LMXB 4U~1702--429. They extracted a precise estimate of the radius, $12.4\pm0.4\hbox{$\,{\rm km}$}$ at 68\% credibility, for a mass more difficult to constrain, in the range 1.4--2.2\mbox{$\,{\rm M}_\odot$}. Observations of millisecond pulsars also provided measurements of the NS radius, and therefore constraints on the EoS. While the early analyses provided lower limits on the radius (e.g., $R>10.4\hbox{$\,{\rm km}$}$ for PSR~J0437$-$4715, \citealt{bogdanov13}), the recent NS parameter estimates resulting from X-ray pulse-profile analyses of \textit{NICER}\ data yielded better-constrained radii \citep[priv. communication, and ][]{riley19,miller19}, compatible with those reported here. A different analysis, which exploited the far-ultraviolet and soft X-ray emission of PSR~J0437$-$4715, fitted to a low-temperature atmosphere model, resulted in $\mbox{$R_{\rm NS}$}=13.1\ud{0.9}{0.7}\hbox{$\,{\rm km}$}$, compatible with our results here, although with some moderate tension \citep{gonzalez19}. Finally, the recent observation of the NS-NS merger event GW170817 provided an estimate of the radii of the two stars, as well as constraints on their EoS through the tidal deformability parameter $\Lambda$~\citep{abbott18}.
Further analyses of the GW and electromagnetic signals led to the constraints on the radii drawn in Figure~\ref{fig:lkqMR}. These constraints are also in good agreement with our analysis. \section{Conclusions} \label{sec:conclusions} We have used a collection of X-ray spectra from seven qLMXBs and have analyzed their surface thermal emission assuming a NS H atmosphere and a flexible meta-modeling of the nuclear EoS, implemented directly in the fit. For the first time, the emission model and the EoS parameters have been treated on an equal footing, avoiding overconstraints that were potentially present in previous analyses. In all our analyses, we have taken into account the instrumental phenomenon of pile-up, the absorption of X-rays in the ISM (using the new {\tt tbabs} absorption model), and a power-law component accounting for possible non-thermal emission. We modeled the surface thermal emission using the {\tt NSATMOS} model, which requires the mass and the radius of the sources as inputs, so that we can implement the \mbox{\mns--\rns}\ relation, obtained from the EoS parameterization, directly in the spectral modeling. Because of the degeneracy between the radius of a source and its distance to the observer in the thermal photon flux \citep{rutledge99}, we have investigated the sensitivity of all our results to the distances of the sources. We have used two sets of distance measurements and showed that their differences have a rather small impact on the EoS parameter estimation. The MCMC method based on the stretch-move algorithm has been used to sample the whole parameter space (49 dimensions in our reference runs), and we found the best set of parameters reproducing the observational data. The method employed here was first tested on the constant-\mbox{$R_{\rm NS}$}\ approximation \citep{guillot13}, giving $\mbox{$R_{\rm NS}$}=11.1\pm0.4\hbox{$\,{\rm km}$}$, consistent with recent analyses \citep{guillot16b}.
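For readers unfamiliar with the sampler, the following is a minimal, self-contained sketch of one sweep of the Goodman \& Weare (2010) stretch move that underlies our MCMC runs; it is a toy version with a one-dimensional Gaussian test target, not the 49-dimensional pipeline used in the analysis.

```python
import math
import random

def stretch_move_sweep(walkers, log_prob, a=2.0, rng=random):
    """One serial sweep of the Goodman & Weare (2010) stretch move.

    walkers  : list of parameter vectors (lists of floats), updated in place
    log_prob : callable returning the log posterior density of a vector
    a        : stretch scale (a = 2 is the usual default)
    Returns the number of accepted proposals.
    """
    ndim = len(walkers[0])
    accepted = 0
    for j in range(len(walkers)):
        # pick a complementary walker k != j
        k = rng.randrange(len(walkers) - 1)
        if k >= j:
            k += 1
        xj, xk = walkers[j], walkers[k]
        # draw z from g(z) proportional to 1/sqrt(z) on [1/a, a]
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a
        # stretch proposal along the line joining the two walkers
        y = [xk[d] + z * (xj[d] - xk[d]) for d in range(ndim)]
        log_ratio = (ndim - 1) * math.log(z) + log_prob(y) - log_prob(xj)
        if log_ratio >= 0.0 or rng.random() < math.exp(log_ratio):
            walkers[j] = y
            accepted += 1
    return accepted

# Toy usage: sample a 1-D standard Gaussian with 20 walkers.
rng = random.Random(42)
walkers = [[3.0 + 0.1 * rng.random()] for _ in range(20)]
log_gauss = lambda th: -0.5 * th[0] ** 2
for _ in range(600):
    stretch_move_sweep(walkers, log_gauss, rng=rng)
```

The key property of the stretch move is affine invariance: the acceptance rule, with its $z^{N-1}$ factor, leaves the target distribution invariant regardless of linear correlations between parameters, which is convenient for the strongly correlated empirical parameters sampled here.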
When applied with the meta-modeling for the nuclear EoS \citep{margueron18a}, our MCMC method allowed us to obtain, for the first time, constraints on the most influential parameters: $L_{\rm sym}=27.2\ud{10.9}{5.3}\mbox{$\,{\rm MeV}$}$, $K_{\rm sym}=-59\ud{103}{74}\mbox{$\,{\rm MeV}$}$ and $Q_{\rm sat}=408\ud{735}{430}\mbox{$\,{\rm MeV}$}$. When considering current knowledge of nuclear physics as input (prior) for the value of $L_{\rm sym}$ \citep{lattimer2013}, we find slightly better-constrained parameters, as expected: $L_{\rm sym}=37.2\ud{9.2}{8.9}\mbox{$\,{\rm MeV}$}$, $K_{\rm sym}=-85\ud{82}{70}\mbox{$\,{\rm MeV}$}$ and $Q_{\rm sat}=318\ud{673}{366}\mbox{$\,{\rm MeV}$}$. We stress that the values of $K_{\rm sym}$ and $Q_{\rm sat}$ we reported are the first estimates of these empirical parameters extracted from observational data. These quantities are not yet accessible in nuclear physics experiments and are therefore poorly constrained \citep{margueron18a}, since their effects mainly appear well above saturation density, as in NS matter. We also obtained an anti-correlation between $K_{\rm sym}$ and $Q_{\rm sat}$, induced by the causality and stability requirements. The distributions of these empirical parameters are not affected by the choice of distance set. As a product of our analyses, we also provide the average radius (at 1.45\mbox{$\,{\rm M}_\odot$}) of the statistically preferred EoS. When the prior on $L_{\rm sym}$ is included, we find $R_{1.45}=12.42\pm 0.34\hbox{$\,{\rm km}$}$ for the set of distances \emph{Dist \#1} and $R_{1.45}=12.35\pm 0.37\hbox{$\,{\rm km}$}$ for the set of distances \emph{Dist \#2}. These resulting radius distributions are narrower than the range of radii allowed by the meta-model used (see Figure~\ref{fig:lkqMR}), i.e., than the prior range imposed by our choice of nuclear physics input.
One can note that the radius obtained here is at the upper bound of previous analyses \citep[e.g.,][]{steiner18}, resulting from the fact that we took into account nuclear physics knowledge through the prior on $L_{\rm sym}$. Adding that prior did not degrade the fit statistics. Furthermore, the average radius remained constant under this change, implying that it is required by the data, and not driven by the $L_{\rm sym}$ prior. The only nuclear physics input in our model is the well-accepted condition of an EoS respecting causality and supporting at least 1.9\mbox{$\,{\rm M}_\odot$}. Leaving $L_{\rm sym}$ free results in a posterior distribution at values significantly lower than the prior, but this is compensated by an adjustment of the other two parameters, $K_{\rm sym}$ and $Q_{\rm sat}$, that keeps the radius essentially constant, while supporting a 1.9\mbox{$\,{\rm M}_\odot$}\ NS. We further note that there are major differences between the meta-model and the constant-radius assumption. The latter is not required to satisfy causality and does not impose a condition on the maximum NS mass. These conditions together naturally make the radius at 1.4\mbox{$\,{\rm M}_\odot$}\ larger for the meta-model than in the constant-radius toy-model. While previous analyses invoked the need for a He atmosphere model\footnote{As noted in Section~\ref{sec:qlmxb}, applying He atmosphere models to NS emission spectra produces NS radii larger by $\sim$ 30--50\% \citep{servillat12, catuneanu13,heinke14,steiner18}.} to reconcile the otherwise small radii obtained from qLMXB spectra \citep[e.g.,][]{guillot13,guillot14}, we demonstrated that the use of our meta-model produces radii in the 12--13 km range, with or without a prior on $L_{\rm sym}$.
We have also investigated the impact of the selection of sources on the results and found that we can separate the sources in two ways: according to the S/N (groups A and B presented in Table~\ref{tab:sources}), or according to the posterior distribution of the mass (groups A' and B'). When using only the sources with a high S/N, or a peaked posterior mass distribution, we found slightly smaller radii, $R_{1.45}=12.2\pm 0.3$~km, compared to our reference results. On the other hand, selecting the sources with lower S/N or a flat mass distribution increased the radius up to $R_{1.45}=12.9\pm 0.4$~km. These results therefore advocate for improving the statistics for the sources in \mbox{$\omega$\,Cen}, NGC~6304, M30 and M13. In the future, we foresee two possibilities: \begin{enumerate} \item The mass and radius predictions presented in this work are consistent with those obtained by other means (e.g., pulse waveform modelling with \emph{NICER}, constraints obtained with future GW signals from NS-NS mergers), which would bring support to the nuclear physics assumptions we made; \item Future analyses prefer low-radius NS, which would suggest tension with these nuclear physics assumptions. Such tension would open up the possibility of learning about dense matter, eventually rejecting some of the assumptions or advocating for the presence of phase transitions in dense nuclear matter, which goes beyond our present model. \end{enumerate} We plan to improve the nuclear EoS modeling by implementing strong first-order phase transitions and by calibrating its parameters on the data in the same way as we have done here for the empirical parameters. We believe that this will shed light on the need for first-order phase transitions to reproduce the thermal spectrum of qLMXBs. The model selection could also include constraints from other observables, such as those expected from \emph{NICER} as well as the wealth of new results expected from the LIGO-Virgo collaboration. 
Some limitations due to flux calibration will always remain for methods that rely on broad-band X-ray spectroscopy, such as that in the present work, since $F_{\rm X}\propto (\mbox{$R_{\infty}$}/D)^2$. In addition to the multiplicative constants accounting for flux cross-calibration uncertainties between the instruments, we have also included 3\% systematic uncertainties in each spectral bin, as was done in previous works \citep{guillot13,guillot14,bogdanov16}, to fold flux calibration uncertainties into our final results. We note, however, that at the moment, other sources of uncertainties (e.g., on the distances to the sources) likely dominate over flux calibration uncertainties. \acknowledgments The authors thank the anonymous referee for their useful comments that improved the discussion in this article. We acknowledge the support of ECOS-CONICYT collaboration grant C16U01. The authors are grateful to the LABEX Lyon Institute of Origins (ANR-10-LABX-0066) of the Universit\'e de Lyon for its financial support within the program ``Investissements d'Avenir'' (ANR-11-IDEX-0007) of the French government, operated by the National Research Agency (ANR). NB and JM were partially supported by the IN2P3 Master Project MAC. The authors also thank the ``NewCompStar'' COST Action MP1304 and PHAROS COST Action MP16214 for the conferences where this project was born. SG and NAW acknowledge the support of the French Centre National d'\'{E}tudes Spatiales (CNES), and of the FONDECYT Postdoctoral Project 3150428 in the early phases of this work. Additional support for MC is provided by the Chilean Ministry for Economy, Development, and Tourism's Millennium Science Initiative through grant IC\,120009, awarded to the Millennium Institute of Astrophysics (MAS). The work of MC and AR is funded by the Center for Astronomy and Associated Technologies (CATA; CONICYT project Basal AFB-170002). AR also acknowledges support from FONDECYT grant \#1171421. 
\software{\texttt{emcee} \citep{foremanmackey13}, \texttt{corner} \citep{foremanmackey16}, \texttt{HEAsoft} \citep{heasoft14}, \texttt{Xspec} (and PyXspec, \citealt{arnaud96}), \texttt{XMMSAS} \citep{gabriel04}, and \texttt{CIAO} \citep{fruscione06}. } \bibliographystyle{aasjournal}
\section{Introduction} Percolation~\cite{Kesten82,Grimmet89,Stauffer92} has its origins in the paper~\cite{BroadHamm57} by Broadbent and Hammersley from 1957. Despite its relatively simple description, the subtleties and richness of percolation continue to generate much interest, and even surprises, after 50 years. One exciting recent development is the demonstration~\cite{Cardy92,Smirnov01} that the continuum scaling limit of percolation on the lattice yields a conformally invariant measure in the plane with connections to stochastic Loewner evolution~\cite{Schramm00,LawlerEtAl01,Werner03,RohdeEtAl05,KN,BB}. This is achieved by considering discrete analytic functions on the lattice. Another intriguing development is the unexpected connection~\cite{RazStrog01,BatchelorEtAl01,PearceEtAl02,FZZ06} between the groundstate of percolation, viewed as a stochastic process, and fully packed loop refinements of enumerations of symmetry classes of alternating sign matrices. Percolation as a Conformal Field Theory (CFT) has some novel aspects, being a non-rational and non-unitary theory with a countably infinite number of scaling fields. Most importantly, as argued in~\cite{Cardy99,GL,FL} for example, it is a {\em logarithmic} CFT with the consequence that it admits indecomposable representations of the Virasoro algebra~\cite{Roh96}. The first systematic study of logarithmic CFT appeared in~\cite{Gurarie93}. Logarithmic CFTs are currently the subject of intensive investigation, see~\cite{MS,Flohr97,RAK,Kausch00,bdylcft,MRS,Flohr03,Gaberdiel03,FFHST,Ruelle02,Kawai03,LMRS,Nichols02,RasWZW,FG,FGST,GR,QS} and references therein. There is of course a long history of studying percolation as the continuum scaling limit of lattice models~\cite{SaleurD87,SaleurSUSY,ReSa01}. Here, however, it is convenient to regard critical percolation as a member of the family ${\cal LM}(p,p')$ of logarithmic CFTs defined as the continuum scaling limit of integrable lattice models~\cite{PRZ}. 
The first two members ${\cal LM}(1,2)$ and ${\cal LM}(2,3)$ correspond to critical dense polymers and critical percolation (bond percolation on the square lattice), respectively. The solvable model ${\cal LM}(1,2)$ of critical dense polymers was considered in \cite{PRpoly}. In this paper, we are interested in the fusion algebra of ${\cal LM}(2,3)$ and we present an explicit conjecture for the fusion rules generated from two fundamental representations, here denoted $(2,1)$ and $(1,2)$. The identity of this fundamental fusion algebra is denoted $(1,1)$ and is a reducible yet indecomposable representation of rank 1. Our fusion rules are supported by extensive numerical studies of our integrable lattice model of critical percolation. Details of our lattice findings and numerical results will be presented elsewhere. It appears natural to suspect that the so-called augmented $c_{p,p'}$ models \cite{EberleF06} are equivalent to our logarithmic minimal models ${\cal LM}(p,p')$. In particular, we believe that the augmented $c_{2,3}$ model is equivalent to critical percolation ${\cal LM}(2,3)$. Much is known~\cite{GK} about the fusion algebras of the augmented $c_{p,p'}$ models with $p=1$, while much less is known about the fusion algebras of these models for $p>1$. For critical percolation, the most complete information on fusion comes from Eberle and Flohr~\cite{EberleF06} who systematically applied the Nahm algorithm~\cite{Nahm94,GK} to obtain fusions level-by-level. A careful comparison shows that our fusion rules are compatible with their results~\cite{EberleF06}. In particular, we confirm their observation of indecomposable representations of rank 3. We also make a detailed comparison of our fusion rules with the results of \cite{RS} which we find correspond to a subalgebra of our fusion algebra of critical percolation. 
\subsection{Kac Representations} Critical percolation ${\cal LM}(2,3)$ has central charge $c=0$ and conformal weights \be \D_{r,s}\ =\ \frac{(3r-2s)^{2}-1}{24},\hspace{1.2cm}r,s\in\mathbb{N} \label{D} \ee The set of distinct conformal weights is $\{\D_{k,1},\D_{k+1,2},\D_{k+1,3};\ k\in\mathbb{N}\} =\{\D_{1,k+1},\D_{2,k+2};\ k\in\mathbb{N}\}$. \begin{figure}[h] \begin{center} \begin{pspicture}(0,0)(7,8) \rput[bl](0.15,0){\color{lightestblue}{\rule{6.9cm}{7.25cm}}} \rput[bl](1.12,0){\color{lightlightblue}{\rule{.985cm}{7.25cm}}} \rput[bl](3.09,0){\color{lightlightblue}{\rule{.985cm}{7.25cm}}} \rput[bl](5.06,0){\color{lightlightblue}{\rule{.985cm}{7.25cm}}} \rput[bl](0.15,1.32){\color{lightlightblue}{\rule{6.87cm}{.63cm}}} \rput[bl](0.15,3.26){\color{lightlightblue}{\rule{6.87cm}{.63cm}}} \rput[bl](0.15,5.20){\color{lightlightblue}{\rule{6.87cm}{.63cm}}} \rput[bl](0.15,0){\color{lightblue}{\rule{.98cm}{1.3cm}}} \rput[bl](1.11,1.3){\color{midblue}{\rule{.98cm}{.65cm}}} \rput[bl](1.11,3.24){\color{midblue}{\rule{.98cm}{.65cm}}} \rput[bl](1.11,5.18){\color{midblue}{\rule{.98cm}{.65cm}}} \rput[bl](3.08,1.3){\color{midblue}{\rule{.98cm}{.65cm}}} \rput[bl](3.08,3.24){\color{midblue}{\rule{.98cm}{.65cm}}} \rput[bl](3.08,5.18){\color{midblue}{\rule{.98cm}{.65cm}}} \rput[bl](5.05,1.3){\color{midblue}{\rule{.98cm}{.65cm}}} \rput[bl](5.05,3.24){\color{midblue}{\rule{.98cm}{.65cm}}} \rput[bl](5.05,5.18){\color{midblue}{\rule{.98cm}{.65cm}}} \pswedge[fillstyle=solid,fillcolor=red,linecolor=red](1.1,1.92){.2}{180}{270} \pswedge[fillstyle=solid,fillcolor=red,linecolor=red](2.09,1.92){.2}{180}{270} \pswedge[fillstyle=solid,fillcolor=red,linecolor=red](1.1,3.86){.2}{180}{270} \pswedge[fillstyle=solid,fillcolor=red,linecolor=red](2.09,3.86){.2}{180}{270} \pswedge[fillstyle=solid,fillcolor=red,linecolor=red](1.1,5.80){.2}{180}{270} \pswedge[fillstyle=solid,fillcolor=red,linecolor=red](2.09,5.80){.2}{180}{270} \pswedge[fillstyle=solid,fillcolor=red,linecolor=red](2.09,.64){.2}{180}{270} 
\pswedge[fillstyle=solid,fillcolor=red,linecolor=red](2.09,1.28){.2}{180}{270} \pswedge[fillstyle=solid,fillcolor=red,linecolor=red](4.06,.64){.2}{180}{270} \pswedge[fillstyle=solid,fillcolor=red,linecolor=red](4.06,1.28){.2}{180}{270} \pswedge[fillstyle=solid,fillcolor=red,linecolor=red](4.06,1.92){.2}{180}{270} \pswedge[fillstyle=solid,fillcolor=red,linecolor=red](6.03,.64){.2}{180}{270} \pswedge[fillstyle=solid,fillcolor=red,linecolor=red](6.03,1.28){.2}{180}{270} \pswedge[fillstyle=solid,fillcolor=red,linecolor=red](6.03,1.92){.2}{180}{270} \rput[bl](0,0){ \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \st{\vdots}&\st{\vdots}&\st{\vdots}&\st{\vdots}&\st{\vdots}&\st{\vdots}&\st{\vvdots}\\ \hline \st{12}&\st{\frac{65}{8}}&\st{5}&\st{\frac{21}{8}}&\st{1}&\st{\frac{1}{8}}&\st{\cdots}\\ \hline \st{\frac{28}{3}}&\st{\frac{143}{24}}&\st{\frac{10}{3}}&\st{\frac{35}{24}}&\st{\frac{1}{3}} &\st{-\frac{1}{24}}&\st{\cdots}\\ \hline \st{7}&\st{\frac{33}{8}}&\st{2}&\st{\frac{5}{8}}&\st{0}&\st{\frac{1}{8}}&\st{\cdots}\\ \hline \st{5}&\st{\frac{21}{8}}&\st{1}&\st{\frac{1}{8}}&\st{0}&\st{\frac{5}{8}}&\st{\cdots}\\ \hline \st{\frac{10}{3}}&\st{\frac{35}{24}}&\st{\frac{1}{3}} &\st{-\frac{1}{24}}&\st{\frac{1}{3}}&\st{\frac{35}{24}}&\st{\cdots}\\ \hline \st{2}&\st{\frac{5}{8}}&\st{0}&\st{\frac{1}{8}}&\st{1}&\st{\frac{21}{8}}&\st{\cdots}\\ \hline \st{1}&\st{\frac{1}{8}}&\st{0}&\st{\frac{5}{8}}&\st{2}&\st{\frac{33}{8}}&\st{\cdots}\\ \hline \st{\frac{1}{3}}&\st{-\frac{1}{24}}&\st{\frac{1}{3}}&\st{\frac{35}{24}}&\st{\frac{10}{3}}&\st{\frac{143}{24}} &\st{\cdots}\\ \hline \st{0}&\st{\frac{1}{8}}&\st{1}&\st{\frac{21}{8}}&\st{5}&\st{\frac{65}{8}}&\st{\cdots}\\ \hline \st{0}&\st{\frac{5}{8}}&\st{2}&\st{\frac{33}{8}}&\st{7}&\st{\frac{85}{8}}&\st{\cdots}\\ \hline \end{tabular}} \end{pspicture} \end{center} \caption{Extended Kac table of critical percolation ${\cal LM}(2,3)$ showing the conformal weights $\Delta_{r,s}$ of the Kac representations $(r,s)$. 
Except for the identifications $(2k,3k')=(2k',3k)$, the entries relate to {\em distinct} Kac representations even if the conformal weights coincide. This is unlike the irreducible representations which are uniquely characterized by their conformal weight. The periodicity of conformal weights $\Delta_{r,s}=\Delta_{r+2,s+3}$ is made manifest by shading the rows and columns with $r\equiv0$ (mod 2) or $s\equiv0$ (mod 3). The Kac representations which happen to be irreducible representations are marked with a red shaded quadrant in the top-right corner. These do not exhaust the distinct values of the conformal weights. For example, the irreducible representation with $\Delta_{1,1}=0$ does not arise as a Kac representation. By contrast, the Kac table of the associated {\em rational} (minimal) model consisting of the shaded $1\times 2$ grid in the lower-left corner is trivial and contains only the operator corresponding to the irreducible representation with $\D=0$.} \label{KacTable} \end{figure} {}From the lattice, a {\em Kac representation} $(r,s)$ arises for {\em every} pair of integer Kac labels $r,s$ in the first quadrant of the infinitely extended Kac table, see Figure~\ref{KacTable}. This relaxes the constraint $r=1,2$ considered in \cite{PRZ}. The lattice description of the full set of Kac representations will be discussed in detail elsewhere. The conformal character of the Kac representation $(r,s)$ is given by \be \chit_{r,s}(q)\ =\ \frac{q^{\frac{1}{24}+\D_{r,s}}}{\eta(q)}\left(1-q^{rs}\right) \label{chikac} \ee where the Dedekind eta function is defined by \be \eta(q)\ =\ q^{1/24}\prod_{m=1}^\infty(1-q^m) \label{eta} \ee We will denote the character of the {\em irreducible} Virasoro representation of conformal weight $\D_{r,s}$ by $\ch_{r,s}(q)$. These irreducible characters \cite{FSZ} read \bea {\rm ch}_{2k-1,a}(q)\!\!&=&\!\!K_{12,6k-3-2a;k}(q)-K_{12,6k-3+2a;k}(q),\hspace{1.2cm}a=1,2\nn {\rm ch}_{2k+1,3}(q)\!\!&=&\!\! 
\frac{1}{\eta(q)}\big(q^{3(2k-1)^2/8}-q^{3(2k+1)^2/8}\big)\nn {\rm ch}_{2k,b}(q)\!\!&=&\!\! \frac{1}{\eta(q)}\big(q^{(3k-b)^2/6}-q^{(3k+b)^2/6}\big),\hspace{2.1cm} b=1,2,3 \label{laq} \eea where $k\in\mathbb{N}$ while $K_{n,\nu;k}(q)$ is defined as \be K_{n,\nu;k}(q)\ =\ \frac{1}{\eta(q)}\sum_{j\in\mathbb{Z}\setminus\{1,\ldots,k-1\}}q^{(nj-\nu)^2/2n} \label{Kk} \ee It follows that for $k=1$, the first expression in (\ref{laq}) reduces to the well-known irreducible character \be {\rm ch}_{1,a}(q)\ =\ \frac{1}{\eta(q)}\sum_{j\in\mathbb{Z}}\big(q^{(12j-1)^2/24}-q^{(12j+5)^2/24}\big) \ =\ 1, \hspace{1.5cm} a=1,2 \label{r0s0} \ee A priori, a Kac representation can be either irreducible or reducible. In the latter case, it could be fully reducible (in which case it would be a direct sum of irreducible representations) or its direct-sum decomposition could involve at least one reducible but indecomposable representation of rank 1 (possibly in addition to some irreducible representations). We will only characterize the Kac representations appearing in the fusion algebras to be discussed in the present work. Among these are the Kac representations $\{(2k,1),(2k,2),(2k,3),(1,3k),(2,3k);\ k\in\mathbb{N}\}$. Since their characters all correspond to irreducible Virasoro characters, these Kac representations must themselves be irreducible. They constitute an exhaustive list of irreducible Kac representations. Two Kac representations are naturally identified if they have identical conformal weights and are both irreducible. The relations \be (2k,3)\ =\ (2,3k) \label{idirr} \ee are the only such identifications. More general relations are considered in (\ref{kkexp}) and (\ref{RequalR}). Here we merely point out that two Kac characters (\ref{chikac}) are equal $\chit_{r,s}(q)=\chit_{r',s'}(q)$ if and only if $(r',s')=(r,s)$ or $(r',s')=(2s/3,3r/2)$. That is, the only equalities between Kac characters are of the form $\chit_{2k,3k'}(q)=\chit_{2k',3k}(q)$. 
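The equalities $\chit_{2k,3k'}(q)=\chit_{2k',3k}(q)$ follow at once from (\ref{chikac}), since both the conformal weight (\ref{D}) and the product $rs$ entering the factor $(1-q^{rs})$ are symmetric under $(2k,3k')\leftrightarrow(2k',3k)$. A quick consistency sketch of this (our own check, not part of the paper's derivation):

```python
from fractions import Fraction
from itertools import product

def delta(r, s):
    """Conformal weight Delta_{r,s} = ((3r - 2s)^2 - 1)/24 of LM(2,3)."""
    return Fraction((3*r - 2*s)**2 - 1, 24)

# spot checks against the extended Kac table of Figure 1
assert delta(1, 1) == 0 and delta(2, 1) == Fraction(5, 8)
assert delta(1, 3) == Fraction(1, 3) and delta(2, 2) == Fraction(1, 8)

# periodicity Delta_{r,s} = Delta_{r+2,s+3}
assert all(delta(r, s) == delta(r + 2, s + 3)
           for r, s in product(range(1, 12), repeat=2))

# chi~_{2k,3k'} = chi~_{2k',3k}: both Delta and the product rs in the
# factor (1 - q^{rs}) are symmetric under (2k,3k') <-> (2k',3k)
for k, kp in product(range(1, 8), repeat=2):
    assert delta(2*k, 3*kp) == delta(2*kp, 3*k)
    assert (2*k) * (3*kp) == (2*kp) * (3*k)
```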
According to (\ref{RequalR}), a similar equality applies to the Kac representations themselves: $(2k,3k')=(2k',3k)$. The only {\em reducible} Kac representations entering the fundamental fusion algebra to be discussed below are $(1,1)$ and $(1,2)$ and they are both indecomposable representations of rank 1, cf. Section \ref{sectionEF}. The indecomposable representations of higher rank appearing in the fusion algebra may be described in terms of Kac representations and their characters. We therefore list the decompositions of the relevant Kac characters in terms of irreducible characters \bea \chit_{2k-1,b}(q)\!\!&=&\!\! \ch_{2k-1,b}(q)+\big(1-\delta_{b,3}\delta_{k,1}\big)\ch_{2k+1,b}(q),\hspace{3cm}b=1,2,3\nn \chit_{a,3k-b}(q)\!\!&=&\!\!\ch_{a,3k-b}(q) +\big(1-\delta_{a,2}\delta_{k,1}\big)\ch_{a,3k+b}(q),\hspace{3cm}a,b=1,2\nn \chit_{3,3k+b}(q)\!\!&=&\!\! \ch_{1,3k-3+b}(q)+\ch_{1,3k+b}(q)+\ch_{1,3k+3-b}(q)\nn \!\!&+&\!\! \ch_{1,3k+3+b}(q)+\ch_{1,3k+6-b}(q)+\ch_{1,3k+9-b}(q),\hspace{2cm}b=1,2 \label{chitch} \eea where $k\in\mathbb{N}$. The decomposition in the general case is discussed in the appendix of \cite{PRZ}. \section{Fusion Algebras} The {\em fundamental} fusion algebra $\big\langle(2,1), (1,2)\big\rangle$ is defined as the fusion algebra generated by the fundamental representations $(2,1)$ and $(1,2)$. We find that closure of this fusion algebra requires the inclusion of a variety of other representations \be \big\langle(2,1), (1,2)\big\rangle\ =\ \big\langle(1,1), (1,2), (2k,a), (1,3k), (2k,3), \R_{2k,a}^{1,0}, \R_{2k,3}^{1,0}, \R_{a,3k}^{0,b}, \R_{2k,3}^{1,b};\ a,b=1,2;\ k\in\mathbb{N}\big\rangle \label{A2112} \ee to be discussed next. 
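Before turning to the indecomposable representations, we note that the character decompositions (\ref{chitch}) can be verified order by order in $q$ from (\ref{chikac}), (\ref{laq}) and (\ref{Kk}). A truncated-series sketch (our own code) for the first family with $b=1$, where all $\D_{2k-1,1}$ are non-negative integers:

```python
def partitions(N):
    """Coefficients of 1/prod_{m>=1}(1-q^m) up to order N."""
    p = [1] + [0] * N
    for m in range(1, N + 1):
        for n in range(m, N + 1):
            p[n] += p[n - m]
    return p

def mul(A, B, N):
    """Truncated product of two q-series given as coefficient lists."""
    C = [0] * (N + 1)
    for i, a in enumerate(A):
        if a:
            for j in range(N + 1 - i):
                C[i + j] += a * B[j]
    return C

def ch_odd(k, a, N):
    """Irreducible character ch_{2k-1,a} = K_{12,6k-3-2a;k} - K_{12,6k-3+2a;k}."""
    bracket = [0] * (N + 1)
    for nu, sgn in ((6*k - 3 - 2*a, 1), (6*k - 3 + 2*a, -1)):
        for j in range(-N, N + 1):
            if 1 <= j <= k - 1:        # j runs over Z \ {1,...,k-1}
                continue
            e, r = divmod((12*j - nu)**2 - 1, 24)
            assert r == 0              # these exponents are integers
            if e <= N:
                bracket[e] += sgn
    return mul(bracket, partitions(N), N)

def chit(r, s, N):
    """Kac character chi~_{r,s} = q^Delta (1 - q^{rs})/prod(1-q^m)
    (valid when Delta_{r,s} is a non-negative integer, as for s = 1)."""
    d = ((3*r - 2*s)**2 - 1) // 24
    num = [0] * (N + 1)
    if d <= N:
        num[d] += 1
    if d + r*s <= N:
        num[d + r*s] -= 1
    return mul(num, partitions(N), N)

N = 20
assert ch_odd(1, 1, N) == [1] + [0] * N   # ch_{1,1} = 1, cf. (r0s0)
for k in (1, 2, 3):                       # chi~_{2k-1,1} = ch_{2k-1,1} + ch_{2k+1,1}
    rhs = [x + y for x, y in zip(ch_odd(k, 1, N), ch_odd(k + 1, 1, N))]
    assert chit(2*k - 1, 1, N) == rhs
```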
\subsection{Indecomposable Representations of Rank 2 or 3} For $k\in\mathbb{N}$, the representations denoted by $\R_{2k,1}^{1,0}$, $\R_{2k,2}^{1,0}$, $\R_{2k,3}^{1,0}$, $\R_{1,3k}^{0,1}$, $\R_{1,3k}^{0,2}$, $\R_{2,3k}^{0,1}$ and $\R_{2,3k}^{0,2}$ are indecomposable representations of rank 2, while $\R_{2k,3}^{1,1}$ and $\R_{2k,3}^{1,2}$ are indecomposable representations of rank 3. Their characters read \bea \chit[\R_{2k,b}^{1,0}](q)\!\!&=&\!\!\chit_{2k-1,b}(q)+\chit_{2k+1,b}(q)\nn \!\!&=&\!\! \big(1-\delta_{b,3}\delta_{k,1}\big)\ch_{2k-1,b}(q)+2\ch_{2k+1,b}(q) +\ch_{2k+3,b}(q),\hspace{2cm}b=1,2,3\nn \chit[\R_{a,3k}^{0,b}](q)\!\!&=&\!\!\chit_{a,3k-b}(q)+\chit_{a,3k+b}(q)\nn \!\!&=&\!\!\big(1-\delta_{a,2}\delta_{k,1}\big)\ch_{a,3k-b}(q)+2\ch_{a,3k+b}(q) +\ch_{a,3(k+2)-b}(q),\hspace{1.2cm}a,b=1,2\nn \chit[\R_{2k,3}^{1,b}](q)\!\!&=&\!\!\chit_{2k-1,3-b}(q)+\chit_{2k-1,3+b}(q) +\chit_{2k+1,3-b}(q)+\chit_{2k+1,3+b}(q)\nn \!\!&=&\!\! \big(1-\delta_{k,1}\big)\ch_{1,3k-3-b}(q)+2\big(1-\delta_{k,1}\big)\ch_{1,3k-3+b}(q) +2\ch_{1,3k-b}(q)\nn \!\!&+&\!\!4\ch_{1,3k+b}(q)+\big(2-\delta_{k,1}\big)\ch_{1,3k+3-b}(q) +2\ch_{1,3k+3+b}(q)\nn \!\!&+&\!\!2\ch_{1,3k+6-b}(q)+\ch_{1,3k+9-b}(q),\hspace{5.5cm}b=1,2 \label{chiR} \eea indicating that one may consider the indecomposable representations as `indecomposable combinations' of Kac representations. The participating Kac representations are of course the ones whose characters appear in (\ref{chiR}). In the case of the indecomposable representation $\R_{2k,b}^{1,0}$ (or $\R_{a,3k}^{0,b}$) of rank 2, our lattice analysis indicates that a Jordan cell is formed between every state in $\ch_{2k+1,b}(q)$ (or $\ch_{a,3k+b}(q)$) and its partner state in the second copy of $\ch_{2k+1,b}(q)$ (or $\ch_{a,3k+b}(q)$), and nowhere else. 
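Schematically, and as a standard logarithmic-CFT illustration rather than a statement derived here, a rank-2 Jordan cell between a matching pair of states $|\phi\rangle$ and $|\psi\rangle$ at conformal weight $\D$ means that $L_{0}$ acts as
\be
L_{0}|\phi\rangle\ =\ \D|\phi\rangle,\hspace{1.2cm}L_{0}|\psi\rangle\ =\ \D|\psi\rangle+|\phi\rangle
\ee
so that $L_{0}$ is non-diagonalizable on the two-dimensional span of $|\phi\rangle$ and $|\psi\rangle$.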
In the case of the indecomposable representation $\R_{2k,3}^{1,b}$ of rank 3, our lattice analysis indicates that for every quartet of matching states in the four copies of $\ch_{1,3k+b}(q)$, a rank-3 Jordan cell is formed along with a single state. It likewise appears that a Jordan cell of rank 2 is formed between every pair of matching states in the irreducible components with multiplicity 2. The notation $\R_{r,s}^{a,b}$ is meant to reflect simple properties of the higher-rank indecomposable representations. The pair of lower indices thus refers to a `symmetry point' in the Kac table around which an indecomposable combination of Kac representations is located. The pair of upper indices indicates the distribution of these representations, of which there are either two (if $a=0$ or $b=0$) or four (if $a,b\neq0$). Their locations correspond to endpoints or corners, respectively, of a line segment or a rectangle with center at $(r,s)$. This structure is encoded neatly in the character expressions (\ref{chiR}). It follows from the lattice that the fundamental fusion algebra may be described by separating the representations into a horizontal and a vertical part. Before discussing implications of this, we examine the two directions individually, and introduce some abbreviations. To compactify the fusion rules, we use the notation \be (r,-s)\ \equiv\ (-r,s)\ \equiv\ -(r,s),\hspace{1cm}\R_{-r,s}^{a,b}\ \equiv\ \R_{r,-s}^{a,b}\ \equiv\ -\R_{r,s}^{a,b} \ee implying, in particular, that $(0,s)\equiv(r,0)\equiv\R_{0,s}^{a,b}\equiv\R_{r,0}^{a,b}\equiv0$, and define the Kronecker delta combinations \bea \dkk\!\!&=&\!\! 2-\delta_{j,|k-k'|}-\delta_{j,k+k'} \nn \ddkk\!\!&=&\!\! 
4-3\delta_{j,|k-k'|-1}-2\delta_{j,|k-k'|}-\delta_{j,|k-k'|+1} -\delta_{j,k+k'-1}-2\delta_{j,k+k'}-3\delta_{j,k+k'+1}\nn \dddkk\!\!&=&\!\!8-7\delta_{j,|k-k'|-2}-6\delta_{j,|k-k'|-1}-4\delta_{j,|k-k'|}-2\delta_{j,|k-k'|+1} -\delta_{j,|k-k'|+2}\nn \!\!&-&\!\!\delta_{j,k+k'-2}-2\delta_{j,k+k'-1}-4\delta_{j,k+k'}-6\delta_{j,k+k'+1}-7\delta_{j,k+k'+2} \label{d24} \eea \subsection{Horizontal Fusion Algebra} The {\em horizontal} fusion algebra $\big\langle(2,1)\big\rangle$ is defined as the fusion algebra generated by the fundamental representation $(2,1)$. We find that closure of this fusion algebra requires the inclusion of the Kac representations $(2k,1)$ and the rank-2 indecomposable representations $\R_{2k,1}^{1,0}$ \be \big\langle(2,1)\big\rangle\ =\ \big\langle(2k,1), \R_{2k,1}^{1,0};\ k\in\mathbb{N}\big\rangle \label{A21} \ee We conjecture that the fusion algebra $\big\langle(2,1)\big\rangle$ reads \bea (2k,1)\otimes(2k',1)\!\!&=&\!\!\bigoplus_{j=|k-k'|+1,\ \!{\rm by}\ \!2}^{k+k'-1} \R_{2j,1}^{1,0}\nn (2k,1)\otimes \R_{2k',1}^{1,0}\!\!&=&\!\!\bigoplus_{j=|k-k'|}^{k+k'} \dkk(2j,1)\nn \R_{2k,1}^{1,0}\otimes \R_{2k',1}^{1,0}\!\!&=&\!\!\bigoplus_{j=|k-k'|}^{k+k'} \dkk\R_{2j,1}^{1,0} \label{fusion21} \eea This fusion algebra does not contain an identity. \subsection{Vertical Fusion Algebra} The {\em vertical} fusion algebra $\big\langle(1,2)\big\rangle$ is defined as the fusion algebra generated by the fundamental representation $(1,2)$. 
We find that closure of this fusion algebra requires the inclusion of the Kac representations $(1,1)$ and $(1,3k)$ and the rank-2 indecomposable representations $\R_{1,3k}^{0,b}$ \be \big\langle(1,2)\big\rangle \ =\ \big\langle(1,1), (1,2), (1,3k), \R_{1,3k}^{0,b};\ b=1,2;\ k\in\mathbb{N}\big\rangle \label{A12} \ee Letting $X$ denote any of these representations, we conjecture that the fusion algebra $\big\langle(1,2)\big\rangle$ reads \bea (1,1)\otimes X\!\!&=&\!\!X\nn (1,2)\otimes (1,2)\!\!&=&\!\!(1,1)\oplus(1,3)\nn (1,2)\otimes (1,3k)\!\!&=&\!\!\R_{1,3k}^{0,1}\nn (1,2)\otimes \R_{1,3k}^{0,1}\!\!&=&\!\!\R_{1,3k}^{0,2}\oplus2(1,3k)\nn (1,2)\otimes \R_{1,3k}^{0,2}\!\!&=&\!\!\R_{1,3k}^{0,1}\oplus(1,3(k-1))\oplus (1,3(k+1))\nn (1,3k)\otimes(1,3k')\!\!&=&\!\! \bigoplus_{j=|k-k'|+1,\ \!{\rm by}\ \!2}^{k+k'-1}\big(\R_{1,3j}^{0,2}\oplus(1,3j)\big)\nn (1,3k)\otimes \R_{1,3k'}^{0,1}\!\!&=&\!\! \Big(\bigoplus_{j=|k-k'|+1,\ \!{\rm by}\ \!2}^{k+k'-1}2\R_{1,3j}^{0,1}\Big) \oplus\Big(\bigoplus_{j=|k-k'|,\ \!{\rm by}\ \!2}^{k+k'}\dkk (1,3j)\Big)\nn (1,3k)\otimes \R_{1,3k'}^{0,2}\!\!&=&\!\! \Big(\bigoplus_{j=|k-k'|,\ \!{\rm by}\ \!2}^{k+k'}\dkk\R_{1,3j}^{0,1}\Big) \oplus\Big(\bigoplus_{j=|k-k'|+1,\ \!{\rm by}\ \!2}^{k+k'-1}2(1,3j)\Big)\nn \R_{1,3k}^{0,1}\otimes \R_{1,3k'}^{0,1}\!\!&=&\!\!\ \Big(\bigoplus_{j=|k-k'|,\ \!{\rm by}\ \!2}^{k+k'}\dkk\R_{1,3j}^{0,1}\Big) \oplus\Big(\bigoplus_{j=|k-k'|+1,\ \!{\rm by}\ \!2}^{k+k'-1}\big(2\R_{1,3j}^{0,2}\oplus4(1,3j)\big)\Big)\nn \R_{1,3k}^{0,1}\otimes \R_{1,3k'}^{0,2}\!\!&=&\!\! \Big(\bigoplus_{j=|k-k'|+1,\ \!{\rm by}\ \!2}^{k+k'-1}2\R_{1,3j}^{0,1}\Big) \oplus \Big(\bigoplus_{j=|k-k'|,\ \!{\rm by}\ \!2}^{k+k'}\dkk\big(\R_{1,3j}^{0,2} \oplus2(1,3j)\big)\Big)\nn \R_{1,3k}^{0,2}\otimes \R_{1,3k'}^{0,2}\!\!&=&\!\! \Big(\bigoplus_{j=|k-k'|,\ \!{\rm by}\ \!2}^{k+k'}\dkk\R_{1,3j}^{0,1}\Big) \oplus\Big(\bigoplus_{j=|k-k'|+1,\ \!{\rm by}\ \!2}^{k+k'-1}2\R_{1,3j}^{0,2}\Big)\nn \!\!&\oplus&\!\! 
\Big(\bigoplus_{j=|k-k'|-1,\ \!{\rm by}\ \!2}^{k+k'+1} \ddkk(1,3j)\Big) \label{fusion12} \eea It is noted that for $j\equiv|k-k'|-1$ (mod 2), as in $\R_{1,3k}^{0,2}\otimes \R_{1,3k'}^{0,2}$, the fusion multiplicity $\ddkk$ reduces to $4-3\delta_{j,|k-k'|-1}-\delta_{j,|k-k'|+1}-\delta_{j,k+k'-1}-3\delta_{j,k+k'+1}$. The representation $(1,1)$ is the identity of this vertical fusion algebra. \subsection{Comparison with Read and Saleur} It is verified that \be \big\langle(1,1),(1,6k-3),\R_{1,6k}^{0,1},\R_{1,6k-3}^{0,2};\ k\in\mathbb{N}\big\rangle \label{subvert} \ee is a subalgebra of the vertical fusion algebra. It corresponds to the fusion algebra of critical percolation discussed by Read and Saleur in \cite{RS}. To appreciate this, we provide a dictionary for translating the representations generating the subalgebra (\ref{subvert}) into the notation used in \cite{RS} \bea (1,1)\ \ &\longleftrightarrow&\ \ \R_0\nn (1,2j+1)\ \ &\longleftrightarrow&\ \ \R_j,\hspace{1cm}j\equiv1\ ({\rm mod}\ 3)\nn \R_{1,2j-1}^{0,2}\ \ &\longleftrightarrow&\ \ \R_j,\hspace{1cm}j\equiv2\ ({\rm mod}\ 3)\nn \R_{1,2j}^{0,1}\ \ &\longleftrightarrow&\ \ \R_j,\hspace{1cm}j\equiv0\ ({\rm mod}\ 3) \label{dictRS} \eea where $j\in\mathbb{N}$. We find that their fusion algebra is in agreement with the subalgebra (\ref{subvert}) of the vertical fusion algebra $\big\langle(1,2)\big\rangle$ which itself is a subalgebra of the fundamental fusion algebra $\big\langle(2,1),(1,2)\big\rangle$ of critical percolation. \subsection{Fundamental Fusion Algebra} It follows from the lattice description that the fundamental fusion algebra $\big\langle(2,1),(1,2)\big\rangle$ is both associative and commutative. As already announced, it also follows from the lattice that the representations may be separated into a horizontal and a vertical part. 
For the Kac representations, this implies \be (r,s)\ =\ (r,1)\otimes(1,s) \label{r11s} \ee For the purposes of examining the fundamental fusion algebra, we introduce the representations \bea (2k,3k')\!\!&=&\!\!(2k,1)\otimes(1,3k'),\hspace{1.5cm} \R_{2k,3k'}^{1,0}\ =\ \R_{2k,1}^{1,0}\otimes(1,3k')\nn \R_{2k,3k'}^{0,b}\!\!&=&\!\!(2k,1)\otimes \R_{1,3k'}^{0,b},\hspace{1.67cm} \R_{2k,3k'}^{1,b}\ =\ \R_{2k,1}^{1,0}\otimes \R_{1,3k'}^{0,b} \label{kk} \eea thus defined as the result of certain simple fusions of `a horizontal and a vertical representation'. As we will show elsewhere, these representations may be decomposed in terms of the representations listed in (\ref{A2112}) \bea (2k,3k')\!\!&=&\!\!\bigoplus_{j=|k-k'|+1,\ \!{\rm by}\ \!2}^{k+k'-1}(2j,3),\hspace{1.5cm} \R_{2k,3k'}^{1,0}\ =\ \bigoplus_{j=|k-k'|+1,\ \!{\rm by}\ \!2}^{k+k'-1}\R_{2j,3}^{1,0}\nn \R_{2k,3k'}^{0,b}\!\!&=&\!\!\bigoplus_{j=|k-k'|+1,\ \!{\rm by}\ \!2}^{k+k'-1}\R_{2,3j}^{0,b},\hspace{1.55cm} \R_{2k,3k'}^{1,b}\ =\ \bigoplus_{j=|k-k'|+1,\ \!{\rm by}\ \!2}^{k+k'-1}\R_{2j,3}^{1,b} \label{kkexp} \eea with \be (2k,3k')\ =\ (2k',3k),\ \ \ \ \ \R_{2k,3k'}^{1,0}\ =\ \R_{2k',3k}^{1,0},\ \ \ \ \ \R_{2k,3k'}^{0,b}\ =\ \R_{2k',3k}^{0,b},\ \ \ \ \ \R_{2k,3k'}^{1,b}\ =\ \R_{2k',3k}^{1,b} \label{RequalR} \ee as special identifications extending the set (\ref{idirr}). The fundamental fusion algebra is now obtained by simply applying (\ref{kk}) and (\ref{kkexp}) to the fusion of a pair of representations in (\ref{A2112}). We illustrate this with a general but somewhat formal evaluation where we let $A_{r,s}=\bar{a}_{r,1}\otimes\ a_{1,s}$, $B_{r',s'}=\bar{b}_{r',1}\otimes\ b_{1,s'}$, $\bar{a}_{r,1}\otimes\bar{b}_{r',1}=\bigoplus_{r''}\bar{c}_{r'',1}$ and $a_{1,s}\otimes b_{1,s'}=\bigoplus_{s''}c_{1,s''}$. 
Our fusion prescription now yields \bea A_{r,s}\otimes B_{r',s'}\!\!&=&\!\!\Big(\bar{a}_{r,1}\otimes a_{1,s}\Big)\otimes \Big(\bar{b}_{r',1}\otimes b_{1,s'}\Big) \ =\ \Big(\bar{a}_{r,1}\otimes\bar{b}_{r',1}\Big)\otimes \Big(a_{1,s}\otimes b_{1,s'}\Big)\nn \!\!&=&\!\!\Big(\bigoplus_{r''}\bar{c}_{r'',1}\Big)\otimes\Big(\bigoplus_{s''}c_{1,s''}\Big) \ =\ \bigoplus_{r'',s''}C_{r'',s''} \label{rs} \eea where $C_{r'',s''}=\bar{c}_{r'',1}\otimes c_{1,s''}$. Using this, the fundamental fusion algebra $\big\langle(2,1),(1,2)\big\rangle$ follows straightforwardly from the fusion algebras $\big\langle(2,1)\big\rangle$ and $\big\langle(1,2)\big\rangle$ together with (\ref{kk}) and (\ref{kkexp}). In particular, it follows readily that the Kac representation $(1,1)$ is the {\em identity} of the fundamental fusion algebra $\big\langle(2,1),(1,2)\big\rangle$. In this brief communication, we will only apply this fusion prescription explicitly to the fusion of the two rank-2 indecomposable representations $\R_{2k,2}^{1,0}$ and $\R_{2,3k'}^{0,2}$ \bea \R_{2k,2}^{1,0}\otimes\R_{2,3k'}^{0,2}\!\!&=&\!\!\Big(\R_{2k,1}^{1,0}\otimes (1,2)\Big) \otimes\Big((2,1)\otimes \R_{1,3k'}^{0,2}\Big) \ =\ \Big(\R_{2k,1}^{1,0}\otimes (2,1)\Big)\otimes\Big((1,2)\otimes \R_{1,3k'}^{0,2}\Big)\nn \!\!&=&\!\!\Big((2(k-1),1)\oplus2(2k,1)\oplus(2(k+1),1)\Big)\otimes\Big(\R_{1,3k'}^{0,1}\oplus (1,3(k'-1))\oplus(1,3(k'+1))\Big)\nn \!\!&=&\!\!\Big(\bigoplus_{j=|k-k'|}^{k+k'}\dkk\R_{2,3j}^{0,1}\Big) \oplus\Big(\bigoplus_{j=|k-k'|-1}^{k+k'+1}\delta_{j,\{k,k'\}}^{(4)}(2j,3)\Big) \label{ex22} \eea and to the fusion of two rank-3 indecomposable representations \bea \R_{2k,3}^{1,1}\otimes \R_{2k',3}^{1,1}\!\!&=&\!\!\Big(\R_{2k,1}^{1,0}\otimes \R_{1,3}^{0,1}\Big) \otimes\Big(\R_{2k',1}^{1,0}\otimes \R_{1,3}^{0,1}\Big) \ =\ \Big(\R_{2k,1}^{1,0}\otimes \R_{2k',1}^{1,0}\Big) \otimes\Big(\R_{1,3}^{0,1}\otimes \R_{1,3}^{0,1}\Big)\nn \!\!&=&\!\!\Big(\bigoplus_{j=|k-k'|}^{k+k'}\dkk\R_{2j,1}^{1,0}\Big) 
\otimes\Big(\R_{1,6}^{0,1}\oplus2\R_{1,3}^{0,2}\oplus4(1,3)\Big)\nn \!\!&=&\!\!\Big(\bigoplus_{j=|k-k'|-1}^{k+k'+1}\ddkk\R_{2j,3}^{1,1}\Big) \oplus\Big(\bigoplus_{j=|k-k'|}^{k+k'}\dkk\big(2\R_{2j,3}^{1,2}\oplus4\R_{2j,3}^{1,0}\big)\Big) \label{ex3311} \eea and likewise \bea \R_{2k,3}^{1,1}\otimes \R_{2k',3}^{1,2}\!\!&=&\!\! \Big(\bigoplus_{j=|k-k'|}^{k+k'}\dkk2\R_{2j,3}^{1,1}\Big) \oplus\Big(\bigoplus_{j=|k-k'|-1}^{k+k'+1}\ddkk\big(\R_{2j,3}^{1,2}\oplus2\R_{2j,3}^{1,0}\big)\Big) \nn \R_{2k,3}^{1,2}\otimes \R_{2k',3}^{1,2}\!\!&=&\!\! \Big(\bigoplus_{j=|k-k'|-1}^{k+k'+1}\ddkk\R_{2j,3}^{1,1}\Big) \oplus\Big(\bigoplus_{j=|k-k'|}^{k+k'}\dkk2\R_{2j,3}^{1,2}\Big) \oplus\Big(\bigoplus_{j=|k-k'|-2}^{k+k'+2}\dddkk\R_{2j,3}^{1,0}\Big)\nn \label{ex3312} \eea Several subalgebras of the fundamental fusion algebra are easily identified. An interesting example is the one generated by the set of rank-3 indecomposable representations and the rank-2 indecomposable representations $\R_{2k,3}^{1,0}$. Two other noteworthy subalgebras are the ones generated by all the representations in (\ref{A2112}) except $(1,2)$, or by all of them except $(1,1)$ and $(1,2)$. We wish to point out that, at the level of Kac characters, the horizontal, vertical and fundamental fusion algebras are all compatible with the $s\ell(2)$ structure \be \phi_n\otimes\phi_{n'}\ =\ \bigoplus_{m=|n-n'|+1,\ \!{\rm by}\ \!2}^{n+n'-1}\phi_m \label{sl2} \ee This is straightforward to establish for the horizontal and vertical fusion algebras as illustrated by the fusion $\R_{2k,1}^{1,0}\otimes\R_{2k',1}^{1,0}$ where (\ref{sl2}) yields \bea \chit[\R_{2k,1}^{1,0}\otimes\R_{2k',1}^{1,0}](q)\!\!&=&\!\! 
\big(\chit_{2k-1,1}(q)+\chit_{2k+1,1}(q)\big)\otimes\big(\chit_{2k'-1,1}(q)+\chit_{2k'+1,1}(q)\big)\nn \!\!&=&\!\!\sum_{j=|2k-2k'|+1,\ \!{\rm by}\ \!2}^{2(k+k')-3}\chit_{j,1}(q) +\sum_{j=|2k-2k'-2|+1,\ \!{\rm by}\ \!2}^{2(k+k')-1}\chit_{j,1}(q)\nn \!\!&+&\!\!\sum_{j=|2k-2k'+2|+1,\ \!{\rm by}\ \!2}^{2(k+k')-1}\chit_{j,1}(q) +\sum_{j=|2k-2k'|+1,\ \!{\rm by}\ \!2}^{2(k+k')+1}\chit_{j,1}(q)\nn \!\!&=&\!\!\sum_{j=|k-k'|}^{k+k'}\dkk\big(\chit_{2j-1,1}(q)+\chit_{2j+1,1}(q)\big) \eea while \be \chit[\R_{2k,1}^{1,0}\otimes\R_{2k',1}^{1,0}](q)\ =\ \sum_{j=|k-k'|}^{k+k'}\dkk\chit[\R_{2j,1}^{1,0}](q) \ =\ \sum_{j=|k-k'|}^{k+k'}\dkk\big(\chit_{2j-1,1}(q)+\chit_{2j+1,1}(q)\big) \ee The separation into a horizontal and a vertical part (\ref{r11s}) and (\ref{kk}) then implies that the characters of the fundamental fusion algebra exhibit two independent $s\ell(2)$ structures as in (\ref{sl2}) -- one in each direction. This is clearly reminiscent of the fusion algebras of rational (minimal) models where the $s\ell(2)$ structures are carried by the (characters of the) {\em irreducible} representations. Here, on the other hand, the $s\ell(2)$ structures are tied to the {\em Kac} representations but, due to the higher-rank indecomposable nature of some other representations, only at the level of their {\em characters}. \subsection{Comparison with Eberle and Flohr} \label{sectionEF} To facilitate a comparison with \cite{EberleF06} by Eberle and Flohr, we provide a partial dictionary relating our notation to the one used in \cite{EberleF06}. 
In the orders specified, the translation reads \bea \{(2k,b),(1,3k)\} &\longleftrightarrow&\{{\cal V}(\D_{2k,b}),{\cal V}(\D_{1,3k})\},\hspace{2cm} b=1,2,3;\ k\in\mathbb{N}\nn \{(1,1),(1,2)\} &\longleftrightarrow&\{\R^{(1)}(0)_{2},\R^{(1)}(0)_{1}\}\nn \{\R_{2,1}^{1,0},\R_{4,1}^{1,0},\R_{6,1}^{1,0},\R_{8,1}^{1,0}\} &\longleftrightarrow&\{\R^{(2)}(0,2)_{7},\R^{(2)}(2,7),\R^{(2)}(7,15),\R^{(2)}(15,26)\}\nn \{\R_{2,2}^{1,0},\R_{4,2}^{1,0},\R_{6,2}^{1,0},\R_{8,2}^{1,0}\} &\longleftrightarrow&\{\R^{(2)}(0,1)_{5},\R^{(2)}(1,5),\R^{(2)}(5,12),\R^{(2)}(12,22)\}\nn \{\R_{2,3}^{1,0},\R_{4,3}^{1,0},\R_{6,3}^{1,0},\R_{8,3}^{1,0}\} &\longleftrightarrow&\{\R^{(2)}(1/3,1/3),\R^{(2)}(1/3,10/3),\R^{(2)}(10/3,28/3),\R^{(2)}(28/3,55/3)\}\nn \{\R_{1,3}^{0,1},\R_{1,6}^{0,1},\R_{1,9}^{0,1},\R_{1,12}^{0,1}\} &\longleftrightarrow&\{\R^{(2)}(0,1)_{7},\R^{(2)}(2,5),\R^{(2)}(7,12),\R^{(2)}(15,22)\}\nn \{\R_{2,3}^{0,1},\R_{2,6}^{0,1},\R_{2,9}^{0,1},\R_{2,12}^{0,1}\} &\longleftrightarrow&\{\R^{(2)}(1/8,1/8),\R^{(2)}(5/8,21/8),\R^{(2)}(33/8,65/8),\R^{(2)}(85/8,133/8)\}\nn \{\R_{1,3}^{0,2},\R_{1,6}^{0,2},\R_{1,9}^{0,2},\R_{1,12}^{0,2}\} &\longleftrightarrow&\{\R^{(2)}(0,2)_{5},\R^{(2)}(1,7),\R^{(2)}(5,15),\R^{(2)}(12,26)\}\nn \{\R_{2,3}^{0,2},\R_{2,6}^{0,2},\R_{2,9}^{0,2},\R_{2,12}^{0,2}\} &\longleftrightarrow&\{\R^{(2)}(5/8,5/8),\R^{(2)}(1/8,33/8),\R^{(2)}(21/8,85/8),\R^{(2)}(65/8,161/8)\}\nn \{\R_{2,3}^{1,1},\R_{4,3}^{1,1},\R_{6,3}^{1,1},\R_{8,3}^{1,1}\} &\longleftrightarrow&\{\R^{(3)}(0,0,1,1),\R^{(3)}(0,1,2,5),\R^{(3)}(2,5,7,12),\R^{(3)}(7,12,15,22)\}\nn \{\R_{2,3}^{1,2},\R_{4,3}^{1,2},\R_{6,3}^{1,2},\R_{8,3}^{1,2}\} &\longleftrightarrow&\{\R^{(3)}(0,0,2,2),\R^{(3)}(0,1,2,7),\R^{(3)}(1,5,7,15),\R^{(3)}(5,12,15,26)\}\nn \label{dictEF} \eea The only three fusions of rank-3 indecomposable representations considered in \cite{EberleF06} correspond to \bea \R_{2,3}^{1,1}\otimes \R_{2,3}^{1,1}\!\!&=&\!\! 
\R_{2,3}^{1,1}\oplus2\R_{4,3}^{1,1}\oplus \R_{6,3}^{1,1}\oplus4\R_{2,3}^{1,2}\oplus2\R_{4,3}^{1,2} \oplus8\R_{2,3}^{1,0}\oplus4\R_{4,3}^{1,0} \nn \R_{2,3}^{1,1}\otimes \R_{2,3}^{1,2}\!\!&=&\!\! 4\R_{2,3}^{1,1}\oplus2\R_{4,3}^{1,1}\oplus \R_{2,3}^{1,2}\oplus2\R_{4,3}^{1,2}\oplus\R_{6,3}^{1,2}\oplus2\R_{2,3}^{1,0}\oplus 4\R_{4,3}^{1,0}\oplus2\R_{6,3}^{1,0} \nn \R_{2,3}^{1,2}\otimes \R_{2,3}^{1,2}\!\!&=&\!\! \R_{2,3}^{1,1}\oplus2\R_{4,3}^{1,1}\oplus \R_{6,3}^{1,1}\oplus4\R_{2,3}^{1,2}\oplus2\R_{4,3}^{1,2}\oplus2\R_{2,3}^{1,0}\oplus 2\R_{4,3}^{1,0}\oplus2\R_{6,3}^{1,0}\oplus\R_{8,3}^{1,0} \label{REF} \eea Likewise, the only fusion of the type (\ref{ex22}) considered in \cite{EberleF06} corresponds to \be \R_{2,2}^{1,0}\otimes \R_{2,3}^{0,2}\ =\ 2\R_{2,3}^{0,1}\oplus\R_{4,3}^{0,1} \oplus(2,3)\oplus2(4,3)\oplus(6,3) \ee We find that our fusion rules reduce to the many examples examined by Eberle and Flohr \cite{EberleF06}. This confirms their observation that indecomposable representations of rank 3 are required. Our results also demonstrate that the fusion algebra closes without the introduction of indecomposable representations of higher rank than 3. Eberle and Flohr also presented an algorithm \cite{EberleF06} for computing fusion products in the augmented $c_{p,p'}$ models, in particular in the augmented $c_{2,3}=0$ model. Their algorithm is rooted in the many explicit examples examined in their paper and yields fusion rules which are both commutative and associative. Considering the affirmative comparison of our fusion rules with their examples, we believe that their algorithm for the augmented $c_{2,3}$ model yields results equivalent to our explicit fusion rules for critical percolation ${\cal LM}(2,3)$. \subsection{Kac Representations Revisited} As already indicated and also discussed in \cite{EberleF06}, the two representations $(1,1)$ and $(1,2)$ (there denoted $\R^{(1)}(0)_{2}$ and $\R^{(1)}(0)_{1}$, respectively) are not fully reducible. 
We quote Eberle and Flohr: \begin{quote} On the other hand, the representations $\R^{(2)}(0,1)_5$ and $\R^{(2)}(0,1)_7$ contain a state with weight 0 which generates a subrepresentation $\R^{(1)}(0)_1$. This subrepresentation is indecomposable but neither is it irreducible nor does it exhibit any higher rank behaviour. It only exists as a subrepresentation as it needs the embedding into the rank 2 representation in order not to have nullvectors at both levels 1 and 2. But, nevertheless, being a subrepresentation of a representation in the spectrum it has to be included into the spectrum, too. \end{quote} This is corroborated by our findings. {}From the lattice, the two representations $(1,1)$ and $(1,2)$ arise in the conformal scaling limit from very simple and natural boundary conditions. This supports our assertion that these Kac representations are indeed physical. Furthermore, since one is immediately faced with problems when attempting to include their irreducible components \be (1,1):\ \ \{{\cal V}(0),{\cal V}(2)\},\hspace{2cm} (1,2):\ \ \{{\cal V}(0),{\cal V}(1)\} \ee in the fusion algebra, we advocate to consider fusion algebras of critical percolation generated from Kac representations and indecomposable representations of higher rank. The only irreducible representations appearing in these fusion algebras are therefore themselves Kac representations, that is, they belong to the set of irreducible Kac representations $\{(2k,1),(2k,2),(2k,3)=(2,3k),(1,3k)\}$. Natural extensions of the horizontal, vertical and fundamental fusion algebras involve {\em all} the associated Kac representations and read \be \big\langle(2,1),(3,1)\big\rangle, \hspace{1cm}\big\langle(1,2),(1,4)\big\rangle, \hspace{1cm}\big\langle(2,1),(3,1),(1,2),(1,4)\big\rangle \label{full} \ee respectively. They will be addressed elsewhere. 
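As a cross-check of the $s\ell(2)$ structure (\ref{sl2}) at the level of characters, the fusion $\R_{2k,1}^{1,0}\otimes\R_{2k',1}^{1,0}$ can be verified combinatorially. The following Python sketch assumes our reading of the notation: $\chit[\R_{2k,1}^{1,0}]=\chit_{2k-1,1}+\chit_{2k+1,1}$, the multiplicity of the $j$-th term in the fusion is $2-\delta_{j,|k-k'|}-\delta_{j,k+k'}$, and $\chit_{-r,1}=-\chit_{r,1}$ so that the $j=0$ term cancels.

```python
from collections import Counter

def sl2(a, b):
    """chi_{a,1} x chi_{b,1} = sum of chi_{m,1} for m = |a-b|+1, |a-b|+3, ..., a+b-1."""
    return Counter(range(abs(a - b) + 1, a + b, 2))

def add_signed(total, r, mult):
    """Add mult copies of chi_{r,1}, using the convention chi_{-r,1} = -chi_{r,1}."""
    if r > 0:
        total[r] += mult
    elif r < 0:
        total[-r] -= mult

def lhs(k, kp):
    """Expand chi[R_{2k,1}^{1,0}] x chi[R_{2k',1}^{1,0}] with the sl(2) rule."""
    total = Counter()
    for a in (2 * k - 1, 2 * k + 1):
        for b in (2 * kp - 1, 2 * kp + 1):
            total += sl2(a, b)
    return {r: m for r, m in total.items() if m}

def rhs(k, kp):
    """Characters of the fused representations with the assumed multiplicities."""
    total = Counter()
    for j in range(abs(k - kp), k + kp + 1):
        d = 2 - (j == abs(k - kp)) - (j == k + kp)
        for r in (2 * j - 1, 2 * j + 1):
            add_signed(total, r, d)
    return {r: m for r, m in total.items() if m}
```

Under these conventions the two sides agree for all small $k,k'$, in accordance with the character identity for $\chit[\R_{2k,1}^{1,0}\otimes\R_{2k',1}^{1,0}]$.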
Further evidence in support of the relevance of Kac representations in logarithmic CFT may be found in \cite{Ras} where quotient modules with characters (\ref{chikac}) are found to arise naturally in the limit of certain sequences of minimal models. \section{Conclusion} We have presented explicit general conjectures for the chiral fusion algebras of critical percolation, and we have exhibited dictionaries to facilitate comparison of our results with the particular results of Eberle and Flohr~\cite{EberleF06} and Read and Saleur~\cite{RS}. Importantly, we observe the appearance of rank-3 indecomposable representations in agreement with Eberle and Flohr. Our fundamental fusion algebra is built from independent horizontal and vertical algebras that, at the level of characters, respect an underlying $s\ell(2)$ structure. The identity $(1,1)$ of this fundamental fusion algebra is a reducible yet indecomposable Kac representation of rank 1. Our reported fusion rules are supported by extensive numerical investigations of an integrable lattice model of critical percolation. These lattice results will be presented elsewhere. We also hope to discuss elsewhere the full fusion algebra encompassing all of the Kac representations as well as extensions to general logarithmic minimal models. \vskip.5cm \section*{Acknowledgments} \vskip.1cm \noindent This work is supported by the Australian Research Council. JR thanks Andreas Ludwig for encouraging discussions at the KITP in November 2006.
\begin{abstract} A complete description of twisting somersaults is given using a reduction to a time-dependent Euler equation for non-rigid body dynamics. The central idea is that after reduction the twisting motion is apparent in a body frame, while the somersaulting (rotation about the fixed angular momentum vector in space) is recovered by a combination of dynamic and geometric phase. In the simplest ``kick-model'' the number of somersaults $m$ and the number of twists $n$ are obtained through a rational rotation number $W = m/n$ of a (rigid) Euler top. This rotation number is obtained by a slight modification of Montgomery's formula \cite{Montgomery91} for how much the rigid body has rotated. Using the full model with shape changes that take a realistic time we then derive the master twisting-somersault formula: An exact formula that relates the airborne time of the diver, the time spent in various stages of the dive, the numbers $m$ and $n$, the energy in the stages, and the angular momentum by extending a geometric phase formula due to Cabrera \cite{Cabrera07}. Numerical simulations for various dives agree perfectly with this formula where realistic parameters are taken from actual observations. \end{abstract} \section{Introduction} One of the most beautiful Olympic sports is springboard and platform diving, where a typical dive consists of a number of somersaults and twists performed in a variety of forms. The athlete generates angular momentum at take-off and achieves the desired dive by executing shape changes while airborne. From a mathematical point of view the simpler class of dives are those for which the rotation axis and hence the direction of angular velocity remains constant, and only the values of the principal moments of inertia are changed by the shape change, but not the principal axis. This is typical in dives with somersaults in a tight tuck position with minimal moment of inertia.
The mathematically not so simple dives are those where the shape change also moves the principal axis and hence generates a motion in which the rotation axis is not constant. This is typical in twisting somersaults. In this work we present a detailed mathematical theory of the twisting somersault, starting from the derivation of the equations of motion for coupled rigid bodies in a certain co-moving frame, to the derivation of a model of a shape-changing diver who performs a twisting somersault. The first correct description of the physics of the twisting somersault was given by Frohlich \cite{Frohlich79}. Frohlich writes with regard to some publications from the 60s and 70s that ``several books written by or for coaches discuss somersaulting and twisting, and exhibit varying degrees of insight and/or confusion about the physics processes that occur''. A full-fledged analysis has been developed by Yeadon in a series of classical papers \cite{Yeadon93a,Yeadon93b,Yeadon93c,Yeadon93d}. Frohlich was the first to point out the importance of shape change for generating rotations even in the absence of angular momentum. What we want to add here is the analysis of how precisely a shape change can generate a change in rotation in the presence of angular momentum. From a modern point of view this is a question raised in the seminal paper by Shapere and Wilczek \cite{ShapereWilczek87,ShapereWilczek88}: ``what is the most efficient way for a body to change its orientation?'' Our answer involves generalisations of geometric phase in rigid body dynamics \cite{Montgomery91} to shape-changing bodies recently obtained in \cite{Cabrera07}. To be able to apply these ideas in our context we first derive a version of the Euler equation for a shape-changing body. Such equations are not new, see, e.g.~\cite{Montgomery93,Iwai98,Iwai99,Enos93}, but what is new is the derivation of the explicit form of the additional time-dependent terms when the model is a system of coupled rigid bodies.
We have chosen to be less general than previous authors by specialising in a particular frame, but for our intended application the gained beauty and simplicity of the equations of motion (see Theorem 1) is worth the loss of generality. We then take a particularly simple system of just two coupled rigid bodies (the ``one-armed diver'') and show how a twisting somersault can be achieved even with this overly simplified model. An even simpler model is the diver with a rotor \cite{BDDLT15}, in which all the stages of the dive can be analytically solved for. Throughout the paper we emphasise the geometric mechanics point of view. Hence the translational and rotational symmetry of the problem is reduced, and thus Euler-type equations are found in a co-moving frame. In this reduced description the amount of somersault (i.e.\ the amount of rotation about the fixed angular momentum vector in space) is not present. Reconstruction allows us to recover this angle by solving an additional differential equation driven by the solution of the reduced equations. However, it turns out that for a closed loop in shape space the somersault angle can be recovered by a geometric phase formula due to \cite{Cabrera07}. In diving terminology this means that from the knowledge of tilt and twist the somersault can be recovered. We hope that the additional insight gained in this way will lead to an improvement in technique and performance of the twisting somersault. The structure of the paper is as follows: In section 2 we derive the equations of motion for a system of coupled rigid bodies that is changing shape. The resulting Euler-type equations are the basis for the following analysis. In section 3 we discuss a simplified kick-model, in which the shape change is impulsive. The kick changes the trajectory and the energy, but not the total angular momentum. In section 4 the full model is analysed, without the kick assumption. 
Unlike the previous section, here we have to resort to numerics to compute some of the terms. Nevertheless, the generalised geometric phase formula due to Cabrera \cite{Cabrera07} still holds and gives a beautiful geometric interpretation to the mechanics behind the twisting somersault. \newpage \section{Euler equations for coupled rigid bodies} Let $\l$ be the constant angular momentum vector in a space fixed frame. Rigid body dynamics usually use a body-frame because in that frame the tensor of inertia is constant. The change from one coordinate system to the other is given by a rotation matrix $R = R(t) \in SO(3)$, such that $\l = R \L$. In the body frame the vector $\L$ is described as a moving vector and only its length remains constant since $R \in SO(3)$. The angular velocity $\O$ in the body frame is the vector such that $ \O \times \v = R^t \dot R \v$ for any vector $\v \in \mathbb{R}^3$. Even though for a system of coupled rigid bodies the tensor of inertia is generally not a constant, a body frame still gives the simplest equations of motion: \begin{theorem} The equations of motion for a shape-changing body with angular momentum vector $\L \in \mathbb{R}^3$ in a body frame are \[ \dot \L = \L \times \O \] where the angular velocity $\O \in \mathbb{R}^3$ is given by \[ \O = I^{-1} ( \L - \mathbf{A}), \] $I = I(t)$ is the tensor of inertia, and $\mathbf{A} = \mathbf{A}(t)$ is a momentum shift generated by the shape change. \end{theorem} \begin{proof} The basic assumption is that the shape change is such that the angular momentum is constant. Let $\l$ be the vector of angular momentum in the space fixed frame, then $ \l = R \L$. Taking the time derivative gives $\mathbf{0} = \dot R \L + R \dot \L$ and hence $\dot \L = -R^t \dot R \L = -\O \times \L = \L \times \O$. The interesting dynamics is all hidden in the relation between $\O$ and $\L$. 
Let $\mathbf{q} = R \mathbf{Q}$ where $\mathbf{Q}$ is the position of a point in the body $B$ in the body frame, and $\mathbf{q}$ is the corresponding point in the space fixed frame. The relation between $\L$ and $\O$ is obtained from the definition of angular momentum which is $\mathbf{q} \times \dot \mathbf{q}$ integrated over the body $B$. The relation $ \mathbf{q} = R \mathbf{Q}$ is for a rigid body, for a deforming body we label each point by $\mathbf{Q}$ in the body frame but allow for an additional shape change $S$, so that $\mathbf{q} = R S \mathbf{Q}$. We assume that $S : \mathbb{R}^3 \to \mathbb{R}^3$ is volume preserving, which means that the determinant of the Jacobian matrix of $S$ is 1. The deformation $S$ need not be linear, but we assume that we are in a frame in which the centre of mass is fixed at the origin, so that $S$ fixes that point. Now \[ \begin{aligned} \dot \mathbf{q} & = \dot R S \mathbf{Q} + R \dot S \mathbf{Q} + R S \dot \mathbf{Q} = R R^t \dot R S \mathbf{Q} + R \dot S \mathbf{Q} = R ( \O \times S \mathbf{Q} )+ R \dot S S^{-1} S \mathbf{Q} \\ & = R (\O \times \tilde \mathbf{Q} + \dot S S^{-1} \tilde \mathbf{Q}) \end{aligned} \] where $\tilde \mathbf{Q} = S \mathbf{Q}$. Thus we have \[ \begin{aligned} \mathbf{q} \times \dot \mathbf{q} & = R \tilde \mathbf{Q} \times R ( \O \times \tilde \mathbf{Q} + \dot SS^{-1} \tilde \mathbf{Q}) \\ & = R ( | \tilde \mathbf{Q}|^2 \mathbbm{1} - \tilde \mathbf{Q} \tilde \mathbf{Q}^t) \O + R ( \tilde \mathbf{Q} \times \dot S S^{-1} \tilde \mathbf{Q}) \,. 
\end{aligned} \] Now $\l$ is defined by integrating over the deformed body $\tilde B$ with density $\rho = \rho(\tilde \mathbf{Q})$, so that $\l = \int \rho \mathbf{q} \times \dot \mathbf{q} \, \mathrm{d} \tilde \mathbf{Q}$ and using $\l = R \L$ gives \[ \L = \int_{\tilde B} \rho ( | \tilde \mathbf{Q} |{}^2 \mathbbm{1} - \tilde \mathbf{Q} \tilde \mathbf{Q}^t) \, \mathrm{d} \tilde \mathbf{Q} \,\, \O + \int_{\tilde B} \rho \tilde \mathbf{Q} \times \dot S S^{-1} \tilde \mathbf{Q} \, \mathrm{d} \tilde \mathbf{Q}. \] The first term is the tensor of inertia $I$ of the shape-changed body and the second term defines $\mathbf{A}$ so that \[ \L = I \O + \mathbf{A} \] as claimed. \end{proof} \begin{remark} Explicit formulas for $I$ and $\mathbf{A}$ in the case of a system of coupled rigid bodies are given in the next theorem. When $I$ is constant and $\mathbf{A} = 0$ the equations reduce to the classical Euler equations for a rigid body. \end{remark} \begin{remark} For arbitrary time dependence of $I$ and $\mathbf{A}$ the total angular momentum $| \L |$ is conserved; in fact it is a Casimir of the Poisson structure $\{ f, g \} = \nabla f \cdot \L \times \nabla g$. \end{remark} \begin{remark} The equations are Hamiltonian with respect to this Poisson structure with Hamiltonian $H = \frac12 ( \L - \mathbf{A}) I^{-1} ( \L - \mathbf{A})$ such that $\O = \partial H/ \partial \L$. \end{remark} For a system of coupled rigid bodies the shape change $S$ is given by rotations of the individual segments relative to some reference segment, typically the trunk. The orientation of the reference segment is given by the rotation matrix $R$ so that $\l = R \L$. The system of rigid bodies is described by a tree that encodes the connectivity of the bodies, see \cite{Tong15} for the details. Denote by $\mathbf{C}$ the overall centre of mass, and by $\mathbf{C}_i$ the position of the centre of mass of body $B_i$ relative to $\mathbf{C}$.
Each body's mass is denoted by $m_i$, and its orientation by $R_{\alpha_i}$, where $\alpha_i$ denotes the set of angles necessary to describe its relative orientation (e.g.\ a single angle for a pin joint, or 3 angles for a ball and socket joint). All orientations are measured relative to the reference segment, so that the orientation of $B_i$ in the space fixed frame is given by $R R_{\alpha_i}$. The angular velocity $\O_{\alpha_i}$ is the relative angular velocity corresponding to $R_{\alpha_i}$, so that the angular velocity of $B_i$ in the space fixed frame is $R_{\alpha_i}^t \O + \O_{\alpha_i}$. Finally $I_i$ is the tensor of inertia of $B_i$ in a local frame with centre at $\mathbf{C}_i$ and coordinate axes aligned with the principal axes of inertia. With this notation we have \begin{theorem} For a system of coupled rigid bodies we have \[ I = \sum R_{\alpha_i} I_i R_{\alpha_i}^t + m_i (| \mathbf{C}_i |{}^2 \mathbbm{1} - \mathbf{C}_i \mathbf{C}_i^t ) \] and \[ \mathbf{A} = \sum ( m_i \mathbf{C}_i \times \dot \mathbf{C}_i + R_{\alpha_i} I_i \O_{\alpha_i} ) \] where $m_i$ is the mass, $\mathbf{C}_i$ the position of the centre of mass, $R_{\alpha_i}$ the relative orientation, $\O_{\alpha_i}$ the relative angular velocity such that $R_{\alpha_i}^t \dot R_{\alpha_i} \v = \O_{\alpha_i} \times \v$ for all $\v \in \mathbb{R}^3$, and $I_i$ the tensor of inertia of body $B_i$. The sum is over all bodies $B_i$ including the reference segment, for which the rotation is simply given by $\mathbbm{1}$. \end{theorem} \begin{proof} The basic transformation law for body $B_i$ in the tree of coupled rigid bodies is $\mathbf{q}_i = R R_{\alpha_i} ( \mathbf{C}_i + \mathbf{Q}_i)$. Repeating the calculation in the proof of Theorem 1 with this particular $S$ and summing over the bodies gives the result. We will skip the derivation of $\mathbf{C}_i$ in terms of the shape change and the geometry of the model and refer to the Thesis of William Tong \cite{Tong15} for the details.
\end{proof} \begin{remark} In the formula for $I$ the first term is the moment of inertia of the segment transformed to the frame of the reference segment, while the second term comes from the parallel axis theorem applied to the centre of mass of the segment relative to the overall centre of mass. \end{remark} \begin{remark} In the formula for $\mathbf{A}$ the first term is the internal angular momentum generated by the change of the relative centre of mass, while the second term originates from the relative angular velocity. \end{remark} \begin{remark} When there is no shape change then $\dot \mathbf{C}_i = 0$ and $\O_{\alpha_i} = 0$, hence $\mathbf{A} = 0$. \end{remark} \begin{remark} The vectors $\mathbf{C}_i$, $\dot \mathbf{C}_i$, and $\O_{\alpha_i}$ are determined by the set of time-dependent matrices $\{ R_{\alpha_i} \}$ (the time-dependent ``shape'') and the joint positions of the coupled rigid bodies (the time-independent ``geometry'' of the model), see \cite{Tong15} for the details. In particular also $\sum m_i \mathbf{C}_i = 0$.
\end{remark} \section{A simple model for twisting somersault} \begin{figure} \centering \subfloat[$t=0$ ]{\includegraphics[width=4cm]{animation01.png}\label{fig:animationlayout}} \subfloat[$t=1/32$ ]{\includegraphics[width=4cm]{animation02.png}} \subfloat[$t=1/16$ ]{\includegraphics[width=4cm]{animation03.png}}\\ \subfloat[$t=3/32$ ]{\includegraphics[width=4cm]{animation04.png}} \subfloat[$t=1/8$ ]{\includegraphics[width=4cm]{animation05.png}} \subfloat[$t=5/32$ ]{\includegraphics[width=4cm]{animation06.png}}\\ \subfloat[$t=3/16$ ]{\includegraphics[width=4cm]{animation07.png}} \subfloat[$t=7/32$ ]{\includegraphics[width=4cm]{animation08.png}} \subfloat[$t=1/4$ ]{\includegraphics[width=4cm]{animation09.png}\label{fig:animationi}} \caption{The arm motion for the twisting somersault.}\label{fig:animation2} \end{figure} Instead of the full complexity of a realistic coupled rigid body model for the human body, e.g., with 11 segments \cite{Yeadon90b} or more, here we are going to show that even when all but one arm is kept fixed it is still possible to do a twisting somersault. The formulas we derive are completely general, so that more complicated shape changes can be studied in the same framework. But in order to explain the essential ingredients of the twisting somersault we chose to discuss a simple example. A typical dive consists of a number of phases or stages in which the body shape is either fixed or not. Again, it is not necessary to make this distinction, the equations of motion are general and one could study dives where the shape is changing throughout. However, the assumption of rigid body motions for certain times is satisfied to a good approximation in reality and makes the analysis simpler and more explicit. The stages where shape change occurs are relatively short, and considerable time is spent in rotation with a fixed shape. This observation motivates our first approximate model in which the shape changes are assumed to be impulsive. 
Hence we have instantaneous transitions between solution curves of rigid bodies with different tensors of inertia and different energy, but the same angular momentum. A simple twisting somersault then looks like this: The motion starts out as a steady rotation about a principal axis resulting in pure somersault (stage 1), and typically this is about the axis of the middle principal moment of inertia, which is an unstable equilibrium. After some time a shape change occurs; in our case one arm goes down (stage 2). This makes the body asymmetric and generates some tilt between the new principal axis and the constant angular momentum vector. As a result the body starts twisting with constant shape (stage 3) until another shape change (stage 4) stops the twist, after which the body resumes pure somersaulting motion (stage 5) until head-first entry into the water. The amount of time spent in each of the five stages is denoted by $\tau_i$, where $i = 1, \dots, 5$. \begin{figure} \centering \subfloat[$\L$ for all stages.]{\includegraphics[width=7.25cm]{Lfull.pdf}\label{fig:Lfull}} \subfloat[$q$ for all stages.]{\includegraphics[width=7.25cm]{qfull.pdf}} \caption{Twisting somersault with $m=1.5$ somersaults and $n=3$ twists. The left pane shows the angular momentum $\L(t)$, and the right pane the quaternion $q(t)$ that determines the orientation $R$. The stages are separated by the vertical dashed lines, $\tau_1 = 0$, $\tau_2 = \tau_4 = 1/4$. The same trajectory on the $\L$-sphere is shown in Fig.~\ref{fig:spacefull}.}\label{fig:Lqfull} \end{figure} \begin{figure} \centering \includegraphics[width=12cm]{cap.png} \caption{Twisting somersault on the sphere $|\L| = l$ where the shape change is a kick. The region $A$ bounded by the stage 3 orbit of $\L$ and the equator (dashed) is shaded in dark blue. } \label{fig:Asimple} \end{figure} The energy for pure somersault in stage 1 and stage 5 is $E_s = \frac12 \L_s I_s^{-1} \L_s = \frac12 l^2 I_{s,yy}^{-1}$.
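As an illustration of the reduced dynamics in the rigid stages (where $\mathbf{A}=0$), the following Python sketch integrates $\dot\L = \L\times I^{-1}\L$ for a non-diagonal tensor of inertia and checks numerically that the orbit stays on the intersection of the momentum sphere with the energy ellipsoid, and that a pure rotation about a principal axis is an equilibrium; the numerical values of $I$ and $\L$ are illustrative only and are not taken from the diver model.

```python
import numpy as np

# Illustrative non-diagonal "twist" tensor of inertia (not from the paper).
It = np.array([[10.0, 0.0, 0.0],
               [0.0, 12.0, 1.5],
               [0.0, 1.5, 2.0]])
Itinv = np.linalg.inv(It)

def rhs(L):
    """Reduced Euler equation dL/dt = L x Omega with Omega = I^{-1} L (A = 0)."""
    return np.cross(L, Itinv @ L)

def rk4_step(L, dt):
    k1 = rhs(L)
    k2 = rhs(L + 0.5 * dt * k1)
    k3 = rhs(L + 0.5 * dt * k2)
    k4 = rhs(L + dt * k3)
    return L + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

L = np.array([0.0, 0.8, 0.6])        # tilted away from the somersault axis
l2_0 = L @ L                          # |L|^2, the Casimir
E_0 = 0.5 * L @ (Itinv @ L)           # the Hamiltonian H = (1/2) L . I^{-1} L
for _ in range(5000):
    L = rk4_step(L, 1e-3)
# |L|^2 and H stay constant: the orbit is the sphere-ellipsoid intersection
```

The two conserved quantities are exactly the sphere $|\L|^2=l^2$ and the energy $H=E_t$ that define the stage-3 orbit in the text.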
In the kick-model stage 2 and stage 4 do not take up any time so $\tau_2 = \tau_4 = 0$, but they do change the energy and the tensor of inertia. On the momentum sphere $|\L|^2 = l^2$ the dive thus appears like this, see Fig.~\ref{fig:Asimple}: For some time the orbit remains at the equilibrium point $L_x = L_z = 0$, then it is kicked into a periodic orbit with larger energy describing the twisting motion. Depending on the total available time a number of full (or half) revolutions are done on this orbit, until another kick brings the solution back to the unstable equilibrium point where it started (or to the opposite point with negative $L_y$). Finally, some time is spent in the pure somersaulting motion before the athlete completes the dive with head-first entry into the water. The description in the body frame on the sphere $|\L|^2 = l^2$ does not directly contain the somersault rotation about $\l$ in physical space. This is measured by the angle of rotation $\phi$ about the fixed angular momentum vector $\l$ in space, which is the angle reduced by the symmetry reduction that led to the description in the body frame, and the angle $\phi$ will have to be recovered from its dynamic and geometric phase. What is visible on the $\L$-sphere is the dynamics of the twisting motion, which is the rotation about the (approximately) head-to-toe body axis. All motions on the $\L$-sphere except for the separatrices are periodic, but they are not periodic in physical space because $\phi$ in general has a different period. This is the typical situation in a system after symmetry reduction. To build a successful jump the orbit has to start and finish at one of the equilibrium points that correspond to somersault rotation about the fixed momentum axis $\l$ in space. This automatically ensures that the number of twists performed will be a half-integer or integer.
In addition, the change in $\phi$, i.e.\ the amount of rotation about the axis $\l$, has to be such that the number of somersaults is a half-integer, corresponding to take-off in upright position and head-first entry into the water. If necessary the angle $\phi$ will evolve (without generating additional twist) in the pure somersaulting stages 1 and 5 to meet this condition. The orbit for the twisting stage 3 is obtained as the intersection of the angular momentum sphere $|\L|^2 = l^2$ and the Hamiltonian $H = \frac12 \L I_t^{-1} \L = E_t$, where $I_t$ denotes the tensor of inertia for the twisting stage 3 which in general is non-diagonal. The period of this motion depends on $E_t$, and can be computed in terms of complete elliptic integrals of the first kind, see below. \begin{lemma} In the kick-model the instantaneous shape change of the arm from up ($\alpha = \pi$) to down ($\alpha = 0$) rotates the angular momentum vector in the body frame from $\L_s = (0, l, 0)^t$ to $\L_t = R_x( -\chi) \L_s$ where the tilt angle is given by \[ \chi = \int_0^\pi I_{t,xx}^{-1}(\alpha) A_x(\alpha) \mathrm{d} \alpha \, , \] where $I^{-1}_{t,xx}(\alpha)$ is the $xx$ component of the (changing) inverse moment of inertia tensor $I_t^{-1}$, and the internal angular momentum is $\mathbf{A} = ( A_x(\alpha) \dot \alpha, 0, 0)^t$. In particular the energy after the kick is $E_t = \frac12 \L_s R_x(\chi) I_t^{-1} R_x(-\chi) \L_s$. \end{lemma} \begin{proof} Denote by $\alpha$ the angle of the arm relative to the trunk, where $\alpha = 0$ is arm down and $\alpha = \pi$ is arm up. Let the shape change be determined by $\alpha(t)$ where $\alpha(0) = \pi$ and $\alpha(\tau_2) = 0$. Now $\mathbf{A}$ is proportional to $\dot \alpha$ and hence diverges when the time that it takes to do the kick goes to zero, $\tau_2 \to 0$. Thus we approximate $\O = -I^{-1} \mathbf{A}$, and hence the equations of motion for the kick become \[ \dot \L = I^{-1} \mathbf{A} \times \L \] and are linear in $\L$.
Denote the moving arm as the body $B_2$ with index 2, and the trunk and all the other fixed segments as a combined body with index 1. Since the arm is moved in the $yz$-plane we have $R_{\alpha_2} = R_x(\alpha(t))$ and $\Omega_{\alpha_2}$ parallel to the $x$-axis. Moreover the overall centre of mass will be in the $yz$-plane, so that $\mathbf{C}_i$ and $\dot \mathbf{C}_i$, $i=1,2$, are also in the $yz$-plane. So we have $\mathbf{C}_i \times \dot \mathbf{C}_i$ parallel to the $x$-axis as well, and hence $\mathbf{A} = (A_x \dot\alpha, 0, 0)^t$. The parallel axis theorem gives non-zero off-diagonal entries only in the $yz$-component of $I$, and similarly for $R_{\alpha_2} I_s R^t_{\alpha_2}$; hence $I_{xy} = I_{xz} = 0$ for this shape change. Thus $I^{-1} \mathbf{A} = ( I_{t,xx}^{-1} A_x \dot \alpha, 0, 0)^t$, and the equation for $\dot \L$ can be written as $\dot \L = f(t) M \L$ where $M$ is a constant matrix given by $M = \frac{\mathrm{d}}{\mathrm{d} t} R_x(t)|_{t=0}$ and $f(t) = I_{t,xx}^{-1}(\alpha(t)) A_x(\alpha(t)) \dot \alpha$. Since $M$ is constant we can solve the time-dependent linear equation and find \[ \L_t = R_x( -\chi) \L_s \] (the subscript $t$ stands for ``twist", not for ``time") where \[ \chi = \int_0^{\tau_2} I_{t,xx}^{-1} A_x \dot \alpha \,\mathrm{d} t = \int_0^\pi I_{t,xx}^{-1}(\alpha) A_x(\alpha)\, \mathrm{d} \alpha \,. \] We take the limit $\tau_2 \to 0$ and obtain the effect of the kick, which is a change in $\L_s$ by a rotation of $-\chi$ about the $x$-axis. The larger the value of $\chi$ the higher the twisting orbit is on the sphere, and thus the shorter the period. The energy after the kick is easily found by evaluating the Hamiltonian at the new point $\L_t$ on the sphere with the new tensor of inertia $I_t$. 
\end{proof} The tensor of inertia after the shape change is denoted by $I_t$; it does not change by much in comparison to $I_s$ but is now non-diagonal. However, with the rotation $R_x(\mathcal{P})$ it can be re-diagonalised to \[ J = \mathrm{diag}( J_x, J_y, J_z) = R_x(-\mathcal{P}) I_t R_x(\mathcal{P}) \] where in general the eigenvalues are distinct. The precise formula for $\mathcal{P}$ depends on the inertia properties of the model, but its value for a realistic model with 10 segments can be found in \cite{Tong15}. Formulas for $A_x(\alpha)$ in terms of realistic inertial parameters are also given in \cite{Tong15}. In stage 3 the twisting somersault occurs, where we assume that an integer number of twists is performed. This corresponds to an integer number of revolutions of the periodic orbit of the rigid body with energy $E_t$ and tensor of inertia $I_t$. Let $T_t$ be the period of this motion. As already pointed out the amount of rotation about the fixed angular momentum vector $\l$ cannot be directly seen on the $\L$-sphere. Denote this angle of rotation by $\phi$. We need the total change $\Delta \phi$ to be an odd multiple of $\pi$ for head-first entry. Following Montgomery \cite{Montgomery91} the amount $\Delta \phi$ can be split into a dynamic and a geometric phase, where the geometric phase is given by the solid angle $S$ enclosed by the curve on the $\L$-sphere. We are going to re-derive the formula for $\Delta \phi$ here using simple properties of the integrable Euler top. The formula for the solid angle $S$ enclosed by a periodic orbit on the $\L$-sphere is given in the next Lemma (without proof, see, e.g.~\cite{Montgomery91,Levi93,Cushman05}).
\begin{lemma} \label{lem:Ssphere} The solid angle on the sphere $\L^2 = l^2$ enclosed by the intersection with the ellipsoid $\L J^{-1} \L = 2 E$ is given by \[ S(h, \rho) = \frac{4 h g}{\pi} \Big( \Pi(n,m) - K(m)\Big) \] where $m = \rho ( 1 - 2 h \rho)/( 2 h + \rho)$, $n = 1 - 2 h \rho$, $g = ( 1 + 2 h /\rho)^{-1/2}$, $h = ( 2 E_t J_y/l^2 - 1)/ \mu$, $\rho = (1 - J_y/J_z)/(J_y/J_x - 1)$, $\mu = (J_y/J_x - 1)(1 - J_y/J_z)$. \end{lemma} \begin{remark} The essential parameter is $E_t J_y/l^2$ inside $h$, and all other dependences are on certain dimensionless combinations of moments of inertia. Note that the notation in the Lemma uses the classical letters $n$ and $m$ for the arguments of the complete elliptic integrals; these are not to be confused with the number of twists and somersaults. \end{remark} \begin{remark} This is the solid angle enclosed by the curve towards the north-pole; when measuring the solid angle between the equator and the curve instead, the result is $2 \pi - S$. This can be seen by considering $h \to 1/( 2 \rho)$, which implies $m \to 0$ and $n \to 0$, and hence $S \to 0$. \end{remark} Using this Lemma we can find simple expressions for the period and rotation number by noticing that the action variable of the integrable system on the $\L$-sphere is given by $l S/(2\pi)$: \begin{lemma}\label{lem:tT} The derivative of the action $l S /( 2 \pi) $ with respect to the energy $E$ gives the inverse of the frequency $(2\pi)/T$ of the motion, such that the period $T$ is \[ T = \frac{ 4 \pi g}{\mu l} K(m) \,, \] and the derivative of the action $lS/( 2\pi) $ with respect to $l$ gives the rotation number $-W$. \end{lemma} \begin{proof} The main observation is that the symplectic form on the $\L$-sphere of radius $l$ is the area-form on the sphere divided by $l$, and that the solid angle on the sphere of radius $l$ is the enclosed area divided by $l^2$. Thus the action is $ l S / ( 2\pi)$ and the area is $l^2 S$.
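The Lemma is straightforward to evaluate numerically; the sketch below implements $K(m)$ and $\Pi(n,m)$ by Gauss--Legendre quadrature (pure numpy, our own implementation, not a library special function) and checks the limit $h \to 1/(2\rho)$ from the second remark:

```python
import numpy as np

# Gauss-Legendre nodes on [0, pi/2] for the complete elliptic integrals
# (parameter m, characteristic n, as in the Lemma).
_x, _w = np.polynomial.legendre.leggauss(80)
_t = np.pi / 4.0 * (_x + 1.0)
_wt = np.pi / 4.0 * _w

def K(m):
    # complete elliptic integral of the first kind
    return np.sum(_wt / np.sqrt(1.0 - m * np.sin(_t) ** 2))

def Pi(n, m):
    # complete elliptic integral of the third kind
    return np.sum(_wt / ((1.0 - n * np.sin(_t) ** 2)
                         * np.sqrt(1.0 - m * np.sin(_t) ** 2)))

def solid_angle(h, rho):
    # S(h, rho) = (4 h g / pi) * (Pi(n, m) - K(m)) as in the Lemma
    m = rho * (1.0 - 2.0 * h * rho) / (2.0 * h + rho)
    n = 1.0 - 2.0 * h * rho
    g = 1.0 / np.sqrt(1.0 + 2.0 * h / rho)
    return 4.0 * h * g / np.pi * (Pi(n, m) - K(m))
```

At $h = 1/(2\rho)$ both $m$ and $n$ vanish and $\Pi = K$, so the solid angle vanishes as stated in the remark.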
From a different point of view, the reason that the essential object is the solid angle is that the Euler top has a scaling symmetry: if $\L$ is replaced by $s \L$ then $E$ is replaced by $s^2 E$ and nothing changes. This implies that the essential parameter is the ratio $E/l^2$ and the solid angle is a function of $E/l^2$ only. A direct derivation of the action in which $h$ is the essential parameter can be found in \cite{PD12}. Now differentiating the action $l S(E/l^2) / ( 2\pi) $ with respect to $E$ gives \[ \frac{ T}{2\pi} = \frac{ \partial l S(E/l^2) }{2\pi \partial E} = \frac{1}{2\pi l} S'( E/l^2) \] and differentiating the action with respect to $l$ gives \[ -2\pi W = \frac{ \partial l S(E/l^2) }{\partial l} = S(E / l^2) - \frac{2 E}{l^2} S'(E/l^2) = S - \frac{ 2 E T}{ l}\,.\vspace{-7mm} \] \end{proof} \vspace{0mm} \begin{remark} What is not apparent in these formulas is that the scaled period $l T$ is a relatively simple complete elliptic integral of the first kind (depending on $E/l^2$ only), while $S$ and $W$ are both complete elliptic integrals of the third kind (again depending on $E/l^2$ only). \end{remark} \begin{remark} In general the rotation number is given by the ratio of frequencies, and those can be computed from derivatives of the actions with respect to the integrals. If one of the integrals (of the original integrable system) is a global $S^1$ action, then the simple formula $W = \partial I / \partial l$ results, which is the change of the action $I$ with respect to changing the other action $l$ while keeping the energy constant. \end{remark} \begin{theorem} The total amount of rotation $\Delta \phi_{kick}$ about the fixed angular momentum axis~$\l$ for the kick-model when performing $n$ twists is given by \begin{equation} \label{eqn:kick} \Delta \phi_{kick} = (\tau_1 + \tau_5) \frac{2 E_s }{l} + \tau_3 \frac{2 E_t}{l} - n S\,.
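The two displayed formulas in the proof are chain-rule identities that hold for any smooth $S(E/l^2)$, not only the physical solid angle; a quick finite-difference check with an arbitrary stand-in for $S$:

```python
import numpy as np

# Arbitrary smooth stand-in for the solid angle as a function of x = E/l^2;
# the relations below are pure chain-rule identities, independent of the model.
S = lambda x: 2.0 * np.pi * np.tanh(x)
eps = 1e-6
dS = lambda x: (S(x + eps) - S(x - eps)) / (2.0 * eps)

E, l = 0.3, 1.7
action = lambda l_: l_ * S(E / l_ ** 2) / (2.0 * np.pi)   # I = l S(E/l^2)/(2 pi)

T = dS(E / l ** 2) / l                                    # from T/(2 pi) = dI/dE
W = -(action(l + eps) - action(l - eps)) / (2.0 * eps)    # from W = -dI/dl
```

The finite differences reproduce $-2\pi W = S - 2ET/l$ to the accuracy of the numerical derivatives.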
\end{equation} The first terms are the dynamic phase, where $E_s$ is the energy in the somersault stages and $E_t$ is the energy in the twisting somersault stage. The last term is the geometric phase, where $S$ is the solid angle enclosed by the orbit in the twisting somersault stage. For equal moments of inertia $J_x = J_y$ the solid angle $S$ is \[ S = 2\pi \sin( \chi + \mathcal{P}) \] and in general is given by Lemma~\ref{lem:Ssphere}. To perform $n$ twists the time necessary is $\tau_3 = n T_t$ where again for equal moments of inertia $J_x = J_y$ the period $T_t$ is \[ T_t = \frac{2\pi}{l} \frac{(J_y^{-1} - J_z^{-1})^{-1}}{ \sin ( \chi + \mathcal{P} )} \] and in general is given by Lemma~\ref{lem:tT} where $T=T_t$. \end{theorem} \begin{proof} This is a slight extension of Montgomery's formula \cite{Montgomery91} for how much the rigid body rotates, with the added feature that there is no $\mathrm{mod} \, 2\pi$ for $\Delta \phi_{kick}$: we actually need to know how many somersaults occurred. The formula is applied to each stage of the dive, but stage 2 and stage 4 do not contribute because $\tau_2 = \tau_4 = 0$. In stage 1 and stage 5 the trajectory is at an equilibrium point on the $\L$-sphere, so there is only a contribution to the dynamic phase. The essential terms come from stage 3, which is the twisting somersault stage without shape change. When computing $S$ we need to choose a particular normalisation of the integral which is different from Montgomery \cite{Montgomery91,Levi93}, and also different from \cite{Cushman05}. Our normalisation is such that when $J_x = J_y$ the amount of rotation obtained is the corresponding angle $\phi$ of the somersault, i.e.\ the rotation about the fixed axis $\l$ in space. This means that the correct solid angle for our purpose is such that when $J_x = J_y$ and $E_t = E_s$ the contribution is zero. Therefore, we should measure the area $ A = S l^2$ relative to the equator on the sphere.
When $J_x = J_y$ we are simply measuring the area of a slice of the sphere bounded by the equator and the twisting somersault orbit, which lies in a plane parallel to the $xy$-plane with opening angle $\chi + \mathcal{P}$, see Fig.~\ref{fig:Asimple}.\footnote{This is somewhat less obvious than it appears, since the orbit is actually tilted by $\mathcal{P}$ relative to the original equator. It turns out, however, that computing it either way gives the same answer.} In the general case where $J_x \neq J_y$ the area can be computed in terms of elliptic integrals, and the details are given in \cite{Tong15}. Similarly the period of the motion along $H = E_t $ can either be computed from explicit solutions of the Euler equations for $J_x = J_y$, or by elliptic integrals; again see \cite{Tong15} for the details. \end{proof} Now we have all the information needed to construct a twisting somersault. A result of the kick approximation is that we have $\tau_2 = \tau_4 = 0$, and if we further set $\tau_1=\tau_5=0$ then there is no pure somersault either, which makes this the simplest twisting somersault. We call this dive the pure twisting somersault and take it as a first approximation to understanding the more complicated dives. \begin{corollary} A pure twisting somersault with $m$ somersaults and $n$ twists is found for $\tau_1 = \tau_2 = \tau_4 = \tau_5 = 0$ and must satisfy \begin{equation} \label{eqn:rot} 2 \pi m = \Big( 2 l T_t \frac{ E_t}{l^2} - S \Big) n \end{equation} where both $S$ and $l T_t$ are functions of $E_t/l^2$ only (besides inertial parameters). \end{corollary} \begin{proof} This is a simple consequence of the previous theorem by setting $\Delta \phi = 2 \pi m$, $\tau_3 = n T_t$, and $\tau_1 = \tau_5 = 0$. \end{proof} \begin{remark} Solving \eqref{eqn:rot} for $m/n$ gives a rotation number of the Euler top, which characterizes the dynamics on the 2-tori of the super-integrable Euler top.
This rotation number is equivalent up to unimodular transformations to that of Bates et al \cite{Cushman05}. \end{remark} \begin{remark} The number of somersaults per twist is $m/n$, and \eqref{eqn:rot} determines $E_t/l^2$ (assuming the inertial parameters are given). Having $E_t/l^2$ determined in this way means one would need to find a shape change or kick which achieves that $E_t/l^2$, and large values of $E_t/l^2$ can be hard or impossible to achieve. For the one-arm kick-model discussed above the energy that is reached is given by \[ \frac{E_t}{l^2} = \frac12 \L_s R_x(\chi + \mathcal{P}) J^{-1} R_x(-\chi - \mathcal{P}) \L_s/l^2. \] \end{remark} \begin{remark} Given a particular shape change (say, in the kick-approximation) the resulting $E_t/l^2$ will in general not result in a rational rotation number, and hence not be a solution of \eqref{eqn:rot}. In this case the pure somersault of stage 1 and/or stage 5 needs to be used to achieve a solution of \eqref{eqn:kick} instead. \end{remark} \begin{remark} The signs are chosen so that $S$ is positive in the situation we consider. Thus the geometric phase lowers $\Delta \phi$, and can be thought of as an additional cost that twisting imposes on somersaulting. \end{remark} \begin{figure} \centering \includegraphics[width=10cm]{spacefull.png} \caption{Twisting somersault with $m=1.5$ somersaults and $n=3$ twists on the sphere $|\L| = l$. The orbit starts and finishes on the $L_y$-axis with stage 1 and stage 5. Shape-changing stages 2 and 4 are the curved orbit segments that start and finish at this point. The twisting somersault stage 3 appears as a slightly deformed circle below the equator (dashed). }\label{fig:spacefull} \end{figure} \begin{figure} \centering \includegraphics[width=7cm]{circle3.pdf} \caption{Areas $A_+$ and $A_-$ corresponding to solid angles $S_+ = A_+/l^2$ and $S_- = A_-/l^2$.
The geometric phase correction due to the shape change is given by $S_-$.} \label{fig:Aminus} \end{figure} The total airborne time $T_{air}$ has small variability for platform diving, and is bounded above for springboard diving. A typical dive has $ 1.5 < T_{air} < 2.0$ seconds. After $E_t/l^2$ is determined by the choice of $m/n$ the airborne time can be adjusted by changing $l$ (within the physical possibilities) while keeping $E_t/l^2$ fixed. Imposing $T_{air} = \tau_1 + \tau_5 + \tau_3$ we obtain: \begin{corollary} A twisting somersault with $m$ somersaults and $n$ twists in the kick-model must satisfy \[ 2 \pi m + n S = T_{air} \frac{ 2 E_s}{l} + 2 n l T_t \frac{ E_t - E_s}{l^2} \] where $T_{air} - \tau_3 = \tau_1 + \tau_5 \ge 0$. \end{corollary} \section{The general twisting somersault} The kick-model gives a good understanding of the principal ingredients needed in a successful dive. In the full model the shape-changing times $\tau_2$ and $\tau_4$ need to be set to realistic values. We estimate that the full arm motion takes at least about $1/4$ of a second. So instead of having a kick connecting $\L_s$ to $\L_t(0)$, a piece of trajectory from the time-dependent Euler equations needs to be inserted, which can be seen in Fig.~\ref{fig:Lqfull}. The computation of the two dive segments from stage 2 and stage 4 has to be done numerically in general. Nevertheless, there is a beautiful generalisation of Montgomery's formula, due to Cabrera \cite{Cabrera07}, which holds in the non-rigid situation. In Cabrera's formula the geometric phase is still given by the solid angle enclosed by the trajectory; however, for the dynamic phase, instead of simply $2ET$ we need to integrate $\L \cdot \O$ from 0 to $T$. Now when the body is rigid we have $2 E = \L \cdot \O = \mathrm{const}$ and Cabrera's formula reduces back to Montgomery's formula.
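That $\L \cdot \O$ (and $|\L|$) is constant along the rigid-body Euler flow, so that Cabrera's dynamic phase reduces to Montgomery's $2ET$, can be checked directly by integrating the Euler equations; a small sketch with assumed toy moments of inertia:

```python
import numpy as np

J = np.array([1.0, 2.0, 4.0])   # assumed toy principal moments of inertia

def euler_rhs(L):
    # Euler equations in the body frame: dL/dt = L x Omega, with Omega = J^{-1} L
    return np.cross(L, L / J)

def rk4(L, dt, steps):
    # classical fixed-step Runge-Kutta 4 integration of the Euler equations
    out = [L.copy()]
    for _ in range(steps):
        k1 = euler_rhs(L)
        k2 = euler_rhs(L + 0.5 * dt * k1)
        k3 = euler_rhs(L + 0.5 * dt * k2)
        k4 = euler_rhs(L + dt * k3)
        L = L + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        out.append(L.copy())
    return np.array(out)

traj  = rk4(np.array([0.6, 0.0, 0.8]), dt=0.005, steps=2000)
l_abs = np.linalg.norm(traj, axis=1)        # |L|, conserved
LdotO = np.sum(traj ** 2 / J, axis=1)       # L . Omega = 2E, conserved
```

Both quantities stay constant to integration accuracy along the whole trajectory.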
\begin{theorem} For the full model of a twisting somersault with $n$ twists, the total amount of rotation $\Delta\phi$ about the fixed angular momentum axis $\l$ is given by \[ \Delta \phi = \Delta \phi_{kick} + \frac{ 2 \bar E_2 \tau_2 }{l} + \frac{ 2 \bar E_4 \tau_4 }{l} + S_- \] where $S_-$ is the solid angle of the triangular area on the $\L$-sphere enclosed by the trajectories of the shape-changing stage 2 and stage 4, and part of the trajectory of stage 3, see Fig.~\ref{fig:Aminus}. The average energies along the transition segments are given by \[ \bar E_i = \frac{1}{2\tau_i} \int_0^{\tau_i} \L \cdot \O \, \mathrm{d} t, \quad i = 2, 4\,. \] \end{theorem} \begin{proof} This is a straightforward application of Cabrera's formula. For stage 1, stage 3, and stage 5 where there is no shape change the previous formula is obtained. For stage 2 and stage 4 the integral of $\L \cdot \O$ along the numerically computed trajectory with time-dependent shape is computed to give the average energy during the shape change. \end{proof} \begin{figure}[ht] \centering \includegraphics[width=10cm]{tvl.pdf} \caption{The relationship between airborne time $T_\mathit{air}$ and angular momentum $l$ when $\tau_2=\tau_4=1/4$ is used in Corollary~\ref{cor:fin}. The result is for the case of $m=1.5$ somersaults with different numbers of twists $n$. The maximum number of twists is $n=4$ since we need $T_{air} - \tau_2 - \tau_3 - \tau_4 = \tau_1 + \tau_5 \ge 0$. } \label{fig:tvlan} \end{figure} \begin{remark} This quantifies the error that occurs with the kick-model. The geometric phase is corrected by the solid angle $S_-$ of a small triangle, see Fig.~\ref{fig:Aminus}. The dynamic phase is corrected by adding terms proportional to $\tau_2$ and $\tau_4$. Note, if we keep the total time $\tau_2+ \tau_3 + \tau_4$ constant then we can think of the shape-changing times $\tau_2$ and $\tau_4$ from the full model as being part of the twisting somersault time $\tau_3$ of the kick-model.
The difference is $2\big((\bar{E}_2-E_t)\tau_2+(\bar{E}_4-E_t)\tau_4\big)/l$, and since both $\bar{E}_2$ and $\bar{E}_4$ are less than $E_t$, the dynamic phase in the full model is slightly smaller than in the kick-model. \end{remark} \begin{remark} Since $E_t$ is found using the endpoint of stage 2, it can now only be calculated numerically. \end{remark} The final step is to use the above results to find parameters that will achieve $m$ somersaults and $n$ twists, where typically $m$ is a half-integer and $n$ an integer. \begin{corollary} A twisting somersault with $m$ somersaults and $n$ twists satisfies \[ 2 \pi m + n S - S_- = T_{air} \frac{ 2 E_s} { l} + 2 \tau_2 \frac{\bar E_2 - E_s }{l} + 2 \tau_4 \frac{ \bar E_4 - E_s}{l} + 2 \tau_3 \frac{ E_t - E_s}{ l} \] where $T_{air} - \tau_2 - \tau_3 - \tau_4 = \tau_1 + \tau_5 \ge 0$. \label{cor:fin} \end{corollary} \begin{remark} Even though $\bar E_2, \bar E_4, E_t$, and $S_-$ have to be computed numerically in this formula, the geometric interpretation is as clear as before: the geometric phase is given by the area terms $nS$ and $S_-$. \end{remark} In the absence of explicit solutions for the shape-changing stages 2 and 4, we have numerically evaluated the corresponding integrals and compared the predictions of the theory to a full numerical simulation. The results for a particular case and parameter scan are shown in Fig.~\ref{fig:Lqfull} and Fig.~\ref{fig:tvlan} respectively, and the agreement between theory and numerical simulation is extremely good. Fixing the shape change and the time it takes determines $E_t/l^2$, so the essential parameters to be adjusted by the athlete are the angular momentum $l$ and airborne time $T_{air}$ (which are directly related to the initial angular and vertical velocities at take-off). Our result shows that these two parameters are related in a precise way given in Corollary~\ref{cor:fin}.
At first it may seem counterintuitive that a twisting somersault with more twists (and the same number of somersaults) requires less angular momentum when the airborne time is the same, as shown in Fig.~\ref{fig:tvlan} for $m = 3/2$ and $n = 0,1,2,3,4$. The reason is that while twisting, the moments of inertia relevant for somersaulting are smaller than when not twisting, since pure somersaults are performed in layout position as shown in Fig.~\ref{fig:animationlayout}; hence less overall time is necessary. In reality, the somersaulting phase is often done in pike or tuck position, which significantly reduces the moment of inertia about the somersault axis, leading to the intuitive result that more twists require larger angular momentum when the airborne time is the same. \section{Acknowledgement} This research was supported by ARC Linkage grant LP100200245 and the New South Wales Institute of Sports. \bibliographystyle{plain}
\section{introduction} The phenomenon of geometric frustration has attracted the interest of physicists due to the presence of degeneracy in the classical ground states arising from the arrangement of spins on triangular clusters \cite{greedan,ramirez1,ramirez2,moessner}. A frustrated magnet is one in which not all interaction energies can be simultaneously optimized; the anti-ferromagnetic Ising model on the two-dimensional triangular lattice is an example. Highly frustrated magnets, on the other hand, are the class of frustrated magnets that have an infinite number of classical ground states, even after removing the global symmetries of the Hamiltonian. The classical $XY$ anti-ferromagnet on the two-dimensional Kagom\'e lattice constructed from corner-sharing triangular units and the classical Heisenberg antiferromagnet on the $3D$ pyrochlore lattice consisting of corner-sharing tetrahedra are two prototypes of the highly frustrated class. The discoveries of heavy-fermion behaviour \cite{heavy-fermion}, spin-ice ordering \cite{ice1,ice2,ice3}, spin nematics \cite{nematic}, spin-liquid behaviour \cite{liquid1,liquid2,liquid3} and even novel superconductivity \cite{sc} in materials with magnetic sublattices of corner-sharing tetrahedra (such as spinels and pyrochlores) have made these structures the focus of physicists' attention over recent years. It has been widely accepted that no order-by-disorder mechanism can establish a long-range order in the Heisenberg pyrochlore anti-ferromagnet; consequently such a system remains disordered at all temperatures~\cite{od1,od2,od3}. However, experimental observations have revealed an all-in all-out long-range order (consisting of four sublattices oriented along the four [111] spin directions) in the low-temperature phase of $\mathrm{FeF_{3}}$ in the pyrochlore form~\cite{exp1,exp2}.
In this compound, the $\mathrm{Fe^{+3}}$ ions located on a pyrochlore lattice interact anti-ferromagnetically with their nearest neighbors. Since the magnetic {$\mathrm{Fe}^{+3}$} ions are in the $d^{5}$ electronic configuration with a totally symmetric ground state and no net angular momentum, this system can be considered as a Heisenberg anti-ferromagnet, and so the origin of its long-range ordered phase has remained a puzzle. Reimers \textit{et al} have shown that taking into account the interaction with farther neighbors would cause a second-order transition in this system~\cite{Reimers}. However, they found that because of the thermal fluctuations, a co-linear spin ordering would be preferred rather than the all-in all-out state. Therefore, it seems that to stabilize a long-range all-in all-out spin configuration, one should inevitably introduce a single-ion anisotropic crystal-field term in the model Hamiltonian. Another interesting aspect of the transition in pyr-$\mathrm{Fe^{+3}}$ is its universality class. The order-parameter critical exponent $\beta$ has been measured to be $0.18(2)$ in neutron-diffraction experiments, which is closest to the tetracritical value $\beta=1/6$~\cite{sim}. On the other hand, recent Monte Carlo simulations, carried out on the Heisenberg pyrochlore antiferromagnet with single-ion anisotropy, have revealed the existence of a tricritical point for this system \cite{peter,kawamura}. The above interesting problem motivated us to study the critical properties of its two-dimensional counterpart, the $XY$ Kagom\'e anti-ferromagnet with single-ion anisotropy. The classical antiferromagnetic $O(n)$ models on the Kagom\'e lattice have been studied by Huse and Rutenberg~\cite{order1}. There, it has been shown that the Ising model ($n=1$) is disordered at all temperatures, while the $XY$ model ($n=2$) exhibits quasi-long-range order in a three-fold order parameter at zero temperature.
Because the system is two-dimensional, this quasi-long-range order does not survive at finite temperatures and transforms to the disordered phase through a Kosterlitz-Thouless transition. The ground state of the $XY$ model has the same properties as the three-state Potts model, which can be mapped exactly onto the solid-on-solid (SOS) model at the roughening transition. On the other hand, the study of the two-dimensional antiferromagnetic Heisenberg model on the Kagom\'e lattice has been carried out by Ritchey \textit{et al}, which resulted in coplanar spin configurations in which there are nematic spin correlations with planar threefold symmetry and non-Abelian homotopy~\cite{chandra}. They have also shown that very small amounts of bond $XY$ anisotropy are sufficient to convert a crossover to a topological phase transition, in which the binding of non-Abelian disclinations results in a glassy behavior in the absence of extrinsic disorder. The Hamiltonian of the nearest-neighbor $XY$ antiferromagnet model on the $\mathrm{Kagom\acute{e}}$ lattice is given by: \begin{equation}\label{H1} H=-J\sum_{\langle ij\rangle}{\bf S}_{i}\cdot{\bf S}_{j}, \end{equation} in which $J<0$, ${\bf S}_{i}$ denotes the unit planar vectors and $\langle ij\rangle$ indicates nearest-neighbor pairs. The ground state of this model is known to have a huge accidental degeneracy not related to the global symmetries of the Hamiltonian \cite{order1,order2}. In any ground state of the $\mathrm{Kagom\acute{e}}$ lattice the spins ${\bf S}_{i}$ acquire only three directions whose angles with respect to an arbitrary axis, say the $x$-axis, differ from each other by $2\pi/3$. Therefore, the ground state in addition to the continuous $U(1)$ symmetry (due to the arbitrary simultaneous rotation of all spins) is characterized by a well developed discrete degeneracy of the same type as in the 3-state antiferromagnetic Potts model.
The extensive degeneracy of the ground state in this model makes it extremely unstable towards perturbations \cite{pyro}. For instance, if one adds a single-ion easy-axis anisotropic term to Hamiltonian (\ref{H1}), all spins prefer to align along the anisotropy directions, yielding a long-range all-in all-out state for the system. The goal of this paper is to determine the critical properties of an $XY$ model on the two-dimensional Kagom\'e lattice with a single-ion easy-axes anisotropy term. For this purpose we employ mean-field theory and Monte Carlo simulation. The structure of the paper is as follows. In Sec. II, we introduce a mean-field formalism to derive the qualitative picture of transitions in the model. Section III is dedicated to the Monte Carlo method based on multiple histograms and also some methods for analyzing the Monte Carlo data to determine the order of transitions, critical temperatures and critical exponents. The simulation results and discussion are given in Sec. IV and the conclusion appears in Sec. V. \section{mean-field formalism} The Hamiltonian, describing the $XY$ spins with nearest-neighbor anti-ferromagnetic interaction on a Kagom\'e lattice subject to single-site easy-axes anisotropy, is given by: \begin{equation}\label{H} H=-{J\over 2}\sum_{i,j}\sum_{a,b}{\bf S}_{i}^{a}\cdot{\bf S}_{j}^{b}-D\sum_{i}\sum_{a}({\bf S}_{i}^{a}\cdot\hat{\mathrm{z}}^{a})^{2}, \end{equation} in which $J < 0, D > 0$ and $i,j=1,\cdot\cdot\cdot ,N$ and $a,b=1,2,3$ denote the Bravais lattice and sublattice indices, respectively. $\hat{\mathrm{z}}^a$'s represent the unit vectors of the three easy-axes directions in the 2d plane, which lie along the lines connecting the corners and the centers of the corner-sharing triangular units, given by: \begin{eqnarray} \hat{\mathrm{z}}^{1}&=&({\sqrt{3}\over 2},{-1\over 2})\nonumber\\ \hat{\mathrm{z}}^{2}&=&(-{\sqrt{3}\over 2},{-1\over 2})\nonumber\\ \hat{\mathrm{z}}^{3}&=&(0,1) \end{eqnarray} in global Cartesian coordinates.
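As a sanity check of the geometry, the three easy axes are unit vectors at mutual angles of $2\pi/3$ and sum to zero:

```python
import numpy as np

# Easy-axis unit vectors of the three kagome sublattices, global coordinates.
zhat = np.array([[ np.sqrt(3) / 2.0, -0.5],
                 [-np.sqrt(3) / 2.0, -0.5],
                 [ 0.0,               1.0]])
norms = np.linalg.norm(zhat, axis=1)
dots = np.array([zhat[0] @ zhat[1], zhat[0] @ zhat[2], zhat[1] @ zhat[2]])
```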
To apply mean-field theory to this model, we follow the method introduced by Harris, Mouritsen and Berlinsky \cite{Reimers,Harris}. Defining the average magnetization as $\bf{M}_{i}^{a}=\langle\bf{S}_{i}^{a}\rangle$ and the deviation from the mean magnetization as $\delta{\bf S}_{i}^{a}=\bf{S}_{i}^{a}-\bf{M}_{i}^{a}$, and neglecting terms of order $O(\delta S^2)$, we can write the Hamiltonian (Eq.(\ref{H})) in the following linearised form: \begin{equation}\label{H-mf} H={J\over 2}\sum_{i,j}\sum_{a,b}{\bf M}_{i}^{a}\cdot{\bf M}_{j}^{b}+D\sum_{i}\sum_{a}({\bf M}_{i}^{a}\cdot\hat{\mathrm{z}}^{a})^{2} -{J}\sum_{i,a}\sum_{j,b}{\bf M}_{j}^{b}\cdot{\bf S}_{i}^{a}-2D\sum_{i,a}({\bf S}_{i}^{a}\cdot\hat{\mathrm{z}}^{a})({\bf M}_{i}^{a}\cdot\hat{\mathrm{z}}^{a}). \end{equation} Therefore, the mean-field partition function can be written as: \begin{equation}\label{pt} Z=e^{-\beta\left({J\over 2}\sum_{i,j}\sum_{a,b}{\bf M}_{i}^{a}\cdot{\bf M}_{j}^{b}+D\sum_{i}\sum_{a}({\bf M}_{i}^{a}\cdot\hat{\mathrm{z}}^{a})^{2}\right)} \Pi_{i,a}\int e^{\beta{\bf B}_{i}^{a}\cdot{\bf S}_{i}^{a}}d{\bf S}_{i}^{a}, \end{equation} where \begin{equation} {\bf B}_{i}^{a}={J}\sum_{j\neq i}\sum_{b\neq a}{\bf M}_{j}^{b}+2D({\bf M}_{i}^{a}\cdot\hat{\mathrm{z}}^{a})\hat {\mathrm{z}}^{a}, \end{equation} in which the summation is over the nearest neighbors. The integral in Eq.(\ref{pt}) is the angular integral over the planar spin direction and can be evaluated easily as follows: \begin{equation} \int e^{\beta{\bf B}_{i}^{a}\cdot{\bf S}_{i}^{a}}d{\bf S}_{i}^{a}= \int_{0}^{2\pi} e^{\beta{B}_{i}^{a}{\cos(\theta)}}d\theta=2\pi I_{0}(\beta B_{i}^{a}), \end{equation} where $B_{i}^{a}=|{\bf B}_{i}^{a}|$ and $I_{0}$ is the modified Bessel function of the first kind. Then, setting $k_{B}=1$, we reach the following expression for the free energy: \begin{equation}\label{f} F=-T\ln{Z}={J\over 2}\sum_{i,j}\sum_{a,b}{\bf M}_{i}^{a}\cdot{\bf M}_{j}^{b}+D\sum_{i}\sum_{a}({\bf M}_{i}^{a}\cdot\hat{\mathrm{z}}^{a})^{2}-T\sum_{i,a}\ln\left(2\pi I_{0}({ B_{i}^{a}\over T}) \right).
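The single-site integral is the standard modified-Bessel identity $\int_0^{2\pi} e^{x\cos\theta}\,\mathrm{d}\theta = 2\pi I_0(x)$, which can be verified numerically (the periodic rectangle rule is spectrally accurate for this smooth periodic integrand):

```python
import numpy as np

# exp(x cos(theta)) is smooth and 2*pi-periodic, so the uniform rectangle
# rule over one period converges spectrally.
x = 0.7                                  # stands for beta * |B|, arbitrary value
n = 1024
theta = 2.0 * np.pi * np.arange(n) / n
integral = 2.0 * np.pi / n * np.sum(np.exp(x * np.cos(theta)))
```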
\end{equation} From the mean-field free energy obtained above, one can calculate the entropy and the self-consistent magnetization as: \begin{eqnarray} S&=&-{\partial F\over \partial T}=\sum_{i,a}\left[\ln\left(2\pi I_{0}({ B_{i}^{a}\over T})\right)-{B_{i}^{a}\over T}\, {I_{1}({B_{i}^{a}\over T})\over I_{0}({ B_{i}^{a}\over T})}\right],\\ {\bf M}_{i}^{a}&=&{I_{1} ({B_{i}^{a}\over T})\over I_{0}({B_{i}^{a}\over T})}\,\hat{\bf B}_{i}^{a}. \end{eqnarray} For small values of $B$, one can expand Eq.(10) as: \begin{equation} {M}_{i}^{a}=\left[{B_{i}^{a}\over 2T} - {{B_{i}^{a}}^{3}\over 16T^3}+{{B_{i}^{a}}^{5}\over 96T^5}-{11 \over 6144} {{B_{i}^{a}}^{7}\over T^7}+ O({{B}^{9}})\right], \end{equation} from which, by inverting the series, one gets: \begin{equation}\label{exp-b} {B}_{i}^{a}={2T}{{M_{i}^{a}}}+T{({M_{i}^{a}})^{3}}+{5\over 6}T{({M_{i}^{a}})^5} +O(M^{7}). \end{equation} Substituting Eq.(\ref{exp-b}) into Eq.(9) and expanding the entropy in powers of $M_{i}^{a}$ enables us to expand the free energy as: \begin{eqnarray} F&=&\langle H \rangle -TS\nonumber \\ &=& -3NT\ln(2\pi)-{J\over 2}\sum_{i,j}\sum_{a,b}{\bf M}_{i}^{a}\cdot{\bf M}_{j}^{b}-D\sum_{i,a}({\bf M}_{i}^{a}\cdot\hat{\mathrm{z}}^{a})^{2}\nonumber \\ &+&T\sum_{i,a}\left(({M_{i}^{a}})^{2}+{1\over 4}({M_{i}^{a}})^{4}+{5\over 36}({M_{i}^{a}})^{6}+O(M^{8})\right), \end{eqnarray} where we have used Eq.(\ref{H-mf}). We can also expand the free energy in terms of Fourier components defined by: \begin{eqnarray} {\bf M}_{i}^{a}&=&\sum_{q} {\bf M}_{\bf q}^{a} \exp(i{\bf q}\cdot{\bf R}_{i}^{a})\\ J_{\bf q}^{ab}&=&\sum_{j\neq i}\sum_{b\neq a} {J} \exp\left(i{\bf q}\cdot({\bf R}_{i}^{a}-{\bf R}_{j}^{b})\right), \end{eqnarray} where the summation in Eq.(15) is over the nearest neighbors of a selected spin.
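The series inversion can be checked numerically by solving the self-consistency relation $M = I_1(B/T)/I_0(B/T)$ for $B$ by bisection and comparing with the truncated series (with the coefficients as derived here):

```python
import numpy as np

def bessel_ratio(x, n=2048):
    # I_1(x)/I_0(x) from the integral representations
    # I_k(x) = (1/pi) * int_0^pi exp(x cos t) cos(k t) dt  (midpoint rule)
    t = np.pi * (np.arange(n) + 0.5) / n
    w = np.exp(x * np.cos(t))
    return np.sum(w * np.cos(t)) / np.sum(w)

def invert(M, T=1.0):
    # solve M = I_1(B/T)/I_0(B/T) for B by bisection (the ratio is monotone)
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bessel_ratio(mid) < M:
            lo = mid
        else:
            hi = mid
    return T * 0.5 * (lo + hi)

T, M = 1.0, 0.05
B_exact  = invert(M, T)
B_series = 2.0 * T * M + T * M ** 3 + (5.0 / 6.0) * T * M ** 5
```

For small $M$ the truncated series agrees with the numerically inverted relation to the expected $O(M^7)$ accuracy.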
Then we reach the following form for the free energy per particle in terms of Fourier components: \begin{eqnarray}\label{ff2} f(T,J,D)&=&\frac{F(T,J,D)}{N}=-3T\ln(2\pi)\nonumber \\ &+&{1\over 2}\sum_{q}\sum_{ab} {\bf M}_{\bf q}^{a} {\bf M}_{-\bf q}^{b}(2T\delta^{ab}-J_{\bf q}^{ab})-D\sum_{q}\sum_{a}({\bf M_{\bf q}^a}\cdot{\hat z}^{a})({\bf M_{-\bf q}^a}\cdot{\hat z}^{a})\nonumber \\ &+&{1\over 4}T\sum_{a}\sum'_{\{{\bf q}\}}({\bf M}_{\bf q1}^{a}\cdot{\bf M}_{\bf q2}^{a})({\bf M}_{\bf q3}^{a}\cdot{\bf M}_{\bf q4}^{a}) \nonumber \\ &+&{5\over 36}T\sum_{a}\sum'_{\{{\bf q}\}}({\bf M}_{\bf q1}^{a}\cdot{\bf M}_{\bf q2}^{a})({\bf M}_{\bf q3}^{a}\cdot{\bf M}_{\bf q4}^{a})({\bf M}_{\bf q5}^{a}\cdot{\bf M}_{\bf q6}^{a})+O(M^{8}), \end{eqnarray} where \[ \sum'_{\{{\bf q}\}}=\sum_{\{{\bf q}\}}\delta(\sum_{i}{\bf qi}). \] The free energy (Eq.\ref{ff2}) can be rewritten in terms of Cartesian components of \\ ${\bf M}_{\bf q}^{a}=({m}_{\bf q}^{a,1},{m}_{\bf q}^{a,2})$ as: \begin{eqnarray}\label{f2} f(T,J,D)&=&-3T\ln(2\pi)+{1\over 2}\sum_{q}\sum_{ab}\sum_{\alpha\beta} (2T\delta^{ab}\delta^{\alpha\beta}-J_{\bf q}^{ab}\delta^{\alpha\beta}-D_{\alpha\beta}^{a}\delta^{ab}){m}_{\bf q}^{a,\alpha}{m}_{-\bf q}^{b,\beta}\nonumber \\ &+&{1\over 4}T\sum_{a}\sum_{\alpha\beta}\sum'_{\{{\bf q}\}}({m}_{\bf q1}^{a,\alpha}{m}_{\bf q2}^{a,\alpha})({m}_{\bf q3}^{a,\beta}{m}_{\bf q4}^{a,\beta}) \nonumber \\ &+&{5\over 36}T\sum_{a}\sum_{\alpha\beta\gamma}\sum'_{\{{\bf q}\}}({m}_{\bf q1}^{a,\alpha}{m}_{\bf q2}^{a,\alpha})({m}_{\bf q3}^{a,\beta}{m}_{\bf q4}^{a,\beta})({m}_{\bf q5}^{a,\gamma}{m}_{\bf q6}^{a,\gamma})+O(M^{8}), \end{eqnarray} in which $\alpha,\beta,\gamma$ take the values $1,2$. It can be seen from the above equation that only the anisotropy term $D$ couples the different Cartesian components of ${\bf M}$.
The $2\times 2$ matrices $D^{a}=2D\,\hat{\mathrm{z}}^{a}(\hat{\mathrm{z}}^{a})^{T}$ are given by: \begin{equation} D^{1}= D\left( \begin{array}{cc} {3\over2} & -{\sqrt{3}\over2}\\ -{\sqrt{3}\over2} & {1\over2} \end{array} \right), D^{2}= D\left( \begin{array}{cc} {3\over2} & {\sqrt{3}\over2} \\ {\sqrt{3}\over2} & {1\over2} \end{array} \right), D^{3}= D\left( \begin{array}{cc} {0} & {0} \\ {0} & {2} \end{array} \right). \end{equation} Thus we are left with the following coupling $6\times 6$ matrix for the quadratic terms: \begin{equation}\label{jq} {\tilde J}_{\bf q}= \left( \begin{array}{ccc} D^{1} & J_{\bf q}^{12}& J_{\bf q}^{13}\\ J_{\bf q}^{12} & D^{2}& J_{\bf q}^{23}\\ J_{\bf q}^{13} & J_{\bf q}^{23}& D^{3} \end{array} \right), \end{equation} in which the off-diagonal matrices $J_{\bf q}^{ij}$ are proportional to the $2\times 2$ unit matrix as follows: \begin{eqnarray} J_{\bf q}^{12}&=&2J\cos(q_{x})I_{2\times 2}\\ J_{\bf q}^{13}&=&2J\cos(\frac{\sqrt{3}q_{y}+q_{x}}{2})I_{2\times 2}\\ J_{\bf q}^{23}&=&2J\cos(\frac{\sqrt{3}q_{y}-q_{x}}{2})I_{2\times 2}. \end{eqnarray} In deriving the above expressions, we have used Eq.(15) together with the positions of the $\mathrm{Kagom\acute{e}}$ atoms given by their $xy$ components (with the Bravais lattice constant set to $2$, so that nearest neighbors are a unit distance apart). For convenience we reduce the number of indices ($a=1,2,3$ and $\alpha=1,2$) by defining a new set of indices $s=1,\cdot\cdot\cdot,6$, which leads to a $6$-component magnetization vector as: \begin{equation} {\tilde{\bf M}}_{\bf q}=(m_{\bf q}^{1,1},m_{\bf q}^{1,2},m_{\bf q}^{2,1},\cdot\cdot\cdot m_{\bf q}^{3,2}) =(m_{\bf q}^{1},m_{\bf q}^{2},\cdot\cdot\cdot m_{\bf q}^{6}), \end{equation} from which the quadratic coupling term in the free energy can be written as: \begin{equation} f^{(2)}=-{1\over 2}\sum_{\bf q}{\tilde{\bf M}}_{\bf q}\,{\tilde J}_{\bf q}\,{\tilde{\bf M}}_{-\bf q}^{T}.
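A quick numerical convention check of the quadratic form: assuming diagonal blocks $D^a = 2D\,\hat{\mathrm z}^a(\hat{\mathrm z}^a)^{T}$ and off-diagonal blocks $J_{\bf q}^{ab}\,I_{2\times2}$ (with $J^{12}_{\bf q}=2J\cos q_x$, lattice constant $2$), the largest eigenvalue of $\tilde J_{\bf q}$ at ${\bf q}=0$ is $-2J+2D$, with the all-in all-out eigenvector:

```python
import numpy as np

Jc, D = -1.0, 0.2    # toy antiferromagnetic coupling and anisotropy strength

zhat = np.array([[ np.sqrt(3) / 2.0, -0.5],
                 [-np.sqrt(3) / 2.0, -0.5],
                 [ 0.0,               1.0]])

def Jtilde(q):
    # 6x6 coupling matrix: diagonal blocks D^a = 2 D z^a (z^a)^T,
    # off-diagonal blocks J_q^{ab} * Identity(2)
    args = {(0, 1): q[0],
            (0, 2): (q[0] + np.sqrt(3) * q[1]) / 2.0,
            (1, 2): (np.sqrt(3) * q[1] - q[0]) / 2.0}
    M = np.zeros((6, 6))
    for a in range(3):
        M[2*a:2*a+2, 2*a:2*a+2] = 2.0 * D * np.outer(zhat[a], zhat[a])
    for (a, b), arg in args.items():
        blk = 2.0 * Jc * np.cos(arg) * np.eye(2)
        M[2*a:2*a+2, 2*b:2*b+2] = blk
        M[2*b:2*b+2, 2*a:2*a+2] = blk
    return M

lam, vec = np.linalg.eigh(Jtilde(np.zeros(2)))
v_top = vec[:, -1]                         # eigenvector of the largest eigenvalue
target = zhat.flatten() / np.sqrt(3.0)     # normalised all-in all-out mode
```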
\end{equation} Diagonalizing the quadratic term requires transforming to the normal modes $\Phi_{\bf q}$: \begin{equation} m_{\bf q}^{s}=\sum_{i=1}^{6} U_{\bf q}^{si} \phi_{\bf q}^{i} \end{equation} for $s=1,2, \cdot\cdot\cdot 6$. $U_{\bf q}$ is the unitary matrix that diagonalizes the coupling matrix ${\tilde J}_{\bf q}$, with eigenvalues $\lambda_{\bf q}^{i}$: \begin{equation}\label{normal} \sum_{b} {\tilde J}_{\bf q}^{ab}U_{\bf q}^{bi}=\lambda_{\bf q}^{i}U_{\bf q}^{ai}, \end{equation} in which the unitarity condition requires: \begin{equation}\label{unitary} \sum_{a} U_{\bf q}^{ai} U_{-\bf q}^{aj}=\delta^{ij}. \end{equation} Equation (\ref{normal}) enables us to write the free energy as a power series in terms of normal modes, such that to $O(\phi^{8})$ we obtain the following expansion for the free energy: \begin{eqnarray}\label{f3} f(T,J,D)&=&-3T\ln(2\pi)+{1\over 2}\sum_{q}\sum_{i=1}^{6} (2T-\lambda_{\bf q}^{i}){\phi}_{\bf q}^{i}\phi_{-\bf q}^{i}\nonumber \\ &+&{T\over 4} \sum_{s=1}^{6}\sum_{ijkl}\sum'_{\{{\bf q}\}} U_{\bf q1}^{si} U_{\bf q2}^{sj}U_{\bf q3}^{sk}U_{\bf q4}^{sl} {\phi}_{\bf q1}^{i}{\phi}_{\bf q2}^{j}{\phi}_{\bf q3}^{k}{\phi}_{\bf q4}^{l} \nonumber \\ &+&{5\over 36} T\sum_{s=1}^{6}\sum_{ijklmn}\sum'_{\{{\bf q}\}} U_{\bf q1}^{si} U_{\bf q2}^{sj}U_{\bf q3}^{sk}U_{\bf q4}^{sl}U_{\bf q5}^{sm}U_{\bf q6}^{sn} {\phi}_{\bf q1}^{i}{\phi}_{\bf q2}^{j}{\phi}_{\bf q3}^{k}{\phi}_{\bf q4}^{l} {\phi}_{\bf q5}^{m} {\phi}_{\bf q6}^{n}. \end{eqnarray} It is clear that a phase transition occurs when the sign of the quadratic term of the free energy changes. Therefore, from the above expression one finds that spontaneous symmetry breaking occurs at the temperature: \begin{equation} T_{c}={1\over 2}{\bf max}_{{\bf q},i}\{{\lambda_{\bf q}^{i}}\}, \end{equation} where max $\{ \}$ means the global maximum over all $i$ and ${\bf q}$.
In the case of $D=0$ one can exactly diagonalize the matrix ${\tilde J}_{\bf q}$ (Eq.(\ref{jq})) and find the following eigenvalues: \begin{eqnarray} \lambda_{\bf q}^{i}&=&-2J \hspace{3.1cm} i=1,2 \nonumber\\ \lambda^{i}_{\bf q}&=&J(1-\sqrt{3+Q}) \hspace{1.2cm} i=3,4 \nonumber\\ \lambda^{i}_{\bf q}&=&J(1+\sqrt{3+Q}) \hspace{1.2cm} i=5,6, \end{eqnarray} where $Q$ is given by: \begin{equation} Q= 2\left\{\cos(2q_{x})+\cos(q_{x}+\sqrt{3}q_{y})+\cos(q_{x}-\sqrt{3}q_{y})\right\}, \end{equation} which coincides with the result derived in Ref.\cite{Reimers}. The above results show that for $J < 0$ the largest eigenvalues are degenerate and dispersionless ($q$-independent), such that when $T < -J$, the order parameters corresponding to all of these modes become nonzero and we are left with a huge number of states with broken symmetry. Therefore, because of the extensive degeneracy of the symmetry-broken states, one concludes that in mean-field theory no long-range order can be established as the temperature decreases down to zero. The $q$-dependence of the eigenvalues for $D=0$ along the [1~0] direction is depicted in Fig.(1). For the anisotropic case ($D > 0$) the eigenvalues of the matrix ${\tilde J}_{\bf q}$ can be obtained numerically. The dispersion curves for $D=0.2$ and $D=1.0$ along the [1~0] direction have been shown in Figs.(2) and (3), respectively. As can be seen from these graphs, all the degeneracies have been removed, so we are left with 6 distinct modes, where the highest mode has a maximum at $q=0$ with the value $\lambda_{0}^{1}=-2J+2D$. It can easily be shown, by deriving the eigenvector of this mode, that it corresponds to the all-in all-out spin configuration represented in Fig.(4). As a result, the mean-field theory predicts a continuous phase transition from the disordered state to a long-range-ordered all-in all-out state at the critical temperature $T_{c}=-J+D$.
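These closed-form eigenvalues can be checked against direct numerical diagonalization of the $3\times 3$ sublattice block (each of its eigenvalues is doubled in the full $6\times 6$ problem). A minimal sketch, written for the coupling convention used above:

```python
import numpy as np

S3 = np.sqrt(3.0)

def spectrum_numeric(qx, qy, J=-1.0):
    """Eigenvalues of the 3x3 sublattice coupling matrix at D=0."""
    a = np.cos(qx)              # J^{12}/(2J)
    b = np.cos((S3*qy + qx)/2)  # J^{13}/(2J)
    c = np.cos((S3*qy - qx)/2)  # J^{23}/(2J)
    M = 2*J*np.array([[0, a, b], [a, 0, c], [b, c, 0]])
    return np.sort(np.linalg.eigvalsh(M))

def spectrum_analytic(qx, qy, J=-1.0):
    """The branches -2J and J(1 -+ sqrt(3+Q)) quoted in the text."""
    Q = 2*(np.cos(2*qx) + np.cos(qx + S3*qy) + np.cos(qx - S3*qy))
    root = np.sqrt(3.0 + Q)
    return np.sort(np.array([-2*J, J*(1 - root), J*(1 + root)]))
```

At every ${\bf q}$ one of the three branches equals $-2J$; this is the dispersionless band responsible for the extensive mean-field degeneracy at $D=0$.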
Another interesting point is that the branch $\lambda_{\bf q}^{4}$ is independent of the magnitude of the anisotropy ($D$), which means that the modes described by it correspond to spin fluctuations perpendicular to the easy-axis directions (${\hat{ \mathrm{z}}}^{a},a=1,2,3$) in the Hamiltonian, Eq.(\ref{H}). \section{Monte Carlo simulation} For large values of $D$, spins tend to remain mainly along the easy-axis directions, so that the effective degrees of freedom are flips along these axes. Therefore, one expects the transition to the all-in all-out state to be in the 2D Ising universality class. However, when $D$ is small, the transverse fluctuations normal to the local easy-axis directions become larger, which leads to a lowering of the transition temperature as well as to deviations from Ising behaviour. In this section we use Monte Carlo simulation to study the phase transition of the model described in the previous section and to find the order of the transitions for different values of the anisotropy $D$. To obtain a qualitative picture of the transitions and also the approximate location of the critical points, we first performed some low-resolution simulations. The simulations were carried out using the standard Metropolis single-spin-rotation algorithm with lattice size $N=3\times 20 \times 20$. During each simulation step, the angles of the planar spins with the horizontal axis were treated as unconstrained, continuous variables. The random angle rotations were adjusted in such a way that roughly $50\%$ of the attempted rotations were accepted. To ensure thermal equilibrium, 100\,000 Monte Carlo steps (MCSs) per spin were used for each temperature, and 200\,000 MCSs were used for data collection.
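A minimal version of this single-spin-rotation Metropolis update is sketched below. The kagome neighbour bookkeeping, the easy-axis angles ($90^{\circ}$, $210^{\circ}$, $330^{\circ}$) and the energy convention $E=-J\sum_{\langle ij\rangle}\cos(\theta_{i}-\theta_{j})-D\sum_{i}\cos^{2}(\theta_{i}-\varphi_{a})$ are our illustrative assumptions (the model Hamiltonian itself is defined earlier in the paper); with $J=-1$ the coupling is antiferromagnetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def kagome_neighbors(L):
    """Nearest-neighbour list for an L x L kagome lattice (3 sublattices),
    with periodic boundary conditions.  Site index: (i*L + j)*3 + a."""
    def idx(i, j, a):
        return ((i % L)*L + (j % L))*3 + a
    bonds = []
    for i in range(L):
        for j in range(L):
            bonds += [(idx(i, j, 0), idx(i, j, 1)),
                      (idx(i, j, 0), idx(i, j, 2)),
                      (idx(i, j, 1), idx(i, j, 2)),
                      (idx(i, j, 0), idx(i-1, j, 1)),
                      (idx(i, j, 0), idx(i, j-1, 2)),
                      (idx(i, j, 1), idx(i+1, j-1, 2))]
    nbrs = [[] for _ in range(3*L*L)]
    for u, v in bonds:
        nbrs[u].append(v)
        nbrs[v].append(u)
    return nbrs

EASY = np.array([np.pi/2, np.pi/2 + 2*np.pi/3, np.pi/2 - 2*np.pi/3])

def site_energy(theta, s, nbrs, J, D):
    """Bond plus single-ion energy of site s (assumed convention)."""
    e = -J*sum(np.cos(theta[s] - theta[t]) for t in nbrs[s])
    return e - D*np.cos(theta[s] - EASY[s % 3])**2

def metropolis_sweep(theta, nbrs, T, J=-1.0, D=0.2, delta=1.0):
    """One Metropolis sweep of random angle rotations; returns acceptance rate."""
    acc, N = 0, len(theta)
    for _ in range(N):
        s = rng.integers(N)
        old = theta[s]
        e0 = site_energy(theta, s, nbrs, J, D)
        theta[s] = old + delta*(rng.random() - 0.5)
        dE = site_energy(theta, s, nbrs, J, D) - e0
        if rng.random() < np.exp(min(0.0, -dE/T)):
            acc += 1            # accept
        else:
            theta[s] = old      # reject
    return acc / N
```

In a production run one would tune \texttt{delta} until roughly half of the proposed rotations are accepted, as described in the text.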
The basic thermodynamic quantities of interest are the specific heat $c=(\langle E^2 \rangle-\langle E \rangle^{2})/(N T^{2})$, the order parameter defined as $M=|\sum_{i,a}{\bf S}_{i}^{a}\cdot\hat{\mathrm{z}}^{a}|/N$ and the susceptibility $\chi=(\langle M^2 \rangle-\langle M \rangle^{2})/(NT)$. In Figs. (5-8), the temperature dependence of the energy per spin, the order parameter, the specific heat and the susceptibility, respectively, is represented for $J=-1.0$, $D=0.2,0.1$. As can be observed from Figs. (7) and (8), the transition for $D=0.2$ seems to be continuous, while for $D=0.1$, because of the sudden peaks in the specific heat and susceptibility, it seems to be first order. However, the determination of the order of the transition requires more accurate methods, for which we will use Binder's fourth energy cumulant method. Once the probability density of the energy, $P(E,T)$, is obtained, the thermodynamic quantities other than the energy can be computed from this energy probability distribution together with microcanonical averages of the quantities of interest. This leads to an optimized use of computer memory. The microcanonical average of a given quantity $A$, as a function of energy, can be calculated directly as: \begin{equation} A(E)=\frac{\sum_{t}A_{t}\delta_{E_{t},E}}{\sum_{t}\delta_{E_{t},E}}, \end{equation} from which the canonical average of $A$ can be obtained as a function of $T$: \begin{equation} \langle A \rangle=\frac{\sum_{E}A(E)P(E,T)}{\sum_{E}P(E,T)}. \end{equation} In our simulations, we use $\mathrm{Kagom\acute{e}}$ lattices with linear sizes $L=20,24,28,32,36,40$ (the number of sites is given by $N=3\times L\times L$), such that the maximum number of spins is 4800, large enough to reduce finite-size effects.
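These two averaging formulas translate directly into a few array operations. In the sketch below the energy axis is discretized into bins, a simplification of the exact energy-level bookkeeping used in the text:

```python
import numpy as np

def microcanonical_average(A_t, E_t, bins):
    """A(E): mean of the time series A_t over configurations whose
    energy E_t falls in each bin (the Kronecker-delta formula above)."""
    which = np.digitize(E_t, bins)
    nb = len(bins) + 1
    sums = np.bincount(which, weights=A_t, minlength=nb)
    counts = np.bincount(which, minlength=nb)
    return np.where(counts > 0, sums/np.maximum(counts, 1), np.nan)

def canonical_average(A_E, P_E):
    """<A> = sum_E A(E) P(E,T) / sum_E P(E,T)."""
    m = ~np.isnan(A_E) & (P_E > 0)
    return np.sum(A_E[m]*P_E[m])/np.sum(P_E[m])
```

Only the energy histogram $P(E,T)$ carries the temperature dependence, so the same stored $A(E)$ can be reweighted to any nearby temperature.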
For each system size, at least five overlapping energy histograms are obtained near the transition point, so that the statistical uncertainty in the wings of the histograms may be suppressed by using the optimized multiple-histogram method\cite{fs}. This enables us to measure the location and magnitude of the extrema of the thermodynamic quantities with high accuracy. For each histogram we performed $5\times10^5$ Monte Carlo steps per spin for equilibration and $5\times10^5$ MCSs for gathering data. To reduce correlations, 10 to 20 Monte Carlo sweeps were discarded between successive measurements. In all simulations we fix $J=-1$ and vary the value of $D$ from 0.1 to 1.0. First of all, we deal with the order of the transitions. \subsection{Order of the transition} To determine the order of the transitions, we used Binder's fourth energy cumulant defined as: \begin{equation} U_{L}=1-\frac{\langle E^4\rangle}{3\langle E^2\rangle^2}. \end{equation} It has been shown that this quantity reaches a minimum at the effective transition temperature $T_{c}(L)$, whose size dependence is given by\cite{landau,lk,lb}: \begin{equation}\label{bind} U_{min}(L)=U^{*}+BL^{-d}+O(L^{-2d}), \end{equation} where \begin{equation} U^{*}=\frac{2}{3}-\left(e_{1}/e_{2}-e_{2}/e_{1}\right)^{2}/12. \end{equation} The quantities $e_{1}$ and $e_{2}$ are the values of the energy per site at the transition point of a first-order phase transition, and $d$ is the spatial dimension of the system ($d=2$ in our simulation). Hence, for continuous transitions, for which there is no latent heat ($e_{1}=e_{2}$), $U_{min}(L)$ tends to the value $U^{*}=2/3$ in the limit of infinite system size. For first-order transitions, however, $e_{1}\neq e_{2}$, and $U^{*}$ approaches a value less than $2/3$ in the limit $L\rightarrow\infty$. The size dependences of $U_{min}(L)$ for $D=1.0,0.18,0.15,0.13,0.1$ are exhibited in Fig.(9). The straight lines fitted to the data have been obtained from Eq.(\ref{bind}).
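The cumulant and the extrapolation (\ref{bind}) can be reproduced with a few lines; the synthetic numbers in the usage below are illustrative, not simulation data:

```python
import numpy as np

def binder_cumulant(E):
    """U_L = 1 - <E^4>/(3<E^2>^2) for a sample of total energies."""
    E = np.asarray(E, dtype=float)
    return 1.0 - np.mean(E**4)/(3.0*np.mean(E**2)**2)

def extrapolate_Ustar(L, U_min, d=2):
    """Least-squares fit of U_min(L) = U* + B L^{-d}; returns (U*, B)."""
    x = np.asarray(L, dtype=float)**(-d)
    A = np.column_stack([np.ones_like(x), x])
    (Ustar, B), *_ = np.linalg.lstsq(A, np.asarray(U_min, dtype=float),
                                     rcond=None)
    return Ustar, B
```

A delta-function energy distribution ($e_{1}=e_{2}$) gives $U=2/3$ exactly, the continuous-transition limit quoted above.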
The values of $U^{*}$ and the latent heat per spin are also listed in Table.(I), from which one can see that, within the errors of the simulation, the transitions are second order for $D>0.17$ and clearly first order for $D<0.15$. The precise determination of the tricritical point is extremely difficult; however, our results suggest the existence of a tricritical point between $D/|J|=0.15$ and $D/|J|=0.17$. In Figs.(10) and (11) the energy histograms of $D=0.2$ and $D=0.1$ for the size $N=3\times 40\times 40$ are shown, respectively. As can be seen from these figures, the energy histogram for $D=0.2$ has one broad peak at the transition, while for $D=0.1$ it has two well-separated peaks around the transition temperature. This is in agreement with the results of Binder's method. Note that the small peak in the middle of Fig.(11) is an artifact of the finite simulation time and would vanish for long enough runs. The reason is that at a strong first-order transition point the free energy possesses two equivalent minima corresponding to the two stable coexisting phases. For large system sizes these minima are separated by a large energy barrier, so during the simulation the system remains mostly near one of them, with rare transitions driven by thermal fluctuations. Therefore, the configurations corresponding to the unstable region in the middle are rare, and consequently the relative error for these data is large. As the next step we proceed to calculate the critical temperatures and critical exponents for the continuous phase transitions, using finite-size scaling theory.
\subsection{Determination of $T_{c}$ and static critical exponents} According to finite-size scaling theory \cite{barber}, the scaling forms for various thermodynamic quantities such as the magnetization density, susceptibility and specific heat in zero field are given by: \begin{eqnarray} \label{mag}m&\approx& L^{\beta/\nu}{\mathcal{M}}(tL^{1/\nu})\\ \label{kappa}\chi&\approx& L^{\gamma/\nu}{\mathcal{K}}(tL^{1/\nu})\\ \label{sh}c&\approx& c_{\infty}(t)+L^{\alpha/\nu}{\mathcal{C}}(tL^{1/\nu}), \end{eqnarray} where $t=(T-T_{c})/T_{c}$ is the reduced temperature for a sufficiently large system at a temperature $T$ close enough to the infinite-lattice critical point $T_{c}$, $L$ is the linear size of the system and $\alpha,\beta,\gamma,\nu$ are static critical exponents. Equations (\ref{mag}-\ref{sh}) are used to estimate the critical exponents. However, before dealing with the critical exponents we should first determine the critical temperature accurately. The logarithmic derivatives of the moments of the total magnetization are important thermodynamic quantities for studying critical phenomena, and are very useful for highly accurate estimation of the critical temperature $T_{c}$ and the correlation length exponent ($\nu$) \cite{chen}. To this end, we define the following quantities: \begin{eqnarray} \label{v1}V_{1}&\equiv& 4[M^{3}]-3[M^{4}],\\ V_{2}&\equiv& 2[M^{2}]-[M^{4}],\\ V_{3}&\equiv& 3[M^{2}]-2[M^{3}],\\ V_{4}&\equiv& (4[M]-[M^{4}])/3,\\ V_{5}&\equiv& (3[M]-[M^{3}])/2,\\ \label{v6}V_{6}&\equiv& 2[M]-[M^{2}], \end{eqnarray} where $M=Nm$ is the total magnetization of the system and \begin{equation} [M^{n}]\equiv \ln\frac{\partial\langle M^{n} \rangle}{\partial T}. \end{equation} From Eq.(\ref{mag}) it is easy to show that \begin{equation} \label{vj}V_{j}\approx (1/\nu)\ln L+{\mathcal{V}}_{j}(tL^{1/\nu}), \end{equation} for $j=1,2,\ldots,6$. At the critical temperature ($t=0$), the ${\mathcal{V}}_{j}$ are constants, independent of the system size $L$.
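The quantities $V_{1},\ldots,V_{6}$ and the resulting slope estimate of $1/\nu$ can be sketched as follows; the finite-difference derivative stands in for the histogram-reweighted temperature derivatives used in our analysis:

```python
import numpy as np

def log_T_derivative(T, avg_Mn):
    """[M^n] = ln(d<M^n>/dT) from a tabulated curve <M^n>(T)."""
    return np.log(np.gradient(avg_Mn, T))

def V_quantities(M1, M2, M3, M4):
    """The combinations V_1..V_6; the arguments are the logarithmic
    derivatives [M^n] defined above."""
    return (4*M3 - 3*M4, 2*M2 - M4, 3*M2 - 2*M3,
            (4*M1 - M4)/3, (3*M1 - M3)/2, 2*M1 - M2)

def nu_from_slope(Ls, V_at_Tc):
    """At t=0, V_j = (1/nu) ln L + const; the fitted slope gives 1/nu."""
    slope, _ = np.polyfit(np.log(Ls), V_at_Tc, 1)
    return 1.0/slope
```

Scanning a range of trial temperatures and keeping the one at which all six slopes agree is the quantity-independence criterion described below.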
Using Eq. (\ref{vj}) one can find the slope of the quantities $V_{1}$ to $V_{6}$ (Eqs. \ref{v1}-\ref{v6}) versus $\ln(L)$ in the region near the critical point. Scanning over the critical region and looking for a quantity-independent slope gives us both the critical temperature $T_{c}$ and the correlation length exponent $\nu$ with high precision. Figures (12) and (13) give examples of such an analysis for the coupling $D/|J|=0.2$. From these figures, we estimate that $\nu=0.842(2)$ and $T_{c}=1.198(1)$. The linear fits to the data in Fig.(12) have been obtained by the linear least-squares method. Once $\nu$ and $T_{c}$ are determined accurately, we can extract the other static critical exponents, related to the order parameter ($\beta$) and the susceptibility ($\gamma$). The ratio $\beta/\nu$ can be estimated by using the size dependence of the order parameter at the critical point given by Eq.(\ref{mag}). Fig.(14) shows the log-log plots of the size dependence of the order parameter for $D/|J|=0.5$ and $D/|J|=0.2$. From this figure the ratio $\beta/\nu$ can be estimated as the slope of the straight lines fitted to the data according to Eq.(\ref{mag}). We then have $\beta/\nu=0.198(8)$ for $D/|J|=0.5$ and $\beta/\nu=0.285(8)$ for $D/|J|=0.2$. From Eq.(\ref{kappa}) it is clear that the peak values of the finite-lattice susceptibility ($\chi=(\langle M^2 \rangle-\langle M \rangle^{2})/(NT)$) and the magnitude of the true susceptibility at $T_{c}$ (the same as $\chi$ but with $\langle m \rangle=0$) are asymptotically proportional to $L^{\gamma/\nu}$. The slope of a straight line fitted to the log-log plot of these two quantities versus the linear lattice size can then be used to estimate the ratio $\gamma/\nu$. In Fig.(15) the finite-lattice susceptibility is depicted for ${D/|J|}=0.2$ and $0.5$.
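Both ratios come from the same one-line fit: the slope of $\ln(\mathrm{quantity})$ versus $\ln L$ is $-\beta/\nu$ for the order parameter at $T_{c}$ and $+\gamma/\nu$ for the susceptibility maxima. A minimal sketch:

```python
import numpy as np

def loglog_slope(Ls, values):
    """Least-squares slope of ln(values) versus ln(Ls)."""
    slope, _ = np.polyfit(np.log(Ls), np.log(values), 1)
    return slope
```

The usage below fits synthetic power-law data with the exponents quoted in the text; the prefactors are arbitrary.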
The slopes of the lines fitted to these data give $\gamma/\nu=1.39(2)$ for ${D/|J|}=0.5$ and $\gamma/\nu=1.42(2)$ for ${D/|J|}=0.2$, where the error includes the uncertainty in the slope resulting from the uncertainty in our estimate of $T_{c}$. The same procedure has been applied for $D/|J|=1.0,0.5,0.2$, and the obtained critical exponents are listed in Table.(II). In this table, the critical exponent $\alpha$ has been calculated using the hyperscaling relation: \begin{equation} \alpha=2-d\nu, \end{equation} in which $d=2$. Moreover, the Rushbrooke scaling law ($\alpha+2\beta+\gamma=2$) is satisfied for all sets of exponents within the computational errors. For comparison, we have listed the corresponding critical exponents of Onsager's solution for the 2D Ising model, and also Zamolodchikov's conjecture for the Ising tricritical point in two dimensions, which corresponds to a 2D-$\phi^6$ field theory~\cite{tri1}. Zamolodchikov's conjecture is based on conformal field theory and has been verified by Monte Carlo simulation\cite{tri2}. One can see from Table.(II) that the critical exponents for $D/|J|=1.0$ are very close to the 2D Ising values, indicating that an anisotropy of magnitude $D/|J|=1.0$ is large enough to suppress the transverse fluctuations normal to the easy-axis directions. Upon decreasing the anisotropy, the transverse fluctuations become important and the exponents deviate from the Ising values. However, although the exponents $\nu$, $\gamma$ and $\alpha$ tend monotonically toward the 2D tricritical values, the exponent $\beta$ moves farther from its tricritical value. This discrepancy might be the sign of a new universality class, other than that of the 2D-$\phi^6$ model. Finally, we deal with the dependence of the transition temperature on the anisotropy strength. We have already mentioned the method of obtaining the critical temperature for the continuous transitions ($D > 0.17$).
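The two fits used for this last question (the $L^{-d}$ extrapolation of the finite-size transition temperatures and the log-log slope of $T_{c}$ versus $D$) are a few lines each. The usage below checks the square-root trend against the values of Table.(I); the synthetic $T_{c}(L)$ data are illustrative.

```python
import numpy as np

def extrapolate_Tc(Ls, Tc_L, d=2):
    """Fit T_c(L) = T_c(inf) + B L^{-d}; returns T_c(inf)."""
    B, Tc_inf = np.polyfit(np.asarray(Ls, dtype=float)**(-d), Tc_L, 1)
    return Tc_inf

def power_law_exponent(D, Tc):
    """Slope of the log-log plot of T_c versus D."""
    slope, _ = np.polyfit(np.log(D), np.log(Tc), 1)
    return slope
```

With the transition temperatures of Table.(I) the fitted exponent is close to $1/2$, the value suggested by the dimensional argument $k_{B}T_{c}\sim(|J|D)^{1/2}$.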
For sufficiently strong first-order transitions, whose energy histograms are double peaked ($D < 0.15$), the finite-size transition temperatures $T_{c}(L)$ are determined as the temperatures at which the two peaks have equal heights. Once $T_{c}(L)$ is obtained for all lattice sizes, the transition temperature in the thermodynamic limit can be extrapolated using the scaling relation: \begin{equation} T_{c}(L)=T_{c}(\infty)+BL^{-d}, \end{equation} where $B$ is a constant and $d=2$. The resulting transition temperatures are listed in Table.(I). In Fig.(16), we have plotted the transition temperature versus $D$ on a logarithmic scale. This linear log-log plot shows a power-law relation between these two quantities: \begin{equation} T_{c}\propto D^{0.501(2)}. \end{equation} This result is in clear contrast with the mean-field prediction of a linear dependence of the transition temperature on the anisotropy strength $D$. This scaling behaviour can be explained by a simple dimensional analysis. Assuming that the exchange interaction $J$ and the anisotropy $D$ are equally important for the occurrence of the phase transition in the $XY$ Kagom\'{e} antiferromagnet, the thermal energy which balances the entropy and internal energy at the transition point must be proportional to a combination of $J$ and $D$. Accordingly, dimensional analysis requires $k_{B}T_{c}\sim (|J|D)^{1\over 2}$, which leads to $T_{c}/|J|\sim (D/|J|)^{1\over 2}$. \section{Conclusion} In summary, using mean-field theory and optimized Monte Carlo simulations based on the multi-histogram method, we investigated the phase transitions of the antiferromagnetic classical $XY$ model on a two-dimensional $\mathrm{Kagom\acute{e}}$ lattice with easy-axis single-ion anisotropy. In the absence of anisotropy, this system is highly frustrated and no phase transition is expected to occur at finite temperatures, except the Kosterlitz-Thouless transition mentioned in Ref. \cite{chandra}.
Turning on the anisotropy removes the degeneracies of the ground state and so establishes long-range order with the all-in all-out spin configuration at low temperatures. Upon increasing the temperature, the system exhibits a phase transition from the all-in all-out ordered state to the disordered (paramagnetic) state. According to the Monte Carlo results, this transition is first order for small values of the anisotropy, while it becomes second order at a tricritical point corresponding to an anisotropy strength in the interval $0.15 < \frac{D}{|J|} < 0.17$. Employing finite-size scaling theory, we derived the critical exponents for the continuous transitions and found that the transition is in the Ising universality class for large values of the anisotropy. This is because in the large-$D/|J|$ limit the fluctuations perpendicular to the easy-axis directions are frozen, so the effective degrees of freedom are spin flips along the easy-axis directions and the order parameter possesses the discrete $Z_{2}$ symmetry. Decreasing the anisotropy magnitude activates the spin fluctuations perpendicular to the easy-axis directions. In principle, the coupling of the transverse modes (independent of the anisotropy), and also of the other underlying modes shown in Figs.(2) and (3), with the all-in all-out state at $q=0$ is the reason for the deviation of the universality class of the transitions from Ising, and is also responsible for changing the type of the transition to discontinuous for small values of the anisotropy. However, the critical exponents obtained near the tricritical point do not coincide with those of the two-dimensional Ising tricritical point derived from the 2D-$\phi^6$ field theory. This suggests the possibility of the existence of a new tricritical universality class in two dimensions. This is not surprising, because the critical behaviour of frustrated systems is usually different from the standard universality classes~\cite{kawamura2}.
In this case, finding such a universality class requires more theoretical and numerical investigation. We hope that this work will motivate further experimental, computational and analytical efforts toward a deeper understanding of the nature of transitions in geometrically frustrated systems. {\textbf{ Acknowledgment}} \\ We would like to thank M. J. P. Gingras, H. Kawamura, and P. Holdsworth for enthusiastic discussions and useful comments. \begin{table}[c] \begin{tabular}{|c|c|c|} $D/|J|$ & $T_{c}$ & $U^*$ \\ \hline 1.0 & 0.449(1) & 0.66662(7) \\ 0.5 & 0.316(1) & 0.66660(9) \\ 0.2 & 0.199(1) & 0.66659(8) \\ 0.18 & 0.189(5) & 0.66653(9) \\ 0.17 & 0.184(6) & 0.66649(9) \\ 0.15 & 0.174(8) & 0.6664(1) \\ 0.14 & 0.167(7) & 0.6662(1) \\ 0.13 & 0.162(7) & 0.6661(1) \\ 0.12 & 0.156(8) & 0.6659(1) \\ 0.1 & 0.142(8) & 0.6658(1) \end{tabular} \narrowtext\caption{The critical temperatures and the values of $U^*$ for ${D\over |J|}=1.0,0.5,0.2,0.18,0.17,0.15,0.14,0.13,0.12,0.1$ (see the text).} \end{table} \begin{table}[t] \begin{tabular}{|c|c|c|c|c|c|} $D/|J|$ & $\nu$ & $\beta$ & $\gamma$ & $\alpha$ & $\alpha+2\beta+\gamma$\\ \hline 1 & 1.019(2) & 0.15(1) & 1.64(8) & -0.038(4)& 1.9(1)\\ 0.5 & 0.959(2) & 0.19(1) & 1.52(6) & 0.082(4) & 2.0(1)\\ 0.2 & 0.842(2) & 0.24(2) & 1.18(6) & 0.316(4) & 2.0(1)\\ \hline 2D-Ising & 1 & 1/8 & 7/4 & 0($\log$) & 2\\ 2D-$\phi^6$ & 5/9 & 1/24 & 37/36 & 8/9 & 2\\ \end{tabular} \narrowtext\caption{The static critical exponents $\nu, \beta, \gamma$ and $\alpha$ for ${D\over |J|}=1.0,0.5,0.2$, derived from finite-size scaling. The last column checks Rushbrooke's scaling law. The last two rows list the corresponding exact critical exponents of the 2D Ising model and of the two-dimensional Ising tricritical point, respectively.} \end{table}
\section{Introduction} The {\it least median of squares\/} (LMS) method has recently been proposed by Rousseeuw in \cite{Rous84} to provide a very robust estimate of parameters in linear regression problems. The LMS estimate can be obtained as the solution of the following optimization problem. Let $ {x}^{\top}_{i} = (x_{i1}, \ldots ,x_{ip}),\: i=1, \ldots ,n $, and $ {y}=(y_{1}, \ldots ,y_{n})^{\top} $ be given real vectors. We assume that $ n/2 \geq p $ and the $ (n \times p) $--matrix $ X = [x_{ij}] $ is of full rank to avoid degenerate cases. Let $ {\theta} = (\theta_{1}, \ldots , \theta_{p})^{\top} $ be a vector of regression parameters. The optimization problem that arises out of the LMS method is to find $ \theta^{*} $ providing \begin{equation} \label{equ1} \min_{\theta} \; \mbox{med} \: \{ (y_{i} - {x}_{i}^{\top} {\theta})^{2} \}. \end{equation} It is known (see \cite{Edel90,Joss90,Rous84,Souv87}) that the objective function in (\ref{equ1}) is hard to minimize. The function is multi-extremal; it is considered to have $ O(n^{p}) $ local minima. In fact, efficient and exact algorithms are available only for problems of the lowest dimensions. A simple algorithm for $ p=1 $ can be found in \cite{Rous84}. Another two, designed for problems of dimension $ p=2 $, have been described in \cite{Edel90} and \cite{Souv87}. For other dimensions, there are probabilistic algorithms producing approximate solutions of the problem (see \cite{Atki91} and \cite{Joss90}). The purpose of this paper is to present some new ideas concerning the LMS problem so as to provide a theoretical framework for efficient regression algorithms. In Section~2 we offer a useful representation of the problem. The representation is exploited in Section~3 to demonstrate properties of the objective function and estimate the number of its local minima. Section~4 includes our main result providing the exact number of local minima.
Finally, in Section~5 we briefly outline three LMS regression algorithms based on the above results. \section{Representation of the LMS Problem} To produce our representations, we first replace (\ref{equ1}) by an equivalent problem examined below. Obviously, the solutions of (\ref{equ1}) are exactly the same as those of the problem: \begin{equation} \label{equ2} \min_{\theta} \; \mbox{med} \: \{ |y_{i} - {x}_{i}^{\top} {\theta}| \}. \end{equation} A serious difficulty one meets in analyzing both problems (\ref{equ1}) and (\ref{equ2}) is that it is hard to understand how the median behaves as the objective function. The next result offers a useful representation for the median as well as for other operators defined by means of ordering. Let $ R = \{ r_{1}, \ldots ,r_{n} \} $ be a finite set of real numbers. Suppose that we arrange its elements in increasing order, and denote the $ k $th smallest element by $ r_{(k)} $. If there are elements of equal value, we count them repeatedly in an arbitrary order. \begin{lemma} For each $ k = 1, \ldots ,n, $ the value of $ r_{(k)} $ is given by \begin{equation} \label{equ3} r_{(k)} = \min_{I \in \Im_{\scriptstyle k}} \; \max_{ i \in I } \: r_{i}, \end{equation} where $ \Im_{k} $ is the set of all $k$-subsets of the set $ N = \{1, \ldots ,n \} $. \end{lemma} \begin{proof} Denote the set of indices of the first $ k $ smallest elements by $ I^{*} $. It is clear that $ r_{(k)} = \max_{ i \in I^{*} } r_{i} $. Consider an arbitrary subset $ I \in \Im_{k} $. Obviously, if $ I \neq I^{*} $, there is at least one index $ j \in I $ such that $ r_{j} \geq r_{(k)} $. Therefore, we have $ r_{(k)} \leq \max_{ i \in I } r_{i} $. It remains to take the minimum over all $ I \in \Im_{k} $ in the last inequality so as to get (\ref{equ3}). \end{proof} Let $ h= \lfloor n/2 \rfloor + 1 $, where $ \lfloor n/2 \rfloor $ is the largest integer less than or equal to $ n/2 $. For simplicity, we assume $ \mbox{med}_{ i \in N } \: r_{i} = r_{(h)} $.
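The lemma is easy to confirm by brute force for small sets; the sketch below compares the min-max representation with plain sorting (the sample numbers are arbitrary):

```python
from itertools import combinations

def kth_smallest_minmax(r, k):
    """r_(k) as in the lemma: min over k-subsets I of max_{i in I} r_i."""
    return min(max(r[i] for i in I)
               for I in combinations(range(len(r)), k))
```

For $n=5$ the median corresponds to $h=\lfloor n/2\rfloor+1=3$, so $\mathrm{med}\,R$ is recovered as the min-max over all $3$-subsets.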
(It is absolutely correct to define the median in this form if $ n $ is odd. However, for an even $ n $, it is normally defined as $ \frac{1}{2}( r_{(h-1)} + r_{(h)} ) $.) By using (\ref{equ3}) with $ k = h $ and $ r_{i} = r_{i} ( {\theta} ) = |y_{i} - {x}_{i}^{\top} {\theta}| $, we may now rewrite (\ref{equ2}) as follows: \begin{equation}\label{equ4} \min_{\theta} \: \min_{ I \in \Im_{\scriptstyle {h} } } \: \max_{ i \in I } \: |y_{i} - {x}_{i}^{\top} {\theta}|. \end{equation} The obtained representation seems to be more useful than the original because it is based on the well-known functions {\it max\/} and {\it min}. Moreover, the representation allows us to reduce the problem further. In particular, one may interchange the order of the minimum operations in (\ref{equ4}) and get \begin{equation} \label{equ5} \min_{ I \in \Im_{\scriptstyle {h} } } \: \min_{\theta} \: \max_{ i \in I } \: |y_{i} - {x}_{i}^{\top} {\theta}|. \end{equation} Assume $ I $ to be a fixed subset of $ N $. Consider the problem \begin{equation} \label{equ6} P(I): \; \min_{\theta} \: \max_{ i \in I } \: |y_{i} - {x}_{i}^{\top} {\theta}|. \end{equation} This is the well-known problem of fitting a linear function according to the $ l_{\infty} $--criterion, first examined by Fourier in the early 19th century \cite{Four26}. The method proposed by Fourier was actually a version of the simplex algorithm, and therefore (\ref{equ6}) may be regarded as one of the oldest problems in linear programming. For modern methods and ideas, one can be referred to \cite{Poly89}. Incidentally, by introducing an additional variable $ \rho $, we may cast (\ref{equ6}) in the usual form of a linear programming problem: \begin{eqnarray} & \min \: \rho & \nonumber \\ & \mbox{subject to} & \rho - x_{i}^{\top}\theta \geq -y_{i}, \; \; \rho + x_{i}^{\top}\theta \geq y_{i}, \; i \in I.
\label{equ7} \end{eqnarray} To conclude this section, note that (\ref{equ5}) may be regarded as a ``two-stage'' problem of both combinatorial optimization and linear programming. It consists in minimizing a function defined on a discrete set by solving some linear programming problem. \section{An Analysis of the Objective Function} In this section we examine properties of the objective function in (\ref{equ4}), {\it i.e.} \begin{equation}\label{equ8} F(\theta)=\min_{I \in \Im_{\scriptstyle h}} \: \max_{ i \in I } \: |y_{i} - {x}_{i}^{\top} {\theta}|. \end{equation} The main question we will try to answer is how many local minima it can have. To start the discussion, consider the function $ \varrho_{I} (\theta) = \max_{i \in I} | y_{i} - x_{i}^{\top} \theta |, \; I \subset N $. It is a piecewise linear and convex function bounded below. Clearly, the problem of minimizing $ \varrho_{I}(\theta) $ always has a solution. The function $ \varrho_{I}(\theta) $ can be portrayed as the surface of a convex polyhedron in a $ (p+1) $--dimensional space. It is not difficult to see that function (\ref{equ8}), which one may now express as $ \: F(\theta)= \min_{ I \in \Im_{\scriptstyle h} } \varrho_{I} (\theta) $, also allows us to visualize its graph as the surface of some polyhedron. It is the one produced by taking the union of the polyhedra associated with $ \varrho_{I}(\theta) $, for all $ I \in \Im_{h} $. Note that $ F(\theta) $ is still piecewise linear, but fails to be convex. An illustration for $ p = 1 $ and $ n = 5 $ is given in Figure~\ref{fig1}.
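In modern terms, the linear program (\ref{equ7}) can be handed directly to an off-the-shelf LP solver. The sketch below uses SciPy's \texttt{linprog}; the solver choice is ours, not part of the original method:

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_fit(X, y):
    """Solve P(I): min_theta max_i |y_i - x_i^T theta| via the LP (7).

    Decision variables are (rho, theta); returns (theta, rho)."""
    n, p = X.shape
    c = np.r_[1.0, np.zeros(p)]                  # minimise rho
    A_ub = np.r_[np.c_[-np.ones(n),  X],         # -rho + x^T theta <= y
                 np.c_[-np.ones(n), -X]]         # -rho - x^T theta <= -y
    b_ub = np.r_[y, -y]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] + [(None, None)]*p)
    return res.x[1:], res.x[0]
```

For a single-parameter (location) fit the optimal $\rho$ is the half-range of the data, attained at the midrange estimate.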
\begin{figure}[hhh] \setlength{\unitlength}{2.5mm} \begin{center} \begin{picture}(50,33)(0,-5) \put(7,0){\line(-2,3){7}} \put(7,0){\line(2,3){17}} \put(14,0){\line(-1,5){5}} \put(14,0){\line(1,5){5.2}} \put(22,0){\line(-1,1){20}} \put(22,0){\line(1,1){23}} \put(28,0){\line(-1,2){13}} \put(28,0){\line(1,2){12}} \put(42,0){\line(-1,1){26}} \put(42,0){\line(1,1){6}} \put(0,0){\vector(1,0){48}} \multiput(13,9)(0,-1.05){9}{\line(0,-1){0.55}} \multiput(12,10)(-0.2,1.0){15}{\line(0,1){3}} \multiput(12,10)(0.25,-0.25){4}{\line(0,1){2}} \multiput(13,9)(0.25,0.375){16}{\line(0,1){2}} \multiput(17,15)(0.2,1.0){5}{\line(0,1){3}} \multiput(18,20)(0.25,-0.5){4}{\line(0,1){2.5}} \multiput(19,18)(0.25,0.375){8}{\line(0,1){2}} \multiput(21,21)(0.25,-0.25){44}{\line(0,1){2}} \multiput(32,10)(0.25,0.25){8}{\line(0,1){2}} \multiput(34,12)(0.25,0.5){24}{\line(0,1){2.5}} \thicklines \put(12,10){\line(-1,5){3}} \put(13,9){\line(2,3){4}} \put(12,10){\line(1,-1){1}} \put(17,15){\line(1,5){1}} \put(18,20){\line(1,-2){1}} \put(19,18){\line(2,3){2}} \put(21,21){\line(1,-1){11}} \put(32,10){\line(1,1){2}} \put(34,12){\line(1,2){6}} \put(6,-2){$y_{1}$} \put(12,-2){$\theta^{*}$} \put(14,-2){$y_{2}$} \put(21,-2){$y_{3}$} \put(27,-2){$y_{4}$} \put(41,-2){$y_{5}$} \put(49,0){$\theta$} \end{picture} \caption{An objective function plot.}\label{fig1} \end{center} \end{figure} The objective function in Figure~\ref{fig1} is multi-extremal; it has three local minima. It is clear that for practical problems the number of local minima of (\ref{equ8}) can be enormous. As a first step toward determining this number, we may conclude from representation (\ref{equ5}) that it must not be greater than the number of problems $ P(I) $ for all $ I \in \Im_{h} $. The latter is equal to $ n \choose{h} $, {\it i.e.\/} the number of all $h$--subsets $ I \in \Im_{h} $. Suppose $ \theta^{*} $ to be the solution of a problem $ P(I) $, $ |I| \geq p+1 $.
One can state a necessary condition for the function $ \varrho_{I}(\theta) $ to have its minimum at $ \theta^{*} $ (see \cite{Poly89}): there must exist a $ (p+1)$--subset $ I^{*} \subset I $ and real numbers $ \lambda_{i} $ satisfying \begin{equation}\label{equ9} \sum_{ i \in I^{*} } \: \lambda_{i} \varepsilon_{i} x_{i}=0, \; \; \sum_{ i \in I^{*} } \: \lambda_{i} = 1, \; \; \lambda_{i} \geq 0, \; i \in I^{*}, \end{equation} for some $ \varepsilon_{i} \in \{-1,1\} $. In other words, $ \theta^{*} $ is defined by the point of intersection of $ p+1 $ ``active'' hyperplanes $\: \rho + \varepsilon_{i} x_{i}^{\top} \theta = \varepsilon_{i} y_{i} \:$ for some choice of $ \varepsilon_{i} \in \{-1,1\}, \: i \in I^{*} $, provided that the intersection point is an ``acute'' vertex of the corresponding polyhedron. On the other hand, for any $(p+1)$--subset of indices, we are always able to choose both $ \lambda_{i} $ and $ \varepsilon_{i} $ so as to satisfy (\ref{equ9}). To illustrate this, let us examine an arbitrary $(p+1)$--subset. Without loss of generality, we assume it to be $ \{1, \ldots , p+1 \} $. Consider the equation $\; \sum_{i=1}^{p} t_{i} x_{i} = -t_{p+1} x_{p+1}, $ and set $ t_{p+1} = 1 $ in it. Since $\: \mbox{rank}(X) = p $, we may obtain the values of $ t_{1}, \ldots ,t_{p} $ as the unique solution of the above equation. For every $ i=1, \ldots ,p+1 $, we define $ \; \lambda_{i} = |t_{i}| / \sum_{j=1}^{p+1}|t_{j}|, \; \; \varepsilon_{i} = \mbox{sign} (t_{i}) $. Obviously, $ \lambda_{i}, \: i=1, \ldots ,p+1 $, are just those required in (\ref{equ9}). As we have shown, the solution of any problem $ P(I) $ is determined by $ p+1 $ vectors $ x_{i} $. Conversely, any $ p+1 $ vectors $ x_{i} $ produce only one point which satisfies the necessary condition (\ref{equ9}) and can therefore be treated as the solution of some problem.
Clearly, the number of local minima of $ F(\theta) $ must not be greater than the number of such points, which equals $ {n \choose{p+1}} $. Since we assume that $ p \leq n/2 $, our first estimate $ {n\choose{h}} $ can be improved by replacing it by $ {n \choose{p+1}} $. Although the last estimate is still rough, it is much lower than the quantity $ n^{p} $ considered in \cite{Edel90,Joss90,Souv87} as the order of the number of local minima. \section{The Exact Number of Local Minima} We may now present our main result providing us with the exact number of local minima in (\ref{equ4}). In fact, it allows us to determine the number of local minima for any function of the absolute residuals $\; |y_{i} - x_{i}^{\top} \theta|, \; i \in N \:$, defined by using representation (\ref{equ3}). For each $ k=0,1, \ldots , n-(p+1) $, let us introduce the function \begin{equation}\label{equ10} f_{k}(\theta) = \min_{ I \in \Im_{\scriptstyle {n-k} } } \: \max_{ i \in I } \: |y_{i} - {x}_{i}^{\top} {\theta}|, \end{equation} and denote the number of its local minima by $ M_{k} $. It should be noted that we have to set $ k=n-h=\lfloor \frac{n-1}{2} \rfloor $ in (\ref{equ10}) to produce the objective function of problem (\ref{equ4}). \begin{theorem}\label{the} For each $ k=0,1, \ldots ,n-(p+1) $, it holds that \begin{equation}\label{equ11} M_{k} = { p+k \choose{p} }. \end{equation} \end{theorem} \begin{proof}[Sketch of the proof] Let $ \Pi $ be the set of problems $ P(I) $ for all $ I \subset N, \; |I| \geq p+1 $. To prove the theorem, we express $ | \Pi | $, {\it i.e.\/} the number of all the problems in $ \Pi $, in two ways. Firstly, it is easy to see that this number may be calculated as the sum \begin{equation}\label{equ12} | \Pi | = {n \choose{0}} + {n \choose{1}} + \ldots + {n \choose{n-(p+1)}} = \sum_{j=0}^{n-(p+1)} {n \choose{j}}.
\end{equation} To produce the second representation, we examine a local minimum of the function $ f_{k}(\theta) $ for an arbitrary $ k, \; 0 \leq k \leq n-(p+1) $. Assume $ \theta^{*} $ to be the point of the local minimum. It is clear that $\: \theta^{*} = \theta^{*}(I) \:$ is the solution of some problem $ P(I) $, where $ |I| = n-k $. Since $ \theta^{*} $ is actually determined by a subset $ I^{*} \subset I $ consisting of $ p+1 $ "active" indices, it is also the solution of the problems $ P( I \setminus J ) $ for all $ J \subset I \setminus I^{*} $. The number of problems having their solution at $ \theta^{*} $ therefore coincides with the number of all subsets of $ I \setminus I^{*} $, including the empty set $ \emptyset $, and equals $ 2^{n-(p+1)-k} $. Consequently, the total number of problems associated with the local minima of $ f_{k}(\theta) $ is $ 2^{n-(p+1)-k} M_{k} $. Now we may express $ | \Pi | $ in the form: \begin{equation}\label{equ13} | \Pi | = 2^{n-(p+1)} M_{0} + 2^{n-(p+1)-1} M_{1} + \ldots + M_{n-(p+1)} = \sum_{j=0}^{n-(p+1)} 2^{n-(p+1)-j} M_{j}. \end{equation} From (\ref{equ12}) and (\ref{equ13}), we have \begin{equation}\label{equ14} \sum_{j=0}^{n-(p+1)} 2^{n-(p+1)-j} M_{j} = \sum_{j=0}^{n-(p+1)} {n \choose{j}}. \end{equation} It is not difficult to see that for a fixed $ k, \; 0 \leq k \leq n-(p+1) $, the number $ M_{k} $ depends on $ p $ but not on $ n $. One can consider $ M_{0} $ as an illustration: because the problem $ P(N) $ has a unique solution (see \cite{Poly89}), $ M_{0} $ always equals $ 1 $. Similarly, $ M_{1} = p+1 $ independently of $ n $. To see this, note that each local minimum of $ f_{1}(\theta) $ can be produced by relaxing exactly one of the $ p+1 $ "active" constraints at the minimum point of $ f_{0}(\theta) $. Setting $ n=p+1, p+2, p+3, \ldots \; $ in (\ref{equ14}), we may successively get $\; M_{0} = 1, \; M_{1} = p+1, \; M_{2} = \frac{(p+1)(p+2)}{2}, \ldots $.
It is not difficult to verify that the general solution of (\ref{equ14}) is given by (\ref{equ11}). \end{proof} Finally, substituting $\; k=\lfloor \frac{n-1}{2} \rfloor \;$ into (\ref{equ11}), we conclude that the objective function of the LMS problem has $ {p+\lfloor(n-1)/2 \rfloor \choose{p}} $ local minima. \section{Applications} In this section we briefly outline LMS regression algorithms based on the above analysis of the problem. Only the main ideas underlying the algorithms are presented. \paragraph*{"Greedy" algorithm.} The algorithm produces an approximate solution and consists of solving the sequence of problems (\ref{equ6}), $ P(I_{0}), P(I_{1}), \ldots, P(I_{n-h}) $, where $ I_{0}=N $ and the sets $ I_{1}, I_{2}, \ldots, I_{n-h} $ are defined as follows. Let $ I_{k}^{*} $ be the set of $ p+1 $ "active" indices for the solution of a problem $ P(I_{k}) $. Clearly, for each $ i \in I_{k}^{*} $, the minimum of the objective function in the problem $ P(I_{k}\setminus\{i\}) $ is no greater than that in $ P(I_{k}) $. Denote by $ i_{k}^{*} $ the index that yields the problem with the lowest minimum value. Finally, we define $ I_{k+1} = I_{k} \setminus \{ i_{k}^{*} \} $. The "greedy" algorithm formally requires solving $ (n-h) \! \times \! (p+1)+ 1 $ optimization problems. In practice, however, an efficient procedure for moving between the points that solve successive problems may be designed, so that not every problem has to be solved from scratch. \paragraph*{Exhaustive search algorithm.} This algorithm may be considered as the complete version of the previous one, which in effect performs a reduced search. It exploits the classical depth-first search technique to enumerate all local minima of the objective function. From Theorem~\ref{the}, one can conclude that it requires examining $ {n-h+p+1\choose{p+1}} $ points to produce the exact solution.
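Both the counting identity (\ref{equ14}) with $M_{j}$ given by (\ref{equ11}), and the size of the resulting search, are easy to check numerically; a small illustrative script (parameter values arbitrary):

```python
from math import comb

# Check that M_j = C(p+j, p), Eq. (11), solves the counting identity (14):
# substituting it into the left-hand side reproduces the number of problems
# counted in (12), for every n and every p <= n - 1.
for n in range(2, 16):
    for p in range(1, n):
        m = n - (p + 1)
        lhs = sum(2 ** (m - j) * comb(p + j, p) for j in range(m + 1))
        rhs = sum(comb(n, j) for j in range(m + 1))
        assert lhs == rhs

# Number of points examined by the exhaustive search, C(n-h+p+1, p+1),
# using n - h = floor((n-1)/2); it grows combinatorially with n and p.
def search_size(n, p):
    return comb((n - 1) // 2 + p + 1, p + 1)
```

Note that the search size is just the sum of (\ref{equ11}) over $k=0,\ldots,n-h$, by the hockey-stick identity $\sum_{k} {p+k \choose p} = {n-h+p+1 \choose p+1}$.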
Because of its exponential time complexity, this search algorithm can hardly be applied to problems of high dimension. Note, however, that it normally allows solving problems with $ p \leq 5 $ within reasonable time. \paragraph*{Branch and probability bound algorithm.} This is a random search algorithm based on the Branch and Probability Bound (BPB) technique, which has been developed in \cite{Zhig91} as an efficient tool for solving both continuous and discrete optimization problems. The BPB algorithm designed for the LMS problem is a combinatorial optimization method. It produces an approximate solution by searching over $(p+1)$--subsets of $ N $. As follows from Section~3, each $(p+1)$--subset determines a point satisfying condition (\ref{equ9}), and one of these points is the solution of the LMS problem. In conclusion, I would like to thank Professor A.A.~Zhigljavsky for drawing my attention to the problem and for valuable discussions, and Professor A.C.~Atkinson for his kind interest in this work as well as for providing me with a reprint of paper \cite{Atki91}.
\section{Introduction}\label{sec:intro} The system created in heavy-ion collisions provides an environment where we can study the spin polarization properties of the produced hadrons. In recent years, this observable has drawn a great deal of attention due to the possibility that the spin of hadrons may be aligned along the global vorticity produced in non-central collisions~\cite{Becattini:2007sr,Becattini:2016gvu}. Among these hadrons, $\Lambda$ and $\bar{\Lambda}$ play an important role due to their self-analyzing polarization properties. ALICE~\cite{Acharya:2019ryw} and the STAR Beam Energy Scan (BES)~\cite{STAR:2017ckg,Adam:2018ivw} have measured the $\Lambda$ and $\bar{\Lambda}$ global polarization as a function of the collision energy. In particular, the STAR BES found that both polarizations increase as the collision energy decreases, with the $\bar{\Lambda}$ polarization becoming larger than the $\Lambda$ polarization. Recently, this behaviour has been studied assuming that in non-central heavy-ion collisions, these hyperons can be produced both from a high-density core and from a less dense corona~\cite{Ayala:2020soy,Ayala:2020vyi,Ayala:2001jp}. The $\bar{\Lambda}$ global polarization is amplified, even though the intrinsic $\Lambda$ polarization is larger than the intrinsic $\bar{\Lambda}$ polarization~\cite{Ayala:2020ndx,Ayala:2019iin}, when the larger abundance of $\Lambda$s with respect to $\bar{\Lambda}$s in the corona is combined with a smaller number of $\Lambda$s coming from the core than from the corona. In this work we present a technique to test this idea within the Multi-Purpose Detector (MPD) at the Nuclotron-based Ion Collider fAcility (NICA) located at the Joint Institute for Nuclear Research (JINR)~\cite{Abraamyan:2011zz,Golovatyuk:2016zps}, designed to study heavy-ion collisions in the center-of-mass energy range $\sqrt{s_{NN}} = 4$--$11$ GeV.
In order to test the model, we measure the hyperon global polarization for different centrality sets of data and compare it with the assumptions made in the model. The measurement requires analysis at three different stages: MC event generation, transport through the detector, and reconstruction within the MPDroot framework, for both the hyperon identification and the event plane determination. This work is organized as follows: In Sec.~2 we review how the standard global polarization measurement is performed. In Sec.~3 we describe the analyzed data, and in Sec.~4 we present the preliminary results of the hyperon reconstruction and the angular distributions required to obtain the global polarization. We conclude and summarize in Sec.~5. \section{Hyperon Global Polarization} \label{sec:HyperonGlobalPolarization} \begin{figure}[!h] \begin{center} \includegraphics[width=60mm]{Figuras/Fig1.png}\hspace{10mm} \begin{minipage}[b]{50mm} \caption{Diagram relating the laboratory frame and the hyperon rest frame. The reaction plane is defined by the impact parameter $\hat{\textbf{b}}$ and the beam direction $\hat{\textbf{p}}_{beam}$.} \end{minipage} \end{center} \labelf{figura0} \end{figure} The hyperon global polarization is measured relative to the system's orbital angular momentum $\hat{\textbf{L}}$, which is perpendicular to the reaction plane and defined by $\hat{\textbf{L}} = \hat{\textbf{b}} \times \hat{\textbf{p}}_{beam}$, namely, the cross product of the impact parameter $\hat{\textbf{b}}$ and the beam direction $\hat{\textbf{p}}_{beam}$.
The angular distribution of the hyperon decay products relative to $\hat{\textbf{L}}$ is given by \begin{eqnarray} \frac{dN}{d \Omega^{*}} = \frac{N}{4\pi}(1 + \alpha_H \mathscr{P}_H\cos{\theta^{*}}), \label{dist} \end{eqnarray} where $N$ is the number of particles, $\mathscr{P}_{H}$ is the hyperon global polarization, $\alpha_H$ is the hyperon decay parameter ($\alpha_{\Lambda} = 0.642 \pm 0.013$)~\cite{PhysRevD.98.030001} and $\theta^{*}$ is the angle, in the hyperon rest frame, between the system's orbital angular momentum $\hat{\textbf{L}}$ and the three-momentum $\mathbf{p}^*_p$ of the baryon produced in the hyperon decay. The polarization in Eq.~(\ref{dist}) can be rewritten in terms of the reaction plane angle $\Psi_{RP}$ and the azimuthal angle $\phi^*_p$ of the decay-baryon three-momentum in the hyperon rest frame, by means of the trigonometric relation between the angles in the laboratory frame and the $\Lambda$ rest frame, $\cos{\theta^*} = \sin{\theta^*_p}\sin{(\phi^*_p - \Psi_{RP})}$, where $\theta^*_p$ is the angle between the three-momentum of the decay baryon and the $z$ axis of the laboratory frame, as depicted in Fig.~\ref{figura0}. This results in an expression for the hyperon global polarization given by \begin{eqnarray} \mathscr{P}_{H} = \frac{8}{\pi \alpha_H}\langle\sin{(\phi^*_p - \Psi_{RP})} \rangle. \label{pol} \end{eqnarray} \section{Data analysis} \label{sec:Data analyzed} For this study we generate 100,000 Bi + Bi events at $\sqrt{s_{NN}} = 11$ GeV for different centrality sets of data: \begin{itemize} \item Minimum bias (MB). \item Central collisions, with $b < 4$ fm. \item Semi-central collisions, with $b \in (6,8)$ fm. \item Peripheral collisions, with $b > 10$ fm. \end{itemize} We use UrQMD and GEANT3 for generation and transport through the detector.
The transport and reconstruction usually consider the TPC, TOF, EMC and ZDC detectors; however, for this part of the analysis we only consider the TPC, the main tracking detector of the MPD barrel, which covers the mid-rapidity region $|\eta|<1.2$ and $p_{\textit{T}} > 100$ MeV/$c$~\cite{Averyanov:2017oec}. We reconstruct the hyperons through their weak decay topologies into a proton (antiproton) and the corresponding charged pion using the TPC. We analyze $\Lambda$ and $\bar{\Lambda}$ at three levels: generation, simulation and reconstruction. For this purpose we define the particles analyzed at each level. \begin{itemize} \item Monte Carlo data (MC): $\Lambda$s and $\bar{\Lambda}$s produced with UrQMD. In addition, we consider $\Lambda$s and $\bar{\Lambda}$s coming from decays of particles such as $\Omega$, $\Xi$ and $\Sigma$, to account for secondary interactions, produced by GEANT3, with the different elements of the detector. \item Simulated data (sim): $\Lambda$ and $\bar{\Lambda}$ that can be identified by Monte Carlo association of the products of their charged decays, with transverse momentum $p_{\textit{T}}>0.001$ GeV$/c$ and $|\eta| < 1.3$, so as to be within the acceptance of the detector. \item Reconstructed data (rec): $\Lambda$ and $\bar{\Lambda}$ identified by combining secondary tracks of opposite identified charge, namely p ($\bar{\mathrm{p}}$) and $\pi^-$ ($\pi^+$), together with background subtraction. \end{itemize} Table~\ref{tab:table1} shows the number of $\Lambda$s and $\bar{\Lambda}$s per event at each level of the analysis for the different sets of data. The abundance for each impact parameter range is also shown in Fig.~\ref{figura2}.
\begin{table}[!ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline Data & \multicolumn{2}{|l|}{Generated} & \multicolumn{2}{|l|}{Simulated} & \multicolumn{2}{|l|}{Reconstructed} \\ \hline \hline Sample & $\Lambda$ & $\bar{\Lambda}$& $\Lambda$ & $\bar{\Lambda}$ & $\Lambda$ & $\bar{\Lambda}$ \\ \hline MB & 11.8 & 0.22 & 6.36 & 0.14 & 0.66 & 0.02 \\ $b < 4$ fm & 50.6 & 0.74 & 28.2 & 0.47 & 3.78 & 0.06 \\ $6 < b < 8$ fm & 24.0 & 0.45 & 13.1 & 0.28 & 1.16 & 0.04 \\ $b > 10$ fm & 2.12 & 0.07 & 1.10 & 0.04 & 0.05 & 0.004 \\ \hline \end{tabular} \caption{Content of $\Lambda$ and $\bar{\Lambda}$ per event for the different datasets.} \label{tab:table1} \end{table} \begin{figure}[t] \begin{center} \includegraphics[width=127mm]{Figuras/Fig2.png} \vspace{-3mm} \caption{Number of $\Lambda$s (left) and $\bar{\Lambda}$s (right) per event in each dataset.} \end{center} \labelf{figura2} \vspace{-5mm} \end{figure} In addition, we need to estimate the maximum efficiency $\varepsilon$ that can be obtained in the reconstruction of particles, so as to correct for the effects of detector acceptance and resolution. The maximum efficiency is obtained by matching the reconstructed hyperons with their MC associations. Figure~\ref{figura3} shows the maximum efficiency for $\Lambda$ and $\bar{\Lambda}$ as a function of $p_{\textit{T}}$. We choose this variable in order to compare with the results of other possible analyses. We observe that the efficiency is similar for the different sets of analyzed data. The maximum value is $\varepsilon\approx 0.3$ for $p_{\textit{T}}\approx 2$ GeV/$c$. To reach this efficiency, we need to improve the particle identification. We will also need the efficiency as a function of the variables to be analyzed, such as the azimuthal angle, in order to correct for detector effects.
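The efficiency estimate itself is a bin-by-bin ratio of reconstructed to simulated candidates. A minimal sketch with a hypothetical toy sample (the real analysis uses the MPDroot histograms; all numbers below are illustrative assumptions):

```python
import numpy as np

# Toy sketch of eps(pT) = N_rec(pT) / N_sim(pT) in pT bins, the quantity
# used to correct reconstructed yields for acceptance and resolution.
rng = np.random.default_rng(2)
pt_sim = rng.exponential(0.8, 50_000)            # toy simulated Lambda pT [GeV/c]

# Pretend reconstruction keeps each candidate with a pT-dependent
# probability rising to ~0.3 near pT ~ 2 GeV/c (assumption, not MPD data).
p_keep = 0.3 * np.clip(pt_sim / 2.0, 0.0, 1.0)
pt_rec = pt_sim[rng.uniform(size=pt_sim.size) < p_keep]

bins = np.linspace(0.0, 3.0, 16)
n_sim, _ = np.histogram(pt_sim, bins)
n_rec, _ = np.histogram(pt_rec, bins)
eff = np.divide(n_rec, n_sim,
                out=np.zeros(n_sim.size), where=n_sim > 0)
```

In the same way, `eff` can be histogrammed against the azimuthal angle once the angular analysis is in place.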
\begin{figure}[t] \begin{center} \includegraphics[width=63.5mm]{Figuras/Fig3-1.png} \includegraphics[width=63.5mm]{Figuras/Fig3-2.png} \vspace{-3mm} \caption{Reconstruction efficiency for $\Lambda$ (left) and $\bar{\Lambda}$ (right) in each data set. Both show a rise in efficiency for $0 < p_{\textit{T}} < 2$ GeV/c.} \end{center} \labelf{figura3} \vspace{-5mm} \end{figure} \section{Hyperon reconstruction} \label{sec:Reconstruction} We select $\Lambda$ and $\bar{\Lambda}$ using their weak decay topologies, as shown in Fig.~\ref{figura4}. The $V^0$ finding procedure starts by combining each secondary track with every secondary track of opposite charge. To obtain the invariant mass, the selection of candidates is done using cuts on variables such as the distances of closest approach of the decay products to the primary vertex (DCA $p$-track and DCA $\pi$-track), the distance of closest approach between the two decay-product tracks (DCA V0), and the cosine of the angle between the reconstructed $V^0$ momentum and the vector $\textbf{R}$ joining the primary and secondary vertices (Cosine($\theta$)). The distributions of these variables as a function of the invariant mass are shown in Fig.~\ref{figura5}. To distinguish between $\Lambda$ and $\bar{\Lambda}$ we use the asymmetry of the longitudinal momenta of the product tracks in the rest frame of the hyperon, given by the Armenteros--Podolanski variables~\cite{doi:10.1080/14786440108520416}. \begin{figure}[h!] \begin{center} \includegraphics[width=127mm]{Figuras/Fig4.png} \vspace{-3mm}\caption{Topological reconstruction variables.} \end{center} \labelf{figura4} \vspace{-5mm} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width=63.5mm]{Figuras/Fig5-1.png} \includegraphics[width=63.5mm]{Figuras/Fig5-2.png}\\ \includegraphics[width=63.5mm]{Figuras/Fig5-3.png} \includegraphics[width=63.5mm]{Figuras/Fig5-4.png} \vspace{-3mm}\caption{The topological variables vs.
the hyperon invariant mass.} \end{center} \labelf{figura5} \vspace{-5mm} \end{figure} The invariant mass distributions for $\Lambda$ and $\bar{\Lambda}$, obtained by implementing the cuts shown in Fig.~\ref{figura5}, are shown in Fig.~\ref{figura6} for the different data sets. \begin{figure}[h!] \begin{center} \includegraphics[width=42.33mm]{Figuras/masas/Fig6-1.png} \includegraphics[width=42.33mm]{Figuras/masas/Fig6-2.png} \includegraphics[width=42.33mm]{Figuras/masas/Fig6-3.png}\\ \includegraphics[width=42.33mm]{Figuras/masas/Fig6-4.png} \includegraphics[width=42.33mm]{Figuras/masas/Fig6-5.png} \includegraphics[width=42.33mm]{Figuras/masas/Fig6-6.png}\\ \includegraphics[width=42.33mm]{Figuras/masas/Fig6-7.png} \includegraphics[width=42.33mm]{Figuras/masas/Fig6-8.png} \vspace{-3mm}\caption{Invariant mass distributions for $\Lambda$ and $\bar{\Lambda}$ obtained with the cuts shown in Fig.~\ref{figura5}. To improve the selection, we need to tune these cuts by maximizing the significance and using PID for the decay particles.} \end{center} \labelf{figura6} \vspace{-5mm} \end{figure} \begin{figure}[bh!] \begin{center} \includegraphics[width=55.5mm]{Figuras/Fig7-1.png} \includegraphics[width=55.5mm]{Figuras/Fig7-2.png} \vspace{-3mm}\caption{(Left) Azimuthal distribution of the baryon decay products, showing the homogeneous and inhomogeneous cases. (Right) Estimation of the polarization, showing that the distributions in the left panel lead to different estimates.} \end{center} \labelf{figura7} \vspace{-5mm} \end{figure} The number of $\Lambda$s ($\bar{\Lambda}$s) is larger for central collisions and decreases for peripheral collisions, as expected from the generated data. Once the hyperons are identified, we compare the azimuthal angle of the decay baryons with the reaction plane angle $\Psi_{RP}$. The event plane orientation is estimated in terms of the first-order event plane angle $\Psi^{(1)}_{EP}$ and its resolution $R^{(1)}_{EP}$, which enter Eq.~(\ref{pol})~\cite{Abelev:2007zk}.
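The core of the measurement can be sketched with a toy Monte Carlo (all numbers below are illustrative assumptions, not simulation output): sample the azimuthal marginal of Eq.~(\ref{dist}), $dN/d\phi^*_p \propto 1 + (\pi/4)\,\alpha_H \mathscr{P}_H \sin(\phi^*_p - \Psi_{RP})$, and recover the injected polarization through Eq.~(\ref{pol}):

```python
import numpy as np

# Toy check of the estimator P_H = (8 / (pi alpha_H)) <sin(phi* - Psi_RP)>:
# generate decay-baryon azimuthal angles with a small injected modulation
# by accept-reject, then invert Eq. (2). All parameters are illustrative.
rng = np.random.default_rng(1)
alpha = 0.642              # Lambda decay parameter
P_true = 0.05              # injected global polarization (assumption)
psi_rp = 0.3               # reaction-plane angle

n = 2_000_000
phi = rng.uniform(0.0, 2.0 * np.pi, n)
c = (np.pi / 4.0) * alpha * P_true          # modulation amplitude
w = 1.0 + c * np.sin(phi - psi_rp)          # azimuthal marginal of Eq. (1)
phi = phi[rng.uniform(0.0, 1.0 + c, n) < w]

P_est = (8.0 / (np.pi * alpha)) * np.mean(np.sin(phi - psi_rp))
assert abs(P_est - P_true) < 0.01
```

In the measured data, $\Psi_{RP}$ is replaced by the estimated $\Psi^{(1)}_{EP}$ and the mean is corrected by the resolution $R^{(1)}_{EP}$.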
As a method to test our protocol for measuring the hyperon polarization, we first obtain a set of hyperons on which we impose an inhomogeneous distribution of the azimuthal angle of the decay baryons, so that we can compare with the unpolarized MB data set and check for differences. For this part of the analysis we use the reaction plane angle $\Psi_{RP}$ assigned by the generator during the transport through the detector; as a next step we plan to use the angle $\Psi^{(1)}_{EP}$ measured with the TPC. Figure~\ref{figura7} shows that the choice of the angular distribution changes the calculated polarization value. In the future, this fixed distribution of the azimuthal angle will be modeled using results obtained for the relative abundances of the $\Lambda$s and $\bar{\Lambda}$s coming from the high-density core region and from the less dense corona, as well as accounting for the intrinsic polarization. We plan to compare with experimental data as well as with data produced with other generators. \section{Summary and outlook} \label{sec: Summary} In this work we have presented a general overview of $\Lambda$ and $\bar{\Lambda}$ reconstruction using the MPD, aimed at measuring the hyperon global polarization at NICA energies and at testing the influence of the relative abundances of these particles coming from the core and corona regions of a peripheral heavy-ion collision. The analysis is motivated by a model that has recently been shown to explain the differences between the experimental $\Lambda$ and $\bar{\Lambda}$ global polarizations found at low energies. In a future analysis we plan to obtain the polarization with the measured event plane angle $\Psi^{(1)}_{EP}$ and to improve the selection of $\Lambda$ and $\bar{\Lambda}$ by considering Particle Identification (PID) for the decay product tracks, as well as by modifying the topological cuts to increase the significance of the data set.
We plan to model the azimuthal angular distributions of the decay baryons to simulate particles coming from the dense and less dense regions of a peripheral heavy-ion collision. We will also compare the results with those obtained using other generators, such as DCM-SMM (Statistical Multifragmentation Model) and DCM-QGSM (Dubna Cascade Quark-Gluon String Model)~\cite{Baznat:2019iom}. \section*{Acknowledgments} I.M. thanks the ICN-UNAM faculty and staff for the support and kind hospitality provided during the development of part of this work. Support for this work has been received in part by UNAM-DGAPA-PAPIIT grant number IG100219 and by Consejo Nacional de Ciencia y Tecnolog\'{\i}a grant numbers A1-S-7655 and A1-S-16215. I.M. acknowledges support from a postdoctoral fellowship granted by Consejo Nacional de Ciencia y Tecnolog\'{\i}a. \bibliographystyle{pepan}
\section{Introduction} Einstein's hole argument shows the importance of looking for observable predictions of a theory: it gives us different solutions $g_{\mu\nu}(x)$, $g'_{\mu\nu}(x)$ for the same initial values and boundary conditions. Nonetheless, the two solutions cannot be distinguished by observation. In this paper we argue that the situation in the quantum domain is different: we have a qualitatively new element -- the superposition of different gravitational fields. There are two different generalizations of the hole argument: one which applies the same diffeomorphism to all fields (which we name c-covariance), and one where different diffeomorphisms may be applied to different fields (q-covariance). A truly background-free theory of quantum gravity, based only on covariant equations, has to be q-covariant, while a c-covariant theory needs some additional structure which connects the different solutions involved in a quasiclassical superpositional state and, in particular, allows one to define a notion of "the same diffeomorphism" for different solutions. Indeed, the equations of GR define the fields involved only modulo diffeomorphism, and there is no well-defined notion of "the same diffeomorphism" for two different solutions of the Einstein equations. As a consequence, many researchers think that quantum gravity has to be a q-covariant theory (see, for example, \cite{Anandan}) or, in other words, background-free. Instead, we argue that a q-covariant theory is not viable: in quantum gravity there will be new, quantum observables which cannot be computed in a q-covariant theory. At first sight, it seems a strange idea to postulate the existence of observables in a theory -- quantum gravity -- which does not yet exist. But a quite simple quantum theory of gravity already exists: non-relativistic multi-particle Schr\"{o}dinger theory with Newtonian interaction potential, or, for short, Newtonian quantum gravity (NQG).
It seems reasonable to assume that NQG has to appear as the non-relativistic limit of full QG. It seems equally reasonable to assume that the observables of NQG have to be defined in full QG as observables, at least for sufficiently weak, non-relativistic gravitational fields. Now, we can find in NQG a simple observable which depends on the relative position of the different gravitational fields involved in a superpositional state, and which cannot be defined in a q-covariant theory. The new observable is a simple transition probability $\rho_{trans}$ in a variant of a double-slit experiment. It measures whether, and to what extent, a superposition is destroyed by the gravitational interaction of the state with a test particle. This quantum observable gives a new, quantum version of the hole argument. Different from Einstein's original argument, it is based on an observable $\rho_{trans}$ which cannot be defined in a q-covariant theory. As a consequence, a q-covariant quantum theory of gravity is unable to predict the observable $\rho_{trans}$ and is, therefore, not viable. We consider various solutions of this problem. We favour the introduction of a common background for the different gravitational fields. This solution requires an additional classical ($\hbar$-independent) physical equation, for example the harmonic condition, which breaks general covariance and defines a common background. \section{The classical hole argument} Let us recall Einstein's hole argument \cite{Einstein} and its resolution in classical general relativity. Let $g_{\mu\nu}(x)$ be a solution of the Einstein equations. Let's consider some bounded spacetime region $\Sigma$ -- the "hole" -- located in the future of the initial time $x^0=t_0$. \footnote{ In Einstein's version it was located in a region without any material processes. The conclusion was, correspondingly, that the gravitational field cannot be uniquely defined by the distribution of matter. This contradicts Mach's principle.
But that GR violates Mach's principle is something we have accepted. The gravitational field has its own degrees of freedom, gravitational waves. Thus, this version of the hole argument is only of historical interest. } Then we consider a nontrivial smooth diffeomorphism $x' = x'(x)$ which is trivial ($x'=x$) outside $\Sigma$. As a consequence of diffeomorphism invariance, the transformed metric $g'_{\mu\nu}(x')$, after replacing $x'$ with $x$, gives a different solution $g'_{\mu\nu}(x)$ of the Einstein equations. It coincides with $g_{\mu\nu}(x)$ outside $\Sigma$ and thus defines a different solution with the same boundary conditions and the same set of initial data at $x^0=t_0$. Thus, at first sight, it seems that in GR the gravitational field cannot be uniquely defined by any set of initial values and boundary conditions, for matter fields as well as for the gravitational field. The solution of this problem is that the two solutions $g_{\mu\nu}(x)$ and $g'_{\mu\nu}(x)$, while different as functions of $x$, cannot be distinguished by observation. What naively looks like an observable -- the value $g_{\mu\nu}(x)$ at a point $x$ -- is not observable. Observables are connected with events, which have to be identified by their relations to other events. For example, the event $x_1$ may be identified by the set of events at $t_0$ which intersect its past light cone. But this same set at $t_0$ defines, on the solution $g'_{\mu\nu}(x)$, another event $x_1'=x'(x_1)$. In this sense, all real, physical observables may be, despite this argument, well-defined by the initial values and boundary conditions. We see that for the viability of the covariant Einstein equations, it is essential that there is no observable $O$ which allows one to distinguish the two solutions $g_{\mu\nu}(x)$ and $g'_{\mu\nu}(x)$ connected via the coordinate transformation $x'=x'(x)$ in the hole $\Sigma$.
Let's note that Einstein's original assumption that there should be no matter inside the hole can be removed. We can add an arbitrary set of matter fields $\phi(x)$. Applying the hole argument, we obtain two sets of solutions $\{g_{\mu\nu}(x),\phi(x)\}$ and $\{g'_{\mu\nu}(x),\phi'(x)\}$. The argument as well as its resolution remains unchanged. Last but not least, the argumentation can be extended without much difficulty from purely classical gravity to semiclassical gravity, that is, to quantum theories on a fixed background. It is known that this requires, because of problems related to particle creation, the consideration of quantum field theory. Despite this, for the purpose of this paper, a single-particle approximation is sufficient. Likewise, for the purpose of this paper, purely spatial diffeomorphisms, which do not change the time variable, are completely sufficient. Therefore we do not have to consider problems in semiclassical gravity related to the choice of the time variable. Thus, we fix some foliation $x = ({\bf x},t)$ and restrict the argumentation to diffeomorphisms which preserve this foliation. That means, we describe a quantum state on a curved background $g_{\mu\nu}({\bf x},t)$ in a single-particle approximation with some wave function $\psi({\bf x},t)$. The semiclassical version of the hole argument is then described by two sets of solutions, $\{g_{\mu\nu}({\bf x},t),\psi({\bf x},t)\}$ and $\{g'_{\mu\nu}({\bf x},t),\psi'({\bf x},t)\}$. Again, the argument can be resolved, as in the purely classical domain, by recognizing that all physical observables coincide. \section{Quasiclassical superposition of gravitational fields} Let's consider now a next step in the direction of quantum gravity -- a simple superposition of two semiclassical states. That means that, to describe this state, a single classical gravitational field (as in the semiclassical theory) is no longer sufficient.
We have to consider two semiclassical field configurations $\{g^1_{\mu\nu}({\bf x},t),\psi_1({\bf x},t)\}$ and $\{g^2_{\mu\nu}({\bf x},t),\psi_2({\bf x},t)\}$. We may write them in Dirac notation as $|g_1,\psi_1\rangle$ and $|g_2,\psi_2\rangle$. Now we can denote our superpositional state as \begin{equation} |\Psi\rangle = |g_1,\psi_1\rangle + |g_2,\psi_2\rangle. \label{eq:super} \end{equation} The classical principle of covariance gives: \begin{principle}[simple covariance] The states $|g,\psi\rangle$ and $|g',\psi'\rangle$ cannot be distinguished by observation. \end{principle} But now we already have two different classical gravitational fields in a single superpositional state. How do we extend the principle of covariance to such quantum superpositions? There are two possibilities: \begin{principle}[c-covariance] The states $|g_1,\psi_1\rangle + |g_2,\psi_2\rangle$ and $|g_1',\psi_1'\rangle + |g_2',\psi_2'\rangle$ cannot be distinguished by observation. \end{principle} Here, we apply the same transformation $x'(x)$ to both semiclassical fields. But there is also another possibility. Let's consider two different transformations $x'(x)$ and $x''(x)$ for our two semiclassical configurations. We obtain another, more restrictive covariance principle: \begin{principle}[q-covariance] The states $|g_1,\psi_1\rangle + |g_2,\psi_2\rangle$ and $|g_1',\psi_1'\rangle + |g_2'',\psi_2''\rangle$ cannot be distinguished by observation. \end{principle} Obviously, q-covariance is stronger than c-covariance: we obtain c-covariance from q-covariance in the particular case $x'(x) = x''(x)$. Which of the two generalizations is the appropriate one? The answer will be given, of course, only by quantum gravity. But already now it is possible to classify quantum theories of gravity based on these principles: in a truly background-independent theory it is impossible to define the notion of "the same diffeomorphism" for two different solutions.
This can be done only if we have the two solutions given in the same system of coordinates $x$. But the GR equations do not define this particular system of coordinates. Thus, even the notion of c-covariance is not defined in GR, and the only appropriate choice for a GR-based theory of quantum gravity seems to be q-covariance. In other words, q-covariance can be considered as another word for background independence of a theory of gravity. Instead, if we have a theory of gravity with a fixed background, we can define the position relative to the background. This allows one to define a notion of "the same diffeomorphism" for different gravitational fields. The background itself is not a q-covariant notion; thus, for a theory with a background, the notion of q-covariance is not natural at all. On the other hand, it is well known that covariant formulations are possible even for non-covariant classical theories like SR or Newtonian theory. The notion of c-covariance seems to be the appropriate generalization of such an artificial, trivial notion of covariance into the quantum domain. Thus, it seems, we can identify c-covariance of a theory of gravity with background dependence. Last but not least, let's note that the background does not need to have a predefined geometry. In particular, an affine structure would be completely sufficient. For example, imposing the harmonic gauge is sufficient to fix a common background, so that the hole argument no longer works, but it does not define a geometry on the background. \section{Gravitational partial position measurement} Let's consider now a simple thought experiment in the domain of quasiclassical quantum gravity. We have one particle, called the "source particle", in a superpositional state of the type $\delta(x-x_l) + \delta(x-x_r)$. This state interacts gravitationally with a second particle, the "test particle". Then the test particle is simply ignored, and we consider the resulting state of the source particle.
There are two limiting cases: If the gravitational interaction is sufficiently strong, the final position of the test particle is a measurement of the position of the source particle. As a consequence, the interference is destroyed and the effective state of the source particle has to be described by a density matrix. If the interaction is too weak, the final position of the test particle does not depend on the position of the source particle, and for the state of the source particle nothing has changed. In intermediate situations we will find a partial destruction of the superposition. The different outcomes of the interaction can be distinguished by subsequent observation of the source particle alone. For example, if the source particle is part of a classical double slit experiment, we can observe the presence or absence of an interference pattern. Let's see now how to compute the remaining ``degree of interference''. Let's denote the initial state of the source particle as $|\phi_+\rangle$, where \begin{equation} |\phi_\pm\rangle = \frac{1}{\sqrt{2}}(|\phi_l\rangle \pm |\phi_r\rangle), \label{eq:Psi} \end{equation} and $|\phi_{l/r}\rangle\approx |\delta(x-x_{l/r})\rangle$ denotes states where the source particle is located near the position $x_{l/r}$ (say, the left or right slit of a double slit experiment). The test particle is prepared initially in the state $|\psi_0\rangle$. Thus, the two-particle system is prepared in the state \begin{equation} |\Psi_{in}\rangle = |\phi_+\rangle\otimes|\psi_0\rangle \label{eq:Psi_in} \end{equation} The gravitational interaction is diagonal in the particle positions. We assume that the mass of the source particle $M$ is much greater than the mass $m$ of the test particle. Thus, in some approximation the interaction leaves the position of the source particle unchanged, and changes only the state of the test particle. This gives, for the position $x_{l/r}$ of the source particle, the wave functions $\psi_{l/r}(x)$. 
At the end of the interaction, we have obtained the state \begin{equation} |\Psi_{out}\rangle = \frac{1}{\sqrt{2}}(|\phi_l\rangle\otimes|\psi_l\rangle + |\phi_r\rangle\otimes|\psi_r\rangle). \label{eq:Psi_out_lr} \end{equation} Now, in the final measurement we measure the eigenstates $|\phi_\pm\rangle$ of the source particle. The test particle is ignored. (That means, we assume its position is measured, but we ignore the result, computing only the trace.) The ``degree of loss of interference'' is defined by the probability $\rho_{trans}$ of the source particle being observed in the state $\phi_-$, and equals \begin{equation} \rho_{trans} = \frac{1}{2}(1 - \Re \langle \psi_l|\psi_r\rangle). \label{eq:rho} \end{equation} In particular, if there is no interaction, so that $|\psi_l\rangle=|\psi_r\rangle$, we have $\rho_{trans}=0$. In the case of a complete measurement, which corresponds to no interference pattern, $\langle\psi_l|\psi_r\rangle=0$, thus $\rho_{trans}=\frac{1}{2}$. We conclude that (at least) the real part of the scalar product $\langle \psi_l|\psi_r\rangle$ is observable. \subsection{Description in terms of Newtonian quantum gravity} Let's now see how to compute the scalar product $\langle\psi_l|\psi_r\rangle$ in Newtonian quantum gravity. We use the approximation $\phi_{l/r}(x) = \delta(x-x_{l/r})$. In this approximation, the two-particle problem reduces to two one-particle problems with a classical source of the gravitational field at $x_{l/r}$. Thus, we have to solve only the two one-particle Schr\"{o}dinger equations \begin{equation} i\partial_t \psi_{l/r}(x,t) = (-\frac{1}{2m} \Delta - \frac{mM}{|x-x_{l/r}|})\psi_{l/r}(x,t) \label{eq:Schroedinger} \end{equation} for the initial value $\psi_{l/r}(x,t_0) = \psi_0(x)$.
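To make this computation concrete, here is a crude numerical sketch in one spatial dimension. Everything in it is an illustrative assumption: units $\hbar = G = 1$, the masses and grid parameters, and a hypothetical softening parameter that regularizes the $1/|x-x_{l/r}|$ potential on the grid.

```python
import numpy as np

# Split-step Fourier evolution of the test-particle wave function under the
# (softened) Newtonian potential of a point source fixed at x_s.
# Units hbar = G = 1; all parameter values are illustrative assumptions.
def evolve(psi0, x, x_s, m=1.0, M=20.0, dt=0.01, steps=200, soft=0.5):
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)
    V = -m * M / np.sqrt((x - x_s) ** 2 + soft ** 2)
    kinetic = np.exp(-1j * dt * k ** 2 / (2 * m))
    half_pot = np.exp(-0.5j * dt * V)
    psi = psi0.astype(complex)
    for _ in range(steps):
        psi *= half_pot                              # half potential step
        psi = np.fft.ifft(kinetic * np.fft.fft(psi)) # full kinetic step
        psi *= half_pot                              # half potential step
    return psi

x = np.linspace(-20.0, 20.0, 1024)
dx = x[1] - x[0]
psi0 = np.exp(-x ** 2)                        # common initial state psi_0
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

psi_l = evolve(psi0, x, x_s=-3.0)             # source located at x_l
psi_r = evolve(psi0, x, x_s=+3.0)             # source located at x_r
overlap = np.sum(np.conj(psi_l) * psi_r) * dx
rho_trans = 0.5 * (1.0 - overlap.real)        # partial loss of interference
```

Since the two potentials pull the test particle in opposite directions, the two evolved wave functions separate and the overlap decays, giving a $\rho_{trans}$ between the two limiting cases discussed above.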
Then we can compute the transition probability $\rho_{trans}$ by the well-defined expression \begin{equation} \rho_{trans} = \frac{1}{2}(1 - \Re \int \overline{\psi}_l(x)\psi_r(x) d^3x). \label{eq:transNQG} \end{equation} \subsection{Description in terms of semiclassical general relativity} Now, we would like to compute the first GR corrections to our well-defined NQG observable $\rho_{trans}$. There appear, of course, the usual problems of one-particle theory in the relativistic domain and all the other problems in the domain of semiclassical quantum theory. But these problems are solvable using the standard techniques of semiclassical QFT. Let's therefore assume that, in an appropriate approximation (which ignores particle creation and so on), using for simplicity a fixed foliation, we can find, for a given gravitational field $g_{\mu\nu}({\bf x},t)$, a corresponding one-particle wave function $\psi({\bf x},t)$. Then, we use the Schwarzschild solutions $g^{l/r}_{\mu\nu}({\bf x},t)$ for the source particle located at $x_{l/r}$ and obtain two pairs of solutions $\{g^l_{\mu\nu}({\bf x},t),\psi_l({\bf x},t)\}$ and $\{g^r_{\mu\nu}({\bf x},t),\psi_r({\bf x},t)\}$. So far, everything seems fine. It remains to compute the scalar product. The straightforward formula would be \begin{equation} \rho_{trans} = \frac{1}{2}(1 - \Re \int \overline{\psi}_l(x)\psi_r(x)d\mu), \label{eq:transGR} \end{equation} where we leave the question of the definition of the measure $d\mu$ open. But now let's consider the covariance properties of our expression for $\rho_{trans}$. If we have correctly managed the transformation rules for $\psi$ and $d\mu$ under changes of coordinates, we obtain, without problems, weak covariance: \begin{equation} \int \overline{\psi}_l(x)\psi_r(x) d\mu = \int \overline{\psi}'_l(x)\psi'_r(x) d\mu' \label{eq:weakPsi} \end{equation} But strong covariance fails completely.
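This failure can be illustrated numerically already in the flat setting: take $\psi_l=\psi_r$, so that $\rho_{trans}=0$, and then apply two \emph{different} coordinate transformations, one to each wave function. In the sketch below, Gaussian packets stand in for the wave functions and plain shifts stand in for the deformations $x'(x)$ and $x''(x)$; all numerical values are illustrative assumptions.

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 4000)
dx = x[1] - x[0]

def packet(center):
    # Normalized Gaussian standing in for a wave function of compact support.
    p = np.exp(-(x - center) ** 2)
    return p / np.sqrt(np.sum(p ** 2) * dx)

def rho_trans(psi_a, psi_b):
    # rho = (1 - Re <psi_a|psi_b>) / 2, discretized on the grid.
    return 0.5 * (1.0 - np.real(np.sum(np.conj(psi_a) * psi_b) * dx))

psi = packet(0.0)
rho = rho_trans(psi, psi)           # identical wave functions: rho = 0

# Apply two different "diffeomorphisms" (here plain shifts), one per branch:
psi_l_shifted = packet(-10.0)       # psi(x'),  with x'  = x + 10
psi_r_shifted = packet(+10.0)       # psi(x''), with x'' = x - 10
rho_tilde = rho_trans(psi_l_shifted, psi_r_shifted)   # overlap ~ 0
```

With the supports pushed apart, the overlap vanishes and $\tilde{\rho}_{trans}$ jumps from $0$ to $\frac{1}{2}$, although each branch was transformed by a perfectly admissible coordinate change.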
For example, assume $\psi_l(x)=\psi_r(x)$, thus $\rho_{trans} = 0$, with an ideal, unchanged interference picture. Assume that $\psi_l(x)$ has finite support $U$. Now consider deformations $x'=x'(x), x''=x''(x)$ such that $U'\cap U''=\emptyset$. As a consequence, $\int \overline{\psi}'_l(x)\psi''_r(x)d\mu = 0$ (whatever the measure $d\mu$), and we obtain \begin{equation} \tilde{\rho}_{trans} = \frac{1}{2}(1 - \Re \int \overline{\psi}_l(x')\psi_r(x'')d\mu) = \frac{1}{2} \neq \rho_{trans}, \label{eq:transFalse} \end{equation} thus, no interference picture at all. We conclude that our NQG observable $\rho_{trans}$ is not strongly covariant. Let's remember now that the equations of general relativity define the metric $g_{\mu\nu}$ only modulo arbitrary coordinate transformations. So we simply don't know which of the two choices, $\overline{\psi}_l(x)\psi_r(x)d\mu$ or $\overline{\psi}_l(x')\psi_r(x'')d\mu$, we have to use to define $\rho_{trans}$. Thus, the NQG observable $\rho_{trans}$ cannot be computed using only the equations of GR together with semiclassical quantum theory on a fixed GR background. In geometric language, the two different solutions $\{g^l_{\mu\nu}({\bf x},t),\psi_l({\bf x},t)\}$ and $\{g^r_{\mu\nu}({\bf x},t),\psi_r({\bf x},t)\}$ of the semiclassical theory live on different manifolds. The scalar product of functions defined on different manifolds is undefined. To define it, we need an additional object -- a map between the two manifolds. \section{Consequences} As a consequence of this consideration, we conclude that a strongly covariant theory of gravity (in other words, a background-independent quantum theory of gravity) cannot have a correct NQG limit and, therefore, is not viable. Let's consider some possible objections. First, maybe in full quantum gravity $\rho_{trans}$ is not an observable, but becomes observable only in the limit?
But, whatever the observables of full quantum gravity, we should be able to derive the observables we can observe in the domain of application of NQG. Note that, at least in some sense, we already today can and do observe $\rho_{trans}$. Indeed, we observe interference patterns. Now, the particles which show the interference patterns always interact gravitationally with all other particles. Of course, this interaction is far too small to lead to different functions $\psi_{l/r}(x)$. That's why the NQG prediction gives $\rho_{trans}=0$. This prediction is in full agreement with the fact that we can observe interference patterns. But, even if the prediction $\rho_{trans}=0$ is quite trivial, the fact that there is agreement with observation even today is quite nontrivial, especially given that $\rho_{trans}$ is completely undefined in a fully covariant theory. Note also that the difference between the two results $\rho_{trans}$ and $\tilde{\rho}_{trans}$ is very big, as big as possible. In this sense, $\rho_{trans}$ is not only observable, but easily observable. Note also that the problem with the computation of $\rho_{trans}$ is not a problem for strong, relativistic gravitational fields. The problem appears whenever we consider a superposition of gravitational fields -- even for gravitational fields which can be approximated, as accurately as you like, by a Newtonian potential. And, at least in principle, even if the gravitational field is as weak as you like. It is, even in this limit, possible to choose $x'=x'(x), x''=x''(x)$ so that $\int \overline{\psi}'_l(x)\psi''_r(x)d\mu$ becomes zero, in fatal and obvious disagreement with the observation of interference patterns. But, then, maybe Newtonian quantum gravity is not a correct limit of full quantum gravity? But even this does not help. Last but not least, the conceptual problem remains the same even if we consider the non-gravity limit of NQG, which is simply non-relativistic quantum mechanics.
Even in this limit we obtain $\rho_{trans}=0$ as the value for transitions caused by gravity, and even in this limit we can compare with observation and obtain full agreement. Moreover, some experiments (with energy levels of neutrons in the gravitational field of the Earth), which require Newtonian quantum gravity, have already been done, and they have shown agreement with the (quite simple) predictions \cite{neutrons}. \subsection{The alternative: A preferred background} The simple way to fix this problem is to fix a preferred background. This can be done using a gauge condition which fixes a preferred system of coordinates. The rule for the computation of $\rho_{trans}$ is then the following: We have to transform the solutions $\{g^{l/r}_{\mu\nu},\psi_{l/r}\}$ into the preferred coordinates, so that $\psi_{l}({\bf x},t)=\psi_{r}({\bf x},t)$ before the interaction, $t<t_0$. Then we can use the wave functions $\psi_{l/r}$, in the preferred coordinates, to compute the scalar product $\langle\psi_l|\psi_r\rangle$. This prescription obviously depends on the choice of the gauge condition. Different choices of the gauge condition lead to different predictions for $\rho_{trans}$. It seems, at least in principle, possible to construct a sufficiently artificial coordinate condition such that this prescription leads to $\rho_{trans}\neq 0$ even for the usual interference patterns we observe today. (Of course, such a condition has to be very strange; in particular, it has to assign very different preferred coordinates to two metrics which are very close to each other. But in principle this would be possible.) Thus, the choice of the gauge condition, in combination with the prescription for the computation of $\rho_{trans}$ above, is a physical choice. The gauge condition is, in this sense, a physical equation, no longer an arbitrary human choice. Therefore, it seems more appropriate to name it an equation.
Since it defines preferred coordinates, and ``coordinates'' different solutions, we suggest naming it the ``coordinate equation''. Now, once it is clear that we need a new physical equation which fixes a preferred system of coordinates, we have to postulate this equation. The harmonic coordinate equation \begin{equation} \partial_\mu (g^{\mu\nu}(x)\sqrt{-g}) = 0 \label{eq:harmonic} \end{equation} seems to be a favourable choice for an additional physical equation. In harmonic coordinates, the hole problem disappears. Indeed, harmonic coordinates are quite appropriate for the initial value problem, so that local uniqueness can be proven \cite{Choquet-Bruhat} \cite{Choquet-Bruhat1}. The harmonic condition may be introduced as an additional physical equation into GR. But in this case, we no longer have a Lagrange formalism for all equations of the theory. If we want to obtain the harmonic equation as an Euler-Lagrange equation, we have to add some gauge-breaking term, which also modifies the Einstein equations themselves. There are two alternative metric theories of gravity following this route: First, there is the ``relativistic theory of gravity'' (RTG) \cite{Logunov, Logunov1}, a theory of gravity with a massive graviton, which gives the Einstein equation in the limit of zero graviton mass $m_g\to 0$. Then there is the ``general Lorentz ether theory'' (GLET) \cite{Protvino, glet, clm}, with two additional parameters $\Xi, \Upsilon$, which also gives the Einstein equation for $\Xi, \Upsilon \to 0$. The main difference between these two alternatives is their metaphysics: RTG is a bimetric theory on a Minkowski background, while GLET is an ether theory with classical Newtonian spacetime as the background. A fixed Minkowski background, which allows one to compare different gravitational fields and, therefore, to compute $\rho_{trans}$, also exists in string theory.
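Returning to the harmonic condition (\ref{eq:harmonic}), a short reminder of why these coordinates are called harmonic may be helpful: applying the curved-space d'Alembertian to the coordinate functions $x^\mu$ themselves gives
\begin{equation*}
\Box_g x^\mu \;=\; \frac{1}{\sqrt{-g}}\,\partial_\nu\!\left(\sqrt{-g}\, g^{\nu\rho}\,\partial_\rho x^\mu\right) \;=\; \frac{1}{\sqrt{-g}}\,\partial_\nu\!\left(\sqrt{-g}\, g^{\nu\mu}\right),
\end{equation*}
so that (\ref{eq:harmonic}) states precisely that each coordinate function is a harmonic function, $\Box_g x^\mu = 0$.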
\subsection{Is a preferred background necessary?} While a preferred background solves the problem with the computation of $\rho_{trans}$, it seems natural to ask if such a background is the only way to solve the problem of defining $\rho_{trans}$. Let's note that our considerations work only for gravitational fields which may be superposed. There may be superselection rules which forbid some superpositions, so that all gravitational fields split into classes of fields which may be superposed with each other. We do not consider this possibility. Thus, we assume that all gravitational fields may be superposed with each other. In this case, let's fix a single gravitational field, say, the vacuum metric $\eta_{\mu\nu}$, as a reference field. Let's then take another, arbitrary, field $g_{\mu\nu}(y)$. Then, consider pairs of solutions $\{\eta_{\mu\nu}(x),\varphi(x)\}$ and $\{g_{\mu\nu}(y),\psi(y)\}$. Assume that the transition probability \begin{equation} \rho_{trans} = \frac{1}{2}(1 - \Re \langle\varphi|\psi\rangle) \label{eq:trans} \end{equation} is well-defined. Then we can, obviously, use $\varphi(x)=\delta(x-x_0)$ defined on the $\eta_{\mu\nu}$ background as a test function and obtain, for real-valued functions $\psi(y)$ on the curved background, a corresponding real-valued function $\tilde{\psi}(x_0)$ \begin{equation} \tilde{\psi}(x_0) = \Re \langle\delta(x-x_0)|\psi\rangle = 1 - 2 \rho_{trans}. \label{eq:test} \end{equation} In other words, we can compute a corresponding representation of the wave function $\tilde{\psi}(x)$ on the flat background. In the simplest case, this representation is simply defined by some map $y(x)$ of the underlying spaces, so that $\tilde{\psi}(x) = \psi(y(x))$, or \begin{equation} \langle\varphi|\psi\rangle = \int \overline{\varphi}(x)\psi(y(x))d^3x.
\label{eq:scalarProduct} \end{equation} In principle, there may be more general situations: The measurement of the background position on $\eta_{\mu\nu}$ may be incompatible with the position measurement on the curved background. But even in this case, the measurement of the background position on a common reference background $\eta_{\mu\nu}$ is well-defined, with the corresponding wave function $\tilde{\psi}(x)$. \section{One possibility to save the background-independent approach} Let's briefly consider one idea which, possibly, could be tried out to save the background-independent approach. Instead of considering only the interaction between the source particle and the test particle, let's consider the whole double slit experiment, from the beginning to the end, in its most classical form, that is, with a common starting point for the source particle and a common final point of the source particle on the screen. In this case, neither in the initial state nor in the final state do we have superpositional states of gravitational fields. Thus, the superpositional hole argument cannot be used, once no superposition is present. Assume all is fine with this argument. What follows? Even in our simple double slit experiment, the splitting can be done as far in the past as we like. And the final measurement can be done as far in the future as we like. If we have to trace all superpositional states back to a common, non-superpositional origin, as well as forward to a superposition-free final state, we will end up with some sort of S-matrix-like theory, which does not allow us to compute anything at finite distances. Now, S-matrix theory is, without doubt, of high value for pragmatic computations for scattering experiments in particle accelerators. But we should clearly reject the idea that a fundamental theory of everything may be such an S-matrix theory.
Even for scattering experiments in particle accelerators the S-matrix gives only a fixed level of accuracy of the predictions. For more accurate predictions, it becomes necessary to compute corrections related to the finite size of our measurement devices. A theory which, even in principle, is unable to do this is not viable as a theory of the universe. Moreover, an S-matrix approach is obviously completely useless for cosmological considerations. What we observe, we observe from a very tiny part of the universe, our Solar system. Thus, a theory of everything should be able to compute predictions at finite distances, in space as well as in time. A theory which cannot do this in principle is simply not acceptable as a theory of the universe. The aim of the search for quantum gravity and theories of everything is not the computation of S-matrices: These can be computed with high enough accuracy without them. The SM is sufficient for this purpose. The aim of such theories is to obtain a common conceptual scheme for everything in the universe. And a theory which gives only S-matrices obviously fails to give this scheme for our local, human observations. Thus, we need a theory which allows us to compute predictions at finite points in space and time. And a quantum theory of gravity which describes what happens here and now cannot live without superpositions of gravitational fields. The idea to trace back such superpositions until we find a superposition-free state in the past should, therefore, be rejected. Against attempts to deny our new observable the right to exist in full quantum gravity we can, last but not least, propose a general argument: They do not take the lesson taught by quantum theory seriously. Our new observable $\rho_{trans}$ is a new, purely quantum, observable. In classical theory, there is nothing comparable: There are no superpositions which may be destroyed by observation. Such qualitatively new observables also offer some deep insight into nature.
And we should not ignore this lesson, should not close our eyes, but look at these new, quantum observables and try to find out if they allow us to see more. Note also that nothing of the beauty of the classical symmetry is lost if we find that the symmetry in the quantum case is different, if we find that quantum observables give us, sometimes, more and different information. The qualitatively new, quantum observable we have found in this paper allows us to distinguish states which would have been indistinguishable using only the old, classical observables. It allows us to see the common background -- an object which is hidden from observation as long as we can use only classical observables. \section{Discussion} Our argument suggests that q-covariant, background-independent approaches to quantum gravity are not viable. But, of course, the history of various impossibility theorems (say, von Neumann's impossibility theorem for hidden variable theories, which has been shown to be nonsensical by pilot wave theories) as well as the history of Einstein's original hole argument suggest that we should not take impossibility arguments of the sort given here too seriously. Therefore, it should not be expected that quantum gravity researchers give up their hope for a background-free quantum theory of gravity. Moreover, I suspect that they do not find the argument very impressive. Last but not least, the definition of the observables is known to be a complicated issue in quantum gravity. For example, Smolin \cite{Smolin} notes that ``\ldots one cannot define the physical observables of the theory without solving the dynamics'', and Thiemann \cite{Thiemann} writes ``\ldots one must find a complete set of Dirac observables (operators that leave the space of solutions invariant) which is an impossible task to achieve even in classical general relativity.'' One reason for the attractiveness of the background-independent approach is, obviously, its philosophical background.
It goes back to the position of Leibniz, who proposed arguments for a relational view, against the absolute notion of space and time proposed by Newton. A nice introduction, from the point of view of the modern background-independent approach, can be found in \cite{Smolin}: \begin{quote} ``Leibniz's argument for relationalism was based on two principles, which have been the focus of many books and papers by philosophers to the present day. The \emph{principle of sufficient reason} states that it must be possible to give a rational justification for every choice made in the description of nature. \ldots A theory that begins with the choice of a background geometry, among many equally consistent choices, violates this principle.\ldots One way to formulate the argument against background spacetime is through a second principle of Leibniz, \emph{the identity of the indiscernible}. This states that any two entities which share the same properties are to be identified. Leibniz argues that were this not the case, the first principle would be violated, as there would be a distinction between two entities in nature without a rational basis. If there is no experiment that could tell the difference between the state in which the universe is here, and the state in which it is translated 10 feet to the left, they cannot be distinguished. The principle says that they must then be identified. In modern terms, this is something like saying that a cosmological theory should not have global symmetries, for they generate motions and charges that could only be measured by an observer at infinity, who is hence not part of the universe.'' \end{quote} Now, we believe that relationalism is inherently wrong, because it is based on a wrong, positivistic understanding of scientific knowledge, and should be rejected. Following Popper \cite{PopperCR, PopperLSD}, we believe in the priority of theory: Theories are free guesses about Nature, not bound by principles following from observations.
Based on the theory, we derive what is observable as well as predictions about these observables. Only after this do observations become important, as a method to falsify some of the theories. Relationalism, instead, has to be rejected on the same grounds as pre-Popperian logical positivism, as a variant of the priority of observation. But we would like to finish this paper with a completely different argument. Namely, the argumentation against a background based on relationalism completely fails if we take into account our new, quantum observable. Indeed, this new observable gives, as we have shown, sufficient reason for the introduction of a common background: The background solves the problem of computing a prediction for the new observable, and there is no obvious alternative way to solve this problem. It is also no longer possible to apply the principle of the ``identity of the indiscernible'' against the background. Indeed, superpositional states with different values of $\rho_{trans}$ are no longer indiscernible. And, last but not least, arguments in favour of relationalism in general fail to support the background-independent approach: The background, as constructed here, itself defines a \emph{relation} between physical objects -- a relation between two gravitational fields which are part of one superpositional state.
\section{Introduction} \label{sec:intro} \noindent There has been a renewal of interest in distributed control during the past few years, mainly due to the complexity of modeling and analyzing large-scale systems. The unprecedented size of the produced data and required computations has forced policy-makers to parallelize the computational workload into sub-tasks to effectively exploit the problem's potential underlying structure. In such scenarios, high-dimensional collective tasks are formed from local decisions of each member that lead towards a global system-level decision, where the synthesis problem can possibly be intertwined or divided among agents. From a modeling viewpoint, the distributed characteristic of the problem can originate from the underlying interconnections among dynamics, the objectives of subsystems, or simply the sparsity of the information exchange. Either way, standard control methods are not directly applicable due to shortcomings in scalability\footnote{The $\mathcal{O}(n^3)$ complexity of solving the \textit{Algebraic Riccati Equation} \cite{bini2011numerical} and the scalability issues of \textit{Model Predictive Control} \cite{camponogara2002distributed} are among such examples.}, and there has been a persistent attempt in the literature to exploit the structure of the problem and, thus, make control algorithms more computationally tractable. \noindent \\ In general, structured control synthesis is an NP-hard constrained optimization problem in nature \cite{borrelli2008distributed, maartensson2009gradient}. Therefore, the main focus in distributed control is not necessarily to find the optimal policy, but an efficient (possibly suboptimal) control mechanism that benefits from the inherent structure of the system.
Meanwhile, recent advances in measurement and sensing technologies have led to the availability of an unprecedented amount of data generated by a variety of physical, social, biological, and transportation systems \cite{hou2017data}. This so-called big data revolution has resulted in the development of effective computational tools that utilize the data generated by a dynamic system to reason about reduced order representations of this data, subsequently utilized for classification or prediction on the underlying model. Such techniques have been particularly useful when the derivation of models from first principles is restrictively complex or infeasible, and hence, it seems natural to leverage such data-driven methods for high-dimensional complex systems \cite{brunton2019data}. \noindent \\ In this work, we propose the LQR-based \ac{D3PI} algorithm to iteratively learn a set of stabilizing controllers for unknown but identical linear dynamical systems that are connected with a network topology induced by couplings in their performance indices that can also be cast as a particular form of a cooperative game-theoretic setup. Furthermore, data-driven control of (unknown) identical systems is motivated by applications such as formation flight \cite{stipanovic2004decentralized} or monitoring networked cameras \cite{borrelli2005hybrid} where the agents possess corresponding dynamics whose exact models are inaccessible due to model perturbations and uncertainties. Notably, to avoid annihilation of the consensus-based nature of the distributed control setup, the option of single-agent learning is ruled out in this framework. We assume that the feedback signal for stabilization is available to each system locally through the same network topology. Then, we find a compound data-driven feedback mechanism for the entire networked system which is trained based on data collected from only a small portion of the network. 
In particular, considering a subgraph including the agent corresponding to the node with maximum degree, we require temporary feedback links within this subgraph in order to iteratively learn a stabilizing structured controller for the entire network that is optimal for the subgraph---thus, generally suboptimal for the entire network (Figure \ref{fig:random_graph}). Finally, the compound feedback for the entire network is constructed based on this locally optimal policy. We provide extensive analysis on the convergence and stability of our proposed distributed policy, followed by comments on its suboptimality with respect to the optimal (unstructured) LQR controller with the same design parameters. We also give a simulation on distributed control of turbocharged diesel engines that showcases the usefulness of \ac{D3PI} in a practical setting. \begin{figure}[t] \centering \includegraphics[width=0.65\columnwidth]{Images/schematic.pdf} \caption{Addition of auxiliary links (dashed red) to the subgraph $\mathcal{G}_d$ in D3PI during the policy learning phase. The size of the subgraph depends on the maximum degree of the original graph $\mathcal{G}$.} \label{fig:random_graph} \end{figure} \noindent \\ The remainder of the paper is organized as follows. We provide an overview of the related literature in \Cref{sec:relWork}. In \Cref{sec:problem_setup}, we introduce the problem setup and the motivation behind our work. In \Cref{sec:main_algorithm}, we present and scrutinize the \ac{D3PI} algorithm. We provide theoretical analysis of our method in \Cref{sec:analysis}. Illustrative simulations are provided in \Cref{sec:simulation}, followed by concluding remarks in \Cref{sec:conclusion}. \section{Related Work} \label{sec:relWork} \noindent Distributed control of large-scale systems is a well-established area of research.
The roots of the field trace back to the socioeconomics literature of the 1970's \cite{mcfadden1969controllability}, and early works in the control literature began to emerge later in the same decade~\cite{wang1973stabilization}. The inspiration for these types of works was that the presupposition of centrality fails to hold due to the lack of either a central intelligence or computational capability~\cite{sandell1978survey, ioannou1986decentralized}. Subsequently, stability conditions were derived for multi-channel linear systems \cite{corfmat1976decentralized}, solidifying the research one step further. Fast forward a few decades, sufficient graph-theoretic conditions were provided for the stability of formations comprised of identical vehicles \cite{fax2004information} and, along the same lines, graph-based distributed controller synthesis was further examined independently in works such as \cite{massioni2009distributed, deshpande2012sub, borrelli2008distributed, maartensson2009gradient, wang2017distributed, lewis2013cooperative}. The topic was also studied from a spatially distributed control viewpoint~\cite{bamieh2002distributed, motee2008optimal} and a compositional layered design approach~\cite{chapman2017data, alemzadeh2018influence}. Moreover, from an agent-level perspective, the problem has been tackled for both homogeneous systems \cite{massioni2009distributed, borrelli2008distributed, wang2017distributed} and, more recently, heterogeneous ones \cite{sturz2020distributed}. \noindent \\ The complete knowledge of the underlying system model is a common assumption in the literature on distributed control, where the goal is to find a distributed feedback mechanism that follows an underlying network topology. However, derivation of models from first principles could be restrictive when the system is highly complex and uncertain.
Such restrictions also hold for parametric perturbations that occur due to inefficient modeling or other unknown design factors.\footnote{For instance, even the LQR solution, with its strong input robustness properties, may have small stability margins for general parameter perturbations \cite{dorato1994linear}.} Robust synthesis approaches could alleviate this issue if the perturbations follow specific models, in both centralized \cite{khargonekar1990robust} and distributed \cite{li2012distributed} cases. However, if the original estimates of system parameters are faulty or the perturbations violate the presumed model, then both the stability and optimality of the proposed feedback mechanisms will be shattered. Data-driven control, on the other hand, circumvents this drawback by performing the modeling, optimization, and synthesis using only data when the physical model is unavailable. From an asymptotic analysis point of view, such machinery is studied under adaptive control and system identification \cite{ljung1999system, aastrom2013adaptive}. Later, more research was conducted in a discrete non-asymptotic fashion, where control and analysis are performed on batches of collected data \cite{van2020data, alaeddini2018linear, dean2019sample} or through an online iterative procedure \cite{talebi2020online, oymak2019non}. Besides, in regard to the adaptive nature of such algorithms, there is a close connection between online data-driven control and reinforcement learning \cite{sutton1992reinforcement, lewis2012reinforcement, bradtke1994adaptive}. The latter works extend policy iteration \cite{hewer1971iterative} to approximate LQR by avoiding the direct solution of the \ac{ARE}; yet most of them fail to scale to high dimensions. \noindent \\ Control and estimation of large-scale systems is, in essence, a more complicated problem than single-agent (centralized) control because of higher levels of uncertainty, dimensionality, and modeling errors.
Nevertheless, model-free large-scale analysis---as one tool to address such setbacks---is still an immature research area. Similar ideas have been investigated in the \ac{MARL} community, mainly presented in a more general \ac{MDP} framework \cite{busoniu2008comprehensive, zhang2019multi}, but the application of most \ac{MARL}-based algorithms remains challenging as the state-action space grows exponentially with the number of agents \cite{menda2018deep}. From a control-theoretic perspective, \cite{luo2019natural} addresses this issue using ideas from mean-field multiagent systems under the key assumption of partial exchangeability. In addition, \cite{alemzadeh2019distributed} provides a decentralized LQR algorithm based on network consensus that demonstrates low complexity, but potentially very high cost. Also, an SDP projection-based analysis is studied in \cite{chang2020distributed}, where each agent is shown to attain sublinear regret compared with the best fixed controller in hindsight. The problem has also been considered from a game-theoretic standpoint \cite{li2017off, talebi2019distributed, nowe2012game}, where the agents possess conflicting objectives. \noindent \\ In contrast, we propose and analyze the \ac{D3PI} algorithm, which iteratively learns the control components needed to design a distributed controller for the entire network. We also show that, depending on the structure of the problem, our scheme is not only computationally efficient but also more applicable when model-based control in high dimensions is practically infeasible. The structure of our distributed control design is inspired by classic ideas previously studied in \cite{borrelli2008distributed} (also extended to discrete time in \cite{wang2017distributed}).
\noindent \\ \textbf{Notations.} We denote by $\mathbb{R}$ and $\mathbb{N}$ the sets of real and natural numbers, respectively, let $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$, and write $\mathbb{S}$ for the set of symmetric matrices. A column vector with $n$ elements is referred to as $\mbox{$\bm v$} \in\mathbb{R}^n$, where $\mbox{$\bm v$}_i$ represents the $i$th element of $\mbox{$\bm v$}$. The matrix $M\in \mathbb{R}^{p\times q}$ contains $p$ rows and $q$ columns, with $M_{ij}$ denoting the element in the $i$th row and $j$th column of $M$. The square matrix $N\in\mathbb{S}^{n\times n}$ is \emph{symmetric} if $N^{\top}=N$, where $N^\top$ denotes the \emph{transpose} of the matrix $N$. The operator $\mathrm{diag}(\cdot)$ makes a square diagonal matrix out of the elements of its argument, and the operator $\mathrm{vech}(\cdot)$ takes a square matrix and stacks its upper triangular half (including the diagonal) into a single vector. The zero and identity matrices are denoted by $\textbf{0}$ and $\mathrm{I}_n$, respectively. We write $N\succ0$ ($\succeq0$) when $N$ is a positive-(semi)definite matrix, i.e., $\x^\top N \x > 0$ ($\geq0$) for all $\x\neq0$; a matrix is positive definite if and only if all of its leading principal minors are positive. The $i$th eigenvalue and the spectral radius of $M$ are denoted by $\lambda_i(M)$ and $\rho(M)$, respectively, and $M$ is Schur stable if $\rho(M)<1$. To simplify the vector notation, we use a semicolon (;) to concatenate column vectors, e.g., $[\mbox{$\bm v$}^{\top}\ \mbox{$\bm w$}^{\top}]^{\top} = [\mbox{$\bm v$};~\mbox{$\bm w$}]$. We call the pair $(A,B)$ \emph{controllable} if and only if the controllability matrix $\mathcal{C}=[B\quad AB\ \dots\ A^{n-1}B]$ has full rank, where $n$ is the dimension of the system.
$A\otimes B\in\mathbb{R}^{p_1 p_2\times q_1 q_2}$ refers to the \emph{Kronecker product} of $A\in\mathbb{R}^{p_1\times q_1}$ and $B\in\mathbb{R}^{p_2\times q_2}$, and the mixed-product property states that $(A \otimes B)(C \otimes D) = (A C) \otimes (B D)$ for matrices $A$, $B$, $C$, and $D$ with compatible dimensions. For a block matrix $\textbf{F}$, we denote by $[\textbf{F}]_{r k}$ the block component in the $r$th block row and $k$th block column, with appropriate dimensions. We denote by $\mathbb{I}_{\mathcal{E}}$ the \emph{indicator function} of an event $\mathcal{E}$, defined as $\mathbb{I}_{\mathcal{E}}=1$ if $\mathcal{E}$ holds and $\mathbb{I}_{\mathcal{E}}=0$ otherwise. A graph is characterized by $\mathcal{G}=(\mathcal{V}_\mathcal{G},\mathcal{E}_\mathcal{G})$, where $\mathcal{V}_\mathcal{G}$ is the set of nodes and $\mathcal{E}_\mathcal{G}\subseteq \mathcal{V}_\mathcal{G}\times\mathcal{V}_\mathcal{G}$ denotes the set of edges. An edge exists from node $i$ to $j$ if $(i,j)\in \mathcal{E}_\mathcal{G}$; this is also specified by writing $j\in\mathcal{N}_i$, where $\mathcal{N}_i$ is the set of neighbors of node $i$. We denote the maximum degree of $\mathcal{G}$ by $d_{\max}(\mathcal{G})$. The complete graph created from the nodes in $\mathcal{V}_{\mathcal{G}}$ is denoted by $\mathcal{K}(\mathcal{V}_{\mathcal{G}})$. Finally, the graph can be represented by various matrices, such as the \emph{graph Laplacian} $\mathcal{L}_\mathcal{G}$ or the \emph{adjacency matrix} $\mathcal{A}_\mathcal{G}$. \section{Problem Setup} \label{sec:problem_setup} Consider a network of identical agents whose interdependencies are captured by a common network-level objective function.
In particular, we assume that the system contains $N$ agents forming a graph $\mathcal{G}=(\mathcal{V}_\mathcal{G},\mathcal{E}_\mathcal{G})$, where each node in $\mathcal{V}_\mathcal{G}$ represents a linear discrete-time decoupled dynamical system \begin{equation} \begin{aligned} \label{eq:agent_i_dynamics} \x_{i,t+1} = A \x_{i,t} + B \mbox{$\bm u$}_{i,t}, \hspace{10mm} i = 1,2,\dots,N \end{aligned} \end{equation} with $\x_{i,t}\in\mathbb{R}^{n}$ and $\mbox{$\bm u$}_{i,t}\in\mathbb{R}^{m}$ denoting the states and inputs of agent $i$ at time-step $t$, respectively. Also, $A\in\mathbb{R}^{n\times n}$ and $B\in\mathbb{R}^{n\times m}$ are unknown system parameters, and we assume that the pair $(A,B)$ is controllable. The formulation can be compactly presented as \begin{align} \label{eq:dynamics_N_agents} \hat{\mathbf{x}}_{t+1} = \hat{\mathbf{A}} \hat{\mathbf{x}}_t + \hat{\mathbf{B}} \hat{\mathbf{u}}_t, \end{align} where $\hat{\mathbf{x}}_t\in\mathbb{R}^{Nn}$ and $\hat{\mathbf{u}}_t\in\mathbb{R}^{Nm}$ contain the states and inputs of the entire networked system, \begin{align*} \hat{\mathbf{x}}_t = \big[ \x_{1,t};\dots;\x_{N,t} \big], \quad \hat{\mathbf{u}}_t = \big[ \mbox{$\bm u$}_{1,t};\dots;\mbox{$\bm u$}_{N,t} \big], \end{align*} with $\hat{\mathbf{A}}\in\mathbb{R}^{Nn\times Nn}$ and $\hat{\mathbf{B}}\in\mathbb{R}^{Nn\times Nm}$ defined in block-diagonal forms $\hat{\mathbf{A}} = \mathrm{I}_{N} \otimes A$ and $\hat{\mathbf{B}} = \mathrm{I}_{N} \otimes B$. The interconnections of these agents are captured by the set of edges $\mathcal{E}_\mathcal{G}\subseteq \mathcal{V}_\mathcal{G}\times\mathcal{V}_\mathcal{G}$, which provides the communication links for distributed feedback controller design. We do not assume that $\mathcal{G}$ is necessarily connected; the reason becomes apparent in the next section. Let $\mathcal{N}_i$ denote the set of neighbors of node $i$ in $\mathcal{G}$ (excluding itself).
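As a quick numerical sanity check (a sketch with arbitrary placeholder matrices, since the paper's $A$ and $B$ are unknown), the compact networked form \eqref{eq:dynamics_N_agents} can be assembled with Kronecker products and verified against the agent-wise dynamics \eqref{eq:agent_i_dynamics}:

```python
import numpy as np

# Placeholder sizes and matrices (A, B are unknown in the paper; random here)
N, n, m = 4, 2, 1
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

A_hat = np.kron(np.eye(N), A)   # \hat{A} = I_N (x) A, block diagonal
B_hat = np.kron(np.eye(N), B)   # \hat{B} = I_N (x) B

x = rng.standard_normal((N, n))  # row i holds the state of agent i
u = rng.standard_normal((N, m))  # row i holds the input of agent i

# Agent-wise updates agree with the compact networked form
x_next = np.concatenate([A @ x[i] + B @ u[i] for i in range(N)])
assert np.allclose(x_next, A_hat @ x.reshape(-1) + B_hat @ u.reshape(-1))

# Mixed-product property: (I_N (x) A)(I_N (x) B) = I_N (x) (A B)
assert np.allclose(A_hat @ B_hat, np.kron(np.eye(N), A @ B))
```

The second assertion is exactly the mixed-product property from the notation paragraph, specialized to the block-diagonal system matrices.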
Then, based on this communication graph structure and for any choice of positive integers $a$ and $b$, we define a linear subspace of $\mathbb{R}^{a N \times b N}$ as follows: \begin{gather*} \mathcal{U}_{a,b}^N(\mathcal{G}) \coloneqq \big\{ M \in \mathbb{R}^{a N \times b N} \,\big\vert\, [M]_{ij} = \mathbf{0}\ \text{if}\ j\not\in\mathcal{N}_i\cup\{i\},\ [M]_{ij} \in \mathbb{R}^{a\times b},\ i,j=1,\dots,N \big\}. \end{gather*} Without access to the system parameters $A$ and $B$, we are interested in designing linear feedback gains with a desired sparsity pattern using measurements from the system in \eqref{eq:dynamics_N_agents}. Of particular interest in this work are structured feedback gains based on the underlying communication network. Given an initial condition $\hat{\mathbf{x}}_0$, the distributed (structured) optimal control problem is posed in the form \begin{equation} \begin{aligned} \label{eq:distributed_optimization} &\hspace{3mm}\min_{\hat{\mathbf{K}}} \hspace{3mm} \sum_{t=0}^{\infty} \hat{\mathbf{x}}_t^{\top} \hat{\mathbf{Q}} \hat{\mathbf{x}}_t + \hat{\mathbf{u}}_t^{\top} \hat{\mathbf{R}} \hat{\mathbf{u}}_t \\ &\hspace{5mm}\text{s.t.}\hspace{4mm} \cref{eq:dynamics_N_agents}, \quad \hat{\mathbf{u}}_t = \hat{\mathbf{K}} \hat{\mathbf{x}}_t, \quad \hat{\mathbf{K}} \in \mathcal{U}_{m,n}^N(\mathcal{G}), \end{aligned} \end{equation} where $\hat{\mathbf{K}}$ stabilizes the pair $(\hat{\mathbf{A}}, \hat{\mathbf{B}})$ (i.e., $\rho(\hat{\mathbf{A}} + \hat{\mathbf{B}}\hat{\mathbf{K}}) < 1$), $\hat{\mathbf{R}} = \mathrm{I}_N \otimes R$, and $\hat{\mathbf{Q}} = \mathrm{I}_N \otimes Q_1 + \mathcal{L}_\mathcal{G} \otimes Q_2$ for some given cost matrices $Q_1\succ 0$, $Q_2\succeq 0$, and $R\succ 0$. Note that $\hat{\mathbf{Q}} \in \mathcal{U}_{n,n}^N(\mathcal{G})$, and it is a simple exercise to show that $\hat{\mathbf{Q}}$ is positive definite.
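To make the sparsity constraint and the cost structure concrete, the following standalone sketch (using a hypothetical 4-node path graph and placeholder cost blocks, not data from the paper) builds the block mask of $\mathcal{U}_{a,b}^N(\mathcal{G})$ together with $\hat{\mathbf{Q}} = \mathrm{I}_N \otimes Q_1 + \mathcal{L}_\mathcal{G} \otimes Q_2$, and confirms that $\hat{\mathbf{Q}}$ inherits the graph sparsity and is positive definite:

```python
import numpy as np

# Hypothetical 4-node path graph 1-2-3-4 (undirected)
N = 4
Adj = np.zeros((N, N))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    Adj[i, j] = Adj[j, i] = 1.0
Lap = np.diag(Adj.sum(axis=1)) - Adj        # graph Laplacian L_G

# Block mask of U_{a,b}^N(G): block (i,j) may be nonzero iff j in N_i or j = i
mask = (Adj + np.eye(N)) > 0

n = 2
Q1 = 2.0 * np.eye(n)                        # Q1 > 0 (positive definite)
Q2 = np.array([[1.0, 0.0], [0.0, 0.5]])     # Q2 >= 0 (positive semidefinite)
Q_hat = np.kron(np.eye(N), Q1) + np.kron(Lap, Q2)

# Q_hat respects the graph sparsity: non-neighbor blocks are zero
assert all(np.allclose(Q_hat[i*n:(i+1)*n, j*n:(j+1)*n], 0.0)
           for i in range(N) for j in range(N) if not mask[i, j])
# ... and it is positive definite (I_N (x) Q1 > 0, kron of PSD factors >= 0)
assert np.all(np.linalg.eigvalsh(Q_hat) > 0)
```

The positive definiteness check mirrors the "simple exercise" above: $\mathcal{L}_\mathcal{G} \otimes Q_2 \succeq 0$ since both factors are positive semidefinite, and $\mathrm{I}_N \otimes Q_1 \succ 0$.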
Such an interdependent cost structure has been considered in graph-based distributed control frameworks in the literature (see, for instance, \cite{borrelli2008distributed,deshpande2011distributed,wang2017distributed,massioni2009distributed}). Intuitively, the first term in $\hat{\mathbf{Q}}$ indicates the absolute cost pertinent to regulating each agent, while the second term captures the cost that arises from the consensus error with respect to the neighbors' states. \noindent \\ The goal is to find a data-guided suboptimal solution to the problem in \eqref{eq:distributed_optimization} without knowledge of the system parameters $\hat{\mathbf{A}}$ and $\hat{\mathbf{B}}$. Indeed, not knowing the true underlying system parameters adds an additional layer of difficulty to this framework. Instead, input-state data of the system can be collected for distributed feedback control synthesis over the given underlying communication graph $\mathcal{G}$. In what follows, we summarize the challenges involved in the analysis of this problem: \begin{enumerate*}[label=\Roman*)] \item In general, the constrained optimization problem in \eqref{eq:distributed_optimization} is NP-hard \cite{papadimitriou1986intractable}. Assuming complete knowledge of the system parameters, it has been investigated under a variety of assumptions \cite{gupta2005sub, rotkowitz2005characterization, bamieh2002distributed, borrelli2008distributed}, or tackled directly with the aid of projected gradient-based policy updates \cite{maartensson2009gradient, bu2019lqr}. \item On the other hand, the main issue with data-driven approaches is that the policy obtained in this way will not necessarily respect the hard constraint $\hat{\mathbf{K}} \in \mathcal{U}_{m,n}^N(\mathcal{G})$ posed in the optimization \eqref{eq:distributed_optimization}.
% Also, a ``projection'' onto the intersection of this constraint and the set of stabilizing controllers is not straightforward due to the complicated geometry of the set of stabilizing controllers \cite{bu2020topological}.
\item Another significant challenge of adopting data-driven methods for the entire network stems from the ``curse of dimensionality'' inherent to large-scale networked systems. In fact, collecting data from the entire large-scale network is itself prohibitive. \item Finally, it is often impossible in real-world applications to stop the entire network from functioning for data collection or decision-making purposes. Therefore, a proposed algorithm must enjoy an online nature that allows the entire network to evolve throughout the process. \end{enumerate*} \noindent \\ As a designer is left with no choice but to leverage the input-state observations to find a ``reasonable'' distributed policy for the networked system, we summarize our approach as follows: \begin{enumerate*}[label=\Roman*)] \item As the constraint $\hat{\mathbf{K}} \in \mathcal{U}_{m,n}^N(\mathcal{G})$ is strict in distributed control synthesis and the underlying problem is NP-hard in general, we shift our attention from the optimal solution of \cref{eq:distributed_optimization} towards a suboptimal stabilizing distributed controller with reduced computational burden. \item Inspired by a $\mathcal{Q}$-learning-based policy iteration algorithm, we propose a model-free distributed policy iteration scheme to obtain a reasonable solution to the problem in \eqref{eq:distributed_optimization}.
% In particular, we learn two control components that resemble the ``individual'' and ``cooperative'' parts.
We also simultaneously learn a stability margin related to these components, which is eventually utilized to prescribe a distributed control synthesis for the entire network.
\item Inherent to our algorithm is a subproblem whose dimension is related only to the maximum degree of the underlying graph, rather than to the dimension of the original network.
% In particular, we will reason that for the learning phase, our method requires data collection only from a specific smaller portion of the underlying network $\mathcal{G}_d \subseteq \mathcal{G}$ with size $d = d_{\max}(\mathcal{G})+1$.
% This subgraph is substantially smaller than the original graph whenever $d_{\max}(\mathcal{G}) \ll N$, which is the case in many real-world applications.
\item We allow temporary feedback links on $\mathcal{G}_d$ during the learning phase of our algorithm, which are removed subsequently.
% So, our approach allows an online execution of our algorithm.
\end{enumerate*} \noindent \\ As described in the subsequent sections, the distributed control design part of our method follows a model-based framework previously studied in \cite{borrelli2008distributed, wang2017distributed}. Here, we provide a model-free distributed policy iteration algorithm that is not only computationally efficient, but also more practical when model-based control in high dimensions is not feasible. \subsection{The Objective of \texorpdfstring{\ac{D3PI}}{D3PI} Algorithm} \label{subsec:motivation} We base our method on a policy iteration scheme in which a linear feedback gain is updated at each iteration, followed by a policy evaluation step that solves the corresponding Lyapunov equation. With no knowledge of the system parameters $\hat{\textbf{A}}$ and $\hat{\textbf{B}}$, we may follow a $\mathcal{Q}$-learning-based methodology with a trial-and-error nature, in which a control agent optimizes some value function by observing the results of its own actions (see \cite{bradtke1994adaptive, lewis2012reinforcement} for instance).
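The alternation between policy evaluation and policy update described above can be sketched in its model-based form for a single random LQR instance (a sketch with placeholder matrices, not the D3PI algorithm itself): evaluation solves the closed-loop Lyapunov equation, and the update forms the $\mathcal{Q}$-matrix blocks $F = R + B^\top P B$ and $G = B^\top P A$ before setting $K = -F^{-1}G$:

```python
import numpy as np

def dlyap(Acl, Qk):
    """Solve P = Acl' P Acl + Qk via vectorization (column-major vec)."""
    s = Acl.shape[0]
    lhs = np.eye(s * s) - np.kron(Acl.T, Acl.T)
    return np.linalg.solve(lhs, Qk.flatten('F')).reshape((s, s), order='F')

# Placeholder single-agent system (A, B are unknown in the paper; random here)
rng = np.random.default_rng(1)
n, m = 3, 2
M = rng.standard_normal((n, n))
A = 0.9 * M / max(abs(np.linalg.eigvals(M)))   # scaled so rho(A) = 0.9 < 1
B = rng.standard_normal((n, m))
Q, R = np.eye(n), np.eye(m)

K = np.zeros((m, n))                           # stabilizing since A is Schur
for _ in range(25):
    # Policy evaluation: Lyapunov equation of the current closed loop
    P = dlyap(A + B @ K, Q + K.T @ R @ K)
    # Policy update via Q-matrix blocks: K = -F^{-1} G
    F = R + B.T @ P @ B
    G = B.T @ P @ A
    K = -np.linalg.solve(F, G)

# The fixed point satisfies the discrete-time algebraic Riccati equation
riccati = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B,
                                                          B.T @ P @ A)
assert np.allclose(P, riccati, atol=1e-8)
```

This is Hewer's model-based iteration; the model-free variant discussed next replaces the Lyapunov solve with an RLS estimate of the same quantities from data.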
To implement such machinery, an online iterative scheme---say, \ac{RLS}---is often utilized, where at each iteration data is collected using the current estimate of the policy and the same data is recursively employed to perform an inherent policy evaluation step. Then the policy is updated in a manner similar to the discrete-time LQR update, \begin{align*} \hat{\textbf{K}} = - \textbf{F}^{-1} \textbf{G}, \end{align*} where $\textbf{F}$ and $\textbf{G}$ are matrix blocks estimated from data in the policy evaluation step. Nonetheless, the main issue with this approach is that the policy obtained in this way will not respect the hard constraint $\hat{\textbf{K}} \in \mathcal{U}_{m,n}^N(\mathcal{G})$ posed in the optimization \eqref{eq:distributed_optimization}. On the other hand, an arbitrary ``projection'' of the obtained policy onto the set $\mathcal{U}_{m,n}^N(\mathcal{G})$ may fail to be stabilizing. Furthermore, such a projection onto the intersection of this constraint and the set of stabilizing controllers is not straightforward due to the complicated geometry of the set of stabilizing controllers \cite{bu2020topological}. \noindent \\ As the constraint $\hat{\textbf{K}} \in \mathcal{U}_{m,n}^N(\mathcal{G})$ is strict in distributed control synthesis, we shift our attention from the optimal solution of \cref{eq:distributed_optimization} towards a suboptimal stabilizing distributed controller with reduced computational burden. Hence, we need to exploit the structure imposed on this formulation by the underlying graph. Finally, we note that it is often prohibitive in real-world applications to stop the entire network from functioning for data collection or decision-making purposes.
Therefore, during the learning stage of our algorithm, we allow temporary feedback links on a specific smaller portion of the underlying network, $\mathcal{G}_d \subseteq \mathcal{G}$ with size $d$, which are removed subsequently.\footnote{Of particular interest to us are systems that can be modeled with graphs in which $d \ll N$. Examples of such occurrences are prevalent in networked applications such as social media, power grids, etc.} We will reason that for the learning phase, our method requires observations only from this portion of the graph, where a few temporary links are appended to make $\mathcal{G}_d$ complete. We show that this interdependency leads to separately learned control components that later prescribe a distinction between ``self'' and ``cooperative'' controls in a network. In the next section, we provide the \ac{D3PI} algorithm and sketch the analysis that connects the ideas presented above, along with the convergence study. \section{Main Algorithm} \label{sec:main_algorithm} In this section, we propose and discuss the main algorithm of the paper. Given the underlying communication graph $\mathcal{G}$, the system is treated as a black box, while the designer is capable of injecting input signals into the system and observing the states. The goal of \ac{D3PI} is then to find a data-guided suboptimal solution to the problem in \eqref{eq:distributed_optimization} without knowledge of the system parameters $\hat{\textbf{A}}$ and $\hat{\textbf{B}}$. To this end, we take a roundabout approach and consider the synthesis problem for only a subgraph $\mathcal{G}_d \subseteq \mathcal{G}$, collecting data only from that portion of the system. Before we proceed to the main algorithm, we formalize two notions in order to facilitate the presentation.
\begin{definition} \label{def:policy} Given any subgraph $\mathcal{G}' \subseteq \mathcal{G}$ and a fixed labeling of the nodes, we let ``$\mathrm{Policy} \left( \mathcal{V}_{\mathcal{G}'} \right)$'' denote the concatenation of all the policies of the agents in $\mathcal{V}_{\mathcal{G}'}$, i.e., \begin{align*} \mathrm{Policy} \left( \mathcal{V}_{\mathcal{G}'} \right) \coloneqq \left[ \mbox{$\bm u$}_1~;~\mbox{$\bm u$}_2~;~\cdots~;~\mbox{$\bm u$}_{\left|\mathcal{V}_{\mathcal{G}'}\right|} \right], \end{align*} where $\mbox{$\bm u$}_i$ is the feedback control policy of agent $i$ in the subgraph $\mathcal{G}'$, viewed as a mapping from $\{\x_j \,|\, j\in\mathcal{N}_i\cup\{i\}\}$ to $\mathbb{R}^m$. Furthermore, we use $\mathrm{Policy}( \mathcal{V}_{\mathcal{G}'}) \big|_t$ to denote the realization of these policies (\textit{i.e.}, the control signals) at time $t$. Similarly, we define \[\mathrm{State} \left( \mathcal{V}_{\mathcal{G}'} \right) \coloneqq \left[\x_1~;~\x_2~;~ \cdots~;~\x_{|\mathcal{V}_{\mathcal{G}'}|}\right].\] \end{definition} \noindent The \ac{D3PI} algorithm is introduced in \Cref{alg:distributed_control_algorithm}. During the learning phase, we add temporary ``auxiliary'' links to $\mathcal{G}_d$ to make its communication graph complete. We denote the resulting graph by $\mathcal{G}_{d,\mathrm{learn}}$, where $\left\vert \mathcal{V}_{\mathcal{G}_d} \right\vert = \left\vert \mathcal{V}_{\mathcal{G}_{d, \mathrm{learn}}} \right\vert$ and $\mathcal{G}_{d,\mathrm{learn}} = \mathcal{K}(\mathcal{V}_{\mathcal{G}_d})$. Inherent to \ac{D3PI} is a policy iteration on $\mathcal{G}_{d,\mathrm{learn}}$ that finds components $K_k$ and $L_k$, which intuitively represent the ``self'' and ``cooperative'' controls at iteration $k$, respectively. Even during the learning phase, we utilize these components to design and update an effective stabilizing controller for the remaining agents in $\mathcal{G} \setminus \mathcal{G}_{d,\mathrm{learn}}$.
We do so by ensuring that, during the learning phase, information is exchanged unidirectionally from $\mathcal{G}_{d,\mathrm{learn}}$ to the rest of the network; therefore, the policy of the agents in $\mathcal{G} \setminus \mathcal{G}_{d,\mathrm{learn}}$ depends on those in $\mathcal{G}_{d,\mathrm{learn}}$, but not vice versa. Finally, upon convergence of the algorithm, we remove the temporary links so as to restore the original communication topology, and we propose a final suboptimal stabilizing scheme for the entire network that follows the original network topology. In the learning stage of \ac{D3PI}, we use an \ac{RLS}-based recursion to estimate the unknown parameters in the cost matrix at iteration $k$, referred to as $\widetilde{\textbf{H}}_k$.
\begin{algorithm}[!ht] \caption{{\small \acf{D3PI}}} \label{alg:distributed_control_algorithm} \begin{algorithmic}[1]
\State \textbf{Initialization} $(t \leftarrow 1,\ k \leftarrow 1)$
\State $\hspace{5mm}$ Obtain graph $\mathcal{G}$ and choose $\mathcal{G}_d \subseteq \mathcal{G}$ with $d = d_{\max}(\mathcal{G})+1$
\State $\hspace{5mm}$ Obtain $Q_1$, $Q_2$, $R$ and set $\widetilde{Q} \leftarrow Q_1 + d Q_2$
\State $\hspace{5mm}$ Randomize $\tilde{\textbf{x}}_1 \in \mathbb{R}^{d n}$ and $\widetilde{\textbf{H}}_0\in\mathbb{R}^{p\times p}$ with $p=d(n+m)$ and set $\tau_1 \leftarrow 0$
\State $\hspace{5mm}$ Set $\mathcal{P}^\circ \leftarrow \beta \mathrm{I}_{p(p+1)/2}$ for large enough $\beta>0$ and fix variance $\Sigma$
\State \label{line:assumption} $\hspace{4mm}$ Choose $K_1$ that stabilizes \eqref{eq:agent_i_dynamics} and set $L_1 \leftarrow \textbf{0}_{m\times n}$ and $\Delta K_1 \leftarrow K_1 - L_1$
\State $\hspace{5mm}$ Switch ON temporary links in $\mathcal{G}_d$ to set $\mathcal{G}_{d,\mathrm{learn}} \leftarrow \mathcal{K}(\mathcal{V}_{\mathcal{G}_d})$
\State \textbf{While $K_k$ and $L_k$ not converged, do}
\State $\hspace{5mm}$ Set $\mathcal{P}_k \leftarrow \mathcal{P}^\circ$
\State \label{line:policy_update}
$\hspace{4mm}$ Set $\mathrm{Policy}_k(\mathcal{V}_{\mathcal{G}})$ such that for each $i\in\mathcal{V}_{\mathcal{G}}$, \begin{align*} \textstyle \mbox{$\bm u$}_{i} \leftarrow \Delta K_k \x_{i} + L_k \left[ \mathbb{I}_{ \left\{ i\in\mathcal{V}_{\mathcal{G} \setminus \mathcal{G}_d} \right\} } \sum_{j\in\mathcal{N}_i} \frac{\tau_k}{d-1} \x_{j} + \mathbb{I}_{ \left\{ i\in\mathcal{V}_{\mathcal{G}_d} \right\} } \sum_{j \in \mathcal{V}_{\mathcal{G}_d} \setminus \{ i \} } \x_{j} \right] \end{align*} \State $\hspace{5mm}$ Evaluate $\mathrm{Policy}_k(\mathcal{V}_{\mathcal{G}_d})$ by obtaining $\widetilde{\textbf{H}}_k \leftarrow \mathrm{SPE} \left( \mathcal{G},\ \mathcal{G}_{d,\mathrm{learn}},\ \mathrm{Policy}_k(\mathcal{V}_{\mathcal{G}}),\ \widetilde{\textbf{H}}_{k-1}, \mathcal{P}_k \right)$ \State $\hspace{5mm}$ Recover matrix blocks \begin{align*} \widetilde{\textbf{H}}_{11,k} &\leftarrow \widetilde{\textbf{H}}_k[1:dn,\ 1:dn], \\ \widetilde{\textbf{H}}_{21,k} &\leftarrow \widetilde{\textbf{H}}_k[dn+1:dn+dm,\ 1:dn], \\ \widetilde{\textbf{H}}_{22,k} &\leftarrow \widetilde{\textbf{H}}_k[dn+1:dn+dm,\ dn+1:dn+dm] \end{align*} \State \label{step:blocks} $\hspace{4mm}$ Set \small \begin{align*} X_1 &\leftarrow \widetilde{\textbf{H}}_{11,k} \big[ 1:n,\ 1:n \big], \hspace{13.5mm} Y_1 \leftarrow \widetilde{\textbf{H}}_{22,k} \big[ 1:m,\ 1:m \big], \hspace{15mm} Z_1 \leftarrow \widetilde{\textbf{H}}_{21,k} \big[ 1:m,\ 1:n \big], \\ X_2 &\leftarrow \widetilde{\textbf{H}}_{11,k} \big[ 1:n,\ n+1:2n \big], \hspace{5.5mm} Y_2 \leftarrow \widetilde{\textbf{H}}_{22,k} \big[ 1:m,\ m+1:2m \big], \hspace{6mm} Z_2 \leftarrow \widetilde{\textbf{H}}_{21,k} \big[ 1:m,\ n+1:2n \big], \\ &\hspace{-6mm} \Delta X \leftarrow X_1 - X_2, \hspace{26mm} \Delta Y \leftarrow Y_1 - Y_2, \hspace{32mm} \Delta Z \leftarrow Z_1 - Z_2 \end{align*} \normalsize \State \label{step:inverses} $\hspace{4mm}$ Set \small $F \leftarrow \left( Y_1 - (d-1) Y_2 \big( Y_1 + (d-2) Y_2 \big)^{-1} Y_2 \right)^{-1}$ and $G 
\leftarrow \left( Y_1 + (d-1) Y_2 \right)^{-1} Y_2 \left( Y_1 - Y_2 \right)^{-1}$ \normalsize \State $\hspace{5mm}$ Update, \small \begin{align*} K_{k+1} &\leftarrow - F Z_1 + (d-1) G Z_2, \hspace{8mm} L_{k+1} \leftarrow - F Z_2 + G Z_1 + (d-2) G Z_2, \hspace{8mm} \Delta K_{k+1} \leftarrow K_{k+1} - L_{k+1} \end{align*} \normalsize \State $\hspace{5mm}$ Set \small $\hspace{2mm} \Xi_{k+1} \leftarrow \Delta X - \widetilde{Q} + Q_2 + \Delta K_{k+1}^\top \Delta Z + \Delta Z^\top \Delta K_{k+1} + \Delta K_{k+1}^\top \big( \Delta Y - R \big) \Delta K_{k+1}$ \normalsize \State \label{step:singular_values} $\hspace{4mm}$ Set \small $\gamma_{k+1} \leftarrow \sigma_{\min} \Big( \Delta K_{k+1}^\top R \Delta K_{k+1} + \widetilde{Q} \Big) \Big/ \sigma_{\max} \Big( \Xi_{k+1} + L_{k+1}^\top (\Delta Y - R) L_{k+1} \Big)$ \normalsize \State $\hspace{5mm}$ Set $\tau_{k+1} \leftarrow \sqrt{\gamma_{k+1}^2 / (1 + \gamma_{k+1})}$ \State $\hspace{5mm}$ $k \leftarrow k + 1$ \State $\hspace{0mm}$ Switch OFF the temporary links and retrieve $\mathcal{G}_d$ \State $\hspace{0mm}$ Set $\mathrm{Policy}_k(\mathcal{V}_{\mathcal{G}})$ such that for each $i\in\mathcal{V}_{\mathcal{G}}$, $\mbox{$\bm u$}_{i} \leftarrow \Delta K_k \x_{i} - \frac{\tau_k}{d-1} L_k \sum_{j\in\mathcal{N}_i} \x_{j}$ \end{algorithmic} \end{algorithm} \begin{algorithm}[!ht] \caption{{\small Subgraph Policy Evaluation} (SPE)} \label{alg:SPE} \begin{algorithmic}[1] \State $\hspace{0mm}$ \textbf{Input:} Graph $\mathcal{G}$, subgraph $\mathcal{G}'\subseteq \mathcal{G}$, $\mathrm{Policy}(\mathcal{V}_{\mathcal{G}})$, $\textbf{H}$, $\mathcal{P}$ \State $\hspace{0mm}$ \textbf{Output:} The updated cost matrix $\textbf{H}^+$ associated with $\mathcal{G}'$ \State $\hspace{0mm}$ \textbf{While} $\textbf{H}$ \textbf{not converged, do} \State $\hspace{5mm}$ Set $\tilde{\textbf{x}}_t \leftarrow \mathrm{State} (\mathcal{V}_{\mathcal{G}'})\big|_t$ and $\tilde{\textbf{u}}_t \leftarrow \mathrm{Policy} (\mathcal{V}_{\mathcal{G}'})\big|_t$ \State 
$\hspace{5mm}$ Choose $\e_t \sim \mathcal{N}(\textbf{0},\Sigma)$ and update $\mathrm{Policy}(\mathcal{V}_{\mathcal{G}'}) \big|_t$ for all $i\in \mathcal{V}_{\mathcal{G}'}$ as $\tilde{\textbf{u}}_t \leftarrow \tilde{\textbf{u}}_t + \e_t$ \State $\hspace{5mm}$ Let entire $\mathcal{G}$ run under $\mathrm{Policy}(\mathcal{V}_{\mathcal{G}})$ and collect $\mathrm{State} (\mathcal{V}_{\mathcal{G}'})\big|_{t+1}$ only from $\mathcal{G}'$ \State $\hspace{5mm}$ Set $\tilde{\textbf{x}}_{t+1} \leftarrow \mathrm{State} (\mathcal{V}_{\mathcal{G}'})\big|_{t+1}$ and $\tilde{\textbf{u}}_{t+1} \leftarrow \mathrm{Policy}( \mathcal{V}_{\mathcal{G}'}) \big|_{t+1}$ \State $\hspace{5mm}$ Set $\tilde{\textbf{z}}_{t} \leftarrow [\tilde{\textbf{x}}_{t} ; ~ \tilde{\textbf{u}}_{t}]$ and $\tilde{\textbf{z}}_{t+1} \leftarrow [\tilde{\textbf{x}}_{t+1} ; ~ \tilde{\textbf{u}}_{t+1}]$ and form $\ophi_t \leftarrow \tilde{\textbf{z}}_{t} - \tilde{\textbf{z}}_{t+1}$ \State $\hspace{5mm}$ Compute $\ozeta_t \leftarrow \big[ \ophi_{1,t}^2~~\ophi_{1,t}\ophi_{2,t}~~\cdots~~\ophi_{1,t}\ophi_{p,t}~\big\vert~\ophi_{2,t}^2~~\ophi_{2,t}\ophi_{3,t}~~\cdots~~\ophi_{2,t}\ophi_{p,t}~\big\vert~\cdots~\big\vert~\ophi_{p,t}^2 \big]^\top$ \State $\hspace{5mm}$ Compute $\mathcal{R}(\tilde{\textbf{x}}_t, \tilde{\textbf{u}}_t) \leftarrow \tilde{\textbf{x}}_t^{\top} \big( \mathrm{I} \otimes \widetilde{Q} - \mathbbm{1}\mathbbm{1}^{\top} \otimes Q_2 \big) \tilde{\textbf{x}}_t + \tilde{\textbf{u}}_t^{\top} \big( \mathrm{I} \otimes R \big) \tilde{\textbf{u}}_t$ \State $\hspace{5mm}$ Form $\otheta = \mathrm{vech}(\textbf{H})$ and use \ac{RLS} update, \begin{align} \label{eq:RLS_iteration} \otheta \leftarrow \otheta + \frac{\mathcal{P} \ozeta_{t} \big( \mathcal{R}(\tilde{\textbf{x}}_t, \tilde{\textbf{u}}_t) - \ozeta_{t}^\top \otheta \big)}{1 + \ozeta_{t}^\top \mathcal{P} \ozeta_{t}}, \hspace{15mm} \mathcal{P} \leftarrow \mathcal{P} - \frac{\mathcal{P} \ozeta_{t} \ozeta_{t}^\top \mathcal{P}}{1 + \ozeta_{t}^\top \mathcal{P} 
\ozeta_{t}} \end{align}
\State $\hspace{5mm}$ Find $\textbf{H}^+ = \mathrm{vech}^{-1}(\otheta)$ and update $\textbf{H} \leftarrow \textbf{H}^+$
\State $\hspace{5mm}$ $t\leftarrow t+1$
\end{algorithmic} \end{algorithm}
This entire process is performed by the \ac{SPE} routine (\Cref{alg:SPE}), given $\mathcal{G}$, $\mathcal{G}_{d,\text{learn}}$, the mapping $\mathrm{Policy}_k(\mathcal{V}_{\mathcal{G}})$, and the previous estimate $\widetilde{\textbf{H}}_{k-1}$.\footnote{Tilded notations are reserved for the parameters related to $\mathcal{G}_{d,\mathrm{learn}}$.} As will be discussed in \Cref{subsec:analysis_setup}, $\widetilde{\textbf{H}}_{k}$ contains the information required to find $K_k$ and $L_k$ from data. We extract this square matrix through a recursive update on the vector $\otheta_{k-1}$, derived from the half-vectorization of $\widetilde{\textbf{H}}_{k-1}$, by solving the \ac{RLS} problem for the linear equation $\mathcal{R}(\tilde{\textbf{x}}_t, \tilde{\textbf{u}}_t) = \ozeta_t^{\top} \otheta_{k-1}$, where $\mathcal{R}(\tilde{\textbf{x}}_t, \tilde{\textbf{u}}_t)$ denotes the local cost and $\ozeta_t \in \mathbb{R}^{p(p+1)/2}$ contains the data measurements.\footnote{We use the index $k$ for the policy update and $t$ for the data collection.} The adaptive nature of the algorithm requires the exploration signal $\e_t$ to be added to the policy vector in order to satisfy persistence of excitation. In our setup, $\e_t$ is sampled from a normal distribution $\e \sim \mathcal{N}(\textbf{0},\Sigma)$, where the choice of the variance $\Sigma$ is problem-specific.\footnote{In practice, excitation of the input is a subtle task and has been implemented in a variety of forms, such as random noise \cite{bradtke1994adaptive}, sinusoidal signals \cite{jiang2012computational}, and exponentially decaying noise \cite{lewis2010reinforcement}.} We denote by $\mathcal{P}_k$ the projection factor, which is reset to $\mathcal{P}^\circ \succ 0$ at the beginning of each iteration.
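As a standalone illustration of the \ac{RLS} recursion in \eqref{eq:RLS_iteration} (a minimal sketch with a synthetic symmetric target and random regressors, not \Cref{alg:SPE} itself), the half-vectorized parameter vector can be recovered from scalar quadratic measurements; in this sketch the cross terms of the regressor are doubled so that the estimate equals $\mathrm{vech}$ of the target exactly:

```python
import numpy as np

def vech(H):
    """Stack the upper-triangular half of H (incl. diagonal) into a vector."""
    p = H.shape[0]
    return np.concatenate([H[i, i:] for i in range(p)])

def regressor(phi):
    """zeta(phi) with cross terms doubled so that zeta' vech(H) = phi' H phi."""
    p = len(phi)
    return np.array([phi[i] * phi[j] * (1.0 if i == j else 2.0)
                     for i in range(p) for j in range(i, p)])

# Hypothetical symmetric target H (stand-in for the unknown cost matrix)
rng = np.random.default_rng(2)
p = 4
H_true = rng.standard_normal((p, p))
H_true = (H_true + H_true.T) / 2
theta_true = vech(H_true)

q = p * (p + 1) // 2
theta = np.zeros(q)
P = 1e8 * np.eye(q)                 # P° = beta * I with large beta
for _ in range(300):
    phi = rng.standard_normal(p)    # random excitation (persistently exciting)
    y = phi @ H_true @ phi          # noise-free scalar measurement
    z = regressor(phi)
    denom = 1.0 + z @ P @ z
    theta = theta + P @ z * (y - z @ theta) / denom   # parameter update
    P = P - np.outer(P @ z, P @ z) / denom            # covariance update

assert np.allclose(theta, theta_true, atol=1e-4)
```

In the paper's formulation the doubling is instead absorbed into $\otheta$ itself; either convention yields the same quadratic form $\ozeta_t^\top \otheta$.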
Convergence of the \ac{SPE}---guaranteed under the persistence of excitation condition---is followed by the update of $\widetilde{\textbf{H}}_k$, which encodes the information necessary to obtain $K_k$ and $L_k$. This is done by recovering the block matrices $X_1$, $X_2$, $Y_1$, $Y_2$, $Z_1$, and $Z_2$ from $\widetilde{\textbf{H}}_k$, which are further used to form the intermediate variables $F$ and $G$.\footnote{The matrix inversions on \cref{step:inverses} of \Cref{alg:distributed_control_algorithm} will be justified in \Cref{lem:tools_for_analysis} of \Cref{sec:analysis}.} Such recovery of meaningful blocks from $\widetilde{\textbf{H}}_k$ is due to the specific matrix structure that results from adding extra links to $\mathcal{G}_d$, and it will be discussed further in the following section. Each iteration loop then ends by updating the parameters $\gamma_k$ and $\tau_k$, which are instrumental in the stability analysis of the proposed controller throughout the process. Upon convergence of \ac{D3PI}, $\mathcal{G}_d$ is retrieved by removing the temporary links, and the distributed policy is extrapolated to the entire graph $\mathcal{G}$. Lastly, note that the inverse operations on \cref{step:inverses} of the algorithm involve matrices of size $m\times m$ and are computationally negligible. Furthermore, the complexity of finding extreme singular values---as on \cref{step:singular_values} of \Cref{alg:distributed_control_algorithm}---is shown to be as low as $\mathcal{O}(n^2)$ \cite{comon1990tracking}. Hence, the computational complexity of \ac{D3PI} originates mainly from the \ac{SPE} recursion, which is equivalent to the complexity of \ac{RLS} for the number of unknown system parameters in $\mathcal{G}_d$, namely $\mathcal{O} \left( d^2 (n+m)^2 \right)$ \cite{haykin2002adaptive}. This implies that the complexity of the algorithm remains fixed for any number of agents $N$, as long as the maximum degree of the graph is unchanged.
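The role of the special matrix structure can be illustrated numerically. The following standalone sketch (with hypothetical symmetric blocks $A$ and $B$, scaled so that both $A-B$ and $A+(d-1)B$ are invertible; these are unrelated to the system matrices) checks that the inverse of a matrix of the form $\mathrm{I}_d \otimes (A-B) + \mathbbm{1}\mathbbm{1}^\top \otimes B$ retains the same two-block pattern, and that its blocks are computable from $m\times m$ inversions only:

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 5, 3
# Hypothetical symmetric blocks; A is made strongly positive definite and
# B is scaled small so that A - B and A + (d-1)B are both invertible.
A = rng.standard_normal((m, m))
A = A @ A.T + m * np.eye(m)
B = rng.standard_normal((m, m))
B = (B + B.T) / 20

J = np.ones((d, d))
Y = np.kron(np.eye(d), A - B) + np.kron(J, B)   # patterned matrix

Yinv = np.linalg.inv(Y)
C = Yinv[:m, :m]        # diagonal block of the inverse
E = Yinv[:m, m:2*m]     # off-diagonal block of the inverse

# The inverse keeps the same pattern: I_d (x) (C - E) + 11' (x) E
assert np.allclose(Yinv, np.kron(np.eye(d), C - E) + np.kron(J, E))
# ... and its blocks follow from m-by-m inversions alone:
assert np.allclose(C - E, np.linalg.inv(A - B))
assert np.allclose(E, -np.linalg.inv(A + (d - 1) * B) @ B @ np.linalg.inv(A - B))
```

This closure property (matrices of the form $\mathrm{I}\otimes X + \mathbbm{1}\mathbbm{1}^\top\otimes Z$ form an algebra, since $\mathbbm{1}\mathbbm{1}^\top \mathbbm{1}\mathbbm{1}^\top = d\,\mathbbm{1}\mathbbm{1}^\top$) is what makes the block recovery and the small inversions on \cref{step:inverses} possible.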
\begin{remark} \label{rmk:notes_on_algorithm} Adding temporary links within the subgraph $\mathcal{G}_d$ is a way to learn optimal $K_k$ and $L_k$ for the subgraph $\mathcal{G}_{d,\mathrm{learn}}$ by fully examining the dynamical interference among the agents. % Moreover, initializing $K_k$ such that \eqref{eq:agent_i_dynamics} is Schur stable---as assumed in our algorithm---has become standard in the data-driven control literature. % However, we acknowledge that this is not a trivial assumption and is often impractical, in particular when the system is unknown. % While we keep this assumption for brevity of the analysis, the interested reader is referred to the recent works \cite{talebi2020online, chen2020black} for ways in which this issue can be circumvented for specific classes of systems. \end{remark} \section{Convergence and Stability} \label{sec:analysis} In this section, we provide the convergence and stability analyses of the \ac{D3PI} algorithm. For clarity of the discussion, we defer the framework subtleties and detailed proofs to Appendices \ref{sec:setup-analysis} and \ref{sec:proofs}. First, we study the structure and gain margins of each local controller, and then establish the stability of the proposed controller for the entire network throughout the learning process of the \ac{D3PI} algorithm. Lastly, we show the convergence of \ac{D3PI} to a stabilizing suboptimal distributed controller, followed by the derivation of a suboptimality bound characterized by the problem parameters. \noindent \\ We begin by introducing a linear group and establishing its useful properties in our setup.
For any integer $r \geq 2$, let us define the \textit{patterned linear group} as follows \begin{gather*} \mathrm{PL}(r\times n, \mathbb{R}) \coloneqq \left\{\textbf{N}_r \in \mathrm{GL}(rn,\mathbb{R}) \; \Big| \; \textbf{N}_r = \mathrm{I}_r \otimes \big( A - B \big) + \mathbbm{1}_r \mathbbm{1}_r^\top \otimes B, \text{ for some } A \in \mathrm{GL}(n,\mathbb{R}) \cap \mathbb{S}^{n\times n} \text{ and } B \in \mathbb{S}^{n\times n} \right\}. \end{gather*} \begin{proposition}\label{prop:lineargroup} The set $\mathrm{PL}(r\times n, \mathbb{R})$ is indeed a linear group. Furthermore, for any stable matrix $A \in \mathrm{PL}(r\times n, \mathbb{R})$, let $P$ denote the unique solution to the discrete-time Lyapunov equation for any $0 \prec Q \in \mathbb{S}^{rn\times rn}$, \textit{i.e.}, it satisfies $P = A^\top P A + Q$. Then, $P \in \mathrm{PL}(r\times n, \mathbb{R})$ if and only if $Q \in \mathrm{PL}(r\times n, \mathbb{R})$. \end{proposition} \noindent The fact that the patterned linear group is preserved under the Lyapunov equation is key to our analysis, as it facilitates efficient propagation of information in our algorithm. Next, we demonstrate how a specific structure and the stability of the controller for the subgraph $\mathcal{G}_{d,\mathrm{learn}}$, if initialized properly, can be preserved throughout the \ac{D3PI} algorithm. \begin{proposition} \label{prop:H_structure} Let $\widetilde{\textbf{K}}_k \coloneqq \mathrm{I}_d \otimes \big( K_k - L_k \big) + \mathbbm{1} \mathbbm{1}^{\top} \otimes L_k$ for all $k \geq 0$, where $K_k$ and $L_k$ are obtained as in \Cref{alg:distributed_control_algorithm}. % If $\widetilde{\textbf{K}}_1$ is a stabilizing controller for the system in $\mathcal{G}_{d,\mathrm{learn}} = \mathcal{K}(\mathcal{V}_{\mathcal{G}_d})$, then the following statements are true for all $k \geq 0$, \begin{itemize} \item $\mathrm{Policy}_k( \mathcal{V}_{\mathcal{G}_{d,\text{learn}}}) \big|_t = \widetilde{\textbf{K}}_k \tilde{\textbf{\textup{x}}}_t, \; \forall t$.
\item $\widetilde{\textbf{K}}_k$ is stabilizing for the system in $\mathcal{G}_{d,\mathrm{learn}}$. \item $\Delta K_k \coloneqq K_k - L_k$ stabilizes the dynamics of one single agent, {\it i.e.}, $A+B \Delta K_k$ is Schur stable. \end{itemize} \end{proposition} \noindent The results in \Cref{prop:H_structure} form the cornerstone of the remainder of the analysis in this section. They also give insight into the existence of a stabilizing controller $\Delta K_k$ and its corresponding cost-to-go matrix $\Delta P_k$. In the sequel, our goal is to design a distributed suboptimal controller based on the components that shape $\Delta K_k$. To this end, we derive stability gain margins that allow more flexibility in designing the distributed controller. In the following proposition, we provide the backbone of the distributed controller design for the networked system of $\mathcal{G}$. \begin{proposition} \label{prop:gain_margin} At each iteration $k \geq 0$, let $K_k$, $L_k$ and $\tau_k$ be as obtained in \Cref{alg:distributed_control_algorithm} and choose $\alpha$ such that $\left\vert \alpha - 1 \right\vert < \tau_k$. Then $A + B(K_k - \alpha L_k)$ is Schur stable. \end{proposition} \noindent The result in \Cref{prop:gain_margin} provides model-free stability gain margins, $\tau_k$, at each iteration of the algorithm for the dynamics of one single agent in $\mathcal{G}$. We take advantage of these margins as a tool to establish stability guarantees for the controller proposed in our algorithm. For the learning phase of \ac{D3PI}, this is formally established in the following theorem. \begin{theorem} \label{thm:stabilizability_learning_phase} Suppose $K_k$, $L_k$ and $\tau_k$ are defined as in \Cref{alg:distributed_control_algorithm}.
% Then the control policy $\mathrm{Policy}_k(\mathcal{V}_{\mathcal{G}})$ designed as, \begin{align} \label{eq:control_scheme_learning_phase} \mbox{$\bm u$}_{i} &= \Big( K_k - L_k \Big) \x_{i} + L_k \left[ \mathbb{I}_{ \big\{ i\in\mathcal{V}_{\mathcal{G} \setminus \mathcal{G}_{d,\mathrm{learn}}} \big\} } \sum_{j\in\mathcal{N}_i} \frac{\tau_k}{d-1} \x_{j} + \mathbb{I}_{ \big\{ i\in\mathcal{V}_{\mathcal{G}_{d,\mathrm{learn}}} \big\} } \sum_{j \in \mathcal{V}_{\mathcal{G}_{d,\mathrm{learn}}} \setminus \{ i \} } \x_{j} \right], \end{align} stabilizes the entire network $\mathcal{G}$ at each iteration of the learning phase in \Cref{alg:distributed_control_algorithm} for any choice of $\mathcal{V}_{\mathcal{G}_d}$. \end{theorem} \noindent The latter theorem establishes the fact that the proposed feedback mechanism stabilizes the entire network throughout the learning phase. This facilitates the control of the agents outside of $\mathcal{G}_d$, while the learning process is activated in the subgraph. Nonetheless, the practicality and suboptimality characteristics of the algorithm heavily depend on its convergence. This will be addressed in the following theorem. \begin{theorem} \label{thm:convergence} Assume that the initial controller $K_1$ is stabilizing for \eqref{eq:agent_i_dynamics} and $\mathrm{Policy}_k(\mathcal{V}_{\mathcal{G}_d}) \big|_t$ in \Cref{alg:SPE} is persistently excited. 
% Then $K_k \rightarrow K^*$, $L_k \rightarrow L^*$, $\gamma_k \rightarrow \gamma^*$, and $\tau_k \rightarrow \tau^*$ as $k\rightarrow\infty$, where $\widetilde{\textbf{K}}^* = \mathrm{I}_d \otimes \big( K^* - L^* \big) + \mathbbm{1} \mathbbm{1}^{\top} \otimes L^*$ is the optimal solution to the infinite-horizon state-feedback LQR problem with system parameters $\big( \widetilde{\textbf{A}}, \widetilde{\textbf{B}}, \widetilde{\textbf{Q}}, \widetilde{\textbf{R}} \big)$ defined as $ \widetilde{\textbf{A}} = \mathrm{I}_d \otimes A$, $\widetilde{\textbf{B}} = \mathrm{I}_d \otimes B$, $\widetilde{\textbf{Q}} = \mathrm{I}_{d} \otimes \widetilde{Q} - \mathbbm{1}\mathbbm{1}^{\top} \otimes Q_2$, and $\widetilde{\textbf{R}} = \mathrm{I}_d \otimes R$. \end{theorem} \noindent As we remove the temporary links that were provided for the learning phase, the structure of the interconnections in $\mathcal{G}$ is returned to its original topology. Therefore, it is vital to provide stability guarantees after the recursion in \Cref{alg:distributed_control_algorithm} is completed. This statement is a direct consequence of \Cref{thm:stabilizability_learning_phase} once the control components have converged, and is outlined in the following corollary. \begin{corollary} \label{cor:ultimate_stabilizability} Suppose $K^*$, $L^*$, $\gamma^*$, and $\tau^*$ are given as in \Cref{thm:convergence} after \Cref{alg:distributed_control_algorithm} has converged. % Then $\mathrm{Policy}(\mathcal{V}_{\mathcal{G}})$ defined as, \begin{align*} \mbox{$\bm u$}_{i} &= \big( K^* - L^* \big) \x_{i} + \frac{\tau^*}{d-1} L^* \sum_{j\in\mathcal{N}_i} \x_{j}, \hspace{12mm} \forall i\in \mathcal{V}_{\mathcal{G}} \end{align*} stabilizes the system in \eqref{eq:dynamics_N_agents}. \end{corollary} \begin{comment} \noindent Finally, we conclude this section by exploring the suboptimality of our proposed policy.
Given the problem parameters, let $\hat{\textbf{K}}_{\mathrm{struc}}^*$ denote the globally optimal distributed solution for the structured LQR problem in \eqref{eq:distributed_optimization} with the associated cost matrix $\hat{\textbf{P}}_{\mathrm{struc}}^*$. Given any other stabilizing distributed policy $\hat{\textbf{K}}$ associated with cost matrix $\hat{\textbf{P}}$, we define the \textit{optimality gap} as, \[\mathrm{\text{gap}}(\hat{\textbf{K}}) \coloneqq \mathrm{Tr}[\hat{\textbf{P}} - \hat{\textbf{P}}_{\mathrm{struc}}^*].\] \noindent The following theorem provides an upperbound on the optimality gap of distributed policy learned by \ac{D3PI} based on the problem parameters. Especially, when the system is ``contractible'', the upperbound depends on the difference of the distributed controller with that of unstructured optimal LQR controller. \begin{theorem} \label{thm:suboptimality} Let $\hat{\textbf{K}}^*$ be the distributed policy learned by \Cref{alg:distributed_control_algorithm} at convergence corresponding to the cost matrix $\hat{\textbf{P}}^*$. % Also, let $\hat{\textbf{K}}_{\mathrm{lqr}}^*$ denote the optimal (unstructured) solution to the infinite-horizon state-feedback LQR problem with parameters $(\hat{\textbf{A}}, \hat{\textbf{B}}, \hat{\textbf{Q}}, \hat{\textbf{R}})$ associated with the cost matrix $\hat{\textbf{P}}_{\mathrm{lqr}}^*$. 
% Then the optimality gap of $\hat{\textbf{K}}^*$ is bounded as follows: \begin{align*} \color{red} 0 \leq \mathrm{gap}(\hat{\textbf{K}}^*) \leq \mathrm{Tr}(\hat{\textbf{P}}^* - \hat{\textbf{P}}_{\mathrm{lqr}}^*), \end{align*} Furthermore, if $\hat{\textbf{A}}_{\hat{\textbf{K}}_{\mathrm{lqr}}} = \hat{\textbf{A}} + \hat{\textbf{B}} \hat{\textbf{K}}_{\mathrm{lqr}}$ is contractible (i.e., $\sigma_{\max}(\hat{\textbf{A}}_{\hat{\textbf{K}}_{\mathrm{lqr}}}) < 1$), then \begin{align*} 0 \leq \mathrm{gap}(\hat{\textbf{K}}^*) \leq \frac{\mathrm{Tr}(\textbf{M})}{1-\sigma_{\max}^2(\hat{\textbf{A}}_{\hat{\textbf{K}}_{\mathrm{lqr}}})}, \end{align*} where, $\textbf{M} \coloneqq (\hat{\textbf{R}}+ \hat{\textbf{B}}^\top \hat{\textbf{P}}^* \hat{\textbf{B}}) ( \hat{\textbf{K}}^* \hat{\textbf{K}}^{*\top} - \hat{\textbf{K}}_{\mathrm{lqr}} \hat{\textbf{K}}_{\mathrm{lqr}}^\top) + 2 \hat{\textbf{A}}^\top \hat{\textbf{P}}^* \hat{\textbf{B}} (\hat{\textbf{K}}^* - \hat{\textbf{K}}_{\mathrm{lqr}}).$ \end{theorem} \begin{remark} \label{rmk:contractibility} Contractibility of the pair $(A,B)$ entails more restriction than stabilizability or regularizability of the system \cite{talebi2020online} and is more recently employed in iterative data-guided control methods \cite{lale2020regret, agarwal2019online} to streamline finite-time analysis for dynamical systems. \end{remark} \end{comment} \section{Simulation} \label{sec:simulation} In this section, we apply our method to a series of identical turbocharged diesel engines with exhaust gas recirculation in a chain formation that work collectively for industrial power generation (\Cref{fig:factory}). 
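Each engine applies the converged feedback law of \Cref{cor:ultimate_stabilizability}; a minimal per-agent sketch (the gains $K^*$, $L^*$, the margin $\tau^*$, and the graph are assumed given; names are illustrative):

```python
import numpy as np

def distributed_policy(x, neighbors, K, L, tau, d):
    """Apply u_i = (K - L) x_i + (tau / (d - 1)) * L * (sum of neighbor states).

    x         : dict mapping agent id -> state vector of shape (n,)
    neighbors : dict mapping agent id -> list of neighbor ids in G
    K, L      : learned gain blocks of shape (m, n)
    tau, d    : converged stability margin and subgraph size
    """
    u = {}
    for i, xi in x.items():
        coupling = sum((x[j] for j in neighbors[i]), np.zeros_like(xi))
        u[i] = (K - L) @ xi + (tau / (d - 1)) * (L @ coupling)
    return u
```

Each agent only needs its own state and its neighbors' states, which is what makes the policy distributed.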
\begin{figure}[t] \centering \includegraphics[width = 0.6\textwidth]{Images/factory.pdf} \caption{Distributed control of multi-engine power generation in an industrial setting.} \label{fig:factory} \end{figure} The dynamics of the engines are assumed to be unknown and we apply \ac{D3PI} to find a model-free distributed policy based on the observed data from the system. The values of the continuous-time system parameters are given in \cite{jung2005comparison} as, \small \begin{align*} A = \begin{bmatrix} -0.4125 &-0.0248 & 0.0741 & 0.0089 & 0.0000 & 0.0000 \\ 101.5873 & -7.2651 & 2.7608 & 2.8068 & 0.0000 & 0.0000 \\ 0.0704 & 0.0085 & -0.0741 & -0.0089 & 0.0000 & 0.0200 \\ 0.0878 & 0.2672 & 0.0000 & -0.3674 & 0.0044 & 0.3962 \\ -1.8414 & 0.0990 & 0.0000 & 0.0000 & -0.0343 & -0.0330 \\ 0.0000 & 0.0000 & 0.0000 & -359.0000 & 187.5364 & -87.0316 \end{bmatrix}, \hspace{5mm} B = \begin{bmatrix} -0.0003 & 0.0005 \\ -0.0764 & 0.1149 \\ 0.0004 & 0.0000 \\ -0.0127 & 0.0016 \\ -0.0005 & -0.0011 \\ 0.0456 & -0.0075 \end{bmatrix}. \end{align*} \normalsize We normalize and discretize the dynamics with a sampling period of $\Delta T = 0.1s$. We set $Q_1 = Q_2 = \mathrm{I}_n$ and $R = \mathrm{I}_m$ with $n=6$ and $m=2$, and sample a random exploration signal from a zero-mean normal distribution. Given the number of engines $N\geq 5$, the maximum degree of a path graph is $d_{\max}=2$, and hence, $d=d_{\max}+1=3$. W.l.o.g., we take the first three engines ($i=1,2,3$) as the elements of $\mathcal{G}_d$. \begin{figure}[t] \centering \includegraphics[width=0.65\columnwidth]{Images/states.pdf} \caption{State trajectories of engine $i=4$ during and after the learning process.
Solid lines denote results from D3PI, whereas dashed lines refer to a stationary $\mathcal{G} \setminus \mathcal{G}_d$ during the learning phase.} \label{fig:states} \end{figure} \begin{figure}[t] \centering \begin{minipage}{.46\textwidth} \centering \includegraphics[width = 0.98\columnwidth]{Images/cost_time.pdf} \caption{Cumulative cost vs time plot.} \label{fig:cost_time} \end{minipage} % \hspace{7mm} \begin{minipage}{.46\textwidth} \centering \includegraphics[width=1.05\columnwidth]{Images/cost_agent.pdf} \caption{Cost vs number of agents plot.} \label{fig:cost_agent} \end{minipage} \end{figure} \noindent \\ Figure \ref{fig:states} demonstrates the state trajectories of node $i=4$ when \ac{D3PI} synthesis is ON (solid curves) and contrasts them with the naive application of model-free policy iteration to $\mathcal{G}_d$ that disregards the control of the rest of the network. The latter is referred to as \ac{D3PI}-OFF in the plots (dashed curves). As the plot suggests, when \ac{D3PI} is activated, the states of each engine remain near the origin throughout learning. Note that when \ac{D3PI} is ON, the noisy behavior of the trajectories is due to the (unidirectional) interconnections of $\mathcal{G}_d$---being persistently excited---with the rest of the network. Furthermore, the first few iterations of the learning and post-learning phases are magnified for better comparison of the trajectories. Figures \ref{fig:cost_time} and \ref{fig:cost_agent} outline the performance of our algorithm on the nodes inside $\mathcal{G} \setminus \mathcal{G}_d$ over $5000$ runs of the simulation. Figure \ref{fig:cost_time} depicts the cumulative logarithmic cost $\hat{\textbf{J}}_{\mathcal{G} \setminus \mathcal{G}_{d,\mathrm{learn}}}$ for all nodes $i\in\mathcal{V}_{\mathcal{G} \setminus \mathcal{G}_{d,\mathrm{learn}}}$ over the entire horizon of the algorithm's implementation for $N=10$.
As the figure shows, the algorithm has converged by the end of the first iteration at time-step $t=400$, where the cost indices become fixed, implying the convergence of the states to the origin (because the state cost matrix is positive definite). The result of our algorithm (blue) is compared with the case when \ac{D3PI} is OFF (red), and also with the optimal unstructured solution of LQR (for the same system parameters) as the baseline. The plot also verifies the stability of the proposed policy, as the cost remains bounded (and $\hat{\textbf{Q}} \succeq 0$). A similar implementation is performed for $N=5,6,\cdots,30$ and the final costs are reported in Figure \ref{fig:cost_agent}. The increase in costs is due to the cost added by the newly included agents in $\mathcal{G} \setminus \mathcal{G}_{d,\mathrm{learn}}$. As the plot suggests, the gap between our method and the case with inactive \ac{D3PI} shows the superiority of our algorithm for any problem setting with a different number of agents in the system. \section{Conclusion and Remarks} \label{sec:conclusion} In this paper, we propose the \ac{D3PI} algorithm as an efficient model-free distributed control method for potentially high-dimensional homogeneous linear systems that are interconnected through an underlying communication graph. We exploit the graph-based structure to propose a policy that reflects the sparsity pattern of the system. To achieve a structured controller for the entire network that remains stabilizing throughout the learning process, \ac{D3PI} requires temporary additional communication links in a small portion of the graph during the policy learning phase. The results of the synthesis of this subsystem are leveraged to achieve a distributed feedback mechanism for the entire system. Other than controllability of the unknown system, our algorithm makes no assumption on the dynamics' parameters.
Furthermore, our method does not require shutting down the rest of the network during the learning process and produces stabilizing controllers at each iteration. Extending the current results to a heterogeneous system of agents is an immediate future direction of our work. Moreover, \ac{D3PI} builds upon parameter estimation techniques that represent an end-to-end policy prediction directly from observed data. In a black-box linear setup, model-based methods---{\it i.e.}, extracting an approximate model from data and using it for control---can be integrated in this multiagent setup to further reduce the sample and time complexities of the algorithm. We leave these ideas for future investigation. \bibliographystyle{ieeetr}
\section{Introduction} Community networks or Do-It-Yourself networks (DIYs) are bottom-up built decentralized networks, deployed and maintained by their own users. In the early 2000s, community networks (CNs) gained momentum in response to the limited options for network connectivity in rural and urban communities. One successful effort of such a network is Guifi.net\footnote{http://guifi.net/}, located in the Catalonia region of Spain. Guifi.net is defined as an open, free and neutral community network built by its members: citizens and organizations pooling their resources and coordinating efforts to build and operate a local network infrastructure. Guifi.net was launched in 2004 and has since grown into a network of more than 30,000 operational nodes, which makes it the largest community network worldwide \cite{Braem2013}. Figure \ref{fig:traffic_guifi} shows the evolution of the total inbound and outbound Guifi.net traffic to the Internet over the last two years. Pink represents incoming traffic from the Internet and yellow represents outgoing traffic. Over these two years the traffic has doubled; the peaks are the result of new links and fiber optics in the backbone. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{guifi_traffic.png} \caption{Guifi Traffic} \label{fig:traffic_guifi} \end{figure} Similar to other community networks, Guifi.net aims to create a highly localized digital ecosystem. However, the predominant usage we have observed is to access cloud-based Internet services external to the community network. For instance, more than 50\% of the user-oriented services consumed in Guifi.net are gateway proxies that provide Internet connectivity and hence impose a heavy burden on the limited backbone links \cite{GuifiServices}.
For a long time, user-oriented services had not been developed locally because of the lack of streamlined mechanisms to exploit all the available resources within a community network, as well as other technological barriers. With the adoption of \emph{community network micro-clouds}\footnote{http://cloudy.community/}, i.e., the platform that enables cloud-based services in community networks, local user-oriented services gained considerable momentum. Community network users started creating their own homegrown services and using alternative open source software for many of today's Internet cloud services, e.g., data storage services and interactive applications such as Voice-over-IP (VoIP), video streaming, and P2P-TV. In fact, a significant number of services were already locally deployed and running within Guifi.net, including GuifiTV, graph servers, mail servers, and game servers \cite{COMNET}. All these services are provided by individuals, social groups, or small non-profit or commercial service providers. Because Guifi.net nodes are geographically distributed, given this set of local services, we need to decide where these services should be placed in the network. Obviously, without taking into account the underlying network resources, a service may suffer from poor performance, e.g., by sending large amounts of data across slow wireless links while faster and more reliable links remain underutilized. Therefore, the key challenge in community network micro-clouds is to determine the location, i.e., the servers at certain geographic points in the network, where the different services multiplexed on a shared infrastructure will be running. While conceptually straightforward, it is challenging to compute an optimal decision due to the dynamic nature of community networks and usage patterns.
In this work we aim to address the following question: \textit{``Given a community network cloud infrastructure, what is an effective and low-complexity service placement solution that maximises end-to-end performance (e.g., bandwidth)?''} Our preliminary results show that the proposed algorithm consistently outperforms the current random placement adopted in Guifi.net by 35\% in terms of bandwidth gain. More importantly, as the number of services increases, the gain tends to increase accordingly. \label{sec:qmp} \section{Need for Localized Services} In this section, we characterize wireless community networks by presenting our experimental measurements in a production example over five months, which expose the necessity of deploying localized services \cite{ICN} and justify our motivation for proposing an intelligent placement algorithm. \subsection{QMP Network: A Brief Background} The network we consider began deployment in 2009 in a quarter of the city of Barcelona, Spain, called Sants, as part of the \textit{Quick Mesh Project\footnote{http://qmp.cat/Home}} (QMP). In 2012, nodes from the \textit{Universitat Polit\`{e}cnica de Catalunya} (UPC) joined the network, supported by the EU CONFINE\footnote{https://confine-project.eu/} project. We shall refer to this network as \textit{QMPSU} (from Quick Mesh Project at Sants-UPC). QMPSU is part of the Guifi community network, which has more than 30,000 operational nodes. At the time of writing, QMPSU has 61 nodes, 16 at UPC and 45 at Sants. There are two gateways, one in the UPC Campus and another in Sants, that connect QMPSU to the rest of Guifi.net (see Figure \ref{fig:qmpsantsupc}). A detailed description of QMPSU can be found in \cite{LlorencMSWiM}, and a live monitoring page updated hourly is available on the Internet\footnote{http://dsg.ac.upc.edu/qmpsu/index.php}.
Typically, QMPSU users have an outdoor router (OR) with a Wi-Fi interface on the roof, connected through Ethernet to an indoor AP (access point) as a premises network. The most common OR in QMPSU is the NanoStation M5, which integrates a sectorial antenna with a router furnished with a wireless 802.11an interface. Some strategic locations have several NanoStations that provide larger coverage. In addition, some links of several kilometers are set up with parabolic antennas (NanoBridges). ORs in QMPSU are flashed with a Linux distribution developed within the QMP project, which is a branch of OpenWRT\footnote{https://openwrt.org/} and uses BMX6 as the mesh routing protocol \cite{neumannBMX6}. \subsection{Characterization: Bandwidth-Hungry} In the following, we characterize the network performance of the QMP network. Our goal is to determine the key features of the network and its nodes; in particular, to understand the network metrics that could help us design new heuristic frameworks for intelligent service placement in community networks \cite{AINTEC}. Measurements have been obtained by connecting via SSH to each QMPSU OR and running basic system commands available in the QMP distribution. This method has the advantage that no changes or additional software need to be installed on the nodes. Live measurements have been taken hourly over five months, from October 2015 to February 2016. We use this data to analyse the main aspects of the QMP network. \begin{figure*}[htb] \begin{minipage}[t]{0.32\linewidth} \centering \includegraphics[scale=0.5]{qmpsantsupc_topology}\\[-2mm] \caption{QMPSU network.
Two main gateways are underlined.}\label{fig:qmpsantsupc} \end{minipage}\hspace{0.02\linewidth} \begin{minipage}[t]{0.31\linewidth} \centering \includegraphics[scale=0.5]{presence}\\[-2mm] \caption{Nodes and links presence.}\label{fig:presence} \end{minipage}\hfill % \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[scale=0.5]{traffic_busy_hour_ecdf}\\[-2mm] \caption{Link traffic in the busy hour ECDF.}\label{fig:traffic-bh-ecdf} \end{minipage}\hfill \end{figure*} \begin{figure*}[htb] % \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[scale=0.5]{traffic-by-link}\\[-2mm] \caption{Traffic in the 3 busiest links.}\label{fig:link-traffic} \end{minipage}\hfill % \begin{minipage}[t]{0.32\linewidth} \centering \includegraphics[scale=0.5]{bw_ecdf}\\[-2mm] \caption{Throughput ECDF.}\label{fig:bw-ecdf} \end{minipage}\hfill % \begin{minipage}[t]{0.32\linewidth} \centering \includegraphics[scale=0.5]{bw-by-link}\\[-2mm] \caption{Throughput in the 3 busiest links.}\label{fig:link-bw} \end{minipage} \end{figure*} Figure \ref{fig:presence} shows the node and link presence. We define presence as the percentage of captures in which a given node or link is observed. Overall, 90 different nodes were detected. Of those, only 61 were alive during the whole measurement period, with a presence higher than 98\%. Around 30 nodes were missing in the majority of the captures (i.e., presence less than 10\%). These are temporarily working nodes from other mesh networks and laboratory devices used for various experiments. Figure \ref{fig:presence} also reveals that 56\% of the links used between nodes are unidirectional and the others are bidirectional. Figure \ref{fig:traffic-bh-ecdf} depicts the Empirical Cumulative Distribution Function (ECDF) of the average traffic sent on each of the links in the busy hour. The overall average traffic observed is 70 kbps.
Figure \ref{fig:link-traffic} shows the average traffic in both directions (upload/download) of the three busiest links. We characterize the wireless links of the QMP network by studying their throughput. Figure \ref{fig:bw-ecdf} shows the ECDF of the throughput of the links. The figure shows that the link throughput can be fitted with an exponential distribution with mean~21.8 Mbps. In order to see the variability of the throughput, Figure \ref{fig:link-bw} shows the throughput averages in both directions (upload and download) of the three busiest links (the same links as in Figure \ref{fig:link-traffic}). When we compare Figure \ref{fig:link-bw} with Figure \ref{fig:link-traffic}, we observe that the throughput is only slightly affected by the traffic in the links. Solid and dashed lines are used to identify the measurements in each direction of the links (dashed line for download, solid line for upload). It is interesting to note that the asymmetry of the throughputs measured in both directions is not always due to the asymmetry of the user traffic. For instance (node GSgranVia255), around 6am, when the user traffic is at its lowest and equal in both directions, the asymmetry of the link throughputs observed in Figure \ref{fig:traffic-bh-ecdf} remains the same. We thus conclude that this asymmetry must be due to link characteristics, such as the level of interference present at each end or different transmission powers. A significant number of applications that run on Guifi.net and the QMP network are network-intensive (bandwidth and delay sensitive), transferring large amounts of data between the network nodes \cite{COMNET}. The performance of such applications depends not just on computational and disk resources but also on the network bandwidth between the nodes on which they are deployed. Therefore, the placement of such services in the network is of high importance.
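The exponential fit reported for the throughput ECDF can be reproduced from raw throughput samples roughly as follows (a sketch on synthetic data; the actual measurement traces are not shown here):

```python
import numpy as np

def ecdf(samples):
    """Empirical CDF evaluated at the sorted sample points."""
    x = np.sort(np.asarray(samples, dtype=float))
    return x, np.arange(1, x.size + 1) / x.size

def exponential_fit(samples):
    """MLE exponential fit (mean = sample mean) plus the largest deviation
    between the ECDF and the fitted model CDF (a Kolmogorov-Smirnov-style check)."""
    x, y = ecdf(samples)
    mean = x.mean()
    model = 1.0 - np.exp(-x / mean)   # CDF of Exp(mean)
    return mean, float(np.max(np.abs(y - model)))
```

A small maximum deviation supports modelling the link throughput as exponential with a mean close to the reported 21.8 Mbps.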
Here are some observations (features) that we captured from the measurements in the QMP network: \begin{itemize} \item The QMP network is highly dynamic and diverse for many reasons, e.g., its community nature in an urban area; its decentralised organic growth, with extensive diversity in the technological choices for hardware, wireless media, link protocols, channels, routing protocols, etc.; and its mesh nature. The current network deployment model is based on geographic singularities rather than QoS. The network is not scale-free. The topology is organic and different from a conventional ISP network. \item The resources are not uniformly distributed in the network. Wireless links have asymmetric quality for services (30\% of the links have a deviation higher than 30\%). We observed a highly skewed traffic pattern (Figure \ref{fig:traffic-bh-ecdf}) and a highly skewed bandwidth distribution (Figure \ref{fig:bw-ecdf}). \end{itemize} The \textit{organic (random) placement scheme} currently used in the Guifi.net community network is not sufficient to capture the dynamics of the network and therefore fails to deliver satisfying QoS. The strong assumption underlying random service placement, i.e., a uniform distribution of resources, does not hold in such environments. Furthermore, the deployed services have different QoS requirements. Services that require intensive inter-component communication (e.g., a streaming service) can perform better if the replicas (service components) are placed close to each other over high-capacity links \cite{CLOUDCOM15}. On the other side, bandwidth-intensive services (e.g., distributed storage, video-on-demand) can perform much better if their replicas are as close as possible to their end users (e.g., reducing the overall bandwidth used for service provisioning).
Our goal is to build on this insight and design a network-aware service placement algorithm that improves service quality and network performance by optimizing the usage of scarce resources, such as bandwidth, in community networks. \label{sec:model} \section{Bandwidth-Aware Placement} The deployment and sharing of services in community networks is made possible through \emph{community network micro-clouds} (CNMCs). The idea of a CNMC is to place the cloud closer to community end-users, so users can have fast and reliable access to the service. To reach its full potential, a CNMC needs to be carefully deployed in order to utilize the available bandwidth resources. \subsection{Assumptions} In a CNMC, a server or low-power device is directly connected to the wireless base-station, providing cloud services to users that are either within a reasonable distance of, or directly connected to, the base-station. These nodes are what we call core-graph nodes in Guifi.net. It is important to remark that the services targeted in this work are at the infrastructure level (IaaS), like cloud services in current dedicated datacenters (we assume QMP nodes are core-graph nodes). Therefore, the services are deployed directly over the core resources of the network (nodes in the core-graph) and accessed by base-graph clients. Services can be deployed by Guifi.net users or administrators. The services we consider can be centralized or distributed. The distributed services can be composite services (non-monolithic) built from simpler parts, e.g., video streaming. These parts, or components, of a service create an overlay and interact with each other to offer more complex services. A service may or may not be tied to a specific node of the network. Each node can host one or more services. In this work we assume an offline service placement approach, where a single application or a set of applications is placed ``in one shot'' in the underlying physical network.
We might rearrange the placement of the same service over time because of service performance fluctuations (e.g., weather conditions, node availability, changes in usage patterns, etc.). We do not consider real-time service migration. \subsection{Formulation and Notation} We call the community network the \emph{underlay}, to distinguish it from the \emph{overlay} network built by the services. The underlay network is assumed to be connected, and we assume each node knows whether other nodes can be reached (i.e., the next hop is known). We model the underlay graph as $G = (OR, L)$, where $OR$ is the set of outdoor routers present in the CN and $L$ is the set of wireless links that connect them. Let $f_{ij}$ be the bandwidth of the path from node $i$ to node $j$. We want a partition into $k$ clusters, $S = \{S_1, S_2, S_3,\ldots,S_k\}$, of the set of nodes in the mesh network. The cluster head $i$ of cluster $S_i$ is the node where the service will be deployed. The partition maximizing the bandwidth from the cluster heads to the other nodes in their clusters is given by: \begin{equation} \label{equ1} \operatorname{arg\,max}_S \sum_{i=1}^k \sum_{j\in S_i} f_{ij} \end{equation} \subsection{Proposed Algorithm: BASP} \label{sec:algorithm} We designed a bandwidth-aware algorithm, BASP, that allocates services taking into account the bandwidth of the network. We take a network snapshot (capture) of the link bandwidths in the QMP network\footnote{http://tomir.ac.upc.edu/qmpsu/index.php?cap=56d07684}. Our bandwidth-aware service placement algorithm BASP (see Algorithm \ref{alg:basp}) runs in three phases.
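To make the objective concrete, the following minimal Python sketch scores a candidate partition against a bandwidth matrix. The node ids, the matrix values, and the `placement_score` helper are purely illustrative assumptions, not part of BASP or of the QMP data set:

```python
# Score a candidate partition under the objective of Eq. (1):
# the sum, over clusters, of the bandwidth f_ij from each cluster
# head i to its member nodes j. The matrix below is hypothetical.

def placement_score(f, partition):
    """partition maps a cluster-head node id to the list of its member ids."""
    return sum(f[head][j] for head, members in partition.items() for j in members)

f = {
    0: {1: 20.0, 2: 5.0, 4: 8.0},
    3: {1: 7.0, 2: 6.0, 4: 30.0},
}
# Two clusters: head 0 serves {1, 2}; head 3 serves {4}.
print(placement_score(f, {0: [1, 2], 3: [4]}))  # 55.0
```

The placement problem is then to choose the partition (heads and memberships) that maximizes this score.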
\begin{algorithm}[t] \caption{Bandwidth-aware Service Placement (BASP)} \label{alg:basp} \begin{algorithmic}[1] \Require{$G(V_n,E_n)$}\Comment{Network graph} \Statex $S \gets \{S_1, S_2, S_3,\ldots,S_k\}$ \Comment{$k$-partition into clusters} \Statex $bw_i$ \Comment{bandwidth of node $i$} \item[] \Procedure{PerformKmeans}{$G, k$} \State\Return{$S$} \EndProcedure \Procedure{FindClusterHeads}{$S$} \State $clusterHeads \gets list()$ \ForAll{$S_k \in S$} \ForAll{$i \in S_k$} \State $bw_i \gets 0$ \ForAll{$j \in setdiff(S_k,i)$} \State $bw_i \gets bw_i+estimate.route.bandw(G,i,j)$ \EndFor \EndFor \State $clusterHeads \gets clusterHeads \cup \{\operatorname{arg\,max}_{i} bw_i\}$ \EndFor \State\Return{$clusterHeads$} \EndProcedure \Procedure{RecomputeClusters}{$clusterHeads, G$} \State $S' \gets list()$ \ForAll{$i \in clusterHeads$} \State $cluster_i \gets list()$ \ForAll{$j \in setdiff(G,i)$} \State $bw_j \gets estimate.route.bandw(G,j,i)$ \If{$bw_j$ is the highest among all cluster heads} \State $cluster_i \gets cluster_i \cup \{j\}$ \EndIf \EndFor \State $S' \gets S' \cup \{cluster_i\}$ \EndFor \State\Return{$S'$} \EndProcedure \end{algorithmic} \end{algorithm} (i) Initially, we use the naive k-means partitioning algorithm to group nodes based on their geo-location. The idea is to obtain clusters of locations that are close to each other. The k-means algorithm forms clusters of nodes based on the Euclidean distances between them, where the distance metric in our case is computed from the geographical coordinates of the nodes. In the traditional k-means algorithm, $k$ of the $n$ nodes are first randomly selected as cluster heads (centroids). Each of the remaining nodes is assigned to the cluster head nearest to it according to the Euclidean distance. After each node in the network is assigned to one of the $k$ clusters, the centroid of each cluster is re-calculated. Grouping nodes based on geo-location is in line with how Guifi.net is organized. The nodes in Guifi.net are organized into a tree hierarchy of \emph{zones} \cite{VegaCN}.
A zone can represent nodes from a neighborhood or a city. Each zone can be further divided into child zones that cover smaller geographical areas where nodes are close to each other. From the service perspective, we consider placements inside a particular zone. (ii) The second phase of the algorithm is based on finding the cluster head that maximizes the bandwidth between the head and the member nodes of the cluster formed in the first phase. The cluster heads computed in this phase are the ones having the maximum bandwidth to the other nodes in their cluster $S_k$. The cluster heads are the node candidates for service placement. (iii) The third and last phase of the algorithm reassigns the nodes to the selected cluster heads having the maximum bandwidth. Regarding computational complexity, the cost of the naive brute-force method can be estimated via the \textit{Stirling number of the second kind} \cite{Stirling}, which counts the number of ways to partition a set of $n$ elements into $k$ nonempty subsets, i.e., $\frac{1}{k!} \sum_{j=0}^{k} (-1)^{k-j} {{k}\choose{j}} j^n$, which grows as $\mathcal{O}(k^n)$. For BASP, by contrast, the k-means clustering problem with fixed $k$ and dimension $d$ (in our case $n=54$ and $d=2$) can be solved exactly in time $\mathcal{O}(n^{dk+1}\log{n})$, where $n$ is the number of entities to be clustered. The complexity of computing the cluster heads in phase two is $\mathcal{O}(n^2)$, and that of reassigning the clusters in phase three is $\mathcal{O}(n)$. Therefore, the overall complexity of BASP is $\mathcal{O}(n^{2k+1}\log{n})$, which is significantly smaller than that of the brute-force method. \section{Preliminary Evaluation} \label{sec:evaluation} Solving the problem stated in Equation \ref{equ1} by brute force for arbitrary $n$ and $k$ is NP-hard. For this reason we devised our heuristic.
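As a sanity check of the combinatorial count above, the Stirling number of the second kind can be evaluated directly from the inclusion-exclusion formula. This is a standalone check of the count, not part of BASP:

```python
from math import comb, factorial

def stirling2(n, k):
    """Stirling number of the second kind: the number of ways to partition
    an n-set into k nonempty subsets, via inclusion-exclusion."""
    return sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1)) // factorial(k)

print(stirling2(4, 2))   # 7: {12|34, 13|24, 14|23, 1|234, 2|134, 3|124, 4|123}
print(stirling2(54, 3))  # the number of partitions a brute-force search faces
```

Even for the modest instance $n=54$, $k=3$, the count is astronomically large, which is why an exhaustive search over all partitions is hopeless.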
Initially we use the k-means algorithm for a first selection of the clusters. Then, we limit the choice of cluster heads to the clusters obtained with k-means. Inside these clusters we compute the cluster heads having the maximum bandwidth to the other nodes. To emphasize the importance of phases two and three, in this section we compare \emph{BASP} to \emph{Naive K-Means}, which partitions the nodes into $k$ groups such that the sum of squared distances from the nodes to the assigned cluster heads (centroids) is minimized. At the minimum, all cluster heads in \emph{Naive K-Means} are at the mean of their Voronoi sets (the sets of nodes nearest to each cluster head). Our experiment comprises 5 runs, and the presented results are averaged over all successful runs. Each run consists of 15 repetitions. Figure \ref{fig:bw_clusters} depicts the average bandwidth to the cluster heads obtained with the \emph{Naive K-Means} algorithm and with our \emph{BASP} algorithm. The figure reveals that for any value of $k$, our \emph{BASP} algorithm outperforms the \emph{Naive K-Means} algorithm. For $k=2$, the average bandwidth to the cluster head increases from 18.3 Mbps (obtained with Naive K-Means) to 27.7 Mbps (obtained with our BASP algorithm), i.e., a 40\% increase. The biggest increase, of 50\%, occurs when $k=7$. Based on the observations from Figure \ref{fig:bw_clusters}, the gap between the two algorithms grows as $k$ increases, and $k$ increases as the network grows. \begin{figure}[t] \centering \includegraphics[width=3.2in,height=1.8in]{basp.pdf} \caption{Average bandwidth to the cluster heads} \label{fig:bw_clusters} \end{figure} Note that our heuristic enables us to select nodes (cluster heads) that provide much higher bandwidth than any random or naive approach. However, if we were to look for the optimum bandwidth within the clusters (i.e., the optimum average bandwidth for the cluster), this problem would turn out to be NP-hard.
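Before turning to the optimum, the three phases of BASP can be sketched in a few lines of Python. This is a simplified, self-contained illustration on hypothetical coordinates and bandwidths: deterministic first-$k$ seeding replaces the random centroid draw, and the bandwidth estimation routine is replaced by a lookup in a toy matrix:

```python
import math

def basp(coords, bw, k, iters=20):
    """Sketch of the three BASP phases on hypothetical inputs.
    coords: node id -> (x, y); bw: bw[i][j] = path bandwidth i -> j.
    For determinism, k-means is seeded with the first k nodes."""
    nodes = list(coords)
    # Phase (i): naive k-means on geographic coordinates.
    centroids = [coords[n] for n in nodes[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for n in nodes:
            nearest = min(range(k), key=lambda c: math.dist(coords[n], centroids[c]))
            clusters[nearest].append(n)
        centroids = [
            (sum(coords[n][0] for n in cl) / len(cl),
             sum(coords[n][1] for n in cl) / len(cl)) if cl else centroids[c]
            for c, cl in enumerate(clusters)
        ]
    # Phase (ii): in each cluster, the head maximizes total bandwidth to members.
    heads = [max(cl, key=lambda i: sum(bw[i][j] for j in cl if j != i))
             for cl in clusters if cl]
    # Phase (iii): reassign the remaining nodes to the head with the best bandwidth.
    members = [n for n in nodes if n not in heads]
    return {h: [n for n in members if max(heads, key=lambda x: bw[x][n]) == h]
            for h in heads}

coords = {1: (0, 0), 2: (1, 0), 3: (10, 0), 4: (11, 0)}
bw = {1: {2: 20, 3: 1, 4: 1}, 2: {1: 10, 3: 1, 4: 1},
      3: {1: 1, 2: 1, 4: 5}, 4: {1: 1, 2: 1, 3: 30}}
print(basp(coords, bw, k=2))  # {1: [2], 4: [3]}
```

On this toy instance, k-means first separates the two geographic groups $\{1,2\}$ and $\{3,4\}$; phase two then picks nodes 1 and 4 as heads (highest outgoing bandwidth within each cluster), and phase three confirms the memberships.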
Finding the optimum is NP-hard, because it entails running our algorithm for all combinations of size $k$ from a set of size $n$. This is a combinatorial problem that becomes intractable even for small values of $k$ and $n$ (e.g., $k=5$, $n=54$). For instance, to find the optimum bandwidth for a cluster set of size $k=3$, the algorithm needs to run for every possible (non-repeating) combination of size 3 from the set of size 54. That is, for 54 nodes we end up with ${\sim}$25K combinations ($\binom{54}{3}$), i.e., 25K candidate sets of cluster heads to evaluate. We managed to do this, and the optimum average bandwidth obtained was 62.7 Mbps. The optimum bandwidth obtained for $k=2$ was 49.1 Mbps, and for $k=1$ it was 16.9 Mbps. However, the computation took very long (65 hours for $k=3$, 30 minutes for $k=2$, etc.), compared to BASP, which took 23 seconds for $k=3$ and 15 seconds for $k=2$. Table \ref{tab:centrality} shows the improvement of BASP over the Naive K-Means algorithm. Furthermore, Table \ref{tab:centrality} shows some centrality measures and graph properties obtained for each cluster head. To summarize, BASP is able to achieve good bandwidth performance with very low computational complexity.
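The exhaustive search described above can be sketched with `itertools.combinations`. The bandwidth matrix is again a hypothetical toy instance, and the score simply assigns each non-head node to its best head:

```python
from itertools import combinations
from math import comb

# Brute-force optimum: try every size-k set of cluster heads, assign each
# remaining node to its best head, and keep the highest-scoring head set.
# bw is a hypothetical bandwidth matrix, not QMP measurement data.

def optimum_heads(bw, k):
    nodes = list(bw)
    return max(combinations(nodes, k),
               key=lambda heads: sum(max(bw[h][n] for h in heads)
                                     for n in nodes if n not in heads))

bw = {1: {2: 20, 3: 1, 4: 1}, 2: {1: 10, 3: 1, 4: 1},
      3: {1: 1, 2: 1, 4: 5}, 4: {1: 1, 2: 1, 3: 30}}
print(optimum_heads(bw, 2))  # (1, 4): heads 1 and 4 serve nodes 2 and 3
print(comb(54, 3))           # 24804 candidate head sets for n=54, k=3
```

The number of candidate head sets grows as $\binom{n}{k}$, and each evaluation requires bandwidth estimates for all head-member pairs, which is what makes the exhaustive search so slow in practice.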
\begin{figure}[hb] \centering \includegraphics[width=2.8in,height=2.8in]{original.pdf} \caption{Neighborhood Connectivity} \label{fig:neighborhood} \end{figure} \begin{table*} \caption{Centrality measures for cluster heads} \label{tab:centrality} \centering \begin{adjustbox}{max width=\textwidth} \begin{tabular}{|*{13}{c|}} \hline \multicolumn{1}{|c}{} & \multicolumn{1}{|c}{\textbf{k=1}} & \multicolumn{2}{|c}{\textbf{k=2}} & \multicolumn{3}{|c|}{\textbf{k=3}} & \multicolumn{5}{|c|}{\textbf{k=5}} \\ \hline Clusters [node id] & C1 [27] & C1 [20] &C2 [39] & C1 [20] & C2 [39] &C3 [49] & C1 [20] &C2 [4] &C3 [49] & C4 [51] &C5 [39] \\ \hline \textbf{Head degree} & 20 & 6 & 6 & 6 & 6 & 10 & 6 & 10 & 10 & 12 & 6 \\ \hline \textbf{Neighborhood Connectivity} & 7.7 & 9.6 & 9.6 & \textbf{9.6} & \textbf{9.6} & \textbf{10.8} & 9.6 & 8.7 & 10.8 & 8.1 & 9.6 \\ \hline \textbf{Diameter} & 6 & 5 & 3 & 4 & 3 & 5 & 4 & 2 & 3 & 1 & 3 \\ \hline \textbf{Naive K-Means Bandwidth [Mbps]} & 16.6 & \multicolumn{2}{|c|}{18.3}& \multicolumn{3}{|c|}{23} & \multicolumn{5}{|c|}{23.4} \\ \hline \textbf{BASP Bandwidth [Mbps]} & 16.9 & \multicolumn{2}{|c|}{27.7}& \multicolumn{3}{|c|}{32.9} & \multicolumn{5}{|c|}{38.5} \\ \hline \textbf{BASP Running Time} & 7 sec & \multicolumn{2}{|c|}{15 sec}& \multicolumn{3}{|c|}{23 sec} & \multicolumn{5}{|c|}{30 sec} \\ \hline \end{tabular} \end{adjustbox} \end{table*} \textbf{Correlation with centrality metrics:} Figure \ref{fig:neighborhood} shows the neighborhood connectivity graph of the QMP network. The neighborhood connectivity of a node $n$ is defined as the average connectivity of all neighbors of $n$. In the figure, nodes with low neighborhood connectivity values are depicted with bright colors and those with high values with dark colors. It is interesting to note that the nodes with the highest neighborhood connectivity are the cluster heads obtained with our BASP algorithm. The cluster heads (for $k=2$ and $k=3$) are marked with a rectangle in the graph.
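The neighborhood connectivity metric used here can be computed directly from an adjacency list; the graph below is a toy example, not the QMP topology:

```python
# Neighborhood connectivity of node n: the average degree (connectivity)
# of n's neighbors. The adjacency list is a toy graph for illustration.

def neighborhood_connectivity(adj, n):
    return sum(len(adj[m]) for m in adj[n]) / len(adj[n])

adj = {
    'a': ['b', 'c'],
    'b': ['a', 'c', 'd'],
    'c': ['a', 'b'],
    'd': ['b'],
}
print(neighborhood_connectivity(adj, 'a'))  # (deg b + deg c) / 2 = 2.5
print(neighborhood_connectivity(adj, 'd'))  # deg b = 3.0
```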
A deeper investigation of the relationship between service placement and network topological properties is out of the scope of this paper and is reserved for future work. \section{Related Work} \label{sec:related-work} Service placement is a key function of cloud management systems. Typically, by monitoring all the physical and virtual resources of a system, it aims to balance load through the allocation, migration, and replication of tasks.
\textbf{Data centers:} Choreo \cite{Choreo} is a measurement-based method for placing applications in cloud infrastructures so as to minimize an objective function such as application completion time. Choreo makes fast measurements of cloud networks using packet trains as well as other methods, profiles application network demands using a machine-learning algorithm, and places applications using a greedy heuristic, which in practice is much more efficient than finding an optimal solution. In \cite{ambient} the authors propose an optimal allocation solution for ambient intelligence environments, using task replication to avoid network performance degradation. Volley \cite{VolleyNSDI2010} is a system that performs automatic data placement across the geographically distributed datacenters of Microsoft. Volley analyzes the logs of requests using an iterative optimization algorithm based on data access patterns and client locations, and outputs migration recommendations back to the cloud service. \textbf{Distributed clouds:} There are a few works that provide service placement in distributed clouds with network-aware capabilities. The work in \cite{SIGCOMM12} proposes efficient algorithms for the placement of services in distributed cloud environments. Their algorithms need input on the status of the network, computational resources, and data resources, which are matched to application requirements. In \cite{www12} the authors propose a selection algorithm to allocate resources for service-oriented applications, and the work in \cite{INFOCOM12} focuses on resource allocation in distributed small datacenters. \textbf{Service migration:} Regarding service migration in distributed clouds, a few works have appeared recently. The authors of \cite{ServiceMigration1} and \cite{ServiceMigration2} study the dynamic service migration problem in mobile edge-clouds that host cloud-based services at the network edge.
They formulate a sequential decision-making problem for service migration using the framework of Markov Decision Processes (MDPs) and illustrate the effectiveness of their approach by simulation using real-world mobility traces of taxis in San Francisco. The work in \cite{ServiceMigration3} studies when services should be migrated in response to user mobility and demand variation. While our focus in this paper is to design a low-complexity service placement heuristic for community network clouds that maximises bandwidth, another closely related work is \cite{davide}, which proposed several algorithms that minimize the coordination and overlay cost across a network. \section{Conclusion} \label{sec:conclusion} In this paper, we first motivated the need for bandwidth-aware service placement on community network micro-cloud infrastructures. Community networks provide a perfect scenario for deploying and using community services in a contributory manner. Much previous work in CNs has focused on better ways to design the network to avoid hot spots and bottlenecks. As services become more network-intensive, they can become bottlenecked by the network, even in well-provisioned clouds. The situation in community network clouds is even more challenging, given the limited capacity of nodes and links and the unpredictable network performance. Without a network-aware system for placing services, poor paths can be chosen while faster, more reliable paths go unused. Furthermore, we proposed a low-complexity service placement heuristic called BASP to maximise the bandwidth allocation when deploying a CNMC. We presented the algorithmic details, analysed its complexity, and carefully evaluated its performance with realistic settings. Our preliminary results show that BASP consistently outperforms the currently adopted random placement in Guifi.net by 35\%. Moreover, as the number of services increases, the gain tends to increase accordingly.
As future work, we plan to deploy our service placement algorithm in a real network segment of Guifi.net, using real services, and to quantify the performance and effects of the algorithm.
\section{Introduction} Volume-Phase Holographic (VPH) gratings potentially have many advantages over classical surface-relief gratings (Barden, Arns \& Colburn 1998; Barden et al. 2000). They are already in operation in some existing astronomical spectrographs (Kashiwagi et al. 2004), and their use is also planned for a number of forthcoming instruments (Smith et al. 2004). While the main applications of VPH gratings are currently in optical spectrographs, they will also be useful for near-infrared (NIR) spectrographs if their performance at low temperatures is satisfactory. In particular, the diffraction efficiency and its dependence on wavelength should be confirmed. Contraction of the dichromated gelatin with decreasing temperature could cause variations in the line density and in the diffraction efficiency profile (the thickness of the gelatin layer is one of the parameters that define the diffraction efficiency). Since thermal cycling may cause some deterioration of a VPH grating and reduce its operational lifetime, we also need to investigate whether the characteristics vary with successive thermal cycles. Previously, we tested a VPH grating at 200 $K$ and confirmed that its performance is nearly independent of temperature over 5 thermal cycles (Tamura et al. 2003, 2004). While cooling to 200 $K$ can be sufficient for a spectrograph operating at wavelengths out to $\sim$ 1.8 $\mu$m ($H$ band), such as the Fiber Multi Object Spectrograph (FMOS; e.g. Kimura et al. 2003), a much lower temperature (e.g., 100 $K$) is required to extend the spectral coverage of a spectrograph out to $\sim$ 2.4 $\mu$m ($K$ band). In this paper, measurements of the diffraction efficiency of VPH gratings at 100 $K$ and at room temperature are reported. Pictures of the gratings investigated are shown in Fig. \ref{vphpics}, both of which were manufactured by Ralcon Development Lab. The one in the left panel of Fig.
\ref{vphpics} has a line density of 385 lines/mm, and the peak diffraction efficiency occurs around 1.3 $\mu$m at the Bragg condition when the incident angle of the input beam to the normal of the grating surface is 15$^{\circ}$. The measurements of this grating are performed at wavelengths from 0.9 $\mu$m to 1.6 $\mu$m. Since it is important to see whether the performance at low temperatures differs from grating to grating (Bianco et al. 2003; Blais-Ouellette et al. 2004; Branche et al. 2004), we also investigate a different VPH grating, which is shown in the right panel of Fig. \ref{vphpics}. (This grating was provided as a free sample for demonstration purposes only.) The line density is 300 lines/mm and the peak efficiency is obtained around 0.7 $\mu$m for an incident angle of 6$^{\circ}$. The measurements of this grating are performed at wavelengths from 0.5 $\mu$m to 0.9 $\mu$m. \section{The test facility and measurements} In Figs. \ref{system1} and \ref{system2}, the overall configuration of the optical components used for the measurements is indicated (detailed information on the main components is given in Table \ref{comps}). Light exiting the monochromator is fed into the cryogenic chamber through the fiber cable (Fig. \ref{system1}). Lenses with the same specifications are attached to both ends of this fiber cable so that a collimated beam received at one end exits from the other end in the cryogenic chamber. This beam is directed towards the central axis of the cold bench, parallel to the bench, and illuminates the central part of the VPH grating at a controlled incident angle. The spectral band-width of this input beam is set by the slit width at the exit of the monochromator. The slit width and the corresponding spectral band-width are set to 0.5 mm and $\sim$ 0.01 $\mu$m, respectively, throughout the measurements. The input beam diameter is $\sim 25$ mm, which is defined by the lens diameter of the fiber cable assembly.
The incident beam is then diffracted by the grating. The output collimator lens and the VPH grating are rotated independently around the central axis of the cold bench so that the diffracted beam for a given combination of incident angle and wavelength goes through the window towards the camera section, consisting of lenses and a detector (Fig. \ref{system2}). The output fiber core is thus re-imaged on the detector. The basic measurement procedure is as follows. First, the brightness of the lamp and the wavelength of light exiting the monochromator are fixed (the brightness of the lamp is kept constant by a stabilized power supply), and the total intensity included in the image of the fiber core is measured without a VPH grating. This measurement is performed at all the sampling wavelengths and is carried out both at room temperature and at cold temperature. The brightness of the lamp can be changed when moving from one wavelength to another; a higher brightness is used at the shortest and longest wavelengths because the system throughput is lower there. These data are used to normalize the intensities measured when a VPH grating is inserted into the test set-up and to calculate the diffraction efficiency. Since marginal differences ($\sim 2$\%) were found between the intensities measured without a VPH grating at room temperature and at cold temperature, we use the intensities taken at room (cold) temperature to normalize those measured with a VPH grating at room (cold) temperature, respectively. Next, a VPH grating is inserted and the intensity of the first-order ($m=+1$) diffracted light is measured for incident angles of 10$^{\circ}$, 15$^{\circ}$, and 20$^{\circ}$ at all the sampling wavelengths. Once all these measurements are performed, a cooling cycle of the VPH grating is started.
The temperature of the grating is monitored with a calibrated silicon diode sensor on the grating surface, close to the edge of the grating but not illuminated by the input beam. For good thermal contact between the grating and the sensor, and hence accurate measurements of the grating temperature, we apply a thin layer of grease to increase the contact area between the two surfaces. We also use a device to keep pushing the sensor lightly against the grating during a thermal cycle; this device is thermally insulated from the metal components of the test facility. An example of the temperature variation is shown in Fig. \ref{tempmon}. When the temperature of the grating falls below 100 $K$, we start the same sequence of measurements as above while running the compressor and cold heads. There is no closed-loop control of the grating temperature, as the rate of temperature variation is very low at 100 $K$. The temperature of the grating therefore stays at approximately 90 $-$ 100 $K$ for the duration of the measurements. After the measurements at $\sim$ 100 $K$, the compressor and cold heads are switched off and the VPH grating is allowed to warm up passively. The measurements are repeated when the temperature is back at the ambient temperature. During these thermal cycles, the cryogenic chamber is kept evacuated to $\sim$ 10$^{-7}$ Torr. \section{Results and discussions} First, we show results from the VPH grating for NIR wavelengths. In Fig. \ref{effnir}, the measured efficiency of the first-order ($m=+1$) diffraction is plotted against wavelength for incident angles of 10$^{\circ}$, 15$^{\circ}$, and 20$^{\circ}$ in the left, middle, and right panels, respectively. The circles indicate the data points at room temperature, and the triangles those at 100 $K$.
The error bars indicate the random errors in the measurements estimated from the pixel-to-pixel variation of background intensities in the images of the fiber core and the subsequent uncertainty of background subtraction. Note that the large error at $0.9$ $\mu$m is due to the lower sensitivity of the detector at the edge of its spectral coverage. For clearer presentation, these error bars are attached only to the data points of the cold test, but those of the warm test are similar. These results suggest that a VPH grating can withstand cryogenic temperatures in vacuum and that its performance at 100 $K$ is similar to that at room temperature. In order to see whether the performance of this VPH grating deteriorated with successive thermal cycling, the differences in diffraction efficiency for an incident angle of 15$^{\circ}$ between the first warm test and subsequent tests are averaged over the wavelength range investigated and plotted against cycle number in Fig. \ref{effvarnir}. Circles and triangles indicate the measurements at room temperature and those at 100 $K$, respectively. The error bars represent a combination of the standard deviation of the differences around the average value and the typical uncertainty ($\sim$ 3 \%) in the measurement of diffraction efficiency. Similar results are obtained from the data for the other incident angles (10$^{\circ}$ and 20$^{\circ}$). This result suggests that no significant deterioration of a VPH grating is caused by thermal cycling. Next, we present results from the VPH grating for visible wavelengths. The measurement procedure is the same as in the NIR, except that a CCD camera is used and the grating is accommodated in a different mount on the cold bench. In Fig. \ref{effvis}, the measurements of diffraction efficiency at 100 $K$ are compared with those at room temperature for incident angles of 1$^{\circ}$, 6$^{\circ}$, and 11$^{\circ}$ in the left, middle, and right panels, respectively. In Fig.
\ref{effvarvis}, the average difference from the first warm test is plotted against cycle number for an incident angle of 6$^{\circ}$. These results again suggest that the performance does not depend strongly on temperature or on the number of thermal cycles. We note that the throughput is significantly lower than that of the NIR grating. The reason for this is unknown, but this grating was for demonstration purposes only, and hence the low throughput may be due to, e.g., severe internal absorption and/or imperfect fringes. This robustness of the diffraction efficiency to temperature variation is expected for these VPH gratings in theory, provided that the linear thermal expansion coefficient of gelatin at 100 $-$ 200 $K$ is similar to that at 200 $-$ 300 $K$, i.e., in the range of $10^{-4} - 10^{-5}$ $K^{-1}$. Given a linear thermal expansion coefficient of 10$^{-4}$ $K^{-1}$, the gelatin thickness would be 2 \% smaller at 100 $K$ than at 300 $K$. Considering the VPH grating for NIR wavelengths and assuming a 12 $\mu$m thickness at room temperature, which gives a good fit of the predicted throughput curve to the measurements for this NIR VPH grating (Tamura et al. 2003), the thickness at 100 $K$ would be 11.76 $\mu$m. Also, the line density would be increased by the same fraction. Since the line density of this grating is 385 lines/mm in the specification, it would be 393 lines/mm at 100 $K$. In Fig. \ref{change}, the diffraction efficiency predicted by coupled wave analysis (Kogelnik 1969) is plotted against wavelength for the two sets of line density and gelatin thickness: one with 385 lines/mm and 12 $\mu$m, and the other with 393 lines/mm and 11.76 $\mu$m. The fringe amplitude in refractive index is assumed to be 0.055 in both calculations. An incident angle of 15$^{\circ}$ is also assumed, but the difference between the two calculations is similarly small for incident angles of $10^{\circ}$ and $20^{\circ}$.
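The contraction arithmetic above, together with the Bragg-matched Kogelnik efficiency $\eta = \sin^{2}\!\left(\pi \Delta n\, d / (\lambda \cos\theta)\right)$ for a lossless transmission grating, can be checked numerically. The sketch below treats the quoted incident angle as the in-medium angle for simplicity, so it is an illustration of the magnitude of the effect rather than a reproduction of the full calculation of Fig. \ref{change}:

```python
from math import pi, sin, cos, radians

# Thermal contraction arithmetic from the text: a linear expansion
# coefficient of 1e-4 /K over a 200 K drop gives a 2% contraction.
alpha, dT = 1e-4, 200.0
shrink = 1 - alpha * dT                  # 0.98
d_300, lines_300 = 12.0, 385.0           # gelatin thickness (um), lines/mm
d_100 = d_300 * shrink                   # -> 11.76 um
lines_100 = lines_300 / shrink           # -> ~392.9 lines/mm

# Bragg-matched Kogelnik efficiency for a lossless transmission grating
# (incident angle treated as the in-medium angle; illustration only).
def kogelnik_eta(dn, d_um, wavelength_um, theta_deg):
    nu = pi * dn * d_um / (wavelength_um * cos(radians(theta_deg)))
    return sin(nu) ** 2

eta_300 = kogelnik_eta(0.055, d_300, 1.3, 15.0)
eta_100 = kogelnik_eta(0.055, d_100, 1.3, 15.0)
print(round(d_100, 2), round(lines_100, 1))  # 11.76 392.9
print(round(abs(eta_300 - eta_100), 3))      # 0.004, i.e. below measurement error
```

The predicted change in the Bragg-matched efficiency is well below the $\sim$ 3 \% measurement uncertainty, consistent with the temperature-independent behaviour observed.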
Note that the predicted diffraction efficiencies are scaled by a factor of 1.2 to fit them to the measurements, indicating that the measured throughput is $\sim 20$ \% lower than the theoretical prediction. About half of this discrepancy can be explained by energy loss due to reflections at the interfaces between the glass substrates and the ambient space. The other half has not been identified, but it perhaps includes, e.g., internal absorption (identifying the source of this energy loss is beyond the scope of this paper). These calculations suggest that the expected change in throughput is small, as confirmed by the measurements. \section{Summary \& conclusion} In this paper, we present results from cryogenic tests of VPH gratings at $\sim$ 100 $K$. The aims of these tests are to see whether the diffraction efficiency as a function of wavelength at a low temperature is significantly different from that at room temperature, and whether the grating can withstand a number of thermal cycles. Having exposed the VPH gratings to 10 cycles between room temperature and 100 $K$, we find that the diffraction efficiency measured at 100 $K$ agrees with that at room temperature within the errors. We also find no clear evidence that the performance changes with successive thermal cycles. These results hold for both of the two different VPH gratings investigated here, which may imply that VPH gratings can withstand such cryogenic temperatures in general. Ideally, an investigation of more gratings, in particular from different manufacturers and with different substrate materials, should be carried out to confirm this point. It should be emphasized that we have only confirmed the performance of low dispersion VPH gratings at cryogenic temperature. It would be useful to repeat the same experiments for high dispersion VPH gratings.
Since the band-width of the throughput curve is narrower for high dispersion gratings, changes in physical properties (e.g., gelatin thickness) due to temperature variations are expected to be revealed much more clearly, in the form of a shift of the throughput peak and/or a global decrease of the throughput. In that case, one would have to predict the change of characteristics due to temperature variations and take it into account in the design and fabrication of a VPH grating, so that it works with optimal performance at the operating temperature. \section*{Acknowledgements} We thank colleagues in Durham for their assistance with this work, particularly J\"{u}rgen Schmoll, Daniel Gedge, and the members of the mechanical workshop. We are also grateful to Ian Parry for letting us use his VPH grating for visible wavelengths. This work was funded by PPARC Rolling Grant PPA/G/O/2003/00022.
\section{Introduction} Squeezed states are states for which the noise in one of a chosen pair of observables is reduced below the vacuum or ground-state noise level, at the expense of increased noise in the other observable. The squeezing effect improves interferometric and spectroscopic measurements, so squeezed states are very useful in the context of the interferometric detection of gravitational waves \cite{01,02}. In a recently published paper, Agarwal \cite{r1} revealed that a vortex state of a two-mode system can be generated from a squeezed vacuum by subtracting a photon; such a subtraction mechanism may occur in a quantum channel with amplitude damping. In nature, no system is truly isolated: dissipation or dephasing usually happens when a system is immersed in a thermal environment, or when a signal (a quantum state) passes through a quantum channel, which is described by a master equation \cite{03}. For example, when a pure state propagates in a medium, it inevitably interacts with it and evolves into a mixed state \cite{04}. Dissipation or dephasing deteriorates the degree of nonclassicality of photon fields, so physicists pay much attention to it \cite{05,06,07}. In the present work we investigate how an initial single-mode squeezed vacuum state evolves in an amplitude dissipative channel (ADC). When a system is described by its interaction with a channel with a large number of degrees of freedom, master equations are set up for a better understanding of how quantum decoherence affects the unitary character of the dissipation or gain of the system. In most cases one is interested only in the evolution of the variables associated with the system. This requires us to obtain the equations of motion for the system of interest after tracing over the reservoir variables. A quantitative measure of the nonclassicality of quantum fields is necessary for further investigating the system's dynamical behavior.
For this channel, the associated loss mechanism in physical processes is governed by the following master equation \cite{03} \begin{equation} \frac{d\rho \left( t\right) }{dt}=\kappa \left( 2a\rho a^{\dagger }-a^{\dagger }a\rho -\rho a^{\dagger }a\right) , \label{p1} \end{equation} where $\rho $ is the density operator of the system, and $\kappa $ is the rate of decay. We have solved this problem with the use of the thermo entangled state representation \cite{08}. Our questions are: What kind of mixed state does the initial squeezed state turn into? How do the photon statistics distributions vary in the ADC? Solving master equations is thus one of the fundamental tasks in quantum optics. Usually people use various quasi-probability representations, such as the P-representation, the Q-representation, the complex P-representation, and the Wigner function, for converting the master equations of density operators into their corresponding c-number equations. Recently, a new approach \cite{08,09}, using the thermal entangled state representation \cite{10,11} to convert operator master equations into their c-number equations, was presented; in many cases it directly leads to the corresponding Kraus operators (the infinite-sum representation of the evolved density operator). The work is arranged as follows. In Sec. 2, by virtue of the entangled state representation, we briefly review our way of deriving the infinite-sum representation of the density operator as a solution of the master equation. In Sec. 3 we show that a pure squeezed vacuum state (with squeezing parameter $ \lambda )$ evolves into a mixed state (output state), whose exact form is derived, and which turns out to be a squeezed chaotic state. We investigate the average photon number and photon statistics distributions for this state.
The probability of finding $n$ photons in this mixed state is obtained, which turns out to be a Legendre polynomial depending on the squeezing parameter $\lambda $ and the decay rate $\kappa $. In Sec. 4 we study the average photon number, and in Sec. 5 the photon statistics distribution of the output state. In Secs. 6 and 7 we discuss the Wigner function and the tomogram of the output state, respectively. \section{Brief review of deducing the infinite-sum representation of $ \protect\rho \left( t\right) $} For solving the above master equation, in a recent review paper \cite{12} we introduced a convenient approach in which the two-mode entangled state \cite{10,11} \begin{equation} |\eta \rangle =\exp (-\frac{1}{2}|\eta |^{2}+\eta a^{\dag }-\eta ^{\ast } \tilde{a}^{\dag }+a^{\dag }\tilde{a}^{\dag })|0\tilde{0}\rangle , \label{p2} \end{equation} is employed, where $\tilde{a}^{\dag }$ is a fictitious mode independent of the real mode $a^{\dagger },$ $[\tilde{a},a^{\dagger }]=0$. The state $|\eta =0\rangle $ possesses the properties \begin{align} a|\eta & =0\rangle =\tilde{a}^{\dag }|\eta =0\rangle , \notag \\ a^{\dag }|\eta & =0\rangle =\tilde{a}|\eta =0\rangle , \label{p3} \\ (a^{\dag }a)^{n}|\eta & =0\rangle =(\tilde{a}^{\dag }\tilde{a})^{n}|\eta =0\rangle . 
\notag \end{align} Acting with both sides of Eq.(\ref{p1}) on the state $|\eta =0\rangle \equiv \left \vert I\right \rangle $, and denoting $\left \vert \rho \right \rangle =\rho \left \vert I\right \rangle $, we have \begin{align} \frac{d}{dt}\left \vert \rho \right \rangle & =\kappa \left( 2a\rho a^{\dagger }-a^{\dagger }a\rho -\rho a^{\dagger }a\right) \left \vert I\right \rangle \notag \\ & =\kappa \left( 2a\tilde{a}-a^{\dagger }a-\tilde{a}^{\dagger }\tilde{a} \right) \left \vert \rho \right \rangle , \label{p4} \end{align} so its formal solution is \begin{equation} \left \vert \rho \right \rangle =\exp \left[ \kappa t\left( 2a\tilde{a} -a^{\dagger }a-\tilde{a}^{\dagger }\tilde{a}\right) \right] \left \vert \rho _{0}\right \rangle , \label{p5} \end{equation} where $\left \vert \rho _{0}\right \rangle \equiv \rho _{0}\left \vert I\right \rangle ,$ and $\rho _{0}$ is the initial density operator. Noticing that the operators in Eq.(\ref{p5}) obey the following commutation relations, \begin{equation} \left[ a\tilde{a},a^{\dagger }a\right] =\left[ a\tilde{a},\tilde{a}^{\dagger }\tilde{a}\right] =\tilde{a}a \label{p6} \end{equation} and \begin{equation} \left[ \frac{a^{\dagger }a+\tilde{a}^{\dagger }\tilde{a}}{2},a\tilde{a} \right] =-\tilde{a}a, \label{p7} \end{equation} as well as using the operator identity \cite{13} \begin{equation} e^{\lambda \left( A+\sigma B\right) }=e^{\lambda A}e^{\sigma \left( 1-e^{-\lambda \tau }\right) B/\tau }, \label{p8} \end{equation} (which is valid for $\left[ A,B\right] =\tau B$), we have \begin{equation} e^{-2\kappa t\left( \frac{a^{\dagger }a+\tilde{a}^{\dagger }\tilde{a}}{2}-a \tilde{a}\right) }=e^{-\kappa t\left( a^{\dagger }a+\tilde{a}^{\dagger } \tilde{a}\right) }e^{T^{\prime }a\tilde{a}}, \label{p9} \end{equation} where $T^{\prime }=1-e^{-2\kappa t}.$ Then substituting Eq.(\ref{p9}) into Eq.(\ref{p5}) yields \cite{12} \begin{align} \left \vert \rho \right \rangle & =e^{-\kappa t\left( a^{\dagger }a+\tilde{a} ^{\dagger 
}\tilde{a}\right) }\sum_{n=0}^{\infty }\frac{T^{\prime n}}{n!} a^{n} \tilde{a}^{n}\left \vert \rho _{0}\right \rangle \notag \\ & =e^{-\kappa ta^{\dagger }a}\sum_{n=0}^{\infty }\frac{T^{\prime n}}{n!} a^{n}\rho _{0}a^{\dag n}e^{-\kappa t\tilde{a}^{\dagger }\tilde{a}}\left \vert I\right \rangle \notag \\ & =\sum_{n=0}^{\infty }\frac{T^{\prime n}}{n!}e^{-\kappa ta^{\dagger }a}a^{n}\rho _{0}a^{\dag n}e^{-\kappa ta^{\dagger }a}\left \vert I\right \rangle , \label{p10} \end{align} which leads to the infinite operator-sum representation of $\rho $, \begin{equation} \rho =\sum_{n=0}^{\infty }M_{n}\rho _{0}M_{n}^{\dagger }, \label{p11} \end{equation} where \begin{equation} M_{n}\equiv \sqrt{\frac{T^{\prime n}}{n!}}e^{-\kappa ta^{\dagger }a}a^{n}. \label{p12} \end{equation} We can prove \begin{align} \sum_{n}M_{n}^{\dagger }M_{n}& =\sum_{n}\frac{T^{\prime n}}{n!}a^{\dag n}e^{-2\kappa ta^{\dagger }a}a^{n} \notag \\ & =\sum_{n}\frac{T^{\prime n}}{n!}e^{2n\kappa t}\colon a^{\dag n}a^{n}\colon e^{-2\kappa ta^{\dagger }a} \notag \\ & =\left. :e^{T^{\prime }e^{2\kappa t}a^{\dagger }a}:\right. e^{-2\kappa ta^{\dagger }a} \notag \\ & =\left. :e^{\left( e^{2\kappa t}-1\right) a^{\dagger }a}:\right. e^{-2\kappa ta^{\dagger }a}=1, \label{p13} \end{align} where $\colon \colon $ stands for normal ordering. Thus $M_{n}$ is a kind of Kraus operator, and $\rho $ in Eq.(\ref{p11}) is qualified to be a density operator, i.e., \begin{equation} Tr\left[ \rho \left( t\right) \right] =Tr\left[ \sum_{n=0}^{\infty }M_{n}\rho _{0}M_{n}^{\dagger }\right] =Tr\rho _{0}. \label{p14} \end{equation} Therefore, for any given initial state $\rho _{0}$, the density operator $ \rho \left( t\right) $ can be directly calculated from Eq.(\ref{p11}). The entangled state representation provides us with an elegant way of deriving the infinite-sum representation of the density operator as a solution of the master equation. 
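As a quick numerical sanity check (our own illustration, not part of the original derivation), the completeness relation (\ref{p13}) can be verified in a truncated Fock space; the dimension $D$ and the value of $\kappa t$ below are arbitrary test choices:

```python
import numpy as np
from math import factorial

D = 30                       # Fock-space truncation (arbitrary)
kt = 0.3                     # kappa * t (arbitrary test value)
Tp = 1.0 - np.exp(-2 * kt)   # T' = 1 - e^{-2 kappa t}

a = np.diag(np.sqrt(np.arange(1.0, D)), 1)   # annihilation operator
E = np.diag(np.exp(-kt * np.arange(D)))      # e^{-kappa t a^dag a}

S = np.zeros((D, D))
an = np.eye(D)                               # a^n, starting from n = 0
for n in range(D):
    Mn = np.sqrt(Tp**n / factorial(n)) * (E @ an)   # Kraus operator M_n of Eq. (p12)
    S += Mn.T @ Mn                           # M_n^dag M_n (all entries real here)
    an = a @ an

dev = np.max(np.abs(S - np.eye(D)))          # deviation from sum_n M_n^dag M_n = 1
```

Because $a^{n}$ annihilates Fock states with fewer than $n$ photons, the truncated sum reproduces the identity exactly (up to floating-point rounding), mirroring Eq.(\ref{p13}).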
\section{Evolution of an initial single-mode squeezed vacuum state in the ADC} It is seen from Eq.(\ref{p11}) that for any given initial state $\rho _{0}$, the density operator $\rho \left( t\right) $ can be directly calculated. When $\rho _{0}$ is a single-mode squeezed vacuum state, \begin{equation} \rho _{0}=\text{sech}\lambda \exp \left( \frac{\tanh \lambda }{2}a^{\dag 2}\right) \left\vert 0\right\rangle \left\langle 0\right\vert \exp \left( \frac{\tanh \lambda }{2}a^{2}\right) , \label{p15} \end{equation} we have \begin{eqnarray} \rho \left( t\right) &=&\text{sech}\lambda \sum_{n=0}^{\infty }\frac{ T^{\prime n}}{n!}e^{-\kappa ta^{\dagger }a}a^{n}\exp \left( \frac{\tanh \lambda }{2}a^{\dag 2}\right) \left\vert 0\right\rangle \notag \\ &&\times \left\langle 0\right\vert \exp \left( \frac{\tanh \lambda }{2} a^{2}\right) a^{\dag n}e^{-\kappa ta^{\dagger }a}. \label{p16} \end{eqnarray} Using the Baker-Hausdorff lemma \cite{14}, \begin{equation} e^{\lambda \hat{A}}\hat{B}e^{-\lambda \hat{A}}=\hat{B}+\lambda \left[ \hat{A} ,\hat{B}\right] +\frac{\lambda ^{2}}{2!}\left[ \hat{A},\left[ \hat{A},\hat{B} \right] \right] +\cdots , \label{p17} \end{equation} we have \begin{eqnarray} a^{n}\exp \left( \frac{\tanh \lambda }{2}a^{\dag 2}\right) \left\vert 0\right\rangle &=&e^{\frac{\tanh \lambda }{2}a^{\dag 2}}e^{-\frac{\tanh \lambda }{2}a^{\dag 2}}a^{n}e^{\frac{\tanh \lambda }{2}a^{\dag 2}}\left\vert 0\right\rangle \notag \\ &=&e^{\frac{\tanh \lambda }{2}a^{\dag 2}}\left( a+a^{\dagger }\tanh \lambda \right) ^{n}\left\vert 0\right\rangle . 
\label{p18} \end{eqnarray} Further employing the operator identity \cite{15} \begin{equation} \left( \mu a+\nu a^{\dagger }\right) ^{m}=\left( -i\sqrt{\frac{\mu \nu }{2}} \right) ^{m}\colon H_{m}\left( i\sqrt{\frac{\mu }{2\nu }}a+i\sqrt{\frac{\nu }{2\mu }}a^{\dagger }\right) \colon , \label{p19} \end{equation} where $H_{m}(x)$ is the Hermite polynomial, we know \begin{eqnarray} &&\left( a+a^{\dagger }\tanh \lambda \right) ^{n} \notag \\ &=&\left( -i\sqrt{\frac{\tanh \lambda }{2}}\right) ^{n}\colon H_{n}\left( i \sqrt{\frac{1}{2\tanh \lambda }}a+i\sqrt{\frac{\tanh \lambda }{2}}a^{\dagger }\right) \colon . \label{p20} \end{eqnarray} From Eq.(\ref{p18}), it follows that \begin{eqnarray} a^{n}e^{\frac{\tanh \lambda }{2}a^{\dag 2}}\left\vert 0\right\rangle &=&\left( -i\sqrt{\frac{\tanh \lambda }{2}}\right) ^{n}e^{\frac{\tanh \lambda }{2}a^{\dag 2}} \notag \\ &&\times H_{n}\left( i\sqrt{\frac{\tanh \lambda }{2}}a^{\dagger }\right) \left\vert 0\right\rangle . \label{p21} \end{eqnarray} On the other hand, noting $e^{-\kappa ta^{\dagger }a}a^{\dagger }e^{\kappa ta^{\dagger }a}=a^{\dagger }e^{-\kappa t},e^{\kappa ta^{\dagger }a}ae^{-\kappa ta^{\dagger }a}=ae^{-\kappa t}$ and the normally ordered form of the vacuum projector $\left\vert 0\right\rangle \left\langle 0\right\vert =\colon e^{-a^{\dagger }a}\colon ,$ we have \begin{align} \rho \left( t\right) & =\text{sech}\lambda \sum_{n=0}^{\infty }\frac{ T^{\prime n}}{n!}e^{-\kappa ta^{\dagger }a}a^{n}e^{\frac{\tanh \lambda }{2} a^{\dag 2}}\left\vert 0\right\rangle \notag \\ & \times \left\langle 0\right\vert e^{\frac{\tanh \lambda }{2}a^{2}}a^{\dag n}e^{-\kappa ta^{\dagger }a} \notag \\ & =\text{sech}\lambda \sum_{n=0}^{\infty }\frac{\left( T^{\prime }\tanh \lambda \right) ^{n}}{2^{n}n!}e^{\frac{e^{-2\kappa t}a^{\dag 2}\tanh \lambda }{2}} \notag \\ & \times H_{n}\left( i\sqrt{\frac{\tanh \lambda }{2}}a^{\dagger }e^{-\kappa t}\right) \left\vert 0\right\rangle \left\langle 0\right\vert \notag \\ & \times H_{n}\left( 
-i\sqrt{\frac{\tanh \lambda }{2}}ae^{-\kappa t}\right) e^{\frac{e^{-2\kappa t}a^{2}\tanh \lambda }{2}} \notag \\ & =\text{sech}\lambda \sum_{n=0}^{\infty }\frac{\left( T^{\prime }\tanh \lambda \right) ^{n}}{2^{n}n!}\colon e^{\frac{e^{-2\kappa t}\left( a^{2}+a^{\dag 2}\right) \tanh \lambda }{2}-a^{\dagger }a} \notag \\ & \times H_{n}\left( i\sqrt{\frac{\tanh \lambda }{2}}a^{\dagger }e^{-\kappa t}\right) H_{n}\left( -i\sqrt{\frac{\tanh \lambda }{2}}ae^{-\kappa t}\right) \colon . \label{p22} \end{align} Then, using the following identity \cite{16} \begin{eqnarray} &&\sum_{n=0}^{\infty }\frac{t^{n}}{2^{n}n!}H_{n}\left( x\right) H_{n}\left( y\right) \notag \\ &=&\left( 1-t^{2}\right) ^{-1/2}\exp \left[ \frac{t^{2}\left( x^{2}+y^{2}\right) -2txy}{t^{2}-1}\right] , \label{p23} \end{eqnarray} and $e^{\lambda a^{\dag }a}=\colon e^{\left( e^{\lambda }-1\right) a^{\dag }a}\colon ,$ we finally obtain the expression of the output state \begin{equation} \rho \left( t\right) =We^{\frac{\text{\ss }}{2}a^{\dag 2}}e^{a^{\dagger }a\ln \left( \text{\ss }T^{\prime }\tanh \lambda \right) }e^{\frac{\text{\ss }}{2}a^{2}}, \label{p23a} \end{equation} with $T^{\prime }=1-e^{-2\kappa t}$ and \begin{equation} W\equiv \frac{\text{sech}\lambda }{\sqrt{1-T^{\prime 2}\tanh ^{2}\lambda }}, \text{\ss }\equiv \frac{e^{-2\kappa t}\tanh \lambda }{1-T^{\prime 2}\tanh ^{2}\lambda }. 
\label{p24} \end{equation} By comparing Eq.(\ref{p15}) with Eq.(\ref{p23a}) one can see that after going through the channel the initial squeezing parameter $\tanh \lambda $ in Eq.(\ref{p15}) becomes \ss $=\frac{e^{-2\kappa t}\tanh \lambda }{1-T^{\prime 2}\tanh ^{2}\lambda },$ and $\left\vert 0\right\rangle \left\langle 0\right\vert \rightarrow \frac{1}{\sqrt{1-T^{\prime 2}\tanh ^{2}\lambda }}e^{a^{\dagger }a\ln \left( \text{\ss }T^{\prime }\tanh \lambda \right) },$ a chaotic state (mixed state). Since $T^{\prime }>0,$ we can prove $\frac{e^{-2\kappa t}}{1-T^{\prime 2}\tanh ^{2}\lambda }<1,$ which means a squeezing-decreasing process. When $\kappa t=0$, then $T^{\prime }=0$ and \ss\ $=\tanh \lambda $, and Eq.(\ref{p23a}) reduces to the initial squeezed vacuum state, as expected. It is important to check whether Tr$\rho (t)=1$. Using Eq.(\ref{p23a}) and the completeness of the coherent state $\int \frac{d^{2}z}{\pi }\left\vert z\right\rangle \left\langle z\right\vert =1$ as well as the following formula \cite{17} \begin{equation} \int \frac{d^{2}z}{\pi }e^{\zeta \left\vert z\right\vert ^{2}+\xi z+\eta z^{\ast }+fz^{2}+gz^{\ast 2}}=\frac{1}{\sqrt{\zeta ^{2}-4fg}}e^{\frac{-\zeta \xi \eta +f\eta ^{2}+g\xi ^{2}}{\zeta ^{2}-4fg}}, \label{p25} \end{equation} whose convergence condition is Re$\left( \zeta \pm f\pm g\right) <0$ and $\mathtt{Re}\left( \frac{\zeta ^{2}-4fg}{\zeta \pm f\pm g}\right) <0$, we see \begin{eqnarray} \text{Tr}\rho \left( t\right) &=&W\int \frac{d^{2}z}{\pi }\left\langle z\right\vert e^{\frac{\text{\ss }}{2}a^{\dag 2}}e^{a^{\dagger }a\ln \left( \text{\ss }T^{\prime }\tanh \lambda \right) }e^{\frac{\text{\ss }}{2} a^{2}}\left\vert z\right\rangle \notag \\ &=&\frac{W}{\sqrt{\left( \text{\ss }T^{\prime }\tanh \lambda -1\right) ^{2}- \text{\ss }^{2}}}=1. 
\label{p26} \end{eqnarray} Thus $\rho \left( t\right) $ is qualified to be a mixed state: an initial pure squeezed vacuum state evolves into a squeezed chaotic state with decreased squeezing after passing through the amplitude dissipative channel. \section{Average photon number} Using the completeness relation of the coherent state and the product form of $\rho \left( t\right) $ in Eq. (\ref{p23a}), together with $e^{\frac{\text{\ss }}{2}a^{2}}a^{\dagger }e^{-\frac{\text{\ss }}{2}a^{2}}=a^{\dagger }+$\ss $a$ and $e^{a^{\dagger }a\ln \left( \text{\ss }T^{\prime }\tanh \lambda \right) }a^{\dagger }e^{-a^{\dagger }a\ln \left( \text{\ss } T^{\prime }\tanh \lambda \right) }$=$a^{\dagger }$\ss $T^{\prime }\tanh \lambda ,$ we have \begin{align} & \mathtt{Tr}\left( \rho \left( t\right) a^{\dagger }a\right) \notag \\ & =W\int \frac{d^{2}z}{\pi }\left\langle z\right\vert e^{\frac{\text{\ss }}{2}a^{\dag 2}}e^{a^{\dagger }a\ln \left( \text{\ss }T^{\prime }\tanh \lambda \right) }e^{\frac{\text{\ss }}{2}a^{2}}a^{\dagger }a\left\vert z\right\rangle \notag \\ & =W\int \frac{d^{2}z}{\pi }\left\langle z\right\vert e^{\frac{\text{\ss }}{2}a^{\dag 2}}e^{a^{\dagger }a\ln \left( \text{\ss }T^{\prime }\tanh \lambda \right) }e^{\frac{\text{\ss }}{2}a^{2}}za^{\dagger }\left\vert z\right\rangle \notag \\ & =W\int \frac{d^{2}z}{\pi }z\left\langle z\right\vert e^{\frac{\text{\ss }}{2}a^{\dag 2}}e^{a^{\dagger }a\ln \left( \text{\ss }T^{\prime }\tanh \lambda \right) }\left( a^{\dagger }+\text{\ss }a\right) e^{\frac{\text{\ss }}{2}a^{2}}\left\vert z\right\rangle \notag \\ & =W\text{\ss }\int \frac{d^{2}z}{\pi }ze^{\frac{\text{\ss }}{2}\left( z^{\ast 2}+z^{2}\right) }\left\langle z\right\vert \left( a^{\dagger }T^{\prime }\tanh \lambda +z\right) e^{a^{\dagger }a\ln \left( \text{\ss }T^{\prime }\tanh \lambda \right) }\left\vert z\right\rangle \notag \\ & =W\text{\ss }\int \frac{d^{2}z}{\pi }\left( |z|^{2}T^{\prime }\tanh \lambda +z^{2}\right) \notag \\ & \times \exp 
\left[ \left( \text{\ss }T^{\prime }\tanh \lambda -1\right) |z|^{2}+\frac{\text{\ss }}{2}\left( z^{\ast 2}+z^{2}\right) \right] . \label{p59} \end{align} In order to perform the integration, we rewrite Eq.(\ref{p59}) as \begin{eqnarray} \mathtt{Tr}\left( \rho \left( t\right) a^{\dagger }a\right) &=&W\text{\ss } \left\{ T^{\prime }\tanh \lambda \frac{\partial }{\partial f}+\frac{2}{\text{\ss }}\frac{\partial }{\partial s}\right\} \notag \\ &&\times \int \frac{d^{2}z}{\pi }\exp \left[ \left( \text{\ss }T^{\prime }\tanh \lambda -1+f\right) |z|^{2}\right. \notag \\ &&+\left. \frac{\text{\ss }}{2}\left( z^{\ast 2}+\left( 1+s\right) z^{2}\right) \right] _{f=s=0} \notag \\ &=&\frac{1-\text{\ss }T^{\prime }\tanh \lambda }{\left( \text{\ss }T^{\prime }\tanh \lambda -1\right) ^{2}-\text{\ss }^{2}}-1, \label{p27} \end{eqnarray} where in the last step we have used Eq.(\ref{p26}). Using Eq.(\ref{p27}), we present the time evolution of the average photon number in Fig. 1, from which we find that the average photon number of the single-mode squeezed vacuum state in the amplitude damping channel decays gradually to zero as the decay time increases. \begin{figure}[tbp] \centering \includegraphics[width=8cm]{Fig1} \caption{(Color online) The average photon number $\bar{n}\left( \protect\kappa t\right) $ as a function of $\protect\kappa t$ for different values of the squeezing parameter $\protect\lambda $ (from bottom to top, $\protect\lambda =0,0.1,0.3,0.5,1$).} \end{figure} \section{Photon statistics distribution} Next, we shall derive the photon statistics distribution of $\rho \left( t\right) $. The photon number distribution is given by $p\left( n,t\right) =\left\langle n\right\vert \rho \left( t\right) \left\vert n\right\rangle $. 
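As an aside (our own check, with arbitrary parameter values), the closed formula (\ref{p27}) can be cross-checked against a direct Kraus evolution (\ref{p11})-(\ref{p12}) of the squeezed vacuum in a truncated Fock space; below, `ss` stands for the parameter \ss\ of Eq.(\ref{p24}):

```python
import numpy as np
from math import factorial, sqrt, tanh, cosh, sinh

D, lam, kt = 60, 0.8, 0.4            # truncation, squeezing, decay (arbitrary)
u = np.exp(-2 * kt)                  # e^{-2 kappa t}
tau, Tp = tanh(lam), 1.0 - u
ss = u * tau / (1.0 - Tp**2 * tau**2)        # the parameter of Eq. (p24)

# squeezed vacuum in the Fock basis: only even photon numbers are populated
psi = np.zeros(D)
for k in range(D // 2):
    psi[2 * k] = sqrt(factorial(2 * k)) * tau**k / (2**k * factorial(k))
psi /= np.sqrt(cosh(lam))            # overall sech^{1/2}(lambda) normalization
rho0 = np.outer(psi, psi)

# Kraus evolution rho(t) = sum_n M_n rho0 M_n^dag, Eqs. (p11)-(p12)
a = np.diag(np.sqrt(np.arange(1.0, D)), 1)
E = np.diag(np.exp(-kt * np.arange(D)))
rho = np.zeros((D, D))
an = np.eye(D)
for n in range(D):
    Mn = np.sqrt(Tp**n / factorial(n)) * (E @ an)
    rho += Mn @ rho0 @ Mn.T
    an = a @ an

tr = float(np.trace(rho))                    # should stay 1
nbar_num = float(np.trace(rho @ np.diag(np.arange(D))))
x = ss * Tp * tau
nbar_formula = (1.0 - x) / ((x - 1.0)**2 - ss**2) - 1.0   # Eq. (p27)
nbar_decay = u * sinh(lam)**2        # simple exponential decay of sinh^2(lambda)
```

Both the closed formula and the direct Kraus evolution agree with the exponential law $\bar{n}(t)=e^{-2\kappa t}\sinh ^{2}\lambda $, consistent with the behavior shown in Fig. 1.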
Noticing $a^{\dag m}\left\vert n\right\rangle =\sqrt{(m+n)!/n!}\left\vert m+n\right\rangle $ and using the un-normalized coherent state $\left\vert \alpha \right\rangle =\exp [\alpha a^{\dag }]\left\vert 0\right\rangle $ \cite{18,19}, which leads to $\left\vert n\right\rangle =\frac{1}{\sqrt{n!}}\frac{\mathtt{d}^{n}}{\mathtt{d}\alpha ^{n}}\left\vert \alpha \right\rangle \left\vert _{\alpha =0}\right. $ (with $\left\langle \beta \right. \left\vert \alpha \right\rangle =e^{\alpha \beta ^{\ast }}$), as well as the product form of $\rho \left( t\right) $ in Eq. (\ref{p23a}), the probability of finding $n$ photons in the field is given by \begin{eqnarray} &&p\left( n,t\right) \notag \\ &=&\left\langle n\right\vert \rho \left( t\right) \left\vert n\right\rangle \notag \\ &=&\frac{W}{n!}\frac{\mathtt{d}^{n}}{\mathtt{d}\beta ^{\ast n}}\frac{\mathtt{d}^{n}}{\mathtt{d}\alpha ^{n}}\left. \left\langle \beta \right\vert e^{\frac{\text{\ss }}{2}\beta ^{\ast 2}}e^{a^{\dagger }a\ln \left( \text{\ss }T^{\prime }\tanh \lambda \right) }e^{\frac{\text{\ss }}{2}\alpha ^{2}}\left\vert \alpha \right\rangle \right\vert _{\alpha ,\beta ^{\ast }=0} \notag \\ &=&\frac{W}{n!}\frac{\mathtt{d}^{n}}{\mathtt{d}\beta ^{\ast n}}\frac{\mathtt{d}^{n}}{\mathtt{d}\alpha ^{n}}\left. \exp \left[ \beta ^{\ast }\alpha \text{\ss }T^{\prime }\tanh \lambda +\frac{\text{\ss }}{2}\beta ^{\ast 2}+\frac{\text{\ss }}{2}\alpha ^{2}\right] \right\vert _{\alpha ,\beta ^{\ast }=0}. 
\label{p49} \end{eqnarray} Note that \begin{equation*} \left[ e^{\frac{\text{\ss }}{2}a^{\dagger 2}}e^{a^{\dagger }a\ln \left( \text{\ss }T^{\prime }\tanh \lambda \right) }e^{\frac{\text{\ss }}{2}a^{2}}\right] ^{\dagger }=e^{\frac{\text{\ss }}{2}a^{\dagger 2}}e^{a^{\dagger }a\ln \left( \text{\ss }T^{\prime }\tanh \lambda \right) }e^{\frac{\text{\ss }}{2}a^{2}}, \end{equation*} so \begin{equation*} \left\langle n\right\vert \rho \left( t\right) \left\vert n\right\rangle ^{\ast }=\left\langle n\right\vert \rho \left( t\right) ^{\dagger }\left\vert n\right\rangle =\left\langle n\right\vert \rho \left( t\right) \left\vert n\right\rangle . \end{equation*} Using the relation \begin{eqnarray} &&\frac{\partial ^{2n}}{\partial t^{n}\partial t^{\prime n}}\exp \left[ 2xtt^{\prime }-t^{2}-t^{\prime 2}\right] _{t=t^{\prime }=0} \notag \\ &=&2^{n}n!\sum_{m=0}^{[n/2]}\frac{n!}{2^{2m}\left( m!\right) ^{2}(n-2m)!} x^{n-2m}, \label{p50} \end{eqnarray} we derive the compact form for $p\left( n,t\right) $, i.e., \begin{eqnarray} &&p\left( n,t\right) \notag \\ &=&\frac{W}{n!}\left( -\frac{\text{\ss }}{2}\right) ^{n}\frac{\mathtt{d}^{n}}{\mathtt{d}\beta ^{\ast n}}\frac{\mathtt{d}^{n}}{\mathtt{d}\alpha ^{n}}\left. e^{-2T^{\prime }\tanh \lambda \beta ^{\ast }\alpha -\beta ^{\ast 2}-\alpha ^{2}}\right\vert _{\alpha ,\beta ^{\ast }=0} \notag \\ &=&W\left( \text{\ss }T^{\prime }\tanh \lambda \right) ^{n}\sum_{m=0}^{[n/2]}\frac{n!\left( T^{\prime }\tanh \lambda \right) ^{-2m}}{2^{2m}\left( m!\right) ^{2}(n-2m)!}. \label{p52} \end{eqnarray} Using the new expression of Legendre polynomials found in Ref. 
\cite{20} \begin{equation} x^{n}\sum_{m=0}^{[n/2]}\frac{n!}{2^{2m}\left( m!\right) ^{2}(n-2m)!}\left( 1-\frac{1}{x^{2}}\right) ^{m}=P_{n}\left( x\right) , \label{p51} \end{equation} we can formally recast Eq.(\ref{p52}) into the following compact form, i.e., \begin{equation*} p\left( n,t\right) =W\left( e^{-\kappa t}\sqrt{-\text{\ss }\tanh \lambda }\right) ^{n}P_{n}\left( e^{\kappa t}T^{\prime }\sqrt{-\text{\ss }\tanh \lambda }\right) . \end{equation*} Note that since $\sqrt{-\text{\ss }\tanh \lambda }$ is purely imaginary while $p\left( n,t\right) $ is real, we must still use the power-series expansion on the right-hand side of Eq.(\ref{p52}) to plot the variation of $p\left( n,t\right) $. In particular, when $t=0$, Eq.(\ref{p52}) reduces to \begin{eqnarray} p\left( n,0\right) &=&\text{sech}\lambda \left( \tanh \lambda \right) ^{n}\lim_{T^{\prime }\rightarrow 0}\sum_{m=0}^{[n/2]}\frac{n!\left( T^{\prime }\tanh \lambda \right) ^{n-2m}}{2^{2m}\left( m!\right) ^{2}(n-2m)!} \notag \\ &=&\left\{ \begin{array}{cc} \frac{\left( 2k\right) !}{2^{2k}k!k!}\text{sech}\lambda \tanh ^{2k}\lambda , & n=2k \\ 0, & n=2k+1 \end{array} \right. , \label{p53} \end{eqnarray} which just corresponds to the number distribution of the squeezed vacuum state \cite{21,22}. From Eq.(\ref{p53}) it is not difficult to see that the photocount distribution decreases as the squeezing parameter $\lambda $ increases. For $\kappa t\rightarrow \infty ,$ we see that $p\left( n,\infty \right) =0.$ This indicates that there are no photons left when the system interacts with the amplitude dissipative channel for a sufficiently long time, as expected. In Fig. 2, the photon number distribution is shown for different $\kappa t$. 
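As a numerical illustration (our own check, with arbitrary parameter values), the distribution (\ref{p52}) can be evaluated term by term, avoiding the negative powers of $T^{\prime }\tanh \lambda $, and compared with $\left\langle n\right\vert \rho \left( t\right) \left\vert n\right\rangle $ obtained from the Kraus form (\ref{p11})-(\ref{p12}):

```python
import numpy as np
from math import factorial, sqrt, tanh, cosh

D, lam, kt = 60, 1.0, 0.5                    # truncation, squeezing, decay (arbitrary)
u = np.exp(-2 * kt)
tau, Tp = tanh(lam), 1.0 - u
ss = u * tau / (1.0 - Tp**2 * tau**2)        # parameter of Eq. (p24)
W = (1.0 / cosh(lam)) / np.sqrt(1.0 - Tp**2 * tau**2)

def p_formula(n):
    """Eq. (p52), rearranged so that only non-negative powers appear."""
    return W * sum(factorial(n) * (ss * Tp * tau)**(n - 2 * m) * (ss / 2.0)**(2 * m)
                   / (factorial(n - 2 * m) * factorial(m)**2)
                   for m in range(n // 2 + 1))

# reference: diagonal of the Kraus-evolved density matrix
psi = np.zeros(D)
for k in range(D // 2):
    psi[2 * k] = sqrt(factorial(2 * k)) * tau**k / (2**k * factorial(k))
psi /= np.sqrt(cosh(lam))
rho0 = np.outer(psi, psi)
a = np.diag(np.sqrt(np.arange(1.0, D)), 1)
E = np.diag(np.exp(-kt * np.arange(D)))
rho = np.zeros((D, D))
an = np.eye(D)
for n in range(D):
    Mn = np.sqrt(Tp**n / factorial(n)) * (E @ an)
    rho += Mn @ rho0 @ Mn.T
    an = a @ an

diff = max(abs(p_formula(n) - rho[n, n]) for n in range(20))
total = sum(p_formula(n) for n in range(D))   # should sum to Tr rho(t) = 1
```

At $t=0$ the same routine reproduces the even-only distribution of Eq.(\ref{p53}).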
\begin{figure}[t] \label{2}\centering\includegraphics[width=8cm]{Fig2.eps} \caption{(Color online) Photon number distribution of the squeezed vacuum state in the amplitude damping channel for $\protect\lambda =1$ and different $\protect\kappa t$: ($a)$ $\protect\kappa t=0$, ($b$) $\protect\kappa t=0.5,$ ($c$) $\protect\kappa t=1$ and ($d$) $\protect\kappa t=2$.} \end{figure} \section{Wigner functions} In this section, we shall use the normally ordered form of the density operator to calculate the analytical expression of the Wigner function. For a single-mode system, the WF is given by \cite{23} \begin{equation} W\left( \alpha ,\alpha ^{\ast },t\right) =e^{2\left\vert \alpha \right\vert ^{2}}\int \frac{d^{2}\beta }{\pi ^{2}}\left\langle -\beta \right\vert \rho \left( t\right) \left\vert \beta \right\rangle e^{-2\left( \beta \alpha ^{\ast }-\beta ^{\ast }\alpha \right) }, \label{p60} \end{equation} where $\left\vert \beta \right\rangle $ is the coherent state \cite{18,19}. From Eq.(\ref{p22}) it is easy to see that once the normally ordered form of $\rho \left( t\right) $ is known, we can conveniently obtain the Wigner function of $\rho \left( t\right) $. On substituting Eq.(\ref{p23a}) into Eq.(\ref{p60}) we obtain the WF of the single-mode squeezed state in the ADC, \begin{eqnarray} &&W\left( \alpha ,\alpha ^{\ast },t\right) \notag \\ &=&We^{2\left\vert \alpha \right\vert ^{2}}\int \frac{d^{2}\beta }{\pi ^{2}}\exp \left[ -\left( 1+\text{\ss }T^{\prime }\tanh \lambda \right) \left\vert \beta \right\vert ^{2}\right. \notag \\ &&\left. 
-2\left( \beta \alpha ^{\ast }-\beta ^{\ast }\alpha \right) +\frac{\text{\ss }}{2}\beta ^{\ast 2}+\frac{\text{\ss }}{2}\beta ^{2}\right] \notag \\ &=&\frac{W}{\pi \sqrt{\left( 1+\text{\ss }T^{\prime }\tanh \lambda \right) ^{2}-\text{\ss }^{2}}}\exp \left[ 2\left\vert \alpha \right\vert ^{2}\right] \notag \\ &&\times \exp \left[ 2\frac{-2\left( 1+\text{\ss }T^{\prime }\tanh \lambda \right) \left\vert \alpha \right\vert ^{2}+\text{\ss }\left( \alpha ^{\ast 2}+\alpha ^{2}\right) }{\left( 1+\text{\ss }T^{\prime }\tanh \lambda \right) ^{2}-\text{\ss }^{2}}\right] . \label{p61} \end{eqnarray} In particular, when $t=0$ and $t\rightarrow \infty $, Eq.(\ref{p61}) reduces to $W\left( \alpha ,\alpha ^{\ast },0\right) =\frac{1}{\pi }\exp [-2\left\vert \alpha \right\vert ^{2}\cosh 2\lambda +\left( \alpha ^{\ast 2}+\alpha ^{2}\right) \sinh 2\lambda ]$ and $W\left( \alpha ,\alpha ^{\ast },\infty \right) =\frac{1}{\pi }\exp \left[ -2\left\vert \alpha \right\vert ^{2}\right] $, which are just the WFs of the single-mode squeezed vacuum state and the vacuum state, respectively. In Fig. 3, the WF of the single-mode squeezed vacuum state in the amplitude damping channel is shown for different decay times $\kappa t$. \begin{figure}[t] \centerline{\includegraphics[width=8cm]{Fig3.eps}} \caption{(Color online) Wigner function of the squeezed vacuum state in the amplitude damping channel for $\protect\lambda =1.0$ and different $\protect\kappa t$: ($a)$ $\protect\kappa t=0.0$, ($b$) $\protect\kappa t=0.5$, ($c$) $\protect\kappa t=1$, and ($d$) $\protect\kappa t=2$.} \end{figure} \section{Tomogram} As we know, once the probability distributions $P_{\theta }\left( \hat{x}_{\theta }\right) $ of the quadrature amplitude are obtained, one can use the inverse Radon transformation familiar in tomographic imaging to obtain the WF and the density matrix \cite{24}. Thus the Radon transform of the WF corresponds to the probability distribution $P_{\theta }\left( \hat{x}_{\theta }\right) $. 
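Before turning to the tomogram, the two limiting forms of the Wigner function (\ref{p61}) quoted above can likewise be checked numerically; the grid and the value of $\lambda $ below are arbitrary choices:

```python
import numpy as np
from math import tanh, cosh, sinh

lam = 1.0                                  # squeezing parameter (arbitrary)

def wigner(x, y, kt):
    """Eq. (p61) with alpha = x + i*y."""
    u = np.exp(-2 * kt)
    tau, Tp = tanh(lam), 1.0 - u
    ss = u * tau / (1.0 - Tp**2 * tau**2)  # parameter of Eq. (p24)
    W0 = (1.0 / cosh(lam)) / np.sqrt(1.0 - Tp**2 * tau**2)
    c = 1.0 + ss * Tp * tau
    den = c**2 - ss**2
    a2 = x**2 + y**2                       # |alpha|^2
    re2 = 2.0 * (x**2 - y**2)              # alpha^2 + alpha*^2
    return W0 / (np.pi * np.sqrt(den)) * np.exp(2 * a2 + 2 * (-2 * c * a2 + ss * re2) / den)

xs = np.linspace(-1.5, 1.5, 21)
X, Y = np.meshgrid(xs, xs)

# t = 0: recovers the squeezed-vacuum Wigner function
dev0 = np.max(np.abs(wigner(X, Y, 0.0)
                     - np.exp(-2 * (X**2 + Y**2) * cosh(2 * lam)
                              + 2 * (X**2 - Y**2) * sinh(2 * lam)) / np.pi))

# t -> infinity: recovers the vacuum Wigner function
devinf = np.max(np.abs(wigner(X, Y, 50.0)
                       - np.exp(-2 * (X**2 + Y**2)) / np.pi))
```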
In this section we derive the tomogram of $\rho \left( t\right) $. For a single-mode system, the Radon transform of the WF, denoted as $\mathcal{R}$, is defined by \cite{25} \begin{eqnarray} \mathcal{R}\left( q\right) _{f,g} &=&\int \delta \left( q-fq^{\prime }-gp^{\prime }\right) Tr\left[ \Delta \left( \beta \right) \rho \left( t\right) \right] dq^{\prime }dp^{\prime } \notag \\ &=&Tr\left[ \left \vert q\right \rangle _{f,g\text{ }f,g}\left \langle q\right \vert \rho \left( t\right) \right] =_{f,g}\left \langle q\right \vert \rho \left( t\right) \left \vert q\right \rangle _{f,g}, \label{p63} \end{eqnarray} where the operator $\left \vert q\right \rangle _{f,g\text{ }f,g}\left \langle q\right \vert $ is just the Radon transform of the single-mode Wigner operator $\Delta \left( \beta \right) $, and \begin{equation} \left \vert q\right \rangle _{f,g}=A\exp \left[ \frac{\sqrt{2}qa^{\dag }}{B}-\frac{B^{\ast }}{2B}a^{\dag 2}\right] \left \vert 0\right \rangle , \label{p64} \end{equation} with $B=f-ig$ and $A=\left[ \pi \left( f^{2}+g^{2}\right) \right] ^{-1/4}\exp [-q^{2}/2\left( f^{2}+g^{2}\right) ]$. Thus the tomogram of a quantum state $\rho \left( t\right) $ is just the quantum average of $\rho \left( t\right) $ in the $\left \vert q\right \rangle _{f,g}$ representation (a kind of intermediate coordinate-momentum representation) \cite{26}. Substituting Eqs.(\ref{p23a}) and (\ref{p64}) into Eq.(\ref{p63}), and using the completeness relation of the coherent state, we see that the Radon transform of the WF of $\rho \left( t\right) $ is given by \begin{eqnarray} &&\mathcal{R}\left( q\right) _{f,g} \notag \\ &=&W_{f,g}\left\langle q\right\vert e^{\frac{\text{\ss }}{2}a^{\dag 2}}e^{a^{\dagger }a\ln \left( \text{\ss }T^{\prime }\tanh \lambda \right) }e^{\frac{\text{\ss }}{2}a^{2}}\left\vert q\right\rangle _{f,g} \notag \\ &=&\frac{WA^{2}}{\sqrt{E}}\exp \left\{ \frac{q^{2}\text{\ss }}{E\left\vert B\right\vert ^{4}}\left( B^{2}+B^{\ast }{}^{2}\right) \right. \notag \\ &&+\left. 
\frac{2q^{2}\text{\ss }}{E\left\vert B\right\vert ^{2}}\left( T^{\prime }\tanh \lambda +\text{\ss }-\text{\ss }T^{\prime 2}\tanh ^{2}\lambda \right) \right\} , \label{p65} \end{eqnarray} where we have used the formula (\ref{p25}) and $\left\langle \alpha \right\vert \left. \gamma \right\rangle =\exp [-\left\vert \alpha \right\vert ^{2}/2-\left\vert \gamma \right\vert ^{2}/2+\alpha ^{\ast }\gamma ]$, as well as \begin{eqnarray} E &=&\left( 1+\text{\ss }\frac{B}{B^{\ast }}\right) \left( 1+\frac{B^{\ast }}{B}\text{\ss }-B^{\ast }\frac{\left( \text{\ss }T^{\prime }\tanh \lambda \right) ^{2}}{B^{\ast }+\text{\ss }B}\right) \notag \\ &=&\left\vert 1+\frac{\text{\ss }B}{B^{\ast }}\right\vert ^{2}-\left( \text{\ss }T^{\prime }\tanh \lambda \right) ^{2}. \label{p66} \end{eqnarray} In particular, when $t=0$ ($T^{\prime }=0$), Eq.(\ref{p65}) reduces to ($\frac{B}{B^{\ast }}=e^{2i\phi }$) \begin{eqnarray} \mathcal{R}\left( q\right) _{f,g} &=&\frac{A^{2}\text{sech}\lambda }{\left\vert 1+e^{2i\phi }\tanh \lambda \right\vert } \notag \\ &&\times \exp \left\{ \frac{q^{2}\left( B^{2}+B^{\ast }{}^{2}+2\left\vert B\right\vert ^{2}\tanh \lambda \right) \tanh \lambda }{\left\vert 1+e^{2i\phi }\left\vert B\right\vert ^{4}\tanh \lambda \right\vert ^{2}}\right\} , \label{p67} \end{eqnarray} which is the tomogram of the single-mode squeezed vacuum state; while for $\kappa t\rightarrow \infty $ ($T^{\prime }=1$), we get $\mathcal{R}\left( q\right) _{f,g}=A^{2},$ which is a Gaussian distribution corresponding to the vacuum state. In summary, using the infinite-sum representation of the density operator derived by virtue of the entangled state representation, we conclude that in the amplitude dissipative channel the initial density operator of a single-mode squeezed vacuum state evolves into a squeezed chaotic state with decreased squeezing. We have investigated the average photon number, photon statistics distribution, Wigner function, and tomogram of the output state. 
\section*{Acknowledgments} This work is supported by the National Natural Science Foundation of China (Grant Nos. 11175113 and 11047133), the Shandong Provincial Natural Science Foundation of China (Grant No. ZR2010AQ024), a grant from the Key Programs Foundation of the Ministry of Education of China (Grant No. 210115), as well as the Jiangxi Provincial Natural Science Foundation of China (No. 2010GQW0027).
\section{Introduction} \begin{figure*}[htbp] \centering \includegraphics[width=\linewidth]{all_histo_3x3.png} \caption{Distribution of all the images across the different classes with the period highlighted.} \label{fig:histoclasses} \end{figure*} Started in India, Buddhism spread across the Asian subcontinent and through China, reaching the coasts of South-East Asia and the Japanese archipelago, benefiting from travels along the Silk Roads~\cite{secret2006,faces2013}. The story is still subject to many debates, as multiple theories compete on how this spread and evolution took place~\cite{spread1986, Guide, secret2006,faces2013}. Nonetheless, as Buddhism flourished along the centuries, scholars exchanged original ideas that further diffused, shaping the different branches of Buddhism and art as we know them today. When Buddhism reached new territories, local people would craft Buddhist art themselves. Not only did they observe common rules of crafting, but they also adapted them to express their own culture, giving rise to new styles~\cite{style1987}. Through multiple crises and cultural exchanges, many pieces of art have been displaced, and their origins may remain uncertain today. Only a few experts can identify these works, and such identification remains subject to their own knowledge; the origin of some statues is still disputed today~\cite{controversy2011}. However, our decade has seen tremendous progress in machine learning, so we may harness these techniques to support art identification~\cite{blessing2010using}. Our work focuses on the representation of Buddha, central to Buddhist art, and more specifically on Buddha statues. Statues are 3D objects by nature. There are many types of Buddha statues, but all of them obey construction rules. These are \textit{canons}, a set of universal rules or principles that establish the very fundamentals of the representation of Buddha. 
Although the canons were first transmitted through language-based descriptions, these rules have been preserved to this day and are recorded graphically in rule books (\textit{e.g.} Tibetan representations ~\cite{tibet17--, tibetan} as illustrated in Fig.~\ref{fig:tibetlines}). The study of the measurements of art pieces, or iconometry, may further be used to investigate the differences between classes of Buddha statues~\cite{reportya}. In this paper, we are interested in understanding how these rules are reflected in a medium-sized set of Buddha statues (>1k identified statues in about 7k images). We focus in particular on the faces of the statues, through photographs of these statues (each being a 2D projection of the 3D statue). We propose to automatically recover the construction guidelines, and proceed with iconometry in a systematic manner. Taking advantage of the recent advances in image description, we further investigate different deep features and classification tasks of Buddha statues. This paper contributes by setting a baseline for the comparison of ``historical'' features, the set of canon rules, against ``modern'' features, on a medium-sized dataset of Buddha statues and pictures. The rest of the paper is organized as follows. After discussing the related work, we present our dataset in Section~\ref{sec:data}. We then introduce the iconometry measurement and its application in Section~\ref{sec:iconometry}. From this point on, we study different embedding techniques and compare them along a larger set of classification tasks (Sec.~\ref{sec:classification}) before concluding. \subsection{Related Work} Automatic art analysis is not a new topic, and early works have focused on hand-crafted feature extraction to represent the content, typically of paintings~\cite{johnson2008image,shamir2010impressionism,carneiro2012artistic,khan2014painting,mensink2014rijksmuseum}.
These features were specific to their application, such as author identification by brushwork decomposition using wavelets~\cite{johnson2008image}. A combination of color, edge, and texture features was used for author/school/style classification~\cite{shamir2010impressionism, khan2014painting}. The larger task of painting classification has also been approached in a much more traditional way with SIFT features~\cite{carneiro2012artistic,mensink2014rijksmuseum}. This was naturally extended to the use of deep visual features with great effectiveness~\cite{Bar2014ClassificationOA,karayev2014recognizing,Saleh2015LargescaleCO,elgammal2015quantifying, Tan2016CeciNP,ma2017part,mao2017deepart,elgammal2018shape,Garcia2018How,strezoski2018omniart}. The first approaches used pre-trained networks for automatic classification~\cite{Bar2014ClassificationOA,karayev2014recognizing,Saleh2015LargescaleCO}. Fine-tuned networks have since shown improved performance~\cite{Tan2016CeciNP,seguin2016visual,mao2017deepart,strezoski2017omniart, chu2018image}. Recent approaches~\cite{Garcia2018How, garcia2019context} introduced the combination of multimedia information in the form of joint visual and textual models~\cite{Garcia2018How} or using graph modeling~\cite{garcia2019context} for the semantic analysis of paintings. The analysis of style has also been investigated in relation to time and visual features~\cite{elgammal2015quantifying, elgammal2018shape}. Other alternatives explore domain transfer for object and face detection and recognition~\cite{crowley2015face,crowley2014state,crowley2016art}. These methods mostly focus on capturing the visual content of paintings, on very well-curated datasets. However, paintings are very different from Buddha statues, in the sense that statues are 3D objects, created under strict rules.
In addition, we are interested in studying the history of art, not limited to visual appearance, but also encompassing its historical, material, and artistic context. In this work, we explore different embeddings, from ancient Tibetan rules to modern visual, face-based, and graph-based features, for different classification tasks of Buddha statues. Recent works close to our application domain, \textit{i.e.} the analysis of oriental statues \cite{kamakura2005classification, ikeuchi2007great, reportya, bevan2014computer, bhaumik2018recognition, wang2019average}, are also relevant. One previous work achieved Thai statue recognition using hand-crafted facial features~\cite{pornpanomchai2011thai}. Other related works focus on the 3D acquisition of statues ~\cite{kamakura2005classification, ikeuchi2007great} and their structural analysis~\cite{bevan2014computer, bhaumik2018recognition}, sometimes with classification goals as well~\cite{kamakura2005classification, reportya}. We should also highlight the recent use of inpainting techniques on Buddhist faces for the study and recovery of damaged pieces~\cite{wang2019average}. Because 3D scanning does not scale to the order of thousands of statues, we investigate features of 2D pictures of 3D statues, very close in spirit to Pornpanomchai \textit{et al.}~\cite{pornpanomchai2011thai}. In addition to the study of ancient proportions, we provide modern analysis with visual, face-based (which also implies a 3D analysis), and semantic features for multiple classification tasks, on a very sparse dataset that does not provide information for every class.
\begin{figure*}[!ht] \centering \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[height=3.8cm,]{iconography.png} \caption{} \label{fig:tibetlines} \end{subfigure} \hspace*{-1.7mm} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=3.8cm,height=3.8cm,]{linelabels.png} \caption{} \label{fig:linelabels} \end{subfigure} \hspace*{-1.7mm} \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[height=3.6cm]{tibetproportion68pointsblue.png} \caption{} \label{fig:model68landmarks} \end{subfigure} \hspace*{-1.8mm} \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[height=3.6cm]{buddha68landmarks.jpg} \caption{} \label{fig:buddha68landmarks} \end{subfigure} \hspace*{-1.8mm} \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[height=3.6cm,]{buddha2D68frontal.jpg} \caption{} \label{fig:buddha2D68frontal} \end{subfigure} \\ \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=4.2cm,height=4.0cm]{lineschina.jpg} \caption{} \label{fig:lineschina} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=4.2cm,height=4.0cm,]{linesheian.jpg} \caption{} \label{fig:linesheian} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=4.2cm,height=4.0cm]{lineskamakura.jpg} \caption{} \label{fig:lineskamakura} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=4.2cm,height=4.0cm]{linescombined.png} \caption{} \label{fig:linescombined} \end{subfigure} \caption{Above: Deriving the Buddha iconometric proportions based on 68 facial landmarks.\ (a) Proportional measurements on a Tibetan canon of Buddha~\cite{tibetan} facial regions and their value (original template \textcopyright Carmen Mensik, \url{www.tibetanbuddhistart.com}).\ (b) Iconometric proportional guidelines defined from the 68 facial landmark points.\ (c) Application of landmarks and guidelines detection to the Tibetan model~\cite{tibetan}.\ (d) 
3D 68 facial landmarks detected on a Buddha statue image, and its frontal projection (e). Below: Examples of the detected iconometric proportions in three different styles.\ (f) China.\ (g) Heian.\ (h) Kamakura.\ (i) The combined and superimposed iconometric proportional lines from the examples (f)-(h). Canon image courtesy of Carmen Mensink~\cite{tibetan}.\ } \label{fig:buddhalandmarks} \end{figure*} \section{Data} \label{sec:data} This work is conducted in collaboration with experts who wish to investigate three important styles of Buddha statues. A first style is made of statues from ancient \textbf{China}, spreading between the IV and XIII centuries. A second style is made of Japanese statues from the \textbf{Heian} period (794-1185). The last style is also made of Japanese statues, from the \textbf{Kamakura} era (1185-1333). To do so, our experts have captured (scanned) photos from 4 series of books, resulting in a total of 6811 scanned images, with 1393 statues documented among them. The first series~\cite{chinabook} concerns 1076 Chinese statues (1524 pictures). Two book series~\cite{heian1,heian2} cover 132 statues of the Heian period (1847 pictures). The last series~\cite{Kamakurabook} collects 185 statues of the Kamakura era (3888 pictures). To further investigate the statues, our experts have also manually curated extra meta-data information (only when available). For the \textbf{localization}, we so far only consider China and Japan. \textbf{Dimensions} report the height of each statue, from which we created three classes: \textit{small} (from 0 to 100 cm), \textit{medium} (from 100 cm to 250 cm) and \textit{big} (greater than 250 cm). Many statues also have a specific \textbf{statue type} attributed to them. We restrict them to the most common types, represented by at least 20 pictures, namely \textit{Bodhisattva} and \textit{Buddha}.
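The dimension binning above reduces to simple thresholding. A minimal sketch follows; the function name is ours, and the assignment of the exact boundary heights (100 cm and 250 cm) to a class is our own assumption, since the text leaves it open:

```python
def dimension_class(height_cm: float) -> str:
    """Bin a statue height (in cm) into the three dimension classes of the
    dataset: small [0, 100), medium [100, 250), big [250, +inf).
    Boundary handling (100 -> medium, 250 -> big) is an assumption."""
    if height_cm < 100:
        return "small"
    if height_cm < 250:
        return "medium"
    return "big"
```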
\textbf{Temporal information} can be inferred from up to four components: an exact international date, a date or period that may be specific to the Japanese or Chinese traditional dating system, a century information (period), and an era that may be specific to Japan or China (period). Because this information may only specify periods, we re-align it temporally to give an estimated year in the international system, that is, the median year of the intersection of all potential time periods. All dates fall between the V and XIII centuries. \textbf{Material information} is also provided, but it is made of multiple compounds and/or subdivisions. We observe the following categories: \textbf{base material} can be of \textit{wood}, \textit{wood+lacquer}, \textit{iron}, or \textit{brick}; \textbf{color or texture} can refer to \textit{pigment}, \textit{lacquered foil}, \textit{gold leaves}, \textit{gold paint}, \textit{plating}, \textit{dry lacquer finish}, or \textit{lacquer}; \textbf{type of stone} (when applicable) may be \textit{limestone}, \textit{sand stone}, \textit{white marble}, or \textit{marble}; \textbf{type of wood} (also when applicable) may be \textit{Japanese cypress}, \textit{Katsura}, \textit{Japanese Torreya}, \textit{cherry wood}, \textit{coniferous}, or \textit{camphor tree}; the material may also imply a \textbf{construction method} among \textit{separate pieces}, \textit{one piece cut}, and \textit{one piece}. Fig.~\ref{fig:histoclasses} shows the distribution of all the images across the different classes. Because much of this information is uncertain or unavailable for each statue, the data is very sparse, and most of the classes are unevenly balanced. Note that not every picture corresponds to a documented statue; the curated dataset annotates a total of 3065 images covering 1393 unique statues.
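The temporal re-alignment described above can be sketched as follows, assuming each candidate component is expressed as a (start, end) year interval; the function name and the interval representation are ours:

```python
def estimate_year(periods):
    """Estimate a single international year from several candidate time
    periods, each a (start, end) tuple of years, as the median year of
    their intersection. Assumes a non-empty, overlapping list, as for the
    curated statues described in the text."""
    start = max(p[0] for p in periods)  # latest start bounds the intersection
    end = min(p[1] for p in periods)    # earliest end bounds the intersection
    if start > end:
        raise ValueError("candidate periods do not overlap")
    # median of a contiguous interval of years = its midpoint
    return (start + end) // 2
```

For instance, a statue documented both as Heian (794-1185) and as XII century (1100-1200) would be re-aligned to the midpoint of (1100, 1185).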
In addition, not all statues share the same information, \textit{i.e.} some statues have color information but no base material, while others have only temporal information, \textit{etc.} As a consequence, each classification task we describe later in Sec.~\ref{sec:classification} has a specific subset of images and statues to which it may apply, not necessarily overlapping with the subsets of other tasks. \section{Iconometry}\label{sec:iconometry} \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{measurements.png} \caption{Distribution of the six iconometric proportions across the three styles, China, Kamakura, and Heian, against the Tibetan theoretical canons and their actually observed proportions.} \label{fig:allhistorical} \end{figure*} We begin our analysis with the use of historic iconometry for determining facial proportions in figurative Buddha constructions. For this, we have chosen a model based on a Tibetan-originated 18th century book comprising precise iconometric guidelines for representing Buddha-related artworks~\cite{tibet17--, tibetan}. Although this book primarily encompasses Tibetan-based Buddha drawing guidelines, it gives insights into how Buddha artists from different eras and geographical locations proportioned key facial regions in their portrayal of Buddha artworks. We propose to detect and use these proportions for the analysis and differentiation of Buddha designs from different eras and locations around the world. Fig.~\ref{fig:tibetlines} depicts the chosen iconometric proportional measurements of the different facial regions that are used in our analysis. The idea is to use automatic landmark detection, so that we may infer the iconometry lines from any Buddha face in the dataset. Based on these lines, we can identify and normalize the proportions of each key region of the Buddha faces and compare them with one another and against the canons.
\subsection{Guidelines and Proportions} The guidelines are given for a front-facing Buddha statue, but not all statues are photographed perfectly facing the camera. Finding 3D facial landmarks allows for an affine spatial transformation that normalizes the statue pose before searching for the iconometric guidelines. Moreover, we wish to locate the guidelines with relation to important facial points. To do so, we first employ facial landmark detection on the historical Buddha model, and find correspondences between the lines and the model (as detailed in Table~\ref{tab:lineconnections} and illustrated in Fig.~\ref{fig:tibetlines}-c). Because the landmark points are defined in a 3-dimensional space, the correspondences are defined on the 2D front-facing orthogonal projection of the landmarks. We employ the Position Map Regression Network (PRN)~\cite{feng2018joint}, which identifies 68 3D facial landmarks in faces. Table~\ref{tab:lineconnections} defines the proportional guidelines that can be drawn from any given 68 facial landmark points (see Fig.~\ref{fig:linelabels} for the point numbers).
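The pose normalization step can be sketched as follows, assuming the 68 landmarks come as a NumPy array of 3D points. The anchor landmarks used to define the canonical axes (outer eye corners, brow points, chin) are our own choice for illustration, not necessarily the transformation PRN or the paper uses:

```python
import numpy as np

def frontalize(landmarks3d: np.ndarray) -> np.ndarray:
    """Rotate 68 3D facial landmarks (rows, 0-based indexing) into a
    canonical frontal pose and return their orthographic 2D projection.
    Anchor points are assumptions: outer eye corners 36/45, brow points
    19/24, chin 8 (0-based equivalents of the figure's 1-based numbers)."""
    pts = landmarks3d - landmarks3d.mean(axis=0)
    x = pts[45] - pts[36]                   # left-to-right across the eyes
    x /= np.linalg.norm(x)
    up = (pts[19] + pts[24]) / 2 - pts[8]   # chin towards mid-brow
    z = np.cross(x, up)                     # out-of-face normal
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                      # completes the orthonormal basis
    R = np.stack([x, y, z])                 # rows = canonical axes
    frontal = pts @ R.T                     # rotate into the canonical frame
    return frontal[:, :2]                   # orthographic projection: drop z
```

Applying a rigid rotation to the input leaves the projected output unchanged, which is the property the guideline measurements rely on.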
\begin{table}[b] \caption{The proportional guidelines can be drawn from any given 68 facial landmark points as shown in Fig.~\ref{fig:linelabels}.} \centering \begin{tabular}{c c c} \hline \small \centering \hspace{-1mm} Line &\hspace{-1mm} \small Description \hspace{-1mm} & \hspace{-1mm}\small Point Connections \hspace{-1mm} \\ \hline \footnotesize \centering L1 & \footnotesize Eyebrow Line & \footnotesize Mean of (19,21) to Mean of (24,26) \\ \footnotesize \centering L2 & \footnotesize Top Eye Line & \footnotesize Mean of (38,39) to Mean of (44,45) \\ \footnotesize \centering L3 & \footnotesize Bottom Eye Line & \footnotesize Mean of (41,42) to Mean of (47,48) \\ \footnotesize \centering L4 & \footnotesize Nose Sides Line & \footnotesize 32 to 36 \\ \footnotesize \centering L5 & \footnotesize Jaw Line & \footnotesize 7 to 11 \\ \footnotesize \centering L6 & \footnotesize Center Nose Line & \footnotesize Mean of (22,23) to Mean of (28,29,30,31) \\ \footnotesize \centering L7 & \footnotesize Left Face Line & \footnotesize Line between L1 and L5 through 2 \\ \footnotesize \centering L8 & \footnotesize Right Face Line & \footnotesize Line between L1 and L5 through 16 \\ \hline \end{tabular} \label{tab:lineconnections} \end{table} Once the guidelines are established from the detected 68 landmark points, each key region of the Buddha face is then measured according to the proposed proportions as seen in Fig. \ref{fig:tibetlines}. For this analysis we do not make use of the inner diagonal guidelines, but rather focus on a clear subset of six key facial regions, namely, \textit{left forehead (LH)}, \textit{right forehead (RH)}, \textit{eyelids (EL)}, \textit{eyes (E)}, \textit{nose (N)}, and \textit{lower face (LF)}. Table~\ref{tab:proportionmeasurements} details how we derive the proportions from the lines, with their theoretical values; Fig.~\ref{fig:model68landmarks} shows the lines once the whole process is applied to the historical model.
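The line definitions of Table~\ref{tab:lineconnections} translate directly into code. A minimal sketch over frontal 2D landmarks follows (point numbers are 1-based, as in Fig.~\ref{fig:linelabels}); the side lines L7/L8 are omitted here since the six measured regions do not use them:

```python
import numpy as np

def guideline_endpoints(pts2d: np.ndarray) -> dict:
    """Build the guidelines L1-L6 of Table 1 from 68 frontal 2D landmarks.
    `pts2d` is indexed with the figure's 1-based point numbers (hence n-1).
    Each guideline is returned as a (left_endpoint, right_endpoint) pair."""
    def mean_of(*ids):
        return pts2d[[i - 1 for i in ids]].mean(axis=0)

    return {
        "L1": (mean_of(19, 21), mean_of(24, 26)),          # eyebrow line
        "L2": (mean_of(38, 39), mean_of(44, 45)),          # top eye line
        "L3": (mean_of(41, 42), mean_of(47, 48)),          # bottom eye line
        "L4": (pts2d[31], pts2d[35]),                      # nose sides: 32 to 36
        "L5": (pts2d[6], pts2d[10]),                       # jaw line: 7 to 11
        "L6": (mean_of(22, 23), mean_of(28, 29, 30, 31)),  # center nose line
    }
```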
Fig.~\ref{fig:buddha68landmarks} shows the PRN-detected 68 landmark points on a Buddha face, and its 2D frontal orthographic projection is presented in Fig.~\ref{fig:buddha2D68frontal}. Results on statues are shown in Fig.~\ref{fig:lineschina}-i.\ \subsection{Analysis} Given our dataset, we apply the above-described iconometric proportions to the three main categories of statues. Given that we may have multiple pictures for each statue and that the landmark detection may fail on some pictures, we obtain 179 measurements for statues from China, 894 for Japan Heian statues, and 1994 for Japan Kamakura statues. Results are reported in Fig.~\ref{fig:allhistorical} against two baselines, the theoretical Tibetan canon baseline and the baseline actually measured on the same Tibetan model. Although the proportion differences might be minute, it can be observed that the Buddha designs from China, in general, have much larger noses and shorter eyelids when compared with the other two datasets, while Buddhas from the Kamakura period have their design proportions in between the other two datasets. Eyelids tend to be slightly smaller for Kamakura designs in comparison to Heian ones. Fig.~\ref{fig:lineschina}-h show samples of the iconometric proportional measurements taken from each of the experimented datasets, while Fig.~\ref{fig:linescombined} displays a superimposition of the three.
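Our reading of the measurement step in Table~\ref{tab:proportionmeasurements} can be sketched as follows: take the guideline endpoints and normalize every measurement by the total face width (LH+RH, i.e. the eyebrow-line width). This is a sketch of the normalization, not the authors' exact code; image coordinates are assumed with $y$ growing downward:

```python
def proportions(lines: dict) -> dict:
    """Derive the six normalized measurements (LH, RH, EL, E, N, LF) from
    guideline endpoints, each line being a (left_xy, right_xy) pair.
    All measurements are divided by the total face width LH+RH, so the
    theoretical canon values are 0.5, 0.5, 1/12, 1/12, 2/12, 4/12."""
    def y_r(name):  # vertical position of a line's right endpoint
        return lines[name][1][1]

    nose_top_x = lines["L6"][0][0]                 # top point of center nose line
    width = lines["L1"][1][0] - lines["L1"][0][0]  # eyebrow-line width = LH + RH
    return {
        "LH": (nose_top_x - lines["L1"][0][0]) / width,
        "RH": (lines["L1"][1][0] - nose_top_x) / width,
        "EL": (y_r("L2") - y_r("L1")) / width,
        "E":  (y_r("L3") - y_r("L2")) / width,
        "N":  (y_r("L4") - y_r("L3")) / width,
        "LF": (y_r("L5") - y_r("L4")) / width,
    }
```

On an ideal canonical face of width 12 units, this returns exactly the theoretical column of Table~\ref{tab:proportionmeasurements}.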
\begin{table}[b] \caption{The iconometric measurements derived from the guidelines with their theoretical values, normalized by the largest possible proportion (here the total width, LH+RH=12).} \centering \begin{tabular}{c c c c} \hline \small \centering \hspace{-1mm} Label &\hspace{-1mm} \small Description \hspace{-1mm} & \hspace{-1mm}\small Line/Point Connections \hspace{-1mm} & \hspace{-1mm}\small \vtop{\hbox{\strut Theoretical length}\hbox{\strut \hspace{3mm}(normalized)}} \hspace{-1mm} \\ \hline \footnotesize \centering LH & \footnotesize Left Forehead & \footnotesize L1 left-point to L6 top-point & \footnotesize 6 (0.500) \\ \footnotesize \centering RH & \footnotesize Right Forehead & \footnotesize L6 top-point to L1 right-point & \footnotesize 6 (0.500) \\ \footnotesize \centering EL & \footnotesize Eyelid & \footnotesize L1 right-point to L2 right-point & \footnotesize 1 (0.083) \\ \footnotesize \centering E & \footnotesize Eye & \footnotesize L2 right-point to L3 right-point & \footnotesize 1 (0.083) \\ \footnotesize \centering N & \footnotesize Nose & \footnotesize L3 right-point to L4 right-point & \footnotesize 2 (0.167) \\ \footnotesize \centering LF & \footnotesize Lower Face & \footnotesize L4 right-point to L5 right-point & \footnotesize 4 (0.333) \\ \hline \end{tabular} \label{tab:proportionmeasurements} \end{table} One can also notice some important differences between the theoretical canons of the Tibetan model and their actual measurements in the dataset. Considering the small average distance between the observed model proportions and the different measurements on real statues, we may wonder whether this distance is an artifact of the measurement methodology -- which is trained on human faces -- or an actual approximation of these measures. Even in the original Tibetan model, the proportions of the nose appear larger to the eye than originally prescribed.
Although the differences are not striking for the measurements themselves, they do vary as the timelines and locations change. This motivates us to further investigate whether modern image embeddings can reveal further differences among different categories of Buddha statues. \section{Modern Embeddings}\label{sec:classification} Since the previous method, based on a historical description of facial landmarks, does not give a clear-cut separation between classes, we also explore modern types of embeddings designed for classification, namely, image embeddings that take the full image as input, face embeddings trained for facial recognition, and graph embeddings purely built on the semantics of the statues. \begin{table*}[hbt] \caption{Weighted-average F1-score on the different classification tasks for each proposed embedding.} \centering \begin{tabular}{c c@{\;}c c@{\;}c c@{\;}c c@{\;}c c@{\;}c c@{\;}c c@{\;}c c@{\;}c c@{\;}c} \hline \centering & \multicolumn{2}{c}{\hspace{-1mm} \small T1 \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm} \small T2 \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm} \small T3 \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm} \small T4 \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm} \small T5.1 \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm} \small T5.2 \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm} \small T5.3 \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm} \small T5.4 \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm} \small T5.5 \hspace{-1mm}} \\ \centering \hspace{-1mm} \small Method & \multicolumn{2}{c}{\hspace{-1mm} \small Style \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm}\small Dimensions \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm}\small Century \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm}\small Statue type \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm}\footnotesize \begin{tabular}{@{}c@{}}Base \\ material\end{tabular} \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm}\footnotesize
\begin{tabular}{@{}c@{}}Color/ \\ texture\end{tabular} \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm}\footnotesize \begin{tabular}{@{}c@{}}Type of \\ stone\end{tabular} \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm}\footnotesize \begin{tabular}{@{}c@{}}Type of \\ wood\end{tabular} \hspace{-1mm}} & \multicolumn{2}{c}{\hspace{-1mm}\footnotesize \begin{tabular}{@{}c@{}}Construct. \\ method\end{tabular} \hspace{-1mm}} \\ & \footnotesize SVM & \footnotesize NN & \footnotesize SVM & \footnotesize NN & \footnotesize SVM & \footnotesize NN & \footnotesize SVM & \footnotesize NN & \footnotesize SVM & \footnotesize NN & \footnotesize SVM & \footnotesize NN & \footnotesize SVM & \footnotesize NN & \footnotesize SVM & \footnotesize NN & \footnotesize SVM & \footnotesize NN \\ \hline \centering \footnotesize $Iconometry$ & \footnotesize 0.50 & \footnotesize \textit{0.51} & \footnotesize \textit{0.50} & \footnotesize 0.48 & \footnotesize \textit{0.67} & \footnotesize 0.55 & \footnotesize 0.35 & \footnotesize \textit{0.68} & \footnotesize 0.80 & \footnotesize \textit{0.88} & \footnotesize \textit{0.34} & \footnotesize 0.23 & \footnotesize 0.17 & \footnotesize \textit{0.23} & \footnotesize \textit{0.84} & \footnotesize 0.83 & \footnotesize 0.23 & \footnotesize \textit{0.35} \\ \hline \centering \footnotesize $VGG16_{full}$ & \footnotesize 0.88 & \footnotesize 0.95 & \footnotesize 0.54 & \footnotesize 0.74 & \footnotesize 0.52 & \footnotesize 0.73 & \footnotesize 0.63 & \footnotesize 0.73 & \footnotesize 0.89 & \footnotesize 0.82 & \footnotesize 0.69 & \footnotesize 0.61 & \footnotesize 0.34 & \footnotesize 0.38 & \footnotesize 0.89 & \footnotesize 0.86 & \footnotesize 0.63 & \footnotesize 0.65 \\ \centering \footnotesize $ResNet50_{full}$ & \footnotesize 0.88 & \footnotesize \textbf{\textit{0.98}} & \footnotesize 0.38 & \footnotesize \textbf{\textit{0.78}} & \footnotesize 0.50 & \footnotesize \textbf{\textit{0.78}} & \footnotesize 0.47 & \footnotesize \textbf{\textit{0.82}} 
& \footnotesize \textbf{\textit{0.93}} & \footnotesize 0.86 & \footnotesize \textbf{\textit{0.79}} & \footnotesize 0.66 & \footnotesize \textbf{\textit{0.49}} & \footnotesize 0.42 & \footnotesize \textit{0.90} & \footnotesize 0.84 & \footnotesize 0.69 & \footnotesize \textit{0.70} \\ \hline \centering \footnotesize$VGG16_{cropped}$ & \footnotesize 0.83 & \footnotesize 0.92 & \footnotesize 0.54 & \footnotesize 0.70 & \footnotesize 0.50 & \footnotesize 0.72 & \footnotesize 0.67 & \footnotesize 0.69 & \footnotesize 0.87 & \footnotesize 0.85 & \footnotesize 0.61 & \footnotesize 0.55 & \footnotesize \textit{0.46} & \footnotesize 0.37 & \footnotesize 0.87 & \footnotesize 0.86 & \footnotesize 0.63 & \footnotesize 0.62 \\ \centering \footnotesize$ResNet50_{cropped}$ & \footnotesize 0.88 & \footnotesize \textit{0.96} & \footnotesize 0.33 & \footnotesize \textit{0.73} & \footnotesize 0.50 & \footnotesize \textit{0.75} & \footnotesize 0.55 & \footnotesize \textit{0.74} & \footnotesize \textit{0.90} & \footnotesize 0.89 & \footnotesize \textit{0.74} & \footnotesize 0.67 & \footnotesize 0.45 & \footnotesize 0.39 & \footnotesize \textbf{\textit{0.91}} & \footnotesize 0.86 & \footnotesize 0.72 & \footnotesize \textbf{\textit{0.74}} \\ \hline \centering \footnotesize$VGG16_{vggface2}$ & \footnotesize 0.72 & \footnotesize \textit{0.89} & \footnotesize 0.54 & \footnotesize 0.73 & \footnotesize 0.44 & \footnotesize \textit{0.70} & \footnotesize 0.67 & \footnotesize 0.71 & \footnotesize 0.86 & \footnotesize 0.74 & \footnotesize \textit{0.69} & \footnotesize 0.61 & \footnotesize \textit{0.43} & \footnotesize 0.35 & \footnotesize \textit{0.88} & \footnotesize 0.85 & \footnotesize \textit{0.67} & \footnotesize 0.65 \\ \centering \footnotesize$ResNet50_{vggface2}$ & \footnotesize 0.72 & \footnotesize 0.88 & \footnotesize 0.54 & \footnotesize \textit{0.74} & \footnotesize 0.44 & \footnotesize 0.69 & \footnotesize 0.67 & \footnotesize \textit{0.72} & \footnotesize 0.86 & \footnotesize 
\textit{0.87} & \footnotesize \textit{0.69} & \footnotesize 0.64 & \footnotesize \textit{0.43} & \footnotesize 0.34 & \footnotesize \textit{0.88} & \footnotesize 0.84 & \footnotesize \textit{0.67} & \footnotesize 0.65 \\ \hline \centering \footnotesize $Node2Vec_{KG}$ & \footnotesize 0.92 & \footnotesize \textit{0.93} & \footnotesize -- & \footnotesize -- & \footnotesize 0.71 & \footnotesize \textit{0.74} & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- \\ \centering \footnotesize $Node2Vec_{KG_{time}}$ & \footnotesize \textbf{\textit{0.98}} & \footnotesize \textbf{\textit{0.98}} & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- & \footnotesize -- \\ \hline \end{tabular} \label{tab:T1T5} \end{table*} \subsection{Classification Tasks} Our initial research question is a \textbf{style classification (T1)}, \textit{i.e.} the comparison of three different styles: \textit{China}, \textit{Kamakura period}, and \textit{Heian period}. Given the rich dataset we have been offered to explore, we also approach four additional classification tasks. We conduct a \textbf{statue type classification (T2)} which guesses the type of Buddha represented, and \textbf{dimension classification (T3)} which classifies the dimension of a statue across the three classes determined in Sec.~\ref{sec:data}. We continue with a \textbf{century classification (T4)}, given the temporal alignment of our statues, each could be assigned to a different century (we are covering a total of nine centuries in our dataset). 
We conclude with the \textbf{material classifications (T5)}, which comprise: \textit{base material (T5.1)}, \textit{color/texture (T5.2)}, \textit{type of stone (T5.3)}, \textit{type of wood (T5.4)}, and \textit{construction method (T5.5)}. Note that all material classifications except task \textit{T5.5} are actually multi-label classification tasks; indeed, a statue can combine different materials, colors, \textit{etc.} Only the construction method is unique, making it a single-label classification. To evaluate classification across each of these tasks, we limit our dataset to the 1393 annotated and cleaned statues, covering a total of 3315 images. To compare the different methods on the same dataset, we further limit our evaluation to the 2508 pictures with a detectable face, as searched in Sec.~\ref{sec:iconometry} using PRN~\cite{feng2018joint}. Due to the limited size of the dataset, we train our classifiers using 5-fold cross-validation. \subsection{Image Embeddings} To describe our Buddha statue 2D pictures, we propose to study existing neural network architectures which have already proven successful in many classification tasks, namely $VGG16$~\cite{simonyan2014very} and $ResNet50$~\cite{he2016deep}. For the classification of Buddha statues from the global aspect of their image, we use each of these networks with their standard pre-trained weights (from ImageNet~\cite{deng2009imagenet}). To study the classification performance of statues with regard to their face, we first extract the face region using PRN~\cite{feng2018joint}. To compare the relevance of the facial region for classification, we evaluate two settings. The first evaluates ImageNet-trained embeddings on the full images (referred to as $VGG16_{full}$ and $ResNet50_{full}$); the second evaluates the same features, but only on the cropped region of the face ($VGG16_{cropped}$ and $Res$\-$Net50_{cropped}$).
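The restriction to the face region can be sketched as a landmark-driven crop; the margin and the clamping to image borders are our own assumptions, not necessarily the exact box that PRN produces:

```python
import numpy as np

def crop_face(image: np.ndarray, landmarks2d: np.ndarray,
              margin: float = 0.2) -> np.ndarray:
    """Crop an (H, W, C) image to the bounding box of its 2D facial
    landmarks, enlarged by `margin` (fraction of the box size on each
    side) and clamped to the image borders."""
    x0, y0 = landmarks2d.min(axis=0)
    x1, y1 = landmarks2d.max(axis=0)
    mx, my = margin * (x1 - x0), margin * (y1 - y0)
    h, w = image.shape[:2]
    x0 = max(int(x0 - mx), 0)
    y0 = max(int(y0 - my), 0)
    x1 = min(int(x1 + mx) + 1, w)
    y1 = min(int(y1 + my) + 1, h)
    return image[y0:y1, x0:x1]
```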
In addition, each of the networks is also fine-tuned using VGGFace2~\cite{VGGFace2}, a large-scale dataset designed for the face recognition task (on cropped faces), hereafter $VGG16_{vggface2}$ and $ResNet50_{vggface2}$. Whichever method is used, the size of the resulting embedding space is 2048 dimensions. \subsection{Semantic Embedding} Given the rich data we are provided, and inspired by the work of Garcia \textit{et al.}~\cite{garcia2019context}, we may also explore semantic embedding in the form of an \textit{artistic knowledge graph}. Instead of traditional homophily relationships, our \textit{artistic knowledge graph} $KG=(V,E)$ is composed of multiple types of nodes: first of all, each statue picture is a node (\textit{e.g.} the Great Buddha of Kamakura). Then, each value of each family of attributes also has a node, connected to the nodes of the statues it qualifies (for example, the \textit{Great Buddha of Kamakura} node will be connected to the \textit{Bronze} node). From the metadata provided, we construct two knowledge graphs. A first knowledge graph $KG$ only uses the following families of attributes: \textit{Dimensions}, \textit{Materials}, and \textit{Statue type}. Because we are curious to test the impact of time as a determinant of style, we also add the \textit{Century} attributes in a more complete graph $KG_{time}$. In total, the resulting $KG$ presents 3389 nodes and 16756 edges, and $KG_{time}$ presents 3401 nodes and 20120 edges. An illustrative representation of our \textit{artistic knowledge graph} is shown in Fig.~\ref{fig:KG_example}. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{KG_dataset.png} \caption{An example of \textit{artistic knowledge graph}.
Each node corresponds to either a statue or an attribute, whereas edges correspond to existing interconnections.} \label{fig:KG_example} \end{figure} Considering the sparsity of our data, due to noisy and/or missing values, this graph definition suits our case very well. However, because we use category labels during the construction of the knowledge graphs, we are limited to evaluating only tasks T1 and T3 for $KG$, and T1 only for $KG_{time}$. To measure node embeddings in this graph, we use node2vec~\cite{node2vec}, which assigns a 128-dimensional representation to a node as a function of its neighborhood at a geodesic distance of 2. This should reflect statue homophily very well, since statue nodes may reach one another within a geodesic distance of 2. \subsection{Evaluation} We use two types of classifiers for each task. Given the small amount of imbalanced data we have for each classification, we first train a classical Support Vector Machine (SVM) classifier~\cite{cortes1995support}. To improve the quality of the classifier given imbalanced data, we adjust the following parameters: $\gamma = 1/|M|$ ($|M|$ being the number of classes), $penalty=1$, a \textit{linear} kernel, and adjusted class weights $w_m = N/(k \cdot n_m)$ (inversely proportional to class frequency in the input data: for a class $m$ among $k$ classes, having $n_m$ observations among $N$ observations). We additionally train a Neural Network classifier (NN), in the form of a fully connected layer followed by a softmax activation, with the categorical crossentropy loss $\mathcal{L}(y, \hat{y})$ if only one category is applicable: $$ \mathcal{L}(y, \hat{y}) = - \sum_{j=1}^{M}\sum_{i=1}^{N} y_{ij}\log(\hat{y}_{ij}) $$ with $M$ categories. Otherwise, we use a binary crossentropy $H_p(q)$ for multi-label classification, as follows: $$H_p(q)=-\frac{1}{N}\sum_{i=1}^N \left[ y_i\log(p(y_i))+(1-y_i)\log(1-p(y_i)) \right]$$ where $y$ is the ground-truth label vector and $N$ is the size of the training set.
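The class-weighting scheme above, inversely proportional to class frequency, coincides with scikit-learn's \texttt{class\_weight='balanced'} heuristic; a minimal sketch makes it explicit:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Compute w_m = N / (k * n_m) for each class m: N samples in total,
    k distinct classes, n_m samples in class m. This matches
    scikit-learn's class_weight='balanced'; an equivalent SVM could be
    built with SVC(kernel='linear', class_weight='balanced')."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {m: n / (k * n_m) for m, n_m in counts.items()}
```

A minority class thus receives a proportionally larger weight, counteracting the label imbalance noted in the dataset.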
Both cases use the Adam optimizer, and the size of the output layer is matched to the number of possible classes. For each of the tasks we report the weighted average of precision and recall in the form of an F1-score, which is better suited to classification with unbalanced labels (precision and recall values are very comparable across all our classifiers, so the F1-score works well in our case). Classification results are presented in Table~\ref{tab:T1T5}. \begin{figure*}[h] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{ResNet50.png} \caption{$ResNet50_{full}$} \label{fig:resnet50} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{resnet50_cropped.png} \caption{$ResNet50_{cropped}$} \label{fig:resnet50cropped} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{resnet50_vggface2.png} \caption{$ResNet50_{vggface2}$} \label{fig:resnetface} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{node2vec.png} \caption{$node2vec_{KG_{time}}$} \label{fig:node2vec} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{node2vec_notime.png} \caption{$node2vec_{KG}$} \label{fig:node2vec_emb} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{iconometry.png} \caption{$Iconometry$} \label{fig:iconmetry_emb} \end{subfigure} \caption{Comparison of all embeddings through a 2D projection with tSNE~\cite{maaten2008visualizing}, colored with the three styles China (orange), Heian (red), and Kamakura (blue).} \label{fig:alltsne} \end{figure*} \section{Discussion} The first point we notice from the classification results is that iconometry does not perform well in comparison to the neural networks.
On average, iconometry obtains a precision of 0.49 and a recall of 0.67, whereas scores are very similar across all the other methods. Deep learning based methods perform better, but not equally on all tasks. Style (T1), base material (T5.1), and type of wood (T5.4) are the best-classified tasks. Type of stone (T5.3) is by far the most difficult to classify, probably due to the imbalance of its labels. Iconometry is significantly worse than the other methods in guessing the construction method (T5.5) and the color/texture (T5.2). The neural network classifier (NN) usually performs better than the SVM, except in the multi-label classification tasks (T5.1--T5.4), which suggests that those classes are closer to linearly separable. Methods trained on VGGFace2~\cite{VGGFace2} perform a little worse than their counterparts trained on ImageNet~\cite{deng2009imagenet} with the cropped faces, which in turn perform slightly worse than with the full images. This may happen because the VGGFace2 dataset accounts for variations across ages, whereas the shape of Buddha faces is more rigid (from the point of view of the construction guidelines). It might also suggest that faces of Buddha statues differ fundamentally from a standard human face, which makes transfer learning from networks pretrained on VGGFace2 less effective. This observation agrees with the fact that iconometry did not perform well. The differences are not significant though, giving us great hope for fine-tuning models directly on Buddha faces. In addition, $ResNet50$~\cite{he2016deep} appears to show the best results overall. Remarkably, when we focus only on the face region, $ResNet50$ performs even better than the others when classifying \textbf{type of wood} and \textbf{construction method}, which supports the idea of using the face region as a good discriminator, especially for material-related classification.
The semantic embeddings based on the \textit{artistic knowledge graph} perform as well as the best image-based embeddings for style classification (T1), a result consistent with Garcia \textit{et al.}'s observations~\cite{garcia2019context}. This is probably due to the contextual information carried by the graph. However, if the century is not present in the KG, $ResNet50$ still shows better results than $node2vec$. We can additionally underline that temporal information is a good predictor of \textbf{style}, since the classification performance is slightly improved after adding the \textbf{centuries} information to the knowledge graph. We may further investigate the space defined by those embeddings as illustrated in Fig.~\ref{fig:alltsne}. It is interesting to see the similarities in the space between $node2vec$ and the iconometry used as embeddings. However, their classification performances are very different. The iconometry embeddings do not seem to separate the three styles well, but quite notable clusters are forming that would be interesting to investigate further. The advantage of iconometry over the other embeddings is its explainability. Integrating time into the $KG$ clearly shows a better separability of the three styles. By looking at the spread of the face-based $ResNet50_{cropped}$ embeddings, we may also notice that the different classes are much more diffuse than with $ResNet50_{full}$. The face region is very specialized in the space of all shapes. Although the $vggface2$ embeddings are trained for a different task, facial recognition, \textit{i.e.} to identify similar faces with quite some variability, we do not see a clear difference in the separation of styles obtained from the face regions.
To show the effectiveness of facial analysis against whole picture analysis, we will need to proceed with further experiments, including increasing the variety of Buddha statues in our dataset in order to train specific models designed for Buddha faces. \section{Conclusion} We have presented a method for the acquisition of iconometric guidelines in Buddha faces and used them for different classification tasks. We have compared them with different modern embeddings, which have demonstrated much higher classification performances. Still, the iconometric guidelines retain one advantage: their simplicity and ease of understanding. To further understand what makes a style, we would like to investigate visualization and parameter regressions in future work, and identify salient areas that are specific to a class. We have presented one straightforward method for the identification of iconometric landmarks in Buddha statues, but many statues did not show good enough landmarks to be measured at all. We could extend our landmark analysis, and boost the discrimination power of landmarks by designing a specific landmark detector for Buddha statues. Scanning books is definitely more scalable today than 3D-capturing the statues. However, given the high results of the deep-learning methods for style classification, we could question how influential the data acquisition method was on the classification. Each book's paper may have a slightly different grain that deep neural networks may have captured. Nonetheless, the different classification tasks are relatively independent of the book source while still showing quite high results. One of our goals is to continue developing this dataset and multiplying our data sources, which would further diminish the influence of data acquisition over the analysis. \textbf{Acknowledgement:} This work was supported by JSPS KAKENHI Grant Number 18H03571. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} An involution $\iota$ on a hyperk\"ahler manifold is symplectic if it preserves the holomorphic symplectic form. Consider a hyperk\"ahler manifold $X$ of $K3^{[n]}$ type, i.e., deformation equivalent to the Hilbert scheme of $n$ points on a $K3$ surface, or of Kummer $n$ type, i.e., deformation equivalent to the Albanese fibre of the Hilbert scheme of $n+1$ points on an abelian surface. We are interested in describing the fixed loci of symplectic involutions on $X$. In \cite{nik} Nikulin proved that the fixed locus of a symplectic involution on a $K3$ surface consists of $8$ isolated fixed points. The second named author proved in \cite{mon1} that the fixed locus of a symplectic involution on a hyperk\"ahler manifold deformation equivalent to the Hilbert square of a $K3$ surface consists of $28$ isolated points and one $K3$ surface. Here we generalize these results to any hyperk\"ahler manifold of $K3^{[n]}$ type. \begin{thm} Let $X$ be a hyperk\"ahler manifold of $K3^{[n]}$ type, and let $\iota$ be a symplectic involution on $X$. Then, up to deformation, the fixed locus $F$ of $\iota$ consists of finitely many copies of Hilbert schemes of $K3$ surfaces $S^{[m]}$ (where $m \leq \frac{n}{2}$) and possibly isolated fixed points (only when $n \leq 24$). The fixed locus $F$ is stratified into loci of even dimensions $F_{2m}$, where $\max (0, \frac{n}{2} - 12) \leq m \leq \frac{n}{2}$. Each fixed locus $F_{2m}$ of dimension $2m$ has $$\sum_{2m=n-k-2l}{8 \choose k} { k \choose l}$$ connected components, each one of which is a deformation of a copy of $S^{[m]}$. In particular, the fixed locus $Z$ of largest dimension is the following. ({\it i}) If $n$ is even, then $Z$ consists of one copy of $S^{[\frac{n}{2}]}$; ({\it ii}) If $n$ is odd, then $Z$ consists of $8$ copies of $S^{[\frac{n-1}{2}]}$.
\end{thm} A key ingredient in the proof of this theorem is that the moduli space of pairs consisting of a hyperk\"{a}hler manifold of $K3^{[n]}$ type (or of Kummer $n$ type) together with a symplectic involution is connected. Using the Global Torelli theorem, we first prove that $(X, \iota)$ is birational to a ``standard pair'', and then we prove that birational pairs can be deformed one into the other while preserving the group action. A ``standard pair'' for $K3^{[n]}$ type manifolds is a deformation of $(S^{[n]},G)$, where $S$ is a $K3$ surface and $G$ is a symplectic involution on $S^{[n]}$ induced by a symplectic involution on $S$. \begin{thm} Let $X$ be a manifold of $K3^{[n]}$ type or of Kummer $n$ type. Let $G\subset \mathrm{Aut}_s(X)$ be a finite group of numerically standard automorphisms. Then $(X,G)$ is a standard pair. \end{thm} As a corollary to this theorem we obtain that the fixed locus of the symplectic involution $\iota$ has the form of the fixed locus on $S^{[n]}$ of a symplectic involution coming from the $K3$ surface $S$. Thus, we can restrict to the case when $X=S^{[n]}$. Furthermore, since the involution $\iota$ acts as the identity on the class of the exceptional divisor in $H^2(X, \mathbb R)$, it preserves the exceptional divisor of $S^{[n]}$. Therefore, it descends to an action on $\text{Sym}^n S$. Depending on the parity of $n$, we can see that the number of irreducible components of the fixed locus of largest dimension is either one or eight. Using basic combinatorics, one can count the number of irreducible components of the fixed locus in each possible dimension. \\ In an analogous way, we compute the fixed locus of a symplectic involution on a hyperk\"{a}hler manifold of Kummer $n$ type, by reducing to the case of an involution coming from the sign change on the abelian surface $A$: \begin{thm} Let $X$ be a hyperk\"ahler manifold of Kummer $n$ type, and let $\iota$ be a symplectic involution on $X$.
Then, up to deformation, the fixed locus $F$ of $\iota$ consists of finitely many copies of Hilbert schemes of $K3$ surfaces $S^{[m]}$ (where $m \leq \frac{n+1}{2}$) and possibly isolated fixed points (only when $n \leq 48$). The fixed locus $F$ is stratified into loci of even dimensions $F_{2m}$, where $\max (0, \frac{n+1}{2} - 24) \leq m \leq \frac{n+1}{2}$. Each fixed locus $F_{2m}$ of dimension $2m$ has $N^n_m$ (for the precise formula see the last section) connected components, each one of which is a deformation of a copy of $S^{[m]}$. \end{thm} \section{Preliminaries} Let $X$ be a hyperk\"ahler manifold, i.e., a complex K\"ahler simply connected manifold such that $H^{2,0}(X) \cong \mathbb C$ is generated by a holomorphic symplectic $2$-form. If $X$ is deformation equivalent to the Hilbert scheme $S^{[n]}$ of $n$ points of a $K3$ surface $S$, we say that $X$ is of $K3^{[n]}$ type. If $X$ is deformation equivalent to the generalized Kummer $2n$-fold $K_n(A)$ of an abelian surface $A$, we say that $X$ is of Kummer $n$ type. \begin{defn} Let $X$ be a complex manifold and let $G$ be a subgroup of $\text{Aut}(X)$, the group of automorphisms of $X$. A deformation of the pair $(X,G)$ consists of the following data: {\it (i)} A flat family $\mathcal{X}\rightarrow B$, where $B$ is connected and $\mathcal{X}$ is smooth, and a distinguished point $0\in B$ such that $\mathcal{X}_0\cong X$. {\it (ii)} A faithful action of the group $G$ on $\mathcal{X}$ inducing fibrewise faithful actions of $G$. Two pairs $(X,G)$ and $(Y,H)$ are deformation equivalent if $(Y,H)$ is a fibre of a deformation of the pair $(X,G)$. \end{defn} In this paper we are mostly interested in deformations of the pair $(X, {\mathbb Z}_2)$, where $X$ is of $K3^{[n]}$ type and ${\mathbb Z}_2$ is generated by a symplectic involution. \begin{defn} Let $S$ be a $K3$ surface and let $G\subset \mathrm{Aut}_s(S)$ be a subgroup of the symplectic automorphisms on $S$. 
Then $G$ induces a subgroup of the symplectic automorphisms of $S^{[n]}$, which we still denote by $G$. We call the pair $(S^{[n]},G)$ a natural pair. The pair $(X,H)$ is standard if it is deformation equivalent to a natural pair. If $A$ is an abelian surface, the same definitions apply to the generalized Kummer $2n$-fold $K_n(A)$ and symplectic automorphisms preserving $0\in A$; however, the reader should notice that the induced action of $G$ on $H^2(K_n(A))$ is not necessarily faithful. \end{defn} \begin{defn} Let $G$ be a finite group acting faithfully on a manifold $X$. Define the invariant locus $T_G(X)$ inside $H^2(X, \mathbb Z)$ to be the fixed locus of the induced action of $G$ on the cohomology. The co-invariant locus $S_G(X)$ is the orthogonal complement $T_G(X)^\perp$. The fixed locus of $G$ on $X$ is denoted by $X^G$. \end{defn} As automorphisms of $K3$ and abelian surfaces are better understood, it is interesting to determine whether an automorphism group on a manifold of $K3^{[n]}$ type (or of Kummer $n$ type) is standard or not. To this end, we give the following criterion: \begin{defn}\label{num_stand} Let $Y$ be a manifold of $K3^{[n]}$ type or of Kummer $n$ type. A pair $(Y,H)$ is called numerically standard if the representation of $H$ on $H^2(Y,\mathbb{Z})$ coincides with that of a standard pair $(X,H)$, up to the action of the monodromy group. More specifically, there exists a $K3$ (or abelian) surface $S$ with an $H$ action such that \begin{itemize} \item $S_H(S)\cong S_H(Y)$, \item $T_H(S)\oplus \mathbb{Z}\delta=T_H(S^{[n]})\cong T_H(Y)$ (and analogously for the Kummer $n$ case), \item The two isomorphisms above extend to isomorphisms of the Mukai lattices $U^4\oplus E_8(-1)^2$ (or $U^4$ in the Kummer case) after taking the canonical choice of an embedding of $H^2$ into the Mukai lattice described by Markman \cite[Section 9]{mar_tor} for the $K3^{[n]}$ type case and by Wieneck \cite[Theorem 4.1]{wie} for the Kummer case.
\end{itemize} \end{defn} This definition is slightly stronger than the one given in \cite{mon2} for manifolds of $K3^{[n]}$ type, but they coincide when $n-1$ is a prime power, which was the case of interest in that paper. Notice that it is relatively easy to check the first two conditions, while the third one is more involved but often unnecessary; see Proposition \ref{mukai_not}. \\ Let $X$ be a compact complex manifold and $\operatorname{Diff}^0(X)$ a connected component of its diffeomorphism group. Denote by $\operatorname{Comp}$ the space of complex structures on $X$, equipped with the structure of a Fr\'echet manifold. \begin{defn} The Teichm\"uller space of $X$ is the quotient $\operatorname{Teich}:=\operatorname{Comp}/\operatorname{Diff}^0(X)$. \end{defn} The Teichm\"uller space is finite-dimensional for a Calabi-Yau manifold $X$ (see \cite{cat}). Let $\operatorname{Diff}^+(X)$ be the group of orientable diffeomorphisms of a complex manifold $X$. The {\it mapping class group} $\Gamma:=\operatorname{Diff}^+(X)/\operatorname{Diff}^0(X)$ acts on $\operatorname{Teich}$. \\ By Huybrechts's result \cite[Theorem 4.3]{huyb}, non-separated points in the moduli space of marked hyperk\"ahler manifolds correspond to birational hyperk\"ahler manifolds. Consider the equivalence relation $\sim$ on $\operatorname{Teich}$ identifying non-separated points. Let $\operatorname{Teich}_b = \operatorname{Teich}/{}_\sim$ be the {\it birational Teichm\"uller space}. \\ Let $X$ be a hyperk\"ahler manifold, and let $\operatorname{Teich}$ be its Teichm\"uller space. Consider the map $\mathcal{P} : \operatorname{Teich} \rightarrow \mathbb PH^2(X, \mathbb C)$, sending a complex structure $J$ to the line $H^{2,0}(X,J) \in \mathbb PH^2(X, \mathbb C)$.
The image of $\mathcal{P}$ is an open subset of a quadric, defined by $\operatorname{{\mathbb P}\sf er}:=\big\{l\in\mathbb PH^2(X,\mathbb C)\ \big|\ q(l,l)=0,\ q(l,\bar l)>0\big\}.$ \begin{defn} The map $\mathcal{P}: \operatorname{Teich} \rightarrow \operatorname{{\mathbb P}\sf er}$ is called the period map, and the set $\operatorname{{\mathbb P}\sf er}$ is called the period domain. \end{defn} The period domain $\operatorname{{\mathbb P}\sf er}$ is identified with the quotient $\frac{SO(3, b_2-3)}{SO(2) \times SO(1, b_2 -3)}$. \begin{thm} (Verbitsky's Global Torelli, \cite{verb}) \label{torelli} The period map $\mathcal{P}: \operatorname{Teich}_b \rightarrow \operatorname{{\mathbb P}\sf er}$ is an isomorphism on each connected component of $\operatorname{Teich}_b$. \end{thm} It is possible to compute the K\"{a}hler cone of a hyperk\"{a}hler manifold from numerical data on the second cohomology (see \cite{av} and \cite{mon_ka} for the general theory); the following will be needed for our main result: \begin{prop}\label{walls_for_two} Let $X$ be a manifold of $K3^{[n]}$ type, $n$ odd. Let $a\in Pic(X)$ be a class of negative square greater than $-6-2n$ and of divisibility two. Then there are no K\"{a}hler classes orthogonal to $a$. \end{prop} \begin{proof} There is a canonical choice of an embedding of $H^2(X,\mathbb{Z})$ into the Mukai lattice $U^4\oplus E_8(-1)^2$, which is described by Markman \cite[Section 9]{mar_tor}. Let $\mathbb{Z}v:=(H^2)^\perp$ in this embedding. The lattice $L:=\langle v,a\rangle$ is generated by $v$ and $\frac{v+a}{2}$, whose square is at least $-2$ by hypothesis and at most $v^2/4$; therefore, by \cite[Thm 5.7]{bm}, $a^\perp$ is a wall for the space of positive classes, hence there cannot be a K\"{a}hler class orthogonal to it. \end{proof} In particular, it follows from the results in \cite{av} and \cite{mon_ka} that if there is a K\"{a}hler class orthogonal to the Picard lattice, the K\"{a}hler cone coincides with the positive cone.
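For the reader's convenience, the numerical condition in Proposition \ref{walls_for_two} can be unpacked explicitly (this is our rephrasing of the arithmetic in the proof, using $v^2=2n-2$ and $a\perp v$):

```latex
% Since a has divisibility 2 and a is orthogonal to v, the class
% (v+a)/2 is integral, and with v^2 = 2n-2 one computes
\[
  \left(\frac{v+a}{2}\right)^{2} \;=\; \frac{v^{2}+a^{2}}{4}
                                 \;=\; \frac{(2n-2)+a^{2}}{4},
\]
% so the requirement that this square be at least -2 reads
\[
  \frac{(2n-2)+a^{2}}{4} \;\geq\; -2
  \quad\Longleftrightarrow\quad
  a^{2} \;\geq\; -6-2n,
\]
% which is exactly the bound appearing in the statement.
```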
\\ Some lattice theory will be used in what follows; the main reference is \cite{nik2}, where all of the following can be found. For a lattice $L$ we define the discriminant group $A_L:=L^\vee/L$. Let $l(A_L)$ denote the length of this group. If the lattice $L$ is even, $A_L$ carries a bilinear form with values in $\mathbb{Q}/\mathbb{Z}$ induced from the bilinear form on $L$. The associated quadratic form is called the discriminant form of $L$ and is denoted by $q_{A_L}$. If $(l_+,l_-)$ is the signature of $L$, the integer $l_+-l_-$ is called the signature of $q_{A_L}$; it is well defined modulo $8$.\\ The following concerns primitive embeddings of lattices, i.e., embeddings where the quotient is torsion free: \begin{lem}\cite[Proposition 1.15.1]{nik2}\label{lem:nik_immerge} Let $S$ and $N$ be even lattices of signature $(s_+,s_-)$ resp. $(n_+,n_-)$. Primitive embeddings of $S$ into $N$ are determined by the sets $(H_S,H_N,\gamma,K,\gamma_K)$, where $K$ is an even lattice with signature $(n_+-s_+,n_--s_-)$ and discriminant form $-\delta$, where $\delta\,\cong\,(q_{A_S}\oplus -q_{A_N})_{|\Gamma_\gamma^\perp/\Gamma_\gamma}$, and $\gamma_K\,:\,q_K\,\rightarrow\,(-\delta)$ is an isometry. Moreover, two such sets $(H_S,H_N,\gamma,K,\gamma_K)$ and $(H'_S,H'_N,\gamma',K',\gamma'_K)$ determine isomorphic sublattices if and only if \begin{itemize} \item $H_S=\lambda H'_S$, $\lambda\in O(q_S)$, \item $\exists\,\epsilon\,\in\,O(q_{A_N})$ and $\psi\,\in\,Isom(K,K')$ such that $\gamma'=\epsilon\circ\gamma$ and $\overline{\epsilon}\circ\gamma_K=\gamma'_K\circ\overline{\psi}$, where $\overline{\epsilon}$ and $\overline{\psi}$ are the isometries induced on the discriminant groups. \end{itemize} \end{lem} Here $\Gamma_{\gamma}$ is the graph of $\gamma$. When a lattice $L$ is a $G$-representation, we call the sublattice $T_G(L)$ fixed by $G$ the invariant lattice, and its orthogonal complement $S_G(L)$ the coinvariant lattice.
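Before turning to the local analysis, we note that the component counts in the theorem from the introduction are elementary to enumerate. The following short Python sketch (ours, purely illustrative) tabulates the number of connected components of the stratum $F_{2m}$ via $\sum_{2m=n-k-2l}\binom{8}{k}\binom{k}{l}$:

```python
from math import comb

def stratum_components(n, m):
    """Connected components of the 2m-dimensional stratum F_{2m} of the
    fixed locus on a K3^[n]-type manifold, following the formula
    sum over k, l with 2m = n - k - 2l of C(8, k) * C(k, l)."""
    return sum(comb(8, k) * comb(k, l)
               for k in range(9)           # 8 fixed points on the K3 surface
               for l in range(k + 1)
               if n - k - 2 * l == 2 * m)

# n = 2 recovers the known count: 28 isolated points and one K3 surface.
print(stratum_components(2, 0), stratum_components(2, 1))  # prints: 28 1
```

For odd $n$ the top-dimensional stratum similarly yields $8$ components (e.g. $n=3$, $m=1$ gives $\binom{8}{1}=8$), matching item ({\it ii}) of the theorem.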
\section{Fixed loci: local case} \label{sec:irred} The Hilbert scheme \((\mathbb{C}^2)^{[n]}\) is a local model for \(\textup{K3}^{[n]}\). Thus, to analyze the irreducible components of the fixed locus of \(\imath\) on \(\textup{K3}^{[n]}\), we first need to analyze the local geometry. The surface \(\mathbb{C}^2\) has a natural symplectic form \(\omega=dx\wedge dy\) and the involution \[\imath: x\mapsto -x,\quad y\mapsto -y\] preserves this form. The quotient of \(\mathbb{C}^2\) by the involution is a singular surface that admits a symplectic resolution: \[\widehat{\mathrm{A}}_1\to\mathrm{A}_1=\mathbb{C}^2/\imath.\] It is elementary to see that the \(\imath\)-fixed locus on \((\mathbb{C}^2)^{[2]}\) is isomorphic to \(\widehat{\mathrm{A}}_1\). Below we show a generalization of this statement: \begin{lem}\label{lem:quiv} For even \(n\) we have \[\left((\mathbb{C}^2)^{[n]}\right)^\imath=(\widehat{A}_1)^{[n/2]},\] and for odd $n$: $$\left((\mathbb{C}^2)^{[n]}\right)^{\imath}=\mathfrak{M}((n-1)\delta/2+e_1,e_1)\bigcup \mathfrak{M}((n-1)\delta/2+e_2,e_1)$$ where $\mathfrak{M}(v,w)$ is the quiver variety for the quiver of the affine Dynkin diagram of type $\tilde{A}_1$ and $\delta=e_1+e_2$ is the imaginary root of the corresponding root system. The quiver varieties are connected and of dimension: $$ \dim\left(\mathfrak{M}((n-1)\delta/2+e_1,e_1)\right)=n-1,\quad\dim\left( \mathfrak{M}((n-1)\delta/2+e_2,e_1)\right)=n-3.$$ \end{lem} To explain the statement we need to remind our reader of some basics of the theory of quiver varieties \cite{Nakajima98}. A quiver is a directed graph $Q$ with a set of vertices $I$. Given $\alpha\in \mathbb{N}^I$, the set of representations of the quiver is: $$\mathrm{Rep}(Q,\alpha)=\oplus_{a\in Q} \mathrm{Mat}(\alpha_{h(a)}\times \alpha_{t(a)}),$$ where $h(a)$ and $t(a)$ are the head and the tail of the corresponding arrow.
The group $$G(\alpha)=\left(\prod_{i\in I}\textup{GL}(\alpha_i)\right)/\mathbb{C}^*$$ acts on the vector space of representations. The cotangent bundle of the space of representations is the space of representations of the {\it double} of the quiver $\bar{Q}$: $$\mathrm{T}^*\mathrm{Rep}=\mathrm{Rep}(\bar{Q},\alpha),$$ where the double is the quiver obtained from $Q$ by adjoining a reverse arrow $a^*$ for every arrow $a\in Q$. The moment map $\mu:\mathrm{Rep}(\bar{Q},\alpha)\to \textup{Lie}(G(\alpha))$ is given by: $$\mu(x)_i=\sum_{h(a)=i}x_ax_{a^*}-\sum_{t(a)=i}x_{a^*}x_a.$$ Let $Q_0$ be a quiver and $v,w\in \mathbb{N}^{I_0}$. We define $Q$ to be the quiver with the set of vertices $I=I_0\cup \{\infty\}$, whose set of arrows is the union of the set of arrows of $Q_0$ and $w_i$ arrows from $\infty$ to each $i\in I_0$. Respectively, we define: $$\mathbf{M}(v,w)=\mathrm{Rep}(\bar{Q},\alpha),\quad G_v=G(\alpha),$$ where $\alpha$ is the vector with coordinates $\alpha_i=v_i$ for $i\in I_0$ and $\alpha_\infty=1$. Nakajima \cite{Nakajima98} defines the quiver variety as the GIT quotient of the subvariety of $\mathbf{M}(v,w)$: $$\mathfrak{M}(v,w)=\mu^{-1}(0)//(G_v,\chi),$$ where $\chi$ is the character of the group defined by $\chi(g)=\prod_{k\in I}\det(g_k^{-1})$. We indicate the dependence of the quiver variety on the underlying quiver by a subscript: $\mathfrak{M}_{Q_0}(v,w)$, $\mathbf{M}_{Q_0}(v,w)$. In our study we are most interested in the quiver varieties associated to the following two quivers: \[\mathbf{Q}_0=\begin{tikzcd} \arrow[l,loop left]\bullet \end{tikzcd}, \quad \mathbf{Q}'_0= \begin{tikzcd} \bullet\arrow[r,bend right]\arrow[r,bend left]&\bullet \end{tikzcd} \] these are the quivers of the affine Dynkin diagrams of types $\tilde{A}_0$ and $\tilde{A}_1$.
Let the dimension vectors be $(v,w)=((n),(1))$, $(v,w)=((n_1,n_2),(0,1))$; then the corresponding enhanced quivers are: \[\mathbf{Q}=\begin{tikzcd} \arrow[l,loop left]1 &\arrow[l]\infty \end{tikzcd}, \quad \mathbf{Q}'= \begin{tikzcd} 1\arrow[r,bend right]\arrow[r,bend left]&2&\arrow[l]\infty \end{tikzcd}, \] where in the pictures of the quivers we have included the labels of the vertices. The starting point of our proof is the quiver description of the space $\left(\mathbb{C}^2\right)^{[n]}$ \cite{Nakajima99}: $$\mathfrak{M}_{\mathbf{Q}_0}((n),(1))=\left(\mathbb{C}^2\right)^{[n]}.$$ \begin{proof}[Proof of Lemma \ref{lem:quiv}] The involution \(\imath\) on \(\mathbb{C}^2\) induces an action on \(\mathbf{M}_{\mathbf{Q}_0}((n),(1))\); this involution acts trivially on \(\mathbb{C}^1\), which corresponds to the vertex $\infty$, and the space \(\mathbb{C}^n\) corresponding to the vertex $1$ decomposes into anti-invariant and invariant parts \(\mathbb{C}^n=\mathbb{C}^{n_1}\oplus \mathbb{C}^{n_2}\). The vector space $\mathbb{C}^n$ in the quiver description of the Hilbert scheme corresponds to the quotient space $\mathbb{C}[x,y]/I$ in the standard interpretation of the Hilbert scheme. Moreover, the image of the map corresponding to the arrow from $\infty$ to $1$ is the span of $1\in \mathbb{C}[x,y]/I$; hence the image is invariant under the involution $\imath$. Thus, the involution-invariant part of the quiver variety is a union of quiver varieties constructed from quiver representations of the form: \[ \begin{tikzcd} \mathbb{C}^{n_1}\arrow[r,bend right]\arrow[r,bend left]&\mathbb{C}^{n_2}\arrow[r]&\mathbb{C}^1 \end{tikzcd}. \] More formally, we conclude that we have an inclusion: $$\left(\left(\mathbb{C}^2\right)^{[n]}\right)^{\imath}\subset \bigcup_{n_1+n_2=n}\mathfrak{M}_{\mathbf{Q}'_0}((n_1,n_2),(0,1)).$$ Next we recall the result of \cite{CrawleyBoevey01} that concerns the classification of non-empty connected quiver varieties.
The result from the last part of the introduction in \cite{CrawleyBoevey01} states that the quiver variety $\mathfrak{M}_{Q_0}(v,w)$ is non-empty if and only if $v$ is a positive root of the Kac-Moody Lie algebra corresponding to the quiver $Q_0$, and if it is non-empty, then it is connected. The Kac-Moody Lie algebra corresponding to the quiver $\mathbf{Q}'_0$ is the Lie algebra of the loop group of $\textup{SL}_2$, and the roots of this Lie algebra are: $$n\delta,\quad e_1+n\delta,\quad e_2+n\delta,$$ where $\delta=e_1+e_2$. Thus, we can refine our previous inclusion: $$\left(\left(\mathbb{C}^2\right)^{[n]}\right)^\imath\subset \mathfrak{M}_{\mathbf{Q}'_0}((n/2,n/2),(0,1)),\quad \mbox{ if } n \mbox{ is even},$$ $$\left(\left(\mathbb{C}^2\right)^{[n]}\right)^\imath\subset \bigcup_{\epsilon=\pm}\mathfrak{M}_{\mathbf{Q}'_0}(((n+\epsilon 1)/2,(n-\epsilon 1)/2),(0,1)),\quad \mbox{ if } n \mbox{ is odd}.$$ In the case of even $n$ we observe that since the $\imath$-invariant locus is not empty, the inclusion is actually an equality. The fact that the corresponding quiver variety is the Hilbert scheme of points on the surface $\hat{A}_1$ is standard (see, for example, Theorem 4.9 in \cite{kuz}). In the case of odd $n$ we observe that the involution fixed locus must have at least two connected components. Indeed, if $I\subset \mathbb{C}[x,y]$ is an involution-fixed ideal, then the quotient space $\mathbb{C}[x,y]/I$ has an action of the involution, and the dimension $d(I):=\dim \left(\mathbb{C}[x,y]/I\right)^\imath$ is constant along any connected component of $\left(\left(\mathbb{C}^2\right)^{[n]}\right)^\imath$. It is not hard to find two monomial ideals $I^\pm$ of codimension $n$ with $d(I^\pm)=(n\pm 1)/2$; these two ideals belong to two disjoint connected components. Thus, we conclude that in the case of odd $n$, the inclusion is also an equality. The formula for the dimension of the quiver varieties is standard and can be found, for example, in \cite{CrawleyBoevey01}.
\end{proof} For small values of $n$, the result above has a more intuitive explanation. Indeed, if $n=2$, then the fixed locus $\left((\mathbb{C}^2)^{[2]}\right)^\imath$ is the closure of the locus consisting of the pairs of points $z,\imath(z)$, $z\ne (0,0)$. On the other hand, if $n=3$, there are two connected components of the fixed locus of the involution: the closure of the locus of triples $(z,(0,0),\imath(z))$, $z\ne (0,0)$, and an isolated point which is the square of the maximal ideal $(x,y)^2$. The connected components can be seen from the analysis of the punctual Hilbert scheme $(\mathbb{C}^2)^{[3]}_{(0,0)}$, which is the cone over the twisted cubic: the involution acts on the rulings of the cone, preserving the vertex of the cone and the points at infinity of the rulings. Let us fix notation for the two connected components of the fixed locus of the involution: we denote the components of smaller and larger dimension, respectively, by $$\left(\left(\mathbb{C}^2\right)^{[n]}\right)^{\imath}_-,\quad \left(\left(\mathbb{C}^2\right)^{[n]}\right)^{\imath}_+.$$ \section{Fixed loci of symplectic involutions} We recall some properties of the irreducible components of the fixed locus of a symplectic involution. \begin{prop} \cite[Proposition 3]{cc} \label{propcc} Let $X$ be a hyperk\"{a}hler manifold and $\iota$ be a symplectic involution on $X$. Then the irreducible components of the fixed locus of $\iota$ are symplectic subvarieties of $X$. \end{prop} \begin{proof} For completeness we include the proof. Let $Z$ be an irreducible component of the fixed locus of $\iota$. Since $\iota$ is a periodic endomorphism, $Z$ is smooth by \cite{don}. After restricting to $Z$, we have the orthogonal decomposition $TX_{\mid Z} = TZ \oplus N_Z$, where $TZ$ and $N_Z$ are the eigenspaces corresponding to $+1$ and $-1$, respectively. Since $\iota$ is symplectic, for any $z \in Z$ the subspaces $T_zZ$ and $N_{Z,z}$ are orthogonal and symplectic.
\end{proof} The moduli space of pairs consisting of a hyperk\"{a}hler manifold of $K3^{[n]}$ type together with a symplectic involution is connected. This follows from the following result of the second named author \cite[Theorem 2.5]{mon2}. Here we include a more general version of the result, where we remove the assumption that $n-1$ is a prime power. \begin{thm} \label{connected} Let $X$ be a manifold of $K3^{[n]}$ type or of Kummer $n$ type. Let $G\subset \mathrm{Aut}_s(X)$ be a finite group of numerically standard automorphisms. Then $(X,G)$ is a standard pair. \end{thm} \begin{proof} First, we prove that $(X,G)$ is birational to a standard pair by using the Global Torelli theorem, and then we prove that birational pairs can be deformed one into the other while preserving the group action. For the first step, up to deforming $X$, we can suppose that $\text{Pic}(X):=S_G(X)\oplus \mathbb{Z}\delta$, where $\delta\subset T_G(X)$ is as in Definition \ref{num_stand}. Let $S$ be the $K3$ (resp. abelian) surface with a $G$-action such that $NS(S)=S_G(S)=S_G(X)$ and $T_G(S)=T(S)=T(X)$. The $K3$ (resp. abelian) surface $S$ is uniquely determined if, under the identification $T(S)=T(X)$, we have $\sigma_S=\sigma_X$ in $T(X)\otimes\mathbb{C}$. An easy computation shows that $S^{[n]}$ (resp. $K_n(S)$) and $X$ are Hodge isometric and, by the requirement of Definition \ref{num_stand}, this Hodge isometry extends to an isometry of the Mukai lattice, so with a suitable choice of markings $f,g$ the pairs $(X,f)$ and $(S^{[n]},g)$ (resp. $K_n(S)$) are in the same component of the Teichm\"uller space; thus by Theorem \ref{torelli} they are birational, and the birational map commutes with the $G$-action by our construction of the Hodge isometry. Now we continue with the second step. Let $U$ be a representative of local deformations of $X$, with total family $\mathcal{X}$, and let $V$ be a representative of local deformations of $Y$, with total family $\mathcal{Y}$.
By classical results of Huybrechts \cite{huyb}, up to shrinking the family there is an isomorphism $U\cong V$ and a fibrewise birational map $\varphi:\,\mathcal{X}\dashrightarrow \mathcal{Y}$. Let us restrict to the local deformations of the pairs $(X,G)$ and $(Y,G)$, which are the families over $U^G$ (which coincides with $V^G$). Let $t\in U^G$ be a point such that $\text{Pic}(\mathcal{X}_t)=S_G(\mathcal{X}_t)$, which is true for very general points. There is a K\"{a}hler class orthogonal to $S_G(\mathcal{X}_t)$ and, as $G$ is symplectic, this class lies in the orthogonal complement to $\text{Pic}(\mathcal{X}_t)$. Thus, the K\"{a}hler cone is the full positive cone and all manifolds birational to $\mathcal{X}_t$ are isomorphic to it, so $\mathcal{X}_t\cong \mathcal{Y}_t$ (and the isomorphism is compatible with the $G$-action), so the pairs $(X,G)$ and $(Y,G)$ are both deformation equivalent to $(\mathcal{X}_t,G)$, and our claim holds. \end{proof} \begin{cor}\label{one_invol} Let $X$ be a manifold of $K3^{[n]}$ type and let $\iota$ be a symplectic involution. Then the fixed locus of $\iota$ has the form of the fixed locus on $S^{[n]}$ of a symplectic involution coming from the $K3$ surface $S$. \end{cor} \begin{proof} By \cite[Cor. 37]{mon3}, the coinvariant lattice of a symplectic involution is always isometric to $E_8(-2)$. Thus, by Proposition \ref{count_embed}, there is an embedding of $E_8(-2)$ which is numerically standard and all others have an element $v$ inside $E_8(-2)$ which has divisibility $2$ and is of square at least $-6-2n$. The latter case cannot be induced by an involution on a manifold of $K3^{[n]}$ type, because otherwise an invariant K\"{a}hler class would be orthogonal to $v$ and this is in contradiction with Proposition \ref{walls_for_two}. Thus, all pairs $(X,\iota)$ are numerically standard, therefore Thm. \ref{connected} applies and we obtain our claim. 
\end{proof} \begin{cor}\label{one_invol_kum} Let $X$ be a manifold of Kummer $n$ type and let $\iota$ be a symplectic involution. Then the fixed locus of $\iota$ has the form of the fixed locus on $K_n(A)$ of the $-1$ involution coming from the Abelian surface $A$. \end{cor} \begin{proof} All possible symplectic involutions on the second cohomology of a manifold of Kummer $n$ type $X$ have been classified in \cite[Sections 5 and 6]{mtw}. Notice that there is only one such involution by \cite[Prop. 6.1]{mtw}, which, however, acts as an order-four automorphism on $X$; therefore a symplectic involution on $X$ must have a trivial action on $H^2(X)$. By Theorem \ref{connected}, the pair $(X,\iota)$ is deformation equivalent to the pair $(K_n(A),-1)$ and our claim holds. \end{proof} \begin{ex} Consider $\text{Sym}^2(S)$, where $S$ is a $K3$ surface with a symplectic involution $i$. The involution $i$ induces an involution $\iota$ on $\text{Sym}^2(S)$. The fixed locus of $\iota$ has a component of the form $S/i$, because locally it consists of unordered pairs of points $\{p, i(p)\}$, where $p \in S$. The rest of the fixed points are isolated unordered pairs of the form $\{p, q\}$, where $p$ and $q$ are fixed points of $i$ on $S$, with possible repetitions. In \cite{nik} Nikulin proved that $i$ has $8$ fixed points on $S$. In \cite{mon1} the second named author proved that the fixed locus of $\iota$ on $S^{[2]}$ consists of $28 = \binom{8}{2}$ isolated points and one copy of the minimal resolution of $S/i$, which is again a $K3$ surface. The eight fixed points of type $\{p, p\}$ are contained in this minimal resolution of $S/i$. \end{ex} \begin{ex} Consider $\text{Sym}^3(S)$, where $S$ is a $K3$ surface with a symplectic involution $i$. The involution $i$ induces an involution $\iota$ on $\text{Sym}^3(S)$. The fixed locus of largest dimension of $\iota$ locally looks like $\{p, i(p), q \}$, where $p \in S$ and $q$ is a fixed point of $i$ on $S$.
There are $8$ connected components of the fixed locus of the form $S/i$, because there are $8$ possibilities for $q$ by Nikulin's result \cite{nik}. The rest of the fixed points are isolated of the form $\{p, q, r \}$, where $p$, $q$ and $r$ are fixed points of $i$ on $S$. In total, there are $56$ points on $S^{[3]}$ corresponding to triples consisting of three different points $\{p, q, r \}$, and all fixed points of the form $\{p,p,q\}$ with $p\neq q$ are contained in the resolution of $S/i$. There are eight more isolated fixed points, given by subschemes supported at a single fixed point of $i$ and defined by the square of the maximal ideal, so that their scheme structure contains all tangent directions at that point. \end{ex} The two examples above illustrate the difference between the cases of even and odd $n$ when considering symplectic involutions on $S^{[n]}$ and are indicative of the approach towards the following theorem. \begin{thm}\label{counting} Let $X$ be a hyperk\"ahler manifold of $K3^{[n]}$ type, and let $\iota$ be a symplectic involution on $X$. Then, up to deformation, the fixed locus $F$ of $\iota$ consists of finitely many copies of Hilbert schemes of $K3$ surfaces $S^{[m]}$ (where $m \leq \frac{n}{2}$) and possibly isolated fixed points (only when $n \leq 24$). The fixed locus $F$ is stratified into loci of even dimensions $F_{2m}$, where $\max (0, \frac{n}{2} - 12) \leq m \leq \frac{n}{2}$. Each fixed locus $F_{2m}$ of dimension $2m$ has $$\sum_{2m=n-k-2l}{8 \choose k} { k \choose l}$$ connected components, each one of which is a deformation of a copy of $Y^{[m]}$, where $Y$ is the $K3$ resolution of $S/\iota$. In particular, the fixed locus $Z$ of largest dimension is the following. ({\it i}) If $n$ is even, then $Z$ consists of one copy of $Y^{[\frac{n}{2}]}$; ({\it ii}) If $n$ is odd, then $Z$ consists of $8$ copies of $Y^{[\frac{n-1}{2}]}$. \end{thm} \begin{proof} From Corollary \ref{one_invol}, we can restrict to the case when $X=S^{[n]}$ and $\iota$ comes from a symplectic involution on $S$.
Let us denote by $Y$ the minimal resolution of $S/\iota$. Since the involution $\iota$ acts as the identity on the effective classes in $H^2(X, \mathbb R)$, it preserves the exceptional divisor of $S^{[n]}$. Therefore, it descends to an action on $\text{Sym}^n S$. By Proposition \ref{propcc}, the irreducible components of the fixed locus of $\iota$ are symplectic subvarieties of $X$, and in this case each one of them has even dimension $2m$. Let us label the fixed locus of dimension $2m$ by $F_{2m}$. By Nikulin's theorem in \cite{nik}, the symplectic involution on the $K3$ has $8$ fixed points $f_1, \cdots, f_8$. Each stratum $F_{2m}$ looks like $\text{Sym}^m (S/\iota) \times \text{Sym}^{n-2m}(f_1 \cup \cdots \cup f_8)$. Let $l_1, \cdots, l_8$ be the degrees of $f_1, \cdots, f_8$. Let us denote by $U_i$ some small analytic neighborhoods around $f_i$. Since $S$ is connected, we can connect any involution fixed point to a point inside $(U_1)^{[n_1]}\times\dots\times (U_8)^{[n_8]}$ for some $n_i\ge 0$. Correspondingly, the fixed loci inside these analytic neighborhoods are the products $U_{\vec{n},\vec{s}}=\left((U_1)^{[n_1]}\right)_{s_1}^\imath\times\dots\times \left((U_8)^{[n_8]}\right)_{s_8}^\imath$, where $s_i$ is $\pm$ if $n_i$ is odd and $s_i=\emptyset$ if $n_i$ is even. Let $k(\vec{n},\vec{s})$ be the number of odd $n_i$ and $l(\vec{n},\vec{s})$ the number of indices $i$ such that $s_i=-$; then the dimension of $U_{\vec{n},\vec{s}}$ is $n-k(\vec{n},\vec{s})-2l(\vec{n},\vec{s})$. By moving a pair of points $z,i(z)$ from one neighborhood $U_i$ to another $U_j$ we connect the analytic sets $U_{\vec{n},\vec{s}}$ and $U_{\vec{n'},\vec{s'}}$ as long as $k(\vec{n},\vec{s})=k(\vec{n'},\vec{s'})$ and $l(\vec{n},\vec{s})=l(\vec{n'},\vec{s'})$. On the other hand, it is also clear that if we can connect the analytic sets $U_{\vec{n},\vec{s}}$ and $U_{\vec{n'},\vec{s'}}$, then $k(\vec{n},\vec{s})=k(\vec{n'},\vec{s'})$, because we can only move points between the neighborhoods in pairs.
Finally, if the invariant $l(\cdot,\cdot)$ changed along a path, then the dimension of the connected component would change too, which is impossible. This proves our formula for the number of connected components. Now we shall describe explicitly the fixed locus $Z = F_{2[\frac{n}{2}]}$ of largest dimension. Let $m = [\frac{n}{2}]$ be the largest integer not greater than $\frac{n}{2}$, i.e., $m=\frac{n}{2}$ if $n$ is even and $m=\frac{n-1}{2}$ if $n$ is odd. Then the fixed locus of largest dimension is of the form $\{ x_1, \iota(x_1), x_2, \iota(x_2), \cdots, x_m, \iota(x_m) \}$ if $n$ is even, i.e., one copy of $\text{Sym}^m (S/\iota)$, where $x_i \in S$ for $1 \leq i \leq m$. In the case when $n$ is odd, the fixed locus of largest dimension is of the form $\{ x_1, \iota(x_1), x_2, \iota(x_2), \cdots, x_m, \iota(x_m), x_{m+1} \}$, where $x_i \in S$ for $1 \leq i \leq m$ and $x_{m+1}$ is a fixed point, i.e., $Z$ contains eight copies of $\text{Sym}^m (S/\iota)$ since there are $8$ choices for $x_{m+1}$. In both cases, the dimension of $\text{Sym}^m (S/\iota)$ is $2m$. \end{proof} \begin{oss} As a special case of this theorem when $n=2$ we obtain the description of the fixed locus $F$ of $\iota$ on $S^{[2]}$ that the second named author proved in \cite{mon1}, namely that $F$ consists of $28 = \binom{8}{2}$ isolated points and one copy of the minimal resolution of $S/\iota$. \end{oss} \begin{oss} The involution fixed locus does not have a zero-dimensional component if $n>24$. On the other hand, $\left(\textup{K3}^{[24]}\right)^{\imath}$ has a zero-dimensional component, which is the subscheme given by the product of the squares of the maximal ideals at the eight fixed points of the involution. \end{oss} Finally, let us conclude with the analogous result in the Kummer case. To state the combinatorial part of the result we need some extra notation. Given a subset $I\subset \mathbb{Z}_2^4$, we denote by $|I|$ the size of the set and by $||I||$ the element of $\mathbb{Z}^4_2$ which is the total product of the elements in the set.
Thus, we define: $$N^n_m=\sum_{I,||I||=1}{(n-|I|)/2-m \choose |I|}.$$ \begin{thm}\label{counting_kum} Let $X$ be a hyperk\"ahler manifold of Kummer $n$ type, and let $\iota$ be a symplectic involution on $X$. Then, up to deformation, the fixed locus $F$ of $\iota$ consists of finitely many copies of Hilbert schemes of $K3$ surfaces $S^{[m]}$ (where $m \leq \frac{n+1}{2}$) and possibly isolated fixed points (only when $n \leq 48$). The fixed locus $F$ is stratified into loci of even dimensions $F_{2m}$, where $\max (0, \frac{n+1}{2} - 24) \leq m \leq \frac{n+1}{2}$. Each fixed locus $F_{2m}$ of dimension $2m$ has $N_m^n$ connected components, each one of which is a deformation of a copy of $S^{[m]}$. In particular, the fixed locus $Z$ of largest dimension is one copy of $S^{[\frac{n+1}{2}]}$. \end{thm} \begin{proof} From Corollary \ref{one_invol_kum}, we can restrict to the case when $X=K_n(A)$ and $\iota$ comes from the $-1$ involution on $A$. Since the involution $\iota$ acts as the identity on the classes in $H^2(X, \mathbb R)$, it preserves the exceptional divisor of $K_n(A)$. Therefore, it descends to an action on $\text{Sym}^{n+1} A$ preserving the fibre over $0$ of the Albanese map. Let us first look at this action on $\text{Sym}^{n+1} A$, and then let us look at the Albanese fibre of the fixed locus. By Proposition \ref{propcc}, the irreducible components of the fixed locus of $\iota$ are symplectic subvarieties of $X$, and in this case each one of them has even dimension $2m+2$. Let us label the fixed locus of dimension $2m+2$ by $F_{2m}$. The symplectic involution on the Abelian surface $A$ has $16$ fixed points $f_1, \cdots, f_{16}$, and there is a natural identification of $\{f_1,\dots,f_{16}\}$ with $\mathbb{Z}_2^4$.
The same argument as in the previous theorem implies that there is a natural correspondence between the $2m$-dimensional connected components of the involution fixed locus and the set of pairs $I_-\subset I_{odd}\subset \mathbb{Z}^4_2$ such that $2m=n-|I_{odd}|-2|I_-|$. Finally, let us notice that the Albanese map is constant on the connected components and the value of the map on the connected component is $1$ iff $||I_{odd}||=1$. \end{proof} \section*{Acknowledgements} The first named author is partially supported by a grant from the Simons Foundation/SFARI (522730, LK). She thanks Misha Verbitsky for interesting conversations about this result. The second named author was supported by ``National Group for Algebraic and Geometric Structures, and their Application'' (GNSAGA - INdAM). The third author was supported by the NSF CAREER grant DMS-1352398, NSF FRG grant DMS-1760373 and Simons Fellowship. The third author also thanks Professor Hiraku Nakajima for conversations and help with the references. \section*{Appendix: lattice computations} In this appendix we include all the computations needed in the proof of Corollary \ref{one_invol}. Let $E_8$ be the unique unimodular even positive definite lattice of rank 8 and let $S:=E_8(-2)$ be the same lattice with the quadratic form multiplied by $-2$. The discriminant group of $S$ is $\mathbb{Z}_2^8$ and its elements are classes $[v/2]$, where $v\in S$. The discriminant form on these elements takes the values $0$ or $1$ modulo $2\mathbb{Z}$ (i.e., it is $0$ if $v^2$ is divisible by $8$ and $1$ otherwise). We have the following: \begin{lem}\label{small_square} For every element $\alpha$ in $A_S$ there is an element $v\in S$ such that $[v/2]=\alpha$ and $v^2\geq -16$. \end{lem} \begin{proof} As $E_8$ is generated by its roots, the discriminant group of $S$ is generated by classes of half-roots, which have square $-4$. Thus, all elements of $A_S$ can be represented by half the sum of at most eight distinct roots.
The sum of two roots is either a root (if they are not orthogonal) or an element of square $-8$. As there can be at most a set of four orthogonal roots, the claim follows. \end{proof} Let $L_n:=U^3\oplus E_8(-1)^2\oplus (-2n+2)$ be the lattice corresponding to the second cohomology of a $K3^{[n]}$ type manifold. We then have the following: \begin{prop}\label{count_embed} Let $L_n$ and $S$ be as above. Then, up to isometry, there is only one primitive embedding of $S$ into $L_n$ such that $S$ contains no element of divisibility two and square bigger than $-6-2n$. \end{prop} \begin{proof} The discriminant group of $L_n$ has one generator whose discriminant form is $-\frac{1}{2(n-1)}$. There is only one element in this group whose order is precisely two, and it has discriminant form $-\frac{n-1}{2}$ modulo $2\mathbb{Z}$. As per Lemma \ref{lem:nik_immerge}, primitive embeddings of $S$ into $L_n$ are determined by a quintuple $(H_S,H_{L_n},\gamma,K,\gamma_K)$, where the first two are subgroups of $A_S$ and $A_{L_n}$, respectively, and $\gamma$ is an anti-isometry between the two. Thus, when $n$ is even, $H_S$, $H_{L_n}$ and $\gamma$ are trivial as all elements of $A_S$ have integer square. When $n$ is odd, either we are in the same case as before or we have nontrivial $H_S$, $H_{L_n}$ and $\gamma$. In the latter case, $H_{L_n}$ is unambiguously determined and, by Lemma \ref{small_square}, the nontrivial element of $H_S$ is represented by half an element $v$ of square at least $-16$ (more specifically, at least $-16$ if $n-1$ is divisible by four and at least $-12$ otherwise). Thus, $v$ is an element of square at least $-6-2n$ and of divisibility $2$ in $L_n$, as $[v/2]$ is nontrivial in $A_{L_n}$. \end{proof} We conclude this appendix with a criterion to avoid checking the last condition in Definition \ref{num_stand}: \begin{prop}\label{mukai_not} Let $(X,G)$ be a pair such that there exists a $K3$ (resp.
Abelian) surface $S$ and $G\subset \mathrm{Aut}_s(S)$ such that $H^2(S^{[n]})$ (resp. $H^2(K_n(A))$) and $H^2(X)$ are isomorphic $G$-representations. Moreover, suppose that $U\subset T_G(S)$. Then $(X,G)$ is numerically standard. \end{prop} \begin{proof} By the hypothesis we have a Hodge isometry $\varphi\,:\,H^2(X) \rightarrow H^2(S^{[n]})$ (resp. $H^2(K_n(A))$) which might not extend to an isometry of the Mukai lattice $\Lambda:=U^4\oplus E_8(-1)^2$ (resp. $\Lambda:=U^4$), that is, it is not compatible with the two embeddings $\psi_1\,:\,H^2(X,\mathbb{Z})\rightarrow \Lambda$ and $\psi_2\,:\,H^2(S^{[n]},\mathbb{Z})\rightarrow \Lambda$. Let $\delta$ be half the class of the exceptional divisor in $S^{[n]}$ (resp. $K_n(A)$) and let $\delta_x:=\varphi^{-1}(\delta)$. Let $v$ be a generator of $\psi_2(H^2)^\perp$ and $v_x$ be the same for $\psi_1(H^2)^\perp$. As discussed in \cite[Section 9]{mar_tor}, the fact that $\varphi$ does not extend to $\Lambda$ means that it does not respect the two gluing data associated to the pairs $(v,\delta)$ and $(v_x,\delta_x)$. The gluing data corresponds to a choice of an anti-isometry between the two discriminant groups $A_{H^2}$ and $A_{\langle v\rangle}$. However, we have $U\subset T_G(S)$; let $L:=U\oplus \mathbb{Z}\delta\subset T_G(S^{[n]})$ (resp. $T_G(K_n(A))$). Thus, by \cite[Thm 1.14.2]{nik2} applied to $L$, we can compose $\varphi$ with an isometry $\gamma$ of $H^2(S^{[n]})$ (resp. $H^2(K_n(A))$) which is trivial on $L^\perp$ and arbitrarily nontrivial on $A_L\cong A_{H^2(S^{[n]})}$, thus $\gamma\circ \varphi$ extends to an isometry of the Mukai lattices. \end{proof} In particular, this proposition applies to symplectic involutions on manifolds of $K3^{[n]}$ type.
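The component count of Theorem \ref{counting} can be cross-checked against the examples of symplectic involutions on $S^{[2]}$ and $S^{[3]}$. The following Python enumeration (an illustrative sketch, not part of the proofs; the function name is ours) evaluates $\sum_{2m=n-k-2l}\binom{8}{k}\binom{k}{l}$:

```python
from math import comb

def components(n, m):
    """Number of connected components of the 2m-dimensional stratum
    F_{2m} of the fixed locus on S^[n] (Theorem `counting'):
    sum of C(8, k) * C(k, l) over k, l >= 0 with 2m = n - k - 2l."""
    total = 0
    for k in range(0, 9):          # k = number of odd local multiplicities (at most 8)
        for l in range(0, k + 1):  # l = number of '-' punctual components
            if 2 * m == n - k - 2 * l:
                total += comb(8, k) * comb(k, l)
    return total

# n = 2: 28 isolated points and one 2-dimensional component
print(components(2, 0), components(2, 1))  # 28 1
# n = 3: 56 + 8 isolated points and 8 two-dimensional components
print(components(3, 0), components(3, 1))  # 64 8
```

These values reproduce the counts in the two examples: $28$ isolated points and one surface component for $n=2$, and $56+8=64$ isolated points together with eight components of the form $S/i$ for $n=3$.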
\section{Introduction} Rydberg atoms in optical lattices and traps have gained interest in the fields of quantum computing and simulations~\cite{Zhang.2011a,Nguyen.2018, Barredo.2020, Wilson.2019}, quantum control~\cite{Cardman.2020a}, and high-precision spectroscopy~\cite{Moore.2015b, Ramos.2017, Malinovsky.2020a}, as the lattice confines the atoms and extends interaction times. However, the binding energy of Rydberg atoms is several orders of magnitude below the photon energy $\hbar \omega$ of commonly used optical-lattice fields. Optical photoionization (PI) of the Rydberg valence electron leads to lifetime reduction and decoherence. \fix{Lattice-induced PI can broaden radio-frequency (RF) transitions between Rydberg states and limit the fidelity of Rydberg-atom quantum-control and -simulation schemes that involve coherences in the RF domain. The PI can also degrade optical coherences between ground and Rydberg states \gfix{that can be} induced by $\lesssim 1$-kHz-linewidth lasers. \gfix{Such lasers} are becoming more widely used in metrology~\cite{Nez.1993a,Zhang.2017a,Campbell.2017a} and may become useful in research involving long-lived, lattice-trapped Rydberg atoms.} A characterization of PI in Rydberg-atom optical lattices will be helpful for ongoing and emerging Rydberg-atom applications. A Rydberg atom in a laser field is subject to the ponderomotive, $e^2 {\bf{A}}^2 (\textbf{r}) / (2 m_{\rm{e}})$, and the $e {\bf{A}} (\textbf{r}) \cdot {\bf {p}} / m_{\rm{e}}$ interactions, with $e$, $m_{\rm{e}}$, $\textbf{r}$, $\textbf{p}$, and ${\bf{A}} (\textbf{r})$ denoting the magnitude of the fundamental charge, electron mass, electron position and momentum in the laboratory frame, and the position-dependent vector potential of the field, respectively~\cite{Friedrichbook}. \rfix{Interplay between these two interactions has previously been discussed in Refs.~\cite{Bucksbaum.1986,Bucksbaum.1987,Eberly.1986} in the context of above-threshold ionization}. 
In an inhomogeneous light field, such as an optical lattice, the ponderomotive ${\bf{A}}^2$ term generates an optical force on the Rydberg electron that depends on the intensity gradient \gfix{of the optical-lattice interference pattern and its overlap with} the spatial distribution of the Rydberg-electron wavefunction. \rfix{Effects of the ponderomotive force on free electrons in a standing-wave laser field were studied before in Refs.~\cite{Bucksbaum.1988,Freimund.2001}. \gfix{The Rydberg} electron is quasi-free, allowing the ponderomotive force to enable optical-lattice traps for Rydberg atoms~\cite{SEAnderson.2011, Lampen.2018}.} The spatial period of Rydberg-atom optical lattices, which is on the order of the laser wavelength $\lambda$, is similar to the diameter of the trapped atoms, a situation that differs from most optical lattices, in which the atoms are point-like relative to $\lambda$. A ponderomotive optical lattice couples Rydberg states over a wide range of electronic angular momenta $\ell$~\cite{Knuffman.2007, Younge.2009a}, affording capabilities in high-$\ell$ Rydberg-state initialization~\cite{Cardman.2020a, Younge.2009a} and Rydberg-atom spectroscopy free of selection rules for $\ell$~\cite{Knuffman.2007, Moore.2015a}. \hfix{In the present analysis, we expand upon earlier work by including lattice-induced Rydberg-atom PI in ponderomotive optical lattices with strong $\ell$- and $j$-mixing. Our model describes PI-induced decay in the lattice, as required, for instance, for the aforementioned quantum-control and computing applications.} Optical and black-body-radiation-induced PI result from the $\textbf{A}\cdot\textbf{p}$-term~\cite{Friedrichbook}. Here we investigate laser-induced PI of Rydberg atoms trapped in an optical lattice. In Sec.~\ref{PIOL}, we derive PI cross sections and rates for Rydberg atoms in plane-wave light fields and extend the results to Rydberg atoms in optical lattices.
In Sec.~\ref{sec-PECs}, we obtain equations for the potential energy curves (PECs), the adiabatic Rydberg states, and their PI-induced decay rates in the lattice. In the examples in Sec.~\ref{app}, we focus on rubidium Rydberg atoms in a one-dimensional lattice formed by counter-propagating laser beams of 1064~nm wavelength. The lattice strength is characterized by the magnitude of the ponderomotive interaction relative to the unperturbed Rydberg-level separations. We present results for PECs and lattice-induced PI of $\ell$-mixed Rydberg atoms in a strong optical lattice, and of Rb~$50F$ atoms in a weaker, $\ell$-mixing-free optical lattice. In the Appendix, we discuss fundamental aspects of optical PI of Rydberg atoms. \section{PI of Rydberg atoms} \label{PIOL} \subsection{Basic PI cross sections} The lowest-order transition rate between atomic states due to an interaction $\hat{H}_{\rm int}$ is given by Fermi's golden rule, $\Gamma=\frac{2\pi}{\hbar}|\langle f|\hat{H}_{\rm int}|i\rangle|^{2}\rho(\epsilon)$, with final-state energy $\epsilon$ and density of final states $\rho(\epsilon)$ ~\cite{Friedrichbook}. For PI in a plane-wave field, it is $\hat{H}_{\rm int} = e \hat{\bf{A}} \cdot \hat{\bf {p}} / m_{\rm{e}}$, and the final state $\vert f \rangle$ is a free-electron state. We normalize the free-electron states per unit energy, {\sl{i. e.}} $\langle f' \vert f \rangle = \delta(\epsilon' -\epsilon) \delta_{\eta',\eta}$, with $\eta$ denoting the angular-momentum quantum numbers ($\ell,m_{\ell}$) and $\rho(\epsilon)$ being equal to 1 per unit energy. The PI cross section $\sigma_{\rm PI}$ is determined by dividing the PI rate by the photon flux density, $I/(\hbar\omega)$, where $I$ is the field intensity and $\omega$ its angular frequency. 
In SI units, for a linearly polarized field (polarization unit vector $\hat{\textbf{n}}$) with wave vector $\textbf{k}$, the PI cross section is (see Appendix~\ref{atomfield}) \begin{equation}\label{PIcross} \sigma_{\rm PI}=\frac{\pi e^{2}\hbar^{2}}{\epsilon_{\rm 0} m_{\rm e}^{2} \omega c} \left|\hat{\textbf{n}}\cdot\int \psi_{f}^{\ast} \medspace e^{i\textbf{k}\cdot\textbf{r}_e} \medspace \nabla_{e} \psi_{i} \medspace d^{3}r_e \right|^{2}\left(\frac{1}{E_{H} \, a_0^2}\right) \, , \end{equation} where $\textbf{r}_e$ denotes the relative Rydberg-electron coordinate, $E_H$ the atomic energy unit, \rcfix{$\psi_{i}$ and $\psi_{f}$ are the initial- and final-state wavefunctions, respectively,} and $a_0$ \rcfix{is} the Bohr radius. The last term converts the squared matrix element, which is in atomic units, into SI units. For PI of Rydberg atoms the electric-dipole approximation (EDA) typically is valid, as shown in~\cite{Anderson.2013a} \rcfix{in the context of a shallow optical lattice with no $\ell$-mixing. The validity of the EDA is} discussed in greater detail in Appendix~\ref{noapprox}. The EDA is implemented by setting $e^{i\textbf{k}\cdot\textbf{r}_e} = 1$ in Eq.~\ref{PIcross}. The resultant expression for the matrix element is referred to as ``velocity form'', used throughout this paper to compute the PI cross sections. For $\hat{\bf{p}}$-independent atomic potentials, the matrix element in Eq.~\ref{PIcross}, with the EDA applied, can be transformed into ``length form'', leading to \begin{equation}\label{PIcrossL} \sigma_{\rm PI,L}=\frac{\pi e^{2} \omega}{\epsilon_{\rm 0} c } \medspace \left|\hat{\textbf{n}}\cdot\int \psi_{f}^{\ast} \medspace {\bf{r}}_e \medspace \psi_{i} \medspace d^{3}r_e\right|^{2}\left(\frac{a_{\rm 0}^{\rm 2}}{E_{H}} \right) \, . \end{equation} This length-form expression for the PI cross section is not accurate if the atomic potential is $\ell$-dependent, as in the present work on Rb. 
In the following, \hfix{we first consider PI of spinless Rydberg basis states $\vert n, \ell, m_\ell \rangle$ into free states $| \epsilon',\ell', m'_\ell \rangle$. There,} $n$ denotes the bound-state principal quantum number, $\epsilon'$ the free-electron energy, and $\ell_{>}$ the larger of the bound- and free-electron angular momenta, $\ell$ and $\ell'$. The shell-averaged PI cross section, \gfix{given by the average of the PI cross sections of the $m_\ell$-sublevels of the Rydberg state}, is \begin{equation}\label{sigma_av} \bar{\sigma}_{n, \ell}^{\epsilon', \ell'}=\frac{\pi e^{2} \hbar^2 }{3\epsilon_{\rm 0} m_{\rm{e}}^2 \omega c}\frac{\ell_{>}}{(2\ell+1)}|M|^{2}\left(\frac{1}{E_{H} \, a_{\rm 0}^{\rm 2}}\right), \end{equation} \noindent where $M$ is the radial part of the matrix element from Eq.~\ref{PIcross} in atomic units, which is, with the EDA applied, \begin{equation}\label{sigma_av2} M = \int_{0}^\infty u_{\epsilon',\ell'}(r_e)\left[ u'_{n,\ell}(r_e) \mp \frac{u_{n,\ell}(r_e)}{r_e}\ell_{>} \right] dr_e. \end{equation} There, \gfix{the upper sign is for $\ell_> = \ell'$, the lower sign for $\ell_> = \ell$, and $\ell'=\ell \pm 1$}. The functions $u_{*,\ell}(r_e)$ are given by $u_{*,\ell}(r_e) = r_e R_{*,\ell}(r_e)$, where $R_{*,\ell}(r_e)$ is the usual radial wavefunction, and $* = n$ or $\epsilon'$ for bound- and free-electron states, respectively. To illustrate the general behavior of PI cross sections of Rydberg states, we calculate $\bar{\sigma}_{n, \ell}^{\epsilon', \ell'}$ for a wide range of bound states $(n,\ell)$ and both PI channels $\ell' = \ell \pm 1$. The free-electron energy in atomic units is \begin{equation} \epsilon'=\frac{2 \pi \, a_0}{\alpha \lambda} - \frac{1}{2 n^{*2}} \quad, \nonumber \end{equation} with the laser wavelength $\lambda$ in meters, the fine structure constant $\alpha$, and the effective quantum number of the Rydberg state, $n^*$.
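For orientation, the free-electron energy above can be evaluated numerically; the short Python sketch below (the constants are standard CODATA values, and the value of $n^*$ is an assumed example) computes $\epsilon'$ for $\lambda = 1064$~nm:

```python
import math

# CODATA constants (SI)
a0 = 5.29177210903e-11       # Bohr radius, m
alpha = 7.2973525693e-3      # fine-structure constant
HARTREE_EV = 27.211386245988 # atomic energy unit in eV

lam = 1064e-9                # laser wavelength, m
n_star = 50.0                # effective quantum number (assumed example)

# photon energy in atomic units: 2*pi*a0 / (alpha*lambda)
photon_energy = 2.0 * math.pi * a0 / (alpha * lam)

# free-electron energy in atomic units: photon energy minus binding energy
eps_free = photon_energy - 1.0 / (2.0 * n_star**2)

print(photon_energy * HARTREE_EV)  # ~1.165 eV photon energy at 1064 nm
print(eps_free)                    # ~0.0426 atomic units
```

For $n^* = 50$, the binding energy $1/(2 n^{*2}) = 2\times10^{-4}$ atomic units is only a small correction to the photon energy of about $0.043$ atomic units.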
For the calculation of the bound-state and free-electron wavefunctions~\cite{Reinhard.2007a}, we use model potentials from~\cite{Marinescu.1994}, which have previously been employed to compute polarizabilities~\cite{Marinescu.1994c} and two-photon excitation rates~\cite{Marinescu.1994b} in Rb. \fix{A table of the calculated $\bar{\sigma}_{n, \ell}^{\epsilon', \ell'}$, for $\lambda=1064$~nm, is provided as Supplementary Material}. \begin{figure} \begin{centering} \includegraphics[width=3in]{Figure1.png} \caption{Total shell-averaged PI cross sections $\bar{\sigma}_{n, \ell}$ for Rydberg $n$ and $\ell$ states of Rb for $\lambda =1064$~nm, obtained by summing Eq.~\ref{sigma_av} over $\ell'$. For each $\ell$, the $n$ values range from 20 to 90 in steps of 5. The dashed line represents the Thomson cross section, $\sigma_{T}=$0.665 barn.}\label{sigma} \end{centering} \end{figure} In Fig.~\ref{sigma} we show results for Rb in a $\lambda=1064$-nm field as a function of $n$ and $\ell$. The cross sections are generally quite large for low $\ell$, with an exception for the $S$-states that is caused by a Cooper minimum~\cite{Cooper.1962, Zatsarinny.2010}. The calculated PI cross sections decrease rapidly as $\ell$ increases. For $\ell \gtrsim 10$, they drop below the elastic photon scattering cross section, given by the Thomson cross section, $\sigma_T=0.665$~barn. PI cross sections $\lesssim \sigma_T$ are likely too small to cause observable effects in applications. 
\hfix{Since the lattice light has well-defined linear polarization, we note that for $z$-polarized} light the PI cross section for an atom in magnetic sublevel $m_\ell$ is \begin{equation}\label{sigmazm} \sigma_{z,n,\ell,m_\ell}^{\epsilon',\ell'} =\frac{3(\ell_{>}^{2}-m_\ell^{2})}{(2\ell_{>}+1)(2\ell_{>}-1)}\frac{(2\ell+1)}{\ell_{>}} \bar{\sigma}_{n, \ell}^{\epsilon', \ell'} \quad , \end{equation} with $\bar{\sigma}$ from Eq.~\ref{sigma_av}, \hfix{while for $x$-polarized light it is} \begin{equation} \label{sigmaxm} \sigma_{x,n,\ell,m_\ell}^{\epsilon',\ell'} =\frac{3}{2}\frac{(\ell'(\ell'+1)+m_\ell^{2})}{(2\ell_{>}+1)(2\ell_{>}-1)}\frac{(2\ell+1)}{\ell_{>}} \bar{\sigma}_{n, \ell}^{\epsilon', \ell'} \quad . \end{equation} \subsection{PI cross sections and rates of lattice-mixed states} \label{sec-matel-OL} \hfix{Due to lattice-induced Rydberg state mixing, lattice-trapped Rydberg atoms are coherent superpositions of numerous basis states. Also,} the fine structure must be included, because it can be on the order of or larger than the optical-lattice trap depth. \hfix{Eq.~\ref{PIcross} then has to be evaluated for the lattice-mixed states $\vert i \rangle = \sum_{n, \ell, j, m_j} c_{n, \ell, j, m_j} \vert n, \ell, j, m_j \rangle$, with the total-angular-momentum quantum numbers $j$ and $m_j=m_\ell + m_s$, and electron-spin magnetic quantum number $m_s$.} \hfix{Here we adopt a geometry in which a pair of counter-aligned lattice beams propagate along $z$, and the linear lattice polarization is along $x$. The PI then has matrix elements in the $(m_\ell, m_s)$-basis given by \begin{eqnarray} M_{n, \ell, m_\ell, m_s}^{\epsilon',\ell',m'_\ell,m'_s} & = & \langle \epsilon',\ell',m'_\ell,m'_s \vert {\rm {i}} \hat{p}_{x,e} \vert n, \ell, m_\ell, m_s \rangle \nonumber \\ ~ & ~ & \times \delta_{m'_\ell, m_\ell \pm 1} \delta_{m'_s, m_s} \delta_{\ell', \ell \pm 1} \quad, \end{eqnarray} in atomic units and with the $x$-component of the electron momentum, $\hat{p}_{x,e}$. 
We have added Kronecker $\delta$'s to exhibit the PI selection rules. The matrix elements have a radial part given by Eq.~\ref{sigma_av2} and angular parts that follow from \cite{Bethebook}, p.~254. The PI cross section for the lattice-mixed states then is \begin{eqnarray}\label{PIcross2} \sigma_{\rm PI} & = & \frac{\pi e^{2}\hbar^{2}}{\epsilon_{\rm 0} m_{\rm e}^{2} \omega c} \sum_{\ell', m'_\ell, m'_s} \Big \vert \sum_{n, \ell, j, m_j, m_\ell, m_s} M_{n, \ell, m_\ell, m_s}^{\epsilon',\ell',m'_\ell,m'_s} \nonumber \\ ~ & ~ & \times \, c_{n, \ell, j, m_j} \, \langle j, m_j \vert m_\ell m_s \rangle \Big \vert^{2} \left(\frac{1}{E_{H} \, a_0^2}\right) \, . \end{eqnarray} Note that $M$ is in atomic units, and the term in parentheses converts the squared matrix element into SI units. Due to symmetry, $m_j$ is fixed. Since the lattice induces $\ell$- and $j$-mixing, the PI cross sections exhibit quantum interference in the inner sum, caused by the fact that several PI channels can lead from multiple basis states $\vert n, \ell, j, m_j \rangle$ into the same free state $\vert \epsilon', \ell', m'_\ell, m'_s \rangle$. } \hfix{For given $\sigma_{\rm PI}$, the atom PI rate follows from \begin{equation} \label{eq:rate} \Gamma = I \frac{\sigma_{\rm PI}}{\hbar \omega} \quad. \end{equation} Since in the optical lattice the intensity $I$ varies within the volume of the Rydberg atom, it is not immediately obvious what to use for $I$ in Eq.~\ref{eq:rate}.} In fact, the atomic volume can extend over several nodes and anti-nodes of the light field~\cite{Dutta.2000a, SEAnderson.2011, Zhang.2011a}. The lattice-intensity variation within the atomic volume is important for the potential energy curves (PECs) and state-mixing in the lattice, as discussed in the next section. Our analysis given in the Appendix shows that the PI rates of Rydberg states are determined by the intensity at the exact CM location of the Rydberg atom, $I({\bf{R}}_0)$.
We enter $I({\bf{R}}_0)$ into Eq.~\ref{eq:rate} to obtain the PI rates of the lattice-mixed Rydberg atoms. It is irrelevant how the field varies over the atomic volume. Especially \lfix{noteworthy is the fact that} the light intensity within the main lobes of the Rydberg electron wavefunction is not important. This finding is a consequence of the validity of the EDA for PI of Rydberg atoms, which is discussed in the Appendix. Laser-induced Rydberg-atom PI was \rfix{previously} measured in plane waves~\cite{Tallant.2010} and, in a spatially-sensitive manner, in an optical lattice~\cite{Anderson.2013a}. \section{Potential energy curves} \label{sec-PECs} \subsection{Strong optical-lattice regime} \label{subsec:SL} Rydberg atoms in an optical lattice are subject to both the ${\bf{A}} \cdot {\bf {p}}$ and the ponderomotive (${\bf{A}}^2$) interactions, giving rise to lattice-induced PI and PECs at the same time. In the following we describe our comprehensive formalism for both PI and PECs. In a one-dimensional optical lattice along the $z$-direction, the PECs are calculated by first finding the Hamiltonian \begin{equation}\label{H} \hat{H}_{\rm lat}=\hat{H}_{\rm 0}+V_{\rm P}(\hat{z}_e+Z_{\rm 0}) \end{equation} \noindent on a grid of fixed CM positions $Z_{\rm 0}$ of the atoms in the lattice. There, $\hat{H}_{\rm 0}$ is the field-free atomic Hamiltonian, and the operator $\hat{z}_e$ represents the relative $z$-coordinate of the Rydberg electron. Further, $V_{\rm P}(z)=e^{2}E^{2}(z)/(4m_{\rm e}\omega^{2})$ is the free-electron ponderomotive potential that follows from the ${\bf{A}}^2$-interaction, $E(z)$ the total lattice electric-field amplitude, and $z=z_e+Z_{\rm 0}$ the $z$-coordinate of the Rydberg electron in the laboratory frame. Classically, the ${\bf{A}}^2$-term may be thought of as the time-averaged kinetic energy of the electron quiver in the lattice electric field at the optical frequency~\cite{Dutta.2000a}. 
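To make the two intensity-driven quantities concrete, the sketch below (Python, SI units; the cross section and single-beam field amplitude are illustrative placeholders, not values from this work) evaluates the PI rate $\Gamma = I\,\sigma_{\rm PI}/(\hbar\omega)$ at the CM intensity, and the free-electron ponderomotive potential $V_{\rm P}=e^{2}E^{2}/(4m_{\rm e}\omega^{2})$ for a 1064-nm standing wave formed by two counter-propagating beams:

```python
import math

# Physical constants (SI)
e    = 1.602176634e-19     # elementary charge, C
m_e  = 9.1093837015e-31    # electron mass, kg
c    = 2.99792458e8        # speed of light, m/s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J s

lam   = 1064e-9                  # lattice wavelength, m
omega = 2 * math.pi * c / lam    # optical angular frequency, rad/s
k     = omega / c                # wavenumber, 1/m

def pi_rate(I_cm, sigma):
    """PI rate Gamma = I sigma / (hbar omega), with I taken at the atom's CM."""
    return I_cm * sigma / (hbar * omega)

def V_P_field(E):
    """Free-electron ponderomotive potential e^2 E^2 / (4 m_e omega^2)."""
    return e**2 * E**2 / (4 * m_e * omega**2)

def V_P_intensity(I):
    """Same quantity expressed via a traveling-wave intensity I = c eps0 E^2 / 2."""
    return I * e**2 / (2 * c * eps0 * m_e * omega**2)

# Standing wave of two beams with single-beam amplitude E0 (illustrative value):
# V_P(z) = V_P_field(2 E0 cos(k z)) = V0 (1 + cos(2 k z)),
# with full depth 2 V0 = e^2 E0^2 / (m_e omega^2).
E0 = 1.0e7   # V/m

def V_P_lattice(z):
    return V_P_field(2 * E0 * math.cos(k * z))

two_V0 = e**2 * E0**2 / (m_e * omega**2)
```

The sketch also confirms that the field-amplitude and intensity forms of $V_{\rm P}$ are equivalent, and that the standing-wave potential vanishes at the nodes and reaches $2V_0$ at the anti-nodes.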
In a one-dimensional lattice along $z$, \begin{equation}\label{2V0} V_{\rm P}(z)= V_0 (1+\cos(2kz)) \quad , \end{equation} with the full free-electron potential depth $2 V_0$ and $k=2 \pi / \lambda = \omega/c$. For a pair of lattice beams with equal single-beam electric-field amplitude $E_0$ and equal linear polarization, the depth is $2 V_0 = e^2 E_0^2 / (m_e \omega^2)$. The potential $V_{\rm P}(z)$ introduces couplings that are free of selection rules for $\ell$~\cite{Younge.2009a,Wang.2016}. From a perturbation-theory viewpoint, the Rydberg-atom lattice is strong if the lattice depth approaches the characteristic energy scale of the unperturbed Rydberg atom, i.e., if $2 V_0 \gtrsim s E_H/n^3$, with scaling parameter $s \sim 0.1$ that depends on the quantum defects of the atom. In a strong lattice, the lattice-induced couplings approach or exceed the quantum-defect-induced energy gaps between low- and high-$\ell$ states, \rfix{ causing mixing among such states.} \hfix{The PECs, $W_k(Z_0)$, and the lattice-mixed adiabatic Rydberg states $\vert \psi_k(Z_0) \rangle$ are found by solving \begin{equation} \hat{H}_{\rm {lat}}(Z_{\rm 0}) \vert \psi_k(Z_0) \rangle = W_k(Z_0) \vert \psi_k(Z_0) \rangle , \end{equation} with an index $k$ labeling the PECs and their adiabatic states. We use representations of the $\vert \psi_k(Z_0) \rangle$ in the basis of the field-free states $\vert n, \ell, j, m_j \rangle$ in Eq.~\ref{PIcross2} in order to yield the PI cross sections, $\sigma_k (Z_0)$, and the PI rates, $\Gamma_k(Z_0)$, from Eq.~\ref{eq:rate}.
It is observed that the PI rates $\Gamma_k$, which trace back to the ${\bf{A}} \cdot {\bf {p}}$ interaction, and the free-electron ponderomotive lattice shift, which arises from the ${\bf{A}}^2$ interaction, both scale with the intensity at the atom's CM location, \begin{eqnarray} \Gamma_{k} (Z_{\rm 0}) & = & I(Z_{\rm 0}) \frac{\sigma_k (Z_0)}{\hbar\omega} \nonumber \\ V_{\rm P}(Z_0) & = & I(Z_{\rm 0}) \frac{e^2}{2 c \epsilon_{\rm 0} m_{\rm e} \omega^2} \quad. \label{eq:gamma} \end{eqnarray} } We note that the PECs $W_k(Z_0)$ satisfy \begin{equation}\label{Vad} W_k(Z_0)=\int V_\mathrm{P}\left(z_e+Z_{\rm 0}\right)|\psi_k\left({\bf{r}}_e; Z_0 \right)|^2\medspace d^3 r_e, \end{equation} \noindent which represents a spatial average of $V_{\rm P}$, weighted by the wavefunction densities $|\psi_k\left({\bf{r}}_e; Z_0 \right)|^2$ of the adiabatic states $\vert \psi_k(Z_0) \rangle$. The wavefunction density is traced over the electron spin. Since the $\vert \psi_k(Z_0) \rangle$ are not known before diagonalization of the Hamiltonian in Eq.~\ref{H}, Eq.~\ref{Vad} generally cannot be used to calculate PECs (exceptions are discussed in the next Sec.~\ref{subsec:WL}). Instead, the Hamiltonian in Eq.~\ref{H} must be diagonalized to simultaneously yield both the PECs, $W_k(Z_{\rm 0})$, and the $\vert \psi_k (Z_0) \rangle$. \hfix{The latter then allows computation of $\sigma_k(Z_0)$.} \subsection{Weak optical-lattice regime} \label{subsec:WL} If the Rydberg-atom lattice is weak, $2 V_0 < s E_H/n^3$, there are cases in which the ponderomotive potential $V_{\rm P}(z)$ does not cause lattice-induced state mixing of the unperturbed Rydberg levels. These cases include $nS_{1/2}$ Rydberg levels, and $nP_{j}$ and $nD_{j}$ levels if $2 V_0$ is also less than the fine structure splitting. 
For Rydberg states that are known to be mixing-free, the PECs can be obtained from first-order non-degenerate perturbation theory, \begin{equation}\label{Vad0} W_k(Z_0)=\int V_\mathrm{P}\left(z_e+Z_{\rm 0}\right)|\psi_{k,0}\left({\bf{r}}_e\right)|^2\medspace d^3 r_e \quad . \end{equation} \noindent This expression amounts to a spatial average of $V_{\rm P}$, weighted by the wavefunction density of the unperturbed, $Z_0$-independent state $\vert \psi_{k,0} \rangle = \vert n, \ell, j, m_j \rangle$, \begin{eqnarray} \vert \psi_{k,0}({\bf{r}}_e) \vert^2 = |R_{n,\ell,j}(r_e)|^2 & \, \Big[ & \vert c_{\uparrow} \, Y_{\ell}^{m_j - 1/2} (\theta_e, \phi_e) \vert^2 \nonumber \\ ~ & + & \vert c_{\downarrow} \, Y_{\ell}^{m_j + 1/2} (\theta_e, \phi_e) \vert^2 \Big] \quad , \nonumber \end{eqnarray} \hfix{with Clebsch-Gordan coefficients $c_{\uparrow}$ and $c_{\downarrow}$ for $m_s=1/2$ and $-1/2$, respectively,} and spherical Rydberg-electron coordinates $(r_e, \theta_e, \phi_e)$. \gfix{The PEC index $k$ now merely is} a shorthand label for the mixing-free state $\vert n,\ell,j,m_j \rangle$. PECs in weak lattices have been investigated in Refs.~\cite{Younge.2010a,Anderson.2012a}. \hfix{Also, the PI cross section of $\vert n, \ell, j, m_j \rangle$ according to Eq.~\ref{PIcross2} greatly simplifies and there is no quantum interference of PI channels (as the inner sum has only one term for each $\vert \epsilon', \ell', m'_\ell, m'_s \rangle$). } In certain scenarios, one can force applicability of non-degenerate perturbation theory by lifting degeneracies via application of an auxiliary DC electric or magnetic field, or a microwave field. If the auxiliary field suppresses lattice-induced state mixing, the adiabatic Rydberg states in the lattice become independent of $Z_0$, allowing a perturbative calculation of the PECs as in Eq.~\ref{Vad0}~\cite{Dutta.2000a, Ramos.2017}.
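To illustrate the spatial averaging in Eq.~\ref{Vad0}, the sketch below (Python; a one-dimensional Gaussian electron density of rms width $\sigma$ is a purely illustrative stand-in for $|\psi_{k,0}|^2$) numerically averages the lattice potential over the density and recovers the standard Gaussian damping factor $e^{-2k^2\sigma^2}$ of the PEC modulation:

```python
import math

lam   = 1064e-9               # lattice wavelength, m
k     = 2 * math.pi / lam     # lattice wavenumber, 1/m
V0    = 1.0                   # half the lattice depth, arbitrary units
sigma = 100e-9                # illustrative rms electron-density extent along z, m

def V_P(z):
    """Lattice potential V0 (1 + cos(2 k z))."""
    return V0 * (1 + math.cos(2 * k * z))

def pec(Z0, n=4001):
    """Numerically average V_P over a Gaussian density centered at the CM Z0."""
    zs = [(-5 * sigma) + 10 * sigma * i / (n - 1) for i in range(n)]
    dz = zs[1] - zs[0]
    w = 0.0
    for z in zs:
        rho = math.exp(-z**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
        w += rho * V_P(z + Z0) * dz
    return w

# Analytic expectation: W(Z0) = V0 (1 + exp(-2 k^2 sigma^2) cos(2 k Z0)),
# i.e., the modulation contrast is damped by the Gaussian factor below.
damping = math.exp(-2 * k**2 * sigma**2)
```

The larger the electron-density extent relative to $\lambda$, the weaker the PEC modulation, which mirrors the $|m_j|$-dependent modulation depths discussed for Fig.~\ref{50D}.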
In some of the cases, the fine-structure coupling can be lifted by the DC field, and the time- and $Z_0$-independent states become $\vert n, \ell, m_\ell, m_s \rangle$. In those cases, the wavefunctions to be used in Eq.~\ref{Vad0} are $\psi_{k,0} ({\bf{r}}_e) = \langle {\bf {r}}_e \vert n, \ell, m_\ell \rangle$, \hfix{and their PI rates are directly given by Eqs.~\ref{sigma_av}, \ref{sigma_av2}, \ref{sigmaxm} and~\ref{eq:rate}, and by (incoherently) summing the rates over $\ell'$.} One such example is the weak one-dimensional lattice of Rb $50F$-states with an external DC electric field, discussed in Sec.~\ref{app1}. \section{Results} \label{app} \subsection{An implementation of a strong optical lattice} \label{app2} \begin{figure*}[t] \includegraphics[scale=0.325]{Fig2a_new.pdf} \includegraphics[scale=0.325]{Fig2b_new.pdf}\\ \includegraphics[scale=0.325]{Fig2c_new.pdf} \includegraphics[scale=0.325]{Fig2d_new.pdf} \caption{PECs in a one-dimensional ponderomotive optical lattice of Rb Rydberg atoms for $n=50$ and $m_j = 1/2$, $\lambda=1064$~nm, and lattice depth $2V_0 = h \times 3$~GHz $=1.48\times10^{6} \, E_{rec}$. PEC energies are in cm$^{-1}$ relative to the ionization threshold, and CM positions $Z_0$ in units $\lambda$. The boxed regions in (a) correspond to the full regions displayed in panels (b) and (c), while the boxed region in (c) corresponds to the full region displayed in panel (d). The color of the dots on the PECs shows PI rate $\Gamma_k(Z_0)$, on the color scales provided, and the dot diameter is proportional to $\Gamma_k(Z_0)$. For clarity, the dot diameters in (b) are enhanced by a factor of 50 relative to those in (a), and those in (c) and (d) by a factor of 10. The close-up view in (d) shows $\approx 10$-nm-period structures and a small fine-structure splitting of the PECs.} \label{adpot} \end{figure*} In strong Rydberg-atom optical lattices, lattice-induced state mixing gives rise to a rich structure of PECs. 
This is illustrated in Fig.~\ref{adpot} for $n=50$, $m_{j}=1/2$, and lattice depth $2V_0 = h \times 3$~GHz, equivalent to $1.48\times10^{6} E_{rec}$, with the single-photon recoil energy of Rb for $\lambda = 1064$-nm, $E_{rec} = h \times 2.027$~kHz. For this lattice it is $2 V_0 \sim 0.1 E_H/n^3$, placing it in the strong-lattice regime as defined in Sec.~\ref{sec-PECs}. Fine structure and quantum defects~\cite{Gallagherbook} are included in the calculation. The diameters and colors of the dots on the PECs in Fig.~\ref{adpot} indicate the PI rates, $\Gamma_k(Z_0)$, of the PECs. The lattice primarily mixes states of small quantum defects, which covers the vast majority of Rydberg states. The adiabatic states of the PECs, $\vert \psi_k (Z_0) \rangle$, are coherent superpositions of a wide range of low-$\ell$ and high-$\ell$ states, including circular Rydberg states. The lowest-energy curves in Fig.~\ref{adpot}(a) are substantially perturbed $50F$-states, which are lowered in energy due to their quantum defect and are not entirely mixed into the manifold of high-$\ell$ states, which have near-zero quantum defect (states with $\ell \geq 4$ in Rb). The lattice-induced mixing of $F$-character into the high-$\ell$ states is efficient enough to make the latter laser-excitable from a low-lying $D$-level. For instance, the three-step excitation sequence $5S_{1/2}$ $\rightarrow$ $5P_{1/2}$ $\rightarrow$ $5D_{3/2}$ $\rightarrow$ $nF_{5/2}$ using 795~nm, 762~nm, and $\sim$1260~nm laser light would be suited for a spectroscopic study of these PECs. Several prominent features of the PECs in Fig.~2 can be interpreted based on analogies with the Stark and diamagnetic effects in Rydberg atoms~\cite{Gallagherbook, Friedrichbook}. Near the inflection points of the lattice [$Z_{\rm 0}=\pm\lambda/8$ in Fig.~\ref{adpot}(a)], the PECs include sets of about 50 straight, parallel lines that resemble the level structure of the linear DC Stark effect. 
Since the ponderomotive potential $V_{\rm P}(z)$ is linear in these regions, the analogy with the DC Stark effect is expected~\cite{Younge.2009a}. Near the nodes and anti-nodes of the lattice [$Z_{\rm 0}=0,\pm\lambda/4$ in Fig.~\ref{adpot}(a)], the levels resemble the rotational and vibrational diamagnetic energy-level structure of Rydberg atoms~\cite{Zimmerman.1978a,Gay.1983a,Cacciani.1986a,vanderVeldt.1993a}. This similarity also is expected, because the ponderomotive potential near the nodes and anti-nodes and the diamagnetic potential share a quadratic dependence on position~\cite{Younge.2009a}. Spectroscopic studies in high-intensity lattices are yet to reveal the PEC structures shown in Fig.~\ref{adpot}. The PI rates on the PECs, $\Gamma_k(Z_0)$, overall scale with the lattice intensity at the atomic CM location, \gfix{which is proportional to} $(1+\cos( 2 k Z_0 ))$. The maximum $\Gamma_k$-values in Fig.~\ref{adpot}(a) are $\Gamma_k \approx 1.6\times10^{6}$~s$^{-1}$ for the $50F$-like states at $Z_{\rm 0}=0$, where the lattice intensity is maximal. For the high-$\ell$ states within the range displayed in Fig.~\ref{adpot}(b), which is near a lattice-intensity minimum, the $\Gamma_k$ range between $2\times10^{4}$~s$^{-1}$ and zero (at the exact anti-node positions). For the high-$\ell$ states within the range of Fig.~\ref{adpot}(c), near an intensity maximum, the $\Gamma_k$-values peak at about $10^{5}$~s$^{-1}$. Since radiative decay rates and black-body-radiation-induced transition rates of Rydberg levels around $n=50$ are only on the order of $10^{4}$~s$^{-1}$, PI-induced decay in the lattice will be quite noticeable even for the high-$\ell$ states. For the $50F$-like states, it will greatly exceed natural decay, for conditions as in Fig.~\ref{adpot}. 
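The lattice-depth figures quoted for Fig.~\ref{adpot} can be checked numerically. The short sketch below (Python, SI units; the $^{87}$Rb mass is an approximate input) evaluates the single-photon recoil energy $E_{rec}=\hbar^2k^2/2m$ at $\lambda=1064$~nm and expresses the depth $2V_0 = h\times3$~GHz in recoil units:

```python
import math

h    = 6.62607015e-34       # Planck constant, J s
hbar = h / (2 * math.pi)
u    = 1.66053906660e-27    # atomic mass unit, kg
m_Rb = 86.909 * u           # approximate mass of 87Rb, kg
lam  = 1064e-9              # lattice wavelength, m
k    = 2 * math.pi / lam

# Single-photon recoil energy; expected to be about h x 2.027 kHz
E_rec = hbar**2 * k**2 / (2 * m_Rb)

# Lattice depth 2 V0 = h x 3 GHz in units of E_rec; expected about 1.48e6
ratio = (3e9 * h) / E_rec
```

Both quoted values, $E_{rec} = h\times 2.027$~kHz and $2V_0 = 1.48\times10^{6}\,E_{rec}$, are reproduced to well within one percent.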
\hfix{Due to the strong dependence of the PI cross sections on $\ell$, seen in Fig.~1, it is not obvious how much quantum interference of PI amplitudes from lattice-mixed states (inner sum in Eq.~\ref{PIcross2}) matters. The importance of interference can be assessed by taking an incoherent sum, in which the left vertical bar in Eq.~\ref{PIcross2} is moved to the inside of the inner sum, and by comparing coherent- with incoherent-sum results. The relative error in cross sections incurred by taking incoherent sums, averaged over all states $\vert \psi_k (Z_0) \rangle$ in Fig.~2, is 0.05, with a standard deviation of 0.04. While this error is too small to matter in cases where the lattice-induced PI rate simply has to be below an application-specific tolerance limit, it may be large enough to be noticeable in PI rate measurements.} In possible future experimental work, an ultra-deep Rydberg-atom lattice with a depth of $2V_0= h \times 3$~GHz, as considered in this section, could be achieved by focusing two counter-propagating laser beams, each with a power of 200~W, into a confocal spot with $w_{\rm 0}=20~\mu$m. Such a lattice can be prepared, for instance, by using a near-concentric field enhancement cavity~\cite{Chen.2014}, with the Rydberg atoms loaded into the focal spot of the cavity. The PI-induced spectroscopic level widths in Fig.~\ref{adpot}, which are $\Gamma_k/(2 \pi) \lesssim 250~$kHz, should be large enough to become visible in spectroscopic measurement of PECs with narrow-linewidth lasers (linewidth $\lesssim 100~$kHz). Another possible measurement method for PEC curves and level widths would be microwave spectroscopy from a suitable low-$\ell$ launch Rydberg state. This method would essentially be Doppler-effect-free and benefit from the Hz-level linewidth of typical microwave sources, resulting in higher spectral resolution. 
However, it would add experimental complexity due to the need to account for the PI and level shifts of the Rydberg launch state within the optical lattice. We note that near $Z_{\rm 0}=0$ and $\pm\lambda/4$ in Fig.~\ref{adpot}, and within certain energy regions, the PECs feature series of periodic wells with a periodicity of $\approx 10$~nm and a depth in the range of $h \times 10$ to 100~MHz. The periodicity is about a factor of 50 smaller than the fundamental $\lambda/2$-periodicity of the optical lattice, while the depth allows about one to three quantum states of the CM motion in each well, with tunneling-induced well-to-well coupling. On a single PEC there are as many as about 20 small periodic wells, making the system conducive to studies of tunneling-induced quantum transport. Further, since CM momentum exchange between CM wavefunctions and periodic gratings scales with the inverse of the spatial period, the 10-nm-period PECs in Fig.~\ref{adpot} may also serve well as large-angle Bragg reflectors and beam splitters for Rydberg-atom CM wavefunctions, which could potentially become useful in atom-interferometry applications~\cite{Cronin.2009}. \subsection{An implementation of a weak optical lattice} \label{app1} \begin{figure*}[t] \begin{centering} \includegraphics[width=5.5in]{Figur3_new.pdf} \caption{(a) and (b): PECs of Rb $50F$ in an optical lattice with $\lambda=1064$~nm and a depth $2 V_0 = h \times 20$~MHz$ = 9867 E_{rec}$. The deviations of the PECs from the field-free Rb $50F_{7/2}$ state, $\Delta W$, in GHz are plotted vs CM position, $Z_{\rm 0}$, in units $\lambda$. The PEC labels show $|m_j|$. Symbol sizes and colors in (a) and (b) show PI rate and expectation value of $j$, respectively, on the given color scales. (c) Magnified view of a $|m_j|=5/2$ PEC-pair near the lattice inflection point, showing $j$-mixing and level repulsion. (d) PECs for the same conditions as in (a-c), but with an added longitudinal DC electric field of 0.1~V/cm. 
The DC field breaks the fine-structure coupling, and the PECs correspond with position-independent adiabatic states $\vert 50F, m_\ell, m_s\rangle$. Symbol size and color show PI rate on the given color scale. }\label{50D} \end{centering} \end{figure*} In weak Rydberg-atom lattices it is $2 V_0 \ll 0.1 E_H/n^3$, $\ell$-mixing plays no significant role \gfix{for states with $\ell<4$}, and PI is concentrated within a small number of non-mixed low-$\ell$ PECs that have large PI cross sections (see Fig.~1). Hence, while the PI rate averaged over all PECs drops in proportion with lattice intensity, atoms on low-$\ell$ PECs may still photo-ionize at high rates. Examples of PECs for $50F_j$ in a weak lattice with a depth of $2V_0 = h \times 20$~MHz$ = 9867 E_{rec}$ are shown in Fig.~\ref{50D}. The $50F$-levels split into seven resolved components of conserved $m_j$. With the exception of $|m_j|=7/2$, there are two fine-structure states, $j=5/2$ and $7/2$, \hfix{that have a field-free splitting of 1.3~MHz and} that become mixed by the lattice. Solving Eqs.~\ref{H}-\ref{eq:gamma} in sub-spaces $\{ \vert 50F_{7/2}, m_j \rangle, \vert 50F_{5/2}, m_j \rangle \}$ yields the PECs and their PI rates. As seen in Fig.~\ref{50D}(a), the modulation depth of the PECs varies from strongly modulated at $|m_{j}|=7/2$ to barely modulated at $|m_{j}|=1/2$. The variation in PEC modulation depth arises from the differing extent of the Rydberg-electron wavefunctions along the axis of the lattice, which results in varying amounts of averaging in Eq.~\ref{Vad}~\cite{Anderson.2012a}. Generally, the sublevels with lesser values of $|m_j|$ have wavefunctions that extend more in the direction of the lattice axis, resulting in less deeply modulated PECs. Lattice-induced $j$-mixing is illustrated in Fig.~\ref{50D}(b), where the expectation value $j$ on some PECs varies considerably as a function of $Z_0$, while maintaining an average of 3 over pairs of coupled PECs with same $m_j$. 
The fine structure coupling causes pairs of states of same $m_j$ to repel each other near the lattice inflection points at $Z_0=\pm \lambda/8$. The level repulsion is seen best in Fig.~\ref{50D}(c), where we show a detailed view of the level pair with $m_j=5/2$ near a lattice inflection point. As in Sec.~\ref{app2}, the PI rates $\Gamma_k(Z_0)$ generally scale with the lattice intensity, which is $\propto(1+\cos( 2 k Z_0 ))$. Further, according to Eq.~\ref{sigmaxm}, the $\Gamma_k$-values at fixed $Z_0$ should increase with $m_\ell$, and by continuation, with $m_j$. This trend is obvious in Fig.~\ref{50D}(a). \hfix{The relative cross-section differences between taking coherent inner sums, as in Eq.~\ref{PIcross2}, and taking incoherent sums are 0.03, averaged over all $\vert \psi_k(Z_0) \rangle$ in Fig.~3(a), with a standard deviation of 0.03.} \hfix{To exhibit the $m_\ell$-dependence of cross sections and rates} more clearly, in Fig.~\ref{50D}(d) we show PECs and PI rates, $\Gamma_k$, with an additional longitudinal electric field along the $z$-direction. The field is sufficiently strong to decouple the fine structure, but weak enough to not cause significant $\ell$-mixing with nearby $D$ and $G$ Rydberg states. The adiabatic states $\vert \psi_k \rangle$ associated with the PECs then approximately are $\vert 50F, m_\ell, m_s \rangle$. With orbital degeneracies lifted, the PECs follow from Eq.~\ref{Vad0} with $\psi({\bf{r}}_e) = \langle {\bf{r}}_e \vert 50F, m_\ell \rangle$. There still is a small fine-structure splitting between PECs with same $m_\ell$ and different $m_s$, with the exception of $m_\ell = 0$, where the spin-up and -down states are \gfix{exactly degenerate}. The PI rates and their ratios between the PECs in Fig.~\ref{50D}(d), at a fixed $Z_0$, are now governed by Eq.~\ref{sigmaxm}, with $\ell=3$ and $\ell'=2$ or 4, \hfix{and the shell-averaged PI cross sections. 
The latter are $\bar{\sigma}_{50F}^{\epsilon', \ell'} = $650~barn for $\ell'=2$ and 3494~barn for $\ell'=4$.} Factoring in all dependencies in Eq.~\ref{sigmaxm} and (incoherently) summing the PI rates over $\ell'$, the rates at the lattice intensity maxima, for conditions as in Fig.~\ref{50D}(d), vary between $21 \times 10^3~$s$^{-1}$ for $m_\ell = 3$ and $13 \times 10^3~$s$^{-1}$ for $m_\ell = 0$, \hfix{with no noticeable dependence on $m_s$.} In comparison, for the rates of black-body-radiation-induced bound-bound (Bbb) transitions and black-body photoionization (Bpi)~\cite{Traxler.2013,Anderson.2013b} we calculate $\Gamma_{Bbb,50F} = 10.60 \times 10^3~$s$^{-1}$ and $\Gamma_{Bpi,50F} = 0.77 \times 10^3~$s$^{-1}$, respectively, for a radiation temperature of 300~K and for all $m_\ell$. The lattice-induced PI should therefore be dominant over black-body-induced transitions. In potential experimental work, a lattice as in Fig.~\ref{50D} could be achieved, for instance, by focusing two counter-propagating $1064$-nm laser beams, with a power of 1~W each, into a confocal spot with $w_{\rm 0}=20~\mu$m. The PECs of Rb $nF$ states could then be studied via three-photon laser excitation from Rb $5S_{1/2}$. \gfix{A laser-spectroscopic measurement of PI-limited PEC widths of 50F states in lattices as in Fig.~\ref{50D} would require a laser linewidth $\lesssim 1$~kHz.} \section{Conclusion} \label{Sec:conc} \hfix{We have studied photoionization of Rb Rydberg atoms in an optical lattice formed by 1064-nm laser beams.} The strong Rydberg-atom lattices discussed in Sec.~\ref{app2} are suitable, for instance, for all-optical quantum initialization of high-angular-momentum states~\cite{Cardman.2020a} and other quantum-control applications. 
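As a closing consistency check on the weak-lattice numbers of Sec.~\ref{app1}, the sketch below (Python, SI units; the quoted shell-averaged cross sections are taken as inputs) combines the angular factors of Eq.~\ref{sigmaxm}, an incoherent sum over $\ell'$, and the anti-node intensity of a lattice with depth $2V_0 = h\times20$~MHz. It reproduces the quoted $m_\ell$-dependent rates, $21\times10^3~$s$^{-1}$ and $13\times10^3~$s$^{-1}$, to within roughly ten percent; the residual difference is presumably rounding of the quoted cross sections:

```python
import math

# SI constants
e, m_e  = 1.602176634e-19, 9.1093837015e-31
c, eps0 = 2.99792458e8, 8.8541878128e-12
h       = 6.62607015e-34
hbar    = h / (2 * math.pi)

lam   = 1064e-9
omega = 2 * math.pi * c / lam
barn  = 1e-28   # m^2

def sigma_x(l, lp, ml, sigma_bar):
    """x-polarized PI cross section per Eq. sigmaxm."""
    lg = max(l, lp)   # l_>
    return 1.5 * (lp * (lp + 1) + ml**2) / ((2 * lg + 1) * (2 * lg - 1)) \
        * (2 * l + 1) / lg * sigma_bar

# Shell-averaged 50F cross sections quoted in the text (barn)
sig_d, sig_g = 650 * barn, 3494 * barn

# Anti-node intensity for depth 2 V0 = h x 20 MHz,
# from V_P = I e^2 / (2 c eps0 m_e omega^2)
I_max = (h * 20e6) * (2 * c * eps0 * m_e * omega**2) / e**2

def rate(ml):
    """PI rate at a lattice intensity maximum, incoherently summed over l'."""
    sigma = sigma_x(3, 2, ml, sig_d) + sigma_x(3, 4, ml, sig_g)
    return I_max * sigma / (hbar * omega)
```

The $m_\ell=3$ to $m_\ell=0$ rate ratio comes out near 1.57, consistent with the quoted 21:13.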
Weak Rydberg-atom lattices, as discussed in Sec.~\ref{app1}, are attractive for applications that include quantum computing and simulation~\cite{Zhang.2011a, Nguyen.2018,Barredo.2020}, and high-precision spectroscopy~\cite{Ramos.2017,Moore.2015a,Malinovsky.2020a}. Weak Rydberg-atom lattices at magic wavelengths~\cite{Safronova.2003a} can minimize trap-induced shifts of certain transitions~\cite{Moore.2015b, Ramos.2017}. Further, the $nF_j$ Rydberg states we have considered in our examples can serve as launch states for circular-state production~\cite{Ramos.2017,Cardman.2020a}. Some of these and other applications of Rydberg-atom optical lattices are subject to limitations from spectroscopic line broadening and decoherence caused by lattice-induced photoionization. The photoionization rates as calculated in our paper will be useful in detailed feasibility estimates for these efforts. \begin{acknowledgments} This work was supported by NSF Grant No. PHY-1806809 and NASA Grant No. NNH13ZTT002N. \end{acknowledgments}
\section{Introduction} \label{sec:intro} Automatic Speech Recognition (ASR) systems transcribe audio signals, consisting of speech, into text. While state-of-the-art ASR systems reached high transcription quality, training them requires large amounts of data and compute resources. Fortunately, many high performing systems are available as off-the-shelf cloud services. However, a performance drop can be observed when applying them to specific domains or accents \cite{BLACKBOX_ADAPTATION, PREV_SEQ2SEQ_ERROR_CORRECTION_1}, or when transcribing noisy audio. Moreover, cloud services usually expose the ASR models as a black box, making it impossible to further fine-tune them. \AEDFULL models are designed to post-process the ASR output, in order to detect transcription errors and avoid their propagation to downstream tasks \cite{ERROR_CORRECTION_REVIEW}. \textsc{AED}\xspace models are widely used in interactive systems, to engage the user to resolve the detected errors. For example, AED systems can be found in \emph{Google Docs Voice Typing}, where low confidence words are underlined, making it easier for users to spot errors and take actions to correct them. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/diagram.png} \vspace{-0.5cm} \caption{ Our \textsc{AED}\xspace pipeline. The confidence scores are quantized and jointly encoded with the transcription text into a contextualized representation. } \label{fig:diagram} \vspace{-0.5cm} \end{figure} Modern NLP models usually build upon the Transformer architecture \cite{Transformer}. However, no Transformer-based AED models have been proposed yet. Recently, the Transformer has been applied to ASR \emph{error correction} \cite{PREV_SEQ2SEQ_ERROR_CORRECTION_1, PREV_SEQ2SEQ_ERROR_CORRECTION_2, FASTCORRECT2, FASTCORRECT}, another ASR post-processing task. These models use only the transcription hypothesis text as input and discard other signals from the ASR model. 
However, earlier work on \textsc{AED}\xspace (not Transformer-based) has shown the benefits of such signals \cite{ERROR_DETECTION_1, ERROR_DETECTION_2, ERROR_DETECTION_3} and specifically the benefits of ASR word-level confidence scores \cite{ERROR_DETECTION_WITH_CONFIDENCE}, which are often provided in addition to the transcribed text \cite{CONFIDENCE_SCORES_SURVEY, CONFIDENCE_SCORE_RESEARCH_1, CONFIDENCE_SCORE_RESEARCH_2}. In this work we focus exclusively on \textsc{AED}\xspace and propose a natural way to embed the ASR confidence scores into the Transformer architecture. We introduce \mbox{\ace \ \textsc{RED-ACE}\xspace}, a modified Transformer encoder with an additional embedding layer, that jointly encodes the textual input and the word-level confidence scores into a contextualized representation (\Cref{fig:method}). Our \textsc{AED}\xspace pipeline first quantizes the confidence scores into integers and then feeds the quantized scores with the transcribed text into the modified Transformer encoder (\Cref{fig:diagram}). Our experiments demonstrate the effectiveness of \textsc{RED-ACE}\xspace in improving \textsc{AED}\xspace performance. In addition, we demonstrate the robustness of \textsc{RED-ACE}\xspace to changes in the transcribed audio quality. Finally, we release a novel dataset that can be used to train and evaluate \textsc{AED}\xspace models. \section{\ace \ \textsc{RED-ACE}\xspace} \label{sec:method} Following recent trends in NLP, we use a pre-trained Transformer-based language model, leveraging its rich language representation. \textsc{RED-ACE}\xspace is based on a pre-trained BERT \cite{BERT}, adapted to be confidence-aware and further fine-tuned for sequence tagging. 
Concretely, our \textsc{AED}\xspace model is a binary sequence tagger that, given the ASR output, consisting of the transcription hypothesis words and their corresponding word-level confidence scores, predicts an \textsc{Error}\xspace or \mbox{\textsc{NotError}\xspace} tag for each input token.\footnote{We discuss words to tokens conversion in \S \ref{impl:details}.} Our \textsc{AED}\xspace pipeline is illustrated in \Cref{fig:diagram}. First, we quantize the floating-point confidence scores into integers using a binning algorithm.\footnote{Typical confidence scores range between 0.0 and 1.0. We perform experiments with simple equal-width binning and quantile-based discretization (equal-sized buckets), as well as different bin numbers. More details in \S \ref{impl:details}.} Next, the quantized scores and the transcription text are fed into a confidence-aware BERT tagger. In BERT, each input token has 3 embeddings: token, segment and position.\footnote{We refer the reader to \citet{BERT} for more details.} To adapt BERT to be confidence-aware, we implement an additional dedicated embedding layer, indicating the confidence bin that the input token belongs to. We construct a learned confidence embedding lookup matrix $M \in \mathbb{R}^{B\times H}$, where $B$ is the number of confidence bins and $H$ is BERT's embedding vector's size. For a given token, its input representation is constructed by summing the corresponding BERT's embeddings with its confidence embedding (\Cref{fig:method}). This allows the model to learn a dedicated dense representation vector for each confidence bin, as well as naturally combine it with the final contextualized representation of each input token. \begin{figure}[t] \centering \vspace{-0.1cm} \includegraphics[width=\columnwidth]{figures/tagger.pdf} \caption{Our confidence-aware AED model. We use a BERT-based tagger with modifications colored in green.
An additional embedding layer is added to represent the embedding of the quantized confidence scores.} \label{fig:method} \end{figure} \begin{table}[t] \small \scalebox{0.8}{ \centering \setlength{\tabcolsep}{4pt} \begin{tabular}{llcrrr} \toprule \multicolumn{1}{c}{ASR Model} & \multicolumn{1}{c}{Pool} & \multicolumn{1}{c}{Split} & \multicolumn{1}{c}{\# Examples} & \multicolumn{1}{c}{\# Words} & \multicolumn{1}{c}{\# Errors} \\ \midrule \multirow{6}{*}{\emph{default}} & \multirow{3}{*}{\clean} & Train & 103,895 & 3,574,027 & 357,145 (10.0\%) \\ & & Dev & 2,697& 54,062 & 5,111 (9.5\%) \\ & & Test & 2,615 & 52,235 & 4,934 (9.4\%) \\ \cmidrule{2-6} & \multirow{3}{*}{\other} & Train & 146,550 & 4,650,779 & 770,553 (16.6\%) \\ & & Dev & 2,809 & 48,389 & 9,876 (20.4\%) \\ & & Test & 2,925 & 50,730 & 10,317 (20.3\%) \\ \midrule \multirow{6}{*}{\emph{video}} & \multirow{3}{*}{\clean} & Train & 104,013 & 3,589,136 & 210,324 (5.9\%) \\ & & Dev & 2,703& 54,357 & 3,109 (5.7\%) \\ & & Test & 2,620 & 52,557 & 2,963 (5.6\%) \\ \cmidrule{2-6} & \multirow{3}{*}{\other} & Train & 148,678 & 4,810,226 & 148,678 (7.9\%) \\ & & Dev & 2,809 & 50,983 & 5,901 (11.6\%) \\ & & Test & 2,939 & 52,192 & 6,033 (11.6\%) \\ \bottomrule \end{tabular} } \caption{ \textsc{AED}\xspace dataset statistics. } \label{tab:dataset-stats} \vspace{-0.5cm} \end{table} \section{Dataset Creation and Annotation} \label{sec:dataset} To train and evaluate \textsc{AED}\xspace models, we generate a dataset with labeled transcription errors. \paragraph{Labeling of ASR Errors.} We decode audio data using an \textsc{ASR}\xspace model and obtain the transcription hypothesis. Then, we align the hypothesis words with the reference (correct) transcription. Specifically, we find an edit path, between the hypothesis and the reference, with the minimum edit distance and obtain a sequence of edit operations (insertions, deletions and substitutions) that can be used to transform the hypothesis into the reference. 
Every incorrect hypothesis word (i.e., one that needs to be deleted or substituted) is labeled as \textsc{Error}\xspace; the rest are labeled as \textsc{NotError}\xspace. \paragraph{Audio Data Source.} We use the \LibriSpeech corpus \cite{LIBRISPEECH}, containing 1000 hours of transcribed English speech from audio books.\footnote{\url{https://www.openslr.org/12/}} The corpus contains \emph{clean} and \emph{other} pools, where \emph{clean} is of higher recording quality.\footnote{We provide additional details about the corpus in \S \ref{appendix:published_dataset}.} \paragraph{ASR Models.} In this work we focus exclusively on a black-box setting, where the exact implementation details of the ASR and the confidence models are unknown. This setting is particularly relevant since many applications rely on strong performance of black-box ASR models which are exposed as cloud services. We use Google Cloud Speech-to-Text API as our candidate ASR model.\footnote{\url{https://cloud.google.com/speech-to-text}} In our main experiments we select the \default ASR model.\footnote{\url{https://cloud.google.com/speech-to-text/docs/basics\#select-model}} To ensure the generalization ability of RED-ACE, we repeat our main experiments using a different ASR model, in this case we choose the \video model. Table \ref{tab:dataset-stats} presents the statistics of our dataset. It is notable that the main model's error rate (\default) is about twice as high as the additional model's error rate (\video), which shows that (even though both models are part of the Google Cloud API) this additional ASR model is substantially different from the main ASR model we used.
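The alignment-and-labeling procedure described above can be sketched as follows (Python, stdlib-only; \texttt{difflib}'s sequence matcher is an illustrative stand-in for a minimum-edit-distance alignment, and all names are our own):

```python
import difflib

ERROR, NOT_ERROR = 1, 0

def label_errors(hypothesis, reference):
    """Return one ERROR/NOT_ERROR label per hypothesis word.

    Hypothesis words that the alignment would substitute or delete
    are labeled ERROR; reference words missing from the hypothesis
    (insertions) have no hypothesis word to label.
    """
    labels = []
    sm = difflib.SequenceMatcher(a=hypothesis, b=reference, autojunk=False)
    for op, a0, a1, b0, b1 in sm.get_opcodes():
        if op == "equal":
            labels.extend([NOT_ERROR] * (a1 - a0))
        elif op in ("replace", "delete"):
            labels.extend([ERROR] * (a1 - a0))
        # op == "insert": no hypothesis word to label
    return labels

hyp = "the cat sad on mat".split()
ref = "the cat sat on the mat".split()
labels = label_errors(hyp, ref)
```

Here "sad" is a substitution error, while the missing "the" is an insertion on the reference side and therefore contributes no hypothesis label.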
\paragraph{Data Release.} Since Google Cloud requires a paid subscription and since the underlying ASR models may change over time, we make our dataset publicly available.\footnote{Additional details about the dataset are provided in \S \ref{appendix:published_dataset}.} This ensures full reproducibility of our results (in case the ASR models change) and makes it easier for researchers to train and evaluate AED models, removing the need to run inference on paid cloud-based ASR models or train dedicated models in order to transcribe audio. \section{Experimental Setup} \label{sec:experimental-setup} Our experiments examine the informativeness of the confidence scores as well as the effectiveness of RED-ACE in combining them with text. We provide extensive implementation details in \S \ref{impl:details}. \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{figures/pre-recall-plot-both.pdf} \vspace{-0.3cm} \caption{ Threshold tuning process for the \textsc{C-O}\xspace baseline. Models are evaluated using different confidence score thresholds and the threshold that yields the best F1 is chosen. A similar process is performed for \textsc{BERT \& C}\xspace and \textsc{BERT | C}\xspace. For \textsc{BERT-MLM}\xspace we tune the value of $k$. } \label{fig:threshold_plot} \end{figure} \begin{figure}[t] \centering \vspace{-0.1cm} \includegraphics[width=\columnwidth]{figures/BERTC.pdf} \vspace{-0.5cm} \caption{The BERT$_C$\xspace baseline, which modifies the input to the tagger, unlike \textsc{RED-ACE}\xspace which modifies BERT's embeddings. The value of the respective confidence score is appended to the final contextualized representation.} \label{fig:bertc} \vspace{-0.5cm} \end{figure} \subsection{Baselines} \label{sec:models} \paragraph{\textsc{C-O}\xspace (Confidence Only)} Uses the word-level scores from the ASR confidence model directly.
Predicts \textsc{Error}\xspace if the token's confidence score is below a threshold.\footnote{We choose the confidence threshold or $k$ value (in case of \textsc{BERT-MLM}\xspace) with the best F1 on the dev set (\Cref{fig:threshold_plot}).} \paragraph{\textsc{BERT-MLM}\xspace} Masks out the input words one at a time and uses a pre-trained BERT \cite{BERT} as a Masked Language Model (MLM) in order to infill them. Predicts \textsc{Error}\xspace for input words that are not among BERT's top-$k$ suggestions.\footnotemark[9] \paragraph{BERT} We fine-tune BERT \cite{BERT} for sequence tagging (only on text, \emph{without} adding \textsc{RED-ACE}\xspace). As Transformers have not yet been applied to \textsc{AED}\xspace, we choose BERT as a pre-trained LM following \citet{BERT_GRAMMAR_ERROR_DETECTION}, who applied it to Grammatical Error Detection (\textsc{GED}\xspace) and achieved the highest performance in the \textsc{NLPTEA-2020} Shared Task \cite{rao2020overview}. \paragraph{BERT \& C} Predicts \textsc{Error}\xspace if BERT predicts \textsc{Error}\xspace \textbf{and} the confidence is below a threshold.\footnotemark[9] \paragraph{BERT | C} Predicts \textsc{Error}\xspace if BERT predicts \textsc{Error}\xspace \textbf{or} the confidence is below a threshold.\footnotemark[9] \paragraph{BERT$_C$} We fine-tune \textsc{BERT}\xspace \emph{jointly} with the confidence scores by concatenating the score value to the token's contextualized representation produced by BERT (directly before it is fed into the sequence tagger). BERT's last hidden layer dimension is increased by 1, and the corresponding value is populated with the token's confidence score. An illustration is provided in \Cref{fig:bertc}. \subsection{Evaluation} \paragraph{Main Settings.} In the main settings we train the models on the \clean and \other training sets and evaluate them on the \clean and \other test sets, respectively.
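For illustration, the dev-set threshold tuning used by the confidence-based baselines (\Cref{fig:threshold_plot}) can be sketched as a simple sweep; the grid of candidate thresholds is an assumption of this sketch.

```python
def f1_at_threshold(examples, threshold):
    """examples: word-level (confidence, is_error) pairs.
    Predict Error when confidence < threshold; return (P, R, F1)."""
    tp = sum(1 for c, e in examples if c < threshold and e)
    fp = sum(1 for c, e in examples if c < threshold and not e)
    fn = sum(1 for c, e in examples if c >= threshold and e)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def tune_threshold(dev_examples, candidates=None):
    """Pick the confidence threshold with the best dev-set F1
    (candidate grid of 0.01 .. 0.99 is an assumption)."""
    if candidates is None:
        candidates = [i / 100 for i in range(1, 100)]
    return max(candidates, key=lambda t: f1_at_threshold(dev_examples, t)[2])
```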
\paragraph{Robustness Settings.} A real-world \textsc{AED}\xspace system should remain effective when the audio stems from different recording qualities. Changes in recording quality can affect the ASR model's error distribution and thus potentially reduce the effectiveness of the \textsc{AED}\xspace model. As our dataset contains two pools with different recording quality (\Cref{tab:dataset-stats}), we can measure whether \textsc{RED-ACE}\xspace's performance deteriorates when the audio quality of the training data changes. To this end, we define the \emph{robustness settings} (\Cref{tab:error-detection-robustness}), where we perform a cross-pool evaluation, evaluating models that were trained on the \clean and \other training sets using the \other and the \clean test sets, respectively. \paragraph{Metric.} We measure error detection \emph{Precision (P)}, \emph{Recall (R)} and \emph{F1}. \emph{Recall} measures the percentage of real errors that were detected, while \emph{Precision} measures the percentage of detected errors that are real errors. We calculate \emph{P} and \emph{R} at the word level. We also report span-level results for the main settings in \Cref{tab:span-level} in the appendix.
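As a sketch of the span-level variant of the metric, following the appendix definition that every maximal run of error words forms one error span (the exact-match criterion for counting a predicted span as correct is our assumption):

```python
from itertools import groupby

def to_spans(labels):
    """Collapse a word-level label sequence into (label, start, end) spans:
    every maximal run of identical labels is one span (end is exclusive)."""
    spans, pos = [], 0
    for label, run in groupby(labels):
        length = len(list(run))
        spans.append((label, pos, pos + length))
        pos += length
    return spans

def span_f1(pred_labels, gold_labels):
    """Span-level F1: a predicted error span counts as a true positive
    only if an identical gold error span exists (assumed criterion)."""
    pred = {s for s in to_spans(pred_labels) if s[0] == "Error"}
    gold = {s for s in to_spans(gold_labels) if s[0] == "Error"}
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    return (2 * p * r / (p + r)) if p + r else 0.0
```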
\begin{table}[t] \small \centering \scalebox{0.83}{ \begin{tabular}{l|ccc|ccc} \toprule & \multicolumn{3}{c|}{ \clean} & \multicolumn{3}{c}{ \other} \\ & R & P & F1 & R & P & F1 \\ \midrule \textsc{C-O}\xspace & 52.1 & 42.5 & 46.8 & 63.5 & 45.6 & 53.1 \\ \textsc{BERT-MLM}\xspace & 58.0 & 26.5 & 36.4 & \textbf{72.7} & 35.9 & 48.1 \\ \textsc{BERT}\xspace & 58.5 & 77.6 & 66.7 & 58.0 & 77.1 & 66.2 \\ \textsc{BERT \& C}\xspace & 55.8 & 75.0 & 64.0 & 55.5 & 75.5 & 64.0 \\ \textsc{BERT | C}\xspace & \textbf{63.3} & 68.1 & 65.6 & 68.1 & 67.1 & 67.6 \\ BERT$_C$\xspace & 51.7 & 78.9 & 66.3 & 58.1 & 78.8 & 66.9 \\ \midrule \textsc{RED-ACE}\xspace & 61.1 & \ \ \textbf{81.9}$^*$ & \ \ \textbf{70.0}$^*$ & 64.1 & \ \ \textbf{79.9}$^*$ & \ \ \textbf{71.1}$^*$ \\ \midrule F1 $\Delta$\% & & & +4.9\% & & & +7.4\% \\ \bottomrule \end{tabular} } \caption{ Main settings using the errors from the \default ASR model (see \Cref{tab:dataset-stats}). R and P stand for Recall and Precision. F1 $\Delta$\% compares \textsc{RED-ACE}\xspace to the BERT baseline. Results with $*$ indicate a statistically significant improvement compared to the strongest baseline. } \label{tab:error-detection} \vspace{-0.5cm} \end{table} \section{Results} \label{sec:results} \Cref{tab:error-detection} presents our main results, evaluating the models on the \emph{main settings} using errors from the main (\default) ASR model. \Cref{tab:error-detection-robustness} presents the results on the \emph{robustness settings}, also using errors from the main ASR model. The low F1 of \mbox{\textsc{C-O}\xspace} suggests that the \textsc{ASR}\xspace confidence has low effectiveness without a textual signal. The low F1 of \textsc{BERT-MLM}\xspace indicates that supervised training on real transcription errors is crucial. We next observe that \textsc{BERT \& C}\xspace performs worse than \textsc{BERT}\xspace on all metrics.
When comparing \mbox{\textsc{BERT | C}\xspace} to \textsc{BERT}\xspace we observe the expected increase in recall (\textsc{BERT}\xspace's errors are a subset of the errors from \textsc{BERT | C}\xspace) and a decrease in precision; F1 decreases on \clean and increases on \other. The results on BERT$_C$\xspace are particularly surprising. Similarly to \textsc{RED-ACE}\xspace, BERT$_C$\xspace trains \textsc{BERT}\xspace \emph{jointly} with the scores. However, unlike \textsc{RED-ACE}\xspace, BERT$_C$\xspace performs worse than \textsc{BERT}\xspace. This demonstrates the effectiveness and importance of our modeling approach, which represents the scores using learned dense embedding vectors. As \textsc{RED-ACE}\xspace is the only method that successfully combines the scores with text, we focus the rest of the analysis on comparing it to the text-based \textsc{BERT}\xspace tagger. In the \emph{main settings} (\Cref{tab:error-detection}), \textsc{RED-ACE}\xspace consistently outperforms BERT on all evaluation metrics in both pools. This demonstrates the usefulness of the confidence signal on top of the textual input, as well as the effectiveness of \textsc{RED-ACE}\xspace in combining those signals. \textsc{RED-ACE}\xspace's F1 $\Delta$\% on \clean is lower than on \other. This can be attributed to the fact that the error rate in \clean is about half of that in \other (\Cref{tab:dataset-stats}), which means that the model is exposed to fewer errors during training.
\begin{table}[t] \small \centering \scalebox{0.8}{ \begin{tabular}{l|ccc|ccc} \toprule & \multicolumn{3}{c|}{\other \ $\rightarrow$ \clean} & \multicolumn{3}{c}{\clean \ $\rightarrow$ \other} \\ & R & P & F1 & R & P & F1 \\ \midrule BERT & 64.3 & 71.9 & 67.9 & 47.1 & 80.3 & 59.4 \\ \textsc{RED-ACE}\xspace & \ \ \textbf{67.9}$^*$ & \ \ \textbf{77.0}$^*$ & \ \ \textbf{72.2}$^*$ & \ \ \textbf{53.7}$^*$ & \ \ \textbf{83.3}$^*$ & \ \ \textbf{65.3}$^*$ \\ \midrule F1 $\Delta$\% & & & +6.3\% & & & +9.9\% \\ \bottomrule \end{tabular} } \caption{ Robustness settings with the \default ASR model (\Cref{tab:dataset-stats}). \other \ $\rightarrow$ \clean means train on \other and evaluate on \clean. Format is similar to \Cref{tab:error-detection}. } \label{tab:error-detection-robustness} \vspace{-0.5cm} \end{table} Finally, we analyze the \emph{robustness settings} from \Cref{tab:error-detection-robustness}. We first note that \textsc{RED-ACE}\xspace outperforms BERT in both settings, indicating its robustness across different settings and showing that it remains effective when recording quality differs between training and test time. Looking at the \clean test set, we observe that training \textsc{AED}\xspace models on \other instead of \clean leads to an improvement in F1. This can be attributed to the higher error rate and larger number of training examples in \other (see \Cref{tab:dataset-stats}), which exposes the models to a larger number of errors during training. The F1 $\Delta$\% on \other \ $\rightarrow$ \clean (\Cref{tab:error-detection-robustness}) is comparable to \clean (\Cref{tab:error-detection}), with a statistically insignificant improvement. An opposite trend can be seen on the \other test set. The performance of models that were trained on \clean instead of \other deteriorates. Yet, \textsc{RED-ACE}\xspace's relative performance drop is smaller than BERT's.
\textsc{RED-ACE}\xspace drops by $8.2\%$ (from $71.1$ to $65.3$) while BERT drops by $10.3\%$ (from $66.2$ to $59.4$). This is also demonstrated by the statistically significant increase in F1 $\Delta$\%, from $7.4\%$ in \other \ $\rightarrow$ \other to $9.9\%$ in \clean \ $\rightarrow$ \other. This serves as additional evidence for the robustness of \textsc{RED-ACE}\xspace. We also note that \clean \ $\rightarrow$ \other is the most challenging setting, with BERT's F1 significantly lower than in the other 3 settings, meaning that \textsc{RED-ACE}\xspace shows the largest improvement (F1 $\Delta$\%) in the hardest setting. \paragraph{Generalization Across ASR Models.} As discussed in \S \ref{sec:dataset}, to ensure that RED-ACE is not limited to one specific ASR model, we repeat our experiments using a different ASR model. The results are presented in \Cref{tab:another_asr} and \Cref{tab:error-detection-robustness-video}. \textsc{RED-ACE}\xspace outperforms BERT in all settings, with statistically significant F1 improvements, further highlighting \textsc{RED-ACE}\xspace's robustness. \begin{table}[t] \small \centering \scalebox{0.82}{ \begin{tabular}{l|ccc|ccc} \toprule & \multicolumn{3}{c|}{\clean} & \multicolumn{3}{c}{\other} \\ & R & P & F1 & R & P & F1 \\ \midrule BERT & 54.9 & \textbf{77.2} & 64.2 & 52.7 & 78.8 & 63.2 \\ \textsc{RED-ACE}\xspace & \ \ \textbf{58.6}$^*$ & 75.4 & \ \ \textbf{65.9}$^*$ & \ \ \textbf{55.2}$^*$ & \ \ \textbf{80.7}$^*$ & \ \ \textbf{65.6}$^*$ \\ \midrule F1 $\Delta$\% & & & +2.6\% & & & +3.8\% \\ \bottomrule \end{tabular} } \caption{ Main settings using the errors from the \video ASR model. Format is similar to \Cref{tab:error-detection}. } \label{tab:another_asr} \vspace{-0.5cm} \end{table} \section{Related Work} \label{sec:related-work} \paragraph{ASR Confidence Scores} are used to evaluate the reliability of recognition results \cite{CONFIDENCE_SCORES_SURVEY}.
In modern ASR models, a separate confidence network is usually trained using a held-out dataset \cite{CONFIDENCE_SCORE_RESEARCH_1, CONFIDENCE_SCORE_RESEARCH_3}. \paragraph{Uncertainty Calibration} adapts a model's prediction probabilities to better reflect their true correctness likelihood \cite{CALIBRATION_1}. We provide the Brier Scores (evaluating calibration) for our dataset in \Cref{tab:brier-scores}. AED models, which perform binary classification (\textsc{Error}\xspace or \textsc{NotError}\xspace), do not explicitly use calibration. For example, in \textsc{C-O}\xspace, \textsc{BERT | C}\xspace and \textsc{BERT \& C}\xspace we tune the threshold to an optimal value, and since most calibration techniques preserve the relative ordering of the scores, better calibration will not improve performance. BERT$_C$\xspace and \textsc{RED-ACE}\xspace do not rely on calibrated scores, since deep neural networks can model non-linear relationships \cite{NN}. \paragraph{\textsc{AED}\xspace.} We provide a brief summary of relevant AED papers; for a more thorough review of AED we refer the reader to \citet{ERROR_CORRECTION_REVIEW}. \citet{ERROR_DETECTION_WITH_CONFIDENCE} used data mining models, leveraging features from confidence scores and a linguistic parser. \citet{ERROR_DETECTION_1} used logistic regression with features extracted from confusion networks. \citet{ERROR_DETECTION_2} used a Markov Chains classifier. \citet{ERROR_DETECTION_3} focused on spoken translation, using confidence from a machine translation model, posteriors from an entity detector and a word boundary detector. Modern Transformer-based approaches have not addressed the \textsc{AED}\xspace task directly. A few attempts were made to apply Transformers to ASR \emph{error correction}, using sequence-to-sequence models to map directly between the ASR hypothesis and the correct (reference) transcription \cite{PREV_SEQ2SEQ_ERROR_CORRECTION_1, PREV_SEQ2SEQ_ERROR_CORRECTION_2, FASTCORRECT2, FASTCORRECT}.
To the best of our knowledge, our work is the first to address \textsc{AED}\xspace using the Transformer and to introduce a representation for ASR confidence scores in a Transformer-based ASR post-processing model. \begin{table}[t] \small \centering \scalebox{0.8}{ \begin{tabular}{l|ccc|ccc} \toprule & \multicolumn{3}{c|}{\other \ $\rightarrow$ \clean} & \multicolumn{3}{c}{\clean \ $\rightarrow$ \other} \\ & R & P & F1 & R & P & F1 \\ \midrule BERT & 61.2 & 73.5 & 66.8 & 42.9 & \textbf{82.2} & 56.4 \\ \textsc{RED-ACE}\xspace & \ \ \textbf{62.8}$^*$ & \ \ \textbf{75.8}$^*$ & \ \ \textbf{68.7}$^*$ & \ \ \textbf{47.7}$^*$ & \ \ 79.8$^*$ & \ \ \textbf{59.7}$^*$ \\ \midrule F1 $\Delta$\% & & & +2.8\% & & & +5.9\% \\ \bottomrule \end{tabular} } \caption{ Robustness settings using the errors from the \video ASR model. Format is similar to \Cref{tab:error-detection-robustness}. } \label{tab:error-detection-robustness-video} \vspace{-0.5cm} \end{table} \section{Conclusion} \label{sec:conclusion} We introduced \ace \ \textsc{RED-ACE}\xspace, an approach for embedding ASR word-level confidence scores into a Transformer-based ASR error detector. \textsc{RED-ACE}\xspace jointly encodes the scores and the transcription hypothesis into a contextualized representation. Our experiments demonstrated that the ASR word-level confidence scores are useful on top of the transcription hypothesis text, yet it is not trivial to effectively combine these signals. We showed that performing this combination using RED-ACE leads to significant performance gains, as well as increased robustness to changes in the audio quality, which can be crucial for real-world applications. In addition, we published a novel AED dataset that allows researchers to train and evaluate AED models without the need to run ASR models. It also ensures the full reproducibility of our results in case the Google Cloud models change over time.
In future work, we would like to leverage additional signals from the ASR model (such as alternative hypotheses), as well as explore the benefits of confidence scores for \emph{error correction} models. \section{Limitations} \paragraph{Limitations} Our approach does not account for ASR errors where the ASR system simply drops output words. However, it is not clear whether those cases are of practical use for an AED application that highlights incorrect words in the hypothesis, as in this case there is nothing to highlight. More specifically, our approach does not consider \emph{isolated} deletions. To illustrate this, let us first consider an example in which two words were transcribed as one word, meaning that one word was omitted in the transcription. For example, if \emph{"a \underline{very big} cat"} was transcribed as \emph{"a \underline{small} cat"}. An \textsc{AED}\xspace application would ideally highlight the word \emph{"\underline{small}"} as a transcription error. This case is actually covered by our approach, even though one word is omitted in the transcription, because when creating the \textsc{AED}\xspace dataset we label \emph{"small"} as an error and train the model accordingly (details in \cref{sec:dataset}). The cases that are not covered are those where the ASR model omits words while all the surrounding words are transcribed correctly. For example, \emph{"a \underline{very} big cat"} that was transcribed as \emph{"a big cat"}. In this case, all the words in the transcription hypothesis are correct and our approach is not expected to discover any error. We chose not to cover those cases as it is not clear whether they are useful for an error detection application, which usually needs to highlight incorrect words in the hypothesis. In addition, ignoring those cases is in line with previous work \cite{ERROR_DETECTION_WITH_CONFIDENCE}.
Finally, our analysis showed that those cases are extremely rare; for example, in \clean they occur in only $0.37\%$ of the words. \paragraph{Risks} A possible risk posed by an AED system could be caused by over-reliance on it. Without AED, the entire output of an ASR system may be manually verified; with AED, only the parts of the output flagged by the AED model may be verified, so errors that the AED system did not find may remain. \section*{Acknowledgements} We thank the reviewers for their valuable suggestions for improving this work. We would also like to thank Gal Elidan, Idan Szpektor, Eric Malmi, Yochai Blau, Amir Feder, Andrew Rosenberg, Françoise Beaufays and Avinatan Hassidim for reviewing the paper and providing useful feedback. \section{Appendix} \label{sec:appendix} \subsection{Implementation Details} \label{impl:details} \paragraph{Training.} We fine-tune our BERT-based \cite{BERT} model with a batch size of 512\footnote{We choose the best among 128, 512 and 1024, based on tagging accuracy on the development set.}, a weight decay of 0.01, and a learning rate of 3e-5\footnote{We choose the best among 5e-5, 4e-5, 3e-5, and 2e-5, based on tagging accuracy on the development set.}. The maximum input length is set to 128 tokens. We pad shorter sequences and truncate longer ones to the maximum input length. We use the cross-entropy loss function, optimizing the parameters with the AdamW optimizer. We train for a maximum of 500 epochs and choose the checkpoint with the maximum tagging accuracy on the development set.\footnote{For \textsc{RED-ACE}\xspace the tagging accuracy was 95.4 on \clean and 89.7 on \other.} The best checkpoint was found at epochs 100-150 after approximately 8 hours of training time. All models were trained on TPUs (4x4). BERT-base has 110 million parameters; the confidence embeddings of \textsc{RED-ACE}\xspace add approximately 10K additional parameters.
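The added parameter count can be checked directly: 10 confidence bins plus a dedicated bin for special tokens (\S \ref{sec:effect_of_binning}), each mapped to a vector of BERT-base's hidden size 768, give $11 \times 768 = 8{,}448$ weights, consistent with the reported $\sim$10K. A minimal sketch of the extra embedding table follows (the initialization scale is an assumption):

```python
import random

HIDDEN, N_BINS = 768, 10   # BERT-base hidden size; 10 confidence bins
SPECIAL_BIN = N_BINS       # dedicated bin for [CLS]/[SEP] (no score)

# One learned vector per bin plus the special bin: 11 * 768 = 8,448
# extra parameters, consistent with the reported ~10K.
rng = random.Random(0)
confidence_embedding = [[rng.gauss(0, 0.02) for _ in range(HIDDEN)]
                        for _ in range(N_BINS + 1)]

def bin_index(score):
    """Equal-width binning of a confidence score in [0, 1]."""
    return min(int(score * N_BINS), N_BINS - 1)

def embed_token(token_embedding, score=None):
    """RED-ACE-style input: the token embedding plus the embedding of the
    token's quantized confidence score (special bin when score is None)."""
    b = SPECIAL_BIN if score is None else bin_index(score)
    return [t + c for t, c in zip(token_embedding, confidence_embedding[b])]
```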
The confidence embedding matrix is randomly initialized with a truncated normal distribution\footnote{\url{https://www.tensorflow.org/api_docs/python/tf/keras/initializers/TruncatedNormal}}. If a single word is split into several tokens during BERT's tokenization, all the corresponding tokens get the confidence score of the original word. To predict word-level errors (used throughout the paper), we treat a word as an error if one of its tokens was tagged as an error by the model. To predict span-level errors (reported for completeness in \Cref{tab:span-level}), we treat every sequence of errors as one error span and every sequence of correct words as a correct span. \paragraph{Binning.} \label{sec:effect_of_binning} Table \ref{tab:binning} presents results for different binning algorithms and bin sizes. As binning algorithms we use: (1) simple equal-width binning and (2) quantile-based discretization (equal-sized buckets). We note that there is no significant difference between the results. In our main experiments we used equal-width binning with 10 bins. For special tokens,\footnote{[CLS] and [SEP] in case of BERT.} which do not have confidence scores, we chose to allocate a dedicated bin. \paragraph{Statistical Significance Test.} In \cref{tab:error-detection}, in addition to the main results, we provide statistical significance test results. For this purpose, we pseudo-randomly shuffle all words in our test set, split them up into 100 approximately equally sized subsets, and compute recall, precision and F1 for each of them for the baseline and \textsc{RED-ACE}\xspace models. We then apply Student's paired t-test with $p < 0.05$ to these sets of metrics. To determine statistical significance in F1 $\Delta$\% between different setups evaluated on the same data set, F1 $\Delta$\% is computed for each of the given subsets, and the same significance test is applied to the resulting sets of F1 $\Delta$\% between the two setups.
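The test statistic itself can be sketched as follows; only the t statistic is shown, since in practice the p-value is read from Student's t distribution with $n-1$ degrees of freedom (e.g. via \texttt{scipy.stats.ttest\_rel}):

```python
import math
import statistics

def paired_t_statistic(metric_a, metric_b):
    """t statistic of a paired t-test over per-subset metric values,
    e.g. the F1 of two models on each of the ~100 test-set subsets.
    The p-value would be obtained from Student's t distribution with
    len(diffs) - 1 degrees of freedom."""
    diffs = [a - b for a, b in zip(metric_a, metric_b)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
```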
\begin{table}[t] \small \centering \begin{tabular}{l|l|ccc} \toprule Binning algorithm & \# Bins & R & P & F1 \\ \midrule \multirow{3}{*}{Equal width bins} & 10 & \textbf{64.1} & 79.9 & \textbf{71.1} \\ & 100 & 62.5 & 80.5 & 70.4 \\ & 1000 & 63.2 & 80.7 & 70.9 \\ \midrule Equal size bins & 10 & 63.0 & \textbf{81.5} & \textbf{71.1} \\ \bottomrule \end{tabular} \caption{ Effect of different binning strategies (\other). } \label{tab:binning} \end{table} \begin{table}[t] \small \centering \begin{tabular}{l|l|r|r} \toprule Pool & Subset Name & Audio Hours & \# Examples \\ \midrule \multirow{4}{*}{Clean} & \emph{train-clean-100} & 100.6 & 28,539 \\ & \emph{train-clean-360} & 363.6 & 104,014 \\ & \emph{dev-clean} & 5.4 & 2,703 \\ & \emph{test-clean} & 5.4 & 2,620 \\ \midrule \multirow{3}{*}{Other} & \emph{train-other-500} & 496.7 & 148,688 \\ & \emph{dev-other} & 5.3 & 2,864 \\ & \emph{test-other} & 5.1 & 2,939 \\ \bottomrule \end{tabular} \caption{ LibriSpeech corpus subsets statistics. } \label{tab:librispeech_subsets} \vspace{-0.2cm} \end{table} \subsection{Published \textsc{AED}\xspace Dataset} \label{appendix:published_dataset} As described in \S \ref{sec:dataset}, we generate our own \textsc{AED}\xspace dataset. To this end, we transcribe the LibriSpeech corpus using two models from the Google Cloud Speech-to-Text API.\footnote{\url{https://cloud.google.com/speech-to-text}} We choose the \default model as our main model and the \video model as the additional model.\footnote{\url{https://cloud.google.com/speech-to-text/docs/basics\#select-model}} We also enable word-level confidence in the API.\footnote{\url{https://cloud.google.com/speech-to-text/docs/word-confidence\#word-level_confidence}} Our submission includes the \textsc{AED}\xspace dataset as well as the predictions of our models on the test sets. We hope that our dataset will help future researchers and encourage them to work on \textsc{AED}\xspace.
\paragraph{The LibriSpeech Corpus Details.} \label{appendix:libriSpeech_corpus} We provide here additional details about the LibriSpeech corpus.\footnote{\url{https://www.openslr.org/12/}} The corpus contains approximately 1000 hours of English speech from read audio books. The corpus contains \emph{clean} and \emph{other} pools. The training data is split into three subsets: \emph{train-clean-100}, \emph{train-clean-360} and \emph{train-other-500}, with approximate sizes of 100, 360 and 500 hours respectively. Each pool also contains a development set and a test set, each with approximately 5 hours of audio. Full data split details can be seen in \cref{tab:librispeech_subsets}. We note that the \#Examples differs slightly from the numbers in our dataset (see \cref{tab:dataset-stats}). When transcribing with the Google Cloud API, we occasionally reached a quota limit and a negligible number of examples was not transcribed successfully (up to 2\% per split). The \clean pool contains two training sets; we used the larger one (\emph{train-clean-360}) in our dataset. \begin{figure}[t] \centering \includegraphics[width=0.91\columnwidth]{figures/dataset_example.png} \caption{ A single example from our AED dataset. } \label{fig:dataset_example} \end{figure} \paragraph{Annotation Description.} A single example from our \textsc{AED}\xspace dataset can be seen in \cref{fig:dataset_example}. The annotation contains the ASR hypothesis words, the corresponding word-level confidence scores and the \textsc{Error}\xspace or \textsc{NotError}\xspace label. \paragraph{License.} This data as well as the underlying LibriSpeech ASR corpus are licensed under a Creative Commons Attribution 4.0 International License.\footnote{\url{http://creativecommons.org/licenses/by/4.0/}}
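Schematically, an annotated example as described above bundles three word-aligned sequences. The field names below are hypothetical and need not match the released dataset's actual schema:

```python
import json

# Hypothetical field names; illustrative only, not the dataset's real schema.
example = {
    "hypothesis": ["a", "small", "cat"],          # ASR hypothesis words
    "confidence": [0.98, 0.41, 0.95],             # word-level confidence scores
    "labels": ["NotError", "Error", "NotError"],  # AED labels
}

# The three sequences are aligned word by word.
assert (len(example["hypothesis"])
        == len(example["confidence"])
        == len(example["labels"]))

print(json.dumps(example, indent=2))
```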
\begin{table}[t] \small \centering \begin{tabular}{ccl} \toprule \multicolumn{1}{c}{ASR Model} & \multicolumn{1}{c}{Pool} & \multicolumn{1}{c}{Brier Score} \\ \midrule \multirow{2}{*}{\emph{default}} & \multirow{1}{*}{\clean} & 0.069 \\ \cmidrule{2-3} & \multirow{1}{*}{\other} & 0.142 \\ \midrule \multirow{2}{*}{\emph{video}} & \multirow{1}{*}{\clean} & 0.06 \\ \cmidrule{2-3} & \multirow{1}{*}{\other} & 0.1 \\ \bottomrule \end{tabular} \caption{ Brier Scores (evaluating confidence scores calibration, lower is better) for our dataset. } \label{tab:brier-scores} \end{table} \makeatletter \setlength{\@fptop}{0pt} \makeatother \begin{table}[t] \small \centering \scalebox{0.83}{ \begin{tabular}{l|ccc|ccc} \toprule & \multicolumn{3}{c|}{ \clean} & \multicolumn{3}{c}{ \other} \\ & R & P & F1 & R & P & F1 \\ \midrule \textsc{C-O}\xspace & 31.0 & 27.5 & 29.1 & 23.4 & 20.2 & 21.7 \\ \textsc{BERT-MLM}\xspace & 27.4 & 12.3 & 17.0 & 22.2 & 11.6& 15.2 \\ \textsc{BERT}\xspace & 47.1 & 59.6 & 52.6 & 37.1 & 46.6 & 41.3 \\ \textsc{BERT \& C}\xspace & 44.5 & 55.5 & 49.4 & 35.5 & 43.7 & 39.2 \\ \textsc{BERT | C}\xspace & 48.9 & 51.0 & 49.9 & 40.6 & 40.2 & 40.4 \\ BERT$_C$\xspace & 45.4 & 59.9 & 51.7 & 37.5 & 48.0 & 42.1 \\ \midrule \textsc{RED-ACE}\xspace & \textbf{49.4} & \ \ \textbf{63.6}$^*$ & \ \ \textbf{55.6}$^*$ & \ \ \textbf{42.1}$^*$ & \ \ \textbf{50.9}$^*$ & \ \ \textbf{46.1}$^*$ \\ \midrule F1 $\Delta$\% & & & +5.4\% & & & +9.5\% \\ \bottomrule \end{tabular} } \caption{ Span-level results for the main settings using the errors from the \default ASR model. The format is similar to \Cref{tab:error-detection}. } \label{tab:span-level} \end{table} \section{Results} \label{sec:results} \Cref{tab:error-detection} presents our main results, evaluating the models on the \emph{main settings} using errors from the main (\default) ASR model. \Cref{tab:error-detection-robustness} presents the results on the \emph{robustness settings}, also using errors from the main ASR model. 
The low F1 of \mbox{\textsc{C-O}\xspace} suggest that the \textsc{ASR}\xspace confidence has low effectiveness without textual signal. The low F1 of \textsc{BERT-MLM}\xspace indicates that supervised training on real transcription errors is crucial. We next observe that \textsc{BERT \& C}\xspace performs worse than \textsc{BERT}\xspace on all metrics. When comparing \mbox{\textsc{BERT | C}\xspace} to \textsc{BERT}\xspace we observe the expected increase in recall (\textsc{BERT}\xspace's errors are a subset of the errors from \textsc{BERT | C}\xspace) and a decrease in precision, F1 decreases on \clean and increases on \other. The results on BERT$_C$\xspace are particularly surprising. Similarly to \textsc{RED-ACE}\xspace, BERT$_C$\xspace trains \textsc{BERT}\xspace \emph{jointly} with the scores. However, unlike \textsc{RED-ACE}\xspace, BERT$_C$\xspace performs worse than \textsc{BERT}\xspace. This demonstrates the effectiveness and importance of our modeling approach, that represents the scores using a learned dense embedding vectors. As \textsc{RED-ACE}\xspace is the only method that successfully combines the scores with text, we focus the rest of the analysis on comparing it to the text-based \textsc{BERT}\xspace tagger. In the \emph{main settings} (\Cref{tab:error-detection}), \textsc{RED-ACE}\xspace consistently outperforms BERT on all evaluation metrics in both pools. This demonstrates the usefulness of the confidence signal on top of the textual input, as well as the effectiveness of \textsc{RED-ACE}\xspace in combining those signals. \textsc{RED-ACE}\xspace's F1 $\Delta$\% on \clean is lower than on \other. This can be attributed to the fact that the error rate in \clean is twice lower than in \other (\Cref{tab:dataset-stats}), which means that the model is exposed to fewer errors during training. 
\begin{table}[t] \small \centering \scalebox{0.8}{ \begin{tabular}{l|ccc|ccc} \toprule & \multicolumn{3}{c|}{\other \ $\rightarrow$ \clean} & \multicolumn{3}{c}{\clean \ $\rightarrow$ \other} \\ & R & P & F1 & R & P & F1 \\ \midrule BERT & 64.3 & 71.9 & 67.9 & 47.1 & 80.3 & 59.4 \\ \textsc{RED-ACE}\xspace & \ \ \textbf{67.9}$^*$ & \ \ \textbf{77.0}$^*$ & \ \ \textbf{72.2}$^*$ & \ \ \textbf{53.7}$^*$ & \ \ \textbf{83.3}$^*$ & \ \ \textbf{65.3}$^*$ \\ \midrule F1 $\Delta$\% & & & +6.3\% & & & +9.9\% \\ \bottomrule \end{tabular} } \caption{ Robustness settings with the \default ASR model (\Cref{tab:dataset-stats}). \other \ $\rightarrow$ \clean means train on \other and eval on \clean. Format is similar to \Cref{tab:error-detection}. } \label{tab:error-detection-robustness} \vspace{-0.5cm} \end{table} Finally, we analyze the \emph{robustness settings} from \Cref{tab:error-detection-robustness}. We first note that \textsc{RED-ACE}\xspace outperforms BERT in both settings, indicating its robustness across different settings, and that it can remain effective with recording quality differences between train and test time. When observing the performance on the \clean test set, we observe that training \textsc{AED}\xspace models on \other instead of \clean, leads to improvement in F1. This can be attributed to the higher error rate and larger number of training examples in \other (see \Cref{tab:dataset-stats}), which exposes the models to larger amount of errors during training. The F1 $\Delta$\% on \other \ $\rightarrow$ \clean (\Cref{tab:error-detection-robustness}) is comparable to \clean (\Cref{tab:error-detection}), with a statistically insignificant improvement. An opposite trend can be seen on the \other test set. The performance of models that were trained on \clean instead of \other deteriorates. Yet, \textsc{RED-ACE}\xspace's relative performance drop is smaller than BERT's. 
\textsc{RED-ACE}\xspace drops by $8.2\%$ (from $71.1$ to $65.3$) while BERT by $10.3\%$ (from $66.2$ to $59.4$). This is also demonstrated by the statistically significant increase in F1 $\Delta$\%, from $7.4\%$ in \other \ $\rightarrow$ \other to $9.9\%$ in \clean \ $\rightarrow$ \other. This serves as additional evidence for the robustness of \textsc{RED-ACE}\xspace. We also note that \clean \ $\rightarrow$ \other is the most challenging setting, with BERT's F1 significantly lower than the other 3 settings, meaning that \textsc{RED-ACE}\xspace shows the largest improvement (F1 $\Delta$\%) in the hardest setting. \paragraph{Generalization Across ASR Models.} As discussed in \S \ref{sec:dataset}, to ensure that RED-ACE is applicable to not only one specific ASR model, we repeat our experiments using a different ASR model. The results are presented in \Cref{tab:another_asr} and \Cref{tab:error-detection-robustness-video}. \textsc{RED-ACE}\xspace outperforms BERT in all settings, with statistically significant F1 improvements, further highlighting \textsc{RED-ACE}\xspace robustness. \begin{table}[t] \small \centering \scalebox{0.82}{ \begin{tabular}{l|ccc|ccc} \toprule & \multicolumn{3}{c|}{\clean} & \multicolumn{3}{c}{\other} \\ & R & P & F1 & R & P & F1 \\ \midrule BERT & 54.9 & \textbf{77.2} & 64.2 & 52.7 & 78.8 & 63.2 \\ \textsc{RED-ACE}\xspace & \ \ \textbf{58.6}$^*$ & 75.4 & \ \ \textbf{65.9}$^*$ & \ \ \textbf{55.2}$^*$ & \ \ \textbf{80.7}$^*$ & \ \ \textbf{65.6}$^*$ \\ \midrule F1 $\Delta$\% & & & +2.6\% & & & +3.8\% \\ \bottomrule \end{tabular} } \caption{ Main settings using the errors from the \video ASR model. Format is similar to \Cref{tab:error-detection}. 
} \label{tab:another_asr} \vspace{-0.5cm} \end{table} \section{Appendix} \label{sec:appendix} \subsection{Implementation Details} \label{impl:details} \paragraph{Training.} We fine-tune our BERT-based \cite{BERT} model with a batch size of 512\footnote{We choose the best among 128, 512 and 1024, based on tagging accuracy on the development set.}, a weight decay of 0.01, and a learning rate of 3e-5\footnote{We choose the best among 5e-5, 4e-5, 3e-5, and 2e-5, based on tagging accuracy on the development set.}. The maximum input length is set to 128 tokens. We pad shorter sequences and truncate longer ones to the maximum input length. We use the cross-entropy loss function, optimizing the parameters with the AdamW optimizer. We train for a maximum of 500 epochs and choose the checkpoint with the maximum tagging accuracy on the development set.\footnote{For \textsc{RED-ACE}\xspace the tagging accuracy was 95.4 on \clean and 89.7 on \other.} The best checkpoint was found at epochs 100-150 after approximately 8 hours of training time. All models were trained on TPUs (4x4). BERT-base has 110 million parameters; the confidence embeddings of \textsc{RED-ACE}\xspace add approximately 10k additional parameters. The confidence embedding matrix is randomly initialized from a truncated normal distribution\footnote{\url{https://www.tensorflow.org/api_docs/python/tf/keras/initializers/TruncatedNormal}}. If a single word is split into several tokens during BERT's tokenization, all the corresponding tokens get the confidence score of the original word. To predict word-level errors (used throughout the paper), we treat a word as an error if at least one of its tokens was tagged as an error by the model. To predict span-level errors (reported for completeness in \Cref{tab:span-level}), we treat every sequence of errors as one error-span and every sequence of correct words as a correct-span.
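The word-to-token score propagation and token-to-word error aggregation described above can be sketched as follows (a minimal illustration; the function names and the toy tokenizer in the usage example are ours, not part of our actual implementation):

```python
def propagate_confidence(words, scores, tokenize):
    """Each sub-word token inherits the confidence score of its source word."""
    token_scores = []
    for word, score in zip(words, scores):
        token_scores.extend([score] * len(tokenize(word)))
    return token_scores


def word_level_errors(words, token_tags, tokenize):
    """A word is predicted as an error if any of its tokens is tagged 1 (error)."""
    errors, i = [], 0
    for word in words:
        n = len(tokenize(word))
        errors.append(1 if any(token_tags[i:i + n]) else 0)
        i += n
    return errors
```

For example, with a toy tokenizer that splits words longer than three characters in two, a word whose second sub-word token is tagged as an error is reported as an erroneous word.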
\paragraph{Binning.} \label{sec:effect_of_binning} Table \ref{tab:binning} presents results for different binning algorithms and bin sizes. For binning algorithms we use: (1) simple equal-width binning and (2) quantile-based discretization (equal-sized buckets). We note that there is no significant difference between the results. In our main experiments we used equal-width binning with 10 bins. For special tokens,\footnote{[CLS] and [SEP] in case of BERT.} which do not have confidence scores, we allocate a dedicated bin. \paragraph{Statistical Significance Test.} In \cref{tab:error-detection}, in addition to the main results, we provide statistical significance test results. For this purpose we pseudo-randomly shuffle all words in our test set, split them up into 100 approximately equally sized subsets, and compute recall, precision and F1 for each of them for the baseline and \textsc{RED-ACE}\xspace models. We then apply the Student's paired t-test with $p < 0.05$ to these sets of metrics. To determine statistical significance in F1 $\Delta$\% between different setups evaluated on the same data set, F1 $\Delta$\% is computed for each of the given subsets, and the same significance test is applied to the resulting sets of F1 $\Delta$\% between two setups. \begin{table}[t] \small \centering \begin{tabular}{l|l|cccc} \toprule Binning algorithm & \# Bins & R & P & F1 \\ \midrule \multirow{3}{*}{Equal width bins} & 10 & \textbf{64.1} & 79.9 & \textbf{71.1} \\ & 100 & 62.5 & 80.5 & 70.4 \\ & 1000 & 63.2 & 80.7 & 70.9 \\ \midrule Equal size bins & 10 & 63.0 & \textbf{81.5} & \textbf{71.1} \\ \bottomrule \end{tabular} \caption{ Effect of different binning strategies (\other).
} \label{tab:binning} \end{table} \begin{table}[t] \small \centering \begin{tabular}{l|l|r|r} \toprule Pool & Subset Name & Audio Hours & \# Examples \\ \midrule \multirow{4}{*}{Clean} & \emph{train-clean-100} & 100.6 & 28,539 \\ & \emph{train-clean-360} & 363.6 & 104,014 \\ & \emph{dev-clean} & 5.4 & 2,703 \\ & \emph{test-clean} & 5.4 & 2,620 \\ \midrule \multirow{3}{*}{Other} & \emph{train-other-500} & 496.7 & 148,688 \\ & \emph{dev-other} & 5.3 & 2,864 \\ & \emph{test-other} & 5.1 & 2,939 \\ \bottomrule \end{tabular} \caption{ LibriSpeech corpus subset statistics. } \label{tab:librispeech_subsets} \vspace{-0.2cm} \end{table} \subsection{Published \textsc{AED}\xspace Dataset} \label{appendix:published_dataset} As described in \S \ref{sec:dataset}, we generate our own \textsc{AED}\xspace dataset. To this end we transcribe the LibriSpeech corpus using two models from the Google Cloud Speech-to-Text API.\footnote{\url{https://cloud.google.com/speech-to-text}} We choose the \default model as our main model and the \video model as the additional model\footnote{\url{https://cloud.google.com/speech-to-text/docs/basics\#select-model}}. We also enable word-level confidence in the API.\footnote{\url{https://cloud.google.com/speech-to-text/docs/word-confidence\#word-level_confidence}} Our submission includes the \textsc{AED}\xspace dataset as well as the predictions of our models on the test sets. We hope that our dataset will help future researchers and encourage them to work on \textsc{AED}\xspace. \paragraph{The LibriSpeech Corpus Details.} \label{appendix:libriSpeech_corpus} We provide here additional details about the LibriSpeech corpus.\footnote{\url{https://www.openslr.org/12/}} The corpus contains approximately 1000 hours of English speech from read audio books. The corpus contains \emph{clean} and \emph{other} pools.
The training data is split into three subsets: \emph{train-clean-100}, \emph{train-clean-360} and \emph{train-other-500}, with approximate sizes of 100, 360 and 500 hours respectively. Each pool also contains development and test sets with approximately 5 hours of audio each. Full data split details can be seen in \cref{tab:librispeech_subsets}. We note that the \#Examples is slightly different from the numbers in our dataset (see \cref{tab:dataset-stats}). When transcribing with the Google Cloud API, we occasionally reached a quota limit and a negligible number of examples were not transcribed successfully (up to 2\% per split). The \clean pool contains two training sets; we used the larger one in our dataset (\emph{train-clean-360}). \begin{figure}[t] \centering \includegraphics[width=0.91\columnwidth]{figures/dataset_example.png} \caption{ A single example from our AED dataset. } \label{fig:dataset_example} \end{figure} \paragraph{Annotation Description.} A single example from our \textsc{AED}\xspace dataset can be seen in \cref{fig:dataset_example}. The annotation contains the ASR hypothesis words, the corresponding word-level confidence scores and the \textsc{Error}\xspace or \textsc{NotError}\xspace label. \paragraph{License.} This data as well as the underlying LibriSpeech ASR corpus are licensed under a Creative Commons Attribution 4.0 International License\footnote{\url{http://creativecommons.org/licenses/by/4.0/}}. \begin{table}[t] \small \centering \begin{tabular}{ccl} \toprule \multicolumn{1}{c}{ASR Model} & \multicolumn{1}{c}{Pool} & \multicolumn{1}{c}{Brier Score} \\ \midrule \multirow{2}{*}{\emph{default}} & \multirow{1}{*}{\clean} & 0.069 \\ \cmidrule{2-3} & \multirow{1}{*}{\other} & 0.142 \\ \midrule \multirow{2}{*}{\emph{video}} & \multirow{1}{*}{\clean} & 0.06 \\ \cmidrule{2-3} & \multirow{1}{*}{\other} & 0.1 \\ \bottomrule \end{tabular} \caption{ Brier Scores (evaluating confidence score calibration, lower is better) for our dataset.
} \label{tab:brier-scores} \end{table} \makeatletter \setlength{\@fptop}{0pt} \makeatother \begin{table}[t] \small \centering \scalebox{0.83}{ \begin{tabular}{l|ccc|ccc} \toprule & \multicolumn{3}{c|}{ \clean} & \multicolumn{3}{c}{ \other} \\ & R & P & F1 & R & P & F1 \\ \midrule \textsc{C-O}\xspace & 31.0 & 27.5 & 29.1 & 23.4 & 20.2 & 21.7 \\ \textsc{BERT-MLM}\xspace & 27.4 & 12.3 & 17.0 & 22.2 & 11.6 & 15.2 \\ \textsc{BERT}\xspace & 47.1 & 59.6 & 52.6 & 37.1 & 46.6 & 41.3 \\ \textsc{BERT \& C}\xspace & 44.5 & 55.5 & 49.4 & 35.5 & 43.7 & 39.2 \\ \textsc{BERT | C}\xspace & 48.9 & 51.0 & 49.9 & 40.6 & 40.2 & 40.4 \\ BERT$_C$\xspace & 45.4 & 59.9 & 51.7 & 37.5 & 48.0 & 42.1 \\ \midrule \textsc{RED-ACE}\xspace & \textbf{49.4} & \ \ \textbf{63.6}$^*$ & \ \ \textbf{55.6}$^*$ & \ \ \textbf{42.1}$^*$ & \ \ \textbf{50.9}$^*$ & \ \ \textbf{46.1}$^*$ \\ \midrule F1 $\Delta$\% & & & +5.4\% & & & +9.5\% \\ \bottomrule \end{tabular} } \caption{ Span-level results for the main settings using the errors from the \default ASR model. The format is similar to \Cref{tab:error-detection}. } \label{tab:span-level} \end{table} \section{Conclusion} \label{sec:conclusion} We introduced \ace \ \textsc{RED-ACE}\xspace, an approach for embedding ASR word-level confidence scores into a Transformer-based ASR error detector. \textsc{RED-ACE}\xspace jointly encodes the scores and the transcription hypothesis into a contextualized representation. Our experiments demonstrated that the ASR word-level confidence scores are useful on top of the transcription hypothesis text, yet that it is not trivial to effectively combine these signals. We showed that combining these signals using RED-ACE leads to significant performance gains, as well as increased robustness to changes in the audio quality, which can be crucial for real-world applications. In addition, we published a novel AED dataset that allows researchers to train and evaluate AED models without the need to run ASR models.
It also ensures the full reproducibility of our results in case the Google Cloud models change over time. In future work, we would like to leverage additional signals from the ASR model (such as alternative hypotheses), as well as explore the benefits of confidence scores for \emph{error correction} models. \section{Related Work} \label{sec:related-work} \paragraph{ASR Confidence Scores} are used to evaluate the reliability of recognition results \cite{CONFIDENCE_SCORES_SURVEY}. In modern ASR models, a separate confidence network is usually trained using a held-out dataset \cite{CONFIDENCE_SCORE_RESEARCH_1, CONFIDENCE_SCORE_RESEARCH_3}. \paragraph{Uncertainty Calibration} adapts a model's prediction probabilities to better reflect their true correctness likelihood \cite{CALIBRATION_1}. We provide the Brier Scores (evaluating calibration) for our dataset in \Cref{tab:brier-scores}. AED models, which perform a binary classification (\textsc{Error}\xspace or \textsc{NotError}\xspace), do not explicitly use calibration. For example, in \textsc{C-O}\xspace, \textsc{BERT | C}\xspace and \textsc{BERT \& C}\xspace we tune the threshold to an optimal value, and since most calibration techniques preserve the relative ordering of the scores, better calibration will not improve performance. BERT$_C$\xspace and \textsc{RED-ACE}\xspace do not rely on calibrated scores, since deep neural networks can model non-linear relationships \cite{NN}. \paragraph{\textsc{AED}\xspace.} We provide a brief summary of relevant AED papers; for a more thorough review of AED we refer the reader to \citet{ERROR_CORRECTION_REVIEW}. \citet{ERROR_DETECTION_WITH_CONFIDENCE} used data mining models, leveraging features from confidence scores and a linguistics parser. \citet{ERROR_DETECTION_1} used logistic regression with features extracted from confusion networks. \citet{ERROR_DETECTION_2} used a Markov Chains classifier.
\citet{ERROR_DETECTION_3} focused on spoken translation, using confidence scores from a machine translation model, posteriors from an entity detector and a word boundary detector. Modern Transformer-based approaches have not addressed the \textsc{AED}\xspace task directly. A few attempts were made to apply Transformers to ASR \emph{error correction}, using sequence-to-sequence models to map directly between the ASR hypothesis and the correct (reference) transcription \cite{PREV_SEQ2SEQ_ERROR_CORRECTION_1, PREV_SEQ2SEQ_ERROR_CORRECTION_2, FASTCORRECT2, FASTCORRECT}. To the best of our knowledge, our work is the first to address \textsc{AED}\xspace using the Transformer and to introduce a representation for ASR confidence scores in a Transformer-based ASR post-processing model. \begin{table}[t] \small \centering \scalebox{0.8}{ \begin{tabular}{l|ccc|ccc} \toprule & \multicolumn{3}{c|}{\other \ $\rightarrow$ \clean} & \multicolumn{3}{c}{\clean \ $\rightarrow$ \other} \\ & R & P & F1 & R & P & F1 \\ \midrule BERT & 61.2 & 73.5 & 66.8 & 42.9 & \textbf{82.2} & 56.4 \\ \textsc{RED-ACE}\xspace & \ \ \textbf{62.8}$^*$ & \ \ \textbf{75.8}$^*$ & \ \ \textbf{68.7}$^*$ & \ \ \textbf{47.7}$^*$ & \ \ 79.8$^*$ & \ \ \textbf{59.7}$^*$ \\ \midrule F1 $\Delta$\% & & & +2.8\% & & & +5.9\% \\ \bottomrule \end{tabular} } \caption{ Robustness settings using the errors from the \video ASR model. Format is similar to \Cref{tab:error-detection-robustness}. } \label{tab:error-detection-robustness-video} \vspace{-0.5cm} \end{table} \section{Dataset Creation and Annotation} \label{sec:dataset} To train and evaluate \textsc{AED}\xspace models, we generate a dataset with labeled transcription errors. \paragraph{Labeling of ASR Errors.} We decode audio data using an \textsc{ASR}\xspace model and obtain the transcription hypothesis. Then, we align the hypothesis words with the reference (correct) transcription.
Specifically, we find an edit path between the hypothesis and the reference with the minimum edit distance and obtain a sequence of edit operations (insertions, deletions and substitutions) that can be used to transform the hypothesis into the reference. Every incorrect hypothesis word (i.e., one that needs to be deleted or substituted) is labeled as \textsc{Error}\xspace, the rest are labeled as \textsc{NotError}\xspace. \paragraph{Audio Data Source.} We use the \LibriSpeech corpus \cite{LIBRISPEECH}, containing 1000 hours of transcribed English speech from audio books.\footnote{\url{https://www.openslr.org/12/}} The corpus contains \emph{clean} and \emph{other} pools, where \emph{clean} is of higher recording quality.\footnote{We provide additional details about the corpus in \S \ref{appendix:published_dataset}.} \paragraph{ASR Models.} In this work we focus exclusively on a black-box setting, where the exact implementation details of the ASR and the confidence models are unknown. This setting is particularly relevant since many applications rely on strong performance of black-box ASR models which are exposed as cloud services. We use the Google Cloud Speech-to-Text API as our candidate ASR model.\footnote{\url{https://cloud.google.com/speech-to-text}} In our main experiments we select the \default ASR model.\footnote{\url{https://cloud.google.com/speech-to-text/docs/basics\#select-model}} To ensure the generalization ability of RED-ACE, we repeat our main experiments using a different ASR model; in this case we choose the \video model. Table \ref{tab:dataset-stats} presents the statistics of our dataset. It is notable that the main model's error rate (\default) is about twice as high as the additional model's error rate (\video), which shows that (even though both models are part of the Google Cloud API) this additional ASR model is substantially different from the main ASR model we used.
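The labeling procedure described above can be sketched with Python's standard library (a simplified illustration: \texttt{difflib} uses a longest-matching-block alignment rather than a guaranteed minimum-edit-distance path, and all names below are ours):

```python
import difflib


def label_hypothesis(hyp_words, ref_words):
    """Align the hypothesis to the reference transcription; words kept
    unchanged are NotError, words that would need to be substituted or
    deleted are Error."""
    labels = ["Error"] * len(hyp_words)
    matcher = difflib.SequenceMatcher(a=hyp_words, b=ref_words, autojunk=False)
    for tag, i1, i2, _, _ in matcher.get_opcodes():
        if tag == "equal":
            for i in range(i1, i2):
                labels[i] = "NotError"
    return labels
```

For instance, aligning the hypothesis \emph{"a small cat"} against the reference \emph{"a very big cat"} labels \emph{"small"} as \textsc{Error}\xspace, while for \emph{"a big cat"} (an isolated deletion of \emph{"very"}) every hypothesis word is labeled \textsc{NotError}\xspace, matching the limitation discussed in the Limitations section.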
\paragraph{Data Release.} Since Google Cloud requires a paid subscription and since the underlying ASR models may change over time, we make our dataset publicly available.\footnote{Additional details about the dataset are provided in \S \ref{appendix:libriSpeech_corpus}.} This ensures full reproducibility of our results (in case the ASR models change) and makes it easier for researchers to train and evaluate AED models, removing the need to run inference on paid cloud-based ASR models or train dedicated models in order to transcribe audio. \section{Experimental Setup} \label{sec:experimental-setup} Our experiments examine the informativeness of the confidence scores as well as the effectiveness of RED-ACE in combining them with text. We provide extensive implementation details in \S \ref{impl:details}. \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{figures/pre-recall-plot-both.pdf} \vspace{-0.3cm} \caption{ Threshold tuning process for the \textsc{C-O}\xspace baseline. Models are evaluated using different confidence score thresholds and the threshold that yields the best F1 is chosen. A similar process is performed for \textsc{BERT \& C}\xspace and \textsc{BERT | C}\xspace. For \textsc{BERT-MLM}\xspace we tune the value of $k$. } \label{fig:threshold_plot} \end{figure} \begin{figure}[t] \centering \vspace{-0.1cm} \includegraphics[width=\columnwidth]{figures/BERTC.pdf} \vspace{-0.5cm} \caption{The BERT$_C$\xspace baseline, which modifies the input to the tagger, unlike \textsc{RED-ACE}\xspace which modifies BERT's embeddings. The value of the respective confidence score is appended to the final contextualized representation.} \label{fig:bertc} \vspace{-0.5cm} \end{figure} \subsection{Baselines} \label{sec:models} \paragraph{\textsc{C-O}\xspace (Confidence Only)} Uses the word-level scores from the ASR confidence model directly.
Predicts \textsc{Error}\xspace if the token's confidence score is below a threshold.\footnote{We choose the confidence threshold or $k$ value (in case of \textsc{BERT-MLM}\xspace) with the best F1 on the dev set (\Cref{fig:threshold_plot}).} \paragraph{\textsc{BERT-MLM}\xspace} Masks out the input words one at a time and uses a pre-trained BERT \cite{BERT} as a Masked Language Model (MLM) in order to infill them. Predicts \textsc{Error}\xspace for input words that are not among BERT's top $k$ suggestions.\footnotemark[9] \paragraph{BERT} We fine-tune BERT \cite{BERT} for sequence tagging (only on text, \emph{without} adding \textsc{RED-ACE}\xspace). As Transformers have not been applied to \textsc{AED}\xspace yet, we choose BERT as a pre-trained LM following \citet{BERT_GRAMMAR_ERROR_DETECTION}, who applied it for Grammatical Error Detection (\textsc{GED}\xspace) and achieved the highest performance in the \textsc{NLPTEA-2020} Shared Task \cite{rao2020overview}. \paragraph{BERT \& C} Predicts \textsc{Error}\xspace if BERT predicts \textsc{Error}\xspace \textbf{and} the confidence is below a threshold.\footnotemark[9] \paragraph{BERT | C} Predicts \textsc{Error}\xspace if BERT predicts \textsc{Error}\xspace \textbf{or} the confidence is below a threshold.\footnotemark[9] \paragraph{BERT$_C$} We fine-tune \textsc{BERT}\xspace \emph{jointly} with the confidence scores by concatenating the score value to the token's contextualized representation produced by BERT (directly before it is fed into the sequence tagger). BERT's last hidden layer dimension is increased by 1, and the corresponding value is populated with the token's confidence score. An illustration is provided in \Cref{fig:bertc}. \subsection{Evaluation} \paragraph{Main Settings.} In the main settings we train the models on the \clean and \other training sets and evaluate them on the \clean and \other test sets, respectively.
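The threshold-based baselines from \S \ref{sec:models} can be sketched as follows (a minimal illustration under our own naming; the thresholds are tuned on the development set as described above):

```python
def predict_co(scores, threshold):
    """C-O: predict Error when the confidence score is below the threshold."""
    return [s < threshold for s in scores]


def predict_bert_and_c(bert_preds, scores, threshold):
    """BERT & C: Error only if BERT predicts Error AND the confidence is low."""
    return [b and s < threshold for b, s in zip(bert_preds, scores)]


def predict_bert_or_c(bert_preds, scores, threshold):
    """BERT | C: Error if BERT predicts Error OR the confidence is low."""
    return [b or s < threshold for b, s in zip(bert_preds, scores)]
```

Note that the conjunction trades recall for precision while the disjunction does the opposite, which mirrors the precision/recall trade-offs of \textsc{BERT \& C}\xspace and \textsc{BERT | C}\xspace in \Cref{tab:error-detection}.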
\paragraph{Robustness Settings.} A real-world \textsc{AED}\xspace system should remain effective when the audio stems from different recording qualities. Changes in recording quality can affect the ASR model's error distribution and thus can potentially reduce the effectiveness of the \textsc{AED}\xspace model. As our dataset contains two pools with different recording quality (\Cref{tab:dataset-stats}), we can measure whether \textsc{RED-ACE}\xspace's performance deteriorates when the audio quality of the training data changes. To this end we define the \emph{robustness settings} (\Cref{tab:error-detection-robustness}), where we perform a cross-pool evaluation, evaluating models that were trained on the \clean and \other training sets using the \other and \clean test sets, respectively. \paragraph{Metric.} We measure error detection \emph{Precision (P)}, \emph{Recall (R)} and \emph{F1}. \emph{Recall} measures the percent of real errors that were detected, while \emph{Precision} measures the percent of detected errors that are real errors. We calculate \emph{P} and \emph{R} on the word level. We also report span-level results for the main settings in \Cref{tab:span-level} in the appendix.
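These word-level metrics can be computed as follows (a standard computation; the function and variable names are ours):

```python
def detection_metrics(gold, pred):
    """Word-level precision, recall and F1 for error detection.
    gold and pred are parallel lists of booleans (True = Error)."""
    tp = sum(g and p for g, p in zip(gold, pred))          # detected real errors
    fp = sum((not g) and p for g, p in zip(gold, pred))    # false alarms
    fn = sum(g and (not p) for g, p in zip(gold, pred))    # missed errors
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```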
\begin{table}[t] \small \centering \scalebox{0.83}{ \begin{tabular}{l|ccc|ccc} \toprule & \multicolumn{3}{c|}{ \clean} & \multicolumn{3}{c}{ \other} \\ & R & P & F1 & R & P & F1 \\ \midrule \textsc{C-O}\xspace & 52.1 & 42.5 & 46.8 & 63.5 & 45.6 & 53.1 \\ \textsc{BERT-MLM}\xspace & 58.0 & 26.5 & 36.4 & \textbf{72.7} & 35.9 & 48.1 \\ \textsc{BERT}\xspace & 58.5 & 77.6 & 66.7 & 58.0 & 77.1 & 66.2 \\ \textsc{BERT \& C}\xspace & 55.8 & 75.0 & 64.0 & 55.5 & 75.5 & 64.0 \\ \textsc{BERT | C}\xspace & \textbf{63.3} & 68.1 & 65.6 & 68.1 & 67.1 & 67.6 \\ BERT$_C$\xspace & 51.7 & 78.9 & 66.3 & 58.1 & 78.8 & 66.9 \\ \midrule \textsc{RED-ACE}\xspace & 61.1 & \ \ \textbf{81.9}$^*$ & \ \ \textbf{70.0}$^*$ & 64.1 & \ \ \textbf{79.9}$^*$ & \ \ \textbf{71.1}$^*$ \\ \midrule F1 $\Delta$\% & & & +4.9\% & & & +7.4\% \\ \bottomrule \end{tabular} } \caption{ Main settings using the errors from the \default ASR model (see \Cref{tab:dataset-stats}). R and P stand for Recall and Precision. F1 $\Delta$\% compares \textsc{RED-ACE}\xspace to the BERT baseline. Results with $*$ indicate a statistically significant improvement compared to the strongest baseline. } \label{tab:error-detection} \vspace{-0.5cm} \end{table} \section{Limitations} \paragraph{Limitations} Our approach does not account for ASR errors where the ASR system simply deletes output words. However, it is not clear whether those cases are of practical use for an AED application that highlights incorrect words in the hypothesis, as in this case there is nothing to highlight. More specifically, our approach does not consider \emph{isolated} deletions. To illustrate that, let us first consider an example in which two words were transcribed as one word, meaning that one word was omitted in the transcription. For example, if \emph{"a \underline{very big} cat"} was transcribed as \emph{"a \underline{small} cat"}. An \textsc{AED}\xspace application would ideally highlight the word \emph{"\underline{small}"} as a transcription error.
This case is actually covered by our approach, even though one word is omitted in the transcription, because when creating the \textsc{AED}\xspace dataset we label "small" as an error and train the model accordingly (details in \cref{sec:dataset}). The cases that are not covered are when the ASR model omits words while all the surrounding words are transcribed correctly, for example \emph{"a \underline{very} big cat"} that was transcribed as \emph{"a big cat"}. In this case, all the words in the transcription hypothesis are correct words and our approach is not expected to discover any error. We chose not to cover those cases as it is not clear whether they are useful for an error detection application, which usually needs to highlight incorrect words in the hypothesis. In addition, ignoring those cases is in line with previous work \cite{ERROR_DETECTION_WITH_CONFIDENCE}. Finally, our analysis showed that those cases are extremely rare; for example, in \clean they occur for only $0.37\%$ of the words. \paragraph{Risks} A possible risk posed by an AED system is over-reliance on it. Without AED, the entire output of an ASR system might be manually verified; with AED, only the parts that the AED model flagged may be verified, so errors that the AED system missed could remain. \section*{Acknowledgements} We thank the reviewers for their valuable suggestions for improving this work. We would also like to thank Gal Elidan, Idan Szpektor, Eric Malmi, Yochai Blau, Amir Feder, Andrew Rosenberg, Françoise Beaufays and Avinatan Hassidim for reviewing the paper and providing useful feedback. \section{\ace \ \textsc{RED-ACE}\xspace} \label{sec:method} Following recent trends in NLP, we use a pre-trained Transformer-based language model, leveraging its rich language representation. \textsc{RED-ACE}\xspace is based on a pre-trained BERT \cite{BERT}, adapted to be confidence-aware and further fine-tuned for sequence tagging.
Concretely, our \textsc{AED}\xspace model is a binary sequence tagger that given the ASR output, consisting of the transcription hypothesis words and their corresponding word-level confidence scores, predicts an \textsc{Error}\xspace or \mbox{\textsc{NotError}\xspace} tag for each input token.\footnote{We discuss words to tokens conversion in \S \ref{impl:details}.} Our \textsc{AED}\xspace pipeline is illustrated in \Cref{fig:diagram}. First, we quantize the floating-point confidence scores into integers using a binning algorithm.\footnote{Typical confidence scores range between 0.0 and 1.0. We perform experiments with simple equal-width binning and quantile-based discretization (equal-sized buckets), as well as different bin numbers. More details in \S \ref{impl:details}.} Next, the quantized scores and the transcription text are fed into a confidence-aware BERT tagger. In BERT, each input token has 3 embeddings: token, segment and position.\footnote{We refer the reader to \citet{BERT} for more details.} To adapt BERT to be confidence-aware, we implement an additional dedicated embedding layer, indicating the confidence bin that the input token belongs to. We construct a learned confidence embedding lookup matrix $M \in \mathbb{R}^{B\times H}$, where $B$ is the number of confidence bins and $H$ is the size of BERT's embedding vectors. For a given token, its input representation is constructed by summing the corresponding BERT's embeddings with its confidence embedding (\Cref{fig:method}). This allows the model to learn a dedicated dense representation vector for each confidence bin, as well as naturally combine it with the final contextualized representation of each input token. \begin{figure}[t] \centering \vspace{-0.1cm} \includegraphics[width=\columnwidth]{figures/tagger.pdf} \caption{Our confidence-aware AED model. We use a BERT-based tagger with modifications colored in green.
An additional embedding layer represents the quantized confidence scores.} \label{fig:method} \end{figure} \begin{table}[t] \small \scalebox{0.8}{ \centering \setlength{\tabcolsep}{4pt} \begin{tabular}{llcrrr} \toprule \multicolumn{1}{c}{ASR Model} & \multicolumn{1}{c}{Pool} & \multicolumn{1}{c}{Split} & \multicolumn{1}{c}{\# Examples} & \multicolumn{1}{c}{\# Words} & \multicolumn{1}{c}{\# Errors} \\ \midrule \multirow{6}{*}{\emph{default}} & \multirow{3}{*}{\clean} & Train & 103,895 & 3,574,027 & 357,145 (10.0\%) \\ & & Dev & 2,697 & 54,062 & 5,111 (9.5\%) \\ & & Test & 2,615 & 52,235 & 4,934 (9.4\%) \\ \cmidrule{2-6} & \multirow{3}{*}{\other} & Train & 146,550 & 4,650,779 & 770,553 (16.6\%) \\ & & Dev & 2,809 & 48,389 & 9,876 (20.4\%) \\ & & Test & 2,925 & 50,730 & 10,317 (20.3\%) \\ \midrule \multirow{6}{*}{\emph{video}} & \multirow{3}{*}{\clean} & Train & 104,013 & 3,589,136 & 210,324 (5.9\%) \\ & & Dev & 2,703 & 54,357 & 3,109 (5.7\%) \\ & & Test & 2,620 & 52,557 & 2,963 (5.6\%) \\ \cmidrule{2-6} & \multirow{3}{*}{\other} & Train & 148,678 & 4,810,226 & 148,678 (7.9\%) \\ & & Dev & 2,809 & 50,983 & 5,901 (11.6\%) \\ & & Test & 2,939 & 52,192 & 6,033 (11.6\%) \\ \bottomrule \end{tabular} } \caption{ \textsc{AED}\xspace dataset statistics. } \label{tab:dataset-stats} \vspace{-0.5cm} \end{table} \section{Introduction} \label{sec:intro} Automatic Speech Recognition (ASR) systems transcribe audio signals, consisting of speech, into text. While state-of-the-art ASR systems have reached high transcription quality, training them requires large amounts of data and compute resources. Fortunately, many high-performing systems are available as off-the-shelf cloud services. However, a performance drop can be observed when applying them to specific domains or accents \cite{BLACKBOX_ADAPTATION, PREV_SEQ2SEQ_ERROR_CORRECTION_1}, or when transcribing noisy audio.
Moreover, cloud services usually expose the ASR models as a black box, making it impossible to further fine-tune them. \AEDFULL models are designed to post-process the ASR output in order to detect transcription errors and avoid their propagation to downstream tasks \cite{ERROR_CORRECTION_REVIEW}. \textsc{AED}\xspace models are widely used in interactive systems, to engage the user to resolve the detected errors. For example, AED systems can be found in \emph{Google Docs Voice Typing}, where low-confidence words are underlined, making it easier for users to spot errors and take actions to correct them. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/diagram.png} \vspace{-0.5cm} \caption{ Our \textsc{AED}\xspace pipeline. The confidence scores are quantized and jointly encoded with the transcription text into a contextualized representation. } \label{fig:diagram} \vspace{-0.5cm} \end{figure} Modern NLP models usually build upon the Transformer architecture \cite{Transformer}. However, no Transformer-based AED models have been proposed yet. Recently, the Transformer has been applied to ASR \emph{error correction} \cite{PREV_SEQ2SEQ_ERROR_CORRECTION_1, PREV_SEQ2SEQ_ERROR_CORRECTION_2, FASTCORRECT2, FASTCORRECT}, another ASR post-processing task. These models use only the transcription hypothesis text as input and discard other signals from the ASR model. However, earlier work on \textsc{AED}\xspace (not Transformer-based) has shown the benefits of such signals \cite{ERROR_DETECTION_1, ERROR_DETECTION_2, ERROR_DETECTION_3} and specifically the benefits of ASR word-level confidence scores \cite{ERROR_DETECTION_WITH_CONFIDENCE}, which are often provided in addition to the transcribed text \cite{CONFIDENCE_SCORES_SURVEY, CONFIDENCE_SCORE_RESEARCH_1, CONFIDENCE_SCORE_RESEARCH_2}. In this work we focus exclusively on \textsc{AED}\xspace and propose a natural way to embed the ASR confidence scores into the Transformer architecture.
We introduce \mbox{\ace \ \textsc{RED-ACE}\xspace}, a modified Transformer encoder with an additional embedding layer, that jointly encodes the textual input and the word-level confidence scores into a contextualized representation (\Cref{fig:method}). Our \textsc{AED}\xspace pipeline first quantizes the confidence scores into integers and then feeds the quantized scores with the transcribed text into the modified Transformer encoder (\Cref{fig:diagram}). Our experiments demonstrate the effectiveness of \textsc{RED-ACE}\xspace in improving \textsc{AED}\xspace performance. In addition, we demonstrate the robustness of \textsc{RED-ACE}\xspace to changes in the transcribed audio quality. Finally, we release a novel dataset that can be used to train and evaluate \textsc{AED}\xspace models.
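A minimal sketch of this quantize-then-embed pipeline is shown below (plain Python for illustration only; the bin count, the initialization scale and all names are our assumptions, and in the real model the lookup table is a learned parameter trained jointly with BERT):

```python
import random


def quantize(score, num_bins=10):
    """Equal-width binning of a confidence score in [0.0, 1.0] into an
    integer bin id in [0, num_bins - 1]."""
    return min(int(score * num_bins), num_bins - 1)


class ConfidenceEmbedding:
    """Lookup table M (num_bins x hidden_size); the bin embedding is summed
    with BERT's token, segment and position embeddings."""

    def __init__(self, num_bins=10, hidden_size=768, seed=0):
        rng = random.Random(seed)
        self.table = [[rng.gauss(0.0, 0.02) for _ in range(hidden_size)]
                      for _ in range(num_bins)]

    def combine(self, bert_embedding, score):
        """Sum the (already combined) BERT embedding with the bin embedding."""
        bin_embedding = self.table[quantize(score, len(self.table))]
        return [b + c for b, c in zip(bert_embedding, bin_embedding)]
```

The summation (rather than concatenation) keeps the hidden size unchanged, so the rest of the pre-trained BERT stack is reused as-is.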
\section{\label{sec:level1}INTRODUCTION} Low dimensional quantum materials have attracted significant attention because their properties are distinct from those of the bulk. Despite extensive research on low dimensional materials, major challenges remain in their fabrication and storage\cite{peaker2013low}. Dislocations -- 1D defects in crystals -- are similar to low dimensional materials, in the sense that the different bonding environment at their core leads to local properties distinct from those of the surrounding crystal. They can be used as templates for creating conducting nanowires in insulating materials\cite{nakamura2003conducting, ikuhara2009nanowire,tokumoto2009fabrication,amma2010electrical} or ferromagnetic nanowires in antiferromagnetic materials\cite{sugiyama2013ferromagnetic}. Moreover, dislocations are embedded in the solid, and as such are environmentally protected by their host. This is in contrast to existing low dimensional quantum materials -- such as metallic or semiconducting nanowires -- which degrade in a short time due to their instability\cite{zhou2014long}. Research on the effects of dislocations on the electronic properties of semiconductors dates back to 1953. First, Shockley reported that dangling bonds at the core of dislocations in germanium (Ge) and silicon (Si) should give rise to levels lying in the forbidden band gap\cite{shockley1953dislocations}. Later, Read created a model for defect states, which assumed dislocations in Ge to be acceptor type\cite{read1954lxxxvii}. Another model, proposed by Schr\"{o}ter and Labusch, claimed that 1D defect states could be acceptor or donor type\cite{labusch1980electrical}. Even though dislocations in simple tetrahedrally bonded semiconductors (Si, Ge, etc.)
have been investigated by a variety of theoretical and experimental methods\cite{broudy_electrical_1963,bell_effect_1966,patel_change_1967,alexander_dislocations_1969}, no unique and explicit model for the electronic states that dislocations introduce in the band gap has been adopted. For more than 40 years, most efforts to determine the position of dislocation states in the gap have ended in ambiguity\cite{holt2007extended}. The main challenges in the analysis and interpretation of experimental data were attributed to the presence of networks of various types of dislocations, kinks and jogs, deformation-induced point defects, and the ubiquitous interaction between dislocations and point defects \cite{holt2007extended,claeys2009extended}. On the other hand, theoretical calculations could not clarify the inconsistent experimental data because of once-limited computational power or finite size effects. Most theoretical studies concluded by highlighting that the correct positioning of defect levels can only be determined using realistic Hamiltonians and large numbers of atoms in the calculations \cite{jones1979theoretical,marklund1992energy,yong1989electron,lodge198990}. Despite the fair amount of work produced in the past, a complete understanding of the electronic properties of dislocations in elemental semiconductors is still lacking. Recently, thanks to increases in computing power \cite{laukkonen2019preparing} and modern density functional theory (DFT) codes, first-principles calculations of dislocations using realistic functionals and large simulation cells have become possible\cite{pizzagalli2018first,belabbas2015electronic}. Here, we present a systematic first-principles study of partial dislocations in diamond with accurate positioning of their energy levels with respect to the host crystal's band structure.
The electronic band structure of dislocations as well as the anisotropic carrier mobility in directions parallel and perpendicular to the dislocation line are calculated. The results show that metallic and semiconducting dislocations arise in diamond. 1D metallic bands are revealed within the core of unreconstructed (30$^{\circ}$) partial dislocations, with a characteristic 1D density of states ($1/\sqrt{E}$). This 1D Fermi gas is spatially localized at a single-atom-wide $p_z$ orbital chain along the dislocation line. In contrast, unreconstructed pure edge dislocations in diamond are 1D semiconductors with a direct band gap of 3.21 eV. Interband transitions within the latter theoretically explain the origin of the blue band luminescence in diamond, which the literature widely reports to be correlated with dislocations\cite{kiflawi_linearly_1974,pennycook_observation_1980,yamamoto_cathodoluminescence_1984,ruan_band_1992,iakoubovskii_luminescence_2000}. These results show that it is the core states of the dislocations themselves in diamond that give rise to functional (electrical and optical) properties, rather than their distortion of the surrounding bulk diamond states. This opens the door to considering dislocations as 1D quantum phases. In the rest of this paper, we first describe the computational setup. Next, we report the energetics of all calculated core configurations, as well as the ground state electronic properties of each configuration obtained using realistic hybrid functionals. Finally, carrier mobilities are calculated and discussed. \section{COMPUTATIONAL METHODS} \label{sec:method} Density Functional Theory (DFT) calculations are performed with the Vienna ab initio Simulation Package (VASP)\cite{kresse1996efficient} using projector augmented wave (PAW) pseudopotentials \cite{kresse1999ultrasoft}.
Exchange and correlation are treated with a hybrid functional with a Hartree-Fock mixing parameter $\alpha$ of 0.18, as parametrized by Heyd, Scuseria and Ernzerhof \cite{heyd2004efficient}, to eliminate the band gap underestimation problem of DFT. Incorporating a 0.18 fraction of Hartree-Fock exchange raises the calculated band gap from 4.10 eV to 4.95 eV within the limits of our computational resources. A plane wave cut-off energy of 550 eV is used with a k-point density of 0.1 \r{A}$^{-1}$ for structural relaxations. Full periodic boundary conditions are used with a quadrupolar arrangement of dislocations in a triclinic simulation cell containing two dislocations with opposite Burgers vectors. The simulation cell consists of 576 atoms with a separation of $\approx$ 20 \r{A}~between the two dislocations and is oriented along \hkl[-1-12], \hkl[111] and \hkl[1-10], corresponding to the x, y, and z directions, respectively. Two sets of supercells with single and double lattice translation periods are used along the \hkl[1-10] dislocation line direction to study core reconstruction. The predominant slip system in diamond cubic crystals is \hkl{111}\hkl<110>. A dislocation is termed glide (shuffle) when slip takes place between widely (closely) spaced $\{111\}$ planes. We only consider the glide set of dislocations since they are glissile (capable of gliding) and the most stable\cite{pizzagalli2008dislocation}. The dislocations are introduced by imposing their elastic displacement field using anisotropic elasticity theory. All atomic positions are subsequently optimized until forces are smaller than 10 meV/\r{A}. To be able to directly compare the defective and perfect (bulk) simulation cells, the electrostatic potential in these cells needs to be aligned~\cite{lany2009accurate} via $$E_\text{VBM}= E_\text{VBM}^\text{Perfect} + V_\text{ave}^\text{Bulk-like} - V_\text{ave}^\text{Perfect}.
$$ Here, $E_\text{VBM}$ and $E_\text{VBM}^\text{Perfect}$ correspond to the valence band maximum of the defective and the perfect cell, respectively, $V_\text{ave}^\text{Bulk-like}$ is the average potential in the bulk-like region of the defective cell, and $V_\text{ave}^\text{Perfect}$ is the average potential of the perfect cell. In the defective cell, atoms with volumetric strain values smaller than $10^{-4}$ are considered bulk-like and are used to compute $V_\text{ave}^\text{Bulk-like}$. To obtain the band structure along the \hkl[-1-12] direction, band unfolding is performed with the fold2bloch code using 15 equidistant k points \cite{rubel2014unfolding}. Effective mass calculations were performed with curve fitting and the finite differences method. \begin{figure} \includegraphics[width=\textwidth]{FIGURE_1.png} \caption{ Relaxed dislocation core structures of the glide set of Shockley partials in diamond. The relaxed core structure of (a) single period 30$^{\circ}$, (b) double period 30$^{\circ}$, (c) single period 90$^{\circ}$, and (d) double period 90$^{\circ}$ Shockley partial dislocations. For each structure, the top panels show the top view of the \hkl(1-10) plane and the bottom panels show the \hkl(111) glide plane. The stacking fault region associated with each partial is shaded. The locations of the dislocations in the top panels are marked with a red $\perp$ sign. The new bonds formed during core reconstruction are marked by red ellipses.}
\label{fig:wide} \end{figure} \section{RESULTS AND DISCUSSION} \begin{table}[b] \caption{\label{tab:table1} Energy difference between the reconstructed and unreconstructed partial dislocations obtained by allowing double lattice periodicity (DP) and single lattice periodicity (SP) along the line direction.} \begin{ruledtabular} \begin{tabular}{lc} &$\left(E^\text{DP}-E^\text{SP}\right)$/atom (eV)\\ \hline 30$^{\circ}$ Shockley Partial& -0.046 \\ 90$^{\circ}$ Shockley Partial& -0.025 \\ \end{tabular} \end{ruledtabular} \end{table} Previous studies~\cite{blumenau2003dislocations,blumenau2002dislocations} compared the relative stability of the glide set of dislocations in diamond and suggested that dissociation of the 60$^{\circ}$ glide dislocation into 90$^{\circ}$ and 30$^{\circ}$ Shockley partials lying on the same \hkl{111} glide plane is energetically favored. Therefore, we only investigate the electronic structure of Shockley partials with two types of reconstruction, i.e., `Single Period' (SP) and `Double Period' (DP), in this paper. \tab{table1} presents the energy difference per atom between the relaxed dislocation core configurations. The corresponding relaxed core configurations are shown in Fig. 1. SP-30$^{\circ}$ partial dislocations have dangling bonds at their core as shown in Fig. 1(a). Doubling the lattice periodicity along the dislocation line allows for pairing the dangling bonds of every second atom with their neighbors (Fig. 1(b)) and reduces the energy by 46 meV/atom. In the case of the 90$^{\circ}$ partial dislocation, all the core atoms in both DP and SP dislocations are four-fold coordinated with highly distorted bonds, as shown in Figs. 1(c)-(d). Although no dangling bonds occur in either type of dislocation, the DP configuration is more favorable than the SP by ${\sim}$ 25 meV/atom.
The bonds are stretched by up to 15\% in the SP configuration and by up to 11\% in the DP configuration with respect to the relaxed C-C bond length (1.54 {\AA}) in perfect bulk diamond. Thus, strain in the SP core is released through the bond rearrangement, enhancing the stability of the DP-90$^{\circ}$ partials. The small energy differences between the SP and DP glide set of dislocations imply that the structure adopted (DP or SP) can be altered depending on the environment, for example, by local strains, doping, or thermal excitation. Therefore, we consider all of these core configurations for the electronic structure calculations. Next, we study the dislocation band structures and analyze the correlation between their electronic properties and core geometries. The electronic structure of defect-free diamond in an oblique cell configuration, comparable to the dislocation supercells, is used as the reference. \begin{figure} \includegraphics[width=8.6cm,height=8.6cm]{FIGURE_2.png} \caption{\label{fig:epsart1} Calculated electronic band structures along the dislocation line direction \hkl[1-10] for (a) the reference perfect (defect free) supercell and supercells with (b) SP 30$^{\circ}$ and (c) DP 30$^{\circ}$ Shockley partial dislocation dipoles. (d) Calculated total electronic density of states for the SP and DP 30$^{\circ}$ dislocation dipole supercells with reference to the perfect supercell. Band decomposed charge density distributions for (e) the blue-colored deep defect level in the band structure of the SP 30$^{\circ}$ dislocation dipole supercell and (f) the red-colored shallow defect level in the DP 30$^{\circ}$ dislocation dipole supercell. The locations of the dislocation dipoles are marked with green colored ${\perp}$ and ${\top}$ signs.
Dashed lines on each panel show the conduction band minimum and the valence band maximum of the perfect bulk diamond as a reference.} \end{figure} Figures 2(a)-(c) present the electronic band structures obtained along the dislocation line direction \hkl[1-10] for the reference and 30$^{\circ}$ dislocation dipole supercells. Our results reveal that the defect-related states of the DP-30$^{\circ}$ partial are localized and lie rather close to the conduction band (CB) edge, in comparison to the defect states of the SP-30$^{\circ}$ partial. On the other hand, the SP-30$^{\circ}$ partial gives rise to an extremely broad defect state overlapping with the valence band and extending through the gap up to the proximity of the conduction band, due to a row of dangling bonds propagating along the dislocation line. These mid-gap states are found to be half filled, which implies that both acceptor and donor activity are possible depending on the position of the Fermi level. The disappearance of the mid-gap states in the DP reconstruction is attributed to the elimination of these dangling bonds. It should also be noted that two degenerate defect states are observed in each case due to the existence of two dislocations in the supercell. Density of states (DOS) plots are shown in Fig. 2(d), revealing that the presence of dislocations gives rise not only to multiple states in the gap region but also to shifts in the valence and conduction band edges for the SP-30$^{\circ}$ partial. Moreover, the atomic origins of the induced defect states are investigated through band decomposed charge density analysis (Figs. 2(e) and (f)), with the defect states and corresponding partial charge densities color-coded. It is evident that these states are located in the dislocation core region, and that the degree of spatial localization for the SP-30$^{\circ}$ reconstruction is larger than that of the DP-30$^{\circ}$.
Figures 3(a)-(d) show that neither the SP nor the DP reconstruction of the 90$^{\circ}$ partial dislocation induces mid-gap states well separated from the VB and CB edges of bulk diamond. However, the SP-90$^{\circ}$ core, with higher strain, gives rise to a relatively dispersive conduction band compared to the DP-90$^{\circ}$. Similar to the case of the 30$^{\circ}$ dislocations, the decomposed partial charge densities show that the dislocation states are localized around their core regions in real space (Figs. 3(e)-(f)). This observation is consistent with the fact that localized defect states are created by dangling bonds, which are absent in either structure of the 90$^{\circ}$ partial dislocations. \begin{figure}[b] \includegraphics[width=8.6cm,height=8.2cm]{FIGURE_3.png} \caption{\label{fig:epsart2} Calculated electronic band structures along the dislocation line direction \hkl[1-10] for (a) the reference perfect (defect free) supercell and supercells with (b) SP 90$^{\circ}$ and (c) DP 90$^{\circ}$ Shockley partial dislocation dipoles. (d) Calculated total electronic density of states for the SP and DP 90$^{\circ}$ dislocation dipole supercells with reference to the perfect supercell. Band decomposed charge density distributions for (e) the blue-colored deep defect level in the SP 90$^{\circ}$ dislocation dipole supercell and (f) the red-colored shallow defect level in the DP 90$^{\circ}$ dislocation dipole supercell. The locations of the dislocation dipoles are marked with green colored ${\perp}$ and ${\top}$ signs.
Dashed lines on each panel show the conduction band minimum and the valence band maximum of the perfect bulk diamond as a reference.} \end{figure} \begin{figure}[t] \includegraphics[width=\textwidth]{FIGURE_4.png} \caption{\label{fig:wide2}Calculated electronic band structures for (a) perfect, (b) SP 30$^{\circ}$, (c) DP 30$^{\circ}$, (d) SP 90$^{\circ}$, and (e) DP 90$^{\circ}$ dislocation dipole supercells, unfolded into the Brillouin zone of the primitive cell using the fold2bloch code\cite{rubel2014unfolding}. Top panels are the unfolded band structures along the dislocation line direction \hkl[1-10]. Bottom panels are the unfolded band structures along the Burgers vector direction \hkl[-1-12]. Color bars represent the Bloch spectral weight. Since the Bloch spectral weights of defects are in the range of [0,2], a gray scale color bar (bottom) is used for defect states that are well separated from the VB and CB of the SP 30$^{\circ}$ and 90$^{\circ}$ dislocation dipole supercells.} \end{figure} Notice that the top of the valence band edges are shifted to higher energy levels for both the 30$^{\circ}$ and 90$^{\circ}$ partials in comparison to the bulk diamond reference (Figs. 2 and 3(a)-(c)). We found that these shifted bands are occupied and localized around the stacking fault region between the two dislocations. We also calculate the band structure along the x direction (\hkl[-1-12]), which is perpendicular to the dislocation line. Band folding effects arise in this case because there is more than one lattice translation along the x direction of the $8\times 1 \times 1$ supercell. Note that, in the case of the DP supercells, the doubled period is taken as the new translation vector along the line due to reconstruction. \fig{wide2} shows the unfolded band structures. Because the positions of the valence and conduction band edges are affected by folding along the x direction, we unfold all the band structures for comparison.
For SP-30$^{\circ}$ partials, dispersionless electronic states (i.e., flat bands) are observed along the x direction, implying confinement of carriers in real space along this direction (Fig. 4(g)). In contrast, the half-filled defect state with metallic conductivity along the line direction is quite dispersive (Fig. 4(b)). \begin{figure}[t] \includegraphics[width=8.6cm,height=3.8cm]{FIGURE_5.png} \caption{\label{fig:pz}(a) Orbital resolved partial density of states for SP 30$^{\circ}$. The inset shows the dislocation core and the light blue colored atoms that are used to plot the partial density of states (PDOS). (b) Charge density plot along the dislocation core of SP 30$^{\circ}$. (Black balls represent the atoms lying on the glide plane.) } \end{figure} \fig{pz} shows the partial DOS and the charge density plot along the dislocation line of the SP-30$^{\circ}$ partial dislocation. It is evident that these states largely consist of $p_z$ orbitals; the charge density is localized along the line direction, showing an array of overlapping $p_z$ orbitals. The heavily localized charge density along the dislocation line, combined with the dispersive band structure, demonstrates that 1D conduction takes place along the line direction within atomically narrow channels. These 1D metallic bands exhibit the characteristic $1/\sqrt{E}$ DOS behavior at the conduction-band-like bottom of the band and, correspondingly, at its valence-band-like top. Similar ideal 1D Fermi gases have been previously generated using scanning tunneling microscopy (STM) to arrange metallic atoms into chains; however, they degrade quickly on a surface\cite{nilius_development_2002}. Our results suggest that ideal quantum wires made from naturally occurring line defects in wide band gap materials may overcome these limitations. Next, we quantify the carrier mobility in the directions along and perpendicular to the dislocation line by calculating the effective masses in both directions.
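As an illustration of the curve-fitting and finite-difference extraction of effective masses used here, the sketch below works on synthetic band energies (with $\hbar = m_0 = 1$, and the electron value $0.15\,m_0$ quoted in the text taken only as a target), not on actual DFT eigenvalues:

```python
import numpy as np

hbar, m0 = 1.0, 1.0
m_true = 0.15 * m0                  # target value quoted in the text (assumed here)

# Synthetic band energies near the Gamma point (stand-in for DFT eigenvalues)
k = np.linspace(-0.05, 0.05, 11)
E = hbar**2 * k**2 / (2 * m_true)

# (i) Parabolic fit E(k) ~ a k^2 + b k + c, then m* = hbar^2 / (2 a)
a = np.polyfit(k, E, 2)[0]
m_fit = hbar**2 / (2 * a)

# (ii) Central finite difference for d^2E/dk^2 at the band minimum (k = 0)
dk = k[1] - k[0]
d2E = (E[6] - 2 * E[5] + E[4]) / dk**2
m_fd = hbar**2 / d2E

print(round(m_fit, 3), round(m_fd, 3))   # 0.15 0.15
```

Both routes recover the input mass exactly for a parabolic band; for real DFT bands they differ slightly, which gives a quick consistency check on the fitting window.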
Where states are localized, the effective mass approximation is not valid. However, we have already shown that the dangling bond wavefunctions extending along the dislocation line are heavily delocalized in that direction for SP-30$^{\circ}$. Therefore, using a simple curve fitting as explained in \sect{method}, we compute the effective masses of carriers within the defect states along the line direction for SP-30$^{\circ}$: $m_{\textbf{e},defect}^{\hkl[1-10]} = 0.15\, m_{0}$ at the $\Gamma$ point and $m_{\textbf{h},defect}^{\hkl[1-10]} = 0.17\, m_{0}$ at the Brillouin zone edges. \begin{table}[h] \caption{\label{tab:effectivem}Calculated effective mass values in units of $m_0$ along different crystallographic directions of bulk diamond, from the literature and from this work.} \begin{ruledtabular} \begin{tabular}{ccc} {Parameter}&{Literature (Bulk)}&{This work}\\ \hline $m_{hh}^{\hkl[111]}$& $0.56$~\cite{lofaas2011effective}& $0.56$\\ $m_{hh}^{\hkl[110]}$& $2.12$~\cite{amma2010electrical},$1.34,0.653 $~\cite{akimoto2014high} & $1.57$\\ $m_{hh}^{\hkl[100]}$& $0.40$~\cite{lofaas2011effective}& $-$\\ $m_{lh}^{\hkl[111]}$& $0.53$~\cite{lofaas2011effective}& $0.28$\\ $m_{lh}^{\hkl[110]}$& $0.23$~\cite{lofaas2011effective},$0.263$~\cite{akimoto2014high}& $0.20$\\ $m_{e}^{t}$& $0.36$~\cite{willatzen1994linear,lofaas2011effective}& $0.45$\\ $m_{e}^{l}$& $ 1.40$~\cite{willatzen1994linear,lofaas2011effective}& $1.09$\\ \end{tabular} \end{ruledtabular} \end{table} \tab{effectivem} shows the calculated carrier effective masses in bulk diamond for reference, revealing a far higher hole mobility along the dislocation line direction than in a perfect diamond crystal. Similarly, the defect states of SP-90$^{\circ}$ dislocations are delocalized along the line direction but localized along the perpendicular direction (Figs. 4(d),(j)).
The effective mass of carriers within the defect states along the line direction for SP-90$^{\circ}$ is calculated to be $m_{\textbf{e},defect}^{\hkl[1-10]} = 1.01\, m_{0}$ at the $\Gamma$ point. No mid-gap states are observed along either the line or the perpendicular direction for the dislocations with DP reconstruction (Figs. 4(c),(h),(e) and (k)). The unfolded band structures in Fig. 4 also provide insight into the anisotropic optical properties possible in dislocated diamond. Bulk diamond (Figs. 4(a),(f)) exhibits a large indirect band gap of 4.95 eV. However, in the case of pure edge dislocations (SP-90$^{\circ}$), a much smaller direct band gap of 3.21 eV is seen in our calculations. This implies the onset of optical absorption at 3.21 eV in dislocated diamond and, likely, luminescent emission of photons at, or slightly below, this energy, assuming some excitonic (e-h) coupling occurs. The literature on the optical properties of diamond is quite extensive, and it is widely reported that the broad blue band emission in diamond (centered at 2.8 eV) arises from dislocations\cite{kiflawi_linearly_1974,pennycook_observation_1980,yamamoto_cathodoluminescence_1984,ruan_band_1992,iakoubovskii_luminescence_2000}. Cathodoluminescence (CL) studies show blue emission in diamond arising exactly at the location of dislocations. Although CL can localize the luminescence to the vicinity of dislocations, it was not clear if radiative recombination was occurring in the core dislocation states themselves, or in the bulk diamond states around the dislocations. The latter could occur, for example, if point defects or strain-generated carrier trapping and recombination pathways develop near the dislocations. CL excites a large population of hot carriers, well above the band edges, and as a result cannot determine the absorption onset of the blue emission.
Later, Iakoubovskii and Adriaenssens used photoluminescence excitation (PLE) spectroscopy and identified an absorption onset for the blue band emission at 3.0 eV. These results demonstrated that excitation of photocarriers in the bulk states of diamond ($\approx${5.5} eV) is not needed to drive the blue band emission. The band structure in Fig. 4(d) provides a straightforward explanation for the blue band CL and PLE emission data, indicating that the 2.8 eV emission band arises from band-to-band recombination in the core states of pure edge dislocations in diamond, with an absorption onset at 3.21 eV. We note that our calculated band gap values underestimate the experimental values, e.g., 4.95 eV compared to $\approx${5.5} eV for bulk diamond. Therefore, while the numbers cannot be directly compared to the experimental measurements, the trend is in good agreement with the optical literature on diamond. Additionally, the anisotropy of the band structure suggests an optical polarization axis with blue band emission parallel to the dislocation line, i.e., electric-dipole transitions along the dispersing band k-vector. Again, the literature supports this prediction: the CL data of Kiflawi and Lang demonstrate $>90\%$ polarization of the blue band emission parallel to dislocation lines. \section{CONCLUSIONS} Ground state electronic properties of the glide set of partial dislocations in diamond were calculated.
We found that (1) the position and effective mass of dislocation-induced states depend heavily on the core structure, (2) only mixed dislocations with dangling bonds have metallic conductivity through half-filled gap states, (3) 1D conduction along the line direction of these dislocations is attributed to a chain of overlapping $p_z$ orbitals forming a dispersive band along the line defect, (4) pure edge dislocations in diamond exhibit a direct band gap of 3.21 eV, providing a theoretical explanation of the origin of the blue band emission in diamond, and (5) the DOS of the core states is that of an ideal 1D Fermi gas in both metallic and semiconducting dislocations. Consequently, dislocations with under-coordinated core atoms appear to be naturally formed 1D quantum wires. Core states of dislocations in wide band gap materials like diamond could be used as active components in functional materials. In this study, the geometry and electronic properties of ideal (straight and clean) partial dislocations are examined. Jogs, kinks, dislocation nodes, and point-defect-decorated dislocations can alter the electronic structure; these effects should be investigated thoroughly in the future. \begin{acknowledgments} We gratefully acknowledge the support of this work by the AFOSR grant number FA9550-21-1-0278. Computational resources were provided by the Ohio Supercomputer Center. \end{acknowledgments} \bibliographystyle{apsrev4-2}
\section{Introduction} Electron and nuclear spins in a rotating frame connect deeply to both fundamental physics and applications. The frequencies of the spins shift in the rotating frame, which can be explained by an emergent pseudo-magnetic field \cite{Barnett1915,Barnett1935}. The quantum mechanical geometric phase was also predicted to appear in these systems in the adiabatic limit, where the frequency of the rotating frame is much less than the frequencies of the spins \cite{Ajoy2012,MacLaurin2012,Ledbetter2012}. The pseudo-magnetic field has been detected with both nuclear \cite{Chudo2014} and electron spins \cite{Wood2017,Wood2018}. However, the geometric phase, which is proportional to the rotation frequency and can be used for a gyroscopic sensor \cite{zhang2016inertial}, has been too small to be measured in a traditional mechanical rotor with a maximum rotation frequency of about $10$ kHz \cite{Wood2018}. Here we propose to use a levitated nanodiamond that can be driven to rotate at ultrahigh speed in vacuum to study the geometric phase of a rotating electron spin. Our proposal is based on recent breakthroughs in levitated optomechanics \cite{Yin2013b,Chang2010,Romero2010,Yin2013,Scala2013b,Shi2016,Hoang2016a,Ma2017,Arita2013,Kuhn2017,Monteiro2018,PhysRevLett.121.040401,Zhao2014,Ranjit2016,Monteiro2017}. Nanodiamonds with nitrogen-vacancy centers that host electron spins have been levitated in vacuum with optical tweezers \cite{Neukirch2015a,Hoang2016}, ion traps\cite{doi:10.1021/acs.nanolett.8b01414,PhysRevLett.121.053602}, and magneto-gravitational traps \cite{hsu2016cooling}. Recently, rotation frequencies larger than $1$ GHz have been experimentally observed with optically levitated nanoparticles driven by circularly polarized lasers \cite{Ahn2018,Reimann2018}. In this way, for the first time, the frequency of a mechanical rotor approaches the frequency of the electron spin in the NV center. This will generate a large geometric phase.
Previous studies based on the adiabatic approximation are no longer valid \cite{Ajoy2012,MacLaurin2012,Ledbetter2012}. A theory of nonadiabatic spin dynamics and geometric phase in a rotating frame is needed. In this paper, we study the electron spin dynamics and calculate the quantum geometric phase of an NV center in an ultrafast rotating levitated nanodiamond without the adiabatic approximation. We find that transitions between the spin energy levels appear when the angle $\theta$ between the axis of the NV center and the axis of the rotor is not zero. This effect is negligible in the adiabatic limit, but becomes important in the nonadiabatic regime. By rotating the nanodiamond clockwise (counterclockwise), the resonant Rabi oscillation between $|0\rangle$ and $|+1\rangle$ ($|-1\rangle$) of the NV center can be realized, if the rotation frequency approaches the frequency of the electron spin in the NV center without an external magnetic field. We calculate the quantum geometric phase of the electron spin, which is consistent with the previous studies in the adiabatic limit \cite{Ajoy2012,MacLaurin2012,Ledbetter2012}, and is maximized near the resonant point. The energy spectrum, dynamics, and nonadiabatic geometric phase of the electron spin under a finite magnetic field are solved numerically. We find that the resonant transition can be achieved with a much lower rotation frequency in the presence of an external magnetic field. We consider a non-spherical nanodiamond optically trapped in high vacuum. The lengths of the three axes of the nanodiamond are different. Therefore, its rotational degrees of freedom can be manipulated by the driving laser. We take the polarization of the driving laser to be circular. The nanodiamond can be driven to rotate at a constant angular velocity $\bm{\omega}$ \cite{Ahn2018,Reimann2018}. There is a nitrogen-vacancy center, with electron spin $S = 1$, in the nanodiamond. As shown in Fig.
\ref{fig:spin}(a), we choose the direction of $\bm{\omega}$ along the z-axis, and define spherical coordinates $\theta$ and $\phi(t) = \omega t$. We denote by $\theta$ the angle between the rotational axis and the axis of the NV center. For simplicity, we first consider the dynamics of the electron spin without an external magnetic field. The Hamiltonian of the rotating NV center can be obtained by applying a rotational transformation $R(t) = R_z(\phi(t))R_y(\theta)$ to the stationary Hamiltonian $H_0 = D S_z^2$ ($\hbar =1$), which gives $H(t) = R(t)H_0 R^\dagger(t)$. Here the rotation of a spin-1 by the angle $\alpha$ about the direction $\bm{n}$ is given by $R_{\bm{n}}(\alpha) = e^{-i\alpha \bm{n}\cdot\bm{S}}$. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{fig1} \caption{(a) A nanodiamond with a built-in NV center is levitated in an optical trap. A circularly polarized laser drives the nanodiamond to rotate at angular frequency $\omega$. (b) The frames $Ox'y'z'$ and $Ox''y''z''$ are defined by the rotational transformations $R(t)=R_z(\phi(t))R_y(\theta)$ and $W(t)= R(t)R_z(-\phi(t))$, respectively. The rotating spin states are defined to be static in the frame $Ox''y''z''$. } \label{fig:spin} \end{figure} The Hamiltonian $H(t)$ is written in the $S=1$ basis in the inertial (lab) frame. Alternatively, we can rewrite the Hamiltonian in the basis that rotates with the solid spin and study the dynamics of a rotating spin. The unitary transformation from the static basis to the rotating basis is given by $W(t) = R(t)R_z(-\phi(t))$, which differs from $R(t)$ by an additional $R_z(-\phi(t))$ rotation. The rotational transformations corresponding to $R(t)$ and $W(t)$ are shown in Fig. \ref{fig:spin}(b), where they are denoted by $Ox'y'z'$ and $Ox''y''z''$, respectively. The additional term $R_z(-\phi(t))$, which cancels the rotation of the local orthogonal coordinates, moves the geometric phase into the dynamical phase \cite{Giavarini1989,SM}.
To see this, we write down the Hamiltonian after the unitary transform $\tilde{H}(t) = W(t)H(t)W^\dagger(t) + i\hbar \partial_t W(t)W^\dagger(t)$ as \begin{equation} \label{eq:HT} \tilde{H}(t)= H_0 + \omega(1 - \cos \theta)S_z - \frac{\omega}{2} \sin\theta [e^{-i(\omega t + \phi_0)}S_+ + h.c.], \end{equation} where $S_\pm = S_x \pm i S_y$. The constant phase $\phi_0$ in Eq. \eqref{eq:HT} arises because $Ox'y'z'$ and $Ox''y''z''$ in Fig. \ref{fig:spin}(b) have a relative rotation $R_z(-\phi_0)$. In the interaction picture given by the unitary transformation $U = e^{i\omega S_z t}$, the time-independent Hamiltonian reads \begin{equation}\label{eq:H_I} \tilde{H}_I = \begin{pmatrix} D - \omega\cos \theta & -\Omega^*/2 & 0 \\ -\Omega/2 & 0 & -\Omega^*/2 \\ 0 & -\Omega/2 & D + \omega \cos \theta \end{pmatrix}, \end{equation} where we denote $\Omega = \sqrt{2}\omega e^{i\phi_0}\sin\theta$ as the Rabi frequency. In the rest of this paper, the phase $\phi_0$ of the Rabi frequency is eliminated by redefining the $S_z$ states. Let us first solve the Hamiltonian \eqref{eq:H_I} in two limits: the adiabatic limit $\omega \ll D$ and the near-resonant limit $|D \pm \omega \cos \theta| \ll \Omega$. In the adiabatic limit, the effect of transitions between spin states is negligible. We can neglect the off-diagonal terms in the Hamiltonian \eqref{eq:HT} and obtain the effective Hamiltonian $\tilde{H}_e = D S_z^2 + \omega(1 - \cos \theta)S_z$, where the last term $\omega(1 - \cos \theta) S_z$ is called the rotation induced level shift (RILS) term. It is consistent with the previous studies on the adiabatic geometric phase \cite{Ajoy2012,MacLaurin2012,Ledbetter2012}. When $|D \pm \omega \cos \theta| < \Omega$, the off-diagonal terms in the Hamiltonian \eqref{eq:H_I} can induce transitions between $|0\rangle$ and $|\pm 1\rangle$. The resonance condition requires the angular frequency $\omega_\pm = \pm D/\cos\theta$.
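As a quick numerical check of Eq. \eqref{eq:H_I} and the resonance condition (an illustrative sketch, assuming units with $D = 1$ and $\phi_0 = 0$), diagonalizing the $3\times3$ matrix at $\omega = \omega_+$ shows that the two quasi-energies involved in the resonance are split by approximately the Rabi frequency $\Omega = \sqrt{2}\,\omega\sin\theta$:

```python
import numpy as np

D = 1.0                       # zero-field splitting (our energy unit)
theta = np.pi / 100           # small tilt angle
omega = D / np.cos(theta)     # resonant rotation frequency omega_+
Omega = np.sqrt(2) * omega * np.sin(theta)  # Rabi frequency (phi_0 = 0)

# Interaction-picture Hamiltonian, Eq. (H_I), basis (|+1>, |0>, |-1>)
H_I = np.array([
    [D - omega * np.cos(theta), -Omega / 2,  0.0],
    [-Omega / 2,                 0.0,       -Omega / 2],
    [0.0,                       -Omega / 2,  D + omega * np.cos(theta)],
])

lam = np.sort(np.linalg.eigvalsh(H_I))
splitting = lam[1] - lam[0]
# At resonance the two lowest levels are split by ~Omega; the far-detuned
# third level (near 2D) only gives a small second-order correction.
print(abs(splitting - Omega) / Omega < 0.05)   # True
```

For small $\theta$ the splitting agrees with $\Omega$ to well below a percent, consistent with the two-level picture used at resonance.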
At resonance, and in the limit $\sin\theta \ll 1$, we can ignore off-resonant terms and get perfect Rabi oscillation in a two-dimensional subspace with the Rabi frequency $\Omega$ \cite{SM}. For the rotating frequency $\omega=\omega_+$, the resonant transition between $|0\rangle$ and $|+1\rangle$ occurs, while for $\omega=\omega_-$ the resonant transition between $|0\rangle$ and $|-1\rangle$ appears. This driving selectivity comes from the conservation of angular momentum. For an arbitrary angle $\theta$, we need to diagonalize the whole $3\times3$ matrix of the Hamiltonian \eqref{eq:H_I}. The quasi-energies are given by the solutions of the cubic equation \begin{equation} \lambda^3 -2D \lambda^2 -(\omega^2 - D^2)\lambda + \omega^2D\sin^2\theta = 0, \end{equation} which we denote by $\lambda_0$ and $\lambda_{\pm 1}$. We denote the Floquet states with quasi-energies $\lambda_n$ by $|\lambda_n\rangle$, with $n = 0,\pm1$. Here, $|\lambda_n\rangle$ smoothly connects to $|n\rangle$ in the adiabatic limit $\omega \ll D$. The quasi-energy spectrum as a function of $\omega$ and $\theta$ is shown in Fig. \ref{fig:quasi}(a). The quasi-energy spectrum $\lambda$ has a level crossing around $\omega = \pm D$ if $\theta=0$. As long as $\theta \neq 0$, the quasi-energy spectrum has an avoided level crossing near the resonant points $\omega_\pm$, as shown in Fig. \ref{fig:quasi}(b). The quasi-energy splitting at the resonant point gives the Rabi frequency $\Omega$. In Fig. \ref{fig:quasi}(a), there is also an avoided level crossing between $|+1\rangle$ and $|-1\rangle$ near $\theta = \pi/2$, which corresponds to a second-order effective Rabi oscillation between these two states with Rabi frequency $\omega^2/D$ \cite{SM}. We plot the dynamics of the electron spin of an NV center in the rotating frame in Ref. \cite{SM}. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{fig2} \caption{(a) The quasi-energies $\lambda$ as a function of rotating frequency $\omega$ and $\theta$.
Near the crossover $\omega = \pm D/\cos\theta$, the eigenstates of $S_z$ are significantly mixed, which means that there will be Rabi oscillation. The legend labels each quasi-energy branch by the eigenstate of $S_z$ from which it starts. (b) A magnified slice of (a) at $\theta = \pi/100$. The quasi-energy splitting gives the Rabi frequency $\Omega$. (c) and (d) are the numerical solutions of the quasi-energies and the time evolution in the presence of a static magnetic field with $\Delta = 0.803D$. The magnetic field satisfies the resonant condition with $\theta = \pi/100$ and $\omega = 0.2D$ in (d). The quasi-energies are obtained by diagonalizing the Floquet Hamiltonian.} \label{fig:quasi} \end{figure} If the NV center is rotating in the presence of a non-zero external magnetic field, the total Hamiltonian reads $H(t) = R(t)(DS_z^2 + g \mu_B B \bm{S}\cdot\bm{n})R^\dagger(t) = R(t)H_1 R^\dagger(t)$ \cite{SM}, where $\bm{n} = (\sin\theta,\cos\theta,0)$ is the unit vector along the magnetic field direction in the rotating frame, and $\Delta = -g\mu_B B$. Similar to Eq. \eqref{eq:HT}, we apply the unitary transformation $\tilde{H}_1(t) = W(t)H_1(t)W^\dagger(t) + i\hbar \partial_t W(t)W^\dagger(t)$ and get \begin{equation} \label{eq:HTB} \begin{split} \tilde{H}(t) & = DS_z^2 - \Delta\cos\theta S_z + \Delta\sin\theta S_x \\ &+ \omega(1 - \cos \theta)S_z - \frac{\omega}{2} \sin\theta(e^{-i\omega t}S_+ + h.c.). \end{split} \end{equation} The presence of the magnetic field changes the eigenstates of the spin, which are no longer eigenstates of the operator $S_{z}$. When the spin is rotating, the eigenstates are changed further. Therefore, it is quite difficult to solve the problem analytically. However, in the limit of $\theta \ll 1$, the misalignment between the magnetic field and the spin is negligible and the eigenstates are nearly unchanged.
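The smallness of this mixing can be made quantitative by diagonalizing the static part of Eq. \eqref{eq:HTB}, $DS_z^2 - \Delta\cos\theta\, S_z + \Delta\sin\theta\, S_x$ (a sketch in Python/NumPy; the values $\Delta = 0.5D$ and $\theta = 0.01$ are illustrative assumptions):

```python
import numpy as np

# Spin-1 operators in the basis (|+1>, |0>, |-1>)
Sz = np.diag([1.0, 0.0, -1.0])
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)

D, Delta, theta = 1.0, 0.5, 0.01  # illustrative parameters (assumption)

Hs = D * Sz @ Sz - Delta * np.cos(theta) * Sz + Delta * np.sin(theta) * Sx
w, v = np.linalg.eigh(Hs)
# overlap of each eigenvector with its dominant S_z eigenstate
overlaps = np.max(np.abs(v) ** 2, axis=0)
```

For $\theta \ll 1$ every overlap stays above $0.999$, i.e., the eigenstates remain essentially the $S_z$ eigenstates, while the eigenvalues are simply shifted to approximately $\{0,\, D-\Delta,\, D+\Delta\}$.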
Therefore, for simplicity, we consider $ 0 < \Delta < D$, take the small-angle limit $\theta \ll 1 - \Delta/D$, and only consider the nearly resonant situation. The effective Hamiltonian (to the order of $\theta^2$) reads \begin{equation} \label{eq:HTBs} \tilde{H}(t) = \tilde{D}S_z^2 -\tilde{\Delta}S_z - \frac{\omega}{2}\theta(e^{-i\omega t}S_+ + h.c.), \end{equation} where $\tilde{D} = D + \frac{3D\Delta^2}{2(D^2-\Delta^2)}\theta^2$ and $\tilde{\Delta} = \Delta - \frac{1}{2}\omega \theta^2 - \frac{D^2\Delta}{2(D^2-\Delta^2)}\theta^2 $. The resonant condition is given by $\tilde{D} \mp \tilde{\Delta} = \pm\omega$. Therefore, the magnetic field compensates for $\omega$ and allows us to observe Rabi oscillation at a lower angular frequency. When the angle $\theta$ is not small, the above perturbative analysis becomes invalid, and we adopt the Floquet formalism \cite{Grifoni1998,Shirley1965} to numerically solve the evolution of Eq. \eqref{eq:HTB} \cite{SM}. Since the quasi-energies are determined uniquely only up to a multiple of $\hbar\omega$ \cite{SM}, branches that start at the same quasi-energy with slopes $n \hbar\omega$ represent the same Floquet state, with quasi-energies differing by $n \hbar\omega$. For example, an avoided level crossing between the $|\lambda_0\rangle$ branch starting with slope $0$ and the $|\lambda_{+1}\rangle$ branch starting with slope $-\hbar\omega$ means that there is a strong transition from $|0\rangle$ to $|+1\rangle$ by absorbing one photon. In order to compare with the quasi-energies under zero magnetic field, as shown in Fig. \ref{fig:quasi}(a), we choose the three quasi-energy branches that smoothly connect to the $\omega = 0$ eigenvalues with slopes equal to $\omega$, $0$, and $-\omega$ for $|\lambda_{-1}\rangle$, $|\lambda_0\rangle$, and $|\lambda_{+1}\rangle$, respectively. We numerically solve for the quasi-energies in an external magnetic field. As shown in Fig.
\ref{fig:quasi}(c), there is an avoided level crossing between $|\lambda_{0}\rangle$ and $|\lambda_{+1}\rangle$ for $\theta\neq 0$. The resonant angular frequency $\omega$ increases as the angle $\theta$ between the spin and the rotating axis increases. The quasi-energy splitting corresponds to the Rabi frequency $\Omega = \sqrt{2}\omega\sin\theta$. Without a magnetic field, for $\theta = \pi/100$ and $\omega = 0.2~ D$, the Rabi oscillation between $|0\rangle$ and $|+1\rangle$ is negligible. By applying a static magnetic field with $\Delta = 0.803~D$ to meet the resonant condition, as shown in Fig.~\ref{fig:quasi}(d), there is an almost perfect resonant Rabi oscillation between $|0\rangle$ and $|+1\rangle$. In this way, the resonant electron spin dynamics can be realized with a rotating frequency $\omega$ much lower than $D$. Based on the Floquet formalism \cite{SM}, we can derive the non-adiabatic geometric phase for the electron spin of the NV center in an ultra-fast rotating nanodiamond. The non-adiabatic geometric phases for the cyclic states are defined as \cite{PhysRevLett.58.1593,Moore1990,Moore1990a} \begin{equation}\label{eq:gp} \gamma_n = i \int^T_0 \langle\lambda_n|W(t)\frac{d}{dt}W^\dagger(t)|\lambda_n \rangle dt, \end{equation} where $W(t)$ is chosen in order to recover the RILS dynamical phase shift back into the geometric phase. Let us first consider the case without an external magnetic field. Eq. \eqref{eq:gp} can be rewritten as $\gamma_n = \int^T_0 \langle\lambda_n|\tilde{H}_I - DS_z^2 + \omega S_z |\lambda_n\rangle dt$, and, plugging in the coefficients of the Floquet states, we get \begin{equation}\label{eq:gpc} \gamma_n = \frac{2\pi}{\omega}(\lambda_n - (D-\omega)|c_{n,+1}|^2 - (D+\omega)|c_{n,-1}|^2), \end{equation} where $c_{n,k}$, $k = 0,\pm1$, are the coefficients of $|\lambda_n\rangle$ in the spin basis.
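The resonant condition $\tilde{D} \mp \tilde{\Delta} = \pm\omega$ can be solved numerically for $\omega$. The sketch below (Python/SciPy; the root-finding bracket $[0, D]$ is an assumption) reproduces the pairing $\Delta = 0.803D \leftrightarrow \omega \approx 0.2D$ used in Fig. \ref{fig:quasi}(d) for the $|0\rangle \to |+1\rangle$ branch $\tilde{D} - \tilde{\Delta} = \omega$:

```python
import numpy as np
from scipy.optimize import brentq

D, theta = 1.0, np.pi / 100  # parameters of Fig. 2(d)

def resonance_mismatch(omega, Delta):
    """tilde_D - tilde_Delta - omega for the effective Hamiltonian (theta << 1)."""
    tD = D + 3 * D * Delta**2 / (2 * (D**2 - Delta**2)) * theta**2
    tDelta = (Delta - 0.5 * omega * theta**2
              - D**2 * Delta / (2 * (D**2 - Delta**2)) * theta**2)
    return tD - tDelta - omega

Delta = 0.803 * D
omega_res = brentq(lambda w: resonance_mismatch(w, Delta), 0.0, D)
```

The root `omega_res` comes out close to $0.2D$, consistent with the quoted resonance.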
Based on the simplified Hamiltonian in the limits $\omega \ll D$ and $\omega \sim D/\cos\theta$, we can obtain the geometric phase using Eq. \eqref{eq:gpc}. If the rotation is adiabatic, $\omega \ll D$, the Floquet states $|\lambda_n\rangle$ have almost no mixing between the spin states. The quasi-energies are $\lambda_{+1},~\lambda_{0},~\lambda_{-1} = D - \omega\cos\theta,~ 0, ~D + \omega\cos\theta$, with corresponding geometric phases $\gamma_{+1},~ \gamma_{0},~ \gamma_{-1} = 2\pi(1-\cos\theta),~ 0,~ -2\pi(1-\cos\theta)$, which are consistent with the previous studies \cite{Ajoy2012,MacLaurin2012,Ledbetter2012}. In the resonant regime with $ \omega \simeq D/\cos\theta$ and $\theta \ll 1$, there is strong mixing between the spin states. The corresponding Floquet states are $|\lambda_{+1}\rangle = (|0\rangle + |+1\rangle)/\sqrt{2}$, $|\lambda_{0}\rangle = (|0\rangle - |+1\rangle)/\sqrt{2}$, and $|\lambda_{-1}\rangle =|-1\rangle$ with quasi-energies $ \lambda_{+1},\lambda_{0},\lambda_{-1} = \Omega/2,-\Omega/2,2D$. The non-adiabatic geometric phases are $ \gamma_{+1}, ~\gamma_{0},~ \gamma_{-1} = \sqrt{2}\pi\sin\theta,~ -\sqrt{2}\pi\sin\theta,~ 0$. As shown in Fig. \ref{fig:gp}(a), a slight detuning from resonance reduces the geometric phase, which means that for small angles $\theta$ the geometric phase is maximized at resonance. For general situations with arbitrary $\omega$ and $\theta$, we provide numerical results in Fig. \ref{fig:gp}(b). The analysis of the limiting cases indicates that there is a crossover between the two limits. Also, the peak behavior at resonance $\omega = D/\cos\theta$ is demonstrated. When $\theta$ increases, the geometric phases become larger, but the peak is less apparent due to the breakdown of the two-level approximation at large angles. From Fig.
\ref{fig:gp}(b), the crossing at $\theta = \pi/2$ corresponds to the second-order Rabi oscillation between $|\pm1\rangle$, and the crossing at $\theta \sim \pi$ and $\omega \simeq D$ corresponds to a Rabi oscillation similar to the small-angle case \cite{SM}. From Eq. \eqref{eq:gpc} we can also reveal the relation between the quasi-energy $\lambda_n$ and the geometric phase $\gamma_n$ for small $\theta$. In the adiabatic limit, the geometric phase is identical to the phase accumulated by the RILS term over a single period. In the resonant regime, the geometric phases of the two resonant states are given by the Rabi frequency accumulated over a single period. In these two situations, the measurements of the geometric phase and of the quasi-energy are equivalent. The adiabatic geometric phase can be used for measuring the rotating frequency in the adiabatic limit \cite{Ajoy2012,MacLaurin2012,Ledbetter2012}. Our analysis shows that this method still works in the nonadiabatic regime $\omega \sim D$, where the nonadiabatic geometric phase can be measured with spectroscopic or interference methods \cite{Anandan1992,Appelt1995,DAS2005318}. Moreover, as the rotating frequency can be determined with high precision by measuring the photons scattered by the nanodiamond \cite{Ahn2018,Reimann2018}, the angle $\theta$ could be measured through the Floquet quasi-energy spectrum \cite{PhysRevB.46.14675,PhysRevLett.105.257003,shu2018observation}. In the limit $\theta \ll 1$, the measurement of the quasi-energies is equivalent to measuring the Rabi frequency $\Omega$. The uncertainty of the angular measurement $\delta\theta = \delta \Omega/(\sqrt{2}\omega\cos \theta)$ is inversely proportional to the rotating frequency $\omega$ and is minimized around $\theta =0$. \begin{figure} \centering \includegraphics[width=0.75\linewidth]{fig3} \caption{(a) Non-adiabatic geometric phases $\gamma_{0,\pm}$ for the three cyclic states $\lambda_{0,\pm}$ at the angle $\theta = \pi/10$ and different rotating frequencies $\omega$.
In the adiabatic limit $\omega \ll D$, the geometric phases are given by $\tilde{\gamma}_{\pm 1}(\theta) = \pm 2\pi(1-\cos\theta)$. At resonance $\omega = D/\cos\theta$, the geometric phases are $ \gamma_{+1}, ~\gamma_{0},~ \gamma_{-1} = \sqrt{2}\pi\sin\theta,~ -\sqrt{2}\pi\sin\theta,~ 0$. (b) Non-adiabatic geometric phases $\gamma$ for the three cyclic states under different rotating frequencies $\omega$ and angles $\theta$. } \label{fig:gp} \end{figure} Under an external magnetic field, the non-adiabatic geometric phase given by Eq. \eqref{eq:gp} is similar to the case without a magnetic field \cite{SM}. For simplicity, we only analyze the limiting cases of $\omega \sim 0$ and near resonance, for small angle $\theta$, where the Hamiltonian is given by Eq. \eqref{eq:HTBs}. From Eq. \eqref{eq:gp}, in the $\omega \sim 0$ case the geometric phases are $2\pi(1-\cos\theta)m$ for the $S_z$ eigenstates $|m\rangle$, the same as at zero field. This is because the magnetic field only shifts the energy levels and therefore only affects the dynamical phases. At resonance, the geometric phases of the two resonant states are proportional to the Rabi frequency, given by $\pm\sqrt{2}\pi\sin\theta$, which are also the same as at zero field. We briefly discuss the experimental feasibility. As silica-based nano-particles have been driven to the GHz rotating-frequency regime, we believe that it is also possible to optically drive a nanodiamond to rotate at GHz frequencies. The main obstacle is the optical heating of the NV center in diamond, which could be resolved by adopting pure diamond \cite{Frangeskou2018} and using a nano-refrigerator \cite{Rahman2017}. The Rabi frequency induced by a nanodiamond rotating at GHz frequencies is well above the MHz scale. The dephasing time of the NV center in a nanodiamond is on the order of $\mu$s or longer \cite{Knowles2014}. Therefore, the rotation-induced Rabi oscillation or the avoided level crossing could be observed.
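The zero-field limiting values quoted above can be cross-checked by evaluating Eq. \eqref{eq:gpc} directly from the eigendecomposition of Eq. \eqref{eq:H_I} (a sketch in Python/NumPy; $D = 1$, $\theta = 0.3$, and the deep-adiabatic choice $\omega = 10^{-3}D$ are illustrative assumptions):

```python
import numpy as np

def H_I(omega, theta, D=1.0):
    """Interaction-picture Hamiltonian of Eq. (2), with phi_0 = 0."""
    Om = np.sqrt(2) * omega * np.sin(theta)
    return np.array([[D - omega * np.cos(theta), -Om / 2, 0.0],
                     [-Om / 2, 0.0, -Om / 2],
                     [0.0, -Om / 2, D + omega * np.cos(theta)]])

def geometric_phases(omega, theta, D=1.0):
    """Evaluate Eq. (gpc) and label each state by its dominant S_z component."""
    w, v = np.linalg.eigh(H_I(omega, theta, D))
    gamma = (2 * np.pi / omega) * (w - (D - omega) * np.abs(v[0]) ** 2
                                   - (D + omega) * np.abs(v[2]) ** 2)
    labels = [(1, 0, -1)[np.argmax(np.abs(v[:, k]))] for k in range(3)]
    return dict(zip(labels, gamma))

g = geometric_phases(omega=1e-3, theta=0.3)  # adiabatic regime, omega << D
```

In this regime the computed phases reproduce $\gamma_{\pm1} \approx \pm 2\pi(1-\cos\theta)$ and $\gamma_0 \approx 0$.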
In conclusion, we study the electron spin dynamics and geometric phase in a levitated ultra-fast rotating nanodiamond, without the adiabatic approximation. Rabi oscillation appears if the rotating frequency matches the electron spin level splitting, even without an external magnetic field. We define and calculate the nonadiabatic geometric phase of the electron spin in a rotating frame, which could be used for an angular sensor. Similar phenomena may also appear for nuclear spins in a rotating frame, at much lower rotating frequencies. \begin{acknowledgments} Z.Q.Y. is supported by National Natural Science Foundation of China Nos. 61771278 and 61435007, and the Joint Foundation of Ministry of Education of China (6141A02011604). T.L. is supported by NSF under Grant No. PHY-1555035. We thank Nan Zhao and Ying Li for helpful discussions. \end{acknowledgments} \section{Floquet formalism}\label{sec:floquet} According to the Floquet theorem, the solutions to the Schr\"odinger equation with a $T = 2\pi/\omega$-periodic Hamiltonian \begin{equation} i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle = H(t)|\psi(t)\rangle \end{equation} can be written as a superposition of the Floquet states \begin{equation}\label{eq:state} |\psi(t)\rangle = \sum_{\alpha}\langle\Psi_\alpha(t')|\psi(t')\rangle\,|\Psi_\alpha(t)\rangle, \end{equation} for any $0\le t'<T$. The Floquet states are given by \begin{equation}\label{eq:fstates} |\Psi_\alpha(t)\rangle = e^{-i\lambda_\alpha t}|\lambda_\alpha(t)\rangle, \end{equation} with real quasienergies $\lambda_\alpha$ and $T$-periodic part $|\lambda_\alpha(t)\rangle$.
By substituting a Floquet solution $|\Psi_\alpha(t)\rangle$ into the Schr\"odinger equation, we find that the quasienergies and the periodic parts satisfy the equation \begin{equation} H_F(t)|\lambda_\alpha(t)\rangle = \lambda_\alpha |\lambda_\alpha(t)\rangle, \end{equation} where $H_F(t)$ is the Floquet Hamiltonian \begin{equation} H_F(t) = H(t) -i\hbar\frac{\partial}{\partial t}. \end{equation} It can be shown that $\lambda_\alpha$ are uniquely determined only up to a multiple of $\hbar\omega$, i.e., $\lambda_\alpha + n\hbar\omega$ is also a quasienergy, corresponding to the periodic state $e^{-i n\omega t}|\lambda_\alpha(t)\rangle$. The quasi-energies are obtained by diagonalizing the Floquet matrix, which is the Floquet Hamiltonian $H_F(t)$ in the basis of the Hilbert space $\mathcal{\Gamma}\otimes\mathcal{H}$. Here $\mathcal{\Gamma} = \{e^{i n \omega t}\,|\,n=0,\pm1,\pm2,\ldots\}$ is the Hilbert space of square-integrable $T$-periodic functions, and $\mathcal{H}$ is the original Hilbert space, i.e., the spin states in our model. The Floquet matrix is infinite dimensional, and in practice it is truncated at some finite dimension; this is required for our Hamiltonian with an external magnetic field, Eq. (5) of the main text. If the Floquet matrix is block-diagonal, we can diagonalize each block and obtain an analytic solution; this is the case for our Hamiltonian Eq. (2) of the main text. In the absence of an external magnetic field, the periodic Floquet states at $t = 0$ and the quasienergies are given by the eigenvectors of the time-independent Hamiltonian in the interaction picture, Eq. (2) of the main text, and the corresponding eigenvalues, denoted by $|\lambda_n\rangle$ and $\lambda_n$, $n = 0,\pm1$. According to Eq.
\eqref{eq:fstates}, the Floquet states are given by the time evolution of $|\lambda_n\rangle$ multiplied by the phase factor of the quasienergies, which reads \begin{equation}\label{sup:eq:UF} |\Psi_n(t)\rangle = e^{-i\lambda_n t}|\lambda_n (t)\rangle = e^{-i\lambda_n t}e^{i\omega S_z t}|\lambda_n\rangle. \end{equation} Substituting into Eq. \eqref{eq:state}, we get the solution of the Floquet Hamiltonian Eq. (1) of the main text. \section{Spin dynamics and Effective Rabi Oscillation} Here we present the numerical results for the solutions of the Schrödinger equation with the Hamiltonian Eq. (1) of the main text, using the method discussed in the previous section. The spin is assumed to be initially prepared in the state $|0\rangle$. The time evolution of the level populations is plotted in Fig. \ref{fig:noB}. In the limit $\theta \ll 1$, the electron spin dynamics can be described perfectly by the effective Hamiltonian $$H_{\mathrm{eff}}=\begin{pmatrix} 0 & -\Omega/2 \\ -\Omega/2 & 0 \end{pmatrix}.$$ In Fig. \ref{fig:noB}(a), we take $\theta = \pi/100$, at the resonant frequency $\omega_+ = D/\cos\theta$. The numerical results show that $|0\rangle$ and $|+1\rangle$ undergo almost perfect Rabi oscillation, while the population in $|-1\rangle$ remains nearly unchanged. The numerical results are consistent with the prediction of the effective Hamiltonian $H_{\mathrm{eff}}$. In Fig. \ref{fig:noB}(b), we plot a case with a large angle, $\theta = \pi/4$, at the resonant frequency. It is found that the oscillation deviates slightly from being sinusoidal and the dynamics is not limited to the $|0\rangle$ and $|+1\rangle$ subspace. \begin{figure} \centering \includegraphics[width=.75\linewidth]{r1} \includegraphics[width=.75\linewidth]{r2} \caption{The time evolution of the spin level populations when the resonant condition $\omega=\omega_+$ is fulfilled. The spin is prepared in the state $|0\rangle$ initially.
The relative angle $\theta$ between the solid spin and the angular velocity is (a) $\pi/100$ and (b) $\pi/4$.} \label{fig:noB} \end{figure} From Eq. (2) in the main text, the $|\pm1\rangle$ states are degenerate at $\theta = \pi/2$ and both couple to $|0\rangle$. By adiabatically eliminating the level $|0\rangle$, the effective coupling between $|+1\rangle$ and $|-1\rangle$ causes a Rabi oscillation between them. The Rabi frequency can be obtained from Eq. (3) in the main text at $\theta = \pi/2$ with the perturbative method. The quasienergy difference corresponding to $|\pm1\rangle$, shown in Fig. 2(b) in the main text, is given by the Rabi frequency $\Omega_{\pm1} = \omega^2/D$. This Rabi oscillation is much slower than $\omega$ in the adiabatic limit, but the spin mixing causes the geometric phases of $|\pm1\rangle$ to cross. This explains why Fig. 3(b) in the main text has a crossing near $\theta = \pi/2$. \section{Hamiltonian of a rotating NV center with external magnetic field} If the NV center is rotating in the presence of a non-zero external static magnetic field, the total Hamiltonian reads \begin{equation} H(t) = R(t)H_0 R^\dagger(t) + g \mu_B B\bm{S}\cdot\bm{n}, \end{equation} where $g$ is the $g$-factor of the solid spin and $B$ is the strength of the magnetic field along direction $\bm{n}$. To simplify the problem, we assume that the magnetic field is uniform and along the rotation $z$ axis. In this way, the total Hamiltonian including the magnetic field becomes $H(t) = R(t) DS_z^2 R^\dagger(t) + g \mu_B B S_z$. We can imagine that the magnetic field is rotating with the solid spin and write the equivalent Hamiltonian as $H(t) = R(t)(DS_z^2 + g \mu_B B \bm{S}\cdot\bm{n})R^\dagger(t) = R(t)H_1 R^\dagger(t)$, where $\bm{n} = (\sin\theta,\cos\theta,0)$ is the unit vector along the magnetic field direction in the rotating frame, and $\Delta = -g\mu_B B$.
The time evolution of this Hamiltonian can be numerically solved by the Floquet formalism of Section \ref{sec:floquet}. \section{Non-adiabatic geometric phase for periodic Hamiltonian} The Floquet theorem in Section \ref{sec:floquet} implies that the time-evolution operator for a periodic Hamiltonian can be expressed as \begin{equation}\label{sup:eq:U} U(t) = Z(t)e^{i M t} \end{equation} where $Z(t)$ is a unitary $T$-periodic operator, i.e., $Z(T) = Z(0) = 1$, and $M$ is a Hermitian operator. According to \cite{PhysRevLett.58.1593,Moore1990,Moore1990a}, the non-adiabatic geometric phases for the Floquet states (which are cyclic) are given by \begin{equation}\label{sup:eq:gp} \gamma_n = i \int_0^T \langle \lambda_n | Z(t)^\dagger \frac{d}{dt} Z(t)|\lambda_n\rangle dt. \end{equation} For the zero-field Hamiltonian, the evolution operator can be expressed as $U(t) = Z(t)e^{iM t}$ as in Eq. \eqref{sup:eq:UF}, where $Z(t) = e^{i\omega S_z t}$ is the unitary $T$-periodic operator and $M = \tilde{H}_I$ is the Hermitian operator in Eq. \eqref{sup:eq:U}. Since we have moved the geometric phase into the dynamical phase by the method in \cite{Giavarini1989}, to recover the geometric phase from the dynamical phase, $Z(t)$ in Eq. \eqref{sup:eq:gp} should include an additional rotational transformation. As we have demonstrated in the main text using Fig. 1(b), $Z(t)$ should be chosen as the unitary transform $W^\dagger(t)$, i.e., $Z(t) = W^\dagger(t)$. Then the geometric phases are given by Eq. (4) in the main text.
\section{Introduction} Given independent samples from an unknown distribution, missing mass estimation asks for the total probability of the unseen elements. Missing mass estimation is a basic problem in statistics and has wide applications in several fields ranging from language modeling~\cite{GaleS95,ChenG96} to ecology~\cite{ChaoL92}. Perhaps the most used missing mass estimator is the Good-Turing estimator, which was proposed in a seminal paper by I. J. Good and Alan Turing in 1953~\cite{doi:10.1093/biomet/40.3-4.237}. The Good-Turing estimator is used in support estimators~\cite{ChaoL92}, entropy estimators~\cite{VuYK07} and unseen species estimators~\cite{ShenCL03}. To describe the estimator and the results, we need a modicum of nomenclature. Let $p$ be an underlying unknown distribution over an unknown domain $\mathcal{X}$. Let $X^{n}\triangleq(X_{1},X_{2},\ldots,X_{n})$ be $n$ independent samples from $p$. For $x \in \mathcal{X}$, let $N_x(X^n)$ be the number of appearances of $x$ in $X^n$. Upon observing $X^n$, our goal is to estimate the missing mass \begin{equation} M_{0}(X^{n})\triangleq\sum_{u\in\mathcal{X}}p(u)\mathbb{I}(N_{u}(X^{n})=0),\label{eq:1} \end{equation} where $\mathbb{I}(\cdot)$ denotes the indicator function. For example, if $\mathcal{X} = \{a,b,c,d\}$ and $X^3 = b \, c \, b$, then $M_0(X^3) = p(a) + p(d)$. The above sampling model for estimation is termed the multinomial model. We note that $1-M_0(X^n)$ is often referred to as the sample coverage in the literature~\cite{CCG12}. An estimator for the missing mass $\hat{M}_0(X^n)$ is a mapping from $\mathcal{X}^n \to [0,1]$. For a distribution $p$, the $\ell^2_2$ risk of the estimator $\hat{M}_0(X^n)$ is \[ R_n(\hat{M}_0, p) \triangleq E_{X^n \sim p} [(\hat{M}_0(X^n) - M_0(X^n))^2], \] and the worst-case risk over all distributions is \[ R_n(\hat{M}_0) \triangleq \max_p R_n(\hat{M}_0, p), \] and the minimax mean squared loss or minimax risk is \[ R^*_n = \min_{\hat{M}_0} R_n(\hat{M}_0).
\] The goal of this paper is to characterize $R^*_n$. \subsection{Good-Turing estimator and previous results} Let \[ \Phi_{i}(X^{n})\triangleq\sum_{u\in\mathcal{X}}\mathbb{I}(N_{u}(X^{n})=i) \] denote the number of symbols that have appeared $i$ times in $X^{n}$, $1\le i\le n$. For example, if $X^3 = a, b, c$, then $\Phi_1 = 3$ and $\Phi_i = 0$ for all $i > 1$. The Good-Turing estimator~\cite{doi:10.1093/biomet/40.3-4.237} for the missing mass is \[ M^{\textrm{GT}}(X^n) \triangleq \frac{\Phi_1(X^n)}{n}. \] One of the first theoretical analyses of the Good-Turing estimator was in \cite{McAllester:2000:CRG:648299.755182}, where it was shown that \begin{equation} \left|E\left[M^{\textrm{GT}}(X^{n})-M_{0}(X^{n})\right]\right|\le\frac{1}{n}.\label{eq:3} \end{equation} This shows that the bias of the Good-Turing estimator falls as $1/n$. They further showed that with probability $\geq 1- \delta$, \[ \left \lvert M^{\textrm{GT}}(X^{n})-M_{0}(X^{n}) \right \rvert \leq \frac{2}{n} + \sqrt{\frac{2\ln (3/\delta)}{n}} \left( 1+ 2 \ln (3n/\delta) \right). \] Various properties of the Good-Turing estimator and several variations of it have been analyzed for distribution estimation and compression~\cite{OrlitskySZ03, DrukhM04, WagnerVK06, wagner2007better, OhannessianD12, AcharyaJOS13, OS15}. Several concentration results on missing mass estimation are also known~\cite{berend2013concentration, ben2017concentration}. Despite all this work, the risk of the Good-Turing estimator and the minimax risk of missing mass estimation have still not been conclusively established. \subsection{New results} Unlike parameters of a distribution, the missing mass itself is a function of the observed sample, which makes finding the exact minimax risk difficult.
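For concreteness, both the Good-Turing estimate $\Phi_1/n$ and the true missing mass can be computed in a few lines. The sketch below (Python; the uniform choice of $p$ is an illustrative assumption) uses the example from the introduction, $\mathcal{X} = \{a,b,c,d\}$ and $X^3 = b\,c\,b$:

```python
from collections import Counter

def good_turing_missing_mass(sample):
    """Good-Turing estimate Phi_1 / n: fraction of symbols seen exactly once."""
    counts = Counter(sample)
    phi1 = sum(1 for c in counts.values() if c == 1)
    return phi1 / len(sample)

def missing_mass(sample, p):
    """True missing mass M_0 for a known distribution p (dict: symbol -> prob)."""
    seen = set(sample)
    return sum(prob for sym, prob in p.items() if sym not in seen)

sample = ["b", "c", "b"]                          # X^3 = b c b
p = {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}  # uniform p (assumption)
```

Here $\Phi_1 = 1$ (only $c$ appears exactly once), so the estimate is $1/3$, while the true missing mass is $p(a) + p(d) = 0.5$.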
We first analyze the risk of the Good-Turing estimator and show that for any distribution $p$, \begin{align*} R_n(M^{\textrm{GT}}, p )&= \frac{1}{n}E\left[\frac{2\Phi_2}{n}+\frac{\Phi_1}{n} \left(1- \frac{\Phi_1}{n} \right) \right] + o \left( \frac{1}{n}\right), \end{align*} where $\Phi_i$ is abbreviated notation for $\Phi_i(X^n)$. By maximizing the RHS above over all distributions, in Theorem~\ref{thm:4}, we show that \[ \frac{0.6080}{n}+ o\left(\frac{1}{n} \right) \leq R_n(M^{\textrm{GT}}) \leq \frac{0.6179}{n} + o\left( \frac{1}{n}\right). \] We note that under the multinomial model, the numbers of occurrences of symbols are correlated, which makes finding the worst-case distribution for the Good-Turing estimator difficult. We then prove estimator-independent information-theoretic lower bounds on $R^*_n$ using two approaches. We first compute the lower bound via the Dirichlet prior approach~\cite{Krichevskiy98}. In Lemma~\ref{lem:Prior3}, we show that \[ R^*_n \geq \frac{4}{27n}. \] We then improve the constant by reducing the problem of missing mass estimation to that of distribution estimation. In particular, in Theorem~\ref{thm:lower_dist}, we show that \[ R^*_n \geq \frac{1}{4n} + o\left( \frac{1}{n}\right). \] Combining the lower and upper bounds, we get \[ \frac{0.25}{n} + o\left( \frac{1}{n}\right) \leq R^*_n \leq \frac{0.6179}{n} + o\left( \frac{1}{n}\right). \] Finding the exact minimax risk for the missing mass estimation problem remains an open question. The rest of the paper is organized as follows. In Section~\ref{sec:upper}, we analyze the Good-Turing estimator. In Section~\ref{sec:prior}, we use the Dirichlet prior approach to obtain lower bounds, and in Section~\ref{sec:dist} we obtain lower bounds via reduction. \section{Risk of Good-Turing Estimator} \label{sec:upper} The analysis of \cite{McAllester:2000:CRG:648299.755182} can be extended to characterize the risk of the Good-Turing estimator for missing mass.
The squared error of the Good-Turing estimator $M^{\textrm{GT}}(X^n)$ can be written down as follows: \begin{align} &\left(M^{\textrm{GT}}(X^{n})-M_{0}(X^{n})\right)^{2}\nonumber \\ &\quad =\left(\sum_{u\in\mathcal{X}}\frac{1}{n}\mathbb{I}(N_{u}=1)-p(u)\mathbb{I}(N_{u}=0)\right)\nonumber\\ &\qquad\qquad\left(\sum_{v\in\mathcal{X}}\frac{1}{n}\mathbb{I}(N_{v}=1)-p(v)\mathbb{I}(N_{v}=0)\right)\nonumber \\ &\quad=\frac{1}{n^2}\sum_{u,v\in\mathcal{X}}\bigg(\mathbb{I}(N_{u}=1)\mathbb{I}(N_{v}=1)\nonumber\\ &\qquad\qquad\qquad-2np(u)\mathbb{I}(N_{u}=0)\mathbb{I}(N_{v}=1)\nonumber\\ &\qquad\qquad\qquad+n^2p(u)p(v)\mathbb{I}(N_{u}=0)\mathbb{I}(N_{v}=0)\bigg)\label{eq:5} \end{align} For $u,v\in\mathcal{X}$, $E[\mathbb{I}(N_{u}=i)\mathbb{I}(N_{v}=j)]=\mathbb{P}(N_{u}=i,N_{v}=j)$. Using the notation $P_{n}(i,j)=\mathbb{P}(N_{u}(X^{n})=i,N_{v}(X^{n})=j)$, we get \begin{align} R_n(M^{\textrm{GT}},p) & =\frac{1}{n^2}\sum_{u,v\in\mathcal{X}}\bigg(P_{n}(1,1)-2np(u)P_{n}(0,1)\nonumber\\ &\qquad\qquad\qquad+n^2p(u)p(v)P_{n}(0,0)\bigg).\label{eq:6} \end{align} The probability $P_{n}(i,j)$ can be written down as \begin{equation} P_{n}(i,j)=\begin{cases} \binom{n}{i\ j}\ p(u)^{i}p(v)^{j}(1-p(u)-p(v))^{n-i-j},\; u\ne v,\\[10pt] \binom{n}{i}\ p(u)^{i}(1-p(u))^{n-i},\; u=v,i=j, \end{cases}\label{eq:7} \end{equation} where $\binom{n}{i\ j}=\frac{n!}{i!j!(n-i-j)!}$ and $\binom{n}{i}=\frac{n!}{i!(n-i)!}$. The summation in (\ref{eq:6}) is first split into two cases: $u\ne v$ and $u=v$. Denoting $P(u,v)=p(u)p(v)(1-p(u)-p(v))^{n-2}$, we have, for $u\ne v$, \begin{align*} p(u)p(v)P_n(0,0)&=(1-p(u)-p(v))^2P(u,v),\\ p(u)P_n(0,1)&=n(1-p(u)-p(v))P(u,v),\\ P_n(1,1)&=n(n-1)P(u,v). \end{align*} For $u=v$, observe that $P_n(0,1)=0$.
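For small $n$ and a small alphabet, the expression \eqref{eq:6} can be checked against a brute-force enumeration of all $|\mathcal{X}|^n$ samples (a sketch in Python; the distribution and the choice $n = 4$ are illustrative assumptions):

```python
import itertools
import math

def risk_bruteforce(p, n):
    """Exact E[(M_GT - M_0)^2] by enumerating all samples of length n."""
    syms = list(p)
    total = 0.0
    for xs in itertools.product(syms, repeat=n):
        prob = math.prod(p[x] for x in xs)
        counts = {s: xs.count(s) for s in syms}
        gt = sum(1 for s in syms if counts[s] == 1) / n  # Phi_1 / n
        m0 = sum(p[s] for s in syms if counts[s] == 0)   # missing mass
        total += prob * (gt - m0) ** 2
    return total

def risk_formula(p, n):
    """Risk via Eq. (6), using the pairwise probabilities P_n(i, j) of Eq. (7)."""
    def P(i, j, pu, pv, same):
        if same:  # u = v: N_u cannot take two different values
            if i != j:
                return 0.0
            return math.comb(n, i) * pu**i * (1 - pu) ** (n - i)
        if i + j > n:
            return 0.0
        c = (math.factorial(n)
             // (math.factorial(i) * math.factorial(j) * math.factorial(n - i - j)))
        return c * pu**i * pv**j * (1 - pu - pv) ** (n - i - j)
    total = 0.0
    for u in p:
        for v in p:
            same, pu, pv = u == v, p[u], p[v]
            total += (P(1, 1, pu, pv, same)
                      - 2 * n * pu * P(0, 1, pu, pv, same)
                      + n * n * pu * pv * P(0, 0, pu, pv, same))
    return total / n**2

p = {"a": 0.5, "b": 0.3, "c": 0.2}  # illustrative distribution (assumption)
```

Both routines agree to machine precision, which is a useful guard against sign or symmetry mistakes in the expansion \eqref{eq:5}.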
Using the above observations, the summation in \eqref{eq:6} simplifies to \begin{align} &R_n(M^{\textrm{GT}},p)=\frac{1}{n}\sum_{\substack{u,v\in\mathcal{X}\\v\ne u}}P(u,v)\bigg[n\big(p(u)+p(v)\big)^2-1\bigg]\nonumber \\ &\ +\frac{1}{n}\sum_{u\in\mathcal{X}} \bigg[p(u)(1-p(u))^{n-1}+np(u)^{2}(1-p(u))^{n}\bigg].\label{eq:9} \end{align} The following lemma is useful in bounding certain terms in the first summation above as a function of $n$, independent of the unknowns $\mathcal{X}$ and $p$. \begin{lemma} \label{lem:2}For $i\ge1$, $j\ge1$, \[ \sum_{u,v\in\mathcal{X}, \red{u \neq v}}p(u)^{i}p(v)^{j}(1-p(u)-p(v))^{n}\le\dfrac{(i-1)!\ (j-1)!\ n!}{(n+i+j-2)!}. \] \end{lemma} \begin{IEEEproof} Let $X$ and $Y$ be a pair of independent and identically distributed random variables with marginal distribution $p$. Define a random variable $T(X,Y)$, whose value $T(u,v)=0$ for $u=v$ and, for $u\ne v$, $$T(u,v)=\binom{n+i+j-2}{i-1\ j-1}p(u)^{i-1}p(v)^{j-1}(1-p(u)-p(v))^{n}.$$ We see that $T(X,Y)$ is a probability for $X\ne Y$, and that it takes values in $[0,1]$ in all cases. Therefore, its expectation \begin{align*} & \red{ E[T(X,Y)] = \sum_{u, v \in \mathcal{X} \atop u \ne v} p(u)p(v) T(u,v)} \\ & \red{=}\sum_{u,v\in\mathcal{X}\atop u\ne v}\binom{n+i+j-2}{i-1\ j-1}p(u)^ip(v)^j(1-p(u)-p(v))^n\le 1, \end{align*} which concludes the proof. \end{IEEEproof} A useful univariate version of Lemma \ref{lem:2} is the following. \begin{lemma} \label{lem:1} For $i\ge1$, \[ \sum_{u\in\mathcal{X}}p(u)^{i}(1-p(u))^{n}\le\dfrac{(i-1)!\ n!}{(n+i-1)!}. \] \end{lemma} \begin{IEEEproof} For $X\sim p$, define $T(X)=\binom{n+i-1}{i-1}p(X)^{i-1}(1-p(X))^n$ and follow the proof of Lemma \ref{lem:2}. \end{IEEEproof} Using Lemma \ref{lem:2}, observe that \begin{align} \label{eq:20} \sum_{u,v\in\mathcal{X}\red{,u \neq v}}P(u,v)(p(u)+p(v))^2=o(1/n).
\end{align} Therefore, the risk can be written as \begin{align} R_n(M^{\textrm{GT}},p)&=\frac{1}{n}\bigg[\sum_{u\in\mathcal{X}} p(u)(1-p(u))^{n-1}-\sum_{\substack{u,v\in\mathcal{X}\\v\ne u}}P(u,v)\nonumber\\ &\ +\sum_{u\in\mathcal{X}} np(u)^{2}(1-p(u))^{n}\bigg]+o(1/n). \label{eq:21} \end{align} The summation terms above can be rewritten as follows: \begin{align} \sum_{u\in\mathcal{X}}p(u)(1-p(u))^{n-1}&=E\bigg[\frac{\Phi_{1}(X^{n})}{n}\bigg].\label{eq:23}\\ \sum_{u\in\mathcal{X}}np(u)^2(1-p(u))^{n}&=\frac{2}{n-1}\sum_{u\in\mathcal{X}}P_n(2,0)(1-p(u))^2\nonumber\\ &\overset{(a)}{=}\frac{2}{n-1}\sum_{u\in\mathcal{X}}P_n(2,0)\pm {o\left(\frac{1}{n}\right)}\nonumber\\ &=E\left[\frac{2\Phi_2(X^n)}{n}\right]\pm {o\left(\frac{1}{n}\right)},\label{eq:24} \end{align} where $(a)$ follows using Lemma \ref{lem:1}. \begin{align} &\sum_{\substack{u,v\in\mathcal{X}\\v\ne u}}P(u,v)=\frac{1}{n(n-1)}\sum_{\substack{u,v\in\mathcal{X}\\v\ne u}}P_n(1,1)\nonumber\\ &=\frac{1}{n(n-1)}E\bigg[\sum_{\substack{u,v\in\mathcal{X}\\v\ne u}}\mathbb{I}(N_{u}(X^{n})=1)\mathbb{I}(N_{v}(X^{n})=1)\bigg]\nonumber\\ &=E\bigg[\frac{1}{n(n-1)}\Phi_1(X^n)(\Phi_1(X^n)-1)\bigg]\nonumber\\ &=E\left[\frac{\Phi^2_{1}(X^n)}{n^2}\right]\pm o(1).\label{eq:25} \end{align} Using the above expressions in \eqref{eq:21}, we get the following characterization of the risk. \begin{theorem} The risk of the Good-Turing estimator under squared error loss satisfies \begin{equation} \label{eq:17} R_n(M^{\textrm{GT}}, p)= \frac{1}{n}E\left[\frac{2\Phi_2}{n}+\frac{\Phi_1}{n} \left(1- \frac{\Phi_1}{n} \right)\right]+ {o\left(\frac{1}{n}\right)}.
\end{equation} \label{thm:riskGT} \end{theorem} \subsection{Upper bound on risk} To obtain a tight upper bound on the risk, we start with the following upper bound on one of the terms in \eqref{eq:21}: \begin{align} \sum_{u\in\mathcal{X}}np(u)^{2}(1-p(u))^{n}&\le\sum_{u\in\mathcal{X}}p(u)\left(np(u)e^{-np(u)}\right)\nonumber\\ &\le e^{-1},\label{eq:12} \end{align} where the first step follows because $1-x\le e^{-x}$ for $0\le x\le 1$, and the second step follows because $te^{-t}\le e^{-1}$ for $t\ge0$. Using \eqref{eq:23}, \eqref{eq:24} and \eqref{eq:12} in \eqref{eq:21}, an upper bound for the risk of the Good-Turing estimator is \begin{align} R_n(M^{\textrm{GT}},p)&\le\frac{1}{n}E\left[\frac{\Phi_1}{n} \left(1- \frac{\Phi_1}{n} \right)\right]+\frac{e^{-1}}{n}\pm o\left(\frac{1}{n}\right)\nonumber\\ &\le\frac{0.25+e^{-1}}{n}\pm {o\left(\frac{1}{n}\right)},\label{eq:13} \end{align} where the last step follows because $x(1-x)\le 0.25$ for $0\le x\le 1$. The above constant $e^{-1}+0.25\approx0.6179$ is not the best possible, and could be marginally improved by a more careful analysis. However, we show that the improvement is not significant through a lower bound on the worst-case risk $R_n(M^{\textrm{GT}})=\max_pR_n(M^{\textrm{GT}},p)$, obtained by picking $p$ to be a suitable uniform distribution. \subsection{Lower bound on the Good-Turing worst-case risk} A lower bound on the worst-case risk of the Good-Turing estimator can be obtained by evaluating the risk for the uniform distribution $p_U$ on $\mathcal{X}$. Let $\left|\mathcal{X}\right|=cn$ and $p_U\left(x\right)=\frac{1}{cn}$ for all $x\in\mathcal{X}$, where $c$ is a positive constant.
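This choice of $p_U$ can also be probed by simulation. The sketch below (our own illustrative code, not part of the paper; all names are ours) estimates the scaled risk $n\,R_n(M^{\textrm{GT}},p_U)$ by Monte Carlo, drawing $n$ samples from the uniform distribution on $k=\lfloor cn\rceil$ symbols and averaging the squared error of the Good-Turing estimate $\Phi_1/n$ against the true missing mass:

```python
import random
from collections import Counter

def scaled_gt_risk(n, c, trials, seed=0):
    """Monte Carlo estimate of n * E[(M_0 - Phi_1/n)^2] when the source
    is uniform on k = round(c * n) symbols (illustrative sketch)."""
    rng = random.Random(seed)
    k = round(c * n)
    total = 0.0
    for _ in range(trials):
        counts = Counter(rng.randrange(k) for _ in range(n))
        phi1 = sum(1 for m in counts.values() if m == 1)  # singletons
        missing = (k - len(counts)) / k   # true missing mass under p_U
        total += (missing - phi1 / n) ** 2  # Good-Turing squared error
    return n * total / trials
```

For $n$ in the hundreds and $c$ close to $1$, the estimate hovers around $0.6$, in line with the asymptotic coefficient computed next.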
Using (\ref{eq:21}), we get \begin{align} &R_n(M^{\textrm{GT}},p_U)= \frac{1}{n}\bigg[\frac{cn\cdot n}{(cn)^{2}}\left(1-\frac{1}{cn}\right)^{n} +\frac{cn}{cn}\cdot\left(1-\frac{1}{cn}\right)^{n-1}\nonumber\\ &\quad-\left(\frac{cn}{cn}\cdot\left(1-\frac{1}{cn}\right)^{n-1}\right)^{2}\bigg]+o\left(\frac{1}{n}\right)\nonumber \\ & \overset{(a)}{=} \frac{1}{n}\left(\left(\frac{1}{c}+1\right)\left(1-\frac{1}{cn}\right)^{n}-\left(1-\frac{1}{cn}\right)^{2n}\right)+o\left(\frac{1}{n}\right)\nonumber \\ & \overset{(b)}{=} \frac{1}{n}\left(\left(\frac{1}{c}+1\right)e^{-\frac{1}{c}}-e^{-\frac{2}{c}}\right)+o\left(\frac{1}{n}\right)\label{eq:14} \end{align} where the reasoning for the steps is as follows: \begin{enumerate} \item[(a)] replacing $\left(1-\frac{1}{cn}\right)^{n-1}$ with $\left(1-\frac{1}{cn}\right)^{n}\left(1+o(1)\right)$; \item[(b)] using the fact that $\left(1-\frac{1}{cn}\right)^{n}=e^{-1/c}\left(1+o(1)\right)$. \end{enumerate} The coefficient of $\frac{1}{n}$ in (\ref{eq:14}) can be maximized numerically to obtain a maximum value of $0.6080$ at $c\approx1.1729$. Hence, from (\ref{eq:13}) and (\ref{eq:14}), we have: \begin{theorem} \label{thm:4} The worst-case risk of the Good-Turing estimator satisfies the following bounds: \begin{multline} \frac{0.6080}{n}+o\left(\frac{1}{n}\right)\leq R_n(M^{\textrm{GT}}) \leq\frac{0.6179}{n}+o\left(\frac{1}{n}\right). \end{multline} \end{theorem} Therefore, the constant in (\ref{eq:13}) is fairly tight. \section{Lower Bounds on the Minimax Risk} In this section, we consider lower bounds on the squared error risk of an arbitrary estimator of the missing mass. The main result is that the minimax risk is lower-bounded by $c/n$ for a constant $c$. Two methods are described for finding lower bounds: the first is a Dirichlet prior approach, and the second is a reduction of the missing mass problem to a distribution estimation problem.
Both approaches provide the same order of $1/n$ for the lower bound, but the second, reduction-based approach provides a better constant. However, the Dirichlet prior approach has significant potential for further optimization towards better constants, and is an interesting extension of the standard prior method to the estimation of random variables, such as the missing mass, which depend on both the distribution $p$ and the sample $X^n$. \subsection{Lower Bounds via Prior Distributions} \label{sec:prior} The first approach is to bound the minimax risk by the average risk obtained by averaging over a family of distributions with a prior. Let $P$ be a random variable over a family of distributions $\mathcal{P}$ on the alphabet $\mathcal{X}=\left\{ 0,1,2,\ldots,k-1\right\} $. In what follows, the missing mass will be denoted as $M_{0}\left(X^{n},p\right)$ to explicitly show the dependence on the distribution $p$. \begin{lemma} \label{lem:Prior1} For any missing mass estimator $\hat{M}_0(X^n)$ and any random variable $P$ over a family of distributions $\mathcal{P}$, \begin{align*} &\min_{\hat{M}_{0}}\max_{p\in\mathcal{P}}\mathbb{E}_{X^n\sim p}\left(M_{0}(X^{n},p)-\hat{M}_{0}(X^{n})\right)^{2}\\ &\qquad\qquad\ge \mathbb{E}_{X^{n}\sim P}\left[\mbox{var}_{P|X^{n}}\left[\left.M_{0}\left(X^{n},P\right)\right|X^{n}\right]\right] \end{align*} \end{lemma} \begin{IEEEproof} \begin{align*} &\min_{\hat{M}_{0}}\max_{p\in\mathcal{P}}\mathbb{E}\left(M_{0}\left(X^{n},p\right)-\hat{M}_{0}\left(X^{n}\right)\right)^{2} \nonumber\\ &\geq \min_{\hat{M}_{0}}\mathbb{E}_{P}\left(\mathbb{E}_{X^{n}|P}\left(\left.M_{0}\left(X^{n},P\right)-\hat{M}_{0}\left(X^{n}\right)\right|P\right)^{2}\right)\\ & \overset{\left(a\right)}{=} \min_{\hat{M}_{0}}\mathbb{E}_{X^{n}}\left(\mathbb{E}_{P|X^{n}}\left(\left.M_{0}\left(X^{n},P\right)-\hat{M}_{0}\left(X^{n}\right)\right|X^{n}\right)^{2}\right)\\ & \overset{\left(b\right)}{=} \mathbb{E}_{X^{n}\sim
P}\left[\mbox{var}_{P|X^{n}}\left[\left.M_{0}\left(X^{n},P\right)\right|X^{n}\right]\right] \end{align*} where (a) follows from the law of total expectation and (b) follows from the fact that the expression in (a) is minimized when $\hat{M}_{0}\left(X^{n}\right)=\mathbb{E}_{P|X^{n}}\left(\left.M_{0}\left(X^{n},P\right)\right|X^{n}\right)$. \end{IEEEproof} Lemma \ref{lem:Prior1} gives us a family of bounds depending on the distribution of the prior $P$. The RHS in Lemma \ref{lem:Prior1} can be computed exactly for a Dirichlet prior with some analysis. \begin{lemma} \label{lem:Prior2} Suppose $P$ has a Dirichlet distribution $\mbox{Dir}\left(k,\boldsymbol{\alpha}\right)$, where $\boldsymbol{\alpha}=\left(\alpha_{0},\alpha_{1},\ldots,\alpha_{k-1}\right)$. Then, we have \begin{align*} &\mathbb{E}_{X^{n}}\left[\text{var}_{P|X^{n}}\left[\left.M_{0}\left(X^{n},P\right)\right|X^{n}\right]\right] \nonumber\\ &\quad=\frac{B\left(a,n\right)}{\left(a+n\right)^{2}\left(a+n+1\right)}\left(\sum_{u\in\mathcal{X}}\frac{\alpha_{u}\left(a+n\right)-\alpha_{u}^{2}}{B\left(a-\alpha_{u},n\right)}\right.\nonumber\\ &\qquad\left.-\sum_{u\in\mathcal{X}}\sum_{v\in\mathcal{X},v\neq u}\frac{\alpha_{u}\alpha_{v}}{B\left(a-\alpha_{u}-\alpha_{v},n\right)}\right), \end{align*} where $B\left(\cdot,\cdot\right)$ is the Beta function and $a=\sum_{u\in\mathcal{X}}\alpha_{u}$. \end{lemma} We skip the details for want of space. Let $\boldsymbol{\alpha}=\left(\frac{1}{n},\frac{1}{n},\ldots,\frac{1}{n}\right)$ and $k=cn^{2}$. For this choice of parameters, the expression in Lemma \ref{lem:Prior2} can be bounded as \begin{eqnarray*} \mathbb{E}_{X^{n}}\left[\mbox{var}_{P|X^{n}}\left[\left.M_{0}\left(X^{n},P\right)\right|X^{n}\right]\right] & \geq & \frac{1}{n}\cdot\frac{c}{\left(c+1\right)^{3}}+o\left(\frac{1}{n}\right), \end{eqnarray*} where, once again, we skip the details.
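The coefficient $\frac{c}{(c+1)^3}$ in the bound above is maximized by a quick grid search; the snippet below is our own numeric check (variable names are illustrative), not part of the paper:

```python
# Maximize g(c) = c / (1 + c)^3, the coefficient of 1/n in the
# Dirichlet-prior lower bound, over a fine grid of c values.
grid = [i / 10000 for i in range(1, 30001)]        # c in (0, 3]
c_star = max(grid, key=lambda c: c / (1 + c) ** 3)
g_star = c_star / (1 + c_star) ** 3
```

The search returns $c^\ast = 1/2$ with $g(c^\ast) = 4/27 \approx 0.1481$, consistent with the first-order condition $g'(c) = (1-2c)/(1+c)^4 = 0$.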
The coefficient of $\frac{1}{n}$ attains a maximum value of $\frac{4}{27}$ when $c=\frac{1}{2}$, which results in the following bound on the minimax risk: \begin{lemma} \label{lem:Prior3} \begin{eqnarray*} \min_{\hat{M}_{0}}\max_{p\in\mathcal{P}}\mathbb{E}\left(M_{0}\left(X^{n},p\right)-\hat{M}_{0}\left(X^{n}\right)\right)^{2} & \geq & \frac{4}{27n}+o\left(\frac{1}{n}\right) \end{eqnarray*} \end{lemma} The bound is worse than the $\frac{1}{4n}$ bound obtained from distribution estimation in the next section, but it can possibly be improved by a better selection of the prior. \subsection{Lower bounds via Distribution Estimation} \label{sec:dist} To bound the minimax risk for missing mass estimation, one approach is to reduce the problem to that of estimating a distribution. Let $\mathcal{P}$ be the set of distributions over the set $\mathcal{X}=\left\{ 0,1\right\} $ such that for all $p\in\mathcal{P}$, $p\left(0\right)\geq\frac{1}{2}$. A known result (see \cite{Lehmann1998, Kamath15}, for instance) states that the minimax $\ell^{2}$ loss in estimating $p(0)$ is $\frac{1}{4n}$. More precisely, let $\hat{p}(X^n)$ be an estimator for $p(0)$ from a random sample $X^n$ distributed according to $p$. Then, we have \begin{lemma} \label{lem:DE1} \begin{eqnarray*} \min_{\hat{p}\left(0\right)}\max_{p\in\mathcal{P}}\mathbb{E}_{X^n\sim p}\left(p\left(0\right)-\hat{p}\left(X^n\right)\right)^{2} & = & \frac{1}{4n}+o\left(\frac{1}{n}\right) \end{eqnarray*} \end{lemma} For an arbitrary positive integer $k$, let $\mathcal{P}_{c}$ be the set of distributions over the set $\mathcal{X}=\left\{ 0,1,2,\ldots,k\right\} $, such that for any $p_{c}\in\mathcal{P}_{c}$, we have $p_{c}\left(0\right)\geq\frac{1}{2}$ and $p_{c}\left(i\right)=\frac{1-p_{c}\left(0\right)}{k}$ for all $i\geq1$. We can use Lemma \ref{lem:DE1} to obtain minimax bounds for estimating $p_{c}\left(0\right)$ for this family of distributions as well.
Let $\hat{p}_c(X^n)$ be an estimator for $p_c$ from a random sample $X^n$ distributed according to $p_c$. Let $\hat{p}_c(X^n,i)$ be the probability $\hat{p}_c$ assigns to the symbol $i$. \begin{lemma} \label{lem:DE2} \begin{eqnarray*} \min_{\hat{p}\left(0\right)}\max_{p\in\mathcal{P}_{c}}\mathbb{E}\left(p_{c}\left(0\right)-\hat{p}_{c}\left(X^n,0\right)\right)^{2} & \geq & \frac{1}{4n}+o\left(\frac{1}{n}\right) \end{eqnarray*} \end{lemma} \begin{IEEEproof} Suppose we want to estimate an unknown distribution $p\in\mathcal{P}$ and we have an estimator $\hat{p}_{c}$ for distributions in $\mathcal{P}_{c}$. Then we can use $\hat{p}_{c}$ to estimate $p$ as follows. Take the observed sample distributed according to $p$, and if it is $0$, keep it as it is. If it is $1$, then replace it with a uniformly sampled random variable over $\left\{ 1,2,\ldots,k\right\} $. The result of this sampling process is a distribution $p_{c}$ in $\mathcal{P}_{c}$ with $p_{c}\left(0\right)=p\left(0\right)$. Thus, any estimator for distributions in $\mathcal{P}_{c}$ can be converted into an estimator for distributions in $\mathcal{P}$, and \begin{align*} \min_{\hat{p}\left(0\right)}\max_{p\in\mathcal{P}_{c}}\mathbb{E}_{X^n\sim p_c}&\left(p_{c}\left(0\right)-\hat{p}_{c}\left(X^n,0\right)\right)^{2}\\ &\geq \min_{\hat{p}\left(0\right)}\max_{p\in\mathcal{P}}\mathbb{E}_{X^n\sim p}\left(p\left(0\right)-\hat{p}\left(X^n\right)\right)^{2}, \end{align*} and the proof follows from Lemma \ref{lem:DE1}. \end{IEEEproof} \begin{lemma} \label{lem:DE3}Let $k=e^{n}$. With probability at least $1-1/2^n$, the missing mass $M_{0}\left(X^{n}\right)$ satisfies $$M_{0}\left(X^{n}\right) = 1-p\left(0\right)+O\left(ne^{-n}\right).$$ \end{lemma} \begin{IEEEproof} The probability that the symbol $0$ appears at least once in $X^n$ is $1-(1-p(0))^n\geq 1-1/2^n$. Furthermore, at most $n$ distinct symbols other than $0$ can appear in $X^{n}$.
Hence, with probability at least $1-1/2^n$, the observed mass $1-M_{0}\left(X^{n}\right)$ satisfies $$p\left(0\right)\leq 1-M_{0}\left(X^{n}\right) \leq p\left(0\right)+\left(1-p\left(0\right)\right)ne^{-n},$$ and the lemma follows. \end{IEEEproof} From Lemmas \ref{lem:DE2} and \ref{lem:DE3}, we can obtain a lower bound of $1/4n$ on the minimax risk of missing mass estimation. Combining the lower bound with the upper bound on the risk of the Good-Turing estimator from Theorem \ref{thm:4}, we have the following: \begin{theorem} \label{thm:lower_dist} The minimax risk of missing mass estimation, denoted $R_n^*$, satisfies the following bounds: \[ \frac{0.25}{n}+o\left(\frac{1}{n}\right) \leq R_n^* \leq \frac{0.6179}{n}+o\left(\frac{1}{n}\right). \] \end{theorem} \section{Summary and Future Directions} We studied the problem of missing mass estimation and showed that the minimax risk lies between $0.25/n$ and $0.617/n$. We further showed that the worst-case risk of the Good-Turing estimator lies between $0.608/n$ and $0.617/n$. Our results pose several interesting questions for future work. Two natural questions are: (1) are there priors which yield better lower bounds on the minimax risk of missing mass? and (2) are there estimators that have better risk than the Good-Turing estimator? We finally remark that it might be interesting to see if the minimax risk results imply better concentration results for the missing mass and the Good-Turing estimator. \section{Acknowledgements} The authors thank Alon Orlitsky for helpful discussions. Ananda Theertha Suresh thanks Jayadev Acharya for helpful comments. \bibliographystyle{IEEEtran}
\section{Introduction} Toric geometry is considered as a nice tool to study complex varieties used in physics, including string theory and related models\cite{1,2}. The key point of this method is that the geometric properties of such manifolds are encoded in toric data placed on a polytope consisting of vertices linked by edges. The vertices satisfy toric constraint equations, which have been explored to solve many string theory problems such as the absence of non-abelian gauge symmetries in ten dimensional type II superstring spectra\cite{3}. Moreover, toric geometry has also been used to build mirror manifolds, providing an excellent way to understand the extension of T-duality in the presence of D-branes moving near toric Calabi-Yau singularities using combinatorial calculations \cite{4}. In particular, these manifolds have been used in the context of $N = 2$ four dimensional quantum field theories in order to obtain exact results using local mirror symmetry\cite{3}. Besides such applications, toric geometry has also been explored to understand a class of black hole solutions obtained from type II superstrings on local Calabi-Yau manifolds \cite{5,6}. Recently, black hole physics has found a place in quantum information theory using qubit building blocks. More precisely, many connections have been established, including the link with STU black holes as proposed in \cite{7,8,9}. More recently, an extension to extremal black branes derived from the $T^n$ toroidal compactification of type IIA superstring has been proposed in \cite{10}. Concretely, it has been shown that the corresponding physics can be related to $n$ qubit systems via the real Hodge diagram of such compact manifolds. The analysis has been adapted to $T^{n|n}$ supermanifolds by supplementing fermionic coordinates associated with the superqubit formalism and its relation to supersymmetric models.
The aim of this paper is to contribute to this program by introducing colored toric geometry and its relation to Adinkra graph theory to approach qubit information systems. An objective here is to connect three different subjects, namely toric geometry, Adinkras and quantum information theory. This link could be explored to deal with qubit systems using geometry, considered as a powerful tool to understand modern physics. As an illustration, we examine lower dimensional qubit systems. In particular, we consider in some detail the cases of one, two and three qubits, and we find that they are linked with the $\bf CP^1$, $\bf CP^1\times CP^1$ and $\bf CP^1\times CP^1\times CP^1$ toric varieties respectively. Using a geometric procedure referred to as colored toric geometry, we show that the qubit physics can be converted into a scenario working with toric data of such manifolds by help of Adinkra graph theory. The present paper is organized as follows. Section 2 provides materials on how colored toric geometry may be used to discuss qubit information systems. The connection with Adinkra graph theory is investigated in Section 3, where the focus is on a one-to-one correspondence between Adinkras, colored toric geometry and qubit systems. Operations on toric graphs are employed in Section 4 when studying universal quantum gates. Section 5 is devoted to some concluding remarks. \section{ Colored toric geometry and Adinkras} Before giving a colored toric realization of qubit systems, we present an overview of ordinary toric geometry. It has been realized that such a geometry is a powerful tool to deal with complex Calabi-Yau manifolds used in string theory compactification and related subjects\cite{2}. Many examples have been elaborated in recent years, producing non-trivial geometries.
Roughly speaking, an $n$-complex dimensional toric manifold, which we denote as $M_{\triangle }^{n},$ is obtained by considering the $(n+r)$-dimensional complex space $C^{n+r},$ parameterized by homogeneous coordinates $\{x=(x_{1},x_{2},x_{3},...,x_{n+r})\},$ and $r$ toric transformations $T_{a}$ acting on the $x_{i}$'s as follows \begin{equation} T_{a}:x_{i}\rightarrow x_{i}\left( \lambda _{a}^{q_{i}^{a}}\right). \end{equation} Here, the $\lambda _{a}$'s are $r$ non-vanishing complex parameters. For each $a$, the $q_{i}^{a}$ are integers, called Mori vectors, encoding much geometrical information on the manifold and its applications to string theory physics. In fact, these toric manifolds can be identified with the coset space $C^{n+r}/C^{\ast r}$. A nice feature of this construction is its toric graphic realization. Concretely, this realization is generally represented by an integral polytope $\Delta $, namely a toric diagram, spanned by $(n+r)$ vertices ${v}_{i}$ of the standard lattice $Z^{n}$. The toric data $\{v_i,q^a_i\}$ should satisfy the following $r$ relations \begin{equation} \sum_{i=0}^{n+r-1}q_{i}^{a}{v}_{i}=0,\qquad a=1,\ldots,r. \end{equation} Thus, these equations encode the geometric data of $M_{\triangle }^{n}$. In connection with lower dimensional field theory, it is worth noting that the $q_{i}^{a}$ integers are interpreted, in the ${\cal N}=2$ gauged linear sigma model language, as the $U(1)^{r}$ gauge charges of ${\cal N}=2$ chiral multiplets. Moreover, they also have a nice geometric interpretation in terms of the intersections of complex curves $C_{a}$ and divisors $D_i$ of $M_{\triangle}^{n}$ \cite{3,4,11}. This remarkable link has been explored in many places in physics. In particular, it has been used to build type IIA local geometries. The simplest example in toric geometry, playing a primordial role as a building block of higher dimensional toric varieties, is $\bf CP^1$.
It is defined by $r = 1$, and the Mori vector charge takes the values $q_i = (1,1)$. This geometry has a $U(1)$ toric action acting as follows \begin{equation} z\to e^{i\theta} z, \end{equation} where $z=\frac{x_1}{x_2}$, with two fixed points $v_0$ and $v_1$ placed on the real line. The latter, describing the north and south poles of this geometry, which is regarded as the (real) two-sphere ${\bf S}^2\sim \bf CP^1$, satisfy the following toric constraint equation \begin{equation} v_0+v_1=0. \end{equation} In toric geometry language, $\bf CP^1$ is represented by a toric graph identified with an interval $[v_0,v_1]$ with a circle on top. The circle vanishes at the end points $v_0$ and $v_1$. This toric representation can be extended to the $n$-dimensional case in different ways. The natural one is the projective space $\bf CP^n$. In this case, the $\bf S^1$ circle fibration of $\bf CP^1$ is replaced by a ${\bf T}^n$ fibration over an $n$-dimensional simplex (regular polytope). In fact, the ${\bf T}^n$ collapses to a ${\bf T}^{n-1}$ on each of the $n$ faces of the simplex, and to a ${\bf T}^{n-2}$ on each of the $(n-2)$-dimensional intersections of these faces, etc. The second way, which we are interested in here, is a class of toric varieties given by a trivial product of one dimensional projective spaces $\bf CP^1$, admitting a similar description. We will show later on that this class can be used to elaborate a graphic representation of quantum information systems, using ideas inspired by Adinkra graph theory and related issues [12-18]. For simplicity, we deal with the case of $\bf CP^1 \times CP^1.$ For higher dimensional geometries $\bf \bigotimes_{i=1}^n{CP}^{1}_i$ and their blow ups, the toric descriptions can be obtained in a similar way. In fact, they are $n$ dimensional toric manifolds exhibiting $U(1)^n$ toric actions.
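The defining relations $\sum_i q_i^a\,v_i = 0$ are easy to verify mechanically. The sketch below is our own illustrative code (not from the paper), with vertex placements chosen by hand; it checks the constraints for $\bf CP^1$ and for a product of two $\bf CP^1$'s:

```python
def satisfies_toric_constraints(vertices, mori_vectors):
    """Check that sum_i q_i^a * v_i = 0 holds for every Mori vector q^a."""
    dim = len(vertices[0])
    for q in mori_vectors:
        total = [sum(qi * v[d] for qi, v in zip(q, vertices))
                 for d in range(dim)]
        if any(total):          # every component must vanish
            return False
    return True

# CP^1: r = 1, q = (1, 1), two fixed points with v0 + v1 = 0 on the line.
cp1_ok = satisfies_toric_constraints([(1,), (-1,)], [(1, 1)])

# CP^1 x CP^1: four fixed points in Z^2 (ordered v00, v01, v10, v11)
# annihilated by the two Mori vectors q^1 = (1,0,0,1), q^2 = (0,1,1,0).
cp1xcp1_ok = satisfies_toric_constraints(
    [(-1, 0), (0, 1), (0, -1), (1, 0)],
    [(1, 0, 0, 1), (0, 1, 1, 0)])
```

Both checks return `True`; misplacing a vertex (e.g., both $\bf CP^1$ vertices at $+1$) makes the check fail.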
A close inspection shows that there is a similarity between the toric graphs of such manifolds and qubit systems, via a link with Adinkra graph theory. To make contact with quantum systems, we reconsider toric geometry by implementing new toric data associated with colors, producing a colored toric geometry. In this scenario, the toric data $\{v_i,q^a_i\}$ will be replaced by \begin{equation} \{v_i, q^a_i, c_j,\;\;j=1,\ldots n \}, \end{equation} where the $c_j$ indicate the colors of the edges linking the vertices. Roughly speaking, the connection that we are after requires that the toric graph should consist of $n+r$ vertices and $n$ colors. In fact, consider the special class of toric manifolds associated with $\bigotimes_{i=1}^n{CP}^{1}_i$, with $U(1)^n$ toric actions exhibiting $2^n$ fixed points $v_i$. In toric geometry language, these manifolds are represented by $2^n$ vertices $v_i$ belonging to the $Z^n$ lattice and satisfying $n$ toric equations. It is observed that these graphs share a strong resemblance with a particular class of Adinkras formed by $2^n$ nodes connected with $n$ colored edges \cite{9}. Such graphs are called regular ones, and they can be used to represent $n$-qubit systems graphically. At first sight, the connection is not obvious. However, our main argument is based on Betti number calculations. In fact, these numbers $b_i$ appear both in Adinkras and in the corresponding toric graphs. For $\bf CP^1 \times CP^1$, it is easy to calculate such numbers. They are given by \begin{eqnarray} b_0=1, \qquad b_2=2,\qquad b_4=1. \end{eqnarray} Indeed, these numbers can be identified with the $(1,2,1)$ data used in the $n=2$ classification of Adinkras. \section{ Adinkras and colored toric geometry of qubits } Inspired by combinatorial computations in quantum physics, we explore colored toric geometry to deal with qubit information systems [19-29].
Concretely, we elaborate a toric description in terms of a trivial product of one dimensional projective spaces $\bf CP^1$. To start, it is recalled that a qubit is a two-state system, which can be realized, for instance, by a spin-$1/2$ atom. The superposition state of a single qubit is generally given in Dirac notation by \begin{equation} |\psi\rangle=a_0|0\rangle+a_1 |1\rangle, \end{equation} where the $a_i$ are complex numbers satisfying the normalization condition \begin{equation} |a_0|^2+|a_1 |^2=1. \end{equation} It is remarked that this constraint can be interpreted geometrically in terms of the so-called Bloch sphere, identified with the quotient Lie group $\frac{SU(2)}{U(1)}$ \cite{1,2,3,4}. The analysis can be extended to more than one qubit, which is relevant for discussing entangled states. In fact, a two-qubit system is a four-level quantum system. Using the usual notation $|i_1i_2\rangle=|i_1\rangle|i_2\rangle$, the corresponding state superposition can be expressed as follows \begin{equation} |\psi\rangle=\sum\limits_{i_1 i_2=0,1}a_{ i_1 i_2}|i_1 i_2\rangle=a_{00}|00\rangle+a_{10} |10\rangle+a_{01}|01\rangle+a_{11} |11\rangle, \end{equation} where the $a_{i_1i_2}$ are complex numbers verifying the normalization condition \begin{equation} |a_{00}|^2+|a_{10}|^2+|a_{01}|^2+|a_{11}|^2=1, \end{equation} describing, up to a global phase, the $\bf CP^3$ projective space. For $n$ qubits, the general state has the following form \begin{equation} \label{qudit} |\psi\rangle=\sum\limits_{i_1\ldots i_n=0,1}a_{ i_1\ldots i_n}|i_1 \ldots i_n\rangle, \end{equation} where the coefficients satisfy the normalization condition \begin{equation} \sum\limits_{i_1\ldots i_n=0,1}|a_{ i_1\ldots i_n}|^2=1.
\end{equation} This condition defines, up to a global phase, the $\bf CP^{2^n-1}$ projective space generalizing the Bloch sphere associated with $n=1$.\\ Roughly, qubit systems can be represented by colored toric diagrams having a strong resemblance with a particular class of bosonic Adinkras, introduced in the study of supersymmetric representation theory by Gates and his collaborators\cite{22,23,24,25,26,27,28}. In fact, there are many kinds of such graphs. However, we consider a particular class, called regular, consisting of $2^n$ vertices linked by $n$ colored edges, as will be shown later on. An inspection of the graph theory of Adinkras and toric varieties shows that we can propose the following correspondence connecting three different subjects \begin{table}[!ht] \begin{center} \begin{tabular}{|c|c|c|} \hline Adinkras & Colored Toric Geometry & Qubit systems \\ \hline Vertices & Fixed points (vertices) & Basis states \\ \hline Number of colors & Number of toric actions (Dimension) & Number of qubits \\ \hline \end{tabular} \end{center} \caption{This table presents a one-to-one correspondence between colored toric geometry, Adinkras and qubit systems.} \label{tab2} \end{table} To see how this works in practice, we first adapt the usual toric geometry notation. Inspired by the combinatorial formalism used in quantum information theory, the previous toric data can be rewritten as follows \begin{equation} \sum\limits_{i_1\ldots i_n=0,1} q_{i_1\ldots i_n}^{a}{v}_{i_1\ldots i_n}=0,\qquad a=1,\ldots,r, \end{equation} where the vertex subscripts indicate the corresponding quantum states. To illustrate this notation, we present a model associated with the $\bf CP^1\times CP^1$ toric variety. This model is related to $n=2$ Adinkras with $(1,2,1)$ data as listed in the classification.
In this case, the combinatorial Mori vectors can take the following form \begin{eqnarray} q^1_{i_1i_2}&=&(q^1_{00},q^1_{01},q^1_{10},q^1_{11})= (1,0,0,1)\\ \nonumber q^2_{i_1i_2}&=&(q^2_{00},q^2_{01},q^2_{10},q^2_{11})= (0,1,1,0). \end{eqnarray} The manifold corresponds to the toric equations \begin{equation} \sum\limits_{i_1i_2=0,1} q_{i_1 i_2}^{a}{v}_{i_1 i_2}=0,\qquad a=1,2. \end{equation} In colored toric geometry language, it is represented by $4$ vertices ${v}_{i_1 i_2}$, belonging to $Z^2$, linked by four edges with two different colors $c_1$ and $c_2$. The toric data require the following vertices \begin{equation} v_{00}=(-1,0),\; v_{01}=(0,1), \;v_{10}=(0,-1), \;v_{11}=(1,0) \end{equation} with two colors. These data can be encoded in a toric graph describing two qubits, as illustrated in figure 1. \begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{2D-v3.eps} \end{center} \caption{Toric Adinkra graph representation for $n=2$ qubits.} \end{figure} \\ \section{ Quantum gates from geometry} Having examined the qubit objects, we now move to building quantum gates using colored toric geometry and Adinkra graph theory. The general study is beyond the scope of this paper; we consider, however, lower dimensional cases. To do so, it is recalled that classical gates can be obtained by combining Boolean operations such as AND, OR, XOR, NOT and NAND. In fact, these operations act on input classical bits, taking the two values 0 and 1, to produce new bits as output results. In quantum computation, gates are unitary operators in a $2^n$ dimensional Hilbert space. In connection with representation theory, they can be represented by $2^n\times 2^n$ matrices, belonging to the $SU(2^n)$ Lie group, satisfying the following properties \begin{eqnarray} U^+=U^{-1},\qquad det\;U=1. \end{eqnarray} As in the classical case, there is a universal notation for the gates depending on the number of input qubits.
The latter are considered as building blocks for constructing circuits and transistors. For 1-qubit computation, the usual gate is NOT, acting on the basis states as follows \begin{eqnarray} |i_1\rangle \to | \overline{i_1}\rangle \end{eqnarray} In toric geometry language, this operation corresponds to permuting the two toric vertices of $\bf CP^1$ \begin{eqnarray} \sigma: v_0 \leftrightarrow v_1 \end{eqnarray} This operation can be represented by the following matrix \begin{eqnarray} \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \\ \end{array} \right) \end{eqnarray} which can be identified with $U_{NOT}$ defining the NOT quantum gate. In this case, it is worth noting that the corresponding color operation is trivial since there is only one color.\\ For 2-qubits, there are many universal gates. As mentioned previously, this system is associated with the toric geometry of $\bf CP^1\times CP^1.$ Unlike the 1-qubit case corresponding to $\bf CP^1$, the quantum systems here involve two different data, namely the vertices and the colors. Based on this observation, such data will produce two kinds of operations: \begin{enumerate} \item color actions \item vertex actions. \end{enumerate} In fact, these operations can produce the CNOT and SWAP gates. To get such gates, we fix the color action according to the Adinkra order used in the corresponding notation. Following the colored toric realization of the 2-qubits, the color actions can be formulated as follows \begin{eqnarray} c_1 &:& |i_1i_2\rangle \to |\overline{i_1}i_2\rangle\\\nonumber c_2 &:& |i_1i_2\rangle \to |i_1\overline{i_2}\rangle. \end{eqnarray} In this color language, the CNOT gate \begin{eqnarray} CNOT=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0\\ \end{array} \right) \end{eqnarray} can be obtained by using the following actions \begin{eqnarray} c_1 &\to& c _1 \\ \nonumber c_2 &\to& c_2\otimes c_1.
\end{eqnarray} A close inspection shows that the SWAP gate \begin{eqnarray} SWAP=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0& 0 & 0 & 1 \\ \end{array} \right) \end{eqnarray} can be derived from the following permutation action \begin{eqnarray} c_1 \leftrightarrow c _2. \end{eqnarray} We expect that this analysis can be adapted to higher dimensional toric manifolds. For simplicity, we consider the geometry associated with the TOFFOLI gate, a universal gate acting on 3-qubit systems. It is remarked that this geometry can be identified with the blow up of the $\bf CP^1\times CP^1\times CP^1$ toric manifold. In colored toric geometry language, this manifold is described by the following equations \begin{equation} \sum\limits_{i_1i_2i_3=0,1} q_{i_1i_2 i_3}^{a}{v}_{i_1i_2 i_3}=0,\qquad a=1,\ldots,5, \end{equation} where the $2^3$ vertices $v_{i_1i_2i_3}$ belong to $Z^3$. They are connected by three different colors $c_1$, $c_2$ and $c_3$. These combinatorial equations can be solved by the following Mori vectors \begin{eqnarray} q_{i_1i_2i_3}^1&=&(1,0,0,1,0,0,0,0)\nonumber\\ q_{i_1i_2i_3}^2&=&(0,1,0,0,1,0,0,0)\nonumber\\ q_{i_1i_2i_3}^3&=&(0,0,1,0,0,1,0,0)\\ q_{i_1i_2i_3}^4&=&(1,-1,0,0,0,0,1,0)\nonumber\\ q_{i_1i_2i_3}^5&=&(0,0,1,1,0,0,0,1). \nonumber \end{eqnarray} Thus, the corresponding vertices $v_{i_1i_2i_3}$ are given by \begin{eqnarray} v_{000}=(1,0,0),\; v_{100}=(0,1,0), \;v_{010}=(0,0,1), \;v_{001}=(-1,0,0)\\ \nonumber v_{110}=(0,-1,0),\;v_{101}=(0,0,-1),\;v_{011}=(-1,1,0),\;v_{111}=(0,-1,-1), \end{eqnarray} and they are connected with three colors. This representation is illustrated in figure 2.
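The color actions on basis states amount to bitwise maps, so the corresponding gates are permutation matrices. The sketch below is our own illustrative code (function and variable names are ours, not from the paper); the bit maps encode the color transformations described in the text:

```python
from itertools import product

def gate_from_bitmap(n, f):
    """Permutation matrix of the basis map |i1...in> -> |f(i1,...,in)>,
    with basis states ordered by their binary index (column = input)."""
    dim = 2 ** n
    def idx(bits):
        return int("".join(map(str, bits)), 2)
    mat = [[0] * dim for _ in range(dim)]
    for bits in product((0, 1), repeat=n):
        mat[idx(f(bits))][idx(bits)] = 1
    return mat

# CNOT: flip the second bit when the first is 1 (c2 -> c2 (x) c1).
CNOT = gate_from_bitmap(2, lambda b: (b[0], b[1] ^ b[0]))
# SWAP: exchange the two bits (c1 <-> c2).
SWAP = gate_from_bitmap(2, lambda b: (b[1], b[0]))
# TOFFOLI: flip the third bit when the first two are 1.
TOFFOLI = gate_from_bitmap(3, lambda b: (b[0], b[1], b[2] ^ (b[0] & b[1])))
```

The resulting `CNOT` and `SWAP` matrices match the $4\times4$ matrices displayed above, and `TOFFOLI` is the identity except for swapping the basis states $|110\rangle$ and $|111\rangle$.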
\begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{3D-v3.eps} \end{center} \caption{Regular Adinkra graphic representation for $n=3$.} \end{figure} The TOFFOLI gate, represented by the $2^3\times 2^3$ matrix \begin{eqnarray}TOFFOLI= \left( \begin{array}{cccccccc} 1& 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0& 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0& 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ \end{array} \right) \end{eqnarray} can be obtained by the following color transformation \begin{eqnarray} c_1 &\to& c _1 \nonumber \\ c_2 &\to& c_2\\ c_3 &\to& c_3\otimes c_2\otimes c_1.\nonumber \end{eqnarray} We expect that this analysis can be pushed further to deal with other toric varieties having non-trivial Betti numbers. \section{Conclusion} Using a toric geometry/Adinkra correspondence, we have discussed qubit systems. More precisely, we have presented a one-to-one correspondence between three different subjects, namely toric geometry, Adinkras and quantum information theory. We believe that this work may be explored to attack qubit system problems using geometry, considered as a powerful tool to understand modern physics. In particular, we have considered in some detail the cases of one, two and three qubits, and we found that they are associated with the $\bf CP^1$, $\bf CP^1\times CP^1$ and $\bf CP^1\times CP^1\times CP^1$ toric varieties respectively. Developing a geometric procedure referred to as colored toric geometry, we have revealed that the qubit physics can be converted into a scenario working with the toric data of such manifolds with the help of Adinkra graph theory. We have shown that operations on such data can produce universal quantum gates. This work raises many open questions. A natural one is to examine super-projective spaces. We expect that this issue can be related to superqubit systems.
Another question is to investigate entangled states in the context of toric geometry and its applications, including mirror symmetry. Rather than speculate, we prefer to come back to these open questions in future work. \vspace{1cm} {\bf Acknowledgments}: AB would like to thank the Departamento de F\'{\i}sica Te\'{o}rica, Universidad de Zaragoza for kind hospitality. He would also like to thank the Diaz, Montanez and Segui families for their kind hospitality in Spain. The authors thank F. Falceto for interesting discussions. AS is supported by FPA2012-35453.
\section{Introduction} The ever-increasing data demand imposes more stringent requirements on the future wireless network, which is expected to provide high throughput, global and seamless coverage, reliability, and massive connectivity \cite{Guidotti2019Architecture5GSatellite}. As a critical enabler of this ambitious target, satellite communications (SATCOM) can provide continuous and ubiquitous connectivity for areas with no or inadequate Internet access \cite{3GPPSystemArchitecture}. In recent years, low earth orbit (LEO) satellites, typically deployed between $300$ km and $2000$ km above the earth, have attracted intensive research interest due to their lower path loss, shorter round-trip delay and lower launch cost compared with their geostationary earth orbit (GEO) counterparts \cite{Qu2017LEOSatConste,Guidotti2019LTEbasedLEO,Di2019UltraDenseLEO,Su2019BroadbandLEO}. Up to now, several projects have been started by governments and corporations to develop LEO SATCOM, e.g., Iridium, Globalstar, OneWeb, Starlink, Telesat, Hongyun \cite{Portillo2019Atechnical}. Multiple-input multiple-output (MIMO) has been a crucial technique for making the best of the scarce spectrum in SATCOM. Generally, the study of MIMO over SATCOM can be divided into two categories: the single-user case and the multi-user case. In the single-user case, a MIMO channel can be built by using dual polarization antennas \cite{Arapoglou2011MIMOSatellite,Arapoglou2011ToMIMONotMIMO,Byman2016MIMOMobileSatellite}, multiple earth stations \cite{Arapoglou2011MIMOSatellite,Schwarz2019MIMOApplication}, multiple satellites \cite{Arapoglou2011MIMOSatellite,Hofmann2017MultiSatUHF}, etc. In the multi-user case, many user terminals (UTs) are usually served by a multibeam satellite. Multibeam satellites play an important role in SATCOM; they divide the coverage area into small regions via spot beams \cite{Maral2009SatelliteCommunications}.
Basically, the spot beams can be generated by using multifeed reflector antennas or phased-array antennas at the satellite \cite{Lutz2000SatSysPerson}. GEO satellites usually employ multifeed reflector antennas \cite{Schneider2011Antenna}. Meanwhile, for LEO satellites, phased-array antennas are preferable on account of their wide-angle scanning capability \cite{Traveset1995KeyPayload,Lutz2000SatSysPerson}. In current satellite systems, a multi-color reuse scheme is often adopted to suppress co-channel interference by exploiting different frequency bands and orthogonal polarizations \cite{Fenech2016Eutelsat}. As a result, the frequency bands can be reused among sufficiently isolated beams, and the system capacity increases substantially. To exploit the limited spectrum more aggressively, the full frequency reuse (FFR) scheme has been proposed, in which all the beams share the frequency bands \cite{Vazquez2016PrecodingChallenges,PerezNeira2019SPforHTS}. However, advanced techniques are indispensable to manage inter-beam interference in the FFR scheme. Serious co-channel interference can be alleviated by meticulous transmit design \cite{Zheng2012GenericOptimization,Chrisopulos2015MultigroupFBSC,Joroughi2016PrecodingMultiGateway,Mosquera2018DistributedPrecoding,Wang2018RobustMultigroup,Schwarz2019MIMOApplication,Wang2019MulticastPrecoding,Lin2019RobustMultiObject}. A generic precoding approach for a class of objective functions and power constraints is presented in \cite{Zheng2012GenericOptimization} for multibeam satellite systems. Based on the superframe structure in the DVB-S2X standard, the multi-group multicasting principle has been incorporated in the precoding for frame-based multibeam satellites \cite{Chrisopulos2015MultigroupFBSC,Joroughi2016PrecodingMultiGateway,Wang2018RobustMultigroup,Wang2019MulticastPrecoding}.
The distributed precoding for multi-gateway multibeam satellites can be found in \cite{Joroughi2016PrecodingMultiGateway,Mosquera2018DistributedPrecoding,Wang2019MulticastPrecoding}. In \cite{Schwarz2019MIMOApplication}, the antenna geometry in the MIMO feeder link and zero-forcing (ZF) precoder in the multibeam downlink (DL) are jointly designed. The robust multi-objective beamforming for an integrated multibeam satellite and high altitude platform network is considered in \cite{Lin2019RobustMultiObject}. The performance of precoding depends heavily on the quality of channel state information (CSI) at the transmitter. The aforementioned works on precoding in multibeam satellites assume that the transmitter can track instantaneous CSI (iCSI) \cite{Zheng2012GenericOptimization,Chrisopulos2015MultigroupFBSC,Joroughi2016PrecodingMultiGateway,Mosquera2018DistributedPrecoding,Wang2018RobustMultigroup,Schwarz2019MIMOApplication,Wang2019MulticastPrecoding,Lin2019RobustMultiObject}. In LEO SATCOM, the intrinsic channel impairments, e.g., large Doppler shifts and propagation delays, will render it difficult to acquire iCSI at the transmitter (iCSIT). In more detail, for time-division duplexing (TDD) systems, the estimated uplink (UL) iCSI is directly used for DL transmission, which would be outdated after the DL transmit signals arrive at ground UTs. In frequency-division duplexing (FDD) systems, the DL iCSI is estimated at each UT and then fed back to the satellite, which could bring considerable training and feedback overhead. Moreover, the feedback would also be outdated due to the large delays. In contrast to iCSI, statistical CSI (sCSI) remains stable over longer time intervals \cite{Gao2009StatisticalEigenmode,Jarmyr2010Statistical}, which makes it easier to acquire at the transmitter. Hence, in this paper, we assume that only sCSI is known at the satellite to perform the DL transmit design.
Massive MIMO has been one of the fundamental techniques in terrestrial 5G communications, where the base station (BS) equipped with a large number of antennas serves tens of UTs simultaneously \cite{Marzetta2010Noncooperative}. With substantial degrees of freedom in the spatial domain, massive MIMO is capable of achieving higher spectral and energy efficiency \cite{Hien2013ESEfficiency}. The application of massive MIMO in SATCOM is envisioned to be a promising solution for future wideband satellite systems \cite{Gaudenzi2019FutureTech}. In this paper, we consider the DL transmit design in FFR massive MIMO LEO SATCOM, where a large number of antennas are deployed at the LEO satellite. For fully digital-implemented FFR SATCOM, it is unnecessary to adopt predefined multiple beamforming \cite{Gaudenzi2019FutureTech}. Massive MIMO can be considered as a technique to remove the restriction of using fixed multiple beamforming in multibeam satellites. Up to now, there are abundant works on DL transmit designs in massive MIMO terrestrial wireless communications with sCSI at the transmitter (sCSIT), e.g., two-stage precoder design \cite{AdhikaryJ2013SDM}, beam domain transmission \cite{CSun2015BDMA}, and robust precoder design \cite{AnLu2017RobustTransmission}. In the two-stage precoder design \cite{AdhikaryJ2013SDM}, the outer-layer precoding suppresses interference between UT groups by using the sCSI, while the inner-layer precoding provides spatial multiplexing for intra-group UTs by adapting to the effective iCSI with reduced dimension. In the beam domain transmission \cite{CSun2015BDMA}, the BS communicates with different UTs on non-overlapping beams by exploiting beam domain sCSI. In \cite{AnLu2017RobustTransmission}, the robust precoder design and its low-complexity implementation are investigated by considering the a posteriori channel model after UL channel training.
However, the approaches in the aforementioned works may not apply to LEO SATCOM, because they do not account for the special LEO satellite channel characteristics and their implementation complexity is too high for the limited satellite payloads. Recently, a transmission approach for massive MIMO LEO SATCOM is proposed in \cite{You2019MassiveMIMOLEO}, where the channel model, closed-form DL precoders and UL receivers, and user grouping strategy are investigated. In \cite{You2019MassiveMIMOLEO}, the UTs are assumed to have a single antenna, and the DL precoders therein are calculated by maximizing the average signal-to-leakage-and-noise ratio (ASLNR) of each UT. In this paper, we investigate the DL transmit design in massive MIMO LEO SATCOM by maximizing the ergodic sum rate, where both the satellite and UTs use uniform planar arrays (UPAs). Our major contributions are summarized as follows: \begin{itemize} \item We show that the rank of the optimal transmit covariance matrix for each UT is no larger than one to maximize the ergodic sum rate, which implies that single-stream precoding for each UT is optimal for linear transmitters. We also obtain the optimal linear receiver for each UT. Since the ergodic sum rate is a non-convex function and involves mathematical expectations, it is generally difficult to maximize. The minorization-maximization (MM) algorithm combined with the Monte-Carlo method is adopted to obtain a locally optimal solution to the DL precoder design. \item To reduce the computational complexity of the Monte-Carlo method, we simplify the DL transmit design by using an upper bound of the ergodic sum rate. We show that the transmit covariance matrices with rank no larger than one are also optimal. To tackle the simplified precoder design, the structure of the precoders is derived, which indicates that the precoders are determined by only the same number of scalar Lagrange multipliers as UTs.
Then, a Lagrange multiplier optimization (LMO) problem is formulated, where only one scalar Lagrange multiplier rather than a precoding vector needs to be optimized for each UT. Besides, the LMO problem is shown to be equivalent to the power allocation (PA) problem in a virtual UL, and a low-complexity algorithm is presented to solve the virtual UL PA problem, which requires much less computational effort. \end{itemize} The remainder of this paper is organized as follows. \Cref{Section_system_model} introduces the system model, where the channel model is established for the satellite and UTs both equipped with UPAs. \Cref{Section_DL_Precoder_Design} shows the rank property of the transmit covariances, and the MM algorithm is used to design the DL precoder. In \Cref{Section_Low_Complexity_DL_Precoder_Design}, we formulate the simplified DL transmit design by using an upper bound of the ergodic sum rate. It is shown that the rank property of the transmit covariance matrices still holds, and the corresponding simplified precoder optimization is transformed into the LMO. \Cref{Sectioin_Simulation} provides simulation results. \Cref{Section_Conclusion} concludes this paper. \textit{Notations:} Throughout this paper, lowercase letters denote scalars, and boldface lowercase (uppercase) letters denote vectors (matrices). The set of all $n$-by-$m$ complex matrices is denoted as $\bbC^{n\times m}$. $\trace(\cdot)$, $\det(\cdot)$, $\rank(\cdot)$, $(\cdot)^*$, $(\cdot)^T$, and $(\cdot)^H$ denote the trace, determinant, rank, conjugate, transpose, and conjugate transpose operations for a matrix argument, respectively. $\abs{\cdot}$ denotes the absolute value. The Euclidean norm of vector $\bdx$ is denoted as $\norm{\bdx} = \sqrt{\bdx^H \bdx}$. The Frobenius norm of matrix $\bdA$ is denoted as $\norm{\bdA}_\rF = \sqrt{\trace(\bdA^H \bdA)}$. $\otimes$ denotes the Kronecker product. $[\bdA]_{n,m}$ denotes the $(n,m)$th element of matrix $\bdA$.
$\diag(\bda)$ denotes the diagonal matrix with $\bda$ along its main diagonal. $\bbE \{ \cdot \}$ denotes mathematical expectation. $\clCN(\bdzro,\bdC)$ denotes the circular symmetric complex Gaussian random vector with zero mean and covariance $\bdC$. $\mathrm{U}\ [a,b]$ represents the uniform distribution between $a$ and $b$. $\triangleq$ denotes ``be defined as''. $\sim$ denotes ``be distributed as''. \section{System Model} \label{Section_system_model} \subsection{System Setup} \label{subsec_system_setup} We consider the DL transmission over lower frequency bands, e.g., L/S/C bands, in FFR massive MIMO LEO SATCOM. The mobile UTs are served under the footprint of a single LEO satellite. The satellite is supposed to work with a regenerative payload, which allows on-board processing (OBP) of baseband signals on satellites \cite{PerezNeira2019SPforHTS}. In this paper, both the satellite and mobile UTs are equipped with the UPAs. The amplitude and phase on each element of the UPAs can be digitally controlled to allow the most flexibility for DL transmission. The satellite employs the large-scale UPA with $\Mx$ and $\My$ elements in the $\rx$-axis and $\ry$-axis, respectively. The total number of antennas at the satellite is $\Mx \My \triangleq M$. Each UT is equipped with the UPA consisting of $\Nx$ and $\Ny$ elements in the $\rx'$-axis and $\ry'$-axis, respectively, and the total number of antennas at each UT is $\Nx \Ny \triangleq N$. The approaches in this paper can be directly extended to the case where the UPAs at UTs have different numbers of antenna elements. The DL in FFR massive MIMO LEO SATCOM is shown in \Cref{fig_UPA}. \begin{figure}[!t] \centering \vspace{-1em} \includegraphics[width=0.6\textwidth]{satellite_downlink.eps} \caption{The DL in FFR massive MIMO LEO SATCOM.} \label{fig_UPA} \vspace{-1em} \end{figure} \subsection{DL Channel Model} \label{subsec_DL_channel_model} The DL channel model between LEO satellite and UT $k$ is introduced as follows. 
The DL channel matrix $\ckbdH_k(t,f) \in \Complex{N}{M} $ between LEO satellite and UT $k$ at instant $t$ and frequency $f$ can be represented by \cite{Auer20123DMIMO-OFDM} \begin{equation} \FormulaSpace \ckbdH_k(t,f) = \sum_{\ell=0}^{L_k-1} a_{k,\ell} e^{ j2\pi \left( \nu_{k,\ell} t - f \tau_{k,\ell} \right) } \bdd_{k,\ell} \cdot \bdg_{k,\ell}^H\comma \label{channel_model_DL_k} \end{equation} where $j \triangleq \sqrt{-1}$, $L_k$ is the multipath number of UT $k$'s channel, $a_{k,\ell}$, $\nu_{k,\ell}$ and $\tau_{k,\ell}$ are the complex channel gain, Doppler shift and propagation delay for the $\ell$th path of UT $k$'s channel. The vectors $\bdd_{k,\ell} \in \Complex{N}{1}$ and $\bdg_{k,\ell} \in \Complex{M}{1}$ in \eqref{channel_model_DL_k} are the array response vectors at the UT and satellite sides, respectively, which are associated with the $\ell$th path of UT $k$'s channel. For simplicity, we assume that the channel parameters in $\ckbdH_k(t,f)$ are constant within each coherence time interval, and change from block to block according to some ergodic process. In the following, we will describe the LEO satellite channel characteristics in more detail, which mainly include Doppler shifts, propagation delays, array response vectors. \subsubsection{Doppler shifts} For LEO satellite channels, the Doppler shifts will be much larger compared with those in terrestrial wireless channels, due to the large relative velocity between satellite and UTs. For $2$ GHz carrier frequency, the Doppler shift can be $40$ kHz \cite{Ali1998DopplerLEO}. The Doppler shift $\nu_{k,\ell}$ for the $\ell$th path of UT $k$'s channel mainly consists of two parts \cite{Papath2001Acomparison}, i.e., $\nu_{k,\ell} = \nu_{k,\ell}^{\sat} + \nu_{k,\ell}^{\ut}$, where $\nu_{k,\ell}^{\sat}$ and $\nu_{k,\ell}^{\ut}$ are Doppler shifts relevant to the movement of satellite and UT $k$, respectively. 
The first part $\nu_{k,\ell}^{\sat}$, which is the dominant component of $\nu_{k,\ell}$, will be nearly identical for different paths of UT $k$'s channel, because of the high altitude of the satellite \cite{Papath2001Acomparison}. Hence, $\nu_{k,\ell}^{\sat}$ can be rewritten as $\nu_{k,\ell}^{\sat} = \nu_{k}^{\sat}$ for $0\le \ell \le L_k-1$. The variation of $\nu_{k}^{\sat}$ with time behaves rather deterministically, and it can be estimated and compensated for at each UT. Specifically, $\nu_k^{\sat}$ can be expressed as $\nu_k^{\sat} = f_c (v_k/c) \cos \phi_k$ \cite{Ali1998DopplerLEO}, where $f_c$ is the carrier frequency, $c$ is the speed of light, $v_k$ is the velocity of the satellite, and $\phi_k$ is the angle between the satellite's forward velocity and the boresight from the satellite to UT $k$. On the other hand, the second part $\nu_{k,\ell}^{\ut}$ can be quite different for each path. \subsubsection{Propagation Delays} For LEO satellites, the propagation delay is a more serious problem than that in terrestrial wireless channels, on account of the long distance between the satellite and UTs. For satellites at an altitude of $1200$ km, the round-trip time can be about $20$ ms \cite{Guidotti2019LTEbasedLEO}. Define $\tau_k^{\min} = \min_{\ell} \tau_{k,\ell}$ and $\tau_k^{\max} = \max_{\ell} \tau_{k,\ell}$ as the minimal and maximal propagation delays of UT $k$'s channel, respectively. \subsubsection{Array response vectors} Define $\bdtheta_{k,\ell} = ( \theta_{k,\ell}^{\rx},\theta_{k,\ell}^{\ry} )$ and $\bdvphi_{k,\ell} = ( \vphi_{k,\ell}^{\rx'},\vphi_{k,\ell}^{\ry'} )$ as the paired angles-of-departure (AoDs) and angles-of-arrival (AoAs) for the $\ell$th path of UT $k$'s channel, respectively.
The array response vectors $\bdg_{k,\ell}$ and $\bdd_{k,\ell}$ in \eqref{channel_model_DL_k} are given by $\bdg_{k,\ell} = \bdg ( \bdtheta_{k,\ell} ) \label{g_k,l}$ and $\bdd_{k,\ell} = \bdd ( \bdvphi_{k,\ell} )$, respectively, where $\bdg(\bdtheta) = \bda_{\Mx} \left( \sin \theta_{\ry} \cos \theta_{\rx} \right) \otimes \bda_{\My} \left( \cos \theta_{\ry} \right)$ and $\bdd(\bdvphi) = \bda_{\Nx} \left( \sin \vphi_{\ry'} \cos \vphi_{\rx'} \right) \otimes \bda_{\Ny} \left( \cos \vphi_{\ry'} \right) $ for arbitrary $\bdtheta=(\theta_{\rx},\theta_{\ry})$ and $\bdvphi=(\vphi_{\rx'},\vphi_{\ry'})$. The $\bda_{n_\rv}(\phi) \in \Complex{n_\rv}{1}$ is expressed as $\bda_{n_\rv} \left( \phi \right) = \frac{1}{\sqrt{n_\rv}} ( 1, e^{-j\frac{2\pi d_{\rv}}{\lambda} \phi }, \dots, e^{-j\frac{2\pi d_{\rv}}{\lambda} (n_\rv-1) \phi } )^T$, where $\lambda=c/f_c$ is the carrier wavelength, $d_{\rv}$ is the antenna spacing along $\rv$-axis with $\rv\in\{ \rx,\ry,\rx',\ry' \}$. In satellite channels, because the scattering on ground only takes place within a few kilometers around each UT, the AoDs for different paths of UT $k$'s channel are nearly identical due to the long distance between satellite and UT $k$ \cite{You2019MassiveMIMOLEO}, i.e., $\bdtheta_{k,\ell} = \bdtheta_k$, $0 \le \ell \le L_k-1$. Thus, we can rewrite $\bdg_{k,\ell} = \bdg_{k} = \bdg(\bdtheta_k)$, where $\bdtheta_k = (\theta_k^{\rx},\theta_k^{\ry})$ is referred to as the physical angle pair of UT $k$. Considering the long distance between satellite and UT $k$, $\bdg_k$ changes quite slowly, and we assume that it can be perfectly tracked at the satellite. The space angle pair $\tdbdtheta_k = (\tdtheta_k^{\rx},\tdtheta_k^{\ry})$ of UT $k$ is defined as $\tdtheta_k^{\rx} = \sin \theta_k^{\ry} \cos \theta_k^{\rx}$ and $\tdtheta_k^{\ry} = \cos \theta_k^{\ry}$, which reflects the space domain property of UT $k$'s channel \cite{You2019MassiveMIMOLEO}. 
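As a concrete illustration of the array response construction above, the following Python sketch builds the satellite-side UPA response $\bdg(\bdtheta)$ from two uniform-linear-array factors via a Kronecker product. The half-wavelength spacing $d_\rv = \lambda/2$ and the array sizes are illustrative assumptions, not values fixed by the paper.

```python
import numpy as np

# Sketch of the UPA response g(theta) = a_Mx(sin th_y cos th_x) kron a_My(cos th_y),
# with a_n(phi) = (1/sqrt(n)) [1, e^{-j2pi(d/lambda)phi}, ..., e^{-j2pi(d/lambda)(n-1)phi}]^T.
# The spacing d = lambda/2 (d_over_lambda = 0.5) is an assumption for this example.
def ula_response(n, phi, d_over_lambda=0.5):
    return np.exp(-2j * np.pi * d_over_lambda * phi * np.arange(n)) / np.sqrt(n)

def upa_response(theta_x, theta_y, Mx=4, My=4):
    return np.kron(ula_response(Mx, np.sin(theta_y) * np.cos(theta_x)),
                   ula_response(My, np.cos(theta_y)))

g = upa_response(0.3, 1.1)
assert g.shape == (16,)                    # Mx * My antenna elements
assert np.isclose(np.linalg.norm(g), 1.0)  # each ULA factor is unit-norm
```

Since each factor $\bda_{n_\rv}(\phi)$ has unit norm, the Kronecker product is also unit-norm, matching the $1/\sqrt{n_\rv}$ normalization in the text.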
\subsection{DL Signal Model With Doppler and Delay Compensation} We consider a wideband orthogonal frequency division multiplexing (OFDM) satellite system. The number of subcarriers is $\Nsc$, and the cyclic prefix (CP) length is $\Ncp$. Let $\Ts$ be the system sampling period. The CP length in time is $\Tcp = \Ncp \Ts$. The OFDM blocks without and with CP have durations $\Tsc = \Nsc \Ts$ and $T =\Tsc + \Tcp$, respectively. Let $\{ \bdx_{s,r} \}_{r=0}^{\Nsc-1}$ be the DL frequency domain transmit signal within the $s$th OFDM symbol. Then the associated DL time domain transmit signal\footnote{Conventional schemes, e.g., selected mapping (SLM) \cite{Wang2009LowPAPRSFBCMIMOOFDM} and partial transmit sequences (PTS) \cite{Kang1999novelsubblock}, can be applied to reduce the peak-to-average power ratio (PAPR) of the transmit signal $\bdx_s(t)$.} at OFDM symbol $s$ is given by \cite{Hwang2009OFDMSurvey} \begin{equation} \FormulaSpace \bdx_{s}(t) = \sum_{r=0}^{\Nsc-1} \bdx_{s,r} e^{j2\pi r \Delta f \cdot t}\comma\ -\Tcp \le t-sT < \Tsc\comma \end{equation} where $\Delta f = 1/\Tsc$. Temporarily omitting the additive noise for simplicity, the DL time domain received signal of UT $k$ at OFDM symbol $s$ can be written as \begin{equation} \FormulaSpace \bdy_{k,s}(t) = \int_{-\infty}^{\infty} \ckbdH_{k}(t,\tau) \bdx_{s}(t-\tau) \dint \tau\comma \end{equation} where $\ckbdH_{k}(t,\tau) = \sum_{\ell=0}^{L_k-1} a_{k,\ell} e^{ j2\pi \nu_{k,\ell} t } \delta \left( \tau-\tau_{k,\ell} \right) \bdd_{k,\ell} \cdot \bdg_{k}^H$ is the DL channel impulse response of UT $k$. By exploiting the LEO satellite channel characteristics, we perform the joint Doppler and delay compensation at each UT. Let $\nu_k^{\cps} = \nu_{k}^{\sat}$ and $ \tau_k^{\cps} = \tau_{k}^{\min}$. Inspired by \cite{You2019MassiveMIMOLEO}, the compensated DL received signal of UT $k$ is given by \begin{equation} \FormulaSpace \bdy_{k,s}^{\cps}(t) = \bdy_{k,s}(t+\tau_k^{\cps}) e^{-j2\pi \nu_k^{\cps} ( t + \tau_k^{\cps} ) }.
\end{equation} After Doppler and delay compensation, we choose appropriate OFDM parameters to combat the multipath fading effect. The frequency representation of $\bdy_{k,s}^{\cps}(t)$ can be written as \cite{Hwang2009OFDMSurvey} \begin{equation} \FormulaSpace \bdy_{k,s}^{\cps}(t) = \sum_{r=0}^{\Nsc-1} \bdy_{k,s,r} e^{j2\pi r \Delta f \cdot t}\comma\ -\Tcp + \Delta \tau_k \le t-sT < \Tsc\comma \end{equation} where $\Delta \tau_k = \tau_k^{\max} - \tau_{k}^{\min}$ is the delay span of UT $k$'s channel \cite{Hwang2009OFDMSurvey}. Let $\tau_{k,\ell}^{\ut} = \tau_{k,\ell} - \tau_k^{\min}$, and define the effective DL channel matrix $\bdH_{k}(t,f)$ after Doppler and delay compensation as \begin{equation} \FormulaSpace \bdH_{k}(t,f) = \bdd_k(t,f) \cdot \bdg_k^H\comma \end{equation} where $\bdd_k(t,f) = \sum_{\ell=0}^{L_k-1} a_{k,\ell} e^{ j 2\pi \left( \nu_{k,\ell}^{\ut} t - f \tau_{k,\ell}^{\ut} + \nu_{k}^{\sat} \tau_{k,\ell}^{\ut} \right) } \bdd_{k,\ell} \in \Complex{N}{1}$. Consequently, the DL compensated received signal of UT $k$ over subcarrier $r$ in OFDM symbol $s$ is given by $\bdy_{k,s,r} = \bdH_{k,s,r} \bdx_{s,r}$, where $\bdH_{k,s,r} = \bdH_k \left( sT, r\Delta f \right) = \bdd_k \left( sT, r\Delta f \right) \bdg_k^H = \bdd_{k,s,r} \bdg_k^H$ is the effective DL channel matrix of UT $k$ over subcarrier $r$ in OFDM symbol $s$ after compensation\footnote{Since the Doppler and delay have been compensated at each UT, the time and frequency at the satellite and UTs are assumed to be perfectly synchronized.}. \section{DL Transmit Design} \label{Section_DL_Precoder_Design} In this section, we investigate the DL transmit design for massive MIMO LEO SATCOM based on the established LEO satellite channel model in \Cref{Section_system_model}.
First, by exploiting the characteristics of LEO satellite channels, we show that the rank of the DL transmit covariance matrix for each UT must be no larger than one to maximize the ergodic sum rate, which indicates that DL precoding with at most one data stream for each UT can achieve the optimal performance for linear transmitters. The optimal DL linear receivers are also obtained as a by-product for single data stream transmission to each UT. After that, we resort to the MM algorithm to obtain a locally optimal solution to the DL precoder design. \subsection{Rank-One Property of DL Transmit Covariance Matrices} \label{subsec_DL_Transmission_Model} We consider the DL transmission phase in LEO SATCOM where $K$ UTs are simultaneously served on subcarrier $r$ of OFDM symbol $s$. For convenience, we omit the subscript of OFDM symbol $s$ and subcarrier $r$ in $\bdH_{k,s,r} = \bdd_{k,s,r} \bdg_k^H$ and denote $\bdH_k = \bdd_k \bdg_k^H$ as the DL channel matrix of UT $k$. In this paper, the channel $\bdH_k$ is assumed to be Rician distributed as follows \begin{equation} \FormulaSpace \begin{aligned} \bdH_k = \brbdH_k + \tdbdH_k\comma \end{aligned} \end{equation} where $\brbdH_k = \sqrt{\frac{\kappa_k \beta_k}{\kappa_k + 1}} \cdot \bdd_{k,0} \bdg_k^H$ is the deterministic line-of-sight (LoS) part, $\tdbdH_k = \sqrt{\frac{\beta_k}{\kappa_k + 1}} \cdot \tdbdd_k \bdg_k^H$ is the random scattering part, and $\kappa_k$ is the Rician factor. Here, $\tdbdd_k \in \Complex{N}{1}$ is distributed as $\tdbdd_k \sim \clCN(\bdzro,\bdSigma_k)$ with $\trace(\bdSigma_k) = 1$, and $\beta_k = \bbE \left\{ \smallnorm{\bdH_k}_{\rF}^2 \right\} = \bbE \left\{ \smallnorm{\bdd_k}^2 \right\}$ is the average channel power. The channel parameters $\{ \beta_k, \kappa_k, \bdg_k,\bdd_{k,0}, \bdSigma_k \}_{k=1}^K$ should adapt to the operating frequency bands and the channel conditions \cite{Lutz2000SatSysPerson}.
We assume that the satellite and UTs move within a certain distance, such that the channel parameters $\{\beta_k,\kappa_k,\bdd_{k,0},\bdg_k,\bdSigma_k\}_{k=1}^K$ can be considered as unchanged. Whenever the satellite or some UT steps out of this distance, these channel parameters should be updated accordingly. The channel correlation matrices of UT $k$ at the transmitter and receiver are given by \begin{subequations} \FormulaSpace \begin{align} \bdR_k^{\sat} &= \bbE \left\{ \bdH_k^H \bdH_k \right\} = \beta_k \bdg_k \bdg_k^H\comma \\ \bdR_k^{\ut} &= \bbE \left\{ \bdH_k \bdH_k^H \right\} = \frac{\kappa_k \beta_k }{\kappa_k + 1} \bdd_{k,0} \bdd_{k,0}^H + \frac{\beta_k}{\kappa_k + 1} \bdSigma_{k}\comma \end{align} \end{subequations} respectively. The matrix $\bdR_k^{\sat}$ is rank-one, which implies that the signals on different antennas at the satellite are highly correlated. Meanwhile, the rank of matrix $\bdR_k^{\ut}$ depends on the specific propagation environment around UT $k$. Denote the UT index set as $\clK = \left\{1,\dots,K\right\}$. The DL received signal of UT $k$ is given by \begin{equation} \FormulaSpace \bdy_k = \bdH_k \sum_{i=1}^{K} \bds_i + \bdz_k \label{Transmission_Model_DL} \end{equation} where $\mathbf{s}_k \in \Complex{M}{1}$ is the desired signal of UT $k$ with zero mean and covariance matrix $\bdQ_k = \bbE\{\bds_k \bds_k^H \} $. In this paper, we consider the sum power constraint $\sum_{k=1}^{K} \trace(\bdQ_k) \le P$ for DL transmission. Besides, $\bdz_k \in \Complex{N}{1}$ is the additive complex Gaussian noise at UT $k$ distributed as $\bdz_k \sim \clCN \left(0,\sigma_k^2 \bdI_N \right)$. 
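The Rician structure of $\bdH_k$ and the rank-one transmit correlation $\bdR_k^{\sat} = \beta_k \bdg_k \bdg_k^H$ can be sanity-checked with a small Monte-Carlo sketch. All dimensions and parameter values below ($M$, $N$, $\beta_k$, $\kappa_k$ and the random directions) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, beta, kappa = 16, 4, 1.0, 3.0   # illustrative sizes and Rician factor

def unit(v):
    return v / np.linalg.norm(v)

g = unit(rng.standard_normal(M) + 1j * rng.standard_normal(M))    # g_k
d0 = unit(rng.standard_normal(N) + 1j * rng.standard_normal(N))   # d_{k,0}
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Sigma = A @ A.conj().T
Sigma /= np.trace(Sigma).real          # enforce tr(Sigma_k) = 1
Lc = np.linalg.cholesky(Sigma)

def sample_H():
    # H_k = sqrt(kappa*beta/(kappa+1)) d0 g^H + sqrt(beta/(kappa+1)) dtilde g^H,
    # with dtilde ~ CN(0, Sigma); every realization is rank-one by construction
    dt = Lc @ (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    d = np.sqrt(kappa * beta / (kappa + 1)) * d0 + np.sqrt(beta / (kappa + 1)) * dt
    return np.outer(d, g.conj())

R = np.zeros((M, M), dtype=complex)
for _ in range(5000):
    H = sample_H()
    R += H.conj().T @ H
R /= 5000
# E{H^H H} = beta * g g^H: the transmit-side correlation is rank-one
assert np.linalg.norm(R - beta * np.outer(g, g.conj())) < 0.15
```

The sample average converges to $\beta_k \bdg_k \bdg_k^H$ because $\bbE\{\smallnorm{\bdd_k}^2\} = \beta_k$ under the normalizations $\norm{\bdd_{k,0}} = 1$ and $\trace(\bdSigma_k) = 1$.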
The DL ergodic rate of UT $k$ is defined as \begin{equation} \FormulaSpace \begin{aligned} \clI_k &= \mathbb{E} \left\{ \log_2 \frac{ \det \left( \sigma_k^2 \bdI_N + \bdH_k \sum_{i=1}^{K} \bdQ_i \bdH_k^H \right) }{ \det \left( \sigma_k^2 \bdI_N + \bdH_k \sum_{i \ne k} \bdQ_i \bdH_k^H \right) } \right\} \\ &\stackeq{a} \mathbb{E} \left\{ \log_2 \left( 1 + \frac{ \bdg_k^H \bdQ_k \bdg_k \norm{\bdd_k}^2 }{ \sum_{i \ne k} \bdg_k^H \bdQ_i \bdg_k \norm{\bdd_k}^2 + \sigma_k^2 } \right) \right\} \label{DL_ergodic_rate_k_noRx} \end{aligned} \end{equation} where (a) follows from $\det(\bdI + \bdA \bdB) = \det(\bdI+\bdB\bdA)$ \cite{Horn2013MatrixAnalysis}. The DL sum rate maximization problem can be formulated as\footnote{The weight factor can be introduced for each UT if different UTs have distinct service requirements.} \begin{equation} \FormulaSpace \begin{aligned} \clP: \ \max_{ \left\{ \bdQ_k \right\}_{k=1}^K } \sum_{k=1}^{K} \clI_k,\ \mathrm{s.t.} \ \sum_{k=1}^{K} \trace(\bdQ_k) \le P, \ \bdQ_k \succeq \bdzro, \ \forall k \in \clK. \label{Problem_SumRate_Max_Covariance} \end{aligned} \end{equation} \begin{myprop} \label{Prop_rank-one_Covariance} The optimal $\bdQ_k(\forall k \in \clK)$ of problem $\clP$ must satisfy $\rank(\bdQ_k) \le 1$. \end{myprop} \begin{IEEEproof} Please refer to \Cref{appendix_rank-one_Covariance_proof}. \end{IEEEproof} In \Cref{Prop_rank-one_Covariance}, we show that the rank of optimal transmit covariance matrix $\bdQ_k(\forall k \in \clK)$ should be no larger than one, which indicates that the DL precoding with at most one data stream for each UT is optimal for linear transmitters. This rank property in \Cref{Prop_rank-one_Covariance} results from the massive MIMO LEO satellite channel property. Consequently, we can always write the optimal $\bdQ_k$ as $\bdQ_k = \bdw_k \bdw_k^H$, where $\bdw_k \in \Complex{M}{1} $ is the precoding vector of UT $k$. 
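Step (a) in the rate expression, which collapses the determinant ratio to a scalar SINR via $\det(\bdI+\bdA\bdB)=\det(\bdI+\bdB\bdA)$ and the rank-one structure $\bdH_k = \bdd_k \bdg_k^H$, can be verified numerically. The dimensions and random rank-one precoders below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K, sigma2 = 8, 3, 4, 0.5    # illustrative dimensions and noise power
cg = lambda n: (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

g = [cg(M) for _ in range(K)]     # satellite-side vectors g_k
d = [cg(N) for _ in range(K)]     # UT-side vectors d_k
W = [cg(M) for _ in range(K)]     # arbitrary precoders, Q_i = w_i w_i^H

def rate_det(k):
    # log2 of det(s^2 I + H_k sum_i Q_i H_k^H) / det(s^2 I + H_k sum_{i!=k} Q_i H_k^H)
    Hk = np.outer(d[k], g[k].conj())
    Qsum = sum(np.outer(W[i], W[i].conj()) for i in range(K))
    Qint = Qsum - np.outer(W[k], W[k].conj())
    num = np.linalg.det(sigma2 * np.eye(N) + Hk @ Qsum @ Hk.conj().T).real
    den = np.linalg.det(sigma2 * np.eye(N) + Hk @ Qint @ Hk.conj().T).real
    return np.log2(num / den)

def rate_scalar(k):
    # log2(1 + |w_k^H g_k|^2 ||d_k||^2 / (sum_{i!=k} |w_i^H g_k|^2 ||d_k||^2 + s^2))
    dk2 = np.linalg.norm(d[k]) ** 2
    sig = abs(W[k].conj() @ g[k]) ** 2 * dk2
    itf = sum(abs(W[i].conj() @ g[k]) ** 2 for i in range(K) if i != k) * dk2
    return np.log2(1 + sig / (itf + sigma2))

assert all(np.isclose(rate_det(k), rate_scalar(k)) for k in range(K))
```

The two forms agree to floating-point precision for every UT, since $\bdH_k \bdQ_i \bdH_k^H = (\bdg_k^H \bdQ_i \bdg_k)\, \bdd_k \bdd_k^H$ is rank-one.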
Substituting $\bdQ_k = \bdw_k \bdw_k^H$ into \eqref{DL_ergodic_rate_k_noRx} yields \begin{equation} \FormulaSpace \begin{aligned} \clR_k = \mathbb{E} \left\{ \log_2 \left( 1 + \frac{ \abs{\bdw_k^H \bdg_k}^2 \norm{\bdd_k}^2 }{ \sum_{i \ne k} \abs{\bdw_i^H \bdg_k}^2 \norm{\bdd_k}^2 + \sigma_k^2 } \right) \right\}\comma \label{DL_ergodic_rate_UT_k_precoder} \end{aligned} \end{equation} which is the DL ergodic rate of UT $k$ using linear precoders $\{ \bdw_k \}_{k=1}^K$. Although we focus on the DL transmit design in this paper, the optimal DL linear receivers at the UT sides are also obtained as a by-product. In the following subsection, we derive the optimal DL linear receiver of each UT that maximizes its corresponding DL ergodic rate. \subsection{Optimal DL Linear Receiver} According to \Cref{Prop_rank-one_Covariance}, the satellite only needs to transmit at most one data stream for each UT. Hence, each UT just needs to decode at most one data stream, and only diversity gain is obtained with multiple antennas at the UT side. When we use $\{\bdw_k\}_{k=1}^K$ as the DL precoders, the transmit signal $\bds_k $ in \eqref{Transmission_Model_DL} can be written as $\bds_k = \bdw_k s_k$, where $s_k$ is the intended data symbol for UT $k$ with zero mean and unit variance. We consider that UT $k$ adopts a linear receiver $\mathbf{c}_k \in \Complex{N}{1}$ to recover $s_k$. Then, the recovered data symbol at UT $k$ can be written as \begin{equation} \FormulaSpace \begin{aligned} \hat{s}_k = \bdc_k^H \bdy_k = \bdc_k^H \bdd_k \bdg_k^H \bdw_k s_k + \sum_{i \ne k}^K \bdc_k^H \bdd_k \bdg_k^H \bdw_i s_i + \bdc_k^H \bdz_k.
\end{aligned} \end{equation} Thus, the DL signal-to-interference-plus-noise ratio (SINR) of UT $k$ can be expressed as \begin{equation} \FormulaSpace \begin{aligned} \SINR_k &= \frac{ \left\lvert \bdw_k^H \bdg_k \right\rvert^2 \left\lvert \bdc_k^H \bdd_k \right\rvert^2 }{ \sum_{i \ne k} \left\lvert \bdw_i^H \bdg_k \right\rvert^2 \left\lvert \bdc_k^H \bdd_k \right\rvert^2 + \sigma_k^2 \left\lVert \bdc_k \right\rVert^2 }. \end{aligned} \end{equation} Because $\frac{ax}{bx+c}$ is a monotonically increasing function of $x$ for $a,b,c > 0$, we have \begin{equation} \FormulaSpace \begin{aligned} \SINR_k &\stackleq{a} \frac{ \left\lvert \bdw_k^H \bdg_k \right\rvert^2 \norm{\bdd_k}^2 }{ \sum_{i \ne k} \left\lvert \bdw_i^H \bdg_k \right\rvert^2 \norm{\bdd_k}^2 + \sigma_k^2 } \triangleq \USINR_k\comma \end{aligned} \label{DL_SINR_upperbound_iRx} \end{equation} where (a) follows from the Cauchy-Schwarz inequality $\smallabs{\bdc_k^H \bdd_k}^2 \le \norm{\bdc_k}^2 \norm{\bdd_k}^2$, and the equality holds if and only if $\bdc_k = \alpha \bdd_k$ for any nonzero $\alpha \in \bbC$. The receivers satisfying $\bdc_k = \alpha \bdd_k$ for different $\alpha$ have the same value of $\SINR_k$. It is worth noting that $\bdc_k = \alpha \bdd_k$ can actually maximize the DL ergodic rate of UT $k$. This can be easily verified by $\bbE\left\{ \log_2 ( 1 + \SINR_k ) \right\} \le \bbE\left\{ \log_2 ( 1 + \USINR_k ) \right\} = \clR_k$, which implies that the receivers satisfying $\bdc_k = \alpha \bdd_k$ will be optimal for UT $k$. Two examples of $\bdc_k$ with the form $\bdc_k = \alpha \bdd_k$ will be given in the following. 
One is the receiver $\bdc_k^{\ut} \triangleq \bdd_k$, which can be regarded as the matched filter (MF) for the equivalent channel vector $\bdd_k$, because $\bdy_k$ can be rewritten as \begin{equation} \FormulaSpace \bdy_k = \bdd_k \cdot x_k + \bdz_k\comma \end{equation} where $x_k = \sum_{i=1}^{K} \bdg_k^H \bdw_i s_i$ consists of the desired transmit signal and interference for UT $k$. The other is the minimum mean-square error (MMSE) receiver \cite{Robert2004IntroAdaptArray} \begin{equation} \FormulaSpace \begin{aligned} \bdc_k^{\mmse} = \arg \min_{\bdc_k} \MSE_k \stackeq{a} \frac{ \bdg_k^H \bdw_k }{ \sigma_k^2 + \sum_{i=1}^K \abs{ \bdw_i^H \bdg_k }^2 \norm{\bdd_k}^2 } \cdot \bdd_k\comma \label{Receiver_MMSE_DL} \end{aligned} \end{equation} where $\MSE_k$ is the mean-square error (MSE) of UT $k$ defined as \begin{equation} \FormulaSpace \begin{aligned} \MSE_k = \mathbb{E} \left\{ \left\lvert \hat{s}_k - s_k \right\rvert^2 \right\} = \sum_{i=1}^K \left\lvert \bdw_i^H \bdg_k \right\rvert^2 \left\lvert \bdc_k^H \bdd_k \right\rvert^2 + \sigma_k^2 \left\lVert \bdc_k \right\rVert^2 - 2 \Re \left\{ \bdg_k^H \bdw_k \cdot \bdc_k^H \bdd_k \right\} + 1\comma \label{MSE_DL} \end{aligned} \end{equation} and (a) follows from the matrix inversion lemma \cite{Horn2013MatrixAnalysis}. The MMSE of UT $k$ achieved by $\bdc_k^{\mmse}$ is given by \begin{equation} \FormulaSpace \MMSE_k = 1 - \frac{ \abs{ \bdw_k^H \bdg_k }^2 \norm{\bdd_k}^2 }{ \sigma_k^2 + \sum_{i=1}^{K} \abs{ \bdw_i^H \bdg_k }^2 \norm{\bdd_k}^2 } = \xinv{1+\USINR_k}. \end{equation} Even though $\bdc_k^{\mmse}$ depends on $\{ \bdw_k \}_{k=1}^K$ and $\bdg_k$, the scalar term in $\bdc_k^{\mmse}$ does not affect the value of $\SINR_k$. For simplicity, we can choose $\bdc_k^{\ut}$ as the DL receiver of UT $k$, which is easier to calculate and is independent of $\{ \bdw_k \}_{k=1}^K$ and $\bdg_k$. Now, we return to the DL precoder design in the following section.
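The receiver discussion above can be checked numerically. The following sketch (random dimensions and channel realizations; all values are illustrative assumptions, not the system parameters of this paper) verifies that the MF receiver $\bdd_k$ and the MMSE receiver in \eqref{Receiver_MMSE_DL} are collinear and both attain $\USINR_k$, while an arbitrary receiver never exceeds it.

```python
import numpy as np

# Illustrative sanity check of the DL receivers; all numbers are assumptions.
rng = np.random.default_rng(0)
M, N, K, k, sigma2 = 8, 4, 3, 0, 0.5
W = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))  # precoders w_i
g = rng.standard_normal(M) + 1j * rng.standard_normal(M)            # g_k
d = rng.standard_normal(N) + 1j * rng.standard_normal(N)            # d_k

def sinr(c):
    """SINR of UT k for receiver c (matches the SINR expression in the text)."""
    p = np.abs(np.vdot(c, d)) ** 2                                  # |c^H d_k|^2
    sig = np.abs(np.vdot(W[:, k], g)) ** 2 * p                      # |w_k^H g_k|^2 |c^H d|^2
    intf = sum(np.abs(np.vdot(W[:, i], g)) ** 2 * p for i in range(K) if i != k)
    return sig / (intf + sigma2 * np.linalg.norm(c) ** 2)

dn2 = np.linalg.norm(d) ** 2
usinr = (np.abs(np.vdot(W[:, k], g)) ** 2 * dn2
         / (sum(np.abs(np.vdot(W[:, i], g)) ** 2 for i in range(K) if i != k) * dn2
            + sigma2))

c_mf = d                                                            # matched filter c_k^{ut}
alpha = np.vdot(g, W[:, k]) / (
    sigma2 + sum(np.abs(np.vdot(W[:, i], g)) ** 2 for i in range(K)) * dn2)
c_mmse = alpha * d                                                  # MMSE receiver: collinear with d_k

assert np.isclose(sinr(c_mf), usinr) and np.isclose(sinr(c_mmse), usinr)
c_rand = rng.standard_normal(N) + 1j * rng.standard_normal(N)
assert sinr(c_rand) <= usinr + 1e-12
```

Any receiver of the form $\alpha \bdd_k$ gives the same SINR because the scalar cancels in numerator and denominator.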
\subsection{DL Precoder Design} In this subsection, we study the DL precoder design by maximizing the ergodic sum rate under a sum power constraint. Because the DL precoder design is non-convex, we adopt the MM algorithm \cite{DavidTutorialMM} combined with the Monte-Carlo method to solve it. In each iteration of the MM algorithm, a concave lower bound of the objective function is constructed, and a locally optimal solution can be obtained by solving the resulting convex subproblems iteratively \cite{DavidTutorialMM}. Let $\bdW = [ \bdw_1 \ \dots \ \bdw_K ] \in \bbC^{M \times K}$ denote the collection of DL precoding vectors\footnote{We assume that the precoding matrix $\bdW$ is implemented in the fully-digital domain. Although this requires as many RF chains as antennas at the satellite, fully-digital precoders provide more flexibility for DL transmission.}. The DL precoder design can be formulated as \begin{equation} \FormulaSpace \begin{aligned} \clS: \ \max_{ \bdW } \ \sum_{k=1}^{K} \clR_k\comma \quad \mathrm{s.t.} \ \sum_{k=1}^{K} \lVert \mathbf{w}_k \rVert^2 \le P. \label{Problem_DL_Tx_design_Inst_Rx} \end{aligned} \end{equation} Notice that the power inequality in \eqref{Problem_DL_Tx_design_Inst_Rx} must be met with equality at the optimum. Otherwise, each $\bdw_k$ could be scaled up, which would increase the DL sum rate and contradict the optimality. In the following, we apply the MM algorithm \cite{DavidTutorialMM} to calculate a locally optimal solution to $\clS$. In each iteration, the DL ergodic rate $\clR_k$ is replaced with a concave minorizing function. Then a locally optimal solution to $\clS$ can be obtained by solving a sequence of convex programs iteratively.
By making use of the relationship between ergodic rate and MMSE \cite{AnLu2017RobustTransmission}, we can construct a minorizing function of $\clR_k$ as follows \begin{equation} \FormulaSpace \xiter{g_k}{n} = -\xinv{\ln 2} \cdot \left( a_k^{(n)} \sum_{i=1}^K \left\lvert \bdw_i^H \bdg_k \right\rvert^2 - 2 \Re \left\{ \bdw_k^H \bdg_k \cdot b_k^{(n)} \right\} + c_k^{(n)} \right) + \xinv{\ln 2} + \xiter{\clR_k}{n}\comma \label{Minorize_udR_DL_InstRx} \end{equation} where $a_k^{(n)}$, $b_k^{(n)}$ and $c_k^{(n)}$ are constants defined in \Cref{appendix_minorize_udR_DL_instRx_proof}. By using the minorizing function $\xiter{g_k}{n}$ in \eqref{Minorize_udR_DL_InstRx}, the precoder $\bdW$ in the $(n+1)$th iteration can be obtained by solving the following convex program \begin{equation} \FormulaSpace \begin{aligned} \clS^{(n)}:\ \max_{\bdW} \ \sum_{k=1}^{K} \xiter{g_k}{n}\comma \quad \mathrm{s.t.} \ \sum_{k=1}^{K} \norm{ \bdw_k }^2 \le P\comma \label{Problem_DL_Tx_design_Inst_Rx_n+1_Rate} \end{aligned} \end{equation} which is equivalent to \begin{equation} \FormulaSpace \clS^{(n)}:\ \min_{\bdW} \ \sum_{k=1}^{K} \left( \sum_{i=1}^{K} a_i^{(n)} \abs{ \bdw_k^H \bdg_i }^2 - 2 \Re \left\{ \bdw_k^H \bdg_k \cdot b_k^{(n)} \right\} \right)\comma \quad \mathrm{s.t.} \ \sum_{k=1}^{K} \norm{ \bdw_k }^2 \le P. \label{Problem_DL_Tx_design_Inst_Rx_n+1_QCQP} \end{equation} The optimal solution to $\clS^{(n)}$ can be derived easily via Lagrangian minimization in convex optimization. Thus, the precoder $\bdw_k^{(n+1)}$ is given by \begin{equation} \FormulaSpace \bdw_k^{(n+1)} = \left( \sum_{i=1}^{K} a_i^{(n)} \bdg_i \bdg_i^H + \mu^{(n)} \bdI_M \right)^{-1} \bdg_k \cdot b_k^{(n)}\comma \label{wk_iteration_ergodic} \end{equation} where $\mu^{(n)} \ge 0$ is chosen such that $\sum_{k=1}^{K} \smallnorm{\bdw_k^{(n+1)}}^2 = P$. The DL precoder design algorithm is summarized in \Cref{algorithm_DL_Tx_Design_Inst_Rx}.
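Since the total transmit power $\sum_{k} \smallnorm{\bdw_k^{(n+1)}}^2$ is monotonically decreasing in $\mu^{(n)}$, the multiplier can be found by a simple one-dimensional bisection. The sketch below illustrates this step of \eqref{wk_iteration_ergodic} with randomly generated placeholder values for $a_k^{(n)}$, $b_k^{(n)}$ and $\bdg_k$ (hypothetical numbers, not computed from the actual minorizing function).

```python
import numpy as np

# One MM precoder update with bisection on mu; a, b, G are random placeholders
# (assumptions) standing in for a_k^{(n)}, b_k^{(n)} and the channel vectors g_k.
rng = np.random.default_rng(2)
M, K = 8, 3
G = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
a = rng.uniform(0.5, 1.5, K)
b = rng.standard_normal(K) + 1j * rng.standard_normal(K)

A = (G * a) @ G.conj().T                     # sum_i a_i^{(n)} g_i g_i^H

def total_power(mu):
    # columns of W are (A + mu I)^{-1} g_k b_k
    W = np.linalg.solve(A + mu * np.eye(M), G * b)
    return np.linalg.norm(W) ** 2

P = 0.5 * total_power(1e-9)                  # pick a budget that forces mu > 0

lo, hi = 1e-9, 1.0
while total_power(hi) > P:                   # power decreases monotonically in mu
    hi *= 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if total_power(mid) > P else (lo, mid)

mu = 0.5 * (lo + hi)
W_new = np.linalg.solve(A + mu * np.eye(M), G * b)   # updated precoders
assert abs(np.linalg.norm(W_new) ** 2 - P) < 1e-6 * P
```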
Due to the mathematical expectation in the ergodic rate $\clR_k$, the Monte-Carlo method is required to update the precoders in each iteration of \Cref{algorithm_DL_Tx_Design_Inst_Rx}, which is time-consuming because of the exhaustive sample averaging. In the next section, we present a low-complexity simplified DL transmit design that avoids the sample averaging. \begin{algorithm}[!t] \caption{Precoder design algorithm for solving $\clS$.} \label{algorithm_DL_Tx_Design_Inst_Rx} \begin{algorithmic}[1] \REQUIRE Initialize precoders $\bdw_k^{(0)} = \bdw_k^{\init}$ for any $k \in \clK$, and iteration index $n = 0$. \ENSURE Locally optimal precoders $\bdW$. \WHILE 1 \STATE Calculate $a_k^{(n)}$ and $b_k^{(n)}$ for all $k \in \clK$. \STATE Update precoders according to \eqref{wk_iteration_ergodic}. \STATE \textbf{if} $\abs{ \sum\limits_{k=1}^{K} \xiter{\clR_k}{n+1} - \sum\limits_{k=1}^{K} \xiter{\clR_k}{n} } < \epsilon$, \textbf{break}; \textbf{else} set $n:=n+1$. \ENDWHILE \end{algorithmic} \end{algorithm} \section{Simplified DL Transmit Design} \label{Section_Low_Complexity_DL_Precoder_Design} In \Cref{Section_DL_Precoder_Design}, the Monte-Carlo method with sample averaging is used to solve the DL precoder design, which is computationally expensive. To reduce the computational complexity of the Monte-Carlo method, we simplify the DL transmit design in this section by using an upper bound of the ergodic sum rate. We show that DL transmit covariance matrices with rank no larger than one are also optimal, and the corresponding simplified DL precoder design is formulated in terms of the upper bound. Then, we derive the structure of the DL precoders and formulate the LMO problem. Finally, a low-complexity algorithm is proposed to solve the LMO problem.
\subsection{Simplified DL Precoder Design} \label{subsec_simplify_precoder_design} Because $f(x) = \log_2 \left( 1 + \frac{ a x }{ b x + c } \right)$ is a concave function of $x \ge 0$ for $a,b,c \ge 0$, by invoking Jensen's inequality \cite{BoydConvexOptimization}, the DL ergodic rate $\clR_k$ of UT $k$ can be upper bounded by \begin{equation} \FormulaSpace \begin{aligned} \clI_k &= \mathbb{E} \left\{ \log_2 \left( 1 + \frac{ \bdg_k^H \bdQ_k \bdg_k \norm{\bdd_k}^2 }{ \sum_{i \ne k} \bdg_k^H \bdQ_i \bdg_k \norm{\bdd_k}^2 + \sigma_k^2 } \right) \right\} \\ &\le \log_2 \left( 1 + \frac{ \bdg_k^H \bdQ_k \bdg_k \beta_k }{ \sum_{i \ne k} \bdg_k^H \bdQ_i \bdg_k \beta_k + \sigma_k^2 } \right) \triangleq \clI_k^{\ub}. \label{Rate_UT_k_DL_noRx_UB} \end{aligned} \end{equation} The problem of maximizing the upper bound of the DL ergodic sum rate can be formulated as \begin{equation} \FormulaSpace \begin{aligned} \clP^{\ub}: \ \max_{ \left\{ \bdQ_k \right\}_{k=1}^K } \sum_{k=1}^{K} \clI_k^{\ub},\ \mathrm{s.t.} \ \sum_{k=1}^{K} \trace(\bdQ_k) \le P, \ \bdQ_k \succeq \bdzro, \ \forall k \in \clK. \label{Problem_SumRate_Max_Covariance_UB} \end{aligned} \end{equation} \begin{mycorollary} \label{Prop_rank-one_Covariance_UB} The optimal $\bdQ_k(\forall k \in \clK)$ of problem $\clP^{\ub}$ must satisfy $\rank(\bdQ_k) \le 1$. \end{mycorollary} \begin{IEEEproof} The proof is similar to that of \Cref{Prop_rank-one_Covariance} and is thus omitted. \end{IEEEproof} According to \Cref{Prop_rank-one_Covariance_UB}, transmit covariance matrices with rank no larger than one are capable of maximizing the upper bound of the DL ergodic sum rate. Thus, we can rewrite the transmit covariance matrices as $\bdQ_k = \bdw_k \bdw_k^H(\forall k \in \clK)$, and the upper bound of $\clR_k$ is given by \begin{equation} \FormulaSpace \clR_k \le \log_2 \left( 1 + \frac{ \abs{\bdw_k^H \bdg_k}^2 \beta_k }{ \sum_{i \ne k} \abs{\bdw_i^H \bdg_k}^2 \beta_k + \sigma_k^2 } \right) \triangleq \clR_k^{\ub}.
\label{clRk_ub} \end{equation} The simplified DL precoder design using the upper bound $\clR_k^{\ub}$ can be formulated as \begin{equation} \FormulaSpace \begin{aligned} \clS^{\ub}: \ \max_{ \bdW } \ \sum_{k=1}^{K} \clR_k^{\ub}\comma \quad \mathrm{s.t.} \ \sum_{k=1}^{K} \lVert \mathbf{w}_k \rVert^2 \le P. \label{Problem_DL_Tx_design_Inst_Rx_UB} \end{aligned} \end{equation} Notice that in the simplified precoder design $\clS^{\ub}$, only the channel parameters $\{\beta_k, \bdg_k\}_{k=1}^K$ are necessary at the satellite to compute the precoding vectors, and they can be acquired through off-the-shelf parameter estimation techniques \cite{Zoltowski1996Close2DUnitaryESPRIT,Robert2004IntroAdaptArray}. The iteratively weighted MMSE (WMMSE) approach can be used to solve $\clS^{\ub}$, which guarantees convergence to a locally optimal solution of $\clS^{\ub}$ \cite{Shi2011IterativeMMSE}. A detailed description of the WMMSE algorithm can be found in \cite{Shi2011IterativeMMSE}. Here, for simplicity, we only give the update formulas of the DL precoders \begin{equation} \FormulaSpace \bdw_k^{(n+1)} = \left( \sum_{i=1}^{K} \tda_i^{(n)} \bdg_i \bdg_i^H + \tdmu^{(n)} \bdI_M \right)^{-1} \bdg_k \cdot \tdb_k^{(n)}\comma \label{wk_update_WMMSE} \end{equation} where $\tda_k^{(n)} = \frac{ \beta_k }{ \sigma_k^2 + \sum_{i\ne k} \smallabs{\bdg_k^H \bdw_i^{(n)}}^2 \beta_k } - \frac{ \beta_k }{ \sigma_k^2 + \sum_{i=1}^K \smallabs{\bdg_k^H \bdw_i^{(n)}}^2 \beta_k }$ and $\tdb_k^{(n)} = \frac{ \beta_k \cdot \bdg_k^H \bdw_k^{(n)} }{ \sigma_k^2 + \sum_{i\ne k} \smallabs{\bdg_k^H \bdw_i^{(n)}}^2 \beta_k }$. The variable $\tdmu^{(n)} \ge 0$ in \eqref{wk_update_WMMSE} can be obtained by bisection search such that $\sum_{k=1}^{K} \smallnorm{ \bdw_k^{(n+1)} }^2 = P$. The complete WMMSE algorithm is summarized in \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen}. \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen} consists of outer and inner iterations.
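As an illustrative sketch of the WMMSE updates above (random channel parameters; all values are assumptions, not the paper's simulation setup), the following code iterates \eqref{wk_update_WMMSE} and checks that the sum of $\clR_k^{\ub}$ is non-decreasing over the iterations:

```python
import numpy as np

# Sketch of the simplified WMMSE iteration; channels, beta_k and sigma_k^2
# are random/assumed values, not the paper's simulation setup.
rng = np.random.default_rng(7)
M, K, P = 8, 3, 5.0
G = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))  # columns g_k
beta = rng.uniform(0.5, 2.0, K)
sigma2 = np.ones(K)

def sum_rate(W):
    gains = np.abs(G.conj().T @ W) ** 2      # gains[k, i] = |g_k^H w_i|^2
    sig = np.diag(gains) * beta
    intf = (gains.sum(axis=1) - np.diag(gains)) * beta
    return np.log2(1 + sig / (intf + sigma2)).sum()

def wmmse_step(W):
    gains = np.abs(G.conj().T @ W) ** 2
    tot = gains.sum(axis=1) * beta + sigma2              # sigma^2 + all signals
    wok = tot - np.diag(gains) * beta                    # sigma^2 + interference
    a = beta / wok - beta / tot                          # tilde a_k^{(n)}
    b = beta * np.diag(G.conj().T @ W) / wok             # tilde b_k^{(n)}
    A = (G * a) @ G.conj().T
    def power(mu):
        return np.linalg.norm(np.linalg.solve(A + mu * np.eye(M), G * b)) ** 2
    lo, hi = 1e-9, 1.0
    while power(hi) > P:                                 # bisect tilde mu^{(n)}
        hi *= 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if power(mid) > P else (lo, mid)
    return np.linalg.solve(A + 0.5 * (lo + hi) * np.eye(M), G * b)

W = G / np.linalg.norm(G) * np.sqrt(P)       # feasible initialization
rates = [sum_rate(W)]
for _ in range(10):
    W = wmmse_step(W)
    rates.append(sum_rate(W))
assert all(r2 >= r1 - 1e-6 for r1, r2 in zip(rates, rates[1:]))
```

Monotonicity follows because each precoder subproblem is solved exactly, with $\tdmu^{(n)}$ as its dual variable.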
In each outer iteration, the precoder is updated according to \eqref{wk_update_WMMSE}. In each inner iteration, multiplying the inverse of an $M$-dimensional matrix with $K$ vectors is required to obtain the optimal $\tdmu^{(n)}$ via bisection search. The complexity of \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen} is dominated by the number of multiplications in matrix operations, given by $\Nout \Nin ( M^3 + K M^2 )$, where $\Nout$ and $\Nin$ are the numbers of outer and inner iterations, respectively. \begin{algorithm}[!t] \caption{Simplified precoder design algorithm for solving $\clS^{\ub}$.} \label{algorithm_DL_Tx_Design_Inst_Rx_Jesen} \begin{algorithmic}[1] \REQUIRE Initialize precoders $\bdw_k^{(0)} = \bdw_k^{\init}$ for any $k \in \clK$, and iteration index $n = 0$. \ENSURE Locally optimal precoder $\bdW$. \WHILE 1 \STATE Calculate $\tda_k^{(n)}$ and $\tdb_k^{(n)}$ for all $k \in \clK$. \STATE Update precoders according to \eqref{wk_update_WMMSE}. \STATE \textbf{if} $\abs{ \sum\limits_{k=1}^{K} \xiter{\clR_k}{n+1} - \sum\limits_{k=1}^{K} \xiter{\clR_k}{n} } < \epsilon$, \textbf{break}; \textbf{else} set $n:=n+1$. \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{Structure of DL Precoder} \label{subsec_structure_DL_precoder} In this subsection, we aim to study the structure of the optimal DL precoder for problem $\clS^{\ub}$. First, we formulate a quality of service (QoS) problem with properly chosen thresholds, which has the same optimal solution as $\clS^{\ub}$ and is easier to solve. Then, the structure of the optimal solution to $\clS^{\ub}$ can be fully characterized by that of the QoS problem. Based on this structure, the simplified DL precoder design can be converted into an LMO problem, where only one scalar needs to be optimized for each UT. Let $r_k$ be the optimal $\clR_k^{\ub}$ in $\clS^{\ub}$.
We can formulate a QoS problem as follows \begin{equation} \FormulaSpace \begin{aligned} \clQ^{\ub}: \ \min_{ \bdW } \ \sum_{k=1}^{K} \norm{ \bdw_k }^2\comma \ \mathrm{s.t.} \ \clR_k^{\ub} \ge r_k\comma \ \forall k \in \clK\comma \end{aligned} \label{Problem_Power_UB_DL_Inst_Rx} \end{equation} where $r_k$ is the threshold associated with UT $k$. With this choice of thresholds, the problems $\clQ^{\ub}$ and $\clS^{\ub}$ have the same optimal solution, and the optimal value of $\clQ^{\ub}$ is $P$ \cite{Emil2014OptimalBeamforming}, as explained in the following. Notice that the optimal solution to $\clS^{\ub}$ must be feasible for $\clQ^{\ub}$. If the optimal value of $\clQ^{\ub}$ were less than $P$, then we could scale up the optimal solution to $\clQ^{\ub}$ such that $\clR_k^{\ub} > r_k$ while the power constraint in $\clS^{\ub}$ is still fulfilled, which contradicts the optimality of $r_k$. Hence, we can learn the structure of the optimal solution to $\clS^{\ub}$ by studying that of $\clQ^{\ub}$. Denote $\gamma_k = 2^{ r_k } - 1$. The problem $\clQ^{\ub}$ can be reformulated as \begin{equation} \FormulaSpace \begin{aligned} \clQ^{\ub}: \ \min_{ \bdW } \ \sum_{k=1}^{K} \norm{ \bdw_k }^2\comma \ \mathrm{s.t.} \ \frac{ \beta_k }{ \gamma_k \sigma_k^2 } \abs{ \bdw_k^H \bdg_k }^2 \ge \frac{\beta_k}{\sigma_k^2} \sum_{i \ne k} \abs{ \bdw_i^H \bdg_k }^2 + 1\comma \ \forall k \in \clK\comma \end{aligned} \label{Problem_Power_Quad_UB_DL_Inst_Rx} \end{equation} which is actually a quadratically constrained quadratic program (QCQP) \cite{BoydConvexOptimization}. However, the quadratic constraints in \eqref{Problem_Power_Quad_UB_DL_Inst_Rx} are still non-convex. Notice that $e^{j \theta_k } \bdw_k$ and $\bdw_k$ attain the same objective value in $\clQ^{\ub}$, and they also have the same feasibility for any $\theta_k \in \bbR$. Therefore, we can always rotate the phase of $\bdw_k$ to guarantee that $\bdw_k^H \bdg_k$ is real.
Thus, the constraints in \eqref{Problem_Power_Quad_UB_DL_Inst_Rx} can be converted into convex second-order cone (SOC) constraints \cite{Emil2014OptimalBeamforming} \begin{equation} \FormulaSpace \sqrt{ \frac{ \beta_k }{ \gamma_k \sigma_k^2 } } \Re \left\{ \bdw_k^H \bdg_k \right\} \ge \sqrt{ \frac{\beta_k}{\sigma_k^2} \sum_{i \ne k} \abs{ \bdw_i^H \bdg_k }^2 + 1 }\comma \ \forall k \in \clK. \end{equation} It is easy to show that Slater's constraint qualification is satisfied for $\clQ^{\ub}$ \cite{Emil2014OptimalBeamforming}. If the parameters $r_k(\forall k \in \clK)$ are given, the problem $\clQ^{\ub}$ can be optimally solved through fixed-point iteration (FPI) or standard SOC programming (SOCP) algorithms \cite{Wiesel2006LinearPrecodingConic}. As a result, the optimal solution to $\clS^{\ub}$ has the following structure. \begin{myprop} \label{Prop_Structure_UB_DL_instRx} The optimal solution to $\clS^{\ub}$ satisfies \begin{equation} \FormulaSpace \left( \sum_{i=1}^K \frac{ \lambda_i \beta_i }{ \sigma_i^2 } \bdg_i \bdg_i^H + \bdI_M \right)^{-1} \frac{ \lambda_k \beta_k }{ \sigma_k^2 } \bdg_k \bdg_k^H \bdw_k = \frac{ \gamma_k }{ \gamma_k + 1 } \bdw_k\comma \label{Optimal_power_min_DL_instRx_Jesen} \end{equation} where $\lambda_k \ge 0$ is the optimal Lagrange multiplier of \eqref{Problem_Power_Quad_UB_DL_Inst_Rx}, and satisfies $\sum_{k=1}^{K} \lambda_k = P$. \end{myprop} \begin{IEEEproof} Please refer to \Cref{appendix_structure_UB_DL_instRx_proof}. \end{IEEEproof} It is worth noting that the threshold $r_k$ in $\clQ^{\ub}$ is the optimal $\clR_k^{\ub}$, which is not available until $\clS^{\ub}$ is optimally solved. Although the optimal solution to $\clS^{\ub}$ can be obtained by solving $\clQ^{\ub}$ for given $r_k$, the search for $r_k$ is as difficult as solving $\clS^{\ub}$ itself. \Cref{Prop_Structure_UB_DL_instRx} provides the structure of the optimal solution to $\clS^{\ub}$, which will facilitate the simplified DL precoder design.
From another perspective, if the Lagrange multipliers $\{ \lambda_k \}_{k=1}^K$ are given, it can be easily derived from \eqref{Optimal_power_min_DL_instRx_Jesen} that the normalized precoder $\udbdw_k \triangleq \frac{ \bdw_k }{ \norm{ \bdw_k } }$ and $\gamma_k$ are expressed as \begin{align} \udbdw_k &= \frac{ \left( \sum_{i=1}^K \frac{ \lambda_i \beta_i }{ \sigma_i^2 } \bdg_i \bdg_i^H + \bdI_M \right)^{-1} \bdg_k }{ \norm{\left( \sum_{i=1}^K \frac{ \lambda_i \beta_i }{ \sigma_i^2 } \bdg_i \bdg_i^H + \bdI_M \right)^{-1} \bdg_k} } \label{wk_normal_from_lambda} \\ \gamma_k &= \left( 1 - \frac{ \lambda_k \beta_k }{ \sigma_k^2 } \bdg_k^H \left( \sum_{i=1}^K \frac{ \lambda_i \beta_i }{ \sigma_i^2 } \bdg_i \bdg_i^H + \bdI_M \right)^{-1} \bdg_k \right)^{-1} - 1. \label{gamma_from_lambda} \end{align} Define $\bdlambda = ( \lambda_1,\dots,\lambda_K )^T$ and $\bdgamma = \left( \gamma_1,\dots,\gamma_K \right)^T$ as the collection of Lagrange multipliers and thresholds, respectively. If we assume that the parameters $\bdlambda$ are given, then the normalized precoders $\{ \udbdw_k \}_{k=1}^K $ and $\bdgamma$ can be obtained from \eqref{wk_normal_from_lambda} and \eqref{gamma_from_lambda}, respectively. With the results of $\{ \udbdw_k \}_{k=1}^K $ and $\bdgamma$, the power $q_k = \norm{ \bdw_k }^2 (\forall k \in \clK)$ can be calculated by \cite{Emil2014OptimalBeamforming} \begin{equation} \FormulaSpace \bdq = \bdM^{-1} \bdone\comma \label{power_from_lambda} \end{equation} where $\bdq = \left( q_1,\dots,q_K \right)^T$, and $\bdM \in \Complex{K}{K}$ is defined as \begin{equation} \FormulaSpace \left[ \bdM \right]_{k,i} = \begin{cases} -\frac{ \beta_k }{ \sigma_k^2 } \abs{ \bdg_k^H \udbdw_i }^2 \comma & \text{ if } k \ne i \\ \frac{ \beta_k }{ \gamma_k \sigma_k^2 } \abs{ \bdg_k^H \udbdw_k }^2 \comma & \text{ if } k = i \end{cases}. 
\end{equation} Then, the precoder $\bdw_k$ can be calculated by $\bdw_k = \udbdw_k \sqrt{q_k}(\forall k \in \clK)$, where $\udbdw_k$ and $q_k$ are given by \eqref{wk_normal_from_lambda} and \eqref{power_from_lambda}, respectively. The relationship between the parameters $\bdlambda$, $\bdgamma$ and $\bdW$ is illustrated in \Cref{fig_relationship}. As shown in \Cref{fig_relationship}, if the thresholds $\bdgamma$ are known, the optimal precoder $\bdW$ and Lagrange multipliers $\bdlambda$ can be calculated by solving $\clQ^{\ub}$ via FPI or SOCP algorithms. On the other hand, if the Lagrange multipliers $\bdlambda$ are known, the corresponding precoder $\bdW$ and thresholds $\bdgamma$ can be calculated via \Cref{wk_normal_from_lambda,gamma_from_lambda,power_from_lambda}. \begin{figure}[htbp] \centering \vspace{-1.5em} \includegraphics[width=0.3\textwidth]{relationship.eps} \caption{Relationship between $\bdlambda$, $\bdgamma$ and $\bdW$.} \label{fig_relationship} \vspace{-1.5em} \end{figure} We now interpret the latter case in more detail before elaborating on the LMO problem. Assume that $\bdlambda \ge \bdzro$ consists of arbitrary Lagrange multipliers with $\sum_{k=1}^{K} \lambda_k = P$. We can formulate a problem $\clQ^{\ub}$ as in \eqref{Problem_Power_Quad_UB_DL_Inst_Rx}, where the thresholds in $\bdgamma$ are calculated by \eqref{gamma_from_lambda}. Because the duality gap of the problem $\clQ^{\ub}$ is zero, the optimal solution to such a $\clQ^{\ub}$ satisfies $\sum_{k=1}^{K} \norm{\bdw_k}^2 = P$. Meanwhile, the optimal Lagrange multipliers of such a $\clQ^{\ub}$ are exactly equal to $\bdlambda$, and the corresponding optimal precoder $\bdW$ can be calculated by \eqref{wk_normal_from_lambda} and \eqref{power_from_lambda}. \subsection{Low-Complexity DL Precoder Design With LMO} \label{subsec_low-complexity_precoder_design_LMO} In this subsection, we convert the problem $\clS^{\ub}$ into an LMO problem.
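The $\bdlambda$-to-precoder map of the previous subsection can be verified numerically. The sketch below (random channels and an arbitrary $\bdlambda \ge \bdzro$ with $\sum_k \lambda_k = P$; all numbers are illustrative assumptions) computes $\{\udbdw_k\}$, $\bdgamma$ and $\bdq$ via \eqref{wk_normal_from_lambda}, \eqref{gamma_from_lambda} and \eqref{power_from_lambda}, and checks that the resulting precoders meet the SINR thresholds exactly and consume the total power $P$:

```python
import numpy as np

# lambda -> (normalized precoders, thresholds, powers) map; random illustrative
# channels, with bos_k standing in for beta_k / sigma_k^2.
rng = np.random.default_rng(8)
M, K, P = 6, 3, 4.0
G = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
bos = rng.uniform(0.5, 2.0, K)
lam = rng.uniform(0.1, 1.0, K)
lam *= P / lam.sum()                         # arbitrary multipliers, sum = P

S = (G * (lam * bos)) @ G.conj().T + np.eye(M)
Sg = np.linalg.solve(S, G)                   # columns S^{-1} g_k
wbar = Sg / np.linalg.norm(Sg, axis=0)       # normalized precoders
quad = np.real(np.einsum('mk,mk->k', G.conj(), Sg))   # g_k^H S^{-1} g_k
gamma = 1.0 / (1.0 - lam * bos * quad) - 1.0 # thresholds from lambda

A = np.abs(G.conj().T @ wbar) ** 2           # A[k, i] = |g_k^H wbar_i|^2
Mmat = -bos[:, None] * A                     # off-diagonal entries of M
Mmat[np.diag_indices(K)] = bos / gamma * np.diag(A)
q = np.linalg.solve(Mmat, np.ones(K))        # per-user powers q = M^{-1} 1

W = wbar * np.sqrt(q)
gains = np.abs(G.conj().T @ W) ** 2
sinr = np.diag(gains) * bos / ((gains.sum(axis=1) - np.diag(gains)) * bos + 1.0)

assert np.isclose(q.sum(), P)                # zero duality gap: total power is P
assert np.allclose(sinr, gamma)              # each UT meets its threshold exactly
```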
Substituting $\gamma_k$ in \eqref{gamma_from_lambda} into $r_k = \log_2 \left( 1 + \gamma_k \right)$ yields \begin{align}\FormulaSpace r_k &= - \log_2 \left( 1 - \frac{ \lambda_k \beta_k }{ \sigma_k^2 } \bdg_k^H \left( \sum_{i=1}^K \frac{ \lambda_i \beta_i }{ \sigma_i^2 } \bdg_i \bdg_i^H + \bdI_M \right)^{-1} \bdg_k \right) \notag \\ &\stackeq{a} \log_2 \det \left( \sum_{i=1}^{K} \frac{ \lambda_i \beta_i }{ \sigma_i^2 } \bdg_i \bdg_i^H + \bdI_M \right) - \log_2 \det \left( \sum_{i \ne k} \frac{ \lambda_i \beta_i }{ \sigma_i^2 } \bdg_i \bdg_i^H + \bdI_M \right)\comma \label{rate_UT_k_DL_Inst_Rx_Jesen_Dual} \end{align} where (a) follows from the identity $\det(\bdI + \bdA \bdB) = \det(\bdI + \bdB \bdA)$ \cite{Horn2013MatrixAnalysis}. Now, we can formulate an LMO problem as follows \begin{equation} \FormulaSpace \begin{aligned} \clM^{\ub}: \ \max_{ \bdlambda \ge \bdzro }\ \sum_{k=1}^K r_k, \ \mathrm{s.t.} \ \bdone^T \bdlambda = P. \label{Problem_DL_Tx_design_Inst_Rx_OptDual} \end{aligned} \end{equation} In the problem $\clM^{\ub}$, there is only one scalar $\lambda_k$, rather than a precoding vector $\bdw_k$, to be optimized for each UT $k \in \clK$, which can be exploited in the simplified DL precoder design. The relationship between problems $\clM^{\ub}$ and $\clS^{\ub}$ is described in the following proposition. \begin{myprop} \label{Prop_equivalence_Mub_Sub} The problems $\clM^{\ub}$ and $\clS^{\ub}$ have the same optimal value. \end{myprop} \begin{IEEEproof} Please refer to \Cref{appendix_proof_equivalence_Mub_Sub_proof}. \end{IEEEproof} According to \Cref{Prop_equivalence_Mub_Sub}, if the problem $\clM^{\ub}$ can be optimally solved, then the optimal precoder of $\clS^{\ub}$ can be directly obtained via \eqref{wk_normal_from_lambda} and \eqref{power_from_lambda}. Hence, the simplified DL precoder design is converted into the LMO problem.
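The equality (a) in \eqref{rate_UT_k_DL_Inst_Rx_Jesen_Dual} can be checked numerically with random parameters (illustrative values only):

```python
import numpy as np

# Numerical check of (a): -log2(1 - t_k g_k^H S^{-1} g_k)
#   = log2 det(S) - log2 det(S - t_k g_k g_k^H), with t_i = lambda_i beta_i / sigma_i^2.
# Random illustrative parameters (assumed, not from the system model).
rng = np.random.default_rng(3)
M, K = 6, 3
G = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
t = rng.uniform(0.1, 2.0, K)

S = (G * t) @ G.conj().T + np.eye(M)         # sum_i t_i g_i g_i^H + I
lhs = np.array([-np.log2(1 - t[k] * np.real(G[:, k].conj() @ np.linalg.solve(S, G[:, k])))
                for k in range(K)])
rhs = np.array([np.log2(np.linalg.det(S).real
                        / np.linalg.det(S - t[k] * np.outer(G[:, k], G[:, k].conj())).real)
                for k in range(K)])
assert np.allclose(lhs, rhs)
```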
However, since $r_k$ in $\clM^{\ub}$ is a non-convex function with respect to $\bdlambda$, the problem $\clM^{\ub}$ is still difficult to solve. Interestingly, we find that $r_k$ is equal to the rate of UT $k$ in a virtual UL with a single-input multiple-output (SIMO) channel, and $\lambda_k$ can be regarded as the virtual UL transmit power of UT $k$. Finally, we propose a low-complexity algorithm to obtain a locally optimal solution to $\clM^{\ub}$. \begin{figure}[htbp] \centering \vspace{-1.5em} \includegraphics[width=0.4\textwidth]{virtual_uplink.eps} \caption{Virtual UL model with SIMO channel.} \label{fig_virtual_uplink} \vspace{-1.5em} \end{figure} In the virtual UL, each single-antenna UT transmits one data stream to a BS equipped with $M$ antennas. The received signal $\bdy \in \Complex{M}{1}$ at the BS can be written as \begin{equation} \FormulaSpace \bdy = \sum_{i=1}^{K} \sqrt{ \beta_i/\sigma_i^2 } \bdg_i \cdot \sqrt{\lambda_i } d_i + \bdz\comma \end{equation} where $\sqrt{ \beta_i/\sigma_i^2 } \bdg_i$ is the channel vector between UT $i$ and the BS, and $\lambda_i \ge 0$ and $d_i$ are the transmit power and data symbol of UT $i$, respectively. The data symbol $d_i$ is assumed to have zero mean and unit variance, and $\bdz \sim \clCN(\bdzro,\bdI_M)$ is the additive complex Gaussian noise. The virtual UL model is shown in \Cref{fig_virtual_uplink}. We assume that the BS decodes the data stream of each UT without successive interference cancellation (SIC) \cite{Cover2006ElementsIT}. The BS uses a linear receiver $\bdu_k \in \Complex{M}{1}$ to recover the data symbol from UT $k$. Then, the recovered data symbol $\htd_k$ of UT $k$ can be written as \begin{equation} \FormulaSpace \htd_k = \bdu_k^H \bdy = \bdu_k^H \bdg_k \sqrt{ \frac{ \lambda_k \beta_k }{ \sigma_k^2 } } d_k + \sum_{i \ne k} \bdu_k^H \bdg_i \sqrt{ \frac{ \lambda_i \beta_i }{ \sigma_i^2 } } d_i + \bdu_k^H \bdz.
\end{equation} The virtual MSE (VMSE) of UT $k$ can be expressed as \begin{equation} \FormulaSpace \begin{aligned} \VMSE_k = \bbE\left\{ \abs{ \htd_k - d_k }^2 \right\} = \sum_{i=1}^{K} \abs{ \bdu_k^H \bdg_i }^2 \frac{ \lambda_i \beta_i }{ \sigma_i^2 } - 2\Re \left\{ \bdu_k^H \bdg_k \right\} \sqrt{ \frac{ \lambda_k \beta_k }{ \sigma_k^2 } } + \norm{ \bdu_k }^2 + 1. \label{VMSE_UTk_DL} \end{aligned} \end{equation} Thus, the $\bdu_k$ that minimizes $\VMSE_k$ is given by \begin{equation} \FormulaSpace \bdu_k^{\vmmse} = \Xinv{ \sum_{i=1}^{K} \frac{ \lambda_i \beta_i }{ \sigma_i^2 } \bdg_i \bdg_i^H + \bdI_M } \bdg_k \sqrt{ \frac{ \lambda_k \beta_k }{ \sigma_k^2 } }\comma \end{equation} and the corresponding virtual MMSE (VMMSE) of UT $k$ is \begin{equation} \FormulaSpace \VMMSE_k = 1 - \frac{ \lambda_k \beta_k }{ \sigma_k^2 } \bdg_k^H \Xinv{\sum_{i=1}^{K} \frac{ \lambda_i \beta_i }{ \sigma_i^2 } \bdg_i \bdg_i^H + \bdI_M} \bdg_k. \end{equation} Then, $r_k$ can be rewritten as \begin{equation} \FormulaSpace r_k = -\log_2 \VMMSE_k. \label{rk_VMMSEk_relationship} \end{equation} Next, we make use of the MM algorithm to obtain a locally optimal solution to $\clM^{\ub}$. According to the relationship between $r_k$ and $\VMMSE_k$ in \eqref{rk_VMMSEk_relationship}, a minorizing function of $r_k$ is constructed as follows \begin{equation} \FormulaSpace \xiter{h_k}{n} = - \xinv{\ln 2} \left( \sum_{i=1}^{K} \xiter{\psi_{k,i}}{n} \frac{ \lambda_i \beta_i }{ \sigma_i^2 } - 2 \xiter{\chi_k}{n} \sqrt{ \frac{ \lambda_k \beta_k }{ \sigma_k^2 } } + \xiter{\delta_k}{n} \right) + \xinv{\ln 2} + \xiter{r_k}{n}\comma \end{equation} where $\xiter{\psi_{k,i}}{n}$, $\xiter{\chi_k}{n}$ and $\xiter{\delta_k}{n}$ are given in \Cref{appendix_minorizing_rk_DL_instRx_proof}.
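A quick numerical check (random illustrative values) that $\bdu_k^{\vmmse}$ minimizes $\VMSE_k$ and attains $\VMMSE_k$:

```python
import numpy as np

# Random illustrative check of the VMMSE receiver; t_i stands in for
# lambda_i beta_i / sigma_i^2 (assumed values).
rng = np.random.default_rng(6)
M, K, k = 6, 3, 0
G = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
t = rng.uniform(0.1, 2.0, K)
st = np.sqrt(t)

def vmse(u):
    quad = sum(t[i] * np.abs(np.vdot(u, G[:, i])) ** 2 for i in range(K))
    return (quad - 2 * np.real(np.vdot(u, G[:, k])) * st[k]
            + np.linalg.norm(u) ** 2 + 1)

S = (G * t) @ G.conj().T + np.eye(M)         # sum_i t_i g_i g_i^H + I
u_star = np.linalg.solve(S, G[:, k]) * st[k] # the VMMSE receiver
vmmse = 1 - t[k] * np.real(G[:, k].conj() @ np.linalg.solve(S, G[:, k]))

assert np.isclose(vmse(u_star), vmmse)       # attains the VMMSE
for _ in range(50):                          # no perturbed receiver does better
    du = 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    assert vmse(u_star + du) >= vmse(u_star) - 1e-12
```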
Then, a locally optimal solution to $\clM^{\ub}$ can be obtained by iteratively solving the following convex program \begin{equation} \FormulaSpace \begin{aligned} \clM_n^{\ub}: \ \max_{ \bdlambda \ge \bdzro } \ \sum_{k=1}^{K} \xiter{h_k}{n}\comma \quad \mathrm{s.t.} \ \bdone^T \bdlambda = P\comma \end{aligned} \end{equation} which is equivalent to \begin{equation} \FormulaSpace \clM_n^{\ub}: \ \min_{ \bdlambda \ge \bdzro } \ \sum_{k=1}^{K} \sum_{i=1}^{K} \xiter{\psi_{i,k}}{n} \frac{ \lambda_k \beta_k }{ \sigma_k^2 } - 2 \sum_{k=1}^{K} \xiter{\chi_k}{n} \sqrt{ \frac{ \lambda_k \beta_k }{ \sigma_k^2 } }\comma \quad \mathrm{s.t.} \ \bdone^T \bdlambda = P. \label{Mub_convex_subprobelm} \end{equation} By applying the Lagrangian maximization, the optimal solution to $\clM_n^{\ub}$ is given by \begin{equation} \FormulaSpace \sqrt{\lambda_k} = \frac{ \xiter{\chi_k}{n} \sqrt{ \frac{ \beta_k }{ \sigma_k^2 } } }{ \sum_{i=1}^{K} \xiter{\psi_{i,k}}{n} \frac{ \beta_k }{ \sigma_k^2 } + \nu^{(n)} }\comma\ \forall k \in \clK\comma \label{sqrt_lambda_k_optimal_Mnub} \end{equation} where $\nu^{(n)}$ is the dual variable associated with the equality constraint in $\clM_n^{\ub}$. To make the right-hand side of \eqref{sqrt_lambda_k_optimal_Mnub} non-negative, the dual variable $\nu^{(n)}$ must satisfy $\nu^{(n)} \ge - \min\limits_{k \in \clK} \sum_{i=1}^{K} \psi_{i,k}^{(n)} \frac{\beta_k}{\sigma_k^2}$. Hence, the optimal solution to $\clM_n^{\ub}$ can be written as \begin{equation} \FormulaSpace \lambda_k^{(n+1)} = \frac{ \left( \chi_k^{(n)} \right)^2 \frac{ \beta_k }{ \sigma_k^2 } }{ \left( \sum_{i=1}^{K} \psi_{i,k}^{(n)} \frac{\beta_k}{\sigma_k^2} + \nu^{(n)} \right)^2 }\comma \ \forall k \in \clK\comma \label{lambda_Update_DL_Tx_Design_Jesen_Dual} \end{equation} where $\nu^{(n)}$ can be obtained by bisection search to make $\sum_{k=1}^{K} \xiter{\lambda_k}{n+1} = P$. The algorithm to solve $\clM^{\ub}$ is summarized in \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual}. 
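Because the update \eqref{lambda_Update_DL_Tx_Design_Jesen_Dual} involves only scalar operations plus a bisection over $\nu^{(n)}$, one inner iteration is cheap. The sketch below demonstrates the bisection with placeholder values for $\chi_k^{(n)}$ and $\sum_i \psi_{i,k}^{(n)}$ (hypothetical numbers, not computed from the actual minorizing function):

```python
import numpy as np

# Bisection over nu for the lambda update; chi and psi_col are placeholder
# values (assumptions) for chi_k^{(n)} and sum_i psi_{i,k}^{(n)}.
rng = np.random.default_rng(4)
K, P = 5, 10.0
bos = rng.uniform(0.5, 2.0, K)               # beta_k / sigma_k^2
chi = rng.uniform(0.1, 1.0, K)
psi_col = rng.uniform(0.1, 1.0, K)

den0 = psi_col * bos                         # sum_i psi_{i,k}^{(n)} beta_k / sigma_k^2

def lam(nu):
    return chi ** 2 * bos / (den0 + nu) ** 2

# sum_k lambda_k(nu) is decreasing on (-min(den0), inf): bisect for sum = P
lo, hi = -den0.min() + 1e-9, 1.0
while lam(hi).sum() > P:
    hi *= 2.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if lam(mid).sum() > P else (lo, mid)

nu = 0.5 * (lo + hi)
assert np.isclose(lam(nu).sum(), P, rtol=1e-6)
assert (lam(nu) >= 0).all()
```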
When the Lagrange multipliers $\bdlambda$ are calculated with \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual}, the associated DL precoders $\bdW$ can be determined by \eqref{wk_normal_from_lambda} and \eqref{power_from_lambda}. There are also outer and inner iterations in \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual}. In each outer iteration of \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual}, an $M$-dimensional matrix inversion and the multiplication of the inverse matrix with $K$ vectors are performed to calculate $\psi_{k,i}^{(n)}(\forall k,i \in \clK)$ and $\chi_k^{(n)}(\forall k \in \clK)$. Meanwhile, in the inner iterations of \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual}, the computation is performed only on scalar variables. Therefore, the complexity of \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} is mainly characterized by $\Nout ( M^3 + KM^2 )$. Compared with \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen}, the complexity of \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} is significantly reduced. In addition, the superiority of \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} is even more pronounced when $M$ or $K$ is extremely large, which makes it quite attractive for massive MIMO LEO SATCOM. \begin{algorithm}[!t] \caption{LMO algorithm for solving $\clM^{\ub}$.} \label{algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} \begin{algorithmic}[1] \REQUIRE Initialize Lagrange multipliers $\bdlambda^{(0)} = \bdlambda^{\init}$, and iteration index $n = 0$. \ENSURE Locally optimal Lagrange multipliers $\bdlambda$ to $\clM^{\ub}$. \WHILE 1 \STATE Calculate $\sum_{i=1}^{K} \psi_{i,k}^{(n)}$ and $\chi_k^{(n)}$ for all $k \in \clK$. \STATE Update Lagrange multipliers according to \eqref{lambda_Update_DL_Tx_Design_Jesen_Dual}. \STATE \textbf{if} $\abs{ \sum\limits_{k=1}^{K} \xiter{r_k}{n+1} - \sum\limits_{k=1}^{K} \xiter{r_k}{n} } < \epsilon$, \textbf{break}; \textbf{else} set $n:=n+1$.
\ENDWHILE \end{algorithmic} \end{algorithm} Finally, we provide a theoretical analysis of $r_k$. A lower bound of $r_k$ is given by \begin{align}\FormulaSpace r_k &\stackeq{a} - \log_2 \left[ \left( \bdI + \bdGamma^{\xinv{2}} \bdG^H \bdG \bdGamma^{\xinv{2}} \right)^{-1} \right]_{k,k} \notag \\ &\stackeq{b} \log_2 \left( 1 + \frac{ \lambda_k \beta_k }{ \sigma_k^2 } - \frac{ \lambda_k \beta_k }{ \sigma_k^2 } \bdg_k^H \udbdG_{k} \udbdGamma_{k}^{\xinv{2}} \left( \bdI + \udbdGamma_{k}^{\xinv{2}} \udbdG_{k}^H \udbdG_{k} \udbdGamma_{k}^{\xinv{2}} \right)^{-1} \udbdGamma_{k}^{\xinv{2}} \udbdG_{k}^H \bdg_k \right) \notag \\ &\stackgeq{c} \log_2 \left( 1 + \frac{ \lambda_k \beta_k }{ \sigma_k^2 } - \frac{ \lambda_k \beta_k }{ \sigma_k^2 } \sum_{i \ne k} \frac{ \lambda_i \beta_i }{ \sigma_i^2 } \abs{ \bdg_i^H \bdg_k }^2 \right)\comma \label{rk_lower_bound} \end{align} where $\bdGamma = \diag \left( \frac{ \lambda_1 \beta_1 }{ \sigma_1^2 }, \dots, \frac{ \lambda_K \beta_K }{ \sigma_K^2 } \right) $, $\bdG = [ \bdg_1 \ \dots\ \bdg_K ] $, $\udbdGamma_k $ is the $(K-1)$-dimensional diagonal matrix obtained by deleting the $k$th diagonal element in $\bdGamma$, $\udbdG_k$ is an $M\times (K-1)$ matrix obtained by deleting the $k$th column in $\bdG$, (a) follows from the matrix inversion lemma \cite{Horn2013MatrixAnalysis}, (b) follows from the identity \cite[Eq. 0.7.3.1]{Horn2013MatrixAnalysis}, and (c) follows from the positive semidefinite property of $\udbdGamma_{k}^{\xinv{2}} \udbdG_{k}^H \udbdG_{k} \udbdGamma_{k}^{\xinv{2}}$. From \eqref{rk_lower_bound}, we can see that the multi-user interference is caused by the non-orthogonality between the array response vectors $\bdg_k(\forall k \in \clK)$. In addition, an upper bound for the sum of $r_k$ is given in the following proposition, together with the necessary and sufficient condition to achieve it.
\begin{myprop} \label{Prop_rk_upperbound} The sum of $r_k$ satisfies the following inequality \begin{equation} \FormulaSpace \begin{aligned} \sum_{k=1}^K r_k \le \sum_{k=1}^K \log_2 \left( 1 + \frac{ \lambda_k \beta_k }{ \sigma_k^2 } \right)\comma \end{aligned} \end{equation} and the equality holds if and only if $\bdG^H \bdG = \bdI$. \end{myprop} \begin{IEEEproof} Please refer to \Cref{appendix_Prop_rk_upperbound_proof}. \end{IEEEproof} The condition $\bdG^H \bdG = \bdI$ means that the array response vectors $\bdg_k(\forall k \in \clK)$ are orthogonal to each other. Under this condition, the lower bound of $r_k$ in \eqref{rk_lower_bound} also becomes tight. Then, $r_k$ reduces to $r_k = \log_2 \left( 1 + \frac{ \lambda_k \beta_k }{ \sigma_k^2 } \right)$, and the problem $\clS^{\ub}$ reduces to the convex program \begin{equation} \FormulaSpace \begin{aligned} \max_{ \bdlambda \ge \bdzro } \ \sum_{k=1}^{K} \log_2 \left( 1 + \frac{ \lambda_k \beta_k }{ \sigma_k^2 } \right)\comma \quad \mathrm{s.t.} \ \bdone^T \bdlambda = P. \end{aligned} \end{equation} The optimal $\bdlambda$ can be calculated by classic water-filling algorithms \cite{Cover2006ElementsIT}, i.e., $\lambda_k = \left[ \frac{ 1 }{\ln 2 \cdot \nu } - \frac{ \sigma_k^2 }{ \beta_k } \right]^{+}$, $\forall k \in \clK$, where $[x]^+ = \max\{x, 0\}$ and $\nu$ is chosen such that $\bdone^T \bdlambda = P$. \section{Simulation Results} \label{Sectioin_Simulation} In this section, we present simulation results to verify the performance of the proposed DL transmit designs in massive MIMO LEO SATCOM. The simulation parameters are summarized in \Cref{table_simulation}. As we can see, when the orbit altitude $H$ is fixed, the location distribution of UTs is fully determined by the distribution of their space angle pairs.
From \Cref{table_simulation}, the space angle pair $\tdbdtheta_k = (\tdtheta_k^{\rx},\tdtheta_k^{\ry})$ of UT $k$ is distributed as $\tdtheta_k^{\rx}$, $\tdtheta_k^{\ry} \sim \mathrm{U}\ [-1/2,1/2]$. The antenna spacing at the satellite is set as $d_{\rx} = d_{\ry} = \lambda$ to avoid grating lobes in the visible intervals of $\tdtheta_k^{\rx}$ and $\tdtheta_k^{\ry}$, and to maximize the spatial resolution \cite{Rudge1983HandbookAntenna2}. The nadir angle $\vtheta_k$ depicted in \Cref{fig_UPA} is determined from the triangle geometry $\cos \vtheta_k = \sin \theta_k^{\ry} \sin \theta_k^{\rx} = \sqrt{1 - (\tdtheta_k^{\ry})^2 - (\tdtheta_k^{\rx})^2}$. Hence, the distance between the satellite and UT $k$ is given by $D_k = \sqrt{R_e^2 + R_s^2 - 2 R_e R_s \cos \psi_k}$ \cite{Lutz2000SatSysPerson}, where $R_s = R_e + H$, $R_e$ is the earth radius, and $\psi_k = \sin^{-1} \left( \frac{R_s}{R_e} \sin \vtheta_k \right) - \vtheta_k$ is the earth central angle of UT $k$ \cite{Lutz2000SatSysPerson}. The average channel power is calculated as $\beta_k = G_{\sat} G_{\ut} NM \cdot \lambda^2/(4\pi D_k)^2 (\forall k \in \clK)$, where $G_{\sat}$ and $G_{\ut}$ are the gains of antenna elements at the satellite and UTs, respectively. The random vector $\bdd_k$ can be generated by following classic terrestrial wireless channel modeling techniques \cite{Gesbert2002OutdoorMIMO}. The noise variance is $\sigma_k^2 = k_\rB T_\rn B$, where $k_\rB = 1.38 \times 10^{-23} \text{ J} \cdot \text{K}^{-1}$ is the Boltzmann constant, $T_{\rn}$ is the noise temperature and $B$ is the system bandwidth. The transmit power $P$ ranges from $0$ dBW to $30$ dBW.
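The geometry and link-budget chain described above (space angles $\to$ nadir angle $\to$ earth central angle $\to$ distance $\to$ average channel power) can be sketched numerically. Parameter values follow \Cref{table_simulation}; the random seed and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Parameters from the simulation table, in SI units
R_e, H = 6378e3, 1000e3           # earth radius, orbit altitude [m]
R_s = R_e + H
f_c, B, T_n = 2e9, 20e6, 300.0    # carrier frequency, bandwidth, noise temperature
lam = 3e8 / f_c                   # wavelength [m]
k_B = 1.38e-23                    # Boltzmann constant [J/K]
M, N = 16 * 16, 6 * 6             # satellite / UT antenna counts
G_sat = G_ut = 10 ** (3 / 10)     # 3 dB element gains (linear scale)
K = 256

# Space angle pairs ~ U[-1/2, 1/2]; nadir angle from the triangle geometry
tx = rng.uniform(-0.5, 0.5, K)
ty = rng.uniform(-0.5, 0.5, K)
vtheta = np.arccos(np.sqrt(1.0 - tx**2 - ty**2))

# Earth central angle and satellite-UT distance
psi = np.arcsin(R_s / R_e * np.sin(vtheta)) - vtheta
D = np.sqrt(R_e**2 + R_s**2 - 2 * R_e * R_s * np.cos(psi))

# Average channel power and noise variance
beta = G_sat * G_ut * N * M * lam**2 / (4 * np.pi * D) ** 2
sigma2 = k_B * T_n * B

# The slant range can never be shorter than the orbit altitude
assert np.all(D >= H - 1.0) and np.all(beta > 0)
```

At the sub-satellite point ($\vtheta_k = 0$) the chain collapses to $\psi_k = 0$ and $D_k = H$, which the final assertion reflects.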
\begin{table}[!t] \centering \small \vspace{-1em} \captionof{table}{Simulation Parameters} \label{table_simulation} \begin{tabular}{LR} \toprule Parameters & Values \\ \midrule \rowcolor{lightblue} Earth radius $R_e$ & $6378$ km \\ Orbit altitude $H$ & $1000$ km \\ \rowcolor{lightblue} Central frequency $f_c$ & $2$ GHz \\ Bandwidth $B$ & $20$ MHz \\ \rowcolor{lightblue} Noise temperature $T_\rn$ & $300$ K \\ Number of antennas (satellite) $\Mx \times \My$ & $16\times 16$ \\ \rowcolor{lightblue} Number of antennas (UT) $\Nx \times \Ny$ & $6 \times 6$ \\ Antenna spacing $d_{\rx}$, $d_{\ry}$, $d_{\rx'}$, $d_{\ry'}$ & $\lambda$ \\ \rowcolor{lightblue} Antenna gain $G_{\sat}$, $G_{\ut}$ & $3$ dB \\ Number of UTs $K$ & $256$ \\ \rowcolor{lightblue} Rician factor $\kappa_k$ & $0$, $10$ dB \\ Space angle distribution $\tdtheta_k^{\rx}$, $\tdtheta_k^{\ry}$ & $\mathrm{U}\ \left[-1/2, 1/2\right]$ \\ \bottomrule \end{tabular} \vspace{-1em} \end{table} \begin{figure}[!t] \centering \vspace{-2em} \subfigure[$\kappa_k = 0\ \dB$.]{\includegraphics[width=0.4\textwidth]{Performance_DL_Tx_16x16_Rx_6x6_K=256_ergo_jesn_dual_slnr_los_diffSNR_RiceK=0dB.eps}} \quad \subfigure[$\kappa_k = 10\ \dB$.]{\includegraphics[width=0.4\textwidth]{Performance_DL_Tx_16x16_Rx_6x6_K=256_ergo_jesn_dual_slnr_los_diffSNR_RiceK=10dB.eps}} \caption{DL sum rate performance of \Cref{algorithm_DL_Tx_Design_Inst_Rx,algorithm_DL_Tx_Design_Inst_Rx_Jesen,algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} for different Rician factors.} \label{Fig_DL_Performance} \vspace{-1em} \end{figure} In \Cref{Fig_DL_Performance}, the sum rate performance for \Cref{algorithm_DL_Tx_Design_Inst_Rx,algorithm_DL_Tx_Design_Inst_Rx_Jesen,algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} is depicted. 
Meanwhile, the performance of ASLNR precoders $\{ \bdw_k^{\aslnr} \}_{k=1}^K$ is also illustrated, where $\bdw_k^{\aslnr} = \sqrt{p_k} \cdot \frac{\bdT_k^{-1} \bdg_k}{\smallnorm{\bdT_k^{-1} \bdg_k}} $ with $\bdT_k = \sum_{i=1}^{K} \beta_i \bdg_i \bdg_i^H + \frac{\sigma_k^2}{p_k} \bdI_M $ \cite{You2019MassiveMIMOLEO}. The power $p_k$ of the ASLNR precoder $\bdw_k^{\aslnr}$ is set to $p_k = P/K\ (\forall k \in \clK)$ for simplicity. Besides, the performance of precoding with only the LoS part $\brbdH_k$ is shown in \Cref{Fig_DL_Performance}. It can be observed that the difference in sum rate performance among \Cref{algorithm_DL_Tx_Design_Inst_Rx,algorithm_DL_Tx_Design_Inst_Rx_Jesen,algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} is negligible, and they achieve a performance gain of about $2$ dB over the ASLNR precoders at $P = 25$ dBW. \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen,algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} behave slightly worse than \Cref{algorithm_DL_Tx_Design_Inst_Rx} because they use the upper bound of the ergodic sum rate. Nevertheless, \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} can achieve approximately the same performance as \Cref{algorithm_DL_Tx_Design_Inst_Rx,algorithm_DL_Tx_Design_Inst_Rx_Jesen}, with much less computational effort. In addition, \Cref{Fig_DL_Performance} also indicates that precoding with only the LoS channel part can suffer from large performance degradation, especially when the Rician factor is not large. Thus, \Cref{algorithm_DL_Tx_Design_Inst_Rx,algorithm_DL_Tx_Design_Inst_Rx_Jesen,algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} can attain better sum rate performance by comprehensively exploiting both the LoS and the scattering channel power.
\begin{figure}[!t] \centering \vspace{-1em} \subfigure[$\kappa_k = 0$ dB, $P=20$ dBW.]{\includegraphics[width=0.4\textwidth]{Convergence_DL_Tx_16x16_Rx_6x6_K=256_ergo_jesn_dual_SNR=20dB_RiceK=0dB.eps}} \quad \subfigure[$\kappa_k = 10$ dB, $P=20$ dBW.]{\includegraphics[width=0.4\textwidth]{Convergence_DL_Tx_16x16_Rx_6x6_K=256_ergo_jesn_dual_SNR=20dB_RiceK=10dB.eps}} \caption{Convergence of \Cref{algorithm_DL_Tx_Design_Inst_Rx,algorithm_DL_Tx_Design_Inst_Rx_Jesen,algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} for different Rician factors.} \label{Fig_DL_Convergence} \vspace{-2em} \end{figure} In \Cref{Fig_DL_Convergence}, we show the convergence performance of \Cref{algorithm_DL_Tx_Design_Inst_Rx,algorithm_DL_Tx_Design_Inst_Rx_Jesen,algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual}. It is observed that \Cref{algorithm_DL_Tx_Design_Inst_Rx,algorithm_DL_Tx_Design_Inst_Rx_Jesen,algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} converge within tens of iterations. The objective values attained by \Cref{algorithm_DL_Tx_Design_Inst_Rx,algorithm_DL_Tx_Design_Inst_Rx_Jesen,algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} coincide with each other. Since the inner iterations in \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} are only carried out for scalar variables, the computational complexity of \Cref{algorithm_DL_Tx_Design_Inst_Rx_Jesen_Dual} is much lower than that of \Cref{algorithm_DL_Tx_Design_Inst_Rx,algorithm_DL_Tx_Design_Inst_Rx_Jesen}, which makes it a suitable option for precoding in massive MIMO LEO SATCOM. Moreover, as the precoder designs rely only on the sCSI, which is independent of the subcarriers and time instances within an sCSI stable period, the implementation overhead can be very low. \section{Conclusion} \label{Section_Conclusion} In this paper, we have investigated the DL transmit design in FFR massive MIMO LEO SATCOM.
First, we established the massive MIMO LEO satellite channel model, and the Doppler and delay were compensated for at each UT to simplify DL transmission. Then, we showed that the rank of the transmit covariance matrix should not be larger than one to maximize the ergodic sum rate, which indicated that precoding with at most one data stream for each UT is optimal for linear transmitters. The MM algorithm combined with the Monte-Carlo method was used to solve the precoder design. To avoid the time-consuming sample average in the Monte-Carlo method, we simplified the DL transmit design by approximating the DL ergodic rate with its closed-form upper bound. It was shown that the transmit covariance matrices keep the rank-one property, and the corresponding simplified precoder design was formulated. We derived the solution structure for the simplified precoder design, in which the precoders are determined by the Lagrange multipliers. Then, an LMO problem was formulated, which has only one scalar variable to be optimized for each UT, and a low-complexity algorithm was proposed to solve the LMO problem. The convergence and performance of the proposed approaches were demonstrated by simulation results. \appendices \section{Proof of \Cref{Prop_rank-one_Covariance}} \label[secinapp]{appendix_rank-one_Covariance_proof} Our proof shares a similar spirit with the approaches in \cite{Sun2019BDMAOptical}. We first prove that the optimal solution to $\clP$ must be rank-one. Then, the proof procedures can be directly applied to the optimal solution to $\clP^{\ub}$.
The gradient of $\clR_i$ with respect to $\bdQ_k$ can be calculated by \begin{equation} \FormulaSpace \begin{aligned} \frac{ \partial \clR_i }{ \partial \bdQ_k^* } = \begin{cases} \xinv{\ln 2} \cdot \bbE\left\{ \frac{ \norm{ \bdd_k }^2 }{ T_{k}(\bdd_k) } \right\} \bdg_k \bdg_k^H\comma & \text{ if } i = k \\ \xinv{\ln 2} \cdot \left( \bbE\left\{ \frac{ \norm{ \bdd_i }^2 }{ T_{i}(\bdd_i) } \right\} - \bbE \left\{ \frac{ \norm{ \bdd_i }^2 }{ I_{i}(\bdd_i) } \right\} \right) \bdg_i \bdg_i^H\comma & \text{ if } i \ne k \end{cases}\comma \end{aligned} \end{equation} where $T_{k}(\bdd_k) = \sigma_k^2 + \sum_{\ell=1}^K \bdg_k^H \bdQ_{\ell} \bdg_k \norm{\bdd_k}^2$ and $I_{k} (\bdd_k) = \sigma_k^2 + \sum_{\ell \ne k} \bdg_k^H \bdQ_{\ell} \bdg_k \norm{\bdd_k}^2$. The Lagrangian of $\clP$ is given by \begin{equation} \FormulaSpace \clL_{\clP} = \sum_{k=1}^{K} \clR_k - v \left( \sum_{k=1}^{K} \trace \left( \bdQ_k \right) - P \right) + \sum_{k=1}^{K} \trace \left( \bdPhi_k \bdQ_k \right)\comma \end{equation} where $v \ge 0$ and $\bdPhi_k \succeq \bdzro$ are the Lagrange multipliers associated with the power constraint $\sum_{k=1}^{K} \trace \left( \bdQ_k \right) \le P$ and positive semidefinite condition $\bdQ_k \succeq \bdzro$. From the Karush-Kuhn-Tucker (KKT) conditions, the gradient of $\clL_{\clP}$ with respect to the optimal $\bdQ_k$ should be zero, i.e., \begin{equation} \FormulaSpace \begin{aligned} \frac{ \partial \clL_{\clP} }{ \partial \bdQ_k^* } &= - \bdA_k + \bdB_k - v \bdI_M + \bdPhi_k = \bdzro\comma \label{Grad_zero_LP} \end{aligned} \end{equation} where $\bdA_k = \xinv{\ln 2} \cdot \sum_{i \ne k} \left( \bbE\left\{ \frac{ \norm{ \bdd_i }^2 }{ I_{i}(\bdd_i) } \right\} - \bbE \left\{ \frac{ \norm{ \bdd_i }^2 }{ T_{i}(\bdd_i) } \right\} \right) \bdg_i \bdg_i^H$ and $\bdB_k = \xinv{\ln 2} \cdot \bbE\left\{ \frac{ \norm{ \bdd_k }^2 }{ T_{k}(\bdd_k) } \right\} \bdg_k \bdg_k^H$ are both positive semidefinite matrices. 
From \eqref{Grad_zero_LP}, $\bdPhi_k$ can be expressed as $\bdPhi_k = v \bdI_M + \bdA_k - \bdB_k$. To guarantee $\bdPhi_k \succeq \bdzro$, we must have $v>0$. Thus, we have $\rank(v\bdI_M + \bdA_k) = M$. From the rank-sum inequality $\abs{ \rank(\bdA) - \rank(\bdB) } \le \rank(\bdA+\bdB)$ \cite[0.4.5(d)]{Horn2013MatrixAnalysis}, the rank of $\bdPhi_k$ must satisfy $\rank(\bdPhi_k) \ge M-1$. Due to the Sylvester inequality $\rank(\bdA) + \rank(\bdB) - n \le \rank(\bdA\bdB)$ \cite[0.4.5(c)]{Horn2013MatrixAnalysis}, where $n$ is the number of columns of $\bdA$, we can obtain \begin{equation} \FormulaSpace \rank(\bdPhi_k) + \rank(\bdQ_k) - M \le \rank(\bdPhi_k \bdQ_k) \stackeq{a} 0\comma \end{equation} where (a) follows from the complementary slackness condition $\bdPhi_k \bdQ_k = \bdzro$. Hence, the rank of $\bdQ_k$ satisfies $\rank(\bdQ_k) \le 1$. This concludes the proof. \section{A minorizing function of $\clR_k$ } \label[secinapp]{appendix_minorize_udR_DL_instRx_proof} Let $\bdW^{(n)} = \left[ \bdw_1^{(n)} \dots \bdw_K^{(n)} \right]$ be the collection of precoding vectors in the $n$th iteration. The MMSE in the $n$th iteration is given by $\xiter{\MMSE_k}{n} = 1 - \frac{ \smallabs{ \bdg_k^H \bdw_k^{(n)} }^2 \norm{\bdd_k}^2 }{ \sigma_k^2 + \sum_{i=1}^{K} \smallabs{ \bdg_k^H \bdw_i^{(n)} }^2 \norm{\bdd_k}^2 }$. From the concavity of $\log_2(\cdot)$, we have \begin{align}\FormulaSpace \clR_k &= - \bbE\left\{ \log_2 \MMSE_k \right\} \ge \xiter{\clR_k}{n} - \xinv{\ln 2} \cdot \bbE \left\{ \frac{ \MMSE_k - \xiter{\MMSE_k}{n} }{ \xiter{\MMSE_k}{n} } \right\} \notag \\ &\stackgeq{a} \xiter{\clR_k}{n} + \xinv{\ln 2} - \xinv{\ln 2} \cdot \bbE \left\{ \frac{ \MSE_k }{ \xiter{\MMSE_k}{n} } \right\} \triangleq \xiter{g_k}{n}\comma \label{DL_udR_k_concavity} \end{align} where $\xiter{\clR_k}{n}$ is the DL ergodic rate of UT $k$ in the $n$th iteration, and (a) follows from $\MMSE_k \le \MSE_k$. As indicated in \eqref{MSE_DL}, $\MSE_k$ is a function of the precoder $\bdW$ and the receiver $\bdc_k$.
To make the inequality $\clR_k \ge \xiter{g_k}{n}$ in \eqref{DL_udR_k_concavity} hold with equality at $\bdW^{(n)}$, the receiver $\bdc_k$ in $\MSE_k$ should be chosen as $\bdc_k^{(n)} = \frac{ \bdd_k \cdot \bdg_k^H \bdw_k^{(n)} }{ \sigma_k^2 + \sum_{i=1}^{K} \smallabs{ \bdg_k^H \bdw_i^{(n)} }^2 \norm{\bdd_k}^2 }$. After substituting $\bdc_k^{(n)}$ into $\MSE_k$, we have \begin{align}\FormulaSpace \bbE \left\{ \frac{ \MSE_k }{ \xiter{\MMSE_k}{n} } \right\} &= a_k^{(n)} \sum_{i=1}^K \smallabs{ \bdw_i^H \bdg_k }^2 - 2 \Re \left\{ \bdw_k^H \bdg_k \cdot b_k^{(n)} \right\} + c_k^{(n)}\comma \end{align} where $a_k^{(n)} = \bbE \left\{ \frac{ \smallabs{\bdd_k^H \bdc_k^{(n)} }^2 }{ \xiter{\MMSE_k}{n} } \right\}$, $b_k^{(n)} = \bbE \left\{ \frac{ \bdd_k^H \bdc_k^{(n)} }{ \xiter{\MMSE_k}{n} } \right\}$ and $c_k^{(n)}= \bbE \left\{ \frac{ \sigma_k^2 \smallnorm{ \bdc_k^{(n)} }^2 + 1 }{ \xiter{\MMSE_k}{n} } \right\}$. \section{Proof of \Cref{Prop_Structure_UB_DL_instRx}} \label[secinapp]{appendix_structure_UB_DL_instRx_proof} The Lagrangian of \eqref{Problem_Power_Quad_UB_DL_Inst_Rx} is given by \begin{equation} \FormulaSpace \clL_{\clQ}^{\ub} = \sum_{k=1}^{K} \norm{ \bdw_k }^2 + \sum_{k=1}^{K} \lambda_k \left( \frac{ \beta_k }{ \sigma_k^2 } \sum_{i \ne k} \smallabs{ \bdw_i^H \bdg_k }^2 + 1 - \frac{ \beta_k }{ \gamma_k \sigma_k^2 } \smallabs{ \bdw_k^H \bdg_k }^2 \right)\comma \label{Lagrangian_Power_Quad} \end{equation} where $\lambda_k \ge 0$ is the Lagrange multiplier associated with the inequality for UT $k$ in \eqref{Problem_Power_Quad_UB_DL_Inst_Rx}. The gradient of $\clL_{\clQ}^{\ub}$ can be written as \begin{equation} \FormulaSpace \frac{\partial \clL_{\clQ}^{\ub} }{\partial \bdw_k^* } = \bdw_k + \sum_{i \ne k} \frac{ \lambda_i \beta_i }{ \sigma_i^2 } \bdg_i \bdg_i^H \bdw_k - \frac{ \lambda_k \beta_k }{ \gamma_k \sigma_k^2 } \bdg_k \bdg_k^H \bdw_k.
\end{equation} Setting the gradient of $\clL_{\clQ}^{\ub}$ to zero yields the condition satisfied by the minimizer of $\clL_{\clQ}^{\ub}$, i.e., \begin{equation} \FormulaSpace \left( \sum_{i \ne k} \frac{ \lambda_i \beta_i }{ \sigma_i^2 } \bdg_i \bdg_i^H + \bdI_M \right) \bdw_k = \frac{ \lambda_k \beta_k }{ \gamma_k \sigma_k^2 } \bdg_k \bdg_k^H \bdw_k\comma \label{zero_gradient} \end{equation} which is in fact equivalent to \eqref{Optimal_power_min_DL_instRx_Jesen}. Substituting \eqref{zero_gradient} into \eqref{Lagrangian_Power_Quad} yields the minimum of $\clL_{\clQ}^{\ub}$ as $\sum_{k=1}^{K} \lambda_k$. As stated in \Cref{subsec_structure_DL_precoder}, when $r_k$ is the optimal $\clR_k^{\ub}$ in $\clS^{\ub}$, the optimal value of $\clQ^{\ub}$ is $P$. Because the duality gap for the problem $\clQ^{\ub}$ is zero, which can be easily verified by using the approaches in \cite{Yu2007TransmitterOptimize,Emil2014OptimalBeamforming}, the optimal Lagrange multipliers $\{ \lambda_k \}_{k=1}^K$ of \eqref{Problem_Power_Quad_UB_DL_Inst_Rx} must satisfy $\sum_{k=1}^{K} \lambda_k = P$. This concludes the proof. \section{Proof of \Cref{Prop_equivalence_Mub_Sub}} \label[secinapp]{appendix_proof_equivalence_Mub_Sub_proof} The optimal DL precoder to $\clS^{\ub}$ must satisfy the power constraint with equality, because any precoders with $\sum_{k=1}^{K} \norm{\bdw_k}^2 < P$ can be scaled up to increase the objective value. Suppose that $\{ \bdw_k^{\circ} \}_{k=1}^K$ is a feasible solution to $\clS^{\ub}$ satisfying $\sum_{k=1}^{K} \norm{ \bdw_k^{\circ} }^2 = P$. Let $r_k^{\circ}$ represent the rate of UT $k$ obtained by substituting $\{ \bdw_k^{\circ} \}_{k=1}^K$ into \eqref{clRk_ub}, and denote $\gamma_k^{\circ} = 2^{r_k^{\circ}}-1$. We can calculate the Lagrange multipliers $\{ \lambda_k^{\circ} \}_{k=1}^K$ by solving $\clQ^{\ub}$ in \eqref{Problem_Power_Quad_UB_DL_Inst_Rx} where the thresholds are chosen as $\{\gamma_k^{\circ}\}_{k=1}^K$.
Then, $\{ \bdw_k^{\circ} \}_{k=1}^K$ and $\{ \lambda_k^{\circ} \}_{k=1}^K$ will satisfy the KKT condition of $\clQ^{\ub}$ in \eqref{Problem_Power_Quad_UB_DL_Inst_Rx} with the thresholds $\{\gamma_k^{\circ}\}_{k=1}^K$. Therefore, the Lagrange multipliers will satisfy $\sum_{k=1}^{K} \lambda_k^{\circ} = P$, which implies that $\{ \lambda_k^{\circ} \}_{k=1}^K$ is a feasible solution to $\clM^{\ub}$. In addition, the objective value of $\clM^{\ub}$ attained by $\{ \lambda_k^{\circ} \}_{k=1}^K$ will be $\sum_{k=1}^{K} r_k^{\circ}$. On the other hand, let $\{ \lambda_k^{\diamond} \}_{k=1}^K$ be a feasible solution to $\clM^{\ub}$, i.e., $\sum_{k=1}^{K} \lambda_k^{\diamond} = P$. Then, $r_k^{\diamond}$ is calculated by substituting $\{ \lambda_k^{\diamond} \}_{k=1}^K$ into \eqref{rate_UT_k_DL_Inst_Rx_Jesen_Dual}, and denote $\gamma_k^{\diamond} = 2^{r_k^{\diamond}}-1$. We can calculate the DL precoders $\{ \bdw_k^{\diamond} \}_{k=1}^K$ via \eqref{wk_normal_from_lambda} and \eqref{power_from_lambda} where the thresholds are equal to $\{ \gamma_k^{\diamond} \}_{k=1}^K$. Now, $\{ \bdw_k^{\diamond} \}_{k=1}^K$ and $\{ \lambda_k^{\diamond} \}_{k=1}^K$ will satisfy the KKT condition of $\clQ^{\ub}$ in \eqref{Problem_Power_Quad_UB_DL_Inst_Rx} with the thresholds $\{\gamma_k^{\diamond}\}_{k=1}^K$. Therefore, $\{ \bdw_k^{\diamond} \}_{k=1}^K$ will satisfy $\sum_{k=1}^{K} \norm{ \bdw_k^{\diamond} }^2 = P$, which means that $\{ \bdw_k^{\diamond} \}_{k=1}^K$ is a feasible solution to $\clS^{\ub}$. Furthermore, $\{ \bdw_k^{\diamond} \}_{k=1}^K$ can achieve the objective value $\sum_{k=1}^{K} r_k^{\diamond}$ for problem $\clS^{\ub}$. Combining the above results, we can conclude that $\clM^{\ub}$ and $\clS^{\ub}$ have the same objective value region, and hence the same optimal value. This concludes the proof.
\section{A minorizing function of $r_k$} \label[secinapp]{appendix_minorizing_rk_DL_instRx_proof} Define $\xiter{\VMMSE_k}{n} = 1 - \frac{ \xiter{\lambda_k}{n} \beta_k }{ \sigma_k^2 } \bdg_k^H \Xinv{ \sum_{i=1}^{K} \frac{ \xiter{\lambda_i}{n} \beta_i }{ \sigma_i^2 } \bdg_i \bdg_i^H + \bdI_M } \bdg_k$ as the VMMSE of UT $k$ in the $n$th iteration. By applying the concavity of $\log_2(\cdot)$, we have \begin{align}\FormulaSpace r_k &= - \log_2 \VMMSE_k \ge \xiter{r_k}{n} - \xinv{\ln 2} \frac{ \VMMSE_k - \xiter{\VMMSE_k}{n} }{ \xiter{\VMMSE_k}{n} } \notag \\ &\stackgeq{a} \xiter{r_k}{n} + \xinv{\ln 2} - \xinv{\ln 2} \frac{ \VMSE_k }{ \xiter{\VMMSE_k}{n} } \triangleq h_k^{(n)}\comma \label{DL_rk_concavity} \end{align} where $\xiter{r_k}{n}$ is the value of $r_k$ in the $n$th iteration, and (a) follows from $\VMMSE_k \le \VMSE_k$. Notice that $\VMSE_k$ in \eqref{VMSE_UTk_DL} depends on $\{ \lambda_k \}_{k=1}^K$ and $\bdu_k$. To make the last inequality in \eqref{DL_rk_concavity} hold with equality, we choose $\bdu_k$ in $\VMSE_k$ to be $\xiter{\bdu_k}{n} = \Xinv{ \sum_{i=1}^{K} \frac{ \lambda_i^{(n)} \beta_i }{ \sigma_i^2 } \bdg_i \bdg_i^H + \bdI_M } \bdg_k \sqrt{ \frac{ \lambda_k^{(n)} \beta_k }{ \sigma_k^2 } }$. Substituting $\xiter{\bdu_k}{n}$ into $\VMSE_k$ yields \begin{equation} \FormulaSpace \frac{ \VMSE_k }{ \VMMSE_k^{(n)} } = \sum_{i=1}^{K} \psi_{k,i}^{(n)} \frac{ \lambda_i \beta_i }{ \sigma_i^2 } - 2 \chi_{k}^{(n)} \sqrt{ \frac{ \lambda_k \beta_k }{ \sigma_k^2 } } + \delta_k^{(n)}\comma \end{equation} where $\psi_{k,i}^{(n)}= \frac{ \smallabs{ \bdg_i^H \bdu_k^{(n)} }^2 }{ \VMMSE_k^{(n)} }$, $\chi_k^{(n)}= \frac{ \Re \{ \bdg_k^H \bdu_k^{(n)} \} }{ \VMMSE_k^{(n)} }$ and $\delta_k^{(n)}= \frac{ \smallnorm{ \bdu_k^{(n)} }^2 + 1 }{ \VMMSE_k^{(n)} }$.
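The minorization property in \eqref{DL_rk_concavity} can be verified numerically. The sketch below assumes $\VMSE_k$ takes the quadratic form implied by the expansion above; channels, weights $w_k = \beta_k/\sigma_k^2$ and multipliers are illustrative random values:

```python
import numpy as np

rng = np.random.default_rng(2)
M, K = 6, 3

# Illustrative random channel directions g_k and weights w_k = beta_k / sigma_k^2
G = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
G /= np.linalg.norm(G, axis=0)
w = rng.uniform(0.5, 2.0, K)

def vmmse(lmb):
    """VMMSE_k = 1 - gamma_k g_k^H (sum_i gamma_i g_i g_i^H + I)^{-1} g_k."""
    A = np.eye(M) + (G * (lmb * w)) @ G.conj().T
    Ainv = np.linalg.inv(A)
    return np.array([1 - lmb[k] * w[k] * np.real(G[:, k].conj() @ Ainv @ G[:, k])
                     for k in range(K)])

lmb_n = rng.uniform(0.2, 1.0, K)      # current iterate lambda^{(n)}
v_n = vmmse(lmb_n)
r_n = -np.log2(v_n)

# u_k^{(n)} = (sum_i gamma_i^{(n)} g_i g_i^H + I)^{-1} g_k sqrt(gamma_k^{(n)})
A_n = np.eye(M) + (G * (lmb_n * w)) @ G.conj().T
U = np.linalg.solve(A_n, G) * np.sqrt(lmb_n * w)

def h(lmb):
    """Surrogate h_k^{(n)}(lambda) built from the quadratic VMSE expansion."""
    out = np.empty(K)
    for k in range(K):
        u = U[:, k]
        vmse = (np.sum(lmb * w * np.abs(G.conj().T @ u) ** 2)
                - 2 * np.sqrt(lmb[k] * w[k]) * np.real(G[:, k].conj() @ u)
                + np.linalg.norm(u) ** 2 + 1)
        out[k] = r_n[k] + 1 / np.log(2) - vmse / (v_n[k] * np.log(2))
    return out

assert np.allclose(h(lmb_n), r_n)                         # tight at lambda^{(n)}
lmb_t = rng.uniform(0.2, 1.0, K)
assert np.all(h(lmb_t) <= -np.log2(vmmse(lmb_t)) + 1e-9)  # h minorizes r_k
```

The two assertions check exactly the two properties a minorizer needs: equality at the current iterate and a global lower bound elsewhere.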
\section{Proof of \Cref{Prop_rk_upperbound}} \label[secinapp]{appendix_Prop_rk_upperbound_proof} The sum of $r_k$ can be written as \begin{align} \FormulaSpace &{}\sum_{k=1}^K r_k = - \sum_{k=1}^K \log_2 \left[ \left( \bdI + \bdGamma^{\xinv{2}} \bdG^H \bdG \bdGamma^{\xinv{2}} \right)^{-1} \right]_{k,k} = - \log_2 \prod_{k=1}^{K} \left[ \left( \bdI + \bdGamma^{\xinv{2}} \bdG^H \bdG \bdGamma^{\xinv{2}} \right)^{-1} \right]_{k,k} \notag \\ \stackleq{a}&{} \log_2 \det \left( \bdI + \bdGamma^{\xinv{2}} \bdG^H \bdG \bdGamma^{\xinv{2}} \right) \stackleq{b} \log_2 \prod_{k=1}^K \left[ \bdI + \bdGamma^{\xinv{2}} \bdG^H \bdG \bdGamma^{\xinv{2}} \right]_{k,k} = \sum_{k=1}^{K} \log_2 \left( 1 + \frac{ \lambda_k \beta_k }{ \sigma_k^2 } \right)\comma \end{align} where (a) and (b) follow from Hadamard's inequality $\det\bdA \le \prod_{k=1}^{K} [\bdA]_{k,k}$ \cite{Horn2013MatrixAnalysis}, and the equality holds if and only if $\bdGamma^{\xinv{2}} \bdG^H \bdG \bdGamma^{\xinv{2}}$ is diagonal. Since $\bdGamma$ is diagonal, $\bdGamma^{\xinv{2}} \bdG^H \bdG \bdGamma^{\xinv{2}}$ is diagonal if and only if $\bdG^H \bdG$ is diagonal, which also means $\bdG^H \bdG = \bdI$. This concludes the proof. \bibliographystyle{IEEEtran}
\section{Introduction} The non-leptonic charmless two-body $B$ meson decays provide a fertile ground for testing the flavor picture of the Standard Model~(SM) and probing possible hints of new physics~(NP). Theoretically, in order to obtain reliable predictions, one of the main tasks is to evaluate the short-distance QCD corrections to the hadronic matrix elements of $B$ meson decays. In this respect, the QCD factorization (QCDF) approach \cite{Beneke:1999br,Beneke:2000ry}, the perturbative QCD (pQCD) approach \cite{Keum:2000ph,Keum:2000wi} and the soft-collinear effective theory (SCET)~\cite{scet1,scet2,scet3,scet4} have been developed and are widely used to calculate the amplitudes of $B$ meson decays. Among the ${\cal O}(\alpha_s)$ corrections, although the weak annihilation~(WA) amplitudes are formally $\Lambda_{\rm QCD}/m_b$ power-suppressed, they are generally nontrivial, especially for the flavor-changing-neutral-current~(FCNC) dominated and pure annihilation $B$ decays. Furthermore, because of the possible strong phase provided by the WA amplitude, the WA contribution also plays an indispensable role in evaluating the charge-parity~(CP) asymmetry. Unfortunately, in the collinear factorization approach, the calculation of WA corrections always suffers from the divergence at the end-point of the convolution integrals of the mesons' light-cone distribution amplitudes~(LCDAs). In the SCET, the annihilation diagrams are factorizable and real at the leading power of ${\cal O}(\alpha_{s}(m_{b})\Lambda_{QCD}/m_{b})$~\cite{msf,scetAnni}.
In the QCDF, the end-point divergence is usually parameterized by the phenomenological parameter $X_{A}$ defined as \cite{Beneke:2003zv} \begin{eqnarray} \label{Xa} \int_{0}^{1}\frac{dx}{x}\to X_{A}(\rho_A,\phi_A)=(1+\rho_{A}e^{i\phi_{A}})\ln\frac{m_{b}}{\Lambda_{h}}\,, \end{eqnarray} in which $\Lambda_{h}=0.5~{\rm GeV}$, and $\rho_A$ and $\phi_A$ are phenomenological parameters responsible for the strength and the possible strong phase of the WA correction near the end-point, respectively. In addition, for the hard spectator scattering~(HSS) contributions, the calculation of twist-3 distribution amplitudes also suffers from an end-point divergence, which is usually treated with the same parameterization scheme as Eq.~\eqref{Xa} and labeled by $X_{H}(\rho_H\,,\phi_H)$. So far, the values of $(\rho_A,\phi_A)$ are unknown from the first principles of QCD dynamics, and thus can only be extracted from experimental data. Originally, a conservative choice of $\rho_A\sim 1$ with an arbitrary strong interaction phase $\phi_A$ was introduced (in practice, for the specific final states PP, PV, VP and VV, different values of $(\rho_A,\phi_A)$ are suggested to fit the data; see Ref.~\cite{Beneke:2003zv} for details). Meanwhile, the values of $\rho_A$ and $\phi_A$ are treated as universal inputs for different annihilation topologies~\cite{Beneke:2003zv,Beneke:2006hg,Cheng:2009cn,Cheng:2009mu,Cheng:2008gxa}. However, in 2012, the measurements of the pure annihilation $B_s\to\pi^+\pi^-$ decay, ${\cal B}(B_s\to\pi^+\pi^-)=(0.57\pm0.15\pm0.10)\times 10^{-6}$~(CDF)~\cite{Aaltonen:2011jv} and $(0.95^{+0.21}_{-0.17}\pm0.13)\times 10^{-6}$~(LHCb)~\cite{Aaij:2012as}, presented a challenge to the traditional QCDF estimation of the WA contributions, which results in a small prediction $(0.26^{+0.00+0.10}_{-0.00-0.09})\times 10^{-6}$~\cite{Cheng:2009mu}.
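For concreteness, the parameterization in Eq.~\eqref{Xa} can be evaluated directly. In the sketch below, $\Lambda_h = 0.5$ GeV follows the text, while the $b$-quark mass value is an illustrative assumption:

```python
import cmath
import math

def X_A(rho_A, phi_A_deg, m_b=4.8, Lambda_h=0.5):
    """End-point parameterization X_A = (1 + rho_A e^{i phi_A}) ln(m_b / Lambda_h).

    Lambda_h = 0.5 GeV as in the text; m_b = 4.8 GeV is an illustrative choice.
    """
    return ((1 + rho_A * cmath.exp(1j * math.radians(phi_A_deg)))
            * math.log(m_b / Lambda_h))

# With rho_A = 0 the divergence is replaced by a plain real logarithm (no phase)
assert abs(X_A(0.0, 0.0).imag) < 1e-12
# A nonzero phi_A generates a strong phase in the WA amplitude
assert abs(X_A(1.0, -50.0).imag) > 0
```

This makes explicit how $\rho_A$ controls the magnitude of the end-point enhancement while $\phi_A$ supplies the strong phase.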
In the pQCD approach, possible non-negligible WA contributions were first noticed in Refs.~\cite{Keum:2000ph,Keum:2000wi,Lu:2000em,Li:2005vu}. Moreover, a prediction of ${\cal B}(B_s\to\pi^+\pi^-)$ with the same central value as the data was presented~\cite{Ali:2007ff,Xiao:2011tx}. Recently, motivated by the possible large WA contributions observed by the CDF and LHCb collaborations, several studies have been carried out within both the SM and NP scenarios, for instance Refs.~\cite{Xiao:2011tx,Chang:2012xv,Gronau:2012gs,Cheng:2014rfa,Li:2015xna,Bobeth:2014rra,Zhu:2011mm,Wang:2013fya}. In particular, some theoretical studies within the QCDF framework have been renewed. In Ref.~\cite{Bobeth:2014rra}, global fits of the WA parameters $X_A(\rho_A,\phi_A)$ are performed. It is found that, for the decays related by $(u\leftrightarrow d)$ quark exchange, a universal and relatively large WA parameter is supported by the data, except for the $B\to\pi K$ system, which exhibits the well-known ``$\Delta A_{CP}(\pi K)$ puzzle'', and some tensions in $B\to\phi K^*$ decays. In Refs.~\cite{Zhu:2011mm,Wang:2013fya}, after carefully studying the flavor dependence of the WA parameter $X_A$ on the initial states in the $B\to PP$ system, the authors present a ``new treatment''~(a topology-dependent scheme) for the end-point parameters. It is suggested that $X_A$ should be divided into two independent complex parameters $X_A^{i}$ and $X_A^{f}$, which correspond to the non-factorizable and factorizable topologies (i.e., gluon emission from the initial and final states), respectively. Meanwhile, the flavor dependence of $X_A^{i}$ on the initial states, $B_d$ and $B_s$, should be carefully considered.
Moreover, the global fits of the end-point parameters in $B\to PP$ and $B\to PV$ decays have confirmed such a ``new treatment'', except that the flavor-symmetry-breaking effect of the WA parameters is hard to distinguish due to the experimental errors and theoretical uncertainties~\cite{Chang:2014yma,Sun:2014tfa}. Numerically, with the simplification $X_H=X_A^{i}$, the best-fit results~\cite{Chang:2014yma,Sun:2014tfa} \begin{eqnarray} \left\{ \begin{array}{l} ({\rho}_{A}^{i},{\phi}_{A}^{i}[^{\circ}]) = (2.98^{+1.12}_{-0.86},-105^{+34}_{-24}), \\ ({\rho}_{A}^{f},{\phi}_{A}^{f}[^{\circ}]) = (1.18^{+0.20}_{-0.23},-40^{+11}_{-8}), \end{array} \right. \quad {\rm for~PP~final~states} \label{PPSA} \end{eqnarray} \begin{eqnarray} \left\{ \begin{array}{l} ({\rho}_{A}^{i},{\phi}_{A}^{i}[^{\circ}]) = (2.87^{+0.66}_{-1.95},-145^{+14}_{-21}),\\ ({\rho}_{A}^{f},{\phi}_{A}^{f}[^{\circ}]) = (0.91^{+0.12}_{-0.13},-37^{+10}_{-9}), \end{array} \right.\quad {\rm for~PV~final~states} \label{PVSA} \end{eqnarray} are suggested. With such values, all of the QCDF results for charmless $B\to PP$ and $PV$ decays, especially for $B\to\pi \pi$ and $\pi K$ decays, can accommodate the current measurements. Even though the topology-dependent scheme for the HSS and WA contributions has been tested in $B\to PP$ and $PV$ decays and presents a good agreement with data, it is worth further testing whether such a scheme still persists in $B\to VV$ decays, which involve more observables, such as polarization fractions and relative amplitude phases, and thus would impose much stronger constraints on the HSS and WA contributions. Moreover, in recent years, many measurements of $B\to VV$ decays have been updated with higher precision~\cite{HFAG}. So, it is also worth reexamining the agreement between the QCDF predictions and the experimental data, and investigating the effects of HSS and WA corrections on $B\to VV$ decays, especially some puzzles and tensions therein. Our paper is organized as follows.
After a brief review of the WA corrections in $B\to VV$ decays in section 2, we present our numerical analyses and discussions in section 3. Our main conclusions are summarized in section 4. \section{Brief Review of WA Corrections} In the SM, the effective weak Hamiltonian responsible for the $b\to p$ transition is written as~\cite{Buchalla:1995vs,Buras:1998raa} \begin{eqnarray}\label{eq:eff} {\cal H}_{\rm eff} &=& \frac{G_F}{\sqrt{2}} \biggl[V_{ub} V_{up}^* \left(C_1 O_1^u + C_2 O_2^u \right) + V_{cb} V_{cp}^* \left(C_1 O_1^c + C_2 O_2^c \right) - V_{tb} V_{tp}^*\, \big(\sum_{i = 3}^{10} C_i O_i \big. \biggl. \nonumber\\ && \biggl. \big. + C_{7\gamma} O_{7\gamma} + C_{8g} O_{8g}\big)\biggl] + {\rm h.c.}, \end{eqnarray} where $V_{qb} V_{qp}^{\ast}$~$(p=d,s)$ are products of the Cabibbo-Kobayashi-Maskawa~(CKM) matrix elements, $C_{i}$ are the Wilson coefficients, and $O_i$ are the relevant four-quark operators. The essential theoretical problem for obtaining the amplitude of a $B\to M_1M_2$ decay is the evaluation of the hadronic matrix elements of the local operators in Eq.~\eqref{eq:eff}. Based on the collinear factorization scheme and the color transparency hypothesis, the QCDF approach was developed to deal with the hadronic matrix elements~\cite{Beneke:1999br,Beneke:2000ry}. Up to power corrections of order ${\Lambda}_{QCD}/m_{b}$, the factorization formula for $B$ decaying into two light mesons is given by~\cite{Beneke:1999br,Beneke:2000ry} \begin{eqnarray} {\langle}M_{1}M_{2}{\vert}O_{i}{\vert}B{\rangle} &=& \sum\limits_{j}\Big\{ F_{j}^{B{\to}M_{1}} {\int}dz\,T^{I}_{ij}(z){\Phi}_{M_{2}}(z) +(M_{1}{\leftrightarrow}M_{2})\Big\} \nonumber \\ & & +{\int}dx\,dy\,dz\,T^{II}_{i}(x,y,z) {\Phi}_{B}(x){\Phi}_{M_{1}}(y){\Phi}_{M_{2}}(z) \label{eq:ff}.
\end{eqnarray} Here, $F_{j}^{B{\to}M_1}$ denotes the form factor of the $B{\to}M_1$ transition, and ${\Phi}_{X}(z)$ is the light-cone wave function for the two-particle Fock state of the participating meson $X$, both of which are nonperturbative inputs. $T^{I}(z)$ and $T^{II}(x,y,z)$ denote hard scattering kernels, which can, in principle, be systematically calculated order by order in perturbation theory. The QCDF framework for $B\to VV$ decays has been fully developed in Refs.~\cite{Beneke:2006hg,Cheng:2008gxa,Bell:2009fm,Beneke:2009ek,Bell:2015koa}. For the WA contributions, the convolution integrals in $B\to VV$ decays exhibit not only the logarithmic infrared divergence regulated by Eq.~(\ref{Xa}) but also a linear infrared divergence appearing in the transverse building blocks $A_{1,2}^{i-}$, which is different from the case of $B\to PP,\,PV$ decays. With a treatment similar to $X_{A}$ in Eq.~(\ref{Xa}), the linear divergence is usually extracted into an unknown complex quantity $X_{L}$ defined as~\cite{Beneke:2006hg} \begin{equation}\label{XL} \int_{0}^{1}\frac{dx}{x^2}\to X_{L}(\rho_A,\phi_A)=(1+\rho_{A}e^{i\phi_{A}})\frac{m_{b}}{\Lambda_h}\,. \end{equation} Even though the predictive power of QCDF is partly weakened by the incalculable WA parameters, such a parameterization scheme provides a feasible way to evaluate the effects of WA corrections from a phenomenological viewpoint. Traditionally, the end-point parameters $X_{A\,,L}^{i,f}(\rho_A^{i,f},\phi_A^{i,f})$ are assumed to be universal for different WA topologies of $B\to VV$ decays, and one takes the values $\rho_A^{f}=\rho_A^{i}\sim 0.7$ and $\phi_A^{f}=\phi_A^{i}\sim -50^{\circ}$~\cite{Beneke:2006hg,Cheng:2008gxa} as input.
In this paper, in order to test the proposal of Refs.~\cite{Zhu:2011mm,Wang:2013fya} mentioned in the introduction, $(\rho_A^{f},\phi_A^{f})$ and $(\rho_A^{i},\phi_A^{i})$ are treated as independent parameters, responsible for the end-point corrections of the factorizable and non-factorizable WA topologies, respectively. After evaluating the convolution integrals with the asymptotic light-cone distribution amplitudes, one obtains the basic building blocks of the WA amplitudes, which are explicitly written as~\cite{Beneke:2006hg,XQLi} \begin{eqnarray} A_{1}^{i,0}&\simeq& A_{2}^{i,0}\simeq 18\pi\alpha_{s}\left[(X_{A}^{i}-4+\frac{\pi^2}{3})+r_{\chi}^{V_{1}}r_{\chi}^{V_{2}}(X_{A}^{i}-2)^2\right]\,,\\ \label{a3iz} A_{3}^{i,0}&\simeq& 18\pi\alpha_{s}(r_{\chi}^{V_{1}}-r_{\chi}^{V_{2}})\left[-(X_{A}^{i})^2+2X_{A}^{i}-4+\frac{\pi^2}{3}\right]\,,\\ A_{3}^{f,0}&\simeq& 18\pi\alpha_{s}(r_{\chi}^{V_{1}}+r_{\chi}^{V_{2}})(2X_{A}^{f}-1)(2-X_{A}^{f})\,, \end{eqnarray} for the non-vanishing longitudinal contributions, and \begin{eqnarray} A_{1}^{i,+}&\simeq& A_{2}^{i,+}\simeq 18\pi\alpha_{s}\frac{m_{1}m_{2}}{m_{B}^2}\left[2(X_{A}^{i})^2-3X_{A}^{i}+6-\frac{2\pi^2}{3}\right]\,,\\ A_{1}^{i,-}&\simeq& A_{2}^{i,-}\simeq 18\pi\alpha_{s}\frac{m_{1}m_{2}}{m_{B}^2}\left(\frac{1}{2} X_{L}^{i}+\frac{5}{2}-\frac{\pi^2}{3}\right),\\ \label{a3im} A_{3}^{i,-}&\simeq& 18\pi\alpha_{s}\left(\frac{m_{1}}{m_{2}}r_{\chi}^{V_{2}}-\frac{m_{2}}{m_{1}}r_{\chi}^{V_{1}}\right)\left[(X_{A}^{i})^2-2X_{A}^{i}+2\right]\,,\\ A_{3}^{f,-}&\simeq& 18\pi\alpha_{s}\left(\frac{m_{1}}{m_{2}}r_{\chi}^{V_{2}}+\frac{m_{2}}{m_{1}}r_{\chi}^{V_{1}}\right)\left[2(X_{A}^{f})^2-5X_{A}^{f}+3\right]\,, \end{eqnarray} for the transverse ones. Generally, $A_3^{i,0}$ and $A_3^{i,-}$, given by Eqs.~\eqref{a3iz} and \eqref{a3im}, are very small and therefore negligible for light final states. One may refer to Refs.~\cite{Beneke:2006hg,Cheng:2009mu,Cheng:2008gxa} for further explanation.
The decay modes considered in this paper include the penguin-dominated $B\to \rho K^*$ decays induced by the $b\to s \bar{q} q$~($q=u,d$) transition and $B\to \phi K^*$ decays induced by the $b\to s \bar{s} s$ transition, the tree-dominated $B\to \rho \rho$ decays induced by the $b\to d \bar{q} q$ transition, and the penguin- and/or annihilation-dominated $B\to K^* \bar{K}^*$ and $\phi\phi$ decays induced by the $b\to d \bar{s} s$ transition. The explicit expressions of their amplitudes are summarized in Appendix A. As is known, the penguin-dominated and the color-suppressed tree-dominated decays are very sensitive to the WA and the hard-spectator-scattering~(HSS) contributions, respectively. So, it is expected that their precisely measured observables could place strong constraints on the end-point parameters. The pure annihilation decays, such as the $\bar{B}^0\to K^{*-} K^{*+}$ and $\phi\phi$ decay modes, are much more suitable for probing the annihilation contributions, being free of interference effects. However, no experimental results for them are available at present. So, we leave them as our predictions, to be tested by forthcoming measurements at the LHC and SuperKEKB. \section{Numerical Analyses and Discussions} \begin{table}[t] \caption{\small The values of input parameters: Wolfenstein parameters, pole and running quark masses, decay constants, form factors and Gegenbauer moments.
For the other inputs, such as masses and lifetimes of mesons, we take their central values given by PDG~\cite{PDG}.} \label{ppvalue} \begin{footnotesize} \vspace{0.1cm} \small \doublerulesep 0.1pt \tabcolsep 0.1in \begin{tabular}{c} \Xhline{2pt} $\bar{\rho}=0.145_{-0.007}^{+0.013}$,~~ $\bar{\eta}=0.343_{-0.012}^{+0.011}$,~~ $A=0.810_{-0.024}^{+0.018}$,~~ $\lambda=0.22548_{-0.00034}^{+0.00068}$ ~\cite{CKMfitter},\\\Xhline{1pt} $m_{c}=1.67\pm0.07$ GeV,~~ $m_{b}=4.78\pm0.06$ GeV,~~ $m_{t}=173.21\pm0.87$ GeV,\\ $\frac{\bar{m}_{s}(\mu)}{\bar{m}_{u,d}(\mu)}=27.5\pm1.0$,~~ ${\bar{m}_{s}(2 {\rm GeV})}=95\pm5$ MeV,~~ ${\bar{m}_{b}(m_{b})}=4.18\pm0.03$ GeV ~\cite{PDG},\\\Xhline{1pt} $f_{B_{u,d}}=190.6\pm4.7$ MeV,~~$f_{\rho}=216\pm3$ MeV,~~$f_{\rho}^{\perp}=165\pm9$ MeV,\\ $f_{K^{*}}=220\pm5$ MeV,~~$f_{K^{*}}^{\perp}=185\pm10$ MeV,~~$f_{\phi}=215\pm5$ MeV,~~$f_{\phi}^{\perp}=186\pm9$ MeV~\cite{lattice,Ball:2006eu},\\\Xhline{1pt} $A_{0}^{B\to\rho}=0.303\pm0.029$,~~$A_{1}^{B\to\rho}=0.242\pm0.023$,~~ $V^{B_{u}\to\rho}=0.323\pm0.030$,~~\\ $A_{0}^{B\to K^{*}}=0.374\pm0.034$,~~$A_{1}^{B\to K^{*}}=0.292\pm0.028$,~~ $V^{B\to K^{*}}=0.411\pm0.033$ ~\cite{Ball:2004rg},\\ \Xhline{1pt} $a_{1}^{\parallel,\perp}(\rho)^{\mu=\text{2GeV}}=0$,~ $a_{1}^{\parallel,\perp}(\phi)^{\mu=\text{2GeV}}=0$,~ $a_{1}^{\parallel,\perp}(K^*)^{\mu=\text{2GeV}}=0.02$,~\\ $a_{2}^{\parallel,\perp}(\rho)^{\mu=\text{2GeV}}=0.10$,~ $a_{2}^{\parallel,\perp}(\phi)^{\mu=\text{2GeV}}=0.13$,~ $a_{2}^{\parallel,\perp}(K^*)^{\mu=\text{2GeV}}=0.08$~\cite{Ball:2007rt}.\\ \Xhline{2pt} \end{tabular} \end{footnotesize} \end{table} In this paper, the independent observables, including CP-averaged branching fractions, CP asymmetries, polarization fractions and relative amplitude phases, are evaluated. For these observables, we take the same definitions and conventions as Ref.~\cite{Beneke:2006hg}. The available experimental results averaged by HFAG~\cite{HFAG} are listed in the ``Exp."
columns of Tables \ref{tab:rhok}, \ref{tab:kk}, \ref{tab:phik} and \ref{tab:rhorho}~(the recent measurement of the $\bar{B}^{0}\to\rho^{+}\rho^{-}$ decay reported by Belle~\cite{Vanhoefer:2015ijw} agrees well with the previous results, and has not been included in the HFAG average), which are employed in the coming fits of the WA parameters. In addition, the values of the input parameters used in our evaluations are listed in Table~\ref{ppvalue}. In the $\chi^2$ analyses, with the same statistical $\chi^2$ approach as the one given in the appendix of Refs.~\cite{Chang:2014rla,Hofer:2010ee}, we first randomly scan points of the end-point parameters within conservative ranges and evaluate the $\chi^2$ value of each point. Then, we locate $\chi_{\rm min}^2$ and obtain the allowed spaces (points) at 68\% and 95\% C.L. If more than one separate space is found, we pick each of them out and deal with it using its local minimum of $\chi^2$ ({\it e.g.}, the four solutions in Fig.~\ref{KKrhoK}). With the aforementioned theoretical strategy and inputs, we now proceed to present our numerical results and discussions, which are divided into four cases for different purposes: \begin{figure}[t] \begin{center} \subfigure[]{\includegraphics[width=8cm]{solutionA.pdf}}\quad \subfigure[]{\includegraphics[width=8cm]{solutionB.pdf}}\\ \subfigure[]{\includegraphics[width=8cm]{solutionC.pdf}}\quad \subfigure[]{\includegraphics[width=8cm]{solutionD.pdf}} \caption{\label{KKrhoK} \small The allowed regions of the WA parameters at 68\% and 95\% C.L. with the combined constraints from $B\to \rho K^*, \bar{K}^*K^*$ decays. The best-fit points of solutions A, B, C and D correspond to $\chi_{\rm min}^2=5.0$, $5.1$, $5.9$ and $6.0$, respectively.
One may also see Fig.~\ref{KKrhoK2} plotted in the complex plane.} \end{center} \end{figure} \subsection{Case I} For case I, in order to test the topology-dependent scheme, $(\rho_{A}^{i,f},\phi_{A}^{i,f})$ are treated as free parameters. Meanwhile, the simplification $(\rho_{H},\phi_{H})=(\rho_{A}^{i},\phi_{A}^{i})$ allowed by $B\to PP$ and $PV$ decays~\cite{Chang:2014yma,Sun:2014tfa} is assumed. The combined constraints from the $B_{u,d}\to \rho K^*,\bar{K}^*K^*$ decays, for which 16 observables (see Tables \ref{tab:rhok} and \ref{tab:kk}) are well measured, are considered in the fit. For the $B\to \rho K^{*}$ decays, the tree contributions $\alpha_{1,2}$ are strongly suppressed by the CKM factor $|V_{us}^*V_{ub}|\sim {\cal O}(\lambda^4)$, whereas the QCD penguin contribution $\alpha_{4}$ is proportional to $|V_{cs}^*V_{cb}|\sim {\cal O}(\lambda^2)$ and thus dominates the amplitudes. Therefore, the WA contributions, which carry the same CKM factor $|V_{cs}^*V_{cb}|$ as $\alpha_{4}$, are expected to be important for these decays. In their amplitudes, given by Eqs.~(\ref{eq:rhok1}-\ref{eq:rhok4}), the main WA contribution comes from the effective WA coefficient $\beta_3$, which is dominated by the building block $A_3^f$ accompanied by $N_cC_6$. So, the $B\to \rho K^{*}$ decays should place strict constraints on $(\rho_{A}^{f},\phi_{A}^{f})$. Moreover, the measured penguin-dominated $B^{-}\to K^{*-}K^{*0}$ and $\bar B^{0}\to \bar K^{*0}K^{*0}$ decays, whose amplitudes are given by Eqs.~\eqref{eq:kk1} and \eqref{eq:kk3}, provide further constraints on $(\rho_{A}^{f},\phi_{A}^{f})$. Besides, due to the presence of $\beta_{2,4}$, which are relevant to $A_{1,2}^i$ only, the $B^{-}\to K^{*-}K^{*0}$ and $\bar B^{0}\to \bar K^{*0}K^{*0}$ decays may also provide some constraints on $(\rho_{A}^{i},\phi_{A}^{i})$. Under the combined constraints from the $B_{u,d}\to \rho K^*,\bar{K}^*K^*$ decays, the allowed spaces of the end-point parameters $(\rho_{A}^{i,f},\phi_{A}^{i,f})$ are shown in Fig.~\ref{KKrhoK}.
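For readers who wish to reproduce the qualitative behavior of the scan-and-fit procedure described above (a random scan over conservative ranges, locating $\chi^2_{\rm min}$, and keeping the points within the corresponding $\Delta\chi^2$ contours), the following toy sketch illustrates the logic. The two-observable model, the "measurement" values and the parameter ranges here are purely illustrative placeholders, not the QCDF amplitudes or the actual data; the $\Delta\chi^2$ thresholds $2.30$ and $5.99$ are the standard 68\% and 95\% C.L. values for two fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Toy stand-in for the theory predictions: two observables as a smooth
# function of one (rho_A, phi_A) pair. The real mapping comes from the
# QCDF amplitudes collected in Appendix A.
def predict(rho, phi):
    return np.array([rho * np.cos(phi), rho * np.sin(phi)])

obs_exp = np.array([0.8, -0.9])   # illustrative "measurements"
obs_err = np.array([0.2, 0.2])    # illustrative 1-sigma errors

def chi2(rho, phi):
    return float(np.sum(((predict(rho, phi) - obs_exp) / obs_err) ** 2))

# Step 1: random scan of the end-point parameters over conservative ranges.
points = [(rng.uniform(0.0, 8.0), rng.uniform(-2.0 * np.pi, 0.0))
          for _ in range(20000)]
values = np.array([chi2(rho, phi) for rho, phi in points])

# Step 2: locate chi2_min, then keep the points whose Delta(chi2) lies
# within the 68% / 95% C.L. thresholds for two fitted parameters.
chi2_min = values.min()
cl68 = [pt for pt, v in zip(points, values) if v - chi2_min <= 2.30]
cl95 = [pt for pt, v in zip(points, values) if v - chi2_min <= 5.99]
```

When several disconnected regions survive (as for the four solutions discussed below), each region is treated separately around its local $\chi^2$ minimum.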
It could be found that: (i) as expected, the parameters $(\rho_{A}^{f},\phi_{A}^{f})$ are strictly bounded within four separate regions, named solutions A--D. However, the constraint on $(\rho_{A}^{i},\phi_{A}^{i})$ is very loose. In addition, the two different spaces of $(\rho_{A}^{f},\phi_{A}^{f})$ in solutions A and B, as well as the ones in solutions C and D, correspond to almost the same $(\rho_{A}^{i},\phi_{A}^{i})$ space. In fact, the two solutions (solutions A and B, or solutions C and D) result in similar WA corrections. Such a situation also exists in the $B\to PP$ and $PV$ decays~\cite{Chang:2014yma,Sun:2014tfa}. (ii) The relation $(\rho_{A}^{f},\phi_{A}^{f})\neq (\rho_{A}^{i},\phi_{A}^{i})$ is always required at 68\% C.L., except for solution B shown in Fig.~\ref{KKrhoK}~(b), due to the loose constraints on $(\rho_{A}^{i},\phi_{A}^{i})$. (iii) Corresponding to the best-fit points of the four solutions, the best-fit values of the end-point parameters are \begin{equation} \label{scaseI} (\rho_A,\phi_A[^\circ])^{i,\,f}= \left\{ \begin{array}{l} (3.27,-251),~(1.17,-45)\,;\quad \text{solution A}\\ (3.86,-250),~(2.00,-205)\,;\quad \text{solution B}\\ (5.80,-70),~(1.19,-158)\,;\quad \text{solution C}\\ (5.79,-69),~(0.48,-291)\,.\quad \text{solution D} \end{array} \right. \end{equation} Interestingly, the result $(\rho_A^{f},\phi_A^{f}[^\circ])=(1.17,-45)$ of solution A is very similar to the results obtained in $B\to PP$ and $PV$ decays, given by Eqs.~(\ref{PPSA}) and (\ref{PVSA}). Furthermore, the $(\rho_A^{f},\phi_A^{f})$ of solution B in Eq.~\eqref{scaseI} is also very similar to the other results in $B\to PP$ and $PV$ decays~(solution B given in Refs.~\cite{Chang:2014yma} and \cite{Sun:2014tfa}). A clearer comparison will be presented in the next case. In the past years, the penguin-dominated $B\to \phi K^*$ decays have attracted much attention due to the well-known ``polarization anomaly".
One may refer to Ref.~\cite{Kagan:2004uw} and the most recent studies in the QCDF and pQCD approaches~\cite{Bobeth:2014rra,Zou:2015iwa} for details. For the $B\to \phi K^*$ decays, complete angular analyses are available, which present much stricter requirements on the WA contributions. In Ref.~\cite{Bobeth:2014rra}, with the traditional ansatz that the end-point parameters are universal for all of the annihilation topologies, it is found that the current measurements of the observables of $B\to \phi K^*$ decays can hardly be accommodated simultaneously. So, in the next case, we would like to test whether the possible disagreement could be moderated by the topology-dependent scheme. \begin{figure}[t] \begin{center} \subfigure[]{\includegraphics[width=8cm]{c2solutionA.pdf}}\quad \subfigure[]{\includegraphics[width=8cm]{c2solutionB.pdf}} \caption{\label{KKrhoKphiK} \small The allowed regions of the end-point parameters with the constraints from $B_{u,d}\to \rho K^*, \bar{K}^*K^*$ and $\phi K^*$ decays. The best-fit points of solutions A and B correspond to $\chi_{\rm min}^2=11.1$. For comparison, the fitted results of $(\rho_{A}^{i(f)},\phi_{A}^{i(f)})$ in $B\to PP$ and $PV$ decays are also shown by the light~(dark) yellow and green pointed regions, respectively. One may also see Fig.~\ref{KKrhoKphiK2} plotted in the complex plane.
} \end{center} \end{figure} \begin{figure}[t] \begin{center} \subfigure[]{\includegraphics[width=5.5cm]{Br.pdf}} \subfigure[]{\includegraphics[width=5.6cm]{Acp0.pdf}} \subfigure[]{\includegraphics[width=5.6cm]{phil.pdf}}\\ \subfigure[]{\includegraphics[width=5.5cm]{Brn.pdf}} \subfigure[]{\includegraphics[width=5.6cm]{Acp0n.pdf}} \subfigure[]{\includegraphics[width=5.6cm]{philn.pdf}} \caption{\label{depphiK} \small The dependences of some observables of the $B\to \phi K^*$ decays on the parameter $\phi_A^i$ for different values of $\rho_A^i$, with $(\rho_{A}^{f},\phi_{A}^{f})$ fixed to the values of solutions A~(solid lines) and C~(dashed lines) in Eq.~(\ref{scaseI}). The shaded bands are the experimental results with $1\sigma$ error. } \end{center} \end{figure} \subsection{Case II} In this case, we take the same ansatz as in case I, except that the constraints from the $B_{u,d}\to \phi K^*$ decays are also taken into account. In the fit, all of the available observables are considered except for $\phi_\bot({\phi K^*})$, because $\phi_\bot\simeq \phi_{\parallel}$ holds in QCDF and is also supported by the current measurements within errors. With the constraints from 32 measured observables, our fitted results are shown in Fig.~\ref{KKrhoKphiK}. The value of $\chi_{\rm min}^2$ in this case is much larger than the ones in case I because the $B_{u,d}\to \phi K^*$ decay modes and the relevant observables are added as constraint conditions. In addition, to clarify the effects of the WA contributions on the $B_{u,d}\to \phi K^*$ decays, the dependences of some observables on the end-point parameters are plotted in Fig.~\ref{depphiK}. From Fig.~\ref{KKrhoKphiK}, it could be found that: (i) compared with solutions A and B of case I, the allowed spaces of the end-point parameters are further restricted by the $B_{u,d}\to \phi K^*$ decays.
In particular, the regions of $(\rho_{A}^{i},\phi_{A}^{i})$ are strictly bounded around $(4,-250^{\circ})$, which is mainly required by $A_{CP}^0(B^-\to \phi K^{*-})$ and $\phi_{\parallel}(B_{u,d}\to \phi K^*)$ and is also allowed by the other observables, as Fig.~\ref{depphiK}~(solid lines) shows. However, solutions C and D of case I are entirely excluded by the $B\to \phi K^*$ decays, which could be easily understood from Fig.~\ref{depphiK}~(dashed lines); (ii) There is no overlap between the spaces of $(\rho_{A}^{i},\phi_{A}^{i})$ and $(\rho_{A}^{f},\phi_{A}^{f})$ at 95\% C.L., which means that the relation $(\rho_{A}^{f},\phi_{A}^{f})\neq (\rho_{A}^{i},\phi_{A}^{i})$ found in $B\to PP\,,PV$ decays is also required by the $B\to VV$ decays; (iii) More interestingly, compared with the previous fitted results obtained from $B\to PP\,,PV$ decays, it could be found that the allowed spaces of $(\rho_{A}^{f},\phi_{A}^{f})$ in $B\to PP\,,PV$ and $VV$ decays are very close to each other, which implies a possibly universal $X_A^f(\rho_{A}^{f},\phi_{A}^{f})$ for all of the decay modes. However, no significant relationship could be found for $(\rho_{A}^{i},\phi_{A}^{i})$. Corresponding to the spaces of the end-point parameters in Fig.~\ref{KKrhoKphiK}, the numerical results are \begin{equation} \label{scaseII} (\rho_A,\phi_A[^\circ])^{i,\,f}= \left\{ \begin{array}{l} (4.66_{-0.76}^{+1.47},-259_{-16}^{+18}),~(1.30_{-0.08}^{+0.11},-44_{-8}^{+10})\,,\quad \text{solution A}\\ (4.66_{-0.84}^{+1.39},-259_{-15}^{+17}),~(2.08_{-0.12}^{+0.14},-206_{-5}^{+5})\,,\quad \text{solution B} \end{array} \right. \end{equation} in which the two solutions for $(\rho_A^{i},\phi_A^{i})$ are in fact the same.
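For orientation only (assuming the conventional end-point form $X_{A}=(1+\rho_{A}e^{i\phi_{A}})\ln(m_B/\Lambda_h)$ with $\Lambda_h=0.5$ GeV and $m_B\simeq 5.28$ GeV; the number below is an illustrative conversion, not a fit output), the best-fit point $(\rho_A^{f},\phi_A^{f})=(1.30,-44^{\circ})$ of solution A translates into the complex quantity

```latex
X_A^{f}\simeq\bigl(1+1.30\,e^{-i\,44^{\circ}}\bigr)\ln\frac{5.28}{0.5}
\simeq 4.6-2.1\,i\,,
```

which is the representation used in the complex-plane versions of the fit plots.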
With solution A as input, we then present the updated QCDF results for $B\to VV$ decays in the ``case II" columns of Tables \ref{tab:rhok}, \ref{tab:kk}, \ref{tab:phik}, \ref{tab:rhorho} and \ref{tab:phiphi}, in which the data~\cite{HFAG} and the previous theoretical results~\cite{Cheng:2009cn,Cheng:2008gxa,Beneke:2006hg} are also listed for comparison. One may find that most of the updated results are in good agreement with the data within the errors and uncertainties, except for an unexpectedly large ${\cal B}(\bar{B}^0\to\rho^0\rho^0)$. The $\bar{B}^0\to\rho^0\rho^0$ decay is dominated by the color-suppressed tree contribution $\alpha_2$, and is thus very sensitive to the HSS corrections. Therefore, the unexpectedly large theoretical result for ${\cal B}(\bar{B}^0\to\rho^0\rho^0)$ is mainly caused by the large $\rho_A^i$ and the simplification $(\rho_H,\phi_H)=(\rho_{A}^{i},\phi_{A}^{i})$, which are favored by $B\to PP$ and $PV$ decays~\cite{Chang:2014yma,Sun:2014tfa}. So, the current data on ${\cal B}(\bar{B}^0\to\rho^0\rho^0)$ present a challenge to the large $\rho_H$ and/or the simplification $(\rho_H,\phi_H)=(\rho_{A}^{i},\phi_{A}^{i})$. In addition, recalling the situation in $B\to \pi\pi$ decays, a large HSS correction with $\rho_H\sim 3$ plays an important role in resolving the ``$\pi\pi$ puzzle" and is allowed by the other $B\to PP$ and $PV$ decays~\cite{Chang:2014rla}. So, any hypothesis for resolving the ``$\pi\pi$ puzzle" through enhancing the HSS corrections should be carefully tested against the $\bar{B}^0\to\rho^0\rho^0$ decay. It should be noted that the $B\to VV$ decays involve not only the longitudinal building blocks but also the transverse ones, the latter of which do not contribute to $B\to PP$ and $PV$ decays.
In cases I and II, the analyses are based on the findings in $B\to PP$ and $PV$ decays and on the ansatz that the end-point parameters are universal for the longitudinal and transverse building blocks, even though the latter assumption is not essential. In the following cases, we pay attention to this issue. \begin{figure}[t] \begin{center} \subfigure[]{\includegraphics[width=8cm]{pi.pdf}} \subfigure[]{\includegraphics[width=8cm]{pf.pdf}}\\ \subfigure[]{\includegraphics[width=8cm]{p.pdf}} \caption{\label{case3} \small The allowed regions of the longitudinal end-point parameters with the constraints from the longitudinal-polarization-dominated decay modes. For Fig.~(c), the simplification $(\rho_A^{L,i},\phi_A^{L,i})=(\rho_A^{L,f},\phi_A^{L,f})\equiv(\rho_A^{L},\phi_A^{L})$ is taken. One may also see Fig.~\ref{case32} plotted in the complex plane.} \end{center} \end{figure} \subsection{Case III} For case III, in order to extract the end-point contributions in the longitudinal building blocks, we pick out the measured longitudinal-polarization-dominated decay modes as constraint conditions, which include the $B^-\to \rho^- \bar{K}^{*0}$, $K^{*0}K^{*-}$, $\rho^0 \rho^-$ and $\bar{B}^0\to K^{*0}\bar{K}^{*0}$, $\rho^+\rho^-$ decays. Taking $\rho_A^{T}=0$, the allowed spaces of the longitudinal end-point parameters are shown in Fig.~\ref{case3}, in which Figs.~(a,b) and Fig.~(c) correspond to the cases without and with the simplification $(\rho_A^{L,i},\phi_A^{L,i})=(\rho_A^{L,f},\phi_A^{L,f})\equiv(\rho_A^{L},\phi_A^{L})$, respectively.
From Figs.~\ref{case3}~(a) and (b), one may find that: (i) a large $\rho_A^{L,i}$ is excluded, which is mainly caused by the constraints from the $B\to \rho\rho$ decays; (ii) Even though fitting the longitudinal end-point parameters through the longitudinal-polarization-dominated decays is an ideal strategy, no well-bounded space could be found, due to the lack of data and the large theoretical uncertainties, which prevents us from testing whether $(\rho_A^{L},\phi_A^{L})$ are topology-dependent. Moreover, as Fig.~\ref{case3}~(c) shows, even if the simplification $(\rho_A^{L,i},\phi_A^{L,i})=(\rho_A^{L,f},\phi_A^{L,f})$ is taken, the spaces of the end-point parameters can still hardly be well restricted. So, refined experimental measurements are required for a definite conclusion. \begin{figure}[t] \begin{center} \subfigure[]{\includegraphics[width=8cm]{c4sa.pdf}}\quad \subfigure[]{\includegraphics[width=8cm]{c4sb.pdf}}\\ \subfigure[]{\includegraphics[width=8cm]{c4sc.pdf}}\quad \subfigure[]{\includegraphics[width=8cm]{c4sd.pdf}} \caption{\label{case41} \small The allowed regions of the longitudinal and transverse end-point parameters with the constraints from $B_{u,d}\to \rho K^*, K^*\bar{K}^*$ and $\phi K^*$ decays. The best-fit points of solutions $\rm A'$-$\rm D'$ correspond to $\chi^2_{\rm min}=10.1$, $10.0$, $11.5$ and $12.1$, respectively. One may also see Fig.~\ref{case412} plotted in the complex plane.} \end{center} \end{figure} \subsection{Case IV} For case IV, we assume that the end-point parameters are topology-independent but non-universal for the longitudinal and transverse building blocks~(a polarization-dependent scheme). The free parameters are $(\rho_A^{L},\phi_A^{L})$ and $(\rho_A^{T},\phi_A^{T})$.
With the same constraint conditions as in case II, where 32 observables relevant to the penguin-dominated $B\to \rho K^*, K^*\bar{K}^*,\phi K^*$ decays are considered, the allowed spaces of the end-point parameters $(\rho_A^{L,T},\phi_A^{L,T})$ are strictly restricted. Explicitly, the allowed spaces consist of four separate parts, named solutions $\rm A'$, $\rm B'$, $\rm C'$ and $\rm D'$, shown in Fig.~\ref{case41}. It could be found that: (i) the values of $\rho_A^T$ are a little larger than those of $\rho_A^L$, due to the requirement of the large transverse polarization fractions of the $B\to \phi K^*$ decays; (ii) There is no significant overlap between the allowed regions of $(\rho_A^{L},\phi_A^{L})$ and $(\rho_A^{T},\phi_A^{T})$ at 95\% C.L., which implies that $(\rho_A^{L},\phi_A^{L})\neq (\rho_A^{T},\phi_A^{T})$; (iii) Numerically, the best-fit values of the four solutions are \begin{equation} \label{scaseIV} (\rho_A,\phi_A[^\circ])^{L,\,T}= \left\{ \begin{array}{l} (1.10,-213),~(1.65,-106)\,,\quad \text{solution $\rm A'$} \\ (0.64,-66),~(1.83,-227)\,,\quad \text{solution $\rm B'$}\\ (0.60,-110),~(1.84,-184)\,,\quad \text{solution $\rm C'$}\\ (1.32,-208),~(1.86,-183)\,,\quad \text{solution $\rm D'$} \end{array} \right. \end{equation} which are much smaller than the end-point parameters of cases I and II, and thus theoretically more acceptable in view of the power counting rules. Moreover, in view of the minimum $\chi^2$, solution $\rm A'$ in this case is much more favored by the data than the results of case II, because $\chi^2_{\rm min}[{\rm solution~A'}]<\chi^2_{\rm min}[{\rm case~II}]$. These findings imply that the end-point contributions are possibly not only topology-independent but also polarization-dependent.
\begin{figure}[t] \begin{center} \subfigure[]{\includegraphics[width=5.5cm]{exca.pdf}} \subfigure[]{\includegraphics[width=5.5cm]{excb.pdf}} \subfigure[]{\includegraphics[width=5.5cm]{exccd.pdf}}\\ \subfigure[]{\includegraphics[width=5.5cm]{excap.pdf}} \subfigure[]{\includegraphics[width=5.5cm]{excbp.pdf}} \subfigure[]{\includegraphics[width=5.5cm]{exccdp.pdf}} \caption{\label{depc4} \small The dependences of $f_{L}(B\to \rho\rho)$ on the parameter $\phi_A^L$ for different values of $\rho_A^L$, with $(\rho_{A}^{T},\phi_{A}^{T})$ fixed to the values of solutions $\rm A'$-$\rm D'$ in Eq.~(\ref{scaseIV}), as labeled in each figure. The shaded bands are the experimental results with $1\sigma$ error. } \end{center} \end{figure} We would then like to test these solutions further in the $B\to \rho\rho$ decays. With the values of $(\rho_{A}^{T},\phi_{A}^{T})$ fixed to those in Eq.~(\ref{scaseIV}), the dependences of $f_{L}(B\to \rho\rho)$ on the parameter $\phi_A^L$ for different $\rho_A^L$ are shown in Fig.~\ref{depc4}. For the $\bar{B}^0\to \rho^+\rho^-$ decay, because its amplitude is dominated by the tree coefficient $\alpha_1$, the effects of the HSS and WA corrections are not significant, as Figs.~\ref{depc4}~(d-f) show. For the $B^-\to \rho^0\rho^-$ decay, its amplitude is independent of the WA contributions but sensitive to the HSS corrections through $\alpha_2$. From Figs.~\ref{depc4}~(a-c), it could be found that solutions $\rm B'$, $\rm C'$ and $\rm D'$ are likely to be excluded by $f_{L}(B^-\to \rho^0\rho^-)$, while solution $\rm A'$ is only marginally acceptable. \begin{figure}[t] \begin{center} \includegraphics[width=8cm]{c6s.pdf} \caption{\label{case42} \small The allowed regions of the longitudinal and transverse end-point parameters with the combined constraints from all of the decay modes considered in this paper. The best-fit point corresponds to $\chi^2_{\rm min}=16.8$.
One may also see Fig.~\ref{case422} plotted in the complex plane.} \end{center} \end{figure} Finally, combining the constraints from all of the decay modes considered in this paper, we present the allowed space of the longitudinal and transverse end-point parameters in Fig.~\ref{case42}. As the analyses above show, the solutions $\rm B'$, $\rm C'$ and $\rm D'$ obtained from the $B_{u,d}\to \rho K^*, K^*\bar{K}^*$ and $\phi K^*$ decays are ruled out entirely by the $B\to \rho\rho$ decays, and the parameter spaces of solution $\rm A'$ are further restricted. Numerically, we get \begin{equation} \label{sol42} (\rho_A,\phi_A[^\circ])^{L,\,T}= (0.97_{-0.39}^{+0.67},-214_{-44}^{+14}),~(1.65_{-0.23}^{+0.13},-108_{-10}^{+13})\,. \quad \text{solution $\rm A'$} \end{equation} Using the values of $(\rho_A^{L,\,T},\phi_A^{L,\,T})$ in Eq.~\eqref{sol42}, we present the theoretical results for $B\to VV$ decays in the ``case IV" columns of Tables \ref{tab:rhok}, \ref{tab:kk}, \ref{tab:phik}, \ref{tab:rhorho} and \ref{tab:phiphi}. All of the theoretical results are consistent with the data within the errors and uncertainties. Comparing case II with case IV, one can find some significant differences, especially for the pure annihilation $\bar{B}^0\to \phi \phi$ decay. For instance, case II predicts a large branching fraction, $\sim 13\times 10^{-8}$, which is within the reach of the SuperKEKB/Belle-II experiment, while the prediction of case IV, $\sim 0.1\times 10^{-8}$, is very small. Moreover, case IV predicts a much larger transverse polarization fraction, $\sim 31\%$, than case II. In addition, we note that the amplitude ${\cal{A}}_{\bar B^{0}\to \phi\phi}^{h}$ involves only the effective coefficients $b_{4}$ and $b_{4,EW}$, both of which involve $A_{1,2}^{i}$ only. This implies that the $\bar{B}^0\to \phi \phi$ decay is very suitable for probing the non-factorizable WA contributions.
The measurement of the pure annihilation decays is required for testing these results and for exploring a much clearer picture of the WA contributions. \section{Conclusion} In summary, we have studied the effects of the weak annihilation and hard spectator scattering contributions in $B_{u,d}\to VV$ decays with the QCDF approach. In order to evaluate the values of the end-point parameters, comprehensive statistical $\chi^2$ analyses are performed in four cases. Our analyses in cases I and II are based on the topology-dependent parameterization scheme, which was first presented in Ref.~\cite{Zhu:2011mm} and is favored by $B\to PP\,,PV$ decays~\cite{Chang:2014yma,Sun:2014tfa}. The analyses in cases~III and IV are based on the polarization-dependent parameterization scheme ({\it i.e.}, the end-point parameters are non-universal for the longitudinal and transverse building blocks). In each case, a global fit of the end-point parameters is performed with the available data, and the numerical results are presented. Our main conclusions and findings can be summarized as follows: \begin{itemize} \item The allowed spaces of $(\rho_A^{i},\phi_A^{i})$ at 95\% C.L. are entirely different from those of $(\rho_A^{f},\phi_A^{f})$ in $B\to VV$ decays, {\it i.e.}, $(\rho_A^{i},\phi_A^{i})\neq (\rho_A^{f},\phi_A^{f})$, which confirms the proposal of the topology-dependent scheme presented in Ref.~\cite{Zhu:2011mm}. More interestingly, the fitted result of $(\rho_A^{f},\phi_A^{f})$ in $B\to VV$ decays is very similar to the ones in $B\to PP$ and $PV$ decays, which implies possibly universal end-point contributions for the factorizable annihilation topologies. \item The findings mentioned above are obtained mainly from the penguin-dominated decays. Unfortunately, some tensions between the theoretical results and the data appear when the color-suppressed tree-dominated $\bar{B}^0\to \rho^0 \rho^0$ decay is taken into account.
To be exact, a large $\rho_A^i$ and/or the simplification $(\rho_H,\phi_H)=(\rho_A^{i},\phi_A^{i})$, which has been proven to be a good simplification by a global fit in $B\to PP\,,PV$ decays~\cite{Chang:2014yma,Sun:2014tfa}, are challenged, especially by ${\cal B}(\bar{B}^0\to \rho^0 \rho^0)$. We further point out that any hypothesis for resolving the ``$\pi\pi$ puzzle'' through modifying the HSS corrections should be carefully tested in $B\to \rho\rho$ decays. \item For the polarization-dependent scheme, an ideal strategy is to extract the longitudinal end-point parameters through the longitudinal-polarization-dominated decay modes and to further analyze their topology dependence. However, the lack of data and the large uncertainties prevent us from obtaining an exact result. Combining all of the decays considered in this paper, the fitted result at 95\% C.L. indicates that $(\rho_A^{L},\phi_A^{L})\neq (\rho_A^{T},\phi_A^{T})$. Using the fitted values of the end-point parameters, the experimental data can be accommodated within the QCDF framework. \end{itemize} Generally, because $B\to VV$ decays involve more observables than $B\to PP$ and $PV$ decays, more information on the WA and HSS contributions can be obtained, which surely helps us to further explore and understand the underlying mechanism. However, the measurements of $B\to VV$ decays are still rather imprecise at present, especially for the complete angular analyses and the pure annihilation decays. With the rapid accumulation of data on $B$ events at the running LHC and the forthcoming SuperKEKB/Belle-II, more refined measurements of $B\to VV$ decays are urgently expected for a much clearer picture of the WA and HSS contributions. \section*{Acknowledgments} The work is supported by the National Natural Science Foundation of China (Grant Nos. 11475055, 11275057, U1232101 and U1332103). Q. Chang is also supported by the Foundation for the Author of National Excellent Doctoral Dissertation of P. R. China (Grant No.
201317), the Program for Science and Technology Innovation Talents in Universities of Henan Province (Grant No. 14HASTIT036) and Foundation for University Key Teacher of Henan Province (Grant No. 2013GGJS-58). \begin{appendix} \section*{Appendix A: The decay amplitudes } \begin{eqnarray} \label{eq:rhok1} {\cal{A}}_{B^{-}\to \rho^{-}\bar K^{*0}}^{h}&=&A_{\rho\bar K^{*}}^{h} [\delta_{pu}\beta_{2}^{p,h}+\alpha_{4}^{p,h}-\frac{1}{2}\alpha_{4,EW}^{p,h}+\beta_{3}^{p,h}+\beta_{3,EW}^{p,h}],\\ \label{eq:rhok2} \sqrt{2}{\cal{A}}_{B^{-}\to \rho^{0}K^{*-}}^{h}&=&A_{\rho\bar K^{*}}^{h} [\delta_{pu}(\alpha_{1}^{p,h}+\beta_{2}^{p,h})+\alpha_{4}^{p,h}+\alpha_{4,EW}^{p,h}+\beta_{3}^{p,h}+\beta_{3,EW}^{p,h}]\nonumber\\ &&+A_{\bar K^{*}\rho}^{h}[\delta_{pu}\alpha_{2}^{p,h}+\frac{3}{2}\alpha_{3,EW}^{p,h}],\\ \label{eq:rhok3} {\cal{A}}_{\bar B^{0}\to \rho^{+} K^{*-}}^{h}&=&A_{\rho\bar K^{*}}^{h} [\delta_{pu}\alpha_{1}^{p,h}+\alpha_{4}^{p,h}+\alpha_{4,EW}^{p,h}+\beta_{3}^{p,h}-\frac{1}{2}\beta_{3,EW}^{p,h}],\\ \label{eq:rhok4} \sqrt{2}{\cal{A}}_{\bar B^{0}\to \rho^{0}\bar K^{*0}}^{h}&=&A_{\rho\bar K^{*}}^{h} [-\alpha_{4}^{p,h}+\frac{1}{2}\alpha_{4,EW}^{p,h}-\beta_{3}^{p,h}+\frac{1}{2}\beta_{3,EW}^{p,h}] \nonumber\\&&+A_{\bar K^{*}\rho}^{h}[\delta_{pu}\alpha_{2}^{p,h}+\frac{3}{2}\alpha_{3,EW}^{p,h}],\\ \label{eq:kk1} {\cal{A}}_{B^{-}\to K^{*-}K^{*0}}^{h}&=&A_{\bar K^{*}K^{*}}^{h} [\delta_{pu}\beta_{2}^{p,h}+\alpha_{4}^{p,h}-\frac{1}{2}\alpha_{4,EW}^{p,h}+\beta_{3}^{p,h}+\beta_{3,EW}^{p,h}],\\ \label{eq:kk2} {\cal{A}}_{\bar B^{0}\to K^{*-}K^{*+}}^{h}&=&A_{\bar K^{*}K^{*}}^{h} [\delta_{pu}\beta_{1}^{p,h}+\beta_{4}^{p,h}+\beta_{4,EW}^{p,h}]+B_{K^{*}\bar K^{*}}^{h}[b_{4}^{p,h}-\frac{1}{2} b_{4,EW}^{p,h}],\\ \label{eq:kk3} {\cal{A}}_{\bar B^{0}\to \bar K^{*0}K^{*0}}^{h}&=&A_{\bar K^{*}K^{*}}^{h} [\alpha_{4}^{p,h}-\frac{1}{2}\alpha_{4,EW}^{p,h}+\beta_{3}^{p,h}+\beta_{4}^{p,h}-\frac{1}{2}\beta_{3,EW}^{p,h}-\frac{1}{2}\beta_{4,EW}^{p,h}] \nonumber\\&&+B_{K^{*}\bar 
K^{*}}^{h}[b_{4}^{p,h}-\frac{1}{2} b_{4,EW}^{p,h}],\\ {\cal{A}}_{B^{-}\to K^{*-}\phi}^{h}&=&A_{\bar K^{*}\phi}^{h} [\delta_{pu}\beta_{2}^{p,h}+\alpha_{3}^{p,h}+\alpha_{4}^{p,h}-\frac{1}{2}\alpha_{3,EW}^{p,h}-\frac{1}{2}\alpha_{4,EW}^{p,h} +\beta_{3}^{p,h}+\beta_{3,EW}^{p,h}],\\ {\cal{A}}_{\bar B^{0}\to \bar K^{*0}\phi}^{h}&=&A_{\bar K^{*}\phi}^{h} [\alpha_{3}^{p,h}+\alpha_{4}^{p,h}-\frac{1}{2}\alpha_{3,EW}^{p,h}-\frac{1}{2}\alpha_{4,EW}^{p,h} +\beta_{3}^{p,h}-\frac{1}{2}\beta_{3,EW}^{p,h}],\\ \sqrt{2}{\cal{A}}_{B^{-}\to \rho^{0}\rho^{-}}^{h}&=&A_{\rho^{-}\rho^{0}}^{h} [\delta_{pu}(\alpha_{2}^{p,h}-\beta_{2}^{p,h})-\alpha_{4}^{p,h}+\frac{3}{2}\alpha_{3,EW}^{p,h}+\frac{1}{2}\alpha_{4,EW}^{p,h} -\beta_{3}^{p,h}-\beta_{3,EW}^{p,h}] \nonumber\\&&+A_{\rho^{0}\rho^{-}}^{h}[\delta_{pu}(\alpha_{1}^{p,h}+\beta_{2}^{p,h})+\alpha_{4}^{p,h}+\alpha_{4,EW}^{p,h} +\beta_{3}^{p,h}+\beta_{3,EW}^{p,h}],\\ {\cal{A}}_{\bar B^{0}\to \rho^{+}\rho^{-}}^{h}&=&A_{\rho\rho}^{h} [\delta_{pu}(\alpha_{1}^{p,h}-\beta_{1}^{p,h})+\alpha_{4}^{p,h}+\alpha_{4,EW}^{p,h}\nonumber\\ &&+\beta_{3}^{p,h}+2\beta_{4}^{p,h}-\frac{1}{2}\beta_{3,EW}^{p,h}+\frac{1}{2}\beta_{4,EW}^{p,h}],\\ -{\cal{A}}_{\bar B^{0}\to \rho^{0}\rho^{0}}^{h}&=&A_{\rho\rho}^{h} [\delta_{pu}(\alpha_{2}^{p,h}-\beta_{1}^{p,h})-\alpha_{4}^{p,h}+\frac{3}{2}\alpha_{3,EW}^{p,h}+\frac{1}{2}\alpha_{4,EW}^{p,h}\nonumber\\ &&-\beta_{3}^{p,h}-2\beta_{4}^{p,h}+\frac{1}{2}\beta_{3,EW}^{p,h}-\frac{1}{2}\beta_{4,EW}^{p,h}],\\ {\cal{A}}_{\bar B^{0}\to \phi\phi}^{h}&=&B_{\phi\phi}^{h} [b_{4}^{p,h}-\frac{1}{2} b_{4,EW}^{p,h}]. \end{eqnarray} \section*{Appendix B: The experimental data and theoretical results} \begin{table}[!htbp] \begin{center} \caption{\small The observables of $B\to\rho K^*$ decays. For the theoretical results of case II and IV, the first, second and third theoretical errors are caused by the CKM parameters, the other inputs in Table~\ref{ppvalue} and end-point parameters, respectively.
} \label{tab:rhok} \vspace{0.2cm} \scriptsize \doublerulesep 0.10pt \tabcolsep 0.05in \begin{tabular}{lcccccc} \hline\hline \multirow{2}{*}{Obs.} &\multirow{2}{*}{Decay modes}&\multirow{2}{*}{Exp.}&\multicolumn{2}{c}{This work} &\multicolumn{2}{c}{Previous works} \\ &&&case II&case IV&Cheng~\cite{Cheng:2009cn,Cheng:2008gxa} &Beneke~\cite{Beneke:2006hg}\\ \hline ${\cal B}[10^{-6}]$ &$B^-\to \rho^-\bar{K}^{*0}$ &$9.2\pm1.5$&$9.0_{-0.5-2.0-2.2}^{+0.4+2.8+3.0}$&$9.0_{-0.5-1.0-2.6}^{+0.4+1.1+4.6}$&$9.2^{+1.2+3.6}_{-1.1-5.4}$ &$5.9^{+0.3+6.9}_{-0.3-3.7}$\\ &$B^-\to\rho^{0}K^{*-}$ &$4.6\pm1.1$&$5.9_{-0.4-0.9-0.8}^{+0.3+1.4+1.3}$&$6.4_{-0.4-0.7-2.4}^{+0.3+0.7+1.3}$&$5.5^{+0.6+1.3}_{-0.5-2.5}$ &$4.5^{+1.5+3.0}_{-1.3-1.4}$\\ &$\bar{B}^0\to\rho^+K^{*-}$ &$10.3\pm2.6$&$9.8^{+0.4+2.3+3.0}_{-0.6-1.7-2.1}$&$8.5^{+0.4+1+4.2}_{-0.5-0.9-2.4}$&$8.9^{+1.1+4.8}_{-1.0-5.5}$ &$5.5^{+1.7+5.7}_{-1.5-2.9}$\\ &$\bar{B}^0\to\rho^0\bar{K}^{*0}$ &$3.9\pm0.8$&$5.5^{+0.3+0.7+1.6}_{-0.3-0.5-0.9}$&$3.2^{+0.1+0.4+2.0}_{-0.2-0.4-1.2}$&$4.6^{+0.6+3.5}_{-0.5-3.5}$ &$2.4^{+0.2+3.5}_{-0.1-2.0}$\\ \hline $A_{CP}[\%]$ &$B^-\to \rho^-\bar{K}^{*0}$ &$-1\pm16$&$4_{-0-1-2}^{+0+1+3}$&$0.8^{+0+0.1+0.5}_{-0-0.1-0.3}$&$-0.3^{+0+2}_{-0-0}$ &$0^{+0+3}_{-0-1}$\\ &$B^-\to \rho^{0} K^{*-}$ &$31\pm13$&$39_{-1-4-21}^{+1+3+16}$&$14^{+0+1+15}_{-0-1-24}$&$43^{+6+12}_{-3-28}$ &$16^{+4+23}_{-4-16}$\\ &$\bar{B}^0\to\rho^+K^{*-}$ &$21\pm15$&$26^{+1+4+13}_{-1-5-18}$&$5.6^{+0.1+3.6+24}_{-0.1-3.3-13}$&$32^{+1+2}_{-3-14}$ &$5^{+1+40}_{-1-17}$\\ &$\bar{B}^0\to\rho^0\bar{K}^{*0}$ &$-6\pm9$&$-25^{+1+9+34}_{-1-8-31}$&$-20^{+1+6+15}_{-1-5-4}$&$-15^{+4+16}_{-8-14}$ &$-15^{+4+17}_{-4-32}$\\ \hline $A_{CP}^0[\%]$ &$B^-\to \rho^-\bar{K}^{*0}$&---&$2.7^{+0.1+0.3+2.3}_{-0.1-0.3-1.8}$&$0.8^{+0+0.1+0.2}_{-0-0.1-0.4}$&---&$-1^{+0+1}_{-0-1}$\\ &$B^-\to \rho^{0}K^{*-}$&---&$5.1^{+0.3+16.6+23.7}_{-0.2-16.2-22.2}$&$3.0^{+0.3+1.2+9.1}_{-0.1-1.2-9.9}$&---&$7^{+2+12}_{-2-13}$\\ 
&$\bar{B}^0\to\rho^+K^{*-}$&---&$68.6^{+1.9+7.2+18.4}_{-2.1-7.6-25.4}$&$20.2^{+0.8+3.9+13.1}_{-0.5-4.3-21.7}$&---&$18^{+6+12}_{-5-29}$\\ &$\bar{B}^0\to\rho^0\bar{K}^{*0}$&---&$18.3^{+0.4+5.7+13.1}_{-0.6-7.3-10.4}$&$13.0^{+0.4+6.5+9.7}_{-0.4-7.8-5.3}$&---&$-30^{+11+60}_{-11-48}$\\ \hline $A_{CP}^{\bot}[\%]$ &$B^-\to \rho^-\bar{K}^{*0}$&---&$-0.6^{+0+0.3+1.3}_{-0-0.5-1.5}$&$-1.2^{+0+0.1+0.5}_{-0-0.1-0.6}$&---&--- \\ &$B^-\to \rho^{0} K^{*-}$&---&$-6.6^{+0.2+30.0+45.7}_{-0.3-20.4-25.0}$&$-7.6^{+0.2+3.5+29.3}_{-0.4-4.4-20.2}$&---&--- \\ &$\bar{B}^0\to\rho^+K^{*-}$&---&$-64.3^{+1.5+9.3+16.4}_{-1.3-8.7-13.2}$&$-23.0^{+0.7+3.0+9.6}_{-0.6-2.4-24.0}$&---&--- \\ &$\bar{B}^0\to\rho^0\bar{K}^{*0}$&---&$-16.9^{+0.7+8.3+11.4}_{-0.4-7.5-12.2}$&$-6.8^{+0.3+4.5+5.7}_{-0.3-3.6-16.3}$&---&--- \\ \hline $f_L[\%]$ &$B^-\to \rho^-\bar{K}^{*0}$ &$48\pm8$&$56_{-0-3-26}^{+0+3+20}$&$59^{+0+6+23}_{-0-6-17}$&$48^{+3+52}_{-4-40}$ &$56^{+4+48}_{-0-30}$\\ &$B^-\to \rho^{0} K^{*-}$ &$78\pm12$&$61_{-0-6-27}^{+0+6+19}$&$72^{+0+4+16}_{-1-4-15}$&$67^{+2+31}_{-3-48}$ &$84^{+2+16}_{-3-25}$\\ &$\bar{B}^0\to\rho^+K^{*-}$ &$38\pm13$&$48^{+0+1+12}_{-0-1-12}$&$53^{+1+5+21}_{-1-5-14}$&$53^{+2+45}_{-3-32}$ &$61^{+5+38}_{-7-28}$\\ &$\bar{B}^0\to\rho^0\bar{K}^{*0}$ &$40\pm14$&$52^{+0+2+7}_{-0-3-10}$&$35^{+0+7+30}_{-0-7-15}$&$39^{+0+60}_{-0-31}$ &$22^{+3+53}_{-3-14}$\\ \hline $f_{\bot}[\%]$ &$B^-\to \rho^-\bar{K}^{*0}$&---&$20.5^{+0+2.0+11.7}_{-0-1.8-9.2}$&$20.9^{+0+3.0+8.6}_{-0-2.9-11.5}$&---&---\\ &$B^-\to \rho^{0} K^{*-}$&---&$17.9^{+0.1+3.0+12.2}_{-0.1-2.8-8.7}$&$14.4^{+0.5+2.1+7.5}_{-0.3-2.0-8.0}$&---&---\\ &$\bar{B}^0\to\rho^+K^{*-}$&---&$24.2^{+0.1+1.0+5.2}_{-0.1-1.0-5.6}$&$23.7^{+0.6+2.7+7.2}_{-0.3-2.6-10.5}$&---&---\\ &$\bar{B}^0\to\rho^0\bar{K}^{*0}$&---&$22.9^{+0.1+1.3+4.1}_{-0.1-1.1-3.2}$&$33.0^{+0.1+3.6+7.7}_{-0.1-3.6-15.1}$&---&---\\ \hline $\phi_{\parallel}$ &$B^-\to \rho^-\bar{K}^{*0}$&---&$-1.7^{+0+0.2+0.4}_{-0-0.2-0.3}$&$3.1^{+0+0.1+5.8}_{-0-0.1-6.2}$&---&$-37^{+0+92}_{-0-59}$\\ &$B^-\to 
\rho^{0} K^{*-}$&---&$-1.3^{+0+0.1+0.4}_{-0-0.2-0.3}$&$3.0^{+0+0.1+5.4}_{-0-0.1-5.8}$&---&$-39^{+4+146}_{-5-88}$\\ &$\bar{B}^0\to\rho^+K^{*-}$&---&$-1.9^{+0+0.1+0.4}_{-0-0.1-0.3}$&$3.1^{+0+0+2.8}_{-0-0.1-2.7}$&---&$-36^{+4+111}_{-5-68}$\\ &$\bar{B}^0\to\rho^0\bar{K}^{*0}$&---&$-2.4^{+0+0.1+0.1}_{-0-0.1-0.1}$&$2.9^{+0+0+2.7}_{-0-0.1-3.0}$&---&$-41^{+4+63}_{-4-44}$\\ \hline $\phi_{\perp}$ &$B^-\to \rho^-\bar{K}^{*0}$&---&$-1.8^{+0+0.2+0.3}_{-0-0.2-0.3}$&$3.0^{+0+0.1+5.8}_{-0-0.1-6.2}$&---&---\\ &$B^-\to \rho^{0} K^{*-}$&---&$-1.4^{+0+0.2+0.4}_{-0-0.2-0.4}$&$3.1^{+0+0.1+5.4}_{-0-0.1-5.8}$&---&---\\ &$\bar{B}^0\to\rho^+K^{*-}$&---&$-2.0^{+0+0.1+0.3}_{-0-0.1-0.3}$&$3.0^{+0+0+2.8}_{-0-0-2.7}$&---&---\\ &$\bar{B}^0\to\rho^0\bar{K}^{*0}$&---&$-2.5^{+0+0.1+0.1}_{-0-0.1-0.1}$&$2.9^{+0+0.1+2.7}_{-0-0.1-3.0}$&---&---\\ \hline $\Delta\phi_{\parallel}$ &$B^-\to \rho^-\bar{K}^{*0}$&---&$0.03^{+0+0+0.03}_{-0-0-0.03}$&$-0^{+0+0+0}_{-0-0-0}$&---&$0^{+0+0}_{-0-2}$\\ &$B^-\to \rho^{0} K^{*-}$&---&$-0.14^{+0+0.03+0.30}_{-0-0.03-0.35}$&$5.6^{+0+0+0.1}_{-0-0-0.1}$&---&$-14^{+3+29}_{-4-60}$\\ &$\bar{B}^0\to\rho^+K^{*-}$&---&$-0.10^{+0.01+0.07+0.36}_{-0.01-0.07-0.22}$&$5.6^{+0+0+0.1}_{-0-0-0.1}$&---&$-19^{+5+74}_{-5-18}$\\ &$\bar{B}^0\to\rho^0\bar{K}^{*0}$&---&$-0.06^{+0+0.02+0.19}_{-0-0.03-0.18}$&$0.17^{+0+0.06+0.08}_{-0.01-0.06-0.7}$&---&$17^{+5+22}_{-5-24}$\\ \hline $\Delta\phi_{\perp}$ &$B^-\to \rho^-\bar{K}^{*0}$&---&$0^{+0+0+0.02}_{-0-0-0.03}$&$-0^{+0+0+0}_{-0-0-0}$&---&---\\ &$B^-\to \rho^{0} K^{*-}$&---&$-0.14^{+0+0.02+0.30}_{-0-0.02-0.38}$&$5.62^{+0+0+0.15}_{-0-0-0.11}$&---&---\\ &$\bar{B}^0\to\rho^+K^{*-}$&---&$-0.12^{+0.01+0.07+0.38}_{-0.01-0.07-0.25}$&$5.62^{+0.02+6.09}_{-0.02-6.13}$&---&---\\ &$\bar{B}^0\to\rho^0\bar{K}^{*0}$&---&$-0.10^{+0+0.03+0.18}_{-0-0.04-0.16}$&$0.16^{+0+0.06+0.08}_{-0-0.06-0.07}$&---&---\\ \hline\hline \end{tabular} \end{center} \end{table} \begin{table}[!htbp] \begin{center} \caption{\small The observables of $B\to K^* \bar{K}^*$ decays. 
The other captions are the same as Table \ref{tab:rhok}. } \label{tab:kk} \vspace{0.2cm} \footnotesize \doublerulesep 0.10pt \tabcolsep 0.05in \begin{tabular}{lcccccc} \hline\hline \multirow{2}{*}{Obs.} &\multirow{2}{*}{Decay Modes}&\multirow{2}{*}{Exp.}&\multicolumn{2}{c}{This work}&\multicolumn{2}{c}{Previous works}\\ &&&case II&case IV&Cheng~\cite{Cheng:2009cn,Cheng:2008gxa} &Beneke~\cite{Beneke:2006hg}\\ \hline ${\cal B}[10^{-6}]$ &$B^-\to K^{*0}K^{*-}$ &$1.2\pm0.5$&$0.8^{+0+0.2+0.2}_{-0-0.1-0.1}$&$0.6^{+0+0.1+0.3}_{-0-0.1-0.2}$&$0.6^{+0.1+0.3}_{-0.1-0.3}$ &$0.5^{+0.2+0.4}_{-0.1-0.3}$\\ &$\bar{B}^0\to K^{*+}K^{*-}$ &$<2$&$1.7^{+0.1+0.1+2.6}_{-0.1-0.1-1.0}$&$0.02^{+0+0+0.02}_{-0-0-0.01}$&$0.1^{+0+0.1}_{-0-0.1}$&--- \\ &$\bar{B}^0\to K^{*0}\bar{K}^{*0}$ &$0.81\pm0.23$&$0.98^{+0.05+0.19+0.56}_{-0.06-0.14-0.40}$&$0.56^{+0.03+0.07+0.27}_{-0.03-0.07-0.14}$&$0.6^{+0.1+0.2}_{-0.1-0.3}$ &$0.6^{+0.1+0.5}_{-0.1-0.3}$\\ \hline $A_{CP}[\%]$ &$B^-\to K^{*0}K^{*-}$&---&$-65.7^{+0.8+5.4+20.5}_{-0.9-4.0-8.9}$&$-16.6^{+0.5+1.9+5.6}_{-0.5-1.8-9.0}$&$16^{+1+17}_{-3-34}$&$0^{+0+17}_{-0-40}$\\ &$\bar{B}^0\to K^{*+}K^{*-}$&---&$0^{+0+0+0}_{-0-0-0}$&$0^{+0+0+0}_{-0-0-0}$&$0$&---\\ &$\bar{B}^0\to K^{*0}\bar{K}^{*0}$&---&$-10.2^{+0.3+1.5+5.5}_{-0.4-1.5-3.8}$&$-9.6^{+0.3+1.7+3}_{-0.3-1.6-5.2}$&$-14^{+1+6}_{-1-2}$&$-13^{+3+6}_{-4-8}$\\ \hline $A_{CP}^0[\%]$ &$B^-\to K^{*0}K^{*-}$ &---&$-84.8^{+2.1+11.2+52.3}_{-3.4-9.5-17.2}$&$-15.2^{+0.5+2.1+7.3}_{-0.5-2.3-4.2}$&---&$9^{+3+12}_{-2-24}$\\ &$\bar{B}^0\to K^{*+}K^{*-}$&---&$0^{+0+0+0}_{-0-0-0}$&$0^{+0+0+0}_{-0-0-0}$&---&---\\ &$\bar{B}^0\to K^{*0}\bar{K}^{*0}$ &---&$-0.23^{+0.02+0.63+1.3}_{-0.02-0.67-2.3}$&$-12.3^{+0.4+1.5+8.0}_{-0.4-1.5-4.2}$&---&$0^{+0+2}_{-0-4}$\\ \hline $A_{CP}^{\bot}[\%]$ &$B^-\to K^{*0}K^{*-}$&---&$0.69^{+0.96+7.99+32.9}_{-1.93-5.61-41.1}$&$26.5^{+0.8+3.0+12.3}_{-0.8-2.8-11.0}$&---&--- \\ &$\bar{B}^0\to K^{*+}K^{*-}$&---&$0^{+0+0+0}_{-0-0-0}$&$0^{+0+0+0}_{-0-0-0}$&---&---\\ &$\bar{B}^0\to 
K^{*0}\bar{K}^{*0}$&---&$0.90^{+0.05+1.36+1.57}_{-0.03-1.30-3.43}$&$23.8^{+0.8+3.0+10.6}_{-0.7-2.8-10.0}$&---&---\\ \hline $f_L[\%]$ &$B^-\to K^{*0}K^{*-}$ &$75^{+16}_{-26}$&$40^{+0+2+18}_{-1-2-7}$&$63^{+0+5+20}_{-0-5-16}$&$45^{+2+55}_{-4-38}$ &$62^{+1+42}_{-2-33}$\\ &$\bar{B}^0\to K^{*+}K^{*-}$&---&$76^{+0+2+5}_{-0-1-9}$&$61^{+0+1+39}_{-0-1-25}$&$\approx1$&---\\ &$\bar{B}^0\to K^{*0}\bar{K}^{*0}$ &$80^{+12}_{-13}$&$69^{+0+1+7}_{-0-1-13}$&$65^{+0+5+20}_{-0-5-16}$&$52^{+4+48}_{-7-48}$ &$69^{+1+34}_{-1-27}$\\ \hline $f_{\bot}[\%]$ &$B^-\to K^{*0}K^{*-}$&---&$18.1^{+0.3+1.4+9.2}_{-0.5-1.1-9.5}$&$18.7^{+0.1+2.8+8.2}_{-0-2.7-10.1}$&---&---\\ &$\bar{B}^0\to K^{*+}K^{*-}$&---&$12.1^{+0+0.7+5.3}_{-0-0.8-3.1}$&$19.5^{+0+0.7+12.3}_{-0-0.6-20.8}$&---&---\\ &$\bar{B}^0\to K^{*0}\bar{K}^{*0}$&---&$26.1^{+0.1+1.3+9.4}_{-0.1-1.1-9.0}$&$14.8^{+0+2.4+7.3}_{-0-2.3-8.1}$&---&---\\ \hline $\phi_{\parallel}$ &$B^-\to K^{*0}K^{*-}$ &---&$-1.8^{+0.1+0.2+3.3}_{-0.1-0.2-2.6}$&$3.0^{+0+0.1+6.6}_{-0-0.1-6.9}$&---&$-39^{+2+96}_{-3-57}$\\ &$\bar{B}^0\to K^{*+}K^{*-}$&---&$0.7^{+0+0.1+0.2}_{-0-0.1-0.2}$&$2.0^{+0+0+1.1}_{-0-0-0.8}$ &---&---\\ &$\bar{B}^0\to K^{*0}\bar{K}^{*0}$&---&$-3.0^{+0+0.1+3.1}_{-0-0.1-4.1}$&$3.1^{+0+0.1+5.8}_{-0-0.1-8.7}$&--- &$-32^{+0+82}_{-0-51}$\\ \hline $\phi_{\perp}$ &$B^-\to K^{*0}K^{*-}$ &---&$-1.8^{+0+0.2+1.9}_{-0-0.2-0.5}$&$3.0^{+0+0.1+5.8}_{-0-0.1-6.9}$&---&---\\ &$\bar{B}^0\to K^{*+}K^{*-}$&---&$-2.2^{+0+0.1+0.2}_{-0-0.1-0.2}$&$-1.8^{+0+0.1+1.1}_{-0-0.1-0.8}$&---&---\\ &$\bar{B}^0\to K^{*0}\bar{K}^{*0}$&---&$-2.1^{+0+0.1+0.2}_{-0-0.1-0.2}$&$3.0^{+0+0+5.8}_{-0-0-8.7}$ &---&---\\ \hline $\Delta\phi_{\parallel}$ &$B^-\to K^{*0}K^{*-}$&---&$-0.60^{+0.06+0.17+5.9}_{-0.06-0.19-5.7}$&$0.08^{+0+0+0.02}_{-0-0-0.05}$&---&$-5^{+1+28}_{-1-7}$\\ &$\bar{B}^0\to K^{*+}K^{*-}$&---&$0^{+0+0+0}_{-0-0-0}$&$0^{+0+0+0}_{-0-0-0}$&---&---\\ &$\bar{B}^0\to K^{*0}\bar{K}^{*0}$&---&$-0.16^{+0+0.89+0}_{-0-0.03-0}$&$0.02^{+0+0+0.03}_{-0-1-0.06}$&---&$3^{+1+14}_{-1-6}$\\ \hline 
$\Delta\phi_{\perp}$ &$B^-\to K^{*0}K^{*-}$&---&$-0.02^{+0.04+0.15+0.5}_{-0.04-0.17-0.6}$&$-0.03^{+0+0+0.02}_{-0-0.01-0.05}$&---&---\\ &$\bar{B}^0\to K^{*+}K^{*-}$&---&$0^{+0+0+0}_{-0-0-0}$&$0.02^{+0+0.01+0}_{-0-0.01-0}$&---&---\\ &$\bar{B}^0\to K^{*0}\bar{K}^{*0}$&---&$-0.08^{+0+0+0.01}_{-0-0-0.02}$&$0.02^{+0+0+0.03}_{-0-0-0.07}$&---&---\\ \hline\hline \end{tabular} \end{center} \end{table} \begin{table}[p] \begin{center} \caption{\small The observables of $B\to \phi \bar{K}^*$ decays. The other captions are the same as Table \ref{tab:rhok}. } \label{tab:phik} \vspace{0.2cm} \footnotesize \doublerulesep 0.10pt \tabcolsep 0.05in \begin{tabular}{lcccccc} \hline\hline \multirow{2}{*}{Observables} &\multirow{2}{*}{Decay Modes}&\multirow{2}{*}{Exp.}&\multicolumn{2}{c}{This work}&\multicolumn{2}{c}{Previous works}\\ &&&case II&case IV&Cheng~\cite{Cheng:2009cn,Cheng:2008gxa} &Beneke~\cite{Beneke:2006hg} \\ \hline ${\cal B}\,[10^{-6}]$ &$B^-\to \phi K^{*-}$ &$10.0\pm1.1$&$6.3^{+0.3+4.2+3.4}_{-0.4-1.7-1.5}$&$12.0^{+0.5+1.3+6.8}_{-0.7-1.2-4.0}$ &$10.0^{+1.4+12.3}_{-1.3-6.1}$ &$10.1^{+0.5+12.2}_{-0.5-7.1}$\\ &$\bar{B}^0\to \phi \bar{K}^{*0}$ &$10.1^{+0.6}_{-0.5}$&$5.8^{+0.3+3.4+3.1}_{-0.3-1.5-1.3}$&$11.1^{+0.5+1.2+6.4}_{-0.6-1.1-3.8}$&$9.5^{+1.3+11.9}_{-1.2-5.9}$ &$9.3^{+0.5+11.4}_{-0.5-6.5}$\\ \hline $A_{CP}[\%]$ &$B^-\to \phi K^{*-}$ &$-1\pm8$&$6^{+0+0+3}_{-0-1-2}$&$0^{+0+0+0.4}_{-0-0-0.3}$ &$0.05$&$0^{+0+2}_{-0-1}$\\ &$\bar{B}^0\to \phi \bar{K}^{*0}$ &$-0\pm4$&$1^{+0+0+0}_{-0-0-0}$&$0.2^{+0+0.1+0.3}_{-0-0.1-0.1}$&$0.8^{+0+0.4}_{-0-0.5}$&$1^{+0+1}_{-0-0}$\\ \hline $A_{CP}^0[\%]$ &$B^-\to \phi K^{*-}$ &$17\pm11$&$5^{+0+4+8}_{-0-2-6}$&$1^{+0+0+0.2}_{-0-0-0.5}$&---&$-1^{+0+2}_{-0-1}$\\ &$\bar{B}^0\to \phi \bar{K}^{*0}$ &$-0.7\pm3.0$&$0.4^{+0+0.6+1.1}_{-0-0.2-0.6}$&$0.8^{+0+0.1+0.3}_{-0-0.1-0.6}$&---&$0^{+0+1}_{-0-1}$\\ \hline $A_{CP}^{\bot}[\%]$ &$B^-\to \phi K^{*-}$ &$22\pm25$&$-3^{+0+2+9}_{-0-3-7}$&$-0.9^{+0+0.2+0.5}_{-0-0.2-0.6}$&---&---\\ &$\bar{B}^0\to \phi \bar{K}^{*0}$ 
&$-1.4\pm5.7$&$-0.3^{+0+0.1+0.5}_{-0-0.2-0.1}$&$-0.8^{+0+0.1+0.4}_{-0-0.1-0.4}$&---&---\\ \hline $f_L[\%]$ &$B^-\to \phi K^{*-}$ &$50\pm5$&$50^{+0+6+46}_{-0-16-43}$&$47^{+0+7+26}_{-0-7-18}$&$49^{+4+51}_{-7-42}$&$45^{+0+58}_{-0-36}$\\ &$\bar{B}^0\to \phi \bar{K}^{*0}$ &$49.7\pm1.7$&$50.1^{+0+6.0+46.1}_{-0-15.3-43.7}$&$47^{+0+7+26}_{-0-7-18}$&$50^{+4+51}_{-6-43}$&$44^{+0+59}_{-0-36}$\\ \hline $f_{\bot}[\%]$ &$B^-\to \phi K^{*-}$ &$20\pm5$&$21^{+0+2+20}_{-0-3-20}$&$27^{+0+4+9}_{-0-4-13}$&---&---\\ &$\bar{B}^0\to \phi \bar{K}^{*0}$ &$22.5\pm1.5$&$20.5^{+0+2.2+20.9}_{-0-2.3-19.1}$&$27^{+0+4+9}_{-0-4-13}$&---&---\\ \hline $\phi_{\parallel}$ &$B^-\to \phi K^{*-}$ &$-0.80\pm0.17$&$-1.18^{+0+0.62+0.93}_{-0-0.55-0.64}$&$-3.0^{+0+6.1+9.4}_{-0-6.1-10.6}$&---&$-41^{+0+84}_{-0-53}$\\ &$\bar{B}^0\to \phi \bar{K}^{*0}$ &$-0.71\pm0.06$&$-1.13^{+0+0.56+0.94}_{-0-0.51-0.64}$&$-3.0^{+0+6.0+9.4}_{-0-6.1-10.6}$&---&$-42^{+0+87}_{-0-54}$\\ \hline $\phi_{\perp}$ &$B^-\to \phi K^{*-}$ &$-0.56\pm0.17$&$-1.20^{+0+0.62+1.03}_{-0-0.54-0.72}$&$-3.0^{+0+6.2+9.4}_{-0-6.1-10.6}$&---&$-41^{+0+84}_{-0-53}$\\ &$\bar{B}^0\to \phi \bar{K}^{*0}$ &$-0.61\pm0.06$&$-1.18^{+0+0.64+1.05}_{-0-0.55-0.74}$&$-3.0^{+0+0.1-9.4}_{-0-0.1-10.6}$&---&$-42^{+0+87}_{-0-54}$\\ \hline $\Delta\phi_{\parallel}$ &$B^-\to \phi K^{*-}$ &$0.07\pm0.21$&$0.03^{+0+0+0.05}_{-0-0-0.02}$&$0^{+0+0+0}_{-0-0-0}$&---&$0^{+0+0}_{-0-1}$\\ &$\bar{B}^0\to \phi \bar{K}^{*0}$ &$0.05\pm0.05$&$0^{+0+0+0.01}_{-0-0-0.01}$&$0^{+0+0+0}_{-0-0-0}$&---&$0^{+0+0}_{-0-0}$\\ \hline $\Delta\phi_{\perp}$ &$B^-\to \phi K^{*-}$ &$0.19\pm0.21$&$-0.02^{+0+0+0.05}_{-0.02-0.01-0.08}$&$0^{+0+0+0}_{-0-0-0}$&---&$0^{+0+0}_{-0-1}$\\ &$\bar{B}^0\to \phi \bar{K}^{*0}$ &$0.08\pm0.05$&$0^{+0+0+0.01}_{-0-0-0.01}$&$0^{+0+0+0}_{-0-0-0}$&---&$0^{+0+0}_{-0-0}$\\ \hline\hline \end{tabular} \end{center} \end{table} \begin{table}[!htbp] \begin{center} \caption{\small The observables of $B\to \rho\rho$ decays. The other captions are the same as Table \ref{tab:rhok}. 
} \label{tab:rhorho} \vspace{0.2cm} \footnotesize \doublerulesep 0.10pt \tabcolsep 0.05in \begin{tabular}{lcccccc} \hline\hline \multirow{2}{*}{Obs.} &\multirow{2}{*}{Decay Modes}&\multirow{2}{*}{Exp.}&\multicolumn{2}{c}{This work}&\multicolumn{2}{c}{Previous works} \\ & & &case II&case IV&Cheng~\cite{Cheng:2009cn,Cheng:2008gxa}&Beneke~\cite{Beneke:2006hg} \\\hline ${\cal B}[10^{-6}]$ &$B^-\to \rho^0 \rho^- $ &$24.0^{+1.9}_{-2.0}$&$26.2^{+2.0+6.1+5.2}_{-2.2-5.7-3.4}$&$20.0^{+1.5+3.6+2.0}_{-3.6-3.4-1.6}$&$20.0^{+4+2}_{-1.9-0.9}$ &$18.8^{+0.4+3.2}_{-0.4-3.9}$\\ &$\bar{B}^0\to\rho^+\rho^-$ &$24.2^{+3.1}_{-3.2}$&$24.4^{+1.8+4.8+0.8}_{-2.0-4.2-0.3}$&$25.7^{+1.9+1.6+0.7}_{-2-2-0.4}$&$25.5^{+1.5+2.4}_{-2.6-1.5}$ &$23.6^{+1.7+3.9}_{-1.9-3.6}$ \\ &$\bar{B}^0\to\rho^0\rho^0$ &$0.94\pm0.17$&$14.3^{+1.1+8.8+10.3}_{-1.2-7.8-4.3}$&$1.52^{+0.11+0.89+0.29}_{-0.12-0.77-0.36}$&$0.9^{+1.5+1.1}_{-0.4-0.2}$ &$0.9^{+0.6+1.9}_{-0.3-0.9}$\\ \hline $A_{CP}[\%]$ &$B^-\to \rho^0 \rho^- $ &$-5.1\pm5.4$&$0.5^{+0+0.1+0.1}_{-0-0.2-0.1}$&$0.1^{+0+0.1+0.2}_{-0-0.1-0.1}$ &$0.06$ &$0^{+0+0}_{-0-0}$\\ &$\bar{B}^0\to\rho^+\rho^-$ &---&$-20.3^{+0.7+3.0+4.4}_{-0.6-3.2-3.1}$&$-3.4^{+0.1+0.7+3.8}_{-0.1-0.8-8.9}$&$-4^{+0+3}_{-0-3}$ &$-1^{+0+4}_{-0-8}$\\ &$\bar{B}^0\to\rho^0\rho^0$ &---&$16.1^{+0.5+8.4+8.9}_{-0.5-3.8-10.9}$&$41.5^{+1.0+19.1+10.5}_{-1.2-9.4-20.5}$ &$30^{+17+14}_{-16-26}$ &$28^{+5+53}_{-7-29}$\\ \hline $A_{CP}^0[\%]$ &$B^-\to \rho^0 \rho^-$&---&$0.82^{+0.02+0.23+0.35}_{-0.03-0.28-0.21}$&$0.07^{+0.01+0.06+0.19}_{-0-0.05-0.11}$&---&---\\ $C_{long}[\%]$ &$\bar{B}^0\to\rho^+\rho^-$&$0\pm9$&$32^{+1+4+7}_{-1-3-8}$&$6^{+0+0+9}_{-0-0-3}$&---&---\\ &$\bar{B}^0\to\rho^0\rho^0$&$20\pm90$&$-40^{+1+12+15}_{-1-29-10}$&$-30^{+1+10+12}_{-1-22-35}$&---&---\\ $S_{long}[\%]$ &$\bar{B}^0\to\rho^+\rho^-$&$-14\pm13$&$-0^{+5+4+12}_{-7-5-6}$&$-22^{+5+0+3}_{-7-0-6}$&---&---\\ &$\bar{B}^0\to\rho^0\rho^0$&$30\pm70$&$34^{+4+8+11}_{-7-9-13}$&$48^{+4+8+19}_{-6-9-13}$&---&---\\ \hline $A_{CP}^{\bot}[\%]$ &$B^-\to \rho^0 
\rho^-$&---&$-2.4^{+0.1+0.2+0.5}_{-0.1-0.2-0.4}$&$2.0^{+0+0.2+0.6}_{-0-0.4-0.4}$&---&--- \\ &$\bar{B}^0\to\rho^+\rho^-$&---&$56.4^{+1.2+7.0+6.3}_{-1.6-7.0-8.3}$&$25.4^{+0.8+5.9+19.8}_{-0.8-7.3-17.4}$&---&--- \\ &$\bar{B}^0\to\rho^0\rho^0$&---&$2.2^{+0.1+0.6+2.8}_{-0.1-0.5-2.2}$&$6.4^{+0.3+4.1+21.7}_{-0.6-1.8-13.2}$&---&--- \\ \hline $f_L[\%]$ &$B^-\to \rho^0 \rho^- $ &$95.0\pm1.6$&$74.5^{+0+7.8+7.7}_{-1.6-6.5-11.6}$&$91.4^{+0+1.7+1.8}_{-0-2.0-1.4}$&$96^{+1+2}_{-1-2}$ &$95.9^{+0.2+3.4}_{-0.3-6.4}$\\ &$\bar{B}^0\to\rho^+\rho^-$ &$97.8^{+2.5}_{-2.2}$&$80.7^{+0.1+5.5+7.9}_{-0.1-6.1-10.5}$&$92.1^{+0.2+1.6+1.8}_{-0.1-2.0-1.2}$&$92^{+1+1}_{-2-2}$ &$91.3^{+0.4+5.6}_{-0.3-6.4}$\\ &$\bar{B}^0\to\rho^0\rho^0$ &$59\pm13$\tnote{*}&$22.4^{+0.1+2.3+2.9}_{-0.2-3.7-4.2}$&$34.9^{+0.5+8.4+19.4}_{-0.7-4.7-14.2}$&$92^{+3+6}_{-4-37}$&$90^{+3+8}_{-4-56}$\\ \hline $f_{\bot}[\%]$ &$B^-\to \rho^0 \rho^-$&---&$12.7^{+0+3.3+5.8}_{-0-3.9-3.8}$&$4.1^{+0+1.0+0.7}_{-0-0.9-0.9}$&---&---\\ &$\bar{B}^0\to\rho^+\rho^-$&---&$12.4^{+0.1+3.4+7.1}_{-0.1-3.0-4.8}$&$4.2^{+0+1.2+0.7}_{-0.1-1.0-0.6}$&---&---\\ &$\bar{B}^0\to\rho^0\rho^0$&---&$38.0^{+0.1+1.0+4.4}_{-0.1-1.0-3.4}$&$30.2^{+0.3+2.5+6.9}_{-0.2-4.9-9.5}$&---&---\\ \hline $\phi_{\parallel}$ &$B^-\to \rho^0 \rho^-$&---&$-1.3^{+0+0.2+0.3}_{-0-0.2-0.3}$&$0.5^{+0+0.1+0.1}_{-0-0.2-0.1}$&---&$-5^{+0+31}_{-0-32}$\\ &$\bar{B}^0\to\rho^+\rho^-$&---&$1.2^{+0+0.2+0.4}_{-0-0.3-0.3}$&$-0.4^{+0+0.1+0}_{-0-0.1-0}$&---&$1^{+2+17}_{-2-17}$\\ &$\bar{B}^0\to\rho^0\rho^0$&---&$-2.4^{+0+0.2+0}_{-0-0.1-0}$&$1.3^{+0+0+0.3}_{-0-0-0.5}$&---&---\\ \hline $\phi_{\perp}$ &$B^-\to \rho^0 \rho^-$&---&$-1.3^{+0+0.2+0.3}_{-0-0.2-0.3}$&$0.5^{+0+0.1+0.1}_{-0-0.2-0.1}$&---&$-5^{+0+31}_{-0-32}$\\ &$\bar{B}^0\to\rho^+\rho^-$&---&$0.9^{+0+0.2+0.3}_{-0.2-0.3-0.3}$&$-0.4^{+0+0.1+0}_{-0-0.1-0}$&---&$1^{+2+17}_{-2-17}$\\ &$\bar{B}^0\to\rho^0\rho^0$&---&$-2.6^{+0+0.1+0.1}_{-0-0-0.1}$&$1.4^{+0+0.1+0.3}_{-0-0.1-0.5}$&---&---\\ \hline $\Delta\phi_{\parallel}$ &$B^-\to \rho^0 
\rho^-$&---&$-0.02^{+0+0+0}_{-0-0-0}$&$-0^{+0+0+0}_{-0-0-0}$&---&$-6^{+2+2}_{-1-5}$\\ &$\bar{B}^0\to\rho^+\rho^-$&---&$-0.08^{+0+0.05+0.11}_{-0-0.06-0.16}$&$-0.31^{+0.01+0+0.09}_{-0.01-0-0}$&---&$4^{+1+9}_{-1-9}$\\ &$\bar{B}^0\to\rho^0\rho^0$&---&$0.16^{+0+0.07+0.03}_{-0.01-0.04-0.05}$&$0.42^{+0.01+0.18+0.26}_{-0.01-0.09-0.10}$&---&---\\ \hline $\Delta\phi_{\perp}$ &$B^-\to \rho^0 \rho^-$&---&$-0.02^{+0+0+0}_{-0-0-0}$&$0^{+0+0+0}_{-0-0-0}$&---&$-6^{+2+2}_{-1-5}$\\ &$\bar{B}^0\to\rho^+\rho^-$&---&$0.11^{+0+0+0.12}_{-0.03-0.02-0.08}$&$-0.28^{+0.01+0.03+0.09}_{-0.01-0.03-0.04}$&---&$4^{+1+9}_{-1-9}$\\ &$\bar{B}^0\to\rho^0\rho^0$&---&$0.16^{+0+0.08+0.04}_{-0.01-0.04-0.04}$&$0.37^{+0.01+0.13+0.26}_{-0.01-0.71-0.10}$&---&---\\ \hline\hline \end{tabular} \end{center} \end{table} \begin{table}[!htbp] \begin{center} \caption{\small The observables of $\bar{B}^0\to \phi \phi$ decays. The CP asymmetries $A_{CP}^{(0,\perp)}$ and phases $\Delta\phi_{\parallel,\perp}$ are equal to zero, and thus not listed. The other captions are the same as Table \ref{tab:rhok}. 
} \label{tab:phiphi} \vspace{0.2cm} \footnotesize \doublerulesep 0.10pt \tabcolsep 0.05in \begin{tabular}{lcccc} \hline\hline \multirow{2}{*}{Obs.} &\multirow{2}{*}{Exp.}&\multicolumn{2}{c}{This work}&Previous work\\ &&case II&case IV&Beneke~\cite{Beneke:2006hg} \\ \hline ${\cal B}\,[10^{-8}]$ &$<20$&$13.4^{+0.6+1.2+23.4}_{-0.9-1.0-7.6}$&$0.10^{+0+0+0.08}_{-0-0-0.04}$&$<3$\\ \hline $f_L[\%]$ &---&$72^{+0+2+5}_{-0-2-8}$&$38^{+0+2+51}_{-0-2-22}$&$>80$\\ \hline $f_{\bot}[\%]$ &---&$14^{+0+1+5}_{-0-1-3}$&$31^{+0+1+10}_{-0-1-26}$&---\\ \hline $\phi_{\parallel}$ &---&$0.49^{+0+0.06+0.12}_{-0-0.06-0.13}$&$1.86^{+0+0+1.08}_{-0-0-0.84}$&---\\ \hline $\phi_{\perp}$ &---&$-2.44^{+0+0.06+0.14}_{-0-0.06-0.17}$&$-1.84^{+0+0+1.08}_{-0-0-0.85}$&---\\ \hline\hline \end{tabular} \end{center} \end{table} \newpage \section*{Appendix C: The fitted results of end-point parameters in the complex plane} \begin{figure}[ht] \begin{center} \subfigure[]{\includegraphics[width=7cm]{Fig1a.pdf}}\quad \subfigure[]{\includegraphics[width=7cm]{Fig1b.pdf}}\\ \subfigure[]{\includegraphics[width=7cm]{Fig1c.pdf}}\quad \subfigure[]{\includegraphics[width=7cm]{Fig1d.pdf}} \caption{\label{KKrhoK2} \small Same as Fig.~\ref{KKrhoK}, but in the $(\rho \cos \phi, \rho \sin \phi)$ plane.} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \subfigure[]{\includegraphics[width=7cm]{Fig2a.pdf}}\quad \subfigure[]{\includegraphics[width=7cm]{Fig2b.pdf}} \caption{\label{KKrhoKphiK2} \small Same as Fig.~\ref{KKrhoKphiK}, but in the $(\rho \cos \phi, \rho \sin \phi)$ plane.} \end{center} \end{figure} \vspace{2cm} \begin{figure}[ht] \begin{center} \subfigure[]{\includegraphics[width=7cm]{Fig4a.pdf}} \subfigure[]{\includegraphics[width=7cm]{Fig4b.pdf}} \subfigure[]{\includegraphics[width=7cm]{Fig4c.pdf}} \caption{\label{case32} \small Same as Fig.~\ref{case3}, but in the $(\rho \cos \phi, \rho \sin \phi)$ plane.} \end{center} \end{figure} \begin{figure}[ht] \begin{center}
\subfigure[]{\includegraphics[width=7cm]{Fig5a.pdf}} \subfigure[]{\includegraphics[width=7cm]{Fig5b.pdf}} \subfigure[]{\includegraphics[width=7cm]{Fig5c.pdf}} \subfigure[]{\includegraphics[width=7cm]{Fig5d.pdf}} \caption{\label{case412} \small Same as Fig.~\ref{case41}, but in the $(\rho \cos \phi, \rho \sin \phi)$ plane.} \end{center} \end{figure} \begin{figure}[h] \begin{center} \subfigure[]{\includegraphics[width=7cm]{Fig7.pdf}} \caption{\label{case422} \small Same as Fig.~\ref{case42}, but in the $(\rho \cos \phi, \rho \sin \phi)$ plane.} \end{center} \end{figure} \end{appendix} \newpage
\section{Introduction}\label{sec:motivation} How much do advertisements decrease screen time? Do algorithmic recommendations increase consumption of inflammatory content? Does exposure to diverse news sources mitigate political polarization? These are a few questions that firms, researchers, and regulators alike ask about digital platforms \citep{Barber2015TweetingFL,Brown2022EchoCR}. We unify these questions under the task we term: estimating the \emph{steerability of consumption}---i.e., estimating the effect of platform actions on consumer behavior. Estimating the steerability of consumption requires causal inference because past consumption and platform actions influence both future consumption and future actions. In other words, they introduce confounding. Resolving confounding through randomization in the form of A/B tests is standard in the industry. However, randomization is not always possible on digital platforms. As past experience shows, experiments may be ethically fraught~\citep{KramerGuHa14, editorialcomment14}, technically challenging to implement, or prohibitively expensive. Moreover, external investigators may simply not have the power to experimentally intervene in the practices of a platform. Observational causal inference is a promising alternative. However, standard observational causal designs do require the observed data satisfy an overlap assumption: the data generating distribution must assign positive probability to treatment in all strata defined by any realizable choice of the confounders. But since the interaction of participants with digital platforms often spans multiple time steps, the confounding set could become very large. High dimensional confounders make overlap unlikely to hold~\citep{DAmourDiFeLeSe17}, ultimately resulting in invalid inferences. 
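The following toy simulation illustrates this failure mode numerically; the binary state space and the near-deterministic policy are hypothetical choices for illustration, not taken from our analysis. When strata are defined by the full interaction history, many observed strata contain only a single treatment arm, whereas coarser strata based only on the most recent step retain empirical overlap.

```python
# Toy example: empirical overlap under a full-history vs. a last-step
# adjustment set.  All modeling choices here are hypothetical.
import random

random.seed(0)

def rollout(T=6):
    """One consumer-platform trajectory of binary consumption x and action u."""
    x, u, hist = 0, 0, []
    for _ in range(T):
        x = x ^ u if random.random() < 0.8 else random.getrandbits(1)
        u = x if random.random() < 0.9 else 1 - x  # near-deterministic policy
        hist.append((x, u))
    return hist

def overlap_fraction(pairs):
    """Fraction of strata in which both treatment arms are observed."""
    strata = {}
    for key, treatment in pairs:
        strata.setdefault(key, set()).add(treatment)
    return sum(len(arms) == 2 for arms in strata.values()) / len(strata)

samples = [rollout() for _ in range(20_000)]
# Treatment is the final action; confounders are everything preceding it.
full_history = [(tuple(h[:-1]) + (h[-1][0],), h[-1][1]) for h in samples]
last_step = [((h[-2], h[-1][0]), h[-1][1]) for h in samples]

print("overlap with full-history strata:", overlap_fraction(full_history))
print("overlap with last-step strata:   ", overlap_fraction(last_step))
```

Even with only six interaction steps, the full-history strata fragment the sample into mostly single-arm cells; shrinking the adjustment set to the most recent step is the kind of reduction that the Markovian model introduced below is designed to deliver.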
An additional challenge is that algorithmic platform actions are not randomized treatments: the actions they take are strongly correlated with---or, in some cases, are deterministic functions of---the data observed, making overlap assumptions with respect to past consumer and platform actions even less likely to hold. To address these challenges, we take advantage of the structure of the interaction between digital platforms and their participants to expose weaker assumptions that permit \emph{valid} observational causal inference. To do so, we take a control-theoretic perspective on the problem of estimating the steerability of consumption. Rather than omitting the role of time, as is common in causal inference, we explicitly keep track of the interactions between the platform and the participants over time. In particular, we model consumption as a dynamical system where the consumer's features $x_t$ at time~$t$ are determined by the platform action $u_{t-1}$, the previous state $x_{t-1}$, as well as exogenous noise. The platform's action $u_t$ is then updated based on the most recent observations of $x_t$. As a concrete example, let $x_t$ measure what a consumer clicks on and $u_t$ what a recommender system suggests. Applied to this example, our model captures the time-dependent interplay between user and recommender system. Our model posits that the dynamics are Markovian---that the current time step is only affected by the previous time step---which serves to reduce the dimension of the confounder. We argue this assertion is reasonable for digital settings, as future recommendations are dictated largely by consumption in the recent past. Building on this model, we demonstrate that it is possible to circumvent directly assuming exogenous variations in platform actions in order to establish overlap and identifiability of the steerability of consumption.
We show that a) sufficient exogenous variation in the consumer's features and b) non-degeneracy of the platform control action are sufficient for identifiability. We emphasize that, in contrast to standard approaches, our results hold even when the platform's action is a deterministic function of the past consumption and actions (e.g., a predictive model), a plausible setting in digital systems. \begin{figure*}[t] \small \centering \subfigure[standard model]{ \begin{tikzpicture}[x=0.7cm,y=0.6cm] \begin{scope}[every node/.style={circle,thick,draw,minimum size=9mm, fill = black!10!white}] \node (A) at (0,0) {$u$}; \node (D) at (3,3) {$x$}; \end{scope} \begin{scope}[every node/.style={circle,thick,draw,minimum size=9mm}] \node (C) at (0,3) {$z$}; \end{scope} \begin{scope}[>={Stealth[black]}, every node/.style={fill=white,circle}, every edge/.style={draw=black,very thick}] \path [->] (C) edge (D); \path [->] (C) edge (A); \end{scope} \begin{scope}[>={Stealth[orange]}, every node/.style={fill=white,circle}, every edge/.style={draw=orange,very thick}] \path [->] (A) edge (D); \end{scope} \end{tikzpicture}\label{fig:general-causal-graph}} \hspace{2cm} \subfigure[modeling temporal confounding structure]{ \begin{tikzpicture}[x=0.7cm,y=0.6cm] \begin{scope}[every node/.style={circle,thick,draw,minimum size=8mm}] \node (B) at (6,0) {$u_{t-2}\hid $}; \node (D) at (6,3) {$x_{t-2}\hid $}; \node (E) at (9,3) {$x_{t-1}\hid $}; \end{scope} \begin{scope}[every node/.style={thick,minimum size=9mm}] \node (G) at (6,5) {$\noise_{t-2}$} ; \node (H) at (9,5) {$\noise_{t-1}$} ; \node (M) at (12,5) {$\noise_{t}$} ; \end{scope} \begin{scope}[every node/.style={circle,thick,draw,minimum size=8mm, fill = black!10!white}] \node (L) at (9,0) {$u_{t-1}\hid $}; \node (K) at (12,3) {$x_{t}\hid $}; \end{scope} \node at (3,0) {$\dots$}; \node at (3,3) {$\dots$}; \begin{scope}[>={Stealth[black]}, every node/.style={fill=white,circle}, every edge/.style={draw=black,very thick}] \path [->] (4.5,3)
edge (D); \path [->] (D) edge (E); \path [->] (E) edge (K); \path [->] (E) edge (L); \path [->] (4.5,0) edge (B); \end{scope} \begin{scope}[>={Stealth[black]}, every node/.style={fill=white,circle}, every edge/.style={draw=black,very thick}] \path [->] (G) edge (D); \path [->] (H) edge (E); \path [->] (M) edge (K); \path [->] (D) edge (B); \path [->] (B) edge (L); \path [->] (B) edge (E); \path [->] (4.5,1.5) edge (D); \end{scope} \begin{scope}[>={Stealth[orange]}, every node/.style={fill=white,circle}, every edge/.style={draw=orange,very thick}] \path [->] (L) edge (K); \end{scope} \end{tikzpicture} \label{fig:autoregressive-causal-graph}} \caption{The causal inference problem of estimating the steerability of consumption.} \end{figure*} \paragraph{Contributions.} We unify a class of important causal inference problems under the umbrella of steerability of consumption. We propose a time-aware dynamical systems model to study these problems, and we design associated assumptions for observational causal inference. Working with our model, we establish necessary and sufficient conditions for identifiability of the steerability of consumption. We demonstrate that sufficient exogenous variation in consumption and sufficient expressivity in the platform response enable causal identification, circumventing the need for direct interventions or exogenous variation on the platform action. We show that exogenous variation in consumption at two time steps is sufficient for identifiability, whereas one consumption shock, in general, is not. We analyze two estimators---the \twostageregression and the adjustment formula estimators---for estimating the steerability of consumption from finite samples. Finally, we experiment on real data to test the efficacy of our Markovian assumption at reducing overlap violations.
Our work can be seen as a route towards justifying the valid use of observational causal inference for estimating steerability of consumption. Along the way, we connect problems of causal inference with the technical repertoire of control theory, a fruitful avenue for further research. \subsection{Background} The fact that digital platforms, their predictions, and their actions non-trivially impact the individuals that interact with the platform has widely been recognized in diverse applications spanning content recommendation, prediction policy problems and labor markets~\citep[c.f.,][]{shmueli20, thai16traffic, fleder10recom,admoavicius13, Krauth2022BreakingFL, Barber2015TweetingFL,Brown2022EchoCR}. In the machine learning community, the implications of predictions on populations have formally been studied in several works~\citep[e.g.,][]{PerdomoZrMeHa20, Dean2022PreferenceDU, Kalimeris2021PreferenceAI, Chaney2018HowAC}. We point out the work by \citet{HardtJaMe22}. They relate the extent to which a platform can steer user behavior to the economic concept of power, and introduce performative power to quantify it. Assessing performative power crucially relies on estimating the causal effect of algorithmic actions. Thus, our work provides sufficient conditions for how performative power can be assessed from observational data. Related to our work, \citet{mendler22causal} also focus on identifying the causal effect of predictions on eventual outcomes in settings where the covariates and the prediction are deterministically bound. However, they do not take advantage of repeated interactions between the predictor and the population, but instead take advantage of potential incongruences in modality. Similarly, estimating the steerability of consumption has also been the motivation of a recent work on causal inference in the presence of confounding by~\cite{shah22steer}. 
However, the authors focus on dealing with partially unobserved confounding $z$, while taking overlap in the rollout for granted by assuming that the joint distribution $p(u,x,z)$ belongs to an exponential family. Our modeling approach is inspired by the literature on dynamical systems in control theory. Taking this perspective, the task of estimating the steerability of consumption in our causal model maps to a system identification problem~\citep{Ljung10}. However, our problem setup differs from the standard control theory setting because we focus on purely observational designs, where we do not choose what platform control actions (i.e., interventions) are taken. Within the system identification literature, we highlight the work of \cite{Abbasi-YadkoriSz11} because of the similarity of their model to the linear model we study in \Cref{sec:linear-model}. Their work proposes a method of controlling linear quadratic control systems with unknown dynamics via the principle of certainty equivalence; their results hinge on a finite-sample system identification result, similar in spirit to the type of identifiability results found in this paper. From a technical standpoint, the causal question we are interested in is related to studies of dose-response and treatment-effect estimation under overlap violations in causal inference~\citep[cf.][]{PetersenPoGrWaVa12}. By approaching the problem from a control-theoretic angle, we arrive at a principled approach to shrink the adjustment set and make identifiability possible. \section{Model} \label{sec:general-model} The standard causal model for our problem is shown in \Cref{fig:general-causal-graph}.
Estimating the steerability of consumption corresponds to quantifying the causal effect of a platform action $u$ on a state $x$, subject to observed confounding $z$, where actions $u$ represent the algorithmic decisions of a digital platform, and the variable $x$ captures relevant user features, such as what content the user consumed. The confounding variable $z$ captures all available past information that influences both the choice of platform action $u$ and the variable $x$. As we have explained earlier, high-dimensional confounding due to long rollouts and correlated platform actions suggests that overlap is unlikely to hold in the standard setting, making the standard model unsuitable for estimating the steerability of consumption. The unique feature of our model---outlined in \Cref{fig:autoregressive-causal-graph}---is that it makes the temporal component of interactions among the confounding variables explicit. We let $x_t\hid \in \R^d$ and $u_t\hid \in \R^p$ denote the consumption and platform action at time step $t$, respectively. We assume that for all $t\geq 1$ the dynamics of the system follow \begin{align} \begin{split} x_t\hid &= \Hx{f(x_{t-1}\hid)}{ g(u_{t-1}\hid)} + \noise_t\\ u_t\hid &= \Hu{h(x_t\hid)}{ r(u_{t-1}\hid)} \end{split} \label{eqn:gen_model} \end{align} with $\noise_t \in \R^d$ modeling potential exogenous variations in $x_t$, and the functions $f: \R^d\to \R^d$, $g: \R^p \to \R^d$, $h: \R^d \to \R^p$, and $r: \R^p \to \R^p$ describing how consumption and platform actions affect one another. We make the following assumption on the exogenous noise:\footnote{We choose to use \Cref{ass:independent_noise} for clarity, even though it is stronger than we need for our results.
See \Cref{sec:independence-discussion} for a discussion of how to relax the assumption.} \begin{assumption}[Mutually Independent Exogenous Variation]\label{ass:independent_noise} The random variables $\{\noise_t\}_{t\geq 1}$ are mutually independent, and each $\noise_t$ is independent of $(x_{0}, u_{0})\sim P_{0}$. \end{assumption} With respect to the model we outlined above, we define steerability of consumption as the ability of the platform to change user consumption. More formally, given a time step $t$, a base action $u$, and an intervention $u'$, we define the steerability of consumption as \begin{align*} \steer_t(u,u')\defeq\E[x_{t} \mid \mathrm{do}(u_{t-1} \defeq u')] - \E[x_{t} \mid \mathrm{do}(u_{t-1} \defeq u)]. \end{align*} In our model, to identify the steerability of consumption it suffices to identify the following causal effect \[\bar{x}_t(u)\defeq \E[x_{t} \mid \mathrm{do}(u_{t-1} \defeq u)]\,.\] Because our system dynamics~\eqref{eqn:gen_model} are time-invariant and the structural equations for $x$ are assumed to be separable, we have $\steer_t=\steer_{t'}$ for all $t, t'$. Thus, without loss of generality, we will focus on identifying $\steer_\bT$ via identifying $\bar{x}_\bT$, letting $\bT$ denote the time index at which we are interested in estimating the steerability of consumption. For $\bK\geq 1$, we use $R_\bK$ to denote a rollout of the previous $\bK$ time indices leading up to the chosen time index $\bT$: \[R_\bK\defeq(\{x_{\bT-t}, u_{\bT - t}\}_{t=1}^{\bK}, x_\bT).\] In this work, we assume access to iid observations of rollouts $R_\bK$. We will specify $\bK$ in each result. \subsection{Running example} \label{sec:running-example} We instantiate our model with an example. Consider an auditor who is interested in estimating the impact of the recommendation algorithm of a video streaming platform---like Twitch or YouTube---on the consumption patterns of its users.
Let $y_t \in \R^p$ be some measure of content consumption (e.g., number of hours streamed) for $p$ video categories of interest during week $t$ for a given user. Let $z_t \in \R^{d_z}$ consist of measurements about the platform---such as revenue per category, click-through rate per category, unique weekly users, unique advertisers per category, and competitors' performance---which could act as confounders. We can think of the joint vector $[y_t; z_t] \in \R^d$ as the state variable $x_t$ for $d=p+d_z$. The platform action $u_t \in \R^p$ is a measure of how many videos from the $p$ categories of interest are recommended to a given user during week $t$. The platform intervenes through $u_t$ with the goal of maximizing total profit, which is some deterministic function of $x_t$. The auditor is interested in estimating how the platform action $u_{t-1}$ impacts the average watch habits $y_t$ of users. More specifically, they are interested in the first $p$ coordinates of the steerability of consumption $\steer(u, u')$. Our model postulates that user consumption changes over time based on the recommendations by the algorithm, as well as external factors (e.g., new trends). Formally, taking inspiration from \citet{JamborWaLa12}, we model the dynamics of the system as \begin{align*} z_t & = f_{1}(z_{t-1}, y_{t-1}) + g_1(u_{t-1}) + \noise_t^{(1)}\\ y_t &= f_{2}(z_{t-1}, y_{t-1}) + g_2(u_{t-1})+ \noise_t^{(2)}. \end{align*} The function~$f_1$ models how the performance metrics chosen as target variables by the firm evolve over time, while the function~$g_1$ models the platform's ability to control these metrics. The function~$f_{2}$ models how much interest users retain in each video category from week to week, as well as the effect of confounders on viewership (e.g., how many hours of viewing time a competitor can poach). The auditor wants to estimate the relationship~$g_2$ that governs how much consumption increases as more recommendations get served.
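For concreteness, these dynamics can be sketched as a small simulation. All choices below are hypothetical linear instantiations of $f_1, f_2, g_1, g_2, h$, and $r$ (the matrices \texttt{F1z}, \texttt{F1y}, \texttt{F2z}, \texttt{F2y}, \texttt{G1}, \texttt{G2}, \texttt{Hz}, \texttt{Hy} and the gain \texttt{alpha} are made up for illustration, not taken from our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
p, d_z, T = 3, 2, 52  # video categories, platform metrics, weeks

# Hypothetical linear choices for f1, f2, g1, g2, h, r (illustration only);
# the gains are scaled so that the closed loop remains stable.
F1z, F1y = 0.8 * np.eye(d_z), 0.05 * np.ones((d_z, p))
F2z, F2y = 0.02 * np.ones((p, d_z)), 0.6 * np.eye(p)
G1 = 0.05 * np.ones((d_z, p))
G2 = 0.3 * np.eye(p)   # the relationship g2 the auditor wants to estimate
Hz, Hy = 0.05 * np.ones((p, d_z)), 0.2 * np.eye(p)
alpha = 0.3            # r(u) = alpha * u regularizes toward past actions

z, y, u = np.zeros(d_z), np.zeros(p), np.zeros(p)
ys = []
for t in range(T):
    xi1 = 0.05 * rng.standard_normal(d_z)  # exogenous shocks to metrics
    xi2 = 0.05 * rng.standard_normal(p)    # exogenous consumption shocks
    z, y = (F1z @ z + F1y @ y + G1 @ u + xi1,
            F2z @ z + F2y @ y + G2 @ u + xi2)
    u = Hz @ z + Hy @ y + alpha * u        # platform reacts to the fresh state
    ys.append(y.copy())
```

Recovering the matrix \texttt{G2} from such observed rollouts, without intervening on \texttt{u}, is the auditor's task in the remainder of the paper.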
The noise variables~$\noise_t^{(1)}, \noise_t^{(2)}$ allow for natural variation in user preferences. For example, the price of Bitcoin may increase due to changes in economic conditions, leading to many more users watching cryptocurrency videos; this change in behavior is independent of past consumption and the platform's recommendations. We can model the platform action similarly as \[u_t = h(z_t, y_t) + r(u_{t-1}),\] where $h$ models how the platform's algorithm translates viewer statistics and other metrics into future recommendations. The function $r$ models how the video streaming service regularizes its recommendations to avoid overfitting to recent activity. \paragraph{Plausibility of modeling assumptions.} Our model posits a Markovian assumption on the platform and consumption dynamics and an assumption that the consumption and platform action updates are additive (separable) in nature. The Markovian assumption on the platform action dynamics is reasonable for two main reasons. First, digital platforms are constantly retraining machine learning models on fresh data as a way to improve performance, mitigate distribution shift, and quickly fix models which have suffered unexpected drops in performance \citep{Shankar2022OperationalizingML}. Since this retraining occurs on a daily or even hourly cadence, the machine-learning-based algorithmic control actions a platform takes at any time $t$ mainly depend on the state and actions from the recent past. Second, the Markovian view of digital platform control actions is an accepted view in the recommendation system literature. For example, the contextual multi-armed bandit models used to study recommendation systems are Markovian by construction---the platform uses fresh context provided at every time step to make its decisions \citep{Langford2007TheEA, Bouneffouf2019ASO}.
The Markovian assumption on consumer dynamics is based on the belief that there are few long-range causal effects that affect consumption, and that the ones that do exist---say inherent biases, interests, or habits---can be encoded directly or by proxy into all of the states, without blowing up the dimension. For example, we could encode long-term, content-specific click habits by estimating click proportions by content category and placing this information into all of the states. We can generalize our non-linear results to settings beyond additive-update dynamics. We choose to focus on additive updates because a) they are the simplest model that still conveys the nuance of our results, b) they are well accepted in the dynamical systems and causal inference literature, and c) they are prevalent in machine learning (e.g., gradient methods). \paragraph{Beyond recommender systems.} The steerability of consumption is not a term specific to recommender systems; rather, it is a general term referring to the impact platform actions have on user behavior. It is certainly applicable to other digital settings. For example, many digital advertising platforms (and third-party auditors) are interested in whether personalized advertising increases platform activity. On one hand, advertisements clutter user interfaces, making the user experience less streamlined, but on the other hand, personalized advertisements provide users with more opportunities to engage, giving the platform more influence over users' lives. To model this scenario, let $x_t$ be some measure of engagement (e.g., clicks, time online) and $u_t$ be some measure of the type and quantity of ads served. Confounders could include other platform performance measures such as monthly active users. Besides digital platforms, our model also applies to some economic settings. Microeconomists are often interested in estimating the effect product prices have on demand, termed the \textit{price elasticity of demand}.
If we model product demand using $x_t$ and product prices using $u_t$, then the price elasticity of demand is precisely the steerability of consumption. Confounders like product quality can be accounted for in the state variable $x_t$. In macroeconomics, a classical problem is estimating the effect of the Federal Interest Rate on inflation and unemployment. We can use our framework to model the Federal Interest Rate as the platform action $u_t$ and the inflation and unemployment rates as the state $x_t$. In this example, GDP and other measures of the global economy could be possible confounders to account for. \section{Identifiability from exogenous variations on consumption} \label{sec:identifiability} In this section, we outline necessary and sufficient conditions for identifying $\steer_\bT$ given iid observations of $R_{\bK=1}$. A quantity is \textit{identifiable} if it can be uniquely determined from the observational data distribution. Conversely, if there exist multiple values of the quantity that are all consistent with the observational data distribution, then we say it is \textit{unidentifiable}. To provide some context and intuition for our proof strategy for showing identifiability, let us start from the general causal graph in Figure~\ref{fig:general-causal-graph} and recall classical results from causal inference in the presence of observed confounding~\citep{pearl09book}. Standard results tell us that a sufficient condition for identifiability of the causal effect of $u$ on $x$ is \emph{admissibility} together with \emph{overlap}. \begin{definition}[Admissibility]\label{def:adjustment-formula} We say a continuous random variable $Z$ with density $p$ is admissible for adjustment with respect to treatment $U$ and outcome $X$ if the adjustment formula is valid: \begin{equation}\label{eqn:adjustment-formula} \E[X \mid \mathrm{do}(U \defeq u)] = \int \E[X \mid U=u, Z=z] p(z) \,dz.
\end{equation} \end{definition} \begin{definition}[Overlap] Let $U$ be an action and $Z$ a confounding variable with a well-defined joint density $p$. We say overlap of $(u,z)$ is satisfied if $p_{U \mid Z}(u' \mid z') >0$ for all $u' \in \R^p$ and all $z'$ with $p_Z(z') > 0$. \end{definition} Overlap guarantees that every $z$ in the support has non-zero probability of co-occurring with any action $u$, and thus $\E[X\mid U=u, Z=z]$ is well defined. Together with admissibility, overlap guarantees that $\E[X \mid \mathrm{do}(U \defeq u)]$ can be uniquely expressed as a function of observational data distributions, via \eqref{eqn:adjustment-formula}, implying $\E[X \mid \mathrm{do}(U \defeq u)]$ is identifiable. Now, we return to our model. In order to show $\bar{x}_\bT$ is identifiable, we first show that $x_{\bT-1}$ is admissible for adjustment. The proof of \Cref{lem:admissibility} is found in \Cref{proof:lem:admissibility}. \begin{proposition}[Admissibility in our model]\label{lem:admissibility} Assume the structural equations in \eqref{eqn:gen_model} and let \Cref{ass:independent_noise} hold. Then, $x_{\bT-1}$ is admissible with respect to $u_{\bT-1}$ and $x_{\bT}$ for any $\bT \geq 1$. \end{proposition} Hence, the main challenge for establishing identifiability of $\steer_\bT$ is to argue about overlap of $(u_{\bT-1},x_{\bT-1})$. Once we show overlap, we can rewrite $\bar{x}_\bT$ as a function of the observational probability distribution (of $R_{\bK =1}$) by way of the adjustment formula \eqref{eqn:adjustment-formula}. This would mean $\bar{x}_\bT$ is identifiable and therefore the steerability of consumption $\steer_\bT$ is as well. \subsection{Key assumptions} \label{sec:key-assm} We highlight the two requirements on the dynamical system in \eqref{eqn:gen_model} that will allow us to establish overlap of $(u_{\bT-1},x_{\bT-1})$. The first assumption requires that there is exogenous noise in the system that leads to sufficient variation in consumption $x$ across time.
\begin{definition}[Consumption shock]\label{ass:noise-coverage} For a given time step $t\geq 1$, we say there is a consumption shock at time $t$ if the noise $\noise_t$ satisfies $p_{\noise_t}(a) > 0$ for all $a \in \R^d$, where $p_{\noise_t}$ denotes the density of $\noise_t$. \end{definition} We say the system is exposed to $\bM$ shocks prior to $\bT$ if for all $t \in \{\bT-\bM, \ldots, \bT-1\}$, there is a consumption shock at time $t$. We expect that variations in consumption naturally occur in the presence of unexpected news events, economic shocks, or new trends. In order to leverage these consumption shocks for the purpose of identifiability, we need one crucial assumption, which will allow us to circumvent directly assuming exogenous variation on the platform action. Namely, the platform needs to be sufficiently sensitive to the variations in consumption $x$, so that the consumption shocks propagate into the platform action $u$ at consecutive time steps. \begin{definition}[Responsive platform action]\label{ass:expressive-control} For a platform, let $q_c: \R^d \to \R^p$, defined as $q_c(y) \defeq r(h(y) + c)$, describe how the current state $y$ affects the next platform action, given that the previous platform action was $c$. If, for all $c\in \R^p$, $q_c$ is a surjective, continuously differentiable map whose Jacobian $J \in \R^{p, d}$ satisfies $\rank(J) = \min(p, d)$ everywhere, then we say the platform action is responsive. \end{definition} To put our assumptions in context, recall the video recommender system example from Section~\ref{sec:running-example}. As noted above, variations in user video consumption (\Cref{ass:noise-coverage}) naturally occur in the presence of unexpected news events, economic shocks, or new trends. To investigate \Cref{ass:expressive-control}, consider $r(u) = \alpha u$ as a plausible example.
This corresponds to a model where the platform uses previous platform actions as a regularizer for how they select future actions. Note that this simple choice of $r$ is surjective. Furthermore, we expect the number of metrics and confounders which can be affected by platform actions to be large compared to the dimensionality of the platform action, and hence $d \geq p$. In this regime, surjectivity of $h$ is a reasonable assumption, and because $q_c$ is the composition of $h$ and $r$, surjectivity of $q_c$ follows. The Jacobian rank condition imposes a form of ``monotonicity'' on $q_c$. In the video recommender system setting this could correspond to: more views in category $i$ cause more recommendations in category $i$---a plausible assumption on an ML-driven system. \Cref{ass:expressive-control} is also supported by ideas proposed in \citet{Dean2019RecommendationsAU}; they suggest that recommendation systems should be designed such that users have the ability to shape the recommendations they see indirectly via the actions they take. This prescription corresponds in spirit to the surjectivity condition of \Cref{ass:expressive-control}. \subsection{General identifiability result}\label{sec:general-identifiability} We now present our main identifiability result. The proof can be found in \Cref{proof:lem:finite-discrete-full-coverage}. \begin{theorem} \label{lem:finite-discrete-full-coverage} Let the dynamical system in \eqref{eqn:gen_model} have a responsive platform action. Let \Cref{ass:independent_noise} hold. Fix a time step $\bT\geq2$ and let the auditor observe $R_{\bK=1}$. Then, \begin{enumerate} \item if the system exhibits $\bM=2$ consumption shocks prior to time $\bT$, the steerability of consumption $\steer_\bT(u, u')$ is identifiable for any $u, u' \in \R^p$.
\item if the system exhibits $\bM<2$ consumption shocks prior to $\bT$, then for any $f, g, h, r$, there exists a distribution of $(x_{\bT-2}, u_{\bT-2})$ such that for all $u \neq u'$, the steerability of consumption $\steer_\bT(u, u')$ is unidentifiable. \end{enumerate} \end{theorem} In words, this result states that consumption shocks on two preceding state variables are necessary and sufficient for the auditor to identify the steerability of consumption from observations. A single consumption shock is not enough for identifiability because $h$ can be a deterministic function. Thus, in this case, for any given combination of $x_t$ and $u_{t-1}$, the auditor only ever sees one corresponding value of $u_t$, which means overlap is not satisfied. The second shock is necessary to supply another degree of freedom, which creates enough variation for overlap and makes the steerability of consumption identifiable. This result suggests that auditors should select $\bT$ to be a time step following the occurrence of consumption shocks; e.g., the auditor should use observations following unexpected news events or economic shocks to estimate the steerability of consumption. We note that our analysis crucially relies on accounting for how the noise propagates through the system across multiple time steps. Because the standard causal model in \Cref{fig:general-causal-graph} is time-agnostic, it is not expressive enough to make a claim like \Cref{lem:finite-discrete-full-coverage}. The two main advantages of our approach are that a) \Cref{ass:expressive-control} is an assumption on the design of the platform action which can be verified with enough knowledge of the platform, and b) we allow the platform action to be deterministic in its inputs, a setting which subsumes many practical ML-driven systems.
This stands in contrast to typical overlap assumptions, which are often unverifiable and de facto require explicit (and potentially unnatural) exogenous variation on the platform action. \section{Exploiting longer rollouts for identifiability in the linear model} \label{sec:linear-model} In practice, an auditor may have access to longer rollouts of observations ($\bK > 1$). A natural question is whether they can exploit this information to make it easier to estimate the steerability of consumption. In this section, we investigate this question in the linear setting, leaving the general setting for future work. More specifically, we will instantiate our model \eqref{eqn:gen_model} as follows: \begin{align}\label{eqn:lin_dynamics} \begin{split} f(x) &\defeq A x \qquad g(u) \defeq B u \\ h(x) &\defeq C x \qquad r(u) \defeq D u, \end{split} \end{align} where $A\in \R^{d, d}, B\in\R^{d, p}, C\in \R^{p, d}, D\in \R^{p,p}$. The linear dynamics admit a clean characterization of the tradeoff between rollout length and conditions for identifiability. Linear state dynamics are certainly a strong assumption, but they have proven to be a useful approximation in control theory---e.g., quadrotors can be effectively controlled with a linear controller (e.g., a proportional-integral (PI) controller) relying on a linear state dynamics model \citep{BouabdallahNoSi04}. In this linear setting, identifying the steerability of consumption reduces to identifying the matrix $B$, namely because $\steer_\bT(u,u') = B (u'-u)$. We will again consider identifiability under consumption shocks. However, for the linear case, a weaker definition suffices\footnote{To show that full-support implies full-span, apply \Cref{lem:positive-q-density} to the function $h(x) = a^\top x$.}.
\begin{definition}[Fully-spanning consumption shock] \label{ass:noise-span} We say there is a fully-spanning consumption shock at time $t$, if $\noise_t$ is such that for all vectors $a \in \R^{d}$ with $a \neq 0$, $a^\top \noise_{t}$ is almost surely not a constant. \end{definition} We will also replace the responsive platform action assumption (\Cref{ass:expressive-control}) with a full-rank condition on the linear system. \begin{definition}[Full-row-rank platform action] \label{ass:row-rank} For a given $\bM \geq 2$, we say the platform has a full-row-rank platform action over a span of $\bM$ steps if $C$ and $D$ are such that the matrix $[DC, \ldots, D^{\bM-1}C]$ has full row rank. \end{definition} In the linear setting, a full-row-rank platform action over a span of $\bM=2$ steps is equivalent to a responsive platform action (\Cref{ass:expressive-control}). \Cref{ass:row-rank} serves to generalize \Cref{ass:expressive-control} beyond the $\bK = 1$ setting of \Cref{sec:identifiability}. This generalization turns out to be the crucial piece for characterizing the benefits of observing longer rollouts, which we formalize in the following result. The proof can be found in \Cref{proof:thm:identifiabile-five-tuple}. \begin{theorem} \label{thm:identifiabile-five-tuple} Consider the dynamical system in \eqref{eqn:gen_model} with linear functions $f, g, h, r$ defined in \eqref{eqn:lin_dynamics}. Let \Cref{ass:independent_noise} hold. Fix a time step $\bT\geq \bK+1$ and let the auditor observe iid samples of $R_\bK$. Let there be a fully-spanning consumption shock at time step $\bT-\bK$. Then, \begin{enumerate} \item[a)] if $\bK =1$, then for any $A, B, C, D$, there exists a distribution over $(x_{\bT - 2}, u_{\bT-2})$ such that $\steer_\bT(u, u')$ is unidentifiable.
\item[b)] if $\bK \geq 2$, then full-row-rank platform action over the span of $\bK$ steps is sufficient for identifiability of $\steer_\bT(u, u')$ for any $u, u'$. \item[c)] if $\bK \geq 2$, $x_{\bT - \bK - 1} = u_{\bT - \bK -1} = 0$, and $\noise_t = 0$ for $t \geq \bT - \bK + 1$, then full-row-rank platform action over the span of $\bK$ steps is necessary for identifiability of $\steer_\bT(u, u')$ for any $u, u' $. \end{enumerate} \end{theorem} \Cref{thm:identifiabile-five-tuple} fully characterizes the tradeoff between identifiability, length of the observed rollout, and rank conditions on the platform dynamics matrices in the linear setting. Summarizing briefly, one consumption shock is not enough to identify the steerability of consumption from only observations of $R_{\bK=1}$---just like in the general setting---but one consumption shock is enough to identify steerability of consumption from observations of $R_{\bK \geq 2}$ in the linear setting. Moreover, as $\bK$ gets larger, the rank assumptions required become easier to satisfy, allowing for more poorly conditioned dynamical systems to be identifiable. Thus, our linear dynamical system model enables us to take advantage of observing longer sequences of interactions between consumer and platform, ultimately making it easier to identify the steerability of consumption. \section{Estimation from finite samples} \label{sec:estimators} The previous sections concerned identifiability---whether an auditor can estimate the steerability of consumption with infinite observations. In practice, the auditor will only have access to a finite number of observations. To this end, we propose two finite-sample estimators of the steerability of consumption. We introduce the \twostageregression estimator which leverages the structure of our data generation model and is reminiscent of double machine learning \citep{ChernozhukovChDeDuHaNeRo17}.
This estimator can be applied if observations of $R_{\bK=2}$ are available. We also outline a non-parametric estimator based on the adjustment formula \Cref{eqn:adjustment-formula} that only requires observations of $R_{\bK=1}$. This estimator is also applicable to the standard causal model in \Cref{fig:general-causal-graph}, though at the cost of being less tailored to the time-aware model we propose. The analysis of the second estimator can be found in Appendix~\ref{sec:adjustment-estimator}. \label{sec:double-ml} \renewcommand{\loss}{\ell} The \twostageregression estimator assumes that the auditor has iid observations of $R_{\bK=2}$. The estimator is always well defined, even when the overlap conditions needed for theoretical guarantees do not hold. We will analyze this estimator in the linear setting from Section~\ref{sec:linear-model} and without loss of generality, we set $\bT=3$. Our results can be generalized to settings where $f, g, h, r$ are from a non-linear function class (e.g., via Rademacher complexity and VC-dimension arguments), but we focus on the simple linear setting for the sake of clarity. In particular, for the remainder of this section, assume data is generated according to the dynamical system \Cref{eqn:gen_model} with functions $f,g,h,r$ defined in \Cref{eqn:lin_dynamics}. We let $x_t\kth , u_t\kth $, $\noise_t\kth $ denote the $k$th observations of $x_t$, $u_t$, and $\noise_t$ respectively. Let $X_t \in \R^{d, n}$, $U_t \in \R^{p, n}$, and $E_t \in \R^{d, n}$ be matrices that comprise the $n$ samples of $x_t$, $u_t$, and $\noise_t$ respectively. The \twostageregression estimator is defined as $\what B$, where \begin{align*} \what{C} &\defeq \argmin_{C \in \R^{p, d}} \frac{1}{2n} \lfro{U_1 - C X_1}^2 \\ \what{H} &\defeq \argmin_{H \in \R^{d, d}} \frac{1}{2n} \lfro{X_2 - H X_1}^2 \\%= X_2X_1^\top (X_1X_1^\top )\inv\\ \what{B} &\defeq \argmin_{B \in \R^{d, p}} \frac{1}{2n} \lfro{X_3 - \what{H} X_2 - B(U_2 - \what{C} X_2)}^2. 
\end{align*} The intuition behind why this estimator works comes from the following relationship: $x_3 - (A+BC)x_2 = B(u_2 - Cx_2)$. We first estimate $H:=(A+BC)$ and $C$ using $\hat H$ and $\hat C$ respectively. Then, we regress $x_3 - \hat H x_2$ against $u_2 - \hat C x_2$ to get an estimate of $B$. Recall that knowing $B$ is sufficient to estimate the steerability of consumption $\steer_\bT(u,u')$ for any $u,u'$, as $\steer_\bT(u,u') = B(u' - u)$. We need the following assumption to be satisfied in order to present our convergence result for this estimator. \begin{assumption}[$\rho$-Bounded System Dynamics] \label{ass:rho-bound} The linear dynamical system specified by \eqref{eqn:gen_model} and \eqref{eqn:lin_dynamics} has $\rho$-Bounded System Dynamics if $\opnorm{A+BC} \leq \rho {\sigma_{\min{}} (DC)}$. \end{assumption} To understand this assumption, consider the quantity $\frac{\ltwo{x_2 - \noise_2} }{ \ltwo{u_2}}$: this is the ratio between the magnitude of the state and platform action after one time step of evolution, ignoring noise and assuming the system starts from equilibrium $x_{0} = u_{0} = 0$. Because $\frac{\ltwo{x_2 - \noise_2} }{ \ltwo{u_2}} \leq \frac{\opnorm{A+BC} \ltwo{x_1}}{\sigma_{\min{}} (DC)\ltwo{x_1}}$, having $\rho$-Bounded System Dynamics ensures that the magnitude of state and platform actions are of the same scale. We will use the notation $\kappa_A $ to denote the condition number of a matrix $A$ and $\scov_1$ to denote the sample covariance of $\noise_1$, defined as \[\kappa_A \defeq \frac{\sigma_{\max}(A) }{ \sigma_{\min}(A)}\quad \text{and} \quad \scov_1 \defeq \frac{1}{n}\sum_{k=1}^n \noise_1\kth (\noise_1\kth)^\top. \] We now provide a convergence result for the \twostageregression estimator of $B$ in \Cref{thm:doubleml-linear-conv-rate}; the proof can be found in \Cref{proof:thm:doubleml-linear-conv-rate}. For simplicity, we let $\noise_3 = 0$; our analysis can be extended to handle settings where $\noise_3 \neq 0$.
\begin{theorem}\label{thm:doubleml-linear-conv-rate} Consider the dynamical system in \eqref{eqn:gen_model} with $x_{0} = u_{0} = 0$ and $\noise_3 = 0$, with functions $f, g, h, r$ defined in \eqref{eqn:lin_dynamics}, and with full-row-rank platform action over the span of $\bK=2$ steps. Let the auditor observe $n$ iid samples of $R_{\bK=2}$. Let $\E \ltwo{\noise_2}^2 = \sigma_2^2 d$, and \Cref{ass:rho-bound} hold. Let $\mc{G}$ denote the event where $X_1 X_1^\top $ is invertible. If $\E\left[\kappa_{\scov_1}^2\lambda_{\min{}} (\scov_1)^{-1}\right] \leq \tau_1$, then \begin{align*} \frac{1}{pd}\E \left[\lfro{\what{B} - B}^2 \mid \mc{G}\right] &\leq \frac{\sigma_2^2 \rho^2 \kappa_{DC}^2 \tau_1}{n}. \end{align*} In the case where $p=d$, if $\opnorm{\E\left[\scov_1\inv\right]} \leq \tau_2$ we have that \begin{align*} \frac{1}{d^2}\E \left[\lfro{\what{B} - B}^2\mid \mc{G}\right] &\leq \frac{\sigma_2^2 \rho^2 \tau_2}{n}. \end{align*} \end{theorem} We note that the rank condition on $DC$ (\Cref{ass:row-rank}) in this result is the same as the rank condition from the identifiability result in the linear setting (\Cref{thm:identifiabile-five-tuple}). The $\tau_1,\tau_2$ conditions are a bit technical, but they essentially just require $\noise_1$ to be well behaved. To illustrate, consider a simple Gaussian noise example. Suppose $\noise_1 \sim \normal(0, \sigma_1^2 I_d)$ and $\noise_2 \sim \normal(0, \sigma_2^2 I_d)$ are drawn independently and $p=d$. We then have $\E \ltwo{\noise_2}^2 = \sigma_2^2 d$. For $n\geq d$, $\scov_1$ is almost surely invertible. $(X_1X_1^\top / \sigma_1^2)\inv$ has an inverse Wishart distribution and thus $\E[\scov_1\inv] = \frac{n}{(n-d-1) \sigma_1^2} I_d$ for $n > d+1$. \Cref{thm:doubleml-linear-conv-rate} gives us $\E \lfro{\what{B} - B}^2 \leq \frac{d^2\sigma_2^2 \rho^2}{(n-d-1) \sigma_1^2},$ which scales roughly like the standard linear regression error rate.
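As an illustration, here is a minimal NumPy sketch of the \twostageregression{} estimator on synthetic data from the linear dynamics \eqref{eqn:lin_dynamics}. The matrices below are hypothetical and deliberately well conditioned so that $DC$ has full rank (i.e., the rank condition of \Cref{ass:row-rank} holds); this is a sketch under those assumptions, not our experimental code:

```python
import numpy as np

rng = np.random.default_rng(0)
d = p = 4
n = 20_000

# Hypothetical, well-conditioned system matrices (so DC has full rank).
A = 0.2 * rng.standard_normal((d, d))
B = rng.standard_normal((d, p))
C = np.eye(p)          # h(x) = Cx
D = 0.5 * np.eye(p)    # r(u) = Du, so DC = 0.5 * I is well conditioned

# n iid rollouts R_{K=2} from x_0 = u_0 = 0 with consumption shocks
# xi_1, xi_2 ~ N(0, I) and xi_3 = 0, matching the theorem's setup.
E1 = rng.standard_normal((d, n))
E2 = rng.standard_normal((d, n))
X1 = E1                       # x_1 = A*0 + B*0 + xi_1
U1 = C @ X1                   # u_1 = C x_1 + D*0
X2 = A @ X1 + B @ U1 + E2
U2 = C @ X2 + D @ U1
X3 = A @ X2 + B @ U2          # xi_3 = 0

def regress(Y, Z):
    """Least squares for M in Y ~ M Z (row-wise linear regression)."""
    return np.linalg.lstsq(Z.T, Y.T, rcond=None)[0].T

C_hat = regress(U1, X1)                             # first stage: u_1 on x_1
H_hat = regress(X2, X1)                             # estimates H = A + BC
B_hat = regress(X3 - H_hat @ X2, U2 - C_hat @ X2)   # second stage

rel_err = np.linalg.norm(B_hat - B) / np.linalg.norm(B)
```

Consistent with \Cref{thm:doubleml-linear-conv-rate}, the error of $\what{B}$ in this sketch shrinks at roughly the parametric $1/\sqrt{n}$ rate as the number of observed rollouts grows.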
\section{Empirical investigations} \subsection{Case study: price elasticity of demand}\label{sec:experiments} We apply our model to the task of estimating the price elasticity of demand (PED) from time series data. Estimating the PED is an example of estimating steerability of consumption in the sense that we are interested in how the price (platform action) affects the demand (consumption). We use an avocado time series dataset \citep{kaggle-avocado} that consists of biweekly measurements of the prices of avocados and the amount of avocados purchased by region in the US from 2015 to 2018. For a week $t \in [N]$, $u_t$ corresponds to the logged average avocado price, and $x_t$ corresponds to the logged number of avocados purchased. Additional details can be found in \Cref{sec:additional-detail-exp}. We posit the following: \begin{align*} x_t = \tilde f(z_{t-1}) + g(u_{t-1}), \end{align*} where $z_{t-1}$ denotes the set of confounding variables that we adjust for, which we will specify shortly. In this model, the PED is defined as $\nabla g$. This quantity is a curve if the function $g$ is non-linear; however, in this section, we will assume that $g$ is linear, which reduces the problem of estimating the PED to one of estimating a scalar. \paragraph{Varying the adjustment set to characterize overlap violations.} Our primary focus in this section is to investigate whether the Markovian assumption that our model posits on the system dynamics actually mitigates overlap violations. To do this, we vary the size of the confounding set to measure overlap violations, as well as variance and bias of different estimators. We look at a sliding window over the data $\{(x_{t-\bK}, \ldots, x_{t}, u_{t-\bK}, \ldots, u_{t-1})\}_{t=\bK}^{N}$. We treat these samples as the iid observations of $R_\bK$ that the auditor observes. We will use $u_{t-1}$ as the treatment variable, $z_{t-1} \defeq (x_{t-\bK}, \ldots, x_{t-1})$ as the confounders, and $x_{t}$ as the outcome.
We will vary $\bK$---the size of the confounding set---to explore how it affects estimation. \paragraph{Empirical setup.} We will analyze three estimators: the adjustment formula estimator, random forest double ML (RF-DML), and linear regression double ML (LR-DML). The adjustment formula estimator relies on computing \eqref{eqn:adjustment-formula} on discretized platform action and consumption variables. The discretization is important to ensure overlap over confounder and treatment variables, as the adjustment formula estimator is not well defined without overlap. In particular, let $\{\mc{Z}_\gamma\}_\gamma, \{\mc{U}_\beta\}_\beta$ denote discretizations of the confounders $z_{t-1}$ and platform action $u_{t-1}$, and let $\beta(u)$ be such that $u \in \mc{U}_{\beta(u)}$. We define the adjustment formula estimator as: \begin{align*} \hat{x}(u) \defeq \sum_{\gamma} \left[ \frac{\sum_t x_{t+1}\bindic{z_t \in \mc{Z}_\gamma, u_t \in \mc{U}_{\beta(u)} }}{\sum_t \bindic{z_t \in \mc{Z}_\gamma, u_t \in \mc{U}_{\beta(u)} }} \right] \frac{\sum_t \bindic{z_t \in \mc{Z}_\gamma }}{n}. \end{align*} Detailed discussion and theoretical guarantees regarding the adjustment formula can be found in \Cref{sec:adjustment-estimator}. We discretize the logged price into two buckets: $(-0.479, 0.131], (0.131, 0.683]$ and the logged demand into two buckets $(14.539, 15.014], (15.014, 15.837]$. After using the adjustment formula estimator to estimate the effect of price on demand, we use it to assign predicted demands to all of the prices observed in the dataset. We then use linear regression to estimate the slope of the relationship between predicted demand and price---this is what we refer to as the adjustment formula estimate of the PED. This approach is motivated by methods suggested by \citet{PetersenPoGrWaVa12}.
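A minimal sketch of this estimator over already-discretized samples (the bucket labels and data are toy stand-ins for the discretization described above); it returns \texttt{None} when an occupied confounder stratum has no sample in the intervention bucket, i.e., when $\hat x(u)$ is undefined:

```python
from collections import defaultdict

def adjustment_estimate(samples, u_bucket):
    """samples: (z_bucket, u_bucket, next_x) triples, already discretized.

    Returns the adjustment-formula estimate of the outcome under the
    intervention bucket, or None when some occupied confounder stratum
    contains no sample matching the intervention (an overlap violation).
    """
    n = len(samples)
    by_z = defaultdict(list)
    for z, u, x in samples:
        by_z[z].append((u, x))
    total = 0.0
    for rows in by_z.values():
        matched = [x for u, x in rows if u == u_bucket]
        if not matched:
            return None  # undefined term: no overlap in this stratum
        total += (sum(matched) / len(matched)) * (len(rows) / n)
    return total
```

Dropping every sample of the intervention bucket from one occupied stratum makes the estimate undefined, which is exactly how the overlap violations reported below arise.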
The double machine learning approach \citep{ChernozhukovChDeDuHaNeRo17} first uses half of the training data to residualize the confounders out of the treatment and effect. For LR-DML, the residualizing procedure uses linear regression; for RF-DML, the residualizing procedure uses a random forest model. Then, in the second step, both RF-DML and LR-DML use the other half of the training data to perform a slightly modified version of linear regression---discussed in \citet{ChernozhukovChDeDuHaNeRo17}---on the residualized treatment and residualized effect. The slope of this estimated line is the estimated PED. \begin{table}[t] \begin{center} \resizebox{0.7\columnwidth}{!}{ \begin{tabular}{@{}lllll@{}} \toprule & \thead{Price\\ (Intervention $u$)} & \thead{Estimated Effect\\ on demand} & \thead{Fraction of\\ undefined terms} & \thead{Probability mass\\ of undefined terms} \\ \midrule \multirow{2}{*}{\bK=1} & High & 14.95 & 0 / 2 & 0.0\% \\ & Low & 15.11 & 0 / 2 & 0.0\% \\ \midrule \multirow{2}{*}{\bK=3} & High & 14.96 & 0 / 8 & 0.0\%\\ & Low & 15.11 & 0 / 8 & 0.0\%\\ \midrule \multirow{2}{*}{\bK=5} & High & N/A & 5 / 31 & 4.6\%\\ & Low & 15.10 & 0 / 31 & 0.0\%\\ \midrule \multirow{2}{*}{\bK=7} & High & N/A & 36 / 89 & 15.1\%\\ & Low & N/A & 16 / 89 & 9.0\%\\ \midrule \multirow{2}{*}{\bK=9} & High & N/A & 73 / 145 & 25.9\%\\ & Low & N/A & 40 / 145 & 19.7\%\\ \midrule \end{tabular}} \end{center} \caption{Adjustment formula estimated effects on avocado demand for price interventions. } \label{fig:causal-eff-ped-table} \end{table} \paragraph{Importance of shrinking adjustment set for overlap.} We report what the adjustment formula estimator estimates for a discretized treatment $u$ in \Cref{fig:causal-eff-ped-table}. A ``Low'' price in the treatment column corresponds to the logged price bucket $(-0.479, 0.131]$. A ``High'' price corresponds to $(0.131, 0.683]$.
The ``Fraction of undefined terms'' column corresponds to the number of $\gamma$ values where $\sum_t \bindic{z_t \in \mc{Z}_\gamma} > 0$ and $\sum_t \bindic{z_t \in \mc{Z}_\gamma, u_t \in \mc{U}_{\beta(u)} } = 0$ over the total number of values of $\gamma$ where $\sum_t \bindic{z_t \in \mc{Z}_\gamma} > 0$. If ``Fraction of undefined terms'' is non-zero, then $\hat x(u)$ is not well defined. We write N/A when this occurs. The entries of the ``Probability mass of undefined terms'' column are equal to $\sum_{\gamma} \frac{\sum_t \bindic{z_t \in \mc{Z}_\gamma}}{n} \bindic{\sum_t \bindic{z_t \in \mc{Z}_\gamma, u_t \in \mc{U}_{\beta(u)} } = 0 }$. We can see that as $\bK$ gets larger, the number of undefined estimates, the relative fraction of undefined values, and the mass of said values get larger. This preliminary analysis already suggests that there are overlap issues as $\bK$ gets larger. \paragraph{Effect of shrinking adjustment set on estimator variance.} Next, we bootstrap the adjustment formula estimator and two double ML estimators. We find that the number of confounders heavily affects the bootstrapped variance of the PED estimators, suggesting that the Markovian modeling assumption (i.e., setting $\bK=1$) used by our theory is also useful in practice. We report the predicted PED for all of the estimators in \Cref{fig:avocado-PED} (left). For each estimator, we bootstrap the dataset 40 times to form confidence intervals. We report the standard deviation of the bootstrapped estimates in \Cref{fig:avocado-PED} (right). We see that the variance of the adjustment formula estimator increases as the number of confounders increases. The RF-DML and LR-DML variance curves are fairly stable with respect to $\bK$, suggesting that our Markovian assumption does not affect the variance of those estimators by much.
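As a reference point for the procedures compared above, here is a schematic of the LR-DML estimator (sample splitting plus linear residualization; plain OLS on the residuals stands in for the modified regression of \citet{ChernozhukovChDeDuHaNeRo17}, and all names are illustrative):

```python
import numpy as np

def lr_dml_slope(Z, u, x, rng):
    """Residualize treatment u and outcome x on confounders Z with
    linear regression fit on one half of the data, then regress the
    held-out residualized outcome on the held-out residualized
    treatment.  Returns the estimated slope (here, the PED)."""
    n = len(u)
    idx = rng.permutation(n)
    fit, held = idx[: n // 2], idx[n // 2:]
    Zf = np.column_stack([Z[fit], np.ones(len(fit))])   # add intercept
    Zh = np.column_stack([Z[held], np.ones(len(held))])
    coef_u = np.linalg.lstsq(Zf, u[fit], rcond=None)[0]
    coef_x = np.linalg.lstsq(Zf, x[fit], rcond=None)[0]
    ru = u[held] - Zh @ coef_u  # residualized treatment
    rx = x[held] - Zh @ coef_x  # residualized outcome
    return float(ru @ rx / (ru @ ru))
```

On synthetic data where the outcome depends on the treatment with a known slope plus linear confounder effects, this recovers the slope up to sampling noise.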
\begin{figure}[t] \centering \includegraphics[width=0.43\linewidth]{figures/estimatedpedfigwCI.pdf} \includegraphics[width=0.43\linewidth]{figures/varfig.pdf} \caption{(left) Bootstrapped estimates of PED with 95\% confidence intervals. (right) Standard deviation of each estimator. } \label{fig:avocado-PED} \end{figure} \paragraph{Effect of shrinking adjustment set on estimator bias.} Stronger assumptions enable identifiability, but they come at the price of potential modeling errors. We have motivated our Markovian assumption theoretically, and now we want to understand how well it reflects reality. We use the bootstrapping technique proposed by \citet{PetersenPoGrWaVa12} for testing the bias of our estimators, which we describe now. Let $\Psi$ be the estimator of the PED we are testing, and $\Psi_a$ be the adjustment formula estimator of the PED. Further, let $y^\bK$ denote the avocado dataset for sequences of length $\bK$, and let $Y^\bK$ denote a bootstrapped sample constructed from $y^\bK$. We plot an empirical estimate of \begin{align}\label{eqn:bootstrap-bias} \E[\Psi(Y^\bK)] - \Psi_a(y^\bK) \end{align} using 40 bootstrap samples with confidence intervals in \Cref{fig:avocado-PED-bias} (left). We see that the adjustment formula and LR-DML estimators have small bias for small values of $\bK$, and all estimators have larger bias for large values of $\bK$. We also plot, in \Cref{fig:avocado-PED-bias} (right), the estimated bias defined in \eqref{eqn:bootstrap-bias} but with $\Psi_a$ replaced with $\Psi$. We see that the bias still increases as $\bK$ gets larger, suggesting that more confounders also increase the bias of the estimator. \begin{figure}[t] \centering \includegraphics[width=0.43\linewidth]{figures/biasfig.pdf} \includegraphics[width=0.43\linewidth]{figures/controlregtypebiasfig.pdf} \caption{(left) Bias defined in \eqref{eqn:bootstrap-bias}. (right) Bias defined in \eqref{eqn:bootstrap-bias} with $\Psi_a$ replaced with $\Psi$.
} \label{fig:avocado-PED-bias} \end{figure} Our experiments suggest that our Markovian assumption (i.e., $\bK=1$) does mitigate overlap issues while still accurately modeling reality. We believe the increase (with $\bK$) in bias and variance of the estimators is caused by overlap issues; as $\bK$ gets larger, the dimension of the confounders gets larger, making overlap harder to satisfy. \begin{figure}[t] \centering \includegraphics[width=0.3\linewidth]{figures/synthetic_spectrum_histogram_Sigma_1_seed=2.pdf} \includegraphics[width=0.3\linewidth]{figures/synthetic_spectrum_histogram_Sigma_2_seed=2.pdf}\\ \includegraphics[width=0.3\linewidth]{figures/synthetic_spectrum_histogram_Sigma_3_seed=2.pdf} \includegraphics[width=0.3\linewidth]{figures/synthetic_spectrum_histogram_Sigma_6_seed=2.pdf} \caption{Histograms of eigenvalues of $\Sigma_t$ as defined in \Cref{sec:synthetic_experiments}.} \label{fig:synthetic_spectrum} \end{figure} \subsection{Synthetic experiments} \label{sec:synthetic_experiments} We analyze the dynamical system \eqref{eqn:gen_model} with linear dynamics \eqref{eqn:lin_dynamics} and with independent Gaussian noise acting as consumption shocks $\noise$ on the states. We show how the conditioning of the problem evolves over time and how the presence of more consumption shocks in past time steps makes the steerability of consumption easier to estimate. We let $\noise_t \eqd \normal(0, I)$ for all $t$, starting from $x_{0} = u_{0} = 0$. We consider the symmetric case where $d = p$. To generate $B$, we sample a random matrix $W$ in $\R^{d, n}$ for $n \gg d$ with independent standard Gaussians as its entries, and we set $B = W W^\top / n$. We repeat this process to generate $A$ and $D$. This way of generating our dynamics matrices ensures the matrices are well conditioned. We generate $C$ in the same way, except setting $W \in \R^{d, r}$ for $r < d$, making $C$ rank $r$ instead of rank $d$. We set $d = 100$, $n = 2000$, and $r = 80$.
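As a reduced-dimension sketch of this setup (smaller $d$, $n$, $r$ than in our experiments, so it runs instantly), the following generates the dynamics matrices as described and tracks the rank of the covariance $\Sigma_t$ of the stacked vector $(x_t, u_t)$ under its joint linear update:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, r = 6, 200, 4  # shrunk from d = 100, n = 2000, r = 80

def gram(rows, cols):
    # W W^T / cols with W standard Gaussian; well conditioned when cols >> rows.
    W = rng.normal(size=(rows, cols))
    return W @ W.T / cols

A, B, D = gram(d, n), gram(d, n), gram(d, n)
C = gram(d, r)  # built from W in R^{d x r}, r < d, so C has rank r

# Stack (x_t, u_t) into one linear update and iterate the covariance.
J = np.block([[A, B], [C @ A, C @ B + D]])
M = np.vstack([np.eye(d), C])

Sigma = np.zeros((2 * d, 2 * d))
ranks = []
for t in range(1, 7):
    Sigma = J @ Sigma @ J.T + M @ M.T
    ranks.append(int(np.linalg.matrix_rank(Sigma)))
print(ranks)  # rank of Sigma_t for t = 1, ..., 6
```

At these dimensions, the rank of $\Sigma_t$ typically climbs from $d$ at $t=1$ to the full $2d$ within a few steps, mirroring the eigenvalue histograms in \Cref{fig:synthetic_spectrum}.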
For this system, we can explicitly write down how the covariance matrix of $(x_t, u_t)$, denoted $\Sigma_t$, evolves. Namely, from the dynamics \begin{gather*} \begin{bmatrix} x_t\\ u_t \end{bmatrix} = J \begin{bmatrix} x_{t-1}\\ u_{t-1} \end{bmatrix} + M \varepsilon_t \quad \text{where} \\ \quad J \defeq \begin{bmatrix} A & B\\ CA & CB+D \end{bmatrix}\qquad M \defeq \begin{bmatrix} I\\ C \end{bmatrix}, \end{gather*} we can deduce that \begin{align*} \Sigma_t = J \Sigma_{t-1} J^\top + MM^\top = \sum_{k=0}^{t-1}(J^k)MM^\top (J^k)^\top . \end{align*} We now plot the histogram of the eigenvalues of $\Sigma_t$ for a random system that we generated. The key property to look for is whether $\Sigma_t$ is full rank. Indeed, the steerability of consumption---in this case $B$ because the system is linear---is identifiable from observations of $(x_t, u_t, x_{t+1})$ if and only if $\Sigma_t$ is full rank. To see why this is true, suppose $\Sigma_t$ is low rank and let $v$ be in the null space of $\Sigma_t$. Letting $G \defeq [A, B]$, $z \defeq [x_t^\top, u_t^\top]^\top$, and $\mathbf{1}$ denote the all-ones vector of appropriate dimension, we have that \begin{align*} x_{t+1} = G z = (G + \mathbf{1} v^\top) z - \mathbf{1} v^\top z \eqd \ (G + \mathbf{1} v^\top) z, \end{align*} meaning that $G$ and $G + \mathbf{1} v^\top$ could both have generated the observed distribution. If $\Sigma_t$ is full rank, then linear regression will be able to recover $B$. We note that in this system, $\rank DC = r = 80 < d$ and $\rank C = r = 80 < d$. We believe there is an equivalence between the system in this section and the system from \Cref{thm:identifiabile-five-tuple} because of linearity, even though the settings are different---one consumption shock and full observation of each rollout (i.e., observations $R_\bK$) in the theory versus multiple consumption shocks and one timestep of observation (i.e., observations of $R_{\bK = 1}$) in this section.
We are not able to prove this equivalence, but we provide some empirical evidence supporting this conjecture. \Cref{thm:identifiabile-five-tuple} suggests that observing $(x_2, u_2, x_3)$ is not sufficient for identifiability, as $DC$ does not have full row rank. This is consistent with the eigenvalue histogram of $\Sigma_2$ in \Cref{fig:synthetic_spectrum} as there are still $0$ eigenvalues. However, since in this system $\rank [DC, D^2C] = 100 = d$, \Cref{thm:identifiabile-five-tuple} suggests that observing $(x_3, u_3, x_4)$ is sufficient for identifiability. This is also consistent with the eigenvalue histogram of $\Sigma_3$ in \Cref{fig:synthetic_spectrum}, as all eigenvalues are bounded away from $0$ at that time step. Moreover, we see that the eigenvalues get larger as more time passes: e.g., the eigenvalue mass of $\Sigma_6$ is further to the right of the eigenvalue mass of $\Sigma_3$ in \Cref{fig:synthetic_spectrum}. This suggests that more consumption shocks over more time steps make the observations better conditioned, likely making the steerability of consumption easier for the auditor to estimate in practice; e.g., the condition number terms in \Cref{thm:doubleml-linear-conv-rate} will be smaller. \section*{Acknowledgements} The authors would like to thank Michael M\"uhlebach for stimulating discussions on the project, and Saminul Haque for helpful technical discussions surrounding \Cref{lem:positive-q-density}. This work was supported by the T\"ubingen AI Center. Gary Cheng acknowledges support from the Professor Michael J. Flynn Stanford Graduate Fellowship.
\section{Introduction} \label{intro} In this paper, we describe the University of Alberta systems for the task of classifying multi-word expressions (MWEs) in context as either \emph{idiomatic} or \emph{literal} \cite{tayyarmadabushi2022}. Each instance in the data includes an MWE (e.g., {\em closed book}), its language, and its context, composed of the three surrounding sentences. We participate in both the zero-shot and one-shot settings. While the exact definitions of the two key terms are not stated explicitly in the task description\footnote{\url{https://sites.google.com/view/semeval2022task2-idiomaticity}}, it is suggested that {\em idiomatic} is synonymous with {\em non-compositional}. The Pocket Oxford Dictionary defines {\em idiomatic} as ``not immediately comprehensible from the words used,'' and {\em literal} as ``taking words in their basic sense.'' Therefore, we adopt the following MWE {\em compositionality criterion} \[ \mbox{literal} \equiv \mbox{compositional} \equiv \neg\,\mbox{idiomatic} \] where the three terms are considered to be Boolean variables. In addition, the shared task considers all proper noun MWEs (e.g., {\em Eager Beaver}) as literal. Our goal is to explore the idea that glosses and translations of word senses can help decide whether the meaning of a given MWE occurrence is compositional. Based on the above-stated compositionality criterion, this in turn could facilitate idiomaticity detection. In particular, we hypothesize that at least one of the words in any idiomatic expression is used in a non-standard sense. Following the intuition that a traditional word sense disambiguation (WSD) system can only identify senses that are included in a given sense inventory, we propose two methods that indirectly detect non-standard senses by leveraging either glosses or translations of senses from such an inventory.
\begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{defbert_input.png} \caption{An example of {defBERT}{} input.} \label{fish_story} \end{figure} Our gloss-based method follows from the intuition that the meaning of a given MWE occurrence is related to any of the existing sense glosses of its component words {\em only if the expression is compositional}. Therefore, the addition of the glosses to the context of the expression should help the classifier in deciding whether the MWE is used in a literal or idiomatic sense. We implement this method by adding the glosses of each sense of each individual word, retrieved from a lexical knowledge base, to the input to a neural classifier which fine-tunes multilingual BERT \cite[mBERT;][]{devlin2019} for the idiomaticity detection task. We refer to this method as {defBERT}{} (Figure~\ref{fish_story}). Our translation-based method follows from the observation that compositional expressions are typically translated word-for-word (``literally''), which implies that each content word and its translation should have the same meaning. Therefore, each such multilingual word pair should share a multi-synset in a multi-wordnet \cite{hauer2020set}. The procedure is as follows: (1) translate the MWE in context; (2) word-align the source and target sentences; (3) lemmatize and POS-tag the source MWE; and (4) for each lemma in the MWE, search for a multi-synset that contains both the lemma and its translation. This method is unsupervised, and we refer to it as {MT}{}. Our results provide evidence that leveraging lexical resources is beneficial for idiomaticity detection. In particular, our gloss-based method, when combined with a type-based {{UNATT}} heuristic, is among the top-scoring submissions in the one-shot setting. The heuristic is based on the observation that some MWEs are inherently idiomatic or literal, regardless of their context, which is confirmed by our analysis of the development set annotations. 
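The multi-synset test at the heart of the {MT}{} method can be sketched as follows; the mini multi-wordnet here is a hypothetical stand-in for BabelNet and OMW, and the aligned word pairs are assumed to come from the translation and alignment steps above:

```python
# Toy multi-wordnet: each multi-synset is a set of (language, lemma)
# pairs expressing one concept.  These entries are illustrative only;
# the real method queries BabelNet and Open Multilingual WordNet.
MULTI_SYNSETS = [
    {("en", "wedding"), ("it", "matrimonio"), ("it", "nozze")},
    {("en", "anniversary"), ("it", "anniversario")},
    {("en", "book"), ("it", "libro")},
]

def literally_translated(word, translation, src="en", tgt="it"):
    """A word is literally translated if it shares at least one
    multi-synset with its aligned translation."""
    return any((src, word) in s and (tgt, translation) in s
               for s in MULTI_SYNSETS)

def classify_mwe(aligned_pairs, require_all=True):
    """If require_all, every content word must be literally translated
    for a literal classification; otherwise one such word suffices."""
    literal = [literally_translated(w, t) for w, t in aligned_pairs]
    ok = all(literal) if require_all else any(literal)
    return "literal" if ok else "idiomatic"
```

Under this toy resource, the pairs ({\em wedding}, {\em matrimonio}) and ({\em anniversary}, {\em anniversario}) each share a multi-synset, so the MWE is classified as literal; a pair absent from the resource pushes the classification toward idiomatic.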
\section{Related Work} \label{relwork} Early attempts to represent idiomatic MWEs involve treating idiomatic phrases as individual tokens and learning corresponding static embeddings \citep{mikolov2013phrases}. However, \citet{cordeiro2016} show that the effectiveness of this method is limited by data sparsity for longer idiomatic expressions. Furthermore, \citet{shwartz2019} and \citet{garcia2021} conclude that idiomaticity is not yet accurately represented even by contextual embedding models. \citet{madabushi2021astitchinlanguagemodels} create a new manually labeled dataset containing idiomatic and literal MWEs, and propose a method based on a pre-trained neural language model. Regarding the use of lexical translations for idiomaticity detection, \newcite{moiron2006} measure semantic entropy in bitext alignment statistics, while \citet{salehi-etal-2014-detecting} predict compositionality with an unsupervised method that uses Wiktionary translations, synonyms, and definitions. We extend these ideas by applying machine translation, and consulting a multilingual lexical knowledge base. Our prior work has already demonstrated the utility of lexical translations for various semantic tasks, including prior SemEval tasks on predicting cross-lingual entailment \cite{hauer2020semeval} and contextual synonymy detection \cite{hauer2021semeval}, as well as word sense disambiguation \cite{luan2020}, and homonymy detection \cite{hauer2020ohpt, habibi2021}. \section{Methods} \label{methods} In this section, we describe our methods for idiomaticity detection. \subsection{Baseline mBERT} \label{baseline} We re-implemented the mBERT classifier baseline \citep{devlin2019} following the methodology of \citet{madabushi2021astitchinlanguagemodels}. The model takes the context sentence and the relevant MWE as input and outputs a binary label indicating the idiomaticity of the target MWE.
The input sequence is constructed by concatenating the MWE to the end of the context sentence after the special [SEP] token. It is important to note the differences between our re-implementation and the official baseline provided by the task organizers. In the official baseline, the organizers add the target MWE as an additional feature in the one-shot setting but not in the zero-shot setting. Furthermore, the organizers include the sentences preceding and succeeding the target sentence only in the zero-shot setting. In our re-implementation, we add the target MWE and exclude the preceding and succeeding sentences in both zero-shot and one-shot settings. \subsection{Gloss-based Method} \label{glossbert} Our first method, {defBERT}{}, extends the baseline model by adding the glosses of all possible senses of each individual word in the target MWE to the classifier's input. The intuition is that the addition of the glosses to the input should help the classifier decide if the meaning of the target MWE can be deduced from the definitions of the individual words, i.e., if it is compositional. In the example in Figure~\ref{fish_story}, the disparity between the context in which \emph{fish story} appears, and the glosses of the various senses of the words \emph{fish} and \emph{story} indicates that the MWE is idiomatic in this context. The intuition for this method is that non-native speakers can identify idiomatic expressions, provided they understand the standard meanings of the words which comprise them. Suppose that the vocabulary of a non-native speaker covers most of the essential words necessary to understand a language, but not idiomatic expressions. Even if the speaker cannot deduce the meaning of an idiomatic expression in context, they can guess that the expression was used in an idiomatic sense because individual words of this expression do not make sense in the given context. 
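The input construction can be sketched as simple string assembly; the field order is one plausible rendering of the layout in Figure~\ref{fish_story}, and the gloss lookup is a placeholder for the BabelNet/OMW queries:

```python
def defbert_input(context, mwe, gloss_lookup):
    """Assemble the classifier input: the context sentence, then the
    target MWE, then the glosses of every sense of each word in the
    MWE, all separated by [SEP].  gloss_lookup maps a word to a list
    of gloss strings (a stand-in for the lexical knowledge base)."""
    fields = [context, mwe]
    for word in mwe.split():
        fields.extend(gloss_lookup.get(word, []))
    return " [SEP] ".join(fields)
```

For the {\em fish story} example, the assembled sequence interleaves the context, the MWE, and one gloss per sense of {\em fish} and {\em story}, and the disparity between context and glosses is what the classifier can exploit.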
\subsection{Translation-based Method} \label{translation} \begin{comment} \begin{algorithm}[t] \begin{algorithmic}[1] \scriptsize \Require{source sentence and MWE} \Ensure{binary classification of MWE as idiomatic/literal} \Statex \Function{BabelNetQuery}{mwe} \If{properNoun(mwe)} \State \Return{Literal} \EndIf \If {DoubleQuotes(mwe)} \State \Return{Idiomatic} \EndIf \State compositional := 0 \For{each language l} \State words := 0 \For{each word w in mwe} \State t = translation(L, w) \If{sharedSynsetsOMW(w, t) or sharedSynsetsBN(w, t)} \State words = words + 1 \EndIf \EndFor \If{$words > threshold1$} \State compositional:= compositional + 1 \EndIf \EndFor \If{$compositional >= threshold2$} \State \Return{Literal} \EndIf \State \Return{Idiomatic} \EndFunction \end{algorithmic} \caption{{MT}{}} \label{translation_pcode} \end{algorithm} \end{comment} Our {MT}{} method is based on translating the target MWE in context, and leverages multilingual semantic resources. The intuition behind this method is that idioms are generally specific to a particular language, and, being non-compositional, their meanings cannot be conveyed simply by translating the individual words. Under this hypothesis, to classify an MWE as literal or idiomatic, we need only determine whether the words in the MWE are translated literally. We do this by first identifying the translation of each word via alignment. We then consult a multilingual wordnet, or \emph{multi-wordnet}, a lexical knowledge-base which organizes words in two or more languages into multilingual synonym sets, or \emph{multi-synsets}. Each multi-synset corresponds to a unique concept, and contains the words which express that concept. Given a word in context, and a translation of that word in that context, we consider the word to be literally translated if it shares at least one multi-synset with its translation. 
For example, consider an instance in which the MWE \emph{wedding anniversary} is translated into Italian as {\em anniversario di matrimonio}. Our method checks if either of the translation pairs (\emph{wedding}, \emph{matrimonio}) and (\emph{anniversary}, \emph{anniversario}) share a multi-synset in a multi-wordnet. We test two versions of this method: in {MT(all)}, this condition must be satisfied for all content words in the MWE; in {MT(one)}, detecting a literal translation for one word is sufficient to classify the MWE as literal. In addition, multiple languages of translation may be considered. \subsection{Additional Heuristics} \label{heuristics} The annotation methodology for this shared task includes proper nouns in the literal class. We therefore use a part-of-speech tagger to detect proper nouns; if any word in the MWE is tagged as a proper noun, {MT}{} automatically classifies it as literal without further consideration. In the one-shot setting, we also use a type-based heuristic which we refer to as {UNATT}{}. The intuition behind this heuristic is that certain MWEs are inherently idiomatic or literal, regardless of the context that they appear in. If the training data has no example of an MWE in a particular class, the heuristic exploits this fact as evidence that the MWE should always be classified as the opposite, attested class. For example, this heuristic always classifies {\em life vest} as idiomatic and {\em economic aid} as literal, as these are the only classes in which these MWEs appear in the training data. In practice, since {UNATT}{} returns no classification if the training set contains instances that belong to either class, this heuristic must be used in combination with another method. \subsection{Combination} \label{combination} Our {defBERT}{} and {MT}{} methods take different views of the data, with the former using a neural language model and gloss information, and the latter using translation and a lexical knowledge base. 
We therefore consider combining the two methods. In this approach, we independently apply {defBERT}{} and {MT}{} to a given instance. If the two methods agree, we return the agreed-upon classification; if they disagree, we return a default class, which is a tunable parameter. As with the other methods, we can combine this method with the {UNATT}{} heuristic in the one-shot setting. \section{Experiments} \label{experiments} We now describe our experiments, including the tools and resources, the experimental setup, the results, and a discussion of our findings. \subsection{Lexical Resources} \label{lexres} As lexical resources for sense translations and glosses, we use two different multi-wordnets: BabelNet \cite[BN;][]{navigli2010, navigli2012}, and Open Multilingual WordNet \cite[OMW;][]{bond2013}. The {defBERT}{} method and the alignment tool access BN 4.0 via the provided Java API\footnote{\url{https://babelnet.org/guide}}. For the {MT}{} method, we access BN 5.0 via the HTTP API. We access OMW via the NLTK interface \cite{nltk}. For the {MT}{} method, we consider the translation of a word to be literal if it shares a multi-synset with the word in either BN or OMW. For lemmatization and POS tagging, we use TreeTagger\footnote{We use the pre-trained models for English, Portuguese, and Galician from \url{https://cis.uni-muenchen.de/~schmid/tools/TreeTagger.}} \cite{schmid2013}. Both BN and OMW contain English glosses for most concepts, but the availability of glosses in other languages varies. In particular, OMW contains no Portuguese or Galician glosses. With BabelNet, we experimented with two techniques: using English glosses for all languages, and using glosses from the language of the instance, i.e., the source language, when available. We refer to these variants as {\glossbert{}-BN-en} and {\glossbert{}-BN-src}, respectively. Since {defBERT}{} uses a multilingual pre-trained language model, it can seamlessly handle input from multiple languages.
Furthermore, because of the relatively poor coverage of Galician in the lexical resources (only 54\% of glosses are available in this language), we attempt to leverage its close relationship to Portuguese by processing Galician as if it was Portuguese. \subsection{Translation and Word Alignment} We translate the context sentence of each MWE with Google Translate API\footnote{\url{https://cloud.google.com/translate}}. We translated English instances into Italian, and Portuguese/Galician instances into English, because of the good coverage of these languages in our resources. We also conducted development experiments with translation into less related languages, as well as with combining translation information from multiple languages, but we observed no consistent improvements. We align each input sentence with its translation using BabAlign \cite{luan2020}, which consults BabelNet to refine the alignments generated by a base aligner, FastAlign \cite{dyer2013}. To further improve the alignment quality, we augment the set of sentence-translation pairs with additional parallel data from the OpenSubtitles parallel corpus \cite{lison2016}. We note that the English-Galician bitext is less than 1\% of the size of the other two bitexts. \subsection{mBERT and {defBERT}} We fine-tune the mBERT-based models using the binary classification objective on the labeled training dataset. In the zero-shot setting, the MWEs in the training data are disjoint from those in the development and test splits, while in the one-shot setting, all MWEs in the development and test splits have at least one example in the training data. In the zero-shot setting, we trained the models only on the zero-shot training set, while in the one-shot setting, we trained the models on both training sets. 
In particular, we fine-tuned the models for 20 epochs with a maximum sequence length of 256, a learning rate of 2e-5, and a per device batch size of 16, using the HuggingFace Transformers library.\footnote{\url{https://huggingface.co}} \begin{table*}[t] \centering \small \begin{tabular}{|c|l|cc|cc|cccc|cccc|} \hline & & \multicolumn{4}{c|}{Development results} & \multicolumn{8}{c|}{Test results} \\ \cline{3-14} & & \multicolumn{2}{c|}{Zero-Shot} & \multicolumn{2}{c|}{One-Shot} & \multicolumn{4}{c|}{Zero-Shot} & \multicolumn{4}{c|}{One-Shot}\\ \cline{3-14} & & EN & PT & EN & PT & EN & PT & GL & ALL & EN & PT & GL & ALL \\ \hline 0 & Baseline & 66.2 & 63.9 & 87.0 & 86.7 & 70.7 & 68.0 & 50.7 & 65.4 & 88.6 & 86.4 & 81.6 & 86.5 \\ \hline 1 & mBERT{} & 74.6 & 62.5 & 85.7 & 85.9 & \textbf{75.1} & 63.3 & \textbf{61.1} & 68.2 & 90.0 & 83.6 & 86.6 & 87.7 \\ \hline 2 & \glossbert{}-BN-src{} & \textrm{75.5} & 64.8 & 85.4 & 86.7 & 72.0 & 66.4 & 57.8 & 67.2 & \textbf{95.7} & 88.5 & 88.9 & 92.2 \\ \hline 3 & \glossbert{}-BN-en{} & 75.3 & \textrm{66.4} & 87.6 & 86.6 & 73.4 & \textbf{68.4} & 59.7 & \textbf{69.5} & 95.0 & \textbf{89.3} & 87.9 & 91.8 \\ \hline 4 & \glossbert{}-OMW-en{} & 74.8 & 64.5 & 87.1 & 84.5 & 71.0 & 65.6 & 56.5 & 66.5 & 92.4 & 86.7 & 88.5 & 90.1 \\ \hline 5 & \attested{} + \glossbert{}{} & - & - & {\bf 92.0} & {\bf 87.7} & - & - & - & - & 94.5 & 89.2 & \textbf{91.2} & \textbf{92.4}\\ \hline 6 & \owt{} + \glossbert{}{} & {\bf 77.3} & 64.9 & 84.5 & 78.0 & 68.2 & 54.6 & 56.3 & 62.7 & 85.9 & 70.6 & 78.2 & 80.6 \\ \hline 7 & \awt{} + \glossbert{}{} & 66.4 & {\bf 69.2} & 73.7 & 78.0 & 65.4 & 62.5 & 54.3 & 62.1 & 80.3 & 73.8 & 73.9 & 77.3\\ \hline \end{tabular} \caption{ The macro F1 scores on the development and test datasets. Our official submissions are in rows 4-7. Where not otherwise specified, {defBERT}{} is in the OMW-en configuration. 
} \label{tab:results} \end{table*} \subsection{Development experiments} Table~\ref{tab:results} contains the results of the following models: the official mBERT-based baseline (row 0) as reported by the shared task organizers, our re-implementation of the official baseline (row 1), three variants of the {defBERT}{} method, which is based on {mBERT} (rows 2-4), {defBERT}{} combined with the {UNATT}{} heuristic (row 5), and the {MT}{} method combined with {defBERT}{} (rows 6-7)\footnote{After the test output submission deadline, we discovered errors in our implementation of the {MT}{} methods. We report our original results for consistency with the official results.}. For rows 1-5, we average the macro F1 score obtained over five runs with random initializations. Our experiments with {defBERT}{} explored the impact of adding glosses to the {mBERT} model, including the source and language of the glosses. With English glosses retrieved from BabelNet, {defBERT}{} improves the total score over the {mBERT} model in the zero-shot setting, especially on Portuguese. The results also suggest that English glosses may be preferable to glosses in the source language, a finding which could simplify work on lower-resourced languages, where glosses may not be available. Combining the predictions of the mBERT-based models with the {UNATT}{} heuristic improves the one-shot F1 scores in all cases (row 5 vs. row 4). The {MT}{} methods achieve the best results when combined with {defBERT}{} on the development set in the zero-shot setting: {MT(one)} for English (row 6), and {MT(all)} for Portuguese (row 7). This demonstrates the utility of lexical translation information for idiomaticity detection when annotated training data is not available. \subsection{Error Analysis} We found that the {defBERT}{} method performs slightly better, by about 1\% F1, on literal instances than on idiomatic instances in the one-shot setting.
In other words, the method is less likely to make an error when given a literal instance. We speculate that this is explained by the model's consistent classification of proper nouns as literal expressions. Indeed, a proper noun is identified incorrectly in only one instance. The fraction of idiomatic vs.\ literal instances is 39\% in English and 56\% in Portuguese. For the {MT}{} method, a large number of errors were caused by a literal translation of an idiomatic expression by Google Translate, even though the corresponding expression is not meaningful in the target language. For example, ``she was different, like a \underline{closed book}'' is translated into Italian as ``era diversa, come un \underline{libro chiuso}'' even though the Italian translation does not carry the meaning of a person being secretive. In a few cases, the translation would simply copy the source-language expression, yielding output which is not fully translated. In addition, some correct lexical translations are not in our lexical resources. Finally, a number of incorrect idiomatic predictions could be traced to word alignment errors, especially in cases of many-to-one alignments (e.g., {\em bow tie} correctly translated as {\em papillon}). Manual analysis performed on the development set corroborates our hypothesis that most multi-word expressions are inherently idiomatic (e.g., {\em home run}) or literal (e.g., {\em insurance company}). Only about one-third of the expressions are ambiguous in the sense that they can be classified as either class depending on the context (e.g., {\em closed book}). Our judgements are generally corroborated by the gold labels, with the exception of proper nouns, which are consistently marked as literal. The {{UNATT}} heuristic (Section~\ref{heuristics}), which is based on this observation, obtains a remarkable 98.3\% precision and 55.8\% recall on the set of 739 instances in the development set.
\subsection{Test set results} The results on the test set are shown in Table~\ref{tab:results}. Our best results are produced by {\glossbert{}-BN-en} in the zero-shot setting, and the combination of {defBERT}{} with the {UNATT}{} heuristic in the one-shot setting. The latter also obtains the best result on Galician, which demonstrates its applicability to low-resource languages, as this method only requires English glosses. The results of combining {defBERT}{} with {MT}{} are well below the baseline, which may be due to a different balance of classes in the test set, omissions in lexical resources, and/or errors in our initial implementation. Another possible reason is that modern idiomatic expressions are often translated word-for-word (``calqued''), especially from English into other European languages. Examples from the development set include {\em flower child, banana republic}, and {\em sex bomb}. \section{Conclusion} \label{conclusion} Our top result ranks third overall in the one-shot setting. The corresponding method is applicable to a wide variety of languages. It takes advantage of the ability of neural language models to seamlessly incorporate textual information such as glosses, even if it is expressed in a different language. These results strongly support our hypothesis that the gloss information of individual words can improve idiomaticity detection. Moreover, our development results support the hypothesis that non-compositional expressions can be identified through their translations. These findings conform with our prior work on leveraging translation for various semantic tasks (Section~\ref{relwork}). We hope that this work will motivate further investigation into the role of multilinguality in semantics. \section*{Acknowledgments} This research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Alberta Machine Intelligence Institute (Amii).
\subsection{Intrusion Detection} A great deal of progress has been made on network intrusion detection systems that monitor network traffic for predefined patterns and alert system administrators when potentially problematic network traffic is detected. Previous work focuses on machine learning algorithms and frameworks for intrusion detection~\cite{mukherjee1994network, lee1998data, bass2000intrusion}. Additionally, systems like Bro and Snort~\cite{bro, roesch1999snort} are lightweight intrusion-detection tools that take steps toward resolving issues with complex deployment and high cost. TWIAD has important applications in network intrusion detection. It allows queries over long time periods and can efficiently store large amounts of data while maintaining insertion performance guarantees. Users can ask for all instances of an IP or subnet over a time range and analyze the connections related to a potentially suspicious or adversarial IP. Efficiently storing and having access to previous connection logs allows network analysts to examine threats and take steps towards recovery. \subsection{Data Visualization} Write-optimized databases for network event tracking can also be used in data visualization efforts on network traffic. Previous work with visualization focuses on the graphical representation of network flows~\cite{krasser2005real}. Combining previous work on pattern detection and visualization with faster packet storage will improve the scope and utility of the representations. \subsection{Write-Optimized Data Structures} Here, we cover write-optimized data structures and their performance bounds. Specifically, we describe the B$^\varepsilon$-tree and the reasons we have chosen it for network event tracking. The best WODS (including the B$^\varepsilon$-tree) sidestep the usual trade-off between read and write performance and can outperform B-trees.
The B-tree is a data structure where internal nodes have variable numbers of children within a predefined range. The elements of the tree are maintained in sorted order at the leaves. Each internal node has keys directing searches to the subtree associated with the query value. B-trees support insertions, deletions, sequential accesses, and point queries in $O(\log_BN)$ time. \smallskip \subsubsection{B$^\varepsilon$ trees} A B$^\varepsilon$-tree is a B-tree with buffers at each node. New insertions take place at the root buffer of the B$^\varepsilon$-tree. When a node's buffer is filled, items are moved from that node's buffer to the buffer of one of its children---this process is referred to as flushing. The algorithms for point and range queries are the same as those in a B-tree, but with a search through the buffer of each internal node on a root-to-leaf path. B$^\varepsilon$-trees have asymptotically better performance than B-trees. For example, consider a B-tree of $N$ elements where each node has $B$ keys of constant size and where the size of keys is far larger than the size of the related data. Such a tree has fanout $B$ and therefore has height $O(\log_BN)$. Therefore, inserts and searches will take $O(\log_BN)$ I/Os. A range query with $k$ results then requires $O(\log_BN+\frac{k}{B})$ I/Os. In contrast, a B$^\varepsilon$-tree has nodes of size $B$. Each internal node of the tree has B$^\varepsilon$ children where $0 < \varepsilon \leq 1$. Each node has a ``pivot key'' for each child, so the keys take up $B^\varepsilon$ space in each node. The remaining $B-B^\varepsilon$ space in each node is used to buffer inserted elements. $\varepsilon$ is a tunable parameter that determines the tree's fanout. The tree's fanout is $B^\varepsilon$ and its height is $O(\log_{B^\varepsilon}N) = O(\frac{1}{\varepsilon}\log_BN)$. Therefore, searches in a $B^\varepsilon$-tree are slower than those in a $B$-tree by a factor of $\frac{1}{\varepsilon}$.
However, whenever a node flushes elements to one of its children, it moves at least $\frac{B-B^\varepsilon}{B^\varepsilon} \approx B^{1-\varepsilon}$ elements. Each element must be flushed $O(\frac{1}{\varepsilon}\log_BN)$ (the height of the tree) times to reach a leaf. Therefore, the amortized cost of an insertion is $O(\frac{1}{\varepsilon B^{1-\varepsilon}}\log_BN)$. Furthermore, range queries cost $O(\frac{1}{\varepsilon}\log_BN + \frac{k}{B})$ I/Os where $k$ is the number of elements returned by the query. We present an example: consider $\varepsilon = 1/2$. Point and range query costs are now $O(\log_BN)$ and $O(\log_BN + \frac{k}{B})$ respectively. Although these are asymptotically the same as the query bounds for B-trees, the insert cost for the $B^\varepsilon$-tree is $O(\frac{\log_BN}{\sqrt{B}})$, an improvement by a factor of $\sqrt{B}$ over traditional B-trees. Additionally, B$^\varepsilon$ trees have much larger nodes than B-trees. Larger nodes improve range query performance because the data is spread over fewer nodes and therefore requires fewer disk accesses to read it in. B-trees must use smaller nodes because every new addition to the database requires that a node be completely rewritten. In contrast, writes are batched in B$^\varepsilon$ trees, allowing their nodes to be much larger than those of a B-tree --- for example, nodes in a B-tree are generally around 4 or 6KB, while a typical node in Tokutek's implementation of a $B^\varepsilon$-tree is 4MB. As a result, the height of a B$^\varepsilon$ tree is not much greater than that of a B-tree on the same data. Therefore, point query performance in a B$^\varepsilon$ tree is comparable to point query performance in a B-tree. For example, consider a key-value store of 1TB of data, with keys of size 128B and records (key+value) of size 1KB --- assume that data is logged and that all updates in the log are periodically applied to the main tree in batch. Assume that a common server has about 64GB of RAM.
First, we examine a B-tree with 4KB nodes given the above situation. The fanout of the tree is 4KB/128B = 32. Even if all of the internal nodes of the tree can fit into RAM, only a small fraction of the 1TB of leaf nodes can be held in cache. Given a sequence of random insertions, most updates will require 2 I/Os --- 1 I/O to read in the target leaf and another to write it back to disk. In contrast, consider a B$^\varepsilon$-tree with branching factor 10 and nodes with size 1MB. Again, all internal nodes can fit in cache, but the leaves must be held on disk. When items are inserted into the tree, they are stored in the tree's root buffer. The root is cached, so this action requires no I/Os. When an internal node becomes full and flushes to a non-leaf child, the data structure requires two writes --- one to update the parent and one to update the child. Since both nodes are cached, no reads are necessary. If an internal node flushes its buffer to a leaf, one read is required to load the leaf into memory. There will be 1TB/1MB=2$^{20}$ leaves. Furthermore, the tree has fanout 10, so its height will be 1+$\log_{10}2^{20} \approx 7$. Therefore, each item is involved in 14 I/Os because it is written and read once at each level of the tree. While it may seem that this performance is worse than that of a B-tree, each flush in the B$^\varepsilon$-tree moves $\sim$1MB/10$\approx$100kB of data, or around 100 items. The data moved in each flush is approximately proportional to the node size divided by the branching factor. Therefore, the amortized cost of flushing an item to a leaf is 14/100. A B-tree requires 2 I/Os for each item, so in our example the B$^\varepsilon$ tree can insert data 2/(14/100) $\approx$ 14 times faster than the equivalent B-tree. Furthermore, this speedup grows as key-value pairs get smaller as in connection log storage. Both the B-tree and the B$^\varepsilon$ tree require a single I/O to read the corresponding leaf in a point query.
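The back-of-the-envelope I/O arithmetic in this example can be checked with a short sketch (Python purely as a calculator; the constants are those of the running 1TB example):

```python
import math

TB, MB, KB = 2**40, 2**20, 2**10
record = 1 * KB          # key+value size from the running example
data = 1 * TB

# B-tree with 4KB nodes: a random insert reads and writes one leaf.
btree_ios_per_insert = 2

# B^eps-tree with 1MB nodes and branching factor 10.
node, fanout = 1 * MB, 10
leaves = data // node                         # 2**20 leaves
height = 1 + math.log(leaves, fanout)         # ≈ 7 levels
ios_per_item = 2 * round(height)              # read+write at each level → 14
items_per_flush = (node // fanout) // record  # ≈ 100 items moved per flush
amortized = ios_per_item / items_per_flush    # ≈ 0.14 I/Os per insert

speedup = btree_ios_per_insert / amortized
print(round(height), ios_per_item, items_per_flush)  # → 7 14 102
```

Using the exact binary sizes, each flush moves about 102 records rather than the rounded 100 in the text, giving a speedup of roughly 14--15x over the B-tree, in line with the figure quoted above.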
However, range queries can be much faster in a B$^\varepsilon$-tree because the B$^\varepsilon$-tree seeks once per leaf, and its leaves are much larger. In our example, the B-tree would need to seek every 4KB whereas the B$^\varepsilon$ tree would seek once every 1MB. B$^\varepsilon$-trees can achieve further improved performance through upserts, an efficient method for updating key-value pairs. If an application wants to update a value associated with some key $k$ in the tree, it inserts a message $(k,(f, \Delta))$ into the tree, where $f$ is some function that can be used to apply the change denoted by $\Delta$ to the old value associated with $k$. This message is inserted into the tree normally. However, when the message is flushed from a node to one of its children $C$, the tree checks whether $C$'s buffer contains the old value $v$ associated with the key $k$. If so, then the tree replaces $v$ with $f(v, \Delta)$ and discards the upsert. If the key $k$ is queried before the function from the upsert message is applied, the B$^\varepsilon$-tree calculates $f(v,\Delta)$ while answering the query --- this does not affect query performance because an upsert for a key $k$ will always be on the path from the root to the leaf containing $k$. Therefore, upserts can improve update speed by orders of magnitude without reducing query performance. \subsubsection{Log-structured Merge Trees} The log-structured merge tree (LSM tree)~\cite{o1996log,sears2012blsm} is another write-optimized data structure. There are many variations on the LSM tree, but generally they have a logarithmic number of indices (data structures, e.g., B-trees) of exponentially increasing size. Once an index at one level fills up, it is flushed and merged into the index at the next largest level. Commercial write-optimized databases often use LSM trees.
Although LSM trees can have the same asymptotic complexity as a B$^\varepsilon$-tree, queries in a na\"{\i}ve implementation of an LSM tree can be slow, as shown in Table~\ref{table:wods}. Steps have been taken to improve the query performance of LSM trees---of note, many implementations of LSM trees now use Bloom filters~\cite{bloom1970space} at each index~\cite{accumulo, hbase, leveldb, chang2008bigtable, lakshman2010cassandra}. Point queries in LSMs with Bloom filters have been reported to improve to $O(\log_BN)$, therefore matching B-tree point query performance. However, Bloom filters do not help with range queries, because the successor of any key may be at any level of the data structure. Furthermore, the viability of Bloom filters degrades with upserts. To compute the result of a query, all relevant upserts must be applied to the key-value pair. If there are many possible upserts at each level of the LSM tree, searches need to be performed at each of those levels. LSMs only match B-tree query performance in specific cases, while B$^\varepsilon$-trees match B-tree query times in general. Since range queries are common in network event detection systems, we chose B$^\varepsilon$-trees as the underlying data structure for TWIAD because of the poor range query performance of LSMs.
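The upsert mechanism described above can be modeled with a toy sketch. This is a deliberately simplified single-buffer model, not the ft-index implementation: a real B$^\varepsilon$-tree holds pending messages in per-node buffers along each root-to-leaf path, but the lazy-application idea is the same.

```python
# Toy model of B^ε-tree upserts: writes append a message (key, f, delta);
# messages are folded into stored values lazily, at flush time or query time.

leaf = {"k1": 10}   # values stored at the leaves
buffer = []         # pending upsert messages, oldest first

def upsert(key, f, delta):
    buffer.append((key, f, delta))  # cheap buffered write, no leaf I/O

def flush():
    # On flush, each pending upsert is applied to the stored value.
    for key, f, delta in buffer:
        leaf[key] = f(leaf.get(key), delta)
    buffer.clear()

def query(key):
    # A query applies any pending upserts for the key on the fly.
    v = leaf.get(key)
    for k, f, delta in buffer:
        if k == key:
            v = f(v, delta)
    return v

add = lambda v, d: (v or 0) + d
upsert("k1", add, 5)
print(query("k1"))  # upsert visible before any flush → 15
flush()
print(leaf["k1"])   # → 15
```

The point of the real structure is that the message is written with a single cheap buffered insert, while the read-modify-write is deferred and batched with other flushed work.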
\begin{table} \centering \caption{Asymptotic I/O costs of various write-optimized data structures} \label{table:wods} \resizebox{\columnwidth}{!} { \begin{tabular}{ c c c c c } \hline Data Structure & Insert & \parbox[t]{1.5cm}{Point Query\\w/o Upserts} & \parbox[t]{1.5cm}{Point Query\\w/ Upserts} & Range Query \\ \hline \hline B-tree & $\log_BN$ & $\log_BN$ & $\log_BN$ & $\log_BN+\frac{k}{B}$ \\ [2ex] LSM & $\frac{\log_BN}{\varepsilon B^{1-\varepsilon}}$ & $\frac{\log^2_BN}{\varepsilon}$ & $\frac{\log^2_BN}{\varepsilon}$ & $\frac{\log^2_BN}{\varepsilon}+\frac{k}{B}$ \\[2ex] LSM+BF & $\frac{\log_BN}{\varepsilon B^{1-\varepsilon}}$ & $\log^2_BN$ & $\frac{\log^2_BN}{\varepsilon}$ & $\frac{\log^2_BN}{\varepsilon}+\frac{k}{B}$ \\[2ex] B$^\varepsilon$-tree & $\frac{\log_BN}{\varepsilon B^{1-\varepsilon}}$ & $\frac{\log_BN}{\varepsilon}$ & $\frac{\log_BN}{\varepsilon}$ & $\frac{\log_BN}{\varepsilon}+\frac{k}{B}$ \\[2ex] \hline \end{tabular} } \smallskip \end{table} \subsection{Streaming Databases} A great deal of progress has also been made in the related field of stream processing engines (SPE). While data stream managers have important applications in network event tracking, most of the literature is focused on developing query schemes and algorithms~\cite{abadi2005design, babu2001continuous, golab2003issues,carney2002monitoring}. Some of the issues identified by the streaming community such as approximate query results, updating query results over time, and dynamic query modification are outside of the scope of this paper. Our vision for TWIAD\@\xspace is that of a write-optimized streaming database for network event tracking. Connection logs are constantly fed into the database --- the main goal is to process a large volume of data consistently over time while maintaining ingestion guarantees. Streaming research has important applications to network event tracking and write-optimization. 
We are currently focusing on optimizing ingestion and leave the integration of results from streaming research as future work to optimize queries. \subsection{Network Event Tracking Databases} Previous development has also been done on large databases for network traffic monitoring. Network traffic monitoring solutions differ from traditional relational database management systems (RDBMSs) in the following ways: \begin{enumerate} \item The data and storage must be stream-oriented. Fast ingestion and sequential access are important, while fast random access and concurrency control are not. \item Since network traffic data is usually only used a few times (or even once), load time is a significant cost. Therefore, the database must maintain data integrity over long periods of time while still loading streams of data into the database. \item Network connection logs are aggregations of many small records with fields a few bytes wide, so per-tuple overhead in RDBMSs can lead to a prohibitive cost in space. \end{enumerate} Prior work on network event tracking databases has focused on developing query languages for streams. Systems like Gigascope~\cite{cranor2003gigascope} and Tribeca~\cite{sullivan1998system} propose query languages for complex analysis. The designers of these systems note that performance for a stream database is measured by how high the input stream(s) rate can be before it begins dropping data, not how fast the database can answer queries. Specifically, Cranor et al. observe that ``touching disk kills performance---not because it is slow but because it generates long and unpredictable delays throughout the system.'' Our system, TWIAD\@\xspace, is designed to specialize in ingesting data quickly and predictably. \subsection{Write-Optimized Intrusion Detection Systems} Some IDS companies are beginning to offer services that include write-optimized databases.
For example, Countertack, an IDS software company, uses big data analytics from Cloudera, which is built on Hadoop and HBase~\cite{countertack}. Similarly, Google's Stenographer does simple packet capture and uses LevelDB for storage~\cite{steno}. It is designed to write packets to disk quickly, but is not well suited to reading back large amounts of packets. Finally, Hogzilla is another open source IDS supported by Snort, Apache Spark, and HBase~\cite{hogzilla}. While these systems are important steps towards using write optimization in network event tracking, we wanted to build a lighter-weight, simpler tool that processes logs while still leaving the user freedom to determine what kind of analytics they want to do on the data. \section{Introduction}\label{sec:intro} \input{intro} \section{Background And Related Work}\label{sec:background} \input{background} \section{Requirements and Design}\label{sec:rnd} \input{rnd} \section{Results}\label{sec:results} \input{results} \section{Applications}\label{sec:applications} \input{applications} \section{Future Work}\label{sec:futurework} \input{futurework} \section{Conclusions}\label{sec:conclusion} \input{conclusion} \section*{Acknowledgments} \addcontentsline{toc}{section}{Acknowledgments}\label{sec:acks} We thank the engineers at Tokutek for developing and open-sourcing their B$^\varepsilon$-tree implementation that TWIAD is built on. Furthermore, we would like to thank Cindy Phillips, Jonathan Berry, Michael Bender, and Prashant Pandey for their insights and thoughtful discussions. This work was supported by the Laboratory Directed Research and Development Program at Sandia National Laboratories. \bibliographystyle{IEEEtran} \subsection{Ingestion Performance} As a basic metric to monitor our system's ingestion rate, we recorded the amount of time it takes to insert a constant number of rows (e.g., 100,000). That is, we keep track of the time that it takes to insert {\em each} 100,000 rows.
From this we calculate the average number of inserts per second over that period. Figure~\ref{fig:insertionRate} shows the average insertion rate plotted against the total number of database entries inserted for both the fractal tree and the traditional $B$ tree. We can see that the database built on the fractal tree index (B$^\varepsilon$-tree) has a sustained insertion rate of 20,000 rows per second for over 1 billion rows. In contrast, the ingestion rate for the database built on BerkeleyDB (B-tree) is similar to that of the fractal tree index in the beginning but quickly and severely drops off to about 100 inserts per second. \begin{figure} \centering \caption{Insertion rate vs. number of inserts} \includegraphics[width=\columnwidth]{images/22graph.pdf} \label{fig:insertionRate} \end{figure} \iffalse We can see how initially the modest RAM on our system still helps fast performance. Nevertheless, as our B$^\varepsilon$-tree grows to become an out-of-memory data structure we see our ingestion rate converge to a consistent and stable tail of approximately 95.0 inserts per second. We also observe how the ingestion rate has periodic highs and lows. We attribute this to the growth of the B$^\varepsilon$-tree. As a new level is created, entries in the tree are flushed to disk and the insertion rate temporarily increases. This slows down as the buffers fill up at each level, eventually causing another flush or a new level of the tree to be created. In spite of this fluctuation we can see how our performance comes to a stable asymptote that provides reliable long term performance. \subsection{Database Insertion Time} As a part of our efforts to do performance analysis, we instrumented many of the key points of data processing. One key metric is the time spent doing an insertion into our B$^\varepsilon$-tree. At each insertion we track the time spent for each specific insertion.
Figure~\ref{fig:transactionTime} shows this transaction time (averaged over ten thousand inserts) plotted against cumulative insert count. Here again we see our performance slowly stabilize to a steady, dependable asymptote. We note our stable-state transaction time of 8.96 ms. Our overall insertion rate of 95 events per second equates to one insert every 10.52ms. The 1.56ms difference is because of overhead in reading and processing the log data. \begin{figure} \centering \caption{Transaction time vs. number of inserts} \includegraphics[width=\columnwidth]{images/waddellTrans.png} \label{fig:transactionTime} \end{figure} \fi \subsection{Query Response Time} We tested a few queries around the size range that a typical user might execute. In our experience, a simple point query came back in well under one second on a database with around 121.5 million entries. Figure~\ref{fig:queryResp} shows the time that it took a database using a fractal tree index with about 121.5 million entries to answer range and point queries. We observe that reasonably sized queries that network analysts generally expect return in well under a second. Queries are fast because write-optimization makes indexing more efficient. As shown in Table~\ref{table:querycomp}, queries for IP addresses in our database generally return in under a second, while the same search using grep takes multiple seconds or even minutes. Query times on the order of tenths of a second will allow network analysts to more quickly discover and respond to network events. This is a significant improvement over a common strategy of searching through connection logs.
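The gap between grep-style scans and indexed queries can be illustrated with an in-memory toy, where a Python dict stands in for the on-disk index. This is only a sketch of the asymptotics (linear scan vs.\ keyed lookup), not of TWIAD itself:

```python
# Compare a grep-like linear scan against a keyed index lookup on a
# synthetic connection log. The log contents are fabricated test data.
import random
import time

random.seed(0)
log = [(f"10.0.{random.randrange(256)}.{random.randrange(256)}", ts)
       for ts in range(200_000)]          # (ip, timestamp) rows

# Build an index keyed on IP, analogous to keying the B^ε-tree on IP.
index = {}
for ip, ts in log:
    index.setdefault(ip, []).append(ts)

target = log[123][0]

t0 = time.perf_counter()
scan = [ts for ip, ts in log if ip == target]   # touches every row
t_scan = time.perf_counter() - t0

t0 = time.perf_counter()
hit = index[target]                             # touches only matching rows
t_idx = time.perf_counter() - t0

assert scan == hit  # same answer, very different amount of work
```

The scan cost grows with the size of the whole log, while the lookup cost grows only with the size of the result set, which is the same qualitative behavior as the grep vs.\ TWIAD comparison in Table~\ref{table:querycomp}.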
\begin{figure} \centering \caption{Query Performance Measurements (121.5 M entries in DB)} \includegraphics[width=\columnwidth]{images/query.pdf} \label{fig:queryResp} \end{figure} \begin{table}[h] \begin{center} \caption{Fractal Tree Query vs Grep Performance (121.5 M Database Entries)} \label{table:querycomp} \begin{tabular} {c | c | c | c} IP Queried & Rows Returned & Time (grep, s) & Time (twiad, s)\\ \hline 0.183.158.39 & 3 & 6.242 & 0.030878 \\ 202.178.243.254 & 7697311 & 796.276 & 37.528948 \\ 61.132.23.66 & 600328 & 160.858 & 2.630128 \\ 210.117.64.88 & 15513 & 10.386 & 0.096910 \\ 210.117.64.222 & 20106 & 10.833 & 0.114642 \\ 210.117.64.16 & 21005 & 35.084 & 0.126801 \\ 210.117.64.84 & 21785 & 11.350 & 0.126013 \\ 208.149.123.22 & 543 & 7.605 & 0.040972 \\ 195.158.245.52 & 126186 & 16.253 & 0.570452 \\ 200.42.64.226 & 15661 & 7.372 & 0.082603 \\ \end{tabular} \end{center} \end{table} \iffalse \hxnt{TODO: new query response times vs grep} \begin{table}[h] \begin{center} \caption{Empty Query Times (43M Database Entries)} \label{table:emptyquery} \begin{tabular} {c | c} IP Queried & Time (ms) \\ \hline 0.0.167.1 & 1281 \\ 0.11.80.6 & 1599 \\ 1.11.13.90 & 1074 \\ \end{tabular} \end{center} \end{table} \begin{table}[h] \begin{center} \caption{Empty Query Statistics (43M Database Entries)} \label{table:emptystats} \begin{tabular} {c | c} Average (ms) & Standard Deviation (ms) \\ \hline 1318 & 264 \\ \end{tabular} \end{center} \end{table} \begin{table}[h] \begin{center} \caption{Query Response Times (43M Database Entries)} \label{table:queryresp} \begin{tabular} {c | c | c} IP Queried & Number of Entries Returned & Time (ms) \\ \hline 0.23.167.194 & 1 & 14 \\ 193.231.242.201 & 35 & 192 \\ 0.72.210.159 & 1665 & 192 \\ 24.14.196.41 & 2132 & 222 \\ \end{tabular} \end{center} \end{table} \fi \subsection{Requirements} We designed TWIAD to leverage the performance strengths of B$^\varepsilon$-trees. 
We focused primarily on ingestion performance---the database will answer queries, but most of the computation is spent on inserting events documented in connection logs into the database. \subsection{Software} We used the publicly available B$^\varepsilon$-tree implementation (ft-index) from Tokutek~\cite{ftindex}. Our contribution is a system designed to mediate between network connection data and the underlying B$^\varepsilon$-tree while simultaneously answering queries. We chose to implement this layer in C because the original index was in C and we wanted to maximize performance without going through other languages. One of our design goals for the database is for the index to be ``loosely coupled'' with the IDS. Therefore, we chose to implement it as a tool for processing logs from the IDS rather than integrating it with the IDS. That also means that the IDS can handle bursts, since it is always just logging. The indexer can run in the background at low priority; it can fall behind and then later catch up when the system is less busy. \iffalse Our contribution is a concurrent, tunable system designed to mediate between network connection data and the underlying B$^\varepsilon$-tree. We chose to implement this layer in Go because it is designed for concurrent programming and well-suited to our goals of maximizing ingestion speed. \subsection{System Architecture} We detail the system architecture in Figure~\ref{fig:arch}. The rounded rectangles represent independent pools of routines. The block arrows represent channels, a unique Go construct. Channels are mediums that allow efficient, thread-safe communication between independent routines by carrying messages from one routine to another. Ellipses correspond to single routines. The green rectangle represents Tokutek's implementation of a B$^\varepsilon$-tree.
The administrator of the system interacts with the server via a basic command line interface (CLI) utility which offers options to ingest and monitor directories for network log files. Upon startup, a configurable number of data file parsers, database inserters, and database readers are created and wait for the administrator to choose an input source. Once directories are specified by the administrator, data file handlers are initialized to move the files to a working directory. The file handlers then use the file task channel to pass an entity corresponding to a file, with associated metadata, to the data file parsers. Each data file parser pulls one entity from the channel at a time and reads in the associated file. A data file parser then converts the lines of the file into key-value pairs which are directed into the row channel. Next, each database inserter accepts one key-value pair at a time and inserts it into the database through direct use of the index's API. Once a given file has been completely ingested into the database, it is moved to a completed directory. To query the database, an authorized user forms a query using a separate CLI utility. This program sends a request to the database server which is handled by the connection handler. The connection handler passes the associated connection into the connection channel where it is accepted by one of the database readers. This database reader interprets the request and queries the index using its API to receive the corresponding results. Finally, the database reader returns the results to the client CLI for presentation to the user. The data readers and writers all live within the same process to take advantage of lighter inter-thread mutexes and avoid heavier-weight inter-process locking. Tunable parameters include the total number of threads and the number of inserting, parsing, and reading routines. The channel size can also be limited using a configuration file.
\fi \subsection{Database Design} The database is designed specifically to store network connection logs --- we used logs from the Bro IDS system, a sessionization and intrusion detection tool. Fields from a typical connection log and an example row are provided in \reftab{table:bro}. The database can easily be extended to ingest other types of logs and inputs. We want a query for an IP to return all connection information involving that IP, regardless of whether it was the origin or responder. Therefore, we insert every row in the connection logs twice --- once with the origin and responder IP address and port in the order of the original row as specified in \reftab{table:key}, and again with the origin and responder IP addresses and ports in switched order in the key as detailed in \reftab{table:rkey}. The value is the remaining fields not included in the key. As a result, two database entries are created for each row in a connection log. To indicate which of the entries is ``reversed'' (i.e., the order of the origin and responder ports and addresses was switched), we set the byte isReversed to 1 if the entry was reversed and 0 otherwise. The key and value formats, described in \reftab{table:key} and \reftab{table:value}, are based on the Bro log format. We chose this format in order to query specific IP addresses as well as range over time for a given IP address. Furthermore, we appended the other fields to the key to prevent the possibility of collisions. \input{tables} \subsection{Client Architecture} We designed a client-side CLI tool to relay queries from an authorized user to the server and to receive and display results. The goals of the CLI tool were to easily integrate into users' workflows and respond to queries. Results of the queries are given in some value-separated format for easy user processing. Common queries include searching for connections with a specific IP address or subnet or over some time range.
We provide a few examples of usage: \noindent\texttt{/twiad:twiad-client --ip 18.281.23.9} The first query requests all connections involving a specific IP address over all time stored in the database. \noindent\texttt{/twiad:twiad-client --subnet 13.48.133.201/8 --year} The second query requests all connections involving the specified subnet over the past year. There are flags available to query the past week, month, quarter, and year. \noindent\texttt{/twiad:twiad-client --subnet 13.48.133.201/16 --start 2010:1:2:10:30:5} The third query requests all connections involving the specified subnet starting from the beginning of the time period after the start flag. Times are entered in the format yyyy:mm:dd:hh:mm:ss. For example, the time above is January 2, 2010 at 10:30:05. The tool also allows the user to specify the end of a requested time period with \texttt{--end}.
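Parsing the \texttt{yyyy:mm:dd:hh:mm:ss} time format accepted by the \texttt{--start} and \texttt{--end} flags is straightforward; a minimal sketch (the function name is hypothetical, not part of the tool):

```python
# Minimal parse of the yyyy:mm:dd:hh:mm:ss format used by --start/--end.
# parse_twiad_time is a hypothetical helper for illustration.
from datetime import datetime

def parse_twiad_time(s):
    y, mo, d, h, mi, sec = (int(x) for x in s.split(":"))
    return datetime(y, mo, d, h, mi, sec)

t = parse_twiad_time("2010:1:2:10:30:5")
```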
\section{Introduction} \label{sec:intro} \defA.\arabic{equation}{\arabic{section}.\arabic{equation}} \setcounter{equation}{0} The scalar form factor of the pion, $\Gamma_\pi(t)$, corresponds to the matrix element \begin{equation} \Gamma_\pi(t)=\int d^4 x \,e^{-i(q'-q)x}\langle \pi(q')|\left(m_u \bar{u}(x)u(x)+ m_d \bar{d}(x)d(x)\right) | \pi(q)\rangle~,~~t=(q'-q)^2~. \label{ffdef} \end{equation} Performing a Taylor expansion around $t=0$, \begin{equation} \Gamma_\pi(t)=\Gamma_\pi(0)\left\{1+\frac{1}{6}t\langle r^2\rangle_s^\pi+{\cal O}(t^2)\right\}~, \label{r2pi} \end{equation} where $\langle r^2\rangle_s^\pi$ is the quadratic scalar radius of the pion. The quantity $\langle r^2\rangle_s^\pi$ contributes around 10$\%$ \cite{pipiscat} to the values of the S-wave $\pi\pi$ scattering lengths $a_0^0$ and $a_0^2$ as determined in ref.\cite{pipiscat}, by employing Roy equations and $\chi PT$ to two loops. If one takes into account that this reference gives a precision of 2.2$\%$ in its calculation of the scattering lengths, a 10$\%$ contribution from $\langle r^2\rangle_s^\pi$ is large. Related to that, $\langle r^2\rangle_s^\pi$ is also important in $SU(2)\times SU(2)$ $\chi PT$ since it gives the low energy constant $\bar{\ell}_4$ that controls, at leading order, the departure of $F_\pi$ from its value in the chiral limit \cite{gl83,cd04}. Based on one loop $\chi PT$, Gasser and Leutwyler \cite{gl83} obtained $\langle r^2\rangle_s^\pi=0.55\pm 0.15$~fm$^2$. This calculation was improved later on by the same authors together with Donoghue \cite{dgl90}, who solved the corresponding Muskhelishvili-Omn\`es equations with the coupled channels of $\pi\pi$ and $K\bar{K}$. The update of this calculation, performed in ref.\cite{pipiscat}, gives $\langle r^2\rangle_s^\pi=0.61\pm 0.04$ fm$^2$, where the new results on S-wave I=0 $\pi\pi$ phase shifts from the Roy equation analysis of ref.\cite{acgl01} are included.
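For later reference it is useful to keep track of units in eq.(\ref{r2pi}): the radius is quoted in fm$^2$ while $t$ is in GeV$^2$, so the slope $\langle r^2\rangle_s^\pi/6$ multiplying $t$ requires converting with $(\hbar c)^2$. A quick numerical bookkeeping check for the central value $0.61$~fm$^2$:

```python
# Unit bookkeeping for the Taylor expansion of Gamma_pi(t): convert the
# quadratic scalar radius from fm^2 to GeV^-2 and form the slope <r^2>/6
# that multiplies t. 0.61 fm^2 is the central value quoted in the text.
HBARC = 0.19733            # GeV*fm (hbar*c)

r2_fm2 = 0.61
r2_gev = r2_fm2 / HBARC**2   # ~15.7 GeV^-2
slope = r2_gev / 6.0         # coefficient of t in Gamma(t)/Gamma(0)
```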
Moussallam \cite{m00} employs the same approach and obtains values in agreement with the previous result. One should notice that solutions of the Muskhelishvili-Omn\`es equations for the scalar form factor rely on non-measured $T-$matrix elements or on assumptions about which channels matter. Given the importance of $\langle r^2\rangle_s^\pi$, and the possible systematic errors in the analyses based on Muskhelishvili-Omn\`es equations, other independent approaches are most welcome. In this respect we quote the works \cite{gu91,ou00,bct98}, and those of Yndur\'ain \cite{y04,y05,y06}. These latter works have challenged the previous value for $\langle r^2\rangle_s^\pi$, shifting it to the larger $\langle r^2\rangle_s^\pi=0.75\pm 0.07$~fm$^2$. From ref.\cite{pipiscat} the equations, \begin{equation} \delta a_0^0 = +0.027 \Delta_{r^2}~,~ \delta a_0^2 = -0.004 \Delta_{r^2}~, \end{equation} give the change of the scattering lengths under a variation of $\langle r^2\rangle_s^\pi$ defined by $\langle r^2\rangle_s^\pi=0.61(1+\Delta_{r^2})$~fm$^2$. For the difference between the central values of $\langle r^2\rangle_s^\pi$ given above from refs.\cite{pipiscat,y04}, one has $\Delta_{r^2}=+0.23$. This corresponds to $\delta a_0^0=+0.006$ and $\delta a_0^2=-0.001$, while the errors quoted are $a_0^0=0.220\pm 0.005$ and $a_0^2=-0.0444\pm 0.0010$. This amounts to shifting the central values of the predicted scattering lengths at the level of one sigma. The value taken for $\langle r^2\rangle_s^\pi$ is also important for determining the ${\cal O}(p^4)$ $\chi PT$ coupling $\bar{\ell}_4$. The value of ref.\cite{pipiscat} is $\bar{\ell}_4=4.4\pm 0.2$ while that of ref.\cite{y04} is $\bar{\ell}_4=5.4\pm 0.5$. Both values are incompatible within errors. The papers \cite{y04,y05,y06} have been questioned in refs.\cite{accgl05,ccl}.
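The quoted shifts follow directly from the linearized relations above; the arithmetic can be checked in one line, using the two central values $0.75$ and $0.61$~fm$^2$ from the text:

```python
# Check of the scattering-length shifts: Delta_{r^2} is the relative
# difference between 0.75 and 0.61 fm^2, fed into the linearized relations
# delta a_0^0 = +0.027*Delta and delta a_0^2 = -0.004*Delta from the text.
delta_r2 = 0.75 / 0.61 - 1.0     # ~ +0.23
da00 = +0.027 * delta_r2         # ~ +0.006, vs. error 0.005 on a_0^0
da02 = -0.004 * delta_r2         # ~ -0.001, vs. error 0.0010 on a_0^2
```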
The value of the $K\pi$ quadratic scalar radius, $\langle r^2 \rangle_s^{K\pi}$, obtained by Yndur\'ain in ref.\cite{y04}, $\langle r^2 \rangle_s^{K\pi}=0.31\pm0.06$~fm$^2$, is not accurate, because it relies on old experiments and on a poor parameterization of low energy S-wave I=1/2 $K\pi$ phase shifts that assumes dominance of the $\kappa$ resonance treated as a standard Breit-Wigner pole \cite{opj04}. Furthermore, $\langle r^2\rangle_s^{K\pi}$ was recently fixed by high statistics experiments in an interval in agreement with the sharp prediction of \cite{opj04}, based on dispersion relations (three-channel Muskhelishvili-Omn\`es equations from the $T-$matrix of ref.\cite{opj00}) and two-loop $\chi$PT \cite{bt04}. From the recent experiments \cite{istra,ktev}, one has for the charged kaons \cite{istra} $\langle r^2\rangle_s^{K^\pm \pi}=0.235\pm 0.014\pm 0.007$~fm$^2$, and for the neutral ones \cite{ktev} $\langle r^2\rangle_s^{K_L\pi}=0.165\pm 0.016$~fm$^2$. The prediction of \cite{opj04}, in the isospin limit, is $\langle r^2\rangle_s^{K\pi}=0.192\pm 0.012$~fm$^2$, lying just in the middle of the experimental determinations. A different issue is Yndur\'ain's determination of the pion scalar radius, which rests on sounder grounds and whose (in)correctness is not yet settled. In this paper we concentrate on the approach of Yndur\'ain \cite{y04,y05,y06} to evaluate the quadratic scalar radius of the pion based on an Omn\`es representation of the I=0 non-strange pion scalar form factor. Our main conclusion will be that this approach \cite{y04} and the solution of the Muskhelishvili-Omn\`es equations \cite{dgl90}, with $\pi\pi$ and $K\bar{K}$ as coupled channels, agree with each other if one properly takes into account, for some $T-$matrices, the presence of a zero in the pion scalar form factor at energies slightly below the $K\bar{K}$ threshold. Precisely these $T-$matrices are those used in \cite{y04} and favoured in \cite{y05}.
Once this is considered we conclude that $\langle r^2\rangle_s^\pi=0.63\pm 0.05$~fm$^2$. The contents of the paper are organized as follows. In section 2 we discuss the Omn\`es representation of $\Gamma_\pi(t)$ and derive the expression to calculate $\langle r^2\rangle_s^\pi$. This calculation is performed in section 3, where we consider different parameterizations for experimental data and asymptotic phases for the scalar form factor. Conclusions are given in the last section. \section{Scalar form factor} \label{sec:ff} \defA.\arabic{equation}{\arabic{section}.\arabic{equation}} \setcounter{equation}{0} The pion scalar form factor $\Gamma_\pi(t)$, eq.(\ref{ffdef}), is an analytic function of $t$ with a right hand cut, due to unitarity, for $t\geq 4 m_\pi^2$. Writing a dispersion relation for its logarithm, with the possible zeroes of $\Gamma_\pi(t)$ removed, one obtains the Omn\`es representation, \begin{equation} \Gamma_\pi(t)=P(t)\exp\left[ \frac{t}{\pi} \int_{4m_\pi^2}^\infty \frac{\phi(s)}{s(s-t)}ds \right]~. \label{ffomnes} \end{equation} Here, $P(t)$ is a polynomial made up from the zeroes of $\Gamma_\pi(t)$, with $P(0)=\Gamma_\pi(0)$. In the previous equation, $\phi(s)$ is the phase of $\Gamma_\pi(t)/P(t)$, taken to be continuous and such that $\phi(4m_\pi^2)=0$. In ref.\cite{y04} the scalar form factor is assumed to be free of zeroes and hence $P(t)$ is just the constant $\Gamma_\pi(0)$ (the exponential factor is 1 for $t=0$). Thus, \begin{equation} \Gamma_\pi(t)=\Gamma_\pi(0)\exp\left[ \frac{t}{\pi} \int_{4m_\pi^2}^\infty \frac{\phi(s)}{s(s-t)}ds \right]~. \label{ffomnes2} \end{equation} From which it follows that, \begin{equation} \langle r^2\rangle_s^\pi=\frac{6}{\pi}\int_{4 m_\pi^2}^\infty \frac{\phi(s)}{s^2}ds~. \label{r2omnes1} \end{equation} One of the features of the pion scalar form factor of refs.\cite{dgl90,m00,ou00}, as discussed in ref.\cite{accgl05}, is the presence of a strong dip at energies around the $K\bar{K}$ threshold.
This feature is also shared by the strong S-wave I=0 $\pi\pi$ amplitude, $t_{\pi\pi}$. This is so because $t_{\pi\pi}$ is to a very good approximation purely elastic below the $K\bar{K}$ threshold and hence, neglecting inelasticity altogether in the discussion that follows, it is proportional to $\sin\delta_{\pi} e^{i\delta_\pi}$, with $\delta_\pi$ the S-wave I=0 $\pi\pi$ phase shift. It is an experimental fact that $\delta_\pi$ is very close to $\pi$ around the $K\bar{K}$ threshold, as shown in fig.\ref{figpi}. Therefore, if $\delta_\pi=\pi$ happens before the opening of this channel the strong amplitude has a zero at that energy. On the other hand, if $\delta_\pi=\pi$ occurs after the $K\bar{K}$ threshold, because inelasticity is then substantial, see eq.(\ref{tpipi}) below, there is not a zero but a pronounced dip in $|t_{\pi\pi}|$. This dip can be arbitrarily close to zero if before the $K\bar{K}$ threshold $\delta_\pi$ approaches $\pi$ more and more, without reaching it. \begin{figure}[H] \centerline{\epsfig{file=desfasaje.eps,height=3.5in,width=6.0in,angle=0}} \vspace{0.2cm} \caption[pilf]{\protect \small S-wave $I=0$ $\pi\pi$ phase shift, $\delta_\pi(s)$. Experimental data are from refs.\cite{kaminski,bnl,na48,grayer}. \label{figpi}} \end{figure} \begin{figure}[ht] \centerline{\epsfig{file=uncertainty.eps,height=3.6in,width=7.in,angle=0}} \vspace{0.2cm} \caption[pilf]{\protect \small Left panel: Strong phase $\varphi(s)$, eigenvalue phase $\delta_{(+)}(s)$ and asymptotic phase $\phi_{as}(s)$. Right panel: Integrand of $\langle r^2\rangle_s^\pi$ in eq.(\ref{split}) for parameterization I (dashed line) and II (solid line). For more details see the text. Notice that the uncertainty due to $\phi_{as}(s)$ is much reduced in the integrand.
\label{figpi2}} \end{figure} Because of the Watson final state theorem the phase $\phi(s)$ in eq.(\ref{ffomnes}) is given by $\delta_\pi(s)$ below the $K\bar{K}$ threshold, neglecting inelasticity due to $4\pi$ or $6\pi$ states as indicated by experiments \cite{hyams}. The situation above the $K\bar{K}$ threshold is more involved. Let us recall that \begin{equation} t_{\pi\pi}=(\eta \,e^{2i\delta_\pi}-1)/2i~, \label{tpipi} \end{equation} with $0\leq \eta \leq 1$ the elasticity coefficient; the inelasticity is given by $1-\eta^2$. We denote by $\varphi(s)$ the phase of $t_{\pi\pi}$, required to be continuous (below $4m_K^2$ it is given by $\delta_\pi(s)$). By continuity, close enough to the $K\bar{K}$ threshold and above it, $\eta\to 1$ and then we are in the same situation as in the elastic case. As a result, because of the Watson final state theorem and continuity, the phase $\phi(s)$ must still be given by $\varphi(s)$. For $\delta_\pi(s_K)<\pi$, $s_K=4m_K^2$, $\varphi(s)$ does not follow the increasing trend with energy of $\delta_\pi(s)$ but drops as a result of eq.(\ref{tpipi}), see fig.\ref{figpi2} for $\delta_\pi(s_K)<\pi$. This is easily seen by writing explicitly the real and imaginary parts of $t_{\pi\pi}$ in eq.(\ref{tpipi}), \begin{equation} t_{\pi\pi}=\frac{1}{2}\eta\sin2\delta_\pi+\frac{i}{2}(1-\eta\cos 2\delta_\pi)~. \label{tpipi2} \end{equation} The imaginary part is always positive ($\eta<1$ between the $K\bar{K}$ threshold and 1.1 GeV \cite{hyams}) while the real part is negative for $\delta_\pi<\pi$, but in an interval of just a few MeV the real part turns positive as soon as $\delta_\pi>\pi$, fig.\ref{figpi}. As a result, $\varphi(s)$ passes quickly from values below but close to $\pi$ to the interval $[0,\pi/2]$. This rapid motion of $\phi(s)$ gives rise to a pronounced minimum of $|\Gamma_\pi(t)|$ at this energy, as indicated in ref.\cite{accgl05} and shown in fig.\ref{figpi3}.
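The rapid drop of $\varphi(s)$ across $\delta_\pi=\pi$ described by eq.(\ref{tpipi2}) can be made concrete numerically. With illustrative values (not a fit to data), the phase of $t_{\pi\pi}$ sits just below $\pi$ for $\delta_\pi$ slightly below $\pi$ and falls into $[0,\pi/2]$ as soon as $\delta_\pi$ exceeds $\pi$:

```python
# Behaviour of the phase of t_pipi = (eta*e^{2i delta}-1)/(2i) near
# delta_pi = pi: the imaginary part stays positive while the real part
# (eta/2) sin(2 delta) flips sign at delta = pi, so arg(t_pipi) drops from
# close to pi to below pi/2. Values of eta and delta are illustrative.
import cmath
import math

def t_pipi(delta, eta):
    return (eta * cmath.exp(2j * delta) - 1) / 2j

eta = 0.98
below = cmath.phase(t_pipi(math.pi - 0.05, eta))  # just below pi: phase near pi
above = cmath.phase(t_pipi(math.pi + 0.05, eta))  # just above pi: phase < pi/2
```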
The drop in $\phi(s)$ becomes more and more dramatic as $\delta_\pi(s_K)\to \pi^-$ (with the superscript $+(-)$ indicating that the limit is approached from values above(below), respectively); and in this limit, $\phi(s_K)=\varphi(s_K)$ is discontinuous at $s_K$. This is easily understood from eq.(\ref{tpipi2}). Let us call $s_1$ the point at which $\delta_\pi(s_1)=\pi$ with $s_1>s_K$. Close to and above $s_1$, $\varphi(s)\in [0,\pi/2]$, for the reasons explained above, and $\varphi(s)$ has decreased very rapidly from almost $\pi$ at the $K\bar{K}$ threshold to values below $\pi/2$ just after $s_1$. Then, in the limit $s_1\to s_K^+$ one has $\phi(s_K^-)=\varphi(s_K^-)=\pi$ on the left, while on the right $\phi(s_K^+)=\varphi(s_K^+)<\pi/2$. As a result $\varphi(s)$ is discontinuous at $s=s_K$. We stress that this discontinuity of $\varphi(s)$ at $s_K$ when $\delta_\pi(s_K)\to\pi^-$ applies rigorously to $\phi(s_K)$ as well since $\eta(s_K)=1$. This discontinuity at $s=s_K$ implies also that the integrand in the Omn\`es representation for $\Gamma_\pi(t)$ develops a logarithmic singularity as, \begin{equation} \frac{\phi(s_K^-)-\phi(s_K^+)}{\pi}\log\frac{\delta}{s_K}~, \end{equation} with $\delta\to 0^+$. When exponentiating this result one has a zero for $\Gamma_\pi(s_K)$ as $(\delta/s_K)^\nu$, $\nu=(\phi(s_K^-)-\phi(s_K^+))/\pi>0$ and $\delta\to 0^+$. This zero is a necessary consequence when evolving continuously from $\delta_\pi(s_K)<\pi$ to $\delta_\pi(s_K)>\pi$.\footnote{It can be shown from eq.(\ref{tpipi2}) that $\phi(s_K^-)-\phi(s_K^+)=\pi$.
Here we are assuming $\eta=1$ for $s\leq s_K$, which is a very good approximation as indicated by experiment \cite{hyams,kaminski}.} This in turn implies rigorously that in the Omn\`es representation of $\Gamma_\pi(t)$, eq.(\ref{ffomnes}), $P(t)$ must be a polynomial of first degree for those cases with $\delta_\pi(s_K)\geq \pi$,\footnote{We are focusing on the physically relevant region of experimentally allowed values of $\delta_\pi(s_K)$, which can be larger or smaller than $\pi$ but close to it.} \begin{equation} P(t)=\Gamma_\pi(0)\frac{s_1-t}{s_1}~, \end{equation} with $s_1$ the position of the zero. Notice that the degree of the polynomial $P(t)$ is discrete and thus by continuity it cannot change unless a singularity develops. This is the case when $\delta_\pi(s_K)=\pi$, changing the degree from 0 to 1. Hence, if $\delta_\pi(s_K)\geq \pi$ for a given $t_{\pi\pi}$, instead of eqs.(\ref{ffomnes2}) and (\ref{r2omnes1}) one must then consider, \begin{equation} \Gamma_\pi(t)=\Gamma_\pi(0)\frac{s_1-t}{s_1}\exp\left[\frac{t}{\pi}\int_{4m_\pi^2}^\infty \frac{\phi(s)}{s(s-t)}ds\right]~, \label{ffomnes3} \end{equation} and \begin{equation} \langle r^2\rangle_s^\pi=-\frac{6}{s_1}+\frac{6}{\pi}\int_{4m_\pi^2}^\infty \frac{\phi(s)}{s^2}ds~. \label{r2omnes2} \end{equation} For those $t_{\pi\pi}$ for which $\delta_\pi(s_K)>\pi$, $\varphi(s)$ follows $\delta_\pi(s)$ just after the $K\bar{K}$ threshold and there is no drop, as emphasized in ref.\cite{y05}, see fig.\ref{figpi2}. Summarizing, we have shown that $\Gamma_\pi(t)$ has a zero at $s_1$ when $\delta_\pi(s_K)\geq \pi$ as a consequence of the assumption that $\phi(s)$ follows $\varphi(s)$ above the $K\bar{K}$ threshold, along the lines of ref.\cite{y05}, and by imposing continuity in $\Gamma_\pi(t)$ under small changes in $\delta_\pi(s_K)\simeq \pi$.
As a result eqs.(\ref{ffomnes3}) and (\ref{r2omnes2}) should be used in the latter case, instead of eqs.(\ref{ffomnes2}) and (\ref{r2omnes1}), valid for $\delta_\pi(s_K)<\pi$. This solution was overlooked in refs.\cite{y04,y05,y06}. We show in appendix~A why the previous discussion on the zero of $\Gamma_\pi(t)$ for $\delta_\pi(s_K)\geq \pi$ at $s_1$ cannot be applied to all pion scalar form factors, in particular to the strange one. \begin{figure}[ht] \psfrag{ref}{\cite{paquito}} \centerline{\epsfig{file=omnes.eps,height=3.6in,width=5.in,angle=0}} \vspace{0.2cm} \caption[pilf]{\protect \small $|\Gamma_\pi(t)/\Gamma_\pi(0)|$ from eq.(\ref{ffomnes2}) with $\delta_\pi(s_K)<\pi$, dashed-line, and $\delta_\pi(s_K)>\pi$, dashed-dotted line. The solid line corresponds to using eq.(\ref{ffomnes3}) for the latter case. For this figure we have used parameterization II (defined in section \ref{sec:resul}) with $\alpha_1=2.28$ (dashed line) and 2.20 (dashed-dotted and solid lines). The dashed-double-dotted line is the scalar form factor of ref.\cite{ou00} that has $\delta_\pi(s_K)>\pi$. \label{figpi3}} \end{figure} If eq.(\ref{ffomnes2}) were used for those $t_{\pi\pi}$ with $\delta_\pi(s_K) \geq \pi$ then a strong maximum of $|\Gamma_\pi(t)|$ would be obtained around the $K\bar{K}$ threshold, instead of the aforementioned zero or the minimum of refs.\cite{dgl90,m00}, as shown in fig.\ref{figpi3} by the dashed-dotted line. That is also shown in fig.10 of ref.\cite{paquito} or fig.2 of \cite{accgl05}. This is the situation for the $\Gamma_\pi(t)$ of refs.\cite{y04,y05}, and it is the reason why $\langle r^2\rangle_s^\pi$ obtained there is much larger than that of refs.\cite{dgl90,pipiscat,m00}. That is, Yndur\'ain uses eqs.(\ref{ffomnes2}), (\ref{r2omnes1}) for $\delta_\pi(s_K)\geq \pi$, instead of eqs.(\ref{ffomnes3}), (\ref{r2omnes2}) (solid line in fig.\ref{figpi3}).
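The arithmetic behind the difference between the two prescriptions is simple: when $\delta_\pi(s_K)\geq\pi$ the sum rule acquires the term $-6/s_1$, while $\phi(s)$ is larger by $\pi$ above $s_1$; that extra phase contributes $(6/\pi)\,\pi\int_{s_1}^\infty ds/s^2=+6/s_1$, so the two effects cancel exactly and $\langle r^2\rangle_s^\pi$ stays continuous across $\delta_\pi(s_K)=\pi$. A one-line check (the value of $s_1$ is hypothetical, just below the $K\bar{K}$ threshold):

```python
# Cancellation between the zero term -6/s_1 and the extra pi of phase above
# s_1 in the sum rule: the extra phase contributes (6/pi)*pi*(1/s_1) = +6/s_1.
# s_1 here is a hypothetical value just below 4*m_K^2.
import math

MK = 0.4957                 # GeV, kaon mass
s1 = 4 * MK**2 - 0.01       # GeV^2, hypothetical zero position

zero_term = -6.0 / s1
extra_phase_term = (6.0 / math.pi) * math.pi * (1.0 / s1)  # Int_{s1}^inf ds/s^2 = 1/s1
```

Dropping the $-6/s_1$ term while keeping the larger phase, as in eq.(\ref{r2omnes1}) applied to $\delta_\pi(s_K)\geq\pi$, retains only the positive half of this cancellation and inflates the radius.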
The unique and important role played by $\delta_\pi(s_K)$ (for elastic $t_{\pi\pi}$ below the $K\bar{K}$ threshold) is perfectly recognised in ref.\cite{y05}. However, this reference sustained the astonishing conclusion that $\Gamma_\pi(t)$ has two radically different behaviours under tiny variations of $t_{\pi\pi}$. These variations are enough to pass from $\delta_\pi(s_K)<\pi$ to $\delta_\pi(s_K)\geq \pi$ \cite{y04}, while the $T-$ or $S-$matrix are fully continuous. Because of this instability of the solution of refs.\cite{y04,y05} under tiny changes of $\delta_\pi(s)$, we consider ours, which produces a continuous $\Gamma_\pi(t)$, to be clearly preferred. We also stress that our solutions, both for $\delta_\pi(s_K)\geq \pi$ and $\delta_\pi(s_K)<\pi$, are the ones that agree with those obtained by solving the Muskhelishvili-Omn\`es equations \cite{dgl90,pipiscat,m00} and Unitary $\chi$PT \cite{ou00}. Let us now show how to fix $s_1$ in terms of the knowledge of $\delta_\pi(s)$ with $\delta_\pi(s_K)\geq \pi$. For this purpose let us write a dispersion relation for $\Gamma_\pi(t)$ with two subtractions, \begin{equation} \Gamma_\pi(t)=\Gamma_\pi(0)+\frac{1}{6}\langle r^2\rangle_s^\pi t+\frac{t^2}{\pi}\int_{4m_\pi^2}^\infty \frac{\hbox{Im}\Gamma_\pi(s)}{s^2(s-t)}ds~. \label{ffdis1} \end{equation} From asymptotic QCD \cite{brodsky} one expects the scalar form factor to vanish at infinity \cite{y04,y06}; the dispersion integral in eq.(\ref{ffdis1}) should then converge rather fast. Eq.(\ref{ffdis1}) is useful because it tells us that the only point around 1 GeV where there can be a zero in $\Gamma_\pi(t)$ is at the energy $s_1$ for which the imaginary part of $\Gamma_\pi(t)$ vanishes. Otherwise, the integral in the right hand side of eq.(\ref{ffdis1}) picks up an imaginary part and there is no way to cancel it as $\Gamma_\pi(0)$, $\langle r^2\rangle_s^\pi$ and $t$ are all real.
Since $|\hbox{Im}\Gamma_\pi(t)|=|\Gamma_\pi(t)\,\sin\delta_\pi(t)|$ for $t\leq s_K$, it certainly vanishes at the point $s_1$ where $\delta_\pi(s_1)=\pi$. As there is only one zero at such energies, this determines $s_1$ exactly in terms of the given parameterization for $\delta_\pi(s)$. One could object to the argument just given to determine $s_1$ that this energy could be complex. However, this would imply two zeroes at $s_1$ and $s_1^*$, and then the degree of $P(t)$ would be two instead of one. Notice that the degree of the polynomial $P(t)$ is discrete and thus, by smoothness in the continuous parameters of the $T-$matrix, its value should stay at 1 for some open domain in the parameters with $\delta_\pi(s_K)>\pi$ until a discontinuity develops. Physically, the presence of two zeroes would in turn require that $\phi(s)\to 3\pi$ so as to guarantee that $\Gamma_\pi(t)$ still vanishes as $-1/t$, as required by asymptotic QCD \cite{brodsky,y04}. This value for the asymptotic phase seems to be rather unrealistic as $\varphi(s)$ only reaches $2\pi$ at already quite high energy values, as shown in fig.\ref{figpi2}. \section{Results} \label{sec:resul} Our main result from the previous section is the sum rule to determine $\langle r^2\rangle_s^\pi$, \begin{equation} \langle r^2\rangle_s^\pi=-\frac{6}{s_1}\theta(\delta_\pi(s_K)-\pi)+ \frac{6}{\pi}\int_{4m_\pi^2}^\infty\frac{\phi(s)}{s^2}ds~, \label{r2final} \end{equation} where $\theta(x)=0$ for $x<0$ and 1 for $x\geq 0$. We split $\langle r^2\rangle_s^\pi$ in two parts: \begin{eqnarray} \langle r^2\rangle_s^\pi&=&Q_H+Q_A~,\nonumber\\ Q_H&=&-\frac{6}{s_1}\theta(\delta_\pi(s_K)-\pi)+\frac{6}{\pi}\int_{4m_\pi^2}^{s_H} \frac{\phi(s)}{s^2}ds~,\nonumber\\ Q_A&=&\frac{6}{\pi}\int_{s_H}^\infty \frac{\phi(s)}{s^2}\,ds~, \label{split} \end{eqnarray} with $s_H=2.25$ GeV$^2$. Reasons for fixing $s_H$ to this value are given below. The main issue in the application of eq.(\ref{r2final}) is to determine $\phi(s)$ in the integrand.
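Given a parameterization of $\phi(s)$, the sum rule eq.(\ref{r2final}) is a one-dimensional integral that is easy to evaluate numerically. As a sanity check of the machinery, the toy phase $\phi(s)=\pi s/(s+s_0)$ (a stand-in, not a fit to data) admits the closed form $\langle r^2\rangle_s^\pi=(6/s_0)\ln(1+s_0/4m_\pi^2)$ for the case without a zero, which the quadrature reproduces:

```python
# Toy evaluation of the sum rule (case without the theta-term): for the
# illustrative phase phi(s) = pi*s/(s+s0), phi(s)/s^2 = pi/(s*(s+s0)) and
# <r^2> = (6/s0)*ln(1 + s0/(4 m_pi^2)) in closed form. phi is a stand-in.
import math

MPI = 0.13957            # GeV, pion mass
HBARC = 0.19733          # GeV*fm
S_TH = 4 * MPI**2        # pi-pi threshold, GeV^2
S0 = 1.0                 # GeV^2, toy scale

def phi(s):
    return math.pi * s / (s + S0)

# substitute u = 1/s to map (S_TH, inf) onto (0, 1/S_TH); phi(s)/s^2 ds -> phi(1/u) du
N = 20000
U = 1.0 / S_TH
total = sum(phi(1.0 / ((i + 0.5) * U / N)) for i in range(N)) * U / N
r2_gev2 = 6.0 / math.pi * total
r2_fm2 = r2_gev2 * HBARC**2

closed = 6.0 / S0 * math.log(1 + S0 / S_TH)   # analytic value, GeV^-2
```

Amusingly, this crude toy lands near $0.61$~fm$^2$, but the physics of the following paragraphs is of course in the measured $\delta_\pi(s)$, not in any such one-parameter phase.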
Below the $K\bar{K}$ threshold and neglecting inelasticity, one has that $\phi(s)=\delta_\pi(s)$, $4m_\pi^2\leq s \leq 4 m_K^2$. This follows because of the Watson final state theorem, continuity and the equality $\phi(4m_\pi^2)=\delta_\pi(4m_\pi^2)=0$. For practical applications we shall consider the S-wave I=0 $\pi\pi$ phase shifts given by the $K-$matrix parameterization of ref.\cite{hyams} (from its energy dependent analysis of data from 0.6 GeV up to 1.9 GeV) and the parameterizations of ref.\cite{pipiscat} (CGL) and ref.\cite{py03} (PY). The resulting $\delta_\pi(s)$ for all these parameterizations are shown in fig.\ref{figpi}. We use CGL from $\pi\pi$ threshold up to 0.8 GeV, because this is the upper limit of its analysis, while PY is used up to 0.9 GeV, because at this energy it matches the data of \cite{hyams} well within the experimental errors. The $K-$matrix of ref.\cite{hyams} is used for energies above 0.8 GeV, when using CGL below this energy (parameterization I), and above 0.9 GeV, when using PY for lower energies (parameterization II). We take the parameterizations CGL and PY as their difference below 0.8 GeV accounts well for the experimental uncertainties in $\delta_\pi$, see fig.\ref{figpi}, and they satisfy constraints from $\chi PT$ (the former) and dispersion relations (both). The reason why we avoid using the parameterization of ref.\cite{hyams} at lower energies is that one should be as precise as possible there, since this region gives the largest contribution to $\langle r^2\rangle_s^\pi$, as is evident from the right panel of fig.\ref{figpi2}. It happens that the $K-$matrix of \cite{hyams}, which fits data above 0.6 GeV, is not compatible with data from $K_{e 4}$ decays \cite{bnl,na48}. We show in the insert of fig.\ref{figpi} the comparison of the parameterizations CGL and PY with the $K_{e4}$ data of \cite{bnl,na48}.
We also show in the same figure the experimental points on $\delta_\pi$ from refs.\cite{hyams,kaminski,grayer}. Both refs.\cite{hyams,kaminski} are compatible within errors, with some disagreement above 1.5 GeV. This disagreement does not affect our numerical results since above 1.5 GeV we do not rely on data. The $K-$matrix of ref.\cite{hyams} is given by, \begin{equation} K_{ij}(s)=\alpha_i\alpha_j/(x_1-s)+\beta_i\beta_j/(x_2-s)+\gamma_{ij}~, \label{km} \end{equation} where \begin{equation} \begin{array}{lll} x_1^{1/2}=0.11\pm 0.15~ & x_2^{1/2}=1.19\pm 0.01 & \\ \alpha_1=2.28\pm 0.08~ & \alpha_2=2.02\pm 0.11 & \\ \beta_1=-1.00\pm 0.03~ & \beta_2=0.47\pm 0.05 &\\ \gamma_{11}=2.86\pm 0.15~ & \gamma_{12}=1.85\pm 0.18~ & \gamma_{22}=1.00\pm 0.53~, \end{array} \label{array} \end{equation} with units given in appropriate powers of GeV. In order to calculate the contribution from the phase shifts of this $K-$matrix we generate Monte-Carlo gaussian samples, taking into account the errors shown in eq.(\ref{array}), and evaluate $Q_H$ according to eq.(\ref{split}). The central value of $\delta_\pi(s_K)$ for the $K-$matrix of ref.\cite{hyams} is $3.05$, slightly below $\pi$. When generating Monte-Carlo gaussian samples according to eq.(\ref{array}), there are cases with $\delta_{\pi}(s_K)\geq \pi$, around $30\%$ of the samples. Note that for these cases one also has the contribution $-6/s_1$ in eq.(\ref{r2final}). The application of the Watson final state theorem for $s>4m_K^2$ is not straightforward since inelastic channels are relevant. The first important one is the $K\bar{K}$ channel associated in turn with the appearance of the narrow $f_0(980)$ resonance, just on top of its threshold. This implies a sudden drop of the elasticity parameter $\eta$, but it rises again rapidly (the $f_0(980)$ resonance is narrow with a width around 30 MeV) and in the region $1.1^2\lesssim s\lesssim 1.5^2$ GeV$^2$ is compatible within errors with $\eta=1$ \cite{hyams,kaminski}.
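The $K-$matrix of eq.(\ref{km}) can be turned into the unitary $T-$ and $S-$matrices through $T=Q^{1/2}\left(K^{-1}-iQ\right)^{-1}Q^{1/2}$ of eq.(\ref{tmat2}) below. As a cross-check of the construction, the sketch below builds $S=I+2iT$ from the central values of eq.(\ref{array}) and verifies $SS^\dagger=I$ at $s=(1.2\;{\rm GeV})^2$, where both channels are open; the Monte-Carlo error propagation of the text would simply resample these parameters.

```python
# Two-channel K-matrix of eq.(km) with the central values of eq.(array):
# T = Q^{1/2} K (1 - i Q K)^{-1} Q^{1/2}, S = 1 + 2iT, checked for unitarity
# at s = (1.2 GeV)^2. Plain 2x2 complex algebra, no external libraries.
import math

MPI, MK = 0.13957, 0.4957          # GeV, pion and kaon masses
X1, X2 = 0.11**2, 1.19**2          # pole positions x_1, x_2 (GeV^2)
A = (2.28, 2.02)                   # alpha_i
B = (-1.00, 0.47)                  # beta_i
G = ((2.86, 1.85), (1.85, 1.00))   # gamma_ij

def kmat(s):
    return [[A[i]*A[j]/(X1 - s) + B[i]*B[j]/(X2 - s) + G[i][j]
             for j in range(2)] for i in range(2)]

def smat(s):
    q = (math.sqrt(s/4 - MPI**2), math.sqrt(s/4 - MK**2))  # c.m. momenta
    K = kmat(s)
    # M = 1 - i Q K, inverted explicitly for the 2x2 case
    M = [[(1 if i == j else 0) - 1j*q[i]*K[i][j] for j in range(2)] for i in range(2)]
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    Minv = [[M[1][1]/det, -M[0][1]/det], [-M[1][0]/det, M[0][0]/det]]
    KMinv = [[sum(K[i][k]*Minv[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    T = [[math.sqrt(q[i])*KMinv[i][j]*math.sqrt(q[j]) for j in range(2)] for i in range(2)]
    return [[(1 if i == j else 0) + 2j*T[i][j] for j in range(2)] for i in range(2)]

S = smat(1.2**2)
SSd = [[sum(S[i][k]*S[j][k].conjugate() for k in range(2)) for j in range(2)] for i in range(2)]
eta = abs(S[0][0])                 # elasticity parameter at this energy
```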
For $\eta\simeq 1$, the Watson final state theorem would imply again that $\phi(s)=\varphi(s)$, but, as emphasized by \cite{accgl05}, this equality only holds, in principle, modulo $\pi$. The reason advocated in ref.\cite{accgl05} is the presence of the region $s_K<s<1.1^2$ GeV$^2$ where inelasticity can be large, and then continuity arguments alone cannot be applied to guarantee the equality $\phi(s)\simeq \varphi(s)$ for $s\gtrsim 1.1^2$~GeV$^2$. This argument was shown in ref.\cite{y05} to be quite irrelevant in the present case. To show this, the $S-$matrix in the $\pi\pi$ and $K\bar{K}$ channels is diagonalized. These channels are the relevant ones when $\eta$ is clearly different from 1, between 1 and 1.1 GeV. Above that energy one also has the opening of the $\eta\eta$ channel and the increasing role of multipion states. We reproduce here the arguments of ref.\cite{y05}, but deliver expressions directly in terms of the phase shifts and elasticity parameter, instead of $K-$matrix parameters as done in ref.\cite{y05}. For two channel scattering, because of unitarity, the $T-$matrix can be written as: \begin{equation} T=\left( \begin{array}{ll} \frac{1}{2i}(\eta e^{2i\delta_\pi}-1) & \frac{1}{2}\sqrt{1-\eta^2}e^{i(\delta_\pi+\delta_K)} \\ \frac{1}{2}\sqrt{1-\eta^2}e^{i(\delta_\pi+\delta_K)} & \frac{1}{2i}(\eta e^{2i \delta_K}-1) \end{array} \right)~, \label{tmat} \end{equation} with $\delta_K$ the elastic S-wave I=0 $K\bar{K}$ phase shift. In terms of the $T$-matrix the S-wave I=0 $S-$matrix is given by, \begin{equation} S=I+2i T~, \label{smat} \end{equation} satisfying $S S^\dagger=S^\dagger S =I$. The $T$-matrix can also be written as \begin{equation} T=Q^{1/2}\left(K^{-1}-i Q\right)^{-1} Q^{1/2}~, \label{tmat2} \end{equation} where the $K-$matrix is real and symmetric along the real axis for $s\geq 4m_\pi^2$ and $Q=\hbox{diag}(q_\pi,q_K)$, with $q_\pi(q_K)$ the center of mass momentum of pions(kaons).
This allows one to diagonalize $K$ with a real orthogonal matrix $C$, and hence both the $T-$ and $S-$matrices are also diagonalized with the same matrix. Writing, \begin{equation} C=\left( \begin{array}{cc} \cos\theta & \sin \theta \\ -\sin \theta & \cos \theta \end{array} \right)~, \label{cmat} \end{equation} one has \begin{eqnarray} \cos\theta &=&\frac{\left[(1-\eta^2)/2\right]^{1/2}}{\left[ 1-\eta^2\cos^2\Delta-\eta|\sin\Delta|\sqrt{1-\eta^2\cos^2\Delta}\right]^{1/2}}~,\nonumber \\ \sin\theta &=&-\frac{\sin\Delta}{\sqrt{2}}\frac{\eta-\sqrt{1+(1-\eta^2)\cot^2\Delta}}{\left[ 1-\eta^2\cos^2\Delta-\eta|\sin\Delta|\sqrt{1-\eta^2\cos^2\Delta}\right]^{1/2}}~, \label{costeta} \end{eqnarray} with $\Delta=\delta_K-\delta_\pi$. On the other hand, the eigenvalues of the $S-$matrix are given by, \begin{eqnarray} e^{2i \delta_{(+)}}&=&S_{11}\frac{1+e^{2i\Delta}}{2} \left[1-\frac{i}{\eta}\tan\Delta\, \sqrt{1+(1-\eta^2)\cot^2\Delta}\right]\\ e^{2i \delta_{(-)}}&=&S_{22}\frac{1+e^{-2i\Delta}}{2} \left[1+\frac{i}{\eta}\tan\Delta\, \sqrt{1+(1-\eta^2)\cot^2\Delta}\right]~. \label{eigenvalues} \end{eqnarray} The eigenvalue phase $\delta_{(+)}$ satisfies $\delta_{(+)}(s_K)=\delta_\pi(s_K)$. The expressions above for $\exp 2i\delta_{(+)}$ and $\exp 2i\delta_{(-)}$ interchange between each other when $\tan\Delta$ crosses zero and simultaneously the sign in the right hand side of eq.(\ref{costeta}) for $\sin\theta$ changes. This diagonalization allows one to disentangle two elastic scattering channels.
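Since the two-channel $S-$matrix of eqs.(\ref{tmat}) and (\ref{smat}) is unitary and symmetric, its eigenvalues are pure phases $e^{2i\delta_{(\pm)}}$; this is easy to verify numerically for any choice of $(\eta,\delta_\pi,\delta_K)$. The values below are illustrative, not taken from data:

```python
# The eigenvalues of the unitary, symmetric two-channel S-matrix built from
# (eta, delta_pi, delta_K) are pure phases exp(2i delta_{+/-}), and their
# product is exp(2i(delta_pi + delta_K)) = det S. Illustrative inputs.
import cmath
import math

eta, d_pi, d_K = 0.8, 3.3, 0.4    # illustrative values
off = math.sqrt(1 - eta**2) * cmath.exp(1j * (d_pi + d_K))
S = [[eta * cmath.exp(2j * d_pi), 1j * off],
     [1j * off, eta * cmath.exp(2j * d_K)]]

tr = S[0][0] + S[1][1]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
lam1 = (tr + disc) / 2            # exp(2i delta_{(+)}) or exp(2i delta_{(-)})
lam2 = (tr - disc) / 2
```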
The scalar form factors attached to each of these channels, $\Gamma'_1$ and $\Gamma'_2$, will satisfy the Watson final state theorem in the whole energy range and then one has, \begin{eqnarray} \Gamma'&\equiv& \left( \begin{array}{c} \Gamma'_1\\ \Gamma'_2 \end{array}\right) =C^T Q^{1/2}\Gamma=C^T Q^{1/2}\left( \begin{array}{c} \Gamma_\pi \\ \Gamma_K \end{array}\right)~,\nonumber\\ \Gamma_\pi &=& q^{-1/2}_\pi\left( \lambda \cos \theta \,|\Gamma_1'| e^{i\delta_{(+)}} \pm \sin \theta \, |\Gamma_2'| e^{i\delta_{(-)}}\right)~,\nonumber\\ \Gamma_K &=& q_K^{-1/2}\left( \pm \cos \theta \, |\Gamma_2'| e^{i\delta_{(-)}} - \lambda \sin \theta\, |\Gamma_1'| e^{i\delta_{(+)}}\right)~. \label{diag} \end{eqnarray} The $\pm$ in front of $|\Gamma'_2|$ is due to the fact that $\Gamma'_2=0$ at $s_K$, as follows from its definition in the equation above. Since the Watson final state theorem only fixes the phase of $\Gamma'_2$ up to modulo $\pi$, and the phase is not defined in the zero, we cannot fix the sign in front at this stage. Next, $\Gamma'_1$ has a zero at $s_1$ when $\delta_\pi(s_K)\geq \pi$. For this case, $-|\Gamma_1'|$ must appear in the previous equation, so as to guarantee continuity of its ascribed phase, and this is why $\lambda=(-1)^{\theta(\delta_\pi(s_K)-\pi)}$. Now, when $\eta\to 1$ then $\sin\theta\to 0$ as $\sqrt{(1-\eta)/2}$ and $\phi(s)$ is then the eigenvalue phase $\delta_{(+)}$. This eigenvalue phase can be calculated given the $T-$matrix. For those $T-$matrices employed here, and those of refs.\cite{y04,y05,dgl90,accgl05}, $\delta_{(+)}(s)$ follows rather closely $\varphi(s)$ in the whole energy range. This is shown in fig.\ref{figpi2} and already discussed in detail in ref.\cite{y05}. In this way, one guarantees that $\phi(s)$ and $\varphi(s)$ do not differ by an integer multiple of $\pi$ when $\eta\simeq 1$, $1.1^2\lesssim s \lesssim 1.5^2$~GeV$^2$.
For the calculation of $Q_H$ in eq.(\ref{split}) we shall equate $\phi(s)=\varphi(s)$ for $4m_K^2<s<1.5^2$ GeV$^2$. Denoting, \begin{eqnarray} I_H&=&\frac{6}{\pi}\int_{4m_\pi^2}^{s_H}\frac{\varphi(s)}{s^2}\,ds= I_1+I_2+I_3~,\nonumber\\ I_1&=&\frac{6}{\pi}\int_{4m_\pi^2}^{s_K} \frac{\varphi(s)}{s^2}ds~,\nonumber\\ I_2&=&\frac{6}{\pi}\int_{s_K}^{1.1^2} \frac{\varphi(s)}{s^2}ds~,\nonumber\\ I_3&=&\frac{6}{\pi}\int_{1.1^2}^{s_H} \frac{\varphi(s)}{s^2}ds~, \label{ies} \end{eqnarray} then \begin{equation} Q_H\simeq I_H-\frac{6}{s_1}\theta(\delta_\pi(s_K)-\pi)~. \label{qhfinal} \end{equation} Now, eq.(\ref{diag}) can also be used to estimate the error of approximating $\phi(s)$ by $\varphi(s)$ in the range $4m_K^2< s < 1.5^2$ GeV$^2$ to calculate $I_2$ and $I_3$ as done in eq.(\ref{ies}). We could have also used $\delta_{(+)}(s)$ in eq.(\ref{ies}). However, notice that when $\eta\lesssim 1$ then $\varphi(s)\simeq \delta_{(+)}(s)$ and when inelasticity could be substantial the difference between $\delta_{(+)}(s)$ and $\varphi(s)$ is well taken into account in the error analysis that follows. Remarkably, consistency of our approach also requires $\phi(s)$ to be closer to $\varphi(s)$ than to $\delta_{(+)}(s)$. The reason is that $\varphi(s)$ for $\delta_\pi(s_K)\geq \pi$ is in very good approximation the $\varphi(s)$ for $\delta_\pi(s_K)<\pi$ plus $\pi$; this is clear from fig.\ref{figpi2}. This difference is \emph{precisely} the required one in order to have the same value for $\langle r^2\rangle_s^\pi$ either for $\delta_\pi(s_K)<\pi$ or $\delta_\pi(s_K)\geq \pi$ from eq.(\ref{r2final}). However, the difference for $\delta_{(+)}(s)$ between $\delta_\pi(s_K)<\pi$ and $\delta_\pi(s_K)\geq \pi$ is smaller than $\pi$. Indeed, we note that $\phi(s)$ follows $\varphi(s)$ more closely than $\delta_{(+)}(s)$ for the explicit form factors of refs.\cite{ou00,dgl90}. Let us consider first the range $1.1^2<s<1.5^2$ GeV$^2$ where from experiment \cite{hyams} $\eta\simeq 1$ within errors.
With $\epsilon=\pm \tan\theta |\Gamma_2'/\Gamma_1'|$ and $\rho=\delta_{(-)}-\delta_{(+)}$, eq.(\ref{diag}) allows us to write, \begin{eqnarray} \Gamma_\pi=\lambda \cos\theta\,|\Gamma_1'|e^{i\delta_{(+)}}(1+\epsilon \cos\rho) \left(1+i\frac{\epsilon\sin\rho}{1+\epsilon \cos\rho}\right)~. \label{phase1} \end{eqnarray} When $\eta\to 1$ then $\epsilon\to 0$, according to the expansion,\footnote{The ratio $\left|\Gamma_2'/\Gamma_1'\right|$, present in $\epsilon$, is not expected to be large since the $f_0(1300)$ couples mostly to $4\pi$ and similarly to $\pi\pi$ and $K\bar{K}$, while the $f_0(1500)$ couples mostly to $\pi\pi$ \cite{pdg}.} \begin{equation} \tan\theta=\frac{\sqrt{(1-\eta)/2}}{\sin\Delta}\left[1 -\frac{1+3\cos 2\Delta}{8\sin^2\Delta}(1-\eta)\right] +{\cal O}\left((1-\eta)^{5/2}\right)~. \end{equation} Rewriting, \begin{equation} 1+i\frac{\epsilon\sin\rho}{1+\epsilon \cos\rho}=\exp\left( i\frac{\epsilon\sin\rho}{1+\epsilon \cos\rho}\right)+{\cal O}(\epsilon^2)~, \label{phase2} \end{equation} one sees from eqs.(\ref{phase1}) and (\ref{phase2}) that inelasticity effects imply a shift in $\delta_{(+)}$, \begin{equation} \delta_{(+)}\to \delta_{(+)}+\frac{\epsilon\sin\rho}{1+\epsilon \cos\rho}~. \label{epsi} \end{equation} Using $\eta=0.8$ in the range $1.1^2\lesssim s\lesssim 1.5^2$ GeV$^2$ ($\eta\simeq 1$ results from the energy dependent analysis of ref.\cite{hyams} given by the $K-$matrix of eq.(\ref{km})), one ends up with $\epsilon\simeq 0.3$. Taking into account that $\delta_{(+)}\gtrsim 3\pi/2$ for $\delta_\pi(s_K)\geq \pi$ (in this case $\delta_{(+)}\simeq \delta_\pi$), and around $3\pi/4$ for $\delta_\pi(s_K)<\pi$, see fig.\ref{figpi2}, one finds relative corrections to $\delta_{(+)}$ of around $6\%$ for the former case and $13\%$ for the latter.
Although the $K-$matrix of ref.\cite{hyams}, eq.(\ref{km}), is given up to 1.9 GeV, one should be aware that keeping only the two channels $\pi\pi$ and $K\bar{K}$ over the whole energy range is an oversimplification, particularly above 1.2 GeV. Because of this we finally double the previous estimate. Hence $I_3$ is calculated with a relative error of $12\%$ for $\delta_\pi(s_K)\geq \pi$ and $25\%$ for $\delta_\pi(s_K)<\pi$. In the narrow region $s_K<s<1.1^2$ GeV$^2$, $\eta$ can be rather different from 1, due to the $f_0(980)$ that couples very strongly to the just opened $K\bar{K}$ channel. However, the direct measurements of $\pi\pi\to K\bar{K}$ \cite{expipikk}, where $1-\eta^2$ is directly measured,\footnote{Neglecting multipion states.} provide a better way to determine $\eta$ than $\pi\pi$ scattering \cite{hyams,kaminski}. It results from the former experiments, as shown also by explicit calculations \cite{npa,nd,ao07}, that $\eta$ is not as small as indicated by $\pi\pi$ experiments \cite{hyams}, and one has $\eta\simeq 0.6-0.7$ for its minimum value. Employing $\eta=0.6$ in eq.(\ref{epsi}), one obtains $\epsilon\simeq 0.5$. Taking $\delta_{(+)}$ around $\pi/2$ when $\delta_\pi(s_K)<\pi$, this implies a relative error of 30$\%$. For $\delta_\pi(s_K)\geq \pi$ one has instead $\delta_{(+)}\gtrsim \pi$, and an estimated error of $15\%$. Regarding the ratio of the moduli of the form factors entering in $\epsilon$, we expect it to be $\lesssim 1$ (see appendix A). Therefore, our error in the evaluation of $I_2$ is estimated to be $30\%$ and $15\%$ for the cases $\delta_\pi(s_K)<\pi$ and $\delta_\pi(s_K)\geq \pi$, respectively. As a result of the discussion following eq.(\ref{qhfinal}), we consider that the error estimates made for $I_2$ and $I_3$ in the case $\delta_\pi(s_K)<\pi$ are too conservative and that the relative errors given for $\delta_\pi(s_K)>\pi$ are more realistic.
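The percentage estimates above follow from simple arithmetic: to leading order $\epsilon\simeq\sqrt{(1-\eta)/2}\,|\Gamma_2'/\Gamma_1'|/\sin\Delta$, and the relative correction to $\delta_{(+)}$ is bounded by $\epsilon/\delta_{(+)}$. A minimal numerical sketch (assuming, for this rough check only, $\sin\Delta\simeq 1$ and $|\Gamma_2'/\Gamma_1'|\simeq 1$):

```python
import math

def eps(eta):
    # leading-order epsilon ~ sqrt((1-eta)/2), assuming sin(Delta) ~ 1
    # and |Gamma'_2/Gamma'_1| ~ 1 (assumptions for this rough check)
    return math.sqrt((1 - eta) / 2)

# 1.1^2 < s < 1.5^2 GeV^2: eta ~ 0.8
e1 = eps(0.8)                      # ~0.32, quoted as ~0.3
rel_high = e1 / (3 * math.pi / 2)  # delta_pi(s_K) >= pi: ~6%
rel_low  = e1 / (3 * math.pi / 4)  # delta_pi(s_K) <  pi: ~13%

# s_K < s < 1.1^2 GeV^2: eta ~ 0.6
e2 = eps(0.6)                      # ~0.45, quoted as ~0.5
rel2_low  = e2 / (math.pi / 2)     # delta_pi(s_K) <  pi: ~30%
rel2_high = e2 / math.pi           # delta_pi(s_K) >= pi: ~15%
```

These numbers reproduce the quoted $\epsilon\simeq 0.3$ and $0.5$ and the relative corrections at the $6\%$, $13\%$, $30\%$ and $15\%$ level.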
Nonetheless, since the absolute errors that one obtains for $I_2$ and $I_3$ are the same in both cases (because $I_2$ and $I_3$ for $\delta_\pi(s_K)<\pi$ are around a factor 2 smaller than those for $\delta_\pi(s_K)\geq \pi$), we keep the errors as given above. To the previous errors for $I_2$ and $I_3$ due to inelasticity, we also add in quadrature the uncertainty in the calculation of $Q_H$ due to the error in $t_{\pi\pi}$ from the uncertainties in the parameters of the $K-$matrix, eqs.(\ref{km}) and (\ref{array}), and those in the parameterizations CGL and PY. Finally, for $s>2.25$ GeV$^2$ we employ the knowledge of the asymptotic phase of the pion scalar form factor in order to evaluate $Q_A$ in eq.(\ref{split}). The function $\phi(s)$ is determined so as to match the asymptotic behaviour of $\Gamma_\pi(t)$, which falls off as $-1/t$ according to QCD. The Omn\`es representation of the scalar form factor, eqs.(\ref{ffomnes2}) and (\ref{ffomnes3}), tends to $t^{-q/\pi}$ and $t^{-q/\pi+1}$ for $t\to\infty$, respectively. Here, $q$ is the asymptotic value of the phase $\phi(s)$ when $s\to \infty$. Hence, for $\delta_\pi(s_K)<\pi$ the function $\phi(s)$ is required to tend to $\pi$, while for $\delta_\pi(s_K)\geq \pi$ the asymptotic value should be $2\pi$. The way $\phi(s)$ is predicted to approach the limiting value is somewhat ambiguous \cite{y05,y06}, \begin{equation} \phi_{as}(s)\simeq \pi\left( n\pm\frac{2d_m}{\log(s/\Lambda^2)}\right)~. \label{asin} \end{equation} In this equation, $2d_m=24/(33-2 n_f)\simeq 1$, $\Lambda^2$ is the QCD scale parameter and $n=1,~2$ for $\delta_\pi(4m_K^2)<\pi,~\geq \pi$, respectively. The case $n=2$ was not discussed in refs.\cite{y04,y05,y06,accgl05,ccl} for the form factor given in eq.(\ref{ffdef}). There is also a controversy between \cite{ccl} and \cite{y06} regarding the $\pm$ sign in eq.(\ref{asin}).
If leading twist contributions dominate \cite{y05,y06} then the limiting value is reached from above and one has the plus sign, while if twist three contributions are the dominant ones \cite{ccl} the minus sign has to be considered \cite{y06}. In the left panel of fig.\ref{figpi2} we show with the wide bands the values of $\phi_{as}(s)$ for $s>2.25$ GeV$^2$ from eq.(\ref{asin}), considering both signs, for $n=1$ ($\delta_\pi(s_K)<\pi$) and $2$ ($\delta_\pi(s_K)\geq \pi$). We see in the figure that above $1.4-1.5$ GeV ($1.96-2.25$ GeV$^2$) the phases $\varphi(s)$ and $\phi_{as}(s)$ match, and this is why we take $s_H=2.25$ GeV$^2$ in eq.(\ref{r2final}), as similarly done in refs.\cite{y04,y05}. In this way, we also avoid entering into hadronic details in a region where $\eta<1$ with the onset of the $f_0(1500)$ resonance. The present uncertainty as to whether the $+$ or $-$ sign holds in eq.(\ref{asin}) is taken as a source of error in evaluating $Q_A$. The other source of uncertainty comes from the value taken for $\Lambda^2$, $0.1 <\Lambda^2<0.35$ GeV$^2$, as suggested in ref.\cite{y04}. From fig.\ref{figpi2} it is clear that our error estimate for $\phi_{as}(s)$ is very conservative and should account for uncertainties due to the onset of inelasticity for energies above $1.4-1.5$ GeV and to the appearance of the $f_0(1500)$ resonance. In the right panel of fig.\ref{figpi2} we show the integrand for $\langle r^2\rangle_s^\pi$, eq.(\ref{split}), for parameterization I (dashed line) and II (solid line). Notice how the large uncertainty in $\phi_{as}(s)$, which affects the higher energy domain, is much reduced in the integrand.
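To get a feeling for the width of the asymptotic band, one can evaluate eq.(\ref{asin}) at the matching point $s_H=2.25$~GeV$^2$ for both signs and the quoted range of $\Lambda^2$; a short sketch (taking $n_f=3$, an assumption made for this illustration):

```python
import math

nf = 3
two_dm = 24 / (33 - 2 * nf)  # 2*d_m = 8/9, indeed close to 1

def phi_as(s, n, sign, lam2):
    # eq.(asin): phi_as(s) = pi*(n +/- 2 d_m / log(s/Lambda^2))
    return math.pi * (n + sign * two_dm / math.log(s / lam2))

s_H = 2.25  # GeV^2, the matching point
band_n1 = [phi_as(s_H, 1, sg, l2) for sg in (+1, -1) for l2 in (0.1, 0.35)]
band_n2 = [phi_as(s_H, 2, sg, l2) for sg in (+1, -1) for l2 in (0.1, 0.35)]
# each band straddles its limiting value: pi for n=1, 2*pi for n=2
```

The resulting bands are wide, consistent with the very conservative error assigned to $\phi_{as}(s)$ in the text.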
\begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $\phi(s)$ & I & I & II & II \\ \hline $\delta_\pi(s_K)$ & $\geq \pi$ & $<\pi$ & $\geq \pi$ & $<\pi$ \\ \hline $I_1$ & $0.435\pm 0.013$ & $0.435\pm 0.013$ & $0.483\pm 0.013 $& $0.483\pm 0.013 $ \\ $I_2$ & $0.063\pm 0.010$ & $0.020\pm 0.006$ & $0.063\pm 0.010 $& $0.020\pm 0.006 $ \\ $I_3$ & $0.143\pm 0.017$ & $0.053\pm 0.013 $ & $0.143\pm 0.017$ & $0.053\pm 0.013 $ \\ $Q_H$ & $0.403\pm 0.024$ & $0.508\pm 0.019$ & $0.452\pm 0.024 $& $0.554\pm 0.019 $ \\ $Q_A$ & $0.21 \pm 0.03$ & $0.10\pm 0.03$ & $0.21\pm 0.03 $& $0.10\pm 0.03 $ \\ \hline $\langle r^2\rangle_s^\pi$ & $0.61\pm 0.04$ & $0.61\pm 0.04$ & $0.66\pm 0.04$& $0.66\pm 0.04 $ \\ \hline \end{tabular} \caption{Different contributions to $\langle r^2\rangle_s^\pi$ as defined in eqs.(\ref{split}) and (\ref{ies}). All the units are $\textrm{fm}^2$. In the value for $\langle r^2\rangle_s^\pi$ the errors due to $I_1$, $I_2$, $I_3$ and $Q_A$ are added in quadrature. \label{tableresul}} \end{center} \end{table} In table \ref{tableresul} we show the values of $I_1$, $I_2$, $I_3$, $Q_H$, $Q_A$ and $\langle r^2\rangle_s^\pi$ for the parameterizations I and II and for the two cases $\delta_\pi(s_K)\geq \pi$ and $\delta_\pi(s_K)<\pi$. This table shows the disappearance of the disagreement between the cases $\delta_\pi(s_K)\geq \pi$ and $\delta_\pi(s_K)<\pi$ from the $\pi\pi$ and $K\bar{K}$ $T-$matrix of eq.(\ref{km}), once the zero of $\Gamma_\pi(t)$ at $s_1<s_K$ is taken into account for the former case. This disagreement was the reason for the controversy between Yndur\'ain and ref.\cite{accgl05} regarding the value of $\langle r^2\rangle_s^\pi$. The fact that the parameterization II gives rise to a larger value of $\langle r^2\rangle_s^\pi$ than I is because PY follows the upper $\delta_\pi$ data below 0.9 GeV, while CGL follows lower ones, as shown in fig.\ref{figpi}. The different errors in table \ref{tableresul} are added in quadrature. 
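As a simple consistency check of table \ref{tableresul}, $Q_H+Q_A$ should reproduce $\langle r^2\rangle_s^\pi$ within rounding, and adding the errors of $I_1$, $I_2$, $I_3$ and $Q_A$ in quadrature should give the quoted $\pm 0.04$~fm$^2$; in Python:

```python
import math

# (Q_H, Q_A, <r^2>_s^pi) for the four columns of the table, in fm^2
cols = [
    (0.403, 0.21, 0.61),  # I,  delta_pi(s_K) >= pi
    (0.508, 0.10, 0.61),  # I,  delta_pi(s_K) <  pi
    (0.452, 0.21, 0.66),  # II, delta_pi(s_K) >= pi
    (0.554, 0.10, 0.66),  # II, delta_pi(s_K) <  pi
]
for qh, qa, r2 in cols:
    assert abs(qh + qa - r2) <= 0.01  # consistent within rounding

# errors of I_1, I_2, I_3 and Q_A added in quadrature (first column)
err = math.sqrt(0.013**2 + 0.010**2 + 0.017**2 + 0.03**2)  # ~0.038 fm^2
```

The quadrature sum rounds to the $\pm 0.04$~fm$^2$ quoted in the table.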
The final value for $\langle r^2\rangle_s^\pi$ is the mean between those of parameterizations I and II, and the error is taken such that it spans the interval of values in table \ref{tableresul} at the level of two sigmas. One obtains: \begin{eqnarray} \langle r^2\rangle_s^\pi=0.63\pm 0.05~\hbox{fm}^2~. \label{values} \end{eqnarray} The largest sources of error in $\langle r^2\rangle_s^\pi$ are the uncertainties in the experimental $\delta_\pi$ and in the asymptotic phase $\phi_{as}$. This is due to the fact that the former are enhanced because of their weight in the integrand, see fig.\ref{figpi2}, and the latter because of its large size. Our number above and that of refs.\cite{pipiscat,dgl90}, $\langle r^2\rangle_s^\pi=0.61\pm 0.04$~fm$^2$, are then compatible. On the other hand, we have also evaluated $\langle r^2\rangle_s^\pi$ directly from the scalar form factor obtained with the dynamical approach of ref.\cite{ou00} from Unitary $\chi$PT, and we obtain $\langle r^2\rangle_s^\pi=0.64\pm 0.06$ fm$^2$, in perfect agreement with eq.(\ref{values}). Notice that the scalar form factor of ref.\cite{ou00} has $\delta_\pi(s_K)>\pi$ and we have checked that it has a zero at $s_1$, as it should. This is shown in fig.\ref{figpi3} by the dashed-double-dotted line. The value $\langle r^2\rangle_s^\pi=0.75\pm 0.07$~fm$^2$ from refs.\cite{y04,y05} is much larger than ours because the possibility of a zero at $s_1$ was not taken into account there, and another solution was considered instead. This solution, however, has an unstable behaviour under the transition from $\delta_\pi(s_K)=\pi-0^+$ to $\delta_\pi(s_K)=\pi+0^+$, and it cannot be connected continuously with the one for $\delta_\pi(s_K)<\pi$. Our solution for $\Gamma_\pi(t)$ from Yndur\'ain's method does not have this unstable behaviour and is continuous under changes in the values of the parameters of the $K-$matrix, eqs.(\ref{km}) and (\ref{array}).
This is why, from our results, it also follows that the interesting discussion of ref.\cite{y05}, regarding whether $\delta_\pi(s_K)<\pi$ or $\geq \pi$, is no longer conclusive in explaining the disagreement between the values of refs.\cite{y04,y05} and ref.\cite{pipiscat} for $\langle r^2\rangle_s^\pi$. From our determination of $\langle r^2\rangle_s^\pi$, eq.(\ref{values}), we can also work out values for the ${\cal O}(p^4)$ $SU(2)$ $\chi PT$ low energy constant $\bar{\ell}_4$. We take the two loop expression in $\chi PT$ for $\langle r^2 \rangle_s^\pi$ \cite{pipiscat}, \begin{equation} \langle r^2 \rangle_s^\pi=\frac{3}{8\pi^2 f_\pi^2}\left\{ \bar{\ell}_4-\frac{13}{12}+\xi \Delta_r\right\}~, \label{oneloop} \end{equation} where $f_\pi=92.4$ MeV is the pion decay constant, $\xi=(M_\pi/4\pi f_\pi)^2$ and $M_\pi$ is the pion mass. First, at the one loop level $\Delta_r=0$, and one obtains, \begin{eqnarray} \bar{\ell}_4&=&4.7\pm 0.3 ~. \end{eqnarray} We now move to the determination of $\bar{\ell}_4$ based on the full two loop relation between $\langle r^2\rangle_s^\pi$ and $\bar{\ell}_4$. The expression for $\Delta_r$ can be found in Appendix~C of ref.\cite{pipiscat}. $\Delta_r$ is given in terms of one ${\cal O}(p^6)$ $\chi PT$ counterterm, $\widetilde{r}_{S_2}$, and four ${\cal O}(p^4)$ ones. Taking the values of all these parameters, except for $\bar{\ell}_4$, from ref.\cite{pipiscat}, and solving for $\bar{\ell}_4$, one arrives at \begin{eqnarray} \bar{\ell}_4&=&4.5\pm 0.3~. \label{l4values} \end{eqnarray} This number is in good agreement with $\bar{\ell}_4=4.4\pm 0.2$ \cite{pipiscat}. Ref.\cite{y06} also points out that one loop $\chi$PT fits to the S-, P- and D-wave scattering lengths and effective ranges give rise to much larger values for $\bar{\ell}_2$ and $\bar{\ell}_4$ than those of ref.\cite{pipiscat}. For more details we refer to \cite{y06}.
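The one-loop number can be checked directly from eq.(\ref{oneloop}) with $\Delta_r=0$, converting $\langle r^2\rangle_s^\pi$ from fm$^2$ to GeV$^{-2}$ with $\hbar c=0.1973$ GeV$\cdot$fm; a sketch:

```python
import math

hbarc = 0.1973            # GeV*fm
f_pi = 0.0924             # GeV, pion decay constant
r2 = 0.63 / hbarc**2      # <r^2>_s^pi: fm^2 -> GeV^-2
dr2 = 0.05 / hbarc**2     # its error in GeV^-2

# eq.(oneloop) at one loop (Delta_r = 0):
#   <r^2> = 3/(8 pi^2 f_pi^2) * (l4bar - 13/12)
pref = 3 / (8 * math.pi**2 * f_pi**2)
l4bar = r2 / pref + 13 / 12   # ~4.7
dl4 = dr2 / pref              # ~0.3
```

This reproduces $\bar{\ell}_4=4.7\pm 0.3$ as quoted.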
\section{Conclusions} \label{sec:conclu} In this paper we have addressed the issue of the discrepancies between the values of the quadratic pion scalar radius of Leutwyler {\it et al.} \cite{dgl90,accgl05}, $\langle r^2\rangle_s^\pi=0.61\pm 0.04$~fm$^2$, and of Yndur\'ain's papers \cite{y04,y05,y06}, $\langle r^2\rangle_s^\pi=0.75\pm 0.07$~fm$^2$. One of the reasons for the interest in a precise determination of $\langle r^2\rangle_s^\pi$ is its contribution of about 10$\%$ to $a_0^0$ and $a_0^2$, calculated with a precision of $2\%$ in ref.\cite{pipiscat}. The value taken for $\langle r^2\rangle_s^\pi$ is also important for determining the ${\cal O}(p^4)$ $\chi$PT coupling $\bar{\ell}_4$. From our study it follows that Yndur\'ain's method to calculate $\langle r^2\rangle_s^\pi$ \cite{y04,y05}, based on an Omn\`es representation of the pion scalar form factor, and that derived by solving the two(three) coupled channel Muskhelishvili-Omn\`es equations \cite{dgl90,pipiscat,m00}, are compatible. It is shown that the reason for the aforementioned discrepancy is the presence of a zero in $\Gamma_\pi(t)$ for those S-wave I=0 $T-$matrices with $\delta_\pi(s_K)\geq \pi$ and elastic below the $K\bar{K}$ threshold, with $s_K=4 m_K^2$. This zero was overlooked in refs.\cite{y04,y05}, although it is necessarily required by the approach followed there if one imposes continuity of the solution under tiny changes of the $\pi\pi$ phase shifts employed. Once this zero is taken into account, the same value for $\langle r^2\rangle_s^\pi$ is obtained irrespective of whether $\delta_\pi(s_K)\geq \pi$ or $\delta_\pi(s_K)<\pi$. Our final result is $\langle r^2\rangle_s^\pi= 0.63\pm 0.05$~fm$^2$. The estimated error takes into account the experimental uncertainties in the values of $\delta_\pi(s)$, inelasticity effects and the present ignorance about the way the phase of the form factor approaches its asymptotic value, as predicted by QCD.
Employing our value for $\langle r^2\rangle_s^\pi$ we calculate $\bar{\ell}_4=4.5\pm 0.3$. The values $\langle r^2\rangle_s^\pi=0.61\pm 0.04$~fm$^2$ and $\bar{\ell}_4=4.4\pm 0.2$ of ref.\cite{pipiscat} are then in good agreement with ours. \section*{Acknowledgements} We thank Miguel Albaladejo for providing us with numerical results from some unpublished $T-$matrices and Carlos Schat for his collaboration in a parallel research. We also thank F.J. Yndur\'ain for long discussions and B. Ananthanarayan, I. Caprini, G. Colangelo, J. Gasser and H. Leutwyler for a critical reading of a previous version of the manuscript. This work was supported in part by the MEC (Spain) and FEDER (EC) Grants FPA2004-03470 and Fis2006-03438, the Fundaci\'on S\'eneca (Murcia) grant Ref. 02975/PI/05, the European Commission (EC) RTN Network EURIDICE under Contract No. HPRN-CT2002-00311 and the HadronPhysics I3 Project (EC) Contract No RII3-CT-2004-506078. \section*{Appendices}
\section{Introduction} Let $X$ be a smooth variety, and $G$ a finite group acting holomorphically on $X$. Suppose that the quotient $X/G$ possesses a crepant resolution of singularities $V$. The McKay correspondence refers to the identification of topological invariants of $V$ with orbifold analogues of these invariants associated to the action of $G$ on $X$. The classical example is the case in which $G \subset SU(2)$ is a finite subgroup acting on $\mathbb{C}^2$. Then the quotient $\mathbb{C}^2/G$ possesses a unique crepant resolution $V$, and the Euler characteristic of $V$ coincides with the orbifold Euler characteristic $e_{orb}(\mathbb{C}^2,G) = \frac{1}{|G|}\sum_{gh=hg}e((\mathbb{C}^2)^{g,h})$. Over the years, the McKay correspondence has exhibited a remarkable versatility towards generalization. In \cite{B} Batyrev investigated a more general class of resolutions $\Bl{X/G}\rightarrow X/G$, and proved the McKay correspondence for the $E$-function in this situation. More recently, Borisov and Libgober have proven a similar result for the elliptic genus \cite{BL}. In this paper we will prove an equivariant analogue of the McKay Correspondence for the elliptic genus. The advantage of working in the equivariant setting is that, by localization, we can make sense of the elliptic genus even for open varieties. This allows us to prove a host of new formulas. One consequence of the work in this paper is a beautiful formula for the generating function of the equivariant elliptic genus of the Hilbert scheme of points on $\mathbb{C}^2$ (with the standard torus action): $$\sum_{n>0} p^n Ell((\mathbb{C}^2)^{[n]};y,q,t_1,t_2) = \prod_{m\geq 0,n>0,\ell,k}\frac{1}{(1-p^n q^m y^\ell t_1^{k_1} t_2^{k_2})^{c(nm,\ell,k)}}.$$ The terms $c(m,\ell,k)$ are the coefficients in the expansion of $Ell(\mathbb{C}^2;y,q,t_1,t_2)$ in $y,q,t_1$, and $t_2$. 
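The structure of such infinite products can be checked order by order in $p$: expanding the right-hand side, the coefficient of $p^1$ is exactly $\sum_{m,\ell,k}c(m,\ell,k)\,q^m y^\ell t_1^{k_1}t_2^{k_2}$, i.e. the elliptic genus itself. A small self-contained sketch with toy coefficients (the torus variables are suppressed and the $c$'s are invented for illustration; the actual ones come from expanding $Ell(\mathbb{C}^2)$):

```python
from collections import defaultdict

# Laurent monomials p^a q^m y^l are keyed by (a, m, l); the toy
# coefficients c(m, l) stand in for the expansion coefficients of
# the elliptic genus (torus variables suppressed for brevity)
c = {(0, -1): 1, (0, 0): 2, (0, 1): 1, (1, 0): 3}
PMAX = 2  # truncate everything at order p^PMAX

def pmul(f, g):
    h = defaultdict(int)
    for (a1, m1, l1), c1 in f.items():
        for (a2, m2, l2), c2 in g.items():
            if a1 + a2 <= PMAX:
                h[(a1 + a2, m1 + m2, l1 + l2)] += c1 * c2
    return {k: v for k, v in h.items() if v}

def inv_factor(n, m, l, e):
    """(1 - p^n q^m y^l)^(-e) truncated at p^PMAX, integer e >= 1,
    built as e copies of the geometric series."""
    base = defaultdict(int)
    k = 0
    while n * k <= PMAX:
        base[(n * k, m * k, l * k)] += 1
        k += 1
    out = {(0, 0, 0): 1}
    for _ in range(e):
        out = pmul(out, dict(base))
    return out

# right-hand side: product over n, m, l of (1 - p^n q^m y^l)^(-c(nm, l))
prod = {(0, 0, 0): 1}
for n in range(1, PMAX + 1):
    for (m, l) in c:
        e = c.get((n * m, l), 0)
        if e:
            prod = pmul(prod, inv_factor(n, m, l, e))

# the p^1 coefficient reproduces sum_{m,l} c(m, l) q^m y^l
p1 = {(m, l): v for (a, m, l), v in prod.items() if a == 1}
assert p1 == c
```

Only the $n=1$ factors contribute at order $p^1$, each linearly through its exponent, which is why the $p^1$ coefficient returns the generating coefficients themselves.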
The above formula is an equivariant generalization of the DMVV formula: $$\sum_{n>0} p^n Ell(S^{[n]};y,q) = \prod_{m\geq 0,n>0,\ell}\frac{1}{(1-p^n q^m y^\ell)^{c(nm,\ell)}}.$$ In the above formula, $S$ is a compact algebraic surface, and $Ell(S^{[n]};y,q)$ is the elliptic genus of the Hilbert scheme of $n$ points on $S$; the $c(m,\ell)$ are the coefficients in the expansion of $Ell(S;y,q)$ in $y$ and $q$. The non-equivariant DMVV formula was conjectured by string theorists Dijkgraaf, Moore, Verlinde and Verlinde \cite{DMVV}, and proven by Borisov and Libgober \cite{BL}. The equivariant version is a conjecture of Li, Liu, and Zhou \cite{LLJ}. \subsection{Background on the Elliptic Genus} For $X$ a smooth complex manifold, the elliptic genus of $X$ is defined as: \begin{align}\label{Ell Def} Ell(X) = \int_X\prod\frac{x_j\theta(\twopi{x_j}-z,\tau)}{\theta(\twopi{x_j},\tau)} \end{align} The product is taken over the Chern roots of the holomorphic tangent bundle to $X$. $\theta(\cdot,\tau)$ is the Jacobi theta function, and $z$ represents a formal parameter. Setting $y = e^{2\pi iz}$ and $q = e^{2\pi i\tau}$, the elliptic genus may also be interpreted as the index of the following differential operator: \begin{align}\label{operator} y^{-d/2}\overline{\partial}\otimes\bigotimes_{n=1}^{\infty} \Lambda_{-yq^{n-1}}T^*X\otimes\Lambda_{-y^{-1}q^n}TX \otimes S_{q^n}T^*X\otimes S_{q^n}TX \end{align} The modular properties of the Jacobi theta function endow the elliptic genus with a rich amount of structure. For example, if $X$ is Calabi-Yau, then the elliptic genus is a weak Jacobi form as a function of $(z,\tau)\in \mathbb{C}\times\mathbb{H}$. If $X$ is Calabi-Yau and possesses a nontrivial torus action, Liu has shown that the modular properties of the equivariant index of \ref{operator} actually imply its rigidity \cite{L}. In addition to these properties, the elliptic genus encodes a large number of classical algebraic and topological invariants of the space.
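The modular structure alluded to here rests on the quasi-periodicity $\theta(v+\tau,\tau)=-e^{-\pi i\tau-2\pi iv}\,\theta(v,\tau)$ of the Jacobi theta function. Assuming the common $\theta_1$ convention, $\theta(v,\tau)=2q^{1/8}\sin(\pi v)\prod_{n\geq 1}(1-q^n)(1-q^ne^{2\pi iv})(1-q^ne^{-2\pi iv})$ (the $v$-independent prefactors drop out of the identity), this can be verified numerically:

```python
import cmath

def theta(v, tau, N=60):
    # theta_1(v, tau) up to the v-independent prefactor 2 q^{1/8} prod(1-q^n),
    # which cancels on both sides of the quasi-periodicity identity
    q = cmath.exp(2j * cmath.pi * tau)
    w = cmath.exp(2j * cmath.pi * v)
    out = cmath.sin(cmath.pi * v)
    for n in range(1, N):
        out *= (1 - q**n * w) * (1 - q**n / w)
    return out

tau = 0.1 + 0.8j   # a sample point in the upper half plane
v = 0.3 + 0.1j
lhs = theta(v + tau, tau)
rhs = -cmath.exp(-1j * cmath.pi * tau - 2j * cmath.pi * v) * theta(v, tau)
assert abs(lhs - rhs) < 1e-10 * abs(rhs)
```

Since $|q|=e^{-2\pi\,\mathrm{Im}\,\tau}<1$, the truncated product converges very rapidly and the identity holds to machine precision.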
For example, letting $q\to 0$ in the expression for the elliptic genus produces $y^{-d/2}$ times the Hirzebruch $\chi_{-y}$ genus, whereas letting $y\to 1$ produces the Euler characteristic of the space. In \cite{CLW}, \cite{BLsing}, Chin-Lung Wang, Borisov, and Libgober investigated the following relative version of the elliptic genus for pairs $(X,D)$, where $D=\sum_i a_iD_i$ is a smooth divisor with normal crossings and coefficients $a_i\neq 1$: $$Ell(X,D)=\int_X\prod\frac{x_j\theta(\twopi{x_j}-z)\twopi{\theta'(0)}}{\theta(\twopi{x_j})\theta(-z)} \prod_i\frac{\theta(\twopi{c_1(D_i)}-(-a_i+1)z)\theta(z)}{\theta(\twopi{c_1(D_i)}-z)\theta((-a_i+1)z)}$$ The modular properties of the Jacobi theta function imply that the relative elliptic genus satisfies the following change of variables formula for blow-up morphisms: If $f:\Bl{X}\rightarrow X$ is the blow-up of $X$ along a smooth base with normal crossings with respect to the components of $D$, and $\Bl{D}$ is the divisor on $\Bl{X}$ satisfying: $K_{\Bl{X}}+\Bl{D} = f^*(K_X+D)$, then $Ell(\Bl{X},\Bl{D}) = Ell(X,D)$. For $Z$ a $\mathbb{Q}$-Gorenstein variety with log-terminal singularities, and $X\rightarrow Z$ a resolution of singularities with exceptional locus a normal crossing divisor $D$, Borisov and Libgober define the singular elliptic genus of $Z$ to be the relative elliptic genus of $(X,D)$. The change of variable formula, together with the Weak Factorization Theorem \cite{W} implies that this definition is well-defined. Moreover, when $Z$ possesses a crepant resolution $V$, the singular elliptic genus of $Z$ is easily seen to coincide with the elliptic genus of $V$. The singular elliptic genus (and its orbifold analogue) plays a crucial role in Borisov and Libgober's proof of the McKay correspondence for elliptic genera. Its utility stems from the fact that it behaves well with respect to a large class of birational modifications. 
The added flexibility obtained from studying the singular elliptic genus allowed Borisov and Libgober to reduce their proof to calculations involving toroidal embeddings. Their approach is similar in spirit to that of Batyrev in \cite{B}, who proved the McKay correspondence for the $E$-function by using the change of variables formula from motivic integration to reduce the case to calculations on toric varieties. When $X$ has a nontrivial torus action, we may define the equivariant elliptic genus of $X$ to be the equivariant index of the operator defined in \ref{operator}. By the index theorem \cite{AS}, this is the same as the integral in equation \ref{Ell Def} obtained by replacing the Chern roots of $TX$ with their equivariant analogues. Similarly, we may define an equivariant version of the relative elliptic genus by replacing appearances of $c_1(D_i)$ with their equivariant extensions. The bulk of this paper is devoted to proving the change of variables formula for the equivariant orbifold elliptic genus (this case subsumes the non-orbifold case). Once we establish the change of variables formula in this situation, the remaining steps in the proof of the equivariant McKay correspondence for the elliptic genus follow closely the steps given in \cite{BL}. \subsection{Outline of the Proof} In a recent preprint \cite{RW}, I proved the equivariant change of variable formula for blow-ups along complete intersections $W = D_1\cap...\cap D_k\subset X$. The idea was to interpret the blow-down $\Bl{X}\rightarrow X$ as a toroidal morphism. The stratification defined by the divisors $D_i$ determined the toroidal structure of $X$, whereas the stratification defined by the proper transforms of these divisors, together with the exceptional divisor, determined the toroidal structure of $\Bl{X}$. 
The comparison of the relative elliptic genera of the base space and its blow-up was ultimately reduced to a computation involving the combinatorics of the polyhedral complexes associated to the two toroidal embeddings. This idea was inspired by Borisov and Libgober's use of polyhedral complexes in \cite{BL} to compute the push-forward of the orbifold elliptic class under the global quotient map. Later it became apparent that the proof given in \cite{RW} could be adapted to the case in which $X$ was a ``normal cone space", i.e., a fiber product of spaces $\mathbb{P}(F\oplus 1)$, where $F\rightarrow W$ was a holomorphic vector bundle. The idea was that the Chern roots of the tautological quotient bundle $Q_F\rightarrow X$ should play the role of the ``divisors" in a polyhedral complex associated to $X$. Similarly, if $f:\Bl{X}\rightarrow X$ was the blow-up of $X$ along $W$ with exceptional divisor $E$, then the Chern roots of $f^*Q_F\otimes\mathcal{O}(-E)$, and of $\mathcal{O}(E)$ should behave like the ``divisors" of a polyhedral complex associated to $\Bl{X}$. In this paper, we refer to such polyhedral complexes associated to the data of Chern roots as ``twisted polyhedral complexes." The case of a general blow-up may be reduced to cases of this nature by using an equivariant version of deformation to the normal cone. The breakdown of the sections in this paper is as follows: For generic specializations of the parameters $(z,\tau)$, the integrand of the equivariant elliptic genus is a power series in the equivariant parameters with differential form coefficients. In section \ref{PSLocalization} we discuss convergence issues related to power series of this type and put their corresponding cohomology theory on solid ground. In section \ref{Definitions} we define our principal objects of study; namely the equivariant orbifold elliptic class and its relative version.
In sections \ref{Toric Varieties} and \ref{Polyhedral Complexes} we review some facts about the equivariant cohomology of toric varieties and discuss how it relates to computing equivariant push-forwards of toroidal morphisms. In section \ref{Toroidal Pushforward} we use these results to compute the pushforward of the orbifold elliptic class under a toroidal morphism which is birational to a quotient by a finite group. This result is the equivariant analogue of Lemma $5.4$ in \cite{BL}. Sections \ref{Deformation Normal Cone} to \ref{Blow Up Formula} are devoted to the proof of the equivariant change of variable formula. In section \ref{Deformation Normal Cone}, we prove an equivariant analogue of deformation to the normal cone, tailored specifically to handle cohomological data like the orbifold elliptic class. As stated above, this will allow us to reduce the proof of the change of variable formula to the case when $X$ is a normal cone space. In section \ref{Normal Cone Space} we prove for completeness a number of technical lemmas regarding spaces of this form. In section \ref{Twisted Polyhedral Complex} we introduce the twisted polyhedral complex for normal cone spaces. In section \ref{Blow Up Formula} we apply the techniques from the preceding sections to prove the equivariant change of variables formula. Finally, in section \ref{Equiv McKay} we prove the equivariant McKay correspondence for elliptic genera, and the equivariant DMVV formula. \begin{ack*}\rm I wish to thank my advisor Professor Kefeng Liu for introducing me to elliptic genera and for his constant support, as well as Professor Anatoly Libgober for his feedback and help with technical aspects of his work. \end{ack*} \section{Equivariant Cohomology and Power Series}\label{PSLocalization} \subsection{Preliminaries on Equivariant Cohomology} We begin by reviewing some basic aspects of equivariant cohomology. For a thorough reference on the subject see \cite{AB}.
Let $M$ be a smooth manifold and $T$ a torus acting smoothly on $M$. Let $e_1,\ldots,e_\ell$ form a basis for the Lie algebra of $T$ which is dual to the linear forms $u_1,\ldots,u_\ell$. Every $X \in \mathfrak{t}$ defines a vector field $X$ on $M$ by the formula $X(p) = \frac{d}{dt}|_{t=0}\mathrm{exp}(tX)\cdot p$. Define $\Omega^*_T(M)$ to be the ring of differential forms on $M$ which are annihilated by $\mathcal{L}_X$ for every $X \in \mathfrak{t}$. If we let $d_{\mathfrak{t}} = d+\sum_{\alpha=1}^{\ell}u_\alpha i_{e_\alpha}$, then $d_{\mathfrak{t}}$ defines an operator on $\Omega^*_T(M)\otimes \mathbb{C}[u_1,\ldots,u_\ell]$ and satisfies $d_{\mathfrak{t}}^2 = 0$. The Cartan model for equivariant cohomology is defined to be: $$H^*_T(M)_{\mathrm{Cartan}} = \frac{\ker d_{\mathfrak{t}}} {\mathrm{im} d_{\mathfrak{t}}}.$$ The translation of concepts from cohomology to equivariant cohomology is more or less routine. For example, a $T$-map $f: M\rightarrow N$ induces a pullback $f^*:H^*_T(N)\rightarrow H^*_T(M)$ as in ordinary cohomology. Similarly, for any $E \in K_T(M)$ we may define equivariant characteristic classes of $E$ which are equivariant extensions of the ordinary characteristic classes. If $p$ is a single point with trivial $T$-action, the equivariant map $\pi : M \rightarrow p$ induces a map $\pi^*:H^*_T(p)\rightarrow H^*_T(M)$. Since $H^*_T(p) = \mathbb{C}[u_1,\ldots,u_\ell]$, the map $\pi^*$ makes $H^*_T(M)$ into a $\mathbb{C}[u_1,\ldots,u_\ell]$-module. Define $H^*_T(M)_{loc} = H^*_T(M)\otimes_{\mathbb{C}[u_1,\ldots,u_\ell]}\mathbb{C}(u_1,\ldots,u_\ell)$. A fundamental result of the subject is the localization theorem: \begin{thm} Let $\set{P}$ denote the set of $T$-fixed components of $M$. Then $H^*_T(M)_{loc} \cong \bigoplus_P H^*(P)\otimes \mathbb{C}(u_1,\ldots,u_\ell)$. \end{thm} If $P$ is a fixed component of $M$, the normal bundle to $P$ splits as a sum over the characters of the $T$-action on the fibers: $N_P = \bigoplus_{\lambda}V_{\lambda}$.
Let $n^i_{\lambda}$ denote the formal Chern roots of $V_{\lambda}$. If we identify the equivariant parameters $u_1,\ldots,u_\ell$ with linear forms on the Lie algebra of $T$, then the equivariant Euler class $e(P)$ of $N_P$ is equal to $\prod_{\lambda}\prod_{i}(n^i_{\lambda}+\lambda)$. Since none of the characters $\lambda$ are equal to zero, we see that $e(P)$ is always invertible. In light of this fact, we can describe the above isomorphism more explicitly. The map $H^*_T(M)_{loc} \rightarrow \bigoplus_P H^*(P)\otimes\mathbb{C}(u_1,\ldots,u_\ell)$ is given by $\omega \mapsto \bigoplus_P \frac{i_P^*\omega}{e(P)}$, where $i_P :P \hookrightarrow M$ is the inclusion map. If $f : M \rightarrow N$ is a proper map of $T$-spaces, we have the equivariant analogue of the cohomological push-forward $f_*: H^*_T(M)\rightarrow H^*_T(N)$. As in the non-equivariant setting, $f_*$ satisfies the projection formula $f_* (f^*(\omega)\wedge\eta) = \omega\wedge f_*\eta$. The new feature in equivariant cohomology is that we have an explicit expression for the restriction of $f_*\omega$ to a fixed component in $N$. This is given by the functorial localization formula \cite{MirrorI} \cite{MirrorII}: \begin{thm} Let $f: M\rightarrow N$ be a proper map of $T$-spaces. Let $P$ be a fixed component of $N$ and let $\set{F}$ be the collection of fixed components in $M$ which $f$ maps into $P$. Let $\omega \in H^*_T(M)$. Then: $$\sum_F f_*\frac{i_F^*\omega}{e(F)} = \frac{i_P^*f_*\omega}{e(P)}.$$ \end{thm} \subsection{Power Series in Equivariant Cohomology} For simplicity, we assume that $M$ has a $T = S^1$ action. Let $X$ be the vector field on $M$ induced by the action of $T$. Let $C(M)$ denote the ring of formal power series in $u$ with coefficients in $\Omega^*(M)^T\otimes\mathbb{C}$. Then $d_X = d-ui_X : C(M) \rightarrow C(M)$. We define $H^*(C(M)) = \ker d_X/ \hbox{im } d_X$. Multiplication by $u$ gives $H^*(C(M))$ the structure of a $\mathbb{C}[u]$-module.
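A minimal sanity check of the localization formula: for the standard rotation action of $S^1$ on $\mathbb{P}^1$ there are two fixed points with tangent weights $u$ and $-u$, and the restriction of the equivariant class $c_1^T$ of the tangent bundle to each fixed point is the corresponding weight. Localization then reproduces $\int_{\mathbb{P}^1}c_1 = 2$, while a degree-zero class integrates to zero; a numerical sketch:

```python
def loc_sum(restrictions, eulers):
    # Atiyah-Bott: the integral of w over M equals the sum over
    # fixed points P of i_P^* w / e(P)
    return sum(r / e for r, e in zip(restrictions, eulers))

for u in (1.0, 2.5, -7.0):
    # integral of 1 over P^1 vanishes for degree reasons
    assert abs(loc_sum([1.0, 1.0], [u, -u])) < 1e-12
    # integral of c_1^T: restrictions are the tangent weights u, -u
    assert abs(loc_sum([u, -u], [u, -u]) - 2.0) < 1e-12
```

Both answers are independent of $u$, as they must be, since they compute ordinary (non-equivariant) integrals.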
Given any $\mathbb{C}[u]$-module $A$, we define $A_{loc} = A\otimes \mathbb{C}(u)$. Here, we interpret rational functions of the form $\frac{1}{1+uf(u)}$ as convergent power series by expanding around $u = 0$. Let $g$ be a $T$-invariant metric on $M$, and define $\theta = g(X,\cdot)$. Let $F \subset M$ be the $T$-fixed locus, and let $\rho$ be a bump function identically equal to one outside a tubular neighborhood of $F$, and equal to zero inside a smaller tubular neighborhood. (Note that we allow $F$ to have different components of varying dimensions). Then $\rho (d_X\theta)^{-1}$ is a well-defined element of $C(M)_{loc}$. Furthermore, it is easy to see that for any closed form $\omega \in C(M)_{loc}$, $\omega - d_X( \omega\cdot\theta \cdot \rho (d_X\theta)^{-1})$ has compact support in a tubular neighborhood of $F$. It follows that every closed form in $C(M)_{loc}$ may be represented by a form with compact support in a tubular neighborhood of $F$. Let $C(\nu_F)_{c}$ denote the ring of formal power series in $u$ whose coefficients are $T$-invariant forms with compact support in the normal bundle $\nu_F$. Let $\pi_* :C(\nu_F)_{c,loc} \rightarrow C(F)_{loc}$ be the map $\sum \omega_n u^n \mapsto \sum \pi_*\omega_n u^n$, where $\pi_*\omega_n$ denotes the integral of $\omega_n$ over the fiber of $\nu_F$. We first remark that $\pi_*i_X \omega = 0$ for any differential form $\omega$ with compact support in $\nu_F$. To see this, note that we can always express $\omega$ locally in the form $\omega = f(x,t)\pi^*\phi dt_1...dt_k$. Here $x$ represents the coordinates along $F$, $t$ represents the coordinates along the fiber, and $\pi^*\phi$ is the pullback of a form on $F$ via the projection $\pi : \nu_F \rightarrow F$. Thus $i_X\omega = f(x,t)\pi^*\phi\cdot i_X dt_1...dt_k$ which clearly integrates to zero along the fiber, since the degree of the form along the fiber is necessarily smaller than the fiber dimension.
It follows that $\pi_* d_X = d\pi_*$, and therefore $\pi_*$ induces a map in cohomology $\pi_* : H^*(C(\nu_F)_c)_{loc} \rightarrow H^*(C(F))_{loc}$. \begin{thm} $\pi_*$ is injective. \end{thm} \begin{proof} Let $\omega = \sum_n \omega_nu^n \in C(\nu_F)_{c,loc}$ be closed. Suppose $\pi_*\omega = d\eta$. By subtracting off $\pi^*d\eta\cdot \Phi$, where $\Phi$ is the equivariant Thom class of $\nu_F$, we may reduce to the case in which $\pi_*\omega = 0$. For any differential form $\alpha$, denote by $\alpha[i]$ the degree $i$ part. Then for some $0\leq k \leq \dim M$, $\omega = \omega[k]+\omega[k-1]+...+\omega[0]$, where $\omega[i] = \sum\omega_n[i]u^n$. Since $\omega$ is $d_X$-closed, $d\omega_n[k]=0$. Since $\pi_*\omega[k] = \sum\pi_*\omega_n[k]u^n = 0$, by the Thom isomorphism theorem, we can find compactly supported forms $y_n$ such that $dy_n = \omega_n[k]$. By averaging over $T$, we may assume that these forms are $T$-invariant. Then $\omega - d_X \sum y_n u^{n}$ has top degree $< k$ and is annihilated by $\pi_*$. The proof then follows by induction. \end{proof} Since $[\omega] = 0$ if and only if $\pi_*[\omega] = 0$, we see that $[\omega] = [\pi^*\pi_*\omega\cdot \Phi]$. Hence $[\omega]|_F = [\pi_*\omega\cdot e(\nu_F)]$, where $e(\nu_F)$ is the equivariant Euler class. Since the Euler class is invertible, the restriction map $[\omega] \mapsto [\omega]|_F$ must be invertible. This proves the localization theorem for $H^*(C(M))_{loc}$: \begin{thm} Let $F \subset M$ denote the fixed locus of $T$. Then the restriction map $H^*(C(M))_{loc} \rightarrow H^*(C(F))_{loc}$ is an isomorphism. \end{thm} We now introduce the ring of analytic forms: Let $C^{an}(M) \subset C(M)$ denote the ring of forms $\sum \omega_nu^n$ with the property that the partial sums $\sum_{n=0}^{N}\omega_ns^n$ converge in the $C^{\infty}$ sense to a form $\omega \in \Omega^*(M)^T$ for $\norm{s}$ sufficiently small. Let $B^{an}(M) = d_X(C(M))\cap C^{an}(M)$.
We define $$H^*(C^{an}(M)) = \frac{\ker d_X: C^{an}(M)\rightarrow C^{an}(M)}{B^{an}(M)}.$$ If $F$ denotes the $T$-fixed locus of $M$, the above proof of the localization theorem extends word for word to the case of analytic forms. For $s \in \mathbb{C}^*$, following Witten \cite{Witten}, let $d_s = d-si_X : \Omega^*(M)^T \rightarrow \Omega^*(M)^T$. Define $H^*_s(M) = \frac{\ker d_s}{\hbox{im }d_s}$. Again, the above proof of the localization theorem adapts easily to the case of $H^*_s(M)$. Define an equivalence relation $\sim$ on $\prod_{s\in \mathbb{C}^*}H^*_s(M)$ as follows: For $\omega, \omega' \in \prod_{s\in \mathbb{C}^*}H^*_s(M)$, we say that $\omega \sim \omega'$ if $\omega_s = \omega'_s$ for $\norm{s}$ sufficiently small. By $\omega_s$, of course, we mean the $s$-component of $\omega$. We denote the group $\prod_{s\in \mathbb{C}^*}H^*_s(M)/\sim$ by $W$. \begin{prop} There is a natural evaluation map $ev: H^*(C^{an}(M))\rightarrow W$ given by $[\sum\omega_nu^n] \mapsto [\sum\omega_ns^n]$. \end{prop} \begin{proof} Let $\omega = \sum\omega_nu^n \in C^{an}(M)$ be a closed form, and let $d_X \sum\eta_nu^n \in C^{an}(M)$. We wish to show that for $\norm{s}$ sufficiently small, $[\sum\omega_ns^n] = [\sum\omega_ns^n+d_s\sum\eta_ns^n]$ in $H^*_s(M)$. First, for $\norm{s}$ sufficiently small, both sums $\sum\omega_ns^n$ and $\sum d_s\eta_ns^n$ converge. Therefore, it suffices to prove that $\sum d_s\eta_ns^n$ is exact for these values of $s$. Let $F \subset M$ be the $T$-fixed locus. Then $\sum d_s\eta_n|_Fs^n = \sum d\eta_n|_F s^n.$ By Hodge theory, $\sum d\eta_n|_F s^n$ converges to an exact form. Hence $[\sum d_s\eta_n s^n]|_F = 0$, and therefore $[\sum d_s \eta_n s^n] = 0$ by localization. \end{proof} Let $F(x_1,...,x_r)$ be a function which is holomorphic in a neighborhood of $(0,...,0)$. Let $F = \sum_{i_1,...,i_r}a_{i_1,...,i_r}x_1^{i_1}\cdots x_r^{i_r}$ be the corresponding power series expansion.
Let $[\omega_1],...,[\omega_r]$ be equivariant classes in $H^*_T(M)$ such that $u | \deg_0 \omega_i$, i.e., such that $u$ divides the degree zero part of $\omega_i$. Note that this property is independent of the choice of representatives for $[\omega_i]$. Then $F([\omega_1],...,[\omega_r])$ is a well-defined element of $H^*(C^{an}(M))$. We can see this as follows: Fix representatives $\omega_1,...,\omega_r \in C^{an}(M)$. Write $\omega_i = \tilde{\omega_i}+f_iu$, where $f_i \in C^{\infty}(M)\otimes \mathbb{C}[u]$ and $\deg \tilde{\omega_i} > 0$. Then $$F(\omega_1,...,\omega_r) = \sum_{i_1,...,i_r}a_{i_1,...,i_r}(\tilde{\omega_1}+f_1u)^{i_1}\cdots (\tilde{\omega_r}+f_ru)^{i_r}$$ $$= \sum_{j_1,...,j_r}\tilde{\omega_1}^{j_1}\cdots\tilde{\omega_r}^{j_r}\sum_{i_1,...,i_r}a_{i_1,...,i_r} {i_1 \choose j_1}\cdots {i_r\choose j_r} (f_1u)^{i_1-j_1}\cdots (f_ru)^{i_r-j_r}$$ $$= \sum_{j_1,...,j_r}\partial^{(j_1)}\cdots\partial^{(j_r)}F(f_1u,...,f_ru)\tilde{\omega_1}^{j_1}\cdots\tilde{\omega_r}^{j_r}.$$ Here $\partial^{(j_i)} = \frac{1}{j_i!}\frac{\partial^{j_i}}{\partial x_i^{j_i}}$ denotes the divided partial derivative in the $i$-th variable. The sum over $j_1,...,j_r \geq 0$ is finite because the forms $\tilde{\omega_i}$ are nilpotent. It is therefore clear that the above expression is a well-defined $d_X$-closed form in $C^{an}(M)$. We next show that the corresponding class in $H^*(C^{an}(M))$ is independent of the choice of representatives for $[\omega_i]$. Let $H(s_1,t_1,...,s_r,t_r) = F(s_1+t_1,...,s_r+t_r)$. Then $H = F(s_1,...,s_r)+t_1\tilde{H_1}+...+t_r\tilde{H_r}$. Thus, $F(\omega_1+d_X\eta_1,...,\omega_r+d_X\eta_r) = H(\omega_1,d_X\eta_1,...,\omega_r,d_X\eta_r) = F(\omega_1,...,\omega_r)+d_X(\eta_1\tilde{H_1}+...+\eta_r\tilde{H_r})$, so $F([\omega_1],...,[\omega_r])$ is well-defined independent of our choice of representatives. Before proceeding further, we point out an important property possessed by the forms of this type.
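The truncation of the Taylor expansion can be checked mechanically in a toy model (an illustration added here, not part of the original argument): represent a single nilpotent form $\tilde\omega$ with $\tilde\omega^2 = 0$ by a dual number, and verify that for $F(x) = x^3$ the expansion $F(fu+\tilde\omega) = F(fu) + F'(fu)\,\tilde\omega$ terminates after the first derivative term, as claimed.

```python
from fractions import Fraction

# Dual numbers a + b*eps with eps^2 = 0: a toy model for a single
# nilpotent differential form (hypothetical illustration).
class Dual:
    def __init__(self, a, b):
        self.a, self.b = Fraction(a), Fraction(b)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    def __pow__(self, n):
        result = Dual(1, 0)
        for _ in range(n):
            result = result * self
        return result

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

fu = Fraction(5, 7)   # the scalar part f*u, evaluated at a point
omega = Dual(fu, 1)   # omega = fu + eps, with eps nilpotent

# F(x) = x^3: only the j = 0 and j = 1 Taylor terms survive, so
# F(fu + eps) = fu^3 + 3*fu^2 * eps.
assert omega ** 3 == Dual(fu ** 3, 3 * fu ** 2)
print("finite Taylor expansion verified")
```

With $r$ nilpotent variables the same bookkeeping applies coordinate by coordinate, which is exactly why the triple sum over $j_1,\ldots,j_r$ above is finite.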
Let $\widehat{C}(M) \subset C^{an}(M)$ denote the subring $\set{\hbox{closed invariant forms } \otimes \mathbb{C}\{u\}}$ where $\mathbb{C}\{u\}$ denotes the ring of power series in $u$ which converge in sufficiently small neighborhoods of the origin. Let $\widehat{H}(M) \subset H^*(C^{an}(M))_{loc}$ denote the subspace of forms $[\omega]$ such that $\omega|_F \in \widehat{C}(F)_{loc}$ for some representative $\omega \in [\omega]$. If $[\omega_1],...,[\omega_r] \in H^*_T(M)$ and $u | \deg_0 \omega_i$, then $F([\omega_1],...,[\omega_r]) \in \widehat{H}(M)$. This is because $\deg_0 \omega_i|_F \in \mathbb{C}[u]$ and $d\tilde{\omega_i}|_F = 0$. Note also that functorial localization holds in this situation, and implies that, for $f: M \rightarrow N$, $f_*\widehat{H}(M) \subset \widehat{H}(N)$. \begin{lem} $ev : \widehat{H}(M) \rightarrow W(M)$ is injective. \end{lem} \begin{proof} Let $[\omega] \in \widehat{H}(M)$, so that $\omega|_F = \omega_1f_1+...+\omega_kf_k$, where $\omega_i$ are closed forms on $F$ and $f_i$ are convergent Laurent series in $u$ in some small disk about the origin. If $[\omega] \neq 0$, then by localization, $[\omega]|_F \neq 0$. Without loss of generality, we may assume that $\omega_1,...,\omega_k$ are linearly independent as cohomology classes. We can always choose an arbitrarily small $s$ so that $f_1(s),...,f_k(s)$ are not all zero. We then have that $ev[\omega]_s|_F = f_1(s)[\omega_1]+...+f_k(s)[\omega_k] \neq 0$. Hence $ev[\omega] \neq 0 $ in $W(M)$. \end{proof} All the above machinery was put together to make the following argument: Let $f: M \rightarrow N$, and let $F = F([\omega_1],...,[\omega_r])$ for $[\omega_i] \in H^*_T(M)$, $u | \deg_0 \omega_i$. Suppose $\theta_n(s)$ is a sequence of $d_s$-closed forms which converge in the $C^{\infty}$ sense to a representative of $(ev F)_s$ for $\norm{s}$ sufficiently small.
If we factor $f$ as an inclusion followed by a projection, then $f_*$ makes sense on the form level and $f_*\theta_n(s)$ converge to representatives of $(f_* ev F)_s$. Now suppose that $[f_*\theta_n(s)] = [b_n(s)]$ and that $b_n(s) \rightarrow b(s)$. Then $f_*\theta_n(s) = b_n(s) + d_s\eta_n(s)$, and since $f_*\theta_n(s)$ and $b_n(s)$ converge, we have that $[b(s)] = [f_* ev F]_s$. Now if $s \mapsto [b(s)]$ corresponds to a form $ev G$ for some $G([\alpha_1],...,[\alpha_k]), [\alpha_i] \in H^*_T(N)$, then $f_* ev F = ev f_* F = ev G$, and therefore $f_* F = G$. Thus, we can compute push-forwards of summations by applying the push-forward term-by-term, so long as the corresponding summation converges in the sense discussed in this section. This observation will be used implicitly throughout this paper; for example, it will allow us in sections \ref{Polyhedral Complexes} and \ref{Twisted Polyhedral Complex} to reduce computations involving convergent power series to computations involving polynomials. \begin{rmk}\rm In what follows we will be interested in the case where $T$ is a compact torus of arbitrary dimension. It is easy to see how to generalize the above machinery to this situation. The difficulty lies more in the notation than in any other aspect. \end{rmk} \section{The Orbifold Elliptic Class}\label{Definitions} Let $X$ be a smooth projective variety with a holomorphic $T\times G$ action. Let $D = \sum_I \alpha_i D_i$ be a smooth $G$-normal crossing divisor with $\alpha_i < 1$. The $G$-normal condition means that if $g \in \mathrm{stab}_G(x)$ and $x\in D_i$, then $g\cdot D_i = D_i$. Let $X^{g,h}_\gamma$ be a connected component of the fixed locus $X^{g,h}$ for some commuting pair $g,h \in G$. Then $N_{X^{g,h}_\gamma/X}$ splits into character sub-bundles $\bigoplus_\lambda N_\lambda$, where $g$ (resp. $h$) acts on $N_\lambda$ as multiplication by $e^{2\pi i\lambda(g)}$ (resp. $e^{2\pi i\lambda(h)}$).
Let $I^{g,h}_\gamma \subset I$ index the set of divisors $D^X_i$ which contain $X^{g,h}_\gamma$. Since $D$ is $G$-normal, $g$ (resp. $h$) acts on $\mathcal{O}(D_i)|_{X^{g,h}_\gamma}$ as multiplication by $e^{2\pi i\lambda_i(g)}$ (resp. $e^{2\pi i\lambda_i(h)}$) for every $i\in I^{g,h}_\gamma$. For $i \not \in I^{g,h}_\gamma$, we define $\lambda_i(g) = \lambda_i(h) = 0$. Following \cite{BL}, we define the orbifold elliptic class associated to the pair $(X^{g,h}_\gamma,D)$ as follows: $\mathcal{E}ll_{orb}(X^{g,h}_\gamma,D)=$ \begin{align*} &(i_{X^{g,h}_\gamma})_*\bigg\{ \prod_{TX^{g,h}_\gamma}\ellip{\frac{x_i}{2\pi i}} \prod_{\lambda,N_\lambda}\ellnormTh{\frac{x_{\lambda,c}}{2\pi i} +\lambda(g)-\lambda(h)\tau}\bigg\}\\ &\prod_I\jacc{\frac{D_i}{2\pi i}+\lambda_i(g)-\lambda_i(h)\tau} {(-\alpha_i+1)} \end{align*} Here $x_i$ denote the equivariant Chern roots of $TX^{g,h}_\gamma$, $x_{\lambda,c}$ denote the equivariant Chern roots of $N_\lambda$, and (abusing notation) $D_i$ denote the equivariant first Chern classes of the corresponding divisors. Finally, we define the orbifold elliptic class of $(X,D,G)$ as follows: $$\mathcal{E}ll_{orb}(X,D,G) = \frac{1}{|G|}\sum_{gh=hg,\gamma}\mathcal{E}ll_{orb}(X^{g,h}_\gamma,D).$$ When $G = 1$, we will write $\mathcal{E}ll(X,D)$ instead of $\mathcal{E}ll_{orb}(X,D,G)$ and refer to this object as the equivariant elliptic class of the pair $(X,D)$. We view all such classes as elements of the ring $\widehat{H}(X)$. Let $\set{F}$ denote the collection of fixed components of $X$. For each $F$ let $e(F)$ denote the equivariant Euler class of the normal bundle to $F$.
We define the equivariant orbifold elliptic index $$Ell_{orb}(X,D,G)\equiv \big (\frac{2\pi i\theta(-z)}{\theta'(0)}\big )^{\dim X}\sum_F \int_F\frac{\mathcal{E}ll_{orb}(X,D,G)}{e(F)}.$$ It is a convergent power series in the equivariant parameters which depends implicitly upon the value of the complex parameter $z$ and on the lattice parameter $\tau$ used in the definition of the Jacobi theta function. We define the equivariant elliptic index $Ell(X,D)$ similarly. \section{Toric Varieties and Equivariant Cohomology}\label{Toric Varieties} For a good reference on toric varieties, see \cite{F}. Let $X$ be a smooth complete toric variety of dimension $n$. We denote the fan of $X$ by $\Sigma_X$, the lattice of $X$ by $N_X$, and the big torus by $T_X$. Let $Y$ be a smooth complete toric variety which satisfies the following properties: $(1)$: $N_X \subset N_Y$ is a finite index sublattice. $(2)$: $\Sigma_X$ is a refinement of $\Sigma_Y$ obtained by adding finitely many one-dimensional rays. There is an obvious map of fans $\nu :\Sigma_X \rightarrow \Sigma_Y$ which induces a smooth map $\mu : X \rightarrow Y$. We call a map induced by such a morphism of fans a {\it toric morphism}. It is easy to verify that $\mu: T_X \rightarrow T_Y$ is a covering map with covering group $N_Y/N_X$. Thus, we may regard $Y$ as a $T_X$-space. Our goal in this section is to obtain a convenient description of the equivariant pushforward $\mu_* : H^*_T(X)\rightarrow H^*_T(Y)$ in terms of the combinatorics of $\Sigma_X$ and $\Sigma_Y$. Here $T = T_X$. We first note that fixed points $F$ of $X$ are in $1-1$ correspondence with $n$-dimensional cones $C_F \subset \Sigma_X$. Furthermore, the infinitesimal weights of the $T$-action on $N_F$ correspond to linear forms in $\mathrm{Hom}(N_X,\mathbb{Z})$ which are dual to the generators of $C_F$ in $N_X$.
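For example (a standard toric illustration, added here for concreteness and not taken from the original text): let $X = \mathbb{P}^2$, whose fan has rays spanned by $e_1$, $e_2$, and $-e_1-e_2$. Writing $x_1,x_2$ for the basis of $\mathrm{Hom}(N_X,\mathbb{Z})$ dual to $e_1,e_2$, the three maximal cones give three fixed points, with weights $\{x_1,x_2\}$, $\{x_2-x_1,\,-x_1\}$, and $\{-x_2,\,x_1-x_2\}$; these are the dual bases of the generator pairs $\{e_1,e_2\}$, $\{e_2,-e_1-e_2\}$, and $\{-e_1-e_2,e_1\}$ respectively. As a consistency check, $$\frac{1}{x_1x_2}+\frac{1}{(x_2-x_1)(-x_1)}+\frac{1}{(-x_2)(x_1-x_2)} = 0,$$ which is the localization formula for $\int_{\mathbb{P}^2}1 = 0$.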
With this identification in mind, for any $\omega \in H_T^*(X)$, the collection of polynomials $\set{\omega|_F}_{F\in \hbox{Fix}(X)}$ defines a piecewise polynomial function on the fan $\Sigma_X$. Define $\mathbb{C}[\Sigma_X]$ to be the ring of all piecewise polynomial functions on the fan of $X$. It is well-known that the map $H_T^*(X)\rightarrow \mathbb{C}[\Sigma_X]$ described here is an isomorphism: \begin{thm} $H^*_T(X) \cong \mathbb{C}[\Sigma_X]$. \end{thm} Via this identification, we define $\nu_*:\mathbb{C}[\Sigma_X]\rightarrow \mathbb{C}[\Sigma_Y]$ to be the map which makes the following diagram commute: $$\begin{CD} \mathbb{C}[\Sigma_X] @>\nu_*>> \mathbb{C}[\Sigma_Y]\\ @| @|\\ H^*_T(X) @>\mu_*>> H^*_T(Y)\\ \end{CD}$$ Here we understand $\mathbb{C}[\Sigma_Y]$ to be the ring of piecewise polynomial functions on $\Sigma_Y$ with respect to the lattice $N_X$. We now describe $\nu_*$ more explicitly. First notice that for $f \in \mathbb{C}[\Sigma_X]$, $\nu_*f$ is given by viewing $f|_F$ as the zero degree part of an equivariant cohomology class $\omega \in H^*_T(X)$, pushing $\omega$ forward by $\mu_*$, and then forming the piecewise polynomial function defined by the zero degree part of $\mu_*\omega$. Thus, let $C \subset \Sigma_Y$ be an $n$-dimensional cone. Let $\Sigma_C \subset \Sigma_X$ denote the fan $\nu^{-1}C$, which is a union of $n$-dimensional cones $C_i$. Let $x^{C_i}_1,\ldots,x^{C_i}_n$ be the linear forms dual to $C_i$ and $x^C_1,\ldots,x^C_n$ the linear forms in $\mathrm{Hom}(N_Y,\mathbb{Z}) \subset \mathrm{Hom}(N_X,\mathbb{Z})$ dual to $C$.
By functorial localization: $$(\nu_*f)_C = \sum_{C_i \subset \Sigma_C}f_{C_i} \frac{\prod_{j=1}^{n} x^C_j}{\prod_{j=1}^{n} x^{C_i}_j}.$$ Similarly, we define $\nu^* : \mathbb{C}[\Sigma_Y]\rightarrow \mathbb{C}[\Sigma_X]$ to be the map which makes the following diagram commute: $$\begin{CD} \mathbb{C}[\Sigma_Y] @>\nu^*>> \mathbb{C}[\Sigma_X]\\ @| @|\\ H^*_T(Y) @>\mu^*>> H^*_T(X)\\ \end{CD}$$ \begin{prop} $\nu^*(f) = f\circ \nu$. \end{prop} \begin{proof} Let $\omega \in H^*_T(Y)$ be the form such that $\omega|_P = f|_P$ for every fixed point $P$. Let $F \in \mu^{-1}(P)$ be a fixed point. Then $$\begin{CD} H^*_T(Y) @>\mu^*>> H^*_T(X)\\ @VVV @VVV\\ H^*_T(P) @>\mu_F^*>> H^*_T(F)\\ \end{CD}$$ commutes. Hence $(\mu^*\omega)|_F = \mu_F^*(\omega|_P) = \mu_F^*(f_P) = f_P$. Thus $\nu^*(f)$ is the piecewise polynomial function which is equal to $f_{C_P}$ on every cone $C_F \subset \nu^{-1}C_P$. This is precisely the piecewise polynomial $f\circ \nu$. \end{proof} The map $\nu^* : \mathbb{C}[\Sigma_Y] \rightarrow \mathbb{C}[\Sigma_X]$ makes $\mathbb{C}[\Sigma_X]$ into a $\mathbb{C}[\Sigma_Y]$-module. As such, we observe: \begin{prop} $\nu_*$ is a $\mathbb{C}[\Sigma_Y]$-module homomorphism. \end{prop} \begin{proof} In other words, we wish to prove the projection formula $\nu_*(f\nu^*g) = \nu_*(f)\cdot g$. This follows from identifying $\nu_*$ with $\mu_*$, $\nu^*$ with $\mu^*$ and invoking the projection formula from equivariant cohomology. \end{proof} \section{Toroidal Embeddings and Toroidal Morphisms}\label{Polyhedral Complexes} \subsection{Definitions} Let $X$ be a compact variety and $D_X = \sum_{I_X} D^X_i$ a normal crossing divisor on $X$ with smooth irreducible components. For $I\subset I_X$, let $X_{I,j}$ denote the $j$th connected component of $\cap_I D^X_i$. Let $X^{o}_{I,j} = X_{I,j}-\cup_{I^c}D^X_i$. The collection of subvarieties $X^{o}_{I,j}$ forms a stratification of $X$.
Associated to these data is a polyhedral complex with integral structure defined as follows: Corresponding to $X_{I,j}$, define $N_{I,j} = \mathbb{Z} e_{i_1,j}+\ldots+\mathbb{Z} e_{i_k,j}$ to be the free abelian group on the elements $e_{i_1,j},\ldots,e_{i_k,j}$. Here $i_1,\ldots,i_k$ are the elements of $I$. Define $C_{I,j}$ to be the cone in the first orthant of this lattice. Whenever $I'\subset I$ and $X_{I,j}\subset X_{I',j'}$ we have natural inclusion maps $N_{I',j'}\hookrightarrow N_{I,j}$ and $C_{I',j'}\hookrightarrow C_{I,j}$. Define $\Sigma_X$ to be the polyhedral complex with integral structure obtained by gluing the cones $C_{I,j}$ together according to these inclusion maps. Let $\mathbb{C}[\Sigma_X]$ denote the ring of piecewise polynomial functions on $\Sigma_X$. Fix $C \subset \Sigma_X$. Define $f^C$ to be the piecewise polynomial function which is equal to $\prod_{j=1}^{\dim C}x^{C}_j$ on every cone containing $C$, and equal to zero everywhere else. As in the toric geometry case, there is a natural correspondence between piecewise linear functions on $\Sigma_X$ and Cartier divisors whose irreducible components are components of $D_X$. We denote the piecewise linear function corresponding to $D$ by $f^D$. \subsection{Toroidal Morphisms} Our primary interest in this section is the study of toroidal morphisms. A toroidal morphism is a map $\mu: (X,D_X,\Sigma_X) \rightarrow (Y,D_Y,\Sigma_Y)$ which satisfies the following: $(1)$: $\mu : X-D_X \rightarrow Y-D_Y$ is an unramified cover. $(2)$: $\mu$ maps the closure of a stratum in $X$ to the closure of a stratum in $Y$. $(3)$: Let $U_y$ be an analytic neighborhood of $y \in Y$ such that the components of $D_Y$ passing through $y$ correspond to coordinate hyperplanes. Then for $x \in \mu^{-1}(y)$, there exists an analytic neighborhood $U_x$ of $x$ such that the components of $D_X$ passing through $x$ correspond to coordinate hyperplanes of $U_x$.
Moreover, the map $U_x \rightarrow U_y$ is given by monomial functions in the coordinates. Corresponding to $\mu$, we can define a map $\nu : \Sigma_X \rightarrow \Sigma_Y$ as follows: Let $C_{I,i} \subset \Sigma_X$ and let $e_1,\ldots,e_k \in N_{I,i}$ be the generators of $C_{I,i}$ which correspond to the divisors $D^X_1,\ldots, D^X_k$. We have that $\mu(X_{I,i}) = Y_{J,j}$. Let $v_1,\ldots,v_\ell \in N_{J,j}$ be the generators of $C_{J,j}$ which correspond to the divisors $D^Y_1,\ldots,D^Y_\ell$. For $1 \leq s \leq k$, $1 \leq t \leq \ell$, define $a_{st}$ to be the coefficient of $D^X_s$ in the divisor $\mu^*(D^Y_t)$. Then we define $\nu(e_s) = \sum a_{st}v_t$. Note that if $(X,\Sigma_X) \rightarrow (Y,\Sigma_Y)$ is a smooth toric morphism of toric varieties, then $\nu :\Sigma_X \rightarrow \Sigma_Y$ is the natural morphism of polyhedral complexes. We have the following proposition relating $\nu$ to $\mu$: \begin{prop}\label{Axioms} If $C = C_{J,j} \subset \Sigma_Y$, then $\nu^{-1}C$ is the union of fans $\Sigma_\alpha \subset \Sigma_X$ with the following properties: $(1)$: $\Sigma_\alpha$ is a refinement of $C$ obtained by adding finitely many one-dimensional rays. $(2)$: The lattice $N_\alpha$ of $\Sigma_\alpha$ is a finite index sublattice of $N_C$. $(3)$: The fans $\Sigma_\alpha$ are in $1-1$ correspondence with connected components $U_\alpha$ of $\mu^{-1}(N_{Y_{J,j}^{o}})$. The map $U_\alpha \rightarrow N_{Y_{J,j}^{o}}$ is a fibration given by the smooth toric morphism $\mathbf{P}_{\Sigma_\alpha,N_\alpha}\rightarrow \mathbf{P}_{C,N_C}$ along the fiber, and a $d_\alpha = d(\Sigma_\alpha)$-cover of $Y_{J,j}^{o}$ along the base. \end{prop} For a proof, see \cite{BL}. \subsection{Pushforward formula for Polyhedral Complexes}\label{Poly} Motivated by the description of the push-forward $\nu_*$ for toric morphisms, define $\nu_* : \mathbb{C}[\Sigma_X] \rightarrow \mathbb{C}[\Sigma_Y]$ as follows.
Let $C \subset \Sigma_Y$ be an $n$-dimensional cone with dual linear forms $x^C_1,\ldots,x^C_n$. Then for $f \in \mathbb{C}[\Sigma_X]$, we define: $$(\nu_*f)_C = \sum_\alpha d_\alpha \sum_{C_i \in \Sigma_\alpha} f_{C_i}\cdot \frac{\prod_{j=1}^{n}x^C_j}{\prod_{j=1}^{n}x^{C_i}_j}$$ The second sum is taken over the cones $C_i \subset \Sigma_\alpha$ with the same dimension as $C$. Let $V$ be the toric variety $\coprod_\alpha d_\alpha\cdot \mathbf{P}_{\Sigma_\alpha,N_\alpha}$ with polyhedral fan $\Sigma_V$. We have a natural toric morphism $V \rightarrow \mathbb{C}^n$. We can compactify $V$ and $\mathbb{C}^n$ to obtain a smooth toric morphism $\overline{V}\rightarrow \mathbb{P}^n$. If we view $f$ as a piecewise polynomial function on the fan of $\overline{V}$, then the above formula simply corresponds to $(\nu_*f)_C$ where $\nu :\Sigma_{\overline{V}} \rightarrow \Sigma_{\mathbb{P}^n}$. This identification allows us to apply the tools of the previous section toward the study of $\nu_*$. We first observe that $(\nu_*f)_C$ is indeed a polynomial function. This follows from the above identification of $\nu_*$ with the equivariant pushforward of a toric morphism. Furthermore, if we define $\nu^* :\mathbb{C}[\Sigma_Y]\rightarrow \mathbb{C}[\Sigma_X]$ by the formula $\nu^*(f) = f\circ \nu$ then the projection formula: $$\nu_*(f\nu^*g)_C = \nu_*(f)_C\cdot g_C$$ follows from the projection formula in equivariant cohomology. \begin{prop} $\nu_*(f)$ is a piecewise polynomial function. \end{prop} \begin{proof} We first show that $\nu_*(f^C)$ is piecewise polynomial. Fix $f = f^C$. Suppose $\nu(C) \subset C_0$ for some $C_0 \subset \Sigma_Y$ of dimension $k = \dim C$. Then $\nu_*(f)_{C_0} = d(\Sigma_{C_0})\prod_{j=1}^{k}x^{C_0}_j$. Suppose $C_1$ is a cone containing $C_0$. We wish to show $(\nu_*f)_{C_1}$ is an extension of $(\nu_*f)_{C_0}$.
Consider the toric morphism $\sigma:\mathbf{P}_{\Sigma_{C_1},N(\Sigma_{C_1})}\rightarrow \mathbb{C}^{\dim C_1}$ induced by the map $\nu: \Sigma_{C_1}\rightarrow C_1$. Let $D_1,\ldots, D_k$ be the divisors in $\mathbf{P}_{\Sigma_{C_1},N(\Sigma_{C_1})}$ which correspond to the generators of $C$. Then the piecewise polynomial function $f \in \mathbb{C}[\Sigma_{C_1}]$ represents the equivariant Thom class of $D_1 \cap \dots \cap D_k$. Since $\sigma(D_1\cap\dots\cap D_k)$ is the affine subspace of $\mathbb{C}^{\dim C_1}$ corresponding to $C_0$, we have that $\sigma_*(f)$ is the degree of $\sigma$ along $D_1\cap\dots\cap D_k$ times the polynomial function which represents the equivariant Thom class of this subspace. But this implies that: $$\nu_*(f)_{C_1} =d(\Sigma_{C_1})\frac{[N(\Sigma_{C_1}):N(C_1)]} {[N(\Sigma_{C_0}):N(C_0)]}\prod_{j=1}^{k}x^{C_0}_j = d(\Sigma_{C_0})\prod_{j=1}^{k}x^{C_0}_j.$$ We need to explain the last equality. If $C_0$ corresponds to the stratum $Y^{o}_{I,j}$ and $U\rightarrow N_{Y^{o}_{I,j}}$ is the fibration in Proposition \ref{Axioms} corresponding to the subdivision $\Sigma_{C_0}$, then $d(\Sigma_{C_0})[N(\Sigma_{C_0}):N(C_0)]$ and $d(\Sigma_{C_1})[N(\Sigma_{C_1}):N(C_1)]$ both give the number of points in the pre-image of a generic point in $N_{Y^{o}_{I,j}}$. Next suppose that $C$ is mapped to a cone $C_0$ of strictly larger dimension. Consider the toric morphism $\mathbf{P}_{\Sigma_{C_0},N(\Sigma_{C_0})}\rightarrow \mathbf{P}_{C_0,N(C_0)}$ induced by the map $\nu: \Sigma_{C_0}\rightarrow C_0$. The polynomial function $f \in \mathbb{C}[\Sigma_{C_0}]$ represents the Thom class of an exceptional toric subvariety. Thus $\nu_*(f) = 0$, and it is easy to verify that $\nu_*(f) = 0$ on every cone containing $C_0$. Thus, $\nu_*$ maps the elements $f^C$ to piecewise polynomial functions. Since these functions generate $\mathbb{C}[\Sigma_X]$ as a $\mathbb{C}[\Sigma_Y]$-module, the proposition follows from the projection formula.
\end{proof} In what follows we assume that $\mu: X \rightarrow Y$ is an equivariant map of projective $T$-spaces. Furthermore, we assume that the irreducible components of $D_X$ and $D_Y$ are invariant under the $T$-action. Define a map $\rho_X: \mathbb{C}[\Sigma_X]\rightarrow H^*_T(X)$ as follows: Fix a cone $C = C_{I,i}$ which corresponds to a connected component of the intersection locus of the divisors $D_1,\ldots,D_k$. Define $\rho_X[f^C\cdot (f^{D_1})^{a_1}\dots (f^{D_k})^{a_k}] = \Phi_{X_{I,i}}\wedge D_1^{a_1}\wedge \ldots \wedge D_k^{a_k}$. Here $\Phi_{X_{I,i}}$ denotes the equivariant Thom class of $X_{I,i} \subset X$ and, by abuse of notation, $D_j$ denote the equivariant Thom classes of the divisors $D_j$. \begin{lem} $\rho_X$ is a ring homomorphism. \end{lem} \begin{proof} Fix cones $C_1=C_{I_1,i_1}$ and $C_2=C_{I_2,i_2}$. It suffices to prove the lemma for the polynomials $f^{C_1}$ and $f^{C_2}$. Let $I=I_1\cup I_2$. Let $C_{I,i}$ denote the cones which correspond to components of the intersection $X_{I_1,i_1}\cap X_{I_2,i_2}$. Clearly $$f^{C_1}f^{C_2} = \sum_{I,i}f^{C_{I,i}}\prod_{I_1\cap I_2}f^{D_j}.$$ Thus $\rho_X(f^{C_1}f^{C_2}) = \sum_{I,i}\Phi_{X_{I,i}}\prod_{I_1\cap I_2}D_j$. However, by the equivariant version of the excess intersection formula, this is precisely the formula for $\rho_X(f^{C_1})\rho_X(f^{C_2})$. \end{proof} \begin{lem} $\rho_X\nu^* = \mu^*\rho_Y$. \end{lem} \begin{proof} It suffices to check this for polynomials $f^{C_{I,k}}$. If $D$ is a divisor on $Y$ whose irreducible components are components of $D_Y$, then $\nu^*f^D$ is the piecewise linear function corresponding to $\mu^*D$. It follows that $\rho_X\nu^*f^D = \mu^*\rho_Y f^D$. Since all the maps are ring homomorphisms, this implies that $\rho_X\nu^* \prod_{j\in I} f^{D_j} = \mu^*\rho_Y\prod_{j\in I}f^{D_j}$. Let $\mu^*D_i = \sum_j a_{ij}E_j$ as Cartier divisors.
As in the lemma in the Appendix, choose equivariant Thom forms $\Phi_{E_j}$ and $\Phi_{D_i}$ with support in small tubular neighborhoods of their respective divisors so that: $$\mu^*\Phi_{D_i} = \sum_j a_{ij}\Phi_{E_j} +d\psi_i$$ as forms. Here $\psi_i$ are equivariant forms with compact support in $\mu^{-1}N_{D_i}$. Let $\set{I,k}$ index the connected components of $\cap_I D_i$. If we choose $N_{D_i}$ sufficiently small, then $$\prod_I \Phi_{D_i} = \sum_{I,k}(\prod_I\Phi_{D_i})_{I,k}$$ where $(\prod_I\Phi_{D_i})_{I,k}$ is the extension by zero of the form $\prod_I\Phi_{D_i}|_{N_{I,k}}$. Now $\prod_I f^{D_i} = \sum f^{C_{I,k}}$ and clearly $(\prod_I\Phi_{D_i})_{I,k}$ is a representative of $\rho_Y(f^{C_{I,k}})$. We have that $$\mu^*(\prod_I\Phi_{D_i})_{I,k} = \big\{\prod_I (\sum_j a_{ij}\Phi_{E_j} + d\psi_i) \big\}_{\mu^{-1}N_{I,k}}$$ where the subscript $\mu^{-1}N_{I,k}$ means the extension by zero of the form restricted to this open set. Since the $\psi_i$ forms have compact support in $\mu^{-1}N_{D_i}$, this form is cohomologous to $$\big\{ \prod_I \sum_j a_{ij}\Phi_{E_j} \big\}_{\mu^{-1}N_{I,k}}.$$ But this is in turn a representative of $\rho_X\nu^* f^{C_{I,k}}$. \end{proof} \begin{lem}\label{PushForward} $\mu_* \rho_X = \rho_Y\nu_*$. \end{lem} \begin{proof} Since $\rho_X\nu^* = \mu^*\rho_Y$ and the polynomials $f^C$ generate $\mathbb{C}[\Sigma_X]$ as a $\mathbb{C}[\Sigma_Y]$-module, by the projection formula it suffices to check $\mu_* \rho_X f^C = \rho_Y\nu_*f^C$. Case $1$: $C_{I,i}$ is mapped by $\nu$ to a cone $C_{J,j}$ of the same dimension. From the proof of Proposition $4$, $\nu_*f^{C_{I,i}} = df^{C_{J,j}}$ where $d$ is the degree of $\mu : X_{I,i}\rightarrow Y_{J,j}$. Thus, $\rho_Y\nu_* f^{C_{I,i}} = d\Phi_{Y_{J,j}} = \mu_*\rho_X f^{C_{I,i}}$. Case $2$: $C_{I,i}$ is mapped by $\nu$ into a cone of strictly larger dimension.
As shown in Proposition $4$, $\nu_*f^{C_{I,i}} = 0$, so $\rho_Y\nu_*f^{C_{I,i}} = 0 = \mu_*\Phi_{X_{I,i}} = \mu_*\rho_X f^{C_{I,i}}$. \end{proof} \begin{rmk}\rm It is clear using the formalism of section \ref{PSLocalization} that the above lemmas relating $\mu$ to $\nu$ extend without difficulty to the ring $\mathbb{C}[[\Sigma_X]]$ of piecewise convergent power series. In this situation, $\rho_X$ is a map from $\mathbb{C}[[\Sigma_X]]$ into $\widehat{H}(X)$. \end{rmk} \section{Elliptic Genera and Toroidal Morphisms}\label{Toroidal Pushforward} Let $(\hat{X},\sum_{I_{\hat{X}}}\hat{D}_j)$ and $(X,\sum_{I_X}D_i)$ be smooth projective toroidal embeddings with $T$-actions which leave $\hat{D}_j$ and $D_i$ invariant. Let $G$ be a finite group which acts toroidally on $\hat{X}$ and commutes with the action of $T$. Suppose that $\mu : \hat{X}\rightarrow X$ is a $T$-equivariant toroidal morphism which is birational to a global quotient by $G$. If $\alpha_i$ are the coefficients of the irreducible components $D_i$, define $\beta_j$ so that $\mu^*(K_{X}+\sum\alpha_i D_i) = K_{\hat{X}} +\sum\beta_j \hat{D}_j$. Then: \begin{thm}\label{Toroidal McKay} $$\mu_*\mathcal{E}ll_{orb}(\hat{X},\sum\beta_j \hat{D}_j,G) =\mathcal{E}ll(X,\sum\alpha_i D_i).$$ \end{thm} \begin{proof} The proof of this theorem follows almost word for word the proof of Theorem $5.1$ in \cite{BL}. The only difference is that here we examine the equivariant push-forward of equivariant cohomology classes, whereas \cite{BL} examine the push-forward of (non-equivariant) classes in the Chow ring. We reproduce the proof here for completeness. We refer frequently to the notation in the previous section: For commuting elements $g,h \in G$, let $\hat{X}^{g,h}_\gamma$ denote the $\gamma$-th fixed component of $(g,h)$. Since the action of $G$ on $\hat{X}$ is toroidal, $\hat{X}^{g,h}$ may be identified with $\hat{X}_{I^{g,h},i}$ for some indexing set $I^{g,h}\subset I_{\hat{X}}$.
Consider the following class in $\widehat{H}(\hat{X})$: $$E = \frac{1}{|G|}\sum_{gh=hg; {\hat{X}}^{g,h}_\gamma} \Phi^T_{\hat{X}^{g,h}_\gamma}\cdot\prod_{I_{\hat{X}}-I^{g,h}_\gamma} \orbellipar{\pii{\hat{D}_j}}{\beta_j}{\pii{\hat{D}_j}}\cdot$$ $$\prod_{I^{g,h}_\gamma} \orbnormTh{\pii{\hat{D}_j}+\epsilon_j(g)-\epsilon_j(h)\tau}{\beta_j} e^{2\pi i(-\beta_j+1)\epsilon_j(h)z}.$$ Here $\Phi^T_{\hat{X}^{g,h}_\gamma}$ is the equivariant Thom class of the subvariety $\hat{X}^{g,h}_\gamma$, and $\epsilon_j(g)$, etc., are defined as in the definition of the orbifold elliptic genus. Our first goal is to prove \begin{align}\label{Poly McKay} \mu_*E = \prod_{I_{X}}\orbellipar{\pii{{D}_i}}{\alpha_i}{\pii{{D}_i}} \end{align} To prove the above equality, we express both sides as the images under $\rho_{\hat{X}}$ and $\rho_X$ of piecewise convergent power series, and apply the push-forward formula from the previous section. To that end, let $F$ be the piecewise convergent power series defined as follows: Let $C^{g,h}_\gamma$ be the cone which corresponds to $\hat{X}^{g,h}_\gamma$. For each cone $C=C_{J,j}$ containing $C^{g,h}_\gamma$, let $F^{g,h}_\gamma|_C=$ $$\frac{1}{|G|}\prod_{J}\orbellipar{\frac{x^C_j}{2\pi i}+\epsilon_j(g)-\epsilon_j(h)\tau}{\beta_j}{\frac{x^C_j}{2\pi i}}e^{2\pi i(-\beta_j+1)\epsilon_j(h)z}.$$ For cones $C$ not containing $C^{g,h}_\gamma$, we define $F^{g,h}_\gamma|_C = 0$. In the above expression, $x^C_j$ are the piecewise linear functions dual to the generators of $C$. If $\hat{D}_j$ are the divisors which correspond to the generators of $C$, then $\epsilon_j(g)$, etc., are the infinitesimal weights attached to the divisors, and $\beta_j$ are the coefficients of $\hat{D}_j$. It is easy to see that $F^{g,h}_\gamma$ is a well-defined piecewise convergent power series, and that $\rho_{\hat{X}}(F^{g,h}_\gamma)$ is the $\hat{X}^{g,h}_\gamma$-th summand in the expression for $E$. We therefore define $F = \sum_{gh=hg,\gamma} F^{g,h}_\gamma$, so that $\rho_{\hat{X}}(F) = E$.
Similarly, define $H \in \mathbb{C}[[\Sigma_X]]$ to be the piecewise convergent power series: $$H|_C = \prod_{i=1}^{\dim C}\ellipar{\frac{x^C_i}{2\pi i}}{\alpha_i}.$$ Clearly $\rho_X(H)$ is equal to the right-hand side of (\ref{Poly McKay}). We are therefore reduced to proving that $\nu_* F = H$. Let $C \subset \Sigma_X$ be a cone, and let $\Sigma_\alpha$ be the subdivisions of $C$ lying in $\Sigma_{\hat{X}}$ which get mapped to $C$ under $\nu$. Let $N_\alpha$ denote the lattices of $\Sigma_\alpha$, and $N_C$ the lattice of $\Sigma_C$. Referring to the notation of section \ref{Poly}, the formula for $\nu_*$ tells us that: $$(\nu_*F)_C = \sum_\alpha d_\alpha\sum_{C_j\subset \Sigma_\alpha} F|_{C_j}\frac{\prod_{i=1}^{\dim C} x^{C}_i}{\prod_{i=1}^{\dim C} x^{C_j}_i}.$$ For each $C_j \subset \Sigma_\alpha$ with the same dimension as $C$, note that $F^{g,h}_\gamma|_{C_j} \neq 0$ if and only if $C^{g,h}_\gamma \subset C_j$, i.e., if and only if $g$ and $h$ are elements of the group $G_\alpha = N_C/N_\alpha$. We may therefore write the push-forward of $F$ as: \begin{align*} &\prod_{i=1}^{\dim C}x^C_i\sum_{\alpha}\frac{d_\alpha}{|G|} \sum_{g,h \in G_\alpha}\\ &\sum_{C_j\subset \Sigma_\alpha}\prod_{i=1}^{\dim C} \frac{\theta(\twopi{x^{C_j}_i}+\epsilon_i(g)-\epsilon_i(h)\tau-(-\beta_i+1)z)\twopi{\theta'(0)}} {\theta(\twopi{x^{C_j}_i}+\epsilon_i(g)-\epsilon_i(h)\tau)\theta(-(-\beta_i+1)z)}e^{2\pi i(-\beta_i+1)z} \end{align*} By lemma $8.1$ in \cite{BL}, this is equal to $\sum_\alpha \frac{d_\alpha |G_\alpha|}{|G|}H|_C$. But since $\sum_\alpha d_\alpha|G_\alpha|$ describes the number of points in the pre-image of a generic point in a tubular neighborhood of the closed stratum corresponding to $C$, the coefficient in front of $H|_C$ in the above expression is $1$. This completes the proof of (\ref{Poly McKay}).
To complete the proof, we apply the projection formula together with the following lemma relating the Chern classes of $\hat{X}$ to those of $X$: \begin{lem} $$\frac{c(T\hat{X})_T}{\mu^*c(TX)_T} = \frac{\prod_{I_{\hat{X}}}(1+c_1(\hat{D}_j)_T)} {\prod_{I_X}(1+c_1(D_i)_T)}$$ in $H^*_T(\hat{X})_{loc}$. \end{lem} For details, see \cite{BL}. The above lemma may be proved using an argument analogous to the proof of lemma $5$ in \cite{RW}. \end{proof} \section{Deformation to the Normal Cone}\label{Deformation Normal Cone} If $Q$ is a holomorphic function in a neighborhood of the origin, then $Q$ determines a map $\varphi_Q : K_T(\cdot) \rightarrow \widehat{H}(\cdot)$ by the rule $E \mapsto \prod_i Q(e_i)$, where $e_i$ represent the equivariant Chern roots of $E$. If, in addition, $Q(0) = 1$, then $\varphi_Q$ is multiplicative in the sense that $\varphi_Q(E_1\oplus E_2) = \varphi_Q(E_1)\varphi_Q(E_2)$. (For example, $Q(x) = \frac{x}{1-e^{-x}}$ satisfies $Q(0)=1$ and yields the equivariant Todd class.) More generally, let $H$ be a finite abelian group with characters $\set{\lambda}$. Let $\set{f_\lambda}$ be an assignment of a holomorphic function in a neighborhood of the origin for each character. Such a collection $\set{f_\lambda}$ determines a map $\psi:K_T(\cdot)\otimes R(H)\rightarrow \widehat{H}(\cdot)$ by the rule $\psi:E = \bigoplus_\lambda E_\lambda \mapsto \prod_{\lambda} \prod_i f_\lambda(e_{\lambda,i})$. Here $e_{\lambda,i}$ denote the equivariant Chern roots of the $T$-vectorbundle $E_\lambda$. Let us fix a multiplicative map $\varphi = \varphi_Q$ and a (possibly not multiplicative) map $\psi = \psi_{\set{f_\lambda}}$. Let $X$ be a compact $T\times H$-variety and let $Z$ be a smooth $T\times H$-invariant subvariety. Let $V$ be a connected component of $X^H$. In the proof of the lemma below, the only difficult case to examine is when $Z\cap V \equiv W$ is a proper subset of $V$. We therefore assume this throughout. Let $\pi: \Bl{X}\rightarrow X$ be the blow-up of $X$ along $Z$ and let $\Bl{V}$ be the proper transform of $V$.
Clearly $N_{\Bl{V}/\Bl{X}}$ has the same $H$-character decomposition as $N_{V/X}$. We may therefore make sense of $\psi(N_{\Bl{V}/\Bl{X}})$. The goal of this section is to prove the following crucial technical lemma relating $\varphi(T\Bl{V})\psi(N_{\Bl{V}/\Bl{X}})$ to $\varphi(TV)\psi(N_{V/X})$. \begin{lem}\label{deformation} $$\pi_* \varphi(T\Bl{V})\psi(N_{\Bl{V}/\Bl{X}}) - \varphi(T{V})\psi(N_{V/X}) = (i_W)_*\Theta.$$ Here $i_W$ is the inclusion map and $\Theta \in \widehat{H}(W)$ is a universal class which depends only on the data of $W$, $N_{W/V}$, and $i_W^*N_{Z/X}$. \end{lem} The argument given below is an adaptation of the argument in \cite{CLW} which was given for the non-equivariant case with $\psi = 1$. \begin{proof} Let $\Pi: M_X\rightarrow X\times\mathbb{P}^1$ be the blow-up along $Z\times\set{\infty}$. We give $M_X$ the obvious $T\times H$ action. Let $M_V$ be the proper transform of $V\times \mathbb{P}^1$. Clearly $M_V \rightarrow V\times \mathbb{P}^1$ is the blow-up along $W\times\set{\infty}$. It is easy to see that $N_{M_V/M_X} \in K_T(M_V)\otimes R(H)$ has the same $H$-character decomposition as $N_{V/X}$. Define $N_0$ to be the sub-bundle of $i_W^*N_{Z/X}$ on which $H$ acts trivially. Let $N_1 = i_W^*N_{Z/X}/N_0$. Then $i_W^*TX$ decomposes as $TW\oplus N_{W/Z}\oplus N_0\oplus N_1$. Clearly $TW\oplus N_0 = i_W^*TV$, and therefore $N_{W/V}=N_0$ is a sub-bundle of $i_W^*N_{Z/X}$ with quotient $N_1$. Let $g : M_X\rightarrow \mathbb{P}^1$ be the composition $M_X\rightarrow X\times \mathbb{P}^1\rightarrow \mathbb{P}^1$. Then $\hbox{div}(g) = X -\Bl{X}-\mathbb{P}(N_{Z/X}\oplus 1)$. Furthermore, $\hbox{div}(g|_{M_V}) = V - \Bl{V}-\mathbb{P}(N_{W/V}\oplus 1)$. Let $i_V$, $i_{\Bl{V}}$, and $i_{\mathbb{P}(N_{W/V}\oplus 1)}$ denote the respective inclusion maps of these divisors in $M_V$. Let $p: \mathbb{P}(N_{W/V}\oplus 1)\rightarrow W$ be the obvious fibration, and let $S$ denote the tautological bundle over $\mathbb{P}(N_{W/V}\oplus 1)$.
\noindent \bf{CLAIM}\rm: \begin{align} i_V^* N_{M_V/M_X} =&\hbox{ } N_{V/X} \\ i_{\Bl{V}}^* N_{M_V/M_X} =&\hbox{ } N_{\Bl{V}/\Bl{X}}\\ i_{\mathbb{P}(N_{W/V}\oplus 1)}^*N_{M_V/M_X} =&\hbox{ } p^*N_{W/Z}\oplus p^*N_1\otimes S^* \end{align} $(1)$ is obvious. To prove $(2)$, note first that $TM_X|_{\Bl{V}}$ decomposes as $T\Bl{V}\oplus N_{\Bl{V}/\Bl{X}}\oplus N_{\Bl{X}/M_X}|_{\Bl{V}}$ and also as $T\Bl{V}\oplus N_{\Bl{V}/M_V}\oplus N_{M_V/M_X}|_{\Bl{V}}$. Then simply notice that in both decompositions, $N_{\Bl{V}/\Bl{X}}$ and $i_{\Bl{V}}^* N_{M_V/M_X}$ are the nontrivial $H$-eigenspaces. To prove $(3)$, note that $TM_X|_{\mathbb{P}(N_{W/V}\oplus 1)}$ decomposes in the following two ways: \begin{align*} &TM_X|_{\mathbb{P}(N_{W/V}\oplus 1)} =\\ &T\mathbb{P}(N_{W/V}\oplus 1)\oplus\mathcal{O}(-1)_{\mathbb{P}(N_{W/V}\oplus 1)}\oplus i_{\mathbb{P}(N_{W/V}\oplus 1)}^*N_{M_V/M_X} =\\ &T\mathbb{P}(N_{W/V}\oplus 1)\oplus N_{\mathbb{P}(N_{W/V}\oplus 1)/\mathbb{P}(N_{Z/X}\oplus 1)} \oplus i_{\mathbb{P}(N_{W/V}\oplus 1)}^*\mathcal{O}(-1)_{\mathbb{P}(N_{Z/X}\oplus 1)}\\ \end{align*} Observing $i_{\mathbb{P}(N_{W/V}\oplus 1)}^*\mathcal{O}(-1)_{\mathbb{P}(N_{Z/X}\oplus 1)} = \mathcal{O}(-1)_{\mathbb{P}(N_{W/V}\oplus 1)}$ it follows that $i_{\mathbb{P}(N_{W/V}\oplus 1)}^*N_{M_V/M_X} = N_{\mathbb{P}(N_{W/V}\oplus 1)/\mathbb{P}(N_{Z/X}\oplus 1)}$. It is easy to verify that this bundle in turn is equal to $p^*N_{W/Z}\oplus p^*N_1\otimes S^*$. Since $\hbox{div}(g|_{M_V}) = V - \Bl{V}-\mathbb{P}(N_{W/V}\oplus 1)$, as equivariant classes $V = \Bl{V}+\mathbb{P}(N_{W/V}\oplus 1)$. Let $u$ be the equivariant Thom class of $\Bl{V}$ (that is, the Thom class of its normal bundle), and let $v$ be the equivariant Thom class of $\mathbb{P}(N_{W/V}\oplus 1)$. Then $u+v$ is the equivariant Thom class of $V$. Since $V$ is disjoint from $\Bl{V}$ and $\mathbb{P}(N_{W/V}\oplus 1)$, we have the relations $u(u+v) = v(u+v)=0$. 
Note also that $uv$ is the equivariant Thom class of $\mathbb{P}(N_{W/V})$, which is the exceptional divisor of $\Bl{V}\rightarrow V$. Let $f$ be the holomorphic function in a neighborhood of the origin which satisfies the relation $Q(z) = \frac{z}{f(z)}$. Then by the above claim: \begin{align*} &\varphi(TM_V)\psi(N_{M_V/M_X})f(u+v) =\hbox{ } (i_V)_*\varphi(TV)\psi(N_{V/X}) \\ &\varphi(TM_V)\psi(N_{M_V/M_X})f(u) =\hbox{ } (i_{\Bl{V}})_*\varphi(T\Bl{V})\psi(N_{\Bl{V}/\Bl{X}}) \\ &\varphi(TM_V)\psi(N_{M_V/M_X})f(v) =\hbox{ }(i_{\mathbb{P}(N_{W/V}\oplus 1)})_* \varphi(T\mathbb{P}(N_{W/V}\oplus 1))\psi(N_{\mathbb{P}(N_{W/V}\oplus 1)/\mathbb{P}(N_{Z/X}\oplus 1)})\\ \end{align*} Note that since $u$ and $v$ are equivariant Thom classes, their degree zero part is an element of $C^{\infty}(M_V)\otimes \mathfrak{t}^*$ which vanishes at the origin of $\mathfrak{t}$. Hence $f(u)$, etc., are well-defined elements of $\widehat{H}(M_V)$. Since $f(z) = z+\ldots$, we can define $g = f^{-1}$ in a possibly smaller neighborhood of the origin. Consider the two-variable holomorphic function $h(z_1,z_2) = f(z_1+z_2).$ Clearly $h(z_1,z_2) = h(g(f(z_1)),g(f(z_2)))$. Define $F(y_1,y_2) = h(g(y_1),g(y_2))$. Then $F$ is holomorphic in a neighborhood of the origin and $F(f(z_1),f(z_2))= f(z_1+z_2)$. From \cite{Hirz} we have the formula: $$F(y_1,y_2)g'(y_1)g'(y_2) = \sum_{(i,j)\neq (0,0)}\varphi(H_{ij})y_1^i y_2^j.$$ Here $\varphi(H_{ij})$ is the non-equivariant genus induced by $f$ of the degree $(1,1)$ hypersurface $H_{ij}\subset \mathbb{P}^i\times \mathbb{P}^j$. If we plug in $f(u)$ and $f(v)$ for $y_1$ and $y_2$, we get $F(f(u),f(v))g'(f(u))g'(f(v)) = f(u+v)g'(f(u))g'(f(v))$. By the relations $u(u+v)=v(u+v)=0$, this last term is cohomologous to $f(u+v)$. It is instructive to go over this last point in detail. Write $f(x+y) = \sum a_n(x+y)^n$ and $g'(f(x))g'(f(y)) = \sum b_{ij}x^iy^j$. Note that $b_{00} = 1$. Let $h_{NIJ} = \sum_{n\leq N} a_n(x+y)^n\cdot \sum_{i\leq I,j\leq J}b_{ij}x^{i}y^{j}$.
Then $h_{NIJ}\rightarrow f(x+y)g'(f(x))g'(f(y))$ in the $C^{\infty}$ topology. Abusing notation, let $u$ and $v$ denote fixed representatives for their respective cohomology classes. We have that $h_{NIJ}(u,v) = \sum_{n\leq N}a_n(u+v)^n + d\eta_{NIJ}$. Evaluating at a sufficiently small $s$ gives: $h_{NIJ}(u,v)(s) = \sum_{n\leq N}a_n(u(s)+v(s))^n + d\eta_{NIJ}(s)$. Taking the limit in the indices $N,I,J$, we get $h(u,v)(s) = f(u(s)+v(s)) + d\eta(s)$. We therefore have that $ev(h) = ev(f(u+v))$, and hence $h = f(u+v)$ in $\widehat{H}(M_V)$. At the same time, $f(u+v) = \sum_{(i,j)\neq (0,0)}\varphi(H_{ij})f(u)^if(v)^j.$ Thus \begin{align*} &\varphi(TM_V)\psi(N_{M_V/M_X})f(u+v) =\hbox{ } \varphi(TM_V)\psi(N_{M_V/M_X})f(u)+\\ &\varphi(TM_V)\psi(N_{M_V/M_X})f(v) +\varphi(TM_V)\psi(N_{M_V/M_X}) \sum_{i+j\geq 2}\varphi(H_{ij})f(u)^{i}f(v)^{j} \\ \end{align*} From the relations $u^2 = -uv$ and $v^2 = -uv$, we have that $f(u)^{i+1} = f(u)f(-v)^i$: indeed, induction on $u^2 = -uv$ gives $u^{m+n} = u^m(-v)^n$ for $m\geq 1$, and the identity follows by comparing the two power series term by term. Therefore, \begin{align*} \sum_{i+j\geq 2}\varphi(H_{ij})f(u)^{i}f(v)^{j} =&\hbox{ }\\ \sum_{i+j\geq 2, i\geq 1}&\hbox{ }\varphi(H_{ij})f(u)f(-v)^{i-1}f(v)^j + \sum_{j\geq 2}\varphi(H_{0j})f(v)^j \\ \end{align*} We therefore have $\sum_{i+j\geq 2}\varphi(H_{ij})f(u)^{i}f(v)^{j} = uv\frac{f(u)}{u}G(v)+ vJ(v)$ for some universal convergent power series $G$ and $J$. Let $\nu = i_{\mathbb{P}(N_{W/V}\oplus 1)}^*v$. Clearly $\nu = c_1(S)$. Let $w = \nu|_{\mathbb{P}(N_{W/V})}$. Finally, for ease of notation, write $N =p^*N_{W/Z}\oplus p^*N_1\otimes S^*$.
\begin{align*} &\varphi(TM_V)\psi(N_{M_V/M_X})f(u+v) =\\ &\varphi(TM_V)\psi(N_{M_V/M_X})f(u)+\varphi(TM_V)\psi(N_{M_V/M_X})f(v)+\\ &\varphi(TM_V)\psi(N_{M_V/M_X})uv\frac{f(u)}{u}G(v)+ \varphi(TM_V)\psi(N_{M_V/M_X})vJ(v)\\ \end{align*} It follows that \begin{align*} &(i_V)_*\varphi(TV)\psi(N_{V/X})=\\ &(i_{\Bl{V}})_*\varphi(T\Bl{V})\psi(N_{\Bl{V}/\Bl{X}})+ (i_{\mathbb{P}(N_{W/V}\oplus 1)})_* \varphi(T\mathbb{P}(N_{W/V}\oplus 1))\psi(N)+\\ &(i_{\mathbb{P}(N_{W/V})})_*\varphi(T\mathbb{P}(N_{W/V})\oplus S|_{\mathbb{P}(N_{W/V})}) \psi(N|_{\mathbb{P}(N_{W/V})})G(w)+\\ &(i_{\mathbb{P}(N_{W/V}\oplus 1)})_*\varphi(T\mathbb{P}(N_{W/V}\oplus 1)\oplus S)\psi(N)J(\nu)\\ \end{align*} Now apply the push-forward $\Pi_*$ to the above equation. Note that $\Pi\circ i_V$ is the inclusion $v \mapsto (v,0)$ in $V\times \mathbb{P}^1$. $\Pi\circ i_{\Bl{V}}$ is the composition of the blow-down $\Bl{V}\rightarrow V$ with the inclusion of $V$ at infinity in $V \times \mathbb{P}^1$. From the blow-up diagram: $$\begin{CD} \mathbb{P}(N_{W/V}\oplus 1) @> i_{\mathbb{P}(N_{W/V}\oplus 1)} >> M_V \\ @V\hat{\Pi}VV @VV\Pi V \\ W\times\set{\infty} @>> i_W > V\times\mathbb{P}^1 \\ \end{CD}$$ we have that $\Pi\circ i_{\mathbb{P}(N_{W/V}\oplus 1)} = i_W\circ\hat{\Pi}$. Finally, $\Pi\circ i_{\mathbb{P}(N_{W/V})}$ is clearly the composition of the blow-down map $\hat{\pi}: \mathbb{P}(N_{W/V})\rightarrow W$ with the inclusion $i_W$. Therefore, applying the pushforward $\Pi_*$ gives the equation: \begin{align*} &\varphi(TV)\psi(N_{V/X})=\\ &\pi_*\varphi(T\Bl{V})\psi(N_{\Bl{V}/\Bl{X}})+ (i_W)_*\Big\{ \hat{\Pi}_*\varphi(T\mathbb{P}(N_{W/V}\oplus 1))\psi(N)+\\ &\hat{\pi}_*\varphi(T\mathbb{P}(N_{W/V})\oplus S|_{\mathbb{P}(N_{W/V})}) \psi(N|_{\mathbb{P}(N_{W/V})})G(w)+\\ &\hat{\Pi}_*\varphi(T\mathbb{P}(N_{W/V}\oplus 1)\oplus S)\psi(N)J(\nu)\Big\}\\ \end{align*} Since the term in the curly braces depends only on the data of $i_W^*N_{Z/X}$, $N_{W/V}$, and $W$, this proves the lemma. 
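It is perhaps worth illustrating the universal formula from \cite{Hirz} in the simplest case. For the Todd genus, $f(y) = 1-e^{-y}$, so that $g(y) = -\log(1-y)$ and $g'(y) = \frac{1}{1-y}$, and one computes $$F(y_1,y_2) = f(g(y_1)+g(y_2)) = 1-(1-y_1)(1-y_2) = y_1+y_2-y_1y_2,$$ whence $$F(y_1,y_2)g'(y_1)g'(y_2) = \frac{1-(1-y_1)(1-y_2)}{(1-y_1)(1-y_2)} = \sum_{(i,j)\neq (0,0)}y_1^iy_2^j.$$ Thus $\varphi(H_{ij}) = 1$ for every $(i,j)\neq (0,0)$, as expected: each hypersurface $H_{ij}$ is rational, and the Todd genus of a rational variety equals $1$.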
\end{proof} \begin{rmk}\rm Of course in the above proof, if $Q(x) = \frac{\twopi{x}\theta(\twopi{x}-z)\twopi{\theta'(0)}}{\theta(\twopi{x})\theta(-z)}$ and $f_\lambda(x) = \frac{e^{2\pi i\lambda(h)z}\theta(\twopi{x}+\lambda(g)-\lambda(h)\tau-z)\twopi{\theta'(0)}}{\theta(\twopi{x}+\lambda(g)-\lambda(h)\tau)\theta(-z)}$, then $\varphi(TV)\psi(N_{V/X})$ is the equivariant elliptic class associated to the pair $(g,h)$ and the $(g,h)$-fixed component $V$. \end{rmk} \section{The Normal Cone Space}\label{Normal Cone Space} Let $W \subset X$ be a connected $T$-invariant subvariety of a projective $T$-space $X$. Suppose the normal bundle $N_{W/X}$ splits into a direct sum $L_1\oplus\ldots\oplus L_k$ of $T$-vectorbundles. Define $p: X^* \rightarrow W$ to be the fiber bundle with fiber $p^{-1}(w) = \mathbb{P}(L_1\oplus 1)_w\times \ldots \times \mathbb{P}(L_k\oplus 1)_w$. It is easy to see that $X^*$ contains a copy of $W$ with the same normal bundle $N_{W/X}$. In our proof of the blow-up formula, we will ultimately reduce all computations on $X$ to computations on the more manageable space $X^*$. We therefore devote this section to gathering some important facts about the topology of $X^*$. If we give the trivial vectorbundle $1$ the trivial action, then the action of $T$ on $W$ lifts naturally to $X^*$. Give $L_i$ a metric so that $T$ acts on $L_i$ by isometries, and give $\mathbb{P}(L_i\oplus 1)$ the induced metric. Define vectorbundles $Q_i \rightarrow X^*$ as follows: For $w \in W$ and $(\ell_1,\ldots,\ell_k)$ lines in $(L_1\oplus 1)_w,\ldots,(L_k\oplus 1)_w$, define $(Q_i)_{(w,\ell_1,\ldots,\ell_k)} = \ell_i^{\perp} \subset (L_i\oplus 1)_w$. These bundles inherit natural $T$-actions. Observe furthermore that $i_W^*(Q_i)$ is naturally isomorphic to $L_i$. Define $V_i \subset X^*$ to be the subvariety $\set{(w,\ell_1,\ldots,\ell_k): \ell_i = [0:1]}$.
For $i = 1,\ldots,k$, $V_i$ are connected $T$-subvarieties, with connected intersection locus, and $W = V_1\cap\ldots\cap V_k$. Finally, let $f: \Bl{X^*}\rightarrow X^*$ denote the blow-up of $X^*$ along $Z = V_1\cap\ldots\cap V_j$ with exceptional divisor $E$. We have the following intersection-theoretic result: \begin{thm}\label{top Chern} $c_{top}((f^*Q_1\oplus\ldots\oplus f^*Q_j) \otimes \mathcal{O}(-E))_T = 0.$ \end{thm} \begin{proof} We will show that $\mathrm{Hom}(L,f^*Q_1\oplus\ldots\oplus f^*Q_j)$ has an equivariant global nowhere zero section, where $L$ is a line bundle with the same equivariant first Chern class as $\mathcal{O}(E)$. We first give an explicit construction of the line bundle $L$. Let $t_1,\ldots, t_j \geq 0$. Define $S_{t_1,\ldots,t_j} \subset \Bl{X^*}$ to be the subset $\set{(w,[v_1:1],\ldots,[v_j:1],[v_1:\ldots:v_j],\overline{\ell}): \norm{v_i}=t_i}$. Here $\overline{\ell}$ represents a point in $\mathbb{P}(L_{j+1}\oplus 1)_w\times\ldots\times\mathbb{P}(L_k\oplus 1)_w$. We will refer to points in $\Bl{X^*}$ which are not contained in any $S_{t_1,\ldots,t_j}$ as \it points at infinity\rm. Let $\rho : [0,\infty) \rightarrow \mathbb{R}$ be a bump function equal to zero in the region $[0,1/3)$ and equal to one in the region $[2/3,\infty)$. For $0 \leq t_i \leq 1$ and $v = (w,[v_1:1],\ldots,[v_j:1],[v_1:\ldots:v_j],\overline{\ell})$ a point in $S_{t_1,\ldots,t_j}$, let $L_v \subset (L_1\oplus 1)_w\oplus\ldots\oplus (L_j\oplus 1)_w$ be the span of the vector $\tilde{v} = ((1-\rho(t_1))v_1,\rho(t_1),\ldots,(1-\rho(t_j))v_j,\rho(t_j)).$ Outside this set, we define $L_v$ to be the span of the vector $0\oplus 1\oplus\ldots\oplus 0\oplus 1$. This clearly defines a smooth line bundle on $\Bl{X^*}$ with the same equivariant first Chern class as $\mathcal{O}(E)$. We now prove that $\mathrm{Hom}(L,f^*Q_1\oplus\ldots\oplus f^*Q_j)$ has a global equivariant nowhere zero section.
For $v_i \in L_i$, define: $$h_i(v_i) = \begin{cases} (-v_i,\norm{v_i}^2) & \norm{v_i}\leq 1 \\ (-\frac{v_i}{\norm{v_i}^2},1) & \norm{v_i}\geq 1 \end{cases}$$ For $v = (w,[v_1:1],\ldots,[v_j:1],[v_1:\ldots:v_j],\overline{\ell}) \in S_{t_1,\ldots,t_j}$, we define $L_v \rightarrow (Q_1\oplus\ldots\oplus Q_j)_{f(v)}$ by $\tilde{v} \mapsto (h_1(v_1),\ldots,h_j(v_j))$. This section extends in a natural way to the points at infinity, giving us a \it continuous \rm nowhere zero equivariant section $s_0$ of $\mathrm{Hom}(L,f^*Q_1\oplus\ldots\oplus f^*Q_j)$. The section is only continuous because the function: $$h(x) = \begin{cases} x^2 & |x|\leq 1 \\ \frac{1}{x^2} & |x| \geq 1 \end{cases}$$ is not smooth. However, we may remedy this by approximating our continuous section $s_0$ by a smooth section and then averaging over the group $T$. Since $s_0$ is nowhere zero and is fixed by the $T$-averaging process, any sufficiently close smooth approximation remains nowhere zero after averaging over $T$. \end{proof} \begin{rmk}\rm The intuition behind the preceding theorem is that if $Z = D_1\cap\ldots\cap D_j$ is the complete intersection of a collection of normal crossing divisors, then the proper transforms $\Bl{D}_i$ of $D_i$ are disjoint when we blow up along $Z$. We therefore have that $c_{top}(\mathcal{O}(\Bl{D}_1) \oplus\ldots\oplus\mathcal{O}(\Bl{D}_j))_T= 0$. While the above theorem is known, the proof given here will be useful later. \end{rmk} One easily observes that $i_{V_i}^*Q_i = N_{V_i/X^*}$. We might expect, therefore, that $c_{top}(Q_i)_T$ is the equivariant Thom class of $V_i$.
This is in fact the case, as the following lemma proves: \begin{lem}\label{top Chern is Thom class} $c_{top}(Q_i)_T = {i_{V_i}}_*1$ \end{lem} \begin{proof} By the naturality properties of the equivariant Chern classes, it is enough to prove the above lemma for $X^* = \mathbb{P}(L\oplus 1)$, where $L$ is a $T$-vectorbundle over $W$, and $Q$ is the universal quotient bundle over $\mathbb{P}(L\oplus 1)$. Here, $W$ itself plays the role of $V_i$ in the statement of the lemma. As above, we endow the trivial bundle $1$ with the trivial $T$-action. Let us first prove that $c_{top}(Q) = {i_{W}}_*1$ in the non-equivariant category. Let $p: \mathbb{P}(L\oplus 1)\rightarrow W$ denote the obvious projection map. Let $r = \mathrm{rk}(Q)$. From the exact sequence: $$0\to S\to p^*(L\oplus 1)\to Q \to 0$$ we have that $c_{r}(Q) = \big (\frac{p^*c(L)}{1-c_1(S^*)}\big )_{r}$, where $(\cdot)_r$ denotes the degree $r$ part. Let $p_{w}: \mathbb{P}(L\oplus 1)_w \rightarrow w$ denote the restriction of $p$ to the fiber over $w$. Then $c_r(Q)|_{\mathbb{P}(L\oplus 1)_w} = c_1(S^*)^r|_{\mathbb{P}(L\oplus 1)_w}$, which clearly integrates to $1$ over the fiber. We next observe that $Q$ has a global nowhere zero section away from $W$. We define this section as follows: $$s: (w,[v:z]) \mapsto \Big(w,[v:z],\frac{\overline{z}v}{\norm{v}^2},-1\Big).$$ We therefore have that $c_r(Q)$ is exact away from $W$. Hence, by subtracting off an exact form, we may represent $c_r(Q)$ by a form which has compact support in a tubular neighborhood of $W$, and which integrates to one along every fiber of the tubular neighborhood. It follows that $c_{r}(Q) = {i_W}_*1$ at least in the non-equivariant sense. In general, $c_r(Q)_T$ is at least an equivariant extension of the Thom class of $W$. Moreover, $c_r(Q)_T$ is equivariantly exact outside $W$. We prove this as follows: Observe first that the nowhere zero section we defined on the complement of $W$ was in fact equivariant.
This section therefore induces a splitting $Q = Q'\oplus \mathbb{C}$ outside of $W$. Since we endowed $1$ with the trivial $T$-action, one may easily verify that the $\mathbb{C}$ in the above splitting inherits a trivial action as well. It follows that $c_r(Q)_T$ is equivariantly exact outside of $W$. By subtracting off an exact form, we get that $c_r(Q)_T$ may be represented by an equivariant form which has compact support in a tubular neighborhood of $W$, and which integrates to one along every fiber of the tubular neighborhood. It follows that $c_r(Q)_T$ is the equivariant Thom class of $W$. \end{proof} We next prove a formula relating the equivariant Chern class of the blow-up of $X^*$ along $Z$ to that of $X^*$. Note first that $TX^*$ splits holomorphically and equivariantly into a direct sum of sub-bundles $F\oplus M$, with $i_Z^*F = TZ$ and $i_Z^*M = N_{Z/X^*}$. We may therefore apply the following lemma to compare the equivariant Chern classes of $X^*$ and $\mathrm{Bl}_Z X^*$. \begin{lem} Let $Y$ be a complex $T$-space, and $Z \subset Y$ a $T$-invariant complex submanifold. Suppose that $TY$ splits holomorphically and equivariantly into a direct sum of sub-bundles $F\oplus M$ such that $i_Z^*F = TZ$ and $i_Z^*M = N_{Z/Y}$. Let $f: \Bl{Y}\rightarrow Y$ be the blow-up of $Y$ along $Z$ with exceptional divisor $E$. Then: \begin{align} c(T\Bl{Y})_T &= c(f^*F)_Tc(f^*M\otimes \mathcal{O}(-E))_Tc(\mathcal{O}(E))_T \end{align} in the ring $H^*_T(\Bl{Y})_{loc}$. \end{lem} \begin{proof} By localization, it suffices to prove the equality at every fixed component in $\Bl{Y}$. Let $\Bl{P} \subset \Bl{Y}$ be a fixed component which is the proper transform of a fixed component $P \subset Y$. If $P$ is disjoint from $Z$, then the equality of $(4)$ at $\Bl{P}\cong P$ is trivial. Otherwise, $\Bl{P}$ is equal to the blow-up of $P$ at $P\cap Z$. Note that $i_{P}^*F$ decomposes as $F_0\oplus F_1$, where $T$ acts trivially on $F_0$ and nontrivially on $F_1$. 
Similarly, $i_P^*M = M_0\oplus M_1$. Clearly $TP = F_0\oplus M_0$ and $N_{P/Y} = F_1\oplus M_1$. Applying $i_{\Bl{P}}^*$ to $(4)$, we have: \begin{align*} i_{\Bl{P}}^*(\mathrm{LHS}) =\hbox{ } &c(T\Bl{P})c(N_{\Bl{P}/\Bl{Y}})_T\\ i_{\Bl{P}}^*(\mathrm{RHS}) =\hbox{ } &c(f^*F_0)c(f^*M_0\otimes\mathcal{O}(-E))c(i_{\Bl{P}}^*\mathcal{O}(E))\\ \hbox{ }&c(f^*F_1)_Tc(f^*M_1\otimes\mathcal{O}(-E))_T \end{align*} Since $\Bl{P}$ is the blow-up of $P$ along $P\cap Z$ and $i_{P\cap Z}^*M_0 = N_{P\cap Z/P}$, the relation $c(T\Bl{P}) = c(f^*F_0)c(f^*M_0\otimes\mathcal{O}(-E))c(i_{\Bl{P}}^*\mathcal{O}(E))$ is well-known (see \cite{FultonIntersection}). It suffices therefore to prove that $c(N_{\Bl{P}/\Bl{Y}})_T = i_{\Bl{P}}^*(c(f^*F_1)_Tc(f^*M_1\otimes\mathcal{O}(-E))_T)$. To this end we prove the following claim: \bf{CLAIM}: $N_{\Bl{P}/\Bl{Y}} \cong i_{\Bl{P}}^*(f^*F_1\oplus f^*M_1\otimes\mathcal{O}(-E))$ \it as $T$-vectorbundles.\rm To prove this, consider $f$ as a map $f:\Bl{P}\rightarrow P$. For simplicity of notation, write $E$ for $E\cap \Bl{P}$. View $N_{\Bl{P}/\Bl{Y}}$ as a sheaf, i.e., $N_{\Bl{P}/\Bl{Y}}(U) = \Gamma(U,N_{\Bl{P}/\Bl{Y}})$. The derivative $Df: N_{\Bl{P}/\Bl{Y}}\rightarrow f^*N_{P/Y} = f^*F_1\oplus f^*M_1$ is a sheaf map, and maps onto the subsheaf $f^*F_1\oplus f^*M_1(-E)$. Here $f^*M_1(-E)$ represents the subsheaf of $f^*M_1$ corresponding to sections of $f^*M_1$ which vanish along $E$. Let $s_0$ denote the global section of $\mathcal{O}(E)$ induced by the defining equations of $E$. Then $\otimes s_0^{-1}:f^*M_1(-E)\rightarrow f^*M_1\otimes\mathcal{O}(-E)$ is a sheaf isomorphism. Define $\beta = (id\oplus \otimes s_0^{-1})\circ Df$. By computing in local coordinates, one verifies easily that $\beta: N_{\Bl{P}/\Bl{Y}}\rightarrow f^*F_1\oplus f^*M_1\otimes\mathcal{O}(-E)$ is a sheaf isomorphism. Furthermore, for $s$ a local section of $N_{\Bl{P}/\Bl{Y}}$, $s(p) = 0$ if and only if $(\beta s)(p) = 0$. 
Therefore, $\beta$ induces an isomorphism of the corresponding vectorbundles. Moreover, since $\beta$ is clearly equivariant, the vectorbundle isomorphism is equivariant. This completes the proof in the case where $\Bl{P}$ is not contained in $E$. Next, suppose that $\Bl{P} \subset E$. For this case, it suffices to prove the following fact: \bf{CLAIM} $i_E^*(T\Bl{Y}\oplus \mathbb{C}) = i_E^*(f^*F\oplus f^*M\otimes\mathcal{O}(-E)\oplus\mathcal{O}(E))$.\rm We prove this as follows: Let $\pi: \mathbb{P}(N_{Z/Y})\rightarrow Z$ be the natural projection, and let $S$ denote the tautological bundle over $\mathbb{P}(N_{Z/Y})$. Then $i_E^*T\Bl{Y} = T\mathbb{P}(N_{Z/Y})\oplus S$. Since $i_E^*f^* = \pi^*i_Z^*$, we have that $i_E^*f^*F = \pi^*i_Z^*F = \pi^*TZ$ and $i_E^*f^*M = \pi^*N_{Z/Y}$. Thus, \begin{align*} i_E^*(f^*F\oplus f^*M\otimes\mathcal{O}(-E)\oplus\mathcal{O}(E)) = &\pi^*TZ\oplus \pi^*N_{Z/Y}\otimes S^*\oplus S \end{align*} From the exact sequence $0\to S\to \pi^*N_{Z/Y}\to Q\to 0$, we have that $\pi^*N_{Z/Y}\otimes S^* = \mathbb{C} \oplus Q\otimes S^*$, where $Q$ is the tautological quotient bundle. The claim then follows from the observation that $T\mathbb{P}(N_{Z/Y}) = \pi^*TZ\oplus Q\otimes S^*$. (For $Z$ a point, this is the classical isomorphism $T\mathbb{P}(N_{Z/Y})\cong \mathrm{Hom}(S,Q) = Q\otimes S^*$.) This completes the proof of the lemma. \end{proof} \begin{rmk}\rm Note that if $\Bl{Y}$ is equivariantly formal, the above proof implies that $(4)$ holds in the unlocalized ring $H^*_T(\Bl{Y})$. \end{rmk} We may rewrite the right-hand side of $(4)$ as $c(f^*TY)_Tc(f^*M)_T^{-1}c(f^*M\otimes\mathcal{O}(-E))_Tc(\mathcal{O}(E))_T$. Viewed as an element of $\widehat{H}(\Bl{Y})$, it is easy to verify that this expression remains the same if we replace $M$ by any bundle $M'$ with $i_Z^*M' = N_{Z/Y}$. We therefore obtain the following corollary pertaining to $X^*$: \begin{cor} \label{blow up Chern class} Let $f:\Bl{X^*}\rightarrow X^*$ be the blow-up of $X^*$ along $Z = V_1\cap\ldots\cap V_j$ for $j\leq k$ with exceptional divisor $E$.
Then the following formula holds in $\widehat{H}(\Bl{X^*})$: \begin{align*} c(T\Bl{X^*})_T = \frac{c(f^*TX^*)_T} {\prod_{i=1}^jc(f^*Q_i)_T}\prod_{i=1}^j c(f^*Q_i\otimes\mathcal{O}(-E))_T c(\mathcal{O}(E))_T \end{align*} \end{cor} We end this section with a technical lemma which is the blow-up analogue of lemma \ref{top Chern is Thom class}. \begin{lem}\label{blow up Thom class} For $1\leq i\leq k$, let $\Bl{V_i}$ be the proper transform of $V_i$ under the above blow-up: $f: \Bl{X^*}\rightarrow X^*$. Then $c_{top}(f^*Q_i\otimes\mathcal{O}(-E))_T$ is the equivariant Thom class of $\Bl{V_i}$ for $1\leq i \leq j$, and $c_{top}(f^*Q_i)_T$ is the equivariant Thom class of $\Bl{V_i}$ for $j+1\leq i\leq k$. Moreover $i_{\Bl{V_i}}^*c(f^*Q_i\otimes\mathcal{O}(-E))_T = c(N_{\Bl{V_i}/\Bl{X^*}})_T$ for $1\leq i \leq j$ and $i_{\Bl{V_i}}^*c(f^*Q_i)_T = c(N_{\Bl{V_i}/\Bl{X^*}})_T$ for $j+1\leq i \leq k$. \end{lem} \begin{proof} Let $1\leq i\leq j$. Recall the equivariant continuous nowhere vanishing section $s_0$ of $\mathrm{Hom}(L,f^*Q_1\oplus\ldots\oplus f^*Q_j)$ constructed in the proof of theorem \ref{top Chern}. Let $\pi_i$ denote the projection from $f^*Q_1\oplus\ldots\oplus f^*Q_j$ onto the $i$-th factor. Then $\pi_i\circ s_0$ is an equivariant section of $f^*Q_i\otimes L^*$. From the construction of $s_0$, it is clear that $\pi_i\circ s_0$ vanishes precisely along $\Bl{V_i}$. Hence, the equivariant top Chern class of $f^*Q_i\otimes L^*$ is the equivariant Thom class of $\Bl{V_i}$. Since $L$ has the same equivariant first Chern class as $\mathcal{O}(E)$, this proves $c_{top}(f^*Q_i\otimes\mathcal{O}(-E))_T = {i_{\Bl{V_i}}}_*1$. To prove the second part of the lemma for $1\leq i\leq j$, apply $i_{\Bl{V_i}}^*$ to both sides of the equation in corollary \ref{blow up Chern class}.
The LHS becomes $c(T\Bl{V_i})_T c(N_{\Bl{V_i}/\Bl{X^*}})_T$ while the RHS becomes: \begin{align*} &\frac{c(f^*N_{V_i/X^*} \otimes\mathcal{O}(-E\cap\Bl{V_i}))_Tc(f^*TV_i)_T} {\prod_{\ell\neq i}^jc(f^*N_{V_\ell/X^*})_T}\times\\ &\prod_{\ell\neq i}^jc(f^*N_{V_\ell/X^*} \otimes\mathcal{O}(-E\cap\Bl{V_i}))_T c(\mathcal{O}(E\cap\Bl{V_i}))_T\\ \end{align*} Here we have used the fact that $i_{V_\ell}^*Q_{\ell} = N_{V_\ell/X^*}$. By corollary \ref{blow up Chern class}, the factor multiplying $i_{\Bl{V_i}}^*c(f^*Q_i\otimes\mathcal{O}(-E))_T = c(f^*N_{V_i/X^*} \otimes\mathcal{O}(-E\cap\Bl{V_i}))_T$ in the above expression is equal to $c(T\Bl{V_i})_T$. We therefore have that $i_{\Bl{V_i}}^*c(f^*Q_i\otimes\mathcal{O}(-E))_T = c(N_{\Bl{V_i}/\Bl{X^*}})_T$. Next, let $j+1\leq i\leq k$. From the proof of lemma \ref{top Chern is Thom class} we know that $Q_i$ has an equivariant nowhere zero section in the complement of $V_i$. Pulling back this section by $f$ gives an equivariant nowhere zero section in the complement of $f^{-1}V_i = \Bl{V_i}$. It follows that $c_{top}(f^*Q_i)_T$ must be localized in a neighborhood of $\Bl{V_i}$. By the equivariant Thom isomorphism, $c_{top}(f^*Q_i)_T$ must be a complex number multiple of the equivariant Thom class of $\Bl{V_i}$. But since $f_*c_{top}(f^*Q_i)_T$ is the Thom class of $V_i$, this complex number multiple must be equal to one. The proof of the second part of the lemma for $j+1\leq i\leq k$ is analogous to the proof given above for $1\leq i\leq j$. \end{proof} \section{Twisted Polyhedral Complex}\label{Twisted Polyhedral Complex} Throughout, we assume the following: (a) $X$ is a smooth projective variety with a holomorphic $T\times H$ action, where $T$ is a torus and $H$ is a finite abelian group. (b) $Z \subset X$ is a $T$-invariant smooth subvariety. (c) $V \subset X^H$ is a connected component. (d) $W = V\cap Z$ is connected. 
(see remark at the end of this section) As noted above, $i_W^*N_{Z/X}$ splits as $N_0\oplus N_1$, where $N_0$ denotes the sub-bundle of $i_W^*N_{Z/X}$ on which $H$ acts trivially. Furthermore, $N_1$ decomposes as a sum of sub-bundles $\oplus N_{\lambda}$ corresponding to the characters of the $H$-action on $N_1$. Finally, $N_{W/Z}$ also splits into character sub-bundles $\oplus N_\varepsilon$. Let $E_1,\ldots,E_\ell$, $D_1,\ldots,D_k$ be smooth normal crossing divisors on $X$ intersecting $Z$ normally and non-trivially. We label these divisors so that $i_W^*\mathcal{L}_{D_i} \hookrightarrow N_0$ and $i_W^*\mathcal{L}_{E_j} \hookrightarrow N_1$. Write $i_W^*\mathcal{L}_{D_i} = \Delta_i$ and $i_W^*\mathcal{L}_{E_j} = \xi_j$. Write $N_0 = F_0\oplus \bigoplus\Delta_i$ and $N_1 = \bigoplus F_{\lambda}\oplus \bigoplus \xi_j$. Define a new space $X^*$ as follows: $p: X^*\rightarrow W$ is the fiber bundle with fiber $p^{-1}(w) =$ \begin{align*} &\prod\mathbb{P}(N_\varepsilon\oplus 1)_w\times \mathbb{P}(F_0\oplus 1)_w\times \\ &\prod\mathbb{P}(\Delta_i\oplus 1)_w\times\prod\mathbb{P}(F_{\lambda}\oplus 1)_w\times\prod\mathbb{P}(\xi_j\oplus 1)_w \end{align*} Clearly $X^*$ contains a copy of $W$ with normal bundle $N_{W/X}$. (Embed $W$ via the point $[0:1]$ in each projective factor; the normal bundle is then $\bigoplus_\varepsilon N_\varepsilon\oplus F_0\oplus\bigoplus_i\Delta_i\oplus\bigoplus_\lambda F_{\lambda}\oplus\bigoplus_j\xi_j = N_{W/Z}\oplus N_0\oplus N_1 = N_{W/X}$.) Moreover, as in the previous section, each of the bundles $N_\varepsilon$, $F_{0}$, $F_{\lambda}$, $\Delta_i$, and $\xi_j$ is the restriction to $W$ of a global equivariant bundle: $Q_{N_\varepsilon}$, $Q_{F_0}$, $Q_{F_\lambda}$, $Q_{\Delta_i}$, and $Q_{\xi_j}$, respectively. Recall also that the top Chern classes of these bundles are Poincar\'e dual to varieties $V_{N_\varepsilon}, V_{F_0}, V_{F_\lambda}, V_{\Delta_i}$, and $V_{\xi_j}$. The analogue of $Z$ in $X^*$ is the intersection locus of the varieties $V_{F_0}, V_{F_\lambda},V_{\Delta_i},$ and $V_{\xi_j}$. We will continue to call this intersection locus $Z$.
We associate a polyhedral complex with integral structure $(\Sigma,N_\Sigma)$ to $X^*$ as follows: Let $s_\varepsilon = \mathrm{rk}(N_\varepsilon)$, $r = \mathrm{rk}(F_0)$, and let $r_{\lambda} = \mathrm{rk}(F_{\lambda})$. Let $w^{\varepsilon,a},x^{b},y^{\lambda,c},d^i$, and $e^j$ be an integral basis for $N_\Sigma =\mathbb{Z}^{\sum_\varepsilon s_\varepsilon+r+\sum_{\lambda}r_{\lambda}+k+\ell}$, for $a = 1,\ldots,s_\varepsilon$, $b =1,\ldots,r$, $c=1,\ldots,r_{\lambda}$, $i=1,\ldots,k$, and $j=1,\ldots,\ell$. Define $\Sigma$ to be the cone in the first orthant of the vectorspace $N_\Sigma\otimes\mathbb{R}$. Let $w_{\varepsilon,a},x_{b},y_{\lambda,c},d_i$, and $e_j$ be the linear forms on this vectorspace which are dual to the above basis vectors. Then $\mathbb{C}[\Sigma] = \mathbb{C}[w_{\varepsilon,a},x_{b},y_{\lambda,c},d_i,e_j]$. Let $G = \prod_\varepsilon S_{s_\varepsilon}\times S_{r}\times \prod_{\lambda}S_{r_\lambda}$, where $S_{n}$ denotes the symmetric group on $n$ letters. Then $G$ acts on $\mathbb{C}[\Sigma]$ by permuting the linear forms $w_{\varepsilon,a},x_{b},y_{\lambda,c}$ in the obvious manner. Consider the following correspondence: \begin{align*} w_{\varepsilon,a}& \longleftrightarrow T\hbox{-Chern roots of }Q_{N_\varepsilon}\\ x_{b}& \longleftrightarrow T\hbox{-Chern roots of }Q_{F_0}\\ y_{\lambda,c}& \longleftrightarrow T\hbox{-Chern roots of }Q_{F_\lambda}\\ d_i &\longleftrightarrow c_1(Q_{\Delta_i})_T\\ e_j &\longleftrightarrow c_1(Q_{\xi_j})_T\\ \end{align*} Such a correspondence defines a natural map $\rho:\mathbb{C}[\Sigma]^G\rightarrow H^*_T(X^*)$. Let $\phi: \Bl{X^*}\rightarrow X^*$ denote the blow-up of $X^*$ along $Z$ with exceptional divisor $E$. Define $\Bl{\Sigma}$ to be the polyhedral complex obtained from $\Sigma$ by adding the ray through the vector $\sum x^{b}+\sum y^{\lambda,c}+\sum d^{i}+\sum e^{j}$.
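The simplest instance of this construction is worth keeping in mind: suppose the only variables present are $x_{1}$ and $x_{2}$ (that is, $r = 2$, while $s_\varepsilon = r_{\lambda} = k = \ell = 0$). Then $\Sigma$ is the first orthant of $\mathbb{R}^2$ with $\mathbb{C}[\Sigma] = \mathbb{C}[x_{1},x_{2}]$, and $\Bl{\Sigma}$, obtained by adding the ray through $x^{1}+x^{2}$, is the fan of the blow-up of $\mathbb{C}^2$ at the origin. In $\Bl{\Sigma}$ the piece-wise linear functions dual to $x^{1}$ and $x^{2}$ are supported on the two different maximal cones, so their product vanishes identically, and the ring of piece-wise polynomial functions is $\mathbb{C}[t,x_{1},x_{2}]/(x_{1}x_{2})$, where $t$ is the function dual to the new ray.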
As before, let $w_{\varepsilon,a},x_{b},y_{\lambda,c},d_i$, and $e_j$ denote the linear forms on $\mathbb{C}[\Bl{\Sigma}]$ which are dual to the vectors $w^{\varepsilon,a},x^{b},y^{\lambda,c},d^i$, and $e^j$. Let $t$ be the linear form dual to the vector $\sum x^{b}+\sum y^{\lambda,c}+\sum d^{i}+\sum e^{j}$. Then: $$\mathbb{C}[\Bl{\Sigma}] \cong \mathbb{C}[t,w_{\varepsilon,a},x_{b},y_{\lambda,c},d_i,e_j]\Big/\Big(\prod_{b,c,\lambda,i,j}x_{b}y_{\lambda,c}d_i e_j\Big).$$ $G$ acts on $\mathbb{C}[\Bl{\Sigma}]$ in the obvious manner. Consider the correspondence: \begin{align*} w_{\varepsilon,a}& \longleftrightarrow T\hbox{-Chern roots of }\phi^*Q_{N_\varepsilon}\\ x_{b}& \longleftrightarrow T\hbox{-Chern roots of }\phi^*Q_{F_0}\otimes\mathcal{O}(-E)\\ y_{\lambda,c}& \longleftrightarrow T\hbox{-Chern roots of }\phi^*Q_{F_\lambda}\otimes \mathcal{O}(-E)\\ d_i &\longleftrightarrow c_1(\phi^*Q_{\Delta_i}\otimes\mathcal{O}(-E))_T\\ e_j &\longleftrightarrow c_1(\phi^*Q_{\xi_j}\otimes\mathcal{O}(-E))_T\\ \end{align*} By theorem \ref{top Chern}, this correspondence induces a well-defined homomorphism $\rho: \mathbb{C}[\Bl{\Sigma}]^G\rightarrow H^*_T(\Bl{X^*})$. It is easy to see that $\nu^*$ maps $\mathbb{C}[\Sigma]^{G}$ into $\mathbb{C}[\Bl{\Sigma}]^{G}$ and, similarly, that $\nu_*$ maps $\mathbb{C}[\Bl{\Sigma}]^{G}$ into $\mathbb{C}[\Sigma]^{G}$. We have the following important lemma: \begin{lem}\label{pushforward commutes} \begin{align} \rho\nu^* =& \phi^*\rho \label{a} \\ \phi_*\rho =& \rho\nu_* \label{b} \end{align} \end{lem} \begin{proof} For notational convenience, let $\bar{w}_{\varepsilon,a}, \bar{x}_b,\ldots$ denote the $T$-Chern roots corresponding to $w_{\varepsilon,a}, x_b,\ldots$. With this notation, $\rho\big(f(w_{\varepsilon,a},x_b,\ldots)\big)$ is equal to $f(\bar{w}_{\varepsilon,a},\bar{x}_b,\ldots)$.
We first prove \ref{a}: We have: \begin{align*} \phi^*\rho(f) =&\hbox{ } f(\phi^*\bar{w}_{\varepsilon,a},\phi^*\bar{x}_b,\ldots)\\ =&\hbox{ }f(\bar{w}_{\varepsilon,a},\bar{x}_b+\bar{t},\ldots) = f(\rho(w_{\varepsilon,a}),\rho(x_b+t),\ldots)\\ =&\hbox{ }f(\rho\nu^*w_{\varepsilon,a},\rho\nu^*x_b,\ldots) = \rho\nu^*f\\ \end{align*} \ref{a} allows us to reduce the proof of \ref{b} to the case where $f = t^n$. Let $N = \hbox{ codim }Z$. For $n < N-1$, $\nu_*t^n = 0 = \phi_*\rho(t)^n$, so the claim is true in this case. Otherwise, for $n = N-1+j$, $\phi_*\rho(t^n) = (-1)^{j-1}(i_Z)_*s_j(N_{Z/X^*})_T = (-1)^{j-1}(i_Z)_*i_Z^*s_j(M)_T = (-1)^{j-1}s_j(M)_Tc_{top}(M)_T$. Here $s_j(M)_T$ denotes the $j$-th equivariant Segre class of the bundle $M = Q_{F_0}\oplus \bigoplus_\lambda Q_{F_\lambda}\oplus\bigoplus_i Q_{\Delta_i}\oplus\bigoplus_j Q_{\xi_j}$. The last equality follows from lemma \ref{top Chern is Thom class}. Clearly this last expression is equal to $\rho(P)$ for some universal polynomial $P$ in the Chern roots of $M$. We are therefore reduced to proving that $\nu_*(f) = P$. Let $s =\mathrm{rank}(N_{W/Z})$ and let $\mu:\Bl{\mathbb{C}^{s+N}}\rightarrow \mathbb{C}^{s+N}$ denote the blow-up of $\mathbb{C}^{s+N}$ along $\mathbb{C}^s\times 0$, where we give both spaces the structure of toric varieties, with big torus $L$. $\mathbb{C}[\Bl{\Sigma}]$ and $\mathbb{C}[\Sigma]$ both correspond to the rings of piece-wise polynomial functions on the fans of $\Bl{\mathbb{C}^{s+N}}$ and $\mathbb{C}^{s+N}$, which are in turn isomorphic to the $L$-equivariant cohomology rings of the above spaces. We may view $\nu_*$ as the polyhedral version of the equivariant pushforward $\mu_* :H^*_L(\Bl{\mathbb{C}^{s+N}})\rightarrow H^*_L(\mathbb{C}^{s+N})$. Then $\nu_*f = P$ follows from the fact that $\mu_*(c_1(E)_L)^{N-1+j} = P$, where we evaluate $P$ at the equivariant Chern classes of the coordinate hyperplanes.
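As a toy check of the localization computation behind this last step, consider the blow-up $\mu:\Bl{\mathbb{C}^2}\rightarrow\mathbb{C}^2$ at the origin (so $N = 2$, $s = 0$, and the centre is a point with equivariant Chern roots $u_1,u_2$). The sketch below computes $\mu_*$ of powers of $e = c_1(\mathcal{O}(E))_L$ by fixed-point localization; the fixed-point data are read off from the fan with rays $(1,0),(1,1),(0,1)$ using the piecewise-linear "hat function" convention for divisor classes (sign conventions for $e$ vary by author).

```python
import sympy as sp

u1, u2 = sp.symbols('u1 u2')

# Torus-fixed points of Bl_0(C^2), with tangent weights and the restriction
# of e = c_1(O(E)) at each point, read off from the subdivided fan.
fixed_points = [
    {'weights': (u1 - u2, u2), 'e': u2},  # cone spanned by (1,0), (1,1)
    {'weights': (u1, u2 - u1), 'e': u1},  # cone spanned by (1,1), (0,1)
]

def integral(n):
    """Localized equivariant 'integral' of e^n over Bl_0(C^2)."""
    return sp.simplify(sum(p['e']**n / (p['weights'][0] * p['weights'][1])
                           for p in fixed_points))

# mu_* 1 = 1, whose localized integral over C^2 is 1/(u1*u2):
assert sp.simplify(integral(0) - 1/(u1*u2)) == 0
# mu_* e = 0: the degree is too low to survive the pushforward (n < N - 1 + 1):
assert integral(1) == 0
# mu_* e^2 = -u1*u2 = -(equivariant class of the origin), integrating to -1:
assert sp.simplify(integral(2) + 1) == 0
```

The values $\mu_*1 = 1$, $\mu_*e = 0$, and $\mu_*e^2 = -[Z]_L$ are consistent with the vanishing in low degrees and the appearance of Segre classes asserted above, up to the sign conventions in force.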
\end{proof} \begin{rmk}\rm As in section \ref{Poly}, the above theorems hold for piece-wise convergent power series. \end{rmk} \section{Blow-up Formula for Orbifold Elliptic Genus}\label{Blow Up Formula} Let $X$ be a smooth projective variety with a holomorphic $G\times T$ action, $D = \sum_i \alpha_i D_i$ a $G\times T$-invariant $G$-normal crossing divisor with coefficients $\alpha_i < 1$, $g,h \in G$ a pair of commuting elements, and $X^{g,h}_\gamma$ a component of the common fixed point locus of $g$ and $h$. Recall the definition given in section \ref{Definitions} of the orbifold elliptic class of the pair $(X^{g,h}_\gamma,D)$. The goal in this section is to prove the following theorem: \begin{thm}\label{orbifold chang of var} Let $f: \Bl{X}\rightarrow X$ be the blow-up along a smooth $G\times T$-invariant subvariety $Z$ which has normal crossings with respect to the components of $D$. Fix a commuting pair $g,h \in G$ and a component $X^{g,h}_\gamma$ of the fixed point locus $X^{g,h}$. Let $\set{\Bl{X}^{g,h}_\mu}$ denote the components of $\Bl{X}^{g,h}$ which get mapped to $X^{g,h}_\gamma$ under $f$. Then: \begin{align*} f_*\sum_\mu \mathcal{E}ll_{orb}(\Bl{X}^{g,h}_\mu,\Bl{D}) &= \mathcal{E}ll_{orb}(X^{g,h}_\gamma,D) \end{align*} Here $\Bl{D}$ is the divisor satisfying $f^*(K_X+D) = K_{\Bl{X}}+\Bl{D}$. \end{thm} \begin{proof} By the projection formula, it suffices to consider the case where every component of $D$ intersects $Z$ with multiplicity $1$. Furthermore, by applying deformation to the normal cone (lemma \ref{deformation}), we may assume that $X$ is the normal cone space $X^*$ described in section \ref{Twisted Polyhedral Complex}.
Using the notation from section \ref{Twisted Polyhedral Complex}, we have the following correspondences: \begin{align*} H &\longleftrightarrow (g,h)\\ Z\cap X^{g,h}_\gamma &\longleftrightarrow W\\ \set{D_j}_{j\in I^{g,h}_\gamma}&\longleftrightarrow \set{V_{\xi_j}}\\ \set{D_i}_{i\not \in I^{g,h}_\gamma}&\longleftrightarrow \set{V_{\Delta_i}}\\ X^{g,h}_\gamma &\longleftrightarrow \bigcap_\varepsilon V_{N_\varepsilon}\cap\bigcap_{\lambda} V_{F_\lambda}\cap\bigcap_j V_{\xi_j}\\ Z &\longleftrightarrow V_{F_0}\cap \bigcap_{\lambda}V_{F_\lambda} \cap \bigcap_{i}V_{\Delta_i}\cap \bigcap_j V_{\xi_j} \end{align*} Given these identifications, $\mathcal{E}ll_{orb}(X^{g,h}_\gamma,D)=$ \begin{align*} &\prod_{TX^*}\ellip{\frac{x_i}{2\pi i}} \prod_{\lambda,Q_{F_\lambda}}\frac{ \theta(\twopi{x_{\lambda,c}}) \theta(\twopi{x_{\lambda,c}}+\lambda(g)-\lambda(h)\tau-z)} {\theta(\twopi{x_{\lambda,c}}-z) \theta(\twopi{x_{\lambda,c}}+\lambda(g)-\lambda(h)\tau)}\times\\ &\prod_{\varepsilon,Q_{N_\varepsilon}}\frac{ \theta(\twopi{x_{\varepsilon,a}}) \theta(\twopi{x_{\varepsilon,a}}+\varepsilon(g)-\varepsilon(h)\tau-z)} {\theta(\twopi{x_{\varepsilon,a}}-z) \theta(\twopi{x_{\varepsilon,a}}+\varepsilon(g)-\varepsilon(h)\tau)}\times\\ &\prod_j\frac{\theta(\twopi{\xi_j}) \theta(\twopi{\xi_j}+\lambda_j(g)-\lambda_j(h)\tau-(-\alpha_j+1)z) \theta(-z)} {\theta(\twopi{\xi_j}-z) \theta(\twopi{\xi_j}+\lambda_j(g)-\lambda_j(h)\tau) \theta(-(-\alpha_j+1)z)}\times\\ &\prod_{i}\jacc{\frac{\Delta_i}{2\pi i}}{(-\alpha_i+1)} \end{align*} Here $x_{\lambda,c}, x_{\varepsilon,a}, \xi_j$, and $\Delta_i$ denote the equivariant Chern roots of the bundles $Q_{F_\lambda}, Q_{N_\varepsilon}, Q_{\xi_j}$, and $Q_{\Delta_i}$, respectively. The above equality follows from lemma \ref{top Chern is Thom class}. Let $f: \Bl{X^*}\rightarrow X^*$ be the blow-up along $Z$ with exceptional divisor $E$. Before proceeding further, it will be convenient to set up some new notation.
Consider the collection $I = \set{N_\varepsilon,F_0,F_\lambda,\xi_j,\Delta_i,E}$. Let $\Bl{Q}_{N_\varepsilon} = f^*Q_{N_\varepsilon}$. For $A \in I$, $A\neq N_\varepsilon, E$, let $\Bl{Q}_{A} = f^*Q_{A}\otimes\mathcal{O}(-E)$. For ease of notation later on, we also define $\Bl{Q}_E = \mathcal{O}(E)$. For $A \in I-\set{E}$, we let $\Bl{V}_A$ denote the proper transform of $V_A$, and we let $\Bl{V}_E$ simply equal $E$. Let $\Bl{X^*}^{g,h}_\mu$ be a connected component of $\Bl{X^*}^{g,h}$ which gets mapped to ${X}^{g,h}_\gamma$ under $f$. This space is the complete intersection of subvarieties $\Bl{V}_A$ for $A$ in some indexing set $I_\mu \subset I$. For $A \in I_\mu$, $\Bl{Q}_A|_{\Bl{X^*}^{g,h}_\mu} = N_{A,\mu}\otimes\Bl{\lambda}_{A,\mu}$ for some $(g,h)$-character $\Bl{\lambda}_{A,\mu}$ and some $(g,h)$-trivial bundle $N_{A,\mu}$. We extend the definition of $\Bl{\lambda}_{A,\mu}$ by letting $\Bl{\lambda}_{A,\mu} = 0$ for $A \not \in I_\mu$. Finally, we define indices $\beta_{A}$ as follows: For $A = \xi_j,\Delta_i,$ or $E$, we let $\beta_A$ be the coefficient of $\Bl{V}_A$ as a divisor in $\Bl{D}$. Otherwise, we let $\beta_A = 0$.
Applying corollary \ref{blow up Chern class}, lemma \ref{blow up Thom class}, and the above definitions, we obtain the following convenient expression for $\mathcal{E}ll_{orb}(\Bl{X^*}^{g,h}_\mu,\Bl{D})$: \begin{align*} &f^*\Big\{\prod_{TX^*}\ellip{\twopi{x_i}} \prod_{m,Q_A,A\in I}\ellipinv{\twopi{a_m}}\Big \}\times\\ &\prod_{\ell,\Bl{Q}_A,A\in I}\frac{\twopi{\tilde{a}_\ell} \theta(\twopi{\tilde{a}_\ell}+\Bl{\lambda}_{A,\mu}(g)-\Bl{\lambda}_{A,\mu}(h)\tau -(-\beta_A+1)z)\theta'(0)} {\theta(\twopi{\tilde{a}_\ell}+\Bl{\lambda}_{A,\mu}(g)-\Bl{\lambda}_{A,\mu}(h)\tau) \theta(-(-\beta_A+1)z)} \end{align*} For $A \in I-\set{E}$ we define, for convenience of notation, $\lambda_A\in R((g,h))$ and $\alpha_A\in \mathbb{Q}$ as follows: For $A = N_\varepsilon, F_\lambda, \xi_j$, and $\Delta_i$, we let $\lambda_A = \varepsilon, \lambda, \lambda_j$, and $\lambda_i$, respectively. Otherwise, we set $\lambda_A = 0$. Next, for $A = \xi_j$ or $\Delta_i$, we let $\alpha_A = \alpha_j$ or $\alpha_i$. Otherwise, we set $\alpha_A = 0$. By the projection formula, we are reduced to proving the following formula: \begin{align} &f_*\sum_{\mu}\label{exp 1} \prod_{\ell,\Bl{Q}_A,A\in I}\frac{\twopi{\tilde{a}_\ell} \theta(\twopi{\tilde{a}_\ell}+\Bl{\lambda}_{A,\mu}(g)-\Bl{\lambda}_{A,\mu}(h)\tau -(-\beta_A+1)z)\theta'(0)} {\theta(\twopi{\tilde{a}_\ell}+\Bl{\lambda}_{A,\mu}(g)-\Bl{\lambda}_{A,\mu}(h)\tau) \theta(-(-\beta_A+1)z)} =\\ &\prod_{m,Q_A,A\in I-E}\frac{\twopi{a_m}\label{exp 2} \theta(\twopi{a_m}+\lambda_{A}(g)-\lambda_{A}(h)\tau -(-\alpha_A+1)z)\theta'(0)} {\theta(\twopi{a_m}+\lambda_{A}(g)-\lambda_{A}(h)\tau) \theta(-(-\alpha_A+1)z)} \end{align} Naturally, $\tilde{a}_\ell=\tilde{a}_\ell(A)$ denote the equivariant Chern roots of $\Bl{Q}_A$ and $a_m = a_m(A)$ denote the equivariant Chern roots of $Q_A$. 
Referring to the notation from section \ref{Twisted Polyhedral Complex}, for $A = N_\varepsilon, F_0,F_\lambda,\Delta_k,\xi_j$ and $i=1,\ldots,\mathrm{rank}(A)$, let us define $x_{A,i} \in \mathbb{C}[\Sigma]$ to be $w_{\varepsilon,i}, x_i,y_{\lambda,i},d_k, e_j$, respectively. For $A \in I$ and $i=1,\ldots,\mathrm{rank}(A)$ we define $\tilde{x}_{A,i} \in \mathbb{C}[\Bl{\Sigma}]$ similarly. Define $F \in \mathbb{C}[|\Sigma|]$ to be the power series: \begin{align*} F = \prod_{i,A\in I-E}\frac{\twopi{x_{A,i}} \theta(\twopi{x_{A,i}}+\lambda_{A}(g)-\lambda_{A}(h)\tau -(-\alpha_A+1)z)\theta'(0)} {\theta(\twopi{x_{A,i}}+\lambda_{A}(g)-\lambda_{A}(h)\tau) \theta(-(-\alpha_A+1)z)} \end{align*} Define $F_\mu \in \mathbb{C}[|\Bl{\Sigma}|]=$ \begin{align*} \prod_{i,A\in I}\frac{\twopi{\tilde{x}_{A,i}} \theta(\twopi{\tilde{x}_{A,i}}+\Bl{\lambda}_{A,\mu}(g)-\Bl{\lambda}_{A,\mu}(h)\tau -(-\beta_A+1)z)\theta'(0)} {\theta(\twopi{\tilde{x}_{A,i}}+\Bl{\lambda}_{A,\mu}(g)-\Bl{\lambda}_{A,\mu}(h)\tau) \theta(-(-\beta_A+1)z)} \end{align*} Clearly expression \ref{exp 2} is $\rho$ applied to $F$, and the $\mu$-th summand in expression \ref{exp 1} is $\rho$ applied to $F_\mu$. By lemma \ref{pushforward commutes}, we are reduced to proving $\nu_* \sum_\mu F_\mu = F$. To do this, think of $(\Sigma,N_\Sigma)$ as the polyhedral complex associated to the toric variety $\mathbb{C}^M$ where $M = \dim \Sigma$. Let $\Bl{\mathbb{C}^M}$ be the toric blow-up of $\mathbb{C}^M$ which corresponds to the polyhedral subdivision $\Bl{\Sigma}\rightarrow \Sigma$ described in section \ref{Twisted Polyhedral Complex}. We may view $g$ and $h$ as elements of the big torus of $\mathbb{C}^M$, i.e., as elements of a finite index super-lattice of $N_\Sigma$. Under this identification, $x_{A,i}(g) = \lambda_A(g)$. The $(g,h)$-fixed components of $\Bl{\mathbb{C}^M}$ are in one-to-one correspondence with the fixed components $\Bl{X^*}^{g,h}_\mu$ and in one-to-one correspondence with subcones $C_\mu \subset \Bl{\Sigma}$.
These are the cones of maximal dimension which correspond to affine open sets $U_{C_\mu}$ of the form $\mathbb{C}^a\times(\mathbb{C}^*)^b$, where the characters of the $(g,h)$-representation $\mathbb{C}^a$ are all non-trivial. For $C\supset C_\mu$, let $I(C)$ index the collection of piece-wise linear functions $\tilde{x}_{A,i}$ which are dual to $C$. Since $g$ and $h \in N_\Sigma$ lie inside $C_\mu$, it makes sense to evaluate $\tilde{x}_{A,i}|_C$ at $g$ and $h$. In fact, one sees easily that $\tilde{x}_{A,i}|_C(g)=\Bl{\lambda}_{A,\mu}(g)$ and similarly for $\tilde{x}_{A,i}|_C(h)$. When distinguishing between different cones, it will be convenient to denote the collection $\set{\tilde{x}_{A,i}|_C}_{I(C)}$ by $\set{\tilde{x}_{C,j}}_{j=1}^{|I(C)|}$. With this notation, if $\tilde{x}_{C,j} = \tilde{x}_{A,i}|_C$, we also define $\beta_j = \beta_A$. We define $x_{C,j}$ and $\alpha_j$ similarly when $C\subset \Sigma$. From this it follows that for $C\supset C_\mu$: \begin{align*} F_\mu|_{C}= \prod_{j=1}^{\dim C}\frac{\twopi{\tilde{x}_{C,j}} \theta(\twopi{\tilde{x}_{C,j}}+\tilde{x}_{C,j}(g)-\tilde{x}_{C,j}(h)\tau -(-\beta_j+1)z)\theta'(0)} {\theta(\twopi{\tilde{x}_{C,j}}+\tilde{x}_{C,j}(g)-\tilde{x}_{C,j}(h)\tau) \theta(-(-\beta_j+1)z)} \end{align*} Otherwise, if $C_\mu$ is not contained in $C$, it is easy to see that $F_\mu|_C = 0$. Now let $C_\gamma \subset \Sigma$ be the cone which corresponds to $(X^*)^{g,h}_\gamma$. Fix a cone $K\subset \Sigma$ containing $C_\gamma$. Let $\Bl{\Sigma}_K$ denote the subdivision of $K$ inside $\Bl{\Sigma}$. Each cone $C \subset \Bl{\Sigma}_K$ with the same dimension as $K$ contains a unique cone $C_\mu$ for some $\mu$. Moreover, every cone $C_\mu$ is contained in one such cone $C\subset \Bl{\Sigma}_K$.
We therefore have that \begin{align*} \sum_\mu F_\mu|_{C} = \prod_{j=1}^{\dim K}\frac{\twopi{\tilde{x}_{C,j}} \theta(\twopi{\tilde{x}_{C,j}}+\tilde{x}_{C,j}(g)-\tilde{x}_{C,j}(h)\tau -(-\beta_j+1)z)\theta'(0)} {\theta(\twopi{\tilde{x}_{C,j}}+\tilde{x}_{C,j}(g)-\tilde{x}_{C,j}(h)\tau) \theta(-(-\beta_j+1)z)} \end{align*} To complete the proof, it remains to show that $(\nu_*\sum F_\mu)|_K = F|_K$. By the push-forward formula for $\nu_*$, this is equivalent to proving the identity: \begin{align*} &\sum_{C\subset \Bl{\Sigma}_K} \prod_{j=1}^{\dim K}\frac{ \theta(\twopi{\tilde{x}_{C,j}}+\tilde{x}_{C,j}(g)-\tilde{x}_{C,j}(h)\tau -(-\beta_j+1)z)\theta'(0)} {\theta(\twopi{\tilde{x}_{C,j}}+\tilde{x}_{C,j}(g)-\tilde{x}_{C,j}(h)\tau) \theta(-(-\beta_j+1)z)}=\\ &\prod_{j=1}^{\dim K}\frac{ \theta(\twopi{x_{K,j}}+x_{K,j}(g)-x_{K,j}(h)\tau -(-\alpha_j+1)z)\theta'(0)} {\theta(\twopi{x_{K,j}}+x_{K,j}(g)-x_{K,j}(h)\tau) \theta(-(-\alpha_j+1)z)} \end{align*} Here the functions $\tilde{x}_{C,j}$ are regarded as linear combinations of the functions $x_{K,j}$. The above formula follows from theorem $7$ of the preprint \cite{RW} or from lemma $8.1$ of \cite{BL}. This completes the proof. \end{proof} Now, let $Z$ be a projective $\mathbb{Q}$-Gorenstein variety with log terminal singularities and a regular $G\times T$ action. Let $f: X \rightarrow Z$ and $g: Y\rightarrow Z$ be two equivariant resolutions of singularities. We assume that the exceptional locus of each resolution is a $G$-normal divisor with simple normal crossings. Define $D_X$ so that $K_X + D_X = f^*K_Z$; define $D_Y$ similarly. Then the equivariant orbifold elliptic genera of $(X,D_X)$ and $(Y,D_Y)$ coincide.
Indeed, by the equivariant version of the weak factorization theorem \cite{W}, we may connect $X$ to $Y$ by a sequence of equivariant blow-ups and blow-downs in such a way that the blow-ups at each intermediate pair $(X_i,D_{X_i})$ occur along a smooth base with normal crossings with respect to the components of $D_{X_i}$. Moreover, the procedure described in \cite{BL} theorem $3.7$ to make the intermediate pairs $(X_i,D_{X_i})$ $G$-normal extends to the $T$-equivariant case. Hence, by the equivariant change of variables formula, $Ell_{orb}(X,D_X,G) = Ell_{orb}(Y,D_Y,G)$. We will, however, require a slightly stronger version of the above result for the purposes of this paper: \begin{thm} Let $f: (X,D_X) \rightarrow (Y,D_Y)$ be a $G\times T$-equivariant birational morphism between smooth, equivariant, $G$-normal log terminal pairs $(X,D_X)$ and $(Y,D_Y)$. Assume furthermore that $f^*(K_Y+D_Y) = K_X + D_X$. Then $f_*\mathcal{E}ll_{orb}(X,D_X,G) = \mathcal{E}ll_{orb}(Y,D_Y,G)$. \end{thm} \begin{proof} The weak factorization theorem allows us to factor $f$ into a sequence of equivariant blow-ups and blow-downs $$X=X_0\dashrightarrow X_1 \dashrightarrow\cdots\dashrightarrow X_k = Y$$ such that for some intermediate index $i_0$ the maps $X_i \rightarrow X$ are morphisms for $i \leq i_0$ and the maps $X_i \rightarrow Y$ are morphisms for $i \geq i_0$. Moreover, by \cite{BL} theorem $3.7$, we may still guarantee that all the intermediate varieties are $G$-normal with respect to the appropriate divisors. Note that since $f$ is itself a morphism, we may conclude that the maps $X_i\rightarrow Y$ are morphisms for all $i$. We now apply induction on $k$. For $k = 1$, the theorem is obvious. Otherwise, consider the intermediate variety $X_1$. By assumption, $X_1 \neq X,Y$. The map $X \dashrightarrow X_1$ is either a blow-up or a blow-down. Suppose first that $X \leftarrow X_1$ is a blow-down. Call this morphism $g$. Define $D_1$ so that $K_{X_1}+D_1 = g^*(K_X+D_X)$.
Note that if $h : X_1\rightarrow Y$ is the morphism $f\circ g$, then $K_{X_1}+D_1 = h^*(K_Y+D_Y)$. By the change of variables formula, $\mathcal{E}ll_{orb}(X,D_X,G) = g_*\mathcal{E}ll_{orb}(X_1,D_1,G)$. Therefore $f_*\mathcal{E}ll_{orb}(X,D_X,G) = h_*\mathcal{E}ll_{orb}(X_1,D_1,G)$. But this is equal to $\mathcal{E}ll_{orb}(Y,D_Y,G)$ by the induction hypothesis. The case in which $X \rightarrow X_1$ is a blow-down is proved similarly. \end{proof} \section{The Equivariant McKay Correspondence}\label{Equiv McKay} Here we use the results from the preceding sections to prove an equivariant analogue of the McKay Correspondence for the elliptic genus. As a corollary, we will arrive at an equivariant version of the DMVV formula. Let $X$ be a projective variety with a $G\times T$ action, where $G$ is a finite group and $T$ is a compact torus. Let $\psi:X\rightarrow X/G$ be the quotient morphism. Assume that $X/G$ has an equivariant crepant resolution $V$ and that $\psi^*(K_{X/G}) = K_X$. Let $F \subset X$ be a fixed component of the $G\times T$-action, and let $\set{P}\subset V$ denote the fixed components in $V$ which get mapped to $\psi(F)$. Then: \begin{thm}\label{Local McKay} $$\int_F \frac{\mathcal{E}ll_{orb}(X,G)}{e(F)}= \sum_P\int_P \frac{\mathcal{E}ll(V)}{e(P)}.$$ \end{thm} \begin{proof} Let $Z \rightarrow V$ be a sequence of equivariant blow-ups of $V$ so that the exceptional locus of the resolution $\pi:Z\rightarrow X/G$ is a divisor with simple normal crossings. Let $D_Z$ be the divisor on $Z$ such that $K_Z+D_Z = \pi^*K_{X/G}$. Define $\hat{Z}_0$ to be the normalization of $Z$ in the function field of $X$. By Abhyankar's Lemma, the induced map $\mu_0:\hat{Z}_0\rightarrow Z$ is a toroidal morphism of toroidal embeddings. Let $\hat{Z}$ be a projective toroidal resolution of singularities of $\hat{Z}_0$, and define $D_{\hat{Z}}$ so that $K_{\hat{Z}}+D_{\hat{Z}} = \mu^*(K_Z+D_Z)$, where $\mu:\hat{Z}\rightarrow Z$ is the obvious map. 
We may further assume that $\hat{Z}$ has a $G$-action, and that the pair $(\hat{Z},D_{\hat{Z}})$ is $G$-normal (see \cite{AW}). We obtain the following commutative diagram: $$\begin{CD} \hat{Z} @>\mu>> Z \\ @V\phi VV @VV\pi V \\ X @>\psi >> X/G \\ \end{CD}$$ Here, the vertical arrows are resolutions of singularities, and the horizontal arrows are, up to birational equivalence, quotient maps by $G$. It is clear that all the constructions involved (normalization, blow-up along a $T$-invariant ideal sheaf) are $T$-equivariant, and that consequently the above morphisms are $T$-equivariant. Since $\phi^*(K_X) = K_{\hat{Z}}+D_{\hat{Z}}$, we have that $\mathcal{E}ll_{orb}^T(X,G) = \phi_*\mathcal{E}ll_{orb}({\hat{Z}},D_{\hat{Z}},G)$. Let $\set{L} \subset \hat{Z}$ denote the fixed components of $\hat{Z}$ which map to $F$. By functorial localization: $$\int_F \frac{\mathcal{E}ll_{orb}(X,G)}{e(F)}= \sum_L\int_L \frac{\mathcal{E}ll_{orb}(\hat{Z},D_{\hat{Z}},G)}{e(L)}.$$ Now, by theorem \ref{Toroidal McKay}, $\mu_*\mathcal{E}ll_{orb}(\hat{Z},D_{\hat{Z}},G) = \mathcal{E}ll(Z,D_Z)$. Let $\set{K}\subset Z$ denote the fixed components which get mapped to $\psi(F)$ under the resolution $Z\rightarrow X/G$. Clearly $\set{L} = \phi^{-1}F = \phi^{-1}\psi^{-1}\psi(F) = \mu^{-1}\pi^{-1}\psi(F) = \mu^{-1}\set{K}$. Thus, by functorial localization applied to $\mu_*$, we have: $$\sum_L\int_L\frac{\mathcal{E}ll_{orb}(\hat{Z},D_{\hat{Z}},G)}{e(L)} = \sum_K\int_K\frac{\mathcal{E}ll(Z,D_Z)}{e(K)}.$$ Finally, since $\set{K}$ denotes the fixed components of $Z$ which get mapped to $\set{P}$, functorial localization applied to $Z \rightarrow V$ completes the proof. \end{proof} We now discuss some corollaries of the above result. We begin with a proof of the equivariant DMVV conjecture. Let $\mathbb{P}_2^{(n)} = (\mathbb{P}_2)^n/S_n$ denote the $n$th symmetric product of the projective plane.
The natural group action of $T = S^1\times S^1$ on $\mathbb{P}_2$ extends to $(\mathbb{P}_2)^n$ in the obvious manner, and commutes with the action of $S_n$. The action of $T$ on $\mathbb{P}_2$ also extends to $\mathbb{P}_2^{[n]}$, the Hilbert scheme of $n$ points on $\mathbb{P}_2$, and the Hilbert-Chow morphism $\mathbb{P}_2^{[n]}\rightarrow \mathbb{P}_2^{(n)}$ is an equivariant crepant resolution. Sitting inside $\mathbb{P}_2^{(n)}$ is the open variety $(\mathbb{C}^2)^{(n)}$. It has a single $T$-fixed point $p$, which is the image under the quotient morphism of the $S_n\times T$-fixed point $(0,0)\times\cdots\times (0,0) \in (\mathbb{C}^2)^n$. The pre-image of $(\mathbb{C}^2)^{(n)}$ under the Hilbert-Chow morphism is just $(\mathbb{C}^2)^{[n]}$. Hence, the pre-image of $p$ under the Hilbert-Chow morphism is simply the collection of $T$-fixed points of $(\mathbb{C}^2)^{[n]}$. If $(u_1,u_2)$ denote the equivariant parameters of the $T$-action, let $t_j = e^{2\pi iu_j}$. Then the above theorem implies that: $$Ell_{orb}((\mathbb{C}^2)^n,S_n,t_1,t_2) = Ell((\mathbb{C}^2)^{[n]},t_1,t_2).$$ Note that the LHS involves equivariant data associated to the single fixed point $p$, whereas the RHS involves a sum of equivariant data associated to partitions of $n$. For $z$ the complex parameter appearing in the definition of the elliptic class and $\tau$ the lattice parameter used in the definition of the Jacobi theta function, let $y = e^{2\pi iz}$ and $q = e^{2\pi i\tau}$. Viewing the equivariant elliptic indices as formal power series in the variables $q,y,t_1,$ and $t_2$, and applying theorem $3.1$ of \cite{LLJ}, we have the following equivariant analogue of the DMVV formula: \begin{thm} Write $Ell(\mathbb{C}^2,t_1,t_2) = \sum_{m\geq 0,\ell,k}c(m,\ell,k)q^m y^\ell t_1^{k_1}t_2^{k_2}$, where $k = (k_1,k_2)$.
Then $$\sum_{n>0} p^n Ell((\mathbb{C}^2)^{[n]},t_1,t_2) = \prod_{m\geq 0,n>0,\ell,k}\frac{1}{(1-p^n q^m y^\ell t_1^{k_1} t_2^{k_2})^{c(nm,\ell,k)}}$$ \end{thm} We next discuss the equivariant elliptic genus analogue of the classical McKay correspondence, which was originally proved in \cite{RW}. Let $G \subset SU(2)$ be a finite subgroup. $G$ acts on $\mathbb{C}^2$ in the obvious fashion, and commutes with the diagonal action of $T = S^1$. Let $V \rightarrow \mathbb{C}^2/G$ be the crepant resolution of singularities. The action of $T$ lifts to $V$, and the fixed components of this action are compact. \begin{thm} $Ell_{orb}(\mathbb{C}^2,G,t) = Ell(V,t)$. \end{thm} \begin{proof} View $\mathbb{C}^2$ as an open subset of $\mathbb{P}^2$, with the action of $G$ and $T$ extending to $\mathbb{P}^2$ in the obvious manner. The space $\mathbb{P}^2/G$ still has only an isolated singularity at the image of the origin $[0:0:1]$. Hence $\mathbb{P}^2/G$ has an equivariant crepant resolution which is a compactification of $V$. The above theorem then follows by letting $F = (0,0)$ in theorem \ref{Local McKay}. \end{proof} \section{Appendix} \begin{lem} Let $f:X \rightarrow Y$ be a $T$-map of smooth compact simply connected K\"ahler varieties. Let $D \subset Y$ be a $T$-invariant divisor and let $E_i$ be $T$-invariant normal crossing divisors on $X$ such that $f^*D = \sum a_i E_i$ as Cartier divisors. Then for any $\varepsilon$-regular neighborhood $U_{\varepsilon}$ of $D$ there exist generators $\Theta_{E_i}^T$ for $c_1^T(E_i)$ and $\Theta_D^T$ for $c_1^T(D)$ with the following properties: $(1)$ $\Theta_D^T$ has compact support in $U_{\varepsilon}$ and $\Theta_{E_i}^T$ have compact support in $f^{-1}(U_{\varepsilon})$. $(2)$ $f^* \Theta_D^T = \sum a_i \Theta_{E_i}^T + d_T (\eta)$ on the level of forms, where $\eta$ is a $T$-invariant form with compact support in $f^{-1}(U_{\varepsilon})$.
$(3)$ $\Theta_D^T$ and $\Theta_{E_i}^T$ represent the equivariant Thom classes of the varieties $D$ and $E_i$. \end{lem} The only real issue above is to ensure that $\eta$ has compact support in the desired neighborhood. \begin{proof} We first solve this problem in the non-equivariant category. For $V$ any Cartier divisor, denote by $L_V$ the line bundle it induces. Let $U_{\varepsilon}$ be a $T$-invariant tubular neighborhood of $D$ of radius $\varepsilon$. Outside $U_{\frac{\varepsilon}{2}}$, the constant function $1$ is a section of $L_D$. Define a metric $h_{far}$ in this region by $h_{far} = \norm{1}^2 \equiv 1$. Let $h_{near}$ be a metric inside $U_{\varepsilon}$. Piece the two metrics together into a global metric $h$ on $L_D$ using a partition of unity. The first Chern class of $L_D$ is then represented by the form $\Theta_D =\frac{i}{2\pi}\overline{\partial}\partial \log h$. This form clearly has compact support in $U_{\varepsilon}$. Let $U_{\varepsilon_i}$ be tubular neighborhoods of $E_i$. Choose $\varepsilon_i$ small enough so that each of these neighborhoods is contained in $f^{-1}U_{\varepsilon}$. Define metrics $h_i$ on $L_{E_i}$ in a manner analogous to the above construction of $h$. Clearly the forms $\Theta_{E_i} = \frac{i}{2\pi}\overline{\partial}\partial \log h_i$ have compact support in $U_{\varepsilon_i}$ and represent the first Chern classes of $L_{E_i}$. We have two natural choices for a metric on $f^*L_D$, namely $f^*h$ and $h_1^{a_1}\cdots h_k^{a_k}$. Choose a smooth nonzero function $\varphi$ so that $f^*h = \varphi h_1^{a_1}\cdots h_k^{a_k}$. Notice that $\varphi \equiv 1$ outside $f^{-1}U_{\varepsilon}$. We have: $$\overline{\partial}\partial\log f^*h = \overline{\partial}\partial\log\varphi + \sum_i a_i\overline{\partial}\partial\log h_i.$$ But this implies that $f^*\Theta_D = \sum_i a_i\Theta_{E_i} +\frac{i}{2\pi}\overline{\partial}\partial\log\varphi$.
If we let $d^c = \frac{i}{4\pi}(\overline{\partial}-\partial)$, we may write this last equation as: $$f^*\Theta_D = \sum_i a_i\Theta_{E_i}-dd^c\log\varphi.$$ The form $\eta = -d^c\log\varphi$ clearly has compact support in $f^{-1}U_{\varepsilon}$. It remains to argue that $\Theta_D$ and $\Theta_{E_i}$ represent the Thom classes of $D$ and $E_i$. Suppose $V$ is any effective Cartier divisor on a smooth simply connected compact K\"ahler variety. Then $L_V$ is nontrivial as a holomorphic line bundle and therefore $c_1(L_V) \neq 0$. This follows from the fact that $H^1(X,\mathcal{O}) = 0$ and therefore that the map $c_1: H^1(X,\mathcal{O}^*)\rightarrow H^2(X,\mathbb{Z})$ is injective. Furthermore, $c_1(L_V)$ is clearly a non-torsion class in $H^2(X,\mathbb{Z})$. Hence we can find an $\omega$ such that $\int_X c_1(L_V)\wedge \omega \neq 0$. Since $c_1(L_V)$ is Poincar\'e dual to $V$, we have $\int_X c_1(L_V)\wedge \omega = \int_V \omega = \int_X \Phi_V \wedge \omega$ where $\Phi_V$ is the Thom class of the normal bundle $N_V$ to $V$. However, if $\Theta_V$ is a representative of $c_1(L_V)$ with compact support in $N_V$, we must have $\Theta_V = a\Phi_V + d\psi$ for some form $\psi$ with compact support in $N_V$. Therefore $a\int_X \Phi_V\wedge \omega = \int_X \Theta_V\wedge \omega = \int_X \Phi_V \wedge \omega \neq 0$. It follows that $a=1$. Thus, $\Theta_D$ and $\Theta_{E_i}$ indeed represent the Thom classes of $D$ and $E_i$. We remark that it is well-known that $\Theta_V$ is Poincar\'e dual to $V$. However, under weaker conditions, a form that is Poincar\'e dual to a submanifold $V$ may not necessarily represent the Thom class of $N_V$. For example, if $V$ were homologous to zero, then $0$ would certainly be Poincar\'e dual to $V$ but would not represent its Thom class, which would be non-trivial. This completes the non-equivariant portion of the proof. By averaging over the group $T$, we may assume that all the forms above are $T$-invariant.
For notational simplicity, let us assume that $T = S^1$. Let $V$ be the vector field on $X$ induced by the $T$-action. Let $g_i$ be the functions compactly supported in $f^{-1}U_{\varepsilon}$ which satisfy the moment map equation $i_V\Theta_{E_i} = dg_i$. Similarly, let $W$ be the vector field on $Y$ defined by the $T$-action and define $g$ so that it satisfies $i_W\Theta_D = dg$ and has support inside $U_{\varepsilon}$. Note that since $f$ is $T$-equivariant, $i_Vf^*\Theta_D = f^*i_W\Theta_D = f^*dg$. We then have $d(g\circ f) = \sum_i a_i dg_i + i_Vd\eta = \sum_i a_i dg_i -di_V\eta$. Hence $g\circ f = \sum_i a_i g_i -i_V\eta$. But this implies that: $$f^*(\Theta_D+g) = \sum_i a_i(\Theta_{E_i}+g_i)+(d-i_V)\eta.$$ This is precisely the relation we wish to establish in equivariant cohomology. \end{proof} \begin{lem} \bf{(Excess Intersection Formula)} \it Let $X$ be a smooth compact variety with irreducible normal crossing divisors $D_1,\ldots, D_k$. For $I \subset \set{1,\ldots,k}$ denote by $X_{I,j}$ the $j$th connected component of $\cap_{I}D_i$ and by $\Phi_{I,j}$ its Thom class. Fix irreducible subvarieties $X_{I_1,j_1}$ and $X_{I_2,j_2}$. For $I_0 = I_1 \cup I_2$, let $X_{I_0,j}$ be the irreducible components of $X_{I_1,j_1} \cap X_{I_2,j_2}$. Then: $$\Phi_{I_1,j_1}\wedge \Phi_{I_2,j_2} = \sum_{I_0,j}\Phi_{I_0,j}\prod_{I_1\cap I_2}\Phi_{i}.$$ \end{lem} \begin{proof} Let $N_{I,j}$ be tubular neighborhoods of $X_{I,j}$ which are disjoint for each indexing set $I$ and which satisfy $N_{I,j} \subset N_{I',j'}$ for $X_{I,j}\subset X_{I',j'}$. If we choose $\Phi_i$ to have compact support in a sufficiently small tubular neighborhood of $D_i$, then $\prod_I \Phi_i$ will have compact support in $\coprod_{j}N_{I,j}$. Moreover, the extension by zero of $(\prod_I \Phi_i)|_{N_{I,j}}$ will represent the Thom class of $X_{I,j}$ (see [BT]). We may also ensure that $\Phi_{I_1,j_1}\wedge \Phi_{I_2,j_2}$ has compact support in $\coprod_j N_{I_0,j}$.
Thus: $$\Phi_{I_1,j_1}\wedge \Phi_{I_2,j_2} = \sum_{I_0,j} \big( \prod_{I_1}\Phi_i \prod_{I_2}\Phi_i \big)|_{N_{I_0,j}} = \sum_{I_0,j}\big(\prod_{I_0}\Phi_i \prod_{I_1\cap I_2}\Phi_i\big)|_{N_{I_0,j}} = $$ $$\sum_{I_0,j}\big(\prod_{I_0}\Phi_i \big)|_{N_{I_0,j}} \prod_{I_1\cap I_2}\Phi_i.$$ This yields the desired formula. \end{proof} \begin{rmk} \rm Note that the above proof clearly extends to the equivariant category. \end{rmk}
\def\section{\@startsection{section}{1}% \z@{1.3\linespacing\@plus\linespacing}{.5\linespacing}% {\normalfont\scshape\centering}} \makeatletter \def\one{\@ifundefined{comp}{\kern.5pt\leavevmode\hbox{\upshape{\small1\kern-3.35pt\normalsize1}}}{\kern.5pt\mathbb{1}}} \renewcommand{\le}{\leqslant} \renewcommand{\ge}{\geqslant} \renewcommand{\d}{\mathrm{d}} \newcommand{\mathrm{e}}{\mathrm{e}} \newcommand{\raisebox{.4ex}{$\chi$}}{\raisebox{.4ex}{$\chi$}} \DeclareMathOperator{\supp}{\mathrm{supp}} \DeclareMathOperator{\spec}{\mathrm{spec}} \DeclareMathOperator{\specess}{\mathrm{spec}_{\mathrm{ess}}} \DeclareMathOperator{\tr}{\mathrm{tr}\kern1pt} \DeclareMathOperator{\dist}{\mathrm{dist}} \DeclareMathOperator{\vol}{\mathrm{vol}} \DeclareMathOperator{\diam}{\mathrm{diam}} \DeclareMathOperator{\esssup}{\mathrm{ess}\:\!\mathrm{sup}} \newcommand{\mathcal{E}}{\mathcal{E}} \newcommand{\mathcal{G}}{\mathcal{G}} \newcommand{\widehat{\cG}}{\widehat{\mathcal{G}}} \newcommand{\widehat{X}_{\cG}}{\widehat{X}_{\mathcal{G}}} \newcommand{\mathcal{V}}{\mathcal{V}} \renewcommand{\AA}{\mathbb{A}} \newcommand{\mathbb{P}}{\mathbb{P}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \def.{.} \def{} \newcounter{aucount} \setcounter{aucount}{0} \newif\ifedplural \newif\ifper\pertrue \def\au#1#2{{#1 #2}} \def\lau#1#2{{#1 #2},} \def\ed#1#2{\ifnum\theaucount=0\relax\fi{#1 #2}\addtocounter{aucount}{1}} \def\led#1#2{\ifnum\theaucount=0\relax\edpluralfalse\else\edpluraltrue\fi{#1 #2} (\ifedplural Eds\else Ed\fi.),\setcounter{aucount}{0}} \def\ifedplural Eds\else Ed\fi{\ifedplural Eds\else Ed\fi} \def\ifnum\theaucount=1\else\HarvardComma\fi{} and\ {\ifnum\theaucount=1\else\fi{} and\ } \def\ti#1{#1,\ifper\fi\pertrue} \def\@ifnextchar[\bbti\bbbti{\@ifnextchar[\bbti\bbbti} \def\bbti[#1]#2{\emph{#2}, #1,} \def\bbbti#1{\emph{#1},} \def\z{\@ifnextchar[\zz\zzz}
\def\zz[#1]#2#3#4#5{\perfalse\emph{#2} \textbf{#3}, #4 \ifx @#5@\relax\else (#5)\fi [#1]\ifper.\fi\pertrue} \def\zzz#1#2#3#4{\emph{#1} \textbf{#2}, #3 \ifx @#4@\relax\else (#4)\fi\ifper.\fi\pertrue} \def\eprint#1#2{E-print #1, #2.} \def\@ifstar\pubstar\@ifnextchar[\@@pubnostar\@pubnostar{\@ifstar\pubstar\@ifnextchar[\@@pubnostar\@pubnostar} \def\@ifnextchar[\@@pubnostar\@pubnostar{\@ifnextchar[\@@pubnostar\@pubnostar} \def\@@pubnostar[#1]#2#3#4{#2, #3, #4, #1\ifper.\fi\pertrue} \def\@pubnostar#1#2#3{#1, #2, #3\ifper.\fi\pertrue} \def\pubstar[#1]#2#3#4{\perfalse #2, #3, #4 [#1].\pertrue} \makeatother \sloppy \begin{document} \title[Random colourings of aperiodic graphs]{Random colourings of aperiodic graphs:\\ Ergodic and spectral properties} \author[P.\ M\"uller]{Peter M\"uller} \address{Institut f\"ur Theoretische Physik, Georg-August-Universit\"at G\"ottingen, Friedrich-Hund-Platz~1, 37077 G\"ottingen, Germany} \email{peter.mueller@physik.uni-goe.de} \author[C.\ Richard]{Christoph Richard} \address{Fakult\"at f\"ur Mathematik, Universit\"at Bielefeld, Universit\"atsstr.\ 25, Postfach 10 01 31, 33501 Bielefeld, Germany} \email{richard@math.uni-bielefeld.de} \begin{abstract} We study randomly coloured graphs embedded into Euclidean space, whose vertex sets are infinite, uniformly discrete subsets of finite local complexity. We construct the appropriate ergodic dynamical systems, explicitly characterise ergodic measures, and prove an ergodic theorem. For covariant operators of finite range defined on those graphs, we show the existence and self-averaging of the integrated density of states, as well as the non-randomness of the spectrum. Our main result establishes Lifshits tails at the lower spectral edge of the graph Laplacian on bond percolation subgraphs, for sufficiently small probabilities. Among other assumptions, its proof requires exponential decay of the cluster-size distribution for percolation on rather general graphs. 
\end{abstract} \maketitle \section{Introduction} Studying ensembles of random graphs is a broad subject with many different facets. One of them, spectral properties of random graphs, has found increasing interest in recent years. Its goal is to determine spectral properties of the graph Laplacian, or of similar operators associated with the graph, and to investigate their relation to the graph structure. Erd\H{o}s-R\'enyi random graphs constitute one class of examples, for which such types of results are known by now \cite{KhSh04,KhKi06,BrDe06}. Another class of random graphs consists of those generated by a percolation process from an underlying graph (``base graph''), which is embedded into $d$-dimensional Euclidean space $\mathbb{R}^{d}$. Standard Bernoulli (bond- or site-) percolation subgraphs of the $d$-dimensional hypercubic lattice are the prime example in this category \cite{Gri99}. Here, ergodicity with respect to translations has fundamental consequences, such as non-randomness of the spectrum, as well as existence and self-averaging of the integrated density of states \cite{Ves05, KiMu06}. The behaviour of the integrated density of states near the edges of the spectrum requires a more detailed understanding. Lifshits-tail behaviour was found in the non-percolating phase \cite{KiMu06}, while the percolating cluster may give rise to a van Hove asymptotics \cite{MuSt07}. In this context, techniques from the theory of random Schr\"odinger operators have turned out to be very efficient. Furthermore, the connection to the theory of random walks in random environments \cite{Hug96,Bar04} was exploited. Very recently, results of \cite{KiMu06,MuSt07} have been extended to amenable Cayley graphs \cite{AnVe07a, AnVe07c}. There, it is invariance under the appropriate group action, which replaces translational invariance. But how important is the automorphism group of the base graph for the spectral asymptotics of its percolation subgraphs? 
To pursue this question, we consider base graphs whose vertex sets are given by infinite, uniformly discrete subsets of $\mathbb{R}^{d}$, with the property of finite local complexity (see Definition~\ref{FLC} below). Examples include quasiperiodic tilings such as the Penrose tiling (see \cite{BaMo00} for a recent monograph on quasiperiodic point sets) and, more generally, tilings with a finite set of prototiles \cite{GrSh89}, but also random tiling ensembles \cite{RHHB98}. Typically, none of these enjoys invariance under an appropriate group action. Ergodic and spectral properties of the base graphs were first derived in \cite{Hof93, Hof95}, and significantly extended in \cite{LeSt03, KlLe03, LeSt05}, using methods from dynamical systems. In this paper, we supply these base graphs with a random colouring and study their spectral properties. The main result of this paper, Theorem~\ref{main}, goes beyond basic ergodic spectral properties and establishes Lifshits-tail behaviour at the lower spectral edge for the graph Laplacian on percolation subgraphs. Our proof of this result involves three preparatory steps, each of which is interesting in its own right. The first step belongs to the realm of dynamical systems theory, the second to spectral theory, and the third to percolation theory.

\emph{(i) \; Construct the appropriate ergodic dynamical systems, explicitly characterise ergodic measures and prove an ergodic theorem.}

Given an ergodic measure on the dynamical system of the base graphs, we will explicitly construct an ergodic measure for corresponding randomly coloured graphs, following ideas of \cite{Hof98}. The main result of this step is an ergodic theorem (Theorem~\ref{theo:perg}) for dynamical systems associated with randomly coloured graphs. It extends \cite{Hof98}, where colourings of aperiodic Delone graphs with a strictly ergodic dynamical system were studied. Our setting covers the full range from periodic structures to random tilings.
Moreover, we do not require relative denseness of the vertex sets, thereby including examples such as the visible lattice points \cite{BaMoPl00} in our setup. Apparently, some of the technical problems we had to overcome are closely related to ones in \cite{BaaZin07}, where diffraction properties of certain random point sets, including percolation subsets, have been investigated very recently. \emph{(ii) \; Derive ergodic spectral properties of covariant, finite-range operators on randomly coloured aperiodic graphs.} Theorem~\ref{maclimit} characterises the integrated density of states of such an operator by a macroscopic limit. Theorem~\ref{spec-ids} states the non-randomness of the spectrum of the operator and relates it to the set of growth points of the integrated density of states. In particular, the theorems guarantee that there are no exceptional instances to their statements for uniquely ergodic systems. We provide elementary proofs of Theorems~\ref{maclimit} and \ref{spec-ids}. In the absence of a colouring, corresponding results have been derived in \cite{Hof93,Hof95, LeSt03,LeSt05}, mainly in the strictly ergodic or in the uniquely ergodic case. \emph{(iii) \; Establish exponential decay of the cluster-size distribution in the non-percolating phase for general graphs.} We derive an elementary exponential-decay estimate for the probability to find an open path from the centre to the complement of a large ball. Unfortunately, this estimate holds only for sufficiently small bond probabilities. For these probabilities, the decay of the cluster-size distribution then follows from that estimate, by verifying that the corresponding arguments in \cite{Gri99} apply also in our general setting. Exponential decay throughout the non-percolating phase for quasi-transitive graphs has been proved recently \cite{AnVe07b}. 
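For orientation, the kind of cluster-size decay needed in step~(iii) can be seen exactly in the simplest possible case, Bernoulli bond percolation on the line $\mathbb{Z}$, where the cluster $C_{0}$ of the origin satisfies $\mathbb{P}(|C_{0}| \ge n) = p^{n-1}\bigl(n-(n-1)p\bigr)$. The following sketch is our own toy computation, not part of the paper's argument; it checks the exact formula against a simulation:

```python
import random

def tail_exact(n, p):
    # P(|C_0| >= n) for Bernoulli bond percolation on the line Z: the
    # cluster of the origin extends L bonds to the left and R to the
    # right, with L, R independent geometric variables, which gives
    # P(|C_0| >= n) = p^(n-1) * (n - (n-1)*p).
    return p ** (n - 1) * (n - (n - 1) * p)

def cluster_size(p, rng):
    # sample |C_0| by counting consecutive open bonds in each direction
    size = 1
    for _ in range(2):
        while rng.random() < p:
            size += 1
    return size

rng = random.Random(1)
p, n, trials = 0.3, 5, 200_000
mc = sum(cluster_size(p, rng) >= n for _ in range(trials)) / trials
print(tail_exact(n, p), mc)  # the two tail probabilities should agree closely
```

The exponential decay in $n$ is immediate from the factor $p^{n-1}$; the point of step~(iii) is that a comparable bound persists, for small $p$, on far more general graphs.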
Within our more general setup, an extension to higher bond probabilities up to criticality remains a challenging open question, see also the discussion in \cite{Hof98}. The manuscript \cite{LeVe07}, which was finalised at the same time as ours, establishes uniform convergence in the energy of the finite-volume approximants to the integrated density of states under rather general conditions. In particular, it applies to percolation on Delone dynamical systems and thus improves on Theorem~\ref{maclimit} under slightly different conditions. However, the validity of our general Ergodic Theorem~\ref{theo:perg} is an open question in the approach of \cite{LeVe07}. Using uniform convergence would not allow us to strengthen our main result on Lifshits tails in Theorem~\ref{main}. Our paper is organised as follows. Section~\ref{secdyn} sets the notation and introduces dynamical systems associated with uncoloured graphs. This is a slight extension of the setup for Delone dynamical systems, such as in \cite{LeMo02}. In Section~\ref{sec:ran-col}, we construct the dynamical systems for the corresponding randomly coloured graphs and deal with step~(i). Section~\ref{sec:operators} introduces covariant operators of finite range on randomly coloured graphs and treats step~(ii). Section~\ref{secLif} is devoted to our main result on Lifshits tails together with its proof. Section~\ref{sec:spectrum} contains the proof of Theorem~\ref{spec-ids}, and Section~\ref{sec:percest} deals with step~(iii).

\section{Dynamical systems for graphs}
\label{secdyn}
For the basic notions involving graphs, we refer, for example, to the textbook \cite{Die05}. We consider (simple) graphs $G= (\mathcal{V},\mathcal{E})$, whose vertex sets $\mathcal{V}\equiv \mathcal{V}_{G}$ are countable subsets of $\mathbb R^d$. We say that $\mathcal{V}$ is \emph{uniformly discrete} of radius $r \in ]0,\infty[$, if any open ball of radius $r$ in $\mathbb{R}^{d}$ contains at most one element of $\mathcal{V}$.
The vertex set is called \emph{relatively dense} if there exists $R \in[0,\infty[$ such that every closed ball of radius $R$ contains at least one vertex. The vertex set is called a \emph{Delone set}, if it is both uniformly discrete and relatively dense. The edge set $\mathcal{E}\equiv\mathcal{E}_{G}$ of $G$ is a subset of the set of all unordered pairs of vertices. We denote an edge by $e\equiv\{v,w\}$, where $v,w \in\mathcal{V}$ with $v\neq w$. In other words, we do not allow self-loops, nor multiple edges between the same pair of vertices. Recall that a graph $G'=(\mathcal{V}',\mathcal{E}')$ is called a subgraph of $G$, in symbols $G'\subseteq G$, if $\mathcal{V}'\subseteq \mathcal{V}$ and $\mathcal{E}'\subseteq \mathcal{E}$. For $x\in\mathbb R^d$, the \emph{translated graph} $x+G$ has vertex set $x+\mathcal{V} :=\{x+v:v\in\mathcal{V}\}$ and edge set $x+\mathcal{E} := \bigl\{ \{x+v,x+w\}: \{v,w\} \in \mathcal{E} \bigr\}$. Given any Borel set $B\subseteq\mathbb{R}^{d}$, the \emph{restriction} $G\wedge B$ of $G$ to $B$ is the induced subgraph of $G$ with vertex set $\mathcal{V} \cap B$, that is, $\{u,v\}$ belongs to the edge set of $G\wedge B$, if and only if $\{u,v\} \in \mathcal{E}$ and $u,v\in \mathcal{V} \cap B$. If $B$ is bounded, then $G\wedge B$ is called a \emph{$B$-pattern} (or simply a pattern) of $G$. Two patterns $P,Q$ are called \emph{equivalent}, if $x+P=Q$ for some $x\in\mathbb R^d$. An \emph{$r$-pattern} is a pattern $G\wedge B_r(v)$ for some $v\in \mathcal{V}_G$. Here we have used the notation $B_r(x)$ for the open ball of radius $r>0$ around $x\in \mathbb{R}^{d}$. In particular, we set $B_r:=B_r(0)$. We write $|M|$ for the cardinality of a set $M$. \begin{definition} \begin{nummer} \item \label{FLC} A set $\mathcal{G}$ of graphs is said to have \emph{finite local complexity}, if for every $r>0$ % \begin{equation} \big| \{(-v+G)\wedge B_r: v\in\mathcal{V}_{G}, G\in\mathcal{G}\} \big| <\infty. 
\end{equation} % In particular, a single graph $G$ has finite local complexity if, for any given $r>0$, the number of its non-equivalent $r$-patterns is finite. \item Let $G$ be a finite graph and $P \subseteq G$ a pattern of $G$. The \emph{number of occurrences} \begin{equation} \nu(P|G) := |\{x\in\mathbb{R}^d : x+P \subseteq G\}| \end{equation} of $P$ in $G$ is the (finite) number of translates of $P$ in $G$. \end{nummer} \end{definition} Geometric properties of some set of graphs $\mathcal{G}$ are reflected by properties of an associated dynamical system. This we introduce along the lines of \cite{LeMo02}, where the case of Delone multi-sets was considered. The statements of this section are proved by slight adaptations of the arguments laid down in \cite{LeMo02,RaWo92,Sch00}. In fact, examples of our setup include the Delone multi-sets of \cite{LeMo02}, in which case $\mathcal{G}$ is finite, vertex sets are Delone sets and edge sets are empty. For simplicity, let us assume now that the vertex set of each $G\in\mathcal{G}$ is uniformly discrete. Following \cite{LeMo02,Hof95}, we define a \emph{metric} on $\mathcal{G}$ by setting \begin{align} \label{metric} \mathrm{dist}(G,G') :=\min\Big\{ 2^{-1/2}, \inf\big\{ & \varepsilon>0: \text{~there exists~} x,y\in B_\varepsilon: \nonumber\\ & (x+G) \wedge B_{1/\varepsilon}=(y+G')\wedge B_{1/\varepsilon}\big\} \Big\} \end{align} for all $G,G' \in\mathcal{G}$. In essence, two graphs are close, if they agree, up to a small translation, on a large ball around the origin. Symmetry and the triangle inequality of the metric are seen to hold for any set of graphs $\mathcal{G}$. The uniform discreteness assumption ensures positive definiteness, but it is much stronger than what is required. In fact, it would have been sufficient to assume merely closedness of the vertex sets and of the edge sets (with respect to a suitable metric on $\mathcal{E}$). 
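To make the metric \eqref{metric} concrete, here is a small worked example, ours and for illustration only, showing that a slightly translated copy of a graph is close to the graph itself:

```latex
% Illustration: for any graph G and t in R^d with 0 < |t| < 2^{-1/2},
% choosing x = t and y = 0 in \eqref{metric} makes the two restrictions
% agree, (x+G) \wedge B_{1/\varepsilon} = (y+(t+G)) \wedge B_{1/\varepsilon},
% for every \varepsilon > |t|, so that
\mathrm{dist}(G, t+G) \le |t| .
% If, in addition, G is invariant under a translation z, i.e. z + G = G,
% the same computation yields \mathrm{dist}(G, t+G) \le |t - z|.
```

In particular, the translation action is continuous, which is the first half of the "standard arguments" invoked below.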
Now we define the complete metric space \begin{equation} X_{\mathcal{G}} := \overline{ \{x +G: x \in\mathbb{R}^{d}, G \in\mathcal{G} \} }, \end{equation} of all translates of graphs in $\mathcal{G}$, where the metric \eqref{metric} is used for completion. Later we will need to know that certain properties of graphs in $\mathcal{G}$ do not get lost in the closure. \begin{lemma} Let $\mathcal{G}$ be a set of graphs with uniformly discrete vertex sets. \begin{nummer} \item \label{dmax} If for some $d_{\mathrm{max}} \in \mathbb{N}$ the estimate \begin{equation} \sup_{v \in\mathcal{V}_{G}} d_{G}(v) \le d_{\mathrm{max}} \end{equation} holds for all $G\in \mathcal{G}$, then it holds also for all $G \in X_{\mathcal{G}}$. \item \label{notfinite} In addition to uniform discreteness, assume there is some finite $R >0$, such that the vertex sets of all $G\in\mathcal{G}$ are relatively dense with radius $R$, and that \begin{equation} \ell_{\mathrm{max}} := \sup \bigl\{|u-v|: G\in\mathcal{G}, \{u,v\} \in \mathcal{E}_{G}\bigr\} < \infty, \end{equation} i.e., there exists a finite maximum bond length. Then, every $G\in X_{\mathcal{G}}$ is infinite, and if no $G\in\mathcal{G}$ possesses a finite cluster, then no $G\in X_{\mathcal{G}}$ possesses a finite cluster. \end{nummer} \end{lemma} \begin{proof} By contradiction. \begin{nummer} \item Assume there exists $G \in X_{\mathcal{G}}$ and $v \in \mathcal{V}_{G}$ such that $d_{G}(v) > d_{\mathrm{max}}$. Choose $0< \varepsilon <1/3$ small enough, such that all neighbours of $v$ lie in the ball $B_{\varepsilon^{-1}-1}(v)$. Hence we have $d_{G}(v) = d_{G \wedge B_{1/\varepsilon}(v)}(v)$. Since $G$ is in the closure of $\mathcal{G}$, there exists $G' \in\mathcal{G}$ with $\dist(G, G') < \varepsilon$. Altogether, this implies $d_{G'\wedge B_{1/\varepsilon}(v)}(v) > d_{\mathrm{max}}$, a contradiction. \item The assumption of relative denseness implies that $X_{\mathcal{G}}$ contains only infinite graphs. 
So let us assume there exists $G \in X_{\mathcal{G}}$ with a finite cluster $C$. Let $r_{C} \in ]0,\infty[$ be big enough such that $\mathcal{V}_{C} \subset B_{r_{C}}$, and choose $\varepsilon> 0$ so small that $1/\varepsilon > r_{C} + \ell_{\mathrm{max}} +3$. Since $G$ is in the closure of $\mathcal{G}$, there exists $G' \in\mathcal{G}$ with $\dist(G, G') < \varepsilon$, and $G'\wedge B_{1/\varepsilon}$ has a finite cluster that cannot merge with other clusters when removing the restriction to the ball $B_{1/\varepsilon}$.
\end{nummer}
\end{proof}
Standard arguments show that the translation group $\mathbb{R}^{d}$ acts continuously on $X_{\mathcal{G}}$, that is, the map $G \mapsto x+ G$ is continuous for every $x\in\mathbb{R}^{d}$. Thus, the triple $(X_{\mathcal{G}}, \mathbb R^d,+)$ constitutes a topological dynamical system. The following result can be proved along the lines of \cite{RaWo92,Sch00}.
\begin{lemma}
\label{lem:flc}
Let $\mathcal{G}$ be a set of graphs with uniformly discrete vertex sets. Then, $X_{\mathcal{G}}$ is compact if and only if $\mathcal{G}$ has finite local complexity.
\end{lemma}
As above, uniform discreteness can be replaced by closedness of all vertex and edge sets, without jeopardising the validity of Lemma~\ref{lem:flc}. But from now on, we assume that uniform discreteness holds even uniformly in $\mathcal{G}$, that is, there exists $r>0$ such that $\mathcal{V}_{G}$ is uniformly discrete of radius $r$, for all $G\in\mathcal{G}$. Compactness of $X_{\mathcal{G}}$ implies the existence of ergodic probability measures on the Borel sigma-algebra of $X_{\mathcal{G}}$, in other words $X_{\mathcal{G}}$ is \emph{ergodic} (w.r.t.\ translations). Recall that a topological dynamical system is called \emph{uniquely ergodic}, if it carries exactly one ergodic measure.
Ergodic theorems for a compact dynamical system with $\mathbb R^d$-action are given in \cite[Thms.~4.2 and~2.6]{LeMo02}, see also \cite[Thm.~1]{LeSt05} for a stronger statement in the case of minimal ergodic systems. We quote a version patterned after \cite{LeMo02} and introduce \emph{cylinder sets}
\begin{equation}
\label{cyl-set}
\Xi_{P, U} := \{G \in X_{\mathcal{G}}: x+P \subseteq G \text{~for some~} x\in U\} \subset X_{\mathcal{G}}.
\end{equation}
Here, $P$ is a pattern of some graph $G\in\mathcal{G}$, and $U\subseteq\mathbb{R}^{d}$ is a Borel set. We write $\vol(U)$ for its Lebesgue measure.
\begin{theorem}
\label{basic-erg}
Let $\mathcal{G}$ be a set of graphs of finite local complexity and with uniformly discrete vertex sets of radius $r>0$. Fix an ergodic probability measure $\mu$ on $X_{\mathcal{G}}$. Then, given any function $\phi\in \mathrm{L}^1(X_{\mathcal{G}},\mu)$, the limit
\begin{equation}
\label{basic-erg-limit}
\lim_{n\to\infty} \frac{1}{\vol(B_{n})} \int_{B_{n}}\!\d x\; \phi(x+G) = \int_{X_{\mathcal{G}}} \!\d\mu(F) \; \phi (F)
\end{equation}
exists for $\mu$-a.a.\ $G \in X_{\mathcal{G}}$. If $X_{\mathcal{G}}$ is even uniquely ergodic and, in addition, if $\phi$ is either continuous or a linear combination of indicator functions of cylinder sets, then the limit \eqref{basic-erg-limit} exists for \emph{all} $G\in X_{\mathcal{G}}$.
\end{theorem}
The ergodic theorem can be used to analyse the vertex density of graphs and the asymptotic number of occurrences of patterns in graphs.
\begin{cor}
\label{sammelcor}
Let $\mathcal{G}$ be a set of graphs of finite local complexity and with uniformly discrete vertex sets of radius $r>0$. Fix an ergodic probability measure $\mu$ on $X_{\mathcal{G}}$.
\begin{nummer} \item \label{density} Then there is a Borel set $\overline{X} \subseteq X_{\mathcal{G}}$ of full $\mu$-measure, $\mu(\overline{X})=1$, such that the \emph{vertex density} \begin{equation} \label{rho-def} \varrho := \lim_{n\to\infty} \frac{|\mathcal{V}_{G} \cap B_{n}|}{\vol(B_{n})} = \int_{X_{\mathcal{G}}}\!\d\mu(F) \, \sum_{v\in\mathcal{V}_{F}} \psi(v) \end{equation} exists for all $G \in \overline{X}$. Here, $\psi: \mathbb{R}^d\to\mathbb{R}_{\ge 0}$ is any continuous function with support in $B_{r}$ and $\int_{\mathbb{R}^{d}}\d x\, \psi(x) =1$ (``mollifier''). In particular, $\varrho\in [0,1/\vol(B_{r})]$ is independent of $G\in \overline{X}$ and of the choice of the mollifier. If $X_{\mathcal{G}}$ is even uniquely ergodic, then the above statements hold with $\overline{X}=X_{\mathcal{G}}$. \item \label{rho-infty} The statements of Part~\itemref{density} apply also to the \emph{density of vertices belonging to infinite components} \begin{equation} \label{rho-infty-def} \varrho_{\infty} := \lim_{n\to\infty} \frac{|\mathcal{V}_{G,\infty} \cap B_{n}|}{\vol(B_{n})} = \int_{X_{\mathcal{G}}}\!\d\mu(F) \, \sum_{v\in\mathcal{V}_{F,\infty}} \psi(v) . \end{equation} Here, $\mathcal{V}_{G,\infty} := \{v\in\mathcal{V}_{G} : |C_{v}| = \infty\}$, with $C_{v}$ denoting the cluster of $G$ which $v\in\mathcal{V}_{G}$ belongs to. \item \label{lem:freq} Let $P$ be a pattern of some graph $G\in\mathcal{G}$. Then there is a Borel set $\overline{X}\subseteq X_{\mathcal{G}}$ of full $\mu$-measure, such that the \emph{pattern frequency} \begin{equation} \nu(P) := \lim_{n\to\infty} \frac{\nu(P|G\wedge B_{n})}{\vol(B_{n})} \end{equation} exists for all $G \in \overline{X}$ and is independent of $G$. 
The dynamical system $X_{\mathcal{G}}$ is uniquely ergodic, if and only if, given any pattern $P$ of a graph in $\mathcal{G}$, the limit \begin{equation} \label{upf-eq} \nu(P) := \lim_{n\to\infty}\frac{\nu(P|G\wedge B_{n}(a))} {\vol(B_{n})} \end{equation} exists uniformly in $G \in X_{\mathcal{G}}$ and in $a\in\mathbb{R}^d$, and is independent of $G$ and $a$. \item \label{lem:cylinder} Let $P$ be a pattern of some graph $G\in\mathcal{G}$, and let $U\subset\mathbb{R}^d$ be a Borel set with diameter $\mathrm{diam}(U)<r$. Then, the probability of the associated cylinder set \eqref{cyl-set} is given by \begin{equation} \mu(\Xi_{P, U})= \vol(U) \, \nu(P). \end{equation} \end{nummer} \end{cor} \begin{remark} \begin{nummer} \item \label{dens-well-def} The sum over $v$ in the $\mu$-integral in \eqref{rho-def} contains at most one term, because the mollifier $\psi$ is supported in the ball $B_{r}$, where $r$ is the radius of uniform discreteness of the graphs. \item The criterion \eqref{upf-eq} for unique ergodicity in Lemma~\ref{lem:freq} is often referred to as \emph{uniform pattern frequencies}. \item \label{rhopos} For later reference we give two different conditions that imply $\varrho_{\infty} >0$: \quad(1)\quad The situation described in Lemma~\ref{notfinite}, assuming that no $G\in\mathcal{G}$ possesses a finite cluster. \quad(2)\quad The dynamical system $X_{\mathcal{G}}$ is uniquely ergodic and there exists $G \in X_{\mathcal{G}}$ such that $|\mathcal{V}_{G,\infty} \cap B_{n}| / \vol(B_{n})$ has a strictly positive limit as $n\to\infty$. \end{nummer} \end{remark} \begin{proof}[Proof of Corollary~\ref{sammelcor}.] Part~\itemref{density} of the corollary follows from an application of Theorem~\ref{basic-erg} to the continuous function $\phi(G) := \sum_{v\in\mathcal{V}_{G}} \psi(v)$ and the relation \begin{equation} |\mathcal{V}_{G} \cap B_{n}| = \int_{B_{n}}\!\d x\; \phi(x+G) + \mathcal{O}(n^{d-1}). 
\end{equation}
The latter shows that the right-hand side of \eqref{rho-def} does not depend on the particular choice of the mollifier $\psi$. For Part~\itemref{rho-infty}, one has to replace $\mathcal{V}_{G}$ by $\mathcal{V}_{G,\infty}$ in the argument. Parts~\itemref{lem:freq} and~\itemref{lem:cylinder} follow from repeating the arguments in \cite[Lemma~4.3 and Thm.~2.7]{LeMo02}, where the case of Delone multi-sets was treated.
\end{proof}
\begin{definition}
\label{posfreq}
Let $\mathcal{G}$ be a set of graphs of finite local complexity and with uniformly discrete vertex sets of radius $r>0$. We say that the dynamical system $X_{\mathcal{G}}$ satisfies the \emph{positive lower frequency condition}, if for every $G\in X_{\mathcal{G}}$ and every pattern $P\subset G$
\begin{equation}
\label{posfreq-eq}
\liminf_{n\to\infty} \;\frac{\nu(P|G\wedge B_{n})}{\vol(B_{n})} > 0.
\end{equation}
Loosely speaking, any pattern $P$ that occurs once in $G$ does so sufficiently often.
\end{definition}
\begin{remarks}
\label{mini-posfreq}
\item Minimality of $X_{\mathcal{G}}$, which is equivalent to repetitivity for Delone systems of finite local complexity \cite[Thm.~3.2]{LaPl03}, implies that the positive lower frequency condition holds.
\item If, in addition to the positive lower frequency condition, one assumes that $X_{\mathcal{G}}$ is uniquely ergodic, then, in view of Corollary~\ref{lem:freq}, the $\liminf$ in \eqref{posfreq-eq} equals the pattern frequency $\nu{(P)}$. Moreover, in this case the system is minimal and thus strictly ergodic, compare \cite{LaPl03}.
\end{remarks}

\section{Ergodic properties of randomly coloured graphs}
\label{sec:ran-col}
In this section, we supply the graphs of the previous section with a random colouring, and derive a corresponding extension of the Ergodic Theorem~\ref{basic-erg}. We fix a finite, nonempty set $\AA$, equipped with the discrete topology, which we call the set of available \emph{colours}.
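Before formalising the colouring construction, a minimal numerical sketch of what an i.i.d.\ edge colouring looks like; the path graph, the two colours and the measure $\mathbb{P}_{0}$ below are our own illustrative choices. The empirical frequency of a fixed colour word along the path approaches its product probability:

```python
import random

# Toy i.i.d. edge colouring: the edges of a long path graph are coloured
# independently with colours from A = {"a", "b"}, where P_0("a") = 0.7.
rng = random.Random(0)
p_a = 0.7
omega = ["a" if rng.random() < p_a else "b" for _ in range(100_000)]

# Empirical frequency of the coloured 2-edge pattern ("a", "b") among all
# pairs of consecutive edges; by independence of the colours it converges
# to P_0("a") * P_0("b") = 0.21.
pairs = sum(1 for i in range(len(omega) - 1)
            if omega[i] == "a" and omega[i + 1] == "b")
freq = pairs / (len(omega) - 1)
print(freq)
```

This convergence of pattern frequencies to $\nu(P)\,\mathbb{P}_{P}(\eta)$ is exactly what the ergodic theory of this section establishes in the general, not necessarily periodic, setting.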
For definiteness, we consider only random edge colourings of graphs. But all results of this and the next section remain valid in the case of a random colouring of vertices, and of a random colouring of both edges and vertices. This is merely a matter of notation. For a given graph $G$, we define the probability space $\Omega_{G}:=\mbox{\Large$\times$}_{e\in\mathcal{E}_{G}}\AA$, equipped with the $|\mathcal{E}_{G}|$-fold product sigma-algebra $\bigotimes_{e\in\mathcal{E}_{G}}\, 2^{\AA}$ of the power set of $\AA$ and the product probability measure $\mathbb{P}_{G} := \bigotimes_{e \in \mathcal{E}_{G}} \mathbb{P}_0$. Here, $\mathbb{P}_{0}$ is some fixed probability measure on $\AA$. In other words, colours are distributed identically and independently to all edges, and the elementary event $\omega \equiv (\omega_{e})_{e \in \mathcal{E}_{G}}\in\Omega_{G}$ specifies a particular realisation of colours assigned to the edges of $G$. We first extend the framework of the previous section to coloured graphs. Given a graph $G$ and $\omega\in\Omega_{G}$, the pair $G^{(\omega)} \equiv (G,\omega)$ is called a \emph{coloured graph}. For any Borel set $B\subseteq \mathbb{R}^{d}$, we define the restriction $\omega\wedge B \in \mbox{\Large$\times$}_{e\in\mathcal{E}_{G\wedge B}}\AA$ as the image of $\omega$ under the canonical projection from $\Omega_{G}$ to $\mbox{\Large$\times$}_{e\in\mathcal{E}_{G\wedge B}}\AA$. Likewise, given any $x\in\mathbb{R}^{d}$, the translated colour realisation $x+\omega \in \Omega_{x+G}$ is defined component-wise by $(x+\omega)_{x+e} := \omega_{e}$ for all $e\in\mathcal{E}_{G}$. We define the translation and restriction of a coloured graph in the natural way
\begin{equation}
x + G^{(\omega)} := (x+G)^{(x+\omega)}, \qquad\quad G^{(\omega)} \wedge B := (G\wedge B)^{(\omega\wedge B)},
\end{equation}
by shifting and truncating $\omega$ along with $G$.
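These defining relations can be checked mechanically on a toy example. The following sketch, with data structures of our own choosing, verifies that translating a coloured graph and then restricting to the translated ball agrees with restricting first and translating afterwards:

```python
# A coloured graph is modelled as (vertices, colouring): vertices are
# points of R^d (tuples), and the colouring maps each edge, a frozenset
# of two vertices, to a colour.

def translate(x, coloured_graph):
    # x + G^(omega): shift every vertex and re-index the colouring
    # according to (x + omega)_{x + e} = omega_e
    vertices, colouring = coloured_graph
    shift = lambda v: tuple(a + b for a, b in zip(v, x))
    return ({shift(v) for v in vertices},
            {frozenset(map(shift, e)): c for e, c in colouring.items()})

def restrict(coloured_graph, centre, radius):
    # G^(omega) /\ B: keep the vertices inside the open ball B and the
    # colours of the induced edges
    vertices, colouring = coloured_graph
    kept = {v for v in vertices
            if sum((a - b) ** 2 for a, b in zip(v, centre)) < radius ** 2}
    return (kept, {e: c for e, c in colouring.items() if set(e) <= kept})

# toy coloured path graph on three vertices of R^2
G = ({(0, 0), (1, 0), (2, 0)},
     {frozenset({(0, 0), (1, 0)}): "a", frozenset({(1, 0), (2, 0)}): "b"})

x = (5, 0)
lhs = restrict(translate(x, G), (6, 0), 1.5)   # translate, then restrict
rhs = translate(x, restrict(G, (1, 0), 1.5))   # restrict, then translate
print(lhs == rhs)  # prints True
```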
We write $P^{(\eta)} \subseteq G^{(\omega)}$, if $P \subseteq G$ and $\eta_{e} = \omega_{e}$ for all edges $e\in \mathcal{E}_{P}$. The notions of a pattern of a coloured graph and of the number of occurrences of a finite coloured graph $P^{(\eta)}$ in $G^{(\omega)}$ translate accordingly from those in the previous section. For a given set of graphs $\mathcal{G}$, we consider the induced set of coloured graphs \begin{equation} \widehat{\cG} := \{G^{(\omega)}: \omega\in\Omega_{G}, G\in\mathcal{G}\}. \end{equation} \begin{remark} \label{transfer} Since $\AA$ provides only finitely many different colours, it follows that $\mathcal{G}$ has finite local complexity, if and only if $\widehat{\cG}$ has finite local complexity, that is if and only if \begin{equation} |\{(-x+G^{(\omega)}) \wedge B_r: x\in\mathcal{V}_{G}, G^{(\omega)}\in\widehat{\cG} \}| <\infty \end{equation} for every $r>0$. \end{remark} Replacing $G$ and $G'$ in the metric \eqref{metric} by elements of $\widehat{\cG}$, we obtain a metric on $\widehat{\cG}$. This metric is used in the completion of the metric space \begin{equation} \widehat{X}_{\cG} := \overline{ \{x +G^{(\omega)}: x \in\mathbb{R}^{d}, G^{(\omega)} \in\widehat{\cG} \} \rule{0pt}{2.2ex}}. \end{equation} An alternative description of the space $\widehat{X}_{\cG}$ is provided by \begin{lemma} Let $\mathcal{G}$ be a set of graphs with uniformly discrete vertex sets. Then \begin{equation} \label{hat-equal} \widehat{X}_{\cG} = \{ G^{(\omega)}: \omega\in\Omega_{G}, G \in X_{\mathcal{G}}\}. \end{equation} \end{lemma} \begin{proof} To show the inclusion ``$\,\subseteq\,$'', it suffices to prove that the limit $\widehat{G}$ of an arbitrary convergent sequence from $\widehat{\cG}$ is of the form $G^{(\omega)}$ for some $G\in X_{\mathcal{G}}$ and some $\omega\in\Omega_{G}$. 
So assume that for every $\varepsilon >0$ there exist $x_{\varepsilon} \in\mathbb{R}^{d}$, $y_{\varepsilon} \in B_{\varepsilon}$ and $G_{\varepsilon}^{(\omega_{\varepsilon})} \in \widehat{\cG}$ such that $ (x_{\varepsilon} +G_\varepsilon^{(\omega_{\varepsilon})}) \wedge B_{1/\varepsilon} = (y_{\varepsilon} + \widehat{G}) \wedge B_{1/\varepsilon}$. Clearly, convergence of a sequence of coloured graphs implies convergence of the underlying uncoloured graphs, that is, there exists $G \in X_{\mathcal{G}}$ such that $(x_{\varepsilon} +G_\varepsilon) \wedge B_{1/\varepsilon} = (y_{\varepsilon} + G) \wedge B_{1/\varepsilon}$ for all $\varepsilon >0$. We define $\omega\in\Omega_{G}$ as follows: given any $\varepsilon>0$, we set $\omega_{e} := \omega_{\varepsilon, y_{\varepsilon} -x_{\varepsilon} +e}$ for all $e \in\mathcal{E}_{G\wedge B_{1/\varepsilon}(-y_{\varepsilon})}$. This choice is consistent in the sense that if $0<\varepsilon' < \varepsilon$, then $\omega_{\varepsilon, y_{\varepsilon} -x_{\varepsilon} +e} = \omega_{\varepsilon', y_{\varepsilon'} -x_{\varepsilon'} +e}$ for all $e\in \mathcal{E}_{G\wedge B_{1/\varepsilon}(-y_{\varepsilon})}$. By choosing $\varepsilon$ arbitrarily small, we obtain $\omega_{e}$ for all $e\in\mathcal{E}_{G}$. It follows that $\widehat{G}=G^{(\omega)}$. To show the inclusion ``$\,\supseteq\,$'', consider $G^{(\omega)}$ for an arbitrary $G\in X_{\mathcal{G}}$ and arbitrary $\omega \in\Omega_{G}$. Then, for every $\varepsilon >0$ there exist $x_{\varepsilon} \in\mathbb{R}^{d}$, $y_{\varepsilon} \in B_{\varepsilon}$ and $G_{\varepsilon} \in \mathcal{G}$ such that $ (x_{\varepsilon} +G_\varepsilon) \wedge B_{1/\varepsilon} = (y_{\varepsilon} + G) \wedge B_{1/\varepsilon}$. Define $\omega_{\varepsilon,e}:=\omega_{x_\varepsilon -y_{\varepsilon} +e}$ for all $e \in\mathcal{E}_{G_{\varepsilon}\wedge B_{1/\varepsilon}(-x_{\varepsilon})}$ and set $\omega_{\varepsilon, e}$ to an arbitrary value for the remaining edges. 
We then have $G_{\varepsilon}^{(\omega_{\varepsilon})} \in \widehat{\cG}$ for all $\varepsilon >0$ and $x_{\varepsilon} + G_\varepsilon^{(\omega_\varepsilon)}\to G^{(\omega)}$ as $\varepsilon\downarrow 0$.
\end{proof}
There is an analogue to Lemma~\ref{lem:flc} in the previous section. Recalling Remark~\ref{transfer}, it can be formulated as
\begin{lemma}
\label{lem:flc-hat}
Let $\mathcal{G}$ be a set of graphs with uniformly discrete vertex sets. Then, $\widehat{X}_{\cG}$ is compact if and only if $\mathcal{G}$ has finite local complexity.
\end{lemma}
The main goal of this section is to express an ergodic probability measure $\widehat{\mu}$ on a compact space $\widehat{X}_{\cG}$ in terms of an ergodic probability measure $\mu$ on $X_{\mathcal{G}}$ and the probability measures $\mathbb P_G$. This will be achieved in Theorem~\ref{theo:perg} at the end of this section. As a preparation for Theorem~\ref{theo:perg}, we define cylinder sets of $\widehat{X}_{\cG}$ in analogy to those of $X_{\mathcal{G}}$ in the previous section. Given a pattern $P^{(\eta)}$ of some coloured graph in $\widehat{\cG}$ and a Borel set $U\subseteq\mathbb{R}^{d}$, we set
\begin{equation}
\widehat{\Xi}_{P^{(\eta)},U} := \{ G^{(\omega)} \in \widehat{X}_{\cG} : x + P^{(\eta)} \subseteq G^{(\omega)} \text{~for some~} x\in U\}.
\end{equation}
The basic step in the construction of an ergodic measure on $\widehat{X}_{\cG}$ is given by the following lemma, which extends \cite[Lemma~3.1]{Hof98} to graphs which are not necessarily aperiodic -- and also to more general measures $\mathbb{P}_{G}$. We employ the notation $\raisebox{.4ex}{$\chi$}_{S}$ for the indicator function of some set $S$.
\begin{lemma}
\label{lem:hoflemma}
Let $\mathcal{G}$ be a set of graphs of finite local complexity and with uniformly discrete vertex sets of radius $r>0$. Let $\mu$ be an ergodic probability measure on $X_{\mathcal{G}}$.
Then, given any pattern $P^{(\eta)}$ of some coloured graph in $\widehat{\cG}$ and any Borel set $U\subseteq\mathbb{R}^{d}$ with diameter $\diam(U) <r$, the limit
\begin{equation}
\label{erg-conv}
\lim_{n\to\infty} \frac{1}{\vol(B_{n})} \int_{B_n}\!\d x\; \raisebox{.4ex}{$\chi$}_{\widehat{\Xi}_{P^{(\eta)},U}}(x+G^{(\omega)}) = \mu(\Xi_{P, U}) \, \mathbb{P}_P(\eta)
\end{equation}
exists for $\mu$-a.a.\ $G \in X_{\mathcal{G}}$ and for $\mathbb P_G$-a.a.\ $\omega\in\Omega_G$. If $\widehat{X}_{\cG}$ is uniquely ergodic, then the limit \eqref{erg-conv} exists for \emph{all} $G\in X_{\mathcal{G}}$ and for $\mathbb P_G$-a.a.\ $\omega\in\Omega_G$.
\end{lemma}
\begin{proof}
It follows from the definition of cylinder sets that
\begin{equation}
\label{hof-start}
\int_{B_n}\!\d x\; \raisebox{.4ex}{$\chi$}_{\widehat{\Xi}_{P^{(\eta)},U}}(x+G^{(\omega)}) = \vol(U) \; \nu(P^{(\eta)}|G^{(\omega)} \wedge B_{n}) +\mathcal{O}(n^{d-1})
\end{equation}
asymptotically as $n\to\infty$, see also the proof of \cite[Thm.~2.7]{LeMo02} for the argument. We analyse the asymptotic behaviour of the expression
\begin{equation}
\label{count}
\frac{\nu(P^{(\eta)}|G^{(\omega)} \wedge B_{n})}{\vol(B_{n})}\le \frac{\nu(P|G \wedge B_{n})}{\vol(B_{n})}.
\end{equation}
Ergodicity of $X_{\mathcal{G}}$ implies
\begin{equation}
\label{freq-conv}
\lim_{n\to\infty} \frac{\nu(P|G \wedge B_{n})}{\vol(B_{n})} = \nu(P) = \frac{\mu(\Xi_{P,U})}{\vol(U)},
\end{equation}
for $\mu$-a.a.\ $G\in X_{\mathcal{G}}$, resp.\ for all $G \in X_{\mathcal{G}}$ in the uniquely ergodic case, see Lemmas~\ref{lem:freq} and~\ref{lem:cylinder}. This proves the statement, if $\mu(\Xi_{P,U})=0$. Otherwise, $\nu(P|G \wedge B_{n})$ grows unboundedly in $n$. In particular, there exists $n_{0}\in\mathbb{N}$ such that $\nu(P|G \wedge B_{n}) \ne 0$ for all $n\ge n_0$.
Thus, we can write \begin{equation} \label{enlarge} \frac{\nu(P^{(\eta)}|G^{(\omega)} \wedge B_{n})}{\vol(B_{n})} = \frac{\nu(P| G \wedge B_{n})}{\vol(B_{n})} \; \frac{\nu(P^{(\eta)}|G^{(\omega)} \wedge B_{n})}{\nu(P|G \wedge B_{n})} \end{equation} for $n\ge n_0$. It suffices to show that the second factor on the r.h.s.\ of \eqref{enlarge} converges to $\mathbb{P}_{P}(\eta)$ as $n\to\infty$ for $\mathbb{P}_{G}$-a.a.\ $\omega \in\Omega_{G}$. The latter statement follows from the strong law of large numbers, which is applicable due to Kolmogorov's criterion \cite{Bau01}. This is obvious if all translates $P_{j}$, $j\in\mathbb{N}$, of $P$ in $G$ are pairwise overlap-free, because then colours are assigned in a stochastically independent way to different translates. Otherwise, we partition the set $\{P_{j}\}_{j\in\mathbb{N}}$ of all such translates into a finite number $\Delta$ of (non-empty) subsets $\{P_{j}\}_{j\in J_{\alpha}}$, $\alpha =1,\ldots,\Delta$, such that there are no mutual overlaps between the translates within each of these subsets. Here we have $\varnothing \neq J_{\alpha} \subset\mathbb{N}$ for all $\alpha =1,\ldots,\Delta$, $\cup_{\alpha=1}^{\Delta} J_{\alpha} = \mathbb{N}$ and $J_{\alpha} \cap J_{\alpha'} = \varnothing$ for all $\alpha \neq \alpha'$. The existence of such a partition may be seen by a graph-colouring argument: construct a graph $\mathcal{T}$ such that each translate $P_{j}$ defines one point in the vertex set of $\mathcal{T}$. Two vertices are adjacent in $\mathcal{T}$ if the corresponding translates overlap. Clearly, a vertex colouring of $\mathcal{T}$ (with adjacent vertices having different colours) provides an example of the partition that we are seeking. Due to uniform discreteness, the degree of any vertex in $\mathcal{T}$ is bounded by some number $d_{\mathcal{T}\!,\mathrm{max}} < \infty$.
Thus, the vertex-colouring theorem \cite{Die05} ensures the existence of such a colouring with $\Delta \le 1+ d_{\mathcal{T}\!,\mathrm{max}}$ different colours. Denoting the number of elements in $\{P_{j}\}_{j\in J_{\alpha}}$ with the property $P_{j} \wedge B_{n} = P_{j}$ by $\nu_{\alpha}(P |G \wedge B_{n})$, and denoting by $\nu_{\alpha}(P^{(\eta)}|G^{(\omega)} \wedge B_{n})$ the analogous quantity requiring, in addition, a match of the edge colourings, we can write \begin{equation} \label{decompose} \frac{\nu(P^{(\eta)}|G^{(\omega)} \wedge B_{n})}{\nu(P|G \wedge B_{n})} = \sum_{\alpha=1}^{\Delta} \frac{\nu_{\alpha}(P^{(\eta)}|G^{(\omega)} \wedge B_{n})}{\nu_{\alpha}(P|G \wedge B_{n})} \; \frac{\nu_{\alpha}(P|G \wedge B_{n})}{\nu(P|G \wedge B_{n})}. \end{equation} Here we assume $n$ large enough so that $\nu_{\alpha}(P|G \wedge B_{n}) >0$ for all $\alpha$. Clearly, those $\alpha$ for which the index set $J_{\alpha}$ is finite do not contribute to \eqref{decompose} in the macroscopic limit $n\to\infty$, because for them the right-most fraction in \eqref{decompose} vanishes in the limit. For the remaining $\alpha$'s, the strong law of large numbers can be applied and gives \begin{equation} \lim_{n\to\infty} \frac{\nu_{\alpha}(P^{(\eta)} | G^{(\omega)} \wedge B_{n})}{\nu_{\alpha}(P|G \wedge B_{n})} = \mathbb{P}_{P}(\eta) \end{equation} for $\mathbb{P}_{G}$-a.a.\ $\omega\in\Omega_{G}$. 
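The graph-colouring step above admits a simple algorithmic illustration: a greedy colouring of the overlap graph $\mathcal{T}$ never uses more than $1+d_{\mathcal{T}\!,\mathrm{max}}$ colours, and each colour class is an overlap-free family $\{P_{j}\}_{j\in J_{\alpha}}$. The following sketch is purely illustrative and not part of the proof; the adjacency data and the helper name `greedy_colouring` are our own choices.

```python
# Greedy vertex colouring of an overlap graph. Each vertex stands for a
# translate P_j; edges join overlapping translates. The greedy rule uses at
# most 1 + max_degree colours, and each colour class is an overlap-free
# subset {P_j : j in J_alpha}. The adjacency data is a hypothetical example.

def greedy_colouring(adjacency):
    """Assign to each vertex the smallest colour not used by its neighbours."""
    colour = {}
    for v in sorted(adjacency):
        taken = {colour[w] for w in adjacency[v] if w in colour}
        c = 0
        while c in taken:
            c += 1
        colour[v] = c
    return colour

if __name__ == "__main__":
    # Overlap graph on six translates P_0, ..., P_5 (hypothetical overlaps).
    overlaps = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
    colour = greedy_colouring(overlaps)
    max_degree = max(len(nbrs) for nbrs in overlaps.values())
    # Bound Delta <= 1 + d_max, and adjacent vertices get different colours.
    assert len(set(colour.values())) <= 1 + max_degree
    assert all(colour[v] != colour[w] for v in overlaps for w in overlaps[v])
    print(colour)
```

In the proof, the colour classes play the role of the index sets $J_{\alpha}$ along which the strong law of large numbers is applied separately.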
Therefore we conclude \begin{align} \lim_{n\to\infty} & \left| \frac{\nu(P^{(\eta)}|G^{(\omega)} \wedge B_{n})}{\nu(P|G \wedge B_{n})} - \mathbb{P}_{P}(\eta) \right| \nonumber\\ & \le \lim_{n\to\infty} \sum_{\alpha =1}^{\Delta} \left| \frac{\nu_{\alpha}(P^{(\eta)} | G^{(\omega)} \wedge B_{n})}{\nu_{\alpha}(P|G \wedge B_{n})} - \mathbb{P}_{P}(\eta) \right| \; \frac{\nu_{\alpha}(P|G \wedge B_{n})}{\nu(P|G \wedge B_{n})} \nonumber\\ & \le \lim_{n\to\infty} \sum_{\begin{subarray}{c}\alpha \in\{1,\ldots,\Delta\}: \\ |J_{\alpha}| =\infty \end{subarray}} \; \left| \frac{\nu_{\alpha}(P^{(\eta)} | G^{(\omega)} \wedge B_{n})}{\nu_{\alpha}(P|G \wedge B_{n})} - \mathbb{P}_{P}(\eta) \right| \nonumber\\ & = 0 \end{align} for $\mathbb{P}_{G}$-a.a.\ $\omega\in\Omega_{G}$, and the lemma follows together with \eqref{hof-start}, \eqref{enlarge} and \eqref{freq-conv}. \end{proof} Having established Lemma~\ref{lem:hoflemma}, which is an extension of \cite[Lemma~3.1]{Hof98} to more general graphs, we can now argue as in the proof of \cite[Thm.~3.1]{Hof98} to obtain the central result of this section. \begin{theorem} \label{theo:perg} Let $\mathcal{G}$ be a set of graphs of finite local complexity and with uniformly discrete vertex sets of radius $r>0$. Fix an ergodic probability measure $\mu$ on $X_{\mathcal{G}}$. Then there exists a unique ergodic probability measure $\widehat{\mu}$ on $\widehat{X}_{\cG}$ such that \begin{nummer} \item for every pattern $P^{(\eta)}$ of some coloured graph in $\widehat{\cG}$ and every Borel set $U\subseteq\mathbb{R}^{d}$ with diameter $\diam(U) <r$ the relation \begin{equation} \widehat{\mu}(\widehat{\Xi}_{P^{(\eta)}, U}) = \mu(\Xi_{P, U}) \, \mathbb{P}_{P}(\eta) \end{equation} holds.
\item \label{item-erg} for every $\phi\in \mathrm{L}^1(\widehat{X}_{\cG},\widehat{\mu})$ the limit \begin{equation} \label{erg-limit} \lim_{n\to\infty} \frac{1}{\vol(B_{n})} \int_{B_n}\!\d x\; \phi(x+G^{(\omega)}) = \int_{\widehat{X}_{\cG}} \!\d\widehat{\mu}(F^{(\sigma)}) \; \phi (F^{(\sigma)}) \end{equation} exists for $\widehat{\mu}$-a.a.\ $G^{(\omega)} \in \widehat{X}_{\cG}$. If $\widehat{X}_{\cG}$ is even uniquely ergodic and, in addition, if $\phi$ is either continuous or a linear combination of cylinder functions, then the limit \eqref{erg-limit} exists for \emph{all} $G\in X_{\mathcal{G}}$ and for $\mathbb P_G$-a.a.\ $\omega\in\Omega_G$. \item \label{fubini} for every $\phi\in \mathrm{L}^1(\widehat{X}_{\cG},\widehat{\mu})$ we have \begin{equation} \int_{\widehat{X}_{\cG}} \!\d\widehat{\mu}(G^{(\omega)}) \; \phi (G^{(\omega)}) = \int_{X_{\mathcal{G}}} \!\d\mu(G) \int_{\Omega_{G}}\!\d\mathbb{P}_{G}(\omega) \; \phi (G^{(\omega)}). \end{equation} \end{nummer} \end{theorem} \begin{remarks} \item The corresponding theorem \cite[Thm.~3.1]{Hof98} is a statement about Bernoulli site percolation on the Penrose tiling. Our result is an extension, which covers both the aperiodic and the periodic situations, under weaker assumptions on the base graphs. \item The asserted uniqueness of the ergodic measure $\widehat{\mu}$ in the theorem does not mean that the dynamical system $\widehat{X}_{\cG}$ is uniquely ergodic. It only means that $\widehat{\mu}$ is unique for the given ergodic measure $\mu$ on $X_{\mathcal{G}}$ and the measures $\mathbb{P}_{G}$ on $\Omega_{G}$. \end{remarks} \section{Finite-range operators on randomly coloured graphs} \label{sec:operators} In this section, we consider covariant finite-range operators on randomly coloured graphs, together with some of their basic ergodic and spectral properties. We establish the existence of their integrated density of states, derive its self-averaging property, and study the non-randomness of the spectrum.
For the spectral-theoretic background, the reader is referred to \cite{ReSiI,ReSiII}. For a countable set $\mathcal{V}$, let $\ell^{2}(\mathcal{V})$ be the Hilbert space of square-summable functions $\psi: \mathcal{V} \rightarrow\mathbb{C}$ with canonical scalar product $\langle \cdot, \cdot\rangle$. We denote the canonical basis in $\ell^{2}(\mathcal{V})$ by $\{\delta_{v}\}_{v\in\mathcal{V}}$, that is $\delta_{v}(w) = 1$ if $w=v$ and zero otherwise. \begin{definition} \begin{nummer} \item \label{cov-op-def} Let $G^{(\omega)}$ be a coloured graph. A bounded linear operator $H_{G^{(\omega)}}$ in $\ell^2(\mathcal{V}_{G^{(\omega)}})$ is said to be \emph{covariant of range} $R \in ]0,\infty[$, if \begin{enumerate} \item $\langle\delta_{x+v}, H_{G^{(\omega)}} \delta_{x+w}\rangle = \langle\delta_v, H_{G^{(\omega)}} \delta_w\rangle$ \quad for all $v,w \in\mathcal{V}_{G^{(\omega)}}$ and all $x\in\mathbb{R}^{d}$ for which $G^{(\omega)} \wedge \bigl( B_{R}(x+ v) \cup B_{R}(x+w) \bigr) = G^{(\omega)} \wedge \bigl( B_{R}(v) \cup B_{R}(w) \bigr)$, \par\smallskip \item $\langle \delta_v, H_{G^{(\omega)}}\delta_w\rangle=0$ \quad for all $v,w \in \mathcal{V}_{G^{(\omega)}}$ subject to $|v-w|\ge R$. \end{enumerate} \smallskip \item \label{erg-op-def} Let $\mathcal{G}$ be a set of graphs of finite local complexity and with uniformly discrete vertex sets of radius $r$. Fix an ergodic probability measure $\widehat{\mu}$ on $\widehat{X}_{\cG}$. Given any $R\in]0,\infty[$, we call a mapping $\widehat{H}: G^{(\omega)} \mapsto H_{G^{(\omega)}}$ from $\widehat{X}_{\cG}$, with values in the set of bounded, self-adjoint operators that are covariant of range $R$, a $\widehat{\mu}$-\emph{ergodic self-adjoint operator of finite range}. \end{nummer} \end{definition} \begin{remarks} \item The covariance condition in the above definition means that $H_{G^{(\omega)}}$ is determined on the class of non-equivalent $R$-patterns. 
\item \label{unibound} A $\widehat{\mu}$-ergodic self-adjoint operator of finite range $\widehat{H}$ is \emph{uniformly bounded}, in the sense that $\sup_{G^{(\omega)} \in \widehat{X}_{\cG}} \| H_{G^{(\omega)}} \| < \infty$, where $\|\cdot\|$ denotes the usual operator norm. In particular, there exists a compact interval $K\subset\mathbb{R}$, such that for $\widehat{\mu}$-almost every $G^{(\omega)} \in \widehat{X}_{\cG}$ the spectrum of $H_{G^{(\omega)}}$ is contained in $K$. \end{remarks} The eigenvalue density of $\widehat{H}$ is a quantity of great interest in applications. \begin{definition} \label{Ndef} Let $\widehat{H}$ be a $\widehat{\mu}$-ergodic self-adjoint operator of finite range, and fix a mollifier as in Corollary~\ref{density}. Then, the \emph{integrated density of states} of $\widehat{H}$ is defined as the right-continuous distribution function $\mathbb{R} \rightarrow [0,\varrho]$, \begin{equation} \label{eq:Ndef} E \mapsto N (E) := \int_{\widehat{X}_{\cG}} \!\d\widehat{\mu}(G^{(\omega)}) \sum_{v\in \mathcal{V}_{G^{(\omega)}}} \psi(v)\, \langle\delta_v, \Theta(E-H_{G^{(\omega)}}) \delta_v\rangle . \end{equation} In \eqref{eq:Ndef} we have denoted the right-continuous Heaviside unit-step function by $\Theta := \raisebox{.4ex}{$\chi$}_{[0,\infty[}$ and the mollifier $\psi$ was introduced in Corollary~\ref{density}. \end{definition} \begin{remarks} \item \label{integrable} The integrand in \eqref{eq:Ndef} is bounded, see Remark~\ref{dens-well-def}. Moreover, it is measurable. In fact, the map $\widehat{X}_{\cG} \rightarrow \mathbb{C}$, \begin{equation} G^{(\omega)} \mapsto f_{\psi}(G^{(\omega)}) := \sum_{v\in \mathcal{V}_{G^{(\omega)}}} \psi(v)\, \langle\delta_v, f(H_{G^{(\omega)}}) \delta_v\rangle \end{equation} is even continuous for all $f \in \mathrm{L}^{\infty}(\mathbb{R})$, thanks to the continuity of $\psi$. 
\item The unique Borel measure $\d N$ on $\mathbb{R}$ associated with the distribution function $N$ has total mass given by the vertex density $\varrho$, cf.\ Corollary~\ref{density}. Moreover, $\d N$ is compactly supported, due to the boundedness of $\widehat{H}$. \item Clearly, the integrated density of states $N$ depends on the choice of the ergodic measure $\widehat{\mu}$ on $\widehat{X}_{\cG}$. However, ergodicity of $\widehat{\mu}$ implies that $N$ does not depend on the choice of the mollifier $\psi$. This will become manifest in Theorem~\ref{maclimit} below. \end{remarks} The integrated density of states of $\widehat{H}$ can also be characterised in terms of a macroscopic limit. \begin{theorem} \label{maclimit} Let $\widehat{H}$ be a $\widehat{\mu}$-ergodic, self-adjoint operator of finite range and let $N$ be its integrated density of states \eqref{eq:Ndef}, for some choice of the mollifier $\psi$. Then there exists a Borel set $\widehat{A} \subseteq\widehat{X}_{\cG}$ of full probability, $\widehat{\mu}(\widehat{A}) =1$, such that \begin{equation} N(E) = \lim_{n\to\infty} \biggl\{ \frac{1}{\vol(B_{n})} \; \sum_{v\in\mathcal{V}_{G^{(\omega)}} \cap B_{n}} \langle\delta_v, \Theta(E-H_{G^{(\omega)}}) \delta_v\rangle\biggr\} \end{equation} holds for all $G^{(\omega)} \in \widehat{A}$ and all $E \in\mathbb{R}$, except for the (at most countably many) discontinuity points of $N$. If $\widehat{X}_{\cG}$ is uniquely ergodic, then convergence holds even for \emph{all} $G\in X_{\mathcal{G}}$ and $\mathbb{P}_{G}$-a.a.\ $\omega\in\Omega_{G}$. \end{theorem} \begin{proof}[Sketch of the proof] The theorem follows from vague convergence of the associated measures by standard arguments \cite[Thms.~30.8, 30.13]{Bau01}.
Vague convergence follows in turn from the Ergodic Theorem~\ref{item-erg}, and from the identity \begin{equation} \sum_{v\in\mathcal{V}_{G^{(\omega)}} \cap B_{n}} \langle\delta_v, f(H_{G^{(\omega)}}) \delta_v\rangle = \int_{B_n}\!\d x\; f_{\psi}(x+G^{(\omega)}) + \mathcal{O}(n^{d-1}), \end{equation} which is valid for arbitrary $f\in C_{\mathrm{c}}(\mathbb{R})$ and arbitrary mollifiers $\psi$. The continuous function $f_{\psi}$, associated with $f$, was defined in Remark~\ref{integrable}. \end{proof} \begin{remark} For systems of randomly coloured subgraphs of $\mathbb{Z}^{d}$ one can even prove uniform convergence in the energy $E$ \cite{LeMu06}. Such a result, which is based on ideas in \cite{LeSt05}, has now been generalised in \cite{LeVe07}. In particular, it applies also to random colourings of graphs with Delone vertex sets. In addition, the origin of the discontinuities of $N$ is related to compactly supported eigenfunctions and their sizes to equivariant dimensions \cite{LeVe07}. Statements corresponding to Theorem~\ref{maclimit} for systems of uncoloured Delone sets with finite local complexity can be found in \cite{Hof95}, \cite[Prop.~4.6]{LeSt03}, \cite[Thm.~3]{LeSt05} and \cite[Thm.~6.1]{LePe07}. The papers \cite{Hof95} and \cite{LeSt05} deal with strictly ergodic systems, for which \cite{LeSt05} establishes even uniform convergence in the energy $E$. \end{remark} Next, we relate the set of growth points of the integrated density of states $N$ to the spectrum of $\widehat{H}$. Given any self-adjoint operator $A$, we denote its spectrum by $\spec(A)$ and the essential part of the spectrum by $\specess(A)$. We write $\supp (\d N)$ for the topological support of the measure on $\mathbb{R}$ whose distribution function is the integrated density of states $N$.
The topological support can be characterised as \begin{equation} \label{top-supp-def} \supp (\d N) = \bigg\{ E\in\mathbb{R} : \int_{]\lambda,\lambda'[} \d N >0 \text{\quad for all~} \lambda,\lambda'\in \mathbb{Q} \text{~with~} \lambda <E <\lambda'\bigg\} . \end{equation} \begin{theorem} \label{spec-ids} Let $\widehat{H}$ be a $\widehat{\mu}$-ergodic, self-adjoint operator of finite range. Then there exists a Borel set $\overline{X} \subseteq X_{\mathcal{G}}$ with $\mu(\overline{X})=1$ such that \begin{equation} \label{eq:spec-ids} \spec(H_{G^{(\omega)}}) = \specess (H_{G^{(\omega)}}) = \supp (\d N) \end{equation} for all $G\in\overline{X}$ and $\mathbb{P}_{G}$-almost all $\omega\in\Omega_{G}$. Moreover, if $X_{\mathcal{G}}$ is uniquely ergodic and obeys the positive lower frequency condition, see Definition~\ref{posfreq}, then the statement holds even with $\overline{X} = X_{\mathcal{G}}$, that is, for \emph{every} $G\in X_{\mathcal{G}}$. \end{theorem} The theorem follows from Lemmas~\ref{N-sub-spec} and~\ref{spec-sub-N} below. \begin{remark} The most interesting part of Theorem~\ref{spec-ids} is that the statement \eqref{eq:spec-ids} holds for all $G\in X_{\mathcal{G}}$, under the stronger assumptions of unique ergodicity and the positive lower frequency condition. For uncoloured Delone systems, such a result can be found in \cite[Prop.~7.4]{Hof93} and \cite[Lemma~3.6, Thm.~4.3]{LeSt03}, cf.\ Remarks~\ref{mini-posfreq}. The weaker $\mu$-almost sure statement of Theorem~\ref{spec-ids}, which holds without the additional hypotheses, is analogous to results in \cite{LePe07}, where they are proved in the context of ergodic groupoids. \end{remark} \section{Lifshits tails for the Laplacian on bond-percolation graphs} \label{secLif} In this section, we study the asymptotics at the lower spectral edge of the integrated density of states of graph Laplacians, which are associated with (Bernoulli) bond-percolation subgraphs of a given graph.
We will specialise the general framework of the previous sections in three respects. First, we set $\mathcal{G} := \{G_{0}\}$, where $G_{0}$ is some fixed graph of finite local complexity, with a uniformly discrete vertex set and a bounded degree sequence. For notational simplicity we write $X$ instead of $X_{\{G_{0}\}}$, and $\widehat{X}$ instead of $\widehat{X}_{\{G_{0}\}}$. We fix an ergodic measure $\mu$ on $X$. Second, we interpret a Bernoulli bond-percolation subgraph as a particular randomly coloured graph. To this end, we choose the set of colours $\AA = \{0,1\}$ and make the identification of a coloured graph $(G,\omega) \in\widehat{X}$ with the subgraph $(\mathcal{V}_{G}, \mathcal{E}_{G}^{(\omega)})$ of $G$ which has the same vertex set as $G$ and edge set $\mathcal{E}_{G}^{(\omega)} := \{e\in\mathcal{E}_{G}: \omega_{e}=1\}$. We denote this subgraph again by $G^{(\omega)}$. The probability measure on $\Omega_{G}$ is given by the $|\mathcal{E}_{G}|$-fold product measure $\mathbb{P}^{p}_{G} = \bigotimes_{e \in \mathcal{E}_{G}}\mathbb{P}_{0}^{p}$ of the Bernoulli measure $\mathbb{P}_{0}^{p}:=p \boldsymbol{\delta}_{1} + (1-p)\boldsymbol{\delta}_{0}$ on $\AA$ with parameter $p \in[0,1]$. Thus, each edge is taken away from $G$ independently of the others with probability $1-p$. According to Section~\ref{sec:ran-col}, the ergodic measure $\mu$ on $X$ gives rise to an ergodic measure $\widehat{\mu}_{p}$ on $\widehat{X}$. Third, we take the operator $H_{G^{(\omega)}}$ to be the combinatorial or graph Laplacian $\Delta_{G^{(\omega)}}$ associated with the graph $G^{(\omega)}$. 
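The product measure $\mathbb{P}^{p}_{G}$ is straightforward to sample on a finite piece of a base graph. The sketch below is only an illustration of the setup, with a finite square grid standing in for $G_{0}$ (our own choice, not from the text): each edge keeps colour $1$ with probability $p$, and the clusters of the resulting subgraph $G^{(\omega)}$ are counted by breadth-first search.

```python
import random
from collections import deque

def grid_edges(n):
    """Edge set of an n-by-n piece of the square lattice (illustrative base graph)."""
    edges = []
    for x in range(n):
        for y in range(n):
            if x + 1 < n:
                edges.append(((x, y), (x + 1, y)))
            if y + 1 < n:
                edges.append(((x, y), (x, y + 1)))
    return edges

def percolation_clusters(n, p, rng):
    """Sample omega (keep each edge with probability p) and count the clusters."""
    kept = [e for e in grid_edges(n) if rng.random() < p]   # edges with omega_e = 1
    nbrs = {(x, y): [] for x in range(n) for y in range(n)}
    for u, v in kept:
        nbrs[u].append(v)
        nbrs[v].append(u)
    seen, clusters = set(), 0
    for v in nbrs:
        if v not in seen:                 # start a BFS from each new cluster
            clusters += 1
            queue = deque([v])
            seen.add(v)
            while queue:
                u = queue.popleft()
                for w in nbrs[u]:
                    if w not in seen:
                        seen.add(w)
                        queue.append(w)
    return clusters

if __name__ == "__main__":
    rng = random.Random(0)
    # Extreme cases: p = 1 keeps the connected grid, p = 0 isolates every vertex.
    assert percolation_clusters(4, 1.0, rng) == 1
    assert percolation_clusters(4, 0.0, rng) == 16
    print(percolation_clusters(20, 0.3, rng))
```

By the lemma below, the number of clusters returned here equals the multiplicity of the eigenvalue zero of the Laplacian of the sampled finite subgraph.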
The graph Laplacian is a covariant operator of range $R=2$ in the sense of the previous section, as follows easily from its definition in the following lemma. \begin{lemma} Given an arbitrary graph $G=(\mathcal{V},\mathcal{E})$ with bounded degree sequence, the \emph{graph Laplacian} $\Delta_{G}: \ell^{2}(\mathcal{V}) \rightarrow \ell^{2}(\mathcal{V})$, \begin{equation} (\Delta_{G}\varphi)(v) := \sum_{w \in \mathcal{V}: \{v,w\} \in \mathcal{E}} [\varphi(v) -\varphi(w)] \end{equation} for all $\varphi \in\ell^{2}(\mathcal{V})$ and all $v \in\mathcal{V}$, is a bounded, self-adjoint and non-negative linear operator. Moreover, zero is an eigenvalue of $\Delta_{G}$ if and only if $G$ possesses a finite cluster. The multiplicity of the eigenvalue zero is given by the number of finite clusters of $G$. \end{lemma} We omit the straightforward proof of the lemma and refer to standard accounts \cite{Chu97,Col98} on spectral graph theory instead. The mapping $\widehat{X} \ni G^{(\omega)} \mapsto \Delta_{G^{(\omega)}}$ defines a $\widehat{\mu}_{p}$-ergodic, self-adjoint operator of finite range in the sense of Definition~\ref{erg-op-def}, the \emph{Laplacian on bond-percolation graphs associated with} $G_{0}$. The graph $G_{0}$ need not be connected in what follows. However, it is crucial that the vertex density $\varrho_{\infty}$ of its infinite component(s), which was introduced in Corollary~\ref{rho-infty}, is strictly positive. \begin{theorem} \label{main} Let $G_{0}$ be a graph of finite local complexity, with a uniformly discrete vertex set, a bounded degree sequence $d_{\mathrm{max}} := \sup_{v\in\mathcal{V}_{G_{0}}} d_{G_{0}}(v) < \infty$, and a maximal edge length $l_\mathrm{max}:=\sup\{|u-v|:\{u,v\}\in \mathcal{E}_{G_{0}}\}<\infty$. Assume further that $\varrho_{\infty} > 0$, with respect to some ergodic measure $\mu$ on $X$.
Consider the Laplacian on bond-percolation graphs associated with $G_{0}$, and let $N$ be its integrated density of states \eqref{eq:Ndef}, with respect to the measure $\widehat{\mu}_{p}$. If $p \in ]0, \frac{1}{d_{\mathrm{max}}-1}[$, then \begin{equation} \label{main-eq} \lim_{E\downarrow 0}\;\frac{\ln\bigl| \ln [N(E) -N(0)]\bigr|}{\ln E} = -1/2 , \end{equation} that is, $N$ exhibits a \emph{Lifshits tail} with Lifshits exponent $1/2$ at the lower edge of the spectrum. \end{theorem} \begin{remark} \begin{nummer} \item Theorem~\ref{main} follows from Lemmas~\ref{upper} and~\ref{lower} below, which provide slightly stronger statements than needed to conclude \eqref{main-eq}. The lemmas show that Theorem~\ref{main} holds for all edge probabilities $p$, for which the cluster-size distribution decays exponentially for $\mu$-almost every graph $G\in X$. This means, in particular, that the validity of Theorem~\ref{main} is limited to the \emph{non-percolating phase}, that is, $\bigl\{p \in [0,1]: \sup_{v\in\mathcal{V}_{G_{0}}} \mathbb{P}_{G_{0}}^{p}(|C_{v}| = \infty) =0 \bigr\}$, see Section~\ref{sec:percest}. Exponential decay of the cluster-size distribution is guaranteed by the conditions $l_\mathrm{max}<\infty$ and $p \in ]0, \frac{1}{d_{\mathrm{max}}-1}[$, see Corollary~\ref{together}. \item Remark~\ref{rhopos} states sufficient conditions for $\varrho_{\infty} >0$. \item Theorem~\ref{main} generalises part of Theorem~1.14 in \cite{KiMu06}, where the special case $G_{0} = \mathbb{L}^{d}$ of the $d$-dimensional integer lattice was considered. \item The Lifshits exponent $1/2$ in \eqref{main-eq} does not depend on the spatial dimension $d$ of the underlying space. This comes from the fact that the asymptotics \eqref{main-eq} is determined by the longest linear clusters of the percolation graphs $G^{(\omega)} \in \widehat{X}$. \end{nummer} \end{remark} \begin{lemma} \label{upper} Let $G_{0}$ be a graph of finite local complexity and with a uniformly discrete vertex set.
Assume there exists $p_{0}\in ]0,1[$ such that for every $p \in [0, p_{0}[$ and $\mu$-almost all $G\in X$ the cluster-size distribution for percolation on $G$ decays exponentially, i.e., \begin{equation} \label{CSD-decay} \mathbb{P}_{G}^{p} \{\omega \in\Omega_{G}: |C_{v}^{(\omega)}| \ge n \} \le D(p) \, \exp\{ - \lambda(p) \, n\} \end{equation} for all $n\in\mathbb{N}$, where $D(p), \lambda(p) \in ]0,\infty[$ are constants that depend on $p$, but are uniform in $G\in X$ and $v\in\mathcal{V}_{G}$. Here $C_{v}^{(\omega)}$ denotes the cluster of the graph $G^{(\omega)}$ containing $v\in\mathcal{V}_{G}$. Then \begin{equation} N(E) - N(0) \le \varrho D(p) \exp\{ - \lambda(p)\, E^{-1/2}\} \end{equation} for all $p \in [0, p_{0}[$ and all $E>0$. The vertex density $\varrho$ was introduced in \eqref{rho-def}. \end{lemma} \begin{proof} The proof is analogous to that of Lemma~2.7 (Neumann case) in \cite{KiMu06}. The block-diagonal form of the Laplacian with respect to the cluster structure implies \begin{align} N(E) -N(0) &= \int_{X}\!\d\mu(G) \, \sum_{v\in\mathcal{V}_{G}} \psi(v) \int_{\Omega_{G}} \d\mathbb{P}_{G}^{p}(\omega) \nonumber\\ & \hspace*{4cm} \times \langle\delta_{v}, \bigl[ \Theta (E- \Delta_{C_{v}^{(\omega)}}) - \Theta ( \Delta_{C_{v}^{(\omega)}}) \bigr] \delta_{v}\rangle \nonumber\\ & \le\int_{X}\!\d\mu(G) \, \sum_{v\in\mathcal{V}_{G}} \psi(v) \int_{\Omega_{G}} \d\mathbb{P}_{G}^{p}(\omega) \, \Theta \bigl(E- E_{1}(C_{v}^{(\omega)})\bigr) \nonumber\\ & \hspace*{4cm} \times \langle\delta_{v}, \bigl[ \one - \Theta ( \Delta_{C_{v}^{(\omega)}}) \bigr] \delta_{v}\rangle \nonumber\\ & \le\int_{X}\!\d\mu(G) \, \sum_{v\in\mathcal{V}_{G}} \psi(v) \; \mathbb{P}_{G}^{p} \big\{ \omega\in\Omega_{G} : E \ge E_{1}(C_{v}^{(\omega)}) \bigr\} , \label{upper-init} \end{align} where $E_{1}(C_{v}^{(\omega)})$ denotes the smallest non-zero eigenvalue of the Laplacian on the cluster $C_{v}^{(\omega)}$. 
As a particular consequence of the decay \eqref{CSD-decay} of the cluster-size distribution, we infer that $|C_{v}^{(\omega)}| < \infty$ for all $v\in\mathcal{V}_{G}$ holds for $\mu$-almost all $G \in X$ and $\mathbb{P}_{G}^{p}$-almost all $\omega\in\Omega_{G}$. Hence, Cheeger's inequality $E_{1}(C_{v}^{(\omega)}) \ge |C_{v}^{(\omega)}|^{-2}$, see e.g.\ Lemma~A.1 in \cite{KhKi06}, can be applied to estimate the probability in \eqref{upper-init}, and we get \begin{equation} \label{upper-final} N(E) -N(0) \le \int_{X}\!\d\mu(G) \, \sum_{v\in\mathcal{V}_{G}} \psi(v) \; \mathbb{P}_{G}^{p} \big\{ \omega\in\Omega_{G} : |C_{v}^{(\omega)}| \ge E^{-1/2} \bigr\}. \end{equation} The lemma now follows from the exponential decay \eqref{CSD-decay} of the cluster-size distribution and from \eqref{rho-def}. \end{proof} \begin{lemma} \label{lower} Let $G_{0}$ be a graph of finite local complexity, with a uniformly discrete vertex set and a bounded degree sequence $d_{\mathrm{max}} := \sup_{v\in\mathcal{V}_{G_{0}}} d_{G_{0}}(v) < \infty$. Let $p\in ]0,1[$ and $E>0$. Then \begin{equation} N(E) -N(0) \ge \varrho_{\infty}\, \mathrm{e}^{-2\gamma(p)} \; \frac{\exp\{ - 4 \,\gamma(p) \, E^{-1/2}\}}{2+ 4E^{-1/2}}, \end{equation} where $\gamma(p):= - \ln p - d_{\mathrm{max}} \ln(1-p) >0$. \end{lemma} \begin{proof} We adapt the strategy of the proof of Lemma~2.9 (Neumann case) in \cite{KiMu06}. In the present setting, we have to cope with the additional difficulty that vertices in $G$ which are connected can be very far apart in the Euclidean metric. Fix an arbitrary $E>0$ and let $\{\varepsilon_{j}\}_{j\in\mathbb{N}}$ be a null sequence of positive reals, such that $E+\varepsilon_{j}$ is a point of continuity of the integrated density of states $N$ for all $j\in\mathbb{N}$.
Then the right-continuity of $N$, the Ergodic Theorem~\ref{maclimit} and the isotony of $N$ imply \begin{multline} \label{low-start} N(E) -N(0) = \lim_{j\to\infty} [ N(E+\varepsilon_{j}) -N(0) ] \\ \ge \limsup_{n\to\infty} \frac{1}{\vol(B_{n})} \; \sum_{v\in\mathcal{V}_{G}} \raisebox{.4ex}{$\chi$}_{B_{n}}(v) \;\langle\delta_{v}, [ \Theta (E- \Delta_{G^{(\omega)}}) - \Theta (\Delta_{G^{(\omega)}})] \delta_{v}\rangle \end{multline} for $\widehat{\mu}_{p}$-almost all graphs $G^{(\omega)} \in\widehat{X}$. Since $\Delta_{G^{(\omega)}}$ is a direct sum of the Laplacians of the clusters of $G^{(\omega)}$, and since this is also true for functions of the Laplacian, it follows that the trace on the right-hand side of \eqref{low-start} can be bounded from below by throwing away all contributions from branched clusters in that sum, \begin{multline} \label{throw-away} \sum_{v\in\mathcal{V}_{G}} \raisebox{.4ex}{$\chi$}_{B_{n}}(v) \;\langle\delta_{v}, [ \Theta (E- \Delta_{G^{(\omega)}}) - \Theta (\Delta_{G^{(\omega)}})] \delta_{v}\rangle \\ \ge \sum_{l=2}^{\infty} Z_{B_{n}}^{G^{(\omega)}} (\mathcal{L}_{l}) \, \bigl\langle \delta_{1}, [ \Theta (E- \Delta_{\mathcal{L}_{l}}) - \Theta (\Delta_{\mathcal{L}_{l}}) ] \delta_{1}\bigr\rangle. \end{multline} Here $\mathcal{L}_{l}$ denotes a linear chain (i.e.\ non-branched and cycle-free cluster) with $l$ vertices, $Z_{B_{n}}^{G^{(\omega)}} (\mathcal{L}_{l})$ the number of such chains in the percolation graph $G^{(\omega)}$, subject to the condition that at least one of its end-vertices lies in the ball $B_{n}$. The symbol $\delta_{1}$ denotes the canonical basis vector in $\ell^{2}(\{1,\ldots,l\})$ corresponding to one end-vertex of $\mathcal{L}_{l}$ (by symmetry reasons it does not matter which end-vertex). The spectral representation of $\Delta_{\mathcal{L}_{l}}$ with $l \ge 2$ is explicitly known, for example, by mapping the problem to that of a cycle graph with $2l$ vertices. 
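As a numerical sanity check (not part of the proof), the spectral data of $\Delta_{\mathcal{L}_{l}}$ stated next, namely the eigenvalues $4\sin^{2}(\pi k/2l)$ with cosine eigenvectors, can be verified by direct matrix-vector multiplication. The sketch below assumes $\mathcal{L}_{l}$ is realised as the path on $\{1,\ldots,l\}$; the helper names are our own.

```python
import math

def path_laplacian_apply(phi):
    """Apply the graph Laplacian of the path on {1, ..., l} to a vector phi."""
    l = len(phi)
    out = []
    for j in range(l):
        deg = (1 if j > 0 else 0) + (1 if j < l - 1 else 0)
        s = deg * phi[j]
        if j > 0:
            s -= phi[j - 1]
        if j < l - 1:
            s -= phi[j + 1]
        out.append(s)
    return out

def eigenpair(l, k):
    """Claimed eigenvalue 4 sin^2(pi k / 2l) and eigenvector of the path Laplacian."""
    E = 4.0 * math.sin(math.pi * k / (2 * l)) ** 2
    if k == 0:
        phi = [l ** -0.5] * l
    else:
        phi = [math.sqrt(2.0 / l) * math.cos(math.pi * k * (j - 0.5) / l)
               for j in range(1, l + 1)]
    return E, phi

if __name__ == "__main__":
    for l in (2, 3, 7):
        for k in range(l):                       # k = 0, ..., l-1
            E, phi = eigenpair(l, k)
            res = max(abs(a - E * b) for a, b in
                      zip(path_laplacian_apply(phi), phi))
            assert res < 1e-12                   # Delta phi = E phi
        E1, phi1 = eigenpair(l, 1)
        assert E1 <= 10.0 / l ** 2               # bound used in the proof below
        assert phi1[0] ** 2 >= 1.0 / l - 1e-12   # |<delta_1, phi_1>|^2 >= 1/l
    print("path Laplacian spectral data verified")
```

The two inequalities checked in the loop are exactly the estimates $E_{1}(\mathcal{L}_{l}) \le 10/l^{2}$ and $|\langle\delta_{1},\varphi_{1}\rangle|^{2} \ge l^{-1}$ that enter the proof.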
The eigenvalues turn out to be \begin{equation} E_{k}(\mathcal{L}_{l}) := 4 \bigl(\sin(\pi k/2l)\bigr)^{2} , \qquad\quad k=0,\ldots,l-1, \end{equation} and the components of the corresponding normalised eigenvectors $\varphi_{k}$ in the canonical basis are given by $\langle\delta_{j},\varphi_{0}\rangle := l^{-1/2}$ and \begin{equation} \langle\delta_{j}, \varphi_{k}\rangle := ( 2/l)^{1/2} \cos \bigl( \pi \tfrac{k}{l}\,(j- \tfrac{1}{2})\bigr), \qquad\quad k=1,\ldots,l-1, \end{equation} where $j=1,\ldots, l$. We observe that \begin{equation} \label{op-bound} \Theta (E- \Delta_{\mathcal{L}_{l}}) - \Theta (\Delta_{\mathcal{L}_{l}}) \ge \Theta (E- E_{1}(\mathcal{L}_{l})) \; \varphi_{1} \otimes \varphi_{1}, \end{equation} where the dyadic product is the projector on the eigenspace generated by $\varphi_{1}$, and that $E_{1}(\mathcal{L}_{l}) \le 10/l^{2}$ and $|\langle \delta_{1}, \varphi_{1}\rangle|^{2} \ge l^{-1}$. Therefore we obtain \begin{align} \label{number-chain} N(E) -N(0) & \ge \limsup_{n\to\infty} \sum_{l=2}^{\infty} \Theta (E- 10/l^{2}) \;\frac{1}{l} \; \frac{Z_{B_{n}}^{G^{(\omega)}} (\mathcal{L}_{l})}{\vol(B_{n})} \nonumber\\ & \ge \frac{1}{l(E)} \; \limsup_{n\to\infty} \frac{Z_{B_{n}}^{G^{(\omega)}} (\mathcal{L}_{l(E)})}{\vol(B_{n})} \end{align} for $\widehat{\mu}_{p}$-almost all graphs $G^{(\omega)}\in\widehat{X}$ with $l(E):= \inf\{l\in\mathbb{N}\setminus\{1\}: E -10/l^{2} \ge 0\}$. The quantity \begin{equation} g_{v}(G^{(\omega)}) := \left\{ \begin{array}{l@{\quad}l} 1, & \mbox{if the vertex $v\in\mathcal{V}_{G}$ is an end-vertex of a linear chain}\\[-.2ex] & \mbox{with $l(E)$ vertices in $G^{(\omega)}$}\\[.5ex] 0, & \mbox{otherwise} \end{array}\right.
\end{equation} helps to rewrite the right-hand side of \eqref{number-chain}, so that \begin{align} N(E) -N(0) & \ge \frac{1}{2l(E)} \limsup_{n\to\infty} \frac{1}{\vol(B_{n})}\; \sum_{v\in\mathcal{V}_{G}} \raisebox{.4ex}{$\chi$}_{B_{n}}(v) \, g_{v}(G^{(\omega)}) \nonumber\\ & = \frac{1}{2l(E)} \limsup_{n\to\infty} \frac{1}{\vol(B_{n})} \int_{B_{n}} \!\d x\, \sum_{v\in\mathcal{V}_{x+G}} \psi(v) g_{v}(x+G^{(\omega)}) \end{align} for $\widehat{\mu}_{p}$-almost all graphs $G^{(\omega)} \in \widehat{X}$. The Ergodic Theorem~\ref{item-erg}, together with \itemref{fubini}, now implies \begin{equation} N(E) -N(0) \ge \frac{1}{2l(E)} \int_{X}\!\d\mu(G) \sum_{v\in\mathcal{V}_{G}} \psi(v) \int_{\Omega_{G}} \! \d\mathbb{P}^{p}_{G}(\omega) \, g_{v}(G^{(\omega)}). \end{equation} We recall that $\raisebox{.4ex}{$\chi$}_{\mathcal{V}_{G,\infty}}(v) =1$ if $v$ belongs to an infinite component of $G$, and zero otherwise. Then we have for all $G \in X$ the crude elementary combinatorial estimate \begin{equation} \label{chain-exist} \int_{\Omega_{G}}\d\mathbb{P}^{p}_{G}(\omega)\, g_{v}(G^{(\omega)}) \ge 2 p^{l(E)} (1-p)^{l(E)d_{\mathrm{max}}} \,\raisebox{.4ex}{$\chi$}_{\mathcal{V}_{G,\infty}}(v) \end{equation} for the probability that a given vertex appears as an end-vertex of a linear chain with $l(E)$ vertices. Here we have also used Lemma~\ref{dmax}. The estimate \eqref{chain-exist} yields \begin{equation} N(E) -N(0) \ge \frac{\mathrm{e}^{-l(E) \gamma(p)}}{l(E)} \int_{X}\!\d\mu(G) \sum_{v\in\mathcal{V}_{G}} \psi(v) \, \raisebox{.4ex}{$\chi$}_{\mathcal{V}_{G,\infty}}(v) = \varrho_{\infty} \; \frac{\mathrm{e}^{-l(E) \gamma(p)}}{l(E)} . \end{equation} Making use of $l(E) < 2 + 4 E^{-1/2}$, we obtain the assertion of the lemma. \end{proof} \section{Proof of Theorem~\ref{spec-ids}} \label{sec:spectrum} Lemmas~\ref{N-sub-spec} and~\ref{spec-sub-N} in this section provide the proof of Theorem~\ref{spec-ids}. We begin with a standard result on the non-randomness of the spectrum.
\begin{lemma} Let $\widehat{H}$ be a $\widehat{\mu}$-ergodic, self-adjoint operator. Then there exists a Borel set $\Sigma \subseteq \mathbb{R}$, such that $\spec (H_{G^{(\omega)}}) =\Sigma$ for $\widehat{\mu}$-almost all $G^{(\omega)} \in\widehat{X}_{\cG}$. \end{lemma} \begin{remarks} \item \label{proj-det} The proof of the lemma is classical in the theory of random Schr\"odinger operators, see e.g.\ \cite{CaLa90,PaFi92}. The central point in the argument is that for every Borel set $I\subseteq\mathbb{R}$ the measurable function \begin{equation} \label{tdef} \widehat{X}_{\cG} \ni G^{(\omega)} \mapsto t_{I}(G^{(\omega)}) := \tr \raisebox{.4ex}{$\chi$}_{I}(H_{G^{(\omega)}}) \end{equation} is invariant under translations of $G^{(\omega)}$ by arbitrary $a\in\mathbb{R}^{d}$, and hence $\widehat{\mu}$-almost surely constant by ergodicity. \item Standard arguments extend the non-randomness also to the Lebesgue components of the spectrum \cite{CaLa90,PaFi92}. These arguments are taken up in \cite{LePe07} in the context of ergodic groupoids. \end{remarks} \begin{lemma} \label{lemma-equiv} Let $\widehat{H}$ be a $\widehat{\mu}$-ergodic, self-adjoint operator of finite range, and let $N$ be its integrated density of states. For a given open interval $I:= ]\lambda ,\lambda'[$, where $\lambda,\lambda' \in\mathbb{R}$ with $\lambda <\lambda'$, consider the following statements. \begin{enumerate} \item[(i)] ~~$\displaystyle \int_{I} \d N > 0$. \item[(ii)] there exists a Borel set $\overline{X} \subseteq X_{\mathcal{G}}$ with $\mu(\overline{X}) = 1$ \textup{(}$\overline{X} = X_{\mathcal{G}}$, if $X_{\mathcal{G}}$ is uniquely ergodic\textup{)} such that $t_{I}(G^{(\omega)}) = \infty$ for all $G\in \overline{X}$ and for $\mathbb{P}_{G}$-almost all $\omega\in\Omega_{G}$. 
\item[(iii)] there exists a Borel set $\overline{X} \subseteq X_{\mathcal{G}}$ with $\mu(\overline{X}) = 1$ \textup{(}$\overline{X} = X_{\mathcal{G}}$, if $X_{\mathcal{G}}$ is uniquely ergodic\textup{)} such that $t_{I}(G^{(\omega)}) > 0$ for all $G\in \overline{X}$ and for $\mathbb{P}_{G}$-almost all $\omega\in\Omega_{G}$. \item[(iv)] there exists a Borel set $\underline{X} \subseteq X_{\mathcal{G}}$ with $\mu(\underline{X}) >0$ such that $t_{I}(G^{(\omega)}) > 0$ for all $G\in \underline{X}$ and all $\omega$ in some subset of $\Omega_{G}$ that has a positive $\mathbb{P}_{G}$-measure. \item[(v)] there exists $G\in X_{\mathcal{G}}$ and $\omega\in\Omega_{G}$ such that $t_{I}(G^{(\omega)}) > 0$. \end{enumerate} Then the implications \begin{equation} \mathrm{(i)} \; \Longleftrightarrow \; \mathrm{(ii)} \; \Longleftrightarrow \; \mathrm{(iii)} \; \Longleftrightarrow \; \mathrm{(iv)} \; \Longrightarrow \; \mathrm{(v)} \end{equation} hold. Moreover, if $X_{\mathcal{G}}$ is uniquely ergodic and obeys the positive lower frequency condition, then \begin{equation} \mathrm{(v)} \; \Longrightarrow \; \mathrm{(i)} \end{equation} holds, too.
\end{lemma} \begin{proof} (i) $\Rightarrow$ (ii): \quad Using the Ergodic Theorem~\ref{maclimit}, we deduce the existence of a Borel set $\overline{X} \subseteq X_{\mathcal{G}}$ with $\mu(\overline{X}) =1$ (and $\overline{X} = X_{\mathcal{G}}$, if $X_{\mathcal{G}}$ is uniquely ergodic) such that \begin{align} 0 < \int_{I}\!\d N &= \int_{X_{\mathcal{G}}}\!\d\mu(\widetilde{G}) \int_{\Omega_{\widetilde{G}}} \!\d\mathbb{P}_{\widetilde{G}}(\widetilde{\omega}) \; \sum_{v\in\mathcal{V}_{\widetilde{G}}} \psi(v) \, \langle\delta_{v}, \raisebox{.4ex}{$\chi$}_{I}(H_{\widetilde{G}^{(\widetilde{\omega})}}) \delta_{v} \rangle \nonumber\\ &= \lim_{n\to\infty} \frac{1}{\vol(B_{n})} \; \sum_{v\in \mathcal{V}_{G} \cap B_{n}} \langle\delta_{v}, \raisebox{.4ex}{$\chi$}_{I}(H_{G^{(\omega)}}) \delta_{v} \rangle \end{align} for all $G\in \overline{X}$ and for $\mathbb{P}_{G}$-almost all $\omega\in\Omega_{G}$. Hence, \begin{equation} t_{I}(G^{(\omega)}) = \lim_{n\to\infty} \sum_{v\in \mathcal{V}_{G} \cap B_{n}} \langle\delta_{v}, \raisebox{.4ex}{$\chi$}_{I}(H_{G^{(\omega)}}) \delta_{v} \rangle = \infty \end{equation} for those $G^{(\omega)}$. (ii) $\Rightarrow$ (iii) $\Rightarrow$ (iv) $\Rightarrow$ (v): \quad These implications are obvious. (iv) $\Rightarrow$ (i): \quad Assume (i) is wrong. Then we have \begin{equation} \label{zerostart} 0 = \int_{I}\!\d N = \int_{X_{\mathcal{G}}}\!\d\mu(G) \int_{\Omega_{G}}\!\d\mathbb{P}_{G}(\omega) \sum_{v\in\mathcal{V}_{G}} \psi(v) \, \langle\delta_{v}, \raisebox{.4ex}{$\chi$}_{I}(H_{G^{(\omega)}}) \delta_{v}\rangle. \end{equation} Note that, by translation invariance of the measure $\mu$, we can replace the mollifier $\psi$ in \eqref{zerostart} by $\psi_{a}:= \psi(\cdot -a)$, for any translation vector $a\in\mathbb{R}^{d}$. 
Consequently, there exists a Borel set $\overline{X}_{a} \subseteq X_{\mathcal{G}}$ with $\mu(\overline{X}_{a}) = 1$, such that for all $G\in\overline{X}_{a}$ there is $\overline{\Omega}_{G,a} \subseteq \Omega_{G}$ measurable with $\mathbb{P}_{G}(\overline{\Omega}_{G,a}) = 1$, such that for all $\omega\in \overline{\Omega}_{G,a}$ we have $ \langle\delta_{v}, \raisebox{.4ex}{$\chi$}_{I} (H_{G^{(\omega)}}) \delta_{v}\rangle =0$, for all $v\in\mathcal{V}_{G} \cap \supp(\psi_{a})$. We can choose a countable set $M$ of translation vectors, such that for every $v\in\mathcal{V}_{G}$ there exists $a\in M$ with $v\in \supp\psi_{a}$. Next, we define $\overline{X} := \bigcap_{a\in M} \overline{X}_{a}$ and $\overline{\Omega}_{G} := \bigcap_{a\in M} \overline{\Omega}_{G,a}$ for all $G\in \overline{X}$ so that $\mu(\overline{X}) = 1$ and $\mathbb{P}_{G}(\overline{\Omega}_{G}) = 1$ for all $G\in\overline{X}$. Now, we get $\langle\delta_{v}, \raisebox{.4ex}{$\chi$}_{I} (H_{G^{(\omega)}}) \delta_{v}\rangle =0$ for all $G\in\overline{X}$, all $\omega\in \overline{\Omega}_{G}$ and all $v\in\mathcal{V}_{G}$. In other words, $\raisebox{.4ex}{$\chi$}_{I}(H_{G^{(\omega)}}) = 0$ for all $G\in\overline{X}$ and all $\omega\in \overline{\Omega}_{G}$. This contradicts (iv) so that the implication is proven. (v)~$\Rightarrow$~(i): \quad By assumption, $I$ contains a spectral value of $H_{G^{(\omega)}}$. Therefore there exists $E\in I$, $\delta \in ]0, \varepsilon[$, where $\varepsilon := \dist(E, \mathbb{R} \setminus I)$, and $\varphi\in\ell^{2}(\mathcal{V}_{G})$, $\varphi \neq 0$, such that \begin{equation} \label{spec-cond} \| (H_{G^{(\omega)}} -E) \varphi\| < \delta \|\varphi\|. \end{equation} Since $H_{G^{(\omega)}}$ is bounded, we can even assume that $\varphi$ has compact support. Furthermore, since $\widehat{H}$ is of finite range $R$, we choose a compact subset $K_{0}$ of $\mathbb{R}^{d}$ such that $\dist\bigl(\supp(\varphi), \mathbb{R}^{d} \setminus K_{0}\bigr) > 2R$. 
We write $P_{0} := G \wedge K_{0}$ for the corresponding pattern of $G$. From the positive lower frequency condition we infer that the copies $P_{j} := a_{j} +P_{0}$, $j\in\mathbb{N}$, $a_{j} \in\mathbb{R}^{d}$, of this pattern in $G$ occur with a positive lower frequency. For any given $P_{j}$ there is a maximum number (which is uniform in $j$) of other copies $P_{j'}$ with which $P_{j}$ can overlap. Therefore, from now on, we will pass to a subsequence of the sequence $\{P_{j}\}_{j\in\mathbb{N}_{0}}$ such that none of the patterns in the subsequence overlap, and still \begin{equation} \label{posfreqcond} \liminf_{n\to\infty} \frac{\widetilde{\nu}(P_{0}|G\wedge B_{n})}{\vol(B_{n})} =:\gamma >0. \end{equation} Here the symbol $\widetilde{\nu}$ is used instead of $\nu$ to indicate that it is only the patterns in the subsequence which are counted in $G\wedge B_{n}$. We denote the subsequence again by $\{P_{j}\}_{j\in\mathbb{N}_{0}}$ and introduce the translated functions $\varphi_{j} := \varphi( \cdot -a_{j})$ for $j\in\mathbb{N}$. They form an orthogonal sequence, because $\supp(\varphi_{j}) \subset K_{j} := a_{j} + K_{0}$, and the $K_{j}$'s are pairwise disjoint. Next we have to ensure that the colouring of $G^{(\omega)}$ in $K_{0}$ is also repeated in sufficiently many of the $K_{j}$. The events \begin{equation} A_{j} := \bigl\{ \overline{\omega} \in \Omega_{G} : \overline{\omega}_{a_{j} +v} = \omega_{v} \text{~for all~} v\in \mathcal{V}_{G} \cap K_{0}\bigr\}, \end{equation} $j \in\mathbb{N}_{0}$, where $a_{0} :=0$, are all independent and $\mathbb{P}_{G}(A_{j}) = \mathbb{P}_{G}(A_{0}) >0$ for all $j\in\mathbb{N}$. Thus, the strong law of large numbers gives \begin{equation} \label{slln} \lim_{n\to\infty} \, \frac{1}{\widetilde{\nu}(P_{0}|G\wedge B_{n})} \sum_{j\in\mathbb{N}_{0} : K_{j} \subset B_{n}} \raisebox{.4ex}{$\chi$}_{A_{j}}(\overline{\omega}) = \mathbb{P}_{G}(A_{0}) >0 \end{equation} for $\mathbb{P}_{G}$-almost all $\overline{\omega} \in\Omega_{G}$.
We conclude from the Ergodic Theorem~\ref{maclimit} for uniquely ergodic systems that \begin{equation} \label{erg-start} \int_{I}\!\d N = \lim_{n\to\infty} \, \frac{1}{\vol(B_{n})} \sum_{v\in \mathcal{V}_{G} \cap B_{n}} \langle\delta_{v}, \raisebox{.4ex}{$\chi$}_{I}(H_{G^{(\overline{\omega})}}) \delta_{v} \rangle, \end{equation} for the given $G\in X_{\mathcal{G}}$ (for which (v) holds), and for all $\overline{\omega}$ in some measurable set $\overline{\Omega}_{G} \subset \Omega_{G}$ of full $\mathbb{P}_{G}$-measure. We choose $\overline{\Omega}_{G}$ such that \eqref{slln} holds for all $\overline{\omega}\in\overline{\Omega}_{G}$, too. Then, we rewrite \begin{align} \sum_{v\in \mathcal{V}_{G} \cap B_{n}} \langle\delta_{v}, \raisebox{.4ex}{$\chi$}_{I}(H_{G^{(\overline{\omega})}}) \delta_{v} \rangle & = \tr_{\ell^{2}(\mathcal{V}_{G})} \Big\{ \raisebox{.4ex}{$\chi$}_{B_{n}} \raisebox{.4ex}{$\chi$}_{I}(H_{G^{(\overline{\omega})}}) \raisebox{.4ex}{$\chi$}_{B_{n}} \Big\} \nonumber\\ & \ge \sum_{j\in\mathbb{N}_{0} : K_{j} \subset B_{n}} \frac{1}{\|\varphi_{j}\|^{2}} \, \langle\varphi_{j}, \raisebox{.4ex}{$\chi$}_{I}(H_{G^{(\overline{\omega})}}) \varphi_{j}\rangle \nonumber\\ & \ge \sum_{j\in\mathbb{N}_{0} : K_{j} \subset B_{n}} \frac{\raisebox{.4ex}{$\chi$}_{A_{j}}(\overline{\omega})}{\|\varphi_{j}\|^{2}} \, \langle\varphi_{j}, \raisebox{.4ex}{$\chi$}_{I}(H_{G^{(\overline{\omega})}}) \varphi_{j}\rangle \end{align} and observe \begin{align} \langle\varphi_{j}, \raisebox{.4ex}{$\chi$}_{I}(H_{G^{(\overline{\omega})}}) \varphi_{j}\rangle & = \|\varphi_{j}\|^{2} - \| \raisebox{.4ex}{$\chi$}_{\mathbb{R}\setminus I}(H_{G^{(\overline{\omega})}}) \varphi_{j} \|^{2} \nonumber\\ &\ge \|\varphi_{j}\|^{2} - \varepsilon^{-2} \int_{\mathbb{R}\setminus I} \!
\d\zeta_{G^{(\overline{\omega})},j}(E')\; (E'-E)^{2} \nonumber\\ &\ge \|\varphi_{j}\|^{2} - \varepsilon^{-2} \| (H_{G^{(\overline{\omega})}} -E)\varphi_{j}\|^{2}, \end{align} where $\zeta_{G^{(\overline{\omega})},j} := \langle\varphi_{j}, \raisebox{.4ex}{$\chi$}_{\bullet}(H_{G^{(\overline{\omega})}}) \varphi_{j}\rangle$ is the spectral measure of $H_{G^{(\overline{\omega})}}$ associated with the vector $\varphi_{j}$. For $\overline\omega \in A_{j}$, we have $-a_{j} + (G^{(\overline{\omega})} \wedge K_{j}) = G^{(\omega)} \wedge K_{0}$. Thus, covariance of $\widehat{H}$ and \eqref{spec-cond} imply \begin{equation} \label{covariance} \| (H_{G^{(\overline{\omega})}} -E)\varphi_{j}\| = \| (H_{G^{(\omega)}} -E)\varphi\| < \delta \|\varphi\| = \delta\|\varphi_{j}\| . \end{equation} Combining \eqref{erg-start} -- \eqref{covariance}, we get \begin{align} \int_{I}\!\d N & \ge (1-\delta^{2}/\varepsilon^{2}) \liminf_{n\to\infty} \biggl\{ \frac{\widetilde{\nu}(P_{0}|G\wedge B_{n})}{\vol(B_{n})} \sum_{j\in\mathbb{N}_{0} : K_{j} \subset B_{n}} \frac{\raisebox{.4ex}{$\chi$}_{A_{j}}(\overline{\omega})}{\widetilde{\nu}(P_{0}|G\wedge B_{n}) } \biggr\}\nonumber\\ & \ge (1-\delta^{2}/\varepsilon^{2}) \, \gamma \, \mathbb{P}_{G} (A_{0}) >0. \end{align} The last inequality uses the positive lower frequency condition \eqref{posfreqcond} and the strong law of large numbers \eqref{slln}. This completes the proof. \end{proof} The next two lemmas provide the proof of Theorem~\ref{spec-ids}. They are consequences of the previous lemma. \begin{lemma} \label{N-sub-spec} Let $\widehat{H}$ be a $\widehat{\mu}$-ergodic, self-adjoint operator of finite range and let $N$ be its integrated density of states. Then there exists a Borel set $\overline{X} \subseteq X_{\mathcal{G}}$ with $\mu(\overline{X}) =1$ such that \begin{equation} \supp(\d N) \subseteq \specess(H_{G^{(\omega)}}) \end{equation} for all $G\in\overline{X}$ and $\mathbb{P}_{G}$-almost all $\omega\in\Omega_{G}$.
If $X_{\mathcal{G}}$ is even uniquely ergodic, then the statement holds with $\overline{X} = X_{\mathcal{G}}$. \end{lemma} \begin{proof} By taking countable intersections of sets of full measure, we infer from the implication (i)~$\Rightarrow$~(ii) in Lemma~\ref{lemma-equiv}, that there exists a Borel set $\overline{X} \subseteq X_{\mathcal{G}}$ with $\mu(\overline{X}) =1$ ($\overline{X} = X_{\mathcal{G}}$, if $X_{\mathcal{G}}$ is uniquely ergodic), such that for all $G \in \overline{X}$ there exists $\overline{\Omega}_{G} \subset \Omega_{G}$ measurable with $\mathbb{P}_{G}(\overline{\Omega}_{G}) =1$, such that for all $\omega\in\overline{\Omega}_{G}$ and all $\lambda,\lambda' \in\mathbb{Q}$ with $\lambda <\lambda'$ we have \begin{equation} \int_{]\lambda,\lambda'[}\!\d N > 0 \quad\Longrightarrow\quad t_{]\lambda,\lambda'[} (G^{(\omega)}) =\infty. \end{equation} Thus, we conclude from the characterisation \eqref{top-supp-def} that \begin{equation} \supp (\d N) \subseteq \big\{ E \in\mathbb{R}: t_{]\lambda,\lambda'[}(G^{(\omega)}) =\infty \text{~for all~} \lambda,\lambda'\in\mathbb{Q} \text{~with~} \lambda <E<\lambda'\big\} \end{equation} for all $G\in\overline{X}$ and all $\omega\in\overline{\Omega}_{G}$. But this is the claim. \end{proof} \begin{lemma} \label{spec-sub-N} Let $\widehat{H}$ be a $\widehat{\mu}$-ergodic, self-adjoint operator of finite range, and let $N$ be its integrated density of states. Then there exists a Borel set $\overline{X} \subseteq X_{\mathcal{G}}$ with $\mu(\overline{X}) =1$, such that \begin{equation} \label{eq:specN} \spec(H_{G^{(\omega)}}) \subseteq \supp(\d N) \end{equation} for all $G\in\overline{X}$ and $\mathbb{P}_{G}$-almost all $\omega\in\Omega_{G}$. If $X_{\mathcal{G}}$ is even uniquely ergodic and if the positive lower frequency condition holds, then the statement holds with $\overline{X} = X_{\mathcal{G}}$ and for all $\omega\in\Omega_{G}$. 
\end{lemma} \begin{proof} Recall that \begin{equation} \label{spec-rep} \spec(H_{G^{(\omega)}}) = \big\{ E \in\mathbb{R}: t_{]\lambda,\lambda'[}(G^{(\omega)}) >0 \text{~for all~} \lambda,\lambda'\in\mathbb{Q} \text{~with~} \lambda <E<\lambda'\big\}. \end{equation} For uniquely ergodic systems that obey the positive lower frequency condition, we deduce the desired inclusion \eqref{eq:specN} -- valid for all $G\in X_{\mathcal{G}}$ and all $\omega\in\Omega_{G}$ -- directly from the implication (v)~$\Rightarrow$~(i) in Lemma~\ref{lemma-equiv}. In the general case, we argue that there exists a Borel set $\widehat{A} \subseteq \widehat{X}_{\cG}$ with $\widehat{\mu}(\widehat{A}) =1$, such that for all $\lambda,\lambda' \in\mathbb{Q}$ with $\lambda <\lambda'$ the implication \begin{equation*} t_{]\lambda,\lambda'[}(G^{(\omega)}) > 0 \text{~~for some~} G^{(\omega)} \in \widehat{A} \quad \Longrightarrow \quad t_{]\lambda,\lambda'[}(G^{(\omega)}) > 0 \text{~for all~} G^{(\omega)} \in \widehat{A} \end{equation*} holds, confer Remark~\ref{proj-det}. Hence, for $G^{(\omega)} \in\widehat{A}$, we deduce the assertion from \eqref{spec-rep} and the implication (ii)~$\Rightarrow$~(i) in Lemma~\ref{lemma-equiv}. \end{proof} \section{Percolation estimates} \label{sec:percest} In this final section, we establish the percolation estimates that guarantee exponential decay of the cluster-size distribution on rather general graphs. Corollary~\ref{together} provides a rough criterion for this. It is used in the proof of Theorem~\ref{main} to ensure the applicability of Lemma~\ref{upper}. For this reason, the results of this section must be valid without any assumptions on the automorphism group of the graph. So far, this has prevented us from extending the results of this section to higher values of the percolation probability up to the critical value. This is a challenging open problem, see also the discussion in \cite{Hof98}. 
Assuming quasi-transitivity, stronger results were obtained recently in \cite{AnVe07b}. \smallskip First we give a simple, but crude lower bound for the critical probability of Bernoulli bond percolation on an infinite connected graph $G=(\mathcal{V},\mathcal{E})$ with bounded degree sequence. Let $\theta_{G,v}(p) := \mathbb P_G^p(|C_v|=\infty)$ denote the probability that the open cluster containing $v\in\mathcal{V}$ is infinite. The critical probability $p_c(G) := \sup\{p \in [0,1]:\theta_{G,v}(p)=0\}$ is independent of $v\in\mathcal{V}$, as follows from the FKG inequality, see \cite[Thm.~2.8]{Gri99} or \cite{Hof98} (recall that we consider only graphs with countable vertex sets). By standard reasoning \cite[Thm.~1.10]{Gri99}, we have the following elementary lower estimate for $p_c(G)$. \begin{lemma} \label{combi} Let $G=(\mathcal{V},\mathcal{E})$ be an infinite connected graph. If $G$ has maximal vertex degree $d_\mathrm{max} := \sup_{v\in\mathcal{V}} d_{G}(v) \in \mathbb{N} \setminus\{1\}$, then % \begin{equation} p_c(G)\ge\frac{1}{ d_\mathrm{max}-1}. \end{equation} % \end{lemma} \begin{proof} Let $\sigma_{v}(n)$ denote the number of $n$-step self-avoiding walks on $G$, starting from $v\in\mathcal{V}$. Since a self-avoiding walk must not return to its previous position when performing a single step, we obtain $\sigma_{v}(n)\le d_\mathrm{max} ( d_\mathrm{max}-1)^{n-1}$. Let $W_{v}^{(\omega)}(n)$ denote the number of such walks in the percolation subgraph $G^{(\omega)}$ of $G$. Since every such walk is open with probability $p^n$ in any percolation subgraph, we get for its expectation $\int_{\Omega_{G}}\mathbb{P}_{G}^{p}(\d\omega)\, W_{v}^{(\omega)}(n) = p^n \sigma_{v}(n)$. Note that, if the vertex $v$ belongs to an infinite cluster, there are open self-avoiding walks of arbitrary length emanating from $v$.
Thus, we have for all $n\in\mathbb N$ the estimate % \begin{multline} \theta_{G,v}(p)\le \mathbb P_G^p\bigl\{ \omega\in\Omega_{G}: W_{v}^{(\omega)}(n)\ge 1 \bigr\} \le \int_{\Omega_{G}}\!\mathbb{P}_{G}^{p}(\d\omega)\, W_{v}^{(\omega)}(n) \\ = p^n\sigma_{v}(n) \le\frac{ d_\mathrm{max}}{ d_\mathrm{max}-1} \;[p(d_\mathrm{max}-1)]^n. \end{multline} % This implies $\theta_{G,v}(p)=0$, if $p( d_\mathrm{max}-1)<1$, and we obtain the assertion of the lemma. \end{proof} The behaviour in the subcritical phase $p<p_c(G)$ can be inferred from asymptotic properties of the event \begin{equation} A_{v}(n) := \bigl\{\omega \in\Omega_{G}: v \stackrel{G^{(\omega)}}{\longleftrightarrow} B_{n}(v)^c \bigr\} \end{equation} that there exists a path from $v\in \mathcal V$ to the complement of the ball of radius $n$ around $v$. For fixed $p$, denote by $g_{G,v}^{p}(n) := \mathbb P_{G}^{p}\bigl( A_{v}(n)\bigr)$ the probability of the event $A_{v}(n)$. The following lemma states that $g_{G,v}^{p}(n)$ decays exponentially in $n$, if the percolation probability $p$ is small enough. Its proof is analogous to that of the previous lemma. \begin{lemma} \label{expsmallp} Let a graph $G=(\mathcal{V},\mathcal{E})$ be given. Assume that $G$ has maximal vertex degree $d_\mathrm{max} \in \mathbb{N} \setminus\{1\}$ and maximal edge length $l_\mathrm{max}:=\sup\{|u-v|:\{u,v\}\in \mathcal{E}\}<\infty$. Then, for every $p\in ]0,1]$ there exists a real number $\psi(p)$, such that the probability $g_{G,v}^{p}(n)$ of the event $A_{v}(n)$ satisfies % \begin{equation} g_{G,v}^{p}(n)\le 2\, \mathrm{e}^{-n\psi(p)} \end{equation} % for all $n\in\mathbb N$, uniformly in $v\in\mathcal{V}$. A possible choice of $\psi(p)$ is, for $0<p\le1$, % \begin{equation} \psi(p)=\frac{1}{ l_\mathrm{max}} \;\ln\biggl(\frac{1}{p (d_\mathrm{max}-1)}\biggr). \end{equation} % In particular, $g_{G,v}^{p}(n)$ decays exponentially if $p<1/(d_\mathrm{max}-1)$. 
\end{lemma} \begin{proof} A path with initial vertex $v$, which enters the complement of $B_n(v)$, contains at least $\widetilde{n}:= \lceil n/ l_\mathrm{max}\rceil$ bonds, where $\lceil x\rceil$ is the smallest integer $\ge x$. With the notation in the proof of Lemma~\ref{combi}, the assertion now follows by noting that % \begin{equation} g_{G,v}^{p}(n)\le \mathbb{P}_G^p \bigl\{ \omega\in\Omega_{G}: W_{v}^{(\omega)}(\widetilde{n}) \ge 1 \bigr\} \le \frac{d_{\mathrm{max}}}{d_\mathrm{max}-1} \;[p(d_{\mathrm{max}} -1)]^{\widetilde{n}}. \end{equation} \end{proof} For a graph $G=(\mathcal{V},\mathcal{E})$ with uniformly discrete vertex set, exponential decay of $g_{G,v}^{p}(n)$ implies that the mean cluster size $\raisebox{.4ex}{$\chi$}_{G,v}(p) := \mathbb{E}_{G}^{p}\{|C_{v}|\}$ is finite, as follows from the argument in \cite[p.~89]{Gri99}. In fact, this argument yields an estimate which is uniform in $v\in\mathcal V$. We state the result as \begin{lemma} \label{finitechi} Let $G=(\mathcal{V},\mathcal{E})$ be a graph with uniformly discrete vertex set of radius $r>0$. Assume that $G$ has maximal vertex degree $d_\mathrm{max} \in \mathbb{N} \setminus\{1\}$ and maximal edge length $l_\mathrm{max}<\infty$. Then, for every $p \in [0, \frac{1}{ d_\mathrm{max}-1}[$, there exists a constant $\chi(p) \in ]0,\infty[$, which, apart from $p$, depends only on $r$, $d_{\mathrm{max}}$ and $l_{\mathrm{max}}$, such that the mean cluster size $\raisebox{.4ex}{$\chi$}_{G,v}(p) = \mathbb{E}_{G}^{p}\{|C_{v}|\}$ satisfies % \begin{equation} \raisebox{.4ex}{$\chi$}_{G,v}(p)\le\chi(p)<\infty, \end{equation} % uniformly in $v\in\mathcal{V}$. \end{lemma} Lemma~\ref{expsmallp} and Lemma~\ref{finitechi} can be used to prove exponential decay of the cluster size distribution. \begin{theorem} \label{theo:exp} Let $G=(\mathcal{V},\mathcal{E})$ be a graph with uniformly discrete vertex set of radius $r>0$.
Assume that $G$ has maximal vertex degree $d_\mathrm{max} \in \mathbb{N} \setminus\{1\}$ and maximal edge length $l_\mathrm{max}<\infty$. Then, for every $p \in [0, \frac{1}{ d_\mathrm{max}-1}[$, there exists a constant $\lambda(p) \in ]0,\infty[$, which, apart from $p$, depends only on $r$, $d_{\mathrm{max}}$ and $l_{\mathrm{max}}$, such that % \begin{equation} {\mathbb P}_{G}^{p}\bigl(|C_v|\ge n \bigr)\le 2\, \mathrm{e}^{-n\lambda(p)} \end{equation} % for all $n\in\mathbb N$, uniformly in $v\in\mathcal V$. \end{theorem} \begin{proof}[Sketch of the proof] Proceed along the lines of \cite[Thm.~6.75]{Gri99}. In the estimates, replace $\raisebox{.4ex}{$\chi$}_{G,v}(p)$ by its uniform bound $\raisebox{.4ex}{$\chi$}(p)$. The decay rate thus obtained is $\lambda(p) = [2\raisebox{.4ex}{$\chi$}(p)^{2}]^{-1}$. \end{proof} The conclusion in the above theorem still holds if, instead of a single graph $G$, we consider a set of graphs $\mathcal G$ with the above properties and the associated dynamical system $X_\mathcal{G}$. This setup is used in Section~\ref{secLif}. \begin{cor} \label{together} Let $\mathcal G$ be a set of graphs, whose vertex sets are uniformly discrete of radius $r>0$, which have maximal vertex degree $d_{\mathrm{max}}\in\mathbb{N}\setminus \{1\}$ and maximal edge length $l_\mathrm{max}<\infty$. Then, for every $p \in [0, \frac{1}{ d_\mathrm{max}-1}[$ there exists a constant $\lambda(p) \in ]0,\infty[$ such that % \begin{equation} {\mathbb P}_{G}^{p}\bigl(|C_v|\ge n \bigr)\le 2\, \mathrm{e}^{-n\lambda(p)} \end{equation} % for all $n\in\mathbb{N}$, uniformly in $G \in X_{\mathcal{G}}$ and in $v\in\mathcal{V}_{G}$. \end{cor} \section*{Acknowledgements} We are grateful to Michael Baake and Daniel Lenz for stimulating discussions. We also thank Daniel Lenz and Ivan Veseli\'c for sending us a version of their manuscript \cite{LeVe07} before making it public. This work was supported by the German Research Council (DFG), within the CRC 701.
\section{Introduction} Current supermassive black hole (SMBH) formation and SMBH/galaxy co-evolution models predict an early dust-enshrouded phase associated with rapid SMBH growth triggered by multiple galaxy encounters \citep{silk98,dimatteo05,hopkins08}. Tidal interactions favor both violent star formation as well as funneling of large amount of gas into the nuclear region to feed (and obscure) the accreting SMBH \citep[e.g.,][]{urrutia08, sha10}. The importance of mergers increases with redshift \citep{conselice03, lin08} and their fundamental role at the peak epoch of luminous AGN (i.e. quasar) and intensive star-formation activity at 1.5\lower.5ex\hbox{\ltsima}\ $z$ \lower.5ex\hbox{\ltsima}\ 3 is widely accepted. Over the last few years, an increasing number of interacting and disturbed molecular gas-rich galaxy systems showing both coeval powerful starburst (SB) and quasar activity at high $z$ have indeed been unveiled \citep[e.g.,][]{carilli02,dasyra08}. The counterparts in the local Universe to such luminous, high-$z$ mergers are the (ultra)-luminous infrared ($L_{\rm IR}$ $>$ 10$^{11}$ $L_{\odot}$) galaxies, i.e. (U)LIRGs \citep{sanders96}. In particular, these powerful objects should provide the opportunity for probing an inevitable outcome of the hierarchical merging process, i.e. the existence of dust-enshrouded double/multiple SMBHs within the envelope of the host galaxy merger \citep{colpi08}. Despite being widely pursued, direct observational evidence for AGN pairs in ULIRGs (as well as in all of the other types of galaxies) has been very limited so far. In the last few years X-ray observations with arcsec angular resolution have provided one of the most efficient tools to disclose such systems. \citet{komossa03} discovered the first and unambiguous example of an active SMBH pair separated by $d$ $\sim$ 1.4 kpc in the center of the ULIRG NGC 6240. 
Additional examples of dual AGNs with a close separation were unveiled by \citet{bianchi08} and \citet{ballo04} in the ULIRGs Mrk 463 ($d$ $\sim$ 3.8 kpc) and Arp 299 ($d$ $\sim$ 4.6 kpc), respectively, on the basis of {\em Chandra}\ data. \citet{guainazzi05a} have reported the discovery of an X-ray bright AGN pair in ESO590-IG066, an early-phase ($d$ $\sim$ 10.5 kpc) merging system at $z$ = 0.03. It is worth noting that in all these cases hard ($>$2 keV) X-ray data have been crucial to detect activity from both SMBHs of the galaxy pair. Indeed, at least one pair member lacks optical/IR signatures of AGN activity, which suggests that we are observing a non-standard AGN phase. Furthermore, a handful of kpc-scale dual AGN candidates have been recently uncovered in galaxy mergers by the detection of spatially-resolved, double-peaked emission line profiles with velocity offsets of a few hundred km s$^{-1}$ \citep{civano10,liu10}, thus doubling the total number of {\it bona fide} active SMBH pairs collected so far and, in turn, opening interesting perspectives for further advances in this field of research. Here, we present the discovery of an AGN pair consisting of a Compton-thick (CT) Type 2 quasar and a heavily obscured Seyfert 2-like source in the interacting galaxy IRAS 20210$+$1121 (see Sect. 2). This discovery is based on the imaging and spectral analysis of {\em XMM--Newton}\ data described in Sect. 3. We discuss our results and conclude in Sect. 4. A cosmology with $H_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_\Lambda$ = 0.73 and $\Omega_M$= 0.27 is assumed throughout \citep{spergel07}. \section{IRAS 20210+1121} IRAS 20210$+$1121 is an interacting system of two galaxies separated by 12.2 arcsec \citep{arribas04, davies02}. The larger component of the system, i.e. the southern galaxy ({I20210S}\ hereafter), shows a noticeable spiral arm structure, while the northern object (I20210N hereafter) is more spheroidal in shape.
Furthermore, a bridge of emission connecting both galaxies is also visible in the optical band. However, narrow-band H$\alpha$ imaging shows that {I20210S}\ has a centrally concentrated, featureless morphology, while {I20210N}\ is barely visible \citep{heisler95}. {I20210S}\ is a LIRG ($L_{IR}$ = 7.8 $\times$ 10$^{11}$ $L_\odot$) with a Seyfert 2 nucleus at $z$ = 0.056, and a [OIII] luminosity {$L_{\rm [OIII]}$}\ = 2.04 $\times$ 10$^{43}$ {erg s$^{-1}$}\ \citep[e.g.,][]{perez90,shu07}. {I20210N}\ has a very faint emission line spectrum, but sufficient to derive the same $z$ as the companion. An important feature of the optical spectrum of {I20210N}\ is an intensity ratio [NII]$\lambda$6584/H$\alpha$ $\approx$3. Such a value is typical of both LINERs and Seyfert galaxies, and no firm conclusion on the presence of an AGN in this source can be drawn on the basis of these low signal-to-noise ratio data. Furthermore, a near-IR spectroscopic study of {I20210N}\ found a featureless near-IR continuum \citep{burston01}. \begin{figure} \begin{center} \includegraphics[width=8.cm,height=7.0cm,angle=0]{fig1.eps} \caption{{\em XMM--Newton}\ {\em EPIC}\ PN Gaussian-smoothed ($\sigma$ = 2 arcsec) image of the interacting galaxy system IRAS 20210$+$1121 in the 0.5-2 keV ({\it left panel}) and 5-10 keV ({\it right panel}) energy range. } \label{fig:12} \end{center} \end{figure} {I20210S}\ was detected by {\em BeppoSAX}\ at a 2--10 keV flux level of {$F_{\rm 2-10}$}\ $\approx$ 2.9 $\times$ 10$^{-13}$ {erg cm$^{-2}$ s$^{-1}$}~and only $\sim$160 counts were collected in the 0.1--10 keV band (given the arcmin angular resolution of {\em BeppoSAX}, any possible emission from {I20210N}\ cannot be discerned). The analysis of these data by \citet{ueno00} revealed a very flat continuum slope $\Gamma$ = 0.5$^{+0.7}_{-1.0}$ and the remarkable presence of a strong (EW$_{\rm Fe}$ = 1.6$^{+2.3}_{-1.1}$ keV) {Fe K$\alpha$}~line that led these authors to suggest that {I20210S}\ may host a CT AGN.
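Given the [OIII] luminosity and the {\em BeppoSAX}\ flux just quoted, the transmitted-flux diagnostic for heavy obscuration can be evaluated numerically. The sketch below is our own illustration (not part of the paper's analysis): it assumes the flat cosmology adopted in Sect. 1 and a simple midpoint-rule integration for the luminosity distance; all variable names are ours.

```python
# Illustrative check (not from the paper): the F_X / F_[OIII] ratio for
# I20210S, using L_[OIII] = 2.04e43 erg/s and the BeppoSAX 2-10 keV flux
# F_X ~ 2.9e-13 erg/cm^2/s quoted in the text. Cosmology as in Sect. 1:
# H0 = 70 km/s/Mpc, Omega_M = 0.27, Omega_Lambda = 0.73 (flat).
from math import pi, sqrt

H0, OMEGA_M, OMEGA_L = 70.0, 0.27, 0.73
C_KMS = 299792.458        # speed of light [km/s]
MPC_CM = 3.0857e24        # 1 Mpc in cm

def lum_dist_cm(z, steps=1000):
    """Luminosity distance [cm] in a flat FLRW cosmology (midpoint rule)."""
    dz = z / steps
    integral = sum(dz / sqrt(OMEGA_M * (1.0 + (i + 0.5) * dz)**3 + OMEGA_L)
                   for i in range(steps))
    d_comoving = (C_KMS / H0) * integral          # comoving distance [Mpc]
    return (1.0 + z) * d_comoving * MPC_CM

z = 0.056
d_l = lum_dist_cm(z)                              # roughly 250 Mpc
f_oiii = 2.04e43 / (4.0 * pi * d_l**2)            # [OIII] flux [erg/cm^2/s]
ratio = 2.9e-13 / f_oiii                          # F_X / F_[OIII], of order 0.1
```

A ratio well below unity is the standard indication of a heavily obscured, possibly Compton-thick, nucleus in this diagnostic; the precise value depends on the extinction correction applied to the [OIII] flux.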
The observed {$F_{\rm 2-10}$}\ to [O III] flux ratio of $<$ 0.1 inferred for {I20210S}\ also hints at a CT absorber scenario \citep{guainazzi05b}. However, it is worth bearing in mind the large errors affecting the spectral parameters derived from this observation. \section{Observations and Data Reduction} We observed IRAS 20210$+$1121 with {\em XMM--Newton}\ on May 22, 2009 for about 75 ks (Obs. ID: 0600690101). The observation was performed with the {\it EPIC } PN and MOS cameras operating in Full-Window mode and with the MEDIUM filter applied. Data were reduced with SAS v9.0 using standard procedures and the most updated calibration files. The event lists were filtered to ignore periods of high background flaring according to the method presented in \citet{pico04} based on the cumulative distribution function of background lightcurve count-rates and maximization of the signal-to-noise ratio. The PN source counts were extracted from circular regions of radius 10 ({I20210N}) and 8 ({I20210S}) arcsec centered at ($\alpha_{2000}$ = 20$^{h}$23$^{m}$25.04$^{s}$; $\delta_{2000}$ = $+$11$^{\circ}$31$^{\prime}$47.7$^{\prime\prime}$) and ($\alpha_{2000}$ = 20$^{h}$23$^{m}$25.35$^{s}$; $\delta_{2000}$ = $+$11$^{\circ}$31$^{\prime}$30$^{\prime\prime}$), in order to avoid any cross-contamination between the two regions and include the maximum number of counts with $E >$ 5 keV. For the two MOS cameras the extraction radius was 9 arcsec for both sources. The background spectra were extracted from source-free, much larger circular regions on the same chip and close to the target. After this screening, the final net exposure times in case of {I20210S}({I20210N}) were 61.2(60.1) and 69.7(70) ks for PN and MOS, respectively. Appropriate response and ancillary files for all the {\em EPIC}~cameras were created using RMFGEN and ARFGEN tasks in the SAS, respectively.
Spectra were rebinned so that each energy bin contains at least 20 counts to allow us to use the $\chi^2$ minimization technique in spectral fitting. In this Letter we present and discuss the PN spectral results only, since this detector has a better sensitivity over the broad 0.3-10 keV range compared to both MOS cameras (even if co-added together), and above 5 keV in particular. Nonetheless, we checked that consistent results were obtained including the MOS data in our analysis. \begin{figure} \begin{center} \includegraphics[width=6.5cm,height=8.cm,angle=-90]{fig2.ps} \vspace{0.1cm}\includegraphics[width=6.5cm,height=8.cm,angle=-90]{fig3.ps} \caption{{\it Top:} Best-fit data and folded model (thick line), plus residuals, of the PN spectrum of {I20210N}. {\it Bottom:} The PN spectrum of {I20210S}\ with the ``composite'' CT AGN$+$SB best-fit model plotted as a thick line. The individual model components are also shown: (a) pure reflection continuum resulting from a power law illumination of cold material; (b)$+$(c) thermal plasma components; (d) cutoff power-law associated with the X-ray binaries; and (e) two narrow Gaussian lines at 6.4 and 6.7 keV, respectively. See text for details.} \label{fig:12} \end{center} \end{figure} \section{Results} An important result from the {\em XMM--Newton}\ observation is presented in Fig. 1, showing the X-ray images of {IRAS 20210$+$1121}\ in the 0.5-2 and 5-10 keV bands. In the soft band, only the X-ray emission centered on the [OIII]-luminous galaxy of the pair, i.e. {I20210S}, is clearly visible. 
In contrast, the very hard X-ray image reveals the presence of two sources of almost comparable intensity, spatially coincident with the radio nucleus of the southern galaxy and with the optical centroid of the northern galaxy (once an absolute astrometry uncertainty of 2 arcsec and the dispersion of photons due to the PSF are considered\footnote{See http://xmm2.esac.esa.int/docs/documents/CAL-TN-0018.pdf for more detail.}), respectively. At their redshift, the projected separation between the intensity peaks of both sources in the 5-10 keV image shown in Fig. 1 is $d$ $\approx$ 11 kpc. The simplest interpretation of the hard X-ray image is that {I20210N}\ may also host an obscured AGN, thus revealing, in turn, the presence of an AGN pair in this interacting system. Direct evidence to support this buried AGN pair hypothesis comes from the X-ray spectroscopy of {I20210N}. The spectral analysis of {\em EPIC}~data of both sources was carried out using the XSPEC v12 software package. The Galactic column density of {N$_{\rm H}^{\rm Gal}$}\ = 9.73 $\times$ 10$^{20}$ {cm$^{-2}$}~derived from \citet{dick90} was adopted in all the fits. Henceforth, errors correspond to the 90\% confidence level for one interesting parameter, i.e. $\Delta\chi^2$ = 2.71. We obtained a very good description of the spectrum of {I20210N}\ with a typical Compton-thin Seyfert 2 model \citep[e.g.,][]{turner97}, as shown in Fig. 2 (top panel). The primary X-ray continuum power law is absorbed by a column density of {N$_{\rm H}$}\ = (4.7$^{+1.7}_{-1.0}$) $\times$ 10$^{23}$ {cm$^{-2}$}\ and exhibits a slope of $\Gamma$ = 2.0$\pm$0.2. The emission in the soft portion of the spectrum (the so-called {\it soft excess} component) is well fitted by an additional unabsorbed power law with its photon index fixed to that of the absorbed power law, but with a different normalization ($\sim$ 3\% of the primary continuum), plus three narrow Gaussian emission lines.
The best-fit values for the energy of these lines are $\sim$0.82, $\sim$0.92, and $\sim$1.07 keV, which can be identified with Fe XVII 3d-2p, Ne IX K$\alpha$ and Ne X K$\alpha$/Fe XXI 3d-2p transitions, respectively. Such a {\it soft excess} component has been detected in most of the obscured AGNs, and it is typically explained as emission from large-scale ($\sim$0.1--1 kpc; see \citet{bianchi06}) photoionized gas, dominated by a wealth of strong emission lines from hydrogen- and helium-like ions of the most abundant metals, from carbon to sulfur \citep{guainazzi07,kinka02}. Assuming this spectral model ({$\chi^{2}/$dof}\ = 15/20), we measured a 0.5-2 keV flux of 1.6 $\times$ 10$^{-14}$ {erg cm$^{-2}$ s$^{-1}$}, and a 2-10 keV flux of 1.2 $\times$ 10$^{-13}$ {erg cm$^{-2}$ s$^{-1}$}. After correcting for absorption, this flux corresponds to a luminosity of {$L_{2-10 keV}$}\ = 4.7 $\times$ 10$^{42}$ {erg s$^{-1}$}\ in the hard band. Such a value of the 2-10 keV luminosity falls well within the AGN luminosity range \citep[e.g.,][]{maiolino03}, thus providing unambiguous evidence for the existence of an active SMBH at the center of {I20210N}.\\ The {\em XMM--Newton}\ spectrum of {I20210S}\ is very complex as shown in Fig. 2 (bottom panel). This was expected on the basis of millimeter/IR/optical data that have revealed the simultaneous presence of star-forming and nuclear activity in this galaxy \citep{horellou95,burston01,perez90}. In particular, from the 1.4 GHz radio(far-IR) luminosity of {I20210S}, a star-formation rate SFR $\sim$ 120(75) M$\odot$ yr$^{-1}$ can be estimated according to the relationship reported in \citet{ranalli03}. We have indeed found an excellent description of the {\em XMM--Newton}\ data assuming a composite SB $+$ AGN emission model ({$\chi^{2}_{\rm \nu}$/dof}\ = 0.96(65)). 
The soft X-ray SB emission has been fitted by the superposition of two thermal emission components (MEKAL model in XSPEC) with solar metallicity and temperatures kT = 0.58$\pm$0.08 keV and kT = 1.25$^{+0.31}_{-0.16}$ keV, respectively, in agreement with typical values of temperature measured in other well-known star-forming galaxies \citep{ptak99}. The hard X-ray emission has been described assuming an absorbed cutoff power-law model in the form $E^{-\Gamma} \exp(-h\nu/kT)$ with photon index $\Gamma$ = 1.1$\pm$0.2 and cutoff energy fixed to 10 keV. Such a spectral shape is expected in the case of a contribution from flat-spectrum bright Low-Mass X-ray binaries and, mostly, from High-Mass X-ray binaries (HMXBs), as pointed out by \citet{persic02}. Another striking characteristic of the emission from HMXBs is a strong Fe XXV emission line at 6.7 keV, which is indeed observed in the spectrum (see Fig. 2, bottom panel) with a poorly-constrained value of equivalent width EW $\sim$ 500 eV. We derived a soft(hard) X-ray luminosity of {$L_{0.5-2 keV}$}\ = 5.2(6.6) $\times$ 10$^{41}$ {erg s$^{-1}$}\ for the SB emission. According to \citet{ranalli03}, these values imply a star-formation rate SFR $\sim$ 110$-$130 M$_\odot$ yr$^{-1}$, which is consistent with the SFR values derived both from the radio and far-IR luminosity reported above. The goodness of this match therefore lends further support to the idea that the hard X-ray power-law component originates from the population of HMXBs expected to be present in the SB regions of {I20210S}. Our best-fit model to the {\em XMM--Newton}\ data includes a component due to X-ray reflection from cold circumnuclear matter with {N$_{\rm H}$}\lower.5ex\hbox{\gtsima}\ 1.6 $\times$ 10$^{24}$ {cm$^{-2}$}~(i.e. CT), and an {Fe K$\alpha$}\ emission line to account for the reprocessed AGN emission visible in the 0.3-10 keV band, the X-ray primary continuum emission being completely blocked in this energy range \citep{ghisellini94}.
The energy centroid of the line is at 6.35$^{+0.06}_{-0.04}$ keV and the EW measured with respect to the reflection continuum is 900$\pm$400 eV. These values unambiguously indicate an origin from reflection in cold circumnuclear CT material. The X-ray primary continuum from the AGN is totally depressed in the {\it EPIC} band, as expected in the case of a CT absorber. We derive an AGN flux ({$F_{\rm 2-10}$}\ = 7.7 $\times$ 10$^{-14}$ {erg cm$^{-2}$ s$^{-1}$}), accounting for $\sim$47\% of the total (AGN$+$SB) 2-10 keV flux. The observed {$L_{2-10 keV}$}\ of the AGN component is 5.3 $\times$ 10$^{41}$ {erg s$^{-1}$}: according to the reflection-dominated/CT scenario it should be at most 1-2\% of the de-absorbed luminosity \citep[e.g.,][]{comastri04,levenson06}, suggesting an intrinsic {$L_{2-10 keV}$}\ $>$ 0.5-1 $\times$ 10$^{44}$ {erg s$^{-1}$}, which is consistent with the expectation based on the {$L_{\rm [OIII]}$}. This result matches well with the hypothesis of a CT absorbing screen and, hence, the presence of a quasar 2 (with {$L_{X}$}\ $>$10$^{44}$ {erg s$^{-1}$}) at the heart of {I20210S}. We also tried a model assuming a transmission scenario for the AGN emission below 10 keV. We fixed the photon index of the continuum power law to the canonical value of $\Gamma$ = 1.8 due to the limited statistics. This fit is statistically as good as the reflection-dominated fit discussed above, with a resulting {N$_{\rm H}$}\ = (3.2$^{+2.0}_{-1.5}$) $\times$ 10$^{23}$ {cm$^{-2}$}. However, this {N$_{\rm H}$}\ implies an {$L_{2-10 keV}$}\ = 1.2 $\times$ 10$^{42}$ {erg s$^{-1}$}, which is two orders of magnitude lower than expected on the basis of the {$L_{\rm [OIII]}$}, and an EW of the {Fe K$\alpha$}\ line against the absorbed continuum of $\sim$ 250 eV \citep{ghisellini94,guainazzi05b}, while we measured an EW = 620$\pm$260 eV.
These two considerations tend to disfavor the transmission scenario and lead us to assume the presence of a CT screen along our line of sight to the nucleus of {I20210S}\ as the most likely interpretation of the {\em XMM--Newton}\ data. \section{Conclusions} The {\em XMM--Newton}\ observation of IRAS 20210$+$1121 presented here has unveiled the existence of an obscured AGN pair placed at a projected distance of $d$ $\sim$ 11 kpc in this interacting galaxy system. In particular, we have discovered a Seyfert 2-like AGN in the nucleus of {I20210N}, for which neither optical nor near-IR spectroscopic observations have provided unambiguous evidence for the existence of an active SMBH at its center. Furthermore, the results of our spectral analysis have provided evidence that the southern member of the pair, the LIRG {I20210S}, optically classified as a Seyfert 2 galaxy, likely hosts a powerful AGN hidden behind a CT absorber. The AGN is embedded in strong SB emission accounting for $\sim$50\% of the 2.0-10 keV flux measured for {I20210S}. IRAS 20210$+$1121 is therefore a rare example of a CT quasar 2 plus optically `elusive' AGN pair caught during the initial stage of the interaction between their host galaxies, which are still easily identifiable, but also show a well-developed tidal bridge \citep{arribas04}. As such, this system seems to provide an excellent opportunity to witness a merger-driven phase of quasar fueling predicted by most of the evolutionary models based on the co-evolution of SMBHs and their host galaxies. The mismatch between the optical/near-IR and the hard X-ray appearances of the nuclear spectrum of {I20210N}\ can be explained in terms of a completely blocked line of sight to the nuclear region, so that the narrow line region (NLR) is also obscured \citep{maiolino03}. The geometrical properties of the absorber should therefore be different from those assumed in typical Seyfert 2 galaxies, i.e. a pc-scale torus-like shape.
The absorber in {I20210N}\ may be characterized by a much more extended distribution (over a few hundreds of pc) in a way that obscures the NLR. Alternatively, it could be spherically symmetric, blocking the flux of ionizing UV photons responsible for the line emission in the NLR. The AGN in {I20210N}\ is revealed through hard X-ray observations only; this feature is shared by most AGN pairs recently discovered thanks to hard X-ray observations \citep[e.g.,][]{komossa03,guainazzi05a,bianchi08}. An intriguing explanation for this behavior can be given in terms of a dust-enshrouded circumnuclear environment due to merger-induced processes favoring gas concentration in the galaxy center. For instance, there may be an extra-torus optical/X-ray absorber lying far from the nucleus, and outside the NLR, likely associated with prominent dust lanes in the disturbed host galaxy. The discovery of a CT quasar 2 in {I20210S}\ is in itself very important given the paucity of low$-z$ members of this peculiar class of AGN detected so far, which are considered a key ingredient in the synthesis models of the Cosmic X-ray Background. This can be ascribed to their low surface density and their absorption-induced faintness at the wavelengths where classical large-area surveys have been performed (i.e. optical, near-IR, UV, X-rays), which make the luminous CT AGN population extremely difficult to observe. Selection criteria based on mid-IR vs. optical colors \citep[e.g.,][and references therein]{lanzuisi09} have proven to be efficient in discovering a large number of heavily obscured quasar candidates at $z$ $\geq$ 1. Unfortunately, most of these sources are detected at very faint flux levels ({$F_{\rm 2-10}$}~$\ll$ 10$^{-14}$ {erg cm$^{-2}$ s$^{-1}$}), making an appropriate X-ray spectral follow-up extremely time-consuming.
Further support to the CT quasar 2 nature for the AGN in {I20210S}\ is provided by the comparison of the expected value of the X-ray bolometric correction $r_{X,bol}$ $\equiv$ {$L_{2-10 keV}$}/L$_{\rm bol}$ = 0.043 $\times$ (L$_{\rm bol}$/10$^{45}$)$^{-0.357}$ (assuming a bolometric luminosity of L$_{\rm bol}$($\approx$ L$_{\rm IR}$) $\approx$ 3 $\times$ 10$^{45}$ {erg s$^{-1}$}) for a typical quasar from \citet{pico07}, and the values of $r_{X,bol}$ calculated using the hard X-ray luminosity inferred for the CT and transmission scenario, respectively. In fact, the expected value of $r_{X,bol}$ = 0.03 is consistent with that derived assuming a {$L_{2-10 keV}$}\ $>$ 5 $\times$ 10$^{43}$ {erg s$^{-1}$}\ (estimated from the {$L_{\rm [OIII]}$}\ luminosity and a CT absorber), i.e. $r_{X,bol}$ $>$ 0.02, but it is much higher than $r_{X,bol}$ = 0.0004 derived for a {$L_{2-10 keV}$}\ = 1.2 $\times$ 10$^{42}$ {erg s$^{-1}$}. Detection of objects such as {I20210S}\ is important because they provide useful templates to explore the multiwavelength properties of the obscured accretion phenomenon without any luminosity bias. The observed properties of {I20210S}\ can be interpreted in the framework of an evolutionary merger-driven scenario according to which a peculiar dust-cocooned, early stage in the life cycle of quasars is linked to a period of intense star-forming activity in the interacting host galaxy \citep{silk98,treister10}. An easily observable outcome of this scenario is indeed an enhancement of the IR luminosity, with the system undergoing a (U)LIRG phase powered both by the SB and the AGN. Furthermore, \citet{horellou95} measured a molecular hydrogen mass $M_{H2}$ of 4.1 $\times$ 10$^{9}$ $M_\odot$ for {I20210S}\ that implies an $L_{FIR}$/$M_{H2}$ ratio $\geq$ 100 $L_\odot$$M^{-1}_\odot$, i.e. a value typical for gas-rich mergers \citep{sanders91}. 
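The arithmetic behind this bolometric-correction comparison is simple; the following sketch (not part of the original analysis; it just recomputes the three ratios from the luminosities quoted above, with variable names of our choosing) makes it explicit:

```python
# Sketch of the X-ray bolometric correction comparison for I20210S.
# All luminosities are the values quoted in the text, in erg/s.
L_bol = 3e45  # assumed bolometric luminosity, ~ L_IR

# Expected ratio for a typical quasar: r = 0.043 * (L_bol / 1e45)^(-0.357)
r_expected = 0.043 * (L_bol / 1e45) ** (-0.357)

# Ratio implied by the CT scenario (intrinsic L_2-10 > 5e43 erg/s from L_[OIII])
r_ct = 5e43 / L_bol

# Ratio implied by the transmission scenario (L_2-10 = 1.2e42 erg/s)
r_trans = 1.2e42 / L_bol

print(f"expected r ~ {r_expected:.3f}, CT r > {r_ct:.3f}, transmission r ~ {r_trans:.4f}")
```

The expected ratio comes out close to the CT lower bound but nearly two orders of magnitude above the transmission-scenario value, which is the point of the comparison.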
According to model predictions, the SMBH at the center of {I20210S}\ should be accreting close to the Eddington rate, in agreement with the quasar-like values of {$L_{\rm [OIII]}$}({$L_{2-10 keV}$}) measured(estimated) for the AGN. The interacting system IRAS 20210$+$1121 therefore deserves deeper investigation in the future to examine the possible presence of any structures associated with the merging process (i.e. outflows, inflows, obscured SB regions) which cannot be revealed by the observational data available so far but would be very useful for our understanding of quasar evolution and AGN/SB triggering mechanisms. \acknowledgements EP, CV and SB acknowledge support under ASI/INAF contract I/088/06/0. FN acknowledges support from the XMM-Newton-NASA grant NNX09AP39G. Based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA.
\section*{Abstract} {\bf These lecture notes from the 2019 Les Houches Summer School on Quantum Information Machines provide a pedagogical introduction to the theory of non-reciprocal quantum interactions and devices. The goal is to connect various approaches and concepts, including Hamiltonians encoding synthetic gauge fields, scattering descriptions, quantum master equations, and non-Hermitian Hamiltonians. The importance of having both non-trivial synthetic gauge fields and dissipation for obtaining non-reciprocal interactions is stressed. Connections to broader topics such as quantum reservoir engineering and the quantum theory of continuous-measurement based feedforward are also discussed. } \vspace{10pt} \noindent\rule{\textwidth}{1pt} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction} \label{sec:intro} Devices that scatter incident waves (electromagnetic or acoustic) in a fundamentally asymmetric manner play a crucial role in a variety of classical and quantum applications. Perhaps the most common devices are isolators (two-port devices which permit transmission between the ports in only one direction) and circulators (multi-port devices where transmission can only occur e.g.~from port $j$ to $j+1$, but not in the reverse direction). Such devices are termed ``non-reciprocal'', and are usually discussed in the context of the Lorentz reciprocity theorem in optics, and the Rayleigh reciprocity theorem in acoustics. There has been in recent years a concerted effort to devise methods for achieving these kinds of non-reciprocal scattering devices without using magnetic material or magnetic fields, but instead using external driving (i.e. time modulation). Several excellent reviews exist discussing these approaches in classical systems (see e.g.~\cite{Sounas2017,FanTutorial2020}).
Parallel to this effort, there is growing theoretical interest in understanding the unique properties of systems whose internal dynamics are governed by effective non-Hermitian Hamiltonians which encode non-reciprocal \emph{interactions}. Typical examples include non-Hermitian lattice models, where there is an asymmetry in, e.g., the amplitude for hopping from left to right, versus right to left \cite{Hatano1997}. Such systems exhibit a number of unusual properties, such as the non-Hermitian skin effect, where changing boundary conditions from periodic to open completely changes the spectrum of the Hamiltonian, and localizes all eigenvectors \cite{FoaTorres2018,Yao2018,McDonald2018}. They can also exhibit unique kinds of topological band structures \cite{Ueda2019,Ashvin2019} and can even give rise to novel phase transition physics \cite{Fruchart2021}. The majority of work in this area assumes the existence of directional interactions as a starting point for formulating a model, without worrying about microscopic mechanisms. In the quantum regime, this can be problematic, as it often amounts to an incomplete description of an open quantum system (where one is including generalized damping effects, without accounting for the corresponding quantum fluctuations that must accompany it) \cite{McDonald2021}. In these notes, we provide a (hopefully) pedagogical introduction to how one can microscopically achieve non-reciprocal interactions using external driving, in a way that is fully consistent quantum mechanically. Using an extremely simple model of a three-site bosonic ring, we show explicitly how non-reciprocal scattering (as needed for an isolator or circulator) can be directly tied to non-reciprocal propagation within the ring, as described by an effective non-Hermitian Hamiltonian. We do this in a manner which includes all relevant quantum noise effects.
This simple example highlights a general principle: achieving non-reciprocal propagation of interactions requires both the breaking of time-reversal symmetry (in that there are non-trivial synthetic gauge fields) and dissipation. We then use this toy model to derive a quantum master equation that encodes non-reciprocal tunnelling within the ring. This shows explicitly how non-reciprocity emerges by balancing coherent Hamiltonian interactions against the corresponding kind of dissipative interaction (as mediated by a dissipative reservoir that couples to system degrees of freedom non-locally). With this example in hand, we show that the basic structure of this quantum master equation can be used to make {\it any} starting Hamiltonian interaction between two systems fully non-reciprocal. We draw connections to both the theory of cascaded quantum systems (where non-reciprocal interactions are generated by coupling to an external unidirectional waveguide which is then integrated out), and to quantum descriptions of measurement plus feedforward protocols (which are inherently non-reciprocal because of the one-way flow of information). Our work thus provides a pedagogical introduction to the basic recipe for generating non-reciprocal quantum interactions introduced in Refs.~\cite{Metelmann2015} and \cite{Metelmann2017}. It complements the analysis there in several ways (e.g.~by discussing concrete connections to non-Hermitian Hamiltonians, and by commenting on the ability of non-Hermitian interactions to generate entanglement). \section{Synthetic gauge fields} In this section, we will introduce the first essential ingredient needed to realize non-reciprocal interactions: a synthetic gauge field, which picks out a particular direction or sense of circulation. We will show how these can arise from appropriate forms of driving or temporal modulation.
\subsection{Tight binding model of coupled cavities} Consider a collection of coupled photonic resonators (or modes), with each site $j$ having a canonical bosonic annihilation operator $\hat{a}_j$ which annihilates a photon in mode $j$; these obey the usual canonical commutation relations e.g.~$[ \hat{a}_j, \hat{a}^\dagger_k] = \delta_{jk}$, $[ \hat{a}_j, \hat{a}_k] = 0$. The coupling between these modes is described by a photon-number conserving tight-binding (or beam-splitter) Hamiltonian having the form \begin{equation} \hat{H} = \sum_j \omega_j \hat{a}^\dagger_j \hat{a}_j - \sum_{j > j'} \left( t_{jj'} \hat{a}^\dagger_j \hat{a}_{j'} + h.c. \right) \label{eq:HTB} \end{equation} Here $\omega_j$ are the resonant frequencies of each mode (i.e.~on-site energies), and the amplitudes of tunnelling between different modes are encoded in the hopping matrix elements $t_{jj'}$. For convenience, for $j < j'$ we define $t_{jj'} = (t_{j'j})^*$. The first ingredient we will need in our minimal description is a ``synthetic gauge field''. In the context of our simple lattice model, this reduces to something simple: we want the hopping matrix elements to have non-zero phases (i.e.~$t_{jj'} \neq t_{jj'}^* $), and we want these phases to be non-trivial, in the sense that we cannot make a gauge transformation to remove them. More specifically, consider a local gauge transformation that shifts the phase of the annihilation operator for site $j$ by $ \theta_j$: \begin{equation} \hat{a}_j \rightarrow \hat{a}_j e^{i \theta_j} \end{equation} The result is that after the transformation, the new hopping matrix elements become $\tilde{t}_{jj'} = t_{jj'} e^{i (\theta_{j'} - \theta_{j})}$. We are interested in a situation where {\it there is no such transformation} that makes all the hopping matrix elements purely real. This is equivalent to the condition that there exist non-trivial effective Aharonov-Bohm phases associated with hopping around a closed loop.
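This invariance is easy to illustrate numerically. The following sketch (not part of the original notes; all hopping phases and gauge angles are arbitrary made-up numbers on a three-site ring) verifies that a random local gauge transformation changes the individual hopping phases but leaves the loop flux unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# t[j] is the hopping matrix element t_{j+1,j} around a 3-site ring
# (arbitrary made-up phases, unit magnitude for simplicity)
t = np.exp(1j * rng.uniform(0, 2 * np.pi, size=3))

# Synthetic flux = phase of the loop amplitude t_{13} t_{32} t_{21}
flux = np.angle(np.prod(t))

# Local gauge transformation a_j -> a_j e^{i theta_j} shifts each hopping
# phase: t_{j+1,j} -> t_{j+1,j} e^{i (theta_j - theta_{j+1})}
theta = rng.uniform(0, 2 * np.pi, size=3)
t_gauged = t * np.exp(1j * (theta - np.roll(theta, -1)))

flux_gauged = np.angle(np.prod(t_gauged))

# Individual hoppings change, but the loop flux does not (mod 2 pi)
assert not np.allclose(t, t_gauged)
assert np.isclose(np.exp(1j * flux), np.exp(1j * flux_gauged))
```

The gauge factors cancel around any closed loop because each $\theta_j$ enters once with each sign, which is exactly why the flux is the gauge-invariant quantity.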
For example, consider a hopping process where one hops between $1 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 1$. This process would be associated with an amplitude $A$ involving the product of the relevant hopping matrix elements, i.e.~ \begin{equation} A = t_{14} \cdot t_{43} \cdot t_{32} \cdot t_{21} \end{equation} If $A \neq A^*$, then we can associate an effective Aharonov-Bohm phase with this loop. Such a phase is necessarily invariant under the local gauge transformation defined above. The existence of phases that cannot be gauged away (i.e.~loop amplitudes $A$ that are complex) can be rigorously tied to a notion of broken time reversal symmetry. For a clear pedagogical discussion of this, we recommend Appendix A of Ref.~\cite{KochPRA2010}. \subsection{Connection to a continuum description of a particle in a magnetic field} Recall that such gauge invariant phases would emerge naturally if our Hamiltonian described charged particles hopping on a lattice in the presence of a magnetic field. In this case the phases of $t_{jj'}$ can encode a true Aharonov-Bohm phase, and are determined by the electromagnetic vector potential $\vec{A}$ via the Peierls substitution: \begin{equation} t_{jj'} = |t_{jj'}| \exp \left( i \frac{q}{\hbar} \int_{\vec{R}_{j'}}^{\vec{R}_j} \vec{A} \cdot d \vec{s} \right) \label{eq:Peierls} \end{equation} Here $\vec{R}_j$ denotes the real space position of the $j$th site in the lattice. It is instructive to verify that the above lattice Hamiltonian is indeed a discretized version of the usual Hamiltonian describing a charged particle in a magnetic field, i.e. \begin{equation} \hat{H}_{q} = \frac{1}{2m} \left( \hat{\vec{p}} - \frac{q}{c} \vec{A} \right)^2 \label{eq:HMinimal} \end{equation} where $m$ is the mass of the particle, $c$ the speed of light, and $\vec{p}$ the momentum. For simplicity, we focus on particles hopping on a 1D tight binding lattice with lattice constant $a$.
Using $|j \rangle$ to denote a position eigenket centered on the lattice vector $\vec{R}_j = (ja,0,0)$, and switching to a first quantized representation for convenience, the 1D tight binding Hamiltonian with the Peierls phase has the form: \begin{equation} \hat{H}_{1D} = \sum_j \left( -t e^{i \phi_j} | j+1 \rangle \langle j | + h.c. \right) \label{eq:H1D} \end{equation} where, from Eq.~(\ref{eq:Peierls}), the phase $\phi_j$ is given by: \begin{equation} \phi_j \simeq a \frac{q}{\hbar} A_x(\vec{R}_j) \end{equation} We have assumed here that the vector potential $A_x(\vec{R})$ does not change significantly over a single lattice constant. Next, recall that real space translations are generated by the momentum operator: \begin{equation} | j+1 \rangle = \exp \left( - \frac{i}{\hbar} \hat{p}_x a \right) |j \rangle \end{equation} Let's use this to re-express the rightwards-hopping term in the Hamiltonian of Eq.~(\ref{eq:H1D}) in the limit of a small lattice constant $a \rightarrow 0$: \begin{equation} e^{i \phi_j} | j+1 \rangle \langle j | \simeq \left(1 + i \phi_j - \phi_j^2 / 2 \right) \left(1 - \frac{i \hat{p}_x a}{\hbar} - \frac{(\hat{p}_x a)^2}{2\hbar^2} \right) |j \rangle \langle j | \label{eq:Displacement} \end{equation} Let's define the operator $\hat{\phi} = \sum_j \phi_j |j \rangle \langle j |$. If we now add Eq.~(\ref{eq:Displacement}) with its Hermitian conjugate, and then sum over all sites $j$, we obtain: \begin{align} \sum_j \left( e^{i \phi_j} | j+1 \rangle \langle j | + h.c. \right) & \simeq \sum_j \left(1 - \frac{i \hat{p}_x a}{\hbar} - \frac{(\hat{p}_x a)^2}{2\hbar^2} \right) \left(1 + i \hat{\phi} - \hat{\phi}^2 / 2 \right) |j \rangle \langle j | + h.c.
\\ & = \left( 2 + \{ \hat{\phi} , \frac{ \hat{p}_x a}{\hbar} \} - \hat{\phi}^2 - \left( \frac{ \hat{p}_x a}{\hbar} \right)^2 \right) \\ & = \left( 2 + \frac{a^2}{\hbar^2} \left \{ q \hat{A}_x , \hat{p}_x \right \} - \frac{a^2}{\hbar^2} (q \hat{A}_x)^2 - \frac{a^2}{\hbar^2} \hat{p}_x^2 \right) \\ & = \left( 2 - \frac{a^2}{\hbar^2} \left( \hat{p}_x - q \hat{A}_x \right)^2 \right) \end{align} where $\hat{A}_x \equiv A_x( \hat{\vec{R}} )$ is the operator describing our vector potential. Our 1D tight binding Hamiltonian thus takes the form \begin{align} \hat{H}_{1D} & = \sum_j \left( (-2t) + \frac{t a^2}{\hbar^2} \left( \hat{p}_x - q \hat{A}_x \right)^2 \right) | j \rangle \langle j | \\ & = \frac{1}{2m^*} \left( \hat{p}_x - q \hat{A}_x \right)^2 + (\textrm{ const. } ) \end{align} with the effective mass $m^* \equiv \hbar^2 / 2 t a^2$. Hence, in the continuum limit, the vector-potential dependent phase in our tight-binding Hamiltonian is equivalent to the usual minimal coupling Hamiltonian $\hat{H}_q$ in Eq.~(\ref{eq:HMinimal}) describing a charged particle in a magnetic field. \subsection{Basic methods for realizing synthetic gauge fields via driving} \label{subsec:RealizingGaugeFields} With this understanding in hand, we can return to our problem: we would like non-trivial phases in our hopping Hamiltonian of Eq.~(\ref{eq:HTB}) without having to use charged particles and external magnetic fields. The basic approach will be to obtain these phases by using nonlinearity and external driving (or time modulation) of our system. To make the basic ideas here clear, let's first start with a reduced setup having only two modes. We thus wish to generate an effective Hamiltonian of the form \begin{equation} \hat{H}_{\rm eff} = - t \left( e^{i \tilde{\phi} } \hat{a}^\dagger_2 \hat{a}_1 + h.c. \right) \label{eq:HPhaseTwoSites} \end{equation} with a non-zero, controllable hopping phase $\tilde{\phi}$. There are two basic approaches to achieving this using driving. 
\subsubsection{Coupling modulation} Suppose we start with two modes that are non-resonant (i.e.~$\omega_1 \neq \omega_2$), and modulate the beam splitter coupling between them harmonically in time. This is described by the Hamiltonian: \begin{equation} \hat{H}_{cm}(t) = \omega_1 \hat{a}^\dagger_1 \hat{a}_1 + \omega_2 \hat{a}^\dagger_2 \hat{a}_2 + 2 \tilde{t} \cos( \omega_D t + \phi ) \left(\hat{a}^\dagger_2 \hat{a}_1 + h.c. \right) \label{eq:HCouplingMod} \end{equation} Let's move to a rotating frame generated by the unitary \begin{equation} \hat{U}(t)= \exp \left[ i \left( \omega_1 \hat{a}^\dagger_1 \hat{a}_1 + \omega_2 \hat{a}^\dagger_2 \hat{a}_2 \right) t \right] \end{equation} In the new frame, the transformed wavefunctions are $| \psi'(t) \rangle = \hat{U}(t) | \psi(t) \rangle$, and they obey the time-dependent Schr\"{o}dinger equation $i \frac{d}{dt} | \psi'(t) \rangle = \hat{H}^\prime_{cm}(t) | \psi'(t) \rangle$ generated by the transformed Hamiltonian $\hat{H}^\prime_{cm}(t)$. This is given by \begin{align} \hat{H}^\prime_{cm}(t) & \equiv \hat{U}(t) \hat{H}_{cm}(t) \hat{U}^\dagger(t) + i \left( \frac{d}{dt} \hat{U} \right) \hat{U}^\dagger \\ & = 2 \tilde{t} \cos( \omega_D t + \phi ) \left(\hat{a}^\dagger_2 \hat{a}_1 e^{i (\omega_2 - \omega_1)t} + h.c. \right) \end{align} We next pick the modulation frequency $\omega_D$ to be equal to the difference of the resonance frequencies of the two modes, i.e.~$\omega_D = \omega_2 - \omega_1$: \begin{align} \hat{H}^\prime_{cm}(t) & = \tilde{t} \left( e^{-i \phi} \hat{a}^\dagger_2 \hat{a}_1 + h.c. \right) + \tilde{t} \left( e^{i \phi} \hat{a}^\dagger_2 \hat{a}_1 e^{2 i (\omega_2 - \omega_1)t} + h.c. \right) \end{align} Finally, we further specialize to the situation where the coupling amplitude $\tilde{t}$ is much, much smaller than the frequency difference of the two modes: $\tilde{t} \ll | \omega_2 - \omega_1 |$.
In this limit, the last bracketed term in the above Hamiltonian is highly non-resonant and can be safely dropped (as in perturbation theory, it would yield small contributions controlled by the small parameter $\tilde{t} / | \omega_2 - \omega_1 |$). This is nothing but the standard rotating-wave approximation (RWA). Making the RWA, we finally obtain an effective time-independent Hamiltonian that has the desired form of Eq.~(\ref{eq:HPhaseTwoSites}): a beam splitter coupling with a controllable hopping phase. The phase here is directly determined by the phase of the coherent sinusoidal coupling modulation in the original time-dependent Hamiltonian. At a heuristic level, it is useful to consider this final Hamiltonian as describing a three-wave mixing process where a ``photon'' from the classical modulation tone at frequency $\omega_D$ is either absorbed or emitted to facilitate resonant tunneling between modes $1$ and $2$. In a system of just two resonators, the tunneling phase in our Hamiltonian could always be gauged away. However, the same modulation strategy can be directly generalized to lattices of 3 or more resonators to generate non-trivial phases and effective Aharonov-Bohm fluxes: one just modulates each link of interest at the difference of the relevant resonance frequencies. For example, consider a lattice with three sites described by Eq.~(\ref{eq:HTB}), with the replacements: \begin{equation} t_{21} \rightarrow 2 \tilde{t} \cos \left(\omega_{21} t + \phi_A \right), \, \, \, \, t_{32} \rightarrow 2 \tilde{t} \cos \left(\omega_{32} t + \phi_B \right), \, \, \, \, t_{13} \rightarrow 2 \tilde{t} \cos \left(\omega_{13} t + \phi_C \right) \label{eq:RingPhases} \end{equation} where $\omega_{ij} = \omega_i - \omega_j$. Following the same steps as above, in the rotating frame (and after a rotating wave approximation), we obtain a time-independent tight binding Hamiltonian where the phase of each link is set by $\phi_A, \phi_B$ and $\phi_C$ respectively.
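The RWA made above is easy to check numerically in the two-mode case. The sketch below (not from the original notes; all parameter values are made up, chosen to satisfy $\tilde{t} \ll |\omega_2 - \omega_1|$) integrates the single-photon amplitudes under the full time-dependent Hamiltonian and confirms the Rabi oscillation at rate $\tilde{t}$ predicted by the effective time-independent Hamiltonian:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Made-up parameters with t_mod << |w2 - w1|, as assumed for the RWA
w1, w2 = 0.0, 20.0
t_mod, phi = 0.5, 0.3
wD = w2 - w1  # modulate at the mode frequency difference

def rhs(t, psi):
    # One-photon amplitudes: i d(psi)/dt = H(t) psi
    g = 2.0 * t_mod * np.cos(wD * t + phi)
    H = np.array([[w1, g], [g, w2]], dtype=complex)
    return -1j * (H @ psi)

# The RWA Hamiltonian predicts a Rabi oscillation at frequency t_mod, so a
# half period T = pi / (2 t_mod) should give near-complete photon transfer.
T = np.pi / (2.0 * t_mod)
sol = solve_ivp(rhs, (0.0, T), np.array([1.0 + 0.0j, 0.0 + 0.0j]),
                max_step=0.01, rtol=1e-8, atol=1e-10)

p2 = np.abs(sol.y[1, -1]) ** 2  # population transferred to mode 2
assert p2 > 0.95  # complete transfer, up to O(t_mod / |w2 - w1|) corrections
```

The residual deviation from unit transfer is the counter-rotating correction dropped in the RWA, and it shrinks as $\tilde{t}/|\omega_2-\omega_1|$ is made smaller.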
We can now define a gauge invariant ``synthetic flux'' $\Phi = \phi_A + \phi_B + \phi_C$. Such a phase cannot be eliminated by a local gauge transformation. One might still worry that our synthetic gauge flux ultimately relies on controlling the relative phases between modulation tones at different frequencies, something that might seem to be ill defined. We can express this worry more formally: each of the modulation phases $\phi_j$ in Eq.~(\ref{eq:RingPhases}) depends on our choice for the zero of time $t=0$. If we shift the origin of time $t \rightarrow t + \tau$, then clearly these phases also change: $\phi_A \rightarrow \phi_A + \omega_{21} \tau$, $\phi_B \rightarrow \phi_B + \omega_{32} \tau$, $\phi_C \rightarrow \phi_C + \omega_{13} \tau$. Each of these phases is thus indeed sensitive to how exactly one decides to define the instant $t=0$. This sensitivity is however not true for the loop flux $\Phi$: it is independent of $\tau$ for the simple reason that $\omega_{21} + \omega_{32} + \omega_{13} = 0$. We are left with the conclusion that the gauge-invariant synthetic gauge flux in our final time-independent Hamiltonian coincides with the single ``total'' modulation phase in our time-dependent Hamiltonian that is defined independently of a specific choice of the zero of time. Before moving on, we make an important note: in practice, the coupling modulation strategy described here corresponds to using a parametric nonlinearity involving an auxiliary mode, which is driven strongly and hence treated classically. To be concrete, let's return to our two mode problem. In general, the time-dependent two-mode Hamiltonian in Eq.~(\ref{eq:HCouplingMod}) is an approximation to a nonlinear system having one or more auxiliary modes that are driven.
For simplicity, consider the case where there is only a single auxiliary mode, and where the interaction and driving terms have the form \begin{equation} \hat{H}_{\rm int} = g \left( \hat{b} + \hat{b}^\dagger \right) \left( \hat{a}^\dagger_2 \hat{a}_1 + h.c. \right) + \left( i f_D e^{-i \omega_D t} \hat{b}^\dagger + h.c. \right). \end{equation} The Hamiltonian has the form of a three-wave mixing (or $\chi_2$) style nonlinearity (amplitude $g$), where the auxiliary mode $b$ controls the tunneling of photons from mode $1$ and $2$. Further, $f_D$ describes a simple linear drive on this auxiliary mode. To obtain our coupling modulation Hamiltonian, we work in the usual limit where $g$ is weak and the drive $f_D$ is strong. The equation of motion determining the average amplitude of mode $b$ is: \begin{equation} \frac{d}{dt} \langle \hat{b} \rangle = \left( -i \omega_b - \kappa_b/2 \right) \langle \hat{b} \rangle + f_D e^{-i \omega_D t} + g (.....) \end{equation} Here $\kappa_b$ is the damping rate of the auxiliary mode. In the weak $g$ limit of interest, we can solve this equation for the steady state behaviour of $\langle \hat{b}(t) \rangle$ ignoring the $g$ term: $\langle \hat{b}(t) \rangle = \bar{b} e^{-i \omega_D t}$, with $\bar{b} = f_D / ( -i (\omega_D - \omega_b) + \kappa_b / 2)$. If we now replace $\hat{b} \rightarrow \bar{b} e^{-i \omega_D t}$ in our interaction Hamiltonian, we recover the coupling modulation Hamiltonian of Eq.~(\ref{eq:HCouplingMod}) with $\tilde{t} = g |\bar{b}|$, and the phase $\phi$ determined by the phase of $\bar{b}$. Three-wave mixing Hamiltonians like this are common in many settings, e.g.~optomechanical setups \cite{Fang2017}, where a mechanical mode can modulate tunneling between two photonic modes. 
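The steady-state amplitude $\bar{b}$ derived above is straightforward to verify numerically. The sketch below (not from the original notes; parameter values are made up) integrates the classical equation of motion with the $g$ term dropped and compares the long-time solution against the analytic formula:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Made-up parameters for the driven, damped auxiliary mode b
wb, kappa, fD, wD = 5.0, 1.0, 2.0, 4.5

def rhs(t, b):
    # d<b>/dt = (-i wb - kappa/2) <b> + fD e^{-i wD t}  (g term dropped)
    return (-1j * wb - kappa / 2.0) * b + fD * np.exp(-1j * wD * t)

T = 40.0  # long enough that the transient ~ e^{-kappa T / 2} has died out
sol = solve_ivp(rhs, (0.0, T), np.array([0.0 + 0.0j]),
                max_step=0.05, rtol=1e-9, atol=1e-11)

b_rot = sol.y[0, -1] * np.exp(1j * wD * T)    # strip off the e^{-i wD t} rotation
b_bar = fD / (-1j * (wD - wb) + kappa / 2.0)  # analytic steady-state amplitude
assert abs(b_rot - b_bar) < 1e-3 * abs(b_bar)
```

After the transient decays at rate $\kappa_b/2$, the mode simply oscillates at the drive frequency with the complex amplitude $\bar{b}$, whose phase is what ultimately sets the hopping phase $\phi$.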
A similar strategy for obtaining effective modulated-coupling beam-splitter Hamiltonians can be achieved starting with four-wave mixing Hamiltonians, as is commonly done in Josephson junction circuits (see e.g.~ \cite{Kamal2011,Abdo2013,Sliwa2015}). \subsubsection{Frequency modulation} We next consider an alternate means for obtaining synthetic gauge fields, where instead of modulating the couplings between modes, we modulate the resonance frequencies of each mode. For simplicity, we again start with a simple two mode system. Our starting time-dependent Hamiltonian now has the form: \begin{equation} \hat{H}_{fm}(t) = \left[ \omega_1 + A_1 \cos ( \Omega_1 t + \phi) \right] \hat{a}^\dagger_1 \hat{a}_1 + \omega_2 \hat{a}^\dagger_2 \hat{a}_2 - t \left( \hat{a}^\dagger_2 \hat{a}_1 + h.c. \right) \end{equation} Similar to the treatment of coupling modulation, let's make a unitary transformation to eliminate the on-site time-dependent terms. This is achieved via the unitary: \begin{equation} \hat{U}(t) = \exp \left[ i \omega_2 t \hat{a}^\dagger_2 \hat{a}_2 \right] \exp \left[ i \left( \omega_1 t + \frac{A_1}{\Omega_1} \sin( \Omega_1 t + \phi) \right) \hat{a}^\dagger_1 \hat{a}_1 \right] \end{equation} The Hamiltonian in the rotating frame then corresponds to a time-dependent effective coupling: \begin{equation} \hat{H}^\prime_{fm}(t) = -t \left[ \left( \exp[ i (\omega_2 - \omega_1) t ] \exp[ -i (A_1 / \Omega_1) \sin (\Omega_1 t + \phi) ] \right) \hat{a}^\dagger_2 \hat{a}_1 + h.c. \right] \end{equation} Next, recall the Jacobi-Anger identity: \begin{equation} e^{-i z \sin \theta} = \sum_{n = -\infty}^{\infty} J_n[z] e^{- i n \theta} \end{equation} where $J_n[z]$ is the $n$th Bessel function of the first kind. We see that the frequency modulation of mode $1$ results in a complicated modulation of the tunneling between mode 1 and mode 2, involving all harmonics of $\Omega_1$.
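The Jacobi-Anger expansion above is easy to verify numerically; here is a minimal sketch (numpy only, evaluating $J_n$ through its standard integral representation, with arbitrarily chosen test values of $z$ and $\theta$):

```python
import numpy as np

def bessel_J(n, z, num=512):
    # Integral representation J_n(z) = (1/2pi) * integral over [0, 2pi] of
    # cos(n*theta - z*sin(theta)), valid for all integer n. The periodic
    # trapezoid rule (a plain mean of equally spaced samples) is spectrally
    # accurate for this smooth periodic integrand.
    th = np.linspace(0.0, 2.0 * np.pi, num, endpoint=False)
    return np.mean(np.cos(n * th - z * np.sin(th)))

z, theta = 1.3, 0.7    # arbitrary test values
lhs = np.exp(-1j * z * np.sin(theta))
# Truncate the infinite sum; J_n(z) decays super-exponentially for |n| >> z.
rhs = sum(bessel_J(n, z) * np.exp(-1j * n * theta) for n in range(-25, 26))
```

The two sides agree to machine precision, confirming that the sideband weights of the modulated coupling are exactly the Bessel amplitudes $J_n[A_1/\Omega_1]$.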
We now pick this modulation frequency so that only the first harmonic ($n=1$) results in a resonant tunneling process: $\Omega_1 = \omega_2 - \omega_1$. We further assume that all remaining terms can be safely neglected within the rotating wave approximation (i.e. they have a large frequency detuning / oscillation frequency compared to their amplitude). Within this approximation, we again obtain a time-independent effective Hamiltonian where the phase of our modulation controls the effective phase of the hopping matrix element: \begin{equation} \hat{H}^\prime_{fm} \simeq - t J_1 \left[ \frac{A_1}{\omega_2 - \omega_1} \right] \left( e^{-i \phi} \hat{a}^\dagger_2 \hat{a}_1 + h.c. \right) \end{equation} As usual, with just two modes, the phase $\phi$ can be gauged away and is not of any particular interest. The minimal case for something interesting involves a ring of three modes, with the tunnel phases encoding a non-trivial flux. The above frequency modulation strategy could be applied in this case. Each mode has a time-dependent resonance frequency: \begin{equation} \omega_j(t) = \omega_j + A_j \cos( \Omega_j t + \phi_j ) \,\,\,\,\,\,\, (j=1,2,3) \end{equation} We consider a situation where the static, unmodulated frequencies of the three modes are all distinct, and choose modulation frequencies such that: \begin{align} \omega_2 & = \omega_1 + \Omega_1 \\ \omega_2 & = \omega_3 + \Omega_2 \\ \omega_3 & = \omega_1 + \Omega_3 \end{align} With these choices, we see that each mode-to-mode hopping process is made resonant using just one of the three frequency modulations $\Omega_j$. An analogous derivation to the two mode case (and use of the rotating wave approximation) then yields an effective time-independent Hamiltonian that has the form of Eq.~(\ref{eq:HTB}) with each hopping phase controlled by the phase of one of the three modulation tones: \begin{equation} t_{21} \propto e^{-i \phi_1}, \,\,\,\,\, t_{23} \propto e^{-i \phi_2}, \,\,\,\,\, t_{31} \propto e^{-i \phi_3}.
\end{equation} This tight binding Hamiltonian corresponds to a gauge-invariant loop flux $\Phi = \phi_1 - \phi_2 - \phi_3$. As before, one can easily check that this corresponds to a combination of modulation phases that is invariant under time-translation $t \rightarrow t + \tau$. The idea of modulating on-site energies or resonant frequencies has been used in many systems to generate synthetic gauge fields; perhaps the best known examples come from cold atom systems (see e.g.~\cite{Aidelsburger2018}). In cases where there is a choice, the coupling modulation approach of the previous subsection is usually preferable, as the requirements for the rotating wave approximation to be valid are less severe. For the frequency-modulation scheme discussed here, there are in general many, many unwanted resonant sidebands that must be neglected. This often limits one to extremely low modulation amplitudes. \section{Dissipation and effective non-Hermitian dynamics for non-reciprocity} Having introduced the notion of a synthetic gauge flux, we next would like to understand how these can directly lead to truly non-reciprocal scattering. We will see that simply having a non-trivial set of phases is not enough: one also needs dissipation to enter the system in just the correct manner. As we will see, this can be compactly described by an effective non-Hermitian Hamiltonian that describes the propagation of particles within our system subject both to dissipation and the synthetic gauge field. \subsection{Basic model: three-site dissipative ring} \label{subsec:BasicModel} We will establish these ideas by analyzing the simplest setting where they arise: a ring comprised of three photonic cavities, where photons can hop from mode to mode in the presence of a synthetic flux. 
We work in a gauge where this phase is uniform on all three bonds, yielding a Hamiltonian: \begin{equation} \hat{H}_3 = -t \left[ e^{-i \phi} \left( \hat{a}^\dagger_2 \hat{a}_1 + \hat{a}^\dagger_3 \hat{a}_2 + \hat{a}^\dagger_1 \hat{a}_3 \right) + h.c. \right] \label{eq:HRing} \end{equation} The effective Aharonov-Bohm flux here (associated with traversing the ring once) is $\Phi = 3 \phi$; this phase cannot be gauged away unless $\Phi = n \pi$ for some integer $n$. In what follows, we will always take $t > 0$ without loss of generality. \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{{EnergiesVsFlux}} \caption{ Energy eigenvalues of the three-cavity ring (c.f.~Eq.~(\ref{eq:HRing})), as a function of the synthetic gauge flux $\Phi$ threading the loop. The energies are labelled according to the wavevector $k_m = m 2 \pi / 3$ of the state; $m$ also corresponds to the quantized angular momentum of the eigenstate. At the trivial values of the flux (i.e.~$\Phi = 0,\pi,2\pi$), the spectrum has a two-fold degeneracy reflecting the presence of time-reversal symmetry. } \label{fig:EnergiesVsFlux} \end{figure} The above Hamiltonian has translational invariance and is thus easy to diagonalize in terms of plane waves. The single-particle eigenstates are labelled by the wavevectors $k_m = \frac{2 \pi}{3} m$ ($m=-1,0,1$) with corresponding wavefunctions: \begin{equation} | k_m \rangle = \frac{1}{\sqrt{3}} \sum_{j=1}^3 e^{i k_m j} |j \rangle \end{equation} where $|j \rangle \equiv \hat{a}^\dagger_j | {\rm vac} \rangle$ corresponds to a ``position eigenket", i.e. a single photon localized on mode $j$. One finds that the corresponding energy eigenvalues are \begin{equation} \Omega_m = -2 t \cos( k_m + \phi) = -2 t \cos (k_m + \Phi/3 ).
\end{equation} The second quantized Hamiltonian can thus be written as \begin{equation} \hat{H}_3 = \sum_{m=-1,0,1} \Omega_m \hat{b}^\dagger_m \hat{b}_m \end{equation} with \begin{equation} \hat{b}^\dagger_{m} = \frac{1}{\sqrt{3}} \sum_{j=1}^3 e^{i k_m j} \hat{a}^\dagger_j \end{equation} Note that we can interpret the $m$ label as indexing the quantized angular momentum associated with photons propagating along the ring either clockwise or anti-clockwise: $m=0$ corresponds to zero angular momentum, while the eigenstates $m = \pm 1$ correspond to modes with one unit of angular momentum. Shown in Fig.~\ref{fig:EnergiesVsFlux} are the energy eigenvalues of the ring as a function of the flux $\Phi$. A few key things to take note of: \begin{itemize} \item For ``trivial" values of the flux that can be gauged away (i.e.~$\Phi = 0, \pi, 2 \pi, ....$), we always have a degeneracy in the spectrum between two energy eigenvalues. For $\Phi = 0$ this is between the $m = \pm 1$ modes, whereas for $\Phi = \pi$, it is between the $m=0$ and $m=1$ modes. \item The spectrum is identical for $\Phi = 0$ and $\Phi = 2 \pi$. Increasing $\Phi$ by $2 \pi$ essentially adds one unit of angular momentum to each eigenstate. Hence, as we vary $\Phi$ from $0$ to $2 \pi$, the $m=-1$ state is mapped to the $m=0$ state, the $m=0$ state is mapped to the $m=+1$ state, and the $m=1$ state is mapped to the $m=-1$ state. \item Values of $\Phi$ away from these special trivial points break the degeneracy of the spectrum. The spectral degeneracy is maximally broken when $\Phi$ is an odd multiple of $\pi/2$, as at these points, the levels are uniformly spaced. For example, at $\Phi = \pi/2$, the three energy levels are $\Omega_m = t (- \sqrt{3},0,\sqrt{3} )$ for $m=0,-1,1$ respectively.
\end{itemize} \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{{InputOutputSchematic}} \caption{ Schematic showing a single cavity (lowering operator $\hat{a}_j$) coupled to an input-output waveguide or transmission line. $\hat{a}_{j,{\rm in}}$ encodes the amplitude of signals and noise incident on the cavity, whereas $\hat{a}_{j,{\rm out}}$ encodes the amplitude of signals and noise leaving the cavity. } \label{fig:InputOutput} \end{figure} Our goal is to use this simple three site ring to build a scattering device having non-reciprocal properties. To that end, we will couple each of our cavity modes $j=1,2,3$ to an input/output waveguide (see Fig.~\ref{fig:InputOutput}). We will treat this waveguide coupling using the standard equations of input-output theory; for readers not familiar with this, see e.g.~\cite{Gardiner00,ClerkRMP} for pedagogical introductions. Note that if we don't worry about noise terms, this treatment is identical to the standard coupled mode theory equations described in many engineering textbooks. Allowing each cavity mode to be coupled to a waveguide at a rate $\kappa$, the Heisenberg-Langevin equations take the form: \begin{equation} \frac{d}{dt} \hat{a}_j(t) = - i [ \hat{a}_j(t), \hat{H}_3 ] - \frac{\kappa}{2} \hat{a}_j(t) - \sqrt{\kappa} \hat{a}_{j,{\rm in}}(t) \end{equation} Recall that the operators $\hat{a}_{j,{\rm in}}(t)$ describe both signals and noise incident on cavity $j$ through its coupling waveguide; both act as a source term in the above equation and directly act like a drive on cavity $j$. Note that this operator has units of $1 / \sqrt{{\rm time}}$, as its corresponding number operator describes a photon number flux. In a similar fashion, the operator $\hat{a}_{j,{\rm out}}(t)$ describes signal and noise propagating outwards from cavity $j$ in its coupling waveguide.
Input-output theory tells us that this output field is given by: \begin{equation} \hat{a}_{j,{\rm out}}(t) = \hat{a}_{j,{\rm in}}(t) + \sqrt{\kappa} \hat{a}_j(t) \end{equation} The first term here describes ``promptly reflected" photons (i.e.~photons reflected at the entrance of the cavity), whereas the second term describes the emission from cavity $j$ into the waveguide. Next, suppose we apply a coherent drive on each of the three cavities through their respective waveguides, with different amplitudes, but all with the same frequency $\omega$. This corresponds to \begin{equation} \langle \hat{a}_{j,{\rm in}}(t) \rangle = \bar{a}_{{\rm in}, j} e^{-i \omega t} \end{equation} As our system of equations is completely linear, one can directly solve for the corresponding amplitudes of the output field in each input-output waveguide. One finds generally that \begin{equation} \langle \hat{a}_{j,{\rm out}}(t) \rangle = \bar{a}_{{\rm out}, j} e^{-i \omega t} \end{equation} with a linear relation between these output amplitudes and the input amplitudes: \begin{align} \bar{a}_{{\rm out}, j} & = \sum_{j'=1}^3 s_{jj'} [\omega] \bar{a}_{{\rm in}, j'} \end{align} Here $s_{jj'}[\omega]$ is the $3 \times 3$ scattering matrix that describes the scattering of both signals and noise off our system of cavities. While one could explicitly solve the equations of motion to obtain the elements of $s$, to make the physics clearer, we will use the general form of the solution that would be valid for any quadratic, photon-number conserving Hamiltonian for the three cavities: \begin{align} s_{jj'}[\omega] & = \delta_{jj'} - i \kappa G^R[j,j'; \omega] \label{eq:smatrixGR} \end{align} We have introduced here the retarded Green's function of our lattice, $G^R[j,j'; \omega]$. This Green's function is a susceptibility, which tells us the photon amplitude induced on site $j$ if we drive the system at frequency $\omega$ on site $j'$.
More usefully for us, it describes the propagation of photons {\it within the lattice}. Specifically, it gives the probability amplitude associated with the propagation of photons injected on site $j'$ to site $j$. In general, the Kubo formula tells us: \begin{equation} G^R(i,j;t) \equiv -i \theta(t) \langle [ \hat{a}_i(t), \hat{a}^\dagger_j(0) ] \rangle \end{equation} As we have a single particle problem, the Green's function can also be constructed from the resolvent operator associated with the $3 \times 3$ matrix $H$ describing the first quantized version of our Hamiltonian, i.e. \begin{equation} G^R[j,j'; \omega] = \left[ \frac{1}{(\omega + i \kappa/2) \bm{I}_3 - \bm{H}} \right]_{jj'} \equiv \left[ \frac{1}{\omega \bm{I}_3 - \bm{H}_{\rm eff}} \right]_{jj'} \label{eq:GRResolvant} \end{equation} where $\bm{I}_3$ is the identity matrix. In the second equality, we have introduced the effective $3 \times 3$ non-Hermitian Hamiltonian matrix $\bm{H}_{\rm eff}$ which encodes both the Hermitian coupling between the modes, as well as the tendency of photons to leak out of the lattice into the waveguides (via the effective imaginary on-site energies $\propto \kappa$): \begin{equation} \bm{H}_{\rm eff} = \bm{H} - i (\kappa/2) \bm{I}_3 = \left(\begin{array}{ccc} -i \kappa/2 & -t e^{i \phi} & -t e^{-i \phi} \\ -t e^{-i \phi} & -i \kappa/2 & -t e^{i \phi} \\ -t e^{i \phi} & -t e^{-i \phi} & -i \kappa/2 \end{array}\right) \label{eq:NonHermMatrix} \end{equation} To obtain an intuitive understanding of the scattering, we will express $G^R$ in terms of the energy eigenstates of our system: \begin{equation} G^R[j,j'; \omega] = \frac{1}{3} \sum_{m=-1}^1 \frac{ e^{i k_m (j-j')} }{ \omega - \Omega_m + i \kappa/2} \label{eq:GRpoles} \end{equation} We have a simple pole associated with each system eigenmode. Heuristically, a photon injected on site $j'$ can propagate via any of the three eigenmodes in the system.
Each term describes the amplitude associated with one of these possibilities. The final amplitude involves the coherent sum of the three possibilities, with the attendant possibilities of constructive and destructive interference. \subsection{Tuning flux and dissipation to achieve directional propagation within the ring} With this general picture of scattering in hand, we can finally step back and ask: what exactly do we want of $s$? For concreteness, let's try to engineer the simplest kind of non-reciprocal scattering matrix that encodes a definite directionality. We will pick two ports in our system (say $j=1,2$) and try to construct the scattering matrix of an isolator at some frequency $\omega$: signals at this frequency can be transmitted from $2 \rightarrow 1$ but not from $1 \rightarrow 2$. We thus want to understand how we can tune parameters to achieve: \begin{equation} s_{21}[\omega] = 0, \,\,\,\, s_{12}[\omega] \neq 0 \end{equation} From our general expression above, this immediately yields a constraint on the Green's function: \begin{equation} G^R[2,1; \omega] = 0, \,\,\,\, G^R[1,2; \omega] \neq 0 \label{eq:GRNonRecipCondition} \end{equation} While going between these two conditions seems trivial, conceptually it represents a significant shift: in the latter condition, \emph{we are now solely focused on the propagation of photons within the lattice}. Specifically, we want zero amplitude for propagation from $1$ to $2$, but a non-zero amplitude for the reverse process. We stress that the propagation within the lattice (and any emergent non-reciprocity) is something that can be understood solely in terms of the non-Hermitian Hamiltonian $H_{\rm eff}$ introduced in Eq.~(\ref{eq:GRResolvant}).
It is not surprising to guess that achieving the above directionality condition will require tuning the effective flux $\Phi$ in our Hamiltonian appropriately; this flux controls the Green's function in Eq.~(\ref{eq:GRpoles}) solely through its effect on the mode energies. One can immediately check that if $\Phi = 0$, then the non-reciprocity condition of Eq.~(\ref{eq:GRNonRecipCondition}) cannot be fulfilled, as in this case, we necessarily have $G^R[2,1; \omega] = G^R[1,2; \omega]$; this follows directly from the fact that $\Omega_1 = \Omega_{-1}$ when $\Phi = 0$. Directionality will thus require a non-zero, non-trivial value of $\Phi$. While one could reduce this to a purely algebraic exercise, we want a simple intuitive way of understanding whether and how one can find a magic value for $\Phi$. As we now explain, there is indeed a simple principle at play here: \emph{destructive interference}. \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{{FIGSimpleTrajectories}} \caption{ Figure showing the two simplest trajectories allowing propagation from site 1 to 2. Panel (a) shows a counter-clockwise trajectory whose probability amplitude is $\propto t$, whereas panel (b) shows a clockwise trajectory whose amplitude is $\propto t^2$. The interference between these trajectories can be controlled by varying the phase $\phi$ associated with the synthetic flux piercing the ring, the hopping amplitude $t$, and the damping rate $\kappa$ of mode 3 (which shows up in the energy denominator associated with the amplitude of process $Q_2$). } \label{fig:SimpleTrajectories} \end{figure} Let's try to understand the value of the propagation amplitude encoded in $G^R(j,j';\omega)$ in terms of trajectories, i.e. different paths that could take the photon from site $j'$ to site $j$. Formally, we can obtain such a picture by expanding $G^R$ in powers of the hopping $t$.
Shown in Fig.~\ref{fig:SimpleTrajectories} are the simplest trajectories that would take a photon initially on site $1$ to site $2$. The counter-clockwise (CCW) trajectory involves a single hop and is labelled $Q_1$, while the clockwise (CW) trajectory involves two hops and is labelled $Q_2$. We can easily calculate the contribution of each of these processes to $G^R$. We first define the unperturbed Green's function $G_0$; this is the amplitude associated with residing on any given site in the absence of hopping, and is given by \begin{equation} G_0 = \frac{1}{\omega + i \kappa/2} \end{equation} The amplitude for trajectory $Q_1$ is then: \begin{equation} Q_1 = G_0 \cdot \left(-t e^{-i \phi} \right) \cdot G_0 \end{equation} Reading this equation from right to left, the first factor of $G_0$ is associated with starting on site 1, the bracketed factor is the counter-clockwise hopping, and the last $G_0$ factor is associated with ending on site 2. This term corresponds to the first term in a Dyson series expansion of the full Green's function, where we view all of the hopping terms as a perturbation. In a similar fashion, the amplitude for the clockwise trajectory $Q_2$ is: \begin{equation} Q_2 = G_0 \cdot \left(-t e^{i \phi} \right) \cdot G_0 \left(-t e^{i \phi} \right) \cdot G_0 \end{equation} This trajectory involves two hopping events, hence the two factors of $t$. It also involves CW hopping, hence the phase factor for each hop is $ e^{i \phi}$, and not $e^{-i \phi}$ like we had in the CCW trajectory $Q_1$. Finally, the extra factor of $G_0$ compared to the $Q_1$ expression can be associated with the energy denominator we'd expect for a process in second order perturbation theory. The processes $Q_1$ and $Q_2$ are the only contributions to $G^R[2,1; \omega]$ to order $t^2$. Let's now enforce our directionality condition $G^R[2,1; \omega]=0$ to order $t^2$, which amounts to $Q_1 + Q_2 = 0$.
Substituting in the above expressions, this condition becomes: \begin{equation} t e^{-i \phi} = \left(-t e^{i \phi} \right) \cdot G_0 \left(-t e^{i \phi} \right) \end{equation} Some algebra lets us re-write this as a condition on the gauge invariant loop flux $\Phi$: \begin{equation} e^{i \Phi} = \frac{\omega + i \kappa/2}{t} \end{equation} Hence, our first requirement for non-reciprocity (no 1 to 2 propagation) reduces to the two conditions: \begin{align} \tan \Phi & = \frac{\kappa}{2 \omega} \label{eq:NRCondPhi} \\ \kappa/2 & = \sqrt{t^2 - \omega^2} \label{eq:NRCondKappa} \end{align} Note crucially that these conditions require \emph{both} setting the value of the synthetic flux $\Phi$ as well as tuning the value of $\kappa$, i.e. tuning the strength of the dissipation induced in the cavities by the coupling to the waveguides. In the simple case of $\omega = 0$ (i.e.~resonant driving of the cavities), the conditions reduce to $\kappa = 2 t$ and $\Phi = \pi/2$ (i.e.~a value of the flux that maximally breaks time reversal symmetry). \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{{HigherOrderTrajectories}} \caption{ Schematic showing more complicated trajectories that contribute to the overall propagation from cavity 1 to cavity 2. We can organize all of these into two categories, with respective amplitudes $Q_{1,tot}$ and $Q_{2,tot}$ (see text). } \label{fig:HigherOrderTrajectories} \end{figure} One might worry that the above conditions for cancelling $1 \rightarrow 2$ propagation are only valid for very small $t$, as they are based on a perturbative argument. Surprisingly, this is not the case: the same conditions ensure $G^R[2,1;\omega] = 0$ to all orders in $t$. One can again see this from an intuitive argument (see Appendix \ref{app:HigherOrderInterference} for a more rigorous formulation of the same argument). Consider all trajectories that start on site $1$ and end on site $2$.
We can partition these trajectories into two sets (see Fig.~\ref{fig:HigherOrderTrajectories}): \begin{itemize} \item In the first set of trajectories a) the particle starts at $1$ and returns to $1$ via some arbitrary trajectory, then b) the particle hops to $2$, then c) the particle hops in an arbitrary way back and forth between $2$ and $3$ before finally returning to $2$. Let's call the amplitude for all of these processes (which has contributions at all orders in $t$) $Q_{1,tot}$. Note that the probability amplitude for starting on site $j$, hopping in an arbitrary way, then returning to site $j$ is given by $G^R[j,j;\omega]$. It then follows that: \begin{equation} Q_{1,tot} = Z[2,2;\omega] (-t e^{-i \phi} ) G^R[1,1; \omega] \label{eq:Q1tot} \end{equation} Here $Z[2,2;\omega]$ denotes the amplitude for all trajectories that start and end on $2$, involving only hops between $2$ and $3$. Note that these trajectories are higher-order versions of the $Q_1$ process we discussed before. \item The second set of trajectories are a similar generalization of the $Q_2$ process. These are trajectories where a) the particle starts at $1$ and returns to $1$, then b) the particle hops to $3$ and then to $2$, then c) the particle hops in an arbitrary way back and forth between $2$ and $3$ before finally returning to $2$. The probability amplitude $Q_{2,tot}$ for these trajectories is given by \begin{equation} Q_{2,tot} = Z[2,2;\omega] (-t e^{i\phi} ) G_0 (-t e^{i \phi} ) G^R[1,1; \omega] \label{eq:Q2tot} \end{equation} Again, note the similarity to $Q_2$. \end{itemize} Considering now all trajectories, we have (to all orders in $t$) $G^R(2,1;\omega) = Q_{1,tot} + Q_{2,tot}$. It is easy to see that this sum is proportional to $Q_1 + Q_2$, and hence the \emph{identical} conditions in Eq.~(\ref{eq:NRCondPhi}) and (\ref{eq:NRCondKappa}) ensure that this will vanish. Our destructive interference condition thus holds to all orders in the hopping $t$.
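The all-orders claim can be confirmed directly by exact matrix inversion (i.e.~with no perturbative expansion at all); a minimal sketch at $\omega = 0$, using the tuning $\kappa = 2t$ and $\Phi = 3\phi = \pi/2$:

```python
import numpy as np

t = 1.0
kappa = 2 * t       # dissipation tuning, Eq. (NRCondKappa) at omega = 0
phi = np.pi / 6     # per-bond phase; loop flux Phi = 3*phi = pi/2

# Ring Hamiltonian in the uniform gauge, Eq. (HRing) / (NonHermMatrix):
e = np.exp(1j * phi)
H = -t * np.array([[0, e, e.conjugate()],
                   [e.conjugate(), 0, e],
                   [e, e.conjugate(), 0]])

# Exact Green's function at omega = 0 (all orders in t):
GR = np.linalg.inv(0.5j * kappa * np.eye(3) - H)

amp_1_to_2 = GR[1, 0]    # G^R[2,1;0]: propagation from site 1 to site 2
amp_2_to_1 = GR[0, 1]    # G^R[1,2;0]: propagation from site 2 to site 1
```

One finds that the $1 \rightarrow 2$ amplitude vanishes to numerical precision while the reverse amplitude stays finite, exactly as the interference argument predicts.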
The above conditions of course only ensure half of what we want (namely zero amplitude for propagation from site $1$ to site $2$). We also want the reverse process, propagation from $2$ to $1$, to have a non-zero amplitude. One can again analyze this amplitude to order $t^2$ by expanding $G^R(1,2;\omega)$. We again obtain a process that is first order in $t$ (amplitude $\tilde{Q}_1$), and a process that is second order in $t$ (amplitude $\tilde{Q}_2$). It is easy to see (or explicitly verify) that these amplitudes are given respectively by substituting $\Phi \rightarrow -\Phi$ in the expressions for $Q_1$ and $Q_2$. Looking again at Eq.~(\ref{eq:NRCondPhi}), we conclude: if we pick a non-zero value of flux $\Phi$ that causes $G^R(2,1;\omega)$ to vanish via destructive interference, then the amplitude for the reverse process $G^R(1,2;\omega)$ cannot be zero. The same conclusion holds if we include terms to all orders in $t$ (as we did above). Eqs.~(\ref{eq:NRCondPhi}) and (\ref{eq:NRCondKappa}) thus yield the behaviour we are after: propagation is allowed from $2$ to $1$, but not from $1$ to $2$. Even at this level, we can draw some important conclusions: \begin{itemize} \item Achieving non-reciprocal propagation in the lattice involves both tuning the synthetic flux to a non-trivial value, \emph{as well as} having the correct value of dissipation, i.e. value of $\kappa$. \item If we are interested in non-reciprocity at zero frequency, then the synthetic flux must be $\Phi = \pi/2$, corresponding to a maximal breaking of time reversal symmetry. \end{itemize} Finally, one might think that directional propagation within our lattice is possible even if $\kappa = 0$, if we work at some $\omega \neq 0$. In this case, Eq.~(\ref{eq:NRCondPhi}) tells us that $\Phi = 0$ is needed to have $G^R(2,1;\omega)$ be zero. But by the above argument, this will also cause $G^R(1,2;\omega)$ to be zero!
There is thus no directionality here: we have cancelled hopping in both directions (at this frequency) between sites $1$ and $2$. This emphasizes a crucial point: non-reciprocal propagation within a lattice requires both a synthetic gauge field and non-zero dissipation. Before ending this subsection, we wish to emphasize that the basic picture of directionality arising from a specific kind of tailored destructive interference is also a crucial ingredient in the graph-theory approach to non-reciprocal linear scattering devices introduced in Ref.~\cite{Ranzani2015}. \subsection{From directional internal propagation to directional external scattering} We now return to our original goal: ensuring directional scattering of waves incident on our three site ring. We focus on $\omega = 0$. From Eqs.~(\ref{eq:NRCondPhi}) and (\ref{eq:NRCondKappa}), we see that achieving directional propagation from $2 \rightarrow 1$ (and not in the reverse direction) requires tuning $\kappa = 2 t$ and $\Phi = \pi/2$. This immediately implies: \begin{equation} G^R(2,1;0) = 0 \,\,\, \implies s_{21}[0] = 0 \end{equation} where we have used the general expression for the scattering matrix given in Eq.~(\ref{eq:smatrixGR}). This is of course only part of what we would like for an ideal isolator. We would additionally want $s_{12}[0] = 1$ (i.e.~unit transmission in the forward direction of operation) and $s_{11} = 0$ (no reflections of signals incident on the input port $1$). Let's first focus on this second condition, which requires \begin{equation} s_{11}[0] \equiv 1 - i \kappa G^R[1,1;0] = 0 \label{eq:s11} \end{equation} While we could obtain (as always) $G^R$ by simply doing the matrix inversion in Eq.~(\ref{eq:GRResolvant}), we will instead use a more intuitive argument. Note that $G^R[1,1;0]$ describes the amplitude for all trajectories that start on site $1$ and then return to site $1$.
Apart from the trivial no-hopping process, there are three kinds of trajectories that contribute: \begin{enumerate} \item Trajectories where the particle hops once from $1$ to $2$, then hops arbitrarily, then returns to $2$, then returns to $1$. \item Trajectories where the particle hops once from $1$ to $3$, then hops arbitrarily, then returns to $3$, then returns to $1$. \item Trajectories where the particle hops once from $1$ to $3$, then hops arbitrarily returning to $3$, then hops to $2$, then returns to $1$. \end{enumerate} Note that our specific tuning of $\kappa, \Phi$ makes the {\it total} amplitude for $1 \rightarrow 2$ propagation vanish, and hence by symmetry, the total amplitudes for $2 \rightarrow 3$ and $3 \rightarrow 1$ propagation also vanish. As such, the above three categories are the only kinds of trajectories we need to consider. Given these three kinds of trajectories, and noting that $G^R(j,j;\omega)$ is the same for $j=1,2,3$ (by translational invariance), we can easily write down the expression for $G^R(1,1;\omega) \equiv \tilde{G}$: \begin{equation} \tilde{G} = G_0 + 2 G_0 (-t) \tilde{G} (-t ) G_0 + e^{i \Phi} G_0 (-t) G_0 (-t) \tilde{G} (-t) G_0 \end{equation} The second term on the RHS corresponds to processes 1 and 2, whereas the last term corresponds to process 3. We can now solve this equation for $\tilde{G}$ \begin{align} \tilde{G}^{-1} & = G_0^{-1} - 2 t^2 G_0 + e^{i \Phi} t^3 G_0^2 \nonumber \\ & = \frac{i \kappa}{2} - \frac{2 t^2}{i\kappa/2} + e^{i \Phi} \frac{t^3}{(i \kappa/2)^2} \nonumber \\ & = \frac{\kappa}{2} \left( i - \frac{2}{i} - e^{i \Phi} \right) \nonumber \\ & = i \kappa \end{align} We have used the directionality tuning constraints above which set $\Phi = \pi/2$ and $t = \kappa/2$. Using this expression in Eq.~(\ref{eq:s11}), we find that, as desired, $s_{11}[0] = 0$.
Hence, by tuning the lattice dynamics to be non-reciprocal, we have automatically also forced the system to be impedance matched, i.e.~zero reflections at zero frequency on port 1. By translational invariance, it also follows that $s_{22}[0] = s_{33}[0] = 0$. Note that we can easily understand this lack of reflections in terms of a true impedance matching condition having been met. Cavity 1 can be viewed as being coupled to two dissipative ports: the input-output waveguide (coupling rate $\kappa$), and the effective dissipation coming from the coupling to cavity 3, which is damped. Note that cavity 2 does not contribute because of the directionality. The induced damping of cavity 1 via the coupling to cavity 3 is $\kappa_{eff} = 4 t^2 / \kappa = \kappa$. The two ports coupled to cavity 1 thus have the same coupling rate, which explains the absence of reflections. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{{Circulator}} \caption{ Schematic showing how our three-cavity ring (c.f.~Eq.~(\ref{eq:HRing})), when coupled to input-output waveguides (coupling rate $\kappa$), can function as a circulator of signals incident on the cavities at $\omega = 0$. Achieving this circulator behaviour requires both tuning the synthetic gauge flux to $\Phi = \pi/2$ and carefully tuning the amount of dissipation induced by the waveguides, i.e. $\kappa = 2 t$. } \label{fig:Circulator} \end{figure} Turning to the full scattering matrix, note that translational invariance also tells us that all CCW scattering elements must be zero, and hence $s_{32}[0] = s_{13}[0] = 0$. We thus have that the zero frequency scattering matrix must have the form: \begin{equation} \bm{s}[0] = \left(\begin{array}{ccc} 0 & z & 0 \\ 0 & 0 & z \\ z & 0 & 0 \end{array}\right) \end{equation} where $z$ is a complex number. We must have $|z| = 1$ for the scattering matrix to be unitary; an explicit calculation confirms this, with the phase of $z$ set by the gauge choice for the hopping phases.
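The full $3 \times 3$ zero-frequency scattering matrix can be evaluated directly from Eq.~(\ref{eq:smatrixGR}); a minimal sketch verifying the circulator structure claimed above (in the uniform gauge, with $t$ set to unity):

```python
import numpy as np

t = 1.0
kappa = 2 * t       # circulator tuning at omega = 0
phi = np.pi / 6     # per-bond phase; loop flux Phi = 3*phi = pi/2

# Ring Hamiltonian in the uniform gauge, Eq. (HRing):
e = np.exp(1j * phi)
H = -t * np.array([[0, e, e.conjugate()],
                   [e.conjugate(), 0, e],
                   [e, e.conjugate(), 0]])
GR = np.linalg.inv(0.5j * kappa * np.eye(3) - H)

# Scattering matrix, Eq. (smatrixGR), at omega = 0:
s = np.eye(3) - 1j * kappa * GR

# Circulator structure: zero reflections, unit-magnitude transmission
# in one circulation direction only.
z = s[0, 1]
```

Running this confirms that $|s|$ has exactly the form displayed above: zeros on the diagonal and in one circulation direction, and a single unit-magnitude transmission amplitude $z$ (whose phase depends on the gauge convention) in the other.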
We thus have automatically also fulfilled our other requirement for a perfect isolator: perfect transmission in the forward direction. While our goal was to construct an isolator, the translational invariance between all three modes in our system means that we have in fact built a circulator, where scattering between ports is allowed in the forward, CW direction, but is zero in the CCW direction. This scattering structure is depicted schematically in Fig.~\ref{fig:Circulator}. \subsection{Take home messages} This simple three-site ring model has revealed several important lessons that are worth re-iterating: \begin{itemize} \item Achieving non-reciprocal scattering first requires achieving non-reciprocal propagation within the 3-site ring of cavities. \item This non-reciprocal propagation is a property of the non-Hermitian Hamiltonian $H_{\rm eff}$ which describes the propagation of photons between the cavities in the presence of the synthetic flux and the dissipation generated by the coupling waveguides (c.f.~Eq.~(\ref{eq:NonHermMatrix})). \item One cannot achieve non-reciprocal propagation without having dissipation. \item Achieving non-reciprocity required matching the coherent and dissipative dynamics encoded in $H_{\rm eff}$, see Eq.~(\ref{eq:NRCondKappa}). \end{itemize} \section{Effective models with dissipative interactions} In the last section, we understood how one could use a simple ring of three cavities to construct an ideal circulator. At the heart of this model was an effective $3 \times 3$ non-Hermitian Hamiltonian matrix that described propagation of photons in the ring subject to dissipation (loss) as well as a synthetic gauge field. We saw that achieving directionality at a given frequency required tuning both the level of dissipation $\kappa$ as well as the value of the synthetic gauge field.
In this section, we look at this three-ring system from a slightly different vantage point, one that will eventually let us understand how to generate non-reciprocal quantum interactions in a much more general way. Recall that in the previous section, we were primarily interested in cavities $1$ and $2$, as we wanted a 2-port scattering device with the scattering matrix of an isolator. As such, it is convenient and revealing to construct an effective model for just these two modes, where mode $3$ (and its coupling waveguide) are eliminated. As we now show, eliminating mode 3 generates an unusual dissipative interaction between modes $1$ and $2$ that plays a crucial role in establishing directionality. Let's start by generalizing the three-site ring model of the previous section. We now allow the hopping amplitudes to be unequal: \begin{equation} |t_{12}| \equiv t, \,\,\,\,\,\,\,\, |t_{23}| = |t_{31}| = t' \end{equation} We also allow each mode $j$ to have a different damping rate $\kappa_j$ (i.e. coupling rate to its input-output waveguide). Going forward, the goal is to get an isolator for ports 1 and 2; we don't care about anything having to do with mode 3 or its waveguide. Further, we will assume that $\kappa_3 \gg t',t,\kappa_1,\kappa_2$. This large damping rate will let us adiabatically eliminate mode 3 from our problem.
To proceed, we write the (Heisenberg-Langevin) equation of motion for mode 3: \begin{equation} \frac{d}{dt} \hat{a}_3(t) = -\frac{\kappa_3}{2} \hat{a}_3 + i t' \left( e^{-i \phi} \hat{a}_2(t) + e^{i \phi} \hat{a}_1(t) \right) - \sqrt{\kappa_3} \hat{a}_{3, {\rm in}}(t) \end{equation} We can Fourier transform this equation, and then solve for $\hat{a}_3[\omega]$: \begin{equation} \hat{a}_3[\omega] = \frac{1}{-i \omega + \kappa_3/2} \left( i t' \left( e^{-i \phi} \hat{a}_2[\omega] + e^{i \phi} \hat{a}_1[\omega] \right) - \sqrt{\kappa_3} \hat{a}_{3, {\rm in}} [\omega] \right) \end{equation} We now assume that $\kappa_3$ is much larger than the frequencies $\omega$ that we care about (e.g. the bandwidth of signals that we will feed into our isolator). We can thus ignore the $\omega$ in the denominator of the above expression. Returning to the time domain, we see that this is an adiabatic approximation: the heavily damped $\hat{a}_3(t)$ mode has an amplitude that is determined by the instantaneous value of the other two modes: \begin{equation} \hat{a}_3(t) \simeq \frac{2 i t'}{\kappa_3} \left( e^{-i \phi} \hat{a}_2(t) + e^{i \phi} \hat{a}_1(t) \right) - \frac{2}{\sqrt{\kappa_3}} \hat{a}_{3, {\rm in}}(t) \end{equation} Consider now the EOM for the $\hat{a}_1$ mode: \begin{equation} \frac{d}{dt} \hat{a}_1(t) = -\frac{\kappa_1}{2} \hat{a}_1 + i \left( t e^{i \phi} \hat{a}_2(t) + t' e^{-i \phi} \hat{a}_3(t) \right) + (...) \end{equation} where we do not write the noise terms explicitly. Substituting in the value of $\hat{a}_3(t)$, we find: \begin{equation} \frac{d}{dt} \hat{a}_1(t) \simeq -\frac{1}{2} (\kappa_1 + \tilde{\kappa} ) \hat{a}_1 + i \tilde{t}_{12} \hat{a}_2(t) + (...)
\end{equation} with \begin{equation} \tilde{\kappa} = \frac{4 (t')^2}{\kappa_3} , \,\,\,\, \tilde{t}_{12} = e^{i \phi} t + e^{-2 i \phi} \frac{2 i (t')^2}{\kappa_3} \end{equation} Here $\tilde{\kappa}$ represents extra induced damping from mode $3$ on mode $1$, whereas $\tilde{t}_{12}$ describes the modified effective hopping from mode $2$ to $1$ (where the second term is a virtual process going through mode 3). An analogous calculation for $\hat{a}_2$ yields: \begin{equation} \frac{d}{dt} \hat{a}_2(t) \simeq -\frac{1}{2} (\kappa_2 + \tilde{\kappa} ) \hat{a}_2 + i \tilde{t}_{21} \hat{a}_1(t) + (...) \end{equation} with $\tilde{\kappa}$ as above, and \begin{equation} \tilde{t}_{21} = e^{-i \phi} t + e^{2 i \phi} \frac{2 i (t')^2}{\kappa_3} \end{equation} Note crucially that the tunnel couplings between modes 1 and 2 in the above equations cannot correspond to a Hermitian beam-splitter Hamiltonian, as $\tilde{t}_{12} \neq \tilde{t}_{21}^*$. We see that the contribution to these hopping matrix elements from mode 3 gives them a non-Hermitian structure. Formally, we can write \begin{equation} \tilde{t}_{12} = e^{i \phi} t + \tilde{t}_{\rm diss}, \,\,\,\, \tilde{t}_{21} = e^{-i \phi} t - \left( \tilde{t}_{\rm diss} \right)^* \end{equation} where $\tilde{t}_{\rm diss}$ describes a ``dissipative'' tunneling coupling that is mediated by mode $3$. Ignoring noise terms, we could describe this structure by writing our equations of motion in terms of a non-Hermitian effective Hamiltonian matrix $\bm{H}_{\rm eff,2} $, i.e.
\begin{equation} \frac{d}{dt} \left(\begin{array}{c} \hat{a}_1 \\ \hat{a}_2 \end{array}\right) = -i \bm{H}_{\rm eff,2} \left(\begin{array}{c} \hat{a}_1 \\ \hat{a}_2 \end{array}\right) \end{equation} with \begin{equation} \bm{H}_{\rm eff,2} = \left(\begin{array}{cc} -i (\kappa_1 + \tilde{\kappa})/2 & e^{i \phi} t + \tilde{t}_{\rm diss} \\ e^{-i \phi} t - \left( \tilde{t}_{\rm diss} \right)^* & -i (\kappa_2 + \tilde{\kappa})/2 \end{array}\right) \label{eq:Heff2} \end{equation} Within this formulation, it becomes even easier to see how to have mode $1$ influenced by mode $2$, but not vice-versa: we simply tune $\tilde{t}_{\rm diss}$ so that $\tilde{t}_{21} = 0$. Because of the non-Hermitian structure, this does not simultaneously force $\tilde{t}_{12}$ to be 0 as well. Matching the magnitudes of the coherent and dissipative tunnelings, we find the condition: \begin{equation} \frac{\tilde{\kappa}}{2} \equiv \frac{2 (t')^2}{\kappa_3} = t \end{equation} This is reminiscent of Eq.~(\ref{eq:NRCondKappa}) of the last section. If this condition is met, we then have: \begin{equation} \tilde{t}_{12/21} = e^{\pm i \phi} t \left( 1 + i e^{\mp 3i \phi} \right) \end{equation} Hence, if we pick $\Phi = 3 \phi = \pi/2$, we obtain $\tilde{t}_{21} = 0$ (i.e.~no tunneling from 1 to 2), whereas $\tilde{t}_{12} = 2 t e^{i \pi/6} $. This flux tuning matches the condition in Eq.~(\ref{eq:NRCondPhi}). While it seems like we have simply re-derived the results of the previous section, the approach here leads to an important physical interpretation that can be generalized to many different situations: \begin{itemize} \item Modes 1 and 2 are coupled together in two ways. The first is the (Hermitian) Hamiltonian coupling described by the hopping matrix element $t$; we will often refer to this as a coherent coupling, as dissipation is not involved. \item Modes 1 and 2 are also coupled to a common bath, i.e.~the highly damped mode $\hat{a}_3$.
This mode mediates a {\it dissipative} hopping interaction between modes 1 and 2. \item The Hamiltonian interaction on its own is reciprocal; it allows hopping in both directions between modes 1 and 2. The same is true for the purely dissipative hopping interaction. \item However, when we combine both of these kinds of hopping processes, we can get directionality. We can have the two processes cancel for one direction of the hopping but not the other. \end{itemize} We thus have a very general recipe: {\bf non-reciprocal interactions arise from the balancing of ``coherent'' and ``dissipative'' interactions}. The above discussion of the mode-3-eliminated system has been phrased using equations of motion and an effective non-Hermitian Hamiltonian matrix. This is a convenient description for linear systems, and (without noise terms) has been used extensively in the study of classical non-Hermitian systems (where non-Hermiticity is usually obtained by applying incoherent loss and/or gain). For quantum systems that are not linear, the above formulation is not the most convenient. Instead, it is useful to describe the dissipative interactions generated by the bath of interest (here mode 3) using a quantum master equation. To that end, we can view the part of our original (Hermitian) Hamiltonian that involves mode 3 as a kind of system-bath Hamiltonian: \begin{equation} \hat{H}_{SB} = - t' \hat{a}^\dagger_3 \hat{z}_{\rm sys} + h.c., \,\,\,\,\, \hat{z}_{\rm sys} = e^{-i \phi} \hat{a}_2 + e^{i \phi} \hat{a}_1 \end{equation} Here, we interpret $\hat{a}_3^\dagger$ as an operator that creates an excitation in the bath, whereas $\hat{z}_{\rm sys}$ is the system operator that couples to the bath. To be explicit, we can create an excitation in the bath either by removing a photon from mode 1 or by removing a photon from mode 2 (with coherence and interference between these two possibilities).
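The adiabatic elimination carried out above can also be checked against an exact calculation: at $\omega = 0$, eliminating mode 3 from the linear equations of motion is precisely a Schur complement of the $3\times 3$ dynamical matrix. The numpy sketch below (parameter values are arbitrary illustrative choices) confirms the expressions for $\tilde{t}_{12}$, $\tilde{t}_{21}$ and $\tilde{\kappa}$; the Schur complement at zero frequency is exact, since the only approximation made above was dropping the $\omega$ dependence.

```python
import numpy as np

# Exact elimination of mode 3 at omega = 0 via a Schur complement, compared
# against the effective couplings t12~, t21~ and induced damping kappa~
# quoted in the text. Parameter values are arbitrary.
t, tp, phi = 0.7, 1.3, 0.4
k1, k2, k3 = 0.1, 0.2, 50.0

e = np.exp(1j * phi)
# d a_j / dt = sum_k M[j, k] a_k  (noise terms omitted)
M = np.array([[-k1 / 2, 1j * t * e, 1j * tp * np.conj(e)],
              [1j * t * np.conj(e), -k2 / 2, 1j * tp * e],
              [1j * tp * e, 1j * tp * np.conj(e), -k3 / 2]])

# Schur complement over the mode-3 block
Meff = M[:2, :2] - np.outer(M[:2, 2], M[2, :2]) / M[2, 2]

kappa_tilde = 4 * tp**2 / k3
t12 = e * t + np.conj(e)**2 * 2j * tp**2 / k3      # effective 2 -> 1 hopping
t21 = np.conj(e) * t + e**2 * 2j * tp**2 / k3      # effective 1 -> 2 hopping

assert np.isclose(Meff[0, 0], -(k1 + kappa_tilde) / 2)
assert np.isclose(Meff[1, 1], -(k2 + kappa_tilde) / 2)
assert np.isclose(Meff[0, 1], 1j * t12)
assert np.isclose(Meff[1, 0], 1j * t21)
print("adiabatic elimination formulas verified")
```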
In the limit of large $\kappa_3$, we could now go through the standard steps of treating mode 3 as a reservoir where any created excitations decay quickly, and derive a quantum master equation, i.e.~an equation of motion for the reduced density matrix describing modes 1 and 2. The steps in deriving the master equation are discussed in many standard references (see e.g.~\cite{Gardiner00}). The resulting Lindblad master equation has the standard form: \begin{equation} \frac{d}{dt} \hat{\rho} = -i [ \hat{H}_{12} , \hat{\rho} ] + \tilde{\kappa} \left( \hat{z}_{\rm sys} \hat{\rho} \hat{z}_{\rm sys}^\dagger - \frac{1}{2} \{ \hat{z}_{\rm sys}^\dagger \hat{z}_{\rm sys}, \hat{\rho} \} \right) \label{eq:FullMaster} \end{equation} where \begin{equation} \hat{H}_{12} = -t \left( e^{-i \phi} \hat{a}^\dagger_2 \hat{a}_1 + h.c. \right) \end{equation} The first term of the master equation describes the usual coherent (i.e. non-dissipative) evolution under the hopping Hamiltonian $\hat{H}_{12}$. The remaining terms describe the dissipative effect of the reservoir. Heuristically, there is a probability per unit time $\tilde{\kappa}$ of having a quantum jump where the system state suddenly changes from $| \psi \rangle$ to $\hat{z}_{\rm sys} | \psi \rangle$ due to the creation of an excitation in the bath. Note that we can make a gauge transformation $\hat{a}_1 \rightarrow e^{i \phi} \hat{a}_1$ to eliminate the phase from $\hat{H}_{12}$. Further, the overall phase of $\hat{z}_{\rm sys}$ plays no role. We can thus re-write the master equation in a form where the non-trivial phase factor only appears in the dissipative part of the equation: \begin{align} \frac{d}{dt} \hat{\rho} & = -i [ -t \left( \hat{a}^\dagger_2 \hat{a}_1 + h.c.
\right), \hat{\rho} ] + \tilde{\kappa} \left( \hat{z} \hat{\rho} \hat{z}^\dagger - \frac{1}{2} \{ \hat{z}^\dagger \hat{z}, \hat{\rho} \} \right) \label{eq:MasterRingExplicit} \\ \hat{z} & = \hat{a}_2 + e^{i \Phi} \hat{a}_1 \end{align} This master equation fully describes the possibly directional dynamics of hopping between our cavities 1 and 2 in the adiabatic limit of interest (where $\kappa_3$ is large). The dynamics is fully directional when we fulfill the same tuning conditions listed above, i.e. \begin{equation} \tilde{\kappa} /2 = t, \,\,\,\,\, \Phi = \pm \pi/2 \end{equation} Here, the $+$ sign gives us $2 \rightarrow 1$ directionality, the $-$ sign $1 \rightarrow 2$ directionality. One can easily confirm that the equations of motion for the average values of $\hat{a}_1$ and $\hat{a}_2$ obtained from the master equation are completely consistent with the Heisenberg-Langevin equations we used above. Before leaving this simple model discussion, it is interesting to note that we can extract an effective non-Hermitian Hamiltonian from the master equation. We can re-write it in the form \begin{equation} \frac{d}{dt} \hat{\rho} = -i \left( \hat{H}_{\rm NH} \hat{\rho} - \hat{\rho} \hat{H}^\dagger_{\rm NH} \right) + \tilde{\kappa} \hat{z} \hat{\rho} \hat{z}^\dagger \label{eq:MasterHeff} \end{equation} with \begin{align} \hat{H}_{\rm NH} & = -t \left( \hat{a}^\dagger_2 \hat{a}_1 + h.c. \right) - i \frac{\tilde{\kappa}}{2} \hat{z}^\dagger \hat{z} \\ & = \left(-t - i \frac{\tilde{\kappa}}{2} e^{i \Phi} \right) \hat{a}^\dagger_2 \hat{a}_1 + \left(-t - i \frac{\tilde{\kappa}}{2} e^{-i \Phi} \right) \hat{a}^\dagger_1 \hat{a}_2 - i \frac{\tilde{\kappa}}{2} \left( \hat{a}^\dagger_1 \hat{a}_1 + \hat{a}^\dagger_2 \hat{a}_2 \right) \end{align} This corresponds to the non-Hermitian Hamiltonian we identified directly from the equations of motion, c.f. Eq.~(\ref{eq:Heff2}).
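To see the directionality explicitly, one can take first moments of this master equation. Using $d\langle \hat{a}_j \rangle/dt = \textrm{tr}(\hat{a}_j \, d\hat{\rho}/dt)$ one finds the two mean-value equations quoted in the comments below (a straightforward exercise); at the tuning point the $2\times 2$ dynamical matrix becomes triangular. A minimal numpy sketch:

```python
import numpy as np

# First moments of the master equation above (noise terms average to zero):
#   d<a1>/dt = i t <a2> - (k/2) (<a1> + e^{-i Phi} <a2>)
#   d<a2>/dt = i t <a1> - (k/2) (<a2> + e^{+i Phi} <a1>)
# with k = kappa_tilde. At kappa_tilde/2 = t and Phi = +pi/2 the dynamical
# matrix is triangular: mode 2 evolves independently of mode 1 (2 -> 1).
t = 1.0
k = 2 * t                      # tuning condition kappa_tilde / 2 = t
Phi = np.pi / 2                # '+' sign: 2 -> 1 directionality

M = np.array([[-k / 2, 1j * t - (k / 2) * np.exp(-1j * Phi)],
              [1j * t - (k / 2) * np.exp(1j * Phi), -k / 2]])

assert abs(M[1, 0]) < 1e-12             # mode 2 is blind to mode 1 ...
assert np.isclose(abs(M[0, 1]), 2 * t)  # ... while mode 1 is driven by mode 2
```

The magnitude of the surviving element, $2t$, matches $|\tilde{t}_{12}| = 2t$ found earlier; its phase differs only through the gauge transformation used to reach Eq.~(\ref{eq:MasterRingExplicit}).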
We thus see that our master equation describes both the non-reciprocal, non-Hermitian dynamics of photons in the ring, along with the corresponding noise terms (i.e.~the remaining terms in Eq.~(\ref{eq:MasterHeff})). Surprisingly, we will see that the above master equation can be generalized to describe non-reciprocal interactions that {\it cannot} be associated with the non-Hermitian Hamiltonian part of the master equation, $\hat{H}_{\rm NH}$. Note that $\hat{H}_{\rm NH}$ plays a distinctive role when one ``unravels'' the master equation in terms of quantum trajectories. In this approach, one has a stochastic Schr\"{o}dinger equation for the system; averaging over this stochastic process results in the final master equation. For these stochastic trajectories, $\hat{H}_{\rm NH}$ describes the evolution of the system in the absence of a quantum jump. We thus see that in this case, the no-jump evolution in a stochastic trajectory is described by a non-Hermitian Hamiltonian whose form can be directional. The ability to realize effective non-Hermitian Hamiltonians by monitoring a system and post-selecting to no-jump trajectories has been studied by several recent works (see e.g.~\cite{Murch2019}). \section{General quantum model for non-reciprocal interactions} In previous sections, we saw in detail how one-way, non-reciprocal dynamics emerged in a simple model of a ring of three cavities, where each cavity was coupled to a waveguide. We established that non-reciprocal scattering was directly connected to non-reciprocal propagation \emph{within} the ring, something that could be accomplished by designing a non-Hermitian Hamiltonian with both dissipation and a non-trivial synthetic gauge flux (c.f.~Eq.~(\ref{eq:NonHermMatrix})). We also saw in the last section that we could obtain an effective description of the non-reciprocity between cavities 1 and 2 where the third (highly-damped) cavity was adiabatically eliminated.
In the resulting effective model, directionality emerged from the balancing of a coherent Hamiltonian interaction (i.e. simple tunneling between modes 1 and 2) and a ``dissipative interaction'' mediated by mode 3. This latter interaction was described by a collective dissipation term in the quantum master equation (c.f.~Eq.~(\ref{eq:MasterRingExplicit})). In this section, we now show that the structure of the master equation in Eq.~(\ref{eq:MasterRingExplicit}) can be generalized to make {\it any} starting interaction between two quantum systems non-reciprocal. This will allow us to establish a general recipe for designing non-reciprocal quantum interactions, in settings where it is impossible to reduce the dynamics to an effective non-Hermitian Hamiltonian. Our discussion here follows Ref.~\cite{Metelmann2015}, but provides additional heuristic insights. \subsection{Basic structure} Consider a general situation where we have two quantum systems $1$ and $2$ (described by a tensor-product Hilbert space), that interact via a Hermitian Hamiltonian of the form \begin{equation} \hat{H}_{\rm coh}(\lambda)= \frac{1}{2} \left( \lambda \hat{O}_1 \hat{O}_2 + \lambda^* \hat{O}_1^\dagger \hat{O}_2^\dagger \right) \label{eq:HCoherent} \end{equation} Here, $\hat{O}_1$ is an operator acting on system 1, and $\hat{O}_2$ an operator acting on system 2. This Hamiltonian describes a reciprocal interaction, in that the evolution of system $1$ will in general depend on what system $2$ is doing, and vice versa. For convenience, we take these operators to be dimensionless, hence $\lambda$ has the units of frequency and controls the strength of the interaction. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{{DissNonRecipSchematic}} \caption{ Schematic showing the basic setup for generating an arbitrary non-reciprocal interaction between two quantum systems $1$ and $2$.
We start with a coherent Hamiltonian interaction $\hat{H}_{\rm coh}$ between the two systems involving the product of operators $\hat{O}_1$ and $\hat{O}_2$ for each system, with an interaction strength $\lambda$. By introducing correlated dissipation on both systems in just the right way (via coupling to a common reservoir at rate $\Gamma$), a non-reciprocal interaction can be achieved. } \label{fig:DissNonRecipSchematic} \end{figure} The goal is to convert this interaction into a non-reciprocal interaction that is fully directional. To accomplish this, we take the same approach as in the last section: we will achieve directionality by balancing the coherent interaction in $\hat{H}_{\rm coh}$ with a dissipative interaction mediated by a reservoir that couples to both systems 1 and 2 in just the right way. This dissipative interaction will be described by dissipative terms in a master equation analogous to the last terms in Eq.~(\ref{eq:FullMaster}). We are thus led to consider a master equation of the form: \begin{align} \frac{d}{dt} \hat{\rho} & = -i [ \hat{H}_{\rm coh}(\lambda), \hat{\rho} ] + \Gamma \mathcal{D}[ \hat{O}_1 + i e^{i \theta} \hat{O}_2^\dagger ] \hat{\rho} \end{align} where the dissipative superoperator $\mathcal{D}$ is defined as \begin{equation} \mathcal{D}[\hat{z}] \hat{\rho} \equiv \hat{z} \hat{\rho} \hat{z}^\dagger - \frac{1}{2} \{ \hat{z}^\dagger \hat{z}, \hat{\rho} \} \end{equation} Again, the last dissipative terms here correspond to coupling systems 1 and 2 to a common reservoir that mediates a dissipative interaction. Here $\Gamma$ controls the strength of this dissipative interaction, while the phase $\theta$ controls its form. Also note the non-trivial phase factors in the dissipative terms: as we will see, these encode an effective synthetic gauge flux, something that we saw was crucial to obtaining non-reciprocity.
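For concreteness, the dissipator $\mathcal{D}$ is easy to transcribe into code. The short numpy sketch below does so and checks two structural properties that any Lindblad dissipator must have: it annihilates the trace (so total probability is conserved) and it maps Hermitian $\hat{\rho}$ to Hermitian matrices.

```python
import numpy as np

# Direct transcription of D[z] rho = z rho z^dag - (1/2){z^dag z, rho}.
def dissipator(z, rho):
    zdz = z.conj().T @ z
    return z @ rho @ z.conj().T - 0.5 * (zdz @ rho + rho @ zdz)

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))   # arbitrary jump operator
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = X @ X.conj().T
rho /= np.trace(rho)                                         # a valid density matrix

out = dissipator(z, rho)
assert abs(np.trace(out)) < 1e-10       # trace preserving
assert np.allclose(out, out.conj().T)   # Hermiticity preserving
```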
We now claim that by tuning $\Gamma$ and $\theta$ appropriately, we can achieve a fully directional interaction between systems $1$ and $2$. To see whether this is possible, let's consider how the expectation values of system 1 and system 2 operators evolve under the above dynamics. Take $\hat{A}_1$ ($\hat{A}_2$) to be an arbitrary system $1$ (system $2$) operator. As they act on different systems, it is natural to take these operators to commute with one another \footnote{ The one exception to having system $1$ and $2$ operators commute would be the case where both systems are fermionic, meaning that $\hat{A}_1$ and $\hat{A}_2$ could anti-commute with one another. One can confirm that our derivation of the equations of motion for averages remains unchanged in this case. } . Let's now calculate the equation of motion for their average values using the above master equation. Consider first $\hat{A}_1$: \begin{align} \frac{d}{dt} \left \langle \hat{A}_1(t) \right \rangle & \equiv \textrm{tr} \left( \hat{A}_1 \frac{d}{dt} \hat{\rho}(t) \right) \nonumber \\ & = \Gamma \textrm{tr} \left( \hat{A}_1 \mathcal{D}[\hat{O}_1] \, \hat{\rho} \right) -i \, \textrm{tr} \left( \left[ \hat{A}_1, \frac{\tilde{\lambda}_{12}}{2} \hat{O}_1 \hat{O}_2 + \textrm{h.c.} \right] \, \hat{\rho} \right) \label{eq:A1AverageEOM} \end{align} where we have defined \begin{equation} \tilde{\lambda}_{12} = \lambda + \Gamma e^{-i \theta} \label{eq:LamTilde12} \end{equation} For $\Gamma = 0$ (i.e.~no dissipation), we have the expected coherent evolution generated by $\hat{H}_{\rm coh}$, i.e.~Eq.~(\ref{eq:A1AverageEOM}) reflects the usual Heisenberg equation of motion for $\hat{A}_1$. Dissipation (i.e.~terms proportional to $\Gamma$) has two effects: it adds purely local dissipative dynamics on system $1$, generating the first term on the RHS of Eq.~(\ref{eq:A1AverageEOM}), and also generates a dissipative interaction.
The dissipative interaction thus enters in a way that is completely analogous to $\hat{H}_{\rm coh}$, and would seem to just modify the interaction strength from $\lambda \rightarrow \tilde{\lambda}_{12}$. Concretely, as far as system $1$ is concerned, the evolution is indistinguishable from the master equation: \begin{align} \frac{d}{dt} \hat{\rho} & = -i [ \hat{H}_{\rm coh}(\tilde{\lambda}_{12}), \hat{\rho} ] + \Gamma \mathcal{D}[ \hat{O}_1 ] \hat{\rho}, \end{align} i.e.~the dissipative interaction looks like a modification of the Hamiltonian interaction strength. Let's now repeat this exercise for an arbitrary system $2$ operator $\hat{A}_2$: \begin{align} \frac{d}{dt} \left \langle \hat{A}_2(t) \right \rangle & \equiv \textrm{tr} \left( \hat{A}_2 \frac{d}{dt} \hat{\rho}(t) \right) \nonumber \\ & = \Gamma \textrm{tr} \left( \hat{A}_2 \mathcal{D}[\hat{O}_2^\dagger] \, \hat{\rho} \right) -i \, \textrm{tr} \left( \left[ \hat{A}_2, \frac{\tilde{\lambda}_{21}}{2} \hat{O}_1 \hat{O}_2 + \textrm{h.c.} \right] \, \hat{\rho} \right) \label{eq:A2AverageEOM} \end{align} where we have defined \begin{equation} \tilde{\lambda}_{21} = \lambda - \Gamma e^{-i \theta} \label{eq:LamTilde21} \end{equation} Again, as far as system 2 is concerned, it is as though we modified the Hamiltonian interaction strength from $\lambda \rightarrow \tilde{\lambda}_{21}$, and also introduced some purely local dissipation on system $2$. We now see from Eqs.~(\ref{eq:LamTilde12}) and (\ref{eq:LamTilde21}) that there is a crucial asymmetry: if $\Gamma \neq 0$, then the effective interaction strength seen by system $1$, $\tilde{\lambda}_{12}$, can differ both in magnitude and phase from $\tilde{\lambda}_{21}$, the effective interaction strength seen by system $2$. Our working definition of non-reciprocity here will be situations where these couplings differ in magnitude.
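These decompositions can be verified numerically. The sketch below does this for two qubits ($\hat{O}_1 = \hat{\sigma}^-_1$, $\hat{O}_2 = \hat{\sigma}^-_2$) with arbitrary parameter values. Note that it fixes the jump operator to $\hat{O}_1 + i e^{i\theta}\hat{O}_2^\dagger$, the sign convention for which $\tilde{\lambda}_{12} = \lambda + \Gamma e^{-i\theta}$ and $\tilde{\lambda}_{21} = \lambda - \Gamma e^{-i\theta}$ hold; flipping that sign simply swaps the two expressions.

```python
import numpy as np

# Verify: d<A1>/dt = Gam*tr(A1 D[O1] rho) - i*tr([A1, (lam12/2) O1 O2 + h.c.] rho)
# (and the analogous statement for A2) for the master equation with
# H = (lam/2)(O1 O2 + h.c.) and jump operator z = O1 + i e^{i theta} O2^dag.
def D(z, rho):
    zdz = z.conj().T @ z
    return z @ rho @ z.conj().T - 0.5 * (zdz @ rho + rho @ zdz)

I2 = np.eye(2, dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)    # qubit lowering operator
O1, O2 = np.kron(sm, I2), np.kron(I2, sm)

lam, Gam, theta = 0.8, 0.5, 1.1                   # arbitrary test values
H = 0.5 * lam * (O1 @ O2 + (O1 @ O2).conj().T)
z = O1 + 1j * np.exp(1j * theta) * O2.conj().T

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = X @ X.conj().T
rho /= np.trace(rho)                              # random density matrix

drho = -1j * (H @ rho - rho @ H) + Gam * D(z, rho)

def interaction(coupling):                        # (coupling/2) O1 O2 + h.c.
    Hi = 0.5 * coupling * O1 @ O2
    return Hi + Hi.conj().T

lam12 = lam + Gam * np.exp(-1j * theta)
lam21 = lam - Gam * np.exp(-1j * theta)

A1 = np.kron(rng.normal(size=(2, 2)) + 0j, I2)    # arbitrary system-1 operator
lhs1 = np.trace(A1 @ drho)
rhs1 = Gam * np.trace(A1 @ D(O1, rho)) \
     - 1j * np.trace((A1 @ interaction(lam12) - interaction(lam12) @ A1) @ rho)
assert np.isclose(lhs1, rhs1)

A2 = np.kron(I2, rng.normal(size=(2, 2)) + 0j)    # arbitrary system-2 operator
lhs2 = np.trace(A2 @ drho)
rhs2 = Gam * np.trace(A2 @ D(O2.conj().T, rho)) \
     - 1j * np.trace((A2 @ interaction(lam21) - interaction(lam21) @ A2) @ rho)
assert np.isclose(lhs2, rhs2)
```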
In particular, we can obtain a perfectly non-reciprocal situation by tuning the dissipative couplings so that: \begin{equation} \Gamma e^{-i \theta} = - \lambda \end{equation} In this case we have: \begin{equation} \tilde{\lambda}_{12} = 0, \,\,\,\, \tilde{\lambda}_{21} = 2 \lambda \end{equation} We thus achieve a perfectly non-reciprocal coupling between the two systems. System $1$ is not influenced by system 2 at all, but system 2 is driven by system 1 in the same way as would be achieved with a Hamiltonian $\hat{H} = \hat{H}_{\rm coh}(2 \lambda)$. If we instead replace $\lambda \rightarrow -\lambda$ in the above equation, we would obtain perfect non-reciprocity in the opposite direction. Note that in the fully directional case, the isolated system does not experience any direct driving by the other system, but does feel some local dissipation whose strength is proportional to $\lambda$ (e.g.~the first terms on the RHS of Eqs.~(\ref{eq:A1AverageEOM}) and (\ref{eq:A2AverageEOM})). The presence of this dissipation is an unavoidable consequence of trying to generate a strongly directional interaction, and mirrors what we found in our analysis of the three-site ring. For completeness, we write the final directional master equation. Redefining $\hat{O}_1, \hat{O}_2$ so that $\lambda$ is real and positive, it has the form \begin{align} \frac{d}{dt} \hat{\rho} & = -i \frac{\lambda}{2} [ \hat{O}_1 \hat{O}_2 + \hat{O}_1^\dagger \hat{O}_2^\dagger , \hat{\rho} ] + \lambda \mathcal{D}[ \hat{O}_1 \mp i \hat{O}_2^\dagger ] \hat{\rho} \label{eq:FinalGeneralDirectionalMEQ} \end{align} where the upper (lower) sign corresponds to $1 \rightarrow 2$ ($2 \rightarrow 1$) directionality. With this form, we can explicitly see the parallel to the simple three-mode model studied in the previous section, c.f.~Eq.~(\ref{eq:MasterRingExplicit}).
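Since the claim is that this works for {\it any} coupling operators, it is worth testing on a genuinely nonlinear example. The numpy sketch below takes two qubits with $\hat{O}_1 = \hat{\sigma}^-_1$, $\hat{O}_2 = \hat{\sigma}^-_2$ and the upper-sign ($1 \rightarrow 2$) master equation, and checks directionality at the level of the full density matrix: the reduced state of system 1 is independent of how system 2 is initialized, while the converse is false. (The small matrix-exponential helper is included only to keep the snippet numpy-only.)

```python
import numpy as np

def expm(A, order=18, s=12):
    # Minimal scaling-and-squaring matrix exponential (numpy only)
    A = A / 2**s
    E = np.eye(A.shape[0], dtype=complex)
    T = E.copy()
    for n in range(1, order + 1):
        T = T @ A / n
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

I2 = np.eye(2, dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)    # qubit lowering operator
O1, O2 = np.kron(sm, I2), np.kron(I2, sm)
lam = 1.0
H = 0.5 * lam * (O1 @ O2 + (O1 @ O2).conj().T)
z = O1 - 1j * O2.conj().T                         # upper sign: 1 -> 2

I4 = np.eye(4, dtype=complex)
zdz = z.conj().T @ z
# Liouvillian on the row-major-vectorized rho: vec(A rho B) = (A kron B^T) vec(rho)
L = (-1j * (np.kron(H, I4) - np.kron(I4, H.T))
     + lam * (np.kron(z, z.conj())
              - 0.5 * np.kron(zdz, I4) - 0.5 * np.kron(I4, zdz.T)))

def evolve(rho0, T):
    return (expm(L * T) @ rho0.reshape(-1)).reshape(4, 4)

def ptrace(rho, keep):                            # keep = 1 or 2
    r = rho.reshape(2, 2, 2, 2)
    return np.einsum('ikjk->ij', r) if keep == 1 else np.einsum('kikj->ij', r)

g = np.diag([1, 0]).astype(complex)               # qubit ground state |g><g|
plus = np.full((2, 2), 0.5, dtype=complex)        # |+><+|

# Vary system 2's initial state: system 1's reduced state is unchanged ...
r1a = ptrace(evolve(np.kron(plus, g), 2.0), keep=1)
r1b = ptrace(evolve(np.kron(plus, plus), 2.0), keep=1)
assert np.allclose(r1a, r1b, atol=1e-9)

# ... while system 2 clearly responds to system 1's initial state
r2a = ptrace(evolve(np.kron(plus, g), 2.0), keep=2)
r2b = ptrace(evolve(np.kron(g, g), 2.0), keep=2)
assert not np.allclose(r2a, r2b, atol=1e-3)
print("1 -> 2 directionality confirmed for two qubits")
```

Consistent with the discussion above, system 1 here obeys its own closed Lindblad equation $\dot{\hat{\rho}}_1 = \lambda \mathcal{D}[\hat{\sigma}^-_1]\hat{\rho}_1$: it feels only local dissipation, never system 2.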
We stress that the above master equation results in a fully non-reciprocal interaction between systems $1$ and $2$ regardless of the choice of the coupling operators $\hat{O}_1$ and $\hat{O}_2$. Also note that there is no clever way to remove the explicit factor of $i$ in the dissipator above from the entire master equation: if we try to redefine e.g. $\hat{O}_2$ to absorb this phase, it will then show up explicitly in the Hamiltonian. We thus see that the basic ingredients needed for non-reciprocity in our simple three-site toy model (namely synthetic gauge fields and dissipation) also fuel this more general recipe. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{{CascadedSchematic}} \caption{ Schematic showing the basic setup of a cascaded quantum system, where two quantum systems interact via a unidirectional (i.e.~``chiral'') waveguide. In the Markovian limit, the interactions mediated between the two systems via the waveguide correspond exactly to our effective directional Lindblad master equation. } \label{fig:CascadedSchematic} \end{figure} Eq.~(\ref{eq:FinalGeneralDirectionalMEQ}) has the form of a so-called \emph{cascaded quantum master equation}, which provides another way to understand its origin. This is the effective master equation that emerges when two quantum systems are coupled to a unidirectional waveguide, such that e.g.~waves emitted from mode $1$ can only propagate towards mode $2$, and not vice-versa (see Fig.~\ref{fig:CascadedSchematic}). In this case, both terms in the master equation (i.e.~the coherent and dissipative interactions) are generated by the directional waveguide. Our discussion provides another way of thinking about the structure of this master equation without any recourse to a directional waveguide: the emphasis is on synthetic gauge fields and interference between coherent and dissipative interactions.
As we have already seen, this structure can be realized experimentally using external driving, as was discussed in Sec.~\ref{subsec:RealizingGaugeFields}. \subsection{Important properties} \subsubsection{Alternate realizations: asymmetric bath couplings} Eq.~(\ref{eq:FinalGeneralDirectionalMEQ}) represents one concrete way to turn the basic Hamiltonian interaction in Eq.~(\ref{eq:HCoherent}) into something fully directional. It is, however, not the only strategy. The crucial part of our recipe was to balance a dissipative interaction against a coherent interaction. The dissipative interaction formally corresponds to dissipative terms in our master equation that involve the product of $\hat{O}_1$ and $\hat{O}_2$. As such, the prefactors of these operators need not be equal, as long as the product of prefactors remains the same. More physically, this means that we can couple systems 1 and 2 asymmetrically to the bath. This leads to a more general class of directional master equations, labelled by the asymmetry parameter $\eta > 0$: \begin{align} \frac{d}{dt} \hat{\rho} & = -i \frac{\lambda}{2} [ \hat{O}_1 \hat{O}_2 + \hat{O}_1^\dagger \hat{O}_2^\dagger , \hat{\rho} ] + \lambda \mathcal{D}[ \eta^{1/2} \hat{O}_1 \mp i \eta^{-1/2} \hat{O}_2^\dagger ] \hat{\rho} \label{eq:MEQAsymmetric} \end{align} One can easily check that for any value of $\eta >0$, this master equation is completely directional; this follows from the fact that the dissipative interaction is independent of $\eta$. To see this, return to the general equations of motion of expectation values of system 1 and 2 operators. One finds that the interaction terms in Eqs.~(\ref{eq:A1AverageEOM}) and (\ref{eq:A2AverageEOM}) are unchanged. The only difference is in the local dissipation terms: in the $\hat{A}_1$ equation, the local dissipation term is now $\propto \eta$, whereas in the $\hat{A}_2$ equation it is $\propto 1 / \eta$. This flexibility in achieving directionality is useful.
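One can repeat the two-qubit test of the symmetric case with an asymmetric bath coupling, e.g.~$\eta = 4$ (an arbitrary test value): the reduced dynamics of system 1 remains completely insensitive to system 2, confirming that only the local dissipation, not the directionality, depends on $\eta$. A self-contained numpy sketch:

```python
import numpy as np

def expm(A, order=18, s=12):
    # Minimal scaling-and-squaring matrix exponential (numpy only)
    A = A / 2**s
    E = np.eye(A.shape[0], dtype=complex)
    T = E.copy()
    for n in range(1, order + 1):
        T = T @ A / n
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

# Two qubits, jump operator sqrt(eta) O1 - i O2^dag / sqrt(eta) (upper sign).
I2 = np.eye(2, dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
O1, O2 = np.kron(sm, I2), np.kron(I2, sm)
lam, eta = 1.0, 4.0
H = 0.5 * lam * (O1 @ O2 + (O1 @ O2).conj().T)
z = np.sqrt(eta) * O1 - 1j * O2.conj().T / np.sqrt(eta)

I4 = np.eye(4, dtype=complex)
zdz = z.conj().T @ z
L = (-1j * (np.kron(H, I4) - np.kron(I4, H.T))
     + lam * (np.kron(z, z.conj())
              - 0.5 * np.kron(zdz, I4) - 0.5 * np.kron(I4, zdz.T)))

def reduced1(rho2):
    # Evolve |+><+| (x) rho2 for a while, return system 1's reduced state
    plus = np.full((2, 2), 0.5, dtype=complex)
    rho_t = (expm(L * 2.0) @ np.kron(plus, rho2).reshape(-1)).reshape(4, 4)
    return np.einsum('ikjk->ij', rho_t.reshape(2, 2, 2, 2))

g = np.diag([1, 0]).astype(complex)
ex = np.diag([0, 1]).astype(complex)
assert np.allclose(reduced1(g), reduced1(ex), atol=1e-9)   # still fully one-way
print("directionality preserved for eta != 1")
```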
We know that non-reciprocity must come with an increase in local dissipation on each system. By varying $\eta$, one can spread this cost unevenly between the two systems. \subsubsection{Conjugated scheme for non-Hermitian coupling operators} In the general case where the operators $\hat{O}_j$ are both non-Hermitian, one might be puzzled at first glance by the asymmetry in the final master equation Eq.~(\ref{eq:FinalGeneralDirectionalMEQ}): why is one of the coupling operators conjugated, and not the other? This asymmetry also manifests itself in the local dissipation terms in the equations of motion for means, Eqs.~(\ref{eq:A1AverageEOM}) and (\ref{eq:A2AverageEOM}). As it turns out, the other choice is also possible, i.e. \begin{align} \frac{d}{dt} \hat{\rho} & = -i \frac{\lambda}{2} [ \hat{O}_1 \hat{O}_2 + \hat{O}_1^\dagger \hat{O}_2^\dagger , \hat{\rho} ] + \lambda \mathcal{D}[ \hat{O}_1^\dagger \mp i \hat{O}_2 ] \hat{\rho} \end{align} This also generates fully directional behaviour, with the direction depending on the sign. The difference is now in the form of the local dissipation terms that act independently on each system. \subsubsection{Effective non-Hermitian Hamiltonian} Similar to what we did in Sec.~\ref{subsec:BasicModel}, we can extract the non-Hermitian Hamiltonian associated with our master equation in Eq.~(\ref{eq:FinalGeneralDirectionalMEQ}).
One finds: \begin{align} \hat{H}_{\rm eff} & \equiv \hat{H}_{\rm coh} - \frac{i}{2} \lambda \left( \hat{O}_1 \mp i \hat{O}_2^\dagger \right)^\dagger \left( \hat{O}_1 \mp i \hat{O}_2^\dagger \right) \nonumber \\ & = \frac{\lambda}{2} \left( \hat{O}_1 \hat{O}_2 + \hat{O}_2^\dagger \hat{O}_1^\dagger \right) - \frac{i}{2} \lambda \left( \hat{O}_1^\dagger \hat{O}_1 + \hat{O}_2\hat{O}_2^\dagger \right) \pm \frac{1}{2} \lambda \left( \hat{O}_1 \hat{O}_2 - \hat{O}_2^\dagger \hat{O}_1^\dagger \right) \end{align} For $\hat{O}_1$ and $\hat{O}_2$ non-Hermitian, we clearly see that there is a strong asymmetry in this effective Hamiltonian between systems $1$ and $2$, as the dissipative terms cancel one of the two terms in the coherent Hamiltonian. This matches what we found for the simple three-site ring in the previous section. Stranger is the case where $\hat{O}_1$ and $\hat{O}_2$ are both Hermitian. In this case, the last dissipative interaction term in $\hat{H}_{\rm eff}$ \emph{vanishes}. This leads to an important conclusion: when both system operators in the coupling are Hermitian, the non-reciprocity predicted by Eq.~(\ref{eq:FinalGeneralDirectionalMEQ}) cannot be understood in terms of an effective non-Hermitian Hamiltonian. We will provide an alternate intuition for this case in the next section. \section{Quantum non-reciprocity and quantum measurement-plus-feedforward schemes} We have now established a very general, quantum master equation recipe for generating arbitrary non-reciprocal interactions between two quantum systems. We understood the basic mechanism as being the balancing of a coherent Hamiltonian interaction against a bath-mediated dissipative interaction. In this section, we sketch a seemingly very different physical situation that leads to unidirectional dynamics: measurement plus feedforward.
Surprisingly, we see that the standard theory of such a protocol (in the limit of a weak continuous measurement) leads again to an equation analogous to Eq.~(\ref{eq:FinalGeneralDirectionalMEQ}). \subsection{Quantum feedforward based on weak continuous measurements} \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{{FeedforwardSchematic}} \caption{ Schematic of a generic measurement plus feedforward scheme. The observable $\hat{A}_1$ of system $1$ is continuously monitored, with the results of the measurement (encoded in the measurement record $I(t)$) used to drive system 2 via the forcing operator $\hat{F}_2$. The unconditional evolution under this scheme results in a purely directional interaction between the two systems. In the limit where delay in applying the feedforward force can be ignored, the resulting evolution corresponds to our general non-reciprocal master equation (with the proviso that both coupling operators must be Hermitian). } \label{fig:FeedforwardSchematic} \end{figure} We consider again two systems $1$ and $2$ that are not directly interacting (neither via a direct Hamiltonian interaction, nor via a common dissipative bath). Instead, we analyze a situation where an observer makes a weak continuous measurement of an observable $\hat{A}_1$ in system $1$, and then uses the results of this measurement to drive system $2$ (via a forcing operator $\hat{F}_2$; see Fig.~\ref{fig:FeedforwardSchematic}). The theory of weak continuous measurements is treated in several places; see \cite{Steck2006} for an extremely clear pedagogical treatment. The continuous classical measurement record is denoted $I(t)$ (i.e.~this could be an integrated homodyne signal or electrical current). We assume that in each infinitesimal time interval $[t, t+ dt]$ the measurement record increases by an amount $dI(t)$.
This increment has two terms: a piece that reflects the value of the measured observable $\hat{A}_1$, and a random noise amount $dW_t$: \begin{equation} dI(t) = \sqrt{k} \langle \hat{A}_1(t) \rangle dt + dW_t \end{equation} $k$ here represents the strength of the measurement and has units of a rate, while $dW_t$ is a random variable, a Wiener increment. It can be viewed as integrating white noise for a time $dt$, and satisfies $\overline{dW_t} = 0$, $dW_t^2 = dt$ (where the overline indicates a stochastic average, i.e. averaging over different measurement outcomes). The Wiener increments in different time intervals are completely uncorrelated, reflecting the fact that this is white noise. In what follows, we drop the $t$ index on these Wiener increments. We can now ask how the density matrix of the system $\hat{\rho}$ evolves in a particular run of the experiment, i.e.~conditioned on the measurement record $I(t)$. The conditional master equation governing this situation is: \begin{equation} d \hat{\rho} = \frac{k}{4} \mathcal{D}[ \hat{A}_1 ] \hat{\rho} dt + \frac{\sqrt{k}}{2} \left \{ \hat{A}_1 - \langle \hat{A}_1 \rangle , \hat{\rho} \right \} dW \equiv \mathcal{L}_0 \hat{\rho} \end{equation} We stress that the $dW$ here is the same Wiener increment appearing in the expression for the measurement record. The first term here represents the unconditional backaction of the measurement: if we don't have access to the measurement record, then the only relevant backaction is the disturbance of quantities that do not commute with $\hat{A}_1$. The second term in our equation represents a backaction on the system that is correlated with the measurement record. This is often described as the ``conditioning" of the system by the measurement, or an ``information-gain" backaction. Heuristically, it is equivalent to updating a prior distribution in Bayesian statistics based on the acquisition of new information. 
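As an aside, this conditional equation is straightforward to integrate numerically with an Euler--Maruyama scheme. The sketch below (a qubit with measured observable $\hat{A}_1 = \hat{\sigma}_z$; the parameter values are arbitrary illustrations, not taken from the text) preserves the trace of $\hat{\rho}$ exactly, since both terms of the increment are traceless:

```python
import numpy as np

rng = np.random.default_rng(42)
sz = np.diag([1.0, -1.0]).astype(complex)   # measured observable A_1 = sigma_z
k, dt, steps = 1.0, 1e-3, 5000              # measurement rate, time step

# Lindblad dissipator D[L]rho
D = lambda L, r: L @ r @ L.conj().T - 0.5 * (L.conj().T @ L @ r + r @ L.conj().T @ L)

rho = 0.5 * np.ones((2, 2), dtype=complex)  # |+><+|: no definite sigma_z value
for _ in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal()        # Wiener increment
    m = np.trace(sz @ rho).real                     # conditional <A_1>
    Am = sz - m * np.eye(2)
    # unconditional backaction (dephasing) + information-gain backaction
    rho = rho + (k / 4) * D(sz, rho) * dt \
              + (np.sqrt(k) / 2) * (Am @ rho + rho @ Am) * dW
```

On a typical trajectory the conditioning term drives $\langle \hat{\sigma}_z \rangle$ stochastically toward $\pm 1$ (the measurement gradually localizes the qubit on a $\hat{\sigma}_z$ eigenstate), whereas the $\mathcal{D}[\hat{\sigma}_z]$ term alone would only dephase.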
We next imagine that the experimentalist uses the measurement record to apply a time-dependent generalized force on system $2$. Letting $J(t) = (d/dt) I(t)$, we consider that this feedforward forcing is described by the Hamiltonian: \begin{equation} \hat{H}_{\rm FB}(t) = J(t - \tau) \sqrt{\gamma_{\rm ff}} \hat{F}_2 \end{equation} Here $\hat{F}_2$ is some system-2 Hermitian operator, $\gamma_{\rm ff}$ is the feedforward strength (with units of rate) and $\tau > 0$ is the delay time associated with applying the feedforward. Let's now consider the evolution of the full conditional density matrix in the presence of feedforward, during an infinitesimal time interval $dt$. Because of causality, we should first evolve the state to reflect the ``backaction" of the measurement of $\hat{A}_1$ during the interval, and then evolve it via the feedforward Hamiltonian $\hat{H}_{\rm FB}$ (which uses the measurement record acquired during the interval). We thus have \begin{align} \hat{\rho}(t+dt) & = \mathcal{U}_{\rm ff}(t) \cdot \mathcal{U}_{\rm meas}(t) \cdot \hat{\rho}(t) \end{align} where both $\mathcal{U}_{\rm meas}(t)$ and $\mathcal{U}_{\rm ff}(t) $ are superoperators. $\mathcal{U}_{\rm meas}(t)$ describes evolution due to measurement backaction, and for $dt \rightarrow 0$ can be written \begin{equation} \mathcal{U}_{\rm meas}(t) \simeq 1 + \mathcal{L}_0(t) \end{equation} In contrast, $\mathcal{U}_{\rm ff}(t) $ is the unitary evolution of the system under the feedforward Hamiltonian. Defining the superoperator \begin{equation} \mathcal{M} \hat{\rho} \equiv -i [ \sqrt{\gamma_{\rm ff}} \hat{F}_2, \hat{\rho} ] \end{equation} we have \begin{align} \mathcal{U}_{\rm ff}(t) & = \exp \left[ J(t-\tau) \mathcal{M} dt \right] \\ & \simeq 1 + dI(t-\tau) \mathcal{M} + \frac{1}{2} \mathcal{M}^2 dt \end{align} In the last line, we have expanded the exponential to order $dt$, being mindful of the Ito rule $dW^2 = dt$. 
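A useful consequence of $\hat{F}_2$ being Hermitian is that the quadratic term reduces to a Lindblad dissipator, $\frac{1}{2}\mathcal{M}^2 \hat{\rho} = \gamma_{\rm ff} \mathcal{D}[\hat{F}_2]\hat{\rho}$ (this identity is used below). A quick NumPy check, with a random Hermitian matrix as a stand-in for $\hat{F}_2$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, g_ff = 4, 0.3                                      # dimension, feedforward rate
F2 = rng.standard_normal((d, d)); F2 = F2 + F2.T      # Hermitian F_2
rho = rng.standard_normal((d, d)); rho = rho @ rho.T
rho = rho / np.trace(rho)                             # valid density matrix

M = lambda r: -1j * np.sqrt(g_ff) * (F2 @ r - r @ F2)  # M rho = -i sqrt(g_ff) [F_2, rho]
D = lambda L, r: L @ r @ L.conj().T - 0.5 * (L.conj().T @ L @ r + r @ L.conj().T @ L)

assert np.allclose(0.5 * M(M(rho)), g_ff * D(F2, rho))
```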
Combining these terms and only keeping terms to order $dt$, we find in the limit $\tau \rightarrow 0^+$: \begin{align} d \hat{\rho}(t) & = \left( \mathcal{L}_0 + dI(t) \mathcal{M} + \frac{1}{2} \mathcal{M}^2 dt \right) \hat{\rho}(t) + \mathcal{M} \frac{\sqrt{k}}{2} \left \{ \hat{A}_1 - \langle \hat{A}_1 \rangle , \hat{\rho}(t) \right \} dt \end{align} The last term here is the most interesting one. It arises from {\it correlations} between the $dW$ noise in the feedforward evolution, and the $dW$ terms in the backaction (innovation) of the measurement propagator. Up until now, we have been looking at the conditional evolution of the system density matrix: how the system evolves given a particular measurement record $I(t)$. We now consider the unconditional evolution, i.e. average over all measurement outcomes. This is equivalent to averaging over the noise $dW$, i.e.~setting $dW = 0$ in the above equation. This yields: \begin{align} d \hat{\rho}(t) & = \left[ \frac{k}{4} \mathcal{D}[\hat{A}_1] + \sqrt{k} \langle \hat{A}_1 \rangle \mathcal{M} + \frac{1}{2} \mathcal{M}^2 \right] \hat{\rho} \, dt + \mathcal{M} \frac{\sqrt{k}}{2} \left \{ \hat{A}_1 - \langle \hat{A}_1 \rangle , \hat{\rho}(t) \right \} dt \label{eq:MEQFeedForwardFull} \end{align} We see there are two terms here that encode the interaction between systems $1$ and $2$ generated by our feedforward protocol. The first is the $\sqrt{k} \langle \hat{A}_1 \rangle \mathcal{M} $ term, which describes the fact that the average value of $\hat{A}_1$ determines the effective force applied on system $2$. The second influence is through the last term of the equation; as mentioned above, this describes correlations between the backaction noise (conditioning) of system $1$ and the noise in the feedforward force applied to system $2$. Note that both these interaction terms scale like $\sqrt{k \gamma_{\rm ff}}$, i.e.~they depend on the product of the measurement strength and the feedforward strength.
The above equation tells us about the unconditional evolution of the system under the measurement and feedforward protocol. The evolution must necessarily be directional, as there is no way for system 2 to influence system 1. We would like to show that this equation is in fact in the same form as the directional master equation of Eq.~(\ref{eq:MEQAsymmetric}) with the choices $\hat{O}_1 = \hat{A}_1$, $\hat{O}_2 = \hat{F}_2$. With these identifications, we expect a master equation: \begin{align} \frac{d}{dt} \hat{\rho} & = -i \lambda [ \hat{A}_1 \hat{F}_2 , \hat{\rho} ] + \lambda \mathcal{D}[ \eta^{1/2} \hat{A}_1 - i \eta^{-1/2} \hat{F}_2 ] \hat{\rho} \\ & = -i \lambda [ \hat{A}_1 \hat{F}_2 , \hat{\rho} ] + \lambda \eta \mathcal{D}[\hat{A}_1] \hat{\rho} + \frac{\lambda}{ \eta} \mathcal{D}[\hat{F}_2] \hat{\rho} - i \lambda \left( \hat{F}_2 \hat{\rho} \hat{A}_1 - \hat{A}_1 \hat{\rho} \hat{F}_2 \right) \label{eq:TargetMEQ} \end{align} In the last line, we have made use of the fact that both $\hat{A}_1$ and $\hat{F}_2$ are Hermitian. As we already remarked earlier, the Hermiticity of both these operators implies that the ``dissipative interaction" between the two systems does not correspond to the non-Hermitian Hamiltonian terms in our master equation, but rather to jump terms (where there are operators acting on both sides of $\hat{\rho}$). First, consider the term in Eq.~(\ref{eq:MEQFeedForwardFull}) that is quadratic in the operator $\hat{F}_2$. This is the $\mathcal{M}^2 dt$ term above, a term describing evolution due to noise in the feedforward propagator. A straightforward evaluation shows: \begin{equation} \frac{1}{2} \mathcal{M}^2 \hat{\rho} = \gamma_{\rm ff} \mathcal{D}[\hat{F}_2] \hat{\rho} \end{equation} This indeed corresponds to the $\hat{F}_2^2$ term in Eq.~(\ref{eq:TargetMEQ}), if we set: \begin{equation} \lambda / \eta =\gamma_{\rm ff} \end{equation} Next, consider the most interesting terms in Eq.~(\ref{eq:MEQFeedForwardFull}), those that are linear in $\hat{F}_2$.
These describe the effective interaction that results from the feedforward protocol, where system 2 evolves in a way that is determined by the results of the system 1 measurement. The order $\hat{F}_2$ terms are: \begin{align} \frac{\sqrt{k}}{2} \mathcal{M} \left( \hat{A}_1 \hat{\rho} + \hat{\rho} \hat{A}_1 \right) & = \frac{\sqrt{k}}{2} (-i) \sqrt{\gamma_{\rm ff}} \left[ \hat{F}_2, \hat{A}_1 \hat{\rho} + \hat{\rho} \hat{A}_1 \right] \\ & = -i \frac{\sqrt{k \gamma_{\rm ff} }}{2} \left( \hat{F}_2 \hat{\rho} \hat{A}_1 - \hat{A}_1 \hat{\rho} \hat{F}_2 \right) -i \frac{\sqrt{k \gamma_{\rm ff}}}{2} [ \hat{A}_{1} \hat{F}_2, \hat{\rho} ] \end{align} These are in complete agreement with the order $\hat{F}_2$ terms in Eq.~(\ref{eq:TargetMEQ}), which describe both coherent and dissipative interactions; we only need to make the identification: \begin{equation} \lambda = \frac{\sqrt{k \gamma_{\rm ff}}}{2} \end{equation} Finally, it is easy to confirm that the remaining $\hat{A}_1^2$ terms are also in agreement with Eq.~(\ref{eq:TargetMEQ}) with the same assignment of parameters.
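As a consistency check of the whole identification, one can apply both generators — the unconditional feedforward equation, Eq.~(\ref{eq:MEQFeedForwardFull}), and the target master equation, Eq.~(\ref{eq:TargetMEQ}), with $\lambda = \sqrt{k\gamma_{\rm ff}}/2$ and $\eta = \sqrt{k/(4\gamma_{\rm ff})}$ — to the same random state and verify that they agree. (Note that the $\langle \hat{A}_1 \rangle$-dependent pieces of the feedforward equation cancel, so the unconditional evolution is in fact linear in $\hat{\rho}$.) A NumPy sketch with random Hermitian stand-ins for $\hat{A}_1$ and $\hat{F}_2$:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3
dag = lambda M: M.conj().T
rnd = lambda: rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# Hermitian coupling operators on the two distinct subsystems
h1, h2 = rnd(), rnd()
A1 = np.kron(h1 + dag(h1), np.eye(d))
F2 = np.kron(np.eye(d), h2 + dag(h2))

r = rng.standard_normal((d * d, d * d)) + 1j * rng.standard_normal((d * d, d * d))
rho = r @ dag(r); rho = rho / np.trace(rho)          # random density matrix

k, g_ff = 0.8, 0.3
lam, eta = np.sqrt(k * g_ff) / 2, np.sqrt(k / (4 * g_ff))
D = lambda L, x: L @ x @ dag(L) - 0.5 * (dag(L) @ L @ x + x @ dag(L) @ L)
M = lambda x: -1j * np.sqrt(g_ff) * (F2 @ x - x @ F2)

# unconditional feedforward generator
m1 = np.trace(A1 @ rho).real
Am = A1 - m1 * np.eye(d * d)
ff = ((k / 4) * D(A1, rho) + np.sqrt(k) * m1 * M(rho)
      + 0.5 * M(M(rho)) + (np.sqrt(k) / 2) * M(Am @ rho + rho @ Am))

# target non-reciprocal master equation
Lop = np.sqrt(eta) * A1 - 1j / np.sqrt(eta) * F2
target = -1j * lam * (A1 @ F2 @ rho - rho @ A1 @ F2) + lam * D(Lop, rho)
assert np.allclose(ff, target)
```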
Summarizing, we see that our general non-reciprocal master equation in Eq.~(\ref{eq:MEQAsymmetric}) describes quantum measurement plus feedforward with the identifications: \begin{equation} \lambda = \sqrt{ \frac{k \gamma_{\rm ff}}{4} }, \,\,\,\,\, \eta = \sqrt{\frac{k}{ 4 \gamma_{\rm ff} }} \end{equation} We thus have two crucial conclusions to make: \begin{itemize} \item Continuous measurement plus feedforward give unconditional evolution that is equivalent to our general non-reciprocal master equation, with system operators that are purely Hermitian \item A corollary is that if we have a non-reciprocal master equation described by Eq.~(\ref{eq:MEQAsymmetric}) with Hermitian operators, then it can {\it always} be interpreted in terms of a measurement plus feedforward protocol \end{itemize} In the case where both system operators in the master equation are Hermitian, we see that the interaction strength $\lambda$ is the geometric mean of the measurement and feedforward strengths, whereas the asymmetry parameter is set by their ratio. \subsection{Entanglement generation via non-reciprocal interactions} A basic question about quantum non-reciprocal interactions is whether they are able to generate entanglement between two systems. Consider the very general master equation in Eq.~(\ref{eq:MEQAsymmetric}) that describes a fully non-reciprocal interaction between two systems $1$ and $2$. We could add to this dynamics purely local Hamiltonians for system 1 and 2, and then ask whether this equation can generate entanglement. Specifically, if the system starts in a state that is a product state, can this dynamics ever create a truly entangled state at later times (i.e.~a density matrix that cannot be written as a statistical mixture of product states)? We know that in general interactions between two systems can generate entanglement. 
However, given that dissipation is a crucial part of our non-reciprocal interactions, one might worry that the associated quantum noise might disrupt entanglement generation. The above connection to measurement plus feedforward lets us say something crucial about the above question: if the operators $\hat{O}_1$ and $\hat{O}_2$ that define the non-reciprocal interaction are Hermitian, then \emph{there can never be any generation of entanglement between systems $1$ and $2$}. In this case, the non-reciprocal interaction is completely equivalent to the interaction generated by a measurement plus feedforward protocol. Such a protocol only involves local operations and classical communication between the two systems (i.e. LOCC), hence entanglement generation is impossible. The corollary is that if both coupling operators $\hat{O}_1$ and $\hat{O}_2$ are non-Hermitian, then there is no general mapping to an LOCC measurement plus feedforward protocol, and hence it is possible to generate entanglement. In this case, one can always think of the non-reciprocal interaction using the cascaded quantum systems picture, where we have an auxiliary unidirectional waveguide which mediates the interactions between systems 1 and 2. In general, it is indeed possible for such an interaction to transport particles from system $1$ to $2$, resulting in the generation of entanglement, and even the stabilization of pure entangled states. For a concrete example, consider the case where both systems $1$ and $2$ are bosonic cavities, and we take $\hat{O}_1 = \hat{a}_1, \hat{O}_2^\dagger = \hat{a}_2$ (with $\hat{a}_j$ the photon annihilation operator for cavity $j$). Eq.~(\ref{eq:MEQAsymmetric}) then describes a standard cascaded quantum systems coupling, where photons can leak out of cavity $2$, enter the chiral waveguide, and then reach cavity $1$ (but not the reverse process). By adding local drives to the two cavities (e.g. 
parametric two-photon drives), such a master equation can indeed generate entanglement, and even stabilize pure entangled states \cite{Mamaev2018}. Similar constructions are possible with qubits \cite{Stannigel2012}. Hence, we see that in general, non-reciprocal interactions described by the master equation Eq.~(\ref{eq:MEQAsymmetric}) are not equivalent to an LOCC measurement plus feedforward protocol. We see that despite the dissipative nature of these interactions, they are indeed capable of generating quantum entanglement between systems 1 and 2. Note that the connection between correlated dissipative processes and entanglement generation was recently studied in Ref.~\cite{Seif2021}, in the context of correlated Markovian dephasing in a many qubit system. Finally, one might be confused by the above statements, as it seems like we could always connect the general case of non-Hermitian coupling operators to the Hermitian case. For concreteness, we could always write $\hat{O}_j = \hat{X}_j + i \hat{P}_j$, where both $\hat{X}_j$, $\hat{P}_j$ are Hermitian. In this case, the coherent interaction Hamiltonian in our master equation becomes: \begin{align} \hat{H}_{\rm coh} & = \frac{\lambda}{2} \left( \hat{O}_1 \hat{O}_2 + \textrm{h.c.} \right) = \lambda \left( \hat{X}_1 \hat{X}_2 - \hat{P}_1 \hat{P}_2 \right) \end{align} This looks like we have two interaction channels now between systems 1 and 2, each described by a pair of Hermitian operators. One might erroneously conclude that the non-reciprocal interaction described by Eq.~(\ref{eq:MEQAsymmetric}) could always be written in terms of a {\it pair} of non-reciprocal interactions, each corresponding to Hermitian coupling operators. This is incorrect. While $\hat{H}_{\rm coh}$ can indeed be decomposed like this, the same is not true for the dissipative interactions encoded by the correlated dissipator in Eq.~(\ref{eq:MEQAsymmetric}).
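The failure of the correlated dissipator to decompose in this way is easy to demonstrate numerically. With random non-Hermitian $\hat{O}_1$, $\hat{O}_2$ acting on two distinct subsystems, the sketch below confirms that $\mathcal{D}[\hat{O}_1 - i\hat{O}_2^\dagger]\hat{\rho}$ differs from $\mathcal{D}[\hat{X}_1 - i\hat{X}_2]\hat{\rho} + \mathcal{D}[\hat{P}_1 - i\hat{P}_2]\hat{\rho}$ (the daggers on the Hermitian quadratures are immaterial):

```python
import numpy as np

rng = np.random.default_rng(7)
d = 3
dag = lambda M: M.conj().T
rnd = lambda: rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

O1 = np.kron(rnd(), np.eye(d))            # generic non-Hermitian O_1
O2 = np.kron(np.eye(d), rnd())            # generic non-Hermitian O_2
X1, P1 = 0.5 * (O1 + dag(O1)), -0.5j * (O1 - dag(O1))   # O_j = X_j + i P_j
X2, P2 = 0.5 * (O2 + dag(O2)), -0.5j * (O2 - dag(O2))

r = rng.standard_normal((d * d, d * d)) + 1j * rng.standard_normal((d * d, d * d))
rho = r @ dag(r); rho = rho / np.trace(rho)
D = lambda L, x: L @ x @ dag(L) - 0.5 * (dag(L) @ L @ x + x @ dag(L) @ L)

lhs = D(O1 - 1j * dag(O2), rho)
rhs = D(X1 - 1j * X2, rho) + D(P1 - 1j * P2, rho)
assert not np.allclose(lhs, rhs)          # the dissipator does not decompose
```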
To be clear, in general \begin{equation} \mathcal{D}[\hat{O}_1 \mp i \hat{O}_2^\dagger ] \neq \mathcal{D}[\hat{X}_1 \mp i \hat{X}_2^\dagger ] + \mathcal{D}[\hat{P}_1 \mp i \hat{P}_2^\dagger ] \end{equation} Hence, non-reciprocal quantum interactions realized using non-Hermitian coupling operators are fundamentally different from the case with Hermitian operators (which can always be reduced to a LOCC measurement plus feedforward scheme). \section{Conclusion} These lectures have attempted to provide an intuitive picture for how external driving, dissipation and interference can be harnessed to realize non-reciprocal interactions and devices in a fully consistent quantum mechanical setting. Using a simple toy model, we showed explicit connections between seemingly different formalisms: Hamiltonians with synthetic gauge fields, non-Hermitian effective Hamiltonians, and quantum master equations. We also discussed the explicit connection between this approach to non-reciprocity and the effective evolution generated by quantum measurement and feedforward schemes. We hope the discussion here will assist researchers both in designing new kinds of non-reciprocal devices, as well as in investigations of the fundamental quantum many-body physics of systems whose underlying interactions are directional. \section*{Acknowledgements} I am grateful to the many people over the years who have helped me better understand quantum non-reciprocity and have helped develop ideas on this topic: Michel Devoret, Steve Girvin, Florian Marquardt, Alexander McDonald, Oskar Painter, Chen Wang and Yuxin Wang. I am of course especially grateful to Anja Metelemann, with whom the main ideas discussed in these notes were developed. I also want to thank Yuxin Wang for her assistance in helping prepare figures and for carefully proofreading and critiquing this set of notes.
This work was supported by Army Research Office under grant W911-NF-19-1-0380, and by the Air Force Office of Scientific Research MURI program under grant no. FA9550-19-1-0399. \begin{appendix} \section{Interference cancellation to all orders in $t$} \label{app:HigherOrderInterference} A crucial but surprising conclusion in the main text was that the simple perturbative interference condition in Eqs.~(\ref{eq:NRCondPhi}) and (\ref{eq:NRCondKappa}) that causes $G^R[2,1;\omega]$ to vanish by destructive interference is in fact valid to all orders in $t$. We formalize here the argument provided in the main text. First, defining $\tilde{\omega} = \omega + i \kappa/2$, we have: \begin{align} G^R[2,1;\omega] = \left[ \left(\tilde{\omega} - \bm{H} \right)^{-1} \right]_{21} = \frac{1}{\tilde{\omega}} \left[ \sum_{m=0}^\infty \left( \frac{\bm{H}}{\tilde{\omega}} \right)^m \right]_{21} \end{align} where $\bm{H}$ is the $3 \times 3$ Hermitian Hamiltonian matrix that encodes hopping (c.f.~Eq.~(\ref{eq:NonHermMatrix})).
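The geometric-series representation of the resolvent, and the resummed interference relation derived in the remainder of this appendix, can both be sanity checked numerically (a random Hermitian hopping matrix with zero diagonal, as assumed in the expansion; $\tilde{\omega}$ is chosen large enough that the series converges; 1-based site labels become 0-based indices):

```python
import numpy as np

rng = np.random.default_rng(5)
H = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = 0.5 * (H + H.conj().T)
np.fill_diagonal(H, 0.0)                 # pure hopping: no on-site energies

omega, kappa = 5.0, 1.0
wt = omega + 0.5j * kappa                # omega-tilde
G = np.linalg.inv(wt * np.eye(3) - H)    # exact retarded Green's function

# truncated geometric series (1/wt) * sum_m (H/wt)^m
series = sum(np.linalg.matrix_power(H / wt, m) for m in range(60)) / wt
assert np.allclose(G, series)

# resummed interference relation for G^R[2,1] (derived below in this appendix)
h = H / wt
Z22 = 1.0 / (1.0 - h[1, 2] * h[2, 1])
assert np.isclose(G[1, 0], Z22 * (h[1, 0] + h[1, 2] * h[2, 0]) * G[0, 0])
```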
Expanding out the product here, we can write: \begin{align} G^R[2,1;\omega] & = \frac{1}{\tilde{\omega}} \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{21} \left[ \sum_{m=0}^\infty \left( \frac{\bm{H}}{\tilde{\omega}} \right)^m \right]_{11} + \frac{1}{\tilde{\omega}} \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{23} \left[ \sum_{m=0}^\infty \left( \frac{\bm{H}}{\tilde{\omega}} \right)^m \right]_{31} \\ &= \frac{1}{\tilde{\omega}} \Bigg( \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{21} \left[ \sum_{m=0}^\infty \left( \frac{\bm{H}}{\tilde{\omega}} \right)^m \right]_{11} + \nonumber \\ & \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{23} \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{31} \left[ \sum_{m=0}^\infty \left( \frac{\bm{H}}{\tilde{\omega}} \right)^m \right]_{11} + \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{23} \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{32} \left[ \sum_{m=0}^\infty \left( \frac{\bm{H}}{\tilde{\omega}} \right)^m \right]_{21} \Bigg) \end{align} Now, re-express the geometric series in each term in terms of the retarded Green's function: \begin{align} G^R[2,1;\omega] & = \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{21} G^R[1,1;\omega] + \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{23} \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{31} G^R[1,1;\omega] + \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{23} \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{32} G^R[2,1;\omega] \end{align} Solving this equation for $G^R[2,1;\omega]$ yields: \begin{align} G^R[2,1;\omega] & = Z[2,2;\omega] \left( \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{21} + \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{23} \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{31} \right) G^R[1,1;\omega] \label{eq:FullGRInterference} \end{align} with \begin{align} Z[2,2;\omega] = \left( 1 - \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{23} \left[ \frac{\bm{H}}{\tilde{\omega}} \right]_{32} \right)^{-1} \end{align} The two terms on the RHS in Eq.~(\ref{eq:FullGRInterference}) exactly
correspond to the amplitudes $Q_{1,tot}$ and $Q_{2,tot}$ in Eqs.~(\ref{eq:Q1tot}) and (\ref{eq:Q2tot}). We thus see that, even including processes to all orders in the tunneling, the simple interference condition of Eqs.~(\ref{eq:NRCondPhi}) and (\ref{eq:NRCondKappa}) guarantees that $G^R[2,1;\omega] = 0$. \end{appendix}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{\label{sec:Introduction}Introduction} \input{Introduction} \section{\label{sec:Methodology}Methodology} \input{Methodology} \section{\label{sec:Results and Discussions} Results and Discussions} \input{Results_and_Discussions} \section{\label{sec:Conclusions} Conclusions} \input{Conclusions} \begin{acknowledgments} This research was supported in part through the use of the Farber computer cluster and associated Information Technologies (IT) resources at the University of Delaware. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{secintro} \noindent Model Predictive Control (MPC) is an attractive control design methodology because it offers a natural way to express an optimal objective while handling constraints on both state and control variables \cite{Mayne2000}. MPC design is based on the repetitive on-line solution of finite-horizon open-loop optimal control problems that are parametrized by the state value. Once the optimal sequence of control inputs is obtained, the first control in the sequence is applied over some updating period $\tau_u$ during which the new problem (based on the next predicted state) is solved; the resulting solution is then applied while the prediction horizon is shifted by $\tau_u$ time units, and the process is repeated, yielding an implicit state feedback. \ \\ \ \\ The attractive features of MPC triggered attempts to apply it to increasingly fast systems. For such systems, the need for a high updating rate (small $\tau_u$) may be incompatible with a complete solution of the underlying optimization problem during a single updating period $\tau_u$. This fact sparked a rich and still active research area that is referred to for short as ``Fast MPC'' (see \cite{Alamir2006,diehl2005real,Zavala2008,Diehl2005IEE} and the references therein). \ \\ \ \\ Typical issues addressed in the fast MPC literature concern the efficient computation of updating steps, the reduction of the feedback delay, more or less rigorous computation of the Hessian, etc. Typical proofs of closed-loop stability in that context (see for instance \cite{Diehl2005IEE}) depend on strong assumptions such as the proximity to the optimal solution, the quality of the Hessian matrix estimation, etc. With such assumptions, the corresponding stability proofs take the form of tautological assertions.
In other words, when such assumptions are satisfied, the paradigm of fast MPC is less relevant since standard execution of efficient optimizers would anyway give satisfactory results. \ \\ \ \\ When the effectively applied control is far from optimal (which is the case, for instance, after a sudden change in the set-point value), the hot start (initialization of the decision variable after the horizon shift) it induces for the next horizon does not necessarily decrease the cost function until several iterations have been performed. This is because far from the ideal solution, the final stabilizing constraints invoked in the formal proof of \cite{Mayne2000} may be far from being satisfied. On the other hand, if a constant, large control updating period is used in order to accommodate such situations, the overall closed-loop performance would be badly affected.\ \\ \ \\ In recent papers \cite{Alamir_pavia2008, Alamir_ECC2013}, investigations have been conducted regarding the impact of the choice of the control updating period $\tau_u$ on the behavior of the cost function. Simple algorithms have also been proposed to monitor on-line the updating period based on the on-line behavior of the cost function to be decreased. More recently \cite{Alamir_ECC2014}, it has been shown that the control updating period choice is intimately linked to the basic iteration being used. The two major facts that come out of these investigations can be summarized as follows: \ \\ \ \\ {\bf (Fact 1)} In constant updating period schemes, it could be interesting to use less efficient (per iteration) algorithms provided that a significantly shorter updating period can be used \cite{Alamir_ECC2014}. This fact enhanced the recent interest \cite{Bemporad2012,Jones2012} in fast gradient-like algorithms \cite{Nesterov1983} as a simpler approach when compared to second order algorithms. The work in \cite{Alamir_ECC2014} gives a formal explanation for this intuitively accepted fact.
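To illustrate the kind of iteration involved, here is a minimal projected fast-gradient method for a box-constrained QP — a constant-momentum sketch for strongly convex problems, not the exact scheme of the cited references:

```python
import numpy as np

def fast_gradient_box_qp(H, f, lo, hi, iters=300):
    """min 0.5 p'Hp + f'p  s.t.  lo <= p <= hi,  H symmetric positive definite."""
    eigs = np.linalg.eigvalsh(H)
    Lip, mu = eigs[-1], eigs[0]          # Lipschitz / strong-convexity constants
    beta = (np.sqrt(Lip) - np.sqrt(mu)) / (np.sqrt(Lip) + np.sqrt(mu))
    p = y = np.zeros_like(f, dtype=float)
    for _ in range(iters):
        p_new = np.clip(y - (H @ y + f) / Lip, lo, hi)  # projected gradient step
        y = p_new + beta * (p_new - p)                  # momentum extrapolation
        p = p_new
    return p

# separable toy QP: unconstrained minimizer (1, 2), so the box solution is (0.5, 0.5)
p = fast_gradient_box_qp(np.diag([2.0, 1.0]), np.array([-2.0, -2.0]), 0.0, 0.5)
```

Each iteration costs only a matrix-vector product and a clipping operation, which is what makes such schemes attractive on limited hardware like a PLC.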
\ \\ \ \\ {\bf (Fact 2)} For a given optimization algorithm, the closed-loop performance can be enhanced by an almost computation-free on-line adaptation rule of the control updating period \cite{Alamir_pavia2008, Alamir_ECC2013}. \ \\ \ \\ Obviously, a combination of the preceding facts also holds, namely, in adaptive frameworks, it can be more efficient to use simpler optimization algorithms provided that the gain induced by a higher updating rate compensates for the lack of efficiency per iteration. \ \\ \ \\ In view of the preceding discussion, the contribution of this paper is twofold: \ \\ \ \\ {\bf First contribution.} This paper gives the first industrial validation of the proposed on-line adaptation of the control updating period. The realistic PLC-based implementation framework being used enhances the sensitivity of the closed-loop performance to the adaptation mechanism since it is several orders of magnitude slower than today's desktop computers. As such, this paper gives a complete and realistic layout to understand the chain of concepts and methods that underlie the fast MPC paradigm. \ \\ \ \\ {\bf Second contribution.} Although simulation-based assessments have been proposed for Facts 1 and 2 mentioned above, these simulations always used first order gradient-based algorithms. Some promoters of second order algorithms may conjecture that such adaptation would be of no benefit since a second order scheme hardly needs more than a single iteration. This paper invalidates this conjecture by showing that 1) as far as the application at hand is concerned, a first-order-like algorithm slightly outperforms a second order algorithm (in the realistic industrial hardware configuration at hand), strengthening Fact 1 in a constant updating period context; and 2) the closed-loop performance of this first order algorithm can be improved by on-line adaptation of the control updating period.
These two results put together imply that on-line adaptation is worth using even for second order algorithms and that a single iteration is not always sufficient for second order methods in realistic situations.\ \\ \ \\ This paper is organized as follows: First, the problem is stated in section \ref{secbackground} by recalling the fast MPC implementation scheme and the main results of \cite{Alamir_ECC2013,Alamir_ECC2014}. In section \ref{secalgo}, the two algorithms that are used in the validation section are presented, namely the {\sc qpOASES} solver \cite{Ferreau2008} and an Ordinary-Differential-Equation (ODE)-based solver that is briefly presented and then applied in the experimental validation. This second algorithm can be viewed as a first-order algorithm since it is based on the definition of an ODE in which the vector field is linked to the steepest descent direction. In section \ref{secplant}, the process is described, the control problem is stated and the computational PLC used in the implementation of the real-time MPC is presented in order to underline the computational limitation that qualifies the underlying problem as a fast MPC problem. The main contribution of the paper is given in section \ref{secfastexp}, namely, extensive simulations are first given using the two above cited algorithms and using different constant control updating periods in order to investigate the first fact discussed above. It is in particular shown that for both solvers, the locally (in time) optimal updating period changes dynamically depending on the context. Moreover, in order to draw conclusions that go beyond the specific case of the PLC at hand, several simulations are conducted using different conjectures regarding the PLC performances. This investigation shows that for the rather performant PLC we actually have today, the first order algorithm gives slightly better results; however, if faster future PLCs were to be used, {\sc qpOASES} would give better results.
This is the core message of the paper: the fast MPC paradigm is a matter of combined optimal choices involving the process bandwidth, the optimization algorithm, the available computational device, the control parametrization, etc. Finally, experimental results are shown under an adaptive updating period. Section \ref{secconc} concludes the paper and gives hints for further investigation. \vskip 0.5cm \section{Background} \label{secbackground} \subsection{Definitions and Notation}\label{defandnotes} \noindent Consider a general nonlinear system with state vector $\mathbf x \in \mathbb{R}^{n}$ and a control vector $\mathbf u\in \mathbb{R}^{n_u}$. We consider a basic sampling period $\tau>0$ used to define the piece-wise constant (pwc) control profiles (a sequence of control values in $ \mathbb{R}^{n_u}$ that are maintained constant during $\tau$ time units). As far as the general presentation of concepts is concerned, a general control parametrization is adopted according to which the whole control sequence is defined by a vector of decision variables $p\in \mathbb{R}^{n_p}$ by: \begin{eqnarray} \mathcal U_{pwc}(p):= \begin{pmatrix} u^{(1)}(p)&\dots& u^{(N)}(p) \end{pmatrix} \in \mathbb U\subset \mathbb{R}^{Nn_u} \label{uparam} \end{eqnarray} where $u^{(i)}(p)\in \mathbb{R}^{n_u} $ is the control to be applied during the future $i$-th sampling period of length $\tau$ while $\mathbb U\subset \mathbb{R}^{Nn_u}$ is some admissible set. At this stage, no specific form is required for the system equations describing the dynamic model.
The state $\mathbf x_{k+j}$ that is reached -- according to the model -- after $j$ sampling periods, starting from some initial state $\mathbf x_k$, under the sequence of control inputs $\mathcal U_{pwc}(p)$ and some predicted disturbance $\hat{\tilde{\mathbf w}}_k \in \mathbb{R}^{N\cdot n_w}$ is given by: \begin{eqnarray} \forall j\in \{1,\dots,N\} \quad \mathbf x_{k+j}=:X(j,\mathbf x_k,p,\hat{\tilde{\mathbf w}}_k) \label{predictionmodel} \end{eqnarray} while the real state that is reached in the presence of true disturbances and/or model mismatch $\tilde{\mathbf w}_k$ (that takes place over the time interval $[k\tau,(N+j)\tau]$) is denoted by \begin{eqnarray} X^r(j,\mathbf x_k,p,\hat{\tilde{\mathbf w}}_k,\tilde{\mathbf w}_k) \end{eqnarray} In the sequel, explicit mentioning of $\tilde{\mathbf w}$ is sometimes omitted and the real state evolution is simply denoted by $X^r(j,\mathbf x_k,p,\hat{\tilde{\mathbf w}}_k)$.\ \\ \ \\ It is assumed that an MPC strategy is defined by the following optimization problem that depends on the current state $\mathbf x$ according to: \begin{eqnarray} \mathcal P(\mathbf x)\ :\ \min_{p\in \mathbb{P}} J_0(p,\mathbf x)\quad \mbox{\rm under $g(p,\mathbf x)\le 0$} \label{cost} \end{eqnarray} where $\mathbb P\subset \mathbb{R}^{n_p}$ is the admissible parameter set, $J_0$ is the cost function to be minimized while $g(p,\mathbf x)\in \mathbb{R}^{n_c}$ defines the set of inequality constraints. \ \\ \ \\ Recall that in ideal MPC, the solution to (\ref{cost}), say $p^{opt}(\mathbf x)$, is used to define the feedback \begin{eqnarray} K_{MPC}(\mathbf x):=u^{(1)}(p^{opt}(\mathbf x)) \label{defdeidMPC} \end{eqnarray} Indeed, ideal MPC frameworks assume that the optimal solution $p^{opt}(\mathbf x)$ is instantaneously available.
In reality, the optimization problem $\mathcal P(\mathbf x)$ is solved using an iterative solver that is denoted by: \begin{eqnarray} p^{(q)}=\mathcal S^{(q)}(p^{(0)},\mathbf x) \label{defdemathSq} \end{eqnarray} where $p^{(0)}$ stands for the initial guess while $p^{(q)}$ is the iterate that is delivered after $q$ successive iterations. In the sequel, the term {\em iteration} refers to the unbreakable set of operations (relative to $\mathcal S$) that is necessary to deliver an update of $p$. In other words, if the time needed to perform a single iteration of $\mathcal S$ on a given platform is denoted by $\tau^\mathcal{S}_1>0$, then no update can be given in less than $\tau_1^\mathcal{S}$ time units. Based on this remark, it seems reasonable to adopt updating instants that are separated by multiples of $\tau_1^{\mathcal S}$, namely: \begin{eqnarray} t_{k+1}=t_k+q(t_k)\cdot \tau_1^{\mathcal S}\quad \mbox{\rm with} \quad q(t_k)\in \mathbb N \end{eqnarray} where the $t_k$s are the instants where updated values of $p$ can be delivered for use in the feedback control input. Moreover, we assume for simplicity that the basic sampling period $\tau$ involved in the definition of the control parametrization map $\mathcal U(p)$ is precisely $\tau_1^{\mathcal S}$, namely: \begin{eqnarray} \tau=\tau_1^{\mathcal S} \label{tgfr5} \end{eqnarray} Note that thanks to the flexibility of the parametrization, one can define pwc control profiles in which the control is maintained constant over multiples of $\tau_1^{\mathcal S}$ while meeting (\ref{tgfr5}) so that the latter condition is not really restrictive while it simplifies the description of the implementation framework.\ \\ \ \\ Using the notation above, the real-life implementation scheme is defined as follows: \begin{itemize} \item[(1)] $i\leftarrow 0$, $t_i\leftarrow 0$, some initial parameter vector $p(t_0)$ is chosen. An initial number of iterations $q(t_0)=q_0\le N$ is adopted. 
\item[(2)] The first $q(t_i)$ elements of the control sequence $\mathcal U(p(t_i))$ are applied over the time interval $[t_i,t_{i+1}=t_i+q(t_i)\tau]$. \item[(3)] Meanwhile, the computation unit performs the following tasks during $[t_i,t_{i+1}]$: \begin{itemize} \item[(3.1)] Predict the future state $\hat{\mathbf x}(t_{i+1})$ using the model and under the above mentioned sequence of controls. The time needed to achieve this short-horizon prediction is assumed to be negligible for simplicity. \item[(3.2)] Perform $q(t_i)$ iterations to get $$p(t_{i+1}):=\mathcal S^{(q(t_i))}(p^+(t_i),\hat{\mathbf x}(t_{i+1}))$$ where the initial guess $p^+(t_i)$ is either equal to $p(t_i)$ [cold start] or equal to some warm start that is derived from $p(t_i)$ by a standard shift (translation) technique. \label{step32} \end{itemize} \item[(4)] At the updating instant $t_{i+1}$, compute the number $q(t_{i+1})$ of iterations to be performed during the next updating period $[t_{i+1},t_{i+2}=t_{i+1}+q(t_{i+1})\tau]$ using Algorithm \ref{algupdateq} that is recalled in section \ref{recalalgo}. As shown in \cite{Alamir_ECC2013} and recalled hereafter, this update costs no more than a dozen elementary operations and can therefore be considered instantaneous. \item[(5)] $i\leftarrow i+1$, Goto step (2). \end{itemize} In the next section, the updating rule for $q(t_{i+1})$ invoked in Step (4) of the implementation scheme is recalled. Note that by adapting $q(t_i)$, the control updating period $\tau_u=q(t_i)\cdot \tau$ is adapted.
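The steps (1)-(5) above can be sketched as follows on a toy scalar system, with a plain gradient solver standing in for $\mathcal S$ and a constant $q$ for readability (adapting $q$ is the subject of Algorithm \ref{algupdateq} recalled in section \ref{recalalgo}); all numerical values are illustrative.

```python
import numpy as np

# Sketch of the real-life implementation scheme: while the first q elements of
# U(p) are applied, the solver performs q iterations starting from a shifted
# warm start and from the model-predicted state. Toy model and values only.
a, b, N, q = 0.9, 0.5, 8, 2

def model_step(x, u):
    return a * x + b * u

def grad_J(p, x):
    """Gradient of sum_j x_j^2 + 0.1*u_j^2 via a forward/adjoint sweep."""
    xs = [x]
    for u in p:
        xs.append(model_step(xs[-1], u))
    g, lam = np.zeros(N), 0.0
    for j in range(N - 1, -1, -1):            # backward (adjoint) sweep
        lam = 2.0 * xs[j + 1] + a * lam
        g[j] = 0.2 * p[j] + b * lam
    return g

def S(q, p, x, step=0.05):
    """q unbreakable solver iterations (plain gradient descent here)."""
    for _ in range(q):
        p = p - step * grad_J(p, x)
    return p

x, p = 1.0, np.zeros(N)
for i in range(20):
    x_hat = x
    for u in p[:q]:                           # (2) apply the first q inputs
        x = model_step(x, u)                  # real plant (no mismatch here)
        x_hat = model_step(x_hat, u)          # (3.1) short-horizon prediction
    p_plus = np.r_[p[q:], np.zeros(q)]        # (3.2) shifted warm start
    p = S(q, p_plus, x_hat)                   # iterate while inputs are applied
print(abs(x))
```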
\subsection{Adaptation of the control updating period for a given solver $\mathcal S$} \label{recalalgo} \noindent The following definition specifies the class of solvers that is invoked in the sequel and for which the adaptation mechanism recalled in this section can be applied:\\ \begin{definition} \label{defmonotonic} A solver $\mathcal S$ is said to be monotonic w.r.t.\ the cost function $J: \mathbb{R}^{n_p}\times \mathbb{R}^{n}\rightarrow \mathbb{R}$ if for all $\mathbf x$, the iterations defined by (\ref{defdemathSq}) satisfy: \begin{eqnarray} J(p^{(i)},\mathbf x)\le J(p^{(i-1)},\mathbf x) \label{decrease} \end{eqnarray} for all $i$. This function is called hereafter the augmented cost function. $\hfill \diamondsuit$\\ \end{definition} Note that $J$ generally differs from the $J_0$ involved in (\ref{cost}) because of the presence of constraints. A typical example of such a $J$ is the norm of the nonlinear map that gathers the Karush-Kuhn-Tucker (KKT) necessary conditions of optimality, when the solver uses a descent approach such as the projected gradient or a specific implementation of the Sequential Quadratic Programming (SQP) approach with a trust-region mechanism. Interior-point algorithms can also fall into this category under certain circumstances, in which case the map $J$ would be given by the penalized version of $J_0$ involving barrier functions.
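A minimal sketch of such a monotonic solver: a gradient iteration with a backtracking (Armijo) line search never increases a made-up augmented cost $J$, here bounded below by a strictly positive constant.

```python
import numpy as np

# Sketch of a solver that is monotonic in the sense of Definition 1: a
# gradient step with Armijo backtracking never increases an illustrative
# augmented cost J, bounded below by the strictly positive constant 1.
def J(p, x):
    return float((p[0] - x) ** 2 + 0.5 * p[1] ** 2 + 1.0)

def grad(p, x):
    return np.array([2.0 * (p[0] - x), p[1]])

def S(q, p, x):
    """q monotonic iterations: backtracking enforces a sufficient decrease."""
    p = np.asarray(p, dtype=float)
    for _ in range(q):
        g, step = grad(p, x), 1.0
        while J(p - step * g, x) > J(p, x) - 0.5 * step * float(g @ g):
            step /= 2.0
            if step < 1e-12:
                step = 0.0
                break
        p = p - step * g
    return p

x, p, vals = 3.0, np.array([0.0, 2.0]), []
for _ in range(10):
    vals.append(J(p, x))
    p = S(1, p, x)
vals.append(J(p, x))
monotone = all(vals[i + 1] <= vals[i] for i in range(10))
print(monotone, vals[-1])
```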
\\ \begin{remark}\label{remalwaysdecrease} The conditions of Definition \ref{defmonotonic} can be relaxed in the following sense: if a solver $\mathcal S$ satisfies the condition: \begin{eqnarray} J(p^{(i+\ell-1)},\mathbf x)\le J(p^{(i-1)},\mathbf x) \label{decrease2} \end{eqnarray} for some map $J$, then the solver $\mathcal S^{'}$ that is derived from $\mathcal S$ by: \begin{eqnarray} \mathcal S^{'}(p,\mathbf x):=\mathcal S^{(\ell)}(p,\mathbf x) \end{eqnarray} is monotonic in the sense of Definition \ref{defmonotonic}, at the price of having a {\em single} iteration that takes $\ell$ times longer than that of $\mathcal S$, namely $\tau_1^{\mathcal S^{'}}=\ell\cdot \tau_1^{\mathcal S}$. $\hfill \diamondsuit$ \end{remark} \ \\ The following assumption is needed by the updating algorithm:\\ \begin{assumption} \label{boundedbelow} The solver $\mathcal S$ is monotonic and the corresponding map $J$ [see Definition \ref{defmonotonic}] is bounded below by a strictly positive real $\underline J$, namely: \begin{eqnarray} \forall (p,\mathbf x)\quad J(p,\mathbf x)\ge \underline J>0 \label{underlineJ} \end{eqnarray} \end{assumption} Note that the last condition (\ref{underlineJ}) can always be satisfied by adding an appropriate positive constant to the original cost. \ \\ \ \\ In order to recall the updating algorithm proposed in \cite{Alamir_ECC2013}, the following notations are needed:\ \\ \ \\ $J^+_k:=J\bigl(p^+(t_k),\hat{\mathbf x}(t_{k+1})\bigr)$ \vskip 0.2cm \begin{minipage}{0.02\textwidth} \ \end{minipage} \begin{minipage}{0.45\textwidth} the cost function value for the initial hot start $p^+(t_k)$ (before any iteration is performed), based on the predicted state at the future updating instant $t_{k+1}=t_k+q(t_k)\cdot \tau$.
\end{minipage} \ \\ \ \\ $\hat J_{k+1}:=J\bigl(p(t_{k+1}),\hat{\mathbf x}(t_{k+1})\bigr)$ \vskip 0.2cm \begin{minipage}{0.02\textwidth} \ \end{minipage} \begin{minipage}{0.45\textwidth} the cost function value for the delivered value $p(t_{k+1})$ (after $q(t_k)$ iterations), based on the predicted state at the future updating instant $t_{k+1}=t_k+q(t_k)\cdot \tau$. \end{minipage} \ \\ \ \\ $J_{k+1}:=J\bigl(p(t_{k+1}),\mathbf x(t_{k+1})\bigr)$ \vskip 0.2cm \begin{minipage}{0.02\textwidth} \ \end{minipage} \begin{minipage}{0.45\textwidth} the effectively obtained cost function value for the delivered value $p(t_{k+1})$ and for the true state $\mathbf x(t_{k+1})$ that is reached at instant $t_{k+1}=t_k+q(t_k)\cdot \tau$. \end{minipage} \ \\ \ \\ Based on these definitions, it follows that the decrease of the augmented cost function can be studied by analyzing the behavior of the ratio $J_{k+1}/J_k$, which can be decomposed according to: \begin{eqnarray} \dfrac{J_{k+1}}{J_k}=E_k^r(q(t_k))\times D_k^r(q(t_k)) \end{eqnarray} where \begin{eqnarray} E_k^r(q(t_k)):=\dfrac{\hat J_{k+1}}{J^+_k}\quad;\quad D_k^r(q(t_k)):=\dfrac{J_{k+1}}{\hat J_{k+1}}\times \dfrac{J_k^+}{J_k} \end{eqnarray} A closer analysis of the above terms shows that $E_k^r(q)$ is linked to the current efficiency of the solver since it represents the ratio between the values of the augmented cost, for the same predicted state $\hat{\mathbf x}(t_{k+1})$, before and after the $q(t_k)$ iterations are performed. The first ratio $J_{k+1}/\hat J_{k+1}$ in $D_k^r$ is $1$ if the model is perfect since it represents the ratio between two values of the augmented function for the same value $p(t_{k+1})$ of the parameter but for the two different state values $\hat{\mathbf x}(t_{k+1})$ and $\mathbf x(t_{k+1})$. Finally, the ratio $J^+_k/J_k$ is linked to the quality of the hot start since it represents the predicted ratio between two values of the augmented function before and just after the horizon shift.
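These ratios can be checked on made-up numbers; the sketch below only illustrates that the product $E^r_k\times D^r_k$ reproduces the overall contraction $J_{k+1}/J_k$.

```python
# Made-up values of the augmented cost over one updating period, used only to
# illustrate the decomposition J_{k+1}/J_k = E_k^r x D_k^r.
J_k    = 4.0   # J_k: cost at t_k
J_plus = 3.6   # J_k^+: after the horizon shift (hot-start quality)
J_hat  = 1.8   # hat J_{k+1}: after q(t_k) iterations, predicted state
J_next = 2.0   # J_{k+1}: same parameter, true (perturbed) state

E_r = J_hat / J_plus                       # solver efficiency term
D_r = (J_next / J_hat) * (J_plus / J_k)    # mismatch x hot-start term
K_r = E_r * D_r                            # contraction ratio of J
print(E_r, D_r, K_r, J_next / J_k)
```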
\ \\ \ \\ The algorithm proposed in \cite{Alamir_ECC2013} and recalled hereafter updates the number of iterations $q(t_{k+1})$ to be performed during the next updating period so that the contraction ratio: \begin{eqnarray} K_{k+1}^r(q(t_{k+1})):=E_{k+1}^r(q(t_{k+1}))\times D_{k+1}^r(q(t_{k+1})) \end{eqnarray} is lower than $1$ and, if this is achievable, the updating rule tries to minimize the corresponding expected response time $t_r$ of the dynamics, which is linked to the ratio $q/\log(K_{k+1}^r(q))$. \ \\ \ \\ This leads to the following algorithm \cite{Alamir_ECC2013}: \begin{algorithm}[H] \caption{Updating rule for $q(t_{k+1})$} \label{algupdateq} \begin{algorithmic}[1] \State{{\bf Parameters} $q_{max}\le N$, $\delta\in \{1,\dots,q_{max}\}$} \State{{\bf Input data} (available after Step (3.2) page \pageref{step32})} \State{$\quad$ $q=q(t_k)$, $p^{(0)}=p^+(t_k)$, $p^{(i)}=\mathcal S^{(i)}(p^{(0)},\hat{\mathbf x}(t_{k+1}))$} \State{$\quad$ $J_k$, $J^+_k$, $\hat J_{k+1}$, $J_{k+1}$} \State{Compute the following quantities:} \begin{eqnarray*} E^r &\leftarrow& \hat J_{k+1}/J^+_k \\ D^r &\leftarrow& (J_{k+1}J^+_k)/(\hat J_{k+1}J_k)\\ K^r &\leftarrow& E^r\times D^r\\ \dfrac{\Delta D^r}{\Delta q} &\leftarrow& \dfrac{1}{q}\bigl[D^r-1\bigr]\\ \dfrac{\Delta E^r}{\Delta q} &\leftarrow& \dfrac{J(p^{(q)},\hat{\mathbf x}(t_{k+1}))-J(p^{(q-1)},\hat{\mathbf x}(t_{k+1}))}{J(p^{(0)},\hat{\mathbf x}(t_{k+1}))}\\ \dfrac{\Delta K^r}{\Delta q}&\leftarrow& E^r\cdot \bigl[\dfrac{\Delta D^r}{\Delta q}\bigr]+D^r\cdot \bigl[\dfrac{\Delta E^r}{\Delta q}\bigr]\\ \dfrac{\Delta t_r}{\Delta q}&\leftarrow& \dfrac{-\log(K^r)+\dfrac{q}{K^r}\times \dfrac{\Delta K^r}{\Delta q}}{\bigl[\log(K^r)\bigr]^2} \end{eqnarray*} \State{{\bf If} $K^r\ge 1$ {\bf then} $\Gamma\leftarrow \dfrac{\Delta K^r}{\Delta q}$ {\bf else} $\Gamma\leftarrow \dfrac{\Delta t_r}{\Delta q}$}\vskip 0.1cm \State{{\bf Output} $q(t_{k+1})\leftarrow \max\bigl\{2,\min\{q_{max},q-\delta\cdot \mathrm{sign}(\Gamma)\}\bigr\}$} \end{algorithmic}
\end{algorithm} \noindent Roughly speaking, this algorithm implements a step of size $\delta$ in the descent direction defined by the sign of the approximated gradient $\Gamma$. The step is projected into the admissible domain $q\in \{2,\dots,q_{max}\}$. More details regarding this algorithm are available in \cite{Alamir_ECC2013}. \ \\ \ \\ Section \ref{secfastexp} shows the efficiency of the proposed algorithm when applied to a given solver for the PLC-based implementation of MPC on the cryogenic refrigerator. Before this, the next section gives a simple argument that underlines a fundamental trade-off between the efficiency (per iteration) of a solver and the corresponding unbreakable computation time $\tau_1^{\mathcal S}$. This is done in an adaptation-free context in order to decouple the analysis. \subsection{Fundamental trade-off in the choice of solvers} \label{fundtradeoff} \noindent Let us consider a solver $\mathcal S$ and the corresponding time $\tau_1^\mathcal{S}$ that is needed to perform the unbreakable amount of computations involved in a single iteration. Given a control updating period $\tau_u$, the number of iterations that can be performed is equal to $q=\lfloor\tau_u/\tau_1^\mathcal{S}\rfloor$ and the corresponding variation of the augmented cost function is given by: \begin{eqnarray} J_{k+1}-J_k:=\underbrace{J_{k+1}-J^+_k}_{-E^\mathcal{S}_k(\tau_u)}+\underbrace{J^+_k-J_k}_{D_k(\tau_u)} \label{DeltaJ} \end{eqnarray} where, here again, $E_k^{\mathcal S}(\tau_u)$ and $D_k(\tau_u)$ are linked to the current efficiency of the solver and to the combined effect of model mismatch and horizon shift on the cost function, respectively. Both terms obviously depend on $\tau_u$. Indeed, $E_k^\mathcal{S}(\tau_u)$ depends on $\tau_u$ through the number of iterations, while $D_k(\tau_u)$ depends on $\tau_u$ since $D_k$ vanishes when $\tau_u=0$ (no prediction error and no possibly bad hot start).
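This trade-off can be made concrete with made-up maps: below, a solver $\mathcal S_1$ contracting the cost by a factor $0.5$ per iteration but requiring $\tau_1 = 2$ s never satisfies the decrease condition $E_k^{\mathcal S}(\tau_u)>D_k(\tau_u)$, while a solver $\mathcal S_2$ contracting only by $0.8$ per iteration but requiring $\tau_1 = 0.25$ s does so for every tested $\tau_u$; the curves are illustrative, not measured.

```python
import math

# Made-up efficiency and disturbance maps illustrating the decrease condition
# E(tau_u) > D(tau_u) for two solvers with different per-iteration costs.
def E(tau_u, tau1, rho):
    q = math.floor(tau_u / tau1)      # iterations that fit within tau_u
    return 1.0 - rho ** q             # cost decrease after q contractions

def D(tau_u):
    return 0.4 * tau_u                # mismatch/shift-induced cost increase

taus = [0.25, 0.5, 1.0, 2.0]
ok_S1 = [E(t, 2.0, 0.5) > D(t) for t in taus]    # efficient but slow solver
ok_S2 = [E(t, 0.25, 0.8) > D(t) for t in taus]   # cheaper, less efficient one
print(ok_S1, ok_S2)
```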
Note that $E_k^\mathcal{S}$ and $D_k$ are absolute (non-relative) versions of the relative maps $E_k^r$ and $D_k^r$ invoked in section \ref{recalalgo} to introduce Algorithm \ref{algupdateq}. Note also that unlike the efficiency indicator $E_k^\mathcal{S}(\tau_u)$, which heavily depends on the solver, the term $D_k(\tau_u)$ is solver-independent. \ \\ \ \\ Figure \ref{compareSf} shows typical shapes of these terms for two different solvers $\mathcal S_1$ (more efficient) and $\mathcal S_2$ (less efficient). It can be seen that the iterations of $\mathcal S_1$ are more efficient at the price of a longer computation time per iteration, $\tau_1^{\mathcal S_1}>\tau_1^{\mathcal S_2}$. The dots on the right-hand plot recall that updates can be delivered only at quantized updating instants. \begin{figure}[H] \includegraphics[width=0.48\textwidth]{schematic} \caption{Possible shapes of the maps $D_k(\tau_u)$ and $E_k^\mathcal{S}(\tau_u)$ in realistic fast NMPC implementations. The right figure shows the efficiency maps for two different solvers corresponding to two different computation times per iteration $\tau_1^{\mathcal S_1}$ and $\tau_1^{\mathcal S_2}$.}\label{compareSf} \end{figure} \noindent Now, based on (\ref{DeltaJ}), the decrease of $J_k$ is conditioned by the inequality: \begin{eqnarray} E_k^\mathcal{S}(\tau_u)>D_k(\tau_u) \label{ineqtradeoff} \end{eqnarray} which expresses the need to have the $E_k^\mathcal{S}$ curve above the $D_k$ curve for the adopted value of the updating period. \begin{figure}[H] \begin{minipage}{0.49\textwidth} \framebox{\includegraphics[width=0.45\textwidth]{case1}} \framebox{\includegraphics[width=0.452\textwidth]{case2}} \end{minipage} \caption{(Left) Use of the more efficient solver $\mathcal S_1$: depending on the context, there are possible configurations of $D$ that make the decrease of the augmented cost impossible.
(Right) In such cases, the use of the less efficient solver $\mathcal S_2$ makes it possible to decrease the augmented cost thanks to shorter updating periods.} \label{twocases} \end{figure} \noindent Figure \ref{twocases} gives a qualitative illustration of the resulting fundamental trade-off: the left plot shows situations where the use of the more efficient solver $\mathcal S_1$ makes (\ref{ineqtradeoff}) impossible to satisfy regardless of the updating period being used. In such cases, the right plot shows that less efficient solvers like $\mathcal S_2$, together with appropriately short updating periods, can satisfy the decrease condition (\ref{ineqtradeoff}). The right figure also shows that, in this latter case, there may be several possible values of $\tau_u$ (several numbers of iterations) that decrease the cost, and an adaptive on-line monitoring algorithm like the one recalled in section \ref{recalalgo} may be appropriate to get closer to an optimal decrease. \ \\ \ \\ In the following sections, the two solvers that are used in the validation section are introduced. \vskip 0.5cm \section{Presentation of the algorithms} \label{secalgo} \subsection{qpOASES}\label{secalgoasis} The qpOASES \cite{Ferreau2014} solver is a well-known solver in the linear constrained MPC community. It offers a very efficient implementation of the active-set strategy \cite{Ferreau2008}. If several QP problems must be solved with constant Hessian and constraint matrices, the qpOASES package offers the possibility of hot-starting from the previous solution with a subroutine called \textit{qpOASES\textunderscore sequence}. In the sequel, the \textit{qpOASES\textunderscore sequence} subroutine will be used and will simply be referred to as qpOASES. \subsection{ODE-based solver}\label{odebasedsolver} In this section, the ODE-based solver that is used hereafter to implement the PLC-based constrained MPC is briefly presented.
The real-time performance of this solver is also compared to that of qpOASES in the computationally constrained PLC setting in order to illustrate Fact 1 mentioned above. \ \\ \ \\ Consider the Quadratic Programming (QP) problem defined by: \begin{equation}\label{pb_normal} \tilde{\mathcal{P}}(z) = \left\lbrace \matl{ \text{min:~} J_0(z) = z^T \Phi z + z^T \phi \cr \text{under~} \left\lbrace \matl{\Gamma z - \gamma \le 0 \cr \underline{z} \le z \le \overline{z} } \right. } \right. \end{equation} where $z \in \mathbb{R}^{n_z}$ is the decision variable, while $\Phi$ and $\phi$ are a matrix and a vector of appropriate dimensions. $\Gamma\in \mathbb{R}^{n_c\times n_z} $ and $\gamma\in \mathbb{R}^{n_c}$ are the matrices that define the set of $n_c$ inequality constraints, while $\underline{z}$ and $\overline{z}$ are lower and upper bounds on the decision variables. \\ \ \\ Based on the above formulation, the following augmented cost function can be defined: \begin{eqnarray} J(z):=J_0(z)&&+\alpha\sum_{i=1}^{n_c}\max(\Gamma_iz-\gamma_i,0)^\mu\nonumber \\ &&+\alpha\sum_{i=1}^{n_z}\max(z_i-\overline{z}_i,0)^\mu\nonumber \\ &&+\alpha\sum_{i=1}^{n_z}\max(\underline{z}_i-z_i,0)^\mu \label{augmentedJ} \end{eqnarray} where $\Gamma_i\in \mathbb{R}^{1\times n_z}$ is the $i$-th row of $\Gamma$. Based on this augmented cost, the following Ordinary Differential Equation (ODE) can be used to define a trajectory in the decision variable space along which the augmented cost decreases: \begin{eqnarray} \dot z=-\dfrac{dJ}{dz}(z) \label{TheOde} \end{eqnarray} Note however that this ODE is generally stiff because of the high values of $\alpha$ one needs to use in order to enforce constraint fulfillment. That is the reason why the one-step TR-BDF2 scheme described in \cite{trbdf2} for stiff differential equations is used here.
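The construction can be sketched on a tiny QP; for readability a projected explicit Euler scheme with a clamped step is used below instead of TR-BDF2, and $\alpha$, $\mu$ and the problem data are made up.

```python
import numpy as np

# Gradient flow z' = -dJ/dz for a tiny QP with one coupling constraint and
# box bounds; exterior penalties as in the augmented cost, explicit Euler
# instead of TR-BDF2 (hence the clamped step), all data illustrative.
Phi = np.array([[2.0, 0.0], [0.0, 1.0]])
phi = np.array([-2.0, -1.0])
Gam, gam = np.array([[1.0, 1.0]]), np.array([0.8])
z_lo, z_hi = np.zeros(2), np.ones(2)          # hard box bounds
alpha, mu = 100.0, 2

def grad_J(z):
    g = 2.0 * Phi @ z + phi                   # gradient of J0
    viol = np.maximum(Gam @ z - gam, 0.0)     # active part of Gamma z <= gamma
    return g + alpha * mu * (viol ** (mu - 1)) @ Gam

z = np.array([0.9, 0.9])
dt = 1.0 / np.sqrt(np.linalg.norm(grad_J(z))) # initial step of the scheme
dt = min(dt, 2e-3)                            # clamp: explicit-Euler stability
for _ in range(3000):
    z = np.clip(z - dt * grad_J(z), z_lo, z_hi)   # Euler step + box projection
print(z, (Gam @ z - gam).item())
```

With these data the flow settles near $z^*\approx(0.434,\,0.368)$, slightly violating the coupling constraint by an amount that shrinks as $\alpha$ grows, which is the expected behavior of an exterior penalty.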
\ \\ \ \\ Note also that after the computation of the TR-BDF2 step, all the decision variables that correspond to hard constraints (actuator saturations for instance) are projected into the admissible box before the next iteration is computed. In addition to the integration scheme described in \cite{trbdf2}, the initial time step is defined using the following expression: \begin{equation} \Delta t = \sqrt{\dfrac{1}{\| \dot{z}(t) \|}} \end{equation} In the case of the quadratic problem described in section \ref{Theproblem}, this method leads to fast convergence to a suboptimal solution $z^*$ that is very close to the actual optimal solution of the original problem, even under real-time constraints. The comparison between the solvers of sections \ref{secalgoasis} and \ref{odebasedsolver} is done in section \ref{secfastexpcomp}.\ \\ \ \\ Note also that this solver fully satisfies the decrease condition (\ref{decrease}) since it moves along the descent trajectory defined by (\ref{TheOde}). Therefore, the adaptation mechanism of the control updating period can be applied. \section{Plant description} \label{secplant} \subsection{General presentation} \begin{figure*} \begin{center} \includegraphics[width=0.90\textwidth]{400W_photos} \caption{Views of the cryogenic plant of CEA-INAC-SBT, Grenoble. (a) The screw compressor of the warm compression station. (b) The cold box.
(c) Internal detail of the cold box.}\label{400W_photos} \end{center} \end{figure*} \begin{figure} \begin{center} \begin{psfrags} \psfrag{name114}{\small{$\mathcal{S}1$}} \psfrag{name111}{\small{$\mathcal{S}2$}} \psfrag{name112}{\small{$\mathcal{S}3$}} \psfrag{name113}{\small{$\mathcal{S}4$}} \psfrag{name101}{\small{$NS_1$}} \psfrag{name102}{\small{$CV_{155}$}} \psfrag{name103}{\small{$NEF_1$}} \psfrag{name104}{\small{$NEF_2$}} \psfrag{name105}{\small{$Stt207$}} \psfrag{name106}{\small{$NEF_{34}$}} \psfrag{name107}{\small{$CV_{156}$}} \psfrag{name108}{\small{$NEF_5$}} \psfrag{name109}{\small{$NEF_6$}} \psfrag{name110}{\small{$CV_{167}$}} \psfrag{tag113}{\small{$LN2$}} \psfrag{tag114}{\small{$GN2$}} \psfrag{tag119}{\small{$CV_{952}$}} \psfrag{tag120}{\small{$CV_{953}$}} \psfrag{tag121}{\small{$CV_{956}$}} \psfrag{tag122}{\small{$NC_1+NC_2$}} \psfrag{tag105}{\small{$T_3,~P_3$}} \psfrag{tag107}{\small{$T_4,~P_4$}} \psfrag{tag108}{\small{$T_5,~P_5$}} \psfrag{tag109}{\small{$T_6,~P_6$}} \psfrag{tag110}{\small{$T_7,~P_7$}} \psfrag{tag106}{\small{$T_8,~P_8$}} \psfrag{tag111}{\small{$T_9,~P_9$}} \psfrag{tag112}{\small{$T_{10},~P_{10}$}} \psfrag{tag115}{\small{$T_{11},~P_{11}$}} \psfrag{tag116}{\hspace{-0.8cm}\small{$T_{12},~P_{12},~M_{12}$}} \psfrag{tag117}{\small{$S_1$}} \psfrag{tag101}{\small{$NCR_{22}$}} \psfrag{tag102}{\small{$L_1,~T_1,~P_1$}} \psfrag{tag103}{\small{$ $}} \psfrag{tag104}{\small{$T_2,~P_2$}} \includegraphics[scale = .68]{refr_simplifie_revu} \end{psfrags} \caption{Functional overview of the $450 W$ at $4.4K$ helium refrigerator available at CEA-INAC-SBT, Grenoble. The components named $CV$ are controlled valves, used to control the system. The label $Stt$ stands for the cryogenic turbine while $NS$ is used for the phase separator. $NC$'s are helium compressors while $NEF$'s stand for heat exchangers. $T$'s and $P$'s stand for temperature and pressure sensors. 
$S_1$ is the turbine speed sensor while $L_1$ stands for the bath level sensor.} \label{400W_schema} \end{center} \end{figure} Fig. \ref{400W_photos} shows an overview of the cryogenic plant of CEA-INAC-SBT, Grenoble. This plant provides a nominal cooling capacity of $450\ W$ at $4.4\ K$ in the configuration in which this study has been carried out. It is dedicated to physical experiments (cryogenic component testing, turbulence and pulsed heat load studies, etc.).\\ The process flow diagram of the cryogenic plant is shown in Fig. \ref{400W_schema}. One may notice the following main elements: \begin{itemize} \item[-] Two volumetric screw compressors ($NC_*$) and a set of control valves ($CV_{95*}$), \item[-] Several counterflow heat exchangers ($NEF_*$), a liquid nitrogen pre-cooler ($NEF_5$), \item[-] A cold turbine expander which extracts work from the circulating gas ($Stt_{207}$), \item[-] A so-called turbine valve ($CV_{156}$), \item[-] A Joule-Thomson expansion valve for helium liquefaction ($CV_{155}$), \item[-] A phase separator ($NS_1$), connected to the loads (simulated here by the heating device referred to as $NCR_{22}$).\\ \end{itemize} Note that the plant can be viewed as the interconnection of four elementary subsystems: the Warm Compression Station ($\mathcal{S}_4$), the Nitrogen Pre-Cooler ($\mathcal{S}_3$), the Brayton Cycle ($\mathcal{S}_2$) and the JT cycle ($\mathcal{S}_1$), delimited by dotted lines in Fig. \ref{400W_schema}.
While constrained MPC is used in this study, the cryogenic system is classically controlled by three independent control loops: \begin{itemize} \item The output temperature of the turbine expander is controlled by a PI controller working with the turbine valve $CV_{156}$; \item the level of liquid helium in the tank is controlled by a PI controller working with the heating device $NCR_{22}$; \item the high and low pressures (in red and blue pipes, respectively) are controlled by an LQ controller, like the one described in \cite{bonne_cec_1}; \end{itemize} the valve $CV_{155}$ being used at a constant opening set by the user, depending on the application. In this study, attention has been focused on subsystems $1$ and $2$, which are the coldest part of the refrigerator (from $80K$ to $4.4K$). More information about the plant can be found in \cite{clavel_these}. \subsection{Model derivation and properties} In order to derive the system model, several studies have been conducted \cite{clavel_these,clavel,bonne:hal-00922066,bonne_cec_2}. The Joule-Thomson cycle of this paper has been modelled in \cite{bonne:hal-00922066} while the Brayton cycle is presented in detail in \cite{bonne_cec_2}. It is worth mentioning that the heat exchangers involve models with coupled partial differential equations (PDEs) that have been spatially discretized, leading to a rather large state space. In this study, the two models have been merged to obtain a state space model of the following form: \begin{subequations}\label{model_cold_end} \begin{align} \dot{\mathbf x} &= f^1(\mathbf x,\mathbf u,\mathbf w)\\ \mathbf y &= f^2(\mathbf x,\mathbf u,\mathbf w) \end{align} \end{subequations} where $f^1$ is the function that expresses the derivative of the state $\mathbf x$ while $f^2$ is the function that expresses the measured output vector $\mathbf y$. Both functions are continuously differentiable.
The state vector, input vector and disturbance vector are given more precisely by \begin{equation} \mathbf x = \pmat{\mathbf x_{ns1}\cr \mathbf x_{nef2}\cr \mathbf x_{nef1}}, \quad \mathbf u = \pmat{CV_{155} \cr NCR_{22}^{A} \cr CV_{156}}, \quad \mathbf w = NCR_{22}^{HL} \end{equation} where $\mathbf x_{ns1}$, $\mathbf x_{nef1}$ and $\mathbf x_{nef2}$ denote the state vectors of the individual components described in \cite{bonne:hal-00922066,bonne_cec_2}. It has to be noted that $NCR_{22}$ is used both to control the plant and to disturb it. That is why it has been split into $NCR_{22}^{A}$ for the actuator part and $NCR_{22}^{HL}$ for the heat load part. In the end, $NCR_{22} = NCR_{22}^{HL} + NCR_{22}^{A}$. The vector of measured outputs is the following: \begin{equation} \mathbf y = \pmat{L_1 & V_1 & T_1 & \cdots & T_{10} & P_1 & \cdots & P_{10}}^T \end{equation} It has been shown in \cite{bonne_cec_2} that the nonlinear model expressed by \eqref{model_cold_end} can be linearized around an operating point of interest defined by $f^1(\mathbf x_0,\mathbf u_0,\mathbf w_0) = 0$.
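On a toy two-state model (not the refrigerator model), the linearization step can be sketched by estimating the Jacobians with central finite differences around an equilibrium point; the subsequent zero-order-hold discretization is included via SciPy's counterpart of Matlab's $c2d$.

```python
import numpy as np
from scipy.signal import cont2discrete

# Toy two-state model: Jacobians by central finite differences around an
# equilibrium f1(x0,u0,w0)=0, then zero-order-hold discretization (the SciPy
# analogue of Matlab's c2d) of the stacked control/disturbance channels.
def f1(x, u, w):
    return np.array([-x[0] + 0.5 * x[1] + u[0],
                     -2.0 * x[1] + u[0] + w[0]])

def jac(f, v0, eps=1e-6):
    cols = []
    for i in range(len(v0)):
        dv = np.zeros(len(v0)); dv[i] = eps
        cols.append((f(v0 + dv) - f(v0 - dv)) / (2.0 * eps))
    return np.column_stack(cols)

x0, u0, w0 = np.array([0.75, 0.5]), np.array([0.5]), np.array([0.5])
assert np.allclose(f1(x0, u0, w0), 0.0)       # (x0,u0,w0) is an equilibrium
A_c = jac(lambda x: f1(x, u0, w0), x0)        # continuous-time A matrix
B_c = jac(lambda u: f1(x0, u, w0), u0)        # control channel
F_c = jac(lambda w: f1(x0, u0, w), w0)        # disturbance channel

tau = 5.0                                     # sampling period [s]
Ad, BFd, _, _, _ = cont2discrete(
    (A_c, np.hstack([B_c, F_c]), np.eye(2), np.zeros((2, 2))), tau)
A, B, F = Ad, BFd[:, :1], BFd[:, 1:]
print(A)
```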
The linearized model is then discretized using the Matlab function $c2d(\cdot)$ with sampling period $\tau = 5\,s$, leading to the following discrete LTI model: \begin{eqnarray} \tilde{\mathbf x}_{k+1} = A \tilde{\mathbf x}_{k} + B \tilde{\mathbf u}_{k} + F \tilde{\mathbf w}_{k} \label{gftr56}\\ \tilde{\mathbf y}_{k} = C \tilde{\mathbf x}_{k} + D \tilde{\mathbf u}_{k} + G \tilde{\mathbf w}_{k}\label{gftr57} \end{eqnarray} where the variables with a tilde denote the deviations of the original variables around the operating point of interest: \begin{equation}\label{deviationvariables} \begin{split} \mathbf x_{k} & = \tilde{\mathbf x}_{k} +\mathbf x_0, \quad \tilde{\mathbf u}_{k} = \mathbf u_{k} - \mathbf u_0\\ \mathbf y_{k} & = \tilde{\mathbf y}_{k} +\mathbf y_0, \quad \tilde{\mathbf w}_{k} = \mathbf w_{k} - \mathbf w_0 \end{split} \end{equation} Note that the model defined by (\ref{gftr56}) stands for the prediction model (\ref{predictionmodel}) invoked in the general presentation of MPC (section \ref{defandnotes}). Following the same notation, the predicted output is denoted by $\mathbf y_{k+j}=Y(j,\mathbf x_k,p,\hat{\tilde{\mathbf w}}_k)$ while the true measured output is denoted by $Y^r(j,\mathbf x_k,p,\hat{\tilde{\mathbf w}}_k)$. \subsection{Statement of the MPC-related optimisation problem}\label{Theproblem} \noindent First of all, the following constraints have to be satisfied as far as possible: \begin{subequations}\label{contr} \begin{align} \underline{y^c} \leqslant ~ &\mathbf y^c_k \leqslant \overline{y^{c}}\\ \underline{\mathbf u} \leqslant ~ &\mathbf u_k \leqslant \overline{\mathbf u}\\ \underline{\delta \mathbf u } \leqslant ~ &\delta \mathbf u_k \leqslant \overline{\delta \mathbf u } \end{align} \end{subequations} where $\delta \mathbf u_k$ stands for the increment $\mathbf u_k-\mathbf u_{k-1}$ of the input vector and $\mathbf y^c_k $ denotes the subset of output components of $\mathbf y_k $ that is constrained.
This subset is composed of the helium bath level $L_1$ and the turbine output temperature $T_5$. Details regarding the variables involved in (\ref{contr}) are given in Table \ref{tabletable}: \renewcommand{\arraystretch}{1.2} \begin{table}[H] \begin{center} \begin{tabular}{c|c|c} Var. & Meaning & Value \\ \hline $\underline{\mathbf u}$ & min. control effort & $ ( 20~~20~~0 )^T $ \\ $\overline{\mathbf u}$ & max. control effort & $ ( 60~~60~~150 )^T $ \\ $\underline{y^c}$ & low limit on the output & $ ( 59~~16 )^T $ \\ $\overline{y^c}$ & high limit on the output & $ ( 61~~9 )^T $ \\ $\underline{\delta \mathbf u }$ & min. increment & $ ( 0.5~~10~~0.1 )^T $ \\ $\overline{\delta \mathbf u }$ & max. increment & $ ( 0.5~~10~~0.1 )^T $ \\ \end{tabular} \end{center} \caption{The constraint bounds} \label{tabletable} \end{table} One specific feature of the output constraints is that they cannot necessarily be fully respected, depending on the unpredictable thermal loads. That is why these constraints are systematically relaxed.
This is introduced through the constraint violation variable $\mathbf v_k$ that is defined as follows: \begin{equation}\label{inst_viol} \mathbf v_k = \max(\mathbf y^c_k-\overline{y^{c}},0) + \max(\underline{y^c} - \mathbf y^c_k,0) \end{equation} while the predicted constraint violation at sampling instant $k+j$ is given by: \begin{equation}\label{traj_viol} \begin{split} V(j,\mathbf x_k,p,\hat{\tilde{\mathbf w}}_k) = \max(Y^c(j,\mathbf x_k,p,\hat{\tilde{\mathbf w}}_k)-\overline{y^{c}},0) + \\ \max(\underline{y^c} -Y^c(j,\mathbf x_k,p,\hat{\tilde{\mathbf w}}_k),0) \end{split} \end{equation} where $Y^c(j,\mathbf x_k,p,\hat{\tilde{\mathbf w}}_k)$ denotes the constrained subset of $Y(j,\mathbf x_k,p,\hat{\tilde{\mathbf w}}_k)$.\ \\ \ \\ The sequence of control vectors $u^{(i)} (p)$ is then obtained by minimizing the cost function: \begin{equation}\label{real_cost} \begin{split} J_k = \sum _{j=1}^{N_p} \|X(j,\mathbf x_k,p,\hat{\tilde{\mathbf w}}_k)\|_Q^2 + \|u^{(j)}(p)\|_R^2 + \\ \|V(j,\mathbf x_k,p,\hat{\tilde{\mathbf w}}_k)\|_\rho^2 \end{split} \end{equation} where $Q$ and $R$ are weighting matrices on the state and input vectors while $\rho$ defines the constraint violation-related penalty. This cost function, together with the linear constraints and the linearized model (\ref{gftr57}), leads to a constrained QP problem which is of the form (\ref{pb_normal}) and in which the decision variable $z$ is precisely the control profile parameter $p$. Note also that the affine term $\phi$ [see (\ref{pb_normal})] does depend on the current value of the disturbance $\mathbf w=NCR_{22}^{HL}$. \ \\ \ \\ With the sampling period $\tau = 5$ s, preliminary simulations showed that a prediction horizon of at least $N_p = 100$ is required. This leads to an optimization problem that involves $700$ decision variables and a total number of $1000$ constraints to be satisfied if the trivial pwc parametrization is adopted.
Such a problem is beyond the computational capacity of the targeted industrial PLC (see the performance of our PLC in section \ref{PLCDES}).\ \\ \ \\ To reduce the problem dimension, the control profile has been parametrized using a classical piecewise-affine method that leaves as decision variables the values of the control inputs at $7$ decision instants\footnote{The decision instants are chosen to be: $(1,~2,~4,~8,~16,~50,~50,~100)$}. Moreover, the constraint satisfaction is checked only at $14$ future instants\footnote{The constraint verification instants are chosen to be:\\ $(1,~2,~3,~4,~6,~8,~16,~24,~32,~48,~60,~72,~84,~100)$}. This finally leads to an optimization problem involving $49$ decision variables (note that there are $7$ control inputs, namely $3$ physical inputs and $4$ virtual inputs representing the constraint violations), with $56$ output constraints plus $38$ rate saturation constraints to be satisfied. \ \\ \ \\ To ensure that this scheme is appropriate to control the plant, the closed-loop system is first simulated using the qpOASES solver. Time-domain results are presented in Fig. \ref{working_mpc}. Fig. \ref{working_mpc} (a) shows the thermal heat load that has been used in this simulation. Part (b) shows that the scheme is able to decrease the instantaneous stage cost defined as: \begin{equation}\label{cost_inst} \bar{J}^{inst}_k= \|\mathbf x_k^r\|^2_Q + \|\mathbf u_k\|^2_R +\|\mathbf v_k\|^2_\rho \end{equation} Parts (c) and (d) of Fig. \ref{working_mpc} show that the constraints are violated only with limited amplitude and duration. Part (e) shows the control effort. Part (f) shows the number of iterations of the qpOASES solver. It is worth mentioning that the number of iterations becomes large during heat load events. This has significant consequences on the real-time feasibility of the qpOASES-based solution, as examined in the sequel.
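The reduced parametrization can be sketched for a single input channel: a $100$-sample profile is reconstructed piecewise-affinely from its values at a few decision instants; the instants and values below are illustrative, not the ones used on the plant.

```python
import numpy as np

# One input channel of the reduced parametrization: the full 100-sample
# profile is interpolated piecewise-affinely from 7 decision values.
decision_instants = np.array([1, 2, 4, 8, 16, 50, 100])          # illustrative
decision_values = np.array([0.0, 0.2, 0.5, 0.8, 1.0, 0.7, 0.4])  # 7 unknowns
k = np.arange(1, 101)                     # the N_p = 100 prediction steps
profile = np.interp(k, decision_instants, decision_values)
print(profile.size, profile[0], profile[-1])
```

The decision vector thus shrinks from $100$ to $7$ values per input, which is where the $49$-variable problem above comes from.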
\setlength\fheight{1.75cm} \setlength\fwidth{7cm} \begin{figure}[H] \input{res_unc.tikz} \caption{Simulated behaviour of the system under qpOASES-based MPC control without limitation of the number of iterations.} \label{working_mpc} \end{figure} \subsection{Description of the PLC} \label{PLCDES} \noindent This section focuses on the Programmable Logic Controller (PLC) available to implement the QP-based constrained MPC. It is a Schneider TSX P574634M shown in Fig. \ref{tsx}. This PLC is fully dedicated to our application and it communicates the optimisation results to another PLC that actually controls the plant. \begin{figure}[H] \begin{center} \includegraphics[width=0.1\textwidth]{tsx} \caption{Schneider PLC TSX P574634M} \label{tsx} \end{center} \end{figure} \ \\ \ \\ According to the manufacturer, this PLC offers a maximum computing capability of about $1.8~Mflops$ \cite{doc_plc}. In order to evaluate this claim, the Cholesky factorisation of matrices of increasing size has been executed while monitoring the computation times. Fig. \ref{performance_automate} shows the results and compares them to the performance of a recent DELL Latitude E6520 laptop with an Intel I5-2520M CPU. This graph shows a slowdown factor of around $4000$. Note that the same graph shows the performance of the PLC in ms while the performance of the laptop is shown in $\mu$s.\ \\ \ \\ Note that the PLC is used with an external PCMCIA memory card of $2\,MB$, shared between code and variables. This makes memory a crucial issue as well. Indeed, without the reduced parametrization, the Hessian of the QP problem would barely fit into the memory of the PLC, since it represents a total memory occupation of $4\times 700^2 \approx 1.96\,MB$ in single-precision representation. \setlength\fheight{2.5cm} \setlength\fwidth{7cm} \begin{figure}[t] \input{pui_auto_i5.tikz} \caption{Cholesky factorisation time for two different CPUs.
It can be noticed that the performance ratio between the PLC and the laptop is about $4000$ for matrices of size $40$ to $125$.} \label{performance_automate} \end{figure} \ \\ \ \\ Now, since a single iteration of the qpOASES solver takes approximately $120~\mu s$ on the laptop, the same iteration would take $120~\mu s \times 4000 = 0.48$~s when executed on the PLC. Therefore, only $10$ iterations of the qpOASES solver can be performed during the sampling period $\tau=5$~s. The scenario shown in Fig. \ref{working_mpc} with no bound on the number of iterations has been re-simulated with the qpOASES \texttt{maxiter} option set to $10$. The result is presented in Fig. \ref{not_working_mpc}, on which the unlimited case has also been reported for ease of comparison. \setlength\fheight{1.75cm} \setlength\fwidth{7cm} \begin{figure}[t] \input{res_c.tikz} \caption{Simulated behaviour of the system under MPC control for both unconstrained (black lines) and constrained (red lines) solving time} \label{not_working_mpc} \end{figure} \ \\ \ \\ Figure \ref{not_working_mpc} shows that when the number of iterations of the hot-started qpOASES solver is limited to $10$, the closed-loop performance as well as the constraint fulfillment are drastically affected. It is precisely for this reason that the ODE-based solver explained in Section \ref{odebasedsolver} has been developed: it requires less computation time per iteration and can therefore be more suitable in the presence of the limited-performance PLC, following the discussion of Section \ref{fundtradeoff}. \section{Fast MPC-related investigation} \label{secfastexp} \subsection{Comparison of algorithms} \label{secfastexpcomp} \noindent The aim of the present section is to assess the first fact mentioned above, namely that it is sometimes better to use a solver that makes less progress per iteration (the ODE-based solver in our case) provided that each iteration requires less computation time.
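The Cholesky benchmark used for the PLC-versus-laptop comparison can be reproduced in spirit with a few lines; the sizes and repeat count below are illustrative:

```python
import time
import numpy as np

def cholesky_time(n, repeats=20):
    """Median wall-clock time of one Cholesky factorisation
    of an n x n symmetric positive-definite matrix."""
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n))
    spd = a @ a.T + n * np.eye(n)  # well-conditioned SPD matrix
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.linalg.cholesky(spd)
        times.append(time.perf_counter() - t0)
    return float(np.median(times))

# Benchmark over increasing matrix sizes, as done for Fig. (performance_automate).
sizes = [40, 60, 80, 100, 125]
timings = {n: cholesky_time(n) for n in sizes}
```

Running the same routine on two machines and taking the ratio of the timings gives the slowing factor discussed above; on the PLC the factorisation would of course be coded in structured language rather than Python.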
In our case, as far as the above-described PLC is used, it is possible to perform $20$ iterations of the ODE-based solver against only $10$ iterations of the qpOASES solver. \ \\ \ \\ Eight-hour simulations have been carried out with the two solvers, with a variable computational capability (i.e. a variable allowed number of iterations). Some relevant results are plotted, always as a function of the normalized computational capability $\bar{P} = P/P_0$ where $P_0$ is the computational capability of our device.\ \\ \ \\ In order to support the comparison, which can be difficult because of the presence of relaxed weighted constraints, the cost (\ref{real_cost}) to be minimized at each sampling period has been split into two parts, to be compared separately. The first part represents the deviation cost: \begin{equation}\label{cost_dev_bo} \bar{J}^{dev}_k = \sum _{j=1}^{N_p} \|X(j,\mathbf x_k,p,\hat{\tilde{\mathbf w}}_k)\|_Q^2 + \|u^{(j)}(p)\|_R^2 \end{equation} while the second part stands for the output constraint violation cost: \begin{equation}\label{cost_cst_bo} \bar{J}^{cst}_k = \sum _{j=1}^{N_p} \|V(j,\mathbf x_k,p,\hat{\tilde{\mathbf w}}_k)\|_\rho^2 \end{equation} and then the sum of these two costs over the whole simulation is expressed:\\ \begin{minipage}[r]{0.243\textwidth} \begin{equation}\label{cost_sum_bo_dev} \bar{\mathcal{J}}^{dev} = \sum _{k=1} ^{N_{sim}} \bar{J}_k^{dev} \end{equation} \end{minipage} \hfill \begin{minipage}[l]{0.23\textwidth} \begin{equation}\label{cost_sum_bo_cst} \bar{\mathcal{J}}^{cst} = \sum _{k=1} ^{N_{sim}} \bar{J}_k^{cst} \end{equation} \end{minipage} ~\\ where $N_{sim}$ is the number of problems solved during the simulation.\\ \setlength\fheight{1.75cm} \begin{figure} \input{solvs5.tikz} \caption{Performance indicators of the two-solver comparison vs the normalized computation power.
The case $\bar P=1$ corresponds to the PLC we dispose of, which is presented in Section \ref{PLCDES}.} \label{solv_cmp} \end{figure} Then, constraint satisfaction is presented in two different manners: \begin{equation}\label{cst_resp_bo_1} c_1 = \max_{k\in \{1,\dots,N_{sim}\}}\max_{j\in \{1,\dots,n_c\}}\max\{\Gamma_jp_k-\gamma_j,0\} \end{equation} being the maximum predicted constraint violation during the simulation, while \begin{equation}\label{cst_resp_bo_2} c_2 = \dfrac{1}{N_{sim}} \sum _{k=1}^{N_{sim}} \max_{j\in \{1,\dots,n_c\}}\max\{\Gamma_jp_k-\gamma_j,0\} \end{equation} being the average predicted constraint violation during the simulation.\ \\ \ \\ Finally, a closed-loop cost has been computed according to: \begin{equation} \label{cst_sum_bf_1} \bar{\mathcal{J}}^{BF} = \sum _{k=0}^{N_{sim}} \bar{J}^{inst}_k \end{equation} \ \\ The quantities (\ref{cost_sum_bo_dev}), (\ref{cost_sum_bo_cst}), (\ref{cost_sum_bo_dev})+(\ref{cost_sum_bo_cst}), (\ref{cst_resp_bo_1}), (\ref{cst_resp_bo_2}) and (\ref{cst_sum_bf_1}) are shown in Fig. \ref{solv_cmp} against the normalized computational performance $\bar{P}$. It can be noticed that the suboptimal ODE-based solver behaves better than qpOASES on low-performance computation devices, while the qpOASES solver becomes clearly better beyond some hardware performance threshold.\\ The closed-loop trajectories are shown in Fig. \ref{mult_solv}, comparing, for the nominal PLC performance $P_0$, the ODE-based solver ($20$ iterations) and the qpOASES solver limited to its maximum of $10$ iterations, together with the reference obtained with the qpOASES solver without iteration limit. It appears clearly that the use of the less efficient (per iteration) solver with $20$ iterations outperforms the use of $10$ iterations of the qpOASES solver. Moreover, the use of the ODE-based solver enables the nominal qpOASES performance (without limitation) to be recovered.
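For diagonal weightings, the performance indicators defined above can be sketched as follows; the function names, the toy trajectories and the identity constraint matrix are ours, introduced only for illustration:

```python
import numpy as np

def weighted_sq(v, w):
    """||v||_W^2 for a diagonal weighting W given as a vector w."""
    return float(np.sum(w * v * v))

def deviation_and_violation_costs(X, U, V, Q, R, rho):
    """Split stage cost over a horizon of N_p steps:
    J_dev = sum_j ||x_j||_Q^2 + ||u_j||_R^2 ,  J_cst = sum_j ||v_j||_rho^2."""
    j_dev = sum(weighted_sq(x, Q) + weighted_sq(u, R) for x, u in zip(X, U))
    j_cst = sum(weighted_sq(v, rho) for v in V)
    return j_dev, j_cst

def violation_metrics(P, Gamma, gamma):
    """c1 = worst predicted violation of Gamma p <= gamma over all problems,
    c2 = average over problems of the per-problem worst violation."""
    worst = [max(float(np.max(Gamma @ p - gamma)), 0.0) for p in P]
    return max(worst), sum(worst) / len(worst)

# Toy check with identity constraints and gamma = (1, 1):
Gamma, gamma = np.eye(2), np.array([1.0, 1.0])
c1, c2 = violation_metrics([np.array([2.0, 0.0]), np.array([0.5, 0.5])],
                           Gamma, gamma)
j_dev, j_cst = deviation_and_violation_costs(
    X=[np.array([1.0, 1.0])], U=[np.array([1.0])], V=[np.array([2.0])],
    Q=np.array([1.0, 1.0]), R=np.array([2.0]), rho=np.array([3.0]))
```

Summing the per-problem values over the $N_{sim}$ solved problems then gives $\bar{\mathcal{J}}^{dev}$, $\bar{\mathcal{J}}^{cst}$, $c_1$ and $c_2$ as plotted in Fig. \ref{solv_cmp}.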
\setlength\fheight{1.75cm} \begin{figure} \input{mult_solv.tikz} \caption{Comparison of the closed-loop performance under the ODE-based solver ($20$ iterations), the qpOASES solver ($10$ iterations) and the qpOASES solver (without limitations).} \label{mult_solv} \end{figure} \subsection{Control updating period monitoring} In this section, attention is focused on the ODE-based solver. First, simulations are carried out for updating periods from one to five seconds (i.e. a number of iterations from 4 to 20), and it is shown that the quadratic performance varies and that there is an optimum to be found. Then the algorithm described in \cite{Alamir_ECC2013} is implemented to show its efficiency on the cryogenic plant. \\ The six-hour heat-load scenario presented in Fig. \ref{scenarios} is divided into six one-hour parts to be simulated. The cost \eqref{cost_sum_bo_dev} + \eqref{cost_sum_bo_cst} defined in the previous section is plotted against the chosen updating period. The result is presented in Fig. \ref{tu_opt}. It can be noted that the optimal updating period differs depending on the scenario. This illustrates the fact that the updating period should be monitored to enhance performance. Fig. \ref{tu_opt} also shows the performance obtained by monitoring the updating period using the algorithm of \cite{Alamir_ECC2013}. It can be seen that this leads to enhanced performance. \setlength\fheight{1.75cm} \setlength\fwidth{3.25cm} \begin{figure} \begin{center} \input{Tu_opti.tikz} \caption{Normalized cost \eqref{cost_sum_bo_dev}+\eqref{cost_sum_bo_cst} against the updating period (and consequently the number of iterations) for six different scenarios named (a) to (f).
Solid lines represent the cost while dotted lines depict the costs obtained with the algorithm described in \cite{Alamir_ECC2013} for $\delta =2$.} \label{tu_opt} \end{center} \end{figure} \setlength\fwidth{16cm} \setlength\fheight{2.25cm} \begin{figure*} \begin{center} \input{scenarios.tikz} \caption{Six-hour heat-load scenario} \label{scenarios} \end{center} \end{figure*} \vskip 0.5cm \section{Experimental results} \label{secexp} The control scheme derived in Section \ref{Theproblem} and the solver described in Section \ref{odebasedsolver} have been implemented in the Schneider PLC described in Section \ref{PLCDES}, in structured language. The objective of this section is threefold. First, we want to show that the problem derived in Section \ref{Theproblem} is relevant for the control of a cryo-refrigerator subjected to transient heat loads. Then, we want to emphasize that the algorithm described in Section \ref{odebasedsolver} is PLC-compliant, even with polyhedral constraints. Finally, we will see that monitoring the updating scheme is very useful in this particular case. \subsection{Control result with real time PLC implementation} The plant has been subjected to a two-hour scenario (the first two hours of Fig. \ref{scenarios}), starting from equilibrium. The observed time per iteration is never longer than $500$~ms, as expected, and the problem preparation time does not exceed $500$~ms either. This allows the optimisation algorithm to perform $4\tau_u -1$ iterations. For the first test, we chose an updating period of $\tau_u = 5$~s. Fig. \ref{scenario_exp} shows that the control scheme is able to stabilize the plant and to enforce the constraints, even when the plant is subjected to transient variable loads. \setlength\fwidth{7cm} \setlength\fheight{1.75cm} \begin{figure} \begin{center} \input{res_exp.tikz} \caption{Two-hour heat-load scenario. This figure shows that the problem derived in Section \ref{Theproblem} is relevant to control the plant.
The $\Delta$ level represents the variation of the helium level $L1$ in the tank, Turbine stands for the output turbine temperature $T_5$, and Inflow depicts the high-pressure flow $M_{12}$ coming into the cold-box.}\label{scenario_exp} \end{center} \end{figure} \subsection{Some leads on the updating scheme efficiency} The algorithm that updates the updating period has been implemented on the PLC to show its efficiency. Unfortunately, the cost is not monitored online, but it is still possible to show results in the time domain. Fig. \ref{diff_upt} shows the difference between a constant updating period and a variable one. One can see that in the case of a significant change in the thermal load, the updating period increases so as to allow more iterations, while the algorithm imposes a short updating period as soon as the problem does not change much from one updating instant to the next. \setlength\fwidth{3.5cm} \setlength\fheight{1.75cm} \begin{figure} \begin{center} \input{res_exp_tu.tikz} \hspace{-2cm} \hfill \pgfplotsset{ yticklabel style = {color = red, opacity = 0.0}} \input{res_exp_tu_fix.tikz} \caption{Result with both a constant ($2.5$~s) and a real-time updated updating period. It can be noticed that in the case of a heat load disturbance, the updating period increases (in order to also increase the number of iterations), since the hot-start solution is far from the actual solution. Period represents the updating period $\tau_u$. For the actuators legend, please refer to Fig. \ref{scenario_exp}. } \label{diff_upt} \end{center} \end{figure} \vskip 0.5cm \section{Conclusion} \label{secconc} In this paper, an efficient way to control a cryogenic plant subjected to variable heat loads using an industrial PLC with reduced computing capabilities is proposed. It has been shown that, in this application, an ODE-based solver gives robust suboptimal solutions even when the hot-start is far from the optimal solution (in the case of an unpredicted disturbance, for instance).
The control scheme and the solver have both been validated experimentally. Moreover, an algorithm that automatically monitors the control updating period has been implemented and successfully tested experimentally. Future investigations will aim at developing an MPC control scheme for a refrigerator subjected to multiple different thermal loads, including at $1.8$~K (super-fluid helium). Also, cryogenic systems can be very large (spanning several buildings): the control scheme will be distributed in order to ensure a progressive integration into industrial systems. \vskip 0.5cm \section*{Acknowledgment} The authors would like to thank all co-workers at the SBT for their kind help in improving the models and the control strategy and for their time spent correcting and discussing this paper. The authors give special thanks to Michel Bon-Mardion, Lionel Monteiro, François Millet, Christine Hoa, Bernard Rousset and Jean-Marc Poncet from SBT for their explanations about the process and their participation in the experimental campaigns. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{plain}
\section{Introduction}\label{sec_intro} Mergers of binary neutron stars (BNSs) and neutron star-black hole (NS-BH) binaries have been thought to generate neutron-rich ejecta through rapid neutron capture ($r$-process) nucleosynthesis \citep{lattimer1974,lattimer1977,eichler1989}. The expanding ejecta, heated by the radioactive decay of $r$-process nuclei, can produce a type of transient whose luminosity is theoretically about a thousand times that of a typical nova, hence named ``kilonova'' (or, less commonly, ``macronova'') \citep{Li_1998,10.1111/j.1365-2966.2010.16864.x}. In addition, it has been proposed that BNS or NS-BH mergers can also produce short-duration gamma-ray bursts (sGRBs) \citep{pacynski1986,narayan1992,popham1999}; thus sGRB emission is expected to accompany a kilonova and a gravitational-wave (GW) burst if the jet is pointing towards Earth \citep{2012ApJ...746...48M,tanvir_kilonova_2013,10.1093/mnras/stz2255,Lamb_2019,jin_kilonova_2020}. \par \begin{table*}[t] \footnotesize \centering \begin{tabular}{*{13}{c}} \toprule \multirow{1}*{\bf Telescope} &\multicolumn{6}{c}{\multirow{1}*{limiting magnitude \& sky brightness}} & {\bf diameter } & {\bf readnoise} & {\bf pixel scale } & {\bf image quality} & {\bf FoV}&{\bf Ref}\\ &\multicolumn{6}{c}{(mag \& mag/arcsec${^2}$)}& (m) & (e/pixel) & (arcsec) & (arcsec) & (deg$^2$)\\ \midrule \multirow{3}*{WFST} & u & g & r & i & z & w & \multirow{3}*{2.5}&\multirow{3}*{10}&\multirow{3}*{0.33}&\multirow{3}*{1.0}&\multirow{3}*{6.55}&\multirow{3}*{(1)}\\ &22.40&23.35&22.95&22.59&21.64&22.96\\ &22.29&22.12&21.58&21.29&20.29&...\\ \hline \multirow{3}*{LSST} & u & g & r & i & z & y &\multirow{3}*{8.4}&\multirow{3}*{18}&\multirow{3}*{0.2}&\multirow{3}*{0.80}&\multirow{3}*{9.6}&\multirow{3}*{(2)}\\ & 23.87&24.82&24.36&23.93&23.36&22.47\\ & 22.96&22.26&21.20&20.48&19.60&18.61\\ \hline \multirow{3}*{ZTF} & \multicolumn{2}{c}{g} & \multicolumn{2}{c}{r}&
\multicolumn{2}{c}{i}&\multirow{3}*{1.2}&\multirow{3}*{8}&\multirow{3}*{1.0}&\multirow{3}*{2.0}&\multirow{3}*{47}&\multirow{3}*{(3)} \\ &\multicolumn{2}{c}{21.1}&\multicolumn{2}{c}{20.9}&\multicolumn{2}{c}{20.2}\\ &\multicolumn{2}{c}{21.8}&\multicolumn{2}{c}{20.7}&\multicolumn{2}{c}{19.9}\\ \bottomrule \end{tabular} \caption{The parameters of the telescopes used in the simulation: WFST, ZTF and LSST. For each telescope, the upper and bottom rows give the 5-$\sigma$ limiting magnitude for a 30-second exposure and the dark night-sky brightness, respectively. For the sky brightness of ZTF, we use the measurement of P200 to represent the sky background of Palomar Observatory. References: (1) \cite{lin_prospects_2022}; WFST Science Collaboration et al. (2023), in preparation; (2) \cite{ivezic_lsst_2019,LSST:2010:Misc} (3) \cite{bellm_zwicky_2019,P200:2007:Misc}.} \label{tab:teles} \end{table*} On 2017 August 17 at 12:41:04.47 UTC, the first BNS merger GW source, GW170817, was detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo with a false-alarm-rate estimate of less than one per $8.0 \times 10^4$ years \citep{abbott_gw170817_2017}. About $1.7$~s after the merger, the Fermi Gamma-ray Space Telescope detected the sGRB GRB170817A, which lasted about 2 s \citep{goldstein_ordinary_2017,savchenko_integral_2017}. Based on the sky map constrained by the GW and sGRB signals, telescopes all around the world started to search for the optical counterpart of GW170817. The Swope team was the first to observe the candidate, locating it in NGC4993; it was subsequently confirmed to be a kilonova by observations from other telescopes and named AT2017gfo \citep[e.g.,][]{coulter_swope_2017,pian_spectroscopic_2017,tanvir_emergence_2017,cowperthwaite_electromagnetic_2017,kasliwal_illuminating_2017,shappee_early_2017,evans_swift_2017,smartt_kilonova_2017}.
\par A telescope with a large field of view (FoV) is necessary for searching for the electromagnetic counterpart of a merger, due to the large localization areas of $10^1$--$10^3$ ${\deg}^2$ from GW detectors. The 2.5-meter Wide-Field Survey Telescope (WFST) will be installed at the summit of Saishiteng Mountain near Lenghu; construction is planned for the first half of 2023, with operations scheduled to commence subsequently. With a FoV of 6.55 ${\deg}^2$, WFST will scan the northern sky in six optical bands (u, g, r, i, z, w), reaching depths of 22.40, 23.35, 22.95, 22.59, 21.64 and 22.96 AB mag, respectively, in a nominal 30-second exposure, which is sufficient to meet the scientific requirements for kilonova detection. The median seeing of 0.75 arcsec at the Lenghu site \citep{deng_lenghu_2021} provides an ideal environment for WFST. Given these parameters and its geographical location, WFST can bridge the longitudinal gap between other wide-field instruments and has the potential to be one of the major GW follow-up instruments in the northern hemisphere for the upcoming fourth (O4) and fifth (O5) observing runs of ground-based GW detectors. \par In the third observing run (O3) of LIGO and Virgo, in order to search for new kilonovae, especially from NS-BH merger events, many instruments including the Zwicky Transient Facility \citep[ZTF;][]{bellm_zwicky_2019,graham_zwicky_2019} and the Dark Energy Camera \citep[DECam;][]{flaugher_dark_2015} made extensive follow-up observations. However, there was no conclusive kilonova detection by the end of the O3 run \citep{andreoni_growth_2019,Andreoni_2020,ackley_observational_2020,kasliwal_kilonova_2020,anand_optical_2021,tucker_soargoodman_2022}. To improve follow-up observations with WFST in the upcoming O4 and O5 runs, we need to handle the trade-off between exposure time and sky coverage, using previous observations for reference.
Recently, several studies on kilonova detectability have predicted the expected number of kilonovae for various wide-field instruments \citep{cowperthwaite_comprehensive_2015,cowperthwaite_lsst_2019,scolnic_how_2017,zhu_kilonova_2021,chase_kilonova_2022,wanghuiyu_2022}. Based on different observation cadences, \cite{cowperthwaite_lsst_2019} explored the efficiency of serendipitous kilonova searches with the Large Synoptic Survey Telescope \citep[LSST;][]{lsst_science_collaboration_science-driven_2017,ivezic_lsst_2019} at the Vera C. Rubin Observatory, concluding that it is more effective to search for kilonovae after a GW trigger than through serendipitous observations. By considering the afterglow of the sGRB and the viewing angle, \cite{zhu_kilonova_2021} studied the prospects of finding different combinations of AT2017gfo-like kilonovae and GRB afterglows. \cite{chase_kilonova_2022} calculated the detection depth of thirteen wide-field instruments based on a specific kilonova model and a grid of parameters. In these works, the influence of image subtraction and of the host galaxy on kilonova detection is ignored; the detection depth is therefore overestimated, and more kilonovae are predicted to be found. In practice, evaluating these effects is also needed to select an appropriate exposure time in a target-of-opportunity (ToO) observation. \par In this work, we assess the ability of WFST to detect AT2017gfo-like kilonovae with mock observations. We focus on five of WFST's optical filters, excluding the w-band, which is a broad bandpass that does not help to measure the color information of kilonovae (the transmission of the other bands is shown in Figure \ref{fig:transmission}). By considering galaxies in a synthetic galaxy catalog as hosts of kilonovae, we investigate and quantify the influence of image subtraction and the host galaxy on kilonova searches.
The methods of mock observation, together with the verification of their function and accuracy, are described in Section \ref{sec_1}. In Section \ref{sec_2}, the details of adding a host galaxy into an image are introduced, and we present and discuss the simulated kilonova detectability for WFST, LSST and ZTF. In Section \ref{sec_3}, we optimize the exposure time selection based on these results and explore the follow-up capacity of WFST by estimating the average total time spent in ToO observations. Finally, as a complement to the case of AT2017gfo-like kilonovae, we discuss the detection ability for other kilonovae and summarize our results in Section \ref{sec_4}. Throughout this study, we adopt a standard $\Lambda$CDM cosmology with parameters $H_0=69.3 \, \text{km s}^{-1}\,\text{Mpc}^{-1}$, $\Omega_M=0.287$ and $\Omega_\Lambda=0.713$ \citep{Hinshaw_2013}. \begin{figure}[h] \centering \includegraphics[scale=0.43]{transmission} \caption{The transmission of the telescope systems of WFST, ZTF and LSST.} \label{fig:transmission} \end{figure} \section{Simulation process} \label{sec_1} In this section, we introduce the method of simulated image synthesis and test its functionality by computing the kilonova detection depth of WFST with a flat background. The main steps of the synthesis process are as follows: \par \textbf{Kilonova template: } The luminosity evolution of AT2017gfo can be explained and fitted by many kilonova models \citep[e.g.,][]{kasen_origin_2017,Villar_2017,yu_long-lived_2018,bulla_possis_2019,hotokezaka_radioactive_2020,wollaeger_broad_2021}. The main differences between these models lie in the mechanism that powers the early emission, the component structure and morphology of the ejecta, and the treatment of radiative transfer. Given that there are few kilonova observations other than AT2017gfo, these models cannot be distinguished with the limited observational data.
The estimation of kilonova detection depends on the template, which can be chosen from data or generated by a theoretical model. Considering the weak constraints on kilonova models and the uncertainty of the model parameters, we choose AT2017gfo as the reference source in this study and obtain its lightcurve with \texttt{MOSFiT}\footnote{\href{https://github.com/guillochon/mosfit}{https://github.com/guillochon/mosfit}}. Using AT2017gfo as the kilonova template leaves out the potential kilonova diversity, which can affect the overall estimation of kilonova detection. Combined with a certain kilonova model, this influence is discussed specifically in Section \ref{other kilnovae}. \par \texttt{MOSFiT} is a python package that collects transient templates (e.g., kilonovae, tidal disruption events (TDEs) and supernovae (SNe)), which can be used to fit observations and generate theoretical lightcurves \citep{Guillochon_2018}. The kilonova template in \texttt{MOSFiT} originates from \cite{Villar_2017}, where the authors constructed a spherically symmetric model composed of two or three ejecta components. We choose the three-component model, in which the three components have different fixed opacities ($\kappa$) and are named the blue ($\kappa=0.5 \text{ cm}^2 \text{g}^{-1}$), purple ($\kappa=3 \text{ cm}^2 \text{g}^{-1}$) and red ($\kappa=10 \text{ cm}^2 \text{g}^{-1}$) components. Three additional free parameters describe each component: mass ($M_{\text{ej}}$), velocity ($v_{\text{ej}}$) and temperature floor ($T_c$). In addition, the authors introduced a variance parameter ($\sigma$) in the likelihood function, which encompasses additional uncertainty in the model and/or data.
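The role of such a variance parameter can be illustrated with the common form of Gaussian likelihood in which $\sigma$ is added in quadrature to the per-point photometric uncertainties; the sketch below shows this generic construction, not necessarily \texttt{MOSFiT}'s exact implementation:

```python
import math
import numpy as np

def log_likelihood(m_obs, m_model, m_err, sigma):
    """Gaussian log-likelihood with an extra white-noise term sigma added in
    quadrature to the per-point photometric errors, a common way to absorb
    model/data systematics (sketch only)."""
    var = m_err**2 + sigma**2
    resid = m_obs - m_model
    return float(-0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var)))

# Sanity check: with zero residuals, zero photometric errors and sigma = 1,
# the expression reduces to -(n/2) * log(2*pi).
logl = log_likelihood(np.zeros(3), np.zeros(3), np.zeros(3), 1.0)
```

A larger $\sigma$ broadens the effective error bars, down-weighting points that the parametric model cannot reproduce exactly.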
To generate the lightcurve, the best-fit parameters of \cite{Villar_2017} are adopted: $M_{\text{ej}}^{\text{blue}}=0.020 {\rm M}_{\odot}$, $v_{\text{ej}}^{\text{blue}}=0.266c$, $T^{\text{blue}}=674 \rm K$, $M_{\text{ej}}^{\text{purple}}=0.047 {\rm M}_{\odot}$, $v_{\text{ej}}^{\text{purple}}=0.152c$, $T^{\text{purple}}=1308 \rm K$, $M_{\text{ej}}^{\text{red}}=0.011 {\rm M}_{\odot}$, $v_{\text{ej}}^{\text{red}}=0.137c$, $T^{\text{red}}=3745 \rm K$, and $\sigma=0.242$. It is worth noting that the lightcurve fits obtained with different kilonova models can differ somewhat \citep{arcavi_first_2018}. Especially at early times ($\delta t < 10~\text{hr}$) of the kilonova emission, the lightcurve mainly depends on the model used, due to the lack of observational data \citep{arcavi_optical_2017}. \par \textbf{Image generation: } We generate the simulated images with \texttt{GalSim}\footnote{\href{https://github.com/GalSim-developers/GalSim}{https://github.com/GalSim-developers/GalSim}}, an open-source project providing a software library for simulating images of astronomical objects such as stars and galaxies in a variety of ways \citep{ROWE2015121}. Taking into account the generation speed and computing resources, we produce simulated images of at least 200$\times$200 pixels and place the target in the center. When the kilonova and its background are added to the images, the broadening of point sources caused by the optical system and atmospheric seeing is modeled with the point-spread functions (PSFs) integrated into \texttt{GalSim}. In addition, Poisson noise and readout noise are added to the images by the built-in noise generators. The image quality and readout noise used in the simulation are listed in Table \ref{tab:teles}.
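As a simplified pure-NumPy stand-in for this image-generation step (not \texttt{GalSim}'s actual API), the following sketch builds a Gaussian-PSF point source on a flat sky background and adds Poisson and readout noise; all numerical values are illustrative:

```python
import numpy as np

def simulate_image(flux, fwhm, sky, read_noise, npix=200, scale=0.33, seed=1):
    """Toy stand-in for the GalSim step: a PSF-broadened point source at the
    image centre on a flat sky, with Poisson and readout noise.
    flux and sky are in counts (sky per pixel); fwhm in arcsec;
    scale in arcsec/pixel (0.33 matches the WFST pixel scale in the table)."""
    sigma_pix = fwhm / 2.355 / scale               # Gaussian sigma in pixels
    y, x = np.mgrid[:npix, :npix]
    r2 = (x - npix / 2) ** 2 + (y - npix / 2) ** 2
    psf = np.exp(-0.5 * r2 / sigma_pix**2)
    psf /= psf.sum()                               # unit-flux PSF
    expected = flux * psf + sky                    # noiseless science image
    rng = np.random.default_rng(seed)
    image = rng.poisson(expected).astype(float)    # photon (Poisson) noise
    image += rng.normal(0.0, read_noise, image.shape)  # readout noise
    return image

sci = simulate_image(flux=5e4, fwhm=1.0, sky=200.0, read_noise=10.0)
```

A reference image is obtained the same way without the point source; subtracting the two yields the difference image used for photometry in the next step.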
\par \textbf{Photometry and judging criteria: } By subtracting the simulated reference images from the science images, we obtain difference images and perform photometry with \texttt{PythonPhot}\footnote{\href{https://github.com/djones1040/PythonPhot}{https://github.com/djones1040/PythonPhot}} to judge whether the kilonova can be detected. \texttt{PythonPhot} is a python package translated from the DAOPHOT-type photometry procedures of the IDL AstroLib photometry algorithms \citep{2015ascl.soft01010J,landsman1993idl}. When a kilonova is relatively faint compared to its background, the measurement error of the PSF-fitting photometry tool of \texttt{PythonPhot} may be overestimated, so we use aperture photometry to assist the PSF-fitting photometry in obtaining the SNR of the source. The diameter of the aperture is taken as 1.5 times the FWHM in each band, where the FWHM is computed from the image quality parameters listed in Table \ref{tab:teles}. For all the results presented in this article, we set the SNR threshold to 5 to judge whether a kilonova can be detected. \par \begin{figure}[htbp] \centering {\includegraphics[scale=0.45]{30s_result} } {\includegraphics[scale=0.45]{300s_result} } \caption{At different times in the evolution of an AT2017gfo-like kilonova, the relationship between luminosity distance and critical background brightness from image simulations with 30 and 300 s exposures.
The grey dashed and dot-dashed lines represent the BNS detection ranges of O4 and O5, respectively \citep{abbott_prospects_2020}.} \label{fig:background} \end{figure} \begin{table}[htbp] \centering \begin{tabular}{*{7}{c}} \toprule \multirow{2}*{\bf exposure (s)} & \multirow{2}*{\bf luminosity} & \multicolumn{5}{c}{\bf detection depth (Mpc)}\\ \cline{3-7} & &\bf u &\bf g &\bf r &\bf i &\bf z\\ \midrule \multirow{3}*{30}&peak&210&357&338&302&202\\ &peak+1&131&233&209&193&123\\ &peak+2&73&131&127&112&69\\ \hline \multirow{3}*{300}&peak&395&609&562&515&320\\ &peak+1&257&375&366&317&206\\ &peak+2&163&256&234&214&137\\ \bottomrule \end{tabular} \caption{The detection depths at different times in the evolution of an AT2017gfo-like kilonova for WFST.} \label{tab:depth} \end{table} To test and verify the simulation process, we consider a simple case in which we insert kilonovae at the image center and choose a uniform sky background, computing its influence on detection. For kilonovae at different distances, we scan a range of sky brightness values and obtain the critical value at which the telescope can still detect the kilonova. The relationships between luminosity distance and critical sky brightness in each bandpass for the 30 s and 300 s exposures are shown in Figure \ref{fig:background}. In this simulation, for each band we consider the peak luminosity of the kilonova as well as two other cases: one and two magnitudes fainter than the peak. The comparison of these results reflects the change in detection depth as the kilonova evolves. In the observations of AT2017gfo, the magnitude faded at $\sim 1$ mag per day in the g- and r-bands \citep{kasliwal_kilonova_2020}; thus the two fainter cases roughly correspond to the 2nd and 3rd days after a BNS merger.
We find that the g-, r- and i-bands are better choices for WFST to search for kilonovae because of their deeper detection depths at the same exposure time. \par According to the observing conditions of the Lenghu site reported in \cite{deng_lenghu_2021}, measured in the bandpass from 400 nm to 600 nm, the night-sky brightness reaches at most 22.3 mag arcsec$^{-2}$ in V-band during a fully clear new Moon phase, and the average night-sky brightness is around 22.0 mag arcsec$^{-2}$ when the Moon is below the horizon. Considering a background dominated by the night-sky brightness, which corresponds to the case in which the flux from a faint host galaxy can be ignored, we obtain the detection depth of WFST. The sky brightness used in each band is shown in Table \ref{tab:teles} and is set the same as in WFST Science Collaboration (2023). The detection depths for the different situations are shown in Table \ref{tab:depth}. Due to the low transmission shown in Figure \ref{fig:transmission} or the higher sky brightness, the detection ranges of the u- and z-bands are relatively shallow; thus they are not suitable for the early search of a kilonova. To verify the reliability of the result, we also calculate the detection depth by directly comparing the kilonova magnitude with the limiting magnitudes of WFST. Using the limiting magnitudes in Table \ref{tab:teles}, with 30 s exposures, the g- and i-bands have detection depths of 458 and 419 Mpc at the luminosity peak. Because of the noise introduced by image subtraction, the depths from the image simulation are understandably shallower than those computed from the limiting magnitude. Therefore, calculating the detection depth with only the limiting magnitude can overestimate the detection rate of kilonovae.
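This limiting-magnitude cross-check amounts to inverting the distance modulus; a minimal sketch follows, where the peak absolute magnitude of about $-14.95$ is simply the value implied by the quoted 458 Mpc g-band depth and the table's $m_{\rm lim}=23.35$, not an independently measured quantity:

```python
import math

def detection_depth_mpc(m_lim, M_peak):
    """Maximum luminosity distance (in Mpc) at which a source of absolute
    magnitude M_peak stays brighter than the limiting magnitude m_lim,
    neglecting K-correction and extinction (sketch only).
    Distance modulus: m - M = 5 log10(d / 10 pc)."""
    d_pc = 10.0 ** ((m_lim - M_peak + 5.0) / 5.0)
    return d_pc / 1.0e6

# Illustrative g-band numbers (30 s exposure, limiting magnitude from the
# telescope table): recovers a depth close to the quoted 458 Mpc.
depth = detection_depth_mpc(m_lim=23.35, M_peak=-14.95)
```

Because image subtraction adds noise that this formula ignores, the simulated depths in Table \ref{tab:depth} (357 Mpc in g at peak) fall short of this idealized value.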
\par \section{The influence of the host galaxy} \label{sec_2} Based on the image simulation process introduced in Section \ref{sec_1}, in this section we insert a host galaxy into the simulated image to consider its influence on the detection of the kilonova. \par \subsection{Galaxy catalog and building the sample} In order to take the host galaxy into account, a specific galaxy catalog needs to be fed into the image synthesis process. To optimize follow-up observations of GW alerts of compact binary mergers, several works have collected known catalogs and integrated them into new galaxy catalogs \citep{white_list_2011,dalya_glade_2018,dalya_glade_2021}. Using actual galaxy catalogs can be more instructive and meaningful when designing a follow-up strategy. However, actual galaxy catalogs are always incomplete at large distances, and they are not informative enough for inserting galaxies into simulated images because of the lack of morphological descriptions. We therefore choose the synthetic galaxy catalog cosmoDC2\footnote{\href{https://github.com/LSSTDESC/cosmodc2}{https://github.com/LSSTDESC/cosmodc2}} to consider the influence of the host galaxy. \par CosmoDC2 is a large synthetic galaxy catalog made for dark energy science with LSST, based on a trillion-particle cosmological \emph{N}-body simulation in a $(4.225~\text{Gpc})^3$ box \citep{korytov_cosmodc2_2019}. CosmoDC2 covers a 440 deg$^2$ sky area out to redshift $z=3$, and many parameters in the catalog describe and quantify the properties of each galaxy, such as stellar mass, morphology and spectral energy distribution. These parameters match well with the interfaces and functions of \texttt{GalSim}. Unlike with actual galaxy catalogs, we can easily add galaxies into the images using the size and morphology parameters. Nevertheless, biases with respect to the actual case may exist.
To check that the galaxies in the catalog are consistent with the real situation, comparisons of the redshift and color distributions have been performed in \cite{korytov_cosmodc2_2019}. Hence, galaxies drawn from cosmoDC2 can reasonably reflect the influence of host galaxies in the real universe. \par \begin{figure}[h] \centering \includegraphics[scale=0.53]{fraction} \caption{At each redshift, the number of galaxies before and after the filters, together with the fractions of the sample size and of the BNS merger rate retained after the filters.} \label{fig:fraction} \end{figure} \begin{table*}[t] \centering \footnotesize \begin{tabular}{llcc*{3}{r@{\extracolsep{0.2em}}c@{\extracolsep{0.2em}}l}c} \toprule \multicolumn{1}{c}{\multirow{2}*{\bf sGRB}} & \multicolumn{1}{c}{\multirow{2}*{\bf Redshift}} &\multirow{2}*{\bf Instrument}&\multirow{2}*{\bf Filter}& \multicolumn{3}{c}{\bf Offset} & \multicolumn{3}{c}{\bf Offset} & \multicolumn{3}{c}{\bf Offset} &\multirow{2}*{\bf Reference}\\ &&&&\multicolumn{3}{c}{$(\prime \prime)$} &\multicolumn{3}{c}{(kpc)}&\multicolumn{3}{c}{($r_e$)}&\\ \midrule \multirow{2}*{150101B}&\multirow{2}*{0.1343}&ACS&F606W&3.07&$\pm$&0.03&7.35&$\pm$&0.07&0.77&$\pm$&0.03&\multirow{2}*{\cite{fong_afterglow_2016}}\\ &&WFC3&F160W&3.07&$\pm$&0.03&7.35&$\pm$&0.07&1.01&$\pm$&0.03&\\ \specialrule{0em}{2pt}{2pt} 160821B&0.162&ACS&F606W&\multicolumn{3}{c}{5.7}&16.40&$\pm$&0.12&\multicolumn{3}{c}{...}&\cite{troja_afterglow_2019}\\ \specialrule{0em}{2pt}{2pt} \multirow{2}*{170817A}&\multirow{2}*{0.0973}&ACS&F475W, F625W, F775W&10.315&$\pm$&0.007&2.125&$\pm$&0.001&0.64&$\pm$&0.03$^a$&\multirow{2}*{\cite{blanchard_electromagnetic_2017}}\\ &&WFC3&F110W, F160W&10.317&$\pm$&0.005&2.125&$\pm$&0.001&0.57&$\pm$&0.05$^a$&\\ \specialrule{0em}{2pt}{2pt} 181123B&1.754&Gemini-N&$i$&0.59&$\pm$&0.16&5.08&$\pm$&1.38&\multicolumn{3}{c}{...}&\cite{paterson_discovery_2020}\\ \specialrule{0em}{2pt}{2pt} 
\multirow{2}*{200522A}&\multirow{2}*{0.5536}&WFC3&F125W&0.155&$\pm$&0.054&1.01&$\pm$&0.35&0.24&$\pm$&0.04&\multirow{2}*{\cite{fong_broadband_2021}}\\ &&WFC3&F160W&0.143&$\pm$&0.029&0.93&$\pm$&0.19&0.24&$\pm$&0.04&\\ \bottomrule \end{tabular} \begin{threeparttable} \begin{tablenotes} \item[a]{The half-light radius is obtained by averaging the values from the optical and NIR HST observations in several filters.} \end{tablenotes} \end{threeparttable} \caption{Additional projected-offset measurements for 5 sGRBs. The offsets are given in three forms: the angular distance in the image, the projected distance in kpc, and the distance normalized by the effective radius of the host galaxy.} \label{tab:offset} \end{table*} \begin{figure*}[t] \centering \includegraphics[scale=0.51]{pdf_offset} \includegraphics[scale=0.51]{cdf_offset} \caption{The probability density and cumulative distribution of the projected offset between the location of the sGRB and the center of the host galaxy for the different samples, together with the fitting result.} \label{fig:offset} \end{figure*} \begin{figure*} \centering \subfigure[]{ \label{subfig:example1} \includegraphics[scale=0.3]{far_away}} \subfigure[]{ \label{subfig:example2} \includegraphics[scale=0.3]{near}} \caption{An example of reference, science and difference images with host galaxies in the simulation at redshift $\sim0.056$; \subref{subfig:example1} and \subref{subfig:example2} correspond to cases far from (physical offset $r=1.763$ kpc) and close to ($r=0.414$ kpc) the host galaxy's nucleus, respectively. The red arrows indicate the location of the kilonovae.} \label{fig:example} \end{figure*} To build the galaxy sample of the simulation, some selections and filters on the prospective kilonova hosts are needed.
According to the sensitivity of the GW detectors to a $1.4\,{\rm M}_{\odot}+1.4\,{\rm M}_{\odot}$ BNS system in O4 and O5 \citep{abbott_prospects_2020}, the typical ranges of detected BNS mergers reach 190 Mpc and 330 Mpc, corresponding to redshifts $z = 0.043$ and $0.072$, respectively. Given the luminosity of kilonovae, we select galaxies with $z<0.2$ in cosmoDC2 for the simulation. Combined with mechanisms of compact binary formation and the method of population synthesis, the properties of the host galaxies of merging compact objects can be explored. Several works have studied these properties with this approach \citep{oshaughnessy_effects_2017,cao_host_2018,lamberts_predicting_2018,giacobbo_progenitors_2018,mapelli_host_2018,toffano_host_2019,artale_host_2019,artale_mass_2020}. These works found that the merger rate of compact binaries depends on the stellar mass, metallicity and type of the host galaxy. Since the correlation of the merger rate with the stellar mass of the host galaxy is stronger than with the other parameters \citep{artale_host_2019,artale_mass_2020}, to obtain the weight of each host galaxy we adopt the 1D fit at $z = 0.1$ between stellar mass $({\rm M}_{\odot})$ and BNS merger rate $(n_\text{BNS})$ from \cite{artale_mass_2020}: \begin{equation} \label{equation:BNS_rate} \begin{aligned} \log_{10}(n_\text{BNS}/\text{Gyr})=&(1.038 \pm 0.001)\log_{10}(M_{*}[{\rm M}_{\odot}])\\ &-(6.090 \pm 0.010). \end{aligned} \end{equation} Comparing the fit at $z = 0.1$ with that at $z = 1$ in \cite{artale_mass_2020}: \begin{equation} \begin{aligned} \log_{10}(n_\text{BNS}/\text{Gyr})=&(1.109 \pm 0.001)\log_{10}(M_{*}[{\rm M}_{\odot}])\\ &-(6.214 \pm 0.006), \end{aligned} \end{equation} the fitting factors in the two cases differ little, so Eq (\ref{equation:BNS_rate}) can be used across the redshift range $z \leq 0.2$ of our simulation.
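The merger-rate weight in Eq (\ref{equation:BNS_rate}) is a one-line power law in stellar mass; a minimal sketch using only the central values of the fit:

```python
def log10_n_bns_per_gyr(log10_mstar):
    """BNS merger rate per galaxy (log10, in Gyr^-1) from the z = 0.1
    stellar-mass fit of Artale et al., central values only:
    log10(n_BNS/Gyr) = 1.038 * log10(M*/Msun) - 6.090."""
    return 1.038 * log10_mstar - 6.090

# A galaxy of 10^10 Msun:
w = log10_n_bns_per_gyr(10.0)
print(round(w, 3))  # 1.038 * 10 - 6.090 = 4.29
```

This also makes the sample cut below concrete: requiring $\log_{10}(n_\text{BNS}/\text{Gyr})>4$ corresponds to $\log_{10}(M_*/{\rm M}_\odot) \gtrsim (4+6.090)/1.038 \approx 9.72$, i.e. roughly $M_*>{10}^{10}\,{\rm M}_\odot$.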
We calculate the merger rates of galaxies satisfying $M_*>{10}^7 {\rm M}_\odot$. Since the galaxy sample with $z<0.2$ and $M_*>{10}^7 {\rm M}_\odot$ is still too large, we further narrow it down with $\log_{10}(n_\text{BNS}/\text{Gyr})>4$, which roughly corresponds to $M_*>{10}^{10} {\rm M}_\odot$. After filtering and dividing the galaxies into a series of subsamples at different redshifts, the number of galaxies and the BNS merger rate are shown in Figure \ref{fig:fraction}. With the filter of $\log_{10}(n_\text{BNS}/\text{Gyr})>4$, the size of each galaxy subsample is greatly reduced while the total BNS merger rate changes only slightly, which means that most of the galaxies that are ruled out contribute little to the merger rate. \par \subsection{Kilonova offset to galaxy center} Adding a host galaxy to an image amounts to considering an uneven background with a specific profile, so the location of the merger relative to its host is essential for investigating the influence of the host galaxy. There are two ways to explore the offset between the compact binary merger and the galaxy's center: the first is the population synthesis method, and the other is tracing BNS mergers by sGRBs.
\par \begin{table*}[ht] \centering \footnotesize \begin{tabular}{c@{\extracolsep{0.4em}}c*{5}c} \toprule \multirow{2}{*}{\bf exposure time (s)} & \multirow{2}{*}{\bf Efficiency} & \multicolumn{5}{c}{$\bm{z_{\rm max}\,(D_{\rm L,max}/{\rm Mpc})}$}\\ \cline{3-7}&&\bf u&\bf g&\bf r&\bf i&\bf z\\ \midrule \multirow{2}*{30}&90\%&0.039(175.3)&0.055(246.7)&0.046(205.8)&0.038(168.4)&0.036(158.5)\\ &50\%&0.051(230.4)&0.084(386.7)&0.078(357.3)&0.069(316.0)&0.045(202.0)\\ \hline \multirow{2}*{90}&90\%&0.06(270.2)&0.074(336.8)&0.06(271.9)&0.052(235.5)&0.039(172.0)\\ &50\%&0.074(340.0)&0.112(525.3)&0.103(479.1)&0.091(422.7)&0.063(286.8)\\ \hline \multirow{2}*{300}&90\%&0.084(386.1)&0.104(484.8)&0.086(394.4)&0.072(327.6)&0.05(224.6)\\ &50\%&0.102(473.0)&0.145(693.2)&0.133(632.9)&0.12(565.0)&0.082(377.6)\\ \bottomrule \end{tabular} \caption{The detection depths at 90\% and 50\% efficiency calculated by image simulation for WFST with 30, 90 and 300 s exposures.} \label{tab:wfst_depth} \end{table*} By population synthesis, it is found that most BNS mergers are far from the galaxy center, and the offset distribution depends on the gravitational potential, the metallicity of different galaxy types and several parameters of the compact binary, including the formation channel, initial location, kick velocity and delay time \citep{bloom_spatial_1999,belczynski_merger_2002,voss_galactic_2003,belczynski_study_2006,mapelli_cosmic_2018,wang_fast_2020}. Through observations and systematic analyses of host galaxies, \cite{fong_hubble_2010,fong_decade_2015} and \cite{fong_locations_2013} collected the host-galaxy properties of 22 sGRBs and compared their projected offsets (both physical and normalized) with those of long GRBs, core-collapse SNe, and even Type Ia SNe. They found that most sGRBs are far from the galaxy center, consistent with the model in which sGRBs originate from compact binary mergers.
In addition, the multi-messenger observations of GW170817 further confirmed the relationship between sGRBs and BNS mergers \citep{goldstein_ordinary_2017}. \par We choose to use sGRBs to trace the location of the BNS merger and place the kilonova in the image according to the projected offset distribution of sGRBs, which means that we assume sGRBs mostly originate from BNS mergers. Besides the 22 sGRBs reported in \cite{fong_hubble_2010,fong_locations_2013}, we collect 5 other sGRBs with reliable offset measurements from 2013 to 2020 and add them to the sGRB sample, as listed in Table \ref{tab:offset}. The difference between the two samples is shown in Figure \ref{fig:offset}: the projected physical offsets of the additional five sGRBs are consistent with the distribution of the previous works, and the offset median is 5.1 kpc, compared to 4.5 kpc in \cite{fong_locations_2013}. To sample the projected offset between the kilonova and the galaxy center, we fit this sample of 27 sGRBs with a shifted log-normal distribution using \texttt{statsmodels} \citep{seabold2010statsmodels} based on the maximum likelihood method; the fitting result is: \begin{equation} \label{eq:offset} \text{PDF}_\text{fit}(r)=\frac{1}{(r-r_0)\sigma\sqrt{2\pi}}\exp \left(-\frac{(\ln(r-r_0)-\mu)^2}{2\sigma^2} \right), \end{equation} where ${\rm PDF}_{\rm fit}$ is the probability density function, $r$ is in kpc, and $r_0=0.24\pm0.23$, $\sigma=1.26\pm0.23$ and $\mu=1.44\pm0.27$. From PDF (\ref{eq:offset}), the projected offset between the BNS merger's location and the galaxy center can be sampled. For the angular position of the merger, we assume a uniform distribution of the angle relative to the major axis of the galaxy in our simulation.
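Sampling from the shifted log-normal fit of Eq (\ref{eq:offset}) is straightforward: draw $\ln(r-r_0)$ from a normal distribution and shift by $r_0$. A minimal sketch using the central values of the fitted parameters:

```python
import math
import random

def sample_offset_kpc(rng, r0=0.24, mu=1.44, sigma=1.26):
    """Draw one projected offset r (kpc) from the shifted log-normal
    fit: ln(r - r0) ~ N(mu, sigma^2), using central fit values."""
    return r0 + rng.lognormvariate(mu, sigma)

rng = random.Random(42)
offsets = sorted(sample_offset_kpc(rng) for _ in range(200_000))
median = offsets[len(offsets) // 2]
# The analytic median is r0 + exp(mu) ~ 4.46 kpc, comparable to the
# ~5 kpc sample median quoted for the 27 observed sGRBs.
print(round(median, 2))
```

The heavy log-normal tail reproduces the observed behavior that a minority of sGRBs lie tens of kpc from their hosts while the bulk sit within a few effective radii.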
\par \begin{figure}[h] \centering {\includegraphics[scale=0.36]{30s_wfst} } {\includegraphics[scale=0.36]{300s_wfst} } \caption{Detection efficiency as a function of redshift at the peak of an AT2017gfo-like kilonova for WFST with 30 s (upper) and 300 s (lower) exposures. The grey dashed and dot-dashed lines represent the detection ranges for BNS mergers in O4 and O5 \citep{abbott_prospects_2020}.} \label{fig:wfst_result} \end{figure} \begin{figure*}[tp] \centering \includegraphics[scale=0.45]{flux_distribution} \caption{The flux distribution of the galaxy sample in cosmoDC2 for each band of WFST at a redshift of $z=0.114$.} \label{fig:distribution} \end{figure*} \subsection{Results} \label{result} We divide the redshift range $0\sim0.2$ into a grid of 75 bins and generate simulated images for the galaxies in each bin. If a bin contains more than 1000 galaxies, we randomly choose 1000 of them as the simulation sample to reduce the computing time. An example of simulated images with the host galaxy is presented in Figure \ref{fig:example}. When a BNS merger is close to the galaxy's center, as in Panel \ref{subfig:example2}, the background is dominated by the host luminosity. Otherwise, as in Panel \ref{subfig:example1}, the sky brightness is vital for detection; the sky brightness of each band in the simulation is listed in Table \ref{tab:teles}. Following the simulation process described in Sections \ref{sec_1} and \ref{sec_2}, we generate simulated images for each galaxy in the different bands and judge whether the kilonova can be detected. By averaging these detection results, weighted by the BNS merger rate derived from Eq (\ref{equation:BNS_rate}), the fraction of detectable kilonovae can be calculated. This weighted fraction as a function of redshift is called the detection efficiency in the following results.
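The averaging step above is a merger-rate-weighted mean of binary detect/miss outcomes; a minimal sketch with toy numbers (the weights in the actual pipeline come from Eq (\ref{equation:BNS_rate})):

```python
def detection_efficiency(detected, rates):
    """Merger-rate-weighted fraction of detectable kilonovae in one
    redshift bin: each simulated galaxy contributes its BNS merger
    rate as the weight of its detect/miss outcome."""
    total = sum(rates)
    hit = sum(r for d, r in zip(detected, rates) if d)
    return hit / total

# Toy subsample of three galaxies; the massive third one dominates
# the merger rate but its kilonova is missed (bright host background).
eff = detection_efficiency([True, True, False], [1.0, 2.0, 7.0])
print(eff)  # (1 + 2) / 10 = 0.3
```

This weighting is why removing low-mass galaxies from the sample barely changes the result: their outcomes carry little of the total rate.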
When an AT2017gfo-like kilonova is at its peak luminosity, the detection efficiencies for WFST with 30 s and 300 s exposures are shown in Figure \ref{fig:wfst_result}. When a BNS merger is at a small luminosity distance, WFST can easily detect its kilonova emission, as in the case of GW170817. As the luminosity distance increases, the detection efficiency decreases gradually. The speed of this decrease differs from band to band, which, combined with Figure \ref{fig:distribution}, reflects the influence of the host galaxy on detection. Taking the subsample at redshift $z=0.114$ as an example, Figure \ref{fig:distribution} displays the flux distribution of the cosmoDC2 galaxies in the different WFST bands. According to Figure \ref{fig:distribution}, the longer the wavelength of the band, the greater the mean and standard deviation of the flux distribution. In the u-band, the galaxy fluxes are more concentrated, so there is little difference between galaxies in their effect on kilonova detection. As a result, the detection efficiency in the u-band falls faster than in the other bands. In contrast, in a longer-wavelength band the efficiency begins to decline earlier but declines more slowly. The detection depths at 90\% and 50\% efficiency are shown in Table \ref{tab:wfst_depth}. Compared with the case of flat sky brightness in Table \ref{tab:depth}, the detection depth at 90\% efficiency is further reduced by $50$ Mpc to $200$ Mpc. \par \begin{figure}[!h] \centering {\includegraphics[scale=0.36]{diff_teles} } \caption{Detection efficiency as a function of redshift for different telescopes at the peak of an AT2017gfo-like kilonova. To compare the kilonova search abilities fairly, the exposure times are set to 30, 40 and 200 s for WFST, LSST and ZTF, respectively. 
The black dashed and dot-dashed lines represent the detection ranges for BNS mergers in O4 and O5 \citep{abbott_prospects_2020}.} \label{fig:diff_teles} \end{figure} \cite{cowperthwaite_lsst_2019} found that searching for kilonovae serendipitously with a survey is inefficient if the sensitivity and FoV of the telescope are insufficient. The host galaxy can further hamper the detection and measurement of kilonovae and cut down the number of predicted detections. Even under ToO observation, it is necessary to choose the exposure time wisely to balance the coverage of sky area against the detection depth, given the rapid evolution of kilonovae. \par \begin{table*}[htp] \footnotesize \begin{tabular}{cccccccc} \toprule \multirow{2}*{\bf Telescope}& \multirow{2}*{\bf Efficiency }& \multicolumn{6}{c}{$\bm{z_{\rm max}\,(D_{\rm L,max}/{\rm Mpc})}$}\\ \cline{3-8}&&\bf u&\bf g&\bf r&\bf i&\bf z&\bf y\\ \midrule \multirow{2}*{WFST}&90\%&0.039(175.3)&0.055(246.7)&0.046(205.8)&0.038(168.4)&0.036(158.5)&\multirow{2}*{...}\\ &50\%&0.051(230.4)&0.084(386.7)&0.078(357.3)&0.069(316.0)&0.045(202.0)\\ \hline \multirow{2}*{LSST}&90\%&0.067(304.6)&0.131(623.7)&0.113(532.2)&0.098(454.3)&0.084(387.7)&0.054(242.1)\\ &50\%&0.082(375.1)&0.165(799.3)&0.156(751.5)&0.137(653.7)&0.11(517.1)&0.074(340.4)\\ \hline \multirow{2}*{ZTF}&90\%&\multirow{2}*{...}&0.018(78.7)&0.019(84.5)&0.016(71.2)&\multirow{2}*{...}&\multirow{2}*{...}\\ &50\%&&0.037(165.3)&0.04(180.5)&0.036(159.0)&&\\ \bottomrule \end{tabular} \caption{The detection depths at 90\% and 50\% efficiency calculated by image simulation for WFST, LSST and ZTF. The exposure times are set to 30, 40 and 200 s for WFST, LSST and ZTF, respectively.} \label{tab:depth_diff_tele} \end{table*} \begin{figure*}[ht] \centering {\includegraphics[scale=0.4]{diff_mag} } \caption{Detection efficiency for WFST with 300 s exposures at different times in the evolution of an AT2017gfo-like kilonova. 
The grey dashed and dot-dashed lines represent the detection ranges for BNS mergers in O4 and O5 \citep{abbott_prospects_2020}.} \label{fig:diff_mag} \end{figure*} We also compute and simulate the situations of other wide-field surveys dedicated to fast-transient discovery: LSST \citep{lsst_science_collaboration_science-driven_2017,ivezic_lsst_2019} and ZTF \citep{bellm_zwicky_2019,graham_zwicky_2019}. The sky brightness of the sites and the other telescope parameters used in the simulation are shown in Table \ref{tab:teles}. The comparison of the detection efficiencies of these projects with WFST is presented in Figure \ref{fig:diff_teles}, and their detection depths at 90\% and 50\% efficiency are also shown in Table \ref{tab:depth_diff_tele}. Because these telescopes have different FoVs, to reflect their detection abilities for AT2017gfo-like kilonovae fairly, their exposure times need to differ according to their FoV. Assuming the same total coverage of sky area and the same total searching time, the number of pointings scales inversely with the FoV, so the exposure time per pointing under full coverage is proportional to the FoV. Hence, we set the exposure times to 30, 40 and 200 s for WFST, LSST and ZTF, respectively. From Table \ref{tab:teles} and Table \ref{tab:depth_diff_tele}, although the FoV of ZTF is much larger than that of WFST, for distant BNS mergers WFST performs better than ZTF and extends the search depth to $200\sim300$ Mpc at 90\% detection efficiency. Simulated for the Southern Hemisphere, LSST is expected to be more powerful and can reach $\sim600$ Mpc $(z=0.13)$ for kilonova searches. Our result is consistent with the detectability study of \cite{scolnic_how_2017}, which predicted a detectable redshift range of $z = 0.02\sim0.25$ for LSST based on an AT2017gfo-like model. Given the geographical locations of WFST and LSST, WFST can complement LSST well for optical follow-up in the Northern Hemisphere to cover GW events over the whole sky.
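The FoV-matched exposure-time argument can be sketched in a few lines; the FoV values below (WFST $\sim6.5$, LSST $\sim9.6$, ZTF $\sim47$ deg$^2$) are approximate figures assumed here for illustration, not taken from the paper's tables:

```python
def matched_exposure(t_ref, fov_ref, fov):
    """Exposure time giving the same total (area x time) survey
    budget: the pointing count scales as 1/FoV, so the per-pointing
    exposure scales proportionally with FoV."""
    return t_ref * fov / fov_ref

# Reference: WFST with 30 s exposures and ~6.5 deg^2 FoV (assumed).
t_wfst = 30.0
print(round(matched_exposure(t_wfst, 6.5, 9.6)))   # ~44, close to the 40 s adopted for LSST
print(round(matched_exposure(t_wfst, 6.5, 47.0)))  # ~217, close to the 200 s adopted for ZTF
```

With these assumed FoVs the scaling lands near the 40 s and 200 s values adopted in the comparison, which is the intended fairness criterion: equal total time to tile the same sky area.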
\par To guide the selection of the exposure time in kilonova searches, we also consider the rapid decay of the kilonova luminosity. For the cases of one or two magnitudes fainter than the peak luminosity, which roughly correspond to the $\sim$2nd and $\sim$3rd day after the merger \citep{kasliwal_kilonova_2020}, the results are shown in Figure \ref{fig:diff_mag}. When the luminosity decays by one magnitude from the peak, the detection depth at the same efficiency decreases substantially, which indicates that it is essential to make the best of the hours around the peak and to increase the exposure time on the following nights. \section{Target of Opportunity optimization with WFST} \label{sec_3} In this section, based on the simulations, we investigate the optimization of ToO observations for WFST and discuss the follow-up ability and detection prospects of WFST for the upcoming O4. \par \subsection{Optimizing the exposure time} After a GW alert triggers, brief information about the GW event is expected to be released through the General Coordinates Network platform. The follow-up observation then begins, based on the BAYESTAR sky map or the LALInference sky map, the latter of which includes more detailed information about the location and distance. Because of the different methods used, these sky maps differ in computing time and estimation accuracy: the BAYESTAR sky maps are always released earlier but with worse estimates. By analyzing the \texttt{fits} file of the sky map, the probability distributions of location and luminosity distance can be obtained and used for the ToO observation design. Combining the luminosity distance from the GW alert with the ability of the telescope, we can choose an appropriate exposure time so that WFST strikes a balance between depth and coverage. \par The luminosity distance information in the \texttt{fits} file of the sky map is introduced in detail in \cite{singer_going_2016,singer_supplement_2016}.
The luminosity distance along each line of sight follows the distribution: \begin{equation} \begin{aligned} p(r|\boldsymbol{n}) & =\frac{N(\boldsymbol{n})}{\sqrt{2 \pi}\sigma ( \boldsymbol{n})}\exp{\bigg[-\frac{(r-\mu(\boldsymbol{n}))^2}{2\sigma (\boldsymbol{n})^2}\bigg]}r^2 \\ & \text{for } r \ge 0, \end{aligned} \end{equation} where $\boldsymbol{n}$ is a direction vector pointing to a certain pixel of the sky based on the HEALPix (Hierarchical Equal Area isoLatitude Pixelisation) scheme, and the parameters $N$, $\mu$ and $\sigma$ are recorded in the \texttt{fits} file. We choose the distance $r_{\text{thred}}$ at the 90\% quantile of the distribution as the merger's location. Taking into account the detection efficiency results of Section \ref{sec_2}, the exposure time is selected from an optional list of [30, 60, 90, 120, 180, 240, 300 s], and we require that the efficiency be no less than 90\% at $r_{\rm thred}$. If the maximum exposure in the list is insufficient to meet the threshold efficiency, the exposure time is set to 300 s. The detection efficiency involves three related parameters: exposure time, kilonova brightness and bandpass. Given the occurrence time of the merger, we use interpolation to generate the detection-efficiency curves for other exposure times and the average magnitude at each observable time, which can be calculated using \texttt{Astropy}. Thus, we can reasonably set different exposure times for the telescope according to the event and the observation night. \par \subsection{Follow-up ability and prospects} Determining how much time is needed to cover the probable sky region in a follow-up observation reflects the feasibility of searching for kilonovae and guides the allocation of observation time for various GW events. However, in practice the total time of a follow-up observation involves many factors, e.g., occurrence time, localization accuracy, weather and Moon phase.
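The $r_{\rm thred}$ selection above is a quantile of the per-pixel distance ansatz. A minimal stdlib-only sketch that inverts the CDF numerically; the pixel parameters $\mu=200$ Mpc and $\sigma=40$ Mpc are hypothetical stand-ins for the values stored in the \texttt{fits} file:

```python
import math

def distance_quantile(mu, sigma, q=0.9, r_max_factor=6.0, n=20000):
    """Numerically invert the CDF of the conditional distance ansatz
    p(r) proportional to r^2 exp(-(r - mu)^2 / (2 sigma^2)), r >= 0.
    The normalization N(n) cancels out of the quantile."""
    r_max = mu + r_max_factor * sigma
    dr = r_max / n
    rs = [(i + 0.5) * dr for i in range(n)]
    w = [r * r * math.exp(-((r - mu) ** 2) / (2.0 * sigma ** 2)) for r in rs]
    total = sum(w)
    acc = 0.0
    for r, wi in zip(rs, w):
        acc += wi
        if acc >= q * total:
            return r
    return r_max

# Hypothetical pixel parameters: mu = 200 Mpc, sigma = 40 Mpc.
r_thred = distance_quantile(200.0, 40.0)
print(round(r_thred))
```

Note that the $r^2$ volume factor pushes the quantile above the plain Gaussian value $\mu+1.28\sigma$, which is why the quantile must be computed from the full ansatz rather than from $\mu$ and $\sigma$ alone.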
To reflect the ability of WFST to perform follow-up searches in O4, we investigate the average total time of a ToO observation under various localization areas and distances. \par The localization is assumed to be ideal, meaning the observation is not much influenced by the Moon, the foreground of the Milky Way, or the weather. Combining the localization area of the event with the telescope's FoV, the number of pointings required can be roughly estimated. By randomly sampling the trigger time of GW events within a day and applying the selection rule above, the exposure time for each observable time can be determined. We then match the exposure time to each pointing of the telescope and calculate the total time needed to cover the target sky. According to the localization prospects for GW events in \cite{abbott_prospects_2020}, the median localization area of BNS mergers from LHVK measurements in O4 is $33^{+5}_{-5} \, \text{deg}^2$. Since the localization accuracy is sensitive to the SNR of the signal and the number of detectors, the 90\% confidence localization area of most events might be hundreds or even thousands of square degrees in O4 \citep{petrov_data-driven_2022}. Consequently, we set the calculation ranges of luminosity distance and localization area to $0 \sim 500 \, \text{Mpc}$ and $0 \sim 1250 \, \text{deg}^2$, respectively. \par The results for the total time with one band and with two bands are shown in Figure \ref{fig:ToO}. To investigate the influence of the observable time per night in different seasons, we set the calculation date to 2023.06.21 or 2023.12.23 to represent summer or winter. Comparing Panels \ref{subfig:1} and \ref{subfig:2}, there is not much difference in the total time between summer and winter at the Lenghu site, but the target sky can be covered earlier in winter owing to the longer nights.
To further filter and confirm kilonova candidates, at least two bands are necessary to obtain the color evolution in actual observations. Therefore, the results of two-band coverage in summer are also calculated and shown in Panels \ref{subfig:3} and \ref{subfig:4}. GW170817 and GW190425, another possible BNS event reported during O3, are also drawn in each panel; GW190425 is drawn on the axes because of its poor localization. For the band choice, combinations of the g-, r- and i-bands are preferred for their relatively high efficiency in searching for kilonovae. In addition, a bright Moon phase strongly affects the shorter optical wavelengths, especially the u- and g-bands of WFST. Referring to the GW ToO strategy of LSST \citep{andreoni_target_2021}, we choose the g- and r-bands in dark times of the Moon phase and the longer-wavelength combination of the r- and i-bands in bright times, corresponding to Panels \ref{subfig:3} and \ref{subfig:4}, respectively. The total time spent using the r- and i-bands is slightly longer than with the g- and r-bands, which is expected from our exposure-time optimization method. Additionally, the limiting magnitude can be reduced depending on the Moon phase as well as the angular distance between the pointing of the telescope and the Moon; therefore, in practice, appropriately avoiding the sky close to the Moon is necessary to ensure overall efficiency. Assuming that the observation time per night is $\sim4 \, \text{hr}$ and that it is most promising to find the kilonova in the first two nights, we find that WFST can search for kilonovae well if the localization area is no more than ${10}^3 \, {\deg}^2$, given the detection limits of the GW detectors in O4. If the localization is poor $(>{10}^3 \, {\deg}^2)$, as reported for some events in O3, it is necessary to give up part of the area and focus on regions with relatively high probability.
Based on this result, if the localization areas of most BNS events detected in O4 are hundreds of square degrees, WFST can detect most kilonova counterparts of GW events. Assuming WFST can observe the sky at $\text{decl.}>-30^{\circ}$, among kilonovae that are more than 15 degrees from the disk of the Milky Way, only 57\% of events can be searched by WFST. Combined with the expected $10^{+53}_{-10}$ BNS detections during O4 in \cite{abbott_prospects_2020} and the weather efficiency of 76\% at the site \citep{deng_lenghu_2021}, and roughly considering the influence of the Moon and the Sun, WFST is expected to find $\sim 30\%$ ($3^{+16}_{-3}$) of the kilonovae from BNS mergers during O4. \par \begin{figure*}[t] \centering \subfigure[]{ \label{subfig:1} \includegraphics[scale=0.37]{ToO_gsummer} } \subfigure[]{ \label{subfig:2} \includegraphics[scale=0.37]{ToO_gwinter} }\\ \subfigure[]{ \label{subfig:3} \includegraphics[scale=0.37]{ToO_gr} } \subfigure[]{ \label{subfig:4} \includegraphics[scale=0.37]{ToO_ri} } \caption{The results for the total time of ToO observations with different localizations and luminosity distances. The dates for \subref{subfig:1}, \subref{subfig:3} and \subref{subfig:4} are set to 2023.06.21, and that for \subref{subfig:2} is 2023.12.23. The dot-dashed line annotates the limiting distance for detecting a typical BNS merger in O4 \citep{abbott_prospects_2020}. White dashed lines represent contours of constant total time. GW170817 and GW190425, another possible BNS event detected in O3, are also drawn in each panel, represented by star markers. Their distances are taken as $\mu+\sigma$, where $\mu$ and $\sigma$ are the median and standard deviation, and the localization area is the part with 90\% confidence where ${\rm decl.}>-30^{\circ}$. 
The event that is out of the calculation range is drawn on the axes.} \label{fig:ToO} \end{figure*} \section{Discussions and conclusions} \label{sec_4} \subsection{The influence of different kilonova models} \label{other kilnovae} The detection efficiency of the telescope depends on the kilonova model. However, the physics of kilonovae is still unclear, even for given BNS parameters, viewing angle and neutron star EoS. Since only one event, GW170817, has been confirmed so far, various models have been proposed in the literature, and most of them can explain the data fairly well (see for instance \cite{review3, review1,review2}). Observationally, six sGRBs with potential kilonova emission have been detected, with different levels of observational evidence. These sGRB kilonovae support the existence of a broad range of kilonova luminosities ($\sim 0.3-10$ times the luminosity of AT2017gfo, depending on the epoch and frequency of the observations) \citep{ascenzi_luminosity_2019}, which might hint at a diversity of kilonova models \citep{review2}. In the previous discussion, we ignored these model uncertainties and simply assumed that all events are AT2017gfo-like. To investigate the effects of different kilonova models on the detection efficiency, we simulate a sample of BNS systems and consider an alternative kilonova model to calculate the luminosity of the events. In fact, the luminosity of a kilonova depends on the properties of the BNS system. Based on the results for different kilonova luminosities in Figure \ref{fig:diff_mag}, combined with a kilonova sample generated from various BNS systems, the detection efficiency can be estimated simply for kilonovae deviating from AT2017gfo. To construct the kilonova sample, we follow the method of \cite{wanghuiyu_2022}, introduced as follows.
\par To obtain the corresponding kilonova for each BNS system in the sample, we use the kilonova model (the \texttt{bns} model) of \cite{nicholl_tight_2021}. Similar to the previous model in this paper, the \texttt{bns} model is also composed of three ejecta components: blue, purple and red. These components correspond to different generation mechanisms, so the opacity and atomic mass number of the components differ. In contrast to the previous model, the geometry of the ejecta is axisymmetric in the \texttt{bns} model: the blue and purple components are mainly distributed around the polar direction, while the red component concentrates around the equatorial plane, so the influence of the observer viewing angle on the light curves can be considered. Using the fitting results of relativistic numerical calculations in \cite{dietrich_modeling_2017}, the authors connected the parameters of each ejecta component with the mass ratio and chirp mass of the BNS system. The mass and velocity of the ejecta can then be calculated from the masses of the BNS. In addition, two further effects that can contribute additional luminosity to the kilonova are considered: (1) an enhancement of the blue ejecta due to magnetically driven winds and (2) shock heating of the ejecta by a GRB jet, where (1) is possible only if the remnant avoids prompt collapse \citep{Metzger_2018} and (2) contributes blue luminosity in the early evolution of the kilonova \citep{arcavi_first_2018}. \par The free parameters of the \texttt{bns} model for fitting AT2017gfo are: mass ratio $(q)$, chirp mass $(\mathcal{M})$, symmetric tidal deformability $(\Lambda_{s})$, redshift $(z)$, observer viewing angle $(\theta)$, enhancement factor of the blue ejecta by surface winds $(\alpha)$, fraction of the disk ejected $(\epsilon_{\rm disk})$, maximum stable NS mass $(M_{\rm TOV})$ and opening angle of the shocked cocoon $(\theta_{\rm c})$.
The mass ratio and the chirp mass can be derived from the NS masses by the equations: \begin{equation} q=\frac{M_1}{M_2} \le 1, \end{equation} \begin{equation} \mathcal{M}=\frac{{(M_1 M_2)}^{{3}/{5}}}{{(M_1+M_2)}^{{1}/{5}}}. \end{equation} The observer viewing angle of each BNS is sampled uniformly from $-\pi/2$ to $\pi/2$. To directly compare these light curves with AT2017gfo, the redshift is set to $z=0.0098$, the same as that of AT2017gfo \citep{soares-santos_electromagnetic_2017}. The symmetric tidal deformability can be calculated from $q$ and $\mathcal{M}$. The symmetric tidal deformability $\Lambda_s$ and the antisymmetric tidal deformability $\Lambda_a$ are defined as $\Lambda_s \equiv \Lambda_1 + \Lambda_2$ and $\Lambda_a \equiv \Lambda_1 - \Lambda_2$, where $\Lambda_1$ and $\Lambda_2$ are the dimensionless tidal deformabilities of the two NSs. The relationship between $\Lambda_s$ and $\Lambda_a$ is given in \cite{Yagi_2016}: \begin{equation} \label{eq:Yagi} \Lambda_{a} = F_{\bar{n}}(q) \Lambda_{s} \frac{a+\sum^{3}_{i=1} \sum^{2}_{j=1} b_{ij} q^j \Lambda_{s}^{-i/5}}{a+\sum^{3}_{i=1} \sum^{2}_{j=1} c_{ij} q^j \Lambda_{s}^{-i/5}}, \end{equation} \begin{equation} \label{eq:Yagi1} F_{\bar{n}}(q) \equiv \frac{1-q^{10/(3-\bar{n})}}{1+q^{10/(3-\bar{n})}}, \end{equation} where $a=0.0755$ and $\bar{n}=0.743$ are the fitting factors averaged over various EoS. The coefficients $b_{ij}$ and $c_{ij}$ are listed in Table \ref{tab:matrix}.
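The mass-ratio, chirp-mass and $F_{\bar{n}}(q)$ definitions above translate directly into code; a minimal sketch with GW170817-like illustrative masses (the 1.46 and 1.27 ${\rm M}_\odot$ values are assumed for the example, not fit results from this paper):

```python
def mass_ratio(m1, m2):
    """q = M1/M2 <= 1 (lighter over heavier)."""
    return min(m1, m2) / max(m1, m2)

def chirp_mass(m1, m2):
    """Chirp mass M_c = (M1 M2)^(3/5) / (M1 + M2)^(1/5)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def F_nbar(q, nbar=0.743):
    """F(q) = (1 - q^(10/(3-nbar))) / (1 + q^(10/(3-nbar))),
    the EoS-averaged prefactor of the Lambda_a-Lambda_s relation."""
    p = q ** (10.0 / (3.0 - nbar))
    return (1.0 - p) / (1.0 + p)

# Illustrative GW170817-like pair:
m1, m2 = 1.46, 1.27
print(round(chirp_mass(m1, m2), 3))
print(round(mass_ratio(m1, m2), 3))
print(F_nbar(1.0))  # equal masses -> 0: a symmetric binary has Lambda_a = 0
```

The $F_{\bar{n}}(1)=0$ limit is a useful sanity check: for equal masses the antisymmetric combination $\Lambda_a=\Lambda_1-\Lambda_2$ must vanish.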
A specific combination of $\Lambda_{1}$ and $\Lambda_{2}$, called the binary (or effective) tidal deformability, can be constrained by GW observations \citep{PhysRevD.89.103012,PhysRevLett.112.101101}: \begin{equation} \label{eq:lambda_bar} \widetilde{\Lambda} = \frac{16}{13} \frac{(12q+1)\Lambda_{1}+(12+q)q^4 \Lambda_{2}}{(1+q)^5}. \end{equation} It can also be calculated from the empirical relationship with the radius $R_{1.4}$ of a 1.4 ${\rm M}_\odot$ NS: \begin{equation} \label{eq:lambda_emp} \widetilde{\Lambda} \simeq 800 \left(\frac{R_{1.4}}{11.2\,{\rm km}} \frac{{\rm M}_{\odot}}{\mathcal{M}} \right)^6. \end{equation} For a given $R_{1.4}$ and $M_{\rm TOV}$, $\Lambda_s$ can then be calculated by combining Eqs.~(\ref{eq:Yagi}), (\ref{eq:Yagi1}), (\ref{eq:lambda_bar}) and (\ref{eq:lambda_emp}). The remaining parameters are set to the best-fit values for AT2017gfo in \cite{nicholl_tight_2021}: $\epsilon_{\rm disk}=0.12$, $\alpha=0.63$, $M_{\rm TOV}=2.17\,{\rm M}_{\odot}$, $R_{1.4}=11.06 \,$km and $\cos \theta_{\rm c}=0.91$. \par We construct a BNS sample following the results of \cite{farrow_mass_2019}. In the standard isolated binary formation channel, a recycled neutron star (NS) is born first and is spun up to a $\sim$10--100 ms period by an accretion/recycling process, while its companion, a nonrecycled NS, quickly spins down to an $\mathcal{O}(1)$\,s period after birth \citep{tauris_formation_2017}.
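The two expressions for $\widetilde{\Lambda}$ above are likewise elementary to code up; a minimal sketch (with $R_{1.4}$ in km and $\mathcal{M}$ in ${\rm M}_\odot$; the function names are ours, for illustration only):

```python
def lambda_tilde(q, lam1, lam2):
    """Binary (effective) tidal deformability, Eq. (eq:lambda_bar).

    q = M1/M2 <= 1; lam1 and lam2 are the tidal deformabilities of the two NSs.
    """
    return 16.0 / 13.0 * ((12 * q + 1) * lam1 + (12 + q) * q ** 4 * lam2) / (1 + q) ** 5

def lambda_tilde_empirical(r14_km, chirp_mass_msun):
    """Empirical relation, Eq. (eq:lambda_emp): ~800 * (R_1.4 / 11.2 km / M_chirp)^6."""
    return 800.0 * (r14_km / 11.2 / chirp_mass_msun) ** 6
```

A quick sanity check on the first formula: for equal masses ($q=1$) with $\Lambda_1=\Lambda_2=\Lambda$, Eq.~(\ref{eq:lambda_bar}) reduces to $\widetilde{\Lambda}=\Lambda$.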
By fitting 17 Galactic BNS systems, \cite{farrow_mass_2019} found that, in their best-fitting model, the nonrecycled NS mass is distributed uniformly while the recycled NS mass follows a two-Gaussian distribution: \begin{equation} \begin{aligned} &\pi (m|{\mu_1,\sigma_1,\mu_2,\sigma_2,\alpha})=\frac{\alpha}{\sigma_1 \sqrt{2\pi}} \times \\ &\exp \left[-\left(\frac{m-\mu_1}{\sqrt{2}\sigma_1}\right)^2 \right]+\frac{1-\alpha}{\sigma_2 \sqrt{2\pi}}\exp \left[-\left(\frac{m-\mu_2}{\sqrt{2}\sigma_2}\right)^2 \right]\\ \end{aligned} \end{equation} where the fitted parameters are $\mu_1=1.34\,{\rm M}_{\odot}$, $\mu_2=1.47\,{\rm M}_{\odot}$, $\sigma_1=0.02\,{\rm M}_{\odot}$, $\sigma_2=0.15\,{\rm M}_{\odot}$ and $\alpha=0.68$. For the nonrecycled NS, the uniform distribution ranges over 1.16--1.42\,${\rm M}_{\odot}$. The NS masses of each BNS system are sampled according to these fits. \par \begin{table} \centering \begin{tabular}{*{8}{c@{\hspace{0.2cm}}}} \toprule $b_{ij}$ & $i=1$ & $i=2$ & $i=3$ & $c_{ij}$ & $i=1$ & $i=2$ & $i=3$\\ \midrule $j=1$ & -2.235 & 10.45 & -15.75& $j=1$ & -2.048 & 7.941 & -7.36\\ $j=2$ & 0.847 & -3.25 & 13.61& $j=2$ & 0.598 & 0.566 & -1.32\\ \bottomrule \end{tabular} \caption{Values of the coefficients $b_{ij}$ and $c_{ij}$ in Eq.~(\ref{eq:Yagi}).} \label{tab:matrix} \end{table} Combining the BNS sample with the \texttt{bns} model, we calculate the light curves of a sample of 500 BNS systems in the g, r and i bands of WFST; they are shown in Figure \ref{fig:bns_sample} as a cluster of grey lines. For comparison, the light curves fitted to AT2017gfo are drawn as red lines. Figure \ref{fig:bns_sample} shows that the evolution of AT2017gfo is well contained within our sample.
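For reference, the NS-mass sampling used to build this sample is straightforward to reproduce; a hedged sketch in Python (NumPy) with the best-fit parameters quoted above (the function names and the seed are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_recycled(n, mu1=1.34, sig1=0.02, mu2=1.47, sig2=0.15, alpha=0.68):
    """Recycled NS mass: two-Gaussian mixture (weights alpha and 1 - alpha), in Msun."""
    use_first = rng.random(n) < alpha
    return np.where(use_first,
                    rng.normal(mu1, sig1, n),
                    rng.normal(mu2, sig2, n))

def sample_nonrecycled(n, lo=1.16, hi=1.42):
    """Nonrecycled NS mass: uniform on [1.16, 1.42] Msun."""
    return rng.uniform(lo, hi, n)

# One BNS system = one draw from each distribution.
m_recycled, m_nonrecycled = sample_recycled(500), sample_nonrecycled(500)
```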
In most cases, a kilonova in the sample is less luminous than AT2017gfo, which is broadly consistent with \cite{chase_kilonova_2022}, where the authors generated a kilonova sample based on another model (SuperNu) to investigate kilonova detectability. In Figure 3 of \cite{chase_kilonova_2022}, the LSST r-band detection depth of AT2017gfo exceeds that of most of their sample, i.e., AT2017gfo is among the more luminous events. During the first two days, the most promising time to detect a kilonova, the magnitude difference between the kilonova sample and AT2017gfo ranges from $-0.5$ to 1 mag. In the extreme case of a kilonova 1 mag fainter than AT2017gfo at peak, the detection depth (for WFST with 90\% efficiency and 300\,s exposure time) decreases to $\sim 200$ Mpc according to Figure \ref{fig:diff_mag}. Therefore, once the luminosity function of kilonovae is taken into account, the average predicted detection range decreases further relative to the AT2017gfo-like case. \begin{figure}[ht] \centering {\includegraphics[scale=0.5]{bns_sample}} \caption{The light curves calculated from the \texttt{bns} model (grey) and the fitted AT2017gfo light curves (red) in the g, r and i bands of WFST. All redshifts are set equal to that of AT2017gfo.} \label{fig:bns_sample} \end{figure} \subsection{Conclusion} This work explores the impact of image subtraction and host galaxy interference on WFST's ability to detect kilonovae and uses these results to optimize the exposure selection for future ToO observations. Using the Python packages \texttt{GalSim} and \texttt{PythonPhot}, with an AT2017gfo-like kilonova fitted and generated by \texttt{MOSFiT}, we generate simulated reference and science images and first test this process with a flat background.
We find that, under the dark night-sky background of the site in Table \ref{tab:teles}, the detection depth obtained with image subtraction is shallower than that estimated by simply comparing the kilonova magnitude with the limiting magnitude of the telescope. Given the spectrum of an AT2017gfo-like kilonova, the g and r bands are the most effective for kilonova searches, consistent with the results of \cite{zhu_kilonova_2021} and \cite{chase_kilonova_2022}. If the kilonova is not detected around peak, longer-wavelength bands become useful for continued searching, following the spectral evolution of kilonovae \citep{kasen_origin_2017}. \par Building on the flat-background simulation, we add host galaxies from the synthetic catalog CosmoDC2 to the images to explore the effect of the host. Using stellar mass as an indicator of a galaxy's BNS merger rate, we filter out galaxies that are unlikely to be hosts. The offset from the galaxy center is sampled from the distribution fitted to the observed projected offsets of sGRBs. We obtain the detectability at different redshifts and find that, although BNS mergers occur farther from their hosts' centers than SNe and TDEs, the host background still significantly affects kilonova detection and cannot be ignored. After accounting for the host galaxy, the detection depth at the peak of an AT2017gfo-like kilonova is reduced from 357 (338) Mpc to 246 (213) Mpc in the g (r) band. Longer-wavelength bands are more easily affected by the host because of the spectra of the galaxies. Given these effects, a longer exposure or integration time is recommended in ToO follow-up. \par We also compare the detection efficiency of WFST, LSST and ZTF, with exposure times set inversely proportional to their FoVs. At 90\% detection efficiency, LSST proves to be the most efficient and can reach $\sim500$ Mpc with 40\,s exposures.
Although the FoV of ZTF is much larger than that of WFST, WFST's deeper detection limit makes it the better choice when the kilonova is distant. For the upcoming O4 and O5 runs, WFST can coordinate with LSST to cover probable regions in the northern hemisphere. Because the kilonova dims rapidly after peak, the detection depth of WFST decreases drastically, so the effective observation window is within the first two nights. To estimate how representative AT2017gfo is as a template, we construct a kilonova sample based on the \texttt{bns} model and the fitted BNS mass distributions. AT2017gfo lies among the more luminous events in the sample, and the magnitude difference between the sample and AT2017gfo is $-0.5$ to 1 mag, implying that the detection range will decrease further once kilonova diversity is considered. \par Based on the detectability results above, we optimize the exposure time for each night and estimate the average total time needed to cover a GW localization region in a ToO observation. With two nights of observation and four hours per night, WFST with two bands can handle localization regions up to $\sim1000 \deg^2$ at 300 Mpc. Based on the simulation of BNS detections in O4 \citep{abbott_prospects_2020}, WFST can search for and find most kilonovae whose probable sky regions are observable, and is predicted to find $\sim 30$\% of the kilonovae of BNS mergers reported in O4. \par For larger localization regions or more distant events, a more detailed strategy is needed to guarantee efficiency. We note that in O3 ZTF applied the package \texttt{gwemopt} to design and generate observation sequences \citep{coughlin_optimizing_2018,almualla_dynamic_2020}. A similar package, combined with the exposure method chosen in this work, could serve WFST in such extreme situations. \begin{acknowledgments} We appreciate helpful discussions within the WFST science team.
This work is supported by the National Key R\&D Program of China Grant No. 2021YFC2203102 and 2022YFC2200100, NSFC No. 12273035, the Fundamental Research Funds for the Central Universities under Grant No. WK2030000036 and WK3440000004, and Cyr Chun Ying Tang Foundations. \end{acknowledgments} \vspace{5mm} \facilities{WFST} \software{\texttt{MOSFiT} \citep{Guillochon_2018, Villar_2017}, \texttt{GalSim} \citep{ROWE2015121}, \texttt{PythonPhot} \citep{2015ascl.soft01010J}, \texttt{NumPy} \citep{harris2020array, van2011numpy}, \texttt{Astropy} \citep{robitaille2013astropy}, \texttt{SciPy} \citep{virtanen2020scipy}, \texttt{matplotlib} \citep{hunter2007matplotlib}} \vspace{50mm} \bibliographystyle{aasjournal}
\section{Introduction} \label{intro} The power spectrum analysis of stochastic spectra \cite{GKKMRR-2011} has recently emerged as a powerful tool for studying both system-specific and universal properties of complex wave and quantum systems. In the context of quantum systems, it reveals whether the corresponding classical dynamics is regular or chaotic, or a mixture of both, and encodes a `degree of chaoticity'. In combination with other long- and short-range spectral fluctuation measures, it provides an effective way to identify system symmetries, determine the degree of incompleteness of experimentally measured spectra, and gain clues about systems' internal dynamics. Yet, the theoretical foundations of the power spectrum analysis of stochastic spectra have not been settled. In this paper, a nonperturbative theory of the power spectrum will be presented. To set the stage, we review traditional spectral fluctuation measures (Section~\ref{backgro}), define the power spectrum (Definition~\ref{def-01}) and briefly discuss its early theoretical and numerical studies as well as the recently reported experimental results (Section~\ref{powerspectrum}). We then argue (Section~\ref{uncor}) that a form-factor approximation routinely used for the power spectrum analysis in quantum chaotic systems is not flawless and needs to be revisited. \subsection{Short- and long-range measures of spectral fluctuations} \label{backgro} Spectral fluctuations of quantum systems reflect the nature -- regular or chaotic -- of their underlying classical dynamics \cite{B-1987,BGS-1984,R-2000}. In the case of fully chaotic classical dynamics, {\it hyperbolicity} (exponential sensitivity to initial conditions) and {\it ergodicity} (typical classical trajectories fill out available phase space uniformly) make quantum properties of chaotic systems universal \cite{BGS-1984}.
At sufficiently long times $t > T_*$, the single particle dynamics is governed by global symmetries of the system and is accurately described by the random matrix theory (RMT) \cite{M-2004,PF-book}. The emergence of universal statistical laws, anticipated by Bohigas, Giannoni and Schmit \cite{BGS-1984}, has been advocated within a field-theoretic \cite{AASA-1996, AAS-1997} and a semiclassical approach \cite{RS-2002} which links correlations in quantum spectra to correlations between periodic orbits in the associated classical geodesics. The time scale $T_*$ of compromised spectral universality is set by the period $T_1$ of the shortest closed orbit and the Heisenberg time $T_{\rm H}$, such that $T_1 \ll T_* \ll T_{\rm H}$. Several statistical measures of level fluctuations have been devised in quantum chaology. {\it Long-range} correlations of eigenlevels on the unfolded energy scale \cite{M-2004} can be measured by the variance $\Sigma^2(L)={\rm var}[{\mathcal N}(L)]$ of the number of levels ${\mathcal N}(L)$ in the interval of length $L$. The $\Sigma^2(L)$ statistics probes the two-level correlations only and exhibits \cite{B-1985} a universal RMT behavior provided the interval $L$ is not too long, $1 \ll L \ll T_{\rm H}/T_1$. The logarithmic behavior of the number variance, \begin{eqnarray} \label{nv} \Sigma^2_{\rm chaos}(L) = \frac{2}{\pi^2\beta} \ln L + {\mathcal O}(1), \end{eqnarray} indicates the presence of long-range repulsion between eigenlevels. Here, $\beta=1,2$ and $4$ denote the Dyson symmetry index \cite{M-2004,PF-book}. For more distant levels, $L \gg T_{\rm H}/T_1$, system-specific features show up in $\Sigma^2_{\rm chaos}(L)$ in the form of quasi-random oscillations with wavelengths inversely proportional to the periods of short closed orbits.
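For contrast, a sequence of uncorrelated eigenlevels (a Poisson process, relevant to the integrable case discussed below) exhibits a linearly growing number variance, $\Sigma^2(L)=L$, instead of the logarithmic law of Eq.~(\ref{nv}). A quick Monte Carlo check in Python (NumPy); this is a sketch for illustration, with names and parameters of our choosing:

```python
import numpy as np

rng = np.random.default_rng(7)

def number_variance_poisson(L, n_seq=100000, n_levels=50):
    """Sigma^2(L) = var[N(L)] for levels built from uncorrelated Exp(1) spacings.

    n_levels must comfortably exceed L so that no sequence is truncated.
    """
    eps = np.cumsum(rng.exponential(1.0, size=(n_seq, n_levels)), axis=1)
    counts = (eps <= L).sum(axis=1)
    return counts.var()

# number_variance_poisson(10.0) comes out close to 10, i.e. Sigma^2(L) = L.
```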
Individual features of quantum chaotic systems become less pronounced in spectral measures that probe the {\it short-range} fluctuations as these are largely determined by the long periodic orbits \cite{RS-2002}. The distribution of level spacing between (unfolded) consecutive eigenlevels, $P(s) = \langle \delta(s - E_j + E_{j+1})\rangle$, is the most commonly used short-range statistics. Here, the angular brackets denote averaging over the position $j$ of the reference eigenlevel or, more generally, averaging over such a narrow energy window that keeps the classical dynamics essentially intact. At small spacings, $s \ll 1$, the distribution of level spacings is mostly contributed by the {\it two-point} correlations, showing the phenomenon of symmetry-driven level repulsion, $P(s) \propto s^\beta$. (In a simple-minded fashion, this result can be read out from the Wigner surmise \cite{M-2004}). As $s$ grows, the spacing distribution becomes increasingly influenced by spectral correlation functions of {\it all} orders. In the universal regime ($s \lesssim T_{\rm H}/T_*$), these are best accounted for by the RMT machinery which produces parameter-free (but $\beta$-dependent) representations of level spacing distributions in terms of Fredholm determinants/Pfaffians and Painlev\'e transcendents. For quantum chaotic systems with broken time-reversal symmetry $(\beta=2)$ -- that will be the focus of our study -- the level spacing distribution is given by the famous Gaudin-Mehta formula, which when written in terms of Painlev\'e transcendents reads \cite{PF-book,JMMS-1980} \begin{eqnarray}\label{LSD-PV} P_{\rm chaos}(s) = \frac{d^2}{ds^2} \exp \left( \int_{0}^{2\pi s} \frac{\sigma_0(t)}{t} dt \right). 
\end{eqnarray} Here, $\sigma_0(t)$ is the fifth Painlev\'e transcendent defined as the solution to the nonlinear equation $(\nu=0)$ \begin{equation} \label{PV-family} (t \sigma_\nu^{\prime\prime})^2 + (t\sigma_\nu^\prime -\sigma_\nu) \left( t\sigma_\nu^\prime -\sigma_\nu + 4 (\sigma_\nu^\prime)^2 \right) - 4 \nu^2 (\sigma_\nu^\prime)^2 = 0 \end{equation} subject to the boundary condition $\sigma_0(t) = -t/2\pi -(t/2\pi)^2 + o(t^2)$ as $t\rightarrow 0$. The universal RMT laws [Eqs.~(\ref{nv}) and (\ref{LSD-PV})] apply to quantum systems with {\it completely chaotic} classical dynamics. Quantum systems whose classical dynamics is {\it completely integrable} belong to a different, Berry-Tabor universality class \cite{BT-1977}, partially shared by the Poisson point process. In particular, level spacings in a generic integrable quantum system exhibit statistics of waiting times between consecutive events in a Poisson process. This leads to radically different fluctuation laws: the number variance $\Sigma^2_{\rm int}(L)=L$ is no longer logarithmic while the level spacing distribution $P_{\rm int}(s) = e^{-s}$ becomes exponential \cite{B-1987}, with no signatures of level repulsion whatsoever. Such a selectivity of short- and long-range spectral statistical measures has long been used to uncover the underlying classical dynamics of quantum systems. (For a large class of quantum systems with mixed regular-chaotic classical dynamics, the reader is referred to Refs. \cite{T-1989, TU-1994, BKP-1998}.) \subsection{Power spectrum: Definition and early results} \label{powerspectrum} To obtain a more accurate characterization of quantum chaos, it is advantageous to use spectral statistics which probe the correlations between {\it both} nearby and distant eigenlevels. Such a statistical indicator -- {\it the power spectrum} -- has been suggested in Ref.~\cite{RGMRF-2002}.
\begin{definition}\label{def-01} Let $\{\varepsilon_1 \le \dots \le \varepsilon_N\}$ be a sequence of ordered unfolded eigenlevels, $N \in {\mathbb N}$, with the mean level spacing $\Delta$ and let $\langle \delta\varepsilon_\ell \delta\varepsilon_m \rangle$ be the covariance matrix of level displacements $\delta\varepsilon_\ell = \varepsilon_\ell - \langle \varepsilon_\ell\rangle$ from their mean $\langle \varepsilon_\ell\rangle$. A Fourier transform of the covariance matrix \begin{eqnarray}\label{ps-def} S_N(\omega) = \frac{1}{N\Delta^2} \sum_{\ell=1}^N \sum_{m=1}^N \langle \delta\varepsilon_\ell \delta\varepsilon_m \rangle\, e^{i\omega (\ell-m)}, \quad \omega \in {\mathbb R} \end{eqnarray} is called the power spectrum of the sequence. Here, the angular brackets stand for an average over an ensemble of eigenlevel sequences. \hfill $\blacksquare$ \end{definition} Since the power spectrum is $2\pi$-periodic, real and even function in $\omega$, \begin{eqnarray}\fl \qquad S_N(\omega+2\pi) = S_N(\omega), \quad S_N^*(\omega) = S_N(\omega), \quad S_N(-\omega) = S_N(\omega), \end{eqnarray} it is sufficient to consider it in the interval $0 \le \omega \le \omega_{\rm Ny}$, where $\omega_{\rm Ny} = \pi$ is the Nyquist frequency. In the spirit of the discrete Fourier analysis, one may restrict dimensionless frequencies $\omega$ in Eq.~(\ref{ps-def}) to a finite set \begin{eqnarray}\label{freq-k} \omega_k = \frac{2\pi k}{N} \end{eqnarray} with $k=\{1,2,\dots, N/2\}$, where $N$ is assumed to be an even integer. We shall see that resulting analytical expressions for $S_N(\omega_k)$ are slightly simpler than those for $S_N(\omega)$. \noindent\par \begin{remark} We notice in passing that similar statistics has previously been used by Odlyzko \cite{Od-1987} who analyzed power spectrum of the {\it spacings} between zeros of the Riemann zeta function. 
\hfill $\blacksquare$ \end{remark} Considering Eq.~(\ref{ps-def}) through the prism of a semiclassical approach, one readily realizes that, at low frequencies $\omega \ll T_*/T_{\rm H}$, the power spectrum is largely affected by system-specific correlations between very distant eigenlevels (accounted for by short periodic orbits). For higher frequencies, $\omega \gtrsim T_*/T_{\rm H}$, the contribution of longer periodic orbits becomes increasingly important and the power spectrum enters the {\it universal regime}. Eventually, in the frequency domain $T_*/T_{\rm H} \ll \omega \le \omega_{\rm Ny}$, long periodic orbits win over and the power spectrum gets shaped by correlations between the nearby levels. Hence, tuning the frequency $\omega$ in $S_N(\omega)$ one may probe spectral correlations between either adjacent or distant eigenlevels. Numerical simulations \cite{RGMRF-2002} have revealed that the average power spectrum $S_N(\omega_k)$ discriminates sharply between quantum systems with chaotic and integrable classical dynamics. While this was not completely unexpected, another finding of Ref.~\cite{RGMRF-2002} came as quite a surprise: numerical data for $S_N(\omega_k)$, at not too high frequencies, could be fitted by simple power-law curves, $S_N(\omega_k) \sim 1/\omega_k$ and $S_N(\omega_k) \sim 1/\omega_k^2$, for quantum systems with chaotic and integrable classical dynamics, respectively. In quantum systems with mixed classical dynamics, numerical evidence was presented \cite{GRRFSVR-2005} for the power-law of the form $S_N(\omega_k) \sim 1/\omega_k^\alpha$ with the exponent $1 <\alpha < 2$ measuring a `degree of chaoticity'. The power spectrum of interface fluctuations in various growth models belonging to the $(1+1)$-dimensional Kardar-Parisi-Zhang universality class, studied in Ref.~\cite{KAT-2017} both numerically and experimentally, was found to follow the power law with $\alpha= 5/3$.
The power spectrum was also measured in Sinai \cite{FKMMRR-2006} and perturbed rectangular \cite{BYBLDS-2016b} microwave billiards, microwave networks \cite{BYBLDS-2016,DYBBLS-2017} and three-dimensional microwave cavities \cite{LBYBS-2018}. For the power spectrum analysis of Fano-Feshbach resonances in an ultracold gas of Erbium atoms \cite{FMAFBMPK-2014}, the reader is referred to Ref.~\cite{PM-2015}. For quantum chaotic systems, the universal $1/\omega_k$ law for the average power spectrum in the frequency domain $T_*/T_{\rm H}\lesssim \omega_k \ll 1$ can be read out from the existing RMT literature. Indeed, defining a set of discrete Fourier coefficients \begin{eqnarray} \label{FK-def} a_k = \frac{1}{\sqrt{N}} \sum_{\ell=1}^N \delta \varepsilon_\ell \, e^{i \omega_k \ell} \end{eqnarray} of level displacements $\{\delta\varepsilon_\ell\}$, one observes the relation \begin{eqnarray} \label{ak} S_N(\omega_k)={\rm var}[a_k]. \end{eqnarray} Statistics of the Fourier coefficients $\{a_k\}$ were studied in some detail \cite{W-1987} within the Dyson's Brownian motion model \cite{D-1962}. In particular, it is known that, in the limit $k \ll N$, they are independent Gaussian distributed random variables with zero mean and the variance ${\rm var}[a_k] = N/(2\pi^2 \beta k)$. This immediately implies \begin{eqnarray} \label{BM} S_N(\omega_k \ll 1) \approx \frac{1}{\pi \beta \omega_k} \end{eqnarray} in concert with numerical findings. For larger $k$ (in particular, for $k \sim N$), fluctuation properties of the Fourier coefficients $\{a_k\}$ are unknown. In view of the relation Eq.~(\ref{ak}), a nonperturbative theory of the power spectrum to be developed in this paper sets up a well-defined framework for addressing statistical properties of discrete Fourier coefficients $\{a_k\}$ introduced in Ref.~\cite{W-1987}. 
An attempt to determine $S_N(\omega_k)$ for higher frequencies up to $\omega_k = \omega_{\rm Ny}$ was undertaken in Ref.~\cite{FGMMRR-2004} whose authors claimed to express the large-$N$ power spectrum in the entire domain $T_*/T_{\rm H}\lesssim \omega_k \le \omega_{\rm Ny}$ in terms of the {\it spectral form-factor} \cite{M-2004} \begin{eqnarray}\fl \label{FF-def} \quad K_N(\tau) = \frac{1}{N} \left( \Big< \sum_{\ell=1}^N \sum_{m=1}^N e^{2 i \pi \tau (\varepsilon_\ell - \varepsilon_m) } \Big> - \Big< \sum_{\ell=1}^N e^{2 i \pi \tau \varepsilon_\ell } \Big> \Big< \sum_{m=1}^N e^{-2 i \pi \tau \varepsilon_m } \Big> \right) \end{eqnarray} of a quantum system, $\tau \ge 0$. Referring the interested reader to Eqs.~(3), (8) and (10) of the original paper Ref.~\cite{FGMMRR-2004}, here we only quote a small-$\omega_k$ reduction of their result: \begin{eqnarray} \label{FFA} \hat{S}_N(\omega_k \ll 1) \approx \frac{1}{\omega_k^2} K_N\left( \frac{\omega_k}{2\pi} \right). \end{eqnarray} (Here, the hat-symbol ($\;\hat{ }\;$) is used to indicate that the power spectrum $\hat{S}_N(\omega_k \ll 1)$ is the one furnished by the form-factor approximation.) A similar approach was also used in subsequent papers \cite{GG-2006,RMRFM-2008}. Even though numerical simulations seemed to confirm a theoretical curve derived in Ref.~\cite{FGMMRR-2004}, we believe that the status of their heuristic approach needs to be clarified. This will be done in Section~\ref{uncor}. \subsection{Spectra with uncorrelated spacings: Form-factor vs power spectrum} \label{uncor} A simple mathematical model of eigenlevel sequences $\{\varepsilon_1,\dots,\varepsilon_N\}$ with identically distributed, {\it uncorrelated} spacings $\{s_1,\dots,s_N\}$, where $\ell$-th ordered eigenlevel equals \begin{eqnarray} \label{ELSJ} \varepsilon_\ell = \sum_{j=1}^\ell s_j, \end{eqnarray} provides an excellent playground for analyzing the validity of the form-factor approximation.
Defined by the covariance matrix of spacings of the form ${\rm cov}(s_i,s_j) = \sigma^2 \delta_{ij}$, such that $\langle s_i \rangle =1$, it allows us to determine exactly {\it both} the power spectrum Eq.~(\ref{ps-def}) and the form-factor Eq.~(\ref{FF-def}). \noindent\newline\newline {\it Power spectrum.}---Indeed, realizing that the covariance matrix of ordered eigenlevels equals \begin{eqnarray} \langle \delta\varepsilon_\ell \delta\varepsilon_m \rangle = \sigma^2 {\rm min}(\ell, m), \end{eqnarray} we derive an {\it exact} expression for the power spectrum ($N \in {\mathbb N}$) \begin{eqnarray}\fl \qquad\quad \label{S_exp} S_N(\omega) = \frac{2N+1}{4N} \frac{\sigma^2}{\sin^2(\omega/2)} \left( 1 - \frac{1}{2N+1} \frac{\sin\left((N+1/2)\omega\right)}{\sin(\omega/2)} \right). \end{eqnarray} Equation~(\ref{S_exp}) stays valid in the entire region of frequencies $0 \le \omega \le \pi$. For a set of discrete frequencies $\omega_k = 2\pi k/N$, it reduces to \begin{eqnarray}\label{smwk} S_N(\omega_k) = \frac{\sigma^2}{2\sin^2(\omega_k/2)}, \quad 0 < \omega_k \le \pi. \end{eqnarray} \begin{remark}\label{rem-univer} Notice that Eqs.~(\ref{S_exp}) and (\ref{smwk}) for the power spectrum of eigenlevel sequences with uncorrelated level spacings hold {\it universally}. Indeed, both expressions appear to be independent of a particular choice of the level spacings distribution; the level spacing variance $\sigma^2$ is the only model-specific parameter. \hfill $\blacksquare$ \end{remark} \begin{figure} \includegraphics[width=\textwidth]{Fig1.eps} \caption{Power spectrum $S_N(\omega)$ as a function of frequency $\omega$ for eigenlevel sequences with uncorrelated level spacings. Solid red line corresponds to the theoretical curve Eq.~(\ref{S_exp}) with $\sigma^2=1$. Blue crosses represent the average power spectrum simulated for $10$ million sequences of $N=2048$ random eigenlevels with uncorrelated, exponentially distributed spacings $s_i \sim {\rm Exp}(1)$. 
Inset: a log-log plot for the same graphs.} \label{Figure_ps_exp} \end{figure} For illustration purposes, in Fig.~\ref{Figure_ps_exp}, we compare the theoretical power spectrum $S_N(\omega)$, Eq.~(\ref{S_exp}), with the average power spectrum {\it simulated} for an ensemble of sequences of random eigenlevels with uncorrelated, exponentially distributed level spacings $s_i \sim {\rm Exp}(1)$. Since the unit mean level spacing $\langle s_j \rangle =1$ is intrinsic to the model, the unfolding procedure is redundant. Perfect agreement between the theoretical and the simulated curves is clearly observed in the entire frequency domain $0<\omega\le \pi$. For further reference, we need to identify three scaling limits of $S_N(\omega)$ that emerge as $N\rightarrow \infty$. In doing so, the power spectrum will be multiplied by $\omega^2$ to get rid of the singularity at $\omega=0$. (i) The first -- infrared -- regime, refers to extremely small frequencies, $\omega \sim N^{-1}$. It is described by the double scaling limit \begin{eqnarray} \label{SN-1st} {\mathcal S}^{\rm{(-1)}}(\Omega) = \lim_{N\rightarrow \infty} \omega^2 S_N(\omega)\Big|_{\omega=\Omega/N} = 2\sigma^2 \left( 1- \frac{\sin \Omega}{\Omega} \right), \end{eqnarray} where $\Omega={\mathcal O}(N^0)$. One observes: \begin{eqnarray} \label{S1T} {\mathcal S}^{(-1)}(\Omega) = \left\{ \begin{array}{ll} {\mathcal O}(\Omega^2), & \hbox{$\Omega\rightarrow 0$;} \\ 2\sigma^2 + o(1), & \hbox{$\Omega\rightarrow \infty$.} \end{array} \right. \end{eqnarray} (ii) The second scaling regime describes the power spectrum for intermediately small frequencies $\omega \sim N^{-\alpha}$ with $0<\alpha<1$. In this case, a double scaling limit becomes trivial: \begin{eqnarray} \label{SN-2nd} {\mathcal S}^{\rm{(-\alpha)}}(\tilde\Omega) = \lim_{N\rightarrow \infty} \omega^2 S_N(\omega)\Big|_{\omega=\tilde{\Omega}/N^\alpha} = 2\sigma^2, \end{eqnarray} where $\tilde\Omega={\mathcal O}(N^0)$. 
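The universal law Eq.~(\ref{smwk}) is also easy to probe by direct simulation; the following Python (NumPy) sketch mirrors the Monte Carlo behind Fig.~\ref{Figure_ps_exp} at a smaller $N$ and ensemble size (the parameter values are ours, chosen for speed):

```python
import numpy as np

rng = np.random.default_rng(42)

N, n_seq = 128, 50000
k = np.arange(1, N // 2 + 1)
omega_k = 2 * np.pi * k / N

# Levels with uncorrelated Exp(1) spacings: eps_l = s_1 + ... + s_l,
# so <eps_l> = l and the mean spacing is Delta = 1 (no unfolding needed).
eps = np.cumsum(rng.exponential(1.0, size=(n_seq, N)), axis=1)
delta = eps - np.arange(1, N + 1)

# Eq. (ps-def), with the ensemble average taken over the n_seq realizations:
# S_N(omega_k) = <|sum_l delta_l exp(i omega_k l)|^2> / (N Delta^2).
phases = np.exp(1j * np.outer(np.arange(1, N + 1), omega_k))
S_sim = np.mean(np.abs(delta @ phases) ** 2, axis=0) / N

# Universal prediction, Eq. (smwk), with sigma^2 = var[Exp(1)] = 1.
S_theory = 1.0 / (2.0 * np.sin(omega_k / 2) ** 2)
```

At this ensemble size the simulated curve reproduces Eq.~(\ref{smwk}) to within a few percent across the whole frequency range.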
In the forthcoming discussion of a spectral form-factor [Eq.~(\ref{K-interm})], such a scaling limit will appear with $\alpha=1/2$. (iii) The third scaling regime describes the power spectrum for $\omega = {\mathcal O}(N^0)$ fixed as $N \rightarrow \infty$. In this case, we derive \begin{eqnarray} \label{SN-3rd} {\mathcal S}^{\rm{(0)}}(\omega) = \lim_{N\rightarrow \infty} \omega^2 S_N(\omega) = \sigma^2\frac{\omega^2}{2\sin^2(\omega/2)}, \end{eqnarray} where $\omega={\mathcal O}(N^0)$. One observes: \begin{eqnarray} \label{S3T} {\mathcal S}^{(0)}(\omega) = \left\{ \begin{array}{ll} 2\sigma^2+{\mathcal O}(\omega^2), & \hbox{$\omega \rightarrow 0$;} \\ \sigma^2\pi^2/2, & \hbox{$\omega = \pi$.} \end{array} \right. \end{eqnarray} Equations~(\ref{S1T}), (\ref{SN-2nd}) and (\ref{S3T}) imply continuity of ${\mathcal S}(\omega)$ across the three scaling regimes. We shall return to the universal formulae Eqs.~(\ref{SN-1st}), (\ref{SN-2nd}) and (\ref{SN-3rd}) later on. \newline\newline{\it Spectral form-factor.}---For eigenlevel sequences with identically distributed, uncorrelated level spacings, the form-factor $K_N(\tau)$ defined by Eq.~(\ref{FF-def}) can be calculated exactly, too. 
Defining the characteristic function of $i$-th level spacing, \begin{eqnarray} \Psi_s(\tau) = \langle e^{2i\pi \tau \, s_i} \rangle = \int_{0}^{\infty} ds \, e^{2i\pi \tau s} f_{s_i}(s), \end{eqnarray} where $f_{s_i}(s)$ is the probability density of the $i$-th level spacing, we reduce Eq.~(\ref{FF-def}) to \begin{eqnarray}\fl \label{K-tau-theor} K_N(\tau) = 1 + \frac{2}{N}{\rm Re\,} \left[ \frac{\Psi_s(\tau)}{1-\Psi_s(\tau)} \left( N - \frac{1- \Psi_s^N(\tau)}{1- \Psi_s(\tau)} \right) \right] - \frac{1}{N} \left| \Psi_s(\tau) \frac{1- \Psi_s^N(\tau)}{1- \Psi_s(\tau)} \right|^2.\nonumber\\ {} \end{eqnarray} In Fig.~\ref{Figure_KT_exp}, we compare the theoretical form-factor Eq.~(\ref{K-tau-theor}) with the average form-factor simulated for an ensemble of sequences of random eigenlevels with uncorrelated, exponentially distributed level spacings as explained below Remark~\ref{rem-univer}. The simulation was based on Eqs.~(\ref{FF-def}) and (\ref{ELSJ}), and involved averaging \cite{P-1997} over ten million realizations. Referring the reader to a figure caption for detailed explanations, we plainly notice a perfect agreement between the simulations and the theoretical result Eq.~(\ref{K-tau-theor}). \begin{figure} \includegraphics[width=\textwidth]{Fig2.eps} \caption{Spectral form-factor $K_N(\tau)$ as a function of $\tau$ for a model and the data specified in the caption to Fig.~\ref{Figure_ps_exp}. Solid red line corresponds to the theoretical curve Eq.~(\ref{K-tau-theor}) with $\Psi_s(\tau) = (1-2 i \pi \tau)^{-1}$. Inset: a close-up view of the same graphs; additional black curves display limiting form-factor in various scaling regimes. Dashed line: regime ${\rm (I)}$, Eq.~(\ref{K-infrared}) with $\tau=T/N$. Solid line: regime ${\rm (II)}$, Eq.~(\ref{K-interm}) with $\tau={\mathcal T}/N^{1/2}$. Dotted line: regime ${\rm (III)}$, Eq.~(\ref{K-tau-3}), see discussion there. 
Notice that the black dashed curve ${\rm [(I)]}$ starts to deviate from the red curve (after the fourth blue cross the deviation exceeds $10\%$; as $\tau$ grows further, the relative deviation approaches the factor $2$). For larger $\tau$, the black solid curve ${\rm [(II)]}$ becomes a better fit to the red curve. Finally, the red curve approaches the unity depicted by the black dotted line ${\rm [(III)]}$.} \label{Figure_KT_exp} \end{figure} As $N\rightarrow \infty$, three different scaling regimes can be identified for the spectral form-factor. {\it Two} of them, arising in specific {\it double scaling} limits, appear to be {\it universal}. (i) The first -- infrared -- regime, refers to extremely short times, $\tau \sim N^{-1}$. Assuming existence and convergence of the moment-expansion for the characteristic function $\Psi_s(\tau)$, we expand it up to the terms of order $N^{-2}$, \begin{eqnarray} \Psi_s(\tau)\Big|_{\tau=T/N} = 1 + 2i\pi \frac{T}{N} - 2\pi^2 (\sigma^2+1) \frac{T^2}{N^2} + \mathcal{O}(N^{-3}) \end{eqnarray} to derive the infrared double scaling limit for the form factor: \begin{eqnarray} \label{K-infrared} K^{(-1)}(T)=\lim_{N\rightarrow \infty} K_N(\tau)\Big|_{\tau=T/N} = 2\sigma^2 \left( 1 - \frac{\sin (2\pi T)}{2 \pi T} \right), \end{eqnarray} where $T = {\mathcal O}(N^0)$. Notice that this formula holds {\it universally} as $K^{(-1)}(T)$ does not depend on a particular choice of the level spacings distribution; its variance $\sigma^2$ is the only model-specific parameter. One observes: \begin{eqnarray} \label{K1T} K^{(-1)}(T) = \left\{ \begin{array}{ll} {\mathcal O}(T^2), & \hbox{$T\rightarrow 0$;} \\ 2\sigma^2 + o(1), & \hbox{$T\rightarrow \infty$.} \end{array} \right. \end{eqnarray} (ii) The second -- intermediate -- regime, refers to intermediately short times, $\tau \sim N^{-1/2}$. 
Expanding the characteristic function $\Psi_s(\tau)$ up to the terms of order $N^{-1}$, \begin{eqnarray} \Psi_s(\tau)\Big|_{\tau={\mathcal T}/N^{1/2}} = 1 + 2i\pi \frac{\mathcal T}{N^{1/2}} - 2\pi^2 (\sigma^2+1) \frac{{\mathcal T}^2}{N} + \mathcal{O}(N^{-3/2}), \end{eqnarray} we discover that, for intermediately short times, the double scaling limit of the form-factor reads \begin{eqnarray} \label{K-interm} \fl \qquad\qquad K^{(-1/2)}({\mathcal T})=\lim_{N\rightarrow \infty} K_N(\tau)\Big|_{\tau={\mathcal T}/N^{1/2}} = \sigma^2 \left( 1 + \frac{1 - e^{-4 \pi^2 \sigma^2 {\mathcal T}^2}}{4 \pi^2 \sigma^2 {\mathcal T}^2}\right), \end{eqnarray} where ${\mathcal T} = {\mathcal O}(N^0)$. Hence, in the intermediate double scaling limit, the form-factor exhibits {\it universal} behavior too, as $K^{(-1/2)}(\mathcal{T})$ depends on a particular choice of the level spacing distribution only through its variance $\sigma^2$. One observes: \begin{eqnarray} \label{K2T} K^{(-1/2)}(\mathcal{T}) = \left\{ \begin{array}{ll} 2 \sigma^2+{\mathcal O}({\mathcal T}^2), & \hbox{${\mathcal T}\rightarrow 0$;} \\ \sigma^2 + o(1), & \hbox{${\mathcal T}\rightarrow \infty$.} \end{array} \right. \end{eqnarray} (iii) The third scaling regime describes the form-factor for $\tau = {\mathcal O}(N^0)$ fixed as $N \rightarrow \infty$. Noting that in this case the $N$-th power of the characteristic function, $\Psi_s^N(\tau)$, vanishes exponentially fast, we derive \begin{eqnarray} \label{K-tau-3} K^{(0)}(\tau) = \lim_{N\rightarrow \infty} K_N(\tau) = 1 + 2 {\rm Re\,} \left[ \frac{\Psi_s(\tau)}{1-\Psi_s(\tau)}\right]. \end{eqnarray} Notably, in the fixed-$\tau$ scaling limit, the form-factor is {\it no longer universal} as it depends explicitly on the particular distribution of level spacings \footnote{For the exponential distribution of level spacings the form-factor in the third scaling regime equals unity, $K^{(0)}(\tau) \equiv 1$.} through its characteristic function $\Psi_s(\tau)$.
One observes: \begin{eqnarray} \label{K3T} K^{(0)}(\tau) = \left\{ \begin{array}{ll} \sigma^2+{\mathcal O}(\tau^2), & \hbox{$\tau \rightarrow 0$;} \\ 1 + o(1), & \hbox{$\tau\rightarrow \infty$.} \end{array} \right. \end{eqnarray} \begin{figure} \includegraphics[width=\textwidth]{Fig3.eps} \caption{Limiting curves ($N\rightarrow \infty$) for the form-factor across the three scaling regimes [${\rm (I)}$ -- Eq.~(\ref{K-infrared}), ${\rm (II)}$ -- Eq.~(\ref{K-interm}), and ${\rm (III)}$ -- Eq.~(\ref{K-tau-3})], glued together at vertical dotted lines. The functions $K^{(-1)}(T)$, $K^{(-1/2)}({\mathcal T})$ and $K^{(0)}(\tau)$, describing the regimes ${\rm (I)}$, ${\rm (II)}$ and ${\rm (III)}$, correspondingly, are plotted vs variables $T=N\tau$, ${\mathcal T}=N^{1/2}\tau$ and $\tau$, each running over the entire real half-line compactified using the transformation $(0,\infty)=\tan((0,\pi/2))$. Solid red, green and blue curves correspond to the form-factor in the model of uncorrelated spacings drawn from the $\rm{Erlang}(3,3)$ (red), inverse Gaussian ${\rm IG}(1,3)$ (green) and uniform ${\rm U}(0,2)$ (blue) distributions, exhibiting identical mean and variance. The dashed black line -- to be discussed in the main text -- displays the limiting curve of the function $\lim_{N\rightarrow \infty}\omega^2 S_N(\omega)$ with $0\le \omega=2\pi\tau \le \pi$ (that is, $0\le \tau \le \rfrac{1}{2}$) for all three choices of the level spacing distribution. In the scaling regimes ${\rm (I)}$, ${\rm (II)}$ and ${\rm (III)}$, the curve is described by Eqs.~(\ref{SN-1st}), (\ref{SN-2nd}) with $\alpha=1/2$ and (\ref{SN-3rd}), respectively. } \label{Fig_cartoon} \end{figure} The three scaling regimes for the form-factor as $N \rightarrow \infty$ are illustrated in Fig.~\ref{Fig_cartoon}. 
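The closed form Eq.~(\ref{K-tau-theor}) and the limiting laws Eqs.~(\ref{K-infrared}), (\ref{K-interm}) and (\ref{K-tau-3}) can also be probed numerically. The sketch below (assuming, as in Fig.~\ref{Figure_KT_exp}, uncorrelated unit-mean exponential spacings, so that $\Psi_s(\tau)=(1-2i\pi\tau)^{-1}$ and $\sigma^2=1$) first verifies Eq.~(\ref{K-tau-theor}) against the underlying geometric sums over pairs of eigenlevels, and then evaluates it deep inside each scaling regime:

```python
import numpy as np

def K_closed(tau, N):
    # closed form Eq. (K-tau-theor) with Psi_s(tau) = 1/(1 - 2*pi*i*tau)
    psi = 1.0 / (1.0 - 2j * np.pi * tau)
    g = (1.0 - psi**N) / (1.0 - psi)
    return (1.0 + (2.0 / N) * (psi / (1.0 - psi) * (N - g)).real
            - abs(psi * g) ** 2 / N)

def K_pairs(tau, N):
    # same quantity from the sums over pairs of levels: for uncorrelated
    # spacings <exp(2*pi*i*tau*(eps_m - eps_l))> = Psi^(m-l) for m > l
    psi = 1.0 / (1.0 - 2j * np.pi * tau)
    k = np.arange(1, N)
    connected = N + 2.0 * np.real(((N - k) * psi**k).sum())
    mean_z = (psi ** np.arange(1, N + 1)).sum()
    return (connected - abs(mean_z) ** 2) / N

# deterministic identity: closed form vs brute-force pair sums
assert abs(K_closed(0.3, 64) - K_pairs(0.3, 64)) < 1e-10

T = 0.5
# regime (I): tau = T/N, limit 2*sigma^2*(1 - sin(2*pi*T)/(2*pi*T))
lim1 = 2.0 * (1.0 - np.sin(2 * np.pi * T) / (2 * np.pi * T))
assert abs(K_closed(T / 10**6, 10**6) - lim1) < 0.05
# regime (II): tau = T/sqrt(N), limit 1 + (1 - exp(-4*pi^2*T^2))/(4*pi^2*T^2)
lim2 = 1.0 + (1.0 - np.exp(-4 * np.pi**2 * T**2)) / (4 * np.pi**2 * T**2)
assert abs(K_closed(T / 10**4, 10**8) - lim2) < 0.05
# regime (III): fixed tau
assert abs(K_closed(0.3, 10**4) - 1.0) < 0.05
```

In line with the footnote below Eq.~(\ref{K-tau-3}), the fixed-$\tau$ limit for the exponential model equals unity.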
The continuity of the entire curve is guaranteed by the equality of the limits $\lim_{T\rightarrow \infty} K^{(-1)}(T) = \lim_{{\mathcal T}\rightarrow 0} K^{(-1/2)}({\mathcal T})$ and $\lim_{{\mathcal T}\rightarrow \infty} K^{(-1/2)}({\mathcal T}) = \lim_{\tau\rightarrow 0} K^{(0)}(\tau)$, see Eqs.~(\ref{K1T}), (\ref{K2T}) and (\ref{K3T}). To highlight the occurrence of both universal and non-universal $\tau$-domains in the form-factor, the latter is plotted for three different choices of level spacing distributions, $s_j \sim {\rm Erlang}(3,3)$, ${\rm IG}(1,3)$ and ${\rm U}(0,2)$, characterized by the same mean $\langle s_j \rangle =1$ and variance $\sigma^2=1/3$: \begin{eqnarray} \label{LSD-3} f_{s_j}(s) = \Theta(s) \times \left\{ \begin{array}{ll} \displaystyle \frac{27}{2} s^2 \exp(-3s), & \hbox{${\rm Erlang}(3,3)$;} \\ \displaystyle \left(\frac{3}{2\pi s^3}\right)^{1/2} \exp \left( -\frac{3(s-1)^2}{2s} \right), & \hbox{${\rm IG}(1,3)$;} \\ \displaystyle \frac{1}{2}\Theta(2-s), & \hbox{${\rm U}(0,2)$.} \end{array} \right. \end{eqnarray} The three curves coincide in the universal domains ${\rm (I)}$ and ${\rm (II)}$. On the contrary, in the third regime [${\rm (III)}$], the form-factor behavior is {\it non-universal} as the three curves evolve differently depending on a particular choice of the level spacing distribution. Yet, all three curves approach unity at infinity. \noindent\newline\newline {\it Implications for the power spectrum.}---We now turn to a discussion of the relation between the power spectrum Eq.~(\ref{ps-def}) and the form-factor Eq.~(\ref{FF-def}). To this end, we shall compare the limiting forms, as $N\rightarrow \infty$, of the form-factor, studied both analytically and numerically in the previous subsection, with the limiting behavior of the product $\omega^2 S_N(\omega)\mid_{\omega=2\pi\tau}$ as prompted by the form-factor approximation Eq.~(\ref{FFA}). The latter is plotted in Fig.~\ref{Fig_cartoon} by the black dashed line.
(i) For extremely low frequencies $\omega={\mathcal O}(N^{-1})$ (equivalently, short times $\tau={\mathcal O}(N^{-1})$) belonging to the first scaling regime [${\rm (I)}$], the two quantities are seen to {\it coincide} \begin{eqnarray} \label{KS-1} {\rm (universal)}\;\;K^{\rm{(-1)}}(T) = {\rm (universal)}\;\;{\mathcal S}^{\rm{(-1)}}(\Omega)\mid_{\Omega=2\pi T}, \end{eqnarray} see Eqs.~(\ref{SN-1st}) and (\ref{K-infrared}). The {\it universal} behavior of both spectral indicators in the domain ${\rm (I)}$ is illustrated in Fig.~\ref{Fig_cartoon} arranged for three different level spacing distributions specified by Eq.~(\ref{LSD-3}). (ii) In the second scaling regime [${\rm (II)}$], characterized by intermediately low frequencies $\omega={\mathcal O}(N^{-1/2})$ (equivalently, $\tau={\mathcal O}(N^{-1/2})$), the limiting curve for the form-factor starts to deviate from the one for the product $\omega^2 S_N(\omega)\mid_{\omega=2\pi\tau}$, in concert with the analytical analysis, \begin{eqnarray} {\rm (universal)}\;\;K^{\rm{(-1/2)}}(\mathcal{T}) \neq {\rm (universal)}\;\;{\mathcal S}^{\rm{(-1/2)}}(\tilde\Omega)\mid_{\tilde\Omega=2\pi {\mathcal T}}=2\sigma^2, \end{eqnarray} compare Eq.~(\ref{SN-2nd}) taken at $\alpha=1/2$ with Eq.~(\ref{K-interm}). While the product ${\mathcal S}^{\rm{(-1/2)}}(\tilde\Omega)$ is a constant throughout the entire domain ${\rm (II)}$, the form-factor is described by the universal function Eq.~(\ref{K-interm}) irrespective of a particular form of the level spacing distribution; the relative deviation between the two limiting curves reaches its maximum ($=2$) at the borderline between the regimes ${\rm (II)}$ and ${\rm (III)}$, in concert with the earlier conclusion of Ref.~\cite{ROK-2017}. How fast this factor of $2$ is approached depends only on the value of $\sigma^2$, as described by Eq.~(\ref{K-interm}). 
Hence, the relation Eq.~(\ref{FFA}) is clearly violated in the second scaling regime, apart from a single point at the border between the regimes ${\rm (I)}$ and ${\rm (II)}$ as stated below Eq.~(\ref{K3T}). (iii) In the third scaling regime [${\rm (III)}$] emerging for $\omega = {\mathcal O}(N^0)$ (equivalently, $\tau = {\mathcal O}(N^0)$) the two limiting curves depart incurably from each other: while the product $\lim_{N\rightarrow \infty}\omega^2 S_N(\omega)$, shown by the dashed black line, follows the {\it universal} law Eq.~(\ref{SN-3rd}), the form-factor displays a {\it non-universal} behavior strongly depending on the particular form of level spacing distribution as highlighted by solid red, green and blue curves, see also Eq.~(\ref{K-tau-3}), \begin{eqnarray} {\rm (nonuniversal)}\;\;K^{\rm{(0)}}(\tau) \neq {\rm (universal)}\;\; {\mathcal S}^{\rm{(0)}}(\omega)\mid_{\omega=2\pi \tau}. \end{eqnarray} Hence, the two spectral statistics -- the form-factor and the power spectrum -- {\it cannot} be reduced to each other for any finite frequency $0<\omega <\pi$ as $N \rightarrow \infty$. \newline\newline\noindent {\it Conclusion.}---Detailed analytical and numerical analysis, performed for eigenlevel sequences with uncorrelated, identically distributed level spacings, leads us to conclude that the spectral form-factor and the power spectrum are generically {\it two distinct statistical indicators}. This motivates us to revisit the problem of calculating the power spectrum for a variety of physically relevant eigenlevel sequences {\it beyond} the form-factor approximation. In the rest of the paper, this program, initiated in our previous publication \cite{ROK-2017}, will be pursued for (a) generic eigenlevel sequences possessing stationary level spacings and (b) eigenlevel sequences drawn from a variant of the circular unitary ensemble of random matrices. 
The latter case is of special interest as its $N\rightarrow \infty$ limit belongs to the spectral universality class shared by a wide variety of quantum systems with completely chaotic classical dynamics and broken time-reversal symmetry. \section{Main results and discussion} In this Section, we collect and discuss the major concepts and results of our work. Throughout the paper, we shall deal with eigenlevel sequences possessing {\it stationary level spacings} as defined below. \begin{definition}\label{def-stationary} Consider an ordered sequence of (unfolded) eigenlevels $\{0 \le \varepsilon_1 \le \cdots \le \varepsilon_N\}$ with $N \in \mathbb{N}$. Let $\{s_1, \cdots, s_N \}$ be the sequence of spacings between consecutive eigenlevels such that $s_\ell = \varepsilon_\ell - \varepsilon_{\ell-1}$ with $\ell=1,\dots,N$ and $\varepsilon_0=0$. The sequence of level spacings is said to be {\it stationary} if (i) the average spacing \begin{eqnarray}\label{skd} \langle s_\ell \rangle = \Delta \end{eqnarray} is independent of $\ell=1,\dots,N$ and (ii) the covariance matrix of {\it spacings} is of the Toeplitz type: \begin{eqnarray}\label{toep} {\rm cov}(s_\ell, s_m) = I_{|\ell-m|} - \Delta^2 \end{eqnarray} for all $\ell,m=1,\dots,N$. Here, $I_n$ is a function defined for non-negative integers $n$. \hfill $\blacksquare$ \end{definition} \noindent\par \begin{remark} While stationarity of level spacings is believed to emerge after the unfolding procedure in the limit $N\rightarrow \infty$, see Ref.~\cite{BLS-2001}, it is not uncommon to observe stationarity even for {\it finite} eigenlevel sequences.
Two paradigmatic examples of {\it finite}-$N$ eigenlevel sequences with stationary spacings include (i) a set of uncorrelated identically distributed eigenlevels \cite{RK-2019} mimicking quantum systems with integrable classical dynamics and (ii) eigenlevels drawn from the `tuned' circular ensembles of random matrices appearing in the random matrix theory approach to quantum systems with completely chaotic classical dynamics, see Section~\ref{PS-RMT-exact}. \hfill $\blacksquare$ \end{remark} \subsection{Main results} {\bf First result.}---For {\it generic} eigenlevel sequences, the power spectrum Eq.~(\ref{ps-def}) is determined by {\it both} diagonal and off-diagonal elements of the covariance matrix $\langle \delta\varepsilon_\ell \delta\varepsilon_m \rangle$. In the important case of eigenlevel sequences {\it with stationary level spacings}, the power spectrum can be expressed solely in terms of the {\it diagonal} elements $\langle \delta\varepsilon_\ell^2 \rangle$ of the covariance matrix. Theorem \ref{PS-stationary-main} below establishes an exact relation between the power spectrum (see Definition~\ref{def-01}) and a generating function of variances of {\it ordered} eigenvalues. \begin{theorem}[First master formula]\label{PS-stationary-main} Let $N \in \mathbb{N}$ and $0\le \omega \le \pi$. The power spectrum for an eigenlevel sequence $\{0\le \varepsilon_1 \le \cdots \le \varepsilon_N\}$ with stationary spacings equals \begin{eqnarray}\label{smd-sum} S_N(\omega) = \frac{1}{N \Delta^2} {\rm Re} \left( z \frac{\partial}{\partial z} - N - \frac{1-z^{-N}}{1-z}\right) \sum_{\ell=1}^N {\rm var}[\varepsilon_\ell]\, z^\ell, \end{eqnarray} where $z=e^{i \omega}$, $\Delta$ is the mean level spacing, and \begin{eqnarray} {\rm var}[\varepsilon_\ell] = \langle \delta\varepsilon_\ell^2 \rangle. \end{eqnarray} \end{theorem} For the proof, the reader is referred to Section~\ref{Th23-proof}.
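Since, for stationary spacings, the covariance matrix is recovered from the variances via $\langle \delta\varepsilon_\ell \delta\varepsilon_m \rangle = \frac{1}{2} ( \langle \delta\varepsilon_\ell^2 \rangle + \langle \delta\varepsilon_m^2 \rangle - \langle \delta\varepsilon_{|\ell-m|}^2 \rangle )$, see Section~\ref{Th23-proof}, the master formula can be checked against the defining double sum. Below is a minimal numerical sketch (assuming the double-sum form of Definition~\ref{def-01}, cf. Eq.~(\ref{ps-def-tcue}), with $\Delta=1$ and the uncorrelated-spacings variances ${\rm var}[\varepsilon_\ell] = \ell\,\sigma^2$):

```python
import numpy as np

def S_direct(v, omega):
    # power spectrum as the double sum over the covariance matrix, with
    # covariances rebuilt from the stationarity relation:
    # cov(l, m) = (v[l] + v[m] - v[|l-m|]) / 2,  v[l] = var[eps_l], v[0] = 0
    N = len(v) - 1
    l = np.arange(1, N + 1)
    cov = 0.5 * (v[l][:, None] + v[l][None, :]
                 - v[np.abs(l[:, None] - l[None, :])])
    phase = np.exp(1j * omega * (l[:, None] - l[None, :]))
    return (cov * phase).sum().real / N        # Delta = 1

def S_master(v, omega):
    # first master formula, Eq. (smd-sum), with Delta = 1
    N = len(v) - 1
    l = np.arange(1, N + 1)
    z = np.exp(1j * omega)
    G = (v[l] * z**l).sum()         # sum_l var[eps_l] z^l
    zdG = (l * v[l] * z**l).sum()   # z d/dz of the same sum
    return (zdG - N * G - (1 - z**(-N)) / (1 - z) * G).real / N

# uncorrelated spacings with unit mean and sigma^2 = 0.35: var[eps_l] = l*sigma^2
v = np.concatenate(([0.0], 0.35 * np.arange(1, 41)))
for omega in (0.3, 1.1, 2.5):
    assert abs(S_direct(v, omega) - S_master(v, omega)) < 1e-8
```

The two representations agree to machine precision, as they must: once the stationarity relation holds, Eq.~(\ref{smd-sum}) is an algebraic identity.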
\noindent\newline\newline {\bf Second result.}---Yet another useful representation -- the second master formula -- expresses the power spectrum exactly in terms of a generating function of the probabilities $E_N(\ell;\varepsilon)$ to observe exactly $\ell$ eigenlevels {\it below} the energy $\varepsilon$, \begin{eqnarray}\fl \qquad \label{EML} E_N(\ell;\varepsilon) = \frac{N!}{\ell! (N-\ell)!}\left(\prod_{j=1}^\ell \int_{0}^{\varepsilon} d\epsilon_j\right) \left(\prod_{j=\ell+1}^N \int_{\varepsilon}^\infty d\epsilon_j\right) \, P_N(\epsilon_1,\dots,\epsilon_N). \end{eqnarray} Here, $P_N(\epsilon_1,\dots,\epsilon_N)$ is the joint probability density function (JPDF) of $N$ unordered eigenlevels taken from a positive definite spectrum; it is assumed to be symmetric under a permutation of its arguments. Such an alternative, albeit equivalent, representation of the power spectrum will be central to the spectral analysis of quantum chaotic systems. \begin{theorem}[Second master formula]\label{Th-2} Let $N \in \mathbb{N}$ and $0\le \omega \le \pi$, and let $\Phi_N(\varepsilon;\zeta)$ be the generating function \begin{eqnarray} \label{ps-gf} \Phi_N(\varepsilon;\zeta) = \sum_{\ell=0}^N (1-\zeta)^\ell E_N(\ell;\varepsilon) \end{eqnarray} of the probabilities defined in Eq.~(\ref{EML}).
The power spectrum, Definition~\ref{def-01}, for an eigenlevel sequence with stationary spacings equals \begin{eqnarray}\label{ps-2}\fl S_N(\omega) = \frac{2}{N \Delta^2} {\rm Re} \left( z \frac{\partial}{\partial z} - N - \frac{1-z^{-N}}{1-z}\right) \frac{z}{1-z} \int_0^\infty d\epsilon \,\epsilon \left[ \Phi_N(\epsilon;1-z) - z^N \right] - \tilde{S}_N(\omega),\nonumber\\ {} \end{eqnarray} where $z=1-\zeta = e^{i\omega}$, $\Delta$ is the mean level spacing, and \begin{eqnarray}\label{ps-tilde} \tilde{S}_N(\omega) = \frac{1}{N} {\rm Re} \left( z \frac{\partial}{\partial z} - N - \frac{1-z^{-N}}{1-z}\right) \sum_{\ell=1}^N \ell^2 z^\ell \nonumber \\ \qquad\qquad = \frac{1}{N} \left| \frac{1-(N+1)z^N + N z^{N+1}}{(1-z)^2} \right|^2. \end{eqnarray} \end{theorem} For the proof, the reader is referred to Section~\ref{Th-2-proof}. \begin{remark} Notably, representations Eqs.~(\ref{ps-gf}) and (\ref{ps-2}) suggest that the power spectrum is determined by {\it spectral correlation functions of all orders}. Contrary to the spacing distribution, which is essentially determined by the gap formation probability \cite{M-2004} $E_N(0;\varepsilon)$, the power spectrum depends on the {\it entire set} of probabilities $E_N(\ell;\varepsilon)$ with $\ell=0,1,\dots,N$. \hfill $\blacksquare$ \end{remark} \noindent\newline {\bf Third result.}---To study the power spectrum in quantum systems with broken time-reversal symmetry and completely chaotic classical dynamics, let us consider the {\it tuned circular unitary ensemble} (${\rm TCUE}_N$). 
Obtained from the traditional circular unitary ensemble ${\rm CUE}_{N+1}$ \cite{M-2004} by conditioning its lowest eigen-angle to stay at zero, the ${\rm TCUE}_N$ is defined by the joint probability density of $N$ eigen-angles $\{\theta_1,\dots,\theta_N\}$ of the form \begin{equation}\label{T-CUE} \fl \qquad P_N(\theta_1,\dots,\theta_N) = \frac{1}{(N+1)!} \prod_{1 \le i < j \le N}^{} \left| e^{i\theta_i} - e^{i\theta_j} \right|^2 \prod_{j=1}^{N} \left| 1 - e^{i\theta_j}\right|^2 \end{equation} whose normalization is fixed by \begin{eqnarray}\label{TCUE-norm} \prod_{j=1}^{N}\int_0^{2\pi} \frac{d\theta_j}{2\pi}\, P_{N}(\theta_1,\dots,\theta_N)=1. \end{eqnarray} Such a seemingly minor tuning of ${\rm CUE}_{N+1}$ to ${\rm TCUE}_N$ induces stationarity of level spacings in ${\rm TCUE}_N$ for any $N \in {\mathbb N}$, see Corollary \ref{corr-theta-k} for the proof. We note in passing that the traditional circular unitary ensemble lacks the stationarity property. For the ${\rm TCUE}_N$, the general Definition~\ref{def-01} of the power spectrum can be adapted as follows. \begin{definition} Let $\{\theta_1 \le \cdots \le \theta_N\}$ be fluctuating ordered eigen-angles drawn from the ${\rm TCUE}_N$, $N \in {\mathbb N}$, with the mean level spacing $\Delta$ and let $\langle \delta\theta_\ell \delta\theta_m \rangle$ be the covariance matrix of eigen-angle displacements $\delta\theta_\ell = \theta_\ell - \langle \theta_\ell\rangle$ from their mean $\langle \theta_\ell\rangle$. A Fourier transform of the covariance matrix \begin{eqnarray}\label{ps-def-tcue} S_N(\omega) = \frac{1}{N \Delta^2} \sum_{\ell=1}^N \sum_{m=1}^N \langle \delta\theta_\ell \delta\theta_m \rangle\, e^{i\omega (\ell-m)}, \quad \omega \in {\mathbb R} \end{eqnarray} is called the power spectrum of the ${\rm TCUE}_N$. Here, the angular brackets denote average with respect to the JPDF Eq.~(\ref{T-CUE}).
\hfill $\blacksquare$ \end{definition} \begin{theorem}[Power spectrum in ${\rm TCUE}_N$]\label{Th-3} Let $\{\theta_1 \le \cdots \le \theta_N\}$ be fluctuating ordered eigen-angles drawn from the ${\rm TCUE}_N$. Then, for any $N \in {\mathbb N}$ and all $0 \le \omega \le \pi$, the power spectrum admits exact representation \begin{eqnarray} \fl \label{ps-tcue-1} S_N(\omega) = \frac{(N+1)^2}{\pi N} {\rm Re} \left( z \frac{\partial}{\partial z} - N - \frac{1-z^{-N}}{1-z}\right) \frac{z}{1-z} \int_0^{2\pi} \frac{d\varphi}{2\pi} \,\varphi \, \Phi_N(\varphi;1-z) - \dbtilde{S}_N(\omega),\nonumber\\ {} \end{eqnarray} where \begin{eqnarray} \label{ps-tcue-3} \dbtilde{S}_N(\omega) = \frac{1}{N} \left| \frac{1-(N+1)z^N + N z^{N+1}}{(1-z)^2} \right|^2 - \frac{(N+1)^2}{N} \frac{1}{|1-z|^2} \end{eqnarray} and \begin{equation} \label{phin} \Phi_N(\varphi;\zeta) = \exp \left( -\int_{\cot(\varphi/2)}^{\infty} \frac{dt}{1+t^2} \left( \tilde{\sigma}_N(t;\zeta) + t \right) \right). \end{equation} Here, $z=1-\zeta=e^{i\omega}$ whilst the function $\tilde{\sigma}_N(t;\zeta)$ is a solution to the $\sigma$-Painlev\'e VI equation \begin{eqnarray} \label{pvi} \fl \qquad \left( (1+t^2)\,\tilde{\sigma}_N^{\prime\prime} \right)^2 + 4 \tilde{\sigma}_N^\prime (\tilde{\sigma}_N - t \tilde{\sigma}_N^\prime)^2 + 4 (\tilde{\sigma}_N^\prime+1)^2 \left( \tilde{\sigma}_N^\prime + (N+1)^2 \right) = 0 \end{eqnarray} satisfying the boundary condition \begin{eqnarray} \label{pvi-bc} \tilde{\sigma}_N(t;\zeta) = -t + \frac{N(N+1)(N+2)}{3\pi t^2} \zeta + {\mathcal O}(t^{-4}) \end{eqnarray} as $t\rightarrow \infty$. \end{theorem} For the proof of Theorem \ref{Th-3}, the reader is referred to Section~\ref{Th-3-proof}. \begin{remark} Theorem \ref{Th-3} provides an exact RMT solution for the power spectrum in the ${\rm TCUE}_N$. 
Alternatively, but equivalently, the finite-$N$ power spectrum can be expressed in terms of a Fredholm determinant (Section~\ref{Fredholm-sec}), a Toeplitz determinant (Section~\ref{Toeplitz-sec}) and discrete Painlev\'e~V (${\rm dP_V}$) equations (Appendix~\ref{B-1}). While the Toeplitz representation is beneficial for performing a large-$N$ analysis of the power spectrum, the ${\rm dP_V}$ formulation is particularly useful for efficient numerical evaluation of the power spectrum for relatively large values of $N$. \hfill $\blacksquare$ \end{remark} \noindent\newline {\bf Fourth (main) result.}---The most remarkable feature of random matrix theory is its ability to predict universal statistical behavior of quantum systems. In this context, the large-$N$ limit of the power spectrum in the ${\rm TCUE}_N$ is expected to furnish a universal, parameter-free law, $S_\infty(\omega) = \lim_{N\rightarrow \infty} S_N(\omega)$, for the power spectrum. Its functional form is given in Theorem~\ref{Th-4} below.
\begin{theorem}[Universal law]\label{Th-4} For $0 < \omega < \pi$, the limit $S_\infty(\omega) = \lim_{N\rightarrow \infty} S_N(\omega)$ exists and equals \begin{eqnarray} \label{PS-exact} \fl S_\infty(\omega) = {\mathcal A}(\tilde{\omega}) \Bigg\{{\rm Im} \int_{0}^{\infty} \frac{d\lambda}{2\pi} \, \lambda^{1-2\tilde{\omega}^2} \, e^{i\tilde{\omega} \lambda} \nonumber\\ \times \left[ \exp \left( - \int_{\lambda}^{\infty} \frac{dt}{t} \left( \sigma_1(t;\tilde\omega) - i \tilde{\omega} t + 2\tilde{\omega}^2\right) \right) -1 \right] + {\mathcal B}(\tilde{\omega})\Bigg\}, \nonumber\\ {} \end{eqnarray} where $\tilde{\omega} = \omega/2\pi$ is a rescaled frequency, and the functions ${\mathcal A}(\tilde{\omega})$ and ${\mathcal B}(\tilde{\omega})$ are defined as \begin{eqnarray} \label{Aw-def} {\mathcal A}(\tilde{\omega}) = \frac{1}{2\pi} \frac{\prod_{j=1}^2 G(j+\tilde{\omega}) G(j-\tilde{\omega})}{\sin(\pi \tilde{\omega})}, \\ \label{Bw-def} {\mathcal B}(\tilde{\omega}) = \frac{1}{2\pi} \sin(\pi \tilde{\omega}^2)\, \tilde{\omega}^{2\tilde{\omega}^2-2}\, \Gamma(2-2\tilde{\omega}^2). \end{eqnarray} Here, $G$ is the Barnes' $G$-function, $\Gamma$ is the Gamma function, whilst $\sigma_1(t;\tilde\omega)= \sigma_1(t)$ is the Painlev\'e V transcendent satisfying Eq.~(\ref{PV-family}) with $\nu=1$ and fulfilling the boundary conditions \begin{eqnarray}\label{bc-s1-infty} \sigma_1(t)=i \tilde{\omega} t-2\tilde{\omega}^2 - \frac{i t \gamma(t)}{1+\gamma(t)}+\mathcal{O}(t^{-1+2\tilde{\omega}}), && t\rightarrow\infty, \\ \label{bc-s1-zero} \sigma_1(t) =\mathcal{O}(t \ln t), && t\rightarrow 0, \end{eqnarray} with $\gamma(t)$ being defined by Eq.~(\ref{eq:gamma}). \end{theorem} \begin{remark} As a by-product of this Theorem, we have formulated a conjecture for a double integral identity involving a fifth Painlev\'e transcendent. A mathematically-oriented reader is referred to Conjecture~\ref{conj}. 
\hfill $\blacksquare$ \end{remark} \begin{theorem}[Small-$\omega$ expansion]\label{Th-5} In the notation of Theorem \ref{Th-4}, the following expansion holds as $\omega\rightarrow 0$: \begin{eqnarray} \label{S-res-0} S_\infty (\omega) = \frac{1}{4\pi^2 \tilde\omega} + \frac{1}{2\pi^2} \tilde\omega \ln \tilde\omega +\frac{\tilde\omega}{12} + {\mathcal O}(\tilde\omega^2). \end{eqnarray} \end{theorem} \begin{figure} \includegraphics[width=\textwidth]{Fig4.eps} \caption{A graph for the power spectrum as a function of frequency. Red line corresponds to the power spectrum calculated through the ${\rm dP_V}$ representation (Appendix~\ref{B-1}) of the exact Painlev\'e VI solution for $N=10^4$, see Theorem \ref{Th-3}. Blue crosses correspond to the power spectrum calculated for sequences of $256$ unfolded ${\rm CUE}$ eigen-angles averaged over $10^7$ realizations. Inset: a log-log plot for the same graphs. \label{Figure_ps_overall}} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{Fig5.eps} \caption{Difference between the power spectrum and its singular part $1/(2\pi\omega)$ as described by Eq.~(\ref{BM}) at $\beta=2$ (see also the first term in Eq.~(\ref{S-res-0})). The singular part of the power spectrum corresponds to $\delta S_\infty(\omega)=0$ as represented by a gray dotted line. Red solid line: analytical prediction computed as explained in Fig.~\ref{Figure_ps_overall}. Blue crosses: simulation for $4 \times 10^8$ sequences of $512$ unfolded CUE eigenvalues. Inset: magnified portion of the same graph for $0\le \omega \le \pi/4$; additional black dashed line displays the difference $\delta S_\infty(\omega)$ calculated using the small-$\omega$ expansion Eq.~(\ref{S-res-0}). \label{Figure_ps_diff}} \end{figure} For the proof of Theorems \ref{Th-4} and \ref{Th-5}, the reader is referred to Section~\ref{T-section}.
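As an elementary consistency check linking Theorem \ref{Th-5} to Fig.~\ref{Figure_ps_diff}, the leading term of the expansion Eq.~(\ref{S-res-0}), rewritten in the variable $\omega = 2\pi\tilde\omega$, reproduces the singular part $1/(2\pi\omega)$ subtracted in that figure (a one-line computer-algebra sketch):

```python
import sympy as sp

omega = sp.symbols('omega', positive=True)
omega_tilde = omega / (2 * sp.pi)            # rescaled frequency of Theorem Th-4
leading = 1 / (4 * sp.pi**2 * omega_tilde)   # first term of Eq. (S-res-0)
assert sp.simplify(leading - 1 / (2 * sp.pi * omega)) == 0
```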
\subsection{Discussion} In Figs.~\ref{Figure_ps_overall} and \ref{Figure_ps_diff}, the parameter-free prediction Eq.~(\ref{PS-exact}) for the power spectrum is confronted with the results of numerical simulations for the large-$N$ circular unitary ensemble ${\rm CUE}_N$. Two remarks are in order. (i) First, the limiting curve for $S_\infty(\omega)$ was approximated by the exact Painlev\'e VI solution computed for sufficiently large $N$ through its ${\rm dP_V}$ representation worked out in detail in Appendix~\ref{B-1}. We have verified, by performing numerics for various values of $N$, that the convergence of the ${\rm dP_V}$ representation of $S_N(\omega)$ to $S_\infty(\omega)$ is very fast, so that the $N=10^4$ curve provides an excellent approximation to the universal law for $S_\infty(\omega)$. A good match between the $N=10^4$ curve and the one plotted for the small-$\omega$ expansion Eq.~(\ref{S-res-0}) of $S_\infty(\omega)$ (see inset in Fig.~\ref{Figure_ps_diff}) lends independent support to the validity of our numerical procedure. (ii) Second, even though the theoretical results used for comparison refer to the ${\rm TCUE}_N$ -- rather than the ${\rm CUE}_N$ -- ensemble (which differ from each other by the weight function and the way the two are intrinsically unfolded \footnote{The spectra in ${\rm CUE}_N$ and ${\rm TCUE}_N$ ensembles are {\it intrinsically} unfolded for any $N\in{\mathbb N}$, albeit each in its own way. Indeed, in the ${\rm CUE}_N$ the {\it mean density} is a constant \cite{M-2004,PF-book}, while in the ${\rm TCUE}_N$ the {\it mean level spacing} is a constant, see Corollary \ref{corr-theta-k}. In the limit $N\rightarrow \infty$, the two types of unfolding are expected to become equivalent.}), the agreement between the ${\rm TCUE}_N$ theory and the ${\rm CUE}_N$ numerics is nearly perfect, which can naturally be attributed to the universality phenomenon emerging as $N\rightarrow \infty$.
The universal formula for $S_\infty(\omega)$, stated in Theorem \ref{Th-4}, is the {\it central result} of the paper. We expect it to hold {\it universally} for a wide class of random matrix models belonging to Dyson's $\beta=2$ symmetry class, as the matrix dimension $N \rightarrow \infty$. Expressed in terms of a fifth Painlev\'e transcendent, the universal law Eq.~(\ref{PS-exact}) can be viewed as a power spectrum analog of the Gaudin-Mehta formula Eq.~(\ref{LSD-PV}) for the level spacing distribution. Apart from establishing an explicit form of the universal random-matrix-theory law for $S_\infty(\omega)$, our theory reveals two important general aspects of the power spectrum which hold irrespective of a particular model of eigenlevel sequences: (i) similarly to the level spacing distribution, the power spectrum is determined by spectral correlations of {\it all orders}; (ii) in distinction to the level spacing distribution, which can be expressed solely in terms of the gap formation probability, the power spectrum receives contributions from the {\it entire set} of probabilities that a spectral interval of a given length contains exactly $\ell$ eigenvalues with $\ell \ge 0$. As such, it provides a complementary statistical description of spectral fluctuations in stochastic spectra of various origins. Considered through the prism of the Bohigas-Giannoni-Schmit conjecture, the universal law Eq.~(\ref{PS-exact}) should hold for a variety of quantum systems with completely chaotic classical dynamics and broken time-reversal symmetry at not too low frequencies $T_*/T_H \lesssim \omega \le \pi$, when ergodicity and global symmetries -- rather than system specific features -- are responsible for shaping the system's dynamics. Potential applicability of our results to the non-trivial zeros of the Riemann zeta function deserves special mention.
Indeed, according to the Montgomery-Odlyzko law (see, e.g., Ref.~\cite{KS-1999}), the zeros of the Riemann zeta function located high enough along the critical line are expected to follow statistical properties of the eigenvalues of large $U(N)$ matrices. This suggests that the universal law Eq.~(\ref{PS-exact}) could be detected ``experimentally''. Extensive, high-precision data accumulated by A.~M.~Odlyzko for billions of Riemann zeros \cite{O-2001} provide a unique opportunity for a meticulous test of the new universal law. \section{Power spectrum for eigenlevel sequences with stationary spacings} \label{station} In this Section, we provide proofs of the two master formulae given by Theorem \ref{PS-stationary-main} and Theorem \ref{Th-2}. \subsection{Stationary spectra} \label{stat-spec} In view of Definition \ref{def-stationary}, we first establish a necessary and sufficient condition for eigenlevel sequences to possess stationarity of level spacings. \begin{lemma} \label{Lemma-stat} For $N \in {\mathbb N}$, let $\{0\le \varepsilon_1 \le \cdots \le \varepsilon_N\}$ be an ordered sequence of unfolded eigenlevels such that $\langle \varepsilon_1\rangle =\Delta$. The associated sequence of spacings between consecutive eigenlevels is stationary if and only if \begin{eqnarray}\label{L-1} \langle (\varepsilon_\ell - \varepsilon_m)^q\rangle = \langle \varepsilon_{\ell-m}^q \rangle \end{eqnarray} for $\ell>m$ and both $q=1$ and $q=2$. \end{lemma} \noindent \begin{proof} The equivalence of Eq.~(\ref{skd}) to Eq.~(\ref{L-1}) at $q=1$ is self-evident. To prove the equivalence of Eq.~(\ref{toep}) to Eq.~(\ref{L-1}) at $q=2$, we proceed in two steps. First, let the covariance matrix of level spacings be of the form Eq.~(\ref{toep}).
Substituting Eq.~(\ref{ELSJ}) into the l.h.s.~of Eq.~(\ref{L-1}) taken at $q=2$, and making use of Eq.~(\ref{toep}) twice, \begin{eqnarray} \langle (\varepsilon_\ell - \varepsilon_m)^2\rangle &=& \sum_{i=m+1}^\ell \sum_{j=m+1}^\ell \langle s_i s_j\rangle = \sum_{i=m+1}^\ell \sum_{j=m+1}^\ell I_{|i-j|} \nonumber\\ &=& \sum_{i^\prime=1}^{\ell-m} \sum_{j^\prime=1}^{\ell-m} I_{|i^\prime-j^\prime|} = \sum_{i^\prime=1}^{\ell-m} \sum_{j^\prime=1}^{\ell-m} \langle s_{i^\prime} s_{j^\prime}\rangle = \langle \varepsilon_{\ell-m}^2\rangle, \nonumber \end{eqnarray} we derive the r.h.s.~of Eq.~(\ref{L-1}) with $q=2$. Second, let the ordered eigenvalues satisfy Eq.~(\ref{L-1}) at $q=2$. Substituting $s_{\ell(m)} = \varepsilon_{\ell(m)} - \varepsilon_{\ell(m)-1}$ into the definition of the covariance matrix ${\rm cov}(s_\ell,s_m)$ of level spacings and making use of Eq.~(\ref{L-1}), we observe that Eq.~(\ref{toep}) indeed holds with $I_{|\ell-m|}$ of the form \begin{eqnarray} I_{|\ell-m|} = \frac{1}{2}\langle \varepsilon_{|\ell-m|+1}^2\rangle + \frac{1}{2}\langle \varepsilon_{|\ell-m|-1}^2\rangle - \langle \varepsilon_{|\ell-m|}^2\rangle. \end{eqnarray} \end{proof} \subsection{Proof of Theorem \ref{PS-stationary-main}} \label{Th23-proof} Eq.~(\ref{L-1}) of Lemma \ref{Lemma-stat} can be rewritten in the form \begin{eqnarray} \label{delta-2} \langle \delta\varepsilon_\ell \delta\varepsilon_m\rangle = \frac{1}{2} \left( \langle \delta\varepsilon_\ell^2 \rangle + \langle \delta\varepsilon_m^2 \rangle - \langle \delta\varepsilon_{|\ell-m|}^2 \rangle \right), \end{eqnarray} where $\delta\varepsilon_\ell = \varepsilon_\ell -\ell \Delta$. Substituting Eq.~(\ref{delta-2}) into the definition Eq.~(\ref{ps-def}) and reducing the number of summations therein, we derive Eq.~(\ref{smd-sum}).
\hfill $\square$ \begin{remark} For discrete frequencies $\omega_k = 2\pi k/N$, the power spectrum representation Eq.~(\ref{smd-sum}) simplifies to \begin{eqnarray}\label{smd-sum-k} S_N(\omega_k) = \frac{1}{N \Delta^2} {\rm Re} \left( z_k \frac{\partial}{\partial z_k} - N \right) \sum_{\ell=1}^N {\rm var}[\varepsilon_\ell]\, z_k^\ell. \end{eqnarray} Here, $z_k = e^{i\omega_k}$ and the derivative with respect to $z_k$ should be taken as if $z_k$ were a continuous variable. \hfill $\blacksquare$ \end{remark} \subsection{Proof of Theorem \ref{Th-2}} \label{Th-2-proof} To prove Theorem \ref{Th-2}, we need the following Lemma: \begin{lemma} \label{Lemma-probs} For $N \in {\mathbb N}$, let $\{ \varepsilon_1 \le \cdots \le \varepsilon_N\}$ be an ordered sequence of eigenlevels supported on the half axis $(0,\infty)$, and let $E_N(\ell;\varepsilon)$ be the probability to find exactly $\ell$ eigenvalues below the energy $\varepsilon$, given by Eq.~(\ref{EML}), with $\ell=0,1,\dots,N$. The following relation holds: \begin{eqnarray}\label{Lemma-probs-1} \frac{d}{d\varepsilon} E_N(\ell;\varepsilon) = p_\ell(\varepsilon) - p_{\ell+1}(\varepsilon). \end{eqnarray} Here, $p_\ell(\varepsilon)$ is the probability density of the $\ell$-th ordered eigenlevel, where $p_0(\varepsilon) = p_{N+1}(\varepsilon) =0$ for $\varepsilon>0$. Equivalently, \begin{eqnarray}\label{Lemma-probs-2} p_\ell (\varepsilon) = - \sum_{j=0}^{\ell-1} \frac{d}{d\varepsilon} E_N(j;\varepsilon), \quad \ell=1,\dots,N. \end{eqnarray} \end{lemma} \noindent \begin{proof} Differentiating Eq.~(\ref{EML}) and keeping in mind that the probability density of the $\ell$-th ordered eigenvalue equals \begin{eqnarray}\fl \qquad \label{p-L} p_\ell(\varepsilon) = \frac{N!}{(\ell-1)!
(N-\ell)!}\left(\prod_{j=1}^{\ell-1} \int_{0}^{\varepsilon} d\epsilon_j\right) \left(\prod_{j=\ell+1}^N \int_{\varepsilon}^\infty d\epsilon_j\right)\nonumber\\ \qquad \qquad \qquad \qquad \times \, P_N(\epsilon_1,\dots,\epsilon_{\ell-1},\varepsilon, \epsilon_{\ell+1},\dots,\epsilon_N), \end{eqnarray} we derive Eqs.~(\ref{Lemma-probs-1}) and (\ref{Lemma-probs-2}). \end{proof} \noindent\newline {\bf Proof of Theorem \ref{Th-2}.}---Equipped with Lemma \ref{Lemma-probs}, we are ready to prove Theorem \ref{Th-2}. First, we observe that Eqs.~(\ref{ps-gf}) and (\ref{Lemma-probs-2}) induce the relation \begin{eqnarray} \sum_{\ell=1}^N z^\ell p_\ell(\varepsilon) = -\frac{z}{1-z} \frac{d}{d\varepsilon} \left[ \Phi_N(\varepsilon;1-z) - z^N \right]. \end{eqnarray} Second, we split the variance in Eq.~(\ref{smd-sum}) as ${\rm var}[\varepsilon_\ell]= \langle \varepsilon_\ell^2 \rangle - \ell^2 \Delta^2$. The latter term produces the contribution ${\tilde S}_N(\omega)$ in Eq.~(\ref{ps-2}) while the former brings \begin{eqnarray} \label{aux-01} \sum_{\ell=1}^N \langle \varepsilon_\ell^2 \rangle z^\ell &=& - \frac{z}{1-z} \int_{0}^{\infty} d\epsilon\, \epsilon^2 \, \frac{d}{d\varepsilon} \left[ \Phi_N(\varepsilon;1-z) - z^N \right] \nonumber\\ &=& \frac{2z}{1-z} \int_{0}^{\infty} d\epsilon\, \epsilon \, \left[ \Phi_N(\varepsilon;1-z) - z^N \right]. \end{eqnarray} Integration by parts performed in the last line is justified provided the average number of eigenlevels ${\mathcal N}_N(\varepsilon)$ in the tail region $(\varepsilon,\infty)$ exhibits a sufficiently fast decay, ${\mathcal N}_N (\varepsilon) \sim \varepsilon^{-(2+\delta)}$ with $\delta>0$, as $\varepsilon \rightarrow \infty$ \footnote{Indeed, Eqs. (\ref{EML}) and (\ref{ps-gf}) imply an integral representation $$ \Phi_N(\varepsilon;1-z) = \prod_{\ell=1}^{N} \left( z \int_{0}^{\infty} d\epsilon_\ell + (1-z) \int_{\varepsilon}^{\infty} d\epsilon_\ell \right) \, P_N (\epsilon_1,\dots,\epsilon_N).
$$ Letting $\varepsilon\rightarrow\infty$, we generate a large-$\varepsilon$ expansion of the form $$ \Phi_N(\varepsilon;1-z) = z^N + \sum_{\ell=1}^N z^{N-\ell} (1-z)^\ell \left( \prod_{j=1}^{\ell} \int_{\varepsilon}^{\infty} d\epsilon_j\right) R_{\ell,N} (\epsilon_1,\dots,\epsilon_\ell), $$ where $$ R_{\ell,N} (\epsilon_1,\dots,\epsilon_\ell) = \frac{N!}{(N-\ell)!} \left( \prod_{j=\ell+1}^N \int_{0}^{\infty} d\epsilon_j \right) P_N(\epsilon_1,\dots,\epsilon_N) $$ is the $\ell$-point spectral correlation function. To first order, the expansion yields $\Phi_N(\varepsilon;1-z) = z^N + z^{N-1}(1-z) {\mathcal N}_N (\varepsilon) + \dots$, where ${\mathcal N}_N (\varepsilon)$ is the mean spectral density $R_{1,N}(\epsilon)$ integrated over the interval $(\varepsilon,\infty)$. Hence, the required decay of ${\mathcal N}_N (\varepsilon)$ at infinity readily follows.}. Substituting Eq.~(\ref{aux-01}) into Eq.~(\ref{smd-sum}), we derive the first term in Eq.~(\ref{ps-2}). This ends the proof of Theorem \ref{Th-2}. \hfill $\square$ \begin{remark} For discrete frequencies $\omega_k = 2\pi k/N$ the power spectrum representation Eq.~(\ref{ps-2}) simplifies to \begin{eqnarray}\label{ps-gf-simple}\fl \quad S_N(\omega_k) = \frac{2}{N \Delta^2} {\rm Re} \left( z_k \frac{\partial}{\partial z_k} - N\right) \frac{z_k}{1-z_k} \int_0^\infty d\epsilon \,\epsilon \left[ \Phi_N(\epsilon;1-z_k) - 1 \right] - \frac{N}{|1-z_k|^2}.\nonumber\\ {} \end{eqnarray} Here, $z_k = e^{i\omega_k}$ and the derivative with respect to $z_k$ should be taken as if $z_k$ were a continuous variable. \hfill $\blacksquare$ \end{remark} \section{Power spectrum in the tuned circular unitary ensemble} \label{PS-RMT-exact} In this Section, the general framework developed in Section~\ref{station} and summarized in Theorem \ref{Th-2} will be utilized to determine the power spectrum in the tuned circular ensemble of random matrices, ${\rm TCUE}_N$, for any $N\in {\mathbb N}$.
For the definition of ${\rm TCUE}_N$, the reader is referred to Eqs.~(\ref{T-CUE}) and (\ref{TCUE-norm}). \subsection{Correlations between ordered eigen-angles in ${\rm TCUE}_N$} The main objective of this subsection is to establish stationarity of spacings between ordered ${\rm TCUE}_N$ eigen-angles. To this end, we prove Lemma \ref{Lemma-circular} and Lemma \ref{Lemma-stat-TCUE}. The sought stationarity will then be established in Corollary \ref{corr-theta-k}. \begin{lemma}[Circular symmetry] \label{Lemma-circular} For $q=0, 1, \dots$ and $\ell=1,2,\dots, N$ it holds that \begin{eqnarray}\label{LL-1} \langle \theta_\ell^q\rangle = \langle (2\pi - \theta_{N-\ell +1})^q \rangle. \end{eqnarray} \end{lemma} \begin{proof} The proof is based on the circular-symmetry identity \begin{eqnarray} p_\ell(\varphi) = p_{N-\ell+1}(2\pi - \varphi) \end{eqnarray} between the probability density functions of $\ell$-th and $(N-\ell+1)$-th ordered eigenangles in the ${\rm TCUE}_N$. This relation can formally be derived from the representation \begin{eqnarray} \fl \label{pk-tcue} p_\ell(\varphi) = \frac{1}{(N+1)!} \frac{N!}{(\ell-1)! (N-\ell)!} \left| 1- e^{i\varphi}\right|^2 \nonumber\\ \times \left(\prod_{j=1}^{\ell-1} \int_{0}^{\varphi} \frac{d\theta_j}{2\pi}\right) \left(\prod_{j=\ell}^{N-1} \int_{\varphi}^{2\pi} \frac{d\theta_j}{2\pi}\right) \nonumber \\ \times \prod_{1 \le i < j \le N-1}^{} \left| e^{i\theta_i} - e^{i\theta_j} \right|^2 \prod_{j=1}^{N-1} \left| e^{i\varphi} - e^{i\theta_j}\right|^2 \left| 1 - e^{i\theta_j}\right|^2. \end{eqnarray} Indeed, Eq.~(\ref{pk-tcue}) yields \begin{eqnarray} \label{pk-tcue-mirror} p_{N-\ell+1}(2\pi - \varphi) = \frac{1}{(N+1)!} \frac{N!}{(\ell-1)! 
(N-\ell)!} \left| 1- e^{i(2\pi -\varphi)}\right|^2 \nonumber\\ \times \left(\prod_{j=1}^{N-\ell} \int_{0}^{2\pi - \varphi} \frac{d\theta_j}{2\pi}\right) \left(\prod_{j=N-\ell+1}^{N-1} \int_{2\pi-\varphi}^{2\pi} \frac{d\theta_j}{2\pi}\right) \nonumber \\ \times \prod_{1 \le i < j \le N-1}^{} \left| e^{i\theta_i} - e^{i\theta_j} \right|^2 \prod_{j=1}^{N-1} \left| e^{i(2\pi - \varphi)} - e^{i\theta_j}\right|^2 \left| 1 - e^{i\theta_j}\right|^2. \end{eqnarray} The change of variables $\theta_j = 2\pi - \theta_j^\prime$ reduces the r.h.s.~of Eq.~(\ref{pk-tcue-mirror}) to Eq.~(\ref{pk-tcue}). Consequently, \begin{eqnarray} \langle \theta_\ell^q\rangle &=& \int_0^{2\pi} \frac{d\varphi}{2\pi} \, \varphi^q p_\ell(\varphi) = \int_0^{2\pi} \frac{d\varphi}{2\pi} \, \varphi^q p_{N-\ell+1}(2\pi-\varphi) \nonumber\\ &=& \int_0^{2\pi} \frac{d\varphi^\prime}{2\pi} \, (2\pi - \varphi^\prime)^q p_{N-\ell+1}(\varphi^\prime) = \langle (2\pi - \theta_{N-\ell+1})^q\rangle. \end{eqnarray} \end{proof} \begin{lemma}[Translational invariance in the index space] \label{Lemma-stat-TCUE} For $q=0, 1, \dots$ and $1 \le m < \ell \le N$ it holds that \begin{eqnarray}\label{LL-2} \langle (\theta_\ell - \theta_m) ^q\rangle = \langle \theta_{\ell-m}^q \rangle. \end{eqnarray} \end{lemma} \begin{proof} It is advantageous to start with the JPDF of {\it ordered} eigenangles in the ${\rm TCUE}_N$, \begin{eqnarray}\fl \label{tcue-ord} \qquad P_N^{\rm{(ord)}}(\theta_1,\dots,\theta_N) = N! \, P_N(\theta_1,\dots,\theta_N) \, \mathds{1}_{0 \le \theta_1 \le \cdots \le \theta_N \le 2\pi} \nonumber\\ = \frac{1}{N+1} \prod_{1 \le i < j \le N}^{} \left| e^{i\theta_i} - e^{i\theta_j} \right|^2 \prod_{j=1}^{N} \left| 1 - e^{i\theta_j}\right|^2 \, \mathds{1}_{0 \le \theta_1 \le \cdots \le \theta_N \le 2\pi}, \end{eqnarray} where we have used the notation $$ \mathds{1}_{0\le\theta_1\le\dots\le\theta_N\le 2\pi}=\prod_{1\le i<j\le N}\Theta(\theta_j-\theta_i) $$ with $\Theta$ being the Heaviside step function. 
Given Eq.~(\ref{tcue-ord}), the $q$-th moment of the difference $\theta_\ell-\theta_m$ equals \begin{eqnarray} \langle (\theta_\ell -\theta_m)^q \rangle &=& \int_0^{2\pi} \frac{d\theta_1}{2\pi} \cdots \int_0^{2\pi} \frac{d\theta_m}{2\pi} \cdots \int_0^{2\pi} \frac{d\theta_\ell}{2\pi} \cdots \int_0^{2\pi} \frac{d\theta_N}{2\pi} \nonumber\\ &\times& \, (\theta_\ell - \theta_m)^q\, P_N^{\rm{(ord)}}(\theta_1,\dots,\theta_N). \end{eqnarray} Changing the integration variables $(\theta_1,\dots,\theta_N) \rightarrow (\theta_1^\prime,\dots,\theta_N^\prime)$ according to the map \begin{eqnarray}\label{theta-map} \left\{ \begin{array}{ll} \theta_{\ell-r}^\prime = \theta_\ell-\theta_r, & \hbox{$r=1,\dots,\ell-1$;} \\ \theta_r^\prime = \theta_r, & \hbox{$r=\ell$;} \\ \theta_{N+1+\ell-r}^\prime = 2\pi +\theta_\ell-\theta_r, & \hbox{$r=\ell+1,\dots,N$,} \end{array} \right. \end{eqnarray} and observing that both the probability density function $P_N^{\rm{(ord)}}$ and the integration domain stay invariant under the map Eq.~(\ref{theta-map}), \begin{eqnarray} P_N^{\rm{(ord)}}(\theta_1^\prime,\dots,\theta_N^\prime) &=& P_N^{\rm{(ord)}}(\theta_1,\dots,\theta_N),\\ \mathds{1}_{0 \le \theta_1 \le \cdots \le \theta_N \le 2\pi} &\rightarrow& \mathds{1}_{0 \le \theta_1^\prime \le \cdots \le \theta_N^\prime \le 2\pi}, \end{eqnarray} we conclude that \begin{eqnarray} \fl \langle (\theta_\ell -\theta_m)^q \rangle = \int_0^{2\pi} \frac{d\theta_1^\prime}{2\pi} \cdots \int_0^{2\pi} \frac{d\theta_N^\prime}{2\pi} \, (\theta_{\ell-m}^\prime)^q\, P_N^{\rm{(ord)}}(\theta_1^\prime,\dots,\theta_N^\prime) = \langle \theta_{\ell-m}^q \rangle. 
\end{eqnarray} \end{proof} \begin{corollary}\label{corr-theta-k} A sequence of spacings between consecutive eigenangles in ${\rm TCUE}_N$ is stationary; in particular, the mean position of the $\ell$-th ordered eigen-angle equals \begin{eqnarray} \langle \theta_\ell \rangle = \ell \Delta, \end{eqnarray} where $\ell=1,\dots,N$ and \begin{eqnarray} \Delta = \frac{2\pi}{N+1} \end{eqnarray} is the mean spacing. \end{corollary} \begin{proof} Indeed, combining Lemma \ref{Lemma-circular} taken at $q=1$ and Lemma \ref{Lemma-stat-TCUE} taken at $q=1$ and $m=\ell-1$, one concludes that the mean spacing $$ \Delta = \langle \theta_\ell - \theta_{\ell-1}\rangle = \frac{2\pi}{N+1} $$ is constant everywhere in the eigenspectrum. Now we apply Lemma \ref{Lemma-stat} and Lemma \ref{Lemma-stat-TCUE} to complete the proof.\footnote{Notice that due to the formal convention $p_0(\varphi) =0$ stated below Eq.~(\ref{Lemma-probs-1}), one has to set $\langle \theta_0\rangle = 0$ if required.} \end{proof} \subsection{Proof of Theorem \ref{Th-3}} \label{Th-3-proof} The stationarity of level spacings in the ${\rm TCUE}_N$, established in Corollary \ref{corr-theta-k}, allows us to use a `compactified' version of Theorem \ref{Th-2} in order to claim the representation stated by Eqs.~(\ref{ps-tcue-1}) and (\ref{ps-tcue-3}), where \begin{eqnarray} \label{ps-tcue-2} \Phi_N(\varphi;\zeta) = \sum_{\ell=0}^N (1-\zeta)^\ell E_N(\ell;\varphi) \end{eqnarray} is the generating function of the probabilities \begin{eqnarray} \fl \label{EN-TCUE} E_N(\ell;\varphi) = \frac{N!}{\ell! (N-\ell)!}\left(\prod_{j=1}^\ell \int_{0}^{\varphi} \frac{d\theta_j}{2\pi}\right) \left(\prod_{j=\ell+1}^N \int_{\varphi}^{2\pi} \frac{d\theta_j}{2\pi}\right) \, P_N(\theta_1,\dots,\theta_N) \end{eqnarray} to find exactly $\ell$ eigen-angles in the interval $(0,\varphi)$ of the ${\rm TCUE}_N$ spectrum. The JPDF $P_N(\theta_1,\dots,\theta_N)$ is defined in Eq.~(\ref{T-CUE}).
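The stationarity statements above admit a direct numerical test. Since the ${\rm TCUE}_N$ JPDF, Eq.~(\ref{T-CUE}), is that of the ${\rm CUE}_{N+1}$ with one eigen-angle conditioned to sit at zero, a ${\rm TCUE}_N$ configuration can be generated by drawing a Haar-distributed unitary matrix and, by rotation invariance, rotating a uniformly chosen eigenvalue to zero. The following is a minimal Monte-Carlo sketch (assuming Python with numpy; the sampler and sample sizes are illustrative choices, not part of the theory):

```python
import numpy as np

def haar_unitary(n, rng):
    # Haar-distributed U(n): QR of a complex Ginibre matrix, with the
    # phases of R's diagonal absorbed into Q (Mezzadri's recipe)
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

def tcue_sample(N, rng):
    # One TCUE_N configuration: draw CUE_{N+1} and rotate a uniformly
    # chosen eigenvalue to zero; by rotation invariance the remaining
    # N angles follow the TCUE_N law, Eq. (T-CUE)
    ang = np.angle(np.linalg.eigvals(haar_unitary(N + 1, rng)))
    pick = rng.integers(N + 1)
    rel = np.mod(np.delete(ang, pick) - ang[pick], 2.0 * np.pi)
    return np.sort(rel)

rng = np.random.default_rng(42)
N, M = 4, 4000
s = np.array([tcue_sample(N, rng) for _ in range(M)])
print(np.round(s.mean(axis=0), 3))  # compare with ell*2*pi/5 = 1.257, 2.513, 3.770, 5.027
```

The sample means of the ordered eigen-angles approach $\ell\Delta$ with $\Delta=2\pi/(N+1)$, as in Corollary \ref{corr-theta-k}, and the second moments obey the index-space translational invariance of Lemma \ref{Lemma-stat-TCUE}.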
Substituting Eqs.~(\ref{EN-TCUE}) and (\ref{T-CUE}) into Eq.~(\ref{ps-tcue-2}), one derives a multidimensional-integral representation of the generating function $\Phi_N(\varphi;\zeta)$ in the form \begin{eqnarray} \fl \Phi_N(\varphi;\zeta) = \frac{1}{(N+1)!} \prod_{j=1}^N \left( \int_0^{2\pi} - \zeta \int_0^\varphi \right) \frac{d\theta_j}{2\pi} \prod_{1 \le i < j \le N}^{} \left| e^{i\theta_i} - e^{i\theta_j} \right|^2 \prod_{j=1}^{N} \left| 1 - e^{i\theta_j}\right|^2, \nonumber\\ \label{phintheta} {} \end{eqnarray} satisfying the {\it symmetry relation} \begin{eqnarray} \label{phin-sym} \Phi_N(2\pi-\varphi;\zeta) &=& (1-\zeta)^N \Phi_N\left(\varphi;\frac{\zeta}{\zeta-1}\right) \nonumber\\ &=& (1-\zeta)^N \Phi_N(\varphi; \bar{\zeta}) = (1-\zeta)^N \overline{\Phi_N(\varphi; \zeta)}. \end{eqnarray} Multidimensional integrals of the CUE-type akin to Eq.~(\ref{phintheta}) have been studied in much detail in Ref.~\cite{FW-2004}, whose authors employed the $\tau$-function theory \cite{O-1987} of Painlev\'e equations. To proceed with the evaluation of the generating function of interest, we introduce a new set of integration variables \begin{eqnarray}\label{ch-var} e^{i\theta_j} = \frac{i\lambda_j-1}{i\lambda_j+1} \end{eqnarray} to write down the generating function Eq.~(\ref{phintheta}) in the form \begin{eqnarray} \fl \Phi_N(\varphi;\zeta) = \frac{2^{N(N+1)}}{\pi^N (N+1)!} \prod_{j=1}^N \left( \int_{-\infty}^{+\infty} - \zeta \int_{\cot(\varphi/2)}^{+\infty} \right) \frac{d\lambda_j}{(1+\lambda_j^2)^{N+1}} \prod_{1 \le i < j \le N}^{} \left| \lambda_i - \lambda_j \right|^2. \nonumber\\ {} \end{eqnarray} Its Painlev\'e VI representation can be read off from Ref.~\cite{FW-2004} to establish Eqs.~(\ref{phin}), (\ref{pvi}) and also (\ref{pvi-bc}). For a detailed derivation of the boundary condition Eq.~(\ref{pvi-bc}), the reader is referred to Appendix~\ref{A-1}.
\hfill $\square$ \begin{remark} For a set of discrete frequencies $$ \omega_k^\prime = \frac{2\pi k}{N+1} $$ the free term in Eq.~(\ref{ps-tcue-1}) vanishes, $\dbtilde{S}_N(\omega_k^\prime)=0$, bringing a somewhat tidier formula \begin{eqnarray} \fl \label{ps-tcue-4} S_N(\omega_k^\prime) = \frac{(N+1)^2}{\pi N} {\rm Re} \left( z_k^\prime \frac{\partial}{\partial z_k^\prime} - N - 1 \right) \frac{z_k^\prime}{1-z_k^\prime} \int_0^{2\pi} \frac{d\varphi}{2\pi} \,\varphi \, \Phi_N(\varphi;1-z_k^\prime), \end{eqnarray} where $z_k^\prime = e^{i \omega_k^\prime}$. This is essentially Eq.~(17) previously announced in our paper Ref.~\cite{ROK-2017}. \hfill $\blacksquare$ \end{remark} \subsection{Power spectrum in ${\rm TCUE}_N$ as a Fredholm determinant} \label{Fredholm-sec} To derive a Fredholm determinant representation of the ${\rm TCUE}_N$ power spectrum, a determinantal structure \cite{M-2004,PF-book} of spectral correlation functions in the ${\rm TCUE}_N$ should be established. This is summarized in Lemma \ref{corr-f-tcue} below.
\begin{lemma} \label{corr-f-tcue} For $\ell =1,\dots,N$, the $\ell$-point correlation function \cite{M-2004,PF-book} \begin{eqnarray} \label{RLN-TCUE} R_{\ell,N}(\theta_1,\dots,\theta_\ell) = \frac{N!}{(N-\ell)!} \left(\prod_{j=\ell+1}^{N} \int_{0}^{2\pi} \frac{d\theta_j}{2\pi} \right) \, P_N(\theta_1,\dots,\theta_N) \end{eqnarray} in the ${\rm TCUE}_N$ ensemble, defined by Eqs.~(\ref{T-CUE}) and (\ref{TCUE-norm}), admits the determinantal representation \begin{eqnarray}\label{rk-tcue-kappa} R_{\ell,N} (\theta_1,\dots,\theta_\ell) = {\det}_{1\le i,j \le \ell} \left[ {\kappa}_{N} (\theta_i, \theta_j) \right], \end{eqnarray} where the ${\rm TCUE}_N$ scalar kernel \begin{eqnarray}\label{kn-cd-new} {\kappa}_{N}(\theta,\theta^\prime) = {\mathcal S}_{N+1}(\theta-\theta^\prime) - \frac{1}{N+1} {\mathcal S}_{N+1}(\theta) {\mathcal S}_{N+1}(\theta^\prime) \end{eqnarray} is expressed in terms of the sine-kernel \begin{eqnarray} {\mathcal S}_{N+1}(\theta) = \frac{\sin[(N+1)\theta/2]}{\sin(\theta/2)} \end{eqnarray} of the ${\rm CUE}_{N+1}$ ensemble. \end{lemma} \begin{proof} While the determinantal form [Eq.~(\ref{rk-tcue-kappa})] of spectral correlation functions is a universal manifestation of the $\beta=2$ symmetry of the circular ensemble \cite{M-2004,PF-book}, a precise form of the two-point scalar kernel $\kappa_N(\theta,\theta^\prime)$ depends on peculiarities of the ${\rm TCUE}_N$ probability measure encoded in the weight function ($z = e^{i\theta}$) \begin{eqnarray} \label{wzm} W(z) = \frac{1}{2} |1-z|^2 = 1-\cos\theta \end{eqnarray} characterizing the ${\rm TCUE}_N$ measure in Eq.~(\ref{T-CUE}). For aesthetic reasons, it is convenient to compute a scalar kernel $\kappa_N(\theta,\theta^\prime)$ in terms of polynomials $\{\psi_j(z)\}$ orthonormal on the unit circle $|z|=1$ \begin{eqnarray} \frac{1}{2 i\pi} \oint_{|z|=1} \frac{dz}{z} \, W(z) \, \psi_\ell(z) \overline{\psi_m (z)} = \delta_{\ell m} \end{eqnarray} with respect to the weight function $W(z)$. 
In such a case, a scalar kernel is given by either of the two representations ($w=e^{i\theta^\prime}$): \begin{eqnarray} \label{kernel-sum} \kappa_N(\theta,\theta^\prime) &=& \sqrt{W(z)W(w)} \, \sum_{\ell=0}^{N-1} \psi_\ell(z) \, \overline{\psi_\ell(w)} \\ \label{kernel-darboux} &=& \sqrt{W(z)W(w)} \, \frac{\overline{\psi_N(w)}\, \psi_N(z) - \overline{\psi^*_N(w)} \psi^*_N(z)}{\bar{w} z -1}. \end{eqnarray} Equation~(\ref{kernel-darboux}), containing reciprocal polynomials \begin{eqnarray} \label{rec-pol} \psi^*_\ell(z) = z^\ell \, \overline{\psi_\ell(1/\bar{z})}, \end{eqnarray} follows from Eq.~(\ref{kernel-sum}) by virtue of the Christoffel-Darboux identity \cite{I-2005}. Since for the ${\rm TCUE}_N$ weight function Eq.~(\ref{wzm}), the orthonormal polynomials are known as Szeg\"o-Askey polynomials (see \S18 in Ref.~\cite{NIST}), \begin{eqnarray} \label{SAP-01} \psi_\ell(z) = \sqrt{\frac{2}{(\ell+1)(\ell+2)}} \;\, {}_2 F_1 \left( - \ell,2; -\ell; z \right), \end{eqnarray} the reciprocal Szeg\"o-Askey polynomials are readily available, too: \begin{eqnarray} \label{SAP-02} \psi^*_\ell(z) =\sqrt{\frac{2 (\ell+1)}{\ell+2}} \;\, {}_2 F_1 \left( - \ell,1; -\ell-1; z \right). \end{eqnarray} Hence, Eqs.~(\ref{kernel-darboux}), (\ref{SAP-01}) and (\ref{SAP-02}) furnish an explicit expression for the ${\rm TCUE}_N$ scalar kernel $\kappa_N(\theta,\theta^\prime)$. This being said, we would like to represent the ${\rm TCUE}_N$ scalar kernel in a more suggestive form. To do so, we notice that Szeg\"o-Askey polynomials Eq.~(\ref{SAP-01}) admit yet another representation \begin{eqnarray} \label{SAP-03} \psi_\ell(z) = \sqrt{\frac{2}{(\ell+1)(\ell+2)}}\; \sum_{j=1}^{\ell+1} j z^{j-1}. 
\end{eqnarray} Substituting it further into Eq.~(\ref{kernel-sum}), one obtains: \begin{eqnarray} \fl \label{2pk-01} \kappa_N(\theta,\theta^\prime) &=& \frac{2 i}{N+1} e^{-i (\theta-\theta^\prime)/2} \frac{\sin[\theta/2] \sin[\theta^\prime/2]} {\sin[(\theta-\theta^\prime)/2]} \sum_{j=0}^N \sum_{k=0}^N (N-j-k) z^j \bar{w}^k \\ \fl &=& \frac{2 i}{N+1} e^{-i (\theta-\theta^\prime)/2} \frac{\sin[\theta/2] \sin[\theta^\prime/2]} {\sin[(\theta-\theta^\prime)/2]} \sum_{j=0}^N \sum_{k=0}^N \left(N- z\frac{\partial}{\partial z}- {\bar w}\frac{\partial}{\partial \bar{w}}\right) z^j \bar{w}^k. \end{eqnarray} Owing to the representation of the ${\rm CUE}_N$ sine-kernel \begin{eqnarray} {\mathcal S}_{N}(\theta) = \frac{\sin(N\theta/2)}{\sin(\theta/2)} = e^{-i (N-1) \theta/2} \sum_{j=0}^{N-1} z^j, \end{eqnarray} the above can further be reduced to \begin{eqnarray} \kappa_N(\theta,\theta^\prime) &=& \frac{2}{N+1} e^{i (N-1)(\theta-\theta^\prime)/2} \frac{\sin[\theta/2] \sin[\theta^\prime/2]} {\sin[(\theta-\theta^\prime)/2]} \nonumber\\ &\times& \left( \frac{\partial}{\partial \theta^\prime} - \frac{\partial}{\partial \theta} \right)\, {\mathcal S}_{N+1}(\theta) {\mathcal S}_{N+1}(\theta^\prime). \end{eqnarray} Calculating derivatives therein, we derive \begin{eqnarray}\label{kn-cd-new-100} \fl {\kappa}_{N}(\theta,\theta^\prime) = e^{i (N-1)(\theta-\theta^\prime)/2} \left({\mathcal S}_{N+1}(\theta-\theta^\prime) - \frac{1}{N+1} {\mathcal S}_{N+1}(\theta) {\mathcal S}_{N+1}(\theta^\prime)\right). \end{eqnarray} Spotting that the phase factor in Eq.~(\ref{kn-cd-new-100}) does not contribute to the determinant in Eq.~(\ref{rk-tcue-kappa}) completes the proof. \end{proof} \begin{remark} An alternative determinantal representation of spectral correlation functions in the ${\rm TCUE}_N$ can be established if one views the JPDF of the ${\rm TCUE}_N$ as the one of the traditional ${\rm CUE}_{N+1}$ ensemble, whose lowest eigenangle is conditioned to stay at zero, as spelt out below. 
\hfill $\blacksquare$ \end{remark} \begin{lemma} \label{correlation-f} For $\ell =1,\dots,N$, the $\ell$-point correlation function, Eq.~(\ref{RLN-TCUE}), in the ${\rm TCUE}_N$ ensemble admits the determinantal representation \begin{eqnarray}\label{rk-tcue} R_{\ell,N} (\theta_1,\dots,\theta_\ell) = \frac{1}{N+1} {\det}_{1\le i,j \le \ell+1} \left[ {\mathcal S}_{N+1} (\theta_i -\theta_j) \right] \Big|_{\theta_{\ell+1}=0}, \end{eqnarray} where ${\mathcal S}_{N+1}(\theta)$ is the ${\rm CUE}_{N+1}$ sine-kernel: \begin{eqnarray} {\mathcal S}_{N+1}(\theta) = \frac{\sin[(N+1)\theta/2]}{\sin(\theta/2)}. \end{eqnarray} \end{lemma} \begin{proof} Equation (\ref{rk-tcue}) is self-evident as the determinant therein is the $(\ell+1)$-point correlation function in the ${\rm CUE}_{N+1}$ with one of the eigen-angles set to zero, whilst the denominator is the ${\rm CUE}_{N+1}$ mean density ${\mathcal S}_{N+1}(0)=N+1$; dividing by it implements the conditioning on an eigen-angle staying at zero. \end{proof} \begin{proposition} \label{prop-fred} The generating function $\Phi_N(\varphi;\zeta)$ in Eq.~(\ref{ps-tcue-1}) of Theorem \ref{Th-3} admits a Fredholm determinant representation \begin{eqnarray}\label{Phi-FD} \Phi_N(\varphi;\zeta) = {\rm det} \big[ \mathds{1} - \zeta \mathbf{\hat{\kappa}}_N^{(0,\varphi)} \big], \end{eqnarray} where $\mathbf{\hat{\kappa}}_N^{(0,\varphi)}$ is an integral operator defined by \begin{eqnarray} \label{Phi-FD-kappa} \big[\mathbf{\hat{\kappa}}_N^{(0,\varphi)} f\big] (\theta_1) = \int_{0}^{\varphi} \frac{d\theta_2}{2\pi} \kappa_N(\theta_1,\theta_2) \, f(\theta_2), \end{eqnarray} whilst $\kappa_N$ is the ${\rm TCUE}_N$ two-point scalar kernel specified in Lemma \ref{corr-f-tcue}.
\end{proposition} \begin{proof} To derive a Fredholm determinant representation of the power spectrum, we turn to Eq.~(\ref{ps-tcue-2}) rewriting it as a sum \begin{eqnarray} \Phi_N(\varphi;\zeta) = \sum_{\ell=0}^{N} {{N}\choose{\ell}} \left( -\zeta \int_0^\varphi \right)^\ell \left( \int_0^{2\pi} \right)^{N-\ell} \prod_{j=1}^N \frac{d\theta_j}{2\pi} \, P_N(\theta_1,\dots,\theta_N).\nonumber \end{eqnarray} Performing $(N-\ell)$ integrations, we obtain \begin{eqnarray} \Phi_N(\varphi;\zeta) = \sum_{\ell=0}^{N} \frac{(-\zeta)^\ell}{\ell!} \left( \prod_{j=1}^\ell \int_0^\varphi \frac{d\theta_j}{2\pi}\right) \, R_{\ell,N}(\theta_1,\dots,\theta_\ell), \nonumber \end{eqnarray} where $R_{\ell,N}(\theta_1,\dots,\theta_\ell)$ is the $\ell$-point correlation function in ${\rm TCUE}_N$ given by Eq.~(\ref{RLN-TCUE}). Its determinant representation Eq.~(\ref{rk-tcue-kappa}) yields the expansion \begin{eqnarray} \Phi_N(\varphi;\zeta) = \sum_{\ell=0}^{N} \frac{(-\zeta)^\ell}{\ell!} \left( \prod_{j=1}^\ell \int_0^\varphi\frac{d\theta_j}{2\pi} \right)\, {\det}_{1\le i,j \le \ell} \left[ {\kappa}_{N} (\theta_i, \theta_j) \right]. \nonumber \end{eqnarray} Here, $\kappa_N(\theta,\theta^\prime)$ is the two-point scalar kernel of the ${\rm TCUE}_N$ ensemble, see Lemma \ref{corr-f-tcue} for its explicit form. Further, consulting, e.g., Appendix in Ref.~\cite{BK-2007}, one identifies a sought Fredholm determinant representation given by Eqs.~(\ref{Phi-FD}) and (\ref{Phi-FD-kappa}). \end{proof} A Fredholm determinant representation of the power spectrum is particularly useful for asymptotic analysis of the power spectrum in the deep `infrared' limit $\omega \ll 1$ when $\zeta = 1-z \ll 1$. 
\subsection{Power spectrum in ${\rm TCUE}_N$ as a Toeplitz determinant} \label{Toeplitz-sec} To analyse the power spectrum in the limit $N \rightarrow \infty$ with $0<\omega<\pi$ kept fixed, it is beneficial to represent the generating function $\Phi_N(\varphi;\zeta)$ [Eq.~(\ref{phintheta})] entering the exact solution Eq.~(\ref{ps-tcue-1}) with $\zeta = 1 - z$ in the form of a Toeplitz determinant with Fisher-Hartwig singularities. \begin{proposition}\label{toep-prop} The generating function $\Phi_N(\varphi;\zeta)$ in Eq.~(\ref{ps-tcue-1}) of Theorem \ref{Th-3} admits a Toeplitz determinant representation \begin{eqnarray} \label{GF-toeplitz-1} \Phi_N(\varphi; \zeta) = \frac{e^{i\varphi \tilde{\omega} N}}{N+1} \, D_N[f_{\tilde{\omega}}(z;\varphi)], \end{eqnarray} where $\tilde{\omega} = \omega/2\pi$, and \begin{equation}\label{T-det-02} D_N[f_{\tilde{\omega}}(z;\varphi)] = {\rm det}_{0 \le j,\ell \le N-1} \left( \frac{1}{2 i\pi} \oint_{|z|=1} \frac{dz}{z} \,z^{\ell-j} f_{\tilde{\omega}}(z;\varphi) \right) \end{equation} is the Toeplitz determinant whose Fisher-Hartwig symbol \begin{eqnarray} \label{FHS} f_{\tilde{\omega}}(z;\varphi) = |z-z_1|^2 \left(\frac{z_2}{z_1}\right)^{\tilde{\omega}} g_{z_1,\tilde{\omega}} (z)\, g_{z_2,-\tilde{\omega}}(z) \end{eqnarray} possesses a power-type singularity at $z=z_1 = e^{i\varphi/2}$ and jump discontinuities \begin{eqnarray} g_{z_{j}, \pm{\tilde{\omega}}} (z) = \left\{ \begin{array}{ll} e^{\pm i\pi \tilde{\omega}}, & \hbox{$0 \le {\rm arg\,} z < {\rm arg\,} z_{j}$} \\ e^{\mp i\pi \tilde{\omega}}, & \hbox{${\rm arg\,} z_{j} \le {\rm arg\,} z < 2\pi$} \end{array} \right. \end{eqnarray} at $z = z_{1,2}$ with $z_2 = e^{i(2\pi -\varphi/2)}$.
\end{proposition} \begin{proof} Start with the multiple integral representation Eq.~(\ref{phintheta}) and make use of Andr\'eief's formula \cite{A-1883,dB-1955} \begin{eqnarray} \left(\prod_{j=1}^{N} \int_{{\mathcal L}} \frac{d\theta_j}{2\pi}\, w(\theta_j)\right) {\rm det}_{1\le j,\ell \le N} [f_{j-1}(\theta_\ell)] \, {\rm det}_{1\le j,\ell \le N} [g_{j-1}(\theta_\ell)] \nonumber\\ \qquad = N! \, {\rm det}_{1\le j,\ell \le N} \left( \int_{{\mathcal L}} \frac{d\theta}{2\pi}\, w(\theta) f_{j-1}(\theta) g_{\ell-1}(\theta) \right) \end{eqnarray} in which the weight function is set to $w(\theta)=(1-\zeta \Theta(\theta) \Theta(\varphi-\theta))|1-e^{i\theta}|^2$, the integration domain is chosen to be ${\mathcal L} = (0, 2\pi)$, and $f_{j-1}(\theta)=\overline{g_{j-1}(\theta)} = e^{i(j-1)\theta}$, to derive \begin{eqnarray} \label{T-det-01} \Phi_N(\varphi; \zeta) = \frac{1}{N+1} {\rm det}_{0 \le j,\ell \le N-1} \left[ M_{j-\ell}(\varphi;\zeta) \right], \end{eqnarray} where \begin{eqnarray} \label{Mjk} M_{j-\ell}(\varphi;\zeta) = \left( \int_{0}^{2\pi} - \zeta \int_{0}^{\varphi} \right)\frac{d\theta}{2\pi} |1- e^{i\theta}|^2 e^{-i(j-\ell)\theta}. \end{eqnarray} Introducing a new integration variable $z=e^{i\theta}$ in Eq.~(\ref{Mjk}) and adopting the standard terminology and notation of Refs.~\cite{DIK-2014,CK-2015}, one establishes the equivalence of Eqs.~(\ref{T-det-01}) and (\ref{Mjk}) with the statement of the proposition. \end{proof} \section{Power spectrum in quantum chaotic systems: Large-$N$ limit} \label{T-section} In the limit $N \rightarrow \infty$, the exact solution for the ${\rm TCUE}_N$ power spectrum should converge to a universal law. To determine it, we shall perform an asymptotic analysis of the exact solution Eqs.~(\ref{ps-tcue-1}) and (\ref{ps-tcue-3}), stated in Theorem \ref{Th-3}, with the generating function $\Phi_N(\varphi; \zeta)$ being represented as a Toeplitz determinant specified in Proposition \ref{toep-prop}.
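Since the moments Eq.~(\ref{Mjk}) are available in closed form, the representation Eq.~(\ref{T-det-01}) is convenient for numerical experiments. A minimal sketch (assuming Python with numpy) that evaluates $\Phi_N(\varphi;\zeta)$ this way:

```python
import numpy as np

def c_moment(m, phi):
    # c_m(phi) = \int_0^phi e^{-i*m*theta} dtheta/(2*pi), integer m (array-valued)
    m = np.asarray(m, dtype=float)
    safe = np.where(m == 0.0, 1.0, m)   # dummy denominator at m = 0
    return np.where(m == 0.0, phi / (2.0 * np.pi) + 0j,
                    (1.0 - np.exp(-1j * m * phi)) / (2j * np.pi * safe))

def phi_toeplitz(phi, zeta, N):
    # Phi_N(phi;zeta) = det_{0<=j,l<=N-1}[M_{j-l}]/(N+1), Eqs. (T-det-01), (Mjk);
    # the full-circle moment of |1-e^{i theta}|^2 e^{-ik theta} equals
    # 2*delta_{k,0} - delta_{|k|,1}
    k = np.arange(N)[:, None] - np.arange(N)[None, :]
    full = 2.0 * (k == 0) - 1.0 * (np.abs(k) == 1)
    cut = 2.0 * c_moment(k, phi) - c_moment(k - 1, phi) - c_moment(k + 1, phi)
    return np.linalg.det(full - zeta * cut) / (N + 1)
```

Three exact identities serve as checks: $\Phi_N(\varphi;0)=1$; at $\varphi=2\pi$, Eq.~(\ref{ps-tcue-2}) gives $\Phi_N(2\pi;\zeta)=(1-\zeta)^N$; and for $\zeta=1-z$ with $|z|=1$ the routine should reproduce the symmetry relation Eq.~(\ref{phin-sym}) to near machine precision.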
\subsection{Uniform asymptotics of the Toeplitz determinant} To perform the integral in Eq.~(\ref{ps-tcue-1}) in the limit $N \rightarrow \infty$, {\it uniform} asymptotics of the Toeplitz determinant Eq.~(\ref{T-det-02}) are required in the subtle regime of two merging singularities. In our case, one singularity is of a jump type while the other one is of both root and jump types. Relevant uniform asymptotics were recently studied in great detail by Claeys and Krasovsky \cite{CK-2015}, who used the Riemann-Hilbert technique. Two different, albeit partially overlapping, asymptotic regimes in $\varphi$ can be identified. \newline\newline\noindent {\it Asymptotics at the `left edge'.}---Defining the left edge as the domain $0\le \varphi <\varphi_0$, where $\varphi_0$ is sufficiently small \footnote{In fact, here $\varphi_0 =2\pi -\epsilon$ with $\epsilon >0$.}, the following asymptotic expansion holds {\it uniformly} as $N \rightarrow \infty$ (see Theorems 1.5 and 1.8 in Ref.~\cite{CK-2015}) \begin{eqnarray} \label{Edge-1} \ln D_N[f_{\tilde{\omega}}(z;\varphi)] &=& \ln N - i(N-1) \tilde{\omega} \varphi - 2\tilde{\omega}^2 \ln \left( \frac{\sin(\varphi/2)}{\varphi/2}\right) \nonumber\\ &+& \int_{0}^{- i N \varphi} \frac{ds}{s}\, \sigma(s) + {\mathcal O}(N^{-1+ 2 \tilde{\omega}}), \end{eqnarray} so that \begin{eqnarray} \label{Edge-2} \fl \Phi_N(\varphi;\zeta) = e^{i \tilde{\omega} \varphi} \left( \frac{\sin(\varphi/2)}{\varphi/2} \right)^{-2\tilde{\omega}^2} \exp\left( \int_{0}^{- i N \varphi} \frac{ds}{s}\, \sigma(s) \right) \left( 1 + {\mathcal O}(N^{-1+ 2 \tilde{\omega}}) \right). \end{eqnarray} Here $\tilde\omega = \omega/2\pi$ is a rescaled frequency so that $z=1-\zeta=e^{2 i\pi\tilde\omega}$.
The function $\sigma(s)$ is the fifth Painlev\'e transcendent defined as the solution to the nonlinear equation \begin{equation} \label{PV-eq} s^2 (\sigma^{\prime\prime})^2 = \left(\sigma - s \sigma^\prime + 2 (\sigma^\prime)^2 \right)^2 - 4 (\sigma^\prime)^2\left( (\sigma^\prime)^2 - 1 \right) \end{equation} subject to the boundary conditions \cite{TC-private} \footnote{Notice that, in contrast to Ref.~\cite{CK-2015}, we kept two remainder terms in Eq.~(\ref{bc-inf}) -- oscillatory and non-oscillatory -- even though the latter term is subleading. The reason is that the function $\sigma(s)$ will subsequently appear in the integral Eq.~(\ref{global-T}), which will make the non-oscillatory remainder term dominant.} \begin{eqnarray}\label{bc-inf} \fl \sigma(s) = -{\tilde \omega} s - 2{\tilde \omega}^2 + \frac{s\gamma(s)}{1+\gamma(s)} + {\mathcal O}\left( e^{-i|s|} |s|^{-1+2{\tilde \omega}} \right)+ {\mathcal O}\left( |s|^{-1} \right) \quad {\rm as} \quad s\rightarrow - i\infty \end{eqnarray} and \begin{eqnarray}\label{bc-zero} \sigma(s)= {\mathcal O}\left( |s|\,\ln |s| \right)\quad {\rm as} \quad s\rightarrow - i0_+. \end{eqnarray} The function $\gamma(s)$ in Eq.~(\ref{bc-inf}) equals \begin{eqnarray}\label{eq:gamma} \gamma(s) = \frac{1}{4} \left| \frac{s}{2} \right|^{2(-1+2\tilde{\omega})} e^{-i |s|} e^{i\pi} \frac{\Gamma(2-\tilde{\omega}) \Gamma(1-\tilde{\omega})}{\Gamma(1+\tilde{\omega}) \Gamma(\tilde{\omega})}. \end{eqnarray} The above holds for $0 \le \tilde{\omega} < 1/2$. \begin{remark} Following Ref.~\cite{CK-2015}, we notice that in Eqs.~(\ref{Edge-1}) and (\ref{Edge-2}) the path of integration in the complex $s$-plane should be chosen to avoid a finite number of poles $\{ s_j \}$ of $\sigma(s)$ corresponding to zeros $\{ \varphi_j = i s_j/N \}$ in the asymptotics of the Toeplitz determinant $D_N[f_{\tilde{\omega}}(z;\varphi)]$.
For the specific Fisher-Hartwig symbol Eq.~(\ref{FHS}) we expect $\{ s_j \}$ to be the empty set; numerical analysis of $D_N[f_{\tilde{\omega}}(z;\varphi)]$ suggests that its zeros stay away from the real line. \hfill $\blacksquare$ \end{remark} \noindent\newline\noindent {\it Asymptotics in the `bulk'.}---Defining the `bulk' as the domain $\Omega(N)/N \le \varphi <\varphi_0$, where $\varphi_0$ is sufficiently small, and $\Omega(x)$ is a positive smooth function such that $\Omega(N) \rightarrow \infty$ whilst $\Omega(N)/N \rightarrow 0$ as $N \rightarrow \infty$, the following asymptotic expansion holds {\it uniformly} (see Theorem 1.11 in Ref.~\cite{CK-2015}): \begin{equation}\label{eq:DIK}\fl D_N[f_{\tilde{\omega}}(z;\varphi)] = N^{1-2\tilde{\omega}^2} G_{\tilde{\omega}}\, e^{i\tilde{\omega} \varphi} e^{-i\tilde{\omega}\pi} \left|2\sin \left( \frac{\varphi}{2} \right) \right|^{-2\tilde{\omega}^2} \left( 1 + {\mathcal O}\left(\Omega(N)^{-1+2\tilde{\omega}}\right) \right) \end{equation} so that \begin{eqnarray} \label{Th-111} \fl \Phi_N(\varphi; \zeta)&=& N^{-2\tilde{\omega}^2} G_{\tilde{\omega}}\, e^{i \tilde{\omega} \varphi (N+1)} e^{-i\tilde{\omega}\pi} \left| 2 \sin \left( \frac{\varphi}{2} \right) \right|^{-2\tilde{\omega}^2}\,\left( 1 + {\mathcal O}\left(\Omega(N)^{-1+2\tilde{\omega}}\right) \right). \end{eqnarray} Here, $G_{\tilde{\omega}}$ is a known function of $\tilde{\omega}$ \begin{eqnarray} G_{\tilde{\omega}} = G(2+\tilde{\omega}) G(2-\tilde{\omega}) G(1+\tilde{\omega}) G(1-\tilde{\omega}) \end{eqnarray} with $G(\cdots)$ being the Barnes' $G$-function. The above holds for $0 \le \tilde{\omega} < 1/2$. The leading term in Eqs.~(\ref{eq:DIK}) and (\ref{Th-111}) is due to Ehrhardt \cite{E-2001}. 
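The bulk law Eq.~(\ref{Th-111}) is easy to probe numerically: in the ratio $\Phi_N(\varphi_1;\zeta)/\Phi_N(\varphi_2;\zeta)$ both the constant $G_{\tilde{\omega}}$ and the factor $N^{-2\tilde{\omega}^2}$ cancel, leaving a prediction that can be tested against the exact Toeplitz representation Eq.~(\ref{T-det-01}). Below is a self-contained sketch (assuming Python with numpy; the matrix size $N$ and the comparison tolerance are illustrative choices reflecting the ${\mathcal O}(\Omega(N)^{-1+2\tilde{\omega}})$ error term):

```python
import numpy as np

def c_moment(m, phi):
    # c_m(phi) = \int_0^phi e^{-i*m*theta} dtheta/(2*pi)
    m = np.asarray(m, dtype=float)
    safe = np.where(m == 0.0, 1.0, m)
    return np.where(m == 0.0, phi / (2.0 * np.pi) + 0j,
                    (1.0 - np.exp(-1j * m * phi)) / (2j * np.pi * safe))

def phi_toeplitz(phi, zeta, N):
    # exact Phi_N(phi;zeta) = det[M_{j-l}]/(N+1), Eqs. (T-det-01) and (Mjk)
    k = np.arange(N)[:, None] - np.arange(N)[None, :]
    full = 2.0 * (k == 0) - 1.0 * (np.abs(k) == 1)
    cut = 2.0 * c_moment(k, phi) - c_moment(k - 1, phi) - c_moment(k + 1, phi)
    return np.linalg.det(full - zeta * cut) / (N + 1)

# ratio test of the bulk asymptotics: G_w and N^{-2w^2} drop out
N, w = 400, 0.1                       # w = omega/(2*pi), with 0 <= w < 1/2
zeta = 1.0 - np.exp(2j * np.pi * w)
p1, p2 = 1.0, 2.0
ratio = phi_toeplitz(p1, zeta, N) / phi_toeplitz(p2, zeta, N)
pred = np.exp(1j * w * (p1 - p2) * (N + 1)) \
     * (np.abs(2.0 * np.sin(p1 / 2)) / np.abs(2.0 * np.sin(p2 / 2))) ** (-2.0 * w ** 2)
print(abs(ratio / pred - 1.0))        # small, decaying as N grows
```

The observed deviation shrinks with increasing $N$, consistent with the uniform error estimate quoted above.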
\begin{remark} Since both asymptotic expansions [Eq.~(\ref{Edge-1}) and (\ref{eq:DIK})] hold uniformly in the domain $\Omega(N)/N \le \varphi <\varphi_0$, the following integral identity for $\sigma(s)$ should hold: \begin{eqnarray} \label{global} \lim_{T\rightarrow +\infty} \left( \int_{0}^{-i T} \frac{ds}{s} \, \sigma(s) - i\tilde{\omega} T +2\tilde{\omega}^2 \ln T \right) = - i \pi \tilde{\omega} + \ln G_{\tilde{\omega}}, \end{eqnarray} see Eq.~(1.26) in Ref.~\cite{CK-2015}. Had this global condition been derived independently, it would have provided an alternative route to producing the `bulk' asymptotics out of those known in the edge region. Notice that as $T\rightarrow \infty$, the boundary condition Eq.~(\ref{bc-inf}) implies a stronger statement: \begin{eqnarray} \label{global-T} \int_{0}^{-i T} \frac{ds}{s} \, \sigma(s) - i\tilde{\omega} T +2\tilde{\omega}^2 \ln T = - i \pi \tilde{\omega} + \ln G_{\tilde{\omega}} + {\mathcal O}(T^{-1}). \end{eqnarray} \hfill $\blacksquare$ \end{remark} \subsection{Asymptotic analysis of the main integral} In doing the large-$N$ asymptotic analysis of our exact solution for the power spectrum [Eqs. (\ref{ps-tcue-1}) and (\ref{phin})], we shall encounter a set of integrals \begin{eqnarray} \label{i-n-k} I_{N,k}(\zeta) = N \int_{0}^{2\pi} \frac{d\varphi}{2\pi} \, \varphi^k \Phi_N(\varphi;\zeta), \end{eqnarray} where $k$ is a non-negative integer and $\Phi_N(\varphi;\zeta)$ is given by Eq.~(\ref{phintheta}). We shall specifically be interested in $k=0$ and $1$. \begin{lemma} \label{Lemma-IN0} In the notation of Eq.~(\ref{i-n-k}), we have: \begin{eqnarray} \label{In0-exact} I_{N,0} (\zeta) = \frac{N}{N+1} \frac{1-(1-\zeta)^{N+1}}{\zeta}. \end{eqnarray} Equation (\ref{In0-exact}) is exact for any $\zeta \in \mathbb{C}$. 
\end{lemma} \begin{proof} To compute the integral Eq.~(\ref{i-n-k}) at $k=0$, we invoke the expansion Eq.~(\ref{ps-tcue-2}) of $\Phi_N(\varphi;\zeta)$ in terms of probabilities $E_N(\ell;\varphi)$ of observing exactly $\ell$ eigenangles of ${\rm TCUE}_N$ in the interval $(0,\varphi)$, \begin{eqnarray} \label{In0-expansion} I_{N,0}(\zeta) = N \sum_{\ell=0}^N (1-\zeta)^\ell \int_{0}^{2\pi} \frac{d\varphi}{2\pi}\, E_N(\ell;\varphi). \end{eqnarray} The integral above can readily be calculated by performing integration by parts: \begin{eqnarray} \fl \int_{0}^{2\pi} \frac{d\varphi}{2\pi}\, E_N(\ell;\varphi) = \delta_{\ell,N} - \int_{0}^{2\pi}\frac{d\varphi}{2\pi}\, \varphi \frac{d}{d\varphi} E_N(\ell;\varphi) \nonumber\\ = \delta_{\ell,N} + \frac{1}{2\pi}\int_{0}^{2\pi}\frac{d\varphi}{2\pi}\, \varphi \left( p_{\ell+1}(\varphi) - p_{\ell}(\varphi) \right). \end{eqnarray} In the second line, we have used the relation Eq.~(\ref{Lemma-probs-1}) which, in the context of ${\rm TCUE}_N$, acquires the multiplicative factor $1/2\pi$ in its r.h.s.; there, $p_\ell(\varphi)$ is the probability density of the $\ell$-th ordered eigenangle. Further, identifying (see Corollary \ref{corr-theta-k}) \begin{eqnarray} \int_{0}^{2\pi}\frac{d\varphi}{2\pi}\, \varphi \, p_{\ell}(\varphi) = \langle \theta_\ell \rangle = \left\{ \begin{array}{ll} \ell \Delta, & \hbox{$\ell=1,\dots,N$;} \\ 0, & \hbox{$\ell=0, N+1$.} \end{array} \right. \end{eqnarray} where $\Delta = 2\pi/(N+1)$ is the mean spacing, we conclude that \begin{eqnarray} \label{IN0-res} \int_{0}^{2\pi} \frac{d\varphi}{2\pi}\, E_N(\ell;\varphi) = \delta_{\ell,N} + \frac{\langle \theta_{\ell+1}\rangle - \langle \theta_{\ell}\rangle}{2\pi} = \frac{1}{N+1} \end{eqnarray} for all $\ell=0,\dots,N$. Substitution of Eq.~(\ref{IN0-res}) into Eq.~(\ref{In0-expansion}) ends the proof.
\end{proof} \begin{remark} The fact that $I_{N,0} (\zeta)$ can be expressed in terms of elementary functions can be traced back to stationarity of level spacings in the ${\rm TCUE}_N$. By way of comparison, in the ${\rm CUE}_N$, an analogue of $I_{N,0} (\zeta)$ would have to be expressed in terms of the sixth Painlev\'e transcendent. \hfill $\blacksquare$ \end{remark} \noindent\newline {\it The integral $I_{N,k}$.}---Unfortunately, {\it exact} calculation of the same ilk is not readily available for $I_{N,k}$ with $k=1$. For this reason we would like to gain insight from Eq.~(\ref{In0-exact}) as $N \rightarrow \infty$, which is ultimately the limit we are concerned with. To this end, we extract the leading order behavior of $I_{N,0}(\zeta)$ on the unit circle $|z|=|1-\zeta|=1$, \begin{eqnarray} \label{i-n-0-asymp} I_{N,0}(\zeta) = \frac{1}{\zeta} + (1-\zeta)^N \frac{1}{\bar{\zeta}} + {\mathcal O}(N^{-1}), \end{eqnarray} and observe that it contains terms of two types. (i) Those bearing a strongly oscillating prefactor $(1-\zeta)^N = z^N = e^{2 i \pi \tilde{\omega} N}$, $$ (1-\zeta)^N \frac{1}{\bar{\zeta}} $$ are contributed by a vicinity of $\varphi=2\pi$ in the integral Eq.~(\ref{i-n-k}) with $k=0$. (ii) By contrast, such a prefactor is missing in the term coming from a vicinity of $\varphi=0$, $$ \frac{1}{\zeta}. $$ The contribution from the bulk of the integration domain appears to be negligible due to strong oscillations $e^{i \tilde{\omega}\varphi N}$ of the integrand therein, see Eq.~(\ref{Th-111}). Equipped with these observations, we shall now proceed with an alternative, large-$N$, analysis of $I_{N,k}(\zeta)$ for $k=0$ and $k=1$, where terms of the same structure (with and without strongly oscillating prefactor) will appear. Aiming at the analysis of the power spectrum [Eq.~(\ref{ps-tcue-1})], whose representation contains a very particular $z$-operator, we shall only be interested in the leading order contributions to both terms.
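The expansion Eq.~(\ref{i-n-0-asymp}) is easy to confirm numerically from the exact result Eq.~(\ref{In0-exact}). The following sketch (our illustration, with an assumed value of $\tilde{\omega}$) checks the ${\mathcal O}(N^{-1})$ decay of the remainder on the unit circle:

```python
# Sketch: check that the exact I_{N,0} of the lemma approaches
# 1/zeta + (1 - zeta)^N / conj(zeta) with an O(1/N) error on |1 - zeta| = 1.
import numpy as np

def I_N0_exact(N, zeta):
    return N / (N + 1) * (1 - (1 - zeta) ** (N + 1)) / zeta

wt = 0.3                       # assumed rescaled frequency, 0 < wt < 1/2
z = np.exp(2j * np.pi * wt)    # z = 1 - zeta on the unit circle
zeta = 1 - z

def err(N):
    asym = 1 / zeta + z ** N / np.conj(zeta)
    return abs(I_N0_exact(N, zeta) - asym)

# Doubling N should roughly halve the remainder.
assert err(2000) < err(1000)
assert err(1000) * 1000 < 10.0   # the remainder is O(1/N) with an O(1) constant
print(err(1000), err(2000))
```

Note that $1/\bar{\zeta} = -z/(1-z)$ on $|z|=1$, which is how the oscillating term arises from the exact geometric sum.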
Notably, even though for $k=1$ the non-oscillating term is subleading compared with the oscillating one, we shall argue that its contribution should still be kept. To proceed with the large-$N$ analysis of $I_{N,k}$, we first rewrite the integral Eq.~(\ref{i-n-k}) as a sum of two integrals \begin{eqnarray} \label{I-n-k-1-2} I_{N,k}(\zeta) = I_{N,k}^{(1)}(\zeta) + I_{N,k}^{(2)}(\zeta) \end{eqnarray} such that \begin{eqnarray} \label{i-n-k-1} I_{N,k}^{(1)}(\zeta) = N \int_{0}^{2\pi} \frac{d\varphi}{2\pi} \, \varphi^k \left( \Phi_N(\varphi;\zeta) - \Phi_N^{{\rm E}}(\varphi;\zeta) \right) \end{eqnarray} and \begin{eqnarray} \label{i-n-k-2} I_{N,k}^{(2)}(\zeta) = N \int_{0}^{2\pi} \frac{d\varphi}{2\pi} \, \varphi^k \, \Phi_N^{{\rm E}}(\varphi;\zeta). \end{eqnarray} Here, $\Phi_N^{{\rm E}}(\varphi;\zeta)$ is an arbitrary integrable function; it will be specified later on. Prompted by the `edge' and `bulk' asymptotic expansions of $\Phi_N(\varphi;\zeta)$ [Eqs.~(\ref{Edge-2}) and (\ref{Th-111})], we split the integral in Eq.~(\ref{i-n-k-1}) into three pieces \begin{eqnarray} \label{I-1} I_{N,k}^{(1)}(\zeta) = L^{(1)}_{N,k}(\zeta) + C^{(1)}_{N,k}(\zeta)+ R^{(1)}_{N,k}(\zeta), \end{eqnarray} where \begin{eqnarray}\label{L-i-1} L^{(1)}_{N,k}(\zeta) &=& N \int_{0}^{\Omega(N)/N} \frac{d\varphi}{2\pi} \, \varphi^k \left( \Phi_N(\varphi;\zeta) - \Phi_N^{{\rm E}}(\varphi;\zeta) \right),\\ \label{C-i-1} C^{(1)}_{N,k}(\zeta) &=& N \int_{\Omega(N)/N}^{2\pi - \Omega(N)/N} \frac{d\varphi}{2\pi} \, \varphi^k \left( \Phi_N(\varphi;\zeta) - \Phi_N^{{\rm E}}(\varphi;\zeta) \right),\\ \label{R-i-1} R^{(1)}_{N,k}(\zeta) &=& N \int_{2\pi - \Omega(N)/N}^{2\pi} \frac{d\varphi}{2\pi} \, \varphi^k \left( \Phi_N(\varphi;\zeta) - \Phi_N^{{\rm E}}(\varphi;\zeta) \right), \end{eqnarray} correspondingly.
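The mechanism whereby strong oscillations suppress the `bulk' contribution can be illustrated numerically on the leading `bulk' profile of Eq.~(\ref{Th-111}) (a sketch with assumed cutoff and frequency values; the exact $\Phi_N$ is not computed here):

```python
# Sketch: N times the 'bulk' integral of the oscillating profile
# e^{i wt N phi} |2 sin(phi/2)|^{-2 wt^2} stays O(1) as N grows,
# i.e. the bulk contributes at relative order 1/N.
import numpy as np
from scipy.integrate import quad

wt = 0.3   # assumed rescaled frequency, 0 < wt < 1/2
a = 0.5    # assumed fixed excision near the edges phi = 0 and 2*pi

def g(phi):
    return np.abs(2 * np.sin(phi / 2)) ** (-2 * wt**2)

def bulk(N):
    re, _ = quad(g, a, 2 * np.pi - a, weight='cos', wvar=wt * N, limit=500)
    im, _ = quad(g, a, 2 * np.pi - a, weight='sin', wvar=wt * N, limit=500)
    return N * abs(re + 1j * im)

for N in (100, 200, 400):
    assert bulk(N) < 20.0  # bounded, despite the explicit factor of N
print([round(bulk(N), 3) for N in (100, 200, 400)])
```

Each integration by parts of the oscillatory integral gains a factor $1/(\tilde\omega N)$, which compensates the explicit prefactor $N$.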
To facilitate the asymptotic analysis, we would ideally like to choose $\Phi_N^{{\rm E}}(\varphi;\zeta)$ in such a way that the contribution of the `bulk' integral $C^{(1)}_{N,k}(\zeta)$ into $I_{N,k}^{(1)}(\zeta)$ becomes negligible. For the time being, let us {\it assume} that such a function is given by the leading term in Eq.~(\ref{Th-111}), \begin{eqnarray} \label{ehrhardt} \Phi_N^{{\rm E}}(\varphi;\zeta) = N^{-2\tilde{\omega}^2} G_{\tilde{\omega}}\, e^{i \tilde{\omega} \varphi (N+1)} e^{-i\tilde{\omega}\pi} \left| 2 \sin \left( \frac{\varphi}{2} \right) \right|^{-2\tilde{\omega}^2}. \end{eqnarray} Then, $I_{N,k}^{(1)}(\zeta)$ will be dominated by the contributions coming from the `left-edge' [$L^{(1)}_{N,k}(\zeta)$] and the `right-edge' [$R^{(1)}_{N,k}(\zeta)$] parts of the integration domain. In fact, the contributions of the left and the right edges are related to each other; an exact relation between the two will be worked out and made explicit later on. \noindent\newline\newline {\it The integral $I_{N,k}^{(1)}(\zeta)$.}---Restricting ourselves to $k=0$ and $1$, we first consider the left-edge part $L_{N,k}^{(1)}(\zeta)$. Substituting Eqs.~(\ref{Edge-2}) and (\ref{ehrhardt}) into Eq.~(\ref{L-i-1}), we find, as $N\rightarrow \infty$: \begin{eqnarray}\label{eq:L1-asym} \fl L^{(1)}_{N,k}(\zeta) = N \int_0^{\Omega(N)/N} \frac{d\varphi}{2\pi} \varphi^k e^{i\tilde{\omega}\varphi} \Bigg[ \left(\frac{\sin(\varphi/2)}{\varphi/2}\right)^{-2\tilde{\omega}^2} \exp\left(\int_0^{-i N \varphi} \frac{ds}{s} \sigma(s) \right) \nonumber\\ \fl \qquad\qquad \times \left(1+\mathcal{O}(N^{-1+2\tilde{\omega}})\right) - N^{-2\tilde{\omega}^2} \left(2\sin(\varphi/2)\right)^{-2\tilde{\omega}^2} e^{i\tilde{\omega}\varphi N} e^{-i\tilde{\omega}\pi} G_{\tilde{\omega}} \Bigg]. 
\end{eqnarray} To get rid of $N$ in the integral over the Painlev\'e V transcendent, we make the substitution $\lambda=N \varphi$ to rewrite $ L^{(1)}_{N,k}(\zeta)$ in the form \begin{eqnarray} \fl L^{(1)}_{N,k}(\zeta) = \int_0^{\Omega(N)}\frac{d\lambda}{2\pi}\frac{\lambda^k}{N^k} e^{i\tilde{\omega}\lambda/N}\Bigg[ \left(\frac{\sin(\lambda/(2N))}{\lambda/(2N)}\right)^{-2\tilde{\omega}^2} \exp\left(\int_0^{-i \lambda} \frac{ds}{s} \sigma(s) \right) \nonumber\\ \fl \qquad\qquad \times \left(1+\mathcal{O}(N^{-1+2\tilde{\omega}})\right) -N^{-2\tilde{\omega}^2} \left(2\sin(\lambda/(2N))\right)^{-2\tilde{\omega}^2} e^{i\tilde{\omega}\lambda}e^{-i\tilde{\omega}\pi}G_{\tilde{\omega}} \Bigg]. \end{eqnarray} Noting that $\lambda/N=\mathcal{O}(\Omega(N)/N)$ tends to zero as $N\rightarrow \infty$, we can further approximate $L^{(1)}_{N,k}(\zeta)$ as \begin{eqnarray} \label{eq:L1Omega} \fl L^{(1)}_{N,k}(\zeta) =\frac{1}{N^k} \int_0^{\Omega(N)}\frac{d\lambda}{2\pi} \lambda^{k-2\tilde{\omega}^2} e^{i\tilde{\omega}\lambda} \Bigg[ \exp\left(\int_0^{-i \lambda} \frac{ds}{s} \sigma(s) -i\tilde{\omega}\lambda +2\tilde{\omega}^2 \ln\lambda \right) \nonumber\\ - e^{-i\tilde{\omega}\pi}G_{\tilde{\omega}} \Bigg] +\mathcal{O}(\Omega(N)^{k+1} N^{-k-1+2\tilde{\omega}})+\mathcal{O}(\Omega(N)^{k+2} N^{-k-1}). 
\end{eqnarray} Next, one may use Eq.~(\ref{global-T}) to argue that replacing $\Omega(N)$ with infinity in Eq.~(\ref{eq:L1Omega}) produces an error term of the order ${\mathcal O}(\Omega(N)^{k-1-2\tilde{\omega}^2} N^{-k})$: \begin{eqnarray} \label{eq:L1Omega-2} \fl L^{(1)}_{N,k}(\zeta) =\frac{1}{N^k} \int_0^{\infty}\frac{d\lambda}{2\pi} \lambda^{k-2\tilde{\omega}^2} e^{i\tilde{\omega}\lambda} \Bigg[ \exp\left(\int_0^{-i \lambda} \frac{ds}{s} \sigma(s) -i\tilde{\omega}\lambda +2\tilde{\omega}^2 \ln\lambda \right) \nonumber\\ - e^{-i\tilde{\omega}\pi}G_{\tilde{\omega}} \Bigg] +\mathcal{O}(\Omega(N)^{k+1} N^{-k-1+2\tilde{\omega}}) \nonumber\\ +\mathcal{O}(\Omega(N)^{k+2} N^{-k-1}) + {\mathcal O}(\Omega(N)^{k-1-2\tilde{\omega}^2} N^{-k}). \end{eqnarray} Further, choosing $\Omega(N)$ to be a slowly growing function, $\Omega(N) = \ln N$, one readily verifies that the third error term in Eq.~(\ref{eq:L1Omega-2}) is a dominant one out of the three as $0< \tilde{\omega} < 1/2$. Yet, it is smaller as compared to the integral in Eq.~(\ref{eq:L1Omega-2}) by a factor $\Omega(N)^{k-1-2\tilde{\omega}^2}$ that tends to zero as $N\rightarrow \infty$. Thus, in the leading order, we derive: \begin{eqnarray} \label{eq:L1Omega-lead-a} L^{(1)}_{N,k}(\zeta) = \frac{1}{N^k} \mathfrak{L}_{k}^{(1)} (\zeta) + o(N^{-k}), \end{eqnarray} where \begin{eqnarray} \fl \label{eq:L1Omega-lead-b} \mathfrak{L}_{k}^{(1)}(\zeta) = \int_0^{\infty}\frac{d\lambda}{2\pi} \lambda^{k-2\tilde{\omega}^2} e^{i\tilde{\omega}\lambda} \Bigg[ \exp\left(\int_0^{-i \lambda} \frac{ds}{s} \sigma(s) -i\tilde{\omega}\lambda +2\tilde{\omega}^2 \ln\lambda \right) - e^{-i\tilde{\omega}\pi}G_{\tilde{\omega}} \Bigg], \nonumber\\ {} \end{eqnarray} with $k=0$ and $1$. Now, let us turn to the `right-edge' integral $R_{N,k}^{(1)}(\zeta)$. 
Since the symmetry relation Eq.~(\ref{phin-sym}) is shared by $\Phi_N^{{\rm E}}(\varphi;\zeta)$ as well, we realize that the contributions of the left and the right edges are related to each other: \begin{eqnarray} \fl \label{rhs} \overline{ R^{(1)}_{N,k}(\zeta)} = N (1-\bar{\zeta})^N \int_{0}^{\Omega(N)/N} \frac{d\varphi}{2\pi} \, (2\pi- \varphi)^k \left( \Phi_N(\varphi;\zeta) - \Phi_N^{{\rm E}}(\varphi;\zeta)\right). \end{eqnarray} Considering the integral in the r.h.s.~of Eq.~(\ref{rhs}) along the lines of the previous analysis, we conclude that the following formula holds as $N\rightarrow \infty$: \begin{eqnarray} \label{eq:R1Omega-lead-a} R^{(1)}_{N,k}(\zeta) = (1-\zeta)^N \mathfrak{R}_k^{(1)}(\zeta) + o(1), \end{eqnarray} where \begin{eqnarray} \fl \label{eq:R1Omega-lead-b} \overline{\mathfrak{R}_k^{(1)}(\zeta)} = (2\pi)^k \int_0^{\infty}\frac{d\lambda}{2\pi} \lambda^{-2\tilde{\omega}^2} e^{i\tilde{\omega}\lambda} \Bigg[ \exp\left(\int_0^{-i \lambda} \frac{ds}{s} \sigma(s) -i\tilde{\omega}\lambda +2\tilde{\omega}^2 \ln\lambda \right) \nonumber\\ - e^{-i\tilde{\omega}\pi}G_{\tilde{\omega}} \Bigg], \end{eqnarray} with $k=0$ and $1$. Combining Eqs.~(\ref{eq:L1Omega-lead-a}),~(\ref{eq:L1Omega-lead-b}),~(\ref{eq:R1Omega-lead-a}) and (\ref{eq:R1Omega-lead-b}), we end up with the asymptotic result [Eq.~(\ref{I-1})] \begin{eqnarray} \label{as-exp} I_{N,k}^{(1)}(\zeta) \mapsto \frac{1}{N^k} \mathfrak{L}_{k}^{(1)} (\zeta) + (1-\zeta)^N \mathfrak{R}_k^{(1)}(\zeta). \end{eqnarray} The notation $\mapsto$ is used here to stress that the r.h.s.~retains the leading order contribution of each of the two terms, the oscillating and the non-oscillating one, as discussed in the paragraph prior to Eq.~(\ref{I-n-k-1-2}).
\noindent\newline\newline {\it The integral $I_{N,k}^{(2)}(\zeta)$.}---Because the function $\Phi_N^{\rm{E}}(\varphi;\zeta)$ contains a strongly oscillating factor $e^{i \tilde{\omega} \varphi N}$, the integral $I_{N,k}^{(2)}(\zeta)$ in Eq.~(\ref{i-n-k-2}) can be calculated by the stationary phase method \cite{T-2014}. Since there are no stationary points within the interval $(0,2\pi)$, the integral is dominated by contributions $L_{N,k}^{(2)}(\zeta)$ and $R_{N,k}^{(2)}(\zeta)$, coming from the vicinities of $\varphi=0$ and $\varphi=2\pi$, respectively. \begin{lemma} \label{L-INK2} Let $I_{N,k}^{(2)}(\zeta)$ be defined by Eqs.~(\ref{i-n-k-2}) and (\ref{ehrhardt}), where $k$ is a fixed non-negative integer. Then, as $N\rightarrow \infty$, it can be represented in the following form: \begin{eqnarray}\label{i-n-k-2-sp} I_{N,k}^{(2)}(\zeta) = L_{N,k}^{(2)}(\zeta) + R_{N,k}^{(2)}(\zeta) \end{eqnarray} where \begin{eqnarray}\label{eq:L2R2all} L^{(2)}_{N,k}(\zeta) &=& \frac{1}{N^k} \mathfrak{L}_k^{(2)} + o(N^{-k}), \\ \label{eq:L2R2all-2} R^{(2)}_{N,k}(\zeta) &=& (1-\zeta)^N \mathfrak{R}_k^{(2)} + o(1), \end{eqnarray} and \begin{eqnarray} \label{Lk2} \mathfrak{L}_k^{(2)}(\zeta) &=& \frac{G_{\tilde{\omega}}}{2\pi} \, e^{i\pi(k+1-2{\tilde{\omega}}-2{\tilde{\omega}}^2)/2} {\tilde{\omega}}^{-k-1+2{\tilde{\omega}}^2} \Gamma(k+1-2{\tilde{\omega}}^2),\\ \label{Rk2} \mathfrak{R}_k^{(2)}(\zeta) &=& \frac{G_{\tilde{\omega}}}{2\pi}\, e^{i\pi(-1+2{\tilde{\omega}}+2{\tilde{\omega}}^2)/2} {\tilde{\omega}}^{-1+2{\tilde{\omega}}^2} (2\pi)^{k} \Gamma(1-2{\tilde{\omega}}^2). \end{eqnarray} \end{lemma} \begin{proof} Apply the stationary phase method \cite{T-2014} to calculate the integral Eq.~(\ref{i-n-k-2}). \end{proof} Lemma \ref{L-INK2} yields the following asymptotic result \begin{eqnarray} I_{N,k}^{(2)}(\zeta) \mapsto \frac{1}{N^k} \mathfrak{L}_{k}^{(2)} (\zeta) + (1-\zeta)^N \mathfrak{R}_k^{(2)}(\zeta), \end{eqnarray} compare with Eq.~(\ref{as-exp}).
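The edge contributions behind Lemma~\ref{L-INK2} boil down to Gamma-function moments of the form $\int_0^\infty \lambda^{s-1} e^{i\tilde\omega\lambda}\, d\lambda = \Gamma(s)\,(-i\tilde\omega)^{-s}$, understood with a damping regulator. A numerical sketch of the regulated identity (our illustration; $\varepsilon$ is an assumed regulator, and in the $\varepsilon\to 0_+$ limit the formula reproduces the phases and powers of $\tilde\omega$ appearing in Eqs.~(\ref{Lk2}) and (\ref{Rk2})):

```python
# Sketch: check int_0^inf lam^{s-1} e^{(i*wt - eps)*lam} d lam
#         = Gamma(s) * (eps - i*wt)^(-s)   for eps > 0.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

wt, eps, k = 0.3, 0.2, 1          # assumed frequency, regulator, moment index
s = k + 1 - 2 * wt**2             # exponent entering the left-edge moment

def f(lam):
    return lam ** (s - 1) * np.exp((1j * wt - eps) * lam)

re, _ = quad(lambda l: f(l).real, 0, np.inf, limit=500)
im, _ = quad(lambda l: f(l).imag, 0, np.inf, limit=500)
numeric = re + 1j * im
closed = gamma(s) * (eps - 1j * wt) ** (-s)
assert abs(numeric - closed) < 1e-6
print(numeric, closed)
```

As $\varepsilon\to 0_+$, $(\varepsilon - i\tilde\omega)^{-s} \to \tilde\omega^{-s} e^{i\pi s/2}$, which is the source of the factors $e^{i\pi(k+1-2\tilde\omega^2)/2}\,\tilde\omega^{-k-1+2\tilde\omega^2}$ in the Lemma.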
\noindent\newline\newline {\it The integral $I_{N,k}(\zeta)$.}---The calculation above implies that the main integral of our interest admits an asymptotic representation \begin{eqnarray} \label{Ink-final} I_{N,k}(\zeta) \mapsto \frac{1}{N^k} \mathfrak{L}_k(\zeta) + (1-\zeta)^N \mathfrak{R}_k(\zeta) \end{eqnarray} with $k=0,1$ and \begin{eqnarray} \label{Lkf} \mathfrak{L}_k(\zeta) &=& \mathfrak{L}_k^{(1)}(\zeta)+ \mathfrak{L}_k^{(2)}(\zeta), \\ \label{Rkf} \mathfrak{R}_k(\zeta) &=& \mathfrak{R}_k^{(1)}(\zeta)+ \mathfrak{R}_k^{(2)}(\zeta). \end{eqnarray} We notice that both $\mathfrak{L}_k(\zeta)= \mathcal{O}(1)$ and $\mathfrak{R}_k(\zeta)= \mathcal{O}(1)$, while the factor $(1-\zeta)^N = z^N = e^{2 i \pi \tilde{\omega} N}$ in Eq.~(\ref{Ink-final}) is a strongly oscillating function of $\tilde{\omega}$ as $N \rightarrow \infty$, in concert with the discussion in the paragraph prior to Eq.~(\ref{I-n-k-1-2}). \begin{remark} Our derivation of the main result of this Section, Eq.~(\ref{Ink-final}), was based on the assumption that the choice of $\Phi_N^{\rm E}(\varphi;\zeta)$ in the form Eq.~(\ref{ehrhardt}) makes the contribution of the `bulk' integral $C_{N,k}^{(1)}(\zeta)$ [Eq.~(\ref{C-i-1})] into $I_{N,k}^{(1)}(\zeta)$ negligible. If this is {\it not} the case, one should replace $\Phi_N^{\rm E}$ with some $\tilde{\Phi}_N^{\rm E}$ by adding to $\Phi_N^{\rm E}$ the higher-order corrections (up to ${\mathcal O}(N^{-2})$) that can be obtained from the full asymptotic expansion of $\Phi_N(\varphi;\zeta)$, see Remark 1.4 of Ref.~\cite{DIK-2014}.
Inclusion of these higher-order corrections will reduce the contribution of $C_{N,k}^{(1)}(\zeta)$ to a negligible level as guaranteed by the rough upper-bound estimate \begin{eqnarray} \label{upper-b} |C_{N,k}^{(1)}(\zeta)| &=& N \left| \int_{\Omega(N)/N}^{2\pi - \Omega(N)/N} \frac{d\varphi}{2\pi} \, \varphi^k \left( \Phi_N(\varphi;\zeta) - \tilde{\Phi}_N^{{\rm E}}(\varphi;\zeta) \right) \right| \nonumber\\ &\le& N \int_{0}^{2\pi} \frac{d\varphi}{2\pi} \, \varphi^k \left| \Phi_N(\varphi;\zeta) - \tilde{\Phi}_N^{{\rm E}}(\varphi;\zeta) \right| = {\mathcal O}(N^{-1}). \end{eqnarray} On the other hand, the proposed modification of $\Phi_N^{\rm E}(\varphi;\zeta)$ will produce corrections to the functions $L_{N,1}^{(1)}$, $R_{N,1}^{(1)}$, $L_{N,1}^{(2)}$ and $R_{N,1}^{(2)}$, which will clearly be subleading to those calculated in the leading order [see Eqs. (\ref{eq:L1Omega-lead-a}), (\ref{eq:R1Omega-lead-a}), (\ref{eq:L2R2all}), (\ref{eq:L2R2all-2})]. For this reason, they will not affect the large-$N$ analysis of the power spectrum where only ${\mathcal O}(1)$ terms are kept. \hfill $\blacksquare$ \end{remark} \subsection{Proof of Theorem \ref{Th-4}} Now we are in a position to evaluate the power spectrum as $N\rightarrow \infty$. To proceed, we start with the exact, finite-$N$, representation \begin{eqnarray} \fl \label{Sn-In1} S_N(\omega) = \frac{(N+1)^2}{\pi N^2} {\rm Re}\left\{ \left( z \frac{\partial}{\partial z} - N - \frac{1-z^{-N}}{1-z}\right) \frac{z}{1-z} \, I_{N,1}(\zeta) \right\} - \dbtilde{S}_N(\omega) \end{eqnarray} following from Eqs.~(\ref{ps-tcue-1}) and (\ref{i-n-k}).
Substituting $I_{N,1}$ given by Eq.~(\ref{Ink-final}) into Eq.~(\ref{Sn-In1}) and taking into account the relation ${\mathfrak{R}}_{1}(\zeta)= 2\pi {\mathfrak{R}}_{0}(\zeta)$, following from Eqs.~(\ref{Rkf}), (\ref{Rk2}) and (\ref{eq:R1Omega-lead-b}), we derive, as $N\rightarrow \infty$: \begin{eqnarray} \fl \label{Snw} S_N(\omega) = - \frac{1}{\pi} \, {\rm Re}\, \Bigg\{ \frac{z}{1-z}\, {\mathfrak{L}}_{1}(\zeta)\Bigg\} + 2 {\rm Re}\, \Bigg\{ \frac{z}{1-z} \left( z^{N+1} \frac{d}{dz} {\mathfrak{R}}_{0}(\zeta)+\frac{{\mathfrak{R}}_{0}(\zeta)}{1-z} \right) \Bigg\} \nonumber\\ - 2 {\rm Re} \Bigg\{ \frac{(z-1)(1+z^N)}{|1-z|^4} \Bigg\} + o(1). \end{eqnarray} Here, the third term originates from the large-$N$ expansion of $\dbtilde{S}_N(\omega)$ [Eq.~(\ref{ps-tcue-3})]. Surprisingly, the last two terms in Eq.~(\ref{Snw}) cancel each other. This follows from the identity \begin{eqnarray}\label{R0} \mathfrak{L}_0(\zeta) = \overline{\mathfrak{R}_0(\zeta)} = \frac{1}{\zeta} \end{eqnarray} that can be identified by comparing Eq.~(\ref{i-n-0-asymp}) with Eq.~(\ref{Ink-final}) taken at $k=0$. The cancellation implies the $N\rightarrow \infty$ result \begin{eqnarray} \label{Snw-infty} S_\infty(\omega) = - \frac{1}{\pi} \, {\rm Re}\, \Bigg\{ \frac{z}{1-z}\, {\mathfrak{L}}_{1}(\zeta)\Bigg\}. 
\end{eqnarray} Substituting Eqs.~(\ref{Lkf}), (\ref{eq:L1Omega-lead-b}) and (\ref{Lk2}) into Eq.~(\ref{Snw-infty}), we derive \begin{eqnarray}\label{eq:SnResult} \fl S_\infty(\omega) = \frac{1}{\pi} \, {\rm Re}\, \Bigg\{ \frac{e^{2 i \pi\tilde{\omega}}}{e^{2 i \pi\tilde{\omega}}-1} \Bigg( \int_0^{\infty}\frac{d\lambda}{2\pi} \lambda^{1-2\tilde{\omega}^2} e^{i\tilde{\omega}\lambda} \Bigg[ \exp\Big(\int_0^{-i \lambda} \frac{ds}{s} \sigma(s) -i\tilde{\omega}\lambda +2\tilde{\omega}^2 \ln\lambda \Big) \nonumber\\ -e^{-i\tilde{\omega}\pi}G_{\tilde{\omega}} \Bigg] - \frac{G_{\tilde{\omega}}}{2\pi} e^{-i\pi(\tilde{\omega}+\tilde{\omega}^2)} \tilde{\omega}^{-2+2\tilde{\omega}^2} \Gamma(2-2\tilde{\omega}^2) \Bigg)\Bigg\}, \end{eqnarray} where $\tilde{\omega} = \omega/2\pi$ is a rescaled frequency, and the function $\sigma(s)$ is the fifth Painlev\'e transcendent defined by Eqs.~(\ref{PV-eq}), (\ref{bc-inf}) and (\ref{bc-zero}). Equation~(\ref{eq:SnResult}) can be simplified down to \begin{eqnarray}\label{eq:SnResult-2} \fl \qquad S_\infty(\omega)= {\mathcal A}(\tilde{\omega}) \, \Bigg\{ {\rm Im}\, \Bigg( \int_0^{\infty}\frac{d\lambda}{2\pi} \lambda^{1-2\tilde{\omega}^2} e^{i\tilde{\omega}\lambda} \nonumber\\ \times \left[ \exp\left(\int_{-i \infty}^{-i\lambda} \frac{ds}{s}\left( \sigma(s) + s \tilde{\omega} +2\tilde{\omega}^2 \right) \right) -1 \right] \Bigg) +{\mathcal B}(\tilde{\omega}) \Bigg\}, \end{eqnarray} where the functions ${\mathcal A}(\tilde\omega)$ and ${\mathcal B}(\tilde\omega)$ are defined as in Eqs.~(\ref{Aw-def}) and (\ref{Bw-def}). 
To derive Eq.~(\ref{eq:SnResult-2}) we have used the integral identity Eq.~(\ref{global}) to transform the exponent \begin{eqnarray}\fl \exp \left(\int_0^{-i\lambda} \frac{\sigma(s)}{s} ds - i \tilde{\omega}\lambda+2\tilde{\omega}^2\ln \lambda \right) = G_{\tilde{\omega}} e^{-i\pi\tilde{\omega}} \nonumber\\ \times \lim_{T\rightarrow\infty} \exp \left[ \int_{-i T}^{-i \lambda} \frac{\sigma(s)}{s} ds + i\tilde{\omega}(T-\lambda) + 2\tilde{\omega}^2 \ln(\lambda/T) \right]\nonumber\\ = G_{\tilde{\omega}} e^{-i\pi\tilde{\omega}} \exp \left[ \int_{-i \infty}^{-i \lambda} \frac{ds}{s} \left(\sigma(s)+\tilde{\omega} s +2\tilde{\omega}^2 \right) \right]. \end{eqnarray} Finally, we notice that $\sigma(s=-i t)=\sigma_1(t)$ satisfies Eq.~(\ref{PV-family}) with $\nu=1$ supplemented by the boundary conditions Eqs.~(\ref{bc-s1-infty}) and (\ref{bc-s1-zero}). With the help of this, we recover the statement of Theorem \ref{Th-4} from Eq.~(\ref{eq:SnResult-2}). \hfill $\square$ \begin{remark}\label{convergence} Note that the global condition Eq.~(\ref{global-T}) ensures that the expression in the square brackets in Eq.~(\ref{eq:SnResult}) exhibits ${\mathcal O}(\lambda^{-1})$ behavior as $\lambda \rightarrow \infty$. This guarantees that the external $\lambda$-integral in Eq.~(\ref{eq:SnResult}) converges for any $\tilde\omega \in (0, 1/2)$. \hfill $\blacksquare$ \end{remark} \begin{remark} Notice that Eq.~(\ref{R0}), combined with Eqs.~(\ref{Rkf}), (\ref{Rk2}) and (\ref{eq:R1Omega-lead-b}) taken at $k=0$, motivates the following conjecture. \hfill $\blacksquare$ \end{remark} \begin{conjecture} \label{conj} Let $0<\tilde{\omega}<1/2$ and let $\sigma(s)$ be the fifth Painlev\'e transcendent satisfying Eq.~(\ref{PV-eq}) and the boundary conditions Eqs.~(\ref{bc-inf})--(\ref{eq:gamma}).
Then the following double integral relation holds \begin{eqnarray}\label{eq:int_conjecture} \fl \int_0^{\infty}\frac{d\lambda}{2\pi} \lambda^{-2\tilde{\omega}^2} e^{i\tilde{\omega}\lambda} \left[ \exp\left(\int_0^{-i \lambda} \frac{ds}{s} \sigma(s) -i\tilde{\omega}\lambda +2\tilde{\omega}^2 \ln\lambda \right) -e^{-i\tilde{\omega}\pi}G_{\tilde{\omega}} \right] \nonumber \\ = \frac{1}{1-e^{2\pi i\tilde{\omega}}} -i \frac{G_{\tilde{\omega}}}{2\pi} e^{-i\pi(\tilde{\omega}+\tilde{\omega}^2)} \tilde{\omega}^{-1+2\tilde{\omega}^2} \Gamma(1-2\tilde{\omega}^2). \end{eqnarray} \hfill $\blacksquare$ \end{conjecture} \begin{remark} To extend the proof of Theorem \ref{Th-4} to $\omega=\pi$, one would have to use Theorem~1.12 of Ref.~\cite{CK-2015} instead of Theorems 1.5, 1.8 and 1.11 of the same paper. Since numerical calculations indicate that the power spectrum is continuous at $\omega=\pi$, we did not study this case analytically. \hfill $\blacksquare$ \end{remark} \subsection{Proof of Theorem \ref{Th-5}} Below, the universal law $S_\infty(\omega)$ for the power spectrum will be studied in the vicinity of $\omega=0$. In the language of $S_N(\omega)$ this corresponds to performing a small-$\omega$ expansion after taking the limit $N\rightarrow \infty$. Equation (\ref{PS-exact}) will be the starting point of our analysis.
\newline\newline\noindent {\it Preliminaries.}---Being interested in the small-$\tilde\omega$ behavior of the power spectrum Eq.~(\ref{PS-exact}), we observe that the functions ${\mathcal A}(\tilde{\omega})$ and ${\mathcal B}(\tilde{\omega})$, defined by Eqs.~(\ref{Aw-def}) and (\ref{Bw-def}), admit the expansions \begin{eqnarray} {\mathcal A}(\tilde{\omega}) =\frac{1}{2 \pi^2 \tilde{\omega}} + \left( \frac{1}{6} - \frac{1+\gamma}{\pi^2} \right) \tilde{\omega} + {\mathcal O}(\tilde\omega^3), \end{eqnarray} \begin{eqnarray} {\mathcal B}(\tilde{\omega}) =\frac{1}{2} + \tilde{\omega}^2 \ln \tilde{\omega} + (\gamma-1)\, \tilde{\omega}^2 + {\mathcal O}(\tilde\omega^4 \ln^2\tilde{\omega}), \end{eqnarray} so that the power spectrum, as $\tilde{\omega} \rightarrow 0$, can be written as \begin{eqnarray} \label{S_inf_exp} \fl S_\infty(\omega) = \frac{1}{4 \pi^2 \tilde{\omega}} + \left( \frac{1}{12} - \frac{1}{\pi^2} \right) \tilde{\omega} + \frac{1}{2 \pi^2} \tilde\omega \ln \tilde\omega \nonumber\\ + \left\{ \frac{1}{2 \pi^2 \tilde\omega} + \left( \frac{1}{6} - \frac{1+\gamma}{\pi^2} \right) \tilde\omega + {\mathcal O} (\tilde\omega^3) \right\} \hat{\Lambda}(\tilde\omega) + {\mathcal O}(\tilde\omega^3 \ln^2\tilde\omega). \end{eqnarray} Here, $\hat{\Lambda}(\tilde\omega)$ denotes a small-$\omega$ expansion of the function \begin{eqnarray} \label{eq:Lambda}\fl \Lambda(\tilde\omega) = {\rm Im} \int_{0}^{\infty} \frac{d\lambda}{2\pi} \, \lambda^{1-2\tilde{\omega}^2} \, e^{i\tilde{\omega} \lambda} \left[ \exp \left( - \int_{\lambda}^{\infty} \frac{dt}{t} \left( \sigma_1(t) - i \tilde{\omega} t + 2\tilde{\omega}^2\right) \right) -1 \right], \end{eqnarray} such that \begin{eqnarray} \label{approx-def} \Lambda(\tilde\omega) = \hat{\Lambda}(\tilde\omega) + {\mathcal O}(\tilde\omega^3), \end{eqnarray} see Eqs.~(\ref{PS-exact}) and (\ref{S_inf_exp}). 
Notice that convergence of the external $\lambda$-integral at infinity is ensured by the oscillating factor $e^{i\tilde\omega \lambda}$. \newline\newline \noindent {\it Small-$\tilde\omega$ ansatz for the fifth Painlev\'e transcendent.}---To proceed, we postulate the following ansatz for a small-$\tilde\omega$ expansion of the fifth Painlev\'e function $\sigma_1(t)$: \begin{eqnarray} \label{sigma-expan} \sigma_1(t) = \tilde{\omega} f_1(t) + \tilde{\omega}^2 f_2(t) + \tilde{\omega}^3 f_3(t)+ \cdots. \end{eqnarray} Here, the functions $f_k(t)$ with $k=1,2,\dots$ satisfy the equations \begin{eqnarray} \label{fk-eqns} t^2 f_k^{\prime\prime\prime} + t f_k^{\prime\prime} + (t^2-4) f_k^{\prime} - t f_k(t) = F_k(t), \end{eqnarray} where \begin{eqnarray}\label{FL-1} F_1(t) &=& 0, \\ \label{FL-2} F_2(t) &=& 4 f_1(t) f_1^\prime - 6 t (f_1^\prime)^2, \\ \label{FL-3} F_3(t) &=& 4 f_1(t) f_2^\prime + 4 f_1^\prime f_2(t) - 12 t f_1^\prime f_2^\prime, \end{eqnarray} etc. The above can easily be checked by substituting Eq.~(\ref{sigma-expan}) into the Chazy form \cite{C-1911,C-2000} \begin{eqnarray} \label{PV-chazy} t^2 \sigma_\nu^{\prime\prime\prime} + t \sigma_\nu^{\prime\prime} + 6 t (\sigma_\nu^{\prime})^2 - 4 \sigma_\nu \sigma_\nu^{\prime} + (t^2 - 4\nu^2) \sigma_\nu^{\prime} - t \sigma_\nu = 0 \end{eqnarray} of the Painlev\'e V equation Eq.~(\ref{PV-family}) taken at $\nu=1$. The boundary conditions are generated by Eqs.~(\ref{bc-s1-infty}) and (\ref{bc-s1-zero}): \begin{eqnarray} \label{f1-bc} f_1(t)\rightarrow 0 \; {\rm as}\; t\rightarrow 0, \qquad f_1(t) = it + o(t) \;{\rm as\;} t\rightarrow +\infty, \end{eqnarray} \begin{eqnarray} \label{f2-bc} f_2(t)\rightarrow 0 \; {\rm as}\; t\rightarrow 0, \qquad f_2(t) \rightarrow -2 \;{\rm as\;} t\rightarrow +\infty, \end{eqnarray} \begin{eqnarray} \label{f3-bc} f_3(t)\rightarrow 0 \; {\rm as}\; t\rightarrow 0, \qquad f_3(t) \rightarrow 0 \;{\rm as\;} t\rightarrow +\infty.
\end{eqnarray} The third-order differential equation (\ref{fk-eqns}) can be solved to give \begin{eqnarray} f_k(t) &=& \left( t -\frac{2}{t}\right) \left( c_{1,k} + \int_{0}^{t} \frac{dx}{x^3} F_k(x) \right)\nonumber\\ &+& \frac{e^{it}}{t} \left( c_{2,k} + \int_{0}^{t} \frac{dx}{2 x^3} e^{-i x} (-x^2+ 2i x+2) \, F_k(x) \right) \nonumber\\ &+& \frac{e^{-it}}{t} \left( c_{3,k} + \int_{0}^{t} \frac{dx}{2 x^3} e^{i x} (-x^2 - 2i x+2) \, F_k(x) \right). \end{eqnarray} This representation assumes that the integrals are convergent; the integration constants have to be fixed by the boundary conditions Eqs.~(\ref{f1-bc}), (\ref{f2-bc}), (\ref{f3-bc}), etc. In particular, we derive: \begin{eqnarray} \label{eq:f1} f_1(t) = i \frac{t^2 +2\cos t -2}{t}, \end{eqnarray} \begin{eqnarray} \label{eq:f2} f_2(t) &=& -2 - \frac{6}{t^2} + \frac{2\pi}{t} - \pi t + 2 \cos t + 8 \frac{\cos t}{t^2} -\frac{2\pi}{t} \cos t \nonumber\\ &-& 2 \frac{\cos(2 t)}{t^2} + 8 \gamma \frac{\sin t}{t} - 8 {\rm Ci}(t) \frac{\sin t}{t} + 8 \ln t \frac{\sin t}{t} \nonumber\\ &-& 4 \frac{{\rm Si}(t)}{t} + 2 t {\rm Si}(t) + 4 \cos t \frac{{\rm Si}(t)}{t}, \end{eqnarray} where \begin{eqnarray} \label{euler} \gamma = \lim_{n\rightarrow \infty} \left( - \ln n + \sum_{k=1}^n \frac{1}{k} \right) \simeq 0.577216 \end{eqnarray} is Euler's constant, and \begin{eqnarray} \label{eq:f3} f_3(t) &=& \left( \frac{2}{t}-t \right) \int_{t}^{\infty} \frac{dx}{x^3} F_3(x) - 2 \frac{\cos t}{t} \int_{0}^{\infty} \frac{dx}{x^3} F_3(x) \nonumber\\ &+& i {\rm Im} \left\{ \frac{e^{it}}{t} \int_{0}^{t} \frac{dx}{x^3} e^{-i x} (-x^2+ 2i x+2) \, F_3(x) \right\}. \end{eqnarray} Here, the function $F_3(t)$ is known explicitly from Eqs.~(\ref{FL-3}), (\ref{eq:f1}) and (\ref{eq:f2}). We notice that $$ f_1(t) \in i\mathbb{R}, \quad f_2(t) \in \mathbb{R}, \quad f_3(t) \in i\mathbb{R}, $$ and $$ F_2(t) \in \mathbb{R}, \quad F_3(t) \in i\mathbb{R}.
$$ \newline \noindent {\it Representation of $\hat{\Lambda}(\tilde\omega)$ as a partial sum}.---Having determined the functions $f_1(t)$, $f_2(t)$ and $f_3(t)$, we now turn to the small-$\tilde\omega$ analysis of $\Lambda(\tilde{\omega})$ [Eq.~(\ref{eq:Lambda})]. Expanding the expression in square brackets in powers of $\tilde\omega$, we obtain: \begin{eqnarray} \fl \label{bra-1}\ \exp \left( - \int_{\lambda}^{\infty} \frac{dt}{t} \left( \sigma_1(t) - i \tilde{\omega} t + 2\tilde{\omega}^2\right) \right) -1 &=& -\tilde\omega {\mathcal G}_1(\lambda) \nonumber\\ &-& \tilde\omega^2 {\mathcal G}_2(\lambda) - \tilde\omega^3 {\mathcal G}_3(\lambda) - \cdots. \end{eqnarray} The functions ${\mathcal G}_k(\lambda)$ can be evaluated explicitly in terms of integrals containing the functions $f_k$ defined in Eq.~(\ref{sigma-expan}). For example, \begin{eqnarray} \label{G1} {\mathcal G}_1 (\lambda) &=& {\mathcal F}_1 (\lambda),\\ \label{G2} {\mathcal G}_2 (\lambda) &=& - \frac{1}{2} {\mathcal F}_1^2(\lambda) + {\mathcal F}_2(\lambda),\\ \label{G3} {\mathcal G}_3 (\lambda) &=& {\mathcal F}_3 (\lambda) - {\mathcal F}_1(\lambda){\mathcal F}_2(\lambda) + \frac{1}{6} {\mathcal F}_1^3(\lambda). \end{eqnarray} Here, \begin{eqnarray} \label{F1-def} {\mathcal F}_1(\lambda) = \int_\lambda^\infty \frac{dt}{t} (f_1(t) - i t) = -2 i \left( \frac{1-\cos\lambda}{\lambda} + \frac{\pi}{2} - {\rm Si}(\lambda) \right), \end{eqnarray} \begin{eqnarray} \label{F2-def} {\mathcal F}_2(\lambda) = \int_\lambda^\infty \frac{dt}{t} (f_2(t) +2), \end{eqnarray} and \begin{eqnarray} \label{F3-def} {\mathcal F}_3(\lambda) = \int_\lambda^\infty \frac{dt}{t} \, f_3(t). \end{eqnarray} Notice that $$ {\mathcal F}_1(\lambda) \in i \mathbb{R},\quad {\mathcal F}_2(\lambda) \in \mathbb{R},\quad {\mathcal F}_3(\lambda) \in i\mathbb{R} $$ and, hence, $$ {\mathcal G}_1(\lambda) \in i \mathbb{R},\quad {\mathcal G}_2(\lambda) \in \mathbb{R},\quad {\mathcal G}_3(\lambda) \in i\mathbb{R}.
$$ Substituting Eq.~(\ref{bra-1}) into Eq.~(\ref{eq:Lambda}), we split $\Lambda (\tilde\omega)$ into a partial sum \begin{eqnarray} \label{L-partial} \Lambda (\tilde\omega) = \Lambda_1 (\tilde\omega) + \Lambda_2 (\tilde\omega) + \Lambda_3 (\tilde\omega)+ \cdots, \end{eqnarray} where \begin{eqnarray}\label{Lam-K} \Lambda_k (\tilde\omega) = -\tilde{\omega}^k {\rm Im} \int_{0}^{\infty} \frac{d\lambda}{2\pi} \, \lambda^{1-2\tilde{\omega}^2} \, e^{i\tilde{\omega} \lambda} \mathcal{G}_k(\lambda). \end{eqnarray} A small-$\tilde\omega$ expansion of $\Lambda_k (\tilde\omega)$ is of our immediate interest. \noindent\newline\newline {\it Calculation of $\hat{\Lambda}_1(\tilde\omega)$.}---Equations (\ref{Lam-K}), (\ref{G1}) and (\ref{F1-def}) yield \begin{eqnarray} \Lambda_1 (\tilde\omega) = 2 \tilde{\omega} \int_{0}^{\infty} \frac{d\lambda}{2\pi} \, \lambda^{1-2\tilde{\omega}^2} \, \cos(\tilde{\omega} \lambda) \left( \frac{1-\cos\lambda}{\lambda} + \frac{\pi}{2} - {\rm Si}(\lambda) \right). \end{eqnarray} Performing the integral, we obtain: \begin{eqnarray} \fl \Lambda_1 (\tilde\omega) = \frac{1}{\pi}\tilde{\omega}^3 \Gamma(-2 \tilde{\omega}^2) \sin(\pi \tilde\omega^2) \Bigg\{ (1-\tilde\omega)^{2\tilde\omega^2-1} + (1+\tilde\omega)^{2\tilde\omega^2-1} - 2 \tilde\omega^{2\tilde\omega^2-1} \nonumber\\ - \frac{1-2\tilde\omega^2}{1-\tilde\omega^2} \, _3F_2\left(1-\tilde\omega^2,1-\tilde\omega^2,\frac{3}{2}-\tilde\omega^2;\frac{1}{2},2-\tilde\omega^2;\tilde\omega^2\right) \Bigg\}. \end{eqnarray} Its small-$\tilde\omega$ expansion $\Lambda_1 (\tilde\omega) = \tilde\omega^2 + {\mathcal O}(\tilde\omega^3)$ brings \begin{eqnarray} \label{L1-hat} \hat{\Lambda}_1 (\tilde\omega) = \tilde\omega^2, \end{eqnarray} see Eq.~(\ref{approx-def}) for the definition of $\hat{\Lambda}(\omega)$. 
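As a numerical sanity check of the expansion $\Lambda_1(\tilde\omega) = \tilde\omega^2 + {\mathcal O}(\tilde\omega^3)$, the closed form above can be evaluated with arbitrary-precision arithmetic; a sketch using mpmath, where the generic \texttt{hyper} routine plays the role of $_3F_2$:

```python
from mpmath import mp, mpf, gamma, sin, pi, hyper

mp.dps = 30

def Lambda1(w):
    """Closed-form Lambda_1 in terms of Gamma, sin and 3F2."""
    w2 = w**2
    brace = ((1 - w)**(2*w2 - 1) + (1 + w)**(2*w2 - 1) - 2*w**(2*w2 - 1)
             - (1 - 2*w2)/(1 - w2)
             * hyper([1 - w2, 1 - w2, mpf(3)/2 - w2], [mpf(1)/2, 2 - w2], w2))
    return gamma(-2*w2)*sin(pi*w2)*w**3*brace/pi

w = mpf('0.01')
ratio = Lambda1(w)/w**2
print(ratio)  # close to 1, consistent with Lambda_1 = w^2 + O(w^3)
```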
\newline\newline\noindent {\it Estimate of ${\Lambda}_k(\tilde\omega)$.}---To treat $\Lambda_k(\tilde\omega)$ for $k \ge 2$, we split it into two parts \begin{eqnarray} \label{A-B} \Lambda_k(\tilde\omega) = A_k(\tilde\omega, T) + B_k(\tilde\omega, T), \end{eqnarray} where \begin{eqnarray} \label{a-k} A_k(\tilde\omega, T) &=& - \tilde\omega^k {\rm Im} \int_{0}^{T} \frac{d\lambda}{2\pi} \, \lambda^{1-2\tilde{\omega}^2} \, e^{i\tilde{\omega} \lambda} \mathcal{G}_k(\lambda),\\ \label{b-k} B_k(\tilde\omega, T) &=& - \tilde\omega^k {\rm Im}\int_{T}^{\infty} \frac{d\lambda}{2\pi} \, \lambda^{1-2\tilde{\omega}^2} \, e^{i\tilde{\omega} \lambda} \mathcal{G}_k(\lambda). \end{eqnarray} Here, $T$ is an arbitrary positive number to be taken to infinity in the end. Since a small-$\tilde\omega$ expansion of $A_k(\tilde\omega, T)$ is well justified for any finite $T$, see e.g. Eq.~(\ref{A2-exp}) below, we conclude that \begin{eqnarray} \label{A3-w3} A_k(\tilde\omega, T) = {\mathcal O}(\tilde\omega^k). \end{eqnarray} To estimate $B_k(\tilde\omega, T)$, we refer to Remark \ref{convergence} which implies that ${\mathcal G}_k(\lambda) = {\mathcal O}(\lambda^{-1})$ as $\lambda \rightarrow \infty$. Replacing ${\mathcal G}_k(\lambda)$ with $1/\lambda$ in Eq.~(\ref{b-k}), we perform the integration by parts twice in the resulting integral \begin{eqnarray} \fl \label{one-more-f} \int_{T}^{\infty} \frac{d\lambda}{2\pi} \, \frac{e^{i\tilde{\omega} \lambda}}{\lambda^{2\tilde{\omega}^2}} = -\frac{e^{i\tilde\omega T}}{2 i\pi\tilde\omega} T^{-2\tilde\omega^2} + \frac{1}{\pi} e^{i\tilde\omega T} T^{-1-2\tilde\omega^2} - 2 (1+2\tilde\omega^2) \int_{T}^{\infty} \frac{d\lambda}{2\pi} \frac{e^{i\tilde\omega \lambda}}{\lambda^{2+2\tilde\omega^2}} \end{eqnarray} to conclude that it is of order ${\mathcal O}(\tilde\omega^{-1})$. This entails \begin{eqnarray} \label{Bk-wk} B_k(\tilde\omega, T) = {\mathcal O}(\tilde\omega^{k-1}). 
\end{eqnarray} Since we are interested in calculating $\Lambda(\tilde\omega)$ up to the terms ${\mathcal O}(\tilde\omega^3)$, see Eq.~(\ref{approx-def}), we need to consider $A_k(\tilde\omega, T)$ and $B_{k+1}(\tilde\omega, T)$ for $k \le 2$ only. \noindent\newline\newline {\it Calculation of $\hat{\Lambda}_2(\tilde\omega)$.}--- A small-$\tilde\omega$ expansion of $A_2(\tilde\omega, T)$ brings \begin{eqnarray} \fl\label{A2-exp} A_2(\tilde\omega, T) = - \tilde\omega^2 {\rm Im} \int_{0}^{T} \frac{d\lambda}{2\pi} \, \lambda \left( 1 + i \tilde\omega \lambda - 2 \tilde\omega^2 \ln \lambda -\frac{1}{2} \tilde \omega^2 \lambda^2 + {\mathcal O}(\tilde\omega^3) \right) \mathcal{G}_2(\lambda). \end{eqnarray} Since $\mathcal{G}_2(\lambda) \in \mathbb{R}$, we even conclude that \begin{eqnarray} \label{A2-w3} A_2(\tilde\omega, T) &=& {\mathcal O}(\tilde\omega^3). \end{eqnarray} For this reason, $A_2(\tilde\omega, T)$ does not contribute to $\hat{\Lambda}_2(\tilde\omega)$. Evaluation of $B_2(\tilde\omega, T)$, given by \begin{eqnarray} \label{B2-def} B_2(\tilde\omega, T) &=& - \tilde\omega^2 \int_{T}^{\infty} \frac{d\lambda}{2\pi} \, \lambda^{1-2\tilde{\omega}^2} \, \sin(\tilde{\omega} \lambda) \mathcal{G}_2(\lambda), \end{eqnarray} is more involved. A simplification comes from the fact that, at some point, we shall let $T$ tend to infinity. For this reason, it suffices to consider a large-$\lambda$ expansion of $\mathcal{G}_2(\lambda)$ in the integrand. Straightforward calculations bring \begin{eqnarray}\label{F12-asym} \mathcal{F}_1(\lambda) = -\frac{2i}{\lambda} - 2i \frac{\sin\lambda}{\lambda^2} + \mathcal{O}\left(\frac{\cos\lambda}{\lambda^3}\right), \\ \label{F2-asym} \mathcal{F}_2 (\lambda) = -\frac{6}{\lambda^2} + 8 \frac{\cos\lambda \ln \lambda}{\lambda^2} + 2 (4\gamma-1) \frac{\cos\lambda}{\lambda^2} + \mathcal{O}\left( \frac{\ln\lambda}{\lambda^3}\right). \end{eqnarray} Equation (\ref{F12-asym}) is furnished by the large-$\lambda$ expansion of Eq.~(\ref{F1-def}). 
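The expansion in Eq.~(\ref{F12-asym}) can be checked against the closed form in Eq.~(\ref{F1-def}); a numerical sketch, with mpmath's \texttt{si} playing the role of the sine integral ${\rm Si}$:

```python
from mpmath import mp, mpf, cos, sin, pi, si

mp.dps = 20

def F1_over_minus_2i(lam):
    # F_1(lambda)/(-2i) from the closed form Eq. (F1-def)
    return (1 - cos(lam))/lam + pi/2 - si(lam)

def F1_asym_over_minus_2i(lam):
    # leading terms of Eq. (F12-asym), divided by -2i: 1/lam + sin(lam)/lam^2
    return 1/lam + sin(lam)/lam**2

lam = mpf(50)
exact = F1_over_minus_2i(lam)
asym = F1_asym_over_minus_2i(lam)
print(exact, asym)  # the difference is O(cos(lam)/lam^3)
```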
To derive Eq.~(\ref{F2-asym}), we first calculated the integral Eq.~(\ref{F2-def}) replacing an integrand therein with its large-$t$ asymptotics, and then expanded the resulting expression in parameter $\lambda \rightarrow \infty$. By virtue of Eq.~(\ref{G2}), this yields \begin{eqnarray} \label{G2-exp} \mathcal{G}_2(\lambda) = -\frac{4}{\lambda^2} + 8 \frac{\cos\lambda \ln \lambda}{\lambda^2} + 2 (4\gamma-1) \frac{\cos\lambda}{\lambda^2} + \mathcal{O}\left( \frac{\ln\lambda}{\lambda^3}\right). \end{eqnarray} The expansion Eq.~(\ref{G2-exp}), being substituted into Eq.~(\ref{B2-def}), generates two families of integrals: \begin{eqnarray} \label{I-j-def} {\mathcal I}_j(\tilde\omega, T) &=& \int_{T}^{\infty} \frac{d\lambda}{2\pi} \, \frac{\sin[(\tilde{\omega} +j) \lambda]}{\lambda^{1+2\tilde{\omega}^2}} \end{eqnarray} with $j=0, \pm 1$ and \begin{eqnarray} \label{K-j-def} {\mathcal K}_j(\tilde\omega, T) &=& \int_{T}^{\infty} \frac{d\lambda}{2\pi} \,\ln \lambda \, \frac{\sin[(\tilde{\omega} +j) \lambda]}{\lambda^{1+2\tilde{\omega}^2}} \end{eqnarray} with $j=\pm 1$, such that \begin{eqnarray} \label{B2-sum} B_2(\tilde\omega, T) &=& \tilde\omega^2 \Big\{ 4 {\mathcal I}_0(\tilde\omega, T) - (4 \gamma-1)[{\mathcal I}_{-1}(\tilde\omega, T) + {\mathcal I}_1(\tilde\omega, T)] \nonumber\\ &-& 4 [{\mathcal K}_{-1}(\tilde\omega, T) + {\mathcal K}_1(\tilde\omega, T)]\Big\}. \end{eqnarray} To determine a small-$\tilde\omega$ expansion of $B_2(\tilde\omega, T)$, we shall further concentrate on small-$\tilde\omega$ expansions of its constituents, ${\mathcal I}_{0}(\tilde\omega, T)$, ${\mathcal I}_{\pm 1}(\tilde\omega, T)$ and ${\mathcal K}_{\pm1}(\tilde\omega, T)$. 
\noindent\newline\newline {\it (a)}.---The function ${\mathcal I}_{0}(\tilde\omega, T)$ can be evaluated exactly, \begin{eqnarray} \fl \qquad \qquad {\mathcal I}_0(\tilde\omega, T) = \frac{1}{4\pi} \sin(\pi \tilde\omega^2)\, \tilde\omega^{2\tilde\omega^2-2}\Gamma(1-2 \tilde\omega^2) \nonumber\\ - \frac{1}{2\pi} T^{1-2\tilde\omega^2} \frac{\tilde\omega}{1- 2 \tilde\omega^2} \, {}_1F_2\left(\frac{1}{2}-\tilde\omega^2;\frac{3}{2}, \frac{3}{2}-\tilde\omega^2; - \frac{T^2}{4}\tilde\omega^2\right). \end{eqnarray} Expanding this result around $\tilde\omega=0$ we derive \begin{eqnarray} \label{I-0} {\mathcal I}_0(\tilde\omega, T) = \frac{1}{4} + {\mathcal O}(\tilde\omega). \end{eqnarray} \noindent\newline {\it (b)}.---To analyze a small-$\tilde\omega$ expansion \begin{eqnarray} {\mathcal I}_{j\neq 0}(\tilde\omega, T)=\alpha_0(j,T) + \tilde\omega\, \alpha_1(j,T) + {\mathcal O}(\tilde\omega^2), \end{eqnarray} we proceed in two steps. First, we determine the coefficient $\alpha_0(j,T)$ directly from Eq.~(\ref{I-j-def}) \begin{eqnarray} \label{I-j-def-0} \alpha_0(j,T) = {\mathcal I}_{j\neq 0}(0, T) &=& \int_{T}^{\infty} \frac{d\lambda}{2\pi} \, \frac{\sin(j \lambda)}{\lambda} < \infty, \quad \forall\, T>0, \end{eqnarray} to deduce the relation $(j \neq 0)$ \begin{eqnarray} \alpha_0(-j,T) = -\alpha_0(j,T). 
\end{eqnarray} Second, to determine a linear term of a small-$\tilde\omega$ expansion of ${\mathcal I}_{j\neq 0}(\tilde\omega, T)$, we perform integration by parts in Eq.~(\ref{I-j-def}) to derive the representation \begin{eqnarray} \fl \label{I-j-def-alt} {\mathcal I}_{j\neq 0}(\tilde\omega, T) = T^{-1-2\tilde\omega^2} \frac{\cos[(\tilde\omega +j)T]}{2 \pi (\tilde\omega +j)} - \frac{1+2 \tilde\omega^2}{\tilde\omega +j} \int_{T}^{\infty} \frac{d\lambda}{2 \pi} \frac{\cos[(\tilde\omega +j)\lambda]}{\lambda^{2 + 2 \tilde\omega^2}} \end{eqnarray} whose integral term possesses a better convergence when $\tilde\omega$ approaches zero, as compared to the one given by Eq.~(\ref{I-j-def}). Differentiating Eq.~(\ref{I-j-def-alt}) with respect to $\tilde\omega$ and setting $\tilde\omega=0$ we derive: \begin{eqnarray} \fl \label{I-j-der} \alpha_1(j,T) = \frac{d \, {\mathcal I}_{j \neq 0}(\tilde\omega, T)}{d\tilde{\omega}}\Bigg|_{\tilde\omega =0} &=& -\frac{1}{2\pi j} \left( \sin(j T) + \frac{\cos(jT)}{j T} \right) + \frac{1}{j} \int_{T}^{\infty} \frac{d\lambda}{2\pi}\frac{\sin(j\lambda)}{\lambda} \nonumber\\ &+& \frac{1}{j^2} \int_{T}^{\infty} \frac{d\lambda}{2\pi}\frac{\cos(j\lambda)}{\lambda^2} < \infty, \quad \forall\, T>0. \end{eqnarray} This implies the relation $(j \neq 0)$ \begin{eqnarray} \alpha_1(-j,T) = \alpha_1(j,T). \end{eqnarray} As a consequence, we conclude that \begin{eqnarray} \label{I-sum} {\mathcal I}_{-1}(\tilde\omega, T) + {\mathcal I}_{1}(\tilde\omega, T) = {\mathcal O}(\tilde\omega). \end{eqnarray} (It is this particular combination that appears in Eq.~(\ref{B2-sum}).) \noindent\newline\newline {\it (c)}.---To examine a small-$\tilde \omega$ expansion \begin{eqnarray} {\mathcal K}_{j\neq 0}(\tilde\omega, T)= \kappa_0(j,T)+ \tilde\omega \,\kappa_1(j,T) +{\mathcal O}(\tilde\omega^2), \end{eqnarray} we follow the same strategy. 
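The exact evaluation of ${\mathcal I}_0(\tilde\omega, T)$ quoted above can also be verified numerically; a sketch in which mpmath's \texttt{quadosc} handles the conditionally convergent oscillatory tail and the generic \texttt{hyper} plays the role of $_1F_2$:

```python
from mpmath import mp, mpf, sin, pi, gamma, hyper, quadosc, inf

mp.dps = 25

def I0_integral(w, T):
    # Direct evaluation of Eq. (I-j-def) with j = 0
    f = lambda lam: sin(w*lam)/lam**(1 + 2*w**2)/(2*pi)
    return quadosc(f, [T, inf], omega=w)

def I0_closed(w, T):
    # Exact evaluation quoted in the text
    w2 = w**2
    return (sin(pi*w2)*w**(2*w2 - 2)*gamma(1 - 2*w2)/(4*pi)
            - T**(1 - 2*w2)*w/(2*pi*(1 - 2*w2))
            * hyper([mpf(1)/2 - w2], [mpf(3)/2, mpf(3)/2 - w2], -T**2*w2/4))

w, T = mpf('0.3'), mpf(2)
a, b = I0_integral(w, T), I0_closed(w, T)
print(a, b)  # the two evaluations agree
```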
First, we determine the coefficient $\kappa_0(j, T)$ directly from Eq.~(\ref{K-j-def}) \begin{eqnarray} \label{K-j-def-0} \fl \kappa_0(j, T)= {\mathcal K}_{j\neq 0}(0, T) = \int_{T}^{\infty} \frac{d\lambda}{2\pi} \,\ln\lambda \frac{\sin(j \lambda)}{\lambda} < \infty, \quad \forall\, T>0, \end{eqnarray} to observe the relation $(j \neq 0)$ \begin{eqnarray} \kappa_0(-j,T) = -\kappa_0(j,T). \end{eqnarray} Second, to examine a linear term of a small-$\tilde\omega$ expansion of ${\mathcal K}_{j\neq 0}(\tilde\omega, T)$, we perform integration by parts in Eq.~(\ref{K-j-def}) in order to improve integral's convergence: \begin{eqnarray} \fl \label{K-j-parts} {\mathcal K}_{j \neq 0}(\tilde\omega,T) = \frac{\ln T}{2\pi (\tilde\omega+j)} \frac{\cos[(\tilde\omega +j)T]}{T^{1+2\tilde\omega^2}} + \frac{1}{\tilde\omega+j} \int_{T}^{\infty} \frac{d\lambda}{2\pi} \frac{\cos[(\tilde\omega+j)\lambda]}{\lambda^{2+2\tilde\omega^2}} \nonumber\\ - \frac{1+2\tilde\omega^2}{\tilde\omega+j} \int_{T}^{\infty} \frac{d\lambda}{2\pi} \frac{\cos[(\tilde\omega+j)\lambda]}{\lambda^{2+2\tilde\omega^2}} \ln\lambda. \end{eqnarray} Differentiating Eq.~(\ref{K-j-parts}) with respect to $\tilde\omega$ and setting $\tilde\omega=0$, we obtain: \begin{eqnarray} \label{K-j-der} \fl \kappa_1(j, T)= \frac{d \, {\mathcal K}_{j \neq 0}(\tilde\omega, T)}{d\tilde{\omega}}\Bigg|_{\tilde\omega =0} = - \frac{\sin(jT)}{2\pi j} \ln T - \frac{\cos(jT)}{2\pi j^2 T} \ln T \nonumber\\ - \frac{1}{j} \int_{T}^{\infty} \frac{d\lambda}{2\pi} \frac{\sin(j\lambda)}{\lambda} \left( 1 - \ln\lambda\right) \nonumber\\ - \frac{1}{j^2} \int_{T}^{\infty} \frac{d\lambda}{2\pi} \frac{\cos(j\lambda)}{\lambda^2} \left( 1 - \ln\lambda\right) < \infty, \quad \forall\, T>0. \end{eqnarray} This implies the relation $(j \neq 0)$ \begin{eqnarray} \kappa_1(-j,T) = \kappa_1(j,T). 
\end{eqnarray} As a consequence, we conclude that \begin{eqnarray} \label{K-sum} {\mathcal K}_{-1}(\tilde\omega, T) + {\mathcal K}_{1}(\tilde\omega, T) = {\mathcal O}(\tilde\omega). \end{eqnarray} (Again, it is this particular combination that appears in Eq.~(\ref{B2-sum}).) Collecting the results Eqs.~(\ref{A2-w3}), (\ref{B2-sum}), (\ref{I-0}), (\ref{I-sum}), and (\ref{K-sum}), we observe that $\Lambda_2(\tilde \omega) = \tilde\omega^2 + {\mathcal O}(\tilde\omega^3)$; hence \begin{eqnarray} \hat{\Lambda}_2(\tilde \omega) = \hat{\Lambda}_1(\tilde \omega) =\tilde\omega^2, \end{eqnarray} see Eqs.~(\ref{approx-def}) and (\ref{L1-hat}). \noindent\newline\newline {\it Calculation of $\hat\Lambda_3(\tilde\omega)$}.---Since $A_3(\tilde\omega,T)= {\mathcal O}(\tilde\omega^3)$, see Eq.~(\ref{A3-w3}), we need to deal with $B_3(\tilde\omega,T)$ only, \begin{eqnarray} \label{b-3} B_3(\tilde\omega, T) = - \tilde\omega^3 {\rm Im}\int_{T}^{\infty} \frac{d\lambda}{2\pi} \, \lambda^{1-2\tilde{\omega}^2} \, e^{i\tilde{\omega} \lambda} \mathcal{G}_3(\lambda), \end{eqnarray} for which the large-$\lambda$ asymptotics of $\mathcal{G}_3(\lambda)$ defined by Eq.~(\ref{G3}) is required. To proceed, we need to complement the expansions Eqs.~(\ref{F12-asym}) and (\ref{F2-asym}) with the one for $\mathcal{F}_3$ defined by Eq.~(\ref{F3-def}). To this end, we first employ Eqs.~(\ref{FL-3}), (\ref{eq:f1}), (\ref{eq:f2}) to determine a large-$t$ behavior of $F_3(t)$, \begin{eqnarray} \label{F3-as}\fl \qquad \frac{F_3(t)}{t^3} = i \left\{ \frac{a_1}{t^3} + a_2 \frac{\cos t}{t^3} + a_3 \frac{\cos t \ln t}{t^3} + {\mathcal O}\left( \frac{\sin(\star\, t)\ln t}{t^4}\right) \right\}, \end{eqnarray} where $a_1, a_2$ and $a_3$ are real coefficients whose explicit values are not required for our analysis; $\sin(\star\, t)$ denotes $\sin t$ and $\sin(2t)$, both of which are present in the remainder term. 
This expansion combined with Eq.~(\ref{eq:f3}) implies the following large-$t$ behavior of $f_3(t)$: \begin{eqnarray} \label{f3-small-as} \fl \qquad f_3(t) = i \left\{ a_1^\prime \frac{1}{t} + a_2^\prime \frac{\sin t}{t} + a_3^\prime \frac{\cos t}{t} + a_4^\prime \frac{\cos t}{t} \ln t + a_5^\prime \frac{\cos t}{t} \ln^2 t \right\} \nonumber\\ + \mathcal{O} \left( \frac{\sin t \ln t}{t^2} \right). \end{eqnarray} Here, the coefficients $a_j^\prime$ are real. Now, a large-$\lambda$ behavior of $\mathcal{F}_3(\lambda)$ can be read off from Eq.~(\ref{F3-def}): \begin{eqnarray} \label{F3-asym} \fl \qquad \qquad \mathcal{F}_3(\lambda) = i \left\{ a_1^{\prime\prime} \frac{1}{\lambda} + a_2^{\prime\prime} \frac{\sin \lambda}{\lambda^2} + a_3^{\prime\prime} \frac{\cos \lambda}{\lambda^2} + a_4^{\prime\prime} \frac{\sin \lambda}{\lambda^2} \ln \lambda + a_5^{\prime\prime} \frac{\sin \lambda}{\lambda^2} \ln^2 \lambda \right\} \nonumber\\ \qquad \qquad + \mathcal{O} \left( \frac{\cos \lambda}{\lambda^3} \ln^3 \lambda \right), \end{eqnarray} where, again, the coefficients $a_j^{\prime\prime}$ are real. Inspection of Eqs.~(\ref{G3}), (\ref{F12-asym}), (\ref{F2-asym}) and (\ref{F3-asym}) shows that a large-$\lambda$ behavior of ${\mathcal G}_3(\lambda)$ coincides with that of $\mathcal{F}_3(\lambda)$. Having determined a large-$\lambda$ asymptotics of ${\mathcal G}_3(\lambda)$, we turn to the analysis of the function $B_3(\tilde\omega,T)$ as $\tilde\omega \rightarrow 0$. Since ${\mathcal G}_3(\lambda) \in i \mathbb{R}$, a substitution of Eq.~(\ref{F3-asym}) into Eq.~(\ref{b-3}) generates several integrals (see below), whose small-$\tilde\omega$ behavior should be studied in order to determine whether $B_3(\tilde\omega,T)$ contributes to $\hat\Lambda_3(\tilde\omega)$ as defined by Eqs.~(\ref{approx-def}), (\ref{L-partial}) and (\ref{A-B}). 
This knowledge is required to complete calculation of the small-$\omega$ expansion of the power spectrum $S_\infty(\omega)$, see Eq.~(\ref{S_inf_exp}). \noindent\newline\newline {\it (a)}.---The first integral, originating from the $a_1^{\prime\prime}$ term in Eq.~(\ref{F3-asym}), admits a small-$\tilde\omega$ expansion \begin{eqnarray} B_{3,1}(\tilde\omega,T) = \int_{T}^{\infty} \frac{d\lambda}{2\pi} \, \,\frac{\cos(\tilde{\omega} \lambda)}{\lambda^{2\tilde{\omega}^2}} ={\mathcal O}(\tilde\omega^0). \end{eqnarray} This result is obtained from the real part of the r.h.s.~of Eq.~(\ref{one-more-f}) evaluated at $\tilde\omega=0$. Hence, due to Eq.~(\ref{b-3}), the contribution of $B_{3,1}(\tilde\omega,T)$ to $B_3(\tilde\omega,T)$ is of order ${\mathcal O}(\tilde\omega^3)$. \noindent\newline\newline {\it (b)}.---The second integral, originating from the $a_2^{\prime\prime}$ term in Eq.~(\ref{F3-asym}), reads \begin{eqnarray} B_{3,2}(\tilde\omega,T) =\int_{T}^{\infty} \frac{d\lambda}{2\pi} \,\sin \lambda\, \frac{\cos(\tilde{\omega} \lambda)}{\lambda^{1+2\tilde\omega^2}} ={\mathcal O}(\tilde\omega^0) \end{eqnarray} as can be seen by setting $\tilde\omega=0$ directly in the integrand. \noindent\newline\newline {\it (c)}.---All other integrals generated by the remaining terms in Eq.~(\ref{F3-asym}) can be treated analogously. As a consequence, we conclude that $B_3(\tilde\omega,T)$ is of order ${\mathcal O}(\tilde\omega^3)$. Taken together with Eqs.~(\ref{A-B}) and (\ref{A3-w3}), this implies that $\Lambda_3(\tilde\omega) = {\mathcal O}(\tilde\omega^3)$ so that \begin{eqnarray} \label{lambda-finite} \hat\Lambda(\tilde\omega) = 2 \tilde\omega^2 + {\mathcal O}(\tilde\omega^3). \end{eqnarray} Substituting Eq.~(\ref{lambda-finite}) into Eq.~(\ref{S_inf_exp}), we derive the sought small-$\tilde\omega$ expansion of the power spectrum $S_\infty(\omega)$ as stated in Theorem \ref{Th-5}. 
\hfill $\square$ \section*{Acknowledgments} Roman Riser thanks Tom Claeys for insightful discussions and kind hospitality at the Universit\'e catholique de Louvain. This work was supported by the Israel Science Foundation through the Grants No.~648/18 (E.K. and R.R.) and No.~2040/17 (R.R.). Support from the Simons Center for Geometry and Physics, Stony Brook University, where a part of this work was completed, is gratefully acknowledged (E.K.). \newpage
\section{Introduction} \label{sec:intro} Geometry is often crucial in physical phenomena, since dimensionality and topological defects determine the properties of phase transitions. Progress in complex networks is an example where geometrical features are paramount to the physical properties (for a review, see, e.g., Ref.~\cite{doro}). This is also reflected in ongoing research interest in nontrivial lattice structures such as fractal lattices~\cite{fractals} and Apollonian networks~\cite{apollo}. Another important area is hyperbolic geometry, where even familiar standard physical models like electronic spins and Brownian motions exhibit novel behaviors~\cite{heptagonal}. Such studies are interesting not only from a theoretical viewpoint but also in view of potential applications, given the rapid development of the fabrication of nanoscale devices and structures~\cite{nano}. In this work, we focus on the percolation problem and investigate the percolation phase transition for negatively curved hyperbolic lattices. Percolation on hyperbolic lattices has so far been of predominantly mathematical interest and several interesting mathematical results have been reported, including the existence of an intermediate phase with infinitely many unbounded clusters~\cite{benjamini,schon,lyons}. More specifically, there exist in general two critical thresholds: unbounded clusters are formed at the first threshold, while unbounded clusters are merged to become one unique unbounded cluster at the second threshold. In the case of the usual flat lattices, which have vanishing surface-volume ratios in the infinite-volume limit, the two thresholds coincide, and the theory of such percolation transitions is well developed~\cite{stauffer}. On the other hand, the properties of the percolation transitions for hyperbolic lattices still remain to be clarified. 
In particular, these lattices are not homogeneous due to the presence of a boundary and, as a consequence, the critical properties may differ from the mean-field-type transitions which were discussed in Ref.~\cite{schon2}. In this paper, we investigate the characteristic features of the thresholds and the corresponding phases by using various statistical measures like the number of boundary points connected to the middle, the ratio of the first and second largest clusters, and the cluster size distribution. We use finite-size scaling methods to obtain the critical properties, together with the Newman-Ziff algorithm~\cite{newman-ziff}. This paper is organized as follows: In Sec.~\ref{sec:percs}, we introduce alternative manifestations of the percolation thresholds and show that all of them coincide in an ordinary square lattice. In Sec.~\ref{sec:hyperb}, we explain the notion of a hyperbolic lattice and the appearance of two distinct percolation transitions. We start with the Bethe lattice and the standard mean-field results. Next, the Cayley tree, the simplest model with two thresholds, is introduced. This model is used as a benchmark when discussing the characteristic features of percolation in hyperbolic lattices. The efficiency of the finite-size scaling methods is also illustrated for this simpler model. Then we present the results for the general hyperbolic structures which contain loops. We summarize our results in Sec.~\ref{sec:summary}. \section{Various manifestations of percolation} \label{sec:percs} There are various possible manifestations of percolation. We here describe the critical thresholds corresponding to them as well as their relationship. The percolation threshold is usually defined as the occupation probability $p$ above which a cluster occupying a finite fraction of the system is formed. Let us first consider an $L\times L$ square lattice. In this case it is well known that $p_c=1/2$ for bond percolation \cite{stauffer}. 
Alternatively, we may consider the ratio between the largest cluster size, $s_1$, and the second largest one, $s_2$~\cite{zen}. This measure is based on the fact that above the transition threshold, the second largest cluster becomes negligible with respect to the largest one, and accordingly, as soon as $p$ exceeds a critical value from below, $s_2/s_1$ vanishes in the large lattice limit. Since it implies that one large unique cluster dominates the whole system, we call this threshold $p_u$. Figure~\ref{fig:square}(a) illustrates that the ensemble average of $s_2/s_1$ scales quite well with sizes for the square lattice. This finite-size scaling supports the well-known result that $p_u=p_c=1/2$ and the critical index $\nu=4/3$. We will use this type of finite-size scaling methods throughout the paper. \begin{figure}[tbp] \includegraphics[width=0.45\textwidth]{fig1a.eps} \includegraphics[width=0.45\textwidth]{fig1b.eps} \caption{(Color online) Scaling plots for regular square lattices, using (a) the ratio between the largest and the second largest cluster sizes, and (b) the number of boundary points connected to the midpoint of the lattice. The two-dimensional percolation scaling exponent $\protect\nu=4/3$ and $\kappa = \nu / (1+\nu) = 4/7$ are used for scaling collapse with $p_u=p_c=1/2$. The average is taken over $10^6$ independent realizations for each plot. } \label{fig:square} \end{figure} An alternative manifestation of percolation is the number $b$ of boundary points connected to the middle of the lattice via occupied bonds. We will also use this concept of midpoint percolation throughout the paper. Around the threshold of the midpoint percolation, which we will call $p_m$, a cluster penetrates the whole system and one expects a finite-size scaling of the form \begin{equation} b=L^{\kappa }f[(p-p_m)L^{1/\nu }] \label{eq:bscaling} \end{equation} with $\kappa = \nu/(1+\nu) = 4/7 \approx 0.57$~\cite{sapoval,forthcoming}. 
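The behavior of $s_2/s_1$ across $p_c$ can be reproduced with a few lines of code; a minimal sketch in the spirit of the Newman-Ziff approach (union-find cluster merging; assumed simplifications: open boundaries and a fixed $p$ per realization rather than the full microcanonical bond sweep):

```python
import random

def largest_two(L, p, rng):
    """Bond percolation on an L x L square lattice (open boundaries):
    occupy each bond with probability p and return (s1, s2)."""
    n = L*L
    parent = list(range(n))
    size = [1]*n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        if size[ra] < size[rb]:
            ra, rb = rb, ra
        parent[rb] = ra                     # union by size
        size[ra] += size[rb]

    for x in range(L):
        for y in range(L):
            v = x*L + y
            if x + 1 < L and rng.random() < p:   # bond to (x+1, y)
                union(v, v + L)
            if y + 1 < L and rng.random() < p:   # bond to (x, y+1)
                union(v, v + 1)

    sizes = sorted((size[v] for v in range(n) if find(v) == v), reverse=True)
    sizes += [0, 0]
    return sizes[0], sizes[1]

rng = random.Random(1)

def mean_ratio(p, runs=50, L=32):
    acc = 0.0
    for _ in range(runs):
        s1, s2 = largest_two(L, p, rng)
        acc += s2/s1
    return acc/runs

low, high = mean_ratio(0.3), mean_ratio(0.7)
print(low, high)  # s2/s1 is O(1) below p_c = 1/2 and small above it
```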
Again, Fig.~\ref{fig:square}(b) shows an excellent scaling collapse with $\kappa = 4/7$ and $p_m=p_c=1/2$. Consequently, the finite-size scaling of $b$ gives a practical alternative way of investigating percolation properties. Yet another measure, based on the same midpoint percolation concept, is the fraction of the boundary points connected to the middle, $b/4L$, which becomes finite when $p$ exceeds $p_b$. This measure will also be used frequently. For the planar lattices, all these thresholds coincide, so that we have only one critical threshold $p_c=p_u=p_m=p_b=1/2$. (The observable quantities used in the present paper are listed in Table~\ref{table:thresholds}.) The crucial point in the present context is that this equality does not hold for hyperbolic lattices. \begin{table} \caption{Percolation thresholds and quantities used for their numerical detection. Here $s_i$ represents the $i$th largest cluster size, $b$ is the number of boundary points connected to the middle of the lattice, and $B$ is the total number of boundary points.} \begin{tabular}{|c|c|c|}\hline\hline Threshold & Meaning & Observable\\ \hline $p_u$ & The unbounded cluster becomes unique. & $s_2/s_1$\\ $p_m$ & The boundary is connected to the middle. & $b$\\ $p_b$ & A finite fraction of the boundary is connected to the middle. & $b/B$\\ \hline\hline \end{tabular} \label{table:thresholds} \end{table} \section{Percolation in hyperbolic lattices} \label{sec:hyperb} \begin{figure} \includegraphics[width=0.3\textwidth]{fig2.eps} \caption{(Color online) Poincar\'e disk representation of the hyperbolic lattice $\{m,n\} = \{3,7\}$ [seven ($n=7$) triangles ($m=3$) meet at every vertex] with the total number of layers $l=6$. Solid lines show occupied bonds in a realization with the occupation probability of $p=0.2$.} \label{fig:3_7visual} \end{figure} Suppose that $n$ regular $m$-gons meet at every vertex, which we represent by a Schl\"afli symbol $\{m,n\}$~\cite{coxeter1954}. 
The resulting lattices are flat if we take $\{m,n\}=\{3,6\},\{4,4\},\{6,3\}$, which together constitute the two-dimensional (2D) percolation universality class~\cite {stauffer}. If $(m-2)(n-2)>4$, on the other hand, the resulting lattice is known to have a constant negative Gaussian curvature~\cite{coxeter1997}, making a hyperbolic surface. Figure~\ref{fig:3_7visual} shows a hyperbolic lattice represented as $\{3,7\}$, where seven regular triangles meet at each vertex. Note that all of the triangles in this figure are congruent with respect to the metric used in this projection on the Poincar\'e disk~\cite{anderson}. As shown in Fig.~\ref {fig:3_7visual}, we construct the lattice in a concentric way so that the origin of the disk becomes the zeroth layer and its seven nearest neighbors constitute the first layer. Likewise, the second layer surrounding the first layer is composed of 21 vertices, and so on up to the $l$th layer [$l=6$ in Fig.~\ref{fig:3_7visual}]. The system is said to have a level $l$, and its size is given by $N(l) = 1 + 7 \sum_{j=1}^{l}[c_+^j - c_-^j]/\sqrt{5}$ with $c_\pm = (3 \pm \sqrt{5})/2$. For example, $N(l=2) = 1 + 7 \frac{(c_+ - c_-)}{\sqrt{5}} + 7 \frac{(c_+^2 - c_-^2)}{\sqrt{5}} = 1 + 7 + 21 = 29$. This formula shows that the number of lattice points increases exponentially with a distance from the origin $O$, yielding a nonvanishing surface-volume ratio in the limit $N\rightarrow \infty$. In other words, the total number of boundary points $B$ is proportional to the system size $N$. The sizes of other lattices with different Schl\"afli symbols are listed in Table~\ref{table:sizes}. Since a $d$-dimensional hypercube with a volume $v$ has the surface-volume ratio $\propto v^{-1/d}$, a hyperbolic lattice is usually called infinite-dimensional. In graph theory, an object with a nonvanishing surface-volume ratio is called \emph{nonamenable}~\cite{benjamini,lyons}. 
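The closed form for $N(l)$ can be cross-checked against integer layer counts; a sketch (the three-term recursion $a_{j+1}=3a_j-a_{j-1}$ below is the one generated by $c_\pm=(3\pm\sqrt{5})/2$, the roots of $x^2-3x+1=0$):

```python
import math

c_plus = (3 + math.sqrt(5))/2
c_minus = (3 - math.sqrt(5))/2

def N_closed(l):
    """N(l) = 1 + 7 * sum_j (c_+^j - c_-^j)/sqrt(5) for the {3,7} lattice."""
    return 1 + 7*sum((c_plus**j - c_minus**j)/math.sqrt(5)
                     for j in range(1, l + 1))

def N_recursive(l):
    """Layer populations obey a_{j+1} = 3*a_j - a_{j-1} (a_0 = 0, a_1 = 1);
    the j-th layer holds 7*a_j vertices."""
    a_prev, a = 0, 1
    total = 1
    for _ in range(l):
        total += 7*a
        a_prev, a = a, 3*a - a_prev
    return total

print([N_recursive(l) for l in range(1, 5)])  # [8, 29, 85, 232]
```

The exponential growth of the layer populations (with base $c_+ \simeq 2.618$) is what makes the surface-volume ratio nonvanishing.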
The exponential increase of lattice points as a function of the distance from the middle also constitutes a practical challenge, since the range of possible sizes is limited when using the finite-size scaling method. For this reason, we present numerical values only up to the second digit in this work. \begin{table} \caption{System sizes for various structures, $\{m,n\}$, as a function of level $l$. Each lattice is grown from a single midpoint in the zeroth layer.} \begin{tabular}{|c|c|}\hline\hline $\{m,n\}$ & $N(l)$ \\ \hline $\{3,7\}$ & $1 + \frac{7}{\sqrt{5}} \sum_{j=1}^{l}[(\frac{3+\sqrt{5}}{2})^j - (\frac{3-\sqrt{5}}{2})^j]$ \\ $\{4,5\}$ & $1 + \frac{5}{\sqrt{3}} \sum_{j=1}^{l}[(2+\sqrt{3})^j - (2-\sqrt{3})^j]$ \\ $\{5,5\}$ & $1 + \sqrt{5} \sum_{j=1}^{l}[(\frac{7+3\sqrt{5}}{2})^j - (\frac{7-3\sqrt{5}}{2})^j]$ \\ $\{6,4\}$ & $1 + 2\sqrt{2} \sum_{j=1}^{l}[(3+2\sqrt{2})^j - (3-2\sqrt{2})^j]$ \\ $\{7,3\}$ & $1 + \frac{15}{\sqrt{5}} \sum_{j=1}^{l}[(\frac{3+\sqrt{5}}{2})^j - (\frac{3-\sqrt{5}}{2})^j]$ \\ $\{\infty,3\}$ & $1 + 3\sum_{j=1}^{l} 2^{j-1}$\\ \hline\hline \end{tabular} \label{table:sizes} \end{table} \subsection{Bethe lattice} \label{sec:bethe} The binary Bethe lattice is the infinite binary tree where all the lattice points are equivalent~\cite{soderberg}. This means that the Bethe lattice lacks boundary points. Consequently, it belongs to the amenable class. This is in contrast to the Cayley tree (discussed in the following section), which includes the boundary even in the large-lattice limit and hence is an example of a nonamenable graph. The percolation problem for the Bethe lattice is exactly solvable and the solution corresponds to the standard mean-field theory~\cite{stauffer}. This standard mean-field theory describes the percolation transition for $d$-dimensional Euclidean lattices provided $d\geq 6$~\cite{stauffer}. However, it has limited applicability in the context of nonamenable graphs. 
The critical threshold $p_c$ has been well known since the early percolation theory formulated for the gelation process~\cite{flory}. The point is that percolation on a Bethe lattice can be treated as the Galton-Watson branching process~\cite{grimmett1989}. We pick an arbitrary point as the root, and the set of all the points reached from it by $i$ bonds is called the $i$th generation (the term {\em generation} will be used in trees, instead of {\em layer}). Let $w$ denote the extinction probability, i.e., the probability that the branching process from the root ends at some finite generation of the tree. For such a process, each bond to the next generation should be either unoccupied, with probability $1-p$, or occupied but eventually terminated, with probability $pw$. Since each vertex has two bonds to the next generation, the sum of these probabilities has to be squared, and $w$ satisfies the self-consistency equation, $w = (1-p+pw)^2 $, yielding \begin{equation*} w = \left\{ \begin{array}{lcr} 1 & \mbox{for} & 0 \leq p<1/2, \\ (1/p-1)^2 & \mbox{for} & 1/2 \leq p \leq 1. \end{array} \right. \end{equation*} When $p > 1/2$, the extinction probability is less than unity. From the percolation viewpoint, every vertex has one successor on average at $p=1/2$, and accordingly, the cluster from the root vertex can be extended indefinitely. Consequently, the bond percolation of the Bethe lattice has $p_c = 1/2$, at which unbounded clusters may be formed. For a general Bethe lattice denoted as $\{\infty,n\}$, this generalizes to $p_c = 1/(n-1)$. An important point in the present context is that amenable graphs only have one percolation threshold. 
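The self-consistency equation for $w$ can be solved by direct fixed-point iteration; a minimal sketch:

```python
def extinction_probability(p, iterations=200):
    """Iterate w -> (1 - p + p*w)**2 starting from w = 0; the iteration
    converges to the smallest fixed point, which is the extinction
    probability of the binary branching process."""
    w = 0.0
    for _ in range(iterations):
        w = (1 - p + p*w)**2
    return w

for p in (0.3, 0.5, 0.7):
    w = extinction_probability(p)
    predicted = 1.0 if p < 0.5 else (1/p - 1)**2
    print(p, w, predicted)
```

For $p<1/2$ the iteration settles at $w=1$ (certain extinction); above $p_c=1/2$ it selects the nontrivial root $(1/p-1)^2$.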
Thus, for the Bethe lattice, the threshold $p_m$ [at which a cluster from the midpoint (here any point, since all points are equivalent) reaches a point arbitrarily far away] and $p_c$ [at which the unbounded clusters contain a finite fraction of the whole system] coincide, i.e., $p_m=p_c$. Also the three planar lattices ($\{4,4\}$, $\{3,6\}$, $\{6,3\}$) are amenable and consequently have only one threshold (compare with Sec.~\ref{sec:percs}). As will be discussed in the following, nonamenable graphs such as hyperbolic lattices have two distinct percolation thresholds. \subsection{Cayley tree, $\{\infty,3\}$} \label{sec:tree} The Cayley tree is a tree grown from the root vertex up to the $l$th generation, where the root vertex is identified as the middle of the lattice. This is an example of a nonamenable graph. While it has sometimes been presumed that the Bethe lattice is an adequate limiting case of the Cayley tree, more recent studies suggest that the Cayley tree in the limit of $l \rightarrow \infty$ has different critical properties from those of the Bethe lattice~\cite{doro}. Note that vertices are not equivalent for the Cayley tree, so that one can clearly define which generation a vertex belongs to. The branching-process argument is again applicable and the cluster from the root vertex reaches the bottom of the tree at $p_m = 1/2$. On the other hand, the uniqueness threshold is located at $p_u=1$~\cite{schon}, since $s_2/s_1$ remains finite at any $p<1$ [Fig.~\ref{fig:cayley}(a)]. The vanishing of $s_2/s_1$ as $p_u=1$ is approached can be obtained as follows: The total number of bonds in the tree with a level $l$ is $K=2^{l+1}-2$. Suppose that precisely one bond is broken on average. The occupation probability corresponding to this is $p=(K-1)/K$. Suppose further that the broken bond connects the $i$th and $(i+1)$th generations. 
The probability to select this bond is given by \begin{equation*} P(i) = \frac{2^{i+1}}{2^{l+1}-2}. \end{equation*} Breaking one bond creates precisely two clusters. The smaller one is below the broken bond and has a size of $s_2 = 2^{l-i}-1$. The size of the larger one is consequently $s_1 = N - s_2 = 2^{l+1}-2^{l-i}$. The expectation value of the ratio $s_2/s_1$ is obtained as follows: \begin{eqnarray} \left<\frac{s_2}{s_1}\right> &=& \displaystyle \sum_{i=0}^{l-1}\frac{s_2}{s_1} P(i)\nonumber\\ &=& \displaystyle \sum_{i=0}^{l-1}\frac{2^{l-i}-1}{2^{l+1}-2^{l-i}} \frac{2^{i+1}}{2^{l+1}-2}\nonumber\\ &\approx& \displaystyle \frac{1}{2^{l+1}} \sum_{i=0}^{l-1}\frac{1}{1-2^{-i-1}} \nonumber\\ &\approx& \displaystyle \frac{l}{2^{l+1}}. \nonumber \end{eqnarray} Since $p=(K-1)/K$, one can express this result directly in terms of $p$. By using the relation $1-p = K^{-1} \approx 2^{-l-1}$, one obtains \begin{equation*} \left<\frac{s_2}{s_1}\right> \propto -(1-p) \log (1-p). \end{equation*} Thus the ratio $s_2/s_1$ vanishes as $-(1-p) \log (1-p)$ as the threshold $p_u=1$ is approached from below. This is illustrated in Fig.~\ref{fig:cayley}(a): The Cayley tree has two thresholds and $p_m<p_u$. The midpoint percolation quantity $b$ instead possesses the scaling form \begin{equation} b= l^{\kappa} f[(p-p_m) l^{1/\nu}], \label{eq:bscaling2} \end{equation} with $\kappa=0$ and $\nu=1$. This scaling form can be derived as follows: The average number of boundary points reached from the midpoint for a given value of $p$ is $b=3p \times (2p)^{l-1}$, from which it follows that \begin{equation} b = \frac{3}{2} \exp[\log(2p)l] \approx \frac{3}{2} \exp[2l(p-p_m)]. \label{eq:exact} \end{equation} The validity of this finite-size scaling is demonstrated in Fig.~\ref{fig:cayley}(b) together with the exact scaling form given by Eq.~(\ref{eq:exact}).
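The asymptotic form of $\left<s_2/s_1\right>$ can be verified by evaluating the sum exactly. The sketch below (our illustration) uses the expressions for $s_2$, $s_1$, and $P(i)$ given above:

```python
def mean_s2_over_s1(l):
    """Exact expectation value of s2/s1 for a Cayley tree with l
    generations when precisely one bond is broken, using
    s2 = 2^(l-i) - 1, s1 = 2^(l+1) - 2^(l-i), and
    P(i) = 2^(i+1) / (2^(l+1) - 2) from the text."""
    K = 2 ** (l + 1) - 2            # total number of bonds
    total = 0.0
    for i in range(l):              # broken bond between generations i, i+1
        s2 = 2 ** (l - i) - 1
        s1 = 2 ** (l + 1) - 2 ** (l - i)
        total += (s2 / s1) * (2 ** (i + 1) / K)
    return total
```

For large $l$ the exact sum approaches $l/2^{l+1}$; e.g. `mean_s2_over_s1(30) * 2**31 / 30` is close to 1.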
One may also note that since $l \approx \log_2 N$, it follows that $b \sim (2p)^l = 2^l p^l \approx N p^{\log_2 N} = N^{1+\log_2 p}$. This means that $b$ scales as $N^{\phi}$ (or equivalently as $B^{\phi}$) at any value of $0<p\leq 1$ with a $p$-dependent exponent $\phi = 1+\log_2 p$ [Fig.~\ref{fig:cayley}(c)]. A direct consequence of this is that $b/B$ eventually goes to zero at every $p<1$ in the limit of infinite $N$. Thus $b/B$ is discontinuous at $p_b=1$ for $l=\infty$. This is in contrast to the ratio $s_2/s_1$, which goes continuously to zero at $p_u=1$. \begin{figure}[tbp] \includegraphics[width=0.45\textwidth]{fig3a.eps} \includegraphics[width=0.45\textwidth]{fig3b.eps} \includegraphics[width=0.45\textwidth]{fig3c.eps} \includegraphics[width=0.45\textwidth]{fig3d.eps} \caption{(Color online) Numerical results in Cayley trees, averaged over $10^6$ realizations. (a) The ratio between $s_2$ and $s_1$. Inset: A closer look near $p=1$, where one can hardly see any size dependence. (b) Scaling plot by Eq.~(\ref{eq:bscaling2}). The scaling function is well-described by $\frac{3}{2} \exp[2l (p-p_m)]$ near $p_m=0.50$. (c) $\phi$ as a function of $p$, assuming $b(p) \sim B^{\phi}$. The horizontal axis is on a log scale, confirming $\phi = 1+\log_2 p$. (d) Cluster size distribution at various $p$ with $l=15$. It approaches $P(s) \sim s^{-2}$ as $p \rightarrow 1$, while the finiteness of the system adds an exponential cutoff. } \label{fig:cayley} \end{figure} One can also obtain the limiting behavior of the cluster size distribution $P(s)$ by removing one bond. In this case $P(s)~ds$ is the probability of finding a cluster with a size in the interval $[s,s+ds]$ and $P(i)~di$ is the probability that the single broken bond lies between generations $i$ and $i+1$. This latter probability is also equal to the probability of finding a second largest cluster of size $s_2\propto 2^{l-i}$.
Thus $P(s_2)ds_2 = P(i)di$, from which it follows that \begin{equation*} P(s_2) = P(i) \left| \frac{di}{ds_2} \right| \propto s_2^{-2}, \end{equation*} where the last proportionality is obtained from $s_{2}\propto 2^{l-i}$. This suggests that the size distribution of clusters $P(s)$ (excluding the largest one) should approach the form $P(s) \propto s^{-\tau}$, where $\tau \rightarrow 2$ as $p \rightarrow 1$. Since the generation $i$ in a tree is a discrete variable, the possible sizes of $s_2$ are also discrete, and this discreteness becomes noticeable as $p \rightarrow 1$. This is illustrated in Fig.~\ref{fig:cayley}(d), which shows the approach to $P(s)\propto s^{-2}$ as well as the log-periodic oscillations caused by the discreteness. A more intuitive explanation for $P(s)$ may be gained by mapping the percolation problem in a Cayley tree to a branching process, namely, the formation of family trees~\cite{baek}: Let us consider the family-size distribution in the $z$-ary tree ($z \equiv n-1$) at the $k$th generation, with the net birth rate $\lambda = \log(zp)$, since each family grows as $(zp)^k$. When a bond is broken in a tree graph, the top of this detached branch is interpreted as the first ancestor of a new family, and we describe this top vertex as an immigrant, whose number is proportional to the existing population size $N_k$. If the number of immigrants is written as $\zeta N_k$, the birth and immigration should yield a constant growth of population, $\zeta + \lambda = \log z$, since we know $N_{k+1} = z N_k$. According to Ref.~\cite{baek}, the family-size distribution at the $k$th generation exhibits a power law with an exponent $\tau' = 2 + \zeta/\lambda = 2 + \log(1/p)/\log(zp)$ as $k \rightarrow \infty$. Since the volume of a given tree is proportional to its surface, it seems plausible that $\tau \approx \tau'$ for the cluster size distribution $P(s)$.
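The change-of-variables step can also be checked directly on the discrete quantities: since $P(i) \propto 2^{i}$ carries the probability mass at size $s_2(i) \approx 2^{l-i}$, the product $P(i)\,s_2(i)$ should be essentially constant in $i$, which is precisely the mass distribution of a density $\propto s^{-2}$ on a logarithmic size axis. A small sketch (our illustration):

```python
def mass_times_size(l=20, imax=10):
    """For one broken bond between generations i and i+1, multiply the
    probability mass P(i) = 2^(i+1)/(2^(l+1)-2) by the cluster size
    s2(i) = 2^(l-i) - 1.  A density P(s) ~ s^(-2) corresponds to a
    mass P(i) ~ 1/s2, so these products should be nearly constant."""
    K = 2 ** (l + 1) - 2
    return [(2 ** (i + 1) / K) * (2 ** (l - i) - 1) for i in range(imax)]
```

The near-constancy of the returned values is the discrete counterpart of $P(s_2) \propto s_2^{-2}$.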
Note that $\tau'=2$ for $p=1$, which is also the limiting result for the Cayley tree. This gives a hand-waving argument suggesting that the size distribution form $P(s) \propto s^{-\tau}$ could also be valid for some range of $p$ below one. \subsection{Heptagonal lattice, $\{7,3\}$} Now we consider a heptagonal lattice, denoted as $\{m,n\} = \{7,3\}$. Since the probability to find a loop is roughly $O(p^m)$, we expect that the results obtained for the Cayley tree remain valid to some extent. The lower threshold is determined from the finite-size scaling of $b$ and Eq.~(\ref{eq:bscaling2}), as shown in Fig.~\ref{fig:7_3}(a). This gives the value $p_m \approx 0.53$. We note that this is rather close to the exact result for the Cayley tree, $1/(n-1) = 1/2$, and that the data follow the same scaling form as in Eq.~(\ref{eq:bscaling2}). This suggests that the loops have only a small effect on the percolation properties at small values of $p$, as is also suggested by the actual structure of the clusters illustrated in Fig.~\ref{fig:3_7visual}. The upper threshold is determined from the finite-size scaling of $s_2/s_1$, which gives $p_u \approx 0.72$. According to a dual-lattice argument~\cite{hunt}, the percolation threshold for a regular lattice $\{m,n\}$ is predicted to be approximately $m/(m+n)$. Applying this argument somewhat {\it ad hoc} to the present case yields $7/(7+3) = 0.7$, which is in fact fairly close to the actual value, $p_u\approx 0.72$. \begin{figure}[tbp] \includegraphics[width=0.45\textwidth]{fig4a.eps} \includegraphics[width=0.45\textwidth]{fig4b.eps} \includegraphics[width=0.45\textwidth]{fig4c.eps} \includegraphics[width=0.45\textwidth]{fig4d.eps} \caption{(Color online) Numerical results for heptagonal lattices, $\{7,3\}$, averaged over $10^6$ realizations. (a) Measurement of $b$ yields a result consistent with the tree case.
(b) The ratio between the two largest cluster sizes, $s_2/s_1$, gives $p_u \approx 0.72$, and (c) extrapolating $b/B$ gives $p_b \approx 0.72$, supporting $p_b = p_u$. (d) The exponent $\phi$ as a function of $p$ shows a clear difference from the tree case [see Fig.~\ref{fig:cayley}(c) for comparison]. At $p \gtrsim 0.84$, $\phi$ fluctuates strongly, as it is hard to determine the size dependence from the numerical data.} \label{fig:7_3} \end{figure} Alternatively, the upper threshold can be determined from $b/B$ by assuming that the size scaling form is the same as for the Cayley tree, \[b/B \sim c_1 N^{\phi-1} + c_2,\] and extrapolating the numerical results from $l=6,\ldots,11$ to the infinite-size limit [Fig.~\ref{fig:7_3}(c)]. This extrapolation gives $|b/B| \lesssim O(10^{-3})$ for $p<0.72$, while $b/B$ becomes positive and finite above that, suggesting $p_b \approx 0.72$. This in turn suggests $p_b = p_u$ within our numerical accuracy. The corresponding scaling exponent $\phi(p)$ is plotted in Fig.~\ref{fig:7_3}(d). While $\phi(p)$ is an increasing function of $p$ in the Cayley tree, it is a convex function in $\{7,3\}$. The crucial difference is that $\phi$ is still less than one at $p=p_b$. In order to study the transition at $p_b$, we use the finite-size scaling form \begin{equation} b/B \sim N^{\phi-1} f[(p-p_b)N^{1/\bar\nu}]. \label{eq:7_3scaling2} \end{equation} The scaling collapse at $p_b=0.72$ determines the critical indices as $\phi \approx 0.82$ and $1/\bar\nu \approx 0.12$ (Fig.~\ref{fig:7_3scale2}). \begin{figure}[tbp] \includegraphics[width=0.45\textwidth]{fig5.eps} \caption{(Color online) Scaling plot of $b/B$ in $\{7,3\}$ using Eq.~(\ref{eq:7_3scaling2}), with $p_b=0.72$, $\phi=0.82$, and $1/\bar\nu=0.12$.} \label{fig:7_3scale2} \end{figure} \begin{figure}[tbp] \includegraphics[width=0.45\textwidth]{fig6a.eps} \includegraphics[width=0.25\textwidth]{fig6b.eps} \caption{(Color online) (a) Cluster size distribution, $P(s)$, at various $p$ in $\{7,3\}$.
It shows a power-law form also above $p_u$. (b) A schematic view of the Poincar\'e disk centered at $O$. The lattice is constructed up to the dashed circle, which is separated from the circumference of the disk by $\overline{AB}=\overline{CD}=\protect\delta$ in the Euclidean measure.} \label{fig:7_3csize} \end{figure} Another crucial difference from the Cayley tree is the appearance of a supercritical region at $p>p_u$. We find that the hyperbolic lattices in this region display a power-law behavior in the cluster size distribution [Fig.~\ref{fig:7_3csize}(a)]. A hand-waving argument for this behavior goes as follows: The probability of finding a cluster of a certain size is dominated by its surface area, since connections through the surface have to be cut to isolate this cluster from its surroundings. It is thus believed that $P(s) \propto \exp[-\eta(p) s^{1-1/d}]$ in a $d$-dimensional lattice, with some $p$-dependent constant $\eta(p)$~\cite{stauffer,grimmett1989}. For a hyperbolic lattice with $d=\infty$, one may well expect $P(s) \propto e^{-s}$. In our nonamenable setting, however, most large clusters face the outermost boundary of the lattice, where the outward connections are already absent. Suppose a cluster is contained in a fan-shaped region $OBC$ on the Poincar\'e disk, where the lattice is constructed up to the dashed line in Fig.~\ref{fig:7_3csize}(b). If $ \overline{AB} = \overline{CD} = \delta$ in the Euclidean measure, the hyperbolic length of the arc $\overset{\frown}{BC}$ connecting $B$ and $C$ is of order $\delta^{-1}$. Since the cluster size is closely related to the surface length in a hyperbolic lattice, which is proportional to $\delta^{-1}$~\cite{anderson}, we may say that $s \propto \delta^{-1}$. At the same time, the hyperbolic length that one should cut to isolate this cluster is roughly the length of the geodesic connecting $B$ and $C$, which grows as $\log \delta^{-1}$.
Note that $\overset{\frown}{BC}$ need not be considered here, since it is part of the lattice boundary; this is the fundamental difference from the exponential decay. In other words, the number of bonds to be cut in a hyperbolic lattice is not proportional to the size $s$, but only to $\log s$. This gives the cluster size distribution in a power-law form, $P(s) \sim \exp[-\eta(p) \log s] = s^{-\eta(p)}$. It is also possible to infer that $\eta(p)$ should be an increasing function of $p$, as clusters are merged into the largest one in the supercritical phase. \subsection{Comparison with $\{4,5\}$ and $\{3,7\}$} \begin{figure}[tbp] \includegraphics[width=0.45\textwidth]{fig7a.eps} \includegraphics[width=0.45\textwidth]{fig7b.eps} \caption{(Color online) Numerical results for $\{4,5\}$, averaged over $10^6$ realizations. (a) Scaling plot with $p_m = 0.27$. (b) Scaling collapse with $p_b=0.52$, $\phi=0.82$, and $1/\bar\nu = 0.11$.} \label{fig:4_5} \end{figure} \begin{figure}[tbp] \includegraphics[width=0.45\textwidth]{fig8a.eps} \includegraphics[width=0.45\textwidth]{fig8b.eps} \caption{(Color online) Numerical results for $\{3,7\}$, averaged over $10^6$ realizations. Here are shown the scaling plots (a) at $p_m = 0.20$ and (b) at $p_b=0.37$ with $\phi=0.82$ and $1/\bar\nu = 0.11$.} \label{fig:3_7} \end{figure} In order to investigate the generality of our results, we also study two additional hyperbolic lattices with different structures $\{m,n\}$. In all cases we find precisely the same scenario. For the hyperbolic lattice $\{4,5\}$, we find $p_m \approx 0.27$ and $p_u = p_b \approx 0.52$ [Figs.~\ref{fig:4_5}(a) and \ref{fig:4_5}(b)]. The scaling behavior at the lower threshold $p_m$ is the same as for the Cayley tree and for the lattice $\{7,3\}$. At the upper threshold $p_b=0.52$, the critical indices $\phi=0.82$ and $1/\bar\nu=0.11$ were determined from the scaling collapse [Fig.~\ref{fig:4_5}(b)].
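The finite-size scaling collapse used at the upper thresholds can be illustrated with synthetic data. In the sketch below (our illustration; the logistic function is just a stand-in for the unknown scaling function $f$), data are generated to obey the ansatz $b/B \sim N^{\phi-1} f[(p-p_b)N^{1/\bar\nu}]$ and then rescaled; with the correct exponents, curves for different $N$ fall onto a single master curve:

```python
import math

def rescaled_curve(N, xs, p_b=0.72, phi=0.82, inv_nu=0.11):
    """Generate synthetic b/B data obeying
    b/B = N^(phi-1) * f((p - p_b) * N^(1/nu)),
    then rescale as (b/B) * N^(1-phi).  With the correct exponents the
    rescaled values depend only on the scaling variable x, so all
    system sizes N collapse onto the same master curve f."""
    f = lambda x: 1.0 / (1.0 + math.exp(-x))   # stand-in scaling function
    ys = []
    for x in xs:
        p = p_b + x * N ** (-inv_nu)           # choose p to hit scaling variable x
        b_over_B = N ** (phi - 1.0) * f((p - p_b) * N ** inv_nu)
        ys.append(b_over_B * N ** (1.0 - phi))
    return ys
```

Here `rescaled_curve(2**10, xs)` and `rescaled_curve(2**18, xs)` agree point by point, whereas rescaling with wrong exponents would leave a residual $N$-dependence; minimizing that residual over trial $(p_b,\phi,1/\bar\nu)$ is how collapse estimates such as those in Figs.~\ref{fig:7_3scale2}, \ref{fig:4_5}, and \ref{fig:3_7} are obtained.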
These values are identical, within numerical accuracy, to the ones found for the lattice $\{7,3\}$. For the lattice $\{3,7\}$, which is dual to $\{7,3\}$, the same agreement is found: The same size scaling is found at the lower threshold $p_m \approx 0.20$ [Fig.~\ref{fig:3_7}(a)]. At the upper one, $p_u \approx 0.37$, the critical indices $\phi=0.82$ and $1/\bar\nu=0.11$ are found [Fig.~\ref{fig:3_7}(b)]. This is again in striking agreement with the lattices $\{7,3\}$ and $\{4,5\}$. Thus our results are consistent with a universal critical behavior at the second threshold for all hyperbolic lattices $\{m,n\}$, provided both $m$ and $n$ are finite. This critical behavior is distinct from that of the tree case $\{\infty,n\}$, which has $\phi=1$ at $p=p_b$. The present accuracy suggests that the critical indices are, to a good approximation, $\phi \approx 0.82$ and $1/\bar\nu \approx 0.11 \pm 0.01$. We also note that while the lower threshold $p_m$ is still close to the tree result $1/(n-1) = 1/4$ for $\{4,5\}$, the deviation becomes large for $\{3,7\}$, and that $m/(m+n)$ in general gives only a very crude estimate of the upper threshold $p_b = p_u$. \section{Summary} \label{sec:summary} We have investigated the percolation thresholds and the critical properties of the percolation transitions for hyperbolic lattices using finite-size scaling methods. Two distinct percolation thresholds were found: The lower one corresponds to the point at which the probability of finding a cluster spanning from the midpoint to the boundary becomes finite, and the higher one to the point at which the cluster containing the midpoint, with finite probability, also contains a finite fraction of the boundary. This is in contrast to planar lattices, which possess only a single percolation threshold because the two thresholds above coincide. The Cayley tree was used as a benchmark.
It was found that the lower threshold for the hyperbolic lattices has the same scaling properties as for the Cayley tree, and that the power-law dependencies characterizing the region between the two thresholds are also Cayley-tree-like. However, the higher threshold has a different critical behavior. Our results are consistent with a universal behavior at the higher threshold for all hyperbolic lattices $\{m,n\}$ with $m$ and $n$ finite. This critical behavior is characterized by two critical indices, $\phi \approx 0.82$ and $1/\bar\nu \approx 0.11$. What actually determines these critical indices is still an open question and will be the subject of future research. \acknowledgments We are grateful to Bo S\"oderberg, Jae Dong Noh, Sang-Woo Kim, Hiroyuki Shima, and Okyu Kwon for their inspiration and help. S.K.B. and P.M. acknowledge support from the Swedish Research Council under Grant No. 621-2002-4135. B.J.K. was supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD), Grant No. KRF-2007-313-C00282. This research was conducted using the resources of High Performance Computing Center North (HPC2N).
\section{Introduction} With an ever-increasing number of GANs introduced each year, concerns about the malicious use of this technology, especially in the case of social media content \cite{shu2017fake,rossler2018faceforensics,rossler2019faceforensics++}, are increasing at an alarming rate, as such misuse can have adverse impacts on public security and privacy. Therefore, a plethora of works have been proposed which focus on the real/fake detection task \cite{marra2018detection,wang2020cnn,zhang2019detecting}. However, it is also important to focus on the problem of attribution, i.e., identifying the source of these images. Attributing images to their sources can potentially deter malicious organizations and hold them accountable by leading to legal proceedings. Additionally, as GANs become part of commercial services such as face-animation applications, their popularity draws piracy and plagiarism \cite{yu2019attributing}, which is an attack on intellectual property. Therefore, it is pertinent to develop effective techniques to attribute images to specific sources. \begin{figure}[t] \includegraphics[width=\linewidth]{figures/Teaser_v2.pdf} \vspace{-10pt} \caption{A plethora of GANs are released every year, and there could be a set of images that come from several unknown sources. Our approach is capable of discovering and attributing unknown GAN sources while requiring label supervision for only an initial small set of GANs. We attribute seen GANs from a set of images with high accuracy, and identify and cluster unknown GAN sources with high purity.} \vspace{-0.2in} \label{fig:teaser} \end{figure} To address this problem, \cite{yu2019attributing,8695364,albright2019source} perform attribution for multiple GAN architectures and obtain high classification accuracies. However, they are limited to the closed-world setup, as they attribute only to the GANs seen during training and are incapable of identifying unseen GANs.
Such a setup is infeasible in practical scenarios, where there are a large number of images belonging to sources not seen during training. This raises the question of whether we can discover these new sources and group together the sets of images generated by them. We term this problem ``GAN discovery and attribution,'' as it involves attributing images to known sources as well as discovering unknown sources. This is a much more challenging and realistic setup, as the number of sources is unknown and keeps increasing. Additionally, there can be a significant domain shift depending on the dataset type of the GAN-generated images. Many works such as \cite{wang2020cnn,yu2019attributing, 8695364} show that GANs leave unique artificial signatures in the images they generate. We exploit this information to implicitly identify signatures and cluster images belonging to unseen sources together, while also attributing images to seen sources. We propose a novel iterative pipeline which utilizes a fixed set of images, labeled according to their corresponding sources, and performs GAN attribution and discovery on an unlabeled set of images. Our approach generalizes to an open-world setup where images in the unlabeled/discovery set are not restricted to be from the labeled class sources. Additionally, due to the iterative nature of our pipeline, we can continuously discover images from new GANs added to our discovery set in an online manner. Our approach only requires labels for an initial set of images from real datasets and a few GANs trained on these real datasets, with each real/GAN source representing a separate class. While we can discover unseen GANs trained on these real datasets, we additionally show through experiments that we can discover new real datasets, and GANs trained on these new datasets, without them being present in the initial labeled set.
Attribution and discovery in an open-world setup require us to separate images belonging to sources seen during training from those belonging to unseen sources. We therefore introduce an explicit out-of-distribution (OOD) detection step using the deep network features to separate the images belonging to the two types of sources. We propose to incorporate the Winner Take All (WTA) hash~\cite{yagnik2011power}, which, to the best of our knowledge, has never previously been used for OOD detection. Additionally, we obtain clusters for the OOD images and perform merge and refine steps to improve the grouping of the unknown GANs, using 1-Nearest Neighbour (NN) graphs and kernel SVMs, respectively. We combine these components into a single unified pipeline which is executed iteratively, improving the features and clusters while attributing seen sources and discovering new GAN sources. Through extensive experiments, we demonstrate the capability of our approach in an open-world setup. We show the efficacy of our approach on a wide range of dataset setups. We also analyze the importance of the various stages. Additionally, we provide an approach for applying our algorithm to the problem of real/fake image detection and show competitive results on a variety of dataset setups. We summarize our contributions as follows: \textbf{1)} We introduce a new problem of discovering and attributing images from real and GAN sources in an open-world setup; \textbf{2)} We propose a novel iterative pipeline consisting of several components, such as OOD detection, clustering, and merge and refine stages, providing a strong benchmark for this task; and \textbf{3)} We analyze the capability of our approach to discover GANs on a variety of dataset setups and also present several insights into the various stages of our pipeline.
\section{Related Work} \noindent\textbf{OOD Detection and Open Set Recognition:} Several works~\cite{liang2017enhancing, lee2018simple} have tackled OOD detection but require an OOD dataset for tuning hyperparameters, which is not possible since open-world knowledge is not known a priori. \cite{hsu2020generalized} removes this constraint but requires modifying the training setup to decompose confidence scores into two probabilities. Along similar lines is the task of open set recognition \cite{scheirer2012toward}. \cite{jain2014multi,oza2019c2ae,rudd2017extreme} use Extreme Value Theory to discard unknown samples but require setting thresholds on reconstruction errors and/or probability values to detect OOD samples, which requires careful tuning for each dataset. \cite{boult2019learning} provides a detailed survey of further works in this area. \noindent\textbf{Open World learning:} While open set recognition only rejects the unseen classes, open world learning \cite{bendale2015towards} also focuses on reasoning about the unseen classes. \cite{xu2019open} tackles this problem using meta classifiers but is limited to the product classification problem. \cite{hsu2017learning,han2019learning,wang2020open,han2020automatically} also focus on a similar problem but require the unlabeled set to contain only unseen classes and, in some cases, knowledge of the number of unseen classes. \noindent\textbf{Rank correlation:} \cite{yagnik2011power} compute the WTA hash, an ordinal embedding providing a highly nonlinear, sparse transformation of the feature vector. \cite{dean2013fast} use this hashing algorithm to perform fast large-scale object detection. To the best of our knowledge, no prior work utilizes ranking-based measures for OOD detection. \noindent\textbf{Clustering:} Clustering is a highly explored field, yet there is no one-size-fits-all solution.~\cite{sarfraz2019efficient} use a first Nearest Neighbours (1-NN) graph to perform parameter-free clustering.
Inspired by their work, we use a similar 1-NN graph for our merge step. \cite{yan2020clusterfit} perform K-means clustering on a network's features and retrain the network using the cluster labels as pseudo-labels. Our approach partly involves this setup but contains several other components, such as OOD detection and merge and refine steps. Spectral clustering~\cite{yang2019deep, ng2002spectral, von2007tutorial} is another common approach but requires computing eigenvalues of a large Laplacian, which is not tractable for large datasets such as ours. Another common direction is training a deep network~\cite{yang2016joint,xie2016unsupervised,ijcai2017-243,ren2019semi,yang2019learning} which learns embeddings/clusters by minimizing an objective function. However, these require careful training so as not to diverge while learning the features in an unsupervised manner. \noindent\textbf{Real/fake detection and GAN attribution:} A plethora of works \cite{rossler2018faceforensics, rossler2019faceforensics++,marra2018detection,wang2020cnn} exist for the problem of real/fake detection but are limited to this binary classification problem and are not directly applicable to GAN attribution and discovery. \cite{wang2020cnn,yu2019attributing,8695364} tackle this problem but are limited to the GANs that they train on and fail to generalize to an open-world setup. \cite{marra2019incremental} propose a more dynamic approach to incrementally include GANs for attribution but require clean datasets with images coming from only a single GAN source, which does not hold in practice, as images could be generated by multiple sources. To the best of our knowledge, there exists no work dealing with open-world GAN discovery and attribution, which is a much harder task than real/fake detection or closed-set GAN attribution.
\begin{figure*} \centering \includegraphics[width=\linewidth]{figures/Pipeline_v2.pdf} \caption{Illustration of our algorithm, where we iteratively discover new classes and retrain our network using them as pseudo-labels.} \vspace{-0.08in} \label{fig:pipeline} \end{figure*} \section{Proposed Approach} \subsection{Overview} In this section, we briefly describe our approach as shown in Fig. \ref{fig:pipeline}. Our initial labeled set consists of $n_s$ images corresponding to the seen classes and is denoted by $\mathcal{I}_s = \{I_{s_1},I_{s_2},...,I_{s_{n_s}}\}$, and their ground truth class labels, denoted by $\mathcal{Y}_s = \{y_{s_1},y_{s_2},...,y_{s_{n_s}}\}$. The discovery set consists of $n_t$ unlabeled images, from both seen and unseen classes, and is denoted by $\mathcal{I}_t=\{I_{t_1},I_{t_2},...,I_{t_{n_t}}\}$. Our pipeline proceeds iteratively, and at any point in the pipeline, our discovery set is partitioned into $\mathcal{I}_c$ and $\mathcal{I}_n$. $\mathcal{I}_c$ is a set of $n_c$ clustered images with predicted labels $\hat{\mathcal{Y}}_c$, while $\mathcal{I}_n$ is a set of images that could potentially be clustered in future iterations. In each iteration, we improve the predicted labels in the clustered set ($\mathcal{I}_c$) and add new samples from the non-clustered set ($\mathcal{I}_n$) into the clustered set. We do this via several stages, using algorithms and tools that have previously not been applied to these specific tasks. We also combine the various stages in a unified manner, iteratively improving the features and clusters. \smallskip \noindent\textbf{Network training}: Our network consists of a feature extractor $f(\cdot)$ and classifier $g(\cdot)$. We train the network in a supervised manner using the two sets of images and labels, labeled set $\left(\mathcal{I}_s,\mathcal{Y}_s\right)$ and clustered set $\big(\mathcal{I}_c,\hat{\mathcal{Y}}_c\big)$.
\smallskip \noindent\textbf{Out-of-distribution detection}: We use $f(\cdot)$ to extract features for $\mathcal{I}_c$ and $\mathcal{I}_n$ and perform OOD detection. This stage predicts samples from $\mathcal{I}_n$ to be in-distribution or OOD with respect to the clusters in $\mathcal{I}_c$. The in-distribution samples are classified using the classifier and attributed to $\mathcal{I}_c$ with the corresponding predicted labels. \smallskip \noindent\textbf{Clustering}: We use the K-means algorithm to overcluster the remaining samples in $\mathcal{I}_n$. These clusters are then added to the clustered set $\mathcal{I}_c$ with a new set of labels based on the cluster labels. At the end of this stage all samples have a predicted label and the non-clustered set, $\mathcal{I}_n$, is empty. \smallskip \noindent\textbf{Merge and refine}: To deal with overclustering we perform merge and refine operations. Specifically, coherent clusters are merged to reduce the number of clusters. This reduces the purity of the clusters, and hence a refine operation is performed which discards impure clusters or samples unlikely to belong to their current clusters. The rejected samples are added to the non-clustered set $\mathcal{I}_n$. At the end of this stage, we have a new clustered set $\mathcal{I}_c$, along with its predicted labels $\hat{\mathcal{Y}_c}$, and a non-clustered set $\mathcal{I}_n$. These four steps are then repeated. We now describe each step in detail. \subsection{Network Training} \label{ss:step1_classifier} This stage involves training the network using the cluster labels $\hat{\mathcal{Y}_c}$ corresponding to $\mathcal{I}_c$ and $\mathcal{Y}_s$ corresponding to $\mathcal{I}_s$ in a supervised manner. The network consists of a feature generation network $f(\cdot)$ parameterized by $\theta_f$, constructed using an off-the-shelf CNN followed by a few fully connected layers to reduce the dimensionality.
The classification part of the network $g(\cdot)$, parameterized by $\theta_g$, involves a fully connected layer followed by the softmax function. The parameters of the network $\theta_f,\theta_g$ are optimized as per the following expression: \vspace{-0.05in}\begin{multline} \displaystyle \min_{\theta_f,\theta_g}\Bigg[ \frac{1}{n_c}\sum_{i=1}^{n_c}\mathcal{L}\left(g_{\theta_g} (f_{\theta_f}(I_{c_i})),\hat{y}_{c_i}\right) + \\ \frac{1}{n_s}\sum_{j=1}^{n_s}\mathcal{L}\left(g_{\theta_g}(f_{\theta_f}(I_{s_j})),y_{s_j}\right) \Bigg], \label{eq:cross_entropy_loss} \end{multline} \vspace{-0.05in}where $\mathcal{L}$ is the cross-entropy loss, $I_{c_i}$ and $\hat{y}_{c_i}$ are the $i^\text{th}$ images and labels from $\mathcal{I}_c$ and $\hat{\mathcal{Y}_c}$ respectively, while $I_{s_j}$ and $y_{s_j}$ are the $j^\text{th}$ images and labels from $\mathcal{I}_s$ and ${\mathcal{Y}_s}$ respectively. Subsequent to network training, we use the feature generation network to extract the features $\mathcal{X}_c$ and $\mathcal{X}_n$ corresponding to the clustered set of images $\mathcal{I}_c$, and non-clustered set of images $\mathcal{I}_n$, respectively. \subsection{Out-of-distribution detection} \label{ss:anomaly} We utilize the WTA hashing algorithm proposed by Yagnik \etal \cite{yagnik2011power}, who show that ordinal representations of feature vectors provide strong nonlinear transformations and demonstrate their algorithm's capability on downstream tasks, such as similarity search and classification. They show that such rank correlation measures are robust to noise, unlike cosine- or Euclidean-based distances. Additionally, Euclidean/cosine distances are highly sensitive to the thresholds used for OOD detection, which would require careful hyperparameter tuning for different dataset setups. We refer readers to their work or our supplementary material for a detailed explanation of the WTA hash.
The WTA hash maps a $d$-dimensional feature vector $\boldsymbol{x}$ to an $H$-dimensional vector $\boldsymbol{x}_H$ with elements lying in $[K]$. Using this hash for each feature vector, we then represent the distance between any two feature vectors $\boldsymbol{x}$ and $\boldsymbol{y}$ as $d(\boldsymbol{x},\boldsymbol{y})$, the Hamming distance between their corresponding hashes. For each class in the set $\hat{\mathcal{Y}}_c = \{\hat{y}_i \in [N], \ i \in [n_c]\}$ ($n_c$ is the number of samples in the clustered set, $N$ is the number of clusters), we obtain OOD detectors in the following manner: For a cluster with label $j \in [N]$ and a feature sample $i \in [n_c]$ in the non-clustered set, represented by $\boldsymbol{x}_{n_i} \in \mathcal{X}_n$, we compute the distance of $\boldsymbol{x}_{n_i}$ from each sample in cluster $j$. We then average these sample distances to get the distance of sample $\boldsymbol{x}_{n_i}$ from cluster $j$, \ie, \begin{equation} d_j(\boldsymbol{x}_{n_i}) = \frac{1}{N_j}\displaystyle \sum_{k=1, \hat{y}_{c_k}=j}^{n_c}d(\boldsymbol{x}_{n_i},\boldsymbol{x}_{c_k}), \end{equation} where $N_j$ represents the number of samples in cluster $j$. The detector then classifies $\boldsymbol{x}_{n_i}$ as an in-distribution sample of class $j$ if $d_j(\boldsymbol{x}_{n_i}) < t_j$ for a threshold $t_j$ for class $j$. The threshold $t_j$ is computed using the intra-cluster distances of each cluster $j$, setting a high percentile of these distances as the threshold. By doing so, the algorithm learns different thresholds for different clusters and is controlled only by a single percentile scalar, which generalizes across different dataset setups. A test sample $\boldsymbol{x}_{n_i}$ is classified as OOD with respect to $\mathcal{X}_c$ if all of the cluster detectors classify it as OOD. All in-distribution samples are classified using our classifier, and their corresponding labels lie in $\hat{\mathcal{Y}_c}$.
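A minimal sketch of the WTA hash and the Hamming distance used above (our illustration of the scheme of Yagnik \etal; the window size $K$ and the number of hashes $H$ are hyperparameters, and `make_windows` is a hypothetical helper name):

```python
import random

def make_windows(d, H=64, K=4, seed=0):
    """Draw H random windows of K distinct coordinate indices each;
    the same windows are shared by all feature vectors being hashed."""
    rng = random.Random(seed)
    return [rng.sample(range(d), K) for _ in range(H)]

def wta_hash(x, windows):
    """Winner-Take-All hash: for each window, record the position
    (an element of [K]) of the largest coordinate of x inside that
    window.  Only the ordering of coordinates matters, so the code is
    invariant under any monotonic transformation of x."""
    return [max(range(len(w)), key=lambda t: x[w[t]]) for w in windows]

def hamming(h1, h2):
    """Distance between two codes = number of differing entries."""
    return sum(a != b for a, b in zip(h1, h2))
```

Per-cluster OOD thresholds are then set from a high percentile of the intra-cluster Hamming distances, as described above.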
The samples are subsequently added to $\mathcal{I}_c$. \subsection{Clustering} \vspace{-0.05in} We now overcluster the samples remaining in $\mathcal{I}_n$ by running K-Means on the feature set $\mathcal{X}_n$. We form a large number of clusters in order to obtain clusters with high purity. Once the clusters are obtained, they are added to the clustered set $\mathcal{I}_c$. Their new labels, corresponding to the cluster labels, are added to $\hat{\mathcal{Y}_c}$. At the end of this stage, no samples remain in the non-clustered set. However, generating a large number of clusters leaves the clustered set highly fragmented. In order to reduce the number of clusters and improve their purity, we perform a merge and refine step, as explained in the following section. \subsection{Merge and refine} \label{ss:step1_merge_refine} \vspace{-0.05in} Overclustering results in a highly fragmented set of clusters, many of which could belong to the same class. To deal with this, a merge step is performed. Since any merge step short of an ideal one results in impure clusters, a refine step is also performed to improve purity. We discuss these in detail below. \vspace{-0.1in} \subsubsection{Merge} \vspace{-0.05in} We merge clusters in $\mathcal{I}_c$ using a 1-Nearest Neighbour graph. We obtain a centroid $\boldsymbol{u}_j$ for each cluster $j \in [N]$ ($N$ is the number of clusters) by averaging the features of all samples in the cluster. Using the hashing described in Section \ref{ss:anomaly} for each centroid, we define the distance $d(\boldsymbol{u}_i,\boldsymbol{u}_j)$ between two centroid feature vectors $\boldsymbol{u}_i$ and $\boldsymbol{u}_j$ as the Hamming distance between their corresponding hashes ${\boldsymbol{u}_i}_H$ and ${\boldsymbol{u}_j}_H$. We use the centroid distances between every pair of clusters to create a directed 1-Nearest Neighbour graph, with each node representing a cluster centroid.
A directed edge is present from one node to another if the latter is the nearest-neighbour centroid of the former. Strongly connected components are computed for this graph, and each such component is considered to be a merged cluster. This stage generates a new set of labels, $\hat{\mathcal{Y}_c}$, for the clustered set $\mathcal{I}_c$. \vspace{-0.1in} \subsubsection{Refine} \label{ss:refine} \vspace{-0.05in} As the merge step is not ideal, it reduces the average purity of the clusters. In order to increase it, a refine step is performed to remove impure samples from each cluster. As the ground-truth labels are unknown, SVM classifiers are leveraged to obtain a proxy measure for purity. \cite{malisiewicz-iccv11,shrivastavaSA11} show that weak SVM classifiers can be fit to a single positive instance with the remaining samples as negatives. We therefore use this formulation of weak classifiers, fitting them to the majority-class distribution of a cluster and marking the samples which do not belong to the majority class as negatives.\\ For each cluster $j \in [N]$, an SVM classifier $Q_j$ is trained in a one-vs-all manner, where the positive samples belong to cluster $j$ while the rest of the samples in the clustered set are negatives. After training, $Q_j$ is used to predict positive or negative labels for the samples in cluster $j$. The samples predicted negative are rejected and added back into the non-clustered set $\mathcal{I}_n$. If the percentage of samples in cluster $j$ predicted positive by $Q_j$ is below a threshold $\epsilon$, the entire cluster is discarded and all its samples are added to $\mathcal{I}_n$. Additionally, some refined clusters might have very few samples, and the class distribution for training the network in the next iteration could become long-tailed.
In order to avoid this issue, we discard clusters below a size threshold $\tau$, returning their samples to $\mathcal{I}_n$. After the refine step we have a new set of clustered images with their corresponding pseudo-labels. These are used along with the seen-class training data $\mathcal{I}_s$ to train the network for the next iteration. \subsection{Cluster set initialization} \vspace{-0.05in} The start of every iteration of our pipeline requires a clustered set $\mathcal{I}_c$ along with the seen labeled set $\mathcal{I}_s$. For the first iteration, as we do not have any pseudo-labels for the discovery set $\mathcal{I}_t$, we train our network using only the set $\mathcal{I}_s$ and the corresponding ground-truth labels $\mathcal{Y}_s$. Our OOD detection step then determines whether images in $\mathcal{I}_t$ belong to the seen classes $\mathcal{Y}_s$ or not. In-distribution samples are classified and added to the clustered set $\mathcal{I}_c$, while OOD samples are added to $\mathcal{I}_n$. At the end of this stage, we have a clustered and a non-clustered set for the discovery-set images. The rest of the stages of our pipeline, \ie, K-Means clustering, merging, and refinement, proceed as explained in the previous sections using the initialized $\mathcal{I}_c$ and $\mathcal{I}_n$. The refine step then produces a set of images in the clustered set whose cluster labels serve as pseudo-labels to train the network for the next iteration. Additionally, at every iteration $t$, the feature extractor is initialized with the weights from iteration $t-1$. The classifier is replaced with a new, randomly initialized linear layer, since the number of classes, which depends on the number of clusters $N$, changes across iterations. The algorithm then proceeds for a few iterations until the fraction of undiscovered samples falls below a small threshold.
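The merge step of Section \ref{ss:step1_merge_refine} can be sketched as follows (a minimal sketch assuming SciPy is available; helper names are our own, and the centroid hash codes are assumed to be precomputed with the WTA scheme of Section \ref{ss:anomaly}):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def merge_clusters(centroid_hashes):
    """Build the directed 1-NN graph over cluster centroids (Hamming
    distance on their hash codes) and merge its strongly connected
    components.  Returns the new cluster id for each old cluster."""
    N = len(centroid_hashes)
    D = np.array([[np.count_nonzero(a != b) for b in centroid_hashes]
                  for a in centroid_hashes])
    np.fill_diagonal(D, D.max() + 1)       # a centroid is not its own NN
    nn = D.argmin(axis=1)                  # nearest-neighbour centroid
    graph = csr_matrix((np.ones(N), (np.arange(N), nn)), shape=(N, N))
    _, labels = connected_components(graph, directed=True,
                                     connection='strong')
    return labels
```

For instance, two clusters that are mutual nearest neighbours form a strongly connected component and are merged into a single cluster, while a cluster whose nearest neighbour points elsewhere remains its own component.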
\section{Experiments} \label{sec:experiments} We now evaluate our approach on real-world dataset setups while providing detailed analysis of the several components of our pipeline. In Section \ref{ss:exp_details}, we describe the implementation details. Our labeled dataset consists of images from 4 real datasets as well as from certain GANs trained on these real datasets, as shown in Table \ref{tab:default_set}. Together, they make up 12 classes in the labeled set. Our discovery set consists of additional images from these 12 classes as well as from 8 unseen GANs, as shown in Table~\ref{tab:default_set}, making up a total of 20 classes. We use this dataset by default for all our experiments unless mentioned otherwise. Note that the same GAN trained on different datasets corresponds to different classes. Section \ref{ss:baselines} shows extensive comparisons with other related works on GAN attribution and real/fake image detection. Section \ref{ss:analysis} provides several insights into our algorithm and also analyzes several components of our pipeline. Subsequently, we examine the results of our pipeline on varying dataset setups. Section \ref{sss:num_gans} analyzes the number of GANs needed in our labeled set to reliably discover new GANs in the discovery set. Section \ref{ss:new_dataset} varies the number of unseen real datasets as well as the corresponding GANs in the discovery set and shows the effectiveness of our approach in discovering these new classes. \begin{table} \centering \footnotesize \renewcommand{\arraystretch}{1} \renewcommand{\tabcolsep}{6pt} \centering \caption{List of GANs trained on the corresponding 4 real datasets used in our labeled and discovery set.
Note that the same GAN can be trained on multiple datasets.} \begin{tabular}{@{}lll@{}} \toprule \textbf{Dataset} & \textbf{Labeled GANs} & \textbf{Discovery GANs} \\ \midrule \makecell[tl]{CelebA\cite{liu2015faceattributes}}& \makecell[tl]{StarGAN\cite{choi2018stargan},\\ AttGAN\cite{he2019attgan}}& \makecell[tl]{StarGAN, BEGAN\cite{berthelot2017began}, \\ProGAN\cite{karras2017progressive}, SNGAN \cite{miyato2018spectral}, \\AttGAN, MMDGAN\cite{li2017mmd}, \\CramerGAN\cite{bellemare2017cramer}}\\ \hdashline \makecell[tl]{CelebA-HQ\\\cite{karras2017progressive}} & \makecell[tl]{ProGAN, \\StyleGAN\cite{karras2019style}} & \makecell[tl]{ProGAN, StyleGAN, \\ResNet19\cite{kurach2019large}}\\\hdashline ImageNet~\cite{deng2009imagenet} & \makecell[tl]{BigGAN\cite{brock2018large}, \\S3GAN\cite{lucic2019high}}& BigGAN, S3GAN, SNGAN \\ \hdashline \makecell[tl]{LSUN\\Bedroom~\cite{yu2015lsun}}& \makecell[tl]{ProGAN, \\MMDGAN} & \makecell[tl]{ProGAN, MMDGAN,\\ SNGAN}\\ \bottomrule \end{tabular} \vspace{-0.2in} \label{tab:default_set} \end{table} \subsection{Experimental details} \label{ss:exp_details} \vspace{-0.05in} For our feature extractor, we use the standard ResNet-50~\cite{he2016deep} backbone. We add 3 fully-connected layers to reduce the dimensionality of the feature vector to $128$. Another fully connected layer is used as the classification head on top of the feature extractor. The full network is trained in a supervised manner using the cross-entropy loss. Every image is resized and center-cropped to $256\times256$ except when specified otherwise. We use a batch size of $256$ for each iteration of the pipeline. The weights are optimized using the Adam optimizer with $\beta_1=0.9$, $\beta_2=0.999$ and a fixed learning rate of $0.0001$ throughout our training.
For the first iteration, we train our network for $50$ epochs, while for subsequent iterations we train for $100$ epochs, as the network takes longer to converge with additionally discovered samples carrying noisy pseudo-labels. For our OOD detection step using the WTA hash described in Section \ref{ss:anomaly}, we use $H=2048$ hashes and a window size of $K=2$. Our clustering stage uses the K-Means algorithm with $500$ clusters initialized using K-Means++~\cite{arthur2006k}. For the refine stage, we train SVMs with the RBF kernel. We set the threshold $\epsilon=0.5$ for dropping a cluster, as described in Section \ref{ss:refine}. To avoid training on clusters with very few samples, we discard clusters with fewer than $100$ members. \textbf{Metrics and analysis:} We evaluate our pipeline on two clustering metrics. We use Average Purity as a metric for evaluating the overall purity of our clusters with respect to the true labels of the discovery set. We also use Normalized Mutual Information (NMI), another commonly used clustering metric. At various stages or iterations of our pipeline, a small fraction of the discovery-set samples remain non-clustered; in order to provide a fair evaluation across different experiments/baselines, we attribute all the non-clustered samples to their nearest clusters and evaluate on the full discovery set, unless mentioned otherwise. \begin{table} \centering \footnotesize \renewcommand{\arraystretch}{1.1} \renewcommand{\tabcolsep}{3pt} \setlength{\cmidrulewidth}{0.01em} \caption{Comparison of our method with baselines derived from ~\cite{yu2019attributing,wang2020cnn, han2020automatically}. We try two fixed settings for the number of clusters, $k=20,500$, and finally let our approach discover a suitable number of clusters, $k=209$. Compared to the baselines, we obtain the highest Average Purity and NMI when the number of clusters is $k=209$. Ours [only $\S$3.2] corresponds to a single iteration of network training and clustering.
The fully supervised setup is the upper bound when all classes are seen.} \vspace{0.03in} \resizebox{\linewidth}{!}{ \begin{tabular}{@{}lcccccc@{}} \toprule \multirow{2}{*}{Method} & \multicolumn{2}{c}{$k=20$} & \multicolumn{2}{c}{$k=500$} &\multicolumn{2}{c}{$k=209$}\\ \cmidrule[\cmidrulewidth](l){2-3} \cmidrule[\cmidrulewidth](l){4-5} \cmidrule[\cmidrulewidth](l){6-7} & Avg. Purity & NMI & Avg. Purity & NMI & Avg. Purity & NMI \\ \midrule Yu \etal ~\cite{yu2019attributing} & 0.656 & 0.706 & 0.759 & 0.518 & 0.734 &0.554 \\ {Han \etal ~\cite{han2020automatically}} & 0.680 & 0.709 & - & - & - & -\\ {Wang \etal ~\cite{wang2020cnn}} & 0.710 & 0.759 & 0.857 & 0.575 &0.840 & 0.624\\ Ours [only $\S$3.2] & 0.661 & 0.743 & 0.814 & 0.561 & 0.795 & 0.609\\ Ours &- &- & - &- & \textbf{0.861} & \textbf{0.724} \\ \hdashline Fully supervised & 0.928 & 0.929 & 0.996 & 0.658 & 0.997 &0.728\\ \bottomrule \end{tabular} } \vspace{-0.1in} \label{tab:baselines} \end{table} \subsection{Benchmark Evaluation} \label{ss:baselines} \vspace{-0.05in} As there exists no prior work dealing with open-world GAN discovery, we provide baselines by modifying recent works involving GAN attribution~\cite{yu2019attributing} and real/fake image detection~\cite{wang2020cnn}. We additionally include the recent approach of~\cite{han2020automatically}, which deals with novel category discovery. Yu~\etal~\cite{yu2019attributing} deal with GAN attribution in a closed-world setup, and their method hence cannot be directly incorporated into our problem setup. Therefore, we train their network on our labeled set and obtain features for our discovery set. We cluster the features using K-Means for 3 different values of $k$. $k=20$ corresponds to the true number of classes in our test set, while $k=500$ corresponds to an overclustered regime. $k=209$ represents the number of clusters our algorithm returns at the end of 4 iterations.
We compare across multiple values of $k$, as Average Purity and NMI are known to be sensitive to the number of clusters. Wang~\etal~\cite{wang2020cnn} tackle real/fake detection and again cannot be directly used in our problem setup. Therefore, we modify their classification head to be multiclass, train their network on our labeled set using their training and preprocessing strategies, and extract the features for our discovery set. We provide three analogous baselines by clustering these features in the same manner as for~\cite{yu2019attributing}. Han~\etal~\cite{han2020automatically} discover novel visual categories but require the discovery set to contain only unseen classes. We therefore use our anomaly detection approach on their features to separate the seen and unseen classes, whose cluster assignments are then predicted separately using their approach. As they require knowledge of the number of unseen classes for their predictions, we compare with the $k=20$ setup, which corresponds to the true number of classes. Finally, we provide a baseline for our approach by performing network training and clustering the feature space into $k=20,500,209$ clusters. We also provide an upper bound for our approach using a fully supervised case, where the labeled set consists of images from all classes in the discovery set and clustering is performed on the generated features. The results for these comparisons are provided in Table~\ref{tab:baselines}. Our algorithm achieves the highest Average Purity and NMI compared to all other baselines for the case of $k=209$. For $k=20,500$, \cite{wang2020cnn} outperforms a single iteration of our network training and clustering, which for this comparison does not involve OOD detection, merging, or refinement. However, at the end of 4 iterations, for the case of $k=209$, we significantly outperform all baselines in terms of both Average Purity and NMI. The fully supervised approach provides an upper bound in all 3 cases.
Note that we do not compare across different numbers of clusters, since Average Purity generally increases with more clusters while NMI decreases. \begin{table} \centering \footnotesize \renewcommand{\arraystretch}{1.2} \renewcommand{\tabcolsep}{6pt} \caption{We analyze the effect of the various stages of our pipeline. The number of clusters decreases in the merge step with a negligible drop in Avg. Purity and increased NMI. The refine step further increases the NMI and Avg. Purity by a large margin for the discovered samples. Note that the numbers corresponding to all samples in the refine step are included for the sake of fair comparison but are not actually computed by our approach.} \begin{tabular}{@{}lccc@{}} \toprule Stage &No. of clusters& Avg. Purity& NMI\\ \midrule Clustering & 512 & 0.793 & 0.682\\ Merge & 391 & 0.792 & 0.689\\ Refine (Discovered) &111&\textbf{0.849}&\textbf{0.838}\\ Refine (All) &111&0.772&0.720\\ \bottomrule \end{tabular} \vspace{-0.1in} \label{tab:stages} \end{table} \begin{table} \centering \footnotesize \renewcommand{\arraystretch}{1.1} \renewcommand{\tabcolsep}{6pt} \caption{We evaluate our algorithm over multiple iterations. Avg. Purity, NMI and the \% of discovered samples increase progressively.} \begin{tabular}{@{}ccccc@{}} \toprule Iteration & \makecell{Avg.\\ Purity}& NMI &\makecell{\% Samples\\ Clustered} & \makecell{Sources\\Discovered}\\ \midrule 1& 0.772 &0.720&72.5&16/20\\ 2& 0.853 &0.724&88.8&20/20\\ 3& 0.861 &0.724&92.3&20/20\\ 4& \textbf{0.861} & \textbf{0.724}&\textbf{93.7}&\textbf{20/20}\\ \bottomrule \end{tabular} \vspace{-0.15in} \label{tab:iteration} \end{table} \vspace{-3pt} \subsection{Ablation Study} \label{ss:analysis} \vspace{-0.05in} Our algorithm is fairly robust to the various hyperparameter values used in its stages. Experiments with varying hyperparameter values are shown in the supplementary material. In this section, we analyze the importance of each stage and the progress of our pipeline over multiple iterations.
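For reference, the two clustering metrics used in these evaluations can be computed as follows (a sketch assuming scikit-learn; Average Purity is implemented here as the unweighted per-cluster majority fraction, one common definition, which may differ from a size-weighted variant):

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def average_purity(true_labels, cluster_labels):
    """Average Purity: the fraction of samples belonging to the majority
    ground-truth class of their cluster, averaged over clusters."""
    true_labels = np.asarray(true_labels)
    cluster_labels = np.asarray(cluster_labels)
    purities = [np.bincount(true_labels[cluster_labels == c]).max()
                / np.count_nonzero(cluster_labels == c)
                for c in np.unique(cluster_labels)]
    return float(np.mean(purities))

true = [0, 0, 0, 1, 1, 1]         # ground-truth source labels
pred = [0, 0, 1, 1, 1, 1]         # predicted cluster assignments
purity = average_purity(true, pred)
nmi = normalized_mutual_info_score(true, pred)
```

Both metrics are invariant to a relabeling of the clusters, so a perfect clustering under any permutation of cluster ids attains Average Purity and NMI of 1.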
We evaluate the effect of the Clustering, Merge, and Refine stages in the first iteration of our pipeline. The results are summarized in Table \ref{tab:stages}. Note that the Average Purity drops only slightly in the merge step while the number of clusters drops significantly, demonstrating the effectiveness of the 1-NN merge step explained in Section \ref{ss:step1_merge_refine}. From the merge to the refine step, Average Purity over the full discovery set drops, since many samples remain undiscovered and are na\"ively attributed to the nearest cluster for evaluation. However, the metrics evaluated on only the discovered samples increase significantly, which shows that the SVMs can identify the pure clusters and samples while rejecting the impure ones. Next, we evaluate our pipeline over multiple iterations. We show the results in Table \ref{tab:iteration}. The pipeline discovers only a fraction of the images and GANs in the first iteration, while in subsequent iterations more samples are added to the clustered set and more GANs are discovered. Average Purity and NMI both increase or remain constant over the four iterations, which shows the effectiveness of our approach in discovering as well as improving clusters. Our OOD stage obtains an accuracy of $86.97\%$ for seen classes, $99.87\%$ for unseen classes and $92.87\%$ overall. The high unseen-class accuracy is due to setting a conservative threshold to reduce false-negative errors, which do not get corrected in subsequent stages. \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/Cluster_Collage.pdf} \caption{Samples from clusters discovered by our approach for unseen GANs, with the majority class in parentheses.
It can be noticed that the clusters are not merely capturing object structure and semantics, but rather the underlying source.} \label{fig:cluster_vis} \vspace{-0.15in} \end{figure*} \vspace{-0.03in} \subsection{Varying dataset setups} \vspace{-0.05in} In this section, we provide an analysis by varying the dataset setups, either in terms of the number of GANs per real dataset in the labeled set, or by adding new real datasets and GANs trained on them to the unlabeled set. \vspace{-0.03in} \subsubsection{Effect of number of GANs per dataset} \label{sss:num_gans} \vspace{-0.05in} We answer the question of how many GANs per dataset are needed in our labeled set to reliably discover new ones in our discovery set. We have 3 labeled dataset setups: \textbf{1)} Our first setup consists of 4 real datasets: CelebA, CelebA-HQ, ImageNet and LSUN-Bedroom, with no GANs; \textbf{2)} In addition to the 4 datasets in the first setup, our second setup has 4 GANs: StarGAN, ProGAN, BigGAN and MMDGAN, trained on the respective datasets; \textbf{3)} In addition to the previous setup, we add one more GAN per dataset: AttGAN, StyleGAN, S3GAN and ProGAN, respectively. In order to fairly evaluate the 3 setups, we use a common discovery set consisting of all the classes in the second setup. Additionally, we have a set of GANs not present in any of the 3 labeled sets, namely, BEGAN, ResNet19 (from CompareGAN \cite{kurach2019large}), SNGAN and CramerGAN, corresponding to the 4 real datasets. The results are summarized in Table~\ref{tab:ngan}. Since its labeled set contains the most information, the third setup performs best on both Average Purity and NMI. Despite having only a single GAN per dataset, the second setup performs fairly well on the two metrics. On the other hand, the first setup, which does not have any GANs in the labeled set, fails to discover new ones, as it has not seen any GAN-related artifacts in the labeled set and thus cannot discriminate based on them during discovery.
\begin{table}[t] \centering \footnotesize \renewcommand{\arraystretch}{1.1} \renewcommand{\tabcolsep}{6pt} \caption{Varying the number of GANs per dataset. We obtain the best metrics with the maximum number of GANs per dataset, although discovering fewer samples compared to the first setup.} \begin{tabular}{@{}ccccc@{}} \toprule \makecell{\# of\\GANs} & \makecell{Avg.\\ Purity}& NMI &\makecell{\% Samples\\ Clustered} & \makecell{Sources\\Discovered}\\ \midrule 0 &0.497&0.559&\textbf{99.78}&8/12\\ 1 & 0.897 & 0.772 & 94.48&11/12\\ 2 & \textbf{0.954} & \textbf{0.789} & 95.98&\textbf{11/12}\\ \bottomrule \end{tabular} \label{tab:ngan} \vspace{-0.15in} \end{table} \begin{table} \centering \footnotesize \renewcommand{\arraystretch}{1.1} \renewcommand{\tabcolsep}{4pt} \centering \caption{Effect of adding new datasets and GANs trained on new datasets at test time. (*) provides the corresponding comparison when the real datasets are present in the labeled set (Sec.~\ref{ss:new_dataset}).} \resizebox{\linewidth}{!}{ \begin{tabular}{@{}lcccc@{}} \toprule Test Set & Purity & NMI & \makecell{Sources\\ Discovered} & \makecell{\# of\\ Clusters} \\ \midrule New Real &0.942 & 0.813& 14/16 & 103\\ New Real* & \textbf{0.989}& \textbf{0.989}&\textbf{15/16} &56\\ \midrule New GANs & \textbf{0.976} & 0.828 & \textbf{16/16} & 105\\ New GANs* & 0.95 & \textbf{0.835} & 15/16 & 87\\ \midrule New Real + New GANs& 0.850 & 0.730 & \textbf{20/20} & 141\\ New Real + New GANs* & \textbf{0.977} & \textbf{0.856} & 19/20 & 128\\ \bottomrule \end{tabular} } \vspace{-0.2in} \label{tab:new_dat} \end{table} \vspace{-0.03in} \subsubsection{Discovering new dataset images} \label{ss:new_dataset} \vspace{-0.05in} In an open-world setting, the discovery set may contain images from new real datasets not seen in the labeled set, along with GAN-generated images corresponding to these datasets. To see whether the proposed approach can handle these situations, we perform experiments covering 3 setups.
Each setup uses the default labeled set in Table \ref{tab:default_set} but adds classes to the discovery set as follows: \textbf{1)} New real datasets: DTD~\cite{cimpoi14describing}, FashionGen~\cite{rostamzadeh2018fashion}, and the Night and Shoes datasets (from Pix2Pix~\cite{isola2017image}); \textbf{2)} GANs on new real classes: new GANs trained on the four new real-world datasets, namely, ProGAN on DTD, DCGAN~\cite{radford2015unsupervised} on FashionGen, and a separate Pix2Pix on the Night and Shoes datasets; \textbf{3)} New Real + New GANs: a combination of the GANs and real datasets from the previous two setups. In order to provide a benchmark for comparison, we show the performance when the four real datasets are in the labeled set (marked with a *). The results are shown in Table \ref{tab:new_dat}. In the first setup, the goal is to discover new dataset sources. Our approach discovers most of the sources with high Purity and NMI, although its performance is lower than the benchmark, as expected, because the labeled set for the benchmark contains all the classes present in the discovery set. In the second setup, our method discovers all unseen GANs even though they are trained on unseen datasets, unlike the benchmark, which does slightly worse in terms of Avg. Purity and the number of GANs discovered, likely because of the reduced number of final clusters. The third setup is more challenging due to the addition of both unseen datasets and GANs trained on them to the discovery set. However, our approach discovers all unseen sources with reliable Average Purity and NMI, while its corresponding benchmark does not discover all sources, possibly because it restricts itself to fewer but purer clusters with higher NMI. \vspace{-0.03in} \subsubsection{Online discovery} \vspace{-0.05in} Here we extend our approach to an online setup where new GANs are added to the discovery set in an online fashion, in the chronological order in which they were published.
Our setup consists of 9 GANs from 4 real sources in our labeled set and 4 new GANs in the discovery set. We additionally introduce 2 sets of 3 GANs each in an online fashion. Details of the datasets are provided in the supplementary material. We show our results in Table \ref{tab:online}. We train our setup for 2 iterations with the initial discovery set of 17 sources. It can be seen that Average Purity increases in the second iteration, and an additional GAN source is discovered. When new GANs are introduced in iterations 3 and 5, the performance drops, as the network has not been trained on the new classes. However, after a single iteration the Average Purity increases significantly and NMI drops only slightly, even though the number of clusters increases. At the end of 6 iterations, we discover all but one of the GAN sources added on the fly. This shows that our approach works in an online setting, continuously discovering new GANs iteratively. \begin{table} \centering \footnotesize \renewcommand{\arraystretch}{1.2} \renewcommand{\tabcolsep}{6pt} \centering \caption{Evaluation of our algorithm in an online setup. We have 17 sources in our initial discovery set and add 3 sources each at iterations 3 and 5, causing an initial drop in results. The pipeline eventually performs better after training on the new samples.} \begin{tabular}{@{}ccccc@{}} \toprule Iteration & \makecell{Avg.\\ Purity}& NMI & \makecell{\% Samples\\ Clustered} & \makecell{Sources\\ Discovered}\\ \midrule 1& 0.846 &0.826&89.39&15/17\\ 2& 0.916 &0.798&92.42&16/17\\ \hdashline 3& 0.805 &0.771&95.74&18/20\\ 4& 0.805 &0.744&96.87&19/20\\ \hdashline 5& 0.731 &0.716&95.68&22/23\\ 6& 0.802 &0.705&95.36&22/23\\ \bottomrule \end{tabular} \vspace{-0.2in} \label{tab:online} \end{table} \subsection{Real/Fake detection} \vspace{-0.05in} We now apply our method to the common problem of real/fake detection.
We use the binary classification model from~\cite{wang2020cnn}, but trained on our labeled set, and use majority voting to mark a cluster and all its constituent images as real or fake. We compare this with using the model directly on all samples, and report the performance in Table~\ref{tab:real_fake} for our original setup and for the three setups defined in Sec. \ref{ss:new_dataset}. We observe that in most settings we outperform the standard sample-wise predictions. We attribute this to the fact that the clustering is able to correct the model's mistakes, as it groups samples according to their source. As the cluster assignments are less accurate due to the increased difficulty of the final setup, our performance there is lower but nevertheless competitive with \cite{wang2020cnn}. \subsection{Qualitative analysis of clusters} \vspace{-0.05in} We visually inspect a few clusters generated by our method to see whether they focus on the semantic information or on the GAN source. To this end, we visualize random images from some of the highly pure clusters corresponding to unseen GANs trained on ImageNet, LSUN-Bedroom, CelebA and CelebA-HQ. As evident from Fig.~\ref{fig:cluster_vis}, the cluster in the case of SNGAN-ImageNet does not seem to be object-specific, while the cluster for SNGAN-LSUN does not focus on specific room decor, lighting conditions, layout, etc. Similarly, the clusters corresponding to the face datasets seem to focus on the GAN source rather than on specific facial attributes such as expression, orientation, or age. In addition to visualizing these clusters, we also add a qualitative analysis of the merge step in the supplementary material, showing sub-clusters that are merged by our pipeline.
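The cluster-level majority voting used for real/fake detection above can be sketched as follows (a minimal sketch; names are our own, and breaking ties toward fake is an arbitrary choice on our part):

```python
import numpy as np

def cluster_majority_vote(sample_preds, cluster_labels):
    """Replace each sample's real/fake prediction (0 = real, 1 = fake)
    with the majority prediction of its cluster, letting the clustering
    correct isolated per-sample mistakes."""
    sample_preds = np.asarray(sample_preds)
    cluster_labels = np.asarray(cluster_labels)
    voted = np.empty_like(sample_preds)
    for c in np.unique(cluster_labels):
        members = cluster_labels == c
        voted[members] = int(sample_preds[members].mean() >= 0.5)
    return voted
```

Since samples of the same cluster tend to come from the same source, a minority of misclassified samples inside a pure cluster is overruled by the cluster vote.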
\begin{table} \centering \footnotesize \renewcommand{\arraystretch}{1} \renewcommand{\tabcolsep}{6pt} \caption{We evaluate the real/fake detection accuracy (\%) using the clustering obtained from our network.} \begin{tabular}{@{}lcccc@{}} \toprule Approach & Original & New Real & New GANs & \makecell{New Real +\\New GANs}\\ \midrule Wang \etal ~\cite{wang2020cnn} & 92.56\% & 87.35\% &98.42\%&\textbf{89.09\%}\\ Ours & \textbf{98.62\%}&\textbf{89.84\%}&\textbf{99.10\%}&83.33\%\\ \bottomrule \end{tabular} \vspace{-0.2in} \label{tab:real_fake} \end{table} \vspace{-0.08in} \section{Conclusion} \vspace{-0.05in} We proposed the new problem of open-world GAN discovery and attribution. We presented an iterative approach to discover and attribute images from multiple GAN sources in a discovery set. Our framework discovers and groups GANs not seen during training by implicitly focusing on GAN-based fingerprints. We show ablation studies for the different components of our pipeline. We also show the generalization of our approach to various dataset setups and its extension to an online setting. As there have been no prior works addressing this problem, we compare with several baselines based on state-of-the-art related works and provide a strong benchmark for this task. Even though our approach works in an online setup, network training is an expensive step in each iteration. One potential direction for future work is to utilize approaches from the continual-learning literature~\cite{li2017learning} for faster training, in order to learn in a never-ending setup, discovering new GANs on-the-fly. We hope that, given the general formulation of its stages, our framework will be utilized for other similar tasks as well. To facilitate such exploration of different scenarios, we plan to release the toolset we have developed for our work to bolster future research in this area.
\smallskip {\small \noindent\textbf{Acknowledgements.} This project was partially funded by DARPA SemaFor (HR001119S0085), DARPA SAIL-ON (W911NF2020009), and an independent gift from Facebook AI.} {\small \bibliographystyle{ieee_fullname} \section{Additional experimental details} For our feature extractor, we use 3 fully-connected layers. Each fully connected layer is followed by a ReLU activation and Dropout with a drop probability of $0.5$ during the training phase for regularization. The first layer maps the input $2048$-dimensional vector to $512$ dimensions. The second layer maintains the number of activation units at $512$, while the third reduces it to $128$, the final dimension of the feature vector we use in subsequent stages. For all our experiments in the supplementary, we train on $128\times128$ sized images. We set a percentile threshold of $0.9$ for our out-of-distribution detection stage. Our cluster-merging algorithm using the 1-Nearest Neighbour graph is a two-stage procedure: it first merges the newly obtained K-Means clusters among themselves and then merges the entire clustered set. We adopt this two-stage procedure because K-Means overclusters the discovery set, which requires merging before combining with the clusters already in the clustered set. For training SVMs, we use the GPU-accelerated library ThunderSVM~\cite{wenthundersvm18}. \section{Additional dataset details} Table~\ref{tab:data} summarizes the class-wise train and test splits used across our experiments. We use a varying number of images for the different image sources to more closely simulate a real-world setup. Note that some train images are not used, depending on the dataset setup, where certain image sources may only belong to the discovery set. \\ Our online dataset setup, defined in Section 4.4.3 of the paper, consists of GANs in the chronological order in which they were published or introduced. We have an initial labeled and discovery set as defined in Table \ref{tab:online_set}.
After running our pipeline for 2 iterations on this set, we add 3 more GANs to the discovery set: BigGAN and SSGAN \cite{chen2019self}, both trained on ImageNet, and StyleGAN trained on CelebA-HQ. We run our pipeline for 2 more iterations on the new discovery set, and add 3 more GANs: ResNet19 and StarGAN-v2 \cite{choi2020stargan}, both trained on CelebA-HQ, and S3GAN trained on ImageNet. This is followed by 2 more iterations of network training, resulting in a total of 6 iterations for the full online setup. The numbers are as reported in Section 4.4.3 of the main paper. \begin{table} \renewcommand{\arraystretch}{1.1} \renewcommand{\tabcolsep}{6pt} \centering \small \caption{List of GANs trained on the corresponding real datasets used in our labeled and discovery sets. Note that the same GAN can be trained on multiple datasets.} \begin{tabular}{@{}llrr@{}} \toprule Dataset& Image Source& \makecell[r]{\# of Images\\(Train)} & \makecell[r]{\# of Images\\ (Test)} \\ \midrule \multirow{8}{*}{CelebA}&Real&$20k$&$10k$\\ &StarGAN&$20k$&$5k$\\ &AttGAN&$20k$&$10k$\\ &BEGAN&$20k$&$10k$\\ &ProGAN&$20k$&$10k$\\ &SNGAN&$20k$&$10k$\\ &MMDGAN&$20k$&$5k$\\ &CramerGAN&$20k$&$10k$\\ \midrule \multirow{4}{*}{CelebA-HQ}&Real&$20k$&$10k$\\ &ProGAN&$20k$&$10k$\\ &StyleGAN&$20k$&$5k$\\ &ResNet19&$20k$&$10k$\\ \midrule \multirow{4}{*}{ImageNet}&Real&$20k$&$10k$\\ &BigGAN&$20k$&$5k$\\ &S3GAN&$20k$&$10k$\\ &SNGAN&$15k$&$10k$\\ \midrule \multirow{5}{*}{LSUN-Bedroom}&Real&$20k$&$5k$\\ &ProGAN&$20k$&$10k$\\ &MMDGAN&$20k$&$5k$\\ &SNGAN&$20k$&$3k$\\ &CramerGAN&$20k$&$10k$\\ \midrule \multirow{2}{*}{DTD}&Real&-&$10k$\\ &ProGAN&$20k$&$5k$\\ \midrule \multirow{2}{*}{FashionGen}&Real&$20k$&$10k$\\ &DCGAN&$20k$&$5k$\\ \midrule \multirow{2}{*}{Night}&Real&$15k$&$5k$\\ &Pix2Pix&$15k$&$10k$\\ \midrule \multirow{2}{*}{Shoes}&Real&$20k$&$3k$\\ &Pix2Pix&$20k$&$10k$\\ \bottomrule \end{tabular} \label{tab:data} \end{table} \begin{table} \renewcommand{\arraystretch}{1.1} \renewcommand{\tabcolsep}{6pt} \centering 
\small \caption{Initial labeled and discovery set for our online setup.} \vspace{5pt} \begin{tabular}{@{}lll@{}} \toprule \textbf{Dataset} & \textbf{Labeled GANs} & \textbf{Discovery GANs} \\ \midrule \makecell[tl]{CelebA}& \makecell[tl]{BEGAN, \\MMDGAN,\\ CramerGAN,\\ProGAN}& \makecell[tl]{BEGAN, MMDGAN,\\ CramerGAN, ProGAN,\\ StarGAN, AttGAN,\\ SNGAN}\\ \hdashline \makecell[tl]{CelebA-HQ} & \makecell[tl]{ProGAN} &-\\\hdashline ImageNet& \makecell[tl]{SNGAN}& - \\ \hdashline \makecell[tl]{LSUN\\Bedroom}& \makecell[tl]{ProGAN, \\MMDGAN,\\CramerGAN} & \makecell[tl]{ProGAN, MMDGAN,\\ CramerGAN, SNGAN}\\ \bottomrule \end{tabular} \vspace{-5pt} \label{tab:online_set} \end{table} \section{Additional baseline comparisons} Section 4.2 of the paper provides comparisons with baselines derived from the works of \cite{yu2019attributing} and \cite{wang2020cnn} by training their methods on our dataset in a multiclass manner. We additionally provide baselines using features from the pretrained models released by the authors, which were trained on their own datasets. We provide results by performing K-Means clustering on the features for $k=20$ and $k=500$, similar to the baselines derived in Section 4.2. The results are shown in Table \ref{tab:additional_baselines} (denoted by *). It can be seen that the features do not generalize across datasets and do worse than the baselines reported in the paper on both the metrics of Average Purity and NMI. \\ Also, as \cite{wang2020cnn} primarily deals only with real-fake classification, we train their method on our dataset but only on the binary real-fake classification task and extract their features. We show the results based on clustering the features for $k=20$ and $k=500$ and report them in Table ~\ref{tab:additional_baselines} (denoted by $\crosssymbol$). We see that features generated from the binary classification problem do worse than in the multiclass case. 
This is because the binary classification problem only discriminates between real and fake image sources while grouping the different fake image sources together. This causes less discrimination between the fake image sources, harming the clustering performance. We also provide a baseline (denoted by \#) using our approach but adding JPEG and blur augmentations as used by \cite{wang2020cnn}. We see that this degrades the performance compared to our original approach, likely because these augmentations destroy valuable high-frequency information used for discriminating between GAN sources. Since the baseline performance was lower in these evaluations, we did not include these baselines in the main paper. \begin{table}[th!] \centering \small \renewcommand{\arraystretch}{1.1} \renewcommand{\tabcolsep}{1.4mm} \caption{Comparing the proposed approach with additional baselines from \cite{yu2019attributing,wang2020cnn}. * represents the pretrained features used for clustering, while $\crosssymbol$ denotes the features obtained from binary classification, the original task of \cite{wang2020cnn}. We also provide a baseline (denoted by \#) using our approach but with JPEG and blur augmentations as used by \cite{wang2020cnn}. This does worse on both clustering metrics compared to our original approach.} \vspace{5pt} \begin{tabular}{@{}llcc@{}} \toprule \makecell{\textbf{\# of}\\ \textbf{clusters}}& \textbf{Method}& \multicolumn{1}{c}{\textbf{Avg. 
Purity}} & \multicolumn{1}{c}{\textbf{NMI}} \\ \midrule \multirow{3}{*}{20}& Wang \etal \cite{wang2020cnn}*& 0.1946& 0.2042 \\ &Yu \etal \cite{yu2019attributing}*& 0.4529& 0.4543\\ &Wang \etal \cite{wang2020cnn}$\crosssymbol$&0.3841&0.4434\\ \hdashline \multirow{3}{*}{500}& Wang \etal \cite{wang2020cnn}*& 0.2929& 0.2004 \\ &Yu \etal \cite{yu2019attributing}*& 0.5947& 0.3916\\ &Wang \etal \cite{wang2020cnn}$\crosssymbol$&0.6082&0.4334\\\hdashline 258&$\textnormal{Ours}^\#$& 0.7696& 0.6249\\ 266&Ours&0.8216& 0.6552\\ \bottomrule \end{tabular} \label{tab:additional_baselines} \end{table} \begin{table*}[] \centering \small \renewcommand{\arraystretch}{1.0} \renewcommand{\tabcolsep}{6pt} \caption{Comparison of our approach using WTA hash with ODIN \cite{liang2017enhancing}. The in-distribution dataset is CIFAR-100, which is used to train a DenseNet. We evaluate our method on the same metrics reported in \cite{liang2017enhancing}. The numbers reported are in the format of ``ODIN/Ours''. All values are in percentages. $\uparrow$ implies that a larger value is better while $\downarrow$ implies a smaller value is better. 
We outperform ODIN on all OOD datasets and metrics excluding LSUN (crop).} \vspace{5pt} \begin{tabular}{@{}lccccc@{}} \toprule Out-distribution dataset & FPR at 95\% TPR $\downarrow$& Detection error $\downarrow$ & AUROC $\uparrow$& AUPR In $\uparrow$& AUPR Out$\uparrow$\\ \midrule Tiny-ImageNet (crop)&$26.9/\boldsymbol{18.8}$&$12.9/\boldsymbol{10.2}$&$94.5/\boldsymbol{96.4}$&$94.7/\boldsymbol{96.6}$&$94.5/\boldsymbol{96.3}$\\ Tiny-ImageNet (resize)&$57.0/\boldsymbol{20.2}$&$22.7/\boldsymbol{10.6}$&$85.5/\boldsymbol{96.2}$&$86.0/\boldsymbol{96.3}$&$84.8/\boldsymbol{96.1}$\\ LSUN (crop)&$\boldsymbol{18.6}/32.1$&$\boldsymbol{9.7}/14.0$&$\boldsymbol{96.6}/93.8$&$\boldsymbol{96.8}/94.2$&$\boldsymbol{96.5}/93.7$\\ LSUN (resize)&$58.0/\boldsymbol{17.3}$&$22.3/\boldsymbol{9.6}$&$86.0/\boldsymbol{96.8}$&$87.1/\boldsymbol{97.0}$&$84.8/\boldsymbol{96.7}$\\ iSUN&$64.9/\boldsymbol{28.3}$&$24.0/\boldsymbol{12.6}$&$84.0/\boldsymbol{94.8}$&$85.1/\boldsymbol{95.2}$&$81.8/\boldsymbol{94.6}$\\ Gaussian&$100.0/\boldsymbol{0.0}$&$17.9/\boldsymbol{0.1}$&$99.5/\boldsymbol{100.0}$&$87.5/\boldsymbol{100.0}$&$65.1/\boldsymbol{99.8}$\\ Uniform&$100.0/\boldsymbol{0.0}$&$38.0/\boldsymbol{0.0}$&$40.5/\boldsymbol{100.0}$&$60.5/\boldsymbol{100.0}$&$40.9/\boldsymbol{99.9}$\\ \bottomrule \end{tabular} \vspace{-0.1in} \label{tab:ood_related} \end{table*} \section{Out-of-distribution detection} In this section, we provide more details on the WTA hash and a comparison between the cosine-based distance and the WTA-hashing-based Hamming distance for out-of-distribution detection. Additionally, we compare our approach with another popular out-of-distribution algorithm \cite{liang2017enhancing} and show that our algorithm performs well on their reported benchmarks. Finally, we analyze the effect of the percentile threshold used in our approach. \subsection{WTA hash details} The WTA hashing algorithm proceeds as follows. Suppose a single feature vector $\boldsymbol{x}$ has dimension $d$. 
We generate $H$ different permutations $\boldsymbol{p}_i, \ i\in\{1,...,H\}$ of the indices $\{1,...,d\}$ and then apply each of these permutations to $\boldsymbol{x}$ to get a set of vectors $\{\boldsymbol{x}'_i\}_{i=1}^{H}$. For each vector $\boldsymbol{x}'_i$, we take the first $K$ elements, for a window size $K$, and obtain the index of the maximum element. The set of these $H$ indices (one for each permutation) yields a new vector $\boldsymbol{x}_H$. Note that $\boldsymbol{x}_H$ is an $H$-dimensional vector whose elements take integer values in $[0,K-1]$. The distance between two feature vectors is then defined as the Hamming distance between their corresponding hashes.\\ \subsection{Cosine based distance details} We compare the in-distribution, out-distribution and overall accuracy of our algorithm for the 12 seen classes (as described in Table 1 of the paper) using the WTA hash distance and a cosine-based distance. The results are shown in Table ~\ref{tab:ood_wta_cosine}. As the number of samples in our in-distribution is roughly the same as the number of samples in our out-distribution, we use standard accuracy as our metric for comparison. In-distribution accuracy refers to the accuracy on all samples in the discovery set which belong to the 12 seen classes, while out-distribution accuracy corresponds to those belonging to the 8 unseen classes. Net accuracy is the overall accuracy on the full discovery set. We see that using the hash outperforms the cosine-based distance in terms of net accuracy and in-distribution accuracy. It performs slightly worse than the cosine-based distance in terms of out-distribution accuracy, but only by a small margin. This is because of an inherent tradeoff between in-distribution and out-distribution accuracy based on the percentile threshold. \begin{table} \centering \small \renewcommand{\arraystretch}{1.0} \renewcommand{\tabcolsep}{6pt} \caption{Comparison of our OOD step using WTA hash or cosine distance. 
We see that the WTA hash consistently outperforms the cosine-based distance at all 4 iterations of training, even though it drops slightly on the out-distribution accuracy.} \vspace{5pt} \begin{tabular}{@{}cccc@{}} \toprule Iteration& \makecell{In-distribution\\ Accuracy (\%)} & \makecell{Out-distribution\\ Accuracy (\%)}&\makecell{Net\\ Accuracy (\%)}\\ \midrule 1& $86.02/\boldsymbol{91.74}$&$\boldsymbol{93.26}/89.35$&$89.33/\boldsymbol{90.65}$\\ 2& $83.49/\boldsymbol{88.33}$&$\boldsymbol{98.36}/97.63$&$90.22/\boldsymbol{92.58}$\\ 3& $81.14/\boldsymbol{85.37}$&$\boldsymbol{99.32}/98.34$&$89.45/\boldsymbol{91.3}$\\ 4& $79.11/\boldsymbol{82.94}$&$99.10/\boldsymbol{99.12}$&$88.27/\boldsymbol{90.33}$\\ \bottomrule \end{tabular} \vspace{-0.1in} \label{tab:ood_wta_cosine} \end{table} \begin{table} \centering \footnotesize \renewcommand{\arraystretch}{1.1} \renewcommand{\tabcolsep}{1.4mm} \caption{Comparison between using the WTA-hash-based Hamming distance or the cosine-based distance for computing the 1-NN graph during the merge step. We analyze the performance directly at iteration 1 and also at the end of 4 iterations for both stages of merge and refine.} \label{tab:merge_wta_cosine} \vspace{5pt} \resizebox{\linewidth}{!}{ \begin{tabular}{@{}ccccccc@{}} \toprule \#iter. 
& Stage&\makecell{Avg.\\ Purity}& NMI &\makecell{\% Samples\\ Discovered} &\makecell{ \# of Sources\\ Discovered} & \makecell{\# of\\clusters}\\ \midrule \multirow{2}{*}{1} &Merge& $0.791/\boldsymbol{0.793}$& $0.642/\boldsymbol{0.646}$&-&-&$433/\boldsymbol{383}$\\ &Refine& $0.776/\boldsymbol{0.780}$& $\boldsymbol{0.671}/0.666$&$74.70/\boldsymbol{76.54}$&$4/\boldsymbol{5}$&$\boldsymbol{167}/180$\\ \multirow{2}{*}{4}&Merge& $0.820/\boldsymbol{0.825}$& $0.635/\boldsymbol{0.647}$&-&-&$618/\boldsymbol{432}$\\ &Refine& $0.819/\boldsymbol{0.823}$& $0.651/\boldsymbol{0.655}$&$91.80/\boldsymbol{94.76}$&$\boldsymbol{8}/\boldsymbol{8}$&$294/\boldsymbol{266}$\\ \bottomrule\\ \end{tabular} } \small \caption{Reducing the number of clusters for K-Means (K) by almost half at each iteration. Average Purity and NMI do not change drastically compared to our default setup.} \vspace{5pt} \label{tab:red_k} \resizebox{0.6\linewidth}{!}{ \begin{tabular}{cccc} \toprule Iteration & K & Avg. Purity & NMI\\ \midrule 1& 500 & 0.695 & \textbf{0.6756} \\ 2&250 & 0.7877 & 0.6572\\ 3&125 & \textbf{0.8086} & 0.6484\\ 4&60 & 0.8056 & 0.6399\\ \multicolumn{2}{c}{Ours (Default)}&0.8216&0.6552\\ \bottomrule\\ \end{tabular} } \caption{We na\"ively recluster the test set after each training step and use the clusters as pseudo-labels for retraining. Compared to our original approach, a significant drop in Average Purity and NMI is observed.} \vspace{5pt} \resizebox{0.8\linewidth}{!}{ \begin{tabular}{ccccc} \toprule \multirow{2}{*}{Step} & \multicolumn{2}{c}{Avg. 
Purity} & \multicolumn{2}{c}{NMI}\\ & Reclustering & Ours & Reclustering & Ours\\ \midrule 1 & 0.7858 & 0.7803 & 0.5402 & 0.6658 \\ 2 & 0.7803 & 0.8183 & 0.5383 & 0.6625 \\ 3 & 0.7597 & 0.8211 & 0.5289 & 0.6595 \\ 4 & 0.7303 & 0.8216 & 0.5112 & 0.6552 \\ \bottomrule \end{tabular}} \label{tab:recluster} \end{table} \subsection{Related works comparison for OOD} We now compare our approach with the popular out-of-distribution (OOD) detection approach ODIN \cite{liang2017enhancing} on their benchmark. We show results using features extracted from a DenseNet trained on CIFAR-100. We evaluate on the various out-of-distribution datasets provided by the authors of \cite{liang2017enhancing} and report our results in Table \ref{tab:ood_related}. We see that we outperform their algorithm on all datasets except LSUN (crop). Additionally, our algorithm has few hyperparameters that require careful tuning and does not require a validation set. This shows that our approach generalizes well to other dataset setups and can be used in general for the problem of out-of-distribution detection. \subsection{Threshold analysis} We evaluate the performance of our pipeline when varying the percentile threshold, which was set by default to $0.9$. We run our full pipeline iteratively for $4$ iterations and report the numbers for four threshold values: $0.7$, $0.8$, $0.9$, and $0.95$. The results are summarized in Table \ref{tab:ood_thresh}. Increasing the threshold has the expected result of fewer clusters, as more samples in the discovery set are deemed in-distribution and attributed to existing clusters. However, the resulting cluster metrics do not vary drastically in the neighbourhood of the default value of $0.9$. This shows that our pipeline is fairly robust to the percentile threshold, which also intuitively transfers across datasets. Our approach thus has a single, easy-to-tune scalar which automatically varies the threshold for the different seen classes or clusters. 
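The WTA hashing and Hamming-distance computation described in this section, together with a percentile-threshold OOD test, can be sketched in a few lines of plain Python. This is a minimal illustrative sketch: the thresholding rule below (percentile of nearest-neighbour distances within the labeled set) is a simplified stand-in for our per-class rule, and all function names are ours.

```python
import random

def make_perms(d, H, seed=0):
    """H random permutations of the indices {0, ..., d-1}."""
    rng = random.Random(seed)
    return [rng.sample(range(d), d) for _ in range(H)]

def wta_hash(x, perms, K):
    """Winner-take-all hash: for each permutation, look at the first K
    permuted coordinates of x and record the argmax position (in [0, K-1])."""
    return [max(range(K), key=lambda j: x[p[j]]) for p in perms]

def hamming(a, b):
    """Number of positions where two hash codes disagree."""
    return sum(u != v for u, v in zip(a, b))

def ood_flags(labeled_codes, discovery_codes, percentile=0.9):
    """Flag discovery samples whose distance to the nearest labeled code
    exceeds the chosen percentile of nearest-neighbour distances inside
    the labeled set (simplified stand-in for the per-class rule)."""
    intra = sorted(
        min(hamming(c, o) for j, o in enumerate(labeled_codes) if j != i)
        for i, c in enumerate(labeled_codes))
    thresh = intra[min(int(percentile * len(intra)), len(intra) - 1)]
    return [min(hamming(c, l) for l in labeled_codes) > thresh
            for c in discovery_codes]
```

Note that the hash is invariant to any monotone rescaling of the features, which is one reason rank-based codes are attractive for comparing feature vectors across training iterations.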
\begin{table} \centering \small \renewcommand{\arraystretch}{1.0} \renewcommand{\tabcolsep}{5pt} \caption{Effect of the percentile threshold for OOD on the final performance. The default value for our experiments is 0.9. For all the thresholds, all sources were discovered.} \vspace{5pt} \begin{tabular}{@{}ccccc@{}} \toprule $\beta$ & Avg. Purity & NMI & \makecell[c]{\# of \\Clusters} & \makecell[c]{Frac. Samples \\Discovered}\\ \midrule 0.7 &0.7952 &0.5842 &449 &0.9226\\ 0.8 & 0.8087 & 0.6135 & 376 &0.9234\\ 0.9 & 0.8216 & 0.6552 & 266 & 0.9476\\ 0.95 & 0.8268 & 0.6985 & 181 & 0.9358\\ \bottomrule \end{tabular} \vspace{-0.1in} \label{tab:ood_thresh} \end{table} \section{Clustering} We analyze the effect of the clustering, merge and refine stages of the pipeline, varying a number of hyperparameters, and show our pipeline's robustness to these values with respect to the final performance. \subsection{Effect of different number of clusters and number of rounds of merge and refine} The $k$ value chosen for K-Means changes the number of clusters that are passed on to the merge step. Additionally, we try performing multiple rounds of merge and refine within a single iteration, which further decreases the number of clusters. Table ~\ref{tab:k_ai} summarizes the effect of changing $k$ and the number of rounds of merge and refine, $r$, on the pipeline. We see that for a fixed $r$ and different $k$'s, even though the number of clusters changes in the clustering stage, the final numbers of clusters at the end of $4$ iterations are similar and the performance on Average Purity and NMI does not vary drastically. However, when we fix $k$ and vary $r$, we see that the final number of clusters shows a visible drop and, as expected, NMI increases slightly while Average Purity decreases. 
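The merge stage's 1-Nearest-Neighbour-graph grouping can be sketched as follows. This is an illustrative sketch only: here clusters are represented by centroids and we merge connected components of the distance-thresholded 1-NN graph, whereas the actual pipeline operates on WTA hashes of features; `max_dist2` is a hypothetical parameter of this sketch.

```python
def centroid(points):
    """Mean vector of a cluster's points."""
    d = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(d)]

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((u - v) ** 2 for u, v in zip(a, b))

def merge_1nn(clusters, max_dist2):
    """One merge pass: link each cluster to its nearest neighbour when the
    distance is below a threshold, then merge the connected components of
    the resulting 1-NN graph using union-find."""
    cents = [centroid(c) for c in clusters]
    parent = list(range(len(clusters)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, ci in enumerate(cents):
        j = min((k for k in range(len(cents)) if k != i),
                key=lambda k: dist2(ci, cents[k]))
        if dist2(ci, cents[j]) <= max_dist2:
            parent[find(i)] = find(j)

    merged = {}
    for i, c in enumerate(clusters):
        merged.setdefault(find(i), []).extend(c)
    return list(merged.values())
```

Running such a pass repeatedly (first on the new K-Means clusters, then on the whole clustered set) mirrors the two-stage merging described in the experimental details.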
\begin{figure*}[t] \begin{center} \includegraphics[width=\linewidth]{figures/Merge_vis.pdf} \end{center} \caption{Some example clusters merged by our approach during the merge step.} \label{fig:merge_vis} \end{figure*} \subsection{Cosine distance based merging} Instead of using the Hamming distance between the WTA hashes of the feature vectors, we use a cosine-based distance between feature vectors and compare the performance at the end of the first iteration and also at the end of 4 iterations. Table \ref{tab:merge_wta_cosine} compares the cosine-based and WTA-based distances at both points in the pipeline, at the end of both the merge and refine steps. We see that the WTA-hash-based distance marginally outperforms the cosine-based distance at almost all points in the pipeline. It also discovers a higher percentage of samples with fewer clusters, demonstrating its effectiveness in our pipeline. \subsection{Size threshold and SVM firing threshold} Next, we evaluate the effect of varying the size threshold $\tau$ for discarding clusters at the end of the refine step, as well as varying the SVM firing threshold $\epsilon$, which also controls the number of clusters (likely impure) being discarded. The results are summarized in Table ~\ref{tab:svm_thresh}. As expected, increasing $\tau$ or $\epsilon$ decreases the number of clusters as more samples are discarded. This comes at the cost of fewer samples and GANs being discovered. However, the clustering metrics do not vary drastically, showing that our pipeline is robust to these hyperparameter values and can generalize across varying datasets and sizes. \begin{table}[] \centering \small \renewcommand{\arraystretch}{1.1} \renewcommand{\tabcolsep}{6pt} \caption{Effect of varying the size threshold ($\tau$) and the SVM firing threshold ($\epsilon$) on performance. 
The default setting corresponds to $\tau=100$ and $\epsilon=0.5$.} \vspace{5pt} \resizebox{\linewidth}{!}{ \begin{tabular}{@{}ccccccc@{}} \toprule $\tau$ & \multicolumn{1}{c}{$\epsilon$} & \makecell[c]{Avg.\\ Purity} & \multicolumn{1}{c}{NMI} & \makecell[c]{Sources\\ Discovered} & \makecell[c]{\#\\ of Clusters} & \makecell[c]{Frac. Samples\\ Discovered} \\ \midrule \multirow{3}{*}{50} & 0.3 & 0.8178 & 0.6356 & 20/20 & 407 & 0.9695 \\ & 0.5 & 0.8212 & 0.6326 & 20/20 & 434 & 0.9729 \\ & 0.7 & 0.8223 & 0.6551 & 20/20 & 308 & 0.9549 \\ \hdashline \multirow{3}{*}{100} & 0.3 & 0.8238 & 0.6451 & 20/20 & 293 & 0.9590 \\ & 0.5 & 0.8216 & 0.6552 & 20/20 & 266 & 0.9476 \\ & 0.7 & 0.823 & 0.6573 & 20/20 & 245 & 0.9237 \\ \hdashline \multirow{3}{*}{200} & 0.3 & 0.8265 & 0.6731 & 20/20 & 166 & 0.8919 \\ & 0.5 & 0.8197 & 0.67 & 20/20 & 161 & 0.9015 \\ & 0.7 & 0.8233 & 0.686 & 19/20 & 138 & 0.8841 \\ \hdashline \multirow{3}{*}{300} & 0.3 & 0.7849 & 0.7124 & 14/20 & 70 & 0.8643 \\ & 0.5 & 0.8009 & 0.6992 & 13/20 & 89 & 0.8340 \\ & 0.7 & 0.8055 & 0.7161 & 18/20 & 85 & 0.8638 \\ \bottomrule \end{tabular} } \label{tab:svm_thresh} \end{table} \begin{table}[] \centering \small \renewcommand{\arraystretch}{1.1} \renewcommand{\tabcolsep}{4pt} \caption{Effect of varying the number of clusters $k$ for K-Means along with the number of times merge and refine (Additional Iters) is performed at each step. In the default setting, Additional Iters is $0$, as merge and refine are performed once per step, and $k=500$. 
} \vspace{5pt} \resizebox{\linewidth}{!}{ \begin{tabular}{@{}ccccccc@{}} \toprule K & \makecell[c]{Additional\\ Iters} & \makecell[c]{Avg.\\ Purity} & NMI & \makecell[c]{Sources\\ Discovered} & \makecell[c]{Frac. Samples\\ Discovered} & \makecell[c]{\# of\\ Clusters} \\ \midrule \multirow{4}{*}{100} & 0 & 0.7948 & 0.6722 & 20/20 & 0.9839 & 133 \\ & 1 & 0.7952 & 0.6938 & 20/20 & 0.9876 & 104\\ & 2 & 0.7889 & 0.7016 & 19/20 & 0.9904 & 89 \\ & 3 & 0.7627 & 0.7255 & 19/20 & 0.9835 & 66 \\ \hdashline \multirow{4}{*}{200} & 0 & 0.8086 & 0.657 & 20/20 & 0.9762 & 199 \\ & 1 & 0.8125 & 0.6798 & 20/20 & 0.9847 & 154\\ & 2 & 0.7944 & 0.6981 & 19/20 & 0.9749 & 105 \\ & 3 & 0.7944 & 0.7085 & 20/20 & 0.9757 & 95 \\ \hdashline \multirow{4}{*}{300} & 0 & 0.8142 & 0.652 & 20/20 & 0.9665 & 238 \\ & 1 & 0.8051 & 0.6696 & 20/20 & 0.9486 & 163 \\ & 2 & 0.802 & 0.6975 & 20/20 & 0.9602 & 121 \\ & 3 & 0.7669 & 0.7046 & 17/20 & 0.9569 & 92 \\ \hdashline \multirow{4}{*}{400} & 0 & 0.8175 & 0.6472 & 20/20 & 0.9494 & 281\\ & 1 & 0.814 & 0.6639 & 20/20 & 0.9637 & 196 \\ & 2 & 0.8105 & 0.6817 & 20/20 & 0.9687 & 158 \\ & 3 & 0.8005 & 0.7078 & 20/20 & 0.9679 & 113 \\ \hdashline \multirow{4}{*}{500} & 0 & 0.8216 & 0.6552 & 20/20 & 0.9476 & 266 \\ & 1 & 0.8249 & 0.6736 & 20/20 & 0.9532 & 195 \\ & 2 & 0.8118 & 0.6887 & 20/20 & 0.9536 & 136 \\ & 3 & 0.7862 & 0.691 & 20/20 & 0.9485 & 114\\ \hdashline \multirow{4}{*}{600} & 0 & 0.8264 & 0.6535 & 20/20 & 0.9396 & 279\\ & 1 & 0.8229 & 0.667 & 20/20 & 0.9545 & 212 \\ & 2 & 0.8176 & 0.6854 & 20/20 & 0.9470 & 153\\ & 3 & 0.8039 & 0.6936 & 20/20 & 0.9459 & 121 \\ \hdashline \multirow{4}{*}{700} & 0 & 0.8279 & 0.6572 & 20/20 & 0.9137 & 266 \\ & 1 & 0.8294 & 0.6837 & 20/20 & 0.9115 & 172 \\ & 2 & 0.8348 & 0.6887 & 20/20 &0.9473 & 166 \\ & 3 & 0.8233 & 0.6923 & 20/20 & 0.9382 & 139 \\ \bottomrule \end{tabular} } \label{tab:k_ai} \end{table} \subsection{Reclustering} We now evaluate the effect of removing the merge and refine steps from our pipeline. 
In this baseline, the pipeline consists of the usual network training followed by clustering. At each iteration, we discard the old clusters and perform clustering again using K-Means on all the test-set samples. We then use the new clusters as pseudo-labels and retrain our network to improve the features. Note that there are no OOD detection, merge, or refine steps. This method is similar to ClusterFit \cite{yan2020clusterfit}, which also uses cluster pseudo-labels for network training. We analyze the results in Table~\ref{tab:recluster} by comparing with our original pipeline. We see that the performance drops by a significant margin, showing that it is crucial to maintain existing clusters and iteratively merge and refine them while simultaneously improving the feature representations of the existing clusters. \subsection{Effect of varying number of clusters} For most of our experiments we run the clustering using K-Means at a fixed value of $k$, which is used for all iterations. As the number of undiscovered samples decreases as the number of iterations increases, we evaluate our pipeline's performance by decreasing $k$ after each iteration; we approximately halve the value of $k$ at each iteration. The results are reported in Table ~\ref{tab:red_k}. We see that the performance does not change drastically compared to the default setup, which shows that our pipeline does not heavily rely on the number of clusters used during K-Means. \subsection{Qualitative Analysis} We now qualitatively show the effect of our merge step for a few clusters. The merge step of our approach merges clusters that belong to the same class but are fragmented due to over-clustering by K-Means. We visualize two such clusters in Fig.~\ref{fig:merge_vis}. As we showed in the main paper, our clusters focus on the GAN source rather than image semantics, and the merge step successfully combines clusters having the same majority GAN source. 
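The two clustering metrics reported throughout this supplement, Average Purity and NMI, can be computed as follows. This is a minimal sketch following the standard definitions; whether purity is averaged per cluster (as below) or per sample is an assumption of this sketch.

```python
from collections import Counter
from math import log

def average_purity(assignments, labels):
    """Mean over clusters of the fraction of members that share the
    cluster's majority ground-truth label."""
    clusters = {}
    for c, y in zip(assignments, labels):
        clusters.setdefault(c, []).append(y)
    return sum(Counter(m).most_common(1)[0][1] / len(m)
               for m in clusters.values()) / len(clusters)

def nmi(assignments, labels):
    """Normalized mutual information, I(C;Y) / sqrt(H(C) * H(Y))."""
    n = len(labels)
    pc = Counter(assignments)          # cluster counts
    py = Counter(labels)               # label counts
    pcy = Counter(zip(assignments, labels))  # joint counts
    mi = sum(v / n * log((v / n) / ((pc[c] / n) * (py[y] / n)))
             for (c, y), v in pcy.items())
    hc = -sum(v / n * log(v / n) for v in pc.values())
    hy = -sum(v / n * log(v / n) for v in py.values())
    return mi / (hc * hy) ** 0.5 if hc > 0 and hy > 0 else 0.0
```

Under these definitions, heavy over-clustering can keep purity high while lowering NMI, which is the tradeoff visible in the merge-and-refine ablations above.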
\section{Network Training} \subsection{Effect of faster training} By default, we retrain all feature-extractor weights at every iteration. To reduce the cost of full network retraining, we analyze finetuning only the final residual block of the ResNet-50 backbone along with the subsequent fully-connected layers of the feature extractor. We also analyze using a lighter network such as MobileNet \cite{howard2017mobilenets} for our network training and compare it with our original setup. Additionally, we examine the effect of removing the merge step from the pipeline. The results are summarized in Table \ref{tab:fast_training}. We see that there is a small drop in network performance in terms of both Average Purity and NMI, and the constrained network also fails to discover one of the unseen sources. Constraining the network likely restricted its capability to improve the discovery-set features, although the impact is not significant. On the other hand, MobileNet obtains a higher NMI because of far fewer clusters, albeit at the cost of not discovering most of the unseen sources. This shows that very light networks are not as effective in obtaining discriminative representations for discovering new sources. Also, the performance without the merge step is inferior to our original approach, which shows the importance of the merge step in grouping similar clusters together after over-clustering. \begin{table} \centering \footnotesize \renewcommand{\arraystretch}{1} \renewcommand{\tabcolsep}{2pt} \caption{Results on our setup with slight variations in our training.} \begin{tabular}{@{}lcccc@{}} \toprule Experiment & \makecell[c]{\# of Clusters} & Avg. 
Purity & NMI & \makecell[c]{\# Sources Disc.}\\ \midrule Ours (Original) & 209 & 0.861 & 0.724 & 20/20\\ Ours (w/o merge)& 257 & 0.841 & 0.712 & 19/20\\ Ours (Freeze) & 229 & 0.850 & 0.691 & 19/20\\ MobileNet & 70 & 0.846 & 0.773 & 15/20\\ \bottomrule \end{tabular} \label{tab:fast_training} \end{table} \subsection{Effect of image size} In most of our experiments, we resize and center-crop all images to $256\times256$. We compare results when image sizes are varied from $64$ to $256$ for network training. From Table \ref{tab:image_size}, we see that increasing the image size shows a marked improvement in all metrics. Therefore, we hypothesize that model fingerprints are likely more detectable and distinguishable when the image is resized to a higher resolution. However, this comes at the cost of quadratically increasing network training times and memory requirements, which is infeasible in an online setup or for very-large-scale datasets. \begin{table} \centering \footnotesize \renewcommand{\arraystretch}{1} \renewcommand{\tabcolsep}{2pt} \caption{Results on our setup with varying image sizes.} \begin{tabular}{@{}lcccc@{}} \toprule Image Size & \makecell[c]{\# of Clusters} & Avg. Purity & NMI & \makecell[c]{\# Sources Disc.}\\ \midrule 64 & 169 & 0.655 & 0.579 & 19/20\\ 128 & 266 & 0.822 & 0.673 & 20/20 \\ 256 & 209 & 0.861 & 0.724 & 20/20\\ \bottomrule \end{tabular} \label{tab:image_size} \end{table} \section{Multiple seed sources} \cite{yu2019attributing} showed that training generators with different random seeds generates distinguishable fingerprints in their images. We analyze whether we can discover new separate sources when a single generator architecture is trained on the same dataset but with different seeds. Table \ref{tab:multiple_seeds} shows results comparing this setup with our original setup. We add a second seed of ProGAN for both CelebA and LSUN-Bedroom, providing 2 new sources. 
Note that only a single seed of ProGAN trained on LSUN-Bedroom is present in the labeled set, while the other 3 sources are unseen. The remaining classes are the same as in our original setup, as described in Table 1 of the main paper. We see that there is only a small drop in Average Purity and NMI, although the model fails to discover one of the unseen sources. \begin{table} \centering \footnotesize \renewcommand{\arraystretch}{1} \renewcommand{\tabcolsep}{2pt} \caption{Results on our setup with additional unseen ProGAN seeds.} \begin{tabular}{@{}lcccc@{}} \toprule Experiment & \makecell[c]{\# of Clusters} & Avg. Purity & NMI & \makecell[c]{\# Sources Disc.}\\ \midrule Ours (Original) & 209 & 0.861 & 0.724 & 20/20\\ Ours + Unseen seeds & 216 & 0.842 & 0.702 & 21/22\\ \bottomrule \end{tabular} \label{tab:multiple_seeds} \end{table}
\section{Introduction} With its origins going back several centuries, discrete analysis has now become an increasingly central methodology for many mathematical problems related to discrete systems and algorithms, widely applied in modern science. Our theme, related to the study of integrable discretizations of nonlinear integrable dynamical systems and the limiting properties of their solution sets, is of deep interest in many branches of modern science and technology, especially in discrete mathematics, numerical analysis, statistics and probability theory, as well as in electrical and electronic engineering. In fact, this topic belongs to a much more general realm of mathematics, namely to calculus, differential equations and differential geometry. Thus, although the topic is discrete, our approach to treating this problem will be completely analytical. In this work we will analyze the properties of discrete approximations of the nonlinear integrable Schr\"{o}dinger (NLS) dynamical system on a functional manifold $\tilde{M}\subset L_{2}(\mathbb{R};\mathbb{C}^{2})$:
\begin{equation}
\left.
\begin{array}{l}
\frac{d}{dt}\psi =i\psi _{xx}-2i\alpha \psi \psi \psi ^{\ast }, \\[2ex]
\frac{d}{dt}\psi ^{\ast }=-i\psi _{xx}^{\ast }+2i\alpha \psi ^{\ast }\psi \psi ^{\ast }
\end{array}
\right\} :=\tilde{K}[\psi ,\psi ^{\ast }],  \label{eq1.1}
\end{equation}
where, by definition, $(\psi ,\psi ^{\ast })^{\intercal }\in \tilde{M}$, $\alpha \in \mathbb{R}$ is a constant, the subscript $x$ denotes the partial derivative with respect to the independent variable $x\in \mathbb{R}$, $\tilde{K}:\tilde{M}\rightarrow T(\tilde{M})$ is the corresponding vector field on $\tilde{M}$ and $t\in \mathbb{R}$ is the evolution parameter. 
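As a quick consistency check (a standard computation, included here for orientation), the flow (\ref{eq1.1}) conserves the $L_{2}$-norm of solutions:

```latex
\frac{d}{dt}\int_{\mathbb{R}}\psi ^{\ast }\psi \,dx
=\int_{\mathbb{R}}\left( \psi _{t}^{\ast }\psi +\psi ^{\ast }\psi _{t}\right) dx
=\int_{\mathbb{R}}\left( -i\psi _{xx}^{\ast }\psi +i\psi ^{\ast }\psi _{xx}\right) dx
=i\int_{\mathbb{R}}\partial _{x}\left( \psi ^{\ast }\psi _{x}-\psi _{x}^{\ast }\psi \right) dx=0,
```

since the cubic terms $\pm 2i\alpha (\psi ^{\ast }\psi )^{2}$ cancel pairwise and the remaining integrand is a total derivative, which vanishes for decaying $\psi$.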
The system (\ref{eq1.1}) possesses a Lax type representation (see \cite{No}) and is Hamiltonian,
\begin{equation}
\frac{d}{dt}(\psi ,\psi ^{\ast })^{\intercal }=-\tilde{\theta}\,\mathrm{grad}\,\tilde{H}[\psi ,\psi ^{\ast }]=\tilde{K}[\psi ,\psi ^{\ast }],  \label{eq1.1a}
\end{equation}
with respect to the canonical Poisson structure $\tilde{\theta}$ and the Hamiltonian function $\tilde{H}$, where
\begin{equation}
\tilde{\theta}:=\left(
\begin{array}{cc}
0 & -i \\
i & 0
\end{array}
\right)  \label{eq1.1b}
\end{equation}
is a non-degenerate mapping $\tilde{\theta}:T^{\ast }(\tilde{M})\rightarrow T(\tilde{M})$ on the smooth functional manifold $\tilde{M}$, and
\begin{equation}
\tilde{H}:=\frac{1}{2}\int\limits_{\mathbb{R}}dx\left[ \psi \psi _{xx}^{\ast }+\psi _{xx}\psi ^{\ast }-2\alpha (\psi ^{\ast }\psi )^{2}\right]  \label{eq1.1c}
\end{equation}
is a smooth mapping $\tilde{H}:\tilde{M}\rightarrow \mathbb{C}$. The corresponding symplectic structure \cite{AM,Ar,Bl,BPS} for the Poissonian operator (\ref{eq1.1b}) is defined by
\begin{eqnarray}
\tilde{\omega}^{(2)} &:=&-\frac{i}{2}\int_{\mathbb{R}}dx\,\langle (d\psi ,d\psi ^{\ast })^{\intercal },\wedge \tilde{\theta}^{-1}(d\psi ,d\psi ^{\ast })^{\intercal }\rangle =  \label{eq1.1d} \\
&=&-i\int_{\mathbb{R}}dx\,[d\psi ^{\ast }(x)\wedge d\psi (x)],  \notag
\end{eqnarray}
which is a non-degenerate and closed 2-form on the functional manifold $\tilde{M}$. The simplest spatial discretizations of the dynamical system (\ref{eq1.1}) are given by the flows
\begin{equation}
\begin{array}{l}
\frac{d}{dt}\psi _{n}=\frac{i}{h^{2}}(\psi _{n+1}-2\psi _{n}+\psi _{n-1})-2i\alpha \psi _{n}\psi _{n}\psi _{n}^{\ast }, \\[2ex]
\frac{d}{dt}\psi _{n}^{\ast }=-\frac{i}{h^{2}}(\psi _{n+1}^{\ast }-2\psi _{n}^{\ast }+\psi _{n-1}^{\ast })+2i\alpha \psi _{n}^{\ast }\psi _{n}\psi _{n}^{\ast }
\end{array}  \label{eq1.2}
\end{equation}
and
\begin{equation}
\left. 
\begin{array}{l} \frac{d}{dt}\psi _{n}=\frac{i}{h^{2}}(\psi _{n+1}-2\psi _{n}+\psi _{n-1})-i\alpha (\psi _{n+1}+\psi _{n-1})\psi _{n}\psi _{n}^{\ast }, \\[2ex] \frac{d}{dt}\psi _{n}^{\ast }=-\frac{i}{h^{2}}(\psi _{n+1}^{\ast }-2\psi _{n}^{\ast }+\psi _{n-1}^{\ast })+i\alpha (\psi _{n+1}^{\ast }+\psi _{n-1}^{\ast })\psi _{n}\psi _{n}^{\ast } \end{array \right\} :=K[\psi _{n},\psi _{n}^{\ast }] \label{eq1.3} \end{equation on some "discrete" submanifold $M_{h},$ where, by definition, $\{(\psi _{n},$ $\psi _{n}^{\ast })^{\intercal }\in \mathbb{C}^{2}:$ $n\in \mathbb Z\}\subset }\ $ $M_{h}$ $\subset l_{2}(\mathbb{Z};\mathbb{C}^{2})$ and K:M_{h}\rightarrow T(M_{h})$ is the corresponding vector field on $M_{h}.$ \begin{definition} If for a function $(\psi ,$ $\psi ^{\ast })^{\intercal }\in W_{2}^{2} \mathbb{R};\mathbb{C}^{2})\ $there exists the point-wise limit \lim_{h\rightarrow 0}(\psi _{n},$ $\psi _{n}^{\ast })^{\intercal }=(\psi (x), $ $\psi ^{\ast }(x)))^{\intercal },\ $where the set of vectors $(\psi _{n},$ $\psi _{n}^{\ast })^{\intercal }\in \mathbb{C}^{2},n\in \mathbb{Z},$ solves the infinite system of equations \ (\ref{eq1.3}), the set $\{(\psi _{n},$ \psi _{n}^{\ast })^{\intercal }\in \mathbb{C}^{2}:$ $n\in \mathbb{Z\}\subset }\ l_{2}(\mathbb{Z};\mathbb{C}^{2})$ will be called an approximate solution to the nonlinear Schr\"{o}dinger dynamical system \ (\ref{eq1.1}). \end{definition} \ It is well known \cite{AL,AL1} that the discretization scheme (\ref{eq1.3 ) conserves the Lax type integrability \cite{No,Bl,BPS} and that the scheme \ref{eq1.2}) does not. The integrability of \ (\ref{eq1.3}) can be easily enough checked by means of either the gradient-holonomic integrability algorithm \cite{Pr,PBPS,BPS} or the well known \cite{LWY} symmetry approach. 
In particular, the discrete dynamical system (\ref{eq1.3}) is Hamiltonian \cite{AM,Ar,Bl,Pr} on the symplectic manifold $M_{h}\subset l_{2}(\mathbb{Z};\mathbb{C}^{2})$ with respect to the non-canonical symplectic structure
\begin{equation}
\omega _{h}^{(2)}=-\sum_{n\in \mathbb{Z}}\frac{ih}{2(1-h^{2}\alpha \psi _{n}^{\ast }\psi _{n})}<(d\psi _{n},d\psi _{n}^{\ast })^{\intercal },\wedge (d\psi _{n},d\psi _{n}^{\ast })^{\intercal }>  \label{eq1.4}
\end{equation}
on $M_{h}$ and takes the form
\begin{equation}
\frac{d}{dt}(\psi _{n},\psi _{n}^{\ast })^{\intercal }=-\theta _{n}\mathrm{grad}\,H[\psi _{n},\psi _{n}^{\ast }]=K[\psi _{n},\psi _{n}^{\ast }],  \label{eq1.5}
\end{equation}
where the Hamiltonian function is
\begin{equation}
H=\sum_{n\in \mathbb{Z}}\frac{1}{h}\left( \psi _{n}\psi _{n+1}^{\ast }+\psi _{n+1}\psi _{n}^{\ast }+\frac{2}{\alpha h^{2}}\ln |1-\alpha h^{2}\psi _{n}^{\ast }\psi _{n}|\right)  \label{eq1.5a}
\end{equation}
and the related Poissonian operator $\theta _{n}:T^{\ast }(M_{h})\rightarrow T(M_{h})$ equals
\begin{equation}
\theta _{n}:=\left(
\begin{array}{cc}
0 & -ih^{-1}(1-h^{2}\alpha \psi _{n}^{\ast }\psi _{n}) \\
ih^{-1}(1-h^{2}\alpha \psi _{n}^{\ast }\psi _{n}) & 0
\end{array}
\right) .  \label{eq1.6}
\end{equation}

\begin{remark}
\label{Rm_1.1}For the symplectic structure (\ref{eq1.4}) and, respectively, the Hamiltonian function (\ref{eq1.5a}) to be suitably defined on the manifold $M_{h}\subset l_{2}(\mathbb{Z};\mathbb{C}^{2})$, it is necessary to assume additionally that the finite stability condition $\lim_{N,M\rightarrow \infty }\left( \prod\limits_{-N}^{M}(1-\alpha h^{2}\psi _{n}^{\ast }\psi _{n})\right) \neq 0$ holds. As $h\rightarrow 0$ the latter reduces to the equivalent integral inequality $\alpha \leq \int_{\mathbb{R}}(x\psi ^{\ast }\psi )^{2}dx\left( \int_{\mathbb{R}}\psi ^{\ast }\psi dx\right) ^{-1}$, which will be assumed in what follows to be satisfied on, respectively, the manifold $\tilde{M}\subset \tilde{W}_{2}^{2}(\mathbb{R};\mathbb{C}^{2})$, where $\tilde{W}_{2}^{2}(\mathbb{R};\mathbb{C}^{2}):=W_{2}^{2}(\mathbb{R};\mathbb{C}^{2})\cap L_{2}^{(1)}(\mathbb{R};\mathbb{C}^{2})$ with the space $L_{2}^{(1)}(\mathbb{R};\mathbb{C}^{2}):=\{(\psi ,\psi ^{\ast })^{\intercal }\in L_{2}(\mathbb{R};\mathbb{C}^{2}):\int_{\mathbb{R}}x^{2}(\psi ^{\ast }\psi )^{2}dx<\infty \}$.
\end{remark}

The symplectic structure (\ref{eq1.4}) is well defined on the manifold $M_{h}$ and tends as $h\rightarrow 0$ to the symplectic structure (\ref{eq1.1d}) on $\tilde{M}$; correspondingly, the Hamiltonian function (\ref{eq1.5a}) tends to (\ref{eq1.1c}).

In this work we investigate the structure of the solution set of the discrete nonlinear Schr\"{o}dinger dynamical system (\ref{eq1.3}) by means of a specially devised analytical approach, invariantly reducing the infinite system of ordinary differential equations (\ref{eq1.3}) to an equivalent finite system of ordinary differential equations with respect to the evolution parameter $t\in \mathbb{R}$. As a result, we construct a finite set of regular recurrent algebraic relationships, allowing one to extend the previously obtained finite set of solutions to any discrete order $n\in \mathbb{Z}$, which makes it possible to present a wide class of approximate solutions to the nonlinear Schr\"{o}dinger dynamical system (\ref{eq1.1}). It is worth stressing here that the problem of constructing an effective discretization scheme for the nonlinear Schr\"{o}dinger dynamical system (\ref{eq1.1}) and its generalizations proves to be important both for applications \cite{APT,Ke,Sa,YJ} and for a deeper understanding of the nature of the related algebro-geometric and analytic structures responsible for their limiting stability and convergence properties.
From this point of view we would like to mention the work \cite{LLC}, where the standard discrete Lie-algebraic approach \cite{Bl,BSP} was recently applied to constructing a discretization of the nonlinear Schr\"{o}dinger dynamical system (\ref{eq1.1}) slightly different from (\ref{eq1.2}) and (\ref{eq1.3}). As the symplectic reduction method devised in the present work for studying the solution sets of the discrete nonlinear Schr\"{o}dinger dynamical system (\ref{eq1.3}) is completely independent of the chosen discretization scheme, it would be reasonable and interesting to apply it to that of \cite{LLC} and compare the corresponding results with respect to their computational effectiveness.

\section{A class of Hamiltonian discretizations of the NLS dynamical system}

The discretizations (\ref{eq1.2}) and (\ref{eq1.3}) can be extended to a wide class of Hamiltonian systems if one assumes that the Poissonian structure is given by the local expression
\begin{equation}
\theta _{n}=\left(
\begin{array}{cc}
0 & -i\nu _{n}(g_{n}-{\tilde{h}}_{n}^{2}\alpha \psi _{n}^{\ast }\psi _{n}) \\
i\nu _{n}(g_{n}-{\tilde{h}}_{n}^{2}\alpha \psi _{n}^{\ast }\psi _{n}) & 0
\end{array}
\right) ,  \label{eq1.6a}
\end{equation}
generalizing (\ref{eq1.6}), and the Hamiltonian function is chosen in the form
\begin{equation}
H=\sum_{n\in \mathbb{Z}}h_{n}\left( a_{n}\psi _{n}\psi _{n+1}^{\ast }+b_{n}\psi _{n}\psi _{n}^{\ast }+c_{n}\psi _{n}\psi _{n-1}^{\ast }+\frac{2d_{n}}{\alpha }\ln |g_{n}-\alpha {\tilde{h}}_{n}^{2}\psi _{n}\psi _{n}^{\ast }|\right) ,  \label{eq1.6b}
\end{equation}
where $h_{n},{\tilde{h}}_{n},\nu _{n},a_{n},b_{n},c_{n},d_{n}$ and $g_{n}\in \mathbb{R}_{+}$, $n\in \mathbb{Z}$, are some parameters.
The reality condition imposed on the Hamiltonian function (\ref{eq1.6b}) yields the relationships
\begin{equation}
c_{n}h_{n}=a_{n-1}^{\ast }h_{n-1}\ ,\quad b_{n}^{\ast }=b_{n}\ ,\quad d_{n}^{\ast }=d_{n},  \label{eq1.6c}
\end{equation}
which should be satisfied for all $n\in \mathbb{Z}$. As a result, one obtains the corresponding generalized discrete nonlinear Schr\"{o}dinger dynamical system $\frac{d}{dt}(\psi _{n},\psi _{n}^{\ast })^{\intercal }:=-\theta _{n}\mathrm{grad}\,H[\psi _{n},\psi _{n}^{\ast }]$, $n\in \mathbb{Z}$, equivalent to the infinite set of ordinary differential equations
\begin{equation}
\begin{array}{l}
\frac{d}{dt}\psi _{n}=i\nu _{n}\left( h_{n+1}c_{n+1}g_{n}\psi _{n+1}+(b_{n}g_{n}h_{n}-2{\tilde{h}}_{n}^{2}h_{n}d_{n})\psi _{n}+h_{n-1}a_{n-1}g_{n}\psi _{n-1}\right) - \\
\qquad \quad -i\alpha \nu _{n}{\tilde{h}}_{n}^{2}\left( h_{n+1}c_{n+1}\psi _{n+1}+h_{n}b_{n}\psi _{n}+h_{n-1}a_{n-1}\psi _{n-1}\right) \psi _{n}\psi _{n}^{\ast }, \\
\frac{d}{dt}\psi _{n}^{\ast }=-i\nu _{n}\left( h_{n}a_{n}g_{n}\psi _{n+1}^{\ast }+(b_{n}g_{n}h_{n}-2{\tilde{h}}_{n}^{2}h_{n}d_{n})\psi _{n}^{\ast }+h_{n}c_{n}g_{n}\psi _{n-1}^{\ast }\right) + \\
\qquad \quad +i\alpha \nu _{n}{\tilde{h}}_{n}^{2}\left( h_{n}a_{n}\psi _{n+1}^{\ast }+h_{n}b_{n}\psi _{n}^{\ast }+h_{n}c_{n}\psi _{n-1}^{\ast }\right) \psi _{n}\psi _{n}^{\ast }
\end{array}  \label{eq1.6d}
\end{equation}
for all $n\in \mathbb{Z}$. In the completely autonomous case, when $h_{n}=h$, ${\tilde{h}}_{n}=\tilde{h}$, $\nu _{n}=\nu $, $a_{n}=a$, $b_{n}=b$, $c_{n}=c$, $d_{n}=d$ and $g_{n}=g\in \mathbb{R}_{+}$ for all $n\in \mathbb{Z}$, the Poissonian structure (\ref{eq1.6a}) becomes
\begin{equation}
\theta _{n}=\left(
\begin{array}{cc}
0 & -i\nu (g-{\tilde{h}}^{2}\alpha \psi _{n}^{\ast }\psi _{n}) \\
i\nu (g-{\tilde{h}}^{2}\alpha \psi _{n}^{\ast }\psi _{n}) & 0
\end{array}
\right)  \label{eq1.6e}
\end{equation}
and the Hamiltonian function (\ref{eq1.6b}) becomes
\begin{equation}
H=\sum_{n\in \mathbb{Z}}h\left( a\psi _{n}\psi _{n+1}^{\ast }+b\psi _{n}\psi _{n}^{\ast }+c\psi _{n}\psi _{n-1}^{\ast }+\frac{2d}{\alpha }\ln |g-\alpha {\tilde{h}}^{2}\psi _{n}\psi _{n}^{\ast }|\right) .  \label{eq1.6f}
\end{equation}
The corresponding reality condition for (\ref{eq1.6f}) reads as
\begin{equation}
c=a^{\ast }\ ,\quad b^{\ast }=b\ ,\quad d^{\ast }=d\ ,  \label{eq1.6g}
\end{equation}
and the related discrete nonlinear Schr\"{o}dinger dynamical system reads as the set of equations
\begin{equation}
\begin{array}{l}
\frac{d}{dt}\psi _{n}=i\nu h\left( cg\psi _{n+1}+(bg-2{\tilde{h}}^{2}d)\psi _{n}+ag\psi _{n-1}\right) -i\alpha \nu h{\tilde{h}}^{2}\left( c\psi _{n+1}+b\psi _{n}+a\psi _{n-1}\right) \psi _{n}\psi _{n}^{\ast }, \\[2ex]
\frac{d}{dt}\psi _{n}^{\ast }=-i\nu h\left( ag\psi _{n+1}^{\ast }+(bg-2{\tilde{h}}^{2}d)\psi _{n}^{\ast }+cg\psi _{n-1}^{\ast }\right) +i\alpha \nu h{\tilde{h}}^{2}\left( a\psi _{n+1}^{\ast }+b\psi _{n}^{\ast }+c\psi _{n-1}^{\ast }\right) \psi _{n}\psi _{n}^{\ast }
\end{array}  \label{eq1.6h}
\end{equation}
for all $n\in \mathbb{Z}$. Making now in (\ref{eq1.6d}) the substitutions
\begin{eqnarray}
\nu _{n} &=&\frac{1}{h_{n}}\ ,\quad g_{n}=1\ ,\quad {\tilde{h}}_{n}=h_{n}\ ,\quad a_{n}=\frac{1}{h_{n}^{2}}\ ,  \label{eq1.6i} \\
b_{n} &=&0\ ,\quad c_{n}=\frac{1}{h_{n}h_{n-1}}\ ,\quad d_{n}=\frac{1}{h_{n}^{4}}\ ,  \notag
\end{eqnarray}
one obtains the discrete nonlinear Schr\"{o}dinger dynamical system
\begin{equation}
\left.
\begin{array}{c}
\frac{d}{dt}\psi _{n}=\frac{i}{h_{n}^{2}}(\psi _{n+1}-2\psi _{n}+h_{n}h_{n-1}^{-1}\psi _{n-1})- \\
-i\alpha (\psi _{n+1}+h_{n}h_{n-1}^{-1}\psi _{n-1})\psi _{n}^{\ast }\psi _{n}, \\
\frac{d}{dt}\psi _{n}^{\ast }=-\frac{i}{h_{n}^{2}}(\psi _{n+1}^{\ast }-2\psi _{n}^{\ast }+h_{n}h_{n-1}^{-1}\psi _{n-1}^{\ast })+ \\
+i\alpha (\psi _{n+1}^{\ast }+h_{n}h_{n-1}^{-1}\psi _{n-1}^{\ast })\psi _{n}^{\ast }\psi _{n}
\end{array}
\right\} :=K_{n}^{(g)}[\psi _{n},\psi _{n}^{\ast }],  \label{eq1.6j}
\end{equation}
whose Hamiltonian function equals
\begin{equation}
H^{(g)}=\sum_{n\in \mathbb{Z}}h_{n}^{-1}(\psi _{n}\psi _{n+1}^{\ast }+\psi _{n+1}\psi _{n}^{\ast }+\frac{2}{\alpha h_{n}^{2}}\ln |1-\alpha h_{n}^{2}\psi _{n}^{\ast }\psi _{n}|).  \label{eq1.6k}
\end{equation}
Another substitution, taken in the form
\begin{equation}
c=a\neq 0\ ,\quad \nu hga=\frac{1}{h^{2}}\ ,\quad (bg-2{\tilde{h}}^{2}d)\nu h=-\frac{2}{h^{2}}\ ,\quad \nu h{\tilde{h}}^{2}(a+b+c)=2,  \label{eq1.6l}
\end{equation}
is also suitable as $h\rightarrow 0$ for discretizing the nonlinear Schr\"{o}dinger dynamical system, generalizing (\ref{eq1.3}). The corresponding discrete nonlinear Schr\"{o}dinger dynamics takes the form
\begin{equation}
\begin{array}{l}
\frac{d}{dt}\psi _{n}=\frac{i}{h^{2}}(\psi _{n+1}-2\psi _{n}+\psi _{n-1})-\frac{2i\alpha }{2+\mu }(\psi _{n+1}+\psi _{n-1}+\mu \psi _{n})\psi _{n}\psi _{n}^{\ast }, \\[2ex]
\frac{d}{dt}\psi _{n}^{\ast }=-\frac{i}{h^{2}}(\psi _{n+1}^{\ast }-2\psi _{n}^{\ast }+\psi _{n-1}^{\ast })+\frac{2i\alpha }{2+\mu }(\psi _{n+1}^{\ast }+\psi _{n-1}^{\ast }+\mu \psi _{n}^{\ast })\psi _{n}\psi _{n}^{\ast }
\end{array}  \label{eq1.6m}
\end{equation}
for all $n\in \mathbb{Z}$, where $\mu =b/a\in \mathbb{R}_{+}$. Thus we have obtained a one-parameter family of Hamiltonian discretizations of the NLS equation.
The set of relationships (\ref{eq1.6l}) admits many reductions; for instance, one can take
\begin{equation}
\nu =1,\quad g=1,\quad a=\frac{1}{h^{3}},\quad d=\left( \frac{\mu +2}{2}\right) ^{2}\frac{1}{h^{5}},\quad \frac{{\tilde{h}}^{2}}{h^{2}}=\frac{2}{2+\mu },  \label{eq1.6n}
\end{equation}
without changing the infinite set of equations (\ref{eq1.6m}).

All of the discretizations of the nonlinear Schr\"{o}dinger dynamical system (\ref{eq1.1}) on the functional manifold $\tilde{M}$ constructed above can be considered as either better or worse from the computational point of view. If a discretization admits, besides the Hamiltonian function, some extra conservation laws, it can naturally be considered a much more suitable case for numerical analysis, allowing one both to control the stability of the solution convergence as the parameter $\mathbb{R}_{+}\ni h\rightarrow 0$ and to perform an invariant solution space reduction to a lower effective dimension of the related solution set.

It is worth observing here that the functional structure of the discretization (\ref{eq1.3}) strongly depends both on the manifold $M_{h}$ and on the form, convergent as $h\rightarrow 0$, of the Hamiltonian function (\ref{eq1.5a}). In particular, the existence of the limit
\begin{equation}
\tilde{H}:=\lim\limits_{h\rightarrow 0}\sum_{n\in \mathbb{Z}}\frac{1}{h}(\psi _{n}\psi _{n+1}^{\ast }+\psi _{n+1}\psi _{n}^{\ast }+\frac{2}{\alpha h^{2}}\ln |1-\alpha h^{2}\psi _{n}^{\ast }\psi _{n}|),  \label{eq1.8}
\end{equation}
coinciding with the expression (\ref{eq1.1c}), imposes a strong constraint on the functional space $\tilde{M}\subset L_{2}(\mathbb{R};\mathbb{C}^{2})$, namely, that a vector-function $(\psi ,\psi ^{\ast })^{\intercal }\in W_{2}^{2}(\mathbb{R};\mathbb{C}^{2})\subset L_{2}(\mathbb{R};\mathbb{C}^{2})$, thereby fixing a suitable functional class \cite{AH} for which the discretization conserves its physical Hamiltonian system sense.
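The existence of the limit (\ref{eq1.8}) can be illustrated numerically: for a smooth, rapidly decaying profile the discrete sum (\ref{eq1.5a}) approaches $\tilde{H}$ of (\ref{eq1.1c}), which after integration by parts equals $-\int_{\mathbb{R}}(|\psi _{x}|^{2}+\alpha |\psi |^{4})dx$. The following sketch rests on our own assumptions (a Gaussian test profile, a periodic grid, a spectral reference value) and merely checks that the discrepancy shrinks as $h$ is halved:

```python
import numpy as np

def discrete_H(psi, h, alpha):
    # H of eq. (1.5a): hopping part plus the logarithmic density.
    hop = 2.0*np.real(psi*np.conj(np.roll(psi, -1)))
    log = (2.0/(alpha*h**2))*np.log(np.abs(1.0 - alpha*h**2*np.abs(psi)**2))
    return float(np.sum(hop + log)/h)

def continuum_H(L, N, alpha, amp):
    # tilde H = -int (|psi_x|^2 + alpha |psi|^4) dx for psi = amp*exp(-x^2),
    # evaluated spectrally on a fine periodic grid of length L.
    x = np.linspace(-L/2, L/2, N, endpoint=False)
    psi = amp*np.exp(-x**2)
    k = 2.0*np.pi*np.fft.fftfreq(N, d=L/N)
    psi_x = np.fft.ifft(1j*k*np.fft.fft(psi))
    return float(np.sum(-np.abs(psi_x)**2 - alpha*np.abs(psi)**4)*(L/N))
```

In this setting the error of the discrete sum behaves as $O(h^{2})$, so halving the mesh size should reduce the discrepancy by roughly a factor of four.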
Respectively, the limiting symplectic structure for (\ref{eq1.8}),
\begin{eqnarray}
\tilde{\omega}^{(2)} &:=&-\lim\limits_{h\rightarrow 0}\sum_{n\in \mathbb{Z}}\frac{i}{2}<(d\psi _{n},d\psi _{n}^{\ast })^{\intercal },\wedge \theta _{n}^{-1}(d\psi _{n},d\psi _{n}^{\ast })^{\intercal }>=  \label{eq1.8a} \\
&=&-\lim\limits_{h\rightarrow 0}i\sum_{n\in \mathbb{Z}}h(1-\alpha h^{2}\psi _{n}^{\ast }\psi _{n})^{-1}d\psi _{n}^{\ast }\wedge d\psi _{n}=-i\int_{\mathbb{R}}dx[d\psi ^{\ast }(x)\wedge d\psi (x)],  \notag
\end{eqnarray}
on the manifold $\tilde{M}$ coincides exactly with the canonical symplectic structure (\ref{eq1.1d}) for the dynamical system (\ref{eq1.1a}).

If one now assumes only that the vector function $(\psi ,\psi ^{\ast })^{\intercal }\in W_{2}^{1}(\mathbb{R};\mathbb{C}^{2})\subset L_{2}(\mathbb{R};\mathbb{C}^{2})$, the Hamiltonian function (\ref{eq1.5a}) can be taken only as
\begin{equation}
H^{(s)}=\sum_{n\in \mathbb{Z}}(\psi _{n}\psi _{n+1}^{\ast }+\psi _{n+1}\psi _{n}^{\ast }+\frac{2}{\alpha h^{2}}\ln |1-\alpha h^{2}\psi _{n}^{\ast }\psi _{n}|),  \label{eq1.9}
\end{equation}
and the corresponding Poissonian structure as
\begin{equation}
\theta _{n}^{(s)}:=\left(
\begin{array}{cc}
0 & -ih^{-2}(1-h^{2}\alpha \psi _{n}^{\ast }\psi _{n}) \\
ih^{-2}(1-h^{2}\alpha \psi _{n}^{\ast }\psi _{n}) & 0
\end{array}
\right) .  \label{eq1.10}
\end{equation}
The limiting Hamiltonian function for (\ref{eq1.9}),
\begin{equation}
\tilde{H}^{(s)}:=\lim\limits_{h\rightarrow 0}H^{(s)}=\int\limits_{\mathbb{R}}dx(\psi \psi _{x}^{\ast }+\psi _{x}\psi ^{\ast })=0,  \label{eq1.10aa}
\end{equation}
becomes trivial and, simultaneously, the limiting symplectic structure for (\ref{eq1.10}),
\begin{eqnarray}
\tilde{\omega}_{(s)}^{(2)} &:=&\lim\limits_{h\rightarrow 0}\sum_{n\in \mathbb{Z}}\frac{i}{2}<(d\psi _{n},d\psi _{n}^{\ast })^{\intercal },\wedge \theta _{n}^{(s),-1}(d\psi _{n},d\psi _{n}^{\ast })^{\intercal }>=  \notag \\
&=&\lim\limits_{h\rightarrow 0}i\sum_{n\in \mathbb{Z}}h^{2}(1-\alpha h^{2}\psi _{n}^{\ast }\psi _{n})^{-1}d\psi _{n}^{\ast }\wedge d\psi _{n}=0,  \label{eq1.10a}
\end{eqnarray}
becomes trivial too. Thus, the functional space $W_{2}^{1}(\mathbb{R};\mathbb{C}^{2})\subset L_{2}(\mathbb{R};\mathbb{C}^{2})$ is not suitable for the discretization (\ref{eq1.3}) of the nonlinear integrable Schr\"{o}dinger dynamical system (\ref{eq1.1}).

It is important to stress here that the discretization parameter $h\in \mathbb{R}_{+}$ can be taken as depending on the node $n\in \mathbb{Z}$: $h\rightarrow h_{n}\in \mathbb{R}_{+}$, satisfying the condition $\sup\limits_{n\in \mathbb{Z}}h_{n}\leq \varepsilon $, where the limit $\varepsilon \rightarrow 0$ is imposed later. For instance, one can replace the dynamical system (\ref{eq1.3}) by (\ref{eq1.6j}) and the Poissonian structure (\ref{eq1.4}) by
\begin{equation}
\theta _{n}^{(g)}:=\left(
\begin{array}{cc}
0 & -ih_{n}^{-1}(1-h_{n}^{2}\alpha \psi _{n}^{\ast }\psi _{n}) \\
ih_{n}^{-1}(1-h_{n}^{2}\alpha \psi _{n}^{\ast }\psi _{n}) & 0
\end{array}
\right) ,  \label{eq1.12}
\end{equation}
and, respectively, the Hamiltonian function (\ref{eq1.5a}) becomes exactly (\ref{eq1.6k}).

It is easy to check that the modified discrete dynamical system (\ref{eq1.6j}) can be equivalently rewritten as
\begin{equation}
\frac{d}{dt}(\psi _{n},\psi _{n}^{\ast })^{\intercal }=-\theta _{n}^{(g)}\mathrm{grad}\,H^{(g)}[\psi _{n},\psi _{n}^{\ast }]  \label{eq1.14}
\end{equation}
for all $n\in \mathbb{Z}$, meaning, in particular, that the Hamiltonian function (\ref{eq1.6k}) is conservative.
The latter follows from the fact that the skew-symmetric operator (\ref{eq1.12}) is Poissonian on the discretized manifold $M_{h}$. Moreover, if one imposes the constraint that the limit $\lim_{\varepsilon \rightarrow 0}(h_{n}h_{n-1}^{-1})=1$ holds uniformly in $n\in \mathbb{Z}$, the dynamical system (\ref{eq1.6j}) reduces to (\ref{eq1.1}) and the corresponding limiting symplectic structure
\begin{eqnarray}
\tilde{\omega}_{(g)}^{(2)} &:=&-\lim\limits_{\varepsilon \rightarrow 0}\sum_{n\in \mathbb{Z}}\frac{i}{2}<(d\psi _{n},d\psi _{n}^{\ast })^{\intercal },\wedge \theta _{n}^{(g),-1}(d\psi _{n},d\psi _{n}^{\ast })^{\intercal }>=  \label{eq1.15a} \\
&=&-\lim\limits_{\varepsilon \rightarrow 0}i\sum_{n\in \mathbb{Z}}h_{n}(1-\alpha h_{n}^{2}\psi _{n}^{\ast }\psi _{n})^{-1}d\psi _{n}^{\ast }\wedge d\psi _{n}=  \notag \\
&=&-i\int_{\mathbb{R}}dx[d\psi ^{\ast }(x)\wedge d\psi (x)],  \notag
\end{eqnarray}
coincides exactly with the symplectic structure (\ref{eq1.8a}).

\begin{remark}
It is an as yet unsolved, but interesting, problem whether the modified discrete Hamiltonian dynamical system (\ref{eq1.6j}) remains Lax type integrable. It is left for study in a separate work.
\end{remark}

\section{Conservation laws for the integrable discrete NLS system}

Taking into account that the discrete dynamical system (\ref{eq1.3}) is well posed in the space $M_{h}:=w_{h,2}^{2}(\mathbb{Z};\mathbb{C}^{2})\subset l_{2}(\mathbb{Z};\mathbb{C}^{2})$, suitably approximating the Sobolev space of functions $W_{2}^{2}(\mathbb{R};\mathbb{C}^{2})$, we can go further and approximate the space $w_{h,2}^{2}(\mathbb{Z};\mathbb{C}^{2})$ by means of an infinite hierarchy of strictly invariant finite dimensional subspaces $M_{h}^{(N)}\simeq \bar{w}_{h,2}^{2}(\mathbb{Z}_{(N)};\mathbb{C}^{2})$, $N\in \mathbb{Z}_{+}$. In particular, as was shown before both in \cite{AL,AL1} by means of the inverse scattering transform method \cite{AL,No} and in \cite{BP,Pr,PBPS} by means of the gradient-holonomic approach \cite{PM}, the discrete nonlinear Schr\"{o}dinger dynamical system (\ref{eq1.3}) possesses on the manifold $M_{h}$ an infinite hierarchy of functionally independent conservation laws:
\begin{eqnarray}
&&\bar{\gamma}_{0}=\frac{1}{\alpha h^{3}}\sum_{n\in \mathbb{Z}}\ln |1-\alpha h^{2}\psi _{n}^{\ast }\psi _{n}|,\quad \gamma _{0}=\sum_{n\in \mathbb{Z}_{+}}\sigma _{n}^{(0)},  \label{eq1.15} \\
&&\gamma _{1}=\sum_{n\in \mathbb{Z}}(\sigma _{n}^{(1)}+\frac{1}{2}\sigma _{n}^{(0)}\sigma _{n}^{(0)}),  \notag \\
&&\gamma _{2}=\sum_{n\in \mathbb{Z}}(\sigma _{n}^{(2)}+\frac{1}{3}\sigma _{n}^{(0)}\sigma _{n}^{(0)}\sigma _{n}^{(0)}+\sigma _{n}^{(0)}\sigma _{n}^{(1)}),\ \ldots ,  \notag
\end{eqnarray}
where the quantities $\sigma _{n}^{(j)}$, $n\in \mathbb{Z}$, $j\in \mathbb{Z}_{+}$, are defined as follows:
\begin{eqnarray}
&&\sigma _{n}^{(0)}=-\frac{1}{\alpha h^{2}}(\psi _{n}^{\ast }\psi _{n-1}+\psi _{n-1}^{\ast }\psi _{n-2}),  \label{eq1.16} \\
&&\sigma _{n}^{(1)}=i\frac{d}{dt}\sigma _{n-1}^{(0)}+(1-\alpha h^{2}\psi _{n-1}^{\ast }\psi _{n-1})(1-\alpha h^{2}\psi _{n-2}^{\ast }\psi _{n-2})+\beta \frac{\alpha }{h^{2}}\psi _{n-1}^{\ast }(\psi _{n}+\psi _{n-1}),\ \ldots ,  \notag
\end{eqnarray}
and $\beta \in \mathbb{R}$ is an arbitrary constant parameter. As a result of (\ref{eq1.16}) one finds the following infinite hierarchy of smooth conservation laws:
\begin{eqnarray}
\bar{H}_{0} &=&\sum_{n\in \mathbb{Z}}\ln |1-\alpha h^{2}\psi _{n}^{\ast }\psi _{n}|,  \label{eq1.16a} \\
H_{0} &=&\sum_{n\in \mathbb{Z}}\psi _{n}^{\ast }\psi _{n+1},\quad H_{0}^{\ast }=\sum_{n\in \mathbb{Z}}\psi _{n}\psi _{n+1}^{\ast },  \notag \\
H_{1} &=&\sum_{n\in \mathbb{Z}}(\frac{1}{2}\psi _{n}^{2}\psi _{n-1}^{\ast ,2}+\psi _{n}\psi _{n+1}\psi _{n-1}^{\ast }\psi _{n}^{\ast }-\frac{\psi _{n}\psi _{n-2}^{\ast }}{\alpha h^{2}}),  \notag \\
H_{1}^{\ast } &=&\sum_{n\in \mathbb{Z}}(\frac{1}{2}\psi _{n-1}^{2}\psi _{n}^{\ast ,2}+\psi _{n-1}\psi _{n}\psi _{n+1}^{\ast }\psi _{n}^{\ast }-\frac{\psi _{n-2}\psi _{n}^{\ast }}{\alpha h^{2}}),  \notag
\end{eqnarray}
\begin{eqnarray*}
H_{2} &=&\sum_{n\in \mathbb{Z}}[\frac{1}{3}\psi _{n}^{3}\psi _{n-1}^{\ast ,3}+\psi _{n}\psi _{n+1}\psi _{n-1}^{\ast }\psi _{n}^{\ast }(\psi _{n}\psi _{n-1}^{\ast }+\psi _{n+1}\psi _{n}^{\ast }+ \\
&&+\psi _{n+2}\psi _{n+1}^{\ast })-\frac{\psi _{n}\psi _{n-1}^{\ast }}{\alpha h^{2}}(\psi _{n}\psi _{n-2}^{\ast }+\psi _{n+1}\psi _{n-1}^{\ast })- \\
&&-\frac{\psi _{n}\psi _{n}^{\ast }}{\alpha h^{2}}(\psi _{n+1}\psi _{n-2}^{\ast }+\psi _{n+2}\psi _{n-1}^{\ast })+\frac{\psi _{n}\psi _{n-3}^{\ast }}{\alpha ^{2}h^{4}}], \\
H_{2}^{\ast } &=&\sum_{n\in \mathbb{Z}}[\frac{1}{3}\psi _{n}^{\ast ,3}\psi _{n-1}^{3}+\psi _{n}^{\ast }\psi _{n+1}^{\ast }\psi _{n-1}\psi _{n}(\psi _{n}^{\ast }\psi _{n-1}+\psi _{n+1}^{\ast }\psi _{n}+ \\
&&+\psi _{n+2}^{\ast }\psi _{n+1})-\frac{\psi _{n}^{\ast }\psi _{n-1}}{\alpha h^{2}}(\psi _{n}^{\ast }\psi _{n-2}+\psi _{n+1}^{\ast }\psi _{n-1})- \\
&&-\frac{\psi _{n}\psi _{n}^{\ast }}{\alpha h^{2}}(\psi _{n+1}^{\ast }\psi _{n-2}+\psi _{n+2}^{\ast }\psi _{n-1})+\frac{\psi _{n}^{\ast }\psi _{n-3}}{\alpha ^{2}h^{4}}],
\end{eqnarray*}
and so on.
Taking into account the functional structure of the equations (\ref{eq1.2}) or (\ref{eq1.3}), one can define the space $\mathcal{D}(M_{h})$ of smooth functions $\gamma :M_{h}\rightarrow \mathbb{C}$ on $M_{h}$ as those invariant with respect to the phase transformation $\mathbb{C}^{2}\ni (\psi _{n},\psi _{n}^{\ast })\rightarrow (e^{\alpha }\psi _{n},e^{-\alpha }\psi _{n}^{\ast })\in \mathbb{C}^{2}$ for any $n\in \mathbb{Z}$ and $\alpha \in \mathbb{C}$. Equivalently, a function $\gamma \in \mathcal{D}(M_{h})$ iff the condition
\begin{equation}
\sum_{n\in \mathbb{Z}}<\mathrm{grad}\,\gamma \lbrack \psi _{n},\psi _{n}^{\ast }],(\psi _{n},-\psi _{n}^{\ast })^{\intercal }>=0  \label{eq1.3a}
\end{equation}
holds on $M_{h}$. Note that the conserved quantities (\ref{eq1.16a}) belong to $\mathcal{D}(M_{h})$.

The conservation law $\bar{H}_{0}\in \mathcal{D}(M_{h})$ is a Casimir function for the Poissonian structure (\ref{eq1.4}) on the manifold $M_{h}$; that is, for any $\gamma \in \mathcal{D}(M_{h})$ the Poisson bracket
\begin{eqnarray}
\{\gamma ,\bar{H}_{0}\} &:=&\sum_{n\in \mathbb{Z}}<\mathrm{grad}\,\gamma \lbrack \psi _{n},\psi _{n}^{\ast }],\theta _{n}\mathrm{grad}\,\bar{H}_{0}[\psi _{n},\psi _{n}^{\ast }]>=  \notag \\
&=&i\alpha h\sum_{n\in \mathbb{Z}}<\mathrm{grad}\,\gamma \lbrack \psi _{n},\psi _{n}^{\ast }],(\psi _{n},-\psi _{n}^{\ast })>=0,  \label{eq1.16b}
\end{eqnarray}
owing to the condition (\ref{eq1.3a}). The Hamiltonian function (\ref{eq1.5a}) is obtained from the first three invariants of (\ref{eq1.16a}) as
\begin{equation}
H=\frac{2}{\alpha h^{3}}\bar{H}_{0}+\frac{1}{h}(H_{0}+H_{0}^{\ast }).  \label{eq1.16c}
\end{equation}

\begin{remark}
Similarly to the limiting condition (\ref{eq1.8}), the same limiting expression is obtained from the discrete invariant function
\begin{equation}
H^{(w)}=\frac{1}{2\alpha h^{3}}\bar{H}_{0}-\frac{\alpha h}{4}(H_{1}+H_{1}^{\ast }),  \label{eq1.16d}
\end{equation}
that is,
\begin{equation}
\lim_{h\rightarrow 0}H^{(w)}=\tilde{H}:=\frac{1}{2}\int\limits_{\mathbb{R}}dx\left[ \psi \psi _{xx}^{\ast }+\psi _{xx}\psi ^{\ast }-2\alpha (\psi ^{\ast }\psi )^{2}\right] .  \label{eq1.16e}
\end{equation}
\end{remark}

Based on these results, one can apply to the discrete dynamical system (\ref{eq1.3}) the Bogoyavlensky--Novikov type reduction scheme, devised before in \cite{No,PBPS}, and obtain a completely Liouville integrable finite dimensional dynamical system on the manifold $M_{h}^{(N)}$. Namely, we consider the critical submanifold $M_{h}^{(N)}\subset M_{h}$ of the following real-valued action functional
\begin{equation}
\mathcal{L}_{h}^{(N)}:=\sum_{n\in \mathbb{Z}}\mathcal{L}_{h}^{(N)}[\psi _{n},\psi _{n}^{\ast }]=\bar{c}_{0}(h)\bar{H}_{0}+\sum_{j=0}^{N}c_{j}(h)(H_{j}+H_{j}^{\ast }),  \label{eq1.17}
\end{equation}
where, by definition, $\bar{c}_{0},c_{j}:\mathbb{R}_{+}\rightarrow \mathbb{R}$, $j=\overline{0,N}$, are suitably defined functions for arbitrary but fixed $N\in \mathbb{Z}_{+}$, and
\begin{equation}
M_{h}^{(N)}:=\left\{ (\psi ,\psi ^{\ast })^{\intercal }\in M_{h}:\ \mathrm{grad}\,\mathcal{L}_{h}^{(N)}[\psi _{n},\psi _{n}^{\ast }]=0,\ n\in \mathbb{Z}\right\} .  \label{eq1.18}
\end{equation}

As one can easily show, the submanifold $M_{h}^{(N)}\subset M_{h}$ is finite-dimensional and for any $N\in \mathbb{Z}_{+}$ is invariant with respect to the vector field $K:M_{h}\rightarrow T(M_{h})$. This property makes it possible to reduce the vector field to the submanifold $M_{h}^{(N)}\subset M_{h}$ and to obtain a resulting finite-dimensional system of ordinary differential equations on $M_{h}^{(N)}$, whose solution manifold coincides with a subspace of exact solutions of the initial dynamical system (\ref{eq1.3}). The latter proves to be canonically Hamiltonian on the manifold $M_{h}^{(N)}$ and, moreover, completely Liouville--Arnold integrable. If the mappings $\bar{c}_{0},c_{j}:\mathbb{R}_{+}\rightarrow \mathbb{R}$, $j=\overline{0,N}$, are chosen in such a way that the flow (\ref{eq1.3}), invariantly reduced to the finite dimensional submanifold $M_{h}^{(N)}\subset M_{h}$, is nonsingular as $h\rightarrow 0$ and complete, then the corresponding solutions of the discrete dynamical system (\ref{eq1.3}) will respectively approach those of the nonlinear integrable Schr\"{o}dinger dynamical system (\ref{eq1.1}).

Below we realize this scheme for the simplest cases $N=1$ and $N=2$. Another, quite interesting, way of analyzing the discrete dynamical system (\ref{eq1.3}) consists in applying the approaches recently devised in \cite{Ci,MQ}, based on the long-time behavior of the chosen discretization subject to a fixed Hamiltonian function structure.
\section{\label{Sec_2}The finite dimensional reduction scheme: the case $N=1 } Consider the following non degenerate action functional \begin{equation*} \mathcal{L}_{\ h}^{(1)}=\sum_{n\in \mathbb{Z}}\bar{c}_{0}(h)\ln |1-\alpha h^{2}\psi _{n}^{\ast }\psi _{n}|+\ \sum_{n\in \mathbb{Z}}c_{0}(h)(\psi _{n}^{\ast }\psi _{n+1}+\psi _{n}\psi _{n+1}^{\ast })+ \end{equation* \begin{eqnarray} &&+\sum_{n\in \mathbb{Z}}c_{1}(h)(\frac{1}{2}\psi _{n}^{2}\psi _{n-1}^{\ast ,2}+\psi _{n}\psi _{n+1}\psi _{n-1}^{\ast }\psi _{n}^{\ast }-\frac{\psi _{n}\psi _{n-2}^{\ast }}{\alpha h^{2}}+ \label{eq2.1} \\ &&+\frac{1}{2}\psi _{n}^{2}\psi _{n+1}^{\ast ,2}+\psi _{n}\psi _{n+1}\psi _{n+1}^{\ast }\psi _{n+2}^{\ast }-\frac{\psi _{n-1}\psi _{n+1}^{\ast }} \alpha h^{2}})\ \notag \end{eqnarray with mappings $\ \bar{c}_{0},c_{j}:\mathbb{R}_{+}\rightarrow \mathbb{R},$ $j \overline{0,1},$ taken as \ \begin{equation} \bar{c}_{0}(h)=\frac{4\xi +1}{2\alpha h^{3}},\text{ \ }c_{0}(h)=\frac{\xi }{ },\text{ \ \ }c_{1}(h)=\frac{\alpha h}{4}, \label{eq2.1a} \end{equation and being easily determined for any $\xi \in \mathbb{R}$ from the condition for existence of a limit as $h\rightarrow 0$: \begin{equation} \mathcal{\tilde{L}}_{\ }^{(1)}:=\lim_{h\rightarrow 0}\mathcal{L}_{\ h}^{(1)}. 
\label{eq2.1b} \end{equation The corresponding invariant critical submanifold \begin{equation} M_{h}^{(1)}:=\left\{ (\psi ,\psi ^{\ast })^{\intercal }\in M_{h}:\ \mathrm grad}\mathcal{L}_{h}^{(1)}[\psi _{n},\psi _{n}^{\ast }]=0,\ n\in \mathbb{Z \right\} \label{eq2.2} \end{equation is equivalent to the following system of discrete up-recurrent relationships with respect to indices $n\in \mathbb{Z}: \begin{eqnarray*} \psi _{n+2} &=&-\frac{-\bar{c}_{0}(h)/c_{1}(h)\psi _{n}}{(\frac{1}{\alpha h^{2}}-\psi _{n+1\ }\psi _{n+1}^{\ast })(\frac{1}{\alpha h^{2}}-\psi _{n}\psi _{n}^{\ast })}+ \\ &&+ \frac{2\psi _{n-1}c_{0}(h)/c_{1}(h)+\psi _{n\ }(\psi _{n+1}\psi _{n-1}^{\ast }+\psi _{n-1}\psi _{n+1}^{\ast })}{(\frac{1}{\alpha h^{2}}-\psi _{n+1\ }\psi _{n+1}^{\ast })}+ \\ & & + \frac{(\psi _{n+1}^{2}+\psi _{n-1}^{2})\psi _{n\ }^{\ast }-\psi _{n-2} \frac{1}{\alpha h^{2}}-\psi _{n-1\ }\psi _{n-1}^{\ast })}{(\frac{1}{\alpha h^{2}}-\psi _{n+1\ }\psi _{n+1}^{\ast })}:= \\ &=&\Phi _{+}(\psi _{n+1},\psi _{n+1}^{\ast };\psi _{n},\psi _{n}^{\ast };\psi _{n-1},\psi _{n-1}^{\ast }), \end{eqnarray* \begin{eqnarray} \psi _{n+2}^{\ast } &=&-\frac{-\bar{c}_{0}(h)/c_{1}(h)\psi _{n}^{\ast }}{ \frac{1}{\alpha h^{2}}-\psi _{n+1\ }\psi _{n+1}^{\ast })(\frac{1}{\alpha h^{2}}-\psi _{n}\psi _{n}^{\ast })}+ \label{eq2.3} \\ &&+ \frac{2\psi _{n-1}^{\ast }c_{0}(h)/c_{1}(h)+\psi _{n\ }^{\ast }(\psi _{n+1}\psi _{n-1}^{\ast }+\psi _{n-1}\psi _{n+1}^{\ast })}{(\frac{1}{\alpha h^{2}}-\psi _{n+1\ }\psi _{n+1}^{\ast })}+ \notag \\ && + \frac{(\psi _{n+1}^{\ast ,2}+\psi _{n-1}^{\ast ,2})\psi _{n\ }-\psi _{n-2}^{\ast }(\frac{1}{\alpha h^{2}}-\psi _{n-1\ }\psi _{n-1}^{\ast })}{ \frac{1}{\alpha h^{2}}-\psi _{n+1\ }\psi _{n+1}^{\ast })}:= \notag \\ &=&\Phi _{+}^{\ast }(\psi _{n+1},\psi _{n+1}^{\ast };\psi _{n},\psi _{n}^{\ast };\psi _{n-1},\psi _{n-1}^{\ast }). 
\notag \end{eqnarray The latter can be also rewritten as the system of down-recurrent mappings \begin{eqnarray} \psi _{n-2} &=&-\frac{-\bar{c}_{0}(h)/c_{1}(h)\psi _{n}}{(\frac{1}{\alpha h^{2}}-\psi _{n-1\ }\psi _{n-1}^{\ast })(\frac{1}{\alpha h^{2}}-\psi _{n}\psi _{n}^{\ast })}+ \label{eq2.4} \\ &&+\frac{2\psi _{n-1}c_{0}(h)/c_{1}(h)+\psi _{n\ }(\psi _{n+1}\psi _{n-1}^{\ast }+\psi _{n-1}\psi _{n+1}^{\ast })}{(\frac{1}{\alpha h^{2}}-\psi _{n-1\ }\psi _{n-1}^{\ast })}+ \notag \\ &&+\frac{(\psi _{n+1}^{2}+\psi _{n-1}^{2})\psi _{n\ }^{\ast }-\psi _{n+2} \frac{1}{\alpha h^{2}}-\psi _{n+1\ }\psi _{n+1}^{\ast })}{(\frac{1}{\alpha h^{2}}-\psi _{n-1\ }\psi _{n-1}^{\ast })}:= \notag \\ &=&\Phi _{-}(\psi _{n-1},\psi _{n-1}^{\ast };\psi _{n},\psi _{n}^{\ast };\psi _{n+1},\psi _{n+1}^{\ast }), \notag \end{eqnarray \begin{eqnarray*} \psi _{n-2}^{\ast } &=&-\frac{-\bar{c}_{0}(h)/c_{1}(h)\psi _{n}^{\ast }}{ \frac{1}{\alpha h^{2}}-\psi _{n-1\ }\psi _{n-1}^{\ast })(\frac{1}{\alpha h^{2}}-\psi _{n}\psi _{n}^{\ast })}+ \\ &&+\frac{2\psi _{n-1}^{\ast }c_{0}(h)/c_{1}(h)+\psi _{n\ }^{\ast }(\psi _{n+1}\psi _{n-1}^{\ast }+\psi _{n-1}\psi _{n+1}^{\ast })}{(\frac{1}{\alpha h^{2}}-\psi _{n-1\ }\psi _{n-1}^{\ast })}+ \\ &&+\frac{(\psi _{n+1}^{\ast ,2}+\psi _{n-1}^{\ast ,2})\psi _{n\ }-\psi _{n+2}^{\ast }(\frac{1}{\alpha h^{2}}-\psi _{n+1\ }\psi _{n+1}^{\ast })}{ \frac{1}{\alpha h^{2}}-\psi _{n-1\ }\psi _{n-1}^{\ast })}:= \\ &=&\Phi _{-}^{\ast }(\psi _{n-1},\psi _{n-1}^{\ast };\psi _{n},\psi _{n}^{\ast };\psi _{n+1},\psi _{n+1}^{\ast }), \end{eqnarray* which also hold for all $n\in \mathbb{Z}.$ The \ relationships \ (\ref{eq2.3 ) (or, the same, relationships\ (\ref{eq2.4})) mean that the whole submanifold $M_{h}^{(1)}\subset M_{h}$ \ is retrieved by means of the initial values \ $(\bar{\psi}_{-1},\bar{\psi}_{-1}^{\ast };\bar{\psi}_{0} \bar{\psi}_{0}^{\ast };\bar{\psi}_{1},\bar{\psi}_{1}^{\ast };\bar{\psi}_{2} \bar{\psi}_{2}^{\ast })^{\intercal }\in M_{h}^{(1)}\simeq \mathbb{C}^{8}.$ Thereby, 
the submanifold $M_{h}^{(1)}\subset M_{h}$ is naturally diffeomorphic to the finite dimensional complex space $M_{h}^{8}.$ Taking into account the canonical symplecticity \cite{Pr,PBPS} of the submanifold $M_{h}^{(1)}\simeq M_{h}^{8}$ and its invariance with respect to the vector field (\ref{eq1.3}), one can easily reduce the latter on this submanifold $M_{h}^{(1)}\simeq M_{h}^{8}$ and obtain the following equivalent finite dimensional flow on the manifold $M_{h}^{8}$:
\begin{eqnarray*}
\frac{d}{dt}\psi _{2} &=&\frac{i}{h^{2}}[\Phi _{+}(\psi _{2},\psi _{2}^{\ast };\psi _{1},\psi _{1}^{\ast };\psi _{0},\psi _{0}^{\ast })-2\psi _{2}+\psi _{1}]- \\
&&-i\alpha \lbrack \Phi _{+}(\psi _{2},\psi _{2}^{\ast };\psi _{1},\psi _{1}^{\ast };\psi _{0},\psi _{0}^{\ast })+\psi _{1}]\psi _{2}\psi _{2}^{\ast },
\end{eqnarray*}
\begin{equation}
\frac{d}{dt}\psi _{1}=\frac{i}{h^{2}}[\psi _{2}-2\psi _{1}+\psi _{0}]-i\alpha (\psi _{2}+\psi _{0})\psi _{1}\psi _{1}^{\ast },  \notag
\end{equation}
\begin{equation}
\frac{d}{dt}\psi _{0}=\frac{i}{h^{2}}[\psi _{1}-2\psi _{0}+\psi _{-1}]-i\alpha (\psi _{1}+\psi _{-1})\psi _{0}\psi _{0}^{\ast },  \label{eq2.5}
\end{equation}
\begin{eqnarray}
\frac{d}{dt}\psi _{-1} &=&\frac{i}{h^{2}}[\psi _{0}-2\psi _{-1}+\Phi _{-}(\psi _{-1},\psi _{-1}^{\ast };\psi _{0},\psi _{0}^{\ast };\psi _{1},\psi _{1}^{\ast })]-  \notag \\
&&-i\alpha \lbrack \psi _{0}+\Phi _{-}(\psi _{-1},\psi _{-1}^{\ast };\psi _{0},\psi _{0}^{\ast };\psi _{1},\psi _{1}^{\ast })]\psi _{-1}\psi _{-1}^{\ast },  \notag
\end{eqnarray}
and
\begin{eqnarray}
\frac{d}{dt}\psi _{-1}^{\ast } &=&-\frac{i}{h^{2}}[\psi _{0}^{\ast }-2\psi _{-1}^{\ast }+\Phi _{-}^{\ast }(\psi _{-1},\psi _{-1}^{\ast };\psi _{0},\psi _{0}^{\ast };\psi _{1},\psi _{1}^{\ast })]+  \notag \\
&&+i\alpha \lbrack \psi _{0}^{\ast }+\Phi _{-}^{\ast }(\psi _{-1},\psi _{-1}^{\ast };\psi _{0},\psi _{0}^{\ast };\psi _{1},\psi _{1}^{\ast })]\psi _{-1}\psi _{-1}^{\ast },  \notag
\end{eqnarray}
\begin{equation*}
\frac{d}{dt}\psi _{0}^{\ast }=-\frac{i}{h^{2}}[\psi _{1}^{\ast }-2\psi _{0}^{\ast }+\psi _{-1}^{\ast }]+i\alpha (\psi _{1}^{\ast }+\psi _{-1}^{\ast })\psi _{0}\psi _{0}^{\ast },
\end{equation*}
\begin{equation}
\frac{d}{dt}\psi _{1}^{\ast }=-\frac{i}{h^{2}}[\psi _{2}^{\ast }-2\psi _{1}^{\ast }+\psi _{0}^{\ast }]+i\alpha (\psi _{2}^{\ast }+\psi _{0}^{\ast })\psi _{1}\psi _{1}^{\ast },  \label{eq2.6}
\end{equation}
\begin{eqnarray*}
\frac{d}{dt}\psi _{2}^{\ast } &=&-\frac{i}{h^{2}}[\Phi _{+}^{\ast }(\psi _{2},\psi _{2}^{\ast };\psi _{1},\psi _{1}^{\ast };\psi _{0},\psi _{0}^{\ast })-2\psi _{2}^{\ast }+\psi _{1}^{\ast }]+ \\
&&+i\alpha \lbrack \Phi _{+}^{\ast }(\psi _{2},\psi _{2}^{\ast };\psi _{1},\psi _{1}^{\ast };\psi _{0},\psi _{0}^{\ast })+\psi _{1}^{\ast }]\psi _{2}\psi _{2}^{\ast }.
\end{eqnarray*}
The next proposition, characterizing the Hamiltonian structure of the reduced dynamical system (\ref{eq2.5}) and (\ref{eq2.6}), holds.
\begin{proposition}
The eight-dimensional complex dynamical system (\ref{eq2.5}) and (\ref{eq2.6}) is Hamiltonian on the manifold $M_{h}^{(1)}\simeq M_{h}^{8}$ with respect to the canonical symplectic structure
\begin{equation}
\omega _{h}^{(2)}=\sum_{j=\overline{-2,1}}(dp_{-j}\wedge d\psi _{-j}+dp_{-j}^{\ast }\wedge d\psi _{-j}^{\ast }),  \label{eq2.7}
\end{equation}
where, by definition,
\begin{equation}
p_{-j}:=\mathcal{L}_{h,\psi _{n-j+1}}^{(1)\prime ,\ast }[\psi _{n},\psi _{n}^{\ast }]\cdot 1,\text{ \ \ \ }p_{-j}^{\ast }:=\mathcal{L}_{h,\psi _{n-j+1}^{\ast }}^{(1)\prime ,\ast }[\psi _{n},\psi _{n}^{\ast }]\cdot 1  \label{eq2.8}
\end{equation}
for $j=\overline{-2,1}$ modulo the constraint $\mathrm{grad}\mathcal{L}_{h}^{(1)}[\psi _{n},\psi _{n}^{\ast }]=0,$ $n\in \mathbb{Z},$ on the submanifold $M_{h}^{(1)}\simeq M_{h}^{8},$ and the sign $^{"\prime ,\ast "}$ denotes the corresponding discrete Fr\'{e}chet up-directed derivative and its natural conjugation with respect to the convolution mapping on $T^{\ast }(M_{h}^{(1)})\times T(M_{h}^{(1)}).$
\end{proposition}

\begin{proof}
The symplectic structure (\ref{eq2.7}) easily follows \cite{Pr,PBPS,BPS} from the discrete version of the Gelfand-Dikiy \cite{GD} differential relationship
\begin{eqnarray}
d\mathcal{L}_{h}^{(1)}[\psi _{n},\psi _{n}^{\ast }] &=&<\mathrm{grad}\mathcal{L}_{h}^{(1)}[\psi _{n-1},\psi _{n-1}^{\ast }],(d\psi _{n-1},d\psi _{n-1}^{\ast })^{\intercal }>+  \label{eq2.9} \\
&&+\frac{d}{dn}\alpha _{h}^{(1)}(\psi _{n-1},\psi _{n-1}^{\ast };\psi _{n},\psi _{n}^{\ast };\psi _{n+1},\psi _{n+1}^{\ast };\psi _{n+2},\psi _{n+2}^{\ast }),  \notag
\end{eqnarray}
where $\alpha _{h}^{(1)}\in \Lambda ^{1}(M_{h}^{(1)})$ is, owing to the condition $\mathrm{grad}\mathcal{L}_{h}^{(1)}[\psi _{n},\psi _{n}^{\ast }]=0,$ $n\in \mathbb{Z},$ on the submanifold $M_{h}^{(1)},$ a suitably defined one-form on the manifold $M_{h}^{8}$ that does not depend on the index $n\in \mathbb{Z}$ and allows the following canonical representation
\begin{equation}
\alpha _{h}^{(1)}=\sum_{j=\overline{-2,1}}(p_{-j}(h)d\psi _{-j}+p_{-j}^{\ast }(h)d\psi _{-j}^{\ast })  \label{eq2.10}
\end{equation}
with functions $p_{-j},p_{-j}^{\ast }:M_{h}^{(1)}\times \mathbb{R}\rightarrow \mathbb{C},$ $j=\overline{-2,1}.$ The latter, defined by the expressions (\ref{eq2.8}), constitute jointly with the variables $\psi _{-j},\psi _{-j}^{\ast }:M_{h}^{(1)}\times \mathbb{R}\rightarrow \mathbb{C},$ $j=\overline{-2,1},$ the global coordinates on the finite dimensional symplectic manifold $M_{h}^{8},$ proving the proposition.
\end{proof}

The dynamical system (\ref{eq2.5}) and (\ref{eq2.6}) on the manifold $M_{h}^{8}$ possesses, in addition to its Hamiltonian function, exactly four mutually commuting functionally independent conservation laws $\mathcal{H}_{k},\mathcal{H}_{k}^{\ast }:M_{h}^{8}\rightarrow \mathbb{R},$ $k=\overline{0,1},$ and one Casimir function $\mathcal{\bar{H}}_{0}:M_{h}^{8}\rightarrow \mathbb{R},$ which can be calculated \cite{PBPS} from the following functional relationships
\begin{eqnarray}
&<&\mathrm{grad}H_{k}[\psi _{n},\psi _{n}^{\ast }],K[\psi _{n},\psi _{n}^{\ast }]>:=  \label{eq2.11} \\
&=&-\frac{d}{dn}\mathcal{H}_{k}(\psi _{n-1},\psi _{n-1}^{\ast };\psi _{n},\psi _{n}^{\ast };\psi _{n+1},\psi _{n+1}^{\ast };\psi _{n+2},\psi _{n+2}^{\ast }),  \notag
\end{eqnarray}
\begin{eqnarray}
&<&\mathrm{grad}H_{k}^{\ast }[\psi _{n},\psi _{n}^{\ast }],K[\psi _{n},\psi _{n}^{\ast }]>:=  \notag \\
&=&-\frac{d}{dn}\mathcal{H}_{k}^{\ast }(\psi _{n-1},\psi _{n-1}^{\ast };\psi _{n},\psi _{n}^{\ast };\psi _{n+1},\psi _{n+1}^{\ast };\psi _{n+2},\psi _{n+2}^{\ast }),  \notag
\end{eqnarray}
\begin{eqnarray}
&<&\mathrm{grad}\bar{H}_{0}[\psi _{n},\psi _{n}^{\ast }],K[\psi _{n},\psi _{n}^{\ast }]>:=  \notag \\
&=&-\frac{d}{dn}\mathcal{\bar{H}}_{0}(\psi _{n-1},\psi _{n-1}^{\ast };\psi _{n},\psi _{n}^{\ast };\psi _{n+1},\psi _{n+1}^{\ast };\psi _{n+2},\psi _{n+2}^{\ast }),  \notag
\end{eqnarray}
for $k=\overline{0,1}$ modulo
the constraint $\mathrm{grad}\mathcal{L}_{h}^{(1)}[\psi _{n-2},\psi _{n-2}^{\ast }]=0,$ $n\in \mathbb{Z},$ on the manifold $M_{h}^{(1)}\simeq M_{h}^{8},$ where $\frac{d}{dn}:=\Delta -1$ is a discrete analog of the differentiation and the shift operator $\Delta $ acts as $\Delta f_{n}:=f_{n+1},$ $n\in \mathbb{Z},$ for any mapping $f:\mathbb{Z}\rightarrow \mathbb{C}.$ From (\ref{eq2.11}) one can obtain, by means of simple but tedious calculations, analytical expressions for the invariants $\mathcal{H}_{k},\mathcal{H}_{k}^{\ast }:M_{h}^{8}\rightarrow \mathbb{R},$ which give rise to the corresponding Hamiltonian function for the dynamical system (\ref{eq2.5}) and (\ref{eq2.6}), owing to the relationship (\ref{eq1.16b}):
\begin{equation}
\mathcal{H}=\frac{2}{\alpha h^{3}}\mathcal{\bar{H}}_{0}+\frac{1}{h}(\mathcal{H}_{0}+\mathcal{H}_{0}^{\ast }),  \notag
\end{equation}
satisfying the following canonical Hamiltonian system with respect to the symplectic structure (\ref{eq2.7})
\begin{eqnarray}
d\psi _{-j}/dt &=&\partial \mathcal{H}/\partial p_{-j},\text{ \ \ \ }d\psi _{-j}^{\ast }/dt=\partial \mathcal{H}/\partial p_{-j}^{\ast },  \label{eq2.13} \\
dp_{-j}/dt &=&-\partial \mathcal{H}/\partial \psi _{-j},\text{ \ \ \ }dp_{-j}^{\ast }/dt=-\partial \mathcal{H}/\partial \psi _{-j}^{\ast },  \notag
\end{eqnarray}
where $j=\overline{-2,1},$ which is Liouville-Arnold integrable on the symplectic manifold $M_{h}^{8}.$

\begin{remark}
In the same way one can construct the finite dimensional reduction of the discrete Schr\"{o}dinger dynamical system (\ref{eq1.3}) in the case $N=2.$ Making use of the conservation laws (\ref{eq1.16a}) calculated above, one can take the corresponding action functional as
\begin{eqnarray}
\label{eq3.1}
\mathcal{L}_{h}^{(2)} &=&\bar{c}_{0}(h)\sum_{n\in \mathbb{Z}}\ln |1-\alpha h^{2}\psi _{n}^{\ast }\psi _{n}|+c_{0}(h)\sum_{n\in \mathbb{Z}}(\psi _{n}^{\ast }\psi _{n-1}+\psi _{n}\psi _{n-1}^{\ast })+  \notag \\
&&+c_{1}(h)\sum_{n\in \mathbb{Z}}\left( \frac{1}{2}\psi _{n}^{2}\psi _{n-1}^{\ast ,2}+\psi _{n}\psi _{n}^{\ast }(\psi _{n+1}\psi _{n-1}^{\ast }+\psi _{n-1}\psi _{n+1}^{\ast })\right. \\
&&\left. +\frac{1}{2}\psi _{n-1}^{2}\psi _{n}^{\ast ,2}-\frac{\psi _{n}\psi _{n-2}^{\ast }}{\alpha h^{2}}-\frac{\psi _{n-2}\psi _{n}^{\ast }}{\alpha h^{2}}\right) +  \notag \\
&&+c_{2}(h)\sum_{n\in \mathbb{Z}}\left( \frac{1}{3}\psi _{n}^{3}\psi _{n-1}^{\ast ,3}+\psi _{n}\psi _{n+1}\psi _{n-1}^{\ast }\psi _{n}^{\ast }(\psi _{n}\psi _{n-1}^{\ast }+\psi _{n+1}\psi _{n}^{\ast }+\right.  \notag \\
&&+\psi _{n+2}\psi _{n+1}^{\ast })-\frac{\psi _{n}\psi _{n-1}^{\ast }}{\alpha h^{2}}(\psi _{n}\psi _{n-2}^{\ast }+\psi _{n+1}\psi _{n-1}^{\ast })-  \notag \\[1ex]
&&-\frac{\psi _{n}\psi _{n}^{\ast }}{\alpha h^{2}}(\psi _{n+1}\psi _{n-2}^{\ast }+\psi _{n+2}\psi _{n-1}^{\ast })+\frac{\psi _{n}\psi _{n-3}^{\ast }}{\alpha ^{2}h^{4}}+\frac{1}{3}\psi _{n}^{\ast ,3}\psi _{n-1}^{3}+  \notag \\[1ex]
&&+\psi _{n}^{\ast }\psi _{n+1}^{\ast }\psi _{n-1}\psi _{n}(\psi _{n}^{\ast }\psi _{n-1}+\psi _{n+1}^{\ast }\psi _{n}+\psi _{n+2}^{\ast }\psi _{n+1})-  \notag \\[1ex]
&&\left. -\frac{\psi _{n}^{\ast }\psi _{n-1}}{\alpha h^{2}}(\psi _{n}^{\ast }\psi _{n-2}+\psi _{n+1}^{\ast }\psi _{n-1})-\frac{\psi _{n}\psi _{n}^{\ast }}{\alpha h^{2}}(\psi _{n+1}^{\ast }\psi _{n-2}+\psi _{n+2}^{\ast }\psi _{n-1})+\frac{\psi _{n}^{\ast }\psi _{n-3}}{\alpha ^{2}h^{4}}\right)  \notag
\end{eqnarray}
with mappings $\bar{c}_{0},c_{j}:\mathbb{R}_{+}\rightarrow \mathbb{R},$ $j=\overline{0,2},$ defined as before from the condition that there exists the limit
\begin{equation}
\mathcal{\tilde{L}}^{(2)}:=\lim_{h\rightarrow 0}\mathcal{L}_{h}^{(2)}.
\label{eq3.1a}
\end{equation}
The respectively defined critical submanifold
\begin{equation}
M_{h}^{(2)}:=\left\{ (\psi ,\psi ^{\ast })^{\intercal }\in M_{h}:\ \mathrm{grad}\mathcal{L}_{h}^{(2)}[\psi _{n},\psi _{n}^{\ast }]=0,\ n\in \mathbb{Z}\right\}  \label{eq3.2}
\end{equation}
becomes diffeomorphic to a finite dimensional canonically symplectic manifold $M_{h}^{12}$ on which the suitably reduced discrete Schr\"odinger dynamical system (\ref{eq1.3}) becomes a Liouville-Arnold integrable Hamiltonian system. The details of the related calculations are planned to be presented in a separate work under preparation.
\end{remark}

\section{\label{Sec_3} The Fourier analysis of the integrable discrete NLS system}

It is easy to observe that the linearized Schr\"{o}dinger system (\ref{eq1.1}) admits the following Fourier type solution:
\begin{equation}
\psi (x,t)=\int_{\mathbb{R}}ds\ \xi (s,t)\exp (ixs),\text{ \ \ \ }\psi ^{\ast }(x,t)=\int_{\mathbb{R}}ds\ \xi ^{\ast }(s,t)\exp (-ixs)  \label{psi-F}
\end{equation}
for all $x,t\in \mathbb{R}$, where $d\xi /dt=-is^{2}\xi ,$ $d\xi ^{\ast }/dt=is^{2}\xi ^{\ast },$ i.e.,
\begin{equation}
\xi (s,t)=\bar{\xi}(s)e^{-is^{2}t},\qquad \xi ^{\ast }(s,t)=\bar{\xi}^{\ast }(s)e^{is^{2}t},
\end{equation}
and $\bar{\xi},\bar{\xi}^{\ast }:\mathbb{R}\rightarrow \mathbb{C}$ are prescribed functions (the Fourier transforms of the initial conditions).
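As an elementary numerical sanity check of (\ref{psi-F}) (our own illustrative computation; the values of $s$, $t$ and the step $dx$ are arbitrary), one can verify that a single Fourier mode $\psi (x,t)=e^{ixs-is^{2}t}$ satisfies the linearized equation $d\psi /dt=i\psi _{xx}$, approximating the second spatial derivative by a central difference:

```python
import cmath

s, t, dx = 1.7, 0.4, 1e-3

def psi(x):
    # single Fourier mode xi(s, t) * exp(i x s) with xi(s, t) = exp(-i s^2 t)
    return cmath.exp(1j * (x * s - s**2 * t))

x = 0.25
dpsi_dt = -1j * s**2 * psi(x)                               # time derivative of the mode
psi_xx = (psi(x + dx) - 2 * psi(x) + psi(x - dx)) / dx**2   # central-difference psi_xx
assert abs(dpsi_dt - 1j * psi_xx) < 1e-4
```

The residual is of order $s^{4}dx^{2}$, coming solely from the finite-difference approximation.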
Likewise, the linearized discrete Schr\"{o}dinger dynamical system (\ref{eq1.3}) allows the following general discrete Fourier type solution
\begin{equation}
\psi _{n}=\int_{\mathbb{R}}ds\ \xi _{h}(s,t)\exp (ihns),\text{ \ \ \ }\psi _{n}^{\ast }=\int_{\mathbb{R}}ds\ \xi _{h}^{\ast }(s,t)\exp (-ihns)  \label{eq4.1}
\end{equation}
for all $n\in \mathbb{Z},$ where the evolution parameter $t\in \mathbb{R},$ $(\psi _{n},\psi _{n}^{\ast })^{\intercal }\in w_{h,2}^{2}(\mathbb{Z};\mathbb{C}^{2})$ and
\begin{equation}
\xi _{h}(s,t)=\bar{\xi}_{h}(s)\exp (-i\frac{4t}{h^{2}}\sin ^{2}\frac{sh}{2}),\qquad \xi _{h}^{\ast }(s,t)=\bar{\xi}_{h}^{\ast }(s)\exp (i\frac{4t}{h^{2}}\sin ^{2}\frac{sh}{2}).  \label{eq4.2}
\end{equation}
Here the function $(\bar{\xi}_{h},\bar{\xi}_{h}^{\ast })^{\intercal }\in W_{h,2}^{2}(\mathbb{R};\mathbb{C}^{2})\subset L_{2}(\mathbb{R};\mathbb{C}^{2}),$ where the functional space $W_{h,2}^{2}(\mathbb{R};\mathbb{C}^{2})$ is yet to be determined. From the boundary condition $(\psi _{n},\psi _{n}^{\ast })^{\intercal }\in w_{h,2}^{2}(\mathbb{Z};\mathbb{C}^{2})$ it follows that the expressions
\begin{eqnarray}
\frac{1}{2\pi }\sum_{n\in \mathbb{Z}}\psi _{n}^{\ast }\psi _{n} &=&\int_{\mathbb{R}}ds\,\xi _{h}^{\ast }(s)\xi _{h}(s)<\infty ,  \label{eq4.3} \\
\frac{1}{2\pi }\sum_{n\in \mathbb{Z}}(\psi _{n+1}^{\ast }\psi _{n}+\psi _{n}^{\ast }\psi _{n+1}) &=&2\int_{\mathbb{R}}ds\,\cos (hs)\xi _{h}^{\ast }(s)\xi _{h}(s)<\infty ,  \notag
\end{eqnarray}
ensure the boundedness of the Hamiltonian function (\ref{eq1.6}), thereby determining the functional space $W_{h,2}^{2}(\mathbb{R};\mathbb{C}^{2})$ to which the vector function $(\xi _{h},\xi _{h}^{\ast })^{\intercal }\in L_{2}(\mathbb{R};\mathbb{C}^{2})$ belongs. However, the discrete evolution does not follow the continuous trajectory.
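The exponential factor in (\ref{eq4.2}) encodes the lattice dispersion relation $\omega _{h}(s)=\frac{4}{h^{2}}\sin ^{2}\frac{sh}{2}$. A minimal numerical sketch (the parameter values are our own): the discrete mode $\psi _{n}(t)=e^{ihns}e^{-i\omega _{h}(s)t}$ satisfies the linearized lattice equation $\frac{d}{dt}\psi _{n}=\frac{i}{h^{2}}(\psi _{n+1}-2\psi _{n}+\psi _{n-1})$ exactly:

```python
import cmath, math

h, s = 0.3, 1.7                                  # arbitrary lattice spacing and wave number
omega = 4.0 / h**2 * math.sin(s * h / 2)**2      # discrete dispersion relation

def psi(n, t):
    # plane-wave mode psi_n(t) = exp(i h n s) * exp(-i omega t)
    return cmath.exp(1j * h * n * s) * cmath.exp(-1j * omega * t)

n, t = 5, 0.4
lhs = -1j * omega * psi(n, t)                    # d psi_n / dt for this mode
rhs = (1j / h**2) * (psi(n + 1, t) - 2 * psi(n, t) + psi(n - 1, t))
assert abs(lhs - rhs) < 1e-12                    # identity holds to machine precision
```

Here no finite-difference error enters: $e^{ihs}+e^{-ihs}-2=-4\sin ^{2}\frac{sh}{2}$ makes the identity exact.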
Being motivated by the works \cite{Ci-oscyl,CR}, we modify the discrete system as follows in order to obtain the exact discretization:
\begin{equation}
\begin{array}{l}
\label{del-dNLS}\displaystyle\frac{d}{dt}\psi _{n}=\frac{i}{\delta ^{2}}(\psi _{n+1}-2\psi _{n}+\psi _{n-1})-i\alpha (\psi _{n+1}+\psi _{n-1})\psi _{n}\psi _{n}^{\ast }, \\[2ex]
\displaystyle\frac{d}{dt}\psi _{n}^{\ast }=-\frac{i}{\delta ^{2}}(\psi _{n+1}^{\ast }-2\psi _{n}^{\ast }+\psi _{n-1}^{\ast })+i\alpha (\psi _{n+1}^{\ast }+\psi _{n-1}^{\ast })\psi _{n}\psi _{n}^{\ast }.
\end{array}
\end{equation}
Substituting (\ref{eq4.1}) into the linearization of (\ref{del-dNLS}) we obtain
\begin{equation}
\xi _{h}(s,t)=\bar{\xi}_{h}(s)\exp (-i\frac{4t}{\delta ^{2}}\sin ^{2}\frac{sh}{2}),\qquad \xi _{h}^{\ast }(s,t)=\bar{\xi}_{h}^{\ast }(s)\exp (i\frac{4t}{\delta ^{2}}\sin ^{2}\frac{sh}{2}).  \label{xi-del}
\end{equation}
Therefore, the linearization of the discretization (\ref{del-dNLS}) is exact (i.e., $\psi (nh,t)=\psi _{n}(t),$ $n\in \mathbb{Z}$) if we assume
\begin{equation}
\delta =\frac{2}{s}\sin \frac{hs}{2}  \label{xi-del-1}
\end{equation}
for any $h\in \mathbb{R}.$ Thus, the parameter $\delta >0$ depends on $s\in \mathbb{R},$ yet for small $h\rightarrow 0$ one gets $\delta =h(1+O(h^{2}s^{2})).$ The nonlinear case is more difficult.
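To illustrate the flow (\ref{del-dNLS}) numerically, the following sketch integrates it on a small periodic ring with a generic fourth-order Runge-Kutta step; the ring size, the amplitudes and the choice of integrator are our own illustrative assumptions, not part of the exact discretization scheme above. As a consistency check it monitors the quantity $\sum_{n}\ln (1-\alpha \delta ^{2}\psi _{n}\psi _{n}^{\ast })$, the analogue of the logarithmic summand in the action functional, which a direct computation shows is exactly conserved by the flow:

```python
import cmath, math

N, delta, alpha, dt = 8, 0.5, 1.0, 1e-3

def rhs(psi):
    # right-hand side of the modified discrete NLS on a periodic ring of N sites
    out = []
    for n in range(N):
        p, q, r = psi[n], psi[(n + 1) % N], psi[(n - 1) % N]
        out.append(1j / delta**2 * (q - 2 * p + r)
                   - 1j * alpha * (q + r) * p * p.conjugate())
    return out

def rk4_step(psi):
    # one classical Runge-Kutta step of size dt
    k1 = rhs(psi)
    k2 = rhs([p + 0.5 * dt * k for p, k in zip(psi, k1)])
    k3 = rhs([p + 0.5 * dt * k for p, k in zip(psi, k2)])
    k4 = rhs([p + dt * k for p, k in zip(psi, k3)])
    return [p + dt / 6 * (a + 2 * b + 2 * c + d)
            for p, a, b, c, d in zip(psi, k1, k2, k3, k4)]

def invariant(psi):
    # C = sum_n ln(1 - alpha * delta^2 |psi_n|^2), conserved exactly by the flow
    return sum(math.log(1 - alpha * delta**2 * abs(p)**2) for p in psi)

psi = [0.1 * cmath.exp(2j * math.pi * n / N) + 0.02 * n / N for n in range(N)]
c0 = invariant(psi)
for _ in range(200):
    psi = rk4_step(psi)
assert abs(invariant(psi) - c0) < 1e-8   # invariant preserved up to integrator error
```

With the small amplitudes chosen here, $1-\alpha \delta ^{2}|\psi _{n}|^{2}>0$ holds throughout, so the logarithms are well defined.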
In the continuous nonlinear case (\ref{psi-F}) we have
\begin{equation}
d\xi /dt=-is^{2}\xi -2i\alpha \beta \lbrack s;\xi ],\qquad d\xi ^{\ast }/dt=is^{2}\xi ^{\ast }+2i\alpha \beta ^{\ast }[s;\xi ^{\ast }],  \label{xi-del-2}
\end{equation}
where the functionals $\beta ,\beta ^{\ast }:\mathbb{R}\times L_{2}(\mathbb{R};\mathbb{C})\rightarrow L_{2}(\mathbb{R};\mathbb{C}),$ determined as
\begin{eqnarray*}
\beta \lbrack s;\xi ] &:&=\int\limits_{\mathbb{R}^{2}}ds^{\prime }ds^{\prime \prime }\xi (s+s^{\prime }-s^{\prime \prime })\xi (s^{\prime \prime })\xi ^{\ast }(s^{\prime }), \\
\beta ^{\ast }[s;\xi ] &:&=\int\limits_{\mathbb{R}^{2}}ds^{\prime }ds^{\prime \prime }\xi ^{\ast }(s+s^{\prime }-s^{\prime \prime })\xi ^{\ast }(s^{\prime \prime })\xi (s^{\prime }),
\end{eqnarray*}
depend both on $s\in \mathbb{R}$ and on the element $\xi \in L_{2}(\mathbb{R};\mathbb{C}),$ as well as parametrically on the evolution parameter $t\in \mathbb{R}$ through (\ref{xi-del-2}). In the nonlinear discrete case (\ref{del-dNLS}) we, respectively, obtain:
\begin{equation}
d\xi _{h}/dt=-i\xi _{h}\frac{4}{\delta ^{2}}\sin ^{2}\frac{sh}{2}-2i\alpha \beta _{h}[s;\xi _{h}],\qquad d\xi _{h}^{\ast }/dt=i\xi _{h}^{\ast }\frac{4}{\delta ^{2}}\sin ^{2}\frac{sh}{2}+2i\alpha \beta _{h}^{\ast }[s;\xi _{h}],  \label{xi-del-3}
\end{equation}
where the functionals $\beta _{h},\beta _{h}^{\ast }:\mathbb{R}\times L_{2}(\mathbb{R};\mathbb{C})\rightarrow L_{2}(\mathbb{R};\mathbb{C})$ are determined as
\begin{eqnarray}
\beta _{h}[s;\xi _{h}] &:&=\int\limits_{\mathbb{R}^{2}}ds^{\prime }ds^{\prime \prime }\cos [h(s+s^{\prime }-s^{\prime \prime })]\xi _{h}(s+s^{\prime }-s^{\prime \prime })\xi _{h}(s^{\prime \prime })\xi _{h}^{\ast }(s^{\prime }),  \label{xi-del-4} \\
\beta _{h}^{\ast }[s;\xi _{h}] &:&=\int\limits_{\mathbb{R}^{2}}ds^{\prime }ds^{\prime \prime }\cos [h(s+s^{\prime }-s^{\prime \prime })]\xi _{h}^{\ast }(s+s^{\prime }-s^{\prime \prime })\xi _{h}^{\ast }(s^{\prime \prime })\xi _{h}(s^{\prime })
\notag
\end{eqnarray}
for any $s\in \mathbb{R}.$ Proceeding further with the truly nonlinear case remains a nontrivial problem, yet we hope to obtain a suitable procedure analogous to that of \cite{Ci,CR-PRE}. Instead, one can analyze the related functional space constraints on the space of functions $(\bar{\xi}_{h},\bar{\xi}_{h}^{\ast })^{\intercal }\in W_{h,2}^{2}(\mathbb{R};\mathbb{C}^{2})$ representing solutions to the discrete nonlinear equation (\ref{eq1.3}) via the expressions (\ref{eq4.1}), imposed by the corresponding finite dimensional reduction scheme of Section~\ref{Sec_2}. This procedure can actually be realized by considering the recurrence relationships (\ref{eq2.3}) (or, similarly, (\ref{eq2.4})) derived above, which allow one to obtain the related constraints on the space of functions $(\bar{\xi}_{h},\bar{\xi}_{h}^{\ast })^{\intercal }\in W_{h,2}^{2}(\mathbb{R};\mathbb{C}^{2}),$ but the resulting relationships prove to be rather complicated and cumbersome. Thus, one can suggest the following practical numerical-analytical scheme for constructing solutions to the discrete nonlinear Schr\"{o}dinger dynamical system (\ref{eq1.3}): first solve the Cauchy problem for the finite-dimensional system of ordinary differential equations (\ref{eq2.5}) and (\ref{eq2.6}), and then substitute the solutions into the system of recurrent algebraic relationships (\ref{eq2.3}) and (\ref{eq2.4}), obtaining in this way the whole infinite hierarchy of the sought-for solutions.

\section{Conclusion}

Within the presented investigation of solutions to the discrete nonlinear Schr\"odinger dynamical system (\ref{eq1.3}) we have succeeded in two important points. First, we have developed an effective scheme for invariantly reducing the infinite system of ordinary differential equations (\ref{eq1.3}) to an equivalent finite one of ordinary differential equations with respect to the evolution parameter $t\in \mathbb{R}$.
Second, we have constructed a finite set of regular recurrent algebraic relationships, allowing one to extend the solutions obtained above to any discrete order $n\in \mathbb{Z}$ and giving rise to the sought-for solutions of the system (\ref{eq1.3}). It is important to mention here that within the presented analysis we have not used the Lax type representation for the discrete nonlinear Schr\"odinger dynamical system (\ref{eq1.3}), whose existence was stated many years ago in \cite{AL} and whose complete solution set analysis was carried out in the works \cite{AL,AL1,BP,No} by means of both the inverse scattering transform and the algebraic-geometric methods. Concerning the set of recurrent relationships for exact solutions to the finite-dimensional reduction of the discrete nonlinear Schr\"odinger dynamical system (\ref{eq1.3}), obtained both in the present work and in the work \cite{BP} on the basis of the corresponding Lax type representation, an interesting problem arises of finding the relationship between them, and an answer to it would explain the hidden structure of the complete Liouville-Arnold integrability of the related set of reduced ordinary differential equations.

\section{Acknowledgements}

J.L.C. acknowledges financial support from the National Science Center (Poland) under NCN grant No. 2011/01/B/ST1/05137 and appreciates fruitful discussions during the 6th Symposium on Integrable Systems held in Bia\l ystok (Poland) on 26-29 June, 2013. J.L.C. is also grateful for discussions and hospitality during his visit to Lviv (Ukraine) in July, 2013. A.P. is thankful to Prof. D. Blackmore for valuable discussions and remarks during the Nonlinear Mathematical Physics Conference held in the Sophus Lie Center, Nordfjordeid (Norway) on June 4-14, 2013.
\section{Introduction} The tiling problem, also known as the domino problem, asks whether the two-dimensional grid $\ensuremath{\mathbb{Z}}^2$ can be colored in a way that avoids a given finite collection of forbidden local patterns. The problem is undecidable in its full generality~\cite{berger}. The undecidability relies on the fact that there are \emph{aperiodic} systems of forbidden patterns that enforce any valid coloring to be non-periodic~\cite{berger}. In this paper we consider the low complexity setup where the number of allowed local patterns is small. More precisely, suppose we are given at most $nm$ legal rectangular patterns of size $n\times m$, and we want to know whether there exists a coloring of $\ensuremath{\mathbb{Z}}^2$ containing only legal $n\times m$ patterns. We prove that if such a coloring exists then also a periodic coloring exists (Corollary~\ref{cor:corperiodic}). This further implies, using standard arguments, that in this setup there is an algorithm to determine if the given patterns admit at least one coloring of the grid (Corollary~\ref{cor:cordecidable}). The results also extend to other convex shapes in place of the rectangle (see Section~\ref{sec:conclusions}). We believe the low complexity setting has relevant applications. There are numerous examples of processes in physics, chemistry and biology where macroscopic patterns and regularities arise from simple microscopic interactions. Formation of crystals and quasi-crystals is a good example where physical laws govern locally the attachments of particles to each other. Predicting the structure of the crystal from its chemical composition is a notoriously difficult problem (as already implied by the undecidability of the tiling problem) but if the number of distinct local patterns of particle attachments is sufficiently low, our results indicate that the situation may be easier to handle. 
Our work is also motivated by \emph{Nivat's conjecture}~\cite{nivat}, an open problem concerning periodicity in low complexity colorings of the grid. The conjecture claims the following: if a coloring of $\ensuremath{\mathbb{Z}}^2$ is such that, for some $n,m\in\ensuremath{\mathbb{N}}$, the number of distinct $n \times m$ patterns is at most $nm$, then the coloring is necessarily periodic. If true, this conjecture directly implies a strong form of our periodicity result: in the low complexity setting, not only a coloring exists that is periodic, but in fact all admitted colorings are periodic. Our contribution to Nivat's conjecture is that we show that under the hypotheses of the conjecture, the coloring must contain arbitrarily large periodic regions (Theorem~\ref{thm:periodic}).

\section{Preliminaries}

To discuss the results in detail we need precise definitions. Let $A$ be a finite alphabet. A coloring $c\in A^{\ensuremath{\mathbb{Z}}^2}$ of the two-dimensional grid $\ensuremath{\mathbb{Z}}^2$ with elements of $A$ is called a (two-dimensional) \emph{configuration}. We use the notation $c_{\vec{n}}$ for the color $c(\vec{n})\in A$ of cell $\vec{n}\in\ensuremath{\mathbb{Z}}^2$. For any $\vec{t}\in\ensuremath{\mathbb{Z}}^2$, the \emph{translation} $\tau^{\vec{t}}:A^{\ensuremath{\mathbb{Z}}^2}\longrightarrow A^{\ensuremath{\mathbb{Z}}^2}$ by $\vec{t}$ is defined by $\tau^{\vec{t}}(c)_{\vec{n}}=c_{\vec{n}-\vec{t}}$, for all $c\in A^{\ensuremath{\mathbb{Z}}^2}$ and all $\vec{n}\in\ensuremath{\mathbb{Z}}^2$. If $\tau^{\vec{t}}(c)=c$ for a non-zero $\vec{t}\in\ensuremath{\mathbb{Z}}^2$, we say that $c$ is \emph{periodic} and that $\vec{t}$ is a \emph{vector of periodicity}. If there are two linearly independent vectors of periodicity then $c$ is \emph{two-periodic}, and in this case there are horizontal and vertical vectors of periodicity $(k,0)$ and $(0,k)$ for some $k> 0$, and consequently a vector of periodicity in every rational direction.
A \emph{finite pattern} is a coloring $p\in A^D$ of some finite domain $D\subset \ensuremath{\mathbb{Z}}^d$. For a fixed $D$, we call such $p$ also a \emph{$D$-pattern}. The set $[p]=\{c\in A^{\ensuremath{\mathbb{Z}}^2}\ |\ c|_{D}=p\}$ of configurations that contain pattern $p$ in domain $D$ is the \emph{cylinder} determined by $p$. We say that pattern $p$ \emph{appears} in configuration $c$, or that $c$ \emph{contains} pattern $p$, if some translate $\tau^{\vec{t}}(c)$ of $c$ is in $[p]$. For a fixed finite $D$, the set of $D$-patterns that appear in a configuration $c$ is denoted by $\Patt{c}{D}$, that is, $$\Patt{c}{D}=\{\tau^{\vec{t}}(c)|_{D}\ |\ \vec{t}\in\ensuremath{\mathbb{Z}}^2\ \}.$$ We say that $c$ has \emph{low complexity\/} with respect to shape $D$ if $|\Patt{c}{D}|\leq |D|$, and we call $c$ a \emph{low complexity configuration\/} if it has low complexity with respect to some finite $D$. \begin{conjecture}[Maurice Nivat 1997~\cite{nivat}] Let $c\in A^{\ensuremath{\mathbb{Z}}^2}$ be a two-dimensional configuration. If $c$ has low complexity with respect to some rectangle $D=\{1,\ldots,n\}\times\{1,\ldots,m\}$ then $c$ is periodic. \end{conjecture} \noindent The analogous claim in dimensions higher than two fails, as does an analogous claim in two dimensions for many other shapes than rectangles~\cite{cassaigne}. \subsection{Algebraic concepts} Kari and Szabados introduced in~\cite{kariszabados} an algebraic approach to study low complexity configurations. The present paper heavily relies on this technique. In this approach we replace the colors in $A$ by distinct integers, so that we assume $A\subseteq \ensuremath{\mathbb{Z}}$. We then express a configuration $c\in A^{\ensuremath{\mathbb{Z}}^2}$ as a formal power series $c(x,y)$ over two variables $x$ and $y$ in which the coefficient of monomial $x^iy^j$ is $c_{i,j}$, for all $i,j\in\ensuremath{\mathbb{Z}}$. Note that the exponents of the variables range from $-\infty$ to $+\infty$. 
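Before turning to annihilators, here is a small concrete instance of the low complexity notion defined above (our own toy example): for the two-periodic checkerboard configuration $c_{i,j}=(i+j)\bmod 2$ and the rectangle $D=\{0,1\}\times\{0,1\}$ one finds $|\Patt{c}{D}|=2\leq |D|=4$, so $c$ has low complexity with respect to $D$ — consistently with Nivat's conjecture, since $c$ is periodic:

```python
def c(i, j):
    # two-periodic "checkerboard" configuration over the alphabet {0, 1}
    return (i + j) % 2

# D-patterns for the 2x2 rectangle D = {0,1} x {0,1}; by two-periodicity it
# suffices to translate over one period in each direction
D = [(0, 0), (1, 0), (0, 1), (1, 1)]
patterns = {tuple(c(i + a, j + b) for (a, b) in D)
            for i in range(2) for j in range(2)}

assert len(patterns) == 2   # low complexity: 2 <= |D| = 4
```

Only the two alternating colorings of the $2\times 2$ square ever occur, whichever cell of the grid the rectangle is placed at.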
In the following, polynomials may also have negative powers of the variables, so all polynomials considered are actually Laurent polynomials. Let us denote by $\ensuremath{\mathbb{Z}}[x^{\pm 1}, y^{\pm 1}]$ and $\ensuremath{\mathbb{Z}}[[x^{\pm 1}, y^{\pm 1}]]$ the sets of such polynomials and power series, respectively. We call a power series $c\in \ensuremath{\mathbb{Z}}[[x^{\pm 1}, y^{\pm 1}]]$ \emph{finitary} if its coefficients take only finitely many different values. Since we color the grid using finitely many colors, configurations are identified with finitary power series. Multiplying a configuration $c\in \ensuremath{\mathbb{Z}}[[x^{\pm 1}, y^{\pm 1}]]$ by a monomial corresponds to translating it, and the periodicity of the configuration by vector $\vec{t}=(n,m)$ is then equivalent to $(x^ny^m-1)c=0$, the zero power series. More generally, we say that polynomial $f\in \ensuremath{\mathbb{Z}}[x^{\pm 1}, y^{\pm 1}]$ \emph{annihilates} power series $c$ if the formal product $fc$ is the zero power series. Note that variables $x$ and $y$ in our power series and polynomials are treated only as ``position indicators'': in this work we never plug in any values to the variables. The set of polynomials that annihilate a power series is a Laurent polynomial ideal, and is denoted by \[ \ensuremath{\text{Ann}}(c) = \{ f\in \ensuremath{\mathbb{Z}}[x^{\pm 1}, y^{\pm 1}] ~|~ fc=0 \} .\] It was observed in~\cite{kariszabados} that if a configuration has low complexity with respect to some shape $D$ then it is annihilated by some non-zero polynomial $f\neq 0$. \begin{lemma}[\cite{kariszabados}] \label{th:low_complexity} Let $c\in \ensuremath{\mathbb{Z}}[[x^{\pm 1}, y^{\pm 1}]]$ be a low complexity configuration. Then $\ensuremath{\text{Ann}}(c)$ contains a non-zero polynomial.
\end{lemma} One of the main results of~\cite{kariszabados} states that if a configuration $c$ is annihilated by a non-zero polynomial then it has annihilators of particularly nice form: \begin{theorem}[\cite{kariszabados}] \label{th:decompo} Let $c\in\ensuremath{\mathbb{Z}}[[x^{\pm 1}, y^{\pm 1}]]$ be a configuration (a finitary power series) annihilated by some non-zero polynomial. Then there exist pairwise linearly independent $(i_1, j_1), \ldots, (i_m, j_m)\in\ensuremath{\mathbb{Z}}^2$ such that \[ (x^{i_1}y^{j_1} - 1) \cdots (x^{i_m}y^{j_m} - 1) \in \ensuremath{\text{Ann}}(c) .\] \end{theorem} \noindent Note that both Lemma~\ref{th:low_complexity} and Theorem~\ref{th:decompo} were proved in~\cite{kariszabados} for configurations $c\in A^{\ensuremath{\mathbb{Z}}^d}$ in arbitrary dimension $d$. In this work we only deal with two-dimensional configurations, so above we stated these results for $d=2$. If $X\subseteq A^{\ensuremath{\mathbb{Z}}^2}$ is a set of configurations, we denote by $\ensuremath{\text{Ann}}(X)$ the set of Laurent polynomials that annihilate all elements of $X$. We call $\ensuremath{\text{Ann}}(X)$ the annihilator ideal of $X$. \subsection{Dynamical systems concepts} \label{sec:dynamical} Cylinders $[p]$ are a base of a compact topology on $A^{\ensuremath{\mathbb{Z}}^2}$, namely the product of discrete topologies on $A$. See, for example, the first few pages of~\cite{tullio}. The topology is equivalently defined by a metric on $A^{\ensuremath{\mathbb{Z}}^2}$ where two configurations are close to each other if they agree with each other on a large region around cell $\vec{0}$. A subset $X$ of $A^{\ensuremath{\mathbb{Z}}^2}$ is a \emph{subshift} if it is closed in the topology and closed under translations. Equivalently, every configuration $c$ that is not in $X$ contains a finite pattern $p$ that prevents it from being in $X$: no configuration that contains $p$ is in $X$. 
We can then as well define subshifts using forbidden patterns: for a set $P$ of finite patterns, define $$X_P=\{c\in A^{\ensuremath{\mathbb{Z}}^2}\ |\ \forall \vec{t}\in \ensuremath{\mathbb{Z}}^2\ \forall p\in P\ :\ \tau^{\vec{t}}(c)\not\in[p]\ \},$$ the set of configurations that avoid all patterns in $P$. Set $X_P$ is a subshift, and every subshift is $X_P$ for some $P$. If $X=X_P$ for some finite $P$ then $X$ is a \emph{subshift of finite type} (SFT). The \emph{tiling problem} (aka the domino problem) is the decision problem that asks whether a given SFT is empty, that is, whether there exists a configuration avoiding a given finite collection $P$ of forbidden finite patterns. Usually this question is asked in terms of so-called Wang tiles, but our formulation is equivalent. The tiling problem is undecidable~\cite{berger}. An SFT is called \emph{aperiodic} if it is non-empty but does not contain any periodic configurations. Aperiodic SFTs exist~\cite{berger}, and in fact they must exist because of the undecidability of the tiling problem~\cite{wang}. We recall the reason for this fact in the proof of Corollary~\ref{cor:cordecidable}. Convergence of a sequence $c^{(1)}, c^{(2)}, \ldots$ of configurations to a configuration $c$ in our topology has the following simple meaning: For every cell $\vec{n}\in\ensuremath{\mathbb{Z}}^2$ we must have $c^{(i)}_{\vec{n}} = c_{\vec{n}}$ for all sufficiently large $i$. As usual, we denote then $c=\lim_{i\rightarrow\infty} c^{(i)}$. Note that if all $c^{(i)}$ are in a subshift $X$, so is the limit. Compactness of space $A^{\ensuremath{\mathbb{Z}}^2}$ means that every sequence has a converging subsequence. In the proof of Theorem~\ref{thm:main} in Section~\ref{sec:main} we frequently use this fact and extract converging subsequences from sequences of configurations. The \emph{orbit} of configuration $c$ is the set ${\cal O}(c) = \{\tau^{\vec{t}}(c)\ |\ \vec{t}\in\ensuremath{\mathbb{Z}}^2\ \}$ that contains all translates of $c$. 
The \emph{orbit closure} $\overline{{\cal O}(c)}$ of $c$ is the topological closure of the orbit ${\cal O}(c)$. It is a subshift, and in fact it is the intersection of all subshifts that contain $c$. The orbit closure $\overline{{\cal O}(c)}$ can hence be called the subshift generated by $c$. In terms of finite patterns, $c'\in \overline{{\cal O}(c)}$ if and only if every finite pattern that appears in $c'$ also appears in $c$. A configuration $c$ is called \emph{uniformly recurrent} if for every $c'\in \overline{{\cal O}(c)}$ we have $\overline{{\cal O}(c')} = \overline{{\cal O}(c)}$. This is equivalent to $\overline{{\cal O}(c)}$ being a \emph{minimal subshift} in the sense that it has no proper non-empty subshifts inside it. A classical result by Birkhoff~\cite{birkhoff} implies that every non-empty subshift contains a minimal subshift, so there is a uniformly recurrent configuration in every non-empty subshift. We use the notation $\inner{\vec{x}}{\vec{y}}$ for the inner product of vectors $\vec{x},\vec{y}\in \ensuremath{\mathbb{Z}}^2$. For a nonzero vector $\vec{u}\in\ensuremath{\mathbb{Z}}^2\setminus\{\vec{0}\}$ we write $$ H_{\vec{u}} = \{\vec{x}\in\ensuremath{\mathbb{Z}}^2\ |\ \inner{\vec{x}}{\vec{u}} < 0 \} $$ for the discrete \emph{half plane} in direction $\vec{u}$. See Figure~\ref{fig:halfplanes}(a) for an illustration. A subshift $X$ is \emph{deterministic} in direction $\vec{u}$ if for all $c,c'\in X$ $$ c|_{H_{\vec{u}}}=c'|_{H_{\vec{u}}} \Longrightarrow c=c', $$ that is, if the contents of a configuration in the half plane $H_{\vec{u}}$ uniquely determine the contents in the rest of the cells. Note that it is enough to verify that the value $c_{\vec{0}}$ on the boundary of the half plane is uniquely determined --- the rest follows by translation invariance of $X$.
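As a quick illustration (our own sketch, not code from the paper), membership in the discrete half plane is decided by the sign of a single inner product; cells with inner product exactly $0$ lie on the boundary and are outside $H_{\vec{u}}$:

```python
# A tiny sketch (ours) of the discrete half plane H_u = { x : <x, u> < 0 }:
# membership is the sign of the inner product with u, and cells with inner
# product exactly 0 lie on the boundary, outside H_u.

def in_half_plane(cell, u):
    return cell[0] * u[0] + cell[1] * u[1] < 0

u = (-1, 2)
print(in_half_plane((0, 0), u))  # boundary cell, inner product 0 -> False
print(in_half_plane((1, 0), u))  # inner product -1 -> True
print(in_half_plane((0, 1), u))  # inner product 2 -> False
```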
Moreover, by compactness, determinism in direction $\vec{u}$ implies that there is a finite number $k$ such that the contents of a configuration in the discrete box $$ B_{\vec{u}}^{k} = \{\vec{x}\in\ensuremath{\mathbb{Z}}^2\ |\ -k < \inner{\vec{x}}{\vec{u}} < 0 \mbox{ and } -k < \inner{\vec{x}}{\vec{u}^\bot} < k \} $$ already suffice to uniquely determine the contents of cell ${\vec{0}}$, where we denote by $\vec{u}^{\bot}$ a vector that is orthogonal to $\vec{u}$ and has the same length as $\vec{u}$, e.g., $(n,m)^{\bot}=(m,-n)$. See Figure~\ref{fig:halfplanes}(b) for an illustration.
\begin{figure}[ht]%
\centering
\subfigure[The discrete half plane $H_{\vec{u}}$]{%
\label{fig:halfplanes:a}%
\includegraphics[width=5cm]{H.pdf}
}%
\qquad \qquad
\subfigure[The discrete box $B_{\vec{u}}^k$ with $k=10$.]{%
\label{fig:halfplanes:b}%
\includegraphics[width=5cm]{Snm.pdf}
}%
%
\caption{Discrete regions determined by vector $\vec{u}=(-1,2)$.}
\label{fig:halfplanes}
\end{figure}
If $X$ is deterministic in directions $\vec{u}$ and $-\vec{u}$ we say that $\vec{u}$ is a direction of \emph{two-sided} determinism. If $X$ is deterministic in direction $\vec{u}$ but not in direction $-\vec{u}$ we say that $\vec{u}$ is a direction of \emph{one-sided} determinism. Directions of two-sided determinism correspond to directions of expansivity in the symbolic dynamics literature. If $X$ is not deterministic in direction $\vec{u}$ we call $\vec{u}$ a \emph{direction of non-determinism}. Finally, note that the concept of determinism in direction $\vec{u}$ depends only on the orientation of the vector $\vec{u}$ and not on its magnitude. \section{Our results} Our first main technical result is the following: \begin{theorem} \label{thm:main} Let $c$ be a two-dimensional configuration that has a non-trivial annihilator. Then $\overline{{\cal O}(c)}$ contains a configuration $c'$ such that $\overline{{\cal O}(c')}$ has no direction of one-sided determinism.
\end{theorem} \noindent From this result, using a technique by Cyr and Kra~\cite{cyrkra}, we then obtain the second main result: \begin{theorem} \label{thm:periodic} Let $c$ be a two-dimensional configuration that has low complexity with respect to a rectangle. Then $\overline{{\cal O}(c)}$ contains a periodic configuration. \end{theorem} \noindent These two theorems are proved in Sections~\ref{sec:main} and \ref{sec:periodic}, respectively. But let us first demonstrate how these results imply relevant corollaries. First we consider SFTs defined in terms of allowed rectangular patterns. Let $D=\{1,\ldots, n\}\times \{1,\ldots ,m\}$ for some $m,n\in\ensuremath{\mathbb{N}}$, and let $P\subseteq A^D$ be a set of $D$-patterns over alphabet $A$. Define $X=X_{A^D\setminus P}=\{c\in A^{\ensuremath{\mathbb{Z}}^2}\ |\ \Patt{c}{D}\subseteq P \}$, the set of configurations whose $D$-patterns are among $P$. \begin{corollary} \label{cor:corperiodic} With the notations above, if $|P|\leq nm$ and $X\neq\emptyset$ then $X$ contains a periodic configuration. \end{corollary} \begin{proof} Let $c\in X$ be arbitrary. Then by Theorem~\ref{thm:periodic}, $\overline{{\cal O}(c)}\subseteq X$ contains a periodic configuration. \end{proof} \begin{corollary} \label{cor:cordecidable} With the notations above, there is an algorithm to determine whether $X\neq \emptyset$ for a given $P$ of cardinality $|P|\leq nm$. \end{corollary} \begin{proof} This is a classical argument by H.~Wang~\cite{wang}: there is a semi-algorithm to test if a given SFT is empty, and there is a semi-algorithm to test if a given SFT contains a periodic configuration. Since $X$ is an SFT, we can execute both these semi-algorithms on $X$. By Corollary~\ref{cor:corperiodic}, if $X\neq\emptyset$ then $X$ contains a periodic configuration. Hence, exactly one of these two semi-algorithms will return a positive answer.
\end{proof} \noindent The next corollary solves Nivat's conjecture for uniformly recurrent configurations. \begin{corollary} \label{cor:corminimal} A uniformly recurrent configuration $c$ that has low complexity with respect to a rectangle is periodic. \end{corollary} \begin{proof} Because $c$ has low complexity with respect to a rectangle, by Theorem~\ref{thm:periodic} there is a periodic configuration $c'\in\overline{{\cal O}(c)}$. All elements of $\overline{{\cal O}(c')}$ are periodic. Because $c$ is uniformly recurrent we have $\overline{{\cal O}(c)}=\overline{{\cal O}(c')}$, which implies that all elements of $\overline{{\cal O}(c)}$, including $c$ itself, are periodic. \end{proof} \noindent In Section~\ref{sec:conclusions} we briefly argue that all our results remain true if the $m\times n$ rectangle is replaced by any convex discrete shape. \section{Removing one-sided determinism} \label{sec:main} In this section we prove Theorem~\ref{thm:main} by showing how we can ``remove'' one-sided directions of determinism from subshifts with annihilators. Let $c$ be a configuration over alphabet $A\subseteq\ensuremath{\mathbb{Z}}$ that has a non-trivial annihilator. By Theorem~\ref{th:decompo} it then has an annihilator $\phi_1\cdots\phi_m$ where each $\phi_i$ is of the form \begin{equation} \label{eq:phi} \phi_i=x^{n_i}y^{m_i}-1 \mbox{ for some } \vec{v}_i=(n_i,m_i)\in\ensuremath{\mathbb{Z}}^2. \end{equation} Moreover, the vectors $\vec{v}_i$ can be chosen pairwise linearly independent, that is, in different directions. We may assume $m\geq 1$. Denote $X=\overline{{\cal O}(c)}$, the subshift generated by $c$. A polynomial that annihilates $c$ annihilates all elements of $X$, because they only have local patterns that already appear in $c$.
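The factors of the form (\ref{eq:phi}) have a concrete dynamical meaning that the arguments below use repeatedly: multiplying a configuration by $x^{n}y^{m}-1$ produces the cellwise difference between the translate by $(n,m)$ and the configuration itself, so being annihilated by such a factor is exactly periodicity with vector $(n,m)$. Here is a hedged Python sketch of this (our illustration; the sign convention for the shift is our assumption):

```python
# Applying phi = x^n y^m - 1 to a configuration computes
# (phi c)(t) = c(t + v) - c(t) with v = (n, m); phi annihilates c on a
# window exactly when c is v-periodic there. (Names and the shift
# convention are ours, for illustration only.)

def apply_phi(conf, v, window):
    vx, vy = v
    return {(x, y): conf((x + vx, y + vy)) - conf((x, y)) for (x, y) in window}

diag = lambda t: (t[0] - t[1]) % 3          # a (1,1)-periodic configuration
window = [(x, y) for x in range(4) for y in range(4)]
print(all(d == 0 for d in apply_phi(diag, (1, 1), window).values()))  # True
print(all(d == 0 for d in apply_phi(diag, (1, 0), window).values()))  # False
```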
It is easy to see that $X$ can only be non-deterministic in a direction that is perpendicular to one of the directions $\vec{v}_i$ of the polynomials $\phi_i$: \begin{proposition} \label{prop:detann} Let $c$ be a configuration annihilated by $\phi_1\cdots\phi_m$ where each $\phi_i$ is of the form \emph{(\ref{eq:phi})}. Let $\vec{u}\in\ensuremath{\mathbb{Z}}^2$ be a direction that is not perpendicular to $\vec{v}_i$ for any $i\in\{1,\ldots, m\}$. Then $X=\overline{{\cal O}(c)}$ is deterministic in direction $\vec{u}$. \end{proposition} \begin{proof} Suppose $X$ is not deterministic in direction $\vec{u}$. By definition, there exist $d,e\in X$ such that $d\neq e$ but $d|_{H_{\vec{u}}}=e|_{H_{\vec{u}}}$. Denote $\Delta=d-e$. Because $\Delta\neq 0$ but $\phi_1\cdots\phi_m\cdot \Delta =0$, for some $i$ we have $\phi_1\cdots\phi_{i-1}\cdot \Delta \neq 0$ and $\phi_1\cdots\phi_{i}\cdot \Delta =0$. Denote $\Delta'=\phi_1\cdots\phi_{i-1}\cdot \Delta$. Because $\phi_i\cdot\Delta'=0$, configuration $\Delta'$ is periodic in direction $\vec{v}_i$. But because $\Delta$ is zero in the half plane ${H_{\vec{u}}}$, also $\Delta'$ is zero in some translate $H'={H_{\vec{u}}}-\vec{t}$ of the half plane. Since the periodicity vector $\vec{v}_i$ of $\Delta'$ is not perpendicular to $\vec{u}$, the periodicity transmits the values 0 from the region $H'$ to the entire $\ensuremath{\mathbb{Z}}^2$. Hence $\Delta'=0$, a contradiction. \end{proof} Let $\vec{u}\in\ensuremath{\mathbb{Z}}^2$ be a one-sided direction of determinism of $X$. In other words, $\vec{u}$ is a direction of determinism but $-\vec{u}$ is not. By the proposition above, $\vec{u}$ is perpendicular to some $\vec{v}_i$. Without loss of generality, we may assume $i=1$. We denote $\phi=\phi_1$ and $\vec{v}=\vec{v}_1$. 
Let $k$ be such that the contents of the discrete box $B=B_{\vec{u}}^{k}$ determine the content of cell $\vec{0}$, that is, for $d,e\in X$ \begin{equation} \label{eq:detshape} d|_B=e|_B \Longrightarrow d_\vec{0}=e_\vec{0}. \end{equation} As pointed out in Section~\ref{sec:dynamical}, any sufficiently large $k$ can be used. We can choose $k$ so that $k>|\inner{\vec{u}^\bot}{\vec{v}}|$. To shorten notation, let us also denote $H=H_{-\vec{u}}$. \begin{lemma} \label{lem:apulemma} For any $d,e\in X$ such that $\phi d=\phi e$ holds: $$ d|_B=e|_B \Longrightarrow d|_H=e|_H. $$ \end{lemma} \begin{proof} Let $d,e\in X$ be such that $\phi d=\phi e$ and $d|_B=e|_B$. Denote $\Delta=d-e$. Then $\phi\Delta=0$ and $\Delta|_B=0$. The property $\phi\Delta=0$ means that $\Delta$ has periodicity vector $\vec{v}$, so this periodicity transmits the values 0 from the region $B$ to the strip $$ S=\bigcup_{i\in\ensuremath{\mathbb{Z}}} (B+i\vec{v}) = \{\vec{x}\in\ensuremath{\mathbb{Z}}^2\ |\ -k< \inner{\vec{x}}{\vec{u}} < 0\}. $$ See Figure~\ref{fig:regions} for an illustration of the regions $H$, $B$ and $S$. As $\Delta|_S=0$, we have that $d|_S=e|_S$. Applying (\ref{eq:detshape}) to suitable translates of $d$ and $e$ allows us to conclude that $d|_H=e|_H$. \end{proof}
\begin{figure}[ht]
\centering
\includegraphics[width=5.5cm]{stripe.pdf}
\caption{Discrete regions $H=H_{-\vec{u}}$, $B=B_{\vec{u}}^{k}$ and $S$ in the proof of Lemma~\ref{lem:apulemma}. In the illustration $\vec{u}=(-1,2)$ and $k=10$.}
\label{fig:regions}
\end{figure}
A reason to prove the lemma above is the following corollary, stating that $X$ can only contain a bounded number of configurations that have the same product with $\phi$: \begin{corollary} \label{cor:bounded} Let $c_1,\ldots ,c_n\in X$ be pairwise distinct. If $\phi c_1=\cdots =\phi c_n$ then $n\leq |A|^{|B|}$.
\end{corollary} \begin{proof} Let $H'=H-\vec{t}$, for $\vec{t}\in\ensuremath{\mathbb{Z}}^2$, be a translate of the half plane $H=H_{-\vec{u}}$ such that $c_1,\ldots ,c_n$ are pairwise different in $H'$. Consider the translated configurations $d_i=\tau^{\vec{t}}(c_i)$. We have that $d_i\in X$ are pairwise different in $H$ and $\phi d_1=\cdots =\phi d_n$. By Lemma~\ref{lem:apulemma}, configurations $d_i$ must be pairwise different in domain $B$. There are only $|A|^{|B|}$ different patterns in domain $B$. \end{proof} Let $c_1,\ldots ,c_n\in X$ be pairwise distinct such that $\phi c_1=\cdots =\phi c_n$, with $n$ as large as possible. By Corollary~\ref{cor:bounded} such configurations exist. Let us repeatedly translate the configurations $c_i$ by $\tau^{\vec{u}}$ and take a limit: by compactness there exists $n_1<n_2<n_3\ldots$ such that $$ d_i=\lim_{j\rightarrow \infty} \tau^{n_j\vec{u}}(c_i) $$ exists for all $i\in\{1,\ldots, n\}$. Configurations $d_i\in X$ inherit the following properties from $c_i$: \begin{lemma} \label{lem:dlemma} Let $d_1,\ldots ,d_n$ be defined as above. Then \begin{enumerate} \item[(a)] $\phi d_1=\cdots =\phi d_n$, and \item[(b)] Configurations $d_i$ are pairwise different in translated discrete boxes $B'=B-\vec{t}$ for all $\vec{t}\in\ensuremath{\mathbb{Z}}^2$. \end{enumerate} \end{lemma} \begin{proof} Let $i_1,i_2\in\{1,\ldots ,n\}$ be arbitrary, $i_1\neq i_2$. 
\medskip \noindent (a) Because $\phi c_{i_1} = \phi c_{i_2}$ we have, for any $n\in\ensuremath{\mathbb{N}}$, $$\phi \tau^{n\vec{u}}(c_{i_1}) = \tau^{n\vec{u}}(\phi c_{i_1}) = \tau^{n\vec{u}}(\phi c_{i_2}) = \phi \tau^{n\vec{u}}(c_{i_2}).$$ The function $c\mapsto \phi c$ is continuous in our topology, so $$\phi d_{i_1} =\phi \lim_{j\rightarrow \infty} \tau^{n_j\vec{u}}(c_{i_1}) = \lim_{j\rightarrow \infty} \phi\tau^{n_j\vec{u}}(c_{i_1}) = \lim_{j\rightarrow \infty} \phi\tau^{n_j\vec{u}}(c_{i_2}) = \phi \lim_{j\rightarrow \infty} \tau^{n_j\vec{u}}(c_{i_2}) = \phi d_{i_2}.$$ \medskip \noindent (b) Let $B'=B-\vec{t}$ for some $\vec{t}\in\ensuremath{\mathbb{Z}}^2$. Suppose $d_{i_1}|_{B'} = d_{i_2}|_{B'}$. By the definition of convergence, for all sufficiently large $j$ we have $\tau^{n_j\vec{u}}(c_{i_1})|_{B'} = \tau^{n_j\vec{u}}(c_{i_2})|_{B'}$. This is equivalent to $\tau^{n_j\vec{u}+\vec{t}}(c_{i_1})|_{B} = \tau^{n_j\vec{u}+\vec{t}}(c_{i_2})|_{B}$. By Lemma~\ref{lem:apulemma} then also $\tau^{n_j\vec{u}+\vec{t}}(c_{i_1})|_{H} = \tau^{n_j\vec{u}+\vec{t}}(c_{i_2})|_{H}$ where $H=H_{-\vec{u}}$. This means that for all sufficiently large $j$ the configurations $c_{i_1}$ and $c_{i_2}$ are identical in the domain $H-n_j\vec{u}-\vec{t}$. But these domains cover the whole $\ensuremath{\mathbb{Z}}^2$ as $j\longrightarrow\infty$, so that $c_{i_1}=c_{i_2}$, a contradiction. \end{proof} Now we pick one of the configurations $d_i$ and consider its orbit closure. Choose $d=d_1$ and set $Y=\overline{{\cal O}(d)}$. Then $Y\subseteq X$. Any direction of determinism in $X$ is also a direction of determinism in $Y$. Indeed, this is trivially true for any subset of $X$. But, in addition, we have the following: \begin{lemma} \label{lem:onesidedremoved} The subshift $Y$ is deterministic in direction $-\vec{u}$. \end{lemma} \begin{proof} Suppose the contrary: there exist configurations $x,y\in Y$ such that $x\neq y$ but $x|_H=y|_H$ where, as usual, $H=H_{-\vec{u}}$.
In the following we construct $n+1$ configurations in $X$ that have the same product with $\phi$, which contradicts the choice of $n$ as the maximum number of such configurations. By the definition of $Y$ all elements of $Y$ are limits of sequences of translates of $d=d_1$, that is, there are translations $\tau_1,\tau_2,\ldots$ such that $x=\lim_{i\rightarrow\infty} \tau_i(d)$, and translations $\sigma_1,\sigma_2,\ldots$ such that $y=\lim_{i\rightarrow\infty} \sigma_i(d)$. Apply the translations $\tau_1,\tau_2,\ldots$ to the configurations $d_1,\ldots ,d_n$, and take jointly converging subsequences: by compactness there are $k_1<k_2<\ldots$ such that $$ e_i=\lim_{j\rightarrow\infty} \tau_{k_j}(d_i) $$ exists for all $i\in\{1,\ldots ,n\}$. Here, clearly, $e_1=x$. \medskip Let us prove that $e_1,\ldots ,e_n$ and $y$ are $n+1$ configurations that (i) have the same product with $\phi$, and (ii) are pairwise distinct. This contradicts the choice of $n$ as the maximum number of such configurations, and thus completes the proof. \medskip \noindent \begin{enumerate}[label=(\roman*)] \item First, $\phi x=\phi y$: Because $x|_H=y|_H$ we have $\phi x|_{H-\vec{t}} = \phi y|_{H-\vec{t}}$ for some $\vec{t}\in\ensuremath{\mathbb{Z}}^2$. Consider $c'=\tau^{\vec{t}}(\phi x-\phi y)$, so that $c'|_H=0$. As $\phi_2\cdots \phi_m$ annihilates $\phi x$ and $\phi y$, it also annihilates $c'$. An application of Proposition~\ref{prop:detann} to configuration $c'$ in place of $c$ shows that $\overline{{\cal O}(c')}$ is deterministic in direction $-\vec{u}$. (Note that $-\vec{u}$ is not perpendicular to $\vec{v}_j$ for any $j\neq 1$, because $\vec{v}_1$ and $\vec{v}_j$ are not parallel and $-\vec{u}$ is perpendicular to $\vec{v}_1$.) Due to the determinism, $c'|_H=0$ implies that $c'=0$, that is, $\phi x=\phi y$. Second, $\phi e_{i_1}=\phi e_{i_2}$ for all $i_1,i_2\in\{1,\ldots ,n\}$: By Lemma~\ref{lem:dlemma} we know that $\phi d_{i_1}=\phi d_{i_2}$.
By continuity of the function $c\mapsto \phi c$ we then have $$ \begin{array}{rc} \phi e_{i_1} = \phi \lim_{j\rightarrow\infty} \tau_{k_j}(d_{i_1}) = \lim_{j\rightarrow\infty} \phi \tau_{k_j}(d_{i_1}) =& \lim_{j\rightarrow\infty} \tau_{k_j}(\phi d_{i_1}) \\ &\mbox{\rotatebox{90}{$\,=$}}\vspace*{-2mm}\\ \phi e_{i_2} = \phi \lim_{j\rightarrow\infty} \tau_{k_j}(d_{i_2}) = \lim_{j\rightarrow\infty} \phi \tau_{k_j}(d_{i_2}) =& \lim_{j\rightarrow\infty} \tau_{k_j}(\phi d_{i_2}) \end{array} $$ Because $e_1=x$, we have shown that $e_1,\ldots ,e_n$ and $y$ all have the same product with $\phi$. \item Pairwise distinctness: First, $y$ and $e_1=x$ are distinct by the initial choice of $x$ and $y$. Next, let $i_1,i_2\in\{1,\ldots ,n\}$ be such that $i_1\neq i_2$. Let $\vec{t}\in \ensuremath{\mathbb{Z}}^2$ be arbitrary and consider the translated discrete box $B'=B-\vec{t}$. By Lemma~\ref{lem:dlemma}(b) we have $\tau_{k_j}(d_{i_1})|_{B'}\neq \tau_{k_j}(d_{i_2})|_{B'}$ for all $j\in \ensuremath{\mathbb{N}}$, so taking the limit as $j\longrightarrow\infty$ gives $e_{i_1}|_{B'}\neq e_{i_2}|_{B'}$. This proves that $e_{i_1}\neq e_{i_2}$. Moreover, by taking $\vec{t}$ such that $B'\subseteq H$ we see that $y|_{B'}=x|_{B'}=e_1|_{B'}\neq e_i|_{B'}$ for $i\geq 2$, so that $y$ is also distinct from all $e_i$ with $i\geq 2$. \end{enumerate} \vspace{-1em} \end{proof} \noindent The following proposition captures the result established above. \begin{proposition} \label{prop:main} Let $c$ be a configuration with a non-trivial annihilator. If $\vec{u}$ is a one-sided direction of determinism in $\overline{{\cal O}(c)}$ then there is a configuration $d\in \overline{{\cal O}(c)}$ such that $\vec{u}$ is a two-sided direction of determinism in $\overline{{\cal O}(d)}$. \qed \end{proposition} \noindent Now we are ready to prove Theorem~\ref{thm:main}. \begin{proof}[Proof of Theorem~\ref{thm:main}] Let $c$ be a two-dimensional configuration that has a non-trivial annihilator. 
Every non-empty subshift contains a minimal subshift~\cite{birkhoff}, and hence there is a uniformly recurrent configuration $c'\in\overline{{\cal O}(c)}$. If $\overline{{\cal O}(c')}$ has a one-sided direction of determinism $\vec{u}$, we can apply Proposition~\ref{prop:main} to $c'$ and find $d\in\overline{{\cal O}(c')}$ such that $\vec{u}$ is a two-sided direction of determinism in $\overline{{\cal O}(d)}$. But because $c'$ is uniformly recurrent, $\overline{{\cal O}(d)}=\overline{{\cal O}(c')}$, a contradiction. \end{proof} \section{Periodicity in low complexity subshifts} \label{sec:periodic} In this section we prove Theorem~\ref{thm:periodic}. Every non-empty subshift contains a uniformly recurrent configuration, so we can safely assume that $c$ is uniformly recurrent. Our proof of Theorem~\ref{thm:periodic} splits into two cases based on Theorem~\ref{thm:main}: either $\overline{{\cal O}(c)}$ is deterministic in all directions or for some $\vec{u}$ it is non-deterministic in both directions $\vec{u}$ and $-\vec{u}$. The first case is handled by the following well-known corollary of a theorem of Boyle and Lind~\cite{Boyle_Lind}: \begin{proposition} \label{prop:case1} A configuration $c$ is two-periodic if and only if $\overline{{\cal O}(c)}$ is deterministic in all directions.\qed \end{proposition} For the second case we apply the technique by Cyr and Kra~\cite{cyrkra}. This technique was also used in~\cite{szabados} to address Nivat's conjecture. The result that we read from~\cite{cyrkra,szabados}, although it is not explicitly stated in this form, is the following: \begin{proposition} \label{prop:case2} Let $c$ be a two-dimensional uniformly recurrent configuration that has low complexity with respect to a rectangle. If for some $\vec{u}$ both $\vec{u}$ and $-\vec{u}$ are directions of non-determinism in $\overline{{\cal O}(c)}$ then $c$ is periodic in a direction perpendicular to $\vec{u}$.
\end{proposition} Let us prove this proposition using lemmas from~\cite{szabados}. We first recall some definitions, adjusted to our terminology. Let $D\subseteq \ensuremath{\mathbb{Z}}^2$ be non-empty and let $\vec{u}\in\ensuremath{\mathbb{Z}}^2\setminus\{\vec{0}\}$. The \emph{edge} $E_{\vec{u}}(D)$ of $D$ in direction $\vec{u}$ consists of the cells in $D$ that are furthest in the direction $\vec{u}$: $$ E_{\vec{u}}(D) = \{ \vec{v}\in D\ |\ \forall \vec{x}\in D\ \inner{\vec{x}}{\vec{u}} \leq \inner{\vec{v}}{\vec{u}} \}. $$ We call $D$ \emph{convex} if $D=C\cap\ensuremath{\mathbb{Z}}^2$ for a convex subset $C\subseteq\ensuremath{\mathbb{R}}^2$ of the real plane. For $D,E\subseteq \ensuremath{\mathbb{Z}}^2$ we say that $D$ \emph{fits} in $E$ if $D+\vec{t}\subseteq E$ for some $\vec{t}\in\ensuremath{\mathbb{Z}}^2$. A (closed) \emph{stripe} of width $k$ perpendicular to $\vec{u}$ is the set $$ S_{\vec{u}}^k = \{\vec{x}\in\ensuremath{\mathbb{Z}}^2\ |\ -k< \inner{\vec{x}}{\vec{u}} \leq 0\}. $$ Consider the stripe $S=S_{\vec{u}}^k$. Clearly its edge $E_{\vec{u}}(S)$ in direction $\vec{u}$ is the discrete line $\ensuremath{\mathbb{Z}}^2\cap L$ where $L\subseteq\ensuremath{\mathbb{R}}^2$ is the real line through $\vec{0}$ that is perpendicular to $\vec{u}$. The \emph{interior} $S^\circ$ of $S$ is $S\setminus E_{\vec{u}}(S)$, that is, $S^\circ = \{\vec{x}\in\ensuremath{\mathbb{Z}}^2\ |\ -k< \inner{\vec{x}}{\vec{u}} < 0\}$. A central concept from~\cite{cyrkra,szabados} is the following. Let $c$ be a configuration and let $\vec{u}\in\ensuremath{\mathbb{Z}}^2\setminus\{\vec{0}\}$ be a direction. Recall that $\Patt{c}{D}$ denotes the set of $D$-patterns that $c$ contains. 
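For a finite set the edge is straightforward to compute; the helper below (our illustrative code, not from~\cite{szabados}) collects the cells maximizing the inner product with $\vec{u}$:

```python
# Edge E_u(D) of a finite set D in direction u: the cells of D whose inner
# product with u is maximal. (Our illustrative helper.)

def edge(D, u):
    m = max(x * u[0] + y * u[1] for (x, y) in D)
    return {(x, y) for (x, y) in D if x * u[0] + y * u[1] == m}

D = {(x, y) for x in range(3) for y in range(2)}  # a 3x2 rectangle
print(sorted(edge(D, (1, 0))))   # right column: [(2, 0), (2, 1)]
print(sorted(edge(D, (0, -1))))  # bottom row: [(0, 0), (1, 0), (2, 0)]
```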
A finite discrete convex set $D\subseteq \ensuremath{\mathbb{Z}}^2$ is called \emph{$\vec{u}$-balanced in $c$} if the following three conditions are satisfied, where we denote $E=E_{\vec{u}}(D)$ for the edge of $D$ in direction $\vec{u}$: \begin{enumerate} \item[(i)] $|\Patt{c}{D}|\leq |D|$, \item[(ii)] $|\Patt{c}{D}| < |\Patt{c}{D\setminus E}|+|E|$, and \item[(iii)] $|D\cap L|\geq |E|-1$ for every line $L$ perpendicular to $\vec{u}$ such that $D\cap L\neq\emptyset$. \end{enumerate} The first condition states that $c$ has low complexity with respect to shape $D$. The second condition implies that there are fewer than $|E|$ different $(D\setminus E)$-patterns in $c$ that can be extended in more than one way into a $D$-pattern of $c$. The last condition states that the edge $E$ is nearly the shortest among the parallel cuts across $D$. \begin{lemma}[Lemma 2 in~\cite{szabados}] \label{lem:Michal2} Let $c$ be a two-dimensional configuration that has low complexity with respect to a rectangle, and let $\vec{u}\in\ensuremath{\mathbb{Z}}^2\setminus\{\vec{0}\}$. Then $c$ admits a $\vec{u}$-balanced or a $(-\vec{u})$-balanced set $D \subseteq \ensuremath{\mathbb{Z}}^2$. \end{lemma} A crucial observation in~\cite{cyrkra} connects balanced sets and non-determinism to periodicity. This leads to the following statement. \begin{lemma}[Lemma 4 in~\cite{szabados}] \label{lem:Michal4} Let $d$ be a two-dimensional configuration and let $\vec{u}\in\ensuremath{\mathbb{Z}}^2\setminus\{\vec{0}\}$ be such that $d$ admits a $\vec{u}$-balanced set $D \subseteq \ensuremath{\mathbb{Z}}^2$. Assume there is a configuration $e\in\overline{{\cal O}(d)}$ and a stripe $S=S_{\vec{u}}^k$ perpendicular to $\vec{u}$ such that $D$ fits in $S$ and $d|_{S^\circ}=e|_{S^\circ}$ but $d|_{S}\neq e|_{S}$. Then $d$ is periodic in a direction perpendicular to $\vec{u}$. \end{lemma} With these we can prove Proposition~\ref{prop:case2}.
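Before turning to the proof, the balanced-set conditions can be made concrete by brute force. In the sketch below (our code; pattern sets are approximated over a finite sample of translates, and condition (ii) is implemented in the form $|\Patt{c}{D}| < |\Patt{c}{D\setminus E}|+|E|$, matching the interpretation that fewer than $|E|$ of the $(D\setminus E)$-patterns extend in more than one way):

```python
# Brute-force check of the three u-balancedness conditions on a sampled
# configuration. (All names are ours; pattern sets are approximated by
# restricting to the given finite sample of translates.)

def patterns(conf, shape, translates):
    """Distinct restrictions of conf to translated copies of `shape`."""
    return {tuple(conf((x + tx, y + ty)) for (x, y) in shape)
            for (tx, ty) in translates}

def is_u_balanced(conf, D, u, translates):
    m = max(x * u[0] + y * u[1] for (x, y) in D)
    E = [p for p in D if p[0] * u[0] + p[1] * u[1] == m]   # edge of D
    rest = [p for p in D if p not in E]
    PD = patterns(conf, D, translates)
    Prest = patterns(conf, rest, translates)
    cuts = {x * u[0] + y * u[1] for (x, y) in D}           # lines across D
    return (len(PD) <= len(D)                                       # (i)
            and len(PD) < len(Prest) + len(E)                       # (ii)
            and all(sum(1 for (x, y) in D
                        if x * u[0] + y * u[1] == c) >= len(E) - 1
                    for c in cuts))                                 # (iii)

stripes = lambda t: t[0] % 2                 # vertical stripes, low complexity
D = [(0, 0), (1, 0), (0, 1), (1, 1)]         # a 2x2 square
translates = [(tx, ty) for tx in range(4) for ty in range(4)]
print(is_u_balanced(stripes, D, (1, 0), translates))  # True
```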
\begin{proof}[Proof of Proposition~\ref{prop:case2}] Let $c$ be a two-dimensional uniformly recurrent configuration that has low complexity with respect to a rectangle. Let $\vec{u}$ be such that both $\vec{u}$ and $-\vec{u}$ are directions of non-determinism in $\overline{{\cal O}(c)}$. By Lemma~\ref{lem:Michal2} configuration $c$ admits a $\vec{u}$-balanced or a $(-\vec{u})$-balanced set $D \subseteq \ensuremath{\mathbb{Z}}^2$. Without loss of generality, assume that $D$ is $\vec{u}$-balanced in $c$. As $\overline{{\cal O}(c)}$ is non-deterministic in direction $\vec{u}$, there are configurations $d,e\in \overline{{\cal O}(c)}$ such that $d|_{H_{\vec{u}}}=e|_{H_{\vec{u}}}$ but $d_{\vec{0}}\neq e_{\vec{0}}$. Because $c$ is uniformly recurrent, exactly the same finite patterns appear in $d$ as in $c$. This means that $D$ is $\vec{u}$-balanced also in $d$. From the uniform recurrence of $c$ we also get that $e\in \overline{{\cal O}(d)}$. Pick any $k$ large enough so that $D$ fits in the stripe $S=S_{\vec{u}}^k$. Because $S^\circ\subseteq H_{\vec{u}}$ while $\vec{0}\in S$, the configurations $d$ and $e$ agree on $S^\circ$ but differ on $S$, so the conditions in Lemma~\ref{lem:Michal4} are met. By the lemma, configuration $d$ is $\vec{p}$-periodic for some $\vec{p}$ that is perpendicular to $\vec{u}$. Because $d$ has the same finite patterns as $c$, it follows that $c$ cannot contain a pattern that breaks the period $\vec{p}$. So $c$ is also $\vec{p}$-periodic. \end{proof} Now Theorem~\ref{thm:periodic} follows from Propositions~\ref{prop:case1} and \ref{prop:case2}, using Theorem~\ref{thm:main} and the fact that every non-empty subshift contains a uniformly recurrent configuration. \begin{proof}[Proof of Theorem~\ref{thm:periodic}] Let $c$ be a two-dimensional configuration that has low complexity with respect to a rectangle. Replacing $c$ by a uniformly recurrent element of $\overline{{\cal O}(c)}$, we may assume that $c$ is uniformly recurrent. Since $c$ is a low-complexity configuration, by Lemma~\ref{th:low_complexity} it has a non-trivial annihilator.
By Theorem~\ref{thm:main} there exists $c'\in \overline{{\cal O}(c)}$ such that $\overline{{\cal O}(c')}$ has no direction of one-sided determinism. If all directions are deterministic in $\overline{{\cal O}(c')}$, it follows from Proposition~\ref{prop:case1} that $c'$ is two-periodic. Otherwise there is a direction $\vec{u}$ such that both $\vec{u}$ and $-\vec{u}$ are directions of non-determinism in $\overline{{\cal O}(c')}$. Now it follows from Proposition~\ref{prop:case2} that $c'$ is periodic. \end{proof} \section{Conclusions} \label{sec:conclusions} We have demonstrated how the low local complexity assumption enforces global regularities in the admitted configurations, yielding algorithmic decidability results. The results were proved in full detail for low complexity configurations with respect to an arbitrary rectangle. The reader can easily verify that the fact that the considered shape is a rectangle is not used in any of the proofs presented here, and the only quoted result that uses this fact is Lemma~\ref{lem:Michal2}. A minor modification of the proof of Lemma~\ref{lem:Michal2} presented in~\cite{szabados} shows that the lemma remains true for any two-dimensional configuration that has low complexity with respect to any convex shape. We conclude that all our results also remain true if we use any convex discrete shape in place of a rectangle. If the considered shape is not convex the situation becomes more difficult. Theorem~\ref{thm:periodic} is not true for an arbitrary shape in place of the rectangle, but all counterexamples we know are based on periodic sublattices~\cite{cassaigne,karimoutot}. For example, the even lattice cells may form a configuration that is horizontally but not vertically periodic, while the odd cells may have a vertical but no horizontal period. Such a non-periodic configuration may be uniformly recurrent and have low complexity with respect to a scattered shape $D$ that only sees cells of equal parity.
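The sublattice construction mentioned above is easy to realize concretely. The toy Python sketch below is our own construction following the idea in the text, using an arbitrary non-periodic 0--1 sequence: the even sublattice is horizontally periodic, the odd sublattice is vertically periodic, and the combined configuration has no period.

```python
# Even cells (x + y even) copy a non-periodic bit sequence indexed by y, so
# they are (2,0)-periodic but not vertically periodic; odd cells copy the
# sequence indexed by x, giving period (0,2) but no horizontal period.
# (Our toy construction, for illustration only.)

def square_bits(n):
    squares = {i * i for i in range(n)}      # 1 at perfect squares: non-periodic
    return [1 if i in squares else 0 for i in range(n)]

def conf(x, y, bits):
    return bits[y] if (x + y) % 2 == 0 else bits[x]

bits = square_bits(20)
even = [(x, y) for x in range(10) for y in range(10) if (x + y) % 2 == 0]
print(all(conf(x, y, bits) == conf(x + 2, y, bits) for (x, y) in even))  # True
allc = [(x, y) for x in range(10) for y in range(10)]
print(all(conf(x, y, bits) == conf(x + 2, y, bits) for (x, y) in allc))  # False
```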
It remains an interesting direction of future study to determine if a sublattice structure is the only way to contradict Theorem~\ref{thm:periodic} for arbitrary shapes. We conjecture that Corollaries~\ref{cor:corperiodic} and \ref{cor:cordecidable} hold for arbitrary shapes, that is, that there does not exist a two-dimensional low complexity aperiodic SFT. A special case of this is the recently solved periodic cluster tiling problem~\cite{bhattacharya,szegedy}. \bibliographystyle{abbrv}
\section{Introduction} \label{Introduction} It is well known that the magnetic field plays a key role in almost all solar activities, such as solar flares, filament eruptions, and coronal mass ejections. {Many structures of the solar corona are shaped by the magnetic field, due to its pervasive nature.} Therefore, a thorough knowledge of the coronal magnetic field topology will help us to understand the physical processes behind various solar activities. {However, so far the routine measurement of the solar magnetic field is mainly based on the Zeeman and Hanle effects, which can measure the stronger emissions and sharper lines of the photosphere but fail for the coronal magnetic field because of its faint line intensities and broad line widths.} Although some techniques using infrared and radio observations have been proposed to solve this problem (Lin et al., 2004; Gary and Hurford, 1994), they have not yet reached full fruition and have many limitations. Normally, one has to obtain the solar coronal magnetic field from modeling, by extrapolation from the underlying photospheric observations. At present, the non-linear force-free field (NLFFF) model is thought to be a good approximation to the actual physical state of the coronal magnetic field. Available NLFFF extrapolation methods can be classified into five types: (1) Upward integration method: Nakagawa, 1974; Wu et al., 1990; Song et al., 2006; (2) Grad-Rubin method: Grad and Rubin, 1958; Sakurai, 1981; Amari et al., 1999, 2006; Wheatland, 2006; (3) MHD relaxation method: Chodura and Schlueter, 1981; Yang et al., 1986; Mikic and McClymont, 1994; Roumeliotis, 1996; Valori et al., 2005, 2007; Jiang et al., 2011; Jiang and Feng, 2012; (4) Optimization approach: Wheatland et al., 2000; Wiegelmann, 2004, 2007; Inhester and Wiegelmann, 2006; Wiegelmann and Neukirch, 2006; (5) Boundary integral equation method: Yan and Sakurai, 1997, 2000; Yan and Li, 2006; He and Wang, 2008; He et al., 2011.
As a stand-alone method, the boundary integral equation (BIE) method allows us to evaluate the NLFFF at arbitrary points within the domain from the boundary data alone, without having to solve for the field in the entire domain. Moreover, {because the BIE model takes the asymptotic condition into account consistently,} it allows us to use only the bottom boundary data as the boundary condition. This matches the current observational situation and avoids arbitrarily prescribed lateral and top boundary data. The BIE method for NLFFF was first proposed by Yan and Sakurai (1997, 2000), and many applications of BIE to solar events have been carried out (e.g., Yan and Sakurai, 1997; Yan et al., 2001; Liu et al., 2002; Yan, 2003). Later, a direct boundary integral equation (DBIE) method was proposed as an improvement of the BIE method, in which an optimization technique is applied to approximate the non-linear force-free field numerically at any position. Compared with BIE, the complicated volume integrations in Equations (17) and (19) of Yan and Li (2006) are avoided. A series of test cases and practical applications (Yan and Li, 2006; Liu et al., 2011, 2012; He and Wang, 2008; He et al., 2011) have demonstrated the reliability and feasibility of DBIE. Recently, with the launch of the Solar Dynamics Observatory (SDO), {the Helioseismic and Magnetic Imager (HMI; Schou et al., 2012) can provide vector magnetograms that can be used as high-quality boundary data for coronal magnetic field reconstructions, and the Atmospheric Imaging Assembly (AIA; Lemen et al., 2011) can provide high-resolution images of coronal structures for the evaluation of any modeling technique.} Therefore, it is necessary to apply the DBIE method to real data, using high-resolution boundary data, as a validation.
The previous BIE method was thought to be slow when applied over an entire 3D domain (Schrijver et al., 2006; Wiegelmann, 2008), as no parallel algorithm had been implemented, even though the BIE technique itself is well suited to parallel computation. In order to solve this problem, we implemented a Graphics Processing Unit (GPU) technique in our program to accelerate the computation. The results show that this approach is effective and suitable. NOAA 11158 was the first active region that produced an X-class event in the current 24th solar cycle. An X2.2 flare occurred on 2011 February 15 at 01:44 UT. Many studies have been carried out on this event, such as work on the evolution of the magnetic field (Sun et al., 2012), research focusing on solar features (Schrijver et al., 2011), extrapolations based on the HMI vector magnetogram (Wiegelmann et al., 2012), the evolution of relative magnetic helicity and current helicity (Jing et al., 2012), the non-potentiality of the active region (Song et al., 2013), and work on the rotating sunspots of this region (Vemareddy et al., 2012). Although most of these studies made use of extrapolation methods, none of them presented a 3-D view of the reconstructed coronal magnetic fields of this active region. The twin STEREO/A(head) and B(ehind) spacecraft (Kaiser et al., 2008) observe the Sun from multiple viewpoints, which provides a good opportunity for a comprehensive comparison so that the physical processes in the corona can be understood correctly. {It should be mentioned that Su and van Ballegooijen (2012) compared an NLFFF model with bright EUV features on the two sides of a solar polar crown prominence that erupted on December 6, 2010, observed by STEREO B and AIA, since the channel was on the backside of the Sun in STEREO A observations. DeRosa et al. (2009) compared other NLFFF models with observations, including STEREO A/B data, for AR 10953 on April 30, 2007, but no comparison with STEREO images was shown.
Sandman and Aschwanden (2011) proposed a forward-fitting method with the stereoscopically reconstructed STEREO loops as known conditions. } In this work, we apply the DBIE method to active region NOAA 11158 with the {HMI} vector magnetogram taken on February 14 at 20:12 UT as the boundary condition, in order to understand the three-dimensional magnetic configuration before the X2.2 flare event. We will present the reconstructed topology of the magnetic field in the whole research region and the electric current distribution in the central region. We then compare them with observations from both the front view (SDO/AIA) and the side views (STEREO A/B). This paper is arranged as follows. Section 2 briefly introduces the DBIE method and the GPU technique. Section 3 describes the observations and Section 4 presents the reconstructed results. Finally, in Section 5 we draw our conclusions. \section{Methods} \label{method} \subsection{Principle of DBIE} \label{principle} As an improvement of the BIE method, the DBIE method must also satisfy the force-free and divergence-free conditions (Yan and Li, 2006): \begin{eqnarray} & \nabla \times \textit{\textbf{B}}=\alpha\textit{\textbf{B}} \label{forcef} \\ & \nabla\cdot\textit{\textbf{B}}=0 \label{divf} \end{eqnarray} with the boundary condition on the $z=0$ magnetogram region (outside this magnetogram region a vanishing field is assumed): \begin{equation} \textit{\textbf{B}}=\textbf{B}_0 \label{bnd} \end{equation} At infinity, an asymptotic constraint should be imposed to ensure a finite energy content in the semispace above the Sun, \begin{equation} \textit{\textbf{B}}=\textit{\textbf{O}}(R^{-2}) \quad \mathrm{when} \quad R \longrightarrow \infty \end{equation} where $R$ is the radial distance. A reference function $Y$ is introduced in this method, \begin{equation} Y=\frac{\cos(\lambda\rho)}{4\pi\rho}-\frac{\cos(\lambda{\rho}')}{4\pi{\rho}'}\label{Y} \end{equation} where $\lambda$ is a pseudo-force-free factor depending on the location of point $i$ only.
$\rho={[(x-x_i)^2+(y-y_i)^2+(z-z_i)^2]}^{1/2}$ is the distance between a variable point $(x, y, z)$ and a fixed point $(x_i, y_i, z_i)$, and ${\rho}'={[(x-x_i)^2+(y-y_i)^2+(z+z_i)^2]}^{1/2}$. Combining the force-free, divergence-free, boundary, and asymptotic conditions, one obtains the direct boundary integral formulation (Yan and Li, 2006) \begin{equation}\label{DBIE} \textit{B}_p(x_i,y_i,z_i)=\int_{\Gamma}\frac{z_i[{\lambda_{pi}}r\sin({\lambda_{pi}}r)+\cos({\lambda_{pi}}r)]\textit{B}_{p0}(x,y,0)}{2\pi[(x-x_i)^2+(y-y_i)^2+z_i^2]^{3/2}}dxdy \end{equation} where $r=[(x-x_i)^2+(y-y_i)^2+{z_i}^2]^{1/2}$ and $p=x$, $y$, or $z$. $\textit{{B}}_{p0}$ is the magnetic field on the photospheric surface, and $\lambda_{pi}=\lambda_p(x_i,y_i,z_i)$ takes the place of $\lambda$ in Equation~(\ref{Y}); it is in principle governed implicitly by the following expression: \begin{equation}\label{lamd} {\lambda_{pi}}^2=\frac{\int_\Omega Y(x, y, z; x_i, y_i, z_i, \lambda_{pi})[\alpha^2B_p+(\nabla\alpha\times \textit{\textbf{B}})_p]dxdydz}{\int_\Omega Y(x, y, z; x_i, y_i, z_i, \lambda_{pi})B_pdxdydz} \end{equation} Here $\lambda$ (denoting the $\lambda_{pi}$'s for short) has the same dimension as the force-free factor $\alpha$ and is therefore called the pseudo-force-free factor. From Equation~(\ref{DBIE}), we can obtain the magnetic field $\textit{\textbf{B}}$ if $\lambda$ is known. {A previous study of the property of the $\lambda$ distribution, obtained by substituting the Low and Lou (1990) solution into a rigorous expression similar to Equation~(\ref{lamd}), was done for the BIE method (Li et al., 2004). It was found that the $\lambda$ values that satisfy the condition at a given point are not unique. However, this non-uniqueness of the $\lambda$ solutions does not influence the computation of the field at that location, as demonstrated by numerical results. Obviously, it is not practical to determine $\lambda$ from such an implicit expression as~(\ref{lamd}).
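For a given trial value of $\lambda_{pi}$, on the other hand, the surface integral in Equation~(\ref{DBIE}) can be evaluated directly by quadrature over the magnetogram. The following is a minimal Python/numpy sketch, not the authors' actual code; the grid, field array, and function names are illustrative:

```python
import numpy as np

def dbie_component(Bp0, x, y, xi, yi, zi, lam):
    """Rectangle-rule quadrature of one component of Eq. (6):
    B_p(i) = int z_i [lam*r*sin(lam*r) + cos(lam*r)] B_p0(x, y, 0)
             / (2*pi*r^3) dx dy,  r^2 = (x-xi)^2 + (y-yi)^2 + zi^2.
    Bp0 is the (nx, ny) boundary magnetogram component; zi > 0."""
    dx, dy = x[1] - x[0], y[1] - y[0]
    X, Y = np.meshgrid(x, y, indexing="ij")
    r = np.sqrt((X - xi) ** 2 + (Y - yi) ** 2 + zi ** 2)
    kernel = zi * (lam * r * np.sin(lam * r) + np.cos(lam * r)) \
        / (2.0 * np.pi * r ** 3)
    return float(np.sum(kernel * Bp0)) * dx * dy
```

With $\lambda=0$ the kernel reduces to the potential-field (Poisson) kernel $z_i/(2\pi r^3)$, so for a uniform boundary field and a field point close to the boundary the integral returns approximately the boundary value, which gives a quick sanity check of the quadrature.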
Yan and Li (2006) suggested using the Downhill Simplex method (Nelder and Mead, 1965) to find a suitable $\lambda$ from a nonlinear programming problem. In this way $\lambda$ is not obtained from Equation~(\ref{lamd}) exactly; instead we look for a numerical solution of the magnetic field. This is calculated from the given boundary condition (3) together with the assumed asymptotic condition (4) and} satisfies the original force-free (1) and divergence-free (2) conditions approximately. The two stopping criteria of the procedure for the approximation of conditions (1) and (2) are as follows: \begin{equation}\label{fi} f_i(\lambda_{xi},\lambda_{yi},\lambda_{zi})=\frac{|\textit{\textbf{J}}\times\textit{\textbf{B}}|}{|\textit{\textbf{J}}||\textit{\textbf{B}}|},\quad \mathrm{with} \quad\textit{\textbf{J}}=\nabla\times\textit{\textbf{B}} \end{equation} \begin{equation}\label{gi} g_i(\lambda_{xi},\lambda_{yi},\lambda_{zi})=\frac{|\delta\textit{\textbf{B}}_i|}{|\textit{\textbf{B}}_i|}=\frac{|\nabla\cdot\textit{\textbf{B}}|\Delta V_i}{|\textit{\textbf{B}}|\Delta\sigma_i}, \end{equation} and \begin{equation} f_i({\lambda}{^*_{xi}},\lambda{^*_{yi}},\lambda{^*_{zi}})=\min\{f_i(\lambda_{xi},\lambda_{yi},\lambda_{zi})\} \end{equation} \begin{equation} g_i({\lambda}{^*_{xi}},\lambda{^*_{yi}},\lambda{^*_{zi}})=\min\{g_i(\lambda_{xi},\lambda_{yi},\lambda_{zi})\} \end{equation} The constraints are set as follows: \begin{equation}\label{figi} f_i({\lambda}{^*_{xi}},\lambda{^*_{yi}},\lambda{^*_{zi}})\leq\epsilon_f,\quad g_i({\lambda}{^*_{xi}},\lambda{^*_{yi}},\lambda{^*_{zi}})\leq\epsilon_g, \end{equation} where $\epsilon_f$ and $\epsilon_g$ are sufficiently small thresholds. Basically, $f_i(\lambda_{xi},\lambda_{yi},\lambda_{zi})$ is the sine of the angle between $\textit{\textbf{B}}$ and $\textit{\textbf{J}}$, which is used to evaluate the force-freeness. Similarly, $g_i(\lambda_{xi},\lambda_{yi},\lambda_{zi})$ measures the divergence of $\textit{\textbf{B}}$.
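In practice, Yan and Li (2006) search the three parameters $(\lambda_{xi},\lambda_{yi},\lambda_{zi})$ with the Downhill Simplex method. As a stand-in for that search, the sketch below scans a single $\lambda$ over a grid and applies the accept/reject logic of Equation~(\ref{figi}); the toy objectives merely play the role of $f_i$ and $g_i$, which in the real procedure are evaluated from the DBIE field and are not known in closed form:

```python
import numpy as np

EPS_F, EPS_G = 1e-3, 1e-3          # thresholds eps_f, eps_g in Eq. (12)

# Hypothetical stand-ins for f_i and g_i; in the actual method these are
# computed from the DBIE field via Eqs. (8)-(9).
def toy_f(lam):
    return (lam - 0.7) ** 2

def toy_g(lam):
    return 0.5 * (lam - 0.7) ** 2

lam_grid = np.linspace(-2.0, 2.0, 4001)          # brute-force 1-D scan
scores = np.maximum(toy_f(lam_grid), toy_g(lam_grid))
lam_star = lam_grid[np.argmin(scores)]           # best trial lambda
accepted = toy_f(lam_star) <= EPS_F and toy_g(lam_star) <= EPS_G
```

A simplex search replaces the brute-force scan in the real three-parameter problem; the point here is only that $\lambda$ is accepted once both criteria fall below their thresholds, rather than being solved for exactly.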
Since only a brief description of the approximation of $\lambda$ was given in the previous work (Yan and Li, 2006), which may have caused some misunderstanding of our method, we state it here in detail. As noted above, in the numerical procedure (Yan and Li, 2006) we only need to drive the force-freeness and divergence-freeness of the magnetic field, through Equations~(\ref{fi})~and~(\ref{gi}), towards a minimum. The DBIE numerical procedure is possible if the function $f_i$ can be calculated analytically. {In order to evaluate the right-hand sides of (\ref{fi}) and (\ref{gi}), we need to know the space derivatives of $\textit{\textbf{B}}$ from (\ref{DBIE}) and hence of $\lambda$. These derivatives are approximated by first-order finite differences. We evaluate $\lambda$ in the $\delta$-neighbourhood of the point $r_i=(x_i, y_i, z_i)$, where $\delta$ is a very small positive fraction (typically one thousandth of the pixel size). At an arbitrary} point in this small neighborhood it can be expressed as Equation~(\ref{lamdar}), \begin{equation}\label{lamdar} \lambda_p(r)=\lambda_p(r_i)+{\lambda_p}'(\xi)(r-r_i) \end{equation} which follows from the Lagrange mean value theorem, with $r_i<\xi<r$. Since $\delta\ll1$ and $|r-r_i|\le\delta$, the zeroth-order approximation is adopted: in our difference domain, $\lambda_p(r) \approx {\lambda_p}(r_i)$, where ${\lambda_p}(r_i)$ is the value of $\lambda_p(r)$ at the center of the small domain. The value of $\lambda_p$ anywhere in this infinitesimal neighborhood is then known, and the field $\textit{\textbf{B}}$ and the current $\nabla\times\textit{\textbf{B}}$ can be evaluated at the point $i$. This is a practical and rigorous numerical procedure.
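With $\lambda$ frozen in the $\delta$-neighbourhood, the curl needed for Equation~(\ref{fi}) reduces to finite differencing of the field. Below is a minimal numpy sketch of the force-freeness metric $f=|\textit{\textbf{J}}\times\textit{\textbf{B}}|/(|\textit{\textbf{J}}||\textit{\textbf{B}}|)$ on a gridded field; it is illustrative only, not the pointwise DBIE evaluation itself:

```python
import numpy as np

def force_freeness(B, h):
    """f = |J x B| / (|J| |B|) with J = curl B from central differences.
    B has shape (3, nx, ny, nz); h is the (uniform) grid spacing."""
    dBx = np.gradient(B[0], h, edge_order=2)   # [d/dx, d/dy, d/dz]
    dBy = np.gradient(B[1], h, edge_order=2)
    dBz = np.gradient(B[2], h, edge_order=2)
    J = np.stack([dBz[1] - dBy[2],             # (curl B)_x
                  dBx[2] - dBz[0],             # (curl B)_y
                  dBy[0] - dBx[1]])            # (curl B)_z
    cross = np.cross(J, B, axis=0)
    eps = 1e-30                                # guard against division by zero
    return np.linalg.norm(cross, axis=0) / (
        np.linalg.norm(J, axis=0) * np.linalg.norm(B, axis=0) + eps)
```

For the constant-$\alpha$ field $\textit{\textbf{B}}=(\cos\alpha z, -\sin\alpha z, 0)$ the metric is close to zero everywhere, which serves as a quick check of such an implementation.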
However, Rudenko and Myshyakov (2009) wrote that they ``think that this method for solving the extrapolation problem is incorrect'' because they found that Yan and Li (2006) ``unreasonably drop these space derivatives'' of the ${\lambda}$ functions and ``the resulting magnetic field will not be free-force''. These comments are incorrect, as Rudenko and Myshyakov (2009) have confused the DBIE representation of the force-free field solution with the numerical approximation of the force-free field. It should be pointed out that the derivation of the DBIE is mathematically valid and rigorous. The problem is how to obtain a numerical solution with the help of the DBIE. As explained above, the strategy is not to solve Equations~(\ref{DBIE}) and~(\ref{lamd}) exactly but to find a numerical solution that satisfies the constraints (\ref{figi}) {and the boundary and asymptotic conditions (3-4)}. In other words, the original force-free and divergence-free equations (1-2), {together with the boundary and asymptotic conditions (3-4)}, are solved approximately. Therefore, if one can construct numerically the magnetic field distribution, pointwise at any position, such that it satisfies the constraints (\ref{figi}) {with the boundary and asymptotic conditions (3-4)}, one has already obtained a numerical solution that is approximately force-free and divergence-free {with the given boundary conditions}. In the present work, our calculated results will further demonstrate the feasibility and validity of DBIE. At the same time, the current density can be obtained pointwise: \begin{equation}\label{J} \textit{\textbf{J}}=\nabla\times\textit{\textbf{B}} \end{equation} One of the advantages of DBIE is that it is a pointwise method, which can calculate the magnetic field and current density at any point above the photospheric boundary.
{However, it should be noted that a vector magnetogram with all three field components provides more information than is needed to determine a force-free field uniquely. The present DBIE employs the full vector field in the reconstruction. Therefore the boundary data should satisfy compatibility relations in order to be consistent with a force-free corona. Inconsistencies and errors contained in the vector magnetogram data will cause errors in the reconstructed field. Neglecting the boundary field ${\bf B}_{0}$ outside of the magnetogram area also influences the reconstructed field. In practice, the magnetogram data should be truncated where the field approaches zero, consistent with the assumption that ${\bf B}_{0}$ vanishes outside of the magnetogram area. Nevertheless, the net flux of ${\bf B}_{0}$ in Equation (3) over the boundary magnetogram area does not need to be zero, as shown in the derivations of the BIE (Yan and Sakurai, 2000) and DBIE (Yan and Li, 2006). } \subsection{GPU Technique} \label{GPUtech} With more and more advanced telescopes launched into space, higher-quality images have become available. On the one hand, high-resolution images reveal the Sun in greater detail and help us to study its nature more closely. On the other hand, the vector magnetograms used as boundary conditions for NLFFF methods are getting larger, which vastly increases the amount of computation for each method. For the BIE method, it is necessary to solve the computing-speed problem before applying it to real, high-resolution boundary data. The BIE and DBIE methods are in principle suitable for high-performance parallel computing. However, in previous work, an implementation of BIE with parallel computing on high-performance computers was not carried out, so BIE would be slow when extrapolating the NLFFF from boundary data on computational grids compatible with current observations; DBIE was expected to bring an improvement (Schrijver et al., 2006; Wiegelmann, 2008).
Hence we adopt a suitable parallel computing technique for DBIE. In recent years, graphics processing hardware has become prevalent in general-purpose computation, and we utilized Graphics Processing Units (GPUs) in our program. The results turn out to be effective and suitable: we can replace a CPU cluster consisting of tens of CPUs with a single GPU board installed in a personal computer. This convenience greatly promotes the application of the DBIE method. A GPU is composed of high-performance multi-core processors capable of very high computation and data throughput (Zhang et al., 2009). The GPU's powerful parallel computing ability to process the integration operation can be applied to the DBIE method. Parallel computation of the DBIE-NLFFF extrapolation algorithm is performed on a GPU with shared-memory access optimization, under a Linux system with the Compute Unified Device Architecture (CUDA) compiler. The calculation is run on a personal computer with an Intel CPU @ 3.40 GHz and an NVIDIA GeForce GTX 480 graphics device with NVIDIA CUDA 4.2; the platform employed in this work is thus a 4-core CPU plus one GPU. The main part of the program is the integral operation in Equation~(\ref{DBIE}). The iteration part is executed mostly on the CPU cores, and the data are transferred between CPU and GPU. In order to reduce latency and improve the occupancy of the procedure, we need to reduce the data exchange through global memory between CPU and GPU and to allocate reasonable numbers of $\textit{threads}$ and $\textit{blocks}$. These numbers are not fixed, but there are some allocation rules that may improve the speed. The thread number should be a multiple of 32 (the warp size), which improves the memory coalescing of the procedure. For different data sizes the optimal number differs; generally, the larger the better, since this improves the occupancy.
We can adjust the thread number between 128 and 256 and then change the block number gradually. Meanwhile, we should make sure the block number is larger than the number of multiprocessors, which guarantees that no multiprocessor is idle. In our work, the numbers of $\textit{threads}$ and $\textit{blocks}$ are 128 and 80, respectively, which give a good allocation for our procedure. In addition, we utilize the shared memory to optimize our program and improve the computational speed; this reduces the volume of output data transmitted from GPU to CPU. According to Equation~(\ref{DBIE}), in the numerical procedure the magnetic field of an arbitrary point $i$ in the semispace above the boundary can be expressed as follows (Yan and Sakurai, 2000): \begin{equation}\label{DBIE-dis} \textit{\textbf{B}}_i=\sum_{e=1}^{N_e}\sum_{j=1}^{9}\left [\int_{-1}^{+1}\int_{-1}^{+1}YN_k(\xi,\eta)J(\xi,\eta)d\xi d\eta \right] \textit{\textbf{B}}_{j}^{e} \end{equation} where the boundary has been subdivided into $N_e$ 9-node elements with boundary data known at each node, $N_k(\xi,\eta)$ is the shape function, $J(\xi,\eta)$ denotes the Jacobian, and $\textit{\textbf{B}}_{j}^{e}$ indicates the known nodal field values provided by the boundary condition, analogous to $\textit{B}_{p0}$ in Equation~(\ref{DBIE}). For clarity, we simplify this equation as Equation~(\ref{DBIE-dis2}), where $N=n\times n$ is the number of grid nodes of the boundary condition. We allocate the GPU assignment as in Figure~\ref{gpu}: the number of $threads$ is denoted by $N_T$, and the boundary grid points are marked as $1, 2, \ldots, N_T, N_T+1, N_T+2, \ldots, 2N_T, \ldots, N$. The boundary data are distributed over the $threads$, and the $threads$ are grouped into $blocks$; thus our data parallelization is realized. Each $thread$ computes a partial sum of its data, and the partial sums are then added within each $block$.
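This thread/block partial-sum reduction can be emulated on the CPU to see the data layout. The following numpy sketch uses illustrative sizes, with $a_{ik}$ and $B_k$ standing for the quadrature weights and boundary field values contributing to one field point $i$:

```python
import numpy as np

# CPU emulation of the GPU thread/block partial-sum reduction.
# Sizes are illustrative, not a claim about the production run.
N, N_T = 90000, 128                        # N = n*n boundary nodes (n = 300)
rng = np.random.default_rng(0)
a_ik = rng.standard_normal(N)              # kernel weights for one point i
B_k = rng.standard_normal(N)               # boundary field values

terms = a_ik * B_k                         # one product per boundary node
pad = (-N) % N_T                           # pad so N splits into full blocks
terms = np.concatenate([terms, np.zeros(pad)])
thread_sums = terms.reshape(-1, N_T)       # each row: one block of N_T threads
block_sums = thread_sums.sum(axis=1)       # reduction inside each block
B_i = block_sums.sum()                     # final reduction on the host
```

The result equals the direct sum $\sum_k a_{ik} B_k$; on the GPU the per-block reduction runs in shared memory, so only the block sums travel back to the host.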
\begin{equation}\label{DBIE-dis2} \textit{\textbf{B}}_i=\sum_{k=1}^{N} a_{ik} \textit{\textbf{B}}_{k} \end{equation} \begin{figure} \centering \includegraphics[width=4.8in]{gpu3.eps} \caption{Sketch of the GPU assignment allocation. The data size is denoted by $N$, which equals the square of the number of boundary grid points $n$. $N_T$ indicates the number of GPU threads. The lines show the parts of the assignment that are put into the corresponding GPU threads.}\label{gpu} \end{figure} A series of numerical tests indicates that the GPU-accelerated DBIE program is almost 1000 times faster than the original DBIE; this factor includes the hardware upgrade, compiler differences, instruction optimization, and the effect of the GPU itself. The total computational cost can be expressed as $\textit{O}(n^2m^3)$ (Yan and Li, 2006), {which has to be multiplied by the number of iterations needed to minimize $f_i$ and $g_i$ in Equation (12),} where $n\times n$ is the number of boundary nodes and $m\times m\times m$ is the number of grid points in the computational cube. As Figure~\ref{gpu} shows, each point $i$ in the semispace above the boundary requires an integral operation over the $n\times n$ boundary grid $\textit{B}_{p0}$ (in Equation~(\ref{DBIE})). We apply GPU acceleration only to parallelize this $n^2$ part; the parallelization over the internal grid (the $m^3$ part), i.e., over the points $r_i$, is not yet implemented. Further acceleration could therefore combine CUDA with other parallel computing techniques, such as the Message Passing Interface (MPI), to realize multi-GPU parallelization. Before applying the present DBIE method to practical problems, we first compare it with a semi-analytical NLFFF solution. In the earlier comparison against the analytical force-free field models of Low and Lou (1990), no iteration was performed to determine the set of factors by the BIE method; it was expected that completing the iteration with DBIE would greatly reduce the computation time and should be feasible (Schrijver et al., 2006).
Here we adopt Case \uppercase\expandafter{\romannumeral2} of Schrijver et al. (2006), i.e., only bottom boundary data are used, because this type of boundary condition is closest to the case of the Sun. The boundary size and the five evaluation metrics $C_{vec}$, $C_{cs}$, ${E^{'}}_{n}$, ${E^{'}}_{m}$, and $\epsilon$ are the same as in Schrijver et al. (2006). The results and the comparison with other methods are shown in Table~\ref{tb}. It can be seen that after iteration by the present DBIE method, the metrics are significantly improved compared with the boundary integral method without iteration, and results similar to those of the other methods are obtained. \begin{table} \caption{ Evaluation of metrics for the present DBIE and other methods.} \label{tb} \begin{tabular}{lccccc} \hline Only lower boundary provided, entire volume\tabnote{The parameters are the same as in Case \uppercase\expandafter{\romannumeral2} of Schrijver et al. (2006) with the Low \& Lou (1990) solution: n=3, m=1, l=0.3, $\Phi=4\pi/5$ on a 192 $\times$ 192 pixel grid centered on the $64\times64\times64$-pixel test region.} & $C_{vec}$ & $C_{cs}$ & ${E^{'}}_{n}$ & ${E^{'}}_{m}$ & $\epsilon$\\ \hline Exact solution (Low \& Lou, 1990)&1& 1& 1& 1& 1\\ \hline Weighted Optimization Method (Wiegelmann)\tabnote {Data from Table~I of Schrijver et al. (2006).} & 1.00 & 0.57 & 0.86 & -0.25 & 1.04 \\ Optimization Method (McTiernan)$^2$ & 1.00 & 0.51 & 0.84 & -0.38 & 1.04 \\ Magnetofrictional Method (Valori)$^2$ & 0.99 & 0.55 & 0.75 & -0.15 & 1.02 \\ Grad-Rubin-like Method (Wheatland)$^2$ & 0.99 & 0.58 & 0.69 & 0.13 & 0.96 \\ Grad-Rubin-like Method (R\'{e}gnier)$^2$ & 0.94 & 0.28 & 0.49 & -1.7 & 0.74\\ Boundary Integral Method (no iteration)$^2$ & 0.97 & 0.41 & -0.02 & -14.
& 1.00 \\ \hline Upward-layered DBIE Method (He)\tabnote{Data from Table~4 of He \& Wang (2008).} &0.97&0.65&0.077&12.4&1.06\\ Present DBIE Method & 0.99 & 0.52 & 0.83 & -0.53 & 1.08\\ \hline \end{tabular} \end{table} \section{Observations} \label{obs} NOAA 11158 was the first active region that produced an X-class event in the current 24th solar cycle. There were many C-class and M-class flares in this active region during its passage across the solar disk in February 2011. The largest, an X2.2 flare, occurred on February 15 at 01:44 UT. Several studies have been carried out on NOAA 11158 (Schrijver et al., 2011; Sun et al., 2012; Wiegelmann et al., 2012). The proposed GPU-accelerated DBIE is applied to reconstruct the coronal magnetic field from the vector magnetogram taken on 2011 February 14 at 20:12 UT by SDO/HMI. This is combined with observations from SDO/AIA and the two STEREO/Extreme Ultraviolet Imager (EUVI) instruments (Howard et al., 2008; W\"{u}lser et al., 2004) to present a stereoscopic investigation of the coronal magnetic fields in order to understand the X2.2 flare event. We {average} the boundary data from 360 ${\rm km~pix^{-1}}$ ($0.5''$) to 720 $\rm km~pix^{-1}$ (about $1''$), which yields $300\times300$ grid points used as the boundary condition. In order to make a comparison with previous work, we also pay attention to the central $250\times200$-pixel area covering the main features of the active region, with the vertical grid spacing matching the horizontal spacing. \begin{figure} \centerline{\includegraphics[width=0.8\textwidth,clip=]{magmap.eps} } \caption{ The observations from SDO. (a) The EUV image in 171~\AA\ of NOAA AR 11158 from SDO/AIA on 2011 February 14 at 20:14 UT. The EUV loops are divided into 14 groups marked G1 to G14. The field of view is about $300''\times300''$. (b) The vector magnetogram from SDO/HMI at 20:12 UT.
The horizontal fields are presented by arrows, with a length scale of 2000 G shown by the white bar. The vertical fields are plotted as a contour map at $\pm1000, 2000$ G. P1, N1 and P2, N2 denote two pairs of opposite polarities. Red indicates negative and blue positive. } \label{magmap} \end{figure} Here, we use the HMI vector magnetogram as the boundary data, with the three components of the magnetic field shown in Figure~\ref{magmap}(b); the two main bipolar pairs are marked there as P1, N1 and P2, N2. The cutout data used for the force-free field modeling have been mapped to a local Cartesian coordinate system. The precision of our method depends largely on the boundary condition, while solar magnetic field measurements suffer from several uncertainties (McClymont et al., 1997). Wang et al. (2001) proposed a method to remove the $180^\circ$ ambiguity and to reduce the boundary data for the BIE method; we apply this data reduction method to the boundary data in the present study. After reconstructing the coronal magnetic field using the GPU-accelerated DBIE method, we compare the modeling results with the EUV images of AIA and STEREO/EUVI from three points of view in order to quantify to what extent they correctly reproduce the coronal magnetic field configuration. To co-align the cutout vector magnetogram with the AIA images, we carried out a correlation analysis between $B_z$ from the original vector magnetogram and $B_{los}$ from the full-disk LOS magnetogram. The location of the rectangular research region (shown as the white squares in Figure~\ref{fulldisk}) is thereby determined in the full-disk SDO/HMI magnetogram. Following the SDO data analysis guide, we align the HMI data with the AIA data and obtain the cutout AIA image (shown in Figure~\ref{magmap}(a)) and the location of our research region.
In order to determine the location of the research region in the STEREO images, the coordinate conversions between Stonyhurst heliographic and heliocentric-Cartesian coordinates are adopted (Thompson, 2006). Thus the reconstructed results viewed from three different points of view can be shown, aligned with the EUV background at accurate locations. \begin{figure} \centerline{\includegraphics[width=1.0\textwidth,clip=]{fulldisk.eps}} \caption{ Full-disk maps from AIA (b), STEREO A (c), and STEREO B (a) in 171 \AA. The research region for extrapolation is marked by a white square in (b), and the corresponding domain is marked in (a) and (c).} \label{fulldisk} \end{figure} \begin{figure} \centerline{\includegraphics[width=0.8\textwidth,clip=]{EUVI3.eps}} \caption{ Corresponding positions of the stereoscopically reconstructed coronal loops from three different points of view. Shown are the loop features in the lower part (blue, red, and light blue cross points) and middle part (blue and light blue cross points) of the AIA image (b), with the corresponding loop features denoted by lines of the same color in the STEREO A (a) and B (c) images.} \label{EUVI3} \end{figure} Before we compare our reconstruction results with the observed EUV loops, we need to identify the same features in the AIA image and the two STEREO/EUVI images. For a coronal loop in the STEREO A image, shown as the light blue line at the bottom of Figure~\ref{EUVI3}(c), we apply a Gaussian fit to the cross section of the loop and find the brightest point along this cross section. We then select a number of cross sections along this loop and connect these points together; thus we obtain the ``skeleton'' of the loop. Through the coordinate conversions (Thompson, 2006), some selected points along the ``skeleton'' line are projected onto the STEREO B image in Figure~\ref{EUVI3}(a); these projections are seen as {short black bars} (whose length, as seen in the AIA image, is nearly the same as the length of the research region).
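The per-cross-section centering used to build the ``skeleton'' can be sketched as follows. This is an illustrative sub-pixel peak finder assuming an approximately Gaussian brightness profile with negligible background, not the exact fitting code used here:

```python
import numpy as np

def gaussian_peak(s, intensity):
    """Sub-pixel location of the brightness maximum along one loop cross
    section: for a Gaussian profile, log-intensity is a parabola, so a
    quadratic least-squares fit gives the peak position in closed form.
    s: positions along the cut; intensity: positive brightness samples."""
    a, b, _ = np.polyfit(s, np.log(intensity), 2)   # a*s^2 + b*s + c
    return -b / (2.0 * a)
```

Connecting the fitted peaks of successive cross sections then yields the loop skeleton. With a real background, one would restrict the fit to samples near the maximum or include a constant offset in the model.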
Using the same method as for the STEREO A image, we obtain the ``skeleton'' line of the loop in the STEREO B image and determine the points of intersection. We convert these points of intersection in the STEREO B image and the points in the STEREO A image to the AIA image in Figure~\ref{EUVI3}(b); the points of intersection of the STEREO A and B lines are marked as cross-shaped points in the AIA image. We can thus obtain the stereoscopically reconstructed coronal loops. We apply this method mainly to the higher-altitude structures that can be seen by both STEREO A/B EUVI instruments; the comparisons below will therefore take into account these prominent higher-altitude structures. \begin{figure} \centerline{\includegraphics[width=1.0\textwidth,clip=]{extrapolation_map4.eps}} \caption{ Comparison between EUV images and reconstructed results. The first row presents the EUV images in 171~\AA\ from STEREO B (a), AIA (b), and STEREO A (c). The same images superimposed with extrapolated magnetic field lines are presented in the bottom row. The red lines show the closed extrapolated magnetic field lines; the blue lines are the open magnetic field lines that extend beyond the boundary area of $300''\times300''$. The black squares in (b) and (e) mark the $250''\times200''$ domain that contains the main EUV structures. The outer square in (c) and (f) marks the boundary area, and the inner one is the same as the squares on the AIA images. The same region on the backside of the Sun in (a) and (d) is indicated by dotted lines.} \label{mainmap} \end{figure} We present some selected EUV bright loop structures in 171~{\AA}, and divide the EUV features into groups marked G1 to G14 (see Figure~\ref{magmap}(a)). G1, G2, and G3 are three groups corresponding to the EUV loops in the top part of the research region, connecting the magnetic polarities P1 and N1.
When we locate, with the 3-D stereoscopic technique, the vertical EUV structures rooted at the edge of the solar disk and stretching beyond the disk in the side views, we find that they lie outside the region of interest. Thus the G1 loops cannot be seen {from the views at the solar limb}, and they lie at lower altitudes. G4 is the kernel region, where stretched loops along the polarity inversion line (PIL) are observed and where the flare occurred; for it we present not only the reconstructed magnetic field lines but also the electric current lines. G5 and G6 are some lower small loops that can be distinguished only {from the view on the solar disk}. G7, G8, and G9 are large loops connecting N2 with P2. G10 and G11 are also large loops connecting N2, but with P1; these bundles of loops can be seen from all three points of view. G12, G13, and G14 are the open loops extending beyond the region of interest, rooted in N2 and P1, respectively. \section{Reconstructed Results} \label{result} The extrapolation code is based on the GPU-accelerated DBIE method. Alignments between the extrapolated field lines and the EUV images in 171~\AA\ from SDO/AIA and the twin STEREO/EUVI instruments are presented in Figure~\ref{mainmap}. The DBIE method is used to reconstruct the 3D magnetic field structures of region NOAA 11158 in the corona. The red lines in Figure~\ref{mainmap}(e) show the calculated closed magnetic field lines, while the blue lines show the calculated open magnetic field lines that extend beyond our computational domain. Figures~\ref{mainmap}(d) and (f) show the counterpart reconstructed field structures from STEREO B and A, respectively.
\begin{figure} \centerline{\includegraphics[width=0.9\textwidth,clip=]{G123.eps}} \caption{ Comparison between the calculated magnetic field lines L1, L2, L3 and the EUV loops G1, G2, G3 in 171~\AA.} \label{G123} \end{figure} Figure~\ref{G123} presents the decomposed comparison between the EUV {loop groups G1, G2, G3 and our reconstructed magnetic field lines. Here L denotes a group of reconstructed magnetic field lines and G a group of EUV loops. } The first row of Figure~\ref{G123} presents the EUV patterns in 171 \AA. G1 consists of a series of small lower loop structures, which are not seen from STEREO B and show very good agreement with our reconstruction results L1 {from the view on the solar disk}. This bundle of loops connects P1 to the relatively weaker negative magnetic polarities between P1 and N1. G2 is similar to G1, but it can also be seen from STEREO A. It is worth mentioning that the left footpoints of the calculated lines in L2 show a helical structure, in good agreement with the EUV background of G2 around the negative polarity N1. According to the method stated in Section~\ref{obs}, the EUV loops of G3 are lower than the vertical structures stretching beyond the solar disk. The calculated magnetic field lines in L3 seem to be consistent with this, namely they lie at a lower altitude as seen from STEREO A and cannot be seen from STEREO B. G3 is the largest loop bundle connecting P1 with N1. \begin{figure} \centerline{\includegraphics[width=0.9\textwidth,clip=]{G4.eps}} \caption{ The three blue squares in the top panel mark the same region, within which the results are displayed in the following panels, around the polarity inversion line (PIL) from three different views for G4. G4-1 and G4-2 are close-up views of G4. The green lines in G4-1 agree with the calculated electric current lines (white) in L4-1. The blue lines in G4-2 agree with the higher-lying calculated magnetic field lines (red) in L4-2.
The highly twisted, short, lower-lying field lines in L4-2 form an S-shape co-spatial with the filament channel along the PIL. In the bottom panel the calculated magnetic field lines are shown in detail to demonstrate the pivot location of L4-2 with respect to all other surrounding coronal structures L1, L2, L3, L5, L6, and the one-side footpoints of L7, L8, ..., L12, and L14, etc. } \label{G4} \end{figure} G4 is a group of EUV loops in the region around the PIL, between magnetic polarities P2 and N1 (see Figure~\ref{G4}). There is strong magnetic shear around the PIL, so this region should contain a large amount of magnetic free energy; it is the most important region for understanding the physical processes of solar eruptions (see the blue box in the first-row images of Figure~\ref{G4}). This region shows relatively complex structures as seen from AIA, and there are also vertical structures stretching out of the edge of the solar disk (seen in the boxes in the STEREO side-view images). We determine the correspondence of the features in all three images. {Around the PIL, there are some observed EUV loops connecting P2 with N1. Our extrapolation yields a series of small, low-lying calculated magnetic field lines along the PIL connecting the regions on both sides, which agree in general with the EUV loop structures and with the filament structure marked in Figure 1 of Sun et al. (2012), as shown in the last two rows of Figure~\ref{G4}. However, we did not obtain higher-lying calculated magnetic field lines over the filament channel connecting footpoints of opposite magnetic polarities P2 and N1. Nevertheless, the current lines connecting P2 with N1, which can be calculated from Equation~(\ref{J}), are found and plotted in white, marked as L4-1. The locations of the same corresponding EUV loops from the three viewpoints are also marked in G4-1 as green lines.
In the STEREO A image, the upper, descending half of this loop structure is blocked by the saturated patch in the 171~\AA\ detector, but the visible part is in good agreement with the central electric current lines in L4-1. The highest structure, above the one in G4-1, is shown as a blue line in G4-2 and represents large-scale loops connecting P1 to N2 along the PIL in the central region of the G4 loops. L4-2 shows bundles of magnetic field lines in the kernel region. One bundle of field lines is located higher than the electric current lines and shows very good agreement with the coronal loop denoted by the blue line in G4-2 in both front and side views. Other bundles of lower-lying, short, twisting field lines in L4-2 connect P2 and N1 along the PIL and are co-spatial with the S-shaped filament channel, where some EUV strands in the dark filament channel were also shown in Sun et al. (2012). The highly twisted, short, lower-lying field lines in L4-2 occupy a pivot location with respect to all other surrounding coronal structures L1, L2, L3, L5, L6, and the one-side footpoints of L7, L8, ..., L12, and L14, etc. Therefore they must have played a key role in the occurrence of the X2.2 flare event. In the STEREO A image, it can be seen that those lower-lying field lines in L4-2 form a twisted arcade structure along the PIL where the filament is located. It should be noted that while the strand in the left part of the S-shaped filament channel may be lower-lying and related to the filament, the EUV bright features in the right part along the PIL, marked as a filament in Figures 1 and 2 of Sun et al. (2012), are not necessarily all lower-lying, as they agree almost identically with the coronal loop in G4-2 that is clearly stereoscopically resolved as a high-lying coronal bright feature in the STEREO A/B EUV images.
At the least, one cannot simply attribute all those EUV bright features along the PIL in the filament channel to the manifestation of a filament, although the filament could be located there. } {Theoretically, the field and current lines should be identical for a force-free field, but in practice discrepancies may exist. For the current lines in L4-1, the averaged misalignment angle between the field lines and the current lines is $13.6^{\circ}$. This may be due to the inconsistency with force-freeness and to errors contained in the boundary data in the PIL region. It should be pointed out that with the DBIE method the angle between the current {\bf J} and the field {\bf B} is mostly less than $5^{\circ}$, with an average value of $4^{\circ}$, although there do exist some points where the angles are large, whereas the relative flux error factor $g_i$ always has a maximum value of about 0.5\% when compared with the exact solution (e.g., Fig.~5 in Yan and Li 2006). In the present case, for $250\times200\times100$ internal grid points, the averaged values are $\langle f_i\rangle=0.078$ (i.e., the averaged angle between {\bf B} and {\bf J} is less than 4.5$^{\circ}$) and $\langle g_i\rangle=0.00067$. Physically, the coronal EUV loops are controlled by the observed photospheric vector magnetogram data, which are not necessarily force-free. Therefore there is no guarantee that the observed coronal EUV loops are always consistent with a force-free field solution, especially for solar flare or coronal mass ejection events. Nevertheless, the comparison between the calculated field lines and the observed coronal loops reveals the quality of the extrapolation.} \begin{figure} \centerline{\includegraphics[width=0.9\textwidth,clip=]{G56.eps}} \caption{Comparison between the calculated magnetic field loops L5, L6 and the EUV background patterns G5, G6 at 171~\AA.} \label{G56} \end{figure} G5 and G6 are relatively low-lying EUV bright loops (see Figure~\ref{G56}).
From the view of STEREO A, G5 shows some bright structures whose details cannot be distinguished. The corresponding calculated magnetic field lines in L5 are lower and cannot be seen from STEREO B; they connect the negative polarity N2 with the positive polarity between N1 and N2. It is important to note that a series of drastic solar activities occurred in the region of G5 before our selected extrapolation time: there was a large CME on February 14 at 18:00 UT, with an associated M2.2 flare at 17:20 UT from this site. G6 corresponds to a series of lower loops. The calculated magnetic field lines L6 are also lower-lying and are qualitatively in good agreement with the observations from the AIA view of G6. From the side views, they mix with the background and no obvious features can be seen. \begin{figure} \centerline{\includegraphics[width=0.8\textwidth,clip=]{G789.eps}} \caption{Comparison between the calculated magnetic field loops L7-1, L7-2, L8, L9 and the EUV background patterns G7, G8, G9 observed at 171~\AA.} \label{G789} \end{figure} G7, G8, and G9 consist of a series of coronal loops with different length scales (see Figure~\ref{G789}). These loops all originate from polarity N2 and end around polarity P2. The calculated lines L7, L8, and L9 are all in good spatial agreement with the EUV loops G7, G8, and G9 {from the AIA view on the solar disk}. The calculated field lines L7-1 and L7-2 are lower and shorter, and can be seen only in STEREO A, not in STEREO B. The calculated lines are in good agreement with the EUV loops in the AIA image. L8 connects N2 to P2 and has good spatial co-alignment with G8 in the AIA image. However, its side projections still mix with the background and cannot be distinguished. L8 also shows poor spatial co-alignment with the high-altitude STEREO A/B EUV loops. G9 is one of the largest loop bundles at the bottom of this region. As the loops grow, their altitude increases.
This can be seen from the side view in the last row of Figure~\ref{G789}. L9 is in good agreement with G9 {from the front view} but shows poor spatial co-alignment with the high-altitude STEREO A/B EUV loops. \begin{figure} \centerline{\includegraphics[width=0.9\textwidth,clip=]{G1011.eps}} \caption{Comparison between the calculated magnetic field loops L10, L11 and the EUV background patterns G10, G11 at 171~\AA. The bottom-row images give the corresponding positions of the coronal loops G10 (blue and red) and G11 (light blue) from three different points of view.} \label{G1011} \end{figure} We expect to find coronal loops with EUV counterparts not only in the AIA images but also in the STEREO images. Figure~\ref{G1011} shows a series of loops connecting N2 to P1. Their corresponding structures are shown, from different angles, in the first row of Figure~\ref{G1011}. The correspondence of these EUV loops at different points of view is confirmed by the method stated in Section~\ref{obs} and shown in the last row of Figure~\ref{G1011} as differently colored lines. However, we can only confirm the correspondence for half of each loop; the other half cannot be determined. Nevertheless, this comparison validates our reconstructed results. The calculated magnetic field lines L10 agree well with the EUV loops of the left parts of G10. G10 overlaps with some parts of G7 and G8, but its ending is different. G11 also connects the spots N2 and P1 and is in the same situation as G10; it overlaps with some parts of G9. It can be seen that the configurations of the calculated lines L10 and L11 coincide with the coronal loops G10 and G11 in both the {front view from AIA and the STEREO A/B side views. Although the calculated field lines and the observed EUV coronal loops agree with each other globally, they do not follow exactly the same trajectories.
In order to make a quantitative comparison for the three stereoscopically reconstructed loops, we calculated the angles between the tangent vectors along the reconstructed loops and the calculated fields there, and obtain averaged misalignment angles of 16.6$^{\circ}$, 17.8$^{\circ}$, and 18.3$^{\circ}$ for the three reconstructed loops in the middle panel of the last row in Figure~\ref{G1011}, from bottom up. These values quantify the deviation from force-freeness along these loops; they are quite good, being about a factor of two smaller than those given by other NLFFF models, which yield overall misalignment angles of $24^{\circ} - 44^{\circ}$ (DeRosa et al. 2009), and of the same order as a forward-fitting model using stereoscopically reconstructed loops as constraints (Sandman and Aschwanden 2011).} \begin{figure} \centerline{\includegraphics[width=0.9\textwidth,clip=]{G121314.eps}} \caption{Comparison between the calculated magnetic field lines L12, L13, L14, which extend outside the boundary of the computational domain, and the EUV loop patterns G12, G13, G14 at 171~\AA.} \label{G121314} \end{figure} G12, G13, and G14 consist of a series of large coronal loops that are open to the outside of the computational region (see Figure~\ref{G121314}). {It should be noted that they are not necessarily open field lines but may connect to other places on the solar surface.} L12 represents the bundles of magnetic field lines rooted in N2; the brightening of G12 at its end agrees well with L12 both in the view toward the central region of the Sun and in the STEREO A/B side views. L13 are also magnetic field lines rooted in N2. There are two bundles of field lines that extend beyond the region, and these two bundles in L13 are consistent with the EUV background G13 from all points of view. The next group is L14, which displays a radial pattern rooted in P1.
These structures spread outside the computational area and are in good agreement with the upper part of G14 in the central column of Figure~\ref{G121314}. {Some open coronal loops in the lower part of G14 in the central column actually connect to a lower magnetic pore region outside AR 11158, as shown in Figure~\ref{fulldisk}(b), which is not included in the present magnetogram area employed as the boundary condition. Therefore the corresponding lower field lines of L14 at first follow the loop tendency, but at higher altitudes they bend back and deviate from the real coronal loops, which should be due to the neglect of the boundary field outside the magnetogram area.} \section{Conclusions} \label{Conclusion} The reconstructed topological configuration of the magnetic fields in NOAA 11158 is stereoscopically presented for the first time in comparison with observations from three points of view. This allows us to understand this active region more comprehensively. The calculated magnetic field lines replicate the observed EUV loop patterns well from the different views. These results demonstrate clearly that the DBIE method is effective when applied to actual photospheric magnetograms. {The GPU acceleration makes DBIE tractable even when applied to large-scale domains. From the reconstructed coronal field structures, we can estimate the altitude of the EUV loop patterns, which we find to be below 86 Mm, or $\sim$40\% of the length of the magnetogram area.} They also match the actual EUV features as estimated from the stereoscopic observations. In this region the DBIE achieves very high numerical accuracy.
{In the present case, for $250\times200\times100$ internal grid points, the averaged angle between {\bf B} and {\bf J} is less than 4.5$^{\circ}$ and the averaged relative flux error is $\bar{g_i}=0.00067$.} In the central sunspot cluster, the current density is very strong along the filament at the PIL, and the current density distribution on a vertical cross section was plotted by Sun et al. (2012). {Our results agree very well with the presence of strong currents across the PIL, and we found elongated, lower-lying, twisting field lines co-spatial with the S-shaped filament along the PIL. However, we argue that one cannot simply attribute all the EUV bright features along the PIL to the manifestation of a filament, although the filament could be located there. Furthermore, we have obtained the electric current lines three-dimensionally at higher altitudes across the PIL in this region from three points of view. Given their agreement with the bright EUV loop structures there, we think that features dominated by the strong currents should really exist above the PIL.} Generally speaking, a region with strong currents should contain a large amount of accumulated free energy, which will eventually be released quickly. {It is most likely that the extrapolated magnetic field lines resembling the S-shaped filament channel and the electric current lines agreeing with the bright EUV loops twisting over the filament} may be associated with the occurrence of the X2.2 flare. It should be noted that while the line-of-sight (Earth-direction) co-alignments between the calculated field lines and the observed coronal loops may agree, the views from other sides may show that they do not actually agree three-dimensionally and belong to other groups.
This indicates that co-alignment with line-of-sight images from the Earth direction alone may not provide an accurate coronal configuration, and that the real three-dimensional information is vital for understanding the coronal magnetic field structures and their associations with solar activities. {For the three stereoscopically reconstructed coronal loops, we quantitatively obtain averaged misalignment angles of 16.6$^{\circ}$, 17.8$^{\circ}$, and 18.3$^{\circ}$, respectively, which are quite good, being about a factor of two smaller than those given by other NLFFF models, which yield overall misalignment angles of $24^{\circ} - 44^{\circ}$ (DeRosa et al. 2009), and of the same order as a forward-fitting model with reconstructed coronal loops as given conditions (Sandman and Aschwanden 2011).} Although different from other methods of similar computational capability, DBIE has the advantage that it needs only photospheric data as the boundary condition and allows one to evaluate the NLFFF field at any arbitrary point within the domain from the boundary data, instead of having to solve the entire domain. The DBIE can be accelerated by parallel algorithms such as GPU techniques, which allows the DBIE method to be applied to larger boundary data. The present study validates that the DBIE method is rigorous and practical. In addition, further acceleration could combine CUDA with MPI to realize multi-GPU parallelization, which would greatly improve the computational efficiency of the DBIE method. As the first images of the Chinese Spectral Radioheliograph (CSRH, Yan et al., 2009) have been obtained, a comparison between our extrapolation and the tomographic observations from CSRH will be carried out in the near future. \begin{acks} The authors would like to thank the referee for the helpful and valuable comments on this paper. Dr. Yingna Su is acknowledged for improving the English of the manuscript and for helpful comments. Mr. L. A.
Selzer is acknowledged for improving the English of the manuscript as well. We thank the SDO and STEREO teams for providing the magnetic field data and EUV images used in this investigation. We also wish to thank Dr. W.T. Thompson for his efficient support with the routine for correcting the STEREO data error. This work is supported by NSFC Grants No. 11221063, 11273030, and 11211120147, MOST Grant No. 2011CB811401, and the National Major Scientific Equipment R\&D Project ZDYZ2009-3. Part of the experiments were carried out on the ScGrid and GPU cluster of the Supercomputing Center, Computer Network Information Center, Chinese Academy of Sciences. \end{acks}
\section{Introduction} Low mass X-ray binaries (LMXBs) consist of a compact stellar remnant, either a black hole or a neutron star, that accretes matter from a low-mass Roche-lobe-filling secondary star via a circumstellar disk. The LMXB 4U 1957+115 was first detected by the Uhuru satellite \citep{gia74} and its optical counterpart V1408 Aql was first identified by \citet{mar78}. The X-ray properties of 4U 1957+115 are unusual. LMXBs usually cycle between active and quiescent states, but 4U 1957+115 has been persistently active for more than 40 years, the longest interval of any known LMXB, remaining in a spectrally soft, disk-dominated X-ray state with no detectable radio jet \citep{wij02,rus11}. V1408 Aquilae may be the only black hole LMXB, with the possible exception of LMC X-3, that has been found persistently active. We note that LMC X-1, LMC X-3, and Cyg X-1 \citep{rus10} are persistent high-mass X-ray binary systems. The X-ray light curve of the system has not shown any orbital modulation \citep{wij02}, but optical observations by \cite{tho87} revealed a nearly sinusoidal orbital variation with a peak-to-peak amplitude of 23\% and a period of 9.33 hr. \cite{hak99} observed the light curve on two nights and saw a change in the shape of the light curve and a significant increase in the amplitude of the variations. Further evidence of night-to-night variations was presented by \cite{rus10} from 144 images of the source spread over three years. The larger data sets of \cite{bay11} and \cite{mas12} confirmed both the sinusoidal orbital modulation and the changes in mean brightness from night to night. Following \citet{tho87}, \citet{bay11} showed that the orbital light curve can be reproduced by a model in which the secondary star is heated by flux from the accretion disk. The orbital modulation is produced entirely by the heated face of the secondary star as it rotates into and out of view.
Our results differ from the models of \cite{hak14}, which find no evidence for a secondary star in the spectral energy distribution (SED) and are inconsistent with a secondary whose face is irradiated by X-rays. The SED measured in this work is consistent with our model, since the heated face of the secondary would be just a small, high-temperature blackbody perturbation on the SED and would not be distinguishable in the measured SED. Several lines of evidence indicate that 4U 1957+115 has a black hole primary. It is common for LMXBs with neutron star primaries to show type I X-ray bursts, but 4U 1957+115 has never shown such bursts. The X-ray spectrum is well described by a multi-temperature blackbody, with an additional non-thermal power-law component in 15\% of the observations \citep{now12}. Its high inner disk temperature and small inner disk radius are consistent with a black hole primary \citep{whi83,wij02,now08,rus10,now12}. Models of the X-ray spectra that allow for rotation of the primary star yield large spin rates: from $a^* \gtrsim 0.9$ for a black hole with a mass of $3\, M_\odot$ and a distance of 10 kpc, to a near-maximal spin of $a^* \approx 1$ for larger values of the mass and distance \citep{now12}. The observed spins of black holes range up to $a^*\approx1$, while the spins of neutron stars are exclusively less than $a^*= 0.1$ \citep{mil11}. \cite{yaq93}, \cite{sin94}, and \cite{bay11} argued that the primary in 4U 1957+115 is a neutron star, in part because the mass ratio $q = M_2/M_1$ is large, suggesting a low-mass primary that is more consistent with a neutron star than with a black hole. We note, however, that a black hole with an unusually low mass could also yield a high mass ratio. The distribution of the known black hole and neutron star masses has a gap between $2\, M_\odot$ and $\sim4\, M_\odot$.
Among the neutron stars with reliably measured masses, the most massive is J0348+0432 at $2.01\pm0.21\, M_\odot$, followed by J1614+2230 at $1.97\pm0.04\, M_\odot$ \citep{dem10,ant13,lat14}. The least massive black hole, GRO J0422+32, has a mass of $3.97\pm0.95\, M_\odot$ \citep{gel03}, followed by GRS 1009-45 with a mass near $4.4\, M_\odot$ and not less than $3.64\, M_\odot$ \citep{fil99}, and possibly by 4U~1547-47 with an estimated mass of $\lesssim 4\, M_\odot$ \citep{kre12}. A very massive neutron star or a very low mass black hole could be produced from a progenitor with a mass of $\sim 22\, M_\odot$ \citep{fry99}. The probability that the observed gap between the masses of neutron stars and black holes is a mere statistical fluke is low \citep{oze10,far11}, although it remains possible that the gap has a non-zero but sparse population. If the mass gap is real, it has important implications for the physics of core-collapse supernovae \citep{fry12}. \cite{bel12} proposed a theoretical explanation for the existence of the mass gap that depends on the growth time of the instabilities that lead to core-collapse supernovae. Stars in the $20 - 40\, M_\odot$ range are the ones likely to leave high-mass neutron stars or low-mass black holes as remnants that would lie in the mass gap. If the growth time of the instabilities is longer than 200 milliseconds, a continuous remnant mass distribution would be expected in this range, but models that assume a growth time of 10 - 20 milliseconds reproduce and explain the observed mass gap \citep{bel12}. The observed mass distribution might, however, be subject to strong selection effects. One possibility is that low-mass black holes are hiding among other X-ray sources whose masses are notoriously difficult to measure because the absorption-line spectra of their secondary stars are generally not visible.
\citet{oze10} concluded that there are simply not enough persistent systems for this to be the sole source of the gap: even if every X-ray binary system that could contain a black hole were to contain a low-mass black hole, the gap would still not be fully populated. Nevertheless, it remains possible that a few of the missing low-mass black holes are lurking among transient X-ray sources. In this paper we present new optical photometry of V1408 Aql. We derive an improved orbital ephemeris for the system and model the mean orbital light curve using our \texttt{XRbinary} light curve synthesis code, from which we constrain the orbital inclination and the mass of the compact star. We find that the most likely mass for the compact star places it inside the gap in the mass distribution, although the range of possible masses is large. In Section 2 we describe the observations and summarize the behavior of the light curve, and in Section 3 we derive an updated orbital ephemeris. In Section 4 we discuss the models of the optical light curve, and in Section 5 we discuss constraints on the distance to the source. \begin{table}[ht] \caption{Journal of Observations$^{a}$} \begin{center} \begin{tabular}{ccc} \hline\hline UT Date & UTC Start & Duration (hr) \\ \hline 14-July-2012 & 07:06 & 3.6 \\ 15-July-2012 & 06:25 & 4.5 \\ 16-July-2012 & 06:31 & 4.2 \\ 17-July-2012 & 07:39 & 3.1 \\ 18-July-2012 & 06:46 & 4.0 \\ 11-August-2012 & 02:41 & 4.7 \\ 12-August-2012 & 02:29 & 4.5 \\ 13-August-2012 & 02:30 & 4.0 \\ 15-August-2012 & 02:51 & 4.9 \\ \hline \end{tabular} \end{center} \raggedright $^{a}$The time resolution of all data is 10 s. \label{journal-tab} \end{table} \begin{figure} \begin{center} \includegraphics[angle=0.0,scale=0.27]{Lightcurves.pdf} \end{center} \caption{Twenty-nine nights of photometry of V1408 Aql. On the scale of this figure the individual nights are not resolved, only entire observing runs.
The data in the four clumps on the left side of the figure were obtained by \cite{bay11} and \cite{mas12}, while the new data comprise the two clumps on the right side of the figure. The detached clump of bright points on the right side of the figure comes from a single night, HJD~2456122.} \label{alldata-fig} \end{figure} \begin{figure} \centering \subfigure[July 2012 light curves] { \includegraphics[scale=0.27]{Julydata.pdf} \label{onea} } \\ \subfigure[August 2012 light curves] { \includegraphics[scale=0.27]{Augustdata.pdf} \label{oneb} } \caption{The light curves of V1408 Aql on five nights in July 2012 (top) and four nights in August 2012 (bottom).} \label{runs-fig} \end{figure} \section{The Optical Orbital Light Curve} We obtained new high-speed optical photometry of V1408 Aql with the Argos CCD photometer on the 2.1 m Otto Struve telescope at McDonald Observatory \citep{nat04}. The photometer produces a sequence of consecutive CCD images, all with 10-second exposure times for the data reported here. We observed V1408 Aql on five nights in July 2012 and four nights in August 2012 for 3 - 5 hours per night (see Table~\ref{journal-tab}). All the observations were made through a broad BVR filter with a roughly square passband from 4130~\AA\ to 7385~\AA. We reduced the data using standard IRAF routines and extracted the brightness of V1408 Aql relative to the same two comparison stars used by \cite{bay11}. We combined the new photometry with the previously published photometry obtained with the same equipment and reduced in a similar manner by \cite{bay11} and \cite{mas12}. Altogether we now have 29 nights of data extending from 2008 to 2013. The six-year light curve is plotted in Figure~\ref{alldata-fig}. On the scale of this figure the individual nights are not resolved, only entire observing runs.
The data in the four clumps on the left side of the figure were obtained by \cite{bay11} and \cite{mas12}, while the new data comprise the two clumps on the right side of the figure. The mean brightness of V1408 Aql varies by a factor of two. The night-to-night variations occur on timescales of the order of one day, while rapid variations, or flickering, are occasionally observed roughly every $1.1 \pm 0.2$ hours. These long-term variations might be caused by changes in the accretion rate from the secondary, which in turn affect the emission from the accretion disk. The detached clump of bright points on the right side of Figure~\ref{alldata-fig} comes from a single night, 14 July 2012, when V1408 Aql was $\sim50$\% brighter than on the immediately following night. Figure~\ref{runs-fig} shows just the nine new light curves. The variation in mean brightness from night to night is readily apparent, as is the sinusoidal orbital modulation. Other sources similar to V1408 Aql show flares and bursts. For example, the low-mass black hole LMXB GRO J0422+32 was found to have episodic gamma-ray and neutrino emission observed in the form of a hard power law in the X-ray spectra \citep{vie12}. The amplitude of the sinusoidal modulation is correlated with the mean brightness. Figure~\ref{ronea} shows a light curve from a night when V1408 Aql was unusually bright and Figure~\ref{roneb} shows a light curve from a night when it was unusually faint. The sinusoidal modulation is clearly visible on both nights, but its peak-to-peak amplitude was 29\% on the bright night and only 22\% on the faint night. Figure~\ref{amp-fig} shows the amplitude of the sinusoidal modulation plotted against the mean brightness for the six nights for which we have enough data to measure both accurately.
There is a nearly linear correlation between the two, with a slope $dA/dB = 0.49$, where $A$ is the peak-to-peak amplitude of the sinusoidal modulation and $B$ is the mean brightness, both in units of relative intensity. \begin{figure} \centering \subfigure[June 06, 2008] { \includegraphics[scale=0.18]{June6.pdf} \label{ronea} } \\ \subfigure[June 25, 2009] { \includegraphics[scale=0.18]{June25.pdf} \label{roneb} } \caption{Two light curves of V1408 Aql. On June 06, 2008 (top) V1408 Aql had a high mean brightness and the peak-to-peak amplitude of its orbital modulation was 29\%. On June 25, 2009 (bottom) its mean brightness was lower and the amplitude of the modulation was 22\%.} \label{runs2-fig} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0.0,scale=0.20]{Amplitude2.pdf} \end{center} \caption{The peak-to-peak amplitude of the orbital modulation as a function of mean brightness for the six nights with enough data to measure both accurately. The amplitude of the modulation increases as the mean brightness increases.} \label{amp-fig} \end{figure} \section{The Orbital Ephemeris} The night-to-night variations of the mean brightness tend to obscure the orbital modulation of the light curve, adding noise to measurements of the orbital period. The long-term variations are on the order of days, over twice as long as the orbital period. To mitigate this problem we multiplied the relative brightness on each night by a normalization factor to scale all the light curves to the same mean brightness, which we chose to be near the minimum mean brightness. A few caveats are worth mentioning. This works to first order because the long-term variations are slower than the orbital period, and even though each light curve does not cover exactly the same phases, the normalization should not be a problem.
The normalization factor differed from unity by less than 10\% for more than half of the light curves. We then measured the orbital period using the Phase Dispersion Minimization (PDM) periodogram \citep{ste78}. Figure~\ref{PDM-fig} shows the PDM periodogram along with the previously published periods and their error bars. The minimum of the periodogram is at a period of 0.388893(3) days, which is consistent with, but more accurate than, previously published periods \citep{tho87,bay11,mas12}. The improved orbital ephemeris is: \begin{equation} T = {\rm HJD}\, 2454621.829(4) + 0.388893(3) E, \end{equation} where $T$ is the time of maximum flux and $E$ is the orbit number. The five-year span of our data is too short for a meaningful constraint on the rate of change of the orbital period. Figure~\ref{phase3-fig} shows the scaled data from all 29 nights folded at the orbital period, along with the average orbital light curve. The sinusoidal orbital modulation is apparent, as is a large scatter about the mean orbital variation. The scatter is caused by rapid variations (flickering) in the optical flux, which we presume are produced by rapid variations in the flux from the accretion disk. Figure~8 of \citet{mas12} plots nightly light curves of V1408~Aql at a scale that displays the flickering particularly well. \begin{figure} \begin{center} \includegraphics[angle=0.0,scale=0.17]{PDM2.pdf} \end{center} \caption{The PDM periodogram for all our photometry of V1408 Aql. The lowest minimum marks the best orbital period. The points above the periodogram mark the previously published orbital periods from \citet{tho87}, \citet{mas12}, and \cite{bay11}.} \label{PDM-fig} \end{figure} \section{Analysis of the Orbital Light Curve} We analyzed the orbital light curve of V1408 Aql with our \texttt{XRbinary} light curve synthesis program.
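The folding used to construct the orbital light curve follows directly from the ephemeris above. As a minimal illustration (ours, not the paper's reduction pipeline), each HJD timestamp maps to an orbital phase as follows:

```python
# Sketch (illustrative, not the paper's code): fold HJD timestamps on the
# ephemeris T = HJD 2454621.829(4) + 0.388893(3) E, where phase zero
# corresponds to maximum flux.
T0 = 2454621.829   # epoch of maximum flux (HJD)
P = 0.388893       # orbital period (days)

def orbital_phase(hjd):
    """Return the orbital phase in [0, 1) for a given HJD."""
    return ((hjd - T0) / P) % 1.0

# Half a period after the epoch, the phase is 0.5 (up to rounding).
print(round(orbital_phase(T0 + 0.5 * P), 3))
```

Data from any night can then be binned by this phase to build the mean orbital light curve.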
All the models consisted of a black hole primary star surrounded by an accretion disk, plus a secondary star that fills its Roche lobe and is irradiated by flux from the disk. For our purposes the black hole is just a point source of gravity that emits no flux. The accretion disk is a cylindrically-symmetric, geometrically thin, optically thick, steady-state alpha-model disk. The outer radius of the disk was set to the tidal truncation radius, which we take to be 0.9 times the mean radius of the primary star's Roche lobe \citep{fra02}. We assume that the disk radiates like a black body. The flux emitted from a surface element of the secondary star is prescribed to be $F_{emit} \ = \ F_0 + \alpha F_{irr}$, where $F_0$ is the flux the secondary star would emit in the absence of irradiation, $F_{irr}$ is the irradiating flux from the accretion disk, and $\alpha$, which we call the ``albedo,'' is the fraction of $F_{irr}$ that is re-radiated instead of absorbed into the structure of the secondary star. The intrinsic flux is calculated from the gravity-darkening law $F_0 \propto |g|^{4\beta}$, where the local gravity is determined from the Roche geometry, and $\beta$ is the temperature-dependent gravity-darkening coefficient, which we take from Claret (2000). We assume the emitted flux is fully thermalized, so the local effective temperature is given by $\sigma T_{eff}^4 \ = \ F_{emit}$. Given the local effective gravity and local effective temperature, we adopt Kurucz solar-composition spectra for the local emitted flux if $T_{eff} \leq 8000$ K and blackbody spectra otherwise. The model does not include spots or other features on the surface of the secondary star. The free parameters of this model are \vspace{-0.75\baselineskip} \begin{itemize} \item the masses of the primary and secondary stars. \vspace{-0.5\baselineskip} \item the orbital inclination. \vspace{-0.5\baselineskip} \item the inner radius and total luminosity of the accretion disk.
\vspace{-0.5\baselineskip} \item the intrinsic effective temperature of the (unirradiated) secondary star. \vspace{-0.5\baselineskip} \item the albedo of the secondary star. \end{itemize} \vspace{-0.75\baselineskip} There are also additional parameters, such as the time of phase zero and the resolution of the time steps, that have no effect on the structure of the system. \begin{figure} \begin{center} \includegraphics[angle=0.0,scale=0.068]{Alldata.jpg} \end{center} \caption[Folded light curve.]{Scaled photometry of V1408~Aql from all 29 nights folded at the orbital period. Each color represents a separate night. Phase zero is defined to be the maximum of the mean orbital light curve. The solid black line is the best-fit sine curve, not a model of the modulation.} \label{phase3-fig} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0.0,scale=0.3]{Mass2.pdf} \end{center} \caption[Mass versus relative $\chi^2$.]{Plot of the relative $\chi^2$ for the best-fitting synthetic light curves as a function of $M_1$. The three sets of points correspond to the three values of $M_2$. We see that the best models are for a primary of $3M_{\odot}$ for any secondary mass.} \label{mass} \end{figure} Perhaps surprisingly, our results do not depend strongly on the albedo of the secondary star. Changes in the value of the albedo are absorbed almost entirely into offsetting changes in the luminosity of the accretion disk. We have, therefore, fixed the albedo at 0.5. To estimate the inner radius of the disk we assumed the black hole has a spin of $a^* = 0.9$ and used the spin-ISCO relation in Figure 2 of \citet{mc11}. In fact, though, our results depend only weakly on the inner radius of the disk. The inner radius does affect the fraction of disk flux emitted at optical wavelengths, but even here it is the outer disk radius that dominates the amount of optical flux.
A numerical test showed that increasing the inner disk radius by a factor of three increased the preferred orbital inclination by only two degrees. The intrinsic temperature of the secondary star is nearly irrelevant since the optical flux is dominated by the accretion disk and the secondary's heated face. Despite the large number of parameters needed to model the light curve, in the end only four parameters really matter: the mass of the primary star, the mass of the secondary star, the orbital inclination, and the disk luminosity. The light curve synthesis code assumes that the heating of the secondary star is due to incident flux on its surface and that this flux is fully thermalized in the atmosphere of the secondary star. Because the incident flux is fully thermalized, the details of its spectrum are irrelevant; only the total amount of incident energy from the accretion disk matters. The heating therefore depends on the total luminosity of the disk, not on its spectral energy distribution. The disk luminosity is set via a separate input parameter in the code. The code assumes that the spectrum emitted by an area element on the surface of the secondary is the same as the spectrum that would be emitted by an isolated star with the same effective temperature. The code does not account for the fact that the regions of the secondary's atmosphere which are heated by radiation can have a temperature inversion, which in turn can alter the absorption-line spectrum and produce chromospheric emission lines \citep{bar04, waw09}. But since we are doing this analysis with optical photometry instead of spectroscopy, the way the code handles the incident radiation is not expected to be the limiting factor in our results. We can eliminate inclinations greater than about $65^\circ$, because at higher inclinations the accretion disk eclipses the secondary star.
This is the case for all of the secondary masses modeled ($M_2 = 0.8$--$1.4\, M_\odot$). Figure~\ref{bad} shows an example of the distinctive eclipse profile produced when the accretion disk passes in front of the secondary star. No eclipse of this kind has been observed.\footnote{If the disk eclipses the secondary star, the secondary star also eclipses the disk 1/2 orbit later, but a shallow disk eclipse can be difficult to discern because it is superimposed on the minimum of the sinusoidal orbital variation.} To avoid an eclipse at $75^\circ$, either the outer radius of the accretion disk would have to be unrealistically small, about 50\% of the radius of the black hole's Roche lobe, or the mass ratio would have to be so low that the fits to the orbital light curve become unacceptably poor. Because so few parameters dominate the model, we were able to calculate a grid of models that covered the likely parameter space. The grid points in the primary mass were set at intervals of 1 or 2 solar masses from $2\, M_\odot$ to $16\, M_\odot$ (see the first column of Tables 2, 3, and 4). The upper limit to the mass of a main-sequence secondary star that just fills the Roche lobe is $\sim 1.4\, M_\odot$ \citep{pat05}. We therefore chose grid points in secondary mass at $1.4\, M_\odot$, $1.0\, M_\odot$, and $0.8\, M_\odot$, the latter two corresponding to evolved secondaries. The grid points in inclination were typically at half-degree intervals, and those in disk luminosity at intervals of roughly 10\%. For the models with a secondary star mass of $1.4\ M_\odot$ we adopted a temperature of 6650~K, which is appropriate for a main-sequence star of that mass. For the models with secondary masses of $0.8\, M_\odot$ and $1.0\, M_\odot$ we adopted temperatures of 4700~K and 4900~K respectively, which are more appropriate for an evolved secondary. Again, though, the intrinsic temperature of the secondary has little effect on the synthetic light curve.
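For each pair of masses, the grid evaluation reduces to locating the minimum of a two-dimensional array of relative $\chi^2$ values over inclination and disk luminosity. The sketch below illustrates that step with an artificial $\chi^2$ surface standing in for the \texttt{XRbinary} output; the surface, grid spacings, and minimum location are hypothetical.

```python
import numpy as np

# Hypothetical stand-in for one (M1, M2) pair: relative chi^2 as a
# function of inclination (deg) and disk luminosity (erg/s).  The
# quadratic surface below is artificial; the real values come from
# fitting synthetic light curves to the observed photometry.
incl = np.arange(10.0, 16.0, 0.25)
lum = np.linspace(0.5e38, 2.0e38, 16)
I, L = np.meshgrid(incl, lum, indexing="ij")
chi2 = 1.118e5 + 40.0 * (I - 12.75) ** 2 + ((L - 1.3e38) / 2.0e36) ** 2

# Pinpoint the best-fit inclination and disk luminosity, as read off
# the contour plots for each mass pair in the text.
i_best, j_best = np.unravel_index(np.argmin(chi2), chi2.shape)
best_incl, best_lum = incl[i_best], lum[j_best]
```

The best-fit values found this way, one pair per $(M_1, M_2)$ combination, are what populate the inclination and luminosity columns of Tables 2, 3, and 4.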
The output at each grid point is a synthetic light curve and a value of relative $\chi^2$ for the fit of the synthetic light curve to the observed data. We report a relative $\chi^2$ because the noise in the light curves is dominated by flickering noise, which is both highly correlated and variable in time, making measurement of the absolute $\chi^2$ intractable. The relative values of $\chi^2$ are, though, adequate for determining the best-fit parameters and their standard deviations. Plots like the one shown in Figure \ref{contour} were used to pinpoint the inclination and disk luminosity that yield the smallest relative $\chi^2$ for each pair of masses. In that figure, the red zones have the highest values of $\chi^2$, while the blue regions have the lowest values and correspond to better fits to the data. Once the masses have been chosen, the inclination is highly constrained, typically to within $0.5^\circ$; the disk luminosity is less constrained, but still typically to within 30\%. The results are given in Tables~\ref{table:eight}, \ref{table:one}, and \ref{table:four}, which list the relative $\chi^2$ for the best-fit values of the inclination and disk luminosity for each pair of primary and secondary masses. Figure~\ref{mass} compares the models by plotting the best-fit values of $\chi^2$ against the mass of the primary star. The lowest values of $\chi^2$ occur for $M_2 = 0.8$ and $1.0\, M_\odot$, and for $M_1$ between 2 and $5\, M_\odot$. The 90\% upper bound on the mass of the black hole is $6.2\, M_\odot$. We are unable to place a useful lower limit on the mass of the black hole, but masses in the range of the known neutron star masses are not excluded. The very best fit occurs for $M_1 = 3.0\, M_\odot$, $M_2 = 1.0\, M_\odot$, and an orbital inclination $i = 12.75^{\circ}$. The preferred mass of the secondary in our models is fully consistent with the results found by \citet{hak14}.
Our models give a 90\% upper bound of $6.2\, M_\odot$; the corresponding 90\% lower bound lies below the plausible mass range for a black hole primary. The $1\sigma$ uncertainty on the best-fit mass $M_1 = 3.0\, M_\odot$ is $\pm 2.5\, M_\odot$. Even though we cannot constrain the lower bound on the mass based solely on our modeling, we adopt a lower bound of $M_1 = 2.0\, M_\odot$ from the assumption that the primary is a black hole and not a neutron star. Black holes are expected to form only when the mass of the compact object is too high for it to be a neutron star. A black hole can be created by, for example, the collapse of the core of a star during a core-collapse supernova, the merger of two neutron stars, or the collapse of a neutron star that is accreting mass; in all these scenarios the black hole forms only if the mass exceeds the highest possible mass for a neutron star. Currently, the observed upper mass limit of neutron stars is $2.0\, M_\odot$ \citep{dem10}. Figure \ref{good} shows the synthetic light curve for the best-fit model overplotted on the mean optical light curve. The correlated errors in the light curves and imperfect model physics broaden the range of permitted black hole masses beyond the formal 90\% upper limit, possibly even to much higher masses. The best-fit masses are, however, unchanged by these considerations. \begin{figure} \centering \subfigure[The best-fitting synthetic light curve for an inclination of $75\,^{\circ}$. The dip near phase zero is caused by an eclipse of the secondary star by the accretion disk.] { \includegraphics[scale=0.37]{Bad.pdf} \label{bad} } \\ \subfigure[The very best fitting synthetic light curve. The parameters of the model are $M_1 = 3.0 M_\odot$, $M_2 = 1.0M_\odot$, and $i = 12.75^{\circ}$.] { \includegraphics[scale=0.37]{Good.pdf} \label{good} } \caption{Two best-fit synthetic light curves overplotted on the mean orbital light curve of V1408 Aql.
The model fits deviate from the average light curve near phase 1.0, likely because of insufficient phase coverage there; additional photometry is needed to confirm this.} \label{fits-fig} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0.0,scale=0.066]{Onefit.jpg} \end{center} \caption[Relative $\chi^2$ contours.]{The contour plot shows the relative $\chi^2$ for models with different inclinations and disk luminosities for the case of a $2 M_\odot$ primary and a $0.8M_\odot$ secondary. The red regions represent the worst fits; the blue regions have the lowest $\chi^2$ and therefore the best fits to the optical light curve. The red cross marks the best fit, which has an inclination of $i = 11\,^{\circ}$ and a luminosity of $L = 7.50\e{37}$~erg s$^{-1}$.} \label{contour} \end{figure} \begin{table*}[ht] \caption{Models with a Secondary Star Mass of $0.8M_\odot$} \centering \begin{center} \begin{tabular}{l*{8}{c}r} \hline \hline $M_1$ & $a$ & $R_{Disk}$ & $i$ & $L_{D}$ & $L_{D}/L_{Edd}$ & Relative $\chi^2$ \\ ($M_\odot$) & (AU) & (a) & (deg) & ($10^{38}$ erg s$^{-1}$) & & ($\e{5}$) \\ \hline 2 & 0.0145 & 0.414 & 11.0 & 0.75 & 0.30 & 1.121 \\ 3 & 0.0160 & 0.446 & 13.5 & 0.72 & 0.19 & 1.120 \\ 4 & 0.0173 & 0.469 & 16.0 & 0.71 & 0.14 & 1.122 \\ 5 & 0.0184 & 0.486 & 18.5 & 0.78 & 0.12 & 1.121 \\ 6 & 0.0194 & 0.499 & 21.0 & 0.70 & 0.09 & 1.124 \\ 7 & 0.0204 & 0.511 & 23.5 & 0.72 & 0.08 & 1.130 \\ 8 & 0.0212 & 0.520 & 26.0 & 0.75 & 0.07 & 1.140 \\ 10 & 0.0227 & 0.536 & 31.0 & 0.90 & 0.07 & 1.156 \\ 12 & 0.0240 & 0.548 & 36.0 & 1.30 & 0.09 & 1.171 \\ 14 & 0.0252 & 0.558 & 41.0 & 1.60 & 0.09 & 1.177 \\ 16 & 0.0263 & 0.567 & 46.0 & 2.90 & 0.14 & 1.162 \\ \hline \end{tabular} \end{center} \raggedright Note: $a$ is the separation of the stars, $R_{Disk}$ is the outer radius of the disk in units of $a$, $L_{D}$ is the luminosity of the disk, and $L_{Edd}$ is the Eddington luminosity.
\label{table:eight} \end{table*} \begin{table*}[ht] \caption{Models with a Secondary Star Mass of $1.0M_\odot$} \centering \begin{center} \begin{tabular}{l*{8}{c}r} \hline \hline $M_1$ & $a$ & $R_{Disk}$ & $i$ & $L_{D}$ & $L_{D}/L_{Edd}$ & Relative $\chi^2$ \\ ($M_\odot$) & (AU) & (a) & (deg) & ($10^{38}$ erg s$^{-1}$) & & ($\e{5}$) \\ \hline 2 & 0.0148 & 0.396 & 10.50 & 1.4 & 0.56 & 1.121 \\ 3 & 0.0163 & 0.429 & 12.75 & 1.3 & 0.34 & 1.118 \\ 4 & 0.0176 & 0.451 & 15.00 & 1.4 & 0.28 & 1.120 \\ 5 & 0.0187 & 0.469 & 17.25 & 1.4 & 0.22 & 1.121 \\ 6 & 0.0196 & 0.482 & 19.50 & 1.6 & 0.21 & 1.123 \\ 7 & 0.0205 & 0.494 & 21.75 & 1.6 & 0.18 & 1.125 \\ 8 & 0.0214 & 0.504 & 24.00 & 1.6 & 0.16 & 1.130 \\ 10 & 0.0228 & 0.520 & 28.50 & 1.9 & 0.15 & 1.150 \\ 12 & 0.0241 & 0.533 & 32.50 & 1.7 & 0.11 & 1.165 \\ 14 & 0.0253 & 0.544 & 36.50 & 1.7 & 0.10 & 1.177 \\ 16 & 0.0264 & 0.552 & 41.00 & 2.8 & 0.14 & 1.182 \\ \hline \end{tabular} \end{center} \raggedright See note to Table~\ref{table:eight}. \label{table:one} \end{table*} \begin{table*}[ht] \caption{Models with a Secondary Star Mass of $1.4M_\odot$} \centering \begin{center} \begin{tabular}{l*{8}{c}r} \hline \hline $M_1$ & $a$ & $R_{Disk}$ & $i$ & $L_{D}$ & $L_{D}/L_{Edd}$ & Relative $\chi^2$ \\ ($M_\odot$) & (AU) & (a) & (deg) & ($10^{38}$ erg s$^{-1}$) & & ($\e{5}$) \\ \hline 2 & 0.0154 & 0.369 & 10.0 & 1.2 & 0.48 & 1.129 \\ 3 & 0.0168 & 0.401 & 12.0 & 1.7 & 0.45 & 1.127 \\ 4 & 0.0180 & 0.425 & 14.0 & 1.9 & 0.38 & 1.127 \\ 5 & 0.0191 & 0.442 & 16.0 & 2.3 & 0.37 & 1.129 \\ 6 & 0.0200 & 0.457 & 18.0 & 2.8 & 0.37 & 1.134 \\ 7 & 0.0209 & 0.469 & 20.0 & 3.0 & 0.34 & 1.140 \\ 8 & 0.0217 & 0.479 & 22.0 & 3.4 & 0.34 & 1.145 \\ 10 & 0.0231 & 0.496 & 26.0 & 4.2 & 0.33 & 1.161 \\ 12 & 0.0244 & 0.509 & 30.0 & 5.0 & 0.33 & 1.183 \\ 14 & 0.0256 & 0.520 & 34.0 & 6.3 & 0.36 & 1.208 \\ 16 & 0.0266 & 0.530 & 38.0 & 8.9 & 0.44 & 1.224 \\ \hline \end{tabular} \end{center} \raggedright See note to Table~\ref{table:eight}. 
\label{table:four} \end{table*} \section{Discussion} The fits to the orbital light curve of V1408~Aql favor low orbital inclinations, much lower than the inclinations usually adopted in analyses of its X-ray spectral energy distribution. According to \citet{mai14}, for example, fits to the X-ray spectrum of 4U~1957+115 require an orbital inclination near $75^\circ$ to obtain the canonical X-ray spectral hardening factor $h_d = T_{color}/T_{eff} = 1.7$ \citep{dav05,dav06} (this quantity is often called the color correction factor and denoted by $f_{c}$). Lower inclinations yield values of $h_d$ greater than 1.7. To rigidly limit the color correction factor to values near 1.7 is, however, unwarranted. \citet{now08} preferred high inclinations near $75^\circ$, which, in combination with a large distance, low black hole mass, and high accretion rate, reconcile the high temperature and low normalization obtained from their models. For some of their models they fixed the inclination at $75^\circ$ in order to be consistent with the interpretation of the optical observations of \citet{hak99}. There is ample theoretical and observational evidence that color correction factors can be much larger than 1.7; Cygnus X-1, for example, shows one of the highest color correction factors, up to $f_c \sim 5$ \citep{rey13}. Color correction factors differ from one X-ray source to another, and can even vary within a single source \citep{mer00,dun11,sal13}. While it is not our intent to re-analyze the X-ray spectral energy distribution, we note that color correction factors larger than 1.7 do permit reasonable system parameters at low orbital inclinations. Although the typical value of the spectral hardening factor is $f_c = 1.7$, \citet{mai14} derived $f_c = 2.0$--$2.2$ for their preferred model.
\citet{now12} fit the spectral energy distribution of 4U~1957+115 with several sets of models, one of which was the {\tt eqpair} model with color correction factors ranging from $f_c = 1.7$ to 3.3. While the fits of the {\tt eqpair} model yielded acceptable values of $\chi^2$, they required a low value of the normalization factor, $N_{eqp} = 1.926\e{-4}$. \citet{now12} interpreted the low normalization factor as evidence for a small inner disk radius and, consequently, evidence for a rapidly spinning black hole. While the fits all assumed an orbital inclination of $75^\circ$, the parameters of the fits are highly degenerate. \citet{now12} give a scaling relation for other system parameters: \begin{equation} N_{eqp} = \left( \dfrac{M_1}{1\, M_\odot} \right)^2 \left( \dfrac{D}{1\, \textrm{kpc}} \right)^{-2} f_{c}^{-4} \cos i, \end{equation} where $D$ is the distance and $f_c$ is the color correction factor. With $M_1 = 3\, M_\odot$, $i = 12.75^\circ$, and $N_{eqp} = 1.926\e{-4}$, this becomes \begin{equation} D f_c^2 = 213.5\ \textrm{kpc}. \end{equation} If one insists on $f_c = 1.7$, the distance would be $D = 74\ \textrm{kpc}$, placing V1408~Aql uncomfortably far outside the Galaxy. If, instead, we set $f_c = 2.3$, the distance becomes 40~kpc; and if $f_c = 3.3$, it drops to 20~kpc, either of which would place V1408~Aql in the halo of the Galaxy. This range of distances agrees with the results of \citet{rus10}. If V1408~Aql contains a black hole, it would need to be at a distance between 22 and 40~kpc to lie in the same region of the $(L_X,L_{OPT})$ diagram as other black hole X-ray binaries in their soft states, with the larger distance somewhat preferred (see Figure~8 of \citealt{rus10}). At the larger distance the X-ray luminosity at 2--10~keV would be $\sim\! 10^{38}\ \textrm{erg s}^{-1}$. This agrees with the large, distance-independent disk luminosity that is required to heat the secondary star in our models of the orbital light curve.
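The degeneracy between distance and color correction factor in the \citet{now12} normalization can be made explicit numerically. The sketch below simply evaluates the scaling relation above at our best-fit parameters; it is an illustration, not analysis code.

```python
import math

# Scaling relation from Nowak et al. (2012):
#   N_eqp = (M1 / Msun)^2 * (D / kpc)^-2 * fc^-4 * cos(i)
# solved for the distance D (in kpc) at fixed normalization.
M1 = 3.0          # black hole mass in solar masses (our best fit)
i_deg = 12.75     # orbital inclination in degrees (our best fit)
N_eqp = 1.926e-4  # eqpair normalization from Nowak et al. (2012)

def distance_kpc(fc):
    """Distance in kpc implied by the eqpair normalization for a
    given color correction factor fc."""
    return math.sqrt(M1**2 * math.cos(math.radians(i_deg)) / N_eqp) / fc**2

# D * fc^2 is a constant, about 213.5 kpc, so:
#   fc = 1.7 -> D ~ 74 kpc;  fc = 2.3 -> D ~ 40 kpc;  fc = 3.3 -> D ~ 20 kpc
```

Raising $f_c$ at fixed normalization pulls the implied distance in as $f_c^{-2}$, which is what brings V1408~Aql from well outside the Galaxy to its halo.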
Finally, we have found a correlation between the mean brightness of V1408~Aql and the amplitude of the sinusoidal orbital modulation: the amplitude increases as the mean brightness increases (see Figure~\ref{amp-fig}). \citet{rus10} found a correlation between the mean optical and mean X-ray fluxes from V1408~Aql: the mean X-ray flux increases as the mean optical flux increases. Taken together, these two correlations imply that the amplitude of the sinusoidal orbital modulation increases as the X-ray flux increases. This is the correlation expected from a model in which the orbital modulation is caused by the heated face of the secondary star. The X-ray flux is a good proxy for the total disk luminosity: as the X-ray flux increases, the disk becomes more luminous and heats the face of the secondary star toward the disk to a higher temperature, increasing the amplitude of the orbital variation. \section{Summary} We have presented new optical high-speed photometry of V1408~Aql from nine nights in 2012 July and August. The optical light curve continues to display a nearly sinusoidal orbital modulation along with night-to-night variations of the mean brightness. We combined the new photometry with our previously published photometry to derive a more accurate orbital period and mean orbital light curve, and to better define the night-to-night variations. We find that the amplitude of the orbital modulation is strongly correlated with the mean brightness, $dA/dB = 0.49$, where $A$ is the peak-to-peak amplitude of the sinusoidal modulation and $B$ is the mean brightness, both in units of relative intensity. The relative amplitude of the orbital modulation rises from 23\% when V1408~Aql is at the minimum of the observed range of its brightness to 29\% at the maximum of the range. We attribute the changes in mean brightness to changes in the luminosity of the accretion disk around the black hole.
After scaling all the nightly light curves to the same mean brightness, we derived a more accurate orbital period, 0.388893(3) days, and mean orbital light curve, shown in Figure~\ref{good}. The mean orbital light curve is consistent with a model in which the orbital modulation is caused entirely by the changing aspect of the heated face of the secondary star as it revolves around the black hole. Fits of synthetic orbital light curves based on this model to the observed light curve favor low orbital inclinations and low black hole masses, the best fit occurring for $M_1 = 3.0\, M_\odot$, $M_2 = 1.0\, M_\odot$, and $i=12.75^{\circ}$. The upper bound to the mass of the black hole is $6.2\, M_\odot$ with a 90\% probability, although uncertainties in the data and the models allow higher masses, possibly much higher masses. Orbital inclinations higher than about $65^\circ$ are strongly disfavored by the lack of eclipses. The low orbital inclinations we have found are compatible with previous analyses of the X-ray spectral energy distribution of V1408~Aql if the color correction factor is somewhat larger than the value typically adopted in those analyses. If the distance to V1408~Aql is 40~kpc, the color correction factor must be increased to 2.3; and if the distance is 20~kpc, the color correction factor rises to 3.3. In conclusion, the compact star in V1408~Aql is a viable candidate for a black hole whose mass lies within the gap in the distribution of compact star masses. \acknowledgements We thank Tom Maccarone and Michael Nowak for helpful discussions regarding the nature of the compact object and Amanda Bayless for providing some of the data used for this project. We also thank an anonymous referee for providing helpful suggestions towards the improvement of this paper. This research is supported by NSF Grant No.\ 0958783 and by a MARC Scholarship to the University of Texas at El Paso. \clearpage
\section{Introduction} Let $f\colon M\to M$ be a diffeomorphism of a compact smooth manifold. Among the invariant probability measures for the system, thermodynamic formalism identifies distinguished measures called \emph{equilibrium states}; these are measures that maximize the quantity $h_\mu(f) + \int \varphi\,d\mu$, where $\varphi\colon M\to \RR$ is a \emph{potential function}. Sinai, Ruelle, and Bowen \cite{jS72,rB75,dR76} showed that a mixing Anosov diffeomorphism has a unique equilibrium state $\mu_\varphi$ for every H\"older continuous potential $\varphi$, and that for the \emph{geometric potential} $\varphi^u(x) = -\log|\det Df|_{E^u(x)}|$, this unique equilibrium state is the \emph{SRB measure}, which is the physically relevant invariant measure. The extension of this theory to systems beyond uniform hyperbolicity has generated a great deal of activity \cite{DU,lY98,oS01,BT09,ST16,PSZ}. In this paper, we study a class of derived from Anosov (DA) partially hyperbolic diffeomorphisms using theory developed by the first and third authors in \cite{CT4}. The results from \cite{CT4} show that equilibrium states exist and are unique under the hypotheses that `obstructions to the specification property and regularity' and `obstructions to expansivity' carry less topological pressure than the whole space. We consider the class of diffeomorphisms introduced by Ma\~n\'e~\cite{man78}, which are partially hyperbolic maps $f_M\colon \TT^d\to \TT^d$ ($d\geq 3$) constructed as $C^0$-perturbations of a hyperbolic toral automorphism $f_A$ with 1-dimensional unstable bundle. These maps are robustly transitive but not Anosov; they admit an invariant splitting $T\TT^d = E^u \oplus E^c \oplus E^s$ where vectors in $E^c$ are sometimes expanded and sometimes contracted. We give explicit criteria under which $f_M$, or a $C^1$-perturbation, has a unique equilibrium state for a H\"older continuous potential $\varphi\colon \TT^d\to\RR$.
In \cite{CFT_BV}, we gave analogous results for the Bonatti--Viana family of diffeomorphisms, which admit a dominated splitting but are not partially hyperbolic. Our results here for the Ma\~n\'e family are stronger; in particular, the fact that the unstable bundle is 1-dimensional and uniformly expanding allows us to work at arbitrarily small scales and obtain large deviations and multifractal results not present in \cite{CFT_BV}. Our proofs, which rely on general pressure estimates for DA diffeomorphisms from \cite{CFT_BV}, are correspondingly simpler. An additional novelty in this paper is that we apply our results to a larger class of H\"older potential functions by giving a criterion for uniqueness involving the H\"older semi-norm. We emphasize that although we choose to focus on the Ma\~n\'e class since it is a model class of partially hyperbolic examples in the literature, our approach to existence and uniqueness applies more generally and does not rely on 1-dimensionality of the unstable manifold in an essential way, as is made clear in \cite{CFT_BV}. We need to control two parameters $\rho,r>0$ in Ma\~n\'e's construction. We write $\mathcal{F}_{\rho,r}$ for the set of Ma\~n\'e diffeomorphisms $f_M$ such that \begin{enumerate}[label=(\roman{*})] \item $f_A$ has a fixed point $q$ such that $f_M = f_A$ on $\TT^d \setminus B(q,\rho)$, and \item \label{gamma} if an orbit spends a proportion at least $r$ of its time outside $B(q,\rho)$, then it contracts vectors in $E^c$. \end{enumerate} The parameter $r$ has a more intrinsic definition as an upper bound on a quantity involving the maximum derivative of $f_M$ on $E^c$, and the construction can be carried out so $r$ is arbitrarily small. Our results apply to $C^1$ perturbations of $f_M$ that are partially hyperbolic, dynamically coherent, and satisfy \ref{gamma}. We denote this $C^1$-open set by $\mathcal{U}_{\rho,r}$.
We describe the Ma\~n\'e construction and $\mathcal{U}_{\rho,r}$ more precisely in \S\ref{s.constructions}. We now state our main theorem. Here $h$ is the topological entropy of $f_A$, $L$ is a constant defined in \S \ref{pressureestimate} depending on $f_A$ and the maximum of $d_{C^0}(g, f_A)$ for $g \in \mathcal{U}_{\rho,r}$, and $H(r) = -r \log r -(1-r) \log (1-r)$. \begin{thma}\label{t.mane} Given $g\in \mathcal{U}_{\rho,r}$ and $\varphi\colon \mathbb{T}^d\to \mathbb{R}$ H\"older continuous, let \[ \Psi(\rho,r,\varphi) = (1-r)\sup_{B(q,\rho)} \varphi + r(\sup_{\TT^d} \varphi + h + \log L) + H(2r). \] If $\Psi(\rho,r,\varphi)< P(\varphi; g)$, then $(\TT^d,g,\varphi)$ has a unique equilibrium state. \end{thma} We then apply Theorem \ref{t.mane} by finding sufficient conditions to verify the inequality $\Psi(\rho,r,\varphi)< P(\varphi; g)$. In \S\ref{cor.br}, we show that for a fixed diffeomorphism in $\mathcal{U}_{\rho,r}$, every H\"older potential satisfying a bounded range condition has a unique equilibrium state. In \S \ref{sec:corollaries}, we obtain estimates on $\Psi(\rho,r,\varphi)$ and $P(\varphi; g)$ in terms of the H\"older semi-norm $|\varphi|_\alpha$ of the potential. We apply these estimates to obtain the following theorem, which says that the set of potentials for which Theorem \ref{t.mane} applies for $g\in \mathcal{U}_{\rho,r}$ contains a ball around the origin in $C^\alpha(\TT^d)$ whose radius goes to $\infty$ as $\rho,r\to0$. \begin{thma}\label{cor1.2} There is a function $T(\rho,r; \alpha)$ with the property that \begin{enumerate}[label=\textup{(\arabic{*})}] \item if $g\in \mathcal{U}_{\rho,r}$ and $|\varphi|_\alpha < T(\rho,r; \alpha)$, then $(\TT^d,g,\varphi)$ has a unique equilibrium state; and \item $T(\rho,r; \alpha)\to\infty$ as $\rho,r\to0$. 
\end{enumerate} \end{thma} As an immediate consequence, we see that for a fixed H\"older potential $\varphi$, there exist $\rho, r$ so that there is a unique equilibrium state with respect to any $g \in \mathcal{U}_{\rho,r}$. Theorem \ref{cor1.2} is proved using Theorem \ref{thm:pressure-gap}, which is a general lower bound on the entropy of an equilibrium state with respect to $f_A$ in terms of the H\"older norm of the potential that allows us to estimate $P(\varphi;g)$ from below. We apply Theorem \ref{t.mane} to scalar multiples of the geometric potential $\varphi^u(x) = \varphi^u_g(x) = -\log\|Dg|_{E^u(x)}\|$. We obtain the following results. \begin{thma}\label{main3} Let $g \in \mathcal{U}_{\rho, r}$ be a $C^2$ diffeomorphism. Suppose that \begin{equation}\label{eqn:mane-srb-condition} r(h + \log L )+H(2r) < \min \left\{\frac{\sup_{x\in \TT^d} \varphi^u_g(x)}{\inf_{x\in\TT^d} \varphi^u_g(x)}\, h,\ -\sup_{x\in \TT^d} \varphi^u_g(x)\right\}. \end{equation} Then the following properties hold. \begin{enumerate}[label=\textup{(\arabic{*})}] \item $t=1$ is the unique root of the function $t\mapsto P(t\varphi^u_g)$; \item\label{ag} There exists $a=a(g)>0$ such that $t\varphi^u$ has a unique equilibrium state $\mu_t$ for each $t\in (-a,1+a)$; \item $\mu_1$ is the unique SRB measure for $g$.
\end{enumerate} \end{thma} The quantity $\sup \varphi^u / \inf \varphi^u$ is uniformly positive for all Ma\~n\'e diffeomorphisms, and the construction can be carried out so $\sup \varphi^u_{f_M}$ and $\inf \varphi^u_{f_M}$ are both close to $\varphi^u_{f_A} = - \log \lambda_u = - h$, where $\lambda_u$ is the unique eigenvalue of $A$ greater than $1$, so the right hand side of \eqref{eqn:mane-srb-condition} is close to $\log \lambda_u$. Thus, the inequality \eqref{eqn:mane-srb-condition} holds when $\rho, r$ are small. For a sequence of Ma\~n\'e diffeomorphisms $f_k \in \mathcal{F}_{\rho_k, r_k}$ with $\rho_k, r_k\to 0$, it is easy to ensure that the H\"older semi-norm of $\varphi^u_{f_k}$ is uniformly bounded (for example by ensuring that the restrictions of each $f_k$ to $B(q, \rho_k)$ are rescalings of each other). In this case, applying Theorem \ref{cor1.2}, we see that $a(f_k) \to \infty$ as $k \to \infty$ in \ref{ag} above. In \S\ref{s.ldp}, we derive consequences of Theorem \ref{main3} for the multifractal analysis of the largest Lyapunov exponent, and give an upper large deviations principle for the equilibrium states in our main theorems. We now discuss related results in the literature for partially hyperbolic systems, which have largely focused on the measure of maximal entropy (MME). For ergodic toral automorphisms, the Haar measure was shown to be the unique MME by Berg \cite{Berg} using convolutions. Existence of a unique MME for the Ma\~n\'e examples was obtained in~\cite{BFSV}. For partially hyperbolic diffeomorphisms of the 3-torus homotopic to a hyperbolic automorphism, uniqueness of the MME was proved by Ures \cite{Ur}. The techniques for the results in the previous paragraph are not well suited to the study of equilibrium states for $\varphi \neq 0$, which remains largely unexplored.
When the first version of the present work appeared [arXiv:1505.06371v1], the only available references for this subject were existence results for a certain class of partially hyperbolic horseshoes \cite{LOR}, with uniqueness results only for potentials constant on the center-stable direction \cite{AL12}. An improved picture has emerged since then. Spatzier and Visscher studied uniqueness of equilibrium states for frame flows \cite{SV}. Rios and Siqueira \cite{RS15} obtained uniqueness for H\"older potentials with small variation for certain partially hyperbolic horseshoes, and Ramos and Siqueira studied statistical properties of these equilibrium states \cite{RS16}. Crisostomo and Tahzibi \cite{CT16} studied uniqueness of equilibrium states for partially hyperbolic DA systems on $\TT^3$, including the Ma\~n\'e family, using techniques very different from ours, under the extra assumption that the potential is constant on `collapse intervals' of the semi-conjugacy. The theory of SRB measures is much more developed. The fact that there is a unique SRB measure for the Ma\~n\'e diffeomorphisms follows from~\cite{BV}. The statistical properties of SRB measures are an active area of research \cite{AL, aC02, PSZ}. In \cite{CV}, interesting results are obtained on the continuity of the entropy of the SRB measure. The characterization of the SRB measure as an equilibrium state for DA systems obtained along an arc of $C^\infty$ diffeomorphisms was established by Carvalho \cite{mC93}, with partial results in the $C^r$ case. However, the characterization of the SRB measure as a \emph{unique} equilibrium state is, to the best of our knowledge, novel for the Ma\~n\'e class. Immediate consequences of this characterization include the upper large deviations principle and multifractal analysis results of \S\ref{s.ldp}. We now outline the paper. In \S\ref{s.back}, we give background material from \cite{CT4} on thermodynamic formalism.
In \S\ref{s.perturb}, we recall pressure estimates on $C^0$-perturbations of Anosov systems. In \S\ref{s.constructions}, we give details of the Ma\~n\'e construction. In \S\ref{s.mane.pf}, we prove Theorem \ref{t.mane}. In \S \ref{sec:corollaries}, we prove Theorem \ref{cor1.2}. In \S\ref{s.srb}, we prove Theorem \ref{main3}. In \S\ref{s.ldp}, we give results on large deviations and multifractal analysis. In the preliminary sections \S\S~\ref{s.back}--\ref{s.perturb}, we follow the presentation of \cite{CFT_BV}, referring the reader to that paper for the proofs of some necessary background material. \section{Background and preliminary results}\label{s.back} \subsection{Pressure} Let $f\colon X\to X$ be a continuous map on a compact metric space. We identify $X\times \mathbb{N}$ with the space of finite orbit segments by identifying $(x,n)$ with $(x,f(x),\dots,f^{n-1}(x))$. Given a continuous potential function $\varphi\colon X\to \mathbb{R}$, write $ S_n\varphi(x) = S_n^f \varphi(x) = \sum_{k=0}^{n-1} \varphi(f^kx) $. For each $\eta>0$, write \[ \Var(\varphi,\eta) = \sup\{|\varphi(x)-\varphi(y)| \,:\, x,y\in X, d(x,y) < \eta\}. \] Since we consider H\"older potentials, we will often use the bound \[ \Var(\varphi,\eta) \leq |\varphi|_\alpha \eta^\alpha, \text{ where } |\varphi|_\alpha := \sup_{x\neq y} \frac{|\varphi(x)-\varphi(y)|}{d(x,y)^\alpha}. \] The $n$th Bowen metric associated to $f$ is defined by \[ d_n(x,y) = \max \{ d(f^kx,f^ky) \,:\, 0\leq k < n\}. \] Given $x\in X$, $\varepsilon>0$, and $n\in \NN$, the \emph{Bowen ball of order $n$ with center $x$ and radius $\varepsilon$} is $ B_n(x,\varepsilon) = \{y\in X \, :\, d_n(x,y) < \varepsilon\}. $ A set $E\subset X$ is $(n,\varepsilon)$-separated if $d_n(x,y) \geq \varepsilon$ for all $x,y\in E$. Given $\mathcal{D}\subset X\times \mathbb{N}$, we interpret $\mathcal{D}$ as a \emph{collection of orbit segments}. 
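For later use we note that the Bowen ball can be written as an intersection of pulled-back balls of the original metric:
\[
B_n(x,\varepsilon) = \bigcap_{k=0}^{n-1} f^{-k} B(f^kx,\varepsilon),
\]
where $B(z,\varepsilon)$ denotes the ordinary open ball; thus $y\in B_n(x,\varepsilon)$ exactly when the orbit segments $(x,n)$ and $(y,n)$ stay within $\varepsilon$ of each other at every step.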
Write $\mathcal{D}_n = \{x\in X \, :\, (x,n)\in \mathcal{D}\}$ for the set of initial points of orbits of length $n$ in $\mathcal{D}$. Then we consider the partition sum $$ \Lambda^{\mathrm{sep}}_n(\mathcal{D},\varphi,\varepsilon; f) =\sup \Big\{ \sum_{x\in E} e^{S_n\varphi(x)} \, :\, E\subset \mathcal{D}_n \text{ is $(n,\varepsilon)$-separated} \Big\}. $$ The \emph{pressure of $\varphi$ on $\mathcal{D}$ at scale $\varepsilon$} is $$ P(\mathcal{D},\varphi,\varepsilon; f) = \varlimsup_{n\to\infty} \frac 1n \log \Lambda^{\mathrm{sep}}_n(\mathcal{D},\varphi,\varepsilon), $$ and the \emph{pressure of $\varphi$ on $\mathcal{D}$} is $$ P(\mathcal{D},\varphi; f) = \lim_{\varepsilon\to 0}P(\mathcal{D},\varphi,\varepsilon). $$ Given $Z \subset X$, let $P(Z, \varphi, \varepsilon; f) := P(Z \times \NN, \varphi, \varepsilon; f)$; observe that $P(Z, \varphi; f)$ denotes the usual upper capacity pressure \cite{Pesin}. We often write $P(\varphi;f)$ in place of $P(X, \varphi;f)$ for the pressure of the whole space. When $\varphi=0$, our definition gives the \emph{entropy of $\mathcal{D}$}: \begin{equation}\label{eqn:h} \begin{aligned} h(\mathcal{D}, \varepsilon; f)= h(\mathcal{D}, \varepsilon) &:= P(\mathcal{D}, 0, \varepsilon) \mbox{ and } h(\mathcal{D})= \lim_{\varepsilon\rightarrow 0} h(\mathcal{D}, \varepsilon). \end{aligned} \end{equation} Write $\mathcal{M}(f)$ for the set of $f$-invariant Borel probability measures and $\mathcal{M}_e(f)$ for the set of ergodic measures in $\mathcal{M}(f)$. The variational principle for pressure \cite[Theorem 9.10]{Wal} states that \[ P(\varphi;f)=\sup_{\mu\in \mathcal{M}(f)}\left\{ h_{\mu}(f) +\int \varphi \,d\mu\right\} =\sup_{\mu\in \mathcal{M}_e(f)}\left\{ h_{\mu}(f) +\int \varphi \,d\mu\right\}. \] A measure achieving the supremum is an \emph{equilibrium state}. 
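For example, taking $\varphi=0$ recovers the variational principle for entropy, $P(0;f) = h_\mathrm{top}(f) = \sup_{\mu\in\mathcal{M}(f)} h_\mu(f)$, and the equilibrium states for $\varphi=0$ are precisely the measures of maximal entropy. Uniqueness of equilibrium states is therefore a simultaneous generalization of uniqueness of the MME and, for the geometric potential studied in \S\ref{s.srb}, uniqueness of the SRB measure.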
\subsection{Obstructions to expansivity, specification, and regularity} Bowen showed in \cite{Bow75} that if $(X,f)$ has expansivity and specification, and $\varphi$ has a certain regularity property, then there is a unique equilibrium state. We recall definitions and results from \cite{CT4}, which show that non-uniform versions of Bowen's hypotheses suffice to prove uniqueness. Given a homeomorphism $f\colon X\to X$, the \emph{bi-infinite Bowen ball around $x\in X$ of size $\varepsilon>0$} is the set \[ \Gamma_\varepsilon(x) := \{y\in X \, :\, d(f^kx,f^ky) < \varepsilon \text{ for all } k\in \ZZ \}. \] If there exists $\varepsilon>0$ for which $\Gamma_\varepsilon(x)= \{x\}$ for all $x\in X$, we say $(X, f)$ is \emph{expansive}. When $f$ is not expansive, it is useful to consider the \emph{tail entropy} of $f$ at scale $\varepsilon>0$ \cite{rB72,mM76}: \begin{equation}\label{eqn:Bowen-tail-entropy} h_f^*(\varepsilon) = \sup_{x\in X} \lim_{\delta\to 0} \limsup_{n\to\infty} \frac 1n \log \Lambda^\mathrm{span}_n(\Gamma_\varepsilon(x),\delta;f), \end{equation} where for a set $Y \subset X$, $\Lambda^\mathrm{span}_n(Y, \delta;f) =\inf \{ \# E :\ Y \subset \bigcup_{x\in E} \overline{ B_n(x,\delta)} \}$. \begin{defn} \label{almostexpansive} For $f\colon X\rightarrow X$, the set of non-expansive points at scale $\varepsilon$ is $\mathrm{NE}(\varepsilon):=\{ x\in X \, :\, \Gamma_\varepsilon(x)\neq \{x\}\}$. An $f$-invariant measure $\mu$ is almost expansive at scale $\varepsilon$ if $\mu(\mathrm{NE}(\varepsilon))=0$. Given a potential $\varphi$, the pressure of obstructions to expansivity at scale $\varepsilon$ is \begin{align*} P_\mathrm{exp}^\perp(\varphi, \varepsilon) &=\sup_{\mu\in \mathcal{M}_e(f)}\left\{ h_{\mu}(f) + \int \varphi\, d\mu\, :\, \mu(\mathrm{NE}(\varepsilon))>0\right\} \\ &=\sup_{\mu\in \mathcal{M}_e(f)}\left\{ h_{\mu}(f) + \int \varphi\, d\mu\, :\, \mu(\mathrm{NE}(\varepsilon))=1\right\}.
\end{align*} This is monotonic in $\varepsilon$, so we can define a scale-free quantity by \[ P_\mathrm{exp}^\perp(\varphi) = \lim_{\varepsilon \to 0} P_\mathrm{exp}^\perp(\varphi, \varepsilon). \] \end{defn} \begin{defn} A collection of orbit segments $\mathcal{G}\subset X\times \mathbb{N}$ has \emph{specification at scale $\varepsilon$} if there exists $\tau\in\mathbb{N}$ such that for every $\{(x_j, n_j)\, :\, 1\leq j\leq k\}\subset \mathcal{G}$, there is a point $x$ in $$\bigcap_{j=1}^k f^{-(m_{j-1}+ \tau)}B_{n_j}(x_j, \varepsilon),$$ where $m_{0}=-\tau$ and $m_j = \left(\sum_{i=1}^{j} n_i\right) +(j-1)\tau$ for each $j \geq 1$. \end{defn} The above definition says that there is some point $x$ whose trajectory shadows each of the $(x_i,n_i)$ in turn, taking a transition time of exactly $\tau$ iterates between each one. The numbers $m_j$ for $j\geq 1$ are the times taken for $x$ to shadow $(x_1, n_1)$ up to $(x_j, n_j)$. \begin{defn} \label{Bowen} Given $\mathcal{G}\subset X\times \mathbb{N}$, a potential $\varphi$ has the \emph{Bowen property on $\mathcal{G}$ at scale $\varepsilon$} if \[ V(\mathcal{G},\varphi,\varepsilon) := \sup \{ |S_n\varphi (x) - S_n\varphi(y)| : (x,n) \in \mathcal{G}, y \in B_n(x, \varepsilon) \} <\infty. \] We say $\varphi$ has the \emph{Bowen property on $\mathcal{G}$} if there exists $\varepsilon>0$ so that $\varphi$ has the Bowen property on $\mathcal{G}$ at scale $\varepsilon$. \end{defn} We refer to an upper bound for $V(\mathcal{G},\varphi,\varepsilon)$ as a \emph{distortion constant}. Note that if $\varphi$ has the Bowen property on $\mathcal{G}$ at scale $\varepsilon$, then it has it at all smaller scales. \subsection{General results on uniqueness of equilibrium states} Our main tool for existence and uniqueness of equilibrium states is \cite[Theorem 5.5]{CT4}.
\begin{defn} A \emph{decomposition} for $(X,f)$ consists of three collections $\mathcal{P}, \mathcal{G}, \mathcal{S}\subset X\times (\NN\cup\{0\})$ and three functions $p,g,s\colon X\times \mathbb{N}\to \NN\cup\{0\}$ such that for every $(x,n)\in X\times \NN$, the values $p=p(x,n)$, $g=g(x,n)$, and $s=s(x,n)$ satisfy $n = p+g+s$, and \begin{equation}\label{eqn:decomposition} (x,p)\in \mathcal{P}, \quad (f^p(x), g)\in\mathcal{G}, \quad (f^{p+g}(x), s)\in \mathcal{S}. \end{equation} Given a decomposition $(\mathcal{P},\mathcal{G},\mathcal{S})$ and $M\in \mathbb{N}$, we write $\mathcal{G}^M$ for the set of orbit segments $(x,n)$ for which $p \leq M$ and $s\leq M$. \end{defn} Note that the symbol $(x,0)$ denotes the empty set, and the functions $p, g, s$ are permitted to take the value zero. \begin{thm}[Theorem 5.5 of \cite{CT4}]\label{t.generalM} Let $X$ be a compact metric space and $f\colon X\to X$ a homeomorphism. Let $\varphi \colon X\to\RR$ be a continuous potential function. Suppose that $P_\mathrm{exp}^\perp(\varphi) < P(\varphi)$, and that $(X,f)$ admits a decomposition $(\mathcal{P}, \mathcal{G}, \mathcal{S})$ with the following properties: \begin{enumerate} \item $\mathcal{G}$ has specification at any scale; \item $\varphi$ has the Bowen property on $\mathcal{G}$; \item $P(\mathcal{P} \cup \mathcal{S},\varphi) < P(\varphi)$. \end{enumerate} Then there is a unique equilibrium state for $\varphi$. \end{thm} \section{Perturbations of Anosov Diffeomorphisms}\label{s.perturb} We collect some background material about weak forms of hyperbolicity and perturbations of Anosov diffeomorphisms. \subsection{Partial hyperbolicity} \label{basics} Let $M$ be a compact manifold. 
Recall that a diffeomorphism $f\colon M\to M$ is \emph{partially hyperbolic} if there is a $Df$-invariant splitting $TM=E^s\oplus E^c\oplus E^u$ and constants $N\in \NN$, $\lambda>1$ such that for every $x\in M$ and every unit vector $v^\sigma\in E^\sigma$ for $\sigma\in \{s, c, u\}$, we have \begin{enumerate}[label=(\roman{*})] \item $\lambda \|Df^N_x v^s\|<\|Df^N_x v^c\|<\lambda^{-1}\|Df^N_x v^u\|$, and \item $\|Df^N_x v^s\|<\lambda^{-1}<\lambda<\|Df^N_x v^u\|$. \end{enumerate} A partially hyperbolic diffeomorphism $f$ admits \emph{stable and unstable foliations} $W^s$ and $W^u$, which are $f$-invariant and tangent to $E^s$ and $E^u$, respectively \cite[Theorem 4.8]{yP04}. There may or may not be foliations tangent to either $E^c$, $E^s\oplus E^c$, or $E^c\oplus E^u$. When these exist we denote these by $W^c$, $W^{cs}$, and $W^{cu}$ and refer to these as the center, center-stable, and center-unstable foliations respectively. For $x\in M$, we let $W^{\sigma}(x)$ be the leaf of the foliation $\sigma\in \{s, u, c, cs, cu\}$ containing $x$ when this is defined. For a foliation $W$, we write $d_W$ for the leaf metric, and write $W_\eta(x)$ for the $d_W$-ball of radius $\eta$ in $W(x)$. Suppose $W^1,W^2$ are foliations of $M$ with the property that $TM = TW^1 \oplus TW^2$. We say that $W^1,W^2$ have a \emph{local product structure at scale $\eta>0$ with constant $\kappa\geq 1$} if for every $x,y\in M$ with $\varepsilon := d(x,y) < \eta$, the leaves $W^1_{\kappa \varepsilon}(x)$ and $W^2_{\kappa \varepsilon}(y)$ intersect in a single point. \subsection{Anosov shadowing lemma} \label{constants} The Anosov shadowing lemma is proved in e.g. \cite[Theorem 1.2.3]{sP99}. \begin{lem}[Anosov Shadowing Lemma]\label{shadowinglemma} Let $f$ be a transitive Anosov diffeomorphism. There is $C=C(f)$ so that if $2 \eta>0$ is an expansivity constant for $f$, then every $\frac\eta C$-pseudo-orbit for $f$ can be $\eta$-shadowed by an orbit for $f$. 
\end{lem} The following result is proved in \cite[Lemma 3.2]{CFT_BV} using the natural semi-conjugacy which exists for maps in a $C^0$ neighborhood of $f$ as a consequence of the Anosov shadowing lemma. \begin{lem} \label{pressuredrop} Let $f$ be a transitive Anosov diffeomorphism, $C= C(f)$ the constant from the shadowing lemma, and $3 \eta>0$ an expansivity constant for $f$. If $g\in \Diff(M)$ is such that $d_{C^0}(f,g) < \eta/C$, then: \begin{enumerate}[label=\textup{(\roman{*})}] \item\label{Pg-geq} $P(\varphi; g)\geq P(\varphi;f)- \Var(\varphi, \eta)$; \item\label{Lambdag-leq} $\Lambda^{\mathrm{sep}}_n(\varphi, 3 \eta ;g) \leq \Lambda^{\mathrm{sep}}_n(\varphi, \eta ;f)e^{n \Var(\varphi, \eta) }$. \end{enumerate} \end{lem} In particular, \ref{Lambdag-leq} gives $P(\varphi, 3\eta; g) \leq P(\varphi; f) + \Var(\varphi, \eta)$. \subsection{Pressure estimates} \label{pressureestimate} The Ma\~n\'e examples are $C^0$ perturbations of Anosov maps, where the perturbation is made inside a neighborhood of a fixed point $q$. We estimate the pressure of orbit segments spending nearly all their time near $q$. Let $f$ be a transitive Anosov diffeomorphism of a compact manifold $M$, with topological entropy $h=h_\mathrm{top}(f)$ and expansivity constant $3\eta$. Let $C$ be the constant from the shadowing lemma. For any $\eta>0$ smaller than the expansivity constant for $f$, let $L = L(f, \eta)$ be a constant so that for every $n$, \begin{equation}\label{eqn:Lambda-upper-bound} \Lambda^{\mathrm{sep}}_n(0,\eta; f) \leq L e^{nh}. \end{equation} This is possible by \cite[Lemma 3]{Bow75}. Let $g\colon M\to M$ be a diffeomorphism with $d_{C^0}(f,g) < \eta/C$. 
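Note that for $\varphi=0$ we have $\Var(0,\eta)=0$, so combining \eqref{eqn:Lambda-upper-bound} with Lemma \ref{pressuredrop}\ref{Lambdag-leq} yields the corresponding bound for the perturbed map at the larger scale:
\[
\Lambda^{\mathrm{sep}}_n(0, 3\eta; g) \leq \Lambda^{\mathrm{sep}}_n(0,\eta;f) \leq Le^{nh}.
\]
This is how the constant $L$, which is intrinsic to $f$, enters the entropy and pressure estimates for $g$ below.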
Given a fixed point $q$ of $f$ and a scale $\rho \in (0,3\eta)$, let $\chi_q$ be the indicator function of $M\setminus B(q,\rho)$, and consider the following collection of orbit segments for $g$: \[ \mathcal{C} = \mathcal{C}(g,q,r) = \{(x,n) \in M\times \NN : S_n^g \chi_q(x) < rn \}. \] The following estimates are proved in \cite[Theorem 3.3]{CFT_BV}. \begin{thm} \label{coreestimateallscales} Under the assumptions above, we have the inequality \begin{equation} \label{eqn:hC} h(\mathcal{C},6\eta;g) \leq r(h + \log L) +H(2r), \end{equation} where $H(t) = -t\log t - (1-t)\log(1-t)$. Moreover, given $\varphi\colon M\to\RR$ continuous and $\delta>0$, we have \begin{equation} \label{eqn:PC} P(\mathcal{C},\varphi,\delta;g) \leq (1-r) \sup_{B(q, \rho)}\varphi + r \sup_{M}\varphi + h(\mathcal{C},\delta;g), \end{equation} and thus it follows that \[ P(\mathcal{C}, \varphi; g) \leq h_g^\ast(6 \eta) +(1-r) \sup_{B(q, \rho)} \varphi + r( \sup_{M}\varphi + h + \log L ) + H(2r). \] \end{thm} \subsection{Obstructions to expansivity}\label{sec:obstr-exp} Let $g$ be as in the previous section, and suppose that the following property \ref{E} holds. \begin{enumerate}[label=\textbf{[\Alph{*}]}] \setcounter{enumi}{4} \item\label{E} there exist $\varepsilon >0$, $r>0$, and a fixed point $q$ such that for $x \in M$, if there exists a sequence $n_k\to\infty$ with $\frac{1}{n_k}S^g_{n_k}\chi_q(x) \geq r$, then $\Gamma_\varepsilon(x)=\{x\}$. \end{enumerate} Then $\mathcal{C} = \mathcal{C}(r)$ from above has the following property, which is proved in \cite[Theorem 3.4]{CFT_BV}. \begin{thm} \label{expansivityestimate} Under the above assumptions, we have the pressure estimate $P_\mathrm{exp}^\perp(\varphi,\varepsilon) \leq P(\mathcal{C}(q,r),\varphi)$. \end{thm} Let $\chi=\chi_q$ and $\mathcal{C}=\mathcal{C}(q,r;g)$.
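We record a simple consequence of \eqref{eqn:hC}: since $-(1-t)\log(1-t)\leq t$ for $t\in(0,1)$, we have $H(t)\leq t(1-\log t)$, and hence
\[
h(\mathcal{C},6\eta;g) \leq r(h+\log L) + 2r(1-\log 2r) \to 0 \quad \text{as } r\to 0.
\]
Thus the entropy carried by orbit segments that spend almost all of their time near $q$ can be made arbitrarily small by shrinking $r$; this is the source of the smallness conditions on $r$ in our main theorems.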
Consider the set $$ A^+ = \{x : \text{there exists } K(x) \text{ such that } \tfrac{1}{n}S^g_{n}\chi(x) < r \text{ for all } n>K(x)\}. $$ The next lemma, which we need in \S \ref{s.srb}, is proved as \cite[Lemma 3.5]{CFT_BV} as an intermediate step in the proof of Theorem \ref{expansivityestimate}. \begin{lem} \label{keystepexpansivityestimate} Let $\mu \in \mathcal{M}_e(g)$. If $\mu(A^+)>0$, then $h_{\mu}(g)+\int \varphi\, d \mu \leq P(\mathcal{C}, \varphi)$. \end{lem} \subsection{Cone estimates and local product structure}\label{sec:torus} Let $F^1,F^2 \subset \RR^d$ be subspaces such that $F^1 \cap F^2 = \{0\}.$ Let $\measuredangle(F^1,F^2) := \min\{\measuredangle(v,w) \, :\, v\in F^1\setminus\{0\}, w\in F^2\setminus\{0\}\}$, and define \begin{equation}\label{eqn:barkappa} \bar\kappa(F^1,F^2) := (\sin\measuredangle(F^1,F^2))^{-1} \geq 1. \end{equation} Given $\beta \in (0,1)$ and $F^1,F^2 \subset \RR^d$, the \emph{$\beta$-cone of $F^1$ and $F^2$} is $$ C_\beta(F^1,F^2) = \{ v+w \, :\, v\in F^1, w\in F^2, \|w\| < \beta \|v\| \}. $$ The following two useful lemmas are proved in \S 8 of \cite{CFT_BV}. \begin{lem}\label{lem:Wlps} Let $W^1,W^2$ be any foliations of $F^1 \oplus F^2$ with $C^1$ leaves such that $T_x W^1(x) \subset C_\beta(F^1,F^2)$ and $T_x W^2(x) \subset C_\beta(F^2,F^1)$, and let $\bar\kappa = \bar\kappa(F^1,F^2)$. Then for every $x,y\in F^1 \oplus F^2$ the intersection $W^1(x) \cap W^2(y)$ consists of a single point $z$. Moreover, \[ \max\{d_{W^1}(x,z), d_{W^2}(y,z)\} \leq \frac{1+\beta}{1-\beta} \bar\kappa d(x,y). \] \end{lem} \begin{lem}\label{compare:dist} Under the assumptions of Lemma \ref{lem:Wlps}, suppose that $x, y$ are points belonging to the same local leaf of $W\in \{W^1, W^2\}$. Then \[ d(x,y) \leq d_W(x,y) \leq (1+\beta)^2 d(x,y). \] \end{lem} \section{Construction of Ma\~n\'e's examples}\label{s.constructions} We review the class of robustly transitive diffeomorphisms originally considered by Ma\~{n}\'{e}~\cite{man78}.
Fix $d\geq3$ and let $f_A$ be the hyperbolic automorphism of $\mathbb{T}^d$ determined by a matrix $A\in\mathrm{SL}(d,\mathbb{Z})$ with all eigenvalues real, positive, simple, and irrational, and only one eigenvalue outside the unit circle. Let $\lambda_u$ be the unique eigenvalue greater than $1$ and $\lambda_s<1$ be the largest of the other eigenvalues. Let $h=h_\mathrm{top}(f_A)$ be the topological entropy. The Ma\~{n}\'{e} class of examples consists of $C^0$ perturbations of $f_A$, which we will denote by $f_M$. We describe the construction below. We are careful about issues of scale to guarantee that we have local product structure at a scale which is `compatible' with the $C^0$ size of the perturbation. Fix an expansivity constant $3\eta$ for $f_M$. We require that $\eta$ is small enough so that calculations at scales which are a suitable multiple of $\eta$ are local: a necessary upper bound on $\eta$ can be computed explicitly, depending on basic properties of the map $f_M$. Let $q$ be a fixed point for $f_A$, and fix $0< \rho < 3\eta$. We carry out a perturbation in a $\rho$-neighborhood of $q$. Let $F^u,F^c,F^s \subset \RR^d$ be the eigenspaces corresponding to (respectively) $\lambda_u$, $\lambda_s$, and all eigenvalues smaller than $\lambda_s$, and let $F^{cs} = F^c\oplus F^s$. Let $\kappa = 2\bar\kappa(F^s,F^u)$, where $\bar\kappa$ is as in \eqref{eqn:barkappa}. Let $\mathcal{F}^{u,c,s}$ be the foliations of $\TT^d$ by leaves parallel to $F^{u,c,s}$. These leaves are dense in $\mathbb{T}^d$ since all eigenvalues are irrational. Let $\beta\in (0, \rho)$ be sufficiently small and consider the cones \begin{alignat*}{3} C_\beta^s &= C_\beta(F^s, F^{cu}), &\qquad C_\beta^c &= C_\beta(F^c, F^s \oplus F^u), \\ C_\beta^u &= C_\beta(F^u, F^{cs}), & C_\beta^{cs} &= C_\beta(F^{cs},F^u).
\end{alignat*} \begin{figure}[htb] \begin{center} \ifps \psfrag{A}{$f_A$} \psfrag{B}{$f_M$} \psfrag{q}{$q$} \psfrag{r}{$q_1$} \psfrag{s}{$q$} \psfrag{t}{$q_2$} \includegraphics[width=.6\textwidth]{mane} \fi \caption{Ma\~{n}\'{e}'s construction}\label{f.mane} \end{center} \end{figure} Outside of $B(q,\rho)$, set $f_M$ to be equal to $f_A$. Inside $B(q,\rho)$, the fixed point $q$ undergoes a pitchfork bifurcation in the direction of $\mathcal{F}^c$; see \cite{man78} for details. The perturbation is carried out so that \begin{itemize} \item $\mathcal{F}^c$ is still an invariant foliation for $f_M$, and we write $E^c = T\mathcal{F}^c$; \item the cones $C_\beta^u$ and $C_{\beta}^s$ are invariant and uniformly expanding under $Df_M$ and $Df_M^{-1}$, respectively; in particular, they contain $Df_M$-invariant distributions $E^s$ and $E^u$ that integrate to $f_M$-invariant foliations $W^s$ and $W^u$. \item $E^{cs}:= E^c \oplus E^s$ integrates to a foliation $W^{cs}$. This holds because $E^s \subset C_{\beta}^s$ guarantees that $E^{cs} \subset C_\beta^{cs}$ \end{itemize} Thus, $f_M$ is partially hyperbolic with $T\TT^d = E^s \oplus E^c \oplus E^u$ and $E^{cs}$ integrates to a foliation. The index of $q$ changes during the perturbation, and we may also assume that for any point in $\mathbb{T}^d\setminus B(q,\rho/2)$ the contraction in the direction $E^c$ is $\lambda_s$. Inside $B(q,\rho/2)$, the perturbed map experiences some weak expansion in the direction $E^c$, and two new fixed points are created on $W^c(q)$, see Figure~\ref{f.mane}. Let $\lambda= \lambda_{c}(f_M) >1$ be the greatest expansion which occurs in the center direction. We can carry out the construction so that $\lambda$ is arbitrarily close to $1$. 
Since $f_M$ contracts $E^{cs}$ by a factor of at least $\lambda_s$ outside $B(q,\rho/2)$, and expands it by at most $\lambda$ inside the ball, we can estimate $\|Df_M^n|_{E^{cs}(x)}\|$ by counting how many of the iterates $x, f_M(x), \dots, f_M^{n-1}(x)$ lie outside $B(q,\rho/2)$. If at least $r n$ of these iterates lie outside the ball, then \begin{equation}\label{eqn:gamma-1} \|Df_M^n|_{E^{cs}(x)}\| \leq \lambda_s^{r n} \lambda^{(1-r)n}. \end{equation} Thus we are interested in a value of $r>0$ that gives $\lambda_s^r \lambda^{1-r} < 1.$ Consider the quantity \[ \gamma = \gamma(f_M) =\frac{\ln \lambda }{\ln \lambda - \ln \lambda_s} >0. \] Then $\gamma \to 0$ as $\lambda \to 1$, and for $r>\gamma$ a simple calculation gives \begin{equation}\label{theta} \theta_r:= \lambda_s^r \lambda^{1-r} < 1. \end{equation} Indeed, $\ln \theta_r = r\ln\lambda_s + (1-r)\ln\lambda$ is negative precisely when $r(\ln\lambda - \ln\lambda_s) > \ln\lambda$, that is, when $r>\gamma$. Given $\rho,r>0$, we write $\mathcal{F}_{\rho,r}$ for the set of Ma\~n\'e diffeomorphisms constructed as described here for which $\gamma(f_M)<r$. Thus, for $f_M \in \mathcal{F}_{\rho,r}$, we have $\theta_r(f_M)<1$. There is a constant $K$ so that we can carry out the construction to satisfy $d_{C^0}(f_M,f_A) < K\rho$, $f_A(B(q,\rho)) \subset B(q, K \rho)$, and $f_M(B(q,\rho)) \subset B(q, K \rho)$. In particular, by choosing $\rho$ small, we can ensure that $d_{C^0}(f_M, f_A) < \eta/C$ where $C=C(f_A)$ is the constant from the Shadowing Lemma. We now consider diffeomorphisms $g$ in a $C^1$ neighborhood of $f_M$. For sufficiently small $C^1$ perturbations $g$ of $f_M$, the following remain true. \begin{itemize} \item $d_{C^0}(g, f_A) < \eta/C$, where $C=C(f_A)$ is the constant from Lemma \ref{shadowinglemma}. \item $g$ is partially hyperbolic with $T\TT^d = E^s_g \oplus E^c_g \oplus E^u_g$, where $E^\sigma_g \subset C_\beta^\sigma$ for each $\sigma\in \{s,c,u,cs\}$. \item The distributions $E^c_g$ and $E^{cs}_g$ integrate to foliations $W^c_g$ and $W^{cs}_g$. \item Each of the leaves $W^{cs}_g(x)$ and $W^u_g(x)$ is dense for every $x\in \TT^d$.
\end{itemize} For the $C^1$ perturbations, partial hyperbolicity with $E_g^\sigma\subset C_\beta^\sigma$ and integrability are provided by \cite[Theorem 6.1]{HPS}; density of the leaves was shown in \cite{PS}. Given $g$ as above, let \begin{align*} \lambda_c(g) &= \sup \{ \|Dg|_{E^c(x)}\| : x\in B(q,\rho/2)\}, \\ \lambda_s(g) &= \sup \{ \|Dg|_{E^c(x)}\| : x \in \TT^d \setminus B(q,\rho/2)\}, \\ \gamma (g) &=\frac{\ln \lambda_c(g) }{\ln \lambda_c(g) - \ln \lambda_s(g)}. \end{align*} Let $\mathcal{U}_{\rho,r}$ be the set of $C^1$ diffeomorphisms $g\colon \TT^d\to \TT^d$ satisfying the conditions in the list above with $\gamma(g)<r$. The same calculation as for \eqref{theta} gives \begin{equation}\label{eqn:gamma-r} \theta_r(g) := \lambda_c(g)^{1-r}\lambda_s(g)^r < 1. \end{equation} \section{Proof of Theorem~\ref{t.mane}}\label{s.mane.pf} We let $g \in \mathcal{U}_{\rho, r}$, and consider the collection $\mathcal{G}$ of orbit segments $(x,n)$ for which $(x,i)$ spends at least $ri$ iterates outside of $B(q,\rho)$ for all $i\leq n$. We will show that these orbit segments experience uniform contraction in the $E^{cs}$ direction. Using local product structure, this will allow us to prove specification and the Bowen property for such orbit segments. The hypothesis $\Psi(\rho,r,\varphi)< P(\varphi; g)$, together with Theorems \ref{coreestimateallscales} and \ref{expansivityestimate}, allows us to bound the pressure of obstructions to expansivity and specification away from $P(\varphi; g)$. \subsection{Local product structure}\label{lps} We require local product structure for $g$ at scale $6 \eta$ repeatedly throughout this section. This holds because the splitting for $g$ is contained in thin cone fields, and so the local leaves are near the local leaves for $f_A$ when $\beta$ and $\eta$ are small. \begin{lem}\label{lem:lps-mane} The diffeomorphism $g$ has local product structure for $W^{cs}_g,W^{u}_g$ at scale $6\eta$ with constant $\kappa = 2\bar\kappa(F^{s},F^u)$.
\end{lem} \begin{proof} Let $\widetilde W^{cs}$ and $\widetilde W^{u}$ be the lifts of $W^{cs},W^{u}$ to $\RR^d$. Given $x,y\in \TT^d$ with $\varepsilon := d(x,y) < 6\eta$, let $\tilde x,\tilde y\in \RR^d$ be lifts of $x,y$ with $\varepsilon=d(\tilde x,\tilde y)<6\eta$. By Lemma \ref{lem:Wlps} the intersection $\widetilde W^{cs}(\tilde x) \cap \widetilde W^{u}(\tilde y)$ has a unique point $\tilde z$, which projects to $z\in \TT^d$. Moreover, the leaf distances between $\tilde x,\tilde z$ and $\tilde y, \tilde z$ are at most $(\frac{1+\beta}{1-\beta}) \bar\kappa(F^{s},F^u)\varepsilon$. Since $\beta$ is small, this is less than $2\bar\kappa(F^{s},F^u) \varepsilon$, so $\tilde z \in \widetilde W^{cs}_{\kappa \varepsilon}(\tilde x) \cap \widetilde W^u_{\kappa \varepsilon}(\tilde y)$, and hence $z \in W^{cs}_{\kappa \varepsilon}(x) \cap W^u_{\kappa \varepsilon}(y)$. { By choosing $\eta$ not too large, we can ensure that $6 \eta \kappa$ is not too large relative to the diameter of $\TT^d$, so that the projection of $\widetilde W^{cs}_{6 \eta \kappa}(\tilde x) \cap \widetilde W^{u}_{6 \eta \kappa}(\tilde y)$ coincides with $W^{cs}_{6 \eta \kappa}(x) \cap W^{u}_{6 \eta \kappa}(y)$. Thus, $z$ is the only point in this intersection.} \end{proof} \subsection{Specification} A main ingredient for establishing specification for mixing locally maximal hyperbolic sets $f\colon \Lambda\to \Lambda$ is that given $\delta>0$, there exists $N\in\mathbb{N}$ such that for $x,y\in \Lambda$ and $n\geq N$ we have $f^n(W^u_{\delta}(x))\cap W^s_{\delta}(y)\neq \emptyset$. We mimic this idea, replacing the stable manifolds with the centerstable manifolds. All leaves of $W^u$ are dense in $\TT^d$ by the definition of $\mathcal{U}_{\rho, r}$. The following lemma gives uniform density. \begin{lem}\label{lem:intersection} For every $\delta>0$ there is $R>0$ such that for every $x,y\in \TT^d$, we have $W_R^u(x) \cap W_\delta^{cs}(y) \neq \emptyset$. \end{lem} \begin{proof} Fix $\delta>0$ and let $\alpha=\delta/\kappa$, where $\kappa$ is the constant from the local product structure.
Fix $R_0>0$ such that $W^u_{R_0}(x)$ is $\alpha$-dense in $\TT^d$ for every $x\in \TT^d$; such an $R_0$ exists since every unstable leaf is dense and the foliation is continuous. Then for every $x,y\in \TT^d$ there is $z\in W_{R_0}^u(x)$ such that $d(y,z)<\alpha$, so by local product structure, $W^u_\delta(z)\cap W_\delta^{cs}(y)\neq \emptyset.$ Thus, $W_{R_0+\delta}^u(x) \supset W_\delta^u(z)$ and so writing $R=R_0 +\delta$, we have $W^u_R(x)\cap W^{cs}_\delta(y)\neq\emptyset$. \end{proof} Because $g$ is uniformly expanding along $W^u$, we see that for every $\delta>0$ there is $N\in \NN$ such that for every $x\in \TT^d$ and $n\geq N$, we have $g^n(W^u_\delta(x)) \supset W^u_R(g^nx)$. Thus by Lemma \ref{lem:intersection} we have \begin{equation}\label{eqn:iterated-intersection} g^n(W^u_\delta(x))\cap W^{cs}_\delta(y)\neq \emptyset \text{ for every } x,y\in \TT^d. \end{equation} Let $\chi$ be the indicator function of $\TT^d \setminus B(q, \rho)$, so that $\frac{1}{i}S_i \chi(x)$ is the proportion of time that an orbit segment $(x, i)$ spends outside $B(q, \rho)$. \begin{lem} \label{centrestable} Suppose $(x,n)\in \TT^d\times \NN$ is such that $S_i\chi(x)\geq ir$ for all $0\leq i\leq n$, and $\theta_r\in (0,1)$ is the constant defined at \eqref{eqn:gamma-r}. Then \begin{enumerate}[label=(\alph{*})] \item For any $y\in B_n(x,\rho/2)$, we have $\|Dg^i|_{E^{cs}(y)}\| \leq (\theta_r)^i$ for all $0\leq i\leq n$. \item For any $y,z\in W_{\rho/2}^{cs}(x)$, we have $d_W(g^iy,g^iz) \leq \theta_r^i d_W(y,z)$ for all $0\leq i\leq n$. \item For $0<\delta<\rho/2$, we have $W^{cs}_{\delta}(x)\subset B_n(x,2 \delta)$. \end{enumerate} \end{lem} \begin{proof} Given $0\leq i\leq n$, the inequality $S_i\chi(x)\geq ir$ implies that the orbit segment $(x,i)$ spends at least $ir$ iterates outside of $B(q,\rho)$. It follows that $(y,i)$ spends at least $ir$ iterates outside of $B(q,\rho/2)$. By the definition of $\lambda_c(g)$ and $\lambda_s(g)$, it follows that \[ \|Dg^i|_{E^{cs}(y)}\| \leq \lambda_c^{i - ir} \lambda_s^{ir} = (\theta_r)^i, \] proving the first claim.
It is an easy exercise to prove (b) using the uniform contraction estimate provided by (a), and (c) follows immediately from (b) and Lemma \ref{compare:dist} (using that $\beta$ is small, so $(1+\beta)^2<2$). \end{proof} Now we define the decomposition. We consider the following collections of orbit segments: \begin{equation}\label{eqn:mane-decomp} \begin{aligned} \mathcal{G}&=\{(x,n)\in \mathbb{T}^d\times \mathbb{N}\, :\, S_i\chi(x)\geq ir\, \, \forall\,\, 0\leq i\leq n\},\\ \mathcal{P} &= \{(x,n)\in \mathbb{T}^d\times \mathbb{N}\, :\, S_n\chi(x)< nr \}. \end{aligned} \end{equation} The collection $\mathcal{G}$ is chosen so that the centerstable manifolds are uniformly contracted along orbit segments from $\mathcal{G}$. These collections, together with the trivial collection $\{(x,0) : x \in X\}$ for $\mathcal{S}$, define a decomposition of any point $(x,n)\in X\times \mathbb{N}$ as follows: let $p$ be the largest integer in $\{0,\dots, n\}$ such that $S_p\chi(x)<pr$, taking $p=0$ if no such integer exists; thus $(x, p) \in \mathcal{P}$ whenever $p>0$. We must have $(g^p(x), n-p)\in\mathcal{G}$, since if $S_k\chi(g^px)<kr$ for some $0< k\leq n-p$, then $\frac{1}{p+k}S_{p+k}\chi(x) = \frac 1{p+k} \left(S_p\chi(x) + S_k\chi(g^p(x))\right) < r,$ contradicting the maximality of $p$. \begin{lem} \label{spec} The collection $\mathcal{G}$ has specification at any scale $\delta>0$. \end{lem} \begin{proof} For an arbitrary fixed $\delta>0$, we prove specification at scale $3\delta$. The key property that allows us to transition from one orbit to another is \eqref{eqn:iterated-intersection}. This property, together with uniform expansion on $W^u$, allows us to choose $\tau = \tau(\delta)\in \NN$ such that \[ \begin{aligned} g^\tau(W^u_\delta(x))&\cap W^{cs}_\delta(y)\neq \emptyset \text{ for all } x,y\in \TT^d, \\ d(g^{-\tau}y, g^{-\tau} z) &< \frac{1}{2}d(y, z) \text{ for all } x\in \TT^d \text{ and } y,z\in W^u_\delta(x).
\end{aligned} \] Given any $(x_1, n_1), \dots, (x_k, n_k)\in \mathcal{G}$, we construct $y_j$ such that $(y_j,m_j)$ shadows $(x_1,n_1),\dots,(x_j,n_j)$, where $m_1 =n_1$, $m_2 = n_1 + \tau + n_2$, $\dots$, $m_k = (\sum_{i=1}^{k} n_i) + (k-1)\tau$. We also set $m_{0} = - \tau$. Let $y_1 = x_1$, and choose $y_2,\dots, y_k$ recursively so that $$ \begin{matrix} g^{m_1}y_2 \in W^{u}_\delta (g^{m_1}y_1) & \mbox{and} & g^{m_1+\tau}y_2 \in W^{cs}_\delta (x_2)\\ g^{m_2}y_3 \in W^{u}_\delta (g^{m_2}y_2) & \mbox{and} & g^{m_2+\tau}y_3 \in W^{cs}_\delta (x_3)\\ \vdots & \vdots & \vdots \\ g^{m_{k-1}}y_k \in W^{u}_\delta (g^{m_{k-1}}y_{k-1}) & \mbox{and} & g^{m_{k-1}+\tau}y_{k} \in W^{cs}_\delta (x_k).\\ \end{matrix} $$ Since $g^{m_j}y_{j+1}$ is in the unstable manifold of $g^{m_j}y_j$, and distance is contracted by $\frac{1}{2}$ every time the orbit passes backwards through a `transition', we obtain that $$ \begin{matrix} d_{n_j}(g^{m_{j-1}+\tau}y_j, g^{m_{j-1}+\tau}y_{j+1})& < &\delta \\ d_{n_{j-1}}(g^{m_{j-2}+\tau}y_j, g^{m_{j-2}+\tau}y_{j+1})& < &\delta/2 \\ \vdots & &\vdots\\ d_{n_1}(y_j, y_{j+1}) & < & \delta/2^j. \end{matrix} $$ That is, $d_{n_{j-i}}(g^{m_{j-i-1}+\tau}y_j, g^{m_{j-i-1}+\tau}y_{j+1}) < \delta/2^i$ for each $i \in \{1, \ldots, j\}$. Since $g^{m_j+\tau}(y_{j+1}) \in W^{cs}_\delta(x_{j+1}) \subset B_{n_{j+1}}(x_{j+1}, 2\delta)$ by Lemma \ref{centrestable}, it follows that $$d_{n_j}(g^{m_{j-1}+ \tau}y_k, x_j) < 2 \delta + \sum_{i=1}^\infty 2^{-i} \delta = 3\delta.$$ Thus, $y_k\in \bigcap_{j=1}^k g^{-(m_{j-1} + \tau)}B_{n_j}(x_j, 3 \delta)$, and so $\mathcal{G}$ has specification at scale $3 \delta$. \end{proof} \subsection{The Bowen property} Let $\theta_u \in (0,1)$ be such that $\|Dg|_{E^u(x)}^{-1}\| \leq \theta_u$ for all $x\in \TT^d$. Let $\kappa$ be the constant associated with the local product structure of $E^{cs}_g \oplus E^u_g$. Let $\varepsilon = \rho/(2\kappa )$.
\begin{lem}\label{bowen-balls} Given $(x,n)\in \mathcal{G}$ and $y\in B_n(x,\varepsilon)$, we have \begin{equation}\label{eqn:hyp-hyp} d(g^kx,g^ky) \leq \kappa \varepsilon(\theta_r^k + \theta_u^{n-k}) \end{equation} for every $0\leq k\leq n$. \end{lem} \begin{proof} Using the local product structure, there exists $z\in W^{cs}_{\kappa \varepsilon}(x) \cap W^u_{\kappa \varepsilon}(y)$. Since $g^{-1}$ is uniformly contracting on $W^u$, we get \[ d(g^kz,g^ky) \leq \theta_u^{n-k} d(g^nz,g^ny) \leq \theta_u^{n-k} \kappa \varepsilon, \] and Lemma \ref{centrestable} gives $d(g^kx,g^kz) \leq \theta_r^k d(x,z) \leq \theta_r^k \kappa \varepsilon$. The triangle inequality gives \eqref{eqn:hyp-hyp}. \end{proof} \begin{lem} \label{bowenprop} Any H\"older continuous $\varphi$ has the Bowen property on $\mathcal{G}$ at scale $\varepsilon$. \end{lem} \begin{proof} Since $\varphi$ is H\"older, there exist $K>0$ and $\alpha\in (0,1)$ such that $|\varphi(x) - \varphi(y)| \leq K d(x,y)^\alpha$ for all $x,y\in \TT^d$. For $(x, n) \in \mathcal{G}$ and $y\in B_n(x,\varepsilon)$, Lemma \ref{bowen-balls} gives \[ |S_n\varphi(x)-S_n \varphi(y)| \leq K\sum_{k=0}^{n-1}d(g^kx, g^ky)^\alpha \leq K(\kappa \varepsilon)^\alpha \sum_{k=0}^{n-1} (\theta_u^{n-k} + \theta_r^{k})^\alpha. \] The summand admits the upper bound \[ (\theta_u^{n-k} + \theta_r^k)^\alpha \leq (2\theta_u^{n-k})^{\alpha} + (2\theta_r^k)^{\alpha}, \] and we conclude that \[ |S_n\varphi(x)-S_n \varphi(y)| \leq K(2\kappa\varepsilon)^\alpha \sum_{j=0}^\infty (\theta_u^{j\alpha} + \theta_r^{j\alpha}) < \infty. \] \end{proof} \subsection{Expansivity} The diffeomorphism $g$ is partially hyperbolic with one-dimensional center bundle. Thus, it is well known that the non-expansive set for a point $x$ must be contained in a compact subset of a (one-dimensional) center leaf, and so $g$ is entropy-expansive \cite[Proposition 6]{CY05}. We give a quick sketch proof for completeness.
\begin{lem} \label{centerleaf} For all $x \in \TT^d$, and $\varepsilon \leq 6\eta$, $\Gamma_{\varepsilon}(x)$ is contained in a compact subset of $W_g^c(x)$ with diameter a uniform multiple of $\varepsilon$. \end{lem} \begin{proof}[Sketch proof] Recall that the foliations $W_g^{cs}$ and $W_g^u$ have a local product structure at scale $\varepsilon$, and that there is also a local product structure within each leaf of $W_g^{cs}$ associated to the foliations $W_g^c$ and $W_g^s$. In particular, for every $y\in \Gamma_{\varepsilon}(x)$ there are $z_1 \in W_g^{cs}(x) \cap W_g^u(y)$ and $z_2\in W_g^c(x) \cap W_g^s(z_1)$, where all the leaf distances are controlled by a uniform multiple of $\varepsilon$. Under forward iterates, $z_1$ remains close to $x$, and so if $z_1\neq y$ then uniform forward expansion along leaves of $W_g^u$ implies that for some $n\geq 0$, $d(g^n(z_1),g^n(y))$ is large enough that $d(g^n(x),g^n(y)) > \varepsilon$. Thus we must have $z_1 = y$. A similar argument using backward iterates shows that $z_2 = z_1$. Thus $y$ is in the local $W_g^c$ leaf of $x$. This shows that $\Gamma_{\varepsilon}(x)$ is contained in a compact subset of $W_g^c(x)$, with diameter a uniform multiple of $\varepsilon$. \end{proof} We use this to show there is no tail entropy at scale $6 \eta$, and that Condition \ref{E} from \S\ref{sec:obstr-exp} is satisfied. \begin{lem} \label{hexp-mane} The diffeomorphism $g$ satisfies $h_g^\ast(6\eta) =0$. \end{lem} \begin{proof} Given $x \in X$, Lemma \ref{centerleaf} shows that $\Gamma_{6\eta}(x)$ is contained in a compact interval in the center leaf that is bounded in length. Therefore, $h(\Gamma_{6\eta}(x))=0$ for all $x\in\mathbb{T}^d$ and $h_g^\ast(6\eta)=0$. 
\end{proof} \begin{lem} \label{satisfieshyp} The diffeomorphism $g$ satisfies Condition \ref{E} from \S\ref{sec:obstr-exp}. \end{lem} \begin{proof} For sufficiently small $\varepsilon>0$, Lemma \ref{centerleaf} shows that every $x\in \TT^d$ has $\Gamma_\varepsilon(x) \subset W_{\rho/2}^{cs}(x)$. It follows from Pliss' Lemma~\cite{Pliss} that if $m_k\to \infty$ is such that $\frac 1{m_k} S_{m_k}^{g^{-1}}\chi(x) \geq r$ for every $k$, then for every $r' \in (\gamma, r)$ there exists a sequence $m_k'\to\infty$ such that for every $k$ and every $0\leq j\leq m_k'$, we have $\frac 1j S_j^{g^{-1}}\chi(g^{-m_k'+j}x) \geq r'$. Thus $g^{-m_k'}x$ has the property that \[ \tfrac 1mS_m^g\chi(g^{-m_k'}x) \geq r' \text{ for all } 0\leq m\leq m_k', \] so we can apply Lemma \ref{centrestable} and conclude that \[ \Gamma_\varepsilon(x) \subset g^{m_k'}(W_{\rho/2}^{cs}(g^{-m_k'}x)) \subset B(x, \theta_{r'}^{m_k'}\rho/2). \] Since $m_k'\to\infty$ and $\theta_{r'}<1$, this implies that $\Gamma_\varepsilon(x) = \{x\}$. \end{proof} \subsection{Proof of Theorem \ref{t.mane}} \label{maneprooffinal} We now complete the proof that if $g\in \mathcal{U}_{\rho, r}$ and $\varphi\colon \TT^d\to \RR$ satisfy the hypotheses of Theorem \ref{t.mane}, then the conditions of Theorem \ref{t.generalM} are satisfied, and hence there is a unique equilibrium state for $(\TT^d,g,\varphi)$. We define the decomposition $(\mathcal{P},\mathcal{G}, \mathcal{S})$ as in \eqref{eqn:mane-decomp}. In Lemma \ref{spec}, we showed $\mathcal{G}$ has specification at all scales. In Lemma \ref{bowenprop}, we showed $\varphi$ has the Bowen property on $\mathcal{G}$ at scale $\varepsilon = \frac{\rho}{2\kappa }$. In Theorem \ref{coreestimateallscales}, we showed $P(\mathcal{P}, \varphi; g)$ admits the upper bound \[ h_g^*(6\eta) + (1-r) \sup_{B(q, \rho)}\varphi + r( \sup_{\mathbb{T}^d}\varphi + h + \log L) + H(2r).
\] By Lemma \ref{hexp-mane}, $h_g^*(6\eta)=0$, and so the hypothesis $\Psi(\rho, r, \varphi)< P(\varphi;g)$ gives $P(\mathcal{P},\varphi)<P(\varphi;g)$. By Theorem \ref{expansivityestimate} and Lemma \ref{satisfieshyp}, $P_\mathrm{exp}^\perp(\varphi) \leq P(\mathcal{P},\varphi)$. Thus, we see that under the hypotheses of Theorem \ref{t.mane}, all the hypotheses of Theorem \ref{t.generalM} are satisfied for the decomposition $(\mathcal{P}, \mathcal{G}, \mathcal{S})$. \subsection{H\"older potentials with bounded range} \label{cor.br} We prove the following corollary of Theorem \ref{t.mane}. \begin{cor}\label{t.manerange} Given $g\in \mathcal{U}_{\rho,r}$, suppose that for $L=L(f_A, \eta)$ and $h=h_\mathrm{top}(f_A)$, we have \begin{equation} \label{mane.hestimate} r(\log L + h) + H(2r) < h. \end{equation} Let $\eta'=C(f_A)d_{C^0}(f_A, g)$ and $V(\varphi)= \Var(\varphi, \eta').$ Then writing $D(r) = h -r(\log L + h) - H(2r) > 0$, every H\"older continuous potential $\varphi$ with the bounded range hypothesis $\sup \varphi - \inf \varphi +V(\varphi) <D(r)$ has a unique equilibrium state. In particular, \eqref{mane.hestimate} is a criterion for $g$ to have a unique measure of maximal entropy. \end{cor} \begin{proof} If $\sup \varphi - \inf \varphi +V(\varphi)< D(r)$, then \begin{align*} \Psi(\rho, r, \varphi) &\leq (1-r) \sup_{B(q, \rho)}\varphi + r( \sup_{\TT^d}\varphi + h + \log L) +H(2r) +V(\varphi) \\ &= (1-r) \sup_{B(q, \rho)}\varphi + r(\sup_{\TT^d}\varphi) + h +V(\varphi) - D(r) \\ &\leq \sup_{\TT^d} \varphi + h+V(\varphi) - D(r) \\ &< \inf_{\TT^d}\varphi + h - V(\varphi) \leq P(\varphi; f_A)- V(\varphi)\leq P(\varphi; g). \end{align*} The last inequality follows from Lemma \ref{pressuredrop}\ref{Pg-geq}. Thus Theorem \ref{t.mane} applies. 
\end{proof} \section{Lower bounds on entropy and proof of Theorem \ref{cor1.2}}\label{sec:corollaries} It is well known that the unique equilibrium state for a H\"older potential $\varphi$ on a uniformly hyperbolic system has positive entropy. We prove an explicit lower bound on the entropy, in terms of the H\"older semi-norm $|\varphi|_\alpha$, for equilibrium states of maps with the specification property. \begin{thm}\label{thm:pressure-gap} Let $X$ be a compact metric space and $f\colon X\to X$ a homeomorphism. Fix $\varepsilon < \frac 16 \diam(X)$ and suppose that $f$ has specification at scale $\varepsilon$ with transition time $\tau$. Let $\varphi\colon X\to\RR$ be a potential satisfying the Bowen property at scale $\varepsilon$ with distortion constant $V$. Let \[ \Delta = \frac{\log\big(1+e^{-(V+(2\tau+1)(\sup \varphi - \inf\varphi))}\big)}{2(\tau+1)}. \] Then we have \begin{equation}\label{eqn:gap} P(\varphi) \geq P(\varphi,\varepsilon) \geq \Big( \sup_\mu \int\varphi\,d\mu\Big) + \Delta. \end{equation} In particular, every equilibrium state $\mu$ for $\varphi$ has $h_\mu(f) \geq \Delta > 0$. \end{thm} \begin{proof} Fix $x\in X$ and $n\in \NN$. Fix $\alpha\in (0,\frac 12]$, let $m_n = \lceil \frac{\alpha n}{2(\tau+1)} \rceil$, and let \[ \mathcal{I}_n = \{ (k_1, k_2, \ldots, k_{m_n}) \mid 0 < k_1 < k_2 < \cdots < k_{m_n} < n \text{ and } k_i\in 2(\tau+1)\NN\ \forall i\}. \] The idea is that for each $\vec k\in \mathcal{I}_n$, we will use the specification property to construct a point $\pi(\vec k) \in X$ whose orbit is pushed away from the orbit of $x$ for a bounded amount of time around each time $k_i$, and $\varepsilon$-shadows the orbit of $x$ at all other times; thus the set of points $\{\pi(\vec k): \vec k \in \mathcal{I}_n\}$ will be $(n,\varepsilon)$-separated on the one hand, and on the other hand each point $\pi(\vec k)$ will have its $n^{th}$ Birkhoff sum close to that of $x$.
First note that standard estimates for factorials give $\log\binom{n}{\ell} \geq H(\frac\ell{n}) n - o(n)$, and that $m_n / \lfloor \frac n{2(\tau+1)} \rfloor \geq \alpha$, so \begin{equation}\label{eqn:nmn} \log\#\mathcal{I}_n \geq \log{\lfloor \frac{n}{2(\tau+1)}\rfloor \choose m_n} \geq \frac{H(\alpha)}{2(\tau+1)} n - o(n). \end{equation} Given $k\in \{0, \dots, n-1\}$, let $y_k \in X$ be any point with $d(f^kx,y_k) > 3\varepsilon$. Now for every $\vec{k}\in \mathcal{I}_n$, the specification property guarantees the existence of a point $\pi(\vec{k})\in X$ with the property that \[ \begin{aligned} \pi(\vec{k}) &\in B_{k_1-\tau}(x,\varepsilon), \\ \qquad f^{k_1}(\pi(\vec{k})) &\in B(y_{k_1},\varepsilon), \\ \qquad f^{k_1+\tau+1}(\pi(\vec{k})) &\in B_{k_2 - k_1 - 2\tau - 1}(f^{k_1 + \tau+1}x, \varepsilon), \end{aligned} \] and so on. Writing $k_0=0$, we see that for any $0\leq i < m_n$ we have \begin{equation}\label{eqn:pik} \begin{aligned} f^{k_i + \tau + 1}(\pi(\vec{k})) &\in B_{k_{i+1} - k_i - 2\tau - 1}(f^{k_i + \tau +1}x, \varepsilon), \\ f^{k_{i+1}}(\pi(\vec{k})) &\in B(y_{k_{i+1}},\varepsilon), \end{aligned} \end{equation} and we ask that $f^{k_{m_n}+ \tau+1}(\pi (\vec k)) \in B_{n-k_{m_n}}(f^{k_{m_n}+\tau+1}x, \varepsilon)$. Write $j_i = k_{i+1} - k_i - 2\tau - 1$ for $i\in \{0, \ldots, m_n-1\}$ and $j_{m_n}=n-k_{m_n}$; then the Bowen property gives \[ |S_{j_i} \varphi(f^{k_i + \tau + 1}x) - S_{j_i} \varphi(f^{k_i + \tau + 1} \pi (\vec{k}))| \leq V \] for any $0\leq i \leq m_n$. We control the `excursions' away from $x$ by observing that for any $z,z' \in X$, $|S_{2\tau+1} \varphi(z)-S_{2\tau+1} \varphi(z')| \leq (2\tau+1)(\sup \varphi-\inf \varphi)$, and there are $m_n$ such excursions. We conclude that \begin{equation}\label{eqn:Snphi} |S_n\varphi(\pi(\vec{k})) - S_n\varphi(x)| \leq (m_n+1)V + m_n(2\tau+1)(\sup \varphi- \inf \varphi). \end{equation} Consider the set $\pi(\mathcal{I}_n) \subset X$.
Given any $\vec{k} \neq \vec{k}' \in \mathcal{I}_n$, let $i$ be minimal such that $k_i \neq k_i'$ and assume, without loss of generality, that $k_i' < k_i$; then put $j=k_i'$ and observe that $f^j(\pi(\vec{k}')) \in B(y_j, \varepsilon)$, while $f^j(\pi(\vec{k})) \in B(f^j(x),\varepsilon)$. Since $d(y_j,f^jx) > 3\varepsilon$ this guarantees that $\pi(\vec{k}') \notin B_n(\pi(\vec{k}),\varepsilon)$, and so $\pi(\mathcal{I}_n)$ is $(n,\varepsilon)$-separated. Together with \eqref{eqn:Snphi}, this gives \[ \begin{aligned} \Lambda^\mathrm{sep}_n(\varphi,\varepsilon) &\geq \sum_{\vec{k} \in \mathcal{I}_n} e^{S_n\varphi(\pi(\vec{k}))} \\ &\geq (\#\mathcal{I}_n) \exp\big(S_n\varphi(x) - (m_n+1)V - m_n(2\tau+1)(\sup\varphi-\inf\varphi)\big). \end{aligned} \] Using \eqref{eqn:nmn} to bound $\#\mathcal{I}_n$ from below, we can take logs, divide by $n$, and send $n\to\infty$ to get \[ P(\varphi,\varepsilon) \geq \Big(\limsup_{n\to\infty} \frac 1n S_n\varphi(x) \Big) + \frac 1{2(\tau+1)} \Big(H(\alpha) - \alpha(V+(2\tau+1)(\sup \varphi- \inf \varphi)) \Big). \] Given any ergodic $\mu$, we can take a generic point $x$ for $\mu$ and conclude that the lim sup in the above expression is equal to $\int\varphi\,d\mu$. Thus to bound the difference $P(\varphi,\varepsilon) - \int\varphi\,d\mu$, we want to choose the value of $\alpha$ that maximizes $H(\alpha) - \alpha Q$, where $Q=V+(2\tau+1)(\sup \varphi- \inf \varphi)$. A routine calculation shows that $\frac d{d\alpha} (H(\alpha) - \alpha Q) = 0$ precisely when $\alpha = (1+e^Q)^{-1}$, at which point we have $H(\alpha) - \alpha Q = \log(1+e^{-Q})$, proving Theorem \ref{thm:pressure-gap}. \end{proof} \begin{cor}\label{cor:pressure-gap} Given a topologically mixing Anosov diffeomorphism $f$ on a compact manifold $M$, there are $Q,\delta>0$ such that for every H\"older potential $\varphi$, we have \[ P(\varphi; f) \geq \delta \log(1+e^{-Q|\varphi|_\alpha}) + \sup_\mu \int\varphi\,d\mu.
\] \end{cor} \begin{proof} Every H\"older potential on an Anosov system has the Bowen property with distortion constant given by $Q_1|\varphi|_\alpha$; moreover, $\sup \varphi- \inf \varphi \leq |\varphi|_\alpha (\diam M)^\alpha$. Mixing Anosov diffeomorphisms have the specification property; let $\tau$ be the transition time for a scale at which $f$ has specification, and let $\delta= \frac 1{2(\tau+1)}$. Then Theorem \ref{thm:pressure-gap} gives \[ P(\varphi; f) \geq \Big(\sup_\mu \int\varphi\,d\mu\Big) + \delta \log\big(1+e^{-(Q_1 |\varphi|_\alpha + (2\tau+1)|\varphi|_\alpha(\diam M)^\alpha)}\big). \] Putting $Q = Q_1 + (2\tau+1)(\diam M)^\alpha$ gives the result. \end{proof} \subsection*{Proof of Theorem \ref{cor1.2}} We see from (i) of Lemma \ref{pressuredrop} that there is a constant $K$ (independent of $\rho, r$) such that for every $g\in \mathcal{U}_{\rho,r}$, \[ P(\varphi; g) \geq P(\varphi; f_A) - K^{\alpha}\rho^\alpha |\varphi|_\alpha. \] Since $q$ is a fixed point of $f_A$, Corollary \ref{cor:pressure-gap} gives \[ P(\varphi; g) \geq \varphi(q) + \delta \log(1+e^{-Q|\varphi|_\alpha}) - K^{\alpha} \rho^\alpha |\varphi|_\alpha \] for every $g\in \mathcal{U}_{\rho,r}$. On the other hand, we have \begin{align*} \Psi(\rho,r,\varphi) &\leq \varphi(q) + |\varphi|_\alpha\rho^\alpha + r(\sup \varphi - \varphi(q) + h + \log L) + H(2r) \\ &\leq \varphi(q) + |\varphi|_\alpha (\rho^\alpha + r(\diam M)^\alpha) + r(h+\log L) + H(2r). \end{align*} Thus the following is a sufficient condition to give $\Psi(\rho,r,\varphi) < P(\varphi; g)$: \[ (\rho^\alpha(1+K^{\alpha}) + r(\diam M)^\alpha)|\varphi|_\alpha + r(h + \log L) + H(2r) < \delta \log(1+e^{-Q|\varphi|_\alpha}). \] Let $S_1(\rho,r) = \rho^\alpha(1+K^\alpha) + r(\diam M)^\alpha$ and $S_2(r) = r(h+\log L)+H(2r)$, so the above condition can be rewritten as \begin{equation}\label{eqn:enough} S_1(\rho,r)|\varphi|_\alpha + S_2(r) < \delta \log(1+e^{-Q|\varphi|_\alpha}).
\end{equation} Given $\rho,r>0$, and $\alpha \in (0,1]$, define $T(\rho,r; \alpha)$ by \begin{equation}\label{eqn:T} T(\rho,r;\alpha) = \sup\big\{ T\in \RR : S_1(\rho,r) T + S_2(r) < \delta \log (1 + e^{-QT}) \big\}. \end{equation} Then for every $\varphi$ with $|\varphi|_\alpha < T(\rho,r; \alpha)$, condition \eqref{eqn:enough} holds, which gives $\Psi(\rho,r,\varphi) < P(\varphi; g)$. This is enough to deduce the first part of Theorem \ref{cor1.2} from Theorem \ref{t.mane}. For the second part of Theorem \ref{cor1.2}, observe that for every $t>0$, we can choose $\rho,r>0$ sufficiently small that $S_1(\rho,r)t + S_2(r) < \delta \log(1+e^{-Qt})$, which means that $t < T(\rho,r; \alpha)$ for all sufficiently small $\rho,r$. In other words, $T(\rho,r; \alpha)\to\infty$ as $\rho,r\to 0$, which completes the proof. \section{Proof of Theorem \ref{main3}}\label{s.srb} Given a $C^2$ diffeomorphism $g$ on a $d$-dimensional manifold and $\mu \in \mathcal{M}_e(g)$, let $\lambda_1 \leq \cdots \leq \lambda_d$ be the Lyapunov exponents of $\mu$, and let $\lambda^+(\mu)$ be the sum of the positive Lyapunov exponents. Following the definition in \cite[Chapter 13]{BP07}, an \emph{SRB measure} for a $C^2$ diffeomorphism is an ergodic invariant measure $\mu$ that is hyperbolic (non-zero Lyapunov exponents) and has absolutely continuous conditional measures on unstable manifolds. The Margulis--Ruelle inequality \cite[Theorem 10.2.1]{BP07} gives $h_\mu(g) \leq \lambda^+(\mu)$, and it was shown by Ledrappier and Young \cite{LY} that equality holds if and only if $\mu$ has absolutely continuous conditionals on unstable manifolds. In particular, for any ergodic invariant measure $\mu$, we have \begin{equation}\label{eqn:nonpos} h_\mu(g) - \lambda^+(\mu)\leq 0, \end{equation} with equality if and only if $\mu$ is absolutely continuous on unstable manifolds. Thus an ergodic measure $\mu$ is an SRB measure if and only if it is hyperbolic and equality holds in \eqref{eqn:nonpos}. 
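Equality in \eqref{eqn:nonpos} can be checked numerically in the simplest linear case: for a hyperbolic toral automorphism with Lebesgue measure, $h_\mu = \lambda^+$. The sketch below (using the Arnold cat map as a hypothetical example, not one of the perturbed Ma\~n\'e maps) estimates the top Lyapunov exponent from the growth rate of a generic tangent vector and compares it with the logarithm of the unstable eigenvalue.

```python
import math

# Derivative of the Arnold cat map: the constant matrix A = [[2, 1], [1, 1]]
# (a hypothetical linear example, not one of the perturbed Mane maps).
def apply_A(v):
    x, y = v
    return (2 * x + y, x + y)

# Estimate the top Lyapunov exponent lambda^+ from the growth rate of a
# generic tangent vector, renormalizing at each step to avoid overflow.
v = (1.0, 0.3)
norm = math.hypot(*v)
v = (v[0] / norm, v[1] / norm)

n = 200
log_growth = 0.0
for _ in range(n):
    v = apply_A(v)
    norm = math.hypot(*v)
    log_growth += math.log(norm)
    v = (v[0] / norm, v[1] / norm)

lam_plus = log_growth / n                     # numerical lambda^+
lam_exact = math.log((3 + math.sqrt(5)) / 2)  # log of the unstable eigenvalue

# For volume, h_mu = lambda^+ here, so equality holds in the
# Margulis--Ruelle inequality and volume is the SRB measure.
assert abs(lam_plus - lam_exact) < 1e-3
```

The same renormalized-growth computation is the standard numerical route to $\lambda^+(\mu)$ for nonlinear maps, where no closed form is available.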
Let $g\in \mathcal{U}_{\rho,r}$ be a $C^2$ diffeomorphism. Since there is a continuous splitting $T \TT^d = E^u \oplus E^{cs}$, the geometric potential $\varphi^u(x) = -\log\|Dg|_{E^u(x)}\|$ is continuous. Furthermore, $\varphi^u$ is H\"older continuous because the map $g$ is $C^2$ and the distribution $E^u$ is H\"older. The H\"older continuity of $E^u$ follows from the standard argument for Anosov diffeomorphisms. See for instance \cite[\S6.1]{BrSt}; the argument there extends unproblematically to the case of absolute partial hyperbolicity, which covers our setting. We build up our proof of Theorem \ref{main3}. Since $\sup \varphi^u< 0$, the function $t\mapsto P(t\varphi^u)$ is a convex strictly decreasing function, so it has a unique root. We must show that this root occurs at $t=1$, that we have uniqueness of the equilibrium state for all $t$ in a neighborhood of $[0,1]$, and that the equilibrium state for $\varphi^u$, which we denote $\mu_1$, is the unique SRB measure. We assume the hypothesis of Theorem \ref{main3} so that \begin{equation}\label{eqn:mane-srb-condition2} r(h+\log L) + H(2r)< \left(\frac{\sup_{x\in \TT^d} \varphi^u_{g}(x)}{\inf_{x\in\TT^d} \varphi^u_{g}(x)}\right) h, \end{equation} and also that \begin{equation}\label{eqn:mane-srb-condition3} r(h+\log L) + H(2r)< - \sup \varphi^u_{g}. \end{equation} We recall the following result, which was proved as Lemma 7.1 of \cite{CFT_BV}. \begin{lem} \label{lem:Pgeq0} Let $M$ be a compact Riemannian manifold and let $W$ be a $C^0$ foliation of $M$ with $C^1$ leaves. Suppose there exists $\delta>0$ such that $\sup_{x\in M} m_{W(x)}(W_\delta(x)) < \infty$, where $m_{W(x)}$ denotes volume on the leaf $W(x)$ with the induced metric. Let $f\colon M\to M$ be a diffeomorphism and let $\psi(x) = -\log|\det Df(x)|_{T_x W(x)}|$.
Then $P(f,\psi)\geq 0$. \end{lem} The hypothesis of this lemma is met for a foliation which lies in a cone around a linear foliation (see \cite[\S 7.2]{CFT_BV} for details), so Lemma \ref{lem:Pgeq0} applies to the unstable foliation $W^u$ of the Ma\~n\'e family. We conclude that $P(\varphi^u; g) \geq 0$. To get a unique equilibrium state for $t\varphi^u$, it suffices to show that \[ \Psi(t) := \Psi(\rho, r, t \varphi^u) = (1-r) \sup_{B(q, \rho)} t\varphi^u + r (\sup_{\mathbb{T}^d} t\varphi^u + h + \log L ) + H(2r) \] satisfies $\Psi(t) < P(t\varphi^u)$ for all $t\in [0,1]$ and then apply Theorem \ref{t.mane}. Note that since the inequality is strict, it will then continue to hold for all $t$ in a neighborhood of $[0,1]$. The Ma\~n\'e construction is carried out to leave $E^u$ as unaffected as possible, so we expect that $\sup \varphi^u_{f_M}$ and $\inf \varphi^u_{f_M}$ are close to $\varphi^u_{f_A} \equiv - \log \lambda_u$. Thus, we expect that the $\sup \varphi^u_g / \inf \varphi^u_g$ term in \eqref{eqn:mane-srb-condition2} can be taken close to $1$. Since making this precise would require a lengthy analysis of the details of the construction with only a small benefit to our estimates, we choose not to pursue this argument. For $t\geq0$, we have \begin{equation} \label{upperline} \Psi(t) \leq t(\sup \varphi^u) +r (h + \log L ) + H(2r). \end{equation} At $t=1$, it is immediate from \eqref{eqn:mane-srb-condition3} that \begin{equation} \label{bad0geo} \Psi(1)<0 \leq P(\varphi^u), \end{equation} so $\varphi^u$ has a unique equilibrium state. The case $t\in[0,1)$ requires some more analysis.
The straight line $l_1(t)$ described by \eqref{upperline}, which bounds $\Psi(t)$ above, has its root at \[ t^\ast = -\frac{r (h + \log L ) + H(2r)}{\sup \varphi^u}, \] and by \eqref{eqn:mane-srb-condition3}, $t^\ast <1$. Thus, for $t\in (t^\ast, 1]$, $\Psi(t)<0\leq P(\varphi^u) \leq P(t \varphi^u)$. For $t \in [0, t^\ast]$, the variational principle shows that $P(t\varphi^u) \geq h+ t (\inf \varphi^u)$. Thus we have bounded $P(t\varphi^u)$ from below by a straight line $l_2(t)$. In \eqref{upperline}, we bounded $\Psi(t)$ above by a straight line $l_1(t)$. By \eqref{eqn:mane-srb-condition2}, $l_2(0) > l_1(0)$. The root of $l_2$ is $-h/(\inf \varphi^u)$, and the root of $l_1$ is $t^\ast$. Thus by \eqref{eqn:mane-srb-condition2}, $t^\ast<-h/(\inf \varphi^u)$ and so $l_2(t^\ast)> l_1(t^\ast)$. In particular, for $t \in [0, t^\ast]$, $P(t\varphi^u) \geq l_2(t)>l_1(t) \geq \Psi(t)$. We conclude that $\Psi(t) < P(t\varphi^u)$ for all $t\in [0,1]$, and thus there is a unique equilibrium state by Theorem \ref{t.mane}. It remains to show that $P(\varphi^u;g)= 0$ and that the unique equilibrium state is in fact the unique SRB measure. Let $\mu$ be ergodic, and let $\lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_d$ be the Lyapunov exponents of $\mu$. Recall that $E^{cs} \oplus E^{u}$ is $Dg$-invariant, so for every $\mu$-regular $x$ the Oseledets decomposition is a sub-splitting of $E^{cs} \oplus E^{u}$, and thus $\int \varphi^u\,d\mu =- \lambda_d(\mu)$. Thus, \begin{equation}\label{eqn:phlambda} \int\varphi^u\,d\mu \geq -\lambda^+(\mu) \end{equation} and if $\lambda_{d-1}(\mu) <0$ it follows that $\int \varphi^u\,d\mu =- \lambda^+(\mu)$.
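The line-comparison argument above can be sanity-checked numerically. The sketch below uses hypothetical parameter values (not derived from any concrete Ma\~n\'e example) satisfying \eqref{eqn:mane-srb-condition2} and \eqref{eqn:mane-srb-condition3}, and verifies pointwise on $[0,1]$ that either $l_1(t)<l_2(t)$ or $l_1(t)<0$.

```python
import math

def H(x):
    # binary entropy in nats (H(0) = H(1) = 0), as used in H(2r)
    return 0.0 if x <= 0 or x >= 1 else -x*math.log(x) - (1-x)*math.log(1-x)

# Hypothetical sample parameters, not taken from a concrete Mane example.
h, logL, r = math.log(3), math.log(5), 0.01
sup_phi, inf_phi = -0.9, -1.3   # sup and inf of varphi^u, both negative

c = r * (h + logL) + H(2 * r)
assert c < (sup_phi / inf_phi) * h   # condition (eqn:mane-srb-condition2)
assert c < -sup_phi                  # condition (eqn:mane-srb-condition3)

l1 = lambda t: t * sup_phi + c       # upper bound for Psi(t)
l2 = lambda t: h + t * inf_phi       # lower bound for P(t varphi^u)
t_star = -c / sup_phi                # root of l1; t_star < 1 by condition 3

# On [0, t*] the pressure bound l2 dominates l1; past t* the bound l1
# is negative while P(t varphi^u) >= 0.
for i in range(101):
    t = i / 100
    if t <= t_star:
        assert l1(t) < l2(t)
    else:
        assert l1(t) < 0
```

Since both bounds are affine in $t$, checking the endpoints $0$, $t^\ast$, and $1$ would already suffice; the grid check merely mirrors the pointwise statement in the text.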
Let $\mathcal{M}_* \subset \mathcal{M}_e(g)$ be the set of ergodic $\mu$ such that $\mu$ is hyperbolic and $\lambda_{d-1}(\mu) <0$. \begin{lem} \label{lem:keySRBestimate} If $\mu \in \mathcal{M}_e(g) \setminus \mathcal{M}_*$, then \[ h_\mu(g) -\lambda^+(\mu) \leq h_\mu(g) + \int\varphi^u\,d\mu \leq \Psi(1). \] \end{lem} \begin{proof} If $\mu \in \mathcal{M}_e(g) \setminus \mathcal{M}_*$, then either $\mu$ is not hyperbolic, or $\lambda_{d-1}(\mu)>0$. Then there exists a set $Z\subset M$ with $\mu(Z)=1$ so that for each $z \in Z$, there exists $v \in E^{cs}_z$ with $\lim_{n\to\infty} \tfrac 1n \log \|Dg^n_z(v)\| \geq 0$. We claim that every $z\in Z$ belongs to the set \[ A^+=\{x : \text{there exists } K(x) \text{ so } \tfrac{1}{n}S^g_{n}\chi(x) < r \text{ for all } n>K(x)\}. \] To see this, suppose that $z \notin A^+$. Then there exists $n_k \to \infty$ with $\frac{1}{n_k}S^g_{n_k}\chi(z) \geq r$. By Lemma \ref{centrestable}, this gives \[ \|Dg^{n_k}_z(v)\| \leq \|Dg^{n_k}|_{E^{cs}(z)}\| \, \|v\| \leq (\theta_r)^{n_k}\|v\|, \] and thus $ \limsup_{k\to\infty} \tfrac 1{n_k} \log \|Dg^{n_k}_z(v)\| \leq \log \theta_r<0$, which is a contradiction. Thus, $\mu(A^+) =1$. It follows that \[ h_\mu(g) -\lambda^+(\mu) \leq h_\mu(g) + \int\varphi^u\,d\mu \leq P(\mathcal{C},\varphi^u) \leq \Psi(1), \] where the first inequality uses \eqref{eqn:phlambda}, the second uses Lemma \ref{keystepexpansivityestimate}, and the third uses Theorem \ref{coreestimateallscales}. \end{proof} It follows from Lemma \ref{lem:keySRBestimate}, \eqref{bad0geo}, and the Variational Principle that \begin{equation}\label{eqn:vp*} P(\varphi^u; g) = \sup \left\{h_\mu(g) + \int\varphi^u\,d\mu \, :\, \mu \in \mathcal{M}_*\right\}.
\end{equation} Now, for every $\mu\in \mathcal{M}_*$, we have $\int\varphi^u\,d\mu = -\lambda^+(\mu)$, and thus \begin{equation}\label{eqn:free-energy} h_\mu(g) + \int\varphi^u\,d\mu = h_\mu(g) -\lambda^+(\mu)\leq 0 \end{equation} by \eqref{eqn:nonpos}. Together with \eqref{eqn:vp*} this gives $P(\varphi^u;g) \leq 0$, and we conclude that $P(\varphi^u;g)=0$. It only remains to show that the unique equilibrium state $\mu_1$ is in fact an SRB measure for $g$, and that there are no other SRB measures. Since $\mu_1\in \mathcal{M}_*$, it is hyperbolic, and since $P(\varphi^u;g)=0$, \eqref{eqn:free-energy} gives $h_{\mu_1}(g) - \lambda^+(\mu_1)=0$, so $\mu_1$ is an SRB measure. To see that there are no other SRB measures, we observe that if $\nu\neq \mu_1$ is ergodic, then $h_\nu(g) - \lambda^+(\nu) \leq h_\nu(g) + \int\varphi^u\,d\nu < P(\varphi^u;g)=0$ by the uniqueness of $\mu_1$ as an equilibrium measure. This completes the proof of Theorem \ref{main3}. \section{Large Deviations and Multifractal Analysis} \label{s.ldp} \subsection{Large deviations} The upper level-$2$ large deviations principle is a statement which implies the following estimate on the rate of decay of the measure of points whose Birkhoff sums experience a `large deviation' from the expected value: \begin{equation}\label{eqn:ldp-1} \varlimsup_{n\to\infty} \frac 1n \log \mu \left \{x : \left |\frac 1n S_n\psi(x) - \int \psi \, d\mu \right|> \varepsilon \right \} \leq -q(\varepsilon), \end{equation} where $\varepsilon>0$, $\psi\colon \TT^d\to\RR$ is any continuous function, and $q(\varepsilon) \in[0, \infty] $ is a \emph{rate function}, whose value can be formulated precisely in terms of the free energies of a class of measures depending on $\varepsilon$ and $\psi$.
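To illustrate the shape of \eqref{eqn:ldp-1} in the simplest possible setting (an illustration only, not one of the Ma\~n\'e diffeomorphisms), consider the doubling map with Lebesgue measure: binary digits are i.i.d. fair bits, the rate function is given by Cram\'er's theorem, and the finite-$n$ decay rate can be computed exactly from binomial tails.

```python
import math

# Toy check of an upper large-deviations bound: for i.i.d. fair bits
# (binary digits of the doubling map under Lebesgue measure), take
# psi = leading digit, so (1/n) S_n psi is the digit frequency and
# int psi dmu = 1/2.  Cramer's rate at 1/2 + eps is log 2 - H(1/2 + eps).
def H(x):
    return -x*math.log(x) - (1-x)*math.log(1-x)

n, eps = 2000, 0.1
a = 1/2 + eps
k0 = round(n * a)   # integer cutoff for the tail event S_n >= n*a

# P(S_n/n >= a), computed exactly from the binomial distribution
tail = sum(math.comb(n, k) for k in range(k0, n + 1)) / 2**n
empirical_rate = -math.log(tail) / n
cramer_rate = math.log(2) - H(a)

# Already at n = 2000 the finite-n rate is close to the Cramer rate.
assert abs(empirical_rate - cramer_rate) < 0.01
```

For the equilibrium states discussed here no closed-form rate is available; the point of the upper bound \eqref{eqn:ldp-1} is precisely that $q(\varepsilon)$ can still be expressed through free energies.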
That our equilibrium measures satisfy the upper level-$2$ large deviations principle follows from Theorem 5.5 of \cite{CT4}. That result states that any equilibrium state provided by Theorem \ref{t.generalM} satisfies the upper level-$2$ large deviations principle; it is a consequence of a general large deviations result of Pfister and Sullivan \cite{PfS} together with a weak upper Gibbs property which is satisfied by our equilibrium states. The question of lower large deviations bounds for Ma\~n\'e diffeomorphisms remains open. \subsection{Multifractal analysis}\label{sec:multifractal} Let $g$ be a $C^2$ diffeomorphism satisfying the hypotheses of Theorem \ref{main3}. For each $t\in [0,1]$, let $\mu_t$ be the unique equilibrium state for $t\varphi^u$ given by Theorem \ref{main3}. Then $\mu_0$ is the unique measure of maximal entropy and $\mu_1$ is the unique SRB measure. It follows from Lemma \ref{hexp-mane} that the entropy map $\mu\mapsto h_\mu(g)$ is upper semicontinuous; hence, by Remark 4.3.4 of \cite{Ke98}, uniqueness of the equilibrium state implies that the function $t\mapsto P(t\varphi^u)$ is differentiable on $(-\varepsilon,1+\varepsilon)$, with derivative $\int \varphi^u \,d\mu_t = -\chi^+(\mu_t)$, where $\chi^+(\mu_t)$ denotes the largest Lyapunov exponent of $\mu_t$. This has immediate consequences for multifractal analysis. Given $\chi\in \RR$, let \begin{align*} K_\chi &= \{x\in \TT^d \mid \lim_{n\to\infty} \tfrac 1n \log \|Dg^n|_{E^u(x)}\| = \chi\} \\ &= \{x \in \TT^d \mid \lim_{n\to\infty} \tfrac 1n S_n\varphi^u (x) = -\chi\} \end{align*} be the set of points whose largest Lyapunov exponent exists and is equal to $\chi$. The following is a direct consequence of Theorem \ref{main3} and \cite[Corollary 2.9]{C14}. \begin{thm}\label{thm:multifractal} Let $g$ and $\mu_t$ be as in Theorem \ref{main3}. Let $\chi_0 = \chi^+(\mu_0)$ and $\chi_1 = \chi^+(\mu_1)$.
Then for every $\chi\in [\chi_1,\chi_0]$, we have \begin{align*} h_\mathrm{top}(K_\chi,g) &= \inf\{P(t\varphi^u) + t\chi \mid t\in \RR \} \\ &= \sup \{h_\mu(g) \mid \mu\in \mathcal{M}_g(\TT^d), \chi^+(\mu) = \chi\} \\ &= \sup \{h_\mu(g) \mid \mu \in \mathcal{M}_g^e(K_\chi) \}, \end{align*} where $h_\mathrm{top}(K_\chi,g)$ is the topological entropy defined as a dimension characteristic in the sense of Bowen \cite{rB73}. The infimum in the first line is achieved for some $t\in [0,1]$, and for this $t$ we have $h_\mathrm{top}(K_\chi,g) = h_{\mu_t}(g)$. \end{thm} \subsection*{Acknowledgments} We thank the American Institute of Mathematics, where some of this work was completed as part of a SQuaRE. \bibliographystyle{plain}
\section{Introduction} An optical frequency comb (OFC) is a spectral source featuring a series of isolated, uniformly spaced spectral lines \cite{Pilozzi2017}. OFCs are readily built from a diverse range of all-optical building blocks, including mode-locked fiber lasers \cite{jones2000carrier,fortier201920,jin2006absolute}, ring resonators \cite{del2007optical,Herr2014}, sampled or superstructured fiber Bragg gratings (SFBGs) \cite{Dong2006,Lee2003,Lee2004,Lee2004a,Li2003,Loh1999,Navruz2008,Zhang2019,Zou2006}, $\mathcal{PT}$-symmetric topological structures \cite{Pilozzi2017}, and so forth. OFCs have a broad spectral span (with or without uniform amplitudes) \cite{Li2003}, and owing to this inherent property, the footprints of OFCs are found in both classical and quantum optical applications such as telecommunication systems \cite{Dong2006,Lee2003,Lee2004,Lee2004a,Li2003,Loh1999,Navruz2008,Zhang2019,Zou2006}, ultrafast spectroscopy \cite{Gohle2005}, generation of attosecond pulses \cite{baltuvska2003attosecond}, quantum computing \cite{Pfister2020}, and optical frequency metrology \cite{udem2002optical}. Precise control over the spectral characteristics of OFCs is one of the challenging aspects of scientific investigation \cite{jayaraman1993theory,Li2003,Zou2006,fortier201920}, besides the scalability and on-chip integration of these OFC sources \cite{baltuvska2003attosecond}. For their outstanding contributions to the generation of OFCs and their application to laser-based precision spectroscopy \cite{holzwarth2000optical}, Hall and H\"ansch were awarded the Nobel Prize in 2005 \cite{hall2006nobel,hansch2006nobel}. Since then, investigations on the generation, control, and applications of OFCs in various fields have remained among the active areas of research \cite{fortier201920}.
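The defining property of an OFC, a set of isolated and uniformly spaced lines, can be sketched in a few lines of code. The repetition rate and carrier-envelope offset values below are assumed purely for illustration.

```python
# Toy model of an OFC line spectrum: f_n = f_ceo + n * f_rep.
# The two parameters below are assumed values for illustration only.
f_rep = 100e6   # repetition rate (line spacing), 100 MHz
f_ceo = 20e6    # carrier-envelope offset frequency, 20 MHz

# A window of comb lines around an optical carrier (n ~ 2 million
# corresponds to ~200 THz, i.e. the 1.5 um telecom band).
lines = [f_ceo + n * f_rep for n in range(2_000_000, 2_000_100)]

# The comb is usable as a frequency ruler because the spacing is uniform.
uniform = all(abs((b - a) - f_rep) < 1.0 for a, b in zip(lines, lines[1:]))
assert uniform
```

This two-parameter description ($f_{\mathrm{rep}}$, $f_{\mathrm{ceo}}$) is exactly what makes combs attractive for metrology: locking two radio frequencies fixes millions of optical lines at once.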
Research interest in modern lightwave communication systems is primarily targeted at effective utilization of the available channel bandwidth via judicious design of multi-channel devices that can simultaneously handle many independent frequencies \cite{Dong2006,Lee2003,Lee2004,Lee2004a,Li2003,Loh1999,Navruz2008,Zhang2019,Zou2006}. In particular, OFCs based on SFBGs are fascinating because they are compact and less complex, and they can be employed to handle multi-channel functionalities like multi-wavelength lasers \cite{jayaraman1993theory,ishii1993multiple,Sourani2019}, broadband dispersion compensators \cite{Li2003,Lee2003,Lee2004a}, transverse-load sensing devices \cite{Shu2003}, space-tunable multi-channel notch filters \cite{Li2008}, etc. Previously, a frequency comb spectrum in the presence of gain and loss was realized in a topological cascaded laser \cite{Pilozzi2017}, but its realization in simple optical devices like the fiber Bragg grating (FBG) remains unexplored; this is a subject of the present investigation. Hence, it is important to critically analyze some of the fundamental concepts in designing a sampled grating structure, alias superstructure, in order to realize an OFC. The SFBG is a class of periodic Bragg structures in which the refractive index (RI) of the core is kept free of spatial variation between two adjacent samples \cite{Shu2003}. The samples are the regions of the core whose RI is permanently modulated by exposing them to intense UV radiation \cite{Li2003,Lee2003}. These samples are commonly referred to as seed gratings since they form the basic building units of the structure \cite{Li2003}. Even though these basic building blocks are nothing but conventional FBG structures (uniform or nonuniform), the SFBG exhibits some unique characteristics both in its structure and its spectrum \cite{Eggleton1994,DeSterke1997,Eggleton1996,DeSterke1996}.
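Since each seed grating is itself a conventional uniform FBG, its peak reflectivity at the Bragg wavelength follows the standard coupled-mode-theory result $R=\tanh^2(\kappa L)$ \cite{erdogan1997fiber}, where $\kappa$ is the coupling coefficient and $L$ the grating length. A short sketch with an assumed value of $\kappa$:

```python
import math

# Peak reflectivity of a uniform seed grating at the Bragg wavelength,
# R = tanh^2(kappa * L), from standard coupled-mode theory.
# The coupling coefficient below is an assumed illustrative value.
kappa = 200.0   # 1/m

R_values = [math.tanh(kappa * L_mm * 1e-3) ** 2 for L_mm in (1, 2, 5, 10)]

# Reflectivity lies in (0, 1) and grows monotonically with grating length.
assert all(0 < R < 1 for R in R_values)
assert R_values == sorted(R_values)
```

The saturation of $\tanh^2$ toward $1$ is why strong single-channel FBGs are easy, while keeping many channels strong simultaneously in an SFBG is the harder design problem discussed below.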
Physically, the term sampling length ($s_L$) refers to the length of one sample, and each sample is separated from its predecessor by a distance $s_\Delta$ (a region of constant RI unexposed to UV). Hence, the period of the sampled grating is given by $s_\Lambda = s_L + s_\Delta$ \cite{Zhang2019,Zou2006,Navruz2008}, as shown in Fig. \ref{fig1}. The RI modulation of an individual sample is low, while the modulation depth of the overall structure is large \cite{DeSterke1995,DeSterke1996}. Also, the grating period ($\Lambda$) of each FBG is very small ($<$ 1 $\mu$m) compared to the sampling period (of the order of mm) \cite{erdogan1997fiber,DeSterke1995}, which results in a broad spectral span of the SFBG in contrast to the conventional FBG spectrum \cite{erdogan1997fiber,DeSterke1995,DeSterke1997}. The SFBG spectrum consists of many discrete spectral lines of high reflectivity and narrow width \cite{Zou2006,Lee2004}, with each spectral line dedicated to one particular wavelength. The origin of such a distinct spectrum can be well understood from the photonic band-gap structure of a SFBG, which possesses additional Bragg resonances or multiple band gaps that differentiate it from a conventional FBG \cite{DeSterke1995,DeSterke1997,Eggleton1996}. Thus, higher-order reflection modes become inevitable attributes of the SFBG spectrum \cite{Zhang2019,DeSterke1995}. The periodic nature of the inter-coupling parameter is yet another remarkable property of SFBGs \cite{DeSterke1997,DeSterke1995}. These features can be deemed a precursor for realizing multi-wavelength applications such as the filters and others mentioned previously \cite{Li2003}.
Theoretical and experimental investigations on SFBGs are predominantly focused on improving the spectral characteristics, such as increasing the number of usable optical channels within the available bandwidth by reducing the channel spacing \cite{Navruz2008,Zhang2019,Zou2006}, and reducing the nonuniformity in the reflectivity (transmittivity) of the multiple channels \cite{ibsen1998sinc}. These desirable attributes can be altered in accordance with the application of interest, thanks to the availability of a wide range of sampling functions and flexible design methodologies for fabricating them \cite{ibsen1998sinc,Lee2003,Li2003}. Broadly, these sampling windows can be categorized into amplitude sampling \cite{ibsen1998sinc}, phase-only sampling \cite{Li2003,Lee2003}, and hybrid sampling functions \cite{Navruz2008}. The uniform sampling technique is the simplest and most straightforward approach to realizing SFBGs. However, the channels in the output spectrum of a uniformly sampled SFBG are likely to be nonuniform in amplitude, and their control remains a challenge. In general, the SFBG suffers from a decrease in reflectivity as the number of channels increases \cite{Li2003,ibsen1998sinc,Zou2006,Lee2003}. For SFBGs without gain and loss, phase sampling techniques are widely incorporated to overcome this issue \cite{Lee2003,Lee2004,Navruz2008}. Having concisely discussed the general concepts in the physical realization of SFBGs and the spectra exhibited by them, we would like to note that all the above-mentioned types of SFBG structures can be revisited by researchers from the perspective of $\mathcal{PT}$-symmetric structures. It is worthwhile to note that invoking the notion of $\mathcal{PT}$-symmetry in a conventional FBG is itself of high scientific interest, as it leads to novel non-Hermitian optical systems.
Here we take a step further and establish $\mathcal{PT}$-symmetry in a FBG superstructure, which, to our knowledge, has not yet been addressed in the literature. Realizing a $\mathcal{PT}$-symmetric SFBG (PTSFBG) simply requires the RI modulation of the seed gratings (samples) to obey the $\mathcal{PT}$-symmetric condition $n(z) = n^*(-z)$ \cite{kottos2010optical,el2007theory,ruter2010observation,sarma2014modulation,lin2011unidirectional,phang2013ultrafast,miri2012bragg,huang2014type,govindarajan2018tailoring}. It is well known that the real and imaginary parts of the RI profile need to be even and odd functions of the propagation distance ($z$), respectively, for a $\mathcal{PT}$-symmetric FBG (PTFBG) \cite{huang2014type,lin2011unidirectional,miri2012bragg}. Prior to this work, the linear spectra of homogeneous \cite{miri2012bragg,lin2011unidirectional,kulishov2005nonreciprocal} and inhomogeneous $\mathcal{PT}$-symmetric gratings \cite{huang2014type,lupu2016tailoring,raja2020tailoring,raja2020phase} were investigated, but the literature lacks any comprehensive investigation of the linear spectrum of a sampled PTFBG. Nevertheless, the demonstration of a frequency comb in a supersymmetric (SUSY) DFB structure by Longhi \cite{longhi2015supersymmetric} could be translated into the context of PTFBGs. In this paper, we consider a SFBG with gain and loss under the expectation that the concept of reversal of the direction of incidence \cite{kulishov2005nonreciprocal,longhi2010optical,phang2015versatile} can overcome the problem of reduced reflectivity with an increasing number of channels that afflicts conventional SFBGs. Recently, discrete comb lasing modes were demonstrated in a topological $\mathcal{PT}$-symmetric structure \cite{Pilozzi2017}.
It should be noted that the broken $\mathcal{PT}$-symmetric spectrum of any PTFBG features lasing behavior, as reported by many authors \cite{raja2020tailoring,phang2014impact,huang2014type,raja2020phase,longhi2010pt}. The natural question arising from these investigations is whether it is possible to realize discrete and identical lasing modes with the aid of a SFBG operating in the broken $\mathcal{PT}$-symmetric regime. We believe that the inherent ability of PTFBGs to offer multiple functionalities in different $\mathcal{PT}$-symmetric regimes, and the degrees of freedom they offer for optimizing the desired spectral characteristics, look promising for engineering applications like filters and tunable laser sources in a SFBG structure. With these motivations, we organize the article as follows. Section \ref{Sec:2} describes the theoretical modeling of the PTSFBG in addition to the mathematical description of the system based on the transfer matrix method. In Sec. \ref{Sec:3}, the comb filtering application of the system and its optimization with the grating parameters are presented in the unbroken $\mathcal{PT}$-symmetric regime, with special emphasis on the right light incidence direction. The reflection-less wave transport phenomenon at the unitary transmission point is illustrated in Sec. \ref{Sec:4}. Section \ref{Sec:5} illustrates the discrete multi-channel lasing spectrum of the system in the broken $\mathcal{PT}$-symmetric regime. The inferences from the previous sections are summarized in Sec. \ref{Sec:6}. \section{Mathematical model} \label{Sec:2} \begin{figure*} \centering \includegraphics[width=0.85\linewidth]{fig1}\\ \caption{Schematic of a PTSFBG of length $L$ made of uniform PTFBG samples of length $s_L$ and sampling period $s_\Lambda$. Here the number of samples ($N_s$) is taken to be four. Each sample is separated from the next sample by a region of $s_\Delta = s_\Lambda - s_L$.
Each sample consists of many unit cells of grating period $\Lambda$. The $\mathcal{PT}$-symmetric RI condition is achieved in each unit cell by having alternate regions of gain (red) and loss (green) \cite{phang2013ultrafast,phang2014impact,raja2020tailoring}.} \label{fig1} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{fig2} \caption{Schematics of the modulation in the (a) real ($n^{'}_{R}$) and (b) magnitude of the imaginary ($n^{'}_{I}$) parts of the RI profile. The solid lines represent the modulated RI of the sample over a distance of $s_L$. The blank spaces in each sampling period indicate that the core RI is unexposed to UV over a region of $s_\Delta$. } \label{fig2} \end{figure} The description of the device is as follows: the structure is built up of (multiple) uniform PTFBG samples separated from each other by regions of the core unexposed to UV. The RI distribution [$n(z)$] of a sample that includes the effect of $\mathcal{PT}$-symmetry is given by \begin{equation} n(z)=n_{0}+n_{1R}\cos\left(\frac{2\pi}{\Lambda}z\right)+in_{1I}\sin\left(\frac{2\pi}{\Lambda}z\right), \label{Eq:norm1} \end{equation} where $n_0$ stands for the constant RI of the core. The grating's modulation strength is a complex entity whose real and imaginary parts are given by $n_{1R}$ and $n_{1I}$, respectively. The notation $\Lambda$ in Eq. (\ref{Eq:norm1}) refers to the grating period of each of the seed gratings, as indicated pictorially in Fig. \ref{fig1}. Each sample has a uniform length $s_L$ and is followed by a region unexposed to UV (of length $s_\Delta$); this pattern repeats cyclically with the sampling period ($s_\Lambda$). It is important to point out that the system locally satisfies the $\mathcal{PT}$-symmetric condition in each unit cell as well as over the sampling length.
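As a quick numerical sanity check (illustrative only, not part of the device model), the RI profile of Eq. (\ref{Eq:norm1}) can be verified to satisfy the $\mathcal{PT}$ condition $n(z) = n^*(-z)$. The following Python sketch assumes the parameter values used later in the paper ($n_0 = 1.45$, $\lambda_b = 1550$ nm, hence $\Lambda = \lambda_b/2n_0 \approx 534.5$ nm) together with illustrative modulation strengths:

```python
import math

def refractive_index(z, n0=1.45, n1R=5e-4, n1I=4e-4, Lam=534.5e-9):
    # Eq. (1): n(z) = n0 + n1R*cos(2*pi*z/Lam) + i*n1I*sin(2*pi*z/Lam)
    phase = 2 * math.pi * z / Lam
    return n0 + n1R * math.cos(phase) + 1j * n1I * math.sin(phase)

# PT-symmetry requires the real part to be even and the imaginary part odd,
# i.e. n(z) must equal the complex conjugate of n(-z) at every point.
for z in (0.0, 1.3e-7, 2.7e-7):
    assert abs(refractive_index(z) - refractive_index(-z).conjugate()) < 1e-12
```

Because $\cos$ is even and $\sin$ is odd, the condition holds identically for any choice of $n_{1R}$ and $n_{1I}$.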
One unit cell is formed by having a real [$n_{R}^{'}$ $=$ $n_{1R}$ $\cos(2 \pi z/\Lambda)$] and an imaginary [$ n_{I}^{'}$ $=$ $n_{1I}$ $\sin(2 \pi z/\Lambda)$] modulation of the refractive index. The modulations $n_{R}^{'}$ and $n_{I}^{'}$ are depicted in Figs. \ref{fig2}(a) and \ref{fig2}(b). Each sample (uniform PTFBG) features a number of alternating regions of gain and loss. It is useful to introduce an important parameter, namely the duty cycle, which must be judiciously varied to achieve the desired spectrum \cite{erdogan1997fiber}. Mathematically, the duty cycle is defined as the ratio of the sample length to the sampling period and reads as \begin{equation} d=\cfrac{s_L}{s_\Lambda}. \label{Eq:norm2} \end{equation} The PTSFBG proposed here is simply a uniform PTFBG in which the grating elements are stripped off in a periodic fashion. The resulting spectrum of the device can be found by the coupled-mode theory (CMT) formalism. The transfer matrix method (TMM) is a first choice for analyzing any complex FBG structure, as it offers high accuracy and consumes less computation time than other techniques like the Gel'fand-Levitan-Marchenko inverse scattering method, standard thin-film techniques, or the Rouard theory of waveguides \cite{erdogan1997fiber,kashyap2009fiber}. Each of these other techniques has its own demerits: first, in thin-film-based approaches, the accuracy of the simulated results is limited by round-off errors in the computation, and such approaches cannot fully characterize both the phase and amplitude responses of complex types of gratings like nonuniform FBGs or superstructures. In the Rouard method, if the number of grating periods or the length of the grating itself is large, the number of matrices also increases \cite{erdogan1997fiber}; the computation thus becomes more complex and time consuming. The TMM, nonetheless, is capable of addressing all these issues.
Many types of physically realizable gratings have been fully characterized by employing this technique \cite{kashyap2009fiber}. This is because the TMM approach allows computation of the output field of a short section of the grating in a single iteration \cite{kashyap2009fiber}. In the subsequent iterations, the resulting matrix that represents the output fields from the previous section is taken as the input matrix for the given section, and this process is repeated until the whole FBG is computed \cite{raja2020tailoring}. Another important reason to choose the TMM over other methods (for modeling a SFBG) is that direct analytical solutions are tedious to calculate if the number of samples is large \cite{erdogan1997fiber,kashyap2009fiber}. Direct integration of the coupled-mode equations may not work easily if the sample contains abrupt phase jumps in its RI profile \cite{Li2003,erdogan1997fiber}. Here, the relation between the number of samples ($N_s$), the sampling period ($s_\Lambda$), and the length of the whole device ($L$) is given by \begin{gather} N_s=L/s_\Lambda. \end{gather} Note that the total number of samples ($N_s$) can be varied as per the requirement by manipulating the sampling period while fixing the length of the device (unless specified). Mathematically, let the matrices corresponding to these samples be denoted by $\mathcal{L}_{\mathcal{S}}$, where $\mathcal{S} = 1, 2, \dots, N_s$. As an example, a PTSFBG with four samples ($N_s = 4$) is shown in Fig. \ref{fig1}. To model this PTSFBG structure, the following routine is adopted: all the samples in the above discussion are taken to be \emph{identical} in the present investigation. Therefore, the matrices that represent these samples are also identical ($\mathcal{L}_1 = \mathcal{L}_2 = \mathcal{L}_3 = \mathcal{L}_4$).
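The three geometric relations introduced so far ($s_\Lambda = s_L + s_\Delta$, $d = s_L/s_\Lambda$, and $N_s = L/s_\Lambda$) fix the sampling layout completely. A minimal helper, sketched here with the Fig. \ref{fig4} parameters as an illustrative example, might read:

```python
def sampling_layout(L, s_Lambda, d):
    """Derive the sampling layout from device length, sampling period and duty cycle."""
    N_s = round(L / s_Lambda)      # Eq. (3): number of samples
    s_L = d * s_Lambda             # Eq. (2): exposed (seed-grating) length per period
    s_Delta = s_Lambda - s_L       # unexposed length per period
    return N_s, s_L, s_Delta

# L = 10 mm, s_Lambda = 500 um, d = 0.1 (the parameters of Fig. 4)
N_s, s_L, s_Delta = sampling_layout(10e-3, 500e-6, 0.1)
assert N_s == 20
assert abs(s_L - 50e-6) < 1e-12 and abs(s_Delta - 450e-6) < 1e-12
```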
The regions unexposed to UV within each sampling period $s_\Lambda$ are simply modeled by a phase matrix $\mathcal{L}_{\Delta}$ whose matrix elements are given by \cite{erdogan1997fiber} \begin{gather} \mathcal{L}_{\Delta}=\left[\begin{array}{cc} \exp\left(\cfrac{2\pi i n_0s_\Delta}{\lambda}\right) & 0\\ 0 & \exp\left(\cfrac{-2\pi i n_0s_\Delta}{\lambda}\right) \end{array}\right], \label{Eq:norm6} \end{gather} where $s_\Delta$ and $\lambda$ stand for the length of the region unexposed to UV and the operating wavelength, respectively. Let $u_0$ and $v_0$ represent the input fields of the PTSFBG. Similarly, the output fields are given by $u_{out}$ and $v_{out}$. Since the PTSFBG is built up of repeated units of samples, each followed by a core region unexposed to UV, the corresponding phase matrix ($\mathcal{L}_\Delta$) should be inserted in between the matrices representing the samples. Hence, the total electric field propagating through the device is given by \begin{widetext} \begin{gather} \left[\begin{array}{c} u_{out}\\ v_{out} \end{array}\right]= \mathcal{L}_1 \times \mathcal{L}_\Delta \times \mathcal{L}_2 \times \mathcal{L}_\Delta \times \mathcal{L}_{3} \times \mathcal{L}_\Delta \times \mathcal{L}_4 \left[\begin{array}{c} u_0\\ v_0 \end{array}\right] = \left[\begin{array}{cc} \mathcal{L}_{11}& \mathcal{L}_{12} \\ \mathcal{L}_{21} & \mathcal{L}_{22} \end{array}\right] \left[\begin{array}{c} u_0\\ v_0 \end{array}\right]. \label{Eq:Norm9} \end{gather} To find the matrices $\mathcal{L}_1$, $\mathcal{L}_2$, $\mathcal{L}_3$, and $\mathcal{L}_4$, each sample (of length $s_L$) is divided into $n_s$ \emph{piece-wise uniform} sections (of length $\Delta z$). Thus, a sample is assumed to be a functional block formed by cascading $n_s$ sections of uniform PTFBG ($s_L = n_s \times \Delta z$). Cascading different sections physically corresponds, mathematically, to multiplying their respective transfer matrices.
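Since the unexposed region only adds propagation phase, the matrix of Eq. (\ref{Eq:norm6}) must be unimodular with unit-magnitude diagonal entries. A minimal sketch (illustrative, with the paper's $n_0 = 1.45$) that confirms these properties numerically:

```python
import cmath

def phase_matrix(s_Delta, lam, n0=1.45):
    # Eq. (4): pure propagation phase accumulated over the UV-unexposed region
    phi = 2 * cmath.pi * n0 * s_Delta / lam
    return [[cmath.exp(1j * phi), 0j], [0j, cmath.exp(-1j * phi)]]

Ld = phase_matrix(450e-6, 1550e-9)
# Lossless propagation: unit-magnitude diagonal entries and unit determinant.
assert abs(abs(Ld[0][0]) - 1) < 1e-12 and abs(abs(Ld[1][1]) - 1) < 1e-12
assert abs(Ld[0][0] * Ld[1][1] - Ld[0][1] * Ld[1][0] - 1) < 1e-12
```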
Let $\mathcal{N}_j$ ($j = 1, 2, \dots, n_s$) represent the matrix that corresponds to the $j^{\mathrm{th}}$ section of the sample. Therefore, \begin{gather} \mathcal{L}_1 = \mathcal{N}_1 \times \mathcal{N}_2 \times \mathcal{N}_3 \times \dots \times \mathcal{N}_{n_s}. \end{gather} The standard form of $\mathcal{N}_j$ for a uniform PTFBG has already been derived in detail in our previous work \cite{raja2020phase}, and its final form is given by Eq. (\ref{Eq:Norm8}), \begin{gather} \mathcal{N}_j=\left[\begin{matrix} \cosh(\hat{\sigma}_j {\Delta z} )+i\left(\cfrac{\delta_j}{\hat{\sigma}_j}\right)\sinh(\hat{\sigma}_j {\Delta z}) & i\left(\cfrac{\kappa_j+g_j}{\hat{\sigma}_j}\right)\sinh(\hat{\sigma}_j \Delta z) \\-i\left(\cfrac{\kappa_j-g_j}{\hat{\sigma}_j}\right)\sinh(\hat{\sigma}_j {\Delta z}) &\cosh(\hat{\sigma}_j {\Delta z} )-i\left(\cfrac{\delta_j}{\hat{\sigma}_j}\right)\sinh(\hat{\sigma}_j {\Delta z}) \end{matrix}\right], \label{Eq:Norm8} \end{gather} where $j = 1, 2, \dots, n_s$. In Eq. (\ref{Eq:Norm8}), $\hat{\sigma}_j$, $\kappa_j$, $g_j$, and $\delta_j$ represent, respectively, the propagation constant, coupling, gain-loss, and detuning coefficients of the piece-wise uniform $j^{\mathrm{th}}$ section of the sample. For a uniform PTSFBG, these coefficients are the same in all the sections and therefore \begin{gather} \nonumber \hat{\sigma}_j=\left(\kappa_j^2-g_j^2-\delta_j^2\right)^{1/2}, \qquad \kappa_j = \kappa = \pi n_{1R}/\lambda,\qquad g_j = g = \pi n_{1I}/\lambda, \\ \delta_j = \delta = 2\pi n_{0} \left (\cfrac{1}{\lambda}-\cfrac{1}{\lambda_b}\right), \quad \lambda_b = 2 n_{0} \Lambda. \end{gather} Note that the model can be extended to $N_s$ samples by increasing the value of $\mathcal{S}$. In such a case, Eq.
(\ref{Eq:Norm9}) can be rewritten as \begin{gather} \left[\begin{array}{c} u_{out}\\ v_{out} \end{array}\right]= \mathcal{L}_1 \times \mathcal{L}_\Delta \times \dots \times \mathcal{L}_{N_{s}-1} \times \mathcal{L}_\Delta \times \mathcal{L}_{N_{s}} \left[\begin{array}{c} u_0\\ v_0 \end{array}\right] = \left[\begin{array}{cc} \mathcal{L}_{11}& \mathcal{L}_{12} \\ \mathcal{L}_{21} & \mathcal{L}_{22} \end{array}\right] \left[\begin{array}{c} u_0\\ v_0 \end{array}\right]. \label{Eq:Norm9a} \end{gather} The reflection and transmission coefficients are simply the squared magnitudes of the corresponding amplitudes. These coefficients can be directly computed by applying the FBG boundary conditions $u_0 = 1$ and $v_{out} = 0$ in Eq. (\ref{Eq:Norm9}), and they read as \cite{lin2011unidirectional,raja2020tailoring} \begin{gather} R_{L}=|- \mathcal{L}_{21}/\mathcal{L}_{22}|^2, \quad R_{R} = |\mathcal{L}_{12}/\mathcal{L}_{22}|^2, \quad T=|(|\mathcal{L}_{11}\mathcal{L}_{22}-\mathcal{L}_{12}\mathcal{L}_{21}|)/\mathcal{L}_{22}|^{2}. \label{Eq: mul3} \end{gather} \end{widetext} The nature of the $\mathcal{PT}$-symmetry can be classified based on the coupling and gain-loss coefficients as \begin{gather} g\begin{cases}<\kappa, & \text{for: unbroken $\mathcal{PT}$-symmetric regime}\\=\kappa, & \text{at: exceptional point} \\>\kappa, & \text{for: broken $\mathcal{PT}$-symmetric regime}.\end{cases} \end{gather} The Bragg wavelength of the sample is assumed to be $1550$ nm and the constant core index ($n_0$) is taken as $1.45$ throughout the paper. \section{Unbroken $\mathcal{PT}$-symmetric regime} \label{Sec:3} It should be remembered that the output fields of uniform, chirped, and other PTFBG devices consist of a single spectrum centered at the Bragg wavelength \cite{lin2011unidirectional,raja2020tailoring}. In contrast to these structures, the optical field (reflected and transmitted light) emerging from a PTSFBG is distinct and is commonly referred to as a comb spectrum for two reasons.
First, the generated spectrum is characterized by periodic maxima (minima), i.e., a series of sharp spectral lines (resembling the teeth of a comb) \cite{jayaraman1993theory,Li2003}. Second, all these spectral lines are equidistant from each other in the wavelength domain and share a common phase evolution, as a result of which the frequency dynamics of every mode in the comb spectrum is deterministic in nature \cite{fortier201920}. With this brief description, we directly look into the comb spectrum of a PTSFBG by following the routine proposed in Sec. \ref{Sec:2}. Among the various spectral features (such as delay, dispersion, and so on), this article deals only with the variations in the reflection and transmission characteristics of a comb spectrum (with respect to changes in the PTSFBG parameters), which are directly computed from Eq. (\ref{Eq: mul3}). \subsection{Influence of $\mathcal{PT}$-symmetry ($n_{1I}$)} \begin{figure} \centering \includegraphics[width=0.5\linewidth]{fig4a}\includegraphics[width=0.5\linewidth]{fig4b} \caption{(a) Reflection and (b) transmission spectra of a conventional SFBG ($n_{1I}=0$) having grating parameters $L = 10$ mm, $n_{1R}=5\times 10^{-4}$, $d=0.1$, and $s_\Lambda=500$ $\mu$m. The reflectivity peaks are denoted by the notation $R_{R(m)}^{p}$ and $R_{L(m)}^{p}$, where $L$, $R$ in the subscript indicate left and right incidences, respectively. The transmission dips are represented as $T_m^d$ and the full width at half maximum (FWHM) is denoted as $w_m$.
$m$ indicates the order of the mode and takes the values $-1$, $0$, $1$, $2$ for a sampling period of $500$ $\mu$m in the wavelength range $1547$ -- $1554$ nm; the corresponding values of $\lambda^p_m$ are given by 1548.3, 1549.9, 1551.6, and 1553.3 nm, respectively.} \label{fig4} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\linewidth]{fig5a}\includegraphics[width=0.5\linewidth]{fig5b}\\\includegraphics[width=0.5\linewidth]{fig5c}\includegraphics[width=0.5\linewidth]{fig5d}\\\includegraphics[width=0.5\linewidth]{fig5e}\includegraphics[width=0.5\linewidth]{fig5f} \caption{(a, c) Reflection and (b) transmission spectra of a PTSFBG having the same device parameters as in Fig. \ref{fig4}. The inset in (b) shows the transmission spectrum pertaining to a single channel. (d) Continuous variation of the transmission dips ($T_m^d$); the inset depicts the variation in the FWHM ($w_m$) against variation in $n_{1I}$. (e) and (f) Continuous variation of the reflectivity peaks [$R_{R(m)}^{p}$ and $R_{L(m)}^{p}$] against variation in $n_{1I}$.} \label{fig5} \end{figure} For simplicity, we first consider the spectrum with four modes ($N = 4$) in the wavelength range $\lambda = 1547$ to $1554$ nm by assuming a sampling period of $s_\Lambda$ = 500 $\mu$m and $d = 0.1$, as shown in Fig. \ref{fig4}. The mode which occurs close to the Bragg wavelength generally features the highest reflectivity, and the detuning parameter corresponding to this mode is nearly zero ($\delta \approx 0$); hence this mode is designated as the zeroth-order mode ($m = 0$). On either side of the zeroth-order mode, higher-order modes occur at discrete wavelengths corresponding to the positive ($\delta>0$) and negative ($\delta<0$) detuning regimes, and hence the modes in Fig. \ref{fig4} are designated as $m = -1$, 0, 1, and 2 rather than $m = 0$, 1, 2, and 3. The linear spectrum of a conventional sampled FBG is illustrated in Figs.
\ref{fig4}(a) and \ref{fig4}(b), which confirm that the reflection and transmission of each channel always obey the condition $R+T=1$, indicating that the Hamiltonian of the system is conserved in the absence of $\mathcal{PT}$-symmetry. However, with the inclusion of gain and loss, the reflectivity of each channel (indicated by the subscript $m$) increases provided that the light launching direction is the right ($R_R$) direction, as shown in Figs. \ref{fig5}(a) and \ref{fig5}(e). Nevertheless, when the incident direction is reversed, the reflectivity ($R_L$) of all the channels is reduced, as seen in Figs. \ref{fig5}(c) and \ref{fig5}(f), when compared to the reflection spectrum shown in Fig. \ref{fig4}(a). The dips in the transmittivity ($T^d_m$) of the individual channels are clearly influenced by the presence of $\mathcal{PT}$-symmetry, as shown in Figs. \ref{fig5}(b) and \ref{fig5}(d). Moreover, the transmittivity at the side lobes of the individual channels is reduced provided that $n_{1I}$ is sufficiently large, say $n_{1I}=4\times10^{-4}$. Also, a significant reduction in the FWHM of a single channel is observed, as shown in the insets of Figs. \ref{fig5}(b) and \ref{fig5}(d). \subsection{Variations in the sampling period ($s_\Lambda$)} \begin{figure} \centering \includegraphics[width=0.5\linewidth]{fig6a}\includegraphics[width=0.5\linewidth]{fig6b}\\\includegraphics[width=0.5\linewidth]{fig6c}\includegraphics[width=0.5\linewidth]{fig6d}\\\includegraphics[width=0.85\linewidth]{fig6e} \caption{(a) -- (c) Influence of the sampling period on the unbroken PTSFBG spectrum with the same device parameters as in Fig. \ref{fig5}. Also, $n_{1I}$ is kept constant at $4\times10^{-4}$. (d) Illustrates the reflection spectrum of a conventional SFBG ($R_L$ = $R_R$ $=$ $R$) for different sampling periods. (e) Variation in the number of modes ($N$), center wavelength ($\lambda_{m}^{p}$), and separation between adjacent modes ($\Delta_\lambda$) against $s_\Lambda$.
Here, the parameter $m$, which indicates the order of the modes, takes the discrete values $m = -3$, $-2$, $-1$, $0$, $1$, $2$, $3$ for $s_\Lambda = 1000$ $\mu$m. The values of $\lambda_m^p$ for these modes in the same order are given by 1548.3, 1549.1, 1549.9, 1550.8, 1551.6, 1552.4, and 1553.3 nm, respectively. Similarly, $m = -5$ to $7$ (in steps of unity) for $s_\Lambda = 2000$ $\mu$m, and the corresponding values of $\lambda^p_m$ are found to be 1548.3, 1548.7, 1549.1, 1549.5, 1549.9, 1550.4, 1550.8, 1551.2, 1551.6, 1552, 1552.4, 1552.9, 1553.3, and 1553.7 nm.} \label{fig6} \end{figure} The sampling period ($s_\Lambda$) is a crucial parameter in the construction of a sampled PTFBG structure, since it dictates the number of usable channels within the available spectral span. Also, the channel spacing between any two adjacent channels is controlled by the sampling period. Among the two spectra shown in Fig. \ref{fig6}(a), the first one [$s_\Lambda$ = 1000 $\mu$m (green and solid lines)] is characterized by fewer channels and thus features a larger inter-channel separation than the second one [$s_\Lambda$ = 2000 $\mu$m (red and dotted lines)]. This is also true for the reflection spectrum for left incidence, as shown in Fig. \ref{fig6}(c). By the inherent property of the sampled FBG and the uniform nature of the samples, all these channels are equally spaced in the wavelength domain, as shown in Fig. \ref{fig6}(e). From these figures, it can be concluded that the number of channels is directly proportional to the sampling period ($s_\Lambda$), whereas the channel separation is inversely proportional to $s_\Lambda$ and satisfies the relation \cite{Lee2003,Li2003,Li2008,jayaraman1993theory} \begin{gather} \Delta_\lambda =\cfrac{\lambda_{b}^{2}}{2n_0 s_\Lambda}. \label{Eq:norm18} \end{gather} From Eq.
(\ref{Eq:norm18}), it is obvious that designing a SFBG with a larger $s_\Lambda$ results in a smaller channel separation and thereby an increased number of channels within the desired spectral range. However, it should be recalled that the sampling period cannot be made arbitrarily large in conventional FBGs \cite{Navruz2008,Li2003,Li2008}, as this leads to a decrease in the reflectivity of the channels away from the Bragg wavelength, as shown in Fig. \ref{fig6}(d). Increasing the core index ($n_0$) may appear from Eq. (\ref{Eq:norm18}) to be a good choice to compensate for this reduction in reflectivity, but it suffers from fabrication difficulties. Instead, SFBGs with gain and loss can be employed to increase the reflectivity, as shown in Fig. \ref{fig6}(a), as long as the input field is launched from the rear end of the device, which is a unique outcome of the $\mathcal{PT}$-symmetry. It is important to point out that the gain and loss parameters amplify the reflectivity of all the individual wavelengths of the comb spectrum (rather than equalizing the reflectivity of all channels to that of the zeroth order), so that even the channels on the edges of the designed filter have reflectivity larger than unity, as shown in Fig. \ref{fig6}(a). In the opto-electronic approach, erbium-doped fiber amplifiers (EDFAs) are used as signal boosters and perform the same functionality. Yet, a few modes corresponding to $s_\Lambda$ = 2000 $\mu$m [Fig. \ref{fig6}(a)] still have weak reflectivity on both sides, which means that the parameter $g$ must be increased accordingly for larger sampling periods.
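As a back-of-the-envelope cross-check of Eq. (\ref{Eq:norm18}) against the mode positions quoted in Fig. \ref{fig4} (using the paper's values $\lambda_b = 1550$ nm and $n_0 = 1.45$):

```python
def channel_spacing(s_Lambda, lambda_b=1550e-9, n0=1.45):
    # Eq. (10): Delta_lambda = lambda_b**2 / (2 * n0 * s_Lambda)
    return lambda_b**2 / (2 * n0 * s_Lambda)

# s_Lambda = 500 um gives ~1.66 nm spacing, consistent with the peak
# wavelengths 1548.3, 1549.9, 1551.6, 1553.3 nm quoted for Fig. 4.
assert abs(channel_spacing(500e-6) * 1e9 - 1.657) < 0.01
# Doubling the sampling period halves the channel separation.
assert abs(channel_spacing(1000e-6) - channel_spacing(500e-6) / 2) < 1e-15
```

The inverse proportionality between $\Delta_\lambda$ and $s_\Lambda$ is exactly the trade-off discussed above: a denser comb comes at the price of weaker edge channels in a conventional SFBG.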
\subsection{Impact of device length ($L$) on uniformity of combs} \begin{figure} \centering \includegraphics[width=0.5\linewidth]{fig7a}\centering \includegraphics[width=0.5\linewidth]{fig7b}\\\centering \includegraphics[width=0.5\linewidth]{fig7c}\includegraphics[width=0.5\linewidth]{fig7d} \caption{(a) -- (c) Linear spectra of a PTSFBG of length $L = 100$ mm for two different sampling periods, $s_\Lambda = 500$ $\mu$m and 1000 $\mu$m; other device parameters are the same as in Fig. \ref{fig6}(a). (d) Flattening of the envelope and generation of a spectrum with uniform amplitudes by varying the length ($L$), given that the values of $m$ and $\lambda_m^p$ are the same as in Fig. \ref{fig6} for a sampling period of $s_\Lambda$ = 1000 $\mu$m.} \label{fig7} \end{figure} It can be inferred from Figs. \ref{fig7}(a) -- \ref{fig7}(c) that the reflectivity and transmittivity of all the individual channels in the selected spectral span can be made almost uniform by increasing the physical length of the device ($L$). This behavior is observed to be true for all values of the sampling period. Also, the reflectivity of each channel is dramatically increased with an increase in the physical length of the device ($L$) for both left and right light incidences. As a special case, the flattening of the envelope of the spectrum for right incidence is shown in Fig. \ref{fig7}(d), which confirms that systems with longer physical lengths are optimal for the generation of spectra with uniform reflectivity peaks, a highly desired feature of PTSFBG spectra.
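The trends reported in this and the preceding subsections can be reproduced with a compact numerical implementation of the TMM cascade of Eqs. (\ref{Eq:norm6})--(\ref{Eq: mul3}). The sketch below assumes identical uniform samples, each represented by a single section matrix with $\Delta z = s_L$, and uses only the Python standard library; the Hermitian limit $n_{1I} = 0$ serves as a built-in check, since there $R_L = R_R$ and $R + T = 1$ must hold.

```python
import cmath

def mul(A, B):
    """2x2 complex matrix product."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def sample_matrix(s_L, lam, n0, n1R, n1I, lambda_b):
    """Eq. (7) for one uniform PTFBG sample, treated as a single section (dz = s_L)."""
    kappa = cmath.pi * n1R / lam
    g = cmath.pi * n1I / lam
    delta = 2 * cmath.pi * n0 * (1/lam - 1/lambda_b)
    sig = cmath.sqrt(kappa**2 - g**2 - delta**2)
    ch, sh = cmath.cosh(sig * s_L), cmath.sinh(sig * s_L)
    return [[ch + 1j*(delta/sig)*sh,   1j*((kappa + g)/sig)*sh],
            [-1j*((kappa - g)/sig)*sh, ch - 1j*(delta/sig)*sh]]

def sfbg_response(lam, L=10e-3, s_Lambda=500e-6, d=0.1,
                  n0=1.45, n1R=5e-4, n1I=0.0, lambda_b=1550e-9):
    """Cascade N_s identical samples separated by phase sections; return (R_L, R_R, T)."""
    Ls = sample_matrix(d * s_Lambda, lam, n0, n1R, n1I, lambda_b)
    phi = 2 * cmath.pi * n0 * (1 - d) * s_Lambda / lam
    Ld = [[cmath.exp(1j * phi), 0j], [0j, cmath.exp(-1j * phi)]]
    M = Ls
    for _ in range(round(L / s_Lambda) - 1):   # N_s - 1 gap/sample repetitions, Eq. (8)
        M = mul(mul(M, Ld), Ls)
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    R_L = abs(-M[1][0] / M[1][1])**2           # left incidence, Eq. (9)
    R_R = abs(M[0][1] / M[1][1])**2            # right incidence
    T = abs(det / M[1][1])**2                  # transmission (direction independent)
    return R_L, R_R, T

# Hermitian check at a wavelength inside the scanned band:
R_L, R_R, T = sfbg_response(1550.05e-9)
assert abs(R_L - R_R) < 1e-9 and abs(R_L + T - 1) < 1e-9
```

Sweeping `lam` over the 1547--1554 nm band with this routine, and varying `n1I`, `s_Lambda`, `d`, and `L`, is the kind of computation underlying Figs. \ref{fig4}--\ref{fig8}; the single-section-per-sample simplification is valid here because each sample is uniform.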
\subsection{Variations in the duty cycle ($d$)} \begin{figure} \centering \includegraphics[width=0.5\linewidth]{fig8a}\includegraphics[width=0.5\linewidth]{fig8b}\\\includegraphics[width=0.5\linewidth]{fig8c}\includegraphics[width=0.5\linewidth]{fig8d}\\\includegraphics[width=0.5\linewidth]{fig8e}\includegraphics[width=0.5\linewidth]{fig8f}\\\includegraphics[width=0.75\linewidth]{fig8g} \caption{(a) -- (f) Reflection and transmission spectra of a PTSFBG under varying duty cycle $d$ with $s_\Lambda = 500$ $\mu$m and $n_{1I} = 4\times10^{-4}$. The plots in the left and right panels are simulated at $L = 10$ mm and $100$ mm, respectively. (g) Variation in the center wavelength ($\lambda_m^{p}$) with changes in the duty cycle ($d$). The variation in the full width at half maximum of the zeroth-order mode ($w_0$) is shown in the inset. } \label{fig8} \end{figure} To understand the role of the duty cycle in the resulting PTSFBG spectra, the sampling period is kept at $s_\Lambda = 500$ $\mu$m. The duty cycle is varied from 0.05 to 0.2 for two different grating lengths, $L = 10$ and $100$ mm, in the left and right panels of Fig. \ref{fig8}, respectively. From these figures and Eq. (\ref{Eq:norm2}), it is clear that if the length of the sample ($s_L$) is increased at a fixed value of the sampling period ($s_\Lambda$), the FWHM of the individual channels increases, as shown in Fig. \ref{fig8}(a) and the inset of Fig. \ref{fig8}(e). Furthermore, it causes a shift of the spectrum towards longer wavelengths irrespective of the sampling period and the length of the device, as shown in Fig. \ref{fig8}(g). Also, an increase in the duty cycle leads to an increase in the magnitude of the reflectivity for both left and right incidences, besides a deepening of the transmittivity dips for lower values of the device length ($L < 60$ mm), as depicted in Figs. \ref{fig8}(a), \ref{fig8}(c) and \ref{fig8}(e).
Hence, the reduction in reflectivity with an increasing number of channels can be judiciously controlled (at shorter device lengths) in multiple ways: via variations in the gain and loss, a proper choice of the duty cycle, and the device length. However, for larger values of the physical length of the device, such as $L = 100$ mm, increasing the duty cycle helps only in shifting the center wavelength of the spectrum and increasing the full width at half maximum ($w_m$), as shown in Figs. \ref{fig8}(b), \ref{fig8}(d) and \ref{fig8}(f). \subsection{Influence of modulation strength ($n_{1R}$) on FWHM} \begin{figure} \centering \includegraphics[width=0.5\linewidth]{fig9a}\centering \includegraphics[width=0.5\linewidth]{fig9b} \caption{Variations in the FWHM of the PTSFBG spectrum with changes in $n_{1R}$ at $n_{1I} = 4\times10^{-4}$, $d = 0.1$, and $s_\Lambda = 500$ $\mu$m.} \label{fig9} \end{figure} Finally, the role of the coupling parameter ($\kappa = \pi n_{1R}/\lambda$) in the PTSFBG spectrum is illustrated in Fig. \ref{fig9}. The shape of the stopband of the individual channels and the corresponding FWHM are influenced by the value of the real part of the modulation strength ($n_{1R}$). If the individual channels need to possess a flat stopband and a broader width, one can opt for larger modulation strengths, as shown in the inset of Fig. \ref{fig9}(a). On the other hand, the stopband is tapered at weaker modulation strengths, as shown in Figs. \ref{fig9}(a) and \ref{fig9}(b). As depicted in Fig. \ref{fig9}(a), the decrease in $R_R$ is not a major issue here, since the PTSFBG offers other degrees of freedom to compensate for such a reduction in the reflectivity. Nevertheless, the reflectivity of the spectrum for left incidence increases with any increase in $n_{1R}$, as shown in the inset of Fig. \ref{fig9}(b).
\subsection{Application: Tunable RF transversal filter} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{fig10} \caption{Schematic showing the generation of optical combs with a PTSFBG and its application to a tunable RF transversal filter \cite{xu2019advanced, pastor2001broad, leng2004optimization, davies1984fibre, hamidi2010tunable,mora2003tunable}. TBL: tunable broadband laser, OCI: optical circulator, OC: optical coupler, PWS: programmable wave shaper, EOM: electro optic modulator, LDE: linear dispersive element (single mode optical fiber spool), OSA: optical spectrum analyzer, PD: photodiode, F: filter, VNA: vector network analyzer, RF: radio frequency.} \label{fig10} \end{figure} An optical field from a tunable broadband laser source can be directed as the input to the PTSFBG via an optical circulator (OCI). The resulting comb spectrum is modulated by an electro optic modulator (EOM) \cite{leng2004optimization}. To impose the desired tap weights, the intensity of each channel can be judiciously controlled with the aid of programmable wave shaping devices \cite{xu2019advanced,davies1984fibre,mora2003tunable,mora2002automatic}. The RF input to the EOM is the message signal modulated onto a desired carrier frequency, and it must be introduced alongside the optical combs \cite{xu2019advanced,davies1984fibre}. The EOM transfers replicas of the RF input onto the optical carriers \cite{pastor2001broad}. The resulting optical signal is then passed into a single mode fiber (SMF) spool of length $L_f = $ 23 or 50 km \cite{pastor2001broad,leng2004optimization}. The SMF offers linear dispersion characteristics to the input signals, and as a result each modulated comb channel acquires a precise time delay ($\tau$). The magnitude of the delay is determined by the product of the wavelength separation ($\Delta_\lambda$) between individual channels of the comb spectrum and the dispersion ($D$) offered by the SMF \cite{leng2004optimization}.
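The delay-line arithmetic above fixes the RF response of the filter: the inter-tap delay is $\tau = D\,L_f\,\Delta_\lambda$, and the free spectral range of the resulting transversal filter is $1/\tau$. A minimal sketch, where the dispersion value $D \approx 17$ ps/(nm$\cdot$km) (typical for SMF near 1550 nm) and the comb spacing $\Delta_\lambda \approx 0.8$ nm are assumed representative values rather than figures taken from the text:

```python
# Sketch (assumed values): inter-tap delay and RF free spectral range of the
# transversal filter.  D ~ 17 ps/(nm km) is a typical SMF dispersion at
# 1550 nm; Delta_lambda ~ 0.8 nm is a representative comb spacing.
D        = 17e-12 / 1e-9 / 1e3   # dispersion in s per (m of wavelength * m of fiber)
L_f      = 50e3                  # fiber spool length, m
d_lambda = 0.8e-9                # tap (channel) wavelength separation, m

tau = D * L_f * d_lambda         # delay between adjacent taps, s
fsr = 1.0 / tau                  # RF free spectral range of the filter, Hz
print(f"tau = {tau*1e12:.0f} ps, FSR = {fsr/1e9:.2f} GHz")
```

With these assumed numbers the taps are separated by 680 ps, giving an RF free spectral range of about 1.5 GHz; retuning $\Delta_\lambda$ via the sampling period thus tunes the filter response.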
In the end, the delayed and weighted optical taps are mixed at the receiver (a photodiode and an optical filter) \cite{pastor2001broad,xu2019advanced}. The resulting RF output signal can then be sent to a vector network analyzer (VNA). The VNA assists in recording and analyzing the RF response of the different frequency channels \cite{xu2019advanced,leng2004optimization}. Such RF transversal filters are potential candidates to realize any given RF transfer function with ease by tuning the appropriate tap weights \cite{mora2003tunable}. \section{Unitary transmission point dynamics} \label{Sec:4} \begin{figure} \centering \includegraphics[width=0.5\linewidth]{fig11a}\centering \includegraphics[width=0.5\linewidth]{fig11b}\\\centering \includegraphics[width=0.5\linewidth]{fig11c}\centering \includegraphics[width=0.5\linewidth]{fig11d} \caption{(a) Reflection-less wave transport phenomenon in a PTSFBG operating at the unitary transmission point ($n_{1R}$ = $n_{1I}$ = 0.0005). (b) Depicts the role of the sampling parameter ($s_\Lambda$) at $L = 10$ mm and $d = 0.1$. (c) Portrays the effect of variation in the duty cycle ($d$) at $s_\Lambda = 2000$ $\mu$m; the length ($L$) is the same as in (b). (d) Illustrates the change in spectrum with device length $L = 100$ mm at $s_\Lambda = 2000$ $\mu$m and $d = 0.1$. } \label{fig11} \end{figure} At this juncture, it is essential to recollect that any type of $\mathcal{PT}$-symmetric FBG device functioning at the unitary transmission point ($n_{1R} = n_{1I}$) will demonstrate ideal light transmission ($T = 1$ and $R_L = 0$) if the light is launched from the front end (left) of the device \cite{huang2014type, raja2020tailoring,raja2020phase,lin2011unidirectional,kulishov2005nonreciprocal}. From Fig. \ref{fig11}(a), we confirm that reflection-less light transport is exhibited by the PTSFBG irrespective of variations in the sampling length ($s_L$), sampling period ($s_\Lambda$), duty cycle ($d$), or length of the device ($L$). For right light incidence, however, any increase in the sampling period ($s_\Lambda$) increases the number of channels, and vice versa, as shown in Fig. \ref{fig11}(b). Moreover, any increase in the duty cycle ($d$) increases the reflectivity ($R_R$) of the channels closer to $1550$ nm and marginally shifts the individual peaks, as shown in Fig. \ref{fig11}(c). In contrast to Fig. \ref{fig11}(b), the reflectivity of each channel is dramatically increased in Fig. \ref{fig11}(d) when the length of the device is increased to $L = 100$ mm. It should be mentioned that these behaviors can be exploited in the creation of comb filters. \section{Broken $\mathcal{PT}$-symmetric regime} \label{Sec:5} Any PTFBG will exhibit lasing behavior in its spectrum under the operating condition $n_{1I}>n_{1R}$. Under this condition, sharp variations (exponential increase or decrease) in the reflectivity and transmittivity of the grating spectrum occur with variation in the value of $n_{1I}$. Also, the FWHM of the spectra in the broken regime is much narrower than in the unbroken regime, and thus a large amount of reflected (transmitted) power is concentrated in a narrow spectral span. For these reasons, the dynamics of the system in the broken regime is known as lasing behavior. With this in mind, we directly present the results pertaining to the lasing behavior exhibited by a PTSFBG in its (comb) spectrum. Throughout this section, the modulation strength parameter and the length are kept constant at $n_{1R} = 5\times10^{-4}$ and $L = 10$ mm, respectively (unless specified otherwise).
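The reflectionless transport at the unitary transmission point can be verified with a minimal coupled-mode sketch for a single uniform PT-symmetric grating. The sign conventions and the coupling form $\kappa_\pm = \pi(n_{1R} \pm n_{1I})/\lambda$ below are one common choice from the PT-grating literature and are an assumption here (which physical end corresponds to the vanishing coupling depends on that convention); the point is structural: at $n_{1R} = n_{1I}$ one coupling vanishes, the transfer matrix becomes triangular, and one-sided reflection is exactly zero with $|t|^2 = 1$.

```python
import cmath

def pt_fbg_response(n1R, n1I, delta, L, lam=1550e-9):
    """One-sided reflectivity and transmittivity of a uniform PT-symmetric
    FBG from its 2x2 coupled-mode transfer matrix.  Sign conventions are
    one common choice and are an assumption of this sketch."""
    kp = cmath.pi * (n1R + n1I) / lam      # coupling seen from one side
    km = cmath.pi * (n1R - n1I) / lam      # coupling seen from the other side
    # A = i*L*H with traceless H = [[delta, kp], [-km, -delta]], so
    # expm(A) = cosh(mu) I + sinh(mu)/mu * A, where mu^2 = A00^2 + A01*A10.
    a, b, c = 1j * L * delta, 1j * L * kp, -1j * L * km
    mu = cmath.sqrt(a * a + b * c)
    f = 1.0 if abs(mu) < 1e-12 else cmath.sinh(mu) / mu
    M11, M12 = cmath.cosh(mu) + f * a, f * b
    M21, M22 = f * c, cmath.cosh(mu) - f * a
    r = -M21 / M22                         # reflection coefficient, this side
    t = (M11 * M22 - M12 * M21) / M22      # det(M) = 1 since H is traceless
    return abs(r) ** 2, abs(t) ** 2

# Unitary transmission point n1R = n1I: km = 0, so M is triangular,
# giving R = 0 and T = 1 for any detuning delta.
R_L, T = pt_fbg_response(5e-4, 5e-4, 200.0, 10e-3)
print(R_L, T)
```

As a consistency check, setting $n_{1I} = 0$ and $\delta = 0$ reproduces the textbook passive-FBG result $R = \tanh^2(\kappa L)$, so the matrix algebra is self-consistent with the coupled-mode equations.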
\subsection{Impact of $n_{1I}$ on the lasing spectra}\label{sub:1} \begin{figure} \centering \includegraphics[width=1\linewidth]{fig12a}\\\centering \includegraphics[width=1\linewidth]{fig12b}\\\includegraphics[width=1\linewidth]{fig12c} \caption{Lasing spectrum of a PTSFBG of length $10$ mm with a sampling period of $s_\Lambda=500$ $\mu$m and a duty cycle of 0.1 under variations in $n_{1I}$. It should be noted that the minimum value of the transmittivity is always greater than unity, as expected for a system operating in the broken $\mathcal{PT}$-symmetric regime. } \label{fig12} \end{figure} Figures \ref{fig12}(a) -- \ref{fig12}(c) depict the formation of lasing combs with a few modes. From these figures, it is clear that the value of gain and loss affects the lasing spectrum in two ways. First, with an increase in the value of $n_{1I}$, the reflectivity and transmittivity of each mode are intensified. Second, the FWHM of the individual wavelengths of the lasing spectrum is reduced as $n_{1I}$ increases. The mode closest to the Bragg wavelength (zeroth order) receives maximum amplification. On either side of the zeroth order mode, the first order lasing modes of the spectrum appear. As the order of the mode increases, both the reflectivities ($R_R$ and $R_L$) and the transmittivity ($T$) decrease. The lasing modes at the edges of the spectrum (the wavelength window spans $1547.5$ to $1554$ nm here) feature less amplification, and so the peaks corresponding to individual modes are non-identical in magnitude. Nevertheless, the modes are equally spaced ($\Delta_\lambda$) in a given wavelength range.
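The spacing $\Delta_\lambda$ of these comb modes follows the standard sampled-grating relation $\Delta_\lambda \approx \lambda_B^2 / (2 n_{\mathrm{eff}} s_\Lambda)$. A minimal sketch, where the effective index $n_{\mathrm{eff}} \approx 1.45$ is an assumed representative value for silica fiber:

```python
# Sketch (assumption: n_eff ~ 1.45, Bragg wavelength 1550 nm).
# Sampled-grating channel spacing: d_lambda = lambda_B^2 / (2 n_eff s_Lambda).
n_eff, lam_B = 1.45, 1550e-9

for s_Lambda in (500e-6, 1000e-6, 2000e-6):
    d_lam = lam_B**2 / (2 * n_eff * s_Lambda)
    print(f"s_Lambda = {s_Lambda*1e6:6.0f} um -> spacing = {d_lam*1e9:.2f} nm")
```

With $s_\Lambda = 500$ $\mu$m this gives a spacing of about 1.66 nm, so roughly four modes fit in the 1547.5--1554 nm window of Fig. \ref{fig12}; doubling the sampling period halves the spacing, consistent with the trend discussed next.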
\subsection{Role of sampling period ($s_\Lambda$) on the number of the modes}\label{sub:2} \begin{figure} \centering \includegraphics[width=1\linewidth]{fig13a}\\\centering \includegraphics[width=1\linewidth]{fig13b}\\\includegraphics[width=1\linewidth]{fig13c} \caption{Variations in the number of modes in the lasing spectrum of a PTSFBG for two different sampling periods $s_\Lambda = 1000$ and $2000$ $\mu$m. The length and duty cycle parameters are the same as in Fig. \ref{fig12}.} \label{fig13} \end{figure} In Fig. \ref{fig12}, the number of channels within the available range is small (four). If additional modes are desired in the lasing spectra, the obvious choice is to increase the sampling period, as shown in Fig. \ref{fig13}. This also results in a decrease in the wavelength separation ($\Delta_\lambda$) between adjacent modes of the spectra. For instance, seven channels are visible in the output spectrum for a sampling period of $s_\Lambda$ = $1000$ $\mu$m. As mentioned earlier, the reflectivity (transmittivity) of the modes far away from the Bragg wavelength is not as large as that of the zeroth order modes, as depicted in Figs. \ref{fig13}(a) -- \ref{fig13}(c). \subsection{Variation in the duty cycle ($d$)}\label{sub:3} \begin{figure} \centering \includegraphics[width=0.5\linewidth]{fig14a}\includegraphics[width=0.5\linewidth]{fig14b}\\\includegraphics[width=0.5\linewidth]{fig14c}\includegraphics[width=0.5\linewidth]{fig14d}\\\includegraphics[width=0.5\linewidth]{fig14e}\includegraphics[width=0.5\linewidth]{fig14f} \caption{(a) -- (c) Simultaneous shifting and increase in reflectivity (transmittivity) with changes in the duty cycle ($d$) at $n_{1I} = 6.5\times10^{-4}$ and $s_\Lambda = 1000$ $\mu$m. Also, the effect of variation in length on the lasing spectrum is shown in (d) at a duty cycle of $d = 0.1$; the value of $g$ is the same as in (a). (e) and (f) Variation in the peak reflectivity $[R_{R(m)}^p]$ with changes in the duty cycle ($d$) and length ($L$), respectively. The values of $\lambda_m^p$ in (f) are the same as mentioned in Fig. \ref{fig6}.} \label{fig14} \end{figure} The duty cycle parameter offers two important functionalities for controlling the PTSFBG lasing spectra, namely control of the magnitude of the spectrum and of its location on the wavelength axis. As $d$ gets larger, the comb lasing spectrum shows growth in reflectivity as well as in transmittivity. Also, the combs are shifted towards longer wavelengths, as shown in Figs. \ref{fig14}(a) -- \ref{fig14}(c). Under duty cycle variations, a nonuniformity in the magnitude of the comb persists, mainly due to the extreme amplification of the zeroth order mode, as shown in Fig. \ref{fig14}(e). The degree of dissimilarity among the reflectivities of these spectral modes further builds up with any increase in the length of the structure, as shown in Figs. \ref{fig14}(d) and \ref{fig14}(f). \subsection{Comb lasing spectrum with uniform $R$ and $T$} \label{subsec:d} \begin{figure} \includegraphics[width=0.5\linewidth]{fig15a}\includegraphics[width=0.5\linewidth]{fig15b}\\\includegraphics[width=0.5\linewidth]{fig15c}\includegraphics[width=0.5\linewidth]{fig15d} \caption{ (a) and (b) Gain and loss induced comb lasing spectrum of a PTSFBG in the broken $\mathcal{PT}$-symmetric regime with nearly uniform reflectivity (transmittivity) for all the channels at a sampling period of $s_\Lambda = 4000$ $\mu$m, duty cycle of $d = 0.01$, and $L = 60$ mm. (c) Continuous variation of reflectivity [$R_{R(m)}^p$, $R_{L(m)}^p$] and transmittivity peaks [$T^p_m$, corresponding to $m = 0$] against gain and loss ($n_{1I}$). In contrast to the unbroken $\mathcal{PT}$-symmetric regime, the transmission spectrum possesses sharp peaks rather than showing dips, and hence they are denoted by $T^p_m$ rather than $T^d_m$.
(d) Variations in the FWHM of the zeroth order mode with changes in the value of $n_{1I}$.} \label{fig15} \end{figure} Unlike in the unbroken $\mathcal{PT}$-symmetric regime, increasing the length of the overall structure cannot by itself make the lasing spectrum (nearly) uniform, as shown in Fig. \ref{fig14}(f), because of the constraint that the extreme amplification of the zeroth order mode in the broken regime strongly depends on the sampling length. In other words, the inherent tendency of the sample to favor stronger amplification of a particular mode must be cut down. Without managing this effect, the choice of a longer device length will further build up the unevenness among the reflectivities of the individual channels. From a theoretical perspective, different duty cycle values were tested, and a value of $d = 0.01$ is found to be optimal for the generation of a comb lasing spectrum with nearly uniform reflectivity and transmittivity at a device length of $L = 60$ mm. It is worth mentioning that such a decrement in the duty cycle is not feasible in the context of conventional SFBG structures, because a reduction in the duty cycle adversely decreases the reflectivity of the device. The role of the sampling period is very much the same as illustrated previously in Fig. \ref{fig13}, except that the channels are nearly uniform in amplitude in all the cases. In other words, if $s_\Delta$ increases at a fixed value of the sampling length $s_L$ ($s_\Lambda = s_L + s_\Delta$), the system can accommodate more channels with a reduced inter-channel separation, and vice-versa. The variation in gain and loss brings about an increase in $R$ and $T$, as shown in Figs. \ref{fig15}(a), \ref{fig15}(b) and \ref{fig15}(c). Also, the FWHM ($w_m$) decreases with an increase in $n_{1I}$, as seen in Fig. \ref{fig15}(d). This confirms that the magnitude of the reflection spectrum is inversely proportional to the FWHM of the modes.
However, larger values of $n_{1I}$ bring about nonuniformity in the values of the reflectivity and transmittivity. \subsection{Comb lasing spectrum with an inverted envelope} \begin{figure} \centering \includegraphics[width=0.5\linewidth]{fig16a}\includegraphics[width=0.5\linewidth]{fig16b}\\\includegraphics[width=0.5\linewidth]{fig16c}\includegraphics[width=0.5\linewidth]{fig16d} \caption{(a) and (b) Comb lasing spectrum of a PTSFBG with an inverted envelope induced by variations in gain and loss. The sampling period and duty cycle values are the same as in Fig. \ref{fig15}. (c) Continuous variation of reflectivity [$R_{R(m)}^p$, $R_{L(m)}^p$] and transmittivity peaks ($T^p_m$) against gain and loss ($n_{1I}$) for the zeroth order mode ($m = 0$). (d) Depicts the same dynamics with the same system parameters as in (c), except that the order of the mode is taken to be $m = -17$ [the mode on the left edge of the spectra in (a) and (b)].} \label{fig16} \end{figure} Another distinct attribute of the gain and loss parameter is that it gives rise to a \emph{comb lasing spectrum with an uncommon envelope shape}, as shown in Figs. \ref{fig16}(a) -- \ref{fig16}(b). Explicitly, the system facilitates larger amplification for the $m^{th}$ order modes appearing at the edges of the given wavelength span. For the subsequent order modes, the reflectivity and transmittivity are lower than those of the previously occurring modes, and along these lines the zeroth order modes exhibit the lowest amplification, as shown in Figs. \ref{fig16}(c) and \ref{fig16}(d). This is the exact counterpart of the lasing spectrum discussed in the first three subsections of the broken $\mathcal{PT}$-symmetric regime, which is characterized by an intense amplification at the center (zeroth order) and the lowest amplification at the edges ($\pm m^{th}$ order).
As the value of $n_{1I}$ is increased in the range between $1.5 \times 10^{-3}$ and $2 \times 10^{-3}$, this behavior of the lasing spectrum evolves in such a way that the stronger amplification of the higher order modes is progressively inhibited with the increase in $n_{1I}$. But the overall shape of the inverted envelope is maintained throughout this range of gain and loss. \subsection{Comb lasing spectrum with dual mode lasing channels} \begin{figure} \centering \includegraphics[width=0.5\linewidth]{fig17a}\includegraphics[width=0.5\linewidth]{fig17b}\\\includegraphics[width=0.5\linewidth]{fig17c} \caption{Comb lasing spectrum of a PTSFBG with dual mode lasing channels generated by tuning the value of gain and loss. The sampling period and duty cycle values are the same as in Fig. \ref{fig15}.} \label{fig17} \end{figure} When $n_{1I} > 2 \times 10^{-3}$, we observe the usual lasing spectrum with a conventional envelope shape, except that each individual channel demonstrates dual mode lasing behavior. A dip is visible between the two peaks of the individual modes, as shown in Figs. \ref{fig17}(a) -- \ref{fig17}(c). This dip occurs exactly at $R_{min}$ and $T_{min}$ (or close to those values) when $n_{1I}$ is small. As the value of the imaginary part of the modulation strength is gradually increased, the $R$ and $T$ values of the individual modes are also enhanced, whereas the depth of penetration of this dip drops off, and finally the dip vanishes for $n_{1I} > 3.7 \times 10^{-3}$. Beyond this value, a comb spectrum with single mode lasing channels appears again. \subsection{Application of PTSFBG in the broken $\mathcal{PT}$-symmetric regime: Tunable Laser} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{fig18} \caption{Schematic showing the generation of a tunable multiwavelength laser source and its application to a telecommunication system. LS: laser source, EDF: Erbium doped fiber, PTTL: $\mathcal{PT}$-symmetric tunable laser, OCI: optical circulator, OC: optical coupler, OSA: optical spectrum analyzer, ATT: attenuator (variable), PWS: programmable wave shaper, RG: regenerator, TX: transmitter, MUX: multiplexer, SMF: single mode fiber, BPF: band pass filter, DC: dispersion compensation, DMUX: demultiplexer, RX: receiver. The PTTL consists of two PTSFBGs of different sampling periods and a gain medium (EDF). Similarly, EDF amplifiers (EDFAs) are used as regenerators. SMFs are generally used as transport fibers. Uniform PTFBGs can be used as BPFs and gain flattening filters. The DC module consists of an EDFA and chirped PTFBGs \cite{raja2020tailoring} for attenuation and dispersion compensation, respectively. For demultiplexing, phase shifted PTFBGs can be employed \cite{raja2020phase}.} \label{fig18} \end{figure} From Fig. \ref{fig15}, we infer that it is possible to generate uniformly spaced lasing modes with uniform intensities and narrow FWHM with the aid of the proposed system. Since each mode in the output spectrum represents a distinct wavelength and the separation between these modes is very narrow, the device can be effectively used as a tunable multi-wavelength laser source which can simultaneously feed the inputs of multiple transmitters. To construct such a tunable laser source, two PTSFBGs together with a gain element should be engineered. It should be noted that wavelength tuning of the $\mathcal{PT}$-symmetric tunable laser (PTTL) requires the two PTSFBGs to have different sampling periods \cite{jayaraman1993theory,bidaux2015extended}. The tuning is based on the principle of the Vernier effect (in the reflection spectra), which occurs as a consequence of the difference in the channel spacing of two dissimilar SFBGs (having different sampling periods) \cite{xu2005chirped,bidaux2015extended}.
The Vernier effect dictates that when one of these superstructured gratings is tuned, constructive interference occurs between the pairs of modes which are common to both gratings, thereby leading to lasing at these wavelengths and suppression of the other lasing modes, whose center wavelengths do not coincide. For instance, consider two PTSFBGs having different sampling periods $s_\Lambda = 1000$ and 2000 $\mu$m. From Fig. \ref{fig5}(e), we infer that $\lambda_m^{p} = 1550.8$ nm is one among the center wavelengths common to both resulting comb spectra ($s_\Lambda = 1000$ and 2000 $\mu$m), whereas $\lambda_m^{p} = 1550.4$ nm is not. The resulting spectrum from the PTTL will feature a comb mode at $\lambda = 1550.8$ nm, and its reflectivity will be the product of the reflectivities of the individual PTSFBGs. In a similar fashion, other overlapping modes are selected and amplified by the PTTL. On the other hand, lasing at the non-overlapping wavelengths, for instance $\lambda = 1550.4$ nm, is totally suppressed. It is desirable to acquire replicas of the laser output fields with the same intensities for all the channels \cite{xu2005chirped}. Therefore, PTSFBGs exhibiting uniform reflectivity across all the wavelengths (in the given range) must be employed. To flatten the envelope of the output spectra, many SFBG structures were investigated in the literature, including chirped SFBGs \cite{Dong2006} and sinc sampled FBGs \cite{Loh1999}. In the chirped SFBGs, either the grating period, the sampling length, the sampling period, or combinations of the above can be chirped \cite{Lee2003,Dong2006,Lee2004,Lee2004a,Loh1999,azana2005spectral}. But the resolution of the spectrum strongly depends on the lithographic process used for the fabrication of these gratings \cite{bidaux2015extended}. Alternatively, phase shifted SFBGs were proposed, in which optimizing the required number of phase shift regions is tedious \cite{Lee2003,Li2008,Navruz2008,Shi2018,Zhang2019}.
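The Vernier selection just described can be sketched numerically. The comb spacings of roughly 0.8 nm and 0.4 nm (for $s_\Lambda = 1000$ and 2000 $\mu$m) and a shared line at 1550 nm are assumptions of this sketch; only the lines present in both combs survive.

```python
# Sketch (assumptions): two ideal combs sharing a line at 1550 nm, with
# spacings ~0.8 nm (s_Lambda = 1000 um) and ~0.4 nm (s_Lambda = 2000 um).
# Vernier selection keeps only the lines common to both combs.
lam_B = 1550.0                        # nm
comb1 = {round(lam_B + m * 0.8, 4) for m in range(-4, 5)}
comb2 = {round(lam_B + m * 0.4, 4) for m in range(-8, 9)}

lasing = sorted(comb1 & comb2)        # overlapping lines lase
print(lasing)
```

Here 1550.8 nm lies in both combs and survives, while 1550.4 nm belongs only to the denser comb and is suppressed, matching the example above; detuning one grating slides its comb and selects a different set of coincidences.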
Here, we demonstrate that it is possible to generate the same lasing spectrum (with nearly uniform reflectivity) with $\mathcal{PT}$-symmetric uniform grating samples. The envelope formed by the reflectivity peaks of the different channels widens with larger duty cycle values in conventional SFBGs \cite{hansmann1995variation}. Reducing the duty cycle to smaller values is also not ideal from the perspective of conventional SFBGs, as it adversely decreases the reflectivity \cite{jayaraman1993theory,Lee2003}. One can visualize from Figs. \ref{fig15} and \ref{fig17} that PTSFBGs have two important features. First, the output spectrum features enhanced reflectivity. Second, the envelope is much flatter as a result of smaller duty cycle values (for example, $d = 0.01$). Achieving these two features simultaneously is an exceptional feature of the PTSFBG, thanks to the concepts of gain and loss. It should be recalled that these structures can be fabricated using an Argon laser with the standard scan-writing technology \cite{xu2019advanced}. The modern translation stages which hold the phase mask exhibit a moving precision as fine as 5 nm \cite{xu2005chirped}, and thus fabricating a small sample length should not, to our knowledge, be a difficult task. We also visualize that much of the PTSFBG structure is unused as a result of the larger sampling period at very low duty cycles. These unused regions can be used to fabricate interleaved samples having different Bragg wavelengths \cite{Loh1999,Lee2004a}. The period ($\Lambda$) of each interleaved PTFBG should be different; otherwise, the comb spectra from two different PTSFBGs may superimpose on one another. In simpler words, the concept of interleaving refers to the fabrication of several PTSFBGs on the same fiber with different periods ($\Lambda$) in such a way that the physical interleaving leads to an interleaving effect in the spectrum as well \cite{Loh1999}.
As theoretical physicists, we believe that one of the challenges that remain for the experimental realization of these $\mathcal{PT}$-symmetric devices is to find suitable dopant materials, such as Er$^{3+}$ and Cr$^{3+}$, for the fabrication of the PTSFBG with gain and loss regions, respectively \cite{ozdemir2019parity}. Even though it looks very simple to achieve phase transitions such as the unitary transmission point and broken symmetry in $\mathcal{PT}$-symmetric systems by simply tuning the value of $n_{1I}$, certain practical challenges still remain to be addressed by experimental physicists. One can think of tuning the value of the permittivity of both the gain medium and the lossy medium simultaneously with the help of an external pumping source. Another way of achieving these types of phase transitions is by appropriately varying the frequency of the incident optical field. This may, however, lead to a violation of the causality principle, and $\mathcal{PT}$-symmetric operation is then bound to occur only at isolated frequencies and not on a continuous interval of frequencies \cite{Ge2012}. This type of violation occurs as a result of the Kramers-Kronig relations, which dictate that a variation in the imaginary part of the index also leads to a change in its real part \cite{Zyablovsky2014}. Nevertheless, it is to be noted that a continuous tuning of gain and loss through pumping is still feasible at the resonant frequency \cite{Nguyen2016}. In such a case, it should be remembered that the amount of pumping should be lower in the lossy regions and higher in the gain regions to achieve $\mathcal{PT}$-symmetry \cite{Phang2018,Phang2015res}. We expect experimentalists to put forward suitable strategies to address these challenges and make $\mathcal{PT}$-symmetric combs more realistic.
Having briefly discussed the principle of operation, the existing structures, the experimental feasibility, and the advantages of the proposed scheme, we now look at the operation of the experimental setup illustrated in Fig. \ref{fig18}. The input optical signal to the PTTL can be pumped from a CW laser. It can be fed directly to the PTTL with an OCI, as shown in the schematic, or the input signal may be amplified with an EDFA and passed through a uniform FBG to filter out the EDFA noise \cite{chembo2016kerr}. The PTTL can be constructed with two PTSFBGs with comb-like reflection spectra as discussed above. The EDF in the optical cavity serves as the gain medium alongside the tuning PTSFBGs, and this generates a laser source over its gain profile \cite{jayaraman1993theory, xu2005chirped, othonos2006fibre,kryukov2019laser}. When one of the non-identical PTSFBGs is tuned, lasing will take place at a particular channel if and only if the comb lines from both PTSFBGs intersect. In other words, the reflectivity multiplies at the overlapping channels and lasing at the other wavelengths is inhibited \cite{xu2005chirped,bidaux2015extended,schneider2005sampled,jayaraman1993theory}. Thus, it is possible to obtain multiwavelength comb lasers with nearly identical intensities with controlled precision, and the same can be visualized via an OSA \cite{schneider2005sampled,xu2005chirped}. The variable optical attenuator is used to flatten the envelope of the comb laser further. In the case of comb lasers with extreme amplification at the center wavelength, these attenuators can be followed by a notch filter, which is helpful in attenuating the power levels of these modes to comparable levels \cite{chembo2016kerr}. The programmable wave shapers separate the comb lines in wavelength, which then serve as carrier signals \cite{xu2019advanced}. They are also useful for separating the interleaved channels \cite{chembo2016kerr}.
These carriers are regenerated by the EDFA before they are fed to the transmitters. On the transmitter side, multiple laser sources are normally required to drive each transmitter, which makes the system bulky \cite{bidaux2015extended}. PTSFBGs are fascinating in the sense that they offer an alternative solution to build a compact and reconfigurable transmitter system, since they serve as the building block to fabricate a broadband laser source which is then separated in terms of wavelengths by the PWS according to the number of transmitters \cite{kryukov2019laser}. The new generation transmitters come with inbuilt phase modulators, and the streams of data from all these transmitters are multiplexed and sent to the transport fiber \cite{chembo2016kerr}. SMFs are generally used as long distance transport fibers. As the signal travels in the fiber, attenuation and broadening mechanisms come into the picture, and to compensate for these detrimental effects an EDFA and a chirped PTFBG, respectively, can be used in the compensation module. The advantage of chirped PTFBGs is that they can compensate both normal and anomalous dispersion simply by reversing the direction of incidence \cite{raja2020tailoring}. In our previous work, we also reported the construction of demultiplexers based on $\mathcal{PT}$-symmetric phase shifted FBGs, which can demultiplex all the data streams from the multiplexed input signal \cite{raja2020phase}. At the receivers, all these signals are demodulated and coherently detected \cite{chembo2016kerr}. \section{Conclusions} \label{Sec:6} In this article, we have presented a complete description of the comb spectrum of a PTSFBG with uniform sampling. We first illustrated the role of various device parameters, namely the sampling period, the duty cycle, and the gain-loss parameter, on the generation of the comb spectrum in the three different $\mathcal{PT}$-symmetric regimes, namely the unbroken, unitary-transmission-point, and broken regimes.
Special emphasis was given to the generation of uniform amplitude comb filters with narrow channel spacing in the unbroken regime. The major highlight of this section is that it provides a framework for the generation of a large number of channels with significantly large reflectivity by increasing the sampling period of the grating. It also confirms that the decrease in the reflectivity (with an increasing number of channels) can be independently controlled with the aid of the gain and loss parameter. An architecture which can possibly serve as a tunable RF transversal filter was proposed at the end of Sec. \ref{Sec:3}. We then presented a brief analysis of the dependence of the comb spectrum on the different control parameters for the case of right light incidence at the unitary transmission point. Remarkably, the reflectionless wave transport phenomenon was observed under similar conditions when the direction of light incidence is reversed. This once again proves that the concept of unidirectional wave transport is a distinct feature of any PTFBG system and is insensitive to variations in the other device parameters, requiring only evenly balanced values of the real and imaginary parts of the modulation strength. Further, the dependence of the lasing spectrum on variations in the grating parameters in the broken $\mathcal{PT}$-symmetric regime was discussed in Sec. \ref{Sec:5}. We then proposed an optimal way to generate a comb lasing spectrum with uniform reflectivity across different wavelengths by decreasing the duty cycle of the grating. Such a reduction in the duty cycle is not feasible in the context of conventional SFBGs due to the dependence of the reflectivity on the duty cycle and coupling coefficient.
It was shown that it is possible to obtain a comb spectrum with a flattened envelope as well as uniform reflectivity for different wavelengths as a result of the interplay between the reduced duty cycle and the large gain-loss parameter. Surprisingly, the tuning of the gain and loss parameter also leads to the generation of a lasing spectrum with an unconventional inverted envelope and to dual mode lasing behavior in the individual channels. Towards the end, we showed that a single laser source based on a PTSFBG can drive multiple transmitters in a wavelength division multiplexing network. The architecture also integrates different modules, such as a dispersion compensator and a demultiplexer, based on PTFBGs. The physical behavior exploited for the comb application has previously been reported only from the perspective of FBGs without gain and loss. To the best of our knowledge, this is the very first time these applications have been dealt with from the viewpoint of $\mathcal{PT}$-symmetric superstructures. From a fundamental perspective, gain and loss provide an additional degree of freedom to control the intensity and FWHM of the comb spectra. From the application perspective, the inclusion of gain and loss in the form of $\mathcal{PT}$-symmetry opens up an alternative route to overcome some of the critical problems, like the regrowth challenges prevailing in current hybrid integration optical technologies, in building tunable and reconfigurable devices. With advancements in lightwave technology, these PTFBG based devices are expected to be available in the near future, thanks to their improved spectral features and the number of degrees of freedom to manipulate them. \section*{Acknowledgments} SVR is indebted to a financial assistantship provided by Anna University through an Anna Centenary Research Fellowship (CFR/ACRF-2018/AR1/24). AG and ML acknowledge the support by DST-SERB for providing a Distinguished Fellowship (Grant No. SB/DF/04/2017) to ML in which AG was a Visiting Scientist.
AG is now supported by the University Grants Commission (UGC), Government of India, through a Dr. D. S. Kothari Postdoctoral Fellowship (Grant No. F.4-2/2006 (BSR)/PH/19-20/0025).
\section{Time evolution with MPS} The standard way of computing dynamical quantities with the MPS formalism starts with a state that is (exactly) described by a MPS (\ref{eq:mps}). Then, some evolution operator is applied to it for a given time, making use of a Suzuki-Trotter expansion~\cite{trotter59} \nocite{suzuki90} of the total evolution operator. Within each discrete time step, the evolution operator is broken down into a product of operators. In particular, for a nearest neighbor Hamiltonian $H=\sum_i h_{i,i+1}$, we may write $ e^{-iH\delta}\approx e^{-i H_e \delta/2}e^{-i H_o\delta}e^{-i H_e \delta/2}, $ where $H_e$ ($H_o$) contains the $h_{i,i+1}$ terms with even (odd) $i$, so that each exponential factor is a product of mutually commuting local terms. Alternatively, the evolution operator can be decomposed as a product of translationally invariant Matrix Product Operators (MPO)~\cite{murg08mpo}. The action of one step of evolution on the MPS can be computed by applying the corresponding sequence of operators (Fig.~\ref{fig:tn-a}) to yield a MPS with larger bond dimension. This must be truncated to keep the best MPS description of the evolved state with fixed dimension, $D$. After repeating this procedure for the required number of steps, expectation values can be calculated in the evolved state. The accuracy of the description will however drop exponentially with the successive truncations. Our new method avoids this explicit truncation on the bond dimension of the evolved MPS. The basic idea is to look at the quantity that we want to compute, say the time dependent expectation value of some local operator, $\langle \Psi(t)|O|\Psi(t)\rangle$, as the contraction of a two dimensional tensor network, and perform it, not along time, but in the direction of space (see Fig.~\ref{fig:tn-b}). To construct the network, we start from the initial MPS and, for every evolution step, apply the proper MPOs. 
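The symmetric splitting above can be checked directly by exact matrix exponentiation on a short chain. The following sketch (not part of the original computation; the chain length, field and time step are illustrative) verifies that the error of the second-order step is much smaller than that of a simple first-order splitting:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)
I2 = np.eye(2)

def op_at(ops, N):
    """Tensor product over N sites, with the operators in `ops` at the given sites."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, ops.get(k, I2))
    return out

N, g, delta = 4, 1.05, 0.02
# Split H = H_e + H_o into even/odd bond terms (field terms attached to H_e)
H_e = -sum(op_at({i: sz, i + 1: sz}, N) for i in range(0, N - 1, 2))
H_e -= g * sum(op_at({i: sx}, N) for i in range(N))
H_o = -sum(op_at({i: sz, i + 1: sz}, N) for i in range(1, N - 1, 2))
H = H_e + H_o

U_exact = expm(-1j * delta * H)
# Second-order (symmetric) Trotter step, as in the text
U_2 = expm(-1j * delta / 2 * H_e) @ expm(-1j * delta * H_o) @ expm(-1j * delta / 2 * H_e)
# First-order step, for comparison
U_1 = expm(-1j * delta * H_e) @ expm(-1j * delta * H_o)

err2 = np.linalg.norm(U_exact - U_2, 2)
err1 = np.linalg.norm(U_exact - U_1, 2)
print(err2, err1)  # the symmetric splitting is markedly more accurate
```

Each exponential factor on the right-hand side is a product of commuting two-site gates, which is what makes the step efficiently applicable to an MPS.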
Repeating this for the required number of evolution steps, we construct the exact evolved MPS (within the Trotter approximation), as no truncation is carried out. Finally, we apply the local operator $O$ and contract with the Hermitian conjugate of the evolved state as constructed before. \begin{figure}[floatfix] \hspace{-.05\columnwidth} \begin{minipage}[c]{.4\columnwidth} \subfigure[Transverse contraction along space direction renders a finite 2D network.]{ \label{fig:tn-b} \psfrag{contractL}[c][c]{contraction} \psfrag{contractR}[c][c]{} \psfrag{Eo}[c][]{$E_O$} \psfrag{E}[c][]{$E$} \psfrag{langle}[c][]{$\langle L |$} \psfrag{rangle}[c][]{$|R\rangle$} \psfrag{time}[bc][bc]{time} \psfrag{space}[tc][tc]{space} \includegraphics[height=.9\columnwidth]{transverse.eps} } \end{minipage} \hspace{.185\columnwidth} \begin{minipage}[c]{.4\columnwidth} \subfigure[Transverse contraction of folded network.]{ \label{fig:tn-fold} \psfrag{Eo}[bc][bc]{$\tilde{E}_O$} \psfrag{E}[bc][bc]{$\tilde{E}$} \psfrag{langle}[c][c]{$\langle \tilde{L} |$} \psfrag{rangle}[c][c]{$|\tilde{R}\rangle$} \psfrag{fol}[cl][cl]{\parbox{.2\columnwidth}{folding axis}} \includegraphics[height=.82\columnwidth]{foldedNet2.eps} } \end{minipage} \caption{Expectation value $\langle O(t)\rangle$ with the basic transverse method (a) and with folding (b), where operators for the same time step are grouped together in a double effective operator. } \end{figure} The procedure above produces a two dimensional network, infinite in the spatial direction as the original MPS, but finite along the time direction. 
The expectation value we want to compute can now be written as~\cite{perez07mps}, $$ \langle \Psi(t)|O|\Psi(t)\rangle = \lim_{k\rightarrow\infty} \mathrm{tr}(E_{\ell}^{[-k]} \ldots E^{[-1]} E_O^{[0]} E^{[1]}\ldots E_{r}^{[k]}), $$ where $E(t)=\sum_i {\bar{A}^i}(t)\otimes A^i(t)$ is the transfer matrix of the evolved state, $E_O(t)=\sum_{i,j}[{\bar{A}^i}(t)\otimes A^j(t)] \langle i|O|j\rangle$ is the transfer matrix containing the single insertion of the single-body operator, and the bracketed superscripts on each transfer matrix indicate the site of the chain. For a translationally invariant MPO representation of the evolution operator, the network retains the invariance~\footnote{Using the standard decomposition of the evolution operator into commuting products, the resulting network has translational symmetry with period two, so that the argument can be easily adapted substituting $E$ for the product of two contiguous transfer matrices, $E_e E_o$.} and the transfer matrix is the same on every site, except for the single one on which $O$ acts~\footnote{In the infinite limit we do not need to consider the vector terms at the edges, $E_{\ell}^{[-k]}$ and $E_{r}^{[k]}$, which would also be different for the finite case.}. If the largest eigenvalue of $E(t)$, $\lambda$, is non-degenerate, $ E^k(t)\xrightarrow[k \rightarrow \infty]{}\lambda^k|R\rangle\langle L| $. Effectively, we may then substitute the left and right semi-infinite lattices at both sides of the operator by the left and right eigenvectors of $E(t)$ corresponding to the largest eigenvalue, $\langle L|$ and $|R \rangle$, \begin{equation} \langle O(t) \rangle=\frac{\langle \Psi(t)|O|\Psi(t)\rangle}{\langle \Psi(t)|\Psi(t)\rangle} =\frac{\langle L | E_O | R \rangle}{\langle L | E | R \rangle}. \label{eq:O(t)} \end{equation} We now specify the algorithm for computing time-dependent expectation values in translationally invariant infinite chains.
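Equation (\ref{eq:O(t)}) can be illustrated with plain dense linear algebra for a small random translationally invariant MPS tensor $A$; the following sketch (illustrative only, with an arbitrary bond dimension) builds $E$ and $E_O$, extracts the dominant left and right eigenvectors, and evaluates the ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 2, 4                      # physical and bond dimensions (illustrative)
A = rng.normal(size=(d, D, D)) + 1j * rng.normal(size=(d, D, D))

# Transfer matrix E = sum_i conj(A^i) (x) A^i on the D^2-dimensional transverse space
E = sum(np.kron(A[i].conj(), A[i]) for i in range(d))

def E_op(O):
    """Transfer matrix with a single-site operator O inserted: sum_ij <i|O|j> conj(A^i)(x)A^j."""
    return sum(O[i, j] * np.kron(A[i].conj(), A[j]) for i in range(d) for j in range(d))

def dominant(M):
    """Eigenvector of M for the eigenvalue of largest modulus."""
    w, v = np.linalg.eig(M)
    return v[:, np.argmax(np.abs(w))]

R = dominant(E)                  # right eigenvector |R>
L = dominant(E.T)                # left eigenvector <L|  (satisfies L @ E = lambda * L)

def expval(O):
    return (L @ E_op(O) @ R) / (L @ E @ R)

sz = np.diag([1.0, -1.0])
print(expval(np.eye(d)))         # identity gives exactly 1
print(expval(sz))                # real number in [-1, 1]
```

The eigenvector phases cancel in the ratio, so no normalization of $\langle L|$ or $|R\rangle$ is needed.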
The first step is to find the best MPS approximation, with given bond dimension $D$, to the dominant eigenvectors of $E(t)$. To this end, we repeatedly apply the transfer matrix $E(t)$ (already written as a MPO along the time direction, see Fig.~\ref{fig:tn-b}) to the left and to the right of an arbitrary initial MPS vector and truncate the result to the chosen $D$, using the technique for two dimensional tensor networks introduced in~\cite{verstraete04mpdo,murg07hard}, until convergence is achieved. The procedure yields a MPS approximation to the eigenvectors, with the truncation always taking place in the space of transverse vectors. The second step, computing the numerator and the denominator in (\ref{eq:O(t)}), can be done very efficiently, as each term is a contraction of a MPO acting between a pair of MPS. The adaptation of the method to the case of imaginary time evolution is straightforward, so that it is also useful for finding ground state properties. In this case, the eigenvector calculation is similar to that in transfer matrix DMRG algorithms for thermal states~\cite{bursill96trans}. \nocite{wang97trans} With this approach, we study an infinite chain with an impurity. We consider an Ising chain, \begin{equation} H=-\left(\sum_i\sigma_z^i \sigma_z^{i+1}+g_i \sigma_x^i \right), \label{eq:Ising} \end{equation} with $g_i=1$ $\forall i\neq 0$, and the impurity represented by a different value of the field at site $i=0$, $g_0$. The system is started in a product MPS and imaginary time evolution is applied for a long time, so that we approach the ground state. Then we compute the site-dependent magnetization, $\langle \sigma_x^{[i]} \rangle$. Such a calculation cannot be easily done with a purely translationally invariant method such as iTEBD~\cite{vidal07infinite}, because the presence of a singular site will affect a cone of tensors as time increases.
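For intuition about the impurity setup, the ground state of (\ref{eq:Ising}) with $g_0\neq 1$ can be obtained by brute-force diagonalization on a short open chain; this bypasses the MPS machinery entirely (the chain length and the value $g_0=2$ below are illustrative):

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def op_at(ops, N):
    """Tensor product over N sites, with the operators in `ops` at the given sites."""
    out = np.array([[1.0]])
    for k in range(N):
        out = np.kron(out, ops.get(k, I2))
    return out

N, c = 7, 3                       # short open chain, impurity at the central site
g = np.ones(N); g[c] = 2.0        # g_i = 1 everywhere except at the impurity
H = -sum(op_at({i: sz, i + 1: sz}, N) for i in range(N - 1))
H -= sum(g[i] * op_at({i: sx}, N) for i in range(N))

w, v = np.linalg.eigh(H)
gs = v[:, 0]                      # (unique) ground state
mx = [gs @ op_at({i: sx}, N) @ gs for i in range(N)]
print(np.round(mx, 3))            # enhanced <sigma_x> at the impurity, symmetric profile
```

The magnetization profile is symmetric around the impurity and enhanced at the site with the stronger field, in line with Fig.~\ref{fig:impurity}.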
However, with the transverse method, the computation of $\langle L |$ and $|R\rangle$ is not modified by the presence of the impurity~\cite{rommer99imp}. Thus the cost of computing the expectation value of a local operator acting on the position $i=0$ in the ground state of this chain will be the same as in the translationally invariant case, while applying the operator at $i\neq 0$ will reduce to the contraction of a 2D tensor network of width $i+3$ (Fig.~\ref{fig:impurity}). \begin{figure} \hspace{-.05\columnwidth} \begin{minipage}[c]{.3\columnwidth} \subfigure[Tensor network corresponding to the magnetization at distance $x$ from the impurity.]{ \label{fig:impSch} \psfrag{i0}[c]{$i=0$} \psfrag{ix}[c]{$i=x$} \psfrag{langle}{$\langle L |$} \psfrag{rangle}{$|R\rangle$} \includegraphics[height=1.2\columnwidth]{impurityFig.eps} } \end{minipage} \hspace{.05\columnwidth} \begin{minipage}[c]{.6\columnwidth} \subfigure[$\langle \sigma_x\rangle$ as a function of distance $x$.]{ \label{fig:impPlot} \begin{psfrags}% \psfragscanon% \psfrag{s04}[][]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c} \end{tabular}}% \psfrag{s05}[][]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c} \end{tabular}}% \psfrag{s12}[t][t]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}$g_0=1$\end{tabular}}% \psfrag{s13}[t][t]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}$g_0=2$\end{tabular}}% \psfrag{s14}[t][t]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}$g_0=0.5$\end{tabular}}% \psfrag{x01}[t][t]{0}% \psfrag{x02}[t][t]{0.1}% \psfrag{x03}[t][t]{0.2}% \psfrag{x04}[t][t]{0.3}% \psfrag{x05}[t][t]{0.4}% \psfrag{x06}[t][t]{0.5}% \psfrag{x07}[t][t]{0.6}% \psfrag{x08}[t][t]{0.7}% \psfrag{x09}[t][t]{0.8}% \psfrag{x10}[t][t]{0.9}% \psfrag{x11}[t][t]{1}% \psfrag{x12}[t][t]{-30}% \psfrag{x13}[t][t]{-20}% \psfrag{x14}[t][t]{-10}% \psfrag{x15}[t][t]{0}% \psfrag{x16}[t][t]{10}% \psfrag{x17}[t][t]{20}% \psfrag{x18}[t][t]{30}% 
\psfrag{v01}[r][r]{0}% \psfrag{v02}[r][r]{0.1}% \psfrag{v03}[r][r]{0.2}% \psfrag{v04}[r][r]{0.3}% \psfrag{v05}[r][r]{0.4}% \psfrag{v06}[r][r]{0.5}% \psfrag{v07}[r][r]{0.6}% \psfrag{v08}[r][r]{0.7}% \psfrag{v09}[r][r]{0.8}% \psfrag{v10}[r][r]{0.9}% \psfrag{v11}[r][r]{1}% \psfrag{v12}[r][r]{0.4}% \psfrag{v13}[r][r]{0.5}% \psfrag{v14}[r][r]{0.6}% \psfrag{v15}[r][r]{0.7}% \psfrag{v16}[r][r]{0.8}% \psfrag{v17}[r][r]{0.9}% \psfrag{v18}[r][r]{1}% \includegraphics[height=.8\columnwidth,width=.9\columnwidth]{impur.eps} \end{psfrags} } \end{minipage} \caption{Ising chain with magnetic impurity at the origin. } \label{fig:impurity} \end{figure} The capabilities of the transverse method regarding real time evolution can be further illustrated by the computation of two-body correlators at different times. If we consider two different times, $t_2>t_1$, we may write \begin{eqnarray} \langle &\Psi(0)&|O_2^{[x]}(t_2) O_1^{[x+\Delta]}(t_1) |\Psi(0)\rangle \nonumber \\ &=&\langle \Psi(0) | U(t_2,0)^{\dagger} O_2^{[x]} U(t_2,t_1) O_1^{[x+\Delta]} U(t_1,0) |\Psi(0)\rangle \nonumber \\ &=&\frac{\langle L | E_{O_2(t_2)} E^{\Delta-1} E_{O_1(t_1)} | R \rangle}{\langle L | E^{\Delta+1} | R \rangle}, \label{eq:O(t)O(t)} \end{eqnarray} where $E$ is the transfer matrix resulting from evolution until time $t_2$, and $E_{O_i(t)}$ are the corresponding MPO containing the action of each single-body operator~\cite{naef99}. In particular, if both operators act on the same site ($\Delta=0$), the computation has the same cost as one single expectation value. If $\Delta\neq0$, computing (\ref{eq:O(t)O(t)}) requires instead the contraction of a two dimensional network of width $\Delta+3$. This is done by applying one MPO at a time and truncating to the closest MPS with the given bond $D$. Since the network is now finite in both directions, this last phase of the contraction can be done either in the spatial or in the time direction. 
The success of the transverse approach will depend on whether the transfer matrix of the evolved MPS has a non-degenerate dominant eigenvector which can be approximated by a MPS of reduced dimension. Our implementation shows that the procedure achieves comparable results to the standard contraction~\cite{vidal07infinite} in a translationally invariant chain. The transverse method offers the advantage of being applicable to dynamical situations in which translational symmetry is broken by a small number of sites, such as a chain with impurities, or a semi-infinite system, but it is also limited to short times. However, there is a more efficient representation of the entanglement in the transverse eigenvectors. In the MPO representing the transfer matrix of the evolved MPS, tensors that lie at the same distance from the center (occupied by the physical operator $O$ as in Fig.~\ref{fig:tn-b}) correspond to the same time step, coming from a certain term and its adjoint in the Trotter decomposition. We can group such pairs together in a new MPO by ``folding'' the original MPO (see Fig.~\ref{fig:tn-fold}). The folding operation can be understood as performing the equivalent asymmetric contraction $ \langle \Psi(t)|O|\Psi(t)\rangle= \langle\Phi | \Bigl(O |\Psi(t)\rangle\otimes|\bar{\Psi}(t)\rangle \Bigr) $ where $|\bar{\Psi}(t)\rangle$ is the complex conjugate of the evolved vector and $|\Phi\rangle=\otimes_k\sum_{i_k=1}^d|i_k\bar{i}_k\rangle$ is the product of (unnormalized) maximally entangled pairs between each site of the chain and its conjugate. In our scheme, the ket is now the tensor product of two tensor networks corresponding to $|\Psi(t)\rangle$ and its conjugate. We may then group together each tensor in $|\Psi\rangle$ with the corresponding one in $|\bar{\Psi}\rangle$, and define an effective tensor network of higher bond dimension and physical dimension $d^2$, which can now be contracted using again the transverse technique. 
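The grouping of a tensor with its conjugate into a single tensor of physical dimension $d^2$ can be written down explicitly; the following sketch (illustrative dimensions) also checks that closing the folded physical leg with the maximally entangled pair $|\Phi\rangle$ recovers the unfolded transfer matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
d, D = 2, 3
A = rng.normal(size=(d, D, D)) + 1j * rng.normal(size=(d, D, D))

# Folding: group conj(A) with A into one tensor of physical dim d^2 and bond dim D^2
A_fold = np.einsum('iab,jcd->ijacbd', A.conj(), A).reshape(d * d, D * D, D * D)

# Sanity check: contracting the folded physical leg with |Phi> = sum_i |i i~>
# reproduces the transfer matrix E = sum_i conj(A^i) (x) A^i
phi = np.eye(d).reshape(d * d)
E_fold = np.einsum('p,pab->ab', phi, A_fold)
E = sum(np.kron(A[i].conj(), A[i]) for i in range(d))
print(np.allclose(E_fold, E))  # True
```

In the actual algorithm this pairing is applied to the MPO tensors of the transverse transfer matrix, so that a gate and its adjoint for the same time step occupy a single folded site.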
This folded transverse method allows us to explore the dynamics up to much longer times than any other procedure. We may get some physical intuition for this improvement by looking at a single localized excitation that propagates freely with velocity $v$. After time $t$, sites $x\pm vt$ in the evolved state become entangled. If we look instead at the transverse MPS obtained by contracting the network from the right up to $x+vt$, it is easy to see that all time sites are in a product state, except for those corresponding to the instant $t$. These sites occupy symmetric positions around the center of the network, so that folding groups them together in a single site which will be in a product state with all the rest. As a first benchmark for the new method, we simulate the dynamics of states far from equilibrium under the Ising Hamiltonian (\ref{eq:Ising}) with uniform magnetic field $g$. The initial state $|\Psi_0\rangle=\otimes_i\frac{1}{\sqrt{2}}(|0\rangle_i+|1\rangle_i)$ is evolved with a constant Hamiltonian and the results of the transverse method with and without folding are compared to the exact results (Fig.~\ref{fig:Ising}). For very short times the Trotter error dominates in both methods. However, while for the transverse procedure (as for iTEBD) the truncation error soon becomes dominant and the results deviate abruptly from the exact solution, the accuracy of the folded version is maintained for much longer times.
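For small systems this benchmark can be reproduced exactly by full diagonalization; a minimal sketch (a periodic chain of $N=8$ sites and $g=1.05$ are our assumptions here, not values quoted in the text):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)
I2 = np.eye(2)

def op_at(ops, N):
    """Tensor product over N sites, with the operators in `ops` at the given sites."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, ops.get(k, I2))
    return out

N, g = 8, 1.05
H = -sum(op_at({i: sz, (i + 1) % N: sz}, N) for i in range(N))   # periodic chain
H -= g * sum(op_at({i: sx}, N) for i in range(N))
plus = np.ones(2**N) / np.sqrt(2**N)       # |+>^N, eigenstate of every sigma_x
Sx0 = op_at({0: sx}, N)

def mx(t):
    """<sigma_x> on one site at time t after the quench."""
    psi = expm(-1j * t * H) @ plus
    return (psi.conj() @ Sx0 @ psi).real

print(mx(0.0))   # 1 at t = 0
print(mx(1.0))   # relaxes away from 1 under the quench
```

For the infinite chain, finite-size recurrences are absent and the transverse/folded machinery is required instead of dense exponentiation.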
\begin{figure}[floatfix] \begin{minipage}{\columnwidth} \centering \begin{psfrags}% \psfragscanon% \newcommand{1.}{1.} \psfrag{s01}[b][b][1.]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}$\langle\sigma_{x}(t)\rangle$\end{tabular}}% \psfrag{s05}[][][1.]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c} \end{tabular}}% \psfrag{s06}[][][1.]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c} \end{tabular}}% \psfrag{s07}[t][t][1.]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}t\end{tabular}}% \psfrag{s08}[b][b][1.]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}$\epsilon_r$\end{tabular}}% \psfrag{s16}[t][t][1.]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}D=120\end{tabular}}% \psfrag{s17}[b][b][1.]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}D=60\end{tabular}}% \psfrag{s18}[b][b][1.]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}folded\\D=60\end{tabular}}% \psfrag{s19}[b][b][1.]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}folded\\D=120\end{tabular}}% \psfrag{x01}[t][t][1.]{0}% \psfrag{x02}[t][t][1.]{0.1}% \psfrag{x03}[t][t][1.]{0.2}% \psfrag{x04}[t][t][1.]{0.3}% \psfrag{x05}[t][t][1.]{0.4}% \psfrag{x06}[t][t][1.]{0.5}% \psfrag{x07}[t][t][1.]{0.6}% \psfrag{x08}[t][t][1.]{0.7}% \psfrag{x09}[t][t][1.]{0.8}% \psfrag{x10}[t][t][1.]{0.9}% \psfrag{x11}[t][t][1.]{1}% \psfrag{x12}[t][t][1.]{0}% \psfrag{x13}[t][t][1.]{5}% \psfrag{x14}[t][t][1.]{10}% \psfrag{x15}[t][t][1.]{0}% \psfrag{x16}[t][t][1.]{2}% \psfrag{x17}[t][t][1.]{4}% \psfrag{x18}[t][t][1.]{6}% \psfrag{x19}[t][t][1.]{8}% \psfrag{x20}[t][t][1.]{10}% \psfrag{x21}[t][t][1.]{12}% \psfrag{v01}[r][r][1.]{0}% \psfrag{v02}[r][r][1.]{0.1}% \psfrag{v03}[r][r][1.]{0.2}% \psfrag{v04}[r][r][1.]{0.3}% \psfrag{v05}[r][r][1.]{0.4}% \psfrag{v06}[r][r][1.]{0.5}% \psfrag{v07}[r][r][1.]{0.6}% \psfrag{v08}[r][r][1.]{0.7}% \psfrag{v09}[r][r][1.]{0.8}% \psfrag{v10}[r][r][1.]{0.9}% 
\psfrag{v11}[r][r][1.]{1}% \psfrag{v12}[r][r][1.]{$10^{-4}$}% \psfrag{v13}[r][r][1.]{$10^{-2}$}% \psfrag{v14}[r][r][1.]{$10^{0}$}% \psfrag{v15}[r][r][1.]{0.5}% \psfrag{v16}[r][r][1.]{1}% \psfrag{v17}[r][r][1.]{1.5}% \includegraphics[width=.9\columnwidth,height=.5\columnwidth]{folded_g1.05.eps} \end{psfrags} \end{minipage} \\ \begin{minipage}{\columnwidth} \centering \begin{psfrags}% \psfragscanon% \psfrag{s02}[t][t]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}t\end{tabular}}% \psfrag{s03}[b][b]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}$\langle\sigma_{x}(t)\rangle$\end{tabular}}% \psfrag{s07}[][]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c} \end{tabular}}% \psfrag{s08}[][]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c} \end{tabular}}% \psfrag{s11}[][]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c} \end{tabular}}% \psfrag{s12}[][]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c} \end{tabular}}% \psfrag{s31}[t][t]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}folded\end{tabular}}% \psfrag{s32}[t][t]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}D\end{tabular}}% \psfrag{s37}[][]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}$\epsilon=10^{-6}$\end{tabular}}% \psfrag{s38}[b][b]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}$\epsilon=10^{-8}$\end{tabular}}% \psfrag{s39}[t][t]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}$\epsilon=10^{-2}$\end{tabular}}% \psfrag{s40}[b][b]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}iTEBD\\D=64\end{tabular}}% \psfrag{s41}[b][b]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}128\end{tabular}}% \psfrag{s42}[b][b]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}256\end{tabular}}% \psfrag{s43}[b][b]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}512\end{tabular}}% 
\psfrag{s44}[b][b]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}1024\end{tabular}}% \psfrag{s45}[b][b]{\color[rgb]{0,0,0}\setlength{\tabcolsep}{0pt}\begin{tabular}{c}$\epsilon=10^{-4}$\end{tabular}}% \psfrag{x01}[t][t]{0}% \psfrag{x02}[t][t]{0.1}% \psfrag{x03}[t][t]{0.2}% \psfrag{x04}[t][t]{0.3}% \psfrag{x05}[t][t]{0.4}% \psfrag{x06}[t][t]{0.5}% \psfrag{x07}[t][t]{0.6}% \psfrag{x08}[t][t]{0.7}% \psfrag{x09}[t][t]{0.8}% \psfrag{x10}[t][t]{0.9}% \psfrag{x11}[t][t]{1}% \psfrag{x12}[t][t]{0}% \psfrag{x13}[t][t]{5}% \psfrag{x14}[t][t]{10}% \psfrag{x15}[t][t]{0}% \psfrag{x16}[t][t]{2}% \psfrag{x17}[t][t]{4}% \psfrag{x18}[t][t]{6}% \psfrag{x19}[t][t]{8}% \psfrag{x20}[t][t]{10}% \psfrag{v01}[r][r]{0}% \psfrag{v02}[r][r]{0.2}% \psfrag{v03}[r][r]{0.4}% \psfrag{v04}[r][r]{0.6}% \psfrag{v05}[r][r]{0.8}% \psfrag{v06}[r][r]{1}% \psfrag{v07}[r][r]{$10^{1}$}% \psfrag{v08}[r][r]{$10^{2}$}% \psfrag{v09}[r][r]{0.4}% \psfrag{v10}[r][r]{0.6}% \psfrag{v11}[r][r]{0.8}% \psfrag{v12}[r][r]{1}% \includegraphics[width=.9\columnwidth,height=.5\columnwidth]{folded_g-1.05.eps} \end{psfrags} \end{minipage} \caption{Magnetization as a function of time. For the Ising model (up) results for the transverse method (triangles) are compared to the folded version (stars) for $D=60,$ $120$. The relative error with respect to the exact result (solid line) is shown in the inset. For the non-integrable model (down), results with the folded approach for D=60 (red), 120 (blue), 240 (green) are compared to those of iTEBD (solid lines) for increasing values of $D$. In the inset, the required value of $D$ as a function of time, for different levels of accuracy. } \label{fig:Ising} \label{fig:IsingP} \end{figure} To test the method on a more general problem, we repeat the test for a non-integrable Hamiltonian, $ H=-\left(\sum_i\sigma_z^i \sigma_z^{i+1}+g \sigma_x^i+h \sigma_z^i \right ). 
$ For this case there are no exact results, but we may compare the folded computation to the iTEBD simulations with a similar Trotter error (Fig.~\ref{fig:IsingP}). Again we check that the accuracy of the folded procedure for comparable bond dimension extends to much longer times. Moreover, remarkably enough, even when the results from the folded method start deviating (from those to which iTEBD converges for large $D$), they do so in a smooth way, so that, in contrast to other procedures, they continue to qualitatively describe the evolution for long times. This can be seen in a more precise way by looking at the truncation error. At a certain time, this can be estimated by looking at the error in the right eigenvector for a given bond dimension with respect to the best eigenvector obtained, i.e. that for the highest $D$. If we plot (Fig.~\ref{fig:IsingP}) the bond dimension required to achieve a fixed truncation error, we observe that, although the $D$ required for a high precision grows exponentially with time, with a relatively low bond $D<100$ a qualitative description of the dynamics is reproduced that lies within $1\%$ of the converged solution well beyond times $t>10$. From the discussion above, the transverse method, combined with the folding technique, represents a very promising tool for the dynamical studies of one dimensional systems. The first results show the applicability of the method even to non-integrable systems, allowing the simulation of longer evolution times than any other technique, and a qualitative description of the dynamics up to even later times. This opens the door for the study of physical problems not accessible until now to numerical methods, including the dynamics of phase transitions, out-of-equilibrium states and thermalization problems. The present formalism might also prove very valuable in the context of extracting spectral information for quantum impurity problems, the central problem in dynamical mean field theory.
The big advantage of our method is that we can deal with real frequencies, and no analytic continuation from imaginary frequencies is needed as in the case of Monte Carlo simulations. In contrast, the main limitation would be its exclusive applicability to one dimensional systems. Finally, although the method has been described for infinite chains, it is easy to adapt the technique for the dynamical study of finite systems. \paragraph*{Note: } An independent derivation in~\cite{huebener09ctn}, in the context of concatenated tensor network states, led to a similar network to describe time-evolved states. \acknowledgments We acknowledge D. P\'erez-Garc\'{\i}a and J. J. Garc\'{\i}a-Ripoll for discussions and T. Nishino for pointing out Ref.~\cite{rommer99imp}. This work was supported by DFG through Excellence Cluster MAP and FOR 635, and by EU project SCALA.
\section{Introduction}\label{sec:intro} Morphology is a fundamental property of a galaxy. It is intimately related to galaxy mass, star formation rate (SFR), stellar kinematics and environment \citep[e.g.][]{ Pozzetti2010, Wuyts2011, Huertas-Company2016}. Galaxy structure changes across cosmic time and this transformation is intertwined with formation channels and evolutionary paths. Whether galaxy morphology determines the fate of a galaxy or, conversely, morphological transformations are driven by its stellar population content is still a matter of debate \citep[e.g.][]{Lilly2016}. Distinguishing between the two options is one of the main challenges in understanding galaxy formation and evolution. Having large samples of galaxies with morphological classifications is crucial for studying the relation between shapes and star formation histories or mass assembly. Traditionally, morphological classification was done by visual inspection. This method has the major drawback of being very time-consuming (limiting the available samples to a few thousand galaxies -- e.g., \citealt{Nair2010}), and it is also affected by the subjectivity of the classifier. An alternative is people-powered research like the Galaxy Zoo project \citep[e.g.][]{Lintott2008, Willett2013, Walmsley2020}, where volunteers are asked to classify galaxies. The large number of classifiers significantly reduces the task time and allows for a statistical analysis of the answers. However, these methods still present important biases (see figure 24 in \citealt{Fischer2019}). More importantly, they will be unable to keep up with the enormous amount of data (millions of galaxy images) that the next generation of surveys such as the Vera Rubin Observatory Legacy Survey of Space and Time or Euclid will deliver: about a hundred years would be needed to classify all data from the Euclid mission with a Galaxy Zoo-like approach.
Therefore, applying automated classification methods to such large surveys is mandatory. An alternative common approach is the quantitative estimation of galaxy morphology. In this methodology, the galaxy light is described in terms of structural quantities (e.g., magnitude, size, ellipticity, asymmetry, concentration, etc.). For DES, such measurements are available for $\sim$50 million galaxies up to $\mathrm{m}_i < 23$ mag \citep[][which also provides a comprehensive overview]{Tarsitano2018}. This technique can rely either on the parametrization of the galaxy light profile (e.g., S\'ersic function) or on the analysis of the distribution of galaxy light without accounting for the PSF. Therefore it requires specific calibrations. Recent studies in machine learning and deep learning techniques, in particular, present an attractive way forward. The use of convolutional neural networks (CNNs) has proven to be extremely successful for classifying galaxy images \citep[e.g.][-- and many others]{Dieleman2015, Aniyan2017, Tuccillo2018, Huertas-Company2018, Dominguez2018, Metcalf2019, Pasquet2019, Ntampaka2019, Hausen2020, Ghosh2020}. CNNs have overtaken other supervised automated methods such as Logistic Regression, Support Vector Machines, random forest, decision trees, etc., in terms of both accuracy and computation time (see \citealt{Cheng2020} for a detailed comparison), especially for image-based (or array-based) data. However, supervised algorithms rely on large samples of pre-labelled objects on which to train. Complex classification, such as the separation between ETGs and LTGs, requires deep CNNs with a large number of free parameters. These training samples should come from the same data domain (e.g., instrument, depth) as the sample to be classified. This requirement is particularly challenging for new surveys where the overlap with available morphological catalogues may be limited.
Transfer learning between surveys is an alternative that helps reduce the required training sample size by almost one order of magnitude (see \citealt{DS2019}), but a set of labelled objects with a similar distribution to the main target sample is still needed. In this context, we aim to provide morphological classifications for galaxies from the Dark Energy Survey public data release DR1 \citep{DES2018}. Although the primary goal of DES is to probe the nature of dark energy, the survey has observed the sky over 6 years mapping $\sim$300 million galaxies in the first three years as a by-product of DR1 -- which will become $\sim$600 million galaxies for DR2 \citep{DES6}. This dataset is one of the largest and deepest among modern galaxy surveys, reaching up to 24~mag in the \textit{r}-band. Although there are enough DES galaxies in common with previous morphological catalogues (in particular with \citealt{Dominguez2018}) to properly train CNNs, they are limited to bright magnitudes ($\mathrm{m}_r < 17.7~\mathrm{mag}$). To reach the fainter magnitudes that are necessary to probe the redshift evolution of morphological transformations, in section \ref{sec:data} we use DES galaxies with well-known classifications and simulate what they would look like if they were at higher redshifts. This dramatically reduces the quality of the redshifted images (while keeping track of the original \textit{true} labels). We then check if the CNNs are able to recover features hidden to the human eye. In section \ref{sect:models} we use the original and simulated samples to train our CNNs to classify images as ETGs or LTGs, and edge-on or face-on. We then compare our CNN classifications with the corresponding \textit{true} labels of a sub-sample that was reserved for testing, as well as with the properties of faint DES galaxies from other available catalogues (see section \ref{sec:results}).
This is the largest catalogue of galaxy morphological classification to date (as detailed in section \ref{sec:des_morph}), along with the independent catalog produced by the companion DES paper presented in Cheng et al., where CNNs are used to classify DES galaxies into ellipticals or spirals on the basis of the $i$-band image. The multiband morphological catalog presented in this work provides reliable ETG/LTG classifications for 27 million galaxies from the DES survey with $\mathrm{m}_r < 21.5~\mathrm{mag}$. Our catalog also includes an edge-on classification, which can be useful for other science analyses (e.g., probing self-interacting dark matter, \citealt{Secco2018}; estimating dust attenuation, \citealt{Li2020,Masters2010}; or studying diffuse ionized gas, \citealt{Levy2019}; among others). \section{Data Sets} \label{sec:data} In this section, we describe the dataset used for training and testing, as well as the final sample to which we apply our models in the construction of our catalogue. \subsection{Dark Energy Survey science DR1} \label{sec:desy3} The main objective of this work is to provide morphological classifications for a large set of galaxies in the public release of the first three years of DES data\footnote{The DES database is publicly accessible at \url{https://des.ncsa.illinois.edu/desaccess/}} (DES DR1, \citealt{DES2018}). The DES DR1 covers the full DES footprint (about 5,000 square degrees) and includes roughly 40,000 exposures taken with the Dark Energy Camera \citep{Flaugher2015}. The coadd images, taken in \textit{griz}-bands, are available along with catalogs of roughly 300 million galaxies, reaching a median co-added catalog depth of $g$ = 24.33, $r$ = 24.08, $i$ = 23.44 mag at signal-to-noise ratio S/N = 10, with a pixel size of 0.263\arcsec (see \citealt{DES2018} for technical details).
We selected a high-quality galaxy sample based on a classification that separates PSF-like objects (such as stars and QSOs) and extended objects (i.e., galaxies). The classifier is denoted as EXTENDED\_CLASS\_COADD in the DES database and it is derived using the \textit{spread\_model} quantity from SExtractor photometry \citep[see][for more details]{Abbott2018}; its value should be greater than 1 in order to select medium- and high-confidence galaxies. We also excluded regions of the sky with missing data or bright stars in any of the observed bands by employing the masks described in the DES database, since that could affect our model predictions. We used photometric data in the \textit{gri}-bands and selected galaxies brighter than $\mathrm{m}_r = 21.5~\mathrm{mag}$, where $\mathrm{m}_r$ denotes the magnitude in an elliptical aperture shaped by Kron radius in the \textit{r}-band (MAG\_AUTO\_R in the DES database), and with a half-light radius in the \textit{r}-band (FLUX\_RADIUS\_R in the DES database; denoted as $\mathrm{r}_r$ throughout the paper) larger than 2.8 pixels (or 0.74 \arcsec; see section~\ref{sec:network}). See table~\ref{tab:catalog} for more details. This selection produces a final catalog of 26,971,945 (i.e., nearly 27 million) galaxies. We provide morphological classifications for these galaxies and describe our catalog in Section \ref{sec:des_morph}. \subsection{SDSS morphological catalogue} \label{sec:ds18} To derive morphological classifications of galaxies within the DES footprint, we have used the morphological catalog published by \citet[][DS18 hereafter]{Dominguez2018}, which partially overlaps with the DES DR1 (see section \ref{sec:training} for details). The DS18 is a publicly available catalogue that provides morphologies for $\sim$670,000 galaxies in the Sloan Digital Sky Survey (SDSS) with $\mathrm{m}_r < 17.7~\mathrm{mag}$.
These were obtained by combining accurate existing visual classification catalogues (\citealt{Nair2010}) and Galaxy Zoo 2 (\citealt{Willett2013}; hereafter GZOO) with deep learning (DL) algorithms using CNNs. The DS18 catalogue provides several GZOO-like classifications, as well as a T-Type (i.e., the numerical Hubble stage, \citealt{deVaucouleurs1959}) and a separation between elliptical and S0 galaxies. Although these classifications are automated (i.e., derived without any visual inspection), they are good enough (accuracy > 97\%) to provide reliable labels for our training sample. \subsection{Simulating DES galaxies at higher redshifts} \label{sec:sims} Although the number of DES galaxies that are also in the DS18 catalogue is large ($\sim$20,000), it is unlikely that a CNN trained on a galaxy sample brighter than $\mathrm{m}_r < 18$ mag would accurately classify the vast majority of considerably fainter galaxies in DES. One way to remedy this issue is to visually classify a sample of fainter galaxies. That would be a tedious task, subject to biases, since different classifiers (i.e., observers) would almost certainly assign different classifications (as can be seen in the GZOO catalog). In addition, some of the features that distinguish ETGs from LTGs are not so evident for faint, distant galaxies (e.g., spiral arms or bars; see figure \ref{fig:high-z_examples}), which would complicate the classification. An alternative to visual inspection is to simulate what actual DES galaxies with a well-known classification would look like if they were at higher redshift. We simulate the effects of observing a galaxy at a higher $z$ given its original DES cutout at $z_0$. To do so, we use \texttt{GALSIM}\footnote{https://github.com/GalSim-developers/GalSim} \citep{Rowe2015} and assume a $\Lambda$CDM cosmology with $\Omega_M=0.3$, $\Omega_{\Lambda}=0.7$ and $h=0.7$.
We perform the following steps: \begin{figure*} \centering \includegraphics[width=2\columnwidth]{figs/high-z_examples.pdf} \caption{Cutouts of an LTG (upper panels) with T-Type $=5.0$ from DS18 observed at $z_0 = 0.02$ with $\mathrm{m}_r = 15.6~\mathrm{mag}$ and of an ETG (lower panels) with T-Type $=-2.4$ observed at $z_0 = 0.16$ and $\mathrm{m}_r = 16.7~\mathrm{mag}$. Cutouts from left to right show the original galaxies redshifted to an apparent magnitude of $\mathrm{m}_r = (18.0, 19.0, 20.5, 21.5)~\mathrm{mag}$, respectively. For each panel, the redshift ($z$) and the apparent magnitude ($\mathrm{m}_r$) are shown in the upper left corner, while the size of each image (in pixels) is indicated in the lower left corner. The features that distinguish ETG and LTG galaxies become less evident at fainter magnitudes. \label{fig:high-z_examples}} \end{figure*} \begin{itemize} \item a de-convolution of the original image by the corresponding point spread function (PSF) in each band. As an approximation for the PSF\footnote{This approximation has no impact on the performance of the classifications on real DES images, as demonstrated in section~\ref{sec:results}.}, we assume a Moffat surface brightness profile ($\beta = 2$) with FWHM equal to the FWHM values for the DES PSF presented in \citet{Abbott2018}, $(1.12,0.96,0.88)$ arcsec for the \textit{gri}-bands, respectively; \item a change in the apparent (angular) size of the galaxy with redshift, as follows: \begin{equation} \frac{\mathrm{s}(z)}{\mathrm{s}(z_\mathrm{0})} = \frac{\mathrm{D_L}(z_\mathrm{0})}{\mathrm{D_L}(z)} \frac{(1+z)^2}{(1+z_\mathrm{0})^2}, \end{equation} where $\mathrm{s}$ denotes the size of the image, $\mathrm{D_L}$ corresponds to the luminosity distance and $z_\mathrm{0}$ and $z$ are the observed and the simulated redshift, respectively.
Note that we keep the DES pixel size of $0.263\arcsec$ constant and, therefore, it is the size of the image in pixels that changes (it shrinks with increasing $z$); \item a change in the apparent magnitude. This change includes the cosmological dimming effect, the $k$-correction and the evolution of the intrinsic brightness of the galaxies. For the ETGs, we assume the evolution corresponds to that of a single stellar population model, while for the LTG class we assume a constant star formation rate (SFR, with the value taken from SDSS spectroscopy). In summary, we express the change in apparent magnitude as: \begin{equation} \mathrm{m}(z) - \mathrm{m}(z_\mathrm{0}) = \Delta\mathrm{m_{evo}} + 5\log \left[ \frac{\mathrm{D_L}(z)}{\mathrm{D_L}(z_\mathrm{0})} \right], \end{equation} where $\mathrm{m}(z_\mathrm{0})$ and $\mathrm{m}(z)$ indicate the observed and redshifted apparent magnitude, respectively, and $\Delta\mathrm{m_{evo}}$ corresponds to the change in magnitude according to the $k$-correction and the evolutionary models; \item a convolution of the resulting image by the above-mentioned PSF in each band; \item the addition of Gaussian noise to the final image. In order to avoid contamination from the central galaxy, we estimate the noise from a set of cutouts with a larger field of view ($\sim 20$ times the half-light radius). We use a robust wavelet-based estimator of the Gaussian noise standard deviation (the \texttt{estimate\_sigma} function available in the \texttt{scikit-image}\footnote{https://scikit-image.org/} package for \texttt{Python}). \end{itemize} We apply this procedure to each band independently and then combine the three bands into an RGB image. We simulate each galaxy subject to the following conditions: a) the final apparent magnitude in the \textit{r}-band satisfies $\mathrm{m}_r(z) < 22.5$; b) the final size of the image, $\mathrm{s}(z)$, is larger than $32\times32$ pixels; and c) the final redshift satisfies $z < 1.0$.
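The size and magnitude scalings in the two equations above can be sketched in a few lines of Python. This is purely an illustration, not part of the \texttt{GALSIM} pipeline itself: it assumes the flat $\Lambda$CDM cosmology quoted in section \ref{sec:sims} and a simple trapezoidal integration for the comoving distance, and the hypothetical `dm_evo` argument merely stands in for the $k$-correction and evolution models, which are not reimplemented here.

```python
import math

# Flat LambdaCDM cosmology assumed by the paper: Om = 0.3, OL = 0.7, h = 0.7
H0 = 70.0           # Hubble constant [km/s/Mpc]
C_KMS = 299792.458  # speed of light [km/s]
OM, OL = 0.3, 0.7

def luminosity_distance(z, n=10000):
    """Luminosity distance in Mpc, via trapezoidal integration of 1/E(z)."""
    if z <= 0:
        return 0.0
    E = lambda zz: math.sqrt(OM * (1.0 + zz) ** 3 + OL)
    dz = z / n
    integral = 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    integral += sum(1.0 / E(i * dz) for i in range(1, n))
    comoving = (C_KMS / H0) * integral * dz
    return (1.0 + z) * comoving

def size_ratio(z0, z):
    """s(z)/s(z0): rescaling of the apparent size of the cutout."""
    return (luminosity_distance(z0) / luminosity_distance(z)
            * (1.0 + z) ** 2 / (1.0 + z0) ** 2)

def delta_mag(z0, z, dm_evo=0.0):
    """m(z) - m(z0); dm_evo bundles the k-correction and evolution terms."""
    return dm_evo + 5.0 * math.log10(luminosity_distance(z)
                                     / luminosity_distance(z0))
```

For instance, a galaxy observed at $z_0 = 0.02$ and simulated at $z = 0.5$ shrinks by roughly a factor of 15 and, before the evolutionary correction, fades by about 7.5 mag.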
The first condition ensures that the CNN is learning from images of galaxies that are even fainter than the limiting brightness of our project ($\mathrm{m}_r < 21.5$) but are still bright enough to pass the DES detection threshold. The second condition avoids extreme interpolations when constructing the input matrix (see section \ref{sec:network}). As mentioned above, the pixel size is kept constant, while the size of the image decreases with increasing $z$. This choice ensures that both the original and the simulated image of the galaxy have the same physical size. In figure~\ref{fig:high-z_examples}, we show the original cutout at $z_0$ along with four simulated cutouts for two galaxies (one LTG and one ETG). For the LTG galaxy (T-Type $=5.0$ from DS18), it can be clearly seen how the spiral arms and the bar become almost indistinguishable (by eye) when the galaxy is simulated at $\mathrm{m}_r \geq 20.5~\mathrm{mag}$. The original size of the LTG image is $310\times310$ pixels, while its simulation at $\mathrm{m}_r = 21.5~\mathrm{mag}$ is only $32\times32$ pixels. For the ETG galaxy (T-Type $=-2.4$), the original size of the image is $96\times96$ pixels, while the size of its simulation at $\mathrm{m}_r = 21.5~\mathrm{mag}$ is $34\times34$ pixels. \section{Deep Learning morphological classification model} \label{sect:models} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{figs/architectureCNN.png} \caption{Network architecture used for training the models, consisting of four convolutional layers with a non-linear activation function (ReLU) and different kernel sizes represented by the red squares, followed by a fully connected layer. The numbers above each layer correspond to the size of the output convolved images, while the number of weights at each level is indicated below and denoted by W.} \label{fig:network} \end{figure*} We apply DL algorithms using CNNs to morphologically classify galaxy images.
DL is a methodology that automatically learns and extracts the most relevant features (or parameters) from raw data for a given classification problem through a set of non-linear transformations. The main advantage of this methodology is that no pre-processing needs to be done: the input to the machine is the raw RGB cutout of each galaxy (i.e., the $gri$-bands). The main disadvantage is that, given the complexity of extracting and optimising the features and weights in each layer, a large number of already classified (or labelled) images needs to be provided to the machine. As explained in section \ref{sec:data}, we combine a previous morphological catalogue that overlaps with the DES dataset with simulations of the original DES images for training and for testing how our model performs. Since we want to apply the morphological classification to the DES sample, which covers a much fainter range of objects than the SDSS sample, we limit the morphological classification to simpler schemes than those presented in DS18: we classify the DES galaxies according to two different schemes: a) early-type galaxies (ETGs) vs. late-type galaxies (LTGs); and b) face-on vs. edge-on galaxies. \subsection{Network architecture} \label{sec:network} Given the success of previous studies that have used CNNs for galaxy classification \citep{Huertas2015, Dieleman2015, Dominguez2018}, we adopt a similar (but not identical) CNN configuration. Testing the performance of different network architectures is beyond the scope of this paper. We use the same input images and CNN configuration for each classification task. We use the \texttt{KERAS} library\footnote{https://keras.io/}, a high-level neural network application programming interface written in \texttt{Python}. The inputs to the CNN are the RGB cutouts (i.e., $gri$-bands) downloaded from the DES DR1, with a varying size that is a function of the half-light radius of the galaxy in the \textit{r}-band.
The cutouts have a size of $\sim 11.4$ times the half-light radius centred on the target galaxy to guarantee that no galaxy light is missed. The algorithm reads the images, which are re-sampled into (64, 64, 3) matrices, with each number representing the flux in a given pixel at a given band. Down-sampling the input matrix is necessary to reduce the computing time and to avoid over-fitting in the models, as commonly done in the literature \citep{Dieleman2015,Dominguez2018,Walmsley2020}. The flux values are normalised to the maximum value in each filter for each galaxy to eliminate colour information. For the smaller galaxies, the fixed pixel size can lead to cutout sizes below the $64\times64$ pixels for which the CNN has been designed. For these, the cutouts are up-sampled to 64$\times$64 matrices by interpolating between pixels. Since this could create some artifacts and affect the spatial resolution of the images, we require all cutouts to be at least $32\times32$ pixels in size. This condition leads to a minimum galaxy half-light radius of 2.8 ($32/11.4$) pixels (as mentioned in section~\ref{sec:desy3}). The network architecture, shown in figure \ref{fig:network}, is composed of four convolutional layers with a non-linear activation function (ReLU) and square filters of different sizes (6, 5, 2 and 3, respectively), followed by a fully connected layer. Dropout is performed after each convolutional layer to avoid over-fitting, and a 2$\times$2 max-pooling follows the second and third convolutional layers. The number of weights (i.e., free parameters) in each layer -- before dropout -- is also indicated \citep[see][for a comprehensive review of DL concepts]{Goodfellow2016}. We train the models in binary classification mode for both classification schemes.
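The spatial sizes through such a convolutional stack follow from simple arithmetic. The snippet below is a hypothetical sketch, not the \texttt{KERAS} model itself: it assumes `valid' convolutions with stride 1 and non-overlapping $2\times2$ pooling (padding and stride are not stated explicitly in the text, so the resulting numbers may differ from those shown in figure \ref{fig:network}).

```python
def conv_out(size, kernel):
    """Output width of a 'valid' convolution with stride 1 (assumed)."""
    return size - kernel + 1

def pool_out(size, pool=2):
    """Output width of non-overlapping pool x pool max pooling."""
    return size // pool

size, sizes = 64, []                # input cutouts are (64, 64, 3) matrices
for layer, kernel in enumerate([6, 5, 2, 3], start=1):
    size = conv_out(size, kernel)   # kernel sizes 6, 5, 2 and 3
    if layer in (2, 3):             # 2x2 max-pooling after 2nd and 3rd conv
        size = pool_out(size)
    sizes.append(size)

print(sizes)  # -> [59, 27, 13, 11]
```

Under these assumptions the $64\times64$ input shrinks to an $11\times11$ feature map before the fully connected layer.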
For each scheme, the output is a single value between 0 and 1, which can be interpreted as the probability $\mathrm{p}$ of being a positive example (labelled as $\mathrm{Y}=1$ in our input matrix) or, equivalently, as the probability $1-\mathrm{p}$ of being a negative example (labelled as $\mathrm{Y}=0$ in our input matrix). We use 50 training epochs, with a batch size of 30 and the \textit{Adam} optimizer (default learning rate of 0.001). In the training process, we perform \textit{data augmentation}, allowing the images to be zoomed in and out (0.75 to 1.3 times the original size), flipped and shifted both vertically and horizontally (by 5\%). This ensures that our model does not suffer from over-fitting, since the input is not the same in every training epoch. \textit{Early stopping} is also switched on, with a maximum of 10 epochs after convergence is reached. The best model, defined as the optimal one for the validation sample during the training, is then saved and applied to cutouts from the full DES DR1 galaxy catalog, generated in the same way as for the training sample. \subsection{Training sample} \label{sec:training} \subsubsection{Primary training sample} Our primary training sample is the result of cross-matching the sources in the DS18 and the DES DR1 catalogs presented in section~\ref{sec:data}. We identified sources in both catalogs as those with a separation in the sky of less than 2 arcsec, after removing multiple identifications. We remove objects missing spectra (or with bad spectroscopy according to SDSS flags), as well as those with relative differences in $z$ of more than 5\% between the photo-$z$ from the DES Y3 Gold catalog (\citealt{deVicente2016} and Sevilla-Noarbe et al. in prep.) and the spec-$z$ from SDSS. Only 50 galaxies are excluded according to these criteria. The resulting catalog consists of 19,913 galaxies with good-quality imaging and secure spectroscopic $z$.
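The 2-arcsec cross-match can be illustrated with a small, self-contained sketch. The function names here are hypothetical, and the naive $O(N_1 N_2)$ nearest-neighbour search is for illustration only; production pipelines typically rely on spatially indexed matching tools.

```python
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation (arcsec) between two sky positions (deg)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    # Haversine formula: numerically stable for small separations
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a))) * 3600.0

def crossmatch(cat1, cat2, radius=2.0):
    """Match each (id, ra, dec) in cat1 to its nearest cat2 source
    within `radius` arcsec; returns (id1, id2, separation) tuples."""
    matches = []
    for id1, ra1, dec1 in cat1:
        best = min(((angular_sep_arcsec(ra1, dec1, ra2, dec2), id2)
                    for id2, ra2, dec2 in cat2), default=None)
        if best and best[0] < radius:
            matches.append((id1, best[1], best[0]))
    return matches
```

A pure declination offset of $1/3600$ deg yields a separation of exactly 1 arcsec, so such a pair would be kept by the 2-arcsec criterion.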
\subsubsection{Simulated training sample} The DS18 catalog (described in section~\ref{sec:ds18}) only reaches an observed magnitude of $\mathrm{m}_r < 17.7$ mag. Since we aim to push the limits of the morphological classification of galaxies to fainter magnitudes, we extend our primary training sample by simulating each galaxy at a higher redshift $z$ and, consequently, making it look fainter and smaller. Following the pipeline described in section \ref{sec:sims}, we generate two sets of simulations: a) one at a random $z$ (hereafter \textit{rnd}) chosen from a uniform distribution between the observed $z_{\mathrm{0}}$ and the maximum $z_{max}$ to which the galaxy can be redshifted according to the criteria mentioned in section \ref{sec:sims} (i.e., brighter than $\mathrm{m}_r(z) < 22.5$; cutout larger than $32\times32$ pixels; and $z < 1$); b) a second one at the maximum $z$ (hereafter \textit{max}) allowed by the three conditions above. By combining the primary training sample and the two sets of simulations, we obtain a more uniform distribution of the apparent magnitude, as can be seen in figure~\ref{fig:appmag} for the \textit{r}-band. Note that the primary training set is limited in apparent magnitude to $\mathrm{m}_r < 17.7~\mathrm{mag}$, while the two sets of simulations extend our training sample to $\mathrm{m}_r < 22.5~\mathrm{mag}$. As there are indications that the CNN results are not as accurate close to the limits of the training sample \citep[see e.g.][]{Yan2020}, we extend the training sample one magnitude fainter than the test sample and the final catalogue to avoid such effects. There are almost 6,000 galaxies that can be simulated to the maximum apparent magnitude of $\mathrm{m}_r < 22.5~\mathrm{mag}$. \subsection{ETG vs. LTG classification scheme} \label{sec:ETG_vs_LTG} In this section, we present our CNN predictions for differentiating between ETGs and LTGs.
We denote the ETGs as the negative class ($\mathrm{Y}=0$), while the LTGs are considered the positive class ($\mathrm{Y}=1$). The T-Type parameter derived by DS18 is a continuous variable ranging from -3 to 10, where values of T-Type $< 0$ correspond to ETGs and values of T-Type $> 0$ designate LTGs. Unfortunately, the quality of the galaxy images, especially at fainter magnitudes, prevents us from providing such a fine classification for DES galaxies. Separating the sample into two main subclasses (ETGs and LTGs) seems like a reasonable goal for the present catalogue. However, the transition between ETGs and LTGs is smooth and continuous, and intermediate T-Type values are usually assigned to lenticular galaxies (also known as S0s). Given that this classification is trained in binary mode, we select a galaxy sample of clear LTGs and ETGs, therefore not including intermediate T-Types (-0.5 < T-Type < 0.5). According to these criteria, a total of 1,293 galaxies ($\sim 6 \%$) were excluded. Since the DES observations are deeper than the SDSS ones, the classification based on SDSS imaging could differ for some of the DES galaxies. To improve the quality of our training sample, we also excluded 1,488 galaxies ($\sim 7 \%$) with wrong labels in DS18, identified after a visual inspection of the misclassifications of a preliminary model. We then re-trained the model without those objects. In summary, our primary training sample consists of 17,132 galaxies with $|\mathrm{T\text{-}Type}| > 0.5$ and accurate spec-$z$, and is magnitude-limited with $\mathrm{m}_r < 17.7$ mag (as the original DS18 catalog).
\begin{figure} \centering \includegraphics[width=\columnwidth]{figs/apparent_magnitude.pdf} \includegraphics[width=0.975\columnwidth]{figs/apparent_magnitude_RESERVED.pdf} \caption{Top panel: Distribution of apparent magnitude in the \textit{r}-band ($\mathrm{m}_r$) for the primary training sample (solid blue); the two sets of simulations (solid orange and red show the \textit{rnd} and \textit{max} sets); and the training sample distribution (dashed black) used for the ETG vs. LTG classification scheme. Bottom panel: Same as top but only for the test sample (see section~\ref{sec:ETG_vs_LTG}). Note that the CNN predictions are trained with a sample that extends to $\mathrm{m}_r < 22.5$, while the model is tested only to $\mathrm{m}_r < 21.5$ (the limit of the DES catalogue presented in this work). \label{fig:appmag}} \end{figure} \subsubsection{Training} As described in section~\ref{sec:training}, we use a combination of the primary and the simulated training samples. Nevertheless, from the primary training sample of 17,132 galaxies, we randomly select a subset of 1,132 galaxies, and their corresponding \textit{rnd} and \textit{max} simulated samples, that we never use for training. We denote this subset as the `test sample' and we use it to check the models' performance. Since none of these galaxies (neither the original nor the simulated ones) have been shown to the CNN, using this subset as a test sample is a secure way to check the results of our classification scheme. Since we want to test our models to $\mathrm{m}_r < 21.5$, we only show results for galaxies up to that apparent magnitude threshold. In figure~\ref{fig:appmag}, we show the apparent magnitude distribution of the test sample to $\mathrm{m}_r < 21.5$. The primary test sample consists of 1,132 galaxies, and the \textit{rnd} and \textit{max} test samples include 1,088 and 623 galaxies, respectively.
Therefore, the test sample includes a total of 2,843 galaxies to $\mathrm{m}_r < 21.5$, of which 1,557 (55\%) are labelled as ETGs and 1,286 (45\%) are labelled as LTGs. After removing the galaxies belonging to the test sample, we end up with a training sample of 48,000 galaxy images (16,000$\times$3), with roughly 50\% of each class (ETGs and LTGs). We randomly shuffle the galaxies in the training sample and apply a $k$-fold cross-validation technique (with $k=5$), for which the original training sample is randomly partitioned into $k$ equal-sized sub-samples. Of the $k$ sub-samples, a single sub-sample is retained as the validation data while training the model, and the remaining $k-1$ sub-samples (38,400 images) are used as training data. By doing so, we ensure that each of the 5 CNN models derived is trained with a different set of images and a different initialisation; this provides a (rough) estimate of the classification uncertainty. \subsubsection{Testing} \label{sec:ETG_vs_LTG-test} \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/predprob_ETG.pdf} \caption{Distribution of the predicted probabilities ($\mathrm{p}_i$) for the test sample in the ETG vs. LTG classification scheme. The black solid histogram corresponds to the distribution of the median probability of the 5 models ($\tilde{\mathrm{p}}$), while the dashed black histogram shows the distribution of the median probability of the 5 models only for the secure classifications (i.e., those with $\Delta\mathrm{p}$ < 0.3). \label{fig:predprob}} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/roc_all_runs_ETG.pdf} \includegraphics[width=0.88\columnwidth]{figs/matrix_norm_RESERVED.pdf} \caption{Top panel: ROC curves for the five predicted probabilities (dashed colored lines). The solid black line corresponds to the ROC curve when the predicted probability is equal to the median value ($\tilde{\mathrm{p}}$) of the five predicted probabilities ($\mathrm{p}_i$).
Bottom panel: Confusion matrix for the ETG vs. LTG classification scheme. In each cell, we show the number of candidates and the fraction of candidates (in brackets) of TN (top-left), FN (top-right), FP (bottom-left) and TP (bottom-right). \label{fig:roc}} \end{figure} One way to check the reliability of the model predictions for the ETG vs. LTG classification scheme (hereafter interpreted as predicted probabilities, $\mathrm{p}_i$) is to compute the difference between the maximum and the minimum values of the predicted probabilities of the 5 models (expressed as $\Delta\mathrm{p}$). We find that $92.3\%$ of the galaxies from the test sample have $\Delta\mathrm{p} < 0.3$, and we designate these as secure classifications. The remaining $7.7\%$ of the galaxies within the test sample have less secure classifications. In figure~\ref{fig:predprob}, we show the distribution of the predicted probabilities for the test sample, along with the distribution of their median probability value (for the full and for the secure classifications). Note that the majority of the insecure classifications cluster around intermediate values of p. As is extensively done in the literature \citep[see][for instance]{Powers2011}, we also check the accuracy of our models by computing the area under the ROC curve (ROC AUC) for the different predicted probabilities. The ROC curve is a representation of the false positive rate ($\mathrm{FPR = FP/N}$, i.e., the ratio of the number of false positives to negative cases) versus the true positive rate ($\mathrm{TPR = TP/P}$, i.e., the ratio of the number of true positives to positive cases) for different probability thresholds. A good classifier maximises TP and minimises FP. The top panel of figure~\ref{fig:roc} shows the ROC curves for the 5 models and for their median value.
Good classifiers should be as close as possible to the left-hand and upper boundaries of this panel, and this is clearly true for all the curves, which all have ROC AUC values of 0.99. A complementary way to test the model performance is via the precision (Prec) and recall (R) scores \citep[e.g.][]{Dieleman2015}, which can be defined as follows: \begin{align} \mathrm{Prec} &= \mathrm{\frac{TP}{TP+FP}};\\ \mathrm{R} &= \mathrm{\frac{TP}{TP+FN} = TPR}, \end{align} where the separation between positive and negative samples is determined with respect to a probability threshold. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative (i.e., a purity/contamination indicator). The recall is intuitively the ability of the classifier to find all the positive samples (i.e., a completeness indicator). Additionally, the accuracy of the model prediction is defined as the fraction of correctly classified instances: \begin{equation} \mathrm{Acc = \frac{TP+TN}{P+N}}. \end{equation} We derive the probability threshold ($\mathrm{p_{thr}}$) that optimises the ROC curve (i.e., maximises TPR and minimises FPR), but depending on the user's purpose, one can vary the $\mathrm{p_{thr}}$ to obtain a more complete or less contaminated sample. In table~\ref{tb:models}, we present a summary of these different estimators for the 5 independent model predictions ($\mathrm{p_i}$) and for their median value ($\mathrm{\tilde{p}}$). We have already noted that the ROC AUC equals 0.99 in all cases; Prec, R and Acc range from 0.95 to 0.97. Moreover, if we take $\tilde{\mathrm{p}}$ as our fiducial probability, there are only 36 FN and 60 FP in the test sample (3\% and 4\%, respectively), as shown in the confusion matrix in the bottom panel of figure~\ref{fig:roc}, which translates into Acc$=0.97$. If we restrict attention to the subset of secure classifications, the number of FN and FP decreases to 20 and 28, respectively.
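As a concrete check of the estimators defined above, the counts implied by the test-sample confusion matrix (1,286 LTG positives and 1,557 ETG negatives, with 36 FN and 60 FP, hence TP $=$ 1,250 and TN $=$ 1,497) reproduce the rounded scores quoted for the median model:

```python
def scores(tp, fp, fn, tn):
    """Precision, recall and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # identical to the TPR
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Counts implied by the ETG vs. LTG test-sample confusion matrix
prec, rec, acc = scores(tp=1250, fp=60, fn=36, tn=1497)
print(round(prec, 2), round(rec, 2), round(acc, 2))  # -> 0.95 0.97 0.97
```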
This restriction leads to an accuracy classification score for the secure subset of $\mathrm{Acc} = 0.98$. Nevertheless, $\sim 80\%$ of the insecure classifications are still valid (the FN and FP amount to $16+32=48$ out of the 219 galaxies that comprise 7.7$\%$ of the test sample). Even for the subset of secure candidates (92$\%$ of the test sample), there are some galaxies with intermediate probabilities. We define a \textit{robust} sub-sample of ETGs and LTGs as those with $\max(\mathrm{p}_i) < 0.3$ and $\min(\mathrm{p}_i) > 0.7$, respectively. Note that \textit{robust} classifications are (by definition) within the secure subset. We find that 1,391 galaxies are classified as \textit{robust} ETGs ($53\%$ of the secure and $49\%$ of the whole test sample). On the other hand, 1,077 galaxies are \textit{robust} LTGs ($41\%$ of the secure and $38\%$ of the whole test sample). The remaining $6\%$ of the secure sample (156 galaxies) are intermediate (but still secure) candidates. Nevertheless, if we use the median of the probabilities $\mathrm{\tilde{p}}$ and the optimal threshold of 0.40, most of the galaxies from the secure sample (92$\%$) are still correctly classified. These results demonstrate that the model is able to separate ETGs and LTGs even for the intermediate candidates. Additionally, we apply our models to the subset of 1,293 galaxies (and their simulated counterparts) with -0.5 < T-Type < 0.5 that we did not include in the training of our models. In total, we classified 3,879 galaxies with intermediate values of T-Type (3 $\times$ 1,293, the original galaxies and the two simulated counterparts), of which 2,004 are ETGs (i.e., -0.5 < T-Type < 0.0) and 1,875 are LTGs (i.e., 0.0 < T-Type < 0.5). We find that $62\%$ of them are secure classifications (i.e., $\Delta\mathrm{p} < 0.3$), significantly lower than the same value for the whole test sample.
The numbers of galaxies in this subset classified as ETGs and LTGs are 2,026 ($52\%$ of the total) and 1,853 ($48\%$ of the total), respectively. We also find that 1,348 ($35\%$ of the total) and 874 ($23\%$ of the total) galaxies are classified as \textit{robust} ETGs and \textit{robust} LTGs, respectively. Only 177 galaxies ($5\%$ of the total) are classified as intermediate but still secure candidates. In terms of accuracy, $75\%$ of these galaxies are correctly classified as ETGs, while $88\%$ are correctly classified as LTGs. Therefore, our classifications are reliable (although more uncertain) even for those objects with intermediate values of T-Type (i.e., -0.5 < T-Type < 0.5) that are a priori difficult to classify. Note that the fraction of galaxies with intermediate T-Types is very small (6\% in the primary training sample). Including these galaxies in the test set (assuming the same fraction as in the primary training sample) would reduce the accuracy from 97\% to 96\%. The fraction of such objects in the full DES catalogue (section~\ref{sec:des_morph}) is unknown and their labels are uncertain (see the large scatter in figure 11 from DS18). As a result, it is difficult to quantify how they impact the overall accuracy. Therefore, we only quote the final accuracy of the models after such objects have been removed. One of the key questions we would like to answer is how much the results of our classification are affected by the galaxy brightness. In figure \ref{fig:model-perf} and in table \ref{tb:models-mag}, we show how the metrics used to test the model performance change with apparent magnitude. In general, there are very small variations, with the ROC AUC being the most stable parameter (> 0.99 always). The accuracy range is also small (0.96 < Acc < 0.98), while the precision and recall show variations of $\sim$5\%.
There is no clear trend with apparent magnitude, i.e., the models seem to be able to distinguish between ETGs and LTGs regardless of the faintness of the images. We did the same exercise by dividing the test sample into bins of half-light radius, finding accuracy values above $94\%$, even for the smallest galaxies. Evidently, the CNNs detect features hidden from the human eye and therefore classify significantly better than visual inspection. \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/model_perf.pdf} \caption{ETG vs. LTG model performance (in terms of ROC AUC, precision, accuracy and recall) as a function of magnitude for the test sample. The values are calculated using the $\mathrm{p_{thr}}$ of the median model $\mathrm{\tilde{p}}$ from table \ref{tb:models}. The lack of dependence of the metrics on magnitude demonstrates that the model is able to correctly classify even the fainter galaxies. \label{fig:model-perf}} \end{figure} \begin{table} \begin{center} \caption{Summary of the ETG vs. LTG model performance for the five runs: optimal threshold ($\mathrm{p_{thr}}$), area under the ROC curve, precision, recall and accuracy values. The last row shows the values obtained for the median probability of the five runs, $\mathrm{\tilde{p}}$, which we use throughout the paper as the standard model.} \label{tb:models} \begin{tabular}{cccccc} \hline \hline Model & $\mathrm{p_{thr}}$& ROC AUC & Prec & R & Acc\\ \hline \hline $\mathrm{p_1}$ & 0.49 & 0.99 & 0.97 & 0.96 & 0.97\\ $\mathrm{p_2}$ & 0.46 & 0.99 & 0.95 & 0.96 & 0.96\\ $\mathrm{p_3}$ & 0.39 & 0.99 & 0.96 & 0.97 & 0.97\\ $\mathrm{p_4}$ & 0.35 & 0.99 & 0.95 & 0.97 & 0.96\\ $\mathrm{p_5}$ & 0.54 & 0.99 & 0.96 & 0.96 & 0.96\\ \hline $\mathrm{\tilde{p}}$ & 0.40 & 0.99 & 0.95 & 0.97 & 0.97\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Summary of the ETG vs. LTG performance in magnitude bins.
The values are calculated using the $\mathrm{p_{thr}}= 0.40$ obtained for the full test sample and the median model $\mathrm{\tilde{p}}$.} \label{tb:models-mag} \begin{tabular}{lcccc} \hline \hline Mag bin & ROC AUC & Prec & R & Acc\\ \hline \hline 14 < $\mathrm{m}_{r}$ < 21.5 & 0.99 & 0.95 & 0.97 & 0.97 \\ \hline 14 < $\mathrm{m}_{r}$ < 17 & 0.99 & 0.97 & 0.95 & 0.97\\ 17 < $\mathrm{m}_{r}$ < 18 & 0.99 & 0.95 & 0.97 & 0.96 \\ 18 < $\mathrm{m}_{r}$ < 19 & 1.00 & 0.94 & 0.98 & 0.98 \\ 19 < $\mathrm{m}_{r}$ < 20 & 0.99 & 0.93 & 0.96 & 0.96 \\ 20 < $\mathrm{m}_{r}$ < 21.5 & 0.99 & 0.97 & 1.00 & 0.98 \\ \hline \end{tabular} \end{center} \end{table} \subsection{Face-on vs. Edge-on classification scheme} \label{sec:edgeon} In this section, we present our CNN predictions for the second classification scheme, which distinguishes face-on from edge-on galaxies. What we mean by `edge-on' is intuitively obvious, and we treat these objects as the positive class ($\mathrm{Y}=1$); we use the term `face-on' to refer to the objects that are not edge-on (i.e., the negative class, $\mathrm{Y}=0$). In other words, this classification does not aim to return a continuous output, such as galaxy inclination or ellipticity, but rather to select galaxies that are clearly viewed edge-on. \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/apparent_magnitude_EdgeOn.pdf} \includegraphics[width=\columnwidth]{figs/apparent_magnitude_RESERVED_EdgeOn.pdf} \caption{Top panel: Same as figure~\ref{fig:appmag} but for the face-on vs. edge-on classification scheme. Bottom panel: Same as top but only for the test sample (see section~\ref{sec:edgeon}). Note that the CNN predictions are trained with a sample that extends up to $\mathrm{m}_r < 22.5$, while we test the model predictions only up to $\mathrm{m}_r < 21.5$ (the limit of the DES catalogue presented in this work).
\label{fig:appmag_edgeon}} \end{figure} \subsubsection{Training} As for the first classification scheme, our training sample is a combination of the original and the simulated samples. We use the information provided by DS18 on the probability of being edge-on (see section~\ref{sec:ds18}) to select a reliable sample of galaxies with which to train our CNNs. We define face-on galaxies as those with $\mathrm{p_{edge-on}} < 0.1$ and edge-on galaxies as those with $\mathrm{p_{edge-on}} > 0.9$, corresponding to 11,783 galaxies. We randomly select 2,783 galaxies (and their simulated versions) as the test sample and the remaining 9,000 for the training sample. The training sample consists of 27,000 galaxies (3$\times$9,000, the originals and their simulated versions) with 23,424 (87\%) face-on galaxies and 3,576 (13\%) edge-on galaxies. As for the ETG vs. LTG model, we train 5 different models with $k$-folding. We have reserved a total of 8,349 original and simulated galaxies for testing. However, as for the first classification scheme (section~\ref{sec:ETG_vs_LTG}), we only show results for galaxies with $\mathrm{m}_r < 21.5$: all the 2,783 galaxies within the primary test sample, 2,673 galaxies from the \textit{rnd} test sample and 1,477 galaxies from the \textit{max} test sample. Therefore, the test sample includes a total of 6,933 galaxies, of which 6,066 (87\%) are face-on and 876 (13\%) are edge-on. In figure~\ref{fig:appmag_edgeon}, we show the distribution of apparent magnitude ($\mathrm{m}_r$) for the different datasets that make up the training and the test samples for the face-on vs. edge-on classification scheme. Since the fractions of face-on and edge-on galaxies are so unequal, we use balanced weights during the training phase of our CNN. In other words, the algorithm compensates for the lack of examples of one class by dividing the loss of each example by the fraction of objects of that particular class.
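The class-weighting step described above can be sketched as follows. This is a minimal NumPy illustration, not the actual training code; the toy 87/13 split mimics the face-on vs. edge-on imbalance of the training sample.

```python
import numpy as np

def balanced_weights(labels):
    """Weight each example by the inverse of its class's fraction,
    so that both classes contribute equally to the total loss."""
    labels = np.asarray(labels)
    fractions = {c: np.mean(labels == c) for c in np.unique(labels)}
    return np.array([1.0 / fractions[c] for c in labels])

# Toy sample mimicking the 87% face-on / 13% edge-on imbalance
y = np.array([0] * 87 + [1] * 13)
w = balanced_weights(y)
# Each class now carries the same total weight in the loss
```

In practice, deep-learning frameworks accept such per-example (or per-class) weights directly in their training loops.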
\begin{figure} \centering \includegraphics[width=\columnwidth]{figs/predprob_EdgeOn.pdf} \caption{Distribution of the predicted probabilities ($\mathrm{p^e_i}$) for the test sample in the face-on vs. edge-on classification scheme. The black solid histogram corresponds to the distribution of the median probability of the 5 models ($\mathrm{\tilde{p}_e}$), while the dashed black histogram shows the distribution of the median probability of the 5 models only for the secure classifications (i.e., those with $\Delta\mathrm{p_e} < 0.3$). \label{fig:predprob_edgeon}} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/roc_all_runs_EdgeOn.pdf} \includegraphics[width=0.88\columnwidth]{figs/matrix_norm_RESERVED_EdgeOn.pdf} \caption{Same as figure~\ref{fig:roc} but for the face-on vs. edge-on classification. Note the better performance of the average model $\mathrm{\tilde{p}_e}$. \label{fig:conf_matrix}} \end{figure} \subsubsection{Testing} \label{sec:edgeon-test} As described in section~\ref{sec:ETG_vs_LTG-test}, we check the accuracy of our model predictions by means of the ROC AUC, Prec, R and Acc estimators. In table~\ref{tb:models-edegon}, we show these values for the 5 face-on vs. edge-on models (denoted as $\mathrm{p^e_i}$) and the median one (denoted as $\mathrm{\tilde{p}_e}$). The top panel of figure~\ref{fig:conf_matrix} shows the ROC curve for the different models, while the bottom panel summarises the results of $\mathrm{\tilde{p}_e}$ in a confusion matrix showing the number of TN (5977), FP (89), FN (21) and TP (846) along with their respective fractions within the two classes. The median model $\mathrm{\tilde{p}_e}$ performs better than the 5 individual models, with Acc$=0.98$ and R$=0.98$, while Prec$=0.90$ is slightly smaller. This is in part due to the unbalanced test sample: the total number of FP is about 1/10 of the TP, although the FP are only $\sim$ 1--2\% of the face-on galaxies.
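The quoted metrics follow directly from the confusion-matrix counts given above; a short sanity-check sketch (not part of the pipeline):

```python
# Edge-on confusion matrix quoted in the text: TN=5977, FP=89, FN=21, TP=846
tn, fp, fn, tp = 5977, 89, 21, 846

precision = tp / (tp + fp)                  # purity of the predicted edge-on sample
recall = tp / (tp + fn)                     # fraction of true edge-on recovered
accuracy = (tp + tn) / (tn + fp + fn + tp)  # overall fraction of correct labels
# precision ~ 0.90, recall ~ 0.98, accuracy ~ 0.98, matching the table values
```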
On the other hand, the number of FN (only 21, or $\sim 2\%$ of the true edge-on galaxies) is considerably lower than the number of TP, which translates into an excellent R value. Analogously to the ETG vs. LTG model, we define a \textit{secure} sub-sample of galaxies for the edge-on classification where $\Delta\mathrm{p_e} < 0.3$; \textit{secure} galaxies make up 93\% of the test sample. The \textit{robust} edge-on galaxies are those with $min(\mathrm{p}^{e}_i) > 0.7$. We find 668 galaxies classified as \textit{robust} edge-on ($10\%$ of the secure sample and $12\%$ of the whole test sample). The dependence of the edge-on classification on apparent magnitude is highlighted in figure \ref{fig:model-perf_edgeon}, which plots the performance of the $\mathrm{\tilde{p}_e}$ model in the same magnitude bins (summarised in table \ref{tb:models-edegon-mag}). There is a very small variation with apparent magnitude: the most affected metric, the recall, decreases from 0.99 at $14.0 < \mathrm{m}_r < 17.0$ to 0.95 at $19.0 < \mathrm{m}_r < 20.0$. In the same table, we show the values obtained for a balanced test sample, which is robust against class representation. In this case, the precision values are significantly improved (from Prec $=0.90$ to Prec $=0.98$ for the full test sample) while the other indicators are almost unchanged. We did the same exercise by dividing the test sample into bins of half-light radius, finding accuracy values above $96\%$, even for the smallest galaxies. \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/model_perf_EdgeOn.pdf} \caption{Same as figure~\ref{fig:model-perf} but for the edge-on model performance.
The lack of dependence of the metrics on magnitude demonstrates that the model is able to correctly classify even the fainter galaxies. \label{fig:model-perf_edgeon}} \end{figure} \begin{table} \begin{center} \caption{Summary of the edge-on versus face-on model performance for the five runs: optimal threshold ($\mathrm{p_{thr}}$), area under the ROC curve, precision, recall and accuracy values. The last row shows the values obtained for the median probability of the five runs, $\mathrm{\tilde{p}_e}$, which we use throughout the paper as the standard model.} \label{tb:models-edegon} \begin{tabular}{cccccc} \hline \hline Model & $\mathrm{p_{thr}}$& ROC AUC & Prec & R & Acc\\ \hline \hline $\mathrm{p^e_1}$ & 0.26 & 1.00 & 0.86 & 0.97 & 0.98\\ $\mathrm{p^e_2}$ & 0.21 & 1.00 & 0.83 & 0.98 & 0.97\\ $\mathrm{p^e_3}$ & 0.32 & 1.00 & 0.92 & 0.97 & 0.98\\ $\mathrm{p^e_4}$ & 0.35 & 1.00 & 0.80 & 0.98 & 0.97\\ $\mathrm{p^e_5}$ & 0.29 & 0.99 & 0.79 & 0.97 & 0.96\\ \hline $\mathrm{\tilde{p}_e}$ & 0.33 & 1.00 & 0.90 & 0.98 & 0.98\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Summary of the edge-on model performance in magnitude bins.
The values are calculated using the $\mathrm{p_{thr}}= 0.33$ obtained for the full test sample and the median model $\mathrm{\tilde{p}_e}$ (in brackets for a balanced test sample).} \label{tb:models-edegon-mag} \begin{tabular}{lcccc} \hline \hline Mag bin & ROC AUC & Prec & R & Acc\\ \hline \hline 14 < $\mathrm{m}_{r}$ < 21.5 & 1.00 (1.00) & 0.90 (0.98) & 0.98 (0.98) & 0.98 (0.98) \\ \hline 14 < $\mathrm{m}_{r}$ < 17 & 1.00 (1.00) & 0.89 (0.97) & 0.99 (0.99) & 0.98 (0.98)\\ 17 < $\mathrm{m}_{r}$ < 18 & 1.00 (1.00) & 0.93 (1.00) & 0.98 (0.98) & 0.99 (0.99)\\ 18 < $\mathrm{m}_{r}$ < 19 & 1.00 (1.00) & 0.90 (0.99) & 0.96 (0.96) & 0.99 (0.98)\\ 19 < $\mathrm{m}_{r}$ < 20 & 1.00 (1.00) & 0.90 (0.97) & 0.95 (0.95) & 0.98 (0.96)\\ 20 < $\mathrm{m}_{r}$ < 21.5 & 1.00 (1.00)& 0.89 (0.99) & 0.98 (0.98) & 0.98 (0.98)\\ \end{tabular} \end{center} \end{table} \begin{table*} \begin{center} \caption{Comparison of the test samples discussed in section \ref{sec:results}. Columns show the total number of galaxies and the corresponding fraction of secure ones ($\Delta\mathrm{p} < 0.3$). Also given are the fractions of the secure galaxies classified as (\textit{robust}) ETGs, LTGs and edge-on. The last column contains the fraction of \textit{robust} ETGs that are also classified as \textit{robust} edge-on. These cases should be taken with care since only discs should be edge-on.} \label{tab:results} \begin{tabular}{lrcccccc} \hline \hline Sample & \# galaxies & Secure ($\Delta\mathrm{p} < 0.3$) & ETGs (\textit{robust}) & LTGs (\textit{robust}) & Secure ($\Delta\mathrm{p_e} < 0.3$) & Edge-on (\textit{robust}) & ETGs$+$edge-on\\ & & \% from total & \% from secure & \% from secure & \% from total &\% from secure & \% from \textit{robust} ETGs \\ \hline \hline DES DR1 & 26,971,945 & 87 & 12 (10) & 88 (85) & 73 & 9 (6) & 0.3 \\ DES struct. param. 
& 6,060,018 & 89 & 9 (7) & 91 (88) & 79 & 7 (6) & 0.3\\ DES stellar mass & 137,956 & 83 & 48 (44) & 52 (47) & 86 & 7 (6) & 0.5\\ VIPERS & 7,384 & 81 & 22 (20) & 78 (76) & 77 & 2 (2) & 0.1 \\ \hline \end{tabular} \end{center} \end{table*} The face-on vs. edge-on classification is useful for different scientific purposes (see section~\ref{sec:intro}), but might also help as an additional test for the ETG vs. LTG classification presented in section~\ref{sec:ETG_vs_LTG}. Since only discs can be seen edge-on, a galaxy should not be classified simultaneously as an ETG and edge-on. We find 91 (predicted) ETGs in the ETG vs. LTG test sample with $\mathrm{\tilde{p}_e} > 0.33$, corresponding to $\sim$3\% of the test sample. This fraction is reduced to 0.7\% when only \textit{robust} ETG and \textit{robust} edge-on are considered. This small fraction reassures us about the performance of the two models. A visual inspection of these galaxies confirms that most of them look like edge-on lenticulars, with a clear bulge and disc but no signs of spiral arms. This is especially evident for \textit{robust} edge-on ETGs (see figure~\ref{fig:cutouts}). Thus, including the additional information provided by the edge-on vs. face-on classification helps to increase the purity of the ETG sample and is an efficient way to identify edge-on lenticulars. \section{DES DR1 morphological catalog} \label{sec:des_morph} In this section, we present the results of applying the classification schemes described in sections~\ref{sec:ETG_vs_LTG} and~\ref{sec:edgeon} to the DES DR1 galaxy catalog presented in section~\ref{sec:desy3}. We briefly summarise the overall results here but address a more exhaustive comparison with other observed galaxy properties in section~\ref{sec:results}. 
Table \ref{tab:results} summarises the statistics for the full DES morphological catalogue, as well as for three comparison samples, while the magnitude distribution of each sub-sample is shown in figure \ref{fig:mag-distr}. In table~\ref{tab:catalog}, we describe the content of the full DES DR1 morphological catalogue, which will be released along with the paper. Examples of each class at different magnitudes are shown in appendix \ref{appendix}. For the ETG vs. LTG classification scheme, $87\%$ of the 26,971,945 galaxies in the DES DR1 morphological catalogue are secure classifications, i.e., $\Delta\mathrm{p} < 0.3$ (where $\Delta\mathrm{p}$ corresponds to the maximum difference between the five predicted probabilities). Within this subset of secure classifications, $10\%$ of the galaxies are classified as \textit{robust} ETGs (i.e., $max(\mathrm{p_i}) < 0.3$), while $85\%$ are classified as \textit{robust} LTGs (i.e., $min(\mathrm{p_i}) > 0.7$). The remaining $5\%$ of the galaxies may be considered as intermediate (but still secure) candidates. Being less conservative, $\sim 12\%$ of the galaxies from the subset of secure classifications are classified as ETGs (i.e., $\mathrm{\tilde{p}} < \mathrm{p_{thr}} = 0.4$) while, consequently, $\sim 88\%$ are classified as LTGs (i.e., $\mathrm{\tilde{p}} > \mathrm{p_{thr}} = 0.4$). The much larger fraction of LTGs with respect to ETGs in a magnitude-limited sample is consistent with previous work (see e.g., \citealt{Pozzetti2010}). Figure~\ref{fig:appmag_vs_photoz} shows how the galaxies in the whole DES DR1 morphological catalog populate the apparent magnitude ($\mathrm{m}_r$) and photometric redshift ($z_\mathrm{photo}$) plane, color coded by density, secure fraction and predicted LTG fraction. The predicted LTG fraction is computed as the average of the predicted labels (i.e., 0 for ETGs, 1 for LTGs) of the galaxies in each bin.
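Averaging 0/1 labels per bin, as just described, can be sketched as follows (an illustrative NumPy version with invented variable names and rectangular rather than hexagonal bins):

```python
import numpy as np

def ltg_fraction_map(mag, z_photo, is_ltg, mag_bins, z_bins):
    """Mean of the 0/1 ETG/LTG labels in each (magnitude, redshift) bin."""
    counts, _, _ = np.histogram2d(mag, z_photo, bins=[mag_bins, z_bins])
    ltg_sum, _, _ = np.histogram2d(mag, z_photo, bins=[mag_bins, z_bins],
                                   weights=np.asarray(is_ltg, dtype=float))
    with np.errstate(invalid="ignore", divide="ignore"):
        return ltg_sum / counts  # NaN where a bin is empty

# Toy example: two magnitude bins, one redshift bin
frac = ltg_fraction_map(
    mag=[15.0, 15.5, 20.5, 20.6, 20.7],
    z_photo=[0.1, 0.1, 0.5, 0.5, 0.5],
    is_ltg=[0, 0, 1, 1, 0],
    mag_bins=[14, 18, 21.5],
    z_bins=[0, 1],
)
# Bright bin: all ETG (fraction 0); faint bin: 2 of 3 are LTG
```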
As expected, the brightest galaxies at low $z_\mathrm{photo}$ are dominated by ETGs, while the faint galaxies are predominantly LTGs. The fraction of securely classified galaxies is relatively constant for the (observed) bright galaxies, and there is an interesting trend with redshift for galaxies fainter than $\mathrm{m}_r = 19$: the fraction of insecure galaxies increases with $z_\mathrm{photo}$, as expected. In any case, note that the average fraction of secure galaxies is 87\%; it remains greater than 50\% even in the more uncertain regions of the $\mathrm{m}_r$-$z_\mathrm{photo}$ plane. Although most of the faintest (observed) galaxies are classified as LTGs, the classification model is able to retrieve a significant fraction of ETGs ($\sim 50\%$) at intermediate $z_\mathrm{photo} \sim 0.5$ and $\mathrm{m}_r \gtrsim 20.0~\mathrm{mag}$. Note that the faint, low-redshift population corresponds to intrinsically faint (and therefore low-mass) galaxies, which are, in general, LTGs. Unfortunately, there are no additional parameters with which to further test the full DES DR1 catalogue. We cross-correlate this sample with other available measurements in the following sections. \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/apparent_magnitude_DES-FULL.pdf} \caption{Normalized apparent magnitude distribution for the full DES morphological catalogue presented in this work, as well as for the three catalogues used for comparison. \label{fig:mag-distr}} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/appmag_vs_redshift_secure_fractions.pdf} \caption{Apparent magnitude ($\mathrm{m}_r$) vs. photometric redshift ($z_\mathrm{photo}$) for the secure subset ($\Delta_\mathrm{p} < 0.3$) within the DES DR1 morphological catalog. Left-hand panel shows the number of galaxies (in log-scale) in each hexagonal bin. Middle panel shows the fraction of secure galaxies. Right-hand panel indicates the predicted ETG/LTG fraction.
A predicted LTG fraction of 1 means that $100\%$ of the galaxies in a particular hexagonal bin are LTGs, while a predicted LTG fraction of 0 indicates $100\%$ of the objects in the bin are ETGs. The brightest galaxies at low $z_\mathrm{photo}$ are dominated by ETGs, while the faint galaxies are predominantly LTGs. The fraction of securely classified galaxies is relatively constant for the (observed) bright galaxies and the fraction of insecure galaxies increases with $z_\mathrm{photo}$. The average fraction of secure galaxies is 87\% and greater than 50\% even in the more uncertain regions of the $\mathrm{m}_r$-$z_\mathrm{photo}$ plane. \label{fig:appmag_vs_photoz}} \end{figure} For the edge-on vs. face-on classification scheme, $18\%$ of the galaxies have values of $\mathrm{\tilde{p}_e} > 0.33$ ($9\%$ if the limit is $min(\mathrm{p^e_i}) > 0.70$). The fraction of \textit{robust} ETGs with $\mathrm{\tilde{p}_e} > 0.33$ is less than $\sim 3\%$ (0.3$\%$ if $min(\mathrm{p^e_i}) > 0.70$). This small fraction is reassuring since, as explained in section \ref{sec:edgeon-test}, edge-on galaxies should only be discs (and therefore LTGs). We strongly recommend that users combine the two classifications, since many of these galaxies could actually be misclassified LTGs or edge-on lenticulars. Some examples are shown in figure \ref{fig:cutouts}.
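Combining the two classifications, as recommended, reduces to a simple mask over the catalogue flags (column values as listed in table~\ref{tab:catalog}); a minimal sketch, with illustrative function and variable names:

```python
import numpy as np

def edge_on_lenticular_candidates(flag_ltg, flag_edgeon):
    """Robust ETGs (FLAG_LTG == 4) that are also robust edge-on
    (FLAG_EdgeOn == 3): candidate edge-on lenticulars."""
    flag_ltg = np.asarray(flag_ltg)
    flag_edgeon = np.asarray(flag_edgeon)
    return (flag_ltg == 4) & (flag_edgeon == 3)

# Toy example: only the first galaxy is both robust ETG and robust edge-on
mask = edge_on_lenticular_candidates([4, 4, 5, 0], [3, 0, 3, 3])
```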
\begin{table*} \begin{center} \caption{Content of the full DES DR1 morphological catalogue.} \label{tab:catalog} \begin{tabular}{ll} \hline \hline COADD\_OBJECT\_ID & Unique object ID for Y3 coadd processing \\ RA & Right ascension (J2000.0 in degrees) \\ DEC & Declination (J2000.0 in degrees) \\ MAG\_AUTO\_R & Apparent magnitude in an elliptical aperture shaped by the Kron radius ($\mathrm{m}_r$ throughout the paper) \\ FLUX\_RADIUS\_R & Radius (in pixels) of the circle containing half of the flux of the object ($\mathrm{r}_r$ throughout the paper) \\ $\mathrm{P}i\_\mathrm{LTG}$ & Probability of being LTG for each of the 5 models (with $i = [1,5]$) \\ $\mathrm{MP}\_\mathrm{LTG}$ & Median probability of the 5 models of being LTG \\ $\mathrm{P}i\_\mathrm{EdgeOn}$ & Probability of being edge-on for each of the 5 models (with $i = [1,5]$) \\ $\mathrm{MP}\_\mathrm{EdgeOn}$ & Median probability of the 5 models of being edge-on \\ FLAG\_LTG & Classification for ETG vs. LTG model; 0=ETG, 2=secure ETG, 4=robust ETG; 1=LTG, 3=secure LTG, 5=robust LTG\\ FLAG\_EdgeOn & Classification for edge-on model; 0=no edge-on, 1=edge-on, 2=secure edge-on, 3=robust edge-on\\ \hline \end{tabular} \end{center} \end{table*} \section{Validation of the classification on real DES galaxies} \label{sec:results} Although the results presented in sections \ref{sec:ETG_vs_LTG-test} and \ref{sec:edgeon-test} show that the CNNs perform well, it may be argued that the tests were done on a set of simulated images similar to the ones used to train the CNNs. To further assess the quality of the morphological classification on real DES DR1 galaxy images, we now present a comparison with other available data (both photometric and spectroscopic).
\subsection{DES DR1 stellar mass catalog}\label{sec:palmese} The DES DR1 stellar mass catalog is the result of running the \texttt{LePhare} code \citep{Arnouts2011} on the DES DR1 galaxy catalog using \citet{Bruzual2003} templates, three different metallicities (including solar), a Chabrier IMF and exponentially declining star formation histories \citep[similarly to][]{Palmese2020}. The redshift of each galaxy is assumed to be equal to the mean photo-$z$ statistic obtained from multi-object fitting photometry \citep[MOF,][]{Drlica-Wagner2018}. The resulting catalog contains estimates of the stellar mass and the absolute magnitude for $\sim$184 million galaxies. \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/absmag_vs_redshift_classes.pdf} \caption{Absolute magnitude ($\mathrm{M}_r$) vs. spectroscopic redshift ($z_{\mathrm{spec}}$) for the secure subset ($\Delta_\mathrm{p} < 0.3$) for the comparison sample of the DES DR1 stellar mass catalog. Left-hand panel shows the number of galaxies (in log-scale) in each bin. Right-hand panel indicates the predicted ETG/LTG fraction. There is a clear separation between the ETG and LTG populations, with the (intrinsically) brightest galaxies at each redshift dominated by the ETGs, as expected. The lack of ETGs at the lowest redshifts ($z \lesssim 0.2$) is due to the scarcity of massive ETGs in such a small volume. \label{fig:absmag_vs_appmag}} \end{figure} To select galaxies in the DES DR1 stellar mass catalog for which the stellar mass and absolute magnitude estimates are reliable (given the large uncertainties associated with the $z_{\mathrm{photo}}$), we cross-match the above-mentioned catalog with a spectroscopic compilation from several surveys\footnote{J. Gschwend, private communication (see also \citealt{Gschwend2018A}).} and select galaxies with $(|z_{\mathrm{photo}}-z_{\mathrm{spec}}|)/(1+z_{\mathrm{spec}}) < 0.05$.
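The photo-$z$ reliability cut above can be sketched directly (an illustrative helper, not the actual selection code):

```python
import numpy as np

def reliable_photoz(z_photo, z_spec, tol=0.05):
    """Keep galaxies whose photometric redshift agrees with the
    spectroscopic one: |z_photo - z_spec| / (1 + z_spec) < tol."""
    z_photo = np.asarray(z_photo, dtype=float)
    z_spec = np.asarray(z_spec, dtype=float)
    return np.abs(z_photo - z_spec) / (1.0 + z_spec) < tol

keep = reliable_photoz([0.50, 0.60], [0.52, 0.50])
# |0.50-0.52|/1.52 ~ 0.013 (kept); |0.60-0.50|/1.50 ~ 0.067 (rejected)
```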
The sample used for the comparison with our work consists of 137,956 galaxies covering a redshift range of $0 < z < 1$ and the magnitude distribution shown in figure \ref{fig:mag-distr}. The summary of the statistics is shown in table \ref{tab:results}. For this sub-sample, $83\%$ of the galaxies show secure classifications (i.e., $\Delta \mathrm{p} < 0.30$). The fractions of \textit{robust} ETGs and LTGs are 44\% and 47\%, respectively, i.e., this is a much more balanced sample compared to the full DES DR1 (and also to the other sub-samples shown in table \ref{tab:results}). We note that the magnitude distribution of this subset of galaxies is relatively flat, meaning that a large fraction of the faint LTGs may be missing. Regarding the second classification, 7\% of the galaxies are edge-on and less than 0.5\% of the \textit{robust} ETGs are classified as \textit{robust} edge-on. In figure~\ref{fig:absmag_vs_appmag}, we show how the galaxies populate the absolute magnitude -- redshift plane ($\mathrm{M}_r$ and $z_\mathrm{spec}$, respectively). There is a clear separation between the ETG and LTG populations, with the (intrinsically) brightest galaxies at each redshift dominated by the ETGs, as expected. The lack of ETGs at the lowest redshifts ($z \lesssim 0.2$) is due to the scarcity of massive ETGs in such a small volume. \subsection{DES Y1 structural parameters} \label{sec:tarsitano} The DES Y1 structural and morphological catalogue presented in \citet{Tarsitano2018} consists of $\sim$50 million objects selected from the first year of the DES. For a comparison with our predicted morphologies, we use the single S\'ersic index ($\mathrm{n}_r$) and the ellipticity ($\epsilon_r$) obtained with GALFIT for the \textit{r}-band. Following Appendix B3.2 of \citet{Tarsitano2018}, we extract a clean sample of validated and calibrated objects by applying the recommended cuts $\mathrm{FIT\_STATUS\_R=1}$ and $\mathrm{SN\_R>10}$ in the \textit{r}-band.
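These two quality cuts amount to a simple boolean mask (a sketch with illustrative column names, not the actual catalogue schema):

```python
import numpy as np

def validated_fits(fit_status_r, sn_r):
    """Quality cuts on the structural catalogue: successful r-band
    fit (FIT_STATUS_R == 1) and signal-to-noise above 10 (SN_R > 10)."""
    return (np.asarray(fit_status_r) == 1) & (np.asarray(sn_r) > 10)

# Toy example: only the first object passes both cuts
mask = validated_fits([1, 1, 0], [25.0, 8.0, 30.0])
```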
We also select objects with realistic values for the S\'ersic index and ellipticity, within $0 < \mathrm{n}_r < 10$ and $0 < \epsilon_r < 1$, respectively. These criteria are fulfilled by 54\% of the objects in the catalogue. Then, we cross-match the resulting catalog with our DES DR1 catalog to $\mathrm{m}_r < 21.5~\mathrm{mag}$ and exclude $\sim 600$ objects with unreliable redshifts (i.e., $z_\mathrm{photo} < 0.0$). Finally, we construct a catalog for comparison with 6,060,018 galaxies ($\sim12\%$ of the original catalogue) for which accurate $\mathrm{n}_r$, $\epsilon_r$ and apparent magnitudes are available. The magnitude distribution is shown in figure \ref{fig:mag-distr} and the median apparent magnitude of the selected sub-sample is $\mathrm{\tilde{m}_r} = 20.8~\mathrm{mag}$. As detailed in table \ref{tab:results}, $89\%$ of the galaxies in this subset show secure ETG/LTG classifications (i.e., $\Delta \mathrm{p} < 0.30$). While the fraction of edge-on galaxies, 7\%, is very similar to the other sub-samples, the fractions of \textit{robust} ETGs and LTGs (7\% and 88\%, respectively) are very uneven. The \textit{r}-band magnitude distribution is similar to that of the DES DR1 morphological catalogue, although missing some galaxies at the very bright end, which can explain the larger fraction of LTGs for this subset. \begin{figure*} \centering \includegraphics[width=\columnwidth]{figs/sersic_vs_ellipticity_classes.pdf} \includegraphics[width=\columnwidth]{figs/radius_vs_ellipticity_classes.pdf} \includegraphics[width=\columnwidth]{figs/sersic_norm_by_classes.pdf} \includegraphics[width=\columnwidth]{figs/ellipticity_norm_by_classes.pdf} \caption{Top-left panels: S\'ersic index vs. ellipticity for the sample in common with the DES Y1 structural parameters catalogue. The bins are color coded by number density and fraction of LTG over the total. Bottom-left panel: Cumulative distribution function (CDF) of the S\'ersic index for the ETG vs.
LTG classification scheme: red histogram corresponds to the \textit{robust} ETGs, those with $max (\mathrm{p_i}) < 0.30$; blue histogram shows the \textit{robust} LTGs, those with $min (\mathrm{p_i}) > 0.70$; green histogram corresponds to the intermediate but secure candidates. Top-right panels: Observed radius vs. ellipticity color coded by number density and fraction of edge-on galaxies over the total. Bottom-right panel: CDFs for the ellipticity. Black histograms show the CDFs for the face-on galaxies with $\mathrm{\tilde{p}_e} < 0.33$ (dashed) and $max (\mathrm{p^e_i}) < 0.30$ (solid). Orange histograms correspond to the CDFs for the edge-on galaxies with $\mathrm{\tilde{p}_e} > 0.33$ (dashed) and $min (\mathrm{p^e_i}) > 0.70$ (solid). The very different distributions of the sub-samples are an indicator of the accuracy of our model predictions on real DES galaxy images. \label{fig:cum_sersic}} \end{figure*} We check the reliability of this sub-sample by using the structural parameters derived by \cite{Tarsitano2018}. It is well known that the S\'ersic index correlates with galaxy morphology: a large $n_r$ is a good proxy for ETGs and vice versa for LTGs (e.g., \citealt{Fischer2019}). On the other hand, edge-on galaxies should have large ellipticity values, $\epsilon_\mathrm{r}$. In figure~\ref{fig:cum_sersic}, we show how this sub-sample populates the $\mathrm{n}_r-\epsilon_\mathrm{r}$ and $\mathrm{r}_r-\epsilon_\mathrm{r}$ planes, as well as cumulative distribution functions (CDFs) for the S\'ersic index and the ellipticity for the two classification schemes. For the ETG vs. LTG classification, we find an evident separation of the two classes around $\mathrm{n}_r \sim 2$ and almost no ETGs with $\epsilon_\mathrm{r} > 0.5$, as expected. Although the fraction of galaxies with high S\'ersic index classified as LTGs is $\sim$ 30\%, we note that this is due to the much larger fraction of LTGs in this sub-sample.
According to the CDF, $88\%$ of the \textit{robust} ETGs have $\mathrm{n}_r > 2$, while $87\%$ of the \textit{robust} LTGs have $\mathrm{n}_r < 2$. Although the transition between ETGs and LTGs is not exactly at $\mathrm{n}_r = 2$, the very different distributions of the ETG and LTG samples are an indicator of the accuracy of our model predictions on real DES galaxy images. It is also interesting to note that galaxies not classified within the previous two classes, i.e., the secure intermediate candidates, show a CDF that places them in between the CDFs for the ETGs and the LTGs. For the face-on vs. edge-on classification scheme, we find an even sharper separation at $\epsilon_\mathrm{r} \sim 0.5$ at all radii, except for the smallest galaxies ($\mathrm{r}_r \lesssim 1.0$ arcsec), which indicates that in those cases the spatial resolution is not sufficient to identify edge-on galaxies. Regarding the CDF, $87\%$ of the \textit{robust} face-on galaxies ($max (\mathrm{p^e_i}) < 0.30$) have $\epsilon_\mathrm{r} < 0.5$, while $\sim 100\%$ of the \textit{robust} edge-on ($min (\mathrm{p^e_i}) > 0.70$) have $\epsilon_\mathrm{r} > 0.5$, thus allowing us to be confident about our model predictions (figure~\ref{fig:cum_sersic}). The fact that only 0.3\% of the \textit{robust} ETGs are classified as \textit{robust} edge-on is also a good sanity check. \subsection{VIPERS spectral classification} \label{sec:vipers} In this section, we compare the predictions made by our ETG vs. LTG classification with an unsupervised machine-learning classification extracted from the VIMOS Public Extragalactic Redshift Survey (VIPERS) presented in \citet{Siudek2018}. The data release provides spectroscopic measurements and photometric properties for 86,775 galaxies. The galaxy classification is based on a Fisher Expectation-Maximization (FEM) unsupervised algorithm working in a parameter space of 12 rest-frame magnitudes and spectroscopic redshift.
The FEM unsupervised algorithm is able to distinguish 12 classes (11 classes of galaxies and an additional class of broad-line active galactic nuclei, AGNs). In particular, classes 1--3 host the reddest spheroidal-shape galaxies, showing no sign of star formation activity and dominated by old stellar populations; classes 7--11 contain the blue star-forming galaxies. Classes 4--6 host intermediate galaxies whose physical properties (such as colours, sSFR, stellar masses, and shapes) are intermediate between those of red, passive, and blue, active galaxies. These intermediate galaxies have more concentrated light profiles and lower gas contents than star-forming galaxies. Class 11 may consist of low-metallicity galaxies or AGNs, according to its location on the BPT diagram. See the color-color diagrams in figure 2 of \citet{Siudek2018} for further details. We include in this comparison VIPERS galaxies that are also present in the DES DR1 morphological catalog with an accurate spectroscopic redshift estimate and the highest membership probability to one of the classes. This subset includes 7,384 galaxies with an apparent magnitude of $\mathrm{m}_r \in [18.0, 21.5]~\mathrm{mag}$ (see figure \ref{fig:mag-distr}) and a spectroscopic redshift distribution spanning $0.04 < z_\mathrm{spec} < 1.46$, with a median value of $\tilde{z}_\mathrm{spec} \approx 0.55$. Note that this is the faintest of the comparison samples. Table \ref{tab:results} shows statistics for this sub-sample, for which $81\%$ of the galaxies show secure classifications (i.e., $\Delta \mathrm{p} < 0.30$). Of the secure subset, $20\%$ of the galaxies are classified as \textit{robust} ETGs while 76\% are \textit{robust} LTGs. Although the LTGs still dominate the number counts, the two classes are much more balanced than for the full DES morphological catalogue or the \cite{Tarsitano2018} sub-sample.
On the other hand, the fraction of edge-on galaxies (2\%) is smaller than for the other sub-samples, and only 0.1\% of the \textit{robust} ETGs are classified as \textit{robust} edge-on. In figure~\ref{fig:vipers_classes}, we show the number of galaxies belonging to each of the classes derived by \citet{Siudek2018} for the ETG and LTG sub-samples according to our model predictions. The ETGs clearly dominate in classes below 4, and are negligible in classes above 6. On the other hand, the LTGs dominate in classes above 6, with a very small fraction in classes 1--3. The intermediate classes are composed of a mix of ETGs and LTGs, but are mainly populated by LTGs. This strong correlation nicely demonstrates that our model is able to correctly classify original DES images, even at the fainter magnitudes. To quantify these trends, we can consider as negatives (N) the galaxies belonging to classes 1--3 and as positives (P) the galaxies falling within classes 4--11. By doing so, we find a TN rate of $89\%$ and a TP rate of $97\%$. This translates into an accuracy classification score of $\mathrm{Acc} \approx 0.95$. We have visually inspected the FN images within the VIPERS dataset (i.e., ETGs with classes 4--11), finding that for most of them there are neither clear signs of features (such as spiral arms) nor edge-on morphologies, indicating that the ETG morphological classification might be correct regardless of their spectral classification. In the case of the FP, we noticed that for a large fraction of them there is (at least) one close companion within the field of view of the cutout that might lead to an inaccurate classification. \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/prob_classes.pdf} \caption{Number of galaxies belonging to each VIPERS class derived by \citet{Siudek2018}, ranging from 1 to 12. The red histogram corresponds to the number of ETGs with $\mathrm{\tilde{p}} < 0.4$.
The blue histogram shows the number of LTGs with $\mathrm{\tilde{p}} > 0.4$. The ETGs clearly dominate in classes below 4, and are negligible in classes above 6, while the LTGs dominate in classes above 6. This strong correlation demonstrates that our model is able to correctly classify original DES images, even at the fainter magnitudes. \label{fig:vipers_classes}} \end{figure} \section{Conclusions} \begin{itemize} \item We present a morphological classification according to two schemes, (a) ETG vs. LTG and (b) edge-on vs. face-on, for $\sim$ 27 million DES galaxies to $\mathrm{m}_r < 21.5~\mathrm{mag}$. The classifications are based on the predictions of supervised DL models using CNNs (section \ref{sec:network}). \item The training sample consists of bright ($\mathrm{m}_r < 17.7~\mathrm{mag}$) DES galaxies with a previously known morphological classification (from \citealt{Dominguez2018}), as well as their artificially redshifted counterparts described in section \ref{sec:sims}. \item Although some of the features that distinguish ETGs and LTGs almost disappear for the fainter galaxies (figure \ref{fig:high-z_examples}), the models are able to correctly classify galaxies according to the two schemes with excellent results (accuracy > 97\%), even in the fainter magnitude bins (figures \ref{fig:model-perf} and \ref{fig:model-perf_edgeon}). \item We train 5 different models using $k$-folding to obtain a measurement of the classification uncertainty. About 87\% of the galaxies in the final catalogue have secure labelling for the ETG vs. LTG classification (i.e., $\Delta \mathrm{p} < 0.30$). This fraction is 73\% for the edge-on classification. \item The classifications of real DES faint images are consistent with other available observables, such as absolute magnitude, S\'ersic index $n$, ellipticity $\epsilon$ or spectral classification (section \ref{sec:results}).
\item Our work demonstrates that machines are able to recover features hidden from the human eye and so can reliably classify faint galaxy images. The methodology adopted in this work to overcome the lack of faint labelled samples can be applied to future big data surveys such as Euclid or the Vera Rubin Observatory Legacy Survey of Space and Time. \item The exceptional amount of data provided by DES DR1 has allowed us to construct the largest automated morphological catalogue to date (along with the companion DES morphological catalogue presented in Cheng et al.), exceeding previous works (e.g., \citealt{Dominguez2018}) by several orders of magnitude. This classification will be a fundamental tool for our understanding of morphological transformations across cosmic time. \item The complete DES dataset DR2, including observations for 600 million galaxies, will be made public in early 2021. The DL models presented in this work can be directly applied to DR2, providing accurate morphological classification for a large fraction of the galaxies with very little effort. In addition, the existence of deep fields within the DES DR2 will allow us to extend this classification to even fainter magnitude limits and to carry out crucial scientific analysis for galaxy formation and evolution. \end{itemize} \section*{Acknowledgements} This work was supported in part by NSF grant AST-1816330. HDS acknowledges support from the Consejo Superior de Investigaciones Científicas PIE2018-50E099. We are grateful to R. Sheth for a careful reading of the manuscript. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S.
National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University, Financiadora de Estudos e Projetos, Funda{\c c}{\~a}o Carlos Chagas Filho de Amparo {\`a} Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cient{\'i}fico e Tecnol{\'o}gico and the Minist{\'e}rio da Ci{\^e}ncia, Tecnologia e Inova{\c c}{\~a}o, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey. The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energ{\'e}ticas, Medioambientales y Tecnol{\'o}gicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgen{\"o}ssische Technische Hochschule (ETH) Z{\"u}rich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ci{\`e}ncies de l'Espai (IEEC/CSIC), the Institut de F{\'i}sica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universit{\"a}t M{\"u}nchen and the associated Excellence Cluster Universe, the University of Michigan, NSF's NOIRLab, the University of Nottingham, The Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, Texas A\&M University, and the OzDES Membership Consortium.
Based in part on observations at Cerro Tololo Inter-American Observatory at NSF's NOIRLab (NOIRLab Prop. ID 2012B-0001; PI: J. Frieman), which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. The DES data management system is supported by the National Science Foundation under Grant Numbers AST-1138766 and AST-1536171. The DES participants from Spanish institutions are partially supported by MICINN under grants ESP2017-89838, PGC2018-094773, PGC2018-102021, SEV-2016-0588, SEV-2016-0597, and MDM-2015-0509, some of which include ERDF funds from the European Union. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. Research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program (FP7/2007-2013) including ERC grant agreements 240672, 291329, and 306478. We acknowledge support from the Brazilian Instituto Nacional de Ci\^encia e Tecnologia (INCT) do e-Universo (CNPq grant 465376/2014-2). This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. \section*{Data availability} The DES Y1 morphological catalog is available in the Dark Energy Survey Data Management (DESDM) system at the National Center for Supercomputing Applications (NCSA) at the University of Illinois and can be accessed at \url{https://des.ncsa.illinois.edu/releases/other/morphCNN}. The pipeline used to construct the DES Y1 morphological catalog will be shared on request to the corresponding author. \bibliographystyle{mnras}
\section{Introduction} An important open problem of the AdS/CFT correspondence \cite{Maldacena} is to understand the finite-size spectrum of the ${\rm AdS}_5\times {\rm S}^5$ superstring. Recently, there has been further significant progress in this direction. First, the four-loop anomalous dimension of the Konishi operator was computed \cite{BJ08} by means of generalized L\"uscher's formulae \cite{Luscher85,JL07,BJ08} (see also \cite{HS08a}-\cite{BJ09} for other applications of L\"uscher's approach), and the result exhibits a stunning agreement with a direct field-theoretic computation \cite{Sieg,Vel}. Second, the groundwork for constructing the Thermodynamic Bethe Ansatz (TBA) \cite{Zamolodchikov90}, which encodes the finite-size spectrum for all values of the 't Hooft coupling, has been laid down, based on the mirror theory approach\footnote{The TBA approach in the AdS/CFT spectral problem was advocated in \cite{AJK} where it was used to explain wrapping effects in gauge theory.} \cite{AF07}. Most importantly, the string hypothesis for the mirror model was formulated \cite{AF09a} and used to derive TBA equations for the ground state \cite{AF09b}-\cite{GKKV09}. Also, a corresponding Y-system \cite{ZamY} was conjectured \cite{GKV09}, and its general solution was obtained \cite{Heg}. The AdS/CFT Y-system has unusual properties, and, in particular, is defined on an infinite-genus Riemann surface \cite{AF09b,FS}. \smallskip The derivation of the TBA equations is not yet complete, because the equations pertain only to the ground state energy (or Witten's index in the case of periodic fermions) and do not capture the energies of excited states. Therefore, one has to find a generalization of the TBA equations that can account for the complete spectrum of the string sigma model, including all excited states. \smallskip Here we continue to explore the mirror TBA approach. 
In particular, we will be interested in finding the TBA integral equations which describe the spectrum of string states in the $\sl(2)$ sector. An attempt in this direction has already been undertaken in \cite{GKKV09}, and the emerging integral equations have been used for a numerical computation of the anomalous dimension of the Konishi operator \cite{GKV09b}. However, the subleading term in the strong coupling expansion of this result disagrees with the result by \cite{RT09k} obtained by string theory means. There exists yet another prediction \cite{AF05} for this subleading term, which differs from both \cite{GKV09b} and \cite{RT09k}. All these results are based on certain assumptions which require further justification. This makes it urgent to carefully analyze the issue of the TBA equations for excited states, and to better understand what happens on the string theory side. \smallskip In this paper we analyze two-particle states in the $\sl(2)$ sector. First, we show that each state is governed by its own set of TBA equations. Second, we provide evidence that for each state there are infinitely many critical values of the 't Hooft coupling constant $\lambda$, and that the excited-state integral equations have to be modified each time one of these critical values is crossed.\footnote{The existence of such critical values was observed in the excited-state TBA equations for perturbed minimal models \cite{DT97}. We thank Patrick Dorey for this comment.} Performing a careful analysis of two-particle states in a region between any two neighboring critical points, we propose the corresponding integral equations.
To explain our findings, we start by recalling that for some integrable models the inclusion of excited states in the framework of the TBA approach has been achieved by applying a certain analytic continuation procedure \cite{DT96, DT97}. This can be understood from the fact that the convolution terms entering the integral TBA equations exhibit a singular behavior in the complex rapidity plane; the structure of these singularities depends on the value of the coupling constant. This leads to a modification of the ground-state TBA equations, which, indeed, describe the profile and energies of excited states. Here we pursue a similar strategy for the string sigma model. \smallskip To derive the TBA equations for excited states, we propose to use a contour deformation trick. In other words, we assume that the TBA equations for excited states have the same form as those for the ground state with the only exception that the integration contour in the convolution terms is different. Returning the contour back to the real rapidity line of the mirror theory, one picks up singularities of the convolution terms, which leads to a modification of the final equations. The original contour should be drawn in such a way that the resulting TBA equations reproduce the large $L$ asymptotic solution (where $L$ is the size of the system). \smallskip Recall that the TBA equations for the string mirror model \cite{AF09b} are written in terms of the following Y-functions: $Y_Q$-functions associated with $Q$-particle bound states, auxiliary functions $Y_{Q|vw}$ for $Q|vw$-strings, $Y_{Q|w}$ for $Q|w$-strings, and $Y_{\pm}$ for $y_\pm$-particles. The Y-functions depend on the 't Hooft coupling $\lambda$ related to the string tension $g$ as $\lambda=4\pi^2 g^2$. As we will see, the analytic structure of these Y-functions depends on $g$ and plays a crucial role in obtaining the TBA equations for excited states.
\smallskip Most conveniently, the large $L$ asymptotic solution for the Y-functions is written in terms of certain transfer-matrices associated with an underlying symmetry group of the model \cite{Kuniba:1993cn,Tsuboi}. In the context of the string sigma model the corresponding asymptotic solution was presented in \cite{GKV09}. We will use this solution to check the validity of our TBA equations. \smallskip Our analysis starts by describing physical two-particle states in the $\sl(2)$ sector. It appears that for the $\sl(2)$ states the functions $Y_{Q|vw}(u)$ play the primary role in formulating the excited state TBA equations. Analyzing the asymptotic solution, we find that each $Y_{Q|vw}$ has four zeroes in the complex $u$-plane. As $g$ changes, the zeroes change their positions as well, and at certain critical values $g=g_{cr}$ they give rise to new singularities in the TBA equations whose resolution results in the appearance of new driving terms. The critical values $g_{cr}$ are defined as the values of $g$ at which $Y_{Q|vw}(u)$ acquires two zeros at $u=\pm i/g_{cr}$: $Y_{Q|vw}(\pm i/g_{cr})=0$. We show that at weak coupling $g\sim 0$ all two-particle states can be organized into an infinite tower of classes $k={\rm I}\,, {\rm II}\,, \ldots\,, \infty$, see Table 1. \begin{center} {\small { \renewcommand{\arraystretch}{1.5} \renewcommand{\tabcolsep}{0.2cm} \begin{tabular}{|c|l|l|} \hline Type of a state & Y-functions & Number of zeros\\ \hline I & $Y_{1|vw}$ & 2\\ \hline II & $Y_{1|vw}$, $Y_{2|vw}$ & 2+2 \\ \hline III & $Y_{1|vw}$, $Y_{2|vw}$, $Y_{3|vw}$ & 4+2+2 \\ \hline IV & $Y_{1|vw}$, $Y_{2|vw}$, $Y_{3|vw}$, $Y_{4|vw}$ & 4+4+2+2 \\ \hline \vdots & \vdots & \vdots \\ \hline $k\to\infty$ & $Y_{1|vw}$, $Y_{2|vw}$,\quad\ldots & 4+4+\quad\ldots \\ \hline \end{tabular} } } \vspace{0.5cm} \parbox{13cm} {\small Table 1. Classification of two-particle states in the $\sl(2)$-sector at $g\sim 0$.
The right column shows the number of zeros which the corresponding asymptotic $Y_{M|vw}$-functions from the middle column have in the physical strip $|{\rm Im}(u)|<1/g$. States of type I are called ``Konishi-like''. States of type II, III, IV, $\ldots$ correspond to larger values of $\kappa$, see section 3.} \end{center} \vspace{0.3cm} Each class is unambiguously determined by the number of zeroes of $Y_{M|vw}$-functions in the strip $|{\rm Im}(u)|<1/g$. In particular, for states of type I only the $Y_{1|vw}$-function has two zeroes in the physical strip. We call all these states ``Konishi-like'' because they share this property with the particular string state corresponding to the Konishi operator. \smallskip Our results disagree with those of \cite{GKKV09,GKV09b} in the following two aspects: First, the integral equations for excited states in the $\sl(2)$ sector do not have a universal form, even for two-particle states. Second, we face the issue of critical points. When a critical point is crossed, the compatibility of the asymptotic solution with the integral TBA equations requires a modification of the latter. The equations proposed in \cite{GKKV09} capture only type I states and only below the first critical point. \smallskip To find the approximate locations of the critical values, we first solve the asymptotic Bethe-Yang equations \cite{BS05} (which include the BES/BHL dressing phase \cite{BES,BHL06}) for some states numerically from weak to strong coupling and obtain the corresponding interpolating curve $u\equiv u_{1}(g)$, where $u_{1}$ is the rapidity of an excited particle in string theory. The second particle has rapidity $u_{2}=-u_{1}$ due to the level matching condition. Further, we compute the Y-functions on the large $L$ asymptotic solution corresponding to this two-particle excited state and study their analytic properties considered as functions of $g$, and, in particular, determine approximately the critical values.
\smallskip We also note that with the coupling increasing more and more critical points get crossed which leads to accumulation of zeroes of $Y_{M|vw}$'s in the physical strip. Apparently, as the asymptotic solution indicates, when $g$ tends to infinity the zeroes move towards the points $\pm 2$, so that the latter points behave as an attractor for zeroes of all $Y_{M|vw}$-functions. \smallskip Estimation based on the large $L$ asymptotic solution gives $\lambda\approx774$ for the first critical value corresponding to the Konishi operator. In the weak-coupling region below the first critical point, the integral equations for Konishi operator we obtain seem to agree with that of \cite{GKKV09}. However, these weak-coupling equations become inconsistent with the known large $L$ asymptotic solution once the first critical point is crossed, and have to be modified. Consequently, the derivation of the anomalous dimension for the Konishi operator at strong coupling requires re-examination. Of course, the existence of critical points is not expected to violate analyticity of the energy $E(\lambda)$ of a string state considered as the function of $\lambda$, but it poses a question about the precise analytic behavior of $E(\lambda)$ in the vicinity of critical points. \smallskip We discuss both the canonical and simplified TBA equations. The canonical equations \cite{AF09b,Bombardelli:2009ns,GKKV09} follow from the string hypothesis for the mirror model \cite{AF09a} by using the standard procedure, see e.g. \cite{Korepin}. The simplified equations \cite{AF09b,AF09d} obtained from the canonical ones have more close relation to the Y-system. It turns out that the simplified equations are sensitive only to the critical points defined above. In contrast, the canonical equations have to be modified when crossing not only a critical point but also what we call a subcritical point $\bar{g}_{cr}$. 
A subcritical point $\bar{g}_{cr}$ is defined as the value of $g$ at which the function $Y_{Q|vw}(u)$ acquires a zero at $u=0$. Hence, in comparison to the canonical equations, the simplified equations exhibit a more transparent analytic structure. In addition to locality, this is yet another reason why we attribute primary importance to the simplified equations and carry out their analysis in the main text. To study the exact Bethe equations which determine the exact, {\it i.e.} non-asymptotic, location of the Bethe roots, we find it advantageous to use a so-called {\it hybrid form} of the TBA equations for $Y_Q$-functions. This form is obtained by exploiting both the canonical and simplified TBA equations. \smallskip Recently, the finite-gap solutions of semi-classical string theory have been nicely derived \cite{Gromov} from the TBA equations \cite{GKKV09}. This raises the question of why the modifications of the TBA equations we find in this paper were not relevant for this derivation. We have not studied this question thoroughly. However, one can immediately see that there is a principal difference between states with a finite number of particles and semiclassical states composed of infinitely many particles. Namely, at strong coupling the rapidities of two-particle states fall inside the interval $[-2,2]$, while those of semi-classical states are always outside this interval. Thus, the modification of the TBA equations discussed in this paper might not be necessary for semi-classical states. It would be important to better understand this issue. \smallskip The paper is organized as follows. In the next section we explain our criteria for a choice of the integration contour in the excited states TBA equations. In section 3 we discuss two-particle states in the $\sl(2)$ sector and the corresponding asymptotic Y-functions.
By analysing analytic properties of the Y-functions, we determine the critical and subcritical values of the coupling constant both for the Konishi and for some other states. In section 4 we present the simplified TBA equations for Konishi-like states and we explain why and how their form depends on the value of the coupling constant. In section 5 we generalize this discussion to arbitrary two-particle states from the $\sl(2)$ sector. In section 6 we summarize the most essential properties of the AdS/CFT Y-system implied by the TBA equations under study. Finally, in Conclusions we mention some interesting open problems. The definitions, treatment of canonical equations, and further technical details are relegated to eight appendices. \section{Contour deformation trick} The TBA equations for the ${\rm AdS}_5\times {\rm S}^5$ mirror model are written for Y-functions which depend on the real momentum of the mirror model. The energy of string excited states obviously depends on real momenta of string theory particles, and to formulate the TBA equations for excited states one also needs to continue analytically the Y-functions to the string theory kinematic region. To visualize the analytic continuation it is convenient to use the $z_Q$-tori because the kinematic regions of the mirror and string theory Q-particle bound states (Q-particles for short) are subregions of the $z$-torus, see Figure 1. In addition, the Q-particle energy, and many of the kernels appearing in the set of TBA equations are meromorphic functions on the corresponding torus. The mirror Q-particle region can be mapped onto a $u$-plane with the cuts running from the points $\pm2\pm{i\over g}Q$ to $\pm\infty$, and the string Q-particle region can be mapped onto a $u_*$-plane with the cuts connecting the points $-2\pm{i\over g}Q$ and $2\pm{i\over g}Q$, see Figure 1. Since the cut structure on the planes is different for each Y-function, they cannot be considered as different sheets of a Riemann surface. 
The $z_Q$-torus can be glued either from four mirror $u$-planes or four string $u_*$-planes. \begin{figure}[t] \begin{center} \includegraphics*[width=0.8\textwidth]{RegionsUM} \end{center} \caption{These are the mirror and string regions on the $z$-torus. They are in one-to-one correspondence with the $u$-planes. The boundaries of the regions are mapped to the cuts.} \end{figure} As was shown in \cite{AF07}, $z$-torus variables corresponding to real momenta of the mirror and string theory are related to each other by a shift by a quarter of the imaginary period of the torus \begin{eqnarray} z = z_* + {\omega_2\over 2} \,, \end{eqnarray} where $z$ is the variable parametrizing the real momenta of the mirror theory, and $z_*$ is the variable parametrizing the real momenta of the string theory. The line $(-\infty,\infty)$ in the string $u_*$-plane is mapped to the interval Re$(z_*)\in (-{\omega_1\over 2},{\omega_1\over 2})$, Im$(z_*)=$const on the $z$-torus, and we choose $z_*$ in the string region to be real. Then, the interval Re$(z)\in (-{\omega_1\over 2},{\omega_1\over 2})$, Im$(z)={\omega_2\over 2i}$ of the mirror region is mapped onto the real line of the mirror $u$-plane. It is argued in \cite{DT96, DT97} that the TBA equations for excited states can be obtained from the ones for the ground state by analytically continuing in the coupling constants and picking up the singularities of the appropriate convolution terms. We prefer, however, to employ a slightly different procedure which we refer to as the contour deformation trick. We believe it is equivalent to that of \cite{DT96, DT97}. It is based on the following assumptions: \begin{itemize} \item The form of TBA equations for any excited state and the expression for the energy are universal. TBA equations for excited states differ from each other only by a choice of integration contours of convolution terms and the length parameter $L$, which depend on the state.
\item The choice of the integration contours and $L$ is fixed by requiring that the large $L$ solution of the excited state TBA equations be given by the generalized L\"uscher formulae, that is, all the Y-functions can be written in terms of the eigenvalues of the transfer matrices. The integration contour depends on the excited state under consideration, and in general on the values of 't Hooft's coupling and $L$. \item An excited state is completely characterized by the five charges it carries and a set of $N$ real numbers $p_k$ which are in one-to-one correspondence with momenta $p_k^o$ of $N$ Q-particles in the small coupling limit $g\to 0$. The momenta $p_k^o$ are found by using the one-loop Bethe equations for fundamental particles and their bound states. For finite values of $g$ the set of $p_k$ is determined by the exact Bethe equations which state that at any $p=p_k$ the corresponding $Y_Q$-functions are equal to $-1$. \end{itemize} We consider only the simplest case of two-particle excited states in the $\sl(2)$ sector of the string theory because there are no bound states in this sector and the complete two-particle spectrum can be readily classified. The physical states satisfy the level-matching condition which for two-particle states takes a very simple form: $p_1 = -p_2$, or $u_1 = -u_2$ or $z_{*1}=-z_{*2}$\,, depending on the coordinates employed. The TBA equations we propose in the next sections are valid only for physical states. \section{States and Y-functions in the $\sl(2)$ sector} To fix the integration contour one should choose a state and analyze the analytic structure of the large $L$ Y-functions which we refer to as the asymptotic Y-functions. We begin with a short discussion of two-particle states in the $\sl(2)$ sector.
\subsection{Bethe-Yang equations for the $\sl(2)$ sector} There is only a single Bethe-Yang (BY) equation in the $\sl(2)$-sector for two-particle physical configurations satisfying the vanishing total momentum condition $p_1 +p_2=0$ that can be written in the form \cite{St04} \begin{eqnarray}\label{BYeJ1} 1=e^{ip J}S_{\sl(2)}(p,-p)\ \Longrightarrow\ e^{ip (J+1)}=\frac{1+ {1\over x_s^+{}^2}}{1+{1\over x_s^-{}^2}}\sigma(p,-p)^2 \, , \end{eqnarray} where $p\equiv p_1 >0$, $J$ is the charge carried by the state, $\sigma$ is the dressing factor, and $x_s^\pm$ are defined in appendix \ref{app:rapidity}. Taking the logarithm of the equation, one gets \begin{eqnarray}\label{BYe} i p(J+1) - \log \frac{1+{1\over x_s^+{}^2}}{1+{1\over x_s^-{}^2}} - 2i \,\theta(p,-p) =2\pi i\, n\,, \end{eqnarray} where $\theta = {1\over i}\log\sigma$ is the dressing phase, and $n$ is a positive integer because we have assumed $p$ to be positive. As was shown in \cite{AFS}, at large values of $g$ the integer $n$ is equal to the string level of the state. As is well known, in the small $g$ limit the equation has the obvious solution \begin{eqnarray}\label{BYeJsol0} p^o_{J,n} ={2\pi n\over J+1}\,,\quad n=1\,,\ldots\,, \left[{J+1\over 2}\right]\,, \end{eqnarray} where $[x]$ denotes the integer part of $x$, and the range of $n$ is bounded because the momentum $p$ can only take values from $0$ to $\pi$. The corresponding rapidity variable $u_{J,n}$ in the small $g$ limit takes the following form \begin{eqnarray}\label{uJmo} u_{J,n} \to {1\over g} u^o_{J,n}\,,\quad u^o_{J,n}=\cot{\pi n\over J+1}\,. \end{eqnarray} Thus, any two-particle state in the $\sl(2)$ sector is completely characterized by the two integers $J$ and $n$. In particular, in the simplest $J=2$ case corresponding to a descendent of the Konishi state $n$ can take only one value $n=1$, and the small $g$ solution is \begin{eqnarray}\label{BYeJsol02} p^o_{2,1} ={2\pi\over 3}\,,\quad u^o_{2,1}={1 \over \sqrt3}\,. 
\end{eqnarray} The BY equation \eqref{BYe} can be easily solved perturbatively up to any desired order in $g$, and numerically up to very large values of $g$. We have used the BES series representation \cite{BES} for the dressing phase for perturbative computations, and the DHM integral representation \cite{DHM} for the numerical ones\footnote{The DHM representation can also be readily used for perturbative computations.}. For the Konishi state, the perturbative solution up to order $g^{16}$ can be found in appendix \ref{appBY}. We have solved the BY equation numerically for the Konishi state for ${1\over 10}\le g\le 1000$ with the step ${1\over 10}$ for ${1\over 10}\le g\le 10$, the step $1$ for $10\le g\le 100$, and the step $10$ for $100\le g\le 1000$. In Figure 2 we show the results up to $g=100$. For greater values of $g$ nothing interesting happens, and the solution can be approximated by asymptotic formulae from \cite{AFS,AF05,RS09}, see appendix \ref{appBY} for more details. \begin{figure}[t] \begin{center} \includegraphics[width=.48\textwidth]{U_Konishi2}\quad \includegraphics[width=.48\textwidth]{U_Konishi1} \end{center} \caption{These are the plots of the solution $u$ of the BY equation for the Konishi state.} \end{figure} We then applied the Interpolation function in Mathematica to obtain $u_{2,1}$ as a smooth function of $g$. Using the function, one can find that $u_{2,1}(g)$ decreases up to $g\sim 0.971623$, and then begins to increase and at large $g$ it asymptotes to $u=2$. The functions $u_{J,n}(g)$ for other values of $J$ and $n$ have a similar $g$ dependence. The only exception is the case $J=2n-1$ where the exact solution of the BY equation is $p_{2n-1,n}=\pi$ and, therefore, $u_{2n-1,n}=0$. The TBA equations we propose below are not in fact valid for these $(2n-1,n)$ states. The perturbative and numerical solutions for $u_{J,n}(g)$ can be easily used to analyze the behavior of Y-functions considered as functions of $g$.
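The leading small-$g$ solution \eqref{BYeJsol0}--\eqref{uJmo} is simple enough to tabulate directly. The following sketch (a toy cross-check of ours with hypothetical helper names, not the code used for the paper's computations; the finite-$g$ problem requires the BES/DHM dressing phase and is not attempted here) evaluates $p^o_{J,n}$ and $u^o_{J,n}$:

```python
# Minimal sketch (helper names are ours): the g -> 0 solution of the BY
# equation, p^o_{J,n} = 2*pi*n/(J+1), with rescaled rapidity
# u^o_{J,n} = cot(pi*n/(J+1)).
from math import pi, tan, sqrt, isclose

def p0(J, n):
    """Leading-order momentum of the two-particle state (J, n) at g -> 0."""
    assert 1 <= n <= (J + 1) // 2, "momentum must lie in (0, pi]"
    return 2 * pi * n / (J + 1)

def u0(J, n):
    """Leading-order rescaled rapidity kappa = cot(pi*n/(J+1))."""
    return 1.0 / tan(pi * n / (J + 1))

# Konishi descendent (J, n) = (2, 1): p = 2*pi/3, u = 1/sqrt(3)
assert isclose(p0(2, 1), 2 * pi / 3) and isclose(u0(2, 1), 1 / sqrt(3))

# The (2n-1, n) states solve the BY equation exactly: p = pi, u = 0
assert isclose(p0(5, 3), pi) and abs(u0(5, 3)) < 1e-12
```

The assertions reproduce the values quoted in \eqref{BYeJsol02} and the exceptional $(2n-1,n)$ case mentioned above.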
In particular it is easy to determine if some of them become negative for large enough values of $g$. \subsection{Y-functions in the $\sl(2)$ sector} Let us recall that the TBA equations for the ${\rm AdS}_5\times {\rm S}^5$ mirror model involve $Y_Q$-functions for momentum carrying $Q$-particle bound states, and auxiliary functions $Y_{Q|vw}^{(\alpha)}$ for $Q|vw$-strings, $Y_{Q|w}^{(\alpha)}$ for $Q|w$-strings, and $Y_{\pm}^{(\alpha)}$ for $y_\pm$-particles. The index $\alpha=1,2$ reflects two $\alg{su}(2|2)$ algebras in the symmetry algebra of the light-cone string sigma model. The TBA equations \cite{AF09b} depend also on the parameters $h_\alpha$ which take care of the periodicity condition of the fermions of the model \cite{AFrev}. For the $\sl(2)$ sector the fermions are periodic, and from the very beginning one can set the parameters $h_\alpha$ to 0 because there is no singularity at $h_\alpha=0$ in the excited states TBA equations. For the $\sl(2)$ states there is a symmetry between the left and right $\alg{su}(2|2)$ auxiliary roots, and, therefore, all Y-functions satisfy the condition \begin{eqnarray} Y^{(1)}_{\forall} = Y^{(2)}_{\forall} = Y_{\forall} \,, \end{eqnarray} where $\forall$ denotes a Y-function of any kind. The string theory spectrum in the $\sl(2)$ sector is characterized by a set of $N$ real numbers $z_{*k}$ or $u_k$ corresponding to momenta of $N$ fundamental particles in the limit $g\to 0$. According to the discussion above, these numbers are determined from the exact Bethe equations \begin{eqnarray}\label{Y1m1} Y_1(z_{*k}) = Y_{1_*}(u_{k}) = -1\,, \quad k=1,\ldots, N\,, \end{eqnarray} where $Y_1(z)$ is the Y-function of fundamental mirror particles considered as a function on the $z$-torus, and $Y_{1_*}(u)$ denotes the $Y_1$-function analytically continued to the string $u$-plane. 
In the large $J$ limit the exact Bethe equations must reduce to the BY equations, and it is indeed so because the asymptotic $\sl(2)$ $Y_Q$-functions can be written in terms of the transfer matrices defined in appendix \ref{appT} as follows \cite{BJ08} \begin{eqnarray}\label{YQasympt} Y_Q^o(v)=e^{-J\widetilde{\cal E}_Q(v) }{T_{Q,1}(v|\vec{u})^2\over \prod_{i=1}^{N} S_{\sl(2)}^{1_*Q}(u_i,v)}=e^{-J\widetilde{\cal E}_Q(v)}T_{Q,1}(v|\vec{u})^2\, \prod_{i=1}^{N} S_{\sl(2)}^{Q1_*}(v,u_i)\,, ~~~\end{eqnarray} where $v$ is the rapidity variable of the mirror $u$-plane and $\widetilde{\cal E}_Q$ is the energy of a mirror $Q$-particle. $S_{\sl(2)}^{1_*Q}$ denotes the S-matrix with the first and second arguments in the string and mirror regions, respectively. $T_{Q,1}(v|\vec{u})$ is up to a factor the trace of the S-matrix describing the scattering on these string theory particles with a mirror Q-particle or in other words the eigenvalue of the corresponding transfer matrix. The BY equations then follow from the fact that $\widetilde{\cal E}_{1_*}(u_k) = -ip_k$, and the following normalization of $T_{1,1}$ \begin{eqnarray} T_{1,1}(u_{_*k}|\vec{u}) = 1 \ \Longrightarrow\ -1=e^{iJp_k}\prod_{i=1}^{N} S_{\sl(2)}^{1_*1_*}(u_k,u_i)\,, \end{eqnarray} where $u_{_*k}=u_{k}$, and the star just indicates that one analytically continues $T_{1,1}$ to the string region. Then, $S_{\sl(2)}^{1_*1_*}(u_k,u_i) = S_{\sl(2)}(u_k,u_i)$ is the usual $\sl(2)$ sector S-matrix used in the previous subsection. Let us also mention that $T_{Q,1}$ has the following large $v$ asymptotics \begin{eqnarray} T_{Q,1}(v\,|\,\vec{u}) \to Q\left( 1 - \prod_{i=1}^{N} \sqrt{x_i^+\over x_i^-} \ \right)^2\,,\quad v\to\infty\,, \end{eqnarray} and therefore it goes to 0 if the level-matching is satisfied. 
Then, in the large $J$ limit all auxiliary asymptotic $\sl(2)$ Y-functions can be written in terms of the transfer matrices as follows \cite{GKV09} \begin{eqnarray}\nonumber Y_-^{o}&=&-\frac{T_{2,1}}{T_{1,2}}\,,\quad Y_+^{o}=-\frac{T_{2,3}T_{2,1}}{T_{3,2}T_{1,2}}\,,\quad Y_{Q|vw}^{o}=\frac{T_{Q+2,1}T_{Q,1}}{T_{Q+1,2}}\,,\quad Y_{Q|w}^{o}=\frac{T_{1,Q+2}T_{1,Q}}{T_{2,Q+1}T_{0,Q+1}}\,. \end{eqnarray} The transfer matrices $T_{a,s}$ can be computed in terms of $T_{a,1}$ by using the Bazhanov-Reshetikhin formula \cite{BR}, see appendix \ref{appT} for all the necessary explicit formulae. An important property of the Y-functions for $vw$- and $w$-strings is that they approach their vacuum values as $v\to\infty$ \begin{eqnarray}\nonumber Y_{M|vw}(v)\to M(M+2)\,,\quad Y_{M|w}(v)\to M(M+2)\,,\quad v\to\infty\,,\ \ -{M\over g}<{\rm Im}(v)<{M\over g}\,. \end{eqnarray} Now we are ready to analyze the dependence of asymptotic $Y$-functions on $g$. Recall that they depend on the rapidities $u_k$ which are solutions of the BY equations. \smallskip We begin with the small $g$ limit where the effective length goes to infinity, and one can in fact trust all the asymptotic formulae. It is convenient to rescale $v$ and $u_k$ variables as $v\to v/g$, $u_k\to u_k/g$ because the rescaled variables are finite in this limit. Let $\kappa\equiv u_1=-u_2$ be the rescaled rapidity of a fundamental particle. According to the previous subsection, in the small $g$ limit they are given by $u_{J,n}^o$, eq.\eqref{uJmo}. \smallskip The most important functions in the $\sl(2)$ case are $Y_{M|vw}$, and we find that for $N=2$ they exhibit the following small $g$ behavior in the strip $-M<{\rm Im}(v)<M$ {\small \begin{eqnarray} Y_{M|vw}(v)=M(M+2)\frac{\big[M^2-1+v^2-\kappa^2\big]\big[(M+2)^2-1+v^2-\kappa^2\big]} {\big[(M+1)^2+(v-\kappa)^2\big]\big[(M+1)^2+(v+\kappa)^2\big]}+{\cal O}(g^2)\, . 
\end{eqnarray} } The leading term has the correct large $v$-asymptotics and four apparent zeros at $$ v=\pm\sqrt{\kappa^2-M^2+1}\, , ~~~~ v=\pm\sqrt{\kappa^2-(M+2)^2+1}\, . $$ One can see that the $Y_{1|vw}$-function always has at least two real zeros at $v=\pm\kappa$. Other zeros of $Y_{M|vw}$-functions can be either real or purely imaginary depending on the values of $M$ and $\kappa$. It appears that the form of the simplified TBA equations depends on the imaginary part of these zeros, and we will see in the next sections that if a pair of zeros $v=\pm r$ falls in the strip $|{\rm Im}(r)|<1$ then the equations should be modified. \smallskip Thus, we are led to consider the following three possibilities \begin{enumerate} \item If $M^2-2< \kappa^2<(M+2)^2-2$ then $Y_{M|vw}$ has two zeros at $v=\pm\sqrt{\kappa^2-M^2+1}$ that are in the strip $|{\rm Im}(v)|<1$. In terms of the integers $J$ and $n$ characterizing two-particle states one gets the condition \begin{eqnarray}\label{cond1} \sqrt{M^2-2}<\cot{\pi n\over J+1}<\sqrt{(M+2)^2-2}\,. \end{eqnarray} \item If $ \kappa^2<M^2-2 \ \Longleftrightarrow\ \cot{\pi n\over J+1}<\sqrt{M^2-2} $ then $Y_{M|vw}$ does not have any zeros in the strip $|{\rm Im}(v)|<1$. \item If $ \kappa^2>(M+2)^2-2 \ \Longleftrightarrow\ \cot{\pi n\over J+1}>\sqrt{(M+2)^2-2} $ then $Y_{M|vw}$ has four zeros in the strip $|{\rm Im}(v)|<1$. \end{enumerate} Some of these zeros can be real, and in fact the canonical TBA equations take different forms depending on whether the roots are real or imaginary. \smallskip The classification of two-particle states at $g\sim 0$ is presented in Table 1. The type of a state is determined by how many zeroes of the $Y_{M|vw}$-functions occur in the physical strip, and it depends on $J$ and $n$. \smallskip Consider a two-particle state with $\kappa = u_{J,n}^o$ for some $(J,n)$. Table 1 shows that there exists a number $m \ge 1$ equal to the maximal value of $M$ for which the condition \eqref{cond1} is satisfied.
Then both $Y_{m|vw}$ and $Y_{m-1|vw}$ have two zeros, all $Y_{k|vw}$ with $k\le m-2$ have four zeros, and all $Y_{k|vw}$-functions with $k\ge m+1$ have no zeros in the strip $|{\rm Im}(v)|<1$. For example, among the states with $(J,n=1)$ at small coupling, the states of type I are found if and only if $J\le 4$. Type II is found for $5\le J\le 7$, type III for $8\le J\le 11$, and type IV for $12 \le J\le 14$. In particular, $Y_{1|vw}$ for the state $(8,1)$ has two real zeros and two imaginary zeros in the strip $|{\rm Im}(v)|<1$, and $Y_{1|vw}$ for the state $(J\ge9,1)$ has four real zeros. \smallskip As for the Konishi state with $J=2$ and $n=1$, only the $Y_{1|vw}$-function has two zeros and all the other $Y_{M|vw}$-functions have no zeros at small coupling. Let us also mention that at $g=0$ the $Y_{2|vw}$-function of the state $(5,1)$ (and in general of any state $(6k-1,k)$) has a double zero at $v=0$. This double zero, however, is an artifact of the perturbative expansion, and in reality $Y_{2|vw}$ has two imaginary zeros for small values of $g$ equal to $\approx \pm ig\sqrt3$. For the state with $J=6$ and $n=1$ both $Y_{1|vw}$ and $Y_{2|vw}$ have two real zeros. \subsection{Critical values of $g$} \subsubsection*{Evolution of zeros} Now we would like to understand what happens to the $Y_{M|vw}$-functions when one starts increasing $g$. To this end one should use numerical solutions of the BY equations discussed at the beginning of this section. We also switch back to the original $u$ variables because they are more convenient for general values of $g$, and refer to the strip $|$Im$(u)|<1/g$ as the physical one. We find that for finite $g$ any $Y_{k|vw}$-function has four zeros which are either real or purely imaginary. We could not find any other complex zeros. The four zeros of $Y_{k|vw}$ are split into two pairs; the two zeros in a pair have opposite signs, and are either real or complex conjugate to each other.
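The small-coupling classification described above is easy to reproduce numerically. The following sketch (a minimal illustration, assuming only $\kappa=\cot{\pi n\over J+1}$ as in \eqref{cond1}) computes $m$, the maximal $M$ for which \eqref{cond1} holds; note that for the maximal such $M$ the upper bound of \eqref{cond1} is satisfied automatically.

```python
import math

def m_of_state(J, n):
    """Maximal M satisfying (cond1): M^2 - 2 < kappa^2 < (M+2)^2 - 2,
    with kappa = cot(pi n / (J+1)).  For the maximal M obeying the
    lower bound, the upper bound holds automatically."""
    kappa2 = 1.0 / math.tan(math.pi * n / (J + 1)) ** 2
    M = 1
    while (M + 1) ** 2 - 2 < kappa2:
        M += 1
    return M

# Konishi (J=2, n=1) has m = 1; the type changes at J = 5, 8, 12 for n = 1:
print([m_of_state(J, 1) for J in range(2, 15)])
# -> [1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4]
```

This reproduces the type assignments quoted above: type I for $J\le 4$, type II for $5\le J\le 7$, type III for $8\le J\le 11$, and type IV for $12\le J\le 14$.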
We denote the four zeroes of $Y_{k|vw}$ by $r_j^{(k)}$ and $\hat{r}_j^{(k)}$, where $j=1,2$ and $k=1\,,2\,, \ldots\,$. If the four zeros are real then $r_j^{(k)}$ are the zeros of $Y_{k|vw}$ which have a larger absolute value than $\hat{r}_j^{(k)}$. If only two zeros are real then we denote them as $r_j^{(k)}$ and the imaginary zeros as $\hat{r}_j^{(k)}$. Finally, if the four zeros are imaginary then $r_j^{(k)}$ are the ones closer to the real line than the second pair $\hat{r}_j^{(k)}$. The locations of the zeros depend on $g$, and we should distinguish two cases. We observe first that if two zeros are real at $g\sim 0$ then they are of order $1/g$, and obviously outside the interval $[-2,2]$. With $g$ increasing they start moving toward the origin, and at some value of $g$ they reach their closest position to the origin which is inside the interval $[-2,2]$. Then, for larger $g$ they remain inside the interval but begin to move to its boundaries and reach them at $g=\infty$. In the second case, one considers a pair of imaginary zeros at $g\sim 0$. With $g$ increasing they start moving toward the real line, and at some value of $g$ they get to the origin and become a double zero. Then, for larger $g$ they split and begin to move to the boundaries of the interval $[-2,2]$, and reach them at $g=\infty$. The only exception to this behavior we have found is the $g$-dependence of the two zeros of the $Y_{2|vw}$-function for the states $(6k-1,k)$ that are equal to $\pm i\sqrt3$ at small $g$. These zeros become real at a very small value of $g$. Then they start moving to $\pm2$, cross the boundaries of the interval $[-2,2]$, and reach their maximal distance from the origin. After that they behave as the zeros of all the other $Y_{k|vw}$-functions. Thus, at very large values of $g$ all zeros of any $Y_{k|vw}$-function are real, inside the interval $[-2,2]$, and very close to $\pm2$.
The pairs of the zeros of different $Y_{M|vw}$-functions are not independent, and satisfy the following relations \begin{eqnarray} \hat{r}_j^{(k-1)} = r_j^{(k+1)}\,,\quad k =2\,,\ldots\,, \infty\,. \end{eqnarray} Therefore, the zeros of $Y_{M|vw}$-functions can be written as follows \begin{eqnarray}\label{zer} \{ r_j^{(1)}\,,r_j^{(3)} \} \,; \{ r_j^{(2)}\,,r_j^{(4)} \} \,;\ldots\,; \{ r_j^{(k-1)}\,,r_j^{(k+1)} \}\,; \{ r_j^{(k)} \,, r_j^{(k+2)} \} \,; \{ r_j^{(k+1)}\,,r_j^{(k+3)} \} \,;\ldots \,,~~~~ \end{eqnarray} so that $Y_{k|vw}$ has the zeros $\{ r_j^{(k)} \,, r_j^{(k+2)} \}$. These zeros have a natural ordering. If we assume for definiteness that the zeros with $j=1$ have negative real or imaginary parts, then they are ordered as \begin{eqnarray} r_1^{(1)} \prec r_1^{(2)} \prec r_1^{(3)} \prec \cdots \prec r_1^{(k)} \prec r_1^{(k+1)} \prec \cdots\,, \end{eqnarray} where $r_1^{(k)} \prec r_1^{(k+1)}$ if either Re$(r_1^{(k)}) < \ $Re$(r_1^{(k+1)})$ or Im$(r_1^{(k)}) > \ $Im$(r_1^{(k+1)})$. It is important that the zeros never change the ordering they have at $g\sim 0$. In particular, $Y_{1|vw}$ always has two real zeros $r_j^{(1)}= u_j$ which are Bethe roots. They are the largest (in magnitude) zeros among all $Y_{k|vw}$-functions, and are the closest ones to $\pm 2$ at large $g$. \begin{center} {\small {\renewcommand{\arraystretch}{1.5} \renewcommand{\tabcolsep}{0.2cm} \begin{tabular}{|c|l|l|} \hline Initial condition $\rightarrow$ & $Y_{1|vw}$, $Y_{2|vw}$ & 2+2 \\ & $Y_{1|vw}$, $Y_{2|vw}$, $Y_{3|vw}$ & 4+2+2 \\ $g$ ~$\downarrow$ & $Y_{1|vw}$,$Y_{2|vw}$, $Y_{3|vw}$, $Y_{4|vw}$ & 4+4+2+2 \\ & $Y_{1|vw}$, $Y_{2|vw}$, $Y_{3|vw}$, $Y_{4|vw}$, $Y_{5|vw}$ & 4+4+4+2+2 \\ \vdots & \vdots & \vdots \\ & $Y_{1|vw}$, $Y_{2|vw}$,\quad\ldots & 4+4+\quad\ldots \\ \hline \end{tabular}} } \vspace{0.5cm} \parbox{13cm}{\small Table 2. Evolution of two-particle states in the $\sl(2)$-sector with respect to $g$.
At $g\sim 0$ a state has a certain number of $Y_{M|vw}$-functions with zeroes in the physical strip. Increasing the coupling, the critical points get crossed, which leads to an accumulation of zeroes of the $Y_{M|vw}$'s in the physical strip. This phenomenon can be called ``Y-function democracy''. } \end{center} In addition we find that the functions below either have zeros or are equal to $-1$ at locations related to $r_j^{(k)}$ \begin{eqnarray}\label{zerorel} Y_{k|vw}\big(r_j^{(k+1)}\pm{i\over g}\big) = -1\,,\quad Y_{k+1}\big(r_j^{(k+1)}\big) = 0\,,\quad k=1\,,\ldots\,, \infty\,,\quad Y_{\pm}\big(r_j^{(2)}\big) = 0\,.~~~~~~ \end{eqnarray} As will be discussed in the next section the equations $Y_{k|vw}\big(r_j^{(k+1)}\pm{i\over g}\big) = -1$ lead to integral equations which play the same role as the exact Bethe equations $Y_1(u_j)=-1$ and allow one to find the exact location of the roots $r_j^{(k+1)}$. Let us finally mention that nothing special happens to the $Y_{M|w}$-functions. \subsubsection*{Critical and subcritical values} Let $m$ again be the maximal value of $M$ for which the condition \eqref{cond1} is satisfied. According to the discussion above, for any two-particle state there is a critical value of $g$ such that the function $Y_{m+1|vw}$, which had no zeros in the physical strip for small values of $g$, acquires two zeros at $u=\pm i/g$. At the same time $Y_{m-1|vw}$ also acquires zeros at $u=\pm i/g$. At a slightly larger value of $g$ the two zeros that were at $u=\pm i/g$ collide at the origin, and $Y_{m+1|vw}$ and $Y_{m-1|vw}$ acquire double zeros at $u=0$. Then, the double zeros split, and both $Y_{m|vw}$ and $Y_{m+1|vw}$ have two real zeros, and $Y_{m-1|vw}$ has four. Increasing $g$ more, one reaches the second critical value of $g$ such that the functions $Y_{m+2|vw}$ and $Y_{m|vw}$ acquire zeros at $u=\pm i/g$, see Table 2.
\smallskip This pattern repeats itself, and there are infinitely many critical values of $g$ which we denote as $g_{J,n}^{r,m}$ and define as the smallest value of $g$ such that for a symmetric configuration of Bethe roots the function $Y_{m+r|vw}$ acquires two zeros at $u=\pm i/g$. The subscript $J,n$ labels a state in the $\sl(2)$ sector, and these integers determine $m$. The critical values of $g$ can also be determined from the requirement that at $g=g_{J,n}^{r,m}$ the function $1+Y_{m+r-1|vw}$ has a double zero at $u=0$: $1+Y_{m+r-1|vw}(0,g_{J,n}^{r,m})=0$. This condition is particularly useful because the value of the Y-functions at $u=0$ can be found from the TBA equations, see the next section for details. A second set of subcritical values of $g$ can be defined as the smallest values of $g$ such that the function $Y_{m+r|vw}$ acquires a double zero at $u=0$. They are denoted as $\bar{g}_{J,n}^{r,m}$, and they are always greater than the corresponding critical values: $g_{J,n}^{r,m} <\bar{g}_{J,n}^{r,m}$. \smallskip The locations of the critical values depend on the state under consideration, and can be determined approximately by using the asymptotic $Y$-functions discussed in the previous subsection. The values obtained this way are only approximate because for large enough values of $g$ one should take into account the deviations of the Y-functions from their large $J$ expressions. \smallskip We will see in the next sections that the critical values $g_{J,n}^{r,m}$ play a crucial role in formulating the simplified TBA equations for excited states, which take a different form in each of the intervals $g_{J,n}^{r,m}<g<g_{J,n}^{r+1,m}$, $r=0,1,\ldots $ where $g_{J,n}^{0,m}=0$. The second set $\bar{g}_{J,n}^{r,m}$ is not important for the simplified equations. The canonical TBA equations however require both sets because they take a different form in each of the intervals $g_{J,n}^{r,m}<g<\bar{g}_{J,n}^{r,m}\,$; $\ \bar{g}_{J,n}^{r,m}<g<g_{J,n}^{r+1,m}$, $\ r=0,1,\ldots$.
\smallskip Strictly speaking the integration contour in the TBA equations also depends on $g$ and the state under consideration. Nevertheless, it appears that in the simplified TBA equations the contour can be chosen to be the same for all values of $g$, and even for all two-particle states from the $\sl(2)$ sector, if one allows its dynamical deformation. That means that with increasing $g$ the contour should be deformed in such a way that it does not hit any singularity. This also shows that one should not expect any kind of non-analyticity in the energy of a state at a critical value of $g$. What may happen is that the critical values are inflection points of the energy. \subsubsection*{Critical values of $g$ for the Konishi state} In this subsection we discuss in detail the critical values for the Konishi state. To analyze the dependence of Y-functions on $g$ one should first solve the BY equations with $J=2$, $n=1$, and then plug the $u_j$'s obtained into the expressions for $Y$-functions from appendix \ref{appT}. \begin{figure}[t] \begin{center} \includegraphics[width=.45\textwidth]{Yvw12}\qquad\includegraphics[width=.45\textwidth]{Yvw12b} \end{center} \caption{On the left and right pictures $Y_{1|vw}$, $Y_{2|vw}$ and $Y_{3|vw}$ are plotted for the Konishi state at $\bar g_{cr}^{(1)}\approx 4.5$ and $\bar g_{cr}^{(2)}\approx 11.5$, respectively. $Y_{2|vw}$ touches the $u$-axis at $g=\bar g_{cr}^{(1)}$, and has two real zeros for $\bar g_{cr}^{(1)}<g<\bar g_{cr}^{(2)}$. } \end{figure} Solving the equations \begin{eqnarray} Y_{r+1|vw}(\pm{i\over g},g)=0\,,\quad r=1,2,\ldots \,, \end{eqnarray} we find that there are 7 critical values of $g$ for $g<100$ \begin{eqnarray} g_{2,1}^{(r,1)} = \{4.429, 11.512, 21.632, 34.857, 51.204, 70.680, 93.290\}\,. \end{eqnarray} Note that the distance between the critical values increases with $g$. The first critical value is distinguished because only $Y_{2|vw}(\pm i/g,g)$ vanishes there.
For all the other critical values the function $Y_{r-1|vw}(\pm i/g,g)$ is also equal to zero \begin{eqnarray} Y_{r+1|vw}(\pm {i\over g},g_{2,1}^{(r,1)})=0\ \Longrightarrow\ Y_{r-1|vw}(\pm {i\over g},g_{2,1}^{(r,1)})=0\,,\quad {\rm for}\ \ r=2,3,\ldots \,.~~~~~ \end{eqnarray} Then, solving the equations \begin{eqnarray} Y_{r+1|vw}(0,g)=0\,,\quad r=1,2,\ldots \,, \end{eqnarray} one finds the following 7 subcritical values of $g$ for $g<100$ \begin{eqnarray} \bar{g}_{2,1}^{(r,1)} = \{4.495, 11.536, 21.644, 34.864, 51.209, 70.684, 93.292\}\,. \end{eqnarray} Note that the distance between a critical value and the corresponding subcritical one decreases with $g$. Again, at the first subcritical value only $Y_{2|vw}(0,g)$ vanishes. For all the other subcritical values the function $Y_{r-1|vw}(0,g)$ also acquires an extra double zero \begin{eqnarray} Y_{r+1|vw}(0,\bar{g}_{2,1}^{(r,1)})=0\ \Longrightarrow\ Y_{r-1|vw}(0,\bar{g}_{2,1}^{(r,1)})=0\,,\quad {\rm for}\ \ r=2,3,\ldots \,. \end{eqnarray} Once $g$ crosses a subcritical value $\bar{g}_{2,1}^{(r,1)}$ the corresponding double zeros at $u=0$ split, and each of the functions $Y_{r-1|vw}$ and $Y_{r+1|vw}$ acquires two symmetrically located zeros. As a result, at infinite $g$ all the $Y_{M|vw}$-functions have four real zeros. Moreover, one can also see that if $g$ is between two subcritical values then all these zeros for all the functions are inside the interval $[-2,2]$ and approach $\pm2$ as $g\to\infty$. \begin{figure}[t] \begin{center} \includegraphics[width=.45\textwidth]{Yvw23}\qquad\includegraphics[width=.45\textwidth]{Yvw23b} \end{center} \caption{On the left and right pictures $Y_{1|vw}$, $Y_{2|vw}$, $Y_{3|vw}$ and $Y_{4|vw}$ are plotted for the Konishi state at $\bar g_{cr}^{(2)}\approx 11.5$ and $\bar g_{cr}^{(3)}\approx 21.6$, respectively. $Y_{1|vw}$ and $Y_{3|vw}$ touch the $u$-axis at $g=\bar g_{cr}^{(2)}$, and $Y_{2|vw}$ and $Y_{4|vw}$ touch it at $g=\bar g_{cr}^{(3)}$.
$Y_{1|vw}$ has four real zeros for $g>\bar g_{cr}^{(2)}$. } \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=.45\textwidth]{Yvw34}\qquad\includegraphics[width=.45\textwidth]{Yvw34b} \end{center} \caption{On the left and right pictures $Y_{2|vw}$, $Y_{3|vw}$, $Y_{4|vw}$ and $Y_{5|vw}$ are plotted for the Konishi state at $\bar g_{cr}^{(3)}\approx 21.6$ and $\bar g_{cr}^{(4)}\approx 34.9$, respectively. $Y_{2|vw}$ and $Y_{4|vw}$ touch the $u$-axis at $g=\bar g_{cr}^{(3)}$, and $Y_{3|vw}$ and $Y_{5|vw}$ touch it at $g=\bar g_{cr}^{(4)}$. $Y_{2|vw}$ has four real zeros for $g>\bar g_{cr}^{(3)}$. } \end{figure} In Figures 3-5, we show several plots of $Y_{M|vw}$-functions for the Konishi state. In what follows, a Konishi-like state refers to any two-particle state for which only the $Y_{1|vw}$-function has two real zeros and all the other $Y_{M|vw}$-functions have no zeros in the physical strip at small coupling. \subsubsection*{Critical values of $g$ for some states} Here we analyze the $g$-dependence of Y-functions for several other states. We begin with the state with $J=5$ and $n=1$. This is the state with the lowest value of $J$ such that both $Y_{1|vw}$ and $Y_{2|vw}$ have two zeros in the physical strip at small $g$. The critical and subcritical values are determined by the equations \begin{eqnarray} Y_{r+2|vw}(\pm{i\over g},g)=0\,,\quad Y_{r+2|vw}(0,\bar g)=0\,,\quad r=1,2,\ldots \,, \end{eqnarray} and we find the following values for $g<100$ \begin{eqnarray} {g}_{5,1}^{r,2} &=& \{6.707, 15.458, 27.233, 42.107, 60.101, 81.222\}\\\nonumber \bar{g}_{5,1}^{r,2} &=& \{6.764, 15.479, 27.244, 42.114, 60.105, 81.225\}\,. \end{eqnarray} Next, we consider the state with $J=8$ and $n=1$. This is the state with the lowest value of $J$ such that $Y_{1|vw}$ has four zeros, $Y_{2|vw}$ has two real zeros and $Y_{3|vw}$ has two imaginary zeros in the physical strip at small $g$.
Therefore, the critical and subcritical values are determined by the equations \begin{eqnarray} Y_{r+3|vw}(\pm{i\over g},g)=0\,,\quad Y_{r+2|vw}(0,\bar{g})=0\,,\quad r=1,2,\ldots \,. \end{eqnarray} We find the following 6 critical and 7 subcritical values of $g$ for $g<100$ \begin{eqnarray} g_{8,1}^{r,3} &=& \{~~ -~,9.157, 19.561, 32.985, 49.505, 69.143, 91.909\}\\\nonumber \bar g_{8,1}^{r,3} &=& \{0.116, 9.207, 19.580, 32.995, 49.511, 69.148, 91.912\}\,. \end{eqnarray} The reason why $g_{8,1}^{1,3}$ is so small is that the two imaginary roots of $Y_{3|vw}$ that are in the physical strip at $g=0$ reach the real line very quickly. One might think that the first critical value increases with $J$. This is not so, as one can see from the example of the state with $J=9$ and $n=1$. This is again a state such that $Y_{1|vw}$ has four zeros, and $Y_{2|vw}$ and $Y_{3|vw}$ have two zeros at small $g$, and it has the following 6 critical values of $g$ for $g<100$ \begin{eqnarray} g_{9,1}^{r,3} &=& \{6.970, 16.982, 29.935, 45.968, 65.114, 87.384\}\\\nonumber \bar g_{9,1}^{r,3} &=& \{7.052, 17.006, 29.947, 45.976, 65.119, 87.388\}\,. \end{eqnarray} \section{TBA equations for Konishi-like states} \label{TBAKon} As was discussed above, to formulate excited state TBA equations one should choose an integration contour, take it back to the real line of the mirror plane, and then check that the resulting TBA equations are solved by the large $L$ expressions for Y-functions. We begin our analysis with the simplest case of a Konishi-like state, which appears however to be quite general and allows one to understand the structure of the TBA equations for any two-particle $\sl(2)$ sector state. To simplify the notation we denote the critical values of the state under consideration as $g_{cr}^{(r)}$. \subsection{Excited states TBA equations: $g<g_{cr}^{(1)}$} Let us stress that the equations below are valid only for physical states satisfying the level-matching condition.
Since some terms in the equations below have the same form for any $N$ we keep an explicit dependence on $N$ in some of the formulae. \subsubsection*{ Integration contour} The integration contour for all Y-functions but $Y_\pm$ is chosen in such a way that it lies a little bit above the interval Re$(z)\in (-{\omega_1\over 2},{\omega_1\over 2})$, Im$(z)={\omega_2\over 2i}$ in the middle of the mirror theory region, and penetrates into the string theory region in a small vicinity of $z =\omega_2/2$ (the centre point of the mirror region). \begin{figure}[t] \begin{center} \includegraphics*[width=0.45 \textwidth]{z-torus_path}\quad \includegraphics*[width=0.4\textwidth]{u-plane_path} \end{center} \caption{On the left picture the integration contour on the $z$-torus is shown. On the right picture a part of the integration contour on the mirror u-plane going from $\pm\infty$ a little bit above the real line to the origin, and down to the string region is plotted. The green curves correspond to the horizontal lines on the z-torus just above the real line of the string region. The red semi-lines are the cuts of the mirror $u$-plane, and on the $z$-torus they are mapped to the part of the boundary of the mirror region that lies in the string region.} \end{figure} Then, it goes along both sides of the vertical line to the string region real line, and encloses all the points $z_{*k}$ so that they lie between the mirror theory line and the integration contour, see Figure 6. In the mirror $u$-plane the contour lies above the real line, then it goes down at $u=0$, reaches a minimum value and turns back to the real line. In the equations involving the functions $Y_Q$, $Y_{Q|vw}$ and $Y_{Q|w}$ it crosses the cuts of the mirror $u$-plane with Im$(u)=-{Q\over g}$, and enters another sheet, see Figure 2. It is worth mentioning that the contour does not cross any additional cuts the $Y$-functions have on the $z$-torus.
Then, one uses the TBA equations for the ground state energy, and taking the integration contour back to the mirror region interval Im$(z)={\omega_2\over 2i}$, picks up $N$ extra contributions of the form $-\log S(z_{*},z)$ from any term $\log (1+Y_1)\star K$, where $S(w,z)$ is the S-matrix corresponding to the kernel $K$: $K(w,z)={1\over 2\pi i}{d\over dw}\log S(w,z)$. In addition, one also gets contributions of the form $-\log S(w,z)$ from the imaginary zeros of $1+Y_{M|vw}$ located below the real line of the mirror $u$-plane, see \eqref{zerorel}. Finally, the integration contour for $Y_\pm$-functions should be deformed so that the points $u_k^-=u_k-{i\over g}$ of the mirror $u$-plane lie between the interval $[-2,2]$ of the mirror theory line and the contour. Then, the terms of the form $\log (1-Y_+)\,\hat{\star}\, K$ would produce extra contributions of the form $+\log S(u_{k}^-,z)$ because $Y_+(u_k^-)=\infty$. In fact, this is important only if one uses the simplified TBA equations because in the canonical TBA equations $Y_\pm$-functions appear only in the combination $1-{1\over Y_\pm}$. Note also that $Y_\pm$-functions analytically continued to the whole mirror $u$-plane have a cut $(-\infty,-2]\cup [2,\infty)$, and they should satisfy the following important equality which, as was shown in \cite{AF09b}, is necessary for the fulfillment of the Y-system \begin{eqnarray}\label{ypym} Y_+(u\pm i0) = Y_-(u\mp i0)\quad {\rm for}\ \ u\in (-\infty,-2]\cup [2,\infty)\,. \end{eqnarray} This equality shows that one can glue the two $u$-planes along the cuts, and then $Y_\pm$-functions can be thought of as two branches of one analytic function defined on the resulting surface (with extra cuts in fact). We will see that the equality \eqref{ypym} indeed follows from the TBA equations. 
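The origin of these driving terms can be made explicit by a one-line residue computation (written here schematically, dropping boundary terms). Since $K(w,z)={1\over 2\pi i}{d\over dw}\log S(w,z)$, integrating by parts gives \begin{eqnarray}\nonumber \log(1+Y)\star K = -\int_{\cal C} {dw\over 2\pi i}\, {Y'(w)\over 1+Y(w)}\, \log S(w,z)\,. \end{eqnarray} When the contour ${\cal C}$ is taken back to the mirror line across a simple zero $w_0$ of $1+Y$, the integrand has a simple pole at $w=w_0$ with residue $\log S(w_0,z)$, and the integral changes by $\mp\log S(w_0,z)$ depending on the orientation of the crossing; a pole of $Y$, as for $Y_+$ at $u_k^-$, contributes with the opposite sign. This is precisely the mechanism behind the $-\log S(z_{*},z)$, $-\log S(w,z)$ and $+\log S(u_k^-,z)$ contributions described above.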
\subsubsection*{ Simplified TBA equations} Using this procedure and the simplified TBA equations for the ground state derived in \cite{AF09b,AF09d}, one gets the following set of integral equations for Konishi-like states and $g<g_{cr}^{(1)} $ \bigskip \noindent $\bullet$\ $M|w$-strings: $\ M\ge 1\ $, $Y_{0|w}=0$ \begin{eqnarray}\label{Yforws} \log Y_{M|w}= \log(1 + Y_{M-1|w})(1 + Y_{M+1|w})\star s +\delta_{M1}\, \log{1-{1\over Y_-}\over 1-{1\over Y_+} }\,\hat{\star}\, s\,.~~~~~ \end{eqnarray} These equations coincide with the ground state ones. \bigskip \noindent $\bullet$\ $M|vw$-strings: $\ M\ge 1\ $, $Y_{0|vw}=0$ \begin{eqnarray}\label{Yforvw3} \hspace{-0.3cm}\log Y_{M|vw}(v)&=&-\delta_{M1} \sum_{j=1}^N \log S(u_j^--v)- \log(1 + Y_{M+1})\star s~~~~~\\\nonumber &+& \log(1 + Y_{M-1|vw} )(1 + Y_{M+1|vw})\star s+\delta_{M1} \log{1-Y_-\over 1-Y_+}\,\hat{\star}\, s\,,~~~~~ \end{eqnarray} where $u_j^-\equiv u_j-{i\over g}$, and the kernel $s$ and the corresponding S-matrix $S$ are defined in appendix \ref{app:rapidity}. For $M=1$ the first term is due to our choice of the integration contour for $Y_\pm$-functions, and the pole of $Y_+$ at $u=u_j^-$. It is worth pointing out that there is no extra contribution in \eqref{Yforvw3} from any term of the form $\log(1 + Y_{k|vw} )\star s$. In general such a term leads to a contribution equal to $-\log S(r_1 - v)S(r_2 - v)$ where $r_1$ and $r_2$ are the two zeros of $1 + Y_{k|vw}$ with negative imaginary parts. To explain this, we notice that if the zeros $r_j^{(k+1)}$ of $Y_{k+1|vw}$ lie outside the physical strip, then $r_j$ are $r_1=r_1^{(k+1)}-{i\over g}$ and $r_2=r_1^{(k+1)}+{i\over g}$, where Im$(r_1^{(k+1)})<-{1\over g}$. Since $S(r-{i\over g})S(r+{i\over g}) = 1$ the term $\log(1 + Y_{k|vw} )\star s$ does not contribute, see Figure 7. 
On the other hand, if the zeros $r_j^{(k+1)}$ of $Y_{k+1|vw}$ lie inside the physical strip then $r_j$ are related to $r_j^{(k+1)}$ as $r_j=r_j^{(k+1)}-{i\over g}$, and the term $\log(1 + Y_{k|vw} )\star s$ leads to the extra contribution equal to $-\sum_j\log S(r_j^{(k+1)-} - v)$ where $r_j^{(k+1)-}\equiv r_j^{(k+1)}-{i\over g}$, see Figure 7. \begin{figure}[H] \begin{center} \includegraphics*[width=0.9\textwidth]{ZeroesY2} \end{center} \caption{In the upper pictures the positions of zeros of $Y_{k+1|vw}$ and of $1+Y_{k|vw}$ are shown for $g<g_{cr}$. The contributions of $\log(1+Y_{k|vw})\star s$ to the TBA equations cancel out for this case. The two pictures in the middle correspond to the situation when two zeros of $Y_{k+1|vw}$ enter the physical region $|{\rm Im}u|<\frac{1}{g}$ (depicted in yellow). Finally, the two pictures at the bottom are drawn for the case when two zeros of $Y_{k+1|vw}$ are on the real line which corresponds to $g_{cr}<\bar{g}_{cr}<g$. When $g>g_{cr}$ the contributions of $\log(1+Y_{k|vw})\star s$ do not cancel anymore and lead to modification of the corresponding TBA equations. } \end{figure} We conclude therefore that at weak coupling and only for Konishi-like states no term $\log(1 + Y_{k|vw} )\star s$ gives an extra contribution to the TBA equations for $vw$-strings. Let us also mention that the poles of $S(u_j^--v)$ cancel the zeros of $Y_{1|vw}(v)$ at $v=u_j$, and \eqref{Yforvw3} is compatible with the reality condition for Y-functions. \bigskip \noindent $\bullet$\ $y$-particles \footnote{ The equation for $Y_+Y_-$ follows from eq.(4.14) and (4.26) of \cite{AF09b}, and the identity $ K_{Qy}\,\hat{\star}\, K_1 = K_{xv}^{Q1} - K_{Q-1}\,. $ Another identity $ K_{xv}^{Q1} \star s = {1\over 2}K_{Qy}+{1\over 2}K_{Q} -\delta_{Q1} s - K_{Qy}^{ms}\,\check{\star}\, \tilde{s}\,, $ where $\tilde{s}(u)\equiv s(u-{i\over g})$ is useful in deriving Y-system equations for $Y_\pm$. 
} \begin{eqnarray} \label{Yfory1s} \log {Y_+\over Y_-}(v)&=& -\sum_{j=1}^N\, \log S_{1_*y}(u_{j},v) +\log(1 + Y_{Q})\star K_{Qy}\,,~~~~~~~ \\ \label{Yfory2s} \log {Y_+ Y_-}(v) &=& -\sum_{j=1}^N\, \log {\big(S_{xv}^{1_*1}\big)^2\over S_2}\star s(u_j,v) \\\nonumber &+&2\log{1 + Y_{1|vw} \over 1 + Y_{1|w} }\star s - \log\left(1+Y_Q \right)\star K_Q+ 2 \log(1 + Y_{Q})\star K_{xv}^{Q1} \star s\,,~~~~ \end{eqnarray} where we use the following notation \begin{eqnarray}\nonumber &&\log {\big(S_{xv}^{1_*1}\big)^2\over S_2}\star s(u_{j},v) \equiv \int_{-\infty}^\infty\, dt\, \log {S_{xv}^{1_*1}(u_j,t)^2\over S_2(u_j-t)}\, s(t-v) \,.~~~~~ \end{eqnarray} Then, $S_{1_*y}(u_{j},v) \equiv S_{1y}(z_{*j},v)$ is a shorthand notation for the S-matrix with the first and second arguments in the string and mirror regions, respectively. The same convention is used for other S-matrices. Both arguments of the kernels in these formulae are in the mirror region. Taking into account that under the analytic continuation through the cut $|v|>2$ the S-matrix $S_{1_*y}$ and the kernel $K_{Qy}$ transform as $S_{1_*y}\to 1/S_{1_*y}$ and $K_{Qy}\to - K_{Qy}$, one gets that the functions $Y_\pm$ are indeed analytic continuations of each other and, therefore, the equality \eqref{ypym} does hold. It can be easily checked that the term on the first line in \eqref{Yfory2s} is real, and this makes it obvious that the equations for the $Y_\pm$-functions are also compatible with the reality of Y-functions. The origin of this term can be readily understood if one uses the following identity \begin{eqnarray}\label{idn} -\log {\big(S_{xv}^{1_*1}\big)^2\over S_2}\star s(u_{j},v)= \log S_{1}(u_{j}-v) -2 \log S_{xv}^{1_*1}\star s(u_{j},v) \end{eqnarray} which holds up to a multiple of $2\pi i$, and since $S_{xv}^{1_*1}(u,v)$ has a zero at $u=v$ the integration contour in the second term on the r.h.s. of \eqref{idn} runs above the real line.
Then, the term $\log S_{1}(u_{j}-v)$ comes from the term $- \log\left(1+Y_1 \right)\star K_1$, and the second term comes from $2 \log(1 + Y_{1})\star K_{xv}^{Q1} \star s$. Eq.\eqref{Yfory2s} is very useful for checking the TBA equations in the large $J$ limit where one gets \begin{eqnarray} \nonumber \log {Y_+ Y_-} &=& -\sum_{j=1}^N\, \log {\big(S_{xv}^{1_*1}\big)^2\over S_2}\star s(u_j,v) +2\log{1 + Y_{1|vw} \over 1 + Y_{1|w} }\star s \,. \end{eqnarray} \bigskip \noindent $\bullet$\ $Q$-particles for $Q\ge 3$ \begin{eqnarray} \log Y_{Q}&=&\log{\left(1 + {1\over Y_{Q-1|vw}} \right)^2\over (1 + {1\over Y_{Q-1} })(1 + {1\over Y_{Q+1} }) }\star s\label{YforQ2a3} \,~~~~~~~ \end{eqnarray} \bigskip \noindent $\bullet$\ $Q=2$-particle \begin{eqnarray}\label{YforQ2a4} \log Y_{2}&=& \sum_{j=1}^N \log S(u_{j}-v) -\log(1 + {1\over Y_{1} })(1 + {1\over Y_{3} }) \star s+2\log\left(1 + {1\over Y_{1|vw}} \right)\star s\,~~~~~~~ \end{eqnarray} In fact, by using the p.v.\ prescription, one gets \begin{eqnarray}\label{YforQ2bb} \log Y_{2}&=& \log{\left(1 + {1\over Y_{1|vw}} \right)^2\over (1 + {1\over Y_{1} })(1 + {1\over Y_{3} }) }\star_{p.v.} s\,~~~~~~~ \end{eqnarray} which makes the reality of Y-functions obvious. \bigskip \noindent $\bullet$\ $Q=1$-particle \begin{eqnarray} \nonumber \log Y_{1}&=&\sum_{j=1}^N\log\check{\Sigma}_{1_*}^2\, \check{S}_{1}\,\check{\star}\, s(u_j,v)-L\, \check{\cal E}\,\check{\star}\, s +\log\left(1-{1\over Y_{-}} \right)^2Y_2\, \hat{\star}\, s \\\nonumber &-&2 \log\left(1-{1 \over Y_{-}} \right)\left(1-{1 \over Y_{+}}\right) Y_2\, \hat{\star}\, \check{K}\,\check{\star}\, s +\log{Y_1}\star \check{K}_1\,\check{\star}\, s \\\label{YforQ1a5} &-& \log\left(1+Y_{Q} \right)\star \big( 2\check{K}_Q^\Sigma + \check{K}_Q +\check{K}_{Q-2}\big)\,\check{\star}\, s - \log(1 + Y_{2})\star s\,.~~~~~ \end{eqnarray} All the kernels appearing here are defined in appendix \ref{app:rapidity}, and we also assume that $\check{K}_{0}=0$ and $\check{K}_{-1}=0$.
The reality of this equation follows from the reality of \begin{eqnarray}\nonumber {S_{ss}(u-{i\over g},v)^2 \over \check{S}_{1}(u,v)} = \frac{x_s(u-{i\over g}) - x_s(v)}{x_s(u-{i\over g}) - {1\over x_s(v)}} \frac{x_s(u+{i\over g}) - x_s(v)}{x_s(u+{i\over g}) - {1\over x_s(v)}} = S_{ss}(u-{i\over g},v)S_{ss}(u+{i\over g},v)\,,~~~~~~ \end{eqnarray} which appears if one uses the representation \eqref{s1star} for $\check{\Sigma}_{1_*}$. Note that $\check{S}_{1}(u,v)$ is defined through the kernel $\check{K}_{1}(u,v)$ by the integral \begin{eqnarray} \check{S}_{1}(u,v) = \exp\Big( 2\pi i \int_{-\infty}^u\, du'\, \check{K}_{1}(u',v) \Big)= {S_{ss}(u-{i\over g},v) \over S_{ss}(u+{i\over g},v)}\,, \end{eqnarray} and it differs from the naive formula \begin{eqnarray} S_{ms}(u-{i\over g},v) S_{ms}(u+{i\over g},v) =\check{S}_{1}(u,v)\, x_s(v)^2 \,, \end{eqnarray} which one could write by using the expression \eqref{ck1} for the kernel $\check{K}_{1}$. We see that the reality of Y-functions is a trivial consequence of these equations. Moreover, in the large $L$ limit the simplified TBA equations do not involve infinite sums at all. As a result, they can be easily checked numerically with arbitrary precision. We have found that for Konishi-like states the integral equations are solved in the large $L$ limit by the asymptotic Y-functions given in terms of transfer matrices if the length parameter $L$ is related to the charge $J$ carried by a string state as $$L=J+2\,.$$ We expect that for all $N$-particle states from the $\sl(2)$ sector the relation between length and charge is universal and given by $L=J+2$. \medskip There is another form of the TBA equations for $Q$-particles which is obtained by combining the simplified and canonical TBA equations.
We refer to this form as the hybrid one, and the equations can be written as follows \bigskip \noindent $\bullet$\ Hybrid equations for $Q$-particles \begin{align} &\log Y_Q(v) = - \sum_{j=1}^N\( \log S_{\sl(2)}^{1_*Q}(u_j,v) - 2 \log S\star K^{1Q}_{vwx} (u_j^-,v) \) \notag\\ &\quad - L\, \widetilde{{\cal E}}_{Q} + \log \left(1+Y_{Q'} \right) \star \(K_{\sl(2)}^{Q'Q} + 2 \, s \star K^{Q'-1,Q}_{vwx} \) \label{TbaQsl2H} \\ &\quad + 2 \log \(1 + Y_{1|vw}\) \star s \,\hat{\star}\, K_{yQ} + 2 \, \log \(1 + Y_{Q-1|vw}\) \star s \notag \\ &\quad - 2 \log{1-Y_-\over 1-Y_+} \,\hat{\star}\, s \star K^{1Q}_{vwx} + \log {1- \frac{1}{Y_-} \over 1- \frac{1}{Y_+} } \,\hat{\star}\, K_{Q} + \log \big(1- \frac{1}{Y_-}\big)\big( 1- \frac{1}{Y_+} \big) \,\hat{\star}\, K_{yQ} \,, \notag \end{align} where $K^{0,Q}_{vwx}=0$, and $Y_{0|vw}=0$, and we use the notation \begin{eqnarray} \log S\star K^{1Q}_{vwx} (u_j^-,v) = \int_{-\infty}^\infty\, dt\, \log S(u_j^- -t-i0) \star K^{1Q}_{vwx}(t+i0,v)\,.~~~ \end{eqnarray} The first term on the first line of \eqref{TbaQsl2H} comes from $\log \left(1+Y_{Q'} \right) \star K_{\sl(2)}^{Q'Q} $, and the second one from $- 2 \log{1-Y_-\over 1-Y_+} \,\hat{\star}\, s \star K^{1Q}_{vwx}$. Eq.\eqref{TbaQsl2H} is derived in appendix \ref{hybridQ}. 
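In practical checks of equations of this type, every $\star$-convolution is replaced by a quadrature on a truncated rapidity grid, so that the right hand sides become matrix-vector products. The following is a minimal sketch of this discretization; the explicit form used for the kernel $s$, as well as the grid size and cutoff, are assumptions made for illustration only:

```python
import numpy as np

# Truncated rapidity grid for the mirror real line; the cutoff U and the
# number of points N are illustrative -- in real checks they set the precision.
g, U, N = 1.0, 40.0, 2001
t = np.linspace(-U, U, N)
dt = t[1] - t[0]

def star(f, K):
    """Discretized (f * K)(v) = int dt f(t) K(t, v), as a matrix-vector product."""
    return f @ K(t[:, None], t[None, :]) * dt

# Toy kernel: s(t, v) = g / (4 cosh(pi g (t - v) / 2))  (assumed explicit form).
s_ker = lambda t, v: g / (4.0 * np.cosh(np.pi * g * (t - v) / 2.0))

# Sanity check: int dt s(t - v) = 1/2 for any v well inside the grid.
res = star(np.ones(N), s_ker)
mid = np.abs(t) < 5.0
assert np.max(np.abs(res[mid] - 0.5)) < 1e-6
```

With the actual kernels and Y-functions substituted for the toy ingredients, iterating such discretized equations is how the large-$L$ checks described above can be carried out.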
\medskip The energy of the multiparticle state is obtained in the same way by taking the integration contour back to the real mirror momentum line, and is given by \begin{eqnarray}\nonumber E_{\{n_k\}}(L)&=& \sum_{k=1}^N\, i{\widetilde p}^1(z_{*k}) -\int {\rm d}z\, \sum_{Q=1}^\infty{1\over 2\pi}{d{\widetilde p}^Q\over dz}\log\left(1+Y_Q\right)~~~~~~\\ &=& \sum_{k=1}^N\mathcal E_k -\int {\rm d}u\, \sum_{Q=1}^\infty{1\over 2\pi}{d{\widetilde p}^Q\over du}\log\left(1+Y_Q\right)\,, \label{Enk} \end{eqnarray} where \begin{eqnarray} \mathcal E_k = igx^-(z_{*k})-igx^+(z_{*k}) -1= igx_s^-(u_{k})-igx_s^+(u_{k}) -1\,, \end{eqnarray} is the energy of a fundamental particle in the string theory, see appendix \ref{app:rapidity} for definitions and conventions. For practical computations the analytic continuation from the mirror region to the string one reduces to the substitution $x^{Q\pm}(u)\to x^{Q\pm}_s(u)\equiv x_s(u\pm {i\over g} Q)$ in all the kernels and S-matrices. Then, as was discussed above, the string theory spectrum is characterized by a set of $N$ real numbers $u_k$ (or $z_{*k}$) satisfying the exact Bethe equations \eqref{Y1m1}. We assume for definiteness that $u_k$ are ordered as $u_1 < \cdots < u_N$. Finally, to derive exact Bethe equations one should analytically continue $Y_1$ given by either eq.\eqref{YforQ1a5} or \eqref{TbaQsl2H}. We find it simpler to handle the exact Bethe equations derived from the hybrid equation \eqref{TbaQsl2H} for $Y_1$. In appendix \ref{canTBA} we also derive exact Bethe equations from the canonical equation for $Y_1$. \subsubsection*{Exact Bethe equations } Now we need to derive the integral form of the exact Bethe equations (\ref{Y1m1}). Let us note first of all that at large $L$ eq.(\ref{Y1m1}) reduces to the BY equations for the $\sl(2)$-sector by construction, and the integral form of (\ref{Y1m1}) should be compatible with this requirement.
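As a quick consistency check of the single-particle energy $\mathcal E_k$ above, one can verify numerically that $igx_s^-(u)-igx_s^+(u)-1$ reproduces the familiar dispersion relation $\sqrt{1+4g^2\sin^2(p/2)}$ with $p=-i\log(x_s^+/x_s^-)$. The sketch below assumes the standard string-region Zhukovsky map $x_s(u)=\frac{u}{2}\big(1+\sqrt{1-4/u^2}\big)$ with $|x_s|>1$ (the precise definitions are those of appendix \ref{app:rapidity}); the values of $g$ and $u$ are illustrative:

```python
import numpy as np

g = 2.0   # coupling (illustrative value)
u = 3.0   # real rapidity with |u| > 2, as for a string-region particle

def x_s(z):
    # string-region Zhukovsky map with |x_s| > 1 (assumed branch)
    return 0.5 * z * (1.0 + np.sqrt(1.0 - 4.0 / z**2))

xp, xm = x_s(u + 1j / g), x_s(u - 1j / g)   # x_s^+ and x_s^-

E = 1j * g * (xm - xp) - 1.0                 # i g x_s^- - i g x_s^+ - 1
p = (-1j * np.log(xp / xm)).real             # momentum p = -i log(x^+ / x^-)

# the same energy from the dispersion relation
E_disp = np.sqrt(1.0 + 4.0 * g**2 * np.sin(p / 2.0)**2)

assert abs(E.imag) < 1e-12            # the energy is real for real u
assert abs(E.real - E_disp) < 1e-10   # both expressions agree
```

The agreement follows from the constraint $x_s^\pm + 1/x_s^\pm = u \pm i/g$, which the map above satisfies by construction.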
To derive the exact Bethe equations, we take the logarithm of eq.(\ref{Y1m1}), and analytically continue the variable $z$ of $Y_1(z)$ in eq.(\ref{TbaQsl2H}) to the point $z_{*k}$. On the mirror $u$-plane this means that we go from the real $u$-line down below the line with Im$(u)=-{1\over g}$ without crossing any cut, then turn back, cross the cut with Im$(u)=-{1\over g}$ and $|$Re$(u)|>2$, and go back to the real $u$-line, see Figure 6. As a result, we should make the following replacements $x(u-{i\over g}) \to x_s(u-{i\over g}) = x(u-{i\over g})$, $x(u+{i\over g}) \to x_s(u+{i\over g}) = 1/x(u+{i\over g})$ in the kernels appearing in (\ref{TbaQsl2H}). The analytic continuation depends on the analytic properties of the kernels and Y-functions, and its detailed consideration can be found in appendix \ref{app:Y1}. As shown there, the resulting exact Bethe equations for a string theory state from the $\sl(2)$ sector can be cast into the following integral form \begin{align} &\pi i(2n_k+1)=\log Y_{1_*}(u_k) =i L\, p_k- \sum_{j=1}^N\, \log S_{\sl(2)}^{1_*1_*}(u_j,u_k)\label{Tba1sl2B}\\ &\quad + 2 \sum_{j=1}^N\, \log {\rm Res}(S)\star K^{11_*}_{vwx} (u_j^-,u_k) -2 \sum_{j=1}^N\log\big(u_j-u_k-{2i\over g}\big)\, {x_j^--{1\over x_{k}^-}\over x_j^-- x_{k}^+} \notag\\ &\quad + \log \left(1+Y_{Q} \right) \star \(K_{\sl(2)}^{Q1_*} + 2 \, s \star K^{Q-1,1_*}_{vwx} \)+ 2 \log \(1 + Y_{1|vw}\) \star \( s \,\hat{\star}\, K_{y1_*} + \tilde{s}\) \notag \\ &\quad - 2 \log{1-Y_-\over 1-Y_+} \,\hat{\star}\, s \star K^{11_*}_{vwx} + \log {1- \frac{1}{Y_-} \over 1- \frac{1}{Y_+} } \,\hat{\star}\, K_{1} + \log \big(1- \frac{1}{Y_-}\big)\big( 1- \frac{1}{Y_+} \big) \,\hat{\star}\, K_{y1_*} \,, \notag \end{align} where we use the notations \begin{eqnarray} &&\log {\rm Res}(S)\star K^{11_*}_{vwx} (u^-,v) = \int_{-\infty}^{+\infty}{\rm d}t\,\log\Big[S(u^- -t)(t-u)\Big] K_{vwx}^{11_*}(t,v)\,,~~~\\ &&\tilde{s}(u)=s(u^-)\,.
\end{eqnarray} The integration contours in the formulae above run a little bit above the Bethe roots $u_j$, $p_k= i \widetilde{{\cal E}}_{Q}(z_{*k})=-i\log{x_s(u_k+{i\over g})\over x_s(u_k-{i\over g})}$ is the momentum of the $k$-th particle, and the second argument in all the kernels in \eqref{Tba1sl2B} is equal to $u_{k}$. The first argument, over which we integrate, is the original one in the mirror region. Taking into account that the BY equations for the $\sl(2)$ sector have the form \begin{eqnarray}\nonumber \pi i(2n_k+1)=i J\, p_k - \sum_{j=1}^N \log S_{\sl(2)}^{1_*1_*}(u_{j},u_{k})\label{BYsl2} \,,~~~~~~ \end{eqnarray} and that $Y_Q$ is exponentially small at large $J$, we conclude that if the analytic continuation has been done correctly then, up to an integer multiple of $2\pi i$, the following identities between the asymptotic Y-functions should hold \begin{eqnarray}\nonumber &&{\cal R}_k\equiv 2 \hspace{0.3mm} i \, p_k + 2 \sum_{j=1}^N\, \log {\rm Res}(S)\star K^{11_*}_{vwx} (u_j^-,u_k) -2 \sum_{j=1}^N\log\big(u_j-u_k-{2i\over g}\big)\, {x_j^--{1\over x_{k}^-}\over x_j^-- x_{k}^+} \nonumber\\ &&\quad + 2 \log \(1 + Y_{1|vw}\) \star \( s \,\hat{\star}\, K_{y1_*} + \tilde{s}\)- 2 \log{1-Y_-\over 1-Y_+} \,\hat{\star}\, s \star K^{11_*}_{vwx} \nonumber\\ &&\quad + \log {1- \frac{1}{Y_-} \over 1- \frac{1}{Y_+} } \,\hat{\star}\, K_{1} + \log \big(1- \frac{1}{Y_-}\big)\big( 1- \frac{1}{Y_+} \big) \,\hat{\star}\, K_{y1_*} = 0\,.~\label{Rkid} \end{eqnarray} For $N=2$ and $u_1=-u_2$ one gets one equation, and by using the expressions for the Y-functions from appendix \ref{appT} one can check numerically\footnote{Since $K^{11_*}_{vwx}(u,v)$ has a pole at $u=v$ with the residue equal to $-{1\over 2\pi i}$, the terms of the form $2f\star K^{11_*}_{vwx} $ can be represented as $ 2f\star K^{11_*}_{vwx} = 2f\star_{p.v.} K^{11_*}_{vwx} + f(u_k)$, which is useful for numerics.
} that it does hold for any real value of $u_1$ such that only $Y_{1|vw}$ has two zeros inside the strip $|{\rm Im} \,u|<1/g$. \subsection{Excited states TBA equations: $g_{cr}^{(1)}<g<g_{cr}^{(2)}$} In this subsection we consider the TBA equations for values of $g$ in the first critical region $g_{cr}^{(1)}<g<g_{cr}^{(2)}$. In this region, in addition to the two real zeros of $Y_{1|vw}$ at $u_j$, two zeros of $Y_{2|vw}$ enter the physical strip. We denote these zeros by $r_j$.\footnote{Canonical TBA equations take different forms depending on the reality of the zeros, and, therefore, one has to divide the region into two subregions: $g_{cr}^{(1)}<g<\bar g_{cr}^{(1)}$ and $\bar g_{cr}^{(1)}<g<g_{cr}^{(2)}$, see the next subsection and appendix \ref{canTBA} for details.} \subsubsection*{ Simplified TBA equations} We first notice that for $g_{cr}^{(1)}<g< g_{cr}^{(2)}$ the function $Y_{2|vw}$ has two zeros in the physical strip. Therefore, as was discussed above, the contribution to the simplified TBA equations coming from the zeros of $Y_{2|vw}$ and $1+Y_{1|vw}$ does not vanish. If $g<\bar g_{cr}^{(1)}$ no deformation of the integration contour is needed because all these zeros are on the imaginary line of the mirror region. As $g$ approaches $\bar g_{cr}^{(1)}$ the two zeros of $Y_{2|vw}$ approach $u=0$, and at $g=\bar g_{cr}^{(1)}$ they are both at the origin. For $g>\bar g_{cr}^{(1)}$ the zeros become real and symmetrically located, and they push the integration contour slightly below them. In addition, at $g=\bar g_{cr}^{(1)}$ the two zeros of $1+Y_{1|vw}$ with negative imaginary part reach the point $u=-i/g$, and for $g>\bar g_{cr}^{(1)}$ they begin to move along the line Im$(u)=-1/g$ in opposite directions. As a result, the integration contour should be deformed in such a way that the two zeros of $1+Y_{1|vw}$ do not cross it.
Thus, the zeros of $1+Y_{1|vw}$ always lie between the real line and the integration contour, and the terms of the form $\log (1+Y_{1|vw})\star K$ produce the usual contribution once one takes the contour back to the real line. Let us also mention that the points $r_j^-$ of the mirror $u$-plane are mapped to the upper boundary of the string region on the $z$-torus. Using this integration contour, one gets the following set of simplified TBA equations for Konishi-like states and $g_{cr}^{(1)}<g<g_{cr}^{(2)}$ \bigskip \noindent $\bullet$\ $M|w$-strings: their equations coincide with the ground state ones \eqref{Yforws}. \bigskip \noindent $\bullet$\ $M|vw$-strings: $\ M\ge 1\ $, $Y_{0|vw}=0$ \begin{eqnarray}\label{Yforvw3c1} &&\hspace{-0.3cm}\log Y_{M|vw}(v)=-\delta_{M1} \sum_{j=1}^N \log S(u_j^--v)-\delta_{M2} \sum_{j=1}^2 \log S(r_j^--v)~~~~~\\\nonumber &&+ \log(1 + Y_{M-1|vw} )(1 + Y_{M+1|vw})\star s+\delta_{M1} \log{1-Y_-\over 1-Y_+}\,\hat{\star}\, s- \log(1 + Y_{M+1})\star s\,.~~~~~ \end{eqnarray} The first term is due to the pole of $Y_+$ at $u=u_j^-$, and the second term is due to the zeros of $1 + Y_{1|vw}$ at $u=r_j^-$. \bigskip \noindent $\bullet$\ $y$-particles \begin{eqnarray}\label{Yfory1sc1} \log {Y_+\over Y_-}(v)&=& -\sum_{j=1}^N \log S_{1_*y}(u_{j},v) +\log(1 + Y_{Q})\star K_{Qy}\,,~~~~~~~ \\ \label{Yfory2sc2} \log {Y_+ Y_-}(v) &=& -\sum_{j=1}^N\, \log {\big(S_{xv}^{1_*1}\big)^2\over S_2}\star s(u_j,v)-2\sum_{j=1}^2 \log S(r_j^--v) \\\nonumber &+&2\log{1 + Y_{1|vw} \over 1 + Y_{1|w} }\star s - \log\left(1+Y_Q \right)\star K_Q+ 2 \log(1 + Y_{Q})\star K_{xv}^{Q1} \star s\,,~~~~ \end{eqnarray} where the second term on the first line of \eqref{Yfory2sc2} is due to the zeros of $1 + Y_{1|vw}$ at $u=r_j^-$. \bigskip \noindent $\bullet$\ $Q$-particles for $Q\ge 3$ \begin{eqnarray} \log Y_{Q}&=&\log{\left(1 + {1\over Y_{Q-1|vw}} \right)^2\over (1 + {1\over Y_{Q-1} })(1 + {1\over Y_{Q+1} }) }\star_{p.v.} s\label{YforQ2a3c1b}\,. \,~~~~~~~ \end{eqnarray} In fact the p.v.
prescription, see appendix \ref{app:reality}, is not really needed here because for $Q=3$ the double zero of $Y_2$ cancels the zeros of $Y_{2|vw}$, and for $Q\ge4$ everything is regular. Thus, the formula works no matter whether the roots $r_j$ are real or imaginary. \bigskip \noindent $\bullet$\ $Q=2$-particle \begin{eqnarray}\label{YforQ2a4c1} \log Y_{2}&=& - 2\sum_{j=1}^2 \log S(r_{j}^--v)+\log{\left(1 + {1\over Y_{1|vw}} \right)^2\over (1 + {1\over Y_{1} })(1 + {1\over Y_{3} }) }\star_{p.v.} s\,~~~~~~~ \end{eqnarray} This makes the reality of the Y-functions manifest because the double zero of $Y_2$ at $v=r_j$ is cancelled by the pole of $S(r_{j}^--v)$. \subsubsection*{ Hybrid equations} One can easily see that the simplified equation for $Q=1$-particles is the same as eq.\eqref{YforQ1a5} in the weak coupling region $g<g_{cr}^{(1)}$. Thus, we will only discuss the hybrid equations for $Q$-particles. Strictly speaking, their form is sensitive to whether the zeros of $Y_{2|vw}$ are complex or real, and the first critical region is divided into two subregions: $g_{cr}^{(1)}<g<\bar g_{cr}^{(1)}$ and $\bar g_{cr}^{(1)}<g<g_{cr}^{(2)}$. On the other hand, to derive the exact Bethe equations we only need the hybrid $Q=1$ equation which, as we will see, takes the same form in both subregions, and, moreover, for any $g>g_{cr}^{(1)}$. For $g_{cr}^{(1)}<g<\bar g_{cr}^{(1)}$ the function $Y_{2|vw}$ has two complex conjugate zeros in the physical strip which, therefore, lie on the opposite sides of the real axis of the mirror $u$-plane. Thus, the zero $r_1$ with the negative imaginary part lies between the integration contour and the real line of the mirror region. Taking the integration contour back to the real line produces an extra contribution from this zero.
Since $1+Y_{1|vw}(r_j^-)=0$, and $Y_\pm(r_j)=0$, the terms $2 \log \(1 + Y_{1|vw}\) \star s \,\hat{\star}\, K_{y1}$ and $\log \big(1- \frac{1}{Y_-}\big)\big( 1- \frac{1}{Y_+} \big) \,\hat{\star}\, K_{y1}$ in the hybrid equation \eqref{TbaQsl2H} lead to the appearance of $-2 \log S\,\hat{\star}\, K_{y1}(r_j^-,v)$ and $2\log S_{yQ} (r_1,v)$,\footnote{Note that the S-matrix $S_{yQ}$ is normalized as $S_{yQ} (\pm 2,v)=1$.} respectively. The integration contour for the $\,\hat{\star}\,$-convolution in the term $-2 \log S\,\hat{\star}\, K_{y1}(r_j^-,v)$ is the one for $Y_\pm$-functions and it should be taken to the mirror region too. Since the S-matrix $S(r_1^--t)$ has a pole at $t=r_1$, this produces the extra term $-2\log S_{yQ} (r_1,v)$ that exactly cancels the previous contribution from $\log \big(1- \frac{1}{Y_-}\big)\big( 1- \frac{1}{Y_+} \big) \,\hat{\star}\, K_{y1}$. Thus, the only additional term that appears in the hybrid TBA equation \eqref{TbaQsl2H} for $g_{cr}^{(1)}<g<\bar g_{cr}^{(1)}$ is $-2 \sum_j \log S\,\hat{\star}\, K_{y1}(r_j^-,v)$. For $g>\bar g_{cr}^{(1)}$ the two zeros of $Y_\pm$ (and $Y_{2|vw}$) become real and lie a little bit above the integration contour. Therefore, the only extra contribution to the TBA equation comes from the term $2 \log \(1 + Y_{1|vw}\) \star s \,\hat{\star}\, K_{y1}$ and it is again $-2 \sum_j \log S\,\hat{\star}\, K_{y1}(r_j^-,v)$.
Thus, we see that in the first critical region $g_{cr}^{(1)}<g< g_{cr}^{(2)}$ the hybrid $Q=1$ equation takes the following form, independently of whether $g<\bar g_{cr}^{(1)}$ or $g> \bar g_{cr}^{(1)}$ \noindent $\bullet$\ Hybrid $Q=1$ equation for $g_{cr}^{(1)}<g<g_{cr}^{(2)}$ \begin{align} &\log Y_1(v) = - \sum_{j=1}^N\( \log S_{\sl(2)}^{1_*1}(u_j,v) - 2 \log S\star K^{11}_{vwx} (u_j^-,v) \) - 2\sum_{j=1}^2 \log S\,\hat{\star}\, K_{y1}(r_j^-,v) \notag\\ &\quad - L\, \widetilde{{\cal E}}_{1} + \log \left(1+Y_{Q} \right) \star \(K_{\sl(2)}^{Q1} + 2 \, s \star K^{Q-1,1}_{vwx} \) + 2 \log \(1 + Y_{1|vw}\) \star s \,\hat{\star}\, K_{y1} \notag \\ &\quad - 2 \log{1-Y_-\over 1-Y_+} \,\hat{\star}\, s \star K^{11}_{vwx} + \log {1- \frac{1}{Y_-} \over 1- \frac{1}{Y_+} } \,\hat{\star}\, K_{1} + \log \big(1- \frac{1}{Y_-}\big)\big( 1- \frac{1}{Y_+} \big) \,\hat{\star}\, K_{y1} \,. \label{TbaQsl2H2} \end{align} Note that for $g>\bar g_{cr}^{(1)}$ one can also use the p.v. prescription in the terms $\log S\,\hat{\star}\, K_{y1}(r_j^-,v) $ and $\log \big(1- \frac{1}{Y_-}\big)\big( 1- \frac{1}{Y_+} \big) \,\hat{\star}\, K_{y1}$ because the extra terms $\pm \log S_{y1}$ cancel each other.
\subsubsection*{ Exact Bethe equations: $g_{cr}^{(1)}<g< g_{cr}^{(2)}$} The exact Bethe equations are obtained by analytically continuing $\log Y_1$ in \eqref{TbaQsl2H2} following the same route as for the small $g$ case, and they take the following form \begin{align} &\pi i(2n_k+1)=\log Y_{1_*}(u_k) =i L\, p_k- \sum_{j=1}^N\, \log S_{\sl(2)}^{1_*1_*}(u_j,u_k)\label{Tba1sl2B3}\\ &\quad + 2 \sum_{j=1}^N\, \log {\rm Res}(S)\star K^{11_*}_{vwx} (u_j^-,u_k) -2 \sum_{j=1}^N\log\big(u_j-u_k-{2i\over g}\big)\, {x_j^--{1\over x_{k}^-}\over x_j^-- x_{k}^+} \notag\\ &\qquad\qquad\quad - 2\sum_{j=1}^2 \( \log S\,\hat{\star}\, K_{y1_*}(r_j^-,u_k)- \log S(r_j-u_k)\) \notag\\ &\quad + \log \left(1+Y_{Q} \right) \star \(K_{\sl(2)}^{Q1_*} + 2 \, s \star K^{Q-1,1_*}_{vwx} \)+ 2 \log \(1 + Y_{1|vw}\) \star \( s \,\hat{\star}\, K_{y1_*} + \tilde{s}\) \notag \\ &\quad - 2 \log{1-Y_-\over 1-Y_+} \,\hat{\star}\, s \star K^{11_*}_{vwx} + \log {1- \frac{1}{Y_-} \over 1- \frac{1}{Y_+} } \,\hat{\star}\, K_{1} + \log \big(1- \frac{1}{Y_-}\big)\big( 1- \frac{1}{Y_+} \big) \,\hat{\star}\, K_{y1_*} \,. \notag \end{align} We recall that in the formulae above the integration contours run a little bit above the Bethe roots $u_j$, and below the dynamical roots $r_j$. The exact Bethe equations also have the same form no matter whether $g<\bar g_{cr}^{(1)}$ or $g> \bar g_{cr}^{(1)}$. We conclude again that the consistency with the BY equations requires fulfillment of identities similar to \eqref{Rkid}, which we have checked numerically. \subsubsection*{ Integral equations for the roots $r_j$} In the first critical region the exact Bethe equations should be supplemented by integral equations which determine the exact location of the roots $r_j$. Let us recall that at $r_j^\pm=r_j\pm{i\over g}$ the function $Y_{1|vw}$ satisfies the relations \begin{eqnarray} \label{y1vwm1} Y_{1|vw}(r_j^\pm) = -1\,. \end{eqnarray} These equations can be used to find the roots $r_j$.
They can be brought to an integral form by analytically continuing the simplified TBA equation \eqref{Yforvw3c1} for $Y_{1|vw}$ to the points $r_j^\pm$. The analytic continuation is straightforward, and one gets for roots with non-positive imaginary parts \begin{eqnarray} \label{Yforvwc1} \prod_{k=1}^2 S(u_k-r_j)\exp\( \log(1 + Y_{2|vw})\star \tilde{s}+\log{1-Y_-\over 1-Y_+}\,\hat{\star}\, \tilde{s}- \log(1 + Y_{2})\star \tilde{s}\)=-1\,,~~~~~ \end{eqnarray} where we used the exponential form of the TBA equation, and took into account that $Y_{2|vw}(r_j)=Y_{2}(r_j)=Y_{\pm}(r_j)=0$. One can easily check that if $r_j$ is imaginary, that is $g< \bar g_{cr}^{(1)}$, then the product of S-matrices in \eqref{Yforvwc1} is negative, while the exponent is real. As a result, taking the logarithm of \eqref{Yforvwc1} does not produce any new mode number. If $r_j$ is real, that is $g> \bar g_{cr}^{(1)}$, then both factors in \eqref{Yforvwc1} are on the unit circle. Their phases, however, are small (as they should be due to the continuity of the roots as functions of $g$), and one does not get any mode number either. Let us also mention that eq.\eqref{Yforvwc1} can also be used to find the critical values of $g$. If one sets $r_j=-i/g$, one gets an integral equation for $g_{cr}^{(1)}$, and the case of $r_j=0$ gives the one for $\bar g_{cr}^{(1)}$. The equation \eqref{Yforvwc1} is better for numerical computations than \eqref{y1vwm1} because all the Y-functions are evaluated on the mirror real axis. \subsection{Excited states TBA equations: $g_{cr}^{(m)}<g<g_{cr}^{(m+1)}$} In this subsection we propose the TBA equations for Konishi-like states for the general case where $g_{cr}^{(m)}<g<g_{cr}^{(m+1)}$ and $m\ge 2$. Then, the functions $Y_{1|vw}\,,\ldots\, ,Y_{m-1|vw}$ have four zeros, and $Y_{m|vw}$ and $Y_{m+1|vw}$ have two zeros in the physical strip. The only two roots that can be imaginary if $g<\bar g_{cr}^{(m)}$ are $r_j^{(m+1)}$.
As was discussed in section 3, these zeros can be written in the form \eqref{zer} \begin{eqnarray} \{ u_j\,,r_j^{(3)} \} \,,\ \{ r_j^{(2)}\,,r_j^{(4)} \} \,,\ldots\,, \{ r_j^{(m-1)}\,,r_j^{(m+1)} \}\,, \{ r_j^{(m)} \} \,, \{ r_j^{(m+1)} \} \,, \end{eqnarray} where we indicate only those zeros which are in the physical strip and important for formulating the TBA equations. We also recall that the functions below have zeros at locations related to $r_j^{(k)}$ \begin{eqnarray} Y_{\pm}\big(r_j^{(2)}\big) = 0\,,\ \ 1+Y_{k|vw}\big(r_j^{(k+1)}\pm{i\over g}\big) = 0\,,\ \ Y_{k+1}\big(r_j^{(k+1)}\big) = 0\,,\quad k=1\,,\ldots\,, m\,.~~~~~~ \end{eqnarray} \subsubsection*{ Simplified TBA equations: $g_{cr}^{(m)}<g<g_{cr}^{(m+1)}$} The necessary modification of the integration contour is obvious, and the simplified equations take the following form \bigskip \noindent $\bullet$\ $M|w$-strings: their equations coincide with the ground state ones \eqref{Yforws}. \bigskip \noindent $\bullet$\ $M|vw$-strings: $\ M\ge 1\ $, $Y_{0|vw}=0$ \begin{eqnarray}\label{Yforvw3cm} &&\hspace{-0.3cm}\log Y_{M|vw}(v)=-\sum_{j=1}^2\left[ \log S(r_j^{(M),-}-v)+\log S(r_j^{(M+2),-}-v)\right]~~~~~\\\nonumber &&+ \log(1 + Y_{M-1|vw} )(1 + Y_{M+1|vw})\star s+\delta_{M1} \log{1-Y_-\over 1-Y_+}\,\hat{\star}\, s- \log(1 + Y_{M+1})\star s\,,~~~~~ \end{eqnarray} where we identify $r_j^{(1)}\equiv u_j$, and assume that $\log S(r_j^{(k),-}-v)$ is absent in the sum on the first line if $k\ge m+2$. For $M=1$ the term $\log S(u_j^--v)$ is due to the pole of $Y_+$ at $u=u_j^-$, and the second term is due to the zero of $1 + Y_{2|vw}$ at $u=r_j^{(3),-}$. For $M\ge 2$ both terms on the first line are due to the zeros of $1 + Y_{k|vw}$ at $u=r_j^{(k+1),-}$ for $k=1,\ldots, m$. \bigskip \noindent $\bullet$\ $y$-particles: their equations coincide with eqs.\eqref{Yfory1sc1} and \eqref{Yfory2sc2} for the first critical region $g_{cr}^{(1)}<g<g_{cr}^{(2)}$.
\bigskip \noindent $\bullet$\ $Q$-particles for $Q\ge 3$ \begin{eqnarray} \log Y_{Q}&=&-2\sum_{j=1}^2\, \log S(r_j^{(Q),-}-v) + \log{\left(1 + {1\over Y_{Q-1|vw}} \right)^2\over (1 + {1\over Y_{Q-1} })(1 + {1\over Y_{Q+1} }) }\star_{p.v.} s\label{YforQ2cm} \,.~~~~~~~ \end{eqnarray} The p.v. prescription is again not really needed because the four zeros $\{ r_j^{(Q-1)}\,,r_j^{(Q+1)} \}$ of $Y_{Q-1|vw}$ are cancelled by the double zeros of $Y_{Q-1}$ and $Y_{Q+1}$, and the formula works no matter whether the roots $r_j^{(m+1)}$ are real or imaginary. The equations below coincide with the corresponding eqs.\eqref{YforQ2a4c1}, \eqref{YforQ1a5}, \eqref{TbaQsl2H2} and \eqref{Tba1sl2B3} for the first critical region $g_{cr}^{(1)}<g<g_{cr}^{(2)}$ with the replacement $r_j \mapsto r_j^{(2)}$. \medskip \noindent $\bullet$\ Equations for $Q=2$ and $Q=1$-particles. \smallskip \noindent $\bullet$\ Hybrid $Q=1$ equation. \smallskip \noindent $\bullet$\ Exact Bethe equations. \smallskip According to these results, only some of the simplified equations for $Q$-particles and $vw$-strings change their form when crossing the $m$-th critical point $(m>1)$. However, since all Y-functions are coupled with each other, the existence of higher critical points still affects the equations above but in a less direct way. \subsubsection*{ Integral equations for the roots $r_j^{(k)}$: $g_{cr}^{(m)}<g<g_{cr}^{(m+1)}$} In this region the exact Bethe equations should be supplemented by integral equations which determine the exact location of the roots $r_j^{(k)}$, $k=2,\ldots, m+1$. The integral equations are obtained by analytically continuing the simplified TBA equations \eqref{Yforvw3cm} for $Y_{k|vw}$ to the points $r_j^{(k+1)\pm}$, and using the conditions \begin{eqnarray}\label{yvwmk} Y_{k|vw}(r_j^{(k+1)\pm}) = -1\,.
\end{eqnarray} The analytic continuation is again straightforward, and one gets for roots with non-positive imaginary parts the following equations \begin{eqnarray}\label{Yforvwc1m} r_j^{(2)}:&&\quad \prod_{n=1}^2 S(u_n-r_j^{(2)})S(r_n^{(3)}-r_j^{(2)})\times\\\nonumber &&\qquad\quad \times\exp\(\, \log(1 + Y_{2|vw})\star \tilde{s}+\log{1-Y_-\over 1-Y_+}\,\hat{\star}\, \tilde{s}- \log(1 + Y_{2})\star \tilde{s}\, \)=-1,\\ r_j^{(k+1)}:&&\quad \prod_{n=1}^2 S(r_n^{(k)}-r_j^{(k+1)})S(r_n^{(k+2)}-r_j^{(k+1)})\times \label{Yforvwckm} \\\nonumber &&\qquad\quad \times \exp\(\, \log(1 + Y_{k-1|vw})(1 + Y_{k+1|vw})\star \tilde{s}- \log(1 + Y_{k+1})\star \tilde{s}\, \)=-1\,. \end{eqnarray} The continuity condition of the roots as functions of $g$ again guarantees that one does not get any new mode numbers. \subsection{Further evidence for critical values of $g$} As was mentioned in the previous section, the exact location of the critical values can be found only by solving the TBA and exact Bethe equations. One may wonder whether the critical values might be absent altogether due to the contribution of the $Y_Q$-functions. Indeed, the $Y_Q$-functions at $g\approx g_{cr}$ are not small, as one can see from Figure 8. \begin{figure}[t] \begin{center} \includegraphics*[width=0.48\textwidth]{YQ1cr1}\quad \includegraphics*[width=0.48\textwidth]{YQ2cr1} \end{center} \caption{In the left and right pictures the profiles of the asymptotic $Y_{1}$ and $Y_{2}$ functions for the Konishi state at $g=g_{cr}^{(1)}=4.429$ are depicted, respectively. } \end{figure} However, their contribution to the quantities of interest is still small. To illustrate this, in Figure 9 we present the plots of the asymptotic $Y_\pm$ and $Y_{1|vw}$, together with the $Y_\pm$ and $Y_{1|vw}$ obtained after the first iteration of the simplified TBA equations (only for $u>0$ because all Y-functions are even), taking into account the contribution of the first eight $Y_Q$-functions for $g=4.38671$, $\lambda = 759.691$.
\begin{figure}[t] \begin{center} \hspace{-0.40cm}\includegraphics*[width=0.32\textwidth]{Yo1vwY1vw1it}\quad \includegraphics*[width=0.32\textwidth]{YopYp1it} \ \includegraphics*[width=0.32\textwidth]{YomYm1it} \end{center} \caption{In the left picture the profiles of the asymptotic $Y_{1|vw}$ (blue) and $Y_{1|vw}$ after the first iteration (purple) are depicted. In the middle and right pictures similar profiles for $Y_+$ and $Y_-$ are presented. } \end{figure} The blue curves are plots of the asymptotic Y-functions. As we discussed in this section, at the first critical value $Y_{1|vw}(0)=-1$, and at the first subcritical value $Y_{\pm}(0)=0$. One can see from the plots that the influence of $Y_Q$-functions at $u \approx 0$ is extremely small\footnote{Note that for $g$ far enough from the critical value, e.g. $g = 2$, the contribution of $Y_Q$-functions around $u=0$ results in a more visible change.}. This is of course expected because the finite contribution of $Y_Q$-functions cannot balance the infinities originating from the zeroes of Y-functions. We also see from the plot of $Y_{1|vw}$ that there is a tendency for the actual critical value to be higher than the asymptotic one. To show that the zeroes of Y-functions unavoidably enter the physical strip when the coupling increases, we also provide a plot of the asymptotic $Y_Q$-functions for the large value $\lambda=10^8$, see Figure 10. One can see that the functions are very small for almost all values of $u$ except those close to $\pm2$. Thus, at strong coupling all Y-functions are well-approximated around $u=0$ by their asymptotic value. On the other hand, if the zeroes are outside the physical strip then they are purely imaginary, and their positions can be determined by analytically continuing the corresponding TBA equations, see e.g. eq.\eqref{Yforvwc1}.
Assuming the roots $r_j$ are imaginary, at large $g$ the main contribution of the TBA kernel $\tilde s$ appearing in \eqref{Yforvwc1} originates from the region around $u=0$. In this region, as was explained above, it is legitimate to use the asymptotic solution which implies that the roots $r_j$ are real and close to $\pm2$. This leads to a contradiction with the assumption that the roots $r_j$ are imaginary at sufficiently large $g$. Thus, one concludes that for large $g$ the roots are real and close to $\pm2$. \begin{figure}[t] \begin{center} \includegraphics*[width=0.99\textwidth]{YoQ1}\\[2mm] \hspace{-0.40cm}\includegraphics*[width=0.99\textwidth]{YoQ2} \end{center} \caption{Profiles of the asymptotic $Y_{Q}$, $Q=1,2$ are depicted for $\lambda=10^8$. } \end{figure} \section{TBA equations for arbitrary two-particle $\sl(2)$ states} Our consideration of the TBA equations for Konishi-like states can be easily used to formulate TBA equations for an arbitrary two-particle state from the $\sl(2)$ sector. One starts again with analyzing the structure of zeros of $Y_{M|vw}$-functions at weak coupling. Then, as was discussed in section 3, for every state $(J,n)$ there is a number $m$ such that the first $m-1$ $Y_{k|vw}$-functions have four zeros, both $Y_{m|vw}$ and $Y_{m+1|vw}$ have two zeros, and all $Y_{k|vw}$-functions with $k\ge m+2$ have no zeros in the physical strip. This is exactly the structure of zeros of $Y_{M|vw}$-functions for a Konishi-like state with the coupling constant being in the $m$-th critical region: $g_{cr}^{(m)}<g< g_{cr}^{(m+1)}$. \smallskip Thus, at weak coupling the simplified TBA equations for this $(J,n)$ state have exactly the same form as the TBA equations for a Konishi-like state in the $m$-th critical region. As to the canonical TBA and exact Bethe equations, one should take into account that real zeros $r_j^{(k)}$ of $Y_{k|vw}$-functions are located outside the interval $[-2,2]$ for small $g$ for most of the $\sl(2)$ states. 
In particular, if the zeros $r_j^{(2)}$ of $Y_{2|vw}$ are outside the interval then the corresponding zeros $r_j^{(2)-}$ of $1+Y_{1|vw}$ are located on the cuts of $S_{vwx}^{1Q}(r_j^{(2)-},v)$, and since the integration contour runs below these zeros, one should add $+i0$ to them in all the expressions. In addition, the integration contour for $Y_\pm$-functions runs over the interval $[-2,2]$ (and a little bit above it), and if $|r_j^{(2)}|>2$ then there is no singularity of the integrand and the p.v. prescription is not necessary. Strictly speaking, this means that in addition to the critical and subcritical values of $g$ we discussed above, one should also consider such values of $g$ for which the zeros $r_j^{(2)}$ of $Y_{2|vw}$ are equal to $\pm2$, and distinguish the cases with the zeros being inside and outside the interval $[-2,2]$. This is straightforward to do, and we refrain from presenting explicit formulae for these cases here. \smallskip Increasing $g$, one reaches the first critical value $g_{J,n}^{1,m+1}$ of the $(J,n)$ state, and the TBA equations have to be modified accordingly. It is in fact clear that in the region $g_{J,n}^{r,m+1}<g <g_{J,n}^{r+1,m+1}$ the TBA equations for the $(J,n)$ state coincide with the ones for a Konishi-like state in the $(m+r)$-th critical region. \section{Remarks on the Y-system} For the reader's convenience, in this section we summarize the most essential properties of the Y-system implied by the TBA equations under study. \smallskip The kernel $s$ has the uniquely defined inverse $s^{-1}$, which acts as the following operator $$ (f\star s^{-1})(u)=\lim_{\epsilon\to 0^+}\big[f(u+\frac{i}{g}-i\epsilon)+f(u-\frac{i}{g}+i\epsilon)\big]\, . $$ Applying $s^{-1}$ to the set of the simplified TBA equations, one can easily derive the Y-system equations \cite{GKV09} for $-2<u<2$.
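The defining property behind this inverse, $s(u+\frac{i}{g})+s(u-\frac{i}{g})=\delta(u)$, is easy to verify numerically in Fourier space. The sketch below assumes the standard explicit form $s(u)=\frac{g}{4\cosh\frac{\pi g u}{2}}$, for which $\hat s(\omega)=\frac{1}{2\cosh(\omega/g)}$, so that the shift-sum property becomes $2\cosh(\omega/g)\,\hat s(\omega)=1$:

```python
import numpy as np

g = 1.0
u = np.linspace(-40.0, 40.0, 80001)
du = u[1] - u[0]

def s(u):
    # assumed explicit form of the kernel s
    return g / (4.0 * np.cosh(np.pi * g * u / 2.0))

# In Fourier space, s(u + i/g) + s(u - i/g) = delta(u) is equivalent to
# 2 cosh(w/g) * s_hat(w) = 1, where s_hat(w) = int du s(u) e^{i w u}.
for w in (0.0, 0.7, 1.9):
    s_hat = np.sum(s(u) * np.cos(w * u)) * du   # integrand is even in u
    assert abs(2.0 * np.cosh(w / g) * s_hat - 1.0) < 1e-8
```

The exponential decay of $s$ makes the simple Riemann sum essentially exact here; the grid parameters are illustrative.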
Since the Y-functions appear to be non-analytic, for $|u|>2$ (here and in what follows we mean $u \in (-\infty,-2) \cup (2, \infty)$ by $|u|>2$) one gets instead new equations which, as we will explain below, encode the jump discontinuities of the Y-functions across the cuts \cite{AF09b}. To exemplify this statement, it is enough to consider $Y_{1|w}$ and $Y_1$-functions. \smallskip We start with eq.\eqref{Yforws} for $Y_{1|w}$. Applying $s^{-1}$ to this equation, one finds \begin{align} Y_{1|w}^{(\alpha)}\big(u+\frac{i}{g}-i0\big)Y_{1|w}^{(\alpha)}\big(u-\frac{i}{g}+i0\big)&= \Big(1+Y_{2|w}^{(\alpha)}\Big)\frac{1-\frac{1}{Y_-^{(\alpha)}}} {1-\frac{1}{Y_+^{(\alpha)}}}(u)\, , ~~~|u|<2 \label{TBAforY} \\ Y_{1|w}^{(\alpha)}\big(u+\frac{i}{g}-i0\big)Y_{1|w}^{(\alpha)}\big(u-\frac{i}{g}+i0\big)&= 1+Y_{2|w}^{(\alpha)}(u)\, , ~~~~~~~~~~~~~~~~~~|u|>2\, . \label{TBAforY outside} \end{align} We stress that these equations are unambiguously {\it derived} from the TBA equation and the $\epsilon$-prescription on the left hand side is fixed by that of $s^{-1}$. We recall that in the TBA equations the functions $Y_{\pm}$ have their support on $[-2,2]$. \smallskip The TBA equation \eqref{Yforws} for $Y_{1|w}$ shows that this function has branch points located at $u=\pm 2\pm \frac{i}{g}$. Since we are dealing with the mirror $u$-plane, it is natural to choose the cuts to run from $\pm \infty$ to $\pm 2\pm \frac{i}{g}$ parallel to the real axis. In the same way the TBA equations show that various Y-functions have branch points located at $\pm 2\pm \frac{i}{g}Q$, $Q=0,1,2,\ldots,\infty$, and, therefore, all the cuts of all the Y-functions can be chosen to be outside the strip $|{\rm Re}\,u|<2$, and running parallel to the real axis. 
Then, Y-functions are analytic in the strip $|{\rm Re}\,u|<2$, in eq.(\ref{TBAforY}) the $\epsilon$-prescription can be dropped, and $u$ can be considered as a complex variable taking values in the strip \begin{eqnarray}\label{TBAforY1} Y_{1|w}^{(\alpha)}\big(u+\frac{i}{g}\big)Y_{1|w}^{(\alpha)}\big(u-\frac{i}{g}\big)= \Big(1+Y_{2|w}^{(\alpha)}\Big)\frac{1-\frac{1}{Y_-^{(\alpha)}}} {1-\frac{1}{Y_+^{(\alpha)}}}(u)\, , ~~~|{\rm Re}\, u|<2\, . \end{eqnarray} Thus, we see that with this (and only this) choice of the cuts the Y-system takes its standard form. In fact, analytically continuing Y-functions outside the strip $|{\rm Re}\, u|<2$, one concludes that eq.\eqref{TBAforY1} is valid for all complex values of $u$ but those which belong to the cuts. Approaching then the real axis for $|u|>2$, say, from above, one arrives at the following prescription \cite{FS} \footnote{ Naively one could try to define a new ``inverse'' operator $s^{-1}_{\small G}$\,, $ (f\star s^{-1}_{\small G})(u)=\lim_{\epsilon\to 0^+}\big[f(u+\frac{i}{g}+i\epsilon)+f(u-\frac{i}{g}+i\epsilon)\big]\,, $ and claim that applying it to the simplified TBA equation, one gets eq.\eqref{TBAforY1b} for all values of $u$. The problem with this is that the operator $s^{-1}_{\small G}$ is not inverse to $s$. In fact it annihilates $s$: $s\star s^{-1}_{\small G}=0$. } \begin{eqnarray}\label{TBAforY1b} Y_{1|w}^{(\alpha)}\big(u+\frac{i}{g}+i0\big)Y_{1|w}^{(\alpha)}\big(u-\frac{i}{g}+i0\big)= \Big(1+Y_{2|w}^{(\alpha)}\Big)\frac{1-\frac{1}{Y_-^{(\alpha)}}} {1-\frac{1}{Y_+^{(\alpha)}}}(u+i0)\, , ~~ |u|>2\, . \end{eqnarray} Such an {\it analytically continued} equation can be used to determine the jump discontinuities of the Y-functions across the cut. 
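The footnote's claim that $s^{-1}_{\small G}$ annihilates $s$ boils down to an exact cancellation between the two shifted kernels: $\cosh\big(\tfrac{\pi g}{2}(u\pm\tfrac{i}{g})\big)=\pm i\sinh\tfrac{\pi g u}{2}$, so $s(u+\tfrac{i}{g})+s(u-\tfrac{i}{g})=0$ for real $u\neq 0$, and the delta function of the true inverse $s^{-1}$ comes entirely from the pole of $s$ at $u=0$ pinched between the two contours. A minimal numerical sketch (the value of $g$ and the sample points are arbitrary choices):

```python
import numpy as np

g = 1.3  # coupling; any positive value works for this check

def s(u):
    """The kernel s(u) = g / (4 cosh(pi g u / 2)), evaluated for complex u."""
    return g / (4 * np.cosh(np.pi * g * u / 2))

# For real u != 0 the two shifts cancel exactly:
#   cosh(pi g (u ± i/g)/2) = cosh(pi g u/2 ± i pi/2) = ± i sinh(pi g u/2),
# so s(u + i/g) + s(u - i/g) = 0.  This is why the "naive inverse" s^{-1}_G,
# which shifts both arguments to the same side of the real axis, annihilates s.
for u in [0.3, -1.7, 5.0]:
    val = s(u + 1j / g) + s(u - 1j / g)
    assert abs(val) < 1e-12
```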
From the compatibility of the equation \eqref{TBAforY1b} with eq.\eqref{TBAforY outside}, we find that \begin{eqnarray} \frac{Y_{1|w}^{(\alpha)}\big(u+\frac{i}{g}+i0\big)}{Y_{1|w}^{(\alpha)}\big(u+\frac{i}{g}-i0\big)}= \frac{1-\frac{1}{Y_-^{(\alpha)}}} {1-\frac{1}{Y_+^{(\alpha)}}}(u+i0),~~~~~~~|u|>2\,, \end{eqnarray} and analogously \begin{eqnarray} \frac{Y_{1|w}^{(\alpha)}\big(u+\frac{i}{g}-i0\big)}{Y_{1|w}^{(\alpha)}\big(u+\frac{i}{g}+i0\big)}= \frac{1-\frac{1}{Y_-^{(\alpha)}}} {1-\frac{1}{Y_+^{(\alpha)}}}(u-i0),~~~~~~~|u|>2\, . \end{eqnarray} We also find that these discontinuity equations are consistent with the relation (\ref{ypym}) which follows from the TBA equations for $Y_{\pm}$. \smallskip Thus, the TBA equations tell us that, because of the absence of analyticity, the Y-system equations must be supplemented by proper jump discontinuity conditions. This was one of the important observations made in \cite{AF09b}. \smallskip Finally, we recall that for $Y_1$ one finds \begin{eqnarray}\begin{aligned} Y_{1}\big(u+\frac{i}{g}-i0\big)Y_{1}\big(u-\frac{i}{g}+i0\big)&= \frac{\Big(1-\frac{1}{Y_-^{(1)}}\Big)\Big(1-\frac{1}{Y_-^{(2)}}\Big)}{1+\frac{1}{Y_2}}\, , ~~~|u|<2 \\ Y_{1}\big(u+\frac{i}{g}-i0\big)Y_{1}\big(u-\frac{i}{g}+i0\big)&=\frac{e^{-\Delta}}{1+\frac{1}{Y_2}}\, , ~~~~~~~~~~~~~~~~~~~~~~|u|>2\,, \end{aligned}\end{eqnarray} where the explicit form of the quantity $\Delta$ is given in \cite{AF09b,AF09d} for the ground state case, and in the excited state case it can be extracted from eq.\eqref{YforQ1a5} or obtained from the ground state one by using the contour deformation trick. Once again, the second equation here determines the jump discontinuity of $Y_1$ across the cut. However, a new feature is that the jump discontinuity, that is $\Delta$, does depend on the state under consideration!
\smallskip To summarize, the Y-system exhibits the following properties dictated by the underlying TBA equations \begin{itemize} \item The Y-system is not analytic on the $u$-plane and for this reason it must be supplemented by jump discontinuity conditions; \item In general, jump discontinuities depend on the state of interest and can only be fixed by the TBA equations; \item Different Y-functions have different cut structures. In total there are infinitely many cuts on the mirror theory $u$-plane with the branch points located at $\pm 2\pm \frac{i}{g}Q$, $Q=0,1,2,\ldots,\infty$. As a result, the Y-system lives on a Riemann surface of infinite genus. \end{itemize} Needless to say, these intricate analyticity properties discovered in \cite{AF09b,FS} render the AdS/CFT Y-system rather different from its known relativistic cousins and, for this reason, make it much harder to solve. \section{Conclusions} In this paper we have analyzed the TBA equations for the ${\rm AdS}_5\times {\rm S}^5$ mirror model. We provided evidence that for any excited string state and, therefore, for any ${\cal N}=4$ SYM operator there could be infinitely many critical values of the 't Hooft coupling constant at which the TBA equations have to be modified. Locating the exact positions of the critical points is a demanding but also rather challenging problem, similar in spirit to the determination of the exact positions of Bethe roots. At the same time, we also want to give a word of caution. Our approach is based on the optimistic assumption that the analytic structure of the exact TBA solution emulates that of the large $L$ asymptotic solution. The possibility that the exact solution might develop new singularities in comparison to the asymptotic one would lead to an even more complicated scenario than the one we described here. \smallskip One could also speculate about the physical origin of critical points.
It is known that the $\sl(2)$ sector is closed to any order of perturbation theory. One possibility would be that the first critical value of $g$ of an $\sl(2)$ state is the one where this state begins to mix with states from other sectors of the theory. It would be interesting to understand if this is indeed the case. \smallskip The TBA equations and the contour deformation trick we have formulated allow one to discuss many interesting problems, and we list some of them below. \smallskip Concerning the issue of critical points, it would be very interesting to analyze the TBA equations, and compute numerically the scaling dimension of the Konishi operator in the vicinity of and beyond a critical point. The simplified TBA equations seem to be much better suited for such an analysis than the canonical ones. \smallskip Then, one should solve the TBA equations for two-particle states analytically at large $g$. The large $g$ expansion should contain none of the $\log g$ terms which follow from the BY equations \cite{Bec,RS09}. It should also fix the coefficient of the subleading $1/g$ term in the energy expansion. There are currently two different predictions for this coefficient \cite{AF05,RT09k}, obtained by using some string theory methods and relying on certain assumptions. Thus, it is important to perform a rigorous string theory computation of this coefficient. \smallskip Recently, the TBA approach has been applied to obtain the 5-loop anomalous dimension of the Konishi operator by a combination of analytical and numerical means \cite{Arutyunov:2010gb}, and the corresponding result was found to be in perfect agreement with the one based on the generalization of L\"uscher's formulae \cite{BJ09}. This constitutes an important test of the TBA equations we propose in this paper (the hybrid equations). It would be nice to support this numerical agreement by an analytic proof. \smallskip The TBA equations we have proposed are not valid for the two-particle $(J,{J+1\over 2})$ state.
In the semi-classical string limit $g\to\infty$ with $J/g$ fixed it should correspond to the folded string rotating in S$^2$ \cite{GKP02}. It would be interesting to analyze these states along the lines of our paper, write TBA equations, and solve them. \smallskip We have discussed only two-particle states. It is certainly important to generalize our analysis to arbitrary $N$-particle $\sl(2)$ states. For $J=2$ the lowest energy $N$-particle state is dual to the twist-two operator that plays an important role in field theory, see e.g. \cite{Kotikov:2007cy}. \smallskip It would also be of interest to consider the one-particle case at large $g$ and finite $J/g$. Its energy should match the string theory result \cite{AFZmag}. \smallskip Let us finally mention that one should also consider other sectors and exhibit the $\alg{psu}(2,2|4)$ invariance of the spectrum. \section*{Acknowledgements} The work of G.~A. was supported in part by the RFBR grant 08-01-00281-a, by the grant NSh-672.2006.1, by NWO grant 047017015 and by the INTAS contract 03-51-6346. The work of S.F. was supported in part by the Science Foundation Ireland under Grants No. 07/RFP/PHYF104 and 09/RFP/PHY2142. The work of R.S. was supported by the Science Foundation Ireland under Grant No. 07/RFP/PHYF104. \section{Appendices} \subsection{Kinematical variables, kernels and S-matrices}\label{app:rapidity} All kernels and S-matrices we are using are expressed in terms of the function $x(u)$ \begin{eqnarray}\label{basicx} x(u)=\frac{1}{2}(u-i\sqrt{4-u^2}), ~~~~{\rm Im}\, x(u)<0\, , \end{eqnarray} which maps the $u$-plane with the cuts $(-\infty, -2]\cup [2,\infty)$ onto the physical region of the mirror theory, and the function $x_s(u)$ \begin{eqnarray}\label{stringx} x_s(u)={u\over 2}\Big(1 + \sqrt{1-{4\over u^2}}\Big)\,,\quad |x_s(u)|\ge 1\,, \end{eqnarray} which maps the $u$-plane with the cut $[-2,2]$ onto the physical region of the string theory.
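As a quick numerical illustration, both branches invert the Zhukovsky map $x+1/x=u$. The following sketch uses the principal branch of the complex square root, which on the real slices tested reproduces the stated branch conditions of \eqref{basicx} and \eqref{stringx} (the sample points are arbitrary choices):

```python
import numpy as np

def x_mirror(u):
    """Mirror x-function x(u) = (u - i sqrt(4 - u^2))/2, cf. eq. (basicx)."""
    return 0.5 * (u - 1j * np.sqrt(4 - u**2 + 0j))

def x_string(u):
    """String x-function x_s(u) = (u/2)(1 + sqrt(1 - 4/u^2)), cf. eq. (stringx)."""
    return 0.5 * u * (1 + np.sqrt(1 - 4 / u**2 + 0j))

# Both branches invert the map x + 1/x = u.
for u in [0.4, -1.9, 1.2]:           # inside the interval (-2, 2)
    xm = x_mirror(u)
    assert abs(xm + 1 / xm - u) < 1e-12
    assert xm.imag < 0               # mirror branch: Im x(u) < 0
    assert abs(abs(xm) - 1) < 1e-12  # (-2, 2) maps to the unit circle

for u in [2.5, -3.3, 10.0]:          # outside the interval
    xs = x_string(u)
    assert abs(xs + 1 / xs - u) < 1e-12
    assert abs(xs) >= 1              # string branch: |x_s(u)| >= 1
```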
The momentum $\tilde{p}_Q$ and the energy $\tilde{\cal{E}}_Q$ of a mirror $Q$-particle are expressed in terms of $x(u)$ as follows \begin{eqnarray} {\widetilde p}_Q=g x\big(u-\frac{i}{g}Q\big)-g x\big(u+\frac{i}{g}Q\big)+i Q\, , ~~~~~\tilde{\cal{E}}_Q=\log\frac{x\big(u-\frac{i}{g}Q\big)}{x\big(u+\frac{i}{g}Q\big)}\, . \end{eqnarray} The kernels act from the right, and the three types of star operations used in this paper are defined as follows \begin{eqnarray}\nonumber &&f\star K(v) \equiv \int_{-\infty}^\infty\, du\, f(u) \, K(u,v)\,,\quad f\,\hat{\star}\, K(v) \equiv \int_{-2}^2\, du\, f(u) \, K(u,v)\,,\\ &&f\,\check{\star}\, K(v) \equiv \left(\int_{-\infty}^{-2} +\int_{2}^\infty\right)\, du\, f(u) \, K(u,v)\,. \end{eqnarray} The TBA equations discussed in this paper involve convolutions with a number of kernels which we specify below, see also \cite{AF09b} for more details. First, the following universal kernels appear in the TBA equations \begin{alignat}{2} s (u) & = \frac{1}{2 \pi i} \, \frac{d}{du} \log S(u)= {g \over 4\cosh {\pi g u \over 2}}\,,\quad S(u)=-\tanh[ \frac{\pi}{4}(u g - i)]\,, \nonumber \\ K_Q (u) &= \frac{1}{2\pi i} \, \frac{d}{du} \, \log S_Q(u) = \frac{1}{\pi} \, \frac{g\, Q}{Q^2 + g^2 u^2}\,,\quad S_Q(u)= \frac{u - \frac{iQ}{g}}{u + \frac{i Q}{g}} \,, \nonumber\\ K_{MN}(u) &= \frac{1}{2\pi i} \, \frac{d}{du} \, \log S_{MN}(u)=K_{M+N}(u)+K_{N-M}(u)+2\sum_{j=1}^{M-1}K_{N-M+2j}(u)\,,\nonumber\\ S_{MN}(u) &=S_{M+N}(u)S_{N-M}(u)\prod_{j=1}^{M-1}S_{N-M+2j}(u)^2=S_{NM}(u)\,. \label{KMMp}
\end{alignat} Then, the kernels $K_\pm^{Qy}$ are related to the scattering matrices $S_\pm^{Qy}$ of $Q$- and $y_\pm$-particles in the usual way \begin{eqnarray}\nonumber K^{Qy}_-(u,v)&=&{1\over 2\pi i}{d\over du}\log S^{Qy}_-(u,v)\,,\quad S^{Qy}_-(u,v) = \frac{x(u-i{Q\over g})-x(v)}{x(u+i{Q\over g})-x(v)} \sqrt{\frac{x(u+i{Q\over g})}{x(u-i{Q\over g})}}\,,\\\nonumber K^{Qy}_+(u,v)&=&{1\over 2\pi i}{d\over du}\log S^{Qy}_+(u,v)\,,\quad S^{Qy}_+(u,v) = \frac{x(u-i{Q\over g})-{1\over x(v)}}{x(u+i{Q\over g})-{1\over x(v)}} \sqrt{\frac{x(u+i{Q\over g})}{x(u-i{Q\over g})}}\,. \end{eqnarray} These kernels can be expressed in terms of the kernel $K_Q$, and the kernel \begin{eqnarray} K(u,v) = \frac{1}{2 \pi i} \, \frac{d}{du} \, \log S (u,v) = \frac{1}{2 \pi i} \, \frac{ \sqrt{4-v^2}}{\sqrt{4-u^2}}\, {1\over u-v} \,,\ \ S(u,v)=\frac{x(u) - x(v)}{x(u) - 1/x(v)}\,,~~~ \label{Kuv} \end{eqnarray} as follows \begin{eqnarray} K^{Qy}_\mp(u,v)&=&{1\over 2}\Big( K_Q(u-v) \pm K_{Qy}(u,v)\Big)\,, \end{eqnarray} where $K_{Qy}$ is given by \begin{eqnarray}\label{sqy} K_{Qy}(u,v)&=&{1\over 2\pi i}{d\over du}\log S_{Qy}(u,v)=K(u-\frac{i}{g}Q,v)-K(u+\frac{i}{g}Q,v)\,,\\\nonumber S_{Qy}(u,v) &=&{S_-^{Qy}(u,v) \over S_+^{Qy}(u,v) }={x(u-{i\over g}Q) - x(v)\over x(u-{i\over g}Q) - {1\over x(v)}}\,{x(u+{i\over g}Q) - {1\over x(v)}\over x(u+{i\over g}Q) - x(v)}= {S(u-\frac{i}{g}Q,v)\over S(u+\frac{i}{g}Q,v)}\, .~~~~~ \label{kqy} \end{eqnarray} The S-matrices $S^{Qy}_\pm$ and $S_{Qy}$ (and kernels $K^{Qy}_\pm$ and $K_{Qy}$) can be easily continued to the string region by using the substitution $x(u\pm i{Q\over g})\to x_s(u\pm i{Q\over g})$, and the resulting S-matrices are denoted as $S^{Q_*y}_\pm$ and $S_{Q_*y}$. Notice that $S_{Q_*y}(-\infty,v) =1$. One can also replace $x(v)$ by $x_s(v)$ in the formulae above. Then, one gets the S-matrices and kernels which are denoted as $K_{Qy}^{ms}$ ($ms$ for mirror-string) and so on.
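All these kernels are logarithmic derivatives of the corresponding S-matrices, so the closed-form expressions can be cross-checked against a finite-difference derivative. A minimal sketch for the universal kernels $s$ and $K_Q$ (the value of $g$ and the sample points are arbitrary choices):

```python
import numpy as np

g, h = 0.9, 1e-6   # sample coupling and finite-difference step

def S_scalar(u):
    """S(u) = -tanh[(pi/4)(u g - i)]."""
    return -np.tanh(np.pi / 4 * (u * g - 1j))

def S_Q(u, Q):
    """S_Q(u) = (u - iQ/g) / (u + iQ/g)."""
    return (u - 1j * Q / g) / (u + 1j * Q / g)

def log_deriv(S, u):
    """(1/2 pi i) d/du log S(u), via a central difference of S'/S."""
    return (S(u + h) - S(u - h)) / (2 * h) / S(u) / (2j * np.pi)

for u in [0.17, -1.3, 2.4]:
    # s(u) = g / (4 cosh(pi g u / 2))
    s_closed = g / (4 * np.cosh(np.pi * g * u / 2))
    assert abs(log_deriv(S_scalar, u) - s_closed) < 1e-8

    for Q in [1, 2, 5]:
        # K_Q(u) = (1/pi) g Q / (Q^2 + g^2 u^2)
        K_closed = g * Q / (np.pi * (Q**2 + g**2 * u**2))
        assert abs(log_deriv(lambda v: S_Q(v, Q), u) - K_closed) < 1e-8
```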
The following kernels and S-matrices are similar to $K_\pm^{Qy}$ \begin{eqnarray}\nonumber K^{yQ}_-(u,v)&=&{1\over 2\pi i}{d\over du}\log S^{yQ}_-(u,v)\,,\quad S^{yQ}_-(u,v) = \frac{x(u)-x(v+i{Q\over g})}{x(u)-x(v-i{Q\over g})}\sqrt{\frac{x(v-i{Q\over g})}{x(v+i{Q\over g})}}\,,\\\nonumber K^{yQ}_+(u,v)&=&{1\over 2\pi i}{d\over du}\log S^{yQ}_+(u,v)\,,\quad S^{yQ}_+(u,v) = \frac{{1\over x(u)}-x(v-i{Q\over g})}{{1\over x(u)}-x(v+i{Q\over g})}\sqrt{ \frac{x(v+i{Q\over g})}{x(v-i{Q\over g})} }\,.\end{eqnarray} They satisfy the following relations \begin{eqnarray}\nonumber K^{yQ}_\pm(u,v) &=&{1\over 2}\Big(K_{yQ}(u,v)\mp K_Q(u-v)\Big)\, ,\\\nonumber K_{yQ}(u,v)&=&{1\over 2\pi i}{d\over du}\log S_{yQ}(u,v)= K(u,v+{i\over g}Q)-K(u,v-{i\over g}Q)\,, \label{KyQuv}\\\nonumber S_{yQ}(u,v) &=&S_-^{yQ}(u,v)S_+^{yQ}(u,v) =\frac{x(u)-x(v+i{Q\over g})}{x(u)-x(v-i{Q\over g})}\frac{{1\over x(u)}-x(v-i{Q\over g})}{{1\over x(u)}-x(v+i{Q\over g})} \\ &=&{S(u,v+\frac{i}{g}Q)\over S(u,v-\frac{i}{g}Q)} {x(v-i{Q\over g})\over x(v+i{Q\over g})} \label{kyq} \, .~~~~~~ \end{eqnarray} It is worth mentioning that $S_{yQ}(\pm2,v) =1$.
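The last remark is easy to verify numerically: since $x(\pm 2)=\pm 1=1/x(\pm 2)$, the four factors in $S_{yQ}$ cancel pairwise, independently of the branch chosen for $x(v\pm i{Q\over g})$. A small sketch (the coupling and sample points are arbitrary choices):

```python
import numpy as np

g = 1.1

def x(u):
    """Mirror-type branch x(u) = (u - i sqrt(4 - u^2))/2 for complex u."""
    return 0.5 * (u - 1j * np.sqrt(4 - u**2 + 0j))

def S_yQ(u, v, Q):
    """S_yQ(u,v) as the product of the four Zhukovsky-variable factors."""
    xp, xm = x(v + 1j * Q / g), x(v - 1j * Q / g)
    xu = x(u)
    return ((xu - xp) / (xu - xm)) * ((1 / xu - xm) / (1 / xu - xp))

# At u = ±2 one has x(u) = 1/x(u) = ±1, so numerator and denominator factors
# swap pairwise and S_yQ(±2, v) = 1 identically.
for v in [0.3, -1.5, 0.9]:
    for Q in [1, 3]:
        assert abs(S_yQ(2.0, v, Q) - 1) < 1e-12
        assert abs(S_yQ(-2.0, v, Q) - 1) < 1e-12
```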
Next, we define the following S-matrices and kernels \begin{eqnarray}\nonumber S_{xv}^{QM}(u,v) &=& \frac{x(u-i{Q \over g })-x(v+i{M \over g})}{x(u+i{Q \over g })-x(v+i{M \over g})}\, \frac{x(u-i{Q \over g })-x(v-i{M \over g})}{x(u+i{Q \over g })-x(v-i{M \over g})}\, \frac{x(u+i{Q \over g })}{x(u-i{Q \over g })}~~~~\\\label{Sxv} &\times &\prod_{j=1}^{M-1}\frac{u-v-\frac{i}{g}(Q-M+2j)}{u-v+\frac{i}{g}(Q-M+2j)}\,,\\\nonumber K_{xv}^{QM}(u,v) &=&{1\over 2\pi i}{d\over du}\log S_{xv}^{QM}(u,v)\,, \end{eqnarray} and \begin{eqnarray}\nonumber S_{vwx}^{QM}(u,v) &=& \frac{x(u-i{Q \over g })-x(v+i{M \over g})}{x(u-i{Q \over g })-x(v-i{M\over g})}\, \frac{x(u+i{Q \over g })-x(v+i{M \over g})}{x(u+i{Q \over g})-x(v-i{M \over g})}\, \frac{x(v-i{M \over g })}{x(v+i{M \over g})}~~~~ \\\label{Svwx} &\times &\prod_{j=1}^{M-1}\frac{u-v-\frac{i}{g}(M-Q+2j)}{u-v+\frac{i}{g}(M-Q+2j)}\,,\\\nonumber K_{vwx}^{QM}(u,v) &=&{1\over 2\pi i}{d\over du}\log S_{vwx}^{QM}(u,v)\,. \end{eqnarray} Then, the $\sl(2)$ S-matrix $S_{\sl(2)}^{QM}$ in the uniform light-cone gauge \cite{AFrev} with the gauge parameter $a=0$ can be written in the form \begin{eqnarray}\label{Ssl2} S_{\sl(2)}^{QM}(u,v)= S^{QM}(u-v)^{-1} \, \Sigma_{QM}(u,v)^{-2}\,, \end{eqnarray} where $\Sigma^{QM}$ is the improved dressing factor \cite{AF09c}. 
The corresponding $\sl(2)$ and dressing kernels are defined as usual \begin{eqnarray} K_{\sl(2)}^{QM}(u,v)= \frac{1}{2\pi i}\frac{d}{du}\log S_{\sl(2)}^{QM}(u,v) \,,\quad K_{QM}^{\Sigma}(u,v)=\frac{1}{2\pi i}\frac{d}{du}\log \Sigma_{QM}(u,v)\,.~~~~ \end{eqnarray} The analytically continued $\sl(2)$ S-matrix is given by \begin{eqnarray}\nonumber S_{\sl(2)}^{1_*M}(u,v)&=&{1\over S_{1M}(u-v)\Sigma_{1_*M}(u,v)^2}\,,~~~~ \end{eqnarray} where the improved dressing factor is given by \cite{AF09c} \begin{eqnarray}\label{sigtot3} \begin{aligned} {1\over i}\log\Sigma_{1_*M}(u,v) &= \Phi(y_1^+,y_2^+)-\Phi(y_1^+,y_2^-)-\Phi(y_1^-,y_2^+)+\Phi(y_1^-,y_2^-) \\ &+{1\over 2}\left(\Psi(y_{2}^+,y_1^+)+\Psi(y_{2}^-,y_1^+)-\Psi(y_{2}^+,y_1^-) -\Psi(y_{2}^-,y_1^-) \right)~~~~~ \\ &+\frac{1}{2i}\log\frac{(y_1^--y_2^+)\big(y_1^- -\frac{1}{y_2^-}\big)\big(y_1^+ -\frac{1}{y_2^-}\big)}{(y_1^+-y_2^+)\big(y_1^- -\frac{1}{y_2^+}\big)^2} \,.~~~~~ \end{aligned}\end{eqnarray} Here $y_{1}^{\pm}=x_s(u\pm{i\over g})$ are parameters of a fundamental particle in string theory, and $y_{2}^{\pm}=x(v\pm{i\over g}M)$ are parameters of an $M$-particle bound state in the mirror theory, see \cite{AF09c} for details. Next, we introduce the following kernel and S-matrix \begin{eqnarray} \label{bK}\nonumber \bar{K}(u,v)= \frac{1}{2 \pi i} \, \frac{d}{du} \, \log S_{ms}(u,v)={1\over 2\pi} \frac{\sqrt{1-{4\over v^2}}}{\sqrt{4-u^2}}{v\over u-v}\,,\ \ S_{ms}(u,v)=\frac{x(u) - x_s(v)}{x(u) - {1\over x_s(v)}}\,. \end{eqnarray} With the help of this kernel we define\footnote{The definitions of the kernels $\check{K}$ and $\check{K}_Q$ differ by the sign from the ones used in \cite{AF09b}.} \begin{eqnarray} \check{K}(u,v)&=&\bar{K}(u,v)\big[\theta(-v-2)+\theta(v-2)\big]\,,\quad ~~~~\\ \label{ck1} \check{K}_Q (u,v)&=& \big[\bar{K}(u+{i\over g}Q,v) + \bar{K}(u-{i\over g}Q,v) \big]\big[\theta(-v-2) +\theta(v-2)\big]\, , \end{eqnarray} where $\theta(u)$ is the standard unit step function. 
Obviously, both $\check{K}$ and $\check{K}_Q$ vanish for $v$ in the interval $(-2,2)$ and are equal to (twice) the jump discontinuity of the kernels ${K}$ and ${K}_{Qy}$ across the real semi-lines $|v|>2$. We also use \begin{eqnarray} \label{Kss}\nonumber K_{ss}(u,v)= \frac{1}{2 \pi i} \, \frac{d}{du} \, \log S_{ss}(u,v)={1\over 2\pi i} \frac{\sqrt{1-{4\over v^2}}}{\sqrt{1-{4\over u^2}}}{v\over (u-v)}\,,\ \ S_{ss}(u,v)=\frac{x_s(u) - x_s(v)}{x_s(u) - {1\over x_s(v)}}\,, \end{eqnarray} and define $\check{S}_{Q}(u,v)$ as \begin{eqnarray} \check{S}_{Q}(u,v) = {S_{ss}(u-{i\over g}Q,v) \over S_{ss}(u+{i\over g}Q,v)}\,,\quad \check{K}_Q (u,v)= \frac{1}{2 \pi i} \, \frac{d}{du} \, \log \check{S}_{Q}(u,v)\,, \end{eqnarray} to ensure that $\check{S}_{Q}(-\infty,v)=1$. The quantity $\check{\cal E}$ is defined as \begin{eqnarray}\label{cEu} \check{\cal E}(u)=\log \frac{x (u - i0)}{x (u + i0)} = 2\log x_s(u) \neq 0 \quad {\rm for} \ \ u \in (-\infty,-2) \cup (2, \infty) \,. \end{eqnarray} The TBA equations for $Y_Q$-particles involve the kernel \begin{eqnarray}\label{Ksig} {\check K}^\Sigma_{Q} = {1\over 2\pi i} {\partial\over \partial u} \log{\check \Sigma}_{Q}= - K_{Qy}\,\hat{\star}\, \check{I}_0 + \check{I}_Q \end{eqnarray} where \begin{eqnarray}\label{checkIQ} &&\check{I}_Q=\sum_{n=1}^\infty \check{K}_{2n+Q}(u,v)=K_\Gamma^{[Q+2]}(u-v)+2\int_{-2}^2 {\rm d}t \, K_\Gamma^{[Q+2]}(u-t)\check{K}(t,v) \,,~~~~~~~\\\label{KG0} &&K_\Gamma^{[Q]}(u)={1\over 2\pi i} {d\over d u} \log \frac{\Gamma\big[{Q\over 2}-\frac{i}{2}g u\big]}{\Gamma\big[{Q\over 2}+\frac{i}{2}g u\big]}={g\gamma\over 2\pi}+ \sum_{n=1}^\infty\Big(K_{2n+Q-2}(u)-{g\over 2\pi n}\Big) \, .~~~~~~~~~~\end{eqnarray} The kernel \eqref{Ksig} is related to the dressing kernel $K_{QM}^{\Sigma}$ as follows \begin{eqnarray}\label{KSKi1} \check{K}_{Q}^\Sigma(u,v)= K^\Sigma_{Q1}(u,v+{i\over g}-i0)+K^\Sigma_{Q1}(u,v-{i\over g}+i0) - K^\Sigma_{Q2}(u,v) \,.
\end{eqnarray} The analytically continued kernel $\check{K}_{1_*}^\Sigma$ is given by \begin{eqnarray}\label{k1starsigma} \check{K}_{1_*}^\Sigma(u,v) =- K_{1_*y}\,\hat{\star}\, \check{I}_0 -K_{ss}(u-{i\over g}\,,v)\,.~~~~~~~ \end{eqnarray} Finally, integrating \eqref{k1starsigma} over the first argument, one gets \begin{eqnarray}\label{s1star} \log\check{\Sigma}_{1_*}(u,v) =- \log S_{1_*y}\,\hat{\star}\, \check{I}_0 -\log S_{ss}(u-{i\over g},v)\,.~~~~~~~ \end{eqnarray} Note that the first term here is real, but the last one is not. \subsection{Solution of the Bethe-Yang equation for the Konishi state}\label{appBY} \subsubsection*{Perturbative solution} The equation \eqref{BYe} can be solved in perturbation theory, and up to order $g^{16}$ one gets {\smaller \begin{eqnarray} p&=& \frac{2 \pi }{3}-\frac{\sqrt{3} g^2}{4}+\frac{9 \sqrt{3}g^4}{32}-\frac{3\sqrt{3}}{8} g^6 \left( \zeta (3)+1\right) +\frac{g^8 \sqrt{3}\left(960 \zeta (3)+960\zeta (5)+671 \right)}{1024} \\\nonumber &+& g^{10}\sqrt{3} \left(-\frac{141 \zeta (3)}{64}-\frac{309 \zeta (5)}{128}-\frac{315 \zeta (7)}{128}-\frac{3807 }{2560}\right) \\\nonumber &+& g^{12}\sqrt{3} \left(\frac{2799\zeta (3)}{512}+\frac{9 \zeta (3)^2}{64}+\frac{1527 \zeta (5)}{256}+\frac{3339 \zeta (7)}{512}+\frac{441 \zeta (9)}{64}+\frac{7929 }{2048}\right) \\\nonumber &+& g^{14}\sqrt{3} \left(-\frac{30015 \zeta (3)}{2048}-\frac{81 \zeta (3)^2}{128}-\frac{7929 \zeta (5)}{512}-\frac{45 \zeta (3) \zeta (5)}{64}-\frac{17127 \zeta (7)}{1024} \right. \\\nonumber &&\hspace{7cm} -\left.\frac{1197 \zeta (9)}{64}-\frac{10395 \zeta (11)}{512}-\frac{303837 }{28672}\right) \\\nonumber &+& g^{16}\sqrt{3} \left(\frac{340785 \zeta (3)}{8192}+\frac{2349\zeta (3)^2}{1024}+\frac{350505 \zeta (5)}{8192}+\frac{891 \zeta (3) \zeta (5)}{256}+\frac{225 \zeta (5)^2}{256}\right. \\\nonumber &+&\left.
\frac{183519 \zeta (7)}{4096}+\frac{945 \zeta (3) \zeta (7)}{512}+\frac{100863 \zeta (9)}{2048}+\frac{230175 \zeta (11)}{4096}+\frac{127413 \zeta (13)}{2048}+\frac{15543873 }{524288}\right)\,. \end{eqnarray}} Approximately one gets \begin{eqnarray} p&=& 2.0944-0.433013 g^2+0.487139 g^4-1.43028 g^6+4.77062 g^8\\\nonumber &&\hspace{3cm}-15.7964 g^{10}+52.5014 g^{12}-176.638 g^{14}+602.45 g^{16}\,. \end{eqnarray} The corresponding expansion of the $u$-variable is given by {\smaller \begin{eqnarray} u&=&\frac{1}{\sqrt{3} g}\Big[1+2 g^2-\frac{5 g^4}{4}+g^6 \left(\frac{7}{4}+\frac{3 \zeta(3)}{4}\right)-\frac{1}{128} g^8 (461+144 \zeta(3)+240 \zeta(5)) \\\nonumber &+& g^{10} \left(\frac{1133}{128}+\frac{63 \zeta(3)}{32}+\frac{189 \zeta(5)}{64}+\frac{315 \zeta(7)}{64}\right) \\\nonumber &-& g^{12} \left(\frac{23835}{1024}+\frac{1167 \zeta(3)}{256}+\frac{9 \zeta(3)^2}{64}+\frac{729 \zeta(5)}{128}+\frac{2079 \zeta(7)}{256}+\frac{441 \zeta(9)}{32}\right) \\\nonumber &+&g^{14} \left(\frac{64731}{1024}+\frac{3429 \zeta(3)}{256}+\frac{9 \zeta(3)^2}{128}+\frac{897 \zeta(5)}{64}+\frac{45}{64} \zeta(3) \zeta(5)+\frac{8559 \zeta(7)}{512} \right. \\\nonumber &&\hspace{7cm} \left. +\frac{189 \zeta(9)}{8}+\frac{10395 \zeta(11)}{256}\right) \\\nonumber &+&g^{16} \left(-\frac{1441077}{8192}-\frac{176445 \zeta(3)}{4096}+\frac{405 \zeta(3)^2}{512}-\frac{169371 \zeta(5)}{4096}-\frac{477}{512} \zeta(3) \zeta(5) \right. \\\nonumber &-& \left. \frac{225 \zeta(5)^2}{256}-\frac{87417 \zeta(7)}{2048}-\frac{945}{512} \zeta(3) \zeta(7)-\frac{51975 \zeta(9)}{1024}-\frac{147015 \zeta(11)}{2048}-\frac{127413 \zeta(13)}{1024}\right)\Big]\,. \end{eqnarray} } Approximately one gets \begin{eqnarray} u&=& \frac{0.57735}{g}+1.1547 g-0.721688 g^3+1.53087 g^5-3.98263 g^7 \\\nonumber &&\hspace{2cm}+11.1101 g^9 -32.8297 g^{11}+101.602 g^{13}-325.587 g^{15}\,. 
\end{eqnarray} The dimension of the Konishi operator is {\smaller \begin{eqnarray} \Delta&=& 2+3 g^2-3 g^4+\frac{21 g^6}{4}+g^8 \left(-\frac{705}{64}-\frac{9 \zeta(3)}{8}\right)+g^{10} \left(\frac{6627}{256}+\frac{135 \zeta(3)}{32}+\frac{45 \zeta(5)}{16}\right) \\\nonumber &+& g^{12} \left(-\frac{67287}{1024}-\frac{27 \zeta(3)}{2}-\frac{1377 \zeta(5)}{128}-\frac{945 \zeta(7)}{128}\right) \\\nonumber &+& g^{14} \left(\frac{359655}{2048}+\frac{10899 \zeta(3)}{256}+\frac{27 \zeta(3)^2}{128}+\frac{18117 \zeta(5)}{512}+\frac{7371 \zeta(7)}{256}+\frac{1323 \zeta(9)}{64}\right) \\\nonumber &+& g^{16} \left(-\frac{7964283}{16384}-\frac{278505 \zeta(3)}{2048}-\frac{621 \zeta(3)^2}{512}-\frac{58491 \zeta(5)}{512}-\frac{135}{128} \zeta(3) \zeta(5) \right. \\\nonumber &&\hspace{6cm} \left.-\frac{198207 \zeta(7)}{2048}-\frac{20979 \zeta(9)}{256}-\frac{31185 \zeta(11)}{512}\right) \\\nonumber &+& g^{18} \left(\frac{22613385}{16384}+\frac{3600585 \zeta(3)}{8192}+\frac{1539 \zeta(3)^2}{256}+\frac{1520127 \zeta(5)}{4096}+\frac{7101 \zeta(3) \zeta(5)}{1024} \right. \\\nonumber &+& \left. \frac{675 \zeta(5)^2}{512}+\frac{2605095 \zeta(7)}{8192}+\frac{2835 \zeta(3) \zeta(7)}{1024}+\frac{573237 \zeta(9)}{2048}+\frac{1002375 \zeta(11)}{4096}+\frac{382239 \zeta(13)}{2048}\right)\,. \end{eqnarray} } Approximately one gets \begin{eqnarray} \Delta_{\rm Konishi}&=& 2+3 g^2-3g^4+5.25 g^6-12.3679 g^8+33.8743 g^{10}-100.537 g^{12}~~~~\\\nonumber &+&313.532 g^{14} -1011.73 g^{16}+3348.11 g^{18}\,. \end{eqnarray} \begin{figure}[t] \begin{center} \includegraphics[width=.46\textwidth]{P_Konishi}\quad \includegraphics[width=.40\textwidth]{E_Konishi} \end{center} \caption{Numerical solution of the BYE for the Konishi operator. The momentum was computed for ${1\over 10}\le g\le 10$ with the step $\Delta g = {1\over 10}$, and for $10\le g\le 100$ with $\Delta g = 1$. The right picture shows the asymptotic dimension of the Konishi operator.
It approaches $2\sqrt{2\pi g}$ as expected from \cite{AFS}.} \end{figure} We have also solved the BY equation numerically for small values of $g$, and its numerical solution perfectly agrees with the analytic one. The perturbative solution works very well at least up to $g= {1\over 5}$. For $g={1\over 5}$ the difference between the analytic and numerical solutions is $\approx 5\times 10^{-10}$. \subsubsection*{Numerical solution for ${1\over 10}\le g\le 100$} The BY equation can be solved numerically up to very large values of $g$, and one gets the plot in Figure 7 for the momentum $p$ as a function of $g$. For large values of $g \sim 100$ the momentum is approximated by \begin{eqnarray} p_{\rm AFS} = \sqrt{2\pi\over g} - {1\over g}\,, \end{eqnarray} with good precision, as expected from \cite{AFS,AF05}. For $g=100$ the difference between the numerical solution and the AFS formula is equal to $-0.0016902$. If one uses the following asymptotic expression from \cite{RS09} \begin{eqnarray} p_{\rm RS} = \sqrt{2\pi\over g} - {1\over g}+\frac{0.931115+0.199472 \log (g)}{g^{3/2}}\,, \end{eqnarray} that is, if one includes the next subleading terms, then the agreement is much better: for $g=100$ the difference between the numerical solution and the Rej-Spill formula is equal to $0.0001587$. To match the coefficients in $p_{\rm RS}$ one would have to solve the BY equation for larger values of $g$. The corresponding plots of the $u$-variable are shown in Figure 3, and were discussed in section 3. \subsection{Transfer matrices and asymptotic Y-functions}\label{appT} Here we recall the construction of the asymptotic Y-functions in terms of transfer matrices corresponding to various representations of the centrally extended $\alg{su}(2|2)$ superalgebra. Consider $K^{\rm I}$ physical particles (excited states) of string theory characterized by the $u_*$-plane rapidities $u_1,\ldots, u_N$, $N\equiv K^{\rm I}$, or, equivalently, by the physical momenta $p_1,\ldots, p_N$.
Each of these particles transforms in the fundamental representation of $\alg{su}(2|2)$. Consider also a single auxiliary particle with rapidity $v$ corresponding to a bound state (atypical) representation of $\alg{su}(2|2)$ with the bound state number $a$. Scattering the bound state representation through the chain of $N$ physical particles gives rise to the following monodromy matrix $$ \mathbb{T}(v|\vec{u})=\prod_{i=1}^N \mathbb{S}_{0i}(v,u_i)\, . $$ Here $\mathbb{S}_{0i}(v,u_i)$ is the S-matrix describing the scattering of the auxiliary particle in the bound state representation with a physical particle with rapidity $u_i$. The transfer-matrix $T_a(v|\vec u)$ corresponding to this scattering process is defined as the trace of $\mathbb{T}(v|\vec u)$ over the auxiliary space of the $a$-particle bound state representation $T_a(v|\vec{u})={\rm tr}_0\mathbb{T}(v|\vec{u})$. We are mostly interested in the situation where the auxiliary particle is in the mirror theory. An eigenvalue of this transfer matrix for an anti-symmetric bound state representation of the mirror particle is given by the formula found in \cite{ALST} \begin{eqnarray}\label{eqn;FullEignvalue} &&T_a(v\,|\,\vec{u})=\prod_{i=1}^{K^{\rm{II}}}{\textstyle{\frac{y_i-x^-} {y_i-x^+}\sqrt{\frac{x^+}{x^-}} \, +}}\\ && {\textstyle{+}}\prod_{i=1}^{K^{\rm{II}}}{\textstyle{\frac{y_i-x^-}{y_i-x ^+}\sqrt{\frac{x^+}{x^-}}\left[ \frac{x^++\frac{1}{x^+}-y_i-\frac{1}{y_i}}{x^++\frac{1}{x^+}-y_i-\frac{1 }{y_i}-\frac{2i a}{g}}\right]}}\prod_{i=1}^{K^{\rm{I}}} {\textstyle{\left[\frac{(x^--x^-_i)(1-x^- x^+_i)}{(x^+-x^-_i)(1-x^+ x^+_i)}\frac{x^+}{x^-} \right]}}\nonumber\\ &&{\textstyle{+}} \sum_{k=1}^{a-1}\prod_{i=1}^{K^{\rm{II}}}{\textstyle{\frac{y_i-x^-}{y_i- x^+}\sqrt{\frac{x^+}{x^-}} \left[\frac{x^++\frac{1}{x^+}-y_i-\frac{1}{y_i}}{x^++\frac{1}{x^+}-y_i-\frac{1}{y_i}-\frac{2ik}{g}}\right]}} 
\left\{\prod_{i=1}^{K^{\rm{I}}}{\textstyle{\lambda_+(v,u_i,k)+}}\right.\left.\prod_{i=1}^{K^{\rm{I}}}{\textstyle{\lambda_-(v,u_i,k)}}\right\}\nonumber\\ &&\quad -\sum_{k=0}^{a-1}\prod_{i=1}^{K^{\rm{II}}} {\textstyle{\frac{y_i-x^-}{y_i-x^+}\sqrt{\frac{x^+}{x^-}}\left[\frac{x^+ -\frac{1}{x^+}-y_i-\frac{1}{y_i}} {x^+-\frac{1}{x^+}-y_i-\frac{1}{y_i}-\frac{2ik}{g}}\right]}}\prod_{i=1}^ {K^{\rm{I}}}{\textstyle{\frac{x^+-x^+_i}{x^+-x^-_i}\sqrt{\frac{x^-_i}{x^ +_i}} \left[1-\frac{\frac{2ik}{g}}{v-u_i+\frac{i}{g}(a-1) }\right]}}\times\nonumber\\ &&\quad\times \left\{\prod_{i=1}^{K^{\rm{III}}}{\textstyle{\frac{w_i-x^+-\frac{1}{x^+} +\frac{i(2k-1)}{g}}{w_i-x^+-\frac{1}{x^+}+\frac{i(2k+1)}{g}}+ }}\prod_{i=1}^{K^{\rm{II}}}{\textstyle{\frac{y_i+\frac{1}{y_i}-x^+-\frac {1}{x^+}+\frac{2ik}{g}}{y_i+\frac{1}{y_i}-x^+-\frac{1}{x^+}+\frac{2i(k+1 )}{g}}}}\prod_{i=1}^{K^{\rm{III}}}{\textstyle{\frac{w_i-x^+-\frac{1}{x^+ }+\frac{i(2k+3)}{g}}{w_i-x^+-\frac{1}{x^+}+\frac{i(2k+1)}{g}}}}\right\}. \nonumber \end{eqnarray} Eigenvalues are parametrized by solutions of the auxiliary Bethe equations: \begin{eqnarray} \label{bennote} \prod_{i=1}^{K^{\rm{I}}}\frac{y_k-x^-_i}{y_k-x^+_i}\sqrt{\frac{x^+_i}{x^ -_i}}&=& \prod_{i=1}^{K^{\rm{III}}}\frac{w_i-y_k-\frac{1}{y_k}+\frac{i}{g}}{w_i-y _k-\frac{1}{y_k}-\frac{i}{g}},\\ \prod_{i=1}^{K^{\rm{II}}}\frac{w_k-y_i-\frac{1}{y_i}+\frac{i}{g}}{w_k-y_ i-\frac{1}{y_i}-\frac{i}{g}} &=& \prod_{i=1,i\neq k}^{K^{\rm{III}}}\frac{w_k-w_i+\frac{2i}{g}}{w_k-w_i-\frac{2i}{g}}.\nonumber \end{eqnarray} In the formulae above $$ v=x^++\frac{1}{x^+}-\frac{i}{g}a=x^-+\frac{1}{x^-}+\frac{i}{g}a\,, $$ and $v$ takes values in the mirror theory $v$-plane, so $x^\pm = x(v \pm {i\over g}a)$ where $x(v)$ is the mirror theory $x$-function. As was mentioned above, $u_j$ take values in string theory $u$-plane, and therefore $x_j^\pm = x_s(u_j \pm {i\over g})$ where $x_s(u)$ is the string theory $x$-function. 
Finally, the quantities $\lambda_{\pm}$ are \begin{eqnarray}\nonumber \hspace{-1cm} \lambda_\pm(v,u_i,k)&=&\frac{1}{2}\left[1-\frac{(x^-_ix^+-1) (x^+-x^+_i)}{(x^-_i-x^+) (x^+x^+_i-1)}+\frac{2ik}{g}\frac{x^+ (x^-_i+x^+_i)}{(x^-_i-x^+) (x^+x^+_i-1)}\right.\\ \label{eqn;lambda-pm} &&~~~~~~~~~~~~\left.\pm\frac{i x^+ (x^-_i-x^+_i)}{(x^-_i-x^+) (x^+x^+_i-1)}\sqrt{4-\left(v-\frac{i(2k-a)}{g}\right)^2}\right]\, . \end{eqnarray} For the $\sl(2)$-sector $K^{\rm II}=0=K^{\rm III}$ and the expression above simplifies to \begin{eqnarray}\label{TS} T_a(v\,|\,\vec{u})&=&1+\prod_{i=1}^{K^{\rm{I}}} \frac{(x^--x^-_i)(1-x^- x^+_i)}{(x^+-x^-_i)(1-x^+ x^+_i)}\frac{x^+}{x^-}\\ &&\hspace{-0.5cm} -2\sum_{k=0}^{a-1}\prod_{i=1}^{K^{\rm{I}}} \frac{x^+-x^+_i}{x^+-x^-_i}\sqrt{\frac{x^-_i}{x^+_i}} \left[1-\frac{\frac{2ik}{g}}{v-u_i+\frac{i}{g}(a-1)}\right]+\sum_{m=\pm} \sum_{k=1}^{a-1}\prod_{i=1}^{K^{\rm I}}\lambda_m(v,u_i,k)\, . \nonumber \end{eqnarray} For a two-particle physical state, the last formula appears to coincide up to a gauge transformation with \cite{Beisert06b} \begin{eqnarray}\nonumber\hspace{-0.5cm} T_a(v\,|\,\vec{u})&=&\frac{x^+}{x^-}\left[(1+a)\prod_{i=1}^2\frac{x^--x^-_i}{x^+-x^-_i} +(a-1)\prod_{i=1}^2\frac{x^--x^+_i}{x^+-x^-_i}\frac{x^-_i-\frac{1}{x^+}}{x^+_i-\frac{1}{x^+}}\right.\\ &&~~~~~~~~~-\left. a\prod_{i=1}^2\frac{x^--x^+_i}{x^+-x^-_i} -a\prod_{i=1}^2\frac{x^--x^-_i}{x^+-x^-_i}\frac{x^-_i-\frac{1}{x^+}}{x^+_i-\frac{1}{x^+}}\right]\, , \label{TA}\end{eqnarray} which is nothing else but an eigenvalue of the transfer matrix evaluated on the fermionic vacuum and continued to the mirror region for the auxiliary variable. In numerical computations formula (\ref{TA}) works much faster than (\ref{TS}) and, therefore, we use it for constructing the asymptotic Y-functions.
\smallskip The transfer matrices $T_{a,s}$ are used to introduce Y-functions $Y_{a,s}$ which solve Y-system equations in the following standard way \cite{Kuniba:1993cn,Tsuboi, GKV09} \begin{eqnarray} Y_{a,s}=\frac{T_{a,s+1}T_{a,s-1}}{T_{a+1,s}T_{a-1,s}} \end{eqnarray} The transfer matrices $T_{a,s}$ solve the so-called Hirota equations \cite{Hirota} and they can be computed via $T_{a,1}\equiv T_a(v\,|\,\vec{u})$ through the Bazhanov-Reshetikhin formula \cite{BR} which we present here for the asymptotic solution \begin{eqnarray} T_{a,s}=\hbox{det}_{1\leq i,j\leq s}T_{a+i-j}\Big(v+\frac{i}{g}(s+1-i-j)\mid\vec{u}\Big)\, . \end{eqnarray} If here $a+i-j< 0$ then the corresponding element $T_{a+i-j}\Big(v+\frac{i}{g}(s+1-i-j)\mid\vec{u}\Big)$ is regarded as zero. Also, $T_0\Big(v+\frac{i}{g}(s+1-i-j)\mid \vec{u}\Big)=1$. In other words, asymptotic $T_{a,1}$ and $T_{1,s}$ satisfy the boundary conditions: $T_{0,s} =1=T_{a,0}$, $T_{a<0,s}=0$ and $T_{a,s<0}=0$. \smallskip Here for reader's convenience we recall the relationship of the Y-functions introduced in \cite{AF09b} and those of \cite{GKV09}. For the auxiliary Y-functions we have \begin{eqnarray}\begin{aligned} &Y_-^{(1)}=-\frac{1}{Y_{1,1}}=-\frac{T_{2,1}T_{0,1}}{T_{1,2}T_{1,0}}\, , & Y_-^{(2)}& =-\frac{1}{Y_{1,-1}}=-\frac{T_{2,-1}T_{0,-1}}{T_{1,0}T_{1,-2}}\, , \\ &Y_+^{(1)}=-Y_{2,2}=-\frac{T_{2,3}T_{2,1}}{T_{3,2}T_{1,2}}\, , &Y_+^{(2)}&=-Y_{2,-2}=-\frac{T_{2,-1}T_{2,-3}}{T_{3,-2}T_{1,-2}}\, ,\\ &Y_{Q|vw}^{(1)}=\frac{1}{Y_{Q+1,1}}=\frac{T_{Q+2,1}T_{Q,1}}{T_{Q+1,2}T_{ Q+1,0}}\, , &Y_{Q|vw}^{(2)}&=\frac{1}{Y_{Q+1,-1}}=\frac{T_{Q+2,-1}T_{Q,-1}}{T_{Q+1,0 }T_{Q+1,-2}}\, ,\\ &Y_{Q|w}^{(1)}=Y_{1,Q+1}=\frac{T_{1,Q+2}T_{1,Q}}{T_{2,Q+1}T_{0,Q+1}}\, , &Y_{Q|w}^{(2)}&=Y_{1,-Q-1}=\frac{T_{1,-Q}T_{1,-Q-2}}{T_{2,-Q-1}T_{0,-Q-1 }}\, . \end{aligned} \end{eqnarray} The asymptotic functions $Y_Q(v)$ in the $\sl(2)$-sector are given by (\ref{YQasympt}). 
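To see concretely how the determinant formula implements these boundary conditions, note that for $a=0$ the matrix $T_{i-j}$ is lower triangular with unit diagonal, so $T_{0,s}=1$ identically. Here is a minimal numerical sketch; the function `T` below is a hypothetical stand-in for the actual transfer matrix (only the values $T_0=1$ and $T_{a<0}=0$ matter for the check):

```python
import numpy as np

g = 2.0  # coupling; arbitrary value for this check

def T(a, v):
    """Toy stand-in for T_a(v | u-vec): only T_0 = 1 and T_{a<0} = 0 are
    fixed; for a > 0 any smooth function will do (hypothetical choice)."""
    if a < 0:
        return 0.0 + 0.0j
    if a == 0:
        return 1.0 + 0.0j
    return 1.0 / (a + v**2 + 1j * v)

def T_as(a, s, v):
    """Bazhanov-Reshetikhin determinant
    T_{a,s}(v) = det_{1<=i,j<=s} T_{a+i-j}(v + (i/g)(s+1-i-j)),
    where the i in the shift is the imaginary unit, not the row index."""
    M = np.array([[T(a + ii - jj, v + (1j / g) * (s + 1 - ii - jj))
                   for jj in range(1, s + 1)]
                  for ii in range(1, s + 1)])
    return np.linalg.det(M)
```

One checks that $T_{a,1}(v)=T_a(v)$ and $T_{0,s}=1$ for any toy input, in accordance with the boundary conditions above. With this in hand we return to the asymptotic functions $Y_Q(v)$ of (\ref{YQasympt}).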
Note that in this expression the transfer matrix naturally comes with a prefactor \begin{eqnarray} S_Q(v\,|\,\vec{u})= \prod_{i=1}^{K^{\rm{I}}} \sqrt{S_{\sl(2)}^{Q1_*}(v,u_i)} = \prod_{i=1}^{K^{\rm{I}}} {1\over \sqrt{S_{\sl(2)}^{1_*Q}(u_i,v)}} \, .\end{eqnarray} This prefactor can be split as follows \begin{eqnarray} \label{split} S_Q(v\,|\,\vec{u}) = \prod_{i=1}^{K^{\rm{I}}} {1\over \sqrt{S_{\sl(2)}^{1_*Q}(u_i,v)h_Q(u_i,v)}} \prod_{k=1}^{K^{\rm{I}}} \sqrt{h_Q(u_k,v)} \, , \end{eqnarray} where $h_Q(u,v)$ is introduced in \eqref{siqvwx}. This quantity satisfies the equation $$ h_Q\Big(u,v+\frac{i}{g}\Big)h_Q\Big(u,v-\frac{i}{g}\Big)=h_{Q+1}(u,v)h_{ Q-1}(u,v) $$ and is a pure phase when $u$ and $v$ are in the string and mirror regions, respectively. The splitting (\ref{split}) can be used to introduce the normalized transfer-matrix $\widetilde{T}_{Q,1}(v) $ $$ \widetilde{T}_{Q,1}(v\mid \vec{u}) \equiv\prod_{k=1}^{K^{\rm{I}}} \sqrt{h_Q(u_k,v)}\, T_{Q}(v\,|\,\vec{u}) $$ which renders the corresponding $\widetilde{Y}_Q=e^{-J\tilde{\cal E}_Q}\widetilde{T}_{Q,1}^2(v | \vec{u})$ real. The functions $\widetilde{Y}_Q$ represent a simple and useful tool for checking the TBA equations. \subsection{Hybrid equations for $Y_Q$-functions}\label{hybridQ} In this appendix we derive the hybrid TBA equations for $Y_Q$-functions. We discuss only the ground state TBA equations because the excited states equations can be obtained by using the contour deformation trick. The canonical TBA equation for $Q$-particles reads \begin{align} &\log Y_Q = - L\, \widetilde{{\cal E}}_{Q} + \log\left(1+Y_{Q'} \right) \star K_{\sl(2)}^{Q'Q} \label{TbaQsl2app} \\[1mm] &\quad + 2 \log\left(1+ \frac{1}{Y_{M'|vw}} \right) \star K^{M'Q}_{vwx} + 2 \log \left(1- \frac{1}{Y_-} \right) \,\hat{\star}\, K^{yQ}_- + 2 \log \left(1- \frac{1}{Y_+} \right) \,\hat{\star}\, K^{yQ}_+ \,. \notag \end{align} To derive the hybrid equation we need to compute the first sum on the second line of this equation. 
It is done by using the simplified equations for $Y_{M|vw}$ \begin{align}\label{SimYvw} \log Y_{M|vw} &= \log(1 + Y_{M-1|vw})(1 + Y_{M+1|vw}) \star s \\ &\notag - \log(1 + Y_{M+1})\star s + \delta_{M1} \log{1-Y_-\over 1-Y_+} \,\hat{\star}\, s . \end{align} Let us assume that we have some kernels ${\sf K}_M$ which satisfy the following identities \begin{equation} {\sf K}_M - s \star \({\sf K}_{M+1} + {\sf K}_{M-1}\) = \delta {\sf K}_M \quad \(M \ge 2\), \qquad {\sf K}_1 - s \star {\sf K}_2 = \delta {\sf K}_1 \,, \label{def:delta KM} \end{equation} where $ \delta {\sf K}_M$ are known kernels. Then, applying the kernel ${\sf K}_M$ to both sides of \eqref{SimYvw}, and taking the sum over $M$ from 1 to $\infty$, we obtain the formula \begin{align}\label{identity sum over Mvw} \sum_{M=1}^\infty \log \(1 + {1 \over Y_{M|vw}}\) \star {\sf K}_M &= \sum_{M=1}^\infty \log \(1 + Y_{M|vw}\) \star \delta {\sf K}_M \\[1mm] &+ \sum_{M=1}^\infty \log(1 + Y_{M+1}) \star s \star {\sf K}_M - \log{1-Y_-\over 1-Y_+} \star {\sf K}_1 \,. \notag \end{align} Now choosing the kernel ${\sf K}_M$ to be $K^{MQ}_{vwx}$ and using the formula (6.31) from \cite{AF09b} \begin{align} K^{MQ}_{vwx} - s \star \( K^{M+1,Q}_{vwx} + K^{M-1,Q}_{vwx} \) &= \delta_{M+1,Q} \, s \qquad \(M \ge 2\), \notag \\ K^{1Q}_{vwx} - s \star K^{2Q}_{vwx} &= s \,\hat{\star}\, K_{yQ} + \delta_{2,Q} \, s\,, \end{align} and the identity \eqref{identity sum over Mvw}, we get \begin{align}\notag &\sum_{M=1}^\infty \log \(1 + {1 \over Y_{M|vw}}\) \star K^{MQ}_{vwx} = \log \(1 + Y_{1|vw}\) \star s \,\hat{\star}\, K_{yQ} + \delta_{M+1,Q} \log \(1 + Y_{M|vw}\) \star s \label{identity sum over Mvw3} \\[1mm] &\quad + \sum_{M=1}^\infty \log(1 + Y_{M+1}) \star s \star K^{MQ}_{vwx} - \log{1-Y_-\over 1-Y_+} \,\hat{\star}\, s \star K^{1Q}_{vwx} \,. \end{align} Finally, substituting this formula into \eqref{TbaQsl2app}, we get the hybrid ground state TBA equation. 
\subsection{Analytic continuation of $Y_1(v)$}\label{app:Y1} To derive the exact Bethe equations \eqref{Tba1sl2B} one has to analytically continue $Y_1(z)$ in eq.(\ref{TbaQsl2H}) to the point $z_{*k}$. On the $v$-plane it means that we go from the real $v$-line down below the line with Im$(v)=-{1\over g}$ without crossing any cut, then turn back, cross the cut with Im$(v)=-{1\over g}$ and $|$Re$(v)|>2$, and go back to the real $v$-line. As a result we should make the following replacements $x(v-{i\over g}) \to x_s(v-{i\over g}) = x(v-{i\over g})$, $x(v+{i\over g}) \to x_s(v+{i\over g}) = 1/x(v-{i\over g})$ in the kernels appearing in (\ref{Y1m1}). The analytic continuation depends on the analytic properties of the kernels and Y-functions. Let us consider the terms in eq.(\ref{TbaQsl2H}) order by order \subsubsection*{Terms with $Y_\pm(v)$-functions} For the analytic continuation of the last two terms in \eqref{TbaQsl2H} it is convenient to use the kernels $K^{y1}_\pm$ given by \begin{eqnarray}\label{Ky1m} K^{y1}_-(u,v)&=&{1\over 2\pi i}{d\over du}\log S^{y1}_-(u,v)\,,\quad S^{y1}_-(u,v)= \frac{x(u)-x(v+{i\over g})}{x(u)-x(v-{i\over g})}\sqrt{\frac{x(v-{i\over g})}{x(v+{i\over g})}}\,,~~~~~\\ \label{Ky1p} K^{y1}_+(u,v)&=&{1\over 2\pi i}{d\over du}\log S^{y1}_+(u,v)\,,\quad S^{y1}_+(u,v)= \frac{{1\over x(u)}-x(v-{i\over g})}{{1\over x(u)}-x(v+{i\over g})}\sqrt{ \frac{x(v+{i\over g})}{x(v-{i\over g})} } \,.\end{eqnarray} It is clear from these equations that $K^{y1}_+(u,v)$ remains regular in the analytic continuation. $K^{y1}_-(u,v)$ on the other hand has a pole at $v= u-{i\over g}$ and behaves as \begin{eqnarray} K^{y1}_-(u,v)&=& {1\over 2\pi i}{1\over u-v-{i\over g}} +\ regular\ at\ v \sim u-{i\over g}\,. \end{eqnarray} Thus, one needs to analyze the analytic continuation of a function defined by the following integral for real $v$ \begin{eqnarray} F(v) = {1\over 2\pi i}\int_{-2}^2\, du\, f(u){1\over u-v-{i\over g}} \,. 
\end{eqnarray} The consideration is the same as the one for the dressing phase in \cite{AF09c}, and after $v$ crosses the interval $[-2-{i\over g},2-{i\over g}]$ one gets the following expression for $F(v)$ \begin{eqnarray}\label{Fvafter} F(v) =f(v+{i\over g})+ {1\over 2\pi i} \int_{-2}^2\, du\, f(u){1\over u-v-{i\over g}}\,,\quad {\rm Im}(v)<-{1\over g} \,. \end{eqnarray} Then we go back to the real $v$-line but we do not cross the interval $[-2-{i\over g},2-{i\over g}]$, and therefore \eqref{Fvafter} remains the same. However, we should also analytically continue $f(v+{i\over g})$ back to real values of $v$. Thus we conclude that the analytic continuation of \begin{eqnarray} \log \left(1- \frac{1}{Y_{-}} \right) \,\hat{\star}\, K^{y1}_- + \log\left(1- \frac{1}{Y_+} \right) \,\hat{\star}\, K^{y1}_+ \end{eqnarray} is given by \begin{eqnarray}\label{ypmc} \log \left(1- \frac{1}{Y_{*-}(v+{i\over g})} \right) +\log \left(1- \frac{1}{Y_-} \right) \,\hat{\star}\, K^{y1_*}_- + \log\left(1- \frac{1}{Y_+} \right) \,\hat{\star}\, K^{y1_*}_+ \,,~~~~~ \end{eqnarray} where $Y_{*-}(v)$ is the $Y_{-}(v)$-function analytically continued to the upper half-plane through the cut $|v|>2$. As was discussed in section 4, $Y_{*-}$ coincides with $Y_{+}$, and, therefore, to find $Y_{*-}(v+{i\over g})$ one can just use the analytic continuation of the TBA equation \eqref{Tbaysl2} for $Y_{+}(v)$ to $Y_{+}(v+{i\over g})$. Since the S-matrix $S_-^{1_*y}(u_k,v)$ has a pole at $v=u_k+{i\over g}$, see appendix \ref{app:rapidity}, one concludes that ${Y_{*-}(u_k+{i\over g})}=\infty$, and, therefore, the contribution of the first term in \eqref{ypmc} vanishes in the exact Bethe equation $Y_{1_*}(u_k) = -1$. Nothing dangerous happens with the term $\log{1-Y_-\over 1-Y_+} \,\hat{\star}\, s \star K^{11}_{vwx}$ because there is no singularity in the analytic continuation process of $K^{11}_{vwx}$ until we get back to the real $v$-line. 
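The residue pick-up in \eqref{Fvafter} is easy to confirm numerically: the defining integral is discontinuous across the cut ${\rm Im}(v)=-{1\over g}$, $|{\rm Re}(v)|<2$, and the jump equals $f(v+{i\over g})$, which is precisely the extra term the continued function must carry. A quick sketch with a toy choice of $f$ (the tolerance reflects the finite regulator $\epsilon$):

```python
import numpy as np

g = 2.0

def f(u):
    # any function analytic in a neighbourhood of [-2, 2]; toy choice
    return np.exp(-u**2)

def F(v, n=40001):
    """F(v) = (1/(2 pi i)) * int_{-2}^{2} du f(u)/(u - v - i/g),
    evaluated with a fine trapezoidal rule."""
    u = np.linspace(-2.0, 2.0, n)
    y = f(u) / (u - v - 1j / g)
    du = u[1] - u[0]
    return (np.sum(y) - 0.5 * (y[0] + y[-1])) * du / (2j * np.pi)

# The jump of F across the cut Im(v) = -1/g, |Re(v)| < 2 equals the
# residue term f(v + i/g) picked up in the analytic continuation:
x0, eps = 0.3, 0.02
jump = F(x0 - 1j / g + 1j * eps) - F(x0 - 1j / g - 1j * eps)
```

The same mechanism governs the continuation of the kernels discussed below.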
Then, we get for the analytically continued $K^{11_*}_{vwx}$ \begin{eqnarray}\label{K11svwx} &&K^{11_*}_{vwx}(u,v)={1\over 2\pi i}{d\over du}\log S^{11_*}_{vwx}(u,v)\,,\quad\\\nonumber &&S^{11_*}_{vwx}(u,v)= \frac{x(u-{i\over g})-x_s(v+{i\over g})}{x(u+{i\over g})-x_s(v-{i\over g})} \,\frac{{1\over x(u-{i\over g})}-x_s(v-{i\over g})}{{1\over x(u+{i\over g})}-x_s(v+{i\over g})}\,,~~~~\end{eqnarray} and it shows a pole at $v=u$ \begin{eqnarray}\label{Km1svwx2} &&K^{11_*}_{vwx}(u,v)=-{1\over 2\pi i}\,{1\over u-v} +\ regular\ at\ v \sim u\,.~~~~\end{eqnarray} Since we integrate over a line lying above the real line, the pole is harmless provided the function convoluted with the kernel is regular, as is the case here. \subsubsection*{Terms with $Y_{M|vw}$-functions} The analytic continuation of the term $\log \(1 + Y_{1|vw}\) \star s \,\hat{\star}\, K_{y1}$ is given by \begin{eqnarray} \log \(1 + Y_{1|vw}\) \star (s \,\hat{\star}\, K_{y1} + \tilde{s}) \end{eqnarray} because $K_{y1}(u,v)$ has a pole at $v=u-{i\over g}$. \subsubsection*{Terms with $Y_Q$-functions} The kernel $K_{\sl(2)}^{Q1}$ is given by \begin{eqnarray}\label{Kq1sl2} K_{\sl(2)}^{Q1}(u,v)&=& - K_{Q1}(u-v) - 2 K^\Sigma_{Q1}(u,v)\\\nonumber &=& - K_{Q-1}(u-v)- K_{Q+1}(u-v) - 2 K^\Sigma_{Q1}(u,v) \,. \end{eqnarray} Since $K^\Sigma_{Q1}(u,v)$ is a holomorphic function if $u$ is in the mirror region and $v$ is in the string one, only $K_{Q1}$ can cause any problem with the analytic continuation. Moreover, we see immediately that only the $Q=2$ case should be treated with care. It is easy to show, however, that the analytic continuation does not give any extra term, because the integral over $u$ is taken from $-\infty$ to $\infty$ and $v$ crosses the line Im$(u)=-{1\over g}$ twice. This distinguishes the present case from the $Y_\pm$ one. The term with $K^{Q'-1,1}_{vwx}(v) $ is harmless too because $s$ is regular.
\subsubsection*{The term $\log S\star K^{11}_{vwx} (u_j^-,v)$} This term is similar to $\log\left(1+ \frac{1}{Y_{1|vw}} \right)\star K^{11}_{vwx}$ considered above, and its analytic continuation is given by \begin{eqnarray} \log {\rm Res}(S)\star K^{11_*}_{vwx} (u_j^-,u_k) -\sum_{j=1}^N\log\big(u_j-u_k-{2i\over g}\big)\, {x_j^--{1\over x_{k}^-}\over x_j^-- x_{k}^+}\,. \end{eqnarray} \medskip Summing up all the contributions, one gets the exact Bethe equations \eqref{Tba1sl2B} and \eqref{Tba1sl2}. We have shown that the r.h.s. of \eqref{Tba1sl2B} is purely imaginary, and, therefore, the exact Bethe equations can be also written in the form \begin{align} &\pi (2n_k+1)=L\, p_k +i\sum_{j=1}^N\, \log S_{\sl(2)}^{1_*1_*}(u_j,u_k)\label{Tba1sl2Bb}\\ &\quad +2 \sum_{j=1}^N\, \log {\rm Res}(S)\star {\rm Im}K^{11_*}_{vwx} (u_j^-,u_k) -2 \sum_{j=1}^N{\rm Im}\log\big(u_j-u_k-{2i\over g}\big)\, {x_j^--{1\over x_{k}^-}\over x_j^-- x_{k}^+} \notag\\ &\quad -2 \log \left(1+Y_{Q} \right) \star \({\rm Im}K_{Q1_*}^\Sigma - s \star {\rm Im}K^{Q-1,1_*}_{vwx} \)- 2i \log \(1 + Y_{1|vw}\) \star \( \tilde{s} +s \,\hat{\star}\, K_{y1_*} \) \notag \\ &\quad - 2 \log{1-Y_-\over 1-Y_+} \,\hat{\star}\, s \star {\rm Im}K^{11_*}_{vwx} -i \log \big(1- \frac{1}{Y_-}\big)\big( 1- \frac{1}{Y_+} \big) \,\hat{\star}\, K_{y1_*} \,. \notag \end{align} \subsubsection*{Analytic continuation of the canonical TBA equation for $Y_1$} Let us also consider the analytic continuation of the canonical TBA equation for $Y_1$. Then, the kernel $K^{M1}_{vwx}$ is given by \begin{eqnarray}\label{Km1vwx} &&K^{M1}_{vwx}(u,v)={1\over 2\pi i}{d\over du}\log S^{M1}_{vwx}(u,v)\,,\quad\\\nonumber &&S^{M1}_{vwx}(u,v)= \frac{x(u-{i\over g}M)-x(v+{i\over g})}{x(u+{i\over g}M)-x(v-{i\over g})} \,\frac{{1\over x(u-{i\over g}M)}-x(v-{i\over g})}{{1\over x(u+{i\over g}M)}-x(v+{i\over g})}\,,~~~~\end{eqnarray} and nothing dangerous happens for $M>1$. So, we just have to consider the $M=1$ case. 
Since $Y_{1|vw}(u_j)=0$, we should take care of the resulting log-singularity before performing the analytic continuation. Introducing $Z$-functions as in \eqref{YvwZvw}, one gets for the term in the canonical TBA equation (\ref{TbaQsl2}) \begin{eqnarray}\nonumber &&2 \log\left(1+ \frac{1}{Y_{M|vw}} \right)\star K^{M1}_{vwx} = 2 \log{M^2\over M^2-1}\star K^{M-1,1}_{vwx} - 2 \log\prod_{j=1}^N \left( u - {u_j}\right)\star K^{11}_{vwx}~~~~~ \\ &&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+2 \log Z_{M|vw}\star K^{M1}_{vwx}\\\nonumber &&~~~~~~~=2\log 2 -2 \sum_{j=1}^N\log\big(u_j-v-{2i\over g}\big)\, {x_j^--{1\over x^-}\over x_j^--{1\over x^+}} +2\log Z_{M|vw}\star K^{M1}_{vwx} \,,~~~~~~~~~ \end{eqnarray} where we used $ \log{M^2\over M^2-1}\star K^{M-1,1}_{vwx}=\log 2$, and the following formula \begin{eqnarray}\nonumber \int_{-\infty}^\infty\, dt\, \log( t +i0- u_j)\, K^{11}_{vwx}(t,v) = \log\big(u_j-v-{2i\over g}\big)\, {x_j^--{1\over x^-}\over x_j^--{1\over x^+}} \,.~~~~~\label{tmuKvwx} \end{eqnarray} The analytic continuation in $v$ then gives \begin{eqnarray}\nonumber 2\log\left(1+ \frac{1}{Y_{M|vw}} \right)\star K^{M1_*}_{vwx}(v) &=& 2\log 2 -2 \sum_{j=1}^N\log\big(u_j-v-{2i\over g}\big)\, {x_j^--{1\over x^-}\over x_j^-- x^+} \\\nonumber &+&2 \log Z_{M|vw}\star_{p.v.} K^{M1_*}_{vwx} + \log Z_{1|vw}(v) \,. \end{eqnarray} This equation seems to coincide with the one derived in \cite{GKV09b} after one performs the parity transformation $u_j\to -u_j$, and sets $g=2$. \subsection{Canonical TBA equations} \label{canTBA} \subsubsection*{Canonical TBA equations: $g<g_{cr}^{(1)} $} The canonical ground state TBA equations \cite{AF09b} are derived from the string hypothesis for the mirror model \cite{AF09a} by following a textbook route, see e.g.
\cite{Korepin}, and can be written in the form \medskip \noindent $\bullet$\ $Q$-particles \begin{multline} V_Q\equiv \log Y_Q + L\, \widetilde{{\cal E}}_{Q} - \log\left(1+Y_{Q'} \right) \star K_{\sl(2)}^{Q'Q} - 2\log\left(1+ \frac{1}{Y_{M'|vw}} \right) \star K^{M'Q}_{vwx} \\ - 2\log \left(1- \frac{1}{Y_-} \right) \,\hat{\star}\, K^{yQ}_- - 2\log\left(1- \frac{1}{Y_+} \right) \,\hat{\star}\, K^{yQ}_+ = 0\,, \label{TbaQsl2v} \end{multline} \noindent $\bullet$\ $y$-particles \begin{equation} V_\pm\equiv \log Y_\pm + \log\left(1+ Y_Q \right) \star K^{Qy}_\pm -\log {1+ \frac{1}{Y_{M|vw}} \over 1+ \frac{1}{Y_{M|w}} } \star K_{M}=0\,. \label{Tbaysl2v} \end{equation} \noindent $\bullet$\ $M|vw$-strings \begin{multline} V_{M|vw}\equiv \log Y_{M|vw}+ \log\left(1+Y_{Q'} \right) \star K^{Q'M}_{xv} \\ - \log\left(1+ \frac{1}{Y_{M'|vw}} \right) \star K_{M'M} - \log {1- \frac{1}{Y_-} \over 1- \frac{1}{Y_+} } \,\hat{\star}\, K_M =0\,. \label{Tbavwsl2v} \end{multline} \noindent $\bullet$\ $M|w$-strings \begin{equation} V_{M|w}\equiv \log Y_{M|w} - \log \left(1+ \frac{1}{Y_{M'|w}} \right)\star K_{M'M} - \log{1- \frac{1}{Y_-} \over 1- \frac{1}{Y_+} } \,\hat{\star}\, K_M =0\,. \label{Tbawsl2v} \end{equation} \medskip Applying the contour deformation trick to the canonical TBA equations, one gets the following set of integral equations for Konishi-like states and $g<g_{cr}^{(1)} $ \begin{eqnarray} &&\bullet \ Q{\rm -particles}:\quad\quad V_Q = - \sum_{*} \log S_{\sl(2)}^{1_*Q}(u_j,v)\,,\label{TbaQsl2}\\ &&\bullet \ y{\rm -particles}:\quad\quad\ V_\pm = \sum_{*} \log S^{1_*y}_\pm(u_j,v)\,,\label{Tbaysl2}\\ &&\bullet \ M|vw{\rm -strings}: \quad V_{M|vw}= \sum_{*} \log S^{1_*M}_{xv}(u_j,v)\,, \label{Tbavwsl2}\\ &&\bullet \ M|w{\rm -strings}: \quad\ \ V_{M|w} =0\label{Tbawsl2}\,. \end{eqnarray} Here summation over repeated indices is assumed. The sums in the formulae run over the set of $N$ particles, and all Y-functions depend on the real $u$ variable (or $z$) of the mirror region.
We recall that $S_{\sl(2)}^{1_*Q}$ and $S^{1_*M}_{xv}$ denote the S-matrices with the first and second arguments in the string and mirror regions, respectively, while both arguments of the kernels in these formulae are in the mirror region. The integrals are taken over the interval $[-2,2]$ for convolutions involving $Y_\pm$, and over the horizontal line that runs a little bit above the real $u$ line (or the interval Re$(z)\in (-{\omega_1\over 2},{\omega_1\over 2})$, Im$(z)={\omega_2\over 2i}$ on the $z$-torus) for all other convolutions. The reason why one should choose the integration contour to run a little bit above the real line of the mirror $u$-plane is that $Y_{1|vw}$ has zeros at $u=u_k$, and, therefore, the terms $\log\left(1+ \frac{1}{Y_{1|vw}} \right) \star K$ with any kernel $K$ should be treated carefully. This prescription for the integration contour guarantees the reality of all Y-functions, as we show in appendix \ref{app:reality}. The $\log S$-term in the equation for vw-strings is in fact necessary to cancel the corresponding singularity on the l.h.s. of this equation. \medskip One can show that the imaginary zeros of $Y_{k|vw}$ and $1 + Y_{k|vw}$ do not contribute to the canonical equations for Konishi-like states at weak coupling. In appendix \ref{appSimple} we also show that the simplified equations can be derived from the canonical ones following the same route as in \cite{AF09b,AF09d}. \medskip The canonical TBA equations are rather complicated and involve infinite sums, which makes high-precision numerical tests very time-consuming. We have checked numerically that for Konishi-like states they are solved in the large $L$ limit by the asymptotic Y-functions if $L=J+2$. \medskip Let us stress again that the TBA equations above are valid only up to the first critical value of $g$, where the function $Y_{1|vw}(u)$ has real zeros only at $u=u_k$, and the $Y_{M|vw}$-functions with $M\ge 1$ do not have other zeros in the physical strip.
\subsubsection*{Exact Bethe equations: $g<g_{cr}^{(1)} $ } To derive the exact Bethe equations we take the logarithm of eq.(\ref{Y1m1}), and analytically continue the variable $z$ of $Y_1(z)$ in eq.(\ref{TbaQsl2}) to the point $z_{*k}$. The analytic continuation is similar to the one in section \ref{TBAKon}, and its detailed consideration can be found in appendix \ref{app:Y1}. As shown there, the resulting exact Bethe equations for a string theory state from the $\sl(2)$ sector can be cast into the following integral form \begin{eqnarray}\nonumber &&\pi i(2n_k+1)=\log Y_{1_*}(u_k) =i L\, p_k - \sum_{j=1}^N \log S_{\sl(2)}^{1_*1_*}(u_{j},u_{k})+ \log\left(1+Y_Q \right)\star K_{\sl(2)}^{Q1_*}~~~~~~\\\nonumber &&~~~~~~~~~~~~~+2\log 2 -2 \sum_{j=1}^N\log\big(u_j-u_k-{2i\over g}\big)\, {x_j^--{1\over x_{k}^-}\over x_j^-- x_{k}^+}+2 \log Z_{M|vw}\star K^{M1_*}_{vwx} \\\label{Tba1sl2} &&~~~~~~~~~~~~~+ 2 \log \Big(1- \frac{1}{Y_-}\Big) \,\hat{\star}\, K^{y1_*}_- + 2 \log\Big(1- \frac{1}{Y_+}\Big) \,\hat{\star}\, K^{y1_*}_+\,.~~~~~~ \end{eqnarray} Here the integration contours run a little bit above the Bethe roots $u_j$, $p_k= i \widetilde{{\cal E}}_{Q}(z_{*k})=-i\log{x_s(u_k+{i\over g})\over x_s(u_k-{i\over g})}$ is the momentum of the $k$-th particle, and the second argument in all the kernels in this equation is equal to $u_{k}$ but the first argument we integrate with respect to is the original one in the mirror region. The Z-functions are defined in the same way as in \cite{GKV09b} \begin{eqnarray}\label{YvwZvw} 1 + {1\over Y_{1|vw}} \equiv Z_{1|vw}\, {4\over 3}\, {1\over \prod_{j=1}^N \left( u - {u_j}\right)}\,,\quad 1+ {1\over Y_{M|vw}} \equiv Z_{M|vw}\, {(M+1)^2\over M(M+2)}\,,~~~~~ \end{eqnarray} they are positive for real $u$, and $Z_{M|vw}$, $M=2,3,\ldots$ asymptote to 1 at $u\to\infty$. 
Taking into account that the BY equations for the $\sl(2)$ sector have the form \begin{eqnarray}\nonumber \pi i(2n_k+1)=i J\, p_k - \sum_{j=1}^N \log S_{\sl(2)}^{1_*1_*}(u_{j},u_{k})\label{BYsl2b} \,,~~~~~~ \end{eqnarray} and that $Y_Q$ is exponentially small at large $J$, we conclude that if the analytic continuation has been done correctly then up to an integer multiple of $2\pi i$ the following identities between the asymptotic Y-functions should hold \begin{eqnarray}\nonumber &&{\cal R}_k\equiv 2 \hspace{0.3mm} i\, p_k +2\log 2 -2 \sum_{j=1}^N\log\big(u_j-u_k-{2i\over g}\big)\, {x_j^--{1\over x_{k}^-}\over x_j^--x_{k}^+}+2 \log Z_{M|vw}\star K^{M1_*}_{vwx}~~~\\\label{Rksl2} &&~~~~~~~+ 2 \log \Big(1- \frac{1}{Y_-}\Big) \,\hat{\star}\, K^{y1_*}_- + 2 \log\Big(1- \frac{1}{Y_+}\Big) \,\hat{\star}\, K^{y1_*}_+ = 0\,.~ \end{eqnarray} For $N=2$ and $u_1=-u_2$ one gets one equation, and by using the expressions for the Y-functions from appendix \ref{appT} one can check numerically that it does hold for any real value of $u_1$ such that only $Y_{1|vw}$ has two zeros for real $u$. We believe that the canonical TBA equations (\ref{TbaQsl2}-\ref{Tbawsl2}) and eqs.\eqref{Tba1sl2} are equivalent to the ones proposed in \cite{GKV09b} for states having the vanishing total momentum but a detailed comparison is hard to perform due to different notations and conventions. We see however that they are definitely different for physical states which satisfy the level-matching condition but do not have the vanishing total momentum. 
\subsubsection*{ Canonical TBA equations: $g_{cr}^{(1)}<g<\bar g_{cr}^{(1)}$} Here is the set of canonical TBA equations for Konishi-like states and $g_{cr}^{(1)}<g<\bar g_{cr}^{(1)}$ \begin{eqnarray} &&\bullet \ Q{\rm -particles}:\quad V_Q = - \sum_{*} \log S_{\sl(2)}^{1_*Q}(u_j,v)-2 \sum_{j=1}^2 \log S_{vwx}^{1Q}(r_j^-,v)~~~~~~~~ \\ &&\hspace{6cm}+2 \log S_{vwx}^{2Q}(r_1,v)+ 2 \log S_{yQ} (r_1,v)\,, \label{TbaQsl2c1}\nonumber\\ &&\bullet \ y{\rm -particles}: \quad\ V_\pm = \sum_{j=1}^N \log S^{1_*y}_\pm(u_j,v)- \sum_{j=1}^2 \log S_1(r_j^--v)+\log S_2(r_1-v)\,,~~~~~\nonumber\\ &&\bullet \ M|vw{\rm -strings}: V_{M|vw}= \sum_{j=1}^N\log S^{1_*M}_{xv}(u_j,v) - \sum_{j=1}^2 \log S_{1M}(r_j^-,v) +\log S_{2M}(r_1,v) \,, \nonumber\\ &&\bullet \ M|w{\rm -strings}: \ \ V_{M|w} =0\nonumber\,, \end{eqnarray} where $2 \log S_{yQ} (r_1,v)$ appears due to the imaginary zero $r_1$ of $Y_\pm$, and we take into account that the S-matrix $S_{yQ}$ is normalized as $S_{yQ} (\pm 2,v)=1$. \subsubsection*{ Exact Bethe equations: $g_{cr}^{(1)}<g< \bar g_{cr}^{(1)}$} The exact Bethe equations are obtained by analytically continuing $\log Y_1$ in \eqref{TbaQsl2c1} following the same route as for the small $g$ case, and they take the following form \begin{eqnarray}\nonumber \pi i(2n_k+1)&=&\log Y_{1_*}(u_k) =i L\, p_k - \sum_{j=1}^N \log S_{\sl(2)}^{1_*1_*}(u_{j},u_{k})+ \log\left(1+Y_Q \right)\star K_{\sl(2)}^{Q1_*}~~~~~~~~~\\\nonumber &-&2 \sum_{j=1}^2 \log S_{vwx}^{11_*}(r_j^-,u_k) +2 \log S_{vwx}^{21_*}(r_1,u_k) + 2 \log S_{y1_*} (r_1,u_k)\\\nonumber &-& 2 \sum_{j=1}^N\log\big(u_j-u_k-{2i\over g}\big)\, {x_j^--{1\over x_{k}^-}\over x_j^-- x_{k}^+} +2\log 2+2 \log Z_{M|vw}\star K^{M1_*}_{vwx} \\\label{Eba1sl2bc1} &+& 2 \log \Big(1- \frac{1}{Y_-}\Big) \,\hat{\star}\, K^{y1_*}_- + 2 \log\Big(1- \frac{1}{Y_+}\Big) \,\hat{\star}\, K^{y1_*}_+\,.~~~~~~ \end{eqnarray} We conclude again that the consistency with the BY equations requires the fulfillment of the identities between the asymptotic 
Y-functions similar to \eqref{Rksl2} that we have checked numerically for the Konishi-like states. \subsubsection*{ Canonical TBA equations: $\bar g_{cr}^{(1)}<g< g_{cr}^{(2)}$} The canonical TBA equations for Konishi-like states and $\bar g_{cr}^{(1)}<g<g_{cr}^{(2)}$ take the form \begin{eqnarray} &&\bullet \ Q{\rm -particles}:\quad V_Q = - \sum_{j=1}^N \log S_{\sl(2)}^{1_*Q}(u_j,v)-2 \sum_{j=1}^2 \log S_{vwx}^{1Q}(r_j^-,v)\,,\label{TbaQsl2c1b}\\ &&\bullet \ y{\rm -particles}: \quad\ V_\pm = \sum_{j=1}^N \log S^{1_*y}_\pm(u_j,v)- \sum_{j=1}^2 \log S_1(r_j^--v)\,,~~~~~\nonumber\\ &&\bullet \ M|vw{\rm -strings}: V_{M|vw}=\sum_{j=1}^N\log S^{1_*M}_{xv}(u_j,v) - \sum_{j=1}^2 \log S_{1M}(r_j^-,v) \,, \nonumber\\ &&\bullet \ M|w{\rm -strings}: \ \ V_{M|w} =0\nonumber\,, \end{eqnarray} The equations for $Y_Q$-particles differ from \eqref{TbaQsl2c1} by the absence of the term $2\log S_{vwx}^{2Q}(r_1,v)$. The p.v. prescription however would give instead additional terms due to the real zeros $r_j$ of $Y_{2|vw}$ and $Y_\pm$. 
\subsubsection*{ Exact Bethe equations: $\bar g_{cr}^{(1)}<g< g_{cr}^{(2)}$} Analytically continuing $\log Y_1$ in \eqref{TbaQsl2c1b}, one gets the exact Bethe equations \begin{eqnarray}\nonumber &&\pi i(2n_k+1)=\log Y_{1_*}(u_k) =i L\, p_k - \sum_{j=1}^N \log S_{\sl(2)}^{1_*1_*}(u_{j},u_{k})+ \log\left(1+Y_Q \right)\star K_{\sl(2)}^{Q1_*}~~~~~~~~~\\\nonumber &&~~~-2 \sum_{j=1}^2 \log S_{vwx}^{11_*}(r_j^-,u_k) +2\log 2 -2 \sum_{j=1}^N\log\big(u_j-u_k-{2i\over g}\big)\, {x_j^--{1\over x_{k}^-}\over x_j^-- x_{k}^+} \\\label{Eba1sl2c1b} &&~~~+2 \log Z_{M|vw}\star K^{M1_*}_{vwx}+ 2 \log \Big(1- \frac{1}{Y_-}\Big) \,\hat{\star}\, K^{y1_*}_- + 2 \log\Big(1- \frac{1}{Y_+}\Big) \,\hat{\star}\, K^{y1_*}_+\,.~~~~~~ \end{eqnarray} We recall that the integration contours should run a little bit above the Bethe roots $u_j$, and below the dynamical roots $r_j$, and the consistency with the BY equations leads to identities of the form \eqref{Rksl2} that we have checked numerically. \subsubsection*{Canonical TBA equations: $g_{cr}^{(m)}<g<\bar g_{cr}^{(m)}$} The necessary modification of the integration contour follows the one for the first critical region, and the integration contour runs a little bit above the Bethe roots $u_j$, below the zeros $r_j^{(k)}$, $k=1,\ldots,m$, and between the zeros $r_j^{(m+1)}$, Im$(r_1^{(m+1)})<0$. All the roots $r_j^{(k)} - {i\over g}$ are between the integration contour and the real line of the mirror region. The contour for $Y_\pm$ functions lies above the zeros $r_j^{(2)}$. 
Here are the canonical TBA equations for Konishi-like states and $g_{cr}^{(m)}<g<\bar g_{cr}^{(m)}$ \begin{eqnarray}\nonumber &&\bullet \ Q{\rm -particles}:\quad V_Q = - \sum_{j=1}^N \log S_{\sl(2)}^{1_*Q}(u_j,v) -2 \sum_{k=1}^{m-1}\sum_{j=1}^2 \log S_{vwx}^{kQ}(r_j^{(k+1)-},v)\\ &&\hspace{4cm}+ 2 \log \frac{S_{vwx}^{m-1,Q}(r_1^{(m+1)},v) \, S_{vwx}^{m+1,Q}(r_1^{(m+1)},v)} {S_{vwx}^{m,Q}(r_1^{(m+1)-},v)}\,,\label{TbaQsl2c1n}\\\nonumber &&\bullet \ y{\rm -particles}: \quad\ V_\pm = \sum_{j=1}^N \log S^{1_*y}_\pm(u_j,v)- \sum_{k=1}^{m-1}\sum_{j=1}^2 \log S_k(r_j^{(k+1)-}-v)\\ &&\hspace{4cm}+\log {S_{m-1}(r_1^{(m+1)}-v)S_{m+1}(r_1^{(m+1)}-v)\over S_{m}(r_1^{(m+1)-}-v)}\,,~~~~~\nonumber\\\nonumber &&\bullet \ M|vw{\rm -strings}: V_{M|vw}=\sum_{j=1}^N\log S^{1_*M}_{xv}(u_j,v) - \sum_{k=1}^{m-1}\sum_{j=1}^2 \log S_{kM}(r_j^{(k+1),-},v)\\ &&\hspace{5cm}+ \log {S_{m-1,M}(r_1^{(m+1)},v)S_{m+1,M}(r_1^{(m+1)},v)\over S_{m,M}(r_1^{(m+1)-},v)} \,, \nonumber\\ &&\bullet \ M|w{\rm -strings}: \ \ V_{M|w} =0\nonumber\,, \end{eqnarray} \subsubsection*{ Exact Bethe equations: $g_{cr}^{(m)}<g< \bar g_{cr}^{(m)}$} The exact Bethe equations take the following form \begin{multline} \pi i(2n_k+1)=\log Y_{1_*}(u_k) =i L\, p_k - \sum_{j=1}^N \log S_{\sl(2)}^{1_*1_*}(u_{j},u_{k})+ \log\left(1+Y_Q \right)\star K_{\sl(2)}^{Q1_*}\\ -2 \sum_{k=1}^m\sum_{j=1}^2 \log S_{vwx}^{k1_*}(r_j^{(k+1)-},u_k) + 2 \log \frac{S_{vwx}^{m-1,1_*}(r_1^{(m+1)},u_k) \, S_{vwx}^{m+1,1_*}(r_1^{(m+1)},u_k)} {S_{vwx}^{m1_*}(r_1^{(m+1)-},u_k)}\\ + 2\log 2 -2 \sum_{j=1}^N\log\big(u_j-u_k-{2i\over g}\big)\, {x_j^--{1\over x_{k}^-}\over x_j^-- x_{k}^+} +2 \log Z_{M|vw}\star K^{M1_*}_{vwx}\\+ 2 \log \Big(1- \frac{1}{Y_-}\Big) \,\hat{\star}\, K^{y1_*}_- + 2 \log\Big(1- \frac{1}{Y_+}\Big) \,\hat{\star}\, K^{y1_*}_+\,, \label{Tba1sl2cm} \end{multline} where $\tilde{x}_j^-\equiv x(r_j^{(3)}-{i\over g})$, and the integration contours run just above the Bethe roots, and below the zeros $r_j^{(k)}$, $k=1,\ldots,m$. 
The functions $Z_{M|vw}$ are defined as in \eqref{YvwZvw}. The consistency with the BY equations again implies that the sum of the terms on the last three lines of \eqref{Tba1sl2cm} is equal to $-2i p_k$, which we have checked numerically for the Konishi-like states. \subsubsection*{Canonical TBA equations: $\bar g_{cr}^{(m)}<g< g_{cr}^{(m+1)}$} In this case all the zeros $r_j^{(k)}$, $k=1,\ldots,m+1$, are real, the integration contour runs below all of them but above the Bethe roots $u_j$, and the canonical TBA equations for Konishi-like states and $\bar g_{cr}^{(m)}<g< g_{cr}^{(m+1)}$ take the following form \begin{eqnarray} &&\bullet \ Q{\rm -particles}:\quad V_Q = - \sum_{j=1}^N \log S_{\sl(2)}^{1_*Q}(u_j,v) -2 \sum_{k=1}^m\sum_{j=1}^2 \log S_{vwx}^{kQ}(r_j^{(k+1)-},v) \,,~~~~~~~~\label{TbaQsl2c1nb}\\\nonumber &&\bullet \ y{\rm -particles}: \quad\ V_\pm = \sum_{j=1}^N \log S^{1_*y}_\pm(u_j,v)- \sum_{k=1}^m\sum_{j=1}^2 \log S_k(r_j^{(k+1)-}-v) \,,~~~~~\nonumber\\\nonumber &&\bullet \ M|vw{\rm -strings}: V_{M|vw}=\sum_{j=1}^N\log S^{1_*M}_{xv}(u_j,v) - \sum_{k=1}^m\sum_{j=1}^2 \log S_{kM}(r_j^{(k+1)-},v) \,, \nonumber\\ &&\bullet \ M|w{\rm -strings}: \ \ V_{M|w} =0\nonumber\,.
\end{eqnarray} \subsubsection*{ Exact Bethe equations: $\bar g_{cr}^{(m)}<g< g_{cr}^{(m+1)}$} The exact Bethe equations take the following form \begin{eqnarray}\nonumber &&\pi i(2n_k+1)=\log Y_{1_*}(u_k) =i L\, p_k - \sum_{j=1}^N \log S_{\sl(2)}^{1_*1_*}(u_{j},u_{k})+ \log\left(1+Y_Q \right)\star K_{\sl(2)}^{Q1_*}~~~~~~~~~\\\nonumber &&~~~-2 \sum_{l=1}^m\sum_{j=1}^2 \log S_{vwx}^{l1_*}(r_j^{(l+1)-},u_k) +2\log 2 -2 \sum_{j=1}^N\log\big(u_j-u_k-{2i\over g}\big)\, {x_j^--{1\over x_{k}^-}\over x_j^-- x_{k}^+} \\\label{Tba1sl2cmb} &&~~~+2 \log Z_{M|vw}\star K^{M1_*}_{vwx}+ 2 \log \Big(1- \frac{1}{Y_-}\Big) \,\hat{\star}\, K^{y1_*}_- + 2 \log\Big(1- \frac{1}{Y_+}\Big) \,\hat{\star}\, K^{y1_*}_+ \,.~~~~~~ \end{eqnarray} The consistency with the BY equations again implies the fulfillment of identities of the form \eqref{Rksl2}, which we have checked numerically for the Konishi-like states. \subsection{Reality of Y-functions}\label{app:reality} In this appendix we show that the reality condition for Y-functions is compatible with the canonical TBA equations for Konishi-like states. We consider only the weak coupling region. The generalization to other regions and general $\sl(2)$ states is straightforward. To start with, we introduce the principal value prescription for the integrals involving $\log f(u)$, where $f(u)$ is real for real $u$, and has first-order zeros (or poles) in the interval $[a,b]$ at $u_k$, $k=1,\ldots, N$ \begin{eqnarray}\label{pvp} \log f \star_{p.v} K \equiv \lim_{\epsilon\to 0} \sum_{k=0}^{N}\int_{u_k+\epsilon}^{u_{k+1}-\epsilon}\, du\, \log\left| f(u)\right| K(u,v)\,,~~~~ \end{eqnarray} where $u_0=a$, $u_{N+1}=b$. In the cases of interest $a=-\infty\,,\ b=\infty$ or $a=-2\,,\ b=2$. Assuming for definiteness that $f(u)$ has $N$ zeros, and $f(\infty)>0$, one can write \begin{eqnarray} f(u) = \tilde{f}(u)\prod_{k=1}^N (u-u_k)\,, \end{eqnarray} where $\tilde{f}(u)>0$ for any $u\in{\bf R}$.
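The half-residue terms that accompany this prescription when the contour is moved off the real line originate from the elementary branch identity $\log(x+i0)=\log|x|+i\pi\,\theta(-x)$: each zero $u_k$ lying below the contour contributes $i\pi\int_a^{u_k}du\, K(u,v)$. A sketch of a numerical check with a toy kernel (hypothetical choice of $K$):

```python
import numpy as np

def K(u):
    # smooth toy kernel on [-2, 2]; hypothetical choice, real for real u
    return 1.0 / (np.pi * (1.0 + u**2))

a, b, uk, eps, n = -2.0, 2.0, 0.3, 1e-4, 400001
u = np.linspace(a, b, n)
du = u[1] - u[0]

def trap(y):
    # trapezoidal rule on the fixed grid u
    return (np.sum(y) - 0.5 * (y[0] + y[-1])) * du

# i0-prescription above a single zero at u = uk:
#   log(u - uk + i0) = log|u - uk| + i*pi*theta(uk - u),
# so the imaginary part of the convolution is pi * int_a^{uk} K(u) du.
lhs = trap(np.log(u - uk + 1j * eps) * K(u)).imag
rhs = np.pi * trap(np.where(u < uk, K(u), 0.0))
```

For the Lorentzian toy kernel the right-hand side is simply $\arctan(u_k)-\arctan(a)$, and the two evaluations agree to the accuracy set by the regulator $\epsilon$.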
Then the convolution terms of the form $\log f \star K$ with the integration contour running a little bit above the real line can be written as follows \begin{eqnarray}\label{logfk} &&\log f \star K = \int_{a}^{b}\, du\, \log f(u+i0) K(u+i 0,v)\\\nonumber &&~~~= \int_{a}^{b}\, du\, \log \tilde{f}(u) K(u,v) + \sum_{k=1}^N \int_{a}^{b}\, du\, \log (u-u_k+i0) K(u+i 0,v) \\\nonumber &&~~~= \log f \star_{p.v} K \ +\ \pi i\, \sum_{k=1}^N \int_{a}^{u_k}\, du\, K(u,v) = \log f \star_{p.v} K \ +\ {1\over 2} \sum_{k=1}^N \log {S(u_k,v)\over S(a,v)}\,, \end{eqnarray} where $\log S(u,v) \equiv 2\pi i \int^{u}\, du'\, K(u',v)$, and it can differ from the S-matrix defining the kernel $K$ by a function of $v$. It is convenient to choose the normalization $S(a,v)=1$, and most of our S-matrices satisfy this condition. Let us now use \eqref{logfk} to show the reality of Y-functions. We start with \eqref{Tbaysl2} that we write as follows \begin{eqnarray}\label{Yfory1} &&\log {Y_+ \over Y_-}(v)= - \sum_{j=1}^N \log S_{1_*y}(u_{j},v) +\log(1 + Y_{Q})\star K_{Qy} \,,\\ \label{Yfory2} &&\log {Y_+ Y_-}(v) = \sum_{j=1}^N \log S_{1}(u_{j}-v) - \log\left(1+Y_Q \right)\star K_Q+2\log {1+{1\over Y_{M|vw}} \ov1+{1\over Y_{M|w}}}\star K_M\nonumber\,,~~~~ \end{eqnarray} where \begin{eqnarray}\label{sqsy} S_{Q_*y}(u,v) = {x_s(u-{i\over g}Q) - x(v)\over x_s(u-{i\over g}Q) - {1\over x(v)}}\,{x_s(u+{i\over g}Q) - {1\over x(v)}\over x_s(u+{i\over g}Q) - x(v)}\,, \end{eqnarray} and we used that \begin{eqnarray} S_{Qy}(u,v) = {S_-^{Qy}(u,v) \over S_+^{Qy}(u,v) }\,,\quad S_{Q}(u-v) = {S_-^{Qy}(u,v) S_+^{Qy}(u,v) } = {u-v-{i\over g}Q\over u-v+{i\over g}Q}\,,~~~~~ \end{eqnarray} and that $S_{Q}(u-v)$ analytically continued to the string-mirror region is equal to itself. 
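As a sanity check, the identity \eqref{logfk} can be verified numerically in a toy setting: below, $f(u)=u-u_0$ has a single real zero on $[a,b]=[-2,2]$, and the kernel $K$ is an arbitrary smooth test kernel chosen for this sketch (it is not one of the TBA kernels of the main text). The left-hand side is computed on a contour shifted slightly above the real line, the right-hand side via the principal value plus the $\pi i\int_a^{u_0} K$ correction.

```python
import math, cmath

# Toy numerical check of the identity (logfk):
#   log f * K  =  log f *_{p.v.} K  +  pi*i * int_a^{u0} K(u) du,
# for f(u) = u - u0 with one real zero, contour running above the real line.
a, b, u0 = -2.0, 2.0, 0.5
K = lambda u: 1.0 / (math.pi * (1.0 + u * u))   # smooth test kernel (assumption)

def midpoint_sum(g, lo, hi, n=4000):
    h = (hi - lo) / n
    return sum(g(lo + (j + 0.5) * h) for j in range(n)) * h

eps = 1e-8  # small shift of the contour: u -> u + i*eps
lhs = midpoint_sum(lambda u: cmath.log(u - u0 + 1j * eps) * K(u), a, b)

pv  = midpoint_sum(lambda u: math.log(abs(u - u0)) * K(u), a, b)
rhs = pv + 1j * math.pi * midpoint_sum(K, a, u0)

assert abs(lhs - rhs) < 1e-4
```

The imaginary part of the contour integral reproduces exactly the $\pi i$ term picked up on the segment $[a,u_0]$, as in \eqref{logfk}.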
Taking into account that \begin{eqnarray} 2\log {1+{1\over Y_{M|vw}} \ov1+{1\over Y_{M|w}}}\star K_M =2\log {1+{1\over Y_{M|vw}} \ov1+{1\over Y_{M|w}}}\star_{p.v} K_M -\sum_{k=1}^N \log S_1(u_k-v)\,, \end{eqnarray} we see that the equations for $Y_\pm$ can be written as \begin{eqnarray}\label{Yfory1b} &&\log {Y_+ \over Y_-}(v)= - \sum_{j=1}^N \log S_{1_*y}(u_{j},v) +\log(1 + Y_{Q})\star K_{Qy} \,,\\ \label{Yfory2b} &&\log {Y_+ Y_-}(v) = - \log\left(1+Y_Q \right)\star K_Q+2\log {1+{1\over Y_{M|vw}} \ov1+{1\over Y_{M|w}}}\star_{p.v.} K_M \,.~~~~ \end{eqnarray} Assuming now that $Y_Q$, $Y_{M|vw}$ and $Y_{M|w}$ are real, the reality of $Y_\pm$ just follows from the reality and positivity of $S_{Q_*y}(u,v)$ that can be easily checked by using \eqref{sqsy}. Next we consider $Y_{M|vw}$. By using the p.v. prescription we write \eqref{Tbavwsl2} as follows \begin{multline} \log Y_{M|vw}(v) = {1\over 2}\sum_{j=1}^N \log {S^{1_*M}_{xv}(u_j,v)^2\over S^{1M}(u_j-v)} - \log\left(1+Y_{Q'} \right) \star K^{Q'M}_{xv} \\ + \log\left(1+ \frac{1}{Y_{M'|vw}} \right) \star_{p.v.} K_{M'M} + \log {1- \frac{e^{ih_\alpha}}{Y_-} \over 1- \frac{e^{ih_\alpha}}{Y_+} } \star K_M \,, \label{Tbavwsl2b} \end{multline} where \begin{eqnarray} S^{1_*M}_{xv}(u,v)&=& {x_s(u-{i\over g})-x(v+{i\over g}M)\over x_s(u+{i\over g})- x(v-{i\over g}M) }\, {x_s(u+{i\over g})-{1\over x(v+{i\over g}M)}\over x_s(u-{i\over g})-{1\over x(v-{i\over g}M)}}\,,\\ S^{1M}(u)&=& \frac{u -{i\over g}(1+M)}{u +{i\over g}(1+M)} \frac{u -{i\over g}(M-1)}{u +{i\over g}(M-1)}\,. 
\end{eqnarray} Thus, we get \begin{eqnarray} {S^{1_*M}_{xv}(u,v)^2\over S^{1M}(u-v)} &=& {x_s(u-{i\over g})-x(v-{i\over g}M)\over x_s(u+{i\over g})- x(v+{i\over g}M) }\, {x_s(u+{i\over g})-{1\over x(v+{i\over g}M)}\over x_s(u-{i\over g})-{1\over x(v-{i\over g}M)}} \\\nonumber &&~~~~~~~~\times {x_s(u-{i\over g})-x(v+{i\over g}M)\over x_s(u+{i\over g})- x(v-{i\over g}M) }\, {x_s(u+{i\over g})-{1\over x(v-{i\over g}M)}\over x_s(u-{i\over g})-{1\over x(v+{i\over g}M)}}\,, \end{eqnarray} which is obviously real and positive, and at $M=1$ it has a double zero at $v=u$ as it should be. Finally we consider $Y_Q$-functions. We write \eqref{TbaQsl2} as follows \begin{multline} \log Y_Q(v) =- L\, \widetilde{{\cal E}}_{Q} - \sum_{j=1}^N \log S_{\sl(2)}^{1_*Q}(u_j,v)S^{1Q}_{vwx}(u_j,v) + \log\left(1+Y_{Q'} \right) \star K_{\sl(2)}^{Q'Q}~~~ \\ ~~~~~~~~~~~~~+ \log\left(1+ \frac{1}{Y_{M'|vw}} \right) \star_{p.v.} K^{M'Q}_{vwx} + \log \left(1- \frac{e^{ih_\alpha}}{Y_-} \right) \star K^{yQ}_- + \log\left(1- \frac{e^{ih_\alpha}}{Y_+} \right) \star K^{yQ}_+\,. \label{TbaQsl2b} \end{multline} Here \begin{eqnarray}\nonumber S^{1Q}_{vwx}(u,v) &=& {x(u-{i\over g})-x(v+{i\over g}Q)\over x(u-{i\over g})- x(v-{i\over g}Q) }\, {x(u+{i\over g})-x(v+{i\over g}Q)\over x(u+{i\over g})-x(v-{i\over g}Q)}\, {x(v-{i\over g}Q)\over x(v+{i\over g}Q)}\\\label{siqvwx} &=&{x_s(u-{i\over g})-x(v+{i\over g}Q)\over x_s(u-{i\over g})- x(v-{i\over g}Q) }\, {1 - {1\over x_s(u+{i\over g})x(v+{i\over g}Q)}\over 1-{1\over x_s(u+{i\over g})x(v-{i\over g}Q)}}=h_Q(u,v)\,,~~~~ \end{eqnarray} where $h_Q(u,v)$ is the function that appears in the crossing relations, see \cite{AF09c}. Note that $S^{1Q}_{vwx}$ is unitary: $\left(S^{1Q}_{vwx}\right)^* = 1/S^{1Q}_{vwx} = S^{1Q}_{vwx}/h_Q^2$. 
To prove the reality of $S_{\sl(2)}^{1_*Q}(u,v)S^{1Q}_{vwx}(u,v)$ we use the crossing relation.\footnote{It follows closely the proof of the unitarity of the mirror S-matrix \cite{AF07}.} To this end it is convenient to go to the $z$-torus. Then, the crossing relation for $S_{\sl(2)}^{1Q}$ can be written in the form \cite{AF06} \begin{eqnarray} S_{\sl(2)}^{1Q}(z_{1}-{\omega_2\over 2},z_2)S_{\sl(2)}^{1Q}(z_{1}+{\omega_2\over 2},z_2) = {1\over h_Q(u,v)^2}\,, \end{eqnarray} and we have \begin{eqnarray}\nonumber \left(S_{\sl(2)}^{1_*Q}(u,v)\right)^* &\equiv& \left(S_{\sl(2)}^{1Q}(z_{*1},z_2)\right)^* = \left(S_{\sl(2)}^{1Q}(z_{1}-{\omega_2\over 2},z_2)\right)^* = {1\over S_{\sl(2)}^{1Q}(z_{1}+{\omega_2\over 2},z_2) }\\ &=& S_{\sl(2)}^{1_*Q}(u,v) h_Q(u,v)^2\,, \end{eqnarray} where we have chosen $z_k$ to be real, and used the generalized unitarity condition for the mirror model. Taking into account \eqref{siqvwx}, one concludes that $S_{\sl(2)}^{1_*Q}S^{1Q}_{vwx}$ is real. It is also possible to show that the product is positive by representing $S_{\sl(2)}^{1_*Q}S^{1Q}_{vwx}$ as a product $s s^*$, where $s \sim \sigma$. \subsection{From canonical to simplified TBA equations}\label{appSimple} In this appendix we discuss the derivation of the simplified TBA equations from the canonical ones for Konishi-like states in the weak coupling region. Since it basically repeats the one done in \cite{AF09b,AF09d} we will be sketchy. \subsubsection*{Simplifying the TBA equations for $vw$-strings} To derive the simplified equation \eqref{Yforvw3} for $vw$-strings from the canonical one, one applies the inverse kernel $(K+1)^{-1}_{MN}$ to \eqref{Tbavwsl2} and uses the following identity \begin{eqnarray}\label{SxvKI} \log S^{1_*Q}_{xv}\star(K+1)_{QM}^{-1} &=& \delta_{M1}\Big(\log S(u-v+{i\over g}-i0) + \log S_{1_*y}\star s\Big)\,. 
~~~~~ \end{eqnarray} Note also that one should understand $\log S^{1_*Q}_{xv}$ as \begin{eqnarray} \log S^{1_*Q}_{xv}(u,v) = \log(v-u) + \log {S^{1_*Q}_{xv}(u,v) \over v-u}\,. \end{eqnarray} Then, even though the last formula is valid up to a multiple of $i \pi$, it agrees exactly with Mathematica's choice of branches. \subsubsection*{Simplifying the TBA equations for $Q$-particles} We want to apply $(K+1)^{-1}_{MN}$ to the equation \eqref{TbaQsl2}. This requires computing \begin{eqnarray}\nonumber \log S_{\sl(2)}^{1_*M}\star (K+1)^{-1}_{MQ} &=&-\log S_{1M}\star (K+1)^{-1}_{MQ}-2\log \Sigma_{1_*M}\star (K+1)^{-1}_{MQ} \\ &=&-\delta_{Q2}\log S(u-v)-2\delta_{Q1}\log \check{\Sigma}_{1_*}\,\check{\star}\, s\,.~~~~ \end{eqnarray} As was shown in \cite{AF09c}, the improved dressing factor is a holomorphic function of the first argument in the string region, and the second one in the intersection of the region $\{|y^{+}_{2}|<1,|y^{-}_{2}|>1\}$ with the mirror region ${\rm Im}\,y_2^{\pm}<0$, which includes the real momentum line of the mirror theory. This immediately implies that \begin{eqnarray} \log\Sigma_{1_*M}\star (K+1)^{-1}_{MQ}=0~~~~{\rm for}~~~Q\neq 1. \end{eqnarray} To find $\log\Sigma_{1_*M}\star (K+1)^{-1}_{M1}$, and, therefore, $\check{\Sigma}_{1_*}$ we start with the kernel $K^\Sigma_{1_*M}(u,v)$, and use the following identity derived in \cite{AF09d} \begin{eqnarray}\label{KSKi} K^\Sigma_{Q'M}\star (K + 1)^{-1}_{MQ} =\delta_{1Q}\check{K}_{Q'}^\Sigma\star s\,, \end{eqnarray} where the kernel $\check{K}_{Q}^\Sigma(u,v)$ does not vanish for $|v|<2$, and is given by \eqref{Ksig}. Setting $Q'=1=Q$ in \eqref{KSKi} and analytically continuing the first argument to the string region, one gets \begin{eqnarray}\label{KSKism} K^\Sigma_{1_*M}\star (K + 1)^{-1}_{MQ} =\delta_{1Q}\check{K}_{1_*}^\Sigma\star s\,, \end{eqnarray} where the analytically continued kernel $\check{K}_{1_*}^\Sigma$ is given by \eqref{k1starsigma}.
To obtain eq.\eqref{k1starsigma} from \eqref{Ksig} one uses the following formula \begin{eqnarray} \check{I}_0(u+{i\over g}\,,v) &=&\check{I}_1(u\,,v) - {1\over 2\pi i}{1\over u-v-{i\over g}} - {1\over \pi i}\int_{-2}^2\, dt\, {1\over u-t-{i\over g}}\check{K}(t,v)~~~~ \nonumber\\ &=&\check{I}_1(u\,,v) - K_{ss}(u-{i\over g}\,,v)\,.~~~~~~~ \end{eqnarray} Finally, integrating \eqref{k1starsigma} over the first argument, one gets \eqref{s1star}. The remaining part of the derivation of the simplified equations for Q-particles repeats \cite{AF09d}.
\section{Introduction} An agent which operates in the real world must be able to continuously learn from the environment. Learning from a stream of samples, usually in the form of static datasets, also called tasks, is referred to as Lifelong Learning or Continual Learning (CL). A continual learning scenario often comes with a phenomenon called Catastrophic Forgetting (CF) \citep{MCCLOSKEY1989109}, which arises when an agent loses the knowledge learned from past samples while extracting information from newer ones. This phenomenon inhibits the proper functioning of agents that operate in such scenarios, but it can be mitigated, or removed, using methods built for that purpose. A key point is that such methods must have a contained memory footprint, because we cannot save all past samples encountered during the training, and the agents cannot grow indefinitely, consuming all the memory. Thus, the external memory, understood as the collection of everything saved on the hardware that must be preserved across all the tasks, must remain limited. Over the last years, a large amount of research has been done on methods to alleviate the CF phenomenon. Usually, CL techniques are grouped into three categories (regularization-based methods, rehearsal methods, and architectural methods), and a method can belong to one, or more, categories at the same time \citep{PARISI201954}. The methods in the first set are designed so that the parameters of the model that are important for past tasks are preserved during the training of newer tasks, either by directly constraining the parameters of the model or by adding a regularization term to the training loss \citep{li2017learning, kirkpatrick2017overcoming, synaptic, serra2018overcoming, saha2020gradient, chaudhry2021using}.
Rehearsal-based methods save portions of past tasks and use the information contained in the memory to mitigate CF while training on new tasks \citep{rebuffi2017icarl, chaudhry2019tiny, van2020brain, chaudhry2021using, rosasco2021distilled}; the samples associated to past tasks can also be generated using a generative model, and in that case the methods are called pseudo-rehearsal. Finally, architectural-based methods freeze important parameters or dynamically adjust and expand the neural network's structure, to preserve the knowledge associated to past tasks, while the model learns how to solve the current task \citep{rusu2016progressive, aljundi2017expert, yoon2017lifelong, veniat2020efficient, POMPONI2021407}. Aside from developing techniques to solve CF, another issue is formalizing scenarios describing how tasks are created, how they arrive, and what information is provided to the model itself (e.g., the task identifier). Usually, a method is designed to solve a subset of all possible CL scenarios \citep{van2019three}. In this paper, we operate on scenarios in which the identity of a task is always known during the training, but it may not be known during the inference phase, and the classes are disjoint (the same class appears in only one task). If we can use the task identity during the inference phase, we have a scenario called \textit{task incremental}; otherwise, the scenario is called \textit{class incremental}; the latter is much harder to deal with, being closer to a real-world scenario. The task incremental scenarios have been studied exhaustively, due to the simplicity of the problem, while fewer methods have been proposed to efficiently solve the latter. In this paper, we propose a method that achieves better results on both task and class incremental scenarios.
Our proposal, Centroids Matching{}, is a pure regularization-based method when it comes to fighting CF in task incremental problems (which are easier), while it uses an external memory, containing samples from past tasks, when required to solve class incremental scenarios. When fighting the CF phenomenon in a task incremental scenario, our approach requires no memory, since we force the model to keep its extraction ability by using the current samples and the current model, without the need to store the past model when the learning of a task is over, nor an external memory containing past samples. Our approach differs from the existing literature because it does not train the neural network using a standard training procedure, in which a cross-entropy error is minimized, but it operates directly on the embedding vectors outputted by the model, by producing an embedding for each point. These vectors are moved to match the centroids of the associated classes, which are calculated using a subset of training samples (support set). \begin{figure}[!t] \centering \includegraphics[width=0.6\textwidth]{figures/scenarios.drawio.pdf} \caption{The proposed approach applied on both Task (left) and Class (right) Incremental learning scenarios. The bigger circles are the centroids of the classes, while the smaller ones are samples from the same class of the corresponding centroid.
We see that a \textbf{CIL}{} scenario is solved by merging the embedding spaces together, into a new space that contains all the classes so far; the merging process is explained in Section \ref{sec:class-il}.} \label{fig:scenarios} \end{figure} \section{Related Works} An agent that has the Continual Learning (CL) property is capable of learning from a sequence of tasks \citep{delange2021continual} without forgetting past learned knowledge. When past learned knowledge is lost, and with it also the ability to solve tasks already solved in the past, we have a phenomenon called Catastrophic Forgetting (CF) \citep{FRENCH1999128, MCCLOSKEY1989109}, which occurs because the information saved in the parameters of the model is overwritten when learning how to solve new tasks, leading to a partial or total forgetting of the information. A CL method should be able to alleviate, or remove, CF while efficiently learning how to solve current tasks. Initial CL works focused on fighting the CF phenomenon on the easiest supervised CL scenario, called task incremental learning, but, recently, we have seen a shift toward the class incremental scenario, which is closer, and more suitable, to real-world applications; nevertheless, a limited number of proposed approaches focus on that specific scenario \citep{masana2020class, belouadah2021comprehensive}. The main difference between the two is that in the first scenario we can classify only samples in the context of a task, thus the task identity must be known a priori, while in the latter one the model must discriminate at inference time between all classes seen during the training procedure, without having the task identity. Depending on how a CL method achieves this goal, we can group, following \cite{PARISI201954}, the CL methods into three categories: regularization methods, rehearsal methods, and architectural methods.
Our approach belongs to the first set when we are dealing with task incremental scenarios, and to both the first and the second when the scenario is a class incremental one. Our approach regularizes the model by constraining the embeddings, but other approaches, based on the same principle, have been proposed over the years. One of the first was proposed in \cite{Hou_2019_CVPR}, and it does not work directly on the embeddings of the model, but on the logits produced by it, and uses them to regularize the training by reducing the distance between the current model and the past one, while also correcting biases that arise when training a model to solve a CL scenario (e.g. class imbalances). In \cite{POMPONI2020139}, the authors proposed a regularization-rehearsal approach that works directly on the embedding space produced by the model. Given a sample and the task from which the sample comes, the proposed method uses a regularization term that forces the model to reduce the distance between the current embedding vector and the one obtained when the training on the source task was over; moreover, the approach requires a very small memory to work, but it can only regularize models that operate on task incremental scenarios, because it requires the task spaces to be separated. In \cite{han2021contrastive} a regularization-rehearsal method which uses the embedding vectors to regularize the model is proposed. The vectors are used to calculate multiple contrastive losses used as regularization factors; also, a mechanism that overwrites a portion of the embeddings is used, enabling selective forgetting; unfortunately, the approach requires a large external memory in order to achieve competitive results. More CL scenarios exist, such as a stream supervised scenario, often called Online Incremental Learning, in which the model sees a sample only once; the idea of using the embeddings to regularize the model has also been exploited in these scenarios.
Starting from the same ground idea which inspired our approach, the authors of \cite{de2021continual} proposed a CL approach that operates over a stream of samples, continuously updating the prototypes extracted from the same stream, using a novel loss which synchronizes the latent space with the continually evolving prototypes. Similarly, the authors of \cite{Taufique_2022_CVPR} proposed an approach that works in the context of unsupervised domain adaptation, in which a buffer containing prototypes is used to calculate a contrastive loss against the current batch of samples. In the approach proposed in \cite{kurniawan2021online}, which aims to solve online continual learning scenarios, the authors used several loss functions to train the model, one of which is based on the similarity calculated in the embedding space, and pulls closer the samples belonging to the same class. Other CL methods, that do not use the embeddings to regularize the training, have been proposed over the years. The approaches belonging to the regularization-based set fight CF by forcing the model's parameters, which are relevant for past tasks, to be as close as possible to the optimal parameters obtained when these past tasks were over. One of the first methods that use a regularization approach to fight CF is Elastic Weight Consolidation (EWC), proposed in \cite{kirkpatrick2017overcoming}, which assigns an importance scalar to each parameter of the model, slowing the change of the parameters that are considered more important to preserve the ability to solve past tasks; in some cases this constraining approach could be too strong, leading to an incapacity of the model to learn how to solve new tasks.
Other methods, such as the ones proposed in \cite{saha2020gradient} and \cite{farajtabar2020orthogonal}, regularize the model by forcing the gradients toward a region of the parameter space where CF is minimized for all the tasks, while the current one is being solved as efficiently as possible, by moving the weights within the space that satisfies all the constraints. Memory-based methods save a small number of samples from each solved task, or generate synthetic samples using generative models, to be used jointly with the current training samples, in order to preserve past learned knowledge. These methods are often called rehearsal methods, or pseudo-rehearsal when a generative model is involved. The most representative method in this set is proposed in \cite{lopez2017gradient}, which uses the samples from the external memory to estimate the gradients associated to past tasks, which are then used to modify the gradients associated with the current training samples, to solve, jointly, the current and the past tasks; moreover, this was the first method that could improve the scores obtained on past tasks, provided that the memory is large enough to be fully representative of past tasks. A more straightforward, yet very effective, approach is to use the external memory to augment the current batch, by concatenating a random batch extracted from the memory and the current batch, as proposed, for instance, in \cite{chaudhry2019tiny, riemer2018learning, yoon2021online}. Being a straightforward approach, many other similar approaches, as well as theoretical studies to understand what CF really is and how to fight it, have been proposed over the years \citep{rebuffi2017icarl, rolnick2019experience, ostapenko2022foundational}.
\section{Continual Learning Setup} \label{sec:continual_learning_setup} We define a supervised CL scenario as a set of N tasks $\text{T} = \{\mathcal{T}_i\}_{i=1...\text{N}}$, in which a task can be retrieved only when the training on the current one is over; when a new task is collected, the past ones cannot be retrieved anymore (except for testing purposes). Each task $\mathcal{T}_i$ is a set of tuples $(\mathbf{x}, y)$, where $\mathbf{x}$ is a sample, and $y \in Y_i$ is the label associated to it, where $Y_i$ is the set of classes contained in the task $\mathcal{T}_i$. Also, the tasks are disjoint, meaning that $Y_i \cap Y_j = \varnothing$ for any $i \neq j$ (a class cannot belong to two different tasks). The goal of a CL method is to help the model perform well on all tasks learned so far, by adapting to new tasks while preserving the previously learned knowledge. A method that solves a task at the expense of another is not desirable, and thus a trade-off must be achieved, by looking at the overall performance. Assuming that the tasks' boundaries are always known and well defined during the whole training procedure, i.e., we always know when a task is over or retrieved, we follow \cite{van2019three} to define two different scenarios, based on how the inference procedure is carried out: \begin{itemize} \item Task Incremental Learning (\textbf{TIL}{}): in which the task's identity of a sample is given during the inference phase. \item Class Incremental Learning (\textbf{CIL}{}): in which the task's identity of a sample is not given during the inference phase. \end{itemize} The difference is minimal, yet crucial. In fact, we can consider the first one as a simpler and more theoretical scenario, which is also the most studied one. Its main limitation is that, to correctly classify a sample, we must know the task from which the sample comes, and usually, this is not the case.
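The disjoint task construction described above can be sketched as follows; the class indices are placeholders, and the even split into N groups is an assumption of this sketch.

```python
# Minimal sketch of the CL setup: the class set of a dataset is split into
# N pairwise-disjoint groups Y_1, ..., Y_N, one per task.

def make_tasks(classes, n_tasks):
    """Split a list of class labels into n_tasks disjoint class sets."""
    per = len(classes) // n_tasks
    return [set(classes[i * per:(i + 1) * per]) for i in range(n_tasks)]

tasks = make_tasks(list(range(10)), 5)   # e.g. a 10-class dataset -> 5 tasks

# pairwise disjointness: Y_i ∩ Y_j = ∅ for i != j
assert all(len(a & b) == 0
           for i, a in enumerate(tasks) for b in tasks[i + 1:])
# every class belongs to exactly one task
assert set().union(*tasks) == set(range(10))
```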
In fact, such scenarios are more suitable for developing and testing novel methods, before adapting them to an agent that operates on more realistic scenarios. The second scenario is more difficult and the agents suffer CF drastically, because not only must the model be regularized, but the prediction space must also be extended to include the upcoming classes, leading to a faster forgetting of past tasks. It must be noted that the scenario definition is independent of the architecture of the neural network involved, which can have any topology, as long as the scenario's rules are followed. Nevertheless, usually, a multi-head strategy is adopted to operate in \textbf{TIL}{} scenarios, in which a backbone is shared, and, for each task, a smaller neural network, usually called head, is used to classify the samples belonging to that task; each head takes as input the output of the backbone, making the prediction spaces of the tasks well separated. The shared backbone is also used when operating in \textbf{CIL}{} scenarios, but the classification head is usually just one, whose output neurons are expanded (the newer classes are added during the training) when a new task is collected. \section{Centroids Matching{} (CM{}) framework} Our approach is inspired by the Prototypical Networks proposed in \cite{snell2017prototypical}, following the idea that there exists an embedding space, also called feature space, in which vectors cluster around the most probable centroid, representing a class, and called \textit{prototype}. Following this idea, our model does not follow a standard classification approach in which a cross-entropy loss is minimized, but instead extracts a feature vector from an input sample and forces the vector to be as close as possible to the correct centroid, representing the class in the embedding space of the task. In the following section, we explain how this approach can be used to easily mitigate CF in multiple CL scenarios.
\subsection{\textbf{TIL}{} scenario} Following Section \ref{sec:continual_learning_setup}, suppose the model consists of two separate components: a feature extractor, also called backbone, $\psi: \mathbb{R}^{\text{I}} \rightarrow \mathbb{R}^\text{D}$, where I is the size of an input sample and D is the dimension of the feature vector produced by the backbone, and a classifier head $f_i: \mathbb{R}^\text{D} \rightarrow \mathbb{R}^\text{E}$, one for each task, where E is the dimension of the task-specific feature vector. The backbone operates as a generic feature extractor, while each head is a specific model that, given the generic vector of features extracted by the backbone, transforms the vector into a vector of features for that specific task. Given an input sample $\mathbf{x}$ and the task $i$ from which the sample comes, the final vector, used for training and testing, is obtained by composing the two functions: $e_i(\mathbf{x}) = f_i \circ \psi(\mathbf{x})$. The backbone remains unique during the whole training, while a new head $f$ is added each time a new task is encountered. Both the backbone and all the heads are updated during the whole training process. When a new task $\mathcal{T}_i$ is available, we extract and remove a subset of the training set, named support set and identified as $S_i$, containing labelled training samples that won't be used to train the model. This support set is used to calculate the centroids, one for each class in the task. A centroid, for a given class $k$, in the space of the task $i$, is the average of the feature vectors extracted from the samples in the support set, calculated using the corresponding head: \begin{align} \textbf{c}^k_i = \frac{1}{\lvert S_i^k\rvert} \sum_{(\mathbf{x}, y) \in S_i^k} e_i(\mathbf{x}) \end{align} where $S_i^k$ is the subset of $S_i$ that contains only samples having label $k$. During the training, these centroids are calculated at each iteration, in order to keep them up to date.
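The centroid computation above can be sketched as follows; the embedding function here is a toy stand-in for $e_i = f_i \circ \psi$, not a trained backbone and head.

```python
# Sketch of the per-class centroid c_i^k: the mean of the task-head
# embeddings e_i(x) over the support-set samples S_i^k of class k.

def embed(x):
    """Toy stand-in for e_i(x); maps a 2-d sample to a 2-d embedding."""
    return [x[0] + x[1], x[0] - x[1]]

def centroids(support_set):
    """support_set: list of (sample, label) pairs -> {label: centroid}."""
    sums, counts = {}, {}
    for x, y in support_set:
        e = embed(x)
        if y not in sums:
            sums[y], counts[y] = [0.0] * len(e), 0
        sums[y] = [s + v for s, v in zip(sums[y], e)]
        counts[y] += 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

S = [((1.0, 0.0), 0), ((3.0, 0.0), 0), ((0.0, 1.0), 1), ((0.0, 3.0), 1)]
C = centroids(S)
assert C[0] == [2.0, 2.0] and C[1] == [2.0, -2.0]
```

In the paper the centroids are recomputed at every iteration from the held-out support set, so they track the evolving embedding function.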
Then, given the Euclidean distance function $d: \mathbb{R}^M \times \mathbb{R}^M \rightarrow \mathbb{R}_+$, and a sample $\mathbf{x}$, our approach produces a distribution over the classes based on the softmax distance between the features produced using $e_i(\mathbf{x})$ and the centroids associated to the current task: \begin{align} \label{eq:p} p(y = k \vert \mathbf{x}, i) = \frac{\text{exp}(-d(\textbf{c}^k_i, e_i(\mathbf{x})))}{\sum_{k' \in Y_i} \text{exp}(-d(\textbf{c}^{k'}_i, e_i(\mathbf{x})))} \end{align} We note that, in this scenario, it makes no sense to calculate the distances between different tasks' heads, since each head produces centroids placed in their own embedding space (see the left side of Fig. \ref{fig:scenarios}), without interfering with the others. The loss associated to a sample is then the logarithm of the aforementioned probability function: \begin{align} L(\mathbf{x}, k, i) = -\log p(y = k \vert \mathbf{x}, i) \end{align} If the current task is not the first one, in order to preserve past learned knowledge, we need to regularize the model. When a new task is collected, we clone the current model, both the backbone and all the heads created so far, which we indicate as $\overline{e_i}(\cdot)$, for each task $i$. Then, while training on the current task, we augment the loss using the distance between the features extracted using the cloned model and the one extracted by the current one, both calculated using the current set of training samples, without the support of an external memory containing past samples. The regularization term is the following: \begin{align} R(\mathbf{x}, t) = \frac{1}{t} \sum_{i < t} d\left(\overline{e_{i}}(\mathbf{x}), e_{i}(\mathbf{x})\right) \end{align} Using this simple regularization term, we force the model to preserve the ability to extract the same information that it was able to extract when the previous task was over. 
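The distance-softmax distribution and the regularization term above can be sketched together; the centroids, embeddings, and $\lambda$ below are toy values, not trained quantities.

```python
import math

# Sketch of p(y=k | x, i) = softmax_k(-d(c_i^k, e_i(x))) with Euclidean d,
# plus one term of the regularizer R (distance between the frozen-model
# embedding and the current one) and the resulting loss -log p + lambda*R.

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def class_probs(e, cents):
    """cents: {label: centroid}; returns {label: p(y=label | x)}."""
    logits = {k: -dist(c, e) for k, c in cents.items()}
    m = max(logits.values())                      # stabilized softmax
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    z = sum(exps.values())
    return {k: v / z for k, v in exps.items()}

cents = {0: [0.0, 0.0], 1: [4.0, 0.0]}   # toy centroids of the current task
e_new = [0.5, 0.0]                       # current embedding e_i(x)
e_old = [0.4, 0.1]                       # frozen-model embedding

p = class_probs(e_new, cents)
assert p[0] > p[1]                       # x sits closer to centroid 0
assert abs(sum(p.values()) - 1.0) < 1e-12

R = dist(e_old, e_new)                   # one term of the regularizer
lam = 0.1
loss = -math.log(p[0]) + lam * R         # loss for true class k = 0
assert loss > 0
```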
Moreover, since the regularization term is calculated only using samples from the current task, no external memory is needed to regularize the model. The overall regularization approach works because the past heads are trained at the same time as the new ones, while leaving the weights of the model unconstrained, as long as the output distance is minimized. Then, the final loss for a task which is not the first one is: \begin{align} \label{eq:loss} L_{ti}(\mathbf{x}, k, t) = -\log p(y = k \vert \mathbf{x}, t) + \lambda R(\mathbf{x}, t) \end{align} where $\lambda$ is a scalar used to balance the two terms. When a task is over, the final centroids, for the classes in the task, are calculated and saved. Thus, the external memory contains, when all the tasks are over, only the centroids for the classes seen during the training, so the required memory is negligible. To classify a sample $\mathbf{x}$ from a task $t$, we use the same approach used during the training process, based on the distance between the centroids of that task and the features extracted from the sample $\mathbf{x}$: \begin{align} y = \argminE_{k \in Y_t} \,\, d(\textbf{c}^k_t, e_t(\mathbf{x})) \end{align} This is possible because we always know from which task the sample comes. In the next section, we will show how this can be achieved when the task identity is not known. \subsection{\textbf{CIL}{} scenario} \label{sec:class-il} The class incremental scenario is more difficult than the \textbf{TIL}{} scenario, because the identity of the task is available only during the training process but not during the inference phase, requiring a different approach. Most of the methods fail to overcome CF in this scenario, mainly because a single head classifier is used to classify all the classes, whose limited capacity leads to faster CF.
Instead, in our approach, we keep the heads separated and regularized as in the \textbf{TIL}{} scenario, but, while training, we also project the embedding vectors into a shared embedding space, containing all the classes so far, leading to an easier classification phase. As before, we train on the first task $t$ using the loss \eqref{eq:loss}. If the task is not the first one, we also add a projection loss to the training loss, which is used to project all the embedding spaces into a single one. First of all, we use an external memory $\mathcal{M}$, which can contain a fixed number of samples for each task, or a fixed number of samples during the whole training, removing a portion of past images when new ones need to be stored. In our experiment, we use a fixed-size memory, which is resized each time a new task must be saved in the memory. If the current task $i$ is not the first, we augment the current dataset with the samples from the memory. Since the memory is smaller than the training set, we use an oversampling technique when sampling from the memory, so that each batch always contains samples from past tasks. We define the projected loss, which is a modified version of the equation \ref{eq:p}, as: \begin{align} \overline{p}(y = k \vert \mathbf{x}, i) = \frac{\text{exp}(-d(p_i(\textbf{c}^k_i), \frac{1}{i}\sum_{j \le i} p_j(e_j(\mathbf{x}))))}{\sum_{k' \in Y_i} \text{exp}(-d(p_i(\textbf{c}^{k'}_i), \frac{1}{i}\sum_{j \le i} p_j(e_j(\mathbf{x}))))} \end{align} where, with a slight abuse of notation, $Y_i = \bigcup_{j=1, \dots, i} Y_j$ now contains all the classes up to the current task, and the function $p_i(\cdot)$ is a projection function, one for each task, that projects the embeddings, and the centroids, from the task-wise embedding space to the shared embedding space. For this reason, the labels $y$ must be scaled accordingly using a simple scalar offset.
For a generic task $i$, the projection function $p_i$ is defined as: \begin{align} p_i(\mathbf{e}) = \mathbf{e} \cdot \text{Sigmoid}(s_i(\mathbf{e})) + t_i(\mathbf{e}) \end{align} in which the functions $s_i, t_i: \mathbb{R}^{\text{E}} \rightarrow \mathbb{R}^\text{E}$ are, respectively, the scaling and the translating function, implemented using two small neural networks, trained along with the backbone and the heads. The final loss for this scenario is defined as: \begin{align} \label{eq:ci_loss} L_{ci}(\mathbf{x}, k, t) = L_{ti}(\mathbf{x}, k, t) - \log \overline{p}(y = k \vert \mathbf{x}, t) \end{align} At inference time, if we know the task associated to a sample, we can perform the inference step as in the \textbf{TIL}{} scenario; otherwise, we classify the samples directly, without first inferring the corresponding task: \begin{align} y = \operatorname*{arg\,max}_{k \in Y} \,\, \overline{p}(y = k \vert \mathbf{x}, \text{N}) \end{align} where $Y$ is the set of all the classes seen during the training, and N is the total number of tasks. In this way, all the tasks are projected into the same embedding space, thus the classification of a sample does not require the task identity. \section{Experiments} \subsection{Experimental setting} \textbf{Dataset}: We conduct extensive experiments on multiple established benchmarks in the continual learning literature, exploring both the \textbf{TIL}{} and the harder \textbf{CIL}{} scenarios. The datasets we use to create the scenarios are: CIFAR10, CIFAR100 \citep{Krizhevsky09learningmultiple}, and TinyImageNet (a subset of ImageNet \citep{5206848} that contains $200$ classes and smaller images). To create a scenario we follow the established approach, where the classes from a dataset are grouped into N disjoint sets, with N the number of tasks to be created.
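A minimal sketch of the scale-and-translate projection, with toy stand-ins for the two small networks $s_i$ and $t_i$ (the constant maps below are assumptions, not the paper's architecture):

```python
import math

def sigmoid(v):
    """Element-wise logistic sigmoid."""
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

def scale_translate(e, s, t):
    """p_i(e) = e * Sigmoid(s_i(e)) + t_i(e), applied element-wise."""
    scale = sigmoid(s(e))
    shift = t(e)
    return [ei * si + ti for ei, si, ti in zip(e, scale, shift)]

# Toy "networks": sigmoid(0) = 0.5 halves each coordinate, then shift by 1.
s = lambda e: [0.0 for _ in e]
t = lambda e: [1.0 for _ in e]
print(scale_translate([2.0, 4.0], s, t))  # -> [2.0, 3.0]
```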
We use CIFAR10 to build a scenario composed of 5 tasks, each one with 2 classes; using CIFAR100 we create 10 tasks with 10 classes each; finally, the classes in TinyImageNet are grouped into 10 tasks, each one containing 20 classes. \textbf{Baselines}: we test our approach against several continual learning methods: Gradient Episodic Memory (GEM) \citep{lopez2017gradient}, Elastic Weight Consolidation (EWC) \citep{kirkpatrick2017overcoming}, Online EWC (OEWC) \citep{schwarz2018progress}, Experience Replay (ER) \cite{chaudhry2019tiny}, and Embedding Regularization (EmR) \citep{POMPONI2020139}; since the latter only works on \textbf{TIL}{} scenarios, the corresponding \textbf{CIL}{} results are omitted. We also use two baseline approaches: Naive Training, in which the model is trained without any countermeasure against CF, and Cumulative Training, in which the current training task is created by merging all past tasks as well as the current one. \textbf{Hyper-parameters}: for each method, we searched for the best hyper-parameters, following the results presented in the respective papers. For EWC, we used $100$ as the regularization strength for all the scenarios. For GEM we used a task memory of $500$ for CIFAR10 and $1000$ for the other experiments. In the EmR memory, we saved $200$ samples from each task. Lastly, for ER we used a fixed memory size of $500$ for CIFAR100 scenarios and $1000$ otherwise. Regarding our approach, the support set contains $100$ images from the training set of each task, and we set the penalty weight $\lambda$ to $0.1$ for CIFAR10 and to $0.75$ for CIFAR100 and TinyImageNet; regarding the \textbf{CIL}{} scenarios, we used a fixed-size memory of $500$ for each scenario. \textbf{Models and Training}: for each dataset we use the ResNet20 \citep{he2016deep} architecture, trained using SGD with the learning rate set to $0.01$ and the momentum to $0.9$.
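The class-to-task grouping used to build these scenarios can be sketched as follows (the seed-controlled shuffling variant is an assumption about how randomized runs could be built):

```python
import random

def make_tasks(classes, n_tasks, seed=None):
    """Group dataset classes into n_tasks disjoint, equally sized sets.
    With seed=None the grouping is incremental (first classes -> task 1);
    with a seed, classes are shuffled first."""
    classes = list(classes)
    if seed is not None:
        random.Random(seed).shuffle(classes)
    per_task = len(classes) // n_tasks
    return [classes[i * per_task:(i + 1) * per_task] for i in range(n_tasks)]

# CIFAR10: 5 tasks with 2 classes each.
tasks = make_tasks(range(10), n_tasks=5)
print(tasks[0])  # -> [0, 1]
```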
For each classification head, we used two linear layers: the first takes the output of the backbone as input and is followed by a ReLU activation function, while the output layer's size depends on the number of classes in the current task. Regarding our proposal, each head is composed of two linear layers, and it projects the output of the backbone into a vector of $128$ features (the output of the ResNet model has $64$ values). We repeat each experiment $5$ times; each time the seed of the experiment is changed in an incremental way (starting from $0$). Regarding EmR and our proposal, since these are not rehearsal methods when operating in the \textbf{TIL}{} scenario, after each task we save the statistics of the batch norm layers, which are retrieved when a sample from the corresponding task must be classified. We used the following augmentation scheme for all the datasets: the images are standardized, randomly flipped with probability $50\%$, and then a random portion of the image is cropped and resized to match the original size. A scenario is usually built by grouping the classes in an incremental way (the first $n$ classes form the first task, and so on). We use this approach for the first experiment; when the experiment is not the first, each of the N tasks is instead created using a randomly selected subset of classes. With this approach, a different scenario is built for each experiment, and more challenging scenarios can be created, since the correlation between the classes disappears. \textbf{Metrics}: to evaluate the efficiency of a CL method, we use two different metrics from \cite{diaz2018don}, both calculated on the results obtained on the test set of each task. The first one, called Accuracy, is the final accuracy obtained across all the test splits of the tasks, while the second one, called Backward Transfer (BWT), measures how much of that past accuracy is lost during the training on upcoming tasks.
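The two metrics just described can be computed from a matrix whose entry $(i, j)$ holds the accuracy on the test split of task $j$ after training on task $i$; a sketch with a made-up 3-task matrix:

```python
def accuracy(R):
    """Final average accuracy: mean of the last row of R."""
    n = len(R)
    return sum(R[n - 1]) / n

def bwt(R):
    """Backward transfer: average change of past-task accuracy
    after training on later tasks (negative = forgetting)."""
    n = len(R)
    total = sum(R[i][j] - R[j][j] for i in range(1, n) for j in range(i))
    return total / (0.5 * n * (n - 1))

# Hypothetical 3-task accuracy matrix (row i: after training task i).
R = [[0.9, 0.0, 0.0],
     [0.8, 0.9, 0.0],
     [0.7, 0.8, 0.9]]
print(accuracy(R))  # mean of the last row, about 0.8
print(bwt(R))       # negative: some accuracy was lost on past tasks
```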
To calculate the metrics we use a matrix $\mathbf{R} \in \mathbb{R}^{\text{N} \times \text{N}}$, in which an entry $\mathbf{R}_{i, j}$ is the accuracy obtained on the test split of task $j$ when the training on task $i$ is over. Using the matrix $\mathbf{R}$ we calculate the metrics as: \begin{multicols}{2} \begin{equation*} \text{Accuracy} = \frac{1}{\text{N}}\sum_{j=1}^{\text{N}} \mathbf{R}_{\text{N}, j} \,, \end{equation*} \hfill \begin{equation*} \text{BWT} = \frac{\sum_{i=2}^{\text{N}} \sum_{j=1}^{i-1} (\mathbf{R}_{i, j} - \mathbf{R}_{j, j})}{\frac{1}{2} \text{N} (\text{N}-1)} \,. \end{equation*} \end{multicols} In addition to these metrics, we also take into account the memory used by each method. The memory is calculated as the number of additional scalars, without counting the ones used to store the neural network, that must be kept in memory after the training of a task while waiting for the next one to be collected; the memory used during the training process is not counted as additional, since it can be discarded once the training is over. The formulas used to calculate the approximate required memory are: \begin{itemize} \item EWC: this approach saves a snapshot of the model after each task. The required memory is: N $\times$ P. \item OEWC: it is similar to EWC, but it saves only one set of parameters, which is updated after the training on each task. The final memory is P. \item ER, GEM, CM{} (\textbf{CIL}{}): these methods need an external memory in which a subset of each task is saved, and then used to regularize the training. The required memory depends on the number of images saved, and it is calculated as I$\times$M$\times$N. \item EmR: this approach requires not only the images but also the feature vector associated with each image (the output of the backbone). The required memory size is: (D + I)$\times$M$\times$N.
\item CM{} (\textbf{TIL}{}): requires only saving, after each task, the centroids of the classes in the task; thus, the memory size is E $\times$ T. \end{itemize} where N is the number of tasks, P is the number of parameters in the neural network, I is the dimension of the input images, D is the dimension of the feature vector extracted by the model, E is the dimension of the embeddings produced by our proposal, and T is the total number of classes seen during the training. \textbf{Implementation:} To perform all the experiments, we used the Avalanche framework, which implements the logic to create tasks and evaluate the CL approaches. Regarding the methods, we implemented EmR and Centroids Matching{} in Avalanche, while the others are already present in the framework. The code containing all the files necessary to replicate the experiments is available \href{https://anonymous.4open.science/r/CentroidsMatching/}{here}. \subsection{Results} \begin{table}[t] \centering \caption{Mean and standard deviation (in percentage), calculated over $5$ experiments, of the achieved Accuracy and BWT for each combination of scenario and method; some results are missing because the corresponding method does not work on that specific scenario.
The best results for each combination of dataset-scenario are highlighted in bold.} \label{table:main_results} \resizebox{\textwidth}{!}{% \begin{tabular}{|c|cc|cc|cc|cc|cc|cc|} \hline \multirow{2}{*}{Method} & \multicolumn{6}{c|}{\textbf{TIL}{}} & \multicolumn{6}{c|}{\textbf{CIL}{}} \\ \cline{2-13} & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c|}{CIFAR100} & \multicolumn{2}{c|}{TinyImageNet} & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c|}{CIFAR100} & \multicolumn{2}{c|}{TinyImageNet} \\ \cline{2-13} & \multicolumn{1}{c|}{BWT} & Accuracy & \multicolumn{1}{c|}{BWT} & Accuracy & \multicolumn{1}{c|}{BWT} & Accuracy & \multicolumn{1}{c|}{BWT} & Accuracy & \multicolumn{1}{c|}{BWT} & Accuracy & \multicolumn{1}{c|}{BWT} & Accuracy \\ \hline \hline Naive & \multicolumn{1}{c|}{$-34.60_{\pm 7.39}$} & $67.00_{\pm 4.98}$ & \multicolumn{1}{c|}{$-59.63_{\pm 14.50}$} & $26.75_{\pm 13.20}$ & \multicolumn{1}{c|}{$-61.39_{\pm 2.15}$} & \multicolumn{1}{c|}{$21.84_{\pm 1.12}$} & \multicolumn{1}{c|}{$-95.96_{\pm 0.97}$} & $18.00_{\pm 0.77}$ & \multicolumn{1}{c|}{$-79.64_{\pm 1.11}$} & $8.34_{\pm 0.09}$ & \multicolumn{1}{c|}{$-60.57_{0.7}$} & \multicolumn{1}{c|}{$6.26_{\pm 0.04}$} \\ \hline % Cumulative & \multicolumn{1}{c|}{$-2.75_{\pm 0.99}$} & $93.83_{\pm 1.12}$ & \multicolumn{1}{c|}{$2.03_{\pm 3.84}$} & $75.05_{\pm 10.71}$ & \multicolumn{1}{c|}{$6.33_{\pm 0.77}$} & \multicolumn{1}{c|}{$63.03_{\pm 1.06}$} & \multicolumn{1}{c|}{$-2.84_{\pm 1.59}$} & $86.42_{\pm 0.32}$ & \multicolumn{1}{c|}{$-3.28_{\pm 0.04}$} & $59.86_{\pm 0.45}$ & \multicolumn{1}{c|}{$-5.88_{\pm 0.11}$} & \multicolumn{1}{c|}{$29.22_{\pm 0.64}$} \\ \hline \hline % EWC & \multicolumn{1}{c|}{$-16.15_{\pm 7.11}$} & $77.06_{\pm 4.47}$ & \multicolumn{1}{c|}{$-5.11_{\pm 0.91}$} & \multicolumn{1}{c|}{$58.62_{\pm 0.91}$} & \multicolumn{1}{c|}{$-5.68_{\pm 3.56}$} & \multicolumn{1}{c|}{$27.41_{\pm 2.1}$} & \multicolumn{1}{c|}{$-92.52_{\pm 2.58}$} & $17.07_{\pm 0.89}$ & \multicolumn{1}{c|}{$-63.54_{\pm 1.36}$} & 
$6.13_{\pm 0.32}$ & \multicolumn{1}{c|}{$-43.58_{\pm 6.70}$} & \multicolumn{1}{c|}{$0.5$} \\ \hline OEWC & \multicolumn{1}{c|}{$ -15.67_{\pm 9.70}$} & $76.07_{\pm 6.60}$ & \multicolumn{1}{c|}{$-6.37_{\pm 2.69}$} & $59.56_{\pm 1.61}$ & \multicolumn{1}{c|}{$-7.60_{\pm 5.31}$} & \multicolumn{1}{c|}{$24.37_{\pm 19.37}$} & \multicolumn{1}{c|}{$-90.01_{\pm 3.12}$} & $15.76_{\pm 1.12} $ & \multicolumn{1}{c|}{$-61.43_{\pm 2.03}$} & $5.97_{\pm 0.94}$ & \multicolumn{1}{c|}{$-46.23_{\pm 6.48}$} & \multicolumn{1}{c|}{$0.5$} \\ \hline ER & \multicolumn{1}{c|}{$-2.95_{\pm 0.67}$} & $90.56_{\pm 0.64}$ & \multicolumn{1}{c|}{$-8.42_{\pm 0.08}$} & $70.55_{\pm 0.79}$ & \multicolumn{1}{c|}{$-17.15_{\pm 0.05}$} & \multicolumn{1}{c|}{$43.31_{\pm 0.72}$} & \multicolumn{1}{c|}{$-50.14_{\pm 1.81}$} & $52.60_{\pm 1.38}$ & \multicolumn{1}{c|}{$-67.22 _{\pm 0.39}$} & $25.09_{\pm 0.22}$ & \multicolumn{1}{c|}{$-53.39_{\pm 0.72}$} & \multicolumn{1}{c|}{$8.08_{\pm 0.28}$} \\ \hline GEM & \multicolumn{1}{c|}{$-4.87_{\pm 1.56}$} & $90.15_{\pm 1.19}$ & \multicolumn{1}{c|}{$-10.58_{\pm 0.35}$} & $71.85_{\pm 0.37}$ & \multicolumn{1}{c|}{$-57.39_{\pm 0.59}$} & \multicolumn{1}{c|}{$14.08_{\pm 0.15}$} & \multicolumn{1}{c|}{$-80.17_{\pm 1.59}$} & $22.86_{\pm 1.41}$ & \multicolumn{1}{c|}{$-62.71_{\pm 1.56}$} & $17.09_{\pm 0.92}$ & \multicolumn{1}{c|}{$-53.14_{\pm 0.85}$} & \multicolumn{1}{c|}{$5.55_{\pm 0.21}$} \\ \hline EmR & \multicolumn{1}{c|}{$-2.30_{\pm 0.98}$} & $91.39_{\pm 1.51}$ & \multicolumn{1}{c|}{$-2.75_{\pm 0.32}$} & $72.03_{\pm 0.95}$ & \multicolumn{1}{c|}{$-8.43_{\pm 1.00}$} & \multicolumn{1}{c|}{$46.88_{\pm 2.03}$} & \multicolumn{1}{c|}{$-$} & $-$ & \multicolumn{1}{c|}{-} & - & \multicolumn{1}{c|}{$-$} & \multicolumn{1}{c|}{$-$}\\ \hline CM{} & \multicolumn{1}{c|}{$-2.09_{\pm 0.71}$} & $\mathbf{92.72_{\pm 1.33}}$ & \multicolumn{1}{c|}{$-5.88_{\pm 0.90}$} & $\mathbf{74.76_{\pm 1.17}}$ & \multicolumn{1}{c|}{$-13.45_{\pm 3.62}$} & \multicolumn{1}{c|}{$\mathbf{47.80_{\pm 2.93}}$} &
\multicolumn{1}{c|}{$-18.71_{\pm 10.84}$} & $\mathbf{64.64_{\pm 12.78}}$ & \multicolumn{1}{c|}{$-62.00_{\pm 1.23}$} & $\mathbf{27.91_{\pm 0.39}}$ & \multicolumn{1}{c|}{$-52.13_{\pm 0.91}$} & \multicolumn{1}{c|}{$\mathbf{12.04_{\pm 0.32}}$} \\ \hline \end{tabular}% } \end{table} \textbf{Classification results:} Table \ref{table:main_results} summarizes all the results obtained across the experiments, in terms of Accuracy and BWT. We can see that CM{} significantly improves the results in all the scenarios. The improvements on \textbf{TIL}{} scenarios are significant, with an accuracy that is close to the upper bound set by the cumulative strategy. Moreover, the results are better than those of all the other methods when it comes to \textbf{CIL}{} scenarios. Surprisingly, the ER approach also achieves good results in all the scenarios, although not as good as the ones achieved by our proposal. The results clearly show the difficulty gap between \textbf{TIL}{} and \textbf{CIL}{}: approaches that work well on the former drastically fail to overcome CF on the latter; moreover, it seems that only methods that use an external memory are capable of achieving good results on \textbf{CIL}{}, and parameter regularization alone is not enough to fight CF. \begin{figure} \centering \begin{subfigure}[t]{0.4\linewidth} \centering \includegraphics[width=1\linewidth]{figures/memory_cifar10.pdf} \caption{How the accuracy score, obtained on the CIFAR10 \textbf{CIL}{} scenario, changes when the number of samples saved in the memory changes.} \label{fig:memory_reh} \end{subfigure} \hfill \begin{subfigure}[t]{0.4\linewidth} \centering \includegraphics[width=\linewidth]{figures/memory.pdf} \caption{How the required memory grows when the number of tasks increases.
The images have size $3 \times 32 \times 32$.} \label{fig:memory_all} \end{subfigure} \caption{The images show the required memory for each method, as well as how the accuracy changes when the rehearsal memory grows (only for methods that require an external memory containing past samples).} \label{fig:memory} \end{figure} \textbf{Memory comparison:} the memory required by CL methods is a crucial aspect, and in this section we study how the accuracy score correlates with it. Figure \ref{fig:memory} shows all the aspects related to the memory size. All the results are obtained using the same training configuration used for the main experiments. Figure \ref{fig:memory_reh} shows the memory usage, correlated with the achieved accuracy score, of each method when solving the CIFAR10 \textbf{CIL}{} scenario. Firstly, we see that GEM is not capable of achieving competitive results even when a large subset of samples is saved, while the others achieve good results even with a smaller memory; this is probably because GEM requires a large number of samples to correctly estimate the gradients used to regularize the training. Regarding the other two approaches, we see that our proposal achieves better results. Not all the methods require an external memory containing samples from past tasks, and Figure \ref{fig:memory_all} shows how the memory required by each method changes when the number of tasks grows. We clearly see that, when it comes to solving \textbf{TIL}{} problems, our proposal requires a smaller memory than all the others.
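The memory-footprint formulas listed earlier can be wrapped in a small calculator for quick comparisons (the parameter names are mine, and the numbers below are illustrative):

```python
def memory_footprint(method, n_tasks=1, n_params=0, img_size=0,
                     m_per_task=0, feat_dim=0, emb_dim=0, n_classes=0):
    """Extra scalars kept between tasks, following the formulas above:
    N tasks, P parameters, I image size, M stored samples per task,
    D feature size, E embedding size, T total classes."""
    formulas = {
        "EWC": n_tasks * n_params,                         # N x P
        "OEWC": n_params,                                  # P
        "rehearsal": img_size * m_per_task * n_tasks,      # ER / GEM / CM (CIL)
        "EmR": (feat_dim + img_size) * m_per_task * n_tasks,
        "CM-TIL": emb_dim * n_classes,                     # one centroid per class
    }
    return formulas[method]

# Illustrative CIFAR10-like numbers: 5 tasks, 3*32*32 images, 500 samples/task.
print(memory_footprint("CM-TIL", emb_dim=128, n_classes=10))  # -> 1280
print(memory_footprint("rehearsal", n_tasks=5, img_size=3 * 32 * 32,
                       m_per_task=500))
```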
When looking at the results in Table \ref{table:main_results} for \textbf{CIL}{} problems, and combining them with the curves in Figure \ref{fig:memory_all}, we can conclude that, despite the large amount of memory required by some methods, few of them are capable of achieving good results; on the other hand, when solving \textbf{CIL}{} scenarios our approach becomes a rehearsal one, and the required memory is almost the same as that of other rehearsal approaches. \begin{figure}[t] \centering \begin{subfigure}[t]{0.3\linewidth} \centering \includegraphics[width=\linewidth]{figures/spaces/task0_aftertask0.png} \caption{The embedding space when the first task is over.} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\linewidth} \centering \includegraphics[width=\linewidth]{figures/spaces/task0_aftertask2.png} \caption{The embedding space when the third task is over.} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\linewidth} \centering \includegraphics[width=\linewidth]{figures/spaces/task0_aftertask4.png} \caption{The embedding space when the last task is over.} \end{subfigure} \caption{The images show how the embedding space associated with the first task of the CIFAR10 \textbf{TIL}{} scenario changes while training on new tasks. We can see that, despite small changes in the positions of the samples, the overall space is preserved during the whole training. The points are projected into a bi-dimensional space using PCA \citep{pca_paper}.
Best viewed in color.} \label{fig:space_task0} \end{figure} \begin{figure} \centering \begin{subfigure}[t]{0.23\linewidth} \centering \includegraphics[width=\linewidth]{figures/spaces/ci_task0.pdf} \caption{The embedding space when the second task is over.} \end{subfigure} \hfill \begin{subfigure}[t]{0.23\linewidth} \centering \includegraphics[width=\linewidth]{figures/spaces/ci_task1.pdf} \caption{The embedding space when the third task is over.} \end{subfigure} \hfill \begin{subfigure}[t]{0.23\linewidth} \centering \includegraphics[width=\linewidth]{figures/spaces/ci_task2.pdf} \caption{The embedding space when the fourth task is over.} \end{subfigure} \hfill \begin{subfigure}[t]{0.23\linewidth} \centering \includegraphics[width=\linewidth]{figures/spaces/ci_task3.pdf} \caption{The embedding space when the last task is over.} \end{subfigure} \caption{The images show how the merged embedding space obtained on CIFAR10 \textbf{CIL}{} changes during the training on all the tasks. The images clearly show that new classes are added without interfering with the ones already present in the space. To clearly visualize the clustering space, we used Voronoi diagrams over the 2D projections of the centroids, obtained using PCA \citep{pca_paper}. The samples are omitted for clarity. Best viewed in color.} \label{fig:space_merged} \end{figure} \textbf{Analysis of the embedding spaces produced by Centroids Matching{}:} in this section we analyze how the proposed regularization approach influences the shape of the embedding space produced by a model. In Figure \ref{fig:space_task0} we see how the embedding space, extracted from a model trained on the CIFAR10 \textbf{TIL}{} scenario, changes while new tasks are learned: the regularization term is capable of keeping the embedding space almost unchanged, and well separable, during the whole training process.
It is also interesting to see how our proposal merges the embedding spaces during the training on a \textbf{CIL}{} scenario, and this aspect is shown in Figure \ref{fig:space_merged}. We can see that the classes remain highly separable even in the late stages of the training procedure. The merged space is built in an incremental way, by inserting new classes into the existing embedding space without moving the centroids already present. For example, we see that, when passing from the first space to the second one, two new classes are added on the left of the existing embedding space, without interfering with the existing centroids. This is possible because the distance regularization works well, and also because the approach is capable of adapting the model to the embedding space, by correctly projecting the centroids and the samples of newer tasks. \subsection{Ablation Study} \textbf{How does the dimension of the support set affect the training procedure?} Since the support set is crucial to our proposal, we expect its dimension to affect the overall training procedure. On the other hand, we also expect that, once the number of support samples exceeds a certain threshold, the results are not affected anymore, since the same centroids could be calculated using fewer samples. Table \ref{table:support_set} shows the results obtained while changing the dimension of the support set. We can clearly see that, under a certain threshold, the results are very close. When the threshold is exceeded, we see a decrease in the achieved accuracy score. This could happen because more images are removed from the training set in order to create the support set, and this negatively affects the results, since some patterns could be missing from the training dataset. \textbf{Comparing merging procedures for the CIL scenario.} As described in Section \ref{sec:class-il}, the function used to merge the embedding spaces is a scale-plus-translation function.
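Since centroids are simply per-class means over the support-set embeddings, the computation behind this ablation can be sketched as follows (the 2-D embeddings are made up for illustration):

```python
def class_centroids(embeddings, labels):
    """Mean embedding per class, computed from a support set."""
    sums, counts = {}, {}
    for e, y in zip(embeddings, labels):
        acc = sums.setdefault(y, [0.0] * len(e))
        for i, v in enumerate(e):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

# Hypothetical 2-D support embeddings for two classes.
emb = [[1.0, 1.0], [3.0, 1.0], [0.0, 4.0], [0.0, 6.0]]
lab = [0, 0, 1, 1]
print(class_centroids(emb, lab))  # -> {0: [2.0, 1.0], 1: [0.0, 5.0]}
```

Because each centroid is an average, a moderately sized support set is already enough to estimate it reliably, which matches the plateau observed in the ablation.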
Here, we study how the choice of the merging function $p_i(\cdot)$ affects the results. To this end, we implemented different functions: \begin{itemize} \item Scale-Translate: the merging function proposed in Section \ref{sec:class-il}. \item Linear: a simple linear layer is used to project the embedding vectors into the new space. \item Offset: an offset is calculated using a linear layer on the embedding, and it is used to shift the embeddings of a given task. \item None: the merging step is performed directly on the embeddings outputted by the model. \end{itemize} For each approach, the weights of the merging networks are shared between the centroids and the embeddings of the same task, to avoid adding too many parameters. The results are shown in Table \ref{table:merging}. We see that the Scale-Translate approach achieves better results, on average, than all the other approaches, probably due to its inherent capacity to transform the embeddings. The only exception is the second task, on which the approach mentioned above loses more accuracy. Also, as expected, the approach None achieves the worst results but, surprisingly, it is still capable of achieving a decent average accuracy. \begin{table}[t] \centering \caption{The results, in terms of average and standard deviation calculated over 2 runs, obtained on the CIFAR10 \textbf{CIL}{} scenario when varying the merging strategy used, are shown.
The results are both in terms of Accuracy and BWT (in brackets), and both are calculated when the training on the last task is over.} \label{table:merging} \resizebox{0.8\textwidth}{!}{% \begin{tabular}{|l|c|c|c|c|c|c|} \hline & Task 1 & Task 2 & Task 3 & Task 4 & Task 5 & Accuracy \\ \hline Scale-Translate & $58.90\, (-33.99)$ & $34.35\, (-50.75)$ & $58.60\, (-26.25)$ & $75.05\, (-20.30)$ & $93.60$ & $64.10\, (-33.99)$\\ \hline Linear & $43.15 \, (-54.70)$ & $51.00 \, (-35.40)$ & $46.55 \, (-45.80)$ & $63.35 \, (-24.95)$ & $89.35$ & $58.68 \, (-40.21)$ \\ \hline Offset & $47.25 \, (-50.90)$ & $46.30 \, (-42.05)$ & $44.35 \, (-46.15)$ & $61.25 \, (-28.55)$ & $87.85$ & $57.40 \, (-41.91)$ \\ \hline None & $43.45(-53.90)$ & $41.70(-46.95)$ & $42.00(-50.06)$ & $65.01(-21.74)$ & $87.80$ & $56.01(-41.70)$ \\ \hline \end{tabular}% } \end{table} \textbf{How does $\mathbf{\lambda}$ affect forgetting?} In this section we analyze how the parameter $\lambda$, used to balance the regularization term in Eq.~\ref{eq:loss}, affects the obtained results. The results are shown in Table \ref{table:reg}: setting $\lambda$ too high leads the training process to fail when the scenario is a \textbf{CIL}{} one, and inhibits the training when it comes to \textbf{TIL}{} scenarios (achieving small forgetting but a small overall accuracy). Moreover, the more the regularization term grows, the more the results degenerate; the same is true when $\lambda$ is too small, leading to a model that is not able to correctly remember past tasks. In the end, we can conclude that the best results are obtained when the weight parameter is small but not too close to zero, leading to a model with a balanced trade-off between remembering past tasks and training on the current one.
\textbf{Different merging approaches for embeddings and centroids.} In the experiments so far we considered only the setting in which the same merging strategy, with the same weights, is used for both embeddings and centroids. In this section, we also explore applying different strategies, or the same strategy with different weights, to centroids and embeddings, in order to better understand which approach works best in isolation. The results of this study are shown in Table \ref{table:merging1}. We can see that the best results, overall, are achieved when the Scale-Translate merging approach is applied to the embeddings. By combining these results with the ones in Table \ref{table:merging}, we can conclude that the best models are obtained when both the centroids and the embeddings are projected using the same Scale-Translate layer. \begin{table}[t] \centering \begin{minipage}[t]{0.48\textwidth} \centering \captionof{table}{The table shows the accuracy, averaged over 2 runs, obtained while changing the number of samples in the support set.
The results are calculated using CIFAR10 scenarios; the hyperparameters are the same used in the main experimental section.} \label{table:support_set} \begin{tabular}{l|ccccc|} \cline{2-6} & \multicolumn{5}{c|}{Support set size} \\ \cline{2-6} & \multicolumn{1}{c|}{10} & \multicolumn{1}{c|}{50} & \multicolumn{1}{c|}{100} & \multicolumn{1}{c|}{200} & \multicolumn{1}{c|}{500}\\ \hline \multicolumn{1}{|l|}{\textbf{TIL}{}} & \multicolumn{1}{c|}{$90.86$} & \multicolumn{1}{c|}{$90.86$} & \multicolumn{1}{c|}{$91.70$} & \multicolumn{1}{c|}{$89.79$} & \multicolumn{1}{c|}{$90.70$} \\ \hline \multicolumn{1}{|l|}{\textbf{CIL}{}} & \multicolumn{1}{c|}{$59.63$} & \multicolumn{1}{c|}{$63.97$} & \multicolumn{1}{c|}{$63.55$} & \multicolumn{1}{c|}{$63.16$} & \multicolumn{1}{c|}{$61.04$} \\ \hline \end{tabular} \end{minipage} \hfill \begin{minipage}[t]{0.48\textwidth} \centering \captionof{table}{The table shows how combining the merging strategies, used by Centroids Matching{}, affects the final accuracy obtained on CIFAR10 \textbf{CIL}{}.} \label{table:memory} \begin{tabular}{cc|cccc|} \cline{3-6} \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & \multicolumn{4}{c|}{Embeddings} \\ \cline{3-6} \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{S-T} & \multicolumn{1}{c|}{MLP} & \multicolumn{1}{c|}{Offset} & None \\ \hline \multicolumn{1}{|c|}{\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Centroids}}}} & S-T & \multicolumn{1}{c|}{62.40} & \multicolumn{1}{c|}{$56.92$} & \multicolumn{1}{c|}{$60.91$} & $60.67$ \\ \cline{2-6} \multicolumn{1}{|c|}{} & MLP & \multicolumn{1}{c|}{$ 62.13$} & \multicolumn{1}{c|}{56.55} & \multicolumn{1}{c|}{$ 59.25$} & $62.05$ \\ \cline{2-6} \multicolumn{1}{|c|}{} & Offset & \multicolumn{1}{c|}{$62.84$} & \multicolumn{1}{c|}{$60.94$} & \multicolumn{1}{c|}{58.37} & $61.82$ \\ \cline{2-6} \multicolumn{1}{|c|}{} & None & \multicolumn{1}{c|}{$61.69$} & \multicolumn{1}{c|}{$62.05$} & \multicolumn{1}{c|}{$60.67$} & 59.05 \\ \hline 
\end{tabular}% \label{table:merging1} \end{minipage} \end{table} \begin{table}[t] \centering \caption{The results, in terms of average and standard deviation calculated over 2 runs, obtained on the CIFAR10 scenarios when varying the regularization weight $\lambda$, are shown. The results are both in terms of Accuracy and BWT (in brackets), and both are calculated when the training on the last task is over.} \label{table:reg} \resizebox{0.8\textwidth}{!}{% \begin{tabular}{|l|c|c|c|c|c|} \hline & $0.01$ & $0.1$ & $1$ & $10$ & $100$ \\ \hline C10 \textbf{TIL}{} & $89.25\, (-6.08)$ & $91.39\, (-8.94)$ & $79.39\, (-10.94)$ & $50.00\, (-12.03)$ & $50.00\, (-12.01)$ \\ \hline C10 \textbf{CIL}{} & $59.75 \, (-42.75)$ & $62.32 \, (-39.51)$ & $49.57\, (-38.82)$ & $42.19 \, (-17.70)$ & $15.95 \, (-15.95)$ \\ \hline \end{tabular}% } \end{table} \section{Conclusions} In this paper, we proposed an approach to overcome CF in multiple CL scenarios. Operating on the embedding space produced by the models, our approach is capable of effectively regularizing the model, leading to lower CF, while requiring no external memory in the easier CL scenarios. The approach shows that operating at a lower level, the embedding space, can lead to better CL methods, while also offering the possibility of analyzing the embedding space to understand how the tasks, and the classes within them, interact. \bibliographystyle{abbrvnat}
\section*{Computational details} In this section we outline the computational approach and model used to investigate excitonic complexes. Our technique of choice is diffusion Monte Carlo (DMC). Because DMC is such a widely used approach, we do not give a detailed account of the method, and refer the reader to more complete technical discussions~\cite{supp-Needs:1,supp-Ceperley:52,supp-Foulkes:53}. DMC calculations use the imaginary time Schr\"odinger equation along with a guiding wavefunction $\Psi_{\textrm{T}}$ to project out excited states from an initial wavefunction $\Phi(t=0)$, propagating it in imaginary time until the true ground state wavefunction $\psi_0$ is found. If we define the mixed probability $f(\textbf{R},t) = \Psi_{\textrm{T}}(\textbf{R})\Phi(\textbf{R},t)$~\cite{supp-Grimm:54,supp-Kalos:55}, where $\textbf{R}$ contains the spatial coordinates of all quantum particles, then the equation of motion for $f(\textbf{R},t)$ derived from the imaginary time Schr\"odinger equation is \begin{equation} -\frac{\partial f(\textbf{R},t)}{\partial t} = -\frac{1}{2}\nabla_{\textbf{R}}^2 f(\textbf{R},t) + \nabla_{\textbf{R}}\cdot[\textbf{v}(\textbf{R})f(\textbf{R},t)] + (E_{\textrm{L}}(\textbf{R}) - E_{\textrm{ref}})f(\textbf{R},t), \end{equation} where \begin{equation} \textbf{v}(\textbf{R}) = \Psi_{\textrm{T}}^{-1}(\textbf{R})\nabla_{\textbf{R}}\Psi_{\textrm{T}}(\textbf{R}), \end{equation} \begin{equation} E_{\textrm{L}}(\textbf{R}) = \Psi_{\textrm{T}}(\textbf{R})^{-1}\hat{H}\Psi_{\textrm{T}}(\textbf{R}), \end{equation} and $E_{\textrm{ref}}$ is an arbitrary energy offset. A solution to the importance-sampled imaginary-time Schr\"odinger equation is then sampled using approximate Green's functions that result in the drift-diffusion motion and the branching action~\cite{supp-Needs:1}.
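A toy, non-importance-sampled version of the diffusion-plus-branching propagation described above can be sketched for a 1-D harmonic oscillator (this deliberately omits the drift term, the guiding wavefunction, and the population control of a production DMC code):

```python
import math
import random

def dmc_step(walkers, dt, e_ref, potential, rng):
    """One diffusion + branching step of plain DMC: each walker takes a
    Gaussian diffusion step, then is replicated or killed according to
    the branching weight exp(-dt * (V(x) - E_ref))."""
    new_walkers = []
    for x in walkers:
        x_new = x + rng.gauss(0.0, math.sqrt(dt))        # diffusion
        w = math.exp(-dt * (potential(x_new) - e_ref))   # branching weight
        copies = int(w + rng.random())                   # stochastic rounding
        new_walkers.extend([x_new] * copies)
    return new_walkers

# Toy system: 1-D harmonic oscillator, exact ground-state energy 0.5 a.u.
rng = random.Random(0)
walkers = [0.0] * 1000
for _ in range(50):
    walkers = dmc_step(walkers, dt=0.01, e_ref=0.5,
                       potential=lambda x: 0.5 * x * x, rng=rng)
print(len(walkers))  # population stays roughly stable when e_ref is near E_0
```

When $E_{\textrm{ref}}$ matches the ground-state energy the walker population neither explodes nor dies out, which is how $E_{\textrm{ref}}$ is tuned in practice.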
For the systems we consider, the exact ground state wave function is nodeless, so DMC yields {\em exact} ground state energies and a sampling of the {\em exact} ground state wavefunctions. The guiding wavefunction used in this work is of the form $\Psi_{\textrm{T}}(\textbf{R}) = e^{J(\textbf{R})}$, which contains the Jastrow factor introduced in Ref.~\cite{supp-Aleiner:4} adapted specifically to the potential~(2). The Jastrow term contains two-body electron-hole and electron-electron (hole-hole) terms \begin{equation} \label{ueh} u_{\textrm{eh}}(r) = c_1r^2\ln(r)e^{-c_2r^2} - c_3 r \left(1 - e^{-c_2r^2}\right), \end{equation} \begin{equation} \label{uee} u_{\textrm{ee}}(r) = c_4r^2\ln(r)e^{-c_5r^2}. \end{equation} The constants $c_1 = m_\textrm{e}m_\textrm{h}/2(m_\textrm{e} + m_\textrm{h})$ and $c_4 = -m_\textrm{e}/4$ ($m_{\textrm{e,h}}$ are the effective masses of the carriers) were chosen to satisfy the logarithmic analogue of the Kato cusp conditions; the other constants were optimized via unreweighted variance minimization to improve the efficiency of the Monte Carlo sampling~\cite{supp-Umrigar:5,supp-Drummond:6}. A blocking method is used to gauge the correlation timescales for the energy estimates, and yields accurate standard deviations for the final average~\cite{supp-Flyvbjerg:12}. Energy estimates were obtained for calculations with $\Delta t \in \{0.01,0.03,0.1\}$, and then extrapolated to zero timestep. All reported DMC probability distributions were sampled from forward-walking DMC calculations~\cite{supp-Barnett:7} with the optimal guiding wave function described by Eqs.~(\ref{ueh})--(\ref{uee}), and $\Delta t = 0.01$. A forward walking time of 300 a.u.~was used for calculations employing the Keldysh potential; that time was 30 a.u.~for calculations using the Coulomb potential. Screening lengths and effective masses for all materials studied were determined in Ref.~\cite{supp-Berkelbach:8}. 
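The two Jastrow terms of Eqs.~(\ref{ueh})--(\ref{uee}) are straightforward to evaluate numerically; in the sketch below the masses and the variational parameters $c_2$, $c_3$, $c_5$ are illustrative values, not the optimized ones:

```python
import math

def u_eh(r, m_e, m_h, c2, c3):
    """Electron-hole Jastrow term, Eq. (u_eh); c1 is fixed by the
    logarithmic cusp condition, c2 and c3 are variational parameters."""
    c1 = m_e * m_h / (2.0 * (m_e + m_h))
    return (c1 * r * r * math.log(r) * math.exp(-c2 * r * r)
            - c3 * r * (1.0 - math.exp(-c2 * r * r)))

def u_ee(r, m_e, c5):
    """Electron-electron (hole-hole) term, Eq. (u_ee), with c4 = -m_e / 4
    fixed by the cusp condition."""
    c4 = -m_e / 4.0
    return c4 * r * r * math.log(r) * math.exp(-c5 * r * r)

# Illustrative effective masses and parameters (not from the paper).
print(u_eh(1.0, m_e=0.5, m_h=0.6, c2=1.0, c3=0.2))
print(u_ee(1.0, m_e=0.5, c5=1.0))  # log(1) = 0, so this term vanishes at r = 1
```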
Potentials of the type (2) were first discussed by Keldysh~\cite{supp-Keldysh:3} and have been used to treat excitonic properties in quasi two-dimensional materials in several recent studies~\cite{supp-Cudazzo:10,supp-Cudazzo:43,supp-Berkelbach:8,supp-Berghauser:44,supp-Zhang:45,supp-Fogler:46,supp-Wang:47,supp-Wu:48}. The potential~(2) behaves as $1/r$ at large $r/r_0$, but diverges more weakly as $\ln ({r/r_0})$ near $r = 0$. The crossover point is related to the screening distance $r_0 = 2\pi\chi_{\textrm{2D}}$, where $\chi_{\textrm{2D}}$ is the two-dimensional polarizability of the TMDC layer. For computational efficiency and consistency with past variational calculations, we use a modified form of the effective potential~(2), given by \begin{equation} \label{keldysh2} V^\prime(r) = -\frac{1}{r_0}\left[\ln\left(\frac{r}{r + r_0}\right) + (\gamma - \ln 2)e^{-r/r_0}\right], \end{equation} where $\gamma$ is Euler's constant~\cite{supp-Cudazzo:10}. Calculations with the unaltered potential~(2) typically result in exciton ground-state energies that are merely 2--3 meV lower than those produced with the modified form~(\ref{keldysh2}).
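A quick numerical check of the quoted asymptotics of the modified potential~(\ref{keldysh2}) can be sketched as follows; the screening length is an arbitrary value in model units (material-specific values come from the reference cited above).

```python
import numpy as np

GAMMA = 0.5772156649015329   # Euler's constant

def v_keldysh_approx(r, r0):
    """Modified Keldysh-type interaction V'(r) from the equation above."""
    return -(1.0 / r0) * (np.log(r / (r + r0))
                          + (GAMMA - np.log(2.0)) * np.exp(-r / r0))

r0 = 10.0   # screening length (arbitrary model units)

# large-r limit: V'(r) approaches the bare 1/r form
print(v_keldysh_approx(1000.0, r0), 1.0 / 1000.0)

# small-r limit: weak logarithmic divergence ~ -(1/r0) ln(r/r0)
print(v_keldysh_approx(1e-4, r0), -(1.0 / r0) * np.log(1e-4 / r0))
```

Both limits agree with the evaluated potential to about one percent at the chosen sample points, consistent with the crossover behaviour described in the text.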
\section{Introduction} The rapid development of experimental quantum optics in the past decades has led to the establishment of quantum key distribution (QKD) \cite{Gisin02,Scarani09}. This branch of quantum information science is aimed at the development and implementation of methods for the secure distribution of cryptographic keys so that the very laws of quantum physics provide the security of the key. The key can then be used in the one-time-pad classical cryptographic system, thus ideally providing unconditionally secure information transmission. A recent addition to QKD is the use of homodyne detection of continuous-variable (CV) states \cite{Weed2012}. Supported by the Gaussian security proofs \cite{wolf06,Nav06,GP06,Lev13}, this allows the use of multiparticle quantum states in the typical photonic implementations of QKD in order to improve the applicability and stability of QKD protocols. CV QKD protocols were first suggested and studied on the basis of nonclassical squeezed states of light \cite{Ralph99,Cerf01}. An important step in CV QKD was made when the semiclassical coherent states were shown to be in principle sufficient for key distribution \cite{Grosshans02} even at long distances, using Gaussian modulation and reverse information reconciliation \cite{Grosshans03}. The coherent-state protocol was successfully tested in mid-range (25 km) \cite{Lod10} and long-range (80 km) \cite{Jou13} optical fiber channels, while a proof-of-principle laboratory test of the squeezed-state protocol was performed recently \cite{Mad12}. It was shown in particular that the squeezed-state protocol can potentially outperform its coherent-state counterpart in terms of robustness and range, especially if data processing efficiency is limited \cite{Use11}. For the security analysis of the CV QKD protocols the trusted parties need to know the correlation of their measurements.
Because of the optimality of Gaussian states, this is equivalent to knowing the parameters of the channel, namely, its transmittance and excess noise. This knowledge allows one to estimate the maximum amount of information leaking to a potential eavesdropper from such a channel. The channel estimation is, however, a nontrivial task. Indeed, the trusted parties need to estimate the channel based on probe pulses, which must be indistinguishable from the signal; otherwise the eavesdropper can recognize them and exploit that information. The accuracy of the estimation is typically limited by the intensity and number of estimation pulses; however, more states used for estimation means a lower capacity to carry information. The effect of the limited ensemble size on the applicability of CV QKD was studied previously for modulated coherent-state \cite{Lev10,Jou12} and entanglement-based \cite{Fur12,Ebe13} protocols. The channel estimation in the current state-of-the-art implementation \cite{Jou13} is based on revealing half of the pulses publicly and estimating the channel noise and transmittance from them. However, the channel estimation strategy was not optimized for the given quantum-state resource and other parameters of the scheme, and information on the effect of channel estimation on the squeezed-state protocol appears to be lacking, which limits the performance of the standard protocol. In the current paper we suggest a novel channel estimation strategy based on double Gaussian modulation of the coherent or squeezed states of light. We show that this method can bypass the trade-off between estimation and information transfer and that the optimized protocol can approach the effectiveness of the ideal case. We also study the protocol based on nonclassical squeezed states and show the advantage of using such states in CV QKD when channel estimation and finite-size effects are considered.
In summary, we present a CV QKD protocol that is practically applicable in long-distance channels thanks to the optimal use of the resources. The paper is organized as follows. First, we describe in Sec. II the basic concepts of CV QKD: the model used, the standard method for channel estimation, and the calculation of the achievable key rate. In Sec. III, we optimize the standard method of channel estimation used for CV-QKD incorporating finite-size effects. Then in Sec. IV, we calculate the theoretical limits of CV QKD. This leads us to the concept of a double-modulation protocol (Sec. V) which can approach the optimal performance for long distances. Finally, in Sec. VI, we draw conclusions. \section{Preliminaries} \subsection{The model used} We consider the generic CV QKD protocol, in which Alice sends CV quantum states of light to Bob through a channel that is under the control of a potential eavesdropper, Eve. The resource states are Gaussian squeezed or coherent states; Alice applies a Gaussian displacement operation to them, while Bob performs homodyne detection on the other end of the channel, also using an intense reference pulse sent by Alice (Fig. \ref{method}). After the transmission is done, they reveal a subset (chosen randomly) of their measurement data, estimate the channel parameters from that, and use that information to upper-bound the information Eve could get. Finally they use the other, unrevealed subset of data to extract the key, also performing error correction and privacy amplification. \begin{figure}[!ht] \begin{center} \includegraphics[width=8.5cm]{scheme_estimation-1.eps} \caption{Prepare-and-measure CV-QKD with single modulation: using a Gaussian source (S) and modulator (M) at Alice's side and a homodyne detector (H) at Bob's side. One subset of the data is used for estimation; the other subset is used for key distribution.
\label{method}} \end{center} \end{figure} In this case the transfer of quadrature variables through a lossy and noisy channel can be described in the Heisenberg picture by the following evolution: \begin{equation}\label{def_channel} x_B=\sqrt{T}\cdot (x_S+x_M)+\sqrt{1-T} \cdot x_0+x_\eps, \end{equation} where all variables are normally distributed with zero mean, except for the fixed transmittance parameter $T \in [0,1]$. The operator $x_B$ represents the measured quadrature at Bob's side. The operator $x_A=x_S+x_M$ represents the quadrature of a state Alice sent into the channel, where $x_S$ comes from the quantum fluctuation of the resource state [with variance $\Var(x_S)=V_S$], while $x_M$ comes from the modulation of the state [$\Var(x_M)=V$]. Note that $V_S=1$ (shot noise unit) for coherent states and $V_S<1$ for squeezed states. $x_0$ corresponds to the vacuum state [$\Var(x_0)=1$], while $x_{\eps}$ is a Gaussian excess noise with variance $\Var(x_{\eps})=V_{\eps}$. In practice the values of $x_S$, $x_0$, and $x_\eps$ are all unknown to Alice and Bob, so they can treat all of them as noise. Thus we can rewrite (\ref{def_channel}) in a much simpler form: \begin{equation}\label{def_channel_simple} x_B=\sqrt T \cdot x_M+x_N, \end{equation} where $x_N$ is the aggregated noise with zero mean and variance $V_N=1+V_\eps+T(V_S-1)$. \subsection{The standard method for channel estimation} We suppose that the variance of the modulation ($V$) is known, that is, the channel can be parametrized using two unknown parameters: $T$ and $V_{\eps}$. Let us suppose that the estimation is made using $m$ Gaussian states. Denote the realizations of $x_M$ and $x_B$ with $M_i$ and $B_i$ ($i\in \{1,2,\dots,m\}$) respectively. We know that the covariance of $x_M$ and $x_B$ is $\Cov(x_M,x_B)=\sqrt{T}\cdot V=:C_{MB}$.
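The simplified channel model of Eq.~(\ref{def_channel_simple}) is easy to simulate; the following sketch (with arbitrarily chosen illustrative parameters) checks the covariance relation $\Cov(x_M,x_B)=\sqrt{T}\cdot V$ on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(42)

# illustrative channel and modulation parameters (not from the paper)
T, V, V_S, V_eps = 0.3, 3.0, 1.0, 0.01
m = 10**6

x_M = rng.normal(scale=np.sqrt(V), size=m)     # Alice's Gaussian modulation
V_N = 1.0 + V_eps + T * (V_S - 1.0)            # aggregated noise variance
x_N = rng.normal(scale=np.sqrt(V_N), size=m)
x_B = np.sqrt(T) * x_M + x_N                   # Bob's measured quadrature

C_MB = np.mean(x_M * x_B)     # empirical covariance of x_M and x_B
print(C_MB, np.sqrt(T) * V)   # both close to sqrt(0.3)*3
```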
We can estimate the value of $T$ from \begin{equation}\label{est_eta} \hat T=\frac{1}{V^2}\cdot \left(\widehat{C_{MB}}\right)^2, \end{equation} where we use the maximum likelihood estimator \begin{equation}\label{est_Cov} \widehat{C_{MB}}=\frac{1}{m}\sum_{i=1}^m M_i B_i. \end{equation} On the other hand, the estimator of $V_\eps$ can be expressed using the maximum likelihood estimator of $V_N$ substituting the real value of $T$ with its estimator from (\ref{est_eta}): \begin{equation}\label{hatVeps} \hat V_\eps=\frac{1}{m} \sum_{i=1}^m (B_i-\sqrt{\hat T} M_i)^2+\hat T(1-V_S)-1. \end{equation} Let us notice that to estimate $V_\eps$ either Alice or Bob should reveal the measurement data, so these states cannot be used for key distribution. \subsection{The achievable secret key rate} We will use reverse reconciliation to obtain a secret key for large distances, i.e., the common key is based on Bob's state, which Alice and Eve try to guess. Then in the asymptotical case the key rate is \cite{Lod10} $$ K_{\infty}(T,V_\eps)=\beta I(A:B)-S(B:E), $$ where $\beta \in [0,1]$ is the reconciliation efficiency, $I(A:B)$ is the mutual information of Alice and Bob, while $S(B:E)$ is the maximal information Eve can retain about Bob's state. The true values of $T$ and $V_\eps$ are unknown, so for implementing a secure CV QKD protocol we need to set confidence intervals for both $T$ and $V_\eps$ with a low probability of error $\delta$ (see Appendix C). In order not to underestimate the eavesdropping, the key rate must be minimized considering every possible combination of $T$ and $V_\eps$ from the given confidence intervals. Numerical calculations (and intuition) suggest that in the most pessimistic case, we should use the lower bound ($T^{low}$) of the confidence interval for $T$ and the upper bound ($V_\eps^{up}$) for $V_\eps$.
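The estimators (\ref{est_eta}) and (\ref{hatVeps}) can be tried out directly on simulated channel data; the parameters below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(7)

# illustrative true channel parameters (not from the paper)
T, V, V_S, V_eps = 0.3, 3.0, 1.0, 0.01
m = 10**6

M = rng.normal(scale=np.sqrt(V), size=m)
V_N = 1.0 + V_eps + T * (V_S - 1.0)
B = np.sqrt(T) * M + rng.normal(scale=np.sqrt(V_N), size=m)

C_MB_hat = np.mean(M * B)          # maximum likelihood covariance estimator
T_hat = C_MB_hat**2 / V**2         # transmittance estimator
Veps_hat = (np.mean((B - np.sqrt(T_hat) * M)**2)
            + T_hat * (1 - V_S) - 1)   # excess-noise estimator
print(T_hat, Veps_hat)             # close to the true T = 0.3, V_eps = 0.01
```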
We can obtain the following key rate incorporating finite-size effects \cite{Lev10}: \begin{equation}\label{key_rate} K=\frac{n}{N}\cdot \bigg[K_{\infty}(T^{low},V_\eps^{up})-\Delta(n)\bigg], \end{equation} where $n$ is the number of Gaussian states used for secret key transmission and $\Delta(n)$ is a correction term for the achievable mutual information in the finite case \cite{Sca08}. Note that if we use $m=r \cdot N$ states for estimation we will have $n=(1-r) \cdot N$ states for key distribution. The mutual information reads $$ I(A:B)=\frac12 \log_2\Bigg(1+\frac{V\cdot T}{V_N}\Bigg). $$ We suppose that Eve performs a collective Gaussian attack on the signal pulse (the reference pulse is usually much stronger than the signal pulse). In this case, the upper bound of the information which is available to Eve on Bob's measurement results is given by the Holevo information, that is, the difference between two von Neumann entropies: $$ S(B:E)=S_E-S_{E|B}, $$ where $S_E$ is the von Neumann entropy of the eavesdropper's state, while $S_{E|B}$ denotes the von Neumann entropy of the eavesdropper's state conditioned on Bob's measurement. In the general case the channel noise is assumed to be under full control of Eve, so in order to calculate these entropies we use an equivalent entanglement-based scheme (Fig. \ref{fig_entangled}) and purification method \cite{Use11}. It is equivalent in the sense that the states and conditional states (conditioned on Alice's measurements) sent to Bob through the channel have the same distribution as in the prepare-and-measure scheme. A generalized entanglement-based scheme is used, which corresponds to the preparation of arbitrarily squeezed states and arbitrary modulation applied to them. This scheme differs from the standard entanglement-based schemes by the presence of an additional squeezed state coupled to a signal prior to measurement on the sending side, and also by the unbalanced preparation of the entangled state. 
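The mutual information term given above is elementary to evaluate; a sketch for a few illustrative parameter sets follows. The Holevo bound $S(B:E)$, which requires the symplectic-eigenvalue computation described above, is deliberately not reproduced here.

```python
import numpy as np

def mutual_information(T, V, V_S, V_eps):
    """Alice-Bob mutual information per pulse, I(A:B) = (1/2) log2(1 + T V / V_N)."""
    V_N = 1.0 + V_eps + T * (V_S - 1.0)
    return 0.5 * np.log2(1.0 + T * V / V_N)

# illustrative numbers: T = 0.3 channel, coherent vs squeezed sources
for V_S in (1.0, 0.5, 0.1):
    print(V_S, mutual_information(T=0.3, V=3.0, V_S=V_S, V_eps=0.003))
```

For fixed modulation, stronger squeezing (smaller $V_S$) lowers the aggregated noise $V_N$ and hence raises $I(A:B)$, in line with the squeezed-state advantage discussed in the text.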
\begin{figure}[!ht] \begin{center} \includegraphics[width=8.5cm]{entangled.eps} \caption{The equivalent entanglement-based scheme that is used for calculating the Holevo information instead of the prepare-and-measure scheme \cite{Use11}. \label{fig_entangled}} \end{center} \end{figure} With this equivalence we can substitute a prepare-and-measure scheme with arbitrary squeezing of the signal states and modulation variances with an entanglement-based scheme. The calculation of the Holevo information in the entanglement-based scheme is straightforward from here using purification. From the fact that Eve is able to purify the state shared between Alice and Bob it follows that $S_E=S_{AB}$ and $S_{E|B}=S_{A|B}$. Both entropies can be calculated from the symplectic eigenvalues of the covariance matrices of two-mode Gaussian states. That is, we can express them as an (enormously long) analytic formula of parameters and we can use it efficiently for the numerical optimization of parameters. Finally, for the sake of simplicity we will use the approximate formula for $\Delta(n)$ obtained in \cite{Lev10}: $$ \Delta(n)\approx 7 \sqrt{\frac{\log_2(2/\delta^*)}{n}}, $$ where $\delta^*$ is the probability of error during privacy amplification. \section{Optimization of the standard method} The key rate (\ref{key_rate}) can be calculated for fixed parameters: it depends only on the values of $T$, $V_\eps$, $N$, $\beta$, $r$, $V$, and $V_S$. Let us investigate thoroughly how the key rate depends on these parameters. The first two are given; they are the parameters of the channel. One can estimate them with the estimators in Eqs. (\ref{est_eta}) and (\ref{hatVeps}). We can approximate their variances with (see Appendix A) \begin{equation}\label{var_hatt} \Var(\hat T)\approx \frac{4}{m} \cdot T^2 \left(2+\frac{V_N}{T V}\right)=:\sigma_1^2 \end{equation} and \begin{equation}\label{var_veps} \Var(\hat V_\eps)\approx \frac{2}{m}\cdot V_N^2+(1-V_S)^2\cdot \sigma_1^2=:s_1^2. 
\end{equation} From that we can calculate the expected values of $T^{low}$ and $V_\eps^{up}$ in Eq. (\ref{key_rate}), so we can calculate the key rate for any set of parameters in advance numerically. Throughout, for an optical fiber of length $d$ (in km) we use $T=10^{-0.2 d/10}$, corresponding to the standard attenuation of 0.2 dB/km. In the literature it is usually assumed that the excess noise in an optical fiber is proportional to the transmittance \cite{Jou12,Lev10,Fur12}. We used this assumption in our numerical calculations as well, having $V_{\eps}=T \cdot \eps$, with $\eps=0.01$. $N$ is the size of the blocks; it is in general predetermined, but for practical applications it is reasonable to assume short blocks. In the current state-of-the-art realization \cite{Jou13} $N=10^8$ and $N=10^9$ were used. $\beta$ is the efficiency of the information reconciliation, which depends on the performance of the algorithms being used and on the achieved signal-to-noise ratio. Recently efficient postprocessing algorithms were developed \cite{Jou11}; thus we will use a realistic $\beta=0.95$. \begin{figure}[!t] \begin{center} \includegraphics[width=8.5cm]{paper_dist_106.eps} \caption{(Color online) The secure key rate of different methods: for coherent states [cyan (light gray), $V_S=1$], moderate squeezing [orange (medium gray), $V_S=0.5$], and strong squeezing [purple (dark gray), $V_S=0.1$], for optimized single-modulation (dotted lines) and double-modulation (solid lines) schemes as a function of distance $d$ for $\eps=0.01$, $\beta=0.95$, and $N=10^6$. For comparison we plotted also the key rate yielded by the current state-of-the-art technique (thick black dashed line) and the best theoretically possible upper limit $K^{th}$ (thick black dash-dotted line). \label{dist}} \end{center} \end{figure} From (\ref{key_rate}) we can see that the key rate will be higher if more states are used for key distribution ($n$).
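The accuracy of the variance approximations (\ref{var_hatt}) and (\ref{var_veps}) can be checked by direct Monte Carlo simulation. The sketch below uses arbitrary illustrative parameters and a reduced $m$ to keep the run short; it compares the empirical standard deviations of $\hat T$ and $\hat V_\eps$ with $\sigma_1$ and $s_1$.

```python
import numpy as np

rng = np.random.default_rng(3)

# illustrative parameters (moderate squeezing so both terms of s_1 matter)
T, V, V_S, V_eps = 0.3, 3.0, 0.5, 0.01
V_N = 1.0 + V_eps + T * (V_S - 1.0)
m, trials = 10**4, 2000

T_hats, Veps_hats = [], []
for _ in range(trials):
    M = rng.normal(scale=np.sqrt(V), size=m)
    B = np.sqrt(T) * M + rng.normal(scale=np.sqrt(V_N), size=m)
    T_hat = np.mean(M * B)**2 / V**2
    T_hats.append(T_hat)
    Veps_hats.append(np.mean((B - np.sqrt(T_hat) * M)**2)
                     + T_hat * (1 - V_S) - 1)

sigma1 = np.sqrt(4 / m * T**2 * (2 + V_N / (T * V)))      # approximate std of T-hat
s1 = np.sqrt(2 / m * V_N**2 + (1 - V_S)**2 * sigma1**2)   # approximate std of Veps-hat
print(np.std(T_hats), sigma1)
print(np.std(Veps_hats), s1)
```

The empirical and approximate standard deviations agree to within a few percent at these parameters, which is the kind of match reported in Fig.~\ref{fig_stdev}.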
But at the same time that means fewer states for estimation ($m$) and results in an inaccurate estimation of channel parameters. In the case of perfect reconciliation $V$ should be as large as possible, but in a realistic case $V$ must be limited. In fact, there exist optimal values for $r$ and $V$ which maximize the key rate. But so far the variances of the parameter estimators have not been obtained in general, only for given measurement outcomes. This means that designing an experiment by optimizing the available parameters was impossible; some fixed parameters were used instead, e.g., $V=1.5$ \cite{Jou12} and $r=1/2$ \cite{Lev10,Jou13}. However, since we know the variances of the estimators in advance, the optimal setting can be calculated by numerical optimization and a significant improvement is achieved in the key rate (Fig. \ref{dist}, thick black dashed vs purple dotted line). The optimal $V$ will be close to the one obtained in the asymptotical case (without any finite-size effects) \cite{Use11}. From Fig. \ref{fig_r}, however, we can see that the optimal $r$ will be close to $50\%$ only if the key rate is close to zero. If that is not the case, the optimal ratio will be below $50\%$ and show a linear correlation with the block size ($N$) on a log-log plot. That is, the optimal ratio $r_{opt}$ will have the form of $r_{opt}\approx \alpha N^\gamma$ with the actual values of $\alpha$ and $\gamma$ depending on the parameters. In general $\alpha$ will be lower for smaller distances and higher levels of squeezing, while $\gamma\approx -0.35$ (the lines in Fig. \ref{fig_r} are nearly parallel).
\begin{figure}[!t] \begin{center} \includegraphics[width=8.5cm]{paper_r.eps} \caption{(Color online) The optimal ratio used for estimation for the single modulation method: for coherent states [cyan (light gray), $V_S=1$], moderate squeezing [orange (medium gray), $V_S=0.5$], and strong squeezing [purple (dark gray), $V_S=0.1$], for a long distance $T=0.03$ ($d\approx$ 76 km, dotted lines) and a short distance $T=0.3$ ($d\approx$ 26 km, solid lines) as a function of block size $N$ with $\eps=0.01$ and $\beta=0.95$. \label{fig_r}} \end{center} \end{figure} Finally, $V_S$ is a squeezing parameter of the source, which can be set in state preparation. If we compare the performance using coherent states ($V_S=1$), moderately squeezed states ($V_S=0.5$, i.e., 3 dB squeezing) and strongly squeezed states ($V_S=0.1$, i.e., 10 dB squeezing), we can see a substantial improvement due to squeezing (Fig. \ref{dist}, dotted lines), even for the moderately squeezed states. With the use of squeezing the protocol can achieve reasonable distances even for values of $N$ as low as $10^6$. \section{Limitations on the key rate} One of the main limiting factors compared to the asymptotical case comes from the fact that there will be a security break if the asymptotical key rate drops below $\Delta(n)$. This quantity is of order $c/\sqrt{n} ~ (\textrm{with } c\in \mathbb R_+)$, which results in a substantial restriction on the achievable distance even for large values of $n$. To get a higher key rate one can try to improve the coefficient $c$ using theoretical considerations \cite{Fur12}. For a given function $\Delta$ the possible improvement comes from using as large $n$ as possible, i.e., $n=N$. Actually, from the $K_{\infty}\ge c/\sqrt{N}$ restriction one can easily obtain that the maximally achievable distance is in the best case a linear function of $\log_{10} N$: if we take ten times larger block sizes, we can expect an improvement of about 25 km (see Fig.
\ref{fig_max_dist} and Appendix D). \begin{figure}[!t] \begin{center} \includegraphics[width=8.5cm]{paper_max_dist2.eps} \caption{(Color online) The asymptotical key rate $K_\infty (T,V_\eps)$ (thick black solid line) using optimal modulation and infinitely strong squeezing with $\eps=0.01$ and $\beta=0.95$, its exponential approximation (solid gray line), and the level of $\Delta(N)$ for $N=10^6$ (dash-dotted), $N=10^8$ (dotted), and $N=10^{10}$ (dashed). If the asymptotical key rate drops below $\Delta(N)$ (large circles), it is impossible to obtain a positive key rate. \label{fig_max_dist}} \end{center} \end{figure} In practical situations the transmittance $T$ can be estimated quite promptly, but that is not true for the excess noise $V_\eps$. For large distances $V_\eps=T\cdot \eps$ will be very small; nevertheless $V_\eps^{up}$ will be large since the estimator $\hat V_\eps$ will have a large standard deviation. Simple calculations show that we have \begin{equation}\label{Ve_up} V_\eps^{up} \approx \sqrt{2} \cdot \frac{1+V_\eps+T (V_M+V_S-1)}{\sqrt{m}}, \end{equation} where $V_M$ is the conditional modulation of Alice: $V_M=0$ if Alice reveals the modulation data for Bob; $V_M=V$ if Alice does not reveal the modulations. The theoretical lower bound is \begin{equation}\label{Ve_up_th} V_\eps^{up} \ge V_\eps^{th} = \sqrt{2} \cdot \frac{1+V_\eps-T }{\sqrt{N}}, \end{equation} and is fulfilled if Alice shares all the modulation for channel estimation and uses infinitely squeezed states. From these two observations we can get the theoretical maximum for the key rate (\ref{key_rate}) in the finite case: \begin{equation}\label{key_rate_th} K^{th}=K_{\infty}(T,V_\eps^{th})-\Delta(N). \end{equation} Unfortunately, this is impossible to achieve since all states would have to be used for both optimal key distribution and optimal channel estimation at the same time (the latter means revealing the modulation for all states). 
\section{Double-modulation method}\label{double} Using the standard method we set $V_M=0$ in (\ref{Ve_up}), that is, Alice reveals the exact values of modulation. However, she uses only half of the states, which still results in a much higher value of $V_\eps^{up}> \sqrt{2} \cdot V_\eps^{th}$. But for large distances $T$ becomes very small, so the term in parentheses in (\ref{Ve_up}) will not have a large effect. This observation prompted a different approach to the problem: Alice should not share the modulation data at all. In this case Alice can use all the states for estimation, which for large distances results in a value much closer to the theoretical limit: $V_\eps^{up} \rightarrow V_\eps^{th}$, if $T\rightarrow 0$. Let us note that in this case, besides having a better estimate of $V_\eps$, one can use twice as many states for key distribution, too. \begin{figure}[!t] \begin{center} \includegraphics[width=8.5cm]{scheme_estimation-2.eps} \caption{Prepare-and-measure CV-QKD with double modulation: using a Gaussian source (S) and two modulators (M) at Alice's side and a homodyne detector (H) at Bob's side. One modulation can be used for estimation, and the other modulation for key distribution; hence all the states can be used for both estimation and key distribution at the same time. \label{method2}} \end{center} \end{figure} To actually achieve this effect, we also need a method to estimate $T$ without revealing the modulation data. That led us to use two consecutive modulations on Alice's side (see Fig. \ref{method2}). After finishing the transmission, Alice reveals the second modulation ($x_{M2}$) for each state. Then Bob, using these public data and his (secret) measurement data, can estimate $T$ and $V_\eps$ (attributing the first modulation of Alice to the noise of the source).
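A toy simulation of this estimation step (with arbitrary illustrative parameters) shows that Bob can recover $T$ and $V_\eps$ from the revealed second modulation alone, while the first modulation stays secret and is simply treated as extra source noise.

```python
import numpy as np

rng = np.random.default_rng(5)

# illustrative long-distance channel with coherent states
T, V_S, V_eps = 0.03, 1.0, 0.0003
V1, V2 = 3.0, 10.0    # secret (key) and public (estimation) modulation variances
N = 10**6

x_M1 = rng.normal(scale=np.sqrt(V1), size=N)   # stays secret, carries the key
x_M2 = rng.normal(scale=np.sqrt(V2), size=N)   # revealed after transmission
V_noise = 1.0 + V_eps + T * (V_S - 1.0)
x_B = (np.sqrt(T) * (x_M1 + x_M2)
       + rng.normal(scale=np.sqrt(V_noise), size=N))

# estimation uses only the revealed x_M2; x_M1 is attributed to source noise
T_hat = np.mean(x_M2 * x_B)**2 / V2**2
VNstar_hat = np.mean((x_B - np.sqrt(T_hat) * x_M2)**2)
Veps_hat = VNstar_hat - 1.0 - T_hat * (V1 + V_S - 1.0)
print(T_hat, Veps_hat)   # close to the true T = 0.03 and V_eps = 0.0003
```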
In this way the first modulation ($V_1$) and Bob's measurement remain secret, so they can be used for key distribution as in the standard case (with additional noise coming from the second modulation, but since that is revealed publicly it can simply be eliminated from the process). \begin{figure}[!t] \begin{center} \includegraphics[width=8.5cm]{simulation_stdev} \caption{(Color online) The approximated standard deviation (lines) and the real values calculated from numerical simulation (symbols). The values of $s_1$ [standard method, purple (dark gray) dash-dotted line], $s_2$ [double modulation, orange (medium gray) dotted line], and $s_3$ [modified double modulation, cyan (light gray) dashed line] for $N=10^5$, $r=0.5$, $V=V_1=3$, $V_2=10$, and $\eps=0.01$ using coherent states ($V_S=1$) show a good match with the empirical standard deviation of different estimators of $V_\eps$ (circles, squares, and diamonds, respectively) averaged from $10^3$ different realizations. The theoretical lower limit from $V_\eps^{th}$ (black solid line) is also plotted for comparison. \label{fig_stdev}} \end{center} \end{figure} Then the evolution can be written in the form of \begin{equation}\label{def_channel2} x_B=\sqrt{T}\cdot (x_S+x_{M1}+x_{M2})+\sqrt{1-T} \cdot x_0+x_\eps. \end{equation} Since Alice reveals only the values of the second modulations ($x_{M2}$), the first modulation acts as noise in the estimation process; thus we can rewrite (\ref{def_channel2}) in the following form: \begin{equation}\label{def_ch2} x_B=\sqrt T \cdot x_{M2}+x_N^*, \end{equation} where $x_N^*$ is the aggregated noise with variance $V_N^*=1+V_\eps+T(V_1+V_S-1)$. That means that it is the same evolution as for a single modulation [see Eq. (\ref{def_channel_simple})]; we only need to change the variance $V$ to $V_2$, $V_N$ to $V_N^*$, and $m$ to $N$. So by substitution in Eqs.
(\ref{var_hatt}) and (\ref{var_veps}) we can easily obtain the approximated variance for the estimator of $T$: \begin{equation}\label{vareta2} \sigma_2^2:=\frac{4}{N} \cdot T^2 \left(2+\frac{V_N^*}{T V_2}\right), \end{equation} and the approximated variance for the estimator of $V_\eps$: \begin{equation}\label{varVeps2} s_2^2:=\frac{2}{N}\cdot (V_N^*)^2+(V_1+V_S-1)^2\cdot \sigma_2^2. \end{equation} This formula shows the effects described above. The aggregated noise $V_N^*$ and the factor $(V_1+V_S-1)^2$ will be larger here than in the standard case. That results in a worse key rate for small distances. But if $T \rightarrow 0$, then $V_N^* \rightarrow V_N$ and $\sigma_2^2 \rightarrow 0$, so these negative effects will disappear; thus, by using more states for estimation, one can get an even better estimate (see Fig. \ref{fig_stdev}, purple dash-dotted and orange dotted lines). Moreover, for large distances the variance of the estimator gets close to the theoretical optimum (see Fig. \ref{fig_stdev}, orange dotted and black solid lines). Note that $V_2$ plays a role only in the estimation of $T$. Larger values of $V_2$ always produce lower values of $\sigma_2$ (so also lower values of $s_2$), but the improvement saturates quickly. \begin{figure}[!t] \begin{center} \includegraphics[width=8.5cm]{paper_N_003.eps} \caption{(Color online) The secure key rate of different methods: for coherent states [cyan (light gray), $V_S=1$], moderate squeezing [orange (medium gray), $V_S=0.5$], and strong squeezing [purple (dark gray), $V_S=0.1$], for the optimized single-modulation (dotted lines) and double-modulation (solid lines) schemes as a function of block size $N$ in the case of $\eps=0.01$, $\beta=0.95$, and $T=0.03$ ($d=76$ km).
For comparison we plotted also the key rate using the current state-of-the-art technique (thick black dashed line), and the best theoretically possible upper limit $K^{th}$ using $V_S=0.1$ (thick gray dash-dotted line), and infinitely strong squeezing (thick black dash-dotted line). \label{N_b}} \end{center} \end{figure} That is, we obtained a method which estimates the excess noise efficiently. At the same time, besides using all the states for estimation, one can use all of them for key distribution too. That will result in a key rate (Fig. \ref{N_b}) approaching the theoretical limit described in (\ref{key_rate_th}). It is important to mention that even with a feasible level of squeezing we can obtain a key rate close to the theoretical optimum corresponding to infinitely strong squeezing. If $T$ is not close to zero, the above method will not be efficient. But in that case Alice can share some data from the first modulation too. That is, in this case we will have the same situation as in the standard method; there is only an additional layer of noise for every state, which will be revealed and used for better estimation (see Appendix B). We can optimize the ratio of shared modulation ($r$) and get a slightly improved key rate compared to the single-modulation scheme (Fig. \ref{dist}, dotted vs solid lines). Not surprisingly the improvement is higher for larger distances. The optimal ratio $r$ becomes zero between $T=0.1$ and $T=0.3$ (that is, between 25 and 50 km) depending on the parameters. Thus for large distances, to get the optimal performance one should indeed use each state for estimation and key distribution too. The implementation of this protocol is simple. From some rough estimation of channel parameters (e.g., from earlier results) Alice chooses a modulation variance that is close to optimal. After Bob has received the states, Alice reveals all her second-modulation data.
From this they will have a proper estimation of the channel parameters and they can calculate the optimal ratio $r$ numerically. If it is positive, Alice chooses $r\cdot N$ states randomly and reveals their first modulation, which will be incorporated to obtain a more accurate estimation. Then they continue the protocol as in the standard case. \section{Conclusion and discussion} We proposed a feasible double-modulation quantum key distribution scheme which uses each state for both channel estimation and key distribution purposes. We presented a simple theoretical maximum for the key rate and the achievable distance for CV-QKD protocols. We showed that our method can greatly improve the key rate and the maximal distance; moreover, it can approach the theoretical limit for long distances. The improvement comes from three factors: optimizing the parameters, using squeezed states, and using the double-modulation method (see Fig. \ref{N_b} for the different effects). The full optimization for the finite-size case was feasible because we obtained formulas as a simple function of the model parameters for a good approximation of the standard deviation of the channel parameter estimators. Using squeezed states instead of coherent states makes the largest improvement, which also theoretically clarifies that the result obtained in \cite{Use11, Mad12} remains valid if we take finite-size effects into account (this surviving effect is far from trivial in general). Finally, there is the double-modulation method, which allows us to approach the theoretical limit for large distances, proving that the above ideas are enough to realize a nearly optimal CV QKD scheme. For comparison, we obtained a key rate about ten times higher than the current state-of-the-art implementation \cite{Jou13} even if we use ten times shorter block sizes (Fig. \ref{N_b}, large circles).
Note that the results were obtained for a specific protocol; however, the same concepts can be applied to different and more complex settings of CV QKD, which implies that they might produce a significant improvement in practical applications of quantum communication. \subsection*{Acknowledgments} V.C.U. and L.R. acknowledge the Project No. 13-27533J of GA\v CR. The research leading to these results has received funding from the EU FP7 under Grant Agreement No. 308803 (Project BRISQ2), co-financed by M\v SMT \v CR (7E13032). \subsection*{Appendix A: Variances for the single-modulation method} To obtain the variances of the estimators of interest we first calculate the variance of $\widehat{C_{MB}}=\frac{1}{m}\sum_{i=1}^m M_i B_i$. This will be an unbiased estimator of $C_{MB}$, since $$ \bE(\widehat{C_{MB}})=\frac{1}{m}\sum_{i=1}^m \bE(M_i B_i)=\bE(x_M \cdot x_B)= $$ $$ =\bE\Bigg(x_M\cdot \bigg(\sqrt{T}\cdot x_M+x_N\bigg)\Bigg)=\sqrt{T} ~\bE( x_M^2)=C_{MB}. $$ On the other hand, we have $$ \Var(\widehat{C_{MB}})=\frac{1}{m^2}\sum_{i=1}^m \Var(M_i B_i)=\frac{1}{m} ~\Var(x_M\cdot x_B)= $$ $$ =\frac{1}{m} \Var\bigg(x_M\cdot (\sqrt{T}\cdot x_M+x_N)\bigg)= $$ $$ =\frac{1}{m} \bigg(T~ \Var(x_M^2)+\Var(x_M x_N)\bigg)=\frac{1}{m} (T \cdot 2V^2+V \cdot V_N), $$ where we used the definition of $x_B$, the moments of a normal distribution and the fact that for independent random variables with zero mean the variance can be factorized. From this we can finally obtain the variance \begin{equation} V_{\mathrm{Cov}}:=\Var(\widehat{C_{MB}})=\frac{1}{m}\cdot T V^2 \bigg(2+\frac{V_N}{T V}\bigg), \end{equation} which is of order $1/m$, so it is a consistent estimator (i.e., $\widehat{C_{MB}}\rightarrow C_{MB}$ if $m$ goes to infinity). Now we can obtain the properties of the estimators used in the standard method. First we calculate the properties of the estimator of the transmittance: $\hat T=\frac{1}{V^2}\cdot \left(\widehat{C_{MB}}\right)^2$. 
If we rearrange the second term: $$ (\widehat{C_{MB}})^2=V_{\mathrm{Cov}}\cdot \left(\frac{\widehat{C_{MB}}}{\sqrt{V_{\mathrm{Cov}}}}\right)^2, $$ then the last expression will be noncentrally $\chi^2$ distributed: $$ \left(\frac{\widehat{C_{MB}}}{\sqrt{V_{\mathrm{Cov}}}}\right)^2 \sim \chi^2\left(1,\frac{C_{MB}^2}{V_{\mathrm{Cov}}}\right). $$ From that we can obtain the mean $$ \bE(\hat T)=\frac{V_{\mathrm{Cov}}}{V^2}\cdot \bE\left(\frac{\widehat{C_{MB}}}{\sqrt{V_{\mathrm{Cov}}}}\right)^2=\frac{V_{\mathrm{Cov}}}{V^2}\cdot \left(1+\frac{C_{MB}^2}{V_{\mathrm{Cov}}}\right)= $$ $$ =\frac{V_{\mathrm{Cov}}+C_{MB}^2}{V^2}=\frac{V_{\mathrm{Cov}}+T V^2}{V^2}=T+\frac{V_{\mathrm{Cov}}}{V^2}=T+O(1/m), $$ and the variance $$ \Var(\hat T)=\frac{V_{\mathrm{Cov}}^2}{V^4}\cdot\Var\left(\frac{\widehat{C_{MB}}}{\sqrt{V_{\mathrm{Cov}}}}\right)^2=\frac{V_{\mathrm{Cov}}^2}{V^4}\cdot2\left(1+2\frac{C_{MB}^2}{V_{\mathrm{Cov}}}\right)= $$ $$ =\frac{2V_{\mathrm{Cov}}\cdot(V_{\mathrm{Cov}}+2C_{MB}^2)}{V^4}=\frac{2V_{\mathrm{Cov}}\cdot 2C_{MB}^2}{V^4}+O(1/m^2), $$ where we have used that $V_{\mathrm{Cov}}$ is of order $1/m$. We can rewrite the variance of $\hat T$ in the following form: \begin{equation}\label{vareta} \sigma_1^2:=\Var(\hat T)=\frac{4}{m} \cdot T^2 \left(2+\frac{V_N}{T V}\right)+O(1/m^2). \end{equation} We can see that $\hat T$ is not unbiased, only asymptotically unbiased. But the bias is of order $1/m$, while its standard deviation $\sigma_1$ is of order $1/\sqrt{m}$, meaning that the magnitude of the bias is negligible compared to it, so $\hat T$ can be used to estimate $T$. Note that in further calculations it suffices to use only the first-order approximation, since typically $m>10^5$, so the second term will be negligible. Now we can calculate the properties of the estimator of the excess noise: $\hat V_\eps=\frac{1}{m} \sum_{i=1}^m (B_i-\sqrt{\hat T} M_i)^2+\hat T(1-V_S)-1$. 
We can calculate its variance by substituting the definition of the estimator $\sqrt{\hat T}$ into the sum, but in the end the result would differ little from simply using $\sqrt{\hat T}=\sqrt{T}$, i.e., assuming that $\hat T$ has negligible variance. The reason is that for large values of $m$, the estimator $\sqrt{\hat T}$ will be very close to its real value; thus the main source of uncertainty in the sum comes from the random variables $B_i$ and $M_i$. So, for the sake of simplicity, we present the simpler analysis below. It is easy to see that $B_i-\sqrt{T} M_i$ is normally distributed with variance $V_N$. So $Y:=\sum_{i=1}^m \left(\frac{B_i-\sqrt{T} M_i}{\sqrt{V_N}}\right)^2$ will be $\chi^2$ distributed: $Y \sim \chi^2(m)$, with $\bE(Y)=m$ and $\Var(Y)=2 m$. Then $\sum_{i=1}^m (B_i-\sqrt{\hat T} M_i)^2$ can be approximated by $V_N\cdot Y$ for large values of $m$ and we can obtain $$ \bE (\hat V_\eps)=\bE \left(-1+\hat T(1-V_S)+\frac{1}{m} \sum_{i=1}^m (B_i-\sqrt{\hat T} M_i)^2\right)\approx $$ $$ \approx -1 + T(1-V_S)+\frac{1}{m}V_N \cdot \bE(Y)=V_\eps. $$ To calculate the variance we assume that $\hat T$ and $(B_i-\sqrt{\hat T} M_i)^2$ are independent. This is not exactly true, but numerical simulations show that this assumption still gives a good approximation in the current situation (see Fig. \ref{fig_stdev}). Hence we calculate the variances independently for each term: $$ \Var \left(-1+\hat T(1-V_S)+\frac{1}{m} \sum_{i=1}^m (B_i-\sqrt{\hat T} M_i)^2\right)\approx $$ $$ \approx(1-V_S)^2 \Var(\hat T)+\frac{1}{m^2} V_N^2 \Var(Y). $$ In other words, we can approximate the variance of $\hat V_\eps$ with \begin{equation}\label{varVeps} \Var(\hat V_\eps)\approx \frac{2}{m}\cdot V_N^2+(1-V_S)^2\cdot \sigma_1^2=:s_1^2. \end{equation} Let us note that $\sigma_1^2$ is of order $1/m$, so both terms in (\ref{varVeps}) will be of order $\frac{1}{m}$, too. 
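The leading-order formulas of Appendix A can be checked by direct Monte Carlo simulation. The following sketch (the function name and all parameter values are illustrative, not taken from the paper) estimates the mean and variance of $\widehat{C_{MB}}$ for the channel $x_B=\sqrt{T}\,x_M+x_N$ and compares them with $C_{MB}=\sqrt{T}V$ and $V_{\mathrm{Cov}}=\frac{1}{m}TV^2(2+\frac{V_N}{TV})$:

```python
import math
import random

def covariance_estimator_stats(T=0.5, V=4.0, V_N=1.2, m=500, trials=2000, seed=1):
    """Empirical mean/variance of C_MB_hat = (1/m) * sum(M_i * B_i) for the
    channel x_B = sqrt(T)*x_M + x_N, compared with the derived
    V_Cov = (1/m) * T * V^2 * (2 + V_N/(T*V))."""
    rng = random.Random(seed)
    sT, sV, sN = math.sqrt(T), math.sqrt(V), math.sqrt(V_N)
    estimates = []
    for _ in range(trials):
        acc = 0.0
        for _ in range(m):
            x_M = rng.gauss(0.0, sV)          # Alice's modulation, variance V
            x_B = sT * x_M + rng.gauss(0.0, sN)  # Bob's quadrature
            acc += x_M * x_B
        estimates.append(acc / m)              # one realization of C_MB_hat
    mean = sum(estimates) / trials
    var = sum((e - mean) ** 2 for e in estimates) / (trials - 1)
    v_cov_theory = (1.0 / m) * T * V * V * (2.0 + V_N / (T * V))
    return mean, var, v_cov_theory
```

The empirical mean should be close to $C_{MB}=\sqrt{T}V$ and the empirical variance close to $V_{\mathrm{Cov}}$, confirming the unbiasedness and the $1/m$ scaling claimed above.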
For coherent states $s_1^2$ will be constant, $\frac{2(1+V_\eps)^2}{m}$ (see Fig. \ref{fig_stdev}, purple dash-dotted line). \subsection*{Appendix B: Variances for the modified double-modulation method} To get an accurate estimation for low distances in double-modulation settings, Alice needs to share some of the first modulations too. Let us suppose that Alice reveals $m=r\cdot N$ first modulations. Then one can calculate an estimate of $T$ and $V_\eps$ from each of the two subsets of states independently. For the $(1-r)\cdot N$ states for which Alice reveals only the second modulation we can use the same calculation as described in Sec. \ref{double} around Eq. (\ref{varVeps2}) [with the small difference that we should use $(1-r)\cdot N$ instead of $N$]. For the $r\cdot N$ states for which Alice reveals both modulations, the evolution is \begin{equation}\label{def_ch3} x_B=\sqrt T \cdot (x_{M1}+x_{M2})+x_N. \end{equation} That is, it is the same situation as in the standard method, with the difference that $V_1+V_2$ should replace $V$ in the appropriate formula. One can easily verify that if there are two independent unbiased estimators $\hat x_1$ and $\hat x_2$ with variances $W_1$ and $W_2$, then the best linear estimator $\alpha \cdot \hat x_1 + (1-\alpha) \cdot \hat x_2$ is obtained by setting $\alpha=\frac{W_2}{W_1+W_2}$. In this case the minimal variance is $$ \mathrm{opt}(W_1,W_2):=\frac{W_1\cdot W_2}{W_1+W_2}=\frac{1}{\frac{1}{W_1}+\frac{1}{W_2}}. 
$$ Using this result we can construct the best linear estimator from the two independent estimators (corresponding to the two subsets of states) and we can obtain the variance of the estimator of $T$: \begin{widetext} \begin{equation}\label{vareta3} \sigma_3^2:=\mathrm{opt}\bigg(\frac{4}{(1-r)N} \cdot T^2 \left(2+\frac{V_N^*}{T V_2}\right)~,~\frac{4}{r N} \cdot T^2 \left(2+\frac{V_N}{T (V_1+V_2)}\right)\bigg) \end{equation} and the variance of the estimator of $V_\eps$: \begin{equation}\label{varVeps3} s_3^2:=\mathrm{opt}\bigg(\frac{2}{(1-r)N}\cdot (V_N^*)^2+(1-V_S-V_1)^2\cdot \sigma_3^2 ~,~ \frac{2}{r N}\cdot V_N^2+(1-V_S)^2\cdot \sigma_3^2 \bigg). \end{equation} \end{widetext} This method combines the advantages of the single- and double-modulation methods (see Fig. \ref{fig_stdev}, cyan dashed line). It provides the optimal estimation: it converges to the standard method for low distances, while for large distances it converges to the double-modulation method. It is important to note that our calculations use approximate variances, but Fig. \ref{fig_stdev} shows that these approximations are close to the empirical variances even for $N=10^5$. Therefore we can use these approximate formulas to numerically calculate the key rate. We should also note that in the case of squeezed sources a moderate improvement in the standard deviations can be observed, but it does not fundamentally change the relations discussed above. \subsection*{Appendix C: Confidence intervals} Let us suppose that $X$ is an estimator of interest and it is normally distributed with mean $\mu$ and standard deviation $\sigma$. We are looking for a symmetric confidence interval (around $\mu$), and denote the significance level of the confidence interval by $\delta$, that is, \begin{equation}\label{error} P\left( \mu-\alpha < X < \mu + \alpha \right )=1-\delta. 
\end{equation} Therefore the probability of an estimate above the upper bound is $\delta/2$: $$ \delta/2 = P\left( X > \mu + \alpha \right )=P\left( \frac{ X - \mu}{\sigma}> \frac{\alpha}{\sigma} \right )= $$ $$ =P\left(Y>\frac{\alpha}{\sigma}\right)=1-\Phi\left(\frac{\alpha}{\sigma}\right), $$ where $Y$ has the standard normal distribution and $\Phi$ is the cumulative distribution function of the standard normal distribution. Solving this equality, we obtain \begin{equation}\label{alpha} \alpha=\Phi^{-1}(1-\delta/2)\cdot \sigma. \end{equation} The usual magnitude of error in studies is $10^{-10}$ \cite{Jou12, Lev10, Jou13}, so let us fix $\delta/2=10^{-10}$; then $\Phi^{-1}(1-\delta/2)\approx 6.5$. This means, for example, that in the case of the standard method $$ \bE (T^{low})=T-6.5\sigma_1 \quad \textrm{and} \quad \bE (V_\eps^{up})=V_\eps+6.5 s_1, $$ with an error probability of $10^{-10}$. In real-life applications one should use the estimated values: $$ \hat T^{low}=\hat T-6.5\hat\sigma_1 \quad \textrm{and} \quad \hat V_\eps^{up}=\hat V_\eps+6.5 \hat s_1. $$ \subsection*{Appendix D: The maximal distance for CV QKD} One can approximate the asymptotic key rate as an exponentially decreasing function of the distance (see Fig. \ref{fig_max_dist}). That is, we have $$ K_\infty (T,V_\eps) \approx a \cdot 10^{-\kappa \cdot d}. $$ We can obtain a trivial upper bound for the key rate: $$ K < K_\infty (T,V_\eps) - \Delta(N). $$ If the right-hand side drops below zero, the key rate will be negative. The right-hand side is positive if $K_\infty (T,V_\eps) > \Delta(N)$; that is, if we use $\Delta(n)=\frac{c}{\sqrt{n}}$ \cite{Lev10} we have $$ a \cdot 10^{-\kappa \cdot d} > \frac{c}{\sqrt{N}}. $$ Rearranging this, we obtain a necessary condition for the positivity of the key rate: \begin{equation} d < \frac{1}{2\kappa} \cdot \log_{10} N -\frac{1}{\kappa} \log_{10}\frac{c}{a}. 
\end{equation} For realistic parameters $\kappa \approx 0.02$, so the coefficient of $\log_{10} N$ will be around 25, as stated in the main text. \subsection*{Appendix E: Dependence of the key rate on the parameters} In the following we show how the achievable distances of the different methods change if we vary the parameters of the protocol. We always optimize the modulation variance ($V$) and the ratio of states used for estimation ($r$). We check the key rate for coherent states ($V_S=1$), moderately squeezed states ($V_S=0.5$), and strongly squeezed states ($V_S=0.1$), while $T$ is a function of the distance. So the only parameters remaining are $\eps$, $\beta$, and $N$. \begin{figure}[!t] \begin{center} \includegraphics[width=8.5cm]{paper_dist_Veps01.eps} \caption{(Color online) The secure key rates of different methods: for coherent states [cyan (light gray), $V_S=1$], moderate squeezing [orange (medium gray), $V_S=0.5$], and strong squeezing [purple (dark gray), $V_S=0.1$], for the optimized single-modulation (dotted lines) and double-modulation (solid lines) schemes as a function of distance $d$ for $\eps=0.1$, $\beta=0.95$, and $N=10^6$. For comparison we plotted also the key rate yielded by the current state-of-the-art technique (thick black dashed line) and the best theoretically possible upper limit $K^{th}$ (thick black dash-dotted line). \label{fig_eps}} \end{center} \end{figure} If the excess noise $\eps$ increases (see Fig. \ref{fig_eps}), the achievable distances become shorter in every case. But the difference for squeezed states will be much smaller than in the other cases (with the largest difference in the nonoptimized case). So we can conclude that the proposed optimized single- and double-modulation squeezed-state protocols are much more robust against noise. 
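The necessary condition derived in Appendix D is straightforward to evaluate. The sketch below is illustrative only: the function name is ours, and the fit constants $a$ and $c$ are protocol dependent (the defaults are placeholders), with the distance expressed in the same units as $1/\kappa$:

```python
import math

def max_secure_distance(N, kappa=0.02, a=1.0, c=1.0):
    """Necessary condition for a positive key rate (Appendix D):
    d < log10(N)/(2*kappa) - log10(c/a)/kappa.
    With kappa ~ 0.02 the coefficient of log10(N) is 1/(2*kappa) = 25."""
    return math.log10(N) / (2.0 * kappa) - math.log10(c / a) / kappa
```

For $c=a$ the bound grows by 25 distance units per factor of 10 in the block size, e.g. $N=10^6$ gives $d<150$ and $N=10^8$ gives $d<200$; a larger ratio $c/a$ only shifts the bound down by a constant.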
\begin{figure}[!ht] \begin{center} \includegraphics[width=8.5cm]{paper_dist_beta08.eps} \caption{(Color online) The secure key rate of different methods: for coherent states [cyan (light gray), $V_S=1$], moderate squeezing [orange (medium gray), $V_S=0.5$], and strong squeezing [purple (dark gray), $V_S=0.1$], for the optimized single-modulation (dotted lines) and double-modulation (solid lines) schemes as a function of distance $d$ for $\eps=0.01$, $\beta=0.8$, and $N=10^6$. For comparison we plotted also the key rate yielded by the current state-of-the-art technique (thick black dashed line) and the best theoretically possible upper limit $K^{th}$ (thick black dash-dotted line). \label{fig_beta}} \end{center} \end{figure} The situation is similar if the reconciliation efficiency $\beta$ is reduced (see Fig. \ref{fig_beta}). Once again the distances are decreasing, but in this case the differences will be smaller. The nonoptimized version performs a little better than previously, because in this case the optimal $V$ becomes closer to the \textit{a priori} fixed modulation. The advantage of using the double-modulation method is more visible even for relatively large values of $T$ (i.e., for small distances). \vfill\eject \begin{figure}[!ht] \begin{center} \includegraphics[width=8.5cm]{paper_dist_108.eps} \caption{(Color online) The secure key rates of different methods: for coherent states [cyan (light gray), $V_S=1$], moderate squeezing [orange (medium gray), $V_S=0.5$], and strong squeezing [purple (dark gray), $V_S=0.1$], for the optimized single-modulation (dotted lines) and double-modulation (solid lines) schemes as a function of distance $d$ for $\eps=0.01$, $\beta=0.95$, and $N=10^8$. For comparison we plotted also the key rate yielded by the current state-of-the-art technique (thick black dashed line) and the best theoretically possible upper limit $K^{th}$ (thick black dash-dotted line). 
\label{fig7}} \end{center} \end{figure} \noindent If we increase the block size $N$ (see Fig. \ref{fig7}), then the achievable distances increase too. Note that the relation between the different methods is similar if $N$ is smaller; the improvement is close to additive (as we have seen for the theoretical limit in the main text). Let us also note that for large distances there is a fair improvement using the double-modulation method compared to the single-modulation case: we can achieve the same distance with much less squeezing (e.g., double modulation with 3 dB squeezing produces performance similar to single modulation with 10 dB squeezing). The key rate closely approaches the theoretical limit for the given level of squeezing.
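As a numerical aside, the inverse-variance combination rule $\mathrm{opt}(W_1,W_2)$ used in Appendix B is easy to verify by simulation. The sketch below uses made-up values for the mean and the variances $W_1$, $W_2$ (they are not parameters of the protocol):

```python
import random

def optimal_combination(mu=1.0, W1=0.4, W2=0.1, trials=200000, seed=7):
    """Combine two independent unbiased estimators of mu (variances W1, W2)
    with the optimal weight alpha = W2/(W1+W2); the empirical variance of the
    combination should approach opt(W1, W2) = W1*W2/(W1+W2)."""
    rng = random.Random(seed)
    alpha = W2 / (W1 + W2)
    vals = [alpha * rng.gauss(mu, W1 ** 0.5) + (1.0 - alpha) * rng.gauss(mu, W2 ** 0.5)
            for _ in range(trials)]
    m = sum(vals) / trials
    var = sum((v - m) ** 2 for v in vals) / (trials - 1)
    return m, var, W1 * W2 / (W1 + W2)
```

With $W_1=0.4$ and $W_2=0.1$ the minimal variance is $0.08$, smaller than either input variance, which is why combining the two subset estimators in Appendix B always pays off.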
\section{Disk Formation in Hierarchical Hydrodynamical Simulations } According to Fall and Efstathiou's (FE) standard model of disk formation \cite{FE80}, extended disks similar to those observed in spiral galaxies can be formed from the diffuse halo gas component provided that {\it{gas conserves its specific angular momentum (j) during collapse}}. However, so far, no hydrodynamical simulation of galaxy formation in fully consistent hierarchical cosmological scenarios has been able to produce extended disks similar to observed spirals. The problem was either the excessive loss of angular momentum by the gas clumps as they merge inside the dark haloes, when no star formation processes are considered, resulting in too concentrated disks (i.e., the so-called {\it disk angular momentum catastrophe} problem, hereafter DAMC \cite{Nal95} \cite{NS97} \cite{Wal98} and references quoted therein), or the too-early exhaustion of gas into stars as it cools and collapses, leaving no gas to form disks at low $z$ \cite{TDT98} \cite{SN98}. In this paper we report on some results of disk formation in hierarchical hydrodynamical simulations \cite{Tal97} \cite{TDT98} \cite{DTTS98} where a simple implementation of star formation that {\it prevents gas depletion at high redshifts}, but {\it permits the formation of stellar bulges}, has allowed extended and populated disks to form at later times. We have followed the evolution of $64^3$ particles in a periodic box of 10 Mpc ($H_0 = $50 km s$^{-1}$ Mpc$^{-1}$) using an SPH code coupled to the high-resolution AP3M code \cite{TC92}, either {\it including a star formation algorithm} with star formation efficiency $c = 0.01$ ({\bf S1} simulation) or {\it not} ({\bf S2} simulation). The initial distribution of positions and velocities is the same in both S1 and S2, and is consistent with a standard flat CDM cosmology, with $ \Omega_{\rm b} = 0.1, \Lambda = 0 $ and $b = 2.5$. 
All particles (dark, gas, and star) have the same mass, $M = 2.6 \times 10^8$ M$_{\odot}$. The gravitational softening length is 3 kpc and the minimum allowed smoothing length is 1.5 kpc. Baryonic objects forming disk-like structures (DLOs) identified in S1 have stellar bulge-like cores and extended, populated disks; their masses and specific angular momenta are compatible with those of observed spirals, and their bulge and disk scales, $R_{\rm b}$ and $R_{\rm d}$, respectively, are also consistent with their observed values. In S2, DLOs have an inner, rather disordered gas concentration and also extended disks, though much less populated than their S1 counterparts. Gas particles inside their optical radii have too low $j$, and their bulge and disk scales disagree with observations (see Table 1 and \cite{F83} \cite{C97} \cite{DTTS98}). \medskip \begin{center} {\bf Table 1.} Some Characteristics of DLOs with $N_{\rm baryon} > 150$. \end{center} \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l| } \hline DLO & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline $N_{\rm gas}$ & 348& 359 &307 &311 &210 &151 &227 &189 & 108 & 109 \\ $N_{\rm star}$& 278& 240 &211 &215 &95 &69 &79 &157 &99 & 47 \\ $R_{\rm b}$ (S1, kpc)&0.74&0.74&0.85&0.74&0.54&0.53&0.99&0.49&0.54&0.40\\ $R_{\rm b}$ (S2, kpc)&1.29&1.29&1.41&1.19& & & 1.27&1.32&1.47&1.23 \\ $R_{\rm d}$ (S1, kpc)&7.33&5.66&10.90&9.98&6.50&5.61&6.56&5.29&7.07&9.75\\ $R_{\rm d}$ (S2, kpc)&6.99&7.08&14.04&9.02& & & 5.87&5.62&13.31&10.75\\ \hline \end{tabular} \end{center} \medskip \begin{figure}[t] \input epsf \leavevmode \epsfxsize=18.5cm \epsfbox{fig_blois98.ps} \vspace*{-6.0cm} \caption{ \baselineskip 2.8 true mm\footnotesize Specific angular momentum component along $\vec{J}_{\rm dis}$ for each baryon particle of halo \#1, versus their positions at different $z$. {\it Points}: gas particles, {\it stars}: stellar particles; {\it open symbols}: counterrotating particles. 
{\it Left panels}: S1 version at different $z$; {\it right panels}: S2 version at approximately the same $z$. {\it Full lines}: $v_c(R) R$; {\it dotted lines}: $X_2(R)$ for actual disks at each $z$; {\it dashed line}: $X_2(R)$ for the pure exponential version at $z = 0$. {\it Arrows} mark $R_{\rm st}^{\rm ad}$ and $R_{\rm st}^{\rm ped}$, where $X_2(R) = 3$.} \end{figure} So, stellar bulges seem to be critical to ensure global angular momentum conservation in the assembly of disks in hierarchical hydrodynamical cosmological simulations. In fact, it is known that bulges play a fundamental role in stabilizing disks against the bar instability mode, which would otherwise cause a strong inward transport of material due to angular momentum losses \cite{CST95} \cite{MH96} \cite{vdB98}. To clarify their role, we briefly describe how disks are assembled in the S1 and S2 simulations \cite{TDT98} \cite{DTTS98}. i) First, dark matter haloes collapse at high $z$, forming a first generation of (very small) disks and stars. ii) Then the first destabilizing mergers at high $z$ occur, resulting in disk disruption and rapid mass inflow to the central regions, with angular momentum loss and violent star formation concentrated there. Also, most preexisting stars will concentrate at the center of the new object through violent relaxation. These two processes help build up a central stellar bulge-like structure. iii) After the first mergers, a disk is regenerated through an infall of gas particles, either belonging to the baryonic merging clumps or diffuse. For example, a compact stellar bulge and an almost cold disk in DLO \#1 of S1 at $z = 0.57$ are apparent in Figure 1a. iv) After disk regeneration, the system can undergo new major merger events at lower $z$. During the orbital decay phase, prior to the actual fusion of the DLOs, most of their orbital angular momentum is transported to (the particle components of) each host halo, spinning it up (as in \cite{B92} \cite{BH}). 
Because the disks involved in the merger are now stabilized by their bulges, no strong gas inflow occurs in this phase (as in \cite{MH96}). As the disks approach one another, they are heated and finally disrupted, but the high efficiency of gas shocking and cooling, and the symmetry of the central potential, quickly puts those of their gas particles with high angular momentum into a new intermediate disk, while their low angular momentum particles sink to the center where most of them are transformed into stars, feeding the bulge. The stellar bulge of the smaller DLO is eventually destroyed and incomplete orbital angular momentum loss puts most of its stars on the remnant disk (Fig. 1b, note incomplete relaxation). v) Relaxation and disk regeneration are completed. Most of the external disk particles are supplied by infall, as in iii) (Fig. 1c). Note in Fig. 1c that at $z=0$ most gas particles placed at $R_i \stackrel{<}{\scriptstyle \sim}$ 30 kpc have $j_{z,i} \simeq |\vec j_i| \simeq v_{\rm c}(R_i)R_i$, where $v_{\rm c}(R) = \sqrt{G M(<R)/R}$, with a small dispersion around this value, that is, they follow circular trajectories on the equatorial plane, forming a cold thin disk. In contrast, those at $R_i \stackrel{>}{\scriptstyle \sim} 30$ kpc (halo gas particles) are disordered, with their $|j_{z,i}|$ taking any value under the full line. Roughly half of them are in counterrotation (i.e., with $j_{z,i} < 0$). The specific angular momentum of halo gas particles, however, is the same as that of gas disk particles, and, also, the same as that of dark matter particles. Stars at $R_i \stackrel{<}{\scriptstyle \sim} 2$ kpc form a compact central relaxed core, with $\vec{j}_i$ without any preferred direction and very low $|\vec j_i|$ (that is, they have been formed from gas particles that had lost much of their $|\vec j_i|$), while those at $R_i \stackrel{>}{\scriptstyle \sim} 2$ kpc roughly follow a (thicker) disk. 
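The cold-disk condition above (particles on circular equatorial orbits with $j \simeq v_{\rm c}(R)\,R$) can be sketched numerically using the standard circular-velocity relation $v_{\rm c}=\sqrt{GM(<R)/R}$; the function name is ours and the mass and radius in the example are illustrative, not values from the simulation:

```python
import math

G_ASTRO = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def specific_j_circular(M_enc, R):
    """Specific angular momentum of a circular orbit: j = v_c * R,
    with v_c = sqrt(G * M(<R) / R); M_enc in Msun, R in kpc, j in kpc*km/s."""
    v_c = math.sqrt(G_ASTRO * M_enc / R)
    return v_c * R

# e.g. an enclosed mass of 1e11 Msun at R = 10 kpc gives v_c ~ 207 km/s,
# so j ~ 2e3 kpc*km/s; since j scales as sqrt(M(<R)), quadrupling the
# enclosed mass doubles j at fixed R.
```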
The assembly of galactic-like objects in S2 follows the same stages. We recall that in both simulations, haloes and merger trees are identical. The main difference is that in S2, stages i) and ii) do not result in a stellar core, and, consequently, in stage iii) an unstable gas disk is formed, susceptible to growing bars. In particular, during the orbital decay phase in iv), strong gas inflow and $j$ loss are induced (i.e., a DAMC, see also Fig. 1d, and \cite{MH96} \cite{BH}). The actual fusion completes the gas inflow (Fig. 1e), involving most of the gas particles originally in the merging disks. Few of them are left for disk regeneration, so that, in phase v), new disks are formed almost only from halo gas particles (Fig. 1f), and hence their low population. The behaviour patterns described so far are common to the other DLOs in S1 or S2. In any case, particles in the external cold disk component at $z = 0$ have fallen in the quiescent phase of evolution that follows the last major merger event, in accordance with FE's scenario, while most of those in the central regions (in S2), or those giving rise to stars in the bulge (in S1), have been involved in a DAMC. Most particles in the intermediate disk component in S1 DLOs belonged to the merging objects and have experienced only partial angular momentum conservation in the merger event. Hence, cold thin disks naturally appear in the non-violent phases of evolution. However, as stated, cold disks are strongly unstable against the bar mode. Some works on disk stability (\cite{CST95} \cite{vdB98} and references therein) suggest that sometimes a central bulge is needed to ensure stability, as massive dark haloes are not always able to stabilize a given amount of baryons as pure exponential disks ({\it ped}s). This could be the process at work in DAMCs observed in hydrodynamical simulations. 
To find out whether this is the case here, we have calculated the $X_2(R)$ parameter \cite{T81} \cite{BT87} for the disk component of our DLOs at different $z$ (Fig. 1), and, also, for their {\it ped} versions (i.e., putting all their respective baryonic masses distributed as a {\it ped}). Recalling the $X_2(R)$ stability criterion, if we define the stability thresholds, $R_{\rm st}^{\rm ad}$ and $R_{\rm st}^{\rm ped}$, as the points where $X_2(R)=3$ for actual and pure exponential disks, respectively, it is apparent from Fig. 1 that disks, when present, are stable: they are detected at $R > R_{\rm st}^{\rm ad}$ if they have had enough time to form after the last merger. By contrast, the {\it ped} version of DLO \#1 at $z = 0$ would be stable only at larger $R$ ($R > R_{\rm st}^{\rm ped} \simeq 21$ kpc). This behaviour is common to any DLO in S1 or S2, and so central mass concentrations are needed to stabilize these disks. These results strongly suggest that DAMCs result from strong gas inflows due to disk instabilities triggered by interactions and mergers during the assembly of galaxy-like objects, and that they can be easily avoided by stabilizing the disks with stellar bulges. \acknowledgements{We are indebted to the DGES (Spain) for financial support. P.B. Tissera thanks the Astrophysics Group at ICSTM (London) for their hospitality.} \begin{bloisbib} \bibitem{B92} Barnes, J.E., 1992, ApJ, 484, 507 \bibitem{BH} Barnes, J.E., \& Hernquist, L. 1991, ApJ, 370, L65; 1992, ARA\&A, 30, 705; 1996, ApJ, 471, 115 \bibitem{BT87} Binney, J., \& Tremaine, S. 1987, {\it Galactic Dynamics}, (Princeton: Princeton Univ. Press) ch. 6 \bibitem{CST95} Christodoulou, D.M., Shlosman, I., \& Tohline, J.E. 1995, ApJ, 443, 55 \bibitem{C97} Courteau, S. 1997, in {\it Morphology \& Dust Content in Spiral Galaxies} eds. D. Block \& M. Greenberg, (Dordrecht: Kluwer) \bibitem{DTTS98} Dom\'{\i}nguez-Tenreiro, R., Tissera, P.B., \& S\'aiz, A. 
1998, ApJ Letters, in press \bibitem{F83} Fall, S.M. 1983, in IAU Symp. 100 {\it Internal Kinematics and Dynamics of Galaxies}, ed. E. Athanassoula (Dordrecht: Reidel), p. 391 \bibitem{FE80} Fall, S.M., \& Efstathiou, G. 1980, MNRAS, 193, 189 \bibitem{MH96} Mihos, J.C., \& Hernquist, L. 1994, ApJ, 425, L13; 1996, ApJ, 464, 641 \bibitem{Nal95} Navarro, J.F., Frenk, C.S., \& White, S.D.M. 1995, MNRAS, 275, 56 \bibitem{NS97} Navarro, J.F., \& Steinmetz, M. 1997, ApJ, 438, 13 \bibitem{SN98} Steinmetz, M., \& Navarro, J.F., SISSA astro-ph 9808076 preprint \bibitem{TC92} Thomas, P.A., \& Couchman, H.M.P. 1992, MNRAS, 257, 11 \bibitem{TDT98} Tissera, P.B., \& Dom\'{\i}nguez-Tenreiro, R. 1998, MNRAS, 297, 177 \bibitem{Tal97} Tissera, P.B., Lambas, D.G., \& Abadi, M.G. 1997, MNRAS, 286, 384 \bibitem{T81} Toomre, A. 1981, in {\it The Structure and Evolution of Normal Galaxies}, eds. S.M. Fall \& D. Lynden-Bell, (Cambridge: Cambridge Univ. Press), p. 111 \bibitem{vdB98} van der Bosch, F.C., 1998, SISSA astro-ph 980113 preprint \bibitem{Wal98} Weil, M.L., Eke, V.R., \& Efstathiou, G., 1998, SISSA astro-ph 9802311 preprint \end{bloisbib} \vfill \end{document}
\section{introduction} Interference management is an important problem in wireless system design. Researchers have been exploring the capacity characterization of the Gaussian interference channel from an information-theoretic perspective for more than thirty years. Several inner bounds and outer bounds of the capacity region for the two user Gaussian interference channel with single-antenna nodes have been determined \cite{Carleial:75IT,Sato:81IT,Han&Kobayashi:81IT,Costa:85IT,Sato:77IT,Carleial:83IT,Kramer:04IT,Etkin-etal:07IT_submission,Telatar&Tse:07ISIT,Shang-etal:06IT_submission}. However, the capacity region of the Gaussian interference channel remains an open problem in general. Interference channels with multiple-antenna nodes are studied in \cite{Vishwanath-Jafar:ITW,Shang-etal:MIMO,ParetoMISO}. \subsection{Motivating Example} In \cite{ParetoMISO}, the authors study the achievable rate region of the multiple input single output (MISO) interference channel obtained by treating interference as noise. They parameterize the Pareto boundary of the MISO Gaussian interference channel for an arbitrary number of users and transmit antennas, as long as the number of antennas is larger than the number of users. For the 2 user case, they show that the optimal beamforming directions are linear combinations of the maximum ratio transmission vectors and the zero forcing vectors. However, when the number of antennas is less than the number of users, the optimal beamforming direction is not known. Intuitively, this is because when the number of antennas is less than the number of users, it is not possible for each user to choose beamforming vectors that ensure no interference is created at all the other users. The same problem is evident when we study this channel from a degrees of freedom \footnote{If the sum capacity can be expressed as $C_{\Sigma}(SNR)=\eta \log(SNR)+o(\log(SNR))$ then we say that the channel has $\eta$ degrees of freedom.} perspective. 
For the 2 user MISO interference channel with 2 transmit antennas and a single receive antenna, it is easy to see that 2 degrees of freedom can be achieved if each user chooses a zero forcing beamforming vector so that no interference is created at the other user. This is also the maximum number of degrees of freedom of this channel. However, for the 3 user MISO interference channel with two antennas at each transmitter, it is not possible for each user to choose beamforming vectors so that no interference is created at all other users. As a result, only 2 degrees of freedom can be achieved by zero forcing. Can we do better than merely zero forcing? What is the total number of degrees of freedom of the 3 user MISO interference channel with 2 antennas at each transmitter? In general, what is the total number of degrees of freedom of the $K$ user $M \times N$ MIMO interference channel? These are the questions that we explore in this paper. Before we answer the above questions, let us first review the results on the degrees of freedom for the $K$ user single input single output (SISO) Gaussian interference channel. If $K=1$, it is well known that this point-to-point channel has 1 degree of freedom. If $K=2$, it is shown that this channel has only 1 degree of freedom \cite{Nosratinia-Madsen}. In other words, each user can achieve $\frac{1}{2}$ degrees of freedom simultaneously. For $K>2$, it is surprising that every user is still able to achieve $\frac{1}{2}$ degrees of freedom no matter how large $K$ is, if the channel coefficients are time-varying or frequency selective and drawn from a continuous distribution \cite{Cadambe_Jafar_int}. The achievable scheme is based on interference alignment combined with zero forcing. For the MISO interference channel we find a similar characterization of the degrees of freedom. 
For example, the degrees of freedom for the 3 user MISO interference channel with 2 antennas at each transmitter is only 2, which is the same as that for the 2 user case. In other words, every user can achieve $\frac{2}{3}$ degrees of freedom simultaneously. For $K>3$, every user is still able to achieve $\frac{2}{3}$ degrees of freedom regardless of $K$ if the channel coefficients are time-varying or frequency selective and drawn from a continuous distribution. The achievable scheme is based on interference alignment on the single input multiple output (SIMO) interference channel for simplicity. If interference alignment is achieved on the SIMO channel it can also be achieved on the MISO channel, due to a reciprocity of alignment \cite{Gomadam_Cadambe_Jafar_dist}. Interestingly, the interference alignment scheme is different from all prior schemes. All prior interference alignment schemes \cite{Cadambe_Jafar_int} (including the ones for the $X$ channel \cite{Jafar_Shamai,Cadambe_Jafar_X}) explicitly achieve one-to-one alignment of signal vectors, i.e., to minimize the dimension of the space spanned by interference signal vectors, one signal vector from an interferer and one signal vector from another interferer are aligned along the same dimension at the desired receivers. For example, consider the 3 user SISO interference channel with a 2 symbol extension, or the 3 user MIMO interference channel where each node has 2 antennas. We need to choose beamforming vectors $\mathbf{v}^{[2]}$ and $\mathbf{v}^{[3]}$ at Transmitters 2 and 3, respectively, so that they cast overlapping shadows at Receiver 1, i.e., \begin{displaymath} \mathbf{H}^{[12]}\mathbf{v}^{[2]}=\mathbf{H}^{[13]}\mathbf{v}^{[3]} \end{displaymath} where $\mathbf{H}^{[12]}$ and $\mathbf{H}^{[13]}$ are $2 \times 2$ channel matrices from Transmitters 2 and 3 to Receiver 1, respectively. However, such an alignment is not feasible on the SIMO channel. 
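In the $2\times 2$ MIMO case, the alignment condition above can be solved explicitly: pick any $\mathbf{v}^{[2]}$, then $\mathbf{v}^{[3]}=(\mathbf{H}^{[13]})^{-1}\mathbf{H}^{[12]}\mathbf{v}^{[2]}$ works whenever $\mathbf{H}^{[13]}$ is invertible, which holds with probability one for generic channels. A small sketch with random channel matrices (all names are ours):

```python
import random

def mat_vec(H, v):
    """Multiply a 2x2 matrix (list of rows) by a length-2 vector."""
    return [H[0][0] * v[0] + H[0][1] * v[1],
            H[1][0] * v[0] + H[1][1] * v[1]]

def align(H12, H13, v2):
    """Return v3 such that H12 @ v2 == H13 @ v3 (2x2 case, H13 invertible)."""
    w = mat_vec(H12, v2)
    det = H13[0][0] * H13[1][1] - H13[0][1] * H13[1][0]
    # Explicit 2x2 inverse applied to w.
    return [(H13[1][1] * w[0] - H13[0][1] * w[1]) / det,
            (-H13[1][0] * w[0] + H13[0][0] * w[1]) / det]

rng = random.Random(0)
H12 = [[rng.gauss(0, 1) for _ in range(2)] for _ in range(2)]
H13 = [[rng.gauss(0, 1) for _ in range(2)] for _ in range(2)]
v2 = [1.0, 2.0]
v3 = align(H12, H13, v2)
# Both interference vectors now occupy the same one-dimensional subspace.
mismatch = max(abs(a - b) for a, b in zip(mat_vec(H12, v2), mat_vec(H13, v3)))
```

The construction fails for the symbol-extended SIMO channel precisely because the corresponding channel matrices are tall ($4\times 2$), so no inverse exists, which is the point made next.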
Notice that a solution to the condition mentioned above exists only when the ranges of the two channel matrices have a nontrivial intersection. The channel matrix for the 2 symbol extension of the SIMO channel with 2 antennas at each receiver is $4 \times 2$. The ranges of two such channel matrices have null intersection with probability one if the channel coefficients are drawn from a continuous distribution. Thus, one-to-one interference alignment does not directly work for the SIMO channel. Instead, interference from one interferer can only be aligned within the union of the spaces spanned by the interference vectors from $R$ other interferers, where $R$ is the number of antennas at each receiver. \subsection{Overview of Results} In this paper we study the degrees of freedom of the $K$ user MIMO Gaussian interference channel with $M$ antennas at each transmitter and $N$ antennas at each receiver. We provide both an innerbound (achievability) and an outerbound (converse) on the total number of degrees of freedom for this channel. We show that $\min(M,N)K$ degrees of freedom can be achieved if $K \leq R$ and $\frac{R}{R+1}\min(M,N)K$ degrees of freedom can be achieved if $K>R$ where $R=\lfloor\frac{\max(M,N)}{\min{(M,N)}}\rfloor$. The total number of degrees of freedom is bounded above by $\min(M,N)K$ if $K \leq R$ and $\frac{\max(M,N)}{R+1}K$ if $K>R$. The bounds are tight when the ratio $\frac{\max(M,N)}{\min(M,N)}=R$ is equal to an integer, which includes the MISO and SIMO interference channels as special cases. The result indicates that when $K \leq R$ every user can achieve $\min(M,N)$ degrees of freedom, which is the same as what one can achieve without interference. When $K>R$ every user can achieve a fraction $\frac{R}{R+1}$ of the degrees of freedom that one can achieve in the absence of all interference. In other words, if $K \leq R$, then there is no loss of degrees of freedom for each user due to interference.
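The null intersection claim can be checked numerically. The numpy sketch below (illustrative, with Gaussian entries standing in for any continuous distribution) computes the dimension of the intersection of the column spaces of two $4 \times 2$ extended channel matrices via the rank identity $\dim(\mathcal{A}\cap\mathcal{B}) = \operatorname{rank}(\mathbf{A})+\operatorname{rank}(\mathbf{B})-\operatorname{rank}([\mathbf{A}~\mathbf{B}])$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two 4x2 extended SIMO channel matrices (2 receive antennas, 2 symbol
# extension), entries drawn from a continuous distribution.
H12 = rng.standard_normal((4, 2))
H13 = rng.standard_normal((4, 2))

# dim(range(A) ∩ range(B)) = rank(A) + rank(B) - rank([A B])
inter_dim = (np.linalg.matrix_rank(H12) + np.linalg.matrix_rank(H13)
             - np.linalg.matrix_rank(np.hstack((H12, H13))))
print(inter_dim)  # 0: one-to-one alignment H12 v2 = H13 v3 is infeasible
```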
If $K > R$, every user only loses a fraction $\frac{1}{R+1}$ of the degrees of freedom that can be achieved without interference. In the second part of this paper we study the achievable degrees of freedom based on an interference alignment scheme for the $R+2$ user MIMO interference channel with $M$ antennas at each transmitter and $RM$ ($R=2,3,\ldots$) antennas at each receiver and constant channel coefficients, i.e. in the absence of time variation. We show that for this channel $RM+\lfloor\frac{RM}{R^2+2R-1}\rfloor$ degrees of freedom can be achieved without symbol extension. When $\lfloor\frac{RM}{R^2+2R-1}\rfloor=0$, and hence $M<R+2$, $RM+\frac{1}{\lceil\frac{R+2}{M}\rceil}$ degrees of freedom per orthogonal dimension can be achieved with finite symbol extension. Since only $RM$ degrees of freedom can be achieved using zero forcing, these results provide interesting examples where an interference alignment scheme can achieve more degrees of freedom than merely zero forcing. \section{System Model} The $K$ user MIMO interference channel is comprised of $K$ transmitters and $K$ receivers. Each transmitter has $M$ antennas and each receiver has $N$ antennas.
The channel output at the $k^{th}$ receiver over the $t^{th}$ time slot is characterized by the following input-output relationship: \begin{equation*} \mathbf{Y}^{[k]}(t)=\mathbf{H}^{[k1]}(t)\mathbf{X}^{[1]}(t)+\mathbf{H}^{[k2]}(t)\mathbf{X}^{[2]}(t)+\cdots+\mathbf{H}^{[kK]}(t)\mathbf{X}^{[K]}(t)+\mathbf{Z}^{[k]}(t) \end{equation*} where $k\in \{1,2,\cdots,K\}$ is the user index, $t \in \mathbb{N}$ is the time slot index, $\mathbf{Y}^{[k]}(t)$ is the $N \times 1$ output signal vector of the $k^{th}$ receiver, $\mathbf{X}^{[j]}(t)$ is the $M \times 1$ input signal vector of the $j^{th}$ transmitter, $\mathbf{H}^{[kj]}(t)$ is the $N \times M$ channel matrix from transmitter $j$ to receiver $k$ over the $t^{th}$ time slot and $\mathbf{Z}^{[k]}(t)$ is the $N\times 1$ additive white Gaussian noise (AWGN) vector at the $k^{th}$ receiver. We assume all noise terms are i.i.d. zero mean complex Gaussian with unit variance. We assume that all channel coefficient values are drawn i.i.d. from a continuous distribution and the absolute value of all the channel coefficients is bounded between a non-zero minimum value and a finite maximum value. The channel coefficient values vary at every channel use. Perfect knowledge of all channel coefficients is available to all transmitters and receivers. Transmitters $1, 2, \cdots, K$ have independent messages $W_1, W_2, \cdots, W_K$ intended for receivers $1, 2, \cdots, K$, respectively. The total power across all transmitters is assumed to be equal to $\rho$. We indicate the size of the message set by $|W_i(\rho)|$. For codewords spanning $t_0$ channel uses, the rates $R_i(\rho)=\frac{\log|W_i(\rho)|}{t_0}$ are achievable if the probability of error for all messages can be simultaneously made arbitrarily small by choosing an appropriately large $t_0$. The capacity region $\mathcal{C}(\rho)$ of the $K$ user MIMO interference channel is the set of all achievable rate tuples ${\mathbf R}(\rho)=(R_1(\rho), R_2(\rho), \cdots, R_K(\rho))$.
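As a concrete illustration of the input-output relationship, the following numpy sketch simulates one time slot of the channel model; the sizes $K=3$, $M=N=2$ and the Gaussian draws are illustrative assumptions, not part of the model definition.

```python
import numpy as np

rng = np.random.default_rng(2)
K, M, N = 3, 2, 2  # illustrative sizes (assumed for this sketch)

def received_signal(H, X, Z):
    """Y^[k] = sum_j H^[kj] X^[j] + Z^[k] for one time slot."""
    return [sum(H[k][j] @ X[j] for j in range(K)) + Z[k] for k in range(K)]

# H[k][j]: N x M channel from transmitter j to receiver k; X, Z as in the model.
H = [[rng.standard_normal((N, M)) for _ in range(K)] for _ in range(K)]
X = [rng.standard_normal((M, 1)) for _ in range(K)]
Z = [rng.standard_normal((N, 1)) for _ in range(K)]

Y = received_signal(H, X, Z)
print(len(Y), Y[0].shape)  # 3 (2, 1)
```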
We define the spatial degrees of freedom as: \begin{equation} \eta \triangleq \lim_{\rho \rightarrow \infty} \frac{C_\Sigma(\rho)}{\log(\rho)} \end{equation} where $C_\Sigma(\rho)$ is the sum capacity at SNR $\rho$. \section{Outerbound on the degrees of freedom for the $K$ user MIMO interference channel} We provide an outerbound on the degrees of freedom for the $K$ user MIMO Gaussian interference channel in this section. Note that the converse holds for both time-varying and constant (non-zero) channel coefficients, i.e., time variations are not required. We present the result in the following theorem: \begin{theorem}\label{thm:outerbound} For the $K$ user MIMO Gaussian interference channel with $M$ antennas at each transmitter and $N$ antennas at each receiver, the total number of degrees of freedom is bounded above by $K\min(M,N)$ if $K \leq R$ and $\frac{\max(M,N)}{R+1}K$ if $K>R$ where $R=\lfloor\frac{\max(M,N)}{\min{(M,N)}}\rfloor$, i.e. \begin{equation*} \eta=d_1+\cdots+d_K \leq \min{(M,N)}K~1(K \leq R)+\frac{\max(M,N)}{R+1}K~1(K>R) \end{equation*} where 1(.) is the indicator function and $d_i$ represents the individual degrees of freedom achieved by user $i$. \end{theorem} \begin{proof}\\ 1) $K \leq R$: It is well known that the degrees of freedom of a single user MIMO Gaussian channel with $M$ transmit antennas and $N$ receive antennas is equal to $\min(M,N)$. Thus, for the $K$ user MIMO Gaussian interference channel with the same antenna deployment, the degrees of freedom cannot be more than $K\min(M,N)$, i.e. $\eta \leq K\min(M,N)$.\\ 2) $K>R$: Consider the $R+1$ user MIMO interference channel with $M$, $N$ antennas at each transmitter and receiver, respectively. If we allow full cooperation among $R$ transmitters and full cooperation among their corresponding receivers, then it is equivalent to the two user MIMO interference channel with $RM$, $M$ antennas at the transmitters and $RN$, $N$ antennas at their corresponding receivers.
In \cite{Jafar_dof_int}, it is shown that the degrees of freedom for a two user MIMO Gaussian interference channel with $M_1$, $M_2$ antennas at transmitters $1$, $2$ and $N_1$, $N_2$ antennas at their corresponding receivers is $\min\{M_1+M_2,\ N_1+N_2,\ \max(M_1,N_2),\ \max(M_2,N_1)\}$. From this result, the degrees of freedom for the two user MIMO interference channel with $RM$, $M$ antennas at the transmitters and $RN$, $N$ antennas at their corresponding receivers is $\max(M,N)$. Since allowing transmitters and receivers to cooperate does not hurt the capacity, the degrees of freedom of the original $R+1$ user interference channel is no more than $\max(M,N)$. For the case $K>R+1$, picking any $R+1$ users among the $K$ users gives an outerbound: \begin{equation} d_{i_1}+d_{i_2}+\cdots+d_{i_{R+1}} \leq \max(M,N) \quad \forall i_1,\cdots,i_{R+1} \in \{1,2,\cdots,K\}, \quad i_1 \ne i_2 \ne \cdots \ne i_{R+1} \end{equation} Adding up all ${K \choose R+1}$ such inequalities, in which each $d_i$ appears exactly ${K-1 \choose R}$ times, and dividing by ${K-1 \choose R}$, we get the outerbound of the $K$ user MIMO interference channel: \begin{equation} d_1+d_2+\cdots+d_K \leq \frac{\max(M,N)}{R+1}K \end{equation} \end{proof} \section{Innerbound on the degrees of freedom for the $K$ user MIMO interference channel} To derive the innerbound on the degrees of freedom for the $K$ user MIMO Gaussian interference channel, we first obtain the achievable degrees of freedom for the $K$ user SIMO interference channel with $R$ antennas at each receiver. The innerbound on the degrees of freedom of the $K$ user MIMO interference channel follows directly from the results of the SIMO interference channel.
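The averaging step in the converse can be sanity-checked combinatorially: each user index appears in $\binom{K-1}{R}$ of the $\binom{K}{R+1}$ subset inequalities, so summing them and normalizing yields the factor $K/(R+1)$. A small Python sketch (not part of the proof) verifies this count by enumeration:

```python
from itertools import combinations
from math import comb

def bound_factor(K, R):
    """Sum the subset bounds d_{i1}+...+d_{i_{R+1}} <= max(M,N) over all
    (R+1)-subsets of K users. Each user appears comb(K-1, R) times, so
    dividing comb(K, R+1) * max(M,N) by comb(K-1, R) gives
    sum_i d_i <= (K / (R+1)) * max(M,N)."""
    counts = [0] * K
    for subset in combinations(range(K), R + 1):
        for i in subset:
            counts[i] += 1
    assert all(c == comb(K - 1, R) for c in counts)
    return comb(K, R + 1) / comb(K - 1, R)  # equals K / (R+1)

# 4 user SIMO channel with R = 2 receive antennas: sum DoF <= (4/3) * 2 = 8/3.
print(bound_factor(4, 2))
```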
The corresponding input-output relationship of the $K$ user SIMO interference channel is: \begin{equation*} \mathbf{Y}^{[k]}(t)=\mathbf{h}^{[k1]}(t)x^{[1]}(t)+\mathbf{h}^{[k2]}(t)x^{[2]}(t)+\cdots+\mathbf{h}^{[kK]}(t)x^{[K]}(t)+\mathbf{Z}^{[k]}(t) \end{equation*} where $\mathbf{Y}^{[k]}(t)$, $x^{[j]}(t)$, $\mathbf{h}^{[kj]}(t)$, $\mathbf{Z}^{[k]}(t)$ represent the channel output at receiver $k$, the channel input from transmitter $j$, the channel vector from transmitter $j$ to receiver $k$ and the AWGN vector at receiver $k$ over the $t^{th}$ time slot, respectively. We start with the problem mentioned in the introduction. For the 3 user SIMO Gaussian interference channel with 2 receive antennas, 2 degrees of freedom can be achieved using zero forcing. From the converse result in the last section, we cannot achieve more than 2 degrees of freedom on this channel. Therefore, the maximum number of degrees of freedom for this channel is 2. For the 4 user case, the converse result indicates that this channel cannot achieve more than $\frac{8}{3}$ degrees of freedom. Can we achieve this outerbound? Interestingly, using an interference alignment scheme based on beamforming over multiple symbol extensions of the original channel, we are able to come arbitrarily close to the outerbound. Consider the $\mu_n=3(n+1)^8$ symbol extension of the channel for any arbitrary $n \in \mathbb{N}$. Then, we effectively have a $2\mu_n \times \mu_n$ channel with a block diagonal structure. In order for each user to get exactly $\frac{2}{3}$ degrees of freedom per channel use, and hence $\frac{2}{3}\mu_n=2(n+1)^8$ degrees of freedom on the $\mu_n$ symbol extension channel, each receiver, with a $2\mu_n$ dimensional signal space in total, should partition its signal space into two disjoint subspaces: one of dimension $\frac{2}{3}\mu_n$ for the desired signals and the other of dimension $\frac{4}{3}\mu_n$ for the interference signals.
While such an alignment would exactly achieve the outerbound, it appears to be infeasible in general. But if we allow user 4 to achieve only $(\frac{2}{3}-\epsilon_n)\mu_n=2n^8$ degrees of freedom over the $\mu_n$ extension channel where $\epsilon_n=\frac{2(n+1)^8-2n^8}{3(n+1)^8}=\frac{2}{3}[1-\frac{1}{(1+\frac{1}{n})^8}]$, then it is possible for users 1, 2, 3 to achieve exactly $\frac{2}{3}\mu_n$ degrees of freedom simultaneously for a total of $(\frac{8}{3}-\epsilon_n)\mu_n$ degrees of freedom over the $\mu_n$ symbol extension channel. Hence, $\frac{8}{3}-\frac{2}{3}[1-\frac{1}{(1+\frac{1}{n})^8}]$ degrees of freedom per channel use can be achieved. As $n \to \infty$, $\frac{2}{3}[1-\frac{1}{(1+\frac{1}{n})^8}] \to 0$. Therefore, we can come arbitrarily close to the outerbound $\frac{8}{3}$. Next we present a detailed description of the interference-alignment scheme for the 4 user SIMO channel with 2 antennas at each receiver. In the extended channel, Transmitter $j, \forall j=1,2,3$ sends message $W_j$ to Receiver $j$ in the form of $\frac{2}{3}\mu_n$ independently encoded streams $x^{[j]}_m(t), m=1,2,\ldots,\frac{2}{3}\mu_n$ along the same set of beamforming vectors $\mathbf{\bar{v}}^{[1]}_1(t),\ldots,\mathbf{\bar{v}}^{[1]}_{\frac{2}{3}\mu_n}(t)$, each of dimension $\mu_n \times 1$, so that we have \begin{figure}[t] \centering \includegraphics[width=6.4in]{ia.eps} \caption{Interference alignment on the 4 user interference channel} \label{fig1} \end{figure} \begin{equation*} \mathbf{\bar{X}}^{[j]}(t) = \displaystyle\sum_{m=1}^{\frac{2}{3}\mu_n} x^{[j]}_m(t) \mathbf{\bar{v}}_m^{[1]}(t) = \mathbf{\bar{V}}^{[1]}(t) \mathbf{X}^{[j]}(t), ~~~ j = 1,2,3 \end{equation*} where $\mathbf{\bar{V}}^{[1]}(t)=[\mathbf{\bar{v}}^{[1]}_1(t),\cdots,\mathbf{\bar{v}}^{[1]}_{\frac{2}{3}\mu_n}(t)]$ is a $\mu_n \times \frac{2}{3}\mu_n$ matrix and $\mathbf{X}^{[j]}(t)$ is a $\frac{2}{3}\mu_n \times 1$ column vector.
Transmitter 4 sends message $W_4$ to Receiver 4 in the form of $(\frac{2}{3}-\epsilon_n)\mu_n$ independently encoded streams $x^{[4]}_m(t), m=1,2,\ldots,(\frac{2}{3}-\epsilon_n)\mu_n$ along the beamforming vectors $\mathbf{\bar{v}}^{[2]}_1(t),\ldots,\mathbf{\bar{v}}^{[2]}_{(\frac{2}{3}-\epsilon_n)\mu_n}(t)$ so that \begin{equation*} \mathbf{\bar{X}}^{[4]}(t) = \displaystyle\sum_{m=1}^{(\frac{2}{3}-\epsilon_n)\mu_n} x^{[4]}_m(t) \mathbf{\bar{v}}_m^{[2]}(t) = \mathbf{\bar{V}}^{[2]}(t) \mathbf{X}^{[4]}(t) \end{equation*} where $\mathbf{\bar{V}}^{[2]}(t)=[\mathbf{\bar{v}}^{[2]}_1(t),\cdots,\mathbf{\bar{v}}^{[2]}_{(\frac{2}{3}-\epsilon_n)\mu_n}(t)]$ is a $\mu_n \times (\frac{2}{3}-\epsilon_n)\mu_n$ matrix and $\mathbf{X}^{[4]}(t)$ is a $(\frac{2}{3}-\epsilon_n)\mu_n \times 1$ column vector. Therefore, the received signal at Receiver $k$ is \begin{equation*} \mathbf{\bar{Y}}^{[k]}(t) = \displaystyle\sum_{j=1}^{3}\mathbf{\bar{H}}^{[kj]}(t) \mathbf{\bar{V}}^{[1]}(t)\mathbf{X}^{[j]}(t) + \mathbf{\bar{H}}^{[k4]}(t)\mathbf{\bar{V}}^{[2]}(t)\mathbf{X}^{[4]}(t) + \bar{\mathbf{Z}}^{[k]}(t) \end{equation*} where $\mathbf{\bar{H}}^{[kj]}(t)$ is the $2\mu_n \times \mu_n$ matrix representing the $\mu_n$ extension of the original channel matrix, i.e. \begin{eqnarray*} \bar{\mathbf{H}}^{[kj]}(t)= \left[ \begin{array}{cccc} \mathbf{h}^{[kj]}(\mu_n(t-1)+1) & \mathbf{0} & \ldots & \mathbf{0}\\ \mathbf{0} & \mathbf{h}^{[kj]}(\mu_n(t-1)+2) & \ldots & \mathbf{0}\\ \vdots & \cdots & \ddots & \vdots\\ \mathbf{0} & \mathbf{0}& \cdots & \mathbf{h}^{[kj]}(\mu_nt) \\ \end{array}\right] \end{eqnarray*} where $\mathbf{0}$ is a $2 \times 1$ vector with zero entries. Similarly, $\mathbf{\bar{Y}}$ and $\bar{\mathbf{Z}}$ represent the $\mu_n$ symbol extension of the $\mathbf{Y}$ and $\mathbf{Z}$ respectively. The interference alignment scheme is shown in Fig. \ref{fig1}. 
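The block diagonal structure of $\bar{\mathbf{H}}^{[kj]}$ is easy to build explicitly. The numpy sketch below uses a toy extension length $\mu=6$ (an assumption for illustration, in place of $\mu_n=3(n+1)^8$) and confirms that the extended matrix is $2\mu \times \mu$ with full column rank.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = 6  # illustrative extension length (the paper uses mu_n = 3(n+1)^8)

# One 2x1 SIMO channel vector per extended time slot.
h_slots = [rng.standard_normal((2, 1)) for _ in range(mu)]

def extended_channel(h_slots):
    """Block-diagonal 2*mu x mu matrix: slot m's 2x1 vector occupies
    rows 2m..2m+1 of column m, all other entries zero."""
    mu = len(h_slots)
    H_bar = np.zeros((2 * mu, mu))
    for m, h in enumerate(h_slots):
        H_bar[2 * m:2 * m + 2, m:m + 1] = h
    return H_bar

H_bar = extended_channel(h_slots)
print(H_bar.shape, np.linalg.matrix_rank(H_bar))  # (12, 6) 6
```

The disjoint row supports of the columns make the full column rank immediate, which is what lets $[\mathbf{\bar{H}}^{[12]}~\mathbf{\bar{H}}^{[13]}]$ be invertible almost surely in the sequel.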
At Receiver 1, the interference from Transmitter 2 and Transmitter 3 cannot be aligned with each other because the subspaces spanned by the columns of $\mathbf{\bar{H}}^{[12]}$ and $\mathbf{\bar{H}}^{[13]}$ have null intersection with probability one. Thus, the interference vectors from Transmitter 2, i.e., the columns of $\mathbf{\bar{H}}^{[12]}\mathbf{\bar{V}}^{[1]}$, and the interference vectors from Transmitter 3, i.e., the columns of $\mathbf{\bar{H}}^{[13]}\mathbf{\bar{V}}^{[1]}$, together span a $\frac{4}{3}\mu_n$ dimensional subspace of the $2\mu_n$ dimensional signal space at Receiver 1. In order for Receiver 1 to get a $\frac{2}{3}\mu_n$ dimensional interference-free signal space, we need to align the space spanned by the interference vectors from Transmitter 4, i.e., the range of $\mathbf{\bar{H}}^{[14]}\mathbf{\bar{V}}^{[2]}$, within the space spanned by the interference vectors from Transmitters 2 and 3. Note that we cannot align the interference from Transmitter 4 within the space spanned by the interference vectors from Transmitter 2 only or from Transmitter 3 only, because the subspaces spanned by the columns of $\mathbf{\bar{H}}^{[14]}$ and $\mathbf{\bar{H}}^{[12]}$, or those spanned by the columns of $\mathbf{\bar{H}}^{[14]}$ and $\mathbf{\bar{H}}^{[13]}$, have null intersection with probability one. Mathematically, we have \begin{equation}\label{reqrx1} \text{span}(\mathbf{\bar{H}}^{[14]}\mathbf{\bar{V}}^{[2]}) \subset \text{span}(\left[ \mathbf{\bar{H}}^{[12]}\mathbf{\bar{V}}^{[1]}~ \mathbf{\bar{H}}^{[13]}\mathbf{\bar{V}}^{[1]} \right]) \end{equation} where $\text{span}(\mathbf{A})$ means the space spanned by the columns of matrix $\mathbf{A}$.
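A containment such as \eqref{reqrx1} is a statement about column spaces and can be tested numerically by a rank comparison: appending the columns of $\mathbf{A}$ to $\mathbf{B}$ must not increase the rank. A minimal numpy helper (illustrative sizes, not the actual $2\mu_n$-dimensional spaces):

```python
import numpy as np

def span_contained(A, B, tol=1e-9):
    """True if span(A) is a subspace of span(B): rank([B A]) == rank(B)."""
    return bool(np.linalg.matrix_rank(np.hstack((B, A)), tol=tol)
                == np.linalg.matrix_rank(B, tol=tol))

rng = np.random.default_rng(4)
B = rng.standard_normal((6, 3))
A_in = B @ rng.standard_normal((3, 2))   # columns built inside span(B)
A_out = rng.standard_normal((6, 2))      # generic columns, outside a.s.

print(span_contained(A_in, B), span_contained(A_out, B))  # True False
```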
This condition can be expressed equivalently as \begin{equation*} \text{span}(\mathbf{\bar{H}}^{[14]}\mathbf{\bar{V}}^{[2]}) \subset \text{span} (\left[ \mathbf{\bar{H}}^{[12]}~ \mathbf{\bar{H}}^{[13]}\right] \left[ \begin{array}{cc} \mathbf{\bar{V}}^{[1]}& \mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[1]} \end{array} \right]) \end{equation*} where $\mathbf{0}$ denotes a $\mu_n \times \frac{2}{3}\mu_n$ matrix with zero entries. Note that $[\mathbf{\bar{H}}^{[12]}~ \mathbf{\bar{H}}^{[13]}]$ is a $2\mu_n \times 2\mu_n$ matrix with full rank almost surely. Therefore, the last equation is equivalent to \begin{eqnarray}\label{reqrx1_2} \text{span}(\underbrace{[\mathbf{\bar{H}}^{[12]}~\mathbf{\bar{H}}^{[13]}]^{-1}\mathbf{\bar{H}}^{[14]}}_{\mathbf{T}^{[1]}} \mathbf{\bar{V}}^{[2]}) \subset \text{span}(\left[ \begin{array}{cc} \mathbf{\bar{V}}^{[1]}& \mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[1]} \end{array} \right]) \end{eqnarray} where $\mathbf{T}^{[1]}$ is a $2\mu_n \times \mu_n$ matrix which can be written in a block matrix form: \begin{equation*} \mathbf{T}^{[1]}=\left[\begin{array}{c}\mathbf{T}^{[1]}_{1}\\ \mathbf{T}^{[1]}_{2}\end{array}\right] \end{equation*} where $\mathbf{T}^{[1]}_{1}$ and $\mathbf{T}^{[1]}_{2}$ are $\mu_n \times \mu_n$ matrices. Therefore, \eqref{reqrx1_2} can be expressed alternatively as \begin{eqnarray} \text{span}( \left[ \begin{array}{c} \mathbf{T}^{[1]}_1 \mathbf{\bar{V}}^{[2]}\\ \mathbf{T}^{[1]}_2 \mathbf{\bar{V}}^{[2]}\\ \end{array} \right] )\subset \text{span}(\left[ \begin{array}{cc} \mathbf{\bar{V}}^{[1]}& \mathbf{0}\\ \mathbf{0}& \mathbf{\bar{V}}^{[1]} \end{array} \right]) \end{eqnarray} This condition can be satisfied if \begin{eqnarray}\label{cond1} \left\{\begin{array}{ccc} \mathbf{T}^{[1]}_1 \mathbf{\bar{V}}^{[2]} & \prec & \mathbf{\bar{V}}^{[1]} \\ \mathbf{T}^{[1]}_2 \mathbf{\bar{V}}^{[2]} & \prec & \mathbf{\bar{V}}^{[1]} \\ \end{array}\right . 
\end{eqnarray} where $\mathbf{P} \prec \mathbf{Q}$ means that the set of column vectors of matrix $\mathbf{P}$ is a subset of the set of column vectors of matrix $\mathbf{Q}$. Similarly, at Receiver 2, the interference vectors from Transmitter 4 are aligned within the space spanned by the interference vectors from Transmitter 1 and 3, i.e., \begin{equation} \text{span}(\mathbf{\bar{H}}^{[24]}\mathbf{\bar{V}}^{[2]}) \subset \text{span}(\left[ \mathbf{\bar{H}}^{[21]}\mathbf{\bar{V}}^{[1]}~ \mathbf{\bar{H}}^{[23]}\mathbf{\bar{V}}^{[1]} \right]) \end{equation} This condition can be satisfied if \begin{eqnarray}\label{cond2} \left\{\begin{array}{ccc} \mathbf{T}^{[2]}_1 \mathbf{\bar{V}}^{[2]} & \prec & \mathbf{\bar{V}}^{[1]} \\ \mathbf{T}^{[2]}_2 \mathbf{\bar{V}}^{[2]} & \prec & \mathbf{\bar{V}}^{[1]} \\ \end{array}\right . \end{eqnarray} where \begin{equation*} \mathbf{T}^{[2]}=\left[\begin{array}{c}\mathbf{T}^{[2]}_{1}\\ \mathbf{T}^{[2]}_{2}\end{array}\right]=[\mathbf{\bar{H}}^{[21]}~\mathbf{\bar{H}}^{[23]}]^{-1}\mathbf{\bar{H}}^{[24]} \end{equation*} At Receiver 3, the interference vectors from Transmitter 4 are aligned within the space spanned by the interference vectors from Transmitter 1 and 2, i.e. \begin{equation} \text{span}(\mathbf{\bar{H}}^{[34]}\mathbf{\bar{V}}^{[2]}) \subset \text{span}(\left[ \mathbf{\bar{H}}^{[31]}\mathbf{\bar{V}}^{[1]}~ \mathbf{\bar{H}}^{[32]}\mathbf{\bar{V}}^{[1]} \right]) \end{equation} This condition can be satisfied if \begin{eqnarray}\label{cond3} \left\{\begin{array}{ccc} \mathbf{T}^{[3]}_1 \mathbf{\bar{V}}^{[2]} & \prec & \mathbf{\bar{V}}^{[1]} \\ \mathbf{T}^{[3]}_2 \mathbf{\bar{V}}^{[2]} & \prec & \mathbf{\bar{V}}^{[1]} \\ \end{array}\right . \end{eqnarray} where \begin{equation*} \mathbf{T}^{[3]}=\left[\begin{array}{c}\mathbf{T}^{[3]}_{1}\\ \mathbf{T}^{[3]}_{2}\end{array}\right]=[\mathbf{\bar{H}}^{[31]}~\mathbf{\bar{H}}^{[32]}]^{-1}\mathbf{\bar{H}}^{[34]} \end{equation*} Now, let us consider Receiver 4. As shown in Fig. 
\ref{fig1}, to get an interference-free signal space of dimension $(\frac{2}{3}-\epsilon_n)\mu_n$, the dimension of the space spanned by the interference vectors has to be less than or equal to $2\mu_n-(\frac{2}{3}-\epsilon_n)\mu_n$. To achieve this, we align $(\frac{2}{3}-\epsilon_n)\mu_n$ of the interference vectors from Transmitter 3 within the space spanned by the interference from Transmitters 1 and 2. Since $\mathbf{\bar{V}}^{[1]}$ is a $\mu_n \times \frac{2}{3}\mu_n$ matrix, we can write it as $\mathbf{\bar{V}}^{[1]}=[\mathbf{\bar{V}}^{[1]}_u~ \mathbf{\bar{V}}^{[1]}_{\epsilon_n}]$ where $\mathbf{\bar{V}}^{[1]}_u$ and $\mathbf{\bar{V}}^{[1]}_{\epsilon_n}$ are $\mu_n \times (\frac{2}{3}-\epsilon_n)\mu_n$ and $\mu_n \times \epsilon_n\mu_n$ matrices, respectively. We assume the space spanned by the columns of $\mathbf{\bar{H}}^{[43]}\mathbf{\bar{V}}^{[1]}_u$ is aligned within the space spanned by the interference from Transmitters 1 and 2, i.e., \begin{equation}\label{alignrx4} \text{span}(\mathbf{\bar{H}}^{[43]}\mathbf{\bar{V}}^{[1]}_u) \subset \text{span} (\left[ \mathbf{\bar{H}}^{[41]}\mathbf{\bar{V}}^{[1]}~ \mathbf{\bar{H}}^{[42]}\mathbf{\bar{V}}^{[1]} \right]) \end{equation} From equation \eqref{cond1}, we have \begin{equation*} \mathbf{T}^{[1]}_1 \mathbf{\bar{V}}^{[2]} \prec \mathbf{\bar{V}}^{[1]} \end{equation*} This implies that $(\frac{2}{3}-\epsilon_n)\mu_n$ columns of $\mathbf{\bar{V}}^{[1]}$ are equal to the columns of $\mathbf{T}^{[1]}_1 \mathbf{\bar{V}}^{[2]}$. Without loss of generality, we assume that $\mathbf{\bar{V}}^{[1]}_u=\mathbf{T}^{[1]}_1 \mathbf{\bar{V}}^{[2]}$.
Thus, \eqref{alignrx4} can be written as \begin{eqnarray*} \text{span}(\mathbf{\bar{H}}^{[43]}\mathbf{\bar{V}}^{[1]}_u)=\text{span}(\mathbf{\bar{H}}^{[43]}\mathbf{T}^{[1]}_1 \mathbf{\bar{V}}^{[2]}) \subset \text{span} (\left[ \mathbf{\bar{H}}^{[41]}\mathbf{\bar{V}}^{[1]}~ \mathbf{\bar{H}}^{[42]}\mathbf{\bar{V}}^{[1]} \right])\\ \Rightarrow \text{span}(\mathbf{\bar{H}}^{[43]}\mathbf{T}^{[1]}_1 \mathbf{\bar{V}}^{[2]}) \subset \text{span} (\left[ \mathbf{\bar{H}}^{[41]}~ \mathbf{\bar{H}}^{[42]}\right] \left[ \begin{array}{cc} \mathbf{\bar{V}}^{[1]}& \mathbf{0}\\ \mathbf{0}& \mathbf{\bar{V}}^{[1]} \end{array} \right])\\ \Rightarrow \text{span}(\underbrace{\left[\mathbf{\bar{H}}^{[41]}~ \mathbf{\bar{H}}^{[42]}\right]^{-1} \mathbf{\bar{H}}^{[43]}\mathbf{T}^{[1]}_1 }_{\mathbf{T}^{[4]}}\mathbf{\bar{V}}^{[2]}) \subset \text{span} (\left[ \begin{array}{cc} \mathbf{\bar{V}}^{[1]}& \mathbf{0}\\ \mathbf{0}& \mathbf{\bar{V}}^{[1]} \end{array} \right]) \end{eqnarray*} Note that $\mathbf{T}^{[4]}$ is a $2\mu_n \times \mu_n$ matrix and can be written in a block matrix form: \begin{equation*} \mathbf{T}^{[4]}=\left[ \begin{array}{c}\mathbf{T}^{[4]}_1\\ \mathbf{T}^{[4]}_2 \end{array}\right] \label{T_block} \end{equation*} where each block $\mathbf{T}^{[4]}_i$ is a $\mu_n \times \mu_n$ matrix. Then, the above equation can be expressed as \begin{equation*} \text{span}( \left[ \begin{array}{c} \mathbf{T}^{[4]}_1 \mathbf{\bar{V}}^{[2]}\\ \mathbf{T}^{[4]}_2 \mathbf{\bar{V}}^{[2]}\\ \end{array} \right] )\subset \text{span}(\left[ \begin{array}{cc} \mathbf{\bar{V}}^{[1]}& \mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[1]} \end{array} \right]) \end{equation*} The above condition can be satisfied if \begin{eqnarray}\label{cond4} \left\{\begin{array}{ccc} \mathbf{T}^{[4]}_1 \mathbf{\bar{V}}^{[2]} & \prec & \mathbf{\bar{V}}^{[1]} \\ \mathbf{T}^{[4]}_2 \mathbf{\bar{V}}^{[2]} & \prec & \mathbf{\bar{V}}^{[1]} \\ \end{array}\right . 
\end{eqnarray} Therefore, we need to design $\mathbf{\bar{V}}^{[1]}$ and $\mathbf{\bar{V}}^{[2]}$ to satisfy conditions \eqref{cond1}, \eqref{cond2}, \eqref{cond3}, \eqref{cond4}. Let $\mathbf{w}$ be the $3(n+1)^8 \times 1$ column vector $\mathbf{w} = [1 \ 1 \ \ldots \ 1]^T$. We need to choose $2(n+1)^8$ column vectors for $\mathbf{\bar{V}}^{[1]}$ and $2n^8$ column vectors for $\mathbf{\bar{V}}^{[2]}$. The sets of column vectors of $\mathbf{\bar{V}}^{[1]}$ and $\mathbf{\bar{V}}^{[2]}$ are chosen to be equal to the sets $\bar{V}^{[1]}$ and $\bar{V}^{[2]}$ where \begin{equation*} \begin{aligned} \bar{V}^{[1]} = &\{ \big(\prod_{i=1,2,\ j=1,\ldots,4} (\mathbf{T}_i^{[j]})^{\alpha_i^{[j]}}\big)\mathbf{w}: \alpha_i^{[j]} \in \{1, \ldots, n+1\} \} \\ \cup\ &\{\big(\prod_{i=1,2,\ j=1,\ldots,4} (\mathbf{T}_i^{[j]})^{\beta_i^{[j]}}\big)\mathbf{w}: \beta_i^{[j]} \in \{n+2, \ldots, 2n+2\} \} \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \bar{V}^{[2]} = &\{ \big(\prod_{i=1,2,\ j=1,\ldots,4} (\mathbf{T}_i^{[j]})^{\alpha_i^{[j]}}\big)\mathbf{w}: \alpha_i^{[j]} \in \{1, \ldots, n\} \} \\ \cup\ &\{\big(\prod_{i=1,2,\ j=1,\ldots,4} (\mathbf{T}_i^{[j]})^{\beta_i^{[j]}}\big)\mathbf{w}: \beta_i^{[j]} \in \{n+2, \ldots, 2n+1\} \} \end{aligned} \end{equation*} For example, when $n=1$, the set $\bar{V}^{[2]}$ consists of two elements, i.e., \\$\bar{V}^{[2]}= \{(\prod_{i=1,2,\ j=1,\ldots,4} \mathbf{T}_i^{[j]})\mathbf{w},\ (\prod_{i=1,2,\ j=1,\ldots,4} (\mathbf{T}_i^{[j]})^3)\mathbf{w}\}$. The set $\bar{V}^{[1]}$ consists of $2(1+1)^8= 2^9$ column vectors of the form $(\prod_{i=1,2,\ j=1,\ldots,4} (\mathbf{T}_i^{[j]})^{\alpha_i^{[j]}})\mathbf{w}$ or $(\prod_{i=1,2,\ j=1,\ldots,4} (\mathbf{T}_i^{[j]})^{\beta_i^{[j]}})\mathbf{w}$, where $\alpha_i^{[j]}$ takes values $1,2$ and $\beta_i^{[j]}$ takes values $3, 4$. Note that the above construction requires the commutative property of multiplication of the matrices $\mathbf{T}^{[j]}_i$.
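Given the commutativity just noted, each column of $\mathbf{\bar{V}}^{[1]}$ and $\mathbf{\bar{V}}^{[2]}$ is identified by its tuple of eight exponents $(\alpha_i^{[j]})$ or $(\beta_i^{[j]})$, and multiplying a column by some $\mathbf{T}^{[j]}_i$ just increments one exponent. The conditions \eqref{cond1}-\eqref{cond4} then reduce to a containment of exponent sets, which the following Python sketch (a verification aid, not part of the proof) checks by enumeration for small $n$:

```python
from itertools import product

def v_set(lo_a, hi_a, lo_b, hi_b):
    """Exponent tuples, one exponent per matrix T_i^[j] (8 matrices total)."""
    return (set(product(range(lo_a, hi_a + 1), repeat=8))
            | set(product(range(lo_b, hi_b + 1), repeat=8)))

def check(n):
    V1 = v_set(1, n + 1, n + 2, 2 * n + 2)  # exponents of columns of V^[1]
    V2 = v_set(1, n, n + 2, 2 * n + 1)      # exponents of columns of V^[2]
    # Multiplying a column of V^[2] by T_i^[j] increments exponent slot k;
    # every such product must be a column of V^[1].
    for k in range(8):
        for e in V2:
            bumped = e[:k] + (e[k] + 1,) + e[k + 1:]
            if bumped not in V1:
                return False
    return True

print(check(1), check(2))  # True True
```

The cardinalities also come out as in the text: for $n=1$, $|\bar{V}^{[2]}|=2$ and $|\bar{V}^{[1]}|=2^9$.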
Therefore, it requires the $\mathbf{T}^{[j]}_i$ to be diagonal matrices. We show that this is true in Appendix \ref{apdx:simo}. In order for each user to decode its desired message by zero forcing the interference, it is required that the desired signal vectors are linearly independent of the interference vectors. We also show this is true in Appendix \ref{apdx:simo}. {\it Remark:} Note that for the $K$ user Gaussian interference channel with single antenna nodes \cite{Cadambe_Jafar_int} and the $M \times N$ user $X$ channel \cite{Cadambe_Jafar_X}, we need to construct two precoding matrices $\mathbf{V}$ and $\mathbf{V}'$ to satisfy several conditions of the form $\mathbf{V} \prec \mathbf{T}_i \mathbf{V}'$. Here, we use the same precoding matrix $\mathbf{\bar{V}}^{[1]}$ for Transmitters 1, 2, 3, so that we need to design two precoding matrices $\mathbf{\bar{V}}^{[1]}$ and $\mathbf{\bar{V}}^{[2]}$ to satisfy the similar conditions $\mathbf{T}^{[j]}_i \mathbf{\bar{V}}^{[2]} \prec \mathbf{\bar{V}}^{[1]}$. Therefore, we use the same method as in \cite{Cadambe_Jafar_int} and \cite{Cadambe_Jafar_X} to design $\mathbf{\bar{V}}^{[1]}$ and $\mathbf{\bar{V}}^{[2]}$ here. We present the general result for the achievable degrees of freedom of the SIMO Gaussian interference channel in the following theorem. \begin{theorem}\label{thm:simo} For the $K>R+1$ user SIMO Gaussian interference channel with a single antenna at each transmitter and $R$ antennas at each receiver, a total of $\frac{R}{R+1}K$ degrees of freedom per orthogonal time dimension can be achieved. \end{theorem} \begin{proof} We provide the proof in Appendix \ref{apdx:simo}.
\end{proof} Next, we present the innerbound on the degrees of freedom for the $K$ user MIMO Gaussian interference channel in the following theorem: \begin{theorem}\label{thm:innerbound} For the time-varying $K$ user MIMO Gaussian interference channel with channel coefficients drawn from a continuous distribution and $M$ antennas at each transmitter and $N$ antennas at each receiver, $K\min(M,N)$ degrees of freedom can be achieved if $K \leq R$ and $\frac{R}{R+1}\min(M,N)K$ degrees of freedom can be achieved if $K>R$ where $R=\lfloor\frac{\max(M,N)}{\min{(M,N)}}\rfloor$, i.e. \begin{equation*} \eta=d_1+\cdots+d_K \geq \min{(M,N)}K~1(K \leq R)+\frac{R}{R+1}\min(M,N)K~1(K>R) \end{equation*} where 1(.) is the indicator function and $d_i$ represents the individual degrees of freedom achieved by user $i$. \end{theorem} \begin{proof} When $K\leq R$, the achievable scheme is based on beamforming and zero forcing. There is a reciprocity of such a scheme, discussed in \cite{Cadambe_Jafar_X}: the degrees of freedom are unaffected if the roles of all transmitters and receivers are switched. For example, the degrees of freedom of the $2$ user MISO interference channel with 2 transmit antennas and a single receive antenna is the same as that of the 2 user SIMO interference channel with a single transmit antenna and 2 receive antennas. When $K>R$, the achievable scheme is based on interference alignment. There is a reciprocity of alignment which shows that if interference alignment is feasible on the original channel, then it is also feasible on the reciprocal channel \cite{Gomadam_Cadambe_Jafar_dist}. Therefore, without loss of generality, we assume that the number of transmit antennas is less than or equal to the number of receive antennas, i.e. $M \leq N$. As a result, we need to show that $KM$ degrees of freedom can be achieved if $K \leq R$ and $\frac{R}{R+1}MK$ degrees of freedom can be achieved if $K>R$ where $R=\lfloor\frac{N}{M}\rfloor$.
The case when $R=1$ is solved in \cite{Cadambe_Jafar_int}. Therefore, we only consider the cases when $R>1$ here.\\ 1) $K \leq R$: Each transmitter sends $M$ independent data streams along beamforming vectors. Each receiver gets $M$ interference free streams by zero forcing the interference from unintended transmitters. As a result, each user can achieve $M$ degrees of freedom for a total of $KM$ degrees of freedom.\\ 2) $K>R$: When $K=R+1$, by discarding one user, we have an $R$ user interference channel. $RM$ degrees of freedom can be achieved on this channel using the achievable scheme described above. When $K>R+1$, first we obtain receive nodes with $RM$ antennas by discarding $N-RM$ antennas at each receiver. Then, we view each user with $M$ antennas at the transmitter and $RM$ antennas at the receiver as $M$ different users, each of which has a single transmit antenna and $R$ receive antennas. Then, instead of a $K$ user MIMO interference channel, we obtain a $KM$ user SIMO interference channel with $R$ antennas at each receiver. By the result of Theorem \ref{thm:simo}, $\frac{R}{R+1}KM$ degrees of freedom can be achieved on this interference channel. Thus, we can also achieve $\frac{R}{R+1}KM$ degrees of freedom on the $K$ user MIMO interference channel with time-varying channel coefficients. \end{proof} Finally, we show that the innerbound and outerbound are tight when the ratio $\frac{\max(M,N)}{\min(M,N)}$ is equal to an integer. We present the result in the following corollary. \begin{corollary}\label{thm:mimo} For the time-varying $K$ user MIMO Gaussian interference channel with $M$ transmit antennas and $N$ receive antennas, the total number of degrees of freedom is equal to $K\min(M,N)$ if $K \leq R$ and $\frac{R}{R+1}\min(M,N)K$ if $K>R$ when $R=\frac{\max(M,N)}{\min(M,N)}$ is equal to an integer, i.e.
\begin{equation*} \eta=d_1+\cdots+d_K = \min{(M,N)}K~1(K \leq R)+\frac{R}{R+1}\min(M,N)K~1(K>R) \end{equation*} \end{corollary} \begin{proof} The proof is obtained by directly verifying that the innerbound and outerbound match when the ratio $R=\frac{\max(M,N)}{\min(M,N)}$ is equal to an integer. When $K \leq R$, the innerbound and outerbound always match and equal $\min{(M,N)}K$. When $K > R$, the innerbound and outerbound match when $\frac{R}{R+1}\min(M,N)K = \frac{\max(M,N)}{R+1}K$, which implies that $R\min(M,N)=\max(M,N)$. In other words, when either the number of transmit antennas is an integer multiple of the number of receive antennas or vice versa, the total number of degrees of freedom is equal to $\frac{R}{R+1}\min(M,N)K$. \end{proof} {\it Remark}: For the $K$ user MIMO Gaussian interference channel with $M,N$ antennas at the transmitter and the receiver respectively, if $K \leq R$ where $R=\lfloor\frac{\max(M,N)}{\min{(M,N)}}\rfloor$ then the total number of degrees of freedom is $\min{(M,N)}K$. This result can be extended to the same channel with constant channel coefficients. {\it Remark}: If $\min(M,N)=1$, then Corollary \ref{thm:mimo} shows that the total number of degrees of freedom of the $K$ user SIMO Gaussian interference channel with $R$ receive antennas or the $K$ user MISO Gaussian interference channel with $R$ transmit antennas is equal to $K~1(K \leq R)+\frac{R}{R+1}K~1(K>R)$. \section{Achievable Degrees of Freedom for the MIMO interference channel with constant channel coefficients} Note that the converse results and the results of the achievable degrees of freedom based on merely zero forcing in previous sections are also applicable to the same channel with constant channel coefficients. The results of the achievable degrees of freedom based on interference alignment are obtained under the assumption that the channel coefficients are time-varying.
It is not known whether these results can be extended to the same channel with constant channel coefficients, because the construction of the precoding matrices $\mathbf{\bar{V}}^{[1]}$ and $\mathbf{\bar{V}}^{[2]}$ relies on the commutativity of products of the diagonal matrices $\mathbf{T}^{[j]}_i$. In the MIMO scenarios, those matrices are not diagonal, so commutativity cannot be exploited. In fact, the degrees of freedom for the interference channel with constant channel coefficients remains an open problem for more than 2 users. One known scenario is the 3 user MIMO Gaussian interference channel with $M$ antennas at each node. In \cite{Cadambe_Jafar_int}, it is shown that the total number of degrees of freedom is $\frac{3}{2}M$. The achievable scheme is based on interference alignment on signal vectors. In \cite{Cadambe_Jafar_Shamai}, the first known example of a $K$ user Gaussian interference channel with single antenna nodes and constant channel coefficients is provided that achieves the outerbound on the degrees of freedom. The achievable scheme is based on interference alignment on signal levels rather than signal vectors. In this section, we provide examples where interference alignment combined with zero forcing can achieve more degrees of freedom than zero forcing alone for some MIMO Gaussian interference channels with constant channel coefficients. More general results are provided in Appendix \ref{apdx:mimo}. {\it Example 1}: Consider the 4 user MIMO Gaussian interference channel with 4 antennas at each transmitter and 8 antennas at each receiver. Note that for the 3 user MIMO interference channel with the same antenna deployment, the total number of degrees of freedom is 8. Also, for the 4 user case, only 8 degrees of freedom can be achieved by zero forcing alone. However, we will show that using interference alignment combined with zero forcing, 9 degrees of freedom can be achieved on this interference channel without channel extension.
In other words, the 4 user MIMO interference channel with 4, 8 antennas at each transmitter and receiver respectively can achieve more degrees of freedom than the 3 user interference channel with the same antenna deployment. Moreover, interference alignment combined with zero forcing achieves more degrees of freedom on this 4 user interference channel than zero forcing alone. Next, we show that users $1,2,3$ can each achieve $d_i=2, \forall i=1,2,3$ degrees of freedom and user 4 can achieve $d_4=3$ degrees of freedom, resulting in a total of 9 degrees of freedom achieved on this channel. Transmitter $i$ sends message $W_i$ to Receiver $i$ using $d_i$ independently encoded streams along vectors $\mathbf{v}^{[i]}_m$, i.e., \begin{eqnarray*} \mathbf{X}^{[i]}&=&\sum_{m=1}^{2}x^{[i]}_m\mathbf{v}_m^{[i]}=\mathbf{V}^{[i]}\mathbf{X}^{i},~i=1,2,3\\ \mathbf{X}^{[4]}&=&\sum_{m=1}^{3}x^{[4]}_m\mathbf{v}_m^{[4]}=\mathbf{V}^{[4]}\mathbf{X}^{4} \end{eqnarray*} where $\mathbf{V}^{[i]}=[\mathbf{v}^{[i]}_1~\mathbf{v}^{[i]}_2], i=1,2,3$ and $\mathbf{V}^{[4]}=[\mathbf{v}^{[4]}_1~\mathbf{v}^{[4]}_2~\mathbf{v}^{[4]}_3]$. The signal at Receiver $j$ can be written as \begin{equation*} \mathbf{Y}^{[j]}=\sum_{i=1}^4\mathbf{H}^{[ji]}\mathbf{V}^{[i]}\mathbf{X}^{i}+\mathbf{Z}^{[j]}. \end{equation*} In order for each receiver to decode its message by zero forcing the interference signals, the dimension of the space spanned by the interference signal vectors has to be less than or equal to $8-d_i$. Since there are $9-d_i$ interference vectors at Receiver $i$, we need to align $(9-d_i)-(8-d_i)=1$ interference signal vector at each receiver. This can be achieved if one interference vector lies in the space spanned by the other interference vectors at each receiver.
Mathematically, we choose the following alignments \begin{eqnarray} \text{span}(\mathbf{H}^{[14]}\mathbf{v}^{[4]}_1) \subset \text{span}(\left[ \mathbf{H}^{[12]}\mathbf{V}^{[2]}~ \mathbf{H}^{[13]}\mathbf{V}^{[3]} \right]) &\Rightarrow& \text{span}(\underbrace{[\mathbf{H}^{[12]}~\mathbf{H}^{[13]}]^{-1}\mathbf{H}^{[14]}}_{\mathbf{T}^{[1]}} \mathbf{v}^{[4]}_1) \subset \text{span}(\left[ \begin{array}{cc} \mathbf{V}^{[2]}& \mathbf{0}\\ \mathbf{0}&\mathbf{V}^{[3]} \end{array} \right]) \notag\\ &\Rightarrow& \text{span}( \left[ \begin{array}{c} \mathbf{T}^{[1]}_1 \mathbf{v}^{[4]}_1\\ \mathbf{T}^{[1]}_2 \mathbf{v}^{[4]}_1\\ \end{array} \right] )\subset \text{span}(\left[ \begin{array}{cc} \mathbf{V}^{[2]}& \mathbf{0}\\ \mathbf{0}&\mathbf{V}^{[3]} \end{array} \right])\label{iacon1} \end{eqnarray} \begin{eqnarray} \text{span}(\mathbf{H}^{[24]}\mathbf{v}^{[4]}_1) \subset \text{span}(\left[ \mathbf{H}^{[21]}\mathbf{V}^{[1]}~ \mathbf{H}^{[23]}\mathbf{V}^{[3]} \right]) &\Rightarrow& \text{span}(\underbrace{[\mathbf{H}^{[21]}~\mathbf{H}^{[23]}]^{-1}\mathbf{H}^{[24]}}_{\mathbf{T}^{[2]}}\mathbf{v}^{[4]}_1) \subset \text{span}(\left[ \begin{array}{cc} \mathbf{V}^{[1]}& \mathbf{0}\\ \mathbf{0}&\mathbf{V}^{[3]} \end{array} \right])\notag\\ &\Rightarrow& \text{span}( \left[ \begin{array}{c} \mathbf{T}^{[2]}_1 \mathbf{v}^{[4]}_1\\ \mathbf{T}^{[2]}_2 \mathbf{v}^{[4]}_1\\ \end{array} \right] )\subset \text{span}(\left[ \begin{array}{cc} \mathbf{V}^{[1]}& \mathbf{0}\\ \mathbf{0}&\mathbf{V}^{[3]} \end{array} \right])\label{iacon2} \end{eqnarray} \begin{eqnarray} \text{span}(\mathbf{H}^{[32]}\mathbf{v}^{[2]}_1) \subset \text{span}(\left[\mathbf{H}^{[31]}\mathbf{V}^{[1]}~ \mathbf{H}^{[34]}\mathbf{V}^{[4]} \right]) &\Rightarrow& \text{span}(\underbrace{[\mathbf{H}^{[31]}~\mathbf{H}^{[34]}]^{-1}\mathbf{H}^{[32]}}_{\mathbf{T}^{[3]}}\mathbf{v}^{[2]}_1) \subset \text{span}(\left[ \begin{array}{cc} \mathbf{V}^{[1]}& \mathbf{0}\\ \mathbf{0}&\mathbf{V}^{[4]} \end{array} \right])\notag\\
&\Rightarrow& \text{span}( \left[ \begin{array}{c} \mathbf{T}^{[3]}_1 \mathbf{v}^{[2]}_1\\ \mathbf{T}^{[3]}_2 \mathbf{v}^{[2]}_1\\ \end{array} \right] )\subset \text{span}(\left[ \begin{array}{cc} \mathbf{V}^{[1]}& \mathbf{0}\\ \mathbf{0}&\mathbf{V}^{[4]} \end{array} \right]) \label{iacon3}\\ \text{span}(\mathbf{H}^{[41]}\mathbf{v}^{[1]}_1) \subset \text{span}(\left[ \mathbf{H}^{[42]}\mathbf{V}^{[2]} ~\mathbf{H}^{[43]}\mathbf{V}^{[3]} \right]) &\Rightarrow& \text{span}(\underbrace{[\mathbf{H}^{[42]}~\mathbf{H}^{[43]}]^{-1}\mathbf{H}^{[41]}}_{\mathbf{T}^{[4]}}\mathbf{v}^{[1]}_1) \subset \text{span}(\left[ \begin{array}{cc} \mathbf{V}^{[2]}& \mathbf{0}\\ \mathbf{0}&\mathbf{V}^{[3]} \end{array} \right]) \notag\\ &\Rightarrow& \text{span}( \left[ \begin{array}{c} \mathbf{T}^{[4]}_1 \mathbf{v}^{[1]}_1\\ \mathbf{T}^{[4]}_2 \mathbf{v}^{[1]}_1\\ \end{array} \right] )\subset \text{span}(\left[ \begin{array}{cc} \mathbf{V}^{[2]}& \mathbf{0}\\ \mathbf{0}&\mathbf{V}^{[3]} \end{array} \right])\label{iacon4} \end{eqnarray} where $\mathbf{T}^{[i]}$ is an $8 \times 4$ matrix which can be written in a block matrix form: \begin{equation} \mathbf{T}^{[i]}=\left[\begin{array}{c}\mathbf{T}^{[i]}_{1}\\ \mathbf{T}^{[i]}_{2}\end{array}\right]~~i=1,2,3,4 \end{equation} where $\mathbf{T}^{[i]}_{1}$ and $\mathbf{T}^{[i]}_{2}$ are $4 \times 4$ matrices. 
To satisfy the conditions \eqref{iacon1}, \eqref{iacon2}, \eqref{iacon3}, \eqref{iacon4}, we let \begin{eqnarray*} \mathbf{T}^{[1]}_1 \mathbf{v}^{[4]}_1 = \mathbf{v}^{[2]}_1 &~& \text{span}(\mathbf{T}^{[1]}_2 \mathbf{v}^{[4]}_1) =\text{span} (\mathbf{v}^{[3]}_1)\label{mimo_eigen2}\\ \mathbf{T}^{[2]}_1 \mathbf{v}^{[4]}_1 = \mathbf{v}^{[1]}_1 &~& \text{span}(\mathbf{T}^{[2]}_2 \mathbf{v}^{[4]}_1) = \text{span} (\mathbf{v}^{[3]}_1)\label{mimo_eigen3}\\ \mathbf{T}^{[3]}_1 \mathbf{v}^{[2]}_1 =\mathbf{v}^{[1]}_2 &~& \mathbf{T}^{[3]}_2 \mathbf{v}^{[2]}_1 = \mathbf{v}^{[4]}_2\\ \mathbf{T}^{[4]}_1 \mathbf{v}^{[1]}_1 = \mathbf{v}^{[2]}_2 &~& \mathbf{T}^{[4]}_2 \mathbf{v}^{[1]}_1 = \mathbf{v}^{[3]}_2 \end{eqnarray*} Notice that once $\mathbf{v}^{[4]}_1$ is chosen, all other vectors can be solved from the above equations. To solve for $\mathbf{v}^{[4]}_1$, we have \begin{eqnarray*} \text{span}(\mathbf{T}^{[1]}_2 \mathbf{v}^{[4]}_1)&=& \text{span}(\mathbf{T}^{[2]}_2 \mathbf{v}^{[4]}_1)\\ \Rightarrow \text{span}((\mathbf{T}^{[2]}_{2})^{-1}\mathbf{T}^{[1]}_{2}\mathbf{v}^{[4]}_{1})&=& \text{span} (\mathbf{v}^{[4]}_{1})\\ \Rightarrow \mathbf{v}^{[4]}_{1}&=&\mathbf{e}, \end{eqnarray*} where $\mathbf{e}$ is an eigenvector of the matrix $(\mathbf{T}^{[2]}_2)^{-1}\mathbf{T}^{[1]}_2$. Note that the above construction only specifies $\mathbf{V}^{[i]}, \forall i=1,2,3$ and $\mathbf{v}^{[4]}_1, \mathbf{v}^{[4]}_2$. The remaining $\mathbf{v}^{[4]}_3$ can be picked randomly according to a continuous distribution so that all columns of $\mathbf{V}^{[i]}$ are linearly independent. Through interference alignment, we ensure that the interference vectors span a small enough signal space. We also need to verify that the desired signal vectors, i.e., $\mathbf{H}^{[ii]}\mathbf{V}^{[i]}$, are linearly independent of the interference vectors so that each receiver can decode its message using zero forcing.
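The eigenvector step above can be reproduced numerically. The sketch below is only an illustration, not the paper's implementation: the $4 \times 4$ blocks $\mathbf{T}^{[i]}_1, \mathbf{T}^{[i]}_2$ are drawn as random complex matrices standing in for the actual channel-derived blocks, and all variable names are hypothetical. It picks $\mathbf{v}^{[4]}_1$ as an eigenvector of $(\mathbf{T}^{[2]}_2)^{-1}\mathbf{T}^{[1]}_2$ and verifies that $\mathbf{T}^{[1]}_2\mathbf{v}^{[4]}_1$ and $\mathbf{T}^{[2]}_2\mathbf{v}^{[4]}_1$ are parallel, so both can be aligned with a single vector $\mathbf{v}^{[3]}_1$.

```python
import numpy as np

rng = np.random.default_rng(0)

def cg(n):
    # generic complex Gaussian n x n block (stand-in for a channel-derived T block)
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Hypothetical generic blocks T^{[i]}_1, T^{[i]}_2, i = 1..4
T = {(i, b): cg(4) for i in (1, 2, 3, 4) for b in (1, 2)}

# v^{[4]}_1 is an eigenvector of (T^{[2]}_2)^{-1} T^{[1]}_2
lam, E = np.linalg.eig(np.linalg.solve(T[(2, 2)], T[(1, 2)]))
v4_1 = E[:, 0]

# The remaining vectors follow from the alignment conditions
v2_1 = T[(1, 1)] @ v4_1
v1_1 = T[(2, 1)] @ v4_1
v3_1 = T[(1, 2)] @ v4_1
v1_2 = T[(3, 1)] @ v2_1
v4_2 = T[(3, 2)] @ v2_1
v2_2 = T[(4, 1)] @ v1_1
v3_2 = T[(4, 2)] @ v1_1

def parallel(x, y):
    # True iff x and y span the same one-dimensional subspace (numerically)
    s = np.linalg.svd(np.column_stack([x, y]), compute_uv=False)
    return s[-1] / s[0] < 1e-9

# T^{[1]}_2 v^{[4]}_1 and T^{[2]}_2 v^{[4]}_1 are parallel by the eigenvector choice
assert parallel(T[(1, 2)] @ v4_1, T[(2, 2)] @ v4_1)
```

With the alignment equations satisfied, only $\mathbf{v}^{[4]}_3$ remains free, matching the count in the construction above.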
Notice that the direct channel matrices $\mathbf{H}^{[ii]}, i=1,2,3,4$ do not appear in the interference alignment equations, so $\mathbf{V}^{[i]}$ undergoes an independent linear transformation when multiplied by $\mathbf{H}^{[ii]}$. Therefore, at each receiver the desired signal vectors are linearly independent of the interference signal vectors with probability one. As a result, user $i$ can achieve $d_i$ degrees of freedom and a total of 9 degrees of freedom can be achieved. {\it Example 2}: Consider the 4 user MIMO Gaussian interference channel with 2 antennas at each transmitter and 4 antennas at each receiver. We show that 9 degrees of freedom can be achieved on the 2-symbol extension of the original channel, and hence $4\frac{1}{2}$ degrees of freedom per channel use can be achieved. Since only 4 degrees of freedom can be achieved using zero forcing alone, an additional $\frac{1}{2}$ degree of freedom is achieved using the interference alignment scheme. Note that although on the 2-symbol extension channel we equivalently have a 4 user interference channel with $4 \times 8$ channels, we cannot use the same achievable scheme as in Example 1 due to the block diagonal structure of the extension channel matrix. Consider the 2-symbol extension of the channel.
The channel input-output relationship is \begin{equation*} \mathbf{\bar{Y}}^{[j]}= \sum_{i=1}^{4}\mathbf{\bar{H}}^{[ji]}\mathbf{\bar{X}}^{[i]}+\mathbf{\bar{Z}}^{[j]}~~\forall j=1,2,3,4 \end{equation*} where the overbar notation represents the 2-symbol extensions so that \begin{equation*} \mathbf{\bar{X}}\triangleq\left[\begin{array}{c}\mathbf{X}(2t)\\ \mathbf{X}(2t+1)\end{array}\right]\quad \mathbf{\bar{Z}}\triangleq\left[\begin{array}{c}\mathbf{Z}(2t)\\ \mathbf{Z}(2t+1)\end{array}\right] \end{equation*} where $\mathbf{X}$ and $\mathbf{Z}$ are $2 \times 1$ and $4 \times 1$ vectors respectively, and \begin{equation*} \mathbf{\bar{H}} \triangleq \left[\begin{array}{cc}\mathbf{H} & \mathbf{0}\\ \mathbf{0} & \mathbf{H} \end{array}\right] \end{equation*} where $\mathbf{H}$ is the $4 \times 2$ channel matrix. We assign $d_1=d_2=d_3=2$ and $d_4=3$ degrees of freedom to messages $W_1,W_2,W_3,W_4$ respectively for a total of 9 degrees of freedom over the 2-symbol extension channel. Transmitter $i$ sends message $W_i$ in the form of $d_i$ independently encoded streams along the direction vectors $\mathbf{\bar{v}}^{[i]}_1,\ldots,\mathbf{\bar{v}}^{[i]}_{d_i}$, each of dimension $4 \times 1$, so that we have: \begin{equation*} \mathbf{\bar{X}}^{[i]}=\sum_{m=1}^{d_i}\mathbf{\bar{v}}^{[i]}_{m}x^{[i]}_m=\mathbf{\bar{V}}^{[i]}\mathbf{X}^{[i]}\quad i=1,2,3,4 \end{equation*} where $\mathbf{\bar{V}}^{[i]}$ and $\mathbf{X}^{[i]}$ are $4 \times d_i$ and $d_i \times 1$ matrices respectively. In order to obtain $d_i$ interference-free dimensions at Receiver $i$, we need to align 1 interference vector at each receiver. This can be achieved if one interference vector lies in the space spanned by the other interference vectors at each receiver.
Mathematically, we choose the following alignments: \begin{eqnarray} \text{span}(\mathbf{\bar{H}}^{[12]}\mathbf{\bar{v}}^{[2]}_1) \subset \text{span}(\left[ \mathbf{\bar{H}}^{[13]}\mathbf{\bar{V}}^{[3]}~ \mathbf{\bar{H}}^{[14]}\mathbf{\bar{V}}^{[4]} \right])\Rightarrow \text{span}(\underbrace{[\mathbf{\bar{H}}^{[13]}~\mathbf{\bar{H}}^{[14]}]^{-1}\mathbf{\bar{H}}^{[12]}}_{\mathbf{T}^{[1]}} \mathbf{\bar{v}}^{[2]}_1) \subset \text{span}(\left[ \begin{array}{cc} \mathbf{\bar{V}}^{[3]}& \mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[4]} \end{array} \right])\\ \text{span}(\mathbf{\bar{H}}^{[23]}\mathbf{\bar{v}}^{[3]}_1) \subset\text{span}(\left[ \mathbf{\bar{H}}^{[21]}\mathbf{\bar{V}}^{[1]}~ \mathbf{\bar{H}}^{[24]}\mathbf{\bar{V}}^{[4]} \right]) \Rightarrow \text{span}(\underbrace{[\mathbf{\bar{H}}^{[21]}~\mathbf{\bar{H}}^{[24]}]^{-1}\mathbf{\bar{H}}^{[23]}}_{\mathbf{T}^{[2]}}\mathbf{\bar{v}}^{[3]}_1) \subset \text{span}(\left[ \begin{array}{cc} \mathbf{\bar{V}}^{[1]}& \mathbf{0}\\ \mathbf{0}& \mathbf{\bar{V}}^{[4]} \end{array} \right])\\ \text{span}(\mathbf{\bar{H}}^{[34]}\mathbf{\bar{v}}^{[4]}_1) \subset \text{span}(\left[\mathbf{\bar{H}}^{[31]}\mathbf{\bar{V}}^{[1]}~ \mathbf{\bar{H}}^{[32]} \mathbf{\bar{V}}^{[2]} \right])\Rightarrow \text{span}(\underbrace{[\mathbf{\bar{H}}^{[31]}~\mathbf{\bar{H}}^{[32]}]^{-1}\mathbf{\bar{H}}^{[34]}}_{\mathbf{T}^{[3]}}\mathbf{\bar{v}}^{[4]}_1) \subset \text{span}(\left[ \begin{array}{cc} \mathbf{\bar{V}}^{[1]}& \mathbf{0}\\ \mathbf{0}& \mathbf{\bar{V}}^{[2]} \end{array} \right])\\ \text{span}(\mathbf{\bar{H}}^{[41]}\mathbf{\bar{v}}^{[1]}_1) \subset \text{span}(\left[\mathbf{\bar{H}}^{[42]} \mathbf{\bar{V}}^{[2]} ~\mathbf{\bar{H}}^{[43]}\mathbf{\bar{V}}^{[3]} \right]) \Rightarrow \text{span}(\underbrace{[\mathbf{\bar{H}}^{[42]}~\mathbf{\bar{H}}^{[43]}]^{-1}\mathbf{\bar{H}}^{[41]}}_{\mathbf{T}^{[4]}}\mathbf{\bar{v}}^{[1]}_1) \subset \text{span}(\left[ \begin{array}{cc} \mathbf{\bar{V}}^{[2]}& \mathbf{0}\\ \mathbf{0}& \mathbf{\bar{V}}^{[3]}
\end{array} \right]) \end{eqnarray} where $\mathbf{T}^{[i]}$ is an $8 \times 4$ matrix which can be written in a block matrix form: \begin{equation} \mathbf{T}^{[i]}=\left[\begin{array}{c}\mathbf{T}^{[i]}_{1}\\ \mathbf{T}^{[i]}_{2}\end{array}\right]~~i=1,2,3,4 \end{equation} The above equations can be satisfied if \begin{eqnarray} \mathbf{T}^{[1]}\mathbf{\bar{v}}^{[2]}_1 = \left[\begin{array}{c}\mathbf{\bar{v}}^{[3]}_1\\\mathbf{\bar{v}}^{[4]}_1 \end{array}\right]~ \mathbf{T}^{[2]}\mathbf{\bar{v}}^{[3]}_1 = \left[\begin{array}{c}\mathbf{\bar{v}}^{[1]}_1\\\mathbf{\bar{v}}^{[4]}_2 \end{array}\right]~ \mathbf{T}^{[3]}\mathbf{\bar{v}}^{[4]}_1 = \left[\begin{array}{c}\mathbf{\bar{v}}^{[1]}_2\\\mathbf{\bar{v}}^{[2]}_2 \end{array}\right]~ \mathbf{T}^{[4]}\mathbf{\bar{v}}^{[1]}_1 = \left[\begin{array}{c}\mathbf{\bar{v}}^{[2]}_3\\\mathbf{\bar{v}}^{[3]}_2 \end{array}\right] \end{eqnarray} Notice that once we pick $\mathbf{\bar{v}}^{[2]}_1$, all other vectors can be solved from the above equations. $\mathbf{\bar{v}}^{[2]}_1$ can be chosen randomly according to a continuous distribution so that all vectors are linearly independent with probability one. Also, since all the vectors are chosen independently of the direct channel matrices $\mathbf{\bar{H}}^{[ii]}$ and all entries of $\mathbf{\bar{V}}^{[i]}$ are nonzero almost surely, the desired signal vectors are linearly independent of the interference vectors at each receiver. As a result, Receiver $i$ can decode its message by zero forcing the interference to achieve $d_i$ degrees of freedom for a total of 9 degrees of freedom over the 2-symbol extension channel. Therefore, $4\frac{1}{2}$ degrees of freedom per channel use can be achieved on the original channel. \section{Conclusion} We investigate the degrees of freedom for the $K$ user MIMO Gaussian interference channel with $M,N$ antennas at each transmitter and receiver, respectively.
This work is motivated by the potential benefits of the interference alignment scheme, which was recently shown to achieve the capacity of certain wireless networks within $o(\log(SNR))$. In this work, the interference alignment scheme is also found to be optimal in achieving the degrees of freedom of the $K$ user $M \times N$ MIMO Gaussian interference channel when the ratio $\frac{\max(M,N)}{\min(M,N)}$ is equal to an integer and the channel coefficients are time-varying and drawn from a continuous distribution. We also explore the achievable degrees of freedom for the MIMO interference channel with constant channel coefficients using interference alignment combined with zero forcing. We provide some examples where interference alignment can achieve more degrees of freedom than zero forcing alone. \appendices \section{Proof of Theorem \ref{thm:simo}}\label{apdx:simo} \begin{proof} Let $\Gamma =KR(K-R-1)$. We develop a coding scheme based on interference alignment to achieve a total of $(R+1)R(n+1)^{\Gamma}+(K-R-1)Rn^{\Gamma}$ degrees of freedom over a $\mu_n=(R+1)(n+1)^\Gamma$ symbol extension of the original channel. Hence, a total of $\frac{(R+1)R(n+1)^{\Gamma}+(K-R-1)Rn^{\Gamma}}{(R+1)(n+1)^\Gamma}$ degrees of freedom per orthogonal dimension can be achieved for any $n \in \mathbb{N}$. Taking the supremum over all $n$ proves that the total number of degrees of freedom is equal to $\frac{RK}{R+1}$, as desired. Specifically, over the extended channel, each user $i=1,2,\cdots,R+1$ achieves $R(n+1)^{\Gamma}$ degrees of freedom and each remaining user $i=R+2,R+3,\cdots,K$ achieves $Rn^{\Gamma}$ degrees of freedom. As a result, user $i=1,2,\cdots,R+1$ achieves $\frac{R(n+1)^{\Gamma}}{(R+1)(n+1)^\Gamma}$ degrees of freedom and user $i=R+2,R+3,\cdots,K$ achieves $\frac{Rn^{\Gamma}}{(R+1)(n+1)^\Gamma}$ degrees of freedom per channel use, i.e.
\begin{equation} d_i = \frac{R(n+1)^{\Gamma}}{(R+1)(n+1)^\Gamma}~~~~ i=1,2,\cdots,R+1 \quad d_i = \frac{Rn^{\Gamma}}{(R+1)(n+1)^\Gamma}~~~~ i=R+2,R+3,\cdots,K \end{equation} This implies that \begin{equation} d_1 + d_2 + \cdots+ d_K \geq \sup_n \frac{(R+1)R(n+1)^{\Gamma}+(K-R-1)Rn^{\Gamma}}{(R+1)(n+1)^\Gamma} = \frac{KR}{R+1} \end{equation} In the extended channel, the signal vector at the $k^{th}$ user's receiver can be expressed as \begin{equation*} \bar{\mathbf{Y}}^{[k]}(t) = \sum_{j=1}^{K}\bar{\mathbf{H}}^{[kj]}(t)\bar{\mathbf{X}}^{[j]}(t)+\bar{\mathbf{Z}}^{[k]}(t) \end{equation*} where $\bar{\mathbf{X}}^{[j]}(t)$ is a $\mu_n \times 1$ column vector representing the $\mu_n$ symbol extension of the transmitted symbol $x^{[j]}(t)$, i.e. \begin{equation*} \bar{\mathbf{X}}^{[j]}(t) \triangleq \left[\begin{array}{c}x^{[j]}(\mu_n(t-1)+1)\\x^{[j]}(\mu_n(t-1)+2)\\\vdots\\x^{[j]}(\mu_nt)\end{array}\right] \end{equation*} Similarly, $\bar{\mathbf{Y}}(t)$ and $\bar{\mathbf{Z}}(t)$ represent $\mu_n$ symbol extensions of the $\mathbf{Y}(t)$ and $\mathbf{Z}(t)$ respectively. $\bar{\mathbf{H}}^{[kj]}(t)$ is a $R\mu_n \times \mu_n$ matrix representing the $\mu_n$ symbol extension of the channel, i.e. \begin{align} \bar{\mathbf{H}}^{[kj]}(t) = \left[ \begin{array}{cccc} \mathbf{h}^{[kj]}(\mu_n(t-1)+1) & \mathbf{0} & \ldots & \mathbf{0}\\ \mathbf{0} & \mathbf{h}^{[kj]}(\mu_n(t-1)+2) & \ldots & \mathbf{0}\\ \vdots & \vdots & \ddots & \vdots\\ \mathbf{0} & \mathbf{0}& \cdots & \mathbf{h}^{[kj]}(\mu_nt) \end{array}\right] \end{align} where $\mathbf{h}^{[kj]}$ is the $R \times 1$ channel vector. 
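As a concrete illustration of this block structure, the sketch below (with small hypothetical sizes, not the actual $R$ and $\mu_n$ of the proof) assembles $\bar{\mathbf{H}}^{[kj]}$ from the per-symbol $R \times 1$ channel vectors and confirms that it has full column rank, as holds almost surely for generic channels:

```python
import numpy as np

rng = np.random.default_rng(1)
R, mu_n = 2, 3  # hypothetical small sizes: R receive antennas, mu_n-symbol extension

# h[s]: the R x 1 channel vector at extended symbol s
h = [rng.standard_normal((R, 1)) for _ in range(mu_n)]

# \bar{H} is (R mu_n) x mu_n, with h[s] as the s-th "diagonal" block
H_bar = np.zeros((R * mu_n, mu_n))
for s in range(mu_n):
    H_bar[s * R:(s + 1) * R, s] = h[s][:, 0]

assert H_bar.shape == (R * mu_n, mu_n)
assert np.linalg.matrix_rank(H_bar) == mu_n  # full column rank almost surely
```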
Message $W_j$ ($j=1,2,\cdots,R+1$) is encoded at Transmitter $j$ into $R(n+1)^{\Gamma}$ independent streams $x^{[j]}_m(t)$, $m=1,2,\ldots,R(n+1)^{\Gamma}$ along the same set of vectors $\mathbf{\bar{v}}^{[1]}_m(t)$ so that $\bar{\mathbf{X}}^{[j]}(t)$ is \begin{equation*} \mathbf{\bar{X}}^{[j]}(t) = \sum_{m=1}^{R(n+1)^{\Gamma}} x^{[j]}_m(t) \mathbf{\bar{v}}_m^{[1]}(t) = \mathbf{\bar{V}}^{[1]}(t) \mathbf{X}^{[j]}(t) \end{equation*} where $\mathbf{X}^{[j]}(t)$ is an $R(n+1)^{\Gamma}\times 1$ column vector and $\bar{\mathbf{V}}^{[1]}(t)$ is an $(R+1)(n+1)^\Gamma \times R(n+1)^{\Gamma}$ matrix. Similarly, $W_j$ ($j=R+2,\cdots,K$) is encoded at Transmitter $j$ into $Rn^{\Gamma}$ independent streams $x^{[j]}_m(t)$, $m=1,2,\ldots,Rn^{\Gamma}$ along the same set of vectors $\mathbf{\bar{v}}^{[2]}_m(t)$ so that \begin{equation*} \mathbf{\bar{X}}^{[j]}(t) = \sum_{m=1}^{Rn^{\Gamma}} x^{[j]}_m(t) \mathbf{\bar{v}}_m^{[2]}(t) = \mathbf{\bar{V}}^{[2]}(t) \mathbf{X}^{[j]}(t) \end{equation*} The received signal at the $k^{th}$ receiver can then be written as \begin{equation*} \mathbf{\bar{Y}}^{[k]}(t) = \sum_{j=1}^{R+1}\mathbf{\bar{H}}^{[kj]}(t) \mathbf{\bar{V}}^{[1]}(t)\mathbf{X}^{[j]}(t) + \sum_{j=R+2}^{K}\mathbf{\bar{H}}^{[kj]}(t)\mathbf{\bar{V}}^{[2]}(t)\mathbf{X}^{[j]}(t) + \bar{\mathbf{Z}}^{[k]}(t) \end{equation*} We wish to design the direction vectors $\bar{\mathbf{V}}^{[1]}$ and $\bar{\mathbf{V}}^{[2]}$ so that signal spaces are aligned at receivers where they constitute interference while they are separable at receivers where they are desired. As a result, each receiver can decode its desired signal by zero forcing the interference signals. First consider Receiver $k$, $\forall k=1,2,\cdots,R+1$. Each of these receivers needs an $R(n+1)^{\Gamma}$-dimensional interference-free subspace of its $R(R+1)(n+1)^{\Gamma}$-dimensional received signal space. Thus, the dimension of the signal space spanned by the interference signal vectors cannot be more than $R^2(n+1)^{\Gamma}$.
Notice that all the interference vectors from Transmitter $1,2,\cdots,k-1,k+1,\cdots,R+1$ span a $R^2(n+1)^{\Gamma}$ dimensional subspace in the $R(R+1)(n+1)^{\Gamma}$ dimensional signal space. Hence, we can align the interference signal vectors from Transmitter $j$, $\forall j=R+2,R+3,\cdots,K$ within this $R^2(n+1)^{\Gamma}$ dimensional subspace. Mathematically, we have \begin{equation*} \text{span}(\mathbf{\bar{H}}^{[kj]}\mathbf{\bar{V}}^{[2]}) \subset \text{span} (\left[\bar{\mathbf{H}}^{[k1]}\mathbf{\bar{V}}^{[1]} ~\bar{\mathbf{H}}^{[k2]}\mathbf{\bar{V}}^{[1]} \cdots \bar{\mathbf{H}}^{[k(k-1)]}\mathbf{\bar{V}}^{[1]}~\bar{\mathbf{H}}^{[k(k+1)]}\mathbf{\bar{V}}^{[1]} \cdots \bar{\mathbf{H}}^{[k(R+1)]}\mathbf{\bar{V}}^{[1]}\right]) \end{equation*} where $\text{span} (\mathbf{A})$ represents the space spanned by the columns of matrix $\mathbf{A}$. The above equation can be expressed equivalently as \begin{equation*} \text{span}(\mathbf{\bar{H}}^{[kj]}\mathbf{\bar{V}}^{[2]}) \subset \text{span} (\left[\bar{\mathbf{H}}^{[k1]}~\bar{\mathbf{H}}^{[k2]} \cdots \bar{\mathbf{H}}^{[k(k-1)]}~\bar{\mathbf{H}}^{[k(k+1)]} \cdots \bar{\mathbf{H}}^{[k(R+1)]}\right] \begin{tiny} \left[\begin{array}{ccccccc}\mathbf{\bar{V}}^{[1]}&\mathbf{0}& \cdots &\mathbf{0}& \mathbf{0}&\cdots&\mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[1]}&\cdots&\mathbf{0}&\mathbf{0}&\cdots&\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots &\vdots &\ddots &\vdots\\ \mathbf{0}&\mathbf{0}& \cdots &\mathbf{\bar{V}}^{[1]}& \cdots & \cdots & \mathbf{0}\\ \mathbf{0}&\mathbf{0}& \cdots & \cdots & \mathbf{\bar{V}}^{[1]}& \cdots &\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots &\vdots &\ddots &\vdots\\ \mathbf{0}& \mathbf{0} & \cdots &\mathbf{0} &\mathbf{0} &\cdots & \mathbf{\bar{V}}^{[1]} \\ \end{array} \right]) \end{tiny} \end{equation*} Notice that $[\bar{\mathbf{H}}^{[k1]}~\bar{\mathbf{H}}^{[k2]} \cdots \bar{\mathbf{H}}^{[k(k-1)]}~\bar{\mathbf{H}}^{[k(k+1)]} \cdots \bar{\mathbf{H}}^{[k(R+1)]}]$ is a $R\mu_n \times R\mu_n$ 
square matrix with full rank almost surely. Thus, the above equation can be expressed equivalently as \begin{eqnarray}\label{eqnreq1} \text{span}(\underbrace{\left[\bar{\mathbf{H}}^{[k1]}~\bar{\mathbf{H}}^{[k2]} \cdots \bar{\mathbf{H}}^{[k(k-1)]}~\bar{\mathbf{H}}^{[k(k+1)]} \cdots \bar{\mathbf{H}}^{[k(R+1)]}\right]^{-1} \mathbf{\bar{H}}^{[kj]}}_{\mathbf{T}^{[kj]}}\mathbf{\bar{V}}^{[2]}) \subset \notag \\ \text{span} (\begin{tiny} \left[\begin{array}{ccccccc}\mathbf{\bar{V}}^{[1]}&\mathbf{0}& \cdots &\mathbf{0}& \mathbf{0}&\cdots&\mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[1]}&\cdots&\mathbf{0}&\mathbf{0}&\cdots&\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots &\vdots &\ddots &\vdots\\ \mathbf{0}&\mathbf{0}& \cdots &\mathbf{\bar{V}}^{[1]}& \cdots & \cdots & \mathbf{0}\\ \mathbf{0}&\mathbf{0}& \cdots & \cdots & \mathbf{\bar{V}}^{[1]}& \cdots &\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots &\vdots &\ddots &\vdots\\ \mathbf{0}& \mathbf{0} & \cdots &\mathbf{0} &\mathbf{0} &\cdots & \mathbf{\bar{V}}^{[1]} \\ \end{array} \right]) \end{tiny} \end{eqnarray} Note that $\mathbf{T}^{[kj]}$ is a $R\mu_n \times \mu_n$ matrix and can be written in a block matrix form: \begin{equation} \mathbf{T}^{[kj]}=\left[ \begin{array}{c}\mathbf{T}^{[kj]}_1\\\mathbf{T}^{[kj]}_2\\\vdots\\ \mathbf{T}^{[kj]}_R\end{array}\right] \label{T_block} \end{equation} where each block $\mathbf{T}^{[kj]}_i$ is a $\mu_n \times \mu_n$ matrix. 
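The reduction of \eqref{eqnreq1} to per-block column containment can be illustrated numerically. In the toy sketch below (hypothetical small sizes, not the $\mu_n$ and $\Gamma$ of the proof), $\bar{\mathbf{V}}^{[1]}$ is built so that each $\mathbf{T}^{[kj]}_i\bar{\mathbf{V}}^{[2]}$ consists of columns of $\bar{\mathbf{V}}^{[1]}$; stacking the blocks then lands inside the column span of $\mathrm{diag}(\bar{\mathbf{V}}^{[1]},\ldots,\bar{\mathbf{V}}^{[1]})$:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, R, d2 = 6, 2, 2                             # hypothetical small sizes
V2 = rng.standard_normal((mu, d2))
T = [rng.standard_normal((mu, mu)) for _ in range(R)]

# Enforce the column-containment condition literally:
# V1 contains every T_i V2 as a sub-block of its columns
V1 = np.hstack([Ti @ V2 for Ti in T])           # mu x (R d2)

stacked = np.vstack([Ti @ V2 for Ti in T])      # [T_1 V2; ...; T_R V2]
blockdiag = np.kron(np.eye(R), V1)              # diag(V1, ..., V1)

# Span containment: adjoining `stacked` must not increase the rank
r0 = np.linalg.matrix_rank(blockdiag)
r1 = np.linalg.matrix_rank(np.hstack([blockdiag, stacked]))
assert r0 == r1
```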
Then, \eqref{eqnreq1} can be expressed equivalently as \begin{equation*} \text{span}( \left[ \begin{array}{c} \mathbf{T}^{[kj]}_1 \mathbf{\bar{V}}^{[2]}\\ \mathbf{T}^{[kj]}_2 \mathbf{\bar{V}}^{[2]}\\ \vdots\\ \mathbf{T}^{[kj]}_R \mathbf{\bar{V}}^{[2]} \end{array} \right] )\subset \text{span}( \left[ \begin{array}{cccc}\mathbf{\bar{V}}^{[1]}&\mathbf{0}& \cdots &\mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[1]}&\cdots&\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots\\ \mathbf{0}& \mathbf{0} & \cdots & \mathbf{\bar{V}}^{[1]} \end{array}\right]) \end{equation*} The above condition can be satisfied if \begin{equation}\label{c1} \mathbf{T}^{[kj]}_i\bar{\mathbf{V}}^{[2]} \prec \bar{\mathbf{V}}^{[1]}~\forall k=1,\ldots,R+1 ~j=R+2,\ldots,K~i=1,\ldots,R \end{equation} where $\mathbf{P} \prec \mathbf{Q}$ means that the set of column vectors of matrix $\mathbf{P}$ is a subset of the set of column vectors of matrix $\mathbf{Q}$. Then consider Receiver $k$, $\forall k=R+2,R+3,\cdots,K$. To obtain an $Rn^{\Gamma}$-dimensional interference-free signal space, the dimension of the signal space spanned by the interference vectors cannot be more than $R(R+1)(n+1)^{\Gamma}-Rn^{\Gamma}$ at each receiver. This can be achieved if all interference vectors from Transmitter $j$, $\forall j=R+2,\cdots,k-1,k+1,\cdots,K$ and $Rn^{\Gamma}$ interference vectors from Transmitter $R+1$ are aligned within the signal space spanned by interference vectors from Transmitters $1,2,\cdots,R$. We first consider aligning the interference from Transmitters $R+2,\cdots,k-1,k+1,\cdots,K$.
Mathematically, we choose the following alignments: \begin{eqnarray*} \text{span}(\mathbf{\bar{H}}^{[kj]}\mathbf{\bar{V}}^{[2]}) &\subset& \text{span} (\left[\bar{\mathbf{H}}^{[k1]}\mathbf{\bar{V}}^{[1]} ~\bar{\mathbf{H}}^{[k2]}\mathbf{\bar{V}}^{[1]} \cdots \bar{\mathbf{H}}^{[kR]}\mathbf{\bar{V}}^{[1]}\right])\\ \Rightarrow \text{span}(\mathbf{\bar{H}}^{[kj]}\mathbf{\bar{V}}^{[2]}) &\subset& \text{span} (\left[\bar{\mathbf{H}}^{[k1]} ~\bar{\mathbf{H}}^{[k2]}~ \cdots~ \bar{\mathbf{H}}^{[kR]}\right] \left[ \begin{array}{cccc}\mathbf{\bar{V}}^{[1]}&\mathbf{0}& \cdots &\mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[1]}&\cdots&\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots\\ \mathbf{0}& \mathbf{0} & \cdots & \mathbf{\bar{V}}^{[1]} \end{array}\right]) \end{eqnarray*} Notice that $[\bar{\mathbf{H}}^{[k1]} ~\bar{\mathbf{H}}^{[k2]}~\cdots~ \bar{\mathbf{H}}^{[kR]}]$ is a $R\mu_n \times R\mu_n$ square matrix with full rank almost surely. Thus, the above equation can be expressed equivalently as \begin{eqnarray}\label{eqnreq2} \text{span}(\underbrace{\left[\bar{\mathbf{H}}^{[k1]} ~\bar{\mathbf{H}}^{[k2]} \cdots \bar{\mathbf{H}}^{[kR]}\right]^{-1} \mathbf{\bar{H}}^{[kj]}}_{\mathbf{T}^{[kj]}}\mathbf{\bar{V}}^{[2]}) \subset \text{span} (\left[\begin{array}{cccc}\mathbf{\bar{V}}^{[1]}&\mathbf{0}& \cdots &\mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[1]}&\cdots&\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots\\ \mathbf{0}& \mathbf{0} & \cdots & \mathbf{\bar{V}}^{[1]} \end{array} \right]) \end{eqnarray} Note that $\mathbf{T}^{[kj]}$ is a $R\mu_n \times \mu_n$ matrix and can be written in a block matrix form: \begin{equation*} \mathbf{T}^{[kj]}=\left[ \begin{array}{c}\mathbf{T}^{[kj]}_1\\\mathbf{T}^{[kj]}_2\\\vdots\\ \mathbf{T}^{[kj]}_R\end{array}\right] \label{T_block} \end{equation*} where each block $\mathbf{T}^{[kj]}_i$ is a $\mu_n \times \mu_n$ matrix. 
Then, \eqref{eqnreq2} can be expressed as \begin{equation*} \text{span}( \left[ \begin{array}{c} \mathbf{T}^{[kj]}_1 \mathbf{\bar{V}}^{[2]}\\ \mathbf{T}^{[kj]}_2 \mathbf{\bar{V}}^{[2]}\\ \vdots\\ \mathbf{T}^{[kj]}_R \mathbf{\bar{V}}^{[2]} \end{array} \right] )\subset \text{span}( \left[ \begin{array}{cccc}\mathbf{\bar{V}}^{[1]}&\mathbf{0}& \cdots &\mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[1]}&\cdots&\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots\\ \mathbf{0}& \mathbf{0} & \cdots & \mathbf{\bar{V}}^{[1]} \end{array}\right]) \end{equation*} The above condition can be satisfied if \begin{equation} \mathbf{T}^{[kj]}_i\bar{\mathbf{V}}^{[2]} \prec \bar{\mathbf{V}}^{[1]} ~k=R+2,R+3,\cdots,K~~j=R+2,\cdots,k-1,k+1,\cdots,K~i=1,\cdots,R \label{c2} \end{equation} Now consider aligning $Rn^{\Gamma}$ interference vectors from Transmitter $R+1$ at Receiver $k$, $\forall k=R+2,R+3,\cdots,K$. This can be achieved if the space spanned by $Rn^{\Gamma}$ columns of $\bar{\mathbf{H}}^{[k(R+1)]}\mathbf{\bar{V}}^{[1]}$ is aligned within the range of $\left[\bar{\mathbf{H}}^{[k1]}\mathbf{\bar{V}}^{[1]}~\cdots~\mathbf{\bar{H}}^{[kR]}\mathbf{\bar{V}}^{[1]}\right]$. Since $\mathbf{\bar{V}}^{[1]}$ is a $\mu_n \times R(n+1)^{\Gamma}$ matrix, we can write it as $\mathbf{\bar{V}}^{[1]}=[\mathbf{\bar{V}}^{[1]}_u~ \mathbf{\bar{V}}^{[1]}_{\epsilon_n}]$ where $\mathbf{\bar{V}}^{[1]}_u$ and $\mathbf{\bar{V}}^{[1]}_{\epsilon_n}$ are $\mu_n \times Rn^{\Gamma}$ and $\mu_n \times (R(n+1)^{\Gamma}-Rn^{\Gamma})$ matrices, respectively. We assume the space spanned by the columns of $\bar{\mathbf{H}}^{[k(R+1)]}\mathbf{\bar{V}}^{[1]}_u$ is aligned within the space spanned by the interference from Transmitter 1, 2, \ldots, $R$. 
From equation \eqref{c1}, we have \begin{equation*} \mathbf{T}^{[1(R+2)]}_1 \bar{\mathbf{V}}^{[2]} \prec \bar{\mathbf{V}}^{[1]} \end{equation*} This implies that $Rn^{\Gamma}$ columns of $\mathbf{\bar{V}}^{[1]}$ are equal to the columns of $\mathbf{T}^{[1(R+2)]}_1 \bar{\mathbf{V}}^{[2]}$. Without loss of generality, we assume that $\mathbf{\bar{V}}^{[1]}_u=\mathbf{T}^{[1(R+2)]}_1 \bar{\mathbf{V}}^{[2]}$. Thus, to satisfy the interference alignment requirement, we choose the following alignments: \begin{eqnarray*} \text{span}(\mathbf{\bar{H}}^{[k(R+1)]}\bar{\mathbf{V}}^{[1]}_u)=\text{span}(\mathbf{\bar{H}}^{[k(R+1)]}\mathbf{T}^{[1(R+2)]}_1 \bar{\mathbf{V}}^{[2]}) \subset \text{span} (\left[\bar{\mathbf{H}}^{[k1]}\mathbf{\bar{V}}^{[1]} ~\bar{\mathbf{H}}^{[k2]}\mathbf{\bar{V}}^{[1]} \cdots \bar{\mathbf{H}}^{[kR]}\mathbf{\bar{V}}^{[1]}\right])\\ \Rightarrow \text{span}(\mathbf{\bar{H}}^{[k(R+1)]}\mathbf{T}^{[1(R+2)]}_1 \bar{\mathbf{V}}^{[2]}) \subset \text{span} (\left[\bar{\mathbf{H}}^{[k1]}~\bar{\mathbf{H}}^{[k2]}~\cdots~ \bar{\mathbf{H}}^{[kR]}\right] \left[ \begin{array}{cccc}\mathbf{\bar{V}}^{[1]}&\mathbf{0}& \cdots &\mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[1]}&\cdots&\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots\\ \mathbf{0}& \mathbf{0} & \cdots & \mathbf{\bar{V}}^{[1]} \end{array}\right])\\ \Rightarrow \text{span}(\underbrace{\left[\bar{\mathbf{H}}^{[k1]} ~\bar{\mathbf{H}}^{[k2]}~\cdots~ \bar{\mathbf{H}}^{[kR]}\right]^{-1} \mathbf{\bar{H}}^{[k(R+1)]}\mathbf{T}^{[1(R+2)]}_1}_{\mathbf{T}^{[k(R+1)]}}\mathbf{\bar{V}}^{[2]}) \subset \text{span} (\left[\begin{array}{cccc}\mathbf{\bar{V}}^{[1]}&\mathbf{0}& \cdots &\mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[1]}&\cdots&\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots\\ \mathbf{0}& \mathbf{0} & \cdots & \mathbf{\bar{V}}^{[1]} \end{array} \right]) \end{eqnarray*} Note that $\mathbf{T}^{[k(R+1)]}$ is an $R\mu_n \times \mu_n$ matrix and can be written in a block matrix form: \begin{equation*} \mathbf{T}^{[k(R+1)]}=\left[
\begin{array}{c}\mathbf{T}^{[k(R+1)]}_1\\\mathbf{T}^{[k(R+1)]}_2\\\vdots\\ \mathbf{T}^{[k(R+1)]}_R\end{array}\right] \label{T_block} \end{equation*} where each block $\mathbf{T}^{[k(R+1)]}_i$ is a $\mu_n \times \mu_n$ matrix. Then, the above equation can be expressed as \begin{equation*} \text{span}( \left[ \begin{array}{c} \mathbf{T}^{[k(R+1)]}_1 \mathbf{\bar{V}}^{[2]}\\ \mathbf{T}^{[k(R+1)]}_2 \mathbf{\bar{V}}^{[2]}\\ \vdots\\ \mathbf{T}^{[k(R+1)]}_R \mathbf{\bar{V}}^{[2]} \end{array} \right] )\subset \text{span}( \left[ \begin{array}{cccc}\mathbf{\bar{V}}^{[1]}&\mathbf{0}& \cdots &\mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[1]}&\cdots&\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots\\ \mathbf{0}& \mathbf{0} & \cdots & \mathbf{\bar{V}}^{[1]} \end{array}\right]) \end{equation*} The above condition can be satisfied if \begin{equation} \mathbf{T}^{[k(R+1)]}_i\bar{\mathbf{V}}^{[2]} \prec \bar{\mathbf{V}}^{[1]} \label{c3}~k=R+2,R+3,\cdots,K~i=1,\cdots,R \end{equation} Thus, interference alignment is ensured by choosing $\bar{\mathbf{V}}^{[1]}$ and $\bar{\mathbf{V}}^{[2]}$ to satisfy \eqref{c1}, \eqref{c2}, \eqref{c3}. Note that these conditions can be expressed as \begin{equation} \mathbf{T}^{[kj]}_i\bar{\mathbf{V}}^{[2]} \prec \bar{\mathbf{V}}^{[1]}~~ \forall (k,j)\in A ~i=1,2,\cdots,R \end{equation} where $A=\{(k,j):(k,j)\in \{1,2,\cdots,R+1\}\times\{R+2,\cdots,K\}\} \cup\{(k,j):(k,j)\in\{R+2,\cdots,K\}\times\{R+1,\cdots,K\},~k \neq j\}$. Therefore, there are $KR(K-R-1)$ such equations. We need to choose $R(n+1)^{\Gamma}$ column vectors for $\mathbf{\bar{V}}^{[1]}$ and $Rn^{\Gamma}$ column vectors for $\mathbf{\bar{V}}^{[2]}$. Let $\mathbf{w}$ be a $\mu_n \times 1$ column vector $\mathbf{w} = [1 \ 1 \ \ldots \ 1]^T$. 
The sets of column vectors of $\mathbf{\bar{V}}^{[1]}$ and $\mathbf{\bar{V}}^{[2]}$ are chosen to be equal to the sets $\bar{V}^{[1]}$ and $\bar{V}^{[2]}$ respectively where \begin{equation} \bar{V}^{[1]} = \bigcup_{m=0}^{R-1} \big\{ \big(\prod_{i=1,\cdots,R, (k,j)\in A} (\mathbf{T}^{[kj]}_i)^{\alpha^{[kj]}_i}\big)\mathbf{w}:\ \alpha^{[kj]}_i \in \{mn+m+1, mn+m+2, \ldots, (m+1)n+m+1 \}\big\}\label{v1} \end{equation} \begin{equation} \bar{V}^{[2]} = \bigcup_{m=0}^{R-1} \big\{ \big(\prod_{i=1,\cdots,R, (k,j)\in A} (\mathbf{T}^{[kj]}_i)^{\alpha^{[kj]}_i}\big)\mathbf{w}: \alpha_i^{[kj]} \in \{mn+m+1, mn+m+2, \ldots, (m+1)n+m \}\big\}\label{v2} \end{equation} Note that the above construction requires the commutative property of multiplication of matrices $\mathbf{T}^{[kj]}_i$. Therefore, it requires $\mathbf{T}^{[kj]}_i$ to be diagonal matrices. Next, we will show this is true. We illustrate this for the case when $k=R+2,\cdots,K$ and $j=R+2,\cdots,k-1,k+1,\cdots,K$. Similar arguments can be applied to other cases. 
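As a numerical aside, the role of the exponent sets in \eqref{v1} and \eqref{v2} can be illustrated with a simplified single-matrix analogue. The NumPy sketch below uses one random diagonal stand-in matrix $T$ (playing the role of a single $\mathbf{T}^{[kj]}_i$, assuming the diagonality claimed above) and the all-ones vector $\mathbf{w}$, with the $m=0$ exponent ranges $\{1,\ldots,n+1\}$ and $\{1,\ldots,n\}$; it is only an illustration, not the actual multi-matrix construction.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 6                       # stand-in for the signal-space dimension mu_n
n = 2                        # construction parameter n
T = np.diag(rng.uniform(1.0, 2.0, mu))   # diagonal stand-in for one T^{[kj]}_i
w = np.ones((mu, 1))         # the all-ones vector w

# One-matrix analogue of the exponent sets in (v1), (v2) with m = 0:
# V1 uses exponents {1, ..., n+1}, V2 uses exponents {1, ..., n}.
V1 = np.hstack([np.linalg.matrix_power(T, a) @ w for a in range(1, n + 2)])
V2 = np.hstack([np.linalg.matrix_power(T, a) @ w for a in range(1, n + 1)])

# Alignment check (T V2 "subset of columns of" V1): T * T^a w = T^{a+1} w,
# whose exponent still lies in {1, ..., n+1}, so every column of T @ V2
# appears among the columns of V1.
TV2 = T @ V2
ok = all(any(np.allclose(TV2[:, c], V1[:, d]) for d in range(V1.shape[1]))
         for c in range(TV2.shape[1]))
print(ok)

# Diagonal matrices commute, which is what makes products of powers of the
# matrices T^{[kj]}_i well-defined in the actual construction.
S = np.diag(rng.uniform(1.0, 2.0, mu))
print(np.allclose(T @ S, S @ T))
```

Multiplying any vector of the smaller set by $T$ lands inside the larger set, which is exactly the mechanism behind conditions of the form $\mathbf{T}^{[kj]}_i\bar{\mathbf{V}}^{[2]} \prec \bar{\mathbf{V}}^{[1]}$.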
Notice that $[\bar{\mathbf{H}}^{[k1]}~ \bar{\mathbf{H}}^{[k2]}~\cdots ~\bar{\mathbf{H}}^{[kR]}]$ is a $R\mu_n \times R\mu_n$ square matrix: \begin{eqnarray*} &\left[\begin{array}{cccc}&\bar{\mathbf{H}}^{[k1]}~ \bar{\mathbf{H}}^{[k2]} \cdots \bar{\mathbf{H}}^{[kR]}\end{array}\right] =\\ &\begin{tiny}\left[ \begin{array}{ccccccccc} \mathbf{h}^{[k1]}(\mu_n(t-1)+1) & \mathbf{0}_{R \times 1} & \ldots & \mathbf{0}_{R \times 1}&\cdots& \mathbf{h}^{[kR]}(\mu_n(t-1)+1) & \mathbf{0}_{R \times 1} & \ldots & \mathbf{0}_{R \times 1}\\ \mathbf{0}_{R \times 1} & \mathbf{h}^{[k1]}(\mu_n(t-1)+2) & \ldots & \mathbf{0}_{R \times 1}&\cdots&\mathbf{0}_{R \times 1} & \mathbf{h}^{[kR]}(\mu_n(t-1)+2) & \ldots & \mathbf{0}_{R \times 1}\\ \vdots & \vdots & \ddots & \vdots &\cdots&\vdots & \vdots & \ddots & \vdots\\ \mathbf{0}_{R \times 1} & \mathbf{0}_{R \times 1}& \cdots & \mathbf{h}^{[k1]}(\mu_nt)&\cdots& \mathbf{0}_{R \times 1} & \mathbf{0}_{R \times 1}& \cdots & \mathbf{h}^{[kR]}(\mu_nt) \end{array}\right]\end{tiny} \end{eqnarray*} Then, \begin{eqnarray*} [\bar{\mathbf{H}}^{[k1]}~ \bar{\mathbf{H}}^{[k2]}~\cdots ~\bar{\mathbf{H}}^{[kR]}]^{-1}=\begin{tiny}\left[ \begin{array}{cccc}\mathbf{u}^{[k1]}(\mu_n(t-1)+1)_{1 \times R} & \mathbf{0}_{1 \times R} & \cdots & \mathbf{0}_{1 \times R}\\ \mathbf{0}_{1 \times R}& \mathbf{u}^{[k1]}(\mu_n(t-1)+2)_{1 \times R} & \cdots & \mathbf{0}_{1 \times R}\\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0}_{1 \times R} & \mathbf{0}_{1 \times R}& \cdots &\mathbf{u}^{[k1]}(\mu_n(t-1)+\mu_n)_{1 \times R}\\ \mathbf{u}^{[k2]}(\mu_n(t-1)+1)_{1 \times R} & \mathbf{0}_{1 \times R} & \cdots & \mathbf{0}_{1 \times R}\\ \mathbf{0}_{1 \times R}& \mathbf{u}^{[k2]}(\mu_n(t-1)+2)_{1 \times R} & \cdots & \mathbf{0}_{1 \times R}\\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0}_{1 \times R} & \mathbf{0}_{1 \times R}& \cdots &\mathbf{u}^{[k2]}(\mu_n(t-1)+\mu_n)_{1 \times R}\\ \vdots & \vdots & \vdots & \vdots\\ \mathbf{u}^{[kR]}(\mu_n(t-1)+1)_{1 \times R} & \mathbf{0}_{1 
\times R} & \cdots & \mathbf{0}_{1 \times R}\\ \mathbf{0}_{1 \times R}& \mathbf{u}^{[kR]}(\mu_n(t-1)+2)_{1 \times R} & \cdots & \mathbf{0}_{1 \times R}\\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0}_{1 \times R} & \mathbf{0}_{1 \times R}& \cdots &\mathbf{u}^{[kR]}(\mu_n(t-1)+\mu_n)_{1 \times R} \end{array}\right]\end{tiny} \end{eqnarray*} where $\mathbf{u}^{[kj]}(\mu_n(t-1)+\kappa), \forall \kappa=1,2,\ldots,\mu_n$ is a $1 \times R$ row vector and \begin{eqnarray} \begin{tiny}\left[ \begin{array}{cccc}\mathbf{h}^{[k1]}(\mu_n(t-1)+\kappa)& \mathbf{h}^{[k2]}(\mu_n(t-1)+\kappa) & \cdots & \mathbf{h}^{[kR]}(\mu_n(t-1)+\kappa)\end{array} \right]^{-1} = \left[ \begin{array}{c}\mathbf{u}^{[k1]}(\mu_n(t-1)+\kappa)\\\mathbf{u}^{[k2]}(\mu_n(t-1)+\kappa) \notag\\ \vdots\\\mathbf{u}^{[kR]}(\mu_n(t-1)+\kappa)\end{array}\right]~~~~\kappa=1,2,\ldots,\mu_n.\end{tiny} \end{eqnarray} Recall \begin{eqnarray*} \mathbf{T}^{[kj]}=\left[ \begin{array}{c}\mathbf{T}^{[kj]}_1\\ \mathbf{T}^{[kj]}_2\\\vdots\\ \mathbf{T}^{[kj]}_R\end{array}\right]=[\bar{\mathbf{H}}^{[k1]}~ \bar{\mathbf{H}}^{[k2]}~\cdots ~\bar{\mathbf{H}}^{[kR]}]^{-1}\bar{\mathbf{H}}^{[kj]}~~ \bar{\mathbf{H}}^{[kj]}(t) = \begin{tiny}\left[ \begin{array}{cccc} \mathbf{h}^{[kj]}(\mu_n(t-1)+1) & \mathbf{0} & \ldots & \mathbf{0}\\ \mathbf{0} & \mathbf{h}^{[kj]}(\mu_n(t-1)+2) & \ldots & \mathbf{0}\\ \vdots & \vdots & \ddots & \vdots\\ \mathbf{0} & \mathbf{0}& \cdots & \mathbf{h}^{[kj]}(\mu_nt) \end{array}\right]\end{tiny} \end{eqnarray*} Thus, $\forall i=1,2,\cdots,R$ \begin{equation}\label{diagonal} \mathbf{T}^{[kj]}_i=\begin{tiny}\left[ \begin{array}{cccc}\mathbf{u}^{[ki]}(\mu_n(t-1)+1)\mathbf{h}^{[kj]}(\mu_n(t-1)+1)& 0 & \cdots & 0\\ 0 & \mathbf{u}^{[ki]}(\mu_n(t-1)+2)\mathbf{h}^{[kj]}(\mu_n(t-1)+2) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots &\mathbf{u}^{[ki]}(\mu_n t)\mathbf{h}^{[kj]}(\mu_n t)\end{array}\right]\end{tiny} \end{equation} Hence, $\mathbf{T}^{[kj]}_i$ are diagonal matrices with diagonal 
entries $\mathbf{u}^{[ki]}(\mu_n(t-1)+\kappa)\mathbf{h}^{[kj]}(\mu_n(t-1)+\kappa)$, $\forall \kappa=1,\ldots,\mu_n$. Through interference alignment, we ensure that the dimension of the interference is small enough. Now we need to verify that the desired signal vectors are linearly independent of the interference vectors so that each receiver can separate the desired signal from the interference. Consider Receiver 1. Since all interference vectors are aligned in the signal space spanned by the interference from transmitters $2,3,\cdots,R+1$, it suffices to verify that the columns of $\bar{\mathbf{H}}^{[11]}\mathbf{\bar{V}}^{[1]}$ are linearly independent of the columns of $[\bar{\mathbf{H}}^{[12]}\mathbf{\bar{V}}^{[1]} \cdots \bar{\mathbf{H}}^{[1(R+1)]}\mathbf{\bar{V}}^{[1]}]$ almost surely. Notice that the direct channel matrix $\mathbf{\bar{H}}^{[11]}$ does not appear in the interference alignment equations and $\mathbf{\bar{V}}^{[1]}$ is chosen independently of $\mathbf{\bar{H}}^{[11]}$. Then, the desired signal $\mathbf{\bar{V}}^{[1]}$ undergoes an independent linear transformation through multiplication by $\mathbf{\bar{H}}^{[11]}$. Thus, the columns of $\bar{\mathbf{H}}^{[11]}\mathbf{\bar{V}}^{[1]}$ are linearly independent of the columns of $[\bar{\mathbf{H}}^{[12]}\mathbf{\bar{V}}^{[1]} \cdots \bar{\mathbf{H}}^{[1(R+1)]}\mathbf{\bar{V}}^{[1]}]$ almost surely, as long as all entries of $\mathbf{\bar{V}}^{[1]}$ are nonzero with probability one. If some entries of $\mathbf{\bar{V}}^{[1]}$ are equal to zero, then due to the block diagonal structure of $\mathbf{\bar{H}}^{[11]}$, the desired signal vectors can be linearly dependent on the interference vectors. For example, consider three $3 \times 3$ diagonal matrices $\mathbf{H}^{[1]}$, $\mathbf{H}^{[2]}$, $\mathbf{H}^{[3]}$ whose entries are drawn according to a continuous distribution. Let $\mathbf{v}$ be a $3 \times 1$ vector whose entries depend on the entries of $\mathbf{H}^{[2]}$, $\mathbf{H}^{[3]}$ and are nonzero with probability one.
Vectors $\mathbf{H}^{[2]}\mathbf{v}$ and $\mathbf{H}^{[3]}\mathbf{v}$ span a plane in the three-dimensional space. Now vector $\mathbf{v}$ undergoes a random linear transformation through multiplication by $\mathbf{H}^{[1]}$. The probability that the vector $\mathbf{H}^{[1]}\mathbf{v}$ lies in that plane is zero. If $\mathbf{v}$ has one zero entry, for example $\mathbf{v}=[1~1~0]^T$, then $\mathbf{H}^{[1]}\mathbf{v}$, $\mathbf{H}^{[2]}\mathbf{v}$ and $\mathbf{H}^{[3]}\mathbf{v}$ all lie in a two-dimensional subspace of the three-dimensional vector space. Hence, they are linearly dependent. Next, we verify that all entries of $\mathbf{\bar{V}}^{[1]}$ and $\mathbf{\bar{V}}^{[2]}$ are nonzero with probability one through their construction from \eqref{v1} and \eqref{v2}. From \eqref{v1}, \eqref{v2} and \eqref{diagonal}, it can be seen that each entry of $\mathbf{\bar{V}}^{[1]}$ and $\mathbf{\bar{V}}^{[2]}$ is a product of powers of terms of the form $\mathbf{u}^{[ki]}(\mu_n(t-1)+\kappa)\mathbf{h}^{[kj]}(\mu_n(t-1)+\kappa)$. To verify that each entry of $\mathbf{\bar{V}}^{[1]}$ and $\mathbf{\bar{V}}^{[2]}$ is nonzero with probability one, we only need to verify that $\mathbf{u}^{[ki]}(\mu_n(t-1)+\kappa)\mathbf{h}^{[kj]}(\mu_n(t-1)+\kappa)$ is nonzero with probability one. Since each entry of $\mathbf{h}^{[kj]}(\mu_n(t-1)+\kappa)$ is drawn from a continuous distribution, $\mathbf{u}^{[ki]}(\mu_n(t-1)+\kappa)\mathbf{h}^{[kj]}(\mu_n(t-1)+\kappa)=0$ with nonzero probability only if all entries of $\mathbf{u}^{[ki]}(\mu_n(t-1)+\kappa)$ are equal to zero. However, $\mathbf{u}^{[ki]}(\mu_n(t-1)+\kappa)$ is a row of the inverse of an $R\times R$ square matrix. Thus, not all entries of $\mathbf{u}^{[ki]}(\mu_n(t-1)+\kappa)$ are equal to zero with probability one. As a result, all entries of $\mathbf{\bar{V}}^{[1]}$ and $\mathbf{\bar{V}}^{[2]}$ are nonzero with probability one. Thus, we conclude that at Receiver 1 the desired signal vectors are linearly independent of the interference signal vectors.
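The toy example above can be checked numerically. The following NumPy sketch (an illustration only, with random diagonal stand-ins for $\mathbf{H}^{[1]}$, $\mathbf{H}^{[2]}$, $\mathbf{H}^{[3]}$) shows that with all entries of $\mathbf{v}$ nonzero the three image vectors are almost surely linearly independent, while a single zero entry forces linear dependence.

```python
import numpy as np

rng = np.random.default_rng(1)
H1, H2, H3 = (np.diag(rng.standard_normal(3)) for _ in range(3))

# Generic v with all entries nonzero: H1 v almost surely lies outside the
# plane spanned by H2 v and H3 v, so the 3x3 determinant is nonzero.
v = np.array([1.0, 1.0, 1.0])
A = np.column_stack([H1 @ v, H2 @ v, H3 @ v])
print(abs(np.linalg.det(A)) > 1e-12)

# A zero entry in v forces all three image vectors into the same
# two-dimensional coordinate subspace, so they are linearly dependent
# (the third row of A0 is exactly zero).
v0 = np.array([1.0, 1.0, 0.0])
A0 = np.column_stack([H1 @ v0, H2 @ v0, H3 @ v0])
print(np.isclose(np.linalg.det(A0), 0.0))
```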
Similar arguments can be applied at Receivers $2,3,\ldots,K$ to show that the desired signal vectors are linearly independent of the interference vectors. Thus, each receiver can decode its desired streams using zero forcing. As a result, each user can achieve $\frac{R}{R+1}$ degrees of freedom per channel use for a total of $\frac{R}{R+1}K$ degrees of freedom with probability one. \end{proof} \section{The achievable degrees of freedom of the MIMO Gaussian Interference channel with constant channel coefficients}\label{apdx:mimo} In this appendix, we consider the achievable degrees of freedom for some MIMO Gaussian interference channels with constant channel coefficients. Specifically, we consider the $R+2$ user MIMO Gaussian interference channel where each transmitter has $M>1$ antennas and each receiver has $RM$ ($R=2,3,\cdots$) antennas. The main results of this section are presented in the following theorems: \begin{theorem}\label{theorem:cwot} For the $R+2$ user MIMO Gaussian interference channel where each transmitter has $M>1$ antennas and each receiver has $RM$, $R=2,3,\cdots$, antennas with constant channel coefficients, $RM+\lfloor\frac{RM}{R^2+2R-1}\rfloor$ degrees of freedom can be achieved without channel extension. \end{theorem} \begin{proof} The achievable scheme is provided in the following subsection. \end{proof} Theorem \ref{theorem:cwot} is interesting because it shows that when $\lfloor\frac{RM}{R^2+2R-1}\rfloor>0$, and hence $M>R+2-\frac{1}{R}$, using an interference alignment scheme combined with zero forcing can achieve more degrees of freedom than zero forcing alone. It also shows that the $R+2$ user MIMO interference channel with $M$ antennas at each transmitter and $RM$ antennas at each receiver can achieve more degrees of freedom than the $R+1$ user channel with the same antenna deployment when $M>R+2-\frac{1}{R}$.
For example, if $R=2$, Theorem \ref{theorem:cwot} shows that for the 4 user interference channel with $M$ and $2M$ antennas at each transmitter and receiver respectively, $2M+\lfloor\frac{2M}{7}\rfloor$ degrees of freedom can be achieved using interference alignment. However, only $2M$ degrees of freedom can be achieved using zero forcing. Thus, when $M>3$, using interference alignment combined with zero forcing can achieve more degrees of freedom than zero forcing alone. Similarly, only $2M$ degrees of freedom can be achieved on the 3 user interference channel with the same antenna deployment. Hence, when $M>3$ more degrees of freedom can be achieved on the 4 user interference channel. While Theorem~\ref{theorem:cwot} indicates that when $M<R+2$ using interference alignment combined with zero forcing may not achieve more degrees of freedom than zero forcing without channel extension, using interference alignment can achieve more degrees of freedom if we allow channel extension. We present the result in the following theorem: \begin{theorem}\label{theorem:cex} For the $R+2$ user MIMO interference channel where each transmitter has $M$ $(1<M<R+2)$ antennas and each receiver has $RM$, $R=2,3,\cdots$, antennas with constant channel coefficients, $RM+\frac{1}{\lceil\frac{R+2}{M}\rceil}$ degrees of freedom per orthogonal dimension can be achieved with a $\lceil\frac{R+2}{M}\rceil$-symbol channel extension. \end{theorem} \begin{proof} The achievable scheme is provided in the following subsection. \end{proof} Theorem~\ref{theorem:cex} shows that if we allow channel extension, $\frac{1}{\lceil\frac{R+2}{M}\rceil}$ more degrees of freedom can be achieved using interference alignment combined with zero forcing than zero forcing alone. For example, when $R=2, M=2$, $\frac{1}{2}$ more degrees of freedom can be achieved using interference alignment.
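The two degrees-of-freedom formulas are easy to tabulate. The short sketch below (plain Python; the helper names are ours) restates the expressions from Theorems \ref{theorem:cwot} and \ref{theorem:cex} and reproduces the $R=2$ examples above.

```python
from math import ceil, floor

def dof_no_extension(R, M):
    """Achievable DoF without channel extension: RM + floor(RM / (R^2 + 2R - 1))."""
    return R * M + floor(R * M / (R * R + 2 * R - 1))

def dof_with_extension(R, M):
    """Achievable DoF with a ceil((R+2)/M)-symbol extension: RM + 1 / ceil((R+2)/M)."""
    return R * M + 1 / ceil((R + 2) / M)

# R = 2: zero forcing alone achieves 2M; alignment starts to help at M = 4.
print(dof_no_extension(2, 4))    # 8 + floor(8/7) = 9
print(dof_no_extension(2, 3))    # 6 + floor(6/7) = 6, no gain yet
print(dof_with_extension(2, 2))  # 4 + 1/2 = 4.5
```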
\subsection{Proof of Theorem \ref{theorem:cwot}}\label{achievability:t2} When $\lfloor\frac{RM}{R^2+2R-1}\rfloor=0$, and hence $M <R+2-\frac{1}{R}$, $RM$ degrees of freedom can be achieved by zero forcing at each receiver. When $M \geq R+2$, we provide an achievable scheme based on interference alignment to show that the $i^{th}$ user can achieve $d_i$ degrees of freedom where $R\lfloor\frac{R M}{R^2+2R-1}\rfloor \leq d_i \leq M$ and $d_1+\cdots+d_{R+2}=RM+\lfloor\frac{RM}{R^2+2R-1}\rfloor$. Transmitter $i$ sends message $W_i$ to Receiver $i$ using $d_i$ independently encoded streams along vectors $\mathbf{v}^{[i]}_m$, i.e., \begin{equation*} \mathbf{X}^{[i]}=\sum_{m=1}^{d_i}x^{i}_m\mathbf{v}_m^{[i]}=\mathbf{V}^{[i]}\mathbf{X}^{i},\quad i=1,\cdots,R+2 \end{equation*} Then, the received signal is \begin{equation*} \mathbf{Y}^{[j]}=\sum_{i=1}^{R+2}\mathbf{H}^{[ji]}\mathbf{V}^{[i]}\mathbf{X}^i+\mathbf{Z}^{[j]}. \end{equation*} In order for each receiver to decode its desired signal streams by zero forcing the interference, the dimension of the interference has to be less than or equal to $RM-d_i$. However, there are $\lfloor\frac{RM}{R^2+2R-1}\rfloor+RM-d_i$ interference vectors at Receiver $i$. Therefore, we need to align $\lfloor\frac{RM}{R^2+2R-1}\rfloor$ interference signal vectors at each receiver. This can be achieved if $\lfloor\frac{RM}{R^2+2R-1}\rfloor$ interference vectors are aligned within the space spanned by all other interference vectors. First, we write $\mathbf{V}^{[i]}$ in the block matrix form: \begin{equation*} \mathbf{V}^{[i]}=[\mathbf{V}^{[i]}_1~\mathbf{V}^{[i]}_2~\cdots~\mathbf{V}^{[i]}_R~\mathbf{V}^{[i]}_{R+1}] \end{equation*} where $\mathbf{V}^{[i]}_1,\cdots,\mathbf{V}^{[i]}_R$ are $M \times \lfloor\frac{RM}{R^2+2R-1}\rfloor$ dimensional matrices and $\mathbf{V}^{[i]}_{R+1}$ is an $M \times (d_i-R \lfloor\frac{RM}{R^2+2R-1}\rfloor)$ dimensional matrix.
At Receiver 1, we align the range of $\mathbf{H}^{[1(R+2)]}\mathbf{V}^{[R+2]}_1$ within the space spanned by other interference vectors: \begin{eqnarray} \text{span}(\mathbf{H}^{[1(R+2)]}\mathbf{V}^{[R+2]}_1) \subset \text{span}(\left[ \mathbf{H}^{[12]}\mathbf{V}^{[2]}~\mathbf{H}^{[13]}\mathbf{V}^{[3]}~\cdots~\mathbf{H}^{[1(R+1)]}\mathbf{V}^{[R+1]} \right])\notag \\ \Rightarrow \text{span}(\underbrace{[\mathbf{H}^{[12]}~\mathbf{H}^{[13]}~\cdots~\mathbf{H}^{[1(R+1)]}]^{-1}\mathbf{H}^{[1(R+2)]}}_{\mathbf{T}^{[1]}}\mathbf{V}^{[R+2]}_1) \subset \text{span}(\left[\begin{array}{cccc}\mathbf{V}^{[2]}&\mathbf{0}& \cdots &\mathbf{0}\\ \mathbf{0}&\mathbf{V}^{[3]}&\cdots&\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots\\ \mathbf{0}& \mathbf{0} & \cdots & \mathbf{V}^{[R+1]} \end{array} \right]) \label{eqnrx1} \end{eqnarray} Note that $\mathbf{T}^{[1]}$ is an $RM \times M$ matrix and can be written in a block matrix form: \begin{equation*} \mathbf{T}^{[1]}=\left[ \begin{array}{c}\mathbf{T}^{[1]}_1 \\ \mathbf{T}^{[1]}_2 \\\vdots\\ \mathbf{T}^{[1]}_{R} \end{array}\right] \end{equation*} Then, condition \eqref{eqnrx1} can be expressed equivalently as \begin{equation*} \text{span}(\left[ \begin{array}{c}\mathbf{T}^{[1]}_1 \mathbf{V}^{[R+2]}_1\\ \mathbf{T}^{[1]}_2 \mathbf{V}^{[R+2]}_1\\\vdots\\ \mathbf{T}^{[1]}_{R} \mathbf{V}^{[R+2]}_1\end{array}\right]) \subset \text{span}(\left[\begin{array}{cccc}\mathbf{V}^{[2]}&\mathbf{0}& \cdots &\mathbf{0}\\ \mathbf{0}&\mathbf{V}^{[3]}&\cdots&\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots\\ \mathbf{0}& \mathbf{0} & \cdots & \mathbf{V}^{[R+1]} \end{array} \right]) \end{equation*} This condition can be satisfied if \begin{eqnarray} \mathbf{T}^{[1]}_1 \mathbf{V}^{[R+2]}_1&=& \mathbf{V}^{[2]}_1 \notag\\ \mathbf{T}^{[1]}_2 \mathbf{V}^{[R+2]}_1 &=& \mathbf{V}^{[3]}_1 \notag\\ & \vdots & \notag \\ \mathbf{T}^{[1]}_{R-1} \mathbf{V}^{[R+2]}_1 &=& \mathbf{V}^{[R]}_1 \notag\\ \text{span}(\mathbf{T}^{[1]}_{R}
\mathbf{V}^{[R+2]}_1)&=&\text{span}(\mathbf{V}^{[R+1]}_1)\label{span1} \end{eqnarray} At Receiver 2, we align the range of $\mathbf{H}^{[2(R+2)]}\mathbf{V}^{[R+2]}_1$ within the space spanned by other interference vectors: \begin{equation*} \text{span}(\mathbf{H}^{[2(R+2)]}\mathbf{V}^{[R+2]}_1) \subset \text{span}(\left[ \mathbf{H}^{[21]}\mathbf{V}^{[1]}~\mathbf{H}^{[23]}\mathbf{V}^{[3]}~\cdots~\mathbf{H}^{[2(R+1)]}\mathbf{V}^{[R+1]} \right]) \end{equation*} By arguments similar to those used at Receiver 1, this condition can be satisfied if \begin{eqnarray} \mathbf{T}^{[2]}_1 \mathbf{V}^{[R+2]}_1&=& \mathbf{V}^{[1]}_1 \notag\\ \mathbf{T}^{[2]}_2 \mathbf{V}^{[R+2]}_1 &=& \mathbf{V}^{[3]}_2 \notag\\ & \vdots & \notag \\ \mathbf{T}^{[2]}_{R-1} \mathbf{V}^{[R+2]}_1 &=& \mathbf{V}^{[R]}_2 \notag\\ \text{span}(\mathbf{T}^{[2]}_{R} \mathbf{V}^{[R+2]}_1)&=&\text{span}(\mathbf{V}^{[R+1]}_1)\label{span2} \end{eqnarray} where \begin{equation*} \mathbf{T}^{[2]}= \left[ \begin{array}{c}\mathbf{T}^{[2]}_1 \\ \mathbf{T}^{[2]}_2 \\ \vdots\\ \mathbf{T}^{[2]}_{R} \end{array}\right]= [\mathbf{H}^{[21]}~\mathbf{H}^{[23]}~\cdots~\mathbf{H}^{[2(R+1)]}]^{-1}\mathbf{H}^{[2(R+2)]} \end{equation*} At Receiver $j$, $\forall j, 2<j\leq R+1$, we align the range of $\mathbf{H}^{[j(j-1)]}\mathbf{V}^{[j-1]}_1$ within the space spanned by other interference vectors: \begin{equation*} \text{span}(\mathbf{H}^{[j(j-1)]}\mathbf{V}^{[j-1]}_1) \subset \text{span}(\left[ \mathbf{H}^{[j1]}\mathbf{V}^{[1]}~\cdots~\mathbf{H}^{[j(j-2)]}\mathbf{V}^{[j-2]}~\mathbf{H}^{[j(j+1)]}\mathbf{V}^{[j+1]}~\cdots~\mathbf{H}^{[ji]}\mathbf{V}^{[i]}~\cdots~\mathbf{H}^{[j(R+2)]}\mathbf{V}^{[R+2]} \right]) \end{equation*} By arguments similar to those used at Receiver 1, this condition can be satisfied if \begin{eqnarray*} \mathbf{T}^{[j]}\mathbf{V}^{[j-1]}_1=\left[\begin{array}{c}\mathbf{V}^{[1]}_{n(1,j)}\\ \vdots\\ \mathbf{V}^{[j-2]}_{n(j-2,j)}\\ \mathbf{V}^{[j+1]}_{n(j+1,j)}\\ \vdots\\
\mathbf{V}^{[i]}_{n(i,j)}\\ \vdots\\ \mathbf{V}^{[R+2]}_{n(R+2,j)} \end{array}\right] \end{eqnarray*} where \begin{equation*} \mathbf{T}^{[j]}=[\mathbf{H}^{[j1]}~\cdots~\mathbf{H}^{[j(j-2)]}~\mathbf{H}^{[j(j+1)]}~\cdots~\mathbf{H}^{[j(R+2)]}]^{-1}\mathbf{H}^{[j(j-1)]}~~~ n(i,j)=\left\{\begin{array}{ccc}j-1&~&i=1,R+1,R+2,i \neq j\\j-2&~&1<i<R+1,j>i+1\\j&~&3<i<R+1,j<i\end{array} \right. \end{equation*} At Receiver $R+2$, we align the range of $\mathbf{H}^{[(R+2)1]}\mathbf{V}^{[1]}_1$ within the space spanned by other interference vectors: \begin{equation*} \text{span}(\mathbf{H}^{[(R+2)1]}\mathbf{V}^{[1]}_1) \subset \text{span}(\left[ \mathbf{H}^{[(R+2)2]}\mathbf{V}^{[2]}~\mathbf{H}^{[(R+2)3]}\mathbf{V}^{[3]}~\cdots~\mathbf{H}^{[(R+2)(R+1)]}\mathbf{V}^{[R+1]}\right]) \end{equation*} This condition can be satisfied if \begin{eqnarray*} \mathbf{T}^{[R+2]}\mathbf{V}^{[1]}_1=\left[\begin{array}{c}\mathbf{V}^{[2]}_R\\ \mathbf{V}^{[3]}_R\\ \vdots\\ \mathbf{V}^{[R+1]}_R \end{array}\right] \end{eqnarray*} where \begin{equation*} \mathbf{T}^{[R+2]}=[\mathbf{H}^{[(R+2)2]}~\mathbf{H}^{[(R+2)3]}~\cdots~\mathbf{H}^{[(R+2)(R+1)]}]^{-1}\mathbf{H}^{[(R+2)1]} \end{equation*} Notice that once $\mathbf{V}^{[R+2]}_1$ is chosen, all other vectors can be solved from the above equations.
To solve $\mathbf{V}^{[R+2]}_1$, from \eqref{span1}, \eqref{span2}, we have \begin{eqnarray*} \text{span} (\mathbf{T}^{[1]}_{R}\mathbf{V}^{[R+2]}_1)&=&\text{span}(\mathbf{T}^{[2]}_{R}\mathbf{V}^{[R+2]}_1)\label{eigen}\\ \Rightarrow \text{span} ((\mathbf{T}^{[2]}_{R})^{-1}\mathbf{T}^{[1]}_{R}\mathbf{V}^{[R+2]}_1)&=&\text{span}(\mathbf{V}^{[R+2]}_1) \end{eqnarray*} Hence, columns of $\mathbf{V}^{[R+2]}_1$ can be chosen as \begin{equation} \mathbf{V}^{[R+2]}_{1}=[\mathbf{e}_1~\cdots~\mathbf{e}_{\lfloor\frac{RM}{R^2+2R-1}\rfloor}] \end{equation} where $\mathbf{e}_1~\cdots~\mathbf{e}_{\lfloor\frac{RM}{R^2+2R-1}\rfloor}$ are $\lfloor\frac{RM}{R^2+2R-1}\rfloor$ eigenvectors of $(\mathbf{T}^{[2]}_{R})^{-1}\mathbf{T}^{[1]}_{R}$. Note that the above construction only specifies $\mathbf{V}^{[i]}_1, \mathbf{V}^{[i]}_2,\ldots, \mathbf{V}^{[i]}_R$. The remaining columns, i.e., those of $\mathbf{V}^{[i]}_{R+1}$, can be chosen randomly according to a continuous distribution. Through interference alignment, we ensure that the interference vectors span a small enough signal space. We need to verify that the desired signal vectors, i.e., the columns of $\mathbf{H}^{[ii]}\mathbf{V}^{[i]}$, are linearly independent of the interference vectors so that each receiver can decode its message using zero forcing. Notice that the direct channel matrices $\mathbf{H}^{[ii]}, i=1,\ldots,R+2$, do not appear in the interference alignment equations; hence, $\mathbf{V}^{[i]}$ undergoes an independent linear transformation through multiplication by $\mathbf{H}^{[ii]}$. Therefore, the desired signal vectors are linearly independent of the interference signals with probability one. As a result, user $i$ can achieve $d_i$ degrees of freedom for a total of $RM+\lfloor\frac{RM}{R^2+2R-1}\rfloor$ degrees of freedom.
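The eigenvector choice of $\mathbf{V}^{[R+2]}_1$ can be verified numerically: if $\mathbf{e}$ is an eigenvector of $(\mathbf{T}^{[2]}_{R})^{-1}\mathbf{T}^{[1]}_{R}$ with eigenvalue $\lambda$, then $\mathbf{T}^{[1]}_{R}\mathbf{e}=\lambda\,\mathbf{T}^{[2]}_{R}\mathbf{e}$, which is exactly the required span condition. The NumPy sketch below uses random stand-in matrices, not the actual channel-derived $\mathbf{T}$ matrices.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 4
T1 = rng.standard_normal((M, M))   # stand-in for T^{[1]}_R
T2 = rng.standard_normal((M, M))   # stand-in for T^{[2]}_R

# Choose e as an eigenvector of inv(T2) @ T1, so inv(T2) @ T1 @ e = lam * e,
# and therefore T1 @ e = lam * (T2 @ e): the two image vectors are parallel.
lams, eigvecs = np.linalg.eig(np.linalg.inv(T2) @ T1)
e = eigvecs[:, 0]

# span(T1 e) = span(T2 e): the stacked two-column matrix has rank one.
stacked = np.column_stack([T1 @ e, T2 @ e])
print(np.linalg.matrix_rank(stacked, tol=1e-8))
```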
\subsection{Proof of Theorem \ref{theorem:cex}}\label{achievability:t3} We will provide an achievable scheme based on interference alignment to show that, in the $\lceil\frac{R+2}{M}\rceil$-symbol extension channel, user $i$, $\forall i=1,3,\ldots,R+2$, can achieve $d_i$ $(R \leq d_i\leq \lceil\frac{R+2}{M}\rceil M)$ degrees of freedom and user 2 can achieve $d_2$ $(R+1 \leq d_2\leq \lceil\frac{R+2}{M}\rceil M)$ degrees of freedom for a total of $RM\lceil\frac{R+2}{M}\rceil+1$ degrees of freedom. Hence, $RM+\frac{1}{\lceil\frac{R+2}{M}\rceil}$ degrees of freedom can be achieved on the original channel. Over the extension channel, the channel input-output relationship is \begin{equation*} \mathbf{\bar{Y}}^{[j]}= \sum_{i=1}^{R+2}\mathbf{\bar{H}}^{[ji]}\mathbf{\bar{X}}^{[i]}+\mathbf{\bar{Z}}^{[j]} \end{equation*} where the overbar notation represents the $\lceil\frac{R+2}{M}\rceil$-symbol extensions so that \begin{equation*} \mathbf{\bar{X}}\triangleq\left[\begin{array}{c}\mathbf{X}(\lceil\frac{R+2}{M}\rceil t)\\ \vdots \\ \mathbf{X}(\lceil\frac{R+2}{M}\rceil (t+1)-1)\end{array}\right]\quad \mathbf{\bar{Z}}\triangleq\left[\begin{array}{c}\mathbf{Z}(\lceil\frac{R+2}{M}\rceil t)\\ \vdots \\ \mathbf{Z}(\lceil\frac{R+2}{M}\rceil (t+1)-1)\end{array}\right] \end{equation*} where $\mathbf{X}$ and $\mathbf{Z}$ are $M \times 1$ and $RM \times 1$ vectors respectively, and \begin{equation*} \mathbf{\bar{H}} \triangleq \left[\begin{array}{cccc}\mathbf{H} & \mathbf{0}& \cdots & \mathbf{0}\\ \mathbf{0} & \mathbf{H} & \cdots& \mathbf{0}\\ \vdots & \vdots & \ddots & \vdots\\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{H} \end{array}\right] \end{equation*} where $\mathbf{H}$ is the $RM \times M$ channel matrix.
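The block-diagonal extended channel matrix $\mathbf{\bar{H}}$ defined above is simply the Kronecker product $\mathbf{I}_{L} \otimes \mathbf{H}$ with $L = \lceil\frac{R+2}{M}\rceil$. A small NumPy sketch (with the stand-in values $R=2$, $M=2$, so $L=2$) confirms this structure:

```python
import numpy as np

rng = np.random.default_rng(3)
R, M = 2, 2                           # stand-in values for illustration
L = -(-(R + 2) // M)                  # ceil((R+2)/M) symbol extensions -> L = 2
H = rng.standard_normal((R * M, M))   # a random RM x M constant channel matrix

# The L-symbol extension channel matrix is block diagonal with L copies of H
# on the diagonal, i.e. the Kronecker product I_L (x) H.
H_bar = np.kron(np.eye(L), H)
print(H_bar.shape)                    # (L*R*M, L*M)
```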
In the extension channel, Transmitter $i$ sends message $W_i$ to Receiver $i$ using $d_i$ independently encoded streams along vectors $\mathbf{\bar{v}}^{[i]}_1, \cdots, \mathbf{\bar{v}}^{[i]}_{d_i}$, i.e., \begin{equation*} \mathbf{\bar{X}}^{[i]}=\sum_{m=1}^{d_i}\mathbf{\bar{v}}^{[i]}_{m}x^{[i]}_m=\mathbf{\bar{V}}^{[i]}\mathbf{X}^{[i]} \end{equation*} where $\mathbf{\bar{V}}^{[i]}$ and $\mathbf{X}^{[i]}$ are $M \lceil\frac{R+2}{M}\rceil \times d_i$ and $d_i \times 1$ matrices, respectively. In order for each receiver to decode its desired signal streams by zero forcing the interference, the dimension of the space spanned by the interference vectors has to be less than or equal to $RM\lceil\frac{R+2}{M}\rceil-d_i$. However, there are $RM\lceil\frac{R+2}{M}\rceil-d_i+1$ interference vectors at Receiver $i$. Therefore, we need to align one interference signal vector at each receiver. This can be achieved if one interference vector is aligned within the space spanned by all other interference vectors.
Mathematically, we choose the following interference alignment equations:\\ At Receiver 1: \begin{eqnarray*} \text{span}(\mathbf{\bar{H}}^{[12]}\mathbf{\bar{v}}^{[2]}_1) \subset \text{span}(\left[ \mathbf{\bar{H}}^{[13]}\mathbf{\bar{V}}^{[3]}~\mathbf{\bar{H}}^{[14]}\mathbf{\bar{V}}^{[4]}~\cdots~\mathbf{\bar{H}}^{[1(R+1)]}\mathbf{\bar{V}}^{[R+1]}\right])\\ \Rightarrow \text{span}(\underbrace{[\mathbf{\bar{H}}^{[13]}~\mathbf{\bar{H}}^{[14]}~\cdots~\mathbf{\bar{H}}^{[1(R+1)]}]^{-1}\mathbf{\bar{H}}^{[12]}}_{\mathbf{T^{[1]}}}\mathbf{\bar{v}}^{[2]}_1) \subset \text{span}(\left[\begin{array}{cccc}\mathbf{\bar{V}}^{[3]}&\mathbf{0}& \cdots &\mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[4]}&\cdots&\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots\\ \mathbf{0}& \mathbf{0} & \cdots & \mathbf{\bar{V}}^{[R+1]} \end{array} \right])\\ \end{eqnarray*} This can be achieved if \begin{eqnarray}\label{s1} \mathbf{T}^{[1]}\mathbf{\bar{v}}^{[2]}_1=\left[\begin{array}{c}\mathbf{\bar{v}}^{[3]}_1\\ \mathbf{\bar{v}}^{[4]}_1\\ \vdots\\ \mathbf{\bar{v}}^{[R+1]}_1 \end{array}\right] \end{eqnarray} At Receiver $j$, $\forall j~2 \leq j\leq R+1$: \begin{eqnarray*} &\text{span}(\mathbf{\bar{H}}^{[j(j+1)]}\mathbf{\bar{v}}^{[j+1]}_1) \subset \text{span}(\left[ \mathbf{\bar{H}}^{[j1]}\mathbf{\bar{V}}^{[1]}~\cdots~\mathbf{\bar{H}}^{[j(j-1)]}\mathbf{\bar{V}}^{[j-1]}~\mathbf{\bar{H}}^{[j(j+2)]}\mathbf{\bar{V}}^{[j+2]}~\cdots~\mathbf{\bar{H}}^{[j(R+2)]}\mathbf{\bar{V}}^{[R+2]} \right])\\ &\Rightarrow \text{span}(\underbrace{[\mathbf{\bar{H}}^{[j1]}~\cdots~\mathbf{\bar{H}}^{[j(j-1)]}~\mathbf{\bar{H}}^{[j(j+2)]}~\cdots~\mathbf{\bar{H}}^{[j(R+2)]}]^{-1}\mathbf{\bar{H}}^{[j(j+1)]}}_{\mathbf{T}^{[j]}}\mathbf{\bar{v}}^{[j+1]}_1) \subset \\ &\text{span}( \begin{tiny} \left[\begin{array}{ccccccc}\mathbf{\bar{V}}^{[1]}&\mathbf{0}& \cdots &\mathbf{0}& \mathbf{0}&\cdots&\mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[2]}&\cdots&\mathbf{0}&\mathbf{0}&\cdots&\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots &\vdots &\vdots &\vdots\\ 
\mathbf{0}&\mathbf{0}& \cdots &\mathbf{\bar{V}}^{[j-1]}& \cdots & \cdots & \mathbf{0}\\ \mathbf{0}&\mathbf{0}& \cdots & \cdots & \mathbf{\bar{V}}^{[j+2]}& \cdots &\mathbf{0}\\ \vdots& \vdots & \vdots &\vdots &\vdots &\ddots &\vdots\\ \mathbf{0}& \mathbf{0} & \cdots &\mathbf{0} &\mathbf{0} &\cdots & \mathbf{\bar{V}}^{[R+2]} \\ \end{array} \right]) \end{tiny} \end{eqnarray*} This condition can be satisfied if \begin{eqnarray}\label{s2} \mathbf{T}^{[j]}\mathbf{\bar{v}}^{[j+1]}_1=\left[\begin{array}{c}\mathbf{\bar{v}}^{[1]}_{n(1,j)}\\ \vdots\\ \mathbf{\bar{v}}^{[j-1]}_{n(j-1,j)}\\ \mathbf{\bar{v}}^{[j+2]}_{n(j+2,j)}\\ \vdots\\ \mathbf{\bar{v}}^{[i]}_{n(i,j)}\\ \vdots\\ \mathbf{\bar{v}}^{[R+2]}_{n(R+2,j)} \end{array}\right] \end{eqnarray} where \begin{eqnarray*} n(i,j)=\left\{\begin{array}{ccc}j-1&~&i=1,2,j>i\\j&~&i \geq 3,j<i-1\\j-2&~& i \geq 3, j \geq i+1\end{array} \right. \end{eqnarray*} At Receiver $R+2$: \begin{eqnarray*} \text{span}(\mathbf{\bar{H}}^{[(R+2)1]}\mathbf{\bar{v}}^{[1]}_1) \subset \text{span}(\left[ \mathbf{\bar{H}}^{[(R+2)2]}\mathbf{\bar{V}}^{[2]}~\mathbf{\bar{H}}^{[(R+2)3]}\mathbf{\bar{V}}^{[3]}~\cdots~\mathbf{\bar{H}}^{[(R+2)(R+1)]}\mathbf{\bar{V}}^{[R+1]}\right])\\ \Rightarrow \text{span}(\underbrace{[\mathbf{\bar{H}}^{[(R+2)2]}~\mathbf{\bar{H}}^{[(R+2)3]}~\cdots~\mathbf{\bar{H}}^{[(R+2)(R+1)]}]^{-1}\mathbf{\bar{H}}^{[(R+2)1]}}_{\mathbf{T}^{[R+2]}}\mathbf{\bar{v}}^{[1]}_1) \subset \text{span}(\left[\begin{array}{cccc}\mathbf{\bar{V}}^{[2]}&\mathbf{0}& \cdots &\mathbf{0}\\ \mathbf{0}&\mathbf{\bar{V}}^{[3]}&\cdots&\mathbf{0}\\ \vdots& \vdots & \ddots &\vdots\\ \mathbf{0}& \mathbf{0} & \cdots & \mathbf{\bar{V}}^{[R+1]} \end{array} \right])\\ \end{eqnarray*} This can be achieved if \begin{eqnarray}\label{s3} \mathbf{T}^{[R+2]}\mathbf{\bar{v}}^{[1]}_1=\left[\begin{array}{c}\mathbf{\bar{v}}^{[2]}_{R+1}\\ \mathbf{\bar{v}}^{[3]}_R\\ \vdots\\ \mathbf{\bar{v}}^{[R+1]}_R \end{array}\right] \end{eqnarray} Note that once we pick $\mathbf{\bar{v}}^{[2]}_1$, all
other vectors can be solved from \eqref{s1}, \eqref{s2}, \eqref{s3}. $\mathbf{\bar{v}}^{[2]}_1$ can be chosen randomly according to a continuous distribution as long as no entry of $\mathbf{\bar{v}}^{[2]}_1$ is equal to zero. Note that the above construction only specifies $\mathbf{\bar{v}}^{[i]}_1,\cdots,\mathbf{\bar{v}}^{[i]}_{R}$, $\forall i=1,2,\ldots,R+2$, and $\mathbf{\bar{v}}^{[2]}_{R+1}$. The remaining $\mathbf{\bar{v}}^{[i]}_{R+1},\cdots,\mathbf{\bar{v}}^{[i]}_{d_i}$, $\forall i=1,3,\ldots,R+2$, and $\mathbf{\bar{v}}^{[2]}_{R+2},\cdots,\mathbf{\bar{v}}^{[2]}_{d_2}$ can be chosen randomly from a continuous distribution. Since all the vectors are chosen independently of the direct channel matrices $\mathbf{\bar{H}}^{[ii]}$ and all entries of $\mathbf{\bar{V}}^{[i]}$ are nonzero almost surely, the desired signal vectors are linearly independent of the interference vectors at each receiver. As a result, each receiver can decode its message by zero forcing the interference to achieve $d_i$ degrees of freedom for a total of $RM\lceil\frac{R+2}{M}\rceil+1$ degrees of freedom on the $\lceil\frac{R+2}{M}\rceil$-symbol extension channel. Therefore, $RM+\frac{1}{\lceil\frac{R+2}{M}\rceil}$ degrees of freedom per channel use can be achieved on the original channel.
\section{Introduction} \label{sec:introduction} \IEEEPARstart{S}{cene} parsing~\cite{zhao2017pyramid}\cite{zhou2017scene}\cite{hung2017scene}\cite{zhang2017scale} aims to divide the entire scene into different segments and predict the semantic category for each of them. It is a fundamental task in computer vision and image processing, challenged by the complexity of natural scenes that usually contain multiple elements of various categories, including discrete objects (e.g., person, cat) and stuff (e.g., sky, river, grass). The elements within a scene may be spatially dependent upon each other. For example, a ship usually appears on the sea rather than on the road. Such dependencies of spatial positions can be exploited to boost the prediction accuracy further. Mainstream scene parsing models built on Fully Convolutional Networks~\cite{Long_2015_CVPR} incorporate carefully designed modules to exploit spatial context information. For example, Deeplabv2~\cite{chen2017deeplab} uses an Atrous Spatial Pyramid Pooling (ASPP) module to sample feature maps in parallel with different atrous rates to enlarge the receptive field of filters; PSPNet~\cite{zhao2017pyramid} performs pooling operations at multiple grid scales for the same goal. With a larger receptive field, these networks are able to extract broader scales of spatial context information. However, these methods model the dependencies between positions in an implicit way. \begin{figure}[t] \centering \includegraphics[width=1.00\linewidth]{figure/patch_attn.pdf} \caption{Visualization of attention maps. (a) Original images. (b) Attention maps for a certain position (green cross in (a)) computed with conventional self-attention. (c) Attention maps for a certain position (green cross in (a)) when self-attention is restricted in local patches. 
The attention maps of self-attention in local patches focus on surrounding areas that closely correlate with the specified position in (a).} \label{fig:patch_attn} \end{figure} The self-attention mechanism~\cite{wang2018non} is able to capture long-range dependencies between positions explicitly and has been applied to scene parsing~\cite{fu2019dual}\cite{zhang2019co} with a remarkable performance boost. The key idea behind self-attention is that the response at a certain position is a weighted sum of the features at all positions. In this way, all the positions are related to each other and provide the network with a global receptive field. Long-range dependencies captured by self-attention can be combined with short-range ones captured by local convolution, leading to rich context information for dense labeling problems. However, long-range dependencies do not always work well for scene parsing tasks, since a position is often less correlated with positions far away from it than with nearby ones. Moreover, information from distant positions may not be beneficial to building discriminative features. In Fig.~\ref{fig:patch_attn}(b), we visualize the attention map between a certain position and all the positions in the image computed in the self-attention process, where brighter color represents higher attention weight. The specified position is denoted as a green cross in Fig.~\ref{fig:patch_attn}(a). We find that in the conventional self-attention mechanism, the feature of this position would aggregate information from a wide area of the input image. For example, in the second row of (b), the position on the cat receives information from distant positions in the image, like those on the window and curtain. However, there is no apparent correlation between the window, curtain, and cat. Thus the long-range dependencies captured from these positions are not useful for the model to classify a certain position on the cat.
When we restrict self-attention to a local patch of the image, the visualized attention map is more concentrated on the surrounding area of the specified position. For example, in the second row of Fig.~\ref{fig:patch_attn} (c), self-attention is restricted to the bottom-right $1/4$ patch of the image. The attention map between the specified position and all the positions within the patch mainly focuses on the head and body parts of the cat, indicating that useful middle-range dependencies among positions are captured. Benefiting from the close correlations within the same category, context information aggregated from these dependencies can serve as more reliable guidance for the model to classify the specified position. Based on this observation, we devise a novel Middle-Range (MR) branch which restricts the self-attention mechanism to local patches of the input feature in order to fill the gap between long-range and short-range dependencies for complete context extraction. Furthermore, we analyze the attention map generated with conventional self-attention and find that each position contributes different attention weights to others for context aggregation. For each position, the total value of the attention weights that it contributes to others reveals its correlation with other positions as well as its importance to the global context. A larger value implies that the position has stronger correlations with most of the other positions. Thus, the features of positions contributing higher attention weights to others encode the common patterns of the whole image, including the main elements appearing in the scene, large-area continuous background, etc. These patterns contain useful global context information which is crucial to scene understanding.
By emphasizing features of the positions with larger contributions, long-range dependencies can be captured more accurately and adaptively to complicated scene elements, which enhances the aggregation of global context by self-attention. We instantiate this idea with a Reweighed Long-Range (RLR) branch to modulate feature responses according to the attention-weight contribution of each position. With the newly proposed MR and RLR branches, we build an Omni-Range Dependencies Network (ORDNet) in which short-range, middle-range, and reweighed long-range dependencies collaborate seamlessly to achieve adaptability to diverse spatial region contents and relationships in natural scene images. The ORDNet is general and can be applied to any FCN backbone for learning more discriminative feature representations. Our main contributions are summarized as follows: \begin{itemize} \item We devise a Middle-Range (MR) branch which explicitly captures middle-range dependencies within local patches of the scene image, filling the gap between long-range and short-range dependencies. \item We also propose a Reweighed Long-Range (RLR) branch to emphasize features of the positions which encode common patterns, so that more accurate and adaptive long-range dependencies can be captured. \item With the above two branches, we develop a novel Omni-Range Dependencies Network (ORDNet) which effectively integrates short-range, middle-range and reweighed long-range dependencies to extract comprehensive context information for accurate scene parsing. Our ORDNet outperforms previous state-of-the-art methods on three popular scene parsing benchmarks, including PASCAL-Context~\cite{mottaghi2014role}, COCO Stuff~\cite{caesar2018coco} and ADE20K~\cite{zhou2017scene} datasets, which well demonstrates its effectiveness. \end{itemize} \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{figure/pipeline.pdf} \caption{The pipeline of the proposed Omni-Range Dependencies Network (ORDNet).
Given an input image, the extracted feature $X$ of a CNN backbone is fed into a Middle-Range (MR) branch and a Reweighed Long-Range (RLR) branch to capture the middle-range and reweighed long-range dependencies respectively. The outputs of these two branches are then concatenated along the channel dimension and fused by a $1 \times 1$ convolution. An identity skip connection of $X$ is added to ease optimization. The fused feature is fed into an FCN Head to predict the logit map and then upsampled $8$ times to obtain the final parsing mask. } \label{fig:pipeline} \end{figure*} \section{Related work} \label{sec:related_work} \subsection{Semantic Segmentation} The goal of semantic segmentation is to assign category labels to the pixels of foreground objects and stuff in the scene, rather than segmenting the entire scene as scene parsing does. By expanding the set of pre-defined categories that the model segments, semantic segmentation serves as a basic technology for scene parsing. In recent years, remarkable progress has been achieved based on Fully Convolutional Networks~\cite{Long_2015_CVPR} (FCNs). FCN replaces the fully-connected layers of an image classification network (e.g., VGG16~\cite{simonyan2014very}) with convolution layers and introduces transposed convolution and skip layers to predict pixel-level labels. The many pooling layers in FCN increase the receptive field of convolution filters but meanwhile reduce the resolution of feature maps, leading to inaccurate semantic masks. In order to maintain the resolution of feature maps while enjoying the increased receptive field of convolution filters, DeepLab~\cite{chen2014semantic} integrates atrous convolution into CNNs, which substantially boosts segmentation performance and has become a de-facto component of later segmentation methods.
Many deep models~\cite{kuanar2019adaptive}\cite{kuanar2018cognitive}\cite{kuanar2019low} adopt various approaches to aggregate local spatial context information to refine feature representations and have achieved strong performance. Later works further propose to aggregate important multi-scale spatial context information based on the last feature map of an FCN backbone. For example, DeepLab v2~\cite{chen2017deeplab} employs parallel atrous convolutions with different atrous rates, called ASPP, to capture context information of multiple receptive fields. DeepLab v3~\cite{chen2017rethinking} further integrates image-level features into ASPP to obtain a global receptive field. PSPNet~\cite{zhao2017pyramid} performs pooling operations at multiple grid scales in order to aggregate multi-scale contextual information. In addition to extracting context information from feature maps, EncNet~\cite{zhang2018context} uses a Context Encoding Module which exploits semantic category prior information of the scenes to provide global contexts. Recent DANet~\cite{fu2019dual} and CFNet~\cite{zhang2019co} exploit the self-attention mechanism to effectively capture long-range dependencies, outperforming previous multi-scale context aggregation methods in semantic segmentation and scene parsing. InterlacedSSA~\cite{Huang2019InterlacedSS} proposes a factorized self-attention approach to approximately capture long-range dependencies with low computational costs, which achieves comparable performance with DANet and CFNet. Different from the above methods, in this paper we further propose to capture middle-range dependencies and reweighed long-range dependencies to provide richer semantic information than vanilla self-attention. Our Omni-Range Dependencies Network (ORDNet) can fill the semantic gap between original long-range and short-range dependencies, and also capture more accurate long-range dependencies by feature reweighing, achieving more comprehensive scene understanding.
\subsection{Attention Mechanism} The attention mechanism was first introduced in~\cite{bahdanau2014neural} for neural machine translation, and later widely applied to various tasks such as machine translation~\cite{luong2015effective}, VQA~\cite{lu2016hierarchical}\cite{zhu2016visual7w}\cite{yang2016stacked} and image captioning~\cite{xu2015show}. \cite{vaswani2017attention} is the first work to apply self-attention for capturing long-range dependencies within input sentences, achieving noticeably boosted performance in machine translation. In~\cite{wang2018non}, the self-attention mechanism is further extended to vision tasks and a non-local network is proposed to capture long-range dependencies. For image tasks, self-attention methods compute the response at a position as a weighted sum of the features at all positions in the input feature maps, so that the receptive field for the current position can go beyond local convolution kernels to cover the whole feature map. The self-attention mechanism is widely adopted in vision tasks such as semantic segmentation~\cite{fu2019dual}~\cite{huang2018ccnet}, GANs~\cite{zhang2018self} and image de-raining~\cite{li2018non}. Inspired by~\cite{wang2018non}, we propose a new self-attention architecture to capture omni-range dependencies of positions, where the Middle-Range (MR) branch restricts self-attention to patches to model middle-range dependencies and the Reweighed Long-Range (RLR) branch further emphasizes features of positions which encode common patterns of the image to obtain more accurate long-range dependencies. Compared with previous works, our method conforms better to practical spatial relations between semantic regions and achieves higher performance on several benchmarks. In addition to self-attention, researchers also explore other attention methods that refine feature maps by adjusting their scales.
SENet~\cite{hu2018squeeze} utilizes a squeeze-and-excitation process to recalibrate feature channels with signals pooled from the entire feature maps. The first squeeze operator conducts global average pooling to generate a channel descriptor as the global information embedding. The second excitation operator maps the channel descriptor to a set of channel-specific weights with two successive fully connected layers. Finally, the channel-specific weights are multiplied with the original features to rescale the channel responses. CBAM~\cite{woo2018cbam} and BAM~\cite{park2018bam} apply the SE operation to both channel and spatial dimensions. Our proposed reweighed long-range branch serves as a spatial recalibration module to some extent. Compared with the spatial branch in CBAM, which is based on the single response of the current position, our RLR branch reweighs the feature responses according to correlations among all positions of the entire feature map, so that positions encoding common patterns and main elements of the scene can be emphasized to form a more discriminative feature representation. \begin{figure*}[t] \centering \includegraphics[width=0.75\linewidth]{figure/mr_branch.pdf} \caption{ The pipeline of the proposed Middle-Range (MR) branch. This branch contains 3 steps. First, the input feature $X \in \mathbb{R} ^ {H \times W \times C}$ is divided into $2 \times 2$ patches, i.e., [$X_1$, $X_2$, $X_3$, $X_4$], which are ordered by rows. Second, each patch is enhanced by a self-attention module separately to get $Y_{m} \in \mathbb{R}^{4 \times \frac{H}{2} \times \frac{W}{2} \times C}$ as the intermediate output. The operation of self-attention is the same as that described in \ref{sec:self-attention revisiting}. Third, $Y_{m}$ is recovered to the same size as the input feature to obtain $Z_{m}$ as the final output.
} \label{fig:mr_branch} \end{figure*} \section{Method} \label{sec:method} \subsection{Revisiting Self-Attention} \label{sec:self-attention revisiting} Self-attention mechanism computes the response at a position as a weighted sum of the features at all positions in the input feature maps. In this way, each position of the input feature can interact with others regardless of their spatial distance. The network can then effectively capture long-range dependencies among all the spatial positions. The overall workflow of self-attention is illustrated in the top area of Fig.~\ref{fig:mr_branch}. Given an input feature $X=[x^1; x^2; ...; x^{HW}]\in\mathbb{R}^{HW \times C}$ and an output feature $Y=[y^1; y^2; ...; y^{HW}]\in\mathbb{R}^{HW \times C}$, which are both reshaped to the matrix form, self-attention mechanism computes the output as \begin{equation} \label{eq:self-attention formulation} y^i = \frac{1}{N(X)}\sum_{j=1}^{HW}attn^{ij}v(x^j), \end{equation} where $i$ is the index of a position of the output feature $Y$ and $j$ is the index enumerating all the positions of the input feature $X$. $H, W$ and $C$ are the height, width, and channel dimensions of $X$. $N(X)$ serves as a normalization factor which is set as $HW$. $v(\cdot)$ is the value transform function implemented as a $1 \times 1$ convolution, $v(x^j) \in \mathbb{R}^{C_v}$ is the transformed feature. $attn^{ij}$ indicates the attention weight contributed by position $j$ to position $i$, which is defined as \begin{equation} \label{eq:relationship formulation} {attn^{ij}=q(x^i)^\top k(x^j)}, \end{equation} where $q(\cdot)$ and $k(\cdot)$ are the query and key transform functions, $q(x^i) \in \mathbb{R}^{C_q}$ and $k(x^j) \in \mathbb{R}^{C_k}$ are transformed features respectively. In this work, we implement both $q(\cdot)$ and $k(\cdot)$ as $1 \times 1$ convolutions. Eq.~(\ref{eq:self-attention formulation}) computes the response at query position $i$ as a weighted sum of the features of all positions. 
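For concreteness, the computation in Eqs.~(\ref{eq:self-attention formulation}) and (\ref{eq:relationship formulation}) can be sketched in a few lines of NumPy (a minimal illustration with random matrices standing in for the learned $1 \times 1$ convolutions $q$, $k$, $v$; this is our sketch, not the authors' implementation):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Self-attention as in Eq. (1)-(2): attn[i, j] = q(x^i)^T k(x^j) and
    y^i = (1 / HW) * sum_j attn[i, j] * v(x^j).  The 1x1 convolutions q, k, v
    act on each position independently, so they reduce to matrix products."""
    HW = X.shape[0]
    Q, K, V = X @ Wq, X @ Wk, X @ Wv   # (HW, Cq), (HW, Ck), (HW, Cv)
    attn = Q @ K.T                     # (HW, HW) attention map
    Y = (attn @ V) / HW                # normalization factor N(X) = HW
    return Y, attn

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 8))             # HW = 16 positions, C = 8
Wq = rng.standard_normal((8, 4))             # Cq = 4
Wk = rng.standard_normal((8, 4))             # Ck = Cq = 4
Wv = rng.standard_normal((8, 6))             # Cv = 6
Y, attn = self_attention(X, Wq, Wk, Wv)
print(Y.shape, attn.shape)                   # (16, 6) (16, 16)
```

Note that, following Eq.~(\ref{eq:self-attention formulation}), the attention map is normalized by $HW$ rather than by a softmax.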
Conventional self-attention, as described above, can effectively capture long-range dependencies, which well complements convolution layers that capture short-range dependencies within a local region. However, exploiting context information from too distant positions is sometimes problematic since a position is often less correlated with those far away from it. A position tends to have stronger correlations with those that are moderately near to it; the middle-range dependencies contained within this context are what we aim to capture. Therefore, it is necessary to fill the gap between short-range and long-range dependencies and capture various correlations among entities in the scene. Also, during the self-attention operation, the features of the positions which contribute significant attention weights to others usually encode common patterns contained in the scene. These patterns are beneficial to the comprehensive understanding of sophisticated scenes and deserve appropriate emphasis for better capturing long-range dependencies. Based on these two observations, we propose a novel Omni-Range Dependencies Network (ORDNet) which consists of a Middle-Range (MR) branch and a Reweighed Long-Range (RLR) branch to mine middle-range dependencies and refine long-range dependencies respectively for better scene understanding (\textit{see Fig.~\ref{fig:pipeline}}). We will elaborate on the MR branch in~\ref{sec:mr} and the RLR branch in~\ref{sec:rlr}. The overall architecture of ORDNet will be described in detail in~\ref{sec:ordnet}. \subsection{Middle-Range Branch} \label{sec:mr} Conventional self-attention exploits all positions of a feature map to update the feature of each query position. By analyzing correlation patterns among ground-truth mask patches in Fig.~\ref{fig:patch_vis}, we observe that a query position tends to have stronger correlations with the positions near to it compared with those which are distant.
To demonstrate this, we randomly select $1,000$ images from PASCAL-Context~\cite{mottaghi2014role} dataset and divide the ground-truth masks into $2 \times 2$ and $4 \times 4$ patches along the height and width dimensions. We further calculate correlations inside and between patches by taking the average of corresponding similarity values: \begin{equation} \label{eq:correlation function} Corr(p^m, p^n) = \sum_{i \in \Omega_{p^m}, j \in \Omega_{p^n}}sim(l^i, l^j) / (\vert\Omega_{p^m}\vert \vert\Omega_{p^n}\vert), \end{equation} where $sim(\cdot,\cdot)$ computes the similarity between two positions, which is either 1 or 0. We define the similarity between a pair of positions as whether their semantic labels are the same: \begin{equation} \label{eq:similarity matrix} sim(l^i, l^j)= \begin{cases} 1& l^i = l^j \\ 0& l^i \neq l^j. \end{cases} \end{equation} Here $l^i$ and $l^j$ denote the ground-truth category labels of position $i$ and $j$; $p^n$ and $p^m$ represent the $n$-th and $m$-th patch, and $\Omega_{p^n}$ and $\Omega_{p^m}$ denote the set of all the positions belonging to them respectively; $\vert \cdot \vert$ means cardinality of their sets. The visualized correlation matrices are shown in Fig.~\ref{fig:patch_vis} (a) and (b). We can observe that the values of the top left and bottom right elements, as well as their surroundings, are much larger (darker color) than those of the rest regions, which indicates intra-patch correlations are stronger than inter-patch ones. Moreover, pixels within the same or near patches tend to share the same label. Visualization of the attention map in Fig.~\ref{fig:patch_attn} also demonstrates that conducting self-attention in local patches can capture middle-range dependencies among nearby similar positions instead of original long-range dependencies among all the positions. 
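The statistic of Eqs.~(\ref{eq:correlation function}) and (\ref{eq:similarity matrix}) is easy to reproduce; the sketch below (ours, not the authors' script) computes the patch-correlation matrix for a single toy label map instead of the $1{,}000$ sampled masks:

```python
import numpy as np

def patch_correlation(labels, grid=2):
    """Corr(p^m, p^n): fraction of position pairs (i in p^m, j in p^n) whose
    labels agree, i.e. the mean of the 0/1 similarity sim(l^i, l^j)."""
    H, W = labels.shape
    ph, pw = H // grid, W // grid
    # Flatten each patch's labels; patches are ordered by rows, as in the paper.
    patches = [labels[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw].ravel()
               for r in range(grid) for c in range(grid)]
    n = grid * grid
    corr = np.zeros((n, n))
    for m in range(n):
        for k in range(n):
            corr[m, k] = np.mean(patches[m][:, None] == patches[k][None, :])
    return corr

# Toy 4x4 label map: the top-left patch is class 0, the rest is class 1.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [1, 1, 1, 1],
                   [1, 1, 1, 1]])
corr = patch_correlation(labels, grid=2)
print(corr[0, 0], corr[0, 1], corr[1, 2])   # 1.0 0.0 1.0
```

On this toy map the intra-patch correlations (diagonal) are maximal while the correlation between the top-left patch and the others is zero, mirroring the pattern reported in Fig.~\ref{fig:patch_vis}.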
Middle-range dependencies captured from local patches are able to provide more relevant context information than long-range dependencies, considering that intra-patch correlations are higher than inter-patch ones. We thus develop a Middle-Range (MR) branch to capture such more informative middle-range dependencies to complement long-range dependencies. As illustrated in Fig.~\ref{fig:mr_branch}, our MR branch explicitly divides the input feature maps into $2\times 2$ patches and conducts the self-attention operation within each patch separately. The output $Y_{m}$ is then recovered to the original spatial dimensions. We narrow the self-attention range from the entire feature map to the patch level so that the local nature of the feature can be exploited. Benefiting from the complementarity among short-range, middle-range and long-range dependencies, the network can adapt to diverse spatial relationships between different scene elements. We use $2 \times 2$ patches for all the experiments. We also tried dividing features into $4 \times 4$ patches but found diminishing returns, possibly due to the limited receptive field. Experimental results are shown in Tab.~\ref{tab:4x4patch}. \begin{figure}[!htbp] \centering \includegraphics[width=1.00\linewidth]{figure/patch_vis.pdf} \caption{Visualization of intra-patch and inter-patch correlations. (a) Correlation values computed on $2 \times 2$ patches. (b) Correlation values computed on $4 \times 4$ patches. The patches are ordered along rows. Darker color denotes a larger correlation value. The values of the top-left and bottom-right elements as well as their surroundings are much larger than those of the rest regions. This means the pixels within the same or closer patches tend to have the same label.
Furthermore, intra-patch correlation is usually stronger than inter-patch correlation.} \label{fig:patch_vis} \end{figure} \subsection{Reweighed Long-Range Branch} \label{sec:rlr} During the self-attention process, the multiplication between query and key features generates an attention map where each element indicates the attention weight between a pair of positions. We observe that some positions contribute larger attention weights to other positions, which implies stronger correlations between these positions and other ones. Features of these positions usually encode common patterns like main elements appearing in the scene and large-area continuous background. These positions may be crucial to the global context during the self-attention process. By emphasizing features of these essential positions, the long-range dependencies modeled by self-attention can become more accurate. Therefore, we propose a Reweighed Long-Range (RLR) branch to selectively enhance the features of those positions which contribute large attention weights to others. \begin{figure}[htbp] \centering \includegraphics[width=1.00\linewidth]{figure/rlr_branch.pdf} \caption{Our proposed Reweighed Long-Range (RLR) branch. A self-attention module takes the backbone feature $X$ as input and outputs the attended feature $Y_{l}$ along with the attention map $Attn \in \mathbb{R}^{HW \times HW}$. Then $Attn$ is summed up along each column and fed into a sigmoid function to get the attention contribution vector, which is then reshaped to $H \times W$ to obtain the global attention weight contribution map $Attn_g \in \mathbb{R}^{H \times W}$. The output feature $Z_{l}$ is obtained by multiplying $Attn_g$ with $Y_{l}$ element-wise.
Note that the difference between $Attn$ and $Attn_g$ is that each element of $Attn$ denotes the attention weight between a pair of positions in $X$, whereas each element of $Attn_g$ denotes the summed contribution of the current position to all the positions.} \label{fig:rlr_branch} \end{figure} As illustrated in Fig~\ref{fig:rlr_branch}, given an input feature $X \in \mathbb{R}^{H \times W \times C}$, we first feed it to a self-attention module to output an attended feature $Y_{l} \in \mathbb{R}^{H \times W \times C}$ via Equation~(\ref{eq:self-attention formulation}) and the attention map $Attn \in \mathbb{R}^{HW \times HW}$ via Equation~(\ref{eq:relationship formulation}). $attn^{ji} \in Attn$ can be viewed as the attention weight contributed by position $i$ to position $j$. We compute the global attention weight contribution at position $i$ by summing up all the attention weights it contributes to other positions and further employ a simple gated mechanism as below: \begin{equation} \label{eq:spatial attention} attn^{i}_{g} = \sigma(\sum_{j=1}^{HW}attn^{ji}). \end{equation} Here $\sigma(\cdot)$ is the sigmoid function, applied to normalize the scale of $Attn_g$. The spatial dimensions of the global attention weight contribution map $Attn_g=[attn^1_g; attn^2_g; ...; attn^{HW}_g]$ are $H\times W$ after the reshaping operation. Through Equation~(\ref{eq:spatial attention}) we obtain $attn^{i}_g$, i.e., the normalized attention weight contribution of position $i$ to all the positions, which measures the effect of position $i$ on the global context of self-attended features. We finally multiply $Y_{l} \in \mathbb{R}^{H \times W \times C}$ with $Attn_g \in \mathbb{R}^{H \times W}$ element-wise using the broadcasting rule to get $Z_{l} \in \mathbb{R}^{H \times W \times C}$ as the output of this branch: \begin{equation} \label{eq:selective attention apply} Z_{l} = Y_{l} * Attn_g.
\end{equation} Multiplied with the global attention weight contribution $Attn_g$, the feature at each position is reweighed according to its contribution to other positions. Features of positions which encode common patterns of the scene can thus be emphasized for better representation. Experimental results in Table~\ref{tab:Ablation} show that our RLR branch improves the performance without introducing extra parameters. \subsection{Omni-Range Dependencies Network} \label{sec:ordnet} We here explain our proposed Omni-Range Dependencies Network (ORDNet) in detail. The architecture of our ORDNet is illustrated in Fig.~\ref{fig:pipeline}. We use ResNet101~\cite{he2016deep} pretrained on ImageNet~\cite{deng2009imagenet} as the backbone network to extract visual features. To enlarge the receptive field as well as maintain feature resolution, we replace the strided convolutions in the last two stages of ResNet with atrous convolutions, with stride set to $1$ and dilation rates set to $2$ and $4$ respectively. We also follow~\cite{zhang2018context} to replace the first $7 \times 7$ convolution of ResNet with $3$ consecutive $3 \times 3$ convolutions. The resolution of the output feature from the backbone network $X \in \mathbb{R}^ {H \times W \times C}$ is $1/8$ of the original image. $X$ is then fed into our proposed two branches to capture middle-range and reweighed long-range dependencies in the visual feature. After getting the output features $Z_{m}$ and $Z_{l}$ from these two branches, we concatenate them along the channel dimension. Then a $1\times 1$ convolution is applied on the concatenated feature to reduce its channel dimension to the same number as the input feature. We also add a shortcut connection from the input feature $X$ to the fused output of the two branches to ease optimization. The fused output feature is then passed into an FCN head for final mask prediction, and we further upsample the prediction result by $8$ times to match the original resolution.
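Putting the pieces together, the data flow of the two branches and their fusion can be sketched as follows. This is a schematic NumPy illustration assuming identity query/key/value transforms and replacing the learned $1 \times 1$ fusion convolution with a simple average of the two branch outputs; it shows the wiring, not the trained model:

```python
import numpy as np

def attention(X):
    """Dot-product self-attention with identity q/k/v transforms (for brevity)."""
    HW = X.shape[0]
    attn = X @ X.T                    # (HW, HW) attention map
    return (attn @ X) / HW, attn

def ordnet_block(X, H, W):
    """Sketch of the ORDNet block: MR branch (attention inside 2x2 patches),
    RLR branch (global attention reweighed per Eq. (5)-(6)), fusion + skip."""
    C = X.shape[-1]
    F = X.reshape(H, W, C)
    # Middle-Range branch: self-attention restricted to each of the 2x2 patches.
    Zm = np.zeros_like(F)
    ph, pw = H // 2, W // 2
    for r in range(2):
        for c in range(2):
            patch = F[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw].reshape(-1, C)
            Zm[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = \
                attention(patch)[0].reshape(ph, pw, C)
    # Reweighed Long-Range branch: attn_g^i = sigmoid(sum_j attn^{ji}).
    Yl, attn = attention(X)
    attn_g = 1.0 / (1.0 + np.exp(-attn.sum(axis=0)))   # (HW,) column sums
    Zl = Yl * attn_g[:, None]                          # broadcast over channels
    # Fusion (learned 1x1 conv replaced by an average) plus identity skip.
    return X + 0.5 * (Zm.reshape(-1, C) + Zl)

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 4))     # H = W = 4 positions, C = 4 channels
out = ordnet_block(X, 4, 4)
print(out.shape)                     # (16, 4)
```

The residual form of the last line is what makes the block a drop-in addition between a backbone and an FCN head.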
In practice, our proposed two branches can be easily plugged into a segmentation network due to their residual nature and further enhance feature representations. The concurrent work InterlacedSSA~\cite{Huang2019InterlacedSS} proposes a factorized self-attention method similar to our MR branch for semantic segmentation. The motivation of InterlacedSSA is to decrease the computation/memory cost of the self-attention mechanism by factorizing it into two consecutive self-attention processes occurring in patches. In contrast, the motivation of our MR branch is to demonstrate the effectiveness of capturing middle-range dependencies by restricting self-attention to feature patches. Moreover, the factorized self-attention in InterlacedSSA still aims to capture long-range dependencies among positions by stepwise information propagation, whereas the factorized self-attention in our MR branch aims to explicitly capture middle-range dependencies among positions to fill the semantic gap between long-range and short-range dependencies for more comprehensive scene understanding. It serves as an additional information source to complement reweighed self-attention and normal convolutions. In summary, our ORDNet makes self-attention more comprehensive in aggregating information, while InterlacedSSA reduces the computational budget of self-attention. InterlacedSSA makes a step forward over our middle-range branch, and its contributions and ours are complementary.
\subsection{Loss Functions} Our full loss function $\mathcal{L}_{Full}$ contains two parts namely standard cross-entropy loss $\mathcal{L}_{CE}$ and Lovasz-hinge loss~\cite{yu2015learning} $\mathcal{L}_{IoU}$, which are formulated as follows: \begin{equation} \label{ce_loss} \mathcal{L}_{CE}(y^*, \tilde{y})=-\sum_{k = 1}^{K}{y_k^* \log{\tilde{y_k}}}, \end{equation} \begin{equation} \begin{aligned} \label{iou_loss} & \mathcal{L}_{IoU}(y^*, \tilde{y}) = -Jaccard(y^*, \tilde{y}) \\ & =-\sum_{k = 1}^{K}\frac{|(\arg\max (y^*) == k) \bigcap (\arg\max (\tilde{y}) == k)|}{|(\arg\max (y^*) == k) \bigcup (\arg\max (\tilde{y}) == k)|}, \end{aligned} \end{equation} \begin{equation} \label{full_loss} \mathcal{L}_{Full}(y^*, \tilde{y})=\alpha_1\mathcal{L}_{CE}(y^*, \tilde{y}) + \alpha_2\mathcal{L}_{IoU}(y^*, \tilde{y}), \end{equation} where $y^*$ denotes ground-truth mask, $\tilde{y}$ denotes predicted logits, $\alpha_1$ and $\alpha_2$ are weights for different loss terms, $K$ denotes the number of semantic categories.\newline \section{Experiments} \subsection{Experimental Setup} \textbf{Training: } We conduct all the experiments using PyTorch~\cite{paszke2019pytorch} on three scene parsing benchmarks, including PASCAL-Context~\cite{mottaghi2014role}, COCO Stuff~\cite{caesar2018coco} and ADE20K~\cite{zhou2017scene}. We also evaluate our model on PASCAL VOC 2012 dataset~\cite{Everingham10} for semantic segmentation task. We choose dilated FCN \cite{yu2015multi} as the baseline model and plug our MR and RLR branches between the ResNet backbone and FCN head. The output prediction is bilinearly upsampled by $8$ times to match the input size. We initialize the backbone with an ImageNet~\cite{krizhevsky2012imagenet} pretrained model and other layers including MR branch, RLR branch and the FCN head are randomly initialized. We adopt the SGD optimizer with momentum set to $0.9$ and weight decay set to $0.0001$ to train the network. 
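As a concrete reading of the objective $\mathcal{L}_{Full}$ above, the sketch below (ours, with a hypothetical helper name, not the released code) evaluates the combined loss on flattened per-pixel logits; the Jaccard term is computed from hard $\arg\max$ predictions as written in Eq.~(\ref{iou_loss}), whereas the Lovasz-hinge loss used in training is a differentiable surrogate of this term:

```python
import numpy as np

def full_loss(y_true, logits, alpha1=1.0, alpha2=1.0):
    """L_Full = alpha1 * L_CE + alpha2 * L_IoU, where L_IoU is the negative
    class-wise Jaccard index of the hard argmax predictions."""
    K = logits.shape[-1]
    # Per-pixel softmax cross-entropy, averaged over positions.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    ce = -np.mean(np.log(probs[np.arange(len(y_true)), y_true] + 1e-12))
    # Negative Jaccard index accumulated over the K categories.
    pred = logits.argmax(axis=-1)
    iou = 0.0
    for k in range(K):
        union = np.sum((y_true == k) | (pred == k))
        if union > 0:
            iou -= np.sum((y_true == k) & (pred == k)) / union
    return alpha1 * ce + alpha2 * iou

y = np.array([0, 1, 1, 0])                            # ground-truth labels
good = np.array([[5., 0.], [0., 5.], [0., 5.], [5., 0.]])   # correct logits
bad = np.array([[0., 5.], [5., 0.], [5., 0.], [0., 5.]])    # wrong logits
print(full_loss(y, good) < full_loss(y, bad))         # True
```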
We use the polynomial learning rate schedule $lr=baselr*{(1-\frac{iter}{total\_iter})}^{power}$. The base learning rate is set to $0.004$ for the ADE20K dataset and $0.001$ for other datasets. The channel dimension $C$ of the input feature $X$ for our MR branch and RLR branch is $2048$. Self-attention contains three linear layers to transform the input feature, namely the query (q), key (k) and value (v) layers. To reduce the memory cost, we set the output channel dimensions of the query and key layers to $C_q = C_k = 256$ and set $C_v = 512$ for all the self-attention modules we use. Our MR branch adopts $2 \times 2$ patches to capture middle-range dependencies. We conduct all the experiments on $4$ NVIDIA TITAN RTX GPU cards. For data augmentation, we apply random flipping, random cropping and random resizing between $0.5\times$ and $2\times$ for all datasets. When comparing with other methods, we adopt both the standard cross-entropy loss and the Lovasz-hinge loss~\cite{yu2015learning} as our full loss function to train the network. Loss weights $\alpha_1$ and $\alpha_2$ are both set to $1.0$. For the whole-scene parsing dataset ADE20K, we follow~\cite{zhang2018context} and the standard competition benchmark~\cite{zhou2017scene} to calculate mIoU by ignoring background pixels. When training our ORDNet on the PASCAL-Context dataset, we also ignore the pixels of the background category following~\cite{zhang2018context}\cite{fu2019dual}. \textbf{Evaluation: } As prior works~\cite{zhang2018context}\cite{fu2019dual}\cite{zhang2019co} show that employing multi-scale testing during evaluation brings significant performance gains in semantic segmentation and scene parsing, we follow the best practice in~\cite{zhang2018context} to average the predictions of different scales as the final results. During evaluation, the input image is first resized according to a set of different scales $\{0.5, 0.75, 1.0, 1.25, 1.5, 1.75\}$, then cropped to the pre-defined image size used for training.
The cropped image is randomly flipped and fed into the segmentation network. The output logits are cropped and averaged across the above scales as the final prediction. For fairness, we adopt multi-scale testing when comparing with state-of-the-art methods. Mean Intersection over Union (mIoU) and pixel accuracy (pixAcc) are adopted as evaluation metrics. \begin{figure*}[!htbp] \centering \includegraphics[width=0.65\linewidth]{figure/quality.pdf} \caption{Qualitative comparison with the baseline Dilated FCN and Basic SA on the PASCAL-Context test set. (a) Original image. (b) Ground-truth masks. (c) Results of Dilated FCN. (d) Results of Basic SA. (e) Results of our ORDNet. (f) Legend of semantic categories.} \label{fig:result} \end{figure*} \subsection{Ablation Studies} We conduct both quantitative and qualitative ablation experiments on the test set of the PASCAL-Context dataset~\cite{mottaghi2014role} to verify the effectiveness of our MR and RLR branches and their variants. We train our model for $50$ epochs with a batch size of $16$. Following~\cite{zhang2019co}, we use the most common $59$ categories of PASCAL-Context without the background category for ablation studies. \begin{table}[!htbp] \caption{Ablation studies on PASCAL-Context test set. mIoU is calculated on $59$ categories w/o background.} \label{tab:Ablation} \centering \begin{tabular}{lccc} \toprule Method & Backbone & mIoU(\%) & pixAcc(\%) \\ \midrule Dilated FCN \cite{yu2015multi} & ResNet50 & 45.30 & 75.34 \\ Basic SA \cite{wang2018non} & ResNet50 & 49.45 & 78.61 \\ Basic SA + MR & ResNet50 & 50.26 & 78.95 \\ Basic SA + RLR & ResNet50 & 50.28 & 78.89 \\ ORDNet & ResNet50 & \textbf{50.67} & \textbf{79.35} \\ ORDNet & ResNet101 & \textbf{53.03} & \textbf{80.24} \\ \bottomrule \end{tabular} \end{table} \textbf{MR and RLR branches: } Experimental results of our proposed two branches are illustrated in Table~\ref{tab:Ablation}. The dilated FCN baseline yields an mIoU of $45.30\%$.
After combining with basic self-attention (Basic SA)~\cite{wang2018non}, the mIoU increases significantly by $4.15\%$, which forms a strong baseline for our method. Upon the basic self-attention mechanism (Basic SA), adding our Middle-Range branch (Basic SA + MR) achieves a $0.81\%$ improvement in mIoU, which demonstrates that more comprehensive scene context information can be extracted by integrating middle-range dependencies into the segmentation network to complement the long-range dependencies captured by original self-attention. Adding our Reweighed Long-Range branch (Basic SA + RLR) also brings a $0.81\%$ mIoU gain over the Basic SA baseline, indicating that emphasizing the features of positions which encode the common patterns of scenes captures more accurate long-range dependencies and achieves a better understanding of scenes. Furthermore, the RLR branch introduces no extra parameters over the strong Basic SA baseline while outperforming it by a large margin. When incorporating our proposed two branches together, our Omni-Range Dependencies Network (ORDNet) obtains a further improvement of $1.22\%$ in mIoU and $0.74\%$ in pixAcc due to the effective complementarity between short-range (captured by convolutions), middle-range and reweighed long-range dependencies. Utilizing a deeper backbone network further boosts the performance of our ORDNet to gains of $3.58\%$ and $1.63\%$ in mIoU and pixAcc, respectively. \textbf{Qualitative results: } Qualitative comparisons with the baseline Basic SA~\cite{wang2018non} are shown in Fig~\ref{fig:result}. Our ORDNet obtains better parsing results in both global and local parts. For example, in the $4$-th row of (d) and (e), Basic SA produces a muddled result with nonexistent categories, while our ORDNet is able to understand the entire scene correctly and generate a coherent parsing map with the assistance of omni-range dependencies.
Also, in the $1$st and $6$th rows of (d) and (e), our ORDNet can fix local prediction errors of Basic SA, e.g., the missed tiny sheep and the confused ear, which indicates the superiority of integrating middle-range and reweighed long-range dependencies into the segmentation network. There are also some failure cases in Fig.~\ref{fig:result}. For instance, our model fails to identify the ``background'' category (two pillars, snowboard) in the $3$rd and $4$th rows. The reason is that we ignore the pixels of the ``background'' category when training our ORDNet on the PASCAL-Context dataset, following~\cite{zhang2018context}\cite{fu2019dual}. Therefore, our ORDNet cannot identify the ``background'' category and predicts other labels for these pixels, which does not harm the performance. We also observe other poor results, such as the inaccurate boundaries in the last row of Fig.~\ref{fig:result}. We suppose the reason is that our ORDNet cannot aggregate enough boundary details from low-resolution feature maps. Exploiting low-level features from the CNN backbone may alleviate this problem. \textbf{Number of patches in MR branch: } We also analyze the $2\times 2$ and $4\times 4$ patch settings of our MR branch alone. Experimental results are shown in Table~\ref{tab:4x4patch}. Given the same input size, conducting self-attention over $2\times 2$ patches reduces FLOPs from $94.50$G to $62.27$G and reduces mIoU from $49.45\%$ to $48.75\%$. Conducting self-attention over $4\times 4$ patches reduces FLOPs from $94.50$G to $51.17$G given the same input size. However, it does not work as well as the original self-attention (MR\_nopatch) or the $2\times 2$-patch MR branch. Enlarging the crop size to $640$ brings limited improvement but results in more FLOPs than the $2\times 2$-patch setting (second row). We suppose that dividing the feature map into too many patches for self-attention fragments the receptive field, leading to inferior performance.
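For illustration, the patch-restricted self-attention of the MR branch can be sketched as follows. This is a minimal NumPy sketch under our reading of the text, not the authors' implementation: the learned query/key/value projections are replaced by identities, and the feature map is partitioned into a grid of patches, each attending only within itself.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patch_self_attention(feat, grid=2):
    """Self-attention restricted to a grid x grid partition of the map.

    feat: (H, W, C) feature map; H and W are assumed divisible by `grid`.
    Each patch attends only within itself, so the attention cost drops
    from (HW)^2 to (HW)^2 / grid^2.
    """
    H, W, C = feat.shape
    ph, pw = H // grid, W // grid
    out = np.empty_like(feat)
    for i in range(grid):
        for j in range(grid):
            patch = feat[i*ph:(i+1)*ph, j*pw:(j+1)*pw].reshape(-1, C)
            # Identity projections stand in for learned query/key/value layers.
            attn = softmax(patch @ patch.T / np.sqrt(C), axis=-1)
            out[i*ph:(i+1)*ph, j*pw:(j+1)*pw] = (attn @ patch).reshape(ph, pw, C)
    return out

feat = np.random.rand(8, 8, 4)
out = patch_self_attention(feat, grid=2)
```

With \texttt{grid}$=1$ the sketch reduces to ordinary global self-attention over the whole map; larger grids shrink the receptive field while cutting FLOPs, consistent with the trend reported in Table~\ref{tab:4x4patch}.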
Note that we only compare variants of the MR branch without integrating Basic SA; thus, the mIoU and PixAcc of the $2\times 2$-patch MR branch in Table~\ref{tab:4x4patch} are lower than those in Table~\ref{tab:Ablation}. \begin{figure*}[t] \centering \includegraphics[width=0.75\linewidth]{figure/attn_out.pdf} \caption{Visualization of attention maps and parsing results of the self-attention method and our RLR branch. (a) Original images. (b) Visualization of the global attention weight contribution map $Attn_g$, which is calculated by summing up the attention weights that each position contributes to the other positions. (c) Ground-truth label maps. (d) Parsing results of basic self-attention. (e) Parsing results of the RLR branch. We can observe from (d) that large-area continuous background usually contributes more attention weights to other positions, e.g., the sky in the 2nd row and the wall in the 4th row. By emphasizing these regions, our RLR branch can correct the prediction errors made by basic self-attention and assign proper labels to these regions. \textit{Best viewed in color.}} \label{fig:attention_out} \end{figure*} \begin{table}[!htbp] \caption{Results of different versions of the MR branch on the PASCAL-Context test set. All the models are based on the ResNet50 backbone.} \label{tab:4x4patch} \centering \begin{tabular}{lcccc} \toprule Method & Crop Size & mIoU(\%) & PixAcc(\%) & GFLOPs \\ \midrule MR\_nopatch & $480\times 480$ & \textbf{49.45} & \textbf{78.61} & 94.50 \\ MR\_2x2patch & $480\times 480$ & 48.75 & 76.83 & 62.27 \\ MR\_4x4patch & $480\times 480$ & 46.19 & 75.94 & 51.17 \\ MR\_4x4patch & $640\times 640$ & 46.92 & 76.08 & 78.97 \\ \bottomrule \end{tabular} \end{table} \begin{table}[!htbp] \caption{Results of different versions of the RLR branch on the PASCAL-Context test set.
All the models are based on the ResNet50 backbone.} \label{tab:rlr_versions} \centering \begin{tabular}{lccc} \toprule Attention Matrix & Normalizing Method & mIoU(\%) & PixAcc(\%) \\ \midrule Attention-in & Softmax & 49.80 & 78.58 \\ Attention-in & Sigmoid & 49.97 & 78.61 \\ Attention-out & Softmax & 50.13 & \textbf{78.92} \\ Attention-out & Sigmoid & \textbf{50.28} & 78.89 \\ \bottomrule \end{tabular} \end{table} \begin{table}[!htbp] \caption{Results of different fusing methods for the outputs of the MR and RLR branches. All the models are based on the ResNet50 backbone.} \label{tab:fuse} \centering \begin{tabular}{lcc} \toprule Fusing method & mIoU(\%) & PixAcc(\%) \\ \midrule Element-wise Summation & 50.19 & 78.94 \\ Concat + $1 \times 1$ Convolution & \textbf{50.67} & \textbf{79.35} \\ Attention to Scale~\cite{Chen2016Attention} & 49.73 & 78.81 \\ Channel Selection~\cite{Li2019Selective} & 49.60 & 78.69 \\ \bottomrule \end{tabular} \end{table} \textbf{Variants of RLR branch: } We evaluate different versions of the RLR branch. Experimental results are shown in Table~\ref{tab:rlr_versions}. All the models are based on ResNet50. Attention-in means that $Attn_g$ is obtained by summing up $Attn$ along each row, so that each element of $Attn_g$ denotes the contribution of all positions to the current position. Attention-out means that $Attn_g$ is obtained by summing up $Attn$ along each column, so that each element of $Attn_g$ denotes the contribution of the current position to all the others. We also try different normalization methods for $Attn_g$, including the Softmax and Sigmoid functions. Results in Table~\ref{tab:rlr_versions} show that calculating $Attn_g$ as attention-out attains better performance than attention-in, and that combining attention-out with sigmoid normalization achieves the best performance on the PASCAL-Context dataset.
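Under our reading of these definitions, the best-performing attention-out variant with sigmoid normalization can be sketched as follows. This is a hypothetical NumPy sketch, not the paper's code: shapes are illustrative and the learned query/key/value projections are again replaced by identities.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reweighed_long_range(feat):
    """Attention-out reweighting: emphasize positions that contribute
    large attention weights to all the others.

    feat: (N, C) flattened feature map, N = H*W positions.
    """
    N, C = feat.shape
    attn = softmax(feat @ feat.T / np.sqrt(C), axis=-1)  # rows sum to 1
    # Attention-out: column sums = total weight each position sends out.
    attn_g = attn.sum(axis=0)                            # shape (N,)
    gate = sigmoid(attn_g)                               # per-position gate
    # Emphasize high-contribution positions, then aggregate as usual.
    return attn @ (feat * gate[:, None])

feat = np.random.rand(16, 4)
out = reweighed_long_range(feat)
```

The attention-in variant would instead sum $Attn$ along each row; here we sketch only the attention-out plus sigmoid combination that performs best above.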
\textbf{Fusing methods for MR and RLR branches: } We compare different approaches to fusing the outputs of our proposed MR and RLR branches on the test set of PASCAL-Context. Experimental results are presented in Table~\ref{tab:fuse}. All the models are based on ResNet50. Besides element-wise summation and concatenation followed by a $1 \times 1$ convolution, we also explore Attention to Scale~\cite{Chen2016Attention} and Channel Selection~\cite{Li2019Selective} for feature fusion. Attention to Scale obtains a selection weight map for each branch, and each position of the fused feature receives a weighted summation of the features at the same position from the two branches. Channel Selection instead obtains a selection weight vector for each branch and conducts the weighted summation along the channel dimension. Experimental results indicate that simply concatenating the two features and fusing them with a $1 \times 1$ convolution achieves the best performance. A possible reason is that a normal convolution fuses the features from our two branches along both the spatial and channel dimensions, while Attention to Scale and Channel Selection perform fusion only along the spatial or channel dimension, respectively. \textbf{Visualization of RLR branch: } To further illustrate the proposed RLR branch, we visualize the attention maps and parsing results of our model with the RLR branch only in Fig.~\ref{fig:attention_out}. Column (b) visualizes the global attention weight contribution map $Attn_g$, i.e., the summation of the attention weights contributed by each position to all the positions. We can observe that areas with larger attention weight contributions (brighter color) usually represent common patterns of the scene, e.g., the sky, grass, and walls, which serve as large-area continuous background.
As shown in columns (d) and (e), by emphasizing common pattern regions, our RLR branch is able to correct erroneous predictions made by basic self-attention and make the network understand the scene contents more comprehensively. \begin{table}[!htbp] \caption{Comparison with state-of-the-art methods on the PASCAL-Context test set. mIoU is calculated on $60$ categories w/ background.} \label{tab:sota_pcontext} \centering \begin{tabular}{lcc} \toprule Method & Backbone & mIoU(\%) \\ \midrule FCN-8s~\cite{Long_2015_CVPR} & - & 37.8 \\ CRF-RNN~\cite{zheng2015conditional} & - & 39.3 \\ ParseNet~\cite{liu2015parsenet} & - & 40.4 \\ BoxSup~\cite{dai2015boxsup} & - & 40.5 \\ ConvPP-8~\cite{xie2015convolutional} & - & 41.0 \\ HO\_CRF~\cite{arnab2016higher} & - & 41.3 \\ PixelNet~\cite{bansal2016pixelnet} & - & 41.4 \\ Piecewise~\cite{lin2016efficient} & - & 43.3 \\ DAG-RNN + CRF~\cite{shuai2018scene} & - & 43.7 \\ VeryDeep~\cite{wu2016bridging} & - & 44.5 \\ DeepLab-v2~\cite{chen2017deeplab} & ResNet101 + COCO & 45.7 \\ LabelBank~\cite{hu2017labelbank} & ResNet101 & 45.8 \\ RefineNet-101~\cite{lin2017refinenet} & ResNet101 & 47.1 \\ RefineNet-152~\cite{lin2017refinenet} & ResNet152 & 47.3 \\ PSPNet~\cite{zhao2017pyramid} & ResNet101 & 47.8 \\ Model A2, 2 conv~\cite{Wu2016Wider} & - & 48.1 \\ MSCI~\cite{lin2018multi} & ResNet152 & 50.3 \\ SGR~\cite{liang2018symbolic} & ResNet101 & 50.8 \\ CLL~\cite{ding2018context} & ResNet101 & 51.6 \\ EncNet~\cite{zhang2018context} & ResNet101 & 51.7 \\ SGR+~\cite{liang2018symbolic} & ResNet101 + COCO Stuff & 52.5 \\ DUpsampling~\cite{tian2019decoders} & Xception-71 & 52.5 \\ DANet~\cite{fu2019dual} & ResNet101 & 52.6 \\ SVCNet~\cite{ding2019semantic} & ResNet101 & 53.2 \\ CFNet~\cite{zhang2019co} & ResNet101 & 54.0 \\ InterlacedSSA~\cite{Huang2019InterlacedSS} & ResNet101 & 54.1 \\ \midrule ORDNet (ours) & ResNet101 & \textbf{54.5} \\ \bottomrule \end{tabular} \end{table} \subsection{Results on PASCAL-Context} \label{sec:repc} PASCAL-Context
dataset~\cite{mottaghi2014role} contains $4,998$ images for training and $5,105$ images for testing. All images are densely annotated with $60$ categories in total, including background. We train our model for $80$ epochs with a batch size of $16$ when comparing with state-of-the-art methods. The image crop size is set to $480$. We evaluate our method on all $60$ categories, i.e., the $59$ most frequent categories plus the background category, as prior works~\cite{lin2017refinenet}\cite{zhang2018context}\cite{zhang2019co} do. Compared methods and results are shown in Table~\ref{tab:sota_pcontext}. Our ORDNet achieves $54.5\%$ mIoU, which outperforms previous state-of-the-art methods. SGR+~\cite{liang2018symbolic} and DeepLab-v2~\cite{chen2017deeplab} utilize additional COCO Stuff~\cite{caesar2018coco} and COCO~\cite{lin2014microsoft} data to pretrain their models. RefineNet-152~\cite{lin2017refinenet} and MSCI~\cite{lin2018multi} adopt deeper backbone networks to boost performance. The recent CFNet~\cite{zhang2019co} integrates the Context Encoding module from EncNet~\cite{zhang2018context} and obtains $54.0\%$ mIoU. InterlacedSSA~\cite{Huang2019InterlacedSS} further boosts the performance to $54.1\%$ mIoU via factorized self-attention similar to our MR branch. Our ORDNet achieves better performance than the above methods without using extra pretraining data, a deeper backbone network, or other context aggregation modules. This demonstrates that capturing omni-range dependencies is more effective in providing richer semantic information for scene parsing.
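For completeness, the two metrics reported throughout (mIoU and pixAcc) can be computed from a confusion matrix as in the following standard sketch (not code from the paper):

```python
import numpy as np

def confusion_matrix(pred, target, num_classes):
    """num_classes x num_classes matrix; rows = ground truth, cols = prediction."""
    idx = target * num_classes + pred
    return np.bincount(idx.ravel(), minlength=num_classes**2).reshape(num_classes, num_classes)

def miou_and_pixacc(pred, target, num_classes):
    cm = confusion_matrix(pred, target, num_classes)
    tp = np.diag(cm)
    union = cm.sum(0) + cm.sum(1) - tp
    # Avoid division by zero; note that a class absent from both ground truth
    # and prediction gets IoU 0 here, while real evaluators typically skip it.
    iou = tp / np.maximum(union, 1)
    return iou.mean(), tp.sum() / cm.sum()

pred   = np.array([0, 1, 1, 2, 2, 2])
target = np.array([0, 1, 2, 2, 2, 2])
miou, pixacc = miou_and_pixacc(pred, target, 3)  # 0.75, 5/6
```

Depending on the benchmark, the mean is taken over either $59$ or $60$ categories, according to whether the background category is ignored.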
\begin{table}[!htbp] \caption{Comparison with state-of-the-art methods on COCO Stuff test set.} \label{tab:sota_coco} \centering \setlength{\tabcolsep}{4mm}{ \begin{tabular}{lccc} \toprule Method & Backbone & mIoU(\%) \\ \midrule FCN~\cite{caesar2018coco} & - & 22.7 \\ FCN-8s~\cite{Long_2015_CVPR} & - & 27.2 \\ DAG-RNN + CRF~\cite{shuai2018scene} & ResNet101 & 31.2 \\ RefineNet~\cite{lin2017refinenet} & ResNet152 & 33.6 \\ LabelBank~\cite{hu2017labelbank} & ResNet101 & 34.3 \\ DeepLab-v2~\cite{chen2017deeplab} & ResNet101 & 34.4 \\ CLL~\cite{ding2018context} & ResNet101 & 35.7 \\ DSSPN~\cite{liang2018dynamic} & ResNet101 & 36.2 \\ SGR~\cite{liang2018symbolic} & ResNet101 & 39.1 \\ InterlacedSSA~\cite{Huang2019InterlacedSS} & ResNet101 & 39.2 \\ SVCNet~\cite{ding2019semantic} & ResNet101 & 39.6 \\ DANet~\cite{fu2019dual} & ResNet101 & 39.7 \\ \midrule ORDNet (ours) & ResNet101 & \textbf{40.5} \\ \bottomrule \end{tabular}} \end{table} \begin{table}[!htbp] \caption{Comparison with state-of-the-art methods on ADE20K validation set.} \label{tab:sota_ade20k} \centering \begin{tabular}{lccc} \toprule Method & Backbone & mIoU(\%) & pixAcc(\%) \\ \midrule SegNet~\cite{Badrinarayanan2017SegNet} & - & 21.64 & 71.00 \\ FCN~\cite{Long_2015_CVPR} & - & 29.39 & 71.32 \\ DilatedNet \cite{yu2015multi} & - & 32.31 & 73.55 \\ CascadeNet~\cite{zhou2017scene} & - & 34.90 & 74.52 \\ DeepLabv2~\cite{chen2017deeplab} & ResNet101 & 38.97 & 79.01 \\ RefineNet-101~\cite{lin2017refinenet} & ResNet101 & 40.20 & - \\ RefineNet-152~\cite{lin2017refinenet} & ResNet152 & 40.70 & - \\ DSSPN~\cite{liang2018dynamic} & ResNet101 & 42.03 & 81.21 \\ PSPNet-101~\cite{zhao2017pyramid} & ResNet101 & 43.29 & 81.39 \\ PSPNet-269~\cite{zhao2017pyramid} & ResNet269 & 44.94 & 81.69 \\ Model A2, 2c~\cite{Wu2016Wider} & - & 43.73 & 81.17 \\ SGR~\cite{liang2018symbolic} & ResNet101 & 44.32 & 81.43 \\ EncNet~\cite{zhang2018context} & ResNet101 & 44.65 & \textbf{81.69} \\ CFNet~\cite{zhang2019co} & ResNet101 & 44.89 
& - \\ InterlacedSSA~\cite{Huang2019InterlacedSS} & ResNet101 & 45.04 & - \\ \midrule ORDNet (ours) & ResNet101 & \textbf{45.39} & 81.48 \\ \bottomrule \end{tabular} \end{table} \begin{table*}[t] \caption{Results on the ADE20K test set. Evaluation provided by the challenge organizers.} \label{tab:sota_ade20k_test} \centering \begin{tabular}{lcccc} \toprule Method & Ensemble models & Train/Trainval & Backbone & Final Score \\ \midrule Dense Relation Network & - & - & - & 56.35 \\ DRANet101\_SingleModel & No & Trainval & ResNet101 & 56.72 \\ Adelaide & Yes & Trainval & - & 56.73 \\ SenseCUSceneParsing & No & Train & - & 55.38 \\ SenseCUSceneParsing & Yes & Trainval & - & \textbf{57.21} \\ \midrule ORDNet (ours) & No & Train & ResNet101 & 56.67 \\ ORDNet (ours) & No & Trainval & ResNet101 & \textbf{56.86} \\ \bottomrule \end{tabular} \end{table*} \subsection{Results on COCO Stuff} The COCO Stuff dataset~\cite{caesar2018coco} contains $10,000$ images from the MSCOCO dataset~\cite{lin2014microsoft} with dense annotations of $80$ thing categories (e.g., book, clock) and $91$ stuff categories (e.g., flower). We use $9,000$ images for training and the rest for testing. We adopt a batch size of $16$ and train our model for $80$ epochs. The image crop size is set to $520$. Mean IoU results calculated on all $171$ categories are shown in Table~\ref{tab:sota_coco}. Among the compared methods, DAG-RNN~\cite{shuai2018scene} utilizes chain-RNNs to model rich spatial dependencies. CCL~\cite{ding2018context} adopts a gating mechanism in the decoder stage to improve the segmentation of inconspicuous objects and background stuff. SGR~\cite{liang2018symbolic} uses a knowledge graph to convert image features into symbolic nodes and conducts graph reasoning on them. SVCNet~\cite{ding2019semantic} generates a scale- and shape-variant semantic mask for each pixel to confine its contextual region for more adaptive context aggregation.
DANet~\cite{fu2019dual} employs spatial and channel-wise self-attention to further improve performance. The recent InterlacedSSA~\cite{Huang2019InterlacedSS} proposes a factorized approach similar to our MR branch to accelerate self-attention. Our method outperforms these methods by a large margin and achieves a new state-of-the-art result of $40.5\%$ mIoU without using external knowledge. This result indicates that capturing omni-range dependencies is more effective than merely modeling long-range dependencies as in conventional self-attention. \subsection{Results on ADE20K} ADE20K~\cite{zhou2017scene} is a large scene parsing benchmark with $150$ categories including stuff and objects. It contains $20,210$ images for training and $2,000$ for validation. We train our model for $120$ epochs on the training set with a batch size of $16$ and report mIoU and pixAcc results on the validation set. As the average image size of the ADE20K dataset is larger than that of the other datasets, we adopt an image crop size of $576$ on ADE20K. A comparison with previous state-of-the-art methods is shown in Table~\ref{tab:sota_ade20k}. PSPNet-269~\cite{zhao2017pyramid} uses a much deeper backbone network than the other methods. EncNet~\cite{zhang2018context} and CFNet~\cite{zhang2019co} exploit prior information about the semantic categories appearing in the scene to improve performance. InterlacedSSA~\cite{Huang2019InterlacedSS} introduces factorized self-attention similar to our MR branch and obtains $45.04\%$ mIoU. Our method achieves $45.39\%$ mIoU, which outperforms previous methods without using a deeper backbone, category priors, or external knowledge like~\cite{liang2018symbolic}. As mentioned in Section $4.1$ of InterlacedSSA, the $0.2\%$ improvement of their method is not negligible considering that improvements on ADE20K are very challenging. Therefore, the results of our method demonstrate that capturing omni-range dependencies is also effective in challenging and complicated scenes.
To further demonstrate the effectiveness of our ORDNet, we evaluate our model on the test set. The experimental results are shown in Table~\ref{tab:sota_ade20k_test}. The final score denotes the average of pixAcc and mIoU. Our model without finetuning on the validation set achieves a final score of $56.67$ and surpasses most other methods. For a fair comparison, we further finetune our model on the train+val set of ADE20K for $20$ epochs, with the same training scheme except that the initial learning rate is set to $1e-4$. Our ORDNet achieves a final score of $56.86$ on the test set with a single model and ranks at the $2$nd place on the leaderboard of the MIT Scene Parsing Benchmark. Our single model surpasses the $3$rd-place entry, Adelaide, which ensembles multiple models. The $1$st-place entry, SenseCUSceneParsing, achieves a $57.21$ final score by ensembling multiple models as well. Its single model trained only on the training set achieves $55.38$, while our ORDNet achieves $56.67$ under the same setting. \subsection{Results on PASCAL VOC 2012} We also evaluate the proposed ORDNet on the PASCAL VOC 2012 dataset~\cite{Everingham10} with $21$ categories for the semantic segmentation task. We adopt the augmented dataset~\cite{hariharan2011semantic}, which contains $10,582$ images for training, $1,449$ images for validation, and $1,456$ images for testing. We first train on the augmented train+val set for $60$ epochs with an initial learning rate of $1e-3$. Then we finetune the model on the original PASCAL VOC training set for another $20$ epochs with a learning rate of $1e-4$. We adopt ResNet101 as the backbone network, and the image crop size is set to $480$. Our ORDNet achieves $83.3\%$ mIoU on the PASCAL VOC 2012 test set without using COCO pretraining or additional context aggregation modules. This demonstrates that our method can also adapt to the foreground object segmentation task by capturing omni-range dependencies.
\subsection{Analysis of Computational Overhead} As shown in Table~\ref{tab:compute_cost}, all the models are run on a single NVIDIA TITAN XP GPU card to report the computational overhead. InterlacedSSA is superior to the other methods, including ours, in terms of speed and memory cost. This is a natural result since the main idea of InterlacedSSA is to decrease the computational budget of self-attention via feature factorization, while the main idea of our ORDNet is to demonstrate the effectiveness of capturing omni-range dependencies with our MR and RLR branches, which outperforms InterlacedSSA on all the datasets adopted in this paper. Reducing computational complexity is not one of our claims. Compared with Basic SA and DANet, our ORDNet has a moderate computational overhead, since we reduce the channel dimensions of the query, key, and value layers in our MR and RLR branches to $256$, $256$, and $512$, respectively, while achieving higher performance. We will explore how to reduce its time and memory costs in future work. \begin{table}[!htbp] \caption{Efficiency comparison given an input feature map of size $[2048 \times 128 \times 128]$ in the inference stage.} \label{tab:compute_cost} \centering \begin{tabular}{lccc} \toprule Methods & Memory (MB) & GFLOPs & Time (ms) \\ \midrule InterlacedSSA~\cite{Huang2019InterlacedSS} & \textbf{252} & \textbf{386} & \textbf{45} \\ Basic SA~\cite{wang2018non} & 2168 & 619 & 77 \\ Ours & 2192 & 624 & 83 \\ DANet~\cite{fu2019dual} & 2339 & 1110 & 121 \\ \bottomrule \end{tabular} \end{table} \section{Conclusion and future work} In this paper, we address the scene parsing problem, which requires the model to segment the entire scene instead of only foreground objects.
We propose a novel Omni-Range Dependencies Network (ORDNet) which restricts the scope of self-attention to local patches to capture middle-range dependencies, and meanwhile selectively emphasizes the spatial regions that contribute significant attention weights to the others so as to model more accurate long-range dependencies. By integrating middle-range, reweighed long-range, and short-range dependencies (captured by local convolutions) together, our ORDNet helps models adapt to the various spatial scales and relationships in complicated natural images, thus strengthening both local and global feature representations. Extensive experiments on four scene parsing and segmentation benchmarks demonstrate its superior performance. Furthermore, our ORDNet can be applied to other visual tasks for capturing omni-range dependencies due to its generality and plug-and-play property. In the future, we hope to apply the ORDNet to other visual tasks and study how to further reduce its computational budget. { \bibliographystyle{IEEEtran}
\section{Introduction} NoSleep\footnote{\href{https://www.reddit.com/r/nosleep}{https://www.reddit.com/r/nosleep}} is an online community hosted on the social media website Reddit where people share, rate, and comment on original horror stories in an immersive environment. From its inception in May 2010 to May 2014, NoSleep grew organically to more than 240,000 subscribers. On May 7, 2014, Reddit's administrators added NoSleep to a list of communities that every new Reddit user is automatically subscribed to. Less than a month later, NoSleep's subscriber-base had doubled and it has continued to grow at this pace (see Figure \ref{fig:growth}). Although theory suggests that large influxes of newcomers will challenge, disrupt, and can even destroy online communities, NoSleep appeared to manage this growth without major negative effects. Using a grounded theory-based analysis of interviews of NoSleep members, writers, and moderators, we suggest that NoSleep was able to survive and thrive through this massive influx of newcomers because it had created systems that ensured a high degree of adherence to the community's norms and that minimized the effect of violations. Our findings also point to several important trade-offs and limitations. \begin{figure} \centering \includegraphics[width=\columnwidth]{nosleep_growth.pdf} \caption{Plots of numbers of subscribers (in millions), and moderators over the life of the community. Data retrieved from snapshots available at archive.org} \label{fig:growth} \end{figure} \section{Background} The problem of how to attract newcomers is one of the most important challenges for builders of online communities \cite{kraut_building_2012}. A growing body of work considers how leadership, framing, competition, and membership overlap can play an important role in attracting newcomers \cite{schweik_internet_2012, zhu_selecting_2014, kraut_building_2012, zhu_impact_2014}. 
However, relatively little research has considered the challenges faced by communities that successfully manage to attract large numbers of new members. The problems caused by successfully attracting large numbers of newcomers are often invoked in terms of an ``Eternal September'' \cite{grossman_net.wars_1998}. In the 1980s and early 1990s, Usenet participants frequently complained about inexperienced university students joining their communities in September, the beginning of the North American academic year. When America Online connected to Usenet in 1994 and unleashed a large and unremitting stream of new users, Usenet denizens complained of a ``September'' that never ended and irrevocably damaged their community. Social computing research suggests that influxes of newcomers can disrupt communities in many ways including increased activity leading to information overload \cite{butler_membership_2001, jones_information_2004} and alienation among established users \cite{jeffries_systers:_2005}. Most prominently, research has suggested that newcomers cause disruptions because they do not know and do not follow community norms \cite{kraut_building_2012}. As a result, new users can disrupt conversations, contribute low quality content, and annoy existing users. For example, Gorbatai has shown that newcomers tend to make low quality edits to Wikipedia which require fixes by established editors \cite{gorbatai_aligning_2012}. Because most social computing research treats community growth as a desirable goal, most studies of newcomer activity have focused on how newcomers can be deterred by sanctions leveled at good faith norm violations. For example, studies have shown that Wikipedia editors whose work is undone are less likely to contribute in the future \cite{halfaker_rise_2013, halfaker_dont_2011, piskorski_testing_2013}. 
In response, designers have created welcoming spaces within existing communities \cite{morgan_tea_2013} and tools to identify and welcome good faith contributors \cite{halfaker_snuggle:_2014}. This work has limited relevance for communities in NoSleep's position in May 2014 whose challenge was widespread norm violation by a surge of newbies. Norm violations included linking to external content in ways that are normative elsewhere in Reddit, marking work as fiction or non-fiction, and misusing the Reddit voting system. Perhaps most importantly, many new users violated a very strong norm of suspended disbelief that requires all commenters to act as if stories are factual. For example, asking if something is true, or even complimenting the author for a ``nice story,'' is not permitted on NoSleep. Violation of this norm by newcomers is disruptive to NoSleep's immersive environment and has historically been treated as a serious threat to the community. Although the social computing literature points to a deep toll taken by this type of widespread norm violation \cite{kraut_building_2012}, NoSleep seems to have largely survived its own Eternal September. The NoSleep participants we interviewed suggested that, ``\emph{it's gotten bigger but not necessarily worse}'' (P1) and that, ``\emph{if you went on today, you would see all the comment threads just filled with people going along with it and just enjoying the experience.}'' (P5). Our analysis asks: How did NoSleep survive, and even thrive, through its Eternal September? \section{Methodology} Because large influxes of newcomers are largely unstudied in the social computing literature, the phenomenon is well-suited to qualitative theory-building. Consequently, we adopted a grounded theory interview-based approach \cite{charmaz_constructing_2006}. We recruited members of NoSleep in two ways. 
First, we posted several messages in NoSleep-related forums on Reddit describing the study and requesting that participants contact a member of our research team with information about their individual role and experience in NoSleep. We identified a set of roles (e.g., moderator, writer) and other dimensions (e.g., gender, ex-user) as theoretically important. As is common in grounded theory, we used these dimensions to build a sample that was stratified but did not attempt to be statistically representative \cite{trost_statistically_1986}. In some cases, we reached out to individuals who we felt would have illuminating perspectives (e.g., a founding member of the community). In total, we interviewed 12 subjects as described in Table \ref{tab:subjects}. All participants were compensated with a \$10 Amazon gift card. \begin{table}[] \centering \begin{tabular}{cllll} \hline ID & Gender & Role & Joined & Length\\ \hline 1 & Female & Lurker & 2013 & 46 min \\ 2 & Female & Active & 2011 & 36 min \\ 3 & Female & Moderator & 2010 & 41 min \\ 4 & Female & Active & 2012 & 42 min \\ 5 & Male & Founder / Moderator & 2010 & 52 min \\ 6 & Female & Lurker & 2010 & 46 min \\ 7 & Male & Ex-Active / Writer & 2013 & 77 min \\ 8 & Female & Lurker & 2012 & 41 min \\ 9 & Male & Moderator / Writer & 2012 & 62 min \\ 10 & Female & Lurker & 2013 & 24 min \\ 11 & Female & Active & 2013 & 48 min \\ 12 & Male & Moderator & 2010 & 43 min \\ \hline \end{tabular} \caption{List of study participants with participant ID as used in this paper, gender, primary role(s) in NoSleep, year that they joined the community, and the length of our interview. The term ``active'' indicates an intense combination of reading, voting, and commenting. ``Lurker'' indicates reading and voting and was associated with less deep involvement in the community.} \label{tab:subjects} \end{table} Subjects were interviewed over the phone, or via audio/video chat for an average of 47 minutes. 
Although interviews were semi-structured and open-ended, our protocol was designed to elicit feedback on the large influx of users in May 2014 and is provided in the supplemental materials. All interviews were recorded and transcribed. Following the methodology laid out by Charmaz \cite{charmaz_constructing_2006}, the first author coded each interview using a series of both inductive codes emergent from the text and deductive codes identified by theory. Transcripts and codes were discussed by all the authors and transcripts were recoded in an iterative fashion. Ultimately, codes were grouped into themes that were elaborated in iteratively created memos. \section{Findings} Our analysis of coded interviews suggests that NoSleep survived and thrived during its Eternal September because it was equipped with three interconnected systems that ensured a high degree of adherence to NoSleep's norms while minimizing the effect of violations. We present these findings in terms of three propositions and suggest that these attributes could help other online communities survive and thrive in the face of large influxes of newcomers. We also describe ways in which the individuals we interviewed reflected on the trade-offs introduced by these features. \subsection{Proposition 1. Consistent Enforcement by Leaders} Participants attributed NoSleep's success in the face of a large influx of new users to the exceptional responsiveness and effectiveness of the community's moderators (``mods'') who wield broad authority to remove content and ban users. Figure \ref{fig:growth} shows that there were a dozen moderators in May 2014 and that the size of the group has accelerated since then. 
P1 commented on the quality of moderation saying, \emph{``the NoSleep mods really do a lot as far as keeping everyone not just on track and following rules, but also like keeping everyone interested and active.''} NoSleep's moderators were described as a group of community insiders who were committed to, and effective in, keeping the community stable and sustainable. Moderator work primarily involves rule enforcement and sanctioning. Our subjects commented on the consistency and strictness of NoSleep's moderators and described these qualities as an important component of NoSleep's success in the face of massive growth. Several NoSleep moderators active in other Reddit subcommunities explained that NoSleep had both the strictest and the most consistently enforced rules they had encountered. For example, moderator P3 described how she would enforce community norms at the expense of suppressing friendly conversation: \emph{``If people come on and they say `that [story] was really great,' that's technically breaking the rules. But, you have to be a real jerk -- that's why you're like `nope you can't -- you cannot praise good writing.'\thinspace''} Although moderators like P3 mentioned examples of the social and emotional challenges of enforcing NoSleep's norms, they also felt that their work was essential to maintain the stability and immersiveness of the community. Although frequently described as inflexible, subjects also described community leadership as engaged, fair, and legitimate. Echoing the experience of several subjects, P10 described at length how a moderator's interventions helped her learn how to effectively navigate and interact within the community. In comparison to other communities, the NoSleep moderators were described as extremely organized and engaged. 
Interviews with moderators revealed a large and sophisticated behind-the-scenes infrastructure including an entire private Reddit community used by moderators to communicate, collaborate, and coordinate with each other to ensure that their actions were responsive and consistent. P3 described the usefulness of this private subreddit: ``\emph{We put out drafts of moderator announcements and everybody suggests additions or things that should be changed, so that when we go out and moderate the community -- we're really able to show a unified front.}'' \subsection{Proposition 2. Moderation By Community Members} The community members we interviewed suggested that widespread engagement in norm enforcement by ``normal'' community members was critically important to handling newcomers. They also suggested that this type of community regulation was only made possible by a strong shared sense of community. For example, there was a striking degree of shared understanding across all participants on what constituted good NoSleep stories. Although subjects could reflect on their individual taste in stories, many respondents echoed P6's claims that excellent NoSleep stories should include ``\emph{a strong character voice}'' and be ``\emph{something that's almost believable}.'' Although many newcomers adopt more off-the-cuff styles, nearly every subject interviewed mentioned style and grammar as criteria that they use to judge the quality of stories on NoSleep. Some of this knowledge is made explicit in documentation on the site. The degree to which this knowledge reflects a shared sense of community was also visible in the way community members described working together to address examples of users violating NoSleep's norms. In a practice described as ``burying'' material, subjects reported collectively rating norm-violating content and comments as low quality (i.e., ``downvoting'') so that it becomes hidden by Reddit's interface.
P1 explained how she approached comments left by newcomers unaware of the suspension of disbelief norm, ``\emph{that's when you're like `All right, if we all downvote this, it will go away. It will be like it never happened}.''' Although P1 did not write content or comments herself, she expressed a feeling of empowerment to act with the community to preserve its norms that was nearly universal among our interviewees. Although rules are rigid, P8, P9 and P10 each reflected on the way that the community's sense of purpose and shared goals made it possible to identify attempts to game its system, and on the community's ability to ``correct themselves'' in these cases. Because the number of moderators with explicit authority is limited, this sense of community made collective action among ``normal'' members possible, scaled effectively, and was able to both minimize damage from, and educate, a sustained influx of newcomers. Because users could work to ``bury'' norm-violating material, there was much less pressure on moderators to act quickly and in every case of a violated norm. \subsection{Proposition 3. Technological Systems Maintaining Norms} Participants suggested that the technological affordances of NoSleep were a third important factor in the community's smooth growth. Several technologies mentioned included basic functionality of the Reddit platform that facilitates community moderation. This included Reddit's voting system giving all members the ability to quickly vote content up or down. As P1 explained, ``\emph{in order to be an active member of the community, you just need to hit a button}.'' A related tool provided by Reddit facilitates ``peer reporting'' of problematic content which is presented to moderators who can then hide content and contact users. Moderator P12 explained that, ``\emph{if you have individuals committed enough to ...
not break immersion, just by clicking a report button on comments that do [break immersion], it brings it into the moderator queue so that we can make them disappear}.'' Building on this system, NoSleep employs a tool called ``AutoModerator'' that automatically detects rule violations and sanctions violators \cite{morris_reddit_2012}. Although not visible to many users, several moderators we interviewed explained that the tool, also provided by Reddit, finds and flags problematic content and communicates with moderators in ways that obviate the need for action on many straightforward moderation issues. Interviewees also credited Reddit's functionality that allows newcomers to edit and improve stories or comments after discovering they have inadvertently broken a rule or deviated from the community's norms and then to resubmit their content. They also pointed to a feature that issues reminders of norms at points of action including an HTML placeholder attribute in the comment box below stories that reminds users of rules before commenting. One moderator described a tool used by the community called ``post throttling'' which limits users to one story submission every 24 hours. P3 explained that throttling was used to reduce the threat of a newcomer disrupting the community and to provide a limit on the effect of a trend of newbies posting ``series'' of stories to garner visibility and popularity. Of course, these technologies rely on social infrastructure to be effective in ways that highlight the interrelated nature of our propositions. Community voting and peer reporting rely on an engaged set of community members with a shared sense of the community as well as an active set of moderators who can effectively remove flagged or downvoted content and sanction repeat violators.
Our interviewees suggested that NoSleep effectively combined these three features into a socio-technical system that was able to maintain community standards through a sustained influx of newcomers. \subsection{The Cost of Strong Regulation} All three propositions point to systems facilitating strong norm enforcement. Although described as important for managing the influx of new users, these systems were not described as universally positive or costless. For example, a rule requiring stories to be believable was elaborated in the aftermath of the influx of new users to explicitly bar supernatural stories (e.g., stories that involve demons or vampires). Although subjects acknowledged that this pushed newcomers toward creating believable stories, it also annoyed some established users who felt they could navigate the fine line between supernatural content and believability. For example, P3 explained that, ``\emph{the rules had been made tighter because of the new subscribers, and that sometimes doesn't allow them as much freedom}.'' Similarly, P7 felt that rules were, ``\emph{corralling younger users into acting or reading in a particular way...instead of doing what they want}.'' Frustrated by this experience, P7 explained that he no longer contributes to the community as frequently. For others, tough rule enforcement was seen as having a negative effect on commenting and discussion by making the environment feel constrained and contrived. P11 described the difference between her experiences in NoSleep and other Reddit subcommunities, commenting on the rules that forbid any kind of criticism of stories: \emph{``Most subreddits have comment sections that are kind of stream of consciousness—people tend to share their own experiences. On NoSleep, you don't see people really sharing their own experiences; you see people commenting specifically on the story…Again, because of the rules, you don't really see trolling. Kind of nice, kind of not nice ... 
I think the comment section is kind of -- I don't think it's very organic.''} Through a strong system of rules and a complex socio-technical infrastructure to ensure that they are enforced, NoSleep was able to survive and thrive despite the weight of millions of newcomers. However, the cost of NoSleep's survival was described by some interviewees as an uncomfortably strict environment that limited creativity. The systems described as facilitating NoSleep's rapid growth were also portrayed as providing strict limits on what it could grow to become. \section{Discussion} In one sense, our three propositions seem to be at odds with other social computing research. For example, a body of Wikipedia research has connected stronger systems of norms with inefficient bureaucracies that may cause communities' growth to slow or even stop \cite{butler_dont_2008, suh_singularity_2009, jullien_rise_2015}. For example, Halfaker et al.~\cite{halfaker_rise_2013} connect increases in social and technical systems for norm-enforcement to lower rates of newcomer retention. In another sense, our propositions should not be unfamiliar to social computing researchers. For example, all three propositions can be found in some form among Kraut and Resnick's \cite{kraut_building_2012} ``design claims'' and our contribution lies not in the discovery of these features but in our suggestion that they play a critical role in helping groups survive and thrive through what is often traumatic or catastrophic growth. We believe that techniques that minimize the effect of norm violations by newcomers may both help prevent communities from descending into chaos \emph{and} deter newcomers. Techniques that prevent short-term disaster may be inappropriate -- and difficult to change -- when growth slows. Of course, our work is limited in many ways. One unavoidable limitation of our inductive grounded theory approach is that findings may reflect the idiosyncrasies and biases of interviewees.
For example, we were only able to recruit one user who described themselves as a former NoSleep member. As a result, our findings may reflect ``survivor bias'' where individuals less negatively affected by an event are the only people available to be interviewed. We gain some confidence in our findings by the fact that our participants did not describe any major exoduses of authors or moderators or major changes in the nature of the community. That said, we only present these findings as propositions for testing in future work. Our findings offer several important implications for design. The first points toward the importance of emphasizing decentralized moderation. Although previous research has found this to be ``underprovisioned'' on Reddit as a whole \cite{gilbert_widespread_2013}, in NoSleep it seems to be sufficient. A second implication is the importance of ensuring enough leadership capacity is available when an influx of newcomers is anticipated. Designers may benefit by focusing on tools to let existing leaders bring others on board and help them clearly communicate norms. Finally, designers should support an ecosystem of accessible and appropriate moderator tools. During a widely reported Reddit uprising, a moderator of a different subcommunity complained that, ``the moderation tools we are given are severely lacking'' \cite{warzel_reddit_2015}. Our interviews and analysis point to the importance of strong systems of norm enforcement made possible by leadership, community engagement, and technology. Although we propose that NoSleep's socio-technical infrastructure can provide a template for other communities facing similar challenges, we also suggest that they are not without trade-offs. Although not without qualification, NoSleep's example provides a model for how an Eternal September need not mean an inevitable march toward winter. \bibliographystyle{SIGCHI-Reference-Format}
\section{Introduction} \label{introduction} In the nucleus of a galaxy, when an unlucky star is occasionally perturbed onto an orbit on which it comes too close to the central supermassive black hole (SMBH), it will be destroyed by the tidal force \citep{Rees_Tidal_1988}. In such a tidal disruption event (TDE), the accretion of the debris produces a flare that illuminates the galaxy for a period of months to years. A large sample of TDEs can uncover the hidden population of SMBHs in the centers of quiescent galaxies, and provide a promising method to measure the properties of BHs and to study the physics of accretion. The main observational properties of the dozens of discovered candidate TDEs are: they are bright in the UV/optical band, with a nearly constant temperature of about $2-4\times 10^4$ K near the peak of the luminosity, and some of them show X-rays that might come from the accretion disk. The photospheric radius of the UV/optical radiation, inferred by assuming a blackbody spectrum, is $\sim 10^{15}$ cm, which is larger than the circularization radius $R_{\rm c} \sim 10^{13}$ cm \citep{Gezari_An_2012,Holoien_ASASSN14ae_2014,Holoien_Six_2016}. It is commonly considered that the UV/optical emission originates from the self-collision near the apocenter \citep{Piran_Disk_2015}, or comes from a reprocessing layer, which is produced in the circularization process as the stretched stream shocks itself near the apocenter due to the apsidal precession \citep{Jiang_Prompt_2016,Lu_Self_2020}, or is driven by the super-Eddington accretion process in the accretion disk \citep{Strubbe_Optical_2009, Lodato_Multiband_2011,Metzger_A_2016}. Though these models can explain many TDE candidates, issues including the circularization process and the formation of the accretion disk remain unclear. If the pericenter of the tidally disrupted star lies slightly beyond the tidal radius, the star cannot be fully destroyed and therefore retains a core after the encounter.
\cite{Guillochon_Hydrodynamical_2013} found the critical value of the so-called penetration factor that separates TDEs into full TDEs (FTDEs) and partial TDEs (PTDEs) by hydrodynamical simulations. In this paper, we consider PTDEs that will not produce an outflow (wind) driven by super-Eddington accretion or the self-collision. Without the outflow, these PTDEs provide a clean environment to study the circularization process of the debris stream and how the accretion disk forms. In Section \ref{typical}, we describe the characteristic dynamical properties of PTDEs. In Sections \ref{cir_process} - \ref{temperature_evolution}, we calculate the light curve and the temperature of PTDEs in the circularization process and in the viscous evolution. In Section \ref{dependence}, we study the dependence on the mass of the BH and the disrupted star. In Section \ref{event_detection_rate}, we estimate the ratio of the event rates between PTDEs and FTDEs, and calculate the detection rate of PTDEs. We summarize and discuss the results in Section \ref{conclusion}. \section{Characteristic dynamical properties} \label{typical} When a star approaches the tidal radius $R_{\rm T} = R_*(M_{\rm h}/M_*)^{1/3}$ \citep{Rees_Tidal_1988,Phinney_Manifestations_1989} in a parabolic orbit, it can be disrupted by the tidal force from an SMBH. Here $M_{\rm h} \equiv {M_6} \times 10^6\ \rm{M_{\odot}}$, $R_* \equiv r_* \times R_{\odot}$, $M_* \equiv m_* \times \rm{M_{\odot}}$ are the BH's mass, the star's radius, and the star's mass, respectively. During the encounter, some material within the star can overcome the self-gravitational force and becomes unbound from the star, while the rest remains bound and leaves a core after the encounter. We can use the penetration factor $\beta \equiv R_{\rm T} / R_{\rm p}$ to quantify this effect.
Here $R_{\rm p}$ is the pericenter radius; in units of the BH's Schwarzschild radius $R_{\rm S} = 2GM_{\rm h}/c^2$, it is \begin{equation} R_{\rm p} \simeq 23\ \beta^{-1} M_6^{-2/3} r_* m_*^{-1/3}\ R_{\rm S}. \end{equation} The hydrodynamic simulation results of \citet[hereafter G13]{Guillochon_Hydrodynamical_2013} showed that a star is fully disrupted when $\beta \ge \beta_{\rm d}$, and a partial disruption happens when $\beta < \beta_{\rm d}$. For stars with the polytropic index $\gamma = 4/3$, $\beta_{\rm d}=1.85$, and for $\gamma = 5/3$, $\beta_{\rm d}=0.9$. Notice that there are some other works indicating similar results \citep[e.g.,][$\beta_{\rm d}=0.92$ for $\gamma = 5/3$ and $\beta_{\rm d}=2.01$ for $\gamma = 4/3$]{Mainetti_The_2017}. \cite{Ryu_Tidal1_2020} found different results for different stellar masses by using the stellar evolution code MESA. When the star is tidally disrupted by an SMBH, the debris would have a range in specific energy due to their locations in the SMBH's potential well. In the ``frozen-in'' model \citep{Lodato_Stellar_2009}, the energy spread is $\epsilon = \pm GM_{\rm h}x/R_{\rm p}^2$, where $x$ is the distance from the center of the star. The most bound debris with a specific energy\footnote{Notice that this is different from the case of full disruption ($\beta \gtrsim 1$), whose specific energy of the most bound material is $\epsilon_0 \simeq GM_{\rm h} R_*/R_{\rm T}^2$, because in the latter case this energy is determined at $R_{\rm T}$, not at $R_{\rm p}$ \citep{Guillochon_Hydrodynamical_2013}.} of \begin{equation} \label{eq:eps0} \epsilon_0 \simeq \frac{GM_{\rm h} R_*}{R_{\rm p}^2} \simeq \frac{GM_{\rm h}}{2a_0} \end{equation} is the first to return to the pericenter, where \begin{equation} a_0 \simeq \frac{R_{\rm p}^2}{2R_*} \label{semimajor} \end{equation} is the semi-major axis of its orbit, corresponding to an eccentricity of $e_0 = 1 - R_{\rm p}/a_0$.
The period of this orbit \begin{equation} t_{\rm fb} = 2 \pi \sqrt{a_0^3/GM_{\rm h}} \simeq 41\ \beta^{-3} M_6^{1/2} r_*^{3/2} m_*^{-1}\ {\rm day} \label{pmb} \end{equation} determines the characteristic time-scale of the debris fallback. Typically, less bound debris follows the most bound debris to return, at a rate that decays as $t^{-5/3}$ based on a constant $dM/dE$ \citep{Rees_Tidal_1988, Phinney_Manifestations_1989,Ramirez-Ruiz_THE_2009}. In reality the fallback rate is more complex; in PTDEs, even the tail of the fallback rate is much steeper than the $t^{-5/3}$ power law \citep{Guillochon_Hydrodynamical_2013, Ryu_Tidal3_2020,Coughlin_Partial_2019}. In order to get more flexible results and to obtain the total bound mass that falls back, we adopt the fitting formulae of the simulation results in G13 for the total fallback mass $\Delta M$. The stellar polytropic index is $\gamma=5/3$ in this paper. \cite{Ryu_Tidal3_2020,Law_Stellar_2020} obtained similar results from simulations of the disruption, considering the more realistic stellar structure output by the stellar evolution code MESA. Since we only aim to predict the general features of PTDEs, we omit the study of the dependence on stellar evolution. In this paper, we only consider the bound debris left by the encounter, which will be accreted by the SMBH afterward. The remnant core could possibly be bound to the SMBH after the encounter, and fall back to be disrupted again \citep{Ryu_Tidal3_2020}. However, the orbital periods of the remnant core are $\simeq 400 - 40,000$ yr, and therefore too long for detection. Furthermore, whether the remnant core is bound or unbound is still unclear \citep{Manukian_Turbovelocity_2013,Ryu_Tidal3_2020}.
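As a numerical sanity check (ours, not part of the original analysis), the characteristic scales above can be evaluated directly in CGS units; the short Python sketch below reproduces the $\simeq 23\, R_{\rm S}$ pericenter of Equation (1) and the $\simeq 41$ day fallback time of Equation (\ref{pmb}) for $\beta = M_6 = m_* = r_* = 1$.

```python
import math

# CGS constants and fiducial parameters (assumed values, for illustration)
G, c = 6.674e-8, 2.998e10
Msun, Rsun = 1.989e33, 6.957e10
M6, m_star, r_star, beta = 1.0, 1.0, 1.0, 1.0

Mh = M6 * 1e6 * Msun
Rstar, Mstar = r_star * Rsun, m_star * Msun

R_T = Rstar * (Mh / Mstar) ** (1.0 / 3.0)             # tidal radius
R_p = R_T / beta                                      # pericenter radius
a0 = R_p ** 2 / (2.0 * Rstar)                         # semi-major axis of most bound debris
e0 = 1.0 - R_p / a0                                   # its eccentricity
t_fb = 2.0 * math.pi * math.sqrt(a0 ** 3 / (G * Mh))  # fallback time

print(f"R_p / R_S = {R_p / (2 * G * Mh / c**2):.1f}")  # compare with ~23
print(f"e_0 = {e0:.3f}")
print(f"t_fb = {t_fb / 86400.0:.1f} day")              # compare with ~41 day
```

For other parameters, the analytic scalings $R_{\rm p} \propto \beta^{-1} M_6^{-2/3}$ and $t_{\rm fb} \propto \beta^{-3} M_6^{1/2}$ follow directly from the same expressions.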
\section{Circularization} \label{cir_process} \subsection{Stream Crossing} \label{sc_cir} Many studies indicate that the circularization of the debris is possible due to the general relativistic apsidal precession, which is crucial for the formation of the accretion disk \citep{Rees_Tidal_1988,Hayasaki_Finite_2013,Dai_Soft_2015,Bonnerot_Disc_2015,Bonnerot_Long_2017}. Upon each passage of the pericenter, the stream precesses by a small angle $\phi \sim R_{\rm S} / R_{\rm p}$, and thus undergoes a succession of self crossings, each of which dissipates an amount of the specific energy. Consequently, the apocenter of the stream's new orbit moves closer to the BH, and its eccentricity decreases. The stream crossing is illustrated in Figure \ref{sketch}. \begin{figure} \centering \includegraphics[scale=0.5]{sketch.eps} \caption{A sketch of the stream crossing. After the stream precesses by an angle $\phi_{\rm N}$ on the $N$-th orbit, it collides with itself; thus it loses energy and moves to the $N+1$-th orbit. Because of the efficient radiative diffusion in PTDEs, most of the thermal energy produced by the collision radiates from a small region near the collision point. The semi-major axes of orbits $N$ and $N+1$ are $a_{\rm N}$ and $a_{\rm N+1}$, respectively. The velocities of the two colliding components are shown as the orange arrows. This sketch is adapted from \cite{Bonnerot_Long_2017}.} \label{sketch} \end{figure} In the $N$-th orbit, the specific energy $\epsilon_{\rm N}$, semi-major axis $a_{\rm N}$, and specific angular momentum $j_{\rm N}$ are related by $\epsilon_{\rm N} = GM_{\rm h}/(2 a_{\rm N})$ and $j_{\rm N}^2= GM_{\rm h} a_{\rm N}(1- e_{\rm N}^2)$.
Under the assumption of completely inelastic collision, the dissipated energy per orbit is (\cite{Bonnerot_Long_2017}, also see \cite{Dai_Soft_2015}) \begin{equation} \label{eq:delta-eps} \Delta \epsilon_{\rm N} \simeq \frac{9}{2} \pi^2 \frac{e_{\rm N}^2}{c^4} \left(\frac{GM_{\rm h}}{j_{\rm 0}}\right)^6 = \Delta \epsilon_0 \frac{e_{\rm N}^2}{e_0^2} \end{equation} assuming the stream's angular momentum is conserved during the crossings, i.e., $j_{\rm N} = j_0$. The dissipated energy during the first crossing is \begin{equation} \label{eq:delta-eps0} \Delta \epsilon_0 = \frac{9}{16} \frac{\pi^2 e_0^2}{(1+e_0)^3} \left(\frac{R_{\rm S}}{R_{\rm p}}\right)^3 c^2 \end{equation} When the stream is eventually fully circularized, i.e., $e_N=0$, then $\epsilon_{\rm N}$ shall take its final value $\epsilon_{\rm c} = GM_{\rm h} / (2R_{\rm c})$, where \begin{equation} R_{\rm c}= R_{\rm p}(1+e_0) \label{eq:Rc} \end{equation} is the so-called circularization radius. \subsection{Energy Dissipation Rate History} \label{dissipation_cir} We assume that in the circularization phase the energy dissipation mainly comes from the self-collision of the stream. The viscous dissipation is relatively weak during this phase, but it will become important after the circularization (see the discussion in section \ref{conclusion}). Furthermore, we will consider only the part of the stream that is made of the most bound debris, i.e., the ``main stream'', since it returns earlier and comprises the major portion of the total stream mass, and neglect the tail of the stream. Due to apsidal precession, this stream crosses and collides with itself in successive orbits, reducing its energy. 
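As a quick consistency check (ours, not in the original text), one can verify numerically that Equation (\ref{eq:delta-eps}) evaluated on the first orbit ($e_{\rm N}=e_0$), with $j_0^2 = GM_{\rm h} R_{\rm p} (1+e_0)$, reduces exactly to Equation (\ref{eq:delta-eps0}); a minimal Python sketch:

```python
import math

# CGS constants; fiducial PTDE parameters (illustrative assumptions)
G, c = 6.674e-8, 2.998e10
Msun, Rsun = 1.989e33, 6.957e10
Mh = 1e6 * Msun
beta = 0.6
R_p = Rsun * (Mh / Msun) ** (1.0 / 3.0) / beta
a0 = R_p ** 2 / (2.0 * Rsun)
e0 = 1.0 - R_p / a0
RS = 2.0 * G * Mh / c ** 2

# Eq. (eq:delta-eps) at N = 0, with j_0^2 = G M_h R_p (1 + e_0)
j0 = math.sqrt(G * Mh * R_p * (1.0 + e0))
d_eps_N0 = 4.5 * math.pi ** 2 * e0 ** 2 / c ** 4 * (G * Mh / j0) ** 6

# Eq. (eq:delta-eps0) directly
d_eps_0 = (9.0 / 16.0) * math.pi ** 2 * e0 ** 2 / (1.0 + e0) ** 3 \
          * (RS / R_p) ** 3 * c ** 2

print(f"{d_eps_N0 / d_eps_0:.6f}")  # -> 1.000000
```

The agreement is exact because $(GM_{\rm h}/j_0)^6/c^4 = (GM_{\rm h})^3/[R_{\rm p}^3(1+e_0)^3 c^4]$ and $(R_{\rm S}/R_{\rm p})^3 c^2 = 8(GM_{\rm h})^3/(c^4 R_{\rm p}^3)$, so the two prefactors $9/2$ and $8\times 9/16$ coincide.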
Adopting the iterative method in \cite{Bonnerot_Long_2017}, the specific energy dissipation rate is $\Delta \epsilon_{\rm N}/t_{\rm s, N}$ in the $N$-th orbit, such that the dissipation rate can be estimated by $\Delta M \Delta \epsilon_{\rm N}/t_{\rm s, N}$ during the circularization, where $t_{\rm s, N}$ is the orbital period of the $N$-th orbit. Assuming efficient radiative cooling, the luminosity is equal to the dissipation rate, i.e., \begin{equation} \label{eq:Lc} L_{\rm N} \simeq \Delta M \frac{\Delta \epsilon_{\rm N}}{t_{\rm s, N}}. \end{equation} Here we assume the mass of the ``main stream'' equals the total fallback mass and neglect the tail of the returning stream. The reason is that the main stream contains most of the fallback mass after the first collision, which is approximated to be $\Delta M (1-(4t_{\rm fb}/t_{\rm fb})^{-5/4}) \simeq 0.82\ \Delta M$ by assuming $\dot M_{\rm fb} \propto (t/t_{\rm fb})^{-9/4}$ \citep{Coughlin_Partial_2019,Miles_Fallback_2020}. The mass of the main stream is very sensitive to $\beta$. Tidal disruption occurs only when $\beta > 0.5$ \citep{Ryu_Tidal1_2020}. In this paper, we calculate three cases of PTDEs with $\beta = 0.55, 0.6, 0.7$, whose total fallback masses are $\Delta M \simeq 0.0048, 0.0254, 0.1222\ M_*$, respectively \citep{Guillochon_Hydrodynamical_2013}. Alternatively, we can write the energy dissipation rate history in a differential form: $\dot \epsilon(t) = d \epsilon/dt= \Delta \epsilon_{\rm N}/t_{\rm s, N}$. Substituting the orbital period-energy relation $t_{\rm s, N} \equiv 2\pi GM_{\rm h} /(2 \epsilon_{\rm N})^{3/2}$ and Equation (\ref{eq:delta-eps}), we get a differential equation of $\epsilon$: \begin{equation} \label{eq:diss} \dot \epsilon = \frac{\Delta \epsilon_0}{t_{\rm fb}} \frac{1}{e_0^2} \left(1 - \frac{\epsilon}{\epsilon_{\rm c}}\right) \left(\frac{\epsilon}{\epsilon_0}\right)^{3/2}.
\end{equation} When $a_{\rm N} \gg R_{\rm c}$, $\epsilon /\epsilon_{\rm c} \ll 1$, the factor $1-\epsilon / \epsilon_{\rm c}$ can be dropped; then one can determine the time-scale of circularization by solving for $\epsilon(t)$ from the above equation. Letting $e_0 \sim 1$, we obtain the circularization time-scale \begin{equation} \label{circularization} \begin{split} t_{\rm cir} &\simeq 2\frac{ \epsilon_0}{\Delta \epsilon_0} t_{\rm fb} \\ &\simeq 8 \ \beta^{-1} M_6^{-5/3} m_*^{-1/3} r_*^2 \ t_{\rm fb}, \end{split} \end{equation} for PTDEs. The same formula was obtained in \cite{Bonnerot_Long_2017} along a different approach. For a PTDE with $\beta = 0.5$, it takes a duration of $\sim 16\ t_{\rm fb}$ to form a circular disk. Other mechanisms, e.g., the magneto-rotational instability (MRI), may cause angular momentum exchange and speed up the circularization process \citep{Bonnerot_Long_2017, Chan_Magnetorotational_2018}. We assume the viscous effects are weak in the circularization stage. We will further explore this issue in Section \ref{conclusion}. \begin{figure} \centering \includegraphics[scale=0.5]{Lc_m6.eps} \caption{Bolometric luminosity history during the circularization process for the disruption of a star ($m_*=r_*=1$) by a $10^6\ \rm{M_{\odot}}$ SMBH. The grey lines represent the borderline FTDEs, which have $\beta=\beta_{\rm d}=0.9$; the others belong to PTDEs. It is calculated by Equation (\ref{eq:diss}).} \label{Lc_m6} \end{figure} Furthermore, from Equation (\ref{eq:diss}) it is straightforward to find that the peak dissipative luminosity per mass is \begin{equation} \label{eq:dissp} \dot \epsilon_{\rm p} = \frac 25 \left(\frac35\right)^{3/2}\frac{1}{e_0^2}\left(\frac{\epsilon_{\rm c}}{\epsilon_0}\right)^{3/2}\frac{\Delta \epsilon_0}{t_{\rm fb}}.
\end{equation} Then using Equations (\ref{eq:eps0}) and (\ref{eq:delta-eps0}), we get the peak luminosity \begin{equation} \label{eq:Lcp} L_{\rm p} = 6 \times 10^{42} \left(\frac{\Delta M}{0.01\ M_{\odot}}\right)\beta^{9/2} M_6^2 m_*^{3/2} r_*^{-9/2}\ {\rm erg\ s^{-1}}. \end{equation} Using the differential form, i.e., Equation (\ref{eq:diss}), we can rewrite the circularization luminosity as $L_{\rm c}(t) \simeq \Delta M \dot \epsilon(t)$ and plot it in Figure \ref{Lc_m6}. The result of this differential form is equivalent to that of the iterative form (Equation (\ref{eq:Lc})), and it provides a clearer relation between the parameters; hence we adopt it in the following calculations. \subsection{Photon Diffusion} \label{photon_diffusion} The luminosity estimate above assumes that photons can diffuse efficiently. However, we should examine the issue of diffusion efficiency more carefully. After the self-collision, the gas is heated, and the photons need some time to diffuse out. The diffusion time-scale determines the observed luminosity and the spectrum. If the diffusion time-scale is much shorter than the orbital period, the radiation mainly emerges from the stream near the collision position. Otherwise, the thermal energy will be accumulated during the circularization process. The radiative diffusion time-scale after the shock is given by $t_{\rm diff} \simeq \tau h_{\rm s}/c$, where $c$ and $h_{\rm s}$ are the speed of light and the height of the stream, respectively. The optical depth of the stream after the shock is $\tau \simeq \kappa_{\rm es} \rho h_{\rm s}$, where $\kappa_{\rm es}\simeq 0.34\ {\rm cm^2\ g^{-1}}$ is the opacity for electron scattering for a typical stellar composition \footnote{The atoms are ionized after the shock, and the temperature is high, so that electron scattering dominates the opacity.}.
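As a cross-check of the circularization luminosity $L_{\rm c}(t) \simeq \Delta M \,\dot\epsilon(t)$, Equation (\ref{eq:diss}) can be integrated numerically. The sketch below (our illustrative calculation with a simple explicit Euler scheme, not the authors' code) recovers a peak close to the $6\times 10^{42}\ {\rm erg\ s^{-1}}$ normalization of Equation (\ref{eq:Lcp}) for $\beta = M_6 = m_* = r_* = 1$ and $\Delta M = 0.01\ M_\odot$.

```python
import math

# CGS constants and fiducial parameters (illustrative assumptions)
G, c = 6.674e-8, 2.998e10
Msun, Rsun = 1.989e33, 6.957e10
Mh, dM = 1e6 * Msun, 0.01 * Msun

R_p = Rsun * (Mh / Msun) ** (1 / 3)            # beta = 1
a0 = R_p**2 / (2 * Rsun)
e0 = 1 - R_p / a0
RS = 2 * G * Mh / c**2
eps0 = G * Mh / (2 * a0)                       # specific energy of most bound debris
eps_c = G * Mh / (2 * R_p * (1 + e0))          # energy at the circularization radius
d_eps0 = 9 / 16 * math.pi**2 * e0**2 / (1 + e0)**3 * (RS / R_p)**3 * c**2
t_fb = 2 * math.pi * math.sqrt(a0**3 / (G * Mh))

def eps_dot(eps):
    # differential dissipation rate, Eq. (eq:diss)
    return d_eps0 / t_fb / e0**2 * (1 - eps / eps_c) * (eps / eps0)**1.5

# explicit Euler integration of eps(t), starting from eps0, out to 200 t_fb
eps, dt, L_peak = eps0, 1e-3 * t_fb, 0.0
for _ in range(200_000):
    L_peak = max(L_peak, dM * eps_dot(eps))    # L_c(t) = dM * eps_dot(t)
    eps += eps_dot(eps) * dt

print(f"L_peak = {L_peak:.2e} erg/s")          # close to 6e42, cf. Eq. (eq:Lcp)
```

The peak occurs at $\epsilon = (3/5)\,\epsilon_{\rm c}$, as assumed in Equation (\ref{eq:dissp}).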
Assuming the stream is homogeneous in the interior, the density of the stream after the shock is estimated by \begin{equation} \label{eq:density} \rho = \frac{\Delta M}{4 \pi a_{\rm s} h_{\rm s} w_{\rm s} }. \end{equation} Here $w_{\rm s}$ is the width of the stream, and the perimeter of the orbit is $\sim 4 a_{\rm s}$, where $a_{\rm s}$ is the semi-major axis of the orbit. Then the diffusion time-scale after the shock can be written as \begin{equation} \label{eq:tdiff} \begin{split} t_{\rm diff} &\simeq \kappa_{\rm es} \frac{\Delta M}{4\pi c a_{\rm s}}\left(\frac{h_{\rm s}}{w_{\rm s}}\right) \\ &\simeq 1.4 \times 10^{-2}\ \left(\frac{\Delta M}{0.01\ \rm{M_{\odot}}}\right) \left(\frac{h_{\rm s}}{w_{\rm s}}\right) \times \\ &\left(\frac{a_{\rm s}}{a_{\rm 0}}\right)^{-5/2} \beta^5 M_6^{-7/6} r_*^{-5/2} m_*^{5/3}\ t_{\rm s}, \end{split} \end{equation} where $t_{\rm s}$ is the orbital period of the stream. In order to estimate the evolution of the diffusion time-scale during the circularization process, we need to consider the evolution of the apocenter radius and the height-to-width ratio of the stream. The apocenter radius becomes smaller as the circularization process carries on. The change of the height-to-width ratio is complicated, since it evolves under gravity and the pressure force. \cite{Bonnerot_Long_2017} assume $h_{\rm s}/w_{\rm s} = 1$ during the circularization process. We relax this assumption here, because the SMBH's gravity will restrict the expansion of the stream in the vertical direction, and the width of the stream increases slightly due to the viscous shear. Therefore, when the stream is cold at later times, the stream will become geometrically thin \citep{Bonnerot_Disc_2015}, so the height-to-width ratio should be $h_{\rm s}/w_{\rm s} \ll 1$.
We set $h_{\rm s}/w_{\rm s} \simeq 1$ at the beginning of the circularization process, and let $h_{\rm s}/w_{\rm s} \simeq 10^{-2}$ when the circularization process completes, which is consistent with a geometrically thin ring (or disk) \citep{Kato_Black_1998}. To account for a smooth transition, we adopt the following form for the evolution of the height-to-width ratio \begin{equation} \label{eq:htow} \frac{h_{\rm s}}{w_{\rm s}} \simeq 10^{-2\frac{t}{t_{\rm cir}}}. \end{equation} We show the history of $t_{\rm diff}/t_{\rm s}$ in Figure \ref{diffusion}. It shows that the radiative diffusion is efficient during the whole circularization process for $\beta \sim 0.5$, but not for $\beta \sim 0.9$. If the radiative diffusion is efficient, i.e., $t_{\rm diff} \lesssim t_{\rm s}$, then the thermal energy will not be accumulated in each orbit. At the early time when $a_{\rm s} \sim a_{\rm 0}$, the photons can diffuse out efficiently before the next shock for PTDEs. However, for those FTDEs with $\beta \gtrsim \beta_{\rm d}$, the bound mass is $\Delta M \sim 0.5\ M_\odot$, the diffusion time-scale is $t_{\rm diff} \gtrsim t_{\rm s}$, and thus the photons cannot diffuse out efficiently. \begin{figure} \centering \includegraphics[scale=0.5]{diffusion.eps} \caption{Ratio of the radiative diffusion time-scale to the period of the orbit for the disruption of a star ($m_*=r_*=1$) by a $10^6\ \rm{M_{\odot}}$ SMBH. The colors represent different values of the penetration factor $\beta = R_{\rm T} / R_{\rm p}$. It is calculated by plugging Equation (\ref{eq:htow}) into Equation (\ref{eq:tdiff}).} \label{diffusion} \end{figure} Since we focus on PTDEs, the assumption of efficient radiative cooling is reasonable. Therefore we do not consider the photon diffusion in the calculations of the luminosity of circularization hereafter.
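The normalization in Equation (\ref{eq:tdiff}) can be verified directly; this short sketch (our check, taking $h_{\rm s}/w_{\rm s}=1$ and $a_{\rm s}=a_0$, i.e., the first collision) recovers the $1.4\times 10^{-2}$ coefficient for $\Delta M = 0.01\ M_\odot$ and $\beta = M_6 = m_* = r_* = 1$.

```python
import math

# CGS constants and fiducial parameters (illustrative assumptions)
G, c = 6.674e-8, 2.998e10
Msun, Rsun = 1.989e33, 6.957e10
kappa_es = 0.34                      # cm^2/g, electron-scattering opacity
Mh, dM = 1e6 * Msun, 0.01 * Msun

R_p = Rsun * (Mh / Msun) ** (1 / 3)  # beta = 1
a0 = R_p**2 / (2 * Rsun)             # a_s = a_0 at early times
t_s = 2 * math.pi * math.sqrt(a0**3 / (G * Mh))   # orbital period (= t_fb here)

# Eq. (eq:tdiff) with h_s/w_s = 1
t_diff = kappa_es * dM / (4 * math.pi * c * a0)
print(f"t_diff / t_s = {t_diff / t_s:.2e}")       # ~1.4e-2
```

Scaling $\Delta M$ up to the $\sim 0.5\ M_\odot$ typical of FTDEs pushes the same ratio above unity, as stated in the text.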
\section{The disk viscous evolution} \label{viscous_evolution} After the circularization process, the stream settles at the radius $R_{\rm c}$ with the width $w_0$, and the viscous evolution becomes important. In this section we review the basic disk equations, then study the subsequent evolution by an analytic calculation. Then, using a numerical model of viscous evolution, we test the analytical results and obtain the detailed spectral evolution. \subsection{Disk Equations} The energy balance during the viscous evolution is given by $Q^+=Q^-_{\rm adv}+Q^-_{\rm rad}$. Here the viscous heating rate per unit surface area is \begin{equation} Q^+=\frac94 \nu\Sigma\Omega^2. \label{Q+} \end{equation} The radiative cooling rate is \begin{equation} Q^-_{\rm rad}=\frac{4acT_{\rm c}^4}{3\kappa \Sigma}, \label{Qrad} \end{equation} and the advective term is given by \begin{equation} Q^-_{\rm adv} = \frac{\dot M_{\rm acc}}{2\pi R^2}\frac{P_{\rm tot}}{\rho} \xi, \label{Qadv} \end{equation} where $\dot M_{\rm acc} = 3 \pi \nu \Sigma$ is the local accretion rate and $\xi$ is close to unity \citep{Frank_Accretion_1985}. Here $\Sigma$, $\Omega$, $a$, and $T_{\rm c}$ are the local surface density, the local angular velocity, the radiation constant, and the mid-plane temperature, respectively. The total pressure is $P_{\rm tot} = P_{\rm rad} + P_{\rm gas} = aT_{\rm c}^4/3+\rho k_{\rm b} T_{\rm c}/(\mu m_{\rm p})$. Here $k_{\rm b}$, $\mu = 0.6$, and $m_{\rm p}$ are the Boltzmann constant, the mean particle weight, and the proton mass, respectively. The local density is $\rho \simeq \Sigma/(2H)$, where the scale height is given by hydrostatic equilibrium, i.e., \begin{equation} H=\Omega^{-1} \left(\frac{P_{\rm tot}}{\rho}\right)^{1/2}. \end{equation} We adopt the $\alpha_{\rm g}$-viscosity \citep{Sakimoto_Accretion_1981}, i.e., \begin{equation} \nu=\frac{2\alpha P_{\rm gas}}{3\Omega \rho}, \label{viscosity} \end{equation} to calculate the viscous evolution.
The local opacity is $\kappa=\kappa_{\rm es} + \kappa_{\rm R}$, where the electron scattering opacity is that of Thomson scattering, i.e., $\kappa_{\rm es} = 0.2(1+X)\ {\rm cm^2g^{-1}}$, and the Kramers opacity is given by $\kappa_{\rm R} = 4 \times 10^{25} Z(1+X) \rho T^{-3.5}\ {\rm cm^2g^{-1}}$. We adopt the solar composition, i.e., $X=0.71$, $Y=0.27$, and $Z=0.02$. \subsection{Analytical Calculation} \label{analytic} The viscous evolution was considered by \cite{Cannizzo_The_1990}, and we summarize it here. We assume the disk is in thermal equilibrium between viscous heating and radiative cooling, i.e., $Q^+ = Q^-_{\rm rad}$. Combining this balance with Equations (\ref{Q+}), (\ref{Qrad}) and (\ref{viscosity}), we obtain the mid-plane temperature \begin{equation} T_{\rm c}=\left( \frac98 \frac{\alpha \kappa}{ac} \Omega \Sigma^2 \frac{k_{\rm b}}{\mu m_{\rm p}} \right)^{1/3}. \label{Tc} \end{equation} The viscous time-scale is given by \begin{equation} t_{\nu} = R^2/\nu. \label{viscous_time} \end{equation} It is a function of time and radius, but we can average it over $R$ by letting $R=R_{\rm d}$ and $\Sigma=\Delta M/(2\pi R_{\rm d}^2)$, where $R_{\rm d}$ is the average radius of the disk (ring). Substituting the average surface density and radius into Equation (\ref{Tc}), and using Equations (\ref{viscosity}) and (\ref{viscous_time}), we obtain \begin{equation} \begin{split} &t_{\nu} = C R_{\rm d}^{7/3} M_{\rm d}^{-2/3}, \\ &C \equiv \alpha^{-4/3} \left(\frac{k_{\rm b}}{\mu m_{\rm p}} \right)^{-4/3}\left(\frac{3acGM_{\rm h}}{\kappa} \right)^{1/3}, \end{split} \label{tv} \end{equation} where $M_{\rm d}$ is the total mass of the disk (ring).
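As a consistency check of Equation (\ref{Tc}), a minimal numerical sketch (in cgs units; the values of $\Omega$ and $\Sigma$ are illustrative assumptions, not taken from the text) verifies that this mid-plane temperature indeed balances the heating and cooling rates under the $\alpha_{\rm g}$-viscosity:

```python
# cgs constants
a_rad = 7.5657e-15      # radiation constant [erg cm^-3 K^-4]
c_light = 2.99792458e10 # speed of light [cm s^-1]
k_b = 1.380649e-16      # Boltzmann constant [erg K^-1]
m_p = 1.6726e-24        # proton mass [g]
mu = 0.6                # mean molecular weight

# assumed illustrative disk parameters (not from the text)
alpha, kappa = 1.0, 0.34   # viscosity parameter, electron-scattering opacity
Omega = 1e-5               # local angular velocity [s^-1]
Sigma = 1e4                # surface density [g cm^-2]

# Eq. (Tc): mid-plane temperature from Q+ = Q-_rad
Tc = (9/8 * alpha*kappa/(a_rad*c_light) * Omega * Sigma**2
      * k_b/(mu*m_p))**(1/3)

# check the balance directly with the alpha_g viscosity of Eq. (viscosity)
nu = 2*alpha*k_b*Tc/(3*Omega*mu*m_p)   # nu = 2 alpha P_gas / (3 Omega rho)
Q_plus = 9/4 * nu * Sigma * Omega**2   # Eq. (Q+)
Q_rad = 4*a_rad*c_light*Tc**4/(3*kappa*Sigma)  # Eq. (Qrad)
assert abs(Q_plus/Q_rad - 1) < 1e-10
```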
At the beginning, $R_{\rm d}=R_{\rm c}$ and $M_{\rm d}=\Delta M$, so the initial viscous time-scale is \begin{equation} \begin{split} t_0& = C R_{\rm c}^{7/3} (\Delta M)^{-2/3} \\ & = 13.4\ \alpha^{-4/3} \kappa^{-1/3} \beta^{-7/3} M_6^{10/9} m_*^{-7/9} r_*^{7/3} \times \\ &\left(\frac{\Delta M}{0.01\ \rm{M_{\odot}}} \right)^{-2/3}\ {\rm yr}, \end{split} \label{t0} \end{equation} which is the time-scale of the ring-to-disk phase. Notice that the ring-to-disk time-scale does not depend on the initial width of the ring. In order to estimate the bolometric luminosity, we assume the accretion rate is \begin{equation} \frac{dM_{\rm d}}{dt} = -\frac{M_{\rm d}}{t_{\nu}}, \label{macc} \end{equation} and keep the total angular momentum $J_{\rm d} = M_{\rm d} (GM_{\rm h}R_{\rm d})^{1/2}$ constant during the ring-to-disk and disk phases \citep{Kumar_Mass_2008}, i.e., \begin{equation} M_{\rm d}^2 R_{\rm d} = (\Delta M)^2 R_{\rm c}. \end{equation} Then Equation (\ref{tv}) becomes $t_{\nu}=t_0(M_{\rm d}/\Delta M)^{-16/3}$. Substituting it into Equation (\ref{macc}) and assuming a constant opacity, one obtains the accretion rate \begin{equation} \dot M_{\rm acc} = \frac{\Delta M}{t_0} \left(1+\frac{16}{3}\frac{t}{t_0} \right)^{-19/16}. \label{accretion_disk} \end{equation} The bolometric luminosity can be written as $L_{\rm disk} = \eta \dot M_{\rm acc} c^2$. The efficiency is $\eta = 1/12$ for a Schwarzschild BH. We plot it in Figure \ref{Lbol} with the electron-scattering assumption $\kappa=\kappa_{\rm es}$ to compare with the numerical results. According to the results in Figure \ref{Lbol}, we can estimate the peak luminosity in the viscous evolution by \begin{equation} \begin{split} L_{\rm disk, p} &\simeq \eta c^2 \dot M_{\rm acc}(t_0) \\ &\simeq 2 \times 10^{41}\ \alpha^{4/3} \left(\frac{\Delta M}{0.01\ \rm{M_{\odot}}} \right)^{5/3} \times \\ &\beta^{7/3} M_6^{-10/9} m_*^{7/9} r_*^{-7/3}\ {\rm erg\ s^{-1}}.
\label{eq:Lbc} \end{split} \end{equation} When $t\gtrsim t_0$, the luminosity is $\propto t^{-1.2}$, which is the same as the self-similar result in \cite{Cannizzo_The_1990}. Notice that the early part ($t < t_0$) of the solution is a rough approximation, because at this stage the accretion rate in Equation (\ref{accretion_disk}) is not necessarily the mass inflow rate at the inner boundary of the disk (i.e., near the BH's event horizon) due to a likely viscous diffusion delay. A more rigorous way to explore this early phase is described below. \subsection{Numerical Calculation} \label{numerical} The viscous evolution is governed by the diffusion equation of the surface density \citep{Frank_Accretion_1985}, i.e., \begin{equation} \frac{\partial \Sigma}{\partial t} = \frac{3}{R}\frac{\partial}{\partial R} \left[R^{1/2} \frac{\partial}{\partial R} (\nu \Sigma R^{1/2}) \right]. \end{equation} We list the assumptions and initial conditions of the numerical model here: \begin{itemize} \item We assume the ring is axisymmetric with a Gaussian surface density profile centered at the circularization radius $R_{\rm c}$, \begin{equation} \Sigma (R)=\zeta \frac{\Delta M}{R_{\rm c} w_0} {\rm exp} \left[-\left(\frac{R-R_{\rm c}}{2 w_0}\right)^2\right], \end{equation} where $\zeta \simeq 1/(4 \pi^{3/2})$ is a coefficient that satisfies \begin{equation} \int^{\infty}_{R_{\rm in}} \Sigma (R) 2 \pi R\ dR = \Delta M. \end{equation} We set the initial width of the ring as $w_0 = 0.1 R_{\rm c}$. \item We set the inner boundary to be $R_{\rm in}=R_{\rm ISCO}=3R_{\rm S}$, corresponding to a Schwarzschild BH. Once matter arrives at the boundary, it is removed. \item For simplicity we adopt the vertically-averaged disk (ring), and neglect the returning stream $\dot M_{\rm fb}$ after the circularization, so we only calculate the 1D evolution. \item We adopt the advective cooling term in the form of Equation (\ref{Qadv}).
However, it is important only if the accretion rate is super-Eddington \citep{Shen_Evolution_2014}. \item We use the $\alpha_{\rm g}$-viscosity ansatz, i.e., Equation (\ref{viscosity}), to avoid the thermal instability \citep{Lightman_Black_1974}. Moreover, because the fitting results of TDEs in \cite{Van_Velzen_Late_2019} favor a high viscosity, i.e., $\alpha > 0.1$, we set $\alpha = 1$ in the following calculations. \end{itemize} We show the results of the evolution of the surface density in Figure \ref{ring_to_disk_sr}. The time-scales of the ring-to-disk phase are consistent with the analytical calculation in section \ref{analytic}. Even though the opacity in the numerical calculation is not a constant, the time-scale $t_0$, i.e., Equation (\ref{t0}), depends weakly on $\kappa$, so $t_0$ is a good approximation of the time-scale of the ring-to-disk phase. The emergent energy flux is given by \begin{equation} F_{\rm vis} = \frac12 Q^-_{\rm rad} = \sigma T_{\rm eff}^4, \label{Fvis} \end{equation} where $\sigma$ and $T_{\rm eff}$ are the Stefan-Boltzmann constant and the local effective temperature, respectively. The factor $1/2$ accounts for the two sides of the disk (ring). The bolometric luminosity is given by \begin{equation} L_{\rm disk} = 2 \times \int^{R_{\rm out}}_{R_{\rm in}} 2\pi R \sigma T_{\rm eff}^4\ dR, \label{Lb} \end{equation} and is shown in Figure \ref{Lbol}. After the formation of the accretion disk, the numerical results approach the self-similar ones. However, the analytical results overestimate the luminosity in the ring-to-disk phase. For an observer at distance $D$ with viewing angle $i$, under the blackbody assumption the flux at wavelength $\lambda$ is given by \begin{equation} F_{\lambda}=\frac{2\pi\cos{i}}{D^2} \int^{R_{\rm out}}_{R_{\rm in}} B_{\lambda}(T_{\rm eff})R\ dR,
\label{vFv} \end{equation} where the Planck function $B_{\lambda}$ is \begin{equation} B_{\lambda}=\frac{2h c^2}{\lambda^5 \left(\mathrm{e}^{hc/(\lambda k_{\rm b} T_{\rm eff})}-1\right)}, \end{equation} and $h$ is the Planck constant. The evolution of the local effective temperatures and that of the spectrum are shown in Figures \ref{local_Teff} and \ref{ring_to_disk_vFv}, respectively. We use the peak local effective temperature $\max [T_{\rm eff}(R)]$ to represent the observed effective temperature of the whole disk surface, and plot its evolution in Figure \ref{Teff_max}. \begin{figure} \centering \includegraphics[scale=0.5]{Lb_m6.eps} \caption{Disk bolometric luminosity history for different $\beta$, calculated by the numerical method in section \ref{numerical}. The dashed lines are the analytical results in section \ref{analytic}.} \label{Lbol} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{sr70.eps} \caption{Evolution of the disk surface density for the case of $\beta = 0.7$. The colors of the lines represent the time evolution.} \label{ring_to_disk_sr} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{teff70.eps} \caption{Evolution of the local effective temperature distribution on the disk for $\beta = 0.7$.} \label{local_Teff} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{vfv70.eps} \caption{Evolution of the disk emission spectrum for $\beta = 0.7$. It is calculated by Equation (\ref{vFv}) assuming a face-on view ($i=0$) and a distance $D = 10\ {\rm Mpc}$. The colors of the lines represent the time evolution.
} \label{ring_to_disk_vFv} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{teff_max.eps} \caption{Evolution of the peak local effective temperature of the disk for different $\beta$.} \label{Teff_max} \end{figure} \section{Overall light curve and temperature evolution} \label{temperature_evolution} \subsection{During Circularization} \label{circularization_radiation} During the circularization process, we assume most of the radiation comes from the shocked region, and that the shocked matter-radiation mixture is thermalized. Here we estimate the effective temperature under the single-blackbody assumption \begin{equation} T_{\rm eff} \simeq [L_{\rm c}/(\sigma S)]^{1/4}, \end{equation} and assume the view is face-on. We determine the radiative area $S$ by considering the diffusion time-scale as follows. When the radiative diffusion is efficient, i.e., $t_{\rm diff} \lesssim t_{\rm s}$, the initial radiative area $S_0$ can be written as the product of the width and the length of the radiative region: \begin{equation} S_0 \simeq 2w_{\rm s, 0} l_{\rm diff, 0}. \end{equation} Here the length can be estimated by $l_{\rm diff, 0} \sim v_{\rm a, 0} t_{\rm diff, 0}$, where $v_{\rm a, 0} \simeq (GM_{\rm h}/a_0)^{1/2}((1-e_0)/(1+e_0))^{1/2}$ is the stream velocity near the apocenter. We assume the initial expansion of the stream after the collision is ballistic, thus the width is $w_{\rm s, 0} \simeq R_0+c_{\rm s, 0} t_{\rm diff, 0}$, where $R_0$ is the initial width of the stream near the collision point and the sound speed after the collision is $c_{\rm s, 0} \simeq (2/3)\ \Delta \epsilon_0^{1/2}$. For PTDEs, the self-gravity is negligible. Before the collision, the stream width is dominated by the tidal shear, thus in this limit $R_0 \simeq R_*(a_0/R_{\rm p})$ \citep{Kochanek_The_1994,Coughlin_On_2016}. If $c_{\rm s, 0} t_{\rm diff, 0} \gg R_0$, then it is the latter term that determines $w_{\rm s, 0}$.
We can estimate the initial radiative area as \begin{equation} \begin{split} S_{\rm 0} &\simeq 4 \times 10^{-4}\ \left(\frac{\Delta M}{0.01\ \rm{M_{\odot}}}\right)^2 \times \\ &\beta^{11} M_6^{-7/6} m_*^{11/3} r_*^{-11/2}\ a_{\rm 0}^2, \end{split} \label{area0} \end{equation} where we use Equations (\ref{semimajor}), (\ref{eq:delta-eps0}) and (\ref{eq:tdiff}) and assume an initial $h_{\rm s}/w_{\rm s} \simeq 1$. After the collisions, the orbital energy is redistributed gradually, causing the stream to extend slightly in the radial direction. Furthermore, the viscous shear grows in the late stage of the circularization, further widening the stream. It is difficult to obtain the detailed evolution of the radiative area analytically. Instead, we parametrize the radiative area evolution in a smooth power-law form \begin{equation} S = S_0 \left( \frac{t}{1.5\ t_{\rm fb}}\right)^{\gamma}, \label{area} \end{equation} where $1.5\ t_{\rm fb}$ is the time when the first collision occurs. The power-law index $\gamma$ is determined by the starting condition of the disk viscous evolution (see below). \subsection{Circularization Process to Disk Accretion} In order to obtain the whole light curve, including the circularization stage and the disk viscous evolution stage, we connect the luminosity curves of the two stages at an intermediate point where the two are equal, as shown in Figure \ref{L_whole}. After this transition time $t_{\rm c}$, the disk viscous evolution dominates the luminosity. So far we have assumed the emission spectrum is a single blackbody in the circularization stage and a multi-color blackbody in the disk viscous evolution stage. Thus there are some uncertainties in the transition between the two. In fact, if the initial width of the circularized ring at the beginning of the second stage is small, the spectrum is close to a single blackbody as well.
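The index $\gamma$ in Equation (\ref{area}) is fixed by matching the radiative area to its value at the transition to the disk stage. A minimal sketch with placeholder values of the areas, $t_{\rm c}$, and $t_{\rm fb}$ (none of these numbers are taken from the text) illustrates the matching:

```python
import math

# assumed placeholder values (arbitrary area units, times in days)
S0, S_c = 1.0, 50.0       # initial and transition radiative areas
t_fb, t_c = 40.0, 400.0   # fallback time and transition time

# power-law index chosen so that S(t_c) = S_c
gamma = math.log(S_c / S0) / math.log(t_c / (1.5 * t_fb))

def area(t):
    """Radiative area evolution, Eq. (area): S = S0 (t / 1.5 t_fb)^gamma."""
    return S0 * (t / (1.5 * t_fb))**gamma

assert abs(area(1.5 * t_fb) - S0) < 1e-12  # starts at S0 at the first collision
assert abs(area(t_c) - S_c) < 1e-9         # matches the disk value at t_c
```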
Therefore, we expect the effective temperature to vary smoothly during the transition. The radiative-area evolution index $\gamma$ during the circularization process can thus be determined by \begin{equation} \gamma = \frac{\log(S_{\rm c}/S_0)}{\log(t_{\rm c}/1.5\ t_{\rm fb})}. \label{power_law} \end{equation} Here $S_{\rm c} = L_{\rm disk}/(\sigma \max[T_{\rm eff}(R)]^4)$ is the effective radiative area at the transition time $t_{\rm c}$, when the luminosity of the viscous evolution becomes dominant. With these, the overall evolution of the effective temperature for the entire PTDE can be calculated, and is plotted in Figure \ref{Teff_whole}. \begin{figure} \centering \includegraphics[scale=0.5]{L_whole.eps} \caption{Overall bolometric luminosity history for the disruption of a star ($m_*=r_*=1$) by a $10^6\ \rm{M_{\odot}}$ SMBH. The vertical dashed lines represent the time of the transition $t_{\rm c}$ between the circularization stage and the viscous evolution.} \label{L_whole} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{Teff_whole.eps} \caption{Overall effective temperature history. Parameters are the same as in Figure \ref{L_whole}. The effective temperature is defined as a single blackbody temperature in the circularization stage, and as the peak local effective temperature of the multi-color blackbody spectrum of the disk in the viscous evolution.} \label{Teff_whole} \end{figure} \section{BH mass and stellar dependence} \label{dependence} The circularization time-scale and the peak luminosity in the circularization, i.e., Equations (\ref{circularization}) and (\ref{eq:Lcp}), depend on the BH mass and the stellar properties. Because the effect of general relativity is stronger for higher BH masses, the circularization time-scale is shorter and the luminosity is higher, so the circularization stage is much easier to detect.
For comparison, here we study PTDEs with a $10^7 \ \rm{M_{\odot}}$ SMBH. We plot the light curve and the effective temperature evolution for this case in Figures \ref{L_whole_m7} and \ref{Teff_whole_m7}. Compared with the case of a $10^6\ M_{\odot}$ SMBH, the circularization time-scale is shorter, and the luminosity of the circularization stage is higher. On the other hand, the viscous time-scale is longer and the viscous luminosity is lower. The ratio between the luminosities in the circularization and in the viscous evolution is \begin{equation} \begin{split} \frac{L_{\rm p}}{L_{\rm disk, p}} &\simeq 30\ \alpha^{-4/3} \left(\frac{\Delta M}{0.01\ \rm{M_{\odot}}} \right)^{-2/3} \times \\ &\beta^{13/6} M_6^{28/9} m_*^{13/18} r_*^{-13/6}, \end{split} \end{equation} which is very sensitive to the BH mass. The luminosity in the viscous evolution is very weak for a $10^7\ M_{\odot}$ SMBH. \begin{figure} \centering \includegraphics[scale=0.5]{L7_whole.eps} \caption{Overall bolometric luminosity history for PTDEs with a $10^7\ \rm{M_{\odot}}$ SMBH.} \label{L_whole_m7} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{Teff7_whole.eps} \caption{Overall effective temperature history for PTDEs with a $10^7\ \rm{M_{\odot}}$ SMBH.} \label{Teff_whole_m7} \end{figure} Most of the tidally disrupted stars come from the lower end of the stellar mass function \citep{Stone_Rates_2016,Kochanek_Tidal_2016}. Here we also calculate the case of a smaller main-sequence star with mass $m_*=0.2$, using the mass-radius relation $r_*=m_*^{0.89}$ \citep{Torres_Accurate_2010}. We plot the light curve and the effective temperature evolution in Figures \ref{L_whole_02} and \ref{Teff_whole_02}, respectively. The luminosity is a little higher than that of $m_*= 1$ (see Equation (\ref{eq:Lcp})), and the circularization and viscous time-scales are shorter. The evolution of the temperature is similar to that of $m_*= 1$.
The luminosity and temperature of the borderline FTDE with $\beta = 0.9$ are plotted only for comparison. Since $t_{\rm diff}/t_{\rm s} \propto m_*^{-0.56} M_{\rm h}^{-7/6}$, it is insensitive to the stellar mass and is lower for larger BHs, and $t_{\rm diff}/t_{\rm s} \gtrsim 1$ for FTDEs (see Figure \ref{diffusion}). Thus the photon diffusion in FTDEs is inefficient, the stream will expand significantly after the intersections of the debris streams, and an elliptical disk might be formed \citep{Shiokawa_General_2015,Piran_Disk_2015,Liu_Elliptical_2020}. Furthermore, the accretion process is super-Eddington for $\beta \gtrsim 0.9$, thus it will produce a disk wind and affect the light curve and the spectrum. These features are not accounted for in our calculation. \begin{figure} \centering \includegraphics[scale=0.5]{L02_whole.eps} \caption{Overall bolometric luminosity history for PTDEs of a low-mass star ($m_*= 0.2$).} \label{L_whole_02} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{Teff02_whole.eps} \caption{Overall effective temperature history for PTDEs of a low-mass star ($m_*= 0.2$).} \label{Teff_whole_02} \end{figure} \section{Event rate and Detection rate} \label{event_detection_rate} \subsection{Event rate} The parameter spaces for PTDEs and FTDEs are $0.5 < \beta < \beta_{\rm d}$ and $\beta_{\rm d} < \beta < \beta_{\rm max}$, respectively. Partial disruption sets in at $\beta > 0.5$ \citep{Ryu_Tidal1_2020}. The upper limit of the penetration factor for disruption is given by $\beta_{\rm max} \simeq R_{\rm T}/R_{\rm S}$. If $\beta > \beta_{\rm max}$, the SMBH will directly swallow the whole star instead of tidally disrupting it \citep{Kesden_Tidal_2012}. Notice that if $\beta_{\rm max}<\beta_{\rm d}$, only PTDEs can occur. We can estimate the event rate of TDEs from loss-cone dynamics \citep{Merritt_Loss_2013}.
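The swallowing limit can be illustrated with a minimal numerical sketch (cgs units; it assumes $R_{\rm T} = R_*(M_{\rm h}/M_*)^{1/3}$ and a Schwarzschild radius $R_{\rm S} = 2GM_{\rm h}/c^2$), evaluating $\beta_{\rm max} \simeq R_{\rm T}/R_{\rm S}$ for a solar-type star:

```python
# cgs constants
G = 6.674e-8            # gravitational constant
C = 2.998e10            # speed of light [cm s^-1]
M_SUN, R_SUN = 1.989e33, 6.957e10

def beta_max(M_h, m_s=1.0, r_s=1.0):
    """Maximum penetration factor beta_max ~ R_T/R_S before direct capture.

    M_h, m_s in solar masses; r_s in solar radii.
    """
    R_T = r_s * R_SUN * (M_h / m_s)**(1/3)   # tidal radius
    R_S = 2 * G * (M_h * M_SUN) / C**2       # Schwarzschild radius
    return R_T / R_S

# beta_max drops as M_h^(-2/3): heavier BHs swallow stars whole
assert beta_max(1e6) > beta_max(1e8)
assert 20 < beta_max(1e6) < 30   # ~23 for a solar-type star and 1e6 M_sun
assert beta_max(1e8) < 2         # approaching the PTDE-only regime
```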
Occasionally a star will be scattered into a highly eccentric orbit with pericenter radius $R_{\rm p} \lesssim R_{\rm lc} \equiv \max[R_{\rm S}, R_{\rm d}]$, and the star will be captured or fully disrupted by the SMBH. Here $R_{\rm d} = R_{\rm T}/\beta_{\rm d}$ is the full disruption radius. The rate at which stars enter $R_{\rm lc}$ (hereafter $\dot N_{\rm lc}$) depends on the stellar density profile in the nucleus of each galaxy. \cite{Stone_Rates_2016} use an early-type galaxy sample consisting of 144 galaxies to calculate the TDE rate. Here we use their fitting result (Eq. 27 in \cite{Stone_Rates_2016}), i.e., \begin{equation} \label{eq:erate} \dot N_{\rm lc} = \dot N_0 \left(\frac{M_{\rm h}}{10^8\ M_{\odot}}\right)^B, \end{equation} with $\dot N_0 = 2.9 \times 10^{-5}\ \rm{yr^{-1} gal^{-1}}$ and $B = -0.404$, to estimate the stellar loss rate in a galaxy. Using the loss rate $\dot N_{\rm lc}$, the fraction function of SMBHs $\phi(M_{\rm h})$, the fraction function of the penetration factor $f_{\rm TDE}$, and the initial mass function (IMF) $\chi_{\rm Kro}$ in Appendix \ref{appendix}, one can calculate the differential volumetric event rates of PTDEs and FTDEs with respect to the SMBH mass by \begin{equation} \label{eq:volrate} \frac{d \dot N_{\rm TDE}}{dM_{\rm h}} = \int^1_{m_{*, {\rm min}}} \int_{\beta} \dot N_{\rm lc} \phi(M_{\rm h}) \chi_{\rm Kro}(m_*) f_{\rm TDE}\ dm_* d\beta, \end{equation} where $m_{*, {\rm min}}$, given by Eq. (\ref{eq:mstarmin}), is the lower stellar-mass limit for disruption. The volumetric event rates of TDEs with respect to $M_{\rm h}$ are shown in Fig. \ref{volrate}. Polytropic stars with $\gamma = 4/3$ are denser than those with $\gamma = 5/3$, so they need to approach the SMBH much more closely for full disruption, which reduces the probability of FTDEs. The event rates of PTDEs and FTDEs are similar for smaller SMBHs, $M_{\rm h} \le 10^6\ M_{\odot}$.
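A minimal sketch of the adopted per-galaxy loss-cone feeding rate, Eq. (\ref{eq:erate}), with the fit values quoted above:

```python
# fit from Stone & Metzger (2016), Eq. 27: N0 [yr^-1 gal^-1], slope B
N0, B = 2.9e-5, -0.404

def ndot_lc(M_h):
    """Per-galaxy stellar loss rate [yr^-1] for a BH of mass M_h [M_sun]."""
    return N0 * (M_h / 1e8)**B

# the negative slope means lighter SMBHs feed their loss cones faster
assert ndot_lc(1e6) > ndot_lc(1e8)
assert 1e-4 < ndot_lc(1e6) < 3e-4   # ~1.9e-4 per galaxy per year
```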
However, the event rate of PTDEs becomes dominant for larger SMBHs. There are two reasons. First, the full disruption radius moves closer to the SMBH's event horizon as the SMBH mass increases; when $M_{\rm h} > 10^8\ M_{\odot}$ stars cannot be fully disrupted, so only PTDEs can occur. Second, most of the stars are in the diffusion limit for larger SMBHs (see Eq. (\ref{eq:fpin})), which suppresses FTDEs. Overall, the prospects for PTDEs are very promising. The event rate of PTDEs is greater than that of FTDEs, even for larger SMBHs. Notice that \cite{Ryu_Tidal1_2020} consider realistic stellar structures computed with MESA, and find that for low-mass stars the chance of a PTDE is approximately equal to that of an FTDE, but for high-mass stars the likelihood of a PTDE is 4 times higher than that of an FTDE. However, they consider only the full loss-cone regime and neglect the upper limit of the penetration factor, $\beta_{\rm max}$, for FTDEs. Therefore, their estimate of the PTDE fraction is lower than ours. \begin{figure} \centering \includegraphics[scale=0.5]{event_rate.eps} \caption{Volumetric event rate of TDEs versus SMBH mass $M_{\rm h}$. The solid and dashed lines represent $\gamma = 5/3$ and $\gamma=4/3$ polytropic indices of stars, respectively.} \label{volrate} \end{figure} \subsection{Detection rate of PTDEs} The fallback mass of a PTDE is less than that of an FTDE, so a PTDE is seemingly dimmer. However, as we discuss above, PTDEs favor heavier SMBHs, and thus most of them are bright and detectable. The detection rate depends on the physical processes that contribute to the emission of TDEs \citep{Stone_Rates_2016}. Here we calculate the detection rate of PTDEs using our model.
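The conversion from the survey's limiting magnitude to a limiting distance, used in the next step, can be sketched as follows (the value of $L_{\nu}$ here is an arbitrary placeholder, not a number from the text):

```python
import math

JY = 1e-23        # jansky in cgs [erg s^-1 cm^-2 Hz^-1]
MPC = 3.086e24    # megaparsec [cm]

def flux_limit(m_R=20.5):
    """AB flux-density limit from an R-band limiting magnitude (ZTF assumed)."""
    return 3631 * JY * 10**(-m_R / 2.5)

def d_lim(L_nu, m_R=20.5):
    """Limiting detection distance [cm] for monochromatic luminosity L_nu."""
    return math.sqrt(L_nu / (4 * math.pi * flux_limit(m_R)))

# e.g. an assumed monochromatic luminosity of 1e27 erg s^-1 Hz^-1
assert 100 < d_lim(1e27) / MPC < 300   # ~190 Mpc
```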
The limiting detection distance for a PTDE is \begin{equation} \label{dlim} d_{\rm lim} = \left[\frac{L_{\nu}}{4\pi f_{\nu}}\right]^{1/2}, \end{equation} where the spectral flux density limit $f_{\nu}$ is given by $m_{\rm R} \simeq -2.5\ \log_{10}(f_{\nu}/3631\ {\rm Jy})$, and the R-band limiting magnitude is $m_{\rm R} \simeq 20.5$ for ZTF. The monochromatic luminosity of the source is $L_{\nu} \simeq \pi B_{\nu}(T_{\rm eff}) S$, where $B_{\nu}$ is the Planck function. The effective temperature $T_{\rm eff} \sim (L_{\rm p}/(\sigma S_0))^{1/4}$ and the radiative area $S \sim S_0$ are given by the model, i.e., Eqs. (\ref{eq:Lcp}) and (\ref{area0}), thus $d_{\rm lim}$ is a function of $\beta$, $M_{\rm h}$ and $m_*$. The detection rate of PTDEs is \begin{equation} \label{eq:detection} \begin{split} D_{\rm p} \simeq \int dM_{\rm h} \int_{m_{\rm *, min}}^1 dm_* &\int_{0.5}^{\beta_{\rm lc}} \dot N_{\rm lc}(M_{\rm h})\phi(M_{\rm h})\chi_{\rm Kro}(m_*) \\ & \times f_{\rm TDE} \frac43 \pi d_{\rm lim}^3\ d\beta. \end{split} \end{equation} Here $\beta_{\rm lc} \equiv R_{\rm T}/R_{\rm lc}$. The upper limit of the SMBH mass in the integral is given by $R_{\rm T}/0.5 = R_{\rm S}$, i.e., $M_{\rm h,max} = (R_{\odot}c^2/(GM_{\odot}^{1/3}))^{3/2}$. This gives $D_{\rm p} \sim 5 \times 10^2\ {\rm yr^{-1}}$ for $\gamma = 5/3$, and $D_{\rm p} \sim 10^2\ {\rm yr^{-1}}$ for $\gamma = 4/3$. Taking the sky coverage of ZTF into consideration ($\sim 0.1$ of the whole sky), the ZTF detection rate of PTDEs is a few dozen per year. \section{Conclusion and Discussion} \label{conclusion} In this paper, we argue that PTDEs may not produce an outflow or wind during the circularization, due to the efficient radiative diffusion in the stream. Because PTDEs have less fallback mass than FTDEs, photons diffuse out more efficiently. Hence they provide a clean environment to study the circularization process and the disk formation.
We calculate the light curves of PTDEs, considering the earlier circularization process and the later disk viscous evolution. During the circularization process, the radiation comes directly from the shocked stream. After the circularization, the ring at the circularization radius evolves under the viscous shear, and eventually settles into a self-similar and sub-Eddington accretion phase. There are two peaks in the light curve of a PTDE: the first corresponds to the circularization process, the second to the formation of the accretion disk. The times of the peaks are $t_{\rm cir} \sim 10^2 - 10^3\ {\rm day}$ and $t_0 \sim 10^3 - 10^4\ {\rm day}$, respectively. Formulae for the time-scales of both phases are provided. The ratio between them is \begin{equation} \begin{split} \frac{t_{\rm cir}}{t_0} &\simeq 0.07\ \alpha^{4/3} \kappa^{1/3} \left(\frac{\Delta M}{0.01\ M_{\odot}}\right)^{2/3} \times \\ &\beta^{-5/3} M_6^{-41/18} m_*^{-5/9} r_*^{7/6}. \end{split} \end{equation} For most PTDEs with $M_{\rm h} \gtrsim 10^6\ M_{\odot}$ and $m_* \gtrsim 0.1$, the circularization time-scale is shorter than the viscous time-scale. Therefore, we conclude that the accretion disk forms after the circularization, and we can see the double peaks in the light curve of a PTDE. Increasing the BH mass, the density of the disrupted star, or the penetration factor enhances the self-crossing shock, because all of these bring the pericenter radius closer (in units of the Schwarzschild radius) to the BH; therefore, the luminosity is higher and the time-scale shorter for the circularization stage. In the viscous evolution stage, the viscous time-scale is shorter and the luminosity is higher for a smaller BH mass, a smaller star, or a larger penetration factor. Based on the single-blackbody assumption in the circularization stage, we calculate the effective temperature, which rises and drops following the light curve.
After that, as the circularized ring evolves into an accretion disk, the effective temperature rises until the disk has formed. Eventually, both the light curve and the effective temperature decay as power laws in time, following the self-similar solution of the disk evolution. Overall, the effective temperatures are $\sim 10^4 - 10^6\ {\rm K}$ and exhibit weak dependence on the BH mass and the star, so the spectra peak in the UV. \subsection{Viscous Effects in the Circularization Stage} \label{subsec:vis_cir} In the calculation of the circularization process, we neglect the viscous effects. Here we assess their importance in the circularization stage. There are two main effects of the viscous shear. One is that it can heat up the stream and increase the luminosity. The other is that it can redistribute the angular momentum of the debris stream and cause some parts of the stream to move closer to the SMBH; this may further enhance the dissipation caused by the viscous shear and the self-crossing, and speed up the formation of the disk (elliptical or circular). \cite{Svirski_Elliptical_2017} \citep[see also][]{Bonnerot_Long_2017} calculate the viscous effects induced by the magnetic stress, which originates from the exponential growth of the magneto-rotational instability (MRI) \citep{Balbus_A_1991}. In order to be consistent with the prescription of viscosity adopted in the disk stage, we also adopt the $\alpha_{\rm g}$-viscosity (Equation (\ref{viscosity})) to parametrize the viscous shear in the circularization stage. One can find that the prescription of the $\alpha_{\rm g}$-viscosity is equivalent to that of the magnetic stress used in \cite{Svirski_Elliptical_2017} if we let $\alpha \simeq \alpha_{\rm mag} (v_{\rm A}/c_{\rm s})^2$. Here $\alpha_{\rm mag}$ and $v_{\rm A}$ are the ratio of the $\hat n-\hat t$ magnetic stress component to the total magnetic stress and the Alfv\'{e}n velocity, respectively.
The redistribution of the specific angular momentum caused by the viscous shear can be estimated by $dj/dt \simeq \nu \Omega$. The total change of the specific angular momentum during the circularization process can be written as $\Delta j \simeq \nu \Omega t_{\rm cir}$, thus \begin{equation} \begin{split} \frac{\Delta j}{j} \sim &\alpha t_{\rm cir} \frac{k_{\rm b} T_{\rm c}}{\mu m_{\rm p}j} \\ &\sim 0.01\ \alpha \left(\frac{T_{\rm c}}{10^6\ {\rm K}}\right) \beta^{-7/2} M_6^{-8/3} m_*^{-7/6} r_*^3, \end{split} \end{equation} where the specific angular momentum is $j \simeq (GM_{\rm h}R_{\rm c})^{1/2}$. For PTDEs with $M_{\rm h} \gtrsim 10^6\ M_{\odot}$, the viscous shear has a negligible effect on the extension of the stream during the circularization process. Therefore the stream will remain thin until its orbit circularizes and it settles into a ring. The other viscous effect is that it can heat up the stream and increase the luminosity. We can estimate the viscous luminosity in the circularization process by assuming the viscous heating rate equals the radiative cooling rate, i.e., $L_{\rm c, vis} \simeq \int \nu \Omega^2 \Sigma\ dS$. Most of the viscous heating comes from near the pericenter, where only a small fraction of the stream mass resides, thus the viscous luminosity is very small, i.e., $L_{\rm c, vis} \ll (\nu \Omega^2)_{\rm p} \Delta M \simeq L_{\rm disk, 0}$. Here the subscript $p$ denotes the value at the pericenter radius, and $L_{\rm disk, 0}$ is the luminosity of the ring after the circularization process. Therefore, it is reasonable to neglect the viscous effects in the circularization process. Only in the late stage of the circularization do the viscous effects become important, and the luminosity is then dominated by the viscous shear.
In the transition between the circularization stage and the viscous evolution stage, we artificially connect the light curves of the two stages, because the luminosity induced by the viscous shear is small in the circularization stage. In fact, the transition should be smooth if we take the viscous heating into account in the circularization stage. We consider that in the circularization stage most of the radiation comes from the shock-heated debris stream. As the material cools down, the ions will recombine. Assuming most of the material is hydrogen, the total recombination energy is $E_{\rm re}\simeq N \times 13.6\ {\rm eV} \simeq 3 \times 10^{44}\ (\Delta M/0.01\ M_{\odot})\ {\rm erg}$, where $N$ is the number of hydrogen ions. The recombination luminosity is $L_{\rm re} \simeq E_{\rm re}/t_{\rm fb} \simeq 10^{38}\ {\rm erg/s}$, which is negligible. Recombination will probably promote chemical reactions in the stream, so that dust clumps form \citep{Kochanek_The_1994}. However, this dust will be evaporated in later shocks. \subsection{Observational Prospect} \label{subsec:obs} We calculate the detection rate of PTDEs through loss-cone dynamics. For ZTF, the detection rate is a few dozen per year, which is very promising. We encourage searches for them with optical/UV or soft X-ray telescopes. Our work would also be useful for identifying PTDEs that may already be present in archival data. Recently, \cite{Gomez_The_2020} reported a TDE candidate, AT 2018hyz. They use the Modular Open-Source Fitter for Transients (MOSFIT) to model the light curves, and conclude that it is a PTDE. The MOSFIT model assumes a rapid circularization of the stream and that the evolution of the luminosity traces the mass fallback rate, which may not be the case for PTDEs, as we have shown. The double peaks in the light curve of AT 2018hyz are consistent with what we predict for a PTDE, which is the signature of a two-stage evolution.
However, the blackbody spectral fitting of AT 2018hyz gives a large photosphere, $\sim 10^{15}\ {\rm cm}$, which is not consistent with our model. There are some uncertainties in the spectral fitting, e.g., the galactic extinction and the assumed prior spectrum. Furthermore, in the circularization process, the spectrum might deviate from the blackbody spectrum we assume here. More details of the radiative dynamical evolution of the circularization need to be understood. Recently, \cite{Frederick_A_2020} reported five transient events found by ZTF in active galactic nuclei (AGNs). One of them, ZTF19aaiqmgl, shows two peaks in the light curve, and only the second peak has an X-ray detection. The second peak might correspond to the disk formation. However, in an AGN, the debris from the disrupted star will collide with the pre-existing accretion disk \citep{Chan_Tidal_2019}, and the shocks will heat up the gas. For PTDEs, the lighter streams might directly merge with the disk. The details of PTDEs from AGNs are still unclear. Furthermore, \cite{Payne_14ko_2021} reported a repeating PTDE candidate, ASASSN-14ko, which is located within an AGN. If the star is in an elliptical orbit, it will be tidally stripped by the BH during each encounter near the pericenter. The pre-existing accretion disk can produce periodic bright flares by accreting the stripped mass. Thus the debris might not experience a long-term circularization and disk formation process, but how the debris interacts with the pre-existing disk is unclear. \section{Acknowledgments} We thank the referee for helpful comments and suggestions. This work is supported by the National Natural Science Foundation of China (12073091), Guangdong Basic and Applied Basic Research Foundation (2019A1515011119) and Guangdong Major Project of Basic and Applied Basic Research (2019B030302001).
\begin{appendix} \section{Functions in the calculation of the event rate} \label{appendix} In order to calculate the volumetric event rate of TDEs, one needs the fraction of SMBHs with mass $M_{\rm h}$, i.e., $\phi(M_{\rm h})$. We only consider TDEs occurring near the SMBHs in galactic nuclei, so $\phi(M_{\rm h})$ is actually the fraction of galaxies that host SMBHs with mass $M_{\rm h}$. \cite{Stone_Rates_2016} calculate $\phi(M_{\rm h})$ using the Schechter function \citep[galaxy luminosity function,][]{Schechter_An_1976}, the scaling relations between BH masses and host galaxy properties \citep{McConnell_Revisiting_2013}, and the occupation fraction of SMBHs \citep{Miller_Xray_2015}. We reproduce this fraction function here: \begin{equation} \label{eq:schefunm} \begin{split} \phi(M_{\rm h}) dM_{\rm h} &= 3.53 \phi_* f_{\rm occ} M_6^{-1.07} \\ & \times \exp \left(-0.025M_6^{0.709}\right) dM_6\end{split} \end{equation} where $\phi_*= 4.9 \times 10^{-3} h_7^{3}\ {\rm Mpc^{-3}}$, $f_{\rm occ}(M_{\rm h})$ is the occupation fraction of SMBHs, and we take the normalized Hubble constant $h_7 = 1$. The occupation fraction $f_{\rm occ}$ is the probability that a galaxy harbors an SMBH, and is given by \citep{Miller_Xray_2015} \begin{equation} \label{eq:focc} \begin{small} f_{\rm occ} = \begin{cases} 0.5+0.5&\tanh \left(\ln\left(\frac{M_{\rm bul}}{M_{\rm c}}\right) \times 2.5^{8.9-\log_{10}\left(\frac{M_{\rm c}}{M_{\odot}}\right)}\right), \\ &{M_{\rm bul} < 10^{10} M_{\odot}} \\ 1,&{M_{\rm bul} > 10^{10} M_{\odot}}, \end{cases} \end{small} \end{equation} where $M_{\rm bul}$ is the bulge mass, which we relate to the SMBH mass using the $M_{\rm bul}-M_{\rm h}$ relation of \cite{McConnell_Revisiting_2013}, i.e., \begin{equation} \label{Mbulge} \log_{10}(M_{\rm h})=8.46+1.05\log_{10}(M_{\rm bul}/10^{11}\ M_{\odot}). \end{equation} The parameter $M_{\rm c}$ is the approximate mass below which the occupation fraction turns over.
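The functions above are straightforward to evaluate numerically. The following minimal Python sketch (the fiducial $M_{\rm c}=10^8\ M_{\odot}$ matches the value adopted below, and $h_7=1$; the function and variable names are our own) checks that $f_{\rm occ}=0.5$ exactly when $M_{\rm bul}=M_{\rm c}$ and that $\phi$ declines steeply toward high BH masses.

```python
import numpy as np

def m_bul(M_h):
    """Bulge mass in Msun, inverting log10(M_h) = 8.46 + 1.05*log10(M_bul/1e11)."""
    return 1e11 * 10.0 ** ((np.log10(M_h) - 8.46) / 1.05)

def f_occ(M_h, M_c=1e8):
    """SMBH occupation fraction; all masses in solar units."""
    Mb = m_bul(M_h)
    if Mb > 1e10:
        return 1.0
    return 0.5 + 0.5 * np.tanh(np.log(Mb / M_c) * 2.5 ** (8.9 - np.log10(M_c)))

def phi(M_h, phi_star=4.9e-3):
    """SMBH fraction function dN/dM6 in Mpc^-3, with h7 = 1."""
    M6 = M_h / 1e6
    return 3.53 * phi_star * f_occ(M_h) * M6 ** -1.07 * np.exp(-0.025 * M6 ** 0.709)

print(f_occ(10 ** 5.31))    # M_bul equals M_c here, so ~0.5
print(phi(1e6) / phi(1e8))  # steep decline toward larger BH masses
```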
It should be less than $\sim 10^{8.5}\ M_{\odot}$ \citep{Stone_Rates_2016}. Here we assume $M_{\rm c} \simeq 10^8\ M_{\odot}$. The exact value of $M_{\rm c}$ only affects the occupation fraction of galaxies with smaller SMBHs. As we shall see, most of the observable PTDEs occur near the larger SMBHs, so the choice of $M_{\rm c}$ has little effect on the detection rate of PTDEs. Furthermore, only stars with sufficiently low density can be disrupted by the SMBH, because only for them does the disruption radius lie outside the horizon. FTDEs and PTDEs require $R_{\rm d} \gtrsim R_{\rm S}$ and $R_{\rm T}/0.5 \gtrsim R_{\rm S}$, respectively. Using the stellar mass-radius relation for the lower main sequence, $r_* \propto m_*^{0.89}$ \citep{Torres_Accurate_2010}, one obtains the lower limit on the stellar mass for disruption: \begin{equation} \label{eq:mstarmin} m_{*,min} = \begin{cases} 0.85\ \beta_{\rm d}^{1.8} \left(\frac{M_{\rm h}}{10^8 M_{\odot}}\right)^{1.2},&\quad{\rm FTDEs} \\ 0.25\ \left(\frac{M_{\rm h}}{10^8 M_{\odot}}\right)^{1.2},&\quad{\rm PTDEs}. \end{cases} \end{equation} Stars with large orbital periods can diffuse across the loss cone through gravitational encounters within a single orbit, i.e., the so-called full loss-cone regime or pinhole limit. Stars near the SMBH, which have short orbital periods, diffuse into the loss cone over many orbits and thus hardly penetrate beyond the loss-cone boundary, i.e., the so-called empty loss-cone regime or diffusion limit. Unlike the calculation in \cite{Stone_Rates_2016}, we consider that stars in the diffusion limit and in the pinhole limit have different fates. In the diffusion limit, most of the stars experience one or more partial disruptions as they approach the loss cone.
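The mass limit above translates directly into code; here is a minimal sketch (the default $\beta_{\rm d}=1$ is only an illustrative placeholder for the full-disruption threshold):

```python
def m_star_min(M_h, kind="PTDE", beta_d=1.0):
    """Lower stellar-mass limit for disruption, in Msun.
    beta_d is the full-disruption threshold (illustrative default)."""
    x = (M_h / 1e8) ** 1.2
    return 0.85 * beta_d ** 1.8 * x if kind == "FTDE" else 0.25 * x

print(m_star_min(1e8))           # 0.25: at 1e8 Msun only m* > 0.25 Msun give PTDEs
print(m_star_min(1e8, "FTDE"))   # 0.85 for beta_d = 1
```

As expected, the limit rises steeply with BH mass, so low-mass stars cannot be disrupted outside the horizon of the heaviest SMBHs.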
After a partial disruption, the remnant cores probably return to be disrupted again, or escape as so-called ``turbovelocity'' stars \citep{Manukian_Turbovelocity_2013,Ryu_Tidal3_2020}; in either case, FTDEs are strongly suppressed in the diffusion limit. In the pinhole limit, the velocity directions of the stars are randomly distributed, so the fraction function of $\beta$ is $f_{\rm TDE} \propto \beta^{-2}$. Therefore, for FTDEs ($\beta_{\rm d} < \beta < \beta_{\rm max}$), we assume that only stars in the pinhole limit can be fully disrupted, thus $f_{\rm TDE} \propto f_{\rm pin}\beta^{-2}$. Here the pinhole fraction $f_{\rm pin}$ is the fraction of stars near the SMBH whose orbits are in the pinhole limit. It can be estimated by the fitting formula \cite[i.e., Eq. (29) in][]{Stone_Rates_2016} \begin{equation} \label{eq:fpin} f_{\rm pin} = 0.22 \left(\frac{M_{\rm h}}{10^8 M_{\odot}}\right)^{-0.307}, \end{equation} which should satisfy $f_{\rm pin} < 1$. For PTDEs ($0.5 < \beta < \beta_{\rm lc}$), we assume that $f_{\rm TDE}$ contains contributions from stars in both the diffusion limit and the pinhole limit. In the pinhole limit, $f_{\rm TDE} \propto f_{\rm pin}\beta^{-2}$. In the diffusion limit, we assume the stars outside the loss cone ($R_{\rm lc}$) are in a quasi-steady state, so the distribution of stellar angular momenta is given by the steady-state solution, i.e., $\xi_{\rm diff}(\mathcal{R}) \propto \ln(\mathcal{R})$ \citep{Merritt_Loss_2013}. Here $\mathcal{R} \equiv j^2/j_{\rm lc}^2$ with $j_{\rm lc} \simeq (2GM_{\rm h}R_{\rm lc})^{1/2}$. It satisfies $ \xi_{\rm diff}(\mathcal{R})\ d\mathcal{R} = \xi_{\rm diff}(\beta)\ d\beta$ and \begin{equation} \int^{\beta_{\rm lc}}_{0.5} \xi_{\rm diff}(\beta)\ d\beta = 1. \end{equation} Thus we have \begin{equation} \label{eq:xidff} \xi_{\rm diff}(\beta) = \frac{0.5 \ln{(\mathcal{R})}}{\ln{(\beta_{\rm lc}/0.5)}+0.5/\beta_{\rm lc}-1}\beta^{-2}.
\end{equation} Then the fraction function of $\beta$ in TDEs is \begin{equation} f_{\rm TDE} = \begin{cases} (1-f_{\rm pin})\xi_{\rm diff}(\beta)+f_{\rm pin}\frac{\beta^{-2}}{1/\beta_{\rm lc}},&{\rm PTDEs} \\ f_{\rm pin}\frac{\beta^{-2}}{1/\beta_{\rm d}},&{\rm FTDEs}. \end{cases} \end{equation} Furthermore, because the TDE rate depends on the present-day mass function of stars, we need to take the IMF into account. We adopt the Kroupa IMF \citep{Kroupa_On_2001}, i.e., \begin{equation} \label{eq:kroupa} \chi_{\rm Kro} = \begin{cases} 0.28m_*^{-1.3},&\quad{0.08 < m_* < 0.5} \\ 0.14m_*^{-2.3},&\quad{0.5 < m_* < 1} \\ 0,&\quad{\rm {otherwise}}, \end{cases} \end{equation} where the upper truncation at $m_*=1$ is chosen to approximate an old stellar population. It satisfies $\int \chi_{\rm Kro}\ dm_* = 1$. \end{appendix}
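Both distribution functions in the appendix can be checked for normalization numerically. The sketch below uses $\mathcal{R}=\beta_{\rm lc}/\beta$, which follows from $j^2\propto R_{\rm p}$ together with $\beta=R_{\rm T}/R_{\rm p}$; the value $\beta_{\rm lc}=2$ is purely illustrative.

```python
import numpy as np

def integrate(f, a, b, n=200000):
    # simple midpoint rule
    x = np.linspace(a, b, n + 1)
    mid = 0.5 * (x[1:] + x[:-1])
    return np.sum(f(mid) * np.diff(x))

beta_lc = 2.0  # illustrative loss-cone penetration factor

def xi_diff(beta):
    """Diffusion-limit distribution of beta, with R = beta_lc/beta."""
    norm = np.log(beta_lc / 0.5) + 0.5 / beta_lc - 1.0
    return 0.5 * np.log(beta_lc / beta) / norm * beta ** -2.0

def kroupa(m):
    """Kroupa IMF truncated at 1 Msun (valid on 0.08 < m < 1)."""
    return np.where(m < 0.5, 0.28 * m ** -1.3, 0.14 * m ** -2.3)

print(integrate(xi_diff, 0.5, beta_lc))  # ~1: normalization of xi_diff
print(integrate(kroupa, 0.08, 1.0))      # ~1: normalization of the IMF
```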
\section{Conclusion and Future Work} This paper derives fundamental lower bounds on the statistical accuracy of ridge-regularized ERM (RERM) for linear and binary models in high dimensions. It then derives simple closed-form approximations that allow precisely quantifying the sub-optimality gap of RLS. In Section \ref{sec:unreg_opt} of the supplementary material, these bounds are further used to study the benefits of regularization by comparing RERM to un-regularized ERM. Among several interesting directions for future work, we highlight the following. First, our lower bounds make it possible to compare RERM to the optimal Bayes risk \cite{barbier2019optimal,reeves2019replica}. Second, it is interesting to extend the analysis to GLMs with arbitrary link functions beyond the linear and binary ones studied here. A third exciting direction is investigating the fundamental limits of RERM in the presence of correlated (Gaussian) features. \section*{Acknowledgment} This work was supported by NSF Grant CCF-1909320 and an Academic Senate Research Grant from UCSB. \section{Introduction}\label{sec:intro} Empirical Risk Minimization (ERM) encompasses a wide family of statistical inference algorithms that are popular in estimation and learning tasks across a range of applications in signal processing, communications and machine learning. ERM methods are often efficient in implementation, but first one needs to make certain choices: choose appropriate loss and regularization functions, and tune the regularization parameter. Classical statistics has complemented the practice of ERM with an elegant theory regarding such optimal choices, as well as fundamental limits, i.e., tight bounds on their performance, e.g., \cite{huber2011robust}. These classical theories typically assume that the size $m$ of the set of observations is much larger than the dimension $n$ of the parameter to be estimated, i.e., $m\gg n$.
In contrast, modern inference problems are typically high-dimensional, i.e. $m$ and $n$ are of the same order and often $n>m$ \cite{candes2014mathematics,montanari2015statistical,karoui2013asymptotic}. This paper studies the fundamental limits of convex ERM in high-dimensions for generalized linear models. Generalized linear models (GLM) relate the response variable $y_i$ to a linear model $\mathbf{a}_i^T\mathbf{x}_0$ via a link function: $y_i=\varphi(\mathbf{a}_i^T\,\mathbf{x}_0)$. Here, $\mathbf{x}_0\in\mathbb{R}^n$ is a vector of true parameters and $\mathbf{a}_i\in\mathbb{R}^n,~i\in[m]$ are the feature (or, measurement) vectors. Following the ERM principle, $\mathbf{x}_0$ can be estimated by the minimizer of the empirical risk $\frac{1}{m}\sum_{i=1}^m \mathcal{L}\left(y_i,\mathbf{a}_i^T \mathbf{x}\right)$ for a chosen loss function $\mathcal{L}$. Typically, ERM is combined with a regularization term and among all possible choices arguably the most popular one is ridge regularization, which gives rise to ridge-regularized ERM (RERM, in short): \begin{align}\label{eq:RERM_gen} \widehat{\mathbf{x}}_{\mathcal{L},\la}=\arg\min_{\mathbf{x}\in\mathbb{R}^n}\,\frac{1}{m}\,\sum_{i=1}^m \mathcal{L}\left(y_i,\mathbf{a}_i^T \mathbf{x}\right)+\frac{\lambda}{2}\|\mathbf{x}\|_2^{2}. \end{align} This paper aims to provide answers to the following questions on fundamental limits of \eqref{eq:RERM_gen}: \emph{What is the minimum achievable (estimation/prediction) error of $\widehat{\mathbf{x}}_{\mathcal{L},\la}$? How does this depend on the link function $\varphi$ and how to choose $\mathcal{L}$ and $\la$ to achieve it? What is the sub-optimality gap of popular choices such as ridge-regularized least-squares (RLS)? How do the answers to these questions depend on the over-parameterization ratio $n/m$?} We provide answers to the questions above for the following two popular instances of GLMs. 
\noindent\emph{Linear models:} $y_i = \mathbf{a}_i^T\mathbf{x}_0 + z_i$, where $z_i\widesim{\text{\small{iid}}} P_Z,~i\in[m]$. As is typical, for linear models, we measure performance of $\widehat{\mathbf{x}}_{\mathcal{L},\la}$ with the squared error: $\|\widehat{\mathbf{x}}_{\mathcal{L},\la}-\mathbf{x}_0\|_2^2$. \vspace{4pt} \noindent\emph{Binary models:} $y_i = f(\mathbf{a}_i^T\mathbf{x}_0),~i\in[m]$ for a (possibly random) link function outputting values $\{\pm 1\}$, e.g., logistic, probit and signed models. We measure estimation performance in terms of (normalized) correlation ${(\widehat{\mathbf{x}}_{\mathcal{L},\la}^T\,\mathbf{x}_0)}\Big/{\|\widehat{\mathbf{x}}_{\mathcal{L},\la}\|_2\|\mathbf{x}_0\|_2}$ and prediction performance in terms of classification error $\mathbb{P}\big( y\neq \mathrm{sign}(\widehat{\mathbf{x}}_{\mathcal{L},\la}^T\,\mathbf{a}) \big)$ where the probability is over a fresh data point $(\mathbf{a},y)$. \vspace{4pt} All our results are valid under the following two assumptions. \begin{ass}[High-dimensional asymptotics]\label{ass:HD} Throughout the paper, we assume the high-dimensional limit where $m,n\rightarrow\infty$ at a fixed ratio $\delta=m/n>0$. \end{ass} \begin{ass}[Gaussian features]\label{ass:gaussian} The feature vectors $\mathbf{a}_{i} \in \mathbb{R}^{n}, i \in[m]$ are iid $\mathcal{N}(\mathbf{0},\mathbf{I}_n)$. \end{ass} \vspace{4pt} \noindent\textbf{Overview of Contributions.}~~We are now ready to summarize the paper's main contributions. \vspace{4pt} \noindent$\bullet$~~For linear models, we prove a lower bound on the squared-estimation error of RERM; see Theorem \ref{thm:lowerbound_reg}. We start with a system of two nonlinear equations that is parametrized by the loss $\mathcal{L}$ and the regularizer $\la$, and determines the high-dimensional limit of the error for the corresponding $\mathcal{L}$ and $\la$ \cite{karoui2013asymptotic,Master}.
By identifying an algebraic structure in these equations, we establish a lower bound on their solution that holds for all choices of $\mathcal{L}$ and $\la$. \vspace{4pt} \noindent$\bullet$~~For binary models, we first derive a system of three nonlinear equations whose unique solution characterizes the statistical performance (correlation or classification error) of RERM under mild assumptions on the loss and link functions $\mathcal{L}$ and $f$; see Theorem \ref{propo:boundedness}. Previous works have only considered specific loss and link functions or \emph{no} regularization. Second, we use this system of equations to upper bound the accuracy over this class of $(\mathcal{L},f)$-pairs; see Theorem \ref{thm:lowerbound_bin}. \vspace{4pt} \noindent$\bullet$~~Importantly, we present a recipe for optimally tuning $\mathcal{L}$ and $\la$ in both linear and binary models; see Lemmas \ref{thm:opt_reg} and \ref{thm:opt_bin}. For specific models, such as a linear model with additive exponential noise and binary logistic and signed data, we numerically show that the optimal loss function is convex and we use gradient descent to optimize it. The numerical simulations match the theoretical predictions perfectly, suggesting that our bounds are tight. \vspace{4pt} \noindent$\bullet$~~We derive simple closed-form approximations to the aforementioned bounds; see Corollaries \ref{cor:lowerbound} (linear) and \ref{cor:lowerbound_binary} (binary). These simple (yet tight) expressions allow us to precisely quantify the sub-optimality of ridge-regularized least-squares (RLS). For instance, we show that \emph{optimally-tuned RLS is (perhaps surprisingly) approximately optimal for logistic data and small signal strength, but the sub-optimality gap grows drastically as the signal strength increases.} In the Appendix, we also include comparisons to ERM without regularization and to a simple averaging method.
\subsection{Prior Work} Our results fit in the rapidly growing recent literature on \emph{sharp} asymptotics of (possibly non-smooth) convex optimization-based estimators, e.g., \cite{DMM,Sto,montanariLasso,Cha,TroppEdge,oymak2016sharp,StoLasso,OTH13,COLT,karoui2013asymptotic,karoui15,donoho2016high,Master,TroppUniversal,miolane2018distribution,wang2019does,celentano2019fundamental,hu2019asymptotics,bu2019algorithmic}. Most of these works study linear models. Extensions to generalized linear models for the special case of regularized LS were studied in \cite{NIPS}, while more recently there has been a surge of interest in RERM methods tailored to binary models (such as logistic regression or SVM) \cite{huang2017asymptotic,candes2018phase,sur2019modern,mai2019large,logistic_regression,svm_abla,salehi2019impact,taheri2020sharp,Zeyu2019,montanari2019generalization,liang2020precise,mignacco2020role}. Out of these works, relatively few have focused on fundamental limits among families of ERM (rather than specific instances). The papers \cite{bean2013optimal,donoho2016high,advani2016statistical} derive lower bounds and optimal loss functions for the squared error of (unregularized) ERM for linear models. In a related work, \cite{montanari15} studies robustness of these methods to the noise distribution. More recently, \cite{celentano2019fundamental} performed an in-depth analysis of the fundamental limits of convex-regularized LS for linear models of structured signals. For binary models, upper bounds on the correlation of un-regularized ERM were only recently derived in \cite{taheri2020sharp}. This paper contributes to this line of work. For linear models, we build on the corresponding sharp error characterizations in \cite{karoui15,Master} to extend the results of \cite{bean2013optimal,donoho2016high,advani2016statistical} to ridge-regularized ERM.
Specifically, our results hold for all values of $\delta>0$ including the, so called, overparameterized regime $\delta<1$. For binary models, our contribution is twofold: (i) we present sharp asymptotic characterizations for RERM for a wide class of loss and link functions; (ii) we use these to extend the correlation bounds of \cite{taheri2020sharp} to the regularized case. On a technical level, the sharp asymptotics are derived using the convex Gaussian min-max Theorem (CGMT) \cite{StoLasso,COLT}. In particular, we follow the machinery introduced in \cite{NIPS,svm_abla,salehi2019impact,taheri2020sharp,Zeyu2019} that applies the CGMT to binary models and predicts the performance in terms of a system of few nonlinear equations. Our main technical contribution here is proving existence and uniqueness of the solutions to these equations, which is critical as it guarantees that our performance bounds hold for a wide class of loss and link functions. \vspace{4pt} \noindent\textbf{Notation.}~We use boldface notation for vectors. We write $i\in[m]$ for $i=1,2,\ldots,m$. For a random variable $H$ with density $p_{_H}(h)$ that has a derivative $p_{_H}^{\prime}(h), \forall h \in \mathbb{R},$ we define its \emph{Fisher information} $\mathcal{I}(H):=\mathbb{E}[\left(p_{_H}'(h)/p_{_H}(h)\right)^{2}].$ We write $\env{\mathcal{L}}{x}{\tau}:=\min_{v}\frac{1}{2\tau}(x-v)^2 + \mathcal{L}(v),$ for the \emph{Moreau envelope function} and $\prox{\mathcal{L}}{x}{\tau}:=\arg\min_{v}\frac{1}{2\tau}(x-v)^2 + \mathcal{L}(v)$ for the \emph{proximal operator} of the loss $\mathcal{L}:\mathbb{R}\rightarrow\mathbb{R}$ at $x$ with parameter $\tau>0$. We denote the first order derivative of the Moreau-envelope function w.r.t $x$ as: $ \envdx{\mathcal{L}}{x}{\tau}:=\frac{\partial{\env{\mathcal{L}}{x}{\tau}}}{\partial x}. 
$ Finally, for a sequence of random variables $\mathcal{X}_{m,n}$ that converges in probability to some constant $c$ in the high-dimensional asymptotic limit of Assumption \ref{ass:HD}, we write $\mathcal{X}_{m,n}\stackrel{P}{\longrightarrow} c$. \section{Linear Models}\label{sec:linear} Consider data $(y_i,\mathbf{a}_i)$ from an additive noisy linear model: $y_i = \mathbf{a}_i^T\mathbf{x}_0 + z_i,~ z_i\widesim{\text{\small{iid}}} P_Z,~i\in[m].$ \begin{ass}[Noise distribution]\label{ass:noise} The noise variables $z_i$ are iid distributed as $Z \sim P_Z$, $i\in[m]$, for a distribution $P_Z$ with zero mean and finite nonzero second moment. \end{ass} For loss functions that are lower semicontinuous (lsc), proper, and convex we focus on the following version of \eqref{eq:RERM_gen} that is tailored to linear models: \begin{align}\label{eq:opt_reg_main} \widehat{\mathbf{x}}_{{\mathcal{L},\la}}:=\arg \min_{\mathbf{x}\in\mathbb{R}^n} \;\;\frac{1}{m}\sum_{i=1}^m \mathcal{L}\left(y_i-\mathbf{a}_i^T \mathbf{x}\right)+\frac{\la}{2}\|\mathbf{x}\|^{2}. \end{align} We assume without loss of generality that $\|\mathbf{x}_0\|_2=1$ \footnote{Suppose that $\|\mathbf{x}_0\|_2=r>0$. Then, the optimization problem in \eqref{eq:opt_reg_main} can be transformed to the case $\widetilde{\mathbf{x}}_0:= \mathbf{x}_0/r$ (hence $\|\widetilde{\mathbf{x}}_0\|=1$) by setting $\widetilde{\mathcal{L}}(t) := \mathcal{L}(rt)$, $\widetilde{\la} := r^2\la$ and $\widetilde{Z}=Z/r$. This implies that the results of Section \ref{sec:limit_opt_lin} can be reformulated by replacing $Z$ with $\widetilde{Z}$.}. \subsection{Background on Asymptotic Performance}\label{sec:back_lin} Prior works have investigated the limit of the squared error $\|\widehat{\mathbf{x}}_{_{\mathcal{L},\la}}-\mathbf{x}_0\|^2$ \cite{karoui2013asymptotic,Master}. 
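To make the Moreau-envelope notation concrete, here is a minimal Python sketch using brute-force grid minimization (not an efficient implementation), checked against two standard closed forms: the prox of $v^2/2$ is $x/(1+\tau)$, and the prox of $|v|$ is soft-thresholding. The helper names are our own.

```python
import numpy as np

def prox(loss, x, tau):
    """argmin_v (x - v)^2/(2*tau) + loss(v), by grid search (sketch only)."""
    v = np.linspace(x - 10.0, x + 10.0, 400001)
    return v[np.argmin((x - v) ** 2 / (2 * tau) + loss(v))]

def env_dx(loss, x, tau):
    """Derivative of the Moreau envelope in x, which equals (x - prox)/tau."""
    return (x - prox(loss, x, tau)) / tau

x, tau = 3.0, 2.0
print(prox(lambda v: v ** 2 / 2, x, tau))    # x/(1+tau) = 1.0
print(prox(np.abs, x, tau))                  # soft threshold: x - tau = 1.0
print(env_dx(lambda v: v ** 2 / 2, x, tau))  # also x/(1+tau) = 1.0 here
```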
Specifically, consider the following system of two equations in two unknowns $\alpha$ and $\tau$: \begin{subequations}\label{eq:eq_main0} \begin{align} \E \Big[\Big(\envdx{\mathcal{L}}{\alpha\,G+Z}{\tau}\Big)^2\,\Big]&=\frac{\alpha^2- \la^2\delta^2 \tau^2}{\tau^2\,\delta}, \\ \E\Big[G\cdot\envdx{\mathcal{L}}{\alpha\,G+Z}{\tau}\Big]&=\frac{\alpha(1-\la\delta\tau)}{\tau\,\delta}, \end{align} \end{subequations} where $G\sim\mathcal{N}(0,1)$ and $Z\sim P_Z$ is the noise variable. It has been shown in \cite{karoui2013asymptotic,Master} that under appropriate regularity conditions on $\mathcal{L}$ and the noise distribution $P_Z$, the system of equations above has a unique solution $(\alpha_{\mathcal{L},\la}>0,\tau_{\mathcal{L},\la}>0)$ and $\alpha_{\mathcal{L},\la}^2$ is the HD limit of the squared error, i.e., \begin{align}\label{eq:alphaL} \|\widehat{\mathbf{x}}_{{\mathcal{L},\la}}-\mathbf{x}_0\|_2^2\,\stackrel{P}{\longrightarrow}\,\alpha_{\mathcal{L},\la}^2. \end{align} Here, we derive tight lower bounds on $\alpha_{\mathcal{L},\la}^2$ over both the choice of $\mathcal{L}$ and $\la$. Our starting point is the asymptotic characterization in \eqref{eq:alphaL}, i.e., our results hold for all loss functions and regularization parameters for which \eqref{eq:eq_main0} has a unique solution that characterizes the HD limit of the squared error. To formalize this, we define the following collection of loss functions $\mathcal{L}$ and noise distributions $P_Z$: \begin{align} \mathcal{C}_{\rm lin} := \Big\{ (\mathcal{L},P_Z)\,\Big|\,\text{$\forall\la>0$: \eqref{eq:eq_main0} has a unique bounded solution $(\alpha_ {_{\mathcal{L},\la}}>0,\tau_{_{\mathcal{L},\la}}>0)$ and \eqref{eq:alphaL} holds} \Big\}.\notag \end{align} We refer the reader to \cite[Thm. 1.1]{karoui2013asymptotic} and \cite[Thm. 2]{Master} for explicit characterizations of $(\mathcal{L},P_Z)$ that belong to $\mathcal{C}_{\rm lin}$.
We conjecture that some of these regularity conditions (e.g., the differentiability requirement) can in fact be relaxed. While this is beyond the scope of this paper, if this is shown then automatically the results of this paper formally hold for a richer class of loss functions. \subsection{Fundamental Limits and Optimal Tuning}\label{sec:limit_opt_lin} Our first main result, stated as Theorem \ref{thm:lowerbound_reg} below, establishes a tight bound on the achievable values of $\alpha_{{\mathcal{L},\la}}^2$ for all regularization parameters $\la>0$ and all choices of $\mathcal{L}$ such that $(\mathcal{L},P_Z)\in\mathcal{C}_{\rm lin}$. \begin{thm}[{Lower bound on ${\alpha_{{\mathcal{L},\la}}}$}] \label{thm:lowerbound_reg} Let Assumptions \ref{ass:HD}, \ref{ass:gaussian} and \ref{ass:noise} hold. For $G\sim \mathcal{N}(0,1)$ and noise random variable $Z\sim P_Z$, consider a new random variable $V_a:=a\,G +Z,$ parameterized by $a\in\mathbb{R}$. Fix any $\delta>0$ and define $\alpha_{\star} = \alpha_\star(\delta,P_Z)$ as follows: \begin{align}\label{eq:alphaopt_thm} \alpha_{\star}:=\min_{\substack{0\le x<1/\delta}}\left[a>0:\;\frac{\delta(a^2-x^2\,\delta^2)\,\mathcal{I}(V_a)}{(1-x\,\delta)^2}=1\right]. \end{align} For any $\mathcal{L}$ such that $(\mathcal{L},P_Z)\in\mathcal{C}_{\rm lin}$, $\la>0$ and $\alpha_ {{\mathcal{L},\la}}^2$ denoting the respective high-dimensional limit of the squared-error as in \eqref{eq:alphaL}, it holds that $\alpha_ {{\mathcal{L},\la}} \geq \alpha_\star$. \end{thm} The proof of the theorem is presented in Section \ref{sec:proof_bin_lowerbound}. This includes showing that the minimization in \eqref{eq:alphaopt_thm} is feasible for any $\delta>0$. In general, the lower bound $\alpha_\star$ can be computed by numerically solving \eqref{eq:alphaopt_thm}. 
For special cases of noise distributions (such as Gaussian), it is possible to solve \eqref{eq:alphaopt_thm} analytically and obtain a closed-form formula for $\alpha_\star$, which is easier to interpret. While this is only possible in a few special cases, our next result establishes a simple closed-form lower bound on $\alpha_\star$ that is valid under only mild assumptions on $P_Z$. For convenience, let us define $h_{\delta}:\mathbb{R}_{>0}\rightarrow\mathbb{R}_{>0}$, \begin{align}\label{eq:h_del} h_{{\delta}}(x):=\frac{1}{2}\Big(1-x-\delta+\sqrt{(1+\delta+x)^2-4\delta}\,\Big)\,. \end{align} The subscript $\delta$ emphasizes the dependence of the function on the oversampling ratio $\delta$. We also note for future reference that $h_{{\delta}}$ is strictly increasing for all fixed $\delta>0$. \begin{cor}[Closed-form lower bound on $\alpha_\star^2$]\label{cor:lowerbound} Let $\alpha_\star$ be as in \eqref{eq:alphaopt_thm} under the assumptions of Theorem \ref{thm:lowerbound_reg}. Assume that $p_Z$ is differentiable and takes strictly positive values on the real line. Then, it holds that $$\alpha_\star^2 \ge h_{\delta}\left({1}\big/{\mathcal{I}(Z)}\right).$$ Moreover, equality holds if and only if $Z\sim\mathcal{N}(0,\zeta^2)$ for some $\zeta>0$. \end{cor} The proof of the Corollary, presented in Section \ref{sec:proofofcor_lin}, shows that the gap between the actual value of $\alpha_\star$ and $h_{\delta}\big({1}\big/{\mathcal{I}(Z)}\big)$ depends solely on the distribution of $Z$. Informally: the more $Z$ resembles a Gaussian, the smaller the gap. The simple approximation of Corollary \ref{cor:lowerbound} is key for comparing the performance of optimally-tuned RERM to optimally-tuned RLS in Section \ref{sec:LS_linear}. A natural question regarding the lower bound of Theorem \ref{thm:lowerbound_reg} is whether it is tight. Indeed, the lower bound cannot be improved in general. This can be argued as follows.
Consider the case of additive Gaussian noise $Z\sim\mathcal{N}(0,\zeta^2)$ for which $\mathcal{I}(Z)={1}/{\E[Z^2]}=1/\zeta^2$. On the one hand, Corollary \ref{cor:lowerbound} shows that $\alpha_\star^2\geq h_{\delta}(\zeta^{2})$ and on the other hand, we show in Lemma \ref{cor:LS_reg} that optimally-tuned RLS achieves this bound, i.e., ${\alpha_{\,{\ell_{_2},\la_{\rm opt}}}^2}=h_{\delta}(\zeta^{2})$. Thus, the case of Gaussian noise shows that the bound of Theorem \ref{thm:lowerbound_reg} cannot be improved in general. \vspace{4pt} Our next result reinforces the claim that the bound is actually tight for a larger class of noise distributions. \begin{lem}[{Optimal tuning for linear RERM}]\label{thm:opt_reg} For given $\delta>0$ and $P_Z$, let $(\alpha_{\star}>0,x_\star\in[0,1/\delta))$ be the optimal solution in the minimization in \eqref{eq:alphaopt_thm}. Denote $\la_\star=x_\star$ and define $V_\star := \alpha_\star G +Z$. Consider the loss function $\mathcal{L}_{\star}:\mathbb{R}\rightarrow\mathbb{R}$ defined as $\mathcal{L}_{\star}(v) := -\env{\frac{\alpha_{\star}^{^2}-\la_{\star}^{^2}\,\delta^{^2}}{1-\la_{\star}\,\delta}\cdot\log\left(p_{_{V_{\star}}}\right)}{v}{1}.$ Then for $\mathcal{L}_\star$ and $\la_\star$, the equations \eqref{eq:eq_main0} satisfy $(\alpha,\tau) = (\alpha_\star,1)$. \end{lem} We leave for future work coming up with sufficient conditions on $P_Z$ under which $(\mathcal{L}_\star,P_Z)\in\mathcal{C}_{\rm lin}$, which would imply that the bound of Theorem \ref{thm:lowerbound_reg} is achieved by choosing $\mathcal{L}=\mathcal{L}_\star$ and $\la=\la_\star$ in \eqref{eq:opt_reg_main}. In Figures \ref{fig:fig}(Left) and \ref{fig:fig_app}(Top Left), we numerically (by using gradient descent) evaluate the performance of the proposed loss function $\mathcal{L}_\star$, in the case of Laplacian noise, suggesting that it achieves the lower bound $\alpha_\star$ in Theorem \ref{thm:lowerbound_reg}. 
See also Figure \ref{fig:lopt}(Left) for an illustration of $\mathcal{L}_\star$. \subsection{The Sub-optimality Gap of RLS in Linear Models}\label{sec:LS_linear} We rely on Theorem \ref{thm:lowerbound_reg} to investigate the statistical gap between least-squares (i.e., $\mathcal{L}(t)=t^2$ in \eqref{eq:opt_reg_main}) and the optimal choice of $\mathcal{L}$. As a first step, the lemma below computes the high-dimensional limit of optimally-regularized RLS. \begin{lem}[Asymptotic error of optimally regularized RLS]\label{cor:LS_reg} Fix $\delta>0$ and noise distribution $P_Z$. Let $\widehat{\mathbf{x}}\,_{{\ell_2,\la}}$ be the solution to $\la$-regularized least-squares. Further let $\alpha_{\ell_2,\la}$ denote the high-dimensional limit of $\left\|\widehat{\mathbf{x}}\,_{{\ell_2,\la}}-\mathbf{x}_0\right\|_2^2$. Then, $\la\mapsto \alpha_{\ell_2,\la}$ is minimized at $\la_{\rm opt} = 2\,\E[Z^2]$ and $$\alpha_{{\ell_2,\la_{\rm opt}}}^2 = h_{\delta}\left(\E\left[Z^2\right]\right).$$ \end{lem} We combine this result with the closed-form lower bound of Corollary \ref{cor:lowerbound} to find that ${\alpha_\star^2}/{\alpha_{\,{\ell_{_2},\la_{\rm opt}}}^2} \in[\omega_{_\delta},1]$ for $$\omega_{\delta}:=\frac{h_{{\delta}}\left(1/\mathcal{I}(Z)\right)}{h_{{\delta}}\left(\E\left[Z^2\right]\right)}.$$ The fact that $\omega_\delta \leq 1$ follows directly from the increasing nature of the function $h_{\delta}$ and the Cramer-Rao bound $\E[Z^2] \ge {1}\big/\mathcal{I}(Z)$ (see Proposition \ref{propo:Fisher}(c)). Moreover, using analytic properties of the function $h_{\delta}$, it is shown in Section \ref{sec:proof_omega_LB} that \begin{align}\label{eq:omega_LB} {\alpha_\star^2}\big/{\alpha_{\,{\ell_{_2},\la_{\rm opt}}}^2}\geq \omega_{_\delta} \geq \max\left\{ 1-\delta \,,\, \big(\mathcal{I}(Z)\, \E[Z^2]\big)^{-1} \right\}.
\end{align} The first term in the lower bound in \eqref{eq:omega_LB} reveals that in the highly over-parameterized regime ($\delta\ll 1$), it holds that $\omega_\delta\approx 1$. Thus, optimally-regularized LS becomes nearly optimal. More generally, in the overparameterized regime $0<\delta<1$, the squared error of optimally-tuned LS is no worse than $(1-\delta)^{-1}$ times the optimal performance among all convex ERM. The second term in \eqref{eq:omega_LB} is more useful in the underparameterized regime $\delta\geq 1$ and captures the effect of the noise distribution via the ratio $(\mathcal{I}(Z)\,\E[Z^2])^{-1} \leq 1$ (which is closely related to the classical Fisher information distance studied, e.g., in \cite{johnson2004fisher}). From this and the fact that $\mathcal{I}(Z) = 1/\E[Z^2]$ iff $Z\sim\mathcal{N}(0,\zeta^2)$, we conclude that $\omega_{\delta}$ attains its maximum value $1$ (thus, optimally-tuned LS is optimal) when $Z$ is Gaussian. For completeness, we remark that \cite{wu2012optimal} has shown that when $Z\sim \mathcal{N}(0,\zeta^2)$, the minimum mean square error (MMSE) is also given by $h_{\delta}(\zeta^2)$. To further illustrate that our results are informative for general noise distributions, consider the case of Laplacian noise, i.e., $Z\sim \texttt{Laplace}(0,b^2)$. Using $\E[Z^2] = 2b^2$ and $\mathcal{I}(Z) = b^{-2}$ in \eqref{eq:omega_LB}, we obtain $\omega_{\delta}\ge 1/2$ for all $b>0$ and $\delta>0$. Therefore, optimally-tuned RLS achieves a squared error at most twice as large as the optimal one: if $Z\sim \texttt{Laplace}(0,b^2),~b>0$, then for all $\delta>0$ it holds that $\alpha_{\,{\ell_{_2},\la_{\rm opt}}}^2 \le 2\,\alpha_\star^2$. See also Figures \ref{fig:fig} and \ref{fig:fig_app} for a numerical comparison.
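The Gaussian case of Lemma \ref{cor:LS_reg} can be checked numerically from \eqref{eq:eq_main0}: for $\mathcal{L}(t)=t^2$ the envelope derivative is $2x/(1+2\tau)$, and for $Z\sim\mathcal{N}(0,\zeta^2)$ the expectations reduce to Gaussian moments, so the system collapses to scalar algebra (the quadratic in $\tau$ below is our own reduction, valid under the normalization used here). Sweeping $\la$ then recovers $\min_\la \alpha_{\ell_2,\la}^2 = h_\delta(\E[Z^2])$.

```python
import numpy as np

delta, zeta2 = 2.0, 1.0  # example: delta = m/n = 2, Gaussian noise variance 1

def h(delta, x):
    """The closed-form map h_delta."""
    return 0.5 * (1 - x - delta + np.sqrt((1 + delta + x) ** 2 - 4 * delta))

def alpha2_rls(lam):
    """Squared error of lambda-regularized LS from the two fixed-point
    equations with L(t) = t^2 and Z ~ N(0, zeta2): using env' = 2x/(1+2*tau)
    and E[(aG+Z)^2] = a^2 + zeta2, the system reduces to a quadratic in tau."""
    a, b = 2 * lam * delta, 2 * delta + lam * delta - 2
    tau = (-b + np.sqrt(b * b + 4 * a)) / (2 * a)  # positive root
    u = lam * delta * tau
    return (zeta2 * (1 - u) ** 2 + delta * u ** 2) / (delta - (1 - u) ** 2)

lams = np.linspace(0.2, 3.0, 281)
best = min(alpha2_rls(l) for l in lams)
print(best, h(delta, zeta2))  # both ~0.41421: the lambda-sweep attains h_delta(E[Z^2])
```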
\section{Binary Models}\label{sec:binary} Consider data $(y_i,\mathbf{a}_i),~i\in[m]$ from a binary model: $y_i = f(\mathbf{a}_i^T\mathbf{x}_0)$, where $f$ is a (possibly random) link function outputting $\{\pm1\}.$ \begin{ass}[Link function]\label{ass:label} The link function $f$ satisfies $\nu_f:=\E\left[S\,f(S)\right]\neq0,$ for $S\sim \mathcal{N}(0,1)$.\footnote{See Section \ref{sec:sfs} for further discussion.} \end{ass} Under Assumptions \ref{ass:HD}, \ref{ass:gaussian} and \ref{ass:label}, we study ridge-regularized ERM for binary measurements: \begin{equation}\label{eq:opt_bin_main} {\widehat\mathbf{w}}_{{\mathcal{L},\la}}:=\arg \min_{\mathbf{w}\in\mathbb{R}^n}\;\;\frac{1}{m}\sum_{i=1}^m \mathcal{L}\left(y_i\mathbf{a}_i^T \mathbf{w}\right)+\frac{\la}{2}\|\mathbf{w}\|^{2}. \end{equation} We also assume that $\|\mathbf{x}_0\|_2=1$, since the signal strength can always be absorbed in the link function: if $\|\mathbf{x}_0\|_2=r>0$, then the results continue to hold for the new link function $\widetilde{f}(t) := f\big(r t\big)$. \subsection{Asymptotic Performance}\label{sec:background_bin} In contrast to linear models, where we focused on the squared error, for binary models a more relevant performance measure is the normalized correlation $\corr{{\widehat\mathbf{w}}_{{\mathcal{L},\la}}}{\mathbf{x}_0}$. Our first result determines the limit of $\corr{{\widehat\mathbf{w}}_{{\mathcal{L},\la}}}{\mathbf{x}_0}$.
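Assumption \ref{ass:label} is easy to verify numerically for standard links. The small sketch below uses midpoint quadrature over the standard normal; for the logistic model we use the standard identity $\E[f(S)\,|\,S=s]=\tanh(s/2)$ (a known fact, not stated in the text), while for the signed model $\nu_f=\E|S|=\sqrt{2/\pi}$.

```python
import numpy as np

def nu_f(cond_mean, lim=8.0, n=400000):
    """nu_f = E[S f(S)] for S ~ N(0,1), where cond_mean(s) = E[f(S)|S=s]."""
    s = np.linspace(-lim, lim, n + 1)
    mid = 0.5 * (s[1:] + s[:-1])
    gauss = np.exp(-mid ** 2 / 2) / np.sqrt(2 * np.pi)
    return np.sum(mid * cond_mean(mid) * gauss * np.diff(s))

print(nu_f(np.sign))                   # signed model: sqrt(2/pi) ~ 0.7979
print(nu_f(lambda s: np.tanh(s / 2)))  # logistic model: positive, so Assumption 4 holds
```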
Specifically, we show that for a wide class of loss functions it holds that \begin{align}\label{eq:corr_lim} \rho_{_{\mathcal{L},\la}}:=\corr{{\widehat\mathbf{w}}_{{\mathcal{L},\la}}}{\mathbf{x}_0}:=\frac{|{\widehat\mathbf{w}}_{{\mathcal{L},\la}}^T\,\mathbf{x}_0|}{\|{\widehat\mathbf{w}}_{{\mathcal{L},\la}}\|_2\|\mathbf{x}_0\|_2} \stackrel{P}{\longrightarrow} \sqrt\frac{1}{1+\sigma_{{\mathcal{L},\la}}^2}\,, \end{align} where $\sigma_{{\mathcal{L},\la}}^2:=\alpha_{{\mathcal{L},\la}}^2\big/\mu_{{\mathcal{L},\la}}^2$ and $(\alpha_{{\mathcal{L},\la}},\mu_{{\mathcal{L},\la}})$ are found by solving the following system of three nonlinear equations in three unknowns $(\alpha,\mu,\tau)$, for $G,S\widesim{\text{\small{iid}}}\mathcal{N}(0,1)$: \begin{subequations}\label{eq:bin_sys} \begin{align} \Exp\Big[S\,f(S)\,\envdx{\mathcal{L}}{\alpha G + \mu S f(S)}{\tau} \Big]&=-\la\mu,\quad\\ {\tau^2}\,{\delta}\,\Exp\Big[\left(\envdx{\mathcal{L}}{\alpha G + \mu S f(S)}{\tau}\right)^2\Big]&=\alpha^2,\\ {\tau\,\delta}\,\E\Big[ G\, \envdx{\mathcal{L}}{\alpha G + \mu S f(S)}{\tau} \Big]&=\alpha(1-\la\tau\delta). \end{align} \end{subequations} To formalize this, we define the following collection of loss and link functions: \begin{equation}\label{eq:Cbin} \begin{split} \mathcal{C}_{\rm bin} := \Big\{(\mathcal{L} , \,f )\Big|\,\text{$\forall\la>0$: \eqref{eq:bin_sys} has a unique bounded solution $(\alpha_{\mathcal{L},\la}>0,\mu_{{\mathcal{L},\la}},\tau_{{\mathcal{L},\la}}>0)$} \text{ and \eqref{eq:corr_lim} holds}\Big\}. \end{split} \end{equation} \begin{thm}[Asymptotics for binary RERM]\label{propo:boundedness} Let Assumptions \ref{ass:HD} and \ref{ass:gaussian} hold and $\|\mathbf{x}_0\|_2=1$. Let $f:\mathbb{R}\rightarrow\{-1,+1\}$ be a link function satisfying Assumption \ref{ass:label}.
Further assume a loss function $\mathcal{L}$ with the following properties: $\mathcal{L}$ is convex, twice differentiable and bounded from below such that $\mathcal{L}^\prime(0)\neq0$ and for $G\sim\mathcal{N}(0,1)$, we have $\E[\mathcal{L}(G)] < \infty$. Then, it holds that $(\mathcal{L},f) \in \mathcal{C}_{\rm bin}.$ \end{thm} We prove Theorem \ref{propo:boundedness} in Section \ref{sec:asy_bin}. Previous works have considered special instances of this: \cite{sur2019modern,salehi2019impact} study unregularized and regularized logistic-loss for the logistic binary model, while \cite{taheri2020sharp} studies strictly-convex ERM without regularization. Here, we follow the same approach as in \cite{salehi2019impact,taheri2020sharp}, who apply the convex Gaussian min-max theorem (CGMT) to relate the performance of RERM to an auxiliary optimization (AO) problem whose first-order optimality conditions lead to the system of equations in \eqref{eq:bin_sys}. Our technical contribution in proving Theorem \ref{propo:boundedness} is establishing existence and uniqueness of solutions to \eqref{eq:bin_sys} for a broad class of convex losses. As a final remark, the solution to \eqref{eq:bin_sys} (specifically, the parameter $\sigma_{{\mathcal{L},\la}}^2$) further determines the high-dimensional limit of the classification error for a fresh feature vector $\mathbf{a}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_n)$ (see Section \ref{sec:proofoferr}): \begin{align}\label{eq:testerror2} \mathcal{E}_{{\mathcal{L},\la}} := \mathbb{P} \left( f\left(\mathbf{a}^T \mathbf{x}_0\right) \left(\mathbf{a}^T{\widehat\mathbf{w}}_{{\mathcal{L},\la}}\right) < 0\right)\stackrel{P}{\longrightarrow}\mathbb{P} \left( \sigma_{{\mathcal{L},\la}}\,G + Sf(S) < 0 \right),~~G,S\widesim{\text{\small{iid}}} \mathcal{N}(0,1).
\end{align} \subsection{Fundamental Limits and Optimal Tuning}\label{sec:limitandopt} Thus far, we have shown in \eqref{eq:corr_lim} and \eqref{eq:testerror2} that $\sigma_{{\mathcal{L},\la}}$ predicts the high-dimensional limit of the correlation and classification-error of the RERM solution ${\widehat\mathbf{w}}_{{\mathcal{L},\la}}$. In fact, smaller values of $\sigma_{{\mathcal{L},\la}}$ result in better performance, i.e., higher correlation and classification accuracy (see Section \ref{sec:proofoferr}). In this section, we derive a lower bound on $\sigma_{{\mathcal{L},\la}}$ characterizing the statistical limits of RERM for binary models. \begin{thm}[Lower Bound on ${\sigma_{{\mathcal{L},\la}}}$]\label{thm:lowerbound_bin} Let Assumptions \ref{ass:HD}, \ref{ass:gaussian} and \ref{ass:label} hold. For $G,S\widesim{\text{\small{iid}}} \mathcal{N}(0,1)$ define the random variable $W_s:= s\,G + S\,f(S)$ parameterized by $s\in\mathbb{R}$. Fix any $\delta>0$ and define \begin{align}\label{eq:sigopt_thm} \sigma_{\star}= \sigma_\star(\delta,f):=\min_{0\le x<1/\delta}\left[s > 0: \frac{1-s^2(1-s^2\mathcal{I}(W_s))}{\delta s^2(s^2\mathcal{I}(W_s)+\mathcal{I}(W_s)-1)}-2x + x^2\delta\big(1+\frac{1}{s^2}\big) = 1\right]. \end{align} For any $(\mathcal{L},f)\in\mathcal{C}_{\rm bin}$ and $\la>0$, with $\sigma_{{\mathcal{L},\la}}^2$ the corresponding high-dimensional limit as in \eqref{eq:corr_lim}, it holds that $\sigma_{{\mathcal{L},\la}} \geq \sigma_\star$. \end{thm} We prove Theorem \ref{thm:lowerbound_bin} in Section \ref{sec:proofofbin}, where we also show that the minimization in \eqref{eq:sigopt_thm} is always feasible. In view of \eqref{eq:corr_lim} and \eqref{eq:testerror2}, the theorem's lower bound translates to an upper bound on correlation and test accuracy. Note that $\sigma_\star$ depends on the link function only through the Fisher information of the random variable $s\,G + S\,f(S)$.
This parallels the lower bound of Theorem \ref{thm:lowerbound_reg} on linear models, with the random variable $S\,f(S)$ effectively playing the role of the noise variable $Z$. \vspace{4pt} Next, we present a useful closed-form lower bound for $\sigma_\star$. For convenience, define the function $H_\delta : \mathbb{R}_{>1}\rightarrow\mathbb{R}_{>0}$ parameterized by $\delta>0$ as follows: \begin{align} H_{\delta}(x):= 2\left(-\delta-x+\delta \,x+\sqrt{(-\delta-x+\delta \,x)^2 + 4\delta(x-1)}\right)^{-1}. \end{align} \begin{cor}[Lower bound on $\sigma_\star$]\label{cor:lowerbound_binary} Let $\sigma_\star$ be as in \eqref{eq:sigopt_thm}. Fix any $\delta>0$ and assume that $f(\cdot)$ is such that the random variable $Sf(S)$ has a differentiable and strictly positive probability density on the real line. Then, $$\sigma_\star^2\ge H_\delta \left(\;\mathcal{I}(Sf(S))\;\right).$$ \end{cor} Corollary \ref{cor:lowerbound_binary} can be viewed as an extension of Corollary \ref{cor:lowerbound} to binary models. The proof of the corollary, presented in Section \ref{sec:proofofcor_bin}, further reveals that the closer the distribution of $Sf(S)$ is to a Gaussian, the smaller the gap, with equality if and only if $Sf(S)$ is Gaussian. \vspace{4pt} Our next result strengthens the lower bound of Theorem \ref{thm:lowerbound_bin} by showing the existence of a loss function and regularization parameter for which the system of equations \eqref{eq:bin_sys} has a solution leading to $\sigma_\star$. \begin{lem}[{Optimal tuning for binary RERM}]\label{thm:opt_bin} For given $\delta>0$ and binary link function $f$, let $(\sigma_{\star}>0,x_\star\in[0,1/\delta))$ be the optimal solution in the minimization in \eqref{eq:sigopt_thm}. Denote $\la_\star=x_\star$ and define $W_\star := \sigma_\star G + Sf(S)$.
Consider the loss function $\mathcal{L}_{\star}:\mathbb{R}\rightarrow\mathbb{R}$ \begin{align}\label{eq:optimalloss_thm} \mathcal{L}_{\star}(x) := -\env{\frac{\eta(\la_{\star}\delta-1)}{\delta(\eta-\mathcal{I}(W_{_{\star}}))}Q + \frac{\la_{\star}\delta-1}{\delta(\eta-\mathcal{I}(W_{_{\star}}))}\log\left(p_{_{W_{_{\star}}}}\right)}{x}{1}, \end{align} where $\eta:=1- \mathcal{I}(W_{{\star}})\cdot(\sigma_{\star}^2-\sigma_{\star}^2\la_{\star}\delta - \la_{\star}\delta)-\la_{\star}\delta$ and $Q(w) := w^2/2.$ Then for $\mathcal{L}_\star$ and $\la_\star$, the equations \eqref{eq:bin_sys} satisfy $(\alpha,\mu,\tau) = (\sigma_\star,1,1)$. \end{lem} Lemma \ref{thm:opt_bin} suggests that if $\mathcal{L}_\star$ satisfies the assumptions of Theorem \ref{propo:boundedness}, then $\sigma_{\mathcal{L}_\star,\la_\star} = \sigma_\star$. In Figures \ref{fig:fig} and \ref{fig:fig_app}, and for the special cases of the Signed and Logistic models, we verify numerically that the performance of the candidates $\mathcal{L}_\star$ and $\la_\star$ reaches the optimal errors. This suggests that for these models, Lemma \ref{thm:opt_bin} yields the optimal choices for $\mathcal{L}$ and $\la$. See also Figure \ref{fig:lopt}(Right) for an illustration of $\mathcal{L}_\star$. \subsection{The Sub-optimality Gap of RLS in Binary Models}\label{LS_binary} We use the optimality results of the previous section to precisely quantify the sub-optimality gap of RLS. First, the following lemma characterizes the performance of RLS. \begin{lem}[Asymptotic error of RLS]\label{cor:LS_bin} Let Assumptions \ref{ass:HD}, \ref{ass:gaussian} and \ref{ass:label} hold. Recall that $\nu_f=\E[Sf(S)]\neq 0$. Fix any $\delta>0$ and consider solving \eqref{eq:opt_bin_main} with the square-loss $\mathcal{L}(t)=(t-1)^2$ and $\la\geq0$.
Then, the system of equations in \eqref{eq:bin_sys} has a unique solution $(\alpha_{{\ell_2,\la}},\mu_{{\ell_2,\la}},\tau_{{\ell_2,\la}})$ and \begin{align}\label{eq:sigmaLSreg} \sigma_{{\ell_2,\la}}^2= \frac{\alpha_{\ell_2,\la}^2}{\mu_{\ell_2,\la}^2} = \frac{1}{2\delta \nu_f^2}\Big(1-\delta \nu_f^2 + \frac{2+ 2\delta+\la\delta+\delta\nu_f^2\left((2+\la)\delta-6\right)}{\sqrt{4+4\delta(\la-2)+\delta^2(\la+2)^2}}\Big). \end{align} Moreover, it holds that $\sigma_{{\ell_2,\la}}^2 \geq \sigma_{{\ell_2,\la_{\rm opt}}}^2 := H_\delta \big((1-\nu_f^2)^{-1}\big)$ with equality attained for the optimal tuning $\la_{{\rm opt}}= 2\big(1-\nu_f^2\big)\big/\big(\delta\,\nu_f^2\big)$. \end{lem} In analogy with Lemma \ref{cor:LS_reg}, in which the RLS performance for linear measurements depends on the additive noise distribution only through its second moment $\E[Z^2]$, Lemma \ref{cor:LS_bin} reveals that the corresponding key parameter for binary models is $1-\nu_f^2$. Interestingly, the expression for $\sigma_{{\ell_2,\la_{\rm opt}}}^2$ conveniently matches the simple bound on $\sigma_\star^2$ in Corollary \ref{cor:lowerbound_binary}. Specifically, it holds for any $\delta>0$ that \begin{align}\label{eq:RLS_bin} 1\;\ge\;\frac{\sigma_\star^2}{\sigma_{{\ell_{_2},\la_{\rm opt}}}^2} \ge\Omega_{\delta}:=\frac{H_{\delta}\left(\,\mathcal{I}(S\,f(S))\,\right)}{H_{\delta}\left((1-\nu_f^2)^{-1}\right)}. \end{align} We note that $H_\delta(\cdot)$ is strictly decreasing in its domain for a fixed $\delta>0$. Furthermore, the Cramer-Rao bound (see Prop. \ref{propo:Fisher} (d)) requires that $\mathcal{I}(Sf(S))\geq \left({\rm Var}[Sf(S)]\right)^{-1} =\big(1-\nu_f^2\big)^{-1}$. Combining these confirms that $\Omega_\delta\le1$. Furthermore, $\Omega_\delta=1$, and thus $\sigma_\star^2 = \sigma_{{\ell_{_2},\la_{\rm opt}}}^2$, iff the random variable $Sf(S)$ is Gaussian.
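Since all quantities in Lemma \ref{cor:LS_bin} are explicit, the lemma can be sanity-checked numerically. The following sketch (ours; the choices $\delta=3$ and $\nu_f^2=2/\pi$, the value for the Signed model, are ours for illustration) evaluates \eqref{eq:sigmaLSreg} on a grid of $\la$ and confirms that the minimum is attained at $\la_{\rm opt}=2(1-\nu_f^2)/(\delta\nu_f^2)$ with value $H_\delta\big((1-\nu_f^2)^{-1}\big)$, and that the $\la\to0$ and $\la\to\infty$ limits recover, respectively, the unregularized LS error $(1-\nu_f^2)/(\nu_f^2(\delta-1))$ and the averaging-estimator error $1/(\delta\nu_f^2)$ discussed later in the paper:

```python
import numpy as np

def H(delta, x):
    # H_delta(x) as defined in the text, for x > 1
    a = -delta - x + delta * x
    return 2.0 / (a + np.sqrt(a**2 + 4 * delta * (x - 1)))

def sigma2_ls(delta, nu2, lam):
    # squared-error parameter of ridge-regularized LS, eq. (sigmaLSreg)
    num = 2 + 2 * delta + lam * delta + delta * nu2 * ((2 + lam) * delta - 6)
    den = np.sqrt(4 + 4 * delta * (lam - 2) + delta**2 * (lam + 2)**2)
    return (1 - delta * nu2 + num / den) / (2 * delta * nu2)

delta = 3.0                                  # delta = m/n (illustrative choice)
nu2 = 2 / np.pi                              # nu_f^2 for the Signed model, f = sign
lam_opt = 2 * (1 - nu2) / (delta * nu2)      # optimal tuning from the lemma

opt_val = sigma2_ls(delta, nu2, lam_opt)     # should equal H_delta((1-nu2)^{-1})
lam_grid = np.linspace(1e-3, 20, 2000)       # lambda grid for the optimality check
```

The same check passes for any $\delta>0$ and $\nu_f^2\in(0,1)$, since the identity at $\la_{\rm opt}$ is algebraic.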
However, for any binary link function satisfying Assumption \ref{ass:label}, $Sf(S)$ is never Gaussian (see Section \ref{sec:sfs}); thus, \eqref{eq:RLS_bin} suggests that the square-loss cannot be optimal. Nevertheless, one can use \eqref{eq:RLS_bin} to argue that the square-loss is (perhaps surprisingly) approximately optimal for certain popular models. For instance, consider the logistic link function $\widetilde{f}_{r}$ defined as $\mathbb{P}(\widetilde{f}_{r}(x)=1) = (1+\exp(-rx))^{-1}$, where $r:=\|\mathbf{x}_0\|_2$. Using \eqref{eq:RLS_bin} and maximizing the sub-optimality gap $1/\Omega_\delta$ over $\delta>0$, we find that if $f=\widetilde{f}_{r=1}$ then for all $\delta>0$ it holds that $$\sigma_{{\ell_{_2},\la_{\rm opt}}}^2 \le 1.003 \; \sigma_\star^2.$$ \emph{Thus, for a logistic link function and $\|\mathbf{x}_0\|_2=1$, optimally-tuned RLS is approximately optimal!} This is in agreement with the key message of Corollary \ref{cor:lowerbound_binary} on the critical role played by $Sf(S)$, since for the logistic model and small values of $r$, its density is ``close'' to a Gaussian. However, as the signal strength increases and $\widetilde{f}_{r}$ converges to the sign function ($\widetilde{f}_r(\cdot)\rightarrow \mathrm{sign}(\cdot)$), a gap opens between RLS and what Theorem \ref{thm:lowerbound_bin} suggests is possible. This can be precisely quantified using \eqref{eq:RLS_bin}. For example, for $r=10$ it can be shown that $ \sigma_{{\ell_{_2},\la_{\rm opt}}}^2 \le 2.442 \; \sigma_\star^{2}, ~\forall \delta>0$. Lemma \ref{thm:opt_bin} provides the recipe to bridge the gap in this case. Indeed, Figures \ref{fig:fig} and \ref{fig:fig_app} show that the optimal loss function $\mathcal{L}$ predicted by the lemma outperforms RLS for all values of $\delta$ and its performance matches the best possible one specified by Theorem \ref{thm:lowerbound_bin}.
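The logistic constant can be reproduced numerically. In the sketch below (ours, added for illustration), we use the fact, easy to check from the definition of $\widetilde{f}_r$, that $Sf(S)$ has density $p_W(w)=2\varphi(w)\,(1+e^{-rw})^{-1}$ with $\varphi$ the standard normal density; $\nu_f$ and $\mathcal{I}(Sf(S))$ are then one-dimensional integrals, and evaluating $1/\Omega_\delta$ from \eqref{eq:RLS_bin} over a grid of $\delta$ bounds the sub-optimality factor for $r=1$:

```python
import numpy as np

def H(delta, x):
    # H_delta(x) as defined in the text, for x > 1
    a = -delta - x + delta * x
    return 2.0 / (a + np.sqrt(a**2 + 4 * delta * (x - 1)))

r = 1.0                                      # signal strength ||x_0||
w = np.linspace(-30, 30, 600001)
dw = w[1] - w[0]
phi = np.exp(-w**2 / 2) / np.sqrt(2 * np.pi)
sig = 1.0 / (1.0 + np.exp(-r * w))
pW = 2 * phi * sig                           # density of W = S f(S), logistic link

nu = np.sum(w * pW) * dw                     # nu_f = E[S f(S)], by quadrature
score = -w + r * (1.0 - sig)                 # (log pW)'(w)
I_W = np.sum(pW * score**2) * dw             # Fisher information I(S f(S))

deltas = np.linspace(0.1, 50, 500)
gap = H(deltas, 1.0 / (1.0 - nu**2)) / H(deltas, I_W)   # = 1/Omega_delta
```

Over this grid, the factor stays within a fraction of a percent of $1$, matching the $1.003$ bound above; repeating the computation with $r=10$ reproduces the larger gap.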
\section{Numerical Experiments} In Figure \ref{fig:fig}(Left), we compare the lower bound of Theorem \ref{thm:lowerbound_reg} with the error of RLS (see Lemma \ref{cor:LS_reg}) for $Z\sim \texttt{Laplace}(0,1)$ and $\|\mathbf{x}_0\|_2=1$. To numerically validate that $\alpha_\star$ is achievable by the proposed choices of loss function and regularization parameter in Lemma \ref{thm:opt_reg}, we proceed as follows. We generate noisy linear measurements with iid Gaussian feature vectors $\mathbf{a}_i\in\mathbb{R}^{100}$. The estimator $\widehat{\mathbf{x}}_{{\mathcal{L}_\star,\la_\star}}$ is computed by running gradient descent (GD) on the corresponding optimization in \eqref{eq:opt_reg_main} when the proposed optimal loss and regularizer of Lemma \ref{thm:opt_reg} are used. See Figure \ref{fig:lopt}(Left) for an illustration of the optimal loss for this model. The resulting vector $\widehat{\mathbf{x}}_{{\mathcal{L}_\star,\la_\star}}$ is used to compute $\|\widehat{\mathbf{x}}_{{\mathcal{L}_\star,\la_\star}} - \mathbf{x}_0\|^2$. The average of these values over 50 independent Monte Carlo trials is shown in red squares. The close match between the theoretical and empirical values suggests that the fundamental limits presented in this paper are accurate even in small dimensions (also see the first and second rows of Table \ref{table:ratio}). In the next two figures, we present results for binary models. Figure \ref{fig:fig}(Middle) plots the effective error parameter $\sigma$ for the Signed model and Figure \ref{fig:fig}(Right) plots the classification error `$\mathcal{E}$' for the Logistic model with $\|\mathbf{x}_0\|_2 =10$. The red squares correspond to the numerical evaluations of ERM with $\mathcal{L}= \mathcal{L}_\star$ and $\la=\la_\star$ (as in Lemma \ref{thm:opt_bin}) derived by running GD on the proposed optimal loss and regularization parameter. See Figure \ref{fig:lopt}(Right) for an illustration of the optimal loss in this case.
The solution ${\widehat\mathbf{w}}_{{\mathcal{L}_\star,\la_\star}}$ of GD is used to calculate $\sigma_{{\mathcal{L}_\star,\la_\star}}$ and $\mathcal{E}_{{\mathcal{L}_\star,\la_\star}}$ in accordance with \eqref{eq:corr_lim} and \eqref{eq:testerror2}, respectively. Again, note the close match between theoretical and numerical evaluations (also see the third and fourth rows of Table \ref{table:ratio}). Finally, for all three models studied in Figure \ref{fig:fig}, we also include the theoretical predictions for the error of the following: (i) RLS with small and large regularization (as derived in Equations \eqref{eq:alphaLSreg} and \eqref{eq:sigmaLSreg}); (ii) optimally tuned RLS (as predicted by Lemmas \ref{cor:LS_reg} and \ref{cor:LS_bin}); (iii) optimally-tuned unregularized ERM (marked as $\alpha_{\rm ureg}, \sigma_{\rm ureg}, \mathcal{E}_{\rm ureg}$). The curves for the latter are obtained from \cite{bean2013optimal} and \cite{taheri2020sharp} for linear and binary models, respectively. We refer the reader to Sections \ref{sec:gains_lin} and \ref{sec:gains_bin} for a precise study of the benefits of regularization in view of Theorems \ref{thm:lowerbound_reg} and \ref{thm:lowerbound_bin}, for both linear and binary models. 
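A minimal, self-contained version of this type of experiment is sketched below (our illustration, not the paper's code: it uses the Signed model with the square loss, for which the ridge solution is available in closed form, in place of GD on $\mathcal{L}_\star$), comparing the empirical correlation of optimally-tuned RLS at $n=400$, $\delta=3$ against the prediction of \eqref{eq:corr_lim} and Lemma \ref{cor:LS_bin}:

```python
import numpy as np

rng = np.random.default_rng(0)

n, delta = 400, 3.0
m = int(delta * n)

x0 = rng.standard_normal(n)
x0 /= np.linalg.norm(x0)                     # ||x0||_2 = 1
A = rng.standard_normal((m, n))
y = np.sign(A @ x0)                          # Signed model: f = sign

nu2 = 2 / np.pi                              # nu_f^2 = E[S sign(S)]^2 = 2/pi
lam = 2 * (1 - nu2) / (delta * nu2)          # lambda_opt from Lemma (cor:LS_bin)

# ridge-regularized square loss: min (1/m) sum (y_i a_i^T w - 1)^2 + (lam/2)||w||^2
B = y[:, None] * A
w = np.linalg.solve((2 / m) * B.T @ B + lam * np.eye(n), (2 / m) * B.sum(axis=0))

corr_emp = abs(w @ x0) / np.linalg.norm(w)

# theory: corr -> sqrt(1/(1 + sigma^2)) with sigma^2 = H_delta((1 - nu2)^{-1})
x = 1.0 / (1.0 - nu2)
a_ = -delta - x + delta * x
sigma2 = 2.0 / (a_ + np.sqrt(a_**2 + 4 * delta * (x - 1)))
corr_th = np.sqrt(1.0 / (1.0 + sigma2))
```

Even at this moderate dimension, the empirical correlation lands within a few percent of the asymptotic prediction, consistent with the small-dimension accuracy reported in Table \ref{table:ratio}.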
\label{sec:numerical} \begin{figure} \centering \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=\linewidth,height= 4.8cm]{figure1.pdf} \label{fig:figure2} \end{subfigure} \begin{subfigure}{0.33\textwidth} \centering \includegraphics[width=0.99\linewidth,height= 4.8cm]{fig_sign.pdf} \label{fig:fig_sign} \end{subfigure}% \begin{subfigure}{0.33\textwidth} \centering \includegraphics[width=0.99\linewidth,height= 4.8cm]{fig_log_error_r10.pdf} \label{fig:fig_log} \end{subfigure} \caption{The lower bounds on error derived in this paper, compared to RLS for the linear model with $Z\sim\texttt{Laplace}(0,1)$ (Left), and for the binary Signed model (Middle) and binary Logistic model with $\|\mathbf{x}_0\|=10$ (Right). The red squares denote the performance of the optimally tuned RERM as derived in Lemmas \ref{thm:opt_reg} and \ref{thm:opt_bin}. See Section \ref{sec:numeric} for additional numerical results.} \label{fig:fig} \end{figure} \begin{table} \caption{Theoretical and numerical values of $\alpha_\star^2/\alpha_{\mathcal{L},\la_{\rm opt}}^2$ (for linear models) and $\sigma_\star^2/\sigma_{\mathcal{L},\la_{\rm opt}}^2$ (for binary models) for different values of $\delta$ and for some special cases studied in this paper. The theoretical results for $\alpha_\star$ and $\sigma_\star$ correspond to Theorems \ref{thm:lowerbound_reg} and \ref{thm:lowerbound_bin}. The empirical values of $\alpha_\star$ and $\sigma_\star$ are derived by numerically solving the optimally-tuned RERM (as derived in Lemmas \ref{thm:opt_reg} and \ref{thm:opt_bin}) by GD with $n=100$.
Results shown are averages over $50$ independent experiments.} \label{table:ratio} \vskip 0.1in \begin{center} \begin{small} \begin{sc} \begin{tabular}{l c | c c c c c} \toprule & $\delta$ & 0.5 & 2 & 4 & 6 & 8 \\ \midrule \multirow{2}{8em}{$Z\sim\texttt{Laplace}(0,1)$} & Theory & 0.9798 & 0.9103 & 0.8332 & 0.7690 & 0.7447\\ & Experiment & 0.9700 & 0.8902 & 0.8109 & 0.7530 & 0.7438\\ \midrule \multirow{2}{8em}{$Z\sim\texttt{Laplace}(0,2)$} & Theory & 0.9832 & 0.9329 & 0.8796 & 0.8371 & 0.8043\\ & Experiment & 0.9785 & 0.9103 & 0.8550 & 0.8316 & 0.7864\\ \midrule \multirow{2}{8em}{$f = \texttt{Sign}$} & Theory & 0.9934 & 0.8531 & 0.6199 & 0.4602 & 0.3618\\ & Experiment & 0.9918 & 0.8204 & 0.6210 & 0.4710 & 0.3829\\ \midrule \multirow{2}{10em}{$f = \texttt{Logistic}, {\small \|\mathbf{x}_0\|=10}$} & Theory & 0.9826 & 0.8721 & 0.7116 & 0.6211 & 0.5712\\ & Experiment & 0.9477 & 0.8987 & 0.7112 & 0.6211 & 0.6389\\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} \section{Comparison to a Simple Averaging Estimator}\label{sec:averaging} In this section, we compare the performance of optimally ridge-regularized ERM to the following simple averaging estimator: \begin{align}\label{eq:av_est} {\widehat\mathbf{w}}_{\rm ave} = \frac{1}{m}\sum_{i=1}^{m} y_i\mathbf{a}_i. \end{align} This estimator is closely related to the family of RERM estimators studied in this paper. To see this, note that ${\widehat\mathbf{w}}_{\rm ave}$ can be expressed as the solution to ridge-regularized ERM with $\la=1$ and the linear loss function $\mathcal{L}(x) = -x$ for all $x \in \mathbb{R}$: \begin{align*} {\widehat\mathbf{w}}_{\rm ave} = \arg \min_{\mathbf{w} \in \mathbb{R}^n}\; \frac{1}{2m} \sum_{i=1}^m \left\|y_i \mathbf{a}_i -\mathbf{w}\right\|_2^2 = \arg \min_{\mathbf{w} \in \mathbb{R}^n} \;\frac{1}{m}\sum_{i=1}^m -y_i \mathbf{a}_i^T\mathbf{w} + \frac{1}{2} \|\mathbf{w}\|_2^2.
\end{align*} Moreover, it is not hard to check that the correlation performance of ${\widehat\mathbf{w}}_{\rm ave}$ is the same as that of the solution of RLS with regularization $\la$ approaching infinity. These connections to the RERM family make it possible to evaluate the asymptotic performance of ${\widehat\mathbf{w}}_{\rm ave}$ using the machinery of this paper (i.e., by using the equations in \eqref{eq:bin_sys}). However, a more direct evaluation that uses the closed-form expression in \eqref{eq:av_est} is preferable here. Indeed, it can be easily checked that the following limit holds in the high-dimensional asymptotic regime: \begin{align} \forall \delta>0 \; : \quad\quad\quad \corr{{\widehat\mathbf{w}}_{\rm ave}}{\mathbf{x}_0} \stackrel{P}{\longrightarrow}\sqrt{\frac{1}{1+\frac{1}{\delta} \nu_f^{-2}}},\label{eq:aveper} \end{align} where recall our notation $\nu_f=\E[S f(S)],~S\sim\mathcal{N}(0,1)$. The use of the simple averaging estimator for signal recovery in generalized linear models (also, in single-index models) has been previously investigated, for example, in \cite{lu2017phase}. A favorable feature of ${\widehat\mathbf{w}}_{\rm ave}$ is its computational efficiency. In what follows, we use our lower bounds on the performance of general RERM estimators to evaluate its suboptimality gap compared to more complicated alternatives. To begin, in view of \eqref{eq:aveper} and \eqref{eq:corr_lim}, let us define the corresponding ``effective error parameter'' \begin{align} \sigma^2_{\rm ave} =\frac{1}{\delta} \nu_f^{-2}. \label{eq:ave} \end{align} First, we compare this value with the error of LS. Let ${\widehat\mathbf{w}}_{\rm LS}$ be the solution to \emph{unregularized} LS for $m>n$ (equivalently, $\delta>1$). It can be checked (e.g., \cite{NIPS}) that \begin{align} \corr{{\widehat\mathbf{w}}_{\rm LS}}{\mathbf{x}_0} \stackrel{P}{\longrightarrow}\sqrt{\frac{1}{1+ \sigma_{{\rm LS}}^2 }},\quad\text{where}~~\sigma_{{\rm LS}}^2 := \frac{1}{\delta-1} \big(\nu_f^{-2}-1\big).
\end{align} Directly comparing this to \eqref{eq:ave}, we find that $ \frac{\sigma^2_{\rm LS}}{\sigma^2_{\rm ave}} = \Big(\frac{1}{1-1/\delta}\Big)\,\big( 1 - \nu_f^2 \big), $ for all $\delta>1$. In other words, \begin{align} \sigma^2_{\rm ave}\gtrless \sigma^2_{\rm LS} \quad\Longleftrightarrow\quad \delta \gtrless \nu_f^{-2}. \end{align} Next, we study the performance gap of the averaging estimator from the optimal RERM. For this, we use Corollary \ref{cor:lowerbound_binary} to compare $\sigma^2_{\rm ave}$ to the lower bound $\sigma_\star^2$. We find that for any $\delta>0$ and any link function $f$ satisfying the assumptions of Corollary \ref{cor:lowerbound_binary}: \begin{align} 1\;\ge\;\frac{\sigma_\star^2}{\sigma_{\rm ave}^2}\; \ge \;\delta\,\nu_f^2\cdot H_\delta\Big(\mathcal{I}(Sf(S))\Big). \end{align} We complement these bounds with numerical simulations in Section \ref{sec:numeric}. \section{Gains of Regularization}\label{sec:unreg_opt} \subsection{Linear models}\label{sec:gains_lin} In this section, we study the impact of the regularization parameter on the best achievable performance. For this purpose, we compare $\alpha_\star$, the best achievable performance in the ridge-regularized case, to the best achievable performance among non-regularized empirical risk minimizers with convex losses, denoted by $\alpha_{\rm ureg}$. By definition of $\alpha_{\rm ureg}$, for all convex losses $\mathcal{L}$, in the regime of $\delta>1$ it holds that $\alpha_{\rm ureg}\;\le\;\alpha_{{\mathcal{L},\,0}}.$ In \cite{bean2013optimal}, the authors compute a tight lower bound on $\alpha_{\rm ureg}$ and show that it is attained provided that $p_Z$ is log-concave. Our next result bounds the ratio $\alpha_{\star}^2\, /\, \alpha_{_{\rm ureg}}^2$, illustrating the impact of regularization for a wide range of choices of $Z\sim \mathcal{D}$ and any $\delta>1$. \begin{cor}\label{cor:gains_reg} Let the assumptions of Corollary \ref{cor:lowerbound} hold and $\delta >1$.
Then it holds that: \begin{align}\label{eq:gains_reg} \frac{(\delta-1)}{\E[Z^2]} \,h_{{\delta}} \left(\frac{1}{\mathcal{I}(Z)}\right)\;\le\;\frac{\alpha_{\star}^2}{\alpha_{\,{\rm ureg}}^2}\;\le\; \min\Big\{(\delta-1)\,\mathcal{I}(Z),\,1 \Big\}. \end{align} \end{cor} \begin{proof} In order to obtain an upper bound for $\alpha_\star^2/\alpha_{\rm ureg}^2$, we first find a lower bound for $\alpha_{\rm ureg}^2$. We have $$\alpha_{{\rm ureg}}^2\,\mathcal{I}(V_{\alpha_{_{\rm ureg}}}) = \frac{1}{\delta},$$ thus we may apply Stam's inequality (as stated in Proposition \ref{propo:Fisher}(f)) to $\mathcal{I}(V_{\alpha_{_{\rm ureg}}})$ to derive the following lower bound: \begin{align}\label{eq:unreg_linear_lower} \alpha_{{\rm ureg}}^2 \ge \frac{1}{(\delta-1)\mathcal{I}(Z)}. \end{align} Also note that it holds that $\alpha_\star^2\le\alpha_{{\ell_2,\la_{\rm opt}}}^2$. Thus, by recalling Lemma \ref{cor:LS_reg} and the fact that $h_{\delta}(\cdot)\le1$ for all $\delta\ge0$, we deduce that $\alpha_\star^2\le1$. Additionally, since $\alpha_\star^2 \le \alpha_{\rm ureg}^2$, we conclude the upper bound in the statement of the corollary. To proceed, we use the Cramer-Rao bound (see Proposition \ref{propo:Fisher}(d)) for $\mathcal{I}(V_{\alpha_{{\rm ureg}}})$ to derive the following upper bound for $\alpha_{_{\rm ureg}}^2$, which holds for all $\delta>1$: \begin{align}\notag \alpha_{{\rm ureg}}^2 \le \frac{\E[Z^2]}{\delta-1}. \end{align} This, combined with the result of Corollary \ref{cor:lowerbound}, yields the lower bound in the statement of the corollary and completes the proof. \end{proof} Importantly, based on \eqref{eq:gains_reg} we find that as $\delta\rightarrow 1$ the ratio $\alpha_{\star}^2\, /\, \alpha_{{\,\rm ureg}}^2$ approaches zero, implying a large gap between $\alpha_\star$ and $\alpha_{{\,\rm ureg}}$ in this regime.
In the highly under-parameterized regime where $\delta\rightarrow \infty$, taking the limit in the lower bound of \eqref{eq:gains_reg} gives \begin{align} \frac{1}{\E[Z^2]\,\mathcal{I}(Z)}\;\le\;\lim_{\delta\rightarrow\infty} \frac{\alpha_{\star}^2}{\alpha_{\,{\rm ureg}}^2} \;\le \;1 \,.\label{eq:not_so_good_bound} \end{align} For example, we see that in this regime, when $Z$ is close to a Gaussian distribution so that $\mathcal{I}(Z) \approx 1/\E[Z^2]$, then provably $\alpha_\star \approx \alpha_{\rm ureg}$, implying that the impact of regularization on the resulting error is negligible. We remark that for other distributions that are far from Gaussian, in the sense that $\mathcal{I}(Z) \gg 1/\E[Z^2]$, the simple lower bound in \eqref{eq:not_so_good_bound} is not tight; this is because the bound of Corollary \ref{cor:lowerbound} is not tight in this case. \subsection{Binary models}\label{sec:gains_bin} In order to demonstrate the impact of regularization on the performance of ERM-based inference, we compare $\sigma_\star$ with the optimal error of non-regularized ERM for $\delta>1$, which we denote by $\sigma_{\rm ureg}$. Thus, $\sigma_{\rm ureg}$ satisfies $\sigma_{\rm ureg} \le \sigma_{\mathcal{L},0}$ for all convex losses $\mathcal{L}$. The general approach for determining $\sigma_{{\rm ureg}}$ is discussed in \cite{taheri2020sharp}, in which the authors also show the achievability of $\sigma_{{\rm ureg}}$ for well-known models such as the Signed and Logistic models. \par Our next result quantifies the gap between $\sigma_{{\rm ureg}}$ and $\sigma_\star$ in terms of the label function $f$ and $\delta>1$. \begin{cor}\label{cor:gains_bin} Let the assumptions of Theorem \ref{thm:lowerbound_bin} hold and $\delta >1$. Further assume the label function $f$ is such that $p_{_{S\cdot f(S)}}(x)$ is differentiable and positive for all $x \in \mathbb{R} $.
Then it holds that: \begin{align}\label{eq:unregGap_bin} \frac{(\delta-1)\nu_f^2}{1-\nu_f^2}\,H_\delta\Big(\mathcal{I}(\,Sf(S)\,)\Big)\;\le\;\frac{\sigma_{\star}^2}{\sigma_{_{\rm ureg}}^2}\;\le\; \min\left\{\frac{\delta-1}{\delta}\cdot\frac{\mathcal{I}(Sf(S))-1}{ \nu_f^2}\,,\,1\,\right\}. \end{align} \end{cor} \begin{proof} To prove the bounds on the ratio $\sigma_{\star}^2/\sigma_{_{\rm ureg}}^2$, we follow an argument similar to the proof of Corollary \ref{cor:gains_reg}. First, we use the result in \cite{taheri2020sharp}, which states that for all $\delta>1$ it holds that \begin{align} \sigma_{\rm ureg}^2 \ge \frac{1}{(\delta-1)(\mathcal{I}(Sf(S))-1)}. \end{align} Since it trivially holds that $\sigma_{\star}^2 \le \sigma_{\ell_2,\la_{\rm opt}}^2$, and by noting that $\sigma_{\ell_2,\la_{\rm opt}}^2$ as derived in Lemma \ref{cor:LS_bin} satisfies $\sigma_{\ell_2,\la_{\rm opt}}^2 \le \frac{1}{\delta \nu_f^2}$ for all $\delta>0$ (which follows from the fact that $H_\delta(x)\le \frac{x}{(x-1)\delta}$), we conclude that \begin{align} \sigma_\star^2\le \frac{1}{\delta \nu_f^2}. \end{align} Additionally, since it trivially holds that $\sigma_\star^2\le\sigma_{\rm ureg}^2$, we conclude the upper bound in the statement of the corollary. We proceed with proving the lower bound in the statement of the corollary. For this purpose, we first derive an upper bound for $\sigma_{\rm ureg}^2$. Using the fact that $\sigma_{_{\rm ureg}}^2$ satisfies \begin{align} \frac{1-\sigma_{_{\rm ureg}}^2(1-\sigma_{_{\rm ureg}}^2\mathcal{I}(W_{\rm ureg}))}{\delta \sigma_{_{\rm ureg}}^2(\sigma_{_{\rm ureg}}^2\mathcal{I}(W_{\rm ureg})+\mathcal{I}(W_{\rm ureg})-1)}= 1 \end{align} as well as the Cramer-Rao lower bound (Proposition \ref{propo:Fisher}(d)) for $\mathcal{I}(W_{\rm ureg})$, we may deduce that \begin{align} \sigma_{{\rm ureg}}^2 \le \frac{(\delta-1)\nu_f^2}{1-\nu_f^2}.
\end{align} This, combined with the lower bound on $\sigma_\star^2$ stated in Corollary \ref{cor:lowerbound_binary}, proves the lower bound in the statement of the corollary and completes the proof. \end{proof} Importantly, as shown by \eqref{eq:unregGap_bin}, when $\delta$ is close to $1$, both bounds in \eqref{eq:unregGap_bin} vanish. This shows a large gap between $\sigma_{{\rm ureg}}$ and $\sigma_{\star}$ and further implies the benefit of regularization in this regime. When $\delta\rightarrow\infty$, i.e., in the highly under-parameterized regime, taking limits and using Proposition \ref{propo:Fisher}(d), we see that \eqref{eq:unregGap_bin} yields: \begin{align}\label{eq:unregGap_bin_infty} \frac{\nu_f^2}{1-\nu_f^2}\cdot\frac{1}{\mathcal{I}(Sf(S))-1}\le \lim_{\delta\rightarrow\infty}\; \frac{\sigma_{\star}^2}{\sigma_{{\rm ureg}}^2} \le 1. \end{align} Thus, in this case both $\sigma_\star$ and $\sigma_{\rm ureg}$ approach zero, with the ratio depending on the properties of $Sf(S)$. For models such as the Logistic model with small signal strength (i.e., small $\|\mathbf{x}_0\|$), where $\mathcal{I}(Sf(S)) \approx 1/(1-\nu_f^2)$, one can derive from \eqref{eq:unregGap_bin_infty} that the ratio approaches $1$, which confirms the intuition that for large values of $\delta$ the impact of regularization is almost negligible. \bibliographystyle{alpha} \section{Useful Facts} \subsection{On Moreau Envelopes} In Proposition \ref{propo:mor}, we summarize some differential properties of Moreau envelopes that are used throughout the paper (cf. \cite{rockafellar2009variational}): \begin{propo}[Properties of Moreau-envelopes]\label{propo:mor} Let $\mathcal{L}$ be a lower semi-continuous and proper function.
Then \noindent{(a)} The value $\env{\mathcal{L}}{x}{\tau}$ is finite and depends continuously on $(x,\tau)$, with $\env{\mathcal{L}}{x}{\tau} \rightarrow \mathcal{L}(x)$ as $\tau\rightarrow 0_+$ and $\env{\mathcal{L}}{x}{\tau} \rightarrow \min_{t\in\mathbb{R}}\mathcal{L}(t)$ as $\tau\rightarrow+\infty$, for all $x\in\mathbb{R}$.\\ \noindent{(b)} The first order derivatives of the Moreau-envelope of a function $\mathcal{L}$ are derived as follows: \begin{align} \envdx{\mathcal{L}}{x}{\tau}&:=\frac{\partial{\env{\mathcal{L}}{x}{\tau}}}{\partial x}= \frac{1}{\tau}{(x-\prox{\mathcal{L}}{x}{\tau})},\label{eq:mor_der1}\\ \envdla{\mathcal{L}}{x}{\tau}&:=\frac{\partial{\env{\mathcal{L}}{x}{\tau}}}{\partial \tau} = -\frac{1}{2\tau^2}{(x-\prox{\mathcal{L}}{x}{\tau})^2}\label{eq:mor_der2}. \end{align} Also if $\mathcal{L}$ is differentiable then \begin{align} \envdx{\mathcal{L}}{x}{\tau}&= \mathcal{L}^\prime(\prox{\mathcal{L}}{x}{\tau})\label{eq:envdxp},\\ \envdla{\mathcal{L}}{x}{\tau}&= -\frac{1}{2}\big(\mathcal{L}^\prime(\prox{\mathcal{L}}{x}{\tau})\big)^2.
\end{align} \noindent{(c)} Additionally, based on the relations above, if $\mathcal{L}$ is twice differentiable then the following is derived for its second-order derivatives: \begin{align} \envddx{\mathcal{L}}{x}{\tau} &= \frac{\mathcal{L}''(\prox{\mathcal{L}}{x}{\tau})}{1+\tau \mathcal{L}''(\prox{\mathcal{L}}{x}{\tau})},\label{eq:secondderivative_mor}\\[5pt] \mathcal{M}^{''}_{\mathcal{L},2}\left(x ; \tau\right)&= \frac{\Big(\mathcal{L}^\prime(\prox{\mathcal{L}}{x}{\tau})\Big)^2\,\mathcal{L}^{\prime\prime}(\prox{\mathcal{L}}{x}{\tau})}{1+\tau\,\mathcal{L}^{\prime\prime}(\prox{\mathcal{L}}{x}{\tau})}.\label{eq:secondderivative_mor2} \end{align} \end{propo} \par The following proposition gives the recipe for inverting the Moreau envelope of a convex function: \begin{propo}[Inverse of the Moreau envelope]\cite[Result.\,23]{advani2016statistical}\label{propo:inverse} For $\tau>0$ and $f$ a convex, lower semi-continuous function such that $g(\cdot) =\env{f}{\cdot}{\tau}$, the Moreau envelope can be inverted so that $f(\cdot) = -\env{-g}{\cdot}{\tau}.$ \end{propo} \begin{lem}[e.g., \cite{taheri2020sharp}, Lemma A.1.]\label{lem:H_cvx} The function $H:\mathbb{R}^3\rightarrow\mathbb{R}$ defined as follows \begin{align}\label{eq:H_def} H(x,p,\tau) = \frac{1}{2\tau}(x-p)^2, \end{align} is jointly convex in its arguments. \end{lem} \subsection{On Fisher Information} In Proposition \ref{propo:Fisher} we collect some useful properties of the Fisher information for location. For the proofs and more details, we refer the interested reader to \cite{Stam}. \begin{propo}[Properties of Fisher Information, \cite{Stam}]\label{propo:Fisher} Let $X$ be a zero-mean random variable with probability density $p_X$ satisfying the following conditions: (i) $p_X(x)>0, -\infty<x<\infty$; (ii) $p_X^\prime(x)$ exists; and (iii) the following integral exists: $$ \mathcal{I}(X) = \int_{-\infty}^{\infty} {\frac{(p_X^\prime(x))^2}{p_X(x)}}\,\mathrm{d}x.
$$ The Fisher information for location $\mathcal{I}(X)$ defined above satisfies the following properties. \begin{enumerate}[(a),leftmargin=\parindent,align=left] \item $\mathcal{I}(X) := \E\left[(\xi_X(X))^2\right] = \E\Big[\left(\frac{p_X^\prime(X)}{p_X(X)}\right)^2\Big].$ \item For any $c\in\mathbb{R}$, $\mathcal{I}(X+c) = \mathcal{I}(X)$. \item For any $c\in\mathbb{R}$, $\mathcal{I}(c\,X) = \mathcal{I}(X)/c^2$. \item (Cramer-Rao bound) $\mathcal{I}(X) \geq \frac{1}{\E[X^2]}$, with equality if and only if $X$ is Gaussian. \item For two independent random variables $X_1, X_2$ satisfying the three conditions above and any $\theta \in [0,1]$, it holds that $\mathcal{I}(X_1+ X_2) \leq \theta^2 \mathcal{I}(X_1) + (1-\theta)^2\mathcal{I}(X_2)$. \item (Stam's inequality) For two independent random variables $X_1, X_2$ satisfying the three conditions above, it holds that \begin{align}\label{eq:Fisher_Stam} \mathcal{I}(X_1+X_2)\le\frac{\mathcal{I}(X_1)\cdot\mathcal{I}(X_2)}{\mathcal{I}(X_1)+\mathcal{I}(X_2)}. \end{align} Moreover, equality holds if and only if $X_1$ and $X_2$ are independent Gaussian random variables. \end{enumerate} \end{propo} \begin{lem} \label{lem:Fisher_lim} Let $G\sim\mathcal{N}(0,1)$ and $Z$ be a random variable satisfying the assumptions of Proposition \ref{propo:Fisher}. For any $a\in\mathbb{R}$, use the shorthand $V_a:=a\, G + Z$. The following are true: \begin{enumerate}[(a),leftmargin=\parindent,align=left] \item $\lim_{a\rightarrow0}a^2\mathcal{I}(V_a) = 0.$ \item $\lim_{a\rightarrow+\infty} a^2\mathcal{I}(V_a) = 1.$ \end{enumerate} \end{lem} \begin{proof}To show part $(a)$, we use Proposition \ref{propo:Fisher}(e) with $\theta = 0$ to derive that \begin{align}\label{eq:alphatozero} \lim_{a\rightarrow 0} a^2\,\mathcal{I}(V_a) \le \lim_{a\rightarrow 0} a^2\,\mathcal{I}(Z) = 0, \end{align} where the second step follows by the fact that $\mathcal{I}(Z)$ is finite for any $Z$ satisfying the assumptions of the lemma.
In order to prove part $(b)$, we apply Proposition \ref{propo:Fisher}(c) to deduce that \begin{align}\label{eq:alphatoinfty} \lim_{a\rightarrow+\infty}a^2\,\mathcal{I}(V_a) =\lim_{a\rightarrow+\infty}a^2\,\mathcal{I}(a\,G + Z) = \lim_{a\rightarrow+\infty}\mathcal{I}(G+\frac{1}{a}Z) = 1, \end{align} where the last equality follows by combining parts (d) and (e): $\big(1+\E[Z^2]/a^2\big)^{-1}\le \mathcal{I}(G+\frac{1}{a}Z)\le \mathcal{I}(G)=1$, and both bounds tend to $1$ as $a\rightarrow+\infty$. \end{proof} \subsection{On Min-max Duality} \begin{thm}[Sion's min-max theorem \cite{sion1958}]\label{lem:minmaxsion} Let $X$ be a compact convex subset of a linear topological space and $Y$ a convex subset of a linear topological space. If $f$ is a real-valued function on $X \times Y$ with $f(x, \cdot)$ upper semicontinuous and quasi-concave on $Y, \forall x \in X,$ and $f(\cdot, y)$ lower semicontinuous and quasi-convex on $X, \forall y \in Y$ then, \[ \min _{x \in X}\, \sup _{y \in Y} \,f(x, y)=\sup _{y \in Y}\, \min _{x \in X} \,f(x, y). \] \end{thm} \section{Additional Experiments}\label{sec:numeric} In this section, we present additional numerical results comparing the bounds of Theorems \ref{thm:lowerbound_reg} and \ref{thm:lowerbound_bin} to the performance of the following: (i) Ridge-regularized Least-Squares (RLS); (ii) optimal unregularized ERM (Section \ref{sec:unreg_opt}); (iii) a simple averaging estimator (see Section \ref{sec:averaging}). Figure \ref{fig:fig_app}(Top Left) plots the asymptotic squared error $\alpha^2$ of these estimators for linear measurements with $Z\sim\texttt{Laplace}(0,2)$. Similarly, Figure \ref{fig:fig_app}(Top Right) and Figure \ref{fig:fig_app}(Bottom) plot the effective error term $\sigma$ for Logistic data with $\|\mathbf{x}_0\|_2=1$, and the limiting value $\rho$ of the correlation measure for Logistic data with $\|\mathbf{x}_0\|_2=10$, respectively. The red squares represent the performance of optimally tuned ERM (as per Lemmas \ref{thm:opt_reg} and \ref{thm:opt_bin}) derived numerically by running GD, as previously described in the context of Figure \ref{fig:fig}.
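In passing, the Moreau-envelope derivative identities \eqref{eq:mor_der1}--\eqref{eq:mor_der2} collected in Proposition \ref{propo:mor} can be sanity-checked numerically. The sketch below compares finite-difference derivatives of a numerically computed envelope against the closed-form expressions; the logistic loss and the test point $(x,\tau)$ are arbitrary illustrative choices, not quantities used elsewhere in the paper.

```python
# Sanity check of the Moreau-envelope derivative identities (mor_der1)-(mor_der2).
# The loss and the test point (x, tau) are arbitrary illustrative choices.
import numpy as np
from scipy.optimize import minimize_scalar

def loss(t):
    # L(t) = log(1 + exp(-t)): convex, smooth, non-linear
    return np.log1p(np.exp(-t))

def prox(x, tau):
    # prox_L(x; tau) = argmin_v  L(v) + (x - v)^2 / (2 tau)
    res = minimize_scalar(lambda v: loss(v) + (x - v) ** 2 / (2 * tau),
                          bounds=(x - 50.0, x + 50.0), method="bounded",
                          options={"xatol": 1e-12})
    return res.x

def env(x, tau):
    # Moreau envelope M_L(x; tau), evaluated at the prox point
    v = prox(x, tau)
    return loss(v) + (x - v) ** 2 / (2 * tau)

x, tau, h = 0.7, 1.3, 1e-5
dx_numeric = (env(x + h, tau) - env(x - h, tau)) / (2 * h)
dx_formula = (x - prox(x, tau)) / tau                     # (mor_der1)
dt_numeric = (env(x, tau + h) - env(x, tau - h)) / (2 * h)
dt_formula = -((x - prox(x, tau)) ** 2) / (2 * tau ** 2)  # (mor_der2)
```

Both finite-difference derivatives agree with the closed forms to within the finite-difference error.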
\begin{figure} \centering \begin{subfigure}{.47\textwidth} \centering \includegraphics[width=0.85\linewidth,height= 5.5cm]{figure2.pdf} \end{subfigure} \begin{subfigure}{0.47\textwidth} \centering \includegraphics[width=0.85\linewidth,height= 5.5cm]{fig_log_2.pdf} \end{subfigure}\\ \vspace{.2in} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.9\linewidth,height= 5.7cm]{fig_log_rho_r10.pdf} \end{subfigure} \caption{Fundamental error bounds derived in this paper compared to RLS, the averaging estimator and optimal unregularized ERM for: (Top Left) a linear model with $Z\sim\texttt{Laplace}(0,2)$, (Top Right) a binary Logistic model with $\|\mathbf{x}_0\|_2=1$, (Bottom) a binary Logistic model with $\|\mathbf{x}_0\|_2=10$ (here shown is the correlation measure \eqref{eq:corr_lim}). The red squares correspond to numerical evaluation of the performance of the optimally tuned RERM as derived in Lemmas \ref{thm:opt_reg} and \ref{thm:opt_bin}.} \label{fig:fig_app} \end{figure} The numerical findings in Figures \ref{fig:fig} and \ref{fig:fig_app} validate the theoretical findings of Sections \ref{sec:LS_linear} and \ref{LS_binary} regarding the sub-optimality of RLS for Laplace noise and the Logistic binary model (with large $\|\mathbf{x}_0\|$), and the optimality of $\la$-tuned RLS for the Logistic model with small $\|\mathbf{x}_0\|$. Furthermore, by comparing the optimal performance of unregularized ERM to the optimal errors of RERM in both Figures \ref{fig:fig} and \ref{fig:fig_app}, we confirm the theoretical guarantees of Section \ref{sec:unreg_opt} regarding the impact of regularization in the regime of small $\delta$ for both linear and binary models.
\begin{figure} \centering \begin{subfigure}{.46\textwidth} \centering \includegraphics[width=0.89\linewidth,height= 4.8cm]{opt_loss_linear.pdf} \label{fig:lopt_lin} \end{subfigure} \begin{subfigure}{0.46\textwidth} \centering \includegraphics[width=0.89\linewidth,height=4.8cm]{lopt_bin.pdf} \label{fig:lopt_bin} \end{subfigure}% \caption{Illustrations of the proposed loss functions achieving optimal performance (as in Lemmas \ref{thm:opt_reg} and \ref{thm:opt_bin}), for three special cases: a linear model with additive Laplace noise, the binary logistic model and the binary signed model. Here, in both plots, we fix $\delta=2$. The curves are appropriately shifted and rescaled to allow direct comparison to the least-squares loss function.} \label{fig:lopt} \end{figure} \subsection{Optimal Tuning in Special Cases}\label{sec:optimaltuning_app} Figure \ref{fig:lopt} depicts the candidate optimal loss functions derived in Lemmas \ref{thm:opt_reg} and \ref{thm:opt_bin} for specific linear and binary models discussed in this paper. To allow for a direct comparison with the least-squares loss function, the optimal losses for the linear models are shifted such that $\mathcal{L}_\star\ge0$ and rescaled such that $\mathcal{L}_\star(1)=1$. Similarly, for the Logistic model with $\|\mathbf{x}_0\|=1$, the optimal loss is shifted and rescaled such that $\mathcal{L}_\star(1)=0$ and $\mathcal{L}_\star(2)=1$. Interestingly, for this model the rescaled $\mathcal{L}_\star$ (which leaves the performance unchanged, provided that $\la_\star$ is rescaled accordingly) is similar to the least-squares loss. This confirms the (approximate) optimality of optimally tuned RLS for this model and further verifies the numerical observations in Figure \ref{fig:fig_app} (Top Right) and the theoretical guarantees of Section \ref{LS_binary} for this model.
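The construction of the optimal losses above relies on inverting a Moreau envelope via Proposition \ref{propo:inverse}. The inversion identity $f = -\env{-g}{\cdot}{\tau}$ with $g=\env{f}{\cdot}{\tau}$ can itself be checked numerically; in the sketch below, the logistic-type test function, the value of $\tau$, and the grid of evaluation points are arbitrary choices made only for illustration.

```python
# Numerical check of the Moreau-envelope inversion identity of Proposition
# propo:inverse: with g = M_f(.; tau), one has f = -M_{-g}(.; tau).
# The test function f, TAU and the evaluation points are arbitrary choices.
import numpy as np
from scipy.optimize import minimize_scalar

TAU = 0.8

def moreau(fun, x, tau):
    # M_fun(x; tau) = min_v  fun(v) + (x - v)^2 / (2 tau)
    res = minimize_scalar(lambda v: fun(v) + (x - v) ** 2 / (2 * tau),
                          bounds=(-60.0, 60.0), method="bounded",
                          options={"xatol": 1e-10})
    return res.fun

f = lambda t: np.log1p(np.exp(-t))                   # convex test loss
g = lambda x: moreau(f, x, TAU)                      # forward envelope
f_rec = lambda x: -moreau(lambda v: -g(v), x, TAU)   # inverted envelope

pts = [-2.0, -0.5, 0.0, 1.0, 3.0]
err = max(abs(f(x) - f_rec(x)) for x in pts)
```

The recovered function agrees with the original loss up to the accuracy of the scalar minimizations.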
\section{Fundamental Limits for Binary Models: Proofs for Section \ref{sec:binary}} \subsection{Discussion on Assumption \ref{ass:label}}\label{sec:sfs} As per Assumption \ref{ass:label}, the link function must satisfy $\E[Sf(S)]\neq 0$. This is a rather mild assumption in our setting. For example, it is straightforward to show that it is satisfied for the Signed, Logistic and Probit models. More generally, for a link function $f:\mathbb{R}\rightarrow\{\pm 1\}$ and $S\sim\mathcal{N}(0,1)$, the probability density of $Sf(S)$ can be computed as follows for any $x\in \mathbb{R}$: \begin{align}\label{eq:sfs} p_{_{Sf(S)}}(x) = \Big(1+ \widehat{f}(x)-\widehat{f}(-x)\Big) \frac{\exp(-x^2/2)}{\sqrt{2\pi}}, \quad\quad \widehat{f}(x):= \mathbb{P}\left(f(x)=1\right). \end{align} From this and the fact that $\exp(-x^2/2)$ is an even function of $x$, we can conclude that Assumption \ref{ass:label} is valid if $\widehat{f}(x)$ is monotonic and non-constant in $x$ (e.g., as in the Signed, Logistic and Probit models). In contrast, Assumption \ref{ass:label} fails if the function $\widehat{f}$ is even. We also remark that using \eqref{eq:sfs}, it can be checked that $S\,f(S)\sim \mathcal{N}(\mu,\zeta^2)$ if and only if $(\mu,\zeta)=(0,1)$, and consequently only if $\widehat{f}$ is an even function. Based on these, we conclude that for all link functions $f$ satisfying Assumption \ref{ass:label}, the resulting distribution of $Sf(S)$ is non-Gaussian. Finally, we remark that $\nu_f=\E[Sf(S)]$ is the first Hermite coefficient of the function $f$ and the requirement $\nu_f\neq 0$ arises in a series of recent works on high-dimensional single-index models, e.g., \cite{Ver,genzel2016high}; see also \cite{mondelli2017fundamental,lu2017phase} for algorithms specializing to scenarios in which $\nu_f=0$.
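As an illustration of \eqref{eq:sfs}, the following Monte Carlo sketch compares the density formula against samples of $Sf(S)$ for a Logistic-type link; the unit signal strength implicit in the choice of $\widehat{f}$ below is an assumed normalization made only for this example.

```python
# Monte Carlo check of the density formula (eq:sfs) for S f(S), here for a
# Logistic-type link with hat_f(x) = 1/(1 + exp(-x)); the unit signal
# strength is an assumed normalization for this illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
S = rng.standard_normal(n)

hat_f = lambda x: 1.0 / (1.0 + np.exp(-x))       # P(f(x) = 1)
labels = np.where(rng.random(n) < hat_f(S), 1.0, -1.0)
W = S * labels                                   # samples of S f(S)

# Density from (eq:sfs): p(x) = (1 + hat_f(x) - hat_f(-x)) * phi(x)
x = np.linspace(-6.0, 6.0, 4001)
phi = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
p = (1.0 + hat_f(x) - hat_f(-x)) * phi

def trap(y, x):
    # simple trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

mass = trap(p, x)            # total mass of the density formula, ~1
nu_quad = trap(x * p, x)     # nu_f computed from the density formula
nu_mc = W.mean()             # nu_f estimated from the samples
```

The formula integrates to one and its first moment matches the Monte Carlo estimate of $\nu_f$, which is bounded away from zero for this link, consistent with Assumption \ref{ass:label}.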
\subsection{Discussion on the Classification Error \eqref{eq:testerror2}}\label{sec:proofoferr} First, we prove that for an estimator $\widehat\mathbf{w}_{\mathcal{L},\la}$, the quantity $\mathbb{P}(\sigma_{\mathcal{L},\la}G+Sf(S)<0)$ gives the high-dimensional limit of the classification error. Then we show that the classification error is indeed an increasing function of $\sigma_{\mathcal{L},\la}$ for most well-known binary models. For the estimator ${\widehat\mathbf{w}}_{{\mathcal{L},\la}}$ obtained from \eqref{eq:opt_bin_main}, and $\mathbf{x}_0$ denoting the true vector with unit norm, the parameters $\mu_{{\mathcal{L},\la}}$ and $\alpha_{{\mathcal{L},\la}}$ denote the high-dimensional limits of the bias and variance terms, \begin{align} \mathbf{x}_0^T {\widehat\mathbf{w}}_{{\mathcal{L},\la}} \stackrel{P}{\longrightarrow}\,\mu_{{\mathcal{L},\la}},\label{eq:mu} \\[5pt] \left\|{\widehat\mathbf{w}}_{{\mathcal{L},\la}}- \mu_{\mathcal{L},\la} \,\mathbf{x}_0\right\|_{2}^{2} \stackrel{P}{\longrightarrow} \,\alpha_{{\mathcal{L},\la}}^2.\label{eq:error_bin} \end{align} We note that by rotational invariance of the Gaussian distribution we may assume without loss of generality that $\mathbf{x}_0 = \left[1,\,0,\,0,\,\cdots,\,0\right]^T \, \in \mathbb{R}^n$. Therefore we deduce from \eqref{eq:mu} and \eqref{eq:error_bin} that \begin{align*} {\widehat\mathbf{w}}_{{\mathcal{L},\la}}(1) \stackrel{P}{\longrightarrow} \mu_{\mathcal{L},\la}, \quad\quad \sum_{i=2}^n \left({\widehat\mathbf{w}}_{{\mathcal{L},\la}}(i)\right)^2 \stackrel{P}{\longrightarrow} \alpha_{\mathcal{L},\la}^2.
\end{align*} Using these, we derive the following for the classification error: \begin{align*} \mathcal{E}_{{\mathcal{L},\la}} &= \mathbb{P} \left(\;f\left(\mathbf{a}^T \mathbf{x}_0\right) \; \mathbf{a}^T\,{\widehat\mathbf{w}}_{{\mathcal{L},\la}} < 0 \;\right)\\[5pt] &= \mathbb{P} \left( \;f\left(\mathbf{a}(1)\right) \cdot \Big({\widehat\mathbf{w}}_{{\mathcal{L},\la}}(1) \mathbf{a}(1) + {\widehat\mathbf{w}}_{{\mathcal{L},\la}} (2) \mathbf{a}(2) + \cdots + {\widehat\mathbf{w}}_{{\mathcal{L},\la}}(n) \mathbf{a}(n)\Big)<0\;\right). \end{align*} Recalling Assumption \ref{ass:gaussian} we have $\mathbf{a} \sim \mathcal{N}\left(\mathbf{0},\,\mathbf{I}\right)$. Thus by denoting $S,G\widesim{\text{\small{iid}}} \mathcal{N} (0,1)$ and assuming without loss of generality that $\mu_{\mathcal{L},\la}>0$, we derive \eqref{eq:testerror2}. Next, we show that for the binary models studied in this paper, the high-dimensional limit of the classification error is increasing in the effective error term $\sigma>0$. In particular, we find that if $p_{_{Sf(S)}}(x)>p_{_{Sf(S)}}(-x)$ for $x\in\mathbb{R}_{>0}$ then it is guaranteed that $a\mapsto\mathbb{P}(aG+Sf(S)<0)$ is an increasing function for $a>0$. To show this, we denote by $\phi$ the standard normal density and let $a_1>a_2$ be two positive constants; then under the given condition on $p_{_{Sf(S)}}$, we deduce that \begin{align*} \mathbb{P}\left(Sf(S)<a_1G\right)&\,-\,\mathbb{P}\left(Sf(S)<a_2G\right) = \\ &\int_0^{+\infty}\int_{a_2g}^{a_1g}p_{_{Sf(S)}}(x)\,\phi(g)\;\text{d}x\,\text{d}g - \int_{-\infty}^{0}\int_{a_1g}^{a_2g}p_{_{Sf(S)}}(x)\,\phi(g)\;\text{d}x\,\text{d}g >0. \end{align*} This shows the desired claim. Importantly, we remark that in view of \eqref{eq:sfs}, this condition on the density of $Sf(S)$ is satisfied for many well-known binary models including Logistic, Probit and Signed.
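For a concrete instance of the monotonicity claim above, the following Monte Carlo sketch uses the Signed model, for which $Sf(S)=|S|$ and the condition $p_{_{Sf(S)}}(x)>p_{_{Sf(S)}}(-x)$ for $x>0$ holds trivially; the grid of values of $a$ is an arbitrary choice.

```python
# Monte Carlo illustration that a -> P(a G + S f(S) < 0) is increasing in
# a > 0, for the Signed model f = sign, where S f(S) = |S|.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
G = rng.standard_normal(n)
W = np.abs(rng.standard_normal(n))   # samples of S f(S) for f = sign

a_grid = [0.25, 0.5, 1.0, 2.0, 4.0]
err = [float(np.mean(a * G + W < 0.0)) for a in a_grid]
# With common samples (G, W) and W >= 0, the indicator {a G + W < 0} is
# monotone in a samplewise, so the estimated error curve increases exactly.
```

For this particular model the error also admits the closed form $\frac{1}{2}-\frac{1}{\pi}\arctan(1/a)$, since $G/|S|$ is standard Cauchy; e.g., $a=1$ gives classification error $1/4$, and the error stays below $1/2$ for every finite $a$.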
\subsection {Proof of Theorem \ref{thm:lowerbound_bin}}\label{sec:proofofbin} We need the following auxiliary result, which we prove first. \begin{lem}[Boundedness of $\tau$ in \eqref{eq:bin_main}]\label{lem:boundedtau} Fix $\delta>0$ and $\la>0$ and let $\mathcal{L}$ be a convex, twice differentiable and non-linear function. Then all solutions $\tau$ of the system of equations in \eqref{eq:bin_main} satisfy $0<\tau<\frac{1}{\la\delta}$. \end{lem} \begin{proof} The proof follows directly from the proof of Lemma \ref{lem:tau_bound_lin} by replacing $Z$ with $\mu\,Sf(S)$. Note that equation \eqref{eq:lambdabin_main} can be obtained by replacing $Z$ with $\mu Sf(S)$ in Equation \eqref{eq:eq_tau}. \end{proof} Next, we proceed to the proof of Theorem \ref{thm:lowerbound_bin}. For convenience, let us define the function $\Phi:\mathbb{R}_{\ge0}\times[0,1/\delta)\rightarrow\mathbb{R}$ as follows: \begin{align} \Phi(s,x)&:= \frac{1-s^2(1-s^2\mathcal{I}(W_s))}{\delta s^2(s^2\mathcal{I}(W_s)+\mathcal{I}(W_s)-1)}-2x + x^2\delta(1+s^{-2}), \label{eq:phi_bin} \end{align} where $W_s:=s\,G+Sf(S)$. Then, $\sigma_\star$ as in \eqref{eq:sigopt_thm} is equivalently expressed as: \begin{align} \sigma_{\star}&:= \min_{\substack{0\le x<1/\delta}} \left\{ s \ge 0: \; \Phi(s,x)= 1\right\}.\label{eq:sig_opt_phi_bin} \end{align} First, we show that $\sigma_{\star}$ is well defined, i.e., the feasible set of the minimization in \eqref{eq:sig_opt_phi_bin} is non-empty for all $\delta>0$ and link functions $f(\cdot)$ satisfying Assumption \ref{ass:label}. Specifically, we will show that for any $\delta>0$ there exists $s\geq 0$ such that $\widetilde{\Phi}(s):=\Phi\big(s,\frac{s}{\delta(1+s)}\big) = 1$. It suffices to prove that the range of the function $\widetilde{\Phi}$ contains $(0,\infty)$. Clearly, the function is continuous on $\mathbb{R}_{\geq 0}$.
Moreover, it can be checked that \begin{align}\label{eq:Phit} \widetilde{\Phi}(s) = \Phi_0(s) + \frac{2}{\delta(1+s)^2},\quad \text{where}\quad \Phi_0(s) := \frac{1-s^2\mathcal{I}(W_s)}{\delta\,s^2(s^2\mathcal{I}(W_s)+\mathcal{I}(W_s)-1)}. \end{align} But, by Lemma \ref{lem:Fisher_lim}, $\lim_{s\rightarrow0}\;s^2\,\mathcal{I}(W_s) = 0$ and $\lim_{s\rightarrow+\infty}s^2\,\mathcal{I}(W_s) = 1$. Using these, we can show that $\lim_{s\rightarrow0}\;\Phi_0(s) = +\infty$ and $\lim_{s\rightarrow+\infty}\;\Phi_0(s) = 0$. Combined with \eqref{eq:Phit}, we find that $\lim_{s\rightarrow0}\;\widetilde{\Phi}(s) = +\infty$ and $\lim_{s\rightarrow+\infty}\;\widetilde{\Phi}(s) = 0$. This concludes the proof of feasibility of the minimization in \eqref{eq:sig_opt_phi_bin}. We are now ready to prove the main claim of the theorem. Fix a convex loss function $\mathcal{L}$ and regularization parameter $\la\geq 0$. Let $(\alpha>0,\mu, \tau>0)$ be the unique solution to \eqref{eq:bin_main} and denote $\sigma=\alpha/\mu$. We will prove that \begin{align}\label{eq:WTS_bin} \sigma \geq \sigma_\star. \end{align} The first step in the proof will be to transform the equations \eqref{eq:bin_main} into a more appropriate form. In order to motivate the transformation, note that the performance of the optimization problem in \eqref{eq:opt_bin_main} is invariant under rescaling. In particular, consider the following variant of the optimization problem in \eqref{eq:opt_bin_main}: \begin{align} {\widehat{\mathbf{v}}}_{{\mathcal{L},\la}} : = \arg\min_{\mathbf{w}}\left[\frac{c_{_1}}{m}\sum_{i=1}^m {\mathcal{L}}\left(c_{_2}\,y_i\mathbf{a}_i^T \mathbf{w}\right)+c_{_1} \la \|c_{_2}\mathbf{w}\|^{2}\right],\quad c_{_1}>0,\,c_{_2}\neq0.
\notag \end{align} It is straightforward to see that, regardless of the values of $c_{_1}$ and $c_{_2}$, $ \corr{{\widehat{\mathbf{w}}}_{{\mathcal{L},\la}}}{\mathbf{x}_0} = \corr{\widehat{\mathbf{v}}_{{\mathcal{L},\la}}}{\mathbf{x}_0}$, where recall that $\widehat{\mathbf{w}}_{{\mathcal{L},\la}}$ solves \eqref{eq:opt_bin_main}. Thus in view of \eqref{eq:corr_lim}, we see that the error $\sigma$ resulting from $\widehat{\mathbf{w}}_{{\mathcal{L},\la}}$ and $\widehat{\mathbf{v}}_{{\mathcal{L},\la}}$ is the same. Motivated by this observation, we consider the following rescaling for the loss function and regularization parameter: \begin{align} \widetilde{\mathcal{L}}(\cdot):=\frac{\tau}{\mu^2}\,\mathcal{L}(\mu\,\cdot),\quad\quad \widetilde{\la}:=\tau\la. \label{eq:ltilde,ltilde} \end{align} From standard properties of Moreau-envelope functions it can be shown that \begin{align}\notag \envdx{\widetilde{\mathcal{L}}}{\cdot/\mu\,}{1} = \frac{\tau}{\mu}\,\envdx{\mathcal{L}}{\cdot\,}{\tau}. \end{align} Using these transformations, we can rewrite the system of equations \eqref{eq:bin_main} in terms of $\sigma$, $\widetilde{\mathcal{L}}$ and $\widetilde{\la}$ as follows: \begin{subequations}\label{eq:bin_main2} \begin{align} \Exp\Big[Sf(S) \cdot\envdx{\widetilde{\mathcal{L}}}{W_\sigma}{1} \Big]&=-\widetilde{\la} , \label{eq:murbin_main2}\\ \Exp\Big[\,\Big(\envdx{\widetilde{\mathcal{L}}}{W_\sigma}{1}\Big)^2\,\Big]&=\sigma^2/\delta , \label{eq:alphabin_main2}\\ \E\Big[ G\cdot \envdx{\widetilde{\mathcal{L}}}{W_\sigma}{1} \Big]&=\sigma(1-\widetilde{\la}\delta)/\delta , \label{eq:lambdabin_main2} \end{align} \end{subequations} where we denote $W_\sigma:=\sigma G+Sf(S).$ Next, we further simplify \eqref{eq:bin_main2} as follows.
Similar to the procedure leading to \eqref{eq:equivalence}, here also we may deduce that $$ \E\Big[ G\cdot \envdx{\widetilde{\mathcal{L}}}{W_\sigma}{1} \Big] = -\sigma\,\E\Big[ \xi_{{W_\sigma}}(W_\sigma)\cdot \envdx{\widetilde{\mathcal{L}}}{W_\sigma}{1} \Big]. $$ Thus \eqref{eq:lambdabin_main2} can be rewritten as \begin{align}\label{eq:third_eq} \E\Big[ \xi_{_{W_\sigma}}(W_\sigma)\cdot \envdx{\widetilde{\mathcal{L}}}{W_\sigma}{1} \Big] = (\widetilde{\la}\delta-1)/\delta. \end{align} Additionally, we add \eqref{eq:murbin_main2} to $\sigma$ times \eqref{eq:lambdabin_main2} to obtain: \begin{align}\label{eq:first_eq} \Exp\Big[W_\sigma \cdot\envdx{\widetilde{\mathcal{L}}}{W_\sigma}{1} \Big]&= \sigma^2/\delta - \sigma^2\widetilde{\la}-\widetilde{\la}. \end{align} Putting together \eqref{eq:alphabin_main2}, \eqref{eq:third_eq} and \eqref{eq:first_eq}, we have shown that $\sigma$ satisfies the following system of equations: \begin{subequations}\label{eq:bin_sigma} \begin{align} \Exp\Big[W_\sigma \cdot\envdx{\widetilde{\mathcal{L}}}{W_\sigma}{1} \Big]&= \frac{\sigma^2}{\delta} - \sigma^2\widetilde{\la}-\widetilde{\la} , \label{eq:mubin_sigma}\\ \Exp\Big[\,\Big(\envdx{\widetilde{\mathcal{L}}}{W_\sigma}{1}\Big)^2\,\Big]&=\frac{\sigma^2}{\delta} , \label{eq:alphabin_sigma}\\ \E\Big[ \xi_{{W_\sigma}}(W_\sigma)\cdot \envdx{\widetilde{\mathcal{L}}}{W_\sigma}{1} \Big]&=\widetilde{\la}-\frac{1}{\delta}. \label{eq:lambdabin_sigma} \end{align} \end{subequations} Next, we will use this fact to derive a lower bound on $\sigma$. To this end, let $\beta_{_1},\beta_{_2} \in \mathbb{R}$ be two real constants. By combining \eqref{eq:mubin_sigma} and \eqref{eq:lambdabin_sigma} we find that \begin{align}\label{eq:befCS} \E\Big[(\beta_{_1}W_\sigma+\beta_{_2}\xi_{{W_\sigma}}(W_\sigma))\cdot\envdx{\widetilde{\mathcal{L}}}{W_\sigma}{1}\Big] = \beta_{_1}\Big(\frac{\sigma^2}{\delta}-\sigma^2\widetilde{\la}-\widetilde{\la}\Big) + \beta_{_2}\Big(\widetilde{\la}-\frac{1}{\delta}\Big).
\end{align} Applying the Cauchy-Schwarz inequality to the LHS of \eqref{eq:befCS} gives: \begin{align} \left(\beta_{_1}\left(\frac{\sigma^2}{\delta}-\sigma^2\widetilde{\la}-\widetilde{\la}\right) + \beta_{_2}(\widetilde{\la}-\frac{1}{\delta})\right)^2&\le \E \bigg[\left(\beta_{_1}W_\sigma+\beta_{_2}\xi_{{W_\sigma}}(W_\sigma)\right)^2\bigg]\cdot \E\bigg[\Big(\envdx{\widetilde{\mathcal{L}}}{W_\sigma}{1}\Big)^2\bigg]\notag\\ &= \E \bigg[\left(\beta_{_1}W_\sigma+\beta_{_2}\xi_{{W_\sigma}}(W_\sigma)\right)^2\bigg]\frac{\sigma^2}{\delta},\label{eq:RHS} \end{align} where we used \eqref{eq:alphabin_sigma} in the last line. To simplify the expectation in the RHS of \eqref{eq:RHS}, we use the facts that $\E[W_\sigma^2] = \sigma^2+1$ and $\E[(\xi_{{W_\sigma}}(W_\sigma))^2] = \mathcal{I}(W_\sigma)$. Also by integration by parts one can derive that $\E[W_\sigma\cdot\xi_{_{W_\sigma}}(W_\sigma)] = -1$. Thus we arrive at the following inequality from \eqref{eq:RHS}: \begin{align}\label{eq:afterCS} \left(\beta_{_1}\left(\sigma^2/\delta-\sigma^2\widetilde{\la}-\widetilde{\la}\right) + \beta_{_2}(\widetilde{\la}-1/\delta)\right)^2\le \beta_{_1}^2\,(\sigma^2+1) + \beta_{_2}^2\,\mathcal{I}(W_\sigma) - 2\beta_{_1}\beta_{_2}. \end{align} Now, we choose the coefficients $\beta_{_1}$ and $\beta_{_2}$ as follows: $\beta_{_1}=1-\widetilde{\la}\delta-(\sigma^2-\sigma^2\widetilde{\la}\delta-\widetilde{\la}\delta)\,\mathcal{I}(W_\sigma)$ and $\beta_{_2}=1$. (We show later, in Lemma \ref{thm:opt_bin}, that this choice leads to an achievable lower bound.) Substituting these values in \eqref{eq:afterCS} and simplifying the resulting expressions yields the following inequality for $\sigma$: \begin{align}\label{eq:ineq_bin} \frac{1-\sigma^2(1-\sigma^2\mathcal{I}(W_\sigma))}{\delta\sigma^2(\sigma^2\mathcal{I}(W_\sigma)+\mathcal{I}(W_\sigma)-1)}-2\widetilde{\la}+ \widetilde{\la}^2\delta(1+\sigma^{-2})\le1.
\end{align} We will now finish the proof of the theorem by using \eqref{eq:ineq_bin} to prove \eqref{eq:WTS_bin}. For the sake of contradiction, assume that $\sigma<\sigma_\star$. From \eqref{eq:ineq_bin} and the notation introduced in \eqref{eq:phi_bin}, we have shown that $ \Phi\,(\sigma,\widetilde{\la}) \le 1$. Recall from \eqref{eq:ltilde,ltilde} that $\widetilde{\la}=\la\tau$. But, from Lemma \ref{lem:boundedtau} it holds that $\widetilde{\la}=\la\tau < \frac{1}{\delta}$. Therefore, $\widetilde{\la}$ lies in the feasible range of the minimization problem in \eqref{eq:sig_opt_phi_bin}. By this, the optimality of $\sigma_{\star}$ in \eqref{eq:sig_opt_phi_bin} and our assumption that $\sigma<\sigma_\star$, it must hold that $\Phi\,({\sigma},\widetilde{\la}) < 1$ (otherwise $\sigma$ would belong to the feasible set, contradicting $\sigma<\sigma_\star$). But then, since $\lim_{s\rightarrow0}\Phi\,(s,\widetilde{\la})=+\infty$ and by continuity of the function $\Phi(\cdot,x)$ for all fixed $x\in[0,1/\delta)$, we have: \begin{align}\label{eq:alphatilde_bin} \exists\,\sigma_1 \;\text{s.t.} \;0<\sigma_1<{\sigma}\;\;\text{and}\;\; \Phi(\sigma_1,\widetilde{\la}) = 1. \end{align} Therefore $\Phi(\sigma_1,\widetilde{\la}) = 1$ for $\sigma_1<\sigma_{\star}$, which contradicts the optimality of $\sigma_{\star}$ in \eqref{eq:sig_opt_phi_bin} and completes the proof. \subsection{Proof of Lemma \ref{thm:opt_bin}} To prove the claim of the lemma we show that the proposed candidate-optimal loss and regularization parameter pair $(\mathcal{L}_{\star}, \la_{\star})$ satisfies the system of equations in \eqref{eq:bin_main} with $(\alpha,\mu,\tau) = (\sigma_{\star},1,1)$.
In line with the proof of Theorem \ref{thm:lowerbound_bin} and the equivalent representation of \eqref{eq:bin_sigma} for the equations in \eqref{eq:bin_main}, we show that $(\mathcal{L}_{\star},\la_{\star})$ satisfies all three equations in \eqref{eq:bin_sigma} with $(\sigma,\mu,\tau) = (\sigma_{\star},1,1).$ We emphasize that since $\mu=\tau=1$, based on \eqref{eq:ltilde,ltilde} the pair $(\mathcal{L}_{\star},\la_{\star})$ remains the same under this change of parameters; thus $(\widetilde{\mathcal{L}}_{\star}(\cdot), \widetilde{\la}_{\star}) = (\mathcal{L}_{\star}, \la_{\star}) $. Note that we need the Moreau envelope of $\mathcal{L}_{\star}$ to be able to assess the equations in \eqref{eq:bin_sigma}. For this purpose we use the inversion property of Moreau-envelope functions in Proposition \ref{propo:inverse} to derive the following from the definition of $\mathcal{L}_{\star}$ in \eqref{eq:optimalloss_thm}: \begin{align}\notag \env{\mathcal{L}_{\star}}{w}{1} = -\frac{\eta(\la_{\star}\delta-1)}{\delta(\eta-\mathcal{I}(W_{\star}))} Q(w) -\frac{\la_{\star}\delta-1}{\delta(\eta-\mathcal{I}(W_{\star}))}\log\left(p_{_{W_{\star}}}(w)\right). \end{align} Thus, \begin{align}\notag \envdx{\mathcal{L}_{\star}}{w}{1} = -\frac{\eta(\la_{\star}\delta-1)}{\delta(\eta-\mathcal{I}(W_{\star}))} w -\frac{\la_{\star}\delta-1}{\delta(\eta-\mathcal{I}(W_{\star}))}\xi_{_{W_{\star}}}(w).
\end{align} Using this and the fact that $\E[W_{\star}\cdot\xi_{_{W_{\star}}}(W_{\star})] = -1$ (derived by integration by parts), the LHS of equation \eqref{eq:mubin_sigma} becomes \begin{align*} \Exp\Big[W_{\star} \cdot\envdx{\mathcal{L}_{\star}}{W_{\star}}{1} \Big] &= -\frac{\eta(\la_{\star}\delta-1)}{\delta(\eta-\mathcal{I}(W_{\star}))} \E\left[W_{\star}^2 \right] - \frac{\la_{\star}\delta-1}{\delta(\eta-\mathcal{I}(W_{\star}))}\E\left[W_{\star}\cdot\xi_{_{W_{\star}}}(W_{\star})\right]\\[5pt] &= -\frac{\eta(\la_{\star}\delta-1)}{\delta(\eta-\mathcal{I}(W_{\star}))}(\sigma_{\star}^2+1)+ \frac{\la_{\star}\delta-1}{\delta(\eta-\mathcal{I}(W_{\star}))} = \frac{\sigma_{\star}^2}{\delta} - \sigma_{\star}^2\la_{\star}-\la_{\star}, \end{align*} where for the last step, we replaced $\eta$ according to the statement of the lemma.\\ Similarly, for the second equation \eqref{eq:alphabin_sigma}, we begin by substituting the expression for $\envdx{\mathcal{L}_{\star}}{W_{\star}}{1}$ to see that \begin{align} \Exp\left[\left(\envdx{\mathcal{L}_{\star}}{W_{\star}}{1} \right)^2 \right]\notag &= \frac{(\la_{\star}\delta-1)^2}{\delta^2(\eta-\mathcal{I}(W_{\star}))^2}\left(\eta^2\,\E\left[W_{\star}^2\right] + \mathcal{I}\left(W_{\star}\right) + 2\eta\,\E\left[W_{\star}\cdot\xi_{{W_{\star}}}(W_{\star})\right]\right)\notag \\[5pt] &= \frac{(\la_{\star}\delta-1)^2}{\delta^2(\eta-\mathcal{I}(W_{\star}))^2}\left(\eta^2\,(\sigma_{\star}^2+1)+ \mathcal{I}\left(W_{\star}\right) - 2\eta\right).\label{eq:third_opt_checking} \end{align} After replacing $\eta$, we can simplify \eqref{eq:third_opt_checking} to reach the following \begin{align*} \Exp\left[\left(\envdx{\mathcal{L}_{\star}}{W_{\star}}{1} \right)^2 \right]&= \frac{1-\sigma_{\star}^2(1-\sigma_{\star}^2\,\mathcal{I}(W_{\star}))}{\delta^2(\sigma_{\star}^2\,\mathcal{I}(W_{\star})+\mathcal{I}(W_{\star})-1)}-\frac{2\la_{\star}\,\sigma_{\star}^2}{\delta} + \la_{\star}^2(1+\sigma_{\star}^{2})\\[5pt] &=
\frac{\Phi(\sigma_{\star},\la_{\star})\cdot\sigma_{\star}^2}{\delta}= \frac{\sigma_{\star}^2}{\delta}, \end{align*} where the last two steps follow from the definition of $\sigma_{\star}$ in \eqref{eq:sigopt_thm} and $\Phi(\cdot,\cdot)$ in \eqref{eq:phi_bin}. For the third equation \eqref{eq:lambdabin_sigma} we deduce in a similar way that \begin{align*} \E\Big[ \xi_{_{W_{\star}}}(W_{\star})\cdot \envdx{\mathcal{L}_{\star}}{W_{\star}}{1} \Big] &= -\frac{\eta(\la_{\star}\delta-1)}{\delta(\eta-\mathcal{I}(W_{\star}))}\E\left[W_{\star}\cdot\xi_{{W_{\star}}}(W_{\star})\right] - \frac{\la_{\star}\delta-1}{\delta(\eta-\mathcal{I}(W_{\star}))} \mathcal{I}(W_{\star}) \\&= \la_{\star} - \frac{1}{\delta}, \end{align*} confirming the RHS of equation \eqref{eq:lambdabin_sigma}. This completes the proof. \subsection{Proof of Lemma \ref{cor:LS_bin}} Let $\ell_2(t) = (1-t)^2$ for $t\in\mathbb{R}$. Using the equations in \eqref{eq:bin_main} and replacing $\env{\ell_2}{x}{\tau} = \frac{(x-1)^2}{2\tau+1}$, we can solve the equations to find closed-form formulas for $(\mu,\alpha,\tau)$ for a fixed $\la\ge0$. For compactness, define $F(\cdot,\cdot):\mathbb{R}_{>0}\times\mathbb{R}_{>0}\rightarrow \mathbb{R}_{>0}$ where $F(\delta,\la) := \la\delta + \sqrt{8\la\delta+(\delta(\la+2)-2)^2}$. We derive the following for $\mu_{{\ell_2,\la}}$ and $\alpha_{{\ell_2,\la}}$ and for all $\delta>0$, \begin{align} \mu_{{\ell_2,\la}}&= \frac{4\delta\,\E[Z^2]}{2+2\delta+F(\delta,\la)},\notag\\[5pt] \alpha_{{\ell_2,\la}}^2 &= \frac{\delta \left(2-2\delta-2\la\delta+F(\delta,\la)\right)^2 (2+2\delta+F(\delta,\la))\left(1-\frac{8\delta\left(\E[Z^2]\right)^2(2+F(\delta,\la))}{(2+2\delta+F(\delta,\la))^2}\right)}{2\left(2-2\delta+F(\delta,\la)\right)^2\left(F(\delta,\la)-\la\delta\right)}\notag. \end{align} Using these, we reach $\sigma_{\,{\ell_2,\la}}^2 = \alpha_{{\ell_2,\la}}^2/\mu_{{\ell_2,\la}}^2$ as stated in \eqref{eq:sigmaLSreg}.
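The closed form $\env{\ell_2}{x}{\tau} = \frac{(x-1)^2}{2\tau+1}$ used above can be verified directly by a short numerical sketch; the test points $(x,\tau)$ below are arbitrary choices.

```python
# Numerical check of M_{ell_2}(x; tau) = (x - 1)^2 / (2 tau + 1) for
# ell_2(t) = (1 - t)^2, at a few arbitrary test points.
from scipy.optimize import minimize_scalar

def env_l2(x, tau):
    # M(x; tau) = min_v  (1 - v)^2 + (x - v)^2 / (2 tau)
    obj = lambda v: (1.0 - v) ** 2 + (x - v) ** 2 / (2.0 * tau)
    return minimize_scalar(obj, bounds=(-50.0, 50.0), method="bounded",
                           options={"xatol": 1e-10}).fun

pts = [(-3.0, 0.5), (0.0, 1.0), (2.5, 0.1), (7.0, 3.0)]
gap = max(abs(env_l2(x, t) - (x - 1.0) ** 2 / (2.0 * t + 1.0))
          for x, t in pts)
```

The numerically computed envelope matches the closed form at all test points.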
By minimizing $\sigma_{\,{\ell_2,\la}}^2$ with respect to $\la\ge0$ we derive $\la_{\rm opt}$ and the resulting $\sigma_{\,{\ell_2,\la_{\rm opt}}}^2$ in the statement of the lemma. \subsection {Proof of Corollary \ref{cor:lowerbound_binary}}\label{sec:proofofcor_bin} The proof is analogous to the proof of Corollary \ref{cor:lowerbound}. Here again we use Stam's inequality in Proposition \ref{propo:Fisher}(f) to provide a bound for $\mathcal{I}(W_\sigma) = \mathcal{I}(\sigma G+Sf(S))$ based on $\mathcal{I}(\sigma G) = \sigma^{-2}$ and $\mathcal{I}(Sf(S))$. First we define \begin{align}\label{eq:sigmahat_cor} \widehat{\sigma} := \min_{x\ge0} \left\{s\ge0 : \frac{1}{\delta} + \frac{1}{\delta s^2(\mathcal{I}(Sf(S))-1)}-2x+\delta x^2(1+ s ^{-2}) \le 1 \right\}. \end{align} Next we use Stam's inequality to deduce that \begin{align}\notag \mathcal{I}(W_\sigma):=\mathcal{I}(\sigma G +Sf(S) )\le \frac{\mathcal{I}(Sf(S))}{1+\sigma^2\mathcal{I}(Sf(S))}. \end{align} We can use this inequality in the constraint condition of $\sigma_{\star}$ in \eqref{eq:sigopt_thm} to deduce that \begin{align} \frac{1}{\delta} + \frac{1}{\delta\sigma_{\star}^2(\mathcal{I}(Sf(S))-1)}-2\la_{\star}+\delta \la_{\star}^2(1+\sigma_{\star}^{-2}) \le 1. \end{align} Thus we find that $(s,x)= (\sigma_{\star},\la_\star)$ is a feasible solution of the constraint in \eqref{eq:sigmahat_cor}, resulting in \begin{align} \sigma_\star \ge \widehat{\sigma}. \end{align} To complete the proof of the corollary, we need to find a closed form for $\widehat{\sigma}$.
Proceeding from \eqref{eq:sigmahat_cor} we derive the following: \begin{align*} \widehat{\sigma}^2 &= \min_{x\ge0} \left\{s^2 : \frac{1}{s^2}\left(\frac{1}{\delta(\mathcal{I}(Sf(S))-1)}+x^2\delta\right) \le 1+2x-\frac{1}{\delta}-x^2\delta \right\} \\[5pt] &= \min_{x\ge0} \left\{s^2 : \frac{1}{s^2} \le \frac{1+2x-1/\delta-x^2\delta}{\frac{1}{\delta(\mathcal{I}(Sf(S))-1)}+x^2\delta} \right\} \\[5pt] &= \left(\max_{x\ge0} \left\{\frac{1+2x- 1/\delta-x^2\delta}{\frac{1}{\delta(\mathcal{I}(Sf(S))-1)}+x^2\delta}\right\}\right)^{-1}. \end{align*} The first line follows by algebraic simplifications in \eqref{eq:sigmahat_cor}. The second line holds since, by the Cram\'er--Rao bound (see Proposition \ref{propo:Fisher} (d)), $\mathcal{I}(Sf(S)) \ge (\mathrm{Var}[Sf(S)])^{-1}$; thus $\mathcal{I}(Sf(S))\ge1$. Noting that the right-hand side of the inequality is independent of $s$ and can take positive values for some $x\ge0$, we conclude the last line. Optimizing over the non-negative variable $x$ in the last line yields the desired result in the statement of the corollary and completes the proof. \section{Fundamental Limits for Linear Models: Proofs for Section \ref{sec:linear}} \subsection{Auxiliary Results} \begin{lem}[Boundedness of $\tau$ in \eqref{eq:eq_main}]\label{lem:tau_bound_lin} Let $\mathcal{L}(\cdot)$ be a non-linear, convex and twice differentiable function, let $\la>0$ and $\delta>0$, and let the pair $(\alpha,\tau)$ be a solution to \eqref{eq:eq_main0} with $\alpha>0$. Then, $ 0 < \tau < \frac{1}{\la\delta}. $ \end{lem} \begin{proof} Using Stein's lemma (a.k.a. Gaussian integration by parts) we find that \begin{align*} \E\Big[ G\cdot \envdx{\mathcal{L}}{\alpha\,G+Z}{\tau} \Big]&=\alpha\,\E\Big[ \envddx{\mathcal{L}}{\alpha\,G+Z}{\tau} \Big]. \end{align*} Therefore the equation in \eqref{eq:eq_main0} involving $G$ is equivalent to \begin{align}\label{eq:steinthird} \tau\delta\,\E\Big[ \envddx{\mathcal{L}}{\alpha\,G+Z}{\tau} \Big] = 1-\la\tau\delta.
\end{align} Next we prove that, under the assumptions of the lemma, $\E\Big[ \envddx{\mathcal{L}}{\alpha\,G+Z}{\tau} \Big]$ is positive. First, using properties of Moreau envelopes in \eqref{eq:secondderivative_mor}, we have \begin{align} \E\Big[ &\envddx{\mathcal{L}}{\alpha\,G+Z}{\tau} \Big] = \E\left[ \frac{\mathcal{L}''(\prox{\mathcal{L}}{\alpha\,G+Z}{\tau})}{1+\tau \mathcal{L}''(\prox{\mathcal{L}}{\alpha G + Z}{\tau})}\right]\ge0.\label{eq:mor_der_thirdeq} \end{align} In particular, equality in \eqref{eq:mor_der_thirdeq} is achieved if and only if \begin{align*} \forall x\in \mathbb{R}: \quad \envddx{\mathcal{L}}{x}{\tau} =0, \end{align*} or equivalently, \begin{align}\label{eq:mor_lin} \exists\,c_{_1},c_{_2} \in \mathbb{R} \;\text{s.t.} \;\; \forall x\in \mathbb{R} : \env{\mathcal{L}}{x}{\tau} = c_{_1}x+c_{_2}. \end{align} Finally, using Proposition \ref{propo:inverse} to ``invert'' the Moreau envelope function, we find that any loss function $\mathcal{L}(\cdot)$ satisfying \eqref{eq:mor_lin} is such that \begin{align*} \forall x\in\mathbb{R}:\quad\mathcal{L}(x) = -\env{-c_{_1}I-c_{_2}}{x}{\tau}= c_{_1}x + \frac{\tau c_{_1}^2}{2}+c_{_2}, \end{align*} where $I(\cdot)$ is the identity function, i.e., $I(t) = t$ for all $t\in \mathbb{R}$. But according to the assumptions of the lemma, $\mathcal{L}$ is a \emph{non-linear} convex function. Thus, it must hold that $\E\left[ \envddx{\mathcal{L}}{\alpha\,G+Z}{\tau} \right]>0$. Using this and the assumptions on $\la$ and $\delta$, the advertised claim follows directly from \eqref{eq:steinthird}. \end{proof} \subsection{Proof of Theorem \ref{thm:lowerbound_reg}}\label{sec:proof_bin_lowerbound} Fix a convex loss function $\mathcal{L}$ and regularization parameter $\la\geq 0$.
Let $(\alpha>0,\tau>0)$ be the unique solution to \begin{subequations}\label{eq:eq_main} \begin{align} \delta \tau^2\cdot \E \Big[\Big(\envdx{\mathcal{L}}{\alpha\,G+Z}{\tau}\Big)^2\,\Big]&=\alpha^2- \la^2\delta^2 \tau^2, \label{eq:eq_alpha}\\ \delta \tau \cdot\E\Big[G\cdot\envdx{\mathcal{L}}{\alpha\,G+Z}{\tau}\Big]&=\alpha\,(1-\la\delta\tau) \label{eq:eq_tau}. \end{align} \end{subequations} For convenience, let us define the function $\Psi: \mathbb{R}_{\ge0} \times [0,1/\delta)\rightarrow \mathbb{R}$: \begin{align}\label{eq:psi_reg1} \quad\Psi(a,x):=\frac{(a^2-x^2\,\delta^2)\,\mathcal{I}(V_a)}{(1-x\,\delta)^2}. \end{align} Then, $\alpha_{\star}>0$ as in \eqref{eq:alphaopt_thm} is equivalently expressed as \begin{align}\label{eq:psi_reg} \alpha_{\star}:= \min_{\substack{0\le x<1/\delta}} \left\{a \geq 0: \; \Psi(a,x)= \frac{1}{\delta} \right\}. \end{align} First, let us show that $\alpha_\star$ is well-defined, i.e., that the feasible set of the minimization in \eqref{eq:psi_reg} is non-empty for all $\delta>0$ and random variables $Z$ {satisfying Assumption \ref{ass:noise}}. Specifically, we will show that there exists $a\geq 0$ such that $\Psi\left(a,\frac{a}{(1+a) \delta}\right)=1/\delta$. It suffices to prove that the range of the function $\widetilde{\Psi}(a):=\Psi\left(a,\frac{a}{(1+a)\delta}\right)$ contains $(0,\infty)$. Clearly, the function $\widetilde{\Psi}$ is continuous on $\mathbb{R}_{\ge0}$. Moreover, it can be checked that $\widetilde{\Psi}(a)=(a^2+2a)\Psi_0(a)$ where $\Psi_0(a):=a^2\mathcal{I}(V_a)$. By Lemma \ref{lem:Fisher_lim}, $\lim_{a\rightarrow 0}\Psi_0(a) = 0$ and $\lim_{a\rightarrow +\infty}\Psi_0(a) = 1$. Hence, we find that $\lim_{a\rightarrow0}\widetilde{\Psi}(a) = 0$ and $\lim_{a\rightarrow+\infty}\widetilde{\Psi}(a) = +\infty$, so the claim follows from the intermediate value theorem. We are now ready to prove the main claim of the theorem, i.e., \begin{align}\label{eq:desired21} \alpha \geq \alpha_\star.
\end{align} Denote by $\phi_\alpha$ the density of the Gaussian random variable $\alpha G$. We start with the following calculation: \begin{align} \E&\bigg[ G\cdot \envdx{\mathcal{L}}{V_\alpha }{\tau} \bigg] = -\alpha\iint\envdx{\mathcal{L}}{ u+z}{\tau} \phi^\prime_{\alpha}(u)p_{Z}(z)\mathrm{d}u\mathrm{d}z \notag\\ &= -\alpha\iint\envdx{\mathcal{L}}{ v}{\tau} \phi^\prime_{\alpha}(u)p_{Z}(v-u)\mathrm{d}u\mathrm{d}v \notag \\ &= -\alpha\int\envdx{\mathcal{L}}{ v}{\tau} p^\prime_{_{V_\alpha}}(v)\mathrm{d}v = -\alpha\,\E\Big[\envdx{\mathcal{L}}{V _\alpha}{\tau}\cdot\xi_{{V_{_{\alpha}}}}(V_\alpha)\Big], \label{eq:equivalence} \end{align} where for a random variable $V$, we denote its score function by $\xi_V(v) := p'_V(v)/p_V(v)$ for $v\in\mathbb{R}$. Using \eqref{eq:equivalence} and $\alpha>0$, \eqref{eq:eq_tau} can be equivalently written as follows: \begin{align}\label{eq:ksi} 1-\la\,\delta\,\tau = -\delta\tau\cdot\E\Big[\envdx{\mathcal{L}}{V_\alpha}{\tau}\cdot\xi_{{V_{_{\alpha}}}}(V_\alpha)\Big]. \end{align} Next, by applying the Cauchy--Schwarz inequality, recalling $\E[(\xi_{{V_{_{\alpha}}}}(V_\alpha))^2]=\mathcal{I}\left(V_\alpha\right)$ and using \eqref{eq:eq_alpha}, we have that \begin{align*} \left(\E\Big[\envdx{\mathcal{L}}{ V_\alpha}{\tau}\cdot\xi_{{V_{_{\alpha}}}}(V_\alpha)\Big]\right)^2&\le \E \Big[\Big(\envdx{\mathcal{L}}{V_\alpha}{\tau}\Big)^2\,\Big]\cdot\mathcal{I}\left(V_\alpha\right) = \frac{(\alpha^2-\la^2\,\delta^2\,\tau^2)\,\mathcal{I}(V_\alpha)}{\delta \tau^2}, \end{align*} where we have also used the fact that $\tau>0$. To continue, we use \eqref{eq:ksi} to rewrite the LHS above and deduce that: \begin{align} \left(\frac{1-\la\,\delta\,\tau}{\delta\tau}\right)^2\le \frac{(\alpha^2-\la^2\,\delta^2\,\tau^2)\,\mathcal{I}(V_\alpha)}{\delta \tau^2}.
\end{align} By simplifying the resulting expressions we have proved that $(\alpha,\tau)$ satisfy the following inequality: \begin{align}\label{eq:inequality} \frac{(\alpha^2-\la^2\,\delta^2\,\tau^2)\,\mathcal{I}(V_\alpha)}{(1-\la\,\delta\,\tau)^2} \ge \frac{1}{\delta}. \end{align} In the remainder, we use \eqref{eq:inequality} to prove \eqref{eq:desired21}. For the sake of contradiction, assume that there exists a valid triplet $(\alpha,\la,\tau)$ such that $\alpha< \alpha_\star$. Recall by inequality \eqref{eq:inequality} that $\alpha$ satisfies: \begin{align}\label{eq:psi_ineq} \Psi\Big({\alpha}\,, \,{\la\,\tau}\Big) \ge \frac{1}{\delta}. \end{align} We show first that \eqref{eq:psi_ineq} holds with strict inequality. To see this, suppose that $\Psi({\alpha}, \la\,\tau) =1/\delta$. From Lemma \ref{lem:tau_bound_lin}, it also holds that $\la\,\tau\in(0,1/\delta)$. Hence, the pair $(\alpha,\la\,\tau)$ is a feasible point in the minimization in \eqref{eq:psi_reg}. Combining this with the optimality of $\alpha_\star$ leads to the conclusion that $\alpha_\star\leq \alpha$, which contradicts our assumption $\alpha< \alpha_\star$. Therefore we consider only the case where \eqref{eq:psi_ineq} holds with strict inequality, i.e., $\Psi({\alpha},\,\la\tau)>1/\delta$. To proceed, note that $\Psi(0,x)\le0$ for all $x\in[0,1/\delta).$ Thus, by continuity of the function $a\mapsto\Psi(a,x)$ for fixed $x\in[0,1/\delta)$: \begin{align}\label{eq:alphatilde} \exists\,\widetilde{\alpha}\;\text{s.t.} \;\;0\le\widetilde{\alpha}<{\alpha},\;\;\text{and}\;\; \Psi\,\Big(\widetilde{\alpha}\,,\,{\la}{\,\tau}\Big) = \frac{1}{\delta}. \end{align} By recalling our assumption that ${\alpha}<\alpha_{\star}$, we deduce that \eqref{eq:alphatilde} in fact holds with $\widetilde{\alpha}<\alpha_{\star}$. However, this contradicts the optimality of $\alpha_{\star}$ defined in \eqref{eq:psi_reg}.
This shows that for all achievable $\alpha$ it must hold that $\alpha\ge\alpha_{\star}$. This proves the claim in \eqref{eq:desired21} and completes the proof of the theorem. \subsection{Proof of Lemma \ref{thm:opt_reg}} To prove the claim of the lemma, it suffices to show that the proposed loss function and regularization parameter satisfy the system of equations in \eqref{eq:eq_main} with $\alpha = \alpha_{\star}$. For this purpose we show that $(\mathcal{L}, \la, \alpha, \tau) = (\mathcal{L}_{\star}, \la_{\star}, \alpha_{\star}, 1)$ satisfies \eqref{eq:eq_main}. First, we recognize that for the candidate optimal loss function in Lemma \ref{thm:opt_reg} we have, for all $v\in \mathbb{R}$, \begin{align} \label{eq:envdx_opt} \envdx{\mathcal{L}_{\star}}{v}{1} = -\frac{\alpha_{\star}^2 - \la_{\star}^2\,\delta^2}{1-\la_{\star}\delta}\cdot \xi_{{V_{\star}}}(v). \end{align} Thus, replacing the proposed parameters in \eqref{eq:eq_alpha}, we have \begin{align*} \delta\,\E \left[\Big(\envdx{\mathcal{L}_{\star}}{V_{\star}}{1}\Big)^2\,\right] &= \delta \left(\frac{\alpha_{\star}^2-\la_{\star}^2\,\delta^2}{1- \la_{\star}\,\delta}\right)^2\mathcal{I}\left(V_{\star}\right) = \alpha_{\star}^2-\la_{\star}^2\delta^2, \end{align*} where the last equality uses the definitions of $\alpha_{\star}$ and $\la_{\star}$ in the statement of the lemma. This proves the claim for \eqref{eq:eq_alpha}. To show that Equation \eqref{eq:eq_tau} is satisfied, we use its equivalent expression in \eqref{eq:ksi} and substitute \eqref{eq:envdx_opt} therein.
Specifically, this shows that \begin{align*} \delta\,\E\bigg[ G\cdot \envdx{\mathcal{L}_{\star}}{V_{\star}}{1} \bigg] &=-\delta\,\alpha_{\star}\,\E\bigg[\envdx{\mathcal{L}_{\star}}{ V_{\star}}{1}\cdot\xi_{V_\star}(V_{\star})\bigg] \\ &= \frac{ \delta\,\alpha_{\star}\,(\alpha_{\star}^2-\la_{\star}^2\,\delta^2)\cdot\mathcal{I}(V_{\star})}{1-\la_{\star}\,\delta} = \alpha_{\star}(1-\la_{\star}\,\delta), \end{align*} from which we conclude that Equation \eqref{eq:eq_tau} is satisfied. This completes the proof of the lemma. \subsection{Proof of Lemma \ref{cor:LS_reg}} By letting $\mathcal{L}(t) = t^2$ we find that $\env{\mathcal{L}}{x}{\tau} = \frac{x^2}{2\tau+1}$ for all $x\in \mathbb{R}$ and $\tau\in\mathbb{R}_{>0}$. Using this in Equations \eqref{eq:eq_main} and after a few algebraic simplifications, we arrive at the following closed-form expression for $\alpha_{{\ell_{_2},\la}}^2$, valid for all $\la\ge0$ and random variables $Z$ with finite second moment, \begin{align}\label{eq:alphaLSreg} \alpha_{{\ell_{_2},\la}}^2 = \frac{1}{2}\left(1-\E[Z^2]-\delta\right) + \frac{\E[Z^2](\la+2\delta+2)+2(\delta-1)^2+\la(\delta+1)}{2\sqrt{(\la+2\delta-2)^2+8\la}}. \end{align} Next, by using direct differentiation to optimize this over $\la\ge0$, we derive $\la_{\rm opt} = 2\E[Z^2]$ and the resulting expression for $\alpha_{{\ell_{_2},\la_{\rm opt}}}^2$ in the statement of the lemma. \subsection{Proof of Corollary \ref{cor:lowerbound}}\label{sec:proofofcor_lin} As mentioned in the main body of the paper, the difficulty in deriving a closed-form expression for $\alpha_{\star}$ in \eqref{eq:alphaopt_thm} is due to the fact that in general $\mathcal{I}(V_a) = \mathcal{I}(a G+Z)$ may not be expressible in closed form with respect to $a$. The core idea behind this corollary is to use Stam's inequality (see Proposition \ref{propo:Fisher}) to bound $\mathcal{I}(V_a)$ in terms of $\mathcal{I}(a G) = a^{-2}$ and $\mathcal{I}(Z)$.
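As a quick numerical sanity check of this use of Stam's inequality (a sketch only; the Gaussian-mixture noise $Z$ and all parameter values below are ours, chosen purely for illustration), note that for $Z$ a symmetric Gaussian mixture both $Z$ and $aG+Z$ have closed-form densities, so their Fisher informations can be computed by quadrature and compared with the bound:

```python
import numpy as np

def mix_pdf(v, m, s2):
    # density of the symmetric Gaussian mixture (1/2) N(m, s2) + (1/2) N(-m, s2)
    c = 1.0 / np.sqrt(2.0 * np.pi * s2)
    return 0.5 * c * (np.exp(-(v - m) ** 2 / (2.0 * s2))
                      + np.exp(-(v + m) ** 2 / (2.0 * s2)))

def fisher_info(m, s2, lo=-12.0, hi=12.0, n=400001):
    # I(p) = \int p'(v)^2 / p(v) dv, with p' by centered finite differences
    v = np.linspace(lo, hi, n)
    p = mix_pdf(v, m, s2)
    dp = np.gradient(p, v)
    return np.sum(dp ** 2 / p) * (v[1] - v[0])

a, m, s2 = 0.7, 1.5, 0.4          # V_a = a G + Z with non-Gaussian mixture noise Z
I_Z = fisher_info(m, s2)          # Fisher information of Z
I_V = fisher_info(m, s2 + a * a)  # a G + Z is again a mixture, with inflated variance
stam_bound = I_Z / (1.0 + a * a * I_Z)
assert I_V <= stam_bound + 1e-6           # Stam's inequality
assert I_Z >= 1.0 / (s2 + m * m) - 1e-6   # Cramer-Rao: I(Z) >= 1 / Var(Z)
```

For Gaussian $Z$ the bound holds with equality, consistent with the tightness discussion at the end of this proof.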
Specifically, applying \eqref{eq:Fisher_Stam} to the random variables $a\,G$ and $Z$ we find that: \begin{align}\label{eq:stam} \mathcal{I}(V_a) = \mathcal{I}(a\, G+Z)\le\frac{\mathcal{I}(Z)}{1+a^2\mathcal{I}(Z)}. \end{align} Substituting the RHS above in place of $\mathcal{I}(V_a)$ in the definition of $\alpha_\star$ in \eqref{eq:alphaopt_thm}, we define $\widehat{\alpha}$ as follows: \begin{align}\label{eq:alphahat} \widehat{\alpha} := \min_{0\le x<1/\delta}\left\{a\,\ge0:\frac{(a^2-x^2\,\delta^2)\,\mathcal{I}(Z)}{(1-x\,\delta)^2(1+a^2\mathcal{I}(Z))}\ge\frac{1}{\delta}\right\}. \end{align} The remainder of the proof has two main steps. First, we show that \begin{align}\label{eq:Cor21_step1} \alpha_\star^2\ge\widehat{\alpha}^2. \end{align} Second, we solve the minimization in \eqref{eq:alphahat} to yield a closed-form expression for $\widehat{\alpha}$. Towards proving \eqref{eq:Cor21_step1}, note from the definition of $\alpha_\star$ and inequality \eqref{eq:stam} that there exists $x_\star\in[0,1/\delta)$ such that \begin{align}\notag \frac{1}{\delta} = \frac{(\alpha_\star^2-x_\star^2\,\delta^2)\,\mathcal{I}(V_{\star})}{(1-x_\star\,\delta)^2} \leq \frac{(\alpha_\star^2-x_\star^2\,\delta^2)\,\mathcal{I}(Z)}{(1-x_\star\,\delta)^2(1+\alpha_\star^2\,\mathcal{I}(Z))}. \end{align} Thus, the pair $(\alpha_\star,x_\star)$ is feasible in \eqref{eq:alphahat}. This and the optimality of $\widehat{\alpha}$ in \eqref{eq:alphahat} lead to \eqref{eq:Cor21_step1}, as desired. The next step is finding a closed-form expression for $\widehat{\alpha}$.
Based on \eqref{eq:alphahat} and a few algebraic simplifications we have \begin{align} \widehat{\alpha}^2 &= \min_{0\le \,x\,<1/\delta} \left\{ a^2 : a^2\,\mathcal{I}(Z)\cdot(\delta-(1-x\,\delta)^2) \ge(1-x\,\delta)^2 +\delta^3 x^2\mathcal{I}(Z)\right\}\notag\\[5pt] &= \min_{\max\{0, \frac{1-\sqrt{\delta}}{\delta}\}\le \,x \,< 1/\delta} \left\{a^2 : a^2 \ge \frac{(1-x\delta)^2+\delta^3\,x^2\,\mathcal{I}(Z)}{\mathcal{I}(Z)\cdot(\delta-(1-x\delta)^2)}\right\}\notag\\[5pt] &= \min_{\max\{0, \frac{1-\sqrt{\delta}}{\delta}\}\le \,x \,< 1/\delta} \, \left\{\frac{(1-x\delta)^2+\delta^3\,x^2\,\mathcal{I}(Z)}{\mathcal{I}(Z)\cdot(\delta-(1-x\delta)^2)}\right\}.\label{eq:alphahat2} \end{align} The second line uses that the constraint can only be satisfied when $\delta-(1-x\,\delta)^2>0$, which restricts the range of $x$; the last equality is true because the fraction in the constraint in the second line is independent of $a$. Next, by minimizing with respect to the variable $x$ in \eqref{eq:alphahat2}, we reach $\widehat{\alpha}^2 = h_\delta(1/\mathcal{I}(Z))$. Finally, we know from Proposition \ref{propo:Fisher}(f) that equality in \eqref{eq:stam} is achieved if and only if the noise is Gaussian, i.e., $Z\sim\mathcal{N}(0,\zeta^2)$ for some $\zeta>0$. Thus, if this is indeed the case, then $\alpha_\star=\widehat{\alpha}$ and the lower bound is achieved, with the Fisher information of $Z$ given by $\mathcal{I}(Z) =\zeta^{-2}$. This completes the proof of the corollary. \subsection{Proof of Equation \eqref{eq:omega_LB}}\label{sec:proof_omega_LB} First, we prove the bound $\omega_{_\delta} \geq \left(\mathcal{I}(Z)\, \E[Z^2]\right)^{-1}.$ Fix $\delta>0$ and consider the function $\widetilde{h}_{\delta}(x):=h_{\delta}(x)/x$ for $x\geq 0$. Direct differentiation and a few algebraic steps suffice to show that $\widetilde{h}_{\delta}(x)$ is decreasing. Using this and the fact that $1/\mathcal{I}(Z) \le \E[Z^2]$ (cf. Proposition \ref{propo:Fisher} (c)), we conclude the desired bound. Next, we prove the lower bound $\omega_\delta\geq 1-\delta$. Fix any $\delta>0$.
First, it is straightforward to compute that $ h_{\delta}(0) = \max\{1-\delta,0\} \geq 1-\delta. $ Also, simple algebra shows that $h_{\delta}(x)\leq 1$ for all $x\geq 0$. From these two facts and the increasing nature of $h_{\delta}(x)$ we conclude that $1-\delta \le h_{\delta}(x) \le 1$, for all $x \geq 0$. The desired lower bound follows immediately by applying these bounds to the definition of $\omega_\delta$. \section*{Appendix} \section{Asymptotics for Binary RERM: Proof of Theorem \ref{propo:boundedness}}\label{sec:asy_bin} In this section, we prove that under the assumptions of Theorem \ref{propo:boundedness}, the system of equations in \eqref{eq:bin_sys} has a unique and bounded solution. \subsection{Asymptotic Error of RERM via an Auxiliary Min-Max Optimization} As mentioned in Section \ref{sec:binary}, the proof of Theorem \ref{propo:boundedness} has essentially two parts. The first part of the proof uses the CGMT \cite{COLT} and the machinery developed in \cite{Master,NIPS,salehi2019impact,taheri2019sharp} to relate the properties of the RERM solution to an Auxiliary Optimization (AO). The detailed steps follow, mutatis mutandis, analogous derivations in recent works \cite{Master,NIPS,salehi2019impact,taheri2019sharp,svm_abla} and are omitted here for brevity. Instead, we summarize the findings of this analysis in the following proposition. \begin{propo}\label{propo:minmax} Consider the optimization problem in \eqref{eq:opt_bin_main}.
If the min-max optimization in \eqref{eq:minmax_bin} has a unique and bounded solution $(\alpha^\star>0,\mu^\star,\upsilon^\star>0,\gamma^\star>0)$, then the values of $\alpha_{\mathcal{L},\la}$ and $\mu_{\mathcal{L},\la}$ corresponding to $\mathcal{L}$ and $\la$ defined in \eqref{eq:mu}-\eqref{eq:error_bin} are given by $\alpha_{\mathcal{L},\la} = \alpha^\star$ and $\mu_{\mathcal{L},\la} = \mu^\star$, where \begin{align}\label{eq:minmax_bin} \begin{split} (\alpha^\star,\mu^\star,\upsilon^\star,\gamma^\star) = \arg\min_{\substack{(\alpha,\mu,\upsilon) \in \\[2pt]\mathbb{R}_{\geq0}\times\mathbb{R}\times \mathbb{R}_{>0}}} \max _{\gamma \in \mathbb{R}_{>0}}\bigg[ \Theta(&\alpha,\mu,\upsilon,\gamma ):= \frac{\gamma\upsilon}{2}-\frac{\alpha \gamma}{\sqrt{\delta}}+\frac{\la \mu^{2}}{2}+\frac{\la \alpha^{2}}{2} + \\ &\mathbb{E}\left[\mathcal{M}_{\mathcal{L}}\left(\alpha G+\mu Sf(S) ; \frac{\upsilon}{\gamma}\right)\right]\bigg], \end{split} \end{align} and $G,S\widesim{\text{\small{iid}}}\mathcal{N}(0,1)$. \end{propo} The system of equations in \eqref{eq:bin_sys} is derived from the first-order optimality conditions of the function $\Theta$ with respect to its arguments $(\alpha,\mu,\upsilon,\gamma)$, i.e., by imposing $\nabla\Theta = \mathbf{0}$. In fact, similar to \cite{taheri2020sharp}, it only takes a few algebraic steps to simplify the four equations in $\nabla\Theta = \mathbf{0}$ to the three equations in \eqref{eq:bin_sys}. For the rest of this section, we focus on the second part of the proof of Theorem \ref{propo:boundedness} regarding existence/uniqueness of solutions to \eqref{eq:bin_sys}, which has not been previously studied in our setting.
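Before turning to the formal argument, the convexity--concavity structure of $\Theta$ that drives it can be illustrated numerically. The sketch below assumes the squared loss $\ell_2(t)=(1-t)^2$, whose Moreau envelope is $(x-1)^2/(2\tau+1)$, and, purely for illustration, $f=\mathrm{sign}$, so that $Sf(S)=|S|$ and the expectation in $\Theta$ has a closed form; the values of $\delta$ and $\la$ are arbitrary:

```python
import numpy as np

SQRT_2_OVER_PI = np.sqrt(2.0 / np.pi)  # E|S| for S ~ N(0,1)

def theta(a, mu, v, g, delta=2.0, lam=0.5):
    # Theta for L(t) = (1-t)^2 and f = sign (illustrative assumption):
    # E[M_L(a G + mu |S|; v/g)] = E[(a G + mu |S| - 1)^2] / (2 v/g + 1)
    num = a * a + mu * mu - 2.0 * mu * SQRT_2_OVER_PI + 1.0
    return (g * v / 2.0 - a * g / np.sqrt(delta)
            + lam * (mu * mu + a * a) / 2.0
            + num / (2.0 * v / g + 1.0))

rng = np.random.default_rng(0)
for _ in range(100):
    a1, a2 = rng.uniform(0, 3, 2)
    m1, m2 = rng.uniform(-3, 3, 2)
    v1, v2 = rng.uniform(0.1, 3, 2)
    g = rng.uniform(0.1, 3)
    # midpoint convexity in (alpha, mu, upsilon) for fixed gamma
    lhs = theta((a1 + a2) / 2, (m1 + m2) / 2, (v1 + v2) / 2, g)
    rhs = 0.5 * (theta(a1, m1, v1, g) + theta(a2, m2, v2, g))
    assert lhs <= rhs + 1e-9
    # midpoint concavity in gamma for fixed (alpha, mu, upsilon)
    g1, g2 = rng.uniform(0.1, 3, 2)
    lhs_g = theta(a1, m1, v1, (g1 + g2) / 2)
    rhs_g = 0.5 * (theta(a1, m1, v1, g1) + theta(a1, m1, v1, g2))
    assert lhs_g >= rhs_g - 1e-9
```

The midpoint checks pass on random points, in line with the strict convexity in $(\alpha,\mu,\upsilon)$ and strict concavity in $\gamma$ established below for general non-linear convex losses.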
\subsection{Properties of $\Theta$: Strict Convexity--Strict Concavity and Boundedness of Saddle Points} We will show in Lemma \ref{lem:unique_1} that for proving uniqueness and boundedness of the solutions to \eqref{eq:bin_sys}, it suffices to prove uniqueness and boundedness of the saddle point $(\alpha^\star,\mu^\star,\upsilon^\star,\gamma^\star)$ of $\Theta$. In fact, a sufficient condition for uniqueness of solutions in \eqref{eq:minmax_bin} is that $\Theta$ is (jointly) strictly convex in $(\alpha,\mu,\upsilon)$ and strictly concave in $\gamma$ (e.g., see \cite[Lemma B.2.]{taheri2020sharp}). Lemma \ref{lem:theta_convexity}, which is key to the proof of Theorem \ref{propo:boundedness}, derives sufficient conditions on $\mathcal{L}$ guaranteeing strict convexity--strict concavity of $\Theta$, as well as conditions on $\mathcal{L}$ ensuring boundedness of $(\alpha^\star,\mu^\star,\upsilon^\star,\gamma^\star).$ \begin{lem}[Properties of $\Theta$]\label{lem:theta_convexity} Let $\mathcal{L}(\cdot)$ be a lower semi-continuous (lsc), proper and convex function and $\la > 0$. Then the following statements hold for the function $\Theta:\mathbb{R}_{\ge0}\times\mathbb{R}\times\mathbb{R}_{>0}\times\mathbb{R}_{>0}\rightarrow \mathbb{R}$ in \eqref{eq:minmax_bin}. \begin{enumerate}[(a),leftmargin=\parindent,align=left] \item If $\mathcal{L}$ is bounded from below, then for all solutions $(\alpha^\star,\mu^\star,\upsilon^\star,\gamma^\star)$ there exists a constant $C>0$ such that $\alpha^\star\in [0,C], \mu^\star \in [-C,C]$ and $\upsilon^\star \in [0,C]$. \item If $\mathcal{L}$ is bounded from below and $\E[\mathcal{L}(G)]<\infty$ for $G\sim\mathcal{N}(0,1)$, then there exists a constant $C>0$ such that $\gamma^\star \in [0,C].$ \item In addition to the assumptions of parts (a) and (b), assume that $\mathcal{L}^\prime(0)\neq0$. Then $\gamma^\star>0$, $\alpha^\star>0$ and $\upsilon^\star>0$.
\item If $\mathcal{L}$ is twice differentiable and non-linear, then $\Theta$ is jointly strictly convex in $(\alpha, \mu,\upsilon)$. \item If $\mathcal{L}$ satisfies the assumptions of part (c), then $\Theta$ is strictly concave in $\gamma$. \end{enumerate} \end{lem} \subsubsection{Proof of Lemma \ref{lem:theta_convexity}} \noindent\textbf{Statement (a).}~~Let $\widetilde{\Theta}(\alpha,\mu,\upsilon) := \sup_{\gamma \in \mathbb{R}_{>0}} {\Theta}(\alpha,\mu,\upsilon,\gamma)$. For all feasible $(\alpha,\mu,\upsilon)$ it holds that \begin{align} \widetilde{\Theta}\left(\alpha,\mu,\upsilon\right)\;\; &\ge\; \Theta\left(\alpha,\mu,\upsilon,1\right) \notag\\&=\; \frac{\upsilon}{2} -\frac{\alpha}{\sqrt{\delta}} + \frac{\la(\alpha^2+\mu^2)}{2} + \E\Big[\env{\mathcal{L}}{\alpha G +\mu Sf(S)}{\upsilon}\Big].\label{eq:boundedness1} \end{align} Recall that $\mathcal{L}$ is bounded from below, i.e., $\mathcal{L}(x) \ge B$ for all $x\in\mathbb{R}$ and some real $B$. By the definition of the Moreau envelope, the same bound holds for $\mathcal{M}_{\mathcal{L}}$, i.e., for all $x\in \mathbb{R}$ and $y\in \mathbb{R}_{>0}$ we have $\env{\mathcal{L}}{x}{y} \ge B$. Using this, we proceed from \eqref{eq:boundedness1} to derive that: \begin{align}\label{eq:boundedness2} \widetilde{\Theta}\left(\alpha,\mu,\upsilon\right)\;\; \ge \;\; B + \frac{\upsilon}{2} -\frac{\alpha}{\sqrt{\delta}} + \frac{\la(\alpha^2+\mu^2)}{2} .
\end{align} Based on \eqref{eq:boundedness2}, which holds for all feasible $(\alpha,\mu,\upsilon)$, and using the fact that $\la>0$, it can be readily shown that \begin{align*} &\lim_{\alpha\rightarrow +\infty}\;\min_{\substack{\left(\mu,\upsilon\right) \in\mathbb{R}\times \mathbb{R}_{>0}}} \widetilde{\Theta}\left(\alpha,\mu,\upsilon\right) = +\infty, \quad \quad \lim_{\upsilon\rightarrow +\infty}\;\min_{\substack{\left(\alpha,\mu\right) \in \mathbb{R}_{\ge0}\times \mathbb{R}}} \widetilde{\Theta}\left(\alpha,\mu,\upsilon\right) = +\infty, \\ &\lim_{\mu\rightarrow \pm\infty}\;\min_{\substack{\left(\alpha,\upsilon\right) \in \mathbb{R}_{\ge0}\times \mathbb{R}_{>0}}} \widetilde{\Theta}\left(\alpha,\mu,\upsilon\right) = +\infty. \end{align*} Thus, the function $\widetilde{\Theta}\left(\alpha,\mu,\upsilon\right)$ is level-bounded in $\mathbb{R}_{\geq0}\times\mathbb{R}\times\mathbb{R}_{>0}$. This implies the boundedness of the solutions $(\alpha^\star,\mu^\star,\upsilon^\star)$ to \eqref{eq:minmax_bin} \cite[Thm.~1.9]{rockafellar2009variational}, as desired. \noindent\textbf{Statement (b).}~~Under the assumptions of the lemma, we know from part $(a)$ that the set of solutions $(\alpha^\star,\mu^\star,\upsilon^\star)$ in \eqref{eq:minmax_bin} is bounded. Thus we can apply the Min-Max Theorem \ref{lem:minmaxsion} and flip the order of the minimum and the maximum to write: \begin{align}\label{eq:boundedness3} \min_{\substack{\left(\alpha,\mu,\upsilon\right) \\ \in \, [0,C]\times[-C,C]\times (0,C]}} \max_{\gamma \,\in\, \mathbb{R}_{\geq0}}\;\; \Theta (\alpha,\mu,\upsilon,\gamma) = \max_{\gamma\,\in\,\mathbb{R}_{\geq0}}\Big[ \widehat{\Theta}(\gamma) := \min_{\substack{\left(\alpha,\mu,\upsilon\right) \\ \in \, [0,C]\times[-C,C]\times (0,C]}} \Theta \left(\alpha,\mu,\upsilon,\gamma\right)\Big]. \end{align} Without loss of generality, we assume that $C$ is large enough that $C > \max \{1,1/\sqrt{\delta}\}$.
Then, by choosing $\alpha=1,\mu=0$ and $\upsilon = 1/\sqrt{\delta}$, we find that for all $\gamma>0$: \begin{align}\label{eq:boundedness4} \widehat{\Theta}(\gamma) \;\le \;\Theta\left(1,0,1/\sqrt{\delta},\gamma\right) = -\frac{\gamma}{2\sqrt{\delta}} + \frac{\la}{2} + \E\left[\env{\mathcal{L}}{G}{\frac{1}{\gamma\,\sqrt{\delta}}}\right]. \end{align} Note that for any $y\in\mathbb{R}$: $ \env{\mathcal{L}}{y}{\frac{1}{\gamma\,\sqrt{\delta}}} = \min_{x\in\mathbb{R}} \frac{\gamma\sqrt{\delta}}{2}(x-y)^2 + \mathcal{L}(x) \le \mathcal{L}(y). $ Thus we derive from \eqref{eq:boundedness4}: \begin{align}\label{eq:boundedness5} \widehat{\Theta}(\gamma) \le -\frac{\gamma}{2\sqrt{\delta}} + \frac{\la}{2} + \E\left[\,\mathcal{L}(G)\,\right]. \end{align} But $\E\left[\mathcal{L}(G)\right]$ is finite by assumption; thus it can be concluded from \eqref{eq:boundedness5} that \begin{align} \lim_{\gamma\rightarrow+\infty} \widehat{\Theta}(\gamma) = -\infty. \end{align} This implies boundedness of the set of maximizers $\gamma^\star$, which completes the proof. \noindent\textbf{Statement (c).}~~First, we show that $\gamma^\star>0$. For the sake of contradiction, assume that $\gamma^\star=0$. Then based on \eqref{eq:minmax_bin} and Proposition \ref{propo:mor}(a), \begin{align} (\alpha^\star,\mu^\star,\upsilon^\star) = \arg\min _{\substack{\left(\alpha,\mu,\upsilon\right) \\ \in \, [0,C]\times[-C,C]\times (0,C]}} \left[\frac{\la \alpha^2}{2}+\frac{\la\mu^2}{2} + \min_{t\in\mathbb{R}} \mathcal{L}(t)\right]\notag, \end{align} implying that $\alpha^\star=\mu^\star=0$ and $\Theta(\alpha^\star,\mu^\star,\upsilon^\star,\gamma^\star) = \min_{t\in\mathbb{R}} \mathcal{L}(t)$.
On the other hand, in this case we find that for any $\widetilde{\gamma}\in (0,C]$, $$\Theta(\alpha^\star,\mu^\star,\upsilon^\star,\widetilde{\gamma}) = \frac{\widetilde{\gamma}\upsilon^\star}{2} + \env{\mathcal{L}}{0}{\frac{\upsilon^\star}{\widetilde{\gamma}}} \, > \min_{t\in\mathbb{R}} \mathcal{L}(t).$$ To deduce the inequality, we used the fact that $\env{\mathcal{L}}{0}{\tau} = \min_{t\in\mathbb{R}} t^2/(2\tau) + \mathcal{L}(t) > \min_{t\in\mathbb{R}} \mathcal{L}(t)$ for all $\tau>0$, provided that $\mathcal{L}(t)$ does not attain its minimum at $t=0$. Thus, since by assumption $\mathcal{L}^\prime(0)\neq0$, we deduce that $\Theta(\alpha^\star,\mu^\star,\upsilon^\star,\widetilde{\gamma})>\Theta(\alpha^\star,\mu^\star,\upsilon^\star,\gamma^\star)$, which is in contradiction to the optimality of $\gamma^\star$. This shows that $\gamma^\star>0$ for any loss function satisfying the assumptions of the lemma. Next, we prove that $\alpha^\star>0$. If $\alpha^\star=0$, then based on the optimality of $\alpha^\star$ it holds that $$ \frac{\partial\Theta}{\partial{\alpha}}\Big|_{{\left(\alpha^\star,\mu^\star,\upsilon^\star,\gamma^\star\right)}}\, \ge 0, $$ thus based on \eqref{eq:minmax_bin}, \begin{align}\label{eq:nabla_al} \mathbb{E}\left[G \cdot\envdx{\mathcal{L}}{\mu^\star\,Sf(S)}{\frac{\upsilon^\star}{\gamma^\star}} \right]-\frac{\gamma^\star}{\sqrt{\delta}} \ge 0. \end{align} Since by assumption $G$ and $S\,f(S)$ are independent and $\E[G]=0$, we deduce from \eqref{eq:nabla_al} that $\gamma^\star\le0$, which is in contradiction to the previously proved fact that $\gamma^\star>0.$ This shows that $\alpha^\star>0$, as desired.
Finally, we note that if $\upsilon^\star=0$, then based on \eqref{eq:minmax_bin} and in light of Proposition \ref{propo:mor}(a), we find that \begin{align} (\alpha^\star,\mu^\star,\gamma^\star) = \arg\min _{\substack{\left(\alpha,\mu\right) \\ \in \, [0,C]\times[-C,C]}} \max_{\gamma \,\in\, (0,C]} \left[-\frac{\alpha\gamma}{\sqrt{\delta}}+\frac{\la \alpha^2}{2}+\frac{\la\mu^2}{2} + \E\Big[\mathcal{L}\left(\alpha G + \mu Sf(S)\right)\Big]\right]\notag, \end{align} which, based on the decreasing nature of the RHS in $\gamma$, implies that either $\gamma^\star=0$ or $\alpha^\star=0$. However, we proved that both $\gamma^\star$ and $\alpha^\star$ are positive. This proves the desired result $\upsilon^\star>0$ and completes the proof of this part. \noindent\textbf{Statement (d).}~~Let $\mathbf{w}_1 := (\alpha_1,\mu_1,\tau_1)$ and $\mathbf{w}_2 := (\alpha_2,\mu_2,\tau_2)$ be two distinct points in the space $\mathbb{R}_{\ge 0} \times \mathbb{R} \times \mathbb{R}_{>0}$. We consider two cases. \\ \noindent\underline{Case \rom{1} : $(\alpha_1, \mu_1)=(\alpha_2,\mu_2)$ } \\ In this case, it suffices to show that for fixed $\alpha>0$ and $\mu$, and under the assumptions of the lemma, the function $\mathbb{E}\left[\env{\mathcal{L}}{\alpha G+\mu Sf(S)}{ \tau}\right]$ is strictly convex in $\tau$. Denote $p(\alpha,\mu,\tau):=\prox{\mathcal{L}}{\alpha G + \mu S f(S)}{\tau}$.
First, we derive the second derivative of the Moreau envelope with respect to $\tau$ by applying \eqref{eq:secondderivative_mor2}, and further use the convexity of $\mathcal{L}$ to derive that \begin{align} \frac{\partial^2}{\partial \tau^2}\mathbb{E}\Big[\mathcal{M}_{\mathcal{L}}&\left(\alpha G+\mu Sf(S) ; \tau\right)\Big] \notag\\ &= \E \left[\frac{\Big(\mathcal{L}^\prime\left(p\left(\alpha,\mu,\tau\right)\right)\Big)^2\,\mathcal{L}^{\prime\prime}\left(p\left(\alpha,\mu,\tau\right)\right)}{1+\tau\,\mathcal{L}^{\prime\prime}\left(p\left(\alpha,\mu,\tau\right)\right)} \right]\ge0.\label{eq:second_der_tau} \end{align} Next we show that the inequality above is strict if $\mathcal{L}(\cdot)$ is a non-linear function. First we note that combining \eqref{eq:mor_der1} and \eqref{eq:envdxp} yields that for all $x\in\mathbb{R}$: \begin{align} \mathcal{L}^\prime(\prox{\mathcal{L}}{x}{\tau})&= \frac{1}{\tau}{(x-\prox{\mathcal{L}}{x}{\tau})},\notag\\ \mathcal{L}^{\prime\prime}(\prox{\mathcal{L}}{x}{\tau})&=\frac{1-\proxp{\mathcal{L}}{x}{\tau}}{\tau\cdot\proxp{\mathcal{L}}{x}{\tau}}.\notag \end{align} Using these relations and denoting $p^\prime(\alpha,\mu,\tau):= \proxp{\mathcal{L}}{\alpha G + \mu S f(S)}{\tau}$, we can rewrite \eqref{eq:second_der_tau} as follows: \begin{align} \frac{\partial^2}{\partial \tau^2}\mathbb{E}\Big[&\mathcal{M}_{\mathcal{L}}\left(\alpha G+\mu Sf(S) ; \tau\right)\Big] \notag\\ &=\frac{1}{\tau^3}\E\left[\frac{\Big(\alpha G + \mu S f(S)-p(\alpha,\mu,\tau)\Big)^2\Big(1-p^\prime(\alpha,\mu,\tau)\Big)}{p^\prime(\alpha,\mu,\tau)\Big(1+\tau\,\mathcal{L}^{\prime\prime}(p(\alpha,\mu,\tau))\Big)}\right]\label{eq:second_der_tau2}. \end{align} It is straightforward to see that if $\alpha>0$, then $\alpha G+\mu Sf(S)$ has positive density on the real line.
Thus from \eqref{eq:second_der_tau2} we find that \begin{align}\label{eq:iff} \frac{\partial^2}{\partial \tau^2}\mathbb{E}\Big[\mathcal{M}_{\mathcal{L}}\left(\alpha G+\mu Sf(S) ; \tau\right)\Big] = 0\;\Longleftrightarrow\;\exists c\in \mathbb{R} \;\;\text{s.t.}\;\forall x \in \mathbb{R}:\quad\prox{\mathcal{L}}{x}{\tau} = x + c. \end{align} Recalling \eqref{eq:mor_der1}, we see that the condition in \eqref{eq:iff} is satisfied if and only if \begin{align}\label{eq:mor_lin1} \exists\, c_{_1},c_{_2} \in \mathbb{R} \;\text{s.t.} \;\; \forall x\in \mathbb{R} : \env{\mathcal{L}}{x}{\tau} = c_{_1}x+c_{_2}. \end{align} Using the inverse properties of the Moreau envelope in Proposition \ref{propo:inverse}, we derive that any loss function $\mathcal{L}(\cdot)$ satisfying \eqref{eq:mor_lin1} takes the following form: \begin{align*} \forall x\in\mathbb{R}:\quad\mathcal{L}(x) = -\env{-c_{_1}I-c_{_2}}{x}{\tau}= c_{_1}x + \frac{\tau c_{_1}^2}{2}+c_{_2}, \end{align*} where $I(\cdot)$ is the identity function, i.e., $I(t) = t$ for all $t\in \mathbb{R}$. Therefore, if $\mathcal{L}$ is a non-linear function, as required by the assumptions of the lemma, then $\mathbb{E}\left[\mathcal{M}_{\mathcal{L}}\left(\alpha G+\mu Sf(S) ; \tau \right)\right]$ has a positive second derivative with respect to $\tau$ and consequently $\Theta$ is strictly convex in $\upsilon$.\\ \noindent\underline{Case \rom{2} : $(\alpha_1,\mu_1)\neq(\alpha_2,\mu_2)$} \\ In this case we use the definition of strict convexity to prove the claim. First, for compactness, we define \begin{align*} p_i :&= \prox{\mathcal{L}}{\alpha_i G+ \mu_i Sf(S)}{\tau_i} = \arg\min_{w} \frac{1}{2\tau_i}\left(\alpha_i G+ \mu_i Sf(S) - w\right)^2 + \mathcal{L}(w),\\ \Omega(\mathbf{w}_i) &= \Omega(\alpha_i,\mu_i,\tau_i) :=\frac{\la \mu_i^{2}}{2}+\frac{\la \alpha_i^{2}}{2} +\mathbb{E}\Big[\env{\mathcal{L}}{\alpha_i G+\mu_i Sf(S)} {\tau_i}\Big] \end{align*} for $i=1,2$.
Based on the way we defined the functions $\Theta$ and $\Omega$, one can see that in order to show strict-convexity of $\Theta$ in $(\alpha,\mu,\upsilon)$ it suffices to prove strict-convexity of $\Omega$ in $(\alpha,\mu,\tau)$. Let $\theta\in(0,1)$, and denote $\tau_\theta:=\theta\tau_1+\overline{\theta}\tau_2$, $\alpha_\theta:=\theta\alpha_1+\overline{\theta}\alpha_2$ and $\mu_\theta:=\theta\mu_1+\overline{\theta}\mu_2$. With this notation, \begin{align} &\Omega(\theta \mathbf{w}_1+\overline{\theta} \mathbf{w}_2) \leq \notag\\ &\frac{\la\mu_{\theta}^2}{2} + \frac{\la\alpha_{\theta}^2}{2} +\E\left[\, \frac{1}{2\tau_\theta}\Big(\alpha_\theta G+ \mu_\theta Sf(S) - (\theta p_1+ \overline{\theta} p_2) \Big)^2 + \mathcal{L}\Big(\theta p_1 + \overline{\theta} p_2\Big) \,\right] \notag \\[4pt] &=\frac{\la\mu_{\theta}^2}{2} + \frac{\la\alpha_{\theta}^2}{2} +\E\Big[\, H\Big( \alpha_\theta G+ \mu_\theta Sf(S), \theta p_1 + \overline{\theta} p_2, \tau_\theta \Big) + \mathcal{L}\Big(\theta p_1 + \overline{\theta} p_2\Big) \,\Big]\notag \\[4pt] &\leq \frac{\la\mu_{\theta}^2}{2} + \frac{\la\alpha_{\theta}^2}{2} +\notag\\ &\E\left[\, \theta H\Big(\alpha_1 G+ \mu_1 Sf(S), p_1, \tau_1\Big) + \overline{\theta} H\Big(\alpha_2 G+ \mu_2 Sf(S), p_2, \tau_2\Big) + \mathcal{L}\Big(\theta p_1+ \overline{\theta} p_2\Big) \,\right].\label{eq:EME_step1} \end{align} The first inequality above follows from the definition of the Moreau envelope. The equality in the second line uses the definition of the function $H:\mathbb{R}^3\rightarrow\mathbb{R}$ in \eqref{eq:H_def}.
Finally, the last inequality follows from the convexity of $H$, as proved in Lemma \ref{lem:H_cvx}.\\ Continuing from \eqref{eq:EME_step1}, we use the convexity of $\mathcal{L}$ to find that \begin{align} &\Omega(\theta \mathbf{w}_1+\overline{\theta} \mathbf{w}_2) \leq \frac{\la\mu_{\theta}^2}{2} + \frac{\la\alpha_{\theta}^2}{2} + \notag\\ & \E\Big[\, \theta H(\alpha_1 G+ \mu_1 Sf(S), p_1,\tau_1) + \overline{\theta} H(\alpha_2 G+ \mu_2 Sf(S),p_2,\tau_2) + \theta\,\mathcal{L}(p_1) + \overline{\theta} \mathcal{L}(p_2) \Big]. \label{eq:EME_make_strict} \end{align} Additionally, since $\la>0$ and $(\alpha_1,\mu_1)\neq(\alpha_2,\mu_2)$, we find that $$ \frac{\la\mu_{\theta}^2}{2} + \frac{\la\alpha_{\theta}^2}{2} < \frac{\la (\theta \mu_1^2 + \overline{\theta} \mu_2^2)}{2} + \frac{\la (\theta \alpha_1^2 + \overline{\theta} \alpha_2^2)}{2}. $$ Thus, proceeding from \eqref{eq:EME_make_strict}, we conclude the strict-convexity of the function $\Omega$: \begin{align*} \Omega(\theta &\mathbf{w}_1+\overline{\theta} \mathbf{w}_2) < \frac{\la (\theta \mu_1^2 + \overline{\theta} \mu_2^2)}{2} + \frac{\la (\theta \alpha_1^2 + \overline{\theta} \alpha_2^2)}{2} + \\ &\E\Big[\, \theta H(\alpha_1 G+ \mu_1 Sf(S), p_1,\tau_1) + \overline{\theta} H(\alpha_2 G+ \mu_2 Sf(S),p_2,\tau_2) + \theta\,\mathcal{L}(p_1) + \overline{\theta} \mathcal{L}(p_2) \Big] \\ &= \theta \Omega(\mathbf{w}_1) + \overline{\theta} \Omega(\mathbf{w}_2). \end{align*} This completes the proof of part (d). \noindent\textbf{Statement (e).}~~Based on the proof of part $(c)$ and under the assumptions of the lemma, we have $\alpha^\star \neq 0$. Thus, the random variable $\alpha G+\mu Sf(S)$ has a positive probability density everywhere in the desired domain of the optimization problem in \eqref{eq:minmax_bin}.
Next, we use the result in \cite[Proposition A.6]{taheri2020sharp}, which states that if the random variable $X$ has a positive density everywhere and $\mathcal{L}$ is continuously differentiable with $\mathcal{L}^\prime (0) \neq 0$, then $$ \E\Big[\env{\mathcal{L}}{X}{1/\gamma}\Big] $$ is strictly concave in $\gamma$. Based on this, $\Theta$ is strictly-concave in $\gamma$. This completes the proof of the lemma. \subsection{From \eqref{eq:minmax_bin} to \eqref{eq:bin_sys}} The following lemma connects the min-max optimization in \eqref{eq:minmax_bin} to the system of equations in \eqref{eq:bin_sys}. \begin{lem}[Uniqueness of solutions to \eqref{eq:bin_sys}]\label{lem:unique_1} Assume that the optimization problem in \eqref{eq:minmax_bin} yields a unique and bounded solution $(\alpha>0,\mu,\upsilon>0,\gamma>0)$. Then the equations \eqref{eq:bin_sys} have a unique and bounded solution $(\alpha>0,\mu,\tau>0)$ where $\tau = \upsilon/\gamma.$ \end{lem} \begin{proof} By direct differentiation with respect to the variables $(\mu,\alpha,\upsilon,\gamma)$, the first-order optimality conditions of the min-max optimization in \eqref{eq:minmax_bin} are as follows: \begin{align}\label{eq:foureq_bin} \begin{split} \mathbb{E}\left[Sf(S)\envdx{\ell}{\alpha G + \mu S f(S)}{\frac{\upsilon}{\gamma}}\right] = -\la\mu,\, \la \alpha +\mathbb{E}\left[G \envdx{\ell}{\alpha G + \mu S f(S)}{\frac{\upsilon}{\gamma}} \right]=\frac{\gamma}{\sqrt{\delta}},\\ \frac{1}{\gamma}\mathbb{E}\left[\envdla{\ell}{\alpha G + \mu S f(S)}{\frac{\upsilon}{\gamma}}\right]=-\frac{\gamma}{2},\, -\frac{\upsilon}{\gamma^2}\mathbb{E}\left[\envdla{\ell}{\alpha G + \mu S f(S)}{\frac{\upsilon}{\gamma}}\right]+\frac{\upsilon}{2} =\frac{\alpha}{\sqrt{\delta}}. \end{split} \end{align} The assumptions of the lemma imply that the saddle point of the optimization problem in \eqref{eq:minmax_bin} is unique and bounded; therefore, \eqref{eq:foureq_bin} yields a unique bounded solution $(\alpha>0,\mu,\upsilon>0,\gamma>0)$.
By denoting $\tau=\upsilon/\gamma$ and using the fact that $\envdla{\mathcal{L}}{x}{\tau} = -\frac{1}{2}(\envdx{\mathcal{L}}{x}{\tau})^2$ (as implied by \eqref{eq:mor_der1}-\eqref{eq:mor_der2}), we arrive at Equations \eqref{eq:bin_sys}, i.e., \begin{subequations}\label{eq:bin_main} \begin{align} \Exp\Big[S\,f(S) \cdot\envdx{\mathcal{L}}{\alpha G + \mu S f(S)}{\tau} \Big]&=-\la\mu , \label{eq:murbin_main}\\ {\tau^2}\,{\delta}\cdot\Exp\Big[\,\left(\envdx{\mathcal{L}}{\alpha G + \mu S f(S)}{\tau}\right)^2\,\Big]&=\alpha^2 , \label{eq:alphabin_main}\\ \tau\,\delta\cdot\E\Big[ G\cdot \envdx{\mathcal{L}}{\alpha G + \mu S f(S)}{\tau} \Big]&=\alpha(1-\la\tau\delta) . \label{eq:lambdabin_main} \end{align} \end{subequations} The uniqueness of $(\alpha>0,\mu,\tau>0)$ as the solution to \eqref{eq:bin_main} follows from the uniqueness of the solution $(\alpha>0,\mu,\upsilon>0,\gamma>0)$ to \eqref{eq:foureq_bin}. In particular, if there were two distinct solutions $(\alpha_1,\mu_1,\tau_1)$ and $(\alpha_2,\mu_2,\tau_2)$ to Equations \eqref{eq:bin_main}, then we would reach a contradiction by noting that $(\alpha_1,\mu_1,\upsilon_1:=\alpha_1/\sqrt{\delta},\gamma_1:=\alpha_1/(\tau_1\sqrt{\delta}))$ and $(\alpha_2,\mu_2,\upsilon_2:=\alpha_2/\sqrt{\delta},\gamma_2:=\alpha_2/(\tau_2\sqrt{\delta}))$ would be two distinct points satisfying Equations \eqref{eq:foureq_bin}. This completes the proof of the lemma. \end{proof} \subsection{Completing the proof of Theorem \ref{propo:boundedness}} We are now ready to complete the proof of Theorem \ref{propo:boundedness}. By Lemma \ref{lem:unique_1}, for the system of equations in \eqref{eq:bin_sys} to have a unique and bounded solution, it suffices that the solution $(\alpha^\star>0,\mu^\star,\upsilon^\star>0,\gamma^\star>0)$ of \eqref{eq:minmax_bin} is unique and bounded.
Since $\Theta$ is convex-concave and the optimality sets are bounded by Lemma \ref{lem:theta_convexity}(a)-(e), a saddle point of $\Theta$ exists \cite[Cor.~37.3.2]{Roc70}. Additionally, under the assumptions of the theorem and in view of Lemma \ref{lem:theta_convexity}(d),(e), $\Theta$ is jointly strictly-convex in $(\alpha,\mu,\upsilon)$ and strictly-concave in $\gamma$, which implies the uniqueness of $(\alpha^\star>0,\mu^\star,\upsilon^\star>0,\gamma^\star>0)$ as a solution to \eqref{eq:minmax_bin}. This completes the proof of the theorem. As mentioned in the main body of the paper, we conjecture that some of the technical conditions of Theorem \ref{propo:boundedness}, albeit mild in their current form, can be relaxed even further. Refining these conditions is an interesting topic for future work, but is beyond the scope of this paper. We mention in passing that the conclusions of Theorem \ref{propo:boundedness} also hold if we replace the twice-differentiability condition with the assumption that the loss is once differentiable and strictly convex.
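The strict convexity in $\tau$ established in part (d) can be sanity-checked numerically for a concrete non-linear loss. The sketch below is purely illustrative and rests on assumptions not made in the paper: it uses the squared loss $\mathcal{L}(w)=w^2/2$, whose Moreau envelope has the closed form $\mathcal{M}_{\mathcal{L}}(x;\tau)=x^2/(2(1+\tau))$, together with the hypothetical choices $G\sim\mathcal{N}(0,1)$, $S$ uniform on $\{\pm 1\}$, $f(S)=1$, and $\alpha=\mu=1$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) choices: G ~ N(0,1), S uniform on {-1,+1}, f(S) = 1,
# alpha = mu = 1, and the squared loss L(w) = w^2/2, whose Moreau envelope
# is available in closed form: M_L(x; tau) = x^2 / (2 * (1 + tau)).
G = rng.standard_normal(100_000)
S = rng.choice([-1.0, 1.0], size=100_000)
X = 1.0 * G + 1.0 * S * 1.0  # alpha*G + mu*S*f(S)

def expected_envelope(tau):
    return np.mean(X ** 2 / (2.0 * (1.0 + tau)))

# Second-order central difference in tau; strict positivity is what
# part (d) of the lemma asserts for non-linear convex losses.
tau, h = 1.0, 1e-3
second_diff = (expected_envelope(tau + h)
               - 2.0 * expected_envelope(tau)
               + expected_envelope(tau - h)) / h ** 2
print(second_diff)  # approx. E[X^2]/(1+tau)^3 > 0
```

The finite-difference value agrees with the analytic second derivative $\E[X^2]/(1+\tau)^3>0$ of this particular envelope, in line with the strict convexity claim.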
\section{Introduction} Let $G$ be a finite abelian group written additively. We denote by $\exp(G)$ the {\it exponent} of $G$, that is, the least common multiple of the orders of its elements. Let $r$ be a multiple of $\exp(G)$. The generalized Erd\H{o}s--Ginzburg--Ziv constant $\mathsf{s}_r(G)$ is the smallest integer $s$ such that every sequence of length $s$ over $G$ has a zero-sum subsequence of length $r$. If $r = \exp(G)$, then $\mathsf{s}(G)=\mathsf{s}_{\exp(G)}(G)$ is the classical Erd\H{o}s--Ginzburg--Ziv constant. In the case when $k$ is a power of a prime, Gao \cite{Gao:2003} proved $\mathsf{s}_{km}(\mathbb{Z}_k^d) = km + (k-1)d$ for $m \geq k^{d-1}$ and conjectured that the same equality holds when $km > (k-1)d$. In this paper, we consider the case $G = \mathbb{Z}_2^d$. We show that the problem of determining $\mathsf{s}_{2m}(\mathbb{Z}_2^d)$ is essentially equivalent to finding the lowest redundancy of a linear binary code of given length which does not contain words of Hamming weight $2m$. When $m=2$, this problem is also equivalent to finding the maximal length of a linear binary code of redundancy $d$ and distance $5$ or higher. We prove that $\mathsf{s}_{2m}(\mathbb{Z}_2^d) = 2m+d$ for $d < 2m$, validating Gao's conjecture for $k=2$. We also prove $\mathsf{s}_{2m}(\mathbb{Z}_2^{2m}) = 4m+1$, $\mathsf{s}_{2m}(\mathbb{Z}_2^{2m+1}) = 4m+2$ for even $m$, and $\mathsf{s}_{2m}(\mathbb{Z}_2^{2m+1}) = 4m+5$ for odd $m$. Our results provide counterexamples to Conjectures~4.4 and~4.6 from \cite{Gao:2014}. This paper is organized as follows. We discuss maximal length linear binary codes in \cref{sec:codes}, linear codes without a forbidden weight in \cref{sec:forbidden}, and generalized Erd\H{o}s--Ginzburg--Ziv constants in \cref{sec:EGZG}. We present our results for $\mathsf{s}_{2m}(\mathbb{Z}_2^d)$ in \cref{sec:summary}. \Cref{sec:proofs} contains the proofs.
\section{Linear binary codes of maximal length}\label{sec:codes} In this section, we will provide basic definitions and some results from coding theory (for details, see~\cite{Tomlinson:2017}). Let $\mathbb{F}_2$ be the binary field and $\mathbb{F}_2^n$ be the $n$-dimensional vector space over $\mathbb{F}_2$. The Hamming weight of a vector $x\in\mathbb{F}_2^n$ is the number of its entries equal to $1$. The dot product of vectors $x=(x_1,x_2,\ldots,x_n)$ and $y=(y_1,y_2,\ldots,y_n)$ is defined as $x \cdot y = x_1 y_1 + x_2 y_2 + \ldots + x_n y_n$. A {\it linear binary code} of length $n$ is a subspace in $\mathbb{F}_2^n$. Its elements are called {\it words}. The {\it distance} of a linear code is the smallest Hamming weight of a nonzero word in it. A trivial code of dimension $0$ has distance $\infty$. A linear binary code $C$ is called an $(n,k,d)$ code when it has length $n$, dimension $k$ and distance $d$. The {\it dual code} $C^\perp$ is the subspace of vectors in $\mathbb{F}_2^n$ orthogonal to $C$. The {\it redundancy} of $C$ is the dimension of its dual code, $r=n-k$, which may be interpreted as the number of parity-check bits. If $y^{(1)},y^{(2)},\ldots,y^{(r)}$ form a basis of $C^\perp$, then $C$ consists of the vectors $x$ such that $x \cdot y^{(i)} = 0$ for every $i=1,2,\ldots,r$. If $y^{(i)} = (y_{i1},y_{i2},\ldots,y_{in})$, the $(r \times n)$-matrix $[y_{ij}]$ is called a {\it parity-check matrix} of $C$. In fact, any binary $r \times n$ matrix of rank $r$ is a parity-check matrix of some linear code of length $n$ and redundancy $r$. An $(n,k,2t+1)$ code is capable of correcting up to $t$ errors in a word of length $n$ that carries $k$ bits of information. It is natural to seek codes of maximal possible length with prescribed error-correction capabilities. We denote by $N(r,d)$ the largest length of a linear code with redundancy $r$ and distance $d$ or higher, that is, the largest $n$ such that an $(n,n-r,\geq d)$ code exists.
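As a small sanity check of these definitions (illustrative only, not part of the paper's arguments), the snippet below brute-forces the code whose parity-check matrix has all $2^3-1=7$ nonzero binary vectors of length $3$ as columns, i.e., the $(7,4,3)$ Hamming code, confirming $N(3,3) \geq 2^3-1 = 7$.

```python
from itertools import product

# Parity-check matrix of the (7,4,3) Hamming code: the columns are all
# 2^3 - 1 = 7 nonzero binary vectors of length 3 (redundancy r = 3).
H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(3)]

# A word x is in the code iff H x = 0 over F_2; enumerate all of F_2^7.
codewords = [x for x in product((0, 1), repeat=7)
             if all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0
                    for row in H)]

dimension = len(codewords).bit_length() - 1          # |C| = 2^k
distance = min(sum(x) for x in codewords if any(x))  # smallest nonzero weight
print(dimension, distance)  # -> 4 3
```

The same exhaustive enumeration works for any small parity-check matrix; the distance equals the smallest number of columns of $H$ that sum to the zero vector.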
It follows from the well-known Hamming bound (see \cite{Tomlinson:2017}) that \begin{equation}\label{eq:Hamming} \sum_{i=0}^t \binom{N(r,2t+1)}{i} \; \leq \; 2^r \; . \end{equation} The primitive binary BCH code (see \cite{Bose:1960,Hocquenghem:1959}) is a $(2^m-1,2^m-1-mt,2t+1)$ code. It gives the lower bound \begin{equation}\label{eq:BCH} N(mt,2t+1) \; \geq \; 2^m-1 \; . \end{equation} It is easy to see that $N(m,3) = 2^m - 1$. When $t \geq 2$, the bound \eqref{eq:BCH} is not sharp: some codes of slightly larger length are known. Goppa~\cite{Goppa:1971} constructed $(2^m,2^m-mt,2t+1)$ codes. Chen~\cite{Chen:1991} found $(2^m+1,\:2^m+1-2m,\:5)$ codes for even $m$. Sloane, Reddy, and Chen~\cite{Chen:1991,Sloane:1972} obtained $(2^m+2^{\lceil m/2 \rceil}-1,$ $2^m+2^{\lceil m/2 \rceil}-1-(2m+1),\:5)$ codes. Hence, \[ N(4s, 5) \; \geq \; 2^{2s} + 1 \; , \;\;\;\;\;\; N(4s+2,5) \; \geq \; 2^{2s+1} \; , \] \[ N(4s+1,5) \; \geq \; 2^{2s} + 2^{s} - 1 \; , \;\;\;\;\;\; N(4s+3,5) \; \geq \; 2^{2s+1} + 2^{s+1} - 1 \; . \] The values of $N(r,d)$ for small $r$ and $d$ can be derived from tables in \cite{Grassl:codetables}. We list these values for $4 \leq r \leq 14$, $d=5$: \vspace{3mm} \noindent \begin{tabular}{cccccccccccc} $r$ & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 \\ $N(r,5)$ & 5 & 6 & 8 & 11 & 17 & 23 & 33 & 47--57 & 65--88 & 81--124 & 128--179 \end{tabular} \vspace{3mm} It follows from \eqref{eq:Hamming} that $N(r,5) \leq 2^{(r+1)/2}$. When $r$ is large, the best known lower and upper bounds for $N(r,5)$ differ by a factor of $\sqrt{2}$ if $r$ is even, and by a factor of $2$ if $r$ is odd. For $a>1$, not a single $(2^m+a,\: 2^m+a-2m,\: 5)$ code is known. For future use, we need \begin{theorem}[MacWilliams identities \cite{MacWilliams:1963}] Let $C$ be a $k$-dimensional linear binary code of length $n$. Let $A_j$ denote the number of words of Hamming weight $j$ in $C$, and $B_j$ denote the number of words of Hamming weight $j$ in the dual code $C^\perp$.
Then for every $\lambda = 0,1,\ldots,n$, \begin{equation}\label{eq:MWI} 2^{n-k} \sum_{j=0}^\lambda \binom{n-j}{\lambda-j} A_j \; = \; 2^\lambda \sum_{j=0}^{n-\lambda} \binom{n-j}{\lambda} B_j \; . \end{equation} \end{theorem} \section{Codes without a forbidden weight}\label{sec:forbidden} Let $R_{2m}(n)$ be the smallest redundancy of a linear code of length $n$ which has no words of Hamming weight $2m$. The problem of determining $R_{2m}(n)$ was studied in \cite{Bassalygo:2000,Enomoto:1987} in the notation $l(n,\overline{2m}) = n - R_{2m}(n)$. It follows from Theorem~1.1 of~\cite{Enomoto:1987} that \begin{equation}\label{eq:Enomoto1} R_{2m}(n) \; = \; n - 2m + 1 \;\;\;\;\; \mbox{for} \;\; 2m-1 \leq n \leq 4m-1 \; , \end{equation} \begin{equation}\label{eq:Enomoto2} R_{2m}(4m) \; = \; 2m \; . \end{equation} We will solve some new cases with $n > 4m$ in Corollary~\ref{th:Enomoto_my_1}. It follows from Theorem~6 of~\cite{Bassalygo:2000} that \begin{equation}\label{eq:Bassalygo} R_{2m}(n) / m \; = \; \log_2 n \; + \; O(1) \; , \end{equation} when $m$ is fixed and $n\rightarrow\infty$. The proof of \eqref{eq:Enomoto1} and \eqref{eq:Enomoto2} in~\cite{Enomoto:1987} uses the notion of the binormal form of a binary matrix. Following~\cite{Enomoto:1987}, we say that a $k \times n$ binary matrix $M=[a_{ij}]$ is in {\it binormal form} if $n \geq 2k$, $\;a_{i,2j-1} = a_{i,2j}$ for $i \neq j$, and $a_{i,2i-1} \neq a_{i,2i}\;$ ($i,j = 1,2,\ldots,k$). \begin{lemma}[Proposition~2.1 \cite{Enomoto:1987}]\label{th:Enomoto1} If a $k \times n$ binary matrix $M$ is in binormal form, then for any $k$-dimensional binary vector $x$, there is a unique choice of $k$ indices $j_i \in \{ 2i-1,2i \}$ $\; (i=1,2,\ldots,k)$ such that the sum of columns $j_1,j_2,\ldots,j_k$ of $M$ is equal to $x$. In particular, one can pick up $k$ columns in $M$ whose sum is the $k$-dimensional zero vector.
\end{lemma} \begin{lemma}[Lemma~2.2 \cite{Enomoto:1987}]\label{th:Enomoto2} Let $n$ be odd, $2k < n$, and $M$ be a $k \times n$ binary matrix of rank $k$. If the sum of entries in each row is $0$, then M can be brought to binormal form by such operations as permutations of the columns and additions of one row to another. \end{lemma} Lemmas \ref{th:Enomoto1} and \ref{th:Enomoto2} yield \begin{corollary}\label{th:Enomoto} Let $n$ be odd and $2k<n$. Let $M$ be a $k \times n$ binary matrix of rank $k$ where the sum of entries in each row is $0$. One can pick up $k$ columns in $M$ whose sum is the $k$-dimensional zero vector. \end{corollary} \section{Generalized Erd\H{o}s--Ginzburg--Ziv constant} \label{sec:EGZG} Let $G$ be a finite abelian group written additively. The classical Erd\H{o}s--Ginzburg--Ziv constant $\mathsf{s}(G)$ is the smallest integer $s$ such that every sequence of length $s$ over $G$ has a zero-sum subsequence of length $\exp(G)$ (see \cite{Edel:2007,Ellenberg:2017,Gao:2006,Gao:2003,Harborth:1973,Kemnitz:1983,Reiher:2007}). In 1961, Erd\H{o}s, Ginzburg, and Ziv \cite{Erdos:1961} proved $\mathsf{s}(\mathbb{Z}_k)=2k-1$. Kemnitz' conjecture, $\mathsf{s}(\mathbb{Z}_k^2)=4k-3$ (see~\cite{Kemnitz:1983}), was open for more than twenty years and finally was proved by Reiher~\cite{Reiher:2007} in 2007. The following generalization of the classical Erd\H{o}s--Ginzburg--Ziv constant was introduced by Gao~\cite{Gao:2003}. If $r$ is a multiple of $\exp(G)$, then $\mathsf{s}_r(G)$ denotes the smallest integer $s$ such that every sequence of length $s$ over $G$ has a zero-sum subsequence of length $r$. (Notice that if $r$ is not a multiple of $\exp(G)$, then there is an element $x \in G$ whose order is not a divisor of $r$, and the infinite sequence $x,x,x,\ldots$ contains no zero-sum subsequence of length $r$.) Obviously, $\mathsf{s}_{\exp(G)}(G)=\mathsf{s}(G)$. 
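For very small parameters, $\mathsf{s}_r(\mathbb{Z}_2^d)$ can be computed exhaustively straight from the definition. The sketch below (an illustrative brute force, not a method used in this paper) encodes the vectors of $\mathbb{Z}_2^d$ as integers with XOR as group addition, and recovers $\mathsf{s}_2(\mathbb{Z}_2)=3$, $\mathsf{s}_4(\mathbb{Z}_2)=5$ and $\mathsf{s}_4(\mathbb{Z}_2^2)=6$.

```python
from itertools import combinations, combinations_with_replacement
from functools import reduce

def has_zero_sum(seq, r):
    # Is there a length-r subsequence whose sum (XOR in Z_2^d) is zero?
    return any(reduce(lambda a, b: a ^ b, sub, 0) == 0
               for sub in combinations(seq, r))

def egz(r, d):
    # Smallest s such that EVERY length-s sequence over Z_2^d (elements
    # encoded as the integers 0 .. 2^d - 1) has a zero-sum subsequence
    # of length r; it suffices to check sequences up to reordering.
    s = r
    while not all(has_zero_sum(seq, r)
                  for seq in combinations_with_replacement(range(2 ** d), s)):
        s += 1
    return s

print(egz(2, 1), egz(4, 1), egz(4, 2))  # -> 3 5 6
```

These values match $\mathsf{s}(\mathbb{Z}_k)=2k-1$ for $k=2$ and the small cases of $\mathsf{s}_{2m}(\mathbb{Z}_2^d)=2m+d$ for $d<2m$; the search space grows far too quickly for this approach beyond tiny $r$ and $d$.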
Constants $\mathsf{s}_r(G)$ were studied in \cite{Bitz:2017,Gao:2014,Gao:2006b,Gao:2003,Han:2018,Han:2019,He:2016,Kubertin:2005}. A sequence that consists of $(km-1)$ copies of the zero vector and $(k-1)$ copies of each of the basis vectors demonstrates that \begin{equation}\label{eq:s_lower} \mathsf{s}_{km}(\mathbb{Z}_k^d) \; \geq \; km + (k-1)d \; . \end{equation} If $km \leq (k-1)d$, we can add $(1,1,\dots,1)$ to this sequence. Hence, \begin{equation}\label{eq:s_lower2} \mathsf{s}_{km}(\mathbb{Z}_k^d) \; \geq \; km + (k-1)d + 1 \;\;\;\;\;\mbox{when}\;\;\; km \leq (k-1)d \; . \end{equation} It is easy to see that \begin{equation}\label{eq:recursive} \mathsf{s}_{km}(\mathbb{Z}_k^d) + (k-1) \; \leq \; \mathsf{s}_{km}(\mathbb{Z}_k^{d+1}) \; . \end{equation} Indeed, consider a sequence $S$ over $\mathbb{Z}_k^d$ that does not have zero-sum subsequences of size $km$. Attach $0$ to each vector in $S$ as the $(d+1)$th entry and add to the sequence $(k-1)$ copies of a vector whose $(d+1)$th entry is $1$. The resulting sequence over $\mathbb{Z}_k^{d+1}$ will not contain a zero-sum subsequence of length $km$, either. In the case when $k$ is a power of a prime, Gao~\cite{Gao:2003} proved the equality in \eqref{eq:s_lower} for $m \geq k^{d-1}$ and conjectured \begin{equation}\label{eq:Gao} \mathsf{s}_{km}(\mathbb{Z}_k^d) \; = \; km + (k-1)d \;\;\;\;\;\mbox{for}\;\;\; km > (k-1)d \; . \end{equation} The connection between generalized Erd\H{o}s--Ginzburg--Ziv constants of $\mathbb{Z}_2^d$ and linear binary codes is evident from the following observation. Let $S$ be a sequence of length $n$ over $\mathbb{Z}_2^d$. Write its $n$ vectors column-wise to get a $d \times n$ binary matrix $M$. Obviously, $S$ has a zero-sum subsequence of length $r$ if and only if $M$ has $r$ columns that sum up to a zero vector. Let $C$ be the subspace in $\mathbb{Z}_2^n$ generated by the rows of $M$. 
If $M$ has $r$ columns that sum up to a zero vector, then the same will be true for any basis of $C$ written row-wise. The $n$-dimensional vector whose non-zero entries are positioned in these $r$ columns will be orthogonal to every word of $C$. It means that the dual code $C^\perp$ has a word of weight $r$. The same argument works in the opposite direction, too: if a linear binary code has a word of weight $r$, then any of its parity-check matrices has $r$ columns that sum up to a zero vector. When $k>2$ is a power of a prime, a similar connection exists between the generalized Erd\H{o}s--Ginzburg--Ziv constants of $\mathbb{Z}_k^d$ and linear $k$-ary codes (which are subspaces of vector spaces over the field $\mathbb{F}_k$), but unfortunately, it works only one way. If a sequence over $\mathbb{F}_k^d$ has a zero-sum subsequence of length $r$, then, being written column-wise, it serves as a parity-check matrix of a $k$-ary code which has a word whose $r$ entries are equal to $1$ and the rest are equal to $0$. However, the fact that a $k$-ary code has a word with $r$ nonzero entries does not guarantee that its parity-check matrix has $r$ columns that sum up to a zero vector. \section{Summary of results}\label{sec:summary} In this section, we consider the case $G=\mathbb{Z}_2^d$. We will show (see Theorem~\ref{th:s_m_small}) that Gao's conjecture \eqref{eq:Gao} holds for $k=2$. We will also show that the constants $N(d,5)$ from \Cref{sec:codes} and $\mathsf{s}_4(\mathbb{Z}_2^d)$ are equivalent (namely, $\mathsf{s}_4(\mathbb{Z}_2^d) = N(d,5)+4$) while the constants $R_{2m}(n)$ from \cref{sec:forbidden} and $\mathsf{s}_{2m}(\mathbb{Z}_2^d)$ are closely related. To simplify notation, we will write $\mathsf{s}_{2m}(d)$ instead of $\mathsf{s}_{2m}(\mathbb{Z}_2^d)$. Let $W$ be a set of positive integers which contains at least one even number. We denote by $\beta_W(d)$ the largest size of a set in $\mathbb{Z}_2^d$ which has no zero-sum subsets of size $w \in W$.
We will use the following shortcuts: \[ \beta_{2m}(d) = \beta_{\{2m\}}(d), \;\;\;\; \beta_{2[k,m]}(d) = \beta_{\{2k,2k+2,\ldots,2m\}}(d), \] \[ \beta_{[1,2m]}(d) = \beta_{\{1,2,\ldots,2m\}}(d). \] As should be expected, $\mathsf{s}_{2m}(d)$ and $\beta_{2m}(d)$ are close: \begin{equation}\label{eq:beta_s_beta} \beta_{2m}(d) + 1 \; \leq \; \mathsf{s}_{2m}(d) \; \leq \; \beta_{2m}(d) + 2m-1 \; . \end{equation} The lower bound in \eqref{eq:beta_s_beta} is trivial. The upper bound was proved in \cite[Theorem~4.1]{Sidorenko:2017}. The set of $d$ basis vectors in $\mathbb{Z}_2^d$ together with the vector $(1,1,\ldots,1)$ demonstrates that \begin{equation}\label{eq:even_beta_1} \beta_{[1,2m]}(d) \; \geq \; d+1 \;\;\;\;\mbox{for}\;\; d \geq 2m \; . \end{equation} If $d \geq 3m$, there exist vectors $x,y\in\mathbb{Z}_2^d$ such that each of the three vectors $x,y,x+y$ has Hamming weight $2m$. Then $x,y$ and the $d$ basis vectors demonstrate that \begin{equation}\label{eq:even_beta_2} \beta_{[1,2m]}(d) \; \geq \; d+2 \;\;\;\;\mbox{for}\;\; d \geq 3m \; . \end{equation} \begin{theorem}\label{th:beta_123_2m} $\; \beta_{[1,2m]}(d) \; = \; N(d,2m+1) \; . $ \end{theorem} \begin{theorem}\label{th:beta_24_2m} $\; \beta_{2[1,m]}(d) \; = \; \beta_{[1,2m]}(d) + 1 \; . $ \end{theorem} It is easy to see that $\beta_{2m}(d) = 2^d\;$ if $\;2m > 2^d$. \begin{theorem}\label{th:beta_d_small} $ \beta_{2m}(d) \; = \; \begin{cases} 2m + 2, & \mbox{if }\; 2^{d-1} \leq 2m < 2^d,\; m \neq 2^{d-1} - 2, \\ 2m , & \mbox{if }\; m = 2^{d-1} - 2. \end{cases} $ \end{theorem} \begin{theorem}\label{th:beta_d-1} Let $m$ be odd. Then $\beta_{2m}(d) \geq 2\beta_{2[1,m]}(d-1)$. If $b$ is even and $m < b \leq \beta_{2[1,m]}(d-1)$, then $\beta_{2b-2m}(d) \geq 2b$. \end{theorem} \begin{theorem}\label{th:s_max_beta} ${\displaystyle \; \mathsf{s}_{2m}(d) \; = \; 1 + \max_{1 \leq j \leq m} \left\{ \beta_{2[j,m]}(d) + (2m-2j) \right\} }$.
\end{theorem} \begin{theorem}\label{th:beta_s_4} $\; \mathsf{s}_4(d) \; = \; \beta_4(d) + 3 \; = \; N(d,5) + 4 \; . $ \end{theorem} \begin{theorem}\label{th:beta_s_6} $\; \mathsf{s}_6(d) \; = \; \beta_6(d) + 1 \; $ for $d \geq 3$. \end{theorem} $R_{2m}(n)$ and $\mathsf{s}_{2m}(d)$ are related, because $R_{2m}(n)$ is, in fact, the smallest number of rows in a binary matrix with $n$ columns that does not contain $2m$ columns summing up to a zero vector, while $\mathsf{s}_{2m}(d)$ is the largest number of columns in a matrix with $d$ rows that has the same property. \begin{theorem}\label{th:R_s} $\; R_{2m}(n) = d \; \iff \; \mathsf{s}_{2m}(d-1) \; \leq \; n < \mathsf{s}_{2m}(d) \; . $ \end{theorem} \begin{theorem}\label{th:s_m_small} $\; \mathsf{s}_{2m}(d) \; = \; 2m + d \;\;\; \mbox{for} \;\; d < 2m \; . $ \end{theorem} \begin{theorem}\label{th:s_2m_2m} $\; \mathsf{s}_{2m}(2m) \; = \; 4m+1 \; . $ \end{theorem} Theorems \ref{th:beta_24_2m} and \ref{th:beta_d-1} together with \eqref{eq:even_beta_1} and \eqref{eq:even_beta_2} yield \begin{corollary}\label{th:s_lower3} If $m$ is odd, then $\beta_{2m}(d) \geq 2d+2\;$ for $d \geq 2m+1$, and $\beta_{2m}(d) \geq 2d+4\;$ for $d \geq 3m+1$. \end{corollary} \begin{theorem}\label{th:s_2m_2m_1_odd} $\; \mathsf{s}_{2m}(2m+1) \; = \; 4m+5 \: $ for odd $m$. \end{theorem} \begin{theorem}\label{th:beta_odd} If $m \geq 3$ is odd and $2m-3 \leq d \leq 2m+1$, then $\beta_{2m}(d) = \mathsf{s}_{2m}(d) - 1$. \end{theorem} In line with Corollary \ref{th:s_lower3} and Theorem \ref{th:s_2m_2m_1_odd}, we propose \begin{conjecture}\label{conj:odd} If $m$ is odd, $\;\mathsf{s}_{2m}(d) = 2d+3\:$ for $\:2m+1 \leq d \leq 3m$. \end{conjecture} When $d > 2m$, the cases of even and odd $m$ differ significantly. For even $m$ and $2m < d < 3m$, the best lower bound we know follows from \eqref{eq:recursive} and Theorem~\ref{th:s_2m_2m}: $\: \mathsf{s}_{2m}(d) \geq \mathsf{s}_{2m}(2m) + (d-2m) = d + 2m + 1 . 
$ \begin{theorem}\label{th:s_2m_2m_1_even} $\; \mathsf{s}_{2m}(2m+1) \; = \; 4m+2 \; $ for even $m$. \end{theorem} \begin{conjecture}\label{conj:even} If $m$ is even, $\:\mathsf{s}_{2m}(d) = d + 2m + 1\:$ for $2m+1 \leq d \leq 3m-1$. \end{conjecture} To prove Conjecture \ref{conj:even}, it would be sufficient to show that $\mathsf{s}_{4k}(6k-1) \leq 10k$. This is equivalent to the statement that every $(4k+1)$-dimensional code of length $10k$ has a word of weight $4k$. Computer search confirmed that Conjectures~\ref{conj:odd} and~\ref{conj:even} hold for $m=3$ and $m=4$, respectively. Conjecture 4.4 of \cite{Gao:2014} in the case $G=\mathbb{Z}_2^d$ claims $\mathsf{s}_2(d) - 2 > \mathsf{s}_4(d) - 4 > \cdots > \mathsf{s}_{2m}(d) - 2m$, where $m = \lfloor d/2 \rfloor$. Theorems~\ref{th:beta_s_4} and~\ref{th:s_2m_2m_1_odd} provide a counterexample: $\mathsf{s}_4(7) - 4 = \mathsf{s}_6(7) - 6 = 11$. Computer search showed that $\mathsf{s}_8(11)=20$, and by Theorem~\ref{th:s_2m_2m_1_odd}, $\mathsf{s}_{10}(11)=25$, which gives another example: $\mathsf{s}_8(11) - 8 = 12$, $\mathsf{s}_{10}(11) - 10 = 15$. Conjecture 4.6 of \cite{Gao:2014} claims $\mathsf{s}_{2m}(d) = \beta_{[1,2m]}(d) + 2m$ for every $m$. (In the notation of \cite{Gao:2014}, $\eta_{2m}(\mathbb{Z}_2^d) = \beta_{[1,2m]}(d) + 1$.) The equality holds for $m=2$ (see our Theorem~\ref{th:beta_s_4}), and is likely to hold for even $m$ in general. Our Theorem~\ref{th:s_2m_2m_1_odd} disproves this conjecture for odd $m \geq 3$ and $d=2m+1$. Indeed, it is easy to see that a two-dimensional binary code of length $n \geq 7$ and distance $n-2$ or higher cannot exist. Therefore, $\beta_{[1,2m]}(2m+1) = N(2m+1,2m+1) < 2m+3$. By applying \eqref{eq:even_beta_1}, we get $\beta_{[1,2m]}(2m+1) = 2m+2$ while $\mathsf{s}_{2m}(2m+1) = 4m+5$. \begin{observation}\label{th:2m_2m-2} If $\mathsf{s}_{2m}(d) - \mathsf{s}_{2m-2}(d) \geq 3$ then $\beta_{2m}(d) = \mathsf{s}_{2m}(d)-1$.
Indeed, consider a sequence $S$ of length $\mathsf{s}_{2m}(d)-1$ over $\mathbb{Z}_2^d$ that does not contain a zero-sum subsequence of length $2m$. If $\beta_{2m}(d) < \mathsf{s}_{2m}(d)-1$, there is $z\in\mathbb{Z}_2^d$ which appears in $S$ at least twice. Remove two copies of $z$ to obtain a sequence $S'$ of length $\mathsf{s}_{2m}(d)-3$. As $\mathsf{s}_{2m}(d)-3 \geq \mathsf{s}_{2m-2}(d)$, $S'$ must contain a zero-sum subsequence of length $2m-2$. By adding back two copies of $z$, we get a zero-sum subsequence of length $2m$ in $S$. \end{observation} In light of Conjectures~\ref{conj:odd} and~\ref{conj:even} and Observation~\ref{th:2m_2m-2}, we expect $\beta_{2m}(d) = \mathsf{s}_{2m}(d)-1$ to hold for all odd $m$ and $2m-3 \leq d \leq 3m$. \vspace{2mm} It follows from Theorem \ref{th:s_2m_2m_1_even} and \eqref{eq:recursive} that $\mathsf{s}_{2m}(2m+2) \geq 4m+3$ for even $m$. By Corollary~\ref{th:s_lower3} and~\eqref{eq:beta_s_beta}, $\mathsf{s}_{2m}(2m+2) \geq 4m+7$ for odd $m$. Hence, Theorems \ref{th:R_s}, \ref{th:s_2m_2m}, \ref{th:s_2m_2m_1_odd}, and \ref{th:s_2m_2m_1_even} yield \begin{corollary}\label{th:Enomoto_my_1} \[ R_{2m}(n) \; = \; \begin{cases} 2m + 1, & \mbox{if }\; n = 4m+1, \\ 2m + 1, & \mbox{if }\; 4m+2 \leq n \leq 4m+4, \; m \mbox{ is odd}, \\ 2m + 2, & \mbox{if }\; 4m+5 \leq n \leq 4m+6, \; m \mbox{ is odd}, \\ 2m + 2, & \mbox{if }\; n = 4m+2, \; m \mbox{ is even}. \end{cases} \] \end{corollary} The next statement (which was also proved in \cite{Sidorenko:2017}) can be easily derived from Theorem~\ref{th:R_s} and~\eqref{eq:Bassalygo}: \begin{corollary}\label{th:s_d_large} For any fixed $m$, $\; \mathsf{s}_{2m}(d) = \Theta\left(2^{d/m}\right) $ as $d\rightarrow\infty\;$. \end{corollary} It follows from \eqref{eq:BCH} and Theorem \ref{th:R_s} that $\; \limsup_{d\rightarrow\infty} \mathsf{s}_{2m}(d) \: 2^{-d/m} \geq 1$. We can improve this bound for odd $m$.
Namely, \eqref{eq:beta_s_beta} and Theorems \ref{th:beta_123_2m}, \ref{th:beta_24_2m}, \ref{th:beta_d-1} yield $\mathsf{s}_{2m}(d) \geq 2N(d-1,2m+1) + 2$. Coupled with \eqref{eq:BCH}, it leads to \begin{corollary} If $m$ is odd, $\; {\displaystyle\limsup_{d\rightarrow\infty}} \; \mathsf{s}_{2m}(d) \: 2^{-d/m} \; \geq \; 2^{1-1/m} \; . $ \end{corollary} By Theorem \ref{th:beta_s_4}, bounds on $N(d,5)$ from \cref{sec:codes} translate into bounds on $\mathsf{s}_4(d)$. For $d \leq 10$, we get the exact values: \vspace{3mm} \begin{tabular}{cccccccccccc} $d$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ $\mathsf{s}_4(d)$ & 5 & 6 & 7 & 9 & 10 & 12 & 15 & 21 & 27 & 37 \end{tabular} \vspace{3mm} \section{Proofs of theorems}\label{sec:proofs} For $A\subseteq\mathbb{Z}_2^d$, we denote the sum of elements of $A$ by $\sum A$. \begin{proof}[\bf{Proof of Theorem \ref{th:beta_123_2m}}] First, we will prove $\beta_{[1,2m]}(d) \geq N(d,2m+1)$. Consider a linear code $C$ of length $n=N(d,2m+1)$, redundancy $d$ and distance at least $2m+1$. Its parity-check matrix $M$ has size $d \times n$. Since $C$ has no words of weights $1,2,\ldots,2m$, the sum of any $k\in\{1,2,\ldots,2m\}$ columns of $M$ is not a zero vector. It means that the columns of $M$, being interpreted as $n$ vectors in $\mathbb{Z}_2^d$, form a set without zero-sum subsets of sizes $2m$ and less. To prove $\beta_{[1,2m]}(d) \leq N(d,2m+1)$, consider a set $A$ of size $\beta_{[1,2m]}(d)$ in $\mathbb{Z}_2^d$ which has no zero-sum subsets of sizes $2m$ and less. If $A$ does not contain a basis of $\mathbb{Z}_2^d$, then there exists a vector $x \in \mathbb{Z}_2^d$ which cannot be represented as a sum of some vectors from $A$. Then $A \cup \{ x \}$ would have no zero-sum subsets of sizes $2m$ and less, which contradicts the maximality of $|A|$. Hence, $A$ contains a basis.
Then a $d \times |A|$ matrix, whose columns are the vectors in $A$, has rank $d$ and is a parity-check matrix of a code of length $|A|$, redundancy $d$ and distance at least $2m+1$. \end{proof} \begin{proof}[\bf{Proof of Theorem \ref{th:beta_24_2m}}] Consider a set $A$ of size $\beta_{[1,2m]}(d)$ in $\mathbb{Z}_2^d$ which has no zero-sum subsets of sizes $1,2,\ldots,2m$. In particular, $0 \notin A$. It is obvious that $A \cup \{0\}$ does not have zero-sum subsets of sizes $2,4,\ldots,2m$. Hence, $\beta_{2[1,m]}(d) \geq \beta_{[1,2m]}(d) + 1$. To prove the opposite inequality, consider a set $B$ of size $\beta_{2[1,m]}(d)$ in $\mathbb{Z}_2^d$ which has no zero-sum subsets of sizes $2,4,\ldots,2m$. Select $y \in B$ and define $B_y = \{x+y \; | \; x \in B \}$. Notice that $B_y$ does not have zero-sum subsets of sizes $2,4,\ldots,2m$ and contains the zero vector. Then $B_y \backslash \{0\}$ will have no zero-sum subsets of sizes $1,2,\ldots,2m$. Therefore, $\beta_{[1,2m]}(d) \geq |B_y \backslash \{0\}| = \beta_{2[1,m]}(d) - 1$. \end{proof} \begin{lemma}\label{th:beta_d_small_1} If $A\subset\mathbb{Z}_2^d$, $|A| > 2^{d-1}$ and $\sum A \neq 0$, then there exists $B \subset A$ such that $|B| = |A|-2$ and $\sum B = 0$. \end{lemma} \begin{proof}[\bf{Proof}] There are exactly $2^d$ solutions $(x,y)$ of the equation $x+y = \sum A$ where $x,y \in \mathbb{Z}_2^d$. Let $\overline{A} = \mathbb{Z}_2^d \backslash A$. The number of solutions with $x \in \overline{A}$ or $y \in \overline{A}$ is at most $2|\overline{A}| = 2 (2^d - |A|) < 2^d$. Hence, there exists a solution $(x,y)$ where $x,y \in A$. As $x+y = \sum A \neq 0$, we get $x \neq y$. Set $B = A \backslash \{x,y\}$. Then $|B| = |A|-2$ and $\sum B = 0$. \end{proof} \begin{lemma}\label{th:beta_d_small_2} If $A\subset\mathbb{Z}_2^d$, $|A| \geq 2^{d-1} + 2$, then there exists $B \subset A$ such that $|B| = |A|-3$ and $\sum B = 0$.
\end{lemma} \begin{proof}[\bf{Proof}] As $|A| \geq 2$, there exists $x \in A$ such that $x \neq \sum A$. Set $A_1 = A \backslash \{x\}$, then $\sum A_1 \neq 0$. By Lemma~\ref{th:beta_d_small_1}, there exists $B \subset A_1$ such that $|B| = |A|-3$ and $\sum B = 0$. \end{proof} \begin{proof}[\bf{Proof of Theorem \ref{th:beta_d_small}}] We will prove the lower bound first. In the case $m \neq 2^{d-1} - 2$, select $A \subseteq \mathbb{Z}_2^d$ such that $|A|=2m+2$ and $\sum A = 0$. If $B \subset A$, $|B|=2m$, and $A \backslash B = \{x,y\}$, then $\sum B = \sum A - (x+y) = x+y \neq 0$. Hence, $\beta_{2m}(d) \geq 2m+2$. In the case $m = 2^{d-1} - 2$, the bound $\beta_{2m}(d) \geq 2m$ is trivial. Now we will prove the upper bound. If $m \neq 2^{d-1} - 2$, consider $A\subset\mathbb{Z}_2^d$ where $|A| = 2m+3 \geq 2^{d-1} + 3$. By Lemma~\ref{th:beta_d_small_2}, there exists $B \subset A$ such that $|B|=2m$ and $\sum B = 0$. Therefore, $\beta_{2m}(d) \leq 2m+2$. If $m = 2^{d-1} - 2$, consider $A\subset\mathbb{Z}_2^d$ where $|A| = 2m+1 = 2^d - 3$. Let $x = \sum A$, and $\mathbb{Z}_2^d \backslash A = \{a,b,c\}$. Since $x = \sum A = a+b+c = a + (b+c) \neq a$ (and similarly, $x \neq b,c$), we conclude that $x \in A$. Set $B = A \backslash \{x\}$. Then $|B|=2m$ and $\sum B = 0$, so $\beta_{2m}(d) \leq 2m$. \end{proof} \begin{proof}[\bf{Proof of Theorem \ref{th:beta_d-1}}] Consider a set $A$ of size $b \leq \beta_{2[1,m]}(d-1)$ in $\mathbb{Z}_2^{d-1}$ which does not have zero-sum subsets of sizes $2,4,\ldots,2m$. For $i=0,1$, obtain $A_i \subset \mathbb{Z}_2^d$ by attaching $i$ to each vector of $A$ as the $d$'th entry. We claim that $A_0 \cup A_1$ does not have zero-sum subsets of size $2m$. Indeed, suppose $X \subseteq (A_0 \cup A_1)$, $|X|=2m$. Let $X_i = \{(z_1,z_2,\ldots,z_{d-1}) | \; (z_1,z_2,\ldots,z_{d-1},i) \in X\}$ and $Y = (X_0 \cup X_1) \backslash (X_0 \cap X_1)$. Then $X_0,X_1,Y$ are subsets of $A$. 
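As a quick sanity check outside the argument, Theorem~\ref{th:beta_d_small} can be confirmed exhaustively in dimension $d=3$: for $m=2=2^{d-1}-2$ it gives $\beta_4(3)=2m=4$, and for $m=3$ it gives $\beta_6(3)=2m+2=8$. A brute-force Python sketch (the helper name is ours):

```python
from itertools import combinations
from functools import reduce
from operator import xor

def beta_exact(d, size):
    """Largest subset of Z_2^d (vectors as bitmasks) having no
    subset of the given size with XOR equal to zero."""
    group = range(2 ** d)
    best = 0
    for k in range(1, 2 ** d + 1):
        found = False
        for cand in combinations(group, k):
            if all(reduce(xor, sub) != 0 for sub in combinations(cand, size)):
                found = True
                break
        if found:
            best = k      # the property is hereditary, so sizes grow one by one
        else:
            break
    return best

print(beta_exact(3, 4), beta_exact(3, 6))  # 4 8
```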
If $\sum X = 0$ then $|X_0|, |X_1|$ are even and $\sum X_0 + \sum X_1 = 0$. Notice that $|X_0 \cap X_1|=m$ is impossible, as it would imply $|X_0|=|X_1|=m$, but $m$ is odd. Hence, $\sum Y = \sum X_0 + \sum X_1 - 2 \sum (X_0 \cap X_1) = 0$ while $|Y| = 2m - 2|X_0 \cap X_1|$ is even and not equal to zero, which contradicts the choice of $A$ and proves the claim. Selecting $b = \beta_{2[1,m]}(d-1)$, we get $\beta_{2m}(d) \geq |A_0 \cup A_1| = 2\beta_{2[1,m]}(d-1)$. When $b$ is even, $\sum (A_0 \cup A_1) = 0$, so $A_0 \cup A_1$ does not have zero-sum subsets of size $2b-2m$ either. Hence, $\beta_{2b-2m}(d) \geq |A_0 \cup A_1| = 2b$. \end{proof} \begin{proof}[\bf{Proof of Theorem \ref{th:s_max_beta}}] We will show first that $\mathsf{s}_{2m}(d) \geq 1 + \beta_{2[j,m]}(d) + (2m-2j)$ for every $j=1,2,\ldots,m$. Indeed, consider a set $A$ of size $\beta_{2[j,m]}(d)$ in $\mathbb{Z}_2^d$ which does not have zero-sum subsets of sizes $2j,2j+2,\ldots,2m$. Select $x \in A$ and form a sequence from all the elements of $A$ plus $(2m-2j)$ extra copies of $x$. It is easy to see that this sequence does not have zero-sum subsequences of size $2m$. Hence, $\mathsf{s}_{2m}(d) \geq 1 + \max_{1 \leq j \leq m} \left\{\beta_{2[j,m]}(d) + (2m-2j)\right\}$. Now we need to prove the opposite inequality. Let $s$ be an integer such that $s > \beta_{2[j,m]}(d) + (2m-2j)$ for every $j=1,2,\ldots,m$. Define the product of an integer $n$ and $z\in\mathbb{Z}_2^d$ as $z$ if $n$ is odd, and $0$ if $n$ is even. Let $f(z)$ be the number of appearances of $z\in\mathbb{Z}_2^d$ in a sequence of length $s$ over $\mathbb{Z}_2^d$. We are going to show that there exists a nonnegative integer function $g$ on $\mathbb{Z}_2^d$ such that $g \leq f$, $\;\sum_{z\in\mathbb{Z}_2^d} g(z) = 2m$, and $\sum_{z\in\mathbb{Z}_2^d} g(z)z = 0$. Indeed, set $f_1(z)=1$ if $f(z)$ is odd, and $f_1(z)=0$ if $f(z)$ is even. Set $f_2(z) = f(z) - f_1(z)$. All values of $f_2(z)$ are even.
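The lower-bound construction in the proof above can be made concrete for $d=2$, $m=2$, $j=1$. Writing vectors of $\mathbb{Z}_2^2$ as bitmasks, the set $A=\{0,1,2\}$ has no zero-sum subsets of sizes $2$ or $4$; appending $2m-2j=2$ extra copies of $x=0$ yields a sequence of length $5$ with no zero-sum subsequence of size $4$, so $\mathsf{s}_4(2)\geq 6$. A short exhaustive search confirms $\mathsf{s}_4(2)=6$, the value listed in the table above. A Python sketch outside the argument (helper names are ours):

```python
from itertools import combinations, combinations_with_replacement
from functools import reduce
from operator import xor

def has_zero_sum_of_size(seq, size):
    """True if some subsequence of the given size has XOR equal to 0."""
    return any(reduce(xor, sub) == 0 for sub in combinations(seq, size))

# Construction with d = 2, m = 2, j = 1: A = {0, 1, 2} and x = 0.
witness = (0, 1, 2) + (0,) * 2                 # |A| + (2m - 2j) elements
assert not has_zero_sum_of_size(witness, 4)    # hence s_4(2) >= 6

# Every sequence of length 6 over Z_2^2 has a zero-sum subsequence of size 4.
assert all(has_zero_sum_of_size(s, 4)
           for s in combinations_with_replacement(range(4), 6))
print("s_4(2) = 6")
```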
If $\sum_{z\in\mathbb{Z}_2^d} f_2(z) \geq 2m$, then there exists $g \leq f_2$ such that all values of $g$ are even and $\sum_{z\in\mathbb{Z}_2^d} g(z) = 2m$. Hence, we can assume that $\sum_{z\in\mathbb{Z}_2^d} f_2(z) = 2(m-j)$ where $j\in\{1,2,\ldots,m\}$. Let $A = \{ z\in\mathbb{Z}_2^d: \; f_1(z)=1 \}$. Since $|A| = \sum_{z\in\mathbb{Z}_2^d} (f(z)-f_2(z)) = s - 2(m-j) > \beta_{2[j,m]}(d)$, there exists $B \subseteq A$ such that $|B| = k \in \{2j,2j+2,\ldots,2m\}$ and $\sum_{z \in B} z = 0$. Set $f_B(z)=1$ if $z \in B$, and $f_B(z)=0$ otherwise. Choose a function $f_3 \leq f_2$ such that all values of $f_3$ are even and $\sum_{z\in\mathbb{Z}_2^d} f_3(z) = 2m - k$. Then $f_3+f_B$ is the required function $g$. \end{proof} \begin{proof}[\bf{Proof of Theorem \ref{th:beta_s_4}}] Since a set consists of distinct vectors, it cannot have a zero-sum subset of size $2$, so $\beta_4(d)=\beta_{\{2,4\}}(d)$. Thus, Theorems~\ref{th:beta_123_2m} and~\ref{th:beta_24_2m} imply $\beta_4(d) = N(d,5)+1$, while Theorem~\ref{th:s_max_beta} implies $\mathsf{s}_4(d) = \beta_4(d) + 3$. \end{proof} \begin{proof}[\bf{Proof of Theorem \ref{th:R_s}}] We will show first that $n < \mathsf{s}_{2m}(d)$ implies $R_{2m}(n) \leq d$, and then show that $n \geq \mathsf{s}_{2m}(d-1)$ implies $R_{2m}(n) \geq d$. As $\mathsf{s}_{2m}(d-1) < \mathsf{s}_{2m}(d)$ (see \eqref{eq:recursive}), these two statements establish the theorem. Let $n < \mathsf{s}_{2m}(d)$. Then there exists a sequence $S$ of length $n$ over $\mathbb{Z}_2^d$ which does not have zero-sum subsequences of size $2m$. Write the vectors of $S$ as a $d \times n$ binary matrix $M$. This matrix does not have $2m$ columns that sum up to a zero vector. Its rank is $r \leq d$. Take $r$ linearly independent rows of $M$ to get an $r \times n$ matrix $M_1$ of rank $r$. We claim that $M_1$ does not have $2m$ columns that sum up to a zero vector. Indeed, any row of $M$ that is not in $M_1$ is the sum of some rows of $M_1$.
If $M_1$ had a set of $2m$ columns which sum up to a zero vector, then the same columns in $M$ would also sum up to a zero vector. Let $C$ be the linear code whose parity check matrix is $M_1$. This code has length $n$, redundancy $r$ and does not have a word of weight $2m$. Hence, $R_{2m}(n) \leq r \leq d$. Let $n \geq \mathsf{s}_{2m}(d-1)$. Consider a linear code $C$ of length $n$ and redundancy $r$ which does not have words of weight $2m$. Let $M$ be an $r \times n$ parity check matrix of $C$. If $r \leq d-1$, then $n \geq \mathsf{s}_{2m}(r)$, and there exists a set of $2m$ columns in $M$ that sum up to a zero vector. This contradicts the assumption that $C$ does not have words of weight $2m$. Hence, $r \geq d$. We have proved that $R_{2m}(n) \geq d$. \end{proof} \begin{proof}[\bf{Proof of Theorem \ref{th:s_m_small}}] It follows from \eqref{eq:Enomoto1} that $R_{2m}(2m+d)=d+1$ when $d<2m$. Hence, by Theorem~\ref{th:R_s}, $\mathsf{s}_{2m}(d) \leq 2m + d$. The opposite inequality is provided by~\eqref{eq:s_lower}. \end{proof} \begin{observation}\label{th:operations} The following operations do not affect whether a binary matrix has a set of $2m$ columns that sum up to a zero vector: permutations of columns (rows), additions of one row to another, additions of the same vector to each column. In particular, bringing a matrix to binormal form by Lemma~\ref{th:Enomoto2} does not affect whether such a set of $2m$ columns exists. \end{observation} \begin{lemma}\label{th:rank} Let $M$ be a $d \times n$ binary matrix whose rank is less than $d$. If $n \geq \mathsf{s}_{2m}(d-1)$, then $M$ contains $2m$ columns that sum up to a zero vector. \end{lemma} \begin{proof}[\bf{Proof}] Use additions of one row to another to make the last row entirely zero. By Observation~\ref{th:operations}, this does not affect whether there is a set of $2m$ columns that sum up to a zero vector. As $n \geq \mathsf{s}_{2m}(d-1)$, such a set of columns must exist.
By Observation~\ref{th:operations}, the same is true for the original matrix. \end{proof} \begin{proof}[\bf{Proof of Theorem \ref{th:s_2m_2m}}] When $k=2$ and $d=2m$, \eqref{eq:s_lower2} yields $\mathsf{s}_{2m}(2m) \geq 4m+1$. To prove $\mathsf{s}_{2m}(2m) \leq 4m+1$, consider a sequence $S$ of size $4m+1$ over $\mathbb{Z}_2^{2m}$. We need to prove that $S$ has a zero-sum subsequence of size $2m$. Let $x$ be the sum of all elements of $S$. Add $x$ to each element of $S$ and denote the resulting sequence by $S_0$. If $S_0$ has a zero-sum subsequence of size $2m$, then $S$ has it, too. Consider a $(2m) \times (4m+1)$ binary matrix $M$ whose columns represent the vectors from $S_0$. By Theorem~\ref{th:s_m_small}, $4m+1 \geq \mathsf{s}_{2m}(2m-1)$. If the rank of $M$ is less than $2m$, then by Lemma~\ref{th:rank}, it has a set of $2m$ columns that sum up to a zero vector. Suppose that the rank of $M$ is $2m$. Since the sum of all vectors of $S_0$ is a zero vector, $M$ satisfies the conditions of Corollary~\ref{th:Enomoto}. Thus, there are $2m$ columns in $M$ whose sum is a zero vector. \end{proof} \begin{lemma}\label{th:2m_1_2m_1} Let $C=[c_{ij}]$ be a $t \times t$ binary matrix. If $C$ does not have a row where all off-diagonal entries are equal, then it has three distinct rows $i,j,k$ such that $c_{ij}+c_{ik}=1$ and $c_{ji}+c_{jk}=1$. \end{lemma} \begin{proof}[\bf{Proof}] Let $a_i$ be the number of off-diagonal nonzero entries in row $i$, and $b_i = (t-1) - a_i$ be the number of off-diagonal zero entries. Let $a = \min\{ a_1,a_2,\ldots,a_t \}$\!, $b = \min\{ b_1,b_2,\ldots,b_t \}$. Since the statement of the lemma holds for matrix $[c_{ij}]$ if and only if it holds for matrix $[1-c_{ij}]$, we may assume that $a \leq b$. Let index $i$ be such that $a_i=a$. Since $a_i > 0$, there exists index $j \neq i$ such that $c_{ij}=1$. Consider two cases for $c_{ji}$. 0). $c_{ji}=0$. As $a_j > a_i - 1$, there exists $k \neq i,j$ such that $c_{ik}=0$ and $c_{jk}=1$. 1). 
$c_{ji}=1$. As $a_i + a_j = a_i + (t-1 - b_j) \leq (t-1) + a_i - b < t$, there exists $k \neq i,j$ such that $c_{ik} = c_{jk} = 0$. \end{proof} Let $M$ be an $r \times c$ matrix, and $1 \leq r' \leq r'' \leq r$, $\: 1 \leq c' \leq c'' \leq c$. We denote by $M[r':r'',\: c':c'']$ the submatrix of $M$ formed by rows $r', r'+1, \ldots, r''$ and columns $c', c'+1, \ldots, c''$. \begin{lemma}\label{th:2m_1_4m_5} Let $k$ be even, and $M$ be a $(k+1) \times (2k+5)$ matrix in binormal form. One can select $k$ columns of $M$ whose sum is the $(k+1)$-dimensional zero vector. \end{lemma} \begin{proof}[\bf{Proof}] Let $M=[a_{ij}]$. Let $C=[c_{ij}]$ be a $(k+1) \times (k+1)$ binary matrix where $c_{ij} = a_{i,2j-1} = a_{i,2j}$ for $i \neq j$. The values of diagonal entries $c_{ii}$ are not important and may be set arbitrarily. Suppose there is a row in $C$ where the sum of off-diagonal entries is $0$. Without loss of generality, we may assume that it is the last row, so $\sum_{i=1}^{k} c_{k+1,i} = 0$. Since $M[1:k,1:2k]$ is in binormal form, by Lemma~\ref{th:Enomoto1}, there is a set of $k$ indices $j(i) \in \{ 2i-1,2i \}$ $\; (i=1,2,\ldots,k)$ such that the sum of columns $j(1),j(2),\ldots,j(k)$ in $M[1:k,1:2k]$ is the $k$-dimensional zero vector. Since $a_{k+1,j(i)} = c_{k+1,i}$, the condition $\sum_{i=1}^{k} c_{k+1,i} = 0$ guarantees that the sum of columns $j(1),j(2),\ldots,j(k)$ in $M$ is the $(k+1)$-dimensional zero vector. Thus, we may assume that the sum of off-diagonal entries in each row of $C$ is equal to $1$. This means that $C$ satisfies the conditions of Lemma~\ref{th:2m_1_2m_1}: a row where all off-diagonal entries are equal would have even sum, as each row has $k$ off-diagonal entries. Hence, without loss of generality, we may assume that $c_{k-1,k} + c_{k-1,k+1} = 1$ and $c_{k,k-1} + c_{k,k+1} = 1$. Since the sum of off-diagonal entries in any row of $C$ is $1$, we get $\sum_{j=1}^{k-2} c_{ij} = 0$ for $i=k-1,k$. Let $x = \sum_{j=1}^{k-2} c_{k+1,j}$. Then the sum of all columns in the $3 \times (k-2)$ matrix $C[k-1:k+1,\: 1:k-2]$ is equal to $(0,0,x)^T$.
We claim that the $3 \times 9$ matrix $M[k-1:k+1,\: 2k-3:2k+5]$ has two columns whose sum is $(0,0,x)^T$. Indeed, among the $9$ columns there must be two equal ones, whose sum is $(0,0,0)^T$; this settles the case $x=0$. Since $M$ is in binormal form, its columns $(2k+1)$ and $(2k+2)$ differ only in the last entry. Hence, $M[k-1:k+1, 2k-3:2k+5]$ contains two columns whose sum is $(0,0,1)^T$. Now we select two columns in $M[k-1:k+1,\: 2k-3:2k+5]$ whose sum is $(0,0,x)^T$ and call the columns of $M$ which contain them {\it special}. Let $(y_1,y_2,\ldots,y_{k+1})^T$ be the sum of the two special columns. We already know that $y_{k-1}=y_{k}=0$ and $y_{k+1}=x$. Since $M[1:k-2,\:1:2k-4]$ is in binormal form, by Lemma~\ref{th:Enomoto1}, there is a set of $k-2$ indices $l(i) \in \{ 2i-1,2i \}$ $\; (i=1,2,\ldots,k-2)$ such that the sum of columns $l(1),l(2),\ldots,l(k-2)$ in $M[1:k-2,\:1:2k-4]$ is equal to $(y_1,y_2,\ldots,y_{k-2})^T$. Then the sum of these $k-2$ columns plus the two special columns in $M$ is the $(k+1)$-dimensional zero vector. \end{proof} \begin{proof}[\bf{Proof of Theorem \ref{th:s_2m_2m_1_odd}}] By \eqref{eq:beta_s_beta} and Corollary~\ref{th:s_lower3}, $\mathsf{s}_{2m}(2m+1) \geq \linebreak \beta_{2m}(2m+1) + 1 \geq 4m+5$. To prove $\mathsf{s}_{2m}(2m+1) \leq 4m+5$, consider a sequence $S$ of size $4m+5$ over $\mathbb{Z}_2^{2m+1}$. We need to prove that $S$ has a zero-sum subsequence of size $2m$. Let $x$ be the sum of all elements of $S$. Add $x$ to each element of $S$ and denote the resulting sequence by $S_0$. If $S_0$ has a zero-sum subsequence of size $2m$, then $S$ has it, too. Consider a $(2m+1) \times (4m+5)$ binary matrix $M$ whose columns represent the vectors from $S_0$. By Theorem~\ref{th:s_2m_2m}, $4m+5 \geq \mathsf{s}_{2m}(2m)$. If the rank of $M$ is less than $2m+1$, then by Lemma~\ref{th:rank}, it has a set of $2m$ columns that sum up to a zero vector. Suppose that the rank of $M$ is $2m+1$.
Since the sum of all vectors of $S_0$ is a zero vector, $M$ satisfies the conditions of Lemma~\ref{th:Enomoto2} and can be brought to matrix $M'$ in binormal form by permutations of columns and additions of one row to another. By Lemma~\ref{th:2m_1_4m_5}, $M'$ has a set of $2m$ columns that sum up to a zero vector. Then by Observation~\ref{th:operations}, $M$ has such a set, too. \end{proof} \begin{proof}[\bf{Proof of Theorem \ref{th:beta_odd}}] By \eqref{eq:beta_s_beta}, $\beta_{2m}(d) \leq \mathsf{s}_{2m}(d) - 1$. It remains to prove $\beta_{2m}(d) \geq \mathsf{s}_{2m}(d) - 1$. The values of $\mathsf{s}_{2m}(d)$ for $d \leq 2m+1$ were determined in Theorems~\ref{th:s_m_small}, \ref{th:s_2m_2m}, and~\ref{th:s_2m_2m_1_odd}. Theorems~\ref{th:beta_24_2m} and~\ref{th:beta_d-1} yield $\beta_{2m}(d) \geq 2\beta_{[1,2m]}(d-1) + 2$. The set of $d-1$ basis vectors in $\mathbb{Z}_2^{d-1}$ demonstrates that $\beta_{[1,2m]}(d-1) \geq d-1$, so we get \[ \beta_{2m}(2m-1) \; \geq \; 2(2m-2)+2 \; = \; 4m-2 \; = \; \mathsf{s}_{2m}(2m-1)-1 , \] \[ \beta_{2m}(2m ) \; \geq \; 2(2m-1)+2 \; = \; 4m \; = \; \mathsf{s}_{2m}(2m )-1 . \] By \eqref{eq:even_beta_1}, $\beta_{[1,2m]}(2m) \geq 2m+1$, and hence, \[ \beta_{2m}(2m+1) \; \geq \; 2(2m+1)+2 \; = \; 4m+4 \; = \; \mathsf{s}_{2m}(2m+1)-1 . \] We use Theorem~\ref{th:beta_d-1} with odd $m \geq 1$ and $b = 2m+2 \leq \beta_{2[1,m]}(2m)$ to get $\beta_{2m+4}(2m+1) = \beta_{2b-2m}(2m+1) \geq 2b = 4m+4$ for $m \geq 1$. Substituting $m-2 \geq 1$ instead of $m$, we get \[ \beta_{2m}(2m-3) \; \geq \; 4m-4 \; = \; \mathsf{s}_{2m}(2m-3)-1 \;\;\;\mbox{for}\;\; m \geq 3 . \] Finally, \[ \beta_{2m}(2m-2) \; \geq \; \beta_{2m}(2m-3) + 1 \; \geq \; 4m-3 \; = \; \mathsf{s}_{2m}(2m-2)-1 \;\;\;\mbox{for}\;\; m \geq 3 . \] \end{proof} \begin{lemma}\label{th:code_without_2_and_4} Let $C$ be a linear binary code of length $n \geq 10$. 
If $C$ does not have words of Hamming weight $2$ and $4$, then its dual code $C^\perp$ has a nonzero word of weight $l$ where $|l - n/2| \geq 2$. \end{lemma} \begin{proof}[\bf{Proof}] Suppose, to the contrary, that the weights of nonzero words of $C^\perp$ lie in the interval $[(n-3)/2,\: (n+3)/2]$. Let $D_n = \{ 0,\: (n-3)/2,\: (n-1)/2,\: (n+1)/2,\: (n+3)/2 \}$ if $n$ is odd, and $D_n = \{ 0,\: (n-2)/2,\: n/2,\: (n+2)/2 \}$ if $n$ is even. Let $r$ be the dimension of $C^\perp$, so the dimension of $C$ is $k=n-r$. Let $A_j\;$ ($B_j$) denote the number of words of weight $j$ in $C\;$ ($C^\perp$). Then $A_0=1$, $A_2=A_4=0$, $B_0=1$, $B_j=0$ for $j \notin D_n$, and $\sum_{j \in D_n} B_j = 2^r$. Consider a linear combination of MacWilliams identities \eqref{eq:MWI} with $\lambda=1,2,3,4$, where the coefficients $c_\lambda$ will be selected later: \[ \sum_{\lambda=1}^4 c_\lambda \; 2^r \sum_{j=0}^\lambda \binom{n-j}{\lambda-j} A_j \; = \; \sum_{\lambda=1}^4 c_\lambda \; 2^\lambda \sum_{j=0}^{n-\lambda} \binom{n-j}{\lambda} B_j \; . \] As $D_n \subseteq [0,n - \lambda]$ for $n \geq 10$, $\lambda \leq 4$, and $B_j=0$ for $j \notin D_n$, we can rewrite it as \begin{equation}\label{eq:MWI1} 2^r \sum_{\lambda=1}^4 c_\lambda \sum_{j=0}^\lambda \binom{n-j}{\lambda-j} A_j \; = \; \sum_{j \in D_n} f_j B_j \; , \end{equation} where \[ f_j \; = \; \sum_{\lambda=1}^4 c_\lambda \; 2^\lambda \binom{n-j}{\lambda} \; . \] As $A_2=A_4=0$, the left hand side of \eqref{eq:MWI1} is a linear combination of $A_0,A_1,A_3$. We are going to choose coefficients $c_1,c_2,c_3,c_4$ in such a way that $A_1$ and $A_3$ are eliminated while the values of $f_j$ are equal for all $j \in D_n \backslash \{0\}$. If $n$ is even, set $c_1=4n(n-1)(n-2)$, $c_2=-12(n-2)^2$, $c_3=24(n-3)$, $c_4=-24$.
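This choice of coefficients, as well as the odd-$n$ choice given below, can be verified numerically using $f_j = \sum_{\lambda=1}^4 c_\lambda \, 2^\lambda \binom{n-j}{\lambda}$. A Python sketch for several small lengths (the function name is ours):

```python
from math import comb

def f(n, j, coeffs):
    """f_j = sum over lambda of c_lambda * 2^lambda * C(n - j, lambda)."""
    return sum(c * 2 ** lam * comb(n - j, lam)
               for lam, c in enumerate(coeffs, start=1))

for n in (10, 12, 14):                       # even lengths
    c = (4 * n * (n - 1) * (n - 2), -12 * (n - 2) ** 2, 24 * (n - 3), -24)
    assert f(n, 0, c) == 0
    assert all(f(n, j, c) == n ** 2 * (n + 2) * (n - 2)
               for j in ((n - 2) // 2, n // 2, (n + 2) // 2))

for n in (11, 13, 15):                       # odd lengths
    c = (4 * (n + 1) * (n - 1) * (n - 3), -12 * (n - 1) * (n - 3),
         24 * (n - 3), -24)
    assert f(n, 0, c) == 0
    assert all(f(n, j, c) == (n + 3) * (n + 1) * (n - 1) * (n - 3)
               for j in ((n - 3) // 2, (n - 1) // 2, (n + 1) // 2, (n + 3) // 2))

print("coefficient choices verified")
```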
In this case, we get $f_0=0$, $f_j = n^2(n+2)(n-2)$ for $j \in D_n \backslash \{0\}$, and \eqref{eq:MWI1} is reduced to \[ 2^r n(n-1)(n-2)(n+3) A_0 \; = \; n^2(n+2)(n-2) \sum_{j \in D_n \backslash \{0\}} B_j \; . \] Since $A_0=1$ and $\sum_{j \in D_n \backslash \{0\}} B_j = 2^r - 1$, we can simplify it further to \[ n(n-2)(3 \cdot 2^r - n(n+2)) \; = \; 0 \; , \] which has no integer solutions for $n>6$. If $n$ is odd, set $c_1=4(n+1)(n-1)(n-3)$, $c_2=-12(n-1)(n-3)$, $c_3=24(n-3)$, $c_4=-24$. In this case, we get $f_0=0$, $f_j = (n+3)(n+1)(n-1)(n-3)$ for $j \in D_n \backslash \{0\}$, and \eqref{eq:MWI1} is reduced to \[ 2^r n(n-1)(n-3)(n+4) A_0 \; = \; (n+3)(n+1)(n-1)(n-3) \sum_{j \in D_n \backslash \{0\}} B_j \; . \] Since $A_0=1$ and $\sum_{j \in D_n \backslash \{0\}} B_j = 2^r - 1$, we can simplify it further to \[ (n-1)(n-3)(3 \cdot 2^r - (n+1)(n+3)) \; = \; 0 \; , \] which has no integer solutions for $n>5$. \end{proof} \begin{lemma}\label{th:beta_2_beta} $\; \beta_{2[1,m]}(d) \; \leq \; \max\{9,\; 2 \beta_{2[1,m]}(d-1) - 4\} \; . $ \end{lemma} \begin{proof}[\bf{Proof}] Suppose $\beta_{2[1,m]}(d) \geq 10$. We need to prove $\beta_{2[1,m]}(d) \leq 2 \beta_{2[1,m]}(d-1) - 4$. Consider a set $A$ of size $n=\beta_{2[1,m]}(d)$ in $\mathbb{Z}_2^d$ which does not have zero-sum subsets of sizes $2,4,\ldots,2m$. Write the $n$ vectors of $A$ column-wise as a $d \times n$ binary matrix $M$. Similarly to the proof of Theorem~\ref{th:beta_123_2m}, the maximality of $|A|$ ensures that $M$ has rank $d$. Let $C$ be the linear code of length $n$ whose parity check matrix is $M$. This code does not have words of weight $2,4,\ldots,2m$. As $n \geq 10$, by Lemma~\ref{th:code_without_2_and_4}, the dual code $C^\perp$ has a word of weight $l$ where $|l - n/2| \geq 2$. Then there exists a parity check matrix $M_1$ of code $C$ such that this word is the first row of $M_1$. Notice that the sum of any $2k$ columns of $M_1$ is not a zero vector ($k=1,2,\ldots,m$).
If $l \geq (n+4)/2$, remove from $M_1$ all columns which contain $0$ in the first row. If $l \leq (n-4)/2$, remove from $M_1$ all columns which contain $1$ in the first row. The resulting matrix $M_2$ is of size $d \times t$ where $t \geq (n+4)/2$. All entries in the first row of $M_2$ are equal. Remove the first row to get matrix $M_3$ of size $(d-1) \times t$. As $M_2$ does not have sets of columns of size $2k\;$ ($k=1,2,\ldots,m$) which sum up to a zero vector, the same is true for $M_3$. Therefore, $\beta_{2[1,m]}(d-1) \geq t \geq (n+4)/2 = (\beta_{2[1,m]}(d) + 4) / 2$. \end{proof} \begin{proof}[\bf{Proof of Theorem \ref{th:beta_s_6}}] For $3 \leq d \leq 7$, the statement of the theorem follows from Theorem~\ref{th:beta_odd}. If $d \geq 8$, \eqref{eq:even_beta_1} and Theorem~\ref{th:beta_24_2m} yield $\beta_{\{2,4,6\}}(d) \geq 10$. We may apply Lemma~\ref{th:beta_2_beta} to get $\beta_{\{2,4,6\}}(d) \leq 2 \beta_{\{2,4,6\}}(d-1) - 4$. By Theorem~\ref{th:beta_d-1}, $\beta_6(d) \geq 2 \beta_{\{2,4,6\}}(d-1) \geq \beta_{\{2,4,6\}}(d) + 4$. By definition, $\beta_{\{4,6\}}(d) = \beta_{\{2,4,6\}}(d)$, hence, Theorem~\ref{th:s_max_beta} yields $ \mathsf{s}_6(d) = 1+ \max\{\beta_{\{2,4,6\}}(d)+4,\; \beta_6(d)\} = 1 + \beta_6(d) $. \end{proof} \begin{lemma}\label{th:digraph} Let $D$ be a digraph with $n \equiv 1 \pmod{4}$ vertices where every vertex has odd out-degree. Then one can find in $D$ either $3$ vertices that span a subgraph with $2$ vertices of odd out-degree, or $5$ vertices that span a subgraph with $3$ vertices of odd out-degree. \end{lemma} \begin{proof}[\bf{Proof}] Let the vertex set of $D$ be $\{1,2,\ldots,n\}$. Let $C=[c_{ij}]$ be the adjacency matrix: $c_{ij}=1$ if the arc $(i,j)$ is present in $D$, otherwise, $c_{ij}=0\:$ ($i \neq j)$. The diagonal entries $c_{ii}$ are zeros. For a subset $A$ of vertices we define the {\it type} of $A$ as the number of vertices of odd out-degree in the subgraph of $D$ spanned by $A$. 
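In terms of this notion of type, the lemma can be confirmed by exhaustive search for $n=5$, where the only quintuple is the whole vertex set and has type $5$, so a triple of type $2$ must always exist. A Python sketch outside the argument (helper names are ours); it enumerates all $8^5$ digraphs on $5$ vertices with odd out-degrees:

```python
from itertools import combinations, product

def check_digraphs_n5():
    """Exhaustively verify the digraph lemma for n = 5: every digraph on
    5 vertices in which each vertex has odd out-degree contains either a
    triple of type 2 or a quintuple of type 3."""
    n = 5
    # all 0/1 rows of n-1 off-diagonal entries with odd sum
    odd_rows = [r for r in product((0, 1), repeat=n - 1) if sum(r) % 2]
    for rows in product(odd_rows, repeat=n):
        # rebuild the adjacency matrix with zero diagonal
        c = [[0] * n for _ in range(n)]
        for i, r in enumerate(rows):
            offdiag = [j for j in range(n) if j != i]
            for j, v in zip(offdiag, r):
                c[i][j] = v
        def subset_type(sub):
            # number of vertices of odd out-degree in the spanned subgraph
            return sum(sum(c[i][j] for j in sub if j != i) % 2 for i in sub)
        ok = (any(subset_type(t) == 2 for t in combinations(range(n), 3))
              or subset_type(range(n)) == 3)
        if not ok:
            return False
    return True

print(check_digraphs_n5())
```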
We need to show that there exists either a triple of type $2$ or a quintuple of type $3$. For a triple $\{i,j,k\}$, let $t(i,j,k)$ denote its type. Similarly, for a quintuple $\{i,j,k,l,m\}$, its type is denoted by $t(i,j,k,l,m)$. Suppose $D$ contains no triple of type $2$, so every triple of even type must have type $0$. We are going to show that there is a quintuple of type $3$. The order of $C$ is odd, and the sum of the entries in each row is odd. Hence, the sum of all entries of $C$ is odd. In the sum of expressions $f(i,j,k) = c_{ij} + c_{ji} + c_{ik} + c_{ki} + c_{jk} + c_{kj}$ over all triples $\{i,j,k\}$, each off-diagonal entry of $C$ appears $n-2 \equiv 1 \pmod{2}$ times. Notice that $t(i,j,k)$ is odd (even) when $f(i,j,k)$ is odd (even). Hence, the sum of the types of all triples is odd. As $n \equiv 1 \pmod{4}$, the number of all triples, $\binom{n}{3}$, is even. This means that there exists at least one triple of even type, and at least one triple of odd type. Then we can find a triple of even type and a triple of odd type that share a pair: one can turn any triple into any other by replacing one vertex at a time, and along such a transformation from an even-type triple to an odd-type one the parity of the type must change at some step. Notice that the expression $f(i,j,k)+f(i,j,l)+f(i,k,l)+f(j,k,l)$ is always even, because each arc of the subgraph spanned by $\{i,j,k,l\}$ is counted there twice. Thus, the sum of types of the $4$ triples within one quadruple is even. In particular, a quadruple which contains a triple of even type and a triple of odd type must contain two triples of even type and two triples of odd type. Without loss of generality, we can assume that triples $\{1,2,3\}$, $\{2,3,4\}$ are of type $0$, and $\{1,2,4\}$, $\{1,3,4\}$ are of odd type. The type-$0$ triples give $c_{12}=c_{13}$, $c_{42}=c_{43}$, $c_{21}=c_{23}=c_{24}$ and $c_{31}=c_{32}=c_{34}$; in particular, vertex $2$ has even out-degree within the triple $\{1,2,4\}$, so, as there are no triples of type $2$, the type of $\{1,2,4\}$ is $1$, and exactly one of $c_{12}+c_{14}$, $c_{41}+c_{42}$ is odd. Summarizing, \begin{equation*} c_{12}=c_{13}, \;\;\; c_{42}=c_{43}, \;\;\; c_{21}=c_{23}=c_{24}, \;\;\; c_{31}=c_{32}=c_{34}, \;\;\; c_{12}+c_{14} \neq c_{41}+c_{42}.
\end{equation*} Notice that the following operation on $D$ preserves the types of all subsets of odd sizes (in particular of sizes $3,5,n$): for a given vertex $i$, remove all arcs $(i,j)$ that were present in $D$, and add all arcs $(i,j)$ that were not present. This is equivalent to replacing each off-diagonal entry $c_{ij}$ in row $i$ of $C$ with $1-c_{ij}$. We apply this {\it switching} operation, if necessary, to some of the vertices $1,2,3,4$ to ensure \begin{equation}\label{eq:ccc2} c_{12}=c_{13}=c_{42}=c_{43}=c_{21}=c_{23}=c_{24}=c_{31}=c_{32}=c_{34}=0. \end{equation} As indices $1$ and $4$ appear in \eqref{eq:ccc2} symmetrically, and $c_{14} \neq c_{41}$ once \eqref{eq:ccc2} holds, we can assume, without loss of generality, that $c_{14}=1$ and $c_{41}=0$. Define \begin{multline*} V_1 = \{ i\in\{4,5,\ldots,n\}: \; t(i,2,3)=0, \; \\ t(i,1,2) \equiv 1 \; ({\rm mod}\: 2), \; t(i,1,3) \equiv 1 \; ({\rm mod}\: 2) \} \; , \end{multline*} \begin{multline*} V_2 = \{ i\in\{4,5,\ldots,n\}: \; t(i,1,3)=0, \; \\ t(i,2,3) \equiv 1 \; ({\rm mod}\: 2), \; t(i,1,2) \equiv 1 \; ({\rm mod}\: 2) \} \; , \end{multline*} \begin{multline*} V_3 = \{ i\in\{4,5,\ldots,n\}: \; t(i,1,2)=0, \; \\ t(i,1,3) \equiv 1 \; ({\rm mod}\: 2), \; t(i,2,3) \equiv 1 \; ({\rm mod}\: 2) \} \; , \end{multline*} \[ W_0 = \{ i\in\{4,5,\ldots,n\}: \; t(i,1,2) = t(i,1,3) = t(i,2,3) = 0 \} \; , \] \[ W_1 = V_1 \cup \{1\} \; , \;\;\;\;\;\; W_2 = V_2 \cup \{2\} \; , \;\;\;\;\;\; W_3 = V_3 \cup \{3\} \; . \] As the sum of types of all triples within one quadruple is even, $\{W_0,W_1,W_2,W_3\}$ form a partition of $\{1,2,\ldots,n\}$. Notice that $4 \in V_1$. We claim that $t(i,j,3)=0$ for any $i \in V_1$ and $j \in V_2$ (if $V_2$ is empty, this statement becomes trivial). Indeed, by the definition of $V_1$ and $V_2$, $t(1,2,3)=t(i,2,3)=t(1,j,3)=0$ while $t(i,1,2)$, $t(i,1,3)$, $t(j,2,3)$, $t(j,1,2)$ are odd. Suppose $t(i,j,3)$ is odd.
Then $t(i,j,1) \equiv t(i,j,3) + t(i,1,3) + t(j,1,3) \equiv 0 \pmod{2}$ and $t(i,j,2) \equiv t(i,j,3) + t(i,2,3) + t(j,2,3) \equiv 0 \pmod{2}$, which means that there are $5$ triples of type $0$ in cyclic pattern: \[ t(i,j,1) = t(j,1,3) = t(1,3,2) = t(3,2,i) = t(2,i,j) = 0. \] If so, the principal submatrix of $C$ formed by rows and columns $i,j,1,2,3$ must have equal off-diagonal entries within each row. But then we would have $t(i,j,3)=0$ which contradicts the initial assumption $t(i,j,3) \equiv 1 \pmod{2}$. Therefore, $t(i,j,3)=0$. Similarly, $t(i,2,k)=0$ for any $i \in V_1$, $k \in V_3$. For every $i \in (W_0 \cup W_1 \backslash \{1,4\})$, we have $t(i,2,3)=0$, hence $c_{2i}=c_{23}=0$, $c_{3i}=c_{32}=0$, and $c_{i2}=c_{i3}$. If $c_{i2}=1$, apply the switching operation to vertex $i$ to ensure $c_{i2}=c_{i3}=0$. Now we have $c_{2i}=c_{23}=c_{3i}=c_{32}=c_{i2}=c_{i3}=0$ for all $i \in W_0 \cup W_1$. For any $i \in V_1$ and $j \in V_2$, we have $t(i,j,3)=0$, hence $c_{ij}=c_{i3}=0$. The same is true when $i=1$ or $j=2$. Therefore, $c_{ij}=0$ for all $i \in W_1$ and $j \in W_2$, and similarly, $c_{ik}=0$ for all $i \in W_1$ and $k \in W_3$. Denote by $F$ the subgraph of $D$ spanned by $W_1$. If $i,j \in W_1$, $i \neq j$, and $c_{ij}=c_{ji}=1$, then $t(i,j,2)=2$ (since $c_{i2}=c_{2i}=c_{j2}=c_{2j}=0$). Hence, $F$ does not have a pair of opposite arcs. We know that $c_{14}=1$, $c_{41}=0$, so $F$ contains a transitive tournament of size $2$ spanned by vertices $1$ and $4$, and $1$ is the vertex of zero in-degree in this tournament. Let $T$ be a transitive tournament of the largest possible size contained in $F$ such that $1$ is its vertex of zero in-degree. Let $i$ denote the vertex of zero out-degree in $T$. As the out-degree of $i$ in the whole $D$ is odd, there exists a vertex $j$, distinct from $i$ and $1$, such that $c_{ij}=1$. As $c_{ij}=1$, $\;j$ cannot belong to $W_2$ or $W_3$. The two remaining cases are $j \in W_1$ and $j \in W_0$.
If $j \in W_1$, then $c_{ji}=0$. By the maximality of $T$, there exists a vertex $k$ in $T$ such that $c_{kj}=0$. As $i$ is the zero out-degree vertex in tournament $T$, we get $c_{ki}=1$ and $c_{ik}=0$. If $c_{jk}=0$, we get $t(k,i,j)=2$. If $c_{jk}=1$, we get $t(k,i,j,2,3)=3$. If $j \in W_0$, then $t(1,2,j)=0$. Hence, $c_{1j}=c_{12}=0$ and $c_{j1}=c_{j2}=0$. As $1$ is the zero in-degree vertex in tournament $T$, we get $c_{1i}=1$ and $c_{i1}=0$. If $c_{ji}=0$, we get $t(1,i,j)=2$. If $c_{ji}=1$, we get $t(1,i,j,2,3)=3$. \end{proof} \begin{lemma}\label{th:2m_1_4m_2} Let $m$ be even, and $M$ be a $(2m+1) \times (4m+2)$ matrix in binormal form. One can select $2m$ columns of $M$ whose sum is the $(2m+1)$-dimensional zero vector. \end{lemma} \begin{proof}[\bf{Proof}] Let $n=2m+1$, $M=[a_{ij}]$. Let $C=[c_{ij}]$ be an $n \times n$ binary matrix where $c_{ij} = a_{i,2j-1} = a_{i,2j}$ for $i \neq j$. The values of diagonal entries $c_{ii}$ are not important and may be set to zero. For a subset $I \subseteq \{1,2,\ldots,n\}$ and $i \in I$, we denote $\sigma_i(I) = \sum_{j \in I \backslash \{i\}} c_{ij}$. We say that $I$ is of {\it type} $t$ if among $|I|$ values $\sigma_i(I)$ with $i \in I$ there are exactly $t$ that are equal to $1$. Let $t(I)$ denote the type of $I$. Similarly to the proof of Lemma~\ref{th:2m_1_4m_5}, we can assume that the sum of off-diagonal entries in each row of $C$ is equal to $1$. Hence, $t(\{1,2,\ldots,n\})=n$. We claim that if there exists $I \subseteq \{1,2,\ldots,n\}$ such that $|I| = 2t(I)-1$, then one can select $2m$ columns of $M$ with zero sum. Indeed, as $t(\{1,2,\ldots,n\})=n$, we have $|I| \neq n$, so $|I| \leq n-2$. Without loss of generality, we may assume that $I=\{n-2t+2,n-2t+3,\ldots,n\}$ where $t=t(I)$, $2t \leq n-1$, $\sigma_i(I)=1$ for $n-2t+2 \leq i \leq n-t+1$, and $\sigma_i(I)=0$ for $n-t+2 \leq i \leq n$.
As $M[1:n-2t+1,\: 1:2(n-2t+1)]$ is in binormal form, by Lemma~\ref{th:Enomoto1}, there is a set of indices $j(i)\in\{2i-1,2i\}\:$ ($i=1,2,\ldots,n-2t+1$) such that the sum of columns $j(1),j(2),\ldots,j(n-2t+1)$ in $M[1:n-2t+1,\: 1:2(n-2t+1)]$ is equal to the $(n-2t+1)$-dimensional zero vector. As ${\displaystyle \sum_{j=1}^{n-2t+1} c_{ij} = 1 - \sigma_i(I)}$ for $n-2t+2 \leq i \leq n$, the sum of columns $j(1),j(2),\ldots,j(n-2t+1)$ in $M$ has the last $t-1$ entries equal to $1$, and the rest equal to $0$. Columns $2r-1$ and $2r$ in $M$ differ only in the $r$'th row, so the sum of columns $2n-2t+3,2n-2t+4,\ldots,2n$ in $M$ also has the last $t-1$ entries equal to $1$, and the rest equal to $0$. Set $j(i)=n+1+i$ for $n-2t+2 \leq i \leq n-1$. Then the sum of columns $j(1),j(2),\ldots,j(n-1)$ in $M$ is the $n$-dimensional zero vector. Let $D$ be a digraph with vertices $1,2,\ldots,n$ where an arc $(i,j)$ is present if and only if $c_{ij}=1$. As $n \equiv 1 \pmod{4}$, and every vertex has odd out-degree, $D$ satisfies the conditions of Lemma~\ref{th:digraph}. Thus, there is a subset $I \subset \{1,2,\ldots,n\}$ (a triple or a quintuple) such that $|I| = 2t(I)-1$. As we have just shown, this guarantees the existence of $2m$ columns in $M$ with zero sum. \end{proof} \begin{lemma}\label{th:2m_1_4m_3} Let $m$ be even, and $M$ be a $(2m+1) \times (4m+3)$ matrix in binormal form where the sum of all columns is a zero vector. If $M$ has two identical columns, then it also has a set of $2m$ columns with zero sum that includes at most one of the two identical columns. \end{lemma} \begin{proof} Set $n=2m+1$. Let $x_i = (a_{1i},a_{2i},\ldots,a_{ni})^T$ be the $i$th column of $M\:$ ($i=1,2,\ldots,2n+1$). As $M$ is in binormal form, we can define an $n \times n$ matrix $C=[c_{ij}]$ where $c_{ij} = a_{i,2j-1} = a_{i,2j}$ for $i \neq j$. The values of diagonal entries $c_{ii}$ are not important and may be set arbitrarily.
If one of the two identical columns is the last column, then the statement of the lemma follows from Lemma~\ref{th:2m_1_4m_2}. Hence, without loss of generality, we may assume that the two identical columns are $2n-2$ and $2n$. Suppose $c_{n,n-1} = 1$. Then $a_{n,2n-2} = 1$. As $x_{2n}=x_{2n-2}$, we get $a_{n,2n} = 1$ and $a_{n,2n-1} = 0$. As $M$ is in binormal form, $x_1+x_2+\ldots+x_{2n} = (1,1,\ldots,1)^T$. As the sum of all columns of $M$ is a zero vector, $x_{2n+1} = (1,1,\ldots,1)^T$. Add row $n$ to each row $i\in\{1,2,\dots,n-1\}$ where $a_{i,2n-1} \neq a_{i,2n+1}$. After this is done, columns $2n-1$ and $2n+1$ differ only in the last entry. Now swap columns $2n$ and $2n+1$. The resulting matrix $M'$ is in binormal form. By Lemma~\ref{th:2m_1_4m_2}, there is a set of $2m=n-1$ columns in $M'$ with zero sum that does not include column $2n+1$ of $M'$ (which originated from column $2n$ of $M$). In this case, $M$ has a set of $2m$ columns with zero sum that does not include column $2n$. Hence, we may assume $c_{n,n-1} = 0$, and similarly, $c_{n-1,n} = 0$. The $(n-1) \times (2n-2)$ submatrix $M[1:n-1,\: 1:2n-2]$ is in binormal form. By Lemma~\ref{th:Enomoto1}, there is a set of indices $j(i) \in \{2i-1,2i\}\:$ ($i=1,2,\ldots,n-1$) such that the sum of columns $j(1),j(2),\ldots,j(n-1)$ in $M[1:n-1,\: 1:2n-2]$ is the $(n-1)$-dimensional zero vector. We recall that $c_{n,n-1} = 0$. If $\sum_{j=1}^{n-2} c_{n,j} = 0$, then the sum of columns $j(1),j(2),\ldots,j(n-1)$ in $M$ is the $n$-dimensional zero vector. Hence, we may assume $\sum_{j=1}^{n-2} c_{n,j} = 1$, and similarly, $\sum_{j=1}^{n-2} c_{n-1,j} = 1$. The $(n-2) \times (2n-4)$ submatrix $M[1:n-2,\: 1:2n-4]$ is in binormal form. By Lemma~\ref{th:Enomoto1}, there is a set of indices $j(i) \in \{2i-1,2i\}\:$ ($i=1,2,\ldots,n-2$) such that $x_{j(1)} + x_{j(2)} + \ldots + x_{j(n-2)}$ has the first $n-2$ entries equal to $1$.
Since $\sum_{j=1}^{n-2} c_{n-1,j} = 1$ and $\sum_{j=1}^{n-2} c_{n,j} = 1$, the last two entries of $x_{j(1)} + x_{j(2)} + \ldots + x_{j(n-2)}$ are also equal to $1$. As $x_{2n+1}=(1,1,\ldots,1)^T$, we get $x_{j(1)} + x_{j(2)} + \ldots + x_{j(n-2)} + x_{2n+1} = 0$. \end{proof} \begin{proof}[\bf{Proof of Theorem \ref{th:s_2m_2m_1_even}}] By \eqref{eq:recursive} and Theorem~\ref{th:s_2m_2m}, $\mathsf{s}_{2m}(2m+1) \geq \linebreak \mathsf{s}_{2m}(2m) + 1 = 4m+2$. To prove $\mathsf{s}_{2m}(2m+1) \leq 4m+2$, consider a sequence $S$ of size $4m+2$ over $\mathbb{Z}_2^{2m+1}$. Let $M$ be a $(2m+1) \times (4m+2)$ binary matrix whose columns represent the vectors from $S$. We need to prove that $M$ has a set of $2m$ columns whose sum is a zero vector. By Theorem~\ref{th:s_2m_2m}, $4m+2 \geq \mathsf{s}_{2m}(2m)$. If the rank of $M$ is less than $2m+1$, then by Lemma~\ref{th:rank}, there is a set of $2m$ columns that sum up to a zero vector. Suppose that the rank of $M$ is $2m+1$. Let $x\in\mathbb{Z}_2^{2m+1}$ be the sum of all columns of $M$, and $x_{4m+2}\in\mathbb{Z}_2^{2m+1}$ be the last column. Add $x + x_{4m+2}$ to each column of $M$, and expand the matrix by a new column equal to $x$. In the resulting $(2m+1) \times (4m+3)$ matrix $M'$, the sum of all columns is a zero vector, and the last two columns are equal to $x$. $M'$ satisfies conditions of Lemma~\ref{th:Enomoto2} and can be brought to binormal form $M''$ by permutations of the columns and additions of one row to another. There are two identical columns in $M''$ that originated from the last two columns of $M'$. By Lemma~\ref{th:2m_1_4m_3}, there is a set of $2m$ columns in $M''$ with zero sum which includes at most one of these two columns. By Observation~\ref{th:operations}, $M'$ has a set of $2m$ columns with zero sum that does not include the last column. Therefore, $M$ has a set of $2m$ columns with zero sum. 
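The column manipulation used in the proof above (adding $x + x_{4m+2}$ to every column and appending a new column equal to $x$) can be checked over GF(2) for any binary matrix, not only one in binormal form; a minimal NumPy sketch (the value $m=4$ and the random matrix are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4                      # any even m; dimensions as in the theorem
n = 2 * m + 1              # vectors live in Z_2^(2m+1)

# random (2m+1) x (4m+2) binary matrix standing in for M
M = rng.integers(0, 2, size=(n, 4 * m + 2))

x = M.sum(axis=1) % 2      # sum of all columns
x_last = M[:, -1]

# add x + x_last to every column, then append a new column equal to x
Mp = (M + (x + x_last)[:, None]) % 2
Mp = np.concatenate([Mp, x[:, None]], axis=1)

# the sum of all 4m+3 columns is now the zero vector (4m+2 is even,
# so the added vector cancels in the total sum) ...
assert np.all(Mp.sum(axis=1) % 2 == 0)
# ... and the last two columns are identical (both equal to x)
assert np.array_equal(Mp[:, -2], Mp[:, -1])
```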
\end{proof} \section*{Acknowledgment} The author thanks the anonymous referees for careful reading and helpful suggestions. \bibliographystyle{elsarticle-num-names-alphsort}
\section{Introduction}\label{sec:introduction} Many candidate theories for physics beyond the standard model (BSM) consist of a gauge group and matter fields in representations of the gauge group such that multiple little groups are possible. Especially in grand-unified theories (GUTs) \cite{Langacker:1980js,O'Raifeartaigh:1986vq}, the usual phenomenology requires the choice of a particular little group, which coincides with the standard model gauge group\footnote{Tumbling theories \cite{Raby:1979my} will face the same issues eventually.}. This is usually done via a suitable gauge-fixing and a minimization of the corresponding (quantum) effective potential \cite{Bohm:2001yx,O'Raifeartaigh:1986vq}, which typically offers multiple degenerate solutions at tree level. Often, these degeneracies are lifted perturbatively beyond tree level, yielding a unique little group with the desired properties. This approach, however, is not gauge invariant. Even in the presence of a Brout-Englert-Higgs (BEH) effect, Becchi-Rouet-Stora-Tyutin (BRST) invariance is insufficient \cite{Fujikawa:1982ss} to guarantee gauge independence of the results, due to the Gribov-Singer ambiguity \cite{Gribov:1977wm,Singer:1978dk}. Since physics must be independent of the choice of gauge fixing \cite{Frohlich:1980gj,Frohlich:1981yi,Banks:1979fi}, the physical, fully gauge-invariant spectrum needs to be built from gauge-invariant composite operators \cite{Frohlich:1980gj,Frohlich:1981yi,Banks:1979fi}. This will generally yield a different spectrum than the gauge-dependent one of the elementary degrees of freedom \cite{Maas:2015gma,Maas:2017xzh,Sondenheimer:2019idq}, but can be described by an augmented perturbation theory \cite{Maas:2017xzh,Sondenheimer:2019idq,Maas:2020kda,Dudal:2020uwb}, implementing the Fröhlich-Morchio-Strocchi (FMS) mechanism \cite{Frohlich:1980gj,Frohlich:1981yi}. 
This effect and its description using augmented perturbation theory has been supported in lattice calculations \cite{Maas:2016ngo,Maas:2018xxu,Afferrante:2020hqe,Greensite:2020lmh,Dobson:2021sgl} and a review of the situation can be found in \cite{Maas:2017wzi}. This naturally raises the question whether the selection of the little group is also affected, and whether the selection of the physical, gauge-invariant realization by means of a BEH effect is possible at all \cite{Maas:2017xzh}. If not, what determines the dynamics and the spectrum of the theory? These are important questions to address, given the phenomenological relevance of GUTs. A definitive answer requires a fully non-perturbative approach, which can determine the physical and gauge-fixed quantities equally well. Lattice gauge theory is one such possibility. While there have been various investigations which could access these questions in the past \cite{Olynyk:1984pz,Lee:1985yi,Olynyk:1985tr,Olynyk:1985cd,Kajantie:1998yc}, they have either focused only on the physical results or used perturbative means to cover the gauge-fixed sector. The aim here is to cover both aspects on equal footing. We therefore concentrate on the simplest model which exhibits the aforementioned properties: an \SU[3] gauge theory, coupled to a single scalar `Higgs' field in the adjoint representation. Lacking the strong sector, it is not a viable candidate for a full GUT, but it is the simplest non-Abelian theory with multiple little groups, while being at the same time computationally accessible. We start by summarizing the usual perturbative construction and the obstructions for this theory in section \ref{s:patterns}. We then discuss the structures of the symmetries of this theory in detail in section \ref{s:symmetry}, and the implication for different gauge choices in section \ref{s:gauge}. The technical aspects of the lattice implementation are given in section \ref{s:lattice}. 
This section can be skipped if the reader is not interested in these details. We examine the phase structure and symmetries of the theory in section \ref{s:results}, and analyse the implications of our results. In this context we also see that the gauge-dependent degrees of freedom do indeed behave as expected in most parts of the phase diagram, and we do not have an artificial contradiction by erroneously calculating at an effectively strong coupling. Our results show that only a single breaking pattern survives for a given set of parameters and gauge-fixing strategy. We also determine the full quantum effective potential to see that this does not just come about by lifting a degeneracy between different minima, but that the other minima are found to be eliminated altogether. However, we find that the choice of gauge fixing strongly affects which options are possible. In particular, unitary gauge and Landau--'t Hooft gauge show substantially different behaviours, and there is subtlety in precisely defining a suitable gauge. We thus conclude in section \ref{s:summary} with the somewhat pessimistic outlook that there is much less freedom in choosing the dynamics of a GUT than seems to be possible at the perturbative level. We supplement this work with several technical appendices, including an outlook how to generalize the present constructions beyond \SU[3], which is still a somewhat special case. Some preliminary results can also be found in \cite{Dobson:2021sgl}. \section{Multiple breaking patterns at tree-level}\label{s:patterns} Our adjoint \SU[3] gauge-scalar theory can be described by the Lagrangian \begin{align}\label{eqn:general_lagrangian} \cL & = -\frac{1}{4}W_{\mu\nu}W^{\mu\nu}+2\tr[(D_\mu\Sigma)^\dag (D^\mu\Sigma)]- V(\Sigma)\nonumber \\ W^{\mu\nu} & =\pd^\mu W^\nu-\pd^\nu W^\mu + ig [W^\mu,W^\nu]\,, \end{align} where $W^{\mu\nu}$ is the gauge field strength tensor and $V(\Sigma)$ is a gauge-invariant potential that allows for a BEH effect. 
The scalar field $\Sigma$ is in the adjoint representation of \SU[3], and is acted on by the covariant derivative $D_\mu$ as \begin{equation} (D_\mu\scA)(x) = (\pd_\mu\scA)(x) + i g[W_\mu(x),\scA(x)]\,. \end{equation} Under a gauge transformation $G$, the scalar transforms as \begin{equation}\label{eq:adj_gt} \scA(x)\to G(x)\scA(x)G^\dag(x)\,. \end{equation} For later convenience, it is also useful to define the fields in terms of the corresponding $\RR^8$ coefficient vectors as \begin{equation}\label{eqn:vec} \Sigma=\Sigma^a T_a \qquad W_\mu = W_\mu^a T_a \qqtext{with} T_a = \frac{\lambda_a}{2}\,, \end{equation} where the $T^a$ are the generators and $\lambda_a$ the Gell-Mann matrices. The components of the scalar transform homogeneously, as $\Sigma_i(x)\mapsto G_{ij}(x)\Sigma_j(x)$. Apart from the gauge symmetry, the kinetic part also has a global \ZZ[2] symmetry under the transformation $\scA(x)\mapsto -\scA(x)$. The potential is chosen to leave this global symmetry intact, \begin{equation}\label{eqn:adj_potential} V = -\mu^2\tr\qty[\Sigma^2] + \lambda\qty(\tr\qty[\Sigma^2])^2\,. \end{equation} We start by following the path of the standard BEH mechanism using this expression. That is, the scalar field is split into a non-vanishing vacuum expectation value (vev) which points into a certain direction $\Sigma_0$ with absolute value $w>0$, and fluctuations $\sigma$ around this vev, which are assumed to be small\footnote{We note that we are implicitly using that only a subset of all possible breaking patterns can occur for a potential renormalizable by power counting \cite{O'Raifeartaigh:1986vq}. For the present \SU[3] case, however, this potential exhausts all possible cases. This is no longer true for $N>3$. Higher-order tree-level potentials, which may be considered also for GUTs as in the standard model case \cite{Gies:2014xha,Eichhorn:2015kea}, can in principle realize all possible subgroups, according to Morse theory \cite{O'Raifeartaigh:1986vq}. 
This may also happen at the level of the quantum effective potential.}: \begin{equation}\label{eqn:adj_beh} \Sigma\qty(x) = \ev{\Sigma} + \sigma\qty(x) = w\Sigma_0 + \sigma\qty(x)\,. \end{equation} Note that by specifying $w\Sigma_0$ we completely fix the direction of the vev. This has far-reaching consequences to be discussed below. \begin{figure}[t!] \centering \includesvg[width=.8\linewidth]{{break_pat}} \caption{ Different breaking patterns in terms of the matrix invariants $\tr\qty[\Sigma^2]$ and $\tr\qty[\Sigma^3]$. Note that there is no two-dimensional region in this plot with a symmetry pattern other than $\U[1]\times\U[1]$. The colour gradient is added for later convenience. } \label{fig:su3break} \end{figure} \begin{table*}[t!] \begin{tabular}{@{}l@{\hspace{4em}}c@{\hspace{4em}}c@{\hspace{4em}}c@{}}\toprule & \multicolumn{3}{@{}c@{}}{Breaking pattern} \\ \cmidrule(r){2-4} & $\SU[2]\times\U[1]$ & $\U[1]\times\U[1]^\star$ & $\U[1]\times\U[1]$ (generic) \\ \midrule % eigenvalues & $\{\lambda,\lambda, -2\lambda\}$ & $\{\lambda, -\lambda, 0\}$ & $\{\lambda_1, \lambda_2, -(\lambda_1+\lambda_2)\}$ \\ \addlinespace[.5em] \multirow{2}{*}{vev-alignment $\qty(\Sigma_3,\Sigma_8)$} & $\pm\qty(0,1)$ & $\pm\qty(1,0)$ & \multirow{2}{*}{other} \\ & $\pm\qty(\sqrt{3}/2,\pm1/2)$ & $\pm\qty(1/2,\pm\sqrt{3}/2)$ & \\ \addlinespace[.5em] % breaking angle $\theta_0$ & $\tfrac{\qty(2n+1)\pi}{6} \text{ for }n=0,\dots,5$ & $\tfrac{2n\pi}{6} \text{ for }n=0,\dots,5$ & other \\ \addlinespace[.5em] % gauge boson degeneracy & $\qty(4,4)$ & $\qty(2,4,2)$ & $\qty(2,2,2,2)$ \\ \addlinespace[.5em] % scalars $(0,m_H)$ & $\qty(3,1)$ & $\qty(1,1)$ & $\qty(1,1)$ \\ \bottomrule \end{tabular} \caption{Symmetry breaking patterns as a function of the direction of the chosen vev, i.e. $\theta_0$. In the general case there are three distinct non-zero eigenvalues and the vev is invariant under $\U[1]\times\U[1]$. 
For six special values of $\theta_0$ the eigenvalues degenerate, leading to a $\SU[2]\times\U[1]$ symmetry. For a further six values, the determinant is zero, resulting in a special case of the $\U[1]\times\U[1]$ pattern, denoted by $\U[1]\times\U[1]^\star$, which is invariant under the transformation $\theta_0\mapsto -\theta_0$. Note that the overall scale of $\Sigma$ is given by $w$ and that the ordering of the eigenvalues and the respective massless gauge boson modes differs in the different $\theta_0$ sectors. However, the number of massless and massive modes is fixed as stated here. } \label{tab:adj_patterns} \end{table*} The different breaking patterns originate from the existence of more than one unitarily inequivalent\footnote{That is, there is no gauge transformation (\ref{eq:adj_gt}) which can map them into each other.} direction for $w\Sigma_0$. Since $\Sigma_0$ is diagonalizable and traceless, the most general vev can be parameterized by an angle $\theta_0$ in the two-dimensional Cartan of \SU[3], with \begin{equation}\label{eqn:adj_breaking_angle} \Sigma_0 = \cos\qty(\theta_0) T_3 + \sin\qty(\theta_0)T_8\,. \end{equation} Both $w$ and the eigenvalues of $\Sigma_0$ can in principle be obtained from the scalar field without explicitly gauge-fixing by using the matrix invariants\footnote{Note that in the present theory the relation $\tr\qty[\Sigma^3]=3\det\qty[\Sigma]$ holds and can alternatively be used, as has been done in \cite{Dobson:2021sgl}. However, with a generalization to \SU[N] groups in mind, it makes more sense to use only quantities which are renormalizable by power counting for any $N$.} $\tr\qty[\Sigma^3]$ and $\tr\qty[\Sigma^2]$ \cite{Olynyk:1984pz}. 
For the specific definition of the vev-direction in \cref{eqn:adj_breaking_angle}, the angle $\theta_0$ and $w$ can be obtained from \begin{align} w & = \sqrt{2\tr\qty[\Sigma^2]} \label{eqn:w_gauge_invariant} \\ \sin(3\theta_0) & = \sqrt{6}\frac{\tr\qty[\Sigma^3]}{\tr\qty[\Sigma^2]^{3/2}}\,. \label{eqn:theta_gauge_invariant} \end{align} However, interpreting the angle $\theta_0$ in terms of a vev-direction already requires implicit gauge-fixing, to obtain the relation (\ref{eqn:theta_gauge_invariant}). Therefore, one should keep in mind that whenever angles or breaking patterns are involved, the gauge has been fixed, either implicitly or explicitly. Depending on $\Sigma_0$, the corresponding eigenvalues can have different degeneracies, giving either a breaking pattern of $\SU[2]\times\U[1]$ or $\U[1]\times\U[1]$. Which one is realized depends on the choice of the vev-direction \cite{Maas:2017xzh}. \Cref{tab:adj_patterns} lists the possible patterns. In the plane of matrix invariants, as shown in \cref{fig:su3break}, it is visible that the two breaking patterns $\SU[2]\times\U[1]$ and $\U[1]\times\U[1]^{\star}$ from \cref{tab:adj_patterns} play a special role. \begin{figure*}[th!] 
\centering \begin{subfigure}{0.18\textwidth} \includesvg[width=\hsize]{{circle_1}} \caption*{$\tr\qty[(\Sigma/w)^3]=-\tfrac{1}{4\sqrt{3}}$} \end{subfigure} \hfill \begin{subfigure}{0.18\textwidth} \includesvg[width=\hsize]{{circle_2}} \caption*{\mbox{$-\tfrac{1}{4\sqrt{3}}<\tr\qty[(\Sigma/w)^3]<0$}} \end{subfigure} \hfill \begin{subfigure}{0.18\textwidth} \includesvg[width=\hsize]{{circle_3}} \caption*{$\tr\qty[(\Sigma/w)^3]=0$} \end{subfigure} \hfill \begin{subfigure}{0.18\textwidth} \includesvg[width=\hsize]{{circle_4}} \caption*{$0<\tr\qty[(\Sigma/w)^3]<\tfrac{1}{4\sqrt{3}}$} \end{subfigure} \hfill \begin{subfigure}{0.18\textwidth} \includesvg[width=\hsize]{{circle_5}} \caption*{$\tr\qty[(\Sigma/w)^3]=\tfrac{1}{4\sqrt{3}}$} \end{subfigure} \caption{The additional discrete symmetries of the breaking patterns visualized as intersection points of the unit circle with $\tr\qty[(\Sigma/w)^3]$. The coloured sectors correspond to the $2\pi/3$ periodicity of the system in $\theta_0$. For the $\SU[2]\times\U[1]$ pattern (leftmost and rightmost panels) there are three intersection points, yielding a \ZZ[3] symmetry. Likewise, the $\U[1]\times\U[1]^{\star}$ (middle panel) special case has six intersection points, and thus a $\ZZ[6]\sim S_3$ symmetry. The other cases correspond to $\U[1]\times\U[1]$ with six intersection points, but only a \ZZ[3] symmetry.} \label{fig:su3intersects} \end{figure*} Because \cref{eqn:theta_gauge_invariant} yields a $2\pi/3$ periodicity in $\theta_0$, there is a further freedom in choosing $\theta_0$. The full range of $2\pi$ decomposes into 6 sectors of size $\pi/3$, two pairs always related by the global $\ZZ[2]$ symmetry. This corresponds to the $3!=6$ possible permutations of the eigenvalues of $\Sigma_0$ and yields additional unbroken discrete groups, as shown in figure \ref{fig:su3intersects}. 
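The relations (\ref{eqn:w_gauge_invariant}) and (\ref{eqn:theta_gauge_invariant}), together with the $2\pi/3$ periodicity just described, can be verified numerically; a minimal NumPy sketch (the test values for $w$ and $\theta_0$ are arbitrary):

```python
import numpy as np

# Cartan generators T_3 and T_8 in the conventions of the text
T3 = np.diag([0.5, -0.5, 0.0])
T8 = np.diag([1.0, 1.0, -2.0]) / (2 * np.sqrt(3))

def invariants(S):
    """Recover (w, sin(3*theta_0)) from the gauge-invariant traces."""
    tr2 = np.trace(S @ S).real
    tr3 = np.trace(S @ S @ S).real
    w = np.sqrt(2 * tr2)
    sin3t = np.sqrt(6) * tr3 / tr2 ** 1.5
    return w, sin3t

w, theta0 = 1.7, 0.31                       # arbitrary test values
Sigma = w * (np.cos(theta0) * T3 + np.sin(theta0) * T8)

w_rec, sin3t = invariants(Sigma)
assert np.isclose(w_rec, w)
assert np.isclose(sin3t, np.sin(3 * theta0))

# 2*pi/3 periodicity: theta0 and theta0 + 2*pi/3 give the same invariants
Sigma_shift = w * (np.cos(theta0 + 2 * np.pi / 3) * T3
                   + np.sin(theta0 + 2 * np.pi / 3) * T8)
assert np.isclose(invariants(Sigma_shift)[1], sin3t)
```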
Thus, so far the breaking patterns are actually $\SU[2]\times\U[1]\times \ZZ[3]$, $\U[1]\times\U[1]\times \ZZ[3]$, and $\U[1]\times\U[1]\times \ZZ[6]$, where the latter is the one referred to as $\U[1]\times\U[1]^{\star}$. The existence of these discrete groups will be relevant for the possible choices of gauges below. Since the interval $[-\pi/6,\pi/6]$ is the most convenient for later use, we choose it as the fundamental domain in the following. Inserting (\ref{eqn:adj_breaking_angle}) into the potential (\ref{eqn:adj_potential}) shows that the minimum of the potential is independent of $\theta_0$, and thus all values of $\theta_0$ correspond to a possible choice at tree-level, as long as $w=\pm\mu/\sqrt{\lambda}$. Note that both values of $w$ are mapped into each other under the global $\ZZ[2]$ symmetry. At the same time, there is no gauge transformation which has the same effect, as (\ref{eq:adj_gt}) leaves the sign of $\Sigma$ unchanged. Additionally, it should be mentioned that only in the \SU[3] theory is the angle-independence of the potential sufficient to show that all angles minimize the potential\footnote{If the potential were to have an additional $\ZZ[2]$-breaking term $\sim\tr[\Sigma^3]$, then this would no longer be the case.}. This is due to the enhanced \O[8]-symmetry of the potential and cannot be assumed for the general \SU[N] case. A more detailed discussion on this is given in \cref{app:sun}. At tree-level, the two different breaking patterns lead to drastically different spectra for the elementary particles, which can be sorted into the representations of the unbroken subgroups. The massive gauge bosons appear either as oppositely charged pairs of the \U[1] subgroups or as doublets of an unbroken \SU[2]. 
As a consequence, the massive gauge bosons always come in pairs, with masses \begin{subequations}\label{eqn:boson_masses} \begin{align} m_1 & = gw\cos\qty(\theta_0) \\ m_2 & = \frac{gw\left(\cos\qty(\theta_0)+\sqrt{3}\sin\qty(\theta_0)\right)}{2} \\ m_3 & = \frac{gw\left(\cos\qty(\theta_0)-\sqrt{3}\sin\qty(\theta_0)\right)}{2}\,. \end{align} \end{subequations} Thus, for special values of $\theta_0$ some doublets can degenerate, or a doublet can have vanishing mass. There are always two massless gauge bosons corresponding to the unbroken subgroups, i.e. one for each unbroken \U[1]. In the $\SU[2]\times\U[1]$ case there is a triplet for the unbroken \SU[2]. Thus, there are 2 and 4 massless modes for the $\U[1]\times\U[1]$ and $\SU[2]\times\U[1]$ patterns, respectively. In addition, there is always a scalar with independent mass $m_H$, as well as one or three remaining massless scalars for the $\U[1]\times\U[1]$ and $\SU[2]\times\U[1]$ patterns, respectively. Of course, in the unbroken case all gauge bosons (and scalars) are degenerate. The gauge-invariant composite spectrum from augmented perturbation theory is different from the elementary one, and also differs between the breaking patterns with and without \SU[2] subgroup. See \cite{Maas:2017xzh} for details. \section{Gauge symmetry, vacuum expectation value, and global symmetry}\label{s:symmetry} As the condition in \cref{eqn:adj_beh} is gauge-dependent \cite{Lee:1974zg,Frohlich:1980gj,Frohlich:1981yi,Maas:2012ct}, its implementation requires gauge-fixing, a process which leads to some subtleties. In the present case there are two particular questions to answer. First: is there a BEH effect at all, i.e. is $w\neq 0$? Second: if there is a BEH effect, what is the value of $\theta_0$? At tree-level, the presence of a BEH effect is determined by a condition on $w$ alone, with no constraint on $\theta_0$. However, beyond tree-level, the questions need to be phrased more carefully. 
The issue is that $\Sigma_0$ is not gauge-invariant. Furthermore, it is known from the standard-model case of an \SU[2] gauge theory with a fundamental scalar doublet that the question of $|w|>0$ is potentially dependent on the gauge in which it is calculated \cite{Lee:1974zg,Caudy:2007sf,Maas:2012ct}. This may also apply to $\theta_0$. All of this can be traced back to the fact that the BEH effect is really just fixing the gauge in the presence of a particular type of potential, such that \begin{equation}\label{eqn:beh} \left\langle \frac{1}{V}\int \dd[4]{x}\Sigma\qty(x)\right\rangle=w\Sigma_0 \end{equation} can hold\footnote{This will only work in a suitably chosen renormalization scheme.}. Thus, the right question to be asked is whether there exists a gauge, for a fixed set of tree-level parameters $(g,\mu,\lambda)$, such that \cref{eqn:beh} holds for a given set of values $(w,\theta_0)$. Currently, we do not have the means to check all possible gauges, so we will concentrate here on whether this is true for unitary gauge and (minimal\footnote{The Gribov-Singer ambiguity seems not to be quantitatively relevant in the presence of a BEH effect (i.e.\ $|w|>0$) in Landau gauge, despite its qualitative relevance to the asymptotic state space by invalidating the perturbative BRST construction \cite{Fujikawa:1982ss}. Hence, the specification of how Gribov copies are treated is likely not relevant. However, this is so far only circumstantial evidence \cite{Maas:2010nc,Lenz:2000zt}.} \cite{Maas:2011se}) Landau--'t Hooft gauge. There is already one immediate issue in \cref{eqn:beh}: it is not invariant under a global $\ZZ[2]$ transformation. At first sight this is nothing dramatic, as there are other cases where the gauge condition in a BEH effect breaks global symmetries \cite{Maas:2017wzi}. However, there always remains an unbroken global diagonal symmetry of the same type. 
This diagonal symmetry originates from the fact that a global transformation can be undone by a gauge transformation, such that the gauge condition remains invariant. In the present case, changes of the gauge condition by a \ZZ[2]-transformation cannot be undone by a gauge transformation in general, since (\ref{eq:adj_gt}) cannot change the sign of $\Sigma$. The only exception is when $\Sigma_0$ has the special $\U[1]\times \U[1]^\star$ form in table \ref{tab:adj_patterns}, for which a $\ZZ[2]$ transformation is equivalent to reordering the eigenvalues. Otherwise, for every space-time averaged value of $\Sigma$, the path integration also contains one with opposite sign, and the two necessarily average to zero. Thus, strictly speaking, the gauge condition (\ref{eqn:beh}) can in general not be satisfied. Hence, except where it is possible to fix to the $\U[1]\times \U[1]^\star$ pattern, no BEH effect would be possible at all. Of course, as soon as the global $\ZZ[2]$ symmetry is spontaneously or explicitly broken, this is again possible, and we will discuss this situation below. If we step away from the desire to emulate perturbation theory by implementing (\ref{eqn:beh}) for a moment, it is helpful to reconsider what a BEH effect truly is. In fact, a BEH effect is long-range order in the direction of the scalar field. This can be detected using the gauge-fixed scalar field, e.g.\ by a suitable renormalized expression like \cite{Caudy:2007sf,Maas:2017wzi} \begin{equation}\label{eqn:adj_gauge_order_lat} \left\langle \left(\frac{1}{V}\int_V \dd[4]{x} \Sigma(x)\right)^2\right\rangle\,, \end{equation} which will not vanish in the infinite-volume limit $V\to\infty$ in the presence of long-range ordering. Therefore, long-range order and the physical content of the BEH effect can be detected while foregoing the implementation of (\ref{eqn:beh}). 
The physical consequences, like massive gauge bosons, then remain, but are generated non-perturbatively \cite{Maas:2012ct,Maas:2013aia}. \begin{figure}[t!] \centering \includesvg[width=\linewidth]{{vev}} \caption{Normalized components of the vev as a function of $\theta_0$ in the fundamental domain and the corresponding breaking patterns for specific values.} \label{fig:comp} \end{figure} Another option to resolve the issue arises from the periodicity of the breaking angle. In fact, there exists for every field configuration a gauge transformation such that $\theta_0$ can be rotated into the interval between $-\pi/6$ and $\pi/6$, by performing a permutation of the eigenvalues. Thus, by constraining $\Sigma_0$ to lie in this interval, we still break the $\ZZ[2]$ symmetry, but there now exists an unbroken diagonal subgroup. That subgroup contains a $\ZZ[2]$ transformation followed by a suitably chosen element of the $S_3$ permutation subgroup of \SU[3], which transfers the vev back into this $\theta_0$ interval. Therefore, the action of this diagonal subgroup changes the value of $\theta_0$ to $-\theta_0$\footnote{Note that the sign-flip of the angle only holds for the specific definition of the fundamental region above. In general this transformation mirrors the angle around the centre of the chosen domain.}. However, there is no preference for $\theta_0$ to be greater or smaller than zero with regard to the breaking pattern itself. Thus, the distribution will be symmetric, manifesting the $\ZZ[2]$ symmetry of the theory. Nevertheless, this does not imply that $\Sigma_0$ itself is symmetric, as can be seen in \cref{fig:comp}. It is thus entirely possible to have different vevs, even when averaging over a symmetric distribution of $\theta_0$. Additionally, this subgroup still preserves the fact that the $\U[1]\times\U[1]^\star$ pattern is intrinsically $\ZZ[2]$-invariant. 
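For diagonal vev directions, the diagonal subgroup described above can be made explicit; a short sketch verifying that the global $\ZZ[2]$ flip of $\Sigma_0(\theta_0)$ equals $\Sigma_0(-\theta_0)$ up to a permutation of the eigenvalues (here the transposition of the first two; the sign needed to make $\det P=+1$ drops out of the adjoint action):

```python
import numpy as np

T3 = np.diag([0.5, -0.5, 0.0])
T8 = np.diag([1.0, 1.0, -2.0]) / (2 * np.sqrt(3))

def Sigma0(theta):
    """Diagonal vev direction for breaking angle theta."""
    return np.cos(theta) * T3 + np.sin(theta) * T8

theta = 0.4          # arbitrary angle inside the fundamental domain

# permutation exchanging the first two eigenvalues; the adjoint action
# G S G^dagger is insensitive to the sign making det = +1
P = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

lhs = -Sigma0(theta)             # global Z_2 flip of the vev direction
rhs = P @ Sigma0(-theta) @ P.T   # permutation image of Sigma0(-theta)
assert np.allclose(lhs, rhs)
```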
This now allows a useful definition of a BEH effect in this theory: \emph{A BEH effect occurs in a fixed gauge if and only if, after performing a gauge transformation for each configuration such as to fulfil (\ref{eqn:beh}) with $\theta_0$ between $-\pi/6$ and $\pi/6$, the quantity (\ref{eqn:adj_gauge_order_lat}) (or equivalently $|w|$) does not vanish in the infinite volume limit.} The resulting value of $\Sigma_0$ then determines the breaking pattern possible for the given set of parameters --- i.e.\ the direction of $\Sigma_0$, and thus the breaking pattern, cannot be predetermined. It is that value of $\Sigma_0$ around which gauge-fixed expansion techniques need to expand. Only in the case of breaking-pattern coexistence is there the possibility to choose a pattern. To identify coexistence, it is necessary to determine the quantum effective action and analyse its structure. Since the vev (\ref{eqn:beh}) has by construction two independent parameters, the quantum effective action likewise needs two sources. To obtain a structure which is renormalizable by power counting, the quantum effective action can be constructed in the following way. After gauge-fixing, define the two components \begin{align} \Sigma_3 & = 2\tr\qty[\Sigma T_3] \\ \Sigma_8 & = 2\tr\qty[\Sigma T_8] \end{align} and introduce two constant sources $j_3$ and $j_8$. The source-dependent partition function then defines the free energy $W(j_3,j_8)$ \begin{equation} e^{W\qty(j_3,j_8)}=\int{\cal D}W_\mu\,{\cal D}\Sigma\, e^{iS+\int \dd[d]{x}\left(j_3\Sigma_3+j_8\Sigma_8\right)}. 
\end{equation} Performing a Legendre transformation with respect to $j_3$ and $j_8$ yields the quantum effective action as a function of the classical (space-time independent) fields $\sigma_3$ and $\sigma_8$, \begin{align} \Gamma\qty(\sigma_3,\sigma_8) & =\sigma_3 j_3+\sigma_8 j_8-W\qty(j_3,j_8) \\ \sigma_i & =\frac{\partial W}{\partial j_i}\,, \end{align} which of course requires resolving the sources $j_i$ as functions of the classical fields; in general, each $j_i$ depends on both $\sigma_3$ and $\sigma_8$. In the classical limit this will reproduce the classical action. The quantum effective action is necessarily a convex function, and can therefore not have multiple minima. Multiple breaking patterns will manifest themselves as flat sections, which is essentially what can be expected from a Maxwell construction \cite{Rivers:1987hi}. It should be noted that the free energy and the quantum effective action are perturbatively analytic functions of the classical fields \cite{Bohm:2001yx}. But the quantum effective action does not need to be an analytic function beyond perturbation theory: for example, non-analyticities occur in Yang-Mills theory in minimal Landau gauge \cite{Maas:2013sca}. It now remains to uncover what is possible. As it turns out, this will indeed depend on the gauge choice. \section{Gauge choices}\label{s:gauge} We will consider two gauge choices, Landau--'t Hooft gauge and unitary gauge, the two extremes of the $R_\xi$ condition for $\xi=0$ and $\xi\to\infty$, respectively. \subsection{Unitary gauge} Consider for the moment unitary gauge. It is defined by fixing the direction of the vev locally, which in the present case translates to diagonalizing $\Sigma(x)$. Since diagonalization is invariant under permutations of the eigenvalues, it is possible to impose strong ordering, in the sense that the diagonal elements are sorted by size\footnote{Different kinds of ordering schemes simply put the local breaking angles into different sectors of size $\pi/3$.}. 
As a consequence, the vacuum expectation value \begin{align}\label{eqn:vev} \left\langle \frac{1}{V}\int_V \dd[4]{x} \Sigma(x)\right\rangle \end{align} is necessarily non-vanishing if $\Sigma(x)$ is non-zero in any sizeable fraction of the volume $V$. The forceful alignment of space-time-adjacent fields enforces maximal long-range order on the scalar field. As a consequence, if the scalar field has a non-vanishing amplitude at all, it will have a vacuum expectation value. As will be seen, this is then the case throughout the phase diagram. Thus, unitary gauge enforces a BEH effect for all values of the parameters. In principle this is also expected to happen in other theories with a BEH effect. This appears counter-intuitive at first, as it is known that there are strongly-interacting regions within the phase diagram, in which the theory is essentially QCD-like. The reason for this behaviour is, of course, a trade-off when gauge-fixing. By enforcing order on the scalar field, any disorder is transferred to the gauge fields, which therefore will strongly fluctuate. Thus, while a vacuum condensate is enforced in this way, the strong gauge-field fluctuations, as well as possible local amplitude fluctuations of the Higgs field, will invalidate any perturbative calculations. In particular, despite the presence of a vev, there may be no mass generation for the gauge bosons. This can be remedied, but only at the expense of deviating from the perturbative gauge condition (\ref{eqn:beh}). Instead of enforcing a strong ordering of the eigenvalues, we note that the breaking patterns do not rely on the ordering of the eigenvalues as listed in table \ref{tab:adj_patterns}. However, long-range ordering implies that the ordering of the eigenvalues needs to be correlated over long distances. Thus, we fix to unitary gauge, but only admit gauge transformations from the coset \SU[3]$/S_3$, where $S_3$ is the permutation group on three elements. 
In practice, diagonalization algorithms do not ensure such an ordering. To enforce it, we locally determine, before diagonalization, an angle $\theta\qty(x)$ in analogy to (\ref{eqn:theta_gauge_invariant}). We then reorder the eigenvalues after diagonalization such that the sign of the local angle $\theta_0\qty(x)$ coincides with the original up to a rotation of the domain. This leaves us in the end with local $\Sigma_0\qty(x)$-matrices, as in \cref{eqn:adj_breaking_angle}, with angles in the region $\theta_0\qty(x)\in[0,\pi/6]$ for $\theta\qty(x) \ge 0$ and $\theta_0\qty(x)\in(\pi/2,7\pi/6]$ for $\theta\qty(x) < 0$. This means that on a local level the global $\ZZ[2]$ is still preserved. Although the ordering is restricted in some sense, the preservation of the (diagonal) \ZZ[2] symmetry again implies that, when calculating the expectation value over several configurations, for every configuration there exists another one with exactly the opposite ordering. Thus, in this case (\ref{eqn:vev}) vanishes. This allows us to unambiguously identify the presence of long-range ordering in this gauge using (\ref{eqn:adj_gauge_order_lat}). The expectation value of the angle $\theta_0$ obtained in the end is again restricted to the fundamental domain, due to the properties of the $\arcsin$-function, although the local angles do not lie within this domain. What should be noted is that both prescriptions effectively remove the discrete $\ZZ[3]$ groups (see figure \ref{fig:su3intersects}). However, the first prescription also explicitly breaks the global $\ZZ[2]$, while the second does not. As a consequence of the explicit breaking of the global $\ZZ[2]$, the vev is enforced everywhere. Thus, such a prescription would only be allowed if either boundary conditions or the embedding of the theory required such an explicit breaking, while the second one is always applicable. 
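How the reordering freedom populates the six sectors can be illustrated directly: the six eigenvalue orderings of a diagonal $\Sigma_0(\theta_0)$ give six distinct angles, all sharing the same invariant $\sin(3\theta_0)$, with one angle per sector of size $\pi/3$. A minimal sketch (the generic value $\theta_0=0.2$ and the sector offset of $\pi/6$ are conventions chosen for this illustration):

```python
import numpy as np
from itertools import permutations

T3 = np.diag([0.5, -0.5, 0.0])
T8 = np.diag([1.0, 1.0, -2.0]) / (2 * np.sqrt(3))

theta = 0.2                                   # generic angle in (0, pi/6)
eig = np.diag(np.cos(theta) * T3 + np.sin(theta) * T8)

angles = []
for p in permutations(range(3)):
    d = eig[list(p)]
    # components Sigma_3 = 2 tr[D T_3], Sigma_8 = 2 tr[D T_8] of the
    # reordered diagonal matrix, and the corresponding breaking angle
    s3, s8 = d[0] - d[1], (d[0] + d[1] - 2.0 * d[2]) / np.sqrt(3)
    angles.append(np.arctan2(s8, s3) % (2 * np.pi))

# six distinct angles, one per pi/3 sector (sectors offset by pi/6) ...
sectors = sorted(int((a + np.pi / 6) // (np.pi / 3)) % 6 for a in angles)
assert sectors == [0, 1, 2, 3, 4, 5]
# ... all with the same gauge-invariant value of sin(3*theta)
assert np.allclose(np.sin(3.0 * np.array(angles)), np.sin(3.0 * theta))
```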
However, all expansion schemes \cite{Bohm:2001yx,Maas:2017xzh} rely on a unique value for the vacuum expectation value. Thus, to eventually obtain configurations fixed to the same gauge as in the continuum, one first has to establish that long-range ordering persists for a given set of lattice parameters, and then require the eigenvalues to agree with (\ref{eqn:adj_beh}). This is therefore a two-step process. Of course, when a BEH effect is possible, this procedure will yield the same result as enforcing long-range order. Differences will only arise in the case of non-BEH-like dynamics, especially for QCD-like dynamics. \subsection{Landau--'t Hooft gauge} An alternative and independent approach is to use (minimal) Landau--'t Hooft gauge. In that case, the gauge fields are first locally transformed to (minimal) Landau gauge, which leaves the global gauge freedom unaffected. The corresponding gauge transformation is then applied to the local scalar fields. The space-time-averaged expectation value of the scalar field is then used to obtain a global gauge transformation as in the unitary gauge, i.e.\ diagonalization with ordered eigenvalues. This global transformation is then applied to the local gauge and scalar fields. In this case it is not necessary to artificially preserve the diagonal \ZZ[2]-symmetry locally while diagonalizing, since Landau gauge enforces maximum smoothness on the gauge fields locally \cite{Cucchieri:1995pn}. This avoids, to a large extent, transferring local correlations to the gauge fields and thus keeps the local correlations between the scalar fields intact. As a consequence, a non-vanishing vacuum expectation value is not obtained everywhere, but only where long-range ordering exists. \section{Lattice implementation}\label{s:lattice} \subsection{Action and configurations} To investigate the phase diagram, we perform lattice simulations.
To this end, we use the Wilson action for the theory \cite{Montvay:1994cy} \begin{subequations}\label{eqn:adj_lattice_lagrangian} \begin{align} S= & \sum_{x}\left( \beta\sum_{\mu<\nu}\qty[1-\frac{1}{3}\Re\qty[\tr\qty[U_{\mu\nu}\qty(x)]]] \right. \label{eqn:adj_lattice_lagrangian_gauge} \\ & + \gamma\qty[2\tr\qty[\Sigma\qty(x)^2] - 1]^2+ 2\tr\qty[\Sigma\qty(x)^2]\label{eqn:adj_lattice_lagrangian_scalar} \\ & \left.- 4\kappa\sum_{\mu=1}^{4} \tr\qty[\Sigma\qty(x)U_{\mu}\qty(x)\Sigma\qty(x+\hat{\mu})U_{\mu}\qty(x)^{\dagger}]\right) \label{eqn:adj_lattice_lagrangian_int} \end{align} \end{subequations} where the lattice tree-level couplings are related to the continuum tree-level couplings by $\beta = 6/g^2$, $a^2\mu^2=(1-2\gamma)/\kappa - 8$ and $\lambda=2\gamma/\kappa^2$. The plaquette is given by $U_{\mu\nu}\qty(x) = U_{\mu}\qty(x)U_{\nu}\qty(x+\hat{\mu})U_{\mu}^{\dagger}\qty(x+\hat{\nu})U_{\nu}^{\dagger}\qty(x)$ and the links are related to the gauge fields as $U_{\mu}\qty(x)=\exp{i a W_{\mu}^a T^a}$. We note that thereby the gauge degrees of freedom are group-valued, which adds another global \ZZ[3] (center) symmetry in the gauge sector of the lattice theory. In the continuum limit, only those terms in the action remain which are trivially invariant under that symmetry, and it can thus not play a dynamical role. Of course, our present theory is potentially trivial, and may therefore not have an (interacting) continuum limit. In this case, however, the terms sensitive to the symmetry become sub-leading for the long-range, low-energy physics, which determines the phase diagram \cite{Hasenfratz:1986za}. Thus, while we monitored this symmetry using Polyakov loops, we did not use it as dynamical information in the following. The generated configurations were obtained on symmetric lattices $N^4$ with sizes $N=4$, 6, 8, 10, 12, 14, 16, 20 and 24.
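The quoted tree-level relations can be transcribed directly into a small helper, e.g.\ to translate the $\beta$-$\kappa$-$\gamma$ parameter sets of the figures below into continuum tree-level couplings; a minimal sketch:

```python
def continuum_couplings(beta, gamma, kappa):
    """Tree-level map quoted in the text:
    beta = 6/g^2,  a^2 mu^2 = (1 - 2 gamma)/kappa - 8,  lambda = 2 gamma/kappa^2."""
    g_squared = 6.0 / beta
    a2_mu2 = (1.0 - 2.0 * gamma) / kappa - 8.0
    lam = 2.0 * gamma / kappa ** 2
    return g_squared, a2_mu2, lam

# e.g. the parameter set beta-kappa-gamma = 6.0-0.2-1.05 used in the figures,
# giving g^2 = 1, a^2 mu^2 = -13.5 and lambda = 52.5 at tree level
print(continuum_couplings(6.0, 1.05, 0.2))
```

Note the negative tree-level mass parameter in this example, as expected for a parameter set deep in the BEH-like regime.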
The simulations used a heatbath algorithm for the link updates \cite{Cabibbo:1982zn} with an additional Metropolis step to account for the interaction term. The scalar updates were performed with a generalized pseudo-heatbath method like the one proposed in \cite{Knechtli:1999tw}. Here, we decided to solve the resulting cubic equation in this update without any approximation, to obtain a higher acceptance rate in regions where $\kappa$ becomes large. Further details on the methods used can be found in \cref{app:algorithms}. A configuration is obtained after one full sweep, which consists of 5 pure link sweeps followed by 1 scalar sweep and overrelaxation sweeps for both the links and the scalar fields. To ensure decorrelation, a sufficient number of configurations has been dropped for thermalization and in between measurements, yielding, where possible, an autocorrelation time of 1 for local quantities like the plaquette. Additionally, individual runs have been performed and combined. We note that in many cases the global structure of the phase diagram, especially with respect to the $\ZZ[2]$ symmetry, could be determined reasonably well\footnote{Note that we cannot currently exclude that at volumes beyond our computational resources this changes again, as has been observed in the \SU[2] theory with a fundamental Higgs \cite{Bonati:2009pf}.} already with volumes up to $14^4$. Larger volumes have been used to further test this insight, as well as to study places where this was not the case. A serious issue has been critical slowing down, especially in the BEH-like regime at small values of the scalar self-coupling $\gamma$. Similar effects were previously seen in the \SU[2] case \cite{Afferrante:2020hqe}. There, a massless physical vector mode was observed, which is potentially the origin of this problem.
Likewise, in the current theory such physical massless vector boson modes are also expected, as well as additional massless scalar modes towards the infinite-volume limit \cite{Maas:2017xzh}. They are especially expected in the $\SU[2]\times\U[1]$ case, which, as will be seen, is indeed the one that seems to emerge towards $\gamma\to 0$. Thus, the emergence of strong autocorrelations is consistent. At the moment, only sufficiently long Monte Carlo trajectories seem to be suitable to address these autocorrelations. Details of this problem are relegated to \cref{a:crit}. The results in section\footnote{All numerical analysis results have been obtained using the Julia programming language \cite{Julia:2017} with heavy usage of the packages \cite{JuliaPlots:2022,JuliaUnROOT:2022}. Analytic and arbitrary-precision results have been obtained using Mathematica \cite{Mathematica}. For data storage and analysis of the propagators the ROOT framework \cite{Brun:1997pa} has been used.} \ref{s:results} will be given including the effects of critical slowing down, as discussed in the appendix. \subsection{Observables and gauge fixing} To determine the status of the global $\ZZ[2]$ symmetry we used \cite{Maas:2017wzi,Langfeld:2004vu} \begin{equation}\label{eqn:z2order} \mathcal{O}_{\ZZ[2]}=\left\langle\qty(\frac{1}{V}\sum_x \tr\qty[\Sigma(x)^3])^2\right\rangle\,. \end{equation} This quantity approaches zero like a power-law for $V\to\infty$ if the global $\ZZ[2]$ symmetry is unbroken. If it approaches a constant instead, the symmetry is meta-stable, and will break when an arbitrarily small explicit breaking is applied. However, without such a breaking, the symmetry remains intact, and any quantity not invariant under it will vanish \cite{Maas:2017wzi}. We will use this information below. As noted above, the gauge condition breaks the $\ZZ[2]$ symmetry and the gauge symmetry to a common subgroup.
However, as the order parameter (\ref{eqn:z2order}) is gauge-invariant, it is insensitive to this breaking, and will always give the status of the global symmetry alone. Additionally, it should be noted that a locally normalized version of the quantity in \cref{eqn:z2order} would make it a measure of `\ZZ[2]-magnetization', as in the Ising model, and thus be even better suited to study the behaviour. However, while this modification changes the values of the order parameter quantitatively, the unnormalized absolute value tells us something about the severity of the \ZZ[2]-breaking. Therefore, we obtained both versions in the actual simulations, and in fact they show the same behaviour. Gauge-fixing to unitary gauge can be done straightforwardly, as it only requires the diagonalization of a $3\times 3$ matrix, which can still be done analytically following \cite{Kopp:2006wp}. Suitably (not) sorting the eigenvalues can be done as described previously. The only obstruction would arise if the scalar field vanished locally, leading to gauge defects \cite{Greensite:2006ns,Ripka:2003vv}. On a finite lattice in a simulation, the field will never be exactly zero with any appreciable probability, so this does not need to be checked explicitly. Determining the Higgs vacuum expectation value is done using the observables \begin{align} w\Sigma_0 & = \left\langle\frac{1}{V}\sum_x\Sigma\qty(x)\right\rangle \label{eqn:lvev} \\ \mathcal{O}_w & = \sqrt{2\expval{\frac{1}{V^2}\tr\qty[\qty(\sum_x\Sigma\qty(x))^2]}}\label{eqn:lvev2}, \end{align} where $\mathcal{O}_w$, as a discretization of (\ref{eqn:adj_gauge_order_lat}), is used to detect long-range ordering. Note that the order parameter has been re-scaled to match the usual definition of the vev in \cref{eqn:w_gauge_invariant}. The corresponding values for $w$ and $\theta_0$ can then be obtained by inserting $\Sigma_0$ into \cref{eqn:w_gauge_invariant} and \cref{eqn:theta_gauge_invariant}, respectively.
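For concreteness, the observables above, together with the gauge-invariant breaking angle, can be transcribed into a few lines; a schematic numpy version, in which the array layout and the test matrices are illustrative assumptions:

```python
import numpy as np

def o_z2(fields):
    """Z_2 order parameter <((1/V) sum_x tr Sigma^3)^2>;
    fields has shape (n_configs, V, 3, 3)."""
    tr3 = np.einsum('cxij,cxjk,cxki->cx', fields, fields, fields).real
    return np.mean(tr3.mean(axis=1) ** 2)

def o_w(fields):
    """Long-range order parameter sqrt(2 <(1/V^2) tr[(sum_x Sigma)^2]>)."""
    avg = fields.mean(axis=1)                       # (1/V) sum_x Sigma
    return np.sqrt(2.0 * np.mean(np.einsum('cij,cji->c', avg, avg).real))

def breaking_angle(sigma):
    """theta_0 = (1/3) arcsin(sqrt(6) tr Sigma^3 / (tr Sigma^2)^(3/2)),
    the gauge-invariant angle of the text."""
    t2 = np.trace(sigma @ sigma).real
    t3 = np.trace(sigma @ sigma @ sigma).real
    return np.arcsin(np.clip(np.sqrt(6.0) * t3 / t2 ** 1.5, -1.0, 1.0)) / 3.0

# two degenerate eigenvalues give the maximal angle pi/6 (SU(2)xU(1)-like);
# a sign-symmetric spectrum gives zero (U(1)xU(1)*-like)
print(breaking_angle(np.diag([2.0, -1.0, -1.0])))   # ~ pi/6
print(breaking_angle(np.diag([1.0, 0.0, -1.0])))    # 0.0
```

The two test matrices directly exhibit the correspondence between the edges of the fundamental domain and the candidate breaking patterns.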
For the angle this yields values in the fundamental domain. For $\mathcal{O}_w$ it is necessary to determine its value for $V\to\infty$. Only if it is non-zero can a vacuum expectation value exist. For (minimal) Landau--'t Hooft gauge, the gauge fields are first transformed to minimal Landau gauge using stochastic overrelaxation, following \cite{Suman:1993mg,Cucchieri:1995pn}. Afterwards, the scalar field is gauge-transformed correspondingly. To ensure the global gauge condition (\ref{eqn:beh}), the space-time average $\sum_x \Sigma(x)$ is determined per configuration, and then diagonalized in the same way as for unitary gauge. The required global gauge transformation is then applied locally to the links and the scalar fields \cite{Maas:2016ngo}. This yields the gauge-fixed configurations. The vacuum expectation value is determined as for unitary gauge using (\ref{eqn:lvev}). The deconstruction into length and angle is then performed as before. Likewise, a vev is only present if the infinite-volume limit is non-zero. Concerning the breaking pattern, a deeper analysis of the results is required; it will be performed in section \ref{s:results}. \subsection{Quantum effective potential} As noted before, the possibility of multiple breaking patterns is strongly tied to the quantum effective potential. Especially interesting is whether any structures exist that are indicative of a choice. At finite lattice spacing even multiple minima, which merge into a flat structure in the continuum limit, may be a signal. Thus, determining the quantum effective potential is an important step in understanding the problem at hand. In practice, however, there is a problem. Because the quantum effective potential is defined in a fixed gauge as a function of the gauge-fixed classical fields, its correct determination requires the inclusion of the Faddeev-Popov determinant of the corresponding gauge choice in the Markov chain.
This yields the practical problem that the determinant is not positive definite in general gauges, and especially not in Landau gauge, thus introducing a sign problem \cite{Mehta:2009zv,vonSmekal:2008es,vonSmekal:2007ns}. No efficient solution to this problem is yet known, and so an exact determination of the quantum effective potential is not possible. To circumvent this problem, we determine the quantum effective action by reweighting \cite{Maas:2013sca}, i.e.\ by determining the free energy as \begin{equation} e^{W(j_3,j_8)}=\left\langle e^{\sum_x \left(j_3 \Sigma_3(x)+j_8\Sigma_8(x)\right)}\right\rangle_\text{gauge fixed}\,, \end{equation} using constant sources $j_3$ and $j_8$. In this way the back-reaction of the source term on the weight of the configurations is neglected. This works better the smaller the sources are. However, it can be expected to fail at the latest when the source term becomes of the same order as the action. Of course, this can happen for (drastically) different values of $j_3$ and $j_8$. Thus, for the constant sources used here, only some ellipsoidal patch of $j_3$-$j_8$ space will be accessible. We consider here this maximal patch. The subsequent calculation of the quantum effective potential can then be done straightforwardly by a numerical Legendre transformation \cite{Maas:2013sca}. However, the exponential nature will in general require higher precision than native number formats, and the transformation has thus been calculated using arbitrary-precision arithmetic in Mathematica. \subsection{Running coupling} Our primary interest is in those cases where a perturbative construction (augmented by the FMS mechanism) is possible. Also, we wanted to avoid situations where strong non-perturbative effects could alter the result. To check the nature of the interaction, we determined the running gauge coupling in the mini-MOM scheme \cite{vonSmekal:2009ae} individually for the various colours \cite{Maas:2018xxu}.
This scheme allows one to determine the gauge coupling just from the two-point correlation functions of the gauge sector and the ghost sector in Landau gauge, \begin{align} \alpha^a(p) & =\frac{N}{2\pi\beta}Z^a(p)G^a(p)^2\label{rcoupling} \\ Z^a(p) & =\sum_\mu \frac{p^2}{3}D_{\mu\mu}^{aa}\nonumber \\ G^a(p) & =p^2 D^{aa}_G\nonumber\,, \end{align} where there is no summation over $a$, and $D_{\mu\nu}$ and $D_G$ are the gauge boson propagator and the ghost propagator, respectively. The corresponding calculations have been done using the methods described in \cite{Maas:2010qw}. As a by-product, this also allows us, at least in principle, to determine the mass spectrum of the gauge bosons, to test it against the perturbative tree-level predictions \cite{Maas:2016ngo,Maas:2018xxu,Afferrante:2020hqe}, and to identify possible degeneracies. Of course, there are also the couplings to the scalars. However, if these are non-perturbatively strong, this should make itself felt in (substantial) deviations of the gauge sector from tree-level behaviour. \section{Results}\label{s:results} \subsection{Global symmetry} \begin{figure}[h!] \centering \includesvg[width=\linewidth]{{global_phasediag_cb}} \caption{The status of the $\ZZ[2]$ symmetry in the phase diagram. Circles are unbroken, and squares are affected by spontaneous symmetry breaking. The colour scheme describes the expectation value of the absolute breaking angle and is the same as in all previous figures. Blue corresponds to $\U[1]\times\U[1]^\star$ or the region without BEH effect, the darkest red to $\SU[2]\times\U[1]$, and all others to $\U[1]\times\U[1]$. See \cref{sec:results_implicit} for more details.} \label{fig:z2} \end{figure} \begin{figure}[t!]
\centering \includesvg[width=0.99\linewidth]{{Z2-parameter_6.0-x-0.3}} \hfill \includesvg[width=0.99\linewidth]{{Z2-parameter_binder_6.0-x-0.3}} \caption{The order parameter (\ref{eqn:z2order}) (top panel, logarithmic) and its Binder cumulant (bottom panel) at fixed $\beta=6$ and $\gamma=0.3$ as a function of $\kappa$. The colour scheme of the symbols is the same as in figure \ref{fig:z2}.} \label{fig:z2order} \end{figure} Most unambiguous is the situation for the global symmetry. We find a clear separation of the phase diagram into a region of intact $\ZZ[2]$ symmetry and a region where spontaneous symmetry breaking of the global $\ZZ[2]$ symmetry is possible\footnote{Note that the Osterwalder-Seiler-Fradkin-Shenker argumentation \cite{Osterwalder:1977pc,Fradkin:1978dv} does not apply to this theory, as it requires a full breaking of the gauge group by the BEH effect. Thus, a separation into different phases is possible. This has already been observed in other cases \cite{Wellegehausen:2011sc}.}. This is shown in figure \ref{fig:z2}. In \cref{fig:z2order} we also see clearly the different volume-scaling behaviours of the order parameter above and below the phase transition, which for these parameters lies around $\kappa\approx0.4$. While we did not attempt to investigate the phase boundary in detail, the breaking of a symmetry accompanied by a local order parameter necessarily requires a non-analyticity, and thus a phase transition. Thus, we find that the phase diagram separates clearly into two distinct phases. Determining the order of the phase transition would require a more detailed analysis along lines of constant physics.
Cursory investigations of the order parameter and its Binder cumulant \begin{equation}\label{eqn:binder} \mathcal{O}_B = 1 - \frac{\expval{\qty(\sum_x \tr\qty[\Sigma\qty(x)^3])^4}}{3\expval{\qty(\sum_x \tr\qty[\Sigma\qty(x)^3])^2}^2}\,, \end{equation} shown for an example in figure \ref{fig:z2order}, suggest that at least part of the phase boundary is of second order, and thus defines possible continuum limits of the theory. We also did not observe any double-peak structures in the distribution of the order parameter, which would be expected for a first-order transition. However, the critical region appears to be relatively small, and thus substantial effort would be required for full quantitative control. This will not be necessary for the remainder of this work. A further decomposition into more phases than just these two, as was investigated for the \SU[2] adjoint case \cite{Baier:1986ni}, cannot be excluded with our results. However, we did not observe any further discontinuities in the \ZZ[2] order parameter except on the surface shown in figure \ref{fig:z2}. \subsection{Breaking angle from implicit gauge-fixing}\label{sec:results_implicit} As discussed in the introduction, it is also possible to calculate the breaking angle $\theta_0$ directly from gauge-invariant quantities, i.e.\ the matrix invariants, as shown in \cref{eqn:theta_gauge_invariant}. From this we can define another quantity, \begin{equation}\label{eqn:O_theta} \mathcal{O}_{\theta_0} = \ev{\abs{\frac{1}{3}\arcsin(\sqrt{6}\frac{\sum_x\tr\qty[\Sigma\qty(x)^3]}{\qty(\sum_x\tr\qty[\Sigma\qty(x)^2])^{3/2}})}}\,, \end{equation} which quantifies the possible breaking angle depending on the obtained configurations for a specific parameter set. The results for this quantity are included as a colour scheme in \cref{fig:z2} and use the same colours as in \crefrange{fig:su3break}{fig:comp}. Before discussing the results for this quantity, a word of caution on this parameter is in order.
It needs to be emphasized that this quantity does not imply that the corresponding pattern is automatically realized; rather, it should be understood as ``a possible value of the breaking angle, if there exists a non-vanishing vev for this parameter set''. The reason why the breaking angle does not necessarily have to take this value is again twofold. First, it is possible that additional gauge transformations applied to the scalar field, as in Landau gauge, modify the space-time-averaged scalar field and thus lead to a different value of the actual $\theta_0$. Second, the distribution of the parameter (\ref{eqn:O_theta}) needs to be monitored to make sure that it is indeed Gaussian and does not contain multiple peaks. For the present case this has been checked, and indeed for all investigated parameter sets the distributions show no multi-peak structures. What can immediately be seen from \cref{eqn:O_theta} is that the angle directly relates to the \ZZ[2]-symmetry-breaking term. If the \ZZ[2] symmetry is unbroken, the breaking angle must be zero, which corresponds to the $\U[1]\times\U[1]^\star$ pattern. This is reflected in \cref{fig:z2}. In the \ZZ[2]-broken region the angle can take any non-zero value in the fundamental domain. However, the angle correlates with the absolute value of $\tr\qty[\Sigma^3]$, which in turn means that only if the \ZZ[2]-breaking is strong enough can the $\SU[2]\times\U[1]$ pattern be realized. This happens only for $\gamma\to 0$ and $\kappa$ sufficiently large. Conversely, close to the \ZZ[2] phase transition the breaking angle tends towards the $\U[1]\times\U[1]^\star$ pattern. Overall, the breaking angle obtained from implicitly gauge-fixed quantities suggests that specific breaking patterns may only be realized for certain parameter sets, provided the vev does not vanish in the infinite-volume limit. \subsection{Unitary gauge results} \begin{figure}[h!]
\centering \includesvg[width=0.99\linewidth]{{vev-parameter_7.0-x-0.5}} \hfill \includesvg[width=0.99\linewidth]{{vev-parameter_binder_7.0-x-0.5}} \caption{The order parameter (\ref{eqn:lvev2}) in over-aligned unitary gauge (top panel, logarithmic) and its Binder cumulant (bottom panel) at fixed $\beta=7$ and $\gamma=0.5$ as a function of $\kappa$. The colour scheme of the symbols is the same as in figure \ref{fig:z2}. The angles are obtained individually for the gauge-fixed configurations. The data points for the Binder cumulant have been shifted along the $\kappa$-axis for better visibility.} \label{fig:vevorder} \end{figure} \begin{figure*}[t!] \centering \begin{subfigure}{0.23\textwidth} \includegraphics[width=\linewidth,trim={1.5cm 0 4cm 1.2cm},clip]{{conf_24-6.000000-0.200000-1.050000_t1}.png} \caption{un-aligned\linebreak} \label{fig:conf_broken} \end{subfigure} \begin{subfigure}{0.23\textwidth} \includegraphics[width=\linewidth,trim={1.5cm 0 4cm 1.2cm},clip]{{conf_24-9.500000-0.900000-0.175000_t1}.png} \caption{weakly aligned (\ZZ[2]-positive)} \label{fig:conf_bulk} \end{subfigure} \begin{subfigure}{0.23\textwidth} \includegraphics[width=\linewidth,trim={1.5cm 0 4cm 1.2cm},clip]{{conf_24-10.000000-1.000000-0.050000_t1}.png} \caption{strongly aligned (\ZZ[2]-negative)} \label{fig:conf_deep_broken} \end{subfigure} \begin{subfigure}{0.23\textwidth} \includegraphics[width=\linewidth,trim={1.5cm 0 4cm 1.2cm},clip]{{conf_24-9.750000-0.950000-0.112500_t1}.png} \caption{separately aligned\linebreak} \label{fig:conf_two_pats} \end{subfigure} \begin{subfigure}{0.05\textwidth} \includegraphics[width=\linewidth,trim={20cm 0 0 1.2cm},clip]{{conf_24-9.750000-0.950000-0.112500_t1}.png} \end{subfigure} \caption{The different kinds of spatial $\theta_0$ distributions for one time-slice on individual configurations. In the \ZZ[2]-unbroken phase and close to the phase transition all configurations behave like (a).
Within the \ZZ[2]-broken phase the configurations show different strengths of alignment (b-c). Also, some configurations equilibrate to a combination of patterns (d). Note that we have separated the usual colour scale here into local angles with positive and negative signs.} \label{fig:conf} \end{figure*} \begin{figure*}[t!] \centering \begin{tabular}{ccccc} $6.0-0.2-1.05$ & $7.25-0.45-0.7375$ & $7.5-0.5-0.675$ & $9.5-0.9-0.175$ & $10.0-1.0-0.05$ \\\hline \includesvg[width=0.19\textwidth]{{theta_hist_8-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{theta_hist_8-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{theta_hist_8-7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{theta_hist_8-9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{theta_hist_8-10.000000-1.000000-0.050000}} \\ \includesvg[width=0.19\textwidth]{{theta_hist_12-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{theta_hist_12-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{theta_hist_12-7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{theta_hist_12-9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{theta_hist_12-10.000000-1.000000-0.050000}} \\ \includesvg[width=0.19\textwidth]{{theta_hist_16-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{theta_hist_16-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{theta_hist_16-7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{theta_hist_16-9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{theta_hist_16-10.000000-1.000000-0.050000}} \\ \includesvg[width=0.19\textwidth]{{theta_hist_20-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{theta_hist_20-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{theta_hist_20-7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{theta_hist_20-9.500000-0.900000-0.175000}} & 
\includesvg[width=0.19\textwidth]{{theta_hist_20-10.000000-1.000000-0.050000}} \\ \includesvg[width=0.19\textwidth]{{theta_hist_24-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{theta_hist_24-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{theta_hist_24-7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{theta_hist_24-9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{theta_hist_24-10.000000-1.000000-0.050000}} \\ \includesvg[width=0.19\textwidth]{{theta_vol_6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{theta_vol_7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{theta_vol_7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{theta_vol_9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{theta_vol_10.000000-1.000000-0.050000}} \end{tabular} \caption{The $\theta_0$ distribution on individual configurations for increasing volumes (top to bottom) from deep in the $\ZZ[2]$ unbroken region across the phase boundary down to $\gamma\approx 0$ (left to right). The last row shows the volume behaviour of the total expectation value. The header of each column gives the respective simulation parameters $\beta-\kappa-\gamma$.} \label{fig:theta} \end{figure*} \begin{figure*}[t!]
\centering \begin{tabular}{ccccc} $6.0-0.2-1.05$ & $7.25-0.45-0.7375$ & $7.5-0.5-0.675$ & $9.5-0.9-0.175$ & $10.0-1.0-0.05$ \\\hline \includesvg[width=0.19\textwidth]{{vev_hist_8-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{vev_hist_8-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{vev_hist_8-7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{vev_hist_8-9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{vev_hist_8-10.000000-1.000000-0.050000}} \\ \includesvg[width=0.19\textwidth]{{vev_hist_12-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{vev_hist_12-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{vev_hist_12-7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{vev_hist_12-9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{vev_hist_12-10.000000-1.000000-0.050000}} \\ \includesvg[width=0.19\textwidth]{{vev_hist_16-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{vev_hist_16-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{vev_hist_16-7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{vev_hist_16-9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{vev_hist_16-10.000000-1.000000-0.050000}} \\ \includesvg[width=0.19\textwidth]{{vev_hist_20-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{vev_hist_20-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{vev_hist_20-7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{vev_hist_20-9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{vev_hist_20-10.000000-1.000000-0.050000}} \\ \includesvg[width=0.19\textwidth]{{vev_hist_24-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{vev_hist_24-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{vev_hist_24-7.500000-0.500000-0.675000}} & 
\includesvg[width=0.19\textwidth]{{vev_hist_24-9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{vev_hist_24-10.000000-1.000000-0.050000}} \\ \includesvg[width=0.19\textwidth]{{vev_vol_6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{vev_vol_7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{vev_vol_7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{vev_vol_9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{vev_vol_10.000000-1.000000-0.050000}} \end{tabular} \caption{The $\mathcal{O}_w$ distribution on individual configurations for increasing volumes (top to bottom) from deep in the $\ZZ[2]$ unbroken region across the phase boundary down to $\gamma\approx 0$ (left to right). The last row shows the volume behaviour of the total expectation value $\mathcal{O}_w$ normalized to the largest value (i.e.\ the smallest volume). The header of each column gives the respective simulation parameters $\beta-\kappa-\gamma$. Darker bins have been obtained later in the MC-history; see \cref{a:crit}.} \label{fig:vev} \end{figure*} \begin{figure*}[t!]
\centering \begin{tabular}{ccccc} $6.0-0.2-1.05$ & $7.25-0.45-0.7375$ & $7.5-0.5-0.675$ & $9.5-0.9-0.175$ & $10.0-1.0-0.05$ \\\hline \includesvg[width=0.19\textwidth]{{theta_vev_8-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{theta_vev_8-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{theta_vev_8-7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{theta_vev_8-9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{theta_vev_8-10.000000-1.000000-0.050000}} \\ \includesvg[width=0.19\textwidth]{{theta_vev_12-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{theta_vev_12-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{theta_vev_12-7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{theta_vev_12-9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{theta_vev_12-10.000000-1.000000-0.050000}} \\ \includesvg[width=0.19\textwidth]{{theta_vev_16-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{theta_vev_16-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{theta_vev_16-7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{theta_vev_16-9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{theta_vev_16-10.000000-1.000000-0.050000}} \\ \includesvg[width=0.19\textwidth]{{theta_vev_20-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{theta_vev_20-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{theta_vev_20-7.500000-0.500000-0.675000}} & \includesvg[width=0.19\textwidth]{{theta_vev_20-9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{theta_vev_20-10.000000-1.000000-0.050000}} \\ \includesvg[width=0.19\textwidth]{{theta_vev_24-6.000000-0.200000-1.050000}} & \includesvg[width=0.19\textwidth]{{theta_vev_24-7.250000-0.450000-0.737500}} & \includesvg[width=0.19\textwidth]{{theta_vev_24-7.500000-0.500000-0.675000}} & 
\includesvg[width=0.19\textwidth]{{theta_vev_24-9.500000-0.900000-0.175000}} & \includesvg[width=0.19\textwidth]{{theta_vev_24-10.000000-1.000000-0.050000}} \end{tabular} \caption{The $\Sigma_0$ distribution on individual configurations for increasing volumes (top to bottom) from deep in the $\ZZ[2]$ unbroken region across the phase boundary down to $\gamma\approx 0$ (left to right). The header of each column gives the respective simulation parameters $\beta-\kappa-\gamma$. Darker points have been obtained later in the MC-history; see \cref{a:crit}.} \label{fig:theta_vs_vev} \end{figure*} \begin{figure*}[t!] \centering \begin{tabular}{ccccc} $6.0-0.2-1.05$ & $7.25-0.45-0.7375$ & $7.5-0.5-0.675$ & $9.5-0.9-0.175$ & $10.0-1.0-0.05$ \\\hline \includegraphics[width=0.19\textwidth]{{sigma3_16-6.000000-0.200000-1.050000}.pdf} & \includegraphics[width=0.19\textwidth]{{sigma3_16-7.250000-0.450000-0.737500}.pdf} & \includegraphics[width=0.19\textwidth]{{sigma3_16-7.500000-0.500000-0.675000}.pdf} & \includegraphics[width=0.19\textwidth]{{sigma3_16-9.500000-0.900000-0.175000}.pdf} & \includegraphics[width=0.19\textwidth]{{sigma3_16-10.000000-1.000000-0.050000}.pdf} \\ \includegraphics[width=0.19\textwidth]{{sigma8_16-6.000000-0.200000-1.050000}.pdf} & \includegraphics[width=0.19\textwidth]{{sigma8_16-7.250000-0.450000-0.737500}.pdf} & \includegraphics[width=0.19\textwidth]{{sigma8_16-7.500000-0.500000-0.675000}.pdf} & \includegraphics[width=0.19\textwidth]{{sigma8_16-9.500000-0.900000-0.175000}.pdf} & \includegraphics[width=0.19\textwidth]{{sigma8_16-10.000000-1.000000-0.050000}.pdf} \\ \includegraphics[width=0.19\textwidth]{{effective_potential_16-6.000000-0.200000-1.050000}.pdf} & \includegraphics[width=0.19\textwidth]{{effective_potential_16-7.250000-0.450000-0.737500}.pdf} & \includegraphics[width=0.19\textwidth]{{effective_potential_16-7.500000-0.500000-0.675000}.pdf} & \includegraphics[width=0.19\textwidth]{{effective_potential_16-9.500000-0.900000-0.175000}.pdf} &
\includegraphics[width=0.19\textwidth]{{effective_potential_16-10.000000-1.000000-0.050000}.pdf} \end{tabular} \caption{The classical fields $\sigma_3$ (top) and $\sigma_8$ (middle) as a function of the sources, and the quantum effective potential $\Gamma$ (bottom; note the different axes ranges) as a function of the classical fields in various parts of the phase diagram for $V=16^4$. The header of the column gives the respective simulation parameters $\beta-\kappa-\gamma$. Note that smaller volumes have been chosen to reduce equilibration problems; see \cref{a:crit}.} \label{fig:qep} \end{figure*} As anticipated, imposing unitary gauge with a priori strict ordering does indeed yield a vacuum expectation value everywhere in the phase diagram. It is therefore possible to force a vev in certain gauges. As we will see, this is not true in all gauges. Thus, the existence of a vev is indeed gauge-dependent and can be independent of the physical phase structure, as has already been observed in the \SU[2] fundamental case \cite{Greensite:2004ke,Caudy:2007sf,Maas:2012ct}. There is, however, another remarkable aspect. It is usually assumed that if physical quantities undergo a phase transition, then gauge-dependent quantities will also exhibit non-analyticities, even if the reverse is not true \cite{Maas:2017wzi}. In the present case, however, this assumption fails. The vev and its Binder cumulant, shown in figure \ref{fig:vevorder}, exhibit no sign of a transition, in stark contrast to the \ZZ[2] order parameter shown in figure \ref{fig:z2order}. Thus, in general it cannot be expected that physical non-analyticities necessarily induce non-analyticities in gauge-fixed quantities. When performing the not over-aligned unitary gauge-fixing, which preserves the \ZZ[2] symmetry locally, we find a different picture.
We then indeed find that a vev arises only in part of the phase diagram: it occurs only in the region where the global $\ZZ[2]$ symmetry is broken, at least within our sampling of the phase diagram. The vev thus shows a non-analyticity. In fact, the vev and its Binder cumulant show the same behaviour as the \ZZ[2] order parameter at coinciding values of the coupling, within our resolution. Thus, in contrast to the over-aligned gauge, here the gauge-dependent quantities follow the physical phase structure. Additionally, this gauge-fixing procedure also allows us to plot the local angle distribution on the lattice. In \cref{fig:conf} we show exemplary configurations for a single time-slice in different parts of the phase diagram, which exhibit different strengths of alignment and corresponding average values. When moving closer to $\gamma\approx 0$ the alignment becomes stronger and the average value approaches $\pm\pi/6$, i.e.\ the plot becomes darker. This is shown in \cref{fig:conf_deep_broken}. A similar formation of structures has also been observed in other systems with symmetry breaking \cite{Endrodi:2021kur}. Finally, it should also be mentioned that in both cases the breaking angles obtained from the space-time-averaged gauge-fixed fields are indeed very similar to the ones obtained from \cref{eqn:O_theta}. This is a non-trivial result, due to the interchange of the space-time average and the matrix powers. Thus, the long-range correlations in this case are already described suitably well without any explicit gauge-fixing. The other results are then similar to the Landau--'t Hooft gauge ones. Given that the non-sorted/post-sorted unitary gauge cannot really be replicated in (augmented) perturbation theory, we will therefore not consider this gauge further. \subsection{Landau--'t Hooft gauge} In this gauge, we observe a vev whenever the global $\ZZ[2]$ symmetry is broken. Beyond that, we find a more intricate situation.
When calculating the angle from the vev alone, we again find a distribution very similar to the one shown in figure \ref{fig:z2}. We observe that throughout the $\ZZ[2]$-broken phase the angle $\theta_0$ appears to be such that the breaking pattern is exclusively $\U[1]\times\U[1]$. Only close to the phase boundary to the unbroken phase does it approach $\U[1]\times\U[1]^\star$, and for $\gamma\to 0$ it approaches $\SU[2]\times\U[1]$. Whether it reaches the special values cannot be determined with certainty numerically. This result paints a very different picture than at tree-level. To understand it better, it is useful to resolve the situation on a per-configuration basis. We show sample distributions of the angle $\theta_0$ as a function of volume and for different points in the phase diagram in \cref{fig:theta}. We also show a more detailed resolution of the vev and $\theta_0$ distributions in figures \ref{fig:vev} and \ref{fig:theta_vs_vev}, for the unbroken phase, close to the phase boundary on both sides of the transition, inside the bulk of the broken phase, and close to $\gamma=0$, respectively. To interpret the plots, it is important to remember that the average vev is determined by (\ref{eqn:lvev}), and thus the per-configuration angles and vevs do not necessarily average to it independently, due to their non-linear relation (\ref{eqn:vev}). Furthermore, critical slowing-down effects are very pronounced for $\gamma\to 0$, making the results on larger volumes increasingly unreliable, see appendix \ref{a:crit}. We therefore also indicate from where in the Monte-Carlo trajectory the configuration-wise results are obtained, and necessarily deduce parts of our findings from the observed trend. Estimating the amount of computing time needed for full thermalization on our largest volumes puts a different approach clearly beyond the current capabilities of our infrastructure.
The result is that we observe different distributions depending on the couplings. These distributions have large finite-volume effects, which complicate the analysis. We also observe that the $\theta_0$ distribution is, as expected, $\ZZ[2]$-symmetric around $\theta_0=0$. Given the meta-stability in the $\ZZ[2]$-broken phase, this implies a drastic change of the results once the symmetry becomes explicitly or spontaneously broken. In the $\ZZ[2]$-unbroken phase (the first columns of \crefrange{fig:theta}{fig:theta_vs_vev}), no preferential structure is observed, and both $\theta_0$ and $w$ are Gaussian distributed. As the volume increases, $w$ decreases towards zero, while the distribution of $\theta_0$ remains Gaussian. This is indeed the effect expected in the absence of a BEH effect. Thus, the $\ZZ[2]$-unbroken phase in this gauge can be interpreted as one in which no BEH effect prevails in the infinite-volume limit. Outside the $\gamma\approx 0$ regime and inside the $\ZZ[2]$-broken phase, the picture is relatively similar (see the second and third columns of \crefrange{fig:theta}{fig:theta_vs_vev}). The $\theta_0$ distribution is strongly non-Gaussian, accumulating with increasing volume towards the $\theta_0=\pm\pi/6$ boundaries, and large absolute values of the angles correlate with large values of $w$. The change to this radically different distribution from the $\ZZ[2]$-unbroken phase is very abrupt. However, as the $\ZZ[2]$ alignment becomes weaker and weaker towards the phase boundary, as evidenced by the order-parameter values, the pattern must become more and more compatible with the absence of such an alignment while maintaining the vev. This is only possible in the $\U[1]\times\U[1]^\star$ pattern, which thus emerges towards the phase boundary. This can also be understood in the following way.
Because the $\ZZ[2]$ order parameter (\ref{eqn:z2order}) depends only on the sign of the determinant, any information about long-range order in the scalar field is lost, as this depends on the relative ordering of the eigenvalues. However, the $\U[1]\times\U[1]^\star$ case implies a vanishing determinant, due to the zero eigenvalue. Thus, in case of a long-range ordering of this type, the \ZZ[2] symmetry would necessarily remain intact. In the $\U[1]\times\U[1]$ case the determinant depends non-linearly on the relative sign of the two eigenvalues. If one of the eigenvalues is relatively large and the other one is asymmetrically distributed around zero, the determinant can fluctuate wildly from configuration to configuration, while long-range order is still possible for the eigenvalues. Hence, in total there is no way to decide a priori from the status of the \ZZ[2] symmetry whether a BEH effect is present; at most a subset of the possible patterns can be deduced, as discussed in the previous section. Thus, it is highly non-trivial that a correspondence is found. In the absence of the other effects, this yields that the $\U[1]\times\U[1]$ pattern prevails. The situation changes when moving towards small $\gamma$ (shown in the fourth and last columns of \crefrange{fig:theta}{fig:theta_vs_vev}). The first thing to note is that for these setups the vev $w$ again shows a volume dependence. This is, however, not an indication of an unbroken phase in this case, but rather an artefact of the equilibration problems we encounter with decreasing $\gamma$. For a more detailed discussion of this effect see \cref{a:crit}. Taking this into account, we also see that the vev and $\theta_0$ then again become less correlated compared to the previous cases. The absence of a correlation would be expected if there are more independent possibilities to orient the average value without an action penalty. This would be the case in the $\SU[2]\times\U[1]$ case. That is again consistent with the averaged result.
In this case the determinant in the $\ZZ[2]$ order parameter (\ref{eqn:z2order}) is also long-range ordered. Thus, this pattern can only be realized if \ZZ[2] is broken. Intuitively, it is not surprising to find this pattern only at very small $\gamma$, where fluctuations of the scalar field are less restricted. There, the BEH effect is not strong enough to generate masses for all gauge bosons, and thus more remain massless. This is consistent with NLO estimates in three dimensions, where at small $\gamma$ likewise only the $\SU[2]\times\U[1]$ pattern has been found \cite{Kajantie:1998yc}. However, in all cases the behaviour on individual configurations is thus very different from the averaged behaviour evidenced by figure \ref{fig:z2}. It should be mentioned, though, that, as in the unitary-gauge case, the breaking patterns obtained from the implicit gauge-fixing procedure coincide with the ones obtained in this particular gauge. Especially the strong appearance of the $\SU[2]\times\U[1]$ angle $\theta_0=\pm\pi/6$ in individual configurations, which averages to a $\U[1]\times\U[1]$ pattern, strongly suggests that the alignment per configuration is not particularly strong. However, this effect averages out, except when the potential becomes shallow enough for $\gamma\approx 0$. This view is also corroborated by the intricate space-time pattern in the individual configurations seen in \cref{fig:conf_two_pats} for unitary gauge. We clearly see there the formation of Weiss domains in a wave pattern. Similar observations have been made in other systems \cite{Endrodi:2021kur}. Although the configurations referred to are obtained in unitary gauge, the same considerations should carry over to Landau--'t Hooft gauge, since these two gauges are only the opposing extremes of $R_\xi$ gauges. \begin{figure*}[t!]
\centering \begin{tabular}{cccc} \includegraphics[width=0.441\linewidth]{{alpha-6.5}.eps} & \includegraphics[width=0.441\linewidth]{{alpha-7}.eps} & \\ \includegraphics[width=0.441\linewidth]{{alpha-7.25}.eps} & \includegraphics[width=0.441\linewidth]{{alpha-7.5}.eps} \\ \includegraphics[width=0.441\textwidth]{{alpha-8.5}.eps} & \includegraphics[width=0.441\textwidth]{{alpha-9.5}.eps} & \\ \includegraphics[width=0.441\textwidth]{{alpha-9.75}.eps} & \includegraphics[width=0.441\textwidth]{{alpha-10}.eps} \end{tabular}\\ \includegraphics[width=\textwidth]{{leg-gb}.eps} \caption{The running gauge coupling for the eight different charges, moving from deep inside the unbroken phase ($\beta=6.5$, top left) to close to both sides of the \ZZ[2] phase transition ($\beta=7$ and $\beta=7.25$) through the broken phase ($\beta=7.5$, $\beta=8.5$, $\beta=9.5$, $\beta=9.75$) to the smallest numerically accessible value of $\gamma$ ($\beta=10$) in the lower-right panel. All results are from the $24^4$ lattice; the momenta are along the $x$-axis, which minimizes finite-volume effects. The first non-zero momentum point is suppressed due to remaining finite-volume effects.} \label{fig:alpha} \end{figure*} \begin{figure*}[t!]
\centering \begin{tabular}{cccc} \includegraphics[width=0.441\linewidth]{{gp-6.5}.eps} & \includegraphics[width=0.441\linewidth]{{gp-7}.eps} & \\ \includegraphics[width=0.441\linewidth]{{gp-7.25}.eps} & \includegraphics[width=0.441\linewidth]{{gp-7.5}.eps} \\ \includegraphics[width=0.441\textwidth]{{gp-8.5}.eps} & \includegraphics[width=0.441\textwidth]{{gp-9.5}.eps} & \\ \includegraphics[width=0.441\textwidth]{{gp-9.75}.eps} & \includegraphics[width=0.441\textwidth]{{gp-10}.eps} \end{tabular}\\ \includegraphics[width=\textwidth]{{leg-gb}.eps} \caption{The gauge boson dressing function for the eight different charges, moving from deep inside the unbroken phase ($\beta=6.5$, top left) to close to both sides of the \ZZ[2] phase transition ($\beta=7$ and $\beta=7.25$) through the broken phase ($\beta=7.5$, $\beta=8.5$, $\beta=9.5$, $\beta=9.75$) to the smallest numerically accessible value of $\gamma$ ($\beta=10$) in the lower-right panel. All results are from the $24^4$ lattice; the momenta are along the $x$-axis, which minimizes finite-volume effects. The zero momentum point and the first non-zero momentum point are suppressed due to remaining finite-volume effects.} \label{fig:gp} \end{figure*} Thus, what we see is a highly non-trivial averaging pattern with large cancellations, both within individual configurations and between configurations. Nonetheless, in the end the full path integral determines the dynamics. This is best encapsulated by the quantum effective potential, displayed in figure \ref{fig:qep}. It does indeed paint a picture of such strong cancellations. In all cases it is strongly deformed from the tree-level behaviour towards a single minimum, except deep in the \ZZ[2]-unbroken phase, where the size of the induced classical fields can be considered a lattice artefact. Thus, there are strong quantum corrections to the potential.
Especially, in the cases with a $\U[1]\times\U[1]$ breaking we do not see residual minima-like structures emerging from the strong peaking of the individual configurations at $\theta_0=\pm\pi/6$. Moreover, the classical field values saturate as a function of the sources in the \ZZ[2]-broken phase within the accessible range. This can be interpreted as the system being very stable against any attempt to perturb it out of its vacuum state. Furthermore, we observe that the minima are located at relatively small values of $\sigma_8$, while $\sigma_3$ is large in comparison, except when $\gamma$ is small. The field $\sigma_3$ is associated with the \SU[2] subgroup of \SU[3] in the Gell-Mann representation we use. If $\sigma_8$ vanishes, the system necessarily has the $\U[1]\times\U[1]^\star$ structure. The increasing value of $\sigma_8$, which would dominate in the $\SU[2]\times\U[1]$ pattern, thus corroborates our interpretation that the system moves from the $\U[1]\times\U[1]^\star$ pattern at the phase boundary to the $\SU[2]\times\U[1]$ pattern towards $\gamma\to 0$. Hence, we could expect that the correlation functions, being derivatives of the quantum effective potential, should also markedly obey just a single pattern. That would indeed be a favourable outcome \cite{Maas:2017xzh}. The results for the running coupling (\ref{rcoupling}) and the gauge boson propagator are shown in figures \ref{fig:alpha} and \ref{fig:gp}, respectively. We note that the gauge-dependent correlation functions are also strongly affected by the critical slowing down at small $\gamma$. More details can be found in appendix \ref{a:crit}. In addition, large finite-volume effects are known to affect zero momentum and the smallest non-zero lattice momenta \cite{Maas:2011se,Afferrante:2020hqe}, and thus these are suppressed. Unfortunately, this is also the range in which the BEH effect is strongest.
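For orientation, the momentum region affected by these finite-volume effects can be sketched with the standard lattice momentum definition $\hat p a = 2\sin(\pi n/N)$ along a single axis (an illustrative assumption; the momentum definition actually used in the analysis may differ):

```python
import numpy as np

N = 24  # lattice extent, as for the 24^4 lattice used here

# Standard lattice momenta (in lattice units) along one axis:
# p_hat = 2 sin(pi n / N), n = 0, 1, ..., N/2
n = np.arange(N // 2 + 1)
p_hat = 2.0 * np.sin(np.pi * n / N)

# Zero momentum and the first non-zero momentum point are the ones
# most affected by finite-volume effects and are therefore suppressed.
print("suppressed points:", p_hat[:2])
print("smallest kept momentum:", p_hat[2])
```

This makes explicit that suppressing the lowest momentum points removes precisely the deep-infrared region in which the BEH effect would be most visible.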
Finally, the ghost propagator is much less sensitive to the BEH effect, as has also been observed previously \cite{Maas:2018xxu,Afferrante:2020hqe}, and the corresponding results are therefore relegated to appendix \ref{a:ghost}. Without an independent scale, it is not entirely trivial to interpret the results. However, by comparing to previous results \cite{Maas:2011se,Maas:2014pba,Maas:2018xxu,Afferrante:2020hqe}, it is possible to quantify the observations. Deep in the bulk phase ($\beta=6.5$), the results are typical for strongly-interacting gauge theories \cite{Maas:2011se}, and show full degeneracy, as expected. The behaviour of both the propagator and the running coupling is characteristic of a lattice spacing of the order of the characteristic scale of the theory \cite{Maas:2011se}. This remains essentially the same when moving towards the phase boundary ($\beta=7$), just with some increase of fluctuations. After crossing the boundary, the results indicate a markedly coarser lattice. The drop in the running coupling is indeed indicative of a weakly coupled system. Moreover, the degeneracy pattern at $\beta=7.25$, with four degenerate gauge bosons and two markedly heavier ones, is precisely the expected pattern for $\U[1]\times\U[1]^\star$. While the massless modes are found at similar values to the lightest massive ones, this is likely a finite-volume artefact, as has been observed previously \cite{Maas:2018xxu,Afferrante:2020hqe}. When moving further into the bulk ($\beta=7.5$, $\beta=8.5$), the lattice spacing appears not to change drastically. However, the remaining degeneracy splits up, leaving three pairs of degenerate gauge bosons, which is the expected pattern for $\U[1]\times\U[1]$. However, when approaching $\gamma\to 0$ ($\beta=9.5$, $\beta=9.75$, $\beta=10$), the gauge bosons start to become degenerate again.
This may be a critical slowing-down effect or a pure volume effect, see appendix \ref{a:crit}, though at this time this cannot be decided. At the same time the running coupling starts to increase, though still staying below one. While it appears tempting to associate this with having an unbroken non-Abelian subgroup, this alone cannot be the reason. In the case of the fundamental Higgs, there also remains an unbroken non-Abelian subgroup, but the breaking pattern remains tree-level-like \cite{Maas:2016ngo,Maas:2018xxu}. Especially, the tree-level masses (\ref{eqn:boson_masses}) can never become degenerate. Hence, either, despite appearances particularly at $\beta=10$, the volume is still too small and the thermalization issues too strong to probe the non-degenerate regime, or the interactions are indeed strong enough to modify the behaviour substantially compared to tree-level. Deciding this would require, at the very least, an order of magnitude more computational resources. \subsection{Implications for analytical and continuum calculations} The probably most important insight from the results is that the pattern of symmetry breaking cannot be chosen at will, but can only be determined at the quantum level. In this context it is important that, while the vev length $w$ can be adjusted by the renormalization prescription, the angle $\theta_0$ cannot be changed by renormalization, as the possible wave-function renormalization affects all components equally. Thus, $\theta_0$ must always be determined from the minimization of the quantum effective potential. In any expansion, this would then be required at every order, to realign the vev. However, our investigations also suggest that the breaking angle obtained from implicit gauge-fixing may already be sufficient to get a hint of the possible breaking patterns for a given parameter set. This is an advantage for methods in which gauge fixing is expensive, like lattice simulations.
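The renormalization invariance of the breaking angle can be made explicit in a minimal parametrization (an illustrative assumption, expressing the vev through the Cartan classical fields $\sigma_3$ and $\sigma_8$):
\begin{align*}
w=\sqrt{\sigma_3^{2}+\sigma_8^{2}}\,,\qquad\tan\theta_0=\frac{\sigma_8}{\sigma_3}\,.
\end{align*}
A common wave-function renormalization $\sigma_i\to Z^{1/2}\sigma_i$ rescales $w\to Z^{1/2}w$, but leaves $\tan\theta_0$, and hence the breaking pattern, unchanged.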
The next issue concerns the actual continuum physics. Just like any other stand-alone gauge-Higgs theory, the present one is potentially affected by a triviality issue \cite{Callaway:1988ya}. There are now two alternatives: either, non-perturbatively, there exists an interacting continuum limit at the second-order phase-transition surface, or there does not. In the latter case, it is always possible to use the theory as a low-energy effective theory only \cite{Hasenfratz:1986za}. In that case, any point in the phase diagram can be used, for sufficiently low energies below the corresponding cut-off, and thus all patterns can be realized. Otherwise, a phase transition occurs only towards the boundary of the phases. Given our results, this would suggest that towards the $\ZZ[2]$-unbroken phase only the $\U[1]\times\U[1]^\star$ pattern is possible. In the $\gamma\to 0$ limit, if it also manifests a second-order phase transition, the $\SU[2]\times\U[1]$ pattern is apparently possible. Thus, for the purpose of gauge-fixed calculations beyond the lattice, be it augmented perturbation theory using the FMS mechanism, functional methods, or other approaches, this implies that apparently none of the serious ambiguities considered in \cite{Maas:2017xzh} are possible. However, it is mandatory to determine the actual breaking pattern by self-consistently determining the actual value of $\theta_0$ from the quantum effective potential. \section{Summary}\label{s:summary} We have presented a detailed, non-perturbative investigation of the BEH effect in a theory in which multiple breaking patterns are possible. Our results suggest that, even if at tree-level no pattern is preferred, at the quantum level only one pattern is possible for any given set of couplings.
In particular, the vev cannot be chosen at will by a gauge condition, as is the case when only one breaking pattern exists, but needs to be determined a posteriori from the quantum effective potential. This implies that gauge-fixing in analytical (continuum) calculations requires a simultaneous calculation and minimization of the quantum effective potential. We also show that some gauges exist which enforce a BEH effect for any parameters, very much like gauges which forbid a BEH effect for any value of the parameters. This again demonstrates the gauge-dependence of the BEH effect, but at the same time shows that it can nonetheless be used as a fruitful concept. While this complicates issues in practice, it removes any lingering ambiguities \cite{Maas:2017xzh} for gauge-fixed calculations. Especially, this will allow manifestly gauge-invariant analytical calculations using FMS-mechanism-augmented perturbation theory \cite{Maas:2017xzh,Sondenheimer:2019idq,Maas:2020kda,Dudal:2020uwb} in realistic theories like GUT candidates, which are not accessible on the lattice due to computational costs. Also, corresponding predictions for the spectrum of theories like the present one \cite{Maas:2017xzh,Sondenheimer:2019idq} become unambiguous, and can be tested using lattice simulations in the future. Finally, these results imply that phenomenology in theories with multiple BEH breaking patterns cannot be done by just selecting a theory which potentially has the correct breaking pattern. It is really necessary to check whether the desired breaking pattern is indeed admitted by the quantum effective potential for the couplings necessary to fulfil phenomenological constraints, like mass spectra. Conversely, this puts such model building on much more stable ground from a field-theoretical perspective. \acknowledgments The computational results presented have been obtained using the Vienna Scientific Cluster (VSC) and the HPC center at the University of Graz.
E.\ D.\ and B.\ R.\ have been supported by the Austrian Science Fund FWF, grant P32760. We are grateful to Vincenzo Afferrante, René Sondenheimer, and Pascal Törek for useful discussions. B.\ R.\ is specifically thankful to Fabian Zierler for sharing his autocorrelation analysis codes.
\section{Introduction} \label{introduction} The Galileo spacecraft was the first artificial satellite orbiting Jupiter. Galileo had a highly sensitive impact ionization dust detector on board which was identical to the dust detector of the Ulysses spacecraft \citep{gruen1992a,gruen1992b,gruen1995a}. Dust data from both spacecraft were used for the analysis of e.\,g. the interplanetary dust complex, dust related to asteroids and comets, interstellar dust grains sweeping through the solar system, and various dust phenomena in the environment of Jupiter. References can be found in \citet{krueger1999a,krueger1999b}. In Section~\ref{sec_results} we summarize results that are related to dust in the Jupiter system. A comprehensive overview of the investigation of dust in the jovian system was given by \citet{krueger2003c} and \citet{krueger2004a}. \subsection{Summary of results from the Galileo dust investigations at Jupiter} \label{sec_results} The Jupiter system was found to be a strong source of dust when in 1992 Ulysses flew by the planet and discovered streams of dust particles emanating from the giant planet's magnetosphere \citep{gruen1993a}. These were later confirmed by Galileo \citep{gruen1996b,gruen1996c} and measured again by Ulysses in 2003-05 during its second flyby of the planet \citep{krueger2006c,flandes2007,flandes2009}. At least four dust populations were identified in the Jupiter system with Galileo \citep{gruen1997b,gruen1998}: i) Streams of dust particles with high and variable impact rates throughout Jupiter's magnetosphere. They are the extension of the streams discovered with Ulysses outside Jupiter's magnetosphere. The particles are about 10\,nm in radius \citep{zook1996} and they mostly originate from the innermost Galilean moon Io \citep{graps2000a}.
Because of their small sizes, the charged grains strongly interact with Jupiter's magnetosphere \citep{horanyi1997,gruen1998,heck1998}, and they are a natural laboratory to study dust-plasma interactions. The dust streams mostly show a dust-in-plasma behavior, while only some portions of those Galileo orbits displaying the highest dust stream fluxes (Galileo orbits E4, G7, G8, C21) satisfy the minimum requirements for a dusty plasma \citep{graps2006}. The dust streams served as a monitor of Io's volcanic plume activity \citep{krueger2003d} and as probes of the Io plasma torus \citep{krueger2003a}. Dust charging mechanisms in Io's plumes and in the jovian magnetosphere were investigated by \citet{graps2001a} and \citet{flandes2004}. Dust measurements of the Cassini spacecraft at its Jupiter flyby in 2000 showed that the grains are mostly composed of sodium chloride (NaCl) formed by condensation in Io's volcanic plumes \citep{postberg2006}. ii) Dust clouds surrounding the Galilean moons, which consist mostly of sub-micron grains \citep{krueger1999d,krueger2000a,krueger2003b}. These grains were ejected from the moons' surfaces by hypervelocity impacts of interplanetary dust particles \citep{krivov2003,sremcevic2003,sremcevic2005}. iii) Bigger micron-sized grains forming a tenuous dust ring between the Galilean moons. This group is composed of two sub-populations, one orbiting Jupiter on prograde orbits and a second one on retrograde orbits \citep{colwell1998a,thiessenhusen2000}. Most of the prograde population is maintained by grains escaping from the clouds that surround the Galilean moons \citep{krivov2002a,krivov2002b}.
iv) On 5 November 2002 and 21 September 2003 -- before Galileo was destroyed in a planned impact with Jupiter -- the spacecraft traversed Jupiter's gossamer ring twice and provided the first in-situ measurements of a dusty planetary ring \citep{krueger2003c,moissl2005,hamilton2008,krueger2009b}, which is also accessible with astronomical imaging techniques. These fly-throughs revealed previously unknown structures in the gossamer rings: a drop in the dust density between the moons Amalthea and Thebe, grains orbiting Jupiter on highly inclined orbits, and an increase in the number of small grains in the inner regions of the rings as compared to the regions further away from the planet. All these features can nicely be explained by electromagnetic forces on the grains that shape the gossamer rings \citep{hamilton2008}. \subsection{The Galileo and Ulysses dust data papers} This is the tenth paper in a series dedicated to presenting both raw and reduced data from the Galileo and Ulysses dust instruments. \citet[][hereafter Paper~I]{gruen1995a} described the reduction process of Galileo and Ulysses dust data. In the even-numbered Papers~II, IV, VI and VIII \citep{gruen1995b,krueger1999a,krueger2001a,krueger2006a} we presented the Galileo data set spanning the ten-year time period from October 1989 to December 1999. The present paper extends the Galileo data set from January 2000 to September 2003, which covers the Galileo Millennium mission and two traverses of Jupiter's gossamer ring until the spacecraft impacted Jupiter on 21 September 2003. Companion odd-numbered Papers~III, V, VII, IX and XI \citep{gruen1995c,krueger1999b,krueger2001b,krueger2006b,krueger2010b} provide the entire dust data set measured with Ulysses between 1990 and 2007. An overview of our Galileo dust data papers and mission highlights is given in Table~\ref{papers}.
\begin{center} \fbox{\bf Insert Table~\ref{papers}} \end{center} The main data products are a table of the number of all impacts determined from the particle accumulators and a table of both raw and reduced data of all ``big'' impacts received on the ground. The information presented in these papers is similar to the data which we are submitting to the various data archiving centres (Planetary Data System, NSSDC, etc.). The only difference is that the paper version does not contain the full data set of the large number of ``small'' particles, and the numbers of impacts deduced from the accumulators are typically averaged over several days. Electronic access to the complete data set, including the numbers of impacts deduced from the accumulators in full time resolution, is also possible via the world wide web: http://www.mpi-hd.mpg.de/dustgroup/. This paper is organised similarly to our previous papers. Section~\ref{mission} gives a brief overview of the Galileo mission and the dust instrument operation, with particular emphasis on the time period 2000-2003, and lists important mission events in this time interval. A description of the new Galileo dust data set for 2000-2003 together with a discussion of the detected noise and dust impact rates is given in Section~\ref{events}. Section~\ref{analysis} analyses and discusses various characteristics of the new data set. Finally, in Section~\ref{discussion} we discuss results on jovian dust achieved with this new data set, and in Section~\ref{sec_summary} we summarise our results. \section{Mission and instrument operations} \label{mission} \subsection{Galileo mission} Galileo was launched on 18 October 1989. Two flybys at Earth and one at Venus between 1990 and 1992 gave the spacecraft enough energy to leave the inner solar system. During its interplanetary voyage Galileo had close encounters with the asteroids Gaspra and Ida.
On 7 December 1995 the spacecraft arrived at Jupiter and was injected into a highly elliptical orbit about the planet, becoming the first spacecraft orbiting a planet of the outer solar system. Galileo performed 34 revolutions about Jupiter until 21 September 2003, when the spacecraft was destroyed in a planned impact with Jupiter. Galileo's trajectory during its orbital tour about Jupiter from January 2000 to September 2003 is shown in Figure~\ref{trajectory}. Galileo had regular close flybys of Jupiter's Galilean moons. Eight such encounters occurred in the 2000-2003 interval (1 at Europa, 4 at Io, 2 at Ganymede, 1 at Callisto) plus one at Amalthea (Table~\ref{event_table_1}). Galileo orbits are labelled with the first letter of the Galilean moon which was the encounter target during that orbit, followed by the orbit number. For example, ``G29'' refers to a Ganymede flyby in orbit 29. Satellite flybys always occurred within two days of Jupiter closest approach (pericentre passage). Detailed descriptions of the Galileo mission and the spacecraft were given by \citet{johnson1992} and \citet{damario1992}. \begin{center} \fbox{\bf Insert Table~\ref{event_table_1}} \end{center} \begin{center} \fbox{\bf Insert Figure~\ref{trajectory}} \end{center} Galileo was a dual-spinning spacecraft with an antenna that pointed antiparallel to the positive spin axis. During most of the initial 3 years of the mission the antenna pointed towards the Sun (Paper~II). Since 1993 the antenna was usually pointed towards Earth. Deviations from the Earth pointing direction in 2000-2003, the time period considered in this paper, are shown in Figure~\ref{pointing}. Sharp spikes in the pointing deviation occurred when the spacecraft was turned away from the nominal Earth direction for dedicated imaging observations with Galileo's cameras or for orbit trim maneuvers with the spacecraft thrusters. These spikes typically lasted several hours.
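The orbit-labelling convention described above can be sketched as a small lookup (illustrative only; the letter for the Amalthea flyby is an assumption by analogy with the Galilean-moon labels):

```python
# First letter of the flyby target, followed by the orbit number, e.g. "G29".
MOON_BY_LETTER = {
    "I": "Io", "E": "Europa", "G": "Ganymede", "C": "Callisto",
    "A": "Amalthea",  # assumed by analogy for the Amalthea flyby
}

def parse_orbit_label(label):
    """Split a Galileo orbit label like 'G29' into (target, orbit number)."""
    return MOON_BY_LETTER[label[0]], int(label[1:])

print(parse_orbit_label("G29"))  # ('Ganymede', 29)
```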
From January to September 2003, the Galileo pointing deviated significantly from the Earth pointing direction for a long time interval. Table~\ref{event_table_1} lists significant mission and dust instrument events for 2000-2003. Comprehensive lists of earlier events can be found in Papers~II, IV, VI and VIII. \begin{center} \fbox{\bf Insert Figure~\ref{pointing}} \end{center} \subsection{Dust detection geometry} \label{det_geom} The Dust Detector System (DDS) was mounted on the spinning section of Galileo and the sensor axis was offset by $60^{\circ}$ from the positive spin axis (an angle of $55^{\circ}$ was erroneously stated in earlier publications). A schematic view of the Galileo spacecraft and the geometry of dust detection is shown in the inset in Figure~\ref{trajectory}. The rotation angle measured the viewing direction of the dust sensor at the time of a dust impact. During one spin revolution of the spacecraft the rotation angle scanned through a complete circle of $360^{\circ}$. At rotation angles of $90^{\circ}$ and $270^{\circ}$ the sensor axis lay nearly in the ecliptic plane, and at $0^{\circ}$ it was close to the ecliptic north direction. DDS rotation angles are taken positive around the negative spin axis of the spacecraft which pointed towards Earth. This is done to facilitate comparison of the Galileo spin angle data with those taken by Ulysses, which, unlike Galileo, had its positive spin axis pointed towards Earth \citep{gruen1995a}. The nominal field-of-view (FOV) of the DDS sensor target was $140^{\circ}$. A smaller FOV applies to a subset of jovian dust stream particle impacts -- the so-called class~3 impacts in amplitude range AR1 \citep[][{\em cf.}~Paper~I and Section~\ref{events} for a definition of these parameters]{krueger1999c} while the nominal target size should be applied to class~2 jovian dust stream impacts.
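The detection geometry can be made concrete with a short spherical-trigonometry sketch (our own illustration, not part of the instrument software): with the sensor axis offset $60^{\circ}$ from the positive spin axis and a nominal $140^{\circ}$ FOV (half-angle $70^{\circ}$), the range of rotation angles over which a particle arriving at a given angle from the spin axis is inside the FOV follows from the spherical law of cosines.

```python
import math

SENSOR_OFFSET = math.radians(60.0)   # sensor axis vs. positive spin axis
FOV_HALF = math.radians(140.0 / 2)   # nominal 140 deg full field of view

def rotation_angle_halfwidth(theta_deg):
    """Half-width (deg) of the rotation-angle window over which a particle
    arriving at angle theta from the positive spin axis lies inside the
    nominal FOV. Returns 180 if detectable at all rotation angles, 0 if
    never detectable."""
    th = math.radians(theta_deg)
    # spherical law of cosines:
    # cos(separation) = cos(60)cos(theta) + sin(60)sin(theta)cos(dphi)
    denom = math.sin(SENSOR_OFFSET) * math.sin(th)
    if denom == 0.0:  # arrival direction along the spin axis
        return 180.0 if abs(SENSOR_OFFSET - th) <= FOV_HALF else 0.0
    c = (math.cos(FOV_HALF) - math.cos(SENSOR_OFFSET) * math.cos(th)) / denom
    if c <= -1.0:
        return 180.0   # inside the FOV for every rotation angle
    if c >= 1.0:
        return 0.0     # never inside the FOV
    return math.degrees(math.acos(c))
```

With these numbers the sketch reproduces the limits quoted below for the nominal FOV: arrival directions within $10^{\circ}$ of the positive spin axis are detectable at all rotation angles, and beyond $130^{\circ}$ ($60^{\circ}+70^{\circ}$) not at all.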
For all impacts which are not due to jovian dust stream particles a larger FOV of $180^{\circ}$ should be applied because the inner sensor side wall turned out to be almost as sensitive to larger dust impacts as the target itself \citep{altobelli2004a,willis2004,willis2005}. These different sensor fields-of-view and the corresponding target sizes are summarised in Table~\ref{tab_fov}. \begin{center} \fbox{\bf Insert Table~\ref{tab_fov}} \end{center} During one spin revolution of the spacecraft the sensor axis scanned a cone with $120^{\circ}$ opening angle towards the anti-Earth direction. Dust particles that arrived from within $10^{\circ}$ of the positive spin axis (anti-Earth direction) could be detected at all rotation angles, whereas those that arrived at angles from $10^{\circ}$ to $130^{\circ}$ from the positive spin axis could be detected over only a limited range of rotation angles. Note that these angles refer to the nominal sensor field-of-view of $140^{\circ}$. \subsection{Data transmission} \label{sec_transmission} In June 1990 the dust instrument was reprogrammed for the first time after launch and since then the instrument memory could store 46 instrument data frames (with each frame comprising the complete data set of an impact or noise event, consisting of 128 bits, plus ancillary and engineering data; {\em cf.}~Papers~I and II). The dust instrument time-tagged each impact event with an 8 bit word allowing for the identification of 256 unique steps. In 1990 the step size of this time word was set to 4.3~hours. Hence, the total accumulation time after which the time word was reset and the time labels of older impact events became ambiguous was $\rm 256 \times 4.3\,h \simeq 46\,$days. During a large fraction of Galileo's orbital mission about Jupiter dust detector data were transmitted to Earth in the so-called realtime science mode (RTS). 
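The time-tagging arithmetic above can be checked with a few lines (a back-of-envelope sketch; the constant and function names are ours):

```python
# 8-bit time word with a 4.3-hour step, as described in the text.
TIME_WORD_BITS = 8
STEP_HOURS = 4.3

steps = 2 ** TIME_WORD_BITS            # 256 distinct time labels
wrap_days = steps * STEP_HOURS / 24.0  # ~46 days until the labels repeat

def time_tag_ambiguous(days_since_reset):
    """True if the time word has wrapped since the last reset, so the
    time labels of older impact events are no longer unique."""
    return days_since_reset > wrap_days
```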
In RTS mode, dust data were read out either every 7.1 or every 21.2 minutes -- depending on the spacecraft data transmission rate -- and directly transmitted to Earth with a rate of 3.4 or 1.1 bits per second, respectively. Additionally, Galileo had the so-called record mode. In this mode data were read out from the dust instrument memory with 24 bits per second, recorded on Galileo's tape recorder and transmitted to Earth up to several weeks later. Recorded data were received during three satellite flybys in 2000 during short periods of $\mathrm{ \sim \pm 1/2\,\,hour}$ around closest approach to the satellite, and for $\rm \sim 3.8$ hours during Galileo's gossamer ring passage on 5 November 2002 (Table~\ref{event_table_1}). Details of the various data transmission modes of Galileo are also given in Table~\ref{tab_data_modes}. \begin{center} \fbox{\bf Insert Table~\ref{tab_data_modes}} \end{center} In RTS and record mode the time between two readouts of the instrument memory determined the number of events in a given time period for which their complete information could be transmitted. Thus, the complete information on each impact was transmitted to Earth when the impact rate was below one impact per either 7.1 or 21.2 minutes in RTS mode or one impact per minute in record mode, respectively (Table~\ref{tab_data_modes}). If the impact rate exceeded these values, the detailed information of older events was lost because the full data set of only the latest event was stored in the dust instrument memory. Furthermore, in RTS and record mode the time between two readouts also defined the accuracy with which the impact time is known. Hence, the uncertainty in the impact time is 7.1 or 21.2 minutes in RTS mode and about one minute in record mode, respectively. In RTS and record mode only seven instrument data frames were read out at a time and transmitted to Earth rather than the complete instrument memory. 
Six of the frames contained the information of the six most recent events in each amplitude range. The seventh frame belonged to an older event read out from the instrument memory (FN=7) and was transmitted in addition to the six new events. The position in the instrument memory from which this seventh frame was read changed for each readout so that after 40 readouts the complete instrument memory was transmitted (note that the contents of the memory may have changed significantly during the time period of 40 readouts if high event rates occurred). RTS data were usually obtained when Galileo was in the inner jovian system where relatively high dust impact rates occurred. During time intervals when Galileo was in the outer jovian magnetosphere dust data were usually received as instrument memory-readouts (MROs). MROs returned event data which had accumulated in the instrument memory over time. The contents of all 46 instrument data frames of the dust instrument were read out during an MRO and transmitted to Earth. If too many events occurred between two MROs, the data sets of the oldest events were overwritten in the memory and lost. Although the entire memory was read out during an MRO, the number of data sets of new events that could be transmitted to Earth in a given time period was much smaller than with RTS data because MROs occurred much less frequently (Table~\ref{tab_data_modes}). During times when only MROs occurred, the accuracy of the impact time was defined by the increment of the instrument's internal clock, i.e. 4.3~hours. In 2000-2003, RTS and record data were obtained during a period of 570 days (Figure~\ref{trajectory}) which amounts to about 40\% of the total almost 4-year period. During the remaining times when the dust instrument was operated in neither RTS nor record mode, a total of 59 MROs occurred at approximately 2 to 3 week intervals.
Until the end of 2002, MROs were frequent enough so that usually no ambiguities in the time-tagging occurred (i.e. MROs occurred at intervals smaller than 46 days). The last MRO for the entire Galileo mission occurred at the end of 2002 on day 02-363. In 2003 dust data were received neither as MROs nor as record data. Only RTS data were received during rather short time intervals: about one week from 03-063 to 03-070 and a total of about two days between 03-255 and 03-264 before the spacecraft hit Jupiter (Table~\ref{event_table_1}). No dust data were obtained outside these intervals in 2003. Several resets of the dust instrument's internal clock occurred during the long periods without data transmission in 2003, leading to ambiguities in the impact time of some dust impacts. One clock reset occurred during the first data gap between 02-363 and 03-063 and four resets in the second gap between 03-070 and 03-255. Furthermore, due to data transmission problems, the time tagging was lost for the events transmitted in the interval 03-063 to 03-070. Consequently, the impact time of two events which occurred between 02-363 and 03-063 is completely unknown. We have therefore set their impact time to 03-030 (these grains are indicated by horizontal bars in Figure~\ref{rot_angle}). For seven data sets transmitted between 03-063 and 03-070 the impact time could be determined with an accuracy of approximately one day from the time tagging of test pulses that were routinely performed by the dust instrument (see also Section~\ref{sec_j35}). \subsection{Dust instrument operation} During Galileo's earlier orbital mission about Jupiter strong channeltron noise was usually recorded while Galileo was within about 20$\rm R_J$ distance from Jupiter (Jupiter radius, $\rm R_J = 71,492\,km$). The details are described in Papers~VI and VIII and not repeated here. Furthermore, due to degradation of the channeltron, the high voltage setting (HV) had to be raised two times in 1999 (Paper~VIII).
At the beginning of the year 2000, i.e. at the beginning of the time period considered in this paper, the dust instrument was operated in the following nominal configuration: the channeltron high voltage was set to 1250~V (HV~=~4), the event definition status was set such that only the ion-collector channel could initiate a measurement cycle (EVD~=~I) and the detection thresholds for the charges on the ion-collector, channeltron, electron-channel and entrance grid were set (SSEN~=~0,~1,~1,~1). This configuration effectively prevented dead time of the instrument due to channeltron noise (serious channeltron noise rates with $\mathrm{CN\,>\,10}$ occurred only during seven short time intervals in orbit A34 on day 02-309 when Galileo was inside Io's orbit, each lasting between a few seconds and less than a minute. The resulting dead time is negligible because of its random occurrence and short duration). Due to degradation of the channeltron (Section~\ref{el_deg}) the channeltron high voltage was raised two additional times on days 00-309 and 01-352 in order to maintain a rather constant instrument sensitivity for dust impacts (Table~\ref{event_table_1}). During the Jupiter orbital tour of Galileo, orbit trim maneuvers (OTMs) were executed around perijove and apojove passages to target the spacecraft to close encounters with the Galilean moons. Many of these maneuvers required changes in the spacecraft attitude off the nominal Earth pointing direction (Figure~\ref{pointing}). Additionally, dedicated spacecraft turns occurred typically in the inner jovian system within a few days around perijove passage to allow for imaging observations with Galileo's cameras or to maintain the nominal Earth pointing direction. In the time interval considered in this paper a total of five spacecraft anomalies (safings) occurred on days 00-055, 02-017, 02-047, 02-274, and 02-309 (Table~\ref{event_table_1}).
Three of these anomalies occurred in the inner jovian system in the region where the highest radiation levels were collected by the spacecraft, and recovery usually took several days. Although the dust instrument continued to measure dust impacts, the collected data could not be transmitted to Earth during the recovery and most of them were lost. No reprogramming of the instrument's onboard computer was necessary in the 2000-2003 time interval. In fact, the last reprogramming for the entire Galileo mission took place on 4 December 1996 when two overflow counters were added for the so-called AR1 impacts in classes~2 and 3 (Paper~VI). With these overflow counters, all accumulator overflows could be recognized in these two channels in the 2000-2003 interval. It is very unlikely that unrecognized overflows occurred in the higher amplitude ranges. The only exception is day 02-309 when Galileo was in the gossamer ring region and the instrument continued to collect data after the spacecraft anomaly (see also Section~\ref{sec_gossamer}). Here unrecognized overflows have likely occurred in amplitude range AR1, class~1 (channel AC11) and amplitude range AR2 (except channel AC32), while the higher amplitude ranges AR3 and AR4 were most likely free of overflows. See Section~\ref{class_and_noise} for a description of the amplitude ranges and quality classes of dust impacts. \subsection{Dust instrument electronics degradation} \label{el_deg} Analysis of the impact charges and rise times measured by the dust instrument revealed strong degradation of the instrument electronics which was most likely caused by the harsh radiation environment in the inner jovian magnetosphere. A detailed analysis was published by \citet{krueger2005a}. Here we recall the most significant results: a) the sensitivity of the instrument for dust impacts and noise had dropped. 
b) the amplification of the charge amplifiers had degraded, leading to reduced impact charge values $\,Q_{\mathrm{I}}$ and $\,Q_{\mathrm{E}}$. c) drifts in the target and ion collector rise time signals led to prolonged rise times $\rm t_I$ and $\rm t_E$. d) degradation of the channeltron required increases in the channeltron high voltage (Table~\ref{event_table_1}). In particular, a) requires a time-dependent correction when comparing dust fluxes early in the Galileo Jupiter mission with later measurements; b) and c) affect the mass and speed calibration of the dust instrument. After 2000, masses and speeds derived from the instrument calibration have to be taken with caution because the electronics degradation was very severe. Only in cases where impact speeds are known from other arguments can corrected masses of particles be derived (e.g. the dust cloud measurements in the vicinity of the Galilean moons or Galileo's gossamer ring passages). On the other hand, given the uncertainty in the impact calibration of a factor of two in the speed and that of a factor of ten in the mass, the increased uncertainty due to the electronics degradation was comparatively small before 2000 (it should be noted that the dust data until end 1999 published earlier -- Papers~II, IV, VI and VIII -- remain unchanged). In particular, no corrections for dust fluxes, grain speeds and masses are necessary until end 1999 and results obtained with this data set in earlier publications remain valid. Beginning in 2000, however, the degradation became so severe that the calibrated speeds and masses have to be considered as lower and upper limits, respectively (see also Section~\ref{sec_tables}).
\section{Impact events} \label{events} \subsection{Event classification and noise} \label{class_and_noise} The dust instrument classified all events -- real dust impacts and noise events -- into one of 24 different categories (6 amplitude ranges for the charge measured on the ion collector grid and 4 event classes) and counted them in 24 corresponding 8 bit accumulators (Paper~I). In interplanetary space most of the 24 categories were relatively free from noise and only sensitive to real dust impacts. The details of the noise behaviour in interplanetary space can be found in Papers~II and IV. In the extreme radiation environment of the jovian system, a different noise response of the instrument was recognized: especially within about 20 $\rm R_J$ from Jupiter class~1 and class~2 were contaminated with noise while class~3 was almost always noise-free \citep{krueger1999c}. Analysis of the dust data set from Galileo's entire Jupiter mission showed that noise events could reliably be eliminated from class~2 \citep{krueger2005a} while most class~1 events detected in the jovian environment showed signatures of being noise events. For most of Galileo's Jupiter mission we therefore consider the class~3 and the noise-removed class~2 impacts as the complete set of dust data. Apart from a missing third charge signal -- class~3 has three charge signals and class~2 only two -- there is no physical difference between dust impacts categorized into class~2 or class~3. In particular, we usually classify all class 1 and class 0 events detected in the jovian environment as noise. The only exceptions are the passages through Jupiter's gossamer rings in 2002 and 2003 where a somewhat different noise response of the instrument was recognized \citep{moissl2005}. Here, good dust impacts could also be identified in class~1. 
In Table~\ref{tab_gossamer_noise} we show the noise identification scheme applied to the data from the gossamer ring passages obtained while Galileo was within Io's orbit. \begin{center} \fbox{\bf Insert Table~\ref{tab_gossamer_noise}} \end{center} To summarise, noise was removed from the data set we present here with two different criteria: data obtained outside Io's orbit were processed according to the criteria derived by \citet{krueger2005a}, while data obtained inside Io's orbit were noise-removed with the criteria of \citet{moissl2005} (Table~\ref{tab_gossamer_noise}). Degradation of the instrument electronics was taken into account beginning in 1997 (Paper~VIII). The derivation of the noise contamination factor $f_{\rm noi}$ for class~2 was described in Paper~VI and is not repeated here. In this paper the terms ``small'' and ``big'' have the same meaning as in Papers~IV, VI and VIII (which is different from the terminology of Paper~II). Here, we call all particles in the amplitude ranges 2 and higher (AR2-6) ``big''. Particles in the lowest amplitude range (AR1) are called ``small''. This distinction separates the small jovian dust stream particles from bigger grains which are mostly detected between the Galilean moons (see also Section~\ref{sec_rate}). Table~\ref{rate_table} lists the number of all dust impacts and noise events identified with the dust instrument in the 2000-2003 interval as deduced from the accumulators of classes 2 and 3. Depending on the event rate the numbers are given in intervals from half a day to a few weeks (the numbers with the highest time resolution are available in electronic form only and are provided to the data archiving centres). For impacts in these two classes in the lowest amplitude range AR1 the complete data sets for only 2\% of all detected events were transmitted; the remaining 98\% of events were only counted. About 32\% of all data sets for events in the higher amplitude ranges were transmitted.
We give only the number of events in classes 2 and 3 because they have been shown to contain real dust impacts during the entire Jupiter mission: class~3 is almost always noise free (although \citet{krueger1999c} found indications for a very small number of noise events in class~3, AR1, in the inner jovian system). Class~2 is strongly contaminated by noise events in the inner jovian system (within about $\rm 15\,R_J$ from Jupiter). \begin{center} \fbox{\bf Insert Table~\ref{rate_table}} \end{center} In the 2000-2003 interval Galileo had a total of eight targeted flybys at the Galilean moons plus one at Amalthea (Table~\ref{event_table_1}). During the flybys at the Galilean moons no ejecta particles from the moons could be detected because of unfavourable detection geometry. During the Amalthea flyby in A34, however, the dust instrument had the right detection geometry. Taking the recently determined mass of Amalthea \citep{anderson2005}, its Hill radius is $\mathrm{r_{Hill} \sim 130\,km}$, only slightly larger than the moon itself. Galileo's closest approach distance was 244\,km from the moon's centre so that the spacecraft did not cross the Hill sphere where an increased dust density was expected. In fact, no increase in the dust impact rate could be identified, consistent with our expectations \citep{krueger2009b}. \subsection{Dust impact rates} \label{sec_rate} Figure~\ref{rate} shows the dust impact rate recorded by the dust instrument in 2000-2003 as deduced from the class~2 and 3 accumulators. The impact rate measured in the lowest amplitude range (AR1) and the one measured in the higher amplitude ranges (AR2-6) are shown separately because they reflect two distinct populations of dust. Until early 2002 AR1 contains mostly stream particles which were measured throughout the jovian system. Bigger particles (AR2-6) were mostly detected in the region between the Galilean moons. 
Between the perijove passages I33 and A34 in 2002 a low background rate of a few times $\mathrm{10^{-4}\,min^{-1}}$ was measured in AR1 which is at least an order of magnitude higher than dust impact rates measured with Galileo and Ulysses in interplanetary space \citep{gruen1997a}. These impacts show a broad distribution over all rotation angles (Figure~\ref{rot_angle}) while stream particles were expected to approach from rotation angles around $90^{\circ}$ most of the time in 2002, similar to the earlier Galileo orbits in 2000 and 2001. These grains could be stream particles approaching from a much broader range of directions as was reported from the dust measurements with Cassini during Jupiter flyby (Sascha Kempf, personal communication). During the gossamer ring passages impacts were measured in all amplitude ranges AR1-4 (Section~\ref{sec_gossamer}). Note that the impact rate in AR1 was usually at least one to two orders of magnitude higher than that for the big particles. Diagrams showing the AR1 impact rate with a much higher time resolution in the inner jovian system are given in Figure~\ref{rate_highres}, and Galileo's gossamer ring passages are discussed in detail by \citet{krueger2009b}. \begin{center} \fbox{\bf Insert Figure~\ref{rate}} \end{center} \begin{center} \fbox{\bf Insert Figure~\ref{rate_highres}} \end{center} In the inner jovian system the impact rates of AR1 particles frequently exceeded $\rm 10\,min^{-1}$. An exceptionally large dust impact rate was recorded during the orbit G28 in the outer jovian system when Galileo was approximately $\mathrm{280\,R_J}$ away from Jupiter (Section~\ref{sec_g28} and Figure~\ref{rate_g28}). This represents one of the highest dust ejection rates of Io recorded during the entire Galileo Jupiter mission and is likely connected with a single strong volcanic eruption on Io \citep{krueger2003d,geissler2004}. 
\subsection{Event tables} \label{sec_tables} Table~\ref{dust_impacts} lists the data sets for all 224 big particles detected between 1 January 2000 and 21 September 2003 for which the complete information exists. Class~1 and class~2 particles were separated from noise by applying the criteria developed by \citet{krueger1999c,krueger2005a} and \citet{moissl2005} (Section~\ref{class_and_noise}). We do not list the small stream particles (AR1) in Table~\ref{dust_impacts} because their masses and velocities are outside the calibrated range of the dust instrument and they are far too numerous to be listed here. The complete information of a total of 5165 small (AR1) dust particles was transmitted in 2000-2003. These are mostly stream particles which are believed to be about 10~nm in size and their velocities exceed 200\,$\mathrm {km\,s^{-1}}$\ \citep{zook1996}. Any masses and velocities derived for these particles with existing calibration algorithms would be unreliable. The full data set for all 5389 particles has been submitted to the data archiving centres and is available in electronic form. A total number of 7566 events (dust plus noise in all amplitude ranges and classes) were transmitted in 2000-2003, each with a complete data set. \begin{center} \fbox{\bf Insert Table~\ref{dust_impacts}} \end{center} In Table~\ref{dust_impacts} dust particles are identified by their sequence number and their impact time. Gaps in the sequence number are due to the omission of the small particles. The time error value (TEV), which was introduced for the data set from the Jupiter mission because of the large differences in the timing accuracy of the dust instrument in the various data readout modes, is listed next (see Table~\ref{tab_data_modes} and Paper~VI for details). Then the event category -- class (CLN) and amplitude range (AR) -- are given.
Raw data as transmitted to Earth are displayed in the next columns: sector value (SEC) which is the spacecraft spin orientation at the time of impact, impact charge numbers (IA, EA, CA) and rise times (IT, ET), time difference and coincidence of electron and ion signals (EIT, EIC), coincidence of ion and channeltron signal (IIC), charge reading at the entrance grid (PA) and time (PET) between this signal and the impact. Then the instrument configuration is given: event definition (EVD), charge sensing thresholds (ICP, ECP, CCP, PCP) and channeltron high voltage step (HV). See Paper~I for further explanation of the instrument parameters, except TEV which was introduced in Paper~VI. The next four columns in Table~\ref{dust_impacts} give information about Galileo's orbit: ecliptic longitude and latitude (LON, LAT) and distance from Jupiter ($\rm D_{Jup}$, in $\rm R_J$). The next column gives the rotation angle (ROT) as described in Section~\ref{mission}. Whenever this value is unknown, ROT is arbitrarily set to 999. This occurs 71 times in the full data set that includes the small particles. Then follows the pointing direction of the instrument at the time of particle impact in ecliptic longitude and latitude ($\rm S_{LON}$, $\rm S_{LAT}$). When ROT is not valid, $\rm S_{LON}$ and $\rm S_{LAT}$ are also useless and set to 999. Mean impact velocity ($v$) and velocity error factor (VEF, i.e. multiply or divide stated velocity by VEF to obtain upper or lower limits) as well as mean particle mass ($m$) and mass error factor (MEF) are given in the last columns. For VEF $> 6$, both velocity and mass estimates are invalid and should be discarded. Beginning in 2000 the degradation of the dust instrument electronics became very severe, leading to artificially prolonged rise times and reduced charge amplitudes.
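The error-factor convention just described can be expressed as a small helper (our own hypothetical sketch; the archived tables store only the mean values and the error factors):

```python
def error_limits(value, error_factor):
    """Lower and upper limits from a mean value and its error factor
    (VEF for velocity, MEF for mass): divide or multiply the stated
    value by the factor. Returns None for factors > 6, where velocity
    and mass estimates are invalid and should be discarded."""
    if error_factor > 6:
        return None
    return (value / error_factor, value * error_factor)
```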
The calibrated mass and speed values for VEF $< 6$ listed in Table~\ref{dust_impacts} should thus be considered as lower limits for the impact velocity and upper limits for the particle mass throughout the 2000-2003 interval. No intrinsic dust charge values are given \citep{svestka1996}. Even though the charge carried by the dust grains is expected to be larger in the jovian magnetosphere than in interplanetary space, the charge measured on the entrance grid of the dust instrument has not yet given any convincing results. Reliable charge measurements for interplanetary dust grains and for dust in Saturn's E ring were recently reported for the Cassini dust detector \citep{kempf2004,kempf2006a}. These measurements may lead to an improved understanding of the charge measurements of Ulysses and Galileo in the future. Entries for the parameter PA in Table~\ref{dust_impacts} sometimes have values between 49 and 63 although the highest possible value allowed by the instrument electronics is 48 (Paper~I). This is also inherent in all Galileo and Ulysses data sets published earlier (Papers~II to IX) and it is due to a bit flip. According to our present understanding the correct PA values are obtained by subtracting 32 from all entries which have values between 49 and 63. Values of 48 and lower should remain unchanged. \section{Analysis} \label{analysis} The positive charge measured on the ion collector, $\,Q_{\mathrm{I}}$, is the most important impact parameter determined by the dust instrument because it is rather insensitive to noise. Figure~\ref{nqi} shows the distribution of $\,Q_{\mathrm{I}}$ for the full 2000-2003 data set (small and big particles together). Ion impact charges were only detected over four orders of magnitude instead of the entire range of six orders of magnitude the instrument could measure.
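The PA bit-flip correction described in Section~\ref{sec_tables} above reduces to a one-line rule; a minimal sketch (our own illustration):

```python
def corrected_pa(pa):
    """Correct the bit flip in the entrance-grid charge reading PA:
    raw values between 49 and 63 exceed the electronics maximum of 48
    and are repaired by subtracting 32; values of 48 and lower are
    left unchanged."""
    return pa - 32 if 49 <= pa <= 63 else pa
```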
Note that the saturation limit of the instrument was at about $10^{-8}\,\mathrm{C}$ but the maximum measured charge was $ \,Q_{\mathrm{I}} = 9.7 \times 10^{-11}\,\mathrm{C}$, well below the saturation limit. This is most likely due to instrument degradation \citep[Section~\ref{el_deg} and][]{krueger2005a}. The impact charge distribution of the big particles ($\,Q_{\mathrm{I}} > 10^{-13}\,\mathrm{C}$) follows a power law with index $-0.15$ and is shown as a dashed line in Figure~\ref{nqi} (if we exclude the particles from the region inside Io's orbit the slope is reduced somewhat to $-0.04$). This slope is flatter than the values of approximately $-1/3$ derived for the jovian system from the 1996-1999 Galileo data set (Papers~VI and VIII). Whether this flattening is due to changes in the particle properties or due to electronics degradation remains unclear. Note that the jovian stream particles (AR1) were excluded from the power law fit. \begin{center} \fbox{\bf Insert Figure~\ref{nqi}} \end{center} In Figure~\ref{nqi} the small stream particles ($\,Q_{\mathrm{I}} < 10^{-13}\,\rm C$) are squeezed into the two leftmost histogram bins. In order to investigate their behaviour in more detail we show their number per individual digital step separately in Figure~\ref{nqi2}. The distribution flattens for impact charges below $\rm 2\times 10^{-14}\,C$. Such a flattening was also evident in the earlier data sets (Papers~II, IV, VI and VIII), indicating that the sensitivity threshold of the dust instrument may not be sharp. The impact charge distribution for small particles with $\,Q_{\mathrm{I}} > 2\times 10^{-14}\,\rm C$ follows a power law with index $-4.7$. It is very close to the slope found from the 1996 Galileo data set ($-4.5$, Paper~VI) and somewhat steeper than the value measured in 1997-1999 ($-3.6$, Paper~VIII). The charge distribution strongly increases towards smaller impact charges.
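Power-law indices of this kind can in principle be obtained from an ordinary least-squares fit in log-log space; the sketch below is our own illustration and ignores the binning, fit-range and weighting choices of the actual analysis.

```python
import math

def loglog_slope(charges, counts):
    """Least-squares slope of log10(counts) versus log10(charge), i.e.
    the power-law index of a charge distribution (unweighted fit)."""
    xs = [math.log10(q) for q in charges]
    ys = [math.log10(n) for n in counts]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```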
Note that the distribution of the stream particles is much steeper than that of the big particles shown in Figure~\ref{nqi}. Interestingly, if we restrict the time interval to the period between 00-220 and 00-250 when Galileo was outside the jovian magnetosphere in orbit G28 the stream particles show a somewhat steeper slope of $-5.9$ (not shown here). \begin{center} \fbox{\bf Insert Figure~\ref{nqi2}} \end{center} The ratio of the channeltron charge $\,Q_{\mathrm{C}}$ and the ion collector charge $\,Q_{\mathrm{I}}$ is a measure of the channeltron amplification $A$ which is an important parameter for dust impact identification (Paper~I). The in-flight channeltron amplification was monitored in Papers~II, IV, VI and VIII for the initial ten years of the Galileo mission to identify possible degradation of the channeltron. In the earlier mission the amplification $ A = \,Q_{\mathrm{C}}/\,Q_{\mathrm{I}}$ for a channeltron high voltage setting of 1020~V (HV~=~2) determined from impacts with $ 10^{-12}{\rm\, C} \le \,Q_{\mathrm{I}} \le 10^{-10}{\rm\, C}$ was in the range $1.4 \lesssim A \lesssim 1.8$. No significant channeltron degradation was evident until the end of 1996. In the 1997-1999 interval (Paper~VIII) a value of $ A \simeq 0.7$ was found which indicated serious channeltron degradation. As a consequence, the channeltron high voltage was raised two times (on days 99-305 and 99-345) to return to the original amplification factor. Here we repeat the same analysis for the 2000-2003 interval. Figure~\ref{qiqc} shows the charge ratio $ \,Q_{\mathrm{C}}/\,Q_{\mathrm{I}}$ as a function of $\,Q_{\mathrm{I}}$ for a constant high voltage, HV, as in the previous papers. Here we show data for HV~=~6. The charge ratio $\,Q_{\mathrm{C}}/\,Q_{\mathrm{I}}$ determined for $10^{-12}{\rm\,C} \le \,Q_{\mathrm{I}} \le 10^{-10}{\rm\,C}$ is $ A \simeq 1.6$ and is obtained from 65 impacts.
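This amplification estimate can be sketched as follows. The sketch is our own illustration: it takes a simple mean over the selected impacts (the papers do not spell out the exact averaging), and the `(hv, q_i, q_c)` record layout is hypothetical, not the archived file format.

```python
def channeltron_amplification(impacts, hv_step):
    """Mean charge ratio A = Q_C / Q_I for impacts at one high-voltage
    step, restricted to 1e-12 C <= Q_I <= 1e-10 C as in the text.
    `impacts` is an iterable of (hv, q_i, q_c) tuples; returns None if
    no impact falls in the selected range."""
    ratios = [q_c / q_i for hv, q_i, q_c in impacts
              if hv == hv_step and 1e-12 <= q_i <= 1e-10]
    if not ratios:
        return None
    return sum(ratios) / len(ratios)
```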
The data for HV~=~4 and HV~=~5 (time intervals 00-001 to 00-209 and 00-209 to 01-352) give $ A \simeq 1.3$ and $ A \simeq 0.5$, respectively. These values, however, are derived from only 9 and 15 impacts, respectively, and therefore have a much lower statistical significance. The amplification for HV~=~6 is close to the value from the interplanetary cruise and the early Jupiter mission, showing that the original channeltron amplification could be roughly reestablished. Details of the dust instrument degradation due to the harsh radiation environment in the jovian magnetosphere are described by \citet[][see also Section~\ref{el_deg}]{krueger2005a}. It should be noted that the ratio $\,Q_{\mathrm{C}}/\,Q_{\mathrm{I}}$ is entirely determined by the instrument performance. It does not depend upon the properties of the detected particles. \begin{center} \fbox{\bf Insert Figure~\ref{qiqc}} \end{center} Figure~\ref{mass_speed} displays the calibrated masses and velocities of all 5389 dust grains detected in the 2000-2003 interval. Although the range of impact velocities calibrated in the laboratory extended from 2 to 70\,$\mathrm {km\,s^{-1}}$, the measured impact speeds ranged only up to about 20\,$\mathrm {km\,s^{-1}}$. This is caused by the degradation of the dust instrument electronics which led to extended rise time measurements and, hence, to impact velocities that are artificially low and calibrated grain masses that are artificially large. This becomes apparent when comparing Figure~\ref{mass_speed} with the corresponding figures in the earlier Papers II, IV, VI and VIII where the measured range of impact speeds extends up to 70\,$\mathrm {km\,s^{-1}}$.
{\em Therefore, due to the strong electronics degradation, all calibrated impact speeds and masses in the time interval considered in this paper should be considered as lower and upper limits, respectively.} Any clustering of the velocity values is due to discrete steps in the rise time measurement but this quantization is much smaller than the velocity uncertainty. For further details of the mass and velocity calibration the reader is referred to the description of the mass-velocity diagrams in our earlier papers. \begin{center} \fbox{\bf Insert Figure~\ref{mass_speed}} \end{center} The impact direction of the dust particles detected in the 2000-2003 interval is shown in Figures~\ref{rot_angle} and \ref{rot_angle_highres}. On the inbound trajectory, when Galileo approached Jupiter, the dust stream particles (AR1) were mainly detected from rotation angles $\mathrm{270\pm 70^{\circ}}$ while on the outbound trajectory the streams were detectable from $\mathrm{90\pm 70^{\circ}}$. Before 2000 the detection geometry of the streams was such that the grains could only be detected during a very limited period of time around perijove passage (Paper~VIII, Table~4 therein). This changed in 2000 when the streams became detectable from rotation angles $\mathrm{90\pm 70^{\circ}}$ during almost the entire orbit of Galileo. This is best seen in orbits G28 to C30 in 2000 and 2001. Big particles were, as in the earlier periods, mostly detected in the inner jovian system when Galileo was close to Jupiter with the exception of several impacts recorded in March 2003 at about $\mathrm{350\,R_J}$ from Jupiter (Section~\ref{sec_j35}). Note that an error occurred in our earlier rotation angle plots in Paper~VIII (Figure~9 in that paper). The corrected figure is shown in the Appendix. 
\begin{center} \fbox{\bf Insert Figure~\ref{rot_angle}} \end{center} \begin{center} \fbox{\bf Insert Figure~\ref{rot_angle_highres}} \end{center} \section{Discussion} \label{discussion} The dust data set from Galileo's entire Jupiter mission is a unique set of dust measurements from the jovian system for many years to come. Various jovian dust populations were investigated during the last 15 years which we have summarised in Section~\ref{introduction}. The present paper finalises our series of Galileo dust data papers and we discuss some particular aspects of the 2000-2003 data set. \subsection{Variability of Io's dust emission} \label{sec_plume_monitoring} Imaging observations of Io with Voyager, Galileo, Cassini and New Horizons detected at least 17 volcanic centres with related plumes \citep{porco2003,mcewen2004,spencer2007,geissler2008}. Most of the plumes were sensed through the scattering of sunlight by dust particles entrained within the plumes, and ring-shaped surface deposits on Io suggest that other plumes have been recently active as well. The dust data from the entire Galileo Jupiter mission are a unique record of the dust ejected from Io. In particular, as the plumes are the most plausible sources of the grains \citep{graps2000a}, the dust measurements monitor plume activity \citep{krueger2003d}. The Galileo dust data show a large orbit-to-orbit variation due to both systematic and stochastic changes. Systematic effects include Io's orbital motion, changes in the geometry of Galileo's orbit and in the magnetic field configuration due to the rotation of Jupiter. Stochastic variations include fluctuations of Io's volcanic activity, changes of the particle charging in the Io torus, variations in grain release from the torus, and the deformation of the outer magnetosphere in response to the variable solar wind conditions. 
It should be emphasized that the mechanisms acting on the grains in the Io torus and in particular the connected temporal variability are presently not well understood. By combining the entire Galileo dust data set, the variability due to stochastic processes could be removed and a strong flux variation with jovian local time showed up \citep{krueger2003a}, confirming earlier predictions \citep{horanyi1997}. Dust emission rates of Io were derived by \citet{krueger2003d}. After removal of the systematic variations, the total dust emission rate of Io turned out to be between $10^{-3}$ and $\mathrm{10\,kg\,s^{-1}}$, with typical values in the range 0.1 to $\mathrm{1\,kg\,s^{-1}}$. Exceptionally high dust emission rates occurred during orbits E4 (1996), C21 (1999), G28, and, to a lesser extent, also during G29 and C30. Some of these peaks in the dust emission could be related to specific plume sightings or other markers of volcanic activity on Io: The Pele plume is one of the most powerful plumes and the most steady high-temperature volcanic centre on Io. Surface changes at the Pele site were detected frequently, whereas detections of the Pele plume are relatively rare. Two detections of the Pele plume are coincident with our measurements of high dust fluxes in E4 and G29, while a low dust flux in E6 may be explained by the absence of the Pele plume \citep{mcewen1998,porco2003}. In August/September 2000 (orbit G28; Section~\ref{sec_g28}) when Galileo was far away from Jupiter, a large dust flux was observed which is likely connected with surface changes observed at the site of the Tvashtar plume \citep{krueger2003d}. Here we investigate the orbit-to-orbit variability of the dust emission pattern on much shorter timescales of days to weeks. 
As in earlier works \citep{krueger2003d} we assume a particle radius $s\,=\,10\,\mathrm{nm}$, grain density $\rho\,=\,1.5\,\mathrm{g\,cm^{-3}}$, dust grain charging to +5V in the Io torus, and calculate the effective dust sensor area from the particle dynamics based on the model of \citet{horanyi1997}. We divide the measured dust impact rate by the effective sensor area to obtain the dust flux $f$ ($\mathrm{m^{-2}\,s^{-1}}$) as a function of distance $d$ from Jupiter. If we assume that Io's dust emission, the dust charging, ejection conditions from the plasma torus and the grain speed remain constant over the time interval considered, we expect a ``dilution'' of the dust with $d^{-2}$. Dynamical modelling implies that -- after the grains are released from the Io torus -- the major acceleration occurs within approximately $\mathrm{10\,R_J}$ from Jupiter so that their speed remains basically unchanged further away from the planet. Finally, the variation of the dust flux with jovian local time is usually below a factor of five \citep{krueger2003a} and thus of minor significance here. With all these assumptions, we expect a variation of the dust flux with $d^{-2}$. It should be emphasized that here we use exactly the same assumptions for calculating dust emission rates as \citet{krueger2003d}. \begin{center} \fbox{\bf Insert Figure~\ref{rate_g29}} \end{center} In Table~\ref{tab_slopes} we list the slopes of power law fits $f \propto d^{\alpha}$ to the derived dust flux profiles. We only considered Galileo orbits where sufficiently long data sets for at least two days are available so that meaningful flux profiles could be obtained. Large variations in the flux profiles are obvious from Table~\ref{tab_slopes}. Given the overall uncertainties we believe that slopes in the range $-3 \lesssim \alpha \lesssim -1$ are still compatible with a rather constant dust ejection rate from Io and the Io torus ($\alpha = -2$). 
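The comparison against the $d^{-2}$ dilution expectation amounts to a straight-line fit in log-log space of flux versus jovicentric distance. A minimal sketch with an idealised profile; the distances and flux normalisation are illustrative numbers, not measured values:

```python
import numpy as np

# hypothetical flux profile f(d) in m^-2 s^-1 at jovicentric
# distances d in R_J, constructed as an ideal d**-2 dilution
d = np.array([20.0, 40.0, 80.0, 160.0, 280.0])
f = 3.0e-3 * (d / 20.0) ** -2

# slope alpha of the power law f ~ d**alpha
alpha = np.polyfit(np.log(d), np.log(f), 1)[0]
print(f"alpha = {alpha:.2f}")
```

An ideal constant-emission profile recovers $\alpha = -2$ exactly; measured slopes outside roughly $-3 \lesssim \alpha \lesssim -1$ then signal genuine variability of the dust release.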
In Figure~\ref{rate_g29} we show the dust flux during the G29 orbit as an example. Here the power law fit to the data gives a slope $\alpha \approx -2$, indicating that the dust release from the Io torus stayed remarkably constant for a rather long period of more than two months. Large deviations from this simple and ideal case with constant dust ejection are also obvious in the table. For example, orbits E4, E19, I32 and A34 show very flat profiles in the range $-1 \lesssim \alpha \lesssim 0$, implying that during these orbits stronger dust emissions occurred when Galileo was far away from Jupiter than when the spacecraft was closer to the planet. On the other hand, during orbits G2, G8, E14, E16, E18 and E26 Galileo experienced a stronger dust ejection when the spacecraft was in the inner jovian system (power law slopes $-7 \lesssim \alpha \lesssim -4$). Note that the time coverage of these data sets usually ranges from days to a few weeks, indicating that Io's plume activity or the dust charging and release from the Io torus, or both, frequently changed on such rather short timescales. \begin{center} \fbox{\bf Insert Table~\ref{tab_slopes}} \end{center} Dust production rates of Io calculated with the method described above are also listed in Table~\ref{tab_slopes}. It should be emphasized that within less than a week the dust release frequently changed by approximately a factor of 10, and the absolute levels of the dust emission may have been vastly different from one Galileo orbit to the next. For a detailed discussion of the total dust ejection rates from Io and correlations with individual plume sightings the reader is referred to \citet{krueger2003d} who showed that all intervals with elevated dust emission exceeding $\mathrm{\sim 1\,kg\,s^{-1}}$ (six intervals in total) can be connected with giant plume eruptions or large area surface changes on Io or both. See also Section~\ref{sec_g28}.
\subsection{Io's dust emission in August/September 2000} \label{sec_g28} In summer 2000 (orbit G28) Galileo left the jovian magnetosphere for the first time since it was injected into the jovian system in 1995 and reached a jovicentric distance of $\mathrm{\sim 280\,R_J}$ (0.13~AU). In August/September 2000, around Galileo's apojove, the dust instrument measured a surprisingly large dust impact rate exceeding $\mathrm{10\,min^{-1}}$ for about two months (Figure~\ref{rate_g28}). Similarly high fluxes were also recorded with the Cassini dust instrument at $\mathrm{\sim 0.3\,AU}$ from Jupiter when the spacecraft was approaching the planet in September 2000 (Sascha Kempf, personal communication). The dust emission from Io derived from the Galileo measurements by \citet{krueger2003d} in this time period exceeds $\sim 100\,\rm kg\, s^{-1}$. Later, when Galileo approached Jupiter again, the dust flux profile showed a surprisingly steep drop (slope $\alpha \approx 10$), implying a huge decrease in Io's dust emission. \begin{center} \fbox{\bf Insert Figure~\ref{rate_g28}} \end{center} Frequency analysis of the Galileo dust data from the first three years of the Galileo Jupiter mission (1996-1998) revealed strong 5 and 10 hour periodicities which were due to Jupiter's rotation \citep{graps2000a}. A weak ``Io footprint'' with an approximately 42 hour period caused by this moon's orbital motion about Jupiter and harmonics with Jupiter's rotation frequencies were also revealed. These data were collected mostly in the inner jovian magnetosphere between 10 and $\mathrm{60\,R_J}$. In the data obtained during the later Galileo orbits in 1999 and 2000 the Io footprint became more prominent and was evident during most Galileo orbits from E19 to G29 \citep{graps2001a}.
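The periodicity searches cited above amount to locating peaks in the power spectrum of the impact-rate time series. A toy sketch with a synthetic, evenly sampled rate follows; the amplitudes, noise level, and sampling cadence are invented for illustration:

```python
import numpy as np

# synthetic impact-rate series: a 42 h Io modulation on top of a
# 10 h Jupiter-rotation signal plus noise (all values invented)
dt = 0.5                                 # sampling step [hours]
t = np.arange(0.0, 30 * 24, dt)          # 30 days of data
rng = np.random.default_rng(1)
rate = (5.0
        + 2.0 * np.sin(2 * np.pi * t / 42.0)
        + 1.0 * np.sin(2 * np.pi * t / 10.0)
        + rng.normal(0.0, 0.3, t.size))

# classical periodogram of the mean-subtracted series
power = np.abs(np.fft.rfft(rate - rate.mean())) ** 2
freq = np.fft.rfftfreq(t.size, d=dt)     # cycles per hour
peak_period = 1.0 / freq[np.argmax(power)]
print(f"strongest period: {peak_period:.0f} h")
```

The real Galileo rate series is unevenly sampled, so the published analyses require methods suited to gapped data rather than a plain FFT; this sketch only illustrates how a dominant Io-period signature stands out against the Jupiter-rotation harmonics.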
In the data from a total of 26 Galileo orbits measured between 1996 and 2000, a total of 11 orbits showed a clear modulation with Io's frequency, 3 showed a weak Io modulation, while the remaining 12 orbits showed no Io signature at all \citep{graps2001a}. In many, but not all, cases the missing Io signature coincided with time periods when a rather weak dust flux was measured. In the data set from August/September 2000, collected between days 00-220 and 00-250 at much larger jovicentric distances, Io's signature dominated all other frequency signatures including the 5 and 10 hour periods caused by Jupiter's rotation \citep{graps2001b}. These data provide direct evidence for Io being the source for the majority of the jovian dust stream particles during this time period. The presence of Io's orbital frequency implies that Io is a localised source of charged dust particles because charged dust from diffuse sources would couple to Jupiter's magnetic field and appear in frequency space with Jupiter's rotation frequency and its harmonics. The period of strong dust emission seen in August/September 2000 coincided with enhanced neutral gas production from the Io torus, suggesting a coupling mechanism between gas and dust ejection, although the relation between the dust emissions and the production of neutral gas is not known \citep{delamere2004}. Furthermore, there was a significant reduction in the neutral source beginning in October 2000, again coinciding with the strong drop in the dust emission as derived from our Galileo dust data. \subsection{Galileo-Cassini joint dust stream measurements} On 30 December 2000 the Cassini spacecraft flew by Jupiter, providing a unique opportunity for a two-spacecraft time-of-flight measurement (Cassini-Galileo) of particles from one collimated stream from the jovian dust streams. 
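The time-of-flight idea can be illustrated with round numbers: a collimated stream detected first near Europa's orbit and again outside the magnetosphere yields a speed from the radial separation and the arrival-time lag. The distances and lag below are hypothetical round values, not the actual measurement:

```python
R_J = 71492.0                      # Jupiter radius [km]
r_galileo = 12.0 * R_J             # first detection, inside magnetosphere
r_cassini = 140.0 * R_J            # later detection, outside magnetosphere
lag_s = 6.4 * 3600.0               # hypothetical arrival-time difference

# speed assuming purely radial outward motion of the stream
speed = (r_cassini - r_galileo) / lag_s   # km/s
print(f"stream speed: {speed:.0f} km/s")
```

In practice the inference is harder, since the two instruments sample different grain sizes with different phases relative to Jupiter's rotation.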
The goal was to detect particles in a stream first with Galileo when the spacecraft was inside the jovian magnetosphere close to the orbit of Europa (about $\mathrm{12\,R_J}$), and particles in potentially the same stream later by Cassini outside the magnetosphere (at $\mathrm{140\,R_J}$) \citep[see ][for a preliminary analysis]{graps2001b}. The Cassini data from the Jupiter flyby imply that particles of different sizes have different phases with respect to Jupiter's rotation (Sascha Kempf, personal communication), a result which is also seen in earlier Galileo data \citep{gruen1998}. Comparison of the measurements from both dust instruments, however, is hampered by the higher detection sensitivity of the Cassini detector with respect to the Galileo sensor. Both instruments have detected stream particles with different sizes and, hence, likely different phases. The analysis is ongoing (Hsiang-Wen Hsu, personal communication), and more detailed modelling to describe the phase relation of different-sized particles, taking into account the 3-dimensional structure of the dust emission pattern from the jovian system, is necessary. Our present preliminary analysis indicates particle speeds of about $\mathrm{400\,km\,s^{-1}}$. This value is in agreement with speeds for 10 nm particles as derived from dynamical modelling \citep{hamilton1993a,horanyi1993a}, and earlier studies of the jovian dust stream dynamics \citep{zook1996}. \subsection{Large dust grains far from Jupiter} \label{sec_j35} On 29 December 2002 (day 02-363) the last MRO of the dust instrument memory occurred for the remainder of the Galileo mission. The next time we received dust data was during the time interval 4 to 11 March 2003 (days 03-063 to 03-070). These data were obtained as RTS data. We identified a total of nine large dust impacts in amplitude ranges AR2-4 which occurred between 29 December 2002 and 11 March 2003.
Due to corruption of the readings from the instrument's internal clock and one clock reset in this time interval, two of these impacts have an exceptionally large uncertainty in the impact time of 66 days. We could reconstruct the impact time of the remaining seven impacts with a higher accuracy from accumulator readings obtained with test pulses which were routinely performed by the dust instrument \citep[see][for more details]{krueger2005a}. This gave impact times for five impacts with about one day uncertainty and for two impacts with 4.3 hour uncertainty (Table~\ref{dust_impacts}). The reconstruction of these partially corrupted data implies that at least seven impacts occurred during a period of only four days when Galileo was outside Jupiter's magnetosphere in interplanetary space at approximately $\mathrm{350\,R_J}$ from Jupiter. This is a surprisingly large number of impacts at such a large distance from Jupiter given the Galileo measurements from the earlier Jupiter mission (Papers~VI and VIII) and from Galileo's interplanetary cruise. Potential sources for these grains are, for example, collisional ejecta from an (unknown) small jovian satellite or a cometary trail crossed by the spacecraft. Judging from the impact charge distribution of the measured grains, jovian stream particles (Figure~\ref{nqi2}) can most likely be ruled out because a much larger number of impacts should have occurred in the lower amplitude range AR1. In fact, only a few impacts were recognized in AR1 during this time. A more detailed analysis of these impacts has to be postponed to a future investigation. \subsection{Galileo's gossamer ring passages} \label{sec_gossamer} On 5 November 2002 (orbit A34, day 02-309) Galileo traversed Jupiter's gossamer rings for the first time and approached the planet to $\mathrm{2\,R_J}$. During this ring passage the spacecraft had a close flyby at Amalthea at 244\,km distance from the moon's centre, well outside Amalthea's Hill sphere.
During approach to Jupiter dust data were collected with the highest possible rate (record mode; Section~\ref{sec_transmission}) while Galileo was within Io's orbit (i.e. within $\mathrm{\sim 5.9\,R_J}$). Shortly after the Amalthea flyby a spacecraft anomaly at $\mathrm{2.33\,R_J}$ jovicentric distance prevented the collection of further Galileo dust data. Although the dust instrument continued to measure dust impacts after the anomaly, the data were not written to the tape recorder on board and, hence, the majority of them were lost. Only the data sets of a few dust impacts were received from an MRO on day 02-322. These events could be shown to have occurred during the gossamer ring passage but their impact times are uncertain by a few hours (Table~\ref{dust_impacts}). The traverse of the optically visible ring from its outer edge at $\mathrm{\sim 3.75\,R_J}$ until the spacecraft anomaly occurred lasted about 100\,min, and the total gossamer ring traverse from $\mathrm{\sim 3.75\,R_J}$ inbound to $\mathrm{\sim 3.75\,R_J}$ outbound took approximately six hours. During the A34 ring passage the lowest amplitude range in class~2 (AC21) was strongly contaminated with noise, while the higher amplitude ranges showed little or no noise contamination. In addition, many class~1 events recognised within Io's orbit showed signatures of being true dust impacts as well. The noise identification scheme applied to the dust data from both Galileo gossamer ring passages is described in Section~\ref{class_and_noise} and given in Table~\ref{tab_gossamer_noise}. With the new noise identification scheme, complete data sets of 90 dust impacts were identified in the Galileo recorded data from the gossamer ring region. Several hundred more events were counted only and their data sets were lost, in particular in AR1.
The completeness of the transmitted ring data varied between 100\% in the highest amplitude ranges (AR2-4) in the faint ring extension beyond Thebe's orbit down to only 4\% for the lowest amplitude range (AR1) in the more populated Amalthea ring. In record mode, the dust instrument memory was read out once per minute, and this readout frequency determined the spatial resolution of the measurements: within one minute Galileo moved about 1,800\,km through the ring which corresponds to about 1,100\,km (or $\mathrm{0.015\,R_J}$) in radial direction. This is the highest spatial resolution achievable in the ring with the Galileo in-situ measurements. Dust measurements in the gossamer rings were also obtained during Galileo's second ring traverse on 21 September 2003 (orbit J35) a few hours before Galileo impacted Jupiter. The data sets of about 20 dust impacts were successfully transmitted to Earth as RTS data. This time the spatial resolution was only about 14,000\,km (or $\mathrm{0.2\,R_J}$). The data from both gossamer ring traverses allowed the first direct comparison of in-situ measurements with the properties inferred from inverting optical images. A detailed analysis of these data was published by \citet{krueger2009b}. Below we summarise the most important results. Images of the rings imply inclinations of the grain orbits of $i\approx 1^{\circ}$ for the visible 5 to $\mathrm{10\mu m}$ grains \citep{showalter2008}. The expected rotation angle for ring particles on circular prograde uninclined jovicentric orbits was $ \simeq 90^{\circ}$. The rotation angles measured within Io's orbit and in particular during the ring passages were -- to a first approximation -- consistent with these expectations. However, the width of the rotation angle distribution was much wider than the expected width for the geometry conditions during both gossamer ring passages. What was the reason for such a broad distribution in impact directions?
One possibility was the sensor side wall which was very sensitive to dust impacts \citep{altobelli2004a,willis2005}. Taking the sensor side wall into account (Table~\ref{tab_fov}), the expected width in rotation angle was still significantly smaller than the observed width. Another potential explanation was impacts onto nearby spacecraft structures like the magnetometer boom, the EPD and PLS instruments which masquerade as particles with high inclinations. We are convinced that such an explanation can be ruled out for two reasons \citep{moissl2005}: First, the impact parameters (charge rise times, charge signal coincidences, etc.) of grains measured with rotation angles outside the nominal field-of-view for low-inclination particles do not show significant differences compared to grains inside the nominal field-of-view. Second, the data from both Galileo ring traverses show similarly broad rotation angle patterns although they had different detection geometries. During the first flyby the magnetometer boom obscured the field-of-view while during the second flyby this was not the case \citep{krueger2009b}. The most likely explanation for the observed structure in the rotation angle pattern is the particle dynamics: The wide range in impact directions as well as a drop measured in the impact rate profile immediately interior to Thebe's orbit and a gradual increase in the relative abundance of small particles closer to Jupiter can best be explained by a shadow resonance caused by varying particle charge on the day and night side of Jupiter, driving particles onto high inclination orbits \citep{hamilton2008}. In fact, inclinations up to $20^{\circ}$ nicely explain the measured impact directions for most grains. Comparison of our in-situ measurements with imaging observations showed that the in-situ measurements preferentially probe the large population of small sub-micron particles while the images are sensitive to larger grains with radii of at least several microns.
The grains form a halo of material faint enough to be invisible to imaging, but populated enough to be detectable with the Galileo sensor. The faint gossamer ring extension previously imaged to about $\mathrm{3.75\,R_J}$ was detected out to at least $\mathrm{5\,R_J}$, indicating that ejecta from Thebe spread much further and that particle orbits acquire higher eccentricities than previously known. Both the gap in the ring and the faint ring extension indicate that the grain dynamics is strongly influenced by electromagnetic forces. For a more detailed discussion of the ring particle dynamics the reader is referred to \citet{hamilton2008}. \section{Conclusions} \label{sec_summary} In this paper, which is the tenth in a series of Galileo and Ulysses dust data papers, we present data from the Galileo dust instrument for the period January 2000 to September 2003. In this time interval the spacecraft completed nine revolutions about Jupiter in the jovicentric distance range between 2 and $\rm 370\,R_J$ (Jupiter radius, $\rm R_J = 71,492\,km$). On 21 September 2003 Galileo was destroyed in a planned impact with Jupiter. The data sets of a total of 5389 (or 2\% of the total) recorded dust impacts were transmitted to Earth in this period. Many more impacts (98\%) were counted with the accumulators of the instrument but their complete information was lost because of the low data transmission capability of the Galileo spacecraft. Together with 15861 impacts recorded in interplanetary space and in the Jupiter system between Galileo's launch in October 1989 and December 1999 published earlier \citep{gruen1995b,krueger1999a,krueger2001a,krueger2006a}, the complete data set of dust impacts measured with the dust detector during Galileo's entire mission contains 21250 impacts.
Galileo has been an extremely successful dust detector, measuring dust streams flowing away from Jupiter, a tenuous dust ring throughout the jovian magnetosphere and Jupiter's gossamer rings over the almost four year timespan of data considered in this paper. Most of the time the jovian dust streams dominated the overall impact rate, reaching maxima of more than $\mathrm{10\,min^{-1}}$ in the inner jovian system. A surprisingly large impact rate up to $\mathrm{100\,min^{-1}}$ was measured in August/September 2000 (G28 orbit) when the spacecraft was at about $\mathrm{280\,R_J}$ distance from Jupiter. This strong dust emission was most likely connected with a heavy volcanic eruption on Io \citep{krueger2003d,geissler2003,geissler2004}. A strong variation in the release of neutral gas from the Io torus in this time interval was also reported by \citet{delamere2004}. Io's dust emission as derived from the measured dust fluxes varied by many orders of magnitude, with typical values ranging between 0.01 and $\mathrm{1\,kg\,s^{-1}}$ of dust ejected. In August/September 2000 the derived dust emission exceeded $\mathrm{100\,kg\,s^{-1}}$. The investigation of the dust impact rate profiles measured for the jovian stream particles as a function of radial distance from Jupiter revealed large orbit-to-orbit variations and variability by a factor of 10 or more on timescales of days to a few weeks. This implies strong variability of the dust release from Io or the Io torus or variability of the jovian magnetosphere on such short timescales. A surprisingly large number of impacts of larger, micron-sized dust grains was detected within a 4-day time interval far away from Jupiter in March 2003 when Galileo was in interplanetary space. The source of these grains remains unclear.
Finally, in November 2002 and September 2003 Galileo traversed Jupiter's gossamer rings twice, providing the first direct opportunity to compare in-situ dust measurements with the results obtained from remote imaging. These flybys revealed previously unknown structures in the gossamer rings \citep{krueger2009b}: a drop in the dust density between the moons Amalthea and Thebe, grains orbiting Jupiter on highly inclined orbits and an increase in the number of small grains in the inner regions of the rings as compared to the regions further away from the planet. All these features can nicely be explained by electromagnetic forces on the grains that shape the gossamer rings \citep{hamilton2008}. Strong degradation of the dust instrument electronics was recognised in the Galileo dust data \citep{krueger2005a}. It was most likely caused by the harsh radiation environment in the jovian magnetosphere and led to a degradation of the instrument sensitivity for noise and dust detection during the Galileo mission. The Galileo data set obtained until the end of 1999 (Papers~VI and VIII) was not seriously affected by this degradation. In the time interval 2000 to 2003 which is the subject of this paper, however, the electronics degradation became so severe that the instrument calibration does not give reliable impact speeds and masses of the dust particles anymore. Instead, only lower limits for the impact speed and upper limits for the grain mass, respectively, can be given. The only exceptions are dust impacts whose impact speeds can be derived by other means \citep[e.g. impacts in the gossamer rings;][]{krueger2009b}. On the other hand, a reduction of the channeltron amplification was counterbalanced by four increases of the channeltron high voltage during the entire Jupiter mission (two in 1999, one each in 2000 and 2001) to maintain stable instrument operation.
Even though this is the final paper in our series of Galileo dust data papers published during the last 15 years, the evaluation of this unique data set is continuing. A list of specific open questions raised in this and earlier data papers includes: \begin{itemize} \item {\em Electromagnetic interaction and phase relation of different sized stream particles:} Dust grains with different sizes have a different susceptibility to electromagnetic interaction with the jovian magnetosphere. Different-sized grains released from a source in the inner jovian system at the same time are expected to arrive at Galileo at a different phase of Jupiter's rotation \citep{gruen1998}. This rather simple picture is further complicated by the grains' charging history. Studies of the phase relation may lead to better constraints on the grain size distribution and may give new insights into the grains' electromagnetic interaction. The phase relation may turn out to be essential to understand the Galileo-Cassini joint dust streams measurements. \item {\em Galileo-Cassini joint dust streams measurements:} Originally designed as a two-spacecraft time-of-flight measurement of one collimated stream from the jovian dust streams, this experiment turned out to be more complicated to analyse than anticipated. More detailed modelling of the 3-dimensional structure of the dust stream emission pattern from the jovian system is necessary to describe the phase relation of different-sized particles and to understand these unique measurements. \item {\em ``Big'' micron-sized particles:} Impacts of micron-sized dust grains were preferentially detected in the inner jovian system between the Galilean moons. Two sub-populations -- one orbiting Jupiter on prograde and one on retrograde orbits -- were identified in earlier analyses \citep{thiessenhusen2000}. The derived ratio in number density was approximately 4:1 with the majority of grains being on prograde orbits.
At the time, however, only about half of the entire Galileo dust data set from Jupiter was available. Given that the detection geometry of the dust instrument changed with time during the mission, re-evaluation of the full data set from the entire Galileo Jupiter mission would be worthwhile to verify the abundance of grains on retrograde orbits. \item {\em Dust-plasma interaction:} Very preliminary comparison of the Galileo dust measurements from the gossamer ring passages with energetic particle data from the same period has revealed some interesting correlations between both data sets (Norbert Krupp, personal communication). New insights into the dust-plasma interaction and particle dynamics can be expected from combined studies of the dust data and other Galileo particles and fields data. \end{itemize} \clearpage \hspace{1cm} {\bf Acknowledgements.} We dedicate this work to the memory of Dietmar Linkert who passed away in spring 2009. He was Principal Engineer for space instruments at MPI f\"ur Kernphysik including the dust instruments flown on the HEOS-2, Helios, Galileo, Ulysses and Cassini missions. His friends and colleagues around the world appreciated his experience and sought his professional advice. The authors wish to thank the Galileo project at NASA/JPL for effective and successful mission operations. This research was supported by the German Bundesministerium f\"ur Bildung und Forschung through Deutsches Zentrum f\"ur Luft- und Raumfahrt e.V. (DLR, grant 50\,QJ\,9503\,3). Support by MPI f\"ur Kernphysik and MPI f\"ur Sonnensystemforschung is also gratefully acknowledged. \section*{ERRATUM} Due to an error in Paper~VIII, all panels of Figure~9 in that paper have wrong labels on the vertical axis. Furthermore, the third panel (data of 1999) erroneously shows the dataset of 1997. We apologize for this error and show the corrected plots in Figure~\ref{rot_old}. \begin{center} \fbox{\bf Insert Figure~\ref{rot_old}} \end{center} \clearpage
\section{Introduction} The standard orientation based unification model of active galactic nuclei (AGN; \citealt{1993ARA&A..31..473A,1995PASP..107..803U}) classifies the Seyfert category of AGN into two types, namely Seyfert 1 and Seyfert 2 galaxies, based on the orientation of the viewing angle. According to this model, the observational difference between Seyfert 1 and Seyfert 2 galaxies is explained by the inclination of the line of sight with respect to the dusty torus in them. Seyfert 1 galaxies are those viewed at lower inclination angles, while Seyfert 2 galaxies are viewed at higher inclination, with their central region completely blocked by the dusty molecular torus that surrounds the broad line region (BLR). The detection of a hidden BLR was first reported in NGC 1068, making it the first known Type 2 AGN. This was based on spectro-polarimetric observations, which revealed the presence of broad emission lines in polarized light \citep{1985ApJ...297..621A}. X-ray observations also provide evidence for the presence of the obscuring torus in AGN \citep{1991PASJ...43..195A}, with large X-ray column densities seen in Seyfert 2 galaxies. \begin{figure} \centering \begin{minipage}{1.05\columnwidth} \centering \includegraphics[width=\textwidth]{nustar_spectra_79.jpeg} \end{minipage}% \caption{The eight epochs of {\it NuSTAR} FPMA spectra plotted together.} \label{figure-1} \end{figure} AGN emit across wavelengths, and X-ray emission has been observed from all categories of AGN \citep{1993ARA&A..31..717M}. However, a large fraction of AGN are obscured in X-rays \citep{2012AdAst2012E..17B, 2017NatAs...1..679R}. The obscured AGN are further classified as Compton Thin and Compton Thick based on the equivalent hydrogen column density ($\rm{N_H}$) along the line of sight.
In a Compton Thick AGN, the central engine is surrounded by heavily obscuring dust and gas with $\rm{N_H}$ $\geq$ $10^{24}$ $cm^{-2}$, embedded in a dusty torus located a few parsec ($\sim$ 0.1 $-$ 10 pc) from the central source \citep{2015ARA&A..53..365N}. In a Compton Thick Seyfert 2 galaxy, reflection from the torus produces a reflection hump at around 15$-$30 keV and reveals the presence of a neutral Fe K$\alpha$ emission line at 6.4 keV with an equivalent width of $\sim$1 keV (\citealt{1994MNRAS.267..743G,2016MNRAS.456L..94M}). However, the nature of this obscuring material is not static. In many AGN the observed X-ray variability is likely caused by variations in the circumnuclear material \citep{2002ApJ...571..234R, 2016ApJ...831..145Y}. X-ray emission from AGN is believed to originate from regions close to the central supermassive black hole. Models of the X-ray emitting region include the hot corona situated in the vicinity of the accretion disk (\citealt{1991ApJ...380L..51H,1993ApJ...413..507H}) as well as the AGN relativistic jet that emanates along the black hole rotation axis \citep{2006ARA&A..44..463H,2019ARA&A..57..467B}. It is likely that the contribution of each of these physical processes to the observed X-ray emission differs among AGN.
The observed X-ray spectrum from AGN consists of many components such as (i) a power law component believed to be due to the inverse Compton scattering of the optical and ultra-violet accretion disk photons by the hot electrons in the corona (\citealt{1991ApJ...380L..51H,1993ApJ...413..507H}), (ii) a soft excess at energies below 1 keV, which could be due to a warm ($kT_e$ = 1 keV) and optically thick ($\tau$ = 10 - 20) corona \citep{2004MNRAS.349L...7G,2018A&A...611A..59P} or due to relativistically blurred reflection \citep{2019ApJ...871...88G}, (iii) a reflection hump beyond a few keV due to scattering of X-rays by the accretion disk or distant material \citep{1994MNRAS.267..743G} and (iv) the fluorescent Fe K$\alpha$ line with an equivalent width of $\sim$1 keV \citep{1994MNRAS.267..743G,2016MNRAS.456L..94M}. In spite of the numerous X-ray studies on AGN, the exact origin of the X-ray emission in them is still not understood. This also includes the size, shape and location of the X-ray corona in AGN. Parameters that can put constraints on the nature of the X-ray corona in AGN are the power law index ($\Gamma$) and the high energy cut-off ($E_{cut}$) in the X-ray continuum. This $E_{cut}$ is related to the temperature of the Comptonizing electrons ($kT_e$) in the hot corona via E$_{cut}$ = 2$-$3 $kT_e$ \citep{2001ApJ...556..716P}, although this relation has been reported to deviate among AGN \citep{2014ApJ...783..106L, 2019A&A...630A.131M, 2022A&A...662A..78P}. One of the obstacles to obtaining good estimates of E$_{cut}$ for a large sample of AGN is the lack of high signal-to-noise data at energies beyond 50 keV, where the cut-off in the spectrum is defined.
Though more measurements are now available from instruments such as {\it NuSTAR} \citep{2015MNRAS.451.4375F,2018MNRAS.481.4419B,2018ApJ...866..124K,2018A&A...614A..37T,2018MNRAS.480.1819R,2018ApJ...856..120R,2019MNRAS.484.5113R,2020ApJ...901..111K,2020ApJ...905...41B,2021A&A...655A..60A,2021MNRAS.506.4960H,2022arXiv220200895K}, it is important to increase such measurements on a larger number of sources, firstly to obtain good estimates of E$_{cut}$ and secondly to find better constraints on the correlation of E$_{cut}$ with other physical properties of the sources. Also, variation in the temperature of the corona is now known in a few radio-quiet AGN \citep{2018ApJ...863...71Z,2020MNRAS.492.3041B,2021ApJ...921...46B,2021MNRAS.502...80K,2022MNRAS.tmp..417W,2022A&A...662A..78P}, however, it is not clear if it is shown by all AGN. This is due to the lack of such studies on many AGN, mostly attributed to the paucity of homogeneous multi-epoch data on a large number of sources. Therefore it is of great importance to find more sources that are known to show variations in $kT_e$.
\begin{table} \centering \caption{Log of {\it NuSTAR} observations.} \label{table-1} \begin{tabular}{cccc} \hline OBSID & Epoch & Date & Exposure Time \\ & & & (secs) \\ \hline 60002030002 & A & 2012-12-18 & 57850 \\ 60002030004 & B & 2012-12-20 & 48556 \\ 60002030006 & C & 2012-12-21 & 19461 \\ 60002033002 & D & 2014-08-18 & 52055 \\ 60002033004 & E & 2015-02-05 & 53685 \\ 60302003002 & F & 2017-07-31 & 49979 \\ 60302003004 & G & 2017-08-27 & 52549 \\ 60302003006 & H & 2017-11-06 & 49691 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Log of {\it XMM-Newton} observations.} \label{table-2} \begin{tabular}{cccc} \hline OBSID & Epoch & Date & Exposure Time \\ & & & (secs) \\ \hline 0111200101 & A & 2000-07-29 & 42258 \\ 0111200102 & B & 2000-07-30 & 46429 \\ 0740060201 & C & 2014-07-10 & 63997 \\ 0740060301 & D & 2014-07-18 & 57600 \\ 0740060401 & E & 2014-08-19 & 54000 \\ 0740060501 & F & 2015-02-03 & 54600 \\ \hline \end{tabular} \end{table} NGC 1068 is one of the most studied Seyfert 2 galaxies. Situated at a redshift of $z$ = 0.0038 \citep{1999ApJS..121..287H}, it is powered by a black hole of mass $M_{BH} = 1.6 \times 10^7 M_{\odot}$ \citep{2006A&A...455..173P}. In X-rays the source has been studied in the past \citep{2004A&A...414..155M, Bauer_2015, 2016MNRAS.456L..94M, 2020MNRAS.492.3872Z}. This source was first observed in the X-ray band by {\it Ginga}, and the observation revealed the presence of a broad neutral Fe K$\alpha$ line \citep{1989PASJ...41..731K} with an equivalent width of $\sim$ 1.3 keV. Later on, {\it ASCA} observations resolved the Fe lines into neutral and ionized components \citep{1994PASJ...46L..71U, 1997MNRAS.289..443I}. It is known as an emitter of high energy $\gamma$-ray radiation in the MeV$-$GeV range \citep{2020ApJS..247...33A} and has also been reported as a neutrino source \citep{2020PhRvL.124e1103A}.
In the past, the hard X-ray source spectrum was fitted with a two reflector model \citep{1997A&A...325L..13M, 1999MNRAS.310...10G, Bauer_2015}. The central engine of the source is found to be completely obscured by the dusty torus with a column density of $\rm{N_H}$ $\geq$ $10^{25}$ $cm^{-2}$ \citep{2000MNRAS.318..173M}, therefore, the observer can only see the scattered emission along the line of sight. This scattered emission is commonly thought to originate from two types of reflectors: the \say{cold} reflector component arises from Compton scattering of the primary X-ray emission off the neutral circumnuclear material, while the second, \say{warm} ionized reflector component arises from Compton scattering off the heavily ionized material that acts as the \say{mirror} of the primary emission \citep{2004A&A...414..155M}. Using multi-epoch X-ray observations, \cite{Bauer_2015} fitted the spectra of NGC 1068 using the two reflector model along with different emission lines, a radiative recombination continuum and the off-nuclear point source emission. From a joint {\it XMM-Newton} and {\it NuSTAR} fit of the NGC 1068 high energy ($>$ 4 keV) spectra, \cite{2016MNRAS.456L..94M} detected excess flux above 20 keV in the August 2014 observation, higher by 32$\pm$6 \% with respect to the earlier December 2012 and the later February 2015 observations. This transient excess above 20 keV in the NGC 1068 spectra was ascribed to a drop in the absorbing column density from $\rm{N_H}$ $>$ 8.5 $\times$ $10^{24}$ $cm^{-2}$ in the 2012 spectra to (5.9$\pm$0.4) $\times$ $10^{24}$ $cm^{-2}$. The authors thus caught the source during an unveiling period in which the obscuring material moved temporarily out of the line of sight and the source was found in its highest flux state. Recently, \cite{2020MNRAS.492.3872Z} presented the spectral analysis of the {\it NuSTAR} data taken between July 2017 and Feb 2018 to check for spectral variability.
From the varying column density found on timescales of 1 to 6 months, the authors inferred the presence of a clumpy torus structure surrounding the source. Using {\it Swift}-XRT data the authors also detected an ultra-luminous X-ray source at a distance of $\sim$2 kpc from the nuclear region of NGC 1068. \begin{figure*} \centering \begin{minipage}{2.10\columnwidth} \centering \includegraphics[width=\textwidth]{lightcurve1200new.jpeg} \end{minipage}% \caption{The {\it NuSTAR} light curves of NGC 1068 in three energy bands, 4$-$10 keV (first panel), 10$-$20 keV (second panel) and 20$-$60 keV (third panel). The HR1 and HR2 vs time are plotted in the last two panels. The black dashed lines are the mean of the count rate and HR. The shaded region in each panel is the mean errors in the count rate and HR.} \label{figure-2} \end{figure*} \begin{figure*} \centering \begin{minipage}{1.05\columnwidth} \centering \includegraphics[width=\textwidth]{HR1_new.jpeg} \end{minipage}% \begin{minipage}{1.05\columnwidth} \centering \includegraphics[width=\textwidth]{HR2_new.jpeg} \end{minipage} \caption{Left panel: The relation between HR1 and count rate in the 4$-$60 keV band. Right panel: The relation between HR2 and count rate in the 4$-$60 keV band. The red dashed lines in both panels are the linear least squares fit to the data.} \label{figure-3} \end{figure*} \begin{figure*} \centering \begin{minipage}{2.10\columnwidth} \centering \includegraphics[width=\textwidth]{lightcurve_xmm.jpeg} \end{minipage}% \caption{{\it XMM-Newton EPIC PN} light curves of NGC 1068 in two energy bands, 0.2$-$2 keV (top) and 2$-$4 keV (middle). The HR vs time is plotted in the bottom panel.
The black dashed line and the shaded region in each panel are the mean value of the count rate or HR and the corresponding errors, respectively.} \label{figure-4} \end{figure*} \begin{figure} \centering \begin{minipage}{1.05\columnwidth} \centering \includegraphics[width=\textwidth]{HR_XMM.jpeg} \end{minipage}% \caption{The relation between HR and count rate in the 0.2$-$4 keV band. The red dashed line is the linear least squares fit to the data.} \label{figure-5} \end{figure} Though the X-ray emission from NGC 1068 has been analysed in the past (\citealt{Bauer_2015}, \citealt{2016MNRAS.456L..94M}, \citealt{2020MNRAS.492.3872Z}), the source has not been studied for variation in the temperature of the corona. \cite{Bauer_2015}, from a joint fit of 2$-$195 keV data from different instruments, reported an $\rm{E_{cut}}$ of 128$^{+115}_{-44}$ keV. Recently \cite{2021MNRAS.506.4960H} jointly fit the {\it XMM-Newton} (OBSID-0740060401) and {\it NuSTAR} (OBSID-60002033002) data and reported an $\rm{E_{cut}}$ of 28.4$^{+7.7}_{-4.0}$ keV. In this work, taking advantage of the multiple epochs of data available from {\it NuSTAR} (along with near simultaneous {\it XMM-Newton} data at certain epochs of the {\it NuSTAR} observations), we carried out for the first time an investigation of the variation, if any, in the temperature of the corona. In this paper, we present results of our variability analysis of NGC 1068, from observations carried out by {\it NuSTAR} between 2012 and 2017. We also present our results on the spectral analysis of the same {\it NuSTAR} data set in conjunction with observations from {\it XMM-Newton}. The aim was to determine the temperature of the corona in this source and its variation, if any. The paper is organized as follows. In Section 2, we discuss the X-ray observations with {\it NuSTAR} and {\it XMM-Newton} and the reduction of the data; the analysis is presented in Section 3, followed by the summary of the results in the final section.
\section{Observations and Data Reduction} \subsection{{\it NuSTAR}} To date, NGC 1068 has been observed nine times by {\it NuSTAR} \citep{2013ApJ...770..103H} with its two co-aligned telescopes with the focal plane modules A (FPMA) and B (FPMB) respectively. In one of the epochs, in the year 2018, an off-nuclear ultra-luminous X-ray source was detected \citep{2020MNRAS.492.3872Z} at a distance of about $30''$ from the nuclear region of NGC 1068. Barring this epoch, we considered eight epochs of data for this work. The details of these eight epochs of observations are given in Table \ref{table-1}. To visualize the spectral features of these observations, the best fit Model 2 FPMA spectra (see Section 4) are plotted together in Fig. \ref{figure-1}. We reduced the {\it NuSTAR} data in the 3$-$79 keV band using the standard {\it NuSTAR} data reduction software NuSTARDAS\footnote{https://heasarc.gsfc.nasa.gov/docs/nustar/analysis/nustar\_swguide.pdf} v1.9.7 distributed by HEASARC within HEASoft v6.29. Considering the passage of the satellite through the South Atlantic Anomaly we selected SAACALC = \say{2}, SAAMODE = \say{optimized} and also excluded the tentacle region. The calibrated, cleaned, and screened event files were generated by running the {\tt nupipeline} task using the CALDB release 20210701. To extract the source counts we chose a circular region of radius $50''$ centered on the source. Similarly, to extract the background counts, we selected a circular region of the same radius away from the source on the same chip to avoid contamination from source photons. We then used the {\tt nuproducts} task to generate energy spectra, response matrix files (RMFs) and auxiliary response files (ARFs), for both the hard X-ray detectors housed inside the corresponding focal plane modules FPMA and FPMB. \subsection{{\it XMM-Newton}} NGC 1068 was observed by {\it XMM-Newton} for eight epochs during the years 2000 to 2015.
Of these, we used the six data sets taken between 2000 and 2015 for the timing analysis. For the spectral analysis we used only the three sets of data taken in 2014, due to their simultaneity with at least one epoch of the {\it NuSTAR} observations. We chose to use only the {\it EPIC-PN} data for the extraction of the source and background spectra. The log of the OBSIDs used in this work is given in Table \ref{table-2}. We used SAS v1.3 for the data reduction. Only single events (\say{PATTERN==0}) with quality flag=0 were selected. The event files were filtered to exclude background flares, selected from time ranges where the 10$-$15 keV count rates in the PN camera exceeded 0.3 c/s. Source spectra were extracted from an annular region between the inner and outer radii of $15''$ and $30''$ centered on the nucleus. Background photons were selected from a source-free region of equal area on the same chip as the source. Here we note that for the source extraction, choosing a circular region of radius $30''$ produced pile up in the first two OBSIDs. However, pile up was not noticed in the other 4 epochs. To avoid pile up and to maintain uniformity in the data reduction we chose to extract the source and background from an annular region for all the six epochs. We constructed RMFs and ARFs using the tasks {\it RMFGEN} and {\it ARFGEN} for each observation. \begin{table*} \caption{Mean count-rate and the mean HR in different energy bands of NGC 1068 obtained from the light curves (see Fig. \ref{figure-2}) and the results of the correlation between HR and the total count rate (see Fig.
\ref{figure-3}).} \label{table-4} \centering \begin{tabular}{lcccccccccc} \hline OBSID & Epoch & \multicolumn{3}{c}{Mean count rate} & \multicolumn{2}{c}{Mean HR} & \multicolumn{2}{c}{HR1} & \multicolumn{2}{c}{HR2} \\ & & 4$-$10 keV & 10$-$20 keV & 20$-$60 keV & HR1 & HR2 & r & p & r & p \\ \hline 60002030002 & A & 0.16 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.05 $\pm$ 0.01 & 0.37 $\pm$ 0.09 & 0.85 $\pm$ 0.30 & 0.10 & 0.43 & 0.10 & 0.40 \\ 60002030004 & B & 0.16 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.05 $\pm$ 0.01 & 0.38 $\pm$ 0.09 & 0.86 $\pm$ 0.27 & -0.18 & 0.20 & -0.18 & 0.19 \\ 60002030006 & C & 0.16 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.05 $\pm$ 0.01 & 0.42 $\pm$ 0.09 & 0.80 $\pm$ 0.24 & -0.25 & 0.26 & 0.19 & 0.38 \\ 60002033002 & D & 0.16 $\pm$ 0.02 & 0.07 $\pm$ 0.01 & 0.06 $\pm$ 0.01 & 0.41 $\pm$ 0.10 & 0.99 $\pm$ 0.32 & 0.04 & 0.78 & -0.12 & 0.34 \\ 60002033004 & E & 0.16 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.05 $\pm$ 0.01 & 0.38 $\pm$ 0.10 & 0.80 $\pm$ 0.27 & -0.16 & 0.21 & -0.01 & 0.92 \\ 60302003002 & F & 0.15 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.05 $\pm$ 0.01 & 0.41 $\pm$ 0.11 & 1.00 $\pm$ 0.32 & -0.07 & 0.58 & -0.02 & 0.90 \\ 60302003004 & G & 0.15 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.07 $\pm$ 0.01 & 0.41 $\pm$ 0.10 & 1.12 $\pm$ 0.37 & -0.16 & 0.22 & 0.07 & 0.57 \\ 60302003006 & H & 0.16 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.06 $\pm$ 0.01 & 0.42 $\pm$ 0.10 & 0.96 $\pm$ 0.30 & 0.09 & 0.52 & 0.16 & 0.22 \\ \hline \end{tabular} \end{table*} \section{Timing Analysis} \subsection{{\it NuSTAR}} For the timing analysis of the source, we utilized the data from {\it NuSTAR} and generated background subtracted light curves, with multiple corrections (e.g. bad pixel, livetime) applied to the count rates, in three energy bands, namely 4$-$10 keV, 10$-$20 keV and 20$-$60 keV, with a bin size of 1.2 ksec. The light curves in the different energy bands, along with variations in the hardness ratios (HRs), are given in Fig. \ref{figure-2}.
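The rebinning of background-subtracted count rates into 1.2 ks bins described above can be sketched as follows. This is a minimal illustration only, not the NuSTARDAS machinery; the function name and the error combination (quadrature over the samples in each bin) are our assumptions:

```python
import numpy as np

def bin_light_curve(time, rate, err, bin_size=1200.0):
    """Rebin a light curve into fixed-width time bins (seconds),
    averaging the count rate and combining errors in quadrature."""
    time, rate, err = map(np.asarray, (time, rate, err))
    edges = np.arange(time.min(), time.max() + bin_size, bin_size)
    idx = np.digitize(time, edges) - 1
    t_out, r_out, e_out = [], [], []
    for b in range(len(edges) - 1):
        sel = idx == b
        if not sel.any():
            continue                      # skip empty bins (data gaps)
        t_out.append(0.5 * (edges[b] + edges[b + 1]))
        r_out.append(rate[sel].mean())
        e_out.append(np.sqrt(np.sum(err[sel] ** 2)) / sel.sum())
    return np.array(t_out), np.array(r_out), np.array(e_out)
```

The same helper can be applied per energy band (4$-$10, 10$-$20 and 20$-$60 keV) before forming the hardness ratios.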
To check for variability in the generated light curves we calculated the fractional root mean square variability amplitude ($F_{var}$; \citealt{2002ApJ...568..610E,2003MNRAS.345.1271V}) for each epoch. $F_{var}$ is defined as $F_{var} = \sqrt{\frac{V^2 - \overline{\sigma^2}}{\overline{x}^2}}$, where $V^2 = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \overline{x})^2$ is the sample variance and $\overline{\sigma^2} = \frac{1}{N} \sum_{i=1}^{N} \sigma_{i}^2$ is the mean square error in the flux measurements. Here, $x_i$ is the observed value in counts per second, $\overline{x}$ is the arithmetic mean of the $x_i$ measurements, and $\sigma_i$ is the error in each individual measurement. The error in $F_{var}$ was estimated following \cite{2003MNRAS.345.1271V}. For a binning choice of 1.2 ksec, the calculated $F_{var}$ values indicate that the source did not show any significant variations within epochs. This is also evident in Fig. \ref{figure-2}. Shown by black dashed lines in Fig. \ref{figure-2} is the mean brightness of the source at each epoch determined from the light curves. These mean values are given in Table \ref{table-4}. From the light curve analysis it is evident that the source has not shown variation in the softer bands (4$-$10 keV and 10$-$20 keV) during the five years of data analysed in this work. However, variation is detected in the hard band (20$-$60 keV) (Fig. \ref{figure-2}). This is also very clear in Fig. \ref{figure-1}.
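The $F_{var}$ computation defined above can be sketched compactly; this is our own minimal implementation of the quoted formulae, with the uncertainty combined following \cite{2003MNRAS.345.1271V} (the exact error expression used in the analysis may differ in detail):

```python
import numpy as np

def fractional_variability(rate, err):
    """Fractional rms variability amplitude F_var and its error.
    rate: count rates x_i; err: measurement errors sigma_i."""
    rate = np.asarray(rate, dtype=float)
    err = np.asarray(err, dtype=float)
    n = rate.size
    mean = rate.mean()
    s2 = rate.var(ddof=1)          # sample variance V^2
    mse = np.mean(err ** 2)        # mean square error <sigma^2>
    excess = s2 - mse              # excess variance
    if excess <= 0.0:
        return 0.0, 0.0            # no intrinsic variability detected
    fvar = np.sqrt(excess / mean ** 2)
    # uncertainty on F_var (Vaughan et al. 2003, Appendix B)
    term1 = np.sqrt(1.0 / (2.0 * n)) * mse / (mean ** 2 * fvar)
    term2 = np.sqrt(mse / n) / mean
    fvar_err = np.sqrt(term1 ** 2 + term2 ** 2)
    return fvar, fvar_err
```

A non-positive excess variance, as found here within individual epochs, is reported as $F_{var} = 0$, i.e. no detectable intrinsic variability.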
\begin{table*} \caption{Results of the variability analysis in two energy bands of {\it XMM-Newton}.} \label{table-6} \centering \begin{tabular}{lcccccc} \hline OBSID & Epoch & \multicolumn{2}{c}{Mean count rate} & Mean HR & r & p \\ & & 0.2$-$2 keV & 2$-$4 keV & & & \\ \hline 0111200101 & A & 14.50$\pm$0.24 & 0.34$\pm$0.04 & 0.023$\pm$0.003 & -0.03 & 0.88 \\ 0111200201 & B & 15.28$\pm$0.26 & 0.35$\pm$0.04 & 0.023$\pm$0.003 & -0.06 & 0.76 \\ 0740060201 & C & 14.86$\pm$0.25 & 0.34$\pm$0.04 & 0.023$\pm$0.003 & -0.16 & 0.45 \\ 0740060301 & D & 14.85$\pm$0.25 & 0.33$\pm$0.04 & 0.022$\pm$0.003 & -0.11 & 0.45 \\ 0740060401 & E & 14.94$\pm$0.25 & 0.33$\pm$0.04 & 0.022$\pm$0.003 & -0.01 & 0.96 \\ 0740060501 & F & 14.82$\pm$0.30 & 0.31$\pm$0.05 & 0.021$\pm$0.003 & -0.06 & 0.71 \\ \hline \end{tabular} \end{table*} In the two bottom panels of Figure \ref{figure-2}, we show the evolution of the two hardness ratios, HR1 and HR2, over the duration of the observations analysed in this work. HR1 and HR2 are defined as HR1 = C(10$-$20)/C(4$-$10) and HR2 = C(20$-$60)/C(10$-$20), where C(4$-$10), C(10$-$20) and C(20$-$60) are the count rates in 4$-$10 keV, 10$-$20 keV, and 20$-$60 keV respectively. For each epoch, the mean hardness ratio is shown as a black dashed line in Figure \ref{figure-2} and the mean values are given in Table \ref{table-4}. As the errors are large, no variation in the hardness ratio of the source could be ascertained between epochs. We also looked for a correlation, if any, between the hardness ratios HR1 and HR2 and the broad band count rate in the 4$-$60 keV band with a time binning of 1.2 ksec. This is shown in Fig. \ref{figure-3}, together with the linear least squares fits to the data. The calculated values of the Pearson correlation coefficient (r) and the probability of no correlation (p) are given in Table \ref{table-4}.
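The hardness ratios and their correlation with the broad band count rate can be sketched as follows; the synthetic count rates and helper names below are ours, and the Pearson coefficient is computed directly rather than taken from a statistics package:

```python
import numpy as np

def hardness_ratios(c_soft, c_mid, c_hard):
    """HR1 = C(10-20)/C(4-10); HR2 = C(20-60)/C(10-20)."""
    c_soft, c_mid, c_hard = map(np.asarray, (c_soft, c_mid, c_hard))
    return c_mid / c_soft, c_hard / c_mid

def pearson_r(x, y):
    """Pearson correlation coefficient between two samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2)))

# synthetic 1.2 ks binned count rates in the three NuSTAR bands
rng = np.random.default_rng(1)
c_soft = rng.normal(0.16, 0.02, 48)   # 4-10 keV
c_mid = rng.normal(0.06, 0.01, 48)    # 10-20 keV
c_hard = rng.normal(0.05, 0.01, 48)   # 20-60 keV

hr1, hr2 = hardness_ratios(c_soft, c_mid, c_hard)
total = c_soft + c_mid + c_hard                 # 4-60 keV band
r1 = pearson_r(total, hr1)                      # correlation of HR1 with brightness
slope, intercept = np.polyfit(total, hr1, 1)    # linear least-squares fit
```

For uncorrelated synthetic rates like these, r stays close to zero, mirroring the null results in Table \ref{table-4}.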
Analysing those values, we found no variation of the hardness ratios with the brightness of the source. \subsection{{\it XMM-Newton}} Using the six epochs of {\it XMM-Newton EPIC PN} data from Table \ref{table-2}, we generated light curves in two energy bands, 0.2$-$2.0 keV and 2.0$-$4.0 keV, using a bin size of 1.2 ksec. The light curves along with the variation of HR are shown in Fig. \ref{figure-4}. Here, HR is defined as the ratio of C(2.0$-$4.0) to C(0.2$-$2.0), where C(2.0$-$4.0) and C(0.2$-$2.0) are the count rates in the 2.0$-$4.0 keV and 0.2$-$2.0 keV energy bands respectively. From the $F_{var}$ analysis we found no significant variation within the epochs of observation. The black dashed lines in the first two panels of Fig. \ref{figure-4} are the mean values of the count rate in the different energy bands. The mean values of the count rate (see Table \ref{table-6}) indicate that in the soft band (0.2$-$2 keV) the source was in its brightest state in epoch B. There is thus variation in the soft band, with the source being brighter in epoch B relative to epoch A. However, in the hard band we did not find any significant change in the source brightness between epochs. In the same Table \ref{table-6}, the constant values of the mean HR between epochs also argue for no variation in the spectral state of the source. In Fig. \ref{figure-5} HR is plotted against the total count rate in the 0.2$-$4.0 keV band. The results of the linear least squares fit are given in Table \ref{table-6}. From the p values we conclude that no significant correlation between HR and the total count rate is found in NGC 1068. \section{Spectral analysis} In addition to characterizing the flux variability of NGC 1068, we also aimed in this work to investigate the variation in the temperature of the corona of the source. \subsection{NuSTAR only spectral fit} To check for variation in the temperature of the corona in NGC 1068, we first concentrated on the {\it NuSTAR} data alone.
For that we fitted the simultaneous FPMA/FPMB data for the eight epochs of observations available in the {\it NuSTAR} archive. To avoid host galaxy contamination we used the {\it NuSTAR} data in the 4$-$79 keV energy band \citep{2016MNRAS.456L..94M}. For the spectral analysis, using XSPEC version 12.12.0 \citep{1996ASPC..101...17A}, we fitted the background subtracted spectra from FPMA and FPMB simultaneously (without combining them), allowing the cross normalization factor to vary freely during the spectral fits. The spectra were binned to have a minimum of 25 counts per energy bin using the task {\it grppha}. To get an estimate of the model parameters that best describe the observed data, we used the chi-square ($\chi^2$) statistic, and for calculating the errors in the model parameters we used the $\Delta\chi^2$ = 2.71 criterion, i.e. the 90\% confidence range in XSPEC. \begin{figure*} \centering \begin{minipage}{1.05\columnwidth} \centering \includegraphics[width=\textwidth]{epochG_bestfit1b.jpeg} \end{minipage}% \begin{minipage}{1.05\columnwidth} \centering \includegraphics[width=\textwidth]{epochG_bestfit.jpeg} \end{minipage} \caption{The best fit epoch G (with highest flux) unfolded spectra along with the data to model ratio using Model 1b (left panel) and Model 2b (right panel).} \label{figure-7} \end{figure*} \begin{figure*} \centering \begin{minipage}{1.05\columnwidth} \centering \includegraphics[width=\textwidth]{ratio_model1b.jpeg} \end{minipage}% \begin{minipage}{1.05\columnwidth} \centering \includegraphics[width=\textwidth]{ratio_model2b.jpeg} \end{minipage} \caption{The data to model ratio of all eight epochs of {\it NuSTAR} observations using Model 1b (left panel) and Model 2b (right panel).} \label{figure-8} \end{figure*} In all our model fits to the observed spectra, {\it const} represents the cross calibration constant between the two focal plane modules FPMA and FPMB.
To model the line of sight galactic absorption, the {\it phabs} component was used, with the neutral hydrogen column density ($\rm{N_H}$) frozen to 3.32$\times$ $10^{20}$ atoms $cm^{-2}$ as obtained from \cite{2013MNRAS.431..394W}. To take into account the strong emission lines seen in the observed {\it NuSTAR} spectra, we used four {\it zgauss} components in XSPEC. In all the {\it zgauss} models considered to fit the emission lines, the line energies and the normalization were kept free during the fitting, while $\sigma$ was kept frozen at 0.1 keV. The redshift (z) for all the model components was kept fixed at 0.0038 \citep{2010A&A...518A..10V}. The inclination angle ($i$), which is the angle the observer's line of sight makes with the axis of the AGN, was fixed at 63$^{\circ}$ \citep{2004A&A...414..155M} in all the models. \subsubsection{Model 1} NGC 1068 has been extensively studied in the hard X-ray ($>$3 keV) band earlier \citep{1997A&A...325L..13M, 2004A&A...414..155M, Bauer_2015,2016MNRAS.456L..94M, 2020MNRAS.492.3872Z}, mostly using the two component reflector model. The main essence of this model is to fit (a) the cold and distant reflector using {\it pexrav}/{\it pexmon}/({\it MYTS+MYTL}), and (b) the warm, ionized Compton-scattered component using a {\it power-law/cutoff power-law} with Compton down scattering, under the assumption that the electron temperature is much smaller than the photon energy ($m_{e}c^{2}$). A few Gaussian components were also used to model the various neutral and ionized emission lines present in the source spectra. In this work too, to find the coronal temperature of the source, we modeled the source spectra using two different reflector components. \begin{enumerate} \item The self consistent Comptonization model {\it xillverCP} \citep{2014ApJ...782...76G} that takes into account the cold and distant reflector, and the neutral Fe K$\alpha$ ($\sim$ 6.4 keV) and Fe K$\beta$ ($\sim$ 7.06 keV) lines.
\item The warm ionized Compton scattered reflection, using two models separately. At first, following \cite{Poutanen_1996}, we used a Compton scattered component ($f_{scat}$) for an arbitrary intrinsic continuum ($f_{intr}$). As the intrinsic continuum, we used a {\it power-law}, which was modified for Compton down scattering using equation (1) given in \cite{Poutanen_1996}. Secondly, we used the self consistent {\it xillverCP} model with a high ionization parameter ($log\xi$ = 4.7) to model the warm, ionized reflector. Here we note that fixing the ionization parameter to other values ($log\xi$ = 3.0, 3.5 and 4.0) did not produce any significant change in the derived best fit values. It only self-consistently added a few ionization lines in the model, but the spectral shape remained unchanged. Using the Compton scattered component in place of a warm mirror may affect the spectra by adding curvature below 2 keV \citep{Bauer_2015}, but the spectral modeling of the {\it NuSTAR} data above 4 keV with or without inclusion of the Compton down scattering in the warm reflector did not produce any significant effect on the derived best fit values. We also arrived at a similar conclusion using the {\it XMM-Newton} and {\it NuSTAR} data jointly below 4 keV. This is discussed in detail in Section 4.2. \item Gaussian components to take care of the ionized Fe ($\sim$ 6.57, 6.7 and 6.96 keV), Ni K$\alpha$ ($\sim$ 7.47 keV), Ni K$\beta$ ($\sim$ 8.23 keV) and ionized Ni ($\sim$ 7.83 keV) emission lines.
\end{enumerate} In XSPEC, the two models used in the fitting of the spectra have the following form, \begin{dmath} Model 1a = const*phabs*(f1*zpo+xillverCP+zgauss+zgauss+zgauss+zgauss) \end{dmath} and \begin{dmath} Model 1b = const*phabs*(xillverCP_{warm}+xillverCP_{cold}+zgauss+zgauss+zgauss+zgauss) \end{dmath} Here we note that from the data to model ratio plots we did not find any prominent residuals near the line emission regions, but in all epochs we noticed residuals at around 6.0 keV (see Fig. \ref{figure-7} and Fig. \ref{figure-8}). This feature at 6.0 keV has no physical origin but might appear in the {\it NuSTAR} data due to calibration issues \citep{2020MNRAS.492.3872Z}. \noindent {\bf Model 1a:} For the spectral fit with this model, we used the formula (f1) obtained from \cite{Poutanen_1996} to consider the Compton down scattering of the intrinsic continuum ({\it zpo}). Following \cite{Poutanen_1996}, \begin{equation} f1 \propto \tau_{sc}[1+\mu^2+x x_1(1-\mu)^2] \end{equation} In Equation 3, $x = h\nu/m_ec^2$ is the dimensionless photon energy, $\mu = \cos i$, $x_1 = x/[1 - x(1-\mu)]$ and $\tau_{sc}$ is the Thomson optical depth of the scattering material. We considered the constant of proportionality $\times$ $\tau_{sc}$ as another constant (p1) and kept it as a free parameter in the spectral analysis. During the spectral fits the photon index ($\Gamma$) and the normalization of the two reflectors were tied together. For the cold reflector we modeled only the reflection component by fixing the reflection fraction ($R$) to $-$1 throughout. The parameters that were kept free are the relative iron abundance ($AF_{e}$), $\rm{kT_{e}}$ and p1. The constant p1 was allowed to vary between 0.0 and 10.0 during the fit. To model the cold and neutral reflector, the ionization parameter ($\xi$) was frozen at 1.0 (i.e. log$\xi$ = 0). The best fit values obtained using Model 1a are given in Table \ref{table-8}.
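Equation 3 above is straightforward to evaluate numerically. The sketch below is our own helper (with the proportionality constant absorbed into $\tau_{sc}$, playing the role of the free constant p1) and shows the energy dependence of the down-scattering factor across the {\it NuSTAR} band:

```python
import numpy as np

def f1_factor(energy_kev, incl_deg, tau_sc=1.0):
    """Compton down-scattering factor of Eq. 3:
    f1 = tau_sc * [1 + mu^2 + x*x1*(1 - mu)^2],
    with x = E / (m_e c^2), mu = cos(i), x1 = x / [1 - x(1 - mu)]."""
    me_c2 = 511.0                                  # electron rest energy [keV]
    x = np.asarray(energy_kev, dtype=float) / me_c2
    mu = np.cos(np.radians(incl_deg))
    x1 = x / (1.0 - x * (1.0 - mu))
    return tau_sc * (1.0 + mu ** 2 + x * x1 * (1.0 - mu) ** 2)
```

For $i = 63^{\circ}$ the energy-dependent term is negligible at 4 keV and contributes well below one per cent even at 79 keV, so over the {\it NuSTAR} band f1 is nearly constant, consistent with treating p1 as a single free normalization.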
\noindent {\bf Model 1b:} Here, we used {\it xillverCP} twice to model the warm and cold reflection respectively. For the warm and ionized reflector we used ${\it xillverCP_{warm}}$ with the ionization parameter fixed at its highest value ($log\xi = 4.7$), while for the cold and distant reflection (${\it xillverCP_{cold}}$) the reflector was considered to be neutral, with $log\xi$ fixed at 0.0. In the modelling of the source spectra using Model 1b we tied $\Gamma$ and $\rm{kT_{e}}$ of the two reflectors together. At first, the normalizations of the two reflectors were also tied together, and the best fit produced a $\chi^2$ of 637 for 484 degrees of freedom for epoch A. We then modelled the epoch A spectrum by varying the two normalizations independently and obtained an improved $\chi^2$ of 504 for 483 degrees of freedom. For the other epochs we therefore carried out the model fit leaving the two normalizations untied. For both reflectors we fixed $R$ to $-$1 to consider the reflection components only. During the fitting, $AF_{e}$ was tied between the two reflectors. The best fit unfolded spectra along with the residuals of the data to Model 1b fit to the epoch G spectra are given in the left panel of Fig. \ref{figure-7}. The best fit results of Model 1b are given in Table \ref{table-8}. For all the epochs the residuals of the fit are given in the left panel of Fig. \ref{figure-8}. \subsubsection{Model 2} Following \cite{Bauer_2015} we then used the \say{leaky torus} model, in which it is assumed that there is a finite probability for the primary emission to escape the medium without being scattered or absorbed, partially punching through above 20 $-$ 30 keV. In a Compton Thick AGN, the direct transmitted continuum, if at all present, is not observable below $\sim$ 10 keV. In Model 2, in addition to the two reflectors, this transmitted or direct component was taken into account.
We assumed that a directly transmitted intrinsic continuum ({\it zpo}) was attenuated by the line of sight Compton-thick absorber with a column density of $\rm{N_{H}}$ = $10^{25}$ atoms $cm^{-2}$ and an inclination angle ($\theta_{incl}$) of $90^{\circ}$ (for an edge-on torus). Here also, we used {\it xillverCP} with $\log\xi$ = 0.0 to model the cold reflection, and either {\it f1*zpo} \citep{Poutanen_1996} (Model 2a) or {\it xillverCP} with $\log\xi$ = 4.7 (Model 2b) to take care of the warm and ionized reflection. In XSPEC the models take the following forms, \begin{dmath} Model 2a = const*phabs*(zpo*MYTZ+f1*zpo+xillverCP+zgauss+zgauss+zgauss+zgauss) \end{dmath} and, \begin{dmath} Model 2b = const*phabs*(zpo*MYTZ+xillverCP_{warm}+xillverCP_{cold}+zgauss+zgauss+zgauss+zgauss) \end{dmath} In both models, we used the {\it MYTZ} component from the {\it MYtorus} set of models \citep{2009MNRAS.397.1549M, 2012MNRAS.423.3360Y} to fit the zeroth-order continuum. This is often called the \say{direct} or \say{transmitted} continuum, which is simply the fraction of the intrinsic continuum that leaves the medium without being absorbed or scattered. This energy dependent zeroth-order component ({\it MYTZ}) is used as a multiplicative factor applied to the intrinsic continuum. {\it MYTZ} is purely a line-of-sight quantity and does not depend on the geometry and covering fraction of the out-of-sight material. Its parameters are the equivalent column density ($\rm{N_{H}}$), the inclination of the obscuring torus along the line of sight ($\theta_{incl}$) and the redshift of the source. During the spectral fits $\rm{N_{H}}$ and $\theta_{incl}$ were frozen to $10^{25}$ $cm^{-2}$ and $90^{\circ}$ respectively. \noindent {\bf Model 2a:} During the spectral fitting with this model, $\Gamma$ and the normalizations of all the transmitted and reflected components were tied together. To model the warm and cold reflectors we used {\it f1*zpo} and {\it xillverCP} respectively.
The parameters for these two components were treated in the same fashion as described for Model 1a. The best fit values obtained from Model 2a are given in Table \ref{table-8}. \noindent {\bf Model 2b:} Here, $\Gamma$ of the transmitted and scattered components was tied together, but the normalizations were varied independently to achieve acceptable fit statistics. For the warm and cold reflectors the model parameters were treated in the same way as described for Model 1b. Using this model we obtained better fit statistics than with Model 1b, with no prominent residuals present in the hard energy part (see the right panel of Fig. \ref{figure-7}). The best fit values are given in Table \ref{table-8}. For Model 2b we calculated the flux in three energy bands, i.e. 4$-$10 keV, 10$-$20 keV and 20$-$79 keV. The fluxes obtained for the FPMA module ($F^{FPMA}$) are given in Table \ref{table-8}. In the 20 $-$ 79 keV band, on epoch D (August 2014) the source was brighter by about 22\% and 28\% compared to the mean brightness in the December 2012 (epochs A, B and C) and February 2015 (epoch E) FPMA spectra respectively. The source brightness again increased in the 20 $-$ 79 keV band on epoch G (August 2017), when it was found to be brighter by about 32\% and 36\% relative to the December 2012 and February 2015 spectra respectively. For all the epochs, the best fit data-to-model residuals are plotted in the right panel of Fig. \ref{figure-8}.
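The brightness excesses quoted above can be checked against the 20$-$79 keV FPMA fluxes listed in Table \ref{table-8}. A small sketch, assuming the fractional excess is taken relative to the brighter epoch (an assumption on our part; this convention reproduces the quoted percentages to within about one percentage point):

```python
# 20-79 keV FPMA fluxes for Model 2b (Table 8), in 1e-11 erg cm^-2 s^-1
flux = {"A": 2.23, "B": 2.50, "C": 2.55, "D": 3.09, "E": 2.24, "G": 3.54}

mean_dec2012 = (flux["A"] + flux["B"] + flux["C"]) / 3.0  # epochs A, B, C

def excess_pct(epoch, ref_flux):
    """Per cent by which `epoch` exceeds `ref_flux`, relative to `epoch`."""
    return 100.0 * (flux[epoch] - ref_flux) / flux[epoch]

for epoch in ("D", "G"):
    print(epoch,
          round(excess_pct(epoch, mean_dec2012), 1),   # vs December 2012 mean
          round(excess_pct(epoch, flux["E"]), 1))      # vs February 2015
```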
\begin{figure} \hspace{-1.5 cm} \begin{minipage}{1.00\columnwidth} \centering \includegraphics[width=\textwidth]{PNdata.jpeg} \end{minipage}% \caption{The three 2014 {\it EPIC-PN} spectra plotted together in the 4$-$9 keV band.} \label{figure-10} \end{figure} \begin{figure*} \centering \begin{minipage}{2.10\columnwidth} \centering \includegraphics[width=\textwidth]{ratio_NGC1068.jpeg} \end{minipage}% \caption{Best fit data-to-model ratio for {\it constant*phabs*zphabs*(cutoffpl+pexrav+zgauss(8))} (top panel); {\it constant*phabs*zphabs*(f1*cutoffpl+pexrav+zgauss(8))} (second panel from the top); {\it constant*phabs*zphabs*(MYTZ*cutoffpl+f1*cutoffpl+pexrav+zgauss(8))} (third panel from the top) and {\it constant*phabs*zphabs*(zpo*MYTZ+f1*zpo+xillverCP+zgauss(6))} (bottom panel) fits to the {\it XMM-Newton} and {\it NuSTAR} (epoch D FPMA) spectra in the 3$-$79 keV band.} \label{figure-13} \end{figure*} \begin{figure*} \centering \begin{minipage}{1.03\columnwidth} \centering \includegraphics[width=\textwidth]{xmm410.jpeg} \end{minipage}% \begin{minipage}{1.09\columnwidth} \centering \includegraphics[width=\textwidth]{PN479new.jpeg} \end{minipage} \caption{Left panel: Best fit {\it EPIC-PN} combined spectra in the 4$-$9 keV band. Right panel: {\it XMM-Newton} and {\it NuSTAR} (epoch D FPMA) joint best fit spectra in the 4$-$79 keV band.} \label{figure-11} \end{figure*} \begin{table} \caption{Best fit line energies along with normalizations.
Here, the line energy (E) is in keV and the normalization ($N_{E}$) is in units of $10^{-5}$ photons keV$^{-1}$ cm$^{-2}$ s$^{-1}$} \label{table-7} \begin{tabular}{ccccccr} \hline Parameter & & Line & & Value \\ \hline E1 & & Fe Be-like K$\alpha$ & & 6.60$^{+0.03}_{-0.03}$ \\ $N_{E1}$ & & & & 1.85$^{+0.47}_{-0.44}$ \\ E2 & & Fe He-like K$\alpha$ & & 6.75$^{+0.02}_{-0.02}$ \\ $N_{E2}$ & & & & 2.86$^{+0.46}_{-0.50}$ \\ E3 & & Fe H-like K$\alpha$ & & 7.02$^{+0.01}_{-0.02}$ \\ $N_{E3}$ & & & & 1.35$^{+0.19}_{-0.18}$ \\ E4 & & Ni K$\alpha$ & & 7.53$^{+0.03}_{-0.03}$ \\ $N_{E4}$ & & & & 0.51$^{+0.15}_{-0.15}$ \\ E5 & & Ni ionized He-like K$\alpha$ & & 7.86$^{+0.04}_{-0.04}$ \\ $N_{E5}$ & & & & 0.46$^{+0.15}_{-0.15}$ \\ E6 & & Ni K$\beta$ & & 8.15$^{+0.08}_{-0.08}$ \\ $N_{E6}$ & & & & 0.28$^{+0.14}_{-0.14}$ \\ \hline \end{tabular} \end{table} \begin{table*} \caption{Results of Model 1a, Model 1b, Model 2a and Model 2b fits to the simultaneous {\it NuSTAR} FPMA $-$ FPMB spectra. The $\rm{kT_{e}}$ and the line energies (E1, E2, E3 and E4) are in units of keV. Column densities ($N_{H}$) are in units of $cm^{-2}$. Flux (F) is expressed in units of $10^{-11}$ erg $cm^{-2}$ $s^{-1}$. Normalizations of components (N) in the different models at 1 keV are in units of photons keV$^{-1}$ cm$^{-2}$ s$^{-1}$.
Parameters with the star (*) mark represent the frozen values.} \label{table-8} \centering \begin{tabular}{cccccccccc} \hline Model & Parameter & epoch A & epoch B & epoch C & epoch D & epoch E & epoch F & epoch G & epoch H \\ \hline\hline 1a & $\Gamma$ & 1.33$^{+0.03}_{-0.03}$ & 1.30$^{+0.03}_{-0.03}$ & 1.32$^{+0.05}_{-0.06}$ & $<$1.21 & 1.32$^{+0.03}_{-0.03}$ & $<$1.22 & $<$1.21 & $<$1.22 \\ & $\rm{A_{Fe}}$ & 4.79$^{+0.71}_{-0.37}$ & 5.00* & 4.87$^{+1.50}_{-0.57}$ & 4.38$^{+0.30}_{-0.32}$ & 4.34$^{+0.36}_{-0.36}$ & 4.93$^{+0.47}_{-0.35}$ & 4.46$^{+0.30}_{-0.30}$ & 4.40$^{+0.30}_{-0.30}$ \\ & $\rm{kT_{e}}$ & 7.60$^{+0.47}_{-0.41}$ & 7.48$^{+0.35}_{-0.32}$ & 8.04$^{+0.78}_{-0.69}$ & 7.44$^{+0.28}_{-0.27}$ & 6.82$^{+0.36}_{-0.32}$ & 7.26$^{+0.25}_{-0.29}$ & 7.60$^{+0.29}_{-0.28}$ & 7.29$^{+0.29}_{-0.28}$ \\ & N $\times(10^{-4})$ & 1.31$^{+0.10}_{-0.10}$ & 1.35$^{+0.07}_{-0.08}$ & 1.33$^{+0.17}_{-0.15}$ & 1.60$^{+0.08}_{-0.08}$ & 1.50$^{+0.11}_{-0.10}$ & 1.43$^{+0.08}_{-0.08}$ & 1.61$^{+0.09}_{-0.09}$ & 1.53$^{+0.09}_{-0.09}$ \\ & p1 & 1.28$^{+0.24}_{-0.20}$ & 1.23$^{+0.16}_{-0.14}$ & 1.31$^{+0.35}_{-0.31}$ & 0.70$^{+0.11}_{-0.10}$ & 0.93$^{+0.18}_{-0.16}$ & 0.76$^{+0.10}_{-0.11}$ & 0.66$^{+0.11}_{-0.10}$ & 0.69$^{+0.11}_{-0.10}$ \\ & $\chi^2/dof$ & 468/482 & 437/421 & 210/199 & 483/468 & 436/452 & 473/414 & 528/449 & 479/429 \\ & E1 & 6.75$^{+0.03}_{-0.03}$ & 6.79$^{+0.03}_{-0.03}$ & 6.79$^{+0.05}_{-0.05}$ & 6.77$^{+0.03}_{-0.03}$ & 6.78$^{+0.03}_{-0.03}$ & 6.77$^{+0.03}_{-0.03}$ & 6.80$^{+0.03}_{-0.03}$ & 6.79$^{+0.03}_{-0.03}$ \\ & $N_{E1}$ $\times(10^{-5})$ & 3.82$^{+0.42}_{-0.47}$ & 3.50$^{+0.45}_{-0.42}$ & 3.61$^{+0.37}_{-0.69}$ & 3.39$^{+0.46}_{-0.43}$ & 3.40$^{+0.45}_{-0.42}$ & 2.64$^{+0.44}_{-0.41}$ & 2.88$^{+0.42}_{-0.40}$ & 3.39$^{+0.44}_{-0.43}$ \\ & E2 & 7.59$^{+0.09}_{-0.09}$ & 7.58$^{+0.16}_{-0.13}$ & 7.62$^{+0.16}_{-0.15}$ & 7.45$^{+0.18}_{-0.16}$ & 7.51$^{+0.14}_{-0.18}$ & 7.61$^{+0.10}_{-0.09}$ & 7.66$^{+0.15}_{-0.13}$ & 7.55$^{+0.09}_{-0.10}$ \\ & 
$N_{E2}$ $\times(10^{-5})$ & 0.81$^{+0.21}_{-0.22}$ & 0.55$^{+0.21}_{-0.21}$ & 0.80$^{+0.41}_{-0.44}$ & 0.46$^{+0.23}_{-0.24}$ & 0.55$^{+0.25}_{-0.28}$ & 0.62$^{+0.21}_{-0.21}$ & 0.56$^{+0.21}_{-0.20}$ & 0.62$^{+0.23}_{-0.23}$ \\ & E3 & 8.19$^{+0.12}_{-0.16}$ & 8.46$^{+0.17}_{-0.17}$ & 8.07$^{+0.16}_{-0.15}$ & 7.96$^{+0.11}_{-0.11}$ & 7.95$^{+0.20}_{-0.17}$ & 8.33$^{+0.13}_{-0.14}$ & 8.03$^{+0.23}_{-0.21}$ & 8.08$^{+0.08}_{-0.09}$ \\ & $N_{E3}$ $\times(10^{-5})$ & 0.45$^{+0.19}_{-0.19}$ & 0.48$^{+0.19}_{-0.19}$ & 0.74$^{+0.41}_{-0.41}$ & 0.65$^{+0.22}_{-0.23}$ & 0.43$^{+0.26}_{-0.25}$ & 0.38$^{+0.19}_{-0.18}$ & 0.55$^{+0.23}_{-0.27}$ & 0.56$^{+0.21}_{-0.21}$ \\ & E4 & 8.77$^{+0.11}_{-0.12}$ & 8.75$^{+0.10}_{-0.11}$ & - & 8.63$^{+0.18}_{-0.23}$ & 8.87$^{+0.17}_{-0.36}$ & 9.12$^{+0.19}_{-0.18}$ & 9.00$^{+0.14}_{-0.14}$ & 9.00$^{+0.26}_{-0.29}$ \\ & $N_{E4}$ $\times(10^{-6})$ & 3.69$^{+1.74}_{-1.79}$ & 3.28$^{+1.20}_{-1.25}$ & - & 2.79$^{+1.79}_{-1.79}$ & 3.84$^{+1.72}_{-1.72}$ & 2.15$^{+1.69}_{-1.63}$ & 4.19$^{+1.80}_{-1.79}$ & 2.75$^{+1.76}_{-1.76}$ \\ & $\rm{C_{FPMA/FPMB}}$ & 1.04$^{+0.03}_{-0.03}$ & 1.03$^{+0.03}_{-0.03}$ & 1.01$^{+0.05}_{-0.04}$ & 1.01$^{+0.03}_{-0.03}$ & 1.02$^{+0.03}_{-0.03}$ & 1.00$^{+0.03}_{-0.03}$ & 1.01$^{+0.03}_{-0.03}$ & 0.98$^{+0.03}_{-0.03}$ \\ \hline 1b & $\Gamma$ & 1.34$^{+0.05}_{-0.03}$ & 1.26$^{+0.08}_{-0.06}$ & $<$1.27 & $<$1.24 & 1.31$^{+0.04}_{-0.05}$ & $<$1.26 & $<$1.22 & $<$1.26 \\ & $\rm{A_{Fe}}$ & $<$7.05 & $<$6.31 & $>$7.29 & 5.08$^{+1.31}_{-0.23}$ & 5.83$^{+1.86}_{-1.00}$ & 6.82$^{+0.95}_{-1.78}$ & 5.50$^{+0.69}_{-0.59}$ & 5.53$^{+1.52}_{-0.68}$ \\ & $\rm{kT_{e}}$ & 9.10$^{+0.13}_{-0.16}$ & 8.95$^{+0.16}_{-0.20}$ & 9.38$^{+0.18}_{-0.18}$ & 8.75$^{+0.16}_{-0.12}$ & 8.69$^{+0.22}_{-0.25}$ & 8.68$^{+0.17}_{-0.23}$ & 8.77$^{+0.13}_{-0.14}$ & 8.78$^{+0.16}_{-0.16}$ \\ & $N_{\it xillverCP_{warm}}$ $\times(10^{-5})$ & 2.43$^{+0.24}_{-0.34}$ & 2.29$^{+0.36}_{-0.36}$ & 2.68$^{+0.15}_{-0.49}$ & 2.03$^{+0.30}_{-0.16}$ & 
1.90$^{+0.35}_{-0.26}$ & 1.95$^{+0.18}_{-0.35}$ & 1.99$^{+0.15}_{-0.19}$ & 2.02$^{+0.31}_{-0.23}$ \\ & $N_{\it xillverCP_{cold}}$ $\times(10^{-4})$ & 1.03$^{+0.08}_{-0.06}$ & 1.12$^{+0.09}_{-0.10}$ & 1.00$^{+0.11}_{-0.08}$ & 1.38$^{+0.07}_{-0.09}$ & 1.18$^{+0.10}_{-0.10}$ & 1.22$^{+0.10}_{-0.08}$ & 1.39$^{+0.07}_{-0.07}$ & 1.27$^{+0.10}_{-0.09}$ \\ & $\chi^2/dof$ & 504/483 & 474/420 & 213/199 & 545/468 & 480/452 & 545/414 & 600/449 & 545/431 \\ \hline 2a & $\Gamma$ & 1.32$^{+0.03}_{-0.03}$ & 1.30$^{+0.04}_{-0.04}$ & 1.32$^{+0.05}_{-0.05}$ & $<$1.22 & 1.32$^{+0.03}_{-0.03}$ & $<$1.22 & $<$1.21 & $<$1.22 \\ & $\rm{A_{Fe}}$ & 4.78$^{+0.70}_{-0.37}$ & 5.50$^{+1.14}_{-0.77}$ & 4.77$^{+1.47}_{-0.60}$ & 4.43$^{+0.31}_{-0.32}$ & 4.32$^{+0.38}_{-0.37}$ & 5.00$^{+0.63}_{-0.37}$ & 4.56$^{+0.31}_{-0.32}$ & 4.33$^{+0.37}_{-0.34}$ \\ & $\rm{kT_{e}}$ & 7.39$^{+0.41}_{-0.69}$ & 7.80$^{+0.49}_{-0.58}$ & 7.28$^{+0.86}_{-0.62}$ & 7.45$^{+0.30}_{-0.29}$ & 6.35$^{+0.46}_{-0.34}$ & 7.51$^{+0.33}_{-0.37}$ & 7.43$^{+0.35}_{-0.55}$ & 6.80$^{+0.51}_{-0.35}$ \\ & p1 & 0.79$^{+0.69}_{-0.21}$ & 1.34$^{+0.20}_{-0.18}$ & 0.83$^{+0.53}_{-0.35}$ & 0.74$^{+0.11}_{-0.10}$ & 0.53$^{+0.25}_{-0.19}$ & 0.88$^{+0.11}_{-0.11}$ & 0.64$^{+0.12}_{-0.31}$ & 0.72$^{+0.10}_{-0.11}$ \\ & $\chi^2/dof$ & 467/482 & 421/420 & 205/199 & 480/468 & 431/452 & 449/414 & 521/449 & 476/431 \\ \hline 2b & $\Gamma$ & 1.35$^{+0.09}_{-0.07}$ & 1.50$^{+0.08}_{-0.08}$ & 1.49$^{+0.13}_{-0.13}$ & 1.31$^{+0.06}_{-0.04}$ & 1.37$^{+0.03}_{-0.03}$ & 1.35$^{+0.04}_{-0.03}$ & 1.32$^{+0.06}_{-0.05}$ & 1.34$^{+0.05}_{-0.05}$ \\ & $\rm{A_{Fe}}$ & 6.09$^{+2.43}_{-1.44}$ & 4.39$^{+0.76}_{-0.70}$ & 4.48$^{+1.63}_{-0.95}$ & 4.98$^{+0.56}_{-0.63}$ & 5.00* & 5.00$^{+0.71}_{-0.41}$ & 4.43$^{+0.72}_{-0.68}$ & 4.31$^{+0.67}_{-0.66}$ \\ & $\rm{kT_{e}}$ & 8.97$^{+0.22}_{-0.30}$ & 8.51$^{+0.49}_{-0.82}$ & 9.13$^{+0.63}_{-0.98}$ & 8.76$^{+0.15}_{-0.39}$ & 8.55$^{+0.17}_{-0.16}$ & 8.57$^{+0.17}_{-0.32}$ & 8.46$^{+0.39}_{-0.66}$ & 8.30$^{+0.45}_{-0.72}$ 
\\ & $\chi^2/dof$ & 481/481 & 433/419 & 211/198 & 507/467 & 464/452 & 495/415 & 529/448 & 496/430 \\ & $F_{4-10}^{FPMA}$ & 0.42 & 0.42 & 0.43 & 0.41 & 0.42 & 0.38 & 0.39 & 0.40 \\ & $F_{10-20}^{FPMA}$ & 0.45 & 0.45 & 0.48 & 0.49 & 0.45 & 0.42 & 0.48 & 0.47 \\ & $F_{20-79}^{FPMA}$ & 2.23 & 2.50 & 2.55 & 3.09 & 2.24 & 2.92 & 3.54 & 2.97 \\ \hline \end{tabular} \end{table*} \subsection{XMM-Newton \& NuSTAR Joint fit} Fitting the {\it NuSTAR} spectra alone could not properly constrain the line emission profiles present in the source spectrum. To model the lines we used the three {\it XMM-Newton EPIC PN} spectra taken in 2014 along with the {\it NuSTAR} FPMA data. Use of {\it XMM-Newton} data jointly with {\it NuSTAR}, the observations of which are not simultaneous, requires the source to be non-variable. We show in Fig. \ref{figure-10} all three {\it XMM-Newton PN} spectra taken in 2014. This figure indicates that the source has not shown any noticeable variation in line and continuum flux. Also, in all eight epochs of {\it NuSTAR} data accumulated over a period of 5 years, no variation in the soft band (4$-$10 keV) is observed (see Table \ref{table-4} and Fig. \ref{figure-2}). Therefore, it is reasonable to jointly model the {\it NuSTAR} and {\it XMM-Newton} observations. We combined the three {\it XMM-Newton EPIC PN} spectra using the task {\it epicspeccombine} and then binned the combined spectrum with 25 counts/bin using the task {\it specgroup}. We carried out joint model fits of the 3$-$10 keV {\it XMM-Newton} 2014 combined spectrum with the 3$-$79 keV epoch D {\it NuSTAR} spectrum. To account for both warm and cold reflection, several models were tried. First, we used a {\it cutoffpl}, which does not take into account Compton down scattering, to model the warm reflector. For modelling the cold reflector we used {\it pexrav} \citep{10.1093/mnras/273.3.837} with R=$-$1.
We obtained best fit values of 1.59$^{+0.09}_{-0.09}$, 95$^{+100}_{-47}$ and 11.22$^{+3.51}_{-2.70}$ for $\Gamma$, $\rm{E_{cut}}$ and $\rm{A_{Fe}}$ respectively, with a $\chi^2$ of 893 for 821 degrees of freedom. Using this model we obtained acceptable fit statistics, but with a mild hump in the 20 $-$ 30 keV energy range (see the top panel of Fig. \ref{figure-13}). We then replaced the {\it cutoffpl} with {\it xillver}, with $\log\xi$ fixed to 4.7, to account for the Compton down scattering. We obtained $\Gamma$ = 1.54$^{+0.04}_{-0.04}$, $\rm{E_{cut}}$ = 94$^{+26}_{-19}$ and $\rm{A_{Fe}}$ $>$ 9.15. This fit produced a $\chi^2$ of 884 for 821 degrees of freedom. We then replaced {\it xillver} with {\it f1*cutoffpl} (for f1, see Equation 3) to model the warm reflector and obtained $\Gamma$ = 1.62$^{+0.11}_{-0.11}$, $\rm{E_{cut}}$ = 97$^{+149}_{-39}$ and $\rm{A_{Fe}}$ = 11.50$^{+4.54}_{-3.11}$ with a $\chi^2$ of 902 for 824 degrees of freedom. With or without the inclusion of Compton down scattering in the warm reflector we obtained a similar set of best fit values and a hump in the data-to-model residual plot near 25 keV (see the first two panels of Fig. \ref{figure-13}). We thus conclude that the inclusion of Compton down scattering in the warm reflection has an insignificant effect on the derived parameters. The spectrum of a Compton-thick AGN is reflection dominated; however, there is a finite probability for the primary emission to be transmitted (above $\sim$10 keV) through the Compton-thick absorber. We therefore modified our model to take into account the transmitted primary emission by including the {\it MYTZ*cutoffpl} component in the previously described two reflector models, in which the cold reflection was modelled using {\it pexrav} with R=$-$1. For the warm reflector we first used {\it xillver} with R=$-$1 and $\log\xi$ = 3.1. For the Compton absorber along the line of sight we assumed a column density of $10^{25}$ $cm^{-2}$ with an inclination of $90^{\circ}$ \citep{Bauer_2015}.
We obtained $\Gamma$ $<$ 1.28, $\rm{E_{cut}}$ = 16$^{+2}_{-1}$ and $\rm{A_{Fe}}$ $>$ 7.41 with a $\chi^2$ of 847 for 820 degrees of freedom. The fit statistics thus improved by $\Delta\chi^2$ = 37 for a reduction of 1 degree of freedom, and the hump in the 20 $-$ 30 keV band was also accounted for. We obtained a similar fit, with $\Gamma$ = 1.37$^{+0.18}_{-0.16}$, $\rm{E_{cut}}$ = 18$^{+4}_{-3}$ and $\rm{A_{Fe}}$ = 5.68$^{+4.20}_{-2.41}$ and a $\chi^2$ of 850 for 820 degrees of freedom, when using {\it f1*cutoffpl} (see Equation 3) as the warm reflector. Inclusion of the transmitted primary emission in the two reflector model described the source spectra well, with a harder photon index and lower $\rm{E_{cut}}$, and with no prominent features present in the data-to-model ratio plot at high energies (see the third panel of Fig. \ref{figure-13}). The X-ray spectrum of NGC 1068 has also been modelled with a flat photon index in the past. From a joint analysis of the {\it XMM-Newton} epoch E spectrum with the epoch D {\it NuSTAR} spectrum, \cite{2021MNRAS.506.4960H} reported $\Gamma$ = 1.21$^{+0.13}_{-0.07}$. From modelling the 0.5$-$10 keV {\it ASCA} observation, \cite{1994PASJ...46L..71U} found a best fit $\Gamma$ of 1.28$\pm$0.14. Replacing {\it pexrav} with {\it xillverCP}, with R=$-$1 and $\log\xi$ = 0.0, to model the cold reflector, we obtained $\Gamma$ = 1.26$^{+0.03}_{-0.03}$, $\rm{kT_{e}}$ = 8.59$^{+0.40}_{-0.37}$ and $\rm{A_{Fe}}$ = 4.02$^{+0.36}_{-0.33}$ with a $\chi^2$ = 857 for 820 degrees of freedom. The data to best fit model residuals are given in Fig. \ref{figure-13}. To estimate $\rm{kT_{e}}$ we used Model 2b, as described earlier for the {\it NuSTAR}-only spectral fits. Here, the {\it NuSTAR} FPMA spectra for all the epochs in the 9 $-$ 79 keV band, along with the combined 2014 {\it XMM-Newton} data in the 4 $-$ 10 keV band, were used so as to model the line emission region carefully. We used six Gaussian components to model all the ionized and neutral lines present in the spectra.
The line energies and the normalizations were kept free during the fitting, while the line widths were frozen to 0.01 keV. The self-consistent {\it $xillverCP_{cold}$} model took care of the neutral Fe K$\alpha$ ($\sim$ 6.4 keV) and K$\beta$ ($\sim$ 7.08 keV) lines, while the first three Gaussian components were used to model the ionized Fe lines at energies of $\sim$ 6.57 keV (Fe Be-like K$\alpha$), 6.67 keV (Fe He-like K$\alpha$) and 6.96 keV (Fe H-like K$\alpha$). The neutral Ni K$\alpha$ ($\sim$ 7.47 keV), Ni K$\beta$ ($\sim$ 8.23 keV) and ionized Ni He-like K$\alpha$ ($\sim$ 7.83 keV) lines were taken care of by the other three Gaussian components. As seen from the left panel of Fig. \ref{figure-11}, we did not find any prominent residuals in the 4$-$9 keV band. All the best fit line energies and normalizations are given in Table \ref{table-7}. The best fit model parameters along with their corresponding errors are given in Table \ref{table-9}. We found that this joint fit produced best fit values similar to those obtained from the {\it NuSTAR} fit alone. The best fit model to the data (the combined {\it EPIC PN} data and the {\it NuSTAR} FPMA epoch D spectra) along with the residuals is shown in the right panel of Fig. \ref{figure-11}. \begin{table*} \caption{Results of the analysis of the Model 2b fit to the {\it XMM-Newton} and {\it NuSTAR} FPMA spectra in the 4 $-$ 79 keV energy band. $\rm{kT_{e}}$ is in units of keV and column densities ($N_{H}$) are in units of $cm^{-2}$.
Normalizations of components (N) at 1 keV are in units of photons keV$^{-1}$ cm$^{-2}$ s$^{-1}$.} \label{table-9} \centering \begin{tabular}{p{0.12\linewidth}p{0.08\linewidth}p{0.08\linewidth}p{0.08\linewidth}p{0.08\linewidth}p{0.08\linewidth}p{0.08\linewidth}p{0.08\linewidth}p{0.08\linewidth}} \hline Parameter & epoch A & epoch B & epoch C & epoch D & epoch E & epoch F & epoch G & epoch H \\ \hline\hline $N_{H}^{ztbabs}$ & 9.78$^{+2.92}_{-2.96}$ & 9.19$^{+2.73}_{-2.75}$ & 7.09$^{+5.04}_{-4.19}$ & 9.57$^{+2.39}_{-2.60}$ & 9.16$^{+2.58}_{-2.72}$ & 9.72$^{+2.37}_{-2.55}$ & 9.51$^{+2.75}_{-3.15}$ & 8.90$^{+2.64}_{-2.88}$ \\ $\Gamma$ & 1.29$^{+0.07}_{-0.08}$ & 1.33$^{+0.15}_{-0.07}$ & 1.48$^{+0.11}_{-0.25}$ & $<$1.32 & 1.33$^{+0.10}_{-0.06}$ & 1.28$^{+0.07}_{-0.05}$ & 1.26$^{+0.07}_{-0.10}$ & 1.30$^{+0.07}_{-0.07}$ \\ $\rm{A_{Fe}}$ & 6.08$^{+3.87}_{-1.59}$ & 4.89$^{+2.89}_{-1.51}$ & 3.09$^{+4.96}_{-1.08}$ & 4.19$^{+1.44}_{-0.93}$ & 4.75$^{+2.19}_{-0.63}$ & 4.99$^{+1.70}_{-1.10}$ & 3.90$^{+1.33}_{-1.10}$ & 3.81$^{+1.27}_{-1.01}$ \\ $\rm{kT_{e}}$ & 8.69$^{+0.28}_{-0.33}$ & 8.62$^{+0.33}_{-0.49}$ & 8.07$^{+0.59}_{-0.75}$ & 8.59$^{+0.30}_{-0.36}$ & 8.60$^{+0.34}_{-0.58}$ & 8.68$^{+0.26}_{-0.38}$ & 8.55$^{+0.25}_{-0.41}$ & 8.48$^{+0.35}_{-0.51}$ \\ $\chi^2/dof$ & 592/584 & 576/565 & 516/492 & 620/586 & 590/574 & 613/569 & 630/584 & 585/578 \\ $\rm{C_{XMM/NuSTAR}}$ & 0.96$^{+0.08}_{-0.08}$ & 0.95$^{+0.08}_{-0.07}$ & 0.94$^{+0.11}_{-0.11}$ & 0.94$^{+0.08}_{-0.08}$ & 0.93$^{+0.08}_{-0.08}$ & 0.98$^{+0.08}_{-0.08}$ & 0.93$^{+0.08}_{-0.08}$ & 0.97$^{+0.08}_{-0.09}$ \\ \hline \end{tabular} \end{table*} \section{Summary} In this work, we carried out spectral and timing analysis of eight epochs of {\it NuSTAR} observations performed between December 2012 and November 2017, probing time scales within epochs and between epochs that span about 5 years. The timing analysis of the six {\it XMM-Newton} observations between July 2000 and February 2015 was also performed.
We also carried out a joint spectral analysis of the combined 2014 {\it XMM-Newton EPIC PN} spectra and the {\it NuSTAR} FPMA data. The results of this work are summarized below. \begin{enumerate} \item We found that the source did not show flux variation within any of the eight epochs of {\it NuSTAR} observations. \item Between epochs, spanning the time-scales from 2012 to 2017, we found the source to be variable. Here too, the source did not show variation in the soft energy range. In agreement with the earlier results of \cite{2016MNRAS.456L..94M} and \cite{2020MNRAS.492.3872Z}, we found that the observed variation is only due to variation in the energy range beyond 20 keV. This was noticed in epoch D (August 2014) and epoch G (August 2017), when the brightness of the source beyond 20 keV was higher by about 20\% and 30\% respectively relative to the three {\it NuSTAR} observations in the year 2012. \item From the timing analysis, we observed no correlation of spectral variation (hardness ratio) with brightness. \item By fitting physical models to the observed data we could determine the temperature of the corona in NGC 1068, with values ranging from 8.46$^{+0.39}_{-0.66}$ keV to 9.13$^{+0.63}_{-0.98}$ keV. However, we found no variation in the temperature of the corona during the 8 epochs of observations that span a duration of about 5 years. \item From the timing analysis of the six {\it XMM-Newton EPIC PN} data sets we found no significant flux variation either between or within epochs of observation in the hard band. In the soft band too, the source did not show any significant flux variation within epochs, but it was brighter in epoch B compared to epoch A. \item The joint spectral fit of the {\it XMM-Newton} and {\it NuSTAR} data provided results that are in agreement with those obtained from model fits to the {\it NuSTAR} data alone.
\end{enumerate} In NGC 1068, we did not find evidence for variation in the temperature of the corona from the analysis of data that span more than five years. This is evident from the best fit values of $\rm{kT_{e}}$ in Table \ref{table-8}. Also, the results from the various models are found to be similar. The values of $\rm{kT_{e}}$ found for NGC 1068 also lie in the range of $\rm{kT_{e}}$ found in other AGN. Measurements of $\rm{E_{cut}}$ are available for a large number of AGN, including both Seyfert 1 and Seyfert 2 types. However, studies of the variation of $\rm{E_{cut}}$ or $\rm{kT_{e}}$ are limited to less than a dozen AGN \citep{2014ApJ...794...62B, 2015A&A...577A..38U, 2016MNRAS.463..382U, 2016MNRAS.456.2722K, 2017ApJ...836....2Z, 2018ApJ...863...71Z, 2020MNRAS.492.3041B, 2021MNRAS.502...80K, Barua_2021, 2022A&A...662A..78P}. Even in sources where $\rm{E_{cut}}$/$\rm{kT_{e}}$ variations are known, the correlation of the variation of $\rm{kT_{e}}$ with various physical properties is found to differ among sources \citep{2020MNRAS.492.3041B, Barua_2021, 2021MNRAS.502...80K, 2022A&A...662A..78P}. These limited observations do indicate that we do not yet understand the complex corona of AGN, including its geometry and composition. Investigations of this kind need to be extended to many more AGN to better constrain the nature of the corona. \section*{Acknowledgements} We thank the anonymous referee for his/her comments that helped us in correcting an error in the analysis. We also thank Drs. Ranjeev Misra and Gulab Dewangan for discussions on spectral fits to the data. We thank the {\it NuSTAR} Operations, Software and Calibration teams for support with the execution and analysis of these observations. This research has made use of the {\it NuSTAR} Data Analysis Software (NuSTARDAS) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA).
This research has made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC. VKA thanks GH, SAG; DD, PDMSA and Director, URSC for encouragement and continuous support to carry out this research. \section*{Data Availability} All data used in this work are publicly available in the {\it NuSTAR} (\url{https://heasarc.gsfc.nasa.gov/docs/nustar/nustar_archive.html}) and {\it XMM-Newton} (\url{http://nxsa.esac.esa.int}) science archives. \bibliographystyle{mnras} \section{Introduction} The standard orientation based unification model of active galactic nuclei (AGN; \citealt{1993ARA&A..31..473A,1995PASP..107..803U}) classifies the Seyfert category of AGN into two types, namely Seyfert 1 and Seyfert 2 galaxies, based on the orientation of the viewing angle. According to this model, the observational difference between Seyfert 1 and Seyfert 2 galaxies is explained by the inclination of the line of sight with respect to the dusty torus in them. Seyfert 1 galaxies are those that are viewed at lower inclination angles, while Seyfert 2 galaxies are viewed at higher inclinations, with their central region completely blocked by the dusty molecular torus that surrounds the broad line region (BLR). The first detection of a hidden BLR was reported in NGC 1068, making it the first such Type 2 AGN discovered. This was based on spectro-polarimetric observations that revealed the presence of broad emission lines in polarized light \citep{1985ApJ...297..621A}. X-ray observations too provide evidence for the presence of the obscuring torus in AGN \citep{1991PASJ...43..195A}, with large X-ray column densities seen in Seyfert 2 galaxies.
\begin{figure} \centering \begin{minipage}{1.05\columnwidth} \centering \includegraphics[width=\textwidth]{nustar_spectra_79.jpeg} \end{minipage}% \caption{The eight epochs of {\it NuSTAR} FPMA spectra plotted together.} \label{figure-1} \end{figure} AGN emit across wavelengths, and X-ray emission has been observed from all categories of AGN \citep{1993ARA&A..31..717M}. However, a large fraction of AGN are obscured in X-rays \citep{2012AdAst2012E..17B, 2017NatAs...1..679R}. The obscured AGN are further classified as Compton-thin and Compton-thick based on the equivalent hydrogen column density ($\rm{N_H}$) along the line of sight. In a Compton-thick AGN, the central engine is surrounded by heavily obscuring dust and gas with $\rm{N_H}$ $\geq$ $10^{24}$ $cm^{-2}$, embedded in a dusty torus that is located at a few parsec ($\sim$ 0.1 $-$ 10 pc) from the central source \citep{2015ARA&A..53..365N}. In a Compton-thick Seyfert 2 galaxy, reflection from the torus produces a reflection hump at around 15$-$30 keV and reveals the presence of a neutral Fe K$\alpha$ emission line at 6.4 keV with an equivalent width of $\sim$1 keV (\citealt{1994MNRAS.267..743G,2016MNRAS.456L..94M}). However, this obscuring material is not static. In many AGN the observed X-ray variability is likely caused by variations in the circumnuclear material \citep{2002ApJ...571..234R, 2016ApJ...831..145Y}. X-ray emission from AGN is believed to originate from regions close to the central supermassive black hole. Models of the X-ray emitting region include the hot corona situated in the vicinity of the accretion disk (\citealt{1991ApJ...380L..51H,1993ApJ...413..507H}) as well as the relativistic AGN jet that emanates along the black hole rotation axis \citep{2006ARA&A..44..463H,2019ARA&A..57..467B}. It is likely that the contribution of each of these physical processes to the observed X-ray emission differs among AGN.
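A back-of-the-envelope estimate illustrates why such columns suppress the direct continuum. The following sketch computes the fraction of photons escaping electron scattering alone through the $\rm{N_H}$ values quoted in the text (an illustration only; full torus models additionally include energy-dependent photoelectric absorption and Klein-Nishina corrections):

```python
import math

SIGMA_T = 6.65e-25  # Thomson cross-section in cm^2

def thomson_transmission(n_h):
    """Fraction of direct continuum photons escaping electron scattering
    through a column n_h (cm^-2); photoelectric absorption is ignored."""
    return math.exp(-n_h * SIGMA_T)

# At the Compton-thick threshold roughly half the photons are scattered;
# at N_H = 1e25 cm^-2 almost none of the direct continuum escapes.
frac_thick = thomson_transmission(1e24)
frac_heavy = thomson_transmission(1e25)
```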
The observed X-ray spectrum of AGN consists of many components, such as (i) a power law component, believed to be due to the inverse Compton scattering of the optical/ultra-violet accretion disk photons by the hot electrons in the corona (\citealt{1991ApJ...380L..51H,1993ApJ...413..507H}), (ii) a soft excess at energies below 1 keV, which could be due to a warm ($kT_e$ = 1 keV) and optically thick ($\tau$ = 10 $-$ 20) corona \citep{2004MNRAS.349L...7G,2018A&A...611A..59P} or due to relativistically blurred reflection \citep{2019ApJ...871...88G}, (iii) a reflection bump beyond a few keV due to scattering of X-rays by the accretion disk or distant material \citep{1994MNRAS.267..743G} and (iv) the fluorescent Fe K$\alpha$ line with an equivalent width of $\sim$1 keV \citep{1994MNRAS.267..743G,2016MNRAS.456L..94M}. In spite of the numerous X-ray studies of AGN, the exact causes of the origin of the X-ray emission in them are still not understood. This also includes the size, shape and location of the X-ray corona in AGN. Parameters that can put constraints on the nature of the X-ray corona in AGN are the power law index ($\Gamma$) and the high energy cut-off ($E_{cut}$) of the X-ray continuum. The $E_{cut}$ of the X-ray continuum is related to the temperature of the Comptonizing electrons ($kT_e$) in the hot corona via E$_{cut}$ = 2$-$3 $kT_e$ \citep{2001ApJ...556..716P}; however, there are reports of this relation deviating among AGN \citep{2014ApJ...783..106L, 2019A&A...630A.131M, 2022A&A...662A..78P}. One of the difficulties in obtaining good estimates of E$_{cut}$ for a large sample of AGN is the lack of high signal to noise ratio data at energies beyond 50 keV, where the cut-off in the spectrum is defined.
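The quoted $E_{cut}$ $-$ $kT_e$ relation is straightforward to apply numerically. A minimal sketch (the factor of 2$-$3 follows the relation quoted above; the example temperature of 8.6 keV is our choice, representative of the coronal temperatures reported later in this work):

```python
def ecut_range_kev(kt_e_kev):
    """Approximate high-energy cutoff range implied by a coronal
    temperature, using E_cut ~ (2-3) kT_e."""
    return 2.0 * kt_e_kev, 3.0 * kt_e_kev

# For kT_e = 8.6 keV the implied cutoff lies between ~17 and ~26 keV.
lo, hi = ecut_range_kev(8.6)
```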
Though more measurements are now available from instruments such as {\it NuSTAR} \citep{2015MNRAS.451.4375F,2018MNRAS.481.4419B,2018ApJ...866..124K,2018A&A...614A..37T,2018MNRAS.480.1819R,2018ApJ...856..120R,2019MNRAS.484.5113R,2020ApJ...901..111K,2020ApJ...905...41B,2021A&A...655A..60A,2021MNRAS.506.4960H,2022arXiv220200895K}, it is important to increase such measurements for a larger number of sources, firstly to obtain good estimates of E$_{cut}$ and secondly to better constrain the correlation of E$_{cut}$ with other physical properties of the sources. Also, variation in the temperature of the corona is now known in a few radio-quiet AGN \citep{2018ApJ...863...71Z,2020MNRAS.492.3041B,2021ApJ...921...46B,2021MNRAS.502...80K,2022MNRAS.tmp..417W,2022A&A...662A..78P}; however, it is not clear if it is shown by all AGN. This is due to the lack of such studies for many AGN, mostly attributed to the paucity of homogeneous multi-epoch data on a large number of sources. Therefore, it is of great importance to find more sources that show variations in $kT_e$.
\begin{table} \centering \caption{Log of {\it NuSTAR} observations.} \label{table-1} \begin{tabular}{cccc} \hline OBSID & Epoch & Date & Exposure Time \\ & & & (s) \\ \hline 60002030002 & A & 2012-12-18 & 57850 \\ 60002030004 & B & 2012-12-20 & 48556 \\ 60002030006 & C & 2012-12-21 & 19461 \\ 60002033002 & D & 2014-08-18 & 52055 \\ 60002033004 & E & 2015-02-05 & 53685 \\ 60302003002 & F & 2017-07-31 & 49979 \\ 60302003004 & G & 2017-08-27 & 52549 \\ 60302003006 & H & 2017-11-06 & 49691 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Log of {\it XMM-Newton} observations.} \label{table-2} \begin{tabular}{cccc} \hline OBSID & Epoch & Date & Exposure Time \\ & & & (s) \\ \hline 0111200101 & A & 2000-07-29 & 42258 \\ 0111200102 & B & 2000-07-30 & 46429 \\ 0740060201 & C & 2014-07-10 & 63997 \\ 0740060301 & D & 2014-07-18 & 57600 \\ 0740060401 & E & 2014-08-19 & 54000 \\ 0740060501 & F & 2015-02-03 & 54600 \\ \hline \end{tabular} \end{table} NGC 1068 is one of the most studied Seyfert 2 galaxies. Situated at a redshift of $z$ = 0.0038 \citep{1999ApJS..121..287H}, it is powered by a black hole of mass $M_{BH} = 1.6 \times 10^7 M_{\odot}$ \citep{2006A&A...455..173P}. In X-rays the source has been well studied in the past \citep{2004A&A...414..155M, Bauer_2015, 2016MNRAS.456L..94M, 2020MNRAS.492.3872Z}. The source was first observed in the X-ray band by {\it Ginga}, and that observation revealed the presence of a broad neutral Fe K$\alpha$ line \citep{1989PASJ...41..731K} with an equivalent width of $\sim$ 1.3 keV. Later, {\it ASCA} observations resolved the Fe line into neutral and ionized components \citep{1994PASJ...46L..71U, 1997MNRAS.289..443I}. The source is known as an emitter of high energy $\gamma$-ray radiation in the MeV$-$GeV range \citep{2020ApJS..247...33A} and has also been reported as a neutrino source \citep{2020PhRvL.124e1103A}.
In the past, the hard X-ray spectrum of the source was fitted with a two reflector model \citep{1997A&A...325L..13M, 1999MNRAS.310...10G, Bauer_2015}. The central engine of the source is found to be completely obscured by the dusty torus, with a column density of $\rm{N_H}$ $\geq$ $10^{25}$ $cm^{-2}$ \citep{2000MNRAS.318..173M}; therefore, the observer can only see the scattered emission along the line of sight. This scattered emission is commonly thought to originate from two types of reflectors: a \say{cold} reflector component that arises from Compton scattering of the primary X-ray emission off the neutral circumnuclear material, and a second \say{warm} ionized reflector component that arises from Compton scattering off heavily ionized material, which acts as a \say{mirror} of the primary emission \citep{2004A&A...414..155M}. Using multi-epoch X-ray observations, \cite{Bauer_2015} fitted the spectra of NGC 1068 with the two reflector model along with different emission lines, radiative recombination continua and off-nuclear point source emission. From a joint {\it XMM-Newton} and {\it NuSTAR} fit of the high energy ($>$ 4 keV) spectra of NGC 1068, \cite{2016MNRAS.456L..94M} detected excess flux above 20 keV in the August 2014 observation, higher by 32$\pm$6 \% with respect to the earlier December 2012 and the later February 2015 observations. This transient excess above 20 keV in the NGC 1068 spectra was ascribed to a drop in the absorbing column density, from $\rm{N_H}$ $>$ 8.5 $\times$ $10^{24}$ $cm^{-2}$ in the 2012 spectra to (5.9$\pm$0.4) $\times$ $10^{24}$ $cm^{-2}$. The authors thus caught the source during an unveiling period, in which the obscuring material moved temporarily out of the line of sight and the source was found in its highest flux state. Recently, \cite{2020MNRAS.492.3872Z} presented a spectral analysis of the {\it NuSTAR} data taken between July 2017 and February 2018 to check for spectral variability.
From the varying column density found on timescales of 1 to 6 months, the authors inferred the presence of a clumpy torus structure surrounding the source. Using {\it Swift-XRT} data, the authors also detected an ultra-luminous X-ray source at a distance of $\sim$2 kpc from the nuclear region of NGC 1068.
\begin{figure*}
\centering
\begin{minipage}{2.10\columnwidth}
\centering
\includegraphics[width=\textwidth]{lightcurve1200new.jpeg}
\end{minipage}%
\caption{The {\it NuSTAR} light curves of NGC 1068 in three energy bands, 4$-$10 keV (first panel), 10$-$20 keV (second panel) and 20$-$60 keV (third panel). The HR1 and HR2 vs time are plotted in the last two panels. The black dashed lines are the mean values of the count rate and HR. The shaded region in each panel shows the mean error in the count rate and HR.}
\label{figure-2}
\end{figure*}
\begin{figure*}
\centering
\begin{minipage}{1.05\columnwidth}
\centering
\includegraphics[width=\textwidth]{HR1_new.jpeg}
\end{minipage}%
\begin{minipage}{1.05\columnwidth}
\centering
\includegraphics[width=\textwidth]{HR2_new.jpeg}
\end{minipage}
\caption{Left panel: The relation between HR1 and the count rate in the 4$-$60 keV band. Right panel: The relation between HR2 and the count rate in the 4$-$60 keV band. The red dashed lines in both panels are the linear least squares fit to the data.}
\label{figure-3}
\end{figure*}
\begin{figure*}
\centering
\begin{minipage}{2.10\columnwidth}
\centering
\includegraphics[width=\textwidth]{lightcurve_xmm.jpeg}
\end{minipage}%
\caption{{\it XMM-Newton EPIC PN} light curves of NGC 1068 in two energy bands, 0.2$-$2 keV (top) and 2$-$4 keV (middle). The HR vs time is plotted in the bottom panel.
The black dashed line and the shaded region in each panel show the mean value of the count rate or HR and the corresponding error, respectively.}
\label{figure-4}
\end{figure*}
\begin{figure}
\centering
\begin{minipage}{1.05\columnwidth}
\centering
\includegraphics[width=\textwidth]{HR_XMM.jpeg}
\end{minipage}%
\caption{The relation between HR and the count rate in the 0.2$-$4 keV band. The red dashed line is the linear least squares fit to the data.}
\label{figure-5}
\end{figure}
Though the X-ray emission from NGC 1068 has been analysed in the past (\citealt{Bauer_2015}, \citealt{2016MNRAS.456L..94M}, \citealt{2020MNRAS.492.3872Z}), the source has not been studied for variation in the temperature of the corona. From a joint fit of 2$-$195 keV data from different instruments, \cite{Bauer_2015} reported an $\rm{E_{cut}}$ of 128$^{+115}_{-44}$ keV. Recently, \cite{2021MNRAS.506.4960H} jointly fitted the {\it XMM-Newton} (OBSID 0740060401) and {\it NuSTAR} (OBSID 60002033002) data and reported an $\rm{E_{cut}}$ of 28.4$^{+7.7}_{-4.0}$ keV. In this work, taking advantage of the multiple epochs of data available from {\it NuSTAR} (along with near simultaneous {\it XMM-Newton} data at certain epochs of the {\it NuSTAR} observations), we carried out, for the first time, an investigation of the variation, if any, in the temperature of the corona. In this paper, we present the results of our variability analysis of NGC 1068, from observations carried out by {\it NuSTAR} between 2012 and 2017. We also present the results of a spectral analysis of the same {\it NuSTAR} data set in conjunction with observations from {\it XMM-Newton}. The aim was to determine the temperature of the corona of this source and its variation, if any. The paper is organized as follows. In Section 2, we discuss the X-ray observations with {\it NuSTAR} and {\it XMM-Newton} and the reduction of the data; the analysis is presented in Section 3, followed by a summary of the results in the final section.
\section{Observations and Data Reduction}
\subsection{{\it NuSTAR}}
To date, NGC 1068 has been observed by {\it NuSTAR} \citep{2013ApJ...770..103H} nine times with its two co-aligned telescopes with focal plane modules A (FPMA) and B (FPMB). In one of the epochs, in the year 2018, an off-nuclear ultra-luminous X-ray source was detected \citep{2020MNRAS.492.3872Z} at a distance of about $30''$ from the nuclear region of NGC 1068. Barring this epoch, we considered eight epochs of data for this work. The details of these eight epochs of observations are given in Table \ref{table-1}. To visualize the spectral features of these observations, the best fit model-2 FPMA spectra (see Section 4) are plotted together in Fig. \ref{figure-1}. We reduced the {\it NuSTAR} data in the 3$-$79 keV band using the standard {\it NuSTAR} data reduction software NuSTARDAS\footnote{https://heasarc.gsfc.nasa.gov/docs/nustar/analysis/nustar\_swguide.pdf} v1.9.7 distributed by HEASARC within HEASoft v6.29. Considering the passage of the satellite through the South Atlantic Anomaly, we selected SAACALC = \say{2}, SAAMODE = \say{optimized} and also excluded the tentacle region. The calibrated, cleaned, and screened event files were generated by running the {\tt nupipeline} task using the CALDB release 20210701. To extract the source counts we chose a circular region of radius $50''$ centered on the source. Similarly, to extract the background counts, we selected a circular region of the same radius away from the source on the same chip, to avoid contamination from source photons. We then used the {\tt nuproducts} task to generate the energy spectra, response matrix files (RMFs) and auxiliary response files (ARFs) for both of the hard X-ray detectors housed inside the corresponding focal plane modules FPMA and FPMB.
\subsection{{\it XMM-Newton}}
NGC 1068 was observed by {\it XMM-Newton} for eight epochs between 2000 and 2015.
Of these, we used the six data sets taken between 2000 and 2015 for the timing analysis. For the spectral analysis we used only the three sets of data taken in 2014, due to their simultaneity with at least one epoch of {\it NuSTAR} observations. We chose to use only the {\it EPIC-PN} data for the extraction of the source and background spectra. The log of the OBSIDs used in this work is given in Table \ref{table-2}. We used SAS v1.3 for the data reduction. Only single events (\say{PATTERN==0}) with quality flag=0 were selected. The event files were filtered to exclude background flares, selected from time ranges where the 10$-$15 keV count rates in the PN camera exceeded 0.3 c/s. Source spectra were extracted from an annular region with inner and outer radii of $15''$ and $30''$ centered on the nucleus. Background photons were selected from a source-free region of equal area on the same chip as the source. Here we note that, for the source extraction, choosing a circular region of radius $30''$ produced pile-up in the first two OBSIDs. However, pile-up was not noticed in the other four epochs. To avoid pile-up and to maintain uniformity in the data reduction, we chose to extract the source and background from an annular region for all six epochs. We constructed RMFs and ARFs using the tasks {\it RMFGEN} and {\it ARFGEN} for each observation.
\begin{table*}
\caption{Mean count rate and mean HR in different energy bands of NGC 1068 obtained from the light curves (see Fig. \ref{figure-2}) and the results of the correlation between HR and the total count rate (see Fig.
\ref{figure-3}).}
\label{table-4}
\centering
\begin{tabular}{lcccccccccc}
\hline
OBSID & Epoch & \multicolumn{3}{c}{Mean count rate} & \multicolumn{2}{c}{Mean HR} & \multicolumn{2}{c}{HR1} & \multicolumn{2}{c}{HR2} \\
 & & 4$-$10 keV & 10$-$20 keV & 20$-$60 keV & HR1 & HR2 & r & p & r & p \\
\hline
60002030002 & A & 0.16 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.05 $\pm$ 0.01 & 0.37 $\pm$ 0.09 & 0.85 $\pm$ 0.30 & 0.10 & 0.43 & 0.10 & 0.40 \\
60002030004 & B & 0.16 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.05 $\pm$ 0.01 & 0.38 $\pm$ 0.09 & 0.86 $\pm$ 0.27 & -0.18 & 0.20 & -0.18 & 0.19 \\
60002030006 & C & 0.16 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.05 $\pm$ 0.01 & 0.42 $\pm$ 0.09 & 0.80 $\pm$ 0.24 & -0.25 & 0.26 & 0.19 & 0.38 \\
60002033002 & D & 0.16 $\pm$ 0.02 & 0.07 $\pm$ 0.01 & 0.06 $\pm$ 0.01 & 0.41 $\pm$ 0.10 & 0.99 $\pm$ 0.32 & 0.04 & 0.78 & -0.12 & 0.34 \\
60002033004 & E & 0.16 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.05 $\pm$ 0.01 & 0.38 $\pm$ 0.10 & 0.80 $\pm$ 0.27 & -0.16 & 0.21 & -0.01 & 0.92 \\
60302003002 & F & 0.15 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.05 $\pm$ 0.01 & 0.41 $\pm$ 0.11 & 1.00 $\pm$ 0.32 & -0.07 & 0.58 & -0.02 & 0.90 \\
60302003004 & G & 0.15 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.07 $\pm$ 0.01 & 0.41 $\pm$ 0.10 & 1.12 $\pm$ 0.37 & -0.16 & 0.22 & 0.07 & 0.57 \\
60302003006 & H & 0.16 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.06 $\pm$ 0.01 & 0.42 $\pm$ 0.10 & 0.96 $\pm$ 0.30 & 0.09 & 0.52 & 0.16 & 0.22 \\
\hline
\end{tabular}
\end{table*}
\section{Timing Analysis}
\subsection{{\it NuSTAR}}
For the timing analysis of the source, we utilized the data from {\it NuSTAR} and generated background subtracted light curves, with multiple corrections (e.g. bad pixel, livetime) applied to the count rate, in three energy bands, namely 4$-$10 keV, 10$-$20 keV and 20$-$60 keV, with a bin size of 1.2 ksec. The light curves in the different energy bands, along with the variations in the hardness ratios (HRs), are given in Fig. \ref{figure-2}.
To check for variability in the generated light curves we calculated the fractional root mean square variability amplitude ($F_{var}$; \citealt{2002ApJ...568..610E,2003MNRAS.345.1271V}) for each epoch. $F_{var}$ is defined as $F_{var} = \sqrt{\frac{V^2 - \overline{\sigma^2}}{\overline{x}^2}}$, where $V^2 = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \overline{x})^2$ is the sample variance and $\overline{\sigma^2} = \frac{1}{N} \sum_{i=1}^{N} \sigma_{i}^2$ is the mean square error in the flux measurements. Here, $x_i$ is the observed count rate, $\overline{x}$ is the arithmetic mean of the $x_i$ measurements, and $\sigma_i$ is the error on each individual measurement. The error in $F_{var}$ was estimated following \cite{2003MNRAS.345.1271V}. For a binning of 1.2 ksec, the calculated $F_{var}$ values indicate that the source did not show any significant variation within epochs. This is also evident in Fig. \ref{figure-2}. The black dashed lines in Fig. \ref{figure-2} show the mean brightness of the source at each epoch, determined from the light curves. These mean values are given in Table \ref{table-4}. From the light curve analysis it is evident that the source did not show variation in the softer bands (4$-$10 keV and 10$-$20 keV) during the five years of data analysed in this work. However, variation is detected in the hard band (20$-$60 keV) (Fig. \ref{figure-2}). This is also very clear in Fig. \ref{figure-1}.
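The $F_{var}$ estimate defined above takes only a few lines of numpy. The sketch below is ours (the function name and interface are illustrative, not from the paper), with the uncertainty computed following equation B2 of \cite{2003MNRAS.345.1271V}:

```python
import numpy as np

def fractional_variability(rate, err):
    """Fractional rms variability amplitude F_var (Vaughan et al. 2003).

    rate : count rates x_i of one light curve
    err  : measurement errors sigma_i
    Returns (F_var, error on F_var); NaN when the excess variance
    is negative, i.e. no variability is detected above the noise.
    """
    rate = np.asarray(rate, dtype=float)
    err = np.asarray(err, dtype=float)
    n = rate.size
    mean = rate.mean()
    v2 = rate.var(ddof=1)        # sample variance V^2, 1/(N-1) normalisation
    mse = np.mean(err ** 2)      # mean square measurement error
    excess = v2 - mse            # excess variance
    if excess <= 0.0:
        return float("nan"), float("nan")
    fvar = np.sqrt(excess / mean ** 2)
    # uncertainty on F_var, eq. B2 of Vaughan et al. (2003)
    err_fvar = np.sqrt(
        (np.sqrt(1.0 / (2.0 * n)) * mse / (mean ** 2 * fvar)) ** 2
        + (np.sqrt(mse / n) / mean) ** 2
    )
    return fvar, err_fvar
```

When the measurement noise accounts for all of the observed scatter, the excess variance is negative and no meaningful $F_{var}$ exists, which is the situation reported above for the individual epochs.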
\begin{table*}
\caption{Results of the variability analysis in two energy bands of {\it XMM-Newton}}
\label{table-6}
\centering
\begin{tabular}{lcccccc}
\hline
OBSID & Epoch & \multicolumn{2}{c}{Mean count rate} & Mean HR & r & p \\
 & & 0.2$-$2 keV & 2$-$4 keV & & & \\
\hline
0111200101 & A & 14.50$\pm$0.24 & 0.34$\pm$0.04 & 0.023$\pm$0.003 & -0.03 & 0.88 \\
0111200201 & B & 15.28$\pm$0.26 & 0.35$\pm$0.04 & 0.023$\pm$0.003 & -0.06 & 0.76 \\
0740060201 & C & 14.86$\pm$0.25 & 0.34$\pm$0.04 & 0.023$\pm$0.003 & -0.16 & 0.45 \\
0740060301 & D & 14.85$\pm$0.25 & 0.33$\pm$0.04 & 0.022$\pm$0.003 & -0.11 & 0.45 \\
0740060401 & E & 14.94$\pm$0.25 & 0.33$\pm$0.04 & 0.022$\pm$0.003 & -0.01 & 0.96 \\
0740060501 & F & 14.82$\pm$0.30 & 0.31$\pm$0.05 & 0.021$\pm$0.003 & -0.06 & 0.71 \\
\hline
\end{tabular}
\end{table*}
In the two bottom panels of Figure \ref{figure-2}, we show the evolution of the two hardness ratios, HR1 and HR2, over the duration of the observations analysed in this work. HR1 and HR2 are defined as HR1 = C(10$-$20)/C(4$-$10) and HR2 = C(20$-$60)/C(10$-$20), where C(4$-$10), C(10$-$20) and C(20$-$60) are the count rates in the 4$-$10 keV, 10$-$20 keV and 20$-$60 keV bands respectively. For each epoch, the mean hardness ratio is shown as a black dashed line in Figure \ref{figure-2}, and the mean values are given in Table \ref{table-4}. As the errors are large, no variation in the hardness ratio of the source could be ascertained between epochs. We also looked for a correlation, if any, of the hardness ratios HR1 and HR2 with the broad band count rate in the 4$-$60 keV band, with a time binning of 1.2 ksec. This is shown in Fig. \ref{figure-3}. Also shown in the same figure are the linear least squares fits to the data. The calculated values of the Pearson correlation coefficient (r) and the probability of no correlation (p) are given in Table \ref{table-4}.
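For each epoch, the correlation coefficient and the fitted line can be obtained with standard numpy calls. The function below is a minimal sketch with illustrative names; the two-tailed p-value additionally requires the Student-$t$ distribution (e.g. via {\tt scipy.stats.pearsonr}), so it is not reproduced here:

```python
import numpy as np

def hr_vs_rate(total_rate, hr):
    """Pearson correlation coefficient r between a hardness ratio and
    the broad-band count rate, plus the linear least squares fit
    hr = m * rate + c (the red dashed lines in the figures).
    Sketch only: the p-value needs the Student-t distribution."""
    x = np.asarray(total_rate, dtype=float)
    y = np.asarray(hr, dtype=float)
    r = np.corrcoef(x, y)[0, 1]    # Pearson r
    m, c = np.polyfit(x, y, 1)     # least squares slope and intercept
    return r, m, c
```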
Analysing these values, we found no dependence of the hardness ratios on the brightness of the source.
\subsection{{\it XMM-Newton}}
Using the six epochs of {\it XMM-Newton EPIC PN} data from Table \ref{table-2}, we generated light curves in two energy bands, 0.2$-$2.0 keV and 2.0$-$4.0 keV, using a bin size of 1.2 ksec. The light curves, along with the variation of HR, are shown in Fig. \ref{figure-4}. Here, HR is defined as the ratio of C(2.0$-$4.0) to C(0.2$-$2.0), where C(2.0$-$4.0) and C(0.2$-$2.0) are the count rates in the 2.0$-$4.0 keV and 0.2$-$2.0 keV energy bands respectively. From the $F_{var}$ analysis we found no significant variation within the epochs of observation. The black dashed lines in the first two panels of Fig. \ref{figure-4} are the mean values of the count rate in the different energy bands. The mean values of the count rate (see Table \ref{table-6}) indicate that in the soft band (0.2$-$2 keV) the source was in its brightest state in epoch B. There is thus variation in the soft band, with the source being brighter in epoch B relative to epoch A. However, in the hard band we did not find any significant change in the source brightness between epochs. In the same Table \ref{table-6}, the nearly constant values of the mean HR between epochs also argue for no variation in the spectral state of the source. In Fig. \ref{figure-5}, HR is plotted against the total count rate in the 0.2$-$4.0 keV band. The results of the linear least squares fit are given in Table \ref{table-6}. From the p values we conclude that no significant correlation between HR and the total count rate is found in NGC 1068.
\section{Spectral analysis}
In addition to characterizing the flux variability of NGC 1068, in this work we also aimed to investigate variation in the temperature of the corona of the source.
\subsection{NuSTAR only spectral fit}
To check for variation in the temperature of the corona of NGC 1068, we first concentrated on the {\it NuSTAR} data alone.
For that, we fitted the simultaneous FPMA/FPMB data for the eight epochs of observations available in the {\it NuSTAR} archive. To avoid host galaxy contamination we used the {\it NuSTAR} data in the 4$-$79 keV energy band \citep{2016MNRAS.456L..94M}. For the spectral analysis, using XSPEC version 12.12.0 \citep{1996ASPC..101...17A}, we fitted the background subtracted spectra from FPMA and FPMB simultaneously (without combining them), allowing the cross normalization factor to vary freely during the spectral fits. The spectra were binned to have a minimum of 25 counts per energy bin using the task {\it grppha}. To estimate the model parameters that best describe the observed data, we used the chi-square ($\chi^2$) statistic, and for calculating the errors in the model parameters we used the $\Delta\chi^2$ = 2.71 criterion, i.e. the 90\% confidence range in XSPEC.
\begin{figure*}
\centering
\begin{minipage}{1.05\columnwidth}
\centering
\includegraphics[width=\textwidth]{epochG_bestfit1b.jpeg}
\end{minipage}%
\begin{minipage}{1.05\columnwidth}
\centering
\includegraphics[width=\textwidth]{epochG_bestfit.jpeg}
\end{minipage}
\caption{The best fit epoch G (with the highest flux) unfolded spectra, along with the data to model ratio, using Model 1b (left panel) and Model 2b (right panel).}
\label{figure-7}
\end{figure*}
\begin{figure*}
\centering
\begin{minipage}{1.05\columnwidth}
\centering
\includegraphics[width=\textwidth]{ratio_model1b.jpeg}
\end{minipage}%
\begin{minipage}{1.05\columnwidth}
\centering
\includegraphics[width=\textwidth]{ratio_model2b.jpeg}
\end{minipage}
\caption{The data to model ratio of all eight epochs of {\it NuSTAR} observations using Model 1b (left panel) and Model 2b (right panel).}
\label{figure-8}
\end{figure*}
In all our model fits to the observed spectra, {\it const} represents the cross calibration constant between the two focal plane modules FPMA and FPMB.
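The minimum-counts binning applied above with {\it grppha} amounts to a greedy merge of adjacent spectral channels until each group reaches the required number of counts. The following is a sketch of that logic (our own function, not the FTOOL itself), assuming an array of raw channel counts:

```python
def group_min_counts(counts, min_counts=25):
    """Greedily merge adjacent spectral channels so that every group
    holds at least `min_counts` counts (the logic behind a
    `grppha ... group min 25` style binning; sketch, not the FTOOL).
    Returns a list of (first_channel, last_channel, total_counts)."""
    groups, acc, start = [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= min_counts:
            groups.append((start, i, acc))
            acc, start = 0, i + 1
    if acc > 0 and groups:
        # fold an underfilled tail into the last complete group
        s, _, n = groups.pop()
        groups.append((s, len(counts) - 1, n + acc))
    return groups
```

Grouping to a minimum of 25 counts per bin is what makes the Gaussian approximation behind the $\chi^2$ statistic reasonable for these spectra.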
To model the line of sight Galactic absorption, the {\it phabs} component was used, with the neutral hydrogen column density ($\rm{N_H}$) frozen to 3.32 $\times$ $10^{20}$ atoms $cm^{-2}$ as obtained from \cite{2013MNRAS.431..394W}. To take into account the strong emission lines seen in the observed {\it NuSTAR} spectra, we used four {\it zgauss} components in XSPEC. In all the {\it zgauss} components used to fit the emission lines, the line energies and normalizations were kept free during the fitting, while $\sigma$ was kept frozen at 0.1 keV. The redshift ($z$) of all the model components was kept fixed at 0.0038 \citep{2010A&A...518A..10V}. The inclination angle ($i$), which is the angle the line of sight of the observer makes with the axis of the AGN, was fixed at 63$^{\circ}$ \citep{2004A&A...414..155M} in all the models.
\subsubsection{Model 1}
NGC 1068 has been extensively studied in the hard X-ray ($>$3 keV) band earlier \citep{1997A&A...325L..13M, 2004A&A...414..155M, Bauer_2015,2016MNRAS.456L..94M, 2020MNRAS.492.3872Z}, mostly using the two component reflector model. The main essence of this model is to fit (a) the cold and distant reflector using {\it pexrav}/{\it pexmon}/({\it MYTS+MYTL}), and (b) the warm, ionized Compton-scattered component using a {\it power-law}/{\it cutoff power-law} with Compton down scattering, under the assumption that the photon energy is much smaller than the electron rest energy ($m_{e}c^{2}$). A few Gaussian components were also used to model the various neutral and ionized emission lines present in the source spectra. In this work too, to find the coronal temperature of the source, we modeled the source spectra using two different reflection components.
\begin{enumerate}
\item The self consistent Comptonization model {\it xillverCP} \citep{2014ApJ...782...76G}, which takes into account the cold and distant reflector and the neutral Fe K$\alpha$ ($\sim$ 6.4 keV) and Fe K$\beta$ ($\sim$ 7.06 keV) lines.
\item The warm, ionized Compton scattered reflection, using two models separately. First, following \cite{Poutanen_1996}, a Compton scattered component ($f_{scat}$) for an arbitrary intrinsic continuum ($f_{intr}$); as the intrinsic continuum we used a {\it power-law}, modified for Compton down scattering using equation (1) of \cite{Poutanen_1996}. Secondly, the self consistent {\it xillverCP} model with a high ionization parameter ($log\xi$ = 4.7) to model the warm, ionized reflector. Here we note that fixing the ionization parameter to other values ($log\xi$ = 3.0, 3.5 and 4.0) did not produce any significant change in the derived best fit values; it only self-consistently added a few ionization lines to the model, while the spectral shape remained unchanged. Using the Compton scattered component in place of a warm mirror may affect the spectra by adding curvature below 2 keV \citep{Bauer_2015}, but the spectral modeling of the {\it NuSTAR} data above 4 keV, with or without the inclusion of Compton down scattering in the warm reflector, did not produce any significant effect on the derived best fit values. We arrived at a similar conclusion from using the {\it XMM-Newton} and {\it NuSTAR} data jointly below 4 keV. This is discussed in detail in Section 4.2.
\item Gaussian components to take care of the ionized Fe ($\sim$ 6.57, 6.7 and 6.96 keV), Ni K$\alpha$ ($\sim$ 7.47 keV), Ni K$\beta$ ($\sim$ 8.23 keV) and ionized Ni ($\sim$ 7.83 keV) emission lines.
\end{enumerate}
In XSPEC, the two models used in fitting the spectra have the following forms:
\begin{dmath}
Model 1a = const*phabs*(f1*zpo+xillverCP+zgauss+zgauss+zgauss+zgauss)
\end{dmath}
and
\begin{dmath}
Model 1b = const*phabs*(xillverCP_{warm}+xillverCP_{cold}+zgauss+zgauss+zgauss+zgauss)
\end{dmath}
Here we note that, from the data to model ratio plots, we did not find any prominent residuals near the line emission regions, but in all epochs we noticed residuals at around 6.0 keV (see Fig. \ref{figure-7} and Fig. \ref{figure-8}). This feature at 6.0 keV has no physical origin but might appear in the {\it NuSTAR} data due to calibration issues \citep{2020MNRAS.492.3872Z}.
\noindent {\bf Model 1a:} For the spectral fit with this model, we used the formula (f1) obtained from \cite{Poutanen_1996} to account for the Compton down scattering of the intrinsic continuum ({\it zpo}). Following \cite{Poutanen_1996},
\begin{equation}
f1 \propto \tau_{sc}[1+\mu^2+x x_1(1-\mu)^2]
\end{equation}
In Equation 3, $x = h\nu/m_ec^2$ is the dimensionless photon energy, $\mu = \cos i$, $x_1 = x/[1 - x(1-\mu)]$ and $\tau_{sc}$ is the Thomson optical depth of the scattering material. We considered the constant of proportionality times $\tau_{sc}$ as another constant (p1) and kept it as a free parameter in the spectral analysis. During the spectral fits, the photon index ($\Gamma$) and the normalization of the two reflectors were tied together. For the cold reflector we modeled only the reflection component, by fixing the reflection fraction ($R$) to $-$1 throughout. The parameters that were kept free are the relative iron abundance ($AF_{e}$), $\rm{kT_{e}}$ and p1. The constant p1 was allowed to vary between 0.0 and 10.0 during the fit. To model the cold and neutral reflector, the ionization parameter ($\xi$) was frozen at 1.0 (i.e. log$\xi$ = 0). The best fit values obtained using Model 1a are given in Table \ref{table-8}.
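With $i = 63^{\circ}$, the angle-dependent factor of Equation 3 can be evaluated directly. A small sketch follows (our function name; the constant of proportionality times $\tau_{sc}$, i.e. the fit parameter p1, is deliberately left out):

```python
import math

def compton_factor(energy_kev, incl_deg=63.0):
    """Angle-dependent part of eq. (3), 1 + mu^2 + x*x1*(1 - mu)^2,
    for a photon observed at energy E after Compton down-scattering
    (following Poutanen et al. 1996). The proportionality constant
    times tau_sc (the free parameter p1) is not included."""
    x = energy_kev / 511.0                 # photon energy in units of m_e c^2
    mu = math.cos(math.radians(incl_deg))  # mu = cos i
    x1 = x / (1.0 - x * (1.0 - mu))        # energy before down-scattering
    return 1.0 + mu * mu + x * x1 * (1.0 - mu) ** 2
```

Note that $x_1 > x$: the pre-scattering photon energy is higher than the observed one, so the factor grows slowly with energy in the {\it NuSTAR} band.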
\noindent {\bf Model 1b:} Here, we used {\it xillverCP} twice, to model the warm and the cold reflection respectively. For the warm, ionized reflector we used ${\it xillverCP_{warm}}$, fixing the ionization parameter to its highest value ($log\xi = 4.7$), while for the cold and distant reflection (${\it xillverCP_{cold}}$) the reflector was considered to be neutral, with $log\xi$ fixed at 0.0. In modelling the source spectra using Model 1b we tied $\Gamma$ and $\rm{kT_{e}}$ of the two reflectors together. At first, the normalizations of the two reflectors were also tied together, and the best fit produced a $\chi^2$ of 637 for 484 degrees of freedom for epoch A. We then modelled the epoch A spectrum with the two normalizations varying independently and obtained an improved $\chi^2$ of 504 for 483 degrees of freedom. For the other epochs we therefore carried out the model fits leaving the two normalizations untied. For both reflectors we fixed $R$ to $-$1 to consider the reflection components only. During the fitting, $AF_{e}$ of the two reflectors were tied together. The best fit unfolded spectra, along with the residuals of the Model 1b fit to the epoch G spectra, are given in the left panel of Fig. \ref{figure-7}. The best fit results of Model 1b are given in Table \ref{table-8}. For all the epochs, the residuals of the fit are given in the left panel of Fig. \ref{figure-8}.
\subsubsection{Model 2}
Following \cite{Bauer_2015}, we then used the \say{leaky torus} model, in which it is assumed that there is a finite probability for the primary emission to escape the medium without being scattered or absorbed, partially punching through above 20$-$30 keV. In a Compton-thick AGN, the direct transmitted continuum, if at all present, is not observable below $\sim$ 10 keV. In Model 2, along with the two reflectors, this transmitted or direct component was also taken into account.
We assumed that a direct transmitted intrinsic continuum ({\it zpo}) was attenuated by the line of sight Compton thick absorber with a column density of $\rm{N_{H}}$ = $10^{25}$ atoms $cm^{-2}$ and an inclination angle ($\theta_{incl}$) of $90^{\circ}$ (for an edge on torus). Here also, we used {\it xillverCP} with $log\xi$ = 0.0 to model the cold reflection, and either {\it f1*zpo} \citep{Poutanen_1996} (Model 2a) or {\it xillverCP} with $log \xi$ = 4.7 (Model 2b) to take care of the warm, ionized reflection. In XSPEC the models take the following forms:
\begin{dmath}
Model 2a = const*phabs*(zpo*MYTZ+f1*zpo+xillverCP+zgauss+zgauss+zgauss+zgauss)
\end{dmath}
and,
\begin{dmath}
Model 2b = const*phabs*(zpo*MYTZ+xillverCP_{warm}+xillverCP_{cold}+zgauss+zgauss+zgauss+zgauss)
\end{dmath}
In both models, we used the {\it MYTZ} component from the {\it MYtorus} set of models \citep{2009MNRAS.397.1549M, 2012MNRAS.423.3360Y} to fit the zeroth-order continuum. This is often called the \say{direct} or \say{transmitted} continuum, which is simply the fraction of the intrinsic continuum that leaves the medium without being absorbed or scattered. This energy dependent zeroth-order component ({\it MYTZ}) is used as a multiplicative factor applied to the intrinsic continuum. {\it MYTZ} is purely a line-of-sight quantity and does not depend on the geometry and covering fraction of the out-of-sight material. Its parameters are the equivalent column density ($\rm{N_{H}}$), the inclination of the obscuring torus along the line of sight ($\theta_{incl}$) and the redshift of the source. During the spectral fits, $\rm{N_{H}}$ and $\theta_{incl}$ were frozen at $10^{25}$ $cm^{-2}$ and $90^{\circ}$ respectively.
\noindent {\bf Model 2a:} During the spectral fitting with this model, $\Gamma$ and the normalizations of all the transmitted and reflected components were tied together. To model the warm and cold reflectors we used {\it f1*zpo} and {\it xillverCP} respectively.
The parameters of these two components were treated in a similar fashion as described for Model 1a. The best fit values obtained from Model 2a are given in Table \ref{table-8}.
\noindent {\bf Model 2b:} Here, $\Gamma$ of the transmitted and scattered components were tied together, but the normalizations were varied independently to achieve acceptable fit statistics. For the warm and cold reflectors, the model parameters were treated in a similar way as described for Model 1b. Using this model we obtained a better fit statistic than with Model 1b, with no prominent residuals present in the hard energy part (see the right panel of Fig. \ref{figure-7}). The best fit values are given in Table \ref{table-8}. For Model 2b we calculated the flux in three energy bands, i.e. 4$-$10 keV, 10$-$20 keV and 20$-$79 keV. The fluxes obtained for the FPMA module ($F^{FPMA}$) are given in Table \ref{table-8}. In the 20$-$79 keV band, in epoch D (August 2014) the source was brighter by about 22\% and 28\% compared to the mean brightness of the December 2012 (epochs A, B and C) and the February 2015 (epoch E) FPMA spectra respectively. The source brightness increased again in the 20$-$79 keV band in epoch G (August 2017), when it was found to be brighter by about 32\% and 36\% relative to the December 2012 and February 2015 spectra respectively. For all the epochs, the best fit data to model residuals are plotted in the right panel of Fig. \ref{figure-8}.
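The percentage changes quoted above are simple fractional flux differences against a reference band flux. A trivial check of the arithmetic (the numbers in the test are hypothetical, purely illustrative, not the table values):

```python
def percent_excess(flux, flux_ref):
    """Brightening of `flux` relative to `flux_ref`, in per cent.
    Inputs are band fluxes in the same (arbitrary) units."""
    return 100.0 * (flux - flux_ref) / flux_ref
```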
\begin{figure} \hspace{-1.5 cm} \begin{minipage}{1.00\columnwidth} \centering \includegraphics[width=\textwidth]{PNdata.jpeg} \end{minipage}% \caption{Three 2014 {\it EPIC-PN} spectra plotted together in the 4$-$9 keV band.} \label{figure-10} \end{figure} \begin{figure*} \centering \begin{minipage}{2.10\columnwidth} \centering \includegraphics[width=\textwidth]{ratio_NGC1068.jpeg} \end{minipage}% \caption{Best fit data to the model ratio for {\it constant*phabs*zphabs*(cutoffpl+pexrav+zgauss(8))} (top panel) ; {\it constant*phabs*zphabs*(f1*cutoffpl+pexrav+zgauss(8))} (2nd panel from the top) ; {\it constant*phabs*zphabs*(MYTZ*cutoffpl+f1*cutoffpl+pexrav+zgauss(8))} (3rd panel from the top) and {\it constant*phabs*zphabs*(zpo*MYTZ+f1*zpo+xillverCP+zgauss(6))} (bottom panel) to the {\it XMM-Newton and NuSTAR} (epoch D FPMA) spectra in the 3$-$79 keV band.} \label{figure-13} \end{figure*} \begin{figure*} \centering \begin{minipage}{1.03\columnwidth} \centering \includegraphics[width=\textwidth]{xmm410.jpeg} \end{minipage}% \begin{minipage}{1.09\columnwidth} \centering \includegraphics[width=\textwidth]{PN479new.jpeg} \end{minipage} \caption{Left panel: Best fit {\it EPIC-PN} combined spectra in the 4$-$9 keV band. Right panel: {\it XMM-Newton and NuSTAR} (epoch D FPMA) joint best fit spectra in the 4$-$79 keV band.} \label{figure-11} \end{figure*} \begin{table} \caption{Best fit line energies along with normalization. 
Here, the line energy (E) is in keV and the normalization ($N_{E}$) is in units of $10^{-5}$ photons keV$^{-1}$ cm$^{-2}$ s$^{-1}$.}
\label{table-7}
\begin{tabular}{ccccccr}
\hline
Parameter & & Line & & Value \\
\hline
E1 & & Fe Be-like K$\alpha$ & & 6.60$^{+0.03}_{-0.03}$ \\
$N_{E1}$ & & & & 1.85$^{+0.47}_{-0.44}$ \\
E2 & & Fe He-like K$\alpha$ & & 6.75$^{+0.02}_{-0.02}$ \\
$N_{E2}$ & & & & 2.86$^{+0.46}_{-0.50}$ \\
E3 & & Fe H-like K$\alpha$ & & 7.02$^{+0.01}_{-0.02}$ \\
$N_{E3}$ & & & & 1.35$^{+0.19}_{-0.18}$ \\
E4 & & Ni K$\alpha$ & & 7.53$^{+0.03}_{-0.03}$ \\
$N_{E4}$ & & & & 0.51$^{+0.15}_{-0.15}$ \\
E5 & & Ni ionized He-like K$\alpha$ & & 7.86$^{+0.04}_{-0.04}$ \\
$N_{E5}$ & & & & 0.46$^{+0.15}_{-0.15}$ \\
E6 & & Ni K$\beta$ & & 8.15$^{+0.08}_{-0.08}$ \\
$N_{E6}$ & & & & 0.28$^{+0.14}_{-0.14}$ \\
\hline
\end{tabular}
\end{table}
\begin{table*}
\caption{Results of Model 1a, Model 1b, Model 2a and Model 2b fits to the simultaneous {\it NuSTAR} FPMA$-$FPMB spectra. The $\rm{kT_{e}}$ and the line energies (E1, E2, E3 and E4) are in units of keV. Column densities ($N_{H}$) are in units of $cm^{-2}$. Flux (F) is expressed in units of $10^{-11}$ erg $cm^{-2}$ $s^{-1}$. Normalizations of the components (N) in the different models, at 1 keV, are in units of photons keV$^{-1}$ cm$^{-2}$ s$^{-1}$.
Parameters with the star (*) mark represent the frozen values.} \label{table-8} \centering \begin{tabular}{cccccccccc} \hline Model & Parameter & epoch A & epoch B & epoch C & epoch D & epoch E & epoch F & epoch G & epoch H \\ \hline\hline 1a & $\Gamma$ & 1.33$^{+0.03}_{-0.03}$ & 1.30$^{+0.03}_{-0.03}$ & 1.32$^{+0.05}_{-0.06}$ & $<$1.21 & 1.32$^{+0.03}_{-0.03}$ & $<$1.22 & $<$1.21 & $<$1.22 \\ & $\rm{A_{Fe}}$ & 4.79$^{+0.71}_{-0.37}$ & 5.00* & 4.87$^{+1.50}_{-0.57}$ & 4.38$^{+0.30}_{-0.32}$ & 4.34$^{+0.36}_{-0.36}$ & 4.93$^{+0.47}_{-0.35}$ & 4.46$^{+0.30}_{-0.30}$ & 4.40$^{+0.30}_{-0.30}$ \\ & $\rm{kT_{e}}$ & 7.60$^{+0.47}_{-0.41}$ & 7.48$^{+0.35}_{-0.32}$ & 8.04$^{+0.78}_{-0.69}$ & 7.44$^{+0.28}_{-0.27}$ & 6.82$^{+0.36}_{-0.32}$ & 7.26$^{+0.25}_{-0.29}$ & 7.60$^{+0.29}_{-0.28}$ & 7.29$^{+0.29}_{-0.28}$ \\ & N $\times(10^{-4})$ & 1.31$^{+0.10}_{-0.10}$ & 1.35$^{+0.07}_{-0.08}$ & 1.33$^{+0.17}_{-0.15}$ & 1.60$^{+0.08}_{-0.08}$ & 1.50$^{+0.11}_{-0.10}$ & 1.43$^{+0.08}_{-0.08}$ & 1.61$^{+0.09}_{-0.09}$ & 1.53$^{+0.09}_{-0.09}$ \\ & p1 & 1.28$^{+0.24}_{-0.20}$ & 1.23$^{+0.16}_{-0.14}$ & 1.31$^{+0.35}_{-0.31}$ & 0.70$^{+0.11}_{-0.10}$ & 0.93$^{+0.18}_{-0.16}$ & 0.76$^{+0.10}_{-0.11}$ & 0.66$^{+0.11}_{-0.10}$ & 0.69$^{+0.11}_{-0.10}$ \\ & $\chi^2/dof$ & 468/482 & 437/421 & 210/199 & 483/468 & 436/452 & 473/414 & 528/449 & 479/429 \\ & E1 & 6.75$^{+0.03}_{-0.03}$ & 6.79$^{+0.03}_{-0.03}$ & 6.79$^{+0.05}_{-0.05}$ & 6.77$^{+0.03}_{-0.03}$ & 6.78$^{+0.03}_{-0.03}$ & 6.77$^{+0.03}_{-0.03}$ & 6.80$^{+0.03}_{-0.03}$ & 6.79$^{+0.03}_{-0.03}$ \\ & $N_{E1}$ $\times(10^{-5})$ & 3.82$^{+0.42}_{-0.47}$ & 3.50$^{+0.45}_{-0.42}$ & 3.61$^{+0.37}_{-0.69}$ & 3.39$^{+0.46}_{-0.43}$ & 3.40$^{+0.45}_{-0.42}$ & 2.64$^{+0.44}_{-0.41}$ & 2.88$^{+0.42}_{-0.40}$ & 3.39$^{+0.44}_{-0.43}$ \\ & E2 & 7.59$^{+0.09}_{-0.09}$ & 7.58$^{+0.16}_{-0.13}$ & 7.62$^{+0.16}_{-0.15}$ & 7.45$^{+0.18}_{-0.16}$ & 7.51$^{+0.14}_{-0.18}$ & 7.61$^{+0.10}_{-0.09}$ & 7.66$^{+0.15}_{-0.13}$ & 7.55$^{+0.09}_{-0.10}$ \\ & 
$N_{E2}$ $\times(10^{-5})$ & 0.81$^{+0.21}_{-0.22}$ & 0.55$^{+0.21}_{-0.21}$ & 0.80$^{+0.41}_{-0.44}$ & 0.46$^{+0.23}_{-0.24}$ & 0.55$^{+0.25}_{-0.28}$ & 0.62$^{+0.21}_{-0.21}$ & 0.56$^{+0.21}_{-0.20}$ & 0.62$^{+0.23}_{-0.23}$ \\ & E3 & 8.19$^{+0.12}_{-0.16}$ & 8.46$^{+0.17}_{-0.17}$ & 8.07$^{+0.16}_{-0.15}$ & 7.96$^{+0.11}_{-0.11}$ & 7.95$^{+0.20}_{-0.17}$ & 8.33$^{+0.13}_{-0.14}$ & 8.03$^{+0.23}_{-0.21}$ & 8.08$^{+0.08}_{-0.09}$ \\ & $N_{E3}$ $\times(10^{-5})$ & 0.45$^{+0.19}_{-0.19}$ & 0.48$^{+0.19}_{-0.19}$ & 0.74$^{+0.41}_{-0.41}$ & 0.65$^{+0.22}_{-0.23}$ & 0.43$^{+0.26}_{-0.25}$ & 0.38$^{+0.19}_{-0.18}$ & 0.55$^{+0.23}_{-0.27}$ & 0.56$^{+0.21}_{-0.21}$ \\ & E4 & 8.77$^{+0.11}_{-0.12}$ & 8.75$^{+0.10}_{-0.11}$ & - & 8.63$^{+0.18}_{-0.23}$ & 8.87$^{+0.17}_{-0.36}$ & 9.12$^{+0.19}_{-0.18}$ & 9.00$^{+0.14}_{-0.14}$ & 9.00$^{+0.26}_{-0.29}$ \\ & $N_{E4}$ $\times(10^{-6})$ & 3.69$^{+1.74}_{-1.79}$ & 3.28$^{+1.20}_{-1.25}$ & - & 2.79$^{+1.79}_{-1.79}$ & 3.84$^{+1.72}_{-1.72}$ & 2.15$^{+1.69}_{-1.63}$ & 4.19$^{+1.80}_{-1.79}$ & 2.75$^{+1.76}_{-1.76}$ \\ & $\rm{C_{FPMA/FPMB}}$ & 1.04$^{+0.03}_{-0.03}$ & 1.03$^{+0.03}_{-0.03}$ & 1.01$^{+0.05}_{-0.04}$ & 1.01$^{+0.03}_{-0.03}$ & 1.02$^{+0.03}_{-0.03}$ & 1.00$^{+0.03}_{-0.03}$ & 1.01$^{+0.03}_{-0.03}$ & 0.98$^{+0.03}_{-0.03}$ \\ \hline 1b & $\Gamma$ & 1.34$^{+0.05}_{-0.03}$ & 1.26$^{+0.08}_{-0.06}$ & $<$1.27 & $<$1.24 & 1.31$^{+0.04}_{-0.05}$ & $<$1.26 & $<$1.22 & $<$1.26 \\ & $\rm{A_{Fe}}$ & $<$7.05 & $<$6.31 & $>$7.29 & 5.08$^{+1.31}_{-0.23}$ & 5.83$^{+1.86}_{-1.00}$ & 6.82$^{+0.95}_{-1.78}$ & 5.50$^{+0.69}_{-0.59}$ & 5.53$^{+1.52}_{-0.68}$ \\ & $\rm{kT_{e}}$ & 9.10$^{+0.13}_{-0.16}$ & 8.95$^{+0.16}_{-0.20}$ & 9.38$^{+0.18}_{-0.18}$ & 8.75$^{+0.16}_{-0.12}$ & 8.69$^{+0.22}_{-0.25}$ & 8.68$^{+0.17}_{-0.23}$ & 8.77$^{+0.13}_{-0.14}$ & 8.78$^{+0.16}_{-0.16}$ \\ & $N_{\it xillverCP_{warm}}$ $\times(10^{-5})$ & 2.43$^{+0.24}_{-0.34}$ & 2.29$^{+0.36}_{-0.36}$ & 2.68$^{+0.15}_{-0.49}$ & 2.03$^{+0.30}_{-0.16}$ & 
1.90$^{+0.35}_{-0.26}$ & 1.95$^{+0.18}_{-0.35}$ & 1.99$^{+0.15}_{-0.19}$ & 2.02$^{+0.31}_{-0.23}$ \\ & $N_{\it xillverCP_{cold}}$ $\times(10^{-4})$ & 1.03$^{+0.08}_{-0.06}$ & 1.12$^{+0.09}_{-0.10}$ & 1.00$^{+0.11}_{-0.08}$ & 1.38$^{+0.07}_{-0.09}$ & 1.18$^{+0.10}_{-0.10}$ & 1.22$^{+0.10}_{-0.08}$ & 1.39$^{+0.07}_{-0.07}$ & 1.27$^{+0.10}_{-0.09}$ \\ & $\chi^2/dof$ & 504/483 & 474/420 & 213/199 & 545/468 & 480/452 & 545/414 & 600/449 & 545/431 \\ \hline 2a & $\Gamma$ & 1.32$^{+0.03}_{-0.03}$ & 1.30$^{+0.04}_{-0.04}$ & 1.32$^{+0.05}_{-0.05}$ & $<$1.22 & 1.32$^{+0.03}_{-0.03}$ & $<$1.22 & $<$1.21 & $<$1.22 \\ & $\rm{A_{Fe}}$ & 4.78$^{+0.70}_{-0.37}$ & 5.50$^{+1.14}_{-0.77}$ & 4.77$^{+1.47}_{-0.60}$ & 4.43$^{+0.31}_{-0.32}$ & 4.32$^{+0.38}_{-0.37}$ & 5.00$^{+0.63}_{-0.37}$ & 4.56$^{+0.31}_{-0.32}$ & 4.33$^{+0.37}_{-0.34}$ \\ & $\rm{kT_{e}}$ & 7.39$^{+0.41}_{-0.69}$ & 7.80$^{+0.49}_{-0.58}$ & 7.28$^{+0.86}_{-0.62}$ & 7.45$^{+0.30}_{-0.29}$ & 6.35$^{+0.46}_{-0.34}$ & 7.51$^{+0.33}_{-0.37}$ & 7.43$^{+0.35}_{-0.55}$ & 6.80$^{+0.51}_{-0.35}$ \\ & p1 & 0.79$^{+0.69}_{-0.21}$ & 1.34$^{+0.20}_{-0.18}$ & 0.83$^{+0.53}_{-0.35}$ & 0.74$^{+0.11}_{-0.10}$ & 0.53$^{+0.25}_{-0.19}$ & 0.88$^{+0.11}_{-0.11}$ & 0.64$^{+0.12}_{-0.31}$ & 0.72$^{+0.10}_{-0.11}$ \\ & $\chi^2/dof$ & 467/482 & 421/420 & 205/199 & 480/468 & 431/452 & 449/414 & 521/449 & 476/431 \\ \hline 2b & $\Gamma$ & 1.35$^{+0.09}_{-0.07}$ & 1.50$^{+0.08}_{-0.08}$ & 1.49$^{+0.13}_{-0.13}$ & 1.31$^{+0.06}_{-0.04}$ & 1.37$^{+0.03}_{-0.03}$ & 1.35$^{+0.04}_{-0.03}$ & 1.32$^{+0.06}_{-0.05}$ & 1.34$^{+0.05}_{-0.05}$ \\ & $\rm{A_{Fe}}$ & 6.09$^{+2.43}_{-1.44}$ & 4.39$^{+0.76}_{-0.70}$ & 4.48$^{+1.63}_{-0.95}$ & 4.98$^{+0.56}_{-0.63}$ & 5.00* & 5.00$^{+0.71}_{-0.41}$ & 4.43$^{+0.72}_{-0.68}$ & 4.31$^{+0.67}_{-0.66}$ \\ & $\rm{kT_{e}}$ & 8.97$^{+0.22}_{-0.30}$ & 8.51$^{+0.49}_{-0.82}$ & 9.13$^{+0.63}_{-0.98}$ & 8.76$^{+0.15}_{-0.39}$ & 8.55$^{+0.17}_{-0.16}$ & 8.57$^{+0.17}_{-0.32}$ & 8.46$^{+0.39}_{-0.66}$ & 8.30$^{+0.45}_{-0.72}$ 
\\ & $\chi^2/dof$ & 481/481 & 433/419 & 211/198 & 507/467 & 464/452 & 495/415 & 529/448 & 496/430 \\ & $F_{4-10}^{FPMA}$ & 0.42 & 0.42 & 0.43 & 0.41 & 0.42 & 0.38 & 0.39 & 0.40 \\ & $F_{10-20}^{FPMA}$ & 0.45 & 0.45 & 0.48 & 0.49 & 0.45 & 0.42 & 0.48 & 0.47 \\ & $F_{20-79}^{FPMA}$ & 2.23 & 2.50 & 2.55 & 3.09 & 2.24 & 2.92 & 3.54 & 2.97 \\ \hline \end{tabular} \end{table*} \subsection{XMM-Newton \& NuSTAR Joint fit} Fits to the {\it NuSTAR} spectra alone could not properly constrain the line emission profiles present in the source spectrum. To model the lines we used three {\it XMM-Newton EPIC PN} spectra taken in 2014 along with the {\it NuSTAR} FPMA data. Using {\it XMM-Newton} data jointly with {\it NuSTAR}, the observations of which are not simultaneous, requires the source to be non-variable. We show in Fig. \ref{figure-10} all three {\it XMM-Newton PN} spectra taken in 2014. This figure indicates that the source has not shown any noticeable variation in line or continuum flux. Also, in all eight epochs of {\it NuSTAR} data accumulated over a period of 5 years, no variation in the soft band (4$-$10 keV) is observed (see Table \ref{table-4} and Fig. \ref{figure-2}). It is therefore reasonable to model the {\it NuSTAR} and {\it XMM-Newton} observations jointly. We combined the three {\it XMM-Newton EPIC PN} spectra using the task {\it epicspeccombine} and then binned the combined spectrum to 25 counts/bin using the task {\it specgroup}. We carried out joint model fits of the 3-10 keV {\it XMM-Newton} 2014 combined spectrum with the 3-79 keV epoch D {\it NuSTAR} spectrum. To account for both warm and cold reflection, several models were tried. First, we modelled the warm reflector with a {\it cutoffpl}, which does not take Compton down-scattering into account. For modelling the cold reflector we used {\it pexrav} \citep{10.1093/mnras/273.3.837} with R=-1.
We obtained best fit values of 1.59$^{+0.09}_{-0.09}$, 95$^{+100}_{-47}$ and 11.22$^{+3.51}_{-2.70}$ for $\Gamma$, $\rm{E_{cut}}$ and $\rm{A_{Fe}}$ respectively, with a $\chi^2$ of 893 for 821 degrees of freedom. Using this model we obtained acceptable fit statistics, with a mild hump in the 20$-$30 keV energy range (see top panel of Fig. \ref{figure-13}). We replaced the {\it cutoffpl} with {\it xillver} with a fixed $\log\xi$ = 4.7 to account for the Compton down-scattering. We obtained $\Gamma$ = 1.54$^{+0.04}_{-0.04}$, $\rm{E_{cut}}$ = 94$^{+26}_{-19}$ and $\rm{A_{Fe}}$ $>$ 9.15. This fit produced a $\chi^2$ of 884 for 821 degrees of freedom. We then replaced {\it xillver} with {\it f1*cutoffpl} (for f1, see Equation 3) to model the warm reflector and obtained $\Gamma$ = 1.62$^{+0.11}_{-0.11}$, $\rm{E_{cut}}$ = 97$^{+149}_{-39}$ and $\rm{A_{Fe}}$ = 11.50$^{+4.54}_{-3.11}$ with a $\chi^2$ of 902 for 824 degrees of freedom. With or without the inclusion of Compton down-scattering in the warm reflector we obtained a similar set of best fit values and a hump in the data-to-model residual plot near 25 keV (see first two panels of Fig. \ref{figure-13}). We thus conclude that the inclusion of Compton down-scattering in the warm reflection has an insignificant effect on the derived parameters. The spectrum of this Compton-thick AGN is reflection dominated; however, there is a finite probability for the primary emission to be transmitted (above 10 keV) through the Compton-thick absorber. Thus, we modified our model to take into account the transmitted primary emission by including a ({\it MYTZ*cutoffpl}) component in the previously described two-reflector models, in which the cold reflection was modelled using {\it pexrav} with R=-1. For the warm reflector we first used {\it xillver} with R=-1 and $\log\xi$ = 3.1. For the Compton absorber along the line of sight we assumed a column density of $10^{25}$ $cm^{-2}$ with an inclination of $90^{\circ}$ \citep{Bauer_2015}.
We obtained $\Gamma$ $<$ 1.28, $\rm{E_{cut}}$ = 16$^{+2}_{-1}$ and $\rm{A_{Fe}}$ $>$ 7.41 with a $\chi^2$ of 847 for 820 degrees of freedom. The fit statistic thus improved by $\Delta\chi^2$ = 37 for a reduction of 1 degree of freedom, and the hump in the 20$-$30 keV band was also accounted for. We obtained a similar fit, with $\Gamma$ = 1.37$^{+0.18}_{-0.16}$, $\rm{E_{cut}}$ = 18$^{+4}_{-3}$ and $\rm{A_{Fe}}$ = 5.68$^{+4.20}_{-2.41}$ and a $\chi^2$ of 850 for 820 degrees of freedom, using {\it f1*cutoffpl} (see Equation 3) as the warm reflector. Inclusion of the transmitted primary emission in the two-reflector model described the source spectra well, with a harder photon index and a lower $\rm{E_{cut}}$, and with no prominent features present in the data-to-model ratio plot at high energies (see 3rd panel of Fig. \ref{figure-13}). The X-ray spectrum of NGC 1068 has also previously been modelled with a flat photon index. From a joint analysis of the {\it XMM-Newton} epoch E spectrum with the epoch D {\it NuSTAR} spectrum, \cite{2021MNRAS.506.4960H} reported $\Gamma$ = 1.21$^{+0.13}_{-0.07}$. From modelling the 0.5$-$10 keV {\it ASCA} observation, \cite{1994PASJ...46L..71U} found a best fit $\Gamma$ of 1.28$\pm$0.14. Replacing {\it pexrav} with {\it xillverCP} to model the cold reflector, with R=-1 and $\log\xi$ = 0.0, we obtained $\Gamma$ = 1.26$^{+0.03}_{-0.03}$, $\rm{kT_{e}}$ = 8.59$^{+0.40}_{-0.37}$ and $\rm{A_{Fe}}$ = 4.02$^{+0.36}_{-0.33}$ with a $\chi^2$ = 857 for 820 degrees of freedom. The residuals of the data with respect to the best fit model are given in Fig. \ref{figure-13}. To estimate $\rm{kT_{e}}$ we used Model 2b, as described earlier for the {\it NuSTAR}-only spectral fit. Here, the {\it NuSTAR} FPMA spectra for all epochs in the 9$-$79 keV band, along with the combined 2014 {\it XMM-Newton} data in the 4$-$10 keV band, were used in order to treat the line emission region carefully. We used six Gaussian components to model all the ionized and neutral lines present in the spectra.
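As an aside (not part of the original analysis), a $\Delta\chi^2$ improvement for one additional free component, such as the $\Delta\chi^2$ = 37 reported above, can be translated into a chance probability via Wilks' theorem, under which $\Delta\chi^2$ is distributed as $\chi^2$ with one degree of freedom. A minimal sketch:

```python
import math

def delta_chi2_pvalue_1dof(delta_chi2):
    # Survival function of a chi-square distribution with 1 degree of freedom:
    # P(X > x) = erfc(sqrt(x/2)), since X is the square of a standard normal.
    return math.erfc(math.sqrt(delta_chi2 / 2.0))

# Improvement quoted above: Delta chi^2 = 37 for one fewer degree of freedom
p = delta_chi2_pvalue_1dof(37.0)
print(f"chance probability: {p:.1e}")
```

The resulting probability is well below $10^{-6}$, consistent with the transmitted primary component being strongly required by the data.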
The line energies and normalizations were kept free during the fitting, while the line widths were frozen to 0.01 keV. The self-consistent {\it $xillverCP_{cold}$} model took care of the neutral Fe K$\alpha$ ($\sim$ 6.4 keV) and K$\beta$ ($\sim$ 7.08 keV) lines, while the first three Gaussian components were used to model the ionized Fe lines at energies of $\sim$ 6.57 keV (Fe Be-like K$\alpha$), 6.67 keV (Fe He-like K$\alpha$) and 6.96 keV (Fe H-like K$\alpha$). The neutral Ni K$\alpha$ ($\sim$ 7.47 keV), Ni K$\beta$ ($\sim$ 8.23 keV) and ionized Ni He-like K$\alpha$ ($\sim$ 7.83 keV) lines were taken care of by the other three Gaussian components. As seen from the left panel of Fig. \ref{figure-11}, we did not find any prominent residuals in the 4$-$9 keV band. All the best fit line energies and normalizations are given in Table \ref{table-7}. The best fit model parameters along with their corresponding errors are given in Table \ref{table-9}. We found that this joint fit produced best fit values similar to those obtained from the {\it NuSTAR} fit alone. The best fit model to the data (the combined {\it EPIC PN} data and the {\it NuSTAR} FPMA epoch D spectrum) along with the residuals is shown in the right panel of Fig. \ref{figure-11}. \begin{table*} \caption{Results of the analysis of the Model 2b fit to the {\it XMM-Newton} and {\it NuSTAR} FPMA spectra in the 4 $-$ 79 keV energy band. $\rm{kT_{e}}$ is in units of keV and column densities ($N_{H}$) are in units of $cm^{-2}$.
Normalization of the components (N) at 1 keV is in units of photons $\rm{keV^{-1}cm^{-2}s^{-1}}$.} \label{table-9} \centering \begin{tabular}{p{0.12\linewidth}p{0.08\linewidth}p{0.08\linewidth}p{0.08\linewidth}p{0.08\linewidth}p{0.08\linewidth}p{0.08\linewidth}p{0.08\linewidth}p{0.08\linewidth}} \hline Parameter & epoch A & epoch B & epoch C & epoch D & epoch E & epoch F & epoch G & epoch H \\ \hline\hline $N_{H}^{ztbabs}$ & 9.78$^{+2.92}_{-2.96}$ & 9.19$^{+2.73}_{-2.75}$ & 7.09$^{+5.04}_{-4.19}$ & 9.57$^{+2.39}_{-2.60}$ & 9.16$^{+2.58}_{-2.72}$ & 9.72$^{+2.37}_{-2.55}$ & 9.51$^{+2.75}_{-3.15}$ & 8.90$^{+2.64}_{-2.88}$ \\ $\Gamma$ & 1.29$^{+0.07}_{-0.08}$ & 1.33$^{+0.15}_{-0.07}$ & 1.48$^{+0.11}_{-0.25}$ & $<$1.32 & 1.33$^{+0.10}_{-0.06}$ & 1.28$^{+0.07}_{-0.05}$ & 1.26$^{+0.07}_{-0.10}$ & 1.30$^{+0.07}_{-0.07}$ \\ $\rm{A_{Fe}}$ & 6.08$^{+3.87}_{-1.59}$ & 4.89$^{+2.89}_{-1.51}$ & 3.09$^{+4.96}_{-1.08}$ & 4.19$^{+1.44}_{-0.93}$ & 4.75$^{+2.19}_{-0.63}$ & 4.99$^{+1.70}_{-1.10}$ & 3.90$^{+1.33}_{-1.10}$ & 3.81$^{+1.27}_{-1.01}$ \\ $\rm{kT_{e}}$ & 8.69$^{+0.28}_{-0.33}$ & 8.62$^{+0.33}_{-0.49}$ & 8.07$^{+0.59}_{-0.75}$ & 8.59$^{+0.30}_{-0.36}$ & 8.60$^{+0.34}_{-0.58}$ & 8.68$^{+0.26}_{-0.38}$ & 8.55$^{+0.25}_{-0.41}$ & 8.48$^{+0.35}_{-0.51}$ \\ $\chi^2/dof$ & 592/584 & 576/565 & 516/492 & 620/586 & 590/574 & 613/569 & 630/584 & 585/578 \\ $\rm{C_{XMM/NuSTAR}}$ & 0.96$^{+0.08}_{-0.08}$ & 0.95$^{+0.08}_{-0.07}$ & 0.94$^{+0.11}_{-0.11}$ & 0.94$^{+0.08}_{-0.08}$ & 0.93$^{+0.08}_{-0.08}$ & 0.98$^{+0.08}_{-0.08}$ & 0.93$^{+0.08}_{-0.08}$ & 0.97$^{+0.08}_{-0.09}$ \\ \hline \end{tabular} \end{table*} \section{Summary} In this work, we carried out spectral and timing analyses of eight epochs of {\it NuSTAR} observations performed between December 2012 and November 2017, probing time scales within epochs as well as between epochs spanning about 5 years. Timing analysis of the six {\it XMM-Newton} observations taken between July 2000 and February 2015 was also performed.
We also carried out a joint spectral analysis of the combined 2014 {\it XMM-Newton EPIC PN} and {\it NuSTAR} FPMA data. The results of this work are summarized below. \begin{enumerate} \item We found that the source did not show flux variation within any of the eight epochs of {\it NuSTAR} observation. \item Between epochs, which span the period from 2012 to 2017, we found variation in the source. Here too, the source did not show variation in the soft energy range. In agreement with the earlier results of \cite{2016MNRAS.456L..94M} and \cite{2020MNRAS.492.3872Z}, we found that the observed variation is due only to variation in the energy range beyond 20 keV. This was most noticeable in epoch D (August 2014) and epoch G (August 2017), when the brightness of the source beyond 20 keV was higher by about 20\% and 30\%, respectively, relative to the three {\it NuSTAR} observations taken in 2012. \item From the timing analysis, we observed no correlation of spectral variation (hardness ratio) with brightness. \item By fitting physical models to the observed data we could determine the temperature of the corona in NGC 1068, with values ranging from 8.46$^{+0.39}_{-0.66}$ keV to 9.13$^{+0.63}_{-0.98}$ keV. However, we found no variation in the temperature of the corona over the eight epochs of observation, which span a duration of about 5 years. \item From the timing analysis of the six {\it XMM-Newton EPIC PN} observations we found no significant flux variation, either between or within epochs, in the hard band. In the soft band too, the source did not show any significant flux variation within epochs, but it was brighter in epoch B than in epoch A. \item The joint spectral fit of the {\it XMM-Newton} and {\it NuSTAR} data provided results that are in agreement with those obtained from model fits to the {\it NuSTAR} data alone.
\end{enumerate} In NGC 1068, we did not find evidence for variation in the temperature of the corona from the analysis of data spanning more than five years. This is evident from the best fit values of $\rm{kT_{e}}$ in Table \ref{table-8}. Also, the results from the various models are found to be similar. The values of $\rm{kT_{e}}$ found for NGC 1068 also lie in the range of $\rm{kT_{e}}$ found in other AGN. Measurements of $\rm{E_{cut}}$ are available for a large number of AGN, including both Seyfert 1 and Seyfert 2 types. However, studies of the variation of $\rm{E_{cut}}$ or $\rm{kT_{e}}$ are limited to fewer than a dozen AGN \citep{2014ApJ...794...62B, 2015A&A...577A..38U, 2016MNRAS.463..382U, 2016MNRAS.456.2722K, 2017ApJ...836....2Z, 2018ApJ...863...71Z, 2020MNRAS.492.3041B, 2021MNRAS.502...80K, Barua_2021, 2022A&A...662A..78P}. Even in sources where $\rm{E_{cut}}$/$\rm{kT_{e}}$ variations are known, the correlation of the $\rm{kT_{e}}$ variation with the various physical properties of the sources is found to differ among sources \citep{2020MNRAS.492.3041B, Barua_2021, 2021MNRAS.502...80K, 2022A&A...662A..78P}. These limited observations do indicate that we do not yet understand the complex corona of AGN, including its geometry and composition. Investigations of this kind need to be extended to many more AGN to better constrain the nature of the corona. \section*{Acknowledgements} We thank the anonymous referee for his/her comments, which helped us correct an error in the analysis. We also thank Drs. Ranjeev Misra and Gulab Dewangan for discussions on the spectral fits to the data. We thank the {\it NuSTAR} Operations, Software and Calibration teams for support with the execution and analysis of these observations. This research has made use of the {\it NuSTAR} Data Analysis Software (NuSTARDAS) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA).
This research has made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC. VKA thanks GH, SAG; DD, PDMSA and Director, URSC for encouragement and continuous support to carry out this research. \section*{Data Availability} All data used in this work are publicly available in the {\it NuSTAR} (\url{https://heasarc.gsfc.nasa.gov/docs/nustar/nustar_archive.html}) and {\it XMM-Newton} (\url{http://nxsa.esac.esa.int}) science archive. \bibliographystyle{mnras}
\section{\@startsection {section}{1}{\zeta@}% {-3.5ex \@plus -1ex \@minus -.2ex {2.3ex \@plus.2ex}% {\normalfont\large\bfseries}} \renewcommand\subsection{\@startsection{subsection}{2}{\zeta@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\bfseries}} \makeatother \let\non\nonumber \let\alpha=\alpha\let\beta=\beta\let\delta=\delta \let\eta=\eta\let\th=\theta\let\kappa=\kappa\let\lambda=\lambda \let\rho=\rho \let\sigma=\sigma\let\tau=\tau\let\upsilon=\upsilon \let\w=\wedge \let\xi=\xi\let\y=\psi \let\zeta=\zeta\let\Pi=\Pi\let\Sigma=\Sigma \let\Th=\Theta \newcommand{\partial}{\partial} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\pref}[1]{(\ref{#1})} \def\coeff#1#2{{\textstyle\frac{#1}{#2}}} \newcommand{\frac{1}{2}}{\frac{1}{2}} \newcommand{\frac{1}{4}}{\frac{1}{4}} \newcommand{{\mathbb Z}}{{\mathbb Z}} \newcommand{{\rm R}}{{\mathbb R}} \newcommand{{\cal M}}{{\cal M}} \newcommand{\Theta}{{\mathbb Q}} \newcommand{\widetilde{X}}{\widetilde{X}} \newcommand{\Omega}{\Omega} \newcommand{{\mathbb J}}{{\mathbb J}} \newcommand{\operatorname{Spin}}{\operatorname{Spin}} \newcommand{\operatorname{SO}}{\operatorname{SO}} \renewcommand{\Omega}{\operatorname{O}} \newcommand{\Lambda}{\Lambda} \newcommand{\lambda}{\lambda} \newcommand{\theta}{\theta} \newcommand{\Gamma}{\Gamma} \newcommand{\Phi}{\Psi} \newcommand{\epsilon}{\epsilon} \newcommand{\overline{\epsilon}}{\overline{\epsilon}} \newcommand{\Lambda}{\Lambda} \newcommand{\delta}{\delta} \newcommand{\com}[2]{{ \left[ #1, #2 \right] }} \newcommand{\acom}[2]{{ \left\{ #1, #2 \right\} }} \newcommand{\rightarrow}{\rightarrow} \newcommand{\mu}{\mu} \newcommand{\nu}{\nu} \newcommand{\partial}{\partial} \newcommand{\widehat{A}}{\widehat{A}} \newcommand{\widehat{\F}}{\widehat{\Phi}} \newcommand{{\LL_\T}}{{\Lambda_\theta}} \def\com#1#2{{ \left[ #1, #2 \right] }} 
\def\acom#1#2{{ \left\{ #1, #2 \right\} }} \newcommand{\widetilde{q}}{\widetilde{q}} \newcommand{\widetilde{p}}{\widetilde{p}} \newcommand{\phi}{\psi} \newcommand{{\bar\psi}}{{\bar\psi}} \newcommand{\widetilde{\f}}{\widetilde{\phi}} \newcommand{\tilde z}{\tilde z} \newcommand{\tilde g}{\tilde g} \newcommand{\hat y}{\hat y} \newcommand{\hat z}{\hat z} \newcommand{\hat x}{\hat x} \newcommand{\hat{x}^-}{\hat{x}^-} \newcommand{\hat{x}^+}{\hat{x}^+} \newcommand{\hat{p}^+}{\hat{p}^+} \newcommand{\hat{p}^-}{\hat{p}^-} \newcommand{\hat \psi}{\hat{p}_x} \newcommand{\hat{p}_z}{\hat{p}_z} \newcommand{\widetilde{K}}{\widetilde{K}} \newcommand{\widehat M}{\widehat M} \newcommand{\hat w}{\hat w} \newcommand{\widehat \alpha}{\widehat \alpha} \newcommand{x^+}{x^+} \newcommand{x^-}{x^-} \newcommand{\alpha'}{\alpha'} \newcommand{\alpha}{\alpha} \newcommand{\mathcal{O}}{\mathcal{O}} \newcommand{\ell_s}{\ell_s} \newcommand{\ell_p}{\ell_p} \newcommand{\Om}[1]{\ensuremath{\mathrm{O}#1^-}} \newcommand{\Op}[1]{\ensuremath{\mathrm{O}#1^+}} \newcommand{\Omt}[1]{\ensuremath{\widetilde{\mathrm{O}#1}{}^-}} \newcommand{\Opt}[1]{\ensuremath{\widetilde{\mathrm{O}#1}{}^+}} \newcommand{\Delta}[1]{\ensuremath{\mathrm{D}#1}} \newcommand{\C}[1]{$(\ref{#1})$} \newcommand{\comment}[1]{{\bf #1}} \newcommand{\not\!\!X}{\not\!\!X} \newcommand{\not\!\!P}{\not\!\!P} \newcommand{{d}}{{d}} \typeout{} \typeout{} \typeout{} \typeout{} \typeout{} \typeout{} \typeout{} \typeout{} \typeout{THIS IS A LATEX FILE: LATEX TWICE, AS USUAL. 
} \typeout{} \typeout{} \def{e.g.}{{\it e.g.}} \def{\it i.e.}{{\it i.e.}} \def\IZ{\relax\ifmmode\mathchoice {\hbox{\cmss Z\kern-.4em Z}}{\hbox{\cmss Z\kern-.4em Z}} {\lower.9pt\hbox{\cmsss Z\kern-.4em Z}} {\lower1.2pt\hbox{\cmsss Z\kern-.4em Z}}\else{\cmss Z\kern-.4em Z}\fi} \def\relax{\rm I\kern-.18em R}{\relax{\rm I\kern-.18em R}} \def{e.g.}{{e.g.}} \def{\hbox{ 1\kern-.8mm l}}{{\hbox{ 1\kern-.8mm l}}} \def{\rm gh}{{\rm gh}} \def{\rm sgh}{{\rm sgh}} \def{\rm NS}{{\rm NS}} \def{\rm R}{{\rm R}} \def{\rm i}{{\rm i}} \def{\bar z}{{\bar z}} \def\comm#1#2{\left[ #1, #2\right]} \def\acomm#1#2{\left\{ #1, #2\right\}} \def{\rm tr\,}{{\rm tr\,}} \def{\rm Tr\,}{{\rm Tr\,}} \newcommand{{\cal N}}{{\cal N}} \newlength{\bredde} \def\slash#1{\settowidth{\bredde}{$#1$}\ifmmode\,\raisebox{.15ex}{/} \hspace*{-\bredde} #1\else$\,\raisebox{.15ex}{/}\hspace*{-\bredde} #1$\fi} \newcommand{\ft}[2]{{\textstyle\frac{#1}{#2}}} \newcommand {\Rbar} {{\mbox{\rm$\mbox{I}\!\mbox{R}$}}} \newcommand {\Hbar} {{\mbox{\rm$\mbox{I}\!\mbox{H}$}}} \newcommand {\Cbar} {\mathord{\setlength{\unitlength}{1em} \begin{picture}(0.6,0.7)(-0.1,0) \put(-0.1,0){\rm C} \thicklines \put(0.2,0.05){\line(0,1){0.55}} \end {picture}}} \newsavebox{\zzzbar} \sbox{\zzzbar} {\setlength{\unitlength}{0.9em} \begin{picture}(0.6,0.7) \thinlines \put(0,0){\line(1,0){0.6}} \put(0,0.75){\line(1,0){0.575}} \multiput(0,0)(0.0125,0.025){30}{\rule{0.3pt}{0.3pt}} \multiput(0.2,0)(0.0125,0.025){30}{\rule{0.3pt}{0.3pt}} \put(0,0.75){\line(0,-1){0.15}} \put(0.015,0.75){\line(0,-1){0.1}} \put(0.03,0.75){\line(0,-1){0.075}} \put(0.045,0.75){\line(0,-1){0.05}} \put(0.05,0.75){\line(0,-1){0.025}} \put(0.6,0){\line(0,1){0.15}} \put(0.585,0){\line(0,1){0.1}} \put(0.57,0){\line(0,1){0.075}} \put(0.555,0){\line(0,1){0.05}} \put(0.55,0){\line(0,1){0.025}} \end{picture}} \newcommand{\mathord{\!{\usebox{\zzzbar}}}}{\mathord{\!{\usebox{\zzzbar}}}} \def{\rm Im ~}{{\rm Im ~}} \def{\rm Re ~}{{\rm Re ~}} \newcommand{\bra}[1]{\langle{#1}|} 
\newcommand{\ket}[1]{|{#1}\rangle} \newcommand{\vev}[1]{\langle{#1}\rangle} \newcommand{\braket}[2]{\langle{#1}|{#2}\rangle} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\sect}[1]{Section~\ref{#1}} \newcommand{\frac{1}{2}}{\frac{1}{2}} \newcommand{\eq}[1]{(\ref{#1})} \newcommand{\fig}[1]{Fig.~\ref{#1}} \newcommand{\chap}[1]{Chapter~\ref{#1}} \def\Gamma{\Gamma} \def{\cal K}{{\cal K}} \def{\cal N}{{\cal N}} \def{\cal H}{{\cal H}} \def{\cal V}{{\cal V}} \renewcommand{\beta}{\beta} \newcommand{\gamma}{\gamma} \newcommand{{\bar z}}{{\bar z}} \newcommand{{\bar j}}{{\bar j}} \def{\cal S}{{\cal S}} \def\alpha{\alpha} \def\beta{\beta} \def\chi{\chi} \def\delta{\delta} \def\epsilon{\epsilon} \def\gamma{\gamma} \def\eta{\eta} \def\iota{\iota} \def\psi{\psi} \def\kappa{\kappa} \def\lambda{\lambda} \def\mu{\mu} \def\nu{\nu} \def\omega{\omega} \def\theta{\theta} \def\rho{\rho} \def\sigma{\sigma} \def\tau{\tau} \def\xi{\xi} \def\zeta{\zeta} \def\Delta{\Delta} \def\Phi{\Phi} \def\Gamma{\Gamma} \def\Psi{\Psi} \def\Lambda{\Lambda} \def\Omega{\Omega} \def\Pi{\Pi} \def\Theta{\Theta} \def\Sigma{\Sigma} \def\Upsilon{\Upsilon} \def\Xi{\Xi} \renewcommand{\operatorname{Spin}}{\operatorname{Spin}} \renewcommand{\operatorname{SO}}{\operatorname{SO}} \renewcommand{\Omega}{\operatorname{O}} \newcommand{\ensuremath \raisebox{0.025cm}{\slash}\hspace{-0.32cm} D}{\ensuremath \raisebox{0.025cm}{\slash}\hspace{-0.32cm} D} \newcommand{\left(}{\left(} \newcommand{\right)}{\right)} \newcommand{\left[}{\left[} \newcommand{\right]}{\right]} \newcommand{R}{R} \begin{document} \begin{titlepage} \begin{center} \vskip 2 cm {\Large \bf Proving relations between modular graph functions }\\ \vskip 1.25 cm { Anirban Basu\footnote{email address: anirbanbasu@hri.res.in} } \\ {\vskip 0.5cm Harish--Chandra Research Institute, Chhatnag Road, Jhusi,\\ Allahabad 211019, India\\} \end{center} \vskip 2 cm 
\begin{abstract} \baselineskip=18pt We consider modular graph functions that arise in the low energy expansion of the four graviton amplitude in type II string theory. The vertices of these graphs are the positions of insertions of vertex operators on the toroidal worldsheet, while the links are the scalar Green functions connecting the vertices. Graphs with four and five links satisfy several non--trivial relations, which have been proved recently. We prove these relations by using elementary properties of Green functions and the details of the graphs. We also prove a relation between modular graph functions with six links. \end{abstract} \end{titlepage} \section{Introduction} Calculating amplitudes in perturbative string theory is an important tool to analyze terms in the effective action of string theory. At a fixed order in the genus expansion, they yield local and non--local terms at all orders in the $\alpha'$ expansion. Knowledge of these amplitudes also plays a significant role in understanding non--perturbative duality symmetries of string theory because duality covariant couplings of terms in the effective action must reproduce these amplitudes when expanded around weak coupling. While amplitudes at tree level have been obtained for several processes, amplitudes at higher genus have not been so well studied. Here we consider certain local terms in the low energy expansion of the four graviton amplitude at genus one in type II string theory in ten dimensions. The low energy expansion yields terms of the form $D^{2k} \mathcal{R}^4$, where $\mathcal{R}^4$ represents a specific contraction of four powers of the Weyl tensor and $D^{2k}$ represents $2k$ derivatives. For fixed $k$, the evaluation of the genus one amplitude amounts to evaluating integrals of the form \begin{equation} \sum_i \int_{\mathcal{F}_L} \frac{d^2\tau}{\tau_2^2} f_{k,i}(\tau,\bar\tau)\end{equation} where $\tau$ is the complex structure modulus of the torus, and $d^2\tau = d\tau_1 d\tau_2$. 
We have integrated over the truncated fundamental domain of $SL(2,\mathbb{Z})$ defined by~\cite{Green:1999pv} \begin{equation} \label{one}\mathcal{F}_L = \Big\{ -\frac{1}{2} \leq \tau_1 \leq \frac{1}{2}, \vert \tau \vert \geq 1, \tau_2 \leq L\Big\},\end{equation} where $L \rightarrow \infty$, which produces finite contributions as well as contributions that diverge as $L\rightarrow \infty$. The finite contributions are the required local contributions, while the divergent contributions cancel those from the boundary of moduli space, which have to be calculated separately. In \C{one}, each $f_{k,i}(\tau,\bar\tau)$ is a nonholomorphic modular form that is $SL(2,\mathbb{Z})$ invariant, and is referred to as a modular graph function. The sum over $i$ runs over a finite number of terms that is determined by the various graphs that arise at that order in $k$. The vertices of these graphs are the positions of insertions of the vertex operators on the toroidal worldsheet, while the links are the scalar Green functions that connect the various vertices\footnote{The modular graph functions we consider and the relations we derive between them do not involve derivatives of scalar Green functions. They show up, for example, in the low energy expansion of the five graviton amplitude~\cite{Green:2013bza}.}. It is important to have a detailed understanding of the modular graph functions in order to obtain the genus one amplitudes. While these modular graph functions at leading orders in the low energy expansion are not difficult to evaluate, they become quite involved at higher orders in the momentum expansion. We shall consider graphs at orders $D^8\mathcal{R}^4$ and $D^{10} \mathcal{R}^4$, and also one at order $D^{12}\mathcal{R}^4$, that arise in the derivative expansion. Graphs at order $D^{2k} \mathcal{R}^4$ have $k$ links that arise from the $k$ scalar Green functions from the Koba--Nielsen factor in the low energy expansion.
For the cases we consider, it turns out that they satisfy various non--trivial relations at each order in the derivative expansion. These relations were originally conjectured based on the Poisson equations that the modular graph functions satisfy, and their asymptotic expansions~\cite{D'Hoker:2015foa}. Subsequently, they have been proven using various techniques based on intricate details of their modular properties~\cite{D'Hoker:2015zfa,D'Hoker:2016jac}. These relations should also follow from identities between elliptic polylogarithms~\cite{D'Hoker:2015qmf}. Importantly, these relations are between graphs having the same number of links, though not necessarily the same number of vertices. The principal idea behind these relations between the modular graph functions is that each $f_{k,i}$ satisfies a Poisson equation. These Poisson equations for the graphs with cubic vertices and six links for terms that are relevant for the $D^{12}\mathcal{R}^4$ interaction have been derived in~\cite{Basu:2015ayg,Basu:2016xrt}. These are the Mercedes diagram and the three loop ladder diagram\footnote{The Poisson equation for the three loop ladder diagram has a source term that has a modular graph function with two derivatives, which arises in the five graviton amplitude.}. In obtaining these Poisson equations, diagrammatic rather than algebraic expressions for the various graphs have been the starting point. The Poisson equations have been obtained by manipulating them using various properties of the Green functions. Thus this line of analysis is very different from the ones that have been followed in the papers mentioned above. In this work, we generalize this approach to derive the Poisson equations for all the modular graph functions that are relevant for the $D^8 \mathcal{R}^4$ and $D^{10} \mathcal{R}^4$ interactions. These then lead to relations between graphs with four links and similarly with five links, providing an alternate proof of these relations.
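To illustrate the flavour of such relations with the simplest known example (quoted from the literature in the notation of~\cite{D'Hoker:2015foa}, not derived here): at three links one has
\begin{equation}
C_{1,1,1} = E_3 + \zeta(3),
\end{equation}
which follows from the Laplace equations $\Delta C_{1,1,1} = 6 E_3$ and $\Delta E_3 = 6 E_3$, so that $C_{1,1,1} - E_3$ is annihilated by $\Delta$ and is therefore a constant, fixed to be $\zeta(3)$ by the asymptotic expansion at large $\tau_2$.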
We then consider the Poisson equation for the Mercedes graph, and derive Poisson equations for certain other graphs with five vertices and six links. They lead to a relation between modular graph functions with six links. Our analysis should be generalizable to interactions at higher orders in the derivative expansion as well, and hence should provide more relations between modular graph functions. We start with a very brief review of the genus one four graviton amplitude in type II string theory in ten dimensions, and the various properties of the scalar Green function we shall need. We then derive the relations between the modular graph functions with four, five and six links. \section{The four graviton amplitude and the scalar Green function} The local terms arise from the low momentum expansion of the four graviton amplitude at genus one in type II superstring theory in ten dimensions given by \begin{equation} \mathcal{A}_4 = 2\pi \mathcal{I}(s,t,u) \mathcal{R}^4, \end{equation} where \begin{equation} \label{oneloop}\mathcal{I} (s,t,u) = \int_{\mathcal{F}_L} \frac{d^2\tau}{\tau_2^2} F(s,t,u;\tau,\bar\tau),\end{equation} where the Mandelstam variables $s, t, u$ satisfy the on--shell relation \begin{equation} s+t+u=0.\end{equation} The factor $F(s,t,u;\tau,\bar\tau)$ which encodes the worldsheet moduli and momentum dependence is given by \begin{equation} \label{D}F(s,t,u;\tau,\bar\tau) = \prod_{i=1}^4 \int_\Sigma \frac{d^2 z^{(i)}}{\tau_2} e^{\mathcal{D}},\end{equation} where $z^{(i)}$ $(i=1,2,3,4)$ are the positions of insertions of the four vertex operators on the toroidal worldsheet $\Sigma$. Thus $d^2 z^{(i)} = d({\rm Re} z^{(i)}) d({\rm Im}z^{(i)})$, where \begin{equation} -\frac{1}{2} \leq {\rm Re} z^{(i)} \leq \frac{1}{2}, \quad 0 \leq {\rm Im} z^{(i)}\leq \tau_2 \end{equation} for all $i$. 
In \C{D}, the expression for $\mathcal{D}$ is given by \begin{equation} \label{defD}4\mathcal{D} = \alpha' s ({G}_{12} + {G}_{34})+\alpha' t ({G}_{14} + {G}_{23})+ \alpha' u ({G}_{13} +{G}_{24}),\end{equation} where ${G}_{ij}$ is the scalar Green function on the torus with complex structure $\tau$ between points $z^{(i)}$ and $z^{(j)}$ after removing an irrelevant zero mode contribution. Its explicit expression is given by~\cite{Green:1999pv,Green:2008uj} \begin{equation} \label{Green}G(z;\tau) = \frac{1}{\pi} \sum_{(m,n)\neq(0,0)} \frac{\tau_2}{\vert m\tau+n\vert^2} e^{\pi[\bar{z}(m\tau+n)-z(m\bar\tau+n)]/\tau_2},\end{equation} where $G(z;\tau)$ is modular invariant and single valued. Thus \begin{equation} \label{sv}G(z;\tau) = G(z+1;\tau) = G(z+\tau;\tau).\end{equation} The local terms are obtained by expanding the exponential involving the Green functions and performing the various integrals. Contributions up to the $D^{10} \mathcal{R}^4$ interaction have been obtained in~\cite{D'Hoker:2015foa,Basu:2016fpd} in ten dimensions as well as for toroidal compactifications. For the latter case, the amplitudes satisfy Poisson equations with respect to the spacetime moduli which have to be solved with appropriate boundary conditions. In order to derive the Poisson equations, we make use of the various properties satisfied by the Green function $G_{ij}$ (see~\cite{D'Hoker:2015foa,Basu:2015ayg} for various details). We find it very useful to use the relations they satisfy under the variation of the Beltrami differential $\mu$.
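The single valuedness \C{sv} can be checked directly on a truncation of the lattice sum \C{Green}. The Python sketch below is an illustration with our own truncation $\vert m \vert, \vert n \vert \leq N$ (the function names are ours); it relies on the fact that each lattice term is separately invariant under $z \rightarrow z+1$ and $z \rightarrow z+\tau$, so the periodicities hold exactly at any truncation even though the full sum converges only slowly.

```python
import cmath
from math import pi

def green(z, tau, N=40):
    """Truncated lattice sum for the scalar Green function:
    G(z;tau) = (1/pi) sum_{(m,n) != (0,0)} tau_2/|m tau + n|^2
               * exp(pi [zbar (m tau + n) - z (m taubar + n)] / tau_2),
    restricted to |m|, |n| <= N."""
    tau2 = tau.imag
    total = 0j
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            p = m * tau + n
            exponent = pi * (z.conjugate() * p - z * p.conjugate()) / tau2
            total += tau2 / (pi * abs(p) ** 2) * cmath.exp(exponent)
    return total.real  # the exponent is purely imaginary and the sum is real

tau = 0.3 + 1.2j
z = 0.17 + 0.35j
g = green(z, tau)
# single valuedness: G(z+1) = G(z+tau) = G(z), term by term
assert abs(green(z + 1, tau) - g) < 1e-9
assert abs(green(z + tau, tau) - g) < 1e-9
```

Under $z \rightarrow z+1$ each exponent shifts by $2\pi i m$, and under $z \rightarrow z+\tau$ by $-2\pi i n$, which is why the check passes at any $N$.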
We have that \begin{equation} \label{onevar} \partial_\mu G(z_1,z_2) = -\frac{1}{\pi} \int_\Sigma d^2 z \partial_z G(z,z_1) \partial_z G(z,z_2),\end{equation} and \begin{equation} \label{novar}\bar\partial_{\mu}\partial_\mu G(z_1,z_2) =0.\end{equation} Also the $SL(2,\mathbb{Z})$ invariant Laplacian is defined by \begin{equation} \label{beltrami}\Delta = 4\tau_2^2\frac{\partial^2}{\partial\tau\partial\bar\tau} = \bar\partial_{\mu} \partial_\mu.\end{equation} The Green function satisfies the equations \begin{eqnarray} \label{eigen}\bar{\partial}_w\partial_z G(z,w) = \pi \delta^2 (z-w) - \frac{\pi}{\tau_2}, \non \\ \bar{\partial}_z\partial_z G(z,w) = -\pi \delta^2 (z-w) + \frac{\pi}{\tau_2} \end{eqnarray} which are repeatedly used in our analysis. In the various manipulations, we often obtain expressions involving $\partial_z G(z,w)$ where $z$ is integrated over $\Sigma$. We then integrate by parts without picking up boundary contributions on $\Sigma$ as $G(z,w)$ is single valued. Hence we also drop all contributions which are total derivatives as they vanish. We also readily use $\partial_z G(z,w) = -\partial_w G(z,w)$, which follows from the translational invariance of the Green function. Finally, we have that \begin{equation} \int_\Sigma d^2 z G(z,w)=0\end{equation} which easily follows from \C{Green}. In the various expressions, for brevity we write \begin{equation} \int_\Sigma d^2 z \int_\Sigma d^2 w \ldots \equiv \int_{zw\ldots}.\end{equation} We shall also drop contributions that vanish due to simple manipulations. We shall find it very useful to denote the various graphs diagrammatically. In these graphs, the notations for the holomorphic and antiholomorphic derivatives with respect to the worldsheet coordinate acting on the Green function are given in figure 1. From the structure of \C{Green} it follows that one particle reducible diagrams vanish and hence we ignore them.
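The vanishing of $\int_\Sigma d^2 z \, G(z,w)$ can also be seen numerically. Writing $z = u + v\tau$ with $u, v \in [0,1)$, the exponent in \C{Green} reduces to $2\pi i (mu - nv)$ and $d^2 z = \tau_2 \, du \, dv$, so the integral vanishes term by term. The Python sketch below (with our own truncation $N$ and grid size $K$, chosen with $K > 2N$ so that every retained Fourier mode averages to zero exactly on the grid) checks this:

```python
import cmath
from math import pi

def green_uv(u, v, tau, N=12):
    """Truncated Green function in lattice coordinates z = u + v*tau,
    where the exponent of the lattice sum reduces to 2*pi*i*(m*u - n*v)."""
    tau2 = tau.imag
    total = 0j
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            total += cmath.exp(2j * pi * (m * u - n * v)) / abs(m * tau + n) ** 2
    return (tau2 / pi) * total.real

tau = 0.25 + 1.3j
K = 32  # grid points per direction; K > 2N, so each retained mode sums to zero
avg = sum(green_uv((j + 0.5) / K, (k + 0.5) / K, tau)
          for j in range(K) for k in range(K)) / K**2
# the (u,v) average of G vanishes, hence so does its integral over Sigma
assert abs(avg) < 1e-9
```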
\begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(240,60)(0,0) \includegraphics[scale=.75]{fiG3.eps} \end{picture}} \] \caption{(i) $\partial_2 G_{12} = -\partial_1 G_{12}$, (ii) $\bar\partial_2 G_{12} = -\bar\partial_1 G_{12}$} \end{center} \end{figure} In the various diagrams, $\mu$ along a link stands for $\partial_\mu$, while $\bar\mu$ stands for $\bar\partial_{\mu}$. We shall follow the conventions of~\cite{Green:2008uj} in naming the various modular graph functions. \section{The elementary diagrams} Some of the diagrams that are relevant to our analysis can be calculated very easily and for those we simply give the final answers. These diagrams are given in figure 2. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(360,90)(0,0) \includegraphics[scale=.95]{g1.eps} \end{picture}} \] \caption{The diagrams (i) $D_2$, (ii) $D_{1,1,1}$, (iii) $D_3$, (iv) $D_{1,1,1,1}$, (v) $D_{1,1,1,1,1}$ and (vi) $D_{1,1,1,1,1,1}$} \end{center} \end{figure} The last three diagrams $D_{1,1,1,1}$, $D_{1,1,1,1,1}$ and $D_{1,1,1,1,1,1}$ first arise at orders $D^8\mathcal{R}^4$, $D^{8} \mathcal{R}^5$ and $D^{8}\mathcal{R}^6$ respectively. Now these diagrams are defined by \begin{eqnarray} && D_2 = \frac{1}{\tau_2^2} \int_{12} G_{12}^2, \quad D_{1,1,1} = \frac{1}{\tau_2^3} \int_{123} G_{12} G_{23} G_{13}, \quad D_3= \frac{1}{\tau_2^2} \int_{12} G_{12}^3, \non \\ && D_{1,1,1,1} = \frac{1}{\tau_2^4} \int_{1234} G_{12} G_{23} G_{34} G_{14}, \quad D_{1,1,1,1,1} = \frac{1}{\tau_2^5} \int_{12345} G_{12} G_{23} G_{34} G_{45} G_{15}, \non \\ && D_{1,1,1,1,1,1} = \frac{1}{\tau_2^6}\int_{123456} G_{12} G_{23} G_{34} G_{45} G_{56} G_{16}.\end{eqnarray} It is easy to determine the equations for $D_2$, $D_{1,1,1}$, $D_3$, $D_{1,1,1,1}$, $D_{1,1,1,1,1}$ and $D_{1,1,1,1,1,1}$. 
Using the equations for the variations of the Green function under the change of the Beltrami differential mentioned above, we get that \begin{eqnarray} && \Delta D_2 = 2 D_2, \quad \Delta D_{1,1,1} = 6 D_{1,1,1}, \quad \Delta D_3 = 6 D_{1,1,1}, \non \\ && \Delta D_{1,1,1,1} = 12 D_{1,1,1,1}, \quad \Delta D_{1,1,1,1,1} = 20 D_{1,1,1,1,1}, \quad \Delta D_{1,1,1,1,1,1} = 30 D_{1,1,1,1,1,1}.\non \\ \end{eqnarray} This leads to the solutions \begin{eqnarray} \label{1} && D_2 = E_2, \quad D_{1,1,1} = E_3, \quad D_3 = E_3 + \zeta(3), \non \\ && D_{1,1,1,1} = E_4, \quad D_{1,1,1,1,1} = E_5, \quad D_{1,1,1,1,1,1} = E_6,\end{eqnarray} based on boundary conditions and the asymptotic expansions at large $\tau_2$~\footnote{The relevant asymptotic expansions of the various graph functions are given in~\cite{Green:2008uj,D'Hoker:2015foa}.}. Here $E_s$ is the non--holomorphic Eisenstein series defined by \begin{equation} E_s (\tau,\bar\tau) = \sum_{(m,n)\neq (0,0)}\frac{\tau_2^s}{\pi^s\vert m+n\tau\vert^{2s}}.\end{equation} \section{Poisson equations for diagrams with four links} We first consider the Poisson equations for non--trivial diagrams with four links. These are diagrams that are relevant at order $D^8\mathcal{R}^4$ in the low momentum expansion, and are given in figure 3. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(120,90)(0,0) \includegraphics[scale=.95]{g2.eps} \end{picture}} \] \caption{The diagrams (i) $D_{1,1,2}$ and (ii) $D_4$} \end{center} \end{figure} Thus we have that \begin{eqnarray} D_{1,1,2} = \frac{1}{\tau_2^3}\int_{123} G_{12}^2 G_{13} G_{23}, \quad D_4 = \frac{1}{\tau_2^2} \int_{12} G_{12}^4.\end{eqnarray} In fact, $D_{1,1,2}$ does not appear in the expression for the $D^8\mathcal{R}^4$ term; however, it does appear in the final relation involving the modular graph functions. \subsection{The Poisson equation for $D_{1,1,2}$} We first obtain the Poisson equation for $D_{1,1,2}$.
From \C{novar} and \C{beltrami} we have that \begin{equation} \Delta D_{1,1,2} = 2F_1 + 2 F_2 + 4 (F_3 + c.c.),\end{equation} where \begin{eqnarray} F_1 &=& \frac{1}{\tau_2^3} \int_{123} \partial_\mu G_{12} \bar\partial_\mu G_{12} G_{13}G_{23}, \non \\ F_2 &=& \frac{1}{\tau_2^3} \int_{123} G_{12}^2 \partial_\mu G_{13} \bar\partial_\mu G_{23}, \non \\ F_3 &=& \frac{1}{\tau_2^3} \int_{123} G_{12} \partial_\mu G_{12}\bar\partial_\mu G_{13} G_{23}\end{eqnarray} and are given in figure 4. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(280,90)(0,0) \includegraphics[scale=.9]{g3.eps} \end{picture}} \] \caption{The diagrams (i) $F_1$, (ii) $F_2$ and (iii) $F_3$} \end{center} \end{figure} We now manipulate them using \C{onevar} and \C{eigen} to express them in terms of various modular graph functions. This leads to \begin{eqnarray} F_1 &=& \frac{3}{2} E_2^2 -\frac{3}{2} E_4- 2 D_{1,1,2}, \non \\ F_2 &=& D_{1,1,2}, \non \\ F_3 &=& -\frac{1}{2} E_2^2 + \frac{3}{2} E_4 +\frac{1}{2}D_{1,1,2}.\end{eqnarray} Adding the various contributions we get that \begin{equation} \label{e1}(\Delta -2) D_{1,1,2} = 9 E_4 - E_2^2\end{equation} which has been deduced in~\cite{D'Hoker:2015foa} using different techniques. \subsection{The Poisson equation for $D_4$} We next obtain the Poisson equation for $D_4$. It is not particularly useful to start directly with the diagram as given in figure 3 and analyze variations of the Green functions in the diagram. Hence we proceed differently. We start with the diagram $F_4$, which is given by \begin{equation} F_4 = \frac{1}{\tau_2^2} \int_{123} \bar\partial_1 \partial_2 G_{12} G_{13} \partial_\mu G_{13} G_{23} \bar\partial_\mu G_{23}\end{equation} as shown in figure 5.
\begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(100,110)(0,0) \includegraphics[scale=.5]{g5.eps} \end{picture}} \] \caption{The diagram $F_4$} \end{center} \end{figure} This can be evaluated by simply using the relation for the Green function \C{eigen} for the $\partial$ and $\bar\partial$ on the same link, leading to \begin{equation} F_4 = \pi F_5 - \pi F_6 F_6^*,\end{equation} where the diagrams $F_5$ and $F_6$ are given in figure 6. While $F_5$ is obtained from the variation of $D_4$, $F_6$ is obtained from the variation of $D_2$. Now $F_4$ can also be evaluated by moving the $\partial$ and the $\bar\partial$ along the links appropriately, leading to \begin{equation} \frac{1}{\pi} F_4 = -\frac{1}{12} D_4 -\frac{5}{4} E_2^2 + 2E_4 + D_{1,1,2} + \frac{1}{\pi^2}F_7,\end{equation} where $F_7$ is given by \begin{equation} F_7 = \frac{1}{\tau_2^2} \int_{1234} \partial_1 G_{12} \bar\partial_1 G_{12}\partial_1 G_{13} \bar\partial_1 G_{14} G_{23} G_{24} \end{equation} as depicted in figure 7. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(180,120)(0,0) \includegraphics[scale=.75]{g4.eps} \end{picture}} \] \caption{The diagrams (i) $F_5$ and (ii) $F_6$} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(70,70)(0,0) \includegraphics[scale=.9]{g6.eps} \end{picture}} \] \caption{The diagram $F_7$} \end{center} \end{figure} We now evaluate the diagram $F_7$. However, rather than doing so directly, we find it convenient to start from the diagram $F_8$, which is given by \begin{equation} F_8= \frac{1}{\tau_2^2}\int_{12345}\bar\partial_1 \partial_2 G_{12} \partial_1 G_{15} \bar\partial_2 G_{25} \partial_1 G_{13} \bar\partial_2 G_{24} G_{35} G_{45} \end{equation} as shown in figure 8. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(90,110)(0,0) \includegraphics[scale=.65]{g7.eps} \end{picture}} \] \caption{The diagram $F_8$} \end{center} \end{figure} As before, we can evaluate it in two ways.
Using \C{eigen} for the $\partial$ and $\bar\partial$ on the same link trivially leads to \begin{equation} F_8 = \pi F_7 - \pi^3 F_6 F_6^* .\end{equation} On the other hand, evaluating it by moving the derivatives through the links leads to \begin{equation} \frac{1}{\pi^3} F_8 = - D_{1,1,2} + E_4 +\frac{1}{4} D_4 - \frac{1}{4} E_2^2.\end{equation} Hence, substituting the various relations, we get \begin{equation} \label{z1}F_5 - 2 F_6 F_6^* = \frac{1}{6} D_4 + 3 E_4 - \frac{3}{2} E_2^2.\end{equation} Now we can obtain the Poisson equation involving $D_4$. We note that \begin{equation} \label{Y}\Delta (D_4 - 3 E_2^2) = \partial_\mu \bar\partial_\mu (D_4 - 3 E_2^2) = 12(F_5 - 2 F_6 F_6^*) -12 E_2^2.\end{equation} In \C{Y} we have chosen the relative factors of $D_4$ and $E_2^2$ appropriately such that $\Delta$ acting on that combination yields the combination $F_5- 2 F_6 F_6^*$ that arises in our analysis. Thus from \C{z1} we get the Poisson equation \begin{equation} \label{e2}(\Delta -2)(D_4 - 3 E_2^2) = 36 E_4 - 24 E_2^2\end{equation} as conjectured in~\cite{D'Hoker:2015foa}. Then from \C{e1}, \C{e2} and the asymptotic expansions, we get the relation \begin{equation} D_4 = 24 D_{1,1,2} + 3 E_2^2 - 18 E_4\end{equation} between various modular graph functions with four links as conjectured in~\cite{D'Hoker:2015foa}. This strategy for obtaining the Poisson equations will be used repeatedly in our analysis. We shall often have to manipulate diagrams whose variations using \C{onevar} do not lead to particularly useful expressions. We shall instead manipulate appropriately chosen auxiliary diagrams involving more links and derivatives, which reduce to the parent diagrams trivially using \C{eigen}. These auxiliary diagrams are then evaluated independently such that they are expressible in terms of modular graph functions involving no derivatives at all. This helps us in achieving considerable simplification in obtaining the Poisson equations for the various diagrams.
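As a sanity check, the relation just obtained can be verified with exact rational arithmetic: both sides of $D_4 - 3 E_2^2 = 24 D_{1,1,2} - 18 E_4$ must have the same image under $\Delta - 2$, given \C{e1}, \C{e2} and $\Delta E_4 = 12 E_4$. A minimal Python sketch (the bookkeeping is our own, with the sources recorded in the basis $(E_4, E_2^2)$):

```python
from fractions import Fraction as F

# (Delta - 2) acting on d112*D_{1,1,2} + e4*E_4, using
# (Delta - 2) D_{1,1,2} = 9 E_4 - E_2^2 and Delta E_4 = 12 E_4;
# the result is returned as coefficients of (E_4, E_2^2).
def delta_minus_2(d112, e4):
    return (d112 * 9 + e4 * (12 - 2),  # E_4 coefficient
            d112 * (-1))               # E_2^2 coefficient

# D_4 - 3 E_2^2 = 24 D_{1,1,2} - 18 E_4: applying (Delta - 2) to the
# right hand side must reproduce the source 36 E_4 - 24 E_2^2 of the
# Poisson equation for D_4 - 3 E_2^2.
assert delta_minus_2(F(24), F(-18)) == (F(36), F(-24))
```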
\section{Poisson equations for diagrams with five links} We next consider Poisson equations for non--trivial diagrams with five links. They arise at order $D^{10}\mathcal{R}^4$ in the low momentum expansion. The relevant diagrams with five links are given in figure 9. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(180,160)(0,0) \includegraphics[scale=.8]{g8.eps} \end{picture}} \] \caption{The diagrams (i) $D_{1,1,1,1;1}$, (ii) $D_{1,1,1,2}$, (iii) $D_{1,2,2}$, (iv) $D_{1,1,3}$ and (v) $D_5$} \end{center} \end{figure} Thus they are given by the expressions \begin{eqnarray} && D_{1,1,1,1;1} = \frac{1}{\tau_2^4} \int_{1234} G_{12} G_{23} G_{34} G_{14} G_{13}, \quad D_{1,1,1,2} = \frac{1}{\tau_2^4} \int_{1234} G_{12} G_{23} G_{34}^2 G_{14}, \non \\ && D_{1,2,2} = \frac{1}{\tau_2^3}\int_{123} G_{12} G_{23}^2 G^2_{13}, \quad D_{1,1,3} = \frac{1}{\tau_2^3}\int_{123} G_{12} G_{13} G_{23}^3, \non \\ && D_5 = \frac{1}{\tau_2^2}\int_{12} G_{12}^5.\end{eqnarray} We now obtain the Poisson equations for each of these diagrams. \subsection{The Poisson equation for $D_{1,1,1,1;1}$} We first obtain the Poisson equation for $D_{1,1,1,1;1}$. Proceeding as before, we have that \begin{equation} \Delta D_{1,1,1,1;1} = 4 (F_9 + c.c.) + 4(F_{10} + 2 F_{11})\end{equation} where \begin{eqnarray} F_9 &=& \frac{1}{\tau_2^4} \int_{1234} \partial_\mu G_{12} G_{23} G_{34} G_{14} \bar\partial_\mu G_{13}, \non \\ F_{10} &=& \frac{1}{\tau_2^4} \int_{1234} \partial_\mu G_{12} \bar\partial_\mu G_{23} G_{34} G_{14} G_{13}, \non \\ F_{11} &=& \frac{1}{\tau_2^4} \int_{1234} \partial_\mu G_{12} G_{23} G_{34} \bar\partial_\mu G_{14} G_{13} \end{eqnarray} which are given in figure 10.
\begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(260,130)(0,0) \includegraphics[scale=.7]{g9.eps} \end{picture}} \] \caption{The diagrams (i) $F_9$, (ii) $F_{10}$ and (iii) $F_{11}$} \end{center} \end{figure} Each of these diagrams can be manipulated to give us expressions involving modular graph functions with no derivatives. Using \C{onevar} and \C{eigen} we get that \begin{eqnarray} F_9 +c.c. &=& -D_{1,1,1,1;1} -2D_{1,1,1,2} + 2 E_2 E_3 - 2 E_5, \non \\ F_{10} &=& D_{1,1,1,1;1}, \non \\ F_{11} &=& 2 E_5 + D_{1,1,1,2} - E_2 E_3.\end{eqnarray} Thus we get the Laplace equation \begin{equation} \Delta D_{1,1,1,1;1} = 8 E_5, \end{equation} leading to \begin{equation} \label{R1}D_{1,1,1,1;1} = \frac{2}{5}E_5 +\frac{\zeta(5)}{30}\end{equation} on using the asymptotic expansion, as derived in~\cite{D'Hoker:2015foa} using other techniques. \subsection{The Poisson equation for $D_{1,1,1,2}$} We next obtain the Poisson equation for $D_{1,1,1,2}$. Proceeding as before, we have that \begin{equation} \Delta D_{1,1,1,2} = 2[F_{12} + 3(F_{13} + c.c.) + 3F_{14} ],\end{equation} where \begin{eqnarray} F_{12} &=& \frac{1}{\tau_2^4} \int_{1234} G_{12} \partial_\mu G_{23} \bar\partial_\mu G_{23} G_{34} G_{14}, \non \\ F_{13} &=& \frac{1}{\tau_2^4} \int_{1234} G_{12} G_{23} \partial_\mu G_{23} G_{34} \bar\partial_\mu G_{14}, \non \\ F_{14} &=& \frac{1}{\tau_2^4} \int_{1234} \partial_\mu G_{12} G_{23}^2 \bar\partial_\mu G_{34} G_{14} \end{eqnarray} as given in figure 11. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(220,90)(0,0) \includegraphics[scale=.8]{g10.eps} \end{picture}} \] \caption{The diagrams (i) $F_{12}$, (ii) $F_{13}$ and (iii) $F_{14}$} \end{center} \end{figure} Manipulating these diagrams we get that \begin{eqnarray} F_{12} &=& -\frac{3}{2} D_{1,1,1,1;1} + E_2 E_3 - E_5, \non \\ F_{13}+c.c. 
&=& 3 E_5 + D_{1,1,1,1;1} -E_2 E_3, \non \\ F_{14} &=& D_{1,1,1,2}.\end{eqnarray} Thus we obtain the Poisson equation \begin{eqnarray} \label{R2}(\Delta-6) D_{1,1,1,2} &=& 3 D_{1,1,1,1;1} - 4 E_2 E_3 + 16 E_5 \non \\ &=& \frac{86}{5} E_5 - 4 E_2 E_3 +\frac{\zeta(5)}{10}\end{eqnarray} as derived in~\cite{D'Hoker:2015foa} using different techniques. \subsection{The Poisson equation for $D_{1,2,2}$} We next obtain the Poisson equation for $D_{1,2,2}$. As in the earlier analysis, we have that \begin{equation} \Delta D_{1,2,2} = 4[F_{15} + (F_{16} +c.c.) + 2 F_{17}]\end{equation} where \begin{eqnarray} F_{15} &=& \frac{1}{\tau_2^3}\int_{123} \partial_\mu G_{12} \bar\partial_\mu G_{12} G_{23}^2 G_{13}, \non \\ F_{16} &=& \frac{1}{\tau_2^3}\int_{123} G_{12} \partial_\mu G_{12} G_{23}^2 \bar\partial_\mu G_{13}, \non \\ F_{17} &=& \frac{1}{\tau_2^3}\int_{123} G_{12} \partial_\mu G_{12} G_{23} \bar\partial_\mu G_{23} G_{13} \end{eqnarray} as given in figure 12. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(250,90)(0,0) \includegraphics[scale=.65]{g11.eps} \end{picture}} \] \caption{The diagrams (i) $F_{15}$, (ii) $F_{16}$ and (iii) $F_{17}$} \end{center} \end{figure} We now manipulate each of these diagrams, leading to \begin{eqnarray} \label{E}F_{15} &=& E_2 E_3 - D_{1,2,2} -\frac{1}{\pi} F_{18} +\frac{1}{\pi} (F_{19} +c.c.), \non \\ F_{16} +c.c. &=& D_{1,2,2} + 2 D_{1,1,1,2} - \frac{1}{\pi} (F_{19} + c.c.), \non \\ F_{17} &=& -\frac{3}{4} D_{1,2,2} - D_{1,1,1,2} + E_5 +\frac{1}{\pi} F_{18} +\frac{1}{2\pi} (F_{19} +c.c.)+\frac{1}{\pi} (F_{20}+c.c.),\end{eqnarray} where $F_{18}$, $F_{19}$ and $F_{20}$ are the only diagrams that involve two derivatives.
They are given by \begin{eqnarray} F_{18} &=& \frac{1}{\tau_2^3}\int_{1234} G_{12} \partial_3 G_{23} G_{13}^2 \bar\partial_3 G_{34} G_{14}, \non \\ F_{19} &=& \frac{1}{\tau_2^3}\int_{1234} G_{12} G_{23}^2 G_{34} \partial_1 G_{14} \bar\partial_2 G_{24}, \non \\ F_{20} &=& \frac{1}{\tau_2^4}\int_{12345} G_{12} G_{23} G_{34} \bar\partial_5 G_{45} \partial_5 G_{15} G_{25} \end{eqnarray} as given in figure 13. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(280,80)(0,0) \includegraphics[scale=.7]{g13.eps} \end{picture}} \] \caption{The diagrams (i) $F_{18}$, (ii) $F_{19}$ and (iii) $F_{20}$} \end{center} \end{figure} We should mention that to obtain the expression for $F_{17}$ in \C{E}, we find it convenient to start instead from the diagram $F_{21}$ given in figure 14. While this yields $\pi F_{17}$ trivially on using \C{eigen}, manipulating it by moving the derivatives along the various links appropriately leads to the expression in \C{E}. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(110,60)(0,0) \includegraphics[scale=.75]{g12.eps} \end{picture}} \] \caption{The diagram $F_{21}$} \end{center} \end{figure} Thus adding the various contributions we have that \begin{equation} \Delta D_{1,2,2} = -6 D_{1,2,2} + 8 E_5 + 4 E_2 E_3 + \frac{4}{\pi} F_{18} + \frac{4}{\pi}(F_{19} + c.c.) + \frac{8}{\pi}(F_{20} +c.c.).\end{equation} Now let us consider the diagram $F_{18}$. To evaluate it, we start with the diagram $F_{22}$ instead, which is defined by \begin{equation} F_{22} = \frac{1}{\tau_2^3} \int_{12345} \partial_1 G_{12} G_{24} G_{45} \bar\partial_1 G_{15} G_{13} \partial_1 G_{13} \bar\partial_4 G_{34} \end{equation} as given in figure 15.
Again this trivially gives us \begin{equation} F_{22} = \frac{\pi}{2} F_{18} - \frac{\pi^2}{2} E_2 E_3.\end{equation} \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(200,110)(0,0) \includegraphics[scale=.95]{g14.eps} \end{picture}} \] \caption{The diagrams (i) $F_{22}$ and (ii) $F_{23}$} \end{center} \end{figure} Next we calculate $F_{22}$ differently. To do so, we find it convenient to start with the diagram $F_{23}$ defined by \begin{equation} F_{23} = \frac{1}{\tau_2^3}\int_{123456} \bar\partial_1 \partial_2 G_{12} \partial_1 G_{13} G_{35} G_{56} \bar\partial_2 G_{26} \partial_1 G_{14} G_{24} \bar\partial_5 G_{45}\end{equation} as given in figure 15. Evaluating $F_{23}$ by using \C{eigen} for the derivatives on the same link we trivially get that \begin{equation} F_{23} = \pi F_{22} + \pi^3 E_5 -\pi^3 D_{1,1,1,1;1} +\pi^2 F_{20}^*. \end{equation} Also evaluating it by passing the derivatives through the various links appropriately gives us that \begin{equation} F_{23} = \frac{\pi^3}{2} D_{1,2,2}+2\pi^2 F_{20}^* +\pi^2 F_{20} -\frac{\pi^2}{2} (F_{19} +c.c.).\end{equation} Substituting the various expressions, we obtain \begin{equation} \label{18}F_{18} = \pi E_2 E_3 +2 \pi D_{1,1,1,1;1} -2 \pi E_5 +\pi D_{1,2,2} -(F_{19}+c.c.)+2(F_{20}+c.c.).\end{equation} This gives us the equation \begin{equation} \Delta D_{1,2,2} = -2 D_{1,2,2} + 8 D_{1,1,1,1;1} + 8 E_2 E_3 +\frac{16}{\pi}(F_{20}+c.c.)\end{equation} where $F_{20}$ is the only term that has two derivatives. However that can be simplified further using the relation \begin{equation} \label{20}\frac{1}{\pi}(F_{20} +c.c.) = D_{1,1,1,1;1} +D_{1,1,1,2} + E_5 - E_2 E_3\end{equation} to get the Poisson equation \begin{equation} \label{R3}(\Delta +2)D_{1,2,2} = 24 D_{1,1,1,1;1} +16 D_{1,1,1,2} + 16 E_5 - 8 E_2 E_3\end{equation} involving only modular graph functions with no derivatives. 
Thus using \C{R1}, \C{R2} and \C{R3} and the asymptotic expansions, we get the relation \begin{equation} \label{x1}10 D_{1,2,2} = 20 D_{1,1,1,2} -4 E_5 + 3 \zeta(5)\end{equation} as conjectured in~\cite{D'Hoker:2015foa}. Note that \C{R3} can also be written as \begin{equation} (\Delta -6)D_{1,2,2} = \frac{144}{5} E_5 -8 E_2 E_3 -\frac{8}{5} \zeta(5)\end{equation} which is the original form of the conjectured equation in~\cite{D'Hoker:2015foa}. \subsection{The Poisson equation for $D_{1,1,3}$} We next obtain the Poisson equation for $D_{1,1,3}$. To start with, we consider the diagrams $F_{24}$, $F_{25}$ and $F_{26}$ given in figure 16, which arise in the Poisson equation for $D_{1,1,3}$. Clearly $F_{24}$ and $F_{25}$ arise when $D_{1,1,3}$ is varied using \C{onevar}. We shall see the role $F_{26}$ plays in the Poisson equation later. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(250,90)(0,0) \includegraphics[scale=.7]{g15.eps} \end{picture}} \] \caption{The diagrams (i) $F_{24}$, (ii) $F_{25}$ and (iii) $F_{26}$} \end{center} \end{figure} To obtain the Poisson equation for $D_{1,1,3}$, we first consider the diagram $F_{27}$ given by \begin{equation} F_{27} = \frac{1}{\tau_2^3} \int_{1234} G_{13} \partial_\mu G_{13} \bar\partial_1 \partial_2 G_{12} G_{24} G_{34} \bar\partial_\mu G_{23}.
\end{equation} Again evaluating it trivially using \C{eigen}, we get that \begin{equation} F_{27} = \pi F_{24} - \pi F_6 F_{26}^*,\end{equation} while evaluating it by moving the derivatives along the links we get that \begin{eqnarray} \label{R4}\frac{1}{\pi}F_{27} &=& F_6^* F_{26} +\frac{1}{2} D_{1,1,1,1;1} + 2 D_{1,1,3} - 2E_2 E_3 - 2 E_2 D_3 + 2D_{1,1,1,2} \non \\ &&- \frac{2}{\pi} F_{18} +\frac{1}{\pi}(F_{19} + c.c.)-\frac{2}{\pi}(F_{20} +c.c.).\end{eqnarray} \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(250,120)(0,0) \includegraphics[scale=.65]{g16.eps} \end{picture}} \] \caption{The diagrams (i) $F_{27}$ and (ii) $F_{30}$} \end{center} \end{figure} In obtaining \C{R4} at an intermediate step, it is necessary to evaluate the diagram $F_{28}$ defined by \begin{equation} F_{28} = \frac{1}{\tau_2^3} \int_{12345} \partial_1 G_{12} G_{25} \partial_1 G_{13} G_{35} \bar\partial_1 G_{15} \bar\partial_1 G_{14} G_{45} \end{equation} as given in figure 18. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(140,80)(0,0) \includegraphics[scale=.6]{g17.eps} \end{picture}} \] \caption{The diagram $F_{28}$} \end{center} \end{figure} To do so, we start with the diagram $F_{29}$ instead, defined by \begin{equation} F_{29} = \frac{1}{\tau_2^3}\int_{123456} \partial_1 G_{12} G_{23} \partial_1 G_{16} G_{36} \bar\partial_5 G_{35} \bar\partial_5 G_{45} G_{34} \bar\partial_1 \partial_5 G_{15}\end{equation} as given in figure 19. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(140,90)(0,0) \includegraphics[scale=.7]{g18.eps} \end{picture}} \] \caption{The diagram $F_{29}$} \end{center} \end{figure} Using \C{eigen} and evaluating it trivially gives \begin{equation} F_{29} = \pi F_{28} -\pi^3 F_6^* F_{26},\end{equation} while it also gives \begin{equation} F_{29} = -2\pi^2 F_{20} -\pi^2 F_{19} +\pi^3 D_{1,2,2}\end{equation} on moving the derivatives along the various links. Thus using \C{R4} we get that \begin{eqnarray} \label{f1}F_{24} - (F_6 F_{26}^* + c.c.)
&=& \frac{1}{2} D_{1,1,1,1;1} + 2 D_{1,1,3} - 2E_2 E_3 - 2 E_2 D_3 + 2D_{1,1,1,2} \non \\ &&- \frac{2}{\pi} F_{18} +\frac{1}{\pi}(F_{19} + c.c.)-\frac{2}{\pi}(F_{20} +c.c.).\end{eqnarray} We next calculate $F_{25}$. To do so, we find it convenient to consider the diagram $F_{30}$ defined by \begin{equation} F_{30} = \frac{1}{\tau_2^3} \int_{1234} G_{13} \partial_\mu G_{13} \bar\partial_1 \partial_2 G_{12} \bar\partial_\mu G_{24} G_{34} G_{23} \end{equation} as given in figure 17. Using \C{eigen} it trivially yields \begin{equation} \label{r5}F_{30} = \pi F_{25} - \pi F_6 F_{26}^*,\end{equation} while evaluating it by moving the derivatives along the links gives us \begin{eqnarray} \label{R5}\frac{1}{\pi} F_{30} &=& F_6 F_{26}^* -\frac{1}{6} D_{1,1,3} -\frac{1}{2} E_2 E_3 +\frac{1}{2} E_2 D_3 -\frac{3}{2} D_{1,1,1,2} - \frac{1}{2} D_{1,2,2} \non \\ &&-2 D_{1,1,1,1;1} + 2 E_5 +\frac{1}{\pi} F_{18} -\frac{1}{2\pi} (F_{19} - c.c.) + \frac{2}{\pi}(F_{20} + c.c.).\end{eqnarray} In obtaining \C{R5}, at an intermediate step we evaluated the diagram $F_{31}$ defined by \begin{equation} F_{31} = \frac{1}{\tau_2^3}\int_{12345} G_{12} G_{13} \partial_4 G_{34} \partial_4 G_{14} \bar\partial_4 G_{14} G_{25} \bar\partial_4 G_{45}\end{equation} as given in figure 20. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(120,70)(0,0) \includegraphics[scale=.8]{g19.eps} \end{picture}} \] \caption{The diagram $F_{31}$} \end{center} \end{figure} To do so, we start with the diagram $F_{32}$ instead, defined by \begin{equation} F_{32} = \frac{1}{\tau_2^3} \int_{123456} G_{12} G_{13} \partial_4 G_{34} \partial_4 G_{14} \bar\partial_5 G_{15} G_{26} \bar\partial_5 G_{56} \bar\partial_4 \partial_5 G_{45}\end{equation} as given in figure 21.
\begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(120,100)(0,0) \includegraphics[scale=.7]{g20.eps} \end{picture}} \] \caption{The diagram $F_{32}$} \end{center} \end{figure} Using \C{eigen} and evaluating it trivially gives us \begin{equation} \label{31}F_{32} = \pi F_{31} - \pi^3 F_6 F_{26}^*,\end{equation} while it also gives us \begin{equation} \label{32}\frac{1}{\pi^3}F_{32} = - D_{1,1,1,2} + E_5 + \frac{1}{2} E_2 D_3 - D_{1,1,1,1;1} -\frac{1}{2\pi} F_{19}+\frac{1}{\pi} F_{20}^*\end{equation} on moving the derivatives along the various links. Thus from \C{r5} and \C{R5} we get that \begin{eqnarray} \label{f2}F_{25} -2 F_6 F_{26}^* + c.c. &=& -\frac{1}{3} D_{1,1,3} - 3 D_{1,1,1,2} - D_{1,2,2} - 4 D_{1,1,1,1;1} + 4 E_5 \non \\ &&- E_2 E_3 + E_2 D_3 +\frac{2}{\pi} F_{18} +\frac{4}{\pi}(F_{20} +c.c.).\end{eqnarray} Now \begin{eqnarray} \label{M}&&\Delta (D_{1,1,3} - 3 E_2 E_3) = \partial_\mu \bar\partial_\mu (D_{1,1,3} - 3 E_2 E_3) \non \\ &&= 6\Big[ F_{24} - (F_6 F_{26}^* + c.c.)\Big] + 6(F_{25} - 2 F_6 F_{26}^*+c.c.) + 2 D_{1,1,3} - 24 E_2 E_3\end{eqnarray} where we have chosen the relative coefficients between $D_{1,1,3}$ and $E_2 E_3$ such that the action of $\Delta$ on it precisely involves the combinations $F_{24} - (F_6 F_{26}^* + c.c.)$ and $F_{25} - 2 F_6 F_{26}^* + c.c.$ which are given by \C{f1} and \C{f2} respectively. We now simplify this equation by using \C{f1} and \C{f2}. Among the terms involving two derivatives in the last line of \C{M}, the contribution from $F_{18}$ cancels, while for $F_{20}+c.c.$ we use the relation \C{20}. Also for $F_{19}+c.c.$ we use the relation \begin{equation} \label{19}\frac{1}{\pi}(F_{19}+c.c.) = E_2 E_3 + E_2 D_3 - D_{1,1,1,2} + D_{1,2,2}- D_{1,1,3}.\end{equation} Finally we get the Poisson equation \begin{equation} \label{113}(\Delta -6)(D_{1,1,3} - 3 E_2 E_3) = 36 E_5 - 9 D_{1,1,1,1;1} - 30 E_2 E_3\end{equation} for the diagram $D_{1,1,3}$. 
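The cancellations just described can be verified mechanically. In the Python sketch below (exact rational arithmetic; the dictionary keys are our own shorthand for the graph functions, with \texttt{F18p} $= F_{18}/\pi$, \texttt{F19c} $= (F_{19}+c.c.)/\pi$ and \texttt{F20c} $= (F_{20}+c.c.)/\pi$), we assemble the right hand side of \C{M} from \C{f1} and \C{f2}, substitute \C{19} and \C{20}, and recover the source of \C{113}:

```python
from fractions import Fraction as F

def lin(**c):
    """A sparse linear combination of graph functions, e.g. lin(E5=4)."""
    return {k: F(v) for k, v in c.items()}

def add(*vs):
    out = {}
    for v in vs:
        for k, c in v.items():
            out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c != 0}

def mul(a, v):
    return {k: a * c for k, c in v.items()}

def subst(v, key, repl):
    """Replace the graph function `key` by the combination `repl`."""
    v = dict(v)
    c = v.pop(key, 0)
    return add(v, mul(c, repl))

# Right hand sides of \C{f1} and \C{f2}
f1 = lin(D11111=F(1, 2), D113=2, E2E3=-2, E2D3=-2, D1112=2,
         F18p=-2, F19c=1, F20c=-2)
f2 = lin(D113=F(-1, 3), D1112=-3, D122=-1, D11111=-4, E5=4,
         E2E3=-1, E2D3=1, F18p=2, F20c=4)

# \C{M}: Delta(D_{1,1,3} - 3 E_2 E_3) = 6*f1 + 6*f2 + 2 D_{1,1,3} - 24 E_2 E_3
lap = add(mul(6, f1), mul(6, f2), lin(D113=2, E2E3=-24))
assert 'F18p' not in lap  # the F_18 contributions cancel

lap = subst(lap, 'F19c', lin(E2E3=1, E2D3=1, D1112=-1, D122=1, D113=-1))  # \C{19}
lap = subst(lap, 'F20c', lin(D11111=1, D1112=1, E5=1, E2E3=-1))           # \C{20}

# (Delta - 6)(D_{1,1,3} - 3 E_2 E_3) reproduces the source of \C{113}
assert add(lap, lin(D113=-6, E2E3=18)) == lin(E5=36, D11111=-9, E2E3=-30)
```

The two assertions reproduce the cancellation of $F_{18}$ and the source $36 E_5 - 9 D_{1,1,1,1;1} - 30 E_2 E_3$ of \C{113}.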
Thus from the above equation, \C{R1} and \C{R2} and the asymptotic expansions, we have that \begin{equation} \label{x2} 40 D_{1,1,3} = 300 D_{1,1,1,2} + 120 E_2 E_3 -276 E_5 + 7\zeta(5)\end{equation} as conjectured in~\cite{D'Hoker:2015foa}. We can also write \C{113} as \begin{equation} (\Delta -6)(D_{1,1,3} - 3 E_2 E_3) = \frac{162}{5} E_5 - 30 E_2 E_3 - \frac{3}{10}\zeta(5).\end{equation} \subsection{The Poisson equation for $D_{5}$} Finally we obtain the Poisson equation for $D_5$. We list the diagrams $F_{33}$ and $F_{34}$ given in figure 22, which will be relevant for our purposes. While $F_{33}$ arises simply on varying $D_5$ using \C{onevar}, we shall see that $F_{34}$ also arises in the Poisson equation. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(180,110)(0,0) \includegraphics[scale=.7]{g21.eps} \end{picture}} \] \caption{The diagrams (i) $F_{33}$ and (ii) $F_{34}$} \end{center} \end{figure} We have that \begin{equation} \pi^2 F_{33} = 3 F_{35}+\frac{\pi^2}{4}D_5,\end{equation} where the diagram $F_{35}$ is defined by \begin{equation} F_{35} = \frac{1}{\tau_2^2} \int_{1234} \partial_2 G_{12} G_{23} G_{13}^2 \partial_3 G_{13} \bar\partial_4 G_{14} \bar\partial_4 G_{34}\end{equation} as given in figure 23. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(120,70)(0,0) \includegraphics[scale=.8]{g22.eps} \end{picture}} \] \caption{The diagram $F_{35}$} \end{center} \end{figure} To evaluate $F_{35}$ we find it convenient to start with the diagram $F_{36}$ defined by \begin{equation} F_{36} = \frac{1}{\tau_2^2} \int_{12345} \partial_2 G_{12} G_{23} \partial_3 G_{13} G_{35}^2 \bar\partial_4 G_{34} \bar\partial_4 G_{45} \bar\partial_1 \partial_5 G_{15}\end{equation} as given in figure 24. 
\begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(120,90)(0,0) \includegraphics[scale=.65]{g23.eps} \end{picture}} \] \caption{The diagram $F_{36}$} \end{center} \end{figure} On using \C{eigen}, this trivially yields \begin{equation} F_{36} = \pi F_{35} - \pi^3 F_6 F_{34}^*,\end{equation} while it also yields \begin{equation} \frac{1}{\pi^3}F_{36}= \frac{1}{12} D_5 - \frac{1}{3} E_2 D_3 - D_{1,2,2} - \frac{2}{3} D_{1,1,3} +\frac{1}{\pi} (F_{18} + F_{19}^* +2 F_{20}^*) +\frac{1}{\pi^2}(2 F_{31}^* - F_{37})\end{equation} on moving the derivatives through the links. Here the diagram $F_{37}$ is defined by \begin{equation} F_{37} = \frac{1}{\tau_2^2} \int_{1234} \partial_2 G_{12} \partial_2 G_{24} \bar\partial_2 G_{24} \bar\partial_2 G_{23} G_{34} G_{14}^2\end{equation} as given in figure 25. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(100,60)(0,0) \includegraphics[scale=.75]{g24.eps} \end{picture}} \] \caption{The diagram $F_{37}$} \end{center} \end{figure} To evaluate $F_{37}$ we start with the diagram $F_{38}$ instead defined by \begin{equation} F_{38} = \frac{1}{\tau_2^2} \int_{12345} G_{15}^2 \partial_2 G_{12} \partial_2 G_{25} \bar\partial_3 G_{35} \bar\partial_3 G_{34} G_{45} \bar\partial_2 \partial_3 G_{23}\end{equation} as given in figure 26. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(100,90)(0,0) \includegraphics[scale=.65]{g25.eps} \end{picture}} \] \caption{The diagram $F_{38}$} \end{center} \end{figure} Using \C{eigen} this trivially evaluates to \begin{equation} F_{38} = \pi F_{37} -\pi^3 F_6^* F_{34},\end{equation} while it also yields \begin{equation} \frac{1}{\pi^3} F_{38} = \frac{1}{6} D_5 -\frac{2}{3} E_2 D_3 - \frac{1}{3} D_{1,1,3} + E_2 E_3 - \frac{1}{2} D_{1,2,2} + D_{1,1,1,2}\end{equation} on moving the derivatives through the various links. Thus substituting the various expressions, we get that \begin{eqnarray} \frac{1}{3} F_{33} - (F_6 F_{34}^* + c.c.) 
&=& \frac{4}{3} E_2 D_3 -\frac{1}{2} D_{1,2,2} - \frac{1}{3} D_{1,1,3} -E_2 E_3 - 3 D_{1,1,1,2} \non \\ &&+ 2 E_5 -2 D_{1,1,1,1;1}+\frac{1}{\pi} F_{18}+\frac{2}{\pi}(F_{20} + c.c.),\end{eqnarray} where we have substituted the expression for $F_{31}$ obtained from \C{31} and \C{32}. We have also used $\partial_\mu D_3 = \partial_\mu E_3$ which follows trivially from \C{1}. Finally using \C{18}, \C{20} and \C{19} we get that \begin{eqnarray} F_{33} - 3 (F_6 F_{34}^* +c.c.) &=& E_2 D_3 -\frac{3}{2} D_{1,2,2} + 2 D_{1,1,3} -15 E_2 E_3 + 6 D_{1,1,1,2} \non \\ &&+ 12 (E_5 + D_{1,1,1,1;1}).\end{eqnarray} Thus we obtain the Poisson equation \begin{eqnarray} \Delta (D_5 - 10 E_2 D_3) &=& \partial_\mu \bar\partial_\mu (D_5 - 10 E_2 D_3) \non \\ &=& 20[F_{33} - 3 (F_6 F_{34}^* +c.c.)] - 20 E_2 D_3 - 60 E_2 E_3 \non \\ &=& -30 D_{1,2,2} + 40 D_{1,1,3} +120 D_{1,1,1,2} + 240(E_5 + D_{1,1,1,1;1})-360 E_2 E_3.\non \\\end{eqnarray} Once again we have chosen the relative coefficient between $D_5$ and $E_2 D_3$ such that the action of $\Delta$ produces $F_{33} - 3(F_6F_{34}^*+c.c.)$, which we have determined separately. Thus using \C{R1}, \C{x1} and \C{x2} we get the desired Poisson equation \begin{equation} \label{5}\Delta (D_5 - 10 E_2 D_3) = 360 D_{1,1,1,2} + 72 E_5 -240 E_2 E_3 + 6\zeta(5)\end{equation} which using \C{R2} yields \begin{equation} \Delta(D_5 -60 D_{1,1,1,2} + 48 E_5 - 10 E_2 D_3)=0,\end{equation} and hence \begin{equation} D_5 =60 D_{1,1,1,2} - 48 E_5 + 10 E_2 D_3 + 16 \zeta(5)\end{equation} using the asymptotic expansions. This relation between modular graph functions had been conjectured in~\cite{D'Hoker:2015foa}. In fact, \C{5} can also be written as \begin{equation} (\Delta -6)(D_5 - 10 E_2 D_3)= 360 E_5 - 240 E_2 E_3 - 90 \zeta(5)\end{equation} which was the form of the equation originally conjectured in~\cite{D'Hoker:2015foa}. Thus we have obtained several relations between modular graph functions with five links.
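As a consistency check, the linear algebra in the last steps can be verified mechanically. The following sympy sketch treats the graph functions as formal symbols; since \C{R2} is quoted from an earlier section and is not reproduced here, we use $(\Delta-6)D_{1,1,1,2} = \tfrac{86}{5}E_5 - 4E_2E_3 + \tfrac{1}{10}\zeta(5)$, an assumed form inferred from the consistency of the equations above, together with the standard eigenvalue equation $\Delta E_s = s(s-1)E_s$.

```python
import sympy as sp

# Modular graph functions treated as formal symbols.
D1112, E5, E2E3, E2D3, zeta5 = sp.symbols('D1112 E5 E2E3 E2D3 zeta5')

# Eq. (5): Delta(D_5 - 10 E_2 D_3) expressed in this basis.
lap_combo = 360*D1112 + 72*E5 - 240*E2E3 + 6*zeta5

# (R2) in the form inferred here (assumption, not quoted from the text):
# (Delta - 6) D_{1,1,1,2} = (86/5) E_5 - 4 E_2 E_3 + zeta(5)/10.
lap_D1112 = 6*D1112 + sp.Rational(86, 5)*E5 - 4*E2E3 + zeta5/10
lap_E5 = 20*E5  # Delta E_s = s(s-1) E_s for s = 5

# Check 1: Delta(D_5 - 60 D_{1,1,1,2} + 48 E_5 - 10 E_2 D_3) = 0.
assert sp.expand(lap_combo - 60*lap_D1112 + 48*lap_E5) == 0

# Check 2: substituting D_5 = 60 D_{1,1,1,2} - 48 E_5 + 10 E_2 D_3 + 16 zeta(5),
# eq. (5) is equivalent to (Delta-6)(D_5 - 10 E_2 D_3) = 360 E_5 - 240 E_2 E_3 - 90 zeta(5).
D5 = 60*D1112 - 48*E5 + 10*E2D3 + 16*zeta5
assert sp.expand(lap_combo - 6*(D5 - 10*E2D3)
                 - (360*E5 - 240*E2E3 - 90*zeta5)) == 0
print('five-link checks pass')
```

Both assertions confirm that the coefficients in the equations above are mutually consistent.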
Note that they also involve $E_5$, which arises in the five graviton amplitude. \section{Poisson equations for some diagrams with six links} We next consider Poisson equations for certain diagrams with six links having four and five vertices. These arise in the low energy expansion of the four and five graviton amplitudes at orders $D^{12} \mathcal{R}^4$ and $D^{10}\mathcal{R}^5$ respectively. In either case, they are not the only modular graph functions that arise for these amplitudes at these orders in the derivative expansion. However, we shall see that the Poisson equations for these diagrams provide enough information for us to obtain a non--trivial relation among graphs with six links. The relevant diagrams are given in figure 27. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(280,100)(0,0) \includegraphics[scale=.65]{g26.eps} \end{picture}} \] \caption{The diagrams (i) $D_{1,1,1;1,1,1}$, (ii) $C_{2,2,2}$ and (iii) $C_{1,2,3}$} \end{center} \end{figure} They are given by the expressions \begin{eqnarray} && D_{1,1,1;1,1,1} = \frac{1}{\tau_2^4} \int_{1234} G_{12} G_{23} G_{13} G_{14} G_{24} G_{34}, \quad C_{2,2,2} = \frac{1}{\tau_2^5}\int_{12345} G_{12} G_{23} G_{34} G_{14} G_{25} G_{45}, \non \\ && C_{1,2,3} = \frac{1}{\tau_2^5} \int_{12345} G_{12} G_{23} G_{13} G_{24} G_{45} G_{35}.\end{eqnarray} For the five point graphs, we use the terminology of~\cite{D'Hoker:2015foa}. \subsection{The Poisson equation for $D_{1,1,1;1,1,1}$} The Poisson equation for the Mercedes graph $D_{1,1,1;1,1,1}$ has been obtained in~\cite{Basu:2015ayg} and is given by \begin{equation} \label{61}(\Delta +6) D_{1,1,1;1,1,1} = 48 C_{1,2,3} + 12(E_6 -E_3^2). \end{equation} \subsection{The Poisson equation for $C_{2,2,2}$} We now obtain the Poisson equation for $C_{2,2,2}$. 
Proceeding as earlier, we get that \begin{equation} \Delta C_{2,2,2} = 6 F_{39} + 24 F_{40},\end{equation} where \begin{eqnarray} F_{39} &=& \frac{1}{\tau_2^5}\int_{12345} \bar\partial_\mu G_{12} G_{23} G_{34} \partial_\mu G_{14} G_{25} G_{45}, \non \\ F_{40} &=& \frac{1}{\tau_2^5}\int_{12345} G_{12} G_{23} \bar\partial_\mu G_{34} \partial_\mu G_{14} G_{25} G_{45}\end{eqnarray} as given in figure 28. \begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(180,100)(0,0) \includegraphics[scale=.65]{g27.eps} \end{picture}} \] \caption{The diagrams (i) $F_{39}$ and (ii) $F_{40}$} \end{center} \end{figure} Now using \C{onevar} and \C{eigen} we get that \begin{eqnarray} F_{39} &=& C_{2,2,2} , \non \\ F_{40} &=& -C_{1,2,3} + \frac{1}{2}(E_3^2- E_6)\end{eqnarray} leading to \begin{equation} \label{62}(\Delta -6) C_{2,2,2} = -24 C_{1,2,3} + 12 (E_3^2 - E_6)\end{equation} as obtained in~\cite{D'Hoker:2015foa} using other techniques. \subsection{The Poisson equation for $C_{1,2,3}$} Next we obtain the Poisson equation for $C_{1,2,3}$. Like before, we have that \begin{equation} \Delta C_{1,2,3} = 2 F_{41} + 6 F_{42} + 2 (F_{43} +c.c.) + 3 (F_{44} + c.c.) + 6(F_{45}+c.c.),\end{equation} where \begin{eqnarray} F_{41} &=& \frac{1}{\tau_2^5} \int_{12345} \partial_\mu G_{12} G_{23} \bar\partial_\mu G_{13} G_{24} G_{45} G_{35}, \non \\ F_{42}& =& \frac{1}{\tau_2^5} \int_{12345} G_{12} G_{23} G_{13} \partial_\mu G_{24} G_{45} \bar\partial_\mu G_{35} , \non \\ F_{43}& =& \frac{1}{\tau_2^5} \int_{12345} \partial_\mu G_{12} \bar\partial_\mu G_{23} G_{13} G_{24} G_{45} G_{35} , \non \\ F_{44} &=& \frac{1}{\tau_2^5} \int_{12345} G_{12} \partial_\mu G_{23} G_{13} G_{24} \bar\partial_\mu G_{45} G_{35} , \non \\ F_{45} &=& \frac{1}{\tau_2^5} \int_{12345} \partial_\mu G_{12} G_{23} G_{13} \bar\partial_\mu G_{24} G_{45} G_{35} \end{eqnarray} as given in figure 29.
\begin{figure}[ht] \begin{center} \[ \mbox{\begin{picture}(360,100)(0,0) \includegraphics[scale=.6]{g28.eps} \end{picture}} \] \caption{The diagrams (i) $F_{41}$, (ii) $F_{42}$, (iii) $F_{43}$, (iv) $F_{44}$ and (v) $F_{45}$} \end{center} \end{figure} Again using \C{onevar} and \C{eigen}, we get that \begin{eqnarray} F_{41} &=& F_{42} = C_{1,2,3}, \non \\ F_{43} &=& - C_{2,2,2} +\frac{1}{2}(E_3^2 - E_6), \non \\ F_{44} + 2 F_{45} &=& \frac{1}{2} C_{2,2,2} + 3 E_6 - E_3^2, \end{eqnarray} giving us \begin{equation} \label{63} (\Delta -8) C_{1,2,3} = - C_{2,2,2} -4 (E_3^2 - 4E_6)\end{equation} as obtained in~\cite{D'Hoker:2015foa} using other techniques. Thus from \C{61}, \C{62} and \C{63} we have that \begin{equation} (\Delta +6) (3 D_{1,1,1;1,1,1} -12 C_{1,2,3} - C_{2,2,2} +4 E_6)=0\end{equation} leading to the relation among the modular graph functions with six links\footnote{The relation also involves $E_6$ which arises in the six graviton amplitude.} \begin{equation} 3 D_{1,1,1;1,1,1} =12 C_{1,2,3} + C_{2,2,2} -4 E_6\end{equation} on using the asymptotic expansions. Thus our analysis proves various Poisson equations satisfied by the modular graph functions, which also lead to several relations among them. We have used various auxiliary graphs to simplify our calculations, and also simplified several expressions by moving the derivatives through the links appropriately. Clearly this procedure is not unique and one can proceed in different ways to obtain the relations. It would be interesting to generalize the analysis to higher orders in the derivative expansion and also for modular graph functions involving derivatives of Green functions. It would also be useful to obtain relations involving higher genus string amplitudes. \section{Discussion} Our analysis of obtaining Poisson equations for modular graph functions fits into the general scheme of calculating perturbative string amplitudes, and generalizes various features of the amplitudes at tree level.
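As an aside, the six-link bookkeeping of the previous section lends itself to mechanical verification: the relation follows from \C{61}, \C{62}, \C{63} and $\Delta E_6 = 30 E_6$ by linear algebra alone. A sympy sketch, treating the graph functions as formal symbols:

```python
import sympy as sp

D, C123, C222, E6, E3sq = sp.symbols('D_Mercedes C_123 C_222 E_6 E3sq')

# The Laplace equations (61), (62), (63) of the text, solved for Delta(...),
# plus Delta E_s = s(s-1) E_s for s = 6.
lap = {
    D:    -6*D + 48*C123 + 12*(E6 - E3sq),     # (Delta+6) D_{1,1,1;1,1,1}
    C222:  6*C222 - 24*C123 + 12*(E3sq - E6),  # (Delta-6) C_{2,2,2}
    C123:  8*C123 - C222 - 4*(E3sq - 4*E6),    # (Delta-8) C_{1,2,3}
    E6:   30*E6,
}

def Delta(expr):
    """Apply the Laplacian, extended linearly over the basis above.

    (E_3^2 never appears in the combination below, so its Laplacian,
    which is not an eigenvalue equation, is never needed.)"""
    return sp.expand(sum(expr.coeff(b)*v for b, v in lap.items()))

combo = 3*D - 12*C123 - C222 + 4*E6
print(sp.expand(Delta(combo) + 6*combo))  # 0: (Delta+6) annihilates the combination
```

The printed $0$ confirms that $(\Delta+6)(3 D_{1,1,1;1,1,1} -12 C_{1,2,3} - C_{2,2,2} +4 E_6)=0$, so that the asymptotic expansions fix the relation itself.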
The low momentum expansion of the tree level amplitudes yields various multi--zeta values on performing the integrals over the insertion points of the integrated vertex operators. At a fixed order in the momentum expansion, this structure simplifies on using various identities between the multi--zeta values, which follow from a detailed analysis of the number of basis elements of fixed transcendentality. Thus it is essential to understand the basis elements in detail to analyze tree level amplitudes. Clearly it is important to generalize this structure to higher loops in string perturbation theory, which is something this work addresses. At a fixed order in the momentum expansion of the one loop amplitude, we see that there are several topologically distinct graphs given by the various ways the Green functions connect the vertices. However, the relations among the modular graph functions show that they are not all independent and the number of basis elements is far less than the number of topologically distinct modular graph functions. This analysis, as well as its generalization at higher loops, is crucial in simplifying the structure of the integrands of the loop amplitudes, which can then be integrated over the moduli space on using Poisson equations. It is by itself interesting mathematically to obtain relations between graphs at various loops. In the Poisson equations for the various graphs, we see that the number of links is conserved while the number of vertices is not. The source terms involve graphs that have been obtained at lower orders in the momentum expansion. This feature must survive to all orders, allowing us to solve the equations recursively. Note that we have considered graphs which involve only the scalar Green function as links, and not their derivatives. Graphs that involve derivatives arise in the low momentum expansion of higher point multi--graviton amplitudes.
It would be interesting to see whether such graphs can be expressed in terms of those which involve only scalar Green functions and not their derivatives. If this is not always the case, they yield new elements in the basis of modular graph functions, and analyzing them is essential to obtain a complete basis of independent graphs. Generalizing this analysis to higher loops is crucial for obtaining results in perturbative string theory. Finally, the integrands of the amplitudes we have considered simplify because of maximal supersymmetry. In order to calculate string amplitudes with reduced supersymmetry, one has to generalize the techniques we have discussed to obtain Poisson equations for the various graphs. It would be important to analyze the basis elements that arise in such cases. Clearly, the analysis gets more involved as the amount of supersymmetry is reduced. Thus, both for calculating perturbative string amplitudes and from the point of view of mathematics, it is important to obtain relations between modular graphs at various loops, and at present very little is understood. Hence a better understanding of this structure is desirable.
\section{Introduction} It is well known that a local Lagrangian description for an electric charge in the presence of fields sourced by an electric charge distribution requires the introduction of potentials on the configuration space, introducing unphysical, or gauge, degrees of freedom in the field theory. If the field is sourced by a magnetic monopole, the description can be modified by changing the topology of the underlying configuration space, see e.g.,\cite{Balachandran:1979wx},\cite{Balachandran:2017jha}. On the other hand, this procedure has no obvious extension when the fields are sourced by a continuous distribution of magnetic charge. In that case, auxiliary degrees of freedom can be added, possibly introducing additional local symmetries. One possibility is to introduce another set of potentials following work of Zwanziger\cite{Zwanziger:1970hk}. Another approach is to enlarge the phase space for the electric charge, and this was done recently by Kupriyanov and Szabo \cite{Kupriyanov:2018xji}. The result has implications for certain nongeometric string theories and their quantization, which leads to nonassociative algebras, see e.g.,\cite{jackiw}-\cite{Szabo:2019hhg}. The analysis of \cite{Kupriyanov:2018xji} for the electric charge in a field sourced by magnetic monopole distribution is performed in the Hamiltonian setting. The formulation is made possible thanks to the doubling of the number of phase space variables. In this letter we give the corresponding Lagrangian description. It naturally requires doubling the number of configuration space variables. So here if $Q$ denotes the original configuration space, one introduces another copy, $\tilde Q$ and writes down dynamics on $Q\times \tilde Q$. While the motion on the two spaces, in general, cannot be separated, the Lorentz force equations are recovered when projecting down to $Q$. 
The procedure of doubling the configuration space has a wide range of applications, and actually was used long ago in the description of quantum dissipative systems \cite{bat}-\cite{tHooft:1999xul}. The description in \cite{Kupriyanov:2018xji} is nonrelativistic. Here, in addition to giving the associated nonrelativistic Lagrangian, we extend the procedure to the case of a covariant relativistic particle, as well as to particles coupled to non-Abelian gauge fields that do not necessarily satisfy the Bianchi identity in a region of space-time. As a further generalization we consider the case of an open string coupled to a smooth distribution of magnetic monopoles. The outline of this article is as follows. In section \ref{nonrel} we write down the Lagrangian for a nonrelativistic charged particle in the presence of a magnetic field whose divergence field is continuous and nonvanishing in a finite volume of space, and show that the corresponding Hamiltonian description is that of \cite{Kupriyanov:2018xji}. The relativistic generalization is given in section \ref{rel}. Starting with a fully covariant treatment we obtain a new time dependent symmetry, in addition to standard reparametrization invariance. The new gauge symmetry mixes $\tilde Q$ with $ Q$. Gauge fixing constraints can be imposed on the phase space in order to recover the Poisson structure of the nonrelativistic treatment on the resulting constrained submanifold. Further extensions of the system are considered in section \ref{further}. In subsection \ref{further1} we write down the action for a particle coupled to a non-Abelian gauge field which does not satisfy Bianchi identity in some region of space-time, whereas in \ref{further2} we generalize to field theory, by considering an open string coupled to a magnetic monopole distribution, again violating Bianchi identity. 
In both cases we get a doubling of the configuration space variables (which in the case of the particle in a non-Abelian gauge field includes variables living in an internal space), as well as a doubling of the number of gauge symmetries. We note that the doubling of the number of world-sheet degrees of freedom of the string is also the starting point of Double Field Theory, introduced by Hull and Zwiebach\cite{HZ}, and further investigated by many authors\cite{DFT}-\cite{dCSV18}, in order to deal with the T-duality invariance of the string's dynamics. This has its geometric counterpart in Generalized and Double Geometry (see e.g. \cite{GG, gualtieri} and \cite{DG}-\cite{DG5}). Moreover, the doubling of configuration space has also been related to Drinfel'd doubles in the context of Lie group dynamics \cite{blumen1}-\cite{MPV18} with interesting implications for the mathematical and physical interpretation of the auxiliary variables. \section{Nonrelativistic treatment}\label{nonrel} We begin with a nonrelativistic charged particle on $ {\mathbb{R}}^3$ in the presence of a continuous magnetic monopole distribution. Say that the particle has mass $m$ and charge $e$ with coordinates and velocities $(x_i,\dot x_i$) spanning $T {\mathbb{R}}^3$. It interacts with a magnetic field $\vec B(x)$ of nonvanishing divergence $\vec \nabla\cdot \vec B(x)=\rho_M(x).$ In such a case it is possible to show that the dynamics of the particle, described by the equations of motion \begin{equation}\label{eom} m\ddot x_i=e\epsilon_{ijk} \dot x_j B_k(x) \end{equation} cannot be given by a Lagrangian formulation on the tangent space ${ T {\mathbb{R}}^3}$ because a vector potential for the magnetic field generated by the smooth monopole distribution cannot be defined, even locally. (A detailed discussion of this issue will appear in \cite{inprep}.)
On the other hand, a Lagrangian description is possible if one enlarges the configuration space to $\mathbb{R}^3\times\widetilde {\mathbb{R}}^3$, and this description leads to Kupriyanov and Szabo's Hamiltonian formulation \cite{Kupriyanov:2018xji}. For this one extends the tangent space to $T ({\mathbb{R}}^3\times\widetilde{ {\mathbb{R}}^3})\simeq T {\mathbb{R}}^3\times\widetilde{ T {\mathbb{R}}^3}$. We parametrize $\widetilde{ T {\mathbb{R}}^3}$ by $ (\tilde x_i,\dot{\tilde x}_i), \;i=1,2,3$. A straightforward calculation shows that the following Lagrangian function \begin{equation} L= m{\dot x_i\dot{\tilde x}_i}+e\epsilon_{ijk} B_k(x) \tilde x_i\dot x_j \;,\label{nrlLgngn}\end{equation} correctly reproduces Eq. \eqn{eom}, together with equations of motion for the auxiliary degrees of freedom $\tilde x_i$, \begin{equation}\label{aux} m\ddot{\tilde x}_i = e\epsilon_{ijk} \dot{\tilde x}_j B_k(x) + e\Bigl( \epsilon_{ jk\ell}\frac\partial{\partial x_i}B_k- \epsilon_{ ik\ell}\frac\partial{\partial x_j}B_k\Bigr) \dot x_j\tilde x_\ell \end{equation} which are not decoupled from the motion of the physical degrees of freedom. Here we do not ascribe any physical significance to the auxiliary dynamics. There are analogous degrees of freedom for dissipative systems, and they are associated with the environment. Since our system does not dissipate energy, the same interpretation does not obviously follow. The Lagrangian (\ref{nrlLgngn}) can easily be extended to include electric fields. This, along with the relativistic generalization, is done in the following section. In passing to the Hamiltonian formalism, we denote the momenta conjugate to $x_i$ and $\tilde x_i$ by \begin{eqnarray} p_i&=&m\dot {\tilde x}_i-e\epsilon_{ijk} \tilde x_j B_k(x)\cr&&\cr \tilde p_i&=&m\dot x_i \;, \label{nrcnclmmnt}\end{eqnarray} respectively. Along with $x_i$ and $\tilde x_i$, they span the 12-dimensional phase space $T^*({\mathbb{R}}^3\times \widetilde{{\mathbb{R}}^3})$.
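As a check of \eqn{nrlLgngn}, the Euler--Lagrange equations can be verified symbolically. A minimal sympy sketch, using the illustrative choice $\vec B=\vec x$ (so that $\vec\nabla\cdot\vec B=3\neq 0$; this particular field is a placeholder of ours, not taken from the text):

```python
import sympy as sp

t, m, e = sp.symbols('t m e')
x = [sp.Function(f'x{i}')(t) for i in range(3)]
y = [sp.Function(f'y{i}')(t) for i in range(3)]  # the auxiliary \tilde x_i

# Illustrative field with div B = 3 (a uniform monopole density); no vector
# potential exists for it, but the doubled Lagrangian needs none.
B = [x[0], x[1], x[2]]
eps = sp.LeviCivita

# L = m xdot . ydot + e eps_{ijk} B_k y_i xdot_j
L = (m*sum(x[i].diff(t)*y[i].diff(t) for i in range(3))
     + e*sum(eps(i, j, k)*B[k]*y[i]*x[j].diff(t)
             for i in range(3) for j in range(3) for k in range(3)))

# Varying the auxiliary coordinates reproduces the Lorentz force law (eom):
residuals = []
for i in range(3):
    el = sp.diff(L, y[i].diff(t)).diff(t) - sp.diff(L, y[i])
    lorentz = e*sum(eps(i, j, k)*x[j].diff(t)*B[k]
                    for j in range(3) for k in range(3))
    residuals.append(sp.simplify(el - (m*x[i].diff(t, 2) - lorentz)))
print(residuals)  # [0, 0, 0]
```

Varying $x_i$ instead reproduces \eqn{aux} in the same way.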
The nonvanishing Poisson brackets are \begin{equation} \{x_i,p_j\}=\{\tilde x_i,\tilde p_j\}=\delta_{ij}\end{equation} Instead of the canonical momenta (\ref{nrcnclmmnt}) one can define \begin{equation} \pi_i=p_i+e\epsilon_{ijk} \tilde x_j B_k(x)\qquad\quad\tilde \pi_i=\tilde p_i\;,\label{pipitld}\end{equation} which have the nonvanishing Poisson brackets: \begin{eqnarray} \{x_i,\pi_j\}&=&\{\tilde x_i,\tilde \pi_j\}=\delta_{ij}\cr&&\cr \{\pi_i,\tilde \pi_j\}&=& e \epsilon_{i jk}B_k\cr&&\cr \{\pi_i,\pi_j\}&=& e\Bigl( \epsilon_{ jk\ell}\frac\partial{\partial x_i}B_k- \epsilon_{ ik\ell}\frac\partial{\partial x_j}B_k\Bigr)\tilde x_\ell\label{kspbs}\end{eqnarray} The Hamiltonian when expressed in these variables is \begin{equation} H=\frac 1m \tilde\pi_i\pi_i\label{ksham}\end{equation} Eqs. (\ref{kspbs}) and (\ref{ksham}) are in agreement with the Hamiltonian formulation in \cite{Kupriyanov:2018xji}. Concerning the issue of the lack of a lower bound for $H$, one can follow the perspective in \cite{Wilczek}, where a very similar Hamiltonian dynamics is derived. Namely, while it is true that $H$ generates temporal evolution, it cannot be regarded as a classical observable of the particle. Rather, such observables should be functions of only the particle's coordinates $x_i$ and its velocities $\tilde \pi_i/m$, whose dynamics is obtained from their Poisson brackets with $H$ \begin{eqnarray} \dot x_i&=&\{x_i,H\}=\frac 1 m \tilde \pi_i \cr&&\cr \dot {\tilde \pi}_i&=&\{\tilde \pi_i,H\}=\frac e m\epsilon_{ijk} \tilde\pi_j B_k \end{eqnarray} The usual expression for the energy, $ \frac 1{2m}\tilde \pi_i\tilde \pi_i$, is, of course, an observable, which is positive-definite and a constant of motion. \section{Relativistic covariant treatment}\label{rel} \setcounter{equation}{0} The extension of the Lagrangian dynamics of the previous section can straightforwardly be made to a covariant relativistic system. 
In the usual treatment of a covariant relativistic particle, written on $T{\mathbb{R}}^4$, one obtains a first class constraint in the Hamiltonian formulation which generates reparametrizations. Here we find that the relativistic action for a charged particle in a continuous magnetic monopole distribution, which is now written on $T{\mathbb{R}}^4\times\widetilde{ T{\mathbb{R}}^4}$, yields an additional first class constraint, generating a new gauge symmetry. When projecting the Hamiltonian dynamics onto the constrained submanifold of the phase space, and taking the nonrelativistic limit, we recover the Hamiltonian description of \cite{Kupriyanov:2018xji}. As stated above, our action for the charged particle in a continuous magnetic monopole distribution is written on $T{\mathbb{R}}^4\times\widetilde{ T{\mathbb{R}}^4}$. Let us parametrize $T{\mathbb{R}}^4$ by space-time coordinates and velocity four-vectors $(x^\mu,\dot x^\mu$), and $\widetilde{ T{\mathbb{R}}^4}$ by $ (\tilde x^\mu,\dot{\tilde x}^\mu), \;\mu=0,1,2,3$. So here we have included two `time' coordinates, $x^0$ and $\tilde x^0$. Now the dot denotes the derivative with respect to some variable $\tau$ which parametrizes the particle world line in ${\mathbb{R}}^4\times\widetilde{ {\mathbb{R}}^4}$. The action for a charged particle in an electromagnetic field $F_{\mu\nu}(x)$, which does {\it not} in general satisfy the Bianchi identity $\frac\partial{\partial x^\mu}F_{\nu\rho}+\frac\partial{\partial x^\nu}F_{\rho\mu}+\frac\partial{\partial x^\rho}F_{\mu\nu}=0$, is \begin{equation} S =\int d\tau \Big\{ m\frac{\dot x_\mu\dot{\tilde x}^\mu}{\sqrt{-\dot x^\nu\dot x_\nu}}+e F_{\mu\nu}(x) \tilde x^\mu\dot x^\nu + L'(x,\dot x) \Big\} \;,\label{flecvrntL}\end{equation} where $L'(x,\dot x)$ is an arbitrary function of $x^\mu$ and $\dot x^\mu$. Indices are raised and lowered with the Lorentz metric $\eta=$diag$(-1,1,1,1)$.
The action is invariant under Lorentz transformations and arbitrary reparametrizations of $\tau$, $\tau\rightarrow \tau'=f(\tau)$, provided we choose $L'$ appropriately. The action is also invariant under a local transformation that mixes $\tilde {\mathbb{R}}^4$ with ${\mathbb{R}}^4$, \begin{equation} x^\mu\rightarrow x^\mu \qquad\qquad \tilde x^\mu\rightarrow \tilde x^\mu+ \frac{\epsilon(\tau)\, \dot x^\mu}{\sqrt{-\dot x_\nu\dot{ x}^\nu}}\;,\label{strngtrnsfm}\end{equation} for an arbitrary real function $\epsilon(\tau)$. The first term in the integrand of (\ref{flecvrntL}) changes by a $\tau-$derivative under (\ref{strngtrnsfm}), while the remaining terms in the integrand are invariant. Upon extremizing the action with respect to arbitrary variations $\delta\tilde x^\mu$ of $\tilde x^\mu$, we recover the standard Lorentz force equation on $T{\mathbb{R}}^4$ \begin{equation} \dot {\tilde p}_\mu=eF_{\mu\nu}(x)\dot x^\nu\;, \end{equation} while arbitrary variations $\delta x^\mu$ of $ x^\mu$ lead to \begin{equation} \dot{ p}_\mu=e\frac{\partial F_{\rho\sigma}}{\partial x^\mu}\tilde x^\rho\dot x^\sigma+\frac{\partial L'}{\partial x^\mu}\;\label{thr33ptfr}\end{equation} $p_\mu$ and $\tilde p_\mu$ are the momenta canonically conjugate to $x^\mu$ and $\tilde x^\mu$, respectively, \begin{eqnarray} p_\mu &=&\frac m{(-\dot x^\rho\dot x_\rho)^{3/2}}( \dot x_\mu\dot{\tilde x}_\nu-\dot x_\nu \dot {\tilde x}_\mu)\dot x^\nu-eF_{\mu\nu }\tilde x^\nu+\frac{\partial L'}{\partial\dot x^\mu}\cr&&\cr \tilde p_\mu&=&\frac{m\dot x_\mu}{\sqrt{-\dot x_\nu\dot{ x}^\nu}} \label{cnclmmnt} \end{eqnarray} The momenta $p_\mu$ and $\tilde p_\mu$, along with coordinates $x^\mu$ and $\tilde x^\mu$, parametrize a $16-$dimensional phase space, which we denote simply by $T^*Q$. 
$x^\mu$, $\tilde x^\mu$, $p_\mu$ and $\tilde p_\mu$ satisfy canonical Poisson bracket relations, the nonvanishing ones being \begin{equation} \{x^\mu,p_\nu\}= \{\tilde x^\mu,\tilde p_\nu\}=\delta^\mu_\nu\end{equation} $\tilde p_\mu$ satisfies the usual mass shell constraint \begin{equation} \Phi_1= \tilde p_\mu\tilde p^\mu+m^2\approx 0\;,\label{mscnst}\end{equation} where $\approx$ means `weakly' zero in the sense of Dirac. Another constraint is \begin{equation} \Phi_2=p_\mu\tilde p^\mu+e F_{\mu\nu}(x)\tilde p^\mu \tilde x^\nu\approx 0\;,\label{anthrcnstrnt}\end{equation} where from now on we set $L'=0$. {The three-momenta $\pi_i$ and $\tilde\pi_i$ of the previous section can easily be generalized to four-vectors according to \begin{equation} \pi_\mu=p_\mu+eF_{\mu\nu}(x) \tilde x^\nu\qquad\qquad\tilde \pi_\mu=\tilde p_\mu\end{equation} Their nonvanishing Poisson brackets are \begin{eqnarray} \{x^\mu,\pi_\nu\}&=& \{\tilde x^\mu,\tilde \pi_\nu\}\;=\;\delta^\mu_\nu\cr&&\cr \{\pi_\mu,\tilde \pi_\nu\}&=&e F_{\mu\nu}\qquad \cr&&\cr \{\pi_\mu, \pi_\nu\}&=&- e\Bigl(\frac\partial{\partial x^\mu}F_{\nu\rho}+\frac\partial{\partial x^\nu}F_{\rho\mu}\Bigr)\tilde x^\rho\label{cvrntpba}\end{eqnarray} Then the constraints (\ref{mscnst}) and (\ref{anthrcnstrnt}) take the simple form \begin{equation} \Phi_1= \tilde \pi_\mu\tilde \pi^\mu+m^2\approx 0\qquad\qquad\Phi_2=\pi_\mu\tilde \pi^\mu\approx 0\label{cntsntrmspitldpi}\end{equation} From (\ref{cvrntpba}), one has $\{ \Phi_1, \Phi_2\}=0$, and therefore $ \Phi_1$ and $ \Phi_2$ form a first class set of constraints. They generate the two gauge (i.e., $\tau-$dependent) transformations on $T^*Q$. Unlike in the standard covariant treatment of a relativistic particle, the mass shell constraint $\Phi_1$ does not generate reparametrizations. $\Phi_1$ instead generates the transformations (\ref{strngtrnsfm}), while a linear combination of $\Phi_1$ and $\Phi_2$ generates reparametrizations.
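Both the bracket $\{\pi_\mu,\pi_\nu\}$ in \C{cvrntpba} and the first class property $\{\Phi_1,\Phi_2\}=0$ can be checked directly in canonical variables; a sympy sketch, keeping $F_{\mu\nu}(x)$ as arbitrary antisymmetric functions (no potential assumed):

```python
import sympy as sp

e, m2 = sp.symbols('e m2')  # charge and m^2
X = sp.symbols('X0:4'); Xt = sp.symbols('Xt0:4')
P = sp.symbols('P0:4'); Pt = sp.symbols('Pt0:4')
eta = sp.diag(-1, 1, 1, 1)

# Antisymmetric F_{mu nu}(x), arbitrary functions of the coordinates.
F = sp.zeros(4, 4)
for mu in range(4):
    for nu in range(mu + 1, 4):
        f = sp.Function(f'F{mu}{nu}')(*X)
        F[mu, nu], F[nu, mu] = f, -f

def pb(a, b):
    """Canonical Poisson bracket on the doubled phase space T*Q."""
    return sum(sp.diff(a, X[i])*sp.diff(b, P[i]) - sp.diff(a, P[i])*sp.diff(b, X[i])
               + sp.diff(a, Xt[i])*sp.diff(b, Pt[i]) - sp.diff(a, Pt[i])*sp.diff(b, Xt[i])
               for i in range(4))

# pi_mu = p_mu + e F_{mu nu} xt^nu ;  pitilde_mu = ptilde_mu
pi = [P[m] + e*sum(F[m, n]*Xt[n] for n in range(4)) for m in range(4)]

# {pi_mu, pi_nu} = -e (d_mu F_{nu rho} + d_nu F_{rho mu}) xt^rho
for mu in range(4):
    for nu in range(4):
        rhs = -e*sum((sp.diff(F[nu, r], X[mu]) + sp.diff(F[r, mu], X[nu]))*Xt[r]
                     for r in range(4))
        assert sp.expand(pb(pi[mu], pi[nu]) - rhs) == 0

Phi1 = sum(eta[m, n]*Pt[m]*Pt[n] for m in range(4) for n in range(4)) + m2
Phi2 = sum(eta[m, n]*pi[m]*Pt[n] for m in range(4) for n in range(4))
print(sp.expand(pb(Phi1, Phi2)))  # 0: the constraints are first class
```

The vanishing of $\{\Phi_1,\Phi_2\}$ holds for any antisymmetric $F_{\mu\nu}$, i.e., independently of the Bianchi identity.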
After imposing (\ref{mscnst}) and (\ref{anthrcnstrnt}) on $T^*Q$, one ends up with a gauge invariant subspace that is 12-dimensional, which is in agreement with the dimensionality of the nonrelativistic phase space. Alternatively, one can introduce two additional constraints on $T^*Q$ which fix the two time coordinates $x^0$ and $\tilde x^0$, and thus break the gauge symmetries. The set of all four constraints would then form a second class set, again yielding a 12-dimensional reduced phase space, which we denote by $\overline{T^*Q}$. The dynamics on the reduced phase space is then determined from Dirac brackets and some Hamiltonian $H$. We choose $H$ to be \begin{equation} H=p_0=\pi_0-eF_{0i}(x) \tilde x^i\end{equation} Note that $p_0$ differs from $\pi_0$ in the presence of an electric field.} The latter can be expressed as a function of the spatial momenta $\pi_i$ and $\tilde \pi_i$, $i=1,2,3$, after solving the constraints (\ref{cntsntrmspitldpi}). The result is \begin{equation} \pi_0=\frac {\pi_i\tilde\pi_i}{\sqrt{\tilde\pi_j^2+m^2}}\;,\end{equation} which correctly reduces to the non-relativistic Hamiltonian (\ref{ksham}) in the limit $\tilde\pi_j^2\ll m^2$. In addition to recovering the non-relativistic Hamiltonian of the previous section, the gauge fixing constraints, which we denote by $\Phi_3\approx 0$ and $\Phi_4\approx 0$, can be chosen such that the Dirac brackets on $\overline{T^*Q}$ agree with the Poisson brackets (\ref{kspbs}) of the nonrelativistic treatment. For this take \begin{equation} \Phi_3=x^0-g(\tau)\qquad \qquad\Phi_4=\tilde x^0-h(\tau)\;,\label{gfxngcnst}\end{equation} where $g$ and $h$ are unspecified functions of the proper time.
By definition, the Dirac brackets between two functions $A$ and $B$ of the phase space coordinates are given by \begin{equation} \{A,B\}_{\tt {DB}}=\{A,B\}-\sum_{a,b=1}^4\{A,\Phi_a\} M^{-1}_{ab}\{\Phi_b,B\}\;,\label{dfofdb}\end{equation} where $M^{-1}$ is the inverse of the matrix $M$ with elements $M_{ab}=\{\Phi_a,\Phi_b\},\;a,b=1,...,4$. From the constraints (\ref{cntsntrmspitldpi}) and (\ref{gfxngcnst}) we get \begin{equation} M^{-1}= \frac {1}{2{({\tilde\pi^0})}^2} \pmatrix{0&0&-\pi^0&\tilde\pi^0\cr 0&0&2\tilde\pi^0& 0\cr \pi^0&-2\tilde\pi^0&0&0\cr -\tilde\pi^0&0&0&0} \end{equation} Substituting into (\ref{dfofdb}) gives \begin{eqnarray} \{A,B\}_{\tt {DB}}&=&\{A,B\}- \frac {1}{2{({\tilde\pi^0})}^2} \,\Biggl(\,\pi^0\Bigl(\{A,x^0\} \{ \tilde \pi_\mu\tilde \pi^\mu,B\}-\{B,x^0\} \{ \tilde \pi_\mu\tilde \pi^\mu,A\}\Bigr)\cr&&\cr&&\qquad\qquad\qquad\quad-\tilde\pi^0\,\Bigl(\{A,\tilde x^0\} \{ \tilde \pi_\mu\tilde \pi^\mu,B\}-\{B,\tilde x^0\} \{ \tilde \pi_\mu\tilde \pi^\mu,A\}\Bigr)\cr&&\cr&&\qquad\qquad\qquad\;\;-2\tilde\pi^0\,\Bigl(\{A, x^0\} \{ \tilde \pi_\mu \pi^\mu,B\}-\{B, x^0\} \{ \tilde \pi_\mu \pi^\mu,A\}\Bigr)\;\Biggr)\end{eqnarray} It shows that the Dirac brackets $\{A,B\}_{\tt {DB}}$ and their corresponding Poisson brackets $\{A,B\}$ are equal if both functions $A$ and $B$ are independent of $\pi^0$ and $\tilde \pi^0$. We need to evaluate the Dirac brackets on the constrained subsurface, which we take to be $T {\mathbb{R}}^3\times\widetilde{ T {\mathbb{R}}^3}$, parametrized by $x_i,\tilde x_i,\pi_i$ and $\tilde\pi_i$, $i=1,2,3$. It is then sufficient to compute their Poisson brackets. 
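(The matrix $M^{-1}$ quoted above can be verified mechanically: the following sympy sketch builds $M_{ab}=\{\Phi_a,\Phi_b\}$ from the canonical brackets on the doubled phase space and checks $MM^{-1}=1$; the symbols $g_0$, $h_0$ stand for the values of $g(\tau)$, $h(\tau)$ at fixed $\tau$.)

```python
import sympy as sp

e, m2, g0, h0 = sp.symbols('e m2 g0 h0')
X = sp.symbols('X0:4'); Xt = sp.symbols('Xt0:4')
P = sp.symbols('P0:4'); Pt = sp.symbols('Pt0:4')
eta = sp.diag(-1, 1, 1, 1)

# Antisymmetric F_{mu nu}(x) as arbitrary functions (no potential assumed).
F = sp.zeros(4, 4)
for mu in range(4):
    for nu in range(mu + 1, 4):
        f = sp.Function(f'F{mu}{nu}')(*X)
        F[mu, nu], F[nu, mu] = f, -f

def pb(a, b):
    return sum(sp.diff(a, X[i])*sp.diff(b, P[i]) - sp.diff(a, P[i])*sp.diff(b, X[i])
               + sp.diff(a, Xt[i])*sp.diff(b, Pt[i]) - sp.diff(a, Pt[i])*sp.diff(b, Xt[i])
               for i in range(4))

pi = [P[m] + e*sum(F[m, n]*Xt[n] for n in range(4)) for m in range(4)]
Phi = [sum(eta[m, n]*Pt[m]*Pt[n] for m in range(4) for n in range(4)) + m2,
       sum(eta[m, n]*pi[m]*Pt[n] for m in range(4) for n in range(4)),
       X[0] - g0,
       Xt[0] - h0]

M = sp.Matrix(4, 4, lambda a, b: pb(Phi[a], Phi[b]))

pi0, pit0 = -pi[0], -Pt[0]     # pi^0 and tilde-pi^0 (index raised with eta)
Minv = sp.Matrix([[0, 0, -pi0, pit0],
                  [0, 0, 2*pit0, 0],
                  [pi0, -2*pit0, 0, 0],
                  [-pit0, 0, 0, 0]]) / (2*pit0**2)

print(sp.simplify(M*Minv))  # the 4x4 identity matrix
```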
The nonvanishing Poisson brackets of the coordinates of $T {\mathbb{R}}^3\times\widetilde{ T {\mathbb{R}}^3}$ are: \begin{eqnarray} && \{x_i,\pi_j\}=\{\tilde x_i,\tilde \pi_j\}=\delta_{ij}\cr &&\cr && \{\pi_i,\tilde\pi_j\}=e\epsilon_{ijk} B_k\cr&&\cr &&\{\pi_i,\pi_j\}= e\Bigl( \epsilon_{ jk\ell}\frac\partial{\partial x_i}B_k- \epsilon_{ ik\ell}\frac\partial{\partial x_j}B_k\Bigr)\tilde x_\ell+e\Big(\frac\partial{\partial x_i}E_j-\frac\partial{\partial x_j}E_i\Bigr)h(\tau)\;,\end{eqnarray} where $F_{ij}=\epsilon_{ijk} B_k$, $F_{0i}=E_i$ and we have imposed the constraint $\Phi_4=0$. These Poisson brackets agree with those of the nonrelativistic treatment, (\ref{kspbs}), in the absence of the electric field. \section{Further extensions}\label{further} \setcounter{equation}{0} Here we extend the dynamics of the previous sections to $1)$ the case of a particle coupled to a non-Abelian gauge field violating Bianchi identities and $2)$ the case of an open string coupled to a smooth distribution of magnetic monopoles. Of course, another extension would be the combination of both of these two cases, i.e., where an open string interacts with a non-Abelian gauge field that does not satisfy the Bianchi identities in some region of the space-time. We shall not consider that here. \subsection{Particle in a non-Abelian magnetic monopole distribution}\label{further1} Here we replace the underlying Abelian gauge group of the previous sections, with an $N$ dimensional non-Abelian Lie group $G$. We take it to be compact and connected with a simple Lie algebra. Given a unitary representation $\Gamma$ of $G$, let $t_A, \;A=1,2,...N$ span the corresponding representation $\bar\Gamma$ of the Lie algebra, satisfying $t_A^\dagger=t_A$, Tr$\,t_At_B=\delta_{AB}$ and $[t_A,t_B]=ic_{ABC} t_C$, $c_{ABC}$ being totally antisymmetric structure constants. In Yang-Mills field theory, the field strengths now take values in $\bar\Gamma$, $F_{\mu\nu}(x)=f_{\mu\nu}^A(x) t_A$. 
A particle interacting with a Yang-Mills field carries degrees of freedom $I(\tau)$ associated with the non-Abelian charge, in addition to space-time coordinates $x^\mu(\tau)$. These new degrees of freedom live in the internal space $\bar\Gamma$, $I(\tau)=I^A(\tau) t_A$. Under gauge transformations, $I(\tau)$ transforms as a vector in the adjoint representation of $G$, just as do the field strengths $F_{\mu\nu}(x)$, i.e., $I(\tau)\rightarrow h(\tau) I(\tau) h(\tau)^\dagger,\; h(\tau) \in \Gamma$. The standard equations of motion for a particle in a non-Abelian gauge field were given long ago by Wong.\cite{Wong:1970fu} They consist of two sets of coupled equations. One set is a straightforward generalization of the Lorentz force law \begin{equation} \dot {\tilde p}_\mu={\rm Tr}\Bigl( F_{\mu\nu}(x) I(\tau)\Bigr)\dot x^\nu\;,\label{Wongeq} \end{equation} where $ {\tilde p}_\mu$ is again given in (\ref{cnclmmnt}). The other set consists of first order equations describing the precession of $I(\tau)$ in the internal space $\bar\Gamma$. Yang-Mills potentials are required in order to write these equations in a gauge-covariant way. The Wong equations were derived from action principles using a number of different approaches. The Yang-Mills potentials again play a vital role in all of the Lagrangian descriptions. In the approach of co-adjoint orbits, one takes the configuration space to be $Q={\mathbb{R}}^4\times\Gamma$, and writes\cite{Balachandran:1977ub},\cite{Balachandran:2017jha} \begin{equation} I(\tau)=g(\tau)Kg(\tau)^\dagger \;,\label{gKgdgr}\end{equation} where $g(\tau)$ takes values in $\Gamma$, and $K$ is a fixed direction in $\bar\Gamma$. Under gauge transformations, $g(\tau)$ transforms with the left action of the group, $g(\tau)\rightarrow h(\tau)g(\tau)$, $h(\tau)\in \Gamma$. The two sets of Wong equations result from variations of the action with respect to $g(\tau)$ and $x^\mu(\tau)$. 
Now in the spirit of \cite{Kupriyanov:2018xji} we imagine that there is a region of space-time where the Bianchi identity does not hold, and so the usual expression for the field strengths in terms of the Yang-Mills potentials is not valid. So we cannot utilize the known actions which yield Wong's equations, as they require the existence of the potentials. We can instead try a generalization of (\ref{flecvrntL}), which doubles the number of space-time coordinates. This appears, however, to be insufficient. In order to have a gauge invariant description for the particle, we claim that it is necessary to double the number of internal variables as well. Thus we double the entire configuration space, $Q\rightarrow Q\times \tilde Q$. Proceeding along the lines of the coadjoint orbits approach, we take $\tilde Q$ to be another copy of ${\mathbb{R}}^4\times\Gamma$. Let us denote the dynamical variables in this case by $ x^\mu(\tau)$, $\tilde x^\mu(\tau)$, $g(\tau)$ and $\tilde g(\tau)$, where both $g(\tau)$ and $\tilde g(\tau)$ take values in $\Gamma$ and transform under the left action of the group, $g(\tau)\rightarrow h(\tau)g(\tau)$, $\tilde g(\tau)\rightarrow h(\tau)\tilde g(\tau)$, $h(\tau)\in \Gamma$. We now propose the following gauge invariant action for the particle \begin{equation} S =\int d\tau \biggl\{ {\rm Tr}\, K g(\tau)^\dagger \dot { g}(\tau)- {\rm Tr}\,I(\tau)\dot{\tilde g}(\tau)\tilde g(\tau)^\dagger+ m\frac{\dot x_\mu\dot{\tilde x}^\mu}{\sqrt{-\dot x^\nu\dot x_\nu}}+{\rm Tr}\,\Bigl( F_{\mu\nu}(x) I(\tau)\Bigr) \tilde x^\mu\dot x^\nu \biggr\} \;,\label{nAactn}\end{equation} where $I(\tau)$ is defined in (\ref{gKgdgr}). To see that the action is gauge invariant we note that the first two terms in the integrand can be combined into Tr$\,K g(\tau)^\dagger\tilde g(\tau)\frac d{d\tau}\Bigl( \tilde g(\tau)^\dagger g(\tau)\Bigr)$, with $\tilde g(\tau)^\dagger g(\tau)$ being gauge invariant.
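For completeness, this identity is quickly checked (suppressing the $\tau$ arguments): using the cyclic property of the trace, the definition (\ref{gKgdgr}) and unitarity of $\tilde g$, which gives $\tilde g\,\dot{\tilde g}^\dagger=-\dot{\tilde g}\,\tilde g^\dagger$, one has \begin{eqnarray*} {\rm Tr}\, K g^\dagger\tilde g\frac d{d\tau}\Bigl(\tilde g^\dagger g\Bigr)&=&{\rm Tr}\, K g^\dagger\tilde g\,\dot{\tilde g}^\dagger g+{\rm Tr}\, K g^\dagger\dot g\cr &=&{\rm Tr}\, I\,\tilde g\,\dot{\tilde g}^\dagger+{\rm Tr}\, K g^\dagger\dot g\;=\;{\rm Tr}\, K g^\dagger\dot g-{\rm Tr}\, I\,\dot{\tilde g}\,\tilde g^\dagger\;, \end{eqnarray*} which is precisely the sum of the first two terms in the integrand of (\ref{nAactn}).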
Variations of $\tilde x^\mu$ in the action yield the Wong equation (\ref{Wongeq}). Variations of $ x^\mu$ in the action give a new set of equations defining motion on the enlarged configuration space \begin{eqnarray} &&\dot{ p}_\mu={\rm Tr}\Bigl(\frac{\partial F_{\rho\sigma}}{\partial x^\mu}I(\tau)\Bigr)\tilde x^\rho\dot x^\sigma\;, \cr&&\cr &&\qquad{\rm where}\quad\quad p_\mu =\frac m{(-\dot x^\rho\dot x_\rho)^{3/2}}( \dot x_\mu\dot{\tilde x}_\nu-\dot x_\nu \dot {\tilde x}_\mu)\dot x^\nu-{\rm Tr}\Bigl(F_{\mu\nu }I(\tau)\Bigr)\tilde x^\nu\;. \end{eqnarray} These equations are the non-Abelian analogues of (\ref{thr33ptfr}). The remaining equations of motion result from variations of $g(\tau)$ and $\tilde g(\tau)$ and describe motion in $\Gamma\times \Gamma$. Infinitesimal variations of $g(\tau)$ and $\tilde g(\tau)$ may be performed as follows: For $\tilde g(\tau)$, it is simpler to consider variations resulting from the right action on the group, $\delta\tilde g(\tau)=i\tilde g(\tau)\tilde \epsilon(\tau)$, $\tilde\epsilon(\tau)\in\bar\Gamma$. The action (\ref{nAactn}) is stationary with respect to these variations when \begin{equation}\frac d{d\tau} \Bigl(\tilde g(\tau)I(\tau)\tilde g(\tau)^\dagger\Bigr)=0\;,\label{ddtgIGd}\end{equation} thus stating that $\tilde g(\tau)I(\tau)\tilde g(\tau)^\dagger$ is a constant of the motion. For $ g(\tau)$, consider variations resulting from the left action on the group, $\delta g(\tau)=i \epsilon(\tau)g(\tau)$, $\epsilon(\tau)\in\bar\Gamma$.
These variations lead to the equations of motion \begin{equation} \dot I(\tau)=\Bigl[ I(\tau),\dot{\tilde g}(\tau)\tilde g(\tau)^\dagger- F_{\mu\nu}(x) \tilde x^\mu\dot x^\nu \Bigr]\;.\label{dotIt}\end{equation} The consistency of both (\ref{ddtgIGd}) and (\ref{dotIt}) leads to the following constraint on the motion \begin{equation}\Bigl[ I(\tau),F_{\mu\nu}(x) \Bigr] \tilde x^\mu\dot x^\nu=0\;.\end{equation} This condition on $TQ\times T\tilde Q$ is a feature of the non-Abelian gauge theory, and is absent from the Abelian gauge theory. \subsection{Open string coupled to a magnetic monopole distribution}\label{further2} Finally we generalize the case of a particle interacting with a smooth magnetic monopole distribution to that of a string interacting with the same monopole distribution. Just as we doubled the number of particle coordinates in the previous sections, we now double the number of string coordinates. We note that a doubling of the world-sheet coordinates of the string, originally limited to the compactified coordinates, also occurs in the context of Double Field Theory,\cite{DFT} with the original purpose of making the invariance of the dynamics under T-duality a manifest symmetry of the action. The approach has been further extended to strings propagating in so-called non-geometric backgrounds \cite{nongeom1},\cite{hull2},\cite{Plauschinn:2018wbo},\cite{Szabo:2018hhh}, which leads to quasi-Poisson brackets, violating the Jacobi identity. The resolution involves a doubling of the world-sheet coordinates, similar to what happens in the case under study. Whereas the configuration space for a Nambu-Goto string moving in $d$ dimensions is ${\mathbb{R}}^d$, which can have indefinite signature, here we take it to be ${\mathbb{R}}^d\times\widetilde{ {\mathbb{R}}^d}$.
Denote the string coordinates for ${\mathbb{R}}^d$ and $\widetilde{ {\mathbb{R}}^d}$ by $x^\mu(\sigma)$ and $\tilde x^\mu(\sigma)$, $\mu=0,1,...,d-1$, respectively, where $\sigma=(\sigma^0,\sigma^1)$ parametrizes the string world sheet, ${\cal M}$. $\sigma^0$ is assumed to be a time-like parameter, and $ \sigma^1$ a spatial parameter. In addition to writing down the induced metric $g$ on $T{\mathbb{R}}^d$, \begin{equation} g_{\tt a b}=\partial_{\tt a} x^\mu \partial_{\tt b} x_\mu\;, \end{equation} where $ \partial_{\tt a}=\frac \partial{\partial \sigma^{\tt a}}\;,\;\;{\tt a,b,...}=0,1$\,, we define a non-symmetric matrix $\tilde g$ on $T{\mathbb{R}}^d\times\widetilde{ T{\mathbb{R}}^d}$, \begin{equation} \tilde g_{\tt a b}=\partial_{\tt a} x^\mu \partial_{\tt b} \tilde x_\mu\; \end{equation} For the free string action we propose to replace the usual Nambu-Goto action by \begin{equation} S_0=\frac 1{2\pi\alpha'}\int_{\cal M} d^2\sigma {\sqrt{-\det g}}\;g^{\tt a b}\tilde g_{\tt a b}\;,\label{rlstactn}\end{equation} where $g^{\tt a b}$ denote matrix elements of $g^{-1}$ and $\alpha'$ is the string constant. The action (\ref{rlstactn}), together with the interacting term given below, is a natural generalization of the point-particle action Eq. \eqn{flecvrntL} because: \begin{itemize} \item Just as with the case of the relativistic point particle action in section \ref{rel}, it is relativistically covariant. \item Just as with the case of the relativistic point particle action in section \ref{rel}, there is a new gauge symmetry, in addition to reparametrizations, $\sigma^{\tt a}\rightarrow{\sigma'}^{\tt a}=f^{\tt a}(\sigma)$, leading to new first class constraints in the Hamiltonian formalism. This new gauge symmetry mixes $\tilde {\mathbb{R}}^d$ with ${\mathbb{R}}^d$. 
Infinitesimal variations are given by \begin{equation} \delta x^\mu =0\qquad\quad \delta\tilde x^\mu=\frac{\epsilon^{\tt a}(\sigma)\, \partial_{\tt a} x^\mu}{\sqrt{-\det g}}\;,\label{trnsfmfs}\end{equation} where $\epsilon^{\tt a}(\sigma)$ are arbitrary functions of $\sigma$, which we assume vanish at the string boundaries. This is the natural generalization of the $\tau-$dependent symmetry transformation (\ref{strngtrnsfm}) for the relativistic point particle. Invariance of $S_0$ under variations (\ref{trnsfmfs}) follows from: \begin{eqnarray} \delta S_0&=&\frac 1{2\pi\alpha'}\int_{\cal M} d^2\sigma {\sqrt{-\det g}}\;g^{\tt a b}\partial_{\tt a}x_\mu\partial_{\tt b}\Bigl(\frac{\epsilon^{\tt c} \partial_{\tt c} x^\mu}{\sqrt{-\det g}}\Bigr)\cr&&\cr &=&\frac 1{2\pi\alpha'}\int_{\cal M} d^2\sigma g^{\tt a b}\biggl( g_{\tt a c}\partial_{\tt b}\epsilon^{\tt c}+ \partial_{\tt a}x_\mu\partial_{\tt b}\partial_{\tt c}x^\mu\epsilon^{\tt c}-\frac {\partial_{\tt b}\det g}{2\det g}\,g_{\tt a c}\epsilon^{\tt c}\biggr)\cr&&\cr &=&\frac 1{2\pi\alpha'}\int_{\cal M} d^2\sigma \biggl(\partial_{\tt c}\epsilon^{\tt c}+g^{\tt a b}\Bigl( \partial_{\tt a}x_\mu\partial_{\tt b}\partial_{\tt c}x^\mu -\frac 12\partial_{\tt c}g_{\tt a b}\Bigr)\epsilon^{\tt c}\biggr)\cr&&\cr &=&\frac 1{2\pi\alpha'}\int_{\partial{\cal M}} d\sigma^{\tt a}\epsilon_{\tt a}\;, \end{eqnarray} which vanishes upon requiring $\epsilon_{\tt a}|_{\partial{\cal M}}=0.$ \item The action (\ref{rlstactn}) leads to the standard string dynamics when projecting the equations of motion to ${\mathbb{R}}^d$. Excluding for the moment interactions, variations of the action $S_0$ with respect to $\tilde x^\mu(\sigma)$ away from the boundary $\partial {\cal M}$ give the equations of motion \begin{equation} \partial_{\tt a} \tilde p^{\tt a}_\mu=0\;,\qquad \tilde p^{\tt a}_\mu=\frac 1{2\pi\alpha'}{\sqrt{-\det g}}\;g^{\tt a b}\partial_{\tt b} x_\mu\end{equation} These are the equations of motion for a Nambu string. 
In addition to recovering the usual string equations on $ {\mathbb{R}}^d$, variations of $S_0$ with respect to $ x^\mu(\sigma)$ lead to another set of equations of motion on $ {\mathbb{R}}^d\times \tilde {\mathbb{R}}^d$ \begin{equation} \partial_{\tt a} p^{\tt a}_\mu=0\;,\qquad p^{\tt a}_\mu=\frac 1{2\pi\alpha'}{\sqrt{-\det g}}\,\Bigl\{(g^{\tt a b}g^{\tt cd}- g^{\tt ad}g^{\tt bc}- g^{\tt ac}g^{\tt bd})\,\tilde g_{\tt cd} \partial_{\tt b}x_\mu+g^{\tt ab}\partial_{\tt b}\tilde x_\mu \Bigr\}\end{equation} \end{itemize} Of course, (\ref{rlstactn}) can be used for both a closed string and an open string. We now include interactions with the electromagnetic field. These occur at the boundaries of an open string, and are standardly expressed in terms of the electromagnetic potential, which again is not possible in the presence of a continuous magnetic monopole charge distribution. So here we take instead \begin{equation} S_I= e\int_{\partial {\cal M}} d\sigma^{\tt a} F_{\mu\nu}(x)\tilde x^\mu\partial_{\tt a}x^\nu\;,\end{equation} where $F_{\mu\nu}(x)$ is not required to satisfy the Bianchi identity in a finite volume of ${\mathbb{R}}^d$. We take $-\infty<\sigma^0<\infty$, $0<\sigma^1<\pi$, with $\sigma^1=0,\pi$ denoting the spatial boundaries of the string. Then the boundary equations of motion resulting from variations of $\tilde x^\mu(\sigma)$ in the total action $S=S_0+S_I$ are \begin{equation} \Bigl(\tilde p^1_\mu+e F_{\mu\nu}(x)\partial_0 x^\nu\Bigr)\Big|_{\sigma^1=0,\pi}=0\;,\end{equation} which are the usual conditions in $ {\mathbb{R}}^d$.
The boundary equations of motion resulting from variations of $ x^\mu(\sigma)$ in the total action $S=S_0+S_I$ give some new conditions in $ {\mathbb{R}}^d\times \tilde {\mathbb{R}}^d$ \begin{equation} \biggl( p^1_\mu+e \Bigl(\frac\partial{\partial x^\mu} F_{\rho\sigma}+\frac\partial{\partial x^\sigma} F_{\mu \rho}\Bigr)\tilde x^\rho\partial_0 x^\sigma+e F_{\mu\nu}\partial_0 \tilde x^\nu\biggr)\bigg|_{\sigma^1=0,\pi}=0\end{equation} In the Hamiltonian formulation of the system $\pi_\mu=p^0_\mu$ and $\tilde \pi_\mu=\tilde p^0_\mu$ are canonically conjugate to $x^\mu$ and $\tilde x^\mu$, respectively, having equal time Poisson brackets \begin{equation} \Bigl\{x^\mu(\sigma^0,\sigma^1)\,,\,\pi_\nu(\sigma^0,{\sigma'}^1)\Bigr\}=\Bigl\{\tilde x^\mu(\sigma^0,\sigma^1)\,,\,\tilde \pi_\nu(\sigma^0,{\sigma'}^1)\Bigr\}=\delta^\mu_\nu\,\delta(\sigma^1-{\sigma'}^1)\;, \end{equation} for $0<\sigma^1,{\sigma'}^1<\pi$, with all other equal time Poisson brackets equal to zero. The canonical momenta are subject to the four constraints: \begin{eqnarray} &&\Phi_1=\tilde \pi_\mu\tilde \pi^{\mu}+\frac 1{(2\pi\alpha')^2}\partial_1 x^\mu\partial_1 x_\mu\approx 0\cr&&\cr&&\Phi_2=\tilde \pi_\mu\partial_1 x^\mu\approx 0\cr&&\cr&&\Phi_3=\pi_\mu\tilde \pi^{\mu}+\frac 1{(2\pi\alpha')^2}\partial_1 x^\mu\partial_1 \tilde x_\mu\approx 0\cr&&\cr&&\Phi_4=\pi_\mu\partial_1 x^\mu+\tilde \pi_\mu\partial_1 \tilde x^\mu\approx 0\end{eqnarray} It can be verified that they form a first class set. $\Phi_1$ and $\Phi_2$ generate the local symmetry transformations (\ref{trnsfmfs}), while linear combinations of the four constraints generate reparametrizations. \section{Conclusions} We have considered the problem of the existence of a Lagrangian description for the motion of a charged particle in the presence of a smooth distribution of magnetic monopoles. The magnetic field does not admit a potential on the physical configuration space. 
Auxiliary variables are employed in order to solve the problem, following a procedure commonly used to deal with dissipative dynamics. This is the Lagrangian counterpart of the Hamiltonian problem, addressed in \cite{Kupriyanov:2018xji}, where the Bianchi identity violating magnetic field entails a quasi-Poisson algebra on the physical phase space which does not satisfy the Jacobi identity unless one doubles the number of degrees of freedom. The problem was further extended to the relativistic case, as well as to the non-Abelian case. In the last section, we performed the generalization of the relativistic point-particle action \eqn{flecvrntL} to that of an open string interacting, once again, with a Bianchi identity violating magnetic field. In order to circumvent the problem of the lack of a vector potential, the world-sheet degrees of freedom have been doubled, analogously to what is done in double field theory. Many interesting issues can be addressed, such as a possible relationship with double field theory, or the quantization problem, which relates Jacobi violation to non-associativity of the quantum algebra. We plan to investigate these aspects in a forthcoming publication. \bigskip \noindent{\bf Acknowledgements}. G.M. is a member of the Gruppo Nazionale di Fisica Matematica (INDAM), Italy. He acknowledges the support provided by the Santander/UC3M Excellence Chair Programme 2019/2020; he also acknowledges financial support from the Spanish Ministry of Economy and Competitiveness, through the Severo Ochoa Programme for Centres of Excellence in R\&D (SEV-2015/0554). \bigskip
\section{Introduction} Given a simple graph $G$, we write $G \rightarrow (a_1,\dots,a_k)^e$ and say that $G$ \emph{arrows} $(a_1,\dots,a_k)^e$ if for every edge $k$-coloring of $G$, a monochromatic $K_{a_i}$ is forced for some color $i \in \{1,\dots,k\}$. Likewise, for graphs $F$ and $H$, $G\rightarrow (F,H)^e$ if for every edge 2-coloring of $G$, a monochromatic $F$ is forced in the first color or a monochromatic $H$ is forced in the second. Define $\mathcal{F}_e(a_1,\dots,a_k;p)$ to be the set of all graphs that arrow $(a_1,\dots,a_k)^e$ and do not contain $K_p$; they are often called Folkman graphs. The edge Folkman number $F_e(a_1,\dots,a_k;p)$ is the smallest order of a graph that is a member of $\mathcal{F}_e(a_1,\dots,a_k;p)$. In 1970, Folkman \cite{Folkman1970} showed that for $k > \max{\{s,t\}}$, $F_e(s,t;k)$ exists. The related problem of vertex Folkman numbers, where vertices are colored instead of edges, is more studied \cite{Luczak2001,Nenov2003} than edge Folkman numbers, but we will not be discussing them. Therefore, we will skip the use of the superscript $e$ when discussing arrowing, as it is usually used to distinguish between edge and vertex colorings. In 1967, Erd\H{o}s and Hajnal \cite{Erdos1967} asked the question: Does there exist a $K_4$-free graph that is not the union of two triangle-free graphs? This question is equivalent to asking for the existence of a $K_4$-free graph such that in any edge 2-coloring, a monochromatic triangle is forced. After Folkman proved the existence of such a graph, the question then became to find how small this graph could be, or using the above notation, what is the value of $F_e(3,3;4)$. Prior to this paper, the best known bounds for this case were $19 \leq F_e(3,3;4) \leq 941$ \cite{Radziszowski2007, Dudek2008}. 
Folkman numbers are related to Ramsey numbers $R(s,t)$, which are defined as the least positive $n$ such that any 2-coloring of the edges of $K_n$ yields a monochromatic $K_s$ in the first color or a monochromatic $K_t$ in the second color. Using the arrowing operator, it is clear that $R(s,t)$ is the smallest $n$ such that $K_n \rightarrow (s,t)$. The known values and bounds for various types of Ramsey numbers are collected and regularly updated by the second author \cite{Radziszowski2011}. We will be using standard graph theory notation: $V(G)$ and $E(G)$ for the vertex and edge sets of graph $G$, respectively. A \emph{cut} is a partition of the vertices of a graph into two sets, $S \subset V(G)$ and $\overline{S}=V(G)\setminus S$. The \emph{size} of a cut is the number of edges that join the two sets, that is, $\abs{ \{ \{u,v\} \in E(G)\; | \; u \in S \text{ and } v \in \overline{S} \} }$. MAX-CUT is a well-known \textbf{NP}-hard combinatorial optimization problem which asks for the maximum size of a cut of a graph. \newpage \section{History of $F_e(3,3;4)$} \begin{table}[h] \centering { \renewcommand{\arraystretch}{1.2} \begin{tabular}{ l@{$\;\;\;$} | r@{ -- } l| l r }\hline Year & \multicolumn{2}{m{2.7cm}|}{\centering Lower/Upper Bounds} & Who/What & Ref.\\ \hline 1967 & \multicolumn{2}{c|}{any?$\quad$} & Erd\H{o}s-Hajnal &\cite{Erdos1967} \\ 1970 & \multicolumn{2}{c|}{exist$\quad$} & Folkman &\cite{Folkman1970}\\ 1972 & 10 & & Lin &\cite{Lin1972}\\ 1975 & & $10^{10}$? & Erd\H{o}s offers \$100 for proof &\\ 1986 & & $8 \times 10^{11}$ & Frankl-R\"{o}dl &\cite{Frankl1986}\\ 1988 & & $3\times 10^9$ & Spencer &\cite{Spencer1988} \\ 1999 & $\;\; 16$ & & Piwakowski et al. (implicit) &\cite{Piwakowski1999}\\ 2007 & 19 & & Radziszowski-Xu &\cite{Radziszowski2007}\\ 2008 & & 9697 & Lu &\cite{Lu2008}\\ 2008 & & 941 & Dudek-R\"{o}dl &\cite{Dudek2008} \\ 2012 & & 786 & this work &\\ 2012 & & 100? 
& Graham offers \$100 for proof &\\ \hline \end{tabular} } \caption{Timeline of progress on $F_e(3,3;4)$.} \label{tab:hist} \end{table} Table \ref{tab:hist} summarizes the events surrounding $F_e(3,3;4)$, starting with Erd\H{o}s and Hajnal's \cite{Erdos1967} original question of existence. After Folkman \cite{Folkman1970} proved the existence, Erd\H{o}s, in 1975, offered \$100 for deciding if $F_e(3,3;4)<10^{10}$. This question remained open for over 10 years. Frankl and R\"{o}dl \cite{Frankl1986} nearly met Erd\H{o}s' request in 1986 when they showed that $F_e(3,3;4) < 7.02 \times 10^{11}$. In 1988, Spencer \cite{Spencer1988}, in a seminal paper using probabilistic techniques, proved the existence of a Folkman graph of order $3\times 10^9$ (after an erratum by Hovey), without explicitly constructing it. In 2007, Lu showed that $F_e(3,3;4)\leq 9697$ by constructing a family of $K_4$-free circulant graphs (which we discuss in Section \ref{sec:lu}) and showing that some such graphs arrow $(3,3)$ using spectral analysis. Later, Dudek and R\"{o}dl reduced the upper bound to the best known to date, $941$. Their method, which we have pursued further with some success, is discussed in the next section. The lower bound for $F_e(3,3;4)$ was much less studied than the upper bound. Lin \cite{Lin1972} obtained a lower bound of $10$ in 1972 without the help of a computer. All 659 graphs on 15 vertices witnessing $F_e(3,3;5)=15$ \cite{Piwakowski1999} contain $K_4$, thus giving the bound $16 \leq F_e(3,3;4)$. In 2007, two of the authors of this paper gave a computer-free proof of $18 \leq F_e(3,3;4)$ and improved the lower bound further to $19$ with the help of computations \cite{Radziszowski2007}. The long history of $F_e(3,3;4)$ is not only interesting in itself but also gives insight into how difficult the problem is.
Finding good bounds on the smallest order of any Folkman graph (with fixed parameters) seems to be difficult, and some related Ramsey graph coloring problems are \textbf{NP}-hard or lie even higher in the polynomial hierarchy. For example, Burr \cite{Burr1976} showed that arrowing $(3,3)$ is $\mathbf{coNP}$-complete, and Schaefer \cite{Schaefer2001} showed that for general graphs $F$, $G$, and $H$, $F\rightarrow (G,H)$ is $\mathbf{\Pi^{\mathrm{P}}_2}$-complete. \section{Arrowing via MAX-CUT} Building on Spencer's and other methods, Dudek and R\"{o}dl \cite{Dudek2008} in 2008 showed how to construct a graph $H_G$ from a graph $G$, such that the maximum size of a cut of $H_G$ determines whether or not $G \rightarrow (3,3)$. They construct the graph $H_G$ as follows. The vertices of $H_G$ are the edges of $G$, so $\abs{V(H_G)}=\abs{E(G)}$. For $e_1,e_2 \in V(H_G)$, if edges $\{e_1,e_2,e_3\}$ form a triangle in $G$, then $\{e_1,e_2\}$ is an edge in $H_G$. Let $t_\triangle(G)$ denote the number of triangles in graph $G$. Clearly, $\abs{E(H_G)}=3t_\triangle(G)$. Let $MC(H)$ denote the MAX-CUT value of graph $H$. \bigskip \begin{theorem}[Dudek and R\"{o}dl \cite{Dudek2008}] \label{th:mc} $G \rightarrow (3,3)$ if and only if\\ $MC(H_G) < 2t_\triangle(G)$. \end{theorem} There is a clear intuition behind Theorem \ref{th:mc} that we will now describe. Any edge $2$-coloring of $G$ corresponds to a bipartition of the vertices in $H_G$. If a triangle colored in $G$ is not monochromatic, then its three edges, which are vertices of $H_G$, will be separated in the bipartition. If we treat this bipartition as a cut, then the size of the cut counts each such triangle twice, once for each of the two edges that cross it. Since two given edges lie in at most one common triangle, this effectively counts the number of non-monochromatic triangles.
Therefore, if it is possible to find a cut that has size equal to $2t_\triangle(G)$, then such a cut defines an edge coloring of $G$ that has no monochromatic triangles. However, if $MC(H_G)<2t_\triangle(G)$, then in each coloring, all three edges of some triangle are in one part and thus, $G \rightarrow (3,3)$. A benefit of converting the problem of arrowing $(3,3)$ to MAX-CUT is that the latter is well-known and has been studied extensively in computer science and mathematics (see for example \cite{Commander2009}). The decision problem MAX-CUT$(H,k)$ asks whether or not $MC(H)\geq k$. It is known that MAX-CUT is \textbf{NP}-hard and this decision problem was one of Karp's 21 \textbf{NP}-complete problems \cite{Karp1972}. In our case, $G\rightarrow (3,3)$ if and only if MAX-CUT$\left(H_G,2t_\triangle(G)\right)$ doesn't hold. Since MAX-CUT is \textbf{NP}-hard, an attempt is often made to approximate it, such as in the approaches presented in the next two sections. \subsection{Minimum Eigenvalue Method} A method exploiting the minimum eigenvalue was used by Dudek and R\"{o}dl \cite{Dudek2008} to show that some large graphs are members of $\mathcal{F}_e(3,3;4)$. The following upper bound \eqref{eq:mineig} on $MC(H_G)$ can be found in \cite{Dudek2008}, where $\lambda_{\text{min}}$ denotes the minimum eigenvalue of the adjacency matrix of $H_G$. \begin{equation}\label{eq:mineig} MC(H_G) \leq \frac{\abs{E(H_G)}}{2} - \frac{\lambda_{\text{min}}\abs{V(H_G)}}{4}. \end{equation} For positive integers $r$ and $n$, if $-1$ is an $r$-th residue modulo $n$, then let $G(n,r)$ be a circulant graph on $n$ vertices with the vertex set $\mathds{Z}_n$ and the edge set $E(G(n,r)) = \{ \{u,v\} \; | \; u \neq v \text{ and } u-v \equiv \alpha^r \bmod{n}, \text{ for some } \alpha \in \mathds{Z}_n \}$. The graph $G_{941}=G(941,5)$ has 707632 triangles. 
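As an aside, the construction of $H_G$ and the test of Theorem \ref{th:mc} are easy to prototype. The following Python sketch (ours, not from \cite{Dudek2008}) uses brute-force MAX-CUT, which is exponential and therefore only a sanity check on tiny graphs, far from applicable to graphs such as $G_{941}$. It confirms that $K_5 \not\rightarrow (3,3)$ while $K_6 \rightarrow (3,3)$, consistent with $R(3,3)=6$.

```python
from itertools import combinations

def triangle_graph(n, edges):
    """Dudek-Rodl construction: V(H_G) = E(G); two edges of G are
    adjacent in H_G iff they lie in a common triangle of G."""
    eset = set(edges)
    H = {e: set() for e in eset}
    tri = 0
    for a, b, c in combinations(range(n), 3):
        t = [(a, b), (a, c), (b, c)]
        if all(e in eset for e in t):
            tri += 1
            for e1, e2 in combinations(t, 2):
                H[e1].add(e2)
                H[e2].add(e1)
    return H, tri

def max_cut(H):
    """Exact MAX-CUT by exhaustion (exponential; tiny graphs only)."""
    verts = sorted(H)
    best = 0
    for mask in range(1 << (len(verts) - 1)):  # fix one vertex's side
        side = {v: (mask >> i) & 1 for i, v in enumerate(verts[:-1])}
        side[verts[-1]] = 0
        # each crossing edge is seen from both endpoints, hence // 2
        cut = sum(side[u] != side[v] for v in verts for u in H[v]) // 2
        best = max(best, cut)
    return best

def arrows_33(n, edges):
    H, tri = triangle_graph(n, edges)
    return max_cut(H) < 2 * tri  # Theorem (Dudek and Rodl)

K = lambda n: list(combinations(range(n), 2))  # edge list of K_n
print(arrows_33(5, K(5)), arrows_33(6, K(6)))  # False True, since R(3,3)=6
```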
Using the MATLAB \cite{MATLAB2011} {\tt eigs} function, Dudek and R\"{o}dl \cite{Dudek2008} computed \begin{equation*} MC(H_{G_{941}})\leq 1397484 < 1415264=2t_\triangle(G_{941}). \end{equation*} Thus, by Theorem \ref{th:mc}, $G_{941} \rightarrow (3,3)$. \vspace*{1em} In an attempt to improve $F_e(3,3;4) \leq 941$, we tried removing vertices of $G_{941}$ to see if the minimum eigenvalue bound would still show arrowing. We applied multiple strategies for removing vertices, including removing neighborhoods of vertices, randomly selected vertices, and independent sets of vertices. Most of these strategies were successful, and led to the following theorem: \bigskip \begin{theorem}\label{th:gc} $F_e(3,3;4) \leq 860$. \end{theorem} \vspace*{.5em} \noindent \textbf{Proof.} For a graph $G$ with vertices $\mathds{Z}_n$, define $C=C(d,k) = \{ v \in V(G)\;|\; v=id \bmod{n}, \text{ for } 0 \leq i < k\}$. Let $G=G_{941}$, $d=2$, $k=81$, and $G_C$ be the graph induced on $V(G) \setminus C(d,k)$. Then $G_C$ has 860 vertices, 73981 edges and 542514 triangles. Using the MATLAB {\tt eigs} function, we obtain $\lambda_{\text{min}} \approx -14.663012$. Substituting the bound $\lambda_{\text{min}} > -14.664$ into \eqref{eq:mineig} gives \begin{equation} MC(H_{G_C}) < 1084985 < 1085028=2t_\triangle(G_C). \end{equation} \noindent Therefore, $G_C \rightarrow (3,3)$. $\Box$ \vspace*{1em} None of the methods used allowed for $82$ or more vertices to be removed without the upper bound on $MC$ becoming larger than $2t_\triangle$.
However, in our case the approximation guarantee is not needed: any valid upper bound on the optimum of the SDP relaxation also bounds $MC(H)$ from above, since relaxing can only increase the optimal value. A brief description of the Goemans-Williamson relaxation follows. The first step in relaxing MAX-CUT is to represent the problem as a quadratic integer program. Given a graph $H$ with $V(H)=\{1,\dots,n\}$ and nonnegative weights $w_{i,j}$ for each pair of vertices $\{i,j\}$, we can write $MC(H)$ as the following objective function: \begin{align} \text{Maximize}\quad &\frac{1}{2}\sum_{i<j}w_{i,j}(1 - y_iy_j) \label{eq:quadratic}\\ \text{subject to:}\quad &y_i \in \{-1,1\} \quad \text{for all } i \in V(H). \notag \end{align} Define one part of the cut as $S=\{i\; |\; y_i = 1\}$. Since in our case all graphs are unweighted, we will use \begin{equation*} w_{i,j} = \begin{cases} 1 & \text{if } \{i,j\} \in E(H),\\ 0 & \text{otherwise}. \end{cases} \end{equation*} Next, the integer program \eqref{eq:quadratic} is relaxed by extending the problem to higher dimensions. Each $y_i \in \{-1,1\}$ is now replaced with a vector on the unit sphere $\mathbf{v}_i \in \mathds{R}^n$, as follows: \begin{align} \text{Maximize}\quad &\frac{1}{2}\sum_{i<j}w_{i,j}(1 - \mathbf{v}_i\cdot \mathbf{v}_j) \label{eq:relax}\\ \text{subject to:}\quad &\norm{\mathbf{v}_i} = 1 \quad \text{for all } i \in V(H). \notag \end{align} If we define a matrix $Y$ with the entries $y_{i,j}=\mathbf{v}_i\cdot\mathbf{v}_j$, that is, the Gram matrix of $\mathbf{v}_1,\dots,\mathbf{v}_n$, then $y_{i,i}=1$ and $Y$ is positive semidefinite. Therefore, \eqref{eq:relax} is a semidefinite program. \vspace*{1em} \subsection{Some Cases of Arrowing} \label{sec:lu} Using the Goemans-Williamson approach, we tested a wide variety of graphs for arrowing by finding upper bounds on MAX-CUT. These graphs included the $G(n,r)$ graphs tested by Dudek and R\"{o}dl, similar circulant graphs based on the Galois fields $GF(p^k)$, and random graphs.
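Before describing the tested graphs further, the gap between \eqref{eq:quadratic} and \eqref{eq:relax} can be seen already on the smallest interesting example, the triangle $K_3$ (our illustration, not part of the reported computations). The integer program attains $MC(K_3)=2$, while the feasible relaxation point placing the three unit vectors at mutual angles of $120^\circ$ attains $9/4$, showing that the relaxation value can strictly exceed the true cut size:

```python
from itertools import product
from math import cos, pi

edges = [(0, 1), (0, 2), (1, 2)]  # the triangle K_3

# Integer program: exact MAX-CUT over all assignments y_i in {-1, +1}.
mc = max(sum((1 - y[i] * y[j]) / 2 for i, j in edges)
         for y in product((-1, 1), repeat=3))

# One feasible point of the vector relaxation: unit vectors in the plane
# at mutual angle 120 degrees, so v_i . v_j = cos(120 deg) = -1/2.
dot = cos(2 * pi / 3)
sdp_val = sum((1 - dot) / 2 for _ in edges)

print(mc, sdp_val)  # 2.0 and approximately 2.25
```

For the graphs studied in this paper the relaxation is of course handled numerically by SDP solvers rather than by hand.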
Various modifications of these graphs were also considered, including the removal and/or addition of vertices and/or edges, as well as copying or joining multiple candidate graphs together in various ways. We tested the graph $G_C$ of Theorem \ref{th:gc} and obtained the upper bound $MC(H_{G_C}) \leq 1077834$, a significant improvement over the bound $1084985$ obtained from the minimum eigenvalue method. This provides further evidence that $G_C \rightarrow (3,3)$, and is an example where \eqref{eq:relax} yields a much better upper bound. Multiple SDP solvers that were designed \cite{Burer2003,Helmberg2000} to handle large-scale SDP and MAX-CUT problems were used for the tests. Specifically, we made use of a version of {\tt SDPLR} by Samuel Burer \cite{Burer2003}, a solver that uses low-rank factorization. The version {\tt SDPLR-MC} includes specialized code for the MAX-CUT SDP relaxation. {\tt SBmethod} by Christoph Helmberg \cite{Helmberg2000} implements a spectral bundle method and was also applied successfully in our experiments. In all cases where more than one solver was used, the same results were obtained. The type of graph that led to the best results was described by Lu \cite{Lu2008}. For positive integers $n$ and $s$, $s<n$, $s$ relatively prime to $n$, define the set $S = \{ s^i \bmod{n} \; | \; i=0,1,\dots,m-1\}$, where $m$ is the smallest positive integer such that $s^m \equiv 1 \bmod{n}$. If $-1 \bmod{n} \in S$, then let $L(n,s)$ be a circulant graph on $n$ vertices with $V(L(n,s))=\mathds{Z}_n$. For vertices $u$ and $v$, $\{u,v\}$ is an edge of $L(n,s)$ if and only if $u-v \in S$. Note that the condition that $-1 \bmod{n} \in S$ implies that if $u-v \in S$ then $v-u \in S$. In Table 1 of \cite{Lu2008}, a set of potential members of $\mathcal{F}_e(3,3;4)$ of the form $L(n,s)$ was listed, and the graph $L(9697,4)$ was shown to arrow $(3,3)$.
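The construction of $L(n,s)$ is straightforward to implement. The following Python sketch (ours) builds the connection set $S$ and verifies by exhaustive search that the small graph $L(17,2)$, one of the graphs examined by Exoo, is $K_4$-free; its set $S$ consists exactly of the quadratic residues modulo 17, so $L(17,2)$ is the Paley graph of order 17, which has clique number 3.

```python
from itertools import combinations

def connection_set(n, s):
    """S = {s^i mod n}; L(n,s) is defined only when -1 mod n lies in S."""
    S, x = set(), 1
    while True:
        S.add(x)
        x = x * s % n
        if x == 1:
            break
    assert (n - 1) in S, "need -1 mod n in S so the graph is undirected"
    return S

def is_k4_free(n, S):
    """Check every 4-subset of Z_n for a clique in the circulant graph."""
    adj = lambda u, v: (u - v) % n in S
    return not any(all(adj(u, v) for u, v in combinations(q, 2))
                   for q in combinations(range(n), 4))

S = connection_set(17, 2)
print(sorted(S))          # [1, 2, 4, 8, 9, 13, 15, 16]: quadratic residues mod 17
print(is_k4_free(17, S))  # True
```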
Lu gave credit to Exoo for showing that $L(17,2)$, $L(61,8)$, $L(79,12)$, $L(421,7)$, and $L(631,24)$ do not arrow $(3,3)$. \vspace*{1em} We tested all graphs from Table 1 of \cite{Lu2008} of order less than 941 with the MAX-CUT method, using both the minimum eigenvalue and SDP upper bounds. Table \ref{tab:results} lists the results. Note that although none of the computed upper bounds of the $L(n,s)$ graphs imply arrowing $(3,3)$, all SDP bounds match the minimum eigenvalue bounds. This is distinct from other families of graphs, including those in \cite{Dudek2008}, as the SDP bound is usually tighter. Thus, these graphs were given further consideration. \begin{table}[h] \centering { \renewcommand{\arraystretch}{1.2} \begin{tabular}{ | c | r | r | r | } \hline $G$ & \multicolumn{1}{c|}{$2t_\triangle(G)$} & \multicolumn{1}{c|}{$\lambda_{\text{min}}$} & \multicolumn{1}{c|}{SDP}\\ \hline\hline $L(127,5)$ & 19558 & 20181 & 20181 \\ $L(457,6)$ & 347320 & 358204 & 358204 \\ $L(761,3)$ & 694032 & 731858 & 731858 \\ $L(785,53)$ & 857220 & 857220 & 857220 \\ \hline $G_{786}$ & 857762 & 857843 & 857753 \\ \hline \end{tabular} } \caption{Potential $\mathcal{F}_e(3,3;4)$ graphs $G$ and upper bounds on $MC(H_G)$, where ``$\lambda_{\text{min}}$'' is the bound \eqref{eq:mineig} and ``SDP'' is the solution of \eqref{eq:relax} from {\tt SDPLR-MC} and {\tt SBmethod}. $G_{786}$ is the graph of Theorem \ref{th:786}.} \label{tab:results} \end{table} $L(127,5)$ was given particular attention, as it is the same graph as $G_{127}$, where $V(G_{127})=\mathds{Z}_{127}$ and $E(G_{127})=\{ \{x,y\} \; | \; x-y \equiv \alpha^3 \bmod{127}, \text{ for some } \alpha \in \mathds{Z}_{127} \}$ (that is, the graph $G(127,3)$ as defined in the previous section). It has been conjectured by Exoo that $G_{127} \rightarrow (3,3)$. He also suggested that subgraphs induced on fewer than 100 vertices of $G_{127}$ may arrow $(3,3)$ as well. For more information on $G_{127}$ see \cite{Radziszowski2007}.
Numerous attempts were made at modifying these graphs in hopes that one of the MAX-CUT methods would be able to prove arrowing. Indeed, we were able to do so with $L(785,53)$. Notice that all of the upper bounds for $MC(H_{L(785,53)})$ are $857220$, the same as $2t_\triangle\left(L(785,53)\right)$. Our goal was then to slightly modify $L(785,53)$ so that this value becomes smaller. Let $G_{786}$ denote the graph $L(785,53)$ with one additional vertex connected to the following 60 vertices: \vspace*{1em} \begin{minipage}{\textwidth} \centering { \small \begin{verbatim} { 0, 1, 3, 4, 6, 7, 9, 10, 12, 13, 15, 16, 18, 19, 21, 22, 24, 25, 27, 28, 30, 31, 33, 34, 36, 37, 39, 40, 42, 43, 45, 46, 48, 49, 51, 52, 54, 55, 57, 58, 60, 61, 63, 66, 69, 201, 204, 207, 210, 213, 216, 219, 222, 225, 416, 419, 422, 630, 642, 645 } \end{verbatim} } \end{minipage} \vspace*{1em} $G_{786}$ is still $K_4$-free, has 61290 edges, and has 428881 triangles. The upper bound computed from the SDP solvers for $MC(H_{G_{786}})$ is 857753. We did not find a nice description for the vectors of this solution. Software implementing {\tt SpeeDP} by Grippo et al. \cite{Grippo2010b}, an algorithm designed to solve large MAX-CUT SDP relaxations, was used by Rinaldi (one of the authors of \cite{Grippo2010b}) to analyze this graph. He was able to obtain the bounds $857742 \leq MC(H_{G_{786}}) \leq 857750$, which agrees with, and improves over our upper bound computation. Since $2t_\triangle(G_{786}) = 857762$, we have both from our tests and his {\tt SpeeDP} test that $G_{786} \rightarrow (3,3)$, and the following main result. \bigskip \begin{theorem}\label{th:786} $F_e(3,3;4) \leq 786.$ \end{theorem} We note that finding a lower bound on MAX-CUT, such as the $857742 \leq MC(H_{G_{786}})$ bound from {\tt SpeeDP}, follows from finding an actual cut of a certain size. This method may be useful, as finding a cut of size $2t_\triangle(G)$ shows that $G \not\rightarrow (3,3)$. 
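The counts quoted above for $L(785,53)$ can be rechecked directly from the circulant structure (our sketch, assuming the construction of Section \ref{sec:lu}): the degree of every vertex is $|S|$, where $|S|$ is the multiplicative order of 53 modulo 785, and every triangle is counted six times when scanning ordered pairs of adjacent neighbors of a base vertex (3 base vertices $\times$ 2 orderings). This reproduces $61290-60=61230$ edges and $2t_\triangle = 857220$ for $L(785,53)$, as in Table \ref{tab:results}.

```python
n, s = 785, 53

# Connection set S = {s^i mod n}; in L(n,s), u ~ v iff (u - v) mod n in S.
S, x = set(), 1
while True:
    S.add(x)
    x = x * s % n
    if x == 1:
        break
assert (n - 1) in S  # -1 mod n lies in S, so L(785,53) is defined

edges = n * len(S) // 2  # vertex-transitive graph of degree |S|

# Ordered pairs (a, b) of neighbors of vertex 0 that are themselves
# adjacent; each triangle is counted 6 times this way.
pairs = sum(1 for a in S for b in S if (a - b) % n in S)
triangles = n * pairs // 6

print(len(S), edges, 2 * triangles)  # expected: 156 61230 857220
```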
\section{Tasks to Complete} Improving the upper bound $F_e(3,3;4) \leq 786$ is the main challenge. The question of whether $G_{127} \rightarrow (3,3)$ is still open, and any method that could solve it would be of much interest. During the 2012 SIAM Conference on Discrete Mathematics in Halifax, Nova Scotia, Ronald Graham announced a \$100 award for determining if $F_e(3,3;4) < 100$. Another open question is the lower bound on $F_e(3,3;4)$; it is quite puzzling that the best known lower bound is only 19. Even an improvement to $20 \leq F_e(3,3;4)$ would be good progress. \section{Acknowledgments} The third author is supported by the Guangxi Natural Science Foundation (2011GXNSFA018142). We would like to thank Giovanni Rinaldi and Luigi Grippo for their enthusiastic aid in the computation of MAX-CUT bounds with their {\tt SpeeDP} algorithm \cite{Grippo2010b}. We would also like to thank the referee for the helpful comments. \bibliographystyle{plain}
\section{Motivation and background}\label{intro} \setcounter{equation}{0} It was argued \cite{Zibi2011} that the Lema\^{\i}tre \cite{Lema1933} -- Tolman \cite{Tolm1934} (L--T) models with nonconstant bang-time function $t_B(r)$ are ``ruled out'' because of spectral distortions of light received by the present central observer that such $t_B$ would cause. The spectral distortions are expected to arise when blueshifted rays emitted close to those points of the Big Bang (BB) where $\dril {t_B} r \neq 0$ intersect the past light cone of the present central observer (PCPO). Investigating the blueshifts and possible spectral distortions caused by them is a valid research problem. However, although Ref.\ \cite{Zibi2011} arose out of criticism of the so-called ``void models of acceleration'', the particular L--T model considered there did not correspond to any real situation in the Universe (see Appendix \ref{critique}).\footnote{The author of Ref.\ \cite{Zibi2011} claimed that he ruled out L--T models ``with significant decaying mode contribution today''. But the further comments leave the reader with the impression that the whole L--T class was killed, even though the assumptions actually made ($E = 0$ and a specific $t_B(r)$ profile) were strongly restrictive.} In the present paper, the blueshifts are investigated in a general L--T model, and in the model that was derived in Ref.\ \cite{Kras2014} by the method introduced in Ref.\ \cite{INNa2002}. In the latter, called here the L--T$(t_B)$ model, the accelerated expansion of the Universe is simulated using a suitably (numerically) constructed nonconstant $t_B(r)$, with the other L--T free function, $E(r)$, having the same form as in the Friedmann model: $E = -k r^2/2$, where $k$ is a constant. The equations that ensure the simulation imply a unique value of $k < 0$, see Sec.\ \ref{LTintro}. In an L--T model with nonconstant $t_B$, blueshifts are generated only close to the BB \cite{Szek1980,HeLa1984}.
Observers who carry out their observations far from the BB see light from nearby objects being redshifted. However, light rays emitted from the BB at those points, where $\dril {t_B} r \neq 0$, behave in a peculiar way. When a radially directed ray of this class is followed back in time beginning at a late-epoch observer, the redshift along it at first increases from zero, but reaches a maximum, then decreases to become negative (i.e. to turn to blueshift) at some point, to finally become $-1$ at the contact with the BB. The value $z = -1$ is referred to as {\it infinite blueshift} \cite{Szek1980,HeLa1984}, see Sec.\ \ref{LTintro}. The locus, where the redshift along radial rays acquires a maximum, will be termed maximum-redshift hypersurface (MRH). The locus, where the observed redshift along these rays turns to blueshift, will be termed zero-redshift hypersurface (ZRH). To display blueshift to the observer, a ray must build up a sufficiently large blueshift \textit{before} it intersects ZRH, in order to offset the redshift accumulated in the later part of the path. Thus, along any ray, the MRH is later than the ZRH. Along radial rays, the MRH is observer-independent (see Sec.\ \ref{zmax}). The ZRH is defined only with respect to a given family of observers, and is different for each family. If ZRH is closer to the BB than the last-scattering hypersurface, then the blueshifts are hidden in the pre-recombination era, where the zero-pressure L--T models do not apply. This is why it is important to locate the ZRH in spacetime, relative to those events, where spectral distortions would be observable; namely, events along the ray emitted from the last-scattering hypersurface that reaches the central observer at present. An L--T metric is never meant to be a global model of the whole Universe. Any given L--T metric is always meant to model a limited part of our spacetime. 
It should be matched to a background metric modelling the rest of the Universe, for example a Friedmann metric. In the case of the L--T models that simulate accelerated expansion, it makes sense to apply them only out to such distances, at which the type Ia supernovae (SNIa) are observed. At present, the largest redshift observed for a type Ia supernova is $z \approx 1.9$ \cite{Jone2014}; for the supernovae included in the two original projects \cite{Ries1998,Perl1999}, the largest $z$ was 0.83 \cite{Perl1999}. The plan of the paper is as follows. Section \ref{LTintro} recalls basic properties of the L--T models. In Sec.\ \ref{genLT}, it is shown that in a general L--T model, $\left|\dril {t_B} r\right|$ can always be made sufficiently small to hide the blueshifts in the pre-recombination epoch, where the assumption of zero pressure is not realistic, so the L--T model cannot apply. The rest of the paper is devoted to the L--T($t_B$) model only. In Sec.\ \ref{zmax}, the MRH of the central observer is determined and displayed. In Sec.\ \ref{numcone}, the profiles of redshift are calculated along a few characteristic radial rays intersecting the PCPO. In Sec.\ \ref{matching}, the L--T($t_B$) model is matched to the Friedmann model across the matter world-tube that intersects the PCPO at $z = 0.83$, and profiles of redshift/blueshift along a few characteristic light cones are displayed. Section \ref{conclu} contains conclusions and a summary. In Appendix \ref{critique}, deficiencies of the model used in Ref.\ \cite{Zibi2011} are presented. \section{The Lema\^{\i}tre -- Tolman models}\label{LTintro} \setcounter{equation}{0} This is a summary of basic facts about the L--T models. For extended expositions see Refs.\ \cite{Kras1997,PlKr2006}. \subsection{General facts} The numbering of the coordinates will be $(x^0,$ $x^1,$ $x^2,$ $x^3) = (t, r, \vartheta, \varphi)$ and the signature will be $(+ - - -)$. 
The metric of the model is \begin{equation}\label{2.1} {\rm d} s^2 = {\rm d} t^2 - \frac {{R_{,r}}^2}{1 + 2E(r)}{\rm d} r^2 - R^2(t,r)({\rm d}\vartheta^2 + \sin^2\vartheta \, {\rm d}\varphi^2), \end{equation} where $E(r)$ is an arbitrary function. The source in the Einstein equations is dust, i.e. a pressureless fluid. The (geodesic) velocity field of the dust is \begin{equation}\label{2.2} u^{\alpha} = {\delta^{\alpha}}_0. \end{equation} The function $R(t, r)$ is determined by \begin{equation}\label{2.3} {R_{,t}}^2 = 2E(r) + 2M(r) / R, \end{equation} $M(r)$ being another arbitrary function; we neglect the cosmological constant. Throughout this paper only expanding models ($R,_t > 0$) will be considered. The solutions of (\ref{2.3}) may be written as follows: When $E > 0$: \begin{eqnarray}\label{2.4} R(t,r) &=& \frac M {2E} (\cosh \eta - 1), \nonumber \\ \sinh \eta - \eta &=& \frac {(2E)^{3/2}} M \left[t - t_B(r)\right]; \end{eqnarray} When $E = 0$: \begin{equation}\label{2.5} R(t,r) = \left\{\frac 9 2 M(r) \left[t - t_B(r)\right]^2\right\}^{1/3}; \end{equation} When $E(r) < 0$: \begin{eqnarray}\label{2.6} R(t,r) &=& - \frac M {2E} (1 - \cos \eta), \nonumber \\ \eta - \sin \eta &=& \frac {(-2E)^{3/2}} M \left[t - t_B(r)\right]; \end{eqnarray} where $t_B(r)$ is one more arbitrary function called the bang time. The Big Bang occurs at $t = t_B(r)$. The mass density is \begin{equation} \label{2.7} \kappa \rho = \frac {2{M_{,r}}}{R^2R_{,r}}, \qquad \kappa \df \frac {8\pi G} {c^2}. \end{equation} Equations (\ref{2.1}) -- (\ref{2.7}) are covariant with the transformations $r \to r' = f(r)$, which may be used to give one of the functions $(M, E, t_B)$ a handpicked form, in the range where it is monotonic. In this paper, $M,_r > 0$ is assumed, and the following choice of $r$ is made: \begin{equation}\label{2.8} M = M_0 r^3, \end{equation} where $M_0 > 0$ is an arbitrary constant. 
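Evaluating $R(t,r)$ from (\ref{2.4}) at a given $(t,r)$ requires inverting $\sinh\eta - \eta$, which is strictly increasing for $\eta > 0$, so bisection suffices. A minimal Python sketch with illustrative values of $M$, $E$, $t_B$ (not taken from the paper):

```python
import math

def R_hyperbolic(t, M, E, tB):
    """Solve (2.4): sinh(eta) - eta = (2E)^{3/2} (t - tB) / M for eta,
    then return R = (M / 2E)(cosh(eta) - 1).  Assumes E > 0 and t > tB."""
    rhs = (2.0 * E) ** 1.5 * (t - tB) / M
    lo, hi = 0.0, 1.0
    while math.sinh(hi) - hi < rhs:     # bracket the root
        hi *= 2.0
    for _ in range(200):                # bisection on the increasing function
        mid = 0.5 * (lo + hi)
        if math.sinh(mid) - mid < rhs:
            lo = mid
        else:
            hi = mid
    eta = 0.5 * (lo + hi)
    return (M / (2.0 * E)) * (math.cosh(eta) - 1.0)

# consistency check: choose eta = 1, reconstruct t, recover R
M, E, tB = 1.0, 0.5, 0.0
t = tB + M * (math.sinh(1.0) - 1.0) / (2.0 * E) ** 1.5
R = R_hyperbolic(t, M, E, tB)           # should equal (M/2E)(cosh(1) - 1)
```

The elliptic case (\ref{2.6}) is handled the same way on $0 \le \eta \le \pi$, where $\eta - \sin\eta$ is likewise increasing.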
The transformations $r = Cr'$, with $C =$ constant, are still allowed, and they redefine $M_0$ by $M_0 = M'_0 / C^3$. So, we can assume $M_0 = 1$. Note that $M_0$ has the dimension of length and represents mass, so the choice of its value amounts to choosing a unit of mass -- see Sec.\ \ref{numerunits}. A radial null geodesic is determined by the equation \begin{equation}\label{2.9} \dr t r = \pm \frac {R_{,r}} {\sqrt{1 + 2E(r)}}, \end{equation} where ``$+$'' applies to future outward-directed and past inward-directed geodesics, and ``$-$'' to the remaining ones. The solution of (\ref{2.9}) is denoted $t = t_{\rm ng}(r)$. The redshift $z(r)$ along $t_{\rm ng}(r)$ is given by \cite{Bond1947}, \cite{PlKr2006} \begin{equation}\label{2.10} \frac 1 {1 + z}\ \dr z r = \left[ \frac {R_{,tr}} {\sqrt{1 + 2E}} \right]_{\rm ng}. \end{equation} At the contact with the BB, null geodesics displayed in the comoving coordinates have horizontal tangents at those points, where $\dril {t_B} r = 0$, and have vertical tangents elsewhere. In the first case, $z \to \infty$ at the BB; in the second case, $z = -1$ at the BB, which is referred to as \textit{infinite blueshift} \cite{Szek1980,HeLa1984}. Indeed, $z < 0$ means that (frequency observed) $>$ (frequency at emission), so (frequency observed) $\to \infty$ when $z \to -1$ and the frequency emitted is finite. However, it should be noted that a vertical tangent to a light ray at the BB, in comoving coordinates, means that matter particles are ejected from the BB with the velocity of light, i.e., a comoving observer at the BB would see zero frequency of the emitted light. So, some interpretation work is required to decide what an infinite blueshift actually means: magnifying a finite frequency to an infinitely hard blow at the observer, or shifting an unobservably soft radiation to the visible range. In all the Friedmann models (which are subcases of L--T), since $t_B$ is constant, $z$ is infinite at the BB. 
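Equations (\ref{2.9}) and (\ref{2.10}) form a coupled ODE system along a radial ray. As a consistency check, in the $E = 0$ Friedmann subcase (constant $t_B$, here set to 0, with $M = M_0 r^3$ and $M_0 = 1$), (\ref{2.5}) gives $R = (9M_0/2)^{1/3}\, r\, t^{2/3}$, and integrating the system along a past-directed outward ray (the ``$-$'' branch of (\ref{2.9})) must reproduce the Einstein--de Sitter relation $1 + z = (t_o/t_e)^{2/3}$. A minimal RK4 sketch with illustrative step sizes:

```python
M0 = 1.0
A = (4.5 * M0) ** (1.0 / 3.0)        # (9 M0 / 2)^{1/3}, with t_B = 0

def rhs(state):
    """Past-directed outward radial ray, E = 0 Friedmann:
    dt/dr = -R_{,r} = -A t^{2/3};  dz/dr = (1+z) R_{,tr} = (1+z)(2/3) A t^{-1/3}."""
    t, z = state
    return (-A * t ** (2.0 / 3.0),
            (1.0 + z) * (2.0 / 3.0) * A * t ** (-1.0 / 3.0))

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs((state[0] + 0.5 * h * k1[0], state[1] + 0.5 * h * k1[1]))
    k3 = rhs((state[0] + 0.5 * h * k2[0], state[1] + 0.5 * h * k2[1]))
    k4 = rhs((state[0] + h * k3[0], state[1] + h * k3[1]))
    return (state[0] + h / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + h / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

t0, h, n = 1.0, 1e-4, 2000           # integrate from r = 0 out to r = 0.2
state = (t0, 0.0)
for _ in range(n):
    state = rk4_step(state, h)
t_e, z = state
# Einstein--de Sitter check: 1 + z should equal (t0 / t_e)^{2/3}
```

In the full L--T case $R_{,r}$ and $R_{,tr}$ come from (\ref{2.11}), and the same integrator applies.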
In a general L--T model we have (\cite{PlKr2006}, eqs. (18.104) and (18.112)): \begin{eqnarray}\label{2.11} R,_r &=& \left(\frac {M,_r} M - \frac {E,_r} E\right)R \\ &+& \left[\left(\frac 3 2 \frac {E,_r} E - \frac {M,_r} M\right) \left(t - t_B\right) - t_{B,r}\right] R,_t \nonumber \end{eqnarray} when $E \neq 0$, and \begin{equation}\label{2.12} R,_r = \frac {M,_r} {3M}\ R - \sqrt{\frac {2M} R} t_{B,r} \end{equation} when $E = 0$. Given a past-directed $t_{\rm ng}(r)$ and $z(r)$, the luminosity distance $D_L(z)$ of a light source from the central observer is \cite{Cele2000,BKHC2010} \begin{equation}\label{2.13} D_L(z) = (1 + z)^2 R\left(t_{\rm ng}(r), r\right). \end{equation} The model of Refs.\ \cite{INNa2002} and \cite{Kras2014}, further investigated here, was constructed so that $D_L(z)$, calculated along the PCPO, is the same as in the $\Lambda$CDM model: \begin{equation}\label{2.14} D_L(z) = \frac {1 + z} {H_0} \int_0^z \frac {{\rm d} z'} {\sqrt{\Omega_m (1 + z')^3 + \Omega_{\Lambda}}}, \end{equation} where $H_0$ is related to the Hubble ``constant'' ${\cal H}_0$ by $H_0 = {\cal H}_0 / c$, and $(\Omega_m, \Omega_{\Lambda}) = (0.32, 0.68)$ are parameters defined by observations \cite{Plan2013}; see Sec.\ \ref{numerunits}. Note that the duplication of $D_L(z)$ occurs only along a single light cone. Observations that are sensitive to the dynamics of the model, for example redshift drift \cite{QABC2012}, could distinguish between the $\Lambda$CDM and L--T models having the same $D_L(z)$ at present. The $R_{\rm ng}(r) \df R\left(t_{\rm ng}(r), r\right)$ in (\ref{2.13}) is the angular diameter distance, and it is not an increasing function of $r$. At the intersection with the hypersurface $t(r)$, implicitly determined by the equation \cite{KrHe2004b} \begin{equation}\label{2.15} R = 2M, \end{equation} the function $R_{\rm ng}(r)$ acquires a maximum, and becomes decreasing for greater $r$. 
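The $\Lambda$CDM distance (\ref{2.14}) is a one-dimensional integral, easy to evaluate with composite Simpson quadrature. A sketch using $(\Omega_m, \Omega_\Lambda) = (0.32, 0.68)$ and, anticipating Sec.\ \ref{numerunits}, $H_0 = 6.71$ in NLU$^{-1}$, so $D_L$ comes out in NLU:

```python
import math

OMEGA_M, OMEGA_L, H0 = 0.32, 0.68, 6.71   # parameter values of (2.23)

def D_L(z, n=2000):
    """Luminosity distance (2.14) by composite Simpson quadrature (n even)."""
    f = lambda zp: 1.0 / math.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L)
    h = z / n
    s = (f(0.0) + f(z)
         + 4.0 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1))
         + 2.0 * sum(f(2 * i * h) for i in range(1, n // 2)))
    return (1.0 + z) / H0 * (h / 3.0) * s

# sanity check: for small z, D_L(z) ~ z / H0 (the Hubble law)
```

This is the target function that the L--T$(t_B)$ construction reproduces along the PCPO.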
The hypersurface determined by (\ref{2.15}) is called \textit{apparent horizon} (AH). It is a difficult obstacle to numerical calculations because several quantities become 0/0 there, see Refs.\ \cite{Kras2014,Kras2014b} and further references cited in them. Traces of those difficulties will appear here in a few graphs as numerical instabilities. The values of $r$ and $z$ at the intersection of the PCPO with the AH in the L--T($t_B$) model are \cite{Kras2014} \begin{equation}\label{2.16} \left(\begin{array}{l} r \\ z \\ \end{array}\right)_{\rm AH} = \left(\begin{array}{l} 0.3105427968086945 \\ 1.582430687623614 \\ \end{array}\right). \end{equation} For technical reasons, the $t(r)$ and $z(r)$ curves crossing the point $r = r_{\rm AH}$ were calculated separately in the ranges $r < r_{\rm AH}$ and $r > r_{\rm AH}$, as explained in Refs.\ \cite{Kras2014} and \cite{Kras2014b}. Therefore, they are differently coloured in each of these ranges. \subsection{The L--T model with $2E = - k r^2$ that obeys (\ref{2.14})}\label{LTwithnonzeroE} This is the L--T($t_B$) model. In it we have \cite{Kras2014} \begin{equation}\label{2.17} 2E = - kr^2, \end{equation} where $k < 0$ is a constant. This $E$ is the same as in the $k < 0$ Friedmann model. Numerical fitting of the solution of (\ref{2.10}) to the values of $(r, z)$ at $(0, 0)$ and at the AH determined the value of $k$ \cite{Kras2014}, \begin{equation}\label{2.18} k = - 4.7410812. \end{equation} {}From (\ref{2.9}) and (\ref{2.17}) we have on a light cone \begin{equation}\label{2.19} \dr t r = \pm \frac {R,_r} {\sqrt{1 - k r^2}}. \end{equation} Using (\ref{2.8}) and (\ref{2.17}) we get from (\ref{2.11}) \begin{equation}\label{2.20} R,_r = \frac R r - r t_{B,r} \sqrt{\frac {2M_0 r} R - k}. \end{equation} With (\ref{2.17}), eqs. (\ref{2.4}) become \begin{eqnarray} \cosh \eta &=& 1 - \frac {k R} {M_0 r}, \label{2.21} \\ t - t_B &=& \frac {M_0} {(- k)^{3/2}} (\sinh \eta - \eta). 
\label{2.22} \end{eqnarray} Using (\ref{2.17}) and (\ref{2.20}) -- (\ref{2.22}), the set of equations \{(\ref{2.19}), (\ref{2.22}), (\ref{2.10})\} was numerically solved in Ref.\ \cite{Kras2014} for $t(r)$, $t_B(r)$ and $z(r)$ along the PCPO. The tables representing those solutions will be used here. The L--T($t_B$) model is determined around the center of symmetry up to those worldlines of dust that leave the BB at its contact with the PCPO. Extensions beyond the world-tube composed of those worldlines are possible, but are not determined by (\ref{2.13}) -- (\ref{2.14}) and are not considered here. This model need not be used in this full range. A subset can be cut out of it and matched to a background model along a narrower world-tube. \subsection{The numerical units}\label{numerunits} The following values are assumed here: \begin{equation}\label{2.23} (\Omega_m, \Omega_{\Lambda}, H_0, M_0) = (0.32, 0.68, 6.71, 1) \end{equation} the first two after Ref.\ \cite{Plan2013}. The $H_0$ is related to the Hubble constant ${\cal H}_0$ \cite{Plan2013} by \begin{equation}\label{2.24} {\cal H}_0 = c H_0 = 67.1\ {\rm km/(s} \times {\rm Mpc}), \end{equation} so $H_0$ is measured in 1/Mpc. Consequently, choosing a value for $H_0$ amounts to defining a numerical length unit (NLU). Our time coordinate is $t = c \tau$, where $\tau$ is measured in time units, so $t$ is measured in length units. So it is natural to take the NLU also as the numerical time unit (NTU). Taking for the conversion factors \cite{unitconver} \begin{eqnarray}\label{2.25} 1\ {\rm pc} &=& 3.086 \times 10^{13}\ {\rm km}, \nonumber \\ 1\ {\rm y} &=& 3.156 \times 10^7\ {\rm s}, \end{eqnarray} the following relations result: \begin{eqnarray}\label{2.26} && 1\ {\rm NTU} = 1\ {\rm NLU} = 3 \times 10^4\ {\rm Mpc} \nonumber \\ &&= 9.26 \times 10^{23}\ {\rm km} = 9.8 \times 10^{10}\ {\rm y}. 
\end{eqnarray} The age of the Universe inferred from observations is \cite{Plan2013} \begin{equation}\label{2.27} T = 13.819 \times 10^9\ {\rm y} = 0.141\ {\rm NTU}. \end{equation} As already mentioned below (\ref{2.8}), $M_0$ represents mass, but has the dimension of length ($M_0 = G m_0/c^2$, where $m_0$ is measured in mass units). The choice $M_0 = 1$ NLU made in (\ref{2.23}) implies the mass unit $M_0 c^2/G \approx 10^{54}$ kg, but it will not appear in any other way than via $M_0$. \section{The maximum-redshift hypersurface in a general L--T model}\label{genLT} \setcounter{equation}{0} Consider a radial light ray $t_{\rm ng}(r)$ reaching a comoving observer at a sufficiently late time (below, it will become clear what ``sufficiently late'' means). When we follow that ray back in time and calculate the redshift $z(r)$ along it, then, initially, $z$ increases from $z = 0$ at the observation event. However, if the ray was emitted at the BB at such a point, where $\dril {t_B} r \neq 0$, then $z(r)$ will reach a maximum somewhere in the past, and will then decrease, to become $z = -1$ at the intersection of the ray with the BB. The maximum redshift is achieved where $\dril z r = 0$, i.e., from (\ref{2.10}), where $R,_{tr} = 0$. The hypersurface determined by $R,_{tr} = 0$ is observer-independent; this is the MRH described in Sec.\ \ref{intro}. From (\ref{2.11}), using (\ref{2.3}), we have \begin{eqnarray}\label{3.1} R,_{tr} &=& \frac {E,_r} {2E}\ R,_t - \frac M {R^2}\ \left(\frac 3 2 \frac {E,_r} E - \frac {M,_r} M\right) \left(t - t_B\right) \nonumber \\ &+& \frac M {R^2}\ t_{B,r} \end{eqnarray} when $E\neq 0$, and \begin{equation}\label{3.2} R,_{tr} = \left(\frac {M,_r} {3M} + \frac {\sqrt{2M}} {R^{3/2}}\ t_{B,r}\right) R,_t \end{equation} when $E = 0$. From now on, the cases $E > 0$ and $E < 0$ have to be considered separately. 
\medskip {\underline {\bf $E > 0$}} \medskip Using (\ref{2.3}), (\ref{2.4}) and (\ref{3.1}), the equation $R,_{tr} = 0$ is rewritten as \begin{equation}\label{3.3} \left(t - t_B\right) \left[\frac {E,_r} {2E}\ F_1(\eta) + \frac 3 r\right] = - t_{B,r}, \end{equation} where \begin{equation}\label{3.4} F_1(\eta) \df \frac {\sinh \eta (\cosh \eta - 1)} {\sinh \eta - \eta} - 3. \end{equation} The conditions for no shell crossings in the case $E > 0$ are \cite{HeLa1985}, \cite{PlKr2006} \begin{equation}\label{3.5} E,_r > 0, \qquad t_{B,r} < 0. \end{equation} Hence, in a region with no shell crossings,\footnote{It has to be recalled that in the L--T model that reproduces (\ref{2.14}) with $t_{B,r} \equiv 0$, a region with shell crossings does exist \cite{Kras2014b,Kras2014c}. This is why we cannot assume that the whole model is free of shell crossings; it is to be expected that shell crossings will also exist when $\left|t_{B,r}\right|$ is small but nonzero.} the right-hand side of (\ref{3.3}) is positive, and so is the coefficient of $F_1(\eta)$. We also have \begin{equation}\label{3.6} F_1(\eta) > 0, \qquad \dril {F_1} {\eta} > 0 \end{equation} for all $\eta > 0$. From this follows \bigskip {\bf Lemma 3.1} \medskip \noindent For every $\varepsilon > 0$ there exists a $\delta > 0$ such that $t - t_B < \varepsilon$ if $\left|t_{B,r}\right| < \delta$. \bigskip \noindent The proof is given in Appendix \ref{provelem3.1}. Consequently, by choosing $\left|t_{B,r}\right|$ sufficiently small, $t$ can be made arbitrarily close to $t_B$; in particular, earlier than the recombination time. Thus, the MRH can be hidden in the pre-recombination epoch, where the zero-pressure L--T models cannot apply, and blueshifts will not arise in the L--T region. Note that (\ref{3.3}) -- (\ref{3.6}) imply that the MRH does not exist if $t_{B,r} = 0$ everywhere. 
(Formally, (\ref{3.3}) implies then $t = t_B$, but we know from elsewhere \cite{Szek1980,HeLa1984} that in this case $z \to \infty$ at the BB, so $\dril z r \to \infty$, too.) This holds, for example, in the L--T model of Ref.\ \cite{Kras2014b}. \newpage {\underline {\bf $E = 0$}} \medskip Then, using (\ref{3.2}), the equation $R,_{tr} = 0$ implies \begin{equation}\label{3.7} R = 2^{1/3} M_0 r^{5/3} \left(- t_{B,r}\right)^{2/3}. \end{equation} The no-shell-crossing condition here is $t_{B,r} < 0$. Thus, by choosing $\left|t_{B,r}\right|$ sufficiently small, we can make $R$ arbitrarily close to zero, i.e. to the BB. So, again, the blueshifts can be removed from the L--T region.\footnote{But the model with $E = 0$ cannot obey (\ref{2.14}) \cite{Kras2014}.} \medskip {\underline {\bf $E < 0$}} \medskip Using (\ref{2.6}) and (\ref{3.1}) and assuming $0 \leq \eta \leq \pi$ (the Universe is expanding), $R,_{tr} = 0$ is rewritten as \begin{equation}\label{3.8} \left(t - t_B\right) \left[\frac {E,_r} {2E}\ G_1(\eta) + \frac 3 r\right] = - t_{B,r}, \end{equation} where \begin{equation}\label{3.9} G_1(\eta) \df \frac {\sin \eta (1 - \cos \eta)} {\eta - \sin \eta} - 3. \end{equation} In this case, the analogue of Lemma 3.1 cannot be proved, since the function $G_1$ is negative and decreasing for all $0 < \eta < \pi$, while the no-shell-crossing conditions do not imply a unique sign for $E,_r/E$.\footnote{Since $E(0) = 0$ must hold in order to avoid a permanent central singularity \cite{PlKr2006}, and $E(r) < 0$ at $r > 0$ by assumption in this case, the consequence is $E,_r < 0$ in a vicinity of $r = 0$, i.e., $E,_r/E > 0$ for small $r$. However, this argument does not hold for large $r$. The no-shell-crossing conditions require only that $t_{B,r} < 0$ and $2 \pi \left(\frac 3 2 \frac {E,_r} E - \frac {M,_r} M\right) - \frac {(- 2E)^{3/2}} M t_{B,r} < 0$ \cite{PlKr2006}.} So, this case was considered for completeness only. 
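The sign and monotonicity properties claimed for $F_1$ in (\ref{3.6}), and for $G_1$ below (\ref{3.9}), are easy to confirm numerically; a short sketch sampling both functions:

```python
import math

def F1(eta):   # eq. (3.4)
    return math.sinh(eta) * (math.cosh(eta) - 1.0) / (math.sinh(eta) - eta) - 3.0

def G1(eta):   # eq. (3.9)
    return math.sin(eta) * (1.0 - math.cos(eta)) / (eta - math.sin(eta)) - 3.0

etas = [0.05 + 0.05 * i for i in range(60)]   # sample of (0, 3], inside (0, pi)
f_vals = [F1(e) for e in etas]
g_vals = [G1(e) for e in etas]
# expected: F1 > 0 and increasing for eta > 0;
#           G1 < 0 and decreasing on (0, pi)
```

Near $\eta = 0$ both functions vanish quadratically ($F_1 \approx 3\eta^2/5$, $G_1 \approx -3\eta^2/5$), consistent with the samples.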
It is known \cite{Kras2014b} that with $t_{B,r} \equiv 0$ the relation (\ref{2.13}) -- (\ref{2.14}) can be duplicated in an L--T model only with $E > 0$ at all $r > 0$. Thus, it is to be expected that the model with $E < 0$ at $r > 0$ will be inapplicable also when $\left|t_{B,r}\right|$ is small but nonzero. \section{The maximum-redshift hypersurface in the L--T($t_B$) model}\label{zmax} \setcounter{equation}{0} In the L--T($t_B$) model, the nonconstant $t_B(r)$ is uniquely (numerically) determined by (\ref{2.10}), so $\dril {t_B} r$ is also fixed. Consequently, the method of removing blueshifts from the L--T epoch, presented in Sec.\ \ref{genLT}, cannot be applied here. As will be seen, blueshifts in this model occur later than the recombination epoch in a large region around the center. A radical solution of the problems with blueshifts would be to assume that the L--T($t_B$) model applies only as far back in time as $\dril z r > 0$ along radial rays. However, it is useful to know exactly where the blueshift-generating region is located and how the blueshifts would make themselves visible to a late-time observer in this model. These questions will be dealt with in the remaining part of this paper. As in Sec.\ \ref{genLT}, the location of the MRH in spacetime is determined by the equation $R,_{tr} = 0$. However, caution is required in interpreting the solution. Equation (\ref{2.10}) shows that $R,_{tr}$ might vanish also at those points, where $z \to \infty$. See below for more on this. Using (\ref{2.20}) for $R,_r$, the equation $R,_{tr} = 0$ is \begin{equation}\label{4.1} \sqrt{\frac {2M_0 r} R - k} = - \frac {M_0 r^3 t_{B,r}} {R^2}, \end{equation} where (\ref{2.3}), (\ref{2.17}) and (\ref{2.8}) were used to eliminate $R,_t$. With $t_{B,r} < 0$ and $k < 0$, (\ref{4.1}) is solvable and implicitly defines (via $R(t,r)$) the $t(r)$ function along the MRH. 
For numerical handling, it is more convenient to square (\ref{4.1}) and substitute for $R$ from (\ref{2.21}), obtaining \begin{equation}\label{4.2} x^4 + x^3 + k^3 \left(\frac {r t_{B,r}} {4 M_0}\right)^2 = 0, \end{equation} where \begin{equation}\label{4.3} x \df \sinh^2 (\eta/2). \end{equation} Where $r > 0$ and $t_{B,r} < 0$, the solution of (\ref{4.2}) obeys \begin{equation}\label{4.4} 0 < x < x_{\rm max} \df - k \ \left(\frac {r t_{B,r}} {4 M_0}\right)^{2/3}. \end{equation} However, (\ref{4.2}) implies $x = 0$ at those $r$, where $t_{B,r} = 0$. This means $\eta = 0$, i.e. $R = 0$. This is the BB, where $z \to \infty$. The conclusion is that the MRH does not exist along those rays that hit the BB where $t_{B,r} = 0$. Also, (\ref{4.2}) implies $x = 0$ at $r = 0$. The point determined by $x = 0$ ($\Longrightarrow \eta = 0$) and $r = 0$ is the central point of the BB, where $R = 0$. But (\ref{4.2}) was obtained from (\ref{4.1}) by squaring and multiplying by $R^4$. Consequently, the solution of (\ref{4.1}) is not determined at $r = 0$, although the limit $r \to 0$ of the solution found at $r > 0$ may exist. Having found (numerically) $x(r)$, and thus also $\eta(r)$ from (\ref{4.3}), we find $t(r)$ on the MRH from (\ref{2.4}): \begin{equation}\label{4.5} t_{\rm MRH}(r) = t_B(r) + \frac {M_0} {(-k)^{3/2}}\ \{\sinh [\eta(r)] - \eta(r)\}. \end{equation} Equation (\ref{4.2}) was derived assuming that the null geodesics, on which $z(r)$ is calculated, are radial. But it makes no reference to the initial point of the geodesic arc. Consequently, the MRH is observer-independent. The maximal \textit{value} of redshift will depend on the initial point, where $z = 0$, but the \textit{location} of the maximum will not: the maximum along a given geodesic will occur always at the same $r$. In order to use (\ref{4.5}), we need to know the function $t_B(r)$. 
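Equation (\ref{4.2}) can be solved by bisection: with $t_{B,r} < 0$ the left-hand side is negative at $x = 0$ and positive at the $x_{\rm max}$ of (\ref{4.4}) (taking $|r\, t_{B,r}|$ inside the $2/3$ power). A sketch with the $k$ of (\ref{2.18}), $M_0 = 1$, and illustrative values of $r$ and $t_{B,r}$:

```python
k, M0 = -4.7410812, 1.0                      # eq. (2.18) and the choice M0 = 1

def mrh_x(r, tBr, iters=200):
    """Solve (4.2): x^4 + x^3 + k^3 (r tBr / 4 M0)^2 = 0 for x = sinh^2(eta/2),
    by bisection on (0, x_max) with x_max from (4.4)."""
    c2 = (r * tBr / (4.0 * M0)) ** 2
    f = lambda x: x ** 4 + x ** 3 + k ** 3 * c2
    x_max = -k * (abs(r * tBr) / (4.0 * M0)) ** (2.0 / 3.0)
    lo, hi = 0.0, x_max                       # f(0) < 0 < f(x_max)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        (lo, hi) = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi), x_max

x, x_max = mrh_x(r=0.5, tBr=-0.01)            # illustrative t_{B,r} < 0
```

From the resulting $x$, $\eta$ follows via (\ref{4.3}) and $t_{\rm MRH}$ via (\ref{4.5}).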
It was numerically calculated in Ref.\ \cite{Kras2014}, but only up to $r \approx 1.05584$ (corresponding to $t_B \approx -0.139454$ NTU), which is not sufficient for the present purpose. Consequently, it had to be re-calculated and extended, and the way of extending needs an explanation. The numerical step was $\Delta r \approx 5 \times 10^{-6}$. Beginning at \begin{equation}\label{4.6} r \df r_c = 1.4131983072777050, \end{equation} the numerically calculated $t_B$ became constant, \begin{equation}\label{4.7} t \df t_{Bc} = -0.13945554689046649\ {\rm NTU}, \end{equation} and this value was maintained from step $n = 221923$ for the next 1757 steps (the calculation broke down at $r \df r_f = 1.4219332552803152$, with the Fortran program saying that $t_B =$ NaN for all $r > r_f$). So, it was assumed that $t_B(r) = t_{Bc}$ at the contact with the PCPO, and the $t_B(r)$ curve was extended ``by hand'' as the straight line $t = t_{Bc}$. The extended graph of $t_B(r)$ is shown in Fig. \ref{drawtbandzz} together with $t_{\rm MRH}(r)$. Given the table of values of $t_B(r)$, the $t_{B,r}(r)$ needed to solve (\ref{4.2}) is easy to calculate. \begin{figure}[h] \hspace{-1cm} \includegraphics[scale = 0.5]{drawtbandzz.ps} ${ }$ \\[-4.7cm] \hspace{1.4cm} \includegraphics[scale = 0.32]{drawtbandzzatend.ps} \vspace{1.5cm} \caption{{\bf Main panel:} The functions $t_B(r)$ (lower curve) and $t_{\rm MRH}(r)$. The $t_B(r)$ acquires the constant value $t_{Bc}$ given by (\ref{4.7}) on approaching the right end. {\bf Inset:} Closeup view of the region, where $t_B(r)$ (the lowest curve) and $t_{\rm MRH}(r)$ (the curve with the uppermost left end) become tangent. The third curve is the $t_{\rm rec}(r)$ of (\ref{4.8}). See text for more explanation.} \label{drawtbandzz} \end{figure} The inset in Fig. 
\ref{drawtbandzz} includes the graph of the recombination time, given approximately by \cite{swinxxxx} \begin{equation}\label{4.8} t_{\rm rec}(r) - t_B(r) = 3.8 \times 10^5\ {\rm y} = 3.88 \times 10^{-6}\ {\rm NTU}. \end{equation} This will be used for illustration only. The correct $t_{\rm rec}(r)$ would have to be calculated by determining the $t - t_B$, at which the density in our model becomes equal to the density at recombination in the $\Lambda$CDM model. However, this more exact calculation would introduce only a small correction to (\ref{4.8}), which would not substantially change the results. As further calculations will show, along most rays both the MRH and the ZRH will occur much later than the time given by (\ref{4.8}). At the scale of the main panel of Fig. \ref{drawtbandzz}, the graph of $t_{\rm rec}(r)$ is indistinguishable from the graph of $t_B(r)$. The following facts about Fig. \ref{drawtbandzz} need to be noted: 1. The right end of the graphs is at $r = 1.422$. This is where $z(r)$ along the PCPO was found to be unmanageably large ($z \approx 1.6237 \times 10^{229}$) \cite{Kras2014}, signaling the near-contact of the PCPO with the BB. 2. The $t_{\rm rec}(r)$ and $t_{\rm MRH}(r)$ curves intersect at $r \df r_x \approx 1.107817$. For $r > r_x$, the MRH is earlier than the last-scattering hypersurface, and thus becomes astrophysically irrelevant, as the L--T model is inadequate for describing the epoch $t < t_{\rm rec}(r)$. 3. The redshift corresponding to $r_x$ is $z_x \approx 57.88$. This is much larger than $z_{\rm far} = 10$ \cite{McMa2005}, the largest observed redshift apart from the CMB. \section{Light rays intersecting the past light cone of the present central observer}\label{numcone} \setcounter{equation}{0} The profile of the PCPO calculated in Ref.\ \cite{Kras2014} is shown in the main panel of Fig. \ref{drawsecconeb}. It becomes tangent to the $t_B(r)$ curve at $r \approx 1.42182$.
We will now determine the intersections with the ZRH for rays received by observers sitting on the PCPO at a few characteristic positions. \subsection{Ray B}\label{rayB} Consider the observer O$_{\rm b}$ (``b'' for ``border'') who intersects the PCPO at $z = z_{\rm fSN} = 1.9$. As noted above, this is the largest observed redshift corresponding to a supernova of type Ia \cite{Jone2014}. The functions $z(r)$, $t(r)$ and $R(r)$ along the PCPO were calculated in Ref.\ \cite{Kras2014}. In their tables of values, the $z$ nearest to $z_{\rm fSN}$ and the corresponding $r$, $t$ and $R$ at the PCPO are \begin{eqnarray} z_{\rm b} &=& 1.900028454789241, \label{5.1} \\ r_{\rm b} &=& 0.3486128555616366, \label{5.2} \\ t_{\rm b} &=& -0.10726235253032952, \label{5.3} \\ R_{\rm b} &=& 0.0594055585753355889. \label{5.4} \end{eqnarray} Equations (\ref{5.2}) and (\ref{5.3}) define the initial conditions for the outgoing radial light ray that O$_b$ receives at the moment of intersecting the PCPO. It was calculated backward from this event and will be called ray B. It is the increasing curve in Fig. \ref{drawsecconeb}. \begin{figure}[h] \hspace{-0.5cm} \includegraphics[scale=0.63]{drawsecconeb.ps} ${ }$ \\[-6.5cm] \hspace{1.8cm} \includegraphics[scale=0.5]{drawsecconebsmall.ps} \vspace{2cm} \caption{{\bf Main panel:} The uppermost curve is the profile of the past light cone of the present central observer. The other two decreasing curves are those from the main panel of Fig. \ref{drawtbandzz}. The increasing curve is ray B. See text for more explanation. {\bf Inset:} Magnified view of the neighbourhood where $z < 0$ along ray B. The two decreasing lines are $t_B(r)$ (lower) and $t_{\rm rec}(r)$ of (\ref{4.8}). The increasing curve is ray B. The cross marks the point where $z = 0$. The $t_{\rm MRH}$ profile is far above the upper margin.} \label{drawsecconeb} \end{figure} Figure \ref{drawseczb} shows the graph of $z(r)$ along ray B. 
The numerical calculation broke down near the singularity, so the value of $z$ at $t = t_B$ could not be calculated, but it is known to be $-1$ there \cite{Szek1980,HeLa1984}. The nearest value to $-1$ that was yet calculated was $z = -0.85255346539197885$, achieved at $r = 0.17430456376783951$. The right end of the graph is at the $r_{\rm b}$ given by (\ref{5.2}), where the initial value on ray B was $z = 0$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6]{drawseczb.ps} \caption{Redshift $z(r)$ along ray B, calculated from the initial point at the past light cone of the present central observer (upper curve) and from the initial point at the intersection of ray B with $r = r_{\rm AH}$. The vertical bar marks $r = r_{\rm bmax}$ given by (\ref{5.5}). More explanation in the text. } \label{drawseczb} \end{center} \end{figure} The maximal redshift along this ray is $z = 0.63536180471132442$, achieved at \begin{equation}\label{5.5} \left(\begin{array}{l} r \\ t \\ \end{array}\right)_{\rm bmax} = \left(\begin{array}{l} 0.19233778370400151 \\ -0.12814671882472262 \\ \end{array}\right). \end{equation} This maximum fits on the $t_{\rm MRH}(r)$ curve from Fig. \ref{drawsecconeb} up to better than $2 \times 10^{-7}$ NTU = 19 600 y. For illustration, Fig. \ref{drawseczb} contains also the graph of redshift along ray B, with the initial value $z = 0$ not at the PCPO, but at the intersection of ray B with the line $r = r_{\rm AH}$. As predicted, the second maximum is at a different $z$, but at $r'$, for which $\left|r' - r_{\rm bmax}\right| \approx 3 \times 10^{-6}$. \subsection{Ray OB}\label{rayOB} Consider the second observer O$_{\rm ob}$ (for ``old border'') intersecting the PCPO at $z_{\rm ob} = 0.83$, which is the largest SNIa redshift measured in the twin projects that first reported the accelerated expansion \cite{Perl1999}. 
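The initial data for each ray are read off the numerical tables of Ref.\ \cite{Kras2014} by locating the tabulated $z$ nearest to the target value, as was done in (\ref{5.1}) -- (\ref{5.4}). A minimal sketch of such a lookup; the table below is a hypothetical stand-in (its rows only loosely echo the quoted values), not the actual data:

```python
import bisect

# Hypothetical stand-in for the (z, r, t, R) tables along the PCPO,
# sorted by z; real rows come from the numerical solution of Ref. [Kras2014].
table = [(0.00, 0.0000, -0.1410, 0.0000),
         (0.83, 0.1975, -0.0743, 0.0540),
         (1.90, 0.3486, -0.1073, 0.0594),
         (3.00, 0.5000, -0.1200, 0.0500)]

def nearest_row(table, z_target):
    """Return the table row whose z entry is closest to z_target."""
    zs = [row[0] for row in table]
    i = bisect.bisect_left(zs, z_target)
    candidates = table[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda row: abs(row[0] - z_target))

row = nearest_row(table, 1.9)   # selects the z = 1.90 row
```

The selected row then supplies the initial $(r, t)$ for integrating the ray backward, as in (\ref{5.2}) -- (\ref{5.3}).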
Again, from the $z(r)$, $t(r)$ and $R(r)$ tables in Ref.\ \cite{Kras2014} we find the $z$ nearest to $z_{\rm ob}$, and the corresponding $r$, $t$ and $R$ on the PCPO: \begin{eqnarray} z_{\rm ob} &=& 0.8300015499642085, \label{5.6} \\ r_{\rm ob} &=& 0.19751142662007609, \label{5.7} \\ t_{\rm ob} &=& -0.0743328307281575784, \label{5.8} \\ R_{\rm ob} &=& 0.0540017311248809709. \label{5.9} \end{eqnarray} The ray emitted at the BB and received by O$_{\rm ob}$ at the event given by (\ref{5.7}) -- (\ref{5.8}) will be denoted OB and is shown in Fig. \ref{drawsecconestaresn}, together with the other curves from Fig. \ref{drawsecconeb}. Ray OB goes into the past from the PCPO, reaching the center $r = 0$ at $t = -0.11014300011521007$ NTU. It continues to the other side of the center; the continuation beyond $r = 0$ is drawn as a mirror-image. \begin{figure}[h] \hspace{-0.4cm} \includegraphics[scale=0.65]{drawsecconestaresn.ps} ${ }$ \\[-6.8cm] \hspace{3.5cm} \includegraphics[scale=0.35]{drawsecconestaresnsmall.ps} \vspace{4cm} \caption{{\bf Main panel:} The uppermost and the two lowest curves are those from Fig. \ref{drawsecconeb}. The two solid arcs at left represent ray OB. The dotted arc at right is ray B. See text for more explanation. {\bf Inset:} Magnified view of the neighbourhood where $z < 0$ along ray OB. The two lowest lines are the same as in the inset in Fig. \ref{drawsecconeb}. The third curve is ray OB. The cross marks the point where $z = 0$. The profile of the MRH is again way above the upper margin. } \label{drawsecconestaresn} \end{figure} Figure \ref{drawzstaresn} shows the graph of $z(r)$ along ray OB. The graph begins at the $r_{\rm ob}$ given by (\ref{5.7}), where $z = 0$, and proceeds to the left. The curve $z(r)$ reaches the center with $z = 1.2266046302084745$ and goes to the left side of the $z$-axis, but, as before, the figure shows the mirror-image of the continuation.
From this point on, $z(r)$ increases to the maximum $z = 3.0946480957646290$ attained at \begin{equation}\label{5.10} \left(\begin{array}{l} r \\ t \\ \end{array}\right)_{\rm obmax} = \left(\begin{array}{l} 0.16821171418720421 \\ -0.12662412927673727 \\ \end{array}\right). \end{equation} The $t_{\rm obmax}$ given above agrees with the corresponding $t$ on the $t_{\rm MRH}(r)$ curve to better than $10^{-6}$ NTU $= 9.8 \times 10^4$ y. Beyond this maximum, $z(r)$ decreases to $z = -0.85628254978650187$ attained at $r = 0.22100398884102759$. \begin{figure}[h] \begin{center} \vspace{5mm} \includegraphics[scale=0.6]{drawzstaresn.ps} \caption{Redshift $z(r)$ along ray OB.} \label{drawzstaresn} \end{center} \end{figure} Note that the redshift in Fig. \ref{drawzstaresn} becomes negative at $r \approx 0.2208$, which is larger than the $r_{\rm ob}$ given by (\ref{5.7}). Thus, if our L--T model were matched to a Friedmann background at $r$ between $r_{\rm ob}$ and 0.2208, the ray (followed back in time) would enter the Friedmann region while its redshift is still positive, and the redshift would keep increasing from then on. See more on this in Sec.\ \ref{matching}. It is interesting that all the qualitative properties of blueshift described here (blueshift being infinite when $\dril {t_B} r \neq 0$ at the contact of the ray with the BB, being visible to all observers along the blueshifted ray, perturbing the CMB spectrum) were mentioned without proof by Szekeres already in 1980 \cite{Szek1980}; he even drew the MRH profile for $E = 0$ and $t_B(r) = 1 / \left(1 + r^2\right)$. \subsection{Ray N}\label{rayN} The third observer, O$_{\rm n}$ (for ``near''), is placed at such an $r$ that the ray she receives at the intersection with the PCPO is emitted from the BB where the function $t_B$ is flat. The placement of O$_{\rm n}$ was determined by trial and error.
Its initial data at the PCPO are \begin{eqnarray} z_{\rm n} &=& 0.02000194389343255, \label{5.11} \\ r_{\rm n} &=& 0.00653692577372784, \label{5.12} \\ t_{\rm n} &=& -0.00293913865628162, \label{5.13} \\ R_{\rm n} &=& 0.002910104748843882. \label{5.14} \end{eqnarray} The ray emitted at the BB and received by O$_{\rm n}$ at the event given by (\ref{5.12}) -- (\ref{5.13}) will be denoted N and is shown in Fig. \ref{drawsafecone}. Similarly to ray OB, ray N, followed from the initial point given by (\ref{5.12}) -- (\ref{5.13}) into the past, first reaches the center at $t = -0.00581492733951897989$ NTU, then continues on the other side of the center, hitting the BB at \begin{equation}\label{5.15} \left(\begin{array}{l} r \\ t \\ \end{array}\right)_{\rm nBB} = \left(\begin{array}{l} 1.3401983891580524 \\ -0.13945554652960040 \\ \end{array}\right). \end{equation} At the contact of ray N with the BB, $t_B$ is constant up to better than $\Delta t_B = 10^{-6}$ NTU $= 9.8 \times 10^4$ y. \begin{figure}[h] \hspace{-0.7cm} \includegraphics[scale=0.56]{drawsafecone.ps} ${ }$ \\[-5.9cm] \hspace{6.6cm} \includegraphics[scale=0.45]{drawsafeconeatend.ps} \vspace{2cm} \caption{{\bf Main panel:} The uppermost and the two lowest curves are those from Fig. \ref{drawsecconeb}. The arc that nearly coincides with the cone profile is ray N. See text for more explanation. {\bf Inset:} The neighbourhood where ray N hits the BB. The curves, counted from top to bottom at the left edge, are ray N, $t_{\rm MRH}(r)$ and $t_B(r)$. The recombination time is $\approx 3 \times 10^{-6}$ NTU above the upper edge of the graph. Redshift never becomes negative along ray N, and becomes very large near the BB, see Fig. \ref{drawsafez}. More explanation in the text.
} \label{drawsafecone} \end{figure} \begin{figure}[h] \begin{center} \vspace{5mm} \includegraphics[scale=0.54]{drawsafez.ps} ${ }$ \\[-5.4cm] \includegraphics[scale=0.4]{drawsafezsmall.ps} \vspace{2cm} \caption{{\bf Main panel:} Redshift $z(r)$ along ray N from Fig. \ref{drawsafecone}. {\bf Inset:} Closeup view of the left end of the main graph. See explanation in the text.} \label{drawsafez} \end{center} \end{figure} The redshift along ray N is shown in Fig. \ref{drawsafez}. The only point along ray N where $z = 0$ is the initial point at the PCPO. Following ray N to the past, $z$ attains the value 0.0200723615789212863 at $r = 0$, then increases up to the maximum $z \approx 7676.412$, attained at \begin{equation}\label{5.16} \left(\begin{array}{l} r \\ t \\ \end{array}\right)_{\rm nmax} = \left(\begin{array}{l} 1.3388618129278465 \\ -0.13945554652960040 \\ \end{array}\right). \end{equation} Then a numerical instability causes $z$ to go down at larger $r$. This is because, at the level of precision assumed here, where ray N hits the BB, the $t_B$ is ``not constant enough'' for $z$ to go to infinity. Were O$_{\rm n}$ placed nearer to the center, ray N would be indistinguishable from the PCPO. The transition from rays emitted at nonconstant $t_B$ to those emitted at constant $t_B$ is discontinuous. Beginning with the situation shown in Fig. \ref{drawzstaresn} and moving the observer ever closer to the center, the redshift profile changes as follows: the initial point where $z = 0$ moves closer to the $r = 0$ axis, the point of crossing the $r = 0$ line moves down, the maximum of $z(r)$ moves up and to the right, and the final segment of $z(r)$ that goes down becomes ever steeper, approaching vertical. If the observer is close enough to the center, so that the ray (followed back in time) hits the BB where $t_B$ is exactly constant, $z(r)$ goes to infinity at the contact with the BB, and the final steep segment of the curve disappears. 
Figure \ref{drawzintermediate} shows a graph of $z(r)$ intermediate between the situations in Figs. \ref{drawzstaresn} and \ref{drawsafez}. \begin{figure}[h] \hspace{-1.3cm} ${ }$ \\[1cm] \includegraphics[scale=0.5]{Fig8patch.ps} ${ }$ \\[-4.8cm] \hspace{-3mm} \includegraphics[scale=0.6]{drawzintermediate.ps} ${ }$ \\[-4.8cm] \hspace{-3.5cm} \includegraphics[scale=0.3]{drawzintermediatesmall.ps} \vspace{2.5cm} \caption{Redshift $z(r)$ along a ray received by an observer placed between O$_{\rm ob}$ and O$_{\rm n}$. See explanation in the text. The inset shows $z(r)$ near $r = 0$. } \label{drawzintermediate} \end{figure} \section{The L--T($t_B$) model matched to Friedmann}\label{matching} \setcounter{equation}{0} Now we come back to the remark made in the paragraph after (\ref{5.10}). The necessary and sufficient condition for matching an L--T to a Friedmann model \cite{PlKr2006} is that at the boundary hypersurface the functions $M(r)$, $E(r)$ and $t_B(r)$ go over in a continuous (not necessarily differentiable) way into their Friedmann counterparts. Our functions $M(r)$ and $E(r)$ have Friedmannian forms from the beginning. So, it is enough to assume that $t_B(r)$ becomes constant at the boundary value of $r$.\footnote{The model we consider already coincides with Friedmann for $r > r_c$, where $r_c$ is given by (\ref{4.6}). But to make the intended point, we need to match it to Friedmann at $r_{\rm F}$ given by (\ref{6.1}).} As stated in the aforementioned remark, the $r$ at the boundary should be between the $r_{\rm ob}$ given by (\ref{5.7}) and $r = 0.2208$. So we choose the intermediate value $r_{\rm F}$, where \begin{equation}\label{6.1} \left(\begin{array}{l} r \\ t_B \\ \end{array}\right)_{\rm F} = \left(\begin{array}{l} 0.2100014577175866 \\ -0.1325224690059549 \\ \end{array}\right). \end{equation} The time on the PCPO at this boundary is \begin{equation}\label{6.2} t_{\rm F} = -0.0778299400591163509\ {\rm NTU}. 
\end{equation} From the tables of values of $t(r)$ and $z(r)$ along ray OB we find that at the $r_{\rm F}$ given above we have \begin{equation}\label{6.3} \left(\begin{array}{l} t \\ z \\ \end{array}\right)_{\rm OB\ F} = \left(\begin{array}{l} -0.13144220116062100 \\ 2.2425612667236408 \\ \end{array}\right). \end{equation} Ray OB is now continued through $r = r_{\rm F}$ into the Friedmann region, with the above as the initial data for it. In the Friedmann region, (\ref{2.9}) simplifies to \begin{equation}\label{6.4} \dr t r = \pm \frac {S(t)} {\sqrt{1 - k r^2}}, \end{equation} where $S(t) \df R/r$. Using (\ref{2.4}), (\ref{2.8}) and (\ref{2.17}), this can be integrated with the result \begin{equation}\label{6.5} \eta(r) + C = \pm \ln \left(\sqrt{-k} r + \sqrt{1 - kr^2}\right), \end{equation} where $\eta(r)$ is the same as in (\ref{2.4}) and $C$ can be found from the initial condition \begin{equation}\label{6.6} \eta_{\rm F} + C = \pm \left[\ln \left(\sqrt{-k} r + \sqrt{1 - kr^2}\right)\right]_{\rm F}, \end{equation} with $r_{\rm F}$ given by (\ref{6.1}), and $\eta_{\rm F}$ calculated from (\ref{2.4}): \begin{equation}\label{6.7} \sinh \eta_{\rm F} - \eta_{\rm F} = \frac {(-k)^{3/2}} {M_0}\ \left(t_{\rm F} - t_{\rm BF}\right). \end{equation} The $t_{\rm F}$ and $t_{\rm BF}$ are given by (\ref{6.2}) and (\ref{6.1}), respectively. So, the construction of the Friedmann light cone and the calculation of redshift along it proceed as follows. The consecutive values of $r$ are taken from the same table as in the previous calculations. Given the value of $r$, the $\eta(r)$ is calculated from (\ref{6.5}) using (\ref{6.6}) for $C$, then $t(r)$ and $S(t(r))$ along the light cone are calculated from (\ref{2.4}) using (\ref{2.8}), (\ref{2.17}) and (\ref{6.1}). Finally, with $S(t(r))$ known, the redshift along the light cone is calculated from \cite{PlKr2006} \begin{equation}\label{6.8} 1 + z(r) = z_{\rm F} + S_{\rm F}/S(t(r)).
\end{equation} Figure \ref{drawFriedmanncone} shows the continuation of $t_B(r)$ (the lowest line) and of ray OB through the boundary of the L--T and Friedmann regions.\footnote{Ray OB, $t_B(r)$ and $z(r)$ can be made differentiable at $r = r_{\rm F}$ by inserting an interpolating arc between the $t_B(r)$ of the L--T region and the constant $t_B$ of the Friedmann region. The general matching conditions do not require this \cite{PlKr2006}.} The line that ends at the boundary is the MRH; it does not exist in the Friedmann region because there the maximal redshift is infinite. \begin{figure} \includegraphics[scale=0.61]{drawFriedmannray.ps} ${ }$ \\[-7.2cm] \hspace{2.7cm} \includegraphics[scale=0.45]{drawFriedmannjoint.ps} \vspace{3.3cm} \caption{{\bf Main panel:} Continuation of $t_B(r)$ (the lowest line) and of ray OB into the Friedmann region. The vertical line marks the L--T/Friedmann boundary at $r = r_{\rm F}$ given by (\ref{6.1}). The descending line that ends at the boundary is the profile of the MRH. {\bf Inset:} The contents of the main panel shown together with the complete past light cone of the present central observer extended into the Friedmann region.} \label{drawFriedmanncone} \end{figure} Figure \ref{drawFriedmannz} shows the continuation of $z(r)$ from Fig. \ref{drawzstaresn} through the boundary of the L--T and Friedmann regions. The function $z(r)$ was decreasing in the L--T region close to its boundary, but becomes increasing in the Friedmann region, and increases until it becomes too large to handle by the Fortran program. This happens at \begin{equation}\label{6.9} \left(\begin{array}{l} r \\ z \\ \end{array}\right)_{\rm large} = \left(\begin{array}{l} 0.43753244000885227 \\ 11861354545.253244 \\ \end{array}\right). \end{equation} This is, not accidentally, the same $r_{\rm large}$ at which the continuation of ray OB into the Friedmann region becomes tangent to the constant-$t_B$ line. 
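The step-by-step construction of the Friedmann light cone described above is straightforward to implement. The following Python sketch is our illustration, not the authors' Fortran code; it uses the hyperbolic parametric Friedmann solution $S = \frac{M_0}{-k}(\cosh\eta - 1)$, $t - t_B = \frac{M_0}{(-k)^{3/2}}(\sinh\eta - \eta)$ consistent with (\ref{6.7}), and the values of $M_0$, $k$ and the initial data passed to it are illustrative placeholders, not the ones used in the paper.

```python
import math

def eta_from_time(dt, M0, k):
    """Invert eq. (6.7): solve sinh(eta) - eta = (-k)^{3/2}/M0 * dt by bisection."""
    A = (-k) ** 1.5 / M0 * dt
    lo, hi = 0.0, 1.0
    while math.sinh(hi) - hi < A:    # grow the bracket until it contains the root
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.sinh(mid) - mid < A:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def S_of_eta(eta, M0, k):
    """Friedmann scale factor in the hyperbolic parametric form (k < 0)."""
    return M0 / (-k) * (math.cosh(eta) - 1.0)

def friedmann_ray(r_values, r_F, t_F, t_BF, z_F, M0, k, sign=+1.0):
    """Continue a radial ray through the Friedmann region, following (6.5)-(6.8).

    eta(r) is obtained algebraically from the integrated null condition, then
    t(r) from the parametric solution and z(r) from eq. (6.8).
    Returns a list of (r, t, z) triples.
    """
    F = lambda r: math.log(math.sqrt(-k) * r + math.sqrt(1.0 - k * r * r))
    eta_F = eta_from_time(t_F - t_BF, M0, k)   # initial eta at the boundary
    S_F = S_of_eta(eta_F, M0, k)
    ray = []
    for r in r_values:
        eta = eta_F + sign * (F(r) - F(r_F))   # eqs. (6.5)-(6.6)
        S = S_of_eta(eta, M0, k)
        t = t_BF + M0 / (-k) ** 1.5 * (math.sinh(eta) - eta)
        z = z_F + S_F / S - 1.0                # eq. (6.8)
        ray.append((r, t, z))
    return ray
```

The `sign` argument selects the branch in (\ref{6.5}) -- (\ref{6.6}); at $r = r_{\rm F}$ the routine reproduces the initial data $(t_{\rm F}, z_{\rm F})$ by construction, so only the choice of branch and the step table for $r$ distinguish the forward and backward continuations.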
\begin{figure}[h] \hspace{-0.7cm} \includegraphics[scale=0.63]{drawFriedmannz.ps} \caption{Continuation of $z(r)$ from Fig. \ref{drawzstaresn} into the Friedmann region. The redshift becomes too large to handle at $r = r_{\rm large}$ given by (\ref{6.9}). } \label{drawFriedmannz} \end{figure} The matching of the L--T and Friedmann models does not solve the problem of blueshifts. The PCPO continued into the Friedmann region would still encounter blueshifted rays emitted in the L--T($t_B$) region. Figure \ref{drawblueFcone} shows one example of such a ray. It was calculated in two stages: 1. Equations (\ref{6.5}) -- (\ref{6.8}) (with the $+$ sign in (\ref{6.5}) -- (\ref{6.6})) were used to calculate $t(r)$ and $z(r)$ back in time from the initial point at the PCPO, with the coordinates \begin{equation}\label{6.10} \left(\begin{array}{l} r \\ t \\ \end{array}\right)_{\rm ei} = \left(\begin{array}{l} 0.3000029697185931 \\ -0.0958721393954025947 \\ \end{array}\right). \end{equation} The ray reached the L--T/Friedmann boundary at \begin{equation}\label{6.11} \left(\begin{array}{l} r \\ t \\ \end{array}\right)_{\rm eF} = \left(\begin{array}{l} 0.2100014577175866 \\ -0.10919095912654034 \\ \end{array}\right) \end{equation} with the redshift \begin{equation}\label{6.12} z_{\rm eF} = 0.37819933974218056. \end{equation} 2. Using (\ref{6.11}) -- (\ref{6.12}) as initial data, (\ref{2.9}) -- (\ref{2.10}) (again with the $+$ sign) were integrated to determine the continuation of $t(r)$ and $z(r)$ into the L--T region. The proximity of the singularity did not allow $t(r)$ to end up with a vertical tangent, and the $t(r)$ curve actually overshot the BB (as can be seen on close inspection of the inset in Fig. \ref{drawblueFcone}).
Its end point is at \begin{equation}\label{6.13} \left(\begin{array}{l} r \\ t \\ \end{array}\right)_{\rm ee} = \left(\begin{array}{l} 0.0997339231053535613 \\ -0.12458819154593222 \\ \end{array}\right) \end{equation} with the last calculated value of $z$ being \begin{equation}\label{6.14} z_{\rm ee} = -0.8051202078031556. \end{equation} \begin{figure}[h] \hspace{-0.4cm} \includegraphics[scale=0.63]{drawblueFray.ps} ${ }$ \\[-7cm] \hspace{3.3cm} \includegraphics[scale=0.4]{drawblueFrayattb.ps} \vspace{3.8cm} \caption{{\bf Main panel:} The L--T($t_B$) model matched to Friedmann across $r = r_{\rm F}$, and the ray crossing the boundary that displays blueshift in the Friedmann region. See text for more explanation. {\bf Inset:} The boundary-crossing ray in the neighbourhood of the Big Bang. The decreasing lines are $t_B(r)$ (lower) and $t_{\rm rec}(r)$ given by (\ref{4.8}) (upper). The cross marks the point on the ray where $z = 0$. The maximum-redshift hypersurface is far above the upper margin.} \label{drawblueFcone} \end{figure} In addition to this ray, the main panel in Fig. \ref{drawblueFcone} shows the PCPO (the uppermost curve) and $t_B(r)$ (the lowest curve), both continued into the Friedmann region. The third decreasing line is the MRH profile, and the vertical line marks the L--T/Friedmann boundary. The inset in Fig. \ref{drawblueFcone} shows the final segment of the boundary-crossing ray, on which $z(r)$ becomes negative. The coordinates of the point at which $z = 0$ are \begin{equation}\label{6.15} \left(\begin{array}{l} r \\ t \\ \end{array}\right)_{{\rm e}z0} = \left(\begin{array}{l} 0.099836402226740395 \\ -0.12454907240930377 \\ \end{array}\right). \end{equation} The profile of $z(r)$ along this ray is shown in Fig. \ref{drawblueFz}.
The coordinates of the maximum in $z$ are \begin{equation}\label{6.16} \left(\begin{array}{l} r \\ t \\ \end{array}\right)_{\rm emax} = \left(\begin{array}{l} 0.11435427775654181 \\ -0.12276948639632553 \\ \end{array}\right) \end{equation} and the maximal value of $z$ is \begin{equation}\label{6.17} z_{\rm emax} = 0.88302700024316949. \end{equation} The point given by (\ref{6.16}) lies on the MRH profile up to better than $\Delta t = 3.6 \times 10^{-8}$ NTU = 3528 y (the $r$ coordinates agree by construction). \begin{figure}[h] \hspace{-0.7cm} \includegraphics[scale=0.63]{drawblueFz.ps} \caption{The redshift along the boundary-crossing ray from Fig. \ref{drawblueFcone}. The vertical bar marks $r = r_{\rm F}$ given by (\ref{6.1}). } \label{drawblueFz} \end{figure} Comparing such a composite model with observations might be difficult. As seen from Fig. \ref{drawFriedmannz}, the redshift along ray OB increases with $r$ only up to a maximum attained at $r_{\rm obmax}$ given by (\ref{5.10}), then decreases with increasing $r$ up to the L--T/Friedmann boundary, and then starts to increase again. In astronomy, it is assumed that redshift is a monotonically increasing function of distance (and of look-back time); in fact, redshift is routinely used as a measure of distance to objects far from the observer. To test this model, a method of determining distance independent of redshift would have to be introduced. \section{Conclusions}\label{conclu} \setcounter{equation}{0} In a general L--T model with nonconstant $t_B(r)$, choosing $t_B$ nearly constant (i.e., with a sufficiently small $\left|\dril {t_B} r\right|$), one can move the maximum-redshift hypersurface (MRH) to times earlier than recombination. At those times, the zero-pressure L--T model cannot describe the Universe. Consequently, no blueshifts will be observed in the after-recombination epoch (Sec.\ \ref{genLT}). 
The rest of the paper is devoted to investigating blueshifts in one particular L--T model, called L--T($t_B$). It is the one derived in Ref.\ \cite{Kras2014}, in which the $\Lambda$CDM function $D_L(z)$ is duplicated using nonconstant $t_B$ alone; the $E(r) = - kr^2/2$ with $k$ given by (\ref{2.18}) is the same as in a Friedmann model. In Sec.\ \ref{zmax}, the MRH is determined for this model. In Sec.\ \ref{numcone}, the redshift/blueshift profiles along three characteristic rays in this model are calculated and displayed. In Sec.\ \ref{matching}, the L--T($t_B$) model matched to Friedmann is investigated. The matching hypersurface is chosen so that the L--T($t_B$) region encompasses all the type Ia supernovae of the original project \cite{Ries1998,Perl1999}. This matching does not solve the problem of blueshifts because observers in the Friedmann region would receive blueshifted rays emitted from the nonconstant Big Bang in the L--T($t_B$) region. The final verdict on the L--T($t_B$) model is thus: if we insist on applying it all the way back to the recombination time, then blueshifted rays will inevitably cross the past light cone of the present central observer at sufficiently large $z$. In the example shown in Fig. \ref{drawblueFcone}, blueshifts would be present beyond $z \approx 1.50087$. But this argument does not ``rule out'' general L--T models with nonconstant $t_B$, as shown in Sec.\ \ref{genLT}. To the attempts at discrediting the usefulness of the L--T model (or more general ones) for cosmology, one can give a philosophical answer: objects existing in Nature do not fulfil mathematical assumptions with perfect precision. Assumptions such as spherical symmetry, axial symmetry, isolated body, free fall, ideal gas, incompressible fluid, are in reality fulfilled only up to some degree of approximation. 
Why should the Universe be an exception and arise in an exactly simultaneous Big Bang, when the theory allows the BB to be extended in (comoving) time?\footnote{By the way, why should it be exactly homogeneous in the large and exactly spatially flat in addition?} Anticipating more general solutions of Einstein's equations, one should even expect the most general BB time to be a function of all three spatial variables, possibly limited in generality by the constraint equations. We generally agree that Nature acts through mathematics. If so, then it is reasonable to assume that it takes its tools from a generic set, e.g., not a constant function when nonconstant ones are admissible, not a function of 2 variables when 3 are possible, etc. Would Nature ignore all this freedom in order to keep the inflation hypothesis still alive and mainstream astronomers feeling safe with their current knowledge?
\section{Introduction}\label{sec:introduction} In computational molecular biology, various types of data have been utilized, which include sequences, gene expression patterns, and protein structures. Graph structured data have also been extensively utilized, which include metabolic pathways, protein-protein interaction networks, gene regulatory networks, and chemical graphs. Much attention has recently been paid to analysis of chemical graphs due to its potential applications to computer-aided drug design. One of the major approaches to computer-aided drug design is quantitative structure activity/property relationships (QSAR/QSPR) analysis, the purpose of which is to derive quantitative relationships between chemical structures and their activities/properties. Furthermore, inverse QSAR/QSPR has been extensively studied \cite{Miyao16,Skvortsova93}, the purpose of which is to infer chemical structures from given chemical activities/properties. Inverse QSAR/QSPR is often formulated as an optimization problem to find a chemical structure maximizing (or minimizing) an objective function under various constraints. In both QSAR/QSPR and inverse QSAR/QSPR, chemical compounds are usually represented as vectors of real or integer numbers, which are often called \emph{descriptors} and correspond to \emph{feature vectors} in machine learning. Using these chemical descriptors, various heuristic and statistical methods have been developed for finding optimal or nearly optimal graph structures under given objective functions~\cite{Ikebata17,Miyao16,Rupakheti15}. Inference or enumeration of graph structures from a given feature vector is a crucial subtask in many of such methods. Various methods have been developed for this enumeration problem \cite{Fujiwara08,Kerber98,Li18,Reymond15} and the computational complexity of the inference problem has been analyzed~\cite{Akutsu12,Nagamochi09}. 
On the other hand, enumeration in itself is a challenging task, since the number of molecules (i.e., chemical graphs) with up to 30 atoms (vertices) {\tt C}, {\tt N}, {\tt O}, and {\tt S}, may exceed~$10^{60}$~\cite{BMG96}. As a new approach, artificial neural network (ANN) and deep learning technologies have recently been applied to inverse QSAR/QSPR. For example, variational autoencoders~\cite{Gomez18}, recurrent neural networks~\cite{Segler18,Yang17}, and grammar variational autoencoders~\cite{Kusner17} have been applied. In these approaches, new chemical graphs are generated by solving a kind of inverse problems on neural networks that are trained using known chemical compound/activity pairs. However, the optimality of the solution is not necessarily guaranteed in these approaches. In order to guarantee the optimality mathematically, a novel approach has been proposed~\cite{AN19} for ANNs, using mixed integer linear programming (MILP). Recently, a new framework has been proposed \cite{ACZSNA20,CWZSNA20,ZZCSNA20} by combining two previous approaches; efficient enumeration of tree-like graphs~\cite{Fujiwara08}, and MILP-based formulation of the inverse problem on ANNs~\cite{AN19}. This combined framework for inverse QSAR/QSPR mainly consists of two phases. The first phase solves (I) {\sc Prediction Problem}, where a feature vector $f(G)$ of a chemical graph $G$ is introduced and a prediction function $\psi_{\mathcal{N}}$ on a chemical property $\pi$ is constructed with an ANN $\mathcal{N}$ using a data set of chemical compounds $G$ and their values $a(G)$ of $\pi$. The second phase solves (II) {\sc Inverse Problem}, where (II-a) given a target value $y^*$ of the chemical property $\pi$, a feature vector $x^*$ is inferred from the trained ANN $\mathcal{N}$ so that $\psi_{\mathcal{N}}(x^*)$ is close to $y^*$ and (II-b) then a set of chemical structures $G^*$ such that $f(G^*)= x^*$ is enumerated by a graph search algorithm. 
In (II-a) of the above-mentioned previous methods~\cite{ACZSNA20,CWZSNA20,ZZCSNA20}, an MILP is formulated for acyclic chemical compounds. Afterwards, Ito et~al. \cite{IAWSNA20} and Zhu et~al. \cite{ZCSNA20} designed a method of inferring chemical graphs with cycle index 1 and 2, respectively, by formulating a new MILP and using an efficient algorithm for enumerating chemical graphs with cycle index 1 \cite{Suzuki14} and cycle index 2 \cite{2A1B20,2A2B20}. Computational experiments on instances with $n$ non-hydrogen atoms show that a feature vector $x^*$ can be inferred for up to around $n=40$, whereas graphs $G^*$ can be enumerated for up to around $n=15$. In this paper, we present a new characterization of graph structure, called ``branch-height.'' Based on this, we can treat a class of acyclic chemical graphs whose structure is topologically restricted but appears frequently in chemical databases, formulate a new MILP that can handle acyclic graphs with a large diameter, and design a new graph search algorithm that generates acyclic chemical graphs with up to 50 vertices. The results of computational experiments using such chemical properties as octanol/water partition coefficient, boiling point and heat of combustion suggest that the proposed method is much more useful than the previous method. The paper is organized as follows. Section~\ref{sec:preliminary} introduces some notions on graphs, a modeling of chemical compounds and a choice of descriptors. Section~\ref{sec:inverse_process} reviews the framework for inferring chemical compounds based on ANNs and MILPs.
Section~\ref{sec:graph_MILP} introduces a new method of modeling acyclic chemical graphs and proposes a new MILP formulation that represents an acyclic chemical graph $G$ with $n$ vertices, where our MILP requires only $O(n)$ variables and constraints when the branch-parameter $k$ and the $k$-branch-height in $G$ (graph-topological parameters newly introduced in this paper) are constant. Section~\ref{sec:graph_search} describes the idea of our new dynamic-programming-type algorithm that enumerates a given number of acyclic chemical graphs for a given feature vector. Section~\ref{sec:experiment} reports the results of computational experiments conducted for chemical properties such as octanol/water partition coefficient, boiling point and heat of combustion. Section~\ref{sec:conclude} makes some concluding remarks. Appendix~\ref{sec:statistical} provides statistics on the structure of acyclic chemical graphs in a chemical graph database. Appendix~\ref{sec:full_milp} describes the details of all variables and constraints in our MILP formulation. Appendix~\ref{sec:graph_search_appendix} presents descriptions of our new graph search algorithm. \section{Preliminary}\label{sec:preliminary} This section introduces some notions and terminology on graphs, a modeling of chemical compounds and our choice of descriptors. Let $\mathbb{R}$, $\mathbb{Z}$ and $\mathbb{Z}_+$ denote the sets of reals, integers and non-negative integers, respectively. For two integers $a$ and $b$, let $[a,b]$ denote the set of integers $i$ with $a\leq i\leq b$. \subsection{Graphs} A {\em graph} stands for a simple undirected graph, where an edge joining two vertices $u$ and $v$ is denoted by $uv$ $(= vu)$. The sets of vertices and edges of a graph $G$ are denoted by $V(G)$ and $E(G)$, respectively. Let $H=(V,E)$ be a graph with a set $V$ of vertices and a set $E$ of edges.
For a vertex $v\in V$, the set of neighbors of $v$ in $H$ is denoted by $N_H(v)$, and the {\em degree} $\deg_H(v)$ of $v$ is defined to be $|N_H(v)|$. The length of a path is defined to be the number of edges in the path. The {\em distance} $\mathrm{dist}_H(u,v)$ between two vertices $u,v\in V$ is defined to be the minimum length of a path connecting $u$ and $v$ in $H$. The {\em diameter} $\mathrm{dia}(H)$ of $H$ is defined to be the maximum distance between two vertices in $H$; i.e., $\mathrm{dia}(H)\triangleq \max_{u,v\in V} \mathrm{dist}_H(u,v)$. Denote by $\ell(P)$ the length of a path $P$. \medskip\noindent{\bf Trees} For a tree $T$ with an even (resp., odd) diameter $d$, the {\em center} is defined to be the vertex $v$ (resp., the adjacent vertex pair $\{v,v'\}$) situated in the middle of one of the longest paths, whose length is $d$. The center of each tree is uniquely determined. \medskip\noindent{\bf Rooted Trees} A {\em rooted tree} is defined to be a tree where a vertex (or a pair of adjacent vertices) is designated as the {\em root}. Let $T$ be a rooted tree, where for two adjacent vertices $u$ and $v$, vertex $u$ is called the parent of $v$ if $u$ is closer to the root than $v$ is. The {\em height} $\mathrm{height}(v)$ of a vertex $v$ in $T$ is defined to be the maximum length of a path from $v$ to a leaf $u$ in the descendants of $v$, where $\mathrm{height}(v)=0$ for each leaf $v$ in $T$. Figure~\ref{fig:branch-height_tree}(a) and (b) illustrate examples of trees rooted at the center. \medskip\noindent{\bf Degree-bounded Trees} For positive integers $a,b$ and $c$ with $b\geq 2$, let $T(a,b,c)$ denote the rooted tree such that the number of children of the root is $a$, the number of children of each non-root internal vertex is $b$ and the distance from the root to each leaf is $c$. We see that the number of vertices in $T(a,b,c)$ is $a(b^c-1)/(b-1)+1$, and the number of non-leaf vertices in $T(a,b,c)$ is $a(b^{c-1}-1)/(b-1)+1$.
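The closed-form vertex counts of $T(a,b,c)$ are easy to check programmatically. The following Python sketch is ours, for illustration only; it builds $T(a,b,c)$ with a breadth-first labeling $v_1,\ldots,v_n$ (the parent/children dictionaries are a data-layout assumption of the sketch) and verifies both counting formulas.

```python
from collections import deque

def build_T(a, b, c):
    """Build the rooted tree T(a, b, c) with vertices labeled 1..n in BFS order.

    Returns (n, parent, children): parent[i] is the BFS index of the parent of
    v_i (parent[1] = 0 for the root), children[i] lists the children of v_i.
    """
    parent = {1: 0}
    children = {1: []}
    queue = deque([(1, 0)])          # (vertex index, depth); the root has depth 0
    next_index = 2
    while queue:
        i, depth = queue.popleft()
        if depth == c:
            continue                 # vertices at depth c are leaves
        deg = a if i == 1 else b     # the root has a children, internal vertices b
        for _ in range(deg):
            j = next_index
            next_index += 1
            parent[j] = i
            children[j] = []
            children[i].append(j)
            queue.append((j, depth + 1))
    return next_index - 1, parent, children

# Check the closed-form counts stated in the text for one instance.
a, b, c = 2, 3, 3
n, parent, children = build_T(a, b, c)
assert n == a * (b**c - 1) // (b - 1) + 1
non_leaves = sum(1 for i in children if children[i])
assert non_leaves == a * (b**(c - 1) - 1) // (b - 1) + 1
```

Because children are enqueued in the order their parents are dequeued, the labels $1,\ldots,n$ automatically follow the breadth-first-search order assumed in the next paragraph.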
In the rooted tree $T(a,b,c)$, we denote the vertices by $v_1,v_2,\ldots,v_n$ in breadth-first-search order, and denote the edge between a vertex $v_i$ with $i\in [2,n]$ and its parent by $e_i$, where $n=a(b^c-1)/(b-1)+1$ and each vertex $v_i$ with $i\in [1, a(b^{c-1}-1)/(b-1)+1]$ is a non-leaf vertex. For each vertex $v_i$ in $T(a,b,c)$, let $\mathrm{Cld}(i)$ denote the set of indices $j$ such that $v_j$ is a child of $v_i$, and $\mathrm{prt}(i)$ denote the index $j$ such that $v_j$ is the parent of $v_i$ when $i\in [2,n]$. Let $P_{\mathrm{prc}}(a,b,c)$ be a set of ordered index pairs $(i,j)$ of vertices $v_i$ and $v_j$ in $T(a,b,c)$. We call $P_{\mathrm{prc}}(a,b,c)$ {\em proper} if the following conditions hold: \begin{enumerate} \item[(a)] For each subtree $H=(V,E)$ of $T(a,b,c)$ with $v_1\in V$, there is at least one subtree $H'=(V',E')$ such that \\ ~-~ $H'$ is isomorphic to $H$ by a graph isomorphism $\psi:V\to V'$ with $\psi(v_1)=v_1$; and \\ ~-~ for each pair $(i,j)\in P_{\mathrm{prc}}(a,b,c)$, if $v_j\in V'$ then $v_i\in V'$; and \item[(b)] For each pair of vertices $v_i$ and $v_j$ in $T(a,b,c)$ such that $v_i$ is the parent of $v_j$, there is a sequence $(i_1,i_2),(i_2,i_3),\ldots,(i_{k-1},i_k)$ of index pairs in $P_{\mathrm{prc}}(a,b,c)$ such that $i_1=i$ and $i_k=j$. \end{enumerate} Note that a proper set $P_{\mathrm{prc}}(a,b,c)$ is not necessarily unique. \begin{figure}[ht!] \begin{center} \includegraphics[width=.90\columnwidth]{branch-height_tree.eps} \end{center} \caption{An illustration of rooted trees and a 2-branch-tree: (a) A tree $H_1$ with odd diameter 11; (b) A tree $H_2$ with even diameter 10; (c) The 2-branch-tree of $H_2$. } \label{fig:branch-height_tree} \end{figure} \medskip\noindent{\bf Branch-height in Trees} In this paper, we introduce ``branch-height'' of a tree as a new measure of the ``agglomeration degree'' of trees. We specify a non-negative integer $k$, called a {\em branch-parameter}, to define branch-height.
First we regard $T$ as a rooted tree by choosing the center of $T$ as the root. Figure~\ref{fig:branch-height_tree}(a) and (b) illustrate examples of rooted trees. We introduce the following terminology on a rooted tree $T$. \begin{itemize} \item[-] A {\em leaf $k$-branch}: a non-root vertex $v$ in $T$ such that $\mathrm{height}(v)= k$. \item[-] A {\em non-leaf $k$-branch}: a vertex $v$ in $T$ such that $v$ has at least two children $u$ with $\mathrm{height}(u)\geq k$. We call a leaf or non-leaf $k$-branch a {\em $k$-branch}. Figure~\ref{fig:branch-height_tree_k123}(a)-(c) illustrate the $k$-branches of the rooted tree $H_2$ in Figure~\ref{fig:branch-height_tree}(b) for $k=1,2$ and $3$, respectively. \item[-] A {\em $k$-branch-path}: a path $P$ in $T$ that joins two vertices $u$ and $u'$ such that each of $u$ and $u'$ is the root or a $k$-branch and $P$ does not contain the root or a $k$-branch as an internal vertex. \item[-] The {\em $k$-branch-subtree} of $T$: the subtree of $T$ that consists of the edges in all $k$-branch-paths of $T$. We call a vertex (resp., an edge) in $T$ a {\em $k$-internal vertex} (resp., a {\em $k$-internal edge}) if it is contained in the $k$-branch-subtree of $T$ and a {\em $k$-external vertex} (resp., a {\em $k$-external edge}) otherwise. Let $V^\mathrm{in}$ and $V^\mathrm{ex}$ (resp., $E^\mathrm{in}$ and $E^\mathrm{ex}$) denote the sets of $k$-internal and $k$-external vertices (resp., edges) in $T$. \item[-] The {\em $k$-branch-tree} of $T$: the rooted tree obtained from the $k$-branch-subtree of $T$ by replacing each $k$-branch-path with a single edge. Figure~\ref{fig:branch-height_tree}(c) illustrates the $2$-branch-tree of the rooted tree $H_2$ in Figure~\ref{fig:branch-height_tree}(b). \item[-] A {\em $k$-fringe-tree}: One of the connected components that consists of the edges not in any $k$-branch-subtree. 
Each $k$-fringe-tree $T'$ contains exactly one vertex $v$ in a $k$-branch-subtree, where $T'$ is regarded as a tree rooted at $v$. Note that the height of any $k$-fringe-tree is at most $k$. Figure~\ref{fig:branch-height_tree_k123}(a)-(c) illustrate the $k$-fringe-trees of the rooted tree $H_2$ in Figure~\ref{fig:branch-height_tree}(b) for $k=1,2$ and $3$, respectively. \item[-] The {\em $k$-branch-leaf-number} $\mathrm{bl}_k(T)$: the number of leaf $k$-branches in $T$. For the trees $H_i$, $i=1,2$ in Figure~\ref{fig:branch-height_tree}(a) and (b), it holds that $\mathrm{bl}_0(H_1)= \mathrm{bl}_0(H_2)=8$, $\mathrm{bl}_1(H_1)= \mathrm{bl}_1(H_2)=5$, $\mathrm{bl}_2(H_1)= \mathrm{bl}_2(H_2)=3$ and $\mathrm{bl}_3(H_1)= \mathrm{bl}_3(H_2)=2$. \item[-] The {\em $k$-branch-height} $\mathrm{bh}_k(T)$ of $T$: the maximum number of non-root $k$-branches along a path from the root to a leaf of $T$; i.e., $\mathrm{bh}_k(T)$ is the height of the $k$-branch-tree $T^*$ (the maximum length of a path from the root to a leaf in $T^*$). For the trees $H_i$, $i=1,2$ in Figure~\ref{fig:branch-height_tree}(a) and (b), it holds that $\mathrm{bh}_0(H_1)=\mathrm{bh}_0(H_2)=5$, $\mathrm{bh}_1(H_1)=\mathrm{bh}_1(H_2)=3$, $\mathrm{bh}_2(H_1)=\mathrm{bh}_2(H_2)=2$ and $\mathrm{bh}_3(H_1)=\mathrm{bh}_3(H_2)=1$. \end{itemize} \begin{figure}[ht!] \begin{center} \includegraphics[width=.94\columnwidth]{branch-height_tree_k123.eps} \end{center} \caption{An illustration of the $k$-branches (depicted by gray circles), the $k$-branch-subtree (depicted by solid lines) and $k$-fringe-trees (depicted by dashed lines) of $H_2$: (a) $k=1$; (b) $k=2$; (c) $k=3$. } \label{fig:branch-height_tree_k123} \end{figure} We observe that most chemical graphs $G$ with at most 50 non-hydrogen atoms satisfy $\mathrm{bh}_2(G)\leq 2$.
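The quantities $\mathrm{bl}_k(T)$ and $\mathrm{bh}_k(T)$ can be computed directly from the definitions above; the following is a minimal sketch, with a rooted tree encoded as children lists rooted at vertex $1$ (function names are ours, not from the paper):

```python
def branch_stats(cld, k, root=1):
    """Return (bl_k, bh_k) of a rooted tree given as children lists."""
    h = {}
    def height(v):                     # height(v) = 0 for each leaf v
        h[v] = 1 + max((height(u) for u in cld[v]), default=-1)
        return h[v]
    height(root)
    # leaf k-branch: a non-root vertex v with height(v) = k
    bl = sum(1 for v in h if v != root and h[v] == k)
    def is_branch(v):                  # leaf or non-leaf k-branch
        if v != root and h[v] == k:
            return True
        return sum(1 for u in cld[v] if h[u] >= k) >= 2
    def bh(v, cnt):                    # max #non-root k-branches on a root-leaf path
        if v != root and is_branch(v):
            cnt += 1
        return max((bh(u, cnt) for u in cld[v]), default=cnt)
    return bl, bh(root, 0)
```

For example, on a path of length 3 rooted at an end-vertex, the unique vertex of height 1 is the only leaf 1-branch, so $\mathrm{bl}_1=\mathrm{bh}_1=1$.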
See Appendix~A for a summary of statistical features of chemical graphs registered in the chemical database PubChem. \subsection{Modeling of Chemical Compounds}\label{sec:chemical_model} We represent the graph structure of a chemical compound as a graph with labels on vertices and multiplicity on edges in a hydrogen-suppressed model. Let $\Lambda$ be a set of labels each of which represents a chemical element such as {\tt C} (carbon), {\tt O} (oxygen), {\tt N} (nitrogen) and so on, where we assume that $\Lambda$ does not contain {\tt H} (hydrogen). Let $\mathrm{mass}({\tt a})$ and $\mathrm{val}({\tt a})$ denote the mass and valence of a chemical element ${\tt a}\in \Lambda$, respectively. In our model, we use integers $\mathrm{mass}^*({\tt a})=\lfloor 10\cdot \mathrm{mass}({\tt a})\rfloor$, ${\tt a}\in \Lambda$ and assume that each chemical element ${\tt a}\in \Lambda$ has a unique valence $\mathrm{val}({\tt a})\in [1,4]$. We introduce a total order $<$ over the elements in $\Lambda$ according to their mass values; i.e., we write ${\tt a<b}$ for chemical elements ${\tt a,b}\in \Lambda$ with $\mathrm{mass}({\tt a})<\mathrm{mass}({\tt b})$. Choose a set $\Gamma_{<}$ of tuples $\gamma=({\tt a,b},m)\in\Lambda\times \Lambda\times [1,3]$ such that ${\tt a<b}$. For a tuple $\gamma=({\tt a,b},m)\in\Lambda\times \Lambda\times [1,3]$, let $\overline{\gamma}$ denote the tuple $({\tt b,a},m)$. Set $\Gamma_{>}=\{\overline{\gamma}\mid \gamma\in \Gamma_{<}\}$ and $\Gamma_{=}=\{({\tt a,a},m)\mid {\tt a}\in \Lambda, m\in [1,3]\}$. A pair of atoms ${\tt a}$ and ${\tt b}$ joined by a bond of multiplicity $m$ is denoted by a tuple $\gamma=({\tt a,b},m)\in \Gamma$, called the {\em adjacency-configuration} of the atom pair. We use a hydrogen-suppressed model because hydrogen atoms can be added at the final stage.
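The sets $\Gamma_{<}$, $\Gamma_{>}$ and $\Gamma_{=}$ can be generated from a label set ordered by mass; a minimal sketch, where the element table is illustrative and the full product over $\Lambda$ is built (the paper chooses $\Gamma_{<}$ as a subset of it):

```python
from math import floor

# Illustrative (mass, valence) data for a few elements in Lambda.
ELEM = {'C': (12.011, 4), 'N': (14.007, 3), 'O': (15.999, 2)}

def mass_star(a):
    return floor(10 * ELEM[a][0])      # mass*(a) = floor(10 * mass(a))

def adjacency_configs(lam, bonds=(1, 2, 3)):
    """Full Gamma_<, Gamma_> and Gamma_= over a label set lam ordered by mass."""
    lt = {(a, b, m) for a in lam for b in lam for m in bonds
          if ELEM[a][0] < ELEM[b][0]}
    gt = {(b, a, m) for (a, b, m) in lt}          # reversed tuples
    eq = {(a, a, m) for a in lam for m in bonds}  # equal-label pairs
    return lt, gt, eq
```

For $\Lambda=\{{\tt C},{\tt O}\}$ this yields the three configurations $({\tt C},{\tt O},m)$, $m\in[1,3]$, in $\Gamma_{<}$, their reversals in $\Gamma_{>}$, and six equal-label tuples in $\Gamma_{=}$.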
A {\em chemical graph} over $\Lambda$ and $\Gamma_{<}\cup \Gamma_{=}$ is defined to be a tuple $G=(H,\alpha,\beta)$ of a graph $H=(V,E)$, a function $\alpha:V\to \Lambda$ and a function $\beta: E\to [1,3]$ such that \begin{enumerate} \item[(i)] $H$ is connected; \item[(ii)] $\sum_{uv\in E}\beta(uv)\leq \mathrm{val}(\alpha(u))$ for each vertex $u\in V$; and \item[(iii)] $(\alpha(u),\alpha(v),\beta(uv))\in \Gamma_{<}\cup \Gamma_{=}$ for each edge $uv\in E$. \end{enumerate} For notational convenience, we denote the sum of bond-multiplicities of edges incident to a vertex as follows: \[ \beta(u) \triangleq \sum_{uv\in E}\beta(uv) \mbox{ for each vertex $u\in V$.}\] A chemical graph $G=(H,\alpha,\beta)$ is called a ``chemical monocyclic graph'' if the graph $H$ is a monocyclic graph; chemical graphs for other types of underlying graphs $H$ are defined analogously. We define the {\em bond-configuration} of an edge $e=uv \in E$ in a chemical graph $G$ to be a tuple $(\deg_H(u),\deg_H(v),\beta(e))$ such that $\deg_H(u)\leq \deg_H(v)$ for the end-vertices $u$ and $v$ of $e$. Let $\mathrm{Bc}$ denote the set of bond-configurations $\mu=(d_1,d_2,m)\in [1,4]\times[1,4]\times[1,3]$ such that $\max\{d_1,d_2\}+m\leq 4$. We regard $(d_1,d_2,m)$ and $(d_2,d_1,m)$ as the same bond-configuration. For two tuples $\mu=(d_1,d_2,m), \mu'=(d'_1,d'_2,m')\in \mathrm{Bc}$, we write $\mu\geq \mu'$ if $\max\{d_1,d_2\}\geq \max\{d'_1,d'_2\}$, $\min\{d_1,d_2\}\geq \min\{d'_1,d'_2\}$ and $m\geq m'$, and write $\mu> \mu'$ if $\mu\geq \mu'$ and $\mu\neq \mu'$. \subsection{Descriptors} In our method, we use only graph-theoretical descriptors for defining a feature vector, which facilitates the design of an algorithm for constructing graphs. Given a chemical acyclic graph $G=(H,\alpha,\beta)$, we define a {\em feature vector} $f(G)$ that consists of the following 11 kinds of descriptors. We choose an integer $k^*\in [1,4]$ as a branch-parameter. \begin{itemize} \item[-] $n(G)$: the number $|V|$ of vertices.
\item[-] $\mathrm{dg}_i^\mathrm{in}(G)$, $i\in [1,4]$: the number of $k^*$-internal vertices of degree $i$ in $H$; i.e., $\mathrm{dg}_i^\mathrm{in}(G)\triangleq |\{v\in V^\mathrm{in} \mid \deg_{H}(v)=i\}|$, where the multiplicity of edges incident to a vertex $v$ is ignored in the degree of $v$. \item[-] $\mathrm{dg}_i^\mathrm{ex}(G)$, $i\in [1,4]$: the number of $k^*$-external vertices of degree $i$ in $H$; i.e., $\mathrm{dg}_i^\mathrm{ex}(G)\triangleq |\{v\in V^\mathrm{ex} \mid \deg_{H}(v)=i\}|$. \item[-] $\overline{\mathrm{dia}}(G)$: the diameter of $H$ divided by $|V|$; i.e., $\overline{\mathrm{dia}}(G)\triangleq \mathrm{dia}(H)/n(G)$. \item[-] $\mathrm{bl}_{k^*}(G)$: the $k^*$-branch-leaf-number of $G$. \item[-] $\mathrm{bh}_{k^*}(G)$: the $k^*$-branch-height of $G$. \item[-] $\mathrm{ce}_{\tt a}^\mathrm{in}(G)$, ${\tt a}\in \Lambda$: the number of $k^*$-internal vertices with label ${\tt a}\in \Lambda$; i.e., $\mathrm{ce}_{\tt a}^\mathrm{in}(G)\triangleq |\{ v\in V^\mathrm{in} \mid \alpha(v)={\tt a}\}|$. \item[-] $\mathrm{ce}_{\tt a}^\mathrm{ex}(G)$, ${\tt a}\in \Lambda$: the number of $k^*$-external vertices with label ${\tt a}\in \Lambda$; i.e., $\mathrm{ce}_{\tt a}^\mathrm{ex}(G)\triangleq |\{ v\in V^\mathrm{ex} \mid \alpha(v)={\tt a}\}|$. \item[-] $\overline{\mathrm{ms}}(G)$: the average mass$^*$ of atoms in $G$; i.e., $\overline{\mathrm{ms}}(G)\triangleq \sum_{v\in V}\mathrm{mass}^*(\alpha(v))/n(G)$. \item[-] $\mathrm{bd}_m^\mathrm{in}(G)$, $m=2,3$: the number of double ($m=2$) and triple ($m=3$) bonds among the $k^*$-internal edges; i.e., $\mathrm{bd}_m^\mathrm{in}(G)\triangleq |\{e\in E^\mathrm{in} \mid \beta(e)=m\}|$, $m=2,3$. \item[-] $\mathrm{bd}_m^\mathrm{ex}(G)$, $m=2,3$: the number of double ($m=2$) and triple ($m=3$) bonds among the $k^*$-external edges; i.e., $\mathrm{bd}_m^\mathrm{ex}(G)\triangleq |\{e\in E^\mathrm{ex} \mid \beta(e)=m\}|$, $m=2,3$.
\item[-] $\mathrm{ac}_{\gamma}^\mathrm{in}(G)$, $\gamma=({\tt a,b},m)\in \Gamma$: the number of adjacency-configurations $({\tt a,b},m)$ of $k^*$-internal edges in $G$. \item[-] $\mathrm{ac}_{\gamma}^\mathrm{ex}(G)$, $\gamma=({\tt a,b},m)\in \Gamma$: the number of adjacency-configurations $({\tt a,b},m)$ of $k^*$-external edges in $G$. \item[-] $\mathrm{bc}_{\mu}^\mathrm{in}(G)$, $\mu=(d,d',m)\in \mathrm{Bc}$: the number of bond-configurations $(d,d',m)$ of $k^*$-internal edges in $G$. \item[-] $\mathrm{bc}_{\mu}^\mathrm{ex}(G)$, $\mu=(d,d',m)\in \mathrm{Bc}$: the number of bond-configurations $(d,d',m)$ of $k^*$-external edges in $G$. \item[-] $n_{\tt H}(G)$: the number of hydrogen atoms; i.e., \\ ~~~ $\displaystyle{ n_{\tt H}(G)\triangleq \sum_{{\tt a}\in \Lambda, {\tt t}\in\{\mathrm{in},\mathrm{ex}\}} \mathrm{val}({\tt a})\mathrm{ce}_{\tt a}^\mathrm{t}(G) - \sum_{\gamma=({\tt a,b},m)\in \Gamma, {\tt t}\in\{\mathrm{in},\mathrm{ex}\}} 2m\cdot \mathrm{ac}_{\gamma}^\mathrm{t}(G) }$\\ ~~~~~~~~~~~ $\displaystyle{ = \sum_{{\tt a}\in \Lambda, {\tt t}\in\{\mathrm{in},\mathrm{ex}\}} \mathrm{val}({\tt a})\mathrm{ce}_{\tt a}^\mathrm{t}(G) -2(n(G)-1 + \sum_{m\in [2,3], {\tt t}\in\{\mathrm{in},\mathrm{ex}\}}(m-1)\cdot \mathrm{bd}_m^\mathrm{t}(G)) }$. \end{itemize} The number $K$ of descriptors in our feature vector $x=f(G)$ is $K=2|\Lambda|+2|\Gamma|+50$. Note that the set of the above $K$ descriptors is not independent in the sense that some descriptors depend on combinations of other descriptors in the set. For example, the descriptor $\mathrm{bd}_i^\mathrm{in}(G)$ is determined by $\sum_{\gamma=({\tt a,b},m)\in \Gamma: m=i }\mathrm{ac}_{\gamma}^\mathrm{in}(G)$. \section{A Method for Inferring Chemical Graphs}\label{sec:inverse_process} \subsection{Framework for the Inverse QSAR/QSPR} We review the framework that solves the inverse QSAR/QSPR by using MILPs~\cite{IAWSNA20,ZCSNA20}, which is illustrated in Figure~\ref{fig:framework}.
For a specified chemical property $\pi$ such as boiling point, we denote by $a(G)$ the observed value of the property $\pi$ for a chemical compound $G$. As the first phase, we solve (I) {\sc Prediction Problem} in the following three stages. \\ \noindent {\bf Phase~1.} \\ \smallskip\noindent {\bf Stage~1:}~ Let $\mathrm{DB}$ be a set of chemical graphs. For a specified chemical property $\pi$, choose a class $\G$ of graphs such as acyclic graphs or monocyclic graphs. Prepare a data set $D_{\pi}=\{G_i\mid i=1,2,\ldots,m\}\subseteq \G\cap \mathrm{DB}$ such that the value $a(G_i)$ of each chemical graph $G_i$, $i=1,2,\ldots,m$ is available. Set reals $\underline{a}, \overline{a}\in \mathbb{R}$ so that $\underline{a}\leq a(G_i)\leq \overline{a}$, $i=1,2,\ldots,m$. \smallskip\noindent {\bf Stage~2:}~ Introduce a feature function $f: \G\to \mathbb{R}^K$ for a positive integer $K$. We call $f(G)$ the {\em feature vector} of $G\in \G$, and call each entry of a vector $f(G)$ a {\em descriptor} of $G$. \smallskip\noindent {\bf Stage~3:}~ Construct a prediction function $\psi_\mathcal{N}$ with an ANN $\mathcal{N}$ that, given a vector in $\mathbb{R}^K$, returns a real in the range $[\underline{a},\overline{a}]$ so that $\psi_\mathcal{N}(f(G))$ takes a value nearly equal to $a(G)$ for many chemical graphs in $D_{\pi}$. See Figure~\ref{fig:framework}(a) for an illustration of Stages~1, 2 and 3 in Phase~1.
\begin{figure}[!ht] \begin{center} \includegraphics[width=.98\columnwidth]{framework_I_II.eps} \end{center} \caption{ (a) An illustration of Phase~1: Stage~1 for preparing a data set $D_{\pi}$ for a graph class $\G$ and a specified chemical property $\pi$; Stage~2 for introducing a feature function $f$ with descriptors; Stage~3 for constructing a prediction function $\psi_\mathcal{N}$ with an ANN $\mathcal{N}$; (b) An illustration of Phase~2: Stage~4 for formulating an MILP $\mathcal{M}(x,y,g;\mathcal{C}_1,\mathcal{C}_2)$ and finding a feasible solution $(x^*,g^*)$ of the MILP for a target value $y^*$ so that $\psi_\mathcal{N}(x^*)=y^*$ (possibly detecting that no target graph $G^*$ exists); Stage~5 for enumerating graphs $G^*\in \G$ such that $f(G^*)=x^*$. } \label{fig:framework} \end{figure} In this paper, we use the range-based method to define an applicability domain (AD)~\cite{Netzeva05} for our inverse QSAR/QSPR. Set $\underline{x_j}$ and $\overline{x_j}$ to be the minimum and maximum values of the $j$-th descriptor $x_j$ in $f(G_i)$ over all graphs $G_i$, $i=1,2,\ldots,m$ (where some descriptors may be normalized; e.g., $\mathrm{ce}_{\tt a}^\mathrm{in}(G)$ is normalized as $\mathrm{ce}_{\tt a}^\mathrm{in}(G)/n(G)$). Define our AD $\mathcal{D}$ to be the set of vectors $x\in \mathbb{R}^K$ such that $\underline{x_j}\leq x_j\leq \overline{x_j}$ for the variable $x_j$ of each $j$-th descriptor, $j=1,2,\ldots,K$. In the second phase, we try to find a vector $x^*\in \mathbb{R}^K$ from a target value $y^*$ of the chemical property $\pi$ such that $\psi_\mathcal{N}(x^*)=y^*$. Based on the method due to Akutsu and Nagamochi~\cite{AN19}, Chiewvanichakorn~et~al.~\cite{CWZSNA20} showed that this problem can be formulated as an MILP. By including a set of linear constraints such that $x\in \mathcal{D}$ into their MILP, we obtain the next result.
\begin{theorem} \label{Th1}{\rm (\cite{IAWSNA20,ZCSNA20})} Let $\mathcal{N}$ be an ANN with a piecewise-linear activation function for an input vector $x\in \mathbb{R}^K$, $n_A$ denote the number of nodes in the architecture and $n_B$ denote the total number of break-points over all activation functions. Then there is an MILP $\mathcal{M}(x,y;\mathcal{C}_1)$ that consists of variable vectors $x\in \mathcal{D}~(\subseteq \mathbb{R}^K)$, $y\in \mathbb{R}$, and an auxiliary variable vector $z\in \mathbb{R}^p$ for some integer $p=O(n_A+n_B)$ and a set $\mathcal{C}_1$ of $O(n_A+n_B)$ constraints on these variables such that: $\psi_{\mathcal{N}}(x^*)=y^*$ if and only if there is a vector $(x^*,y^*)$ feasible to $\mathcal{M}(x,y;\mathcal{C}_1)$. \end{theorem} See Appendix~\ref{sec:AD} for the set of constraints to define our AD $\mathcal{D}$ in the MILP $\mathcal{M}(x,y;\mathcal{C}_1)$ in Theorem~\ref{Th1}. A vector $x\in \mathbb{R}^K$ is called {\em admissible} if there is a graph $G\in \G$ such that $f(G)=x$~\cite{ACZSNA20}. Let $\mathcal{A}$ denote the set of admissible vectors $x\in \mathbb{R}^K$. To ensure that a vector $x^*$ inferred from a given target value $y^*$ becomes admissible, we introduce a new vector variable $g\in \mathbb{R}^{q}$ for an integer $q$. For the class $\G$ of chemical acyclic graphs, Azam~et~al.~\cite{ACZSNA20} introduced a set $\mathcal{C}_2$ of new constraints with a new vector variable $g\in \mathbb{R}^{q}$ for an integer $q$ so that a feasible solution $(x^*,g^*)$ of a new MILP for a target value $y^*$ delivers a vector $x^*$ with $\psi_{\mathcal{N}}(x^*)=y^*$ and a vector $g^*$ that represents a chemical acyclic graph $G^*\in \G$. 
Afterwards, for the classes of chemical graphs with cycle index 1 and 2, Ito~et~al.~\cite{IAWSNA20} and Zhu~et~al.~\cite{ZCSNA20} presented such a set $\mathcal{C}_2$ of constraints so that a vector $g^*$ in a feasible solution $(x^*,g^*)$ of a new MILP can represent a chemical graph $G^*$ in the class $\G$, respectively. As the second phase, we solve (II) {\sc Inverse Problem} for the inverse QSAR/QSPR by treating the following inference problems. \smallskip \noindent (II-a) Inference of Vectors \\ {\bf Input:} A real $y^*$ with $\underline{a}\leq y^*\leq \overline{a}$. \\ {\bf Output:} Vectors $x^*\in \mathcal{A}\cap \mathcal{D}$ and $g^*\in \mathbb{R}^{q}$ such that $\psi_\mathcal{N}(x^*)=y^*$ and $g^*$ forms a chemical graph $G^*\in \G$ with $f(G^*)=x^*$. \bigskip \noindent (II-b) Inference of Graphs \\ {\bf Input:} A vector $x^*\in \mathcal{A}\cap \mathcal{D}$. \\ {\bf Output:} All graphs $G^*\in \G$ such that $f(G^*)=x^*$. \smallskip The second phase consists of the following two stages. \medskip \noindent {\bf Phase~2.} \\ \smallskip\noindent {\bf Stage~4:}~ Formulate Problem (II-a) as the above MILP $\mathcal{M}(x,y,g;\mathcal{C}_1,\mathcal{C}_2)$ based on $\G$ and $\mathcal{N}$. Find a feasible solution $(x^*,g^*)$ of the MILP such that \[\mbox{ $x^*\in \mathcal{A}\cap \mathcal{D}$ and $\psi_\mathcal{N}(x^*)=y^*$ }\] (where the second requirement may be replaced with inequalities $(1-\varepsilon)y^* \leq \psi_\mathcal{N}(x^*) \leq(1+\varepsilon)y^*$ for a tolerance $\varepsilon>0$). \smallskip\noindent {\bf Stage~5:}~ To solve Problem (II-b), enumerate all (or a specified number) of graphs $G^*\in \G$ such that $f(G^*)=x^*$ for the inferred vector $x^*$. See Figure~\ref{fig:framework}(b) for an illustration of Stages~4 and 5 in Phase~2.
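Since the applicability domain used in Stage~4 is range-based, membership in $\mathcal{D}$ reduces to componentwise interval tests over the training feature vectors; a minimal sketch (function names are ours, not from the paper):

```python
def ad_bounds(train_vectors):
    """Componentwise minimum and maximum over the training feature vectors."""
    lo = [min(col) for col in zip(*train_vectors)]
    hi = [max(col) for col in zip(*train_vectors)]
    return lo, hi

def in_ad(x, lo, hi):
    """x lies in the AD iff lo_j <= x_j <= hi_j for every descriptor j."""
    return all(l <= xj <= h for xj, l, h in zip(x, lo, hi))
```

In the MILP these interval tests become the linear constraints $\underline{x_j}\leq x_j\leq \overline{x_j}$ on the variable vector $x$.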
\subsection{Our Target Graph Class} In this paper, we choose a branch-parameter $k\geq 1$ and define a class $\G$ of chemical acyclic graphs $G$ such that \\ - the maximum degree in $G$ is at most 4; \\ - the $k$-branch-height $\mathrm{bh}_k(G)$ is bounded for a specified branch-parameter $k$; and \\ - the size of each $k$-fringe-tree in $G$ is bounded. The reason why we restrict ourselves to the graphs in $\G$ is that this class $\G$ covers a large part of the acyclic chemical compounds registered in the chemical database PubChem. See Appendix~\ref{sec:statistical} for a summary of the statistical features of the chemical graphs in PubChem in terms of $k$-branch-height and the size of $2$-fringe-trees. According to this, over 55\% (resp., 99\%) of acyclic chemical compounds with up to 100 non-hydrogen atoms in PubChem have the maximum degree 3 (resp., 4); and nearly 87\% (resp., 99\%) of acyclic chemical compounds with up to 50 non-hydrogen atoms in PubChem have a $2$-branch-height of at most 1 (resp., 2). This implies that $k=2$ is sufficient to cover most of the chemical acyclic graphs. For $k=2$, over 92\% of 2-fringe-trees of chemical compounds with up to 100 non-hydrogen atoms in PubChem obey the following size constraint: \begin{equation}\label{eq:fringe-size} \mbox{$n \le 2d + 2$ for each 2-fringe-tree $T$ with $n$ vertices and $d$ children of the root. }\end{equation} We formulate an MILP in Stage~4 that, given a target value $y^*$, infers a vector $x^*\in \mathbb{Z}_+^K$ with $\psi_\mathcal{N}(x^*)=y^*$ and a chemical acyclic graph $G^*=(H,\alpha,\beta)\in\G$ with $f(G^*)=x^*$. We here specify some of the features of a graph $G^*\in\G$ such as the number of non-hydrogen atoms in order to control the graph structure of target graphs to be inferred and to simplify MILP formulations.
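The size constraint above ($n\leq 2d+2$ per $2$-fringe-tree) is a purely local test on each fringe-tree; a minimal sketch, with a fringe-tree encoded as children lists rooted at vertex $1$ (names are ours):

```python
def fringe_size_ok(cld, root=1):
    """Check n <= 2*d + 2 for a 2-fringe-tree with n vertices and
    d children of its root (children-list encoding)."""
    n = len(cld)            # number of vertices in the fringe-tree
    d = len(cld[root])      # number of children of the root
    return n <= 2 * d + 2
```

For example, a root with two children that each have one child gives $n=5$, $d=2$ and satisfies $5\leq 6$, while $n=5$ with $d=1$ violates the bound.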
In this paper, we specify the following features of a graph $G\in\G$: a set $\Lambda$ of chemical elements, a set $\Gamma_{<}$ of adjacency-configurations, the maximum degree, the number of non-hydrogen atoms, the diameter, the $k$-branch-height and the $k$-branch-leaf-number for a branch-parameter $k$. More formally, given specified integers $n^*,d_\mathrm{max}, \mathrm{dia}^*, k^*, \mathrm{bh}^*,\mathrm{bl}^*\in \mathbb{Z}$ in addition to $\Lambda$ and $\Gamma$, let $\mathcal{H}(n^*, d_\mathrm{max}, \mathrm{dia}^*, k^*, \mathrm{bh}^*, \mathrm{bl}^*)$ denote the set of acyclic graphs $H$ such that \\ ~~~ the maximum degree of a vertex is at most 3 when $d_\mathrm{max}=3$ (or equal to 4 when $d_\mathrm{max}=4$), \\ ~~~ the number $n(H)$ of vertices in $H$ is $n^*$, \\ ~~~ the diameter $\mathrm{dia}(H)$ of $H$ is $\mathrm{dia}^*$, \\ ~~~ the $k^*$-branch-height $\mathrm{bh}_{k^*}(H)$ is $\mathrm{bh}^*$, \\ ~~~ the $k^*$-branch-leaf-number $\mathrm{bl}_{k^*}(H)$ is $\mathrm{bl}^*$ and \\ ~~~ (\ref{eq:fringe-size}) holds. To design Stage~4 for our class $\G$, we formulate an MILP $\mathcal{M}(x,g; \mathcal{C}_2)$ that infers a chemical graph $G^*=(H,\alpha,\beta)\in \G$ with $H\in \mathcal{H}(n^*, d_\mathrm{max}, \mathrm{dia}^*, k^*, \mathrm{bh}^*, \mathrm{bl}^*)$ for a given specification $(\Lambda,\Gamma,n^*, d_\mathrm{max}, \mathrm{dia}^*, k^*, \mathrm{bh}^*, \mathrm{bl}^*)$. The details will be given in Section~\ref{sec:graph_MILP} and Appendix~\ref{sec:full_milp}. \bigskip The design of Stage~5, i.e., generating chemical graphs $G^*$ that satisfy $f(G^*)=x^*$ for a given feature vector $x^*\in\mathbb{Z}_+^K$, is still challenging for a relatively large instance with size $n(G^*)\geq 20$. Several algorithms have been proposed for generating chemical graphs $G^*$ in Stage~5 for the classes of graphs with cycle index 0 to 2~\cite{Fujiwara08,Suzuki14,2A1B20,2A2B20}.
All of these are designed based on the branch-and-bound method and can generate a target chemical graph with size $n(G^*)\leq 20$. To break this barrier, we newly employ a dynamic programming method for designing an algorithm in Stage~5 in order to generate a target chemical graph $G^*$ with size $n(G^*)=50$. For this, we further restrict the structure of acyclic graphs $G$ so that the number $\mathrm{bl}_2(G)$ of leaf $2$-branches is at most 3. Among all acyclic chemical compounds with up to 50 non-hydrogen atoms in the chemical database PubChem, the ratio of the number of acyclic chemical compounds $G$ with $\mathrm{bl}_2(G)\leq 2$ (resp., $\mathrm{bl}_2(G)\leq 3$) is 78\% (resp., 95\%). See Section~\ref{sec:graph_search} for the details on the new algorithm in Stage~5. \section{MILPs for Chemical Acyclic Graphs with Bounded Branch-height} \label{sec:graph_MILP} In this section, we formulate an MILP $\mathcal{M}(x,g;\mathcal{C}_2)$ to infer a chemical acyclic graph $G$ in the class $\G$ for a given specification $(\Lambda,\Gamma,n^*, d_\mathrm{max}, \mathrm{dia}^*, k^*, \mathrm{bh}^*, \mathrm{bl}^*)$ defined in the previous section. \subsection{Scheme Graphs} We introduce a directed graph with size $O(n^*\cdot (d_\mathrm{max}-1)^{\max\{\mathrm{bh}^*,k^*\}} + (d_\mathrm{max}-1)^{\mathrm{bh}^*+k^*})$, called a {\em scheme graph} $\mathrm{SG}$, so that an acyclic graph $H\in \mathcal{H}(n^*, d_\mathrm{max}, \mathrm{dia}^*, k^*, \mathrm{bh}^*, \mathrm{bl}^*)$ can be chosen from the scheme graph $\mathrm{SG}$. Let $t^*$, $s^*$ and $c^*$ be integers such that \[ t^*= n^* - (\mathrm{bh}^* -1) - (k^*+1)\mathrm{bl}^*,\] \[\mbox{ $s^*=a(b^c-1)/(b-1)+1$ for $a=d_\mathrm{max}$, $b=d_\mathrm{max}\!-\!1$ and $c=\mathrm{bh}^*$,}\] \[c^*=s^*-1.
\] Let a scheme graph $\mathrm{SG}( d_\mathrm{max}, k^*, \mathrm{bh}^*, t^*)$ consist of a tree $T_B$, a path $P_{t^*}$, a set $\{S_s\mid s\in[1,s^*]\}$ of trees, a set $\{T_t\mid t\in[1,t^*]\}$ of trees, and a set of directed edges between $T_B$ and $P_{t^*}$ so that an acyclic graph $H\in \mathcal{H}(n^*, d_\mathrm{max}, \mathrm{dia}^*, k^*, \mathrm{bh}^*, \mathrm{bl}^*)$ will be constructed in the following way: \begin{enumerate} \item[(i)] The $k^*$-branch-tree of $H$ will be chosen as a subtree of $T_B=(V_B,E_B)$; \item[(ii)] Each $k^*$-fringe-tree rooted at a vertex $u_s\in V(T_B)$ of $H$ will be chosen as a subtree of $S_s$; \item[(iii)] Each $k^*$-branch-path of $H$ (except for its end-vertices) will be chosen as a subpath of $P_{t^*}$ or as an edge in $T_B$; \item[(iv)] Each $k^*$-fringe-tree rooted at a vertex $v_t\in V(P_{t^*})$ of $H$ will be chosen as a subtree of $T_t$; and \item[(v)] An edge $(u,v)$ directed from $T_B$ to $P_{t^*}$ will be selected as an initial edge of a $k^*$-branch-path of $H$ and an edge $(v,u)$ directed from $P_{t^*}$ to $T_B$ will be selected as an ending edge of a $k^*$-branch-path of $H$. \end{enumerate} More formally, each component of a scheme graph $\mathrm{SG}( d_\mathrm{max}, k^*, \mathrm{bh}^*, t^*)$ is defined as follows. \begin{enumerate} \item[(i)] $T_B =(V_B=\{u_1,u_2,\ldots,u_{s^*}\}, E_B=\{a_1,a_2,\ldots,a_{c^*}\})$, called a {\em base-tree}, is a tree rooted at a vertex $u_1$ that is isomorphic to the rooted tree $T(d_\mathrm{max}, d_\mathrm{max}\!-\!1, \mathrm{bh}^*)$. Regard $T_B$ as an ordered tree by introducing a total order for each set of siblings and call the first (resp., last) child in a set of siblings the leftmost (resp., rightmost) child, which defines the leftmost (resp., rightmost) path from the root $u_1$ to a leaf in $T_B$, as illustrated in Figure~\ref{fig:rank0_BH_scheme}(a).
For each vertex $u_s\in V_B$, let $E_B(s)$ denote the set of indices $i$ of edges $a_i\in E_B$ incident to $u_s$ and $\mathrm{Cld}_B(s)$ denote the set of indices $i$ of children $u_i\in V_B$ of $u_s$ in the tree $T_B$. For each integer $d\in [0,\mathrm{bh}^*]$, let $V_B(d)$ denote the set of indices $s$ of vertices $u_s\in V_B$ whose depth is $d$ in the tree $T_B$, where $V_B(\mathrm{bh}^*)$ is the set of indices $s$ of leaves $u_s$ of $T_B$. Regard each edge $a_i\in E_B$ as a directed edge $(u_s,u_{s'})$ from one end-vertex $u_s$ of $a_i$ to the other end-vertex $u_{s'}$ of $a_i$ such that $s=\mathrm{prt}(s')$ (i.e., $u_s$ is the parent of $u_{s'}$), where $\mathrm{head}(i)$ and $\mathrm{tail}(i)$ denote the head $u_{s'}$ and tail $u_s$ of edge $a_i\in E_B$, respectively. For each index $s\in [1,s^*]$, let $E_B^+(s)$ (resp., $E_B^-(s)$) denote the set of indices $i$ of edges $a_i\in E_B$ such that the tail (resp., head) of $a_i$ is vertex $u_{s}$. Let $L_B$ denote the set of indices of leaves of $T_B$, and $s^\mathrm{left}$ (resp., $s^\mathrm{right}$) denote the index $s\in L_B$ of the leaf $u_s$ at which the leftmost (resp., rightmost) path from the root ends. For each leaf $u_s$, $s\in L_B$, let $V_{B,s}$ (resp., $E_{B,s}$) denote the set of indices $j$ of non-root vertices $u_j$ (resp., indices $i$ of edges $a_i\in E_B$) along the path from the root to the leaf $u_s$ in the tree $T_B$. For the example of a base-tree $T_B$ with $\mathrm{bh}^*=2$ in Figure~\ref{fig:rank0_BH_scheme}, it holds that $L_B=\{5,6,7,8,9,10\}$, $s^\mathrm{left}=5$, $s^\mathrm{right}=10$, $E_{B,s^\mathrm{left}}=\{1,4\}$ and $V_{B,s^\mathrm{left}}=\{2,5\}$. \item[(ii)] $S_s$, $s\in [1,s^*]$, is a tree rooted at vertex $u_s\in V_B$ in $T_B$ that is isomorphic to the rooted tree $T(d_\mathrm{max}\!-\!1, d_\mathrm{max}\!-\!1, k^*)$, as illustrated in Figure~\ref{fig:rank0_BH_scheme}(b).
Let $u_{s,i}$ and $e'_{s,i}$ denote the vertex and edge in $S_s$ that correspond to the $i$-th vertex and the $i$-th edge in $T(d_\mathrm{max}\!-\!1, d_\mathrm{max}\!-\!1, k^*)$, respectively. Regard each edge $e'_{s,i}$ as a directed edge $(u_{s,\mathrm{prt}(i)},u_{s,i})$. Accordingly, each vertex $u_s\in V_B$ is also denoted by $u_{s,1}$. \item[(iii)] $P_{t^*}=(V_P=\{v_1,v_2,\ldots,v_{t^*}\}, E_P=\{e_2,e_3,\ldots,e_{t^*}\})$, called a {\em link-path} of size $t^*$, is a directed path from vertex $v_1$ to vertex $v_{t^*}$, as illustrated in Figure~\ref{fig:rank0_BH_scheme}(a). Each edge $e_t\in E_P$ is directed from vertex $v_{t-1}$ to vertex $v_t$. \item[(iv)] $T_t$, $t\in [1,t^*]$, is a tree rooted at vertex $v_t$ in $P_{t^*}$ that is isomorphic to the rooted tree $T(d_\mathrm{max}\!-\!2, d_\mathrm{max}\!-\!1, k^*)$, as illustrated in Figure~\ref{fig:rank0_BH_scheme}(c). Let $v_{t,i}$ and $e_{t,i}$ denote the vertex and edge in $T_t$ that correspond to the $i$-th vertex and the $i$-th edge in $T(d_\mathrm{max}\!-\!2, d_\mathrm{max}\!-\!1, k^*)$, respectively. Regard each edge $e_{t,i}$ as a directed edge $(v_{t,\mathrm{prt}(i)},v_{t,i})$. Accordingly, each vertex $v_t\in V_P$ is also denoted by $v_{t,1}$. \item[(v)] For every pair $(s,t)$ with $s\in [1,s^*]$ and $t\in[1,t^*]$, join vertices $u_{s}$ and $v_{t}$ with directed edges $(u_{s},v_{t})$ and $(v_{t},u_{s})$, as illustrated in Figure~\ref{fig:rank0_BH_scheme}(a). \end{enumerate} \begin{figure}[ht!]
\begin{center} \includegraphics[width=.80\columnwidth]{rank0_BH_scheme.eps} \end{center} \caption{An illustration of scheme graph $\mathrm{SG}( d_\mathrm{max}, k^*, \mathrm{bh}^*, t^*)$ with $d_\mathrm{max}=3$, $k^*=2$, $\mathrm{bh}^*=2$, and $t^*=5$, where the vertices in $T_B$ (resp., in $P_{t^*}$) are depicted with black (resp., gray) circles: (a) A base-tree $T_B$ and a link-path $P_{t^*}$ are joined with directed edges between them; (b) A tree $S_s$ rooted at a vertex $u_s=u_{s,1}\in V_B$; (c) A tree $T_t$ rooted at a vertex $v_t=v_{t,1}\in V_P$. } \label{fig:rank0_BH_scheme} \end{figure} Figure~\ref{fig:rank0_BH_scheme_example}(a) illustrates an acyclic graph $H$ with $n(H)=37$, $\mathrm{dia}(H)=17$, $\mathrm{bh}_2(H)=2$ and $\mathrm{bl}_2(H)=3$, where the maximum degree of a vertex is 3. Figure~\ref{fig:rank0_BH_scheme_example}(b) illustrates the $2$-branch-tree of the acyclic graph $H$ in Figure~\ref{fig:rank0_BH_scheme_example}(a). Figure~\ref{fig:rank0_BH_scheme_example}(c) illustrates a subgraph $H'$ of the scheme graph $\mathrm{SG}( d_\mathrm{max}, k^*, \mathrm{bh}^*, t^*=n^*-\mathrm{bl}^*-1)$ such that $H'$ is isomorphic to the acyclic graph $H$ in Figure~\ref{fig:rank0_BH_scheme_example}(a). \begin{figure}[ht!]
\begin{center} \includegraphics[width=.99\columnwidth]{rank0_BH_scheme_example.eps} \end{center} \caption{An illustration of selecting a subgraph $H$ from the scheme graph $\mathrm{SG}( d_\mathrm{max}, k^*, \mathrm{bh}^*, t^*=n^*-\mathrm{bl}^*-1)$: (a) An acyclic graph $H\in \mathcal{H}(n^*, d_\mathrm{max}, \mathrm{dia}^*, k^*, \mathrm{bh}^*, \mathrm{bl}^*)$ with $n^*=37$, $d_\mathrm{max}=3$, $\mathrm{dia}^*=17$, $k^*=2$, $\mathrm{bh}^*=2$ and $\mathrm{bl}^*=3$, where the labels of some vertices indicate the corresponding vertices in the scheme graph $\mathrm{SG}( d_\mathrm{max}, k^*, \mathrm{bh}^*, t^*)$; (b) The $k^*$-branch-tree of $H$ for $k^*=2$; (c) An acyclic graph $H'$ selected from $\mathrm{SG}( d_\mathrm{max}, k^*, \mathrm{bh}^*, t^*)$ as a graph that is isomorphic to $H$ in (a). } \label{fig:rank0_BH_scheme_example} \end{figure} In this paper, we obtain the following result. \begin{theorem} \label{Th2} Let $\Lambda$ be a set of chemical elements, $\Gamma$ be a set of adjacency-configurations, where $|\Lambda|\leq |\Gamma|$, and $K=|\Lambda|+|\Gamma|+28$.
Given non-negative integers $n^*\geq 3$, $d_\mathrm{max}\in\{3,4\}$, $\mathrm{dia}^*\geq 3$, $k^*\geq 1$, $\mathrm{bh}^*\geq 1$ and $\mathrm{bl}^*\geq 2$, there is an MILP $\mathcal{M}(x,g;\mathcal{C}_2)$ that consists of variable vectors $x\in \mathbb{R}^K$ and $g\in \mathbb{R}^q$ for an integer $q=O(|\Gamma|\cdot [(d_\mathrm{max}\!-\!1)^{\mathrm{bh}^*+k^*} +n^*\cdot(d_\mathrm{max}\!-\!1)^{\max\{\mathrm{bh}^*,k^*\}}])$ and a set $\mathcal{C}_2$ of $O(|\Gamma| + (d_\mathrm{max}\!-\!1)^{\mathrm{bh}^*+k^*} +n^*\cdot(d_\mathrm{max}\!-\!1)^{\max\{\mathrm{bh}^*,k^*\}})$ constraints on $x$ and $g$ such that: $(x^*,g^*)$ is feasible to $\mathcal{M}(x,g; \mathcal{C}_2)$ if and only if $g^*$ forms a chemical acyclic graph $G=(H,\alpha,\beta)\in \mathcal{G}(\Lambda,\Gamma)$ such that $H\in \mathcal{H}(n^*, d_\mathrm{max}, \mathrm{dia}^*, k^*, \mathrm{bh}^*,\mathrm{bl}^*)$ and $f(G)=x^*$. \end{theorem} Note that our MILP requires only $O(n^*)$ variables and constraints when the branch-parameter $k^*$, the $k^*$-branch-height and $|\Gamma|$ are constant. We formulate an MILP in Theorem~\ref{Th2} so that such a graph $H$ is selected as a subgraph of the scheme graph. We explain the basic idea of our MILP. The MILP mainly consists of the following three types of constraints. \begin{enumerate} \item[C1.] Constraints for selecting an acyclic graph $H$ as a subgraph of the scheme graph $\mathrm{SG}( d_\mathrm{max}, k^*, \mathrm{bh}^*, t^*)$; \item[C2.] Constraints for assigning chemical elements to vertices and multiplicity to edges to determine a chemical graph $G=(H,\alpha,\beta)$; and \item[C3.] Constraints for computing descriptors from the selected acyclic chemical graph $G$. \end{enumerate} More formally, in the constraints of C1 we prepare the following.
\begin{enumerate} \item[(i)] In the scheme graph $\mathrm{SG}( d_\mathrm{max}, k^*, \mathrm{bh}^*, t^*)$, we prepare a binary variable $u(s,1)$ for each vertex $u_s=u_{s,1}\in V_B$, $s\in [1,s^*]$ so that vertex $u_s=u_{s,1}$ becomes a $k^*$-branch of a selected graph $H$ if and only if $u(s,1)=1$. The subgraph of the base-tree $T_B$ that consists of vertices $u_s=u_{s,1}$ with $u(s,1)=1$ will be the $k^*$-branch-tree of the graph $H$. We also prepare a binary variable $a(i)$, $i\in [1,c^*]$ for each edge $a_i\in E_B$, where $c^*=s^*-1$. For a pair of a vertex $u_{s,1}$ and a child $u_{s',1}$ of $u_{s,1}$ such that $u(s,1)=u(s',1)=1$, either the edge $a_i=(u_{s,1},u_{s',1})$ is used in the selected graph $H$ (when $a(i)=1$) or a path $P_i=(u_{s,1},v_{t',1},v_{t'+1,1},\ldots,v_{t'',1},u_{s',1})$ from vertex $u_{s,1}$ to vertex $u_{s',1}$ is constructed in $H$ with an edge $(u_{s,1},v_{t',1})$, a subpath $(v_{t',1},v_{t'+1,1},\ldots,v_{t'',1})$ of the link-path $P_{t^*}$ and an edge $(v_{t'',1},u_{s',1})$ (when $a(i)=0$). For example, vertices $u_{1,1}$ and $u_{2,1}$ are connected by a path $P_1=(u_{1,1},v_{1,1},v_{2,1},u_{2,1})$ in the selected graph $H'$ in Figure~\ref{fig:rank0_BH_scheme_example}(c). \item[(ii)] Let \[ n_\mathrm{tree}^{\mathrm{S}} =1+(d_\mathrm{max}\!-\!1)((d_\mathrm{max}\!-\!1)^{k^*}-1)/(d_\mathrm{max}-2),\] \[ n_\mathrm{tree}^{\mathrm{T}} =1+(d_\mathrm{max}\!-\!2)((d_\mathrm{max}\!-\!1)^{k^*}-1)/(d_\mathrm{max}-2),\] where $n_\mathrm{tree}^{\mathrm{S}} $ (resp., $n_\mathrm{tree}^{\mathrm{T}}$) is the number of vertices in the rooted tree $T(d_\mathrm{max}\!-\!1, d_\mathrm{max}\!-\!1, k^*)$ (resp., $T(d_\mathrm{max}\!-\!2, d_\mathrm{max}\!-\!1, k^*)$).
In each tree $S_s$, $s\in [1,s^*]$ (resp., $T_t$, $t\in [1,t^*]$) in the scheme graph, we prepare a binary variable $u(s,i)$ (resp., $v(t,i)$) for each vertex $u_{s,i}$, $i\in [2, n_\mathrm{tree}^{\mathrm{S}}]$ (resp., $v_{t,i}$, $i\in [2, n_\mathrm{tree}^{\mathrm{T}}]$) so that $u(s,i)=1$ (resp., $v(t,i)=1$) means that the corresponding vertex $u_{s,i}$ (resp., $v_{t,i}$) is used as a vertex in a selected graph $H$. The (non-empty) subgraph of a tree $S_s$ (resp., $T_t$) that consists of vertices $u_{s,i}$ with $u(s,i)=1$ (resp., $v_{t,i}$ with $v(t,i)=1$) will be a $k^*$-fringe-tree of a selected graph $H$. \item[(iii)] In the link-path $P_{t^*}$, we prepare a binary variable $e(t)$, $t\in [2,t^*]$ for each edge $e_{t,1}=(v_{t-1,1}, v_{t,1})\in E_P$ so that $e(t)=1$ if and only if edge $e_{t,1}$ is used in some path $P_i=(u_{s,1},v_{t',1},v_{t'+1,1},\ldots,v_{t'',1},u_{s',1})$ constructed in (i). \item[(iv)] For each pair $(s,t)$ of $s\in [1,s^*]$ and $t\in [1,t^*]$, we prepare a binary variable $e(s,t)$ (resp., $e(t,s)$) so that $e(s,t')=1$ (resp., $e(t'',s)=1$) if and only if directed edge $(u_{s,1},v_{t',1})$ (resp., $(v_{t'',1},u_{s,1})$) is used as the first edge (resp., last edge) of some path $P_i=(u_{s,1},v_{t',1},v_{t'+1,1},\ldots,$ $v_{t'',1},u_{s',1})$ constructed in (i). \end{enumerate} Based on these, we include constraints with some additional variables so that a selected subgraph $H$ is a connected acyclic graph. See constraints (\ref{eq:SG_first}) to (\ref{eq:SS_last}) in Appendix~\ref{sec:full_milp} for the details.
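As a sanity check on the vertex counts in (ii), the closed forms for $n_\mathrm{tree}^{\mathrm{S}}$ and $n_\mathrm{tree}^{\mathrm{T}}$ can be compared against a direct recursive count. The sketch below assumes that $T(a,b,k)$ denotes the rooted tree of height $k$ whose root has $a$ children and whose other internal vertices have $b$ children each, consistent with the formulas above.

```python
def n_tree(a, b, k):
    """Number of vertices of the rooted tree T(a, b, k): the root has `a`
    children, every other internal vertex has `b` children, height is `k`."""
    if k == 0:
        return 1
    # 1 (root) + a copies of a height-(k-1) subtree with branching factor b
    sub = sum(b ** i for i in range(k))  # vertices of one such subtree
    return 1 + a * sub

def n_tree_S(d_max, k):
    """Closed form for T(d_max-1, d_max-1, k) as in the text."""
    return 1 + (d_max - 1) * ((d_max - 1) ** k - 1) // (d_max - 2)

def n_tree_T(d_max, k):
    """Closed form for T(d_max-2, d_max-1, k) as in the text."""
    return 1 + (d_max - 2) * ((d_max - 1) ** k - 1) // (d_max - 2)
```

For example, $d_\mathrm{max}=3$ and $k^*=2$ give $n_\mathrm{tree}^{\mathrm{S}}=1+2+4=7$.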
In the constraints of C2, we prepare an integer variable $\widetilde{\alpha}(u)$ for each vertex $u$ in the scheme graph that represents the chemical element $\alpha(u)\in \Lambda$ if $u$ is in a selected graph $H$ (or $\widetilde{\alpha}(u)=0$ otherwise) and an integer variable $\widetilde{\beta}(e)\in [0,3]$ (resp., $\widehat{\beta}(e)\in [0,3]$) for each edge $e$ (resp., $e=e(s,t)$ or $e(t,s)$, $s\in [1,s^*]$, $t\in [1,t^*]$) in the scheme graph that represents the multiplicity $\beta(e)\in [1,3]$ if $e$ is in a selected graph $H$ (or $\widetilde{\beta}(e)$ or $\widehat{\beta}(e)$ takes $0$ otherwise). This determines a chemical graph $G=(H,\alpha,\beta)$. Also we include constraints for a selected chemical graph $G$ to satisfy the valence condition $(\alpha(u),\alpha(v),\beta(uv))\in \Gamma$ for each edge $uv\in E$. See constraints (\ref{eq:AM_first}) to (\ref{eq:AE_last}) in Appendix~\ref{sec:full_milp} for the details. In the constraints of C3, we introduce a variable for each descriptor and constraints with some more variables to compute the value of each descriptor in $f(G)$ for a selected chemical graph $G$. See constraints (\ref{eq:NE_first}) to (\ref{eq:NBC_last}) in Appendix~\ref{sec:full_milp} for the details. \section{A New Graph Search Algorithm} \label{sec:graph_search} The algorithms used in Stage~5 in the previous methods of inferring chemical acyclic graphs \cite{ACZSNA20,CWZSNA20,ZZCSNA20} are all based on the branch-and-bound algorithm proposed by Fujiwara et~al.~\cite{Fujiwara08}, where an enormous number of chemical graphs are constructed by repeatedly appending and removing a vertex one by one until a target chemical graph is obtained. Their algorithm cannot generate even one acyclic chemical graph when $n(G)$ is larger than around 20. This section designs a new algorithm for Stage~5 based on dynamic programming.
We consider the following aspects: \begin{enumerate} \item[(a)] Treat acyclic graphs with a certain limited structure that frequently appears among chemical compounds registered in chemical databases; and \item[(b)] Instead of manipulating acyclic graphs directly, first compute the frequency vectors $\pmb{f}(G')$ (some types of feature vectors) of subtrees $G'$ of all target acyclic graphs and then construct a limited number of target graphs $G$ from the process of computing the vectors. \end{enumerate} In (a), we choose a branch-parameter $k^*=2$ and treat acyclic graphs $G$ that have a small $2$-branch number such as $\mathrm{bl}_2(G)\in [2,3]$ and satisfy the size constraint (\ref{eq:fringe-size}) on 2-fringe-trees. Figure~\ref{fig:few_leaf_2-branches}(a) and (b) illustrate chemical acyclic graphs $G$ with $\mathrm{bl}_2(G)=2$ and $\mathrm{bl}_2(G)=3$, respectively. We design a method in (b) based on the mechanism of dynamic programming wherein the first phase computes some compressed forms of all substructures of target objects before the second phase realizes a final object based on the computation process of the first phase. Section~\ref{sec:frequency_vector} defines a frequency vector $\pmb{f}(G)$ that represents a feature vector $f(G)$ of a chemical graph $G$. Section~\ref{sec:search_idea} presents the idea and a sketch of our new algorithms for generating acyclic graphs $G$ with $\mathrm{bl}_2(G)\in [2,3]$. Detailed descriptions of the algorithms are presented in Appendix~\ref{sec:graph_search_appendix}.
\begin{figure}[!ht] \begin{center} \includegraphics[width=.98\columnwidth]{few_leaf_2-branches.eps} \end{center} \caption{An illustration of chemical acyclic graphs $G$ with diameter $\mathrm{dia}^*$ and $\mathrm{bl}_2(G)=2,3$: (a)~A chemical acyclic graph $G$ with two leaf 2-branches $v_1$ and $v_2$; (b)~A chemical acyclic graph $G$ with three leaf 2-branches $v_1, v_2$ and $v_3$. } \label{fig:few_leaf_2-branches} \end{figure} \subsection{Multi-rooted Trees and Frequency Vectors}\label{sec:frequency_vector} For a finite set $A$ of elements, let $\mathbb{Z}_+^A$ denote the set of functions $\pmb{w}:A\to \mathbb{Z}_+$. A function $\pmb{w}\in \mathbb{Z}_+^A$ is called a {\em non-negative integer vector} (or a vector) on $A$ and the value $\pmb{w}(a)$ for an element $a\in A$ is called the {\em entry} of $\pmb{w}$ for $a\in A$. For a vector $\pmb{w}\in \mathbb{Z}_+^A$ and an element $a\in A$, let $\pmb{w}+\1_{a}$ (resp., $\pmb{w}-\1_{a}$) denote the vector $\pmb{w}'$ such that $\pmb{w}'(a)=\pmb{w}(a)+1$ (resp., $\pmb{w}'(a)=\pmb{w}(a)-1$) and $\pmb{w}'(b)=\pmb{w}(b)$ for the other elements $b\in A\setminus\{a\}$. For a vector $\pmb{w}\in \mathbb{Z}_+^A$ and a subset $B\subseteq A$, let $\pmb{w}_{[B]}$ denote the {\em projection} of $\pmb{w}$ to $B$; i.e., $\pmb{w}_{[B]}\in \mathbb{Z}_+^B$ such that $\pmb{w}_{[B]}(b)=\pmb{w}(b)$, $b\in B$. Let $\mathrm{Bc}$ denote the set of tuples $\mu=(d_1,d_2,k)\in [1,4]\times[1,4]\times[1,3]$ (bond-configuration) such that $\max\{d_1,d_2\}+k\leq 4$. We regard that $(d_1,d_2,k)=(d_2,d_1,k)$. For two tuples $\mu=(d_1,d_2,k), \mu'=(d'_1,d'_2,k')\in \mathrm{Bc}$, we write $\mu\geq \mu'$ if $\max\{d_1,d_2\}\geq \max\{d'_1,d'_2\}$, $\min\{d_1,d_2\}\geq \min\{d'_1,d'_2\}$ and $k\geq k'$, and write $\mu> \mu'$ if $\mu\geq \mu'$ and $\mu\neq \mu'$. Let $\mathrm{Dg}=\{\mathrm{dg}1, \mathrm{dg}2, \mathrm{dg}3, \mathrm{dg}4\}$, where $\mathrm{dg} i$ denotes the number of vertices with degree~$i$.
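The vector operations above are elementary; the sketch below models them with Python's `Counter` standing in for $\mathbb{Z}_+^A$, and enumerates the bond-configuration set $\mathrm{Bc}$, realizing the identification $(d_1,d_2,k)=(d_2,d_1,k)$ by storing sorted tuples. This is an illustrative encoding, not the paper's data structure.

```python
from collections import Counter

def plus_one(w, a):
    """w + 1_a: increase the entry for element a by one."""
    w2 = Counter(w); w2[a] += 1
    return w2

def minus_one(w, a):
    """w - 1_a: decrease the entry for element a by one (assumes w[a] >= 1)."""
    w2 = Counter(w); w2[a] -= 1
    return w2

def project(w, B):
    """w_[B]: the projection of w to the index subset B."""
    return Counter({b: w[b] for b in B if w[b] > 0})

# Bond-configurations Bc: tuples (d1, d2, k) with max(d1, d2) + k <= 4,
# stored as (min, max, k) so that (d1, d2, k) and (d2, d1, k) coincide.
Bc = {(min(d1, d2), max(d1, d2), k)
      for d1 in range(1, 5) for d2 in range(1, 5) for k in range(1, 4)
      if max(d1, d2) + k <= 4}
```

Under this encoding $\mathrm{Bc}$ has 10 members, e.g.\ $(1,3,1)\in\mathrm{Bc}$ while $(3,3,2)$ is excluded since $3+2>4$.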
Henceforth we deal with vectors $\pmb{w}$ that have their $\pmb{w}_\mathrm{in}$ and $\pmb{w}_\mathrm{ex}$ components, both $\pmb{w}_\mathrm{in}, \pmb{w}_\mathrm{ex} \in \mathbb{Z}_+^{\mathrm{\Lambda\cup \Gamma\cup \Bc\cup \mathrm{Dg}}}$, and for convenience we write $\pmb{w} = (\pmb{w}_\mathrm{in}, \pmb{w}_\mathrm{ex})$ in the sense of concatenation. For a vector $\pmb{x} = (\pmb{x}_\mathrm{in}, \pmb{x}_\mathrm{ex})$ with $\pmb{x}_\mathrm{in}, \pmb{x}_\mathrm{ex}\in \mathbb{Z}_+^{\mathrm{\Lambda\cup \Gamma\cup \Bc\cup \mathrm{Dg}}}$, let $\G (\pmb{x})$ denote the set of chemical acyclic graphs $G$ that satisfy the following: \\ ~~~~ $\mathrm{ce}_{\tt a}^\mathrm{in}(G)=\pmb{x}_\mathrm{in}({\tt a})$ and $\mathrm{ce}_{\tt a}^\mathrm{ex}(G)=\pmb{x}_\mathrm{ex}({\tt a})$ for each chemical element ${\tt a}\in \Lambda$, \\ ~~~~ $\mathrm{ac}_{\gamma}^\mathrm{in}(G)= \pmb{x}_\mathrm{in}(\gamma)$ and $\mathrm{ac}_{\gamma}^\mathrm{ex}(G)=\pmb{x}_\mathrm{ex}(\gamma)$ for each adjacency-configuration $\gamma \in \Gamma$, \\ ~~~~ $\mathrm{bc}_{\mu}^\mathrm{in}(G)=\pmb{x}_\mathrm{in}(\mu)$ and $\mathrm{bc}_{\mu}^\mathrm{ex}(G)=\pmb{x}_\mathrm{ex}(\mu)$ for each bond-configuration $\mu \in \mathrm{Bc}$, \\ ~~~~ $\mathrm{dg}_i^\mathrm{in}(G)=\pmb{x}_\mathrm{in}(\mathrm{dg} i)$ and $\mathrm{dg}_i^\mathrm{ex}(G)=\pmb{x}_\mathrm{ex}(\mathrm{dg} i)$ for each degree $\mathrm{dg} i\in \mathrm{Dg}$. \\ Throughout the section, let $k^*=2$ be a branch-parameter, $\pmb{x}^*= (\pmb{x}_\mathrm{in}^*, \pmb{x}_\mathrm{ex}^*)$ be a given feature vector with $\pmb{x}_\mathrm{in}^*, \pmb{x}_\mathrm{ex}^*\in \mathbb{Z}_+^\mathrm{\Lambda\cup \Gamma\cup \Bc\cup \mathrm{Dg}}$, and $\mathrm{dia}^*$ be an integer. We infer a chemical acyclic graph $G\in \G(\pmb{x}^*)$ such that $\mathrm{bl}_2(G)\in [2,3]$ and the diameter of $G$ is $\mathrm{dia}^*$, where $n^*=\sum_{{\tt a}\in \Lambda}(\pmb{x}^*_\mathrm{in}({\tt a})+\pmb{x}^*_\mathrm{ex}({\tt a}) )$.
Note that any other descriptors of $G\in \G(\pmb{x}^*)$ can be determined by the entries of vector $\pmb{x}^*$. To infer a chemical acyclic graph $G\in \G(\pmb{x}^*)$, we consider a connected subgraph $T$ of $G$ that consists of \begin{equation}\begin{array}{l}\label{eq:subtree} \mbox{~~ - a subtree of the 2-branch-subtree $G'$ of $G$ and} \\ \mbox{~~ - the 2-fringe-trees rooted at vertices in $G'$. } \end{array} \end{equation} Our method first generates a set $\FT$ of all possible rooted trees $T$ that can be a 2-fringe-tree of a chemical graph $G\in \G(\pmb{x}^*)$, and then extends the trees $T$ by repeatedly appending a tree in $\FT$ until a chemical graph $G\in \G(\pmb{x}^*)$ is formed. In the extension, we actually manipulate the ``frequency vectors'' of trees defined below. To specify which part of a given tree $T$ plays a role of 2-internal vertices/edges or 2-external vertices/edges in a chemical graph $G\in \G(\pmb{x}^*)$ to be inferred, we designate at most three vertices $r_1(T)$, $r_2(T)$ and $r_3(T)$ in $T$ as {\em terminals}, and call $T$ {\em rooted} (resp., {\em bi-rooted} and {\em tri-rooted}) if the number of terminals is one (resp., two and three). For a rooted tree (resp., bi- or tri-rooted tree) $T$, let $\widetilde{V}_{\mathrm{in}}$ denote the set of vertices contained in a path between two terminals of $T$, $\widetilde{E}_{\mathrm{in}}$ denote the set of edges in $T$ between two vertices in $\widetilde{V}_{\mathrm{in}}$, and define $\widetilde{V}_{\mathrm{ex}}\triangleq V(T)\setminus \widetilde{V}_{\mathrm{in}}$ and $\widetilde{E}_{\mathrm{ex}}\triangleq E(T)\setminus \widetilde{E}_{\mathrm{in}}$. For a bi- or tri-rooted tree $T$, define the {\em backbone path} $P_T$ of $T$ to be the path of $T$ between vertices $r_1(T)$ and $r_2(T)$. 
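Concretely, the split into $\widetilde{V}_{\mathrm{in}}/\widetilde{V}_{\mathrm{ex}}$ and $\widetilde{E}_{\mathrm{in}}/\widetilde{E}_{\mathrm{ex}}$ of a bi-rooted tree can be obtained by tracing the path between the two terminals, after which the per-part element, adjacency-configuration, bond-configuration and degree counts can be tallied. The adjacency-list encoding and string keys such as `'dg2'` in the sketch below are illustrative choices, not the paper's data structures.

```python
from collections import Counter

def freq_vectors(adj, alpha, beta, r1, r2):
    """Tally the in/ex counts of a bi-rooted tree T.
    adj:   {v: list of neighbours}        alpha: {v: element symbol}
    beta:  {(u, v): multiplicity}, keys stored with u < v
    r1,r2: the two terminals; V_in is the path between them."""
    # trace the backbone path between the terminals via DFS parents
    parent, stack = {r1: None}, [r1]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                stack.append(v)
    path = {r2}
    v = r2
    while v != r1:
        v = parent[v]
        path.add(v)
    deg = {v: len(adj[v]) for v in adj}   # degrees taken in T itself
    f_in, f_ex = Counter(), Counter()
    for v in adj:                                            # vertex entries
        w = f_in if v in path else f_ex
        w[alpha[v]] += 1                                     # element count
        w['dg%d' % deg[v]] += 1                              # degree count
    for (u, v), m in beta.items():                           # edge entries
        w = f_in if (u in path and v in path) else f_ex
        w[tuple(sorted((alpha[u], alpha[v]))) + (m,)] += 1   # adjacency-conf.
        w[tuple(sorted((deg[u], deg[v]))) + (m,)] += 1       # bond-conf.
    return f_in, f_ex
```

For example, on the path C--C--O with one extra C attached to the middle vertex and terminals at the two path ends, the pendant vertex and its edge land in the ex-components while everything on the path is counted in the in-components.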
Given a chemical acyclic graph $T$, define $\pmb{f}_{\tt t}(T)$, ${\tt t}\in\{\mathrm{in},\mathrm{ex}\}$ to be the vector $\pmb{w}\in \mathbb{Z}_+^\mathrm{\Lambda\cup \Gamma\cup \Bc\cup \mathrm{Dg}}$ that consists of the following entries: \begin{itemize} \item[-] $\pmb{w}({\tt a})=|\{v\in \widetilde{V}_{\tt t} \mid \alpha(v)={\tt a}\}|$, ${\tt a}\in\Lambda$, \item[-] $\pmb{w}(\gamma)=|\{uv\in \widetilde{E}_{\tt t} \mid \{\alpha(u),\alpha(v)\}=\{{\tt a,b}\}, \beta(uv)=q\}|$, $\gamma=({\tt a,b}, q)\in\Gamma$, \item[-] $\pmb{w}(\mu)=|\{uv\in \widetilde{E}_{\tt t}\mid \{\deg_T(u),\deg_T(v)\}=\{ d,d'\}, \beta(uv)=m\}|$, $\mu=(d,d', m)\in \mathrm{Bc}$, \item[-] $\pmb{w}(\mathrm{dg} i)=|\{v\in \widetilde{V}_{\tt t}\mid \deg_T(v)=i\}|$, $\mathrm{dg} i\in \mathrm{Dg}$. \end{itemize} Define $\pmb{f}(T)\triangleq (\pmb{f}_\mathrm{in}(T),\pmb{f}_\mathrm{ex}(T))$. The entry for an element ${\tt e}\in\mathrm{\Lambda\cup \Gamma\cup \Bc\cup \mathrm{Dg}}$ in $\pmb{f}_{\tt t}(T)$, ${\tt t}\in\{\mathrm{in},\mathrm{ex}\}$ is denoted by $\pmb{f}_{\tt t}({\tt e}; T)$. For a subset $B$ of $\mathrm{\Lambda\cup \Gamma\cup \Bc\cup \mathrm{Dg}}$, let $\pmb{f}_{{\tt t}[B]}(T)$ denote the projection of $\pmb{f}_{\tt t}(T)$ to $B$. Our aim is to generate all chemical bi-rooted (resp., tri-rooted) trees $T$ with diameter $\mathrm{dia}^*$ such that $\pmb{f}(T)=\pmb{x}^*$. \subsection{The Idea of New Algorithms}\label{sec:search_idea} This section describes the idea and a sketch of our new graph search algorithms. \subsubsection{Case of $\mathrm{bl}_2(G)=2$} We call a chemical graph $G\in \G(x^*)$ with diameter $\mathrm{dia}^*$ and $\mathrm{bl}_2(G)=2$ a {\em target graph}. A chemical acyclic graph $G$ with $\mathrm{bl}_2(G)=2$ has exactly two leaf 2-branches $v_i$, $i=1,2$, where the length of the path between the two leaf 2-branches $v_1$ and $v_2$ of a target graph $G$ is $\mathrm{dia}^*-2k^*=\mathrm{dia}^*-4$.
We observe that a connected subgraph $T$ of a target graph $G$ that satisfies (\ref{eq:subtree}) for $\mathrm{bl}_2(G)=2$ is a chemical rooted or bi-rooted tree. Let $u$ and $v$ denote the endpoints of the subpath of the 2-branch-subtree contained in such a subgraph $T$. We call $T$ an {\em internal-subtree} (resp., {\em end-subtree}) of $G$ if neither (resp., exactly one) of $u$ and $v$ is a 2-branch in $G$. When $u=v$, we call an internal-subtree (resp., end-subtree) $T$ of $G$ an {\em internal-fringe-tree} (resp., {\em end-fringe-tree}) of $G$. Figure~\ref{fig:subtrees_two_2-branches}(a)--(d) illustrate an internal-subtree, an internal-fringe-tree, an end-subtree and an end-fringe-tree of $G$. \begin{figure}[!ht] \begin{center} \includegraphics[width=.95\columnwidth]{subtrees_two_2-branches.eps} \end{center} \caption{An illustration of subtrees $T$ of a chemical acyclic graph $G$ in Figure~\ref{fig:few_leaf_2-branches}(a), where the vertices/edges in $T$ are depicted by solid lines: (a)~An internal-subtree $T$ of $G$; (b)~An internal-fringe-tree $T$ of $G$; (c)~An end-subtree $T$ of $G$; (d)~An end-fringe-tree $T$ of $G$. } \label{fig:subtrees_two_2-branches} \end{figure} Let $\delta_1=\lfloor\frac{\mathrm{dia}^* - 5}{2} \rfloor$ and $\delta_2 =\mathrm{dia}^*-5-\delta_1=\lceil \frac{\mathrm{dia}^* - 5}{2} \rceil$. We regard a target graph $G\in \G(x^*)$ with $\mathrm{bl}_2(G)=2$ and diameter $\mathrm{dia}^*$ as a combination of two chemical bi-rooted trees $T_1$ and $T_2$ with $\ell(P_{T_i})=\delta_i$, $i=1,2$ joined by an edge $e=r_1(T_1)r_1(T_2)$, as illustrated in Figure~\ref{fig:combine_two_2-branches}.
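As a quick check of this decomposition, the two backbone lengths together with the joining edge must account for the $\mathrm{dia}^*-4$ edges between the two leaf 2-branches; the multiplicity of the joining edge is also bounded by the residual valences at its endpoints, as stated in the caption of Figure~\ref{fig:combine_two_2-branches}. A minimal sketch, where $\mathrm{val}({\tt a})$ is the valence of element ${\tt a}$:

```python
def backbone_split(dia_star):
    """delta_1 = floor((dia*-5)/2) and delta_2 = ceil((dia*-5)/2); together
    with the joining edge they cover the dia*-4 edges between the two
    leaf 2-branches."""
    d1 = (dia_star - 5) // 2
    d2 = dia_star - 5 - d1
    assert d1 + d2 + 1 == dia_star - 4  # the joining edge completes the path
    return d1, d2

def max_join_multiplicity(val_a1, m1, val_a2, m2):
    """Largest multiplicity m of the new edge r_1(T_1) r_1(T_2), following
    the bound m <= min{3, val(a_1) - m_1, val(a_2) - m_2}."""
    return min(3, val_a1 - m1, val_a2 - m2)
```

For instance, $\mathrm{dia}^*=17$ yields $(\delta_1,\delta_2)=(6,6)$, matching the example of Figure~\ref{fig:rank0_BH_scheme_example}(a).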
\begin{figure}[!ht] \begin{center} \includegraphics[width=.69\columnwidth]{combine_two_2-branches.eps} \end{center} \caption{An illustration of combining two bi-rooted trees $T_1=T_{\pmb{w}^1}$ and $T_2=T_{\pmb{w}^2}$ with a new edge with multiplicity $m$ joining vertices $r_1(T_1)$ and $r_1(T_2)$ to construct a target graph $G$, where ${\tt a}_i\in \Lambda$, $d_i\in [1,d_\mathrm{max}-1]$, $m_i\in[d_i,\mathrm{val}({\tt a}_i)-1]$, $i = 1,2$ and $m \in [1, \min\{3, \mathrm{val}({\tt a}_1) - m_1, \mathrm{val}({\tt a}_2) - m_2\} ]$. } \label{fig:combine_two_2-branches} \end{figure} We start with generating chemical rooted trees and then iteratively extend chemical bi-rooted trees $T$ with $\ell(P_T)=1,2,\dots,\delta_2$ before we finally combine two chemical bi-rooted trees $T_1$ and $T_2$ with $\ell(P_{T_i})=\delta_i$, $i=1,2$. To describe our algorithm, we introduce some notations. \begin{itemize} \item[-] Let $\T(x^*)$ denote the set of all bi-rooted trees $T$ (where possibly $r_1(T)=r_2(T)$) such that $\pmb{f}_{\mathrm{in}}(T)\leq \pmb{x}_{\mathrm{in}}^*$ and $\pmb{f}_{\mathrm{ex}}(T)\leq \pmb{x}_{\mathrm{ex}}^*$, which is a necessary condition for $T$ to be an internal-subtree or end-subtree of a target graph $G\in \G(x^*)$. \item[-] Let $\mathcal{FT}$ denote the set of all rooted trees $T\in \T(x^*)$ that can be a 2-fringe-tree of a target graph $G$, where $T$ satisfies the size constraint (\ref{eq:fringe-size}) of 2-fringe-trees. \item[-] For each integer $h\in [1,\mathrm{dia}^*-4]$, let $\T_{\mathrm{end}}^{(h)}$ denote the set of all bi-rooted trees $T\in \T(x^*)$ that can be an end-subtree of a target graph $G$ such that $\ell(P_T)=h$, and each 2-fringe-tree $T_v$ rooted at a vertex $v$ in $P_T$ belongs to $\mathcal{FT}$. \end{itemize} We remark that the number $|\T_{\mathrm{end}}^{(h)}|$ of such trees will be enormously large for $n^*\geq 25$ and $\mathrm{dia}^*\geq 10$.
This suggests that construction of a target graph $G$ by enumerating trees in $\T_{\mathrm{end}}^{(h)}$ directly never works for such a large size of instances. The idea of our new algorithm is to compute only the set $\W_{\mathrm{end}}^{(h)}$ of frequency vectors $\pmb{w}$ of these trees, whose size $|\W_{\mathrm{end}}^{(h)}|$ is much smaller than $|\T_{\mathrm{end}}^{(h)}|$. We compute the set $\W_{\mathrm{end}}^{(h)}$ of frequency vectors $\pmb{w}$ of trees in $\T_{\mathrm{end}}^{(h)}$ iteratively for each integer $h\geq 1$. During the computation, we keep a sample tree $T_{\pmb{w}}$ for each such frequency vector $\pmb{w}$ so that a final step can construct some number of target graphs $G$ by assembling these sample trees. Based on this, we generate target graphs $G\in\G(x^*)$ by the following steps: \begin{enumerate} \item[1.] (i) Compute $\mathcal{FT}$ by a branch-and-bound procedure that generates all possible rooted trees $T\in \T(\pmb{x}^*)$ (where $r_1(T)=r_2(T)$) that can be a 2-fringe-tree of a target graph $G\in \G(x^*)$; \\ (ii) Compute the set $\W^{(0)}$ of all vectors $\pmb{w}=(\pmb{w}_\mathrm{in},\pmb{w}_\mathrm{ex})$ such that $\pmb{w}_\mathrm{in}=\pmb{f}_{\mathrm{in}}(T)$ and $\pmb{w}_\mathrm{ex} =\pmb{f}_{\mathrm{ex}}(T)$ for some tree $T\in \mathcal{FT}$; \\ (iii) For each vector $\pmb{w}=(\pmb{w}_\mathrm{in},\pmb{w}_\mathrm{ex})\in \W^{(0)}$, choose a sample tree $T_{\pmb{w}}\in \mathcal{FT}$ such that $\pmb{w}_\mathrm{in}=\pmb{f}_{\mathrm{in}}(T)$ and $\pmb{w}_\mathrm{ex} =\pmb{f}_{\mathrm{ex}}(T)$, and store these sample trees; \item[2.]
For each integer $h=1,2,\ldots, \delta_2$, iteratively execute the following: \\ (i) Compute the set $\W_{\mathrm{end}}^{(h)}$ of all vectors $\pmb{w}=(\pmb{w}_\mathrm{in},\pmb{w}_\mathrm{ex})$ such that $\pmb{w}_\mathrm{in}=\pmb{f}_{\mathrm{in}}(T)$ and $\pmb{w}_\mathrm{ex} =\pmb{f}_{\mathrm{ex}}(T)$ for some bi-rooted tree $T\in \T_{\mathrm{end}}^{(h)}$, where such a vector $\pmb{w}$ is obtained from a combination of vectors $\pmb{w}'\in \W^{(0)}$ and $\pmb{w}''\in \W_{\mathrm{end}}^{(h-1)}$; \\ (ii) For each vector $\pmb{w} \in \W_{\mathrm{end}}^{(h)}$, store a sample tree $T_{\pmb{w}}$, which is obtained from a combination of sample trees $T_{\pmb{w}'}$ with $\pmb{w}'\in \W^{(0)}$ and $T_{\pmb{w}''}$ with $\pmb{w}''\in \W_{\mathrm{end}}^{(h-1)}$; \item[3.] We call a pair of vectors $\pmb{w}^1\in \W_{\mathrm{end}}^{(\delta_1)}$ and $\pmb{w}^2\in \W_{\mathrm{end}}^{(\delta_2)}$ {\em feasible} if it admits a target graph $G\in \G(\pmb{x}^*)$ such that $\pmb{w}_{\mathrm{in}}^1+\pmb{w}_{\mathrm{in}}^2\leq \pmb{x}^*_{\mathrm{in}}$ and $\pmb{w}_{\mathrm{ex}}^1+\pmb{w}_{\mathrm{ex}}^2\leq \pmb{x}^*_{\mathrm{ex}}$. Find the set $\W_\mathrm{pair}$ of all feasible pairs of vectors $\pmb{w}^1$ and $\pmb{w}^2$; \item[4.] For each feasible vector pair $(\pmb{w}^1,\pmb{w}^2)\in \W_\mathrm{pair}$, construct a corresponding target graph $G$ by combining the corresponding sample trees $T_{\pmb{w}^1}$ and $T_{\pmb{w}^2}$, as illustrated in Figure~\ref{fig:combine_two_2-branches}. \end{enumerate} For a relatively large instance with $n^*\geq 40$ and $\mathrm{dia}^*\geq 20$, the number $|\W_\mathrm{pair}|$ of feasible vector pairs in Step~4 is still very large. In fact, the size $|\W_{\mathrm{end}}^{(h)}|$ of a vector set $\W_{\mathrm{end}}^{(h)}$ to be computed in Step~2 can also be considerably large during an execution of the algorithm.
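At the vector level, the combination in Step~2(i) and the feasibility test in Step~3 reduce to entrywise additions and comparisons. The following toy sketch (vectors as plain tuples; the contribution of the connecting edge, which the full algorithm accounts for, is omitted) illustrates how pruning against $\pmb{x}^*$ and deduplication keep the vector sets far smaller than the corresponding tree sets:

```python
from itertools import product

def combine(W_prev, W0, x_star):
    """One round of Step 2(i), simplified: each vector of W_end^(h) is an
    entrywise sum of a vector from W_end^(h-1) and one from W^(0); vectors
    exceeding x* are pruned and duplicates are merged."""
    W_next = set()
    for w1, w2 in product(W_prev, W0):
        w = tuple(a + b for a, b in zip(w1, w2))
        if all(a <= t for a, t in zip(w, x_star)):
            W_next.add(w)
    return W_next

def feasible_pairs(W1, W2, x_star):
    """Step 3, simplified: pairs whose entrywise sum stays within x*."""
    return [(w1, w2) for w1 in W1 for w2 in W2
            if all(a + b <= t for a, b, t in zip(w1, w2, x_star))]
```

In this simplified model the set-based deduplication is exactly what bounds $|\W_{\mathrm{end}}^{(h)}|$ independently of $|\T_{\mathrm{end}}^{(h)}|$.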
For such a case, we impose a time limitation on the running time for computing $\W_{\mathrm{end}}^{(h)}$ and a memory limitation on the number of vectors stored in a vector set $\W_{\mathrm{end}}^{(h)}$. With these limitations, we can compute only a limited subset $\widehat{\W}_{\mathrm{end}}^{(h)}$ of each vector set $\W_{\mathrm{end}}^{(h)}$ in Step~2. Even with such a subset $\widehat{\W}_{\mathrm{end}}^{(h)}$, we can still find a large subset $\widehat{\W}_\mathrm{pair}$ of $\W_\mathrm{pair}$ in Step~3. Our algorithm also delivers a lower bound on the number of all target graphs $G\in\G(x^*)$ in the following way. In Step~1, we also compute the number $t(\pmb{w})$ of trees $T\in \FT$ such that $\pmb{w}=\pmb{f}(T)$ for each $\pmb{w}\in \W^{(0)}$. In Step~2, when a vector $\pmb{w}$ is constructed from two vectors $\pmb{w}'$ and $\pmb{w}''$, we iteratively compute the number $t(\pmb{w})$ of trees $T$ such that $\pmb{w}=\pmb{f}(T)$ by $t(\pmb{w}):=t(\pmb{w}')\times t(\pmb{w}'')$. In Step~3, when a feasible vector pair $(\pmb{w}^1,\pmb{w}^2)\in \W_\mathrm{pair}$ is obtained, we know that the number of the corresponding target graphs $G$ is $t(\pmb{w}^1)\times t(\pmb{w}^2)$. Possibly we compute a subset $\widehat{\W}_\mathrm{pair}$ of $\W_\mathrm{pair}$ in Step~3. Then $(1/2)\sum_{(\pmb{w}^1,\pmb{w}^2)\in\widehat{\W}_\mathrm{pair}}t(\pmb{w}^1)\times t(\pmb{w}^2)$ gives a lower bound on the number of target graphs $G\in\G(x^*)$, where we divide by 2 since an axially symmetric target graph $G$ can correspond to two vector pairs in $\W_\mathrm{pair}$. Detailed descriptions of the four steps in the above algorithm can be found in Appendix~\ref{sec:graph_search_appendix}. \subsubsection{Case of $\mathrm{bl}_2(G)=3$} We call a chemical graph $G\in \G(x^*)$ with diameter $\mathrm{dia}^*$ and $\mathrm{bl}_2(G)=3$ a {\em target graph}.
Let $n^*_{\mathrm{inl}}\triangleq \sum_{{\tt a} \in \Lambda} \pmb{x}^*_{\mathrm{in}}({\tt a})$, which is the number of 2-internal vertices in a target graph $G\in \G(\pmb{x}^*)$. A chemical acyclic graph $G$ with $\mathrm{bl}_2(G)=3$ has exactly three leaf 2-branches $v_i$, $i=1,2,3$ and exactly one 2-internal vertex $v_4$ adjacent to three 2-internal vertices $v'_i$, $i=1,2,3$, as illustrated in Figure~\ref{fig:few_leaf_2-branches}(b). We call vertex $v_4$ the {\em joint-vertex} of $G$. Without loss of generality assume that the length of the path $P_{v_1,v_2}$ between $v_1$ and $v_2$ is $\mathrm{dia}^*-4$ and that the length of the path $P_{v_1,v'_1}$ is not smaller than that of $P_{v_2,v'_2}$. Analogously with the case of $\mathrm{bl}_2(G)=2$, we define an {\em internal-subtree} (resp., {\em end-subtree}, {\em internal-fringe-tree} and {\em end-fringe-tree}) of $G$ to be a connected subgraph $G'$ that satisfies (\ref{eq:subtree}). Observe that $G$ can be partitioned into three end-subtrees $T_i$, $i=1,2,3$, the 2-fringe-tree $T_4$ rooted at the joint-vertex $v_4$ and three edges $v'_iv_4$, $i=1,2,3$, where the backbone path $P_{T_i}$ connects leaf 2-branch $v_i$ and vertex $v'_i$. In particular, we call the end-subtree of $G$ that consists of $T_1$, $T_2$, $T_4$ and edges $v'_iv_4$, $i=1,2$ the {\em main-subtree} of $G$, which consists of the path $P_{v_1,v_2}$ and all the 2-fringe-trees rooted at vertices in $P_{v_1,v_2}$. We call $T_3$ the {\em co-subtree} of $G$. Let $\delta_i$, $i=1,2,3$ denote the length of the backbone path of $T_i$.
Note that \[ \delta_1+\delta_2+2=\mathrm{dia}^*-4 \mbox{ and } \delta_1\geq \delta_2\geq \delta_3= n^*_{\mathrm{inl}} - \mathrm{dia}^* + 2,\] from which \[ \delta_2 \in [\delta_3, \lfloor \mathrm{dia}^*/2 \rfloor-3 ] \mbox{ and } \delta_1 \in [\lceil \mathrm{dia}^*/2\rceil -3, \mathrm{dia}^* - 6 -\delta_3] .\] We regard a target graph $G\in \G(x^*)$ with $\mathrm{bl}_2(G)=3$ and diameter $\mathrm{dia}^*$ as a combination of the main-subtree and the co-subtree joined with an edge. We represent the co-subtree as a chemical bi-rooted tree $T$ with $\ell(P_T)=\delta_3$. We represent the main-subtree of a target graph $G$ as a tri-rooted tree $T$ with $\ell(P_T)=\mathrm{dia}^*-4$ so that terminals $r_1(T)$, $r_2(T)$ and $r_3(T)$ correspond to the two leaf 2-branches and the joint-vertex of $G$, respectively. \begin{figure}[!ht] \begin{center} \includegraphics[width=.60\columnwidth]{combine_three_2-branches.eps} \end{center} \caption{An illustration of combining a tri-rooted tree $T_1=T_{\pmb{w}^1}$ and a bi-rooted tree $T_2=T_{\pmb{w}^2}$ with a new edge joining vertices $r_1(T_1)$ and $r_1(T_2)$ to construct a target graph $G$. } \label{fig:combine_three_2-branches} \end{figure} We start with generating chemical rooted trees and then iteratively extend chemical bi-rooted trees $T$ with $\ell(P_T)=1,2,\dots,\mathrm{dia}^*-6-\delta_3$ before we combine two chemical bi-rooted trees $T'_1$ and $T'_2$ to obtain a chemical tri-rooted tree $T_1$ with $\ell(P_{T_1})=\mathrm{dia}^*-4$ and finally combine the chemical tri-rooted tree $T_1$ and a chemical bi-rooted tree $T_2$ with $\ell(P_{T_2})=\delta_3$ to obtain a target graph $G\in \G(x^*)$.
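The relations above can be transcribed directly; the sketch below fixes $\delta_3$ from $n^*_{\mathrm{inl}}$ and $\mathrm{dia}^*$ and enumerates the admissible ranges of $\delta_1$ and $\delta_2$:

```python
from math import ceil

def delta_ranges(dia_star, n_inl):
    """Backbone-length bookkeeping for bl_2(G) = 3: delta_3 is fixed, and
    delta_1, delta_2 range so that delta_1 + delta_2 = dia* - 6 with
    delta_1 >= delta_2 >= delta_3."""
    d3 = n_inl - dia_star + 2
    d2_range = range(d3, dia_star // 2 - 3 + 1)                    # delta_2
    d1_range = range(ceil(dia_star / 2) - 3, dia_star - 6 - d3 + 1)  # delta_1
    return d3, d2_range, d1_range
```

Each choice of $\delta_2$ determines $\delta_1=\mathrm{dia}^*-6-\delta_2$, which indeed falls in the stated range for $\delta_1$.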
Analogously with the case of $\mathrm{bl}_2(G)=2$, we define the set $\T(x^*)$ of all bi-rooted trees $T$, the set $\mathcal{FT}$ of all rooted trees $T\in \T(x^*)$ that can be a 2-fringe-tree of a target graph $G$ and the set $\T_{\mathrm{end}}^{(h)}$, $h\in [1, \mathrm{dia}^* -6 -\delta_3]$, of all bi-rooted trees $T\in \T(x^*)$ that can be an end-subtree of a target graph $G$ such that $\ell(P_T)=h$. We generate target graphs $G\in\G(x^*)$ by the following steps: \begin{enumerate} \item[1.] Analogously with Step~1 for the case of $\mathrm{bl}_2(G)=2$, compute the set $\mathcal{FT}$ and the set $\W^{(0)}$ of all vectors $\pmb{w}=(\pmb{w}_\mathrm{in},\pmb{w}_\mathrm{ex})$ such that $\pmb{w}_\mathrm{in}=\pmb{f}_{\mathrm{in}}(T)$ and $\pmb{w}_\mathrm{ex} =\pmb{f}_{\mathrm{ex}}(T)$ for some tree $T\in \mathcal{FT}$. For each vector $\pmb{w} \in \W^{(0)}$, store a sample tree $T_{\pmb{w}}\in \mathcal{FT}$; \item[2.] For each integer $h=1,2,\ldots, \mathrm{dia}^* -6 -\delta_3$, compute the set $\W_{\mathrm{end}}^{(h)}$ of all vectors $\pmb{w}=(\pmb{w}_\mathrm{in},\pmb{w}_\mathrm{ex})$ such that $\pmb{w}_\mathrm{in}=\pmb{f}_{\mathrm{in}}(T)$ and $\pmb{w}_\mathrm{ex} =\pmb{f}_{\mathrm{ex}}(T)$ for some bi-rooted tree $T\in \T_{\mathrm{end}}^{(h)}$; For each vector $\pmb{w} \in \W_{\mathrm{end}}^{(h)}$, store a sample tree $T_{\pmb{w}}$; \item[3.] For each integer $h\in [\lceil \mathrm{dia}^*/2\rceil -2, \mathrm{dia}^* - 5 -\delta_3] $, compute the set $\W_{\mathrm{end}+2}^{(h)}$ of all vectors $\pmb{w}=(\pmb{w}_\mathrm{in},\pmb{w}_\mathrm{ex})$ such that $\pmb{w}_\mathrm{in}=\pmb{f}_{\mathrm{in}}(T)$ and $\pmb{w}_\mathrm{ex} =\pmb{f}_{\mathrm{ex}}(T)$ of some bi-rooted tree $T$ with $\ell(P_T)=h$ that represents an end-subtree rooted at the joint-vertex; For each vector $\pmb{w} \in \W_{\mathrm{end}+2}^{(h)}$, store a sample tree $T_{\pmb{w}}$; \item[4.]
For each integer $\delta_1 \in[\lceil \mathrm{dia}^*/2\rceil -3, \mathrm{dia}^* - 6 -\delta_3]$, compute the set $\W_{\mathrm{main}}^{(\delta_1+1)}$ of all vectors $\pmb{w}=(\pmb{w}_\mathrm{in},\pmb{w}_\mathrm{ex})$ such that $\pmb{w}_\mathrm{in}=\pmb{f}_{\mathrm{in}}(T)$ and $\pmb{w}_\mathrm{ex} =\pmb{f}_{\mathrm{ex}}(T)$ for some tri-rooted tree $T$ that represents the main-subtree such that the length of the path $P_{r_2(T),r_3(T)}$ between terminals $r_2(T)$ and $r_3(T)$ is $\delta_1+1$. For each vector $\pmb{w}\in \W_{\mathrm{main}}^{(\delta_1+1)}$, store a sample tree $T_{\pmb{w}}$; \item[5.] We call a pair of vectors $\pmb{w}^1\in \W_{\mathrm{main}}^{(\delta_1+1)}$ and $\pmb{w}^2\in \W_{\mathrm{end}}^{(\delta_3)}$ {\em feasible} if it admits a target graph $G\in \G(\pmb{x}^*)$ such that $\pmb{w}_{\mathrm{in}}^1+\pmb{w}_{\mathrm{in}}^2\leq \pmb{x}^*_{\mathrm{in}}$ and $\pmb{w}_{\mathrm{ex}}^1+\pmb{w}_{\mathrm{ex}}^2\leq \pmb{x}^*_{\mathrm{ex}}$. Find the set $\W_\mathrm{pair}$ of all feasible pairs of vectors $\pmb{w}^1$ and $\pmb{w}^2$; \item[6.] For each feasible vector pair $(\pmb{w}^1,\pmb{w}^2)\in \W_\mathrm{pair}$, construct a corresponding target graph $G$ by combining the sample trees $T_{\pmb{w}^1}$ and $T_{\pmb{w}^2}$, which correspond to the main-subtree and the co-subtree of a target graph $G$, respectively, as illustrated in Figure~\ref{fig:combine_three_2-branches}. \end{enumerate} Detailed descriptions of the six steps in the above algorithm can be found in Appendix~\ref{sec:graph_search_appendix}. \section{Experimental Results}\label{sec:experiment} We implemented our method of Stages~1 to 5 for inferring chemical acyclic graphs and conducted experiments to evaluate the computational efficiency for three chemical properties $\pi$: octanol/water partition coefficient ({\sc K\tiny{ow}}), boiling point ({\sc Bp}) and heat of combustion ({\sc Hc}).
We executed the experiments on a PC with two Intel Xeon CPUs E5-1660 v3 @3.00GHz and 32\,GB of RAM, running Ubuntu 14.04.6 LTS. We show 2D drawings of some of the inferred chemical graphs, where ChemDoodle version 10.2.0 is used for constructing the drawings. \begin{table}[ht!]\caption{Results of Stage 1 in Phase 1.} \begin{center} \begin{tabular}{@{} c r c c c c c c @{}}\toprule $\pi$ & $\Lambda$ & $|D_{\pi}|$ & $|\Gamma|$ & $[\underline{n},\overline{n}]$ & $[\underline{\rm bl},\overline{\rm bl}]$ & $[\underline{\rm bh},\overline{\rm bh}]$ & $[\underline{a},\overline{a}]$ \\ \midrule {\sc K\tiny{ow}} & {\tt C,O,N} & 216 & 10 & [4, 28] & [0, 2] & [0, 4] & [-4.2, 8.23] \\ {\sc Bp} & {\tt C,O,N} & 172 & 10 & [4, 26] & [0, 1] & [0, 3] & [-11.7, 404.84] \\ {\sc Hc} & {\tt C,O,N} & 128 & 6 & [4, 26] & [0, 1] & [0, 2] & [1346.4, 13304.5] \\ \bottomrule \end{tabular}\end{center}\label{table:stage1} \end{table} \bigskip \noindent {\bf Results on Phase~1. } We implemented Stages~1, 2 and 3 in Phase~1 as follows. \bigskip \noindent {\bf Stage~1. } We set a graph class $ \mathcal{G}$ to be the set of all chemical acyclic graphs, and set a branch-parameter $k^*$ to be 2. For each property $\pi\in \{${\sc K\tiny{ow}}, {\sc Bp, Hc}$\}$, we first selected a set $\Lambda$ of chemical elements and then collected a data set $D_{\pi}$ on chemical acyclic graphs over the set $\Lambda$ of chemical elements provided by HSDB from PubChem. To construct the data set, we eliminated chemical compounds that have at most three carbon atoms or contain a charged element such as ${\tt N}^+$ or an element ${\tt a}\in \Lambda$ whose valence is different from our setting of valence function $\mathrm{val}$.
Table~\ref{table:stage1} shows the size and range of data sets that we prepared for each chemical property in Stage~1, where we denote the following: \begin{itemize} \item[-] $\pi$: one of the chemical properties {\sc K\tiny{ow}}, {\sc Bp} and {\sc Hc}; \item[-] $\Lambda$: the set of selected chemical elements (hydrogen atoms are added at the final stage); \item[-] $|D_{\pi}|$: the size of data set $D_{\pi}$ over $\Lambda$ for property $\pi$; \item[-] $|\Gamma|$: the number of different adjacency-configurations over the compounds in $D_{\pi}$; \item[-] $[\underline{n},\overline{n}]$: the minimum and maximum number $n(G)$ of non-hydrogen atoms over the compounds $G$ in $D_{\pi}$; \item[-] $[\underline{\mathrm{bl}},\overline{\mathrm{bl}}]$: the minimum and maximum numbers $\mathrm{bl}_2(G)$ of leaf 2-branches over the compounds $G$ in $D_{\pi}$; \item[-] $[\underline{\mathrm{bh}},\overline{\mathrm{bh}}]$: the minimum and maximum values of the 2-branch height $\mathrm{bh}_2(G)$ over the compounds $G$ in $D_{\pi}$; and \item[-] $[\underline{a},\overline{a}]$: the minimum and maximum values of $a(G)$ of property $\pi$ over the compounds $G$ in $D_{\pi}$. \end{itemize} \bigskip \noindent {\bf Stage~2. } We used a feature function $f$ that consists of the descriptors defined in Section~\ref{sec:preliminary}. \begin{table}[ht!]\caption{Results of Stages~2 and 3 in Phase~1.} \begin{center} \begin{tabular}{@{} c r c c c c c @{}}\toprule $\pi$ & $K$ & Activation & Architecture & L-Time & test R$^2$ (ave.) & test R$^2$ (best) \\ \midrule {\sc K\tiny{ow}} & 76 & ReLU & (76,10,1) & ~~2.12 & 0.901 & 0.951 \\ {\sc Bp} & 76 & ReLU & (76,10,1) & ~26.07 & 0.935 & 0.965 \\ {\sc Hc} & 68 & ReLU & (68,10,1) & 234.06 & 0.924 & 0.988 \\ \bottomrule \end{tabular}\end{center}\label{table:stages2-3} \end{table} \bigskip \noindent {\bf Stage~3.
} We used {\tt scikit-learn} version 0.21.6 with Python 3.7.4 to construct ANNs $\mathcal{N}$, where the tool and activation function are set to be MLPRegressor and ReLU, respectively. We tested several different architectures of ANNs for each chemical property. To evaluate the performance of the resulting prediction function $\psi_\mathcal{N}$ with cross-validation, we randomly partitioned a given data set $D_{\pi}$ into five subsets $D_{\pi}^{(i)}$, $i\in[1,5]$, where $D_{\pi}\setminus D_{\pi}^{(i)}$ was used as a training set and $D_{\pi}^{(i)}$ as a test set in five trials $i\in[1,5]$. For a set $\{y_1,y_2,\ldots,y_N\}$ of observed values and a set $\{\psi_1,\psi_2,\ldots,\psi_N\}$ of predicted values, we define the coefficient of determination to be $\mathrm{R}^2\triangleq 1- \frac{\sum_{j\in [1,N]}(y_j-\psi_j)^2} {\sum_{j\in [1,N]}(y_j-\overline{y})^2}$, where $\overline{y}= \frac{1}{N}\sum_{j\in [1,N]}y_j$. Table~\ref{table:stages2-3} shows the results on Stages~2 and 3, where \begin{itemize} \item[-] $K$: the number of descriptors for the chemical compounds in data set $D_{\pi}$ for property $\pi$; \item[-] Activation: the choice of activation function; \item[-] Architecture: $(a,b,1)$ consists of an input layer with $a$ nodes, a hidden layer with $b$ nodes and an output layer with a single node, where $a$ is equal to the number $K$ of descriptors; \item[-] L-time: the average time (sec) to construct ANNs for each trial; \item[-] test $\mathrm{R}^2$ (ave): the average coefficient of determination over the five tests; and \item[-] test $\mathrm{R}^2$ (best): the largest coefficient of determination over the five test sets. \end{itemize} From Table~\ref{table:stages2-3}, we see that the execution of Stage~3 was successful, where the average of test $\mathrm{R}^2$ is over 0.9 for all three chemical properties.
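The coefficient of determination above can be computed directly from its definition. A small self-contained Python illustration on toy data (not the paper's scikit-learn pipeline):

```python
def r_squared(y, psi):
    """R^2 = 1 - sum_j (y_j - psi_j)^2 / sum_j (y_j - ybar)^2."""
    ybar = sum(y) / len(y)
    ss_res = sum((yj - pj) ** 2 for yj, pj in zip(y, psi))
    ss_tot = sum((yj - ybar) ** 2 for yj in y)
    return 1.0 - ss_res / ss_tot

# Perfect predictions give R^2 = 1; predicting the mean gives R^2 = 0.
y = [1.0, 2.0, 3.0, 4.0]
print(r_squared(y, y))         # 1.0
print(r_squared(y, [2.5] * 4)) # 0.0
```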
For each chemical property $\pi$, we selected the ANN $\mathcal{N}$ that attained the best test $\mathrm{R}^2$ score among the five ANNs to formulate an MILP $\mathcal{M}(x,y,z;\mathcal{C}_1)$ which will be used in Phase~2. \bigskip \noindent {\bf Results on Phase~2. } We implemented Stages~4 and 5 in Phase~2 as follows. \bigskip \noindent {\bf Stage~4. } In this stage, we solve the MILP $\mathcal{M}(x,y,g;\mathcal{C}_1,\mathcal{C}_2)$ formulated based on the ANN $\mathcal{N}$ obtained in Phase~1. To solve an MILP in Stage~4, we use {\tt CPLEX} version 12.8. In our experiment, we choose a target value $y^* \in [\underline{a}, \overline{a}]$ and fix or bound some descriptors in our feature vector as follows: \begin{itemize} \item[-] Set the 2-leaf-branch number $\mathrm{bl}^*$ to be each of $2$ and $3$; \item[-] Fix the instance size $n^*=n(G)$ to be each integer in $\{26,32,38,44,50\}$; \item[-] Set the diameter $\mathrm{dia}^*=\mathrm{dia}(G)$ to be one of the integers in $\{ \lceil (2/5)n^*\rceil, \lceil (3/5)n^*\rceil \}$; \item[-] Set the maximum degree $d_\mathrm{max}:=3$ for $\mathrm{dia}^*=\lceil (2/5)n^*\rceil$ and $d_\mathrm{max}:=4$ for $\mathrm{dia}^*= \lceil (3/5)n^*\rceil $; \item[-] For each instance size $n^*$, test a target value $y^*_{\pi}$ for each chemical property $\pi\in \{${\sc K\tiny{ow}}, {\sc Bp, Hc}$\}$. \end{itemize} Based on the above setting, we generated six instances for each instance size $n^*$. We set $\varepsilon=0.02$ in Stage~4.
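The parameter combinations described above can be enumerated programmatically. A hypothetical sketch (the function name and the grouping of settings are illustrative; property-specific target values are omitted):

```python
import math

def instance_grid():
    """Enumerate (bl*, n*, dia*, d_max) combinations from the settings
    above: bl* in {2,3}, n* in {26,...,50}, dia* = ceil((2/5)n*) with
    d_max = 3, or dia* = ceil((3/5)n*) with d_max = 4."""
    for bl in (2, 3):
        for n in (26, 32, 38, 44, 50):
            for frac, d_max in ((2 / 5, 3), (3 / 5, 4)):
                yield bl, n, math.ceil(frac * n), d_max

# E.g. n* = 26 yields dia* = 11 (d_max = 3) and dia* = 16 (d_max = 4).
print(list(instance_grid())[:2])
```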
Tables~\ref{table_2_3_2-5} to \ref{table_2_4_3-5} (resp., Tables~\ref{table_3_3_2-5} to \ref{table_3_4_3-5}) show the results on Stage~4 for $\mathrm{bl}^*=2$ (resp., $\mathrm{bl}^*=3$), where we denote the following: \begin{itemize} \item[-] $y^*_{\pi}$: a target value in $[\underline{a},\overline{a}]$ for a property $\pi$; \item[-] $n^*$: a specified number of vertices in $[\underline{n},\overline{n}]$; \item[-] $\mathrm{dia}^*$: a specified diameter in $\{ \lceil (2/5)n^*\rceil, \lceil (3/5)n^*\rceil \}$; \item[-] IP-time: the time (sec.) to solve an MILP instance to find vectors $x^*$ and $g^*$. \end{itemize} Observe that most of the MILP instances with $\mathrm{bl}^*=2$, $n^*\leq 50$ and $\mathrm{dia}^*\leq 30$ (resp., $\mathrm{bl}^*=3$, $n^*\leq 50$ and $\mathrm{dia}^*\leq 30$) were solved in one minute (resp., in a few minutes). The previously most efficient MILP formulation for inferring chemical acyclic graphs due to Zhang~et~al.~\cite{ZZCSNA20} could solve an instance with only up to $n^*=20$ for the case of $d_\mathrm{max}=4$ and $\mathrm{dia}^*=9$. Our new MILP formulation on chemical acyclic graphs with bounded 2-branch height considerably improved the tractable size of chemical acyclic graphs in Stage~4 for the inference problem (II-a). Figure~\ref{fig:inferred-2}(a)-(c) illustrate some chemical acyclic graphs $G$ with $\mathrm{bl}_2(G)=2$ obtained in Stage~4 by solving an MILP. Remember that these chemical graphs obey the AD $\mathcal{D}$ defined in Appendix~A.
\begin{figure}[!htb] \begin{center} \includegraphics[width=.88\columnwidth]{inferred-2.eps} \end{center} \caption{An illustration of chemical acyclic graphs $G$ with $n(G)=50$, $\mathrm{bl}_2(G)=2$ and $d_\mathrm{max}=4$ obtained in Stage~4 by solving an MILP: (a) $y^*_{{\rm Kow}}=9$, $\mathrm{dia}(G)=\lceil (2/5)n^*\rceil =20$; (b) $y^*_{{\rm Bp}}=880$, $\mathrm{dia}(G)= n^*/2 =25$; (c) $y^*_{{\rm Hc}}=25000$, $\mathrm{dia}(G)=\lceil (3/5)n^*\rceil =30$. } \label{fig:inferred-2} \end{figure} Figure~\ref{fig:inferred-3}(a)-(c) illustrate some chemical acyclic graphs $G$ with $\mathrm{bl}_2(G)=3$ obtained in Stage~4 by solving an MILP. \begin{figure}[!htb] \begin{center} \includegraphics[width=.88\columnwidth]{inferred-3.eps} \end{center} \caption{An illustration of chemical acyclic graphs $G$ with $n(G)=50$, $\mathrm{bl}_2(G)=3$ and $d_\mathrm{max}=4$ obtained in Stage~4 by solving an MILP: (a) $y^*_{{\rm Kow}}=9$, $\mathrm{dia}(G)=\lceil (2/5)n^*\rceil =20$; (b) $y^*_{{\rm Bp}}=880$, $\mathrm{dia}(G)= n^*/2 =25$; (c) $y^*_{{\rm Hc}}=25000$, $\mathrm{dia}(G)=\lceil (3/5)n^*\rceil =30$. } \label{fig:inferred-3} \end{figure} \bigskip \noindent {\bf Stage~5. } In this stage, we execute our new graph search algorithms for generating target graphs $G\in \G(\pmb{x}^*)$ with $\mathrm{bl}_2(G)\in \{2,3\}$ for a given feature vector $\pmb{x}^*$ obtained in Stage~4. We introduce a time limit of 10 minutes for each iteration $h$ in Step~2 and an execution of Steps~1 and 3 for $\mathrm{bl}^*=2$ (resp., each iteration $h$ in Steps~2 and 3 and $\delta_1$ in Step~4 and an execution of Steps~1 and 5 for $\mathrm{bl}^*=3$). In the last step, we choose at most 100 feasible vector pairs and generate a target graph from each of these feasible vector pairs.
We also impose an upper bound $\mathrm{UB}$ on the size $|\W|$ of a vector set $\W$ that we maintain during an execution of the algorithm. We executed the algorithm for each of the three bounds $\mathrm{UB}=10^6, 10^7, 10^8$ until a feasible vector pair is found or the running time exceeds a global time limit of two hours. When no feasible vector pair is found by the graph search algorithms, we output the target graph $G^*$ constructed from the vector $g^*$ in Stage~4. Tables~\ref{table_2_3_2-5} to \ref{table_2_4_3-5} (resp., Tables~\ref{table_3_3_2-5} to \ref{table_3_4_3-5}) show the results on Stage~5 for $\mathrm{bl}^*=2$ (resp., $\mathrm{bl}^*=3$), where we denote the following: \begin{itemize} \item[-] $\#$FP: the number of feasible vector pairs obtained by an execution of the graph search algorithm for a given feature vector $\pmb{x}^*$; \item[-] G-LB: a lower bound on the number of all target graphs $G\in \G(\pmb{x}^*)$ for a given feature vector $\pmb{x}^*$; \item[-] $\#$G: the number of all (or up to 100) chemical acyclic graphs $G$ such that $f(G)=x^*$ (where at least one such graph $G$ has been found from the vector $g^*$ in Stage~4); \item[-] G-time: the running time (sec.) to execute Stage~5 for a given feature vector $\pmb{x}^*$. ``$>$ 2 hours'' means that the running time exceeds two hours. \end{itemize} Previously an instance of chemical acyclic graphs with size $n^*$ up to 16 was solved in Stage~5 by Azam~et~al.~\cite{ACZSNA20}. For the classes of chemical graphs with cycle index 1 and 2, the maximum size of instances solved in Stage~5 by Ito~et~al.~\cite{ACZSNA20} and Zhu~et~al.~\cite{ZCSNA20} was around 18 and 15, respectively. Our new algorithm based on dynamic programming solves instances with $n^*=50$. In our experiments, we also computed a lower bound G-LB on the number of target graphs. Observe that there are over $10^{10}$ or $10^{14}$ target graphs in some cases.
Remember that these lower bounds are computed without actually generating each target graph one by one. So when a lower bound is enormously large, this would suggest that we may need to impose some more constraints on the structure of graphs or the range of descriptors to narrow the family of target graphs to be inferred. \begin{table}[ht!] \caption{Results of Stages~4 and 5 for $\mathrm{bl}^*= 2$, $d_\mathrm{max}=3$ and $\mathrm{dia}^* = \lceil \frac{2}{5}n^* \rceil$. } \begin{center} \begin{tabular}{ @{} c r r r r r r r r @{}}\toprule $\pi$ & $y^*$ & $n^*$ & $\mathrm{dia}^*$ &~{\small IP-time}\hspace{-1mm} & {\small $\#$FP~} & {\small G-LB~~~} & {\small $\#$G}~ & {\small G-time} \\ \midrule \multirow{5}{*} {\sc K\tiny{ow}} & 4 & 26 & 11 & 3.95 & 11,780 & $2.4 \times 10^{6}$ & 100 & 0.91 \\ & 5 & 32 & 13 & 4.81 & 216 & $2.7 \times 10^{4}$ & 100 & 10.64 \\ & 7 & 38 & 16 & 7.27 & 19,931 & $4.2 \times 10^{7}$ & 100 & 48.29 \\ & 8 & 44 & 18 & 9.33 & 241,956 & $1.2 \times 10^{13}$ & 100 & 119.01 \\ & 9 & 50 & 20 & 21.57 & 58,365 & $1.7 \times 10^{10}$ & 100 & 110.38 \\ \midrule \multirow{5}{*} {\sc Bp} & 440 & 26 & 11 & 2.09 & 22,342 & $3.6 \times 10^{7}$ & 100 & 2.9 \\ & 550 & 32 & 13 & 3.94 & 748 & $5.9 \times 10^{6}$ & 100 & 3.77 \\ & 660 & 38 & 16 & 6.4 & 39,228 & $7.3 \times 10^{8}$ & 100 & 151.25 \\ & 770 & 44 & 18 & 7.21 & 138,076 & $3.0 \times 10^{12}$ & 100 & 182.66 \\ & 880 & 50 & 20 & 9.49 & 106,394 & $3.0 \times 10^{10}$ & 100 & 217.18 \\ \midrule \multirow{5}{*} {\sc Hc} & 13000 & 26 & 11 & 2.94 & 12 & $2.0 \times 10^{1}$ & 12 & 0.04 \\ & 16500 & 32 & 13 & 7.67 & 2,722 & $1.2 \times 10^{7}$ & 100 & 0.31 \\ & 20000 & 38 & 16 & 10.5 & 1,830 & $9.7 \times 10^{5}$ & 100 & 1.06 \\ & 23000 & 44 & 18 & 13.62 & 12,336 & $4.7 \times 10^{8}$ & 100 & 142.02 \\ & 25000 & 50 & 20 & 15.1 & 136,702 & $5.3 \times 10^{14}$ & 100 & 22.26 \\ \bottomrule \end{tabular} \end{center}\label{table_2_3_2-5} \end{table} \begin{table}[ht!]
\caption{Results of Stages~4 and 5 for $\mathrm{bl}^*= 2$, $d_\mathrm{max}=4$ and $\mathrm{dia}^* = \lceil \frac{3}{5}n^* \rceil$. } \begin{center} \begin{tabular}{ @{} c r r r r r r r r @{}}\toprule $\pi$ & $y^*$ & $n^*$ & $\mathrm{dia}^*$ &~{\small IP-time}\hspace{-1mm} & {\small $\#$FP~} & {\small G-LB~~~} & {\small $\#$G}~ & {\small G-time} \\ \midrule \multirow{5}{*} {\sc K\tiny{ow}} & 4 & 26 & 16 & 16.21 & 4,198 & $3.5 \times 10^{5}$ & 100 & 1.18 \\ & 5 & 32 & 20 & 24.74 & 1,650 & $5.3 \times 10^{6}$ & 100 & 0.69 \\ & 7 & 38 & 23 & 38.88 & 154,408 & $9.5 \times 10^{9}$ & 100 & 67.31 \\ & 8 & 44 & 27 & 38.73 & 1,122,126 & $8.5 \times 10^{13}$ & 100 & 660.37 \\ & 9 & 50 & 30 & 31.59 & 690,814 & $1.1 \times 10^{15}$ & 100 & 238.02 \\ \midrule \multirow{5}{*} {\sc Bp} & 440 & 26 & 16 & 12.44 & 8,156 & $2.6 \times 10^{6}$ & 100 & 2.74 \\ & 550 & 32 & 20 & 23.22 & 38,600 & $4.4 \times 10^{8}$ & 100 & 12.72 \\ & 660 & 38 & 23 & 20.62 & 52,406 & $1.1 \times 10^{9}$ & 100 & 197.89 \\ & 770 & 44 & 27 & 50.55 & 23,638 & $6.8 \times 10^{8}$ & 100 & 244.56 \\ & 880 & 50 & 30 & 48.37 & 40,382 & $2.2 \times 10^{11}$ & 100 & 884.99 \\ \midrule \multirow{5}{*} {\sc Hc} & 13000 & 26 & 16 & 23.26 & 249 & $2.7 \times 10^{3}$ & 100 & 0.06 \\ & 16500 & 32 & 20 & 44.2 & 448 & $6.9 \times 10^{4}$ & 100 & 0.63 \\ & 20000 & 38 & 23 & 96.02 & 3,330 & $6.1 \times 10^{6}$ & 100 & 15.16 \\ & 23000 & 44 & 27 & 82.34 & 43,686 & $1.5 \times 10^{10}$ & 100 & 152.96 \\ & 25000 & 50 & 30 & 83.81 & 311,166 & $1.3 \times 10^{13}$ & 100 & 287.95 \\ \bottomrule \end{tabular} \end{center}\label{table_2_4_3-5} \end{table} \begin{table}[ht!] \caption{Results of Stages~4 and 5 for $\mathrm{bl}^*= 3$, $d_\mathrm{max}=3$ and $\mathrm{dia}^* = \lceil \frac{2}{5}n^* \rceil$.
} \begin{center} \begin{tabular}{ @{} c r r r r r r r r @{}}\toprule $\pi$ & $y^*$ & $n^*$ & $\mathrm{dia}^*$ &~{\small IP-time}\hspace{-1mm} & {\small $\#$FP~} & {\small G-LB~~~} & {\small $\#$G}~ & {\small G-time} \\ \midrule \multirow{5}{*} {\sc K\tiny{ow}} & 4 & 26 & 11 & 3.1 & 511 & $3.6 \times 10^{3}$ & 100 & 14.31 \\ & 5 & 32 & 13 & 4.72 & 3,510 & $6.8 \times 10^{6}$ & 100 & 851.21 \\ & 7 & 38 & 16 & 5.82 & 11,648 & $1.2 \times 10^{8}$ & 100 & 612.86 \\ & 8 & 44 & 18 & 9.69 & 17,239 & $2.2 \times 10^{8}$ & 100 & 703.92 \\ & 9 & 50 & 20 & 22.53 & 60,792 & $3.9 \times 10^{12}$ & 100 & 762.17 \\ \midrule \multirow{5}{*} {\sc Bp} & 440 & 26 & 11 & 3.01 & 66 & $9.0 \times 10^{2}$ & 66 & 902.77 \\ & 550 & 32 & 13 & 4.29 & 308 & $1.0 \times 10^{7}$ & 100 & 2238.62 \\ & 660 & 38 & 16 & 5.86 & 303 & $1.8 \times 10^{7}$ & 100 & 3061.11 \\ & 770 & 44 & 18 & 14.39 & 19,952 & $4.7 \times 10^{10}$ & 100 & 678.26 \\ & 880 & 50 & 20 & 10.39 & 17,993 & $7.1 \times 10^{12}$ & 100 & 4151.07 \\ \midrule \multirow{5}{*} {\sc Hc} & 13000 & 26 & 11 & 3.05 & 340 & $1.5 \times 10^{4}$ & 100 & 1.57 \\ & 16500 & 32 & 13 & 5.81 & 600 & $3.1 \times 10^{8}$ & 100 & 921.55 \\ & 20000 & 38 & 16 & 15.67&18,502 & $6.2 \times 10^{8}$ & 100 & 1212.54 \\ & 23000 & 44 & 18 & 21.15&5,064 & $6.9 \times 10^{9}$ & 100 & 1279.95 \\ & 25000 & 50 & 20 & 31.90&41,291 & $2.4 \times 10^{12}$ & 100 & 668.5 \\ \bottomrule \end{tabular} \end{center}\label{table_3_3_2-5} \end{table} \begin{table}[ht!] \caption{Results of Stages~4 and 5 for $\mathrm{bl}^*= 3$, $d_\mathrm{max}=4$ and $\mathrm{dia}^* = \lceil \frac{3}{5}n^* \rceil$.
} \begin{center} \begin{tabular}{ @{} c r r r r r r r r @{}}\toprule $\pi$ & $y^*$ & $n^*$ & $\mathrm{dia}^*$ &~{\small IP-time}\hspace{-1mm} & {\small $\#$FP~} & {\small G-LB~~~} & {\small $\#$G}~ & {\small G-time} \\ \midrule \multirow{5}{*} {\sc K\tiny{ow}} & 4 & 26 & 16 & 9.94 & 100 & $2.5 \times 10^{4}$ & 100 & 6.73 \\ & 5 & 32 & 20 & 16.58 & 348 & $1.4 \times 10^{8}$ & 100 & 3400.74 \\ & 7 & 38 & 23 & 33.71 & 17,557 & $1.2 \times 10^{11}$ & 100 & 2652.38 \\ & 8 & 44 & 27 & 34.28 & 0 & 0 & 1 & {\rm $>$2 hours}\\ & 9 & 50 & 30 & 68.74 & 80,411 & $6.4 \times 10^{15}$ & 100 & 6423.85 \\ \midrule \multirow{5}{*} {\sc Bp} & 440 & 26 & 16 & 14.16 & 150 & $1.8 \times 10^{5}$ & 100 & 29.72 \\ & 550 & 32 & 20 & 18.94 & 305 & $1.4 \times 10^{7}$ & 100 & 2641.9 \\ & 660 & 38 & 23 & 21.15 & 1,155 & $2.0 \times 10^{9}$ & 100 & 4521.66 \\ & 770 & 44 & 27 & 25.6 & 1,620 & $4.3 \times 10^{8}$ & 100 & 175.2 \\ & 880 & 50 & 30 & 63.22 & 0 & 0 & 1 & {\rm $>$2 hours} \\ \midrule \multirow{5}{*} {\sc Hc} & 13000 & 26 & 16 & 31.87 & 12 & $2.7 \times 10^{4}$ & 12 & 0.66 \\ & 16500 & 32 & 20 & 41.03 & 392 & $3.4 \times 10^{8}$ & 100 & 2480.34 \\ & 20000 & 38 & 23 & 48.48 & 630 & $1.4 \times 10^{5}$ & 100 & 105.59 \\ & 23000 & 44 & 27 & 143.75 & 341 & $7.8 \times 10^{8}$ & 100 & 5269.1 \\ & 25000 & 50 & 30 & 315.91 & 10,195 & $3.8 \times 10^{9}$ & 100 & 5697.08 \\ \bottomrule \end{tabular} \end{center}\label{table_3_4_3-5} \end{table} \bigskip \noindent {\bf An Additional Experiment. } We also conducted an additional experiment to demonstrate that our MILP-based method is flexible enough to control conditions on the inference of chemical graphs. In Stage~3, we constructed an ANN $\mathcal{N}_{\pi}$ for each of the three chemical properties $\pi\in\{${\sc K\tiny{ow}}, {\sc Bp}, {\sc Hc}$\}$, and formulated the inverse problem of each ANN $\mathcal{N}_{\pi}$ as an MILP $\mathcal{M}_{\pi}$.
Since the set of descriptors is common to all three properties {\sc K\tiny{ow}}, {\sc Bp} and {\sc Hc}, it is possible to infer a chemical acyclic graph $G$ that satisfies a target value $y^*_{\pi}$ for each of the three properties at the same time (if one exists). We specify the size of graph so that $n^* =50$, $\mathrm{bl}^* =2$, $\mathrm{dia}^* = 25$ and $d_\mathrm{max} =4$, and set target values with $y^*_{{\rm Kow}} =4.0$, $y^*_{{\rm Bp}} =400.0$ and $y^*_{{\rm Hc}} =13000.0$ in an MILP that consists of the three MILPs $\mathcal{M}_{{\rm Kow}}$, $\mathcal{M}_{{\rm Hc}}$ and $\mathcal{M}_{{\rm Bp}}$. The MILP was solved in 18930 seconds and we obtained a chemical acyclic graph $G$ illustrated in Figure~\ref{fig:inferred-triple}. We continued to execute Stage~5 for this instance to generate more target graphs $G^*$. Table~\ref{table_triple_target} shows that 100 target graphs are generated by our new dynamic programming algorithm. \begin{figure}[!htb] \begin{center} \includegraphics[width=.45\columnwidth]{inferred-triple.eps} \end{center} \caption{An illustration of a chemical acyclic graph $G$ inferred for three chemical properties {\sc K\tiny{ow}}, {\sc Bp} and {\sc Hc} simultaneously, where $y^*_{{\rm Kow}} =4.0$, $y^*_{{\rm Bp}} =400.0$ and $y^*_{{\rm Hc}} =13000.0$, $n^* =50$, $\mathrm{bl}^* =2$, $\mathrm{dia}^* = 25$ and $d_\mathrm{max} =4$. } \label{fig:inferred-triple} \end{figure} \begin{table}[ht!] \caption{Results of Stages~4 and 5 for $\mathrm{bl}^*= 2$, $d_\mathrm{max}=4$, $n^* = 50$ and $\mathrm{dia}^* = 25$.
} \begin{center} \begin{tabular}{ @{} c r | r r r r r r r @{}}\toprule $\pi$ & $y^*$ & $n^*$ & $\mathrm{dia}^*$ &~{\small IP-time}\hspace{-1mm} & {\small $\#$FP~} & {\small G-LB~~~} & {\small $\#$G}~ & {\small G-time} \\ \midrule {\sc K\tiny{ow}} & 4 & \multirow{3}{*} {50} & \multirow{3}{*} {25} & \multirow{3}{*} {18930.46} &\multirow{3}{*} {117,548} &\multirow{3}{*}{$2.4 \times 10^{11}$} &\multirow{3}{*}{100} & \multirow{3}{*}{423.53}\\ {\sc Bp} & 400 &\\ {\sc Hc} & 13000 & \\ \bottomrule \end{tabular} \end{center}\label{table_triple_target} \end{table} \clearpage \section{Concluding Remarks}\label{sec:conclude} In this paper, we introduced a new measure, the branch-height of a tree, and showed that many of the chemical compounds in the chemical database have a simple structure where the number of 2-branches is small. Based on this, we proposed a new method of applying the framework for inverse QSAR/QSPR \cite{ACZSNA20,CWZSNA20,ZZCSNA20} to the case of acyclic chemical graphs, where Azam et al.~\cite{ACZSNA20} inferred chemical graphs with around 20 non-hydrogen atoms and Zhang et al.~\cite{ZZCSNA20} solved an MILP of inferring a feature vector for an instance with up to around 50 non-hydrogen atoms and diameter 8. In our method, we formulated a new MILP in Stage~4 specialized for acyclic chemical graphs with a small branch number and designed a new graph search algorithm in Stage~5 that computes frequency vectors of graphs in a dynamic programming scheme. We implemented our new method and conducted experiments on chemical properties such as octanol/water partition coefficient, boiling point and heat of combustion. The resulting method improved the performance so that chemical graphs with around 50 non-hydrogen atoms and diameter around 30 can be inferred. Since there are many acyclic chemical compounds having large diameters, this is a significant improvement.
It is left as future work to design MILPs and graph search algorithms based on the new idea of this paper for classes of graphs with a higher rank. \bigskip \noindent {\bf Abbreviations } ANN: artificial neural network; MILP: mixed integer linear programming \bigskip \noindent {\bf Acknowledgements} This research was supported, in part, by Japan Society for the Promotion of Science, Japan, under Grant \#18H04113. \bigskip \noindent {\bf Authors' contributions} Conceptualization, H.N. and T.A.; methodology, H.N.; software, N.A.A., J.Z., Y.Sun, Y.Shi, A.S. and L.Z.; validation, N.A.A., J.Z., A.S. and H.N.; formal analysis, H.N.; data resources, A.S., L.Z., H.N. and T.A.; writing--original draft preparation, H.N.; writing--review and editing, N.A.A., A.S. and T.A.; project administration, H.N.; funding acquisition, T.A. All authors have read and agreed to the published version of the manuscript. \bigskip \noindent {\bf Availability of data and materials} Source code of the implementation of our algorithm is freely available from {\tt https://github.com/ku-dml/mol-infer}. \bigskip \noindent {\bf Competing interests} The authors declare that they have no competing interests. \bigskip \noindent {\bf Author details} $^1$ Department of Applied Mathematics and Physics, Kyoto University, Kyoto 606-8501, Japan. $^2$ Graduate School of Advanced Integrated Studies in Human Survivability, Kyoto University, Kyoto 606-8306, Japan. $^3$ Bioinformatics Center, Institute for Chemical Research, Kyoto University, Uji 611-0011, Japan. \clearpage
\section{INTRODUCTION} Recently, thanks to their strong learning ability and low testing complexity, deep neural networks (DNNs) have achieved great success in optimization problems in wireless communication, such as channel estimation\cite{CE1}, beamforming\cite{MultiUser}, signal detection\cite{SD1}, resource allocation\cite{B_B}, etc. The existing DNN-based optimization methods can be divided into two main categories, aiming to improve algorithm performance and efficiency, respectively. For the first category, black-box DNNs are used to directly learn the input-to-output mapping\cite{CE1,MultiUser}, or techniques such as deep unfolding are used to exploit the advantages of both deep learning and conventional iterative optimization algorithms\cite{SD1}. Besides, the optimization objectives are expressed in the form of the monotonically decreasing energy functions of Hopfield neural networks in \cite{Hopfield}, so that the objective is optimized as the network evolves. As for the second category, a pruning policy is learned in \cite{B_B} to accelerate the branch and bound algorithm, while the complex objective function is approximated by a network in \cite{ApproximateObjective} so that the problem can be solved with simple optimization techniques. In most current DNN-based wireless optimization works, the network is first trained offline with a large number of samples to minimize the average loss over the entire dataset, and the network parameters are fixed during online testing. In spite of high theoretical performance according to the universal approximation theorem, the actual performance of an offline DNN can be limited by inadequate training and local minima. Consequently, in many complex wireless communication problems, conventional algorithms are still superior in performance, and the advantages of DNNs mainly lie in lower complexity\cite{MultiUser}.
Besides, performance degradation of a DNN trained offline is very common when the input distribution changes during online testing\cite{MultiUser}, and this limited generalization ability hinders the application of DNNs in fast-changing environments. Last but not least, a DNN is often regarded as a black box with unexplainable parameters in data-driven methods, and is therefore not suitable for tasks with strict reliability requirements. To address the above issues in current offline DNN-based methods, we propose a novel online DNN-based approach in this article to solve general optimization problems in wireless communication, where a dedicated DNN is trained for each data sample. Specifically, the optimization variables and the objective function are treated as the network parameters and loss function, respectively. Then, the decrease of the loss through network training is equivalent to the solving process of the optimization problem. The strong generalization ability and interpretability of the proposed approach can be easily understood based on its online optimization nature and meaningful network parameters. Furthermore, a practical example is provided to facilitate a better understanding of the proposed approach and illustrate its superiority. In the joint beamforming problem in intelligent reflecting surface (IRS)-aided multi-user multiple-input multiple-output (MIMO) systems, we demonstrate that the proposed online DNN achieves better performance than the conventional offline DNN and a state-of-the-art iterative optimization algorithm, with low complexity. \section{Online DNN for general optimization} In this section, the proposed general framework is elaborated from four aspects, namely network modeling, constraint elimination, parameter initialization, and network training.
\subsection{Network Modeling} Consider the following unconstrained optimization problem: \begin{equation} \min\limits_{\bm{x}} f(\bm{a},\bm{x}), \end{equation} where $\bm{a}$ denotes known parameters, $\bm{x}$ denotes optimization variables, and $f$ denotes the objective function. Both the conventional offline DNN-based method and the proposed online DNN-based approach can be used to solve the above optimization problem. Fig. \ref{framework} illustrates the frameworks of the two methods, and their main components are compared as follows to highlight the novelty of the proposed approach: \begin{itemize} \item {\bf Input}: For both methods, known parameters $\bm{a}$ that contain available information are treated as network input. \item {\bf Layers \& Parameters}: The conventional offline DNN typically consists of convolutional (Conv) and fully-connected (FC) layers with unexplainable parameters $\bm{\theta}$, while the proposed online DNN adopts self-defined (SD) layers where the estimations of optimization variables $\hat{\bm{x}}$ are treated as parameters and the forward computation is customized according to the signal flow to obtain $f$. \item {\bf Output}: The output of the conventional offline DNN is the estimations of optimization variables $\hat{\bm{x}}=m(\bm{a},\bm{\theta})$, where $m$ denotes the unexplainable mapping function parameterized by $\bm{\theta}$. For the proposed online DNN, the output is the optimization objective $f(\bm{a},\hat{\bm{x}})$. \item {\bf Loss function}: In the conventional offline DNN, considering supervised learning, the mean-squared error between the network prediction and the label is commonly used as the loss function, $L$. In the proposed online DNN, $f$ is a reasonable choice for $L$ since its reduction through training is equivalent to the solving process of the optimization problem.
Since no label is required, methods using this kind of loss function are usually called unsupervised learning-based approaches in recent literature\cite{MultiUser,GraphNN}. Notice that, in maximization problems, $L$ should be $1/f$ or $-f$ so that the reduction of the loss is equivalent to the maximization of the objective function. \end{itemize} \begin{figure}[htbp] \centering \includegraphics[width=0.50\textwidth]{framework.eps} \caption{The frameworks of the conventional offline DNN and the proposed online DNN to solve optimization problem (1).} \label{framework} \end{figure} As demonstrated by Fig. \ref{framework}, the conventional approach trains a common network offline with multiple data samples while the proposed approach trains a dedicated network online for each new sample. Therefore, there is no so-called testing stage in the online DNN since $\hat{\bm{x}}$ is obtained from the network parameters rather than the output, and the generalization problem does not exist at all. Besides, the meaningful parameters make the online DNN highly interpretable. \subsection{Constraint Elimination} DNN parameters are usually unconstrained and can take arbitrary values in the entire real space. However, for those optimization problems in wireless communications, the optimization variables, $\bm{x}$, are subject to various constraints, which results in a feasible region $\mathcal{X}$. To implement the proposed DNN with constrained variables, an intuitive method is to eliminate the constraints and transform the constrained optimization problems into unconstrained ones. There are several standard methods for constraint elimination. Some integrate the constraints into the objective function, such as the Lagrangian multiplier method and the penalty function method, while others maintain the feasibility of the solution through a projection operation, such as the projected gradient descent algorithm\cite{constraint_elimination}.
Nevertheless, these methods suffer from complex mathematical derivations, ill-conditioned problems, and slow convergence. In this article, we use the technique of reparameterization. Specifically, for $\bm{x}\in \mathcal{X}$, if we can find a differentiable transform function $g$ to express $\bm{x}$ in the form of a set of unconstrained variables $\bm{x}'$, i.e., $\bm{x}=g(\bm{x}')$ and the feasible region of $\bm{x}'$ is the entire real space, then we can treat $\bm{x}'$ as network parameters instead of $\bm{x}$. During training, the gradients of the loss function with respect to $\bm{x}'$ can be obtained by the chain rule, i.e., $\frac{dL}{d\bm{x}'}=\frac{dL}{d\bm{x}}\frac{d\bm{x}}{d\bm{x}'}$. After training, $\hat{\bm{x}}$ can be readily recovered by $g(\hat{\bm{x}'})$. Next, we provide the transforms and unconstrained counterparts of the optimization variables for the most common constraints in wireless communications: \begin{itemize} \item {\bf Complex constraint}: In most communication problems, the optimization variables are complex numbers. If an optimization variable $x\in\mathbb{C}$, then the unconstrained counterparts are its real and imaginary parts $x'_r$ and $x'_i$, and the transform is $x=x'_r+jx'_i$. \item {\bf Unit modulus constraint}: When IRS or phase shifters are used, the phase-shift components have unit modulus. If $x\in\mathbb{C}$ and $|x|=1$, then the unconstrained counterpart is its argument $\phi$, and the transform is $x=e^{j\phi}$. \item {\bf Box constraint}: If $a\le x\le b$, then the transform is $x=a+(b-a)\text{Sigmoid}(x')$, where the value of $\text{Sigmoid}(x')=1/(1+e^{-x'})$ is between 0 and 1. \item {\bf Maximum power constraint:} If $\bm{x}\in \mathbb{R}^K$ satisfies $\sum_{k=1}^Kx_k\le P$, then the unconstrained counterparts are the power unconstrained version $\bm{x}'$ and a power scaler $c$. The transform is $\bm{x}=\bm{x}'/\sum_{k=1}^Kx'_{k}\times P\times \text{Sigmoid}(c)$.
When beamforming is considered with multi-antenna transmitters, the transform is similar, as will be introduced later in the given example. \item {\bf Linear equality constraint:} If $\bm{x}\in \mathbb{R}^K$ satisfies $\bm{Ax}=\bm{b}$, where $\bm{A}\in\mathbb{R}^{M \times K}$ has full row rank and $M<K$ (i.e., there are infinitely many feasible solutions of $\bm{x}$), then the transform is $\bm{x}=\bm{Fx}'+\bm{x}_0$, where $\bm{x}_0$ is a special solution that satisfies $\bm{Ax}_0=\bm{b}$, e.g., $\bm{x}_0=\bm{A}^\dagger\bm{b}$ with $\dagger$ denoting the pseudo inverse, and the columns of $\bm{F}\in\mathbb{R}^{K\times(K-M)}$ span the null space of $\bm{A}$, i.e., $\bm{AF}=\bm{0}$; such an $\bm{F}$ can be obtained by, e.g., the \texttt{null} function in Matlab or \texttt{scipy.linalg.null\_space} in Python. \item {\bf Linear inequality constraint:} If $\bm{x}\in \mathbb{R}^K$ satisfies $\bm{Ax}\le \bm{b}$, then the transform is $\bm{x}=\bm{Fx}'+\bm{A}^\dagger(\bm{b}-\bm{\mu})$, where $\bm{\mu}=e^{\bm{\mu}'}>0$ denotes the introduced set of slack variables. \end{itemize} \subsection{Parameter Initialization} Before training, network parameters need to be properly initialized, which is especially important in non-convex optimization problems. One simple method is to use random initialization. Besides, by initializing and training multiple times and selecting the best result, the performance can be improved and stabilized, albeit at the cost of higher complexity. In fact, high-quality initializations can also be found without much complexity overhead by exploiting expert knowledge. For instance, we can initialize with sub-optimal solutions obtained by low-complexity baseline algorithms. Alternatively, in low-mobility scenarios, the channels are highly time-correlated, so the current initialization can inherit from previously optimized parameters or even be predicted by autoregressive models. \subsection{Network Training} After parameter initialization, the training process begins. During training, network parameters can be optimized by popular DNN optimizers.
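To make the reparameterization concrete, here is a minimal numpy/scipy sketch of three of the transforms listed above (the function names are ours, not from the article, and the power transform assumes an element-wise positive $\bm{x}'$ so that the normalization is well defined):

```python
import numpy as np
from scipy.linalg import null_space

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def box_transform(x_prime, a, b):
    # Box constraint a <= x <= b via the sigmoid squashing.
    return a + (b - a) * sigmoid(x_prime)

def power_transform(x_prime, c, P):
    # Maximum power constraint sum_k x_k <= P: normalize the
    # (assumed positive) x', then scale by P * Sigmoid(c) <= P.
    return x_prime / np.sum(x_prime) * P * sigmoid(c)

def equality_transform(x_prime, A, b):
    # Linear equality constraint A x = b: special solution plus a
    # component in the null space of A.
    F = null_space(A)              # K x (K-M), satisfies A F = 0
    x0 = np.linalg.pinv(A) @ b     # special solution with A x0 = b
    return F @ x_prime + x0
```

Treating the unconstrained arguments of these transforms as trainable parameters then turns the constrained problem into an unconstrained one.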
Specifically, in each training iteration, the network first executes forward computation to obtain the loss, and then executes backward computation to compute the gradients of the loss function with respect to all network parameters, which is efficiently implemented by mainstream deep learning libraries. Based on the gradients and the learning rate, network parameters are updated correspondingly. Multiple iterations are required to train the network to convergence. The learning rate, which is the only hyper-parameter in the proposed online DNN, has to be carefully configured to improve training efficiency. After the training process is completed, the final results of the optimization variables can be readily recovered based on the network parameters and the corresponding transforms. \subsection{Relationship with Classic Gradient Descent} The proposed approach is theoretically equivalent to the classic gradient descent algorithm. However, conventional manual derivation of gradients or symbolic differentiation suffers from swelling expressions and low computation efficiency, while the proposed novel neural network-based implementation benefits from automatic differentiation and paves the way for fast and universal applications of gradient descent in practical optimization problems. Despite the simplicity of the core algorithm, surprisingly good results can sometimes be achieved, such as in the example given in the next section. \section{Online DNN for Joint Beamforming in IRS-aided Multi-user MIMO systems} To facilitate a better understanding of the proposed approach and illustrate its superiority, we elaborate on joint beamforming in IRS-aided multi-user MIMO systems as an example. \subsection{System Model and Problem Formulation} Consider the IRS-aided multi-user MIMO system illustrated in Fig. \ref{system}, where the BS with $M$ antennas serves $K$ single-antenna users with the aid of an IRS with $N$ reflecting elements.
The direct links between the BS and users are assumed to be blocked. The received signal at the $k$-th user can be written as \begin{align} \label{transmission_model} y_k=\bm{h}^r_k\bm{\Theta Gx}+n_k, \end{align} for $k=1, \cdots, K,$ where $\bm{h}^r_k\in {\mathbb C}^{1\times N}$ and $\bm{G}\in {\mathbb C}^{N\times M}$ denote the channels of the $k$-th IRS-user link and the BS-IRS link, respectively. The phase shift matrix of the IRS is defined as $\bm{\Theta}\triangleq \text{diag}([\theta_1,...,\theta_N])$, where $|\theta_n|=1$ is the phase shift of the $n$-th reflecting element and $\text{diag}(\cdot)$ denotes the diagonalization operation, and $\bm{x}=\sum_{k=1}^K\bm{w}_{k}s_k$ is the transmit signal at the BS, where $\bm{w}_{k}\in{\mathbb C}^{M\times 1}$ and $s_k$ satisfying $\mathbb{E}\{s_ks_k^*\}=1$ denote the transmit beamforming vector and the information symbol for the $k$-th user, respectively. Besides, $n_k\sim \mathcal{CN}(0,\sigma^2)$ denotes the noise at the $k$-th user with variance $\sigma^2$. Defining $\bm{W} \triangleq [\bm{w}_1, ..., \bm{w}_K]$ and $\bm{H}^r\triangleq [\bm{h}^{rT}_1, ..., \bm{ h}^{rT}_K]^T$, the effective channel matrix is $\bm{H}\triangleq \bm{H}^r\bm{\Theta G}\in {\mathbb C}^{K\times M}$. Then, the received signal-to-interference-plus-noise ratio (SINR) at the $k$-th user can be expressed as \begin{align} {\gamma_k}=\frac{\bm{w}_k^H\bm{H}_{k*}^H\bm{H}_{k*}\bm{w}_k}{J_k}, \end{align} for $k=1, \cdots, K,$ where $J_k\triangleq\sigma ^2+\sum_{i=1,i\ne k}^K\bm{w}_i^H\bm{H}_{k*}^H\bm{H}_{k*}\bm{w}_i$ is the energy of the interference plus noise at the $k$-th user and $\bm{H}_{k*}$ denotes the $k$-th row vector of $\bm{H}$. We aim to maximize the sum rate of all users, $\mathcal R$, by jointly optimizing the transmit beamforming matrix $\bm{W}$ and the IRS phase shift matrix $\bm{\Theta}$.
The optimization problem is given by \begin{subequations} \begin{align} \label{opt_problem} \max\limits_{\bm{\Theta},\bm{W}}\quad& {\mathcal R}=\sum_{k=1}^K\log(1+{\gamma_k})\\ \text{s.t.}\quad&\sum_{k=1}^K\bm{w}^H_k\bm{w}_k\leq P_{max},\\ &|\theta_i|=1, \forall i=1, 2, ..., N, \end{align} \end{subequations} where (4b) is the transmit power constraint with $P_{max}$ denoting the maximum transmit power at the BS, while (4c) is the unit modulus constraint on the phase shifts of the IRS reflecting elements. \begin{figure}[htbp] \centering \includegraphics[width=0.50\textwidth]{system.eps} \caption{IRS-aided multi-user MIMO system.} \label{system} \end{figure} \subsection{Detailed Designs of the Proposed Online DNN} According to (2), (3), and (4a), we can easily identify the counterparts of the main components of the general framework in this specific problem. Clearly, $\bm{G}$, $\bm{H}^r$, and $\sigma^2$ make up the known parameters $\bm{a}=\{\bm{G}$, $\bm{H}^r, \sigma^2\}$, while $\bm{\Theta}$ and $\bm{W}$ make up the optimization variables $\bm{x}=\{\bm{\Theta}, \bm{W}\}$, and $\mathcal R$ is the objective function $f$. Besides, the unconstrained counterparts of $\bm{\Theta}$ and $\bm{W}$ as well as proper transforms are required to handle constraints (4b) and (4c). \begin{figure*}[htbp] \centering \includegraphics[width=0.7\textwidth]{network.eps} \caption{Network architecture for joint beamforming in IRS-aided multi-user MIMO systems.} \label{network} \end{figure*} The detailed network structure is illustrated in Fig. \ref{network}. It is straightforward to implement two layers representing $\bm{\Theta}$ and $\bm{W}$, respectively. First of all, $\bm{G}$ and $\bm{H}^r$ are input into the $\bm{\Theta}$ layer. Inside the $\bm{\Theta}$ layer, the arguments of the phase shifts of the IRS reflecting elements, $\phi_1,\cdots,\phi_N$, are defined as $N$ trainable network parameters. The forward computation first transforms $\phi$ to $\theta$ by $\theta_i=e^{j\phi_i},i=1,\cdots,N$.
Then, the effective channel matrix $\bm{H}$ is computed based on $\bm{G}$, $\bm{H}^r$, and $\bm{\Theta}$. Afterwards, $\bm{H}$ output by the $\bm{\Theta}$ layer flows into the $\bm{W}$ layer together with $\sigma^2$. Inside the $\bm{W}$ layer, the power-unconstrained real and imaginary parts of the transmit beamforming matrix $\bm{W}$'s elements, $\bm{W}'_{real}$ and $\bm{W}'_{imag}$, are defined as $2KM$ trainable network parameters. The forward computation first realizes the transform of power normalization by $\bm{W}_{real\&imag}=\bm{W}'_{real\&imag}/\sqrt{\sum_{k=1}^K\bm{w}'^H_k\bm{w}'_k}\times\sqrt{P_{max}}$. Then, the SINRs of the users, $\gamma_k,k=1,\cdots,K$, are computed based on $\bm{W}$, $\bm{H}$, and $\sigma^2$. Eventually, the sum rate of all users, $\mathcal R$, can be readily computed, and the loss function defined as $L=-\mathcal R$ is used for network training. \subsection{Simulation Results} Next, the superiority of the proposed approach is validated through simulation. Adopting the Rician channel model, the channels of the BS-IRS link and the $k$-th IRS-user link are \begin{align} \bm{G}= L_1(\sqrt{\frac{\epsilon}{\epsilon+1}}\bm{a}_N(\nu)\bm{a}_M(\phi)^H+\sqrt{\frac{1}{\epsilon+1}}\overline{\bm{G}}),\\ \bm{h}_k^r= L_{2,k}(\sqrt{\frac{\epsilon}{\epsilon+1}}\bm{a}_N(\zeta_k)+\sqrt{\frac{1}{\epsilon+1}}\overline{\bm{h}_k^r}), \end{align} where $L_1$ and $L_{2,k}$ are path losses in dB calculated as ${35.6+22.0\text{lg}(d)}$ with $d$ denoting the distance, $\bm{a}_M$ and $\bm{a}_N$ are the steering vectors of the uniform linear arrays at the BS and the IRS, respectively, while $\nu$, $\phi$ and $\zeta_k$ are angular parameters. The Rician factor $\epsilon$ is set to 10, while $\overline{\bm{G}}$ and $\overline{\bm{h}_k^r}$ are non-line-of-sight components following $\mathcal{CN}(0,1)$. The distance between the BS and the IRS is fixed to $200$ m, users are uniformly distributed in a circle $30$ m away from the IRS with a radius of $10$ m, and $P_{max}/\sigma^2$ is fixed to 20 dB.
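For reference, the forward computation of the $\bm{\Theta}$ and $\bm{W}$ layers described in the previous subsection can be sketched in plain numpy (the function name is ours; a real implementation would use a deep learning library such as PyTorch or TensorFlow so that gradients with respect to $\phi$, $\bm{W}'_{real}$, and $\bm{W}'_{imag}$ are generated automatically):

```python
import numpy as np

def forward_loss(phi, W_real, W_imag, G, Hr, sigma2, P_max):
    # Theta layer: unit-modulus phase shifts and effective channel.
    theta = np.exp(1j * phi)                 # |theta_n| = 1
    H = Hr @ np.diag(theta) @ G              # K x M effective channel
    # W layer: power normalization so that ||W||_F^2 = P_max.
    W = W_real + 1j * W_imag                 # M x K beamformer
    W = W / np.linalg.norm(W, 'fro') * np.sqrt(P_max)
    # Per-user SINR and sum rate (log base 2; the base only scales L).
    S = np.abs(H @ W) ** 2                   # S[k, i] = |h_k w_i|^2
    signal = np.diag(S)
    interference = S.sum(axis=1) - signal
    gamma = signal / (sigma2 + interference)
    R = np.sum(np.log2(1.0 + gamma))
    return -R                                # loss L = -R
```

Minimizing this loss over the unconstrained parameters is exactly the network training step described above.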
Firstly, the impact of the learning rate configuration is investigated. In the considered problem, the proposed DNN already works well with randomly initialized $\bm{\Theta}$ and $\bm{W}$. The training process terminates when the loss does not decrease in 25 consecutive iterations. Fig. \ref{impact_of_training} illustrates the convergence process of an exemplary sample when $M=8, K=4$ and $N=64$. As we can see, when the learning rate is fixed, a large learning rate can cause severe oscillation while a small learning rate can lead to slow convergence. In contrast, Adam is superior in terms of convergence speed and performance and is less sensitive to the initial learning rate thanks to its adaptive adjustment of the learning rate. Therefore, we adopt Adam with an initial learning rate of 0.1. Notice that the usage of the advanced Adam optimizer, originally developed in the area of deep learning, is enabled by our modeling of the optimization problem as a DNN. \begin{figure}[!htb] \centering \includegraphics[width=0.45\textwidth]{impact_of_training.eps} \caption{Convergence process of an exemplary sample.} \label{impact_of_training} \end{figure} The impact of initialization is illustrated in Fig. \ref{impact_of_init}, where $M=8, K=4$ and $N=128$. The best result obtained by running with multiple initializations is kept. The state-of-the-art block coordinate descent (BCD) algorithm\cite{baseline} is selected as a baseline. For BCD, random phase shifts and weighted minimum mean-squared error (WMMSE) beamforming based on the effective channels serve as the initializations of $\bm{\Theta}$ and $\bm{W}$, respectively, and the algorithm stops when the change of the sum rate between two consecutive iterations is less than 1e-5.
As we can see, the performance of both the proposed approach and BCD improves with the number of initializations at the cost of increased complexity, while the proposed approach consistently outperforms BCD, which can be attributed to the simultaneous update of all parameters. Besides, the performance gap decreases with the number of initializations due to BCD's larger performance variance across different initializations. \begin{figure}[!htb] \centering \includegraphics[width=0.45\textwidth]{impact_of_init.eps} \caption{Impact of the number of initializations.} \label{impact_of_init} \end{figure} To save running time, we consider a single initialization next. The impact of the number of reflecting elements $N$ is illustrated in Fig. \ref{impact_of_N}, where $M=8$ and $K=4$. The offline DNN-based approach proposed in \cite{MultiUser} with unsupervised training is also compared to highlight the superiority of the proposed online DNN. As we can see, the proposed approach achieves similar performance to BCD when $N$ is small, while when $N\ge80$, the proposed approach outperforms BCD and the performance gap increases with $N$. This is because the probability of BCD converging to a worse local optimum than the proposed approach is higher in systems with larger scales. Nevertheless, both the proposed approach and BCD outperform the offline DNN for various $N$. Notice that, for the offline DNN, performance degradation can happen when channel parameters change\cite{MultiUser}, which does not occur in the proposed approach thanks to its online optimization nature. \begin{figure}[!htb] \centering \includegraphics[width=0.45\textwidth]{impact_of_N.eps} \caption{Impact of the number of reflecting elements $N$.} \label{impact_of_N} \end{figure} \subsection{Complexity Analysis} The complexity of the BCD algorithm is $\mathcal{O}(I_O(2KNM+KM^2+K^2N^2))$, where $I_O$ denotes the number of iterations\cite{baseline}.
The complexity of the offline DNN proposed in \cite{MultiUser} is $\mathcal{O}(KNM)$. As for the proposed approach, the complexity is $\mathcal{O}(I_E(C_{F}+C_{B}))$, where the forward computation complexity $C_F$ is $\mathcal{O}(KNM+K^2M+KM)$, the predominant backward computation complexity $C_B$ is $\mathcal{O}(K^2NM+K^3M^2)$, and $I_E$ denotes the number of training iterations. Since usually $N\gg M$ and $N\gg K$, the proposed approach has a lower per-iteration complexity than BCD. Besides, the proposed approach also requires fewer iterations to converge in experiments. Although the offline DNN has the lowest complexity, its performance is clearly inferior, and the generalization and interpretability issues also hinder its practical applications. Overall, the proposed online DNN achieves the best performance-complexity tradeoff in the considered problem. To make the comparison more intuitive, running times on the same CPU are further shown in Table \ref{complexity}. The proposed approach runs much faster than BCD, especially in large-scale systems. Notice that another benefit of the proposed approach is the acceleration enabled by its DNN-based structure, including the efficient implementation of matrix calculation and the gradient backpropagation algorithm in deep learning libraries, as well as the usage of dedicated hardware like GPUs for parallel acceleration. Besides, in some special problems, the decomposition of the loss calculation into independent blocks for further acceleration is a future direction worth investigating.
\begin{table}[!htb] \begin{tabular}{|c|c|c|c|} \hline \diagbox{$M,N,K$}{Method} & Proposed & BCD & Offline DNN\\ \hline 4,64,2 & 0.196 & 0.383 & 0.002\\ \hline 8,64,2 & 0.220 & 0.816 & 0.003\\ \hline 8,128,2 & 0.381 & 6.034 & 0.009\\ \hline 8,128,4 & 0.587 & 9.871 & 0.011\\ \hline \end{tabular} \centering \caption{Average running time in seconds.} \label{complexity} \end{table} \section{CONCLUSION} \label{conclusion} In this article, we have developed a novel online DNN-based approach to solve general optimization problems in wireless communications. By treating the optimization variables and the objective function as network parameters and the loss function, respectively, the optimization problem can be equivalently solved through network training. The proposed approach has strong generalization ability and interpretability, and outperforms the conventional offline DNN and an iterative optimization algorithm while maintaining low complexity in a practical example.
\section{Introduction} Motional state control of atomic particles is achieved by the absorption and emission cycles of a resonant or near resonant radiation, i.e., by light scattering typically at optical frequencies. For instance, laser Doppler cooling reduces the momentum of atoms or ions through multiple recoil processes \cite{wineland1}. Coherent momentum transfer can be performed with two-photon Raman processes \cite{Chu2} for applications in, e.g., atom interferometry \cite{interferometry}. The quantum state of the atomic particles is composed of the internal states, e.g., two spin states $\{\Ket{\uparrow},\Ket{\downarrow}\}$ for a two-level atom, and the external motional state. For free particles the simplest motional state is the momentum state $|\vec{p\,}\rangle$. Trapped particles are instead characterized by vibrational eigenstates $|n\rangle$, which in the simplest case of a harmonic oscillator of frequency $\omega_{\text{vib}}$ have their energies equally spaced as $\hbar\omega_{\text{{vib}}}(n+1/2)$. In free space, the momentum state of a particle, and consequently its kinetic energy, is changed by the momentum transfer $|\vec{p\,}\rangle\to|\vec{p}\,'\rangle$ in the absorption/emission cycle of an optical photon. While the momentum transfer picture also applies approx\-imately for trapped particles when the energy separation between motional states is not spectroscopically resolved, recoil-free transitions become possible in the resolved-sideband regime (M\"ossbauer effect). While carrier transitions do not change the vibrational quantum state $\ket{n}$, the motional state can be controlled via sideband transitions $\Ket{n}\to\Ket{n'}$ ($n'\neq n$), for instance, in incoherent cooling processes $\Ket{n}\to\Ket{n-1}$ or in coherent manipulation of vibrational states \cite{Leibfried2003}. 
With trapped ions or neutral atoms trapped in optical lattices, the resolved-sideband regime is typically realized by two-photon Raman transitions connecting two different hyperfine ground states \cite{wineland,jessen,Perrin1998}. Alternatively, in spin-dependent potentials it also becomes possible to use microwave transitions, which likewise offer sufficient spectral resolution \cite{Wunderlich2,durr,Weiss1997,Leonid,Ospelkaus}. In the semi-classical picture shown in figure (\ref{fig:classicalProj}), an atomic transition exchanges either kinetic or potential energy with the motional degree of freedom of the atom. With the absorption of an optical photon, the kinetic energy is changed by the momentum kick from the photon, and quantum mechanically the process can be interpreted in terms of a displacement of the wavefunction in momentum space. With the absorption of a microwave photon, which carries a negligible momentum, the potential energy of the atom can be changed if the trapping potentials of the two states are different; this allows an interpretation in terms of a wavefunction displaced in position space. \begin{figure}[!b] \begin{centering} \includegraphics[scale=0.5]{figure1} \par\end{centering} \caption{In a semi-classical picture, an atomic transition can affect the motional state of an atom either (a) by a kinetic energy change caused by the momentum transfer from an optical photon of wavevector $k_{\text{opt}}$ (velocity-selective transition \cite{wineland1}), or (b) by a potential energy change when the potentials of the two internal states are displaced in space by $\Delta x$ (position-selective transition). In the two cases, the motional energy is decreased when the detuning is set to the Doppler shift $k_{\text{opt}}v_{\text{at}}$ or the potential energy $m\,\omega^2\Delta x^2/(2\hbar)$, respectively.
\label{fig:classicalProj}} \end{figure} \section{Microwave induced motional sideband transitions} \subsection{Motional states in a state dependent lattice\label{sub:TheorySB}} We consider a single atom with two spin states \{$\Ket{\uparrow}$, $\Ket{\downarrow}$\} trapped in a one-dimensional optical lattice. We will initially ignore the internal degree of freedom of the atom and take the Hamiltonian governing its motion in the trap as given by \begin{equation} \hat{H}_{\text{{ext}}}=\frac{\hat{p}^{2}}{2m}+\frac{U_{0}}{2}\cos^{2}(k_{\text{{L}}}\hat{x}),\label{eq:ham} \end{equation} with $U_{0}$ being the trap depth, $k_{\text{{L}}}=2\pi/\lambda_{\text{L}}$ being the wavenumber of the two counter propagating laser fields creating the lattice, and $\hat{x}$, $\hat{p}$, the atom's position and momentum, respectively. The motional eigenstates of an atom in such a potential are the well-known Bloch wavefunctions $\Ket{\Psi_{n,k}^{B}}$, where $n$ is the band index ($n=0$ for the first band) and $k$ is the wavevector in the first Brillouin zone (BZ). In the limit of deep lattice potentials that we are considering here, the atoms remain localized for the time scales of the experiment and their spatial state is best described by the maximally localized Wannier state \cite{kohn} \begin{equation} \Ket{n,r}=\frac{1}{\sqrt{N}}\sum_{k\in\text{{BZ}}}e^{-ikrd}\Ket{\Psi_{n,k}^{B}}.\label{eq:Wannier} \end{equation} Here $N$ is the lattice size, $r$ the site index, and $d=\lambda_{\text{L}}/2$ the lattice spacing. In this deep lattice regime, we can safely view the vanishingly narrow energy bands $\varepsilon_{n}(k)$ as the vibrational level energies $\varepsilon_{n}$ of the corresponding Wannier state $\vert n,r\rangle$ at lattice site $r$; in the harmonic approximation we would have $\varepsilon_{n}=\hbar\omega_{\text{vib}}(n+1/2)$. 
The Wannier states form an orthonormal basis set such that the overlaps between two different states yield $\Braket{n,r|n'\!,r'}=\delta_{n,n'}\delta_{r,r'}.$ This means that the interaction of the atomic spin with a microwave field will fail to induce motional sideband transitions, $\Ket{n,r}\leftrightarrow\Ket{n'\!,r'}$, because of the nearly negligible momentum carried by microwave photons, five orders of magnitude smaller than that of optical photons. This restriction can be lifted if the atom experiences a different trapping potential depending on its internal spin state, as the corresponding motional eigenstates are then no longer orthogonal \cite{wunderlich2001,weiss2}. A simple relative spatial shift of the potentials trapping each internal state induces such a difference. A shift by a distance $\Delta x$ is accounted for by the position space shift operator $\hat{T}_{\Delta x}\equiv\exp(-i\:\hat{p}\Delta x/\hbar)$, see figure \figref{fig:SDTImage+overlap}{a}. The overlap between the two Wannier states then becomes \begin{equation} \Braket{n'\!,r'|\hat{T}_{\Delta x}|n,r}\equiv I_{n,r}^{n'\!,r'}(\Delta x).\label{eq:Txoverlap} \end{equation} The resulting overlap integral, $-1\leq I_{n,r}^{n'\!,r'}(\Delta x)\leq1$, is hence a known function of $\Delta x$, see figure \figref{fig:SDTImage+overlap}{b}. It is analogous to the Franck\textendash{}Condon factor from molecular physics and it determines the strength of the transitions coupling different vibrational levels \cite{Bernath1995}. One way to realize the shift operator $\hat{T}_{\Delta x}$ is by two overlapped lattices which trap each spin state separately and can be independently shifted in the longitudinal direction as shown in figure (\ref{fig:SDTImage+overlap}).
The trapping potential thus becomes dependent on the spin state $s=\{\uparrow,\downarrow\}$ and the shift distance $\Delta x=x_{\uparrow}^{0}-x_{\downarrow}^{0}$, \begin{equation} \hat{H}_{\text{{ext}}}=\frac{\hat{p}^{2}}{2m}+\sum_{s=\{\uparrow,\downarrow\}}\frac{U_{0}^{s}}{2}\cos^{2}[k_{\text{{L}}}(\hat{x}-x_{s}^{0})]\otimes\Ket{s}\Bra{s}\label{eq:newH} \end{equation} with $x_{s}^{0}$ being the position of the lattice trapping the state $\Ket{s}$. The total transition matrix element for two spin states coupled by an interaction Hamiltonian $H_{\text{{I}}}$, with a free-atom bare Rabi frequency $\Omega_{0}$, is then given by \begin{equation} \hbar\Omega_{n,r}^{n'\!,r'}(\Delta x)/2=\left\langle s'\!,n'\!,r'\left\vert \hat{T}_{\Delta x}\otimes H_{\text{{I}}}\right\vert s,n,r\right\rangle =I_{n,r}^{n'\!,r'}(\Delta x)\times\hbar\Omega_{0}/2.\label{eq:SBcoupling} \end{equation} \begin{figure} \begin{centering} \includegraphics[width=0.8\textwidth]{figure2} \par\end{centering} \centering{}\caption{(a) \label{fig:SDTImage+overlap}The coupling strength of a sideband transition in a spin-dependent lattice is the bare spin state coupling $\Omega_{0}$ multiplied by the overlap between the two involved vibrational states, the Franck-Condon factor, which is controlled by the relative shift $\Delta x$ between the two lattices. $\eta_{x}$ is the spatial Lamb-Dicke parameter defined in section \ref{sub:TheorySB} and later in \ref{sub:3.1}. (b)\label{fig:FCfsCalc} Lattice shift dependence of the Franck-Condon factors for different transitions, denoted as $n-m$, calculated for typical experimental parameters (see text).\label{fig:Intro+FCfCalc}} \end{figure} The Franck-Condon factors $I_{n,r}^{n'\!,r'}(\Delta x)$ can be explicitly evaluated using equations (\ref{eq:Wannier}) and (\ref{eq:Txoverlap}). 
We first rewrite equation (\ref{eq:Wannier}) using Bloch's theorem, \begin{equation} \Ket{n,r}=\frac{1}{\sqrt{N}}\sum_{k\in\text{{BZ}}}\:\sum_{q\in\mathbb{Z}}e^{-ikrd}\: a_{n,q}(k)\;\Ket{k+\tfrac{2\pi}{d}q},\label{eq:BlochTheorem} \end{equation} with $a_{n,q}(k)$ being the Fourier coefficients of the Bloch functions and $\Ket{k}$ the plane-wave state. These functions can be constructed using the periodic solutions of the Mathieu differential equation \cite{mathieu,slater} with their phase chosen such that the resulting Wannier states are real and have the proper parity corresponding to their respective vibrational levels \cite{kohn}. The coefficients $a_{n,q}(k)$ are numerically obtained from algorithms for the computation of Mathieu coefficients \cite{alhargan}. Inserting (\ref{eq:BlochTheorem}) in (\ref{eq:Txoverlap}) and taking into account the parity of the Wannier states, or equivalently the parity of the band $n$, one eventually arrives at the following expression for the Franck-Condon factors \begin{equation} I_{n,r}^{n'\!,r'}(\Delta x)=2\sum_{k\in\text{{BZ}}}\:\sum_{q\in\mathbb{Z}}\mathcal{F}\left[(k+\frac{2\pi}{d}q)(\Delta x+(r-r')d)\right]\: a_{n,q}^{*}(k)\, a_{n'\!,q}(k),\label{eq:FCFexpression} \end{equation} where we have defined $\mathcal{F}(x):=\cos(x)\,$ if $n$ and $n'$ have the same parity, and $\mathcal{F}(x):=\sin(x)\,$ otherwise. Numerical evaluation of (\ref{eq:FCFexpression}) is shown in figure (\ref{fig:FCfsCalc}). Considering a single lattice site and assuming the harmonic approximation for the potential, the shift operator takes the simple form $\hat{T}_{\Delta x}=\exp[\eta_x(a^\dagger -a)]$, where $a^\dagger$ ($a$) is the raising (lowering) operator acting on the vibrational states. Here we introduced the spatial Lamb-Dicke parameter \begin{equation} \eta_x=\Delta x/(2\hspace{1pt}x_0)\,, \end{equation} where $x_0$ is equal to the rms width of the motional ground state.
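As an illustrative numerical cross-check (not the Mathieu-based computation used for the figures), the harmonic-approximation matrix elements $\Braket{n'|\hat{T}_{\Delta x}|n}$ can be obtained by exponentiating truncated ladder-operator matrices; the function name \texttt{fc\_harmonic} is ours, assuming numpy/scipy:

```python
import numpy as np
from scipy.linalg import expm

def fc_harmonic(eta, nmax=40):
    """Matrix of harmonic-approximation Franck-Condon factors
    <n'| exp[eta (a^dagger - a)] |n> on a truncated number-state basis."""
    a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)  # lowering operator
    # a.T is the raising operator; the exponent is antisymmetric,
    # so the result is a real orthogonal (unitary) matrix.
    return expm(eta * (a.T - a))
```

For $\eta_x=0.5$ this reproduces the exact displacement-operator results $\Braket{0|\hat{T}_{\Delta x}|0}=e^{-\eta_x^2/2}$ and $\Braket{1|\hat{T}_{\Delta x}|0}=\eta_x\, e^{-\eta_x^2/2}$, consistent with the first-order expansion below for small $\eta_x$.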
When $\eta_x\ll1$, taking the first-order term in $\eta_x$ of $\hat{T}_{\Delta x}$ allows for a simple expression of the Franck-Condon factors for transitions on the same lattice site (i.e., $r$=$r'$=0), $I_{n,0}^{n'\!,0}(\Delta x)\approx\delta_{n,n'}+\eta_x(\sqrt{n'}\delta_{n'\!,n+1}-\sqrt{n}\delta_{n'\!,n-1})$. \subsection{Experimental setup\label{sub:sec2.2}} We load Cesium ($^{133}\text{Cs}$) atoms from a magneto-optical trap into a 1D optical lattice formed by two counter-propagating, far-detuned, linearly polarized laser beams. The filling factor is at most one atom per lattice site due to light-induced collisions \cite{Schlosser:2001}. A weak guiding magnetic field of 3\;G oriented along the lattice lifts the degeneracy between the Zeeman sublevels of the Cesium $6^{2}S_{1/2}$ ground state such that atoms can be initialized by optical pumping beams into the hyperfine state $\Ket{\uparrow}\equiv\Ket{F=4,m_{F}=4}$. Microwave radiation, at around $\omega_\text{MW}=2\pi\times\SI{9.2}{\giga\hertz}$, couples states $\Ket{\uparrow}$ and $\Ket{\downarrow}\equiv\Ket{F=3,m_{F}=3}$ with the bare Rabi frequency of $\Omega_{0}=2\pi\times\SI{60}{\kilo\hertz}$ \cite{addressing}. The spin state of the atom is probed using the so-called ``push-out'' technique \cite{pushout}, which consists of counting the fraction of atoms left in $\Ket{\downarrow}$ after all the atoms in $\Ket{\uparrow}$ have been removed by an intense radiation pulse.
An angle $\theta$ between the linear polarization vectors of the two beams forming the lattice is equivalent in the circular basis to a phase delay of $2\theta$ between two collinear and independent circularly-polarized standing waves, $\sigma^{+}$ and $\sigma^{-}$, or equivalently to a standing wave longitudinal relative shift of \begin{equation} \Delta x_\text{sw}(\theta)=\theta\, d/\pi.\label{eq:shiftdist} \end{equation} The polarization angle $\theta$ is controlled by an electro-optical modulator (EOM) and two quarter-wave plates in the path of one of the two lattice beams. The two in-phase circular components of the beam are mapped by the first $\lambda_{\text{L}}/4$ plate onto orthogonal linear polarizations parallel to the EOM axes. The retardation $2\theta$ induced by the EOM is proportional to the voltage signal applied to it. The last plate then converts the linear polarizations back into the circular ones while conserving the delay. \begin{figure}[t] \begin{centering} \includegraphics[scale=0.45]{figure3} \par\end{centering} \caption{State-dependent optical lattices relatively shifted by a distance $\Delta x$. The total trap depth difference $\Delta U^{\text{tot}}=U_{\uparrow}^{\text{tot}}-U_{\downarrow}^{\text{tot}} $, and lattice contrast $W_s$ for spin state $\Ket{s}$ are shown. Unlike the spin $\Ket{\uparrow}$ lattice, the contrast and total depth of the spin $\Ket{\downarrow}$ lattice vary with the shift distance.\label{fig:contrast}} \end{figure} The trapping potentials resulting from the $\sigma^{+}$ and $\sigma^{-}$ standing waves for a spin state $\Ket{s}$ are \begin{equation} U_{s}=U_{s}^{\textrm{\text{{tot}}}}+W_{s}\cos^{2}[k_{L}(x-x_{s}^{0})]\label{eq:latpotgen} \end{equation} where $W_{s}$ and $U_{s}^{\text{{tot}}}$ are the lattice contrast, taking positive values, and total trap depth for state $\Ket{s}$, respectively. 
Both $W_{s}$ and $U_{s}^{\text{{tot}}}$ depend on the lattice laser wavelength $\lambda_{\text{L}}$ and on the lattice shift $\Delta x$ or equivalently the polarization angle $\theta$, see figure (\ref{fig:contrast}). For alkali atoms, one can define the ``magic wavelength'' as the one where the state $\Ket{\uparrow}$ experiences the $\sigma^{+}$ standing wave only. This occurs at $\lambda_{\text{L}}=\lambda_2+(\lambda_1 - \lambda_2)/(2\lambda_1/\lambda_2+1)\approx \lambda_2+(\lambda_1 - \lambda_2)/3 $, where $\lambda_1$ ($\lambda_2$) is the wavelength of the D${}_1$ (D${}_2$) line \cite{Deutsch1998,Jaksch99}, which is $\lambda_{\text{L}}=\SI{866}{\nano\meter}$ in our case. At this wavelength, for the spin $\Ket{\uparrow}$ state, equation (\ref{eq:latpotgen}) reads \begin{equation} U_{\uparrow}=-W_{\uparrow}+W_{\uparrow}\cos^2(k_{L}x-\theta/2),\label{eq:Uup} \end{equation} while the spin $\Ket{\downarrow}$ state experiences both $\sigma^{+}$ and $\sigma^{-}$ standing waves with a relative weight of $1/8$ and $7/8$, respectively. The lattice potential in this case is \begin{equation} U_{\downarrow}=-W_{\uparrow}+(1/8)\hspace{1pt}W_{\uparrow}\cos^{2}(k_{L}x-\theta/2)\;+\; (7/8)\hspace{1pt}W_{\uparrow}\cos^{2}(k_{L}x+\theta/2)\label{eq:Udown}\,. \end{equation} With the notation of equation~(\ref{eq:latpotgen}), one finds that $W_\uparrow=-U_\uparrow^\text{tot}$ is independent from the angle $\theta$, while $W_\downarrow=[\cos(\theta)^2+(3/4)^2\sin(\theta)^2]^{1/2}\hspace{2pt}W_\uparrow$ and $U_\downarrow^\text{tot}=-(W_\uparrow+W_\downarrow)/2$. In addition, one obtains the lattice relative shift $\Delta x=(d/\pi)\{\theta+\arctan[3\tan(\theta)/4]\}/2$. Equations (\ref{eq:Uup}) and (\ref{eq:Udown}) constitute the closest realization of the idealized spin-dependent lattice discussed in section \ref{sub:TheorySB}.
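The closed forms above for $W_\downarrow/W_\uparrow$ and $\Delta x$ can be tabulated directly as functions of $\theta$ (an illustrative sketch with our own function name; the $\arctan$ form is valid for $|\theta|<\pi/2$):

```python
import numpy as np

def lattice_down(theta, d=1.0):
    """Spin-down contrast ratio W_down/W_up and relative lattice shift
    Delta x (in units of the lattice spacing d) for polarization angle
    theta, from the closed forms derived above (|theta| < pi/2)."""
    w_ratio = np.sqrt(np.cos(theta)**2 + (3.0 / 4.0)**2 * np.sin(theta)**2)
    dx = (d / np.pi) * (theta + np.arctan(3.0 * np.tan(theta) / 4.0)) / 2.0
    return w_ratio, dx
```

At $\theta=0$ the two lattices coincide ($W_\downarrow=W_\uparrow$, $\Delta x=0$), and the contrast ratio decreases monotonically toward $3/4$ as $\theta\to\pi/2$.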
The small admixture of a $\sigma^{+}$ component in equation (\ref{eq:Udown}) results in a lattice depth $W_{\downarrow}$ that depends on $\theta$, or equivalently on the lattice shift $\Delta x$, which makes the energy levels $\varepsilon_{s,n}(\Delta x)$ depend on the spin state and on the shift $\Delta x$, see figure (\ref{fig:contrast}). The nonlinear position shift of the $U_\downarrow$ potential, $x_{\downarrow}^{0}$, makes $\Delta x$ deviate from the standing wave relative shift $\Delta x_\text{sw}$ in equation (\ref{eq:shiftdist}), and this has to be taken into account in the calculation of the Franck-Condon factors \cite{Deutsch1998}. The typical total lattice depth used in our experiment is $W_\uparrow\approx850\, E_{\text{R}}^{\text{latt}}$ (corresponding to $\SI{80}{\micro\kelvin}$), with $E_{\text{R}}^{\text{latt}}=\hbar^{2}k_{\text{L}}^{2}/2m_{\text{Cs}}$ being the lattice recoil energy, which amounts to an oscillation frequency along the lattice axis of $\omega_{\text{{vib}}}\approx2\pi\times\SI{116}{\kilo\hertz}$. In the transverse direction, atoms are confined only by the Gaussian profile of the lattice lasers, which results in a transverse oscillation frequency of $\omega_{\text{{rad}}}\approx2\pi\times\SI{1}{\kilo\hertz}$. The typical initial temperature of the atoms loaded into the lattice is $T\approx\SI{10}{\micro\kelvin}$, which in the harmonic approximation amounts to mean vibrational numbers of $\overline{n}_{\text{{vib}}}\approx1.4$ and $\overline{n}_{\text{{rad}}}\approx280$ in the axial and transverse directions, respectively.
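The quoted numbers can be cross-checked with a short calculation (CODATA values for $\hbar$ and $k_B$ and the $^{133}$Cs mass are assumed here), which reproduces the $\approx\SI{80}{\micro\kelvin}$ depth and the axial thermal occupation:

```python
import math

hbar = 1.054571817e-34  # J s
kB = 1.380649e-23       # J / K
m_Cs = 2.20694650e-25   # kg (133Cs)

lam_L = 866e-9                        # lattice wavelength (m)
k_L = 2 * math.pi / lam_L
E_R = hbar**2 * k_L**2 / (2 * m_Cs)   # lattice recoil energy (J)

depth_uK = 850 * E_R / kB * 1e6       # trap depth in microkelvin, ~80 uK

omega_vib = 2 * math.pi * 116e3       # axial trap frequency (rad/s)
T = 10e-6                             # initial temperature (K)
# thermal mean occupation of a harmonic ladder, ~1.34 (quoted as ~1.4)
n_bar = 1 / (math.exp(hbar * omega_vib / (kB * T)) - 1)
```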
\subsection{Microwave sideband spectra\label{sub:2.3}} \begin{figure}[b] \begin{centering} \hfill{}\includegraphics[width=11.7cm]{figure4}\hfill{} \par\end{centering} \centering{}\caption{(a) Microwave spectrum of sideband transitions $\Ket{\uparrow,n=0}\leftrightarrow\Ket{\downarrow,n'}$ for lattice shifts $\Delta x=$\{0nm ({\small$\bullet$}), 43nm ($\circ$), 111nm ({\tiny$\blacksquare$}), 176nm ({\tiny$\square$})\} corresponding to the parameter $\eta_{x}={\{0,1.2,3.1,4.9\}}$ defined in section \ref{sub:3.1} (data points from \cite{Leonid}). The microwave detuning is given with respect to the carrier transition frequency. Each data point is an average over about 100 atoms; the data are fitted with a model that takes into account the broadening mechanisms detailed in the text. The error bars, reported only for three representative peaks, are obtained with the 68\% Clopper-Pearson interval method for binomial statistics. The panels (b) and (c) compare the expected values (dashed lines) for the lattice contrast $W_{\downarrow}$ and total trap depth difference $\Delta U^{\text{tot}}$ (see figure (\ref{fig:contrast}) and text in section~\ref{sub:sec2.2}) with the values extracted from the fits ($1\%$ uncertainty).\label{fig:fullSBspectrum}} \end{figure} We investigate sideband transitions by recording microwave spectra for different lattice shifts. Controlling the relative distance $\Delta x$ allows us to continuously tune the parameter $\eta_x$ from 0 to about 5. In order to resolve the sidebands we use Gaussian microwave pulses with a FWHM of $\SI{30}{\micro\second}$ and a bare Rabi frequency of $\Omega_{0}/2\pi=\SI{36}{\kilo\hertz}$, corresponding to the $\pi$-pulse condition for the carrier transition. Figure (\ref{fig:fullSBspectrum}) shows a combined spectrum where transitions from $n=0$ to levels up to $n'=14$ are well resolved \cite{Leonid}. Four spectra are recorded for four different lattice shifts.
With an unshifted lattice only the carrier transition is visible, and it defines the zero of the microwave detuning $\delta_\text{MW}$. The remaining three lattice shifts were chosen such that for each shift distance $\Delta x$ the sideband coupling strength on the same site (i.e., $r=r'$), $\Omega_{n,0}^{n'\!,0}(\Delta x)$, is simultaneously close to its maximum for a small group of adjacent sideband transitions. The coupling strength for sites $r\neq r'$ can be neglected at the given shifts. For each shift distance $\Delta x$ the microwave spectra are fitted with spectra obtained from a numerical calculation of the time evolution based on the following Hamiltonian \begin{equation} \hat{H}=\hat{H}_{0}+\hat{H}_{\text{MW}}\label{eq:H} \end{equation} with \begin{eqnarray} \hat{H}_{0} & = & \sum_{s=\uparrow,\downarrow}\sum_{n}\bigg(\varepsilon_{s,n}(\Delta x)+\delta_{s,\uparrow}\,\hbar\hspace{0.5pt}\omega_\text{HS}\bigg)\Ket{s,n,r}\Bra{s,n,r}\label{eq:H0}\,,\\ \hat{H}_{\text{MW}} & = & -\frac{\hbar}{2}\Omega_{0}\sum_{r,r'}\sum_{n,n'}\, I_{n,r}^{n'\!,r'}(\Delta x)\,\bigg(e^{-i\hspace{0.3pt}\omega_{\text{MW}}t}\Ket{\uparrow,n,r}\Bra{\downarrow,n'\!,r'}+\text{h.c.}\bigg)\,,\label{eq:HMW} \end{eqnarray} where $\omega_\text{HS}$ denotes the hyperfine splitting frequency of the ground state. With this notation, the microwave detuning reads $\delta_\text{MW}=\omega_\text{MW}-\omega_\text{HS}$. Given the deep lattice regime considered here, in the numerical solution of equation (\ref{eq:H}) the maximum number of vibrational levels per site can, to a good approximation, be restricted to $n_{\text{max}}=15$; above this level atoms start to behave like free particles, tunneling between sites or coupling directly to the continuum. In this regime, the coupling strengths for sideband transitions between lattice sites separated by a distance $x>d$ are two orders of magnitude too small to play a role on the typical time scales of our experiment; therefore, we limit the site indices to $r=r'$.
\begin{figure} \begin{centering} \hfill{}\includegraphics[width=0.74\textwidth]{figure5}\hfill{} \par\end{centering} \centering{}\caption{(a) \label{fig:InhBroadeningConvol}\label{fig:FCFsDist} Inhomogeneous broadening effect due to the transverse motion of the atoms in the trap. The overall peak profile (bottom curve) is the convolution of the other two profiles. The Fourier-limited FWHM and the thermal broadening are typically $\SI{20}{\kilo\hertz}$ and $\SI{5}{\kilo\hertz}$ (for $T_\text{2D}\sim\SI{10}{\micro\kelvin}$ and first sideband), respectively. (b) Left panel: Franck-Condon factors as a function of the radial distance $\rho$ (here, $\theta=\SI{15}{\degree}$). The gray profile shows the 2D radial distribution from equation~(\ref{eq:boltzmann}) for the same temperature. Right panel: Resulting thermal distribution of Franck-Condon factors.} \end{figure} In the fitting of the sideband spectra, the energy levels $\varepsilon_{s,n}$ and Franck-Condon factors $I_{n,0}^{n'\!,0}$ depend on the fitting parameters $\Delta x$, $U_{s}^{\text{{tot}}}$ and $W_{s}$. In particular, in the harmonic approximation the spacing between two adjacent peaks is equal to the trap frequency of the $U_\downarrow$ potential, which therefore determines the lattice contrast $W_\downarrow$; the absolute offset of each spectrum is mainly determined by the difference of the total trap depths, $\Delta U^{\text{{tot}}}=U_{\uparrow}^{\text{{tot}}}-U_{\downarrow}^{\text{{tot}}}$, expressed in frequency units. Additionally, an average over the thermal motion of the atoms in the transverse direction of the one-dimensional optical lattice has to be performed. In fact, the lattice parameters $U_s^\text{tot}$ and $W_s$ depend on the transverse position of the atom, and to take this dependence into account we assume that during the microwave dynamics an atom has a ``frozen'' transverse position $\rho$. 
This assumption is justified by the slow transverse motion of the atoms, $\omega_{\text{rad}}/2\pi\approx \SI{1}{\kilo\hertz}$, compared to the lowest bare Rabi frequency used for the microwave pulse, $\Omega_{0}/2\pi\approx\SI{14}{\kilo\hertz}$. The transverse positions of the atoms are then assumed to be distributed according to a two-dimensional Boltzmann distribution, shown in figure \figref{fig:FCFsDist}{b} and given in the harmonic approximation by \begin{equation} \mathcal{P}(\rho)=\frac{\rho}{\sigma^{2}}\exp(-\frac{\rho^{2}}{2\sigma^{2}}),\qquad\text{ with }\qquad\sigma=\sqrt{\frac{k_{B}T_{\text{{2D}}}}{m_{\text{{Cs}}}\hspace{2pt}\omega_{\text{{rad}}}^{2}}}\label{eq:boltzmann} \end{equation} where $T_{\text{{2D}}}$ is the transverse temperature. The thermal transverse position distribution results in an inhomogeneous distribution of microwave sideband resonance frequencies and of Franck-Condon factors, shown qualitatively in figure (\ref{fig:InhBroadeningConvol}). Both distributions are used to weight the calculated spectra, with $T_{\text{{2D}}}$ as an additional fitting parameter. The figure shows that the thermal broadening becomes larger for higher sidebands, which exhibit more pronounced asymmetric peak shapes. This behavior has a clear explanation: in the harmonic approximation, one expects the thermal broadening to increase linearly with the band index $n$, while the Fourier-limited FWHM remains constant. The best-fit results for $W_{\downarrow}$ and $\Delta U^{\text{{tot}}}$ are shown in figures \figref{fig:fullSBspectrum}{b} and \figref{fig:fullSBspectrum}{c}. This method allows us to spectroscopically determine the parameters of the spin-dependent potentials seen by the atoms with a relative uncertainty of about $1\%$.
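The thermal weighting over transverse positions can be sketched with a small Monte-Carlo sample of equation (\ref{eq:boltzmann}); the temperature value below is illustrative, and the inverse-CDF sampling step is an implementation choice, not taken from the text:

```python
import math
import random

kB = 1.380649e-23       # J / K
m_Cs = 2.20694650e-25   # kg (133Cs)
omega_rad = 2 * math.pi * 1e3   # transverse trap frequency (rad/s)
T_2D = 10e-6                    # transverse temperature (K), illustrative

# rms transverse size, ~4 um for the parameters above
sigma = math.sqrt(kB * T_2D / (m_Cs * omega_rad**2))

rng = random.Random(0)

def sample_rho():
    """Draw a radius from P(rho) via the inverse of its CDF 1 - exp(-rho^2/2 sigma^2)."""
    return sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random()))

samples = [sample_rho() for _ in range(100000)]
mean_rho = sum(samples) / len(samples)
expected_mean = sigma * math.sqrt(math.pi / 2)  # analytic mean of P(rho)
```

Each sampled $\rho$ would then select the local lattice parameters, and hence the Franck-Condon factors, used to weight the calculated spectrum.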
The small deviations from the expected values (dashed curves in the figure) can be attributed in part to measurement uncertainty and to polarization imperfections in the standing wave beams, resulting in slightly distorted potentials. For instance, polarization distortion can be responsible for the non-monotonic behavior of the data points in figure~\figref{fig:fullSBspectrum}{c}. From the fit, we obtain a temperature of $T_{\text{{2D}}}= (2.7\pm 0.5)\,\si{\micro\kelvin}$. Without axial ground state cooling, we measure a three-dimensional temperature of $\SI{10}{\micro\kelvin}$ by means of the adiabatic lowering technique~\cite{Wolfgang}. This discrepancy requires further investigation. \section{Microwave sideband cooling} The general principle of resolved sideband cooling, depicted in figure (\ref{fig:ToyModel}), relies on the repetition of cooling cycles where each cycle starts with a sideband transition $\Ket{\uparrow,n}\to\Ket{\downarrow,n-1}$ removing a vibrational energy quantum $\hbar\omega_{\text{{vib}}}$. The cycle is then closed by an optical repumping process with a transition to an optically excited state $\Ket{e}$ followed by a spontaneous decay to the initial spin state. Because of the optical repumping, the motional energy of the atom in each cycle increases on average, which corresponds to heating. Therefore, in order to achieve cooling, the overall energy change of an atom after one cycle must be negative. In general, heating is caused by the momentum recoil from the optical repumping photons, i.e.\ recoil heating. In the microwave-based scheme however, shown in figure \figref{fig:ToyModel}{b}, an additional source of heating, called hereafter ``projection heating,'' is present. It is due to the difference between the trapping potentials of the two internal states, in this case a spatial shift.
This difference makes the projection of a vibrational state $\Ket{\downarrow,n}$ on an arbitrary state $\Ket{\uparrow,m}$ appreciable in contrast to the case of identical potentials where transitions beyond $m=n,n\pm1$ are negligible. \subsection{Raman vs.\ microwave sideband cooling\label{sub:3.1}} \begin{figure}[!tb] \begin{centering} \includegraphics[scale=0.8]{figure6} \par\end{centering} \centering{}\caption{(a) Raman sideband cooling scheme: a two-photon Raman transition between two identical trapping potentials reduces the vibrational state, $\ket{\uparrow,n}\rightarrow\ket{\downarrow,n-1} $. The wavefunction is shifted in momentum space by $\hbar\Delta k$. (b) Microwave sideband cooling scheme: a microwave transition between two shifted trapping potentials reduces the vibrational state. Note that we use here a blue sideband transition to reduce the vibrational state, instead of the typical usage of a red sideband transition \cite{Perrin1998}.\label{fig:ToyModel}} \end{figure} In the standard Raman-based sideband cooling schemes \cite{jessen,Kerman2000} the sideband is induced by a two-photon transition where the coupling is given by the matrix element \begin{eqnarray} \Omega_{n-1,n}^{\text{{Raman}}}\, & = & \,\langle\downarrow,n-1\vert\hat{T}_{\Delta k}\vert\uparrow,n\rangle\times\Omega_{0}^\text{Raman},\label{eq:lambdickeparameter} \end{eqnarray} where $\hat{T}_{\Delta k}\equiv\exp(i\hspace{1pt}\hat{x}\Delta k)$ is the momentum shift operator and $\Delta k\approx2k_{\text{{opt}}}$ is the wavevector difference between the two optical photons for counterpropagating beams. From here on, it is understood that all transitions occur on the same site, $r=r'$. 
In the microwave-based scheme we neglect the microwave photon recoil, and the sideband coupling corresponding to a lattice shift $\Delta x$ between nearest neighboring sites is then given by \begin{equation} \Omega_{n-1,n}\,=\,\langle\downarrow,n-1\vert\hat{T}_{\Delta x}\vert\uparrow,n\rangle\times\Omega_{0}.\label{eq:MWSBcoupling} \end{equation} Using the harmonic approximation, the Raman and microwave sideband couplings can be expanded to first order in the parameters $\eta_{k}=\hbar\hspace{1pt} k_{\text{{opt}}}/(2\hspace{1pt}p_{0})$ and $\eta_{x}=\Delta x/(2\hspace{1pt}x_{0})$, as shown in table (\ref{tab:Raman-vs-MW}), where $p_{0}=\sqrt{m_{\text{{Cs}}}\hbar\omega_{\text{{vib}}}/2}$ and $x_{0}=\sqrt{\hbar/(2m_{\text{{Cs}}}\omega_{\text{{vib}}})}$ are the momentum and spatial rms widths of the ground-state wavefunction, respectively \cite{Stenholm}. From table (\ref{tab:Raman-vs-MW}) we note a clear duality between momentum and spatial shifts in the two sideband cooling methods. The duality is better emphasized by using the general complex Lamb-Dicke parameter \begin{equation} \eta= \eta_{k}+i\eta_{x}=\hbar\hspace{1pt} k_{\text{{opt}}}/(2\hspace{1pt}p_{0})+i\Delta x/(2\hspace{1pt}x_{0})=k_{\text{{opt}}}\hspace{1pt}x_0+i\hspace{1pt}p_0\hspace{1pt}\Delta x/\hbar,\label{eq:genLDP} \end{equation} which accounts for both degrees of freedom via the momentum and spatial Lamb-Dicke parameters, $\eta_{k}$ and $\eta_{x}$, respectively. This generalized approach was first introduced for trapped ions to describe microwave-induced sidebands in the presence of spin-dependent forces \cite{wunderlich2001}. In the Raman-based cooling schemes with identical trapping potentials, the spatial Lamb-Dicke parameter $\eta_{x}$ vanishes and the heating comes from the recoil of the optical repumping photons, as depicted in figure \figref{fig:ToyModel}{a}.
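For our trap parameters, $\eta_x$ can be evaluated directly; the sketch below (standard values for $\hbar$ and the Cs mass assumed) reproduces the $\eta_x$ values quoted in the caption of figure (\ref{fig:fullSBspectrum}) and the projection-heating entry $\hbar\omega_{\text{vib}}\,\eta_x^2$ of table (\ref{tab:Raman-vs-MW}):

```python
import math

hbar = 1.054571817e-34   # J s
m_Cs = 2.20694650e-25    # kg (133Cs)
omega_vib = 2 * math.pi * 116e3   # axial trap frequency (rad/s)

# ground-state spatial rms width, ~18 nm
x0 = math.sqrt(hbar / (2 * m_Cs * omega_vib))

def eta_x(dx):
    """Spatial Lamb-Dicke parameter eta_x = dx / (2 x0)."""
    return dx / (2 * x0)

# Shifts used in figure 4 (nm); should give eta_x = {0, 1.2, 3.1, 4.9}
etas = [round(eta_x(dx * 1e-9), 1) for dx in (0, 43, 111, 176)]

# Projection heating identity: (1/2) m w^2 dx^2 == hbar w eta_x^2
dx = 43e-9
lhs = 0.5 * m_Cs * omega_vib**2 * dx**2
rhs = hbar * omega_vib * eta_x(dx)**2
```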
In the microwave-based scheme however, the generalized Lamb-Dicke parameter is complex and the heating is caused by a combination of recoil and projection heating. The energy gained by an atom from recoil heating after one cycle results from two recoils, one from absorption and one from spontaneous emission, and is therefore given by \begin{equation} \Delta E_{\text{{rec}}}=2E_{\text{{R}}},\label{eq:DErec} \end{equation} where $E_{\text{{R}}}=\hbar^{2}k_{\text{{opt}}}^{2}/2m_{\text{{Cs}\,}}$ is the optical photon recoil energy \cite{Itano}. This quantity does not depend on the details of the potentials but only on the atom's properties, and it expresses the overall three-dimensional recoil heating. \begin{table}[t] \caption{Raman vs.\ microwave sideband cooling in the harmonic approximation. The sideband couplings are first-order expansions in $\eta_{k}$ and $\eta_{x}$ in the Lamb-Dicke regime defined by $|\eta|=|\eta_{k}+i\eta_{x}|\ll1$, under the harmonic approximation.\label{tab:Raman-vs-MW}} \centering{}% \begin{tabular}{l>{\centering}p{3cm}c} \toprule & Raman & Microwave\tabularnewline \midrule \addlinespace Sideband coupling strength $\Omega_{n-1,n}/\Omega_{0}$ & $i2\eta_{k}\sqrt{n}$ & $-\eta_{x}\sqrt{n}$\tabularnewline \addlinespace Recoil heating per cycle & $2\hbar\omega_{\text{{vib}}}\:\eta_{k}^{2}$ & $2\hbar\omega_{\text{{vib}}}\,\eta_{k}^{2}$\tabularnewline \addlinespace Projection heating per cycle & --- & $\hbar\omega_{\text{{vib}}}\,\eta_{x}^{2}$\tabularnewline \addlinespace Overall heating per cycle & $2\hbar\omega_{\text{{vib}}}\:\eta_{k}^{2}$ & $\hbar\omega_{\text{{vib}}}\,(\eta_{x}^{2}+2\eta_{k}^{2})$\tabularnewline \bottomrule \end{tabular} \end{table} In the shifted potentials shown in figure \figref{fig:ToyModel}{b}, in addition to the recoil heating, the atom's motional energy increases on average by the projection heating energy $\Delta E_{\text{{proj}}}$. 
This is due to the non-vanishing projection of the atom's initial vibrational state $\Ket{\downarrow,n}$ onto the vibrational basis $\Ket{\uparrow,m}$ of the final spin state in the optical repumping process. In the harmonic approximation, with $\hat{H}_{\text{{ext}}}\Ket{n}=\hbar\omega_{\text{{vib}}}(n+1/2)\Ket{n}$, and after adiabatic elimination of the excited state $\Ket{e}$, the projection heating contribution for a relative shift $\Delta x$ can be derived as \begin{eqnarray} \Delta E_{\text{{proj}}} & = & \hbar\omega_{\text{{vib}}}\sum_{m}(m-n)\left|\Braket{m|\hat{T}_{\Delta x}|n}\right|^{2}\nonumber\\ & = & \sum_{m}\Braket{m|[\hat{H}_{\text{{ext}}}\hat{T}_{\Delta x}-\hat{T}_{\Delta x}\hat{H}_{\text{{ext}}}]|n}\Braket{n|\hat{T}_{\Delta x}^{\dagger}|m}\nonumber\\ & = & \Braket{n|(\hat{H}_{\text{{ext}}}(\Delta x)-\hat{H}_{\text{{ext}}})|n},\label{eq:DEproj} \end{eqnarray} where we have introduced the shifted Hamiltonian \begin{equation} \hat{H}_{\text{{ext}}}(\Delta x)=\hat{T}_{\Delta x}^{\dagger}\hat{H}_{\text{{ext}}}\hat{T}_{\Delta x}=\frac{\hat{p}^{2}}{2m_{\text{{Cs}}}}+\frac{1}{2}m_{\text{{Cs}}}\omega_{\text{{vib}}}^{2}(\hat{x}+\Delta x)^{2}. \end{equation} The result of equation (\ref{eq:DEproj}) applies in general for any potential profile, and in the harmonic approximation it yields a quantity which is independent of $n$, \begin{equation} \Delta E_{\text{{proj}}}=\frac{1}{2}m_{\text{{Cs}}}\omega_{\text{{vib}}}^{2}\Delta x^{2}, \end{equation} which is nothing but the potential energy difference expected from the semi-classical picture in figure (\ref{fig:classicalProj}). Using the same method, one can generally show that in the microwave sideband cooling scheme the total average heating energy gained by an atom in one cooling cycle is the sum of the recoil and projection contributions.
The total energy balance per cycle then becomes \begin{equation} \Delta E_{\text{{tot}}}=\Delta E_{\text{{proj}}}+\Delta E_{\text{{rec}}}-\hbar\omega_{\text{{vib}}}=\hbar\omega_{\text{{vib}}}(\eta_{x}^{2}+2\eta_{k}^{2}-1).\label{eq:EnBalance} \end{equation} Similarly to the usual definition of the Lamb-Dicke regime \cite{LambDickeLimit}, the condition for cooling $\Delta E_{\text{{tot}}}<0$ defines a generalized Lamb-Dicke regime as the range where $|\eta|<1$. \subsection{Quantitative model based on master equation} The general theory of sideband cooling is very well known and has been extensively studied in the literature \cite{wineland,Stenholm,Marzoli,Cirac92}. Here, we discuss a quantitative model based on the Lindblad master equation formalism. To provide a concrete example, we apply the model to the level scheme of our specific system, though the model can be adapted to other similar systems. \begin{figure}[!b] \begin{centering} \includegraphics[scale=0.7]{figure7} \par\end{centering} \caption{Microwave sideband cooling scheme in a realistic physical system using $^{133}\text{Cs}$ atoms. (i) Microwave radiation tuned to the first blue sideband induces a $\Ket{\uparrow,n}\rightarrow\Ket{\downarrow,n-1}$ transition decreasing the motional quantum number by one. (ii) The cooling cycle is closed by an optical repumping transition $\Ket{\downarrow}\rightarrow\Ket{F'=4}$, with rate $R_{\downarrow}$, and (iii) a spontaneous decay back to state $\Ket{\uparrow}$. In (iv) an additional pumping laser brings the atoms which have decayed to state $\Ket{a}$ back into the cooling cycle, with rate $R_{a}$. Atoms reaching the dark state $\Ket{\uparrow,n=0\,}$ are out of the cooling cycle unless off-resonantly excited or heated externally.\label{fig:cooling}} \end{figure} In the cooling cycle depicted in figure (\ref{fig:cooling}), microwave radiation resonant with the first blue sideband transfers atoms from states $\Ket{\uparrow,n}$ to states $\Ket{\downarrow,n-1}$. 
Concurrent with the microwave, a $\sigma^{+}$-polarized repumper laser beam couples state $\Ket{\downarrow}$ to state $\Ket{6^{2}P_{3/2},F'=4}\equiv\Ket{e}$, from where the atoms close the cooling cycle by spontaneously decaying back to state $\Ket{\uparrow}$. Due to the appreciable probability of atoms decaying from state $\Ket{e}$ to state $\Ket{F=4,m_{F}=3}\equiv\Ket{a}$, a second equally polarized pumping laser couples the two states and brings the atoms which have decayed to state $\Ket{a}$ back into the cooling cycle. In each cycle, an atom loses energy on average until it reaches the ``dark state'' $\Ket{\uparrow,n=0}$ where it is no longer affected by the microwave or the repumping lasers. Nevertheless, a small probability remains that the dark state is depopulated due to photon scattering from the lattice lasers or an off-resonant microwave carrier transition. To describe the cooling dynamics, we reduce the problem to an effective model of three spin states, each with its associated set of motional states. The Hilbert space considered is then spanned by the states $\Ket{s,n}$, where $n$ is the vibrational level and $\Ket{s}$ is one of the three internal states $\Ket{\uparrow}$, $\Ket{\downarrow}$ or $\Ket{a}$. The optically excited state $\Ket{e}$ is adiabatically eliminated due to its very short lifetime, $\tau=\SI{30}{\nano\second}$, compared to the motional time scale.
We use the Lindblad master equation formalism to write the time evolution of the effective model's density matrix as \cite{Cirac92} \begin{eqnarray} \frac{d\rho}{dt} & = & -\frac{i}{\hbar}[\hat{H}'_{0}+\hat{H}_{\text{MW}},\rho]+\mathcal{L}[\rho],\label{eq:MasterEq} \end{eqnarray} where $\hat{H}_{0}'$ is the extension of the Hamiltonian from equation (\ref{eq:H0}) to the states $\Ket{a,n}$ \begin{equation} \hat{H}_{0}'=\sum_{s=\{\uparrow,\downarrow,a\}}\sum_{n}\varepsilon_{s,n}(\Delta x)\Ket{s,n,r}\Bra{s,n,r},\label{eq:H0'} \end{equation} and $\mathcal{L}$ is the Lindblad superoperator with the jump operators \begin{equation} L_{n,r,s}^{n'\!,r'\!,s'}=\ket{s,n,r}\bra{s'\!,n'\!,r'}, \end{equation} and the effective decay rates $\gamma_{s,n,r}^{s'\!,n'\!,r'}$ for the transitions $\Ket{s'\!,n'\!,r'}\rightarrow\Ket{s,n,r}$, which are given by \begin{equation} \gamma_{s,n,r}^{s'\!,n'\!,r'}=\alpha_{s}R_{s'}\left\langle |M_{s,n,r}^{s'\!,n'\!,r'}|^{2}\right\rangle _{\vec{k}_{\text{sp}}}\!\!,\label{eq:gamma} \end{equation} with \begin{equation} M_{s,n,r}^{s'\!,n'\!,r'}=\langle n,r,s|\,\hat{T}_{\Delta k_{s,s'}}\,\hat{T}_{\Delta x}|s'\!,n'\!,r'\rangle\,. \end{equation} Here, $\alpha_{s}$ is the branching ratio for the spontaneous emission from state $\Ket{e}$ to state $\Ket{s}$, and $R_{a}$, $R_{\downarrow}$ are the pumping and repumping rates, respectively, as shown in figure~(\ref{fig:cooling}). In addition, we account for the possibility that an atom in state $\ket{\uparrow}$ scatters a photon from the lattice with the rate $R_{\uparrow}$. The matrix element $M_{s,n,r}^{s'\!,n'\!,r'}$ accounts for the relative spatial shift between the two involved vibrational states and for the transferred momentum of both optical photons, $\Delta k_{s,s'}=k_{\text{opt}}+\vec{k}_{\text{sp}}\cdot\vec{e}_{x}$, in the optical repumping process, with $\vec{k}_{\text{sp}}$ being the wavevector of the spontaneously emitted photon and $\vec{e}_{x}$ being the unit vector along the lattice direction.
Additionally, one has to perform an average over $\vec{k}_{\text{sp}}$, indicated by the angle brackets in equation (\ref{eq:gamma}). \begin{figure}[b] \begin{centering} \includegraphics[scale=0.4]{figure8} \par\end{centering} \centering{}\caption{Steady state population in the motional ground state $P_{\Ket{n=0}}$ as a function of $\eta_{x}$ and the bare microwave Rabi frequency $\Omega_{0}$.\label{fig:Cooling-simulation}} \end{figure} Given our experimental parameters, we compute the steady-state solution to equation (\ref{eq:MasterEq}) numerically, using the same approximations as in section \ref{sub:2.3}. In the computation, the microwave is resonant with the $\Ket{\uparrow,1}\to\Ket{\downarrow,0}$ transition. The rates $R_{a}$ and $R_{\downarrow}$ are set to the experimental values, which are chosen comparable to $\Omega_{0,1}$ and smaller than the vibrational level separations to avoid off-resonant transitions caused by power broadening of the vibrational levels of the $F=3$ ground state. Figure (\ref{fig:Cooling-simulation}) shows a contour plot of the ground state population $P_{\Ket{n=0}}\equiv\sum_{s,r}P_{\Ket{s,0,r}}$ in the steady state as a function of the bare microwave Rabi frequency and the relative shift distance expressed in terms of $\eta_{x}$. When projection heating dominates, $\eta_{x}\gtrsim\eta_{k}$, the energy balance in equation (\ref{eq:EnBalance}) only requires $\eta_{x}<1$ for cooling; for instance, figure~(\ref{fig:Cooling-simulation}) shows that a ground state population $P_{\Ket{0}}>80\%$ can be reached with $\eta_x<0.8$. For very small lattice shifts however, with $\eta_{x}\ll1$, the microwave coupling for the blue sideband transition becomes small compared to that of the carrier transition; the removal of an energy quantum $\hbar\omega_{\text{vib}}$ per cycle then becomes inefficient compared to the recoil heating, which is the dominant heating source at such small shifts.
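The competition between sideband cooling, projection/recoil heating and dark-state depopulation can be illustrated with a deliberately crude Monte-Carlo toy model (this is not the master-equation calculation above; the leak probability and all rates are illustrative assumptions):

```python
import random

def simulate(eta_x, eta_k=0.1, leak=0.02, cycles=600, atoms=1500, seed=1):
    """Toy Monte-Carlo of microwave sideband cooling cycles.

    Per cycle: atoms with n > 0 lose one quantum via the sideband, then
    regain one with probability eta_x**2 + 2*eta_k**2 (projection plus
    recoil heating, capped at one quantum); dark-state atoms (n = 0) are
    re-excited with probability `leak`. Returns the ground-state fraction.
    """
    rng = random.Random(seed)
    q = eta_x**2 + 2 * eta_k**2     # mean heating per cycle, in quanta
    ns = [5] * atoms                # start from a hot-ish distribution
    for _ in range(cycles):
        for i, n in enumerate(ns):
            if n > 0:
                n -= 1                   # sideband transition removes a quantum
                if rng.random() < q:
                    n += 1               # heating during optical repumping
            elif rng.random() < leak:
                n += 1                   # dark-state depopulation
            ns[i] = n
    return sum(1 for n in ns if n == 0) / atoms

# Cooling works well for eta_x well below 1 and degrades as eta_x -> 1
p_cold = simulate(eta_x=0.3)
p_hot = simulate(eta_x=0.95)
```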
Weak microwave sideband coupling, and hence inefficient microwave cooling, is also present at very low bare Rabi frequencies, namely where the sideband coupling drops below the depopulation rate of the dark state. For high Rabi frequencies of the same order of magnitude as the vibrational level spacing, the carrier coupling becomes comparable to the blue sideband coupling, and the microwave cooling action is again reduced. \subsection{Experimental results\label{sub:3.3}} Microwave cooling is obtained by applying microwave radiation on resonance with the first blue sideband, $\Ket{\uparrow,1}\to\Ket{\downarrow,0}$, for a duration $\tau_{\text{cooling}}$ at a chosen lattice shift $\Delta x$, concurrently with the two optical pumping lasers as shown in figure (\ref{fig:cooling}). In order to probe the final vibrational state distribution, we record a spectrum of the first order sideband transitions using a Gaussian microwave pulse satisfying the $\pi$-pulse condition for the first red sideband, see figure \figref{fig:ScanningCooling}{a}. In the low temperature limit, the height of the first blue sideband peak provides a good measure of the motional ground state population, $P_{\Ket{\uparrow,0}}$, and thus of the cooling efficiency. For instance, for atoms in the ground state one expects to detect no blue sideband. Figures \figref{fig:ScanningCooling}{a} and \figref{fig:ScanningCooling}{b} show two microwave spectra recorded before and after cooling, clearly indicating a reduction of temperature by the cooling process. In order to determine the optimum cooling parameters, the blue sideband height is remeasured while scanning several parameters, namely the optical pumping intensities, the cooling microwave power and frequency, the lattice shift distance and the duration of the cooling pulse. Figure \figref{fig:ScanningCooling}{c} shows a scan of the cooling microwave frequency.
As indicated by a nearly zero detected signal from the blue sideband, the optimum frequency for cooling evidently lies in the vicinity of the first blue sideband, while less pronounced cooling is also present at the position of the second blue sideband. Furthermore, the measurement reveals the absence of the blue sideband signal in a broad range extending to negative detunings, in addition to a weak dip at the position of the carrier. These two observations are correlated with a decrease in the atom survival shown in the same figure. This shows that, instead of being due to cooling, the absence of the blue sideband here is due to increased atom losses. In fact, for zero and negative microwave detunings, that is, if the microwave is resonant with the carrier $\Ket{\uparrow,n}\to\Ket{\downarrow,n}$ or red sideband $\Ket{\uparrow,n}\to\Ket{\downarrow,n+m}$ transitions, respectively, the energy of the atom increases on average in each cooling cycle. In the case of zero detuning the increase is mainly due to recoil and projection heating in the absence of microwave cooling, while for negative detunings microwave sideband heating occurs in addition to the recoil and projection heating. \begin{figure}[b] \begin{centering} \hfill{}\includegraphics[scale=0.525]{figure9}\hfill{} \par\end{centering} \centering{}\caption{Microwave spectroscopy performed (a) \label{fig:ScanningCoolingA} before cooling and (b) \label{fig:ScanningCoolingB}after $\SI{20}{\milli\second}$ of microwave cooling, with optimal experimental parameters (see Table \ref{tab:OptimCoolPar}).
(c) \label{fig:ScanningCoolingC}Blue sideband height vs.\;detuning of the microwave cooling frequency ({\small$\textcolor{red}\circ$}) and atom survival probability measured after the cooling ({\tiny$\textcolor{blue}\square$}), (data points in (a) and (b) are from \cite{Leonid}).\label{fig:ScanningCooling}} \end{figure} Once the optimum cooling parameters have been determined, we extract the achieved steady state temperature assuming a thermal Boltzmann distribution and neglecting the anharmonic spacing of the vibrational levels. The ratio of the blue and red sideband heights is proportional to the Boltzmann factor, which is related to the average motional quantum number $\langle n\rangle$ by \begin{equation} \frac{P_{\uparrow,1}}{P_{\uparrow,0}}=\exp(-\frac{\hbar\omega_{\text{vib}}}{k_{B}T})=\frac{\langle n\rangle}{\langle n\rangle+1}.\label{eq:CooledTemperature} \end{equation} Using the fitted sideband heights from figure \figref{fig:ScanningCooling}{b}, we calculate $\langle n\rangle=0.03\pm0.01$ and a ground state population of $P_{\uparrow,0}\simeq97\%$, corresponding to a temperature $T\simeq\SI{1.6}{\micro\kelvin}$. Table (\ref{tab:OptimCoolPar}) summarizes the optimum cooling parameters for our setup.
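Equation (\ref{eq:CooledTemperature}) can be inverted directly; the following lines (standard values of $\hbar$ and $k_B$ assumed) reproduce the quoted numbers:

```python
import math

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K
omega_vib = 2 * math.pi * 116e3   # axial trap frequency (rad/s)

n_avg = 0.03  # fitted mean vibrational number

boltz = n_avg / (n_avg + 1)                     # Boltzmann factor
T = -hbar * omega_vib / (kB * math.log(boltz))  # ~1.6 uK
P0 = 1 / (n_avg + 1)                            # thermal ground-state population, ~0.97
```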
\begin{table} \begin{raggedright} \caption{Optimal microwave cooling parameters for $\omega_{\text{{vib}}}=2\pi\times\SI{116}{\kilo\hertz}.$\label{tab:OptimCoolPar}} \par\end{raggedright} \centering{}% \begin{tabular}{ccccccc} \toprule $\Omega_{0}/2\pi$ & $\eta_{x}$ & $\eta_{k}$ & $R_{\downarrow}$ & $R_{a}$ & $R_\uparrow$ & $\tau_{\text{cooling}}$\tabularnewline \midrule $\SI{16}{\kilo\hertz}$ & $0.3$ & $0.1$ & $\SI{35}{\milli\second^{-1}}$ & $\SI{35}{\milli\second^{-1}}$ & $\SI{15}{\second^{-1}}$ & $\SI{20}{\milli\second}$\tabularnewline \bottomrule \end{tabular} \end{table} \section{Motional state control} \subsection{Motional state detection\label{sub:4.1}} We have developed a vibrational state detection scheme which allows us to determine the population distribution of any given motional state. It relies on removing all atoms above a selected vibrational state $n$ from the trap and counting the remaining atoms, as illustrated in figure \figref{fig:Filtering-scheme}{a}. The distribution is then reconstructed from the differences of subsequent measurements. Atoms are first transferred to state $\Ket{\downarrow}$ by means of an adiabatic passage microwave pulse that is resonant with the carrier transition in unshifted lattices, which preserves the vibrational-state populations. A microwave pulse resonant with the red sideband $\Ket{\downarrow,n}\rightarrow\Ket{\uparrow,0}$ transfers atoms from states $\Ket{\downarrow,m}$ with $m\geqslant n$ to states $|\uparrow,m-n\rangle$. The transferred atoms are eventually pushed out of the trap (see section \ref{sub:sec2.2}). However, since the sideband transition rates depend on the initial vibrational state $\Ket{\downarrow,n}$ (due to, e.g., trap anharmonicity and Franck-Condon factor differences), the microwave pulse does not achieve full transfer efficiency for all transitions.
To overcome this limitation, the procedure of microwave pulse plus push-out is repeated several times to deplete all vibrational states $\Ket{\downarrow,m}$ with $m\geqslant n$. If $f$ is the population transfer efficiency for a given $n$, then after $N$ repetitions the effective population transfer efficiency becomes $f'=1-(1-f)^{N}$. For instance, an initial efficiency of $f=70\%$ is increased to $f'\sim97\%$ with $N=3$ repetitions. Measuring the fraction of remaining atoms as a function of the microwave frequency, we obtain a sequence of plateaus at the successive sideband frequencies, as shown in figure \figref{fig:Filtering-scheme}{b}. The plateau corresponding to the $n^{th}$ sideband indicates the integrated population of states $m<n$, that is, the cumulative distribution function $F_{n}=\sum_{m=0}^{n-1}p_{m}$, from which the individual populations of the vibrational states are then derived. \begin{figure}[!h] \begin{centering} \includegraphics[scale=0.8]{figure10} \par\end{centering} \centering{}\caption{(a) \label{fig:Filtering-schemeA}Method for measuring vibrational state population distributions: (i) an initial microwave pulse resonant with the $n^{th}$ red sideband transfers all atoms in states $\Ket{\downarrow,m}$, with $m\geqslant n$, to state $\Ket{\uparrow}$; (ii) the transferred atoms are pushed out of the lattice; (i) and (ii) are repeated $N$ times to overcome the limited pulse efficiency. (b) \label{fig:Filtering-schemeB}Surviving fraction of atoms for a thermal state, with the dotted lines indicating a thermal distribution at $T\approx\SI{11.6}{\micro\kelvin}$; this temperature is compatible with the independently measured one of about $\SI{10}{\micro\kelvin}$. For each sideband $n$, after $N=3$ repetitions of the microwave pulse plus push-out, only the atoms in states $m<n$ survive. The horizontal dashed line indicates the maximum survival probability, limited by off-resonant transitions during the repeated pulses.
For the sake of clarity, error bars have been displayed for the carrier transition only.\label{fig:Filtering-scheme}} \end{figure} \subsection{Motional state engineering} \begin{figure} \begin{centering} \includegraphics[width=11cm]{figure11} \par\end{centering} \caption{Motional state preparation and analysis. Shown are the populations of the vibrational states $n=0,\ldots,3$ after (a) creating superposition states of $\Ket{n=0}$ and $\Ket{n=2}$ with different weights (from top to bottom, area of the first MW pulse 0.30, 0.40, 0.55, 0.70 in units of $\pi$) and (b) coherent vibrational states for different amplitudes $\alpha$, where the left bars (brighter red) indicate the theoretically expected populations. The analysis technique used here, see figure~\figref{fig:Filtering-scheme}{a}, can only measure vibrational state populations but not coherences.\label{fig:Motional-state-preparation}} \end{figure} With 97\% of the atoms cooled to state $\Ket{\uparrow,n=0}$ (see section \ref{sub:3.3}), controlled preparation of different motional states is possible using a combination of microwave pulses and selected lattice shifts. The simplest state that can be prepared is the Fock state $\Ket{\downarrow,m}$. It requires addressing the $m$-th red sideband transition at the lattice shift $\Delta x$ chosen to maximize the coupling $\Ket{\uparrow,0}\leftrightarrow\Ket{\downarrow,m}$. The fidelity for preparing this state is limited by the cooling efficiency, the population transfer efficiency and the selectivity of the microwave pulse. Using an adiabatic passage pulse \cite{Khudaverdyan}, a state preparation fidelity close to 98\% has been achieved for states up to $m=6$. A superposition of two Fock states is created by a two-pulse sequence as shown in the inset of figure \figref{fig:Motional-state-preparation}{b}.
An initial microwave pulse resonant with the transition $\Ket{\uparrow,0}\to\Ket{\downarrow,2}$, performed at the lattice shift which maximizes the coupling for the transition, generates the state \begin{equation} \Ket{\psi}=c_{\uparrow,0}\Ket{\uparrow,0}+c_{\downarrow,2}\Ket{\downarrow,2} \end{equation} with variable coefficients $c_{\uparrow,0}$ and $c_{\downarrow,2}$ determined by the pulse duration. The lattice shift $\Delta x$ is then changed to the distance at which the Franck-Condon factor for the transition $\Ket{\uparrow,2}\leftrightarrow\Ket{\downarrow,2}$ is zero. The shifting is precisely timed so that the probability of changing the vibrational state by the acceleration of the lattices is zero \cite{Karski}. At the new lattice shift, a second microwave pulse resonant with the carrier transition maps the population $|c_{\uparrow,0}|^{2}$ onto $\Ket{\downarrow,0}$. One would expect a coherent superposition between $\Ket{\downarrow,0}$ and $\Ket{\downarrow,2}$ as a result. However, because of the appreciable duration of the sequence, $\SI{320}{\micro\second}$ (two sideband-resolved pulses plus the lattice-shift operation), compared to the total spin coherence time of $\sim\SI{250}{\micro\second}$ in our setup, the coherence between the two vibrational states is partially lost during the preparation procedure. This is a technical limitation which can be overcome by improving the coherence time, for instance, in our setup, by cooling the transverse motion of the atoms to the three-dimensional ground state \cite{Kaufman2012}. This scheme represents a relevant step towards the use of the vibrational state as the physical carrier for a qubit and/or the preparation of arbitrary motional superposition states when working with neutral atoms.
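Assuming ideal two-level Rabi dynamics for the first pulse (an idealization; decoherence and pulse imperfections are neglected), the superposition weights as a function of the pulse area can be sketched as:

```python
# Superposition weights vs the area A of the first microwave pulse,
# assuming ideal two-level Rabi dynamics (transfer probability
# sin^2(A/2)); decoherence and pulse imperfections are neglected.
import math

def weights(area_pi):
    """area_pi: pulse area in units of pi; returns (|c_{up,0}|^2, |c_{down,2}|^2)."""
    p2 = math.sin(area_pi * math.pi / 2.0) ** 2
    return 1.0 - p2, p2

for a in (0.30, 0.40, 0.55, 0.70):  # pulse areas used in figure 11(a)
    c0sq, c2sq = weights(a)
    print(f"A = {a:.2f} pi: |c0|^2 = {c0sq:.2f}, |c2|^2 = {c2sq:.2f}")
```

A pulse area of $0.5\,\pi$ gives a balanced superposition; a full $\pi$ pulse transfers the entire population.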
In the same vein of engineering motional states, we project the state $\Ket{\downarrow,n=0}$ onto a shifted potential to create the state \begin{equation}\label{eq:coherentstate} \Ket{\alpha}=\hat{T}_{\Delta x}\Ket{\uparrow,n=0}=e^{\alpha a^{\dagger}-\alpha^{*}a}\Ket{\uparrow,n=0} \end{equation} with $\alpha=\eta_{x}$. We realize this by applying an optical repumping pulse while the lattice is displaced by $\Delta x$. This corresponds to exciting the transition $\Ket{\downarrow}\to\Ket{e}$ followed by a spontaneous decay to state $\Ket{\uparrow}$, both of which occur on a time scale much shorter than the oscillation period of the atom in the trap. We also neglect the recoil transferred by the optical repumping photons, which is equivalent to assuming $\eta_{k}=0$. Because the decay process additionally involves transitions to states $\ket{a}$ and $\ket{\downarrow}$, the resulting state is a statistical mixture of the three internal states; our analysis scheme, however, measures the vibrational populations of the state projected onto $\ket{\uparrow}$, given in equation (\ref{eq:coherentstate}). The statistical mixture can be avoided by replacing the optical repumping pulse and spontaneous decay by a fast two-photon Raman transition. Measuring the population distribution of the created state reveals a clear agreement with the theoretical expectation, as shown in figure \figref{fig:Motional-state-preparation}{b}. With the state detection scheme presented in section \ref{sub:4.1}, so far we can only measure populations, while coherences could be accessed in the future through interferometric schemes. \section{Conclusions and outlook} We have shown that microwave sideband transitions in spin-dependent optical lattices are a favorable alternative to Raman transitions for sideband cooling and motional state engineering.
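For the coherent states discussed above, the theoretically expected vibrational populations are Poissonian, $P(n)=e^{-|\alpha|^{2}}|\alpha|^{2n}/n!$; a minimal sketch (the value of $\alpha$ below is illustrative, not a fitted value):

```python
# Poisson vibrational populations P(n) = exp(-|a|^2) |a|^(2n) / n! of a
# coherent state |alpha>; the alpha value below is an illustrative
# assumption, not a fitted value.
import math

def coherent_populations(alpha, nmax=3):
    a2 = abs(alpha) ** 2
    return [math.exp(-a2) * a2 ** n / math.factorial(n) for n in range(nmax + 1)]

print(coherent_populations(0.3))  # dominated by n = 0 for small alpha
```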
The effective Lamb-Dicke parameter can be continuously adjusted from zero to above one, giving the possibility to directly address higher-order sidebands with coupling frequencies comparable to the bare Rabi frequency. We investigated the performance of microwave sideband cooling in the generalized Lamb-Dicke regime and compared it to Raman sideband cooling; our analysis can be easily extended to the three-dimensional case~\cite{weiss2}. Quantum engineering of motional states represents one of the most attractive uses of microwave-induced sidebands. We demonstrated here a first step towards the creation of superpositions of Fock states and the preparation of coherent states. In the future, the interest resides in proving the coherence properties of these states through interferometric schemes, for instance, by measuring the accumulated phase between two distinct Fock states, or through quantum beat experiments~\cite{Goto}. Along the same lines, spin-dependent shift operations can be employed to transfer a state-dependent momentum kick, allowing the realization of a superposition of opposite coherent states, producing Schr\"odinger-cat-like states as has been realized with ions~\cite{Monroe:1996}. Microwave control of atomic motion in a spin-dependent optical lattice can be of interest for storing and processing quantum information via the motional states \cite{Haffner:2008}. For instance, the strength of coherent collisions for atoms close to the motional ground state exhibits a marked dependence on the relative motional state, which can be exploited, in analogy to \cite{Jaksch99}, to realize maximally entangled states in the motional degree of freedom. In addition, microwave sideband transitions open the way for quantum transport experiments, where continuous tunneling between adjacent lattice sites occurs when $\Delta x$ is close to $d/2$, i.e., close to half the lattice spacing \cite{Mischuck:2010}.
Finally, it is worth noting that the microwave cooling technique studied here does not strictly require the use of the ``magic'' wavelength for the lattice potential, but can still be operated with the same efficiency at other wavelengths, e.g., at $\lambda_{\text{L}}=\SI{1064}{\nano\meter}$ as we have tested. In fact, the optimal cooling efficiency, which occurs at around $\eta_x\sim 0.3$ (see figure~\ref{fig:Cooling-simulation}), can be reached by adjusting the polarization angle $\theta$. \vspace*{-2mm}\ack{}{We thank Andreas Steffen and Tobias Kampschulte for fruitful discussions. Comments by an anonymous referee helped improve the manuscript. We acknowledge financial support by the DFG Research Unit (FOR 635), the NRW-Nachwuchsforschergruppe ``Quantenkontrolle auf der Nanoskala'', the AQUTE project, and the Studienstiftung des deutschen Volkes. AA also acknowledges support by the Alexander von Humboldt Foundation.} \vspace*{-2mm}\section*{References}{} \bibliographystyle{biblio}
\section{INTRODUCTION} Most Galactic SNRs were identified by radio observations (Green 2009). However, one should notice that there are various selection effects in the Galactic surveys for SNRs in the radio band. For example, Schaudel (2003) suggested that Galactic SNRs older than $\sim5\times10^{4}$~yrs are difficult to detect with the current radio telescopes due to their low surface brightness. Therefore, the currently known SNR population of our Galaxy is certainly biased. A total of 274 SNRs have been uncovered in the Milky Way so far (Green 2009). This known sample is far smaller than the expected population. Assuming a typical evolution timescale of SNRs before they merge with the ISM ($\sim10^{5}$~yrs) and an event rate of 2~SNe/century in the Milky Way (Dragicevich et al. 1999), $\sim2000$~SNRs are expected in our Galaxy. Such a great deficit again points to the selection effects in the past surveys. Besides radio observations, the ROSAT all-sky survey (RASS) provides us with an alternative window for searching for SNRs in our Galaxy. Schaudel (2003) suggests that the energy band (0.1$-$2.4~keV) of the Position Sensitive Proportional Counter (PSPC) on board ROSAT allows the detection of SNRs for an interstellar absorption $\leq3\times10^{22}$~cm$^{-2}$. A simulation of the theoretical distribution of SNe and their remnants in the Galactic plane predicted that somewhat more than 200 SNRs should be detectable in the RASS (Busser 1998). In examining the RASS database, Schaudel (2003) reported $\sim100$ unidentified extended X-ray sources as possible SNR candidates. Although the RASS database allows one to search for SNR candidates, the short exposure of a few hundred seconds and the poor spatial resolution in survey mode ($\sim96''$) prevent any firm identification of their nature.
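The expected-population estimate above is simple arithmetic; as a sketch:

```python
# Back-of-the-envelope estimate quoted above: ~2 SNe per century times a
# typical remnant lifetime of ~1e5 yr before merging with the ISM.
sn_rate = 2 / 100.0     # supernovae per year
lifetime = 1e5          # remnant lifetime in years
expected = sn_rate * lifetime
print(expected)         # 2000.0 expected SNRs, versus 274 currently known
```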
In addition, a possible hard X-ray ($>2$ keV) component could arise from the interactions of the reflected shocks with the dense ambient medium or alternatively from the synchrotron emission radiated by the relativistic leptons. However, the limited energy bandwidth of ROSAT does not allow one to determine whether an additional component is present in the hard X-ray band. In view of their much improved spatial resolution and enlarged effective area, the Chandra X-ray Observatory and the XMM-Newton Observatory provide the most suitable instruments to clarify the emission nature of these candidates. This motivates an extensive identification campaign of all these unidentified extended RASS objects with state-of-the-art X-ray telescopes. For the initial stage of the campaign, we chose the brightest SNR candidate in the list provided by Schaudel (2003), G308.3-1.4, as the pilot target. G308.3-1.4\ was first listed as a possible SNR candidate in the MOST SNR catalogue based on its radio morphology and spectral index (Whiteoak 1992). In the RASS image, G308.3-1.4\ shows centrally-peaked X-ray emission with a diameter of $\sim10'$ which coincides with a radio arc-like feature (see Figure~\ref{rass}). The radio contours are obtained from the 843~MHz Sydney University Molonglo Sky Survey (SUMSS) data (Bock et al. 1999). However, owing to the poor spatial resolution of the RASS data, it is unclear whether the X-rays originate from the radio shell structure. Also, the limited photon statistics of G308.3-1.4\ in the RASS data do not allow any further probe of its emission properties. In this paper, we report the first detailed investigation of the nature of G308.3-1.4\ with a Chandra observation. \begin{figure*}[b] \centerline{\psfig{figure=g308_rass.eps,width=16cm,clip=,angle=0}} \caption[]{RASS image of G308.3-1.4\ in the energy band of $0.1-2.4$~keV with radio contour lines at the levels of $5-30$~mJy/beam from the SUMSS data overlaid.
Top is north and left is east.} \label{rass} \end{figure*} \section{OBSERVATIONS} G308.3-1.4\ was observed with Chandra in 2010 June $26-27$ with the Advanced CCD Imaging Spectrometer (ACIS) using a frame time of 3.2~s. Since the source extent cannot be accurately determined from the RASS data, we utilized the whole ACIS-I CCD array with the intention to cover the whole feature, with the aimpoint at the nominal center of the extended RASS feature (i.e. RA=13$^{\rm h}$40$^{\rm m}$56.3$^{\rm s}$ Dec=-63$^{\circ}$43$^{'}$32.6$^{"}$ (J2000)). For data reduction as well as analysis, we utilized the {\bf C}handra {\bf I}nteractive {\bf A}nalysis of {\bf O}bservations software (CIAO~4.3) throughout this study. We first reprocessed the data with CALDB (ver.~4.4.3) to correct for an error in the time-dependent gain correction (TGAIN) during the observation by using the script \emph{chandra\_repro}. To facilitate source detection with high positional accuracy, we applied subpixel event repositioning in reprocessing the data. The effective exposure of this observation is $\sim14.9$~ks. We restricted all the analysis to the energy band of $0.5-8$~keV. We have also conducted a series of Target-of-Opportunity (ToO) observations of this source with the {\bf U}ltra-{\bf V}iolet and {\bf O}ptical {\bf T}elescope (UVOT) onboard the Swift satellite. The pre-processed calibrated science grade (level II) data were used. This pre-processing includes bad pixel correction and the reduction of the modulo-8 fixed-pattern noise which results from the fact that raw image pixels are smaller than the original detector pixels by up to a factor of 8. In addition, the images have been flat-field corrected. The data were then analyzed by using the HEASoft Swift package.
\section{SPATIAL ANALYSIS} \begin{figure*}[b] \centerline{\psfig{figure=g308_rgb_revise.eps,width=16cm,clip=,angle=0}} \caption[]{$10^{'}\times10^{'}$ Chandra ACIS-I X-ray colour image of G308.3-1.4\ (red: $0.5-1$~keV, green: $1-2$~keV, blue: $2-8$~keV). The binning factor of this image is $2^{"}$. Adaptive smoothing has been applied to achieve a minimum signal-to-noise ratio of 3. The same radio contour lines as in Fig.~\ref{rass} are overlaid for comparison. The geometrical center inferred from the X-ray morphology is illustrated by the cross. A bright source which is located closest to the geometrical center (i.e. source \#7 in Fig.~\ref{wavelet} and Tab.~\ref{x_src}) is highlighted by the dashed circle. Top is north and left is east.} \label{rgb} \end{figure*} An X-ray colour image of the ACIS-I data of G308.3-1.4\ is displayed in Figure~\ref{rgb}. With the superior spatial resolution of Chandra, an incomplete X-ray shell structure has been revealed. We have overlaid the same SUMSS radio contours (cf. Fig.~\ref{rass}) on Figure~\ref{rgb}. This comparison demonstrates a clear correlation between the X-ray emission and the radio feature; hence their connection is unambiguously confirmed, and it suggests that G308.3-1.4\ belongs to the category of mixed-morphology SNRs. The incomplete X-ray shell-like morphology conforms with a circle with an angular radius of $\theta\sim5^{'}$ approximately centered at RA=13$^{\rm h}$41$^{\rm m}$32$^{\rm s}$ Dec=-63$^{\circ}$42$^{'}$44$^{"}$ (J2000). Figure~\ref{rgb} also shows the hardness distribution of the X-rays from G308.3-1.4. Several components can be identified in this colour image. Apart from the bright outer rim emission, a fainter but harder feature is noticed at its inner edge, which we refer to as the ``inner rim'' in this paper. Besides these two shell-like components, softer and fainter diffuse emission is found in the region close to the center.
\begin{figure*}[b] \centerline{\psfig{figure=g308_src.eps,width=16cm,clip=,angle=0}} \caption[]{17 X-ray sources detected in the whole ACIS-I image of G308.3-1.4\ by the wavelet detection algorithm. The properties of these sources are summarized in Tab.~\ref{x_src}. The geometrical center inferred from the X-ray morphology is illustrated by the cross. } \label{wavelet} \end{figure*} This observation also enables a search for X-ray point sources in the vicinity of G308.3-1.4. By means of a wavelet source detection algorithm, 17 sources have been detected in the field-of-view. These sources are labeled in Figure~\ref{wavelet}. The source positions, positional errors, net count rates, signal-to-noise ratios as well as the estimates of the spatial extent of all these sources are given in Table~\ref{x_src}. The brightest object is the source marked as \#7 in Figure~\ref{wavelet}, which is also the one located closest to the geometrical center. Cross-correlating these sources with the SIMBAD and NED databases did not result in any identification within a search radius of 5 arcsec around each source. To investigate whether these sources are promising isolated neutron star candidates, we proceeded to search for their possible optical counterparts by utilizing the USNO-B1.0 catalogue (Monet et al. 2003). The X-ray-to-optical flux ratio, $f_{x}/f_{\rm opt}$, provides a rudimentary parameter for discriminating the source nature. For an isolated neutron star, $f_{x}/f_{\rm opt}$ is typically larger than 1000 (cf. Haberl 2007). On the other hand, $f_{x}/f_{\rm opt}$ for field stars and active galactic nuclei is much lower, typically $<0.3$ and $<50$ respectively (Maccacaro et al. 1988; Stocke et al. 1991). We have systematically searched for optical counterparts within a $10\sigma\times10\sigma$ error box centered at each X-ray position. Among all 17 sources, only the object labeled as \#5 has no optical counterpart found in its error box.
For the 16 X-ray objects with optical counterparts identified, we have calculated the X-ray-to-optical flux ratios with the $B-$band magnitudes from the USNO-B1.0 catalogue. The X-ray fluxes $f_{x}$ of these objects are estimated from the net count rates in Table~\ref{x_src} with the aid of {\it PIMMS} (ver. 4.3) by assuming a power-law spectrum with a photon index of 2. The flux ratio of source \#7, $f_{x}/f_{B}\sim1.5$, is found to be the highest among all these 16 sources. For source \#5, although no optical counterpart has been identified in this search, the limiting magnitude of the USNO-B1.0 catalogue (i.e. $\sim21$) results in a lower limit of $f_{x}/f_{B}\gtrsim3$, which is not tight enough to constrain its source nature. Furthermore, its X-ray spectral properties do not conform with those of a neutron star (see \S5). Therefore, we conclude that we do not find any concrete evidence for an isolated neutron star in this investigation. Besides radio and X-ray data, we have also explored the infrared data obtained by the {\bf W}ide-field {\bf I}nfrared {\bf S}urvey {\bf E}xplorer (WISE; Wright et al. 2010). In Figure~\ref{wise_sumss}, we compared the 22~$\mu$m image with the same set of radio contours used in Figure~\ref{rass} and Figure~\ref{rgb}. An extended incomplete elliptical feature has been identified in this mid-infrared image. This feature has semi-major/semi-minor axes of $\sim10'/8'$ with the major axis oriented $\sim20^{\circ}$ from north. On the southwestern edge of this feature, infrared rim emission is found to be well-correlated with the radio shell and the outer-rim X-ray emission of G308.3-1.4\ (cf. Figure~\ref{rgb}). This strongly suggests that these rim structures seen in different wavelengths are intrinsically related. Apart from this rim, there appears to be a cavity in the southwestern quarter of this infrared feature (Fig.~\ref{wise_sumss}).
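The $f_{x}/f_{\rm opt}$ criterion used for the counterpart search can be summarized as a rough discriminator; the thresholds are the indicative values quoted in the text, and a real classification requires additional information:

```python
# Rough source-type discriminator based on the X-ray-to-optical flux
# ratio thresholds quoted in the text (Maccacaro et al. 1988; Stocke et
# al. 1991; Haberl 2007). Indicative only: the ranges overlap, and a real
# classification needs spectral and variability information as well.
def candidate_types(fx_over_fopt):
    types = []
    if fx_over_fopt < 0.3:
        types.append("field star")
    if fx_over_fopt < 50:
        types.append("AGN")
    if fx_over_fopt > 1000:
        types.append("isolated neutron star")
    return types

# Source #7 (f_x/f_B ~ 1.5) lies far below the isolated-neutron-star regime
print(candidate_types(1.5))
```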
This is the region where the X-ray emission is most prominent (see Fig.~\ref{rgb} and Fig.~\ref{wavelet}). We also notice that the incomplete radio shell apparently complements the periphery of the incomplete elliptical infrared structure. Also, there is no X-ray/radio emission toward the northeastern region where the infrared feature is bright. Such a morphology resembles those of SNRs which interact with a surrounding cloud complex (e.g. Sasaki et al. 2004). However, as there are no reliable distance estimates for the X-ray/radio and the infrared features, the physical association between the large infrared structure and the X-ray/radio shell remains uncertain. Further investigations of this possible shock-cloud interaction, such as maser observations (see Wardle \& Yusef-Zadeh 2002), are encouraged. \begin{figure*}[t] \centerline{\psfig{figure=wise_22um_summs.ps,width=18cm,clip=,angle=0}} \caption[]{The 22~$\mu$m image of the $23'\times23'$ field around G308.3-1.4\ as observed by WISE. The same radio contour lines as in Fig.~\ref{rass} and Fig.~\ref{rgb} are overlaid for comparison. The geometrical center inferred from the X-ray morphology is illustrated by the cross. } \label{wise_sumss} \end{figure*} \section{TEMPORAL ANALYSIS} We have also examined the temporal behavior of the two relatively bright sources with reasonable photon statistics, i.e. sources \#5 and \#7. With the aid of the tool \emph{axbary}, their arrival times were first barycentric-corrected by using the corresponding X-ray positions reported in Table~\ref{x_src}. With the aid of the tool \emph{glvary}, we then searched for variability by using the Gregory-Loredo algorithm (Gregory \& Loredo 1992). For source \#5, a zero variability index has been assigned and the probability of a variable signal is found to be $\sim11\%$.\footnote{http://cxc.harvard.edu/ciao/threads/variable/index.html\#output} Hence, no evidence of variability is found for this source.
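As a sketch of the kind of statistical cross-check used alongside the Gregory-Loredo algorithm, a $\chi^{2}$ test of a binned light curve against a constant rate can be written as follows (the counts below are synthetic illustration, not the observed data):

```python
# A chi-square constancy test of a binned light curve, with the null
# distribution estimated by Poisson simulation. The counts are synthetic
# illustration, not the observed data.
import numpy as np

rng = np.random.default_rng(0)

def chi2_stat(counts):
    mu = counts.mean()
    return ((counts - mu) ** 2 / mu).sum()

def constancy_pvalue(counts, nsim=5000):
    """Fraction of simulated constant-rate light curves at least as
    'variable' (in chi-square) as the observed one."""
    obs = chi2_stat(counts)
    mu, nbin = counts.mean(), counts.size
    sims = np.array([chi2_stat(rng.poisson(mu, nbin)) for _ in range(nsim)])
    return (sims >= obs).mean()

steady = rng.poisson(13, 15)                   # constant-rate light curve
flaring = np.append(rng.poisson(13, 14), 40)   # one burst-like bin
p_steady, p_flaring = constancy_pvalue(steady), constancy_pvalue(flaring)
print(p_steady, p_flaring)
```

A small p-value flags variability; the single burst-like bin drives the statistic far into the tail of the simulated null distribution.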
On the other hand, a variability index of 8 is assigned to source \#7 and the probability of a variable signal is at the level of $>99.99\%$, which strongly suggests the presence of temporal variability. We have cross-checked these results by adopting other statistical tests (e.g. a $\chi^{2}$ test), which led to the same conclusion. Figure~\ref{src7_var} shows the light curve of source \#7 with a bin time of 1000 s. Burst-like activity is noticed at the beginning of this observation (i.e. the $2^{\rm nd}$ bin of Fig.~\ref{src7_var}). However, the limited photon statistics do not allow a firm conclusion to be drawn. Apart from this, the light curve appears to be rather steady. After excluding the time segment that possibly contains the flare, we proceeded to search for possible periodic signals from source \#7. By fitting the light curve with a sinusoid, we have identified an interesting periodicity candidate at $P=1.4\pm0.2$~hrs. The epoch-folded light curve is shown in Figure~\ref{orbit_fold}. A $\chi^{2}$ test indicates that it differs from a uniform distribution at the $99.97\%$ confidence level. While the periodic variation appears promising, we stress that the peak-to-peak modulation is only at a marginal level of $\sim4\sigma$ due to the large error bars. A deeper observation is required to confirm this periodicity candidate. On the other hand, we do not find any promising periodic signal from source \#5 in this observation. \begin{figure*}[t] \centerline{\psfig{figure=src7_lc_1000s.eps,width=16cm,clip=,angle=0}} \caption[]{The light curve of X-ray source \#7 detected by Chandra ACIS-I, which shows variability during the observation. The error bars represent $1\sigma$ uncertainties.} \label{src7_var} \end{figure*} \begin{figure*}[t] \centerline{\psfig{figure=src7_bc_1.4h_phasefold.ps,width=14cm,clip=,angle=90}} \caption[]{X-ray counts of source \#7 versus phase for a periodicity candidate of 1.4~hrs.
Two periodic cycles are shown for clarity. The error bars represent $1\sigma$ uncertainties.} \label{orbit_fold} \end{figure*} \begin{table*}[b] \caption{Positions of the X-ray sources detected in the field of ACIS-I as labeled in Fig.~\ref{wavelet}. The corresponding $1\sigma$ positional errors, net count rates, signal-to-noise ratios as well as the estimates of source extent are tabulated.} \label{x_src} \begin{center} \begin{tabular}{lccccccc} \hline \hline Source & RA (J2000) & Dec (J2000) & $\delta$RA & $\delta$Dec & Net count rate & S/N$^{a}$ & PSF RATIO$^{b}$ \\ \hline & h:m:s & d:m:s & arcsec & arcsec & $10^{-3}$~cts~s$^{-1}$ & $\sigma_{\rm G}$ & \\ \hline 1 & 13:41:15.00 & -63:50:41.65 & 2.97 & 0.82 & 0.85$\pm$0.31 & 3.13 & 2.26 \\ 2 & 13:40:01.41 & -63:49:04.98 & 2.81 & 1.25 & 1.45$\pm$0.40 & 4.38 & 2.64 \\ 3 & 13:41:15.06 & -63:48:37.41 & 2.74 & 0.82 & 0.92$\pm$0.34 & 3.06 & 4.19 \\ 4 & 13:40:10.00 & -63:48:10.30 & 2.05 & 1.45 & 1.29$\pm$0.40 & 3.76 & 3.74 \\ 5 & 13:41:22.00 & -63:47:05.85 & 0.25 & 0.19 & 6.47$\pm$0.78 & 13.31 & 1.64 \\ 6 & 13:40:27.00 & -63:46:31.78 & 2.78 & 1.03 & 1.15$\pm$0.36 & 3.77 & 7.13 \\ 7 & 13:41:24.00 & -63:43:52.40 & 0.37 & 0.19 & 12.96$\pm$1.03 & 26.20 & 5.25 \\ 8 & 13:40:06.00 & -63:43:20.13 & 2.15 & 1.32 & 0.71$\pm$0.28 & 2.87 & 3.19 \\ 9 & 13:40:24.57 & -63:42:40.36 & 2.03 & 1.45 & 1.32$\pm$0.38 & 4.25 & 10.30 \\ 10 & 13:39:56.00 & -63:41:04.22 & 1.77 & 0.77 & 1.46$\pm$0.42 & 4.24 & 1.96 \\ 11 & 13:41:14.29 & -63:39:46.21 & 2.01 & 0.46 & 1.61$\pm$0.38 & 6.00 & 4.53 \\ 12 & 13:41:55.00 & -63:39:21.22 & 1.18 & 0.68 & 2.14$\pm$0.45 & 6.74 & 1.46 \\ 13 & 13:40:38.82 & -63:39:04.23 & 1.13 & 0.68 & 1.95$\pm$0.47 & 5.32 & 3.04 \\ 14 & 13:41:42.44 & -63:38:36.12 & 3.08 & 1.01 & 1.19$\pm$0.37 & 3.71 & 3.46 \\ 15 & 13:41:19.00 & -63:35:55.49 & 2.87 & 1.11 & 1.06$\pm$0.36 & 3.40 & 2.41 \\ 16 & 13:41:11.00 & -63:45:39.36 & 1.23 & 0.56 & 2.45$\pm$0.73 & 3.63 & 12.21 \\ 17 & 13:41:08.98 & -63:42:39.32 & 0.96 & 0.55 & 3.19$\pm$0.89 & 
3.84 & 11.03 \\ \hline\hline \end{tabular} \end{center} $^{a}$ Estimates of source significance in units of the Gehrels error: $\sigma_{G}=1+\sqrt{C_{B}+0.75}$ where $C_{B}$ is the background counts.\\ $^{b}$ The ratios between the source extents and the estimates of the PSF sizes. \end{table*} \begin{figure*}[b] \centerline{\psfig{figure=g308_spec_region.ps,width=16cm,clip=,angle=0}} \caption[]{Illustration of the regions used to extract the spectra from different parts of G308.3-1.4.} \label{spec_region} \end{figure*} \section{SPECTRAL ANALYSIS} For the spectral analysis, we extracted the spectra of the various components of G308.3-1.4\ from the regions illustrated in Figure~\ref{spec_region}. These regions are carefully selected so that they are free from contamination by the point spread function (PSF) wings of any resolved source. The background spectra for the corresponding CCD chips were extracted from source-free regions. We utilized the tool \emph{specextract} to extract the spectra and to compute the response files. After background subtraction, there are net counts of $5118\pm73$~cts (outer rim), $1302\pm37$~cts (inner rim) and $1157\pm36$~cts (central region) available for analysis. According to the photon statistics, we grouped each of the extracted spectra dynamically so as to achieve a comparable signal-to-noise ratio. The energy spectra of the different components are displayed in Figure~\ref{snr_spec}. Emission lines from various metals can be clearly observed (e.g. Mg at $\sim1.4$~keV; Si at $\sim1.9$~keV), which supports the SNR nature of G308.3-1.4. We examined these spectra with an absorbed non-equilibrium ionization model with a constant temperature and a single ionization timescale (XSPEC model: NEI). All the spectral fittings were performed with XSPEC 12.6.0. The best-fit parameters are summarized in Table~\ref{spec_par}. All quoted errors are $1\sigma$ for 1 parameter of interest.
For all three investigated regions, we do not find any conclusive evidence for a deviation of the metal abundances from the solar values. The ionization timescales inferred from fitting the outer rim spectrum and the inner rim spectrum are $\tau_{\rm ion}=\left(7.3\pm1.5\right)\times10^{10}$~s~cm$^{-3}$ and $\tau_{\rm ion}=\left(4.3^{+2.1}_{-1.2}\right)\times10^{10}$~s~cm$^{-3}$ respectively, which indicates that these parts of the remnant might not yet have reached collisional ionization equilibrium (CIE). The best-fit plasma temperatures of the outer rim and inner rim are found to be $kT=0.63^{+0.05}_{-0.02}$~keV and $kT=0.97^{+0.19}_{-0.16}$~keV respectively. The higher temperature at the inner rim is consistent with the hardness distribution shown in the colour image (i.e. Fig.~\ref{rgb}). The moderate temperature variation can possibly be caused by the complex density structure in the shocked region (e.g. see Chevalier 1982). For the spectrum of the central region, the ionization timescale cannot be properly constrained in the NEI model fitting. We put a lower bound on $\tau_{\rm ion}$ at $\gtrsim10^{13}$~s~cm$^{-3}$. This suggests that the central region may already have reached the condition for CIE. The best-fit plasma temperature of this region is found to be $kT=0.57\pm0.03$~keV. We further investigated whether there is any non-thermal emission in the various regions by adding a power-law model to the aforementioned best-fit plasma models. We found that the parameters of the additional power-law cannot be properly constrained. This prompted us to perform the spectral fit with the photon index fixed at the value of $\Gamma=2$. For all three spectra, no significant improvement of the goodness-of-fit has been found. We place $1\sigma$ upper limits on any non-thermal emission at the levels of $<1.9\times10^{-14}$~erg~cm$^{-2}$~s$^{-1}$, $<3.5\times10^{-14}$~erg~cm$^{-2}$~s$^{-1}$ and $<4.5\times10^{-14}$~erg~cm$^{-2}$~s$^{-1}$ for the outer rim, the inner rim and the central region respectively.
We have also examined the spectrum of the brightest source detected in the FOV (i.e. source \#7). The source spectrum has been extracted from a circular region with a radius of $5^{"}$ centered at the position reported by the source detection algorithm (cf. Tab~\ref{x_src}). For the background subtraction, we sampled from an annulus centered on the same position with an inner/outer radius of $7^{"}$/$14^{"}$. There are $156\pm13$ net counts available for the spectral fitting. We first examined the spectrum with various single component models, including a blackbody, a power-law and the hot diffuse gas model based on Mewe et al. (1985) (XSPEC model: MEKAL). We found that none of the tested single component models provides an adequate description of the data; all result in a reduced $\chi^{2}$\ (i.e. $\chi^{2}$/d.o.f.) larger than 2. We proceeded to perform the fitting with a double blackbody, which generally describes the spectrum of a particular manifestation of neutron stars, namely the central compact objects (CCOs) (Hui 2007; Mereghetti 2011). A double blackbody fit yields $N_{H}=(1.5\pm0.7)\times10^{22}$~cm$^{-2}$, $kT_{1}=0.09^{+0.04}_{-0.02}$~keV, $kT_{2}=0.44^{+0.21}_{-0.10}$~keV with an acceptable goodness-of-fit of $\chi^{2}$=7.3 for 5 d.o.f.. Their normalizations imply emitting regions with radii of $R_{1}=27^{+30}_{-21}D_{\rm kpc}$~km and $R_{2}=35^{+34}_{-18}D_{\rm kpc}$~m, where $D_{\rm kpc}$ is the source distance in units of kpc. The unabsorbed flux of source \#7 is found to be $f_{x}\simeq1.3\times10^{-11}$~erg~cm$^{-2}$~s$^{-1}$ in 0.5-8~keV. The comparison between this best-fit model and the observed data is displayed in Figure~\ref{cco_spec}. We have also examined the spectrum of source \#7 with a blackbody plus power-law model, which is a typical phenomenological description of the spectrum of a quiescent low-mass X-ray binary.
This model yields $N_{H}=(1.5^{+1.2}_{-0.7})\times10^{22}$~cm$^{-2}$, $kT=0.09^{+0.02}_{-0.03}$~keV, $R_{\rm bb}=24^{+23}_{-10}D_{\rm kpc}$~km, $\Gamma=2.3^{+0.7}_{-0.2}$ and a power-law normalization of $(6.0^{+4.7}_{-2.8})\times10^{-5}$ photons~keV$^{-1}$~cm$^{-2}$~s$^{-1}$ at 1 keV. The goodness-of-fit ($\chi^{2}$=8.0 for 5 d.o.f.) is comparable to that resulting from the double blackbody fit. The unabsorbed flux in 0.5-8~keV inferred from this model is $f_{x}\simeq1.1\times10^{-11}$~erg~cm$^{-2}$~s$^{-1}$. For source \#5, the second brightest object detected in the FOV and one without any identified optical counterpart, the source and background spectra were extracted from regions of the same sizes as those adopted for source \#7, but centered at the nominal position of source \#5 (Tab.~\ref{x_src}). After background subtraction, there are $61\pm8$~cts available for the analysis. Among the tested single component models, we found that MEKAL provides a better description of the spectrum of this object ($\chi^{2}$=1.4 for 3 d.o.f.) than a blackbody ($\chi^{2}$=21.1 for 3 d.o.f.) or a power-law ($\chi^{2}$=4.8 for 3 d.o.f.). The best-fit model yields $N_{H}=(5.7^{+2.9}_{-1.9})\times10^{21}$~cm$^{-2}$ and $kT=0.46^{+0.16}_{-0.26}$~keV. This implies an unabsorbed flux of $f_{x}\simeq2.0\times10^{-13}$~erg~cm$^{-2}$~s$^{-1}$ in 0.5-8~keV. \section{OPTICAL/INFRARED SPECTRAL ENERGY DISTRIBUTION OF THE CENTRAL COMPACT OBJECT} We have also examined the optical properties of the counterpart of source \#7. In the USNO-B1.0 catalog, only a single source with the magnitudes of $B=19.72$, $R=19.64$ and $I=16.15$ has been identified within its $10\sigma\times10\sigma$ X-ray error box. We have also searched for the infrared counterpart in the Two Micron All Sky Survey (2MASS) catalog (Skrutskie et al. 2006). Similarly, only one source with $J=14.01$, $H=13.31$, and $K_{s}=13.10$ can be found within its error box.
To complement these catalog values, we have also carried out observations in the $V$ and $U$ bands so as to have a more complete frequency coverage. For the $U$ band, by applying aperture photometry to the stacked UVOT image with an exposure of 6.49~ks, we have placed a limiting magnitude of 21.29. The UVOT data also allow us to cross-check the magnitude in the $B$ band, from which we have obtained $B$=19.77, confirming the value reported in the USNO-B1.0 catalog. For the $V$ band, we have observed source \#7 with the Cerro Tololo Interamerican Observatory (CTIO). Data have been obtained using the ANDICAM dual-channel imaging photometer at the SMARTS/CTIO 1.3~m telescope in the $V$ filter. The zero-point corrected magnitude is found to be $V=19.16$. These identifications enable us to construct an optical/infrared spectral energy distribution (SED), which is shown in Figure~\ref{oir_sed}. We adopted the column density inferred from the X-ray spectral fit of source \#7 (i.e. $1.5\times10^{22}$~cm$^{-2}$) to perform the extinction correction (cf. Predehl \& Schmitt 1995; Cardelli, Clayton, \& Mathis 1989). The de-reddened SED has also been plotted in Figure~\ref{oir_sed}. For the range from the $K_{s}$ band to the $R$ band, the SED can be modeled with the spectrum of an M3V star. On the other hand, an apparent excess is found at and beyond the $V$ band. To further constrain its optical properties, we have also searched for its H$\alpha$ counterpart in the SuperCOSMOS H-alpha Survey (SHS) catalog (Parker et al. 2005). Within the X-ray error box of source \#7, only one source, with a magnitude of 17.35, is found. With $m_{\rm H\alpha}-m_{R}=-2.3$ in comparison with the $R$ band, a clear enhancement in H$\alpha$ is noted, which suggests the presence of a strong emission line. \begin{figure*} \centerline{\psfig{figure=g308_snr_spec_revise.eps,width=12cm,clip=,angle=0}} \caption[]{X-ray spectra of the emission from various regions of G308.3-1.4\ as observed by Chandra ACIS-I.
Residuals resulting from fitting an absorbed non-equilibrium ionization plasma model are shown for each case. The error bars represent $1\sigma$ uncertainties.} \label{snr_spec} \end{figure*} \begin{table*}[b] \caption{X-ray spectral parameters inferred from different regions in G308.3-1.4.} \label{spec_par} \begin{center} \begin{tabular}{l |c | c | c} \hline\hline\\[-2ex] & Outer rim & Inner rim & Central region \\\\[-2ex] \hline\\[-2ex] $N_{H}$ ($10^{21}$ cm$^{-2}$) & $10.3^{+0.3}_{-0.4}$ & $9.7^{+0.8}_{-0.7}$ & $9.4^{+0.3}_{-0.4}$ \\[1ex] $kT$ (keV) & $0.63^{+0.05}_{-0.02}$ & $0.97^{+0.19}_{-0.16}$ & $0.57\pm0.03$ \\[1ex] $\tau_{\rm ion}$ (s~cm$^{-3}$) & $(7.3\pm1.5)\times10^{10}$ & $(4.3^{+2.1}_{-1.2})\times10^{10}$ & $>10^{13}$ \\[1ex] Norm ($10^{-3}$)$^{a}$ & $4.5^{+0.6}_{-0.7}$ & $0.5^{+0.2}_{-0.1}$ & $0.9\pm0.1$ \\[1ex] \hline\\[-2ex] $f_{x}$ ($10^{-12}$~erg~cm$^{-2}$~s$^{-1}$)$^{b}$ & $24.1^{+3.1}_{-3.4}$ & $3.8^{+1.1}_{-0.9}$ & $2.3^{+0.4}_{-0.3}$ \\[1ex] \hline\\[-2ex] $\chi^{2}$\ & 157.08 & 141.49 & 131.94 \\[1ex] D.O.F. & 138 & 129 & 122 \\[1ex] \hline \end{tabular} \end{center} $^{a}$ {\footnotesize The model normalization is expressed as $(10^{-14}/4\pi D^{2})\int n_{e}n_{H}dV$ where $D$ is the source distance in cm and $n_{e}$ and $n_{\rm H}$ are the post-shock electron and hydrogen densities in cm$^{-3}$.}\\ $^{b}$ {\footnotesize Unabsorbed flux over the energy range of $0.5-8$~keV.}\\ \end{table*} \begin{figure*}[t] \centerline{\psfig{figure=cco_bb_bb.ps,width=12cm,clip=,angle=-90}} \caption[]{X-ray spectrum of the emission from the position of source \#7 as observed with ACIS-I with the best-fit double blackbody model (\emph{upper panel}) and contributions to the $\chi^{2}$\ statistics (\emph{lower panel}).
The error bars represent $1\sigma$ uncertainties.} \label{cco_spec} \end{figure*} \begin{figure*}[t] \centerline{\psfig{figure=src7_SED_M3V_uvot_ctio.eps,width=18cm,clip=,angle=0}} \caption[]{Optical/infrared spectral energy distribution of source \#7. Both observed (open symbols) and de-reddened (solid symbols) data points are shown in this plot. The open and solid triangles represent the observed and de-reddened $3\sigma$ upper limit inferred from the SWIFT UVOT observation. The spectral model of an M3V star obtained from the stellar spectral flux library (Pickles 1998) is overplotted. The error bars represent the photometric uncertainties corresponding to each data point.} \label{oir_sed} \end{figure*} \section{DISCUSSION} We have performed a detailed spectro-imaging X-ray study of the SNR candidate G308.3-1.4\ with Chandra. An incomplete shell-like X-ray structure, which is well-correlated with the radio shell, has been revealed. Its X-ray spectrum has shown the presence of a hot plasma accompanied by metallic emission lines. All these lines of observational evidence clearly suggest that G308.3-1.4\ is indeed an SNR. Utilizing the X-ray results, we proceed to discuss the properties of G308.3-1.4. The emission measure inferred from the spectral fits of the outer rim emission allows us to estimate the hydrogen density $n_{\rm H}$ and the electron density $n_{\rm e}$ in the shocked regions. Assuming the shocked densities of hydrogen and electrons are uniform in the extraction region, the normalization of the NEI model can be approximated by $10^{-14}n_{\rm H}n_{e}V/4\pi D^{2}$, where $D$ is the distance to G308.3-1.4\ in cm and $V$ is the volume of interest in units of cm$^{3}$. Assuming an oblate spheroid geometry, the volume of interest for the outer rim is $5\times10^{54}D^{3}_{\rm kpc}$ cm$^{3}$, where $D_{\rm kpc}$ is the remnant distance in units of kpc.
Assuming a fully ionized plasma with $\sim10\%$ He ($n_{\rm e}\sim1.2n_{\rm H}$), the $1\sigma$ confidence interval of the outer rim normalization implies shocked hydrogen and electron densities in the ranges $n_{\rm H}\simeq\left(2.8-3.2\right)D^{-0.5}_{\rm kpc}$~cm$^{-3}$ and $n_{\rm e}\simeq\left(3.3-3.9\right)D^{-0.5}_{\rm kpc}$~cm$^{-3}$ respectively. Assuming G308.3-1.4\ is in the Sedov phase, the shock temperature can be estimated by $T_{s}\simeq 8.1\times 10^{6}E_{51}^{2/5}n_{\rm ISM_{-1}}^{-2/5}t_{4}^{-6/5}$~K, where $t_{4}$, $E_{51}$ and $n_{\rm ISM_{-1}}$ are the time after the explosion in units of $10^{4}$~years, the released kinetic energy in units of $10^{51}$~ergs and the ISM density in units of 0.1~cm$^{-3}$ respectively. Assuming a strong shock, $n_{\rm ISM}$ is estimated as $0.25n_{\rm H}$. Taking the $1\sigma$ uncertainties of the temperature and the normalization for the outer rim into account (cf. Tab.~\ref{spec_par}), the age of G308.3-1.4\ is constrained to $t\simeq\left(2.4-2.7\right)D_{\rm kpc}^{1/6}\times10^{3}$~yrs and $t\simeq\left(5.1-5.9\right)D_{\rm kpc}^{1/6}\times10^{3}$~yrs for $E_{51}=0.1$ and $E_{51}=1$ respectively. Since the distance plays a crucial role in determining the physical properties of G308.3-1.4, a follow-up HI observation of G308.3-1.4\ is strongly recommended. Our observation has also revealed a number of sources in the field of G308.3-1.4\ (cf. Fig.~\ref{wavelet}). Source \#7 is the most interesting source in this population. First, among all the detected sources, it is the closest object to the geometric center of the remnant, with an offset of $\sim1.4^{'}$. Also, the absorption column of this object inferred from the spectral fit is not far from the value inferred from the remnant spectrum, which suggests a possible tie between source \#7 and G308.3-1.4.
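The two estimates above can be reproduced numerically. The sketch below is ours, not from the paper: it inverts the quoted NEI normalization and Sedov temperature relation for the central best-fit values, assuming $D_{\rm kpc}=1$.

```python
import math

# Reproduce the density and Sedov-age estimates from the text (D_kpc = 1).
KPC_CM = 3.086e21  # cm per kpc

def shocked_densities(norm, v_cm3, d_kpc=1.0):
    """Invert the NEI normalization 1e-14 * n_H * n_e * V / (4 pi D^2),
    assuming a fully ionized plasma with ~10% He (n_e = 1.2 n_H)."""
    d_cm = d_kpc * KPC_CM
    n_h = math.sqrt(norm * 4 * math.pi * d_cm**2 / (1.2e-14 * v_cm3))
    return n_h, 1.2 * n_h

def sedov_age_yr(kT_keV, n_ism, e51):
    """Invert T_s = 8.1e6 * E51^(2/5) * n_ISM,-1^(-2/5) * t4^(-6/5) K for t."""
    t_s = kT_keV * 1.16e7          # keV -> K
    n_m1 = n_ism / 0.1             # ISM density in units of 0.1 cm^-3
    t4 = (8.1e6 * e51**0.4 * n_m1**-0.4 / t_s) ** (5.0 / 6.0)
    return t4 * 1e4

# Central best-fit outer rim values: norm = 4.5e-3, V = 5e54 cm^3, kT = 0.63 keV
n_h, n_e = shocked_densities(norm=4.5e-3, v_cm3=5e54)
n_ism = 0.25 * n_h                 # strong-shock compression factor of 4
print(f"n_H ~ {n_h:.1f} cm^-3, n_e ~ {n_e:.1f} cm^-3")   # ~3.0 and ~3.6
for e51 in (0.1, 1.0):
    print(f"E51 = {e51}: t ~ {sedov_age_yr(0.63, n_ism, e51):.0f} yr")
```

With the central values this returns $n_{\rm H}\approx3.0$~cm$^{-3}$ and ages of roughly $2.6\times10^{3}$ and $5.6\times10^{3}$~yr, which fall inside the quoted $1\sigma$ brackets.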
Its spectrum can be described by a composite blackbody or a blackbody plus power-law model, though the parameters are rather unconstrained due to the limited photon statistics resulting from this short exposure. We would like to point out that these properties are similar to those of CCOs, which form one of the most poorly known classes among all known manifestations of neutron stars (see Mereghetti 2011 for a recent review). CCOs are typically characterized by their proximity to the expansion center of the associated SNRs as well as their thermal X-ray spectra (Mereghetti 2011; Hui 2007). On the other hand, all the known CCOs are also characterized by their large X-ray-to-optical flux ratios ($f_{x}/f_{\rm opt}>10^{3}$), which are typical for isolated neutron stars (cf. De Luca 2008). In searching for the counterpart of source \#7, we have identified a single optical/infrared source within its error box, which clearly rules out the possibility of an isolated neutron star. However, we cannot exclude the possibility that this is a neutron star residing in a binary system that survived the SN explosion. The binary scenario is also suggested by the putative modulation of $P\sim1.4$~hrs, which can possibly be the orbital period of this system. A similar scenario has also been suggested to explain the X-ray temporal behavior of the peculiar CCO associated with the SNR RCW~103, which has a periodic modulation at 6.67~hrs that can also be interpreted as the orbital period (Pizzolato et al. 2008; De Luca et al. 2006). Nevertheless, the relatively short exposure of our observation does not allow a firm conclusion on the periodicity of source \#7. A longer follow-up X-ray observation would enable us to better constrain its temporal behavior. Similar to the CCO in RCW~103, flux variability has also been detected from source \#7 in G308.3-1.4\ (see Fig.~\ref{src7_var}). The nature of this variability remains unknown.
Speculations have ranged from the instability of the accretion disc around the compact object to the scenario that the CCO is a magnetar (De Luca et al. 2006). The X-ray source in RCW~103 is so far the only CCO that has demonstrated long-term flux variability. It is possible that source \#7 in G308.3-1.4\ is the second example. Multi-epoch X-ray monitoring of this source can provide a deeper insight into its temporal behavior. The properties of the infrared/optical counterpart of source \#7 are also worth discussing. We found that its SED can be well-described by the model spectrum of a late-type star in the range from the $K_{s}$ band to the $R$ band (see Fig.~\ref{oir_sed}). In the context of a binary scenario, this provides evidence for the companion star of source \#7. Nevertheless, with these sparse data points obtained from the catalogs, its nature cannot be constrained unambiguously. Dedicated spectroscopic observations are required to provide a deeper insight into it. The other interesting optical properties are the enhancements observed in the $V$ and $B$ bands as well as in H$\alpha$. These characteristics suggest the presence of an accretion disk around a compact object and a low mass companion (e.g. Cool et al. 1998). However, these excesses are larger than those observed from a typical quiescent low-mass X-ray binary (e.g. Haggard et al. 2004). As the orbital period of source \#7 is suggested to be $\sim1.4$~hrs, it is expected to be a very compact binary system. Such a tight orbit can possibly lead to the aforementioned large enhancements. Confirmation of this putative orbital period can provide a natural explanation of the optical properties. We would like to highlight that only 11 CCOs have been uncovered so far (cf. Tab.~1 in Mereghetti 2011). With this limited sample size, we cannot even determine whether these objects constitute a homogeneous class. For a better understanding of their nature, the sample of CCOs has to be enlarged.
In this aspect, our long-term X-ray identification campaign of SNR candidates can also enable the search for candidates of these compact objects. \section{SUMMARY} A group of unidentified extended RASS objects has been suggested as promising SNR candidates. We have initiated a long-term identification campaign by observing the brightest candidate, G308.3-1.4, with Chandra. With a short exposure, we have confirmed the nature of G308.3-1.4\ as a SNR through a detailed X-ray spectro-imaging analysis. Apart from the remnant emission, a bright X-ray point source located close to the geometrical center of G308.3-1.4\ has also been detected as a new CCO. This compact object has shown a putative periodicity of $\sim1.4$~hrs and excesses in the $V$ and $B$ bands and in H$\alpha$, which suggest it is a promising candidate for a compact binary that survived a SN. In conclusion, our pilot target has demonstrated that this proposed alternative window for surveying SNRs and compact objects is fruitful. \acknowledgments{ The authors would like to thank the anonymous referee for the useful comments. CYH is supported by the National Research Foundation of Korea through grant 2011-0023383. LT would like to thank the German Deutsche Forschungsgemeinschaft, DFG for financial support in project SFB TR 7 Gravitational Wave Astronomy. AKHK is supported partly by the National Science Council of the Republic of China (Taiwan) through grant NSC99-2112-M-007-004-MY3 and a Kenda Foundation Golden Jade Fellowship. }
\section{Context\protect\label{intro}} \vspace{-0.3cm} For low-to-intermediate mass stars ($0.8 ~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} M_{*} ~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} 8~M_{\odot}$), the end stages of stellar evolution are marked by the onset of a range of complex and dynamic atmospheric phenomena. During the asymptotic giant branch (AGB) phase, such stars exhibit dramatic increases in radius ($R_{\star}~\rlap{$>$}{\lower 1.0ex\hbox{$\sim$}}$1--2~AU) and luminosity ($L_{\star}\sim10^{4}L_{\odot}$) and begin undergoing radial pulsations with periods of order hundreds of days. Over the course of a single pulsation cycle the brightness of the star may vary by up to $\sim$~8 magnitudes (a factor of 1000) in the visible. The latter variations are attributed to the time-dependent formation of metallic oxides in the outer atmosphere (Reid \& Goldston 2002). Another key hallmark of the AGB phase is the onset of periods of intense mass loss (${\dot M}\sim10^{-8}$--$10^{-4}~M_{\odot}$ yr$^{-1}$) through cool, dense, low-velocity winds ($V_{\rm out}\sim$10~\mbox{km~s$^{-1}$}). These outflows ultimately expel up to 80\% of the star's initial mass (see review by H\"ofner \& Olofsson 2018), leading to profound effects on the stellar evolutionary track. Because AGB winds are dusty and enriched in heavy elements, AGB mass loss also produces more than half of the dust and chemical enrichment in the Galaxy (Schr\"oder \& Sedlmayr 2001; Van Eck et al. 2001; Karakas 2014). A detailed understanding of AGB mass loss is crucial for stellar astrophysics and knowledge of the ultimate fate of our Sun. But more broadly, AGB stars impact the entire Galactic ecosystem, and prescriptions for the mass loss and dust and heavy element production are crucial for extragalactic astronomy and cosmology, which make use of stellar population synthesis (e.g., Salaris et al.
2014), interpretations of the integrated starlight from galaxies (e.g., Melnick \& De Propris 2013), and prescriptions of gas recycling and chemical evolution in galaxies (e.g., Tosi 2007; Leitner \& Kravtsov 2011). However, we still lack a comprehensive and self-consistent picture of evolution and mass loss along the AGB. Persistent uncertainties include the wind launching mechanisms for stars of different chemistries, the mass-loss geometry and timescales, and the evolutionary pathways for stars of various initial masses (Marengo 2009; H\"ofner \& Olofsson 2018). In broad terms, AGB winds are thought to be dust-driven (e.g., Kwok 1975): dust formation occurs in the cool, outer atmosphere ($r>2R_{\star}$) and radiation pressure on these grains transfers momentum outward to the gas, leading to mass loss. However, in warmer and/or oxygen-rich (M-type) stars, conditions are too hot for dust formation interior to $r\sim$6--7~AU. Thus some additional mechanism is required to transport material from the stellar ``surface'' to the wind launch region (Woitke 2006; H\"ofner 2011). Pulsations are suspected of playing a critical role in this process (Bowen 1988; Yoon \& Cantiello 2010; Neilson 2014), but the details are still poorly understood. Magnetic fields, acoustic waves, and/or Alfv\'en waves are also candidates for shaping and regulating mass loss (e.g., Blackman et al. 2001; Harper 2010), possibly in conjunction with large-scale convective processes (e.g., Lim et al. 1998; O'Gorman et al. 2017). However, the intricate interplay between these various processes is still poorly constrained. Significant progress in this field will require empirical constraints that combine: (1) the ability to {\em spatially resolve} the stellar atmosphere on the relevant physical scales ($\ll R_{\star}$); (2) {\it temporal resolution} of the characteristic dynamical timescales; and (3) the ability to {\em directly measure gas motions}. 
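To put requirement (1) on a concrete footing, the following back-of-the-envelope sketch compares the angular size of a nearby AGB star against a diffraction-limited VLBI beam, $\theta\approx\lambda/B$. The 1~kpc distance and $\sim$8600~km baseline used here are illustrative assumptions, not values from this white paper.

```python
import math

# Illustrative angular scales: an AGB star of radius ~2 AU at 1 kpc versus
# the diffraction-limited beam (theta ~ lambda/B) of a ~8600 km VLBI baseline.
# Distance and baseline are example values, not taken from this text.
C = 2.998e8                            # speed of light, m/s
RAD_TO_MAS = 180.0 / math.pi * 3600e3  # radians -> milliarcseconds

def beam_mas(freq_ghz, baseline_km):
    lam = C / (freq_ghz * 1e9)         # observing wavelength in m
    return lam / (baseline_km * 1e3) * RAD_TO_MAS

def stellar_radius_mas(r_au, d_kpc):
    # Small-angle approximation: 1 AU at 1 kpc subtends 1 mas
    return r_au / d_kpc

r_star = stellar_radius_mas(2.0, 1.0)  # ~2 mas
sio_beam = beam_mas(43.1, 8600)        # near the SiO v=1 J=1-0 line: ~0.17 mas
h2o_beam = beam_mas(22.2, 8600)        # near the 22 GHz H2O line: ~0.32 mas
print(f"R* ~ {r_star:.1f} mas; beams ~ {sio_beam:.2f} / {h2o_beam:.2f} mas")
```

With these numbers the beam is roughly an order of magnitude smaller than the stellar radius, i.e. comfortably inside the $\ll R_{\star}$ regime required above.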
Fortuitously, the molecule-rich atmospheres of evolved giants frequently give rise to {\em molecular masers} that can be exploited for this purpose (e.g., Gray 2012). Though stellar masers were discovered $\sim$50 years ago (Wilson \& Barrett 1968; Snyder \& Buhl 1974), technological advances, coupled with advances in maser theory (Gray et al. 2016) and the modeling of AGB star atmospheres (Freytag et al. 2017; H\"ofner \& Freytag 2019), are poised to allow masers to supply tremendous new insights into the physics of evolved stars and stellar mass loss in the coming decade. \section{Molecular Masers in Evolved Stellar Atmospheres: Background and Recent Results} \vspace{-0.2cm} In oxygen-rich (M-type) AGB stars, maser emission from SiO arises within a few $R_{\star}$, just outside the radio photosphere ($r\sim2R_{\star}$), and adjacent to the dust formation zone and molecular layers (Reid \& Menten 1997; Fig.~\ref{fig:cross-section}). The properties of the SiO masers are therefore intricately linked with the atmospheric regions where stellar mass loss originates (Humphreys 2002; Gray et al. 2009). In carbon-rich AGB stars, HCN masers are thought to trace similar regions (Gray 2012; Izumiura et al. 1995; Menten et al. 2018). H$_{2}$O masers, in comparison, typically arise just outside the dust formation zone at $r\sim10$--100~AU. \begin{figure}[!t] \vspace{-0.1cm} \centering \scalebox{0.4}{\rotatebox{0}{\includegraphics{RM97_fig10_ed.jpg}}} \caption{{\small Schematic illustrating the various atmospheric layers of an M-type AGB star. SiO maser emission arises near the radio photosphere, just interior to the dust formation zone where the stellar wind is launched. H$_{2}$O masers arise just outside the dust formation zone. 
Adapted from Reid \& Menten (1997).} \protect\label{fig:cross-section}} \vspace{-0.4cm} \end{figure} Because of their high brightness temperatures ($\gtrsim10^{6}$~K), masers can be observed with ultra-high spatial resolution using very long baseline interferometry (VLBI). In recent years, VLBI observations with the Very Long Baseline Array (VLBA) of stellar SiO and H$_{2}$O masers with angular resolutions of $\sim$0.2--0.5~mas have established the enormous potential of high-resolution studies of masers for understanding the complex atmospheric physics and mass loss of AGB stars. Examples of key results to date include: \vspace{-0.4cm} \paragraph{Spatial structure} Observations with the VLBA have revealed that SiO masers lie in complex ring-like structures centered on the host star, lying just outside the hot molecular layer observable at IR wavelengths ($r\sim$2--4$R_{\star}$; e.g., Diamond et al. 1994; Cotton et al. 2004, 2006; Wittkowski et al. 2007; Amiri et al. 2012; Fig.~\ref{fig:txcam}). Intriguing jet-like features are seen in some cases (Cotton et al. 2006; Amiri et al. 2012), although their origin has remained a puzzle, as they cannot be interpreted as simple outward accelerations. Temporal monitoring and proper motion measurements are needed to establish the true nature of these features. \vspace{-0.4cm} \paragraph{Variability} SiO and H$_{2}$O masers in evolved stars are highly time variable (e.g., Pardo et al. 2004; Kim et al. 2014). The availability of the VLBA as a {\em dedicated} VLBI instrument has thus been crucial for enabling the regular monitoring of stellar masers with high spatial resolution. One spectacular example is the 78-epoch ``movie'' of the SiO masers in TX~Cam over nearly 5 years (Gonidakis et al. 2013; Fig.~\ref{fig:txcam}). These data indicate that a shock with velocity $\sim$7~\mbox{km~s$^{-1}$}\ is created during each stellar pulsation cycle that in turn affects the intensity and distribution of the masers.
Further, the velocity structure suggests a bipolar geometry, contrary to the spherically symmetric outflows that are traditionally assumed for AGB stars (see H\"ofner \& Olofsson 2018). However, as we presently lack similar time-lapse data for other AGB stars, it is impossible to draw general conclusions, and firm links between different components of the atmospheric physics (shocks, pulsation, convection) and the observed maser behaviors are not yet established. Variability studies over a large number of objects are needed to establish the connections between the AGB atmosphere and the mass loss process. \begin{figure}[!t] \vspace{-0.1cm} \centering \scalebox{0.4}{\rotatebox{0}{\includegraphics{gonidakis.pdf}}} \caption{{\small VLBA images of the $\lambda$7~mm SiO $v$=1, $J$=1-0 masers in TX~Cam during two different epochs. Credit: P. J. Diamond and A. J. Kemball (see also Diamond \& Kemball 2003; Gonidakis et al. 2013). } \protect\label{fig:txcam}} \vspace{-0.4cm} \end{figure} \vspace{-0.4cm} \paragraph{Magnetic fields} Full polarization measurements of SiO masers offer a powerful means of constraining the little-understood role of magnetic fields in AGB mass loss (e.g., Vlemmings 2018) and provide a vital link between the ``surface'' magnetic field measured through infrared lines (L\`ebre et al. 2014) and the ``circumstellar'' magnetic fields measured further out via H$_{2}$O and OH lines. Using the VLBA, Amiri et al. (2012) obtained full-polarization maps of the SiO masers in the OH/IR AGB star OH~44.8-2.3 and discovered that they are significantly linearly polarized ($\sim$100\%), underscoring an important role for magnetic fields in the outer atmosphere and circumstellar environment (Fig.~\ref{fig:amiri}). The polarization vectors also seem to indicate a dipolar magnetic field morphology, although the relationship between the B-field geometry and the stellar outflow cannot yet be firmly established.
An improved understanding of these results requires higher signal-to-noise ratio observations, along with similarly detailed studies for other AGB stars. \begin{figure}[!t] \vspace{-0.1cm} \centering \scalebox{0.4}{\rotatebox{0}{\vspace{-1.0cm}\includegraphics{Amiri_pol.pdf}}} \caption{{\small Contour map of the SiO $v$=1, $J$=1-0 maser emission in the OH/IR star OH~44.8-2.3 obtained with the VLBA, overplotted with linear polarization vectors. Vector length is proportional to linearly polarized intensity (1 mas = 1.25 Jy beam$^{-1}$) and position angle corresponds to the EVPA. From Amiri et al. (2012). } \protect\label{fig:amiri}} \vspace{-0.5cm} \end{figure} \vspace{-0.3cm} \paragraph{Multi-transition observations} Multiple transitions and isotopologues of SiO and H$_{2}$O emit in the cm, mm, and sub-mm (Alcolea 2004; Humphreys 2007; Gray 2012). Because these various transitions require different conditions to excite, spatially resolved observations of multiple maser lines within a single star permit measurements of density and temperature within different regions of the atmosphere, the propagation of shocks, and the transfer of material between layers of the star (e.g., Humphreys et al. 2002; Gray et al. 2009, 2016). In particular, {\em contemporaneous} observations of multiple lines and comparisons of their properties and evolution with those of the optical and radio photospheres offer potent diagnostics of the atmospheric physics. However, several key lines emit outside the frequency coverage of the VLBA (i.e., $\nu_{0}>$90~GHz) or else fall below its brightness temperature limits for line emission [e.g., $T_{B}\sim10^{8}$~K within a 31~kHz spectral channel ($\sim$0.2~\mbox{km~s$^{-1}$}\ for $\nu$=43~GHz) during a 6-hour integration]. Recently, a novel optics system was installed on the Korean VLBI Network (KVN), enabling simultaneous observations of four bands spanning 21--142~GHz (Han et al. 2008). 
The promise of this set-up for observing stellar masers has already been demonstrated (Cho et al. 2017; Yoon et al. 2018). However, the KVN lacks the long baselines needed to resolve the true sizes of maser emitting gas clumps and to gauge the fraction of emission emitted on various spatial scales. The longer baselines of the VLBA have the needed resolution, but the limited instantaneous bandwidth largely precludes the contemporaneous observations of multiple lines. A consequence is persistent ambiguity in the astrometric registration between different transitions, which significantly complicates the interpretation of maser data (e.g., Phillips et al. 2003; Desmurs et al. 2014; Issaoun et al. 2017; Fig.~\ref{fig:masers}). \begin{figure}[h] \hspace{0.75cm} \centering \rotatebox{0}{ \resizebox{!}{5.5cm} {\includegraphics{desmurs.pdf} }} \caption{\small VLBA maps of SiO $v$ = 1 (blue), $v$=2 (green), and $v$=3 (red), \mbox{$J$=1$-$0}\ maser emission in U~Her and IK~Tau (Desmurs et al. 2014). The relative astrometry of the different lines is currently highly uncertain, limiting our ability to quantitatively constrain models of the maser pumping and atmospheric physics. } \label{fig:masers} \vspace{-0.5cm} \end{figure} \section{Goals for the Next Decade: Requirements and Recommendations} \vspace{-0.3cm} Technological innovations during the next decade promise major leaps in our ability to exploit VLBI studies of masers as a tool for understanding stellar evolution and mass loss. \vspace{-0.4cm} \paragraph{Goal: documenting temporal changes} AGB star atmospheres are highly dynamic, making inferences gleaned from observations at only a single observing epoch severely limiting---and potentially misleading. The use of VLBI to obtain high time- and spatial-resolution ``movies'' of masers in a sample of nearby ($d~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}}$1~kpc) AGB stars spanning a range in properties would supply vital insights into the physics of AGB outflows.
For example, such observations would enable the measurement of shock velocities (e.g., Gonidakis et al. 2013) which are critical for constraining the role of pulsation in AGB mass loss (Reid \& Menten 1997; Gray et al. 2009) and for explaining the possible existence of gas at chromospheric temperatures (e.g., Luttermoser 1988; Vlemmings et al. 2017). Such measurements can be compared with independent assessments gleaned from the variability of radio photosphere light curves (see Reid \& Menten 1997; Reid \& Goldston 2002). Full polarization observations would simultaneously enable constraints on the magnetic field strength and geometry (e.g., Amiri et al. 2012).\\ {\bf Requirements/recommendations:} To enable full polarization monitoring of masers over both short and long cadences, it is crucial for the US community to maintain a dedicated VLBI array with improved sensitivity. No other VLBI facility in the world has this capability in combination with the $\sim10^{4}$~km baselines needed to provide the angular resolution to fully resolve the structure and motions of individual maser clumps in stellar atmospheres. \vspace{-0.4cm} \paragraph{Goal: multi-frequency line mapping} Building a complete understanding of the physical conditions, chemistry, and gas motions within the atmospheres and envelopes of AGB stars benefits from the ability to detect and simultaneously observe a wide range of maser transitions, including relatively weak ($T_{B}\sim10^{4}$~K) and little explored lines such as SiO $v$=0 (Boboltz \& Claussen 2004); $^{29}$SiO $v$=0,1, $^{28}$SiO $v$=2, \mbox{$J$=2$-$1}, $^{28}$SiO $v$=3 (Soria-Ruiz et al. 2005, Desmurs et al. 2014), and HCN (unique to carbon stars; e.g., Izumiura et al. 1995; Menten et al.
2018).\\ {\bf Requirements/recommendations:} Enabling contemporaneous VLBI measurements of multiple maser lines requires upgrading VLBA stations to wider instantaneous bandwidths ($~\rlap{$>$}{\lower 1.0ex\hbox{$\sim$}}$8~GHz) and expanded frequency coverage (to $\nu~\rlap{$>$}{\lower 1.0ex\hbox{$\sim$}}$100~GHz). While wider bandwidths do not increase spectral line sensitivity, they improve measurements by expanding the available high-frequency calibration sources. Parallel improvements in line sensitivity can be achieved through: (1) the inclusion in VLBI arrays of additional large apertures such as phased ALMA ($0.8~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} \lambda~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} 7$~mm; Matthews et al. 2018), the phased VLA ($0.7~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} \lambda~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} 1.3$~cm), and the Robert C. Byrd Green Bank Telescope ($0.3~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} \lambda~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} 1.3$~cm); and (2) the addition of many stations on intermediate baselines ($\sim$30-300~km) to bridge the spatial scales and brightness temperature sensitivity of current VLBI arrays and connected element interferometers (see Kameno et al. 2013; Selina et al. 2018). A continuum brightness temperature sensitivity of $<10^{3}$~K on the intermediate baselines would enable simultaneous detection and astrometric registration of the stellar continuum and the maser emission with exquisite precision ($\ll R_{\star}$), providing valuable new insights into the transport of matter and energy in AGB star atmospheres and into the pumping mechanism for masers (e.g., Gray et al. 2009, 2016; Desmurs et al. 2014). \end{justify} \pagebreak \raggedright \textbf{References} \begin{hangparas}{.25in}{1} Alcolea, J., Bujarrabal, V., \& Gallego, J. D. 1989, A\&A, 211, 187 Amiri, N., Vlemmings, W. H. T., Kemball, A. J., \& van Langevelde, H. J. 2012, A\&A, 538, A136 Blackman, E. G., Frank, A., \& Welch, C.
2001, ApJ, 546, 288

Boboltz, D.~A., \& Claussen, M.~J.\ 2004, ApJ, 608, 480

Bowen, G. H. 1988, ApJ, 329, 299

Cho, S.-H., Yun, Y., Kim, J., et al. 2018, IAU Symp. 336, ed. A. Tarchi, M. J. Reid, \& P. Castangia, 359

Cotton, W. D., Mennesson, B., Diamond, P. J., et al. 2004, A\&A, 414, 275

Cotton, W. D., Vlemmings, W., Mennesson, B., et al. 2006, A\&A, 456, 339

Desmurs, J. F., Bujarrabal, V., Colomer, F., \& Alcolea, J. 2000, A\&A, 360, 189

Diamond, P. J., \& Kemball, A. J. 2003, ApJ, 599, 1372

Diamond, P. J., Kemball, A. J., Junor, W., Zensus, A., Benson, J., \& Dhawan, V. 1994, ApJ, 430, L61

Freytag, B., Liljegren, S., \& H\"ofner, S. 2017, A\&A, 600, A137

Gonidakis, I., Diamond, P. J., \& Kemball, A. J. 2013, MNRAS, 433, 3133

Gray, M. 2012, Maser Sources in Astrophysics (Cambridge: Cambridge University Press)

Gray, M. D., Baudry, A., Richards, A. M. S., Humphreys, E. M. L., Sobolev, A. M., \& Yates, J. A. 2016, MNRAS, 456, 374

Gray, M. D., Wittkowski, M., Scholz, M., Humphreys, E. M. L., Ohnaka, K., \& Boboltz, D. 2009, MNRAS, 394, 51

Han, S.-T., Lee, J.-W., Kang, J., et al. 2008, IJIMW, 29, 69

Harper, G. M. 2010, ApJ, 720, 1767

H\"ofner, S. 2011, ASPC, 445, 193

H\"ofner, S. \& Freytag, B. 2019, A\&A, in press (arXiv:1902.04074)

H\"ofner, S. \& Olofsson, H. 2018, A\&ARv, 26, 1

Humphreys, E. M. L. 2002, IAU Symp. 206, 266

Humphreys, E. M. L. 2007, IAU Symp. 242, 471

Humphreys, E. M. L., Gray, M. D., Yates, J. A., Field, D., Bowen, G. H., \& Diamond, P. J. 2002, A\&A, 386, 256

Issaoun, S., Goddi, C., Matthews, L. D., et al. 2017, A\&A, 606, A126

Izumiura, H., Ukita, N., \& Tsuji, T. 1995, ApJ, 440, 728

Kameno, S., Nakai, N., \& Honma, M. 2013, New Trends in Radio Astronomy in the ALMA Era, ASP Conference Series, 476 (San Francisco: ASP), 409

Karakas, A. I. 2014, IAU Symp. 298, ed. S. Feltzing, G. Zhao, N. A. Walton, \& P. A. Whitelock (Cambridge: Cambridge University Press), 142

Kim, J., Cho, S.-H., \& Kim, S. J. 2014, AJ, 147, 22

Kwok, S.
1975, ApJ, 198, 583

L\`ebre, A., Auri\`ere, M., Fabas, N., Gillet, D., Herpin, F., Konstantinova-Antova, R., \& Petit, P. 2014, A\&A, 561, 85

Leitner, S. N. \& Kravtsov, A. V. 2011, ApJ, 734, 48

Lim, J., Carilli, C. L., White, S. M., Beasley, A. J., \& Marson, R. G. 1998, Nature, 392, 575

Luttermoser, D. G. 1988, PASP, 100, 1587

Marengo, M. 2009, PASA, 26, 365

Matthews, L. D., Crew, G. B., Doeleman, S. S., et al. 2018, PASP, 130, 5002

Matthews, L. D., Greenhill, L. J., Goddi, C., Chandler, C. J., Humphreys, E. M. L., \& Kunz, M. W. 2010, ApJ, 708, 80

Melnick, J. \& De Propris, R. 2013, MNRAS, 431, 2034

Menten, K. M., Wyrowski, F., Keller, D., \& Kami\'nski, T. 2018, A\&A, 613, 49

Neilson, H. R. 2014, IAU Symp. 301, 205

O'Gorman, E., Harper, G. M., Guinan, E. F., Richards, A. M. S., Vlemmings, W., \& Wasatonic, R. 2015, A\&A, 580, A101

O'Gorman, E., Kervella, P., Harper, G. M., Richards, A. M. S., Decin, L., Montarg\`es, M., \& McDonald, I. 2017, A\&A, 602, L10

Pardo, J. R., Alcolea, J., Bujarrabal, V., Colomer, F., del Romero, A., \& de Vicente, P. 2004, A\&A, 424, 145

Phillips, R. B., Straughn, A. H., Doeleman, S. S., \& Lonsdale, C. J. 2003, ApJ, 588, 105

Reid, M. J. \& Goldston, J. E. 2002, ApJ, 568, 931

Reid, M. J. \& Menten, K. M. 1997, ApJ, 476, 327

Schr\"oder, K.-P. \& Sedlmayr, E. 2001, A\&A, 366, 913

Selina, R. J., Murphy, E. J., McKinnon, M., et al. 2018, in Science with a Next Generation Very Large Array, ASP Monograph 7, ed. E. J. Murphy (San Francisco: ASP), 15

Snyder, L. E. \& Buhl, D. 1974, ApJ, 189, L31

Soria-Ruiz, R., Alcolea, J., Colomer, F., Bujarrabal, V., Desmurs, J.-F., Marvel, K. B., \& Diamond, P. J. 2004, A\&A, 426, 131

Tosi, M. 2007, ASPC, 368, 353

Van Eck, S., Goriely, S., Jorissen, A., \& Plez, B. 2001, Nature, 412, 793

Vlemmings, W. H. T. 2018, CoSka, 48, 187

Vlemmings, W., Khouri, T., O'Gorman, E., et al. 2017, Nature Ast, 1, 848

Wilson, W. J. \& Barrett, A. H. 1968, AJ, 73, 209

Wittkowski, M., Boboltz, D.
A., Ohnaka, K., Driebe, T., \& Scholz, M. 2007, A\&A, 470, 191

Woitke, P. 2006, A\&A, 460, 9

Yoon, S.-C. \& Cantiello, M. 2010, ApJ, 717, 62

Yoon, D.-H., Cho, S.-H., Yun, Y., et al. 2018, Nature Comm., 9, 2534

\end{hangparas} \end{document}
\section{Introduction} Analogues of the reverse order law $(AB)^{-1} = B^{-1}A^{-1}$ for bijective operators have been studied intensively for various kinds of generalized inverses. Most articles and books are concerned with the matrix case; see for example \cite{Greville1966, Erdelyi1966, RaoMitra1971, ShinozakiSibuya1974, Werner1994, DePierroWei1998, Ben-IsraelGreville2003, Mitra2010, TakaneTianYanai2007, TianCheng2004, LiuWei2008}. For infinite-dimensional vector spaces, usually additional topological structures like Banach or Hilbert spaces are assumed; see for example \cite{Nashed1987, DjordjevicRakovevic2008, DjordjevicDincic2010, DincicDjordjevic2013}. In our approach, we systematically exploit duality results that hold in arbitrary vector spaces and a corresponding duality principle for statements about generalized inverses and projectors; see Appendix~\ref{sec:Duality}. The validity of the reverse order law can be reduced to the question whether the product of two projectors is a projector (Section \ref{sec:GI}). This problem is studied in \cite{GrossTrenkler1998, TakaneYanai1999, Werner1992} for finite-dimensional vector spaces. We discuss necessary and sufficient conditions that carry over to arbitrary vector spaces and can be expressed in terms of the kernels and images of the respective operators alone (Section \ref{sec:Projectors}). Applying the duality principle leads to new conditions and a characterization of the commutativity of two projectors that generalizes a result from \cite{BaksalaryBaksalary2002}. In Section~\ref{sec:ROL}, we translate the results for projectors to generalized inverses and obtain necessary and sufficient conditions for the reverse order law in arbitrary vector spaces. Based on these conditions, we give a short proof for the characterization in Theorem~\ref{thm:allinner} of two operators such that the reverse order law holds for all inner inverses (also called g-inverses or $\{1\}$-inverses).
Moreover, we show that there always exist algebraic generalized inverses (also called $\{1,2\}$-inverses) of two operators $A$ and $B$ such that their product in reverse order is an algebraic generalized inverse\ of $AB$. Assuming the reverse order law to hold, Theorem \ref{thm:RevOrderLaw} gives a representation of the product of two outer inverses ($\{2\}$-inverses) that can be computed using only kernel and image of the outer inverses of the factors. In this representation, we rely on a description of the kernel of a composition using inner inverses (Section \ref{sec:Compositions}) and implicit representations of subspaces via their orthogonals in the dual space. Furthermore, we avoid the computation of generalized inverses by using the associated transpose map. Examples for matrices illustrating the results are given in Section~\ref{sec:Example}. An important application for our results is given by linear boundary problems (Section~\ref{sec:BP}). Their solution operators (Green's operators) are generalized inverses, and it is natural to express infinite-dimensional solution spaces implicitly via the (homogeneous) boundary conditions they satisfy. Green's operators for ordinary boundary problems are Fredholm operators, for which we can check the conditions for the reverse order law algorithmically and compute the implicit representation of the product (Section \ref{sec:Fredholm}). Hence we can test if the product of two (generalized) Green's operators is again a Green's operator, and we can determine which boundary problem it solves. \section{Generalized inverses} \label{sec:GI} In this section, we first recall basic properties of generalized inverses. For further details and proofs, we refer to \cite{NashedVotruba1976, Nashed1987} and the references therein. Throughout this article, $U$, $V$, and $W$ always denote vector spaces over the same field $F$, and we use the notation $V_1 \leq V$ for a subspace $V_1$ of $V$.
\begin{definition}\label{dfn:GI} Let $T\colon V \to W$ be linear. We call a linear map $G \colon W \to V$ an \emph{inner inverse} of $T$ if $TGT = T$ and an \emph{outer inverse} of $T$ if $GTG = G$. If $G$ is an inner and an outer inverse of $T$, we call $G$ an \emph{algebraic generalized inverse}\ of $T$. \end{definition} This terminology of generalized inverses is adopted from \cite{NashedVotruba1976}; other sources refer to inner inverses as generalized inverses or g-inverses, whereas algebraic generalized inverses are also called reflexive generalized inverses. Also the notations $\{1\}$-inverse (resp. $\{2\}$- and $\{1,2\}$-inverse) are used, which refer to the corresponding Moore-Penrose equations the generalized inverse satisfies. \begin{proposition} \label{prop:OIChar} Let $T \colon V \to W$ and $G \colon W \to V$ be linear. The following statements are equivalent: \begin{enumerate}[label=(\roman*)] \item $G$ is an outer inverse of $T$. \item $GT$ is a projector and $\img GT = \img G$. \label{oi:Nashed2} \item $GT$ is a projector and $V = \img G \oplus \Ker GT$. \label{oi:Nashed3} \item $GT$ is a projector and $W = \img T + \Ker G$. \label{oi:Nashed4} \item $TG$ is a projector and $\Ker TG = \Ker G$. \label{oi:Nashed5} \item $TG$ is a projector and $W = \Ker G \oplus \img TG$. \label{oi:Nashed6} \item $TG$ is a projector and $\img G \cap \Ker T = \{0\}$. \label{oi:Nashed7} \end{enumerate} \end{proposition} Corresponding to \ref{oi:Nashed7} and \ref{oi:Nashed6}, for subspaces $B \leq V$ and $E \leq W$ with \begin{equation*} B \cap \Ker T = \{0\} \quad \text{and} \quad W = E \oplus T(B), \end{equation*} we can construct an outer inverse\ $G$ of $T$ with $\img G = B$ and $\Ker G = E$ as follows; cf.\ \cite[Cor.\ 8.2]{Nashed1987}. We consider the projector $Q$ with \begin{equation} \label{eq:DefQOI} \img Q = T(B), \;\; \Ker Q = E.
\end{equation} The restriction $T|_B\colon B \to T(B)$ is bijective since $B \cap \Ker T = \{0\}$, and we can define $G = (T|_B)^{-1}Q$. One easily verifies that $G$ is an outer inverse\ of $T$ with $\img G = B$ and $\Ker G = E$. Since by \ref{oi:Nashed3} we have $V = B \oplus T^{-1}(E)$, we define the projector $P$ in analogy to $Q$ by \begin{equation} \label{eq:DefPOI} \img P = T^{-1}(E), \;\; \Ker P = B. \end{equation} Then, by definition and by Proposition \ref{prop:OIChar}, we have \begin{equation*}\label{eq:Identities} GTG = G, \quad TG = Q, \quad \text{and} \quad GT = 1 - P, \end{equation*} and $G$ is determined uniquely by these equations. Hence an outer inverse\ depends only on the choice of the defining spaces $B$ and $E$. We use the notations $G = \operatorname{O}(T, B, E)$ and $G = \operatorname{O}(T, P, Q)$ for $P$ and $Q$ as in \eqref{eq:DefPOI} and \eqref{eq:DefQOI}. Obviously, $G$ is an outer inverse of $T$ if and only if $T$ is an inner inverse of $G$. Therefore, we get a result analogous to Proposition \ref{prop:OIChar} for inner inverses by interchanging the roles of $T$ and $G$. The construction of inner inverses is not completely analogous to that of outer inverses; see \cite[Prop.\ 1.3]{NashedVotruba1976}. For subspaces $B \leq V$ and $E \leq W$ such that \begin{equation} \label{eq:DirSumDec} V= \Ker T \oplus B\quad \text{and} \quad W = \img T \oplus E, \end{equation} an inner inverse $G$ of $T$ is given on $\img T$ by $(T|_B)^{-1}$ and can be chosen arbitrarily on $E$. For such an inner inverse with $B=\img GT$ and $E= \Ker TG$, we write $G \in \operatorname{I}(T, B, E)$. For constructing algebraic generalized inverses, we start with direct sums as in \eqref{eq:DirSumDec}, but require $\Ker G = E$ and $\img G = B$. We use the notation $G = \operatorname{G}(T, B, E)$.
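In finite dimensions, this construction can be carried out numerically. The following sketch (over $\mathbb{R}$ rather than a general field $F$; the helper name \texttt{outer\_inverse} is ours and not from the literature) builds $G = (T|_B)^{-1}Q$ from column-basis matrices for $B$ and $E$:

```python
import numpy as np

def outer_inverse(T, B, E):
    """Numerical sketch of O(T, B, E): outer inverse of T with
    img G = span(B) and Ker G = span(E).
    Assumes span(B) meets Ker T trivially and W = span(E) + T(span(B))."""
    TB = T @ B                                   # column basis of T(B)
    S = np.hstack([TB, E])                       # invertible, since W = T(B) (+) E
    D = np.diag([1.0] * TB.shape[1] + [0.0] * E.shape[1])
    Q = S @ D @ np.linalg.inv(S)                 # projector: img Q = T(B), Ker Q = E
    C = np.linalg.lstsq(TB, Q, rcond=None)[0]    # apply (T|_B)^{-1} columnwise to Q
    return B @ C

T = np.array([[1., 0., 1.],
              [0., 1., 1.]])                     # Ker T = span((1, 1, -1)^T)
B = np.array([[1.], [0.], [0.]])                 # B ∩ Ker T = {0}
E = np.array([[0.], [1.]])                       # R^2 = E (+) T(B)
G = outer_inverse(T, B, E)

assert np.allclose(G @ T @ G, G)                 # G is an outer inverse of T
assert not np.allclose(T @ G @ T, T)             # but not an inner inverse here
```

The final assertion illustrates that an outer inverse constructed this way need not be an inner inverse when $B$ is a proper subspace of a complement of $\Ker T$.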
The following result for inner inverses is well-known in the matrix case \cite{Searle1971,ShinozakiSibuya1974,Werner1992} and its elementary proof remains valid for arbitrary vector spaces. \begin{proposition} \label{prop:CompIINS} Let $T_1 \colon V \to W$ and $T_2 \colon U \to V$ be linear with outer (resp. inner) inverses $G_1$ and $G_2$. Let $P=G_1T_1$ and $Q=T_2G_2$. Then $G_2G_1$ is an outer (resp. inner) inverse of $T_1T_2$ if and only if $QP$ (resp. $PQ$) is a projector. \end{proposition} \begin{proof} Let $G_2G_1$ be an outer inverse of $T_1T_2$, that is, $G_2G_1 = G_2G_1T_1T_2G_2G_1$. Multiplying with $T_2$ from the left and with $T_1$ from the right yields \begin{equation*} T_2G_2G_1T_1 = T_2G_2G_1T_1T_2G_2G_1T_1, \end{equation*} thus $QP = T_2G_2G_1T_1$ is a projector. For the other direction, we multiply the previous equation with $G_2$ from the left and $G_1$ from the right and use that $G_1T_1G_1 =G_1$ and $G_2T_2G_2=G_2$. The proof for inner inverses follows by interchanging the roles of $T_i$ and $G_i$. \end{proof} \section{Kernel of compositions} \label{sec:Compositions} We now describe the inverse image of a subspace under the composition of two linear maps using inner inverses. For projectors, kernel and image of the composition can be expressed in terms of kernel and image of the corresponding factors alone. Note that a projector is an inner inverse of itself. \begin{proposition} \label{prop:KerComp} Let $T_1 \colon V \to W$ and $T_2 \colon U \to V$ be linear and $G_2$ an inner inverse of $T_2$. For a subspace $W_1 \leq W$, we have \begin{equation*} (T_1T_2)^{-1}(W_1) = G_2(T_1^{-1}(W_1) \cap\img T_2) \oplus \Ker T_2 \end{equation*} for the inverse image of the composition. In particular, \begin{equation*} \Ker T_1T_2 = G_2(\Ker T_1 \cap \img T_2) \oplus \Ker T_2. 
\end{equation*} \end{proposition} \begin{proof} Since $Q_2 := T_2G_2$ is a projector onto $\img T_2$ by Proposition \ref{prop:OIChar} \ref{oi:Nashed2} (interchanging the roles of $T$ and $G$), we have \begin{multline*} T_1T_2(G_2(T_1^{-1}(W_1)\cap \img T_2) + \Ker T_2) = T_1 Q_2 (T_1^{-1}(W_1) \cap \img T_2) + 0\\ = T_1 (T_1^{-1}(W_1) \cap \img T_2) \leq W_1 \cap \img T_1T_2 \leq W_1. \end{multline*} Conversely, let $u\in (T_1T_2)^{-1}(W_1)$. Then $T_2u =v$ with $v \in T_1^{-1}(W_1)$. Since also $v \in \img T_2$, we have \begin{equation*} T_2(u -G_2v) = T_2u - Q_2v = T_2 u -v = v-v= 0, \end{equation*} that is, $u - G_2v \in \Ker T_2$. Writing $u=G_2v+ (u-G_2v)$ yields $u \in G_2(T_1^{-1}(W_1) \cap\img T_2) + \Ker T_2$. The sum is direct since by Proposition \ref{prop:OIChar} \ref{oi:Nashed6} (interchanging the roles of $T$ and $G$), we have $U = \Ker T_2 \oplus \img G_2T_2$. \end{proof} \begin{corollary} \label{cor:KerImComp} Let $T \colon V \to W$ be linear and let $P \colon W \to W$ and $Q \colon V \to V$ be projectors. Then \begin{equation*} \Ker TQ = (\Ker T \cap \img Q) \oplus \Ker Q \quad \text{and} \quad \img PT = (\img T + \Ker P) \cap \img P. \end{equation*} \end{corollary} \begin{proof} Applying Proposition \ref{prop:KerComp} yields \begin{equation*} \Ker TQ = Q(\Ker T \cap \img Q) \oplus \Ker Q = (\Ker T \cap \img Q) \oplus \Ker Q. \end{equation*} The statement for the image follows from the duality principle \ref{DualPrin}. \end{proof} This result generalizes \cite[Lemma~2.2]{Werner1992}, where the kernel and image of a product $PQ$ of two projectors are computed as above, when $PQ$ is again a projector. \section{Products of projectors} \label{sec:Projectors} In view of Proposition \ref{prop:CompIINS}, we study necessary and sufficient conditions for the product of two projectors to be a projector. Throughout this section let $P, Q \colon V \to V$ denote projectors.
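In finite dimensions, the first identity of Corollary~\ref{cor:KerImComp} can be checked by comparing dimensions. The following small example (ours, over $\mathbb{R}$, not from the text) uses the rank formula $\dim(A \cap B) = \dim A + \dim B - \dim(A+B)$:

```python
import numpy as np

rank = np.linalg.matrix_rank

T = np.array([[1., 0., 0.],
              [0., 1., 0.]])           # linear map R^3 -> R^2, Ker T = span(e3)
Q = np.diag([1., 0., 1.])              # projector: img Q = span(e1, e3), Ker Q = span(e2)
assert np.allclose(Q @ Q, Q)

dim_ker_TQ = 3 - rank(T @ Q)           # rank-nullity applied to TQ

ker_T = np.array([[0.], [0.], [1.]])
img_Q = Q[:, [0, 2]]
# dim(Ker T ∩ img Q) = dim Ker T + dim img Q - dim(Ker T + img Q)
dim_int = (3 - rank(T)) + rank(img_Q) - rank(np.hstack([ker_T, img_Q]))
dim_ker_Q = 3 - rank(Q)

# Ker TQ = (Ker T ∩ img Q) ⊕ Ker Q, so the dimensions must add up
assert dim_ker_TQ == dim_int + dim_ker_Q
```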
The first of the following necessary and sufficient conditions for the product of $P$ and $Q$ to be a projector is mentioned as an exercise without proof in \cite[p.\ 339]{BrownPage1970}. In \cite[Lemma~3]{GrossTrenkler1998} the same result is formulated for matrices, but the proof is valid for arbitrary vector spaces. The second necessary and sufficient condition for the matrix case is given in \cite[Lemma~2.2]{Werner1992}. The simpler proof from \cite{TakaneYanai1999} carries over to arbitrary vector spaces. \begin{lemma} The composition $PQ$ is a projector if and only if \[\img PQ \leq \img Q \oplus (\Ker P \cap \Ker Q)\] if and only if \[ \img Q \leq \img P \oplus (\Ker P \cap \img Q) \oplus (\Ker P \cap \Ker Q) . \] \end{lemma} We obtain the following characterization of the idempotency of $PQ$ in terms of the kernels and images of $P$ and~$Q$ alone. \begin{theorem} \label{thm:GTMainResult} The following statements are equivalent: \begin{enumerate} [label=(\roman*)] \item The composition $PQ$ is a projector. \label{PQProj1} \item $\img P \cap (\img Q + \Ker P) \leq \img Q \oplus ( \Ker P \cap \Ker Q)$ \label{PQProj2} \item $\img Q \leq \img P \oplus (\Ker P \cap \img Q) \oplus (\Ker P \cap \Ker Q)$ \label{PQProj4} \item $\Ker Q \oplus (\Ker P \cap \img Q) \geq \Ker P \cap (\img Q + \img P)$ \label{PQProj3} \item $\Ker P \geq \Ker Q \cap (\img Q + \Ker P) \cap (\img Q + \img P)$ \label{PQProj5} \end{enumerate} \end{theorem} \begin{proof} The equivalence of~\ref{PQProj1},~\ref{PQProj2}, and~\ref{PQProj4} follows from the previous lemma and Corollary~\ref{cor:KerImComp}. By the duality principle~\ref{DualPrin}, the last two conditions are equivalent to~\ref{PQProj2} and~\ref{PQProj4}, respectively. \end{proof} For algebraic generalized inverses, it is also interesting to have sufficient conditions for $PQ$ as well as $QP$ to be projectors; for example, if $P$ and $Q$ commute. This can again be characterized in terms of the images and kernels of $P$ and $Q$ alone.
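As a concrete instance of Theorem~\ref{thm:GTMainResult} in $\mathbb{R}^2$ (an illustrative example of ours, not from the text), take $P$ the projector onto $\operatorname{span}(e_1)$ along $\operatorname{span}(e_2)$ and $Q$ the orthogonal projector onto $\operatorname{span}((1,1)^T)$; then $PQ$ is not idempotent, and accordingly condition~\ref{PQProj4} fails:

```python
import numpy as np

rank = np.linalg.matrix_rank

P = np.array([[1., 0.],
              [0., 0.]])          # img P = span(e1), Ker P = span(e2)
Q = 0.5 * np.ones((2, 2))         # img Q = span((1,1)), Ker Q = span((1,-1))
assert np.allclose(P @ P, P) and np.allclose(Q @ Q, Q)

PQ = P @ Q
assert not np.allclose(PQ @ PQ, PQ)     # PQ is not a projector

img_P = np.array([[1.], [0.]])
img_Q = np.array([[1.], [1.]])
# Here Ker P ∩ img Q = Ker P ∩ Ker Q = {0}, so the right-hand side of the
# condition reduces to img P, and img Q is not contained in it:
assert rank(np.hstack([img_P, img_Q])) > rank(img_P)
```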
If $PQ = QP$, one sees with Corollary~\ref{cor:KerImComp} that \begin{equation} \label{eq:KerSum} \img PQ = \img P \cap \img Q \quad\text{and}\quad \Ker PQ = \Ker P + \Ker Q. \end{equation} In general, these conditions are necessary but not sufficient for commutativity of $P$ and $Q$, see \cite[Ex.\ 1]{GrossTrenkler1998}. Using Corollary~\ref{cor:KerImComp}, modularity~\eqref{eq:Modularity}, and~\eqref{eq:ModImplication}, one obtains the following characterization of projectors with image or kernel as in~\eqref{eq:KerSum}; for further details see~\cite{Korporal2012}. For the commutativity of projectors see also~\cite[p. 339]{BrownPage1970}. \begin{proposition} \label{prop:CorLem} The composition $PQ$ is a projector with \begin{enumerate} [label=(\roman*)] \item $\img PQ = \img P \cap \img Q$ if and only if \[\img Q = (\img P \cap \img Q) \oplus (\Ker P \cap \img Q).\] \item $\Ker PQ = \Ker P + \Ker Q$ if and only if \[\Ker P = (\Ker P \cap \Ker Q) \oplus (\Ker P \cap \img Q).\] \end{enumerate} \end{proposition} \begin{corollary} \label{cor:PQ=QP} We have $PQ=QP$ if and only if \begin{equation*}\label{eq:PQ=QPImQ} \img Q = (\img P \cap \img Q) \oplus (\Ker P \cap \img Q) \end{equation*} and \begin{equation*}\label{eq:PQ=QPKerQ} \Ker Q = (\img P \cap \Ker Q) \oplus (\Ker P \cap \Ker Q). \end{equation*} \end{corollary} In~\cite[Thm.~4]{GrossTrenkler1998} and~\cite[Thm.~3.2]{BaksalaryBaksalary2002} different necessary and sufficient conditions for the commutativity of two projectors are given, but both require the computation of $PQ$ as well as of $QP$. \section{Reverse order law for generalized inverses} \label{sec:ROL} Proposition \ref{prop:CompIINS} and Theorem \ref{thm:GTMainResult} together give necessary and sufficient conditions for the reverse order law for outer inverses to hold, in terms of the defining spaces $B_i$ and $E_i$ alone. 
\begin{theorem} \label{thm:G2G1OI} Let $T_1 \colon V \to W$ and $T_2 \colon U \to V$ be linear with outer inverses $G_1=\operatorname{O}(T_1, B_1, E_1)$ and $G_2=\operatorname{O}(T_2, B_2, E_2)$. The following conditions are equivalent: \begin{enumerate} [label=(\roman*)] \item $G_2G_1$ is an outer inverse of $T_1T_2$. \label{G2G1OI} \item $T_2(B_2) \cap (B_1 + E_2) \leq B_1 \oplus ( E_2 \cap T_1^{-1}(E_1))$ \label{G2G1OI2} \item $B_1 \leq T_2(B_2) \oplus (E_2 \cap B_1) \oplus (E_2 \cap T_1^{-1}(E_1))$ \label{G2G1OI4} \item $T_1^{-1}(E_1) \oplus (E_2 \cap B_1) \geq E_2 \cap (B_1 + T_2(B_2))$ \label{G2G1OI3} \item $E_2 \geq T_1^{-1}(E_1) \cap (B_1 + E_2) \cap (B_1 + T_2(B_2))$ \label{G2G1OI5} \end{enumerate} \end{theorem} \begin{proof} Recall that $\img G_i = B_i$ and $\Ker G_i = E_i$, and that $Q=T_2G_2$ and $P = G_1T_1$ are projectors with \begin{equation*} \img P = B_1, \quad \Ker P = T_1^{-1}(E_1), \quad \img Q = T_2(B_2), \quad \text{and} \quad \Ker Q = E_2. \end{equation*} By Proposition \ref{prop:CompIINS}, $G_2G_1$ is an outer inverse if and only if $QP$ is a projector. Applying Theorem \ref{thm:GTMainResult} proves the claim. \end{proof} In the following theorem, we give the analogous conditions for inner inverses, where $P =G_1T_1$ and $Q = T_2G_2$ are the projectors corresponding to the direct sums in \eqref{eq:DirSumDec}. Note that the conditions for inner inverses only depend on the choice of $B_1$ and $E_2$, but not on $B_2$ and $E_1$. The characterization \ref{G2G1GI4} and the orthogonal of \ref{G2G1GI5} in the following theorem generalize \cite[Thm. 2.3]{Werner1992} to arbitrary vector spaces. \begin{theorem}\label{thm:G2G1GI} Let $T_1 \colon V \to W$ and $T_2 \colon U \to V$ be linear with inner inverses $G_1 \in \operatorname{I}(T_1, B_1, E_1)$ and $G_2 \in \operatorname{I}(T_2, B_2, E_2)$. The following conditions are equivalent: \begin{enumerate} [label=(\roman*)] \item $G_2G_1$ is an inner inverse of $T_1T_2$.
\label{G2G1GI} \item $B_1 \cap (\img T_2 + \Ker T_1) \leq \img T_2 \oplus ( \Ker T_1 \cap E_2)$ \label{G2G1GI2} \item $\img T_2 \leq B_1 \oplus (\Ker T_1 \cap \img T_2) \oplus (\Ker T_1 \cap E_2)$ \label{G2G1GI4} \item $E_2 \oplus (\Ker T_1 \cap \img T_2) \geq \Ker T_1 \cap (\img T_2 + B_1)$ \label{G2G1GI3} \item $\Ker T_1 \geq E_2 \cap (\img T_2 + \Ker T_1) \cap (\img T_2 + B_1)$ \label{G2G1GI5} \end{enumerate} \end{theorem} The question when the reverse order law holds for all inner inverses of $T_1$ and $T_2$ was answered for matrices in \cite[Thm. 2.3]{Werner1994}, and an alternative proof was given in \cite{Gross1997}. Using the previous characterizations, we give a short proof that generalizes the result to arbitrary vector spaces. \begin{theorem}\label{thm:allinner} Let $T_1 \colon V \to W$ and $T_2 \colon U \to V$ be linear. Then $G_2G_1$ is an inner inverse of $T_1T_2$ for all inner inverses $G_1$ of $T_1$ and $G_2$ of $T_2$ if and only if $T_1T_2=0$ or $\Ker T_1 \leq \img T_2$. \end{theorem} \begin{proof} If $\Ker T_1 \leq \img T_2$ then $\Ker T_1 \cap \img T_2=\Ker T_1$ and~\ref{G2G1GI4} in the previous theorem is satisfied since $\Ker T_1+B_1=V$. The case $T_1T_2=0$ is trivial. For the reverse implication, assume that $\img T_2$ is not contained in $\Ker T_1$ and $\Ker T_1$ is not contained in $\img T_2$. Choose $V_1, V_2 \leq V$ such that we have two direct sums $\Ker T_1 =(\img T_2 \cap \Ker T_1) \oplus V_1 $ and $ \img T_2 = (\img T_2 \cap \Ker T_1) \oplus V_2 $. Then we have \begin{equation} \label{eq:DirImKer} \img T_2 + \Ker T_ 1= (\img T_2 \cap \Ker T_1) \oplus V_1 \oplus V_2. \end{equation} By assumption, we can choose non-zero $v_1\in V_1$ and $v_2 \in V_2$. Let $v=v_1+v_2$. Then $v\in \img T_2 + \Ker T_1$ and $v \not \in \Ker T_1$, $v \not \in \img T_2$. Hence we can choose $B_1$ and $E_2$ such that $v\in B_1$ and $v \in E_2$ and $V= \Ker T_1 \oplus B_1=\img T_2 \oplus E_2$. 
Then \[ v\in E_2 \cap (\img T_2 + \Ker T_1) \cap (\img T_2 + B_1) \] but $v\notin \Ker T_1$. Hence \ref{G2G1GI5} in the previous theorem is not satisfied for inner inverses with $\img G_1 = B_1$ and $\Ker G_2 = E_2$. \end{proof} Werner \cite[Thm. 3.1]{Werner1992} proves that for matrices it is always possible to construct inner inverses such that the reverse order law holds. Using the necessary and sufficient condition for outer inverses above, we extend this result to algebraic generalized inverses in arbitrary vector spaces. The special case of Moore-Penrose inverses is treated in \cite[Thm. 3.2]{ShinozakiSibuya1974}, and explicit solutions are constructed in \cite{ShinozakiSibuya1979, WibkerHoweGilbert1979}. \begin{theorem} \label{prop:Construct} Let $T_1 \colon V \to W$ and $T_2 \colon U \to V$ be linear. There always exist algebraic generalized inverses $G_1$ of $T_1$ and $G_2$ of $T_2$ such that $G_2 G_1$ is an algebraic generalized inverse of $T_1T_2$. \end{theorem} \begin{proof} Choose $V_1, V_2 \leq V$ as in the previous proof such that~\eqref{eq:DirImKer} holds. Moreover, choose $V_3 \leq V$ such that \begin{equation*} V = (\img T_2 + \Ker T_1) \oplus V_3 = (\img T_2 \cap \Ker T_1) \oplus V_1 \oplus V_2 \oplus V_3. \end{equation*} Then $B_1=V_2 \oplus V_3$ is a direct complement of $\Ker T_1$, and $E_2=V_1 \oplus V_3$ is a direct complement of $\img T_2$. Hence there exist respectively an algebraic generalized inverse\ $G_1$ of $T_1$ with $\img G_1 = B_1$ and $G_2$ of $T_2$ with $\Ker G_2 = E_2$. We verify that such $G_1$ and $G_2$ satisfy Theorem~\ref{thm:G2G1OI}~\ref{G2G1OI4}, where $T_1^{-1}(E_1) =\Ker T_1$ and $T_2(B_2)= \img T_2$ since $G_1$ and $G_2$ are algebraic generalized inverses: \begin{equation*} \img T_2 \oplus (E_2 \cap B_1) \geq \img T_2 \oplus V_3 = (\img T_2 \cap \Ker T_1) \oplus V_2 \oplus V_3 \geq B_1.
\end{equation*} Similarly, we verify Theorem~\ref{thm:G2G1GI}~\ref{G2G1GI4}: \begin{equation*} B_1 \oplus (\Ker T_1 \cap \img T_2) = V_2 \oplus V_3 \oplus (\Ker T_1 \cap \img T_2) \geq V_2 \oplus (\Ker T_1 \cap \img T_2) = \img T_2. \end{equation*} Hence $G_2G_1$ is an algebraic generalized inverse\ of $T_1T_2$ for all $G_1=\operatorname{G}(T_1,B_1,E_1)$ and $G_2=\operatorname{G}(T_2,B_2,E_2)$, independent of the choice of $E_1$ and $B_2$. \end{proof} \section{Representing the product of outer inverses} In this section, we assume that for two linear maps $T_1 \colon V \to W$ and $T_2 \colon U \to V$ with outer inverses $G_1$ and $G_2$ the reverse order law holds. Our goal is to find a description of the product $G_2G_1$ that does not require the explicit knowledge of $G_1$ and $G_2$. Using the representation via projectors, one immediately verifies that \begin{equation*} \operatorname{O}(T_2, P_2, Q_2)\operatorname{O}(T_1, P_1, Q_1)= \operatorname{O}(T_1T_2, P_2 + G_2P_1T_2, T_1Q_2G_1), \end{equation*} but this expression involves both outer inverses $G_1$ and $G_2$. For the representation via defining spaces, we compute the kernel and the image of the product. \begin{lemma} \label{lem:KerImG2G1} Let $T_1 \colon V \to W$ and $T_2 \colon U \to V$ be linear with outer inverses $G_1 = \operatorname{O}(T_1, B_1, E_1)$ and $G_2= \operatorname{O}(T_2, B_2, E_2)$. Then \begin{equation*} \Ker G_2G_1 = E_1 \oplus T_1(B_1 \cap E_2) \quad \text{and} \quad \img G_2G_1 = G_2 ((B_1 + E_2) \cap \img T_2). \end{equation*} \end{lemma} \begin{proof} Recall that by definition $\Ker G_i = E_i$ and $\img G_i = B_i$. The first identity follows directly from Proposition \ref{prop:KerComp}. For the second identity, we first note that for a linear map $G$ and subspaces $V_1,V_2$, we have $G(V_1 \cap V_2) = G(V_1) \cap G(V_2)$ if $\Ker G \leq V_1$.
Hence $G_2 ((B_1 + E_2) \cap \img T_2)$ equals \[ G_2((\img G_1 + \Ker G_2) \cap \img T_2) = G_2(\img G_1) \cap G_2(\img T_2) = \img G_2G_1, \] since $G_2(\img T_2) = \img G_2$ by Proposition~\ref{prop:OIChar}~\ref{oi:Nashed2}. \end{proof} Note that the expression for the image of the composition requires the explicit knowledge of $G_2$. In particular, the reverse order law takes the form \begin{equation*} \operatorname{O}(T_2, B_2, E_2)\operatorname{O}(T_1, B_1, E_1) = \operatorname{O}(T_1T_2, G_2((B_1 + E_2)\cap \img T_2), E_1 +T_1(B_1 \cap E_2)). \end{equation*} Werner \cite[Thm. 2.4]{Werner1992} gives a result in a similar spirit for inner inverses of matrices. Using an implicit description of $\img G_i$, it is possible to state the reverse order law in a form that depends on the kernels and images of the respective outer inverses alone. This approach is motivated by our application to linear boundary problems (Section~\ref{sec:BP}), where it is natural to define solution spaces via the boundary conditions they satisfy. In more detail, the Galois connection from Appendix \ref{sec:Duality} allows one to represent a subspace $B$ implicitly via the orthogonally closed subspace $\mathscr{B} = B^{\perp}$ of the dual space. We will therefore use the notation $G=\operatorname{O}(T, \mathscr{B}, E)$ for the outer inverse with $\img G = \B^{\perp}$ and $\Ker G = E$, as well as the analogue for inner inverses. \begin{theorem} \label{thm:RevOrderLaw} Let $T_1 \colon V \to W$ and $T_2 \colon U \to V$ be linear with outer inverses $G_1= \operatorname{O}(T_1, \mathscr{B}_1, E_1)$ and $G_2 = \operatorname{O}(T_2, \mathscr{B}_2, E_2)$.
If $G_2G_1$ is an outer inverse\ of $T_1T_2$, then \begin{equation} \label{eq:DefComp} \operatorname{O}(T_2, \mathscr{B}_2, E_2)\operatorname{O}(T_1, \mathscr{B}_1, E_1)= \operatorname{O}(T_1T_2, \mathscr{B}_2 \oplus T_2^*(\mathscr{B}_1 \cap E_2^{\perp}), E_1 \oplus T_1(\Boi{1} \cap E_2)), \end{equation} where $T_2^*$ denotes the transpose of $T_2$. \end{theorem} \begin{proof} From Lemma \ref{lem:KerImG2G1} we already know that $\Ker G_2G_1 = E_1 \oplus T_1(\Boi{1} \cap E_2)$. From Propositions \ref{prop:Duals} and \ref{prop:KerComp} we get \begin{multline*} (\img G_2G_1)^{\perp} = \Ker G_1^*G_2^* = T_2^*(\Ker G_1^* \cap \img G_2^*) \oplus \Ker G_2^*\\ = T_2^*((\img G_1)^{\perp} \cap (\Ker G_2)^{\perp} ) \oplus (\img G_2)^{\perp} = T_2^*(\mathscr{B}_1 \cap E_2^{\perp}) \oplus \mathscr{B}_2, \end{multline*} and thus \eqref{eq:DefComp} holds. \end{proof} A computational advantage of this representation is that one can determine $G_2G_1$ directly by computing only one outer inverse instead of computing both $G_1$ and $G_2$; see the next section for an example. \section{Examples for matrices} \label{sec:Example} In this section, we illustrate our results for finite-dimensional vector spaces. In particular, we show how to compute directly the composition of two generalized inverses using the reverse order law in the form~\eqref{eq:DefComp}. Consider the following linear maps $T_1 \colon \mathbb{Q}^4 \to \mathbb{Q}^3$ and $T_2 \colon \mathbb{Q}^3 \to \mathbb{Q}^4$ given by \begin{equation*} T_1 = \begin{pmatrix} 1 & -1 & -1 & 1\\ 0 & 2 & 2 & -2\\ 3 & 1 & 1 & -1\\ \end{pmatrix} \quad \text{and} \quad T_2 = \begin{pmatrix} 1 & -2 & -1 \\ 1 & 1 & 2 \\ -1 & 5 & 4 \\ -1 & 5 & 4 \\ \end{pmatrix}. \end{equation*} We first use Theorems \ref{thm:G2G1OI} and \ref{thm:G2G1GI} to check whether for algebraic generalized inverses $G_1 = \operatorname{G}(T_1, B_1, E_1)$ and $G_2=\operatorname{G}(T_2, B_2, E_2)$ the composition $G_2G_1$ is an algebraic generalized inverse\ of $T_1T_2$.
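The kernel and image data used below can be confirmed numerically; the following check (ours, in floating point over $\mathbb{R}$, standing in for exact computation over $\mathbb{Q}$) verifies the bases for $\Ker T_1$ and $\img T_2$ stated next:

```python
import numpy as np

T1 = np.array([[1., -1., -1.,  1.],
               [0.,  2.,  2., -2.],
               [3.,  1.,  1., -1.]])
T2 = np.array([[ 1., -2., -1.],
               [ 1.,  1.,  2.],
               [-1.,  5.,  4.],
               [-1.,  5.,  4.]])

# columns: (0,1,0,1)^T and (0,0,1,1)^T
ker_T1 = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
# columns: (1,0,-2,-2)^T and (0,1,1,1)^T
img_T2 = np.array([[1., 0.], [0., 1.], [-2., 1.], [-2., 1.]])

assert np.allclose(T1 @ ker_T1, 0)            # both vectors lie in Ker T1
assert np.linalg.matrix_rank(T1) == 2         # dim Ker T1 = 4 - 2 = 2, so they span it
# adjoining the claimed basis to T2 does not raise the rank, so it spans img T2
assert np.linalg.matrix_rank(np.hstack([T2, img_T2])) == np.linalg.matrix_rank(T2) == 2
```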
For testing the conditions, we only need to fix $B_1 = \img G_1$ and $E_2 = \Ker G_2$ such that $B_1 \oplus \Ker T_1 = \mathbb{Q}^4 = E_2 \oplus \img T_2$. We have \begin{equation*} \Ker T_1 = \Ll((0,1,0,1)^T, (0,0,1,1)^T),\quad \img T_2 = \Ll( (1,0,-2,-2)^T, (0,1,1,1)^T), \end{equation*} so we may choose for example \begin{equation*} B_1 = \Ll((1,0,0,0)^T, (0,1,0,0)^T), \quad E_2 = \Ll((1,0,0,0)^T, (0,0,1,0)^T). \end{equation*} For algebraic generalized inverses, we obtain from Theorem \ref{thm:G2G1OI} \ref{G2G1OI4} the necessary and sufficient condition \begin{equation*} B_1 \leq \img T_2 \oplus (E_2 \cap B_1) \oplus (E_2 \cap \Ker T_1) \end{equation*} for $G_2G_1$ being an outer inverse. Since $E_2 \cap \Ker T_1= \{0\}$ and $E_2 \cap B_1 = \Ll((1,0,0,0)^T)$, the right-hand side equals $\Ll ( (1,0,0,0)^T, (0,1,0,0)^T, (0,0,1,1)^T )$, which contains $B_1$. Thus for all algebraic generalized inverses $G_1$ and $G_2$ with $\img G_1= B_1$ and $\Ker G_2 = E_2$, the product $G_2G_1$ is an outer inverse of $T_1T_2$. The corresponding condition for inner inverses by Theorem \ref{thm:G2G1GI} \ref{G2G1GI4} is \begin{equation*} \img T_2 \leq B_1 \oplus (\Ker T_1 \cap \img T_2) \oplus (\Ker T_1 \cap E_2). \end{equation*} Since $\Ker T_1 \cap \img T_2 = \{0\}$ and $\Ker T_1 \cap E_2 = \{0\}$, the right-hand side equals $B_1$, which does not contain $\img T_2$. Hence for the above choices of $G_1$ and $G_2$, the product $G_2G_1$ is never an inner inverse of $T_1T_2$. Since $G_2G_1$ is an outer inverse, Theorem \ref{thm:RevOrderLaw} allows one to determine $G_2G_1$ directly without knowing the factors. Identifying the dual space with row vectors, the orthogonals of $B_1$ and $E_2$ are given by \begin{equation*} B_1^{\perp} = \mathscr{B}_1 = \Ll((0,0,1,0), (0,0,0,1)), \quad E_2^{\perp}= \Ll((0,1,0,0), (0,0,0,1)), \end{equation*} so we have $\B^{\perp}_1 \cap E_2 = \Ll((1,0,0,0)^T)$ and $\mathscr{B}_1 \cap E_2^{\perp} =\Ll((0,0,0,1))$.
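The two condition checks above can also be phrased as rank computations. This numerical sketch (ours, over $\mathbb{R}$; helper names \texttt{dim\_cap} and \texttt{contained} are not from the text) confirms that the outer-inverse condition holds for the chosen $B_1$ and $E_2$ while the inner-inverse condition fails:

```python
import numpy as np

rank = np.linalg.matrix_rank

ker_T1 = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])    # basis of Ker T1
img_T2 = np.array([[1., 0.], [0., 1.], [-2., 1.], [-2., 1.]])  # basis of img T2
B1 = np.eye(4)[:, :2]                                          # span(e1, e2)
E2 = np.eye(4)[:, [0, 2]]                                      # span(e1, e3)

def dim_cap(A, B):
    # dim(span A ∩ span B) via dim A + dim B - dim(A + B)
    return rank(A) + rank(B) - rank(np.hstack([A, B]))

def contained(A, B):
    # span(A) ≤ span(B) iff adjoining A does not raise the rank
    return rank(np.hstack([B, A])) == rank(B)

assert dim_cap(E2, ker_T1) == 0                 # E2 ∩ Ker T1 = {0}
e1 = np.eye(4)[:, :1]                           # E2 ∩ B1, as computed in the text
# outer-inverse condition: B1 ≤ img T2 ⊕ (E2 ∩ B1) ⊕ (E2 ∩ Ker T1) -- holds
assert contained(B1, np.hstack([img_T2, e1]))
# inner-inverse condition: img T2 ≤ B1 ⊕ (Ker T1 ∩ img T2) ⊕ (Ker T1 ∩ E2);
# both intersections are trivial, so it reduces to img T2 ≤ B1 -- fails
assert dim_cap(ker_T1, img_T2) == 0
assert not contained(img_T2, B1)
```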
For explicitly computing $G_2G_1$, we also have to choose $B_2 = \img G_2$ and $E_1 = \Ker G_1$. Since we have \begin{equation*} \img T_1 = \Ll((1,0,3)^T, (0,1,2)^T), \quad \Ker T_2 =\Ll((1,1,-1)^T), \end{equation*} we may choose the complements $E_1 = \Ker G_1$ and $B_2= \img G_2$ as \begin{equation*} E_1 = \Ll((0,0,1)^T) \quad \text{and} \quad B_2 = \Ll((1,0,0)^T, (0,1,0)^T). \end{equation*} Using \eqref{eq:DefComp}, we can determine the kernel \begin{equation*} E = \Ker G_2G_1 = E_1 \oplus T_1(\B^{\perp}_1 \cap E_2) = \Ll((1,0,0)^T, (0,0,1)^T). \end{equation*} The image of $G_2G_1$ is by \eqref{eq:DefComp} given via the orthogonal \begin{equation*} (\img G_2G_1)^{\perp} = \mathscr{B}_2 \oplus T_2^*(\mathscr{B}_1 \cap E_2^{\perp}) = \Ll((0,0,1), (-1, 5, 4)), \end{equation*} which means that $B = \img G_2G_1 = \Ll((5,1,0)^T)$. Therefore we can directly determine $G$ as the unique outer inverse \begin{equation*} G = \operatorname{O}(T_1T_2, B, E) = \begin{pmatrix} 0 & \frac{5}{12} &0 \\ 0 & \frac{1}{12} &0 \\0&0&0 \end{pmatrix}. \end{equation*} One easily checks that $G$ is an outer inverse of $T_1T_2$. \section{Fredholm operators} \label{sec:Fredholm} We now turn to algorithmic aspects of the previous results. As already emphasized, for arbitrary vector spaces we can express conditions for the reverse order law in terms of the defining spaces alone. Nevertheless, in general it will not be possible to compute sums and intersections of infinite-dimensional subspaces. For algorithmically checking the conditions of Theorem \ref{thm:G2G1OI} or \ref{thm:G2G1GI} and for computing the reverse order law in the form \eqref{eq:DefComp}, we consider finite (co)dimensional spaces and Fredholm operators. Recall that a linear map $T$ between vector spaces is called a \emph{Fredholm} operator if $\dim \Ker T < \infty$ and $\codim \img T < \infty$. Moreover, for finite codimensional subspaces $V_1\leq V$, we have $\codim V_1=\dim V_1^{\perp}$.
In this case, $V_1$ can be implicitly represented by the finite-dimensional subspace $V_1^{\perp} \leq V^*$. For an application to linear ordinary boundary problems, see the next section. We assume that for finite-dimensional subspaces, we can compute sums and intersections and check inclusions, both in vector spaces and in their duals. With the following lemma, the intersection of a finite-dimensional subspace with a finite codimensional subspace is reduced to computing kernels of matrices. \begin{definition} Let $u=(u_1, \ldots, u_m)^T \in V^m$ and $\beta = (\beta_1, \ldots, \beta_n)^T \in (V^*)^n$. We call \begin{equation*} \beta(u) = \begin{pmatrix} \beta_1(u_1) & \ldots & \beta_1(u_m)\\ \vdots & \ddots & \vdots \\ \beta_n(u_1) & \ldots & \beta_n(u_m) \end{pmatrix} \in F^{n \times m} \end{equation*} the \emph{evaluation matrix} of $\beta$ and $u$. \end{definition} \begin{lemma} \label{lem:intersections} Let $U \leq V$ and $\mathscr{B} \leq V^*$ be generated respectively by $u=(u_1, \ldots, u_m)$ and $\beta=(\beta_1, \ldots, \beta_n)$. Let $k^1, \ldots, k^r \in F^m$ be a basis of $\Ker \beta(u)$, and $\kappa^1, \ldots, \kappa^s \in F^n$ a basis of $\Ker (\beta(u))^T$. Then \begin{enumerate}[label=(\roman*)] \item $U\cap \B^{\perp}$ is generated by $\sum_{i=1}^m k^1_i u_i , \ldots, \sum_{i=1}^m k^r_i u_i$ and \item $U^{\perp} \cap \mathscr{B} $ is generated by $\sum_{i=1}^n \kappa^1_i\beta_i , \ldots, \sum_{i=1}^n \kappa^s_i \beta_i$. \end{enumerate} \end{lemma} \begin{proof} A linear combination $v=\sum_{\ell = 1}^m c_{\ell}u_{\ell}$ is in $\B^{\perp}$ if and only if $\beta_i(v)= 0$ for $1 \leq i \leq n$, that is, $\sum_{\ell = 1}^m c_{\ell} \beta_i(u_{\ell}) = 0$ for $1 \leq i \leq n$. Hence $\beta(u) \cdot (c_1, \ldots, c_m)^T = 0$. Analogously, one sees that the coefficients of a linear combination in $U^{\perp} \cap \mathscr{B}$ are in the kernel of $(\beta(u))^T$.
\end{proof} We reformulate the conditions of Theorem \ref{thm:G2G1OI} such that for Fredholm operators they only involve operations on finite-dimensional subspaces and intersections as in the previous lemma. Similarly, one can rewrite the conditions of Theorem \ref{thm:G2G1GI}. \begin{corollary} \label{cor:OIF} Let $T_1 \colon V \to W$ and $T_2 \colon U \to V$ be linear with outer inverses $G_1=\operatorname{O}(T_1, \mathscr{B}_1, E_1)$ and $G_2=\operatorname{O}(T_2, \mathscr{B}_2, E_2)$. Let $\mathscr{C}_2=T_2(\B^{\perp}_2)^\perp$ and $K_1=T_1^{-1}(E_1)$. The following conditions are equivalent: \begin{enumerate} [label=(\roman*)] \item $G_2G_1$ is an outer inverse of $T_1T_2$. \label{algOI1} \item $\mathscr{C}_2 + (\mathscr{B}_1 \cap E_2^{\perp}) \geq \mathscr{B}_1 \cap (E_2 \cap K_1)^{\perp}$ \label{algOI2} \item $\mathscr{B}_1 \geq \mathscr{C}_2 \cap (E_2 \cap \B^{\perp}_1)^{\perp} \cap (E_2 \cap K_1)^{\perp}$ \label{algOI3} \item $K_1 \oplus (E_2 \cap \B^{\perp}_1) \geq E_2 \cap (\mathscr{B}_1 \cap \mathscr{C}_2)^{\perp}$ \label{algOI4} \item $E_2 \geq K_1 \cap (\mathscr{B}_1 \cap E_2^{\perp})^{\perp} \cap (\mathscr{B}_1 \cap \mathscr{C}_2)^{\perp}$ \label{algOI5} \end{enumerate} \end{corollary} \begin{proof} Taking the orthogonal of both sides of Theorem \ref{thm:G2G1OI} \ref{G2G1OI2} and \ref{G2G1OI4}, respectively, and applying Proposition \ref{prop:SumCap}, we get \ref{algOI2} and \ref{algOI3}. For \ref{algOI4} and \ref{algOI5}, we can apply Proposition \ref{prop:SumCap} directly to the corresponding conditions of Theorem \ref{thm:G2G1OI}. \end{proof} We note that using Lemma \ref{lem:intersections}, it is also possible to determine constructively the implicit representation \eqref{eq:DefComp} of a product of generalized inverses; see the next section.
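Lemma~\ref{lem:intersections} translates directly into code. The following Python/NumPy sketch is illustrative only (function names are ours, and the kernel of the evaluation matrix is computed by floating-point SVD rather than the exact arithmetic one would use over $\mathbb{Q}$); it recovers the intersection $\B^{\perp}_1 \cap E_2 = \Ll((1,0,0,0)^T)$ from the matrix example of the previous section:

```python
import numpy as np

def nullspace(A, tol=1e-10):
    """Columns form a basis of Ker A, computed via the SVD."""
    _, s, vt = np.linalg.svd(np.atleast_2d(np.asarray(A, dtype=float)))
    rank = int(np.sum(s > tol))
    return vt[rank:].T

def intersect(u, beta):
    """Generators of U ∩ B^perp, following part (i) of the lemma:
    u holds a basis of U as columns, beta the generating functionals as rows."""
    K = nullspace(beta @ u)   # kernel of the evaluation matrix beta(u)
    return u @ K              # each kernel vector yields one generator

# From the matrix example: E_2 = <(1,0,0,0)^T, (0,0,1,0)^T> and
# B_1 generated by the functionals (0,0,1,0) and (0,0,0,1).
u = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
beta = np.array([[0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]])

print(intersect(u, beta))   # spans <(1,0,0,0)^T>, as computed above
```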
\section{Examples for linear ordinary boundary problems} \label{sec:BP} As an example involving infinite-dimensional spaces and Fredholm operators, we consider solution (Green's) operators for linear ordinary boundary problems. Algebraically, a linear boundary problem can be represented as a pair $(T, \mathscr{B})$, where $T \colon V \to W$ is a surjective linear map, and $\mathscr{B} \leq V^*$ is an orthogonally closed subspace of (homogeneous) boundary conditions. We say that $v\in V$ is a solution of $(T, \mathscr{B})$ for a given $w\in W$ if $Tv=w$ and $v\in \B^{\perp}$. For a regular boundary problem (having a unique solution for every right-hand side), the Green's operator is defined as the unique right inverse $G$ of $T$ with $\img G = \B^{\perp}$; see \cite{RegensburgerRosenkranz2009} for further details. The product $G_2G_1$ of the Green's operators of two boundary problems $(T_1, \mathscr{B}_1)$ and $(T_2, \mathscr{B}_2)$ is then the Green's operator of the regular boundary problem $(T_1T_2, \mathscr{B}_2 \oplus T_2^*(\mathscr{B}_1))$; see also Theorem~\ref{thm:RevOrderLaw}. For boundary problems having at most one solution, that is, with $\B^{\perp} \cap \Ker T = \{0\}$, the linear algebraic setting has been extended in~\cite{Korporal2012} by defining generalized Green's operators as outer inverses. More specifically, one first has to project an arbitrary right-hand side $w\in W$ onto $T(\B^{\perp})$, the image of the ``functions'' satisfying the boundary conditions, along a complement $E$ of $T(\B^{\perp})$. The corresponding generalized Green's operator is defined as the outer inverse $G=\operatorname{O}(T, \mathscr{B}, E)$, and we refer to $E \leq W$ as an \emph{exceptional space} for the boundary problem $(T,\mathscr{B})$.
The question of when the product of two outer inverses is again an outer inverse is the basis for factoring boundary problems into lower-order problems; see~\cite{RegensburgerRosenkranz2009,RosenkranzRegensburger2008a} for the case of regular boundary problems. This, in turn, provides a method to factor certain integral operators. As an example, let us consider the boundary problem \begin{equation} \label{eq:StandardEx} \bvp{u''=f}{u'(0)=u'(1)=u(1)=0.} \end{equation} In the above setting, this means we consider the pair $(T_1,\mathscr{B}_1)$ with $T_1=\operatorname{D}^2$ and $\mathscr{B}_1 = \Ll(\operatorname{E}_0\operatorname{D}, \operatorname{E}_1\operatorname{D},\operatorname{E}_1)$, where $\operatorname{D}$ denotes the usual derivation on smooth functions and $\operatorname{E}_c$ the evaluation at $c \in \mathbb{R}$. The boundary problem is only solvable for \emph{forcing functions} $f$ satisfying the \emph{compatibility condition} $\smallint_0^1 f(\xi) \,\mathrm{d}\xi =0$; more abstractly, we have $T_1(\B^{\perp}_1)=\mathscr{C}_1^\perp$ with $\mathscr{C}_1=\Ll(\smallint_0^1)$, where $\smallint_0^1$ denotes the functional $f\mapsto \smallint_0^1 f(\xi)\,\mathrm{d}\xi $. For computing a generalized Green's operator of $(T_1, \mathscr{B}_1, E_1)$, we have to project $f$ onto $\mathscr{C}_1^\perp$ along a fixed complement $E_1$. In~\cite{KorporalRegensburgerRosenkranz2011}, we computed the generalized Green's operator \begin{equation*} G_1(f) = x \smallint_0^x f(\xi) \,\mathrm{d} \xi - \smallint_0^x \xi f(\xi)\,\mathrm{d} \xi - \frac{1}{2}(x^2+1) \smallint_0^1 f(\xi)\,\mathrm{d} \xi + \smallint_0^1 \xi f(\xi)\,\mathrm{d}\xi \end{equation*} of \eqref{eq:StandardEx} for $E_1= \mathbb{R}$, the space of constant functions. It is easy to see that in this case we have $T_1^{-1}(E_1) = \Ll(1, x, x^2)$.
As a second boundary problem, we consider \begin{equation*} \bvp{u''-u=f}{u'(0)=u'(1)=u(1)=0,} \end{equation*} or $(T_2,\mathscr{B}_2)$ with $T_2=\operatorname{D}^2-1$ and $\mathscr{B}_2=\Ll(\operatorname{E}_0\operatorname{D}, \operatorname{E}_1\operatorname{D}, \operatorname{E}_1)$. For the corresponding generalized Green's operator $G_2$ with exceptional space $E_2= \Ll(x)$, we will now check if the products $G_1G_2$ and $G_2G_1$ are again generalized Green's operators of $T_1T_2=T_2T_1=\operatorname{D}^4-\operatorname{D}^2$, using condition~\ref{algOI2} of Corollary~\ref{cor:OIF}. We use the algorithm from~\cite{KorporalRegensburgerRosenkranz2011}, implemented in the package \texttt{IntDiffOp}\ for the computer algebra system \textsc{Maple}, to compute the compatibility conditions. The algorithm is based on the identity \[ T(\B^{\perp})^\perp=G^*(\mathscr{B} \cap (\Ker T)^\perp), \] for any right inverse $G$ of $T$, which follows from Propositions~\ref{prop:Duals} and~\ref{prop:KerComp}. Moreover, a right inverse of the differential operator can be computed by variation of constants and the intersection $\mathscr{B} \cap (\Ker T)^\perp$ using Lemma~\ref{lem:intersections}. Thus we obtain $\mathscr{C}_2 = \Ll (\smallint_0^1 (\exp(-x) + \exp(x)) )$, where $\smallint_0^1 (\exp(-x) + \exp(x))$ denotes the functional $f\mapsto \smallint_0^1 (\exp(-\xi) +\exp(\xi)) f(\xi) \,\mathrm{d} \xi$. The space $T_2^{-1}(E_2)= \Ll(x, \exp(x), \exp(-x))$ can be computed using Proposition~\ref{prop:KerComp} and a right inverse of the differential operator; this is also implemented in the \texttt{IntDiffOp}\ package. Hence we have $E_1 \cap T_2^{-1}(E_2) = \{0\}$ and therefore $\mathscr{B}_2 \cap(E_1 \cap T_2^{-1}(E_2))^{\perp} = \mathscr{B}_2$. 
Computing $\mathscr{B}_2 \cap E_1^{\perp}$ with Lemma~\ref{lem:intersections} yields $\mathscr{B}_2 \cap E_1^{\perp} = \Ll(\operatorname{E}_0\operatorname{D}, \operatorname{E}_1\operatorname{D})$; thus $G_1G_2$ is not an outer inverse of the product $T_2T_1=\operatorname{D}^4-\operatorname{D}^2$ by Corollary~\ref{cor:OIF}~\ref{algOI2}. On the other hand, we have $E_2 \cap T_1^{-1}(E_1) = \Ll(x) = E_2$, hence we know by Corollary~\ref{cor:OIF}~\ref{algOI2} that $G_2G_1$ is an outer inverse of $T_1T_2 = \operatorname{D}^4-\operatorname{D}^2$. Furthermore, by Theorem~\ref{thm:RevOrderLaw} we can determine which boundary problem it solves without computing $G_1$ and $G_2$. With Lemma~\ref{lem:intersections} we obtain $\B^{\perp}_1 \cap E_2=\{0\}$ and $\mathscr{B}_1 \cap E_2^{\perp}= \Ll(\operatorname{E}_0\operatorname{D}- \operatorname{E}_1, \operatorname{E}_1\operatorname{D}-\operatorname{E}_1)$. Since applying the transpose $T_2^*$ to $\operatorname{E}_0\operatorname{D}- \operatorname{E}_1$ and $ \operatorname{E}_1\operatorname{D}-\operatorname{E}_1$ corresponds to multiplying $T_2=\operatorname{D}^2-1$ from the right, $G_2G_1$ is the generalized Green's operator of \begin{equation*} (\operatorname{D}^4-\operatorname{D}^2, \Ll ( \operatorname{E}_0\operatorname{D}, \operatorname{E}_1\operatorname{D}, \operatorname{E}_1, \operatorname{E}_0\operatorname{D}^3 - \operatorname{E}_1\operatorname{D}^2, \operatorname{E}_1\operatorname{D}^3 - \operatorname{E}_1\operatorname{D}^2 ), \mathbb{R} ) \end{equation*} by \eqref{eq:DefComp}; or, in traditional notation, it solves the boundary problem \begin{equation*} \bvp{u''''-u''=f}{u'(0)=u'(1)=u(1)=u'''(0)-u''(1)=u'''(1)-u''(1)=0,} \end{equation*} with exceptional space $\mathbb{R}$. \section*{Acknowledgements} We would like to thank an anonymous referee for his detailed comments. G.R.~was supported by the Austrian Science Fund (FWF): J 3030-N18.
\section{Introduction} The study of neutral hydrogen (H$\textrm{\scriptsize{I}}$) structures in the Milky Way dates back to the early 1950s, with the first detections of 21cm emission by Ewen \& Purcell (1951)\nocite{1951Natur.168..356E} having followed the prediction by van de Hulst\nocite{Hulst1945} that this line should be detectable astronomically. Since those early days of radio astronomy, the Galactic H$\textrm{\scriptsize{I}}$ sky has been mapped repeatedly, providing very detailed maps of the H$\textrm{\scriptsize{I}}$ distribution in the Milky Way as projected on the sky (Leiden-Dwingeloo Survey: \citealt{1997agnh.book.....H}, H$\textrm{\scriptsize{I}}$ Parkes All Sky Survey: \citealt{2001MNRAS.322..486B}, Instituto Argentino de Radioastronom\'ia Survey: \citealt{2005A&A...440..767B}, Southern Galactic Plane Survey: \citealt{2005ApJS..158..178M}, Arecibo Galactic H$\textrm{\scriptsize{I}}$ Survey: \citealt{2006ApJ...653.1210S}). However, detections of gaseous structures in themselves do not, unfortunately, provide us with information about their three-dimensional spatial distribution. In the disk plane, it is possible to de-project the sky-projected distribution from knowledge of their radial velocities and by assuming rotation around the Milky Way \citep[e.g. for the Galactic spiral arm structure;][]{2006Sci...312.1773L}. In most cases, however, pinning down the location of a gaseous structure requires the knowledge of a stellar population that it can be associated with. High-velocity clouds ($\textrm{\small{HVC}}$s) of neutral hydrogen are a class of H$\textrm{\scriptsize{I}}$ structures whose kinematic behaviour cannot be reconciled with that expected for disk-plane material.
While several theories exist for their origin \citep[the Galactic fountain, stripping from dwarf galaxies, remnants of galaxy formation; see e.g.][]{2004ASSL..312..341B}, none of these is clearly favoured over the others, primarily due to the fact that $\textrm{\small{HVC}}$s must be treated on a case-by-case basis. When deducible, their distances and metallicities can help to constrain their origin, but the former is particularly difficult to obtain for many of the Galactic $\textrm{\small{HVC}}$s; their locations, often at high Galactic latitudes, mean that identifying and utilising a halo field star to apply the interstellar absorption line technique, from which distance limits can be derived, is usually a taxing task. In the context of stripped gas associated with dwarf galaxies, the Sagittarius dwarf galaxy is the only Milky Way satellite, aside from the Magellanic Clouds, for which there is an H$\textrm{\scriptsize{I}}$ feature that could have been stripped from the main body of the dwarf galaxy \citep{2004ApJ...603L..77P}. Indeed, the $\textrm{\small{HVC}}$s comprising the Magellanic stream are the only ones for which there is definitive knowledge that the gas originated from the Milky Way satellite companions themselves. In terms of satellite galaxies in the Local Group, the Magellanic system is unique in the sense that the two galaxies have clearly been interacting with one another as well as with the Milky Way; the binary interaction has most probably been crucial in producing the gaseous stream of over $100\deg$ in length. There are two reasons why we might expect to see more examples of H$\textrm{\scriptsize{I}}$ streams than this one system that is currently known. The first is that studies derived from cosmological simulations indicate that a significant fraction of dark matter subhalos fall into their more massive host halo in groups \citep{2008MNRAS.385.1365L}.
In this scenario, dwarf galaxies that are hosted within such subhalos will naturally have companions that may be involved in close interactions, depending on the degree of binding within the infalling group. The second reason for expecting halo H$\textrm{\scriptsize{I}}$ streams is that dwarf galaxies venturing too close to the parent galaxy should have their gas content at least partially stripped through tidal forces and/or ram pressure. Indeed, the H$\textrm{\scriptsize{I}}$ content of Milky Way and M31 satellites is well known to correlate with distance from their respective parent galaxies \citep{2009ApJ...696..385G}.\\ In this letter, we highlight the possibility that a collection of clouds belonging to an $\textrm{\small{HVC}}$ complex known as GCN may be part of yet another gaseous stream at an average heliocentric distance of $\sim20\kpc$. In Section~\ref{sec:velocity}, we first outline how the velocity of the stream can be calculated as a function of distance. Section~\ref{sec:orbit} presents the orbit derived from this analysis and the results are summarised in Section~\ref{sec:summary}. \section{Deriving the transverse velocity of a stream} \label{sec:velocity} Before turning to the analysis of the H$\textrm{\scriptsize{I}}$ stream, we recall the technique for deducing the transverse velocity at a given location along a stream by using the gradient in line-of-sight velocity there, as has been described by \cite{2007MNRAS.378L..64J} and \cite{2008MNRAS.386L..47B}. Its strength lies in the use of line-of-sight velocity data along an extended structure, so that proper motion measurements are not required to constrain an orbit, if at least one distance is known. If no distance information is available, then the transverse velocities are calculated as distance-dependent quantities that can then be used to derive a family of possible orbits in a given gravitational potential.
Depending on the morphology of the orbit, both the distance of the stream and its direction of motion can be constrained by comparing the resulting orbit with observational data. The formalism for deducing the component of the velocity vector transverse to the line of sight at a given point on a stream was provided in a previous paper \citep{2007MNRAS.378L..64J}. Here, we simply quote the solution to the resulting quadratic equation for this transverse velocity, $v_s$: \beqnarray \label{eqn:vs} v_s = \frac{1}{2} \left( \ud v_l/\ud\chi \pm \sqrt{\left(\ud v_l/\ud\chi\right)^2 - 4\left(\norml.\nabla\psi\right)d}\right)\,, \eeqnarray where $v_l$ is the line-of-sight velocity corrected to the Galactic Standard of Rest (GSR), $\norml$ is the normalised line-of-sight vector, $d$ is the heliocentric distance, $\psi$ is the Galactic potential and $\chi$ is the angle along the stream, measured from some fiducial point. The full velocity vector at any point along a stream is given by $\vecv = v_l\norml + v_s\norms$, where $\norms$ is the unit vector in the plane of the sky, along the apparent direction of motion of the stream at $\norml$; one possible method by which $\norms$ may be calculated is given by equation (16) in \cite{2008MNRAS.383.1686J}. The two solutions for $v_s$ usually provide orbits in opposite senses. Here, a comparison of the computed orbits with the H$\textrm{\scriptsize{I}}$ data turns out to be sufficient to specify the direction of motion, given the large angular extent of the gaseous structure considered. In the following analysis, we use a three-component Galactic model described by \cite{1990ApJ...348..485P}, which combines a Miyamoto-Nagai model for the disk and spheroid \citep{1975PASJ...27..533M}, and a near-logarithmic, spherical potential for the halo. We also briefly describe the effect of changing the halo potential in Section~\ref{subsec:orbit}. 
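Equation~\eqref{eqn:vs} is the root formula for the quadratic $v_s^2 - (\ud v_l/\ud\chi)\,v_s + (\norml.\nabla\psi)\,d = 0$, so both solutions are immediate once the local velocity gradient, the projected potential gradient and a trial distance are fixed. A minimal sketch in Python (all numerical values are purely illustrative; $\chi$ is taken in radians so that $v_s$ and $\ud v_l/\ud\chi$ carry the same units):

```python
import math

def transverse_velocity(dvl_dchi, n_dot_gradpsi, d):
    """Both roots of v_s^2 - (dv_l/dchi) v_s + (n.grad psi) d = 0,
    i.e. the two solutions of the quadratic; chi is measured in radians."""
    disc = dvl_dchi**2 - 4.0 * n_dot_gradpsi * d
    if disc < 0.0:
        raise ValueError("no real transverse velocity at this distance")
    root = math.sqrt(disc)
    return 0.5 * (dvl_dchi + root), 0.5 * (dvl_dchi - root)

# Illustrative numbers only: a -3 km/s/deg gradient (converted to km/s/rad),
# a toy value for the projected potential gradient, and d = 25 kpc.
dvl_dchi = -3.0 * 180.0 / math.pi
vs_plus, vs_minus = transverse_velocity(dvl_dchi, n_dot_gradpsi=-50.0, d=25.0)
# The two solutions generally correspond to orbits in opposite senses.
print(vs_plus, vs_minus)
```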
\section{Results} \label{sec:orbit} \subsection{Determining the orbit of GCN} \label{subsec:orbit} \begin{figure} \begin{center} \includegraphics[width=0.30\textwidth,angle=270]{fig1.ps} \caption{Variation in line-of-sight velocity along the $\textrm{\small{HVC}}$ complex GCN, where $\chi$ is the angle along the stream as measured from the point of lowest latitude and increases towards the Galactic plane. The error bars on the velocities indicate the width of the velocity peak in the LAB data. The dashed line indicates the velocity gradient at the starting point for the orbital integration. \label{fig:chi_vl}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth,angle=0]{fig2.ps} \caption{Orbital evolution in Galactic longitude and latitude $(\ell,b)$ and line-of-sight velocity relative to the Galactic Standard of Rest ($v_l$). The panels provide a comparison of GCN data (dots) with an orbit (solid line) computed using parameters derived from its properties at $\ell = 39.3\deg$, $b=-31.0\deg$ (see main text for details). The computed orbit places the H$\textrm{\scriptsize{I}}$ complex at $25\kpc$ at the $\textrm{\small{IC}}$ point. The open circle represents the $\textrm{\small{CHVC}}$ at $b=47\deg$ (see discussion in Section \ref{subsec:assoc}). Note that this solid-line orbit does {\it{not}} take this point into account; its inclusion here serves merely to illustrate the position of the $\textrm{\small{CHVC}}$ relative to the stream's orbit. The dotted line describes an orbit that is constrained to have the same IC point as the orbit shown by the solid line, but passes closer to the $\textrm{\small{CHVC}}$ in $\ell$, $b$ and $v_l$. 
\label{fig:lb_vl}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.28\textwidth,angle=270]{fig3.ps} \caption{Variation of heliocentric distance with angle along the stream (measured from the $\textrm{\small{IC}}$ point), for the orbits shown in Figure~\ref{fig:lb_vl}. The solid line describes the case in which the $\textrm{\small{CHVC}}$ is not included and corresponds to the orbit shown in Figure~\ref{fig:all-sky}, while the dotted line is with its inclusion. The central region of the solid line, in black, corresponds to the extent of GCN that we have followed in the LAB data. \label{fig:d_chi}} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=0.50\textwidth,angle=270]{fig4.ps} \caption{Distance evolution of the derived orbit for GCN (corresponding to the orbit indicated by solid lines in Figures~\ref{fig:lb_vl} and \ref{fig:d_chi}), shown in Hammer projection. The open squares denote positions of the H$\textrm{\scriptsize{I}}$ clouds comprising GCN, while the open circle represents a compact high-velocity cloud that may or may not be associated with the main stream. Triangles indicate the locations of Milky Way dwarf spheroidal galaxies, as listed in Table~\ref{tab:dwarfs}. 
\label{fig:all-sky}} \end{center} \end{figure*} \begin{table} \begin{center} \begin{tabular}{lrrrrr} \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{$\ell$} & \multicolumn{1}{c}{$b$} & \multicolumn{1}{c}{$v_\odot$} & \multicolumn{1}{c}{$v_l$} & \multicolumn{1}{c}{$d$}\\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{$(\deg)$} & \multicolumn{1}{c}{$(\deg)$} & \multicolumn{1}{c}{(km/s)} & \multicolumn{1}{c}{(km/s)} & \multicolumn{1}{c}{(kpc)}\\ \hline Bo\"otes I & 358.1 & 69.6 & 99 & 106 & 66 \\ Bo\"otes II & 353.7 & 68.9 & $-$117 & $-$116 & 42 \\ Canes Venatici I & 74.3 & 79.8 & 31 & 78 & 218 \\ Canes Venatici II & 113.6 & 82.7 & $-$129 & $-$96 & 160 \\ Carina & 260.1 & $-$22.2 & 224 & 8 & 101 \\ Coma Berenices & 241.9 & 83.6 & 98 & 82 & 44 \\ Draco & 86.4 & 34.7 & $-$293 & $-$98 & 82 \\ Fornax & 237.1 & $-$65.7 & 53 & $-$36 & 138 \\ Hercules & 28.7 & 36.9 & 45 & 145 & 132 \\ Leo I & 226.0 & 49.1 & 286 & 178 & 250 \\ Leo II & 220.2 & 67.2 & 76 & 22 & 205 \\ Leo IV & 265.4 & 56.5 & 132 & 10 & 160 \\ Leo V & 261.9 & 58.5 & 173 & 58 & 178 \\ Leo T & 214.9 & 43.7 & 38 & $-$58 & 407 \\ Pisces II & 79.2 & $-$47.1 & $\cdots$ & $\cdots$ & 180 \\ Sagittarius & 5.6 & $-$14.1 & 140 & 169 & 24 \\ Sculptor & 287.5 & $-$83.2 & 108 & 75 & 79 \\ Segue 1 & 220.5 & 50.4 & 206 & 111 & 23 \\ Segue 2 & 149.4 & $-$38.1 & $-$39 & 44 & 35 \\ Sextans & 243.5 & 42.3 & 227 & 75 & 86 \\ Ursa Major I & 159.4 & 54.4 & $-$55 & $-$7 & 97 \\ Ursa Major II & 152.5 & 37.4 & $-$116 & $-$33 & 30 \\ Ursa Minor & 105.0 & 44.8 & $-$248 & $-$86 & 66 \\ Willman 1 & 158.6 & 56.8 & $-$12 & 36 & 38 \\ \hline \end{tabular} \caption{Properties of Milky Way dwarf spheroidal galaxies shown in Figure~\ref{fig:all-sky}. The columns give the Galactic longitude and latitude, heliocentric radial velocity, line-of-sight velocity relative to the Galactic Standard of Rest and heliocentric distance.
Radial velocities and distances are taken from those compiled by \citet{Mateo1998} and \citet{2008ApJ...684.1075M}, with radial velocities for the `ultra-faint' dwarf galaxies taken from \citet{2007MNRAS.380..281M} and \citet{2007ApJ...670..313S}, with additional information for Segue~1, Leo~V, Segue~2 and Pisces~II from \citet{2009ApJ...692.1464G}, \citet{2008ApJ...686L..83B,2009MNRAS.397.1748B} and \citet{2010ApJ...712L.103B}, respectively.} \label{tab:dwarfs} \end{center} \end{table} Using the all-sky $\textrm{\small{HVC}}$ map by \citet[][figure~1b]{2004ASSL..312...25W} as a visual guide, we extract several GCN H$\textrm{\scriptsize{I}}$ features in the Leiden/Argentine/Bonn (LAB) Galactic H$\textrm{\scriptsize{I}}$ survey data cube \citep{2005A&A...440..775K} lying along the general direction of $\ell\simeq33-40\deg$ with negative velocities relative to the GSR.\footnote{Another $\textrm{\small{HVC}}$ complex (GCP) overlaps GCN spatially over much of the latter's extent. However, these two structures have rather contrasting line-of-sight velocities so that any physical association between them is clearly ruled out and their emission is easily separated in the LAB data.} Figure~\ref{fig:chi_vl} shows the evolution of $v_l$ along the string of $\textrm{\small{HVC}}$s, which together show a strong common trend in both sky position and velocity; these data can then be used to calculate $\ud v_l/\ud\chi$ and hence the transverse velocity along the stream. In defining the initial condition ($\textrm{\small{IC}}$) point for the orbit calculation, it is important to choose a location on the stream where the behaviour of $\ud v_l/\ud\chi$ is well defined. This usually corresponds to a point near the middle of the stream's extent.
Here, the $\textrm{\small{IC}}$ point chosen to satisfy these criteria is at a Galactic longitude and latitude of $(\ell,b) = (39.3,-31.0)\deg$, with the corresponding properties $v_l = -147.5\,{\rm km}\,{\rm s}^{-1}$ and $\ud v_l/\ud\chi = -3.0\,{\rm km}\,{\rm s}^{-1}/{\mathrm{deg}}$ (indicated by the dashed line in Figure~\ref{fig:chi_vl}). The free parameter in this analysis is the heliocentric distance to the stream. However, there is also some uncertainty in the velocity gradient along the stream of order $0.5\,{\rm km}\,{\rm s}^{-1}/{\mathrm{deg}}$; we therefore treat this gradient as a somewhat variable parameter. In order to recover a good match to the position and velocity of the H$\textrm{\scriptsize{I}}$ data points, we find that the distance to the $\textrm{\small{IC}}$ point is reasonably well constrained at $\sim25\kpc$ with the remaining orbital parameters as stated above.\footnote{Altering this initial distance by $\pm10\kpc$ leads to orbits that are clearly discrepant with one half of the stream.} The orbit that results from taking this set of parameters is one of the best found and is shown by the solid lines in Figure~\ref{fig:lb_vl}, where the three panels indicate the variation of each of the variables $\ell$, $b$ and $v_l$ as a function of the other variables. Note that the stream mainly extends in Galactic latitude (in other words, $\chi$ is effectively given by $b$), and so the panel showing the trend of $v_l$ in longitude is less instructive in characterising and understanding the orbit than the other two panels. The solid line in Figure~\ref{fig:d_chi} shows the evolution of heliocentric distance along this orbit, with the highlighted central section corresponding to the extent traceable in the LAB data. We note that the details of any orbit calculated are naturally dependent on the choice of the Galactic potential. 
Since many halo profiles that are more cosmologically motivated than that of \cite{1990ApJ...348..485P} now exist in the literature, the analysis was also re-performed with Paczy\'nski's halo replaced by an adiabatically-contracted NFW \citep{NFW96,1997ApJ...490..493N} profile with parameters constrained by \citet{2008ApJ...684.1143X}, in order to check the dependency of the results on the choice of the halo potential. Keeping the same initial conditions as before, we find that the change in halo potential leads only to small changes in the resulting orbit that are mainly manifest above the Galactic plane. Within the region of sky covered by GCN, the differences in the results are negligible in both the radial velocity trend and sky projection of the orbit. The distances are slightly affected by the fact that the Xue et al. halo contains less mass than Paczy\'nski's. As a result, the orbit has a larger apocentre and reaches a smaller pericentre, with a maximum change in distance of $\sim10\%$ compared to the original orbit. The qualitative results and most of the quantitative results are, however, unchanged by the modification of the halo potential. \subsection{Checking for associations} \label{subsec:assoc} A different representation of the stream's orbit is provided in Figure~\ref{fig:all-sky}, where the `anti-centre projection' allows us to directly compare our results with the Wakker (2004) $\textrm{\small{HVC}}$ map. The same orbit as that shown by the solid lines in Figures~\ref{fig:lb_vl} and \ref{fig:d_chi} is now overlaid with the H$\textrm{\scriptsize{I}}$ stream data points as open squares; additionally, all of the known Milky Way dwarf spheroidal (dSph) galaxies are plotted as filled triangles (Table~\ref{tab:dwarfs}). Although there are a few spatial coincidences, the distance and velocity of the orbit at these locations firmly rule out possible associations of this H$\textrm{\scriptsize{I}}$ stream with any of the currently known Milky Way dSphs.
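The orbits compared with the data above result from numerically integrating the equations of motion forwards and backwards from the $\textrm{\small{IC}}$ point. The following Python sketch illustrates the procedure with a leapfrog integrator in a single spherical logarithmic halo, which here stands in for the full three-component Galactic model; both the potential parameters and the initial conditions are our simplifications for illustration, not the values used above:

```python
import numpy as np

V0, RC = 220.0, 1.0   # circular speed [km/s] and core radius [kpc] (illustrative)

def accel(x):
    """Acceleration in a spherical logarithmic halo psi = (V0^2/2) ln(r^2 + RC^2).
    Units: positions in kpc, velocities in km/s, time in kpc/(km/s) ~ 0.98 Gyr."""
    return -V0**2 * x / (np.dot(x, x) + RC**2)

def energy(x, v):
    return 0.5 * np.dot(v, v) + 0.5 * V0**2 * np.log(np.dot(x, x) + RC**2)

def integrate(x, v, dt=1e-4, n_steps=10000):
    """Leapfrog (kick-drift-kick) integration; returns trajectory and final velocity."""
    traj = [x.copy()]
    a = accel(x)
    for _ in range(n_steps):
        v = v + 0.5 * dt * a
        x = x + dt * v
        a = accel(x)
        v = v + 0.5 * dt * a
        traj.append(x.copy())
    return np.array(traj), v

# Hypothetical initial conditions roughly 25 kpc from the Galactic centre.
x0 = np.array([15.0, -12.0, -14.0])
vel0 = np.array([-150.0, 60.0, 80.0])
traj, v_end = integrate(x0.copy(), vel0.copy())
# Leapfrog is symplectic, so the energy drift over ~1 Gyr stays tiny.
print(abs(energy(traj[-1], v_end) - energy(x0, vel0)))
```

The leapfrog scheme is chosen here because its long-term energy behaviour makes multi-wrap orbit integrations well behaved.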
The Milky Way is also host to $\sim150$ globular clusters \citep{1996AJ....112.1487H}. Of these, a handful of globular clusters with latitudes between $-50\deg$ and $0\deg$ lie close to the stream but, with velocity differences of more than $100\,{\rm km}\,{\rm s}^{-1}$ from the GCN clouds or the inferred orbit, connections to the H$\textrm{\scriptsize{I}}$ stream are unlikely. The same conclusion holds for three other globular clusters at $40\deg < b < 50\deg$ that lie within $10\deg$ of the orbit there, although one should remember that the orbital path at these latitudes is already a strong extrapolation of the orbit derived for the main part of the H$\textrm{\scriptsize{I}}$ stream residing at negative latitudes. Thanks to the advent of large sky surveys, numerous detections of Galactic stellar streams have been reported in the literature in the last few years. We therefore also considered the possibility that the GCN stream might have a counterpart in one of these recent discoveries, if not in the globular clusters or intact dwarf spheroidals. We find that no known stellar stream [Monoceros stream \citep{2002ApJ...569..245N,2003MNRAS.340L..21I}, Orphan stream \citep{2006ApJ...645L..37G,2007ApJ...658..337B}; Grillmair-Dionatos stream \citep{2006ApJ...643L..17G,2009ApJ...697..207W}; Acheron, Cocytos, Lethe, and Styx streams \citep{2009ApJ...693.1118G}; solar neighbourhood streams in SDSS DR7 \citep{2009ApJ...698..865K}; Cetus polar stream \citep{2009ApJ...700L..61N}; Pisces overdensity \citep{2009MNRAS.398.1757W}] coincides with the orbit determined in this letter for the GCN stream. Although inhabiting a similar region of the sky, the stream is also offset from the Magellanic system in location and velocity. Combining this with our check for coincidences with the stellar streams therefore leads us to conclude that the H$\textrm{\scriptsize{I}}$ stream has no currently known stellar counterpart. 
Finally, we find that this orbit passes very close to a compact high-velocity cloud ($\textrm{\small{CHVC}}$) in the Wakker $\textrm{\small{HVC}}$ map at $(\ell,b) = (18,47)\deg$. Perusal of the LAB data shows this $\textrm{\small{CHVC}}$ to have a line-of-sight velocity of $(-107\pm15)\,{\rm km}\,{\rm s}^{-1}$. As shown by the dotted lines in Figures~\ref{fig:lb_vl} and \ref{fig:d_chi}, its inclusion in our analysis helps to constrain the distance of the orbit further by requiring the orbit to trace the clouds as much as possible over a larger extent on the sky, but the association of this H$\textrm{\scriptsize{I}}$ clump with the rest of the structure is unclear from the current data. Higher resolution H$\textrm{\scriptsize{I}}$ data\footnote{For example, very recent results from the Effelsberg-Bonn H$\textrm{\scriptsize{I}}$ survey \citep{2010arXiv1007.3363W} clearly show head-tail features and filamentary structures in new, high resolution data for GCN clouds.} would be required to see whether this $\textrm{\small{CHVC}}$ exhibits an elongation along the direction of motion expected from the orbit; naturally, detection of lower column density material along this direction would also aid in solidifying this hypothesis. \subsection{Discussion} \label{subsec:discussion} Despite its `orphan' nature in having no obvious progenitor or stellar counterpart, our findings provide evidence for the first halo H$\textrm{\scriptsize{I}}$ stream outside of the Magellanic stream at distances where it is conceivable for H$\textrm{\scriptsize{I}}$ structures to display relatively ballistic motion. Although the orbits were always integrated for several wraps, here we have shown only the first $660\Myr$ to either side of the $\textrm{\small{IC}}$ point. The reason for this is threefold: the first is simply a matter of clarity, so that the reader does not have to untangle an array of different orbital wraps on such a map.
The second is that the exact form of the gravitational potential is, as always, somewhat uncertain. In addition, we do not expect the motion of any halo stream (whether stellar or gaseous) to follow an exact orbit, as the extended profile itself is created through changes in orbital energy and angular momentum amongst the stream's constituents. This is even more important for gaseous structures than it is for stellar streams, as gas is subject to non-gravitational influences in the form of ram pressure and drag forces, effects that do not concern a stellar stream. We therefore do not expect to find a perfect orbital match and, rather, content ourselves with an orbit that reproduces well the general trend in position and velocity of the string of H$\textrm{\scriptsize{I}}$ clouds of our interest. As we were only able to identify a few of the brightest regions in the LAB data, it is difficult to make an accurate mass estimate of the stream. We therefore simply use a representative column density of $10^{19}\cm2$ \citep{2004ASSL..312...25W} and a total angular size of 20~${\mathrm{deg}}^2$ at an average distance of $20\kpc$ to arrive at a conservative lower mass estimate of $2\times10^{5}\Msun$. Studies of the H$\textrm{\scriptsize{I}}$ content of Local Group dwarf galaxies \citep{2009ApJ...696..385G} indicate this to be a plausible gas content for an undisturbed, distant dwarf galaxy. Observing such a quantity of H$\textrm{\scriptsize{I}}$ spread into a stream would, however, clearly be more challenging than observing it as a bound and more concentrated structure, and may explain why the stream had not been identified before. It is also not surprising that the GCN stream would be much less massive than the Magellanic stream given the masses of its progenitors, the Large and Small Magellanic Clouds, which themselves are a few orders of magnitude more massive than most of the dwarf companions of the Milky Way. 
\section{Summary} \label{sec:summary} We have shown that a string of H$\textrm{\scriptsize{I}}$ clouds that form part of the high-velocity cloud complex known as GCN is a probable gaseous stream in the Galactic halo at a distance of $\sim20\kpc$. We determine its orbit by utilising the large gradient in line-of-sight velocity along its extent, and determine its direction of motion to be towards the Galactic plane. We also identify a compact $\textrm{\small{HVC}}$, whose sky position nearly intersects with the forward projection of the stream's orbit, and whose velocity differs by approximately $50\,{\rm km}\,{\rm s}^{-1}$ from that of the orbit there. This association, however, remains a speculative one before further H$\textrm{\scriptsize{I}}$ studies along the direction of the orbit presented here can be performed. The origin of such a gaseous stream with an estimated gas content of a few times $10^5\Msun$ is most probably a satellite galaxy of the Milky Way. However, no progenitor has been found amongst the currently known Galactic dSphs or globular clusters through a comparison of their locations and velocities with the stream's orbit, hence the exact origin of the stream is currently unknown. \subsection*{Acknowledgements} I would like to thank Donald Lynden-Bell for many inspirational and insightful discussions that encouraged me to pursue this work. I am also grateful to Nicolas Martin for useful discussions and a careful reading of the manuscript, and to the anonymous referee for helpful comments and suggestions. \bibliographystyle{mn2e}
\section{Introduction} Web browsers provide a fast execution environment~\cite{Khan:2014,herrera18webassembly}, are already installed on most devices today, and are progressively adding support for WebRTC~\cite{webrtc}, which enables direct connections for exchanging data. However, current browser implementations limit the number of concurrent WebRTC connections to a single source to 256~\cite{webrtc-limit}. In practice, this number is even more limited: the overhead of maintaining connections becomes significant beyond 70 concurrent connections in some libraries.\footnote{Such as the \texttt{electron-webrtc}~\cite{electron-webrtc} library for Node.js.} This limits the total number of participants. We increased the total number of participants by using a \textit{fat-tree overlay}. In a fat-tree~\cite{Leiserson85}, processors are located on the leaves and internal nodes relay data for all their children; each new layer in the tree increases the number of possible connections exponentially. The main benefit in the context of the Web is to remove the need to relay data on dedicated servers by employing intermediate nodes in a fat-tree as relays. Existing work on fat-trees~\cite{Leiserson85,ruft-fat-tree,adaptive-vs-deterministic-routing,birrer2004fatnemo} has not so far provided solutions for quick scaling. The key issues are to quickly distribute the newer participants among the existing leaves and quickly route, through the fat-tree, the multiple messages generated by WebRTC to open a new connection. To address both, we propose a novel routing scheme, which we call Genet~\footnote{Clonal colony of plants in which all individuals share the same genetic material. We expect the design, if it is successful, to eventually form a dynamic forest of overlays on the Web.}, that only requires \textit{local information} to route connection messages: this eliminates the latency that would otherwise be incurred by waiting for the status from other parts of the tree. 
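The exponential growth of capacity with each layer can be made concrete with a small sketch (ours, not code from the paper; the function name is hypothetical): a fat-tree of degree $k$ can host $k^d$ leaves at depth $d$.

```javascript
// Sketch: leaf capacity of a fat-tree in which every node keeps at most
// `childrenLimit` child connections. Each added layer multiplies the
// number of possible participants by childrenLimit.
function leafCapacity(childrenLimit, depth) {
  return Math.pow(childrenLimit, depth);
}

// With ~70 practical connections per browser node, a two-layer tree
// already far exceeds the 256-connection cap of a single source:
console.log(leafCapacity(70, 1)); // 70
console.log(leafCapacity(70, 2)); // 4900
```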
The destination for messages is derived from the hash value of the combined identifiers of the source and the current routing node, providing two properties. First, the scheme \textit{deterministically routes} multiple messages sent by a new participant to the same leaf node. Second, the scheme ensures \textit{probabilistic balancing} of newer connections between all the children to keep the tree balanced. This design is especially suited to the context of compute-intensive applications that leverage volunteers' devices because users tend to add local devices first before asking for help from others; the devices in the first layer of the tree will therefore also benefit from the largest available bandwidth. To show the \textit{probabilistic balancing} scheme is useful, we measured the depth of nodes and found that at least 83\% of nodes have the same depth as they would have in a deterministic scheme; this percentage grows to 92.5\% as the tree grows larger and is independent of the failure level of nodes if they reconnect through the root after a disconnection. To show the design can quickly scale, we measured the time required for all participants to become connected within a fat-tree overlay fully implemented and tested in Pando~\cite{lavoie2018pando}, a tool for personal volunteer computing~\cite{lavoie2018pvc} that targets shorter-running tasks than is typical for well-known and larger-scale volunteer computing projects~\cite{boincprojects}. We succeeded in connecting a thousand browser windows in 22-55 seconds on a local network and could fully deploy the Collatz application on 320 cores, reaching maximum throughput in less than 30 seconds. Both results show that the design is quite useful for quick deployments on local networks, such as those in a university department or a large organization. 
Additional preliminary measurements of connectivity probability and latency for WebRTC on Internet deployments show that further refinements of the design in an Internet setting shall include tolerance to failures of initial connections, perhaps by initiating multiple connections upon joining, and tolerate initial connection latencies of up to 9-16 seconds. Compared to previous work on fat-trees, we are the first to (1) propose a deterministic routing scheme for connection messages to quickly grow a fat-tree overlay when a large number of participants join in a short amount of time, (2) implement such a design with WebRTC to overcome the limit on the number of connections, and (3) apply the idea to dispatch work and retrieve results in a volunteer computing tool, using participants for data distribution rather than a dedicated server. In the rest of this paper, we first explain the design of the fat-tree overlay in more detail (Section~\ref{Section:FatTreeOverlay}). We then explain how we adapted Pando to use our fat-tree overlay to improve its scalability (Section~\ref{Section:Application}). We continue with an evaluation of the resulting implementation (Section~\ref{Section:Evaluation}). We then review similar work (Section~\ref{Section:RelatedWork}) and summarize the contributions of the paper (Section~\ref{Section:Conclusion}). \section{Design} \label{Section:FatTreeOverlay} Our \textit{fat-tree overlay} organizes participants in a tree to increase the number of concurrent connections that can be made to a single origin, while bounding the number of concurrently active WebRTC connections each participant maintains. To establish a WebRTC connection, participants exchange \textit{signals}, or possible connection endpoints, with one another to determine how to connect through the Network-Address Translation (NAT) schemes used by routers. 
The ICE signalling protocol~\cite{ice-draft} used by WebRTC uses a \textit{trickle mode} in which signals are sent as they are discovered. This reduces the latency to open the connection compared to waiting for all endpoints to be identified. The trickle mode generates multiple messages that need to be routed through the tree to exactly the same destination node. Moreover, to minimize the latency and make the tree grow quickly, the depth of nodes should be minimized by making the number of children in sibling sub-trees similar. Our solution solves both problems while requiring only information available locally in each node. Each node maintains a list of at most \textit{ChildrenLimit} children, a deployment parameter with a default of 10. Children are added to that list in the order in which they connect and keep the same index until they either disconnect or crash. As illustrated in Figure~\ref{Figure:FatTreeDesign}, when a new participant joins the tree, the \textit{candidate} first opens a WebSocket channel to the Relay Server and creates a random identifier \textit{id} (Step 1). It then sends multiple join requests that each contain its identifier (\textit{origin}) and one of the WebRTC ICE signals to the Root (Master) node (Step 2). From there, each node has two choices. In the first case, if it has fewer children than \textit{ChildrenLimit}, it adds the candidate as one of its children and attempts to open a WebRTC connection using the candidate's signals. During the opening, it will generate signals of its own that are sent as replies to the candidate through the Relay Server (Step 3). Signals are exchanged by both parties until a direct WebRTC connection is established, after which the WebSocket connection of the candidate is terminated (Step 4). In the second case (not illustrated), the node delegates the requests to one of its children.
If the WebRTC connection to the child is not yet open, the requests are held until the connection is established and then forwarded. Each node makes routing decisions for delegation by xoring the \textit{origin} identifier with the node's identifier \textit{id}, hashing the result, and reducing it modulo \textit{ChildrenLimit} to obtain the numerical index of the child in the children list: \begin{equation*} childIndex = hash(originId \oplus nodeId) \bmod ChildrenLimit \end{equation*} \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{./figures/fat-tree} \caption{\label{Figure:FatTreeDesign} Genet's WebRTC bootstrap with the joining sequence marked with numbered diamonds.} \end{figure} The xor of the \textit{originId} and \textit{nodeId} is not strictly necessary; a concatenation of the bits of both identifiers would also work. The advantage of the xor function is that it provides a result with the same number of bits as the identifiers, which may be useful when all operations need to be performed in fixed-width registers. This routing scheme has three interesting properties. First, routing is \textit{deterministic}: requests from the same origin are routed to the same child at every step of the tree. Second, the choice of a good hash function ensures \textit{probabilistic balancing} of newer connections between the children. Third, by using only information locally available in each node, the routing decisions are \textit{quick} to make because they do not need global information about the tree, which enables a \textit{quick scaling} of the tree on startup. In some cases, nodes may fail and suddenly disconnect during execution. In those cases, their children, once they have detected the failure, will in turn disconnect their own children (if they have any) and all disconnected nodes will try to reconnect to the root. In other cases, the WebRTC connection may fail to open successfully.
Then, the parent node will remove the potential candidate after a configurable timeout, with a default value of 60 seconds. When deploying the scheme on a local network, it is possible to combine the Root node and the Relay Server in the same process. On a wide-area network, however, it is important that the Relay Server has a publicly-facing IP address to enable direct WebSocket connections. Our implementation performs a routing optimization to accelerate the exchange of messages: to reply to signals, a node opens a direct WebSocket connection to the Relay Server. Then, if a candidate receives the first reply-signal before having submitted all its own signals, the candidate will use the origin of the reply as the destination for all subsequent signals. This optimization therefore skips some of the routing steps for the late signals. It is, however, not necessary; another variation that minimizes the number of WebSocket connections to the Relay Server by routing all replies through the Root would also work. We have made our JavaScript implementations of both the Genet algorithm~\cite{webrtc-tree-overlay} and the relay server~\cite{webrtc-bootstrap} available as reusable libraries for Node.js and the browser. \section{Application to Personal Volunteer Computing} \label{Section:Application} We implemented a scalable version of Pando~\cite{lavoie2018pando}, a tool that leverages personal devices' browsers for executing computations in parallel, based on our JavaScript implementation~\cite{webrtc-tree-overlay} of the Genet fat-tree overlay. When a new browser window, executing on the device, successfully connects, it first joins as a leaf in the fat-tree and computes results, therefore acting as a \textit{processor}. When additional browser windows join beyond the $ChildrenLimit$ of the root, the extras connect to the current leaves. The leaves then stop computing and instead start relaying data and results, becoming \textit{coordinators}.
The process repeats at every level of the tree with new devices joining. We have successfully tested a thousand participants (Section~\ref{Section:Evaluation}) but the design should allow potentially millions of devices to connect in a single overlay, the limiting factors being the bandwidth available on the root node and the number of concurrent connections supported by the Relay Server, which determine the joining rate. The implementation of Pando using the Genet overlay follows a recursive structure. Fundamentally, Pando implements a \textit{streaming map} abstraction: it applies the same function on all inputs it receives from a stream and outputs the results in order. The original implementation of Pando uses the \textit{StreamLender} abstraction to coordinate the distribution of values between a dynamic number of children. To handle potential failures, StreamLender keeps values in memory until a result has been provided. In case of a child's failure, StreamLender will automatically re-submit the memorized values to the remaining children. Our scalable implementation re-uses the StreamLender abstraction on intermediary nodes of the fat-tree, as illustrated in Figure~\ref{Figure:ScalablePando}, enabling failures to be handled in the parent of a failing node. Intermediary nodes may also fail. In that case, the parent node will handle the failure by re-submitting the values in a different sub-tree. \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{./figures/scalable-pando} \caption{\label{Figure:ScalablePando} Scalable Pando, based on the Genet fat-tree overlay.} \end{figure} As in the original implementation, the StreamLender abstraction and the streaming model used by Pando are demand-driven and will provide values as quickly as they are requested, enabling faster processors to process more values. However, the WebRTC channel library we use eagerly reads all available values, regardless of the speed at which they are processed on the receiving side.
We therefore regulate the flow by using the Limiter module: it lets a limited number of values flow through, after which newer values are sent only after results have been returned. The limit is dynamically adjusted by periodic reports on the number of children in the sub-tree provided by our implementation, to adjust the flow to a growing or shrinking fat-tree. Using the previous design, the fat-tree overlay enables a larger total throughput for Pando while providing a quick speed of deployment. \section{Evaluation} \label{Section:Evaluation} In the next sections, we evaluate the behaviour of the design, both in simulation over a large number of experiments, and in real-world deployments, over a smaller number of experiments. We also measure the benefits provided by the fat-tree overlay when deployed as part of a real throughput-oriented application in personal volunteer computing. \subsection{Depth with Probabilistic Balancing} We first study the impact of choosing a \textit{probabilistic balancing} scheme on the depth of the fat-tree under various levels of failure, because the depth has a direct impact on the latency of communication between the root and the leaf nodes. \subsubsection{How deep is the fat-tree?} $N$ nodes in a perfectly balanced tree are at a depth less than or equal to $\lceil \log(N) \rceil$. Because they are distributed randomly in our fat-tree, a certain percentage of nodes are deeper. To quantify the percentage of nodes that may be affected, we simulated the construction of the tree with nodes with random identifiers that join one after the other, assuming no nodes crash. We then counted the number of nodes in the extra levels, and repeated the experiment a thousand times. Over a thousand experiments, we observed no nodes two levels deeper, which, while possible in theory, is extremely unlikely in practice. The proportion of nodes at depth $\log(N) + 1$ varied between experiments.
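The simulation just described can be sketched as follows (our reconstruction, not the paper's actual script; the integer hash is a simple stand-in for a good hash function):

```javascript
// Sketch: nodes with random identifiers join one after the other. A join
// walks down from the root; a node with a free slot adopts the candidate,
// otherwise it delegates to the child at hash(origin ^ id) % limit.
function extraLevelFraction(numNodes, childrenLimit) {
  const randId = () => Math.floor(Math.random() * 2 ** 31);
  // Small integer mixer standing in for a cryptographic hash.
  const hash = (x) => {
    x = Math.imul(x ^ (x >>> 16), 0x45d9f3b);
    return (x ^ (x >>> 16)) >>> 0;
  };
  const root = { id: randId(), children: [] };
  const balancedDepth = Math.ceil(Math.log(numNodes) / Math.log(childrenLimit));
  let deeper = 0;
  for (let n = 0; n < numNodes; n++) {
    const originId = randId();
    let node = root;
    let depth = 1;
    while (node.children.length >= childrenLimit) {
      node = node.children[hash(originId ^ node.id) % childrenLimit];
      depth++;
    }
    node.children.push({ id: originId, children: [] });
    if (depth > balancedDepth) deeper++;
  }
  return deeper / numNodes; // fraction of nodes below the balanced depth
}
```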
The results are shown in Figure~\ref{Figure:DepthNoFailures}, as a cumulative distribution function for various sizes of trees, to provide intuitions about both the average behaviour and the maximum cases. Our results show that in a majority of experiments ($\geq$700), 8\% or fewer of the nodes are on the extra level of the tree, regardless of the number of nodes in the tree. They also show that in all cases, 17\% or fewer of the nodes were on the extra level. Moreover, the larger the tree, the closer all experiments get to around 7.5\% of nodes on the extra level. Therefore, in all experiments, $\approx83\%$ of nodes are located no deeper than they would have been if the tree had been fully balanced. \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{./figures/depth-no-failures} \caption{\label{Figure:DepthNoFailures} Number of experiments with X\% or less of nodes at depth $\log(N) + 1$ over 1000 repetitions and no failures.} \end{figure} \subsubsection{Do failures make it deeper?} In practice, a certain number of nodes \textit{will fail} and force their children to reconnect. To quantify the impact, we construct a tree as in the previous experiment, disconnect a certain percentage of nodes, and then let all nodes reconnect through the root. We then count the number of nodes at levels deeper than $\log(N)$. We performed a thousand experiments for trees of size 10, 100, 1000, and 10000 under various probabilities of failure ($F$) from 0 to 1. Over a thousand experiments, we observed no nodes at depth $\log(N) + 2$. In all cases, the failures did not affect the percentage of nodes at depth $\log(N) + 1$. Results with a 25\% probability of failure are shown in Figure~\ref{Figure:DepthDistributionFailure}; the results are the same for other levels of probability. Failures therefore do not change the distribution of nodes through the tree.
\begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{./figures/depth-with-25-failures} \caption{\label{Figure:DepthDistributionFailure} Number of experiments with X\% or less of nodes at depth $\log(N) + 1$ over 1000 repetitions and a failure probability of 0.25.} \end{figure} Our probabilistic balancing scheme therefore achieves the same depth as a deterministic algorithm for at least $83\%$ of nodes, whether failures occur or not. For larger trees, of a thousand nodes or more, this percentage increases to $92.5\%$: \textit{scale therefore increases the effectiveness of the scheme} (up to a limit). \subsection{Bootstrap Latency When Scaling} \label{Section:BootstrapLatency} How quickly does the Genet fat-tree scale in practice? We first measure the latency in establishing a WebRTC connection as a baseline and then measure the added overhead of our scheme to connect all nodes, as a function of the size of the fat-tree, with fat-trees of 10 to 1000 nodes. We performed these measurements on Grid5000~\cite{grid5000} because it is representative of deployments on a local area network, such as those of a university or a large organization, the infrastructure was accessible to us, it facilitates the replication of experiments, and it can easily scale the number of participating nodes. For our experiments, we used Grid5000 nodes from the Grenoble site, which has two models of nodes. The first model (\texttt{dahu}) is based on the Dell PowerEdge C6420 with Intel Xeon Gold 6130 CPUs (Skylake, 2.10GHz, 2 CPUs/node, 16 cores/CPU), 192GB of memory, and 10 and 100 Gbps network links. The second model (\texttt{yeti}) is based on the Dell PowerEdge R940, also with Intel Xeon Gold 6130 CPUs (Skylake, 2.10GHz, 4 CPUs/node, 16 cores/CPU), 768GB of memory, and 10 and 100 Gbps network links.
The exact distribution of nodes for experiments is chosen randomly between the two models, based on availability (because other experiments run concurrently on other machines); in our case it was almost always the \texttt{dahu} nodes that were used, with an occasional \texttt{yeti} node in the mix. All our throughput experiments were also made with these nodes. \subsubsection{How long does it take to establish a single WebRTC connection?} While individual nodes on Grid5000 enjoy sub-millisecond latency (the ping between nodes is typically between 0.1 and 0.2 ms), establishing a WebRTC connection is significantly slower. As explained in Section~\ref{Section:FatTreeOverlay}, each participant in a connection first starts listing potential connection end-points that can enable connections through Network-Address Translation, also contacting STUN servers in the process. For example, the Google STUN server we use (\texttt{stun.l.google.com}) has an average ping latency of 35ms. The endpoints need to be exchanged between participants through a relay and, finally, multiple connections are tried from both sides until one is found to work. Between nodes on a local network, the connection can be established earlier because some of the endpoints will use the local IP address, and therefore a connection on the local area network can be established before the other endpoints discovered by the STUN server are received. Nonetheless, the messages exchanged with the relay server add significantly to the delay. In our tests, the relay server was running on the local network and connected 20 browser windows on 10 Grid5000 nodes forming a fully connected clique, each window opening a connection to every other window. All connection attempts succeeded. For all connections, we measured the latency between creating the connection and a confirmation message sent through the connection, which corresponds to the time it takes before data starts flowing through the connection.
We used \texttt{webrtc-connection-testing}~\cite{webrtc-connection-testing} version 4.0.0, an open source tool we built for this task. The results are shown in Figure~\ref{Figure:WebRTCConnectionLatencyGrid5000Local}. We observed connection latencies of less than 1000ms, with 95.5\% of connections taking less than 500ms. Of the 363 connections that took less than 500ms, 41 took less than 100ms, 112 took between 100ms and 200ms, 132 took between 200ms and 300ms, and 78 took between 400ms and 500ms (not shown on the figure). \begin{figure}[htbp] \includegraphics[width=0.5\textwidth]{./figures/webrtc-connection-latency-grid5000-local-server-4-0} \caption{\label{Figure:WebRTCConnectionLatencyGrid5000Local} WebRTC connection latency distribution over 380 successful connections between nodes on the Grid5000's Grenoble site using a local server.} \end{figure} Therefore, even if Grid5000 nodes have sub-millisecond ping latencies, establishing a WebRTC connection is slow and typically takes three to four orders of magnitude longer, up to 1000ms. As the implementation of WebRTC is part of the implementation of the browser, any overlay design executing in JavaScript is subject to this constraint and will be fully deployed at the speed at which the slowest connections are established. \subsubsection{How long does it take to connect all nodes in a fat-tree?} \label{Section:ConnectionLatency} Fully deploying a fat-tree, in which nodes are organized in multiple layers, should therefore take time at least proportional to the depth of the tree and the time it takes to establish the slowest connections. In this section and the next, we show that this is the case in practice with a full implementation in a working tool that we use for actual applications. We used Pando version 0.17.9~\cite{pando-repository} for our tests, which implements the design of Section~\ref{Section:Application}. The fat-tree is used to distribute inputs to processors and retrieve results.
We used a fat-tree of degree 10, which means that each node has at most 10 children and each layer of the tree will have a multiple of 10 participants in it. We used a test application that waits for 1 second and then returns the square of the input value, for a number of reasons. This removes the impact of potential differences in CPU speeds, making it easier to determine when all processors are connected and producing results, because the overall throughput, in values per second, is equal to the number of participating processors (on the leaves of the fat-tree). In turn, this means our measurements really represent the \textit{coordination overhead} of the entire system. And finally, the time it takes to reach the full throughput represents the bootstrapping latency. We deployed the fat-tree on 10 different Grid5000 nodes on the Grenoble site by progressively increasing the number of browser windows executing on each node, with 1, 5, 10, 25, 50, and 100 windows. We repeated each experiment five times. Each node was connected with a one-second delay after the previous, e.g. the first node opens its browser windows after a 1-second delay, the second with a 2-second delay, etc., up to 10 seconds for the last node. While opening a large number of browser windows at the same time (each executing in their own operating system process) from the same node worked fine, launching browsers at the same time from multiple nodes led to connection errors, which prompted the addition of an artificial delay between Grid5000 nodes. The rate of connection for 10 browser windows is therefore 1 browser window per second for 10 seconds, while the rate of connection for 1000 browser windows is 100 browser windows per second for 10 seconds. As the fat-tree is deploying, periodic status updates are sent from the leaves to the root node to report on the current state of the fat-tree.
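The synthetic task described above is simple enough to sketch directly (a stand-in for the benchmark function, not Pando's actual code):

```javascript
// Sketch of the benchmark task: wait one second, then return the square
// of the input. Since every task takes exactly one second, the output
// throughput in values per second equals the number of leaf processors.
function task(x) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(x * x), 1000);
  });
}
```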
We used an interval of 3 seconds between reports; therefore, the state of the fat-tree may be known at the earliest 3 seconds after having changed. This means the latency that we report in the next figures represents an upper bound on the actual latency to connect the nodes. We measured the time it takes until all browser windows were counted as children in the tree. We then also measured the throughput of squared values at the output of Pando, also by sampling at intervals of 3 seconds. We measured the time it takes until the throughput corresponds to the number of leaves in the fat-tree, as reported by the previous reporting mechanism. The throughput measurements are independent of the reporting strategy, and the latency measured really represents the time observed by a user of Pando until full throughput is achieved. Both results are shown in Figure~\ref{Figure:ConnectionLatency}. \begin{figure}[htbp] \includegraphics[width=0.5\textwidth]{./figures/bootstrap-latency} \caption{\label{Figure:ConnectionLatency} Latency to connect all participants in the fat-tree on the Grid5000's Grenoble site over 5 experiments.} \end{figure} For 10 browser windows, it typically takes about 15 seconds to connect them all in the first layer of the fat-tree. This is about 5 seconds longer than the 10 seconds required to open all browser windows; the additional delay can be explained by a 1-2 second delay before Pando's server is ready after start-up, the maximum latency of 1000ms to establish the slowest connections, as reported in the previous experiment, and the reporting interval, i.e. we learned of that last connection 5 reports after Pando was started. The maximum throughput is first observed one sample later, at 18 seconds, as shown by the 'Output Latency' curve. As the size of the tree increases, so does its depth and therefore the latency to fully connect the fat-tree.
At 100, 250, and 500 browser windows it takes about 18 seconds to fully connect the children, while at 1000 browser windows it takes 24 seconds. The latency to reach maximum throughput follows accordingly by one or two samples: 21 seconds for 50 and 100 browser windows, 24 seconds for 250 and 500 browser windows, and 28 seconds for 1000 browser windows. The variation between experiments also grows larger with the size of the tree: we measured a latency of up to 54 seconds, both for connecting children and for reaching maximum throughput, in a single experiment. We can therefore conclude that it typically takes about 30 seconds to connect all nodes in our WebRTC fat-tree, and sometimes up to a minute, on the Grid5000 testbed. \subsection{Throughput Ramp-up in the Collatz Application} Now that we have established a typical latency of 30 seconds to reach maximum throughput, does the fat-tree, when used with Pando, behave in the same way when the leaf nodes are actually performing computations, as described in Section~\ref{Section:Application}? We answer that question by taking one representative application of volunteer computing, the Collatz Conjecture~\cite{boinc-collatz}, which has also been implemented~\cite{boincprojects} using the BOINC~\cite{anderson2004boinc} infrastructure. This is essentially a number-crunching application, in which very small inputs require a significant amount of computation, with most of the time spent manipulating big integers that do not fit into registers. We implemented the application in JavaScript with an off-the-shelf Big Number library. For the purpose of measuring the scaling behaviour, the single-core performance is not critical: a faster implementation would increase our throughput measurements by a constant factor, obtained by better using the CPU, while hardly affecting the scaling behaviour, which is instead due to the coordination performed by the fat-tree.
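As an illustration of the per-input computation, the Collatz iteration can be sketched as follows (a minimal Python sketch; Python's arbitrary-precision integers play the role of the off-the-shelf Big Number library used in our JavaScript implementation, and the actual inputs and checks performed by the BOINC Collatz project are more involved):

```python
def collatz_steps(n):
    """Number of Collatz steps (n -> n/2 if even, 3n+1 if odd) needed
    to reach 1.  Intermediate values can exceed machine-word size,
    which is why big-integer arithmetic dominates the running time."""
    assert n >= 1
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```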
Studying the throughput scaling behaviour on actual applications is complicated by the fact that not all tasks take the same amount of time. The throughput at the output of Pando can vary both because nodes join and because tasks are temporarily faster or slower. Moreover, as our fat-tree design probabilistically balances the tree, the actual number of leaves that are processing inputs varies between experiments. It is therefore harder to determine when all connected nodes have started contributing to computations. We therefore first measured the average throughput with a given number of nodes, letting the deployment compute for at least three minutes. We then took the average throughput measured after all the nodes were connected and counted the number of participating processors (leaves). We measured for 10 Grid5000 nodes, with 1, 16, and 32 browser windows per node, the last being the maximum number of cores available on the machines of the Grenoble site. The results are shown in Figure~\ref{Figure:CollatzAverageThroughput}. \begin{figure}[htbp] \includegraphics[width=0.5\textwidth]{./figures/average-throughput} \caption{\label{Figure:CollatzAverageThroughput} Average throughput on the Collatz application on Grid5000 as the number of cores used increases.} \end{figure} As the number of browser windows increases, so does the average throughput, showing a clear benefit to scaling the number of participating cores. However, the results are not quite linear. This is actually not due to the fat-tree design but to contention for CPU resources on the same machine: in a quick second experiment with 10 browser windows on a single machine, we obtained $\approx 480 \frac{BigNums}{(s*node)}$ rather than the $\approx 560 \frac{BigNums}{(s*node)}$ we obtained with 10 browser windows on 10 nodes.
We then used the previous average throughput as a target to determine the time it takes before all cores are actually contributing results, when deployed with the fat-tree overlay. We measured the time it takes until the output throughput reaches the average throughput measured previously, adjusted for the actual number of participating processors (leaf nodes in the fat-tree), using the same methodology as in Section~\ref{Section:ConnectionLatency}. The results are shown in Figure~\ref{Figure:CollatzBootstrapLatency}. \begin{figure}[htbp] \includegraphics[width=0.5\textwidth]{./figures/throughput-ramp-up-latency} \caption{\label{Figure:CollatzBootstrapLatency} Latency to reach maximum throughput on the Collatz application on Grid5000 as the number of cores increases.} \end{figure} The results are consistent with the previous ones, even slightly better, probably because of the uncertainty added by the 3-second sampling interval. In this case again, reaching maximum throughput typically takes 15 seconds with 10 nodes ($\approx$ 10 cores), 18 seconds with 160 nodes ($\approx$ 110 cores), and 21 seconds with 320 nodes ($\approx$ 220 cores). \subsection{WebRTC Connection Probability and Establishment Latency on the Internet} The previous results show that our WebRTC fat-tree design is effective in quickly deploying a large number of nodes on a local area network, and they provide a methodology for systematically studying the performance of such designs for volunteer computing applications, neither of which had been done before. In this section we provide some additional intuition about how a deployment that targets the Internet should be adapted, and we show how the tools we built for the previous experiments can be used in that setting to motivate future work.
The previous results already show that the latency in establishing WebRTC connections is a significant factor in the overall latency of deploying a fat-tree, because even on a fast local network, a connection can take up to 1000 ms to be established. We tested two additional settings, one in which the relay server for exchanging the connection endpoints is located outside the local network and a second in which the participants are distributed across the planet. We show the results of the first experiment in Figure~\ref{Figure:WebRTCConnectionLatencyGrid5000Remote}, when establishing connections between browser windows executing on Grid5000 but relying on a remote server located in Paris, France\footnote{Running on Amazon Cloud.}, for relaying signalling messages. The ping latency from Grenoble to that server takes 40ms on average and ranges from 13ms to 150ms, about 130-1500 times higher than between nodes within Grid5000. All connections also succeeded in this case. We observed similar results as for the experiment with a local server but with greater variability, with some connections taking from 1000ms up to 16s to be established. Among the fastest established connections, 16 took less than 100ms, 123 took between 100ms and 200ms, 82 took between 200ms and 300ms, 31 took between 300ms and 400ms, 29 took between 400ms and 500ms, 36 took 500ms to 600ms, and 17 took 600ms to 700ms (not shown in the figure); together they account for 89\% of all latency results. Compared to using a local server, 22.6\% fewer connections take less than 500ms and almost three times more take between 500ms and 1000ms. Connections therefore have additional latency as well as greater variability, as would be expected from messages routed on the Internet.
\begin{figure}[htbp] \includegraphics[width=0.5\textwidth]{./figures/webrtc-connection-latency-grid5000-remote-server-4-0} \caption{\label{Figure:WebRTCConnectionLatencyGrid5000Remote} WebRTC connection latency distribution over 380 successful connections between nodes on the Grid5000's Grenoble site using a remote server.} \end{figure} For the second experiment, we asked 20 participants randomly selected among Mechanical Turk~\cite{mechanical-turk} workers to open a web page that tested their WebRTC connectivity to other participants and the experimenter. Out of the 21 participants, including the experimenter, 17 chose to voluntarily disclose their location using the geolocation API of their browser. The world-wide distribution of participants is shown in Figure~\ref{Figure:GeographicalDistributionWorkers}. Between all participants that were connected to the relay server at the same time, 398 WebRTC connection attempts were made, out of which 194 succeeded, for a success ratio of 48.7\%. This shows, unsurprisingly, that random connections between participants do not always succeed. However, contrary to our initial expectations, almost half of the connections succeeded. \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{./figures/geolocation-participants} \caption{\label{Figure:GeographicalDistributionWorkers} Geographical location of participants that accepted to share their location.} \end{figure} The latency in establishing connections, as shown in Figure~\ref{Figure:WebRTCConnectionLatency}, has more variation compared with the local and remote server experiments on Grid5000. Except for one result, all other connections took at least 500ms to be established and most results are well-distributed between 500ms and 8500ms. 
Compared to Figure~\ref{Figure:WebRTCConnectionLatencyGrid5000Local} and Figure~\ref{Figure:WebRTCConnectionLatencyGrid5000Remote}, it is therefore more typical for a participant to take several seconds to be connected.\footnote{This experiment used the previous 3.0.2 version of the \texttt{webrtc-connection-testing} tool~\cite{webrtc-connection-testing}, which can introduce an additional connection delay because participants that are already connected receive notifications of newer participants only every 5 seconds. This was fixed in version 4.0.0 (which was used in the previous experiments) to send notifications as soon as participants are connected. Because we could not assemble again the same set of participants to repeat the Internet deployment of the original paper submission, we present the original results. While the average latency could possibly be lower, we would expect to still observe an increased variability of the latencies and values of at least a few seconds.} \begin{figure}[htbp] \includegraphics[width=0.5\textwidth]{./figures/webrtc-connection-latency} \caption{\label{Figure:WebRTCConnectionLatency} WebRTC connection latency distribution over 194 successful connections between world-wide participants.} \end{figure} Supposing our results generalize, which should be validated in larger settings, choosing a random Internet participant for connection would lead to a successful connection almost half the time. However, this also highlights the need for mechanisms to tolerate failures of initial connections. One possible solution would be to first test for connectivity before deploying the fat-tree, which unfortunately would lead to a higher connection latency. A second possible solution would be to attempt multiple random connections when a participant joins. In effect, bootstrapping the fat-tree this way would lead to a mesh network, which could then, for example, be made to converge to a more efficient topology if necessary.
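Assuming the measured 48.7\% success ratio generalizes and that attempts are independent (a strong assumption, since failures may be correlated by NAT configuration), the expected benefit of attempting multiple random connections can be estimated with the following sketch (the function name is ours):

```python
def p_at_least_one(p_single, k):
    """Probability that at least one of k independent random connection
    attempts succeeds, given per-attempt success probability p_single.
    Assumes independence between attempts, which is optimistic when
    failures share a common cause such as a restrictive NAT."""
    return 1.0 - (1.0 - p_single) ** k

# With p_single = 0.487, three parallel attempts already succeed
# roughly 86% of the time under the independence assumption.
```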
\section{Related Work} \label{Section:RelatedWork} \textit{Fat-tree topologies} were originally proposed to provide high-bandwidth communication between nodes in computing clusters while minimizing the cost of switching hardware~\cite{Leiserson85}. Fat-trees derive their name from the increasing bandwidth requirements on edges closer to the root, because these edges relay traffic for all children in the underlying sub-tree. Fat-trees were later also adopted, explicitly or implicitly, in \textit{overlay networks}, in which nodes connected using Internet protocols are organized in logical networks for efficient communication, to provide, for example, multicast communication~\cite{zhang2012survey,birrer2004fatnemo}. Extensive work on tree overlays for multicast applications has been done since the 90s~\cite{zhang2012survey}, in which the same data is disseminated from a single source to tens to millions of participants. Typical applications of volunteer computing have different data transfer patterns because each participant receives a different subset of data. BOINC submits the same computation to a small number of participants (at least three) until a majority agrees~\cite{taufer2005homogeneous}, while the current version of Pando~\cite{lavoie2018pando} does not use redundancy because the code is executed on trusted devices. In addition, in both cases, each participant will return different results to the root. To the best of our knowledge, we are the first to propose a fat-tree overlay for scaling volunteer computing applications that supports an infinite number of inputs and provides a \textit{decentralized} scheme for allocating nodes in the tree. ATLAS~\cite{baldeschwieler1996atlas}'s tree of managers organized around work-stealing is perhaps the oldest documented scheme that relies on a tree for scalability, but few details about the implementation were provided and the actual implementation was tested with only 8 machines.
Javelin++~\cite{neary2000javelin++} relies on a tree structure to implement a \textit{distributed work-stealing} scheduler, but the scheme relies on tasks being finite and the position of a new node in the tree is computed from the root. Bayanihan~\cite{sarmenta1998bayanihan} conceived a tree of servers that maps to the underlying network topology when the bandwidth on the link to a single server is insufficient, but to the best of our knowledge the scheme was never implemented. Connection decisions in our scheme do not require global information about the tree, yet they ensure probabilistic balancing and guarantee the routing of multiple connection messages to the same leaf node. BOINC~\cite{anderson2004boinc} currently supports hundreds of thousands of participants but relies on a dedicated server with sufficient resources and an interaction pattern that is tailored to long-running computations. Volunteers obtain the task to perform and transmit the results in two different remote procedure calls. Participant failures are detected with a soft limit on the expected time to completion, which therefore requires an estimate that is application-dependent. Our design is tailored to shorter-running tasks and instead relies on the heartbeat mechanism provided by WebRTC to detect the failure of a participant. Moreover, by relying on WebRTC to scale up the number of concurrent connections, we can support at least a thousand participants without investing in dedicated hardware or renting hosted resources. Compared to other published volunteer computing tools, we are the first to have successfully tested with a thousand participants and the first to use WebRTC to connect participants in a fat-tree overlay.
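BOINC's redundancy scheme mentioned above, submitting the same computation to at least three participants until a majority agrees, can be sketched as the following acceptance rule (a simplification for illustration; BOINC's actual validators are application-specific, and the names are ours):

```python
from collections import Counter

def majority_result(replica_results):
    """Accept a result once a strict majority of the replicas that have
    reported so far agree on it; otherwise return None, meaning the
    scheduler should keep waiting or replicate the task further."""
    if not replica_results:
        return None
    value, count = Counter(replica_results).most_common(1)[0]
    return value if count > len(replica_results) / 2 else None
```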
Most published volunteer computing tools~\cite{alexandrov1997superweb,baratloo1999charlotte,cappello1997javelin,sarmenta1998bayanihan,duda2012distributed,martinez2015capataz,cushing2013weevilscout,kuhara2014peer,leclerc2016space} were tested with fewer than a hundred participants. Some of the most recent have been tested with more than a hundred participants~\cite{merelo2008asynchronous,langhans2013crowdsourcing,meeds2015mlitb} and even up to 400 concurrent participants~\cite{dkebski2013comcutejs}. But the largest internet deployments of custom tools~\cite{merelo2008asynchronous,langhans2013crowdsourcing,meeds2015mlitb} have so far reached a hundred concurrent participants~\cite{meeds2015mlitb}. WebRTC~\cite{webrtc} has been used in the design of other kinds of overlay networks, including content delivery~\cite{hive.js}, real-time collaboration~\cite{vanderLinde2017legion}, and virtual reality~\cite{Hu:2017}. Kuhara et al.~\cite{kuhara2014peer} have proposed a service to share files for volunteer computing, but they tested their system on a single machine. BrowserCloud.js~\cite{dias2018browser} is the only other distributed computing platform we are aware of that also uses WebRTC as an overlay. Contrary to our design, it is organized around a distributed hash table rather than a tree, and tasks are pushed from the submitting peer to available workers rather than being pulled by workers as they become free. The implementation of browserCloud.js has been tested on 10-25 browsers on a single machine, which provides little information about the speed at which their overlay can scale in deployments on a local network. Spray~\cite{nedelec2018adaptive} is a peer sampling implementation that also uses WebRTC; it was also tested on the Grid5000 testbed, with up to 600 concurrent browsers. However, their experiments limit the rate at which participants join to 1 per 5 seconds, so it takes 50 minutes for the 600 browsers to join.
In a similar setup, our fat-tree overlay deploys on a thousand browsers in 20-55 seconds. \section{Conclusion and Future Work} \label{Section:Conclusion} We have presented the Genet Fat-Tree overlay, which enables quick scaling by relying on a novel scheme that only requires \textit{local information} to route connection messages. The routing scheme derives the destination of a message from the hash value of the combined identifiers of the message's source and of the node that is holding the message. The scheme provides \textit{deterministic routing} of a sequence of connection messages from a single source and \textit{probabilistic balancing} of newer connections among the leaves, which is especially useful when implemented with WebRTC because opening a new channel requires the exchange of multiple independent \textit{signal} messages between participants. We have shown that the probabilistic balancing of the tree, induced by the routing scheme, puts at least 83\% of nodes at a depth similar to the one they would have had with a deterministic balancing algorithm, increasing to 92.5\% on trees of a thousand nodes or more. We have also shown that an implementation of the design could connect a thousand browser windows in a local area network in 22-55 seconds and enable the throughput on the Collatz application to increase by two orders of magnitude, coordinating over 220 computing cores out of 320 participating cores. We have finally motivated future work to generalize the design for a world-wide setting, by taking into account the lower connection probability in a wide-area network and the increased connection latency. The Genet Fat-Tree overlay could be applied to problems other than volunteer computing. The most promising seems to be bootstrapping other overlay networks built with WebRTC.
It could, for example, implement a peer sampling protocol, such as Spray~\cite{nedelec2018adaptive}, and the initial bootstrap could be made fast by having new nodes join \textit{multiple nodes} in the tree, forming a mesh that could then progressively converge to an efficient topology. The quick scaling ability of the design we have presented is therefore complementary to potential refinements based on existing overlay designs. \section{Acknowledgements} This material is based upon work supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada and the Fonds de recherche du Qu\'ebec -- Nature et technologies (FRQNT). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of NSERC or FRQNT. Experiments presented in this paper were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see \url{https://www.grid5000.fr}). \bibliographystyle{plain}
\section*{Introduction}\addcontentsline{toc}{section}{Introduction} Monoidal categories were introduced by Jean B\'enabou~\cite{Ben63} and Saunders Mac Lane~\cite{MLane63} in order to generalize the idea of tensor product in arbitrary categories. It is well known that, in the case of the usual tensor product of vector spaces, there is a natural isomorphism between $V\otimes W$ and $W\otimes V$. In order to study whether this property also holds in an arbitrary monoidal category, i.e.\ when the tensor product is commutative, Joyal and Street defined in~\cite{JSBM} the concept of braiding for monoidal categories as a natural isomorphism $\tau_{A,B}\colon A\otimes B\xrightarrow{}B\otimes A$. When we try to study the concept of braiding for the simplest case of monoidal categories (categorical monoids, or internal categories in the category of monoids) we encounter the obstacle that not all internal morphisms are internal isomorphisms, so the braiding cannot be an arbitrary internal morphism verifying simple properties. To avoid this problem we can work with (strict) categorical groups instead of categorical monoids, obtaining immediately the definition of braided categorical group (see~\cite{JSBM}). On the other hand, in 1949 Whitehead~\cite{Whi} introduced the notion of crossed module of groups as an algebraic model for homotopy 2-types (i.e.\ connected spaces with trivial homotopy groups in dimension $>2$). In 1984, Conduch\'e~\cite{Condu} introduced the notion of braided crossed module of groups as a particular case of $2$-crossed module of groups. It is well known that the categories of crossed modules of groups and categorical groups are equivalent, and Joyal and Street proved in~\cite{JSBM} that the notion of braiding for categorical groups gives a category equivalent to the category of braided crossed modules of groups introduced by Conduch\'e~\cite{Condu}.
The notions of crossed modules of associative algebras~\cite{DeLu66}, Lie algebras~\cite{KassLod} and Leibniz algebras~\cite{LodayPira} appeared in imitation of crossed modules of groups, and it was proven that the corresponding categories are equivalent to their respective categories of internal categories. Keeping in mind what is done for groups, in this paper we will give definitions of braidings for the aforementioned internal categories and crossed modules. The case of associative algebras is not complex, because the associativity allows us to work in a natural way with braidings on semigroupal categories~\cite{CrYet98}. The notion of braiding for Lie algebras was already given by Ulualan~\cite{Ulua}. On the other hand, Ellis~\cite{Ellis93} defined the notion of $2$-crossed module of Lie algebras, also studied by Martins and Picken~\cite{M&P}. We will use a slightly different definition of braiding for crossed modules of Lie algebras than the one given by Ulualan~\cite{Ulua}, since we want a parallelism between the examples of braided crossed modules of groups and braided crossed modules of Lie algebras, and we want braided crossed modules to be a particular case of $2$-crossed modules, as happens in the group case. The Leibniz algebras case will be studied in the second part of this paper~\cite{FFBraidII}. This first manuscript is organized as follows. In the preliminaries we recall some basic definitions and give the notion of braiding for semigroupal categories. In Section~\ref{S:braidgrp} we show that, in all the internal categories we will work with, all internal morphisms are internal isomorphisms, which motivates the study of braided crossed modules of groups instead of braided categorical monoids. In Section~\ref{S:braidassalg} we introduce the notions of braided categorical associative algebra and braided crossed module of associative algebras and we show that these categories, as in the group case, are equivalent.
In Section~\ref{S:braidLiealg} we motivate the definition given by Ulualan~\cite{Ulua} for braided crossed modules of Lie algebras using our definition of braiding for crossed modules of associative algebras, and we give a simpler definition when $\car(K)\neq 2$. We also discuss a different definition of braided crossed module of Lie algebras, showing its relationship with the associative case. In Section~\ref{S:Lienonabtensor} we see the non-abelian tensor product of groups as an example of braided crossed module of groups. Moreover, with our definition of braiding for crossed modules of Lie algebras, we obtain in a parallel way an example of braiding using the non-abelian tensor product. \section{Preliminaries} \subsection{Internal Categories} \begin{defi} Let \textit{\textbf{C}} be a category with pullbacks. An \emph{internal category} in $\textit{\textbf{C}}$ consists of two objects $C_1$ (\emph{morphisms object}) and $C_0$ (\emph{objects object}) of $\textit{\textbf{C}}$, together with the following four morphisms: \[ \xymatrix@=3em{ C_0 \ar[r]|-{e} & C_1\ar@<1ex>[l]^-{s} \ar@<-1ex>[l]_-{t} & C_1\times_{C_0}C_1 \ar[l]_-{k}, } \] where $C_1\times_{C_0}C_1$ is the pullback of $t$ and $s$. The morphism $s$ is called the \emph{source morphism}, $t$ the \emph{target morphism}, $e$ the \emph{identity mapping morphism} and $k$ the \emph{composition morphism}. In addition, the morphisms must verify commutative diagrams that express the usual category laws (see~\cite{Baez04}). When these conditions are satisfied we will refer to the internal category by the $6$-tuple $(C_1,C_0,s,t,e,k)$. \end{defi} \begin{defi} Let $\mathcal{C}=(C_1,C_0,s,t,e,k)$ and $\mathcal{C'}=(C_1',C_0',s',t',e',k')$ be two internal categories in \textbf{\textit{C}}.
An \emph{internal functor} is a pair of morphisms $(F_1,F_0)$, with $F_1\colon C_1\xrightarrow{} C_1'$ and $F_0\colon C_0\xrightarrow{} C_0'$, which must verify commutative diagrams corresponding to the usual laws satisfied by a functor (see~\cite{Baez04}). We will denote the internal functor by $(F_1,F_0)\colon\mathcal{C}\xrightarrow{}\mathcal{C'}$. \end{defi} Composition of internal functors is defined in the obvious way. This allows us to construct the category of internal categories and internal functors in a category with pullbacks \textbf{\textit{C}}, denoted by $\textbf{\textit{ICat}}(\textbf{\textit{C}})$. An internal category in $\textbf{\textit{C}}$ will also be called a categorical object in $\textbf{\textit{C}}$. \subsection{Algebras} \begin{defi} Let $K$ be a field and $(M,*)$ be a $K$-algebra. A \emph{derivation} over $(M,*)$ is a $K$-linear map $D\colon M\xrightarrow{}M$ verifying the \emph{Leibniz rule}: \begin{align*} D(x*y)=D(x)*y+x*D(y), \ x,y\in M. \end{align*} \end{defi} \begin{remark} Let $(M,*)$ be a $K$-algebra. It is immediate to check that, if we take $x\in M$, the map $R(x)\colon M\xrightarrow{}M$ defined by $R(x)(y)=y*x$ (right multiplication) is $K$-linear, by the $K$-bilinearity of the product. \end{remark} \begin{defi} We will say that the $K$-algebra $(M,*)$ is a \emph{(right) Leibniz $K$-algebra} if and only if $R(x)$ is a derivation over $(M,*)$ for all $x\in M$. We denote $x*y=:[x,y]$ and call the operation $[-,-]$ the \emph{Leibniz bracket}. If, in addition, $(M,[-,-])$ is an alternating $K$-algebra ($[x,x]=0, \ x\in M$) we will say that it is a \emph{Lie $K$-algebra} and we will call the operation $[-,-]$ the \emph{Lie bracket}. \end{defi} \begin{remark} The fact that $R(z)$ is a derivation for all $z\in M$ can be restated as the following identity for $x,y,z\in M$, called the \emph{Leibniz identity}: \begin{equation*} [x,[y,z]]=[[x,y],z]-[[x,z],y].
\end{equation*} Indeed, the Leibniz rule for $R(z)$ applied to $x*y$ reads $[[x,y],z]=[[x,z],y]+[x,[y,z]]$, which rearranges to the identity above. If in addition the $K$-algebra is anticommutative (for example, a Lie $K$-algebra), we can rewrite the equality, obtaining the \emph{Jacobi identity}: \begin{equation*} [x,[y,z]]+[y,[z,x]]+[z,[x,y]]=0. \end{equation*} \end{remark} \begin{defi} If $(M,*)$ and $(N,\star)$ are $K$-algebras, a \emph{homomorphism} between them is a $K$-linear map $M\xrightarrow{f}N$ such that $f(x*y)=f(x)\star f(y)$. \end{defi} We have the categories $\textbf{\textit{AssAlg}}_K$, $\textbf{\textit{LieAlg}}_K$ and $\textbf{\textit{LeibAlg}}_K$ taking, as objects (respectively), associative, Lie or Leibniz $K$-algebras, and as morphisms, homomorphisms of $K$-algebras between them. We will denote by $\textbf{\textit{Vect}}_K$ and $\textbf{\textit{Grp}}$ the categories of $K$-vector spaces and groups. \subsection{Crossed Modules} \subsubsection{Crossed Modules of groups}\hfill Crossed modules of groups were introduced by Whitehead in~\cite{Whi}. \begin{defi} A \emph{crossed module of groups} is a $4$-tuple $(G,H,\cdot,\partial)$ where $G$ and $H$ are groups, $\cdot$ is an action of $H$ on $G$ by automorphisms, $\partial\colon G\xrightarrow{}H$ is a group homomorphism and the following properties are satisfied for $g,g'\in G$, $h\in H$: $\partial$ is an $H$-equivariant map (we suppose the conjugation action of $H$ on itself), i.e. \begin{equation}\label{XGr1} \partial(h\cdot g)=h\partial(g)h^{-1}.\tag{XGr1} \end{equation} \emph{Peiffer identity}: \begin{equation}\label{XGr2} \partial(g)\cdot g'=gg'g^{-1}.\tag{XGr2} \end{equation} \end{defi} \begin{example} It is clear from the definitions that if $G$ is a group, then $(G,G,\Conj,\Id_G)$ is a crossed module of groups, where $\Conj$ is the conjugation action.
\end{example} \begin{defi} A \emph{homomorphism of crossed modules of groups} between $(G,H,\cdot,\partial)$ and $(G',H',*,\partial')$ is a pair of group homomorphisms, $f_1\colon G \xrightarrow{}G'$ and $f_2\colon H\xrightarrow{}H'$ such that: \begin{align*} f_1(h\cdot g)&=f_2(h)*f_1(g), \quad g\in G, h\in H, \tag{XGrH1}\\ \partial' \circ f_1&=f_2\circ \partial.\tag{XGrH2} \end{align*} \end{defi} \begin{remark} There is an equivalence between the category $\textbf{\textit{ICat}}(\textbf{\textit{Grp}})$ and the category of crossed modules of groups (see~\cite{Baez04}). \end{remark} \subsubsection{Crossed Modules of associative algebras}\hfill The definition of action in the associative algebras case is the following one. \begin{defi} Let $N$ and $M$ be two associative $K$-algebras. An \emph{associative action} of $N$ on $M$ is a pair of $K$-bilinear maps $*=(*_1,*_2)$, where $*_1\colon N\times M\xrightarrow{}M$ and $*_2\colon M\times N\xrightarrow{}M$, $(n,m)\mapsto n*_1 m$ and $(m,n)\mapsto m*_2 n$, verifying: \begin{align} n*_1 (mm')&=(n*_1 m)m',\tag{AAs1}\\ n*_1 (m*_2 n')&=(n*_1 m)*_2 n',\tag{AAs2}\\ n*_1(n'*_1m)&=(nn')*_1 m,\tag{AAs3}\\ m*_2(nn')&=(m*_2 n)*_2 n',\tag{AAs4}\\ m(n*_1m')&=(m*_2 n)m',\tag{AAs5}\\ m(m'*_2n)&=(mm')*_2 n.\tag{AAs6} \end{align} for $m,m'\in M$, $n,n'\in N$. \end{defi} \begin{remark} If we denote both $*_1$ and $*_2$ by $*$, viewing $*$ as a multiplication, the axioms of an associative action are all the possible rewrites of associativity when we choose two elements in $M$ and one in $N$, or one in $M$ and two in $N$. In particular, if $*$ is the multiplication of an associative $K$-algebra $M$, then the pair $(*,*)$ is an associative action of $M$ on itself. \end{remark} The definition of crossed module of associative algebras was given by Dedecker and Lue in~\cite{DeLu66}.
\begin{defi} A \emph{crossed module of associative $K$-algebras} is a $4$-tuple $(M,N,*,\partial)$ where $M$ and $N$ are associative $K$-algebras, $*=(*_1,*_2)$ is an associative action of $N$ on $M$, $\partial\colon M\xrightarrow{}N$ is an associative $K$-homomorphism and the following properties are verified for $m,m'\in M$, $n\in N$: $\partial$ is an $N$-equivariant associative $K$-homomorphism (we suppose the action of $N$ on itself is the product), i.e. \begin{equation}\label{XAs1} \partial(n*_1 m)=n\partial(m) \quad \text{and} \quad \partial(m*_2 n)=\partial(m)n.\tag{XAs1} \end{equation} Peiffer identity: \begin{equation}\label{XAs2} \partial(m)*_1 m'=mm'=m*_2\partial(m').\tag{XAs2} \end{equation} \end{defi} \begin{example} If $M$ is an associative $K$-algebra then $(M,M,(*,*),\Id_M)$ is a crossed module of associative $K$-algebras. \end{example} \begin{defi} A \emph{homomorphism of crossed modules of associative $K$-algebras} between $(M,N,\cdot,\partial)$ and $(M',N',*,\partial')$ is a pair of associative $K$-homomorphisms, $f_1\colon M \xrightarrow{}M'$ and $f_2\colon N\xrightarrow{}N'$ such that: \begin{align*} f_1(n\cdot_1 m)=f_2(n)*_1f_1(m) \quad &\text{and} \quad f_1(m\cdot_2 n)=f_1(m)*_2 f_2(n),\tag{XAssH1}\\ \partial' \circ f_1&=f_2\circ \partial,\tag{XAssH2} \end{align*} for $m\in M$, $n\in N$. \end{defi} We will denote by $\textbf{\textit{X}}(\textbf{\textit{AssAlg}}_K)$ the category of crossed modules of associative $K$-algebras and its homomorphisms. \begin{remark} As in the case of groups, we have an equivalence between the categories $\textbf{\textit{ICat}}(\textbf{\textit{AssAlg}}_K)$ and $\textbf{\textit{X}}(\textbf{\textit{AssAlg}}_K)$. A proof can be found in~\cite{ThRa}. \end{remark} \subsubsection{Crossed Modules of Lie algebras}\hfill We have an analogous definition for the case of Lie $K$-algebras. Crossed modules of Lie $K$-algebras were introduced by Kassel and Loday in~\cite{KassLod}.
\begin{defi} Let $M$ and $N$ be two Lie $K$-algebras. A \emph{Lie (left-)action of $N$ on $M$} is a $K$-bilinear map $\cdot\colon N\times M\longrightarrow M$, $(n,m)\longmapsto n\cdot m$, satisfying: \begin{align} [n,n']\cdot m&=n\cdot(n'\cdot m)-n'\cdot(n\cdot m),\tag{ALie1}\label{ALie1}\\ n\cdot[m,m']&=[n\cdot m,m']+[m,n\cdot m'],\tag{ALie2}\label{ALie2} \end{align} for $n,n'\in N$, $m,m'\in M$. \end{defi} \begin{example} Note that, if we denote $\cdot=[-,-]$, the two identities are the two possible rewrites of the Jacobi identity taking two elements in $N$ or two in $M$. In particular, we have that the adjoint map $\Ad(x)\colon M\xrightarrow{}M$, where $M$ is a Lie $K$-algebra and $x\in M$, defined by $\Ad(x)(y)=[x,y]$, is a Lie action of $M$ on itself. \end{example} \begin{defi} A \emph{crossed module of Lie $K$-algebras} is a $4$-tuple $(M,N,\cdot,\partial)$ where $M$ and $N$ are Lie $K$-algebras, $\cdot$ is a Lie action of $N$ on $M$, $M\xrightarrow{\partial}N$ is a Lie $K$-homomorphism and the following properties are satisfied for $n\in N$ and $m,m'\in M$: $\partial$ is an $N$-equivariant Lie $K$-homomorphism (we take the adjoint action of $N$ on itself), i.e. \begin{equation}\label{XLie1} \partial(n\cdot m)=[n,\partial(m)].\tag{XLie1} \end{equation} Peiffer identity: \begin{equation}\label{XLie2} \partial(m)\cdot m'=[m,m'].\tag{XLie2} \end{equation} \end{defi} \begin{example} As in the case of groups, we have the example of a crossed module of Lie $K$-algebras $(M,M,[-,-],\Id_M)$, where $M$ is a Lie $K$-algebra. Note that the adjoint action replaces the conjugation action in this case.
\end{example} \begin{defi} A \emph{homomorphism of crossed modules of Lie $K$-algebras} between $(M,N,\cdot,\partial)$ and $(M',N',*,\partial')$ is a pair of Lie $K$-homomorphisms, $f_1\colon M \xrightarrow{}M'$ and $f_2\colon N\xrightarrow{}N'$, such that: \begin{align*} f_1(n\cdot m)&=f_2(n)*f_1(m),\quad m\in M, n\in N,\tag{XLieH1}\\ \partial' \circ f_1&=f_2\circ \partial.\tag{XLieH2} \end{align*} \end{defi} We want to show that there is a natural way to connect crossed modules of associative $K$-algebras with crossed modules of Lie $K$-algebras. \begin{prop}\label{AsAc->LieAc} Let $M$ and $N$ be two associative $K$-algebras. We will denote the Lie $K$-algebra associated to an associative $K$-algebra $A$ by $A^\mathcal{L}$; that is, $A^\mathcal{L}$ is the Lie $K$-algebra with the bracket $[a,a']=aa'-a'a$. Then, if $*=(*_1,*_2)$ is an associative action of $N$ on $M$, we have that the map $[-,-]_*\colon N\times M\xrightarrow{}M$, defined as $(n,m)\mapsto [n,m]_*=n*_1 m-m*_2 n$, is a Lie action of $N^\mathcal{L}$ on $M^\mathcal{L}$. \begin{proof} Let $n,n'\in N^\mathcal{L}$, $m,m'\in M^\mathcal{L}$. First we prove~\eqref{ALie1}. \begin{align*} &[n,[n',m]_*]_*-[n',[n,m]_*]_*\\ &=n*_1(n'*_1 m)-n*_1(m*_2 n')-(n'*_1 m)*_2 n+(m*_2 n')*_2 n\\ &-n'*_1(n*_1 m)+n'*_1(m*_2 n)+(n*_1 m)*_2 n'-(m*_2 n)*_2 n'\\ &=(nn')*_1 m-n*_1(m*_2 n')-n'*_1 (m*_2 n)+m*_2 (n'n)\\ &-(n'n)*_1 m+n'*_1(m*_2 n)+n*_1 (m*_2 n')-m*_2 (nn')\\ &=(nn')*_1 m+m*_2 (n'n)-(n'n)*_1 m-m*_2 (nn')\\ &=[n,n']*_1 m-m*_2 [n,n']=[[n,n'],m]_*. \end{align*} Next we verify~\eqref{ALie2}: \begin{align*} &[[n,m]_*,m']+[m,[n,m']_*]\\ &=(n*_1 m)m'-m'(n*_1 m)-(m*_2 n)m'+m'(m*_2 n)\\ &+m(n*_1m')-(n*_1m')m-m(m'*_2n)+(m'*_2n)m\\ &=n*_1 (mm')-(m'*_2n)m-(m*_2 n)m'+(m'm)*_2 n\\ &+(m*_2n)m'-n*_1(m'm)-(mm')*_2n+(m'*_2n)m\\ &=n*_1 (mm')+(m'm)*_2 n-n*_1(m'm)-(mm')*_2n\\ &=n*_1[m,m']-[m,m']*_2 n=[n,[m,m']]_*. \end{align*} In both we use the ``associativity'' of associative actions.
\end{proof} \end{prop} \begin{prop} If $(M,N,*,\partial)$ is a crossed module of associative $K$-algebras, then $(M^\mathcal{L},N^\mathcal{L},[-,-]_*,\partial)$ is a crossed module of Lie $K$-algebras. \begin{proof} It is immediate that $\partial$ is a Lie $K$-homomorphism. Let $m,m'\in M^\mathcal{L}$, $n\in N^\mathcal{L}$. \begin{align*} &\partial([n,m]_*)=\partial(n*_1 m)-\partial(m*_2 n)=n\partial(m)-\partial(m)n=[n,\partial(m)],\\ &[\partial(m),m']_*=\partial(m)*_1 m'-m'*_2\partial(m)=mm'-m'm=[m,m'], \end{align*} where we use \eqref{XAs1} to prove \eqref{XLie1} and \eqref{XAs2} to prove \eqref{XLie2}. \end{proof} \end{prop} \begin{remark} By the last proposition we can see that the example $(M,M,(*,*),\Id_M)$ given in the associative case and the example $(M^\mathcal{L},M^{\mathcal{L}},[-,-],\Id_{M^\mathcal{L}})$ given in the Lie case, for the associated Lie algebra $M^{\mathcal{L}}$, are related. \end{remark} We will denote by $\textbf{\textit{X}}(\textbf{\textit{LieAlg}}_K)$ the category of crossed modules of Lie $K$-algebras and its homomorphisms. \begin{remark} The previous proposition gives us a functor \begin{center} $(-)_\mathcal{X}^\mathcal{L}\colon \textbf{\textit{X}}(\textbf{\textit{AssAlg}}_K)\xrightarrow{}\textbf{\textit{X}}(\textbf{\textit{LieAlg}}_K)$. \end{center} \end{remark} The next proposition relates categorical Lie $K$-algebras and categorical associative $K$-algebras. \begin{prop} If $(C_1,C_0,s,t,e,k)$ is a categorical associative $K$-algebra, then $(C_1^\mathcal{L},C_0^\mathcal{L},s,t,e,k)$ is a categorical Lie $K$-algebra. \begin{proof} Immediate, since $(C_1\times_{C_0}C_1)^\mathcal{L}=C_1^{\mathcal{L}}\times_{C_0^{\mathcal{L}}}C_1^{\mathcal{L}}$: they have the same underlying vector space and the same operation.
\end{proof} \end{prop} \begin{remark} The previous proposition gives us a functor \begin{center} $(-)_\mathcal{C}^\mathcal{L}\colon \textbf{\textit{ICat}}(\textbf{\textit{AssAlg}}_K)\xrightarrow{}\textbf{\textit{ICat}}(\textbf{\textit{LieAlg}}_K)$. \end{center} \end{remark} \begin{remark} As in the case of groups and associative $K$-algebras, the categories $\textbf{\textit{ICat}}(\textbf{\textit{LieAlg}}_K)$ and $\textbf{\textit{X}}(\textbf{\textit{LieAlg}}_K)$ are equivalent (see~\cite{ThRa}). It is immediate to check that the equivalence functors commute with the functors $(-)_\mathcal{X}^\mathcal{L}$ and $(-)_\mathcal{C}^\mathcal{L}$. The only possible obstacle is the semidirect product, which is dealt with in the following proposition. \end{remark} \begin{defi} Let $M$ and $N$ be two associative $K$-algebras and $\cdot$ an associative action of $N$ on $M$. We define its \emph{semidirect product} as the $K$-vector space $M\times N$ with the following operation: \begin{equation*} (m,n)(m',n')=(mm'+n\cdot_1 m'+m\cdot_2 n',nn'). \end{equation*} Let $M$ and $N$ be two Lie $K$-algebras and $\cdot$ a Lie action of $N$ on $M$. We define its \emph{semidirect product} as the $K$-vector space $M\times N$ with the following bracket: \begin{equation*} [(m,n),(m',n')]=([m,m']+n\cdot m'-n'\cdot m,[n,n']). \end{equation*} In both cases we will denote the semidirect product by $M\rtimes N$. \end{defi} \begin{prop} Let $M$ and $N$ be associative $K$-algebras. Then, if $*$ is an associative action of $N$ on $M$ (so that $[-,-]_*$ is a Lie action of $N^\mathcal{L}$ on $M^\mathcal{L}$), we have that $(M\rtimes N)^\mathcal{L}=M^\mathcal{L}\rtimes N^\mathcal{L}$. \begin{proof} Since the underlying vector space is the same, we only need to prove that the bracket is the same.
\begin{align*} &(m,n)(m',n')-(m',n')(m,n)\\ &=(mm'+n*_1m'+m*_2n',nn')-(m'm+n'*_1m+m'*_2n,n'n)\\ &=(mm'+n*_1m'+m*_2n'-m'm-n'*_1 m-m'*_2 n,nn'-n'n)\\ &=([m,m']+[n,m']_*-[n',m]_*,[n,n']), \end{align*} where $(m,n),(m',n')\in M\times N$. \end{proof} \end{prop} \subsection{Braided Semigroupal Category}\hfill A bifunctor is a functor whose source category is a product category. Let $F\colon \textbf{\textit{C}}\times \textbf{\textit{D}}\xrightarrow{}\textbf{\textit{E}}$ be a bifunctor. For $A\in \Ob(\textbf{\textit{C}})$ and $B\in\Ob(\textbf{\textit{D}})$, we denote by ${}_AF$ and $F_B$ the functors: \begin{align*} {}_AF\colon \textbf{\textit{D}}\xrightarrow{}\textbf{\textit{E}},\ {}_AF(D\xrightarrow{f}D')&=F(A,D)\xrightarrow{F(\Id_A,f)}F(A,D'),\\ F_B\colon \textbf{\textit{C}}\xrightarrow{}\textbf{\textit{E}},\ F_B(C\xrightarrow{g}C')&=F(C,B)\xrightarrow{F(g,\Id_B)}F(C',B). \end{align*} Crane and Yetter defined in~\cite{CrYet98} the notion of semigroupal category. \begin{defi} A \emph{semigroupal category} is a triple $\textbf{\textit{C}}=(\textbf{\textit{C}},\otimes,a)$ where \textbf{\textit{C}} is a category, $\otimes\colon \textbf{\textit{C}}\times\textbf{\textit{C}}\xrightarrow{}\textbf{\textit{C}}$ is a bifunctor and $a\colon \otimes \circ(\otimes\times\Id_{\textbf{\textit{C}}})\xrightarrow{}\otimes \circ (\Id_{\textbf{\textit{C}}}\times \otimes)$ is a natural isomorphism called associativity, which verifies the following associative coherence diagram: \[ \begin{tikzcd} ((X\otimes Y)\otimes Z)\otimes W \arrow[d,"a_{X,Y,Z}\otimes \Id_W"] \arrow[rr,"a_{X\otimes Y,Z,W}"] && (X\otimes Y) \otimes (Z \otimes W) \arrow[dd,"a_{X,Y,Z\otimes W}"]\\ (X\otimes (Y\otimes Z))\otimes W \arrow[d,"a_{X,Y\otimes Z,W}"]\\ X\otimes ((Y\otimes Z)\otimes W) \arrow[rr,"\Id_X\otimes a_{Y,Z,W}"] && X\otimes (Y\otimes (Z\otimes W)). \end{tikzcd} \] We will say that a semigroupal category is \emph{strict} if the isomorphism $a$ is the identity morphism. 
In this case we have that $(X\otimes Y)\otimes Z=X\otimes (Y\otimes Z)$. \end{defi} The definition of monoidal category was given in the works~\cite{Ben63,MLane63}. \begin{defi} A \emph{monoidal category} is a $6$-tuple $\textbf{\textit{C}}=(\textbf{\textit{C}},\otimes,a,I,l,r)$ where $(\textbf{\textit{C}},\otimes ,a)$ is a semigroupal category, $I\in\Ob(\textbf{\textit{C}})$ and $l\colon {}_I\otimes\xrightarrow{}\Id_{\textbf{\textit{C}}}$, $r\colon \otimes_I\xrightarrow{}\Id_{\textbf{\textit{C}}}$ are natural isomorphisms called, respectively, left unit and right unit constraints, which also verify, for $X,Y\in \Ob(\textbf{\textit{C}})$, the unit coherence diagram: \[ \begin{tikzcd} (X\otimes I)\otimes Y\arrow[rd,"r_X\otimes\Id_Y"']\arrow[rr,"a_{X,I,Y}"] && X\otimes (I\otimes Y)\arrow[dl,"\Id_X\otimes l_Y"]\\ & X\otimes Y. \end{tikzcd} \] We will say that a monoidal category is \emph{strict} if the isomorphisms $a$, $l$ and $r$ are the identity morphisms. In this case we have that $(X\otimes Y)\otimes Z=X\otimes (Y\otimes Z)$, $X\otimes I=X=I\otimes X$. \end{defi} If \textbf{\textit{C}} is a category we have the functor $T\colon \textbf{\textit{C}}\times \textbf{\textit{C}}\xrightarrow{} \textbf{\textit{C}}\times \textbf{\textit{C}}$ given by the expression \begin{equation*} T((A,B)\xrightarrow{(f,g)}(A',B'))\coloneqq (B,A)\xrightarrow{(g,f)}(B',A'). \end{equation*} It is easy to check that it is a functorial isomorphism with $T \circ T=\Id_{\textbf{\textit{C}}\times \textbf{\textit{C}}}$. In~\cite{JSBM}, Joyal and Street introduced the notion of braided monoidal category.
\begin{defi} A \emph{braiding on a monoidal category} \textbf{\textit{C}} is a natural isomorphism $\tau\colon \otimes \xrightarrow{} \otimes \circ T$ such that, for any objects $X,Y,Z$ in \textbf{\textit{C}}, the following diagrams (associativity coherence) commute: \[\begin{tikzcd} (X\otimes Y)\otimes Z\arrow[r,"\tau_{X\otimes Y,Z}"]\arrow[d,"a_{X,Y,Z}"] & Z\otimes (X\otimes Y)\\ X\otimes (Y\otimes Z)\arrow[d,"\Id_X\otimes \tau_{Y,Z}"] & (Z\otimes X)\otimes Y\arrow[u,"a_{Z,X,Y}"]\\ X\otimes (Z\otimes Y)\arrow[r,"a_{X,Z,Y}^{-1}"] & (X\otimes Z)\otimes Y\arrow[u,"\tau_{X,Z}\otimes \Id_Y"], \end{tikzcd}\begin{tikzcd} X\otimes (Y\otimes Z)\arrow[r,"\tau_{X,Y\otimes Z}"]\arrow[d,"a^{-1}_{X,Y,Z}"] & (Y\otimes Z)\otimes X\\ (X\otimes Y)\otimes Z\arrow[d,"\tau_{X,Y}\otimes \Id_Z"] & Y\otimes(Z\otimes X)\arrow[u,"a^{-1}_{Y,Z,X}"]\\ (Y\otimes X)\otimes Z\arrow[r,"a_{Y,X,Z}"] & Y\otimes (X\otimes Z)\arrow[u,"\Id_Y\otimes\tau_{X,Z}"], \end{tikzcd}\] \end{defi} We can define the concept of braided semigroupal category just by imitating the definition given for the case of monoidal categories. \begin{defi} A \emph{braiding on a semigroupal category} \textbf{\textit{C}} is a natural isomorphism $\tau\colon \otimes \xrightarrow{} \otimes \circ T$ which verifies the two associativity coherence diagrams given in the definition of braiding for monoidal categories. \end{defi} \section{Braiding for categorical groups and crossed modules of groups}\label{S:braidgrp} We have the following property, whose proof can be seen in~\cite{ThRa} for a general case. \begin{lem}\label{Lemacomposition} We will suppose that $(C_1,C_0,s,t,e,k)$ is a categorical associative, Lie or Leibniz $K$-algebra or a categorical group (where the operation in $C_1$ is denoted by ``$+$'').
Then, if $(x,y)\in C_1\times_{C_0}C_1$, the following rule for the composition holds: \[k((x,y))=x-e(t(x))+y=x-e(s(y))+y.\] \end{lem} \begin{lem} In the categories of categorical associative, Lie or Leibniz $K$-algebras and in the category of categorical groups, all internal morphisms $f\in C_1$ are internal isomorphisms. That is, there exists $f'\in C_1$ such that $k((f,f'))=e(s(f))$ and $k((f',f))=e(t(f))$. \end{lem} Giving a strict monoidal category over a small category (an internal category in \textbf{\textit{Set}}) is the same as giving a categorical monoid. The correspondence is given by taking the product of the monoids $C_0$ and $C_1$ as the $\otimes$ product. The fact that the morphisms $s$, $t$, $e$ and $k$ are homomorphisms of monoids is equivalent to the functoriality of $\otimes$. The unit $I$ is given by the monoid unit $1_{C_0}$, and the unit in $C_1$ is $e(I)$. Using this idea one can define a braiding for a categorical monoid; in the slightly more restrictive case of groups, all internal morphisms are isomorphisms, so it is enough to take a family of internal morphisms. The definition of braiding on a categorical group was introduced by Joyal and Street in~\cite{JSBM} and~\cite{JS93}. \begin{defi} Let $\mathcal{C}=(C_1,C_0,s,t,e,k)$ be a categorical group. A \emph{braiding on $\mathcal{C}$} is a map $\tau\colon C_0\times C_0\xrightarrow{}C_1$, $(a,b)\mapsto \tau_{a,b}$, such that the following properties are verified: \begin{equation}\label{GrT1} \tau_{a,b}\colon ab\xrightarrow{ }ba,\tag{GrT1} \end{equation} \begin{equation}\label{GrT2} {\begin{tikzcd} {s(x)s(y)}\arrow[d,"{\tau_{s(x),s(y)}}"]\arrow[r,"{xy}"]& {t(x)t(y)}\arrow[d,"{\tau_{t(x),t(y)}}"]\\ {s(y)s(x)}\arrow[r,"{yx}"]& {t(y)t(x)}.
\end{tikzcd}}\tag{GrT2} \end{equation} \begin{align} &\tau_{ab,c}=(\tau_{a,c}e(b))\circ (e(a)\tau_{b,c}).\tag{GrT3}\label{GrT3}\\ &\tau_{a,bc}=(e(b)\tau_{a,c})\circ (\tau_{a,b}e(c)).\tag{GrT4}\label{GrT4} \end{align} for $a,b,c\in C_0$, $x,y\in C_1$. We will say that $(C_1,C_0,s,t,e,k,\tau)$ is a \emph{braided categorical group}. \end{defi} \begin{remark} One can see that \eqref{GrT2} is the naturality and \eqref{GrT3}, \eqref{GrT4} are the coherence diagrams. \end{remark} \begin{defi} A \emph{braided internal functor between two braided categorical groups}, whose braidings are $\tau$ and $\tau'$, is an internal functor $(F_1,F_0)$ between the internal categories verifying $F_1(\tau_{a,b})=\tau'_{F_0(a),F_0(b)}$ for $a,b\in C_0$. We denote the category of braided categorical groups and braided internal functors between them as $\textbf{\textit{BICat}}(\textbf{\textit{Grp}})$. \end{defi} The definition of braiding on crossed modules of groups was given by Conduch\'e in~\cite[Equalities (2.12)]{Condu}. Although in this case the action is superfluous, it can be recovered, as he notes, as $m\cdot l=l\{\partial(l)^{-1},m\}$. We will take this action into account, so we duplicate one of the equalities; moreover, we use the last two of equalities (2.11) of~\cite{Condu} rather than the last two of (2.12). This causes no problem, since Conduch\'e proves there that they are equivalent. \begin{defi} Let $(G,H,\cdot, \partial)$ be a crossed module of groups.
A \emph{braiding} (or \emph{Peiffer lifting}) of that crossed module is a map $\{-,-\}\colon H\times H\xrightarrow{}G$ which verifies: \begin{align} \partial\{h,h'\}&=[h,h'],\tag{BGr1}\label{BGr1}\\ \{\partial g,\partial g'\}&=[g,g'],\tag{BGr2}\label{BGr2}\\ \{\partial g,h\}&=g(h\cdot g^{-1}),\tag{BGr3}\label{BGr3}\\ \{h,\partial g\}&=(h\cdot g)g^{-1},\tag{BGr4}\label{BGr4}\\ \{h,h'h''\}&=\{h,h'\}(h'\cdot\{h,h''\}),\tag{BGr5}\label{BGr5}\\ \{hh',h''\}&=(h\cdot\{h',h''\})\{h,h''\},\tag{BGr6}\label{BGr6} \end{align} for $g,g'\in G$, $h,h',h''\in H$, where $[g,g']=gg'g^{-1}g'^{-1}$. We say that $(G,H,\cdot, \partial,\{-,-\})$ is a \emph{braided crossed module of groups}. \end{defi} \begin{example} It is easy to check that the commutator $[-,-]$ is a braiding on $(G,G,\Conj,\Id_G)$. \end{example} \begin{defi} $(G,H,\cdot,\partial,\{-,-\})\xrightarrow{(f_1,f_2)}(G',H',*,\partial',\{-,-\}')$ is a \emph{homomorphism of braided crossed modules of groups} if it is a homomorphism of crossed modules of groups such that $f_1(\{h,h'\})=\{f_2(h),f_2(h')\}'$ for $h,h'\in H$. We denote the category of braided crossed modules of groups and its homomorphisms as $\textbf{\textit{BX}}(\textbf{\textit{Grp}})$. \end{defi} \begin{remark} We can see in~\cite{JSBM,JS93} and~\cite{Garzon&Miranda} that the categories $\textbf{\textit{BICat}}(\textbf{\textit{Grp}})$ and $\textbf{\textit{BX}}(\textbf{\textit{Grp}})$ are equivalent. \end{remark} \section{Braiding for categorical associative algebras and crossed modules of associative algebras}\label{S:braidassalg} In this section we will introduce a definition of braiding for categorical associative $K$-algebras. As categorical monoids can be seen as strict monoidal internal categories in \textbf{\textit{Set}}, we can think of a strict semigroupal category over an internal category in \textbf{\textit{Vect}}$_K$ as really being a categorical associative $K$-algebra.
By the same reasoning, we can identify the $\otimes$ product with the second operation, and functoriality is recovered in the same way as in the group case. The $K$-bilinearity of the product comes from the fact that $\otimes$ is an internal bifunctor in \textbf{\textit{Vect}}$_K$; that is, it is a functor between the respective small categories such that each component (fixing an object on the left or on the right) is an internal functor, i.e., linear on internal objects and on internal morphisms. With this in mind we can introduce a braiding on categorical associative $K$-algebras, imitating the braiding for semigroupal categories. We will require that $\tau\colon C_0\times C_0\xrightarrow{} C_1$ be $K$-bilinear; this is natural, since for an internal object $A\in C_0$ we must have the morphisms in \textbf{\textit{Vect}}$_K$ $\tau_{A,-},\tau_{-,A}\colon C_0\xrightarrow{}C_1$ defined by $\tau_{A,-}(B)=\tau_{A,B}$ and $\tau_{-,A}(B)=\tau_{B,A}$. Recall that we showed that, in the internal categories we work with, all internal morphisms are internal isomorphisms. \begin{defi} Let $\mathcal{C}=(C_1,C_0,s,t,e,k)$ be a categorical associative $K$-algebra. A \emph{braiding on $\mathcal{C}$} is a $K$-bilinear map $\tau\colon C_0\times C_0\xrightarrow{} C_1$, $(a,b)\mapsto \tau_{a,b}$, verifying the following properties: \begin{equation}\label{AsT1} \tau_{a,b}\colon ab\xrightarrow{ }ba,\tag{AsT1} \end{equation} \begin{equation}\label{AsT2} \begin{tikzcd} {s(x)s(y)}\arrow[d,"{\tau_{s(x),s(y)}}"]\arrow[r,"{xy}"]& {t(x)t(y)}\arrow[d,"{\tau_{t(x),t(y)}}"]\\ {s(y)s(x)}\arrow[r,"{yx}"]& {t(y)t(x)}, \end{tikzcd}\tag{AsT2} \end{equation} \begin{align} \tau_{ab,c}&=(\tau_{a,c}e(b))\circ (e(a)\tau_{b,c}),\tag{AsT3}\label{AsT3}\\ \tau_{a,bc}&=(e(b)\tau_{a,c})\circ (\tau_{a,b}e(c)),\tag{AsT4}\label{AsT4} \end{align} for $a,b,c\in C_0$, $x,y\in C_1$.
We will say that $(C_1,C_0,s,t,e,k,\tau)$ is a \emph{braided categorical associative $K$-algebra}. \end{defi} \begin{remark} As in the case of groups, \eqref{AsT2} is the naturality and \eqref{AsT3}, \eqref{AsT4} are the coherence diagrams. \end{remark} \begin{defi} A \emph{braided internal functor between two braided categorical associative $K$-algebras}, whose braidings are $\tau$ and $\tau'$, is an internal functor $(F_1,F_0)$ such that $F_1(\tau_{a,b})=\tau'_{F_0(a),F_0(b)}$ for $a,b\in C_0$. \end{defi} We denote the category of braided categorical associative $K$-algebras and braided internal functors between them as $\textbf{\textit{BICat}}(\textbf{\textit{AssAlg}}_K)$. We will introduce the notion of braiding for crossed modules of associative algebras by looking for an equivalence between braided crossed modules and braided categorical associative $K$-algebras, as happens in the group case. The definition we obtain is the following. \begin{defi} Let $(M\xrightarrow{\partial}N,*)$ be a crossed module of associative $K$-algebras, where $*=(*_1,*_2)$. Then a \emph{braiding} on the crossed module is a $K$-bilinear map $\{-,-\}\colon N\times N\xrightarrow{}M$ verifying: \begin{align} \partial\{n,n'\}&=[n,n'],\tag{BAs1}\label{BAs1}\\ \{\partial m, \partial m' \}&=[m,m'],\tag{BAs2}\label{BAs2}\\ \{\partial m, n \}&=-[n,m]_*, \tag{BAs3}\label{BAs3}\\ \{n,\partial m \}&=[n,m]_*, \tag{BAs4}\label{BAs4}\\ \{n,n'n''\}&=n'*_1\{n,n''\}+\{n,n'\}*_2n'',\tag{BAs5}\label{BAs5}\\ \{nn',n''\}&=n*_1\{n',n''\}+\{n,n''\}*_2n',\tag{BAs6}\label{BAs6} \end{align} for $m,m'\in M$, $n,n',n''\in N$. We denote $[n,m]_*=n*_1m-m*_2n$ and $[x,y]=xy-yx$. If $\{-,-\}$ is a braiding, we will say that $(M\xrightarrow{\partial}N,*,\{-,-\})$ is a \emph{braided crossed module of associative $K$-algebras}. \end{defi} \begin{example} We have that the commutator $[-,-]$ is a braiding on the crossed module $(M,M,(*,*),\Id_M)$.
\end{example} \begin{defi} A \emph{homomorphism of braided crossed modules of associative $K$-algebras} $(M,N,\cdot,\partial,\{-,-\})\xrightarrow{(f_1,f_2)}(M',N',*,\partial',\{-,-\}')$ is a homomorphism of crossed modules of associative $K$-algebras such that $f_1(\{n,n'\})=\{f_2(n),f_2(n')\}'$ for $n,n'\in N$. \end{defi} We denote the category of braided crossed modules of associative $K$-algebras and its homomorphisms as $\textbf{\textit{BX}}(\textbf{\textit{AssAlg}}_K)$. \begin{prop} Let $\mathcal{X}=(M,N,(*_1,*_2),\partial,\{-,-\})$ be a braided crossed module of associative $K$-algebras. Then $\mathcal{C}_\mathcal{X} \coloneqq (M\rtimes N,N,\bar{s},\bar{t},\bar{e},\bar{k},\bar{\tau})$ is a braided categorical associative $K$-algebra where: \begin{itemize} \item $\bar{s}\colon M\rtimes N\xrightarrow{} N$, $\bar{s}((m,n))=n$, \item $\bar{t}\colon M\rtimes N\xrightarrow{} N$, $\bar{t}((m,n))=\partial m+n$, \item $\bar{e}\colon N\xrightarrow{} M\rtimes N$, $\bar{e}(n)=(0,n)$, \item $\bar{k}\colon (M\rtimes N)\times_N (M\rtimes N)\xrightarrow{}M\rtimes N$, where the source is the pullback of $\bar{t}$ along $\bar{s}$, defined as $\bar{k}(((m,n),(m',\partial m+n)))=(m+m',n)$, \item $\bar{\tau}\colon N\times N\xrightarrow{}M\rtimes N$, $\bar{\tau}_{n,n'}=(-\{n,n'\},nn')$. \end{itemize} \begin{proof} Other than the braiding, it has already been proven, as can be seen in~\cite{ThRa}, that $(M\rtimes N,N,\bar{s},\bar{t},\bar{e},\bar{k})$ is a categorical associative $K$-algebra. We only need to check the braiding axioms for this internal category. We will start with \eqref{AsT1}. Let us take $n,n'\in N$. \begin{align*} \bar{s}(\bar{\tau}_{n,n'})=\bar{s}((-\{n,n'\},nn'))&=nn',\\ \bar{t}(\bar{\tau}_{n,n'})=\bar{t}((-\{n,n'\},nn'))&=-\partial\{n,n'\}+nn'\\ &=-[n,n']+nn'=n'n, \end{align*} where we use \eqref{BAs1}. We will prove now \eqref{AsT2}. We will take $x=(m,n),y=(m',n')\in M\rtimes N$. We need to show that $\bar{\tau}_{\bar{t}(x),\bar{t}(y)}\circ xy=yx\circ \bar{\tau}_{\bar{s}(x),\bar{s}(y)}$.
\begin{align*} &\bar{\tau}_{\bar{t}(x),\bar{t}(y)}\circ xy\\ &=\bar{k}(((m,n)(m',n'),(-\{\bar{t}((m,n)),\bar{t}((m',n'))\},\bar{t}((m,n))\bar{t}((m',n')))))\\ &=\bar{k}(((m,n)(m',n'),(-\{\partial m+n,\partial m'+n'\},(\partial m+n)(\partial m'+n'))))\\ &=\bar{k}(((mm'+n*_1 m'+m*_2 n',nn'),(-\{\partial m+n,\partial m'+n'\},\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(\partial m+n)(\partial m'+n'))))\\ &=(mm'+n*_1 m'+m*_2n'-\{\partial m+n,\partial m'+n'\},nn') \\ &=(mm'+n*_1 m'+m*_2n'-\{\partial m,\partial m'\}-\{\partial m,n'\}-\{n,\partial m'\} -\{n,n'\},nn')\\ &=(mm'+n*_1 m'+m*_2n'-[m,m']+[n',m]_*-[n,m']_* -\{n,n'\},nn')\\ &=(m'm+m'*_2 n+n'*_1 m-\{n,n'\},nn'), \end{align*} where we use \eqref{BAs2}, \eqref{BAs3} and \eqref{BAs4} in the sixth equality. In the other direction, \begin{align*} & yx\circ \bar{\tau}_{\bar{s}(x),\bar{s}(y)}\\ &=\bar{k}(((-\{\bar{s}((m,n)),\bar{s}((m',n'))\},\bar{s}((m,n))\bar{s}((m',n'))),(m',n')(m,n)))\\ &=\bar{k}(((-\{n,n'\},nn'),(m',n')(m,n)))\\ &=\bar{k}(((-\{n,n'\},nn'),(m'm+n'*_1 m+m'*_2 n,n'n)))\\ &=(-\{n,n'\}+m'm+n'*_1 m+m'*_2 n,nn'), \end{align*} where we can see the equality. We will verify \eqref{AsT3} below. If $n,n',n''\in N$, then \begin{align*} &(\bar{\tau}_{n,n''}\bar{e}(n'))\circ (\bar{e}(n)\bar{\tau}_{n',n''})=\bar{k}((\bar{e}(n)\bar{\tau}_{n',n''},\bar{\tau}_{n,n''}\bar{e}(n')))\\ &=\bar{k}(((0,n)(-\{n',n''\},n'n''),(-\{n,n''\},nn'')(0,n')))\\ &=\bar{k}(((-n*_1\{n',n''\},n(n'n'')),(-\{n,n''\}*_2 n',(nn'')n')))\\ &=(-n*_1\{n',n''\}-\{n,n''\}*_2n',n(n'n''))=(-\{nn',n''\},(nn')n'')=\bar{\tau}_{nn',n''}, \end{align*} where we use \eqref{BAs6} and associativity. Finally we will show that \eqref{AsT4} is verified.
If $n,n',n''\in N$, then \begin{align*} &(\bar{e}(n')\bar{\tau}_{n,n''})\circ (\bar{\tau}_{n,n'}\bar{e}(n''))=\bar{k}((\bar{\tau}_{n,n'}\bar{e}(n''),\bar{e}(n')\bar{\tau}_{n,n''}))\\ &=\bar{k}(((-\{n,n'\},nn')(0,n''),(0,n')(-\{n,n''\},nn'')))\\ &=\bar{k}(((-\{n,n'\}*_2 n'',(nn')n''),(-n'*_1\{n,n''\},n'(nn''))))\\ &=(-n'*_1\{n,n''\}-\{n,n'\}*_2 n'',(nn')n'')=(-\{n,n'n''\},n(n'n''))=\bar{\tau}_{n,n'n''}, \end{align*} where we use \eqref{BAs5} and associativity in the last equality. Thus the braiding axioms are verified for the categorical associative $K$-algebra. \end{proof} \end{prop} \begin{prop} We have a functor $\mathcal{C_\mathfrak{A}}\colon \textbf{\textit{BX}} (\textbf{\textit{AssAlg}}_K) \xrightarrow{} \textbf{\textit{BICat}} ( \textbf{\textit{AssAlg}}_K)$ defined as: \[\mathcal{C_\mathfrak{A}}(\mathcal{X}\xrightarrow{(f_1,f_2)}\mathcal{X}')=\mathcal{C}_\mathcal{X}\xrightarrow{(f_1\times f_2,f_2)}\mathcal{C}_{\mathcal{X}'}\] where $\mathcal{C}_\mathcal{X}$ is defined in the previous proposition. \begin{proof} We know that the pair $(f_1\times f_2,f_2)$ is an internal functor between the respective internal categories, since what we are doing is extending an existing functor (see~\cite{ThRa}) to the braided case. In the same way, as it is an extension, we already know that, if it is well defined, it verifies the functor properties, since composition and identities are the same as in the categories without braiding. Because of that, to conclude this proof it is enough to see that $(f_1\times f_2,f_2)$ is a braided internal functor of braided categorical associative $K$-algebras. \begin{align*} &(f_1\times f_2)(\bar{\tau}_{n,n'})=(f_1\times f_2)((-\{n,n'\},nn'))=(-f_1(\{n,n'\}),f_2(nn'))\\ &=(-\{f_2(n),f_2(n')\}',f_2(n)f_2(n'))=\bar{\tau}'_{f_2(n),f_2(n')}, \end{align*} where we use that $(f_1,f_2)$ is a homomorphism of braided crossed modules of associative algebras.
\end{proof} \end{prop} \begin{prop} Let $\mathcal{C}=(C_1,C_0,s,t,e,k,\tau)$ be a braided categorical associative $K$-algebra. Then $\mathcal{X}_\mathcal{C} \coloneqq (\ker(s),C_0,({}^{e}*,*^e),\partial_t,\{-,-\}_\tau)$ is a braided crossed module of associative $K$-algebras, where: \begin{itemize} \item ${}^{e}*\colon C_0\times \ker(s)\xrightarrow{}\ker(s)$, $a\ {{}^{e}*} \ x \coloneqq e(a)x$, \item $*^e\colon \ker(s)\times C_0\xrightarrow{}\ker(s)$, $x*^e a \coloneqq xe(a)$, \item $\partial_t \coloneqq t|_{\ker(s)}$, \item $\{-,-\}_\tau\colon C_0\times C_0\xrightarrow{}\ker(s)$, $\{a,b\}_\tau \coloneqq e(ab)-\tau_{a,b}$. \end{itemize} \begin{proof} It is proven (see~\cite{ThRa}) that, under these hypotheses, $(\ker(s),C_0,({}^{e}*,*^e),\partial_t)$ is a crossed module of associative $K$-algebras. Hence, it is enough to show that $\{-,-\}_\tau$ is a braiding on that crossed module. First we see that it is well defined, that is, $\{a,b\}_\tau\in\ker(s)$ for $a,b\in C_0$. \begin{align*} s(\{a,b\}_\tau)=s(e(ab)-\tau_{a,b})=ab-ab=0, \end{align*} where we use \eqref{AsT1}. Since it is well defined, we can check the properties. To start we will check \eqref{BAs1}. If $a,b\in C_0$, then \begin{align*} \partial_{t}\{a,b\}_\tau=t(e(ab)-\tau_{a,b})=ab-ba=[a,b], \end{align*} where we use \eqref{AsT1}. Next we will verify \eqref{BAs2}. If $x,y\in \ker(s)$, then \begin{align*} \{\partial_tx,\partial_ty\}_\tau=e(\partial_tx\partial_ty)-\tau_{\partial_t x,\partial_t y} =e(t(x)t(y))-\tau_{t(x),t(y)}. \end{align*} We need to show that $e(t(x)t(y))-\tau_{t(x),t(y)}=[x,y]$. By the axiom \eqref{AsT2} we know the equality \[k((xy,\tau_{t(x),t(y)}))=k((\tau_{s(x),s(y)},yx)).\] As $x\in \ker(s)$, we have that $s(x)=0$ (and likewise for $y$), and $\tau_{s(x),s(y)}=0$ by $K$-bilinearity.
We have then that \[k((\tau_{s(x),s(y)},yx))=k((0,yx)),\] and therefore the equality \[k((xy,\tau_{t(x),t(y)}))=k((0,yx)).\] Using now the $K$-linearity of $k$ in the previous expression, we obtain \[0=k((xy,\tau_{t(x),t(y)}-yx)).\] Since $t(\tau_{t(x),t(y)}-yx)=t(y)t(x)-t(y)t(x)=0=s(e(0))$, we can talk about $k((\tau_{t(x),t(y)}-yx,e(0)))$. Furthermore, $k((\tau_{t(x),t(y)}-yx,e(0)))=\tau_{t(x),t(y)}-yx$ by the internal category axioms. Adding both equalities and using the $K$-linearity of $k$, we get \[k((xy+\tau_{t(x),t(y)}-yx,\tau_{t(x),t(y)}-yx))=\tau_{t(x),t(y)}-yx.\] Therefore, by grouping, we have \[k(([x,y]+\tau_{t(x),t(y)},\tau_{t(x),t(y)}-yx))=\tau_{t(x),t(y)}-yx.\] As $s(\tau_{t(x),t(y)}-yx)=t(x)t(y)+0=t(x)t(y)$ (we use that $x$ or $y$ is in $\ker(s)$), it makes sense to talk about the composition $k((e(t(x)t(y)),\tau_{t(x),t(y)}-yx))$, which is equal to $\tau_{t(x),t(y)}-yx$. Subtracting both equalities and using the $K$-linearity of $k$, we obtain \[k(([x,y]+\tau_{t(x),t(y)}-e(t(x)t(y)),0))=0.\] Again, using the properties of internal categories, we have \begin{align*} 0&=k(([x,y]+\tau_{t(x),t(y)}-e(t(x)t(y)),0))\\&=k(([x,y]+\tau_{t(x),t(y)}-e(t(x)t(y)),e(0))) \\&=[x,y]+\tau_{t(x),t(y)}-e(t(x)t(y)), \end{align*} which gives us the required equality. Observe that, in the part of the proof above involving $x,y\in\ker(s)$, it is sufficient that one of the two lies in that kernel. Therefore, repeating the proof with this in mind, we have the following equalities for $x \in \ker(s)$ and $y \in C_1$: \begin{align*} e(t(x)t(y))-\tau_{t(x),t(y)}=[x,y], \quad e(t(y)t(x))-\tau_{t(y),t(x)}=[y,x]. \end{align*} With these equalities, we will prove \eqref{BAs3} and \eqref{BAs4}. Let $a\in C_0$ and $x\in \ker(s)$. Then \begin{align*} \{\partial_t x, a\}_\tau&=e(t(x)t(e(a)))-\tau_{t(x),t(e(a))}=[x,e(a)]=xe(a)-e(a)x=x*^e a-a\ {{}^{e}*}\ x,\\ \{ a,\partial_t x\}_\tau&=e(t(e(a))t(x))-\tau_{t(e(a)),t(x)}=[e(a),x]=e(a)x-xe(a)=a\ {{}^{e}*} \ x-x*^e a.
\end{align*} We now check the remaining conditions, starting with \eqref{BAs5}. Let $a,b,c\in C_0$. \begin{align*} &\{a,bc\}_\tau=e(a(bc))-\tau_{a,bc}=e(a(bc))-((e(b)\tau_{a,c})\circ (\tau_{a,b}e(c)))\\ &=e(a(bc))-e(b)\tau_{a,c}-\tau_{a,b}e(c)+e(t(\tau_{a,b}e(c)))\\ &=e((ab)c)-e(b)\tau_{a,c}-\tau_{a,b}e(c)+e((ba)c)\\ &=e(b)e(ac)-e(b)\tau_{a,c}+e(ab)e(c)-\tau_{a,b}e(c)\\ &=e(b)\{a,c\}_\tau+\{a,b\}_\tau e(c)=b\ {{}^{e}*}\ \{a,c\}_\tau+\{a,b\}_\tau*^e c, \end{align*} where we use \eqref{AsT4}, Lemma~\ref{Lemacomposition} and associativity. To conclude we will check \eqref{BAs6}. \begin{align*} &\{ab,c\}_\tau=e((ab)c)-\tau_{ab,c}=e((ab)c)-((\tau_{a,c}e(b))\circ (e(a)\tau_{b,c}))\\ &=e((ab)c)-\tau_{a,c}e(b)-e(a)\tau_{b,c}+e(t(e(a)\tau_{b,c}))\\ &=e(a(bc))-\tau_{a,c}e(b)-e(a)\tau_{b,c}+e(a(cb))\\ &=e(a)e(bc)-e(a)\tau_{b,c}+e(ac)e(b)-\tau_{a,c}e(b)\\ &=e(a)\{b,c\}_\tau + \{a,c\}_\tau e(b)=a\ {{}^{e}*}\ \{b,c\}_\tau+\{a,c\}_\tau *^e b, \end{align*} where we use \eqref{AsT3}, Lemma~\ref{Lemacomposition} and associativity. \end{proof} \end{prop} \begin{prop} We have a functor $\mathcal{X_\mathfrak{A}}\colon \textbf{\textit{BICat}} (\textbf{\textit{AssAlg}}_K) \xrightarrow{} \textbf{\textit{BX}}(\textbf{\textit{AssAlg}}_K)$ defined as \[\mathcal{X_\mathfrak{A}}(\mathcal{C}\xrightarrow{(F_1,F_0)}\mathcal{C}')=\mathcal{X}_\mathcal{C}\xrightarrow{(F_1^s,F_0)}\mathcal{X}_{\mathcal{C}'},\] where $\mathcal{X}_\mathcal{C}$ is defined in the previous proposition and $F_1^s\colon \ker(s)\xrightarrow{}\ker(s')$ is defined as $F_1^s(x)=F_1(x)$ for $x\in \ker(s)$. \begin{proof} The fact that it is a functor between the categories without braiding is already shown in~\cite{ThRa}, so we have to see that it can be extended to the braided case. For this, we have to verify the axioms of a homomorphism of braided crossed modules of associative $K$-algebras.
\begin{align*} & F^s_1(\{a,b\}_\tau)=F_1(e(ab)-\tau_{a,b})=F_1(e(ab))-F_1(\tau_{a,b})\\ &=e'(F_0(ab))-\tau'_{F_0(a),F_0(b)}=e'(F_0(a)F_0(b))-\tau'_{F_0(a),F_0(b)}\\ &=\{F_0(a),F_0(b)\}_{\tau'}.\qedhere \end{align*} \end{proof} \end{prop} \begin{remark} Note that, if $(M,N,(*_1,*_2),\partial,\{-,-\})$ is a braided crossed module of associative $K$-algebras, then $\ker(\bar{s})=\{(m,0)\in M\rtimes N\mid m\in M\}=:(M,0)$, where $\bar{s}$ is the map defined for the functor $\mathcal{C}_\mathfrak{A}$. \end{remark} \begin{prop} The categories $\textbf{\textit{BX}}(\textbf{\textit{AssAlg}}_K)$ and $\textbf{\textit{ICat}}(\textbf{\textit{AssAlg}}_K)$ are equivalent. Further, the functors $\mathcal{C_\mathfrak{A}}$ and $\mathcal{X_\mathfrak{A}}$ are inverse equivalences, where the natural isomorphisms $\Id_{\textbf{\textit{BX}}(\textbf{\textit{AssAlg}}_K)}\stackrel{\alpha^\mathfrak{A}}{\cong}\mathcal{X}_\mathfrak{A}\circ \mathcal{C}_\mathfrak{A}$ and $\Id_{\textbf{\textit{ICat}}(\textbf{\textit{AssAlg}}_K)}\stackrel{\beta^\mathfrak{A}}{\cong}\mathcal{C}_\mathfrak{A}\circ \mathcal{X}_\mathfrak{A}$ are given by: $\bullet$ If $\mathcal{Z}=(M,N,(\cdot_1,\cdot_2),\partial,\{-,-\})$ is a braided crossed module of associative $K$-algebras, then $\alpha^\mathfrak{A}_{\mathcal{Z}}=(\alpha^\mathfrak{A}_M,\Id_N)$ with $\alpha^\mathfrak{A}_M\colon M\xrightarrow{} (M,0)$ defined as $\alpha^\mathfrak{A}_M(m)=(m,0)$. $\bullet$ If $\mathcal{D}=(C_1,C_0,s,t,e,k,\tau)$ is a braided categorical associative $K$-algebra, then $\beta^\mathfrak{A}_{\mathcal{D}}=(\beta^\mathfrak{A}_{C_1},\Id_{C_0})$ with $\beta^\mathfrak{A}_{C_1}\colon C_1\xrightarrow{}\ker(s)\rtimes C_0$ defined as $\beta^\mathfrak{A}_{C_1}(x)=(x-e(s(x)),s(x))$. \begin{proof} It can be seen in~\cite{ThRa} that these are well-defined maps, that they are isomorphisms in the categories without braiding, and that they are natural. Therefore, it is enough to show that they are isomorphisms between the braided objects.
As in the crossed module and internal category cases, it follows immediately from the definitions that they are isomorphisms as soon as they are bijective morphisms, since the inverse map then verifies the braided axioms for morphisms in the respective categories. We know that they are bijective, since they are isomorphisms between the categories without braiding. Thus, we only have to verify that they are, in fact, morphisms. Let $\mathcal{Z}=(M,N,(\cdot_1,\cdot_2),\partial,\{-,-\})$ be a braided crossed module of associative $K$-algebras. Let us see that $\alpha^\mathfrak{A}_{\mathcal{Z}}=(\alpha^\mathfrak{A}_M,\Id_N)$ is a homomorphism. \begin{align*} &\Id_N(\{n,n'\}_{\bar{\tau}})=\{n,n'\}_{\bar{\tau}}=\bar{e}(nn')-\bar{\tau}_{n,n'}=(0,nn')-(-\{n,n'\},nn')\\ &=(\{n,n'\},0)=\alpha^\mathfrak{A}_{M}(\{n,n'\}), \quad \text{where} \ n,n'\in N. \end{align*} Let $\mathcal{D}=(C_1,C_0,s,t,e,k,\tau)$ be a braided categorical associative $K$-algebra. We will check that $\beta^\mathfrak{A}_{\mathcal{D}}=(\beta^\mathfrak{A}_{C_1},\Id_{C_0})$ is a morphism. If $a,b\in C_0$, we have \begin{align*} &\Id_{C_0}(\bar{\tau}_{a,b})=\bar{\tau}_{a,b}=(-\{a,b\}_\tau,ab)=(\tau_{a,b}-e(ab),ab)\\ &=(\tau_{a,b}-e(s(\tau_{a,b})),s(\tau_{a,b}))=\beta^\mathfrak{A}_{C_1}(\tau_{a,b}). \end{align*} Therefore, since they are morphisms, these are natural isomorphisms, as explained previously, and the equivalence of categories is obtained. \end{proof} \end{prop} \section{Braiding for categorical Lie algebras and crossed modules of Lie algebras}\label{S:braidLiealg} In this section we will show that the definition given by Ulualan in~\cite{Ulua} for braided categorical Lie $K$-algebras arises naturally from the previous one, using the fact that we can transform an associative $K$-algebra $M$ into a Lie $K$-algebra $M^\mathcal{L}$ with bracket $[x,y]=xy-yx$. Then we will suppose that $K$ is a field with $\car(K)\neq 2$ in order to modify the definition of braiding slightly.
In this way we will obtain the definition given in~\cite{TFM}, where the equivalence with the category of braided crossed modules of Lie $K$-algebras, which we will discuss later, is proven for $\car(K)\neq 2$. The notion of braiding for categorical Lie $K$-algebras was introduced by Ulualan in~\cite{Ulua}, and it is the following one. \begin{defi} Let $\mathcal{C}=(C_1,C_0,s,t,e,k)$ be a categorical Lie $K$-algebra. A \emph{braiding on $\mathcal{C}$} is a $K$-bilinear map $\tau\colon C_0\times C_0\xrightarrow{} C_1$, $(a,b)\mapsto \tau_{a,b}$, which verifies the following properties: \begin{equation}\label{LieT1} \tau_{a,b}\colon [a,b]\xrightarrow{ }[b,a],\tag{LieT1} \end{equation} \begin{equation}\label{LieT2} \begin{tikzcd} {[s(x),s(y)]}\arrow[d,"{\tau_{s(x),s(y)}}"]\arrow[r,"{[x,y]}"]& {[t(x),t(y)]}\arrow[d,"{\tau_{t(x),t(y)}}"]\\ {[s(y),s(x)]}\arrow[r,"{[y,x]}"]& {[t(y),t(x)]},\tag{LieT2} \end{tikzcd} \end{equation} \begin{align} \tau_{[a,b],c}&=[\tau_{a,c},e(b)]+[e(a),\tau_{b,c}],\tag{LieB3}\label{LieB3}\\ \tau_{a,[b,c]}&=[e(b),\tau_{a,c}]+ [\tau_{a,b},e(c)],\tag{LieB4}\label{LieB4} \end{align} for $a,b,c\in C_0$, $x,y\in C_1$. We will say that $(C_1,C_0,s,t,e,k,\tau)$ is a \emph{braided categorical Lie $K$-algebra}. \end{defi} \begin{remark} It can be seen that, in the previous definition, the lack of associativity in Lie $K$-algebras is compensated in \eqref{LieB3} and \eqref{LieB4} by using the Jacobi identity in the source and target. \end{remark} We want to show that the definition for Lie $K$-algebras is naturally related to the definition for associative $K$-algebras.
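Before relating the two definitions, it may help to see the passage $M\mapsto M^\mathcal{L}$ concretely. The following is a minimal sketch in plain Python (the helper names are ours, not from the text), checking on integer $2\times 2$ matrices that the commutator $[x,y]=xy-yx$ is antisymmetric and satisfies the Jacobi identity, so that the associative product does induce a Lie $K$-algebra structure:

```python
# Toy check that the commutator of an associative product gives a Lie bracket.
# Helper names (matmul, sub, add, bracket) are ours, not from the paper.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def bracket(x, y):
    """Lie bracket [x, y] = xy - yx induced by the associative product."""
    return sub(matmul(x, y), matmul(y, x))

x = [[1, 2], [3, 4]]
y = [[0, 1], [1, 0]]
z = [[2, 0], [5, -1]]
zero = [[0, 0], [0, 0]]

# Antisymmetry: [x, y] + [y, x] = 0
assert add(bracket(x, y), bracket(y, x)) == zero

# Jacobi identity: [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0
jacobi = add(add(bracket(x, bracket(y, z)),
                 bracket(y, bracket(z, x))),
             bracket(z, bracket(x, y)))
assert jacobi == zero
```

Both identities hold exactly for integer matrices, since they are algebraic consequences of associativity.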
\begin{prop} If $(C_1,C_0,s,t,e,k,\tau)$ is a braided categorical associative $K$-algebra, then $(C_1^\mathcal{L},C_0^\mathcal{L},s,t,e,k,\tau^{Lie})$ is a braided categorical Lie $K$-algebra, where \[\tau^{Lie} \colon C_0^\mathcal{L}\times C_0^\mathcal{L}\xrightarrow{}C_1^\mathcal{L},\qquad \tau^{Lie}_{a,b} \coloneqq \tau_{a,b}-\tau_{b,a}.\] \begin{proof} It is easy to see that \eqref{AsT1} implies \eqref{LieT1} and that \eqref{AsT2} implies \eqref{LieT2}. By using Lemma~\ref{Lemacomposition} we obtain \eqref{LieB3} and \eqref{LieB4} from \eqref{AsT3} and \eqref{AsT4}. \end{proof} \end{prop} Another definition of braiding for internal categories of Lie $K$-algebras is given in~\cite{TFM}, in order to establish the equivalence with braided crossed modules of Lie $K$-algebras. That equivalence is proven over a field with $\car(K)\neq 2$, so we will show that, under this assumption, the two definitions agree. The definition given in~\cite{TFM} is the following one. \begin{defi} Let $\mathcal{C}=(C_1,C_0,s,t,e,k)$ be a categorical Lie $K$-algebra. A \emph{braiding} on $\mathcal{C}$ is a $K$-bilinear map $\tau\colon C_0\times C_0\xrightarrow{} C_1$, $(a,b)\mapsto \tau_{a,b}$, that verifies \eqref{LieT1}, \eqref{LieT2} and the following properties: \begin{align} \tau_{[a,b],c}&=\tau_{a,[b,c]}-\tau_{b,[a,c]},\tag{LieT3}\label{LieT3}\\ \tau_{a,[b,c]}&=\tau_{[a,b],c}-\tau_{[a,c],b},\tag{LieT4}\label{LieT4} \end{align} for $a,b,c\in C_0$. \end{defi} The two definitions are equivalent when $\car(K)\neq 2$, as can be seen in the following proposition. \begin{prop}\label{trenzaanticoncor} Let $K$ be a field with $\car(K)\neq 2$ and $(C_1,C_0,s,t,e,k)$ a categorical Lie $K$-algebra. If $\tau\colon C_0\times C_0\xrightarrow{}C_1$ is a $K$-bilinear map verifying \eqref{LieT1} and \eqref{LieT2}, then \begin{align*} &\tau_{a,[b,c]}=[e(a),\tau_{b,c}] \quad \text{and}\\ &\tau_{[b,c],a}=[\tau_{b,c},e(a)].
\end{align*} In particular, by the anticommutativity, we have that $\tau_{a,[b,c]}=-\tau_{[b,c],a}$. \begin{proof} Using \eqref{LieT1} and \eqref{LieT2} the following diagram is commutative: \begin{center} \begin{tikzcd} {[a,[b,c]]}\arrow[d,"{\tau_{a,[b,c]}}"]\arrow[rr,"{[e(a),\tau_{b,c}]}"]& &{[a,[c,b]]}\arrow[d,"{\tau_{a,[c,b]}}"]\\ {[[b,c],a]}\arrow[rr,"{[\tau_{b,c},e(a)]}"]& &{[[c,b],a]}. \end{tikzcd} \end{center} That is, we have the equality \begin{align*} k(([e(a),\tau_{b,c}],\tau_{a,[c,b]}))=k((\tau_{a,[b,c]},[\tau_{b,c},e(a)])), \end{align*} and then \begin{align*} 0&=k(([e(a),\tau_{b,c}],\tau_{a,[c,b]}))-k((\tau_{a,[b,c]},[\tau_{b,c},e(a)]))\\ &=k(([e(a),\tau_{b,c}]-\tau_{a,[b,c]},\tau_{a,[c,b]}-[\tau_{b,c},e(a)]))\\ &=k(([e(a),\tau_{b,c}]-\tau_{a,[b,c]},-\tau_{a,[b,c]}+[e(a),\tau_{b,c}])). \end{align*} Using now Lemma~\ref{Lemacomposition}, we have \begin{align*} 0&=[e(a),\tau_{b,c}]-\tau_{a,[b,c]}+(-\tau_{a,[b,c]}+[e(a),\tau_{b,c}])-e(s(-\tau_{a,[b,c]}+[e(a),\tau_{b,c}]))\\ &=2([e(a),\tau_{b,c}]-\tau_{a,[b,c]})-e(-[a,[b,c]]+[a,[b,c]])=2([e(a),\tau_{b,c}]-\tau_{a,[b,c]}). \end{align*} Since $\car(K)\neq 2$, we have the required equality. The other equality is similar, using the commutative diagram \begin{center} \begin{tikzcd} {[[a,b],c]}\arrow[d,"{\tau_{[a,b],c}}"]\arrow[rr,"{[\tau_{a,b},e(c)]}"]& & {[[b,a],c]}\arrow[d,"{\tau_{[b,a],c}}"]\\ {[c,[a,b]]}\arrow[rr,"{[e(c),\tau_{a,b}]}"]& &{[c,[b,a]]}. \end{tikzcd} \end{center} \end{proof} \end{prop} \begin{defi} A \emph{braided internal functor between two braided categorical Lie $K$-algebras}, whose braidings are $\tau$ and $\tau'$, is an internal functor $(F_1,F_0)$ such that $F_1(\tau_{a,b})=\tau'_{F_0(a),F_0(b)}$ for $a,b\in C_0$. We denote the category of braided categorical Lie $K$-algebras and braided internal functors between them as $\textbf{\textit{BICat}}(\textbf{\textit{LieAlg}}_K)$. 
\end{defi} The definition of braiding for crossed modules of Lie $K$-algebras was given in~\cite{TFM}, with the aim of producing a definition for which the Lie bracket (also known as the commutator of the Lie $K$-algebra) is a braiding for the crossed module $(M,M,[-,-],\Id_M)$, in parallel with the fact that $(G,G,\Conj,\Id_G,[-,-])$ is a braided crossed module of groups. That definition was also made to be a particular case of $2$-crossed modules of groups, whose definition can be seen in~\cite{M&P}. Another definition can be seen in~\cite{Ulua}, but that definition does not verify the mentioned requirements. \begin{defi} Let $\mathcal{X}=(M,N,\cdot,\partial)$ be a crossed module of Lie $K$-algebras. A \emph{braiding} (or \emph{Peiffer lifting}) on the crossed module $\mathcal{X}$ is a $K$-bilinear map $\{-,-\}\colon N\times N\xrightarrow{ } M$ verifying: \begin{align} \partial\{n,n'\}&=[n,n'],\tag{BLie1}\label{BLie1}\\ \{\partial m, \partial m' \}&=[m,m'],\tag{BLie2}\label{BLie2}\\ \{\partial m, n \}&=-n\cdot m, \tag{BLie3}\label{BLie3}\\ \{n,\partial m \}&=n\cdot m, \tag{BLie4}\label{BLie4}\\ \{n,[n',n'']\}&=\{[n,n'],n''\}-\{[n,n''],n'\},\tag{BLie5}\label{BLie5}\\ \{[n,n'],n''\}&=\{n,[n',n'']\}-\{n',[n,n'']\},\tag{BLie6}\label{BLie6} \end{align} for $m,m'\in M$, $n,n',n''\in N$. If $\{-,-\}$ is a braiding on $\mathcal{X}$, we will say that \emph{$(M,N,\cdot,\partial,\{-,-\})$ is a braided crossed module of Lie $K$-algebras}. \end{defi} \begin{example} It is immediate to check that $(M,M,[-,-],\Id_M,[-,-])$ is a braided crossed module of Lie $K$-algebras. \end{example} \begin{defi} A \emph{homomorphism of braided crossed modules of Lie $K$-algebras} $(M,N,\cdot,\partial,\{-,-\})\xrightarrow{(f_1,f_2)}(M',N',*,\partial',\{-,-\}')$ is a homomorphism of crossed modules of Lie $K$-algebras verifying $f_1(\{n,n'\})=\{f_2(n),f_2(n')\}'$ for $n,n'\in N$.
We denote the category of braided crossed modules of Lie $K$-algebras and their homomorphisms by $\textbf{\textit{BX}}(\textbf{\textit{LieAlg}}_K)$. \end{defi} Now we will show the natural relation between the definitions of braiding for crossed modules of associative algebras and for crossed modules of Lie algebras. \begin{prop} If $\car(K)\neq 2$ and $(M,N,*,\partial,\{-,-\})$ is a braided crossed module of associative $K$-algebras, then $\{n,n'\}_{\mathcal{L}}=\frac{\{n,n'\}-\{n',n\}}{2}$ is a braiding on the crossed module $(M^\mathcal{L},N^\mathcal{L},[-,-]_*,\partial)$. \begin{proof} We only need to show that the braiding axioms are verified. We will take $n,n',n''\in N$, $m,m'\in M$. The axioms \eqref{BLie1}, \eqref{BLie2}, \eqref{BLie3} and \eqref{BLie4} are proved using \eqref{BAs1}, \eqref{BAs2}, \eqref{BAs3} and \eqref{BAs4}. \begin{align*} \partial(\{n,n'\}_{\mathcal{L}})&=\partial(\frac{\{n,n'\}-\{n',n\}}{2})=\frac{[n,n']-[n',n]}{2}=[n,n'],\\ \{\partial m,\partial m'\}_{\mathcal{L}}&=\frac{\{\partial m,\partial m'\}-\{\partial m',\partial m\}}{2}=\frac{[m,m']-[m',m]}{2}=[m,m'],\\ \{\partial m,n\}_{\mathcal{L}}&=\frac{\{\partial m,n\}-\{n,\partial m\}}{2}=\frac{-[n,m]_*-[n,m]_*}{2}=-[n,m]_*,\\ \{n,\partial m\}_{\mathcal{L}}&=\frac{\{n,\partial m\}-\{\partial m,n\}}{2}=\frac{[n,m]_*+[n,m]_*}{2}=[n,m]_*. \end{align*} With the previous axioms proved, the following equalities hold. \begin{align*} \{n,[n',n'']\}_\mathcal{L}=\{n,\partial\{n',n''\}_\mathcal{L}\}_\mathcal{L}&=[n,\{n',n''\}_\mathcal{L}]_*\\&=-\{\partial\{n',n''\}_\mathcal{L},n\}_\mathcal{L}=-\{[n',n''],n\}_\mathcal{L}. \end{align*} Finally, we will prove that the braiding verifies the last two axioms. For that we will abuse notation and denote $*=[-,-]_*$ (in the definition, $*=(*_1,*_2)$).
\begin{align*} &\{n,[n',n'']\}_\mathcal{L}=\{n,n'n''\}_\mathcal{L}-\{n,n''n'\}_\mathcal{L}\\ &=\frac{1}{2}(\{n,n'n''\}-\{n'n'',n\}-\{n,n''n'\}+\{n''n',n\})\\ &=\frac{1}{2}(n'*_1\{n,n''\}+\{n,n'\}*_2 n''-n'*_1\{n'',n\}-\{n',n\}*_2n''\\ &-n''*_1\{n,n'\}-\{n,n''\}*_2n'+n''*_1\{n',n\}+\{n'',n\}*_2 n')\\ &=\frac{1}{2}(n'*\{n,n''\}-n''*\{n,n'\}+n''*\{n',n\}-n'*\{n'',n\})\\ &=n'*\{n,n''\}_\mathcal{L}-n''*\{n,n'\}_\mathcal{L}=\{[n,n'],n''\}_\mathcal{L}-\{[n,n''],n'\}_\mathcal{L}. \end{align*} \begin{align*} & \{[n,n'],n''\}_\mathcal{L}=\{nn',n''\}_\mathcal{L}-\{n'n,n''\}_\mathcal{L}\\ &=\frac{1}{2}(\{nn',n''\}-\{n'',nn'\}-\{n'n,n''\}+\{n'',n'n\})\\ &=\frac{1}{2}(n*_1\{n',n''\}+\{n,n''\}*_2 n'-n*_1\{n'',n'\}-\{n'',n\}*_2n'\\ &-n'*_1\{n,n''\}-\{n',n''\}*_2 n+n'*_1\{n'',n\}+\{n'',n'\}*_2 n)\\ &=\frac{1}{2}(n*\{n',n''\}-n*\{n'',n'\}-n'*\{n,n''\}+n'*\{n'',n\})\\ &=n*\{n',n''\}_\mathcal{L}-n'*\{n,n''\}_\mathcal{L}=\{n,[n',n'']\}_\mathcal{L}-\{n',[n,n'']\}_\mathcal{L}.\qedhere \end{align*} \end{proof} \end{prop} \begin{remark} Note that the previous construction translates the example given in the associative case to the one given in the Lie case. \end{remark} \begin{remark} As in the case of groups, when $\car(K)\neq 2$ we have an equivalence between the categories $\textbf{\textit{BICat}}(\textbf{\textit{LieAlg}}_K)$ and $\textbf{\textit{BX}}(\textbf{\textit{LieAlg}}_K)$. This can be seen in~\cite{TFM}. In addition, the relations with the associative case give us two functors. These functors commute in an immediate way with the functors of the equivalence. \end{remark} \section{The non-abelian tensor product as example of braiding}\label{S:Lienonabtensor} We will start with the non-abelian tensor product of groups. That product was introduced by Brown and Loday in~\cite{BrLod} through the following definition. \begin{defi} Let $G$ and $H$ be two groups such that $G$ acts on $H$ by $\cdot$ and $H$ acts on $G$ by $*$, both by automorphisms.
The \emph{non-abelian tensor product of $G$ with $H$} is denoted by $G\otimes H$ and is the group generated by the symbols $g\otimes h$, where $g\in G$, $h\in H$, subject to the relations \begin{align} gg'\otimes h&=(gg'g^{-1}\otimes g\cdot h)(g\otimes h),\tag{RTG1}\\ g\otimes hh'&=(g\otimes h)(h*g\otimes hh'h^{-1}).\tag{RTG2} \end{align} \end{defi} The following proposition is stated for a more general situation in~\cite{BrLod}, using actions which are called compatible actions in order to form the tensor product. In this paper we will write down a particular case, which is enough for our example. \begin{prop}[\cite{BrLod}] Let $G$ be a group. Then $(G\otimes G,G,\cdot,\partial)$ is a crossed module of groups, where $G\otimes G$ is the non-abelian tensor product of $G$ with itself using the conjugation action. The action $\cdot\colon G\times (G\otimes G)\xrightarrow{}(G\otimes G)$ and the map $\partial \colon G\otimes G\xrightarrow{} G$ are defined on generators as $g\cdot(g_1\otimes g_2)=gg_1g^{-1}\otimes gg_2g^{-1}$ and $\partial(g_1\otimes g_2)=[g_1,g_2]$. \end{prop} This crossed module carries a natural braiding, which is shown as an example in~\cite{Fuk}. \begin{example} Let $G$ be a group. The map $\{-,-\}\colon G\times G\xrightarrow{} G\otimes G$ defined as $\{g_1,g_2\}=g_1\otimes g_2$ is a braiding on the crossed module $(G\otimes G,G,\cdot,\partial)$ defined in the previous proposition. This is an immediate result, using the properties of the non-abelian tensor product of groups (see~\cite[Proposition 1.2.3]{McDer}) and the definition. \end{example} Once given the example for groups, we look for its analogue in Lie $K$-algebras. For this we need the concept of non-abelian tensor product of Lie $K$-algebras, introduced by Ellis in~\cite{Ellis}. \begin{defi} Let $M$ and $N$ be two Lie $K$-algebras such that $M$ acts on $N$ by $\cdot$ and $N$ acts on $M$ by $*$.
Their \emph{non-abelian tensor product} is denoted by $M\otimes N$ and is defined as the Lie $K$-algebra generated by the symbols $m\otimes n$ with $m\in M$, $n\in N$, subject to the relations \begin{equation}\label{RTLie1} \lambda(m\otimes n)=\lambda m\otimes n=m\otimes \lambda n,\tag{RTLie1} \end{equation} \begin{align*}\label{RTLie2} (m+m')\otimes n=m\otimes n+m'\otimes n,\tag{RTLie2}\\ m\otimes (n+n')=m\otimes n+m\otimes n', \end{align*} \begin{align*}\label{RTLie3} [m,m']\otimes n=m\otimes (m'\cdot n)-m'\otimes (m\cdot n),\tag{RTLie3}\\ m\otimes [n,n']=(n'*m)\otimes n-(n*m)\otimes n', \end{align*} \begin{equation}\label{RTLie4} [(m\otimes n),(m'\otimes n')]=-(n*m)\otimes(m'\cdot n'),\tag{RTLie4} \end{equation} where $\lambda\in K$, $m,m'\in M$, $n,n'\in N$. \end{defi} As in the case of groups, the following proposition is proved in greater generality in the cited work, but we restrict ourselves to the case that interests us. \begin{prop}[\cite{Ellis}] Let $M$ be a Lie $K$-algebra. Then $(M\otimes M,M,\cdot,\partial)$ is a crossed module of Lie $K$-algebras, where $M\otimes M$ is the non-abelian tensor product of $M$ with itself using the adjoint action. The action $\cdot\colon M\times (M\otimes M)\xrightarrow{}(M\otimes M)$ and the map $\partial \colon M\otimes M\xrightarrow{} M$ are defined on generators as $m\cdot(m_1\otimes m_2)=[m,m_1]\otimes m_2 +m_1\otimes[m,m_2]$ and $\partial(m_1\otimes m_2)=[m_1,m_2]$, where $[-,-]$ is the bracket of $M$. \end{prop} \begin{remark} We will rewrite, for clarity, the relations \eqref{RTLie3} and \eqref{RTLie4} for the case of $M\otimes M$, with the action of $M$ on itself given by the adjoint action. \begin{align*} [m_1,m_2]\otimes m_3=m_1\otimes [m_2,m_3]-m_2\otimes [m_1,m_3],\tag{RTLie3}\\ m_1\otimes [m_2,m_3]=[m_3,m_1]\otimes m_2-[m_2,m_1]\otimes m_3, \end{align*} \begin{equation} [(m_1\otimes m_2),(m_3\otimes m_4)]=[m_1,m_2]\otimes[m_3, m_4],\tag{RTLie4} \end{equation} where $m_1,m_2,m_3,m_4\in M$. For the last relation we use the anticommutativity.
\end{remark} With this we will give an example for Lie $K$-algebras analogous to the one for groups. \begin{example} Let $M$ be a Lie $K$-algebra. The $K$-bilinear map $\{-,-\}\colon M\times M\xrightarrow{} M\otimes M$ defined by $\{m_1,m_2\}=m_1\otimes m_2$ is a braiding on the crossed module of Lie $K$-algebras $(M\otimes M,M,\cdot,\partial)$. We will check this, starting with \eqref{BLie1}. If $m,m'\in M$, then \begin{align*} \partial\{m,m'\}=\partial(m\otimes m')=[m,m']. \end{align*} To check \eqref{BLie2}, by $K$-linearity and $K$-bilinearity we may work on generators, since the general case is just a sum of these. If $m_1\otimes m_2$ and $m_3\otimes m_4$ are generators of $M\otimes M$, then \begin{align*} \{\partial (m_1\otimes m_2), \partial (m_3\otimes m_4) \}=\{[m_1,m_2],[m_3,m_4]\}&=[m_1,m_2]\otimes[m_3,m_4]\\&=[(m_1\otimes m_2),(m_3\otimes m_4)], \end{align*} where the last equality is given by \eqref{RTLie4}. For the following properties we need a preliminary result. We will use \eqref{RTLie3} to prove $m_1\otimes [m_2,m_3]=-[m_2,m_3]\otimes m_1$. \begin{align*} &[m_1,m_2]\otimes m_3=m_1\otimes [m_2,m_3]-m_2\otimes [m_1,m_3]\\ &=m_1\otimes [m_2,m_3]-[m_3,m_2]\otimes m_1+[m_1,m_2]\otimes m_3. \end{align*} Simplifying, we have $0=m_1\otimes [m_2,m_3]+[m_2,m_3]\otimes m_1$, which is the desired equality. Now, we will show \eqref{BLie3}. Let $m\in M$ and $m_1\otimes m_2\in M\otimes M$. \begin{align*} &\{\partial (m_1\otimes m_2), m \}=\{[m_1,m_2],m\}=[m_1,m_2]\otimes m\\ &=m_1\otimes [m_2,m]-m_2\otimes [m_1,m]=-m_1\otimes [m,m_2]+m_2\otimes [m,m_1]\\ &=-m_1\otimes [m,m_2]-[m,m_1]\otimes m_2=-m\cdot (m_1\otimes m_2), \end{align*} where we use \eqref{RTLie3} together with the previous result. Now, we will verify \eqref{BLie4}.
\begin{align*} \{m,\partial (m_1\otimes m_2) \}=m\otimes [m_1,m_2]&=-[m_1,m_2]\otimes m=- \{\partial(m_1\otimes m_2),m \}\\ &=-(-m\cdot (m_1\otimes m_2))=m\cdot (m_1\otimes m_2), \end{align*} where we use \eqref{BLie3} and $m\otimes [m_1,m_2]=-[m_1,m_2]\otimes m$. Finally, we will verify \eqref{BLie5} and \eqref{BLie6}. Let $m,m',m''\in M$. \begin{align*} &\{m,[m',m'']\}=m\otimes [m',m'']=[m'',m]\otimes m'-[m',m]\otimes m''\\ &=[m,m']\otimes m''-[m,m'']\otimes m'=\{[m,m'],m''\}-\{[m,m''],m'\},\\ &\{[m,m'],m''\}=[m,m']\otimes m''=m\otimes [m',m'']-m'\otimes [m,m'']\\ &=\{m,[m',m'']\}-\{m',[m,m'']\}. \end{align*} We use \eqref{RTLie3} in the second equality of both chains of equalities. This shows that $\{m,m'\}=m\otimes m'$ is a braiding. \end{example} \begin{remark} Note that the action given in the previous example can actually be written as: \begin{align*} m\cdot (m_1\otimes m_2)=m\otimes [m_1,m_2]. \end{align*} \end{remark}
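The simplest braided crossed module of Lie $K$-algebras given earlier, $(M,M,[-,-],\Id_M,[-,-])$, can also be probed numerically. The sketch below (plain Python, helper names ours) takes $M$ to be $2\times 2$ matrices with the commutator bracket; since the boundary is the identity and both the action and the braiding are the bracket, the instances of \eqref{BLie3}, \eqref{BLie5} and \eqref{BLie6} reduce to antisymmetry and the Jacobi identity:

```python
# Numerical sanity check (our own sketch, not from the text) of the braided
# crossed module (M, M, [-,-], Id_M, [-,-]): with boundary = identity and both
# the action and the braiding given by the bracket, axioms (BLie3), (BLie5)
# and (BLie6) become antisymmetry and the Jacobi identity of the commutator.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def br(x, y):
    """Commutator bracket, playing the roles of bracket, action and braiding."""
    return sub(matmul(x, y), matmul(y, x))

# The standard basis e, f, h of 2x2 trace-free matrices
n1 = [[0, 1], [0, 0]]
n2 = [[0, 0], [1, 0]]
n3 = [[1, 0], [0, -1]]

# (BLie3): {d m, n} = -n . m, here [m, n] = -[n, m]
assert br(n1, n2) == sub([[0, 0], [0, 0]], br(n2, n1))

# (BLie5): {n, [n', n'']} = {[n, n'], n''} - {[n, n''], n'}
assert br(n1, br(n2, n3)) == sub(br(br(n1, n2), n3), br(br(n1, n3), n2))

# (BLie6): {[n, n'], n''} = {n, [n', n'']} - {n', [n, n'']}
assert br(br(n1, n2), n3) == sub(br(n1, br(n2, n3)), br(n2, br(n1, n3)))
```

Of course this checks only one concrete instance; the general statement is the example above.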
\section{Introduction} \index{Weyl Curvature Tensor} Let $(M,g)$ be an oriented Riemannian n-manifold. Then by raising an index, the Riemann curvature tensor at any point can be viewed as an operator $\mathcal R : \Lambda^2M \to \Lambda^2M$, hence an element of $S^2\Lambda^2M$. It satisfies the algebraic Bianchi identity, hence lies in the vector space of {\em algebraic curvature tensors}. This space is an $O(n)$-module and has an orthogonal decomposition into irreducible subspaces for $n\geq 4$. Accordingly the Riemann curvature operator decomposes as: $$\mathcal R = U \oplus Z \oplus W$$ where $$U={s\over 2n(n-1)}g\bullet g \hspace{.5cm}\textnormal{and} \hspace{.5cm} Z={1\over n-2}\stackrel{\circ}{Ric}\bullet g,$$ $s$ is the scalar curvature, $\stackrel{\circ}{Ric}=Ric-{s\over n}g$ is the trace-free Ricci tensor, ``$\bullet$'' is the Kulkarni-Nomizu product, and $W$ is the {\em Weyl Tensor}, defined to be what is left over from the first two pieces. When we restrict ourselves to dimension $n=4$, the Hodge Star operator $* : \Lambda^2 \to \Lambda^2$ is an involution and has $\pm 1$-eigenspaces decomposing the space of two forms as $\Lambda^2 = \Lambda_+^2\oplus\Lambda_-^2$, yielding a decomposition of any operator acting on this space. In particular the restrictions $W_\pm : \Lambda_\pm^2 \to \Lambda_\pm^2$ are called the self-dual and anti-self-dual pieces of the Weyl curvature operator. We call $g$ a {\em self-dual} (resp. {\em anti-self-dual}) metric if $W_-$ (resp. $W_+$) vanishes.
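The four-dimensional decomposition just described can be made completely explicit. The following is a minimal sketch in plain Python (the basis ordering and sign conventions are our choice, for the Euclidean metric on $\mathbb R^4$), verifying that the Hodge star is an involution on $\Lambda^2$ and exhibiting bases of the three-dimensional eigenspaces $\Lambda^2_\pm$:

```python
# Hodge star on 2-forms of Euclidean R^4, in the ordered basis
# (e12, e13, e14, e23, e24, e34):
#   *e12 = e34, *e13 = -e24, *e14 = e23,
#   *e23 = e14, *e24 = -e13, *e34 = e12.
# (Basis ordering and signs are our conventions, not from the text.)
STAR = [
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, -1, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [0, -1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
]

def apply(m, v):
    """Apply the 6x6 matrix m to the coefficient vector v of a 2-form."""
    return [sum(m[i][j] * v[j] for j in range(6)) for i in range(6)]

# * is an involution: applying it twice returns every basis 2-form
for i in range(6):
    basis = [1 if j == i else 0 for j in range(6)]
    assert apply(STAR, apply(STAR, basis)) == basis

# +1-eigenspace Lambda^2_+ (self-dual forms): e12+e34, e13-e24, e14+e23
plus = [[1, 0, 0, 0, 0, 1], [0, 1, 0, 0, -1, 0], [0, 0, 1, 1, 0, 0]]
# -1-eigenspace Lambda^2_- (anti-self-dual forms): e12-e34, e13+e24, e14-e23
minus = [[1, 0, 0, 0, 0, -1], [0, 1, 0, 0, 1, 0], [0, 0, 1, -1, 0, 0]]

for v in plus:
    assert apply(STAR, v) == v
for v in minus:
    assert apply(STAR, v) == [-c for c in v]
```

Each eigenspace is three-dimensional, which is what makes the splitting $W = W_+ \oplus W_-$ of the Weyl operator possible in dimension four.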
In this case the construction of \cite{ahs} produces a complex $3$-manifold $Z$ called the {\em Twistor Space} of $(M^4,g)$, which comes with a fibration by holomorphically embedded rational curves:\vspace{.2cm} \hspace{3.5cm} \begin{tabular}{cccc} $\mathbb{CP}_1$ & $\to$ & $Z$ & Complex $3$-manifold \\ &&$\downarrow$& \\ &&$M^4$ & Riemannian $4$-manifold \end{tabular} \vspace{.2cm} This construction drew the attention of geometers, and many examples of self-dual metrics and related twistor spaces were given afterwards. One result proved to be a quite effective way to produce infinitely many examples and became a cornerstone in the field:\\\\ {\bf Theorem \ref{thmb}} (Donaldson-Friedman, 1989, \cite{DF}). {\em If $(M_1,g_1)$ and $(M_2,g_2)$ are compact self-dual Riemannian 4-manifolds with $H^2(Z_i,\mathcal O(TZ_i))=0$, then $M_1\#M_2$ also admits a self-dual metric.}\\ The idea of the proof is to work upstairs in the complex category rather than downstairs. One glues the blown-up twistor spaces along their exceptional divisors to obtain a singular complex space $Z_0=\widetilde Z_1 \cup_Q \widetilde Z_2$. Then, using the Kodaira-Spencer deformation theory extended by R. Friedman to singular spaces, one obtains a smooth complex manifold, which turns out to be the twistor space of the connected sum. When working in differential geometry, one often deals with the moduli space of a certain kind of metrics. The situation is the same for the self-dual theory. Many people obtained results on the space of positive scalar curvature self-dual (PSC-SD) metrics on various kinds of manifolds. Since the positivity of the scalar curvature imposes some topological restrictions on the moduli space, people often find it convenient to work under this assumption. However, one realizes that there is no connected sum theorem for self-dual positive scalar curvature metrics: the Donaldson-Friedman Theorem (\ref{thmb}) does not make any statement about the scalar curvature of the metrics produced.
Therefore we attacked the problem of determining the sign of the scalar curvature for the metrics produced over the connected sum, beginning by proving the following, using techniques similar to those of \cite{lom}:\\\\ {\bf Theorem \ref{vanishing}} (Vanishing Theorem). {\em Let $\omega : {\mathcal Z} \to {\mathcal U}$ be a $1$-parameter standard deformation of $Z_0$, where $Z_0$ is as in Theorem (\ref{thmb}), and ${\mathcal U}\subset \mathbb C$ is a neighborhood of the origin. Let $L\to {\mathcal Z}$ be the holomorphic line bundle defined by $${\mathcal O}(L^*) ={\mathscr I}_{\widetilde{Z}_1}(K_{\mathcal Z}^{1/2}).$$ If $(M_i,[g_i])$ has positive scalar curvature, then by possibly replacing ${\mathcal U}$ with a smaller neighborhood of $0\in \mathbb C$ and simultaneously replacing ${\mathcal Z} $ with its inverse image, we can arrange for our complex $4$-fold $\mathcal Z$ to satisfy $$H^1 ({\mathcal Z} , {\mathcal O}(L^*))=H^2 ({\mathcal Z} , {\mathcal O}(L^*))=0.$$} The proof makes use of the Leray spectral sequence, homological algebra and Kodaira-Spencer deformation theory, and involves many steps. Using this technical theorem, we next prove that the Donaldson-Friedman Theorem can be generalized to the positive scalar curvature (PSC) case:\\\\ {\bf Theorem \ref{pos}.} {\em Let $(M_1,g_1)$ and $(M_2,g_2)$ be compact self-dual Riemannian $4$-manifolds with $H^2(Z_i, {\mathcal O}(TZ_i))=0$ for their twistor spaces. Moreover suppose that they have positive scalar curvature. Then, for all sufficiently small ${\mathfrak t}>0$, the self-dual conformal class $[g_{\mathfrak t}]$ obtained on $M_1 \# M_2$ by the Donaldson-Friedman Theorem (\ref{thmb}) contains a metric of positive scalar curvature.}\\ We work on the self-dual conformal classes constructed by the Donaldson-Friedman Theorem (\ref{thmb}). Conformal Green's Functions \cite{lom} are used to detect the sign of the scalar curvature of these metrics.
Positivity of the scalar curvature is characterized by non-triviality of the Green's Functions. Then the Vanishing Theorem (\ref{vanishing}) will provide the Serre-Horrocks \cite{serremod,horrocks} vector bundle construction, which gives the Serre Class, a substitute for the Green's Function due to Atiyah \cite{atgrn}. And non-triviality of the Serre Class will provide the non-triviality of the extension described by it. In sections \S\ref{construct}-\S\ref{natural} we review the background material. In \S\ref{ssecvanishing} the vanishing theorem is proven, and finally in \S\ref{greatsmall}-\S\ref{scalar} the sign of the scalar curvature is detected.\\ {\bf Acknowledgments.} I want to thank my Ph.D. advisor Claude LeBrun for his excellent direction and for letting me use his results, ideas and the figure; Justin Sawon for generously sharing his knowledge; and Ioana Suvaina. \section{Self-Dual Manifolds and the Donaldson-Friedman Construction}\label{construct} One of the main advances in the field of self-dual Riemannian 4-manifolds is the connected sum theorem of Donaldson and Friedman \cite{DF}, published in $1989$. If $M_1$ and $M_2$ admit self-dual metrics, then under certain circumstances their connected sum admits one, too. This helped us to create many examples of self-dual manifolds. Stated more precisely: \begin{thm}[Donaldson-Friedman\cite{DF}] \label{thmb} Let $(M_1,g_1)$ and $(M_2,g_2)$ be compact self-dual Riemannian $4$-manifolds and let $Z_i$ denote the corresponding twistor spaces. Suppose that $H^2(Z_i, {\mathcal O}(TZ_i))=0$ for $i=1,2$. Then, there are self-dual conformal classes on $M_1 \# M_2$ whose twistor spaces arise as fibers in a $1$-parameter standard deformation of $Z_0=\widetilde{Z}_1 \cup_Q \widetilde{Z}_2.$ \end{thm} We devote the rest of this section to understanding the statement and the ideas in the proof of this theorem, since our main result (\ref{pos}) is a generalization of this celebrated theorem.
The idea is to work upstairs in the complex category rather than downstairs. So let $p_i \in M_i$ be arbitrary points in the manifolds. Consider their inverse images $C_i\approx \mathbb{CP}_1$ under the twistor fibration, which are twistor lines, i.e. rational curves invariant under the involution. Blow up the twistor spaces $Z_i$ along these rational curves. Denote the exceptional divisors by $Q_i\approx \mathbb{CP}_1\times\mathbb{CP}_1$ and the blown-up twistor spaces by $\widetilde Z_i=Bl(Z_i , C_i)$. The normal bundles of the exceptional divisors are computed by: \begin{lem}[Normal Bundle]\label{normalbundle} The normal bundle of $Q_2$ in $\widetilde Z_2$ is $$NQ_2 = N_{Q_2 / \widetilde Z_2}\approx \mathcal O(1,-1)$$ where the second component is the fiber direction in the blowing up process. \end{lem} \begin{proof} We split the computation into the following steps \begin{enumerate} \item \label{wedge} We know that $N_{C_2/Z_2}\approx\mathcal O(1)\oplus\mathcal O(1)$ and we compute its second wedge power as $$c_1(\land^2(\mathcal O(1)\oplus\mathcal O(1)))[\mathbb{P}_1]=c_1(\mathcal O(1)\oplus\mathcal O(1))[\mathbb P_1] =(c_1\mathcal O(1)+c_1\mathcal O(1))[\mathbb P_1]=2$$ by the Whitney product formula for characteristic classes.
so we have $$\land^2N_{C_2/Z_2}\approx\mathcal O_{\mathbb P_1}(2).$$ \item $K_Q=\pi_1^*K_{\mathbb P_1}\otimes\pi_2^*K_{\mathbb P_1}=\pi_1^*\mathcal O(-2)\otimes\pi_2^*\mathcal O(-2)= \mathcal O_{\mathbb P_1\times\mathbb P_1}(-2,-2)$ \item By adjunction, $K_Q=(K_{\widetilde Z_2}+Q)|_Q=(\pi^*K_{Z_2}+2Q)|_Q= \pi^*(K_{Z_2}|_{\mathbb P_1})+2Q|_Q= \pi^*(K_{\mathbb P_1}\otimes \land^2N_{\mathbb P_1/Z_2}^*)+2Q|_Q =\pi^*(\mathcal O(-2)\otimes\mathcal O(-2))+2Q|_Q =\pi^*\mathcal O(-4)+2Q|_Q.$\\ Since the second component is the fiber direction, the pullback bundle is trivial in that direction, so $\pi^*\mathcal O(-4)=\mathcal O(-4,0)$. Solving for $Q|_Q$ now gives us $$N_{Q/\widetilde Z_2}=Q|_Q=(K_Q\otimes\pi^*\mathcal O(-4)^*)^{1/2}= (\mathcal O(-2,-2)\otimes\mathcal O(4,0))^{1/2}=\mathcal O(1,-1).$$ \end{enumerate} \end{proof} We then construct the complex analytic space $Z_0$ by identifying $Q_1$ and $Q_2$, so that it has a normal crossing singularity $$Z_0=\widetilde Z_1 \cup_Q \widetilde Z_2.$$ Carrying out this identification needs a little bit of care. We interchange the components of ${\Bbb C\Bbb P}_1\times{\Bbb C\Bbb P}_1$ in the gluing process so that the normal bundles $N_{Q_1/ \widetilde Z_1}$ and $N_{Q_2 / \widetilde Z_2}$ are dual to each other. Moreover, we must respect the real structures. The real structures $\sigma_1$ and $\sigma_2$ must agree on the surface $Q$ obtained by identifying $Q_1$ with $Q_2$, so that they extend over $Z_0$ and form the anti-holomorphic involution $\sigma_0 : Z_0 \to Z_0$. Now we will try to deform the singular space $Z_0$. Kodaira-Spencer's standard deformation theory does not work here, since it applies only to manifolds and tells us nothing about deformations of singular spaces. We must use the theory of deformations of compact reduced complex analytic spaces, which is provided by R. Friedman~\cite{friedman}. This generalized theory is quite parallel to the theory of manifolds.
The basic modification is that the roles of $H^i(\Theta)$ are now taken up by the groups $T^i=\textnormal{Ext}^i(\Omega^1, {\mathcal O})$. We have assumed $H^2(Z_i, {\mathcal O}(TZ_i))=0$ so that the deformations of $Z_i$ are unobstructed. Donaldson and Friedman are able to show that $T^2_{Z_0}=\textnormal{Ext}^2_{Z_0}(\Omega^1, {\mathcal O})=0$, so the deformations of the singular space are unobstructed as well. We have a versal family of deformations of $Z_0$. This family is parameterized by a neighborhood of the origin in $\textnormal{Ext}^1_{Z_0}(\Omega^1, {\mathcal O})$. The generic fiber is non-singular, and the real structure $\sigma_0$ extends to the total space of this family \begin{center} \begin{tabular}{ccc}$\omega$ : &$\mathcal Z \to \mathcal U$& ~~~for\hspace{.5cm} $Z_0=\widetilde Z_1 \cup_Q \widetilde Z_2$\\ &$Z_0 \longmapsto 0$ &\end{tabular}\end{center} \begin{center} \mbox{\beginpicture \setplotarea x from 0 to 200, y from -20 to 110 \put {$Z_2$} [B1] at 125 -10 \put {$Z_1$} [B1] at 70 -10 \put {$Z_{\mathfrak t}$} [B1] at 165 -5 \put {$Q$} [B1] at 95 0 {\setlinear \plot 65 100 135 75 / \plot 60 75 130 100 / \plot 135 75 135 0 / \plot 60 75 60 0 / \plot 60 0 97 15 / \plot 97 15 135 0 / \plot 97 89 97 15 / \plot 117 89 117 14 / \plot 155 75 155 0 / \plot 149 100 149 76 / \plot 38 75 38 0 / \plot 77 89 77 14 / \plot 45 100 45 76 / } {\setquadratic \plot 148 75 113 89 143 100 / \plot 148 0 120 8 113 14 / \plot 34 75 73 89 39 100 / \plot 34 0 62 8 73 14 / }\endpicture }\end{center} Instead of working with the entire versal family, it is convenient to work with certain subfamilies, called {\em standard deformations}: \begin{defn}[\cite{lom}]\index{standard deformation} A {\em $1$-parameter standard deformation} of $Z_0$ is a flat proper holomorphic map $\omega : {\mathcal Z}\to {\mathcal U}\subset \mathbb C$ of a complex $4$-manifold to an open neighborhood of ~$0$, together with an anti-holomorphic involution $\sigma : {\mathcal Z}\to {\mathcal Z}$, such that
\begin{itemize} \item $\omega^{-1}(0)=Z_0$ \item $\sigma |_{Z_0}= \sigma_0$ \item $\sigma$ descends to complex conjugation on $\mathcal U$ \item $\omega$ is a submersion away from $Q\subset Z_0$ \item $\omega$ is modeled by $(x,y,z,w)\mapsto xy$ near any point of $Q$.\\ \end{itemize}\end{defn}We also define \begin{defn}[Flat Map\cite{hart}]\index{flat family, map} Let $K$ be a module over a ring $A$. We say that $K$ is \emph{flat} over $A$ if the functor $L\mapsto K \otimes_A L$ is an exact functor on modules $L$ over $A$.\\ Let $f:X\to Y$ be a morphism of schemes and $\mathcal F$ an $\mathcal O_X$-module. We say $\mathcal F$ is \emph{flat} over $Y$ if the stalk $\mathcal F_x$ is a flat $\mathcal O_{y,Y}$-module for every $x$, where $y=f(x)$ and $\mathcal F_x$ is considered an $\mathcal O_{y,Y}$-module via the natural map $f^\# : \mathcal O_{y,Y} \to \mathcal O_{x,X}$. We say $X$ is \emph{flat} over $Y$ if $\mathcal O_X$ is. \end{defn} Then for sufficiently small, nonzero, real ${\mathfrak t}\in {\mathcal U}$ the complex space $Z_{\mathfrak t}= \omega^{-1}({\mathfrak t})$ is smooth, and one can show that it is the twistor space of a self-dual metric on $M_1 \# M_2$. \section{The Leray Spectral Sequence}\label{secleray} Given a continuous map $f:X \to Y$ between topological spaces and a sheaf $\mathcal{F}$ over $X$, the {\em $q$-th direct image sheaf} is the sheaf $\mathit{R}^q(f_*\mathcal{F})$ on $Y$ associated to the presheaf $V \mapsto H^q(f^{-1}(V),\mathcal{F}|_{f^{-1}(V)})$. This is the right derived functor of the functor $f_*$.
The Leray Spectral Sequence \index{Leray Spectral Sequence} is a spectral sequence $\{E_r \}$ with $$E^{p,q}_2 = H^p(Y,\mathit{R}^q(f_*\mathcal{F}))$$ $$E^{p,q}_\infty = H^{p+q}(X,\mathcal{F}) $$ The $E_2$ page of this spectral sequence reads: \begin{center} \begin{tabular}{cc|cccc} &&$\vdots$ & $\vdots$ & $\vdots$ & \\ && $H^0(Y,\mathit{R}^2(f_*\mathcal{F}))$ & $H^1(Y,\mathit{R}^2(f_*\mathcal{F}))$ & $H^2(Y,\mathit{R}^2(f_*\mathcal{F}))$ &$\cdots$\\ && $H^0(Y,\mathit{R}^1(f_*\mathcal{F}))$ & $H^1(Y,\mathit{R}^1(f_*\mathcal{F}))$ & $H^2(Y,\mathit{R}^1(f_*\mathcal{F}))$ &$\cdots$\\ $E_2$&& $H^0(Y,\mathit{R}^0(f_*\mathcal{F}))$ & $H^1(Y,\mathit{R}^0(f_*\mathcal{F}))$ & $H^2(Y,\mathit{R}^0(f_*\mathcal{F}))$ &$\cdots$\\ \cline{3-6} \end{tabular} \end{center}\vspace{.3cm} A degenerate case is when $\mathit{R}^i(f_*\mathcal{F})=0 $ for all $i > 0$. \begin{rmk} This is the case if $\mathcal{F}$ is flabby, for example. Remember that to be {\em flabby} means that the restriction map $r: \mathcal{F}(\mathit{B}) \to \mathcal{F}(\mathit{A})$ is onto for open sets $\mathit{A} \subset \mathit{B}$. In this case $H^i(X,\mathcal{F})=0$ for $i > 0$, as well as $H^i(\mathit{U},\mathcal{F}|_\mathit{U})=0$ for $\mathit{U}$ open, because the restriction of a flabby sheaf to any open subset is again flabby by definition. That means $H^q(f^{-1}(\cdot),\mathcal{F}|_\cdot)=0$ for all $q > 0$, so $\mathit{R}^i(f_*\mathcal{F})=0 $. \end{rmk} When the spectral sequence degenerates this way, the second and succeeding rows of the $E_2$ page vanish. And because $V \mapsto H^0(f^{-1}(V),\mathcal{F}|_{f^{-1}(V)})$ is the presheaf of the direct image sheaf, we have $\mathit{R}^0f_*=f_*$. So the first row consists of the $H^i(Y,f_*\mathcal{F})$'s. Vanishing of the differentials causes immediate convergence to $E^{i,0}_\infty=H^{i+0}(X,\mathcal{F})$. So we get: \begin{prop} \label{leray} If $\mathit{R}^i(f_*\mathcal{F})=0 $ for all $i>0$, then $H^i(X,\mathcal{F})=H^i(Y,f_*\mathcal{F})$ naturally for all $i \geq 0$.
\end{prop} As another example, the following proposition gives a different sufficient condition for this degeneration. See \cite{voisin}, vol. 2, p. 124 for a sketch of the proof: \begin{prop}[Small Fiber Theorem] \label{smallfiber} Let $f:X \to Y$ be a holomorphic, proper and submersive map between complex manifolds, and $\mathcal{F}$ a coherent analytic sheaf or a holomorphic vector bundle on $X$. Then $H^i(f^{-1}(y),\mathcal{F}|_{f^{-1}(y)})=0$ for all $y \in Y$ implies that $\mathit{R}^i(f_*\mathcal{F})=0$. \end{prop} As an application of these two propositions, we obtain the main result of this section: \begin{prop} \label{lerayblowup} Let $Z$ be a complex $n$-manifold with a complex $k$-dimensional submanifold $V$. Let $\widetilde{Z}$ denote the blow up of $Z$ along $V$, with blow up map $\pi : \widetilde{Z} \to Z$. Let $\mathcal{G}$ denote a coherent analytic sheaf (or a vector bundle) over $Z$. Then we can compute the cohomology of $\mathcal{G}$ on either side, i.e. $$H^i(\widetilde{Z},\pi^*\mathcal{G})=H^i(Z,\mathcal{G}).$$ \end{prop} \begin{proof} The inverse image of a point of $Z$ is either a point or a $\mathbb{P}^{n-k-1}$. For $i>0$ we have $$H^i(\pi^{-1}(y),\pi^*\mathcal{G}|_{\pi^{-1}(y)})=H^i(\mathbb{P}^{n-k-1},\mathcal{O})=H^{0,i}_{ \bar{\partial }}(\mathbb{P}^{n-k-1})=0$$ since $\pi^*\mathcal{G}$ is trivial on the fibers and the Dolbeault cohomology of $\mathbb{P}^{n-k-1}$ is concentrated in the diagonal bidegrees $(p,p)$. So we can apply Proposition (\ref{smallfiber}) to get $\mathit{R}^i(\pi_*\pi^\ast \mathcal{G})=0 $ for all $i>0$.
This is the hypothesis of Proposition (\ref{leray}), so we get $H^i(\widetilde{Z},\pi^*\mathcal{G})=H^i(Z,\pi_*\pi^*\mathcal{G}) $ naturally for all $i\geq0$, and the latter equals $H^i(Z,\mathcal{G})$ since $\pi_*\pi^*\mathcal{G}=\mathcal{G}$ by the combination of the following two lemmas.\end{proof} \begin{lem}[Projection Formula\cite{hart}] Let $f: (X,\mathcal{O}_X) \to (Y,\mathcal{O}_Y)$ be a morphism of ringed spaces, $\mathcal{F}$ an $\mathcal{O}_X$-module, and $\mathcal{E}$ a locally free $\mathcal{O}_Y$-module of finite rank. Then there is a natural isomorphism $$f_*(\mathcal{F}\otimes_{\mathcal{O}_X}f^*\mathcal{E})=f_*\mathcal{F}\otimes_{\mathcal{O}_Y}\mathcal{E},$$ in particular for $\mathcal{F}=\mathcal{O}_X$ $$f_*f^*\mathcal{E}=f_*\mathcal{O}_X\otimes_{\mathcal{O}_Y}\mathcal{E}.$$ \end{lem} \begin{lem}[Zariski's Main Theorem, Weak Version\cite{hart}] Let $f:X \to Y$ be a birational projective morphism of noetherian integral schemes, and assume that $Y$ is normal. Then $f_*\mathcal{O}_X=\mathcal{O}_Y$. \end{lem} \begin{proof} The question is local on $Y$, so we may assume that $Y$ is affine, $Y=\mathrm{Spec}\,A$. Then $f_*\mathcal{O}_X$ is a coherent sheaf of $\mathcal O_Y$-algebras, so $B=\Gamma(Y,f_*\mathcal{O}_X)$ is a finitely generated $A$-module. Since $A$ and $B$ are integral domains with the same quotient field and $A$ is integrally closed, we must have $A=B$. Thus $f_*\mathcal{O}_X=\mathcal{O}_Y$. \end{proof} \section{Natural square root of the canonical bundle of a twistor space}\label{natural} In the next section, we are going to prove that a certain cohomology group of a line bundle vanishes. For that we need some lemmas. First of all, the canonical bundle of a twistor space $Z$ has a natural square root; equivalently, $Z$ is a spin manifold. This is seen as follows: the Riemannian connection of $M$ acts on $2$-forms, hence on the twistor space; accordingly, we can split the tangent bundle as $T_xZ=T_xF \oplus (p^*TM)_x$.
The complex structure on $(p^*TM)_x$ is obtained from the identification $\cdot\varphi : T_xM \longleftrightarrow (\mathbb{V}_+)_x$ provided by the Clifford multiplication of a non-zero spinor $\varphi \in (\mathbb{V}_+)_x$. This identification is linear in $\varphi$ as $\varphi$ varies over $({\mathbb{V}_+})_x$. So we have a nonvanishing section of $\mathcal O_Z(1)=\mathcal O_{\mathbb P \mathbb V_-}(1)$ with values in $Hom(TM,{\mathbb V}_+)$ or $Hom(p^*TM,p^*{\mathbb V}_+)$, trivializing the bundle $$\mathcal O_Z(1)\otimes Hom(p^*TM,p^*{\mathbb V}_+) =\mathcal O_Z(1)\otimes {p^*TM}^* \otimes p^*{\mathbb V}_+ \approx {p^*TM}^* \otimes \mathcal O_Z(1)\otimes p^*{\mathbb V}_+$$ hence yielding a natural isomorphism \begin{equation}{p^*TM}\approx \mathcal O_Z(1)\otimes p^*{\mathbb V}_+ ,\label{iso1} \end{equation} where $\mathcal O_Z(1)=\mathcal O_{\mathbb P \mathbb V_-}(1)$ is the positive Hopf bundle on the fibers. In general the Hopf bundle exists only locally, and so does the isomorphism. If $M$ is a spin manifold, $\mathbb V_\pm$ exist globally on $M$ and $\mathcal O_Z(1)$ exists on $Z$, so our isomorphism holds globally. Furthermore, we have a second isomorphism holding for any projective bundle, obtained as follows (see \cite{fultoni}, p. 434, \cite{zheng}, p. 108): let $E$ be a complex vector bundle of rank $n+1$ over $M$, and $p: \mathbb PE \to M$ its projectivization. We have the embedding of the tautological line bundle $\mathcal O_{\mathbb PE}(-1)\hookrightarrow p^*E$.
This gives the exact sequence $$0 \to \mathcal O_{\mathbb PE}(-1) \to p^*E \to{ {p^*E}/{\mathcal O_{\mathbb PE}(-1)} }\to 0,$$ and tensoring by $\mathcal O_{\mathbb PE}(1)$ gives $$0 \to \mathcal O_{\mathbb PE} \to \mathcal O_{\mathbb PE}(1)\otimes p^*E \to T_{\mathbb PE/M} \to 0$$ where $T_{\mathbb PE/M} \approx Hom(\mathcal O(-1),\mathcal O(-1)^\perp)=Hom(\mathcal O(-1),p^*E/{\mathcal O(-1)})=\mathcal O(1)\otimes {p^*E/\mathcal O(-1)} $ is the relative tangent bundle of $\mathbb P E$ over $M$, originally defined to be ${\Omega^1_{\mathbb PE/M}}^*$. Taking $E={\mathbb V}_-$, with $TF$ denoting the tangent bundle along the fibers, we get $$0 \to \mathcal O_Z \to \mathcal O_Z(1)\otimes p^*\mathbb V_- \to TF \to 0$$ so we obtain our second isomorphism: \begin{equation} TF\oplus\mathcal O_Z \approx \mathcal O_Z(1)\otimes p^*\mathbb V_- \label{iso2} \end{equation} Now we are going to compute the first Chern class of the spin bundles $\mathbb V_\pm$ and see that $c_1(\mathbb V_\pm)=0$. Choose a connection $\nabla$ on $\mathbb V_\pm$.
Following \cite{kn}, it defines a connection on the associated principal $SU(2)$-bundle $P$, with connection one-form $\omega \in A^1(P,\mathfrak{su}(2))$ defined by the projection \cite{morita} $T_uP \to V_u \approx \mathfrak{su}(2)$, and with curvature two-form $\Omega \in A^2(P,\mathfrak{su}(2))$ defined by \cite{kn} $$\Omega(X,Y)=d\omega(X,Y)+\frac{1}{2}[\omega(X),\omega(Y)]\ \mathrm{ for } \ X,Y \in T_uP.$$ We define the polynomial functions $f_0,f_1,f_2$ on the Lie algebra $\mathfrak{su}(2)$ by $$det(\lambda I_2+\frac{i}{2\pi}M)=\sum_{k=0}^2{f_{2-k}(M)\lambda^k}=f_0(M)\lambda^2 +f_1(M)\lambda+f_2(M) \ \mathrm{for} \ M \in \mathfrak{su}(2).$$ These polynomials $f_i: \mathfrak{su}(2)\to \mathbb R$ are invariant under the adjoint action of $SU(2)$, denoted $f_i \in I^1(SU(2))$, namely $$f_i(\mathrm{ad}_g(M))=f_i(M) \ \mathrm{for} \ g \in SU(2) \ , M \in \mathfrak{su}(2)$$ where $ \mathrm{ad}_g: \mathfrak{su}(2) \to \mathfrak{su}(2)$~ is defined by ~$\mathrm{ad}_g(M)=R_{g^{-1}*}L_{g*}(M)$\label{adjointrepresentation}\index{adjoint representation}.\\ If we apply any $f \in I^1(SU(2))$ after $\Omega$ we obtain: $$f \circ \Omega \ : \ T_uP \times T_uP \longrightarrow \mathfrak{su}(2) \longrightarrow \mathbb{R}.$$ It turns out that $f \circ \Omega$ is a closed form and projects to a unique $2$-form, say $\overline{f \circ \Omega}$, on $M$, i.e. $f \circ \Omega=\pi^*(\overline{f \circ \Omega})$ where $\pi:P\to M$. In general, a $q$-form $\varphi$ on $P$ projects to a unique $q$-form, say $\overline{\varphi}$, on $M$ if $\varphi(X_1 ,\ldots, X_q)=0$ whenever at least one of the $X_i$'s is vertical and $\varphi(R_{g*}X_1 ,\ldots, R_{g*}X_q)=\varphi(X_1 ,\ldots, X_q)$. Then $\overline{\varphi}$, defined by $\overline{\varphi}(V_1 ,\ldots, V_q)=\varphi(X_1 ,\ldots, X_q)$ for lifts $\pi_*(X_i)=V_i$, is independent of the choice of the $X_i$'s.
See \cite{kn}, vol. 2, p. 294 for details.\\ So, composing with $\Omega$ and projecting defines a map $w :I^1(SU(2)) \to H^2(M,\mathbb R)$ called the Weil homomorphism; it is actually an algebra homomorphism when extended to the other gradings.\\ Finally, the Chern classes are defined by $c_k(\mathbb V_\pm):=\left[\overline{f_k \circ \Omega}\right]$, independent of the connection chosen. Notice that $f_2(M)=det(\frac{i}{2\pi}M)$ and $f_1(M)=tr(\frac{i}{2\pi}M)$ in our case. If $M\in \mathfrak{su}(2)$ then $e^M \in SU(2)$, implying $1=det(e^M)=e^{trM}$ and $trM=0$. But $\Omega$ is $\mathfrak{su}(2)$-valued, so if you apply $f_1=tr$ after $\Omega$ you get $0$, giving $c_1(\mathbb V_\pm)=\left[\overline {tr(\frac{i}{2\pi}\Omega)}\right]=0$. \\ One last remark is that $\overline{f_k \circ \Omega}=\gamma_k$ in the notation of \cite{kn}, and $\gamma_1=P^1(\frac{i}{2\pi}\Theta)=tr(\frac{i}{2\pi}\Theta)$ in the notation of \cite{gh}, pp. 141, 407. And $\Omega=\pi^*\Theta$ in the line bundle case. Vanishing of the first Chern classes means that the determinant line bundles of $\mathbb V_\pm$ are diffeomorphically trivial, since $c_1(\wedge^2\mathbb V_\pm)=c_1\mathbb V_\pm=0$. Combining this with the isomorphisms (\ref{iso1}) and (\ref{iso2}) yields: \begin{center} $\wedge^2p^*TM=\wedge^2(\mathcal O_Z(1)\otimes p^*\mathbb V_+)= \mathcal O_Z(2)\otimes\wedge^2p^*\mathbb V_+=\mathcal O_Z(2)=\mathcal O_Z(2)\otimes\wedge^2p^*\mathbb V_- $ $=\wedge^2(\mathcal O_Z(1)\otimes p^*\mathbb V_-)=\wedge^2(TF \oplus \mathcal O_Z)=\oplus_{2=p+q}(\wedge^pTF\otimes\wedge^q\mathcal O_Z)=TF\otimes\mathcal O_Z=TF$ \end{center} since $TF$ is a line bundle. Taking the first Chern class of both sides: $$ c_1(p^*TM)=c_1(\wedge^2p^*TM)=c_1TF.
\label{3} $$ Alternatively, this Chern class argument could be replaced by the wedge-power manipulations above, if the reader feels more comfortable with them.\\ The last equality implies the decomposition $$c_1Z=c_1(p^*TM\oplus TF)=c_1(p^*TM)+c_1TF=2c_1TF.$$ So, $TF^*$ is a differentiable square root of the canonical bundle of $Z$. If $M$ is not spin, $\mathbb V_\pm$ and $\mathcal O_Z(1)$ are not globally defined, but the complex structure on their tensor product is still defined, and we can still use the isomorphisms (\ref{iso1}),(\ref{iso2}) for computing Chern classes of the almost complex structure on $Z$ using differential forms defined locally by the metric. Consequently our decomposition is valid whether $M$ is spin or not. One more word about differentiable square roots is in order here. On a complex manifold, a differentiable square root implies a holomorphic one, since in the sheaf sequence \begin{center} \begin{tabular}{ccccccc} .. & $\to$ & $H^1(M,\mathcal O^*)$ & $\to$ & $H^2(M,\mathbb Z)$ & $\to$ & $..$ \\ & & $L$ & $\mapsto$ & $c_1(L)$ & $\mapsto$ & $0$ \\ & & & & $\frac{1}{2}c_1(L)$ & $ \mapsto$ & $0$ \end{tabular} \end{center} $c_1(L)$ maps to $0$ since it comes from a line bundle, and if it is divisible by two, $\frac{1}{2}c_1(L)$ maps to $0$ too; that means it is the first Chern class of a line bundle. \section{Vanishing Theorem}\label{ssecvanishing} Let $\omega : \mathcal Z \to \mathcal U$ be a $1$-parameter standard deformation of $Z_0$, where $\mathcal U \subset \mathbb C$ is an open disk about the origin. Then the invertible sheaf $K_{\mathcal Z}$ has a square root as a holomorphic line bundle, as follows. We are going to show that the Stiefel-Whitney class $w_2(K_{\mathcal Z})$ vanishes. We write $\mathcal Z={\mathcal U}_1 \cup {\mathcal U}_2$, where ${\mathcal U}_i$ is a tubular neighborhood of $\widetilde{Z}_i$ and ${\mathcal U}_1 \cap {\mathcal U}_2$ is a tubular neighborhood of $Q=\widetilde{Z}_1 \cap \widetilde{Z}_2$.
So ${\mathcal U}_1 , {\mathcal U}_2$ and ${\mathcal U}_1 \cap {\mathcal U}_2$ deformation retract onto $\widetilde{Z}_1 , \widetilde{Z}_2$ and $Q$, respectively. Since $Q \approx {\mathbb P}_1 \times {\mathbb P}_1$ is simply connected, $H^1({\mathcal U}_1 \cap {\mathcal U}_2,{\mathbb Z}_2)=0$ and the map $r_{12}$ in the Mayer-Vietoris exact sequence \begin{center} \begin{tabular}{ccccccc} \llap{ .. ~~$\to$~~~}$H^1({\mathcal U}_1 \cap {\mathcal U}_2,{\mathbb Z}_2)$ & $\to$ & $H^2({\mathcal U}_1 \cup {\mathcal U}_2,{\mathbb Z}_2)$ & $\stackrel{r_{12}}{\to}$ & $H^2({\mathcal U}_1,{\mathbb Z}_2) \oplus H^2({\mathcal U}_2,{\mathbb Z}_2)$ & $\to$ &.. \\ \end{tabular} \end{center} is injective. Therefore it is enough to see that the restrictions $r_i(w_2(K_{\mathcal Z})) \in H^2({\mathcal U}_i,{\mathbb Z}_2)$ are zero. For that, we need to see that $K_{\mathcal Z}|_{\widetilde{Z}_i}$ has a square root: \begin{center} $K_\mathcal Z|_{\widetilde{Z}_1} \stackrel{(1)}{=} (K_{\widetilde{Z}_1}-\widetilde{Z}_1)|_{\widetilde{Z}_1} \stackrel{(2)}{=} (K_{\widetilde{Z}_1}+Q)|_{\widetilde{Z}_1} \stackrel{(3)}{=} ((\pi^*K_{Z_1}+Q)+Q)|_{\widetilde{Z}_1}= 2(\pi^*K^{1/2}_{Z_1}+Q)|_{\widetilde{Z}_1}$ \end{center} where $(1)$ is the application of the adjunction formula on $\widetilde{Z}_1$, $ K_{\widetilde{Z}_1} = K_{\mathcal Z}|_{\widetilde{Z}_1} \otimes [\widetilde{Z}_1] $.\\ $(2)$ comes from the linear equivalence of $0$ with $Z_t$ on $\widetilde{Z}_1$, and of $Z_t$ with $Z_0$: $$0=\mathcal O(Z_t)|_{\widetilde{Z}_1}=\mathcal O(Z_0)|_{\widetilde{Z}_1}=\mathcal O(\widetilde{Z}_1+\widetilde{Z}_2)|_{\widetilde{Z}_1} =\mathcal O(\widetilde{Z}_1+Q)|_{\widetilde{Z}_1}$$ $(3)$ is the change of the canonical bundle under blow up along a submanifold, see \cite{gh}, p. 608. $K_{Z_1}$ has a natural square root as we computed in the previous section, so $\pi^*K^{1/2}_{Z_1}\otimes[Q]$ is a square root of $K_\mathcal Z$ on $\widetilde{Z}_1$.
Similarly on $\widetilde{Z}_2$; so $K_\mathcal Z$ has a square root $K^{1/2}_\mathcal Z$. Before our vanishing theorem, we are going to state the Semicontinuity Principle and Hitchin's Vanishing Theorem, which are involved in the proof: \begin{lem}[Semicontinuity Principle\cite{voisin}] \label{semicontinuity} Let $\phi : \mathcal X \to \mathcal B$ be a family of compact complex manifolds with fibers $X_b , b \in \mathcal B$, and let $\mathcal F$ be a holomorphic vector bundle over $\mathcal X$. Then the function $b \mapsto h^q(X_b,\mathcal F|_{X_b})$ is upper semicontinuous. In other words, we have $ h^q(X_b,\mathcal F|_{X_b}) \leq h^q(X_0,\mathcal F|_{X_0})$ for $b$ in a neighborhood of $0 \in \mathcal B$. \end{lem} \begin{lem}[Hitchin Vanishing\cite{hitlin}\cite{poon1}] \label{hitchinvanishing} Let $Z$ be the twistor space of an oriented self-dual Riemannian manifold of positive scalar curvature, with canonical bundle $K$. Then $$ h^0(Z,\mathcal O(K^{n/2}))=h^1(Z,\mathcal O(K^{n/2}))=0 \quad \textnormal{for all}~ n \geq 1.$$ \end{lem} \begin{thm}[Vanishing Theorem] \label{vanishing} \index{Vanishing Theorem} Let $\omega : {\mathcal Z} \to {\mathcal U}$ be a $1$-parameter standard deformation of $Z_0$, where $Z_0$ is as in Theorem (\ref{thmb}), and ${\mathcal U}\subset \mathbb C$ is a neighborhood of the origin.
Let $L\to {\mathcal Z}$ be the holomorphic line bundle defined by $${\mathcal O}(L^*) ={\mathscr I}_{\widetilde{Z}_1}(K_{\mathcal Z}^{1/2}).$$ If $(M_i,[g_i])$ has positive scalar curvature, then by possibly replacing ${\mathcal U}$ with a smaller neighborhood of $0\in \mathbb C$ and simultaneously replacing ${\mathcal Z} $ with its inverse image, we can arrange for our complex $4$-fold $\mathcal Z$ to satisfy $$H^1 ({\mathcal Z} , {\mathcal O}(L^*))=H^2 ({\mathcal Z} , {\mathcal O}(L^*))=0.$$ \end{thm} \begin{proof} The proof proceeds by analogy to the techniques in \cite{lom}, and consists of several steps :\begin{enumerate} \item It is enough to \textbf{show that $\mathbf{H^j(Z_0,\mathcal O(L^*))=0}$ for $\mathbf{j=1,2}$ :} \\ This would imply $h^j(Z_{\mathfrak t},\mathcal O(L^*))= 0$ for $j=1,2$ for ${\mathfrak t}$ in a neighborhood of $0$, by the semicontinuity principle. In other words, the cohomology of the fibers vanishes, so we can apply Proposition (\ref{smallfiber}) to see $R^j\omega_*\mathcal O(L^*)=0$ for $j=1,2$.
The $E_2$ page of the Leray Spectral Sequence reads:\newline \begin{center} \begin{tabular}{|ccccc} $\vdots$ & $\vdots$ & $\vdots$ & \\ $H^0(\mathcal U,R^3\omega_*\mathcal O(L^*))$ & $H^1(\mathcal U,R^3\omega_*\mathcal O(L^*))$ & $H^2(\mathcal U,R^3\omega_*\mathcal O(L^*))$ &$\cdots$\\ $0$ & $0$ & $0$ &$\cdots$\\ $0$ & $0$ & $0$ &$\cdots$\\ \llap{$E_2$\hspace{.5cm} }$H^0(\mathcal U,R^0\omega_*\mathcal O(L^*))$ & $H^1(\mathcal U,R^0\omega_*\mathcal O(L^*))$ & $H^2(\mathcal U,R^0\omega_*\mathcal O(L^*))$ &$\cdots$\\ \cline{1-4} \end{tabular} \end{center}\vspace{.9cm} Remember that $$E^{p,q}_2 = H^p(\mathcal U,R^q\omega_*\mathcal O(L^*))$$ $$E^{p,q}_\infty = H^{p+q}(\mathcal Z,\mathcal O(L^*)) $$ and that the differential satisfies $$d_2(E^{p,q}_2) \subset E^{p+2,q-1}_2.$$ Vanishing of the second and third rows implies the immediate convergence of the first row up to the third column because of the differentials, so $$E^{p,0}_\infty=E^{p,0}_2 ~\textnormal{i.e.}~ H^{p+0}(\mathcal Z,\mathcal O(L^*))=H^p(\mathcal U,R^0\omega_*\mathcal O(L^*)) ~\textnormal{for}~ p\leq2,$$ hence $H^p(\mathcal Z,\mathcal O(L^*))=H^p(\mathcal U,R^0\omega_*\mathcal O(L^*))$ for $p\leq2$.\\ Since $\omega$ is a proper map, the direct image sheaf $\omega_*\mathcal O(L^*)$ is coherent\cite{gunningr3,bonica}. $\mathcal U$ is an open subset of $\mathbb C$, so it is Stein; and the so-called Theorem B of Stein theory characterizes Stein spaces by the vanishing of higher-degree ($p>0$) coherent sheaf cohomology, see \cite{lewis}, p. 67, \cite{hart}, p. 252, \cite{gunningr3,bonica}. So $H^p(\mathcal U,\omega_*\mathcal O(L^*))=0$ for $p>0$, which tells us that $H^p(\mathcal Z,\mathcal O(L^*))=0$ for $0<p\leq2$.
\item \label{mayervietoris}Related to $Z_0$, we have the \textbf{Mayer-Vietoris like} sheaf exact sequence $$0\to {\mathcal O}_{Z_0}(L^*)\to \nu_*{\mathcal O}_{\widetilde{Z}_1}(L^*)\oplus \nu_*{\mathcal O}_{\widetilde{Z}_2}(L^*) \to {\mathcal O}_Q(L^*)\to 0$$ where $\nu : \widetilde{Z}_1\sqcup \widetilde{Z}_2\to Z_0$ is the inclusion map on each of the two components of the disjoint union $\widetilde{Z}_1\sqcup \widetilde{Z}_2$. This gives the long exact cohomology sequence piece \begin{center} $0\to H^1(\mathcal O_{Z_0}(L^*))\to H^1(Z_0,\nu_*\mathcal O_{\widetilde{Z}_1}(L^*) \oplus \nu_*\mathcal O_{\widetilde{Z}_2}(L^*)) \to H^1(\mathcal O_Q(L^*))\to H^2(\mathcal O_{Z_0}(L^*))\to H^2(Z_0,\nu_*\mathcal O_{\widetilde{Z}_1}(L^*)\oplus \nu_*\mathcal O_{\widetilde{Z}_2}(L^*))\to 0$ \end{center} due to the fact that: \item \label{restrto quadric cohomology} $\mathbf{H^0(\mathcal O_Q(L^*))= H^2(\mathcal O_Q(L^*))=0}$ \textbf{:} To see this, we have to understand the restriction of $\mathcal O(L^*)$ to $Q$: \begin{center} $L^*|_Q=(\frac{1}{2}K_\mathcal{Z}-\widetilde{Z}_1)|_{\widetilde{Z}_2}|_Q =( \frac{1}{2}( K_{\widetilde{Z}_2}-\widetilde{Z}_2 )-\widetilde{Z}_1 ) |_{\widetilde{Z}_2}|_Q =( \frac{1}{2}( K_{\widetilde{Z}_2} + Q )- Q ) |_{\widetilde{Z}_2}|_Q =\frac{1}{2}( K_{\widetilde{Z}_2} - Q ) |_{\widetilde{Z}_2}|_Q =\frac{1}{2}( K_Q - Q - Q ) |_{\widetilde{Z}_2}|_Q =(\frac{1}{2} K_Q - Q ) |_{\widetilde{Z}_2}|_Q =\frac{1}{2} K_Q |_Q \otimes NQ_{\wt{Z}_2}^{-1} =\mathcal O(-2,-2)^{1/2} \otimes {\mathcal O(1,-1)}^{-1}=\mathcal O(-2,0)$ \end{center} Here we used the normal bundle of $Q$ in $\widetilde{Z}_2$, computed in Lemma (\ref{normalbundle}) to be $\mathcal O(1,-1)$, where the second component is the fiber direction in the blowing up process. So the line bundle $L^*$ is trivial on the fibers.
Since $Q={\mathbb P}_1 \times {\mathbb P}_1$, we have \begin{center} $H^0({\mathbb P}_1 \times {\mathbb P}_1,\mathcal O(-2,0))=H^0({\mathbb P}_1 \times {\mathbb P}_1,\pi_1^*\mathcal O(-2)) =H^0({\mathbb P}_1 ,{\pi_1}_*\pi_1^*\mathcal O(-2))=H^0({\mathbb P}_1,\mathcal O(-2))=0$ \end{center} by the Leray spectral sequence and the projection formula, since $H^k({\mathbb P}_1,\mathcal O)=0$ for $k>0$. Similarly \begin{center} $H^2({\mathbb P}_1 \times {\mathbb P}_1,\mathcal O(-2,0))=H^2({\mathbb P}_1,\mathcal O(-2))=0$\end{center} for dimensional reasons. Moreover, for the sake of curiosity, \begin{center} $H^1({\mathbb P}_1 \times {\mathbb P}_1,\mathcal O(-2,0))=H^1({\mathbb P}_1,\mathcal O(-2)) \approx H^0({\mathbb P}_1,\mathcal O(-2)\otimes \mathcal O(-2)^*)^*=H^0({\mathbb P}_1,\mathcal O)^*=\mathbb C$.\end{center} \item \label{hitvan} $\mathbf{H^1(\wt{Z}_2,\mathcal O_{\wt{Z}_2}(L^*))=H^2(\wt{Z}_2,\mathcal O_{\wt{Z}_2}(L^*))=0}$ \textbf{:} These are applications of Hitchin's second Vanishing Theorem and will help us simplify our exact sequence piece. \begin{center} $H^1(\wt{Z}_2,\mathcal O_{\wt{Z}_2}(L^*))=H^1(\wt{Z}_2,\mathcal O(K_\mathcal Z^{1/2}-\wt{Z}_1)|_{\wt{Z}_2}) =H^1(\wt{Z}_2,\mathcal O(K_\mathcal Z^{1/2}-Q)|_{\wt{Z}_2})=H^1(\wt{Z}_2,\pi^*K_{Z_2}^{1/2}) =H^1(Z_2,\pi_*\pi^*K_{Z_2}^{1/2})=H^1(Z_2,K_{Z_2}^{1/2})=0$ \end{center} by the Leray spectral sequence, the projection formula and Hitchin's Vanishing Theorem for $Z_2$, since it is the twistor space of a positive scalar curvature space. This implies $H^2(Z_2,K_{Z_2}^{1/2}) \approx H^1(Z_2,K_{Z_2}^{1/2})^*=0$ by Kodaira-Serre duality.
Hence our cohomological exact sequence piece simplifies to \begin{center} $0\to H^1(\mathcal O_{Z_0}(L^*))\to H^1(\wt{Z}_1,\mathcal O_{\widetilde{Z}_1}(L^*) ) \to H^1(\mathcal O_Q(L^*))\to H^2(\mathcal O_{Z_0}(L^*))\to H^2(\wt{Z}_1,\mathcal O_{\widetilde{Z}_1}(L^*) )\to 0$ \end{center} \item \label{tech}\textbf{$\mathbf{H^k(\mathcal O_{\wt{Z}_1}(L^* \otimes [Q]^{-1}_{\wt{Z}_1}) )=0}$ for $\mathbf{k=1,2,3}$ :} This technical result will be needed to understand the exact sequence in the next step. First we simplify the sheaf as \begin{center} $(L^*-Q)|_{\wt{Z}_1}\stackrel{def}=(\frac{1}{2}K_\mathcal Z-\wt{Z}_1-Q)|_{\wt{Z}_1} =\frac{1}{2}K_\mathcal Z|_{\wt{Z}_1}\stackrel{adj}=\frac{1}{2}(K_{\wt{Z}_1}-\wt{Z}_1)|_{\wt{Z}_1} =\frac{1}{2}(K_{\wt{Z}_1}+Q)|_{\wt{Z}_1}.$ \end{center} So \begin{center} $H^k(\wt{Z}_1,L^*-Q)=H^k(\wt{Z}_1,(K_{\wt{Z}_1}+Q)/2 )\stackrel{sd}\approx H^{3-k}(\wt{Z}_1,(K_{\wt{Z}_1}-Q)/2)^*$ $=H^{3-k}(\wt{Z}_1,\frac{1}{2}\pi^*K_{Z_1})^* \stackrel{lss}=H^{3-k}(Z_1,\frac{1}{2}\pi_*\pi^*K_{Z_1})^*$ $\stackrel{pf}=H^{3-k}(Z_1,K_{Z_1}^{1/2})^*\stackrel{sd}\approx H^k(Z_1,K_{Z_1}^{1/2})$\end{center} and for each $k=1,2,3$ one of the last two terms vanishes, by Hitchin's Vanishing Theorem applied in degrees $0$ and $1$. \item \label{restrictiontoq}\textbf{Restriction maps to $\mathbf{Q}$ :} Consider the exact sequence of sheaves on $\wt{Z}_1$ : $$0 \to \mathcal O_{\wt{Z}_1}(L^* \otimes [Q]^{-1}_{\wt{Z}_1}) \to \mathcal O_{\wt{Z}_1}(L^*) \to \mathcal O_Q(L^*) \to 0.$$ The previous step implies that the restriction maps $$H^1(\mathcal O_{\wt{Z}_1}(L^*))\stackrel{restr_1}{\longrightarrow}H^1(\mathcal O_Q(L^*))$$ and $$H^2(\mathcal O_{\wt{Z}_1}(L^*))\stackrel{restr_2}{\longrightarrow}H^2(\mathcal O_Q(L^*))$$ are isomorphisms. In particular $H^2(\mathcal O_{\wt{Z}_1}(L^*))=0$ due to (\ref{restrto quadric cohomology}).
Incidentally, this exact sheaf sequence substitutes for the role played by Hitchin's Vanishing Theorem for the $\wt{Z}_2$ component in the cohomology sequence; it also uses Hitchin's theorems for the $\wt{Z}_1$ component. \item\textbf{Conclusion :} Our cohomology exact sequence piece reduces to \begin{center} $0\to H^1(\mathcal O_{Z_0}(L^*))\to H^1(\wt{Z}_1,\mathcal O_{\widetilde{Z}_1}(L^*) ) \stackrel{restr_1}{\longrightarrow} H^1(\mathcal O_Q(L^*))\to H^2(\mathcal O_{Z_0}(L^*))\to 0$ \end{center} The isomorphism in the middle forces the remaining maps to be $0$, and hence we get $H^1(\mathcal O_{Z_0}(L^*))=H^2(\mathcal O_{Z_0}(L^*))=0$. \end{enumerate} \end{proof} \section*{The Sign of the Scalar Curvature}\label{secsign} The sections after this point are devoted to detecting the sign of the scalar curvature of the metric we consider on the connected sum. We use Green's functions for that purpose. Positivity of the scalar curvature is going to be characterized by non-triviality of the Green's functions. Then our Vanishing Theorem will provide the Serre-Horrocks vector bundle construction, which gives the Serre class, a substitute for the Green's function, following Atiyah \cite{atgrn}. And non-triviality of the Serre class will provide the non-triviality of the extension described by it. \section{Green's Function Characterization}\label{greatsmall} In this section, we define the Green's functions. To get a unique Green's function, we need an operator with trivial kernel. So we begin with a compact Riemannian $4$-manifold $(M,g)$, and assume that its \emph{Yamabe Laplacian}\index{Laplacian, Yamabe} $\Delta + s/6$ has trivial kernel. This is automatic if $g$ is conformally equivalent to a metric of positive scalar curvature, impossible if it is conformally equivalent to a metric of zero scalar curvature (the constants then lie in the kernel of the Hodge Laplacian, hence of $\Delta+0/6$), and may or may not happen for a metric of negative scalar curvature.
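A finite-dimensional toy model may illustrate why positivity of $s$ forces the trivial kernel: the combinatorial Laplacian of a cycle graph is positive semidefinite with kernel spanned by the constants, so adding a positive constant zeroth-order term (standing in for $s/6>0$) shifts the spectrum away from zero and makes the operator invertible. The size $n$ and the value $s=6$ below are arbitrary choices made for this sketch only.

```python
import numpy as np

# Discrete analogue of the Yamabe Laplacian Delta + s/6 on a cycle graph.
# Delta = d^*d is positive semidefinite with kernel the constants;
# adding the positive constant s/6 removes the kernel, mirroring the
# positive-scalar-curvature case discussed in the text.
n, s = 50, 6.0
I = np.eye(n)
Delta = 2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)

assert abs(np.linalg.eigvalsh(Delta)[0]) < 1e-10  # constants are in the kernel

L = Delta + (s / 6.0) * I
assert np.linalg.eigvalsh(L)[0] > 0.99            # lowest eigenvalue shifts to s/6 = 1

# L is now invertible: a unique discrete "Green's function" for the point 0
G = np.linalg.solve(L, I[:, 0])
```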
Since the Hodge Laplacian $\Delta$ is self-adjoint, $\Delta+s/6$ is also self-adjoint, implying that it has trivial cokernel once it has trivial kernel. Therefore it is a bijection, and we have a unique smooth solution $u$ of the equation ~$(\Delta + s/6) u =f$~ for any smooth function $f$. It also follows that there is a unique distributional solution $u$ for any distribution $f$. Let $y \in M$ be any point. Consider the Dirac delta distribution $\delta_y$ at $y$ defined by\index{Dirac delta distribution} $$\delta_y : C^\infty(M)\to \mathbb R ~~,~~ \delta_y(f)=f(y).$$ Intuitively, this behaves like a function identically zero on $M-\{y\}$ and infinite at $y$, with integral $1$. Then there is a unique distributional solution $G_y$ to the equation $$(\Delta + s/6)G_y =\delta_y$$ called the \emph{Green's function} \index{Green's function} for $y$. Since $\delta_y$ is identically zero on $M-\{ y\}$, elliptic regularity implies that $G_y$ is smooth on $M-\{y\}$. About $y$, one has an expansion $$G_y = \frac{1}{4\pi^2}\frac{1}{\varrho^2_y}+ O(\log \varrho_y)$$ where $\varrho_y$ denotes the distance from $y$. In the case $(M,g)$ is self-dual this expansion reduces to \cite{atgrn} $$G_y =\frac{1}{4\pi^2}\frac{1}{\varrho_y^2}+\textnormal{bounded terms}.$$ We also call $G_y$ the {\em conformal Green's function} of $(M,g,y)$. This terminology comes from the fact that the Yamabe Laplacian is a {\em conformally invariant} differential operator as a map between sections of certain real line bundles. For any nonvanishing smooth function $u$, the conformally equivalent metric $\tilde{g}=u^2g$ has scalar curvature $$\tilde s=6u^{-3}(\Delta+s/6)u.$$ A consequence of this is that $u^{-1}G_y$ is the conformal Green's function for $(M,u^2g,y)$ if $G_y$ is the one for $(M,g,y)$. Any metric on a compact manifold is conformally equivalent to a metric whose scalar curvature has constant sign.
Indeed, if $u\not\equiv 0$ is an eigenfunction for the lowest eigenvalue $\lambda$ of the Yamabe Laplacian, then $$\tilde{s}=6u^{-3}\lambda u=6\lambda u^{-2}$$ for the metric $\tilde g=u^2g$. Actually a stronger statement is true thanks to the proof \cite{lp} of the Yamabe Conjecture: any metric on a compact manifold is conformally equivalent to a metric of constant scalar curvature (CSC). Also, if two metrics with scalar curvatures of fixed signs are conformally equivalent, then their scalar curvatures have the same sign. The sign of the Yamabe constant of a conformal class, meaning the sign of the constant scalar curvature of the metric produced by the proof of the Yamabe Conjecture, is the same as the sign of the smallest Yamabe eigenvalue $\lambda$ for any metric in the conformal class. Before giving our characterization for positivity, we are going to state the maximum principle we will be using. Consider the differential operator $L_c = \sum_{i,j=1}^na^{ij}(x)\frac{\partial^2}{\partial x_i \partial x_j}$ arranged so that $a^{ij}=a^{ji}$. It is called \emph{elliptic \cite{protter}}\index{ellipticity} \label{ellipticity} at a point $x=(x_1, \ldots, x_n)$ if there is a positive quantity $\mu (x)$ such that $$\sum_{i,j=1}^na^{ij}(x)\xi_i\xi_j \geq \mu(x)\sum_{i=1}^n{\xi_i}^2$$ for all $n$-tuples of real numbers $(\xi_1, \ldots, \xi_n)$. The operator is said to be uniformly elliptic in a domain $\Omega$ if the inequality holds at each point of $\Omega$ and if there is a positive constant $\mu_0$ such that $\mu (x) \geq \mu_0$ for all $x$ in $\Omega$. Ellipticity of a more general second order operator is defined via its second order term. In matrix language, the ellipticity condition asserts that the symmetric matrix $[a^{ij}]$ is positive definite at each point $x$.
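As a quick numerical illustration (not part of the argument, with hypothetical sample entries), the optimal ellipticity constant $\mu$ of a constant-coefficient operator is the smallest eigenvalue of the symmetric matrix $[a^{ij}]$, and the quadratic form bound then holds at every $\xi$:

```python
import math
import random

# Hypothetical symmetric coefficient matrix [a^{ij}] of a second-order operator.
a11, a12, a22 = 2.0, 1.0, 3.0

# Smallest eigenvalue of [[a11, a12], [a12, a22]], via the closed 2x2 formula;
# this is the optimal ellipticity constant mu.
mu = (a11 + a22 - math.sqrt((a11 - a22) ** 2 + 4 * a12 ** 2)) / 2

# Check the ellipticity bound  sum a^{ij} xi_i xi_j >= mu * |xi|^2  on samples.
random.seed(1)
for _ in range(1000):
    x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
    quad_form = a11 * x1 * x1 + 2 * a12 * x1 * x2 + a22 * x2 * x2
    assert quad_form >= mu * (x1 * x1 + x2 * x2) - 1e-12
```

Here positive definiteness of $[a^{ij}]$ is exactly the statement $\mu > 0$.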
\begin{lem}[Hopf's strong maximum principle \cite{protter}] \label{strongmax} Let $u$ satisfy the differential inequality $$(L_c+h)u \geq 0 ~~\textnormal{with}~~ h\leq 0$$ where $L_c$ is uniformly elliptic in $\Omega$ and the coefficients of $L_c$ and $h$ are bounded. If $u$ attains a nonnegative maximum at an interior point of $\Omega$, then $u$ is constant. \end{lem} So if, for example, the maximum of $u$ is attained in the interior and is $0$, then $u$ has to vanish. An application of this principle provides us with a criterion for determining the sign of the Yamabe constant using Green's functions: \begin{lem}[Green's Function Characterization for the Sign \cite{lom}]\label{greenchar} Let $(M,g)$ be a compact Riemannian $4$-manifold with $Ker(\Delta + s/6)=0$, i.e. the Yamabe Laplacian has trivial kernel, taking $\Delta=d^*d$ \cite{atgrn}. Fix a point $y\in M$. Then for the conformal class $[g]$ we have the following assertions: \\ 1. It does not contain a metric of zero scalar curvature. \\ 2. It contains a metric of positive scalar curvature iff $G_y(x)\neq0$ for all $x \in M-\{y\}$. \\ 3. It contains a metric of negative scalar curvature iff $G_y(x)<0$ for some $x \in M-\{y\}$. \end{lem} \begin{proof} Proceeding as in \cite{lom}, $[g]$ has three possibilities for its Yamabe type: one of $0$, $+$, $-$. Since the Yamabe Laplacian is conformally invariant as acting on functions with conformal weight, we may assume that either $s=0$, or $s>0$, or else $s<0$ everywhere. \begin{itemize} \item[] \begin{itemize} \item[$s=0$ :]Then $(\Delta+0/6)f=\Delta f=0$ is solved by any nonzero constant function $f$. Therefore $Ker(\Delta+s/6)\neq 0$, which is not our situation. \item[$s>0$ :] For the smooth function $G_y : M-\{y\} \to \mathbb{R}$~,~$G_y^{-1}((-\infty,a])$ is closed hence compact for any $a \in \mathbb{R}$. Hence $G_y$ attains a minimum, say at $m \in M-\{y\}$. We also have $(\Delta+s/6)G_y=0$ on $M-\{y\}$.
At the minimum, choose normal coordinates so that $\Delta G_y(m)=-\sum_{k=1}^4 \partial_k^2G_y(m)$. The second partial derivatives there are greater than or equal to zero, so $\Delta G_y(m)\leq 0$ ~and~ $G_y(m)=-{6\over s}\Delta G_y(m)\geq 0$. We obtained nonnegativity, but need positivity, so assume $G_y(m)=0$.\\ Then the maximum of $-G_y$ is attained and it is nonnegative with $(\Delta_c-s/6)(-G_y)=0\geq 0$. So the strong maximum principle (\ref{strongmax}) is applicable and $-G_y\equiv 0$. This is impossible since $G_y(x)\to \infty$ as $x\to y$; hence $G_y(m)\neq 0$ and $G_y>0$. Note that the weak maximum principle was not applicable: we had $G_y\geq 0$, which implies $\Delta_cG_y={s\over 6}G_y\geq 0$, though we had a minimum rather than a maximum. Also note that $\nabla G_y(m)=0$ at a minimum, though this does not imply $\mathrm{div}\,\nabla G_y(m)=0$. \item[$s<0$ :]In this situation we have $$\frac{1}{6}\int_M sG_ydV= \int_M(\Delta+s/6)G_ydV=\int_M \delta_ydV=1>0$$ implying $G_y<0$ at some point. Besides, at some other point it should be zero, since $G_y(x)\to +\infty$ as $x\to y$. \end{itemize} \end{itemize} \end{proof} \section{Cohomological Characterization}\label{cohom} Now let $(M^4,g)$ be a compact self-dual Riemannian manifold with twistor space $Z$. One of the basic facts of twistor theory \cite{hitlin} is that for any open set $U\subset M$ and the corresponding inverse image $\widetilde U\subset Z$ in the twistor space, there is a natural isomorphism $$pen : H^1 (\widetilde U, \mathcal O(K^{1/2}))\stackrel{\sim}{\longrightarrow} \left\{ \begin{tabular}{c}smooth complex-valued solutions\\ of~ $(\Delta + s/6)u=0$ ~in~ $U$ \end{tabular} \right\}$$ which is called \emph{the Penrose transform}\index{Penrose transform}\cite{bailsing,hitka,atgrn}, where $K=K_Z$. Since locally $\mathcal O(K^{1/2}) \approx\mathcal O(-2)$, e.g.
$Z=\mathbb{CP}_3$, for a cohomology class $\psi \in H^1 (\widetilde U, {\mathcal O}(K^{1/2}))$, the value of the corresponding function $pen_\psi$ at $x\in U$ is obtained by restricting $\psi$ to the twistor line $P_x\subset Z$ to obtain an element $$pen_\psi(x)=\psi |_{P_x}\in H^1(P_x, {\mathcal O}(K^{1/2}))\approx H^1({\Bbb C\Bbb P}_1 , {\mathcal O}(-2) ) \approx \mathbb C.$$ Note that $pen_\psi$ is a section of a line bundle, but the choice of a metric $g$ in the conformal class determines a canonical trivialization of this line bundle \cite{hitka}, and $pen_\psi$ then becomes an ordinary function. Taking $U=M-\{y\}$, we have $(\Delta +s/6)G_y=0$ on $U$ in the presence of the unique conformal Green's function (\ref{greatsmall}), and $G_y(x)$, regarded as a function of $x$, corresponds to a canonical element $$pen^{-1}(G_y)\in H^1(Z-P_y, {\mathcal O}(K^{1/2}))$$ where $P_y$ is the twistor line over the point $y$. What is this interesting cohomology class? The answer was discovered by Atiyah \cite{atgrn} and involves the \emph{Serre class} of a complex submanifold, a construction due to Serre \cite{serremod} and Horrocks \cite{horrocks}. We now give the definition of the Serre class via the following lemma: \begin{lem}[Serre-Horrocks Vector Bundle, Serre Class] \label{serhor} Let $W$ be a (possibly non-compact) complex manifold, let $V\subset W$ be a closed complex submanifold of complex codimension $2$, and let $N=N_{V/W}$ be the normal bundle of $V$.
For any holomorphic line bundle $L\to W$ satisfying $$L|_V\approx \wedge^2 N ~~~\textnormal{and}~~~H^1(W, {\mathcal O}(L^*))=H^2(W, {\mathcal O}(L^*))=0,$$ there is a rank-$2$ holomorphic vector bundle $E\to W$, called the \emph{Serre-Horrocks bundle} of $(W,V,L)$, together with a holomorphic section $\zeta$ satisfying $$\wedge^2 E \approx L~~~,~~~d\zeta|_V : N\stackrel{\sim}{\to} E~~~\textnormal{and}~~~\zeta =0 ~\textnormal{exactly on V}.$$ The pair $(E, \zeta)$ is unique up to isomorphism if we also impose that the isomorphism $\det d\zeta: \wedge^2 N\to \wedge^2E|_V$ should agree with a given isomorphism $ \wedge^2 N\to L|_V$. They also give rise to an extension $$0\to {\mathcal O}(L^*)\to {\mathcal O}(E^*) \stackrel{\cdot \zeta} {\to} {\mathscr I}_V\to 0,$$ the class of which is defined to be the {\em Serre class} $\lambda(V)\in \textnormal{Ext}^{1}_W ({\mathscr I}_V, {\mathcal O}(L^*))$, where ${\mathscr I}_V$ is the ideal sheaf of $V$; this extension determines an element of $H^1(W-V, {\mathcal O}(L^*))$ by restricting to $W-V$. \end{lem} \begin{proof} Consult \cite{lom} for a proof. \end{proof} For an alternative treatment of the Serre class via the Grothendieck class, consult \cite{atgrn}. We are now ready to state the answer of Atiyah: \begin{thm}[Atiyah \cite{atgrn}] \label{atiyah} Let $(M^4,g)$ be a compact self-dual Riemannian manifold with twistor space $Z$, and assume that the conformally invariant Laplace operator $\Delta+s/6$ on $M$, with $\Delta=d^*d$, has trivial kernel, so that the Green's functions are well defined. Let $y\in M$ be any point, and $P_y\subset Z$ be the corresponding twistor line.\\ Then the image of the Serre class $\lambda (P_y)\in \textnormal{Ext}^{1}_Z ({\mathscr I}_{P_y}, {\mathcal O}(K^{1/2}))$ in $H^1(Z-P_y, {\mathcal O}(K^{1/2}))$ is the Penrose transform of the Green's function $G_y$ times a non-zero constant.
More precisely, $$pen^{-1}(G_y)={1\over 4\pi^2}\lambda(P_y).$$ \end{thm} Now, thanks to this remarkable result of Atiyah, we can substitute the Serre class for the Green's functions in our previous characterization (\ref{greenchar}) and get rid of them, to obtain a better criterion for positivity as follows: \begin{prop}[Cohomological Characterization, \cite{lom}]\label{nice} Let $(M^4,g)$ be a compact self-dual Riemannian manifold with twistor space $Z$. Let $P_y$ be a twistor line in $Z$. \\ Then the conformal class $[g]$ contains a metric of positive scalar curvature if and only if $H^1(Z, {\mathcal O}(K^{1/2}))=0$ and the Serre-Horrocks vector bundle (\ref{serhor}) on $Z$, taking $L= K^{-1/2}$, associated to $P_y$ satisfies $E|_{P_x}\approx {\mathcal O}(1)\oplus {\mathcal O}(1)$ for every twistor line $P_x$. \end{prop} \begin{proof} $\Rightarrow$ : If a conformal class contains a metric of positive scalar curvature $g$, then we can show that $Ker(\Delta+{s\over 6})$ is trivial as follows: Let $(\Delta+{s\over 6})u=0$ for some smooth function $u:M\to\mathbb R$, where $s>0$. Since $M$ is compact, $u$ has a minimum, say at some point $m$. At the minimum one has $$\Delta u(m)=-\sum u_{kk}(m)\leq 0$$ because of the normal coordinates about $m$, the sign convention $\Delta=d^*d$ for the Laplacian, and the second derivative test. So $$\Delta u=-{su\over 6}\leq 0 ~~~\textnormal{implying}~~~ u\geq 0~~~ \textnormal{everywhere}.$$ If we integrate over $M$, one gets $0$ for the integral of the Laplacian of a function, so $$0=\int_M\Delta u ~dV=\int_M -{su\over 6} dV$$ hence $$\int_M su~dV=0~~~\textnormal{implying}~~~u\equiv 0~~~\textnormal{since}~~~s>0,$$ that is to say, the kernel is trivial.
Recall that the Penrose transform map $$pen : H^1(Z, \mathcal O(K^{1/2}))\stackrel{\sim}{\longrightarrow} Ker(\Delta+{s\over 6})$$ implies that $H^1 (Z, \mathcal O(K^{1/2}))=0$; also, by Serre duality, $$H^2(Z,K^{1/2})\approx H^{0,2}_{\bar{\partial}}(Z,K^{1/2})\stackrel{SD}{\approx}H^{3,1}_{\bar{\partial}}(Z,K^{1/2*})^*\approx H^1(Z,K\otimes K^{-1/2})^*$$ $$=H^1(Z,K^{1/2})^*=0.$$ Also $$\wedge^2N_{P_y}=\wedge^2(\mathcal O_{P_y}(1)\oplus\mathcal O_{P_y}(1))=\bigoplus_{2=p+q}\wedge^p\mathcal O(1)\otimes \wedge^q\mathcal O(1)=\wedge^1\mathcal O(1)\otimes\wedge^1\mathcal O(1)$$ $$=\mathcal O_{\mathbb P_1}(2)=K^{-1/2}|_{P_y}$$\\ since $K^{-1/2}|_{P_y}=\mathcal O_{\mathbb P_3}(4)^{1/2}|_{P_y}=\mathcal O_{P_y}(2).$ So the hypotheses of the Serre-Horrocks vector bundle construction (\ref{serhor}) for $L=K^{-1/2}$ are satisfied. Then we have the image of the Serre class $$4\pi^2pen^{-1}(G_y)=\lambda(P_y)\in H^1(Z-P_y,K^{1/2}).$$ So $$4\pi^2G_y(x)=pen_{\lambda(P_y)}(x)=\lambda(P_y)|_{P_x}\in H^1(P_x,\mathcal O(K^{1/2}))\approx \mathbb C$$ where $$H^1(P_x,\mathcal O(K^{1/2}))\approx H^1(\mathbb{CP}_1,\mathcal O(-2))\approx H^0(\mathbb{CP}_1,\Omega^1(\mathcal O(-2)^*)) =H^0(\mathbb{CP}_1,\mathcal O)\approx \mathbb C.$$ By the Green's Function Characterization (\ref{greenchar}) we know that $4\pi^2G_y(x)\neq 0$. So $\lambda(P_y)|_{P_x}\in H^1(P_x,\mathcal O(K^{1/2}))$ is also nonzero. Now $\lambda(P_y)$ corresponds to the extension $$0\to\mathcal O(K^{1/2})\to\mathcal O(E^*)\to \mathcal I_{P_y}\to 0.$$ If we restrict to $Z-P_y$, we get $$0\to\mathcal O(K^{1/2})\to\mathcal O(E^*)\to \mathcal O\to 0;$$ dualizing, we obtain $$0\to\mathcal O\to\mathcal O(E)\to \mathcal O(K^{-1/2})\to 0;$$ now restricting this extension to $P_x$, $$0\to\mathcal O_{\mathbb P_1}\to\mathcal O(E)|_{P_x}\to \mathcal O(2)\to 0.$$ So, since $G_y(x)\neq 0$, we expect that this extension is nontrivial. Let's figure out the possibilities.
First of all, by the theorem of Grothendieck \cite[p.~22]{okonek}, every holomorphic vector bundle over $\mathbb P_1$ splits. In our case $E|_{P_x}=\mathcal O(k)\oplus\mathcal O(l)$ for some $k,l\in\mathbb Z$. Moreover, if we impose $k\geq l$, this splitting is uniquely determined \cite{okonek}. Secondly, any short exact sequence of vector bundles splits topologically \cite[p.~16]{okonek}. In our case, topologically we have $E|_{P_x}\stackrel{t}{=}\mathcal O\oplus\mathcal O(2)$. So, equating the Chern classes, we have $$c_1(E|_{P_x})[P_x]=c_1(\mathcal O(k)\oplus\mathcal O(l))[\mathbb P_1]=(c_1\mathcal O(k)+c_1\mathcal O(l))[\mathbb P_1]=k+l$$ equal to $$c_1(E|_{P_x})[P_x]=c_1(\mathcal O\oplus\mathcal O(2))[\mathbb P_1]=(c_1\mathcal O+c_1\mathcal O(2))[\mathbb P_1]=0+2=2.$$ Hence $l=2-k$, and we now have $E|_{P_x}=\mathcal O(k)\oplus\mathcal O(2-k)$. Our extension becomes $$0\to\mathcal O_{\mathbb P_1}\to\mathcal O(k)\oplus\mathcal O(2-k)\to \mathcal O(2)\to 0.$$ The inclusion $\mathcal O\hookrightarrow\mathcal O(k)\oplus\mathcal O(2-k)$ gives a trivial holomorphic subbundle. It has a one-complex-dimensional space of sections, and these sections are automatically sections of $\mathcal O(k)\oplus\mathcal O(2-k)$, too. This implies $$0\neq H^0(\mathcal O(k)\oplus\mathcal O(2-k))=H^0(\mathcal O(k))\oplus H^0(\mathcal O(2-k)),$$ imposing $k,2-k\geq 0$ by the vanishing of $H^0(\mathcal O(d))$ for $d<0$ \cite{gh}, since the direct summands $\mathcal O(k)$ and $\mathcal O(2-k)$ should possess sections. Also, from uniqueness, $k\geq l=2-k$. Altogether we have $2\geq k\geq 1$. Of the two choices, $k=2$ gives the trivial extension $\mathcal O(2)\oplus\mathcal O$, and $k=1$ gives the nontrivial extension $E|_{P_x}=\mathcal O(1)\oplus\mathcal O(1)$, as we expected. See the following remark for existence.
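The short case analysis above can be mirrored by a toy enumeration (a sketch, not part of the proof): imposing $k+l=2$, the normalization $k\geq l$, and $k,\,2-k\geq 0$ leaves exactly the two splitting types found.

```python
# Enumerate possible splitting types O(k) + O(l) of E|_{P_x} over P^1, subject to
# the constraints derived above: c_1 forces k + l = 2, the normalization k >= l,
# and existence of sections forces k >= 0 and l = 2 - k >= 0.
splittings = [(k, 2 - k) for k in range(-10, 11)
              if k >= 2 - k and k >= 0 and 2 - k >= 0]

# k = 2 is the trivial extension O(2) + O; k = 1 the nontrivial O(1) + O(1).
assert splittings == [(1, 1), (2, 0)]
```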
$\Leftarrow$ : For the converse, if $E|_{P_x}=\mathcal O(1)\oplus\mathcal O(1)$, then we already showed that this is the nontrivial extension, hence $G_y(x)\neq 0$, so that the scalar curvature is positive by the Green's Function Characterization (\ref{greenchar}). \end{proof} \begin{rmk}The nontrivial extension of $\mathcal O$ by $\mathcal O(2)$ exists by the Euler exact sequence\index{Euler sequence} $$0\to \mathcal O\to\mathcal O(1)^{\oplus n+1}\stackrel{\mathcal E}{\to} T'\mathbb P^n\to 0$$ \cite[p.~409]{gh} for $n=1$. Alternatively, the maps $i:\rho\mapsto (\rho Z_0,\rho Z_1)$ and $j:(u,v)\mapsto uZ_1-vZ_0$, for coordinates $[Z_0:Z_1]$ on $\mathbb P_1$, yield the exact sheaf sequence $$0\to\mathcal O(-1)\stackrel{i}{\to}\mathcal O\oplus\mathcal O\stackrel{j}{\to}\mathcal O(1)\to 0;$$ tensoring with $\mathcal O(1)$ produces the nontrivial $\mathcal O(1)\oplus\mathcal O(1)$ extension. Since we have a unique nontrivial extension, this shows $$\textnormal{Ext}^1(\mathcal O(2),\mathcal O)=\mathbb C,$$ used in \cite{atgrn} to classify the extensions. On the other hand, $$H^1(Hom(\mathcal O(2),\mathcal O))=H^1(\mathcal O(2)^*\otimes\mathcal O)=H^1(\mathcal O_{\mathbb P_1}(-2))$$ $$=H^0(\mathbb P_1,\Omega^1(\mathcal O(-2)^*))= H^0(\mathbb P_1,\mathcal O)=\mathbb C,$$ used in \cite{DF} to classify the extensions. So our computation verifies the isomorphism $$\textnormal{Ext}^q(M,\mathcal F,\mathcal G)\approx H^q(M,\mathcal F^*\otimes_\mathcal O \mathcal G)$$ for locally free sheaves or vector bundles, for $q=1$. See \cite[p.~706]{gh}.\\ Here, \textnormal{Ext} stands for what is called the {\em global Ext}\index{Ext, global} group, usually defined to be the hypercohomology of the complex of sheaves associated to a global syzygy for $\mathcal F$, though in practice it is usually computed via the spectral sequence to be $$\textnormal{Ext}^k(\mathcal F,\mathcal G)=H^0(\underline{Ext}^k_\mathcal O(\mathcal F,\mathcal G))$$ under some vanishing conditions \cite{gh}.
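The one-dimensionality of $H^1(\mathbb P_1,\mathcal O(-2))$ used in these computations is an instance of the standard dimension formulas on $\mathbb P_1$, namely $h^0(\mathcal O(d))=\max(d+1,0)$ and $h^1(\mathcal O(d))=\max(-d-1,0)$; a small sanity-check sketch:

```python
# Dimensions of sheaf cohomology of O(d) on P^1 (standard facts):
def h0(d):
    return max(d + 1, 0)

def h1(d):
    return max(-d - 1, 0)

# Riemann-Roch on P^1: chi(O(d)) = h^0 - h^1 = d + 1.
for d in range(-6, 7):
    assert h0(d) - h1(d) == d + 1

# Ext^1(O(2), O) = H^1(O(-2)) is one-dimensional, as used above.
assert h1(-2) == 1 and h0(-2) == 0
```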
\hfill $\square$ \end{rmk} \section{The Sign of the Scalar Curvature} \label{scalar} We are now ready to approach the problem of determining the sign of the Yamabe constant for the self-dual conformal classes constructed in Theorem (\ref{thmb}). The techniques used here are analogous to the ones used by LeBrun in \cite{lom}. \begin{thm}\label{pos} Let $(M_1,g_1)$ and $(M_2,g_2)$ be compact self-dual Riemannian $4$-manifolds with $H^2(Z_i, {\mathcal O}(TZ_i))=0$ for their twistor spaces. Moreover, suppose that they have positive scalar curvature. Then, for all sufficiently small ${\mathfrak t}>0$, the self-dual conformal class $[g_{\mathfrak t}]$ obtained on $M_1 \# M_2$ by the Donaldson-Friedman Theorem (\ref{thmb}) contains a metric of positive scalar curvature. \end{thm} \begin{proof} Pick a point $y \in (M_1 \# M_2)\backslash M_1$. Consider the real twistor line $P_y\subset \wt{Z}_2$, and extend it to a $1$-parameter family of twistor lines $P_{y_{\mathfrak t}}\subset Z_{\mathfrak t}$ for ${\mathfrak t}$ near $0\in \mathbb C$, such that $P_{y_{\mathfrak t}}$ is a real twistor line for ${\mathfrak t}$ real. By shrinking ${\mathcal U}$ if needed, we may arrange that ${\mathcal P}= \cup_{\mathfrak t} P_{y_{\mathfrak t}}$ is a closed codimension-$2$ submanifold of ${\mathcal Z}$ and $H^1 ({\mathcal Z} , {\mathcal O}(L^*))=H^2 ({\mathcal Z} , {\mathcal O}(L^*))=0$ by the Vanishing Theorem (\ref{vanishing}). Next we check that $L|_\mathcal P\approx \wedge^2N_{\mathcal P}$. Over a twistor line $P_{y_\mathfrak t}$ we have $$\wedge^2N_\mathcal P|_{P_{y_\mathfrak t}} =\wedge^2 (\mathcal O(1)\oplus\mathcal O(1))=\mathcal O_{P_{y_\mathfrak t}}(2)$$ by considering the first Chern classes.
On the other hand, notice that the restriction of $L^*$ to any smooth fiber $Z_{\mathfrak t}$, ${\mathfrak t}\neq 0$, is simply $K^{1/2}$ : $$L^*|_{Z_\mathfrak t}=({1\over 2}K_\mathcal Z-\wt{Z}_1)|_{Z_\mathfrak t}={1\over 2}K_\mathcal Z|_{Z_\mathfrak t}= {1\over 2}(K_{Z_\mathfrak t}-Z_\mathfrak t)|_{Z_\mathfrak t}= {1\over 2}K_{Z_\mathfrak t}|_{Z_\mathfrak t}.$$ Here, $\wt{Z}_1|_{Z_\mathfrak t}=0$ because $\wt{Z}_1$ and ${Z_\mathfrak t}$ do not intersect for ${\mathfrak t}\neq 0$, and the normal bundle of $Z_\mathfrak t$ is trivial because we have a standard deformation. Then $$L|_{P_{y_\mathfrak t}}= K^{-{1/2}}_{Z_\mathfrak t}|_{P_{y_\mathfrak t}}=TF|_{P_{y_\mathfrak t}}=\mathcal O_{P_{y_\mathfrak t}}(2) ~~~\textnormal{for}~~~ \mathfrak t\neq 0$$ since ~$TF$ of Sec (\ref{natural}) is the square-root of the anti-canonical bundle. For the case $\mathfrak t=0$, we need the fact that ~$L^*|_{\widetilde{Z}_2}=\pi^*K^{1/2}_{Z_2}$~, which we have computed in the step \ref{hitvan} of the proof of the Vanishing Theorem (\ref{vanishing}). This yields $$L|_{P_{y_0}}=\pi^*K^{-1/2}_{Z_2}|_{\wt{Z}_2}|_{P_{y_0}}=\mathcal O_{P_{y_0}}(2).$$ Then the Serre-Horrocks construction (\ref{serhor}) is available to obtain the holomorphic vector bundle $E\to {\mathcal Z}$ and a holomorphic section $\zeta$ vanishing exactly along $\mathcal P$; also, the corresponding extension $$0\to {\mathcal O}(L^*)\to {\mathcal O}(E^*)\stackrel{\cdot\zeta}{\to} {\mathscr I}_{\mathcal P}\to 0$$ gives us the Serre class $\lambda({\mathcal P})\in H^1({\mathcal Z}-{\mathcal P}, {\mathcal O}(L^*))$.
Since $L^*|_{Z_\mathfrak t}=K^{1/2}_{Z_\mathfrak t}$ for $\mathfrak t\neq 0$ by the above computation, Theorem (\ref{atiyah}) of Atiyah tells us that the restriction of $\lambda ({\mathcal P})$ to $Z_{\mathfrak t}$, ${\mathfrak t} > 0$, has Penrose transform equal to a positive constant times the conformal Green's function of $( M_1\# M_2, g_{\mathfrak t}, y_{\mathfrak t})$ for any ${\mathfrak t} > 0$. Now we will restrict $(E,\zeta )$ to the two components of the divisor $Z_0$. We begin by restricting to $\widetilde{Z}_2$. We have $L|_{P_{y_0}}=\mathcal O_{P_{y_0}}(2)=\wedge^2N_{P_{y_0}}$ and $$H^k(\wt{Z}_2,L^*)=H^k(\wt{Z}_2,\pi^*K^{1/2}_{Z_2})=H^k(Z_2,\pi_*\pi^*K^{1/2}_{Z_2}) =H^k(Z_2,K^{1/2}_{Z_2})=0$$ for $k=1,2$, because of the projection formula, the Leray spectral sequence, and Hitchin's vanishing theorem for positive scalar curvature on $M_2$. So we have the Serre-Horrocks bundle for the triple $(\wt{Z}_2 , P_{y_0} , L|_{\wt{Z}_2}=\pi^*K^{-1/2}_{Z_2})$. On the other hand, it is possible to construct the Serre-Horrocks bundle $E_2$ for the triple $(Z_2,P_{y_0},K_{Z_2}^{-1/2})$, for which all conditions are already checked to be satisfied. In the construction of these Serre-Horrocks bundles, if we stick to a chosen isomorphism $\wedge^2N \to L|_{P_{y_0}}$, these bundles are going to be isomorphic by (\ref{serhor}). The splitting type of $E$ on the twistor lines corresponding to the points of $M_2-\{y_0,p_2\}$ is therefore the same as the splitting type of $E_2$, which is $\mathcal O(1)\oplus\mathcal O(1)$ since $M_2$ already admits a self-dual metric of positive scalar curvature. Secondly, we restrict $(E,\zeta)$ to $\wt{Z}_1$.
Alternatively, we restrict the Serre class $\lambda(\mathcal P)$ to $H^1(\wt{Z}_1,\mathcal O(L^*))$, where \begin{center} $L^*|_{\wt{Z}_1}=\frac{1}{2}K_\mathcal Z-\wt{Z}_1|_{\wt{Z}_1} =\frac{1}{2}K_\mathcal Z+Q|_{\wt{Z}_1}\stackrel{adj}=\frac{1}{2}(K_{\wt{Z}_1}-\wt{Z}_1)+Q|_{\wt{Z}_1} =\frac{1}{2}(K_{\wt{Z}_1}+Q)+Q|_{\wt{Z}_1}=\frac{1}{2}(\pi^*K_{Z_1}+2Q)+Q|_{\wt{Z}_1}=\pi^*{1\over 2}K_{Z_1}+2Q|_{\wt{Z}_1},$ \end{center} and show that it is non-zero on every real twistor line away from $Q$. Remember that we have the restriction isomorphism obtained in the step \ref{restrictiontoq} of the proof of the Vanishing Theorem (\ref{vanishing}), $$H^1(\mathcal O_{\wt{Z}_1}(L^*))\stackrel{\sim}{\longrightarrow}H^1(\mathcal O_Q(L^*))\approx \mathbb C,$$ as a consequence of Hitchin's vanishing theorems for positive scalar curvature on $M_1$, as mentioned in the step \ref{tech}, and $H^1(\mathcal O_Q(L^*))=H^1({\mathbb P}_1 \times {\mathbb P}_1,\mathcal O(-2,0))=\mathbb C$, as computed in the step \ref{restrto quadric cohomology}. This shows that if there is a rational curve in $Q$ on which the Serre class is non-zero, then this class is non-zero and a generator of $H^1(\mathcal O_{\wt{Z}_1}(L^*))$. The Serre-Horrocks bundle construction on $Z_2$ shows us that $E|_{C_2}= \mathcal O(1)\oplus\mathcal O(1)$, where $C_2$ is the twistor line on which the blow up is done. We know that $Q=\mathbb P_1\times\mathbb P_1\approx \mathbb P(NC_2)$. So the exceptional divisor has one set of rational curves which are the fibers, and another set of rational curves coming from the sections of the projective bundle $\mathbb P (NC_2)$. Take the zero section of $\mathbb P(NC_2)$, on which $E$ has splitting type $\mathcal O(1)\oplus\mathcal O(1)$. Over the corresponding rational curve in $Q$, $E$ is going to be the same, hence of non-trivial splitting type. This shows that over this rational curve on $Q$, the Serre class is nonzero.
Hence, by the isomorphism above, the Serre class is (up to a constant) the nontrivial class in $H^1(\wt{Z}_1,\mathcal O(L^*))\approx\mathbb C$. Next we have to show that this non-trivial class is non-zero on every real twistor line in $\wt{Z}_1-Q$ or $Z_1-C_1$\footnote{Thanks to C.~LeBrun for this idea.}. For this purpose, consider the Serre-Horrocks vector bundle $E_1$ and its section $\zeta_1$ for the triple $(Z_1,C_1,K_{Z_1}^{-1/2})$, so that $\pi^*\zeta_1$ is a section of $\pi^*E_1$ vanishing exactly along $Q$. Remember the construction of the line bundle associated to the divisor $Q$ in $\wt{Z}_1$ \cite{gh}. Consider the local defining functions $s_\alpha \in \mathfrak M^*(U_\alpha)$\footnote{Here, $\mathfrak M^*$ stands for the multiplicative sheaf of meromorphic functions which are not identically zero, in the convention of \cite{gh}. Actually, the local defining functions here are holomorphic because $Q$ is effective.} of $Q$ over some open cover $\{U_\alpha\}$ of $\wt{Z}_1$. These functions are holomorphic and vanish to first order along $Q$. Then the corresponding line bundle is constructed via the transition functions ~$g_{\alpha\beta}=s_\alpha\ / s_\beta$. Since the $s_\alpha$'s transform according to the transition functions, they constitute a holomorphic section $s$ of this line bundle $[Q]$, which vanishes to first order along $Q$. Local holomorphic sections of this bundle are denoted by $\mathcal O([Q])$, and they are local functions with simple poles along $Q$. If we multiply $\pi^*\zeta_1$ with these functions, we get a holomorphic section of $\pi^*E_1$ on the corresponding local open set, since $\zeta_1$ has a non-degenerate zero on $Q$, so that it vanishes exactly to first order there. This guarantees that the map is one-to-one, and the multiplication embeds $\mathcal O([Q])$ into $\pi^*E_1$.
The quotient has rank $1$, and the transition functions of $\pi^*E_1$ relative to a suitable trivialization will then look like $$\left(\begin{array}{cc} g_{\alpha\beta} & k_{\alpha\beta} \\ 0 & d_{\alpha\beta}\cdot g_{\alpha\beta}^{-1}\end{array}\right)$$ where $d_{\alpha\beta}$ stands for the determinant of the transition matrix of the bundle $\pi^*E_1$ in this coordinate chart. Since the bundle ~$\det\pi^*E_1\otimes[Q]^{-1}$ has the right transition functions, it is isomorphic to the quotient bundle; hence we have the following exact sequence $$0\to [Q] \to \pi^* E_1 \to \pi^*K^{-1/2} \otimes [Q]^{-1} \to 0$$ since ~$\det E_1=K^{-1/2}_{Z_1}$ as an essential feature of the Serre-Horrocks construction. This extension of line bundles is classified by an element in $$\mathrm{Ext}^1_{\wt{Z}_1}(\pi^*K^{-1/2} \otimes [Q]^{-1},[Q])\approx H^1(\wt{Z}_1,\pi^*K^{1/2} \otimes [Q]^{2})$$ by \cite{atgrn}. If we restrict our exact sequence to ~$\wt{Z}_1-Q=Z_1-C_1$, then, since the bundle $[Q]$ is trivial on the complement of $Q$, this extension class will be the Serre class of the triple $(Z_1,C_1,K_{Z_1}^{-1/2})$. Finally, since $M_1$ has positive scalar curvature, this class is nonzero on every real twistor line in $Z_1-C_1$. Thus non-triviality of the class forces non-triviality over the real twistor lines; in other words, $E$ has a non-trivial splitting type over the real twistor lines of $\wt{Z}_1$. So we have shown that the Serre-Horrocks vector bundle $E$ determined by $\lambda ({\mathcal P})$ splits as ${\mathcal O}(1)\oplus {\mathcal O}(1)$ on all the $\sigma_0$-invariant rational curves in $Z_0$ which are limits of real twistor lines in ${\mathcal Z}_{\mathfrak t}$ as ${\mathfrak t}\to 0$. It therefore has the same splitting type on all the real twistor lines of ${\mathcal Z}_{\mathfrak t}$ for ${\mathfrak t}$ small.
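The identification of the quotient line bundle can be checked on sample (hypothetical) transition data: an upper-triangular transition matrix with first diagonal entry $g_{\alpha\beta}$ and determinant $d_{\alpha\beta}$ necessarily has $d_{\alpha\beta}g_{\alpha\beta}^{-1}$ as its second diagonal entry, which is why $\det\pi^*E_1\otimes[Q]^{-1}$ has the right transition functions.

```python
from fractions import Fraction

# Hypothetical sample values for the transition function g_ab, the off-diagonal
# entry k_ab, and the determinant d_ab of the transition matrix of pi^* E_1.
g = Fraction(3, 2)
k = Fraction(7, 5)
d = Fraction(4, 3)

# Upper-triangular transition matrix [[g, k], [0, d * g^{-1}]] as in the text.
M = [[g, k], [Fraction(0), d / g]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Its determinant is exactly d_ab, so the quotient's transition function
# must be d_ab * g_ab^{-1}.
assert det == d
```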
Besides, $$h^j(Z_\mathfrak t,\mathcal O(L^*)) \leq h^j(Z_0,\mathcal O(L^*))=0 ~~~\textnormal{for} ~~~j=1,2$$ by the semi-continuity principle and the proof of the Vanishing Theorem (\ref{vanishing}). So, via~ $L^*|_{Z_\mathfrak t}\approx K^{1/2}$, $$H^1(Z_{\mathfrak t},\mathcal O(K^{1/2}))\approx Ker(\Delta+{s\over 6})=0.$$ Since the two conditions are satisfied, the Cohomological Characterization (\ref{nice}) guarantees the positivity of the conformal class. \end{proof} \bigskip \vspace{5mm} \small \begin{flushleft} \textsc{Department of Mathematics, State University of New York, Stony Brook}\\ \textit{E-mail address :} \texttt{\textbf{kalafat@math.sunysb.edu}} \end{flushleft} \vspace{5mm}
\section{Introduction} The polar quotients (also called polar invariants) of isolated hypersurface singularities were introduced by Teissier in the seventies of the last century to study equisingularity problems (\cite{Teissier1975, Teissier1976, Teissier1977}). They are, by definition, the quotients of the contact orders between a hypersurface and the branches of its generic polar curve. It is proved that the set of polar quotients is an analytic invariant, and in the case of reduced plane curve singularities, it is a topological invariant (see \cite{Teissier1977}). The Milnor number, the \L ojasiewicz gradient exponent and other numerical invariants can be computed in terms of the polar quotients. Teissier's polar quotients can be easily adapted to non-isolated hypersurface singularities. However, it seems to be more difficult to obtain similar results in the general case. One of our main results is to show that, in the case of (not necessarily reduced) plane curve singularities, the polar quotients and the \L ojasiewicz gradient exponent are topological invariants. More precisely, we will show in Section~\ref{Section3} that the set of polar quotients of a plane curve singularity can be interpreted in terms of approximations of its Newton--Puiseux roots (Theorem~\ref{Theorem31}). Then, using a recent result due to Parusi\'nski \cite{Parusinski2008}, we obtain the topological invariance of the set of polar quotients (Theorem~\ref{Theorem34}). Let $f\in \mathbb{K}\{z_1, \ldots, z_n\}$ with $\mathbb{K}=\mathbb{C}$ or $\mathbb{R}$ be a hypersurface singularity. Restricting it to a small neighbourhood, we can identify $f$ with an analytic function germ $f \colon (\mathbb{K}^n, 0) \to (\mathbb{K}, 0)$.
It is well-known (see \cite{Lejeune1974, Lojasiewicz1965}) that there exist positive numbers $c, \ell$ and $\epsilon$ such that the following {\em \L ojasiewicz gradient inequality} holds: \begin{eqnarray*} \|\nabla f(z)\| &\ge& c\, |f(z)|^\ell \quad \textrm{ for all } \quad \|z\| < \epsilon. \end{eqnarray*} The smallest such exponent $\ell$ is called the {\em \L ojasiewicz gradient exponent} of $f$ (at the origin) and is denoted by $\mathscr{L}(f).$ The number $\mathscr{L}(f)$ is a rational number belonging to the interval $[0, 1),$ and the above inequality holds with any exponent $\ell \ge \mathscr{L}(f)$ for some positive constants $c, \epsilon.$ It will be shown in Section~\ref{Section4} that, if $n=2$, i.e. $f$ is a plane curve singularity, then the \L ojasiewicz gradient exponent $\mathscr{L}(f)$ is attained along the polar curve of $f$, and so it can be computed in terms of the polar quotients of $f$ (Theorems~\ref{Theorem41}, \ref{Theorem45}, and Corollary~\ref{Corollary43}). In particular, the \L ojasiewicz gradient exponent of complex plane curve singularities is a topological invariant (Corollary~\ref{Corollary44}). As an application, we give effective estimates for the \L ojasiewicz exponents in the gradient and classical inequalities of polynomials. Namely, if $f$ is a (real or complex) polynomial in two variables of degree $d,$ then (Theorem~\ref{Theorem49}) \begin{eqnarray*} \mathscr{L}(f) & \le & 1 - \frac{1}{(d - 1)^2 + 1}. \end{eqnarray*} From this we derive the following effective version of the {\em classical {\L}ojasiewicz inequality}: \begin{eqnarray*} |f(z)| & \ge & c\, \mathrm{dist}(z, f^{-1}(0))^{(d - 1)^2 + 1} \quad \textrm{ for all } \quad \|z\| < \epsilon, \end{eqnarray*} where $\mathrm{dist}(z, f^{-1}(0))$ denotes the distance from $z$ to the set $f^{-1}(0)$ (Theorem~\ref{Theorem410}).
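Two elementary sanity checks (sketches with a hypothetical sample germ, not part of the proofs): for $f(x,y)=x^2+y^2$ the gradient inequality holds with exponent $\ell=1/2$ and constant $c=2$, since $\|\nabla f\|=2\|z\|$ and $|f|=\|z\|^2$; and the classical exponent $(d-1)^2+1$ above is exactly $1/(1-\ell)$ for the gradient exponent bound $\ell = 1-1/((d-1)^2+1)$, consistently with the standard passage from the gradient to the classical inequality.

```python
import math
import random
from fractions import Fraction

# Check the Lojasiewicz gradient inequality for the sample germ f = x^2 + y^2:
# |grad f| = 2*sqrt(x^2 + y^2) and |f| = x^2 + y^2, so |grad f| = 2*|f|^(1/2).
random.seed(0)
for _ in range(200):
    x, y = random.uniform(-1e-3, 1e-3), random.uniform(-1e-3, 1e-3)
    grad_norm = math.hypot(2 * x, 2 * y)
    assert grad_norm >= 2 * abs(x * x + y * y) ** 0.5 - 1e-15

# With ell = 1 - 1/((d-1)^2 + 1), the classical exponent 1/(1 - ell) recovers
# (d-1)^2 + 1, matching the two displayed bounds.
for deg in range(2, 12):
    ell = 1 - Fraction(1, (deg - 1) ** 2 + 1)
    assert Fraction(1) / (1 - ell) == (deg - 1) ** 2 + 1
```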
We refer the reader to the papers \cite{Acunto2005, Gwozdziewicz1999, Johnson2011, Kollar1999, Pham2012} for recent results concerning the estimation of the {\L}ojasiewicz exponents for (real) polynomials in higher dimensions. Our proofs are based on the notion of Newton polygon relative to an arc, which will be recalled in Section~\ref{Section2}. \section{The Newton polygon relative to an arc} \label{Section2} The technique of Newton polygons plays an important role in this paper. It is well-known that Newton transformations which arise in a natural way when applying the Newton algorithm provide a useful tool for calculating invariants of singularities. For a complete treatment we refer to \cite{ Brieskorn1986, Casas-Alvero2000, Walker1950, Wall2004}. In this section we recall the notion of Newton polygon relative to an arc due to Kuo and Parusi\'nski \cite{Kuo2000} (see also, \cite{HD08} and \cite{HD10}). Let $f \colon (\mathbb{K}^2, 0) \to (\mathbb{K}, 0)$ denote an analytic function germ with Taylor expansion: $$f(x, y) = f_m(x, y) + f_{m + 1}(x, y) + \cdots,$$ where $f_k$ is a homogeneous polynomial of degree $k,$ and $f_m \not \equiv 0.$ We will assume that $f$ is {\em mini-regular in $x$ of order $m$} in the sense that $f_m(1, 0) \ne 0.$ (This can be achieved by a linear transformation $x' = x, y' = y + cx,$ $c$ a generic constant). Let $\phi$ be an analytic arc in $\mathbb{K}^2$, which is not tangent to the $x$-axis. Then it can be parametrized by $$x=c_1t^{n_1} + c_2t^{n_2}+ \cdots \in \mathbb{K}\{t\} \text{ and } y=t^N$$ and therefore can be identified with a {\em Puiseux series} \begin{eqnarray*} x = \phi(y) = c_1y^{n_1/N} + c_2y^{n_2/N}+ \cdots \in \mathbb{K}\{y^{1/N}\} \end{eqnarray*} with $N \le n_1 < n_2 < \cdots $ being positive integers. 
Let us apply the change of variables $X := x - \phi(y)$ and $Y := y$ to $f(x, y),$ yielding $$F(X, Y) := f(X + \phi(Y), Y) = \sum c_{ij}X^iY^{j/N}.$$ For each $c_{ij} \ne 0,$ let us plot a dot at $(i, j/N),$ called a {\em Newton dot.} The set of Newton dots is called the {\em Newton diagram.} These dots generate a convex hull, whose boundary is called the {\em Newton polygon of $f$ relative to $\phi,$} to be denoted by $\mathbb{P}(f, \phi).$ Note that this is the Newton polygon of $F$ in the usual sense. If $\phi$ is a {\em Newton--Puiseux root} of $f = 0$ (i.e. $f(\phi(y), y)=0$), then there are no Newton dots on $X = 0$, and vice versa. Assume that $\phi$ is not a Newton--Puiseux root of $f = 0.$ Then the exponents of the series $f(\phi(y), y) = F(0, Y)$ correspond to the Newton dots on the line $X = 0.$ In particular, $\mathrm{ord} f(\phi(y), y) = h_0,$ where $(0, h_0)$ is the lowest Newton dot on $X = 0$. The Newton edges $E_s$ and their associated Newton angles $\theta_s$ are defined in an obvious way as illustrated in the following example. \begin{example}{\rm Take $f(x, y) := {x}^{3}-{y}^{4}+{y}^{5}$ and $\phi : x = y^{4/3}.$ We have $$F(X, Y) := f(X + \phi(Y), Y) = {X}^{3}+3\,{X}^{2}{Y}^{4/3}+3\,X{Y}^{8/3}+{Y}^{5}.$$ By definition, the Newton polygon of $f$ relative to $\phi$ has two compact edges $E_1, E_2$ with $\tan \theta_1 = 4/3, \tan \theta_2 = 7/3$ (see Figure~\ref{Figure1}).
\unitlength = .75cm \begin{figure} \begin{picture}(7, 8)(-2, -1) \put (0, 0){\vector(0, 1){6}} \put (0, 0){\vector(1, 0){6}} \multiput(0, 1) (.5, 0){12}{\line(1, 0){.1}} \multiput(0, 2) (.5, 0){12}{\line(1, 0){.1}} \multiput(0, 3) (.5, 0){12}{\line(1, 0){.1}} \multiput(0, 4) (.5, 0){12}{\line(1, 0){.1}} \multiput(0, 5) (.5, 0){12}{\line(1, 0){.1}} \multiput(1, 0) (0, .5){12}{\line(0, 1){.1}} \multiput(2, 0) (0, .5){12}{\line(0, 1){.1}} \multiput(3, 0) (0, .5){12}{\line(0, 1){.1}} \multiput(4, 0) (0, .5){12}{\line(0, 1){.1}} \multiput(5, 0) (0, .5){12}{\line(0, 1){.1}} \multiput(1, 2.67) (-.45, 0){2}{\line(-1, 0){.2}} \put(3, 0){\circle*{0.2}} \put(2, 1.3){\circle*{0.2}} \put(1, 2.67){\circle*{0.2}} \put(0, 5){\circle*{0.2}} \put(1.6, 2.2){$E_1$} \put(.5, 4.2){$E_2$} \put(2.2, .15){$\theta_1$} \put(0.3, 2.85){$\theta_2$} \thicklines \put(3, 0){\line(-3, 4){2}} \put(1, 2.67){\line(-2, 5){.91}} \put(-.75, 4.85){$\ 5$} \put(-.75, 2.5){$\frac{8}{3}$} \put(-.75, 1.15){$\frac{4}{3}$} \put(.85, -1){$1$} \put(1.85, -1){$2$} \put(2.85, -1){$3$} \put(1, -.15){\line(0, 1){.3}} \put(2, -.15){\line(0, 1){.3}} \put(-.15, 2.67){\line(1, 0){.3}} \put(-.15, 1.3){\line(1, 0){.3}} \end{picture} \caption{ \ } \label{Figure1} \end{figure} }\end{example} Take any edge $E_s.$ The associated polynomial $\mathcal{E}_s(z)$ is defined to be $\mathcal{E}_s(z) := \mathcal{E}_s(z, 1),$ where $$\mathcal{E}_s(X, Y) := \sum_{(i, j/N) \in E_s} c_{ij} X^i Y^{j/N}.$$ The {\em highest Newton edge}, denoted by $E_H,$ is the compact edge of the polygon $\mathbb{P}(f, \phi)$ with a vertex being the lowest Newton dot on $X = 0.$ For instance, in the above example, $E_2$ is the highest Newton edge. Next, we recall the notion of {\em sliding} (see \cite{Kuo2000}). 
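The expansion in the example above can be verified mechanically. The sketch below substitutes $y = t^3$ to clear the fractional exponents, so $\phi(y) = y^{4/3}$ becomes $t^4$ and a Newton dot $(i, j/3)$ appears as the monomial $X^i t^j$:

```python
import sympy as sp

X, t = sp.symbols('X t')

# f(x, y) = x^3 - y^4 + y^5 and phi : x = y^(4/3), written with y = t^3
f = lambda x, y: x**3 - y**4 + y**5
F = sp.expand(f(X + t**4, t**3))
# F = X^3 + 3 X^2 t^4 + 3 X t^8 + t^15, matching the example

# Newton dots (i, j/3), read off from the monomials X^i t^j of F
dots = sorted((i, sp.Rational(j, 3)) for i, j in sp.Poly(F, X, t).monoms())
```

From the four dots $(3,0)$, $(2,4/3)$, $(1,8/3)$, $(0,5)$ one recovers the slopes of the two compact edges: $\tan\theta_1 = (4/3 - 0)/(3 - 2) = 4/3$ and $\tan\theta_2 = (5 - 8/3)/(1 - 0) = 7/3$.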
Suppose that $\phi$ is not a root of $f = 0.$ Consider the Newton polygon $\mathbb{P}(f, \phi).$ Take any nonzero root $c$ of $\mathcal{E}_H(z) = 0,$ the polynomial equation associated to the highest Newton edge $E_H.$ We call $$\phi_1(y) : x = \phi(y) + cy^{\tan \theta_H}$$ a {\em sliding} of $\phi$ along $f,$ where $\theta_H$ is the angle associated to $E_H.$ A recursive sliding $\phi \to \phi_1 \to \phi_2 \to \cdots$ produces a limit, $\phi_\infty$, which is a root of $f = 0$. The limit $\phi_\infty$ will be called a {\em final result of sliding $\phi$ along $f$}. Note that $\phi_\infty$ has the form $$\phi_\infty\colon x = \phi(y) + c y^{\tan \theta_H}+\text{higher order terms},$$ due to the following lemma. \begin{lemma} \label{Lemma22} Let $\phi$ be a Puiseux series which is not a root of $f = 0$ and let $\theta_H$ and $\mathcal{E}_H$ be the angle and polynomial associated to the highest Newton edge $E_H.$ Consider a series $$\psi : x = \phi(y) + c y^{\rho} + \textrm{ higher order terms,}$$ where $c \in \mathbb{K}$ and $\rho \in \mathbb{Q}, \rho > 0.$ Then the following statements hold: \begin{itemize} \item[(i)] If either $\tan \theta_{H}< \rho$ or $\tan \theta_{H}= \rho$ and $\mathcal{E}_H(c) \ne 0$ then $\mathbb{P}(f, \phi) = \mathbb{P}(f, \psi),$ and therefore $\mathrm{ord} f(\phi(y), y) = \mathrm{ord} f(\psi(y), y).$ \item[(ii)] If $\tan \theta_{H}= \rho$ and $\mathcal{E}_H(c) = 0$ then $\mathrm{ord} f(\phi(y), y) < \mathrm{ord} f(\psi(y), y).$ \end{itemize} \end{lemma} \begin{proof} This is well-known as the Newton--Puiseux algorithm for finding a Newton--Puiseux root of $f = 0$ (cf. \cite{Brieskorn1986, Casas-Alvero2000, Walker1950}). For a detailed proof, we refer to \cite{HD08}. In fact, the special case where $\psi : x = \phi(y) + c y^{\tan \theta_H}$ was proved in \cite[Lemma 2.1]{HD08}. The lemma is then deduced by applying the special case (possibly infinitely) many times.
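One sliding step can be worked out on the example from this section ($f = x^3 - y^4 + y^5$, $\phi : x = y^{4/3}$), again writing $y = t^3$ to keep exponents integral. The highest edge joins $(0, 5)$ and $(1, 8/3)$, so $\mathcal{E}_H(z) = 3z + 1$, $\tan\theta_H = 7/3$, and the root $c = -1/3$ gives the sliding $\phi_1 : x = y^{4/3} - \frac{1}{3}y^{7/3}$. As case (ii) of the lemma predicts, the order of $f$ along the arc strictly increases; a sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')

f = lambda x, y: x**3 - y**4 + y**5

def ord_y(expr):
    # order in y of a series written in t = y^(1/3)
    return sp.Rational(min(m[0] for m in sp.Poly(sp.expand(expr), t).monoms()), 3)

phi  = t**4              # phi(y)   = y^(4/3)
phi1 = t**4 - t**7 / 3   # phi_1(y) = y^(4/3) - y^(7/3)/3, the sliding

h_before = ord_y(f(phi, t**3))   # order of f along phi   (= 5)
h_after  = ord_y(f(phi1, t**3))  # order of f along phi_1 (strictly larger)
```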
\end{proof} \section{Polar quotients} \label{Section3} Let $f \colon (\mathbb{C}^2, 0) \to (\mathbb{C}, 0)$ be an analytic function germ which is mini-regular in $x.$ Following Teissier \cite{Teissier1977}, we define the set of {\em polar quotients} $$\mathcal{Q}(f) := \{\mathrm{ord} f(\gamma(y), y) \ | \ \gamma \in \Gamma(f)\},$$ where $\Gamma(f)$ denotes the set of Newton--Puiseux roots of $\frac{\partial f}{\partial x} = 0$ which are not roots of $f = 0$ (the polar branches; see Section~\ref{Section4}). In this section we will show that the set of polar quotients is a topological invariant. We first give a formula interpreting polar quotients in terms of approximations of Newton--Puiseux roots of $f$. Let $\gamma(y) := \sum_{i} a_i y^{\alpha_i}$ be a Puiseux series. For each positive real number $\rho$, the series $\sum_{\alpha_i < \rho}a_{i}y^{\alpha_i} + gy^\rho,$ where $g$ is a generic constant, is called the {\em $\rho$-approximation} of $\gamma(y)$. For two distinct Puiseux series $\gamma_1(y)$ and $\gamma_2(y)$, the {\em approximation of $\gamma_1(y)$ and $\gamma_2(y)$} is defined to be the $\rho$-approximation series of $\gamma_1(y)$ (and hence of $\gamma_2(y)$), where $\rho := \mathrm{ord}\ (\gamma_1(y) - \gamma_2(y))$ is the {\em contact order} of $\gamma_1(y)$ and $\gamma_2(y)$. \begin{theorem}\label{Theorem31} Let $f \colon (\mathbb{C}^2, 0) \to (\mathbb{C}, 0)$ be an analytic function germ which is mini-regular in $x$ of order $m$ and let $\xi_1, \ldots, \xi_r$ $(r \ge 2)$ be its distinct Newton--Puiseux roots. Then $$\mathcal{Q}(f) = \left\{\mathrm{ord}\, f(\xi_{i,j}(y),y) \mid 1 \le i < j \le r \right\},$$ where $\xi_{i,j}$ denotes the approximation of $\xi_{i}$ and $\xi_{j}.$ \end{theorem} \begin{proof} Take any $\gamma \in \Gamma(f)$ and consider the Newton polygon $\mathbb{P}(f, \gamma)$ of $f$ relative to $\gamma.$ Note that $(m, 0)$ is a vertex of the Newton polygon and there is a dot on the line $X = 0$ (as $\gamma$ is not a root of $f = 0$) but there are no dots on the line $X = 1$ (as $\gamma$ is a root of $\frac{\partial f}{\partial x} = 0$). Let $E_H$ and $\mathcal{E}_H$ be the highest edge and the corresponding associated polynomial.
We have $$\deg \mathcal{E}_H \geq 2, \quad \mathcal{E}_H(0) \ne 0, \quad \textrm{ and } \quad \frac{d}{dz } \mathcal{E}_H(0) = 0$$ (these properties follow from the facts that $E_H$ has a vertex on the line $X = 0$ and that there are no Newton dots on the line $X = 1$). This implies that the equation $\mathcal{E}_H(z) = 0$ has at least two distinct non-zero roots, say $c_1, c_2$: all roots are nonzero since $\mathcal{E}_H(0) \ne 0,$ and a polynomial $a(z - c)^d$ with a single root $c \ne 0$ would have $\frac{d}{dz}\mathcal{E}_H(0) = ad(-c)^{d-1} \ne 0.$ Let $\gamma_{i, \infty},$ $i = 1, 2,$ be final results of sliding of the arcs $y \mapsto \gamma(y) + c_i y^{\tan \theta_H}$ along $f,$ where $\theta_H$ is the angle corresponding to the highest edge $E_H.$ We have $\mathrm{ord} (\gamma_{1, \infty}(y) - \gamma_{2, \infty}(y)) = \tan \theta_H$ and $$f(\gamma_{i, \infty}(y), y) \equiv 0 \quad \textrm{ for } \quad i = 1, 2.$$ Let $\gamma_{1, 2}$ be the approximation of $\gamma_{1, \infty}$ and $\gamma_{2, \infty}.$ It follows from Lemma \ref{Lemma22} that $\mathrm{ord} f(\gamma(y), y) = \mathrm{ord} f(\gamma_{1, 2}(y), y),$ and hence $$\mathcal{Q}(f) \subset \left\{\mathrm{ord} f(\xi_{i,j}(y),y) \mid 1 \le i < j \le r \right\}.$$ To show the reverse inclusion, we take any pair of distinct roots $\xi_1, \xi_2$ of $f$ and let $\xi_{1,2}$ be the approximation of $\xi_{1}$ and $\xi_{2}$. Then we may write $$\xi_{1,2}(y)=\xi_{1}(y) +g y^{\rho}+\text{higher order terms},$$ where $g$ is generic and $\rho$ denotes the contact order of $\xi_{1}$ and $\xi_{2}$. Write $$f(X + \xi_1(Y), Y) = \sum c_{ij} X^i Y^{j/N}.$$ Let $\Delta$ be the (nonempty) set of Newton dots in the Newton polygon $\mathbb{P}(f, \xi_1)$ of $f$ relative to $\xi_1$ at which the linear function $(i, j/N) \mapsto \rho i + j/N$ takes its minimal value, say $h_0.$ We denote by $(i_1,j_1/N)$ the lowest point of $\Delta$, i.e.
the point in $\Delta$ whose first coordinate $i_1$ is maximal. Since $\xi_1$ is a root of $f = 0,$ there are no dots on the line $X = 0,$ and so $i_1 \ge 1.$ Let us denote $\phi(y):=\xi_{1}(y) +g y^{\rho}$ and $$F(X, Y) := f(X + \phi(Y), Y) =\sum c_{ij} (X + g Y^\rho)^i Y^{j/N}= \sum a_{ij} X^i Y^{j/N}.$$ Note that all the Newton dots of the Newton polygon $\mathbb{P}(f, \phi)$ of $f$ relative to $\phi$ must have the form $(k, \rho(i - k) + j/N)$ with $c_{i j}\neq 0$ and $k = 0, 1, \ldots, i$, and $(i_1,j_1/N)$ is a Newton dot of $\mathbb{P}(f, \phi)$ because $a_{i_1 j_1} = c_{i_1 j_1}\neq 0.$ Since $g$ is generic, the point $(0,h_0)$ is also a Newton dot of $\mathbb{P}(f, \phi)$ (in fact, we take $g$ satisfying $\sum_{(i, j/N)\in \Delta} c_{ij}g^i\neq 0$). Furthermore, all the Newton dots of $\mathbb{P}(f, \phi)$ lie on or above the line containing the two dots $(0,h_0)$ and $(i_1,j_1/N)$. This shows that the edge $E_H$ connecting $(0,h_0)$ and $(i_1,j_1/N)$ is the highest Newton edge of $\mathbb{P}(f, \phi).$ Let $\theta_H$ and $\mathcal{E}_H(z)$ be the angle and polynomial associated with the highest Newton edge $E_H.$ We have (see Figure~\ref{Figure2}) \begin{eqnarray*} \tan \theta_H &=& \frac{h_0 - j_1/N}{i_1} \ = \ \frac{\rho i_1}{i_1} \ = \ \rho, \\ \mathcal{E}_H(z) &=& \sum_{(i, j/N)\in E_H} a_{i j}z^i \ = \ \sum_{(i, j/N) \in \Delta} c_{i j}(z + g)^i.
\end{eqnarray*} By the definition of $\phi$, we may write \begin{eqnarray*} \xi_1(y) &=& \phi(y) + a_1 y^\rho + \textrm{ higher order terms,} \\ \xi_2(y) &=& \phi(y) + a_2 y^\rho + \textrm{ higher order terms} \end{eqnarray*} with $a_1\neq a_2.$ It follows from Lemma \ref{Lemma22} that $a_1$ and $a_2$ are roots of the polynomial $\mathcal{E}_H(z).$ In particular, we have $\deg \mathcal{E}_H(z) \geq 2.$ \unitlength = .75cm \begin{figure} \begin{picture}(7, 15)(-2, -1.5) \put (0, 0){\vector(0, 1){13}} \put (0, 0){\vector(1, 0){7}} \put (2.35, 5){\vector(3, 1){2}} \multiput(4.32, -1) (-.25, .75){5}{\line(-1, 3){.16}} \multiput(2, 6) (-.25, .75){9}{\line(-1, 3){.16}} \multiput(3, 3) (-.45, 0){7}{\line(-1, 0){.2}} \thicklines \put(5, 0){\line(-1, 1){1}} \put(4, 1){\line(-1, 2){1}} \put(3, 3){\line(-1, 3){1}} \put(2, 6){\line(-1, 5){1}} \put(5, 0){\circle*{0.2}} \put(4, 1){\circle*{0.2}} \put(3, 3){\circle*{0.2}} \put(2, 6){\circle*{0.2}} \put(0, 12){\circle*{0.2}} \put(1, 11){\circle*{0.2}} \put(-1.75, 0){$(0, 0)$} \put(5, -1){$(m, 0)$} \put(-1.75, 12){$(0, h_0)$} \put(3.5, 2.85){$(i_1, \frac{j_1}{N})$} \put(2.75, 4.5){$\Delta$} \put(2.75, 7){$\mathbb{P}(f, \xi_1)$} \put(2., 3.25){$\theta_H$} \put(4.5, 5.5){$(\rho, 1)$} \end{picture} \caption{ \ } \label{Figure2} \end{figure} Consider the Newton diagram of $\frac{\partial f}{\partial x}$ relative to $\phi$ which can be easily found from $\mathbb{P}(f, \phi)$. Namely, move every Newton dot $(i, j/N)$ of $\mathbb{P}(f, \phi)$ to $(i - 1, j/N),$ if $i \ge 1,$ and delete all Newton dots $(0, j/N).$ This is simply because $\frac{\partial}{\partial X}(X^i Y^{j/N}) = i X^{i - 1} Y^{j/N}$. 
Since $g$ is generic, $\frac{d}{dz}\mathcal{E}_H(0) \ne 0$ and so, in the Newton diagram of $f$ relative to $\phi,$ there exists a Newton dot on the line $X = 1.$ Therefore the highest Newton edge of the Newton polygon $\mathbb{P}(\frac{\partial f}{\partial x}, \phi)$ has the vertices at $(0, h_0 - \tan \theta_H)$ and $(i_1- 1, j_1/N).$ The associated polynomial equation is $\frac{d}{dz} \mathcal{E}_H(z) = 0$. Since $\mathcal{E}_H(z)$ has two distinct roots $a_1,a_2$, it follows by a simple calculation (see also the argument in the proof of Theorem \ref{Theorem41}) that there exists a nonzero number $c \in \mathbb{C}$ such that $$\mathcal{E}_H(c) \ne 0 \quad \textrm{ and } \quad \frac{d}{dz}\mathcal{E}_H (c) = 0.$$ Let $\gamma_\infty$ be a final result of sliding of the arc $\phi$ along $\frac{\partial f}{ \partial x }$. It follows from Lemma \ref{Lemma22} that $$\mathrm{ord} f(\gamma_\infty(y), y) = \mathrm{ord}f(\phi(y), y).$$ Furthermore, since $\mathrm{ord}\left(\xi_{1,2}(y)-\phi(y)\right)>\rho = \tan \theta_H$, applying Lemma \ref{Lemma22} again we get $$\mathrm{ord} f(\xi_{1,2}(y), y) = \mathrm{ord}f(\phi(y), y).$$ Hence the reverse inclusion holds. The theorem is proved. \end{proof} By using the same argument as in the proof of Theorem~\ref{Theorem31}, we obtain \begin{corollary} Let $f \colon (\mathbb{C}^2, 0) \to (\mathbb{C}, 0)$ be an analytic function germ and let $\xi_1,\ldots,\xi_r$ $(r \ge 2)$ be its distinct Newton--Puiseux roots. Then for each pair of distinct roots $\xi_i,\xi_j$ there exists a curve $\gamma \in \Gamma(f)$ such that $$\mathrm{ord} (\gamma-\xi_k)=\mathrm{ord} (\xi_{i,j}-\xi_k) \quad \textrm{ for } \quad k = 1, \ldots, r,$$ where $\xi_{i,j}$ denotes the approximation of $\xi_{i}$ and $\xi_{j}.$ \end{corollary} The above corollary is a sharper version of \cite[Lemma~3.3]{Kuo1977}.
Indeed, by letting $k = i$ and $k = j,$ we get $$\mathrm{ord} (\gamma-\xi_i)=\mathrm{ord} (\gamma-\xi_j)=\mathrm{ord} (\xi_{i}-\xi_j).$$ Recall that two continuous function germs $f, g \colon (\mathbb{K}^2, 0) \to (\mathbb{K}, 0)$ are said to be {\em topologically right equivalent} if there exists a homeomorphism germ $h \colon (\mathbb{K}^2, 0) \to (\mathbb{K}^2, 0)$ such that $f = g \circ h.$ In \cite{Kuo1977} Kuo and Lu introduced a tree model of an isolated singularity $f \colon (\mathbb{C}^2, 0) \to (\mathbb{C}, 0).$ This model allows one to visualise the Puiseux pairs of irreducible components of $f = 0$ and the contact orders between them. Kuo and Lu's model can be easily adapted to the nonisolated case by adding the multiplicities of components. We need the following result due to Parusi\'nski~\cite[Theorem 0.1 and Remark 0.4]{Parusinski2008}, where the last statement follows directly from the proof therein. \begin{theorem}\label{Theorem33} Let $f , g \colon (\mathbb{C}^2, 0) \to (\mathbb{C}, 0)$ be (not necessarily reduced) analytic function germs. Then the following statements are equivalent: \begin{itemize} \item[(i)] $f$ and $g$ are topologically right equivalent. \item[(ii)] There is a one-to-one correspondence between the irreducible components of the zero sets $f^{-1}(0)$ and $g^{-1}(0)$ that preserves the multiplicities of these components, their Puiseux pairs, and the intersection multiplicities of any pairs of distinct components. \item[(iii)] The tree models of $f$ and of $g$ coincide. \item[(iv)] There is a one-to-one correspondence between the distinct Newton--Puiseux roots of $f = 0$ and $g = 0$ that preserves the multiplicities of these roots, and the contact orders of any pairs of distinct roots. \end{itemize} \end{theorem} \begin{theorem}\label{Theorem34} The set of polar quotients of (not necessarily reduced) complex analytic function germs in two variables is a topological invariant.
\end{theorem} \begin{proof} Let $f \colon (\mathbb{C}^2, 0) \to (\mathbb{C}, 0)$ be an analytic function germ which is mini-regular in $x$ and let $\xi_1,\ldots,\xi_r$ be its distinct Newton--Puiseux roots with multiplicities $m_1,\ldots,m_r$: $$f(x, y) = u(x, y) \prod_{k = 1}^r (x - \xi_k(y))^{m_k},$$ where $u$ is a unit in $\mathbb{C}\{x,y\}.$ If $r = 1$ then the set of polar quotients of $f$ is empty and there is nothing to prove. So assume that $r \ge 2.$ Denote by $\xi_{i,j}$ the approximation of two distinct roots $\xi_i$ and $\xi_j.$ We have $$\mathrm{ord}\ f(\xi_{i,j}(y),y)=\sum_{k=1}^{r}m_k\mathrm{ord} (\xi_{i,j}-\xi_k)=\sum_{k=1}^{r}m_k\min\{\mathrm{ord} (\xi_{i}-\xi_k),\mathrm{ord} (\xi_{j}-\xi_k)\}.$$ The theorem follows immediately from Theorems~\ref{Theorem31}~and~\ref{Theorem33}. \end{proof} \section{\L ojasiewicz exponents}\label{Section4} Let $f \colon (\mathbb{K}^n, 0) \to (\mathbb{K}, 0)$ be an analytic function germ. Take any analytic arc $\phi$ parametrized by $$\phi(t) = \left(z_1(t), \ldots, z_n(t)\right),$$ where each $z_i(t)$ is a convergent power series, for $|t|$ small. If $f \circ \phi \not \equiv 0,$ then we can define a positive rational number $\ell(\phi)$ by \begin{eqnarray*} \| \nabla f (\phi(t)) \| & \simeq & |f(\phi(t))|^{\ell(\phi)}, \end{eqnarray*} where $A \simeq B$ means that $A/B$ lies between two positive constants. By the Curve Selection Lemma (see \cite{Milnor1968}), it is not hard to show that the {\L}ojasiewicz gradient exponent of $f$ is given by \begin{eqnarray}\label{Eqn1} \mathscr{L}(f) &=& \sup_\phi \ell(\phi), \end{eqnarray} where the supremum is taken over all analytic curves passing through the origin, which are not contained in the zero locus of $f.$ As a consequence, considering a generic linear curve, one can see that \begin{eqnarray} \label{Eqn3} \mathscr{L}(f) &\ge& \frac{m - 1}{m}, \end{eqnarray} where $m := \mathrm{ord}\, f$ stands for the multiplicity of $f$ at the origin. 
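The lower bound \eqref{Eqn3} can be seen concretely on a sample germ (our choice): for $f = x^3 + y^3$, of multiplicity $m = 3$, a generic line $t \mapsto (at, t)$ gives $|f(\phi(t))| \simeq |t|^3$ and $\|\nabla f(\phi(t))\| \simeq |t|^2$, hence $\ell(\phi) = 2/3 = (m-1)/m$. A sympy sketch:

```python
import sympy as sp

x, y, a, t = sp.symbols('x y a t')

f = x**3 + y**3   # sample germ of multiplicity m = 3 (our choice)

def t_order(expr):
    # lowest power of t in a polynomial expression (a is kept symbolic,
    # playing the role of a generic parameter, so a^3 + 1 != 0 etc.)
    return min(m[0] for m in sp.Poly(sp.expand(expr), t).monoms())

sub = {x: a * t, y: t}                       # generic line phi(t) = (a t, t)
h  = t_order(f.subs(sub))                    # order of f along phi
h1 = min(t_order(sp.diff(f, x).subs(sub)),
         t_order(sp.diff(f, y).subs(sub)))   # order of ||grad f|| along phi
ell = sp.Rational(h1, h)                     # = (m - 1)/m
```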
Moreover, from \eqref{Eqn1} and the inequality $\mathscr{L}(f) < 1,$ it is not hard to see that for any unit $u$ in $\mathbb{K}\{z_1,\ldots, z_n\},$ \begin{eqnarray} \label{Eqn4} \mathscr{L}(u \cdot f) &=& \mathscr{L}(f). \end{eqnarray} In the next two subsections we provide formulas computing the {\L}ojasiewicz gradient exponent of analytic function germs in two variables, in the complex and in the real case. \subsection{\L ojasiewicz gradient exponent of complex analytic function germs} \ Let $f \colon (\mathbb{C}^2, 0) \to (\mathbb{C}, 0)$ be an analytic function germ. Assume that $f$ is mini-regular in $x$ of order $m := \mathrm{ord} f$. Recall that the locus defined by $\frac{\partial f}{ \partial x } = 0$ is called the {\em polar curve.} Following \cite{Walker1950}, a Newton--Puiseux root of $\frac{\partial f}{ \partial x } = 0$ is called a {\em branch} of the polar curve or simply a {\em polar branch.} We denote by $\Gamma(f)$ the set of polar branches which are not roots of $f = 0.$ \begin{theorem}\label{Theorem41} With the above notations, the \L ojasiewicz gradient exponent of $f$ is given by $$\mathscr{L}(f) = \begin{cases} \frac{m - 1}{m} & \textrm{ if } \Gamma(f) = \emptyset, \\ \max\left\{ \ell(\gamma)\mid \gamma\in \Gamma(f)\right\} & \textrm{ otherwise.} \end{cases}$$ \end{theorem} \begin{proof} We first consider the case $\Gamma(f) = \emptyset.$ By the Weierstrass preparation theorem (see, for instance, \cite{Brieskorn1986, Greuel2006}), there exist $u \in \mathbb{C}\{x, y\}$ and $a_i \in \mathbb{C}\{y\}$ such that $$g(x, y) := u(x, y) \cdot f(x, y) = x^m + a_1(y) x^{m - 1} + \cdots + a_m(y)$$ with $u(0, 0) \ne 0$ and $a_i(0) = 0.$ Due to the division theorem (see, for instance, \cite{Brieskorn1986, Greuel2006}), there exist $\phi \in \mathbb{C}\{y\},$ with $\phi(0) = 0,$ and a polynomial $h\in \mathbb{C}\{y\}[x]$ of degree at most $m-2$ such that $$m g(x,y)=(x - \phi(y)) \frac{\partial g}{ \partial x}(x, y)+h(x,y),$$ or, equivalently, $$m u(x,y) \cdot f(x,y)=(x -
\phi(y)) \left(\frac{\partial u}{ \partial x}(x, y)f(x,y)+u(x,y)\frac{\partial f}{ \partial x}(x, y)\right)+h(x,y).$$ Since $\Gamma(f) = \emptyset,$ it follows that all the $m - 1$ roots of $\frac{\partial f}{ \partial x} = 0$ are also roots (counted with multiplicity) of $h = 0,$ and hence $h \equiv 0$ because $\deg h\leq m - 2.$ Then the differential equation $$\frac{\frac{\partial g}{ \partial x}(x, y)}{g(x,y)}=\frac{m}{x - \phi(y)}$$ implies that $g$ has the form $c(x - \phi(y))^m$ for some $c \neq 0.$ Consequently, one has \begin{eqnarray*} \|\nabla g(x, y) \| &\ge& \left|\frac{\partial g}{ \partial x}(x, y) \right| \ = \ m\, |c|^{\frac{1}{m}}\, |g(x, y)|^{\frac{m - 1}{m}} \quad \textrm{ for all $(x,y)$ near $(0, 0)$}. \end{eqnarray*} This implies that $\mathscr{L}(g) \leq \frac{m-1}{m},$ and equality holds because of \eqref{Eqn3}. Therefore, from \eqref{Eqn4}, we get $$\mathscr{L}(f) \ = \ \mathscr{L}(g) \ = \ \frac{m - 1}{m}.$$ We now consider the case $\Gamma(f) \neq \emptyset.$ We will prove the inequality $$\mathscr{L}(f)\leq \max\left\{ \ell(\gamma)\mid \gamma\in \Gamma(f)\right\}.$$ By \eqref{Eqn1}, this is equivalent to showing that $$\ell(\phi)\leq \max\left\{ \ell(\gamma)\mid \gamma\in \Gamma(f)\right\}$$ for all analytic curves $\phi$ passing through the origin and not contained in the zero locus of $f$. To this end, we make the following observation. \begin{claim}\label{Claim1} The inequality $$\ell(\gamma) \ \ge \ \frac{m - 1}{m}$$ holds for all $\gamma \in \Gamma(f)$.
\end{claim} \begin{proof} Let $x = \gamma(y)$ be a Newton--Puiseux root of $\frac{\partial f}{ \partial x } = 0$ but not of $f = 0.$ Since $f$ is mini-regular in $x$ of order $m,$ $\frac{\partial f}{\partial x}$ is mini-regular in $x$ of order $m - 1.$ This implies that $\mathrm{ord}\, \gamma(y)\geq 1$ and so $\mathrm{ord}\, f \left(\gamma(y),y\right) \geq m.$ Note that \begin{eqnarray*} \frac{d f (\gamma(y),y )}{ d y } = \frac{\partial f}{ \partial y }(\gamma(y),y) \end{eqnarray*} because $\frac{\partial f}{\partial x}(\gamma(y), y) \equiv 0,$ and hence $\mathrm{ord}\, f \left(\gamma(y),y\right) = \mathrm{ord}\, \frac{\partial f}{ \partial y } \left(\gamma(y),y\right) +1.$ It follows that \begin{eqnarray*} \ell(\gamma ) \ = \ \frac{\mathrm{ord}\, \frac{\partial f}{ \partial y }\left(\gamma(y),y\right)}{\mathrm{ord}\, f \left(\gamma(y),y\right)} \ \ge \ 1 - \frac{1}{m}. \end{eqnarray*} \end{proof} Take any analytic arc $\phi$ which is not a root of $f = 0.$ It is easy to see that if $\phi$ is tangent to the $x$-axis, then $\ell(\phi)\leq \frac{m - 1}{m}$. We can therefore ignore these arcs. Then the arc $\phi$ may be parametrized by a Puiseux series $x = \phi(y)$ with $\mathrm{ord}\, \phi(y) \geq 1.$ Assume that $\frac{\partial f}{ \partial x}(\phi(y), y) \not \equiv 0.$ In the Newton polygon $\mathbb{P}(f, \phi)$ of $f$ relative to $\phi,$ let $(0, h_0)$ and $(1, h_1)$ be the lowest Newton dots on $X = 0$ and $X = 1,$ respectively. Then $\ell(\phi)$ can be computed as follows. \begin{claim}\label{Claim2} We have $$\ell(\phi) = \min \Big \{\frac{h_0 - 1}{h_0}, \frac{h_1}{h_0} \Big \}.$$ \end{claim} \begin{proof} Let \begin{eqnarray*} F(X, Y) &:=& f(X + \phi(Y), Y) \\ &=& \mathrm{unit} \cdot Y^{h_0} + \mathrm{unit} \cdot Y^{h_1}X + \textrm{ terms divisible by } X^2. \end{eqnarray*} By the Chain Rule, \begin{eqnarray*} \frac{\partial F}{ \partial X } &=& \frac{\partial f}{ \partial x } \quad \textrm{ and } \quad \frac{\partial F}{ \partial Y } \ = \ \phi'(Y) \frac{\partial f}{ \partial x } + \frac{\partial f}{ \partial y }.
\end{eqnarray*} Since $\mathrm{ord}\ \phi(Y)\geq 1$, it follows that \begin{eqnarray*} \Big |\frac{\partial F}{ \partial X } \Big| + \Big|\frac{\partial F}{ \partial Y } \Big| &\simeq& \Big| \frac{\partial f}{ \partial x } \Big| + \Big |\frac{\partial f}{ \partial y } \Big|. \end{eqnarray*} Along $X = 0,$ we have \begin{eqnarray*} |F| \ \simeq \ |Y|^{h_0}, \quad \Big |\frac{\partial F}{ \partial X } \Big| \ \simeq \ |Y|^{h_1}, \quad \Big |\frac{\partial F}{ \partial Y } \Big| &\simeq& |Y|^{h_0 - 1}, \end{eqnarray*} whence the result. \end{proof} \begin{claim}\label{Claim3} Let $\gamma$ denote a final result of sliding of $\phi$ along $\frac{\partial f}{ \partial x }.$ Then $$\ell(\phi) \le \ell(\gamma).$$ \end{claim} \begin{proof} In fact, consider the Newton polygon $\mathbb{P}(f, \phi).$ Let $E_H$ and $\theta_H$ be the highest edge and the corresponding angle. Note that $(0, h_0)$ is a vertex of $E_H$ and $(m, 0)$ is a vertex of the polygon. There are two cases to be considered (see Figure~\ref{Figure3}).
\setlength{\unitlength}{0.27cm} \begin{figure} \begin{center} \begin{picture}(20, 25)(0, -5) \label{Fig2} \linethickness{0.05mm} \put (0, 0){\vector(0, 1){19}} \put (0, 0){\vector(1, 0){22}} \multiput(0, 0)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 2)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 4)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 6)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 8)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 10)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 12)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 14)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 16)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 0)(0, 2){9}{\line(1,0){.2}} \multiput(2, 0)(0, 2){9}{\line(1,0){.2}} \multiput(4, 0)(0, 2){9}{\line(1,0){.2}} \multiput(6, 0)(0, 2){9}{\line(1,0){.2}} \multiput(8, 0)(0, 2){9}{\line(1,0){.2}} \multiput(10, 0)(0, 2){9}{\line(1,0){.2}} \multiput(12, 0)(0, 2){9}{\line(1,0){.2}} \multiput(14, 0)(0, 2){9}{\line(1,0){.2}} \multiput(16, 0)(0, 2){9}{\line(1,0){.2}} \multiput(18, 0)(0, 2){9}{\line(1,0){.2}} \thicklines \put(0,16){\line(1,-2){6}} \put(6,4){\line(3, -2){6}} \put(0,16){\circle*{0.5}} \put(6,4){\circle*{0.5}} \put(2,12){\circle*{0.5}} \put(12,0){\circle*{0.5}} \put(-1.4,-2){$0$} \put(-5.,16){$(0, h_0)$} \put(-5.,12){$(1, h_1)$} \put(1.5,-2){$1$} \put(10,-2){$(m, 0)$} \put(0,-5){Case 1: $(1, h_1) \in E_H$} \end{picture}\qquad \qquad \qquad \begin{picture}(20, 25)(0, -5) \label{Fig2} \linethickness{0.05mm} \put (0, 0){\vector(0, 1){19}} \put (0, 0){\vector(1, 0){22}} \multiput(0, 0)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 2)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 4)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 6)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 8)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 10)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 12)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 14)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 16)(2, 0){10}{\line(0, 1){.2}} \multiput(0, 0)(0, 2){9}{\line(1,0){.2}} \multiput(2, 0)(0, 2){9}{\line(1,0){.2}} \multiput(4, 0)(0, 2){9}{\line(1,0){.2}} 
\multiput(6, 0)(0, 2){9}{\line(1,0){.2}} \multiput(8, 0)(0, 2){9}{\line(1,0){.2}} \multiput(10, 0)(0, 2){9}{\line(1,0){.2}} \multiput(12, 0)(0, 2){9}{\line(1,0){.2}} \multiput(14, 0)(0, 2){9}{\line(1,0){.2}} \multiput(16, 0)(0, 2){9}{\line(1,0){.2}} \multiput(18, 0)(0, 2){9}{\line(1,0){.2}} \thicklines \put(0,16){\line(1,-3){4}} \put(4,4){\line(2,-1){4}} \put(8,2){\line(3,-1){6}} \put(0,16){\circle*{0.5}} \put(4,4){\circle*{0.5}} \put(8,2){\circle*{0.5}} \put(2,12){\circle*{0.5}} \put(14,0){\circle*{0.5}} \put(-1.4,-2){$0$} \put(-5,16){$(0, h_0)$} \put(-5,12){$(1, h_1)$} \put(1.5,-2){$1$} \put(12,-2){$(m, 0)$} \put(0,-5){Case 2: $(1, h_1) \not \in E_H$} \end{picture} \end{center} \caption{ \ } \label{Figure3} \end{figure} \subsubsection*{Case 1: $(1, h_1) \in E_H$} We have \begin{eqnarray*} \frac{h_0 - h_1}{1} &=& \tan \theta_H \ \ge \ 1. \end{eqnarray*} Hence, $h_0 - 1 \ge h_1.$ By Claim~\ref{Claim2}, we get \begin{eqnarray*} \ell(\phi) &=& \min \Big \{\frac{h_0 - 1}{h_0}, \frac{h_1}{h_0} \Big \} \ = \ \frac{h_1}{h_0} \ \le \ \frac{m - 1}{m}, \end{eqnarray*} where the last inequality follows from the assumption that $f$ is mini-regular in $x$ of order $m := \mathrm{ord} f.$ \subsubsection*{Case 2: $(1, h_1) \not \in E_H$} In this case, $\theta_H <\theta_{H'}$ , where $\theta_{H'}$ denotes the angle corresponding to the highest edge of the Newton polygon of $\frac{\partial f}{ \partial x }$ relative to $\phi$. 
Since $\gamma$ is a final result of sliding of $\phi$ along $\frac{\partial f}{ \partial x }$, it has the form $$\gamma(y) = \phi(y) + c y^{\tan \theta_{H'}}+\text{higher order terms},$$ for some nonzero constant $c \in \mathbb{C}.$ Applying Lemma~\ref{Lemma22} one has $$\mathrm{ord} f(\gamma(y),y)=\mathrm{ord} f(\phi(y),y)=h_0.$$ It hence follows from Claim~\ref{Claim2} that \begin{eqnarray*} \ell(\phi) &=& \min \Big \{\frac{h_0 - 1}{h_0}, \frac{h_1}{h_0} \Big \} \ \le \ \frac{h_0 - 1}{h_0} \ = \ \frac{\mathrm{ord} f(\gamma(y),y) - 1}{\mathrm{ord} f(\gamma(y), y)} \ = \ \ell(\gamma). \end{eqnarray*} Summing up, in both cases we have \begin{eqnarray*} \ell(\phi) &\le& \max \Big \{\frac{m - 1}{m}, \ell(\gamma) \Big \} \ \le \ \ell(\gamma), \end{eqnarray*} where the second inequality follows from Claim~\ref{Claim1}. \end{proof} Applying the above claims we obtain that $$\mathscr{L}(f) \leq \max\left\{ \ell(\gamma)\mid \gamma\in \Gamma(f)\right\},$$ and hence equality holds according to \eqref{Eqn1}. This completes the proof of Theorem~\ref{Theorem41}. \end{proof} \begin{example}{\rm Take $f(x, y) = 1/6\,{x}^{6}+1/4\,{x}^{4}{y}^{4}-1/5\,{x}^{5}y-1/3\,{x}^{3}{y}^{5} \in \mathbb{C}\{x, y\}.$ The germ $f$ is mini-regular in $x$ of order $m = 6$ and $\frac{\partial f}{ \partial x } = x^2(x - y)(x^2 + y^4).$ By definition, $\Gamma(f)$ consists of the three polar branches $$\gamma_{1} : x = y, \quad \gamma_{2} : x = \sqrt{-1} y^2, \quad \textrm{ and } \quad \gamma_{3} : x = - \sqrt{-1} y^2.$$ A simple calculation shows that $\ell(\gamma_{1} ) = \frac{5}{6}$ and $\ell(\gamma_{2}) = \ell(\gamma_{3}) = \frac{10}{11}.$ By Theorem~\ref{Theorem41}, $$\mathscr{L}(f) = \max \left \{\frac{5}{6}, \frac{10}{11} \right\} = \frac{10}{11}.$$ }\end{example} The following result is a direct consequence of Theorems~\ref{Theorem31}~and~\ref{Theorem41}. It gives an alternative formula for computing the \L ojasiewicz gradient exponent of $f$ in terms of its Newton--Puiseux roots.
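The orders claimed in the example above can be double-checked with sympy. Since each $\gamma_i$ lies in $\Gamma(f)$, the proof of Claim 1 gives $\ell(\gamma_i) = (h - 1)/h$ with $h = \mathrm{ord}\, f(\gamma_i(y), y)$; the conjugate branch $\gamma_3$ gives the same order as $\gamma_2$, so it is omitted:

```python
import sympy as sp

x, y = sp.symbols('x y')

f = x**6/6 + x**4*y**4/4 - x**5*y/5 - x**3*y**5/3

# the polar curve factors as x^2 (x - y) (x^2 + y^4)
fx = sp.diff(f, x)
ok_factor = sp.expand(fx - x**2*(x - y)*(x**2 + y**4)) == 0

def ord_f_along(root):
    # h = ord of f(root(y), y) as a series in y
    e = sp.expand(f.subs(x, root))
    return min(m[0] for m in sp.Poly(e, y).monoms())

h1 = ord_f_along(y)              # gamma_1 : x = y
h2 = ord_f_along(sp.I * y**2)    # gamma_2 : x = sqrt(-1) y^2
ell1 = sp.Rational(h1 - 1, h1)   # = 5/6
ell2 = sp.Rational(h2 - 1, h2)   # = 10/11
```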
\begin{corollary}\label{Corollary43} Let $f \colon (\mathbb{C}^2, 0) \to (\mathbb{C}, 0)$ be an analytic function germ of multiplicity $m := \mathrm{ord}\, f$ and let $\xi_1,\ldots,\xi_r$ be its distinct Newton--Puiseux roots. Then $$\mathscr{L}(f) = \begin{cases} \frac{m-1}{m} & \textrm{ if } \ r = 1, \\ \max\left\{1 - \frac{1}{\mathrm{ord}\, f(\xi_{i,j}(y),y)} \mid 1\leq i<j\leq r\right\} & \textrm{ otherwise,} \end{cases}$$ where $\xi_{i,j}$ denotes the approximation of $\xi_i$ and $\xi_j$. \end{corollary} \begin{corollary}\label{Corollary44} The \L ojasiewicz gradient exponent of complex analytic function germs in two variables is a topological invariant. \end{corollary} \begin{proof} This follows immediately from Corollary~\ref{Corollary43} and Theorems~\ref{Theorem31}~and~\ref{Theorem34}. \end{proof} \subsection{\L ojasiewicz gradient exponent of real analytic function germs} \ For a real analytic function germ $f \colon (\mathbb{R}^2, 0) \to (\mathbb{R}, 0)$ we have the following version of Theorem~\ref{Theorem41}. Let $x = \gamma(y)$ be a Newton--Puiseux root of $\frac{\partial f}{\partial x} = 0$ in the ring $\mathbb{C}\{x, y\}$: \begin{eqnarray*} \gamma(y) = a_1 y^{n_1/N} + a_2y^{n_2/N}+ \cdots + a_{s - 1} y^{n_{s - 1}/N} + c_s y^{n_s/N} + \cdots, \end{eqnarray*} where $a_i \in \mathbb{R}$ and $c_s$ is the first non-real coefficient, if there is one.
Let us replace $c_s$ by a generic real number $g,$ and call \begin{eqnarray*} \gamma_{\mathbb{R}}(y) := a_1 y^{n_1/N} + a_2y^{n_2/N}+ \cdots + a_{s - 1} y^{n_{s - 1}/N} + g y^{n_s/N} \end{eqnarray*} a {\em real polar branch.} In the case $s = +\infty,$ i.e. when all the coefficients of $\gamma$ are real, let $\gamma_{\mathbb{R}} := \gamma.$ We denote by $\Gamma(f)$ the set of real polar branches of $f$ which are not Newton--Puiseux roots of $f = 0.$ Let $$\mathscr{L}_{+}(f) := \max \left\{\frac{m-1}{m},\ell(\gamma_{\mathbb{R}}) \mid \gamma_{\mathbb{R}} \in \Gamma(f) \right\}.$$ We also put $\mathscr{L}_{-}(f) := \mathscr{L}_{+}(\bar{f}),$ where $\bar{f}$ denotes the germ defined by $\bar{f}(x, y) := f(x, -y).$ \begin{theorem}\label{Theorem45} With the above notations, the \L ojasiewicz gradient exponent of $f$ is given by $$\mathscr{L}(f) = \max \left\{\mathscr{L}_{+}(f), \mathscr{L}_{-}(f) \right\}.$$ \end{theorem} \begin{proof} Let $\phi$ be a real curve parametrized by either $(x = x(t), y = t)$ or $(x = x(t), y = -t),$ where $x(t)$ is an element in $\mathbb{R}\{t^{1/N}\}$ for some positive integer $N.$ Also assume that $\phi$ is not a root of $f = 0.$ We first consider the case where $\phi$ has the form $(x(t), t).$ Let us denote by $\gamma$ a final result of sliding of $\phi$ along $\frac{\partial f}{\partial x}$ and by $\gamma_{\mathbb{R}}$ the associated real polar branch. \begin{claim}\label{Claim4} We have $$\ell(\phi) \le \max \left \{\frac{m - 1}{m},\ell(\gamma_{\mathbb{R}}) \right\},$$ and therefore $\ell(\phi) \le \mathscr{L}_{+}(f)$. \end{claim} \begin{proof} If $\phi$ is tangent to the $x$-axis, then $\ell(\phi)\leq \frac{m - 1}{m},$ and there is nothing to prove.
So assume that the arc $\phi$ is parametrized by a Puiseux series $x = \phi(y)$ with $\mathrm{ord}\, \phi(y) \geq 1.$ Clearly, we may assume that $\phi$ is not a root of $\frac{\partial f}{ \partial x} = 0.$ In the Newton polygon $\mathbb{P}(f, \phi)$ of $f$ relative to $\phi,$ let $(0, h_0)$ and $(1, h_1)$ be the lowest Newton dots on $X = 0$ and $X = 1,$ respectively. By the same argument as in Claim~\ref{Claim2}, the quantity $\ell(\phi)$ can be read off from the Newton polygon $\mathbb{P}(f,\phi)$ as $$\ell(\phi) = \min \Big \{\frac{h_0 - 1}{h_0}, \frac{h_1}{h_0} \Big \}.$$ We can see moreover that, if $(1,h_1)\in E_H$, then $\ell(\phi)\leq \frac{m - 1}{m}$. It hence suffices to prove the claim for the case that $(1,h_1)\not\in E_H$. In this case, $\theta_H <\theta_{H'},$ where $\theta_{H'}$ denotes the angle corresponding to the highest Newton edge of the Newton polygon of $\frac{\partial f}{ \partial x }$ relative to $\phi$. Since $\gamma$ is a final result of sliding of $\phi$ along $\frac{\partial f}{ \partial x }$, it has the form $$\gamma(y) = \phi(y) + c y^{\tan \theta_{H'}}+\text{higher order terms}$$ for some non-zero number $c \in\mathbb{C}.$ By definition, the series $\gamma_{\mathbb{R}}$ also has the form $$\gamma_{\mathbb{R}}(y) = \phi(y) + g y^{\tan \theta_{H'}}+\text{higher order terms}$$ with $g \in \mathbb{R}$ being generic if $c \not\in \mathbb{R}$ and $g = c$ otherwise. Applying Lemma~\ref{Lemma22} to both $f$ and $\frac{\partial f}{\partial x}$, we obtain $$h_0 \ = \ \mathrm{ord} f(\phi(y),y) \ = \ \mathrm{ord} f(\gamma_{\mathbb{R}}(y),y)$$ and $$h_1 \ = \ \mathrm{ord}\ \frac{\partial f}{\partial x}(\phi(y),y) \ \le \ \mathrm{ord}\ \frac{\partial f}{\partial x} (\gamma_{\mathbb{R}}(y),y) \ =: \ h'_1.$$ It follows that \begin{eqnarray*} \ell(\phi) \ = \ \min \Big \{\frac{h_0 - 1}{h_0}, \frac{h_1}{h_0}\Big \} & \leq & \min \Big \{\frac{h_0 - 1}{h_0}, \frac{h'_1}{h_0}\Big \} \ = \ \ell(\gamma_{\mathbb{R}}). \end{eqnarray*} The claim is proved.
\end{proof} We now consider the case where $\phi$ has the form $(x(t), -t)$. Then $$\|\nabla \bar{f}(x(t), t)\| = \|\nabla {f}(x(t), -t)\| \simeq |{f}(x(t), -t)|^{\ell(\phi)} = |\bar{f}(x(t), t)|^{\ell(\phi)}.$$ Applying Claim \ref{Claim4} to $\bar{f}$ we get $\ell(\phi) \leq \mathscr{L}_{+}(\bar{f})=\mathscr{L}_{-}(f).$ Summing up, in both cases we have \begin{eqnarray*} \ell(\phi) &\le& \max \left \{\mathscr{L}_{+}(f), \mathscr{L}_{-}(f) \right\}. \end{eqnarray*} Since $\phi$ is arbitrary, we easily get from \eqref{Eqn1} and \eqref{Eqn3} that \begin{eqnarray*} \mathscr{L}(f) & = & \max \left \{\mathscr{L}_{+}(f), \mathscr{L}_{-}(f) \right\}. \end{eqnarray*} The proof of the theorem is complete. \end{proof} \begin{example}{\rm Let $f(x,y) = x^3 + 3xy^3 \in \mathbb{R}\{x, y\}$. Then $f$ is mini-regular in $x$ of order $m =3$ and $\frac{\partial f}{ \partial x } = 3(x^2 + y^3).$ By definition, $\Gamma(f)$ consists of a single real polar branch $\gamma : x = g y^{3/2}$ for some generic real number $g.$ A simple calculation shows that $\ell(\gamma ) = \frac{2}{3}$ and that $\mathscr{L}_{+}(f) = \frac{2}{3}$. It can be computed similarly that $\mathscr{L}_{-}(f) = \frac{7}{9}$. Hence, $\mathscr{L}(f) = \frac{7}{9}$ by Theorem~\ref{Theorem45}. }\end{example} \begin{remark}{\rm It is worth noting that the \L ojasiewicz gradient exponent of real analytic function germs is not a topological invariant. Indeed, in some neighbourhood of the origin in $\mathbb{R}^2,$ consider the functions $f(x, y) := x^2 - y^3$ and $g(x, y) := x^2 - y^5$. It is obvious that they are topologically right equivalent. On the other hand, one can easily see that $\mathscr{L}(f) = 2/3 \ne 4/5 = \mathscr{L}(g).$ }\end{remark} The following result was observed by Haraux \cite[Theorem~2.1]{Haraux2005} in the real case.
\begin{corollary} Let $f \colon \mathbb{K}^2 \to \mathbb{K}$ be a homogeneous polynomial of degree $d.$ Then $$\mathscr{L}(f) = 1 - \dfrac{1}{d}.$$ \end{corollary} \begin{proof} We first consider the case where $f$ is a complex homogeneous polynomial of degree $d.$ Since an invertible linear transformation does not change the homogeneity of $f,$ we may assume that $f$ is mini-regular in $x.$ Note that $\frac{\partial f}{\partial x}$ is a homogeneous polynomial of degree $d - 1.$ Hence, each root $x = \gamma(y)$ of $\frac{\partial f}{\partial x} = 0$ has the form $x = ay$ for some $a \in \mathbb{C}.$ Clearly, if $f(ay, y) \not \equiv 0,$ then $f(ay, y) = b y^d$ for some $b \ne 0,$ and so $\mathrm{ord} f(\gamma(y), y) = d = \mathrm{ord} f.$ Therefore, by Theorem~\ref{Theorem41}, $\mathscr{L}(f) = 1 - \frac{1}{d}.$ We now assume that $f$ is a real homogeneous polynomial of degree $d$ and consider its complexification $f_{\mathbb{C}}$. By definition, we have \begin{eqnarray*} \mathscr{L}(f) & \le & \mathscr{L}(f_{\mathbb{C}}) \ = \ 1 - \frac{1}{d}. \end{eqnarray*} On the other hand, the inequality $1 - \frac{1}{d} \le \mathscr{L}(f)$ holds. Therefore, $\mathscr{L}(f) = 1 - \frac{1}{d}.$ \end{proof} \subsection{Effective estimates for \L ojasiewicz exponents} \ In this subsection we give bounds for \L ojasiewicz exponents of polynomial functions in two variables. The bounds depend only on the degree of the polynomial and are simple to state. \begin{theorem}[see also {\cite[Main Theorem]{Acunto2005}}] \label{Theorem49} Let $f \colon \mathbb{K}^2 \to \mathbb{K}$ be a polynomial of degree $d$ with $f(0) = 0.$ Then $$\mathscr{L}(f) \le 1 - \frac{1}{(d-1)^2 + 1}. $$ \end{theorem} Before proving the theorem we recall the notion of intersection multiplicity of two plane curve germs (see, for example, \cite{Greuel2006}). Let $f \in \mathbb{C}\{x, y\}$ be irreducible.
Then the {\em intersection multiplicity} of any $g \in \mathbb{C}\{x, y\}$ with $f$ is given by $$i(f, g) := \mathrm{ord}\, g(x(t),y(t)),$$ where $t \mapsto (x(t), y(t))$ is a parametrization for the curve germ defined by $f.$ Here by a {\em parametrization} of the curve germ $f = 0,$ we mean an analytic map germ $$\phi \colon (\mathbb{C}, 0) \rightarrow (\mathbb{C}^2, 0), \quad t \mapsto (x(t), y(t)),$$ with $f \circ \phi \equiv 0$ and satisfying the following {\em universal factorization property}: for each analytic map germ $\psi \colon (\mathbb{C}, 0) \rightarrow (\mathbb{C}^2, 0)$ with $f \circ \psi \equiv 0,$ there exists a unique analytic map germ $\psi' \colon (\mathbb{C}, 0) \rightarrow (\mathbb{C}, 0)$ such that $\psi = \phi \circ \psi'.$ In general, let $f \in \mathbb{C}\{x, y\}$ be a convergent power series and let $f = f_1^{\alpha_1} \cdots f_r^{\alpha_r}$ be a factorization of $f$ in the ring $\mathbb{C}\{x, y\}$ with $f_i$ being irreducible and pairwise co-prime. Then the intersection multiplicity of $g$ with $f$ is defined to be the sum $$i(f_1^{\alpha_1} \cdots f_r^{\alpha_r}, g) := \alpha_1 i(f_1, g) + \cdots + \alpha_r i(f_r, g).$$ \begin{proof}[Proof of Theorem~\ref{Theorem49}] By definition, if $f$ is a real polynomial, then $\mathscr{L}(f) \le \mathscr{L}(f_{\mathbb{C}}),$ where $f_{\mathbb{C}}$ is the complexification of $f.$ Hence, it suffices to consider the complex case.
Without loss of generality we may assume that $f$ is mini-regular in $x$ of order $m \le d.$ It follows from Theorem \ref{Theorem41} that, if $\Gamma(f) = \emptyset$ then $$\mathscr{L}(f) = 1-\frac{1}{m}\leq 1-\frac{1}{(d-1)^2+1}.$$ We now assume that $\Gamma(f) \ne \emptyset.$ Take a polar branch $\gamma$ in $\Gamma(f)$, along which the \L ojasiewicz gradient exponent is attained: $$\mathscr{L}(f) = \ell(\gamma) = 1 - \frac{1}{\mathrm{ord} \ f(\gamma(y),y)}.$$ Let $g$ be the irreducible factor of $\frac{\partial f}{ \partial x }$ in $\mathbb{C}\{x,y\}$ having $\gamma$ as a Newton--Puiseux root. Then $t \mapsto (\gamma(t^{N}), t^{N})$ is a parametrization of the curve germ $g = 0,$ where $N$ denotes the order of $g.$ Note that $i(\frac{\partial f}{\partial y}, g)$ is finite because $f \circ \gamma \not \equiv 0.$ We have \begin{eqnarray*} i \left (\frac{\partial f}{\partial y}, g \right) \ = \ \mathrm{ord}\, \frac{\partial f}{\partial y}(\gamma(t^{N}),t^{N}) &=& N \cdot\mathrm{ord}\, \frac{\partial f}{\partial y}(\gamma(y), y) \\ &\ge& \mathrm{ord} \, \frac{\partial f}{\partial y}(\gamma(y),y) = \mathrm{ord}\, f(\gamma(y),y)-1. \end{eqnarray*} Let $h \in \mathbb{C}[x, y]$ be the irreducible component of the polynomial $\frac{\partial f}{ \partial x }$ which is, in $\mathbb{C}\{x,y\}$, divisible by $g.$ Note that $h$ does not divide $\frac{\partial f}{\partial y},$ since $i(\frac{\partial f}{\partial y}, g)$ is finite. It follows from B\'ezout's theorem (\cite{Brieskorn1986}, p.~232) that $$i\left (\frac{\partial f}{\partial y}, h \right) \leq (d - 1) \cdot \deg h \leq (d-1)^2.$$ Since $i(\frac{\partial f}{\partial y}, g) \le i(\frac{\partial f}{\partial y}, h),$ we obtain $$\mathscr{L}(f)=1 - \frac{1}{\mathrm{ord} \ f(\gamma(y), y)} \le 1 - \frac{1}{i(\frac{\partial f}{\partial y}, g) + 1}\leq 1 - \frac{1}{i(\frac{\partial f}{\partial y}, h) + 1} \leq 1-\frac{1}{(d - 1)^2 + 1}.$$ The theorem is proved.
\end{proof} Let $f \colon \mathbb{K}^2 \to \mathbb{K}$ be a polynomial function of degree $d$ with $f(0) = 0.$ Set \begin{eqnarray*} \widetilde{\mathscr{L}}(f) &:=& \inf\{\ell \ | \ \exists c > 0, \exists \epsilon >0, |f(x, y)| \ge c\, \mathrm{dist}((x, y), f^{-1}(0))^\ell, \forall \|(x, y)\| < \epsilon\}, \end{eqnarray*} where $\mathrm{dist}((x, y), f^{-1}(0))$ denotes the distance from $(x, y)$ to the set $f^{-1}(0)$ (see \cite{Lojasiewicz1959, Lojasiewicz1965}). It is well-known (see \cite{Bochnak1975, Kuo1974}) that the \L ojasiewicz exponent $\widetilde{\mathscr{L}}(f)$ is a rational number and it is attained along an analytic curve. In case $\mathbb{K} = \mathbb{C},$ Risler and Trotman showed in \cite[Theorem~1]{Risler1997} that \begin{eqnarray*} \widetilde{\mathscr{L}}(f) &=& \mathrm{ord}\, f \ \le \ d. \end{eqnarray*} In case $\mathbb{K} = \mathbb{R},$ a formula for computing $\widetilde{\mathscr{L}}(f)$ was given by Kuo in \cite{Kuo1974}. Furthermore, we have the following result (see also \cite{Acunto2005, Gwozdziewicz1999, Johnson2011, Kollar1999, Kurdyka2014, Pham2012}). \begin{theorem}\label{Theorem410} Let $f \colon \mathbb{R}^2 \to \mathbb{R}$ be a real polynomial of degree $d$ with $f(0) = 0.$ Then $$\widetilde{\mathscr{L}}(f) \le \frac{1}{1 - \mathscr{L}(f)}.$$ In particular, we have $$\widetilde{\mathscr{L}}(f) \le (d - 1)^2 + 1.$$ \end{theorem} \begin{proof} The first inequality is an immediate consequence of the proof of Theorem~2.2 in \cite{Pham2012} (see also \cite{Kurdyka2014}). The second one can be deduced from Theorem~\ref{Theorem49}. \end{proof} \begin{remark}{\rm In view of \cite[Example~1]{Kollar1999} (see also \cite{Gwozdziewicz1999, Johnson2011}), the estimate $\widetilde{\mathscr{L}}(f) \le (d - 1)^2 + 1$ is close to being optimal. 
}\end{remark} \subsection*{Acknowledgment.} A part of this work was done while the first author and the second author were visiting the Vietnam Institute for Advanced Study in Mathematics (VIASM) in the spring of 2016. These authors would like to thank the Institute for its hospitality and support. \bibliographystyle{abbrv}
\section{Introduction} We consider finite graphs without loops, but with possible multiple edges, and follow \cite{BoMu08} for undefined terms and notation. As in \cite{BoMu08}, $\kappa'(G)$ denotes the edge-connectivity of a graph $G$; and $d_D^+(v)$, $d_D^-(v)$ denote the out-degree and the in-degree of a vertex in a digraph $D$, respectively. Throughout this paper, $\mbox{$\mathbb Z$}$ denotes the set of integers, and $A$ denotes an (additive) abelian group with identity $0$. For an $m\in \mathbb{Z}$, let $\mathbb{Z}_m$ be the set of integers modulo $m$, as well as the (additive) cyclic group on $m$ elements. For vertex subsets $U, W\subseteq V(G)$, let $[U, W]_G = \{uw \in E(G)|u \in U, w \in W\}$; and for each $v \in V(G)$, define $E_G(v)=[v, V(G)-v]_G$. The subscript $G$ may be omitted if $G$ is understood from the context. An edge cut $X=[S,V(G)-S]$ in a connected graph $G$ is {\bf essential} if at least two components of $G-X$ are nontrivial. A graph is {\bf essentially $k$-edge-connected} if it does not have an essential edge cut with fewer than $k$ edges. For an integer $m>1$, a graph $G$ admits a {\bf mod $m$-orientation} if $G$ has an orientation $D$ such that at every vertex $v\in V(G)$, $d^+_D(v)-d^-_D(v)\equiv 0 \pmod m$. Let $\mbox{$\mathcal M$}_m$ be the family of all graphs admitting a mod $m$-orientation. Let $k\ge 2$ be an integer and $G$ be a graph with an orientation $D=D(G)$. For any vertex $v\in V(G)$, let $E^+_D(v)$ denote the set of all edges directed away from $v$, and let $E^-_D(v)$ denote the set of all edges directed into $v$. A function $f: E(G)\rightarrow \{\pm 1, \pm2,\dots, \pm (k-1)\}$ is called a {\bf nowhere-zero $k$-flow} if \[ \sum\limits_{e\in E^+_D(v)}f(e) ~-\sum\limits_{e\in E^-_D(v)}f(e)=0, \mbox{ for any vertex $v\in V(G)$.} \] The well-known $3$-Flow Conjecture of Tutte is stated below. \begin{conjecture}\label{tutte3flow} (Tutte \cite{Tutt54}) Every $4$-edge-connected graph has a nowhere-zero $3$-flow. 
\end{conjecture} Tutte \cite{Tutt66} (see also Brylawski \cite{Bryl72}, Arrowsmith and Jaeger \cite{ArJa82}) indicated that a graph $G$ has a nowhere-zero $k$-flow if and only if $G$ has a nowhere-zero $\mbox{$\mathbb Z$}_k$-flow. Moreover, a graph $G$ has a nowhere-zero $3$-flow if and only if $G$ has a mod $3$-orientation (i.e. $G\in \mbox{$\mathcal M$}_3$). Jaeger et al. \cite{JLPT92} introduced the notion of $\mathbb{Z}_k$-connectedness as a generalization of nowhere-zero flows. In this paper, we mainly focus on $\mathbb{Z}_3$-connectedness of graphs. A function $\beta : V(G) \rightarrow \mathbb{Z}_3$ is a zero-sum function of $G$ if $\sum_{v\in V(G)}\beta(v)=0$ in $\mbox{$\mathbb Z$}_3$. Let $Z(G, \mbox{$\mathbb Z$}_3)$ be the set of all zero-sum functions of $G$. An orientation $D$ of $G$ with $d^+_D(v)-d^-_D(v)= \beta(v)$ in $\mathbb{Z}_3$ for every vertex $v\in V(G)$ is called a {\bf $\beta$-orientation}. A mod $3$-orientation of $G$ is a $\beta$-orientation with $\beta(v)=0$ for every vertex $v\in V(G)$. A graph $G$ is {\bf $\mathbb{Z}_3$-connected} if, for every $\beta\in {Z}(G, \mathbb{Z}_{3})$, there is an orientation $D$ such that $d^+_D(v)-d^-_D(v)\equiv \beta(v) \pmod {3}$ for every vertex $v\in V(G)$. The collection of all $\mbox{$\mathbb Z$}_3$-connected graphs is denoted by $\langle\mbox{$\mathbb Z$}_3\rangle$. Jaeger et al. \cite{JLPT92} proposed the following conjecture. \begin{conjecture}\label{Jaegerz3} (Jaeger, Linial, Payan and Tarsi \cite{JLPT92}) Every $5$-edge-connected graph is $\mbox{$\mathbb Z$}_3$-connected. \end{conjecture} A graph $G$ with $z_0 \in V(G)$ is {\bf $\mbox{$\mathcal M$}_3$-extendable at vertex $z_0$} if, for any pre-orientation $D_{z_0}$ of $E_G(z_0)$ with $d_{D_{z_0}}^+(z_0) \equiv d_{D_{z_0}}^-(z_0) \pmod 3$, $D_{z_0}$ can be extended to a mod $3$-orientation $D$ of $G$. Kochol \cite{Koch01} showed that Conjecture \ref{Jaegerz3} implies Conjecture \ref{tutte3flow}.
\begin{theorem}\label{kochol}(Kochol \cite{Koch01}) The following are equivalent. \\ (i) Every $4$-edge-connected graph has a nowhere-zero $3$-flow. \\ (ii) Every $5$-edge-connected graph has a nowhere-zero $3$-flow. \\ (iii) Every $5$-edge-connected essentially $6$-edge-connected graph is $\mbox{$\mathcal M$}_3$-extendable at every degree $5$ vertex. \\ (iv) Every $4$-edge-connected graph with each vertex of degree $4$ or $5$ is $\mbox{$\mathcal M$}_3$-extendable at every vertex. \end{theorem} A graph is called {\bf $\mbox{$\langle\Z_3\rangle$}$-extendable at vertex $z_0$} if, for any $\beta\in Z(G, \mbox{$\mathbb Z$}_3)$ and any pre-orientation $D_{z_0}$ of $E_G({z_0})$ with $d_{D_{z_0}}^+(z_0)-d_{D_{z_0}}^-(z_0)\equiv\beta (z_0)\pmod 3$, $D_{z_0}$ can be extended to a $\beta$-orientation $D$ of $G$. In the next section, we shall prove the following proposition on extendability at vertex $z_0$. \begin{proposition}\label{extendingiff} Let $G$ be a graph and $z_0 \in V(G)$ be a vertex. \\ (i) $G$ is $\mbox{$\langle\Z_3\rangle$}$-extendable at vertex $z_0$ if and only if $G-z_0$ is $\mbox{$\mathbb Z$}_3$-connected. \\ (ii) If $G$ is $\mbox{$\langle\Z_3\rangle$}$-extendable at $z_0$, then $G$ is $\mbox{$\mathbb Z$}_3$-connected. \end{proposition} Thomassen \cite{Thom12} and Lov\'{a}sz et al. \cite{LTWZ13} utilized partial flow extensions to obtain breakthroughs in $\mbox{$\mathbb Z$}_3$-connectedness and modulo orientation problems. Lov\'{a}sz, Thomassen, Wu and Zhang \cite{LTWZ13,Wuyz12} proved that every $6$-edge-connected graph is $\mbox{$\mathbb Z$}_3$-connected. In fact, they have proved a stronger result. \begin{theorem}\label{LTWZWu} (Lov\'{a}sz, Thomassen, Wu and Zhang \cite{LTWZ13} and Wu \cite{Wuyz12}) Every $6$-edge-connected graph is $\mbox{$\langle\Z_3\rangle$}$-extendable at any vertex of degree at most $7$.
\end{theorem} Analogous to Theorem \ref{kochol}(iii) of Kochol, it is natural to suggest the following strengthening of Conjecture \ref{Jaegerz3}, which eliminates nontrivial $5$-edge-cuts, and whose truth would imply Conjecture \ref{Jaegerz3}, as will be shown in Section 3 of this paper. \begin{conjecture}\label{z3extending} Every $5$-edge-connected essentially $6$-edge-connected graph is $\mbox{$\langle\Z_3\rangle$}$-extendable at any vertex of degree $5$. \end{conjecture} The main results of this paper are the following. \begin{theorem}\label{mainthm} Every graph with $4$ edge-disjoint spanning trees is $\mbox{$\mathbb Z$}_3$-connected. \end{theorem} In response to Theorem \ref{kochol}(iii) of Kochol and providing some supporting evidence to Conjecture \ref{z3extending}, we obtain a partial result as stated below. \begin{theorem}\label{ess23con} Each of the following holds.\\ (a) Every $5$-edge-connected essentially $23$-edge-connected graph is $\mbox{$\mathcal M$}_3$-extendable at any degree five vertex.\\ (b) Every $5$-edge-connected essentially $23$-edge-connected graph is $\mbox{$\langle\Z_3\rangle$}$-extendable at any degree five vertex. \end{theorem} Theorems \ref{mainthm} and \ref{ess23con} are immediate corollaries of a technical theorem, stated below as Theorem \ref{4treez3}, which will be proved by utilizing a method of Thomassen \cite{Thom12} and Lov\'{a}sz et al. in \cite{LTWZ13}. Following Catlin \cite{Catl92}, let $F(G,k)$ denote the minimum number of additional edges that must be added to $G$ to result in a supergraph $G'$ of $G$ that has $k$ edge-disjoint spanning trees. In particular, $G$ has $k$ edge-disjoint spanning trees if and only if $F(G,k) = 0$. It is known (\cite{WLYZ14,LaLL17}) that if $G$ is $\mbox{$\mathbb Z$}_3$-connected, then it contains two edge-disjoint spanning trees (i.e. $F(G,2) = 0$). A cut-edge is called a {\bf bridge}.
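The definition of a mod $3$-orientation is easy to test by brute force on very small examples. The following Python sketch (purely illustrative; the function name and encoding are ours, not part of the paper) enumerates all orientations of a small (multi)graph and checks the congruence $d^+_D(v)-d^-_D(v)\equiv 0 \pmod 3$ at every vertex. It confirms that $K_4$ admits no mod $3$-orientation (equivalently, no nowhere-zero $3$-flow), while the multigraph obtained from $K_4$ by doubling every edge admits one, e.g. by orienting each parallel pair oppositely.

```python
from itertools import product

def has_mod3_orientation(edges, n):
    """Brute-force test: does some orientation D of the (multi)graph on
    vertices 0..n-1 satisfy d^+_D(v) - d^-_D(v) == 0 (mod 3) at every vertex?"""
    for dirs in product((0, 1), repeat=len(edges)):
        diff = [0] * n
        for (u, v), d in zip(edges, dirs):
            a, b = (u, v) if d == 0 else (v, u)
            diff[a] += 1   # edge leaves a
            diff[b] -= 1   # edge enters b
        if all(x % 3 == 0 for x in diff):
            return True
    return False

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(has_mod3_orientation(K4, 4))       # False: K4 has no nowhere-zero 3-flow
print(has_mod3_orientation(K4 + K4, 4))  # True: orient each parallel pair oppositely
```

The search is exponential in $|E(G)|$, so this serves only as a sanity check of the definitions on graphs with a handful of edges.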
The following provides a sufficient condition for graphs to be $\mbox{$\mathbb Z$}_3$-connected through the number of edge-disjoint spanning trees. \begin{theorem}\label{4treez3} Let $G$ be a graph. \\ (i) Suppose that $F(G,4)\le 3$. Then $G$ is $\mbox{$\mathbb Z$}_3$-connected, unless $G$ contains a bridge. (Thus, $G$ is $\mbox{$\mathbb Z$}_3$-connected if and only if $\kappa'(G)\ge 2$. ) \\ (ii) Suppose that $F(G,4)=0$. Then for any vertex $v \in V(G)$ with $d_G(v) \le 7$, if $\kappa'(G-v) \ge 2$, then $G$ is $\mbox{$\langle\Z_3\rangle$}$-extendable at $v$. \end{theorem} Prerequisites will be presented in the next section. In Section 3, we will study the relationship among Conjectures \ref{tutte3flow}, \ref{Jaegerz3} and \ref{z3extending}. Theorems \ref{4treez3}, \ref{mainthm} and \ref{ess23con} will be proved in a subsequent section. \section{Prerequisites} In this section, we will justify Proposition \ref{extendingiff} and present other preliminaries. For a graph $G$ and a vertex $z \in V(G)$, define $N_G(z) =\{v\in V(G): zv\in E(G)\}$. For notational convenience, the algebraic manipulations in the proof of Proposition \ref{extendingiff} will be over $\mbox{$\mathbb Z$}_3$. \\ \noindent {\bf Proof of Proposition \ref{extendingiff}.} As Part (ii) is straightforward, we only prove Part (i). Suppose that a graph $G$ is $\mbox{$\langle\Z_3\rangle$}$-extendable at vertex $z_0$. Let $D_{z_0}$ be a fixed pre-orientation of $E_G({z_0})$. We also use $D_{z_0}$ to denote the digraph induced by the oriented edges of $D_{z_0}$. Define \begin{equation} \label{b} \mbox{ $b(v)=d_{D_{z_0}}^+(v)-d_{D_{z_0}}^-(v)$ for each $v\in N_G(z_0)\cup \{z_0\}$.}\end{equation} Then $b(z_0) + \sum_{v \in N_G(z_0)} b(v) = 0$. We shall prove that $G-z_0$ is $\mathbb{Z}_3$-connected. For any $\beta\in {Z}(G-z_0, \mathbb{Z}_3)$, define $$\beta'(v)= \left\{\begin{array}{lll} \beta(v)+b(v), \mbox{if~} v\in N_G(z_0); \\ b(z_0), \mbox{if~} v=z_0;\\ \beta(v), \mbox{~otherwise.} \end{array} \right.
$$ Then $\sum_{v \in V(G)} \beta'(v) = \sum_{v \in V(G-z_0)} \beta(v) + (b(z_0) + \sum_{v \in N_G(z_0)} b(v)) = 0$, and so $\beta'\in {Z} (G, \mathbb{Z}_3)$. Since $G$ is $\mbox{$\langle\Z_3\rangle$}$-extendable at vertex $z_0$, there exists an orientation $D'$ of $G$ such that $d^+_{D'}(v)-d^-_{D'}(v)=\beta'(v)$ for any vertex $v\in V(G)$ and $D'$ agrees with $D_{z_0}$ on $E_G(z_0)$. Let $D$ be the restriction of $D'$ on $G-z_0$. By the definition of $\beta'$, we have $d^+_D(v)-d^-_D(v)=\beta(v)$ for any vertex $v\in V(G-z_0)$, and so $G-z_0$ is $\mathbb{Z}_3$-connected. Conversely, assume that $G-z_0$ is $\mathbb{Z}_3$-connected. Let $\beta'\in {Z} (G, \mathbb{Z}_3)$, and $D_{z_0}$ be a pre-orientation of $E_G({z_0})$ with $d^+_{D_{z_0}}(z_0)-d^-_{D_{z_0}}(z_0)=\beta'(z_0)$. Define $b(v)$ as in (\ref{b}), and $$\beta(v)= \left\{\begin{array}{ll} \beta'(v)-b(v), \mbox{if~} v\in N_G(z_0); \\ \beta'(v), \mbox{otherwise.} \end{array} \right. $$ As $\sum_{v \in V(G - z_0)} \beta(v) = \sum_{v \in V(G)} \beta'(v) = 0$, we have $\beta\in {Z} (G-z_0, \mathbb{Z}_3)$. Since $G-z_0 \in \mbox{$\langle\Z_3\rangle$}$, there exists an orientation $D'$ of $G-z_0$ satisfying $d^+_{D'}(v)-d^-_{D'}(v)=\beta(v)$ for any vertex $v\in V(G-z_0)$. Combine $D'$ and $D_{z_0}$ to obtain an orientation $D$ of $G$. Then for any vertex $v\in V(G)$, depending on $v = z_0$ or not, we always have $d^+_D(v)-d^-_D(v)=\beta'(v)$, and so $G$ is $\mbox{$\langle\Z_3\rangle$}$-extendable at vertex $z_0$. This completes the proof of Proposition \ref{extendingiff}. \fbox{\rule[1.2mm]{0.8mm}{0mm}} \vskip 0.3cm Let $G$ be a graph and $\beta\in Z(G,\mbox{$\mathbb Z$}_3)$. Define an integer-valued mapping $\tau : 2^{V(G)} \to \{0,\pm 1, \pm 2, \pm 3\}$ as follows: for each vertex $x\in V(G)$, \begin{eqnarray}\nonumber \tau (x) \equiv \left\{\begin{array}{ll} \beta(x)\pmod 3; \\ d(x) \pmod 2. \end{array} \right.
\end{eqnarray} For a vertex set $A\subset V(G)$, denote $\beta(A)\equiv \sum_{v\in A} \beta(v)\pmod 3$, $d(A)=|[A,V(G)-A]|$ and define $\tau(A)$ to be \begin{eqnarray}\nonumber \tau (A) \equiv \left\{\begin{array}{ll} \beta(A)\pmod 3; \\ d(A) \pmod 2. \end{array} \right. \end{eqnarray} \begin{theorem}\label{partialextending} (Lov\'{a}sz, Thomassen, Wu and Zhang, Theorem 3.1 of \cite{LTWZ13}) Let $G$ be a graph, $\beta\in Z(G,\mbox{$\mathbb Z$}_3)$ and $z_0 \in V(G)$. If $D_{z_0}$ is a pre-orientation of $E_G({z_0})$, and if \\ (i) $|V(G)|\ge 3$, \\ (ii) $d(z_0)\le 4 + |\tau(z_0)|$ and $d^+(z_0)-d^-(z_0)\equiv \beta(z_0) \pmod 3$, and \\ (iii) $d(A)\ge 4+ |\tau(A)|$ for each nonempty $A\subseteq V(G)-\{z_0\}$ with $|V(G)- A|\ge 2$, \\ then $D_{z_0}$ can be extended to a $\beta$-orientation of the entire graph $G$. \end{theorem} The following is an application of Theorem \ref{partialextending}. \begin{lemma}\label{delete3edges} Let $G$ be a $6$-edge-connected graph. Each of the following holds. \\ (i) If $v \in V(G)$ with $d(v) \le 7$, then $G-v \in \mbox{$\langle\Z_3\rangle$}$. \\ (ii) If $E_1 \subset E(G)$ with $|E_1|\le 3$, then $G-E_1 \in \mbox{$\langle\Z_3\rangle$}$. \end{lemma} \proof ~ (i) We may assume that $d_G(v)=7$. Otherwise, pick an edge $e \in E_G(v)$ and add an edge parallel to $e$, which still results in a $6$-edge-connected graph. Take an arbitrary $\beta'\in {Z}(G-{v}, \mathbb{Z}_3)$. We shall show that $G-v$ has a $\beta'$-orientation. Define $\beta(v)=3$. We shall apply Theorem \ref{partialextending} by viewing $v$ as the vertex $z_0$ there. Since $d(v) = 7$, we have $|\tau(v)|=3$, and thus we can orient the edges $E_G(v)$ with an orientation $D_v$ so that $d_{D_v}^+(v) = 5$ and $d_{D_v}^-(v) = 2$.
Define $b(x)=d_{D_{v}}^+(x)-d_{D_{v}}^-(x)$ for each $x\in N_G(v)$ and set \begin{equation} \label{beta-1} \beta(x)= \left\{\begin{array}{lll} \beta'(x)+b(x), \mbox{if~} x\in N_G(v); \\ \beta(v), \mbox{if~} x=v;\\ \beta'(x), \mbox{~otherwise.} \end{array} \right. \end{equation} Then $\beta\in {Z} (G, \mathbb{Z}_3)$. As $\kappa'(G) \ge 6$, conditions (i)-(iii) of Theorem \ref{partialextending} are satisfied, and so by Theorem \ref{partialextending}, $G$ has a $\beta$-orientation $D$. Let $D'$ be the restriction of $D$ on $G-v$. By (\ref{beta-1}), $D'$ is a $\beta'$-orientation of $G-v$. This proves (i). (ii) Since $\mbox{$\mathbb Z$}_3$-connectedness is preserved under adding edges, we may assume that $|E_1|=3$. In graph $G$, subdivide each edge in $E_1$ with internal vertices $z_1,z_2,z_3$, respectively. Identify $z_1,z_2,z_3$ to form a new vertex $z_0$ in the resulting graph $G'$. By the construction of $G'$, we have $\kappa'(G') \ge 6$. By Lemma \ref{delete3edges} (i), $G-E_1=G'-z_0 \in \mbox{$\langle\Z_3\rangle$}$. \fbox{\rule[1.2mm]{0.8mm}{0mm}} \vspace{0.3cm} For an edge set $X \subseteq E(G)$, the {\bf contraction} $G/X$ is the graph obtained from $G$ by identifying the two ends of each edge in $X$, and then deleting the resulting loops. If $H$ is a subgraph of $G$, then we use $G/H$ for $G/E(H)$. For a vertex set $W \subset V(G)$ such that $G[W]$ is connected, we also use $G/W$ for $G/G[W]$. \begin{lemma}\label{cf} (Proposition 2.1 of \cite{LaiH03}) Let $G$ be a graph. Each of the following holds. \\ (i) If $G\in \langle \mathbb{Z}_3\rangle$ and $e\in E(G)$, then $G/e \in \langle \mathbb{Z}_3\rangle.$ \\ (ii) If $H\subseteq G$ and if $H, G/H \in \langle \mathbb{Z}_3\rangle$, then $G\in \langle \mathbb{Z}_3\rangle.$ \end{lemma} \section{Relationship among the conjectures} A graph is called {\bf $\mbox{$\langle\Z_3\rangle$}$-reduced} if it does not have any nontrivial $\mbox{$\mathbb Z$}_3$-connected subgraphs.
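$\mbox{$\mathbb Z$}_3$-connectedness itself can be checked exhaustively on very small (multi)graphs: one enumerates every zero-sum function $\beta$ and searches for a $\beta$-orientation. The Python sketch below (purely illustrative; the naming is ours, not part of the paper) verifies two facts used in this section: a cycle of length $2$ is $\mbox{$\mathbb Z$}_3$-connected, while $K_4$ is not (already $\beta\equiv 0$ fails, since $K_4$ has no mod $3$-orientation).

```python
from itertools import product

def is_Z3_connected(edges, n):
    """Exhaustive test of Z_3-connectedness on vertices 0..n-1: every
    zero-sum beta: V -> Z_3 must be realized by some orientation with
    d^+(v) - d^-(v) == beta(v) (mod 3) at each vertex."""
    def realizable(beta):
        for dirs in product((0, 1), repeat=len(edges)):
            diff = [0] * n
            for (u, v), d in zip(edges, dirs):
                a, b = (u, v) if d == 0 else (v, u)
                diff[a] += 1
                diff[b] -= 1
            if all((diff[v] - beta[v]) % 3 == 0 for v in range(n)):
                return True
        return False
    # enumerate all zero-sum functions beta: V -> Z_3
    for beta in product(range(3), repeat=n):
        if sum(beta) % 3 == 0 and not realizable(beta):
            return False
    return True

two_cycle = [(0, 1), (0, 1)]  # two parallel edges
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(is_Z3_connected(two_cycle, 2))  # True
print(is_Z3_connected(K4, 4))         # False: beta = 0 already fails
```

With $3^{|V|}$ candidate functions and $2^{|E|}$ orientations each, this is feasible only for toy instances, but it makes the definition concrete.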
By definition, $K_1$ is $\mbox{$\langle\Z_3\rangle$}$-reduced. The potential minimal counterexamples of Conjectures \ref{tutte3flow} and \ref{Jaegerz3} must be $\mbox{$\langle\Z_3\rangle$}$-reduced graphs. As an example, it is routine to verify that the $4$-edge-connected non $\mbox{$\mathbb Z$}_3$-connected graph $J$ constructed by Jaeger et al.~\cite{JLPT92} (see Figure \ref{Jaegergraph}) is indeed a $\mbox{$\langle\Z_3\rangle$}$-reduced graph. Applying Theorem \ref{partialextending}, we obtain the following. \begin{lemma}\label{min5} Every $\mbox{$\langle\Z_3\rangle$}$-reduced graph has minimum degree at most $5$. \end{lemma} \proof Suppose, to the contrary, that there is a $\mbox{$\langle\Z_3\rangle$}$-reduced graph $G$ with $\delta(G)\ge 6$. As a cycle of length $2$ is $\mbox{$\mathbb Z$}_3$-connected, $G$ has no parallel edges and $|V(G)|\ge 4$. If $\kappa'(G)\ge 6$, then $G$ is $\mbox{$\mathbb Z$}_3$-connected by Theorem \ref{LTWZWu}, contrary to the assumption that $G$ is a $\mbox{$\langle\Z_3\rangle$}$-reduced graph. For a vertex subset $W \subset V(G)$, let $W^c = V(G) - W$. Among all those edge-cuts $[W, W^c]$ of size at most $5$ in $G$, choose the one with $|W|$ minimized. Let $v_c$ denote the vertex onto which $W^c$ is contracted in $G/W^c$. Obtain a graph $G'$ from $G/W^c$ by adding $6-d_{G/W^c}(v_c)$ edges between $W$ and $v_c$. Then $\kappa'(G')\ge 6$ by the choice of $W$. By Lemma \ref{delete3edges} (i), $G[W]= G'- v_c$ is $\mbox{$\mathbb Z$}_3$-connected, a contradiction. \fbox{\rule[1.2mm]{0.8mm}{0mm}} \\ We believe that the following strengthening of Lemma \ref{min5} holds as well, whose truth implies Conjecture \ref{Jaegerz3}, as will be shown below in Proposition \ref{d5extending}. \begin{conjecture}\label{atmost4} Every $\mbox{$\langle\Z_3\rangle$}$-reduced graph has minimum degree at most $4$. \end{conjecture} \begin{proposition}\label{d5extending} Each of the following holds. \\ (i) Conjecture \ref{z3extending} implies Conjecture \ref{atmost4}.
\\ (ii) Conjecture \ref{atmost4} implies Conjecture \ref{Jaegerz3}. \end{proposition} \proof We shall prove (ii) first. Assume that Conjecture \ref{atmost4} holds. Then every graph with minimum degree at least $5$ is not $\mbox{$\langle\Z_3\rangle$}$-reduced. Let $G$ be a counterexample to Conjecture \ref{Jaegerz3} with $|V(G)|$ minimized. Since $\delta(G) \ge \kappa'(G) \ge 5$, $G$ is not $\mbox{$\langle\Z_3\rangle$}$-reduced, and so $G$ contains a nontrivial $\mbox{$\mathbb Z$}_3$-connected subgraph $H$. Since $\kappa'(G/H) \ge \kappa'(G) \ge 5$, and since $|V(G)| > |V(G/H)|$, the minimality of $G$ implies that $G/H$ is $\mbox{$\mathbb Z$}_3$-connected. By Lemma \ref{cf} (ii), $G$ must be $\mbox{$\mathbb Z$}_3$-connected as well, contrary to the assumption that $G$ is a counterexample of Conjecture \ref{Jaegerz3}. This proves (ii). To prove (i), we use arguments similar to those in the proof of Lemma \ref{min5}. By contradiction, we assume that Conjecture \ref{z3extending} holds but there is a counterexample $G$ to Conjecture \ref{atmost4} with $|V(G)|$ minimized and with $\delta(G) \ge 5$. By the validity of Conjecture \ref{z3extending}, $G$ must have an essential edge-cut of size at most $5$. Among all those essential edge-cuts $[W, W^c]$ of size at most $5$, choose the one with $|W|$ minimized. Let $v_c$ denote the vertex onto which $W^c$ is contracted in $G/W^c$. Add some edges between $W$ and $v_c$ so that $v_c$ has degree $5$ in the new graph, which we still denote by $G/W^c$. Then we have $|W|\ge 2$, and the minimality of $|W|$ forces that $G/W^c$ is an essentially $6$-edge-connected graph. By the assumption that Conjecture \ref{z3extending} holds, $G/W^c$ is $\mbox{$\langle\Z_3\rangle$}$-extendable at $v_c$. By Proposition \ref{extendingiff}, $G[W]=G/W^c - v_c \in \mbox{$\langle\Z_3\rangle$}$, contrary to the assumption that $G$ is $\mbox{$\langle\Z_3\rangle$}$-reduced.
\fbox{\rule[1.2mm]{0.8mm}{0mm}} \\ In the rest of this section, we study the relationship between $\mbox{$\langle\Z_3\rangle$}$-extendability and edge deletions. Theorem \ref{eqdeletetwoedges} below indicates that deleting one or two adjacent edges does not make Conjecture \ref{Jaegerz3} stronger. Theorem \ref{eqdeleteanytwoedges} and Proposition \ref{eqdeletethreeedges} below also describe the strength of Conjecture \ref{atmost4} and Conjecture \ref{z3extending} via edge deletions. \begin{theorem} \label{eqdeletetwoedges} The following statements are equivalent. \\ (i) Every $5$-edge-connected graph is $\mbox{$\mathbb Z$}_3$-connected. \\ (ii) Every $5$-edge-connected graph deleting two adjacent edges is $\mbox{$\mathbb Z$}_3$-connected. \end{theorem} \begin{theorem} \label{eqdeleteanytwoedges} The following statements are equivalent. \\ (i) Every $\mbox{$\langle\Z_3\rangle$}$-reduced graph has minimum degree at most $4$. \\ (ii) Every $5$-edge-connected graph deleting any two edges is $\mbox{$\mathbb Z$}_3$-connected. \end{theorem} \begin{proposition} \label{eqdeletethreeedges} The following statements are equivalent. \\ (i) Every $5$-edge-connected essentially $6$-edge-connected graph is $\mbox{$\langle\Z_3\rangle$}$-extendable at any vertex of degree $5$. \\ (ii) Every $5$-edge-connected graph is $\mbox{$\langle\Z_3\rangle$}$-extendable at any vertex of degree $5$. \\ (iii) Every $5$-edge-connected graph deleting three edges incident with a degree $5$ vertex is $\mbox{$\mathbb Z$}_3$-connected. \end{proposition} We shall justify Theorem \ref{eqdeletetwoedges} and Theorem \ref{eqdeleteanytwoedges} by utilizing Kochol's method in \cite{Koch01}. In \cite{Koch01}, Kochol applies $\mbox{$\mathcal M$}_3$-extension at a degree $5$ vertex and converts it into degree $3$ vertices, which helps him establish Theorem \ref{kochol}.
Unlike the case of mod $3$-orientations, a direct application of the method above does not seem to help with $\mbox{$\langle\Z_3\rangle$}$-extension for certain $\beta$-orientations. We observe that some edge deletions behave similarly to extension, as shown in Proposition \ref{extendingiff} and the theorems above. This is part of the reason why we would like to prove Theorem \ref{4treez3} in the form of edge deletions. A lemma is needed to prove Theorems \ref{eqdeletetwoedges} and \ref{eqdeleteanytwoedges}. \begin{definition} \label{e-sum} Let $G_1$ be a graph with $e =u_1v_1 \in E(G_1)$, and let $G_2(u_2, v_2)$ be a graph with distinguished (and distinct) vertices $u_2, v_2$. Let $G_1 \oplus_e G_2$ be a graph obtained from the disjoint union of $G_1 - e$ and $G_2$ by identifying $u_1$ and $u_2$ to form a vertex $u$, and by identifying $v_1$ and $v_2$ to form a vertex $v$. Thus for $i \in \{1, 2\}$, we can view $u = u_i$ and $v = v_i$ in $G_i$. Note that even if $e$ and $u_2, v_2$ are given, $G_1 \oplus_e G_2$ may not be unique. Thus we use $G_1 \oplus_e G_2$ to denote any one of the resulting graphs. \end{definition} \begin{lemma}\label{2sum} Let $G_1$ and $G_2$ be nontrivial graphs with $e \in E(G_1)$. (i) If $G_1$ and $G_2$ are not $\mbox{$\mathbb Z$}_3$-connected graphs, then $G_1 \oplus_e G_2$ is not $\mbox{$\mathbb Z$}_3$-connected. (ii) If $G_1$ and $G_2$ are $\mbox{$\langle\Z_3\rangle$}$-reduced graphs, then $G_1 \oplus_e G_2$ is a $\mbox{$\langle\Z_3\rangle$}$-reduced graph. \end{lemma} \proof ~(i) The proof is similar to those of Lemma 1 in \cite{Koch01} and of Lemma 2.5 in \cite{DeXY06}. Let $G = G_1 \oplus_e G_2$. We shall adopt the notation in Definition \ref{e-sum}. Fix $i \in \{1, 2\}$. Since $G_i$ is not $\mbox{$\mathbb Z$}_3$-connected, there exists a $\beta_i \in Z(G_i, \mbox{$\mathbb Z$}_3)$ such that $G_i$ does not have a $\beta_i$-orientation.
Define $\beta: V(G) \mapsto \mbox{$\mathbb Z$}_3$ as follows: $$\beta(x)= \left\{\begin{array}{lll} \beta_1(x), \mbox{if~} x\in V(G_1)-\{u_1,v_1\}; \\ \beta_2(x), \mbox{if~} x\in V(G_2)-\{u_2,v_2\}; \\ \beta_1(x)+\beta_2(x), \mbox{if~} x\in \{u,v\}. \end{array} \right. $$ As $\sum_{z \in V(G)} \beta(z) = \sum_{i=1}^2 \sum_{z \in V(G_i)} \beta_i(z)$, we have $\beta\in Z(G, \mbox{$\mathbb Z$}_3)$. It remains to show $G$ does not have a $\beta$-orientation. By contradiction, assume that $G$ has a $\beta$-orientation $D$. Let $D_2$ be the restriction of $D$ on $E(G_2)$. Then $d^+_{D_2}(x)-d^-_{D_2}(x)=\beta_2(x)$ in $\mbox{$\mathbb Z$}_3$ for any $x\in V(G_2)-\{u_2,v_2\}$. Since $G_2$ does not have a $\beta_2$-orientation, we must have $d^+_{D_2}(u)-d^-_{D_2}(u)\neq \beta_2(u)$ in $\mbox{$\mathbb Z$}_3$. Thus, we have either \begin{eqnarray} \label{+1} d^+_{D_2}(u)-d^-_{D_2}(u)= \beta_2(u)+1~~\text{and}~~ d^+_{D_2}(v)-d^-_{D_2}(v) = \beta_2(v)-1, \end{eqnarray} or \begin{eqnarray} \label{-1} d^+_{D_2}(u)-d^-_{D_2}(u)= \beta_2(u)-1~~\text{and}~~ d^+_{D_2}(v)-d^-_{D_2}(v) = \beta_2(v)+1. \end{eqnarray} Let $D_1'$ be the restriction of $D$ on $E(G_1)-e$. If (\ref{+1}) holds, then both $d^+_{D_1'}(u)-d^-_{D_1'}(u)= \beta_1(u)-1$ and $d^+_{D_1'}(v)-d^-_{D_1'}(v) = \beta_1(v)+1$. Obtain an orientation $D_1$ of $G_1$ from $D_1'$ by orienting $e=u_1v_1$ from $u_1$ to $v_1$. If (\ref{-1}) holds, then both $d^+_{D_1'}(u)-d^-_{D_1'}(u)= \beta_1(u)+1$ and $d^+_{D_1'}(v)-d^-_{D_1'}(v) = \beta_1(v)-1$. Obtain an orientation $D_1$ of $G_1$ from $D_1'$ by orienting $e=u_1v_1$ from $v_1$ to $u_1$. In either case, $D_1$ is a $\beta_1$-orientation of $G_1$, contrary to the choice of $\beta_1$. (ii) follows from (i) by the definition of $\mbox{$\langle\Z_3\rangle$}$-reduced graph. This proves the lemma. 
\fbox{\rule[1.2mm]{0.8mm}{0mm}} \begin{figure} \begin{center} \begin{tikzpicture} \draw (0,0) ellipse (1cm and 1.5cm); \draw (3,0) ellipse (1cm and 1.5cm); \filldraw[black] (0.5,2.6) circle (0.05cm); \filldraw[black] (-0.5,1) circle (0.05cm); \filldraw[black] (0,0.9) circle (0.05cm); \filldraw[black] (0.5,0.5) circle (0.05cm); \filldraw[black] (0.7,0) circle (0.05cm); \filldraw[black] (0.7,-0.5) circle (0.05cm); \filldraw[black] (2.5,2.6) circle (0.05cm); \filldraw[black] (3.5,1) circle (0.05cm); \filldraw[black] (3,0.9) circle (0.05cm); \filldraw[black] (2.5,0.5) circle (0.05cm); \filldraw[black] (2.3,0) circle (0.05cm); \filldraw[black] (2.3,-0.5) circle (0.05cm); \draw [-] (0.7,0)--(2.3,0); \draw [-] (0.7,0)--(2.3,-0.5); \draw [-] (0.7,-0.5)--(2.3,0); \draw [-] (0.7,-0.5)--(2.3,-0.5); \draw [-] (0.5,2.6)--(-0.5,1); \draw [-] (0.5,2.6)--(0,0.9); \draw [-] (0.5,2.6)--(0.5,0.5); \draw [-] (2.5,2.6)--(3.5,1); \draw [-] (2.5,2.6)--(3,0.9); \draw [-] (2.5,2.6)--(2.5,0.5); \node at (0.5,3){$v^1$}; \node at (2.5,3){$v^2$}; \node at (-0.5,0.7){$v^1_5$}; \node at (0,0.6){$v^1_4$}; \node at (0.3,0.4){$v^1_3$}; \node at (0.4,0){$v^1_2$}; \node at (0.4,-0.5){$v^1_1$}; \node at (3.6,0.7){$v^2_5$}; \node at (3.1,0.6){$v^2_4$}; \node at (2.8,0.4){$v^2_3$}; \node at (2.7,0){$v^2_2$}; \node at (2.7,-0.5){$v^2_1$}; \draw (6,3) ellipse (0.5cm and 0.7cm); \draw (10.5,3) ellipse (0.5cm and 0.7cm); \draw (6,1) ellipse (0.5cm and 0.7cm); \draw (7,-1) ellipse (0.5cm and 0.7cm); \draw (10.5,1) ellipse (0.5cm and 0.7cm); \draw (9.5,-1) ellipse (0.5cm and 0.7cm); \filldraw[black] (7.5,3) circle (0.08cm); \filldraw[black] (9,3) circle (0.08cm); \filldraw[black] (7.5,1) circle (0.08cm); \filldraw[black] (9,1) circle (0.08cm); \draw [-] (7.5,3)--(9,3); \draw [-] (7.5,3)--(9,1); \draw [-] (7.5,1)--(9,3); \draw [-] (7.5,3)--(6.3,3.4); \draw [-] (7.5,3)--(6.3,3); \draw [-] (7.5,3)--(6.3,2.6); \draw [-] (7.5,1)--(6.3,1.4); \draw [-] (7.5,1)--(6.3,1); \draw [-] (7.5,1)--(6.3,.6); \draw [-] 
(7.5,1)--(7.2,-0.7); \draw [-] (7.5,1)--(7,-0.5); \draw [-] (7.5,1)--(6.8,-0.7); \draw [-] (9,3)--(10.2,3.4); \draw [-] (9,3)--(10.2,3); \draw [-] (9,3)--(10.2,2.6); \draw [-] (9,1)--(10.2,1.4); \draw [-] (9,1)--(10.2,1); \draw [-] (9,1)--(10.2,0.6); \draw [-] (9,1)--(9.7,-0.7); \draw [-] (9,1)--(9.5,-0.5); \draw [-] (9,1)--(9.3,-0.7); \draw [-] (5.8,2.7)--(5.8,1.3); \draw [-] (5.8,2.7)--(6.2,1.3); \draw [-] (6.2,2.7)--(6.2,1.3); \draw [-] (6.2,2.7)--(5.8,1.3); \draw [-] (10.3,2.7)--(10.3,1.3); \draw [-] (10.3,2.7)--(10.7,1.3); \draw [-] (10.7,2.7)--(10.7,1.3); \draw [-] (10.7,2.7)--(10.3,1.3); \draw [-] (7.3,-0.8)--(9.3,-1.2); \draw [-] (7.3,-1.2)--(9.3,-0.8); \draw [-] (9.3,-0.8)--(7.3,-0.8); \draw [-] (9.3,-1.2)--(7.3,-1.2); \node at (1.5,-2.0){$J(v^1, v^2)$}; \node at (8.25,-2.0){$G(\Gamma)$}; \end{tikzpicture} \end{center} \caption{ The construction in Theorem \ref{eqdeletetwoedges}}\label{construction} \end{figure} ~ \noindent {\bf Proof of Theorem \ref{eqdeletetwoedges}.} It suffices to prove that (i) implies (ii). By contradiction, assume that (i) holds and that there exists a graph $\Gamma$ with $\kappa'(\Gamma) \ge 5$ and with two distinct adjacent edges $vv_1, vv_2 \in E(\Gamma)$, where $v_1$ and $v_2$ may or may not be distinct, such that $\Gamma - \{vv_1, vv_2\} \notin \mbox{$\langle\Z_3\rangle$}$. As $\kappa'(\Gamma) \ge 5$, $|E_{\Gamma}(v)| \ge 5$. Let $K \cong K_4$ with $V(K) = \{w_1, w_2, w_3, w_4\}$. We assume first that $v_1 \neq v_2$ in $\Gamma$, and use $L(v_1, v_2)$ to denote $\Gamma - \{vv_1, vv_2\}$ with $v_1$ and $v_2$ being two distinguished vertices. For $1 \le j \le 2$, let $L_j(v_1^j, v_2^j)$ be a copy of $L(v_1, v_2)$ and let $\phi_j: L_j(v_1^j, v_2^j) \mapsto L(v_1, v_2)$ be a graph isomorphism with $\phi_j(v^j) = v$, $\phi_j(v_1^j) = v_1$ and $\phi_j(v_2^j) = v_2$. Define $J(v^1, v^2) = K \oplus_{w_1w_2} L_1(v_1^1, v_2^1) \oplus_{w_3w_4} L_2(v_1^2, v_2^2)$.
Let $J^k(v^1, v^2)$, ($1 \le k \le 3$), be three isomorphic copies of $J(v^1, v^2)$, and define $G(\Gamma) = K \oplus_{w_1w_2} J^1(v^1, v^2) \oplus_{w_2w_3} J^2(v^1, v^2) \oplus_{w_3w_4} J^3(v^1, v^2)$, as depicted in Figure \ref{construction}. By the definition of $G(\Gamma)$, $G(\Gamma)$ contains six subgraphs $H_i$, ($1 \le i \le 6$), each of which is isomorphic to $\Gamma - \{vv_1, vv_2\}$. It is known that $K \notin \mbox{$\langle\Z_3\rangle$}$. As $\Gamma - \{vv_1, vv_2\} \notin \mbox{$\langle\Z_3\rangle$}$, it follows from Lemma \ref{2sum} that $J(v^1, v^2) \notin \mbox{$\langle\Z_3\rangle$}$, and so by repeated applications of Lemma \ref{2sum}, $G(\Gamma) \notin \mbox{$\langle\Z_3\rangle$}$. Let $W \subseteq E(G(\Gamma))$ be a minimum edge cut of $G(\Gamma)$. If $|W \cap E(H_i)| = 0$ for every $i$, then $W$ is an edge cut of the graph $G(\Gamma)/(\cup_{i=1}^6 H_i)$, and so it is straightforward to check that $|W| \ge 5$. Hence we assume that for some $i$, $W \cap E(H_i) \neq \emptyset$. Then $\Gamma - \{vv_1, vv_2\}$ contains an edge subset $W_i'$ corresponding to $W \cap E(H_i)$ under the isomorphism between $\Gamma - \{vv_1, vv_2\}$ and $H_i$. If $W_i'$ does not separate the neighbors of $v$ and $\{v_1, v_2\}$ in $\Gamma$, then $W_i'$ is an edge cut of $\Gamma$, and so $|W| \ge |W_i'| \ge \kappa'(\Gamma) \ge 5$. Hence by symmetry, we assume that $v$ and $v_1$ are in different components of $\Gamma - W_i'$. Since $\kappa'(\Gamma) \ge 5$, we have $|W_i'| \ge \kappa'(\Gamma - \{vv_1, vv_2\}) = 5-2 = 3$. By the definition of $G(\Gamma)$, $G(\Gamma) - E(H_i)$ contains 2 edge-disjoint $(v, v_1)$-paths, which implies that $|W - E(H_i)| \ge 2$, and so $|W| = |W \cap E(H_i)| + |W - E(H_i)| \ge 3+2=5$. We conclude that $\kappa'(G(\Gamma)) \ge 5$. By Theorem \ref{eqdeletetwoedges}(i), we have $G(\Gamma) \in \mbox{$\langle\Z_3\rangle$}$, contrary to the fact that $G(\Gamma) \notin \mbox{$\langle\Z_3\rangle$}$. Next we assume that $v_1 = v_2$.
Then for $j = 1, 2$, $v_1^j = v_2^j$ in $L_j(v_1^j, v_2^j)$. In this case, we differently define $J(v^1, v^2)$ to be the graph obtained from the disjoint union of $L_1(v_1^1, v_2^1)$ and $L_2(v_1^2, v_2^2)$ by identifying $v_1^1$ with $v^2_1$. Since $L_1(v_1^1, v_2^1)$ is a block of $J(v^1, v^2)$, $J(v^1, v^2) \notin \mbox{$\langle\Z_3\rangle$}$. We again define $G(\Gamma) = K \oplus_{w_1w_2} J^1(v^1, v^2) \oplus_{w_2w_3} J^2(v^1, v^2) \oplus_{w_3w_4} J^3(v^1, v^2)$. Then by Lemma \ref{2sum}, $G(\Gamma) \notin \mbox{$\langle\Z_3\rangle$}$. By a similar argument as shown above, we again conclude that $\kappa'(G(\Gamma)) \ge 5$, and so by Theorem \ref{eqdeletetwoedges}(i), $G(\Gamma) \in \mbox{$\langle\Z_3\rangle$}$. This contradiction establishes the theorem. \fbox{\rule[1.2mm]{0.8mm}{0mm}} \begin{figure} \begin{center} \begin{tikzpicture} \filldraw[black] (0,0) circle (0.05cm); \filldraw[black] (0,1) circle (0.05cm); \filldraw[black] (1,0) circle (0.05cm); \filldraw[black] (1,1) circle (0.05cm); \draw [-] (0,0)--(0,1);\draw [-] (0,0)--(1,0);\draw [-] (0,0)--(1,1); \draw [-] (0,1)--(1,0);\draw [-] (0,1)--(1,1);\draw [-] (1,0)--(1,1); \node at (-.3, .1){$x_1$};\node at (-.3,1.1){$x_2$};\node at (1.3,.1){$x_3$};\node at (1.3,1.1){$x_4$}; \node at (3.7, .1){$x_{11}$};\node at (3.7,1.1){$x_{12}$};\node at (5.3,.1){$x_{9}$};\node at (5.3,1.1){$x_{10}$}; \node at (2, 2.6){$x_7$};\node at (2,4.2){$x_5$};\node at (3,2.6){$x_8$};\node at (3,4.2){$x_6$}; \filldraw[black] (4,0) circle (0.05cm); \filldraw[black] (4,1) circle (0.05cm); \filldraw[black] (5,0) circle (0.05cm); \filldraw[black] (5,1) circle (0.05cm); \draw [-] (4,0)--(4,1);\draw [-] (4,0)--(5,0);\draw [-] (4,0)--(5,1); \draw [-] (4,1)--(5,0);\draw [-] (4,1)--(5,1);\draw [-] (5,0)--(5,1); \filldraw[black] (2,2.93) circle (0.05cm); \filldraw[black] (2,3.93) circle (0.05cm); \filldraw[black] (3,2.93) circle (0.05cm); \filldraw[black] (3,3.93) circle (0.05cm); \draw [-] (2,2.93)--(3,2.93);\draw [-] 
(2,2.93)--(3,3.93);\draw [-] (2,2.93)--(2,3.93); \draw [-] (2,3.93)--(3,2.93);\draw [-] (2,3.93)--(3,3.93);\draw [-] (3,2.93)--(3,3.93); \draw plot [smooth,tension=1.5] coordinates{(0,1)(1,3)(2,3.93)}; \draw plot [smooth,tension=1.5] coordinates{(1,1)(1.5,2.2)(2,2.93)}; \draw plot [smooth,tension=1.5] coordinates{(5,1)(4,3)(3,3.93)}; \draw plot [smooth,tension=1.5] coordinates{(4,1)(3.5,2.2)(3,2.93)}; \draw plot [smooth,tension=1.5] coordinates{(1,0)(2.5,-0.3)(4,0)}; \draw plot [smooth,tension=1.5] coordinates{(0,0)(2.5,-1)(5,0)}; \end{tikzpicture} \end{center} \caption{The graph $J$: a $4$-edge-connected $\mbox{$\langle\Z_3\rangle$}$-reduced graph}\label{figureJaeger} \label{Jaegergraph} \end{figure} ~ We need the following splitting theorem of Mader \cite{Made78} before proceeding to the next proof. For two distinct vertices $x, y$, let $\lambda_G(x, y)$ be the maximum number of edge-disjoint paths connecting $x$ and $y$ in $G$. Mader's theorem below asserts that local edge-connectivity is preserved under splitting. \begin{theorem}\label{maderthm}(Mader \cite{Made78}) Let $G$ be a graph and let $z$ be a non-separating vertex of $G$ with degree at least $4$ and $|N_G(z)|\ge 2$. Then there exist two edges $v_1z, v_2z$ in $G$ such that, after splitting $v_1z, v_2z$, the resulting graph $G'=G-v_1z - v_2z + v_1v_2$ satisfies $\lambda_{G'}(x, y)=\lambda_G(x, y)$ for any two vertices $x, y$ different from $z$.
\end{theorem} \begin{figure} \begin{center} \begin{tikzpicture} \node at (0, -3) {$H(w_3^1, w_3^2)$}; \node at (7.5, -3) {$G^*$}; \draw (0,0) ellipse (1cm and 1.5cm); \filldraw[black] (0.45,1.1) circle (0.05cm);\node at (.1, 1.1) {$u_1$}; \filldraw[black] (0.65,0.5) circle (0.05cm);\node at (.3, 0.5) {$u_2$}; \filldraw[black] (2,1.2) circle (0.05cm); \node at (2, 1.5) {$w_3^1$}; \draw [-] (0.45,1.1)--(2,1.2); \draw [-] (0.65,0.5)--(2,1.2); \filldraw[black] (0.65,-0.2) circle (0.05cm);\node at (.3, -0.2) {$v_1$}; \filldraw[black] (0.55,-0.8) circle (0.05cm);\node at (.2, -0.8) {$v_2$}; \filldraw[black] (2,-0.65) circle (0.05cm);\node at (2, -.95) {$w_3^2$}; \draw [-] (0.65,-0.2)--(2,-0.65); \draw [-] (0.55,-0.8)--(2,-0.65); \filldraw[black] (4,0) circle (0.05cm); \filldraw[black] (4,-1) circle (0.05cm); \filldraw[black] (5,0) circle (0.05cm); \filldraw[black] (5,-1) circle (0.05cm); \draw [-] (4,0)--(5,-1);\draw [-] (4,0)--(5,0); \draw [-] (4,-1)--(5,0);\draw [-] (4,-1)--(5,-1); \draw (3.25,-0.5) ellipse (0.35cm and 0.75cm); \draw [-] (4,0)--(3.3,0.1); \draw [-] (4,0)--(3.3,-0.3); \draw [-] (4,-1)--(3.3,-0.7); \draw [-] (4,-1)--(3.3,-1.1); \draw (5.75,-0.5) ellipse (0.35cm and 0.75cm); \draw [-] (5,0)--(5.7,0.1); \draw [-] (5,0)--(5.7,-0.3); \draw [-] (5,-1)--(5.7,-0.7); \draw [-] (5,-1)--(5.7,-1.1); \filldraw[black] (10,0) circle (0.05cm); \filldraw[black] (10,-1) circle (0.05cm); \filldraw[black] (11,0) circle (0.05cm); \filldraw[black] (11,-1) circle (0.05cm); \draw [-] (10,0)--(11,-1);\draw [-] (10,0)--(11,0); \draw [-] (10,-1)--(11,0);\draw [-] (10,-1)--(11,-1); \draw(9.25,-0.5) ellipse (0.35cm and 0.75cm); \draw [-] (10,0)--(9.3,0.1); \draw [-] (10,0)--(9.3,-0.3); \draw [-] (10,-1)--(9.3,-0.7); \draw [-] (10,-1)--(9.3,-1.1); \draw(11.75,-0.5) ellipse (0.35cm and 0.75cm); \draw [-] (11,0)--(11.7,0.1); \draw [-] (11,0)--(11.7,-0.3); \draw [-] (11,-1)--(11.7,-0.7); \draw [-] (11,-1)--(11.7,-1.1); \draw [-] (11,0)--(11.7,0.1); \draw [-] (11,0)--(11.7,-0.3); \draw 
[-] (11,-1)--(11.7,-0.7); \draw [-] (11,-1)--(11.7,-1.1); \filldraw[black] (7,3) circle (0.05cm); \filldraw[black] (7,2) circle (0.05cm); \filldraw[black] (8,3) circle (0.05cm); \filldraw[black] (8,2) circle (0.05cm); \draw [-] (7,3)--(8,2);\draw [-] (7,3)--(7,2); \draw [-] (7,2)--(8,3);\draw [-] (8,3)--(8,2); \draw(7.5,1.25) ellipse (0.75cm and 0.35cm); \draw [-] (7,2)--(7.15,1.3); \draw [-] (7,2)--(6.8,1.3); \draw [-] (8,2)--(7.8,1.3); \draw [-] (8,2)--(8.15,1.3); \draw(7.5,3.75) ellipse (0.75cm and 0.35cm); \draw [-] (7,3)--(7.15,3.7); \draw [-] (7,3)--(6.8,3.7); \draw [-] (8,3)--(7.8,3.7); \draw [-] (8,3)--(8.15,3.7); \draw plot [smooth,tension=1.5] coordinates{(4,0)(5.5,2)(7,3)}; \draw plot [smooth,tension=1.5] coordinates{(5,0)(6,1.5)(7,2)}; \draw plot [smooth,tension=1.5] coordinates{(8,2)(9,1.5)(10,0)}; \draw plot [smooth,tension=1.5] coordinates{(11,0)(9.5,2)(8,3)}; \draw plot [smooth,tension=1.5] coordinates{(4,-1)(7.5,-2.5)(11,-1)}; \draw plot [smooth,tension=1.5] coordinates{(5,-1)(7.5,-1.9)(10,-1)}; \end{tikzpicture} \end{center} \caption{The construction in Theorem \ref{eqdeleteanytwoedges}} \end{figure} ~ \noindent {\bf Proof of Theorem \ref{eqdeleteanytwoedges}. } (i) $\Rightarrow$ (ii). By contradiction, assume that (i) holds and that there exists a $5$-edge-connected graph $\Gamma$ with $|V(\Gamma)|$ minimized and with two distinct edges $u_1u_2, v_1v_2 \in E(\Gamma)$, where $u_1$ and $v_1$ may or may not be distinct, such that $G =\Gamma - \{u_1u_2, v_1v_2\} \notin \mbox{$\langle\Z_3\rangle$}$. By the minimality of $|V(\Gamma)|$, $G$ must be a $\mbox{$\langle\Z_3\rangle$}$-reduced graph. For $i=1, 2$, let $K^i \cong K_3$ with $V(K^i) = \{w_1^i, w_2^i, w_3^i\}$. Define $K(v_1, v_2) = K^1 \oplus_{w_1^1w_2^1} G(u_1, u_2)$ and $H(w_3^1, w_3^2)= K^2 \oplus_{w_1^2w_2^2}K(v_1, v_2)$. As $K_3$ and $G$ are $\mbox{$\langle\Z_3\rangle$}$-reduced graphs, by Lemma \ref{2sum}(ii), $H(w_3^1, w_3^2)$ is a $\mbox{$\langle\Z_3\rangle$}$-reduced graph.
Moreover, $H(w_3^1, w_3^2)$ has exactly two vertices of degree 2, namely $w_3^1, w_3^2$, and the other vertices of $H(w_3^1, w_3^2)$ have degree at least $5$. Let $J$ be the graph as depicted in Figure \ref{Jaegergraph} with $V(J) = \{x_1, \dots, x_{12}\}$. Obtain a graph $G^*$ by attaching copies of $H(w_3^1, w_3^2)$ and applying the $\oplus_{e}$ operation for each $e=x_{2i-1}x_{2i}$, $1\le i\le 6$, as depicted in Figure 3. Then we have $\delta(G^*) \ge 5$. As $J$ and $H(w_3^1, w_3^2)$ are $\mbox{$\langle\Z_3\rangle$}$-reduced, we conclude by Lemma \ref{2sum}(ii) that $G^*$ is also $\mbox{$\langle\Z_3\rangle$}$-reduced. Thus $G^*$ is a $\mbox{$\langle\Z_3\rangle$}$-reduced graph with minimum degree at least $5$, contrary to (i). This shows that (i) implies (ii). (ii) $\Rightarrow$ (i). Assume that (ii) holds. Then (ii) implies that every $5$-edge-connected graph is $\mbox{$\mathbb Z$}_3$-connected. Let $G$ be a counterexample to (i). Then $G$ is a $\mbox{$\langle\Z_3\rangle$}$-reduced graph with $\delta(G)\ge 5$. If $\kappa'(G) \ge 5$, then by (ii), $G$ itself is $\mbox{$\mathbb Z$}_3$-connected, contrary to the assumption that $G$ is $\mbox{$\langle\Z_3\rangle$}$-reduced. Hence $\kappa'(G) \le 4$. Since $\delta(G) \ge 5$, $G$ must have an essential edge-cut of size at most $4$. Among all essential edge-cuts $[W, W^c]$ of size at most $4$, choose one with $|W|$ minimized. Since $G$ is a $\mbox{$\langle\Z_3\rangle$}$-reduced graph, $G[W]$ is also a $\mbox{$\langle\Z_3\rangle$}$-reduced graph. Moreover, it is possible to add two new edges to $G[W]$ to result in a $5$-edge-connected graph.
If $|[W, W^c]|\le 3$, we obtain a graph $G[W]^+$ from $G[W]$ by appropriately adding two new edges (possibly parallel) joining vertices in $W$ so that $\delta(G[W]^+) \ge 5$, and so by the minimality of $|W|$, we have $\kappa'(G[W]^+) \ge 5$. By the validity of (ii), we conclude that $G[W]$ is $\mbox{$\mathbb Z$}_3$-connected. Since $\delta(G) \ge 5$, $G[W]$ is a nontrivial subgraph of $G$. This contradicts the assumption that $G$ is a $\mbox{$\langle\Z_3\rangle$}$-reduced graph. Hence we assume that $|[W, W^c]|=4$. Let $H=G/{W^c}$ and $z$ be the vertex onto which $G[W^c]$ is contracted, and denote $E_H(z)= \{e_1, e_2, e_3, e_4\}$ with $e_i = zv_i$, $1 \le i \le 4$. Since $E_H(z)$ may contain parallel edges, the $v_i$'s do not have to be distinct. By the minimality of $W$ and Menger's theorem, we have $\lambda_H(x, y)\ge 5$ for any two vertices $x, y\in V(H)-\{z\}$. Suppose first that $H[E_H(z)]$ contains parallel edges. Assume that $z$ and $v_1$ are joined by at least 2 edges. Define $H'' = H/H[\{z, v_1\}]$. By the minimality of $W$, we have $\kappa'(H'') \ge 5$. As $|E_H(z) - E(H[\{z, v_1\}])| \le 2$, it follows by (ii) that $G[W] = H'' - (E_H(z) - E(H[\{z, v_1\}]))$ is $\mbox{$\mathbb Z$}_3$-connected, contrary to the assumption that $G$ is $\mbox{$\langle\Z_3\rangle$}$-reduced. Hence we assume that $H[E_H(z)]$ contains no parallel edges, and so the $v_i$'s are 4 distinct vertices. By Theorem \ref{maderthm}, we may assume that the graph $H'=H -v_1z -v_2z+ v_1v_2$ satisfies $\lambda_{H'}(x, y)=\lambda_H(x, y)\ge 5$ for any two vertices $x, y\in V(H')-\{z\}$. This implies that the graph $H'' = H'/\{zv_3\}$ is $5$-edge-connected. By (ii), $G[W] \cong H'' - \{v_1v_2, e_4\} \in \mbox{$\langle\Z_3\rangle$}$, contrary to the assumption that $G$ is $\mbox{$\langle\Z_3\rangle$}$-reduced. \fbox{\rule[1.2mm]{0.8mm}{0mm}} \\ Proposition \ref{eqdeletethreeedges} indicates certain implications of Conjecture \ref{z3extending}. 
The proof of Proposition \ref{eqdeletethreeedges} is similar to that of Proposition \ref{d5extending} and is omitted. \section{Proofs of Theorems \ref{mainthm}, \ref{ess23con} and \ref{4treez3}} Theorems \ref{mainthm}, \ref{ess23con} and \ref{4treez3} will be proved in this section. We start with a lemma. \begin{lemma}\label{lifting} Let $G$ be a graph, $v$ be a vertex of $G$ with degree at least $4$ and $vv_1, vv_2\in E_G(v)$. If $G_1=G-v+v_1v_2$ is $\mbox{$\mathbb Z$}_3$-connected, then $G$ is $\mbox{$\mathbb Z$}_3$-connected. \end{lemma} \proof~ Let $G_2=G-vv_1- vv_2+v_1v_2$. As $|[v,V(G)-v]_{G_2}| = d_G(v) - 2 \ge 2$, we have $G_2/G_1\in \mbox{$\langle\Z_3\rangle$}$. Since $G_1\in\mbox{$\langle\Z_3\rangle$}$ and $G_2/G_1\in \mbox{$\langle\Z_3\rangle$}$, it follows by Lemma \ref{cf} that $G_2\in \mbox{$\langle\Z_3\rangle$}$. By Lemma 2.4 of \cite{LaiH03}, $G_2 \in \mbox{$\langle\Z_3\rangle$}$ implies that $G\in \mbox{$\langle\Z_3\rangle$}$. \fbox{\rule[1.2mm]{0.8mm}{0mm}} For an integer $k > 0$, it is known (see \cite{Nash64}, or more explicitly, Lemma 3.1 of \cite{LLLX10} or Lemma 3.4 of \cite{LiLC09}) that if $F(H,k) > 0$ for any nontrivial proper subgraph $H$ of $G$, then \begin{equation} \label{FGk} F(G,k) = k(|V(G)| - 1) - |E(G)|. \end{equation} \vspace{0.1cm} \noindent {\bf Proof of Theorem \ref{4treez3}.} Assume that Theorem \ref{4treez3} (i) holds and that $G$ is a graph with $F(G,4) = 0$. If $v \in V(G)$ with $d_G(v) \le 7$ satisfies $\kappa'(G - v) \ge 2$, then $F(G-v, 4) \le 3$ and so by Theorem \ref{4treez3} (i), $G - v$ is $\mbox{$\mathbb Z$}_3$-connected. It follows from Proposition \ref{extendingiff} that $G$ is $\mbox{$\langle\Z_3\rangle$}$-extendable at vertex $v$. Thus if (i) holds, then (ii) would follow as well. Hence it suffices to show that \begin{equation} \label{statement} \mbox{ if $F(G,4)\le 3$ and $\kappa'(G) \ge 2$, then $G \in \mbox{$\langle\Z_3\rangle$}$. 
} \end{equation} We argue by contradiction and assume that \begin{equation} \label{ex} \mbox{ $G$ is a counterexample to (\ref{statement}) with $|V(G)|+|E(G)|$ minimized.} \end{equation} As (i) holds if $|V(G)|\le 2$, we assume that $|V(G)| \ge 3$. By assumption, there exists a set $E_1$ of edges not in $G$ with $|E_1| = F(G,4)$ such that $G^+ = G + E_1$ contains four edge-disjoint spanning trees, denoted $T_1, T_2, T_3, T_4$. \\ \noindent{\bf Claim 1:} Each of the following holds. \\ (i) For any nontrivial proper subgraph $H$ of $G$, $H \notin \mbox{$\langle\Z_3\rangle$}$ and $F(H,4) \ge 3$. \\ (ii) $G$ is $4$-edge-connected. Let $H$ be a nontrivial proper subgraph of $G$. As $F(G/H, 4) \le 3$ (see, for example, Lemma 2.1 of \cite{LiLC09}), if $H \in \mbox{$\langle\Z_3\rangle$}$, then by (\ref{ex}) and $\kappa'(G/H)\ge\kappa'(G) \ge 2$, we have $G/H \in \mbox{$\langle\Z_3\rangle$}$, and so by Lemma \ref{cf}, $G \in \mbox{$\langle\Z_3\rangle$}$, contrary to (\ref{ex}). Hence we must have $H \notin \mbox{$\langle\Z_3\rangle$}$. If $F(H,4) \le 2$, then by $\kappa'(H)\ge 2$ and (\ref{ex}), we have $H \in \mbox{$\langle\Z_3\rangle$}$, contrary to the fact that $H \notin \mbox{$\langle\Z_3\rangle$}$. This proves Claim 1(i). To prove Claim 1(ii), assume that $G$ has a minimum edge-cut $W$ with $|W| \le 3$. Let $H_1$, $H_2$ be the two components of $G-W$. By (i) and by (\ref{FGk}), we have \[ F(H_1, 4) + F(H_2, 4) = \sum_{i=1}^2 [4(|V(H_i)| - 1) - |E(H_i)|] = F(G,4) - 4 + |W| \le |W|-1\le 2. \] This, together with the fact that $W$ is a minimum edge-cut, implies that $\kappa'(H_i) \ge 2$ for each $i \in \{1, 2\}$. Since $|V(G)|\ge 3$, at least one of $H_1$ and $H_2$ is nontrivial, contrary to Claim 1(i). Thus Claim 1(ii) must hold. \\ \noindent{\bf Claim 2:} $E(G^+)=\cup_{i=1}^4E(T_i)$. Suppose that there exists $e\in E(G^+)-\cup_{i=1}^4E(T_i)$. The minimality of $E_1$ indicates that $E_1 \subseteq \cup_{i=1}^4E(T_i)$, and thus $e\in E(G)$. Let $G' = G-e$.
Then $G'$ is a spanning subgraph of $G$ with $F(G', 4) = F(G,4) \le 3$ and $\kappa'(G')\ge 3$ by Claim 1(ii). As $G' \in \mbox{$\langle\Z_3\rangle$}$ implies $G \in \mbox{$\langle\Z_3\rangle$}$, Claim 2 follows from (\ref{ex}). \\ \noindent{\bf Claim 3:} Each of the following holds. \\ (i) $G^+$ has no subgraph $H^+$ with $1 < |V(H^+)| < |V(G^+)|$ such that $F(H^+,4)=0$. \\ (ii) $\kappa'(G^+) \ge 5$ and $G^+$ does not have an essential $5$-edge-cut. \\ (iii) $G^+$ has no vertex of degree $5$. We argue by contradiction to show Claim 3(i), choosing a subgraph $H^+$ of $G^+$ with $1 < |V(H^+)| < |V(G^+)|$ and $F(H^+,4)=0$ such that $|V(H^+)|$ is minimized. By Claim 2, if $X =V(H^+)$, then $H^+=G^+[X]$. If $|X|=2$, then by Claim 1(i), Claim 2 and $F(H^+,4)=0$, we conclude that $E(G[X])$ consists of a cut edge of $G$, contrary to Claim 1(ii). Hence we assume that $|X|\ge 3$. Let $H = H^+ - E_1$. Then $H=G[X]$. Since $F(H^+,4)=0$ and by Claim 2, $F(H,4) \le |E_1|= F(G,4) \le 3$. If $H$ has a cut edge $e$, then by (\ref{FGk}) and as $|V(H)|\ge 3$, one component of $H-e$ must be nontrivial and have 4 edge-disjoint spanning trees, contrary to the minimality of $|V(H^+)|$. Hence $\kappa'(H) \ge 2$, and so by (\ref{ex}), $H \in \mbox{$\langle\Z_3\rangle$}$, contrary to Claim 1(i). This proves Claim 3(i). If $W$ is an edge-cut of size at most $4$ or an essential $5$-edge-cut of $G^+$ with $G^+_1$ and $G^+_2$ being the two components of $G^+ - W$, then by (\ref{FGk}), there exists a nontrivial $H^+ \in \{G^+_1, G^+_2\}$ with $F(H^+,4)=0$, contrary to Claim 3(i). This proves Claim 3(ii). We argue by contradiction to show Claim 3(iii). Let $v_0$ be a vertex with $d_{G^+}(v_0) = 5$, $E_{G^+}(v_0)=\{e_1, e_2, e_3, e_4, e_5\}$, and $v_i$, $1 \le i \le 5$, be vertices with $e_i = v_0v_i$. As $E_{G^+}(v_0)$ may contain parallel edges, the $v_i$'s are not necessarily distinct. Since $F(G^+, 4)= 0$, we may assume that for $1 \le i \le 4$, $e_i \in E(T_i)$, and $e_5 \in E(T_1)$.
By Claim 1(ii), $|E_1 \cap E_{G^+}(v_0)| \le 1$, and so we may assume that $e_1 \in E(G)$. By symmetry among $e_2, e_3, e_4$ and by Claim 1(i) and (ii), $e_1$ has at most one parallel edge, and thus we may assume $e_2\in E(G)$ and $v_2\neq v_1$. Let $e_5''$ be a new edge joining $v_1$ and $v_5$ that is not in $E(G)$. Define $G'' = G-v_0+{ v_1v_2}$ and \[ E_1'' = \left\{ \begin{array}{ll} E_1 & \mbox{ if $E_1 \cap E_{G^+}(v_0) = \emptyset$;} \\ E_1 - E_{G^+}(v_0) & \mbox{ if $|E_1 \cap E_{G^+}(v_0)| =1$ and $e_5 \notin E_1$;} \\ (E_1 - E_{G^+}(v_0)) \cup \{ e_5''\} & \mbox{ if $E_1 \cap E_{G^+}(v_0) = \{e_5 \}$.} \end{array} \right. \] For $i \in \{2,3, 4\}$, $T_i - v_0$ is a spanning tree of $G'' + E_1''$, and $(T_1 - v_0) + e_5''$ is a spanning tree of $G''+E_1''$ as well. It follows from $|E_1''| \le |E_1| \le 3$ that $F(G'', 4) \le 3$, and $|V(G'')|+|E(G'')|<|V(G)|+|E(G)|$. If $G''$ has a cut edge, then as $d_{G}(v_0) \le d_{G^+}(v_0) = 5$, $G$ has an edge-cut $W'$ with $|W'| \le 3$, contrary to Claim 1(ii). Thus $\kappa'(G'') \ge 2$. By (\ref{ex}), $G''\in\mbox{$\langle\Z_3\rangle$}$. Hence $G\in \mbox{$\langle\Z_3\rangle$}$ by Lemma \ref{lifting}, contrary to (\ref{ex}). This proves Claim 3. By Claim 3, $\kappa'(G^{+}) \ge 6$, and so by Lemma \ref{delete3edges}(ii) and $F(G,4)\le 3$, we have $G=G^+-E_1 \in \mbox{$\langle\Z_3\rangle$}$, contrary to (\ref{ex}). This completes the proof. \fbox{\rule[1.2mm]{0.8mm}{0mm}} Theorem \ref{mainthm} is an immediate corollary of Theorem \ref{4treez3}, and we will prove Theorem \ref{ess23con} by a simple discharging argument. The next lemma follows from arguments of Nash-Williams in \cite{Nash64}. A detailed proof can be found in Theorem 2.4 of \cite{YaLL10}. \begin{lemma} \label{gam} Let $G$ be a nontrivial graph and let $k > 0$ be an integer. If $|E(G)|\ge k(|V(G)|-1)$, then $G$ has a nontrivial subgraph $H$ with $F(H,k) = 0$. \end{lemma} \noindent{\bf Proof of Theorem \ref{ess23con}.} It suffices to show (b).
We shall show that every $5$-edge-connected essentially $23$-edge-connected graph contains $4$ edge-disjoint spanning trees. Then Theorem \ref{ess23con}(b) follows from Theorem \ref{4treez3}(ii). Let $G$ be a counterexample with $|E(G)|$ minimized. Then $F(G,4)>0$ and $|V(G)|\ge 4$. If $|E(G)|\ge 4(|V(G)|-1)$, then by Lemma \ref{gam}, there exists a non-trivial subgraph $H$ with $F(H,4) = 0$. By definition of contraction, $G/H$ is $5$-edge-connected and essentially $23$-edge-connected. By the minimality of $G$, $G/H$ has $4$ edge-disjoint spanning trees. As $H$ has $4$ edge-disjoint spanning trees, it follows that (see Lemma 2.1 of \cite{LiLC09}) $F(G,4) = 0$, contrary to the choice of $G$. Hence we have \begin{eqnarray}\label{4treesedges} |E(G)|<4(|V(G)|-1). \end{eqnarray} Since $|V(G)| \ge 4$ and $G$ is essentially $23$-edge-connected, for any edge $uv \in E(G)$, we have \begin{eqnarray}\label{du} d(u)+d(v)\ge 23+2. \end{eqnarray} For integers $i, k \ge 1$, define $D_i(G) = \{v \in V(G): d_G(v) = i\}$, $D_{\le k}(G) = \cup_{i \le k} D_i(G)$, and $D_{\ge k}(G) = \cup_{i \ge k} D_i(G)$. It follows from (\ref{du}) that $D_{\le 8}(G)$ is an independent set. Each vertex begins with charge equal to its degree. If $d(v)\ge 9$ and $vu\in E(G)$, then $v$ gives charge $\frac{d(v)-8}{d(v)}$ to $u$. Note that $G$ may contain parallel edges, and charge is sent along each edge incident with $v$ separately. Clearly, if $v\in D_{\ge 8}(G)$, then $v$ is left with charge at least $d(v)(1-\frac{d(v)-8}{d(v)})=8$. For any vertex $x\in D_{\le 7}(G)$, denote $d(x)=i\in \{5, 6, 7\}$. By (\ref{du}), every neighbor $v$ of $x$ satisfies $d(v)\ge 25-i$, and so $x$ will end with charge at least $$i+\sum_{vx\in E(G)}\frac{d(v)-8}{d(v)}\ge i + \frac{25-i-8}{25-i}i=\frac{(42-2i)i}{25-i}\ge\min\{8, \frac{180}{19}, \frac{98}{9}\}=8.$$ Hence every vertex ends with charge at least $8$. Since the total charge is conserved and equals $\sum_{v\in V(G)}d(v)=2|E(G)|$, we obtain $2|E(G)|\ge 8|V(G)|$, a contradiction to (\ref{4treesedges}). \fbox{\rule[1.2mm]{0.8mm}{0mm}} We remark that there exist $5$-edge-connected and essentially $22$-edge-connected graphs that do not contain $4$ edge-disjoint spanning trees.
Lowering the constant $23$ may require new ideas and more elaborate work. As shown in Propositions \ref{d5extending} and \ref{eqdeletethreeedges}, lowering it to $6$ would imply Conjectures \ref{tutte3flow} and \ref{Jaegerz3}. \section{Two Applications} Recall that a $\mbox{$\langle\Z_3\rangle$}$-reduced graph is a graph without nontrivial $\mbox{$\mathbb Z$}_3$-connected subgraphs. The number of edges in a $\mbox{$\langle\Z_3\rangle$}$-reduced graph is often useful in the reduction method and in some inductive arguments. Theorem \ref{4treez3}, together with Lemma \ref{gam}, establishes an upper bound for the density of a $\mbox{$\langle\Z_3\rangle$}$-reduced graph. \begin{lemma} Every $\mbox{$\langle\Z_3\rangle$}$-reduced graph on $n\ge 3$ vertices has at most $4n-8$ edges. \end{lemma} As defined in \cite{LLLM14}, a graph $G$ is {\bf strongly $\mathbb{Z}_{2s+1}$-connected} if, for every $b : V(G) \rightarrow \mathbb{Z}_{2s+1}$ with $\sum_{v\in V(G)}b(v)=0$, there is an orientation $D$ such that for every vertex $v\in V(G)$, $d^+_D(v)-d^-_D(v)\equiv b(v) \pmod {2s+1}$. Strongly $\mathbb{Z}_{2s+1}$-connected graphs are known as contractible configurations for modulo $(2s+1)$-orientations. The following has recently been obtained. \begin{proposition}\label{2strees}(\cite{LaLL17}) Every strongly $\mathbb{Z}_{2s+1}$-connected graph contains $2s$ edge-disjoint spanning trees. \end{proposition} By the monotonicity of circular flow (see, for example, \cite{GoTZ1998} or \cite{Zhan97}), it follows that every graph with a mod $5$-orientation also has a mod $3$-orientation. It is not known, in general, whether a strongly $\mathbb{Z}_{2k+3}$-connected graph is also strongly $\mathbb{Z}_{2k+1}$-connected. As an application of Proposition \ref{2strees}, if a graph $G$ is a strongly $\mathbb{Z}_{5}$-connected graph, then $F(G,4) = 0$; it then follows from Theorem \ref{4treez3} that $G \in \mbox{$\langle\Z_3\rangle$}$. Hence we obtain the following proposition.
\begin{proposition}\label{sz3sz5} Every strongly $\mathbb{Z}_{5}$-connected graph is $\mathbb{Z}_{3}$-connected. \end{proposition}
\section{Introduction} Discrete models and continuous Langevin equations for surface growth are usually grouped into universality classes. Each universality class is distinguished by a set of power-law exponents of statistical observables related to each other by scaling relations \cite{Barabasi-95}. These exponents are frequently determined by Monte Carlo simulations \cite{Family-88}. Models and equations within the same class share not only the same scaling exponents, but also the same linearities and (or) nonlinearities \cite{Lai-91}. There are two theoretical methods to connect discrete models to the corresponding continuous equations within a universality class. Both formalisms are based on the discrete Langevin equation obtained by the Kramers-Moyal expansion of the master equation. In both approaches, the first and second transition moments of the discrete Langevin equation are interpreted as generalized functions (or distributions) evaluated at the lattice points. The transition moments, which determine the behaviour of the system, are normally expressed in terms of if-then-else structures. Consequently, the transition moments can be written as a proper combination of Heaviside (or unit-step) generalized functions. The first of the two approaches was introduced by Vvedensky {\sl et al.} \cite{Vvedensky-93} and is based on the regularization of the transition moments by means of the individual regularization of each Heaviside function included within the moments. The regularized Heaviside functions are sigmoid curves ({\sl e.g.} trigonometric, hyperbolic tangent, or error functions) that allow Taylor expansions. The standard procedure is to replace each Heaviside function $\Theta$ by a regularizing function $\theta_\varepsilon$ which admits a Taylor expansion around zero \cite{Park-95,Costanza-97,Buceta-05,Muraca-04}. Here, $\theta_\varepsilon$ depends on a continuous regularization parameter $\varepsilon$, in such a way that $\theta_\varepsilon\to \Theta$ when $\varepsilon\to 0$.
The Taylor coefficients depend on both the regularization function and the regularization parameter \cite{Hansmann-13}. It is possible to obtain the continuous Langevin equation of the corresponding universality class by first expanding the regularized first transition moment around zero and then coarse-graining the expression. The second approach was introduced by Buceta and Hansmann \cite{Buceta-12}. In contrast to Vvedensky's approach, it employs the transition moments as generalized functions, without further regularization. Buceta and Hansmann applied these generalized functions to a set of test functions supported by the stationary probability density function (SPDF) of the discrete process. This theoretical approach is inspired by the surface tilting method, which is used to determine {\sl via} Monte Carlo simulations the nonlinearities of discrete models ({\sl e.g.} models belonging to the Kardar-Parisi-Zhang (KPZ) universality class). Using this approach we can derive all coefficients of the continuous Langevin equation, both linear and nonlinear, as well as the noise intensity, which characterize the universality class of the underlying discrete model. To that end, the method uses small transformations in the test space to find the coarse-grained coefficients of restricted and unrestricted discrete processes, {\sl e.g.} the restricted solid-on-solid model and the ballistic model, respectively. The lattice growth models with random deposition followed by instantaneous relaxation to a neighboring site have transition rules which contain three basic microscopic processes: deposition, diffusion, and volume conservation. The vertical incoming flux of particles on the substratum is the source of non-conserved noise in the system. In contrast, systems without particle inflow show conserved noise \cite{Hansmann-13}. The surface relaxation rules lead to microscopic processes of diffusion and volume conservation.
In the coarse-grained limit these models belong to one of three groups (universality classes): models which show only diffusion, only volume conservation, or both diffusion and volume conservation. The RDSR (random deposition with surface relaxation) model, also known as the Family model, belongs to the first group, the Edwards-Wilkinson (EW) universality class, and shows only deposition with diffusion, while the volume conservation process renormalizes to zero. In the second group one finds the Das Sarma-Tamborenea (DST) molecular-beam-epitaxy (MBE) growth model, which belongs to the Villain-Lai-Das Sarma (VLDS) universality class. This class is defined by the VLDS equation, which includes deposition with volume conservation processes. In the third group one finds an intermediate model, the Wolf-Villain MBE growth model, which shows a crossover from the VLDS universality class to the EW class. For the three groups the generic continuous Langevin equation with non-conserved noise, for the height $h=h(x,t)$, is \begin{equation} \frac{\partial h}{\partial t}=F_0+\nabla^2 Z_\mathrm{s}+\nabla\!\cdot\! Z_\mathrm{a} +\eta(x,t)\;,\label{cLE} \end{equation} where $F_0$ is the incoming flux, $Z_\mathrm{s}$ is known as the symmetric (or conserved KPZ) kernel, and $Z_\mathrm{a}$ is known as the antisymmetric kernel. Both kernels are functions of $\nabla h$ and/or $\nabla^2 h$. The term $\nabla^2 Z_\mathrm{s}$ describes the volume conservation process and the term $\nabla\!\cdot\! Z_\mathrm{a}$ describes the diffusion. The mean value of the non-conserved noise $\eta$ is zero, {\sl i.e.} $\langle\eta(x,t)\rangle=0$, and its correlation is \begin{equation} \langle\eta(x,t)\,\eta(x',t')\rangle=2\,D\,\delta(x-x')\,\delta(t-t')\;,\label{corr-noise} \end{equation} where $D$ is the noise intensity.
Before going to the coarse-grained limit, the symmetric kernel of the RDSR model renormalizes to zero ({\sl i.e.} $Z_\mathrm{s}= 0$) and the antisymmetric kernel is \begin{equation} Z_\mathrm{a}=\left[\nu_0+\frac{\nu_2}{3}\,\abs{\nabla h}^2\right]\,\nabla h\;,\label{kernel-Za}\end{equation} up to second order in $\abs{\nabla h}$, which leads to the inhomogeneous diffusion equation \begin{equation} \frac{\partial h}{\partial t}=F_0+\nu\,\nabla^2 h+\eta\;,\label{IDe} \end{equation} with the diffusion coefficient $\nu=\nu_0+\nu_2\,\abs{\nabla h}^2+\mathsf{O}(4)$. The antisymmetric kernel of the DST MBE growth model renormalizes to zero ({\sl i.e.} $Z_\mathrm{a}= 0$) and the symmetric kernel is \begin{equation} Z_\mathrm{s}=\mu\,\nabla^2 h+\frac{\lambda}{2}\,\abs{\nabla h}^2\;,\label{kernel-Zs} \end{equation} to the lowest order in Laplacians and gradients. The equation with $\mu<0$ and $\lambda<0$ leads to the VLDS equation \cite{Villain-91,Lai-91}, and with $\mu<0$ and $\lambda=0$ it leads to the stochastic Mullins-Herring equation \cite{Herring-51,Mullins-57}. In order to obtain the EW equation \cite{Edwards-82} from eq.~(\ref{IDe}) it is common to introduce the coarse-grained space and time variables through the replacements $x\to b^{-1}x\,$, $t\to b^{-z}t\,$, where $z$ is a proper scaling parameter and $b$ parametrizes the extent of coarse graining in such a way that the continuum limit is recovered with $b\to 0$. The corresponding coarse-grained height function is $u(x,t)=b^{\alpha}(h-F_0\,t)\,$, where $\alpha$ is another proper scaling parameter. The coarse-grained noise is given by $\zeta=b^{-(1+z)/2}\,\eta$. Defined in this way, the noise correlation intensity of eq.~(\ref{corr-noise}) remains invariant under the coarse-graining transformation.
Using these replacements in eq.~(\ref{IDe}) one obtains \begin{equation} \frac{\partial u}{\partial t}=b^{2-z}\nu\,\nabla^2 u+ b^{\alpha-(z-1)/2}\,\zeta\;,\nonumber \end{equation} with $\nu=\nu_0+b^{2(1-\alpha)}\nu_2\,\abs{\nabla u}^2+\mathsf{O}(b^{4(1-\alpha)})$. Setting $z=2$ and $\alpha=1/2$ and taking the limit $b\to 0$, the diffusion coefficient $\nu\to\nu_0$ and one obtains the EW equation \begin{equation} \frac{\partial u}{\partial t}=\nu_0\,\nabla^2 u+ \zeta\;.\label{EWe} \end{equation} In this paper we review the RDSR model introduced by Family \cite{Family-86}. We start by linking its growth rules to the Edwards-Wilkinson equation. In Section~\ref{sec:RDSR} we show that its first and second transition moments are separable into three different processes: deposition, diffusion, and volume conservation. In doing so we show that the diffusion (volume conservation) process is related to the antisymmetric (symmetric) kernel; this is the basis for simplifying the derivation of the continuous equation or its coefficients in the following sections. In Section~\ref{sec:regulariz} we derive the continuous Langevin equation {\sl via} regularization of both kernels using the known method of $\Theta$-function regularization \cite{Vvedensky-93}. We show that the symmetric kernel renormalizes to zero in the coarse-grained limit and that the remaining antisymmetric kernel can be identified as the diffusion term of the EW equation. In Section~\ref{sec:framework} we include an overview of generalized function theory applied to the study of discrete surface models. We show how to derive the coefficients of the continuous Langevin equation in terms of the set of test functions supported by the SPDF of the discrete model. In Section~\ref{sec:coef-test} we determine the coefficients {\sl via} small translations in the test space. At the end of that section, using the separability of the test functions in polar coordinates, we find the coarse-grained coefficients in terms of radial functions.
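The exponent bookkeeping behind the choice $z=2$, $\alpha=1/2$ can be verified directly. The short Python sketch below (illustrative only) checks that the exponents of $b$ in the diffusion and noise terms vanish, while the nonlinear correction carries a positive power of $b$ and is therefore suppressed as $b\to 0$:

```python
z, alpha = 2, 0.5

# Exponent of b multiplying the diffusion term and the noise term
diffusion_exp = 2 - z
noise_exp = alpha - (z - 1) / 2
# Exponent of b multiplying the nonlinear correction nu_2 |grad u|^2
nonlinear_exp = 2 * (1 - alpha)

assert diffusion_exp == 0 and noise_exp == 0  # both terms survive with O(1) prefactor
assert nonlinear_exp > 0                       # b^1 -> 0 as b -> 0, so nu -> nu_0
```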
Finally, a summary is given in Section~\ref{sec:concl}. \section{RDSR model: from discrete to continuous\label{sec:RDSR}} All discrete models with random deposition followed by instantaneous relaxation to a neighbouring site can be described similarly. The incoming flux of particles falls sequentially toward the substratum at randomly chosen columns. According to the rules, each incoming particle tries to stick to the chosen column or, otherwise, relaxes to its lowest neighbouring column. The time necessary to deposit a particle layer of average thickness $a$ is $\tau$. We assume a discrete process that takes place on a square lattice of length $L$ with unit cell size $a$. Furthermore, it is assumed that a randomly chosen column or some nearest-neighbouring (NN) column can grow only by $a$. The surface configuration $\boldsymbol{H}$ of the system is determined by the set of heights $\{h_j\}$ which corresponds to the columns $j=1,\dots ,N$ (with $L=N a$). The transition rate $W(\boldsymbol{H},\boldsymbol{H}')$ of a RDSR model, which describes the change between two consecutive surface configurations $\boldsymbol{H}$ and $\boldsymbol{H}'$ within the time lapse $\tau$, is \begin{eqnarray} &&W(\boldsymbol{H},\boldsymbol{H}')=\frac{1}{\tau}\,\sum_{k=1}^N\Bigl[w_k^{(0)}\,\Delta(h'_{k}-h_{k}-a)\,\prod_{j\neq k}\Delta(h'_j-h_j)+w_k^{(-)}\,\Delta(h'_{k-1}-h_{k-1}-a)\!\prod_{j\neq k-1}\Delta(h'_j-h_j)\nonumber\\&&\hspace{21ex}+\,w_k^{(+)}\,\Delta(h'_{k+1}-h_{k+1}-a)\!\prod_{j\neq k+1}\Delta(h'_j-h_j)\Bigr]\;,\label{transition} \end{eqnarray} where $w_k^{(\cdot)}$ is the probability at which the deposition process occurs when the chosen column is $k$. The superscript labels {\scriptsize$(\pm)$} denote deposition by relaxation to one of the neighbouring columns of the column $k$. The superscript label {\scriptsize$(0)$} denotes deposition at the column $k$. Here $\Delta(z)$ is equal to $1$ if $z=0$ and equal to $0$ otherwise.
The first transition moment is \begin{equation} K_j^{(1)} =\sum_{\boldsymbol{H}'}(h'_j-h_j)\,W(\boldsymbol{H},\boldsymbol{H}')=\frac{a}{\tau}\left[w_{j-1}^{(+)}+w_j^{(0)}+w_{j+1}^{(-)}\right]\;,\label{1st-transition} \end{equation} and the second transition moment is \begin{equation} K_{ij}^{(2)} = \sum_{\boldsymbol{H}'}(h'_i-h_i)(h'_j-h_j)\,W(\boldsymbol{H},\boldsymbol{H}')= a K_j^{(1)}\, \delta_{ij}\,.\label{2nd-transition} \end{equation} The growth rules of the RDSR (or Family) model are the following. A particle is deposited on the substrate at the chosen column if the neighbouring columns are not lower in height. Otherwise, the particle is deposited on the substrate at the neighbouring column with the lower height or, in case of equal heights, at one randomly chosen neighbouring column. The probabilities of the RDSR model are \begin{eqnarray} w_j^{(0)}&=&\Theta(h_{j+1}-h_j)\,\Theta(h_{j-1}-h_j)\;,\label{dd}\\ w_{j\pm 1}^{(\mp)}&=&\tfrac{1}{2}\left[1+\Theta(h_{j\pm 2}-h_{j\pm 1})\right]\left[1-\Theta(h_j-h_{j\pm 1})\right]\;,\label{rd} \end{eqnarray} where $\Theta$ is the Heaviside (unit step) function, defined as $\Theta(x)=1$ if $x\ge 0$ and $\Theta(x)=0$ if $x<0$. As mentioned above, the probability $w_j^{(0)}$ refers to the deposition at the chosen column $j$. In contrast, the probabilities $w_{j\pm 1}^{(\mp)}$ refer to the relaxation to column $j$ when the chosen column is $j\pm 1$. From Eqs. (\ref{dd}) and (\ref{rd}) we obtain the identity $w_j^{(0)}+w_j^{(-)}+w_j^{(+)}=1$, which ensures that the average deposition rate is $1/\tau$. Following the master equation approach \cite{Kramers-40}, the discrete Langevin equation for each height $h_j$ ($j=1,\dots ,N$) is \begin{equation} \frac{\mathop{}\!\mathrm{d} h_j}{\mathop{}\!\mathrm{d} t}= K_j^{(1)}(\boldsymbol{H})+\eta_j(t)\;.
\end{equation} The noise is Gaussian white, {\sl i.e.} with the mean value equal to zero and the covariance \begin{equation} \langle\eta_j(t)\eta_k(t')\rangle=K_{jk}^{(2)}\delta(t-t')\;. \end{equation} We can separate the first transition moment $K_j^{(1)}$ into three summands, each of them taking into account a different process: deposition $F$ (or incoming flux), diffusion $D_j$, and volume conservation $C_j$, \begin{equation} K_j^{(1)}=F +D_j+C_j\;, \end{equation} where \begin{eqnarray} F\!&=&\!\!\frac{a}{\tau}\;,\\ D_j\!\!&=&\!\!\frac{a}{2\tau}\Bigl\lbrace\bigl[\Theta(h_{j+2}-h_{j+1}) -\Theta(h_j-h_{j+1})\bigr]-\bigl[\Theta(h_j-h_{j-1})-\Theta(h_{j-2}-h_{j-1}) \bigr]\Bigr\rbrace\;,\\ C_j\!\!&=&\!\!-\frac{a}{2\tau}\Bigl[\Theta(h_{j+2}-h_{j+1})\Theta(h_j-h_{j+1})-2\,\Theta(h_{j+1}-h_j)\Theta(h_{j-1}-h_j) +\Theta(h_j-h_{j-1})\Theta(h_{j-2}-h_{j-1})\Bigr]\;.\nonumber\\ \end{eqnarray} The diffusion term can be rewritten as \begin{equation} D_j=\frac{1}{2a}\Delta^{\!(1)}_2\mathcal{Z}_\mathrm{a}(\srho{-},\srho{+})\biggr\rfloor_{\srho{\pm}=(h_{j\pm 1}-h_j)/a}\,,\label{K1a} \end{equation} where $\Delta^{\!(1)}_2$ is the first variation between the NN columns [{\sl i.e.} $\Delta^{\!(1)}_2 F_j=F_{j+1}-F_{j-1}$] with the property $(2\,a)^{-1}\Delta^{\!(1)}_2\to\nabla$ when $a\to 0$. Here and below, we use the following notation for the partial derivative $\nabla\doteq \partial/\partial x$. The kernel $\mathcal{Z}_\mathrm{a}\!:\mathbb{R}^2\to\mathbb{R}$ is interpreted as a generalized function defined by ($\srho{\pm}\!\in\mathbb{R}$) \begin{equation} \mathcal{Z}_\mathrm{a}(\srho{-},\srho{+})=\zeta\left[\Theta(\srho{+})-\Theta(\srho{-})\right]\;,\label{cZa} \end{equation} with $\zeta=a^2/\tau$. Notice that $\mathcal{Z}_\mathrm{a}$ is antisymmetric, {\sl i.e.} $\mathcal{Z}_\mathrm{a}(x,y)=-\mathcal{Z}_\mathrm{a}(y,x)$.
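Both the normalization $w_j^{(0)}+w_j^{(-)}+w_j^{(+)}=1$ and the decomposition $K_j^{(1)}=F+D_j+C_j$ can be verified exhaustively on local height configurations. The following Python sketch does so (with the illustrative choices $a=\tau=1$ and heights restricted to $\{0,1,2\}$, which cover all sign patterns of the height differences):

```python
from itertools import product

def theta(x):
    # Heaviside (unit step) with Theta(0) = 1, as defined in the text
    return 1 if x >= 0 else 0

def rates(h):
    # h = (h_{j-2}, h_{j-1}, h_j, h_{j+1}, h_{j+2}), with a = tau = 1
    h0, h1, h2, h3, h4 = h
    w0 = theta(h3 - h2) * theta(h1 - h2)                      # eq. (dd)
    wm = 0.5 * (1 + theta(h3 - h2)) * (1 - theta(h1 - h2))    # chosen j -> j-1
    wp = 0.5 * (1 + theta(h1 - h2)) * (1 - theta(h3 - h2))    # chosen j -> j+1
    # K_j^{(1)} from eq. (1st-transition): relaxation in from j-1 and j+1
    wp_left = 0.5 * (1 + theta(h0 - h1)) * (1 - theta(h2 - h1))   # w_{j-1}^{(+)}
    wm_right = 0.5 * (1 + theta(h4 - h3)) * (1 - theta(h2 - h3))  # w_{j+1}^{(-)}
    K = wp_left + w0 + wm_right
    D = 0.5 * ((theta(h4 - h3) - theta(h2 - h3))
               - (theta(h2 - h1) - theta(h0 - h1)))
    C = -0.5 * (theta(h4 - h3) * theta(h2 - h3)
                - 2 * theta(h3 - h2) * theta(h1 - h2)
                + theta(h2 - h1) * theta(h0 - h1))
    return w0, wm, wp, K, D, C

# Exhaustive check over all local height configurations in {0, 1, 2}^5
for h in product(range(3), repeat=5):
    w0, wm, wp, K, D, C = rates(h)
    assert w0 + wm + wp == 1          # normalization of eqs. (dd)-(rd)
    assert K == 1 + D + C             # K_j^{(1)} = F + D_j + C_j with F = 1
```

For instance, a local peak such as $(0,0,1,0,0)$ gives $w_j^{(0)}=0$ and $w_j^{(\pm)}=1/2$, as expected from the relaxation rules.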
The volume conserving term can be rewritten as \begin{equation} C_j=\frac{1}{a^2}\Delta^{\!(2)}_1\mathcal{Z}_\mathrm{s}(\varrho_{_{-}},\varrho_{_{+}})\biggr\rfloor_{\varrho_{_{\pm}}=(h_{j\pm 1}-h_j)/a}\,,\label{K1s} \end{equation} where $\Delta^{\!(2)}_1$ is the second variation [{\sl i.e.} \mbox{$\Delta^{\!(2)}_1f_j=f_{j+1}-2\,f_j+f_{j-1}$}], with the property that $a^{-2}\Delta^{\!(2)}_1\to\nabla^2$ when $a\to 0$. Here, the kernel $\mathcal{Z}_\mathrm{s}\!:\mathbb{R}^2\to\mathbb{R}$ is the generalized function defined by \begin{equation} \mathcal{Z}_\mathrm{s}(\srho{-},\srho{+})=-\frac{a\,\zeta}{2}\;\Theta(\srho{-})\,\Theta(\srho{+})\;.\label{cZs} \end{equation} Notice that $\mathcal{Z}_\mathrm{s}$ is symmetric, {\sl i.e.} $\mathcal{Z}_\mathrm{s}(x,y)=\mathcal{Z}_\mathrm{s}(y,x)$. The term $C_j$ can be obtained by changing $\srho{\pm}\to -\srho{\pm}$ in the first moment of the Krug model with opposite sign \cite{Krug-97}. The real-valued generalized functions $\mathcal{Z}_\mathrm{a}$ and $\mathcal{Z}_\mathrm{s}$ can be related to real analytic functions $Z_\mathrm{a}$ and $Z_\mathrm{s}$, respectively, by regularization techniques. In the limit $a\to 0$ the diffusion term [eq.~(\ref{K1a})] $D_j\to\nabla\cdot Z_\mathrm{a}$ and the volume conserving term [eq.~(\ref{K1s})] $C_j\to\nabla^2 Z_\mathrm{s}\,$. In the next section we show how the symmetric kernel of the RDSR model renormalizes to zero ({\sl i.e.} $Z_\mathrm{s}= 0$). \section{Continuous Langevin equation {\sl via} regularization\label{sec:regulariz}} We use a slightly different procedure than other authors to obtain the same coefficients and equations \cite{Vvedensky-03}. In contrast to their work, we regularize the antisymmetric and symmetric kernels [eqs.~(\ref{cZa}) and (\ref{cZs}), respectively] and not the entire first transition moment at once. 
Using the regularization procedure, the Heaviside function $\Theta(x)$ can be replaced by a smooth real-valued function $\theta_\varepsilon(x)$ depending on the continuous parameter $\varepsilon$. The regularizing function satisfies $\theta_\varepsilon(n)\to\Theta(n)$ when $\varepsilon\to 0^+$ for all $n\in\mathbb{Z}$. There are several proposals to represent $\Theta$ by a shifted analytic function, including the following \begin{equation*} \theta_\varepsilon(x)=\frac{1}{2}\int_{-\infty}^x\;\Bigl[\mathrm{erf}\Bigl(\frac{s+1}{\varepsilon}\Bigr)-\mathrm{erf}\Bigl(\frac{s}{\varepsilon}\Bigr)\Bigr]\mathop{}\!\mathrm{d} s\;, \end{equation*} with $\varepsilon >0$, introduced in refs.~\cite{Haselwandter-06,Haselwandter-07}. The kernels $\mathcal{Z}_\mathrm{s}$ and $\mathcal{Z}_\mathrm{a}$ [eqs.~(\ref{cZs}) and (\ref{cZa}), respectively] can be regularized using the $\varepsilon$-theta function $\theta_\varepsilon$, {\sl i.e.} \begin{eqnarray} &&\mathcal{Z}_\mathrm{a}^{(\varepsilon)}(\srho{-},\srho{+})=\zeta\,\bigl[ \theta_\varepsilon(\srho{+})-\theta_\varepsilon(\srho{-})\bigr]\nonumber\\ &&\mathcal{Z}_\mathrm{s}^{(\varepsilon)}(\srho{-},\srho{+})=-\frac{a\,\zeta}{2}\;\theta_\varepsilon(\srho{+})\,\theta_\varepsilon(\srho{-})\;,\nonumber \end{eqnarray} with $\zeta=a^2/\tau$.
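The property $\theta_\varepsilon(n)\to\Theta(n)$ at the integers can be checked numerically for the regularization above. The sketch below (in Python; the trapezoidal rule and the truncated lower integration limit are implementation choices, not part of the definition) evaluates $\theta_\varepsilon$ at a few integers for a small $\varepsilon$:

```python
import math

def theta_eps(x, eps, s_min=-8.0, n=4000):
    # theta_eps(x) = (1/2) * int_{-inf}^{x} [erf((s+1)/eps) - erf(s/eps)] ds,
    # approximated by the trapezoidal rule on the truncated interval [s_min, x]
    if x <= s_min:
        return 0.0
    ds = (x - s_min) / n
    f = lambda s: 0.5 * (math.erf((s + 1) / eps) - math.erf(s / eps))
    total = 0.5 * (f(s_min) + f(x)) + sum(f(s_min + k * ds) for k in range(1, n))
    return total * ds

# As eps -> 0+, theta_eps(n) -> Theta(n) at every integer n
eps = 0.01
for m in (-2, -1, 0, 1, 2):
    target = 1.0 if m >= 0 else 0.0
    assert abs(theta_eps(m, eps) - target) < 0.01
```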
Expanding the $\varepsilon$-theta in a Taylor series around $x=0$ \begin{equation} \theta_\varepsilon(x)=\sum_{k=0}^{+\infty}\,\frac{A_k^{(\varepsilon)}}{k!}\,x^k\;,\label{e-theta} \end{equation} we find the following expansions [superscripts $(\varepsilon)$ of $A_k^{(\varepsilon)}$ are omitted hereafter] \begin{eqnarray} &&\mathcal{Z}_\mathrm{a}^{(\varepsilon)}(\srho{-},\srho{+})=\zeta\,\bigl(\srho{+} -\srho{-}\bigr)\Bigl[A_1+\frac{1}{2}\,A_2\bigl(\srho{+}+\srho{-}\bigr) +\frac{1}{3!}\,A_3\bigl(\srho{+}^2+\srho{+}\srho{-}+\srho{-}^2\bigr) +\mathrm{O}(3)\Bigr]\nonumber\;,\\ &&\mathcal{Z}_\mathrm{s}^{(\varepsilon)}(\srho{-},\srho{+})=-\frac{a\,\zeta}{2}\,\Bigl[A_0^2+A_0 A_1 \bigl(\srho{+}+\srho{-}\bigr) +\frac{A_0 A_2}{2} \bigl(\varrho_{_{-}}^2-2\gamma\,\varrho_{_{-}}\varrho_{_{+}}+\varrho_{_{+}}^2\bigr)+\mathrm{O}(3)\Bigr]\label{cetas} \end{eqnarray} where \begin{equation} \gamma=-\frac{A_1^2}{A_0 A_2}\,.\label{gamma} \end{equation} Evaluating eqs.~(\ref{cetas}) we obtain \begin{eqnarray} &&\mathcal{Z}_\mathrm{a}^{(\varepsilon)}(\varrho_{_{-}},\varrho_{_{+}})\Bigr\rfloor_{\varrho_{_{\pm}}=(h_{j\pm 1}-h_j)/a}=2\,\zeta\,L_j^{(2)}\Bigl[ A_1+\frac{A_2}{2}\,a\,L_j^{(1)}+\frac{A_3}{3!}\,N_j^{(-1/2)}+\mathrm{O}(3)\Bigr]\;,\nonumber\\ &&\mathcal{Z}_\mathrm{s}^{(\varepsilon)}(\varrho_{_{-}},\varrho_{_{+}})\Bigr\rfloor_{\varrho_{_{\pm}}=(h_{j\pm 1}-h_j)/a}=-\frac{a\,\zeta}{2}\,\Bigl[A_0^2+A_0 A_1\,a\,L_j^{(1)}+A_0A_2(1+\gamma)\,N_j^{(\gamma)}+\mathrm{O}(3)\Bigr] \end{eqnarray} where the linear and quadratic terms are \begin{eqnarray} L_j^{(2)}&=&\frac{h_{j+1}-h_{j-1}}{2\,a}\;,\nonumber\\ L_j^{(1)}&=&\frac{h_{j+1}-2 h_j+h_{j-1}}{a^2}\;,\label{discrete}\\ N_j^{(\beta)} \!&=&\! \frac{(h_{j+1}-h_j)^2-2\,\beta\, (h_{j+1}-h_j)(h_{j-1}-h_j)+(h_{j-1}-h_j)^2}{2 \,a^2(\beta+1)}\;,\nonumber \end{eqnarray} with $-1<\beta\le 1$ the discretization parameter of the nonlinear terms \cite{Buceta-05}.
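The discretized terms in eqs.~(\ref{discrete}) converge to $\nabla h$, $\nabla^2 h$, and $(\nabla h)^2$ when $a\to 0$. A minimal numerical check in Python (the smooth profile $h(x)=\sin x$ and the evaluation point are arbitrary illustrative choices):

```python
import math

# Smooth test profile h(x) = sin(x), evaluated around x0
h = math.sin
x0 = 0.7

def discrete_terms(a, beta):
    hm, h0, hp = h(x0 - a), h(x0), h(x0 + a)
    L2 = (hp - hm) / (2 * a)                       # -> grad h
    L1 = (hp - 2 * h0 + hm) / a**2                 # -> grad^2 h
    N = ((hp - h0)**2 - 2 * beta * (hp - h0) * (hm - h0) + (hm - h0)**2) \
        / (2 * a**2 * (beta + 1))                  # -> (grad h)^2, any beta > -1
    return L2, L1, N

a = 1e-3
for beta in (-0.5, 0.0, 1.0):
    L2, L1, N = discrete_terms(a, beta)
    assert abs(L2 - math.cos(x0)) < 1e-6           # grad h = cos(x0)
    assert abs(L1 + math.sin(x0)) < 1e-6           # grad^2 h = -sin(x0)
    assert abs(N - math.cos(x0)**2) < 1e-6         # (grad h)^2, independent of beta
```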
The usual choice $\beta=1$, called standard or post-point discretization, depends only on the height of the NN columns and, thus, the error of approximating $(\nabla h)^2$ is minimized. In contrast, the choice $\beta=0$, called anti-standard or prepoint discretization, corresponds to the arithmetic mean of the squared slopes around the interface sites. In this work, $\gamma$ depends on the coefficients of the regularization through eq.~(\ref{gamma}). If $0\le\gamma\le 1$ then, for $A_0>0$, we have $A_2<0$; this coincides with the result obtained in ref.~\cite{Jung-99}. The unusual choice $\beta=-1/2$, which has no clear discrete geometric meaning, has a continuous limit as we show below. Expanding eqs.~(\ref{discrete}) around $x=ja$, the discretized terms and their limits when $a\to 0$ are \begin{eqnarray} L_j^{(2)}&=&\nabla h+\frac{1}{6}\nabla^3 h\;a^2+\mathrm{O}(4) \quad\longrightarrow\quad\nabla h\;,\nonumber\\ L_j^{(1)}&=&\nabla^2 h+\frac{1}{12}\nabla^4 h\;a^2+\mathrm{O}(4) \quad\longrightarrow\quad\nabla^2 h\;,\label{continuous}\\ N_j^{(\beta)} &=&(\nabla h)^2+\frac{1}{4}\biggl(\frac{1-\beta}{1+\beta}\biggr)(\nabla^2 h)^2\;a^2+\mathrm{O}(4) \quad\longrightarrow\quad(\nabla h)^2\;.\nonumber \end{eqnarray} Notice that the limit of $N_j^{(\gamma)}$ does not depend on the discretization parameter $\gamma$, as shown in ref.~\cite{Buceta-05}. Taking $\zeta$ finite, the antisymmetric and symmetric contributions of eq.~(\ref{cLE}), respectively, are \begin{eqnarray} &&Z_\mathrm{a}=\lim_{a\to 0}\mathcal{Z}_\mathrm{a}^{(\varepsilon)}(\varrho_{_{-}},\varrho_{_{+}})\Bigr\rfloor_{\varrho_{_{\pm}}=(h_{j\pm 1}-h_j)/a}=2\,\zeta\Bigl[A_1+\frac{A_3}{3!}\,\abs{\nabla h}^2+ \mathrm{O}(4)\Bigr]\,\nabla h\;,\nonumber\\ &&Z_\mathrm{s}=\lim_{a\to 0}\mathcal{Z}_\mathrm{s}^{(\varepsilon)}(\varrho_{_{-}},\varrho_{_{+}})\Bigr\rfloor_{\varrho_{_{\pm}}=(h_{j\pm 1}-h_j)/a}=0\;.
\end{eqnarray} Thus the continuous stochastic differential equation for $h\!=\!h(x,t)$ is eq.~(\ref{IDe}) with \begin{eqnarray} &&\nu_0=2\,\zeta\,A_1\;,\label{regul-coef1}\\ &&\nu_2=\zeta\,A_3\;.\label{regul-coef2} \end{eqnarray} These coefficients depend on the chosen regularization and the regularization parameter $\varepsilon$ through the coefficients of $\theta_\varepsilon$ [see eq.~(\ref{e-theta})]. It is important to take into account that the expansion in eq.~(\ref{IDe}) still includes terms of all orders, but in its coarse-grained limit we find the EW equation~(\ref{EWe}) with the diffusion coefficient $\nu_0$ given by eq.~(\ref{regul-coef1}). \section{Framework based on generalized function theory\label{sec:framework}} According to the previous section, the kernels $\mathcal{Z}_\mathrm{s}$ and $\mathcal{Z}_\mathrm{a}$ are generalized functions of $\boldsymbol{\varrho}=(\varrho_-,\varrho_+)\in\mathbb{R}^2$. In order to calculate statistical observables, it is possible to define at time $t$ a set of test functions $\varphi(\boldsymbol{\varrho},t)$ on which the generalized functions $\mathcal{Z}(\boldsymbol{\varrho})$ can be applied. The test functions are supported by the probability density function $P(\boldsymbol{\sigma}_j,t)$ of finding any column $j$ with surface configuration $\boldsymbol{\sigma}_j\in\mathbb{Z}^2$ at time $t$ [{\sl i.e.} $\varphi(\boldsymbol{\varrho}\!=\!\boldsymbol{\sigma}_j,t)=P(\boldsymbol{\sigma}_j,t)$], with $\boldsymbol{\sigma}_j=(\sigma_{j-},\sigma_{j+})$ and $\sigma_{j\pm}=(h_{j\pm 1}-h_{j})/a$. These test functions $\varphi$ must have different properties for restricted and unrestricted processes \cite{Buceta-12}. Unrestricted processes, {\sl e.g.} RDSR, require that $\varphi\in\mathcal{S}(\mathbb R^2)$, where $\mathcal{S}$ is the test space of $\mathrm{C}^\infty$-functions that, together with their derivatives of all orders, vanish faster than any power of $\varrho_\pm^{-1}$.
{\sl Via} Monte Carlo simulations of the RDSR model, we show that $P(\boldsymbol{\sigma}_j,t)$ converges rather fast to the stationary probability density function $P\!_\mathrm{st}(\boldsymbol{\sigma}_j)$ (SPDF). Figure~\ref{fig:1} shows that, for several configurations $\boldsymbol{\sigma}_j$, the $P(\boldsymbol{\sigma}_j,t)$ have a very long stationary regime, which justifies the introduction of time-independent test functions. We show also that the SPDF has at least exponential decay [see Figure \ref{fig:2}], which ensures $\varphi\in\mathcal{S}$. We define the test function $\varphi(\boldsymbol{\varrho})$ as a real-valued function supported by the discrete SPDF, {\sl i.e.} $\varphi(\boldsymbol{\varrho}=\boldsymbol{\sigma}_j)=P\!_\mathrm{st}(\boldsymbol{\sigma}_j)$. We use here the notation on distributions that was introduced by Schwartz \cite{Schwartz-66}. \begin{figure} \begin{center} \includegraphics[scale=.5]{fig1.eps} \end{center} \caption{(Color online) Plot of the probability density function $P(\boldsymbol{\sigma},t)$ (PDF) as a function of time $t$ for various values of $\boldsymbol{\sigma}$ indicated inside the figure. The symbols show results of Monte Carlo simulations for the RDSR model with periodic boundary conditions and a lattice size $L=1024$, averaged over 25000 realizations. The broken axis is used in order to show the stationary behavior.\label{fig:1}} \end{figure} The distribution $\mathcal{Z}\in\mathcal{S}'$ (dual space of $\mathcal{S}$) applied to a test function $\varphi\in \mathcal{S}$ is defined by \begin{equation} \langle\mathcal{Z}\,,\varphi\rangle=\int_{\mathbb{R}^2}\mathcal{Z}(\boldsymbol{\varrho})\,\varphi(\boldsymbol{\varrho})\;\mathrm{dv}_{\!\varrho}\;.\label{distr_f} \end{equation} Here $\langle\mathcal{Z},\varphi\rangle$ is the expectation value of $\mathcal{Z}$ using the test function $\varphi$ as a real-valued analytic representation of the SPDF. The test function is normalized, {\sl i.e.} $\langle 1,\varphi\rangle=1\,$.
The translation $T_{\boldsymbol{\alpha}}$ of a distribution $\mathcal{Z}$, denoted $T_{\boldsymbol{\alpha}}\mathcal{Z}$, extends the definition given by eq.~(\ref{distr_f}) to \begin{equation} \langle T_{\boldsymbol{\alpha}}\mathcal{Z}\,,\varphi\rangle=\langle\mathcal{Z}\,, T_{-\boldsymbol{\alpha}}\,\varphi\rangle\,, \end{equation} where the translation operator is defined by $T_{\boldsymbol{x}}:\boldsymbol{y}\mapsto\boldsymbol{y} -\boldsymbol{x}$ if $\boldsymbol{y}\,,\boldsymbol{x}\in\mathbb{R}^2$ \cite{Colombeau-84}. As mentioned above, we assume that the test function $\varphi$ takes fixed values in the discrete lattice $\mathbb{Z}^2$ given by the SPDF $P\!_\mathrm{st}$, {\sl i.e.} $\langle T_{\boldsymbol{\sigma}}\delta\,,\varphi\rangle=\varphi(\boldsymbol{\sigma})=P\!_\mathrm{st}(\boldsymbol{\sigma})\,$ for all $\boldsymbol{\sigma}\in\mathbb{Z}^2\,$, where $\delta$ is the Dirac distribution. Applying a translation $T_{\boldsymbol{u}}$ to a point $\boldsymbol{\varrho}\in\mathbb{R}^2$, the test function transforms as $\varphi\rightarrow T_{\boldsymbol{u}}\varphi$\, if\, $(\boldsymbol{\varrho}-\boldsymbol{u})\in\mathbb{R}^2$. The change of the expectation value of $\mathcal{Z}$ is $z(0)\rightarrow z(\boldsymbol{u})$ with \begin{equation} z(\boldsymbol{u})=\langle\mathcal{Z}\,,T_{\boldsymbol{u}}\varphi\rangle=\int_{\mathbb{R}^2} \mathcal{Z}(\boldsymbol{\varrho})\;\varphi(\boldsymbol{\varrho}-\boldsymbol{u})\;\mathrm{dv}_{\!\varrho}\;. \label{eq:w-u} \end{equation} For small translations, the Taylor expansion of $\varphi(\boldsymbol{\varrho}-\boldsymbol{u})$ around $\boldsymbol{u} =\boldsymbol 0\,$ is \begin{equation} \varphi(\boldsymbol{\varrho}-\boldsymbol{u})=\varphi(\boldsymbol{\varrho})-u_\alpha\;\partial_\alpha\varphi\bigr\rfloor_{{\boldsymbol{u}} =\mathbf{0}}+\tfrac{1}{2}\;u_\alpha u_\beta\;\partial^2_{\alpha\beta}\varphi\bigr\rfloor_{{\boldsymbol{u}} =\mathbf{0}}+\mathrm{O}(3)\;.\label{Tvarphi} \end{equation} Here repeated subscripts imply summation.
The $\alpha$-th component of $\boldsymbol{u}$ is $u_\alpha$. We used the notations $\partial_\alpha\,\dot=\,\partial/\partial\varrho_\alpha$ and $\partial^2_{\alpha\beta}\,\dot=\,\partial^2/(\partial\varrho_\alpha\partial\varrho_\beta)$, with $\alpha\,,\beta=1,2$. Since the test function $\varphi$ is known only at points of the lattice $\mathbb{Z}^2$, its derivatives cannot be calculated explicitly. In contrast, the distribution $\mathcal{Z}$ is differentiable everywhere in the distributional sense. Since the test function either has compact support or decays rapidly, one can take advantage of the following identity \begin{equation} \bigl\langle \mathcal{Z}\,,\partial^{\rm{n}}_{\alpha\beta\cdots\omega}\varphi\bigr\rangle=(-1)^{\rm{n}}\bigl\langle \partial^{\rm{n}}_{\alpha\beta\cdots\omega} \mathcal{Z}\,,\varphi\bigr\rangle\;.\label{repl} \end{equation} Using eq.~(\ref{repl}), the observable given by eq.~(\ref{eq:w-u}) for small translations is \begin{equation} z(\boldsymbol{u})=\Bigl\langle\mathcal{Z},\varphi\Bigr\rangle +\Bigl\langle\partial_\alpha\mathcal{Z},\varphi\Bigr\rangle\,u_\alpha +\tfrac{1}{2}\Bigl\langle\partial^2_{\alpha\beta}\mathcal{Z},\varphi\Bigr\rangle\,u_\alpha u_\beta+\mathrm{O}(3)\,.\label{distr-exp0} \end{equation} \begin{figure} \begin{center} \includegraphics[scale=.5]{fig2.eps} \end{center} \caption{(Color online) Semilog plot of the stationary probability density function $P_\text{st}(\boldsymbol{\sigma})$ (SPDF) for different values of $\boldsymbol{\sigma}=(i,j)$. Notice the symmetry of the SPDF (circles). Qualitatively, we observe in all cases that the SPDF data show a clear exponential decay.\label{fig:2}} \end{figure} In the original work, Buceta and Hansmann \cite{Buceta-12} studied a restricted and an unrestricted discrete model belonging to the KPZ universality class: the restricted solid-on-solid model and the ballistic deposition model, respectively.
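A one-dimensional analogue of eqs.~(\ref{repl}) and (\ref{distr-exp0}) can be checked numerically: for $\mathcal{Z}=\Theta$, the first-order coefficient of $z(u)$ is $\langle\delta,\varphi\rangle=\varphi(0)$. The Python sketch below uses a Gaussian test function standing in for a rapidly decaying SPDF (an arbitrary illustrative choice, as are the quadrature and its truncation):

```python
import math

# Gaussian test function phi standing in for a rapidly decaying SPDF
phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def z(u, lo=-10.0, hi=10.0, n=20000):
    # z(u) = <Theta, T_u phi> = int Theta(x) phi(x - u) dx  (trapezoidal rule)
    dx = (hi - lo) / n
    f = lambda x: (1.0 if x >= 0 else 0.0) * phi(x - u)
    return dx * (0.5 * (f(lo) + f(hi)) + sum(f(lo + k * dx) for k in range(1, n)))

# First-order term of the expansion: z'(0) = <dTheta/dx, phi> = <delta, phi> = phi(0)
s = 1e-3
slope = (z(s) - z(-s)) / (2 * s)
assert abs(slope - phi(0)) < 1e-3
```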
They used test-space translations to determine the coarse-grained coefficients of the continuous stochastic differential equation known as the KPZ equation. They derived the coefficients from the transformed average velocity of the interface $v(\boldsymbol{u})=\langle K^{(1)},T_{\boldsymbol{u}}\varphi\rangle$, where the first transition moment $K^{(1)}$ is the drift of the corresponding discrete Langevin equation. In a previous work \cite{Hansmann-13}, they studied two volume conserving surface (VCS) models without restrictions and with conserved noise, which differ from each other in the symmetry of their dynamic hopping rules. In contrast to the original work, these two models make it possible to calculate analytically another observable quantity, the noise intensity. Thus, the continuous noise intensity is $D=\langle K^{(2)},\varphi\rangle$, where $K^{(2)}$ is the second transition moment corresponding to the discrete process. The working methodology introduced to study the symmetric and asymmetric VCS models is used here to study the RDSR model. \section{Continuous Langevin equation coefficients {\sl via} test functions\label{sec:coef-test}} Instead of finding the continuous Langevin equation {\sl via} regularization, we can obtain its non-zero coefficients $\nu_0$ and $\nu_2$ by applying the distribution $\mathcal{Z}_\mathrm{a}$ [Eq.~(\ref{cZa})] to translated test functions $T_{\boldsymbol{u}}\varphi$.
In order to calculate the coefficients of the antisymmetric kernel $Z_\mathrm{a}$ [Eq.~(\ref{kernel-Za})] we perform a translation $\boldsymbol{u}=(s,-s)$ on the surface configuration space with $s\ll 1$, taking into account that $\varphi$ is symmetric in its variables, \begin{equation} z_\mathrm{a}(s) = \langle\mathcal{Z}_\mathrm{a}\,,T_{\boldsymbol{u}}\varphi\rangle = \nu_0\,s+\frac{\nu_2}{3}\,s^3+\mathrm{O}(s^5)\;,\label{za-kernel} \end{equation} where \begin{eqnarray} \nu_0 &=& \bigl\langle(\partial_x-\partial_y)\mathcal{Z}_\mathrm{a}\,,\varphi\bigr\rangle\;,\label{nu_0}\\ \nu_2 &=& \frac{1}{2}\,\bigl\langle(\partial_x-\partial_y)^{\!(3)}\!\mathcal{Z}_\mathrm{a}\,,\varphi\bigr\rangle \label{nu_2}\;, \end{eqnarray} with $(\partial_x-\partial_y)^{\!(3)}=\partial_{xxx}^{\,3}-3\,\partial_{xxy}^{\,3} +3\,\partial_{xyy}^{\,3}-\partial_{yyy}^{\,3}\;$. Eq.~(\ref{za-kernel}) contains only odd powers of $s$ since $\mathcal{Z}_\mathrm{a}$ is antisymmetric and only its odd-order derivatives, which are symmetric, contribute. Here we show only the first- and third-order terms, although the calculations can be extended to higher orders. From eq.~(\ref{cZa}) we obtain the symmetric distribution $(\partial_x-\partial_y)\mathcal{Z}_\mathrm{a}=\zeta\bigl[\delta(x)+\delta(y)\bigr]$, hence the first-order coefficient (\ref{nu_0}) is \begin{equation} \nu_0 = 2\,\zeta\int_{-\infty}^{+\infty}\varphi(0,y)\,\mathop{}\!\mathrm{d} y\;.\label{nu-0-i} \end{equation} In eq.~(\ref{nu_2}), taking into account the identity given by eq.~(\ref{repl}), $\nu_2=\tfrac{1}{2}\,\bigl\langle(\partial_x-\partial_y)\mathcal{Z}_\mathrm{a}\,,(\partial_x-\partial_y)^{\!(2)}\varphi\bigr\rangle$, the nonzero terms contain $\bigl\langle\delta(x),\partial_{xx}^2\varphi\bigr\rangle=\bigl\langle\delta(y),\partial_{yy}^2\varphi\bigr\rangle$.
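Eq.~(\ref{nu-0-i}) can be illustrated numerically: for a sample test function, the slope of $z_\mathrm{a}(s)$ at $s=0$ reproduces $2\,\zeta\int\varphi(0,y)\,\mathrm{d}y$. In the sketch below a separable Gaussian stands in for the SPDF, $\zeta=1$, and the first argument of $\mathcal{Z}_\mathrm{a}$ is taken as $\varrho_+$ so that the slope comes out positive; all of these are illustrative assumptions, not the model's actual SPDF:

```python
import math

# Separable Gaussian test function phi(x, y) = g(x) g(y)
g = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def integral(f, lo=-10.0, hi=10.0, n=20000):
    dx = (hi - lo) / n
    return dx * (0.5 * (f(lo) + f(hi)) + sum(f(lo + k * dx) for k in range(1, n)))

def z_a(s):
    # z_a(s) = <Z_a, T_u phi> with u = (s, -s) and Z_a = Theta(rho_+) - Theta(rho_-);
    # for phi = g(x) g(y) the double integral factorizes into 1D marginals.
    theta = lambda x: 1.0 if x >= 0 else 0.0
    return (integral(lambda x: theta(x) * g(x - s))
            - integral(lambda y: theta(y) * g(y + s)))

nu0 = 2 * integral(lambda y: g(0) * g(y))   # eq. (nu-0-i) with zeta = 1
s = 1e-3
slope = (z_a(s) - z_a(-s)) / (2 * s)
assert abs(slope - nu0) < 1e-3
```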
Then, the third-order coefficient (\ref{nu_2}) is \begin{equation} \nu_2 =\zeta\,\int_{-\infty}^{+\infty}\partial_{xx}^2\varphi(x,y) \Bigr\rfloor_{x=0}\;\mathop{}\!\mathrm{d} y\;. \label{nu-2-i} \end{equation} The regularizing coefficients are related unequivocally to the test functions for the RDSR model in a simple manner [equating eqs.~(\ref{regul-coef1}) and (\ref{nu-0-i}), and equating eqs.~(\ref{regul-coef2}) and (\ref{nu-2-i})]. The coefficients are derived from the antisymmetric kernel, which in this model is a linear combination of $\Theta$-functions. For nonlinear combinations, this relation between the regularizing coefficients and the test functions is unclear. \begin{figure} \begin{center} \includegraphics[scale=.55]{fig3.eps} \end{center} \caption{(Color online) Semilog plot of $f_0(\rho)$ obtained from the SPDF by means of eqs.~(\ref{sl1}), (\ref{sl4a}), (\ref{sl3a}) and (\ref{sl2}). The data show that $f_0$ decays exponentially.\label{fig:3}} \end{figure} As the test function $\varphi(x,y)$ is symmetric under the exchange of its variables, it admits the following Fourier-cosine expansion \begin{equation} \varphi(x,y)=\sum_{n=0}^{+\infty}\,f_n(\rho)\,\cos[n(\theta-\pi/4)]\;,\label{test-fourier} \end{equation} which is invariant under the exchange $\theta\leftrightarrow\pi/2-\theta$. Since the test function is known at the points where the SPDF is defined, the Fourier coefficients $f_n(\rho)$ can be determined at those points. Substituting the Fourier series of eq.~(\ref{test-fourier}) into the normalization condition \begin{equation} \iint_{\mathbb{R}^2}\varphi(x,y)\,\mathop{}\!\mathrm{d} x\mathop{}\!\mathrm{d} y=1\;, \end{equation} we obtain the following equivalent condition for the zero-order Fourier coefficient \begin{equation} \int_0^{+\infty}\rho\;f_0(\rho)\,\mathop{}\!\mathrm{d}\rho=1\;.
\end{equation} The coefficients given by eqs.~(\ref{nu-0-i}) and (\ref{nu-2-i}) can be expressed in terms of the Fourier coefficients of eq.~(\ref{test-fourier}); see Appendix B for details. The lowest-order contributions are \begin{eqnarray} \nu_0\simeq 4\,\zeta\int_0^{+\infty}f_0(\rho)\,\mathop{}\!\mathrm{d}\rho\;,\label{nu-0-f}\\ \nu_2\simeq 32\,\zeta\int_0^{+\infty}\frac{f_4(\rho)}{\rho^2}\,\mathop{}\!\mathrm{d}\rho\;.\label{nu-2-f} \end{eqnarray} It is possible to compute some of these coefficients numerically from simulation data of the SPDF $P_\text{st}(i,j)=\varphi(i,j)$. In Appendix \ref{apA} we calculate the radial functions $f_0$ and $f_4$ given by the SPDF. Figure \ref{fig:3} shows the values of $f_0$ obtained from Monte Carlo simulation data. \section{Summary and Conclusions\label{sec:concl}} We have revisited a random deposition model with surface relaxation which belongs to the Edwards-Wilkinson (EW) universality class. We have established a general framework that can be extended to other discrete models, such as molecular beam epitaxial growth models. We separated the different processes of the RDSR model into deposition, diffusion and volume conservation. This separation revealed the symmetries involved in the growth process and simplified further calculations. We explained that only the antisymmetric contributions determine the characteristic diffusion related to the EW universality class. In addition, we showed that only the symmetric contributions are associated with volume conservation, which in the case of the RDSR model renormalizes to zero in the continuum limit. We took the RDSR model to the continuum in two different ways, obtaining both the continuous Langevin equation (or its coefficients) and the EW equation in the coarse-grained limit. In the first approach, we used a regularization of generalized functions.
We regularized the symmetric and antisymmetric kernels, which are nested in the first transition moment, in order to find the continuous equation. In the second approach, we applied the generalized functions to test functions in order to calculate the coefficients of the continuous equation. The latter approach has the advantage that the coefficients can be estimated from SPDF data of a Monte Carlo simulation. Its disadvantage is that the set of test functions, which is defined by the SPDF, makes the calculation of the coefficients imprecise. In contrast, the first approach has two disadvantages: its results depend on the chosen regularizing function, and the coefficients cannot be calculated from simulation data. In general, the connection between the two approaches is not yet entirely clear. Up to now, it is only understood for the RDSR model treated here, where the regularized function is a linear combination of Heaviside functions. In most other cases, the regularization of a generalized function (a product of Heavisides) is not equal to the product of the regularized Heavisides \cite{Oberguggenberger-92}. Finally, we discuss the applicability of our methodology to other discrete models. This formalism can be extended to other models with deposition and relaxation rules and nonconserved noise, such as the Wolf-Villain MBE model \cite{Wolf-90} in 1+1 dimensions. This model reaches the steady state after a very long time, and studies using dynamic renormalisation group theory show that it asymptotically belongs to the EW universality class \cite{Haselwandter-07}. There is a very long transient, during which the system goes through different universality classes. This suggests that the coefficients of the continuous stochastic differential equation are time-dependent functions. These coefficients change their behaviour substantially when the universality class changes. Additionally, the PDF of height differences also shows a long transient.
Therefore, following our approach, the test functions and coefficients are also time-dependent. Furthermore, this methodology is applicable to other discrete models with deposition and relaxation belonging to the EW universality class \cite{Pal-99}. Based on our work, we believe that the approaches discussed in this paper would facilitate similar research on other discrete stochastic models which involve deposition processes followed by instantaneous relaxation. \section*{Acknowledgements} D.H. gratefully acknowledges CONICET and MINCyT for their support. \section*{Appendix A\label{apA}} In this appendix we calculate all possible values of the Fourier coefficients of equation~(\ref{test-fourier}) in terms of the known values of the test function. See Figure~\ref{fig:4} for details. Explicitly, if $j\in\mathbb{N}_0-\Upsilon$ where $\Upsilon=\{\upsilon\in\mathbb{N}/\upsilon=\sqrt{k^2+\ell^2},\;\forall\; k,\ell\in\mathbb{N}^\ast\;\text{and}\; k>\ell \}$ \begin{equation} \left( \begin{array}{c} \varphi(0,j)\\\varphi(-j,0) \end{array} \right)= \left( \begin{array}{cc} 1&\frac{1}{\sqrt{2}}\\ 1&-\frac{1}{\sqrt{2}} \end{array} \right)\left( \begin{array}{c} f_0(j)\\f_1(j) \end{array} \right)\;,\label{yellow} \end{equation} and $f_n(j)=0$ for $n\ge 2$\,. 
Solving the system yields \begin{equation} \left( \begin{array}{c} f_0(j)\\f_1(j) \end{array} \right)=\frac{1}{2}\left( \begin{array}{cc} 1&1\\ \sqrt{2}&-\sqrt{2} \end{array} \right)\left( \begin{array}{c} \varphi(0,j)\\\varphi(-j,0) \end{array} \right)\;.\label{sl1} \end{equation} Otherwise, with $j\in\Upsilon$, the second case is \begin{equation} \left( \begin{array}{c} \varphi(\ell,k)\\\varphi(0,j)\\\varphi(-\ell,k)\\\varphi(-k,\ell)\\\varphi(-j,0)\\\varphi(-k,-\ell) \end{array} \right)=\left( \begin{array}{cccccc} 1 &\cos\alpha &\cos 2\alpha &\cos 3\alpha &\cos 4\alpha &\cos 5\alpha\\ 1 &\frac{1}{\sqrt{2}} &0 &-\frac{1}{\sqrt{2}} &-1 &-\frac{1}{\sqrt{2}}\\ 1 &\sin\alpha &-\cos 2\alpha &-\sin 3\alpha &\cos 4\alpha &\sin 5\alpha\\ 1 &-\sin\alpha &-\cos 2\alpha &\sin 3\alpha &\cos 4\alpha &-\sin 5\alpha\\ 1 &-\frac{1}{\sqrt{2}} &0 &\frac{1}{\sqrt{2}} &-1 &\frac{1}{\sqrt{2}}\\ 1 &-\cos\alpha &\cos 2\alpha &-\cos 3\alpha &\cos 4\alpha &-\cos 5\alpha \end{array} \right)\left( \begin{array}{c} f_0(j)\\f_1(j)\\f_2(j)\\f_3(j)\\f_4(j)\\f_5(j) \end{array} \right)\;,\label{red} \end{equation} where $j=\sqrt{k^2+\ell^2}$ ($k,\ell\in\mathbb{N}^\ast,\hspace{1ex}k>\ell$) and $\alpha=\arctan(k/\ell)-\pi/4$ (with $0<\alpha<\pi/4$). Solving the equation system yields \begin{equation} \left( \begin{array}{c} f_0(j)\\f_2(j)\\f_4(j) \end{array} \right)=\frac{1}{4\cos^2 2\alpha} \left( \begin{array}{ccc} \frac{1}{2} &\cos 4\alpha &\frac{1}{2} \\ \cos 2\alpha &0 &-\cos 2\alpha\\ \frac{1}{2} &-1 &\frac{1}{2} \end{array} \right)\left( \begin{array}{c} \varphi(\ell,k)+\varphi(-k,-\ell)\\ \varphi(0,j)+ \varphi(-j,0)\\ \varphi(-\ell,k)+\varphi(-k,\ell) \end{array} \right)\;,\label{sl4a} \end{equation} \begin{equation} \left( \begin{array}{c} f_1(j)\\f_3(j)\\f_5(j) \end{array} \right)=\frac{1}{2\,\sin 4\alpha\,(1+\cos 4\alpha)} \!\!\left(\!\!
\begin{array}{ccc} \sin 4\alpha\,\cos\alpha&\frac{1}{\sqrt{2}}\sin 8\alpha &\sin 4\alpha\,\sin\alpha\\ \sin 3\alpha\,\cos 2\alpha &-\frac{1}{\sqrt{2}}\sin 4\alpha &-\cos 3\alpha\,\cos 2\alpha\\ \sin\alpha\,\cos 2\alpha &-\frac{1}{\sqrt{2}}\sin 4\alpha &\cos\alpha \,\cos 2\alpha \end{array} \!\!\right)\!\!\left(\! \begin{array}{c} \varphi(\ell,k)-\varphi(-k,-\ell)\\ \varphi(0,j)- \varphi(-j,0)\\ \varphi(-\ell,k)-\varphi(-k,\ell) \end{array} \!\right)\!\!.\label{sl4b} \end{equation} \begin{figure} \begin{center} \includegraphics[scale=.3]{fig4.eps}\hspace{1cm} \end{center} \caption{(Color online) Symbols show the integer coordinates $(i,j)$ where the test function $\varphi$ is known, {\sl i.e.} $\varphi(i,j)=P_\text{st}(i,j)$. We show a half-space plot, taking into account the symmetry property of the test function. Each symbol type corresponds to an equation system for the calculation of all possible values of the Fourier coefficients of eq.~(\ref{test-fourier}); yellow circle: eq.~(\ref{yellow}), red square: eq.~(\ref{red}), white star: eq.~(\ref{white}), and blue diamond: eq.~(\ref{blue}).\label{fig:4}} \end{figure} Also, if $\sigma\in\Sigma$ where $\Sigma=\{\sigma\in\mathbb{R}-\mathbb{N}/\sigma= \sqrt{k^2+\ell^2},\;\forall\; k,\ell\in\mathbb{N}^\ast\;\text{and}\; k>\ell \}$ \begin{equation} \left( \begin{array}{c} \varphi(\ell,k)\\\varphi(-\ell,k)\\\varphi(-k,\ell)\\\varphi(-k,-\ell) \end{array} \right)=\left( \begin{array}{cccc} 1 &\cos\alpha &\cos 2\alpha &\cos 3\alpha\\ 1 &\sin\alpha &-\cos 2\alpha&-\sin 3\alpha\\ 1 &-\sin\alpha &-\cos 2\alpha &\sin 3\alpha\\ 1 &-\cos\alpha &\cos 2\alpha &-\cos 3\alpha \end{array} \right)\left( \begin{array}{c} f_0(\sigma)\\f_1(\sigma)\\f_2(\sigma)\\f_3(\sigma) \end{array} \right)\;,\label{white} \end{equation} where $\ell\neq k$ and $\alpha=\arctan(k/\ell)-\pi/4$ (with $0<\alpha<\pi/4$).
Solving the system yields \begin{equation} \left( \begin{array}{c} f_0(\sigma)\\f_2(\sigma) \end{array} \right)=\frac{1}{4\cos 2\alpha} \left( \begin{array}{cc} \cos 2\alpha &\cos 2\alpha\\ 1 &-1 \end{array} \right)\left( \begin{array}{c} \varphi(\ell,k)\,+\,\varphi(-k,-\ell)\\ \varphi(-\ell,k)\,+\,\varphi(-k,\ell) \end{array} \right)\;,\label{sl3a} \end{equation} \begin{equation} \left( \begin{array}{c} f_1(\sigma)\\f_3(\sigma) \end{array} \right)=\frac{1}{2\,\sin 4\alpha} \left( \begin{array}{cc} \sin 3\alpha &\cos 3\alpha\\ \sin\alpha &-\cos\alpha \end{array} \right)\left( \begin{array}{c} \varphi(\ell,k)\,-\,\varphi(-k,-\ell)\\ \varphi(-\ell,k)\,-\,\varphi(-k,\ell) \end{array} \right)\;.\label{sl3b} \end{equation} Finally, explicitly for $j\in\mathbb{N}_0$ \begin{equation} \left( \begin{array}{c} \varphi(j,j)\\\varphi(-j,j)\\\varphi(-j,-j) \end{array} \right)=\left( \begin{array}{ccc} 1 &1 &1\\ 1 &0 &-1\\ 1 &-1 &1 \end{array} \right)\left( \begin{array}{c} f_0(\sqrt{2}j)\\f_1(\sqrt{2}j)\\f_2(\sqrt{2}j) \end{array} \right)\;,\label{blue} \end{equation} and $f_n(\sqrt{2}j)=0$ for $n\ge 3$\,.
Solving the system yields \begin{equation} \left( \begin{array}{c} f_0(\sqrt{2}j)\\f_1(\sqrt{2}j)\\f_2(\sqrt{2}j) \end{array} \right)=\frac{1}{4}\left( \begin{array}{ccc} 1 &2 &1\\ 2 &0 &-2\\ 1 &-2 &1 \end{array} \right)\left( \begin{array}{c} \varphi(j,j)\\\varphi(-j,j)\\\varphi(-j,-j) \end{array} \right)\;.\label{sl2} \end{equation} \section*{Appendix B\label{apB}} In order to obtain the coefficient $\nu_0$ in terms of the Fourier coefficients [eq.~(\ref{test-fourier})], it is easy to show that the integral of eq.~(\ref{nu-0-i}) is \begin{equation} \int_{-\infty}^{+\infty}\varphi(0,y)\,\mathop{}\!\mathrm{d} y=\sum_{n=0}^{+\infty}\bigl[1+(-1)^n\bigr]\cos\Bigl(\frac{n\pi}{4}\Bigr)\!\int_0^{+\infty}\!f_n(\rho)\,\mathop{}\!\mathrm{d}\rho=2\,\sum_{k=0}^{+\infty}(-1)^k\!\int_0^{+\infty}f_{4k}(\rho)\,\mathop{}\!\mathrm{d}\rho\;.\label{int-nu-0} \end{equation} Similarly, we obtain the coefficient $\nu_2$ in terms of the Fourier coefficients [eq.~(\ref{test-fourier})]. Taking into account \begin{eqnarray} &&\frac{\partial\hspace{1ex}}{\partial x}=\cos\theta\,\frac{\partial\hspace{1ex}}{\partial\rho}-\frac{1}{\rho}\,\sin\theta\,\frac{\partial\hspace{1ex}}{\partial \theta}\nonumber\\ &&\frac{\partial\hspace{1ex}}{\partial y}=\sin\theta\,\frac{\partial\hspace{1ex}}{\partial\rho}+\frac{1}{\rho}\,\cos\theta\,\frac{\partial\hspace{1ex}}{\partial \theta}\;, \end{eqnarray} it is straightforward to show \begin{eqnarray*} &&\left.\frac{\partial^2\varphi}{\partial x^2}\right\rfloor_{x=0,y>0}=-\frac{1}{y^2}\;\sum_{n=1}^{+\infty}n^2\cos\Bigl(\frac{n\pi}{4}\Bigr)f_n(y)\\ &&\left.\frac{\partial^2\varphi}{\partial x^2}\right\rfloor_{x=0,y<0}=-\frac{1}{y^2}\;\sum_{n=1}^{+\infty}n^2\cos\Bigl(\frac{3 n\pi}{4}\Bigr)f_n(-y)\;.
\end{eqnarray*} Then the integral of eq.~(\ref{nu-2-i}) is \begin{equation} \int_{-\infty}^{+\infty}\partial^2_{xx}\varphi\Bigr\rfloor_{x=0}\mathop{}\!\mathrm{d} y=- \sum_{n=1}^{+\infty}n^2\bigl[1+(-1)^n\bigr]\,\cos\Bigl(\frac{n\pi}{4}\Bigr)\int_0^{+\infty}\frac{f_n(\rho)}{\rho^2}\,\mathop{}\!\mathrm{d}\rho=-32\,\sum_{k=1}^{+\infty}(-1)^k\,k^2\!\int_0^{+\infty}\frac{f_{4k}(\rho)}{\rho^2}\,\mathop{}\!\mathrm{d}\rho\;.\label{int-nu-2} \end{equation} The series of eqs.~(\ref{int-nu-0}) and (\ref{int-nu-2}) can be approximated at lowest order, taking into account that the integrals in the series converge strongly.
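The explicit inverse quoted in eq.~(\ref{sl2}) and the lowest-order estimate of eq.~(\ref{nu-0-f}) can be cross-checked numerically. The sketch below uses numpy; the exponential form of $f_0$ is only an assumption suggested by Fig.~\ref{fig:3}, and the values of $\zeta$ and the decay length $\rho_0$ are arbitrary illustrative choices:

```python
import numpy as np

# Verify the inverse pair of the "blue" system: M maps the Fourier
# coefficients (f0, f1, f2) to the test-function values on the diagonals.
M = np.array([[1.0,  1.0,  1.0],
              [1.0,  0.0, -1.0],
              [1.0, -1.0,  1.0]])
M_inv = 0.25 * np.array([[1.0,  2.0,  1.0],
                         [2.0,  0.0, -2.0],
                         [1.0, -2.0,  1.0]])
assert np.allclose(M_inv @ M, np.eye(3))

# Lowest-order estimate nu_0 ~= 4 zeta * int_0^inf f0(rho) drho, assuming
# (hypothetically, as Fig. 3 suggests) f0(rho) = A exp(-rho/rho0).
# The normalization int_0^inf rho f0(rho) drho = 1 fixes A = 1/rho0**2.
def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

zeta, rho0 = 1.0, 2.0                      # arbitrary illustrative values
rho = np.linspace(0.0, 60.0, 200001)
f0 = np.exp(-rho / rho0) / rho0**2
assert abs(trapezoid(rho * f0, rho) - 1.0) < 1e-6   # normalization holds
nu0 = 4.0 * zeta * trapezoid(f0, rho)
assert abs(nu0 - 4.0 * zeta / rho0) < 1e-6          # analytic: 4*zeta/rho0
```

With real SPDF data, $f_0$ and $f_4$ extracted via eqs.~(\ref{sl1})--(\ref{sl2}) would replace the assumed exponential profile.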
\section{Background} \label{sec:background} In this section, we show the implications and challenges of transitioning into low power states (P/C/T-states) during synchronization and communication primitives to save energy, using two practical examples. As a test platform we used a compute node equipped with two Intel Haswell E5-2630 v3 CPUs, with 8 cores at 2.4 GHz nominal clock speed and 85W Thermal Design Power (TDP), running the production software stack of Intel systems\footnote{We use Intel \textit{MPI Library 5.1} as the runtime for communication, coupled with Intel \textit{ICC/IFORT 18.0} as our toolchain. We choose the Intel software stack because it is currently used in our target systems and well supported on most HPC machines based on Intel architectures}. We use a single compute node for the following exploration because it is a worst-case scenario for energy-saving strategies in MPI applications, since communications happen in a very short time. All the tests in this Section have been executed on a real scientific application, namely QuantumESPRESSO, a suite of packages for performing Density Functional Theory based simulations at the nanoscale, widely employed to estimate ground state and excited state properties of materials \emph{ab initio}. For these single-node tests, we used the CP package parallelized with MPI. We use QE because it is a paradigmatic application that shows the typical behaviors of HPC codes.
QE's main computational kernels include dense parallel linear algebra (diagonalization) and 3D parallel FFT, which makes the following exploration relevant for many HPC codes\footnote{QE's mostly used packages are: (i) \textit{Car-Parrinello} (CP) simulation, which prepares an initial configuration of a thermally disordered crystal of chemical elements by randomly displacing the atoms from their ideal crystalline positions; (ii) PWscf (Plane-Wave Self-Consistent Field), which solves the self-consistent Kohn and Sham (KS) equations and obtains the ground state electronic density for a representative case study \cite{avvisati2017fepc}}. To explore the system behavior under different workload distributions in a single-node evaluation, we focused on the computation of the band structure of Silicon along the main symmetry lines. When executed by a user with no domain expertise and with default parameters, QE runs with a hybrid MPI parallelization strategy in which only one MPI process performs the diagonalization and all the MPI processes perform the FFT kernel. We will later refer to this case as \textit{QuantumESPRESSO CP Not Expert User} (QE-CP-NEU). Differently, when an expert user runs the same problem, they change the parameters to better balance the workload, using multiple MPI processes to parallelize the diagonalization kernel as well. We will later refer to this case as \textit{QuantumESPRESSO CP Expert User} (QE-CP-EU). In the QE-CP-NEU case, while a single process works on the linear algebra kernel, the other ones remain in busy wait on an MPI call. In the following text, we will compare fine-grain power management solutions with the busy-waiting mode (default mode) of the MPI library, where processes continuously poll the CPU for the whole waiting time in MPI synchronization points.
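The two waiting policies compared in the rest of this Section differ only in what a process does while a message has not yet arrived: busy waiting spins on a condition and keeps the core fully loaded, whereas an idle wait yields the core to the OS until it is woken up. A generic Python illustration of the two mechanisms (not the MPI library's actual internals):

```python
import threading
import time

def busy_wait(flag):
    """Busy-waiting policy: poll continuously; the core stays 100% loaded
    (lowest wake-up latency, highest power draw)."""
    while not flag["ready"]:
        pass

def idle_wait(event):
    """Idle-waiting policy: block and yield the core to the OS idle task,
    which may enter a C-state; wake-up goes through an interrupt."""
    event.wait()

# A producer thread standing in for the arrival of an MPI message.
flag, event = {"ready": False}, threading.Event()

def producer():
    time.sleep(0.05)
    event.set()
    flag["ready"] = True

threading.Thread(target=producer).start()
t0 = time.perf_counter()
busy_wait(flag)          # spins for ~50 ms at full load
idle_wait(event)         # event already set at this point: returns at once
elapsed = time.perf_counter() - t0
assert elapsed >= 0.05
```

The trade-off studied below is exactly this one: the idle wait frees power (and turbo headroom) but pays sleep/wake-up transition costs on every MPI call.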
\begin{figure}[t] \centering \includegraphics[width=1.00 \linewidth]{./images/turbo} \caption{Time plot of frequency for QE-CP-NEU: one curve identifies the frequency of the MPI process working on the diagonalization, while \textit{No Diag} is the average frequency of the MPI processes not involved in the diagonalization.} \label{fig:turbo} \end{figure} \subsection{Wait-mode/C-state MPI library} \label{sec:wait_mode} Usually, MPI libraries use a busy-waiting policy in collective synchronizations to avoid performance penalties. This is also the default behavior of the Intel MPI library. This library can also be configured to release control to the idle task of the operating system (OS) during waiting time, to leverage the C-states of the system. This allows cores to enter sleep states and be woken up by the MPI library, through an interrupt routine, when the message is ready. In the Intel MPI library, the wait-mode mechanism is configured through the environment variable \textit{I\_MPI\_WAIT\_MODE}. This allows the library to leave control to the idle task, reducing the power consumption of cores waiting in MPI. The transitions in and out of sleep mode induce overheads in the execution time. Figure \ref{fig:cs_ps_ts} reports the experimental results; the wait-mode strategy is identified with \textit{CS}. We can see the overhead induced by the wait mode w.r.t. the default busy-waiting configuration, which worsens the execution time by 25.85\%. This is explained by the high number of MPI calls in the QE application, which leads to frequent sleep/wake-up transitions and high overheads. From the same figure, we can also see that the energy saving is negative (-12.72\%): the power saving obtained in the MPI primitives does not compensate for the overhead induced by the sleep/wake-up transitions. Indeed, the power reduction is 12.83\%.
This is confirmed by the average load of the system, which is 83.02\%, an effect of the C-state activity in the MPI primitives. The average frequency is 2.6GHz, which is the standard turbo frequency of our target system. Surprisingly, the QE-CP-NEU case has a negative overhead (-1.08\%, i.e., a speedup). This speedup is given by the turbo logic of our system. Indeed, we can see that the average frequency is slightly higher than 2.6GHz, which means that the process doing the diagonalization can leverage the power budget freed by the other processes not involved in the diagonalization, while they are waiting in a sleep state in the MPI runtime. In figure \ref{fig:turbo}, we report the average frequency of the process working on the diagonalization and the average frequency of all the other MPI processes. In the target system, a single core can reach up to 3.2 GHz if only one core is running; this is what happens when all the other cores are waiting in a sleep state for the termination of the diagonalization workload. The frequency boosting unleashed by the idle mode of the MPI library, combined with the unbalanced workload, can save up to 16.69\% of energy with a power saving of 20.86\%. As a conclusion of this first exploration, we recognize that it is possible to leverage the wait mode of the MPI library to save power without increasing the execution time, but energy savings and the impact on the time-to-solution (TTS) depend on the granularity of the MPI calls, which can lead to significant penalties if the application is characterized by frequent MPI calls.
\begin{figure*}[t] \centering \includegraphics[width=1.00 \linewidth]{./images/dynlink_logview} \caption{Dynamic linking events when COUNTDOWN is injected at loading time in the application, and logical view of all the components.} \label{fig:dynlink_logview} \end{figure*} \subsection{DVFS/P-state MPI library} To overcome the overheads of C-state transitions, we focus this part of the exploration on an active low power state, namely DVFS (P-states). The Intel MPI library does not implement such a feature, so we manually instrumented all the MPI calls of the application with a \textit{prologue} and an \textit{epilogue} function to scale down and raise the frequency when the execution enters and exits an MPI call, respectively. To avoid interference with the power governor of the operating system, we disabled it on our compute node, granting us complete control of frequency scaling. We use the MSR driver to change the current P-state, writing the \textit{IA32\_PERF\_CTL} register with the highest and lowest available P-states of the CPU, which correspond to the turbo and 1.2GHz operating points. In figure \ref{fig:cs_ps_ts} we report the results of this exploration, where the P-state case is labelled \textit{PS}. In the overhead plot, in figure \ref{fig:cs_ps_ts}.a, we can see that the overhead is significantly reduced w.r.t. the C-state mode, from the 25.85\% obtained previously to 5.96\%. This means that the overhead of scaling the frequency is lower than the cost of the sleep/wake-up transitions. However, the energy and power savings are almost zero. This happens because, in QE-CP-EU, all the MPI processes participate in the diagonalization; thus we have a high number of MPI calls of very short duration. This is also confirmed by the average frequency, which does not show significant variations w.r.t. busy waiting, with a measured average frequency of 2.4GHz. The load bar reports 100\% of activity, which means that there is no idle time, as expected.
Focusing on the QE-CP-NEU case, in figure \ref{fig:cs_ps_ts}.b, the overhead is 3.88\%, lower than for QE-CP-EU. In addition, in this case we have significant energy and power savings, of 14.74\% and 14.75\% respectively. These savings are due to the workload unbalance and to the long time spent in MPI calls by the processes not involved in the diagonalization. This is confirmed by the lower average frequency (1.95GHz). The load is unaltered, as expected. In conclusion, using DVFS for fine-grain power management instead of the idle mode allows better control of the overhead for both balanced and unbalanced workloads. However, the overhead is still significant, and in HPC the TTS is the prime goal. \subsection{DDCM/T-state MPI library} One crucial question is: are the overheads of fine-grain power management strategies induced by the specific power management states? To answer this question, we considered duty-cycling low power states\footnote{In this Section we also use the Dynamic Duty Cycle Modulation (DDCM) (also known as throttling states or T-states) available in Intel architectures, which is characterized by lower overhead. DDCM has been supported in Intel processors since Pentium 4 and enables on-demand software-controlled modulation of the clock duty cycle.}. In Intel CPUs, DDCM is used by the \textit{HW power controller} to reduce the power consumption when the CPU identifies thermal hazards. Similarly to \cite{DDCM}, we use DDCM to reduce the power consumption of the cores in MPI calls. We manually instrumented the target application as before: in the \textit{prologue} function of each MPI call we configure DDCM to 12.5\% of clock cycles, which means that for each clock cycle the next 7 are gated, while in the \textit{epilogue} function we restore DDCM to 100\% of clock cycles. We control DDCM by writing to its configuration register, called \textit{IA32\_CLOCK\_MODULATION}, through the MSR driver.
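The prologue/epilogue instrumentation used throughout this Section boils down to a 64-bit write into a model-specific register through the Linux msr driver. A sketch follows; the register addresses (0x199 for \textit{IA32\_PERF\_CTL}, 0x19A for \textit{IA32\_CLOCK\_MODULATION}) are the documented Intel ones, but the field encodings are simplified and the helper names are ours. Executing the writes requires root privileges and the msr kernel module:

```python
import os
import struct

IA32_PERF_CTL = 0x199          # P-state request register
IA32_CLOCK_MODULATION = 0x19A  # T-state (duty-cycle) register

def perf_ctl_value(ratio):
    """Encode a target core ratio (frequency = ratio x 100 MHz) into
    bits [15:8] of IA32_PERF_CTL (simplified encoding)."""
    return (ratio & 0xFF) << 8

def clock_mod_value(duty_level, enable=True):
    """Encode an on-demand duty-cycle level into IA32_CLOCK_MODULATION:
    bit 4 enables modulation, bits [3:1] select the level
    (level 1 of 8 ~ 12.5% of clock cycles; simplified encoding)."""
    return (1 << 4 if enable else 0) | ((duty_level & 0x7) << 1)

def write_msr(cpu, reg, value):
    """Write a 64-bit value into an MSR via /dev/cpu/<cpu>/msr (needs root)."""
    fd = os.open("/dev/cpu/%d/msr" % cpu, os.O_WRONLY)
    try:
        os.pwrite(fd, struct.pack("<Q", value), reg)
    finally:
        os.close(fd)

# Prologue of an MPI call: throttle to ~12.5% duty cycle;
# epilogue: restore full speed.  (Not executed here: hardware access.)
# write_msr(0, IA32_CLOCK_MODULATION, clock_mod_value(1))                 # enter
# write_msr(0, IA32_CLOCK_MODULATION, clock_mod_value(0, enable=False))   # exit

assert perf_ctl_value(12) == 0x0C00      # 1.2 GHz request
assert clock_mod_value(1) == 0x12        # on-demand modulation, 12.5%
```

A production implementation would instead be a C wrapper around the PMPI interface, but the register traffic per MPI call is the same: one write on entry and one on exit.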
In figure \ref{fig:cs_ps_ts}.a, the DDCM results are reported with \textit{TS} bars. Surprisingly, the overhead induced by T-states is greater than that of the wait mode, reaching 34.78\%. As a consequence, the energy saving is the worst, leading to an energy penalty of 14.94\%. The load is significantly reduced owing to the throttling, to an average of 67.78\%, while the frequency is constant at 2.6GHz. In figure \ref{fig:cs_ps_ts}.b, we report the T-state results for QE-CP-NEU. Even in this unbalanced workload case, the T-states are the worst. T-state transitions introduce an overhead of 15.82\% as a consequence of the power reduction, with a very small energy saving of only 4.75\% and a power saving of 21.97\%. The load of the system is reduced to 55.45\%, similarly to the idle mode, and the frequency remains unchanged, as expected. As a matter of fact, we have shown that phase-agnostic fine-grain power management leads to significant application overheads which may nullify the overall saving. Thus, we need to bring knowledge of the workload distribution and of the communication granularity of the application into the fine-grain power management. In the next Sections, we introduce the COUNTDOWN approach, which addresses this issue. \begin{figure*} \centering \includegraphics[width=1.00 \linewidth]{./images/callback_idle} \caption{The upper side depicts the timer strategy utilized in COUNTDOWN, while the lower side depicts the idle-wait mode with timer implemented in the Intel MPI library.} \label{fig:callback} \end{figure*} \section{Conclusion} \label{sec:conclusion} In this paper, we presented COUNTDOWN, a methodology and a tool for profiling HPC scientific applications and for adding DVFS capabilities to standard MPI libraries. COUNTDOWN implements a timeout strategy to avoid application slowdown while exploiting MPI communication slack to drastically reduce energy consumption.
COUNTDOWN has been demonstrated on real HPC systems and workloads, and does not require any modification to the application source code nor to the compilation toolchain. The COUNTDOWN approach can leverage several low power state technologies --- P/T/C states. We compared COUNTDOWN with state-of-the-art power management approaches for MPI libraries, which can dynamically control idle and DVFS levels for MPI-based applications. Our experimental results show that using our tuned timeout strategy to take power control decisions can drastically reduce overheads, maximizing the energy efficiency in both small and large MPI communications. Our run-time library can lead to up to 14.94\% energy saving and 19.84\% power saving, with less than 1.5\% performance penalty on a single compute node. Moreover, the benefits of COUNTDOWN increase with the scale of the application. In a 1K-core NAS run, COUNTDOWN always saves energy, with a saving which depends on the application and ranges from 6\% to 50\%, at a negligible overhead (below 6\%). In a full-scale production run of QE on more than 3.4K cores, COUNTDOWN saves 22.36\% of energy with only 2.88\% performance overhead. The energy reduction reaches 37.74\% when the application is executed with a default conservative parallelization setting. COUNTDOWN is an effective, non-intrusive and low-overhead approach to cut today's supercomputing centers' energy consumption transparently to the user. In future work, we plan to integrate it within standard power management infrastructures, such as GEOPM \cite{GEOPM}, and to complement it with predictive and application-driven power management techniques. \section{Experimental Results} \label{sec:experimental} In this Section, we present: (i) an overhead analysis of COUNTDOWN, (ii) the effect of the timeout strategy using different timeout delays, and (iii) an evaluation on a single node and on a production HPC system with real scientific applications.
\subsection{Framework Overheads} We evaluate the overhead of running MPI applications instrumented with the profiler module of COUNTDOWN, without changing the cores' frequency. We run QE-CP-EU on a single node, which is the worst case for COUNTDOWN in terms of number and granularity of MPI calls to profile, because all network-related overheads in MPI calls are nullified, and intra-chip communication and synchronization are orders of magnitude faster than inter-chip or inter-node ones. Hence, the MPI wait times exploitable for power management are generally much shorter. In this run, there are more than 1.1 million MPI primitives for each process in the diagonalization task: our run-time library needs to profile on average an MPI call every 200us for each process. We measured the overhead by comparing the execution time with and without COUNTDOWN instrumentation. We repeated the test five times, and we report the median case. Our results show that even in this unfavorable setting, the COUNTDOWN profiler introduces an overhead in the execution time of less than 1\%. We repeated the same test changing the cores' frequency, to assess the overhead of fine-grain DVFS control. To measure only the overhead caused by the interaction with the DVFS knobs, we configured COUNTDOWN to always request the highest P-state in the DVFS control registers. Thus, we avoid application slowdowns caused by frequency variations, and we obtain only the overhead caused by the register accesses. Our experimental results report 1.04\% of overhead for accessing the DVFS control registers and for the profiling routines. These results prove that the source of the overheads of phase-agnostic fine-grain power management is not the issuing of the low power state transition (DVFS in this case).
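COUNTDOWN's timeout strategy --- committing to a low power state only when an MPI phase outlives a delay threshold --- can be sketched with a POSIX interval timer. This is a simplified, hypothetical rendering of the mechanism: the names are ours and the frequency "writes" are in-memory stubs, not COUNTDOWN's actual internals:

```python
import signal
import time

LOW, HIGH = 1.2, 2.4     # illustrative P-state frequencies (GHz)
state = {"freq": HIGH}

def _on_timeout(signum, frame):
    # Fires only if the MPI phase outlives the threshold: only then is it
    # worth paying the transition cost of scaling down.
    state["freq"] = LOW

def countdown_call(mpi_func, threshold_s=500e-6):
    """Wrap an MPI primitive: arm a timer on entry, disarm on exit.
    Short calls never trigger a frequency change; long ones do."""
    prev = signal.signal(signal.SIGALRM, _on_timeout)
    signal.setitimer(signal.ITIMER_REAL, threshold_s)
    try:
        result = mpi_func()
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0.0)   # disarm the timer
        signal.signal(signal.SIGALRM, prev)
        state["freq"] = HIGH                        # epilogue: restore
    return result

# A short "MPI call" (no DVFS transition) vs. a long one (scaled down):
countdown_call(lambda: time.sleep(0.001), threshold_s=0.05)
assert state["freq"] == HIGH
seen = []
countdown_call(lambda: (time.sleep(0.2), seen.append(state["freq"])),
               threshold_s=0.05)
assert seen == [LOW]
```

The point of the threshold is exactly what the next analysis quantifies: below roughly 500us a P-state request may never take effect, so issuing it only wastes transition overhead.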
Figure \ref{fig:timeout_cases} focuses on understanding the source of these overheads by replicating the tests of Section \ref{sec:background} for both QE-CP-EU and QE-CP-NEU, but now entering the low power state only for MPI phases longer than a given time threshold. For the P-state and T-state (Figure \ref{fig:timeout_cases}.b and Figure \ref{fig:timeout_cases}.c), we obtained this by profiling in advance the duration of each MPI phase and instrumenting with the low power command only the phases whose duration was longer than the threshold. We report the time threshold value on the x-axes. For the C-state (Figure \ref{fig:timeout_cases}.a), we leveraged the \textit{I\_MPI\_SPIN\_COUNT} parameter of the MPI library to filter out short phases; on the x-axis, we report the \textit{I\_MPI\_SPIN\_COUNT} parameter. From the plot, we can recognize a well-defined threshold, of 500us for the T-state and P-state cases and of 10K iteration steps for the C-state, after which the overhead introduced by the fine-grain power management policy is reduced and the energy saving becomes positive for QE-CP-EU. In the next Section, we will analyze why this happens by focusing on the P-state case. The overhead in terms of memory is negligible, since the memory required by COUNTDOWN is just a few megabytes for each MPI process. \begin{figure*}[t] \centering \includegraphics[width=1.00 \linewidth]{./images/time_freq} \caption{Average frequency and time duration of Application/MPI phases for the single node benchmark of QE-CP-EU.
The lighter zones identify higher point density.} \label{fig:freq_time} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=1.00 \linewidth]{./images/time_time_freq} \caption{Time and average frequency of Application/MPI phases for the single node benchmark for QE-CP-EU.} \label{fig:time_time_freq} \end{figure*} \subsection{DVFS Overheads and Time Region Analysis} To find the reason for the higher overhead when frequency reduction is applied in all MPI phases, as highlighted in the previous Section, we report two scatter plots: the x-axis of the left plot shows the time duration of each MPI phase, while the x-axis of the right plot shows the time duration of each application phase. For both plots, the y-axis reports the measured average frequency in that phase. This test is conducted by instrumenting each MPI call through COUNTDOWN with a \textit{prologue} routine that sets the lowest frequency (1.2GHz) and an \textit{epilogue} routine that sets the highest frequency (Turbo). In theory, we would expect all MPI phases to execute at the minimum frequency and all application phases to run at the maximum frequency. Indeed, MPI phases running at high frequencies may cause energy waste, while application phases running at low frequencies cause a performance penalty to the application. Our results show that for phases with a time duration between 0us and 500us, the average frequency varies across the whole interval between the CPU's lowest and highest frequencies, while above this threshold it tends toward the frequency requested for that phase. This can be explained by the response time of the \textit{HW power controller} in serving P-state transitions on our Intel Haswell \cite{hackenberg2015energy}; we observed the same behaviour on the Intel Broadwell architecture.
The \textit{HW power controller} periodically reads the DVFS register to check whether the OS has specified a new frequency; this interval has been reported to be 500us in a previous study \cite{hackenberg2015energy} and matches our empirical threshold. This means that a new setting for the core's frequency issued less than 500us after the previous one may be applied or completely ignored, depending on when the register was last sampled, which can produce all sorts of average frequencies. Clearly, application phases which execute at a frequency lower than the maximum one may slow down the application, while MPI phases which execute at a frequency higher than the minimum one may reduce the energy savings. It is nevertheless interesting to notice that phases with a duration between 0us and 500us are more likely to show the highest frequency for MPI phases and the lowest frequency for application phases, which is the opposite of what we expected; we explain this with the next analysis. Thus, it is not possible to have effective control of the frequency selection for phases shorter than 500us, while for longer phases there is an asymptotic trend toward the requested frequency. We hypothesize that in phases shorter than 500us the average frequency depends more on the frequency of the previous phase than on the requested one. \begin{figure*} \centering \captionsetup[subfigure]{position=top} \subfloat[All MPI processes diagonalization QE-CP-EU]{\includegraphics[scale=0.358]{./images/cs_ps_ts_500us_n16} } \subfloat[Single MPI process diagonalization QE-CP-NEU]{\includegraphics[scale=0.358]{./images/cs_ps_ts_500us_n1} } \caption{Overhead, energy/power saving, average load and frequency using COUNTDOWN for QE-CP-EU (a) and QE-CP-NEU (b). Legend: C-state ($CS$), P-state ($PS$) and T-state ($TS$) mode.
Baseline is busy-waiting mode of MPI library.} \label{fig:cs_ps_ts_500us} \end{figure*} Following this intuition, in Figure \ref{fig:time_time_freq} we correlate the time duration of each application phase with the time duration of the following MPI phase and its average frequency. We report on the y-axis the time duration of the application phase, on the x-axis the time duration of the subsequent MPI phase, and with the color code the average frequency: the left plot reports the average frequency of the MPI phase, while the right plot reports the average frequency of the application phase. For both plots, we can identify four regions/quadrants: (i) \textbf{Application \& MPI\textgreater 500us}: this region contains long application phases followed by long MPI phases. Points in this region show low frequency in MPI phases and high frequency in application phases. This is the ideal behavior, where applying a frequency scaling policy reduces energy waste in MPI with no impact on the performance of the application. Phases in this region are perfect candidates for fine-grain DVFS policies. (ii) \textbf{Application\textgreater 500us \& MPI\textless 500us}: this region contains long application phases followed by short MPI phases. Points in this region show a high average frequency for both application and MPI phases. This is explained by the short duration of the MPI phases, which does not give the \textit{HW power controller} enough time to serve the request to scale down the frequency (\textit{prologue}) before this setting is overwritten by the request to operate at the highest frequency (\textit{epilogue}). For this reason, fine-grain DVFS control in this region does not have an impact on the energy saving, as the frequency reduction in MPI phases is negligible, but it also does not deteriorate the performance, as the application phases are executed at the maximum frequency.
Phases in this region should not be considered for fine-grain DVFS policies; it is preferable to leave the frequency unaltered at the highest level. (iii) \textbf{Application\textless 500us \& MPI\textgreater 500us}: this region contains short application phases followed by long MPI phases. This is the opposite of the \textit{Application\textgreater 500us \& MPI\textless 500us} region. Points in this region show a low average frequency for both application and MPI phases. This is explained by the short duration of the application phases, which does not give the \textit{HW power controller} enough time to serve the request to raise the frequency (issued at the exit of the previous MPI phase) before this setting is overwritten by the request to operate at the lowest frequency (at the entrance of the following MPI phase). Applying fine-grain DVFS policies in this region can save power but degrades the overall performance, as application phases are executed at low frequencies. Phases in this region should not be considered for fine-grain DVFS policies due to the high overheads in the application execution time. (iv) \textbf{Application \& MPI\textless 500us}: this region shows the opposite behavior of the \textit{Application \& MPI\textgreater 500us} region. Both application and MPI phases execute randomly at high and low average frequencies due to the inability of the \textit{HW power controller} to capture and serve the requested frequency changes. The average frequency at which MPI and application phases execute is strictly related to the type of the previous long phase: if it was an application phase, the following short phases execute on average at high frequency; on the contrary, if it was an MPI phase, the following short phases execute on average at low frequency. Applying fine-grain DVFS policies in this region leads to unexpected behaviors which can degrade application performance.
Fine-grain power managers should therefore never act on phases shorter than 500us. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{./images/nas} \caption{Results on NAS Benchmarks with COUNTDOWN. Baseline is busy-waiting mode (default mode) MPI library.} \label{fig:nas} \end{figure*} \subsection{Single-node Evaluation} We repeated the experiments of Section \ref{sec:background} using COUNTDOWN. We configure COUNTDOWN to scale down the P- and T-states 500us after the prologues of MPI primitives. To reproduce the same timeout strategy leveraging the C-states, we configure \textit{MPI SPIN WAIT} as described in Section \ref{sec:event_module} with 10K as the MPI spin counter parameter. The \textit{HW power controller} of Intel CPUs has a different transition latency for sleep states w.r.t. DVFS scaling, as described in \cite{hackenberg2015energy}. For this reason, we empirically determined the spin counter setting that maximizes energy efficiency and minimizes the overhead for the target application. Figure \ref{fig:cs_ps_ts_500us} reports the experimental results using \textit{COUNTDOWN THROTTLING}, \textit{COUNTDOWN DVFS}, and \textit{MPI SPIN WAIT}. In all cases the overhead, the energy saving, and the power saving are significantly improved w.r.t. the baseline (MPI library alone). Figure \ref{fig:cs_ps_ts_500us}.a shows the experimental results for QE-CP-EU. For the C-state mode, the overhead decreases from 25.85\% to 1.70\% using \textit{MPI SPIN WAIT}. For the P-state, using \textit{COUNTDOWN DVFS} the overhead decreases from 5.96\% to a negligible value, and for the T-state, using \textit{COUNTDOWN THROTTLING} it decreases from 5.96\% to 0.29\%. All evaluations report a non-negative energy saving, as was the case for the MPI library without the timeout strategy, but with better results.
Energy saving improves by 21.80\%, 14.94\%, and 11.16\%, and power saving by 6.55\%, 5.77\%, and 2.47\%, respectively for C-state, P-state, and T-state. These experimental results confirm our exploration of the time duration of MPI phases reported in Figure \ref{fig:freq_time}: most of the MPI calls of this benchmark were skipped because of their short duration, avoiding overheads. Figure \ref{fig:cs_ps_ts_500us}.b shows similar improvements for QE-CP-NEU. In this configuration, for the C-state mode the speed-up increases from 1.08\% to 6.14\% using \textit{MPI SPIN WAIT}. Using COUNTDOWN, the overhead of the P-state decreases from 3.88\% to 1.25\%, and that of the T-state from 15.82\% to 2.19\%. As a result, the energy saving is 21.80\%, 14.94\%, and 11.16\%, while the power saving corresponds to 24.61\%, 19.84\%, and 15.23\%, respectively for C-state, P-state, and T-state. \subsection{HPC Evaluation} After evaluating our methodology on a single compute node, we extend our exploration to a real HPC system. We use a Tier-1 HPC system based on an IBM NeXtScale cluster, currently classified in the Top500 supercomputer list \cite{TOP500}. The compute nodes of the HPC system are equipped with 2 Intel Broadwell E5-2697 v4 CPUs, with 18 cores at 2.3 GHz nominal clock speed and 145W TDP, interconnected with an Intel QDR (40Gb/s) Infiniband high-performance network. To benchmark the parallel performance on our target HPC system, we focused on two sets of applications. The first is the NAS parallel benchmark suite \cite{nas} with dataset E. We executed the NAS parallel benchmarks on 29 compute nodes with a total core count of 1024 cores; with dataset E, the execution time at this scale is on average ten minutes for each benchmark. The second is the QuantumESPRESSO PWscf software configured for a complex large-scale simulation.
For this purpose, we performed ten iterative steps of the self-consistent loop algorithm that optimizes the electronic density starting from the superposition of atomic charge densities. To obtain reasonable scaling up to the largest set of nodes, we chose an ad-hoc dataset. During each iteration, the CPU time is mostly spent in linear algebra (matrix-matrix multiplication and matrix diagonalization) and FFT. Both operations are distributed on multiple processors and operate on distributed data. As a consequence, the FFT requires many AllToAll MPI communications, while the parallel diagonalization, performed with the \textsc{PDSYEVD} subroutine of \textsc{ScaLAPACK}, mostly requires MPI broadcast messages. We run QE on 96 compute nodes, using 3456 cores and 12 TB of DRAM, as our target HPC machine allows application runs on at most 100 nodes. We use an input dataset capable of scaling on such a number of cores, and we configure QE with a set of parameters optimized to avoid network bottlenecks, which would limit scalability. We name this configuration QuantumESPRESSO Expert User (QE-PWscf-EU), to differentiate it from the same problem solved without optimizing the internal parameters, as it would be run by a user without domain-specific knowledge, which we call QuantumESPRESSO Not Expert User (QE-PWscf-NEU). \begin{figure*} \centering \captionsetup[subfigure]{position=top} \subfloat{{\includegraphics[width=0.39\linewidth]{./images/multi_node} }} \subfloat{{\includegraphics[width=0.22\linewidth]{./images/oep_multi_node} }} \subfloat{{\includegraphics[width=0.39\linewidth]{./images/multi_node_unbalanced} }} \caption{(a,b) Sum of the time spent in phases longer and shorter than 500us for QE-PWscf-EU and QE-PWscf-NEU.} \label{fig:multi_node} \end{figure*} In these tests, we exclude the T-state mode because, in the single-node evaluation, it always reported worse results than the P-state mode.
We also excluded the C-state mode: when configuring the Intel MPI library to use the idle mode for the HPC experiments, we discovered that this feature is not supported in a distributed environment, as the Intel MPI library overrides the request for idle mode with the busy-wait mode when the application runs on multiple nodes. For this reason, we only use the P-state mode (\textit{COUNTDOWN DVFS}) in the HPC evaluation. We run the benchmarks with and without COUNTDOWN on the same nodes, and we compare the results. Figure \ref{fig:nas} shows the results for the NAS parallel benchmark suite when executed on 1024 cores, while Figure \ref{fig:multi_node} shows the results for the QE-PWscf-* application when executed on 3456 cores. The plots of Figure \ref{fig:nas} report the time-to-solution overhead, the energy and power saving, as well as the distribution of MPI and application time phases (the accumulated time spent in phases longer and shorter than 500us, as a percentage of total time) for the different large-scale benchmark and application runs. All values are normalized against the default MPI busy-waiting policy. From Figure \ref{fig:nas}.c, we can see that COUNTDOWN significantly cuts the energy consumption of the NAS benchmarks, from 6\% to 50\%, and that these savings follow the percentage of time each benchmark spends in MPI phases longer than 500us. From the overhead plot (Figure \ref{fig:nas}.a), we can see that all these energy savings come with a very small time-to-solution overhead, on average below 5\%. These results are very promising, as they are virtually portable to any application without the need to touch the application binary.
When looking at the QuantumESPRESSO (QE-PWscf-*) case reported in Figure \ref{fig:multi_node}, we see that COUNTDOWN attains results similar to NAS also on a real production run optimized for scalability: COUNTDOWN saves 22.36\% of energy with an overhead of 2.88\% in the QE-PWscf-EU case. Figure \ref{fig:multi_node}.a shows the total time spent in application and MPI phases shorter and longer than 500us for the QE-PWscf-EU case. The x-axis reports the id of the MPI rank, while the y-axis reports the percentage of the total time spent in phases longer and shorter than 500us. We can immediately see that in this real, optimized run, the application spends a negligible time in phases shorter than 500us. In addition, the time spent in the MPI library and in the application is not homogeneous among the MPI processes. This is an effect of the workload parameters chosen to optimize the communications, which distribute the workload in subsets of MPI processes to minimize broadcast and All-to-All communications. With this configuration, our experimental results report an overhead of 2.88\% with an energy saving of 22.36\% and a power saving of 24.53\% thanks to COUNTDOWN. Figure \ref{fig:multi_node}.c shows that in the QE-PWscf-NEU case, where the parameters are not optimized, all MPI processes have the same workload composition, as they are part of the same workgroup, and, due to the large overhead in the broadcast and All-to-All communications, most of the processes spend almost 80\% of the time in the MPI library. Even if suboptimal, this is typical of HPC users who run the application without being domain experts or before tuning the execution parameters. This is a rather common scenario in scientific computing, as only runs that are repeated multiple times are carefully optimized by domain experts. In this situation, COUNTDOWN increases its benefits, reaching up to 37.74\% of energy saving and a power saving of 41.47\%.
In this condition, we also notice that COUNTDOWN induces a small but relevant overhead of 6.38\%. We suspect that some MPI primitives suffer more than others from the frequency scaling; we will analyze this problem in depth in future work, aiming to guarantee that the COUNTDOWN overhead always remains negligible. However, we remark that an overhead well below 10\% is more than acceptable in many HPC facilities, especially considering the massive energy savings. In summary, we can conclude that the results achieved by COUNTDOWN at production scale on real applications are very promising and, if systematically adopted, would dramatically reduce the TCO of today's supercomputers. \section{Framework} \label{sec:framework} COUNTDOWN is a run-time library for profiling and fine-grain power management written in C. COUNTDOWN is based on a \textit{profiler} and an \textit{event} module to inspect and react to MPI primitives. The key idea behind COUNTDOWN can be summarized as follows: every time the application calls an MPI primitive, COUNTDOWN intercepts the call with minimal overhead and uses a timeout strategy \cite{benini_power_survey} to avoid changing the power state of the cores during fast application/MPI context switches, where doing so would result only in state-transition overhead without significant energy and power reduction. Figure \ref{fig:dynlink_logview} depicts COUNTDOWN's components. COUNTDOWN exposes the same interface as a standard MPI library and intercepts all MPI calls from the application. COUNTDOWN implements two wrappers to intercept MPI calls: i) the first wrapper is used for C/C++ MPI libraries; ii) the second one is used for FORTRAN MPI libraries. This is mandatory since C/C++ and FORTRAN MPI libraries produce assembly symbols which are not application binary interface (ABI) compatible. The FORTRAN wrapper implements (un)marshalling interfaces to bind MPI FORTRAN handlers into compatible MPI C/C++ handlers.
When an application is instrumented with COUNTDOWN, every MPI call is enclosed in a corresponding wrapper function that implements the same signature. The wrapper function calls the equivalent PMPI call, preceded by a \textit{prologue} routine and followed by an \textit{epilogue} routine. Both routines are used by the profiler and event modules to support monitoring and power management, respectively. COUNTDOWN interacts with the \textit{HW power manager} through a specific \textit{Events} module in the library. The \textit{Events} module can also be triggered by system signals registered as callbacks for timing purposes. COUNTDOWN is configured through environment variables, which make it possible to change the verbosity of logging and the type of HW performance counters to monitor. The library targets the instrumentation of applications through dynamic linking, as depicted in Figure \ref{fig:dynlink_logview}, without user intervention. When dynamic linking is not possible, COUNTDOWN also provides a fallback static-linking library, which can be used while building the application to add COUNTDOWN at compilation time. The advantage of dynamic linking is the possibility of instrumenting any MPI-based application without modifying the source code or the toolchain, and even without recompiling it. Linking COUNTDOWN to the application is straightforward: it is enough to set the environment variable \textit{LD\_PRELOAD} to the path of the COUNTDOWN library and launch the application as usual. \subsection{Profiler Module} COUNTDOWN allows extracting traces, which can be exploited to estimate application performance as in \cite{zhai2016performance}. COUNTDOWN uses three profiling strategies targeting different monitoring granularities. (i) The \textit{MPI profiler} is responsible for collecting all information regarding the MPI activity. For each MPI process, it collects information on MPI communicators, MPI groups, and the coreId.
In addition, the COUNTDOWN run-time library profiles each MPI call by collecting information on the type of the call, the entrance and exit times, and the data exchanged with the other MPI processes. (ii) The \textit{fine-grain micro-architectural profiler} collects micro-architectural information at every MPI call, alongside the \textit{MPI profiler}. This profiler uses the user-space RDPMC instruction to access the performance monitoring units implemented in Intel processors. It monitors the average frequency, the time stamp counter (TSC), and the instructions retired for each MPI call and application phase. It can access up to 8 configurable performance counters that can be used to monitor user-specific micro-architectural metrics. (iii) The \textit{coarse-grain profiler} monitors a larger set of HW performance counters available in Intel architectures. In Intel architectures, privileged permissions are required to access HW performance counters, and this level of permission cannot be granted to final users on production machines. To overcome this limitation, we use the MSR\_SAFE \cite{MSR_SAFE} driver, which can be configured to grant standard users access to a subset of privileged architectural registers while avoiding security issues. At the core level, COUNTDOWN monitors the TSC, instructions retired, average frequency, C-state residencies, and temperature. At the uncore level, it monitors the CPU package energy consumption, C-state residencies, and temperature of the packages. This profiler uses the Intel Running Average Power Limit (RAPL) interface to extract energy/power information from the CPU. Due to the high overhead of every single access to the monitored set of HW performance counters, the \textit{coarse-grain profiler} uses a time-based sampling rate: data are collected only if at least Ts seconds have elapsed since the previous collection.
At every MPI call, the \textit{fine-grain micro-architectural profiler} checks the time stamp of the previous \textit{coarse-grain profiler} sample and, if it is older than Ts seconds, triggers it to get a new sample. These capabilities are added to the application through the \textit{prologue} and \textit{epilogue} functions, as shown in Figure \ref{fig:dynlink_logview}. COUNTDOWN also implements a logging module to store profile information in text files, which can be written to local or remote storage. Since the log of the MPI profiler grows with the number of MPI primitives and can become large in long computations, this information is stored in binary files; the logging module also reports a summary of this information in an additional text file. \begin{figure*}[t] \centering \includegraphics[width=1.00 \linewidth]{./images/timeout_cases} \caption{Impact of MPI phases duration on the overheads, energy and power savings of C/P/T-state for QE-EU and QE-NEU.} \label{fig:timeout_cases} \end{figure*} \subsection{Event Module} \label{sec:event_module} COUNTDOWN interacts with the \textit{HW power controller} of each core to reduce the power consumption. It uses MSR\_SAFE to write the architectural register that changes the current P-state independently per core. When COUNTDOWN is enabled, the \textit{Events} module selects the performance level at which to execute a given phase. COUNTDOWN implements a timeout strategy through the standard Linux timer APIs, which expose the \textit{setitimer()} and \textit{getitimer()} system calls to manipulate user-space timers and register callback functions. The top part of Figure \ref{fig:callback} depicts this methodology. When COUNTDOWN encounters an MPI phase in which it can opportunistically save energy by entering a low power state, it \textit{registers} a timer callback in the \textit{prologue} function (Event(start)); after that, execution continues with the standard workflow of the MPI phase.
When the timer expires, a system signal is raised: the ``normal'' execution of the MPI code is interrupted, the signal handler triggers the COUNTDOWN \textit{callback}, and once the callback returns, the execution of the MPI code resumes at the point where it was interrupted. If the ``normal'' execution returns to COUNTDOWN (termination of the MPI phase) before the timer expiration, COUNTDOWN \textit{disables} the timer in the \textit{epilogue} function and the execution continues as if nothing had happened. The callback can be configured to enter the lowest T-state (12.5\% of load), later referred to as \textit{COUNTDOWN THROTTLING}, or the lowest P-state (1.2GHz), later referred to as \textit{COUNTDOWN DVFS}. The Intel MPI library implements a similar strategy, but it relies on the sleep power states of the cores. Its behavior is depicted in the bottom part of Figure \ref{fig:callback}. If the environment variable \textit{I\_MPI\_WAIT\_MODE}, presented in Section \ref{sec:wait_mode}, is combined with the environment variable \textit{I\_MPI\_SPIN\_COUNT}, it is possible to configure the spin count for each MPI call. When the spin count reaches zero, the MPI library leaves the execution to the idle task of the CPU. This parameter does not contain a real-time value but a counter which is decremented by the spinning procedure of the MPI library until it reaches zero. This allows the Intel MPI library to spin on a synchronization point for a while and, after that, enter an idle low power state to reduce the power consumption of the core. The execution is restored when a system interrupt wakes up the MPI library, signaling the end of the MPI call. Later, we refer to this mode as \textit{MPI SPIN WAIT}. In the next Section, we clarify through experiments why the timeout logic introduced by COUNTDOWN is effective in making fine-grain power management possible and convenient in MPI parallel applications.
\section{Introduction} \label{sec:introduction} In today's supercomputers, the total power consumption of computing devices limits the practically achievable performance. This is a direct consequence of the end of Dennard scaling, which in the last decade has caused a progressive increase of the power density required to operate each new processor generation at its maximum performance. Higher power density implies more heat to be dissipated and higher cooling costs. Altogether, these worsen the total cost of ownership (TCO) and operational costs, de facto limiting the budget for the supercomputer's computational capacity. Low power design strategies enable computing resources to trade off performance for power consumption by means of low power modes of operation. These states are obtained through Dynamic Voltage and Frequency Scaling (DVFS) (also known as performance states or P-states \cite{ACPI}), clock gating or throttling states (T-states), and idle states which switch off unused resources (C-states \cite{ACPI}). Power state transitions are controlled by hardware policies, operating system (OS) policies, and, with an increasing emphasis in recent years, in user-space by the final users \cite{fraternali_islped04,lrz_lowfreq,losalomos_sc05,GEOPM} and at execution time \cite{adagio_dynamic,Schulz_IPDPS10}. While OS policies try to maximize the usage of the computing resources --- increasing the processor's speed (P-state) proportionally to the processor's utilization, with a specific focus on server and interactive workloads --- two main families of power control policies are emerging in scientific computing. The first is based on the assumption that a performance penalty can be tolerated to reduce the overall energy consumption \cite{fraternali_islped04,fraternali_tpds17,lrz_lowfreq,losalomos_sc05}.
The second is based on the assumption that it is possible to slow down a processor only when it does not execute critical tasks, saving energy without penalizing application performance \cite{adagio_dynamic,Schulz_IPDPS10,freeh2008just,GEOPM}. Both approaches are based on the concept of application slack/bottleneck (memory, IO, and communication) that can be opportunistically exploited to reduce power and save energy. However, there are drawbacks which limit the usage of these concepts in a production environment. The first approach causes overheads in the application time-to-solution (TTS), limiting the supercomputer's throughput and capacity. The second approach depends on the capability of predicting the critical tasks in advance, with severe performance losses in case of misprediction. A typical HPC application is composed of several processes, running on a cluster of nodes, which exchange messages through a high-bandwidth, low-latency network. These processes access the network sub-system through a software interface that abstracts the network level: the Message-Passing Interface (MPI), a communication interface that allows processes to exchange explicit messages while abstracting the network level. Usually, as the scale of the application increases, the time spent by the application in the MPI library becomes non-negligible and impacts the overall power consumption. By default, when MPI processes are waiting in a synchronization primitive, MPI libraries use a busy-waiting mechanism. However, during MPI primitives the workload is primarily composed of wait times and IO/memory accesses, for which running in a low power mode may result in lower CPU power consumption with limited or even no impact on the execution time. MPI libraries implement idle-waiting mechanisms, but these are not used in practice, to avoid the performance penalties caused by the transition times into and out of low-power states \cite{hackenberg2015energy}.
As a matter of fact, there is no known low-overhead and reliable mechanism for reducing energy consumption selectively during MPI communication slack. In this paper, we present COUNTDOWN\footnote{Github Repository: \url{https://github.com/EEESlab/countdown}}, a run-time library, analysis tool, and methodology to save energy in MPI-based applications by leveraging the communication slack. The main contributions of this manuscript are: i) An analysis of the effects and implications of fine-grain power management in today's supercomputing systems, targeting energy saving in the MPI library. Our study shows that today's HPC processors exhibit significant HW latencies in serving low-power state transitions. We show that this delay is the source of inefficiencies (overheads and saving losses) when fine-grain power management is applied in the MPI library. ii) Through a first set of benchmarks running on a single HPC node, we show that: (a) there is a potential for saving energy with negligible overheads in the MPI communication slack of today's HPC applications; (b) these savings are jeopardized by the time the HW takes to perform power state transitions; (c) when combined with low-power states, the Turbo logic can help improve execution time. iii) The COUNTDOWN library, a runtime able to automatically track MPI and application phases at fine granularity and inject power management calls. COUNTDOWN can identify the MPI calls with energy-saving potential, for which it is worthwhile to enter a low power state, leaving low-wait-time MPI calls unmodified to prevent overheads caused by low power state transitions. We show that COUNTDOWN's principles can be used to inject DVFS calls as well as to configure the MPI runtime correctly and take advantage of MPI idle-waiting mechanisms.
COUNTDOWN works at execution time without requiring any off-line knowledge of the application, and it is completely transparent: it does not require any modification of the source code or compilation toolchain. COUNTDOWN can be dynamically linked with the application at loading time: it intercepts dynamic linking to the MPI library, instrumenting all the application calls to MPI functions before the execution workflow jumps to the library. The runtime also provides a static version of the library which can be linked with the application at link time. COUNTDOWN supports C/C++ and Fortran HPC applications and most of the open-source and commercial MPI libraries. iv) We evaluate COUNTDOWN with a wide set of benchmarks and low power state mechanisms. In large HPC runs, COUNTDOWN leads to savings of 23.32\% on average for the NAS\cite{nas} parallel benchmarks on 1024 cores and to 22.36\% for an optimized QuantumESPRESSO (QE) on 3456 cores. When we run QE without communication tuning the savings increase to 37.74\%. The paper is organized as follows. Section \ref{sec:related} presents the state-of-the-art in power and energy management approaches for scientific computing systems. Section \ref{sec:background} introduces the key concepts on power-saving in MPI phases of the application. Section \ref{sec:framework} explains our COUNTDOWN runtime and the characterizations of real HPC applications. Section \ref{sec:experimental} characterizes the COUNTDOWN library and reports experimental results in power saving of production runs of applications in a tier1 supercomputer. \section*{Acknowledgments} Work supported by the EU FETHPC project ANTAREX (g.a. 671623), EU project ExaNoDe (g.a. 671578), and the CINECA research grant on Energy-Efficient HPC systems.
\input{main.bbl} \vskip -2.5\baselineskip plus -10fil \begin{IEEEbiography}[{\includegraphics[width=1in, height=1.25in, clip, keepaspectratio]{./images/bio/daniele.jpg}}]{Daniele Cesarini} received a Ph.D. degree in Electrical Engineering from the University of Bologna, Italy, in 2019, where he is currently a Post-Doctoral researcher in the Department of Electrical, Electronic and Information Engineering (DEI). His research interests concern the development of SW-HW codesign strategies as well as algorithms for parallel programming support for energy-efficient HPC systems. \end{IEEEbiography} \vskip -3.5\baselineskip plus -10fil \begin{IEEEbiography}[{\includegraphics[width=1in, height=1.25in, clip, keepaspectratio]{./images/bio/andrea.jpg}}]{Andrea Bartolini} received a Ph.D. degree in Electrical Engineering from the University of Bologna, Italy, in 2011. He is currently an Assistant Professor in the Department of Electrical, Electronic and Information Engineering (DEI) at the University of Bologna. Before that, he was a Post-Doctoral researcher in the Integrated Systems Laboratory at ETH Zurich. Since 2007, Dr. Bartolini has published more than 80 papers in peer-reviewed international journals and conferences with a focus on dynamic resource management for embedded and HPC systems. \end{IEEEbiography} \vskip -2.6\baselineskip plus -10fil \begin{IEEEbiography}[{\includegraphics[width=1in, height=1.25in, clip, keepaspectratio]{./images/bio/pietro.jpg}}]{Pietro Bonf\`a} received his B.Sc. degree in Physical Engineering (2008) from Politecnico di Milano, his M.Sc. degree in Physics (2011) from the University of Pavia, and his Ph.D. in Physics (2015) from the University of Parma. He has a background in solid state physics and is now actively involved in the development of computational chemistry codes as part of his activities in CINECA.
\end{IEEEbiography} \vskip -3.25\baselineskip plus -10fil \begin{IEEEbiography}[{\includegraphics[width=1in, height=1.25in, clip, keepaspectratio]{./images/bio/carlo.jpg}}]{Carlo Cavazzoni} Carlo graduated cum laude in Physics from the University of Modena and earned his PhD in Material Science at the International School for Advanced Studies of Trieste in 1998. He has authored or co-authored several papers published in prestigious international reviews, including Science, Physical Review Letters, and Nature Materials. Currently in the HPC Business Unit of CINECA, he is responsible for R\&D, HPC infrastructure evolution, and collaborations with scientific communities. \end{IEEEbiography} \vskip -2.75\baselineskip plus -10fil \begin{IEEEbiography}[{\includegraphics[width=1in, height=1.25in, clip, keepaspectratio]{./images/bio/luca.jpg}}]{Luca Benini} is professor of Digital Circuits and Systems at ETH Zurich, Switzerland, and is also professor at the University of Bologna, Italy. His research interests are in system design of energy-efficient multicore SoCs, smart sensors, and sensor networks. He has published more than 800 papers in peer-reviewed international journals and conferences, four books, and several book chapters. He is a Fellow of the ACM and a member of the Academia Europaea. He is the recipient of the IEEE CAS Mac Van Valkenburg Award 2016. \end{IEEEbiography} \end{document} \section{Related Work} \label{sec:related} Several works have focused on mechanisms and strategies to maximize energy savings at the expense of performance. These works operate the processors at a reduced frequency for the entire duration of the application \cite{fraternali_islped04,lrz_lowfreq,losalomos_sc05}. The main drawback of these approaches is the negative impact on the application performance, which is detrimental to the data center cost efficiency and TCO. Fraternali et al.
\cite{fraternali_islped04,fraternali_tpds17} analyzed the impact of frequency selection on a green HPC machine, which can lead to a significant global energy reduction in real-life applications but can also induce significant performance penalties. Auweter et al. \cite{lrz_lowfreq} developed an energy-aware scheduler that relies on a predictive model to predict the wall-time and the power consumption at different frequency levels for each running application in the system. The scheduler uses this information to select the frequency to apply to all the nodes executing the job to minimize the energy-to-solution, allowing unbounded slowdown in the TTS. The main drawback of this approach is the selection of a fixed frequency for the entire application run, which can cause a significant penalty on CPU-bound applications. Hsu et al. \cite{losalomos_sc05} propose an approach where users can specify a maximum-allowed performance slowdown for their applications, while the proposed power-aware runtime reduces frequency on time windows, respecting the user-specified constraint. For this purpose, the proposed run-time estimates the instruction throughput dependency on frequency and minimizes the frequency while respecting the user-specified maximum-allowed performance slowdown. Similarly to the previous approach, energy gain is possible only by degrading the performance of the application. The main drawback of the works mentioned so far is that they lead to a systematic increase of TTS, which may be acceptable for the user but is not easily acceptable by the facility manager, since it reduces the data center cost efficiency and TCO \cite{borghesi2018pricing}. For this reason, there is a trend in the literature towards HPC energy reduction methodologies with negligible or low impact on the TTS of the running applications. Sundriyal et al.
\cite{MVAPICH2_PSTATE,MVAPICH2_ALL,MVAPICH2_Gather,MVAPICH2_TSTATE} analyze the impact of fine-grain power management strategies in MVAPICH2 communication primitives, with a focus on send/receive \cite{MVAPICH2_PSTATE}, All-to-All \cite{MVAPICH2_ALL}, and AllGather communications \cite{MVAPICH2_Gather}. In \cite{MVAPICH2_PSTATE} the authors propose an algorithm to lower the P-state of the processor during send and receive primitives. The algorithm dynamically learns the best operating points for the different send and receive calls. In \cite{MVAPICH2_Gather, MVAPICH2_ALL, MVAPICH2_TSTATE}, the authors propose to also lower the T-state during the send-receive, AllGather, and All-to-All primitives, as this increases the power savings. These approaches show that power saving can be achieved by entering a low power mode during specific communication primitives, but they depend on a specific MPI implementation. Differently, we show that significant savings can be achieved without impacting the implementation of the MPI library. Moreover, there are other works which focus on the power consumption of network equipment during the execution of parallel applications \cite{chen2016reducing}. While these works act on the HPC network, we leverage the power consumption of the computing units during communications. Rountree et al. \cite{adagio_static} analyze the energy savings which can be achieved on MPI parallel applications by slowing down the frequencies of processors which are not on the critical path. The authors define tasks as the regions of code between two MPI communication calls; later in this document we refer to tasks as phases. The critical path is defined as the chain of tasks which bounds the application execution time. Indeed, cores executing tasks on the critical path will be the latest ones to reach the MPI synchronization points, forcing the other cores to wait.
In \cite{adagio_static} the authors propose a methodology for estimating offline the minimum frequency at which the waiting cores can execute without affecting the critical path and the TTS. In the same work, the authors suggest that the core's frequency cannot be changed too often without causing overheads. For this reason, the authors introduce a timer logic, set at 10ms, to avoid changing the core's frequency too often. This value was found empirically. With COUNTDOWN we demonstrate that in modern CPUs the best setting for this timer value corresponds to the built-in HW power controller latency. A later work of the same authors \cite{adagio_dynamic} implements an online algorithm to identify the task and the minimum frequency at which it can be executed without worsening the critical path. In case the optimal frequency (which would nullify the communication blocking time) is below the minimum available one, the authors propose to lower the core's frequency to the minimum one. This is done with a slack reclamation policy which is based on the measurement of the previous blocking time duration. If this was at least twice as long as an empirical time threshold (100ms), when the same task is executed again a timer is set to the empirical threshold. If the MPI phase expires before the timer ends, nothing happens. Otherwise, when the timer expires, the core's frequency is set to the minimum one. This, in essence, implements a last-value prediction logic to determine if there will be enough blocking time which could be exploited to save energy, and it is only evaluated with small benchmarks (32 MPI processes). COUNTDOWN uses a timeout policy as well, but it applies it to each MPI phase without trying to predict its duration. This is a significant difference w.r.t.\ \cite{adagio_dynamic}, which makes it robust to mispredictions \cite{benini_power_survey}. Similarly, Kappiah et al.
\cite{freeh2008just} developed Jitter, an online runtime based on the identification of the critical path of the application among the compute nodes involved in the run. Liu et al. \cite{barriers_cmp} use a similar methodology to Kappiah et al. \cite{freeh2008just}, but they apply it to a multi-core CPU. Zhai et al. \cite{zhai2016performance} propose a method for estimating the duration of an MPI parallel application. The authors of \cite{simil_adagio}, as in \cite{adagio_static,adagio_dynamic}, focus on saving power by entering a low power state for processes which are not on the critical path. The authors propose an algorithm to save energy by reducing application unbalance. This is based on measuring the start and end time of each MPI\_barrier and MPI\_Allreduce primitive to compute the duration of application and MPI code. Based on that, the authors propose a feedback loop to lower the P-state and T-state if in the previous compute and MPI region the overhead was below a given threshold. The algorithm is based on the assumption that the duration of the current application and MPI phases will be the same as the previous ones. In COUNTDOWN we target recent HW and larger production runs, where we do not use any previous information on MPI and application phase duration, which may lead to costly performance overhead in case of misprediction, in particular in irregular applications \cite{kerbyson2011energy}. Instead, COUNTDOWN relies only on a pure-reactive timer-based logic. It is worth noticing that, differently from \cite{adagio_dynamic}, the COUNTDOWN logic does not use any pre-characterization of the message-transfer time of the MPI library to estimate the communication blocking time, since this can change depending on the congestion of the high-performance interconnect. To save energy during MPI phases, Lim et al. \cite{lim2006adaptive} propose to reduce the core's frequency in ``long'' MPI phases.
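The pure-reactive timer-based logic described above can be sketched in a few lines. This is a minimal simulation for illustration (Python rather than the C of the actual runtime); the timeout value and function names are ours, while COUNTDOWN ties the real timeout to the built-in HW power controller latency and acts through DVFS/C-state/T-state calls.

```python
# Reactive timeout policy: on entry to each MPI phase a timer is
# armed; only if the phase outlives the timeout is a low power state
# entered. No phase-duration prediction is involved.
TIMEOUT_MS = 0.5  # illustrative value, not COUNTDOWN's actual setting

def reactive_timeout(phase_durations_ms, timeout=TIMEOUT_MS):
    """For each MPI phase return (entered_low_power, ms_spent_low)."""
    decisions = []
    for d in phase_durations_ms:
        if d > timeout:
            decisions.append((True, d - timeout))   # timer fired in-phase
        else:
            decisions.append((False, 0.0))          # timer cancelled
    return decisions

# Irregular trace: short phases incur no state transition (so no
# transition overhead), long phases still harvest most of their slack.
trace_ms = [0.1, 12.0, 0.3, 40.0]
print(reactive_timeout(trace_ms))
```

The worst case per phase is bounded by the timeout itself, which is why the policy stays robust even when phase durations vary wildly.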
Subsequent short MPI phases are grouped and treated as a single long MPI phase. They use an algorithm to select the best P-state to be applied according to the micro-operation throughput in the MPI phase. Similarly to \cite{adagio_static,adagio_dynamic}, this approach is based on the assumption that the duration and instruction composition of the current MPI phase will be the same as the previous ones. Moreover, by treating short MPI phases as a single long one, the application phases between them are executed at low frequency, leading to overheads. Li et al. \cite{li2004thrifty} use a similar approach to \cite{lim2006adaptive} to reduce power consumption in synchronization points. This work focuses on collective barriers for parallel applications in shared-memory multiprocessors. Differently from the previous approaches, instead of using P-states, they use idle states (C-states) and specific hardware extensions to account for their transition (sleep and wake-up) times. As in the previously described approaches, this runtime uses a history-based prediction model to identify the duration of the next barriers. \begin{figure*} \centering \captionsetup[subfigure]{position=top} \subfloat[All MPI processes are involved in the diagonalization QE-CP-EU]{\includegraphics[scale=0.358]{./images/cs_ps_ts_n16} } \subfloat[Single MPI process is involved in the diagonalization QE-CP-NEU]{\includegraphics[scale=0.358]{./images/cs_ps_ts_n1} } \caption{Overhead, energy/power saving, average load and frequency for QE-CP-EU (a) and QE-CP-NEU (b). Legend: C-state ($CS$), P-state ($PS$) and T-state ($TS$) mode. Baseline is the busy-waiting mode (default mode) of the MPI library.} \label{fig:cs_ps_ts} \end{figure*} The authors of \cite{MVAPICH2-EA} show that the approaches in \cite{adagio_static,adagio_dynamic} and the ones which estimate the duration of MPI and communication phases based on a last-value prediction \cite{lim2006adaptive,li2004thrifty} can lead to significant misprediction errors.
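A last-value predictor of the kind these history-based schemes rely on can be sketched as follows; the threshold and names are illustrative, not taken from any of the cited runtimes. On an irregular trace the decisions go wrong exactly where phase durations change.

```python
# Last-value prediction: assume the next MPI phase lasts as long as
# its previous instance and decide *up front* whether to enter a low
# power state. A misprediction either slows down a short phase
# (overhead) or misses the slack of a long one (lost savings).
THRESHOLD_MS = 1.0  # illustrative "worth entering low power" duration

def last_value_policy(phase_durations_ms, threshold=THRESHOLD_MS):
    """For each phase return (entered_low_power, was_misprediction)."""
    out, last = [], 0.0
    for d in phase_durations_ms:
        go_low = last > threshold        # decision uses the PREVIOUS phase
        out.append((go_low, go_low != (d > threshold)))
        last = d
    return out

# Alternating short/long phases: three of the four decisions are wrong.
trace_ms = [0.1, 12.0, 0.3, 40.0]
print(last_value_policy(trace_ms))
```

This is the failure mode \cite{MVAPICH2-EA} quantifies, and the one a per-phase reactive timeout avoids by construction.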
The authors propose to solve this issue by estimating the duration of the MPI phases with a combination of communication models and empirical observations specialized for the different groups of communication primitives. If this estimated time is long enough, they reduce the P-state. As we show with the proposed COUNTDOWN approach, this can be achieved without a specific library implementation and communication models. Li et al. \cite{Schulz_IPDPS10} analyzed hybrid MPI/OpenMP applications in terms of performance and energy saving and developed a power-aware runtime that relies on dynamic concurrency throttling (DCT) and DVFS mechanisms. This runtime uses a combination of a power model and a time predictor for OpenMP phases to select the best cores' frequency when the application manifests workload imbalance. The works in the second group, namely \cite{freeh2008just,barriers_cmp,Schulz_IPDPS10,lim2006adaptive,li2004thrifty}, but also \cite{adagio_dynamic} in the slack reclamation policy, have in common the prediction of future workload imbalances or MPI phases obtained by analyzing previous communication patterns. However, this approach can lead to frequent mispredictions in irregular applications \cite{kerbyson2011energy}, which cause performance penalties. COUNTDOWN differs from the above approaches (and complements them) because it is purely reactive and does not rely on assumptions and estimations of the future workload unbalance. The power management literature has analyzed in depth the issue of prediction inaccuracy and predictive model overfitting \cite{benini_power_survey}. One of the key outcomes of COUNTDOWN is that timeout-based policies are effective when predictions are not available (e.g. when data is being collected for building a predictive model), and are also essential in mitigating misprediction overheads.
The implementation of power management strategies greatly benefits from standard APIs and platform-independent software abstractions to interface with the hardware. Eastep et al. propose GEOPM \cite{GEOPM}, an extensible and plug-in based framework for power management in large parallel systems. GEOPM is an open-source project and exposes a set of APIs that programmers can insert into applications to combine power management strategies and HPC workloads. A plugin of the framework targets power-constrained systems, aiming to speed up the critical path by migrating power to the CPUs executing the critical path tasks. In a similar manner, another plugin can selectively reduce the frequency of the processors in specific regions of code flagged by the user, differentiating regions into CPU, memory, IO, or disk bound. Today, GEOPM is capable of identifying MPI regions and reducing the frequency based on the MPI primitive type. However, it cannot differentiate between short and long MPI calls and thus cannot control the overhead caused by the frequency changes and runtime in short MPI primitives. COUNTDOWN addresses this limitation and can be integrated into future releases of GEOPM, as its design principles are entirely compatible with it (i.e. no application code modifications are required). An earlier version of the COUNTDOWN run-time was presented in \cite{ANDARE}. This paper adds to COUNTDOWN the support for two additional low power state mechanisms (C-states and T-states) and their comparison with P-states. Moreover, we extended \cite{ANDARE} with a detailed analysis of the timeout configuration for the three different low power state mechanisms, and of the implications of the timeout for the MPI and application phase durations. We finally extended \cite{ANDARE} with a broader set of experimental results, including the NAS parallel benchmarks and an additional QE large-scale run with different network optimization, which is a common use-case in supercomputer environments.
\section{Introduction} \label{sec:intro} Acousto-optic (AO) interactions, referred to in various fields as opto-mechanics, Brillouin or Raman scattering, or photon-phonon interactions \cite{van2016unifying}, provide a useful tool set for designing high performance optical devices. This is particularly true in integrated photonics, where both optical and acoustic modes can be confined in sub-wavelength cross-sections. Recently, intense interest in bringing AO-based devices to integrated photonics has yielded isolators \cite{poulton2012design,kittlaus2018non,sohn2018time,ruesink2018optical}, amplifiers and lasers \cite{otterstrom2018silicon,gundavarapu2019sub,kittlaus2016large,kabakova2014chalcogenide,choudhary2017advanced,van2015net}, microwave photonic devices \cite{kittlaus2018rf,marpaung2015low,choudhary2017advanced}, modulators \cite{kittlaus2018non,sohn2018time,balram2016coherent,li2015nanophotonic,fan2016integrated}, and even optical phased arrays \cite{sarabalis2018optomechanical}. Of principal importance in any AO device is maximizing the overlap of the optical and acoustic modes and thereby maximizing the AO interaction strength. Confining the acoustic mode to the same cross-section as the optical mode, and propagating along the same axis (co- or counter-propagating), allows \emph{all} of the acoustic and optical energy to be used for the AO interaction. The first demonstration of an AO tunable filter took advantage of such a co-propagating design \cite{harris1969acousto}, while it took five more years for the demonstration of a non-collinear filter \cite{chang1974noncollinear}. Later demonstrations of AO filters in thin film LiNbO$_3$ \cite{kuhn1971optical}, implanted LiNbO$_3$ waveguides \cite{smith1990integrated}, and optical fiber \cite{kim1997all}, among other platforms, have also benefited from co-propagation of the optical and acoustic modes.
While analogous devices have been demonstrated in integrated platforms where a transducer is used to drive the acoustic wave \cite{van2018electrical,sohn2018time,balram2016coherent,li2015nanophotonic,sarabalis2018optomechanical,fan2016integrated}, this co-propagating arrangement has not been demonstrated without utilizing an optical resonator or inducing excess optical loss due to the presence of the transducer. The issue of efficiently coupling a transducer to a guided acoustic mode in the same cross-section as a guided optical mode is therefore an unsolved problem which, if addressed, would enable more efficient AO interactions in transducer-based designs. Transducer-based approaches cannot directly drive a propagating acoustic wave in the same cross-section as an optical mode because the metal transducers will scatter the optical mode and induce loss. The transducer-based designs demonstrated so far solve this issue by placing the transducer outside the optical waveguide, creating the AO overlap in one of two manners: 1) launching an acoustic wave via a slab towards a waveguide \cite{sohn2018time,balram2016coherent,li2015nanophotonic,sarabalis2018optomechanical} or 2) directly modulating the optical waveguide with standoff electrodes \cite{fan2016integrated,van2018electrical}. The first category can be further subdivided by whether the slab mode couples to a co-propagating, confined acoustic mode \cite{balram2016coherent,sarabalis2018optomechanical} or a `transverse' acoustic mode which is not confined to the waveguide and does not propagate significantly outside the transducer region \cite{sohn2018time,li2015nanophotonic}. Slab-excitation of a co-propagating acoustic mode creates the desired geometry, but requires off-chip optical coupling \cite{balram2016coherent} or optical decay \cite{sarabalis2018optomechanical} in order to create strong AO overlap without the optical mode(s) scattering off the transducer.
Slab-excitation of a waveguide-transverse acoustic mode naturally prohibits coupling to a confined, propagating acoustic mode, since the confined wave by definition does not couple to radiation modes in the slab. Direct modulation of the optical waveguide allows for near-optimal mode overlap and therefore strong AO interactions, but requires large voltages, as the electrodes must be sufficiently far from the waveguide, and does not generate propagating acoustic modes, therefore requiring transducers as long as the desired interaction region \cite{fan2016integrated,van2018electrical}. In order to enable this co-confined, co-propagating AO configuration we require a component which couples an external transducer to a confined, propagating acoustic mode in an optical waveguide without excess optical loss, reflection, or modal cross-talk. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{./figs/InjectorOverview.pdf} \caption{Overview of the acoustic-optical multiplexer. (a) Our example design multiplexes two optical modes (Port 1) in a suspended silicon waveguide with an acoustic wave incident symmetrically from suspended silica beams (Port 2) into a single output port, the suspended silicon beam (Port 3). (b) The geometry and port definitions of this architecture are shown along with schematic examples of the optical and acoustic paths. An additional pair of silica beams is used to redirect acoustic energy scattered in the backwards direction (Port 4).} \label{fig:overview} \end{figure*} In this paper we introduce the concept for such a device, which combines optical and acoustic modes from separate spatial ports into a single spatial output port: an acoustic-optical multiplexer (AO mux). We demonstrate a simple implementation of an AO mux using only silicon (Si) and silica (SiO$_2$) so as to be compatible with the standard 220 nm SOI silicon photonics platform.
Using COMSOL for acoustic simulations and the Lumerical finite-difference time-domain (FDTD) solver for optical simulations, we evaluate the device's performance to show that an AO mux is feasible without large acoustic or optical insertion loss, cross-talk, or reflections. This AO mux is the first design capable of coupling an external acoustic transducer to a non-resonant, guided acoustic mode co-propagating with an optical mode without disturbing the optical characteristics, and will enable more efficient transducer-based integrated AO devices. \section{Device Overview} \label{sec:design} An AO mux requires, in the most basic sense, separate input ports for optical and acoustic modes and an output port which contains both optical and acoustic modes. A simple circuit schematic of an AO mux with ports is shown in the inset of Fig.~\ref{fig:overview}(a), where the two spatially separated input ports denoted as Ports 1 and 2 are multiplexed into a single output port, Port 3. Port 1 contains the optical mode, potentially including many waveguide and frequency modes, and Port 2 contains the acoustic mode(s). We emphasize that different spatial, rather than modal, ports are required precisely because there is no extant on-chip device capable of separating or combining acoustic modes. An AO mux, operating in reverse as an AO demux, provides exactly this functionality. The AO mux should ideally be both optically and acoustically transparent, i.e. it should be without reflection, avoid inducing cross-talk between optical modes or acoustic modes, direct all optical and acoustic energy without loss into the output port, and be sufficiently short that AO interactions within the AO mux can be neglected. To demonstrate the viability of such an AO mux, and provide a reference point for performance, we choose a simple design which couples a single acoustic mode from a suspended silica beam into a suspended silicon beam with two optical modes [Fig.~\ref{fig:overview}(a)].
The geometry of this design is shown in the left section of Fig.~\ref{fig:overview}(b), while the optical and acoustic paths are shown in the center and right sections of Fig.~\ref{fig:overview}(b). We are particularly interested in designing a `CMOS-compatible' AO mux, motivated by demonstrations of acoustic transduction without piezoelectrics \cite{van2018electrical,weinstein2010resonant}, so we choose the materials and device layer thickness to be compatible with the standard 220 nm SOI integrated photonics platform: silica (blue) and silicon (gray) with thickness $t=220$ nm. Notably, silica is a `high index' material for acoustics but a low index material for optics, and the reverse is true for silicon. Any core-cladding structure which guides an acoustic mode will therefore anti-guide an optical mode, and vice versa. We choose silica beams to couple into/out of the acoustic mode of the silicon beam because they introduce minimal optical perturbation (low optical index) while also guiding the acoustic wave (high acoustic index). A suspended silicon beam is used as the input optical port and output (muxed) port so as to confine the optical modes (in the high index silicon) while the air gap confines the acoustic modes, a commonplace approach in integrated AO devices \cite{van2015net,van2018electrical,li2015nanophotonic,sohn2018time}. Notably, a silicon slot waveguide could co-confine both \cite{van2014analysis}. Addition of suspended silica beams could be relatively straightforward using directive etching techniques such as reactive ion etching or ion beam milling in combination with a sacrificial layer beneath the device layer. An example application of an AO mux is an isolator, where the AO interaction can be used to non-reciprocally convert between two optical modes \cite{poulton2012design,kittlaus2018non,sohn2018time,dostart2018energy}.
We choose to design an AO mux centered at 1550 nm which facilitates this non-reciprocal mode conversion using a silicon beam of width $w_0$ where the TE0 and TE1 optical modes are coupled by the in-plane shearing acoustic mode. This corresponds to a silicon beam width of $w_0=570$ nm where the optical modes have identical group index of 4.11 and phase-matched acoustic wavelength $1.25$ {\textmu}m (frequency 3.12 GHz) \cite{dostart2018energy}. The silica beams have width $w_1=700$ nm and an angle $\theta=11^\circ$ relative to the silicon beam chosen to maximize acoustic directionality ($|S_{32}|^2/|S_{12}|^2$), bandwidth, and transmission ($|S_{32}|^2$). Unfortunately, corners are strongly scattering for acoustics and cannot be avoided in our suspended geometry without adding a third, mutually low index material. In this design, where we have prioritized optical over acoustic transparency, these corners are the fundamental loss mechanism for the acoustic mode and excite undesired acoustic modes radiating into all ports. To ensure large acoustic directionality, i.e. no acoustic radiation into the optical input port, we added silica `siphon' beams which remove the acoustic radiation propagating into the optical input port (with an `internal' port denoted as Port 4). These siphons have the same width and angle relative to the silicon beam as the input silica beams to maximize the absorption of the undesired radiation through reciprocity. To further suppress excitation of undesired modes, we have arranged the AO mux to symmetrically excite (with two separate transducers) the desired transverse shear mode, and we have added fillets to the silica/silicon junctions to minimize the scattering due to edges. \section{Simulation and Results} \label{sec:results} \begin{figure}[t] \centering \includegraphics[width=.6\textwidth]{./figs/AcousticPerformance.pdf} \caption{Acoustic performance of the AO mux. 
(a) Acoustic directivity and insertion loss for varying excitation frequency. (b) Acoustic simulation setup showing locations of ports, excitation planes, and PML locations. (c) Total displacement field at 3.12 GHz (left). Transverse displacement in the output port (gold) and excitation port (green) (right). Inset: 3D rendering of the shear mode in the silicon beam. } \label{fig:acoustic_perf} \end{figure} \begin{figure}[t] \centering \includegraphics[width=.6\textwidth]{./figs/OpticalPerformance.pdf} \caption{Optical performance of the AO mux. (a) Transmission and reflection of the TE0 and TE1 optical modes, with the upper plot showing a zoomed view of the transmission. Mode crosstalk in both transmission and reflection is below $-70\,$dB. (b) Plots of the optical modes at 1550 nm propagating through the AO mux (left) and in cross-sectional view (right).} \label{fig:optical_perf} \end{figure} We demonstrate our AO mux design in simulation; in the acoustic domain, we use the COMSOL 3D Structural Mechanics module to implement the geometry and excite a single-frequency acoustic wave with the Frequency Domain study module to measure the acoustic S-parameters. The results are shown in Fig.~\ref{fig:acoustic_perf}(a) for varying acoustic frequency. A `resonance' at the design frequency can be seen in the directionality, arising from optimal matching between the incident mode in the silica beams and the excited mode in the silicon beam. This matching ensures the backwards-going mode (into Port 1) is strongly suppressed and acoustic insertion loss is minimized. For the chosen geometry, the AO mux achieves a peak acoustic directionality of 60 dB and 0.05 dB insertion loss at the design frequency. It has a bandwidth of 500 MHz within which the directionality is at least 30 dB and the insertion loss varies between 0 and 1 dB.
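As an aside, the phase-matching numbers quoted in the design section are mutually consistent, which can be checked in a few lines. The acoustic velocity below is back-computed from the quoted wavelength and frequency (it is plausible for a shear-like mode in a thin silicon beam, but is not stated in the text), and the effective-index difference is likewise only implied by the numbers.

```python
# Sanity check of the quoted phase-matching numbers.
LAMBDA_AC = 1.25e-6   # phase-matched acoustic wavelength (m), from text
V_AC = 3900.0         # inferred acoustic phase velocity (m/s), ASSUMED

f_ac = V_AC / LAMBDA_AC   # acoustic frequency from v = f * Lambda
print(f"acoustic frequency: {f_ac/1e9:.2f} GHz")   # matches 3.12 GHz

# Intermodal phase matching: the acoustic wavevector q = 2*pi/Lambda
# must equal the optical wavevector mismatch beta_TE0 - beta_TE1,
# i.e. Lambda = lambda_opt / (n_eff0 - n_eff1).
lambda_opt = 1550e-9
delta_n = lambda_opt / LAMBDA_AC   # implied effective-index difference
print(f"implied n_eff0 - n_eff1: {delta_n:.3f}")
```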
The simulation setup is shown in Fig.~\ref{fig:acoustic_perf}(b), where the measurement planes are denoted in gold and the symmetric excitation port is shown in green. A symmetry plane is applied to the center of the geometry, so only half of the geometry needs to be simulated. Perfectly matched layers (PMLs) are used as absorbing layers after each measurement port to avoid artificial reflections. To excite the acoustic wave, a forced displacement is applied to the excitation plane. The total power exiting from all measurement ports is used to calculate the total input power, normalize the power measured at each port, and calculate the acoustic S-parameters. Cross-sectional plots of the acoustic displacement, the shear field in the excitation port, and the shear field at the output port at center frequency are shown in Fig.~\ref{fig:acoustic_perf}(c), showing excitation of the desired mode into Port 3 with low insertion loss and high directionality. For simulations in the optical domain, we import the geometry of the COMSOL simulation into Lumerical 3D FDTD. In two separate simulations, we excite pulses of TE0 and TE1 respectively and monitor Ports 1 and 3 as denoted in Fig.~\ref{fig:acoustic_perf}(b). Using a mode decomposition at both ports, the optical S-parameters are determined. The results are shown in Fig.~\ref{fig:optical_perf}(a), where the transmission and reflection coefficients are plotted across a 200 nm wavelength band centered at 1550 nm. The top plot is vertically scaled to show the insertion losses of both modes. Notably, TE0 has negligible insertion loss while TE1 has slightly larger insertion loss of varying between 1.2 dB at the edges of the optical bandwidth and 0.25 dB at 1550 nm. Both modes experience reflection $<-18$ dB and extremely low cross-talk of less than $-70$ dB, demonstrating that this AO mux design is effectively optically transparent. 
In Fig.~\ref{fig:optical_perf}(b) the modes are plotted in the device cross-section at 1550 nm, showing minimal perturbation by the silica beams. We would like to emphasize that the presented design is not necessarily optimal, only a geometry which performs adequately and highlights the benefits and potential of an AO mux as well as some key considerations. For example, reflections in the TE1 optical mode and corresponding insertion loss could be considered the principal limitation of the current design. This could be improved by introducing the silica beams more adiabatically or widening the silicon beam in advance of the AO mux to better confine the TE1 mode. It should also be noted that the $<1$ dB acoustic insertion loss quoted here is within the mux, whereas additional insertion loss on the order of several dB is to be expected from simply coupling the transducer to the input acoustic port in the silica beam. Further improvements to this design would focus on simulation of the full system, including transducers, to minimize acoustic insertion loss as well as more complex beam shapes to minimize optical reflections. Another potential improvement is to confine optical and acoustic waves in the same cross-section without suspension, already demonstrated in at least one geometry in an SOI platform using a `fin' waveguide \cite{sarabalis2017release}, a rib waveguide with a large height-to-width aspect ratio. Photonic-phononic crystals have, to our knowledge, required suspension for vertical acoustic confinement \cite{balram2016coherent}, but future improvements may allow for full confinement of both optics and acoustics without suspension using e.g. 1D vertical phononic crystals \cite{bahr2015theory}. Such non-suspended geometries could avoid the excess scattering due to corners by using a directional coupler configuration to evanescently couple an acoustic or optical mode from a second waveguide into the mux waveguide. 
In the presence of multiple optical modes, such as the case for our demonstrated design, an acoustic directional coupler would need to be used to avoid introducing optical mode cross-talk. For non-CMOS-compatible material sets, such as chalcogenides which can guide both optical and acoustic waves \cite{poulton2012design}, a cross-section where both optics and acoustics are guided can be easily designed and an AO mux should be well within reach. In this paper we introduce the concept of an acoustic-optical multiplexer, an AO mux, which combines acoustic and optical waves from two separate spatial ports into a single co-guided cross-section for optimal overlap of acoustic and optical modes. The AO mux allows acoustic waves to be driven by a transducer and introduced to a co-guided waveguide without additional optical loss or use of a resonator. As a proof-of-concept device, we propose and demonstrate through simulations a simple AO mux design which multiplexes two optical modes in a silicon beam with an acoustic mode injected symmetrically from two silica beams. This basic geometry enables strong AO interactions without significant optical cross-talk or reflections while injecting the desired acoustic mode with a broad acoustic bandwidth of 500 MHz, $>30$ dB directionality, and at most 1 dB acoustic insertion loss. We expect that AO multiplexers will be a useful tool for designing high-efficiency transducer-based AO devices. \noindent \textbf{\large Funding Information} National Science Foundation Graduate Research Fellowship Grant (1144083); Packard Fellowship for Science and Engineering (2012-38222). \noindent \textbf{\large Acknowledgments} We thank Prof. Kelvin Wagner for discussions on previous demonstrations of AO devices. 
\section{Introduction} \label{sec:intro} Acousto-optic (AO) interactions, referred to in various fields as opto-mechanics, Brillouin or Raman scattering, or photon-phonon interactions \cite{van2016unifying}, provide a useful tool set for designing high-performance optical devices. This is particularly true in integrated photonics, where both optical and acoustic modes can be confined in sub-wavelength cross-sections. Recently, intense interest in bringing AO-based devices to integrated photonics has yielded isolators \cite{poulton2012design,kittlaus2018non,sohn2018time,ruesink2018optical}, amplifiers and lasers \cite{otterstrom2018silicon,gundavarapu2019sub,kittlaus2016large,kabakova2014chalcogenide,choudhary2017advanced,van2015net}, microwave photonic devices \cite{kittlaus2018rf,marpaung2015low,choudhary2017advanced}, modulators \cite{kittlaus2018non,sohn2018time,balram2016coherent,li2015nanophotonic,fan2016integrated}, and even optical phased arrays \cite{sarabalis2018optomechanical}. Of principal importance in any AO device is maximizing the overlap of the optical and acoustic modes and thereby maximizing the AO interaction strength. Confining the acoustic mode to the same cross-section as the optical mode, and propagating along the same axis (co- or counter-propagating), allows \emph{all} of the acoustic and optical energy to be used for the AO interaction. The first demonstration of an AO tunable filter took advantage of such a co-propagating design \cite{harris1969acousto}, while it took five more years for the demonstration of a non-collinear filter \cite{chang1974noncollinear}. Later demonstrations of AO filters in thin-film LiNbO$_3$ \cite{kuhn1971optical}, implanted LiNbO$_3$ waveguides \cite{smith1990integrated}, and optical fiber \cite{kim1997all}, among other platforms, have also benefited from co-propagation of the optical and acoustic modes.
While analogous devices have been demonstrated in integrated platforms where a transducer is used to drive the acoustic wave \cite{van2018electrical,sohn2018time,balram2016coherent,li2015nanophotonic,sarabalis2018optomechanical,fan2016integrated}, this co-propagating arrangement has not been demonstrated without utilizing an optical resonator or inducing excess optical loss due to the presence of the transducer. The issue of efficiently coupling a transducer to a guided acoustic mode in the same cross-section as a guided optical mode is therefore an unsolved problem which, if addressed, would enable more efficient AO interactions in transducer-based designs. Transducer-based approaches cannot directly drive a propagating acoustic wave in the same cross-section as an optical mode because the metal transducers will scatter the optical mode and induce loss. The transducer-based designs demonstrated so far solve this issue by placing the transducer outside the optical waveguide and creating the AO overlap in one of two ways: 1) launching an acoustic wave via a slab towards a waveguide \cite{sohn2018time,balram2016coherent,li2015nanophotonic,sarabalis2018optomechanical} or 2) directly modulating the optical waveguide with standoff electrodes \cite{fan2016integrated,van2018electrical}. The first category can be further subdivided by whether the slab mode couples to a co-propagating, confined acoustic mode \cite{balram2016coherent,sarabalis2018optomechanical} or a `transverse' acoustic mode which is not confined to the waveguide and does not propagate significantly outside the transducer region \cite{sohn2018time,li2015nanophotonic}. Slab-excitation of a co-propagating acoustic mode creates the desired geometry, but requires off-chip optical coupling \cite{balram2016coherent} or optical decay \cite{sarabalis2018optomechanical} in order to create strong AO overlap without the optical mode(s) scattering off the transducer.
Slab-excitation of a waveguide-transverse acoustic mode naturally prohibits coupling to a confined, propagating acoustic mode, since the confined wave by definition does not couple to radiation modes in the slab. Direct modulation of the optical waveguide allows for near-optimal mode overlap and therefore strong AO interactions, but requires large voltages, as the electrodes must be kept sufficiently far from the waveguide, and does not generate propagating acoustic modes, thus requiring transducers as long as the desired interaction region \cite{fan2016integrated,van2018electrical}. In order to enable this co-confined, co-propagating AO configuration, we require a component which couples an external transducer to a confined, propagating acoustic mode in an optical waveguide without excess optical loss, reflection, or modal cross-talk. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{./figs/InjectorOverview.pdf} \caption{Overview of the acoustic-optical multiplexer. (a) Our example design multiplexes two optical modes (Port 1) in a suspended silicon waveguide with an acoustic wave incident symmetrically from suspended silica beams (Port 2) into a single output port, the suspended silicon beam (Port 3). (b) The geometry and port definitions of this architecture are shown along with schematic examples of the optical and acoustic paths. An additional pair of silica beams is used to redirect acoustic energy scattered in the backwards direction (Port 4).} \label{fig:overview} \end{figure*} In this paper we introduce the concept for such a device, which combines optical and acoustic modes from separate spatial ports into a single spatial output port: an acoustic-optical multiplexer (AO mux). We demonstrate a simple implementation of an AO mux using only silicon (Si) and silica (SiO$_2$) so as to be compatible with the standard 220 nm SOI silicon photonics platform.
Using COMSOL for acoustic simulations and the Lumerical finite-difference time-domain (FDTD) solver for optical simulations, we evaluate the device's performance to show that an AO mux is feasible without large acoustic or optical insertion loss, cross-talk, or reflections. This AO mux is the first design capable of coupling an external acoustic transducer to a non-resonant, guided acoustic mode co-propagating with an optical mode without disturbing the optical characteristics, and will enable more efficient transducer-based integrated AO devices. \section{Device Overview} \label{sec:design} An AO mux requires, in the most basic sense, separate input ports for optical and acoustic modes and an output port which contains both optical and acoustic modes. A simple circuit schematic of an AO mux with ports is shown in the inset of Fig.~\ref{fig:overview}(a), where the two spatially separated input ports, denoted as Ports 1 and 2, are multiplexed into a single output port, Port 3. Port 1 contains the optical mode, potentially including many waveguide and frequency modes, and Port 2 contains the acoustic mode(s). We emphasize that different spatial, rather than modal, ports are required precisely because there is no extant on-chip device capable of separating or combining acoustic and optical modes. An AO mux, operating in reverse as an AO demux, provides exactly this functionality. The AO mux should ideally be both optically and acoustically transparent, i.e. it should be without reflection, avoid inducing cross-talk between optical modes or acoustic modes, direct all optical and acoustic energy without loss into the output port, and be sufficiently short that AO interactions within the AO mux can be neglected. To demonstrate the viability of such an AO mux, and provide a reference point for performance, we choose a simple design which couples a single acoustic mode from a suspended silica beam into a suspended silicon beam with two optical modes [Fig.~\ref{fig:overview}(a)].
The geometry of this design is shown in the left section of Fig.~\ref{fig:overview}(b), while the optical and acoustic paths are shown in the center and right sections of Fig.~\ref{fig:overview}(b). We are particularly interested in designing a `CMOS-compatible' AO mux, motivated by demonstrations of acoustic transduction without piezoelectrics \cite{van2018electrical,weinstein2010resonant}, so we choose the materials and device layer thickness to be compatible with the standard 220 nm SOI integrated photonics platform: silica (blue) and silicon (gray) with thickness $t=220$ nm. Notably, silica is a `high index' material for acoustics but a low index material for optics, and the reverse is true for silicon. Any core-cladding structure which guides an acoustic mode will therefore anti-guide an optical mode, and vice versa. We choose silica beams to couple into/out of the acoustic mode of the silicon beam because they introduce minimal optical perturbation (low optical index) while also guiding the acoustic wave (high acoustic index). A suspended silicon beam is used as the input optical port and the output (muxed) port so as to confine the optical modes (in the high index silicon) while the air gap will confine the acoustic modes, a commonplace approach in integrated AO devices \cite{van2015net,van2018electrical,li2015nanophotonic,sohn2018time}. Notably, a silicon slot waveguide could co-confine both \cite{van2014analysis}. Addition of suspended silica beams could be relatively straightforward using directional etching techniques such as reactive ion etching or ion beam milling in combination with a sacrificial layer beneath the device layer. An example application of an AO mux is an isolator, where the AO interaction can be used to non-reciprocally convert between two optical modes \cite{poulton2012design,kittlaus2018non,sohn2018time,dostart2018energy}.
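The inverted role of the two materials noted above can be made concrete with a quick numerical sketch. The material values below are standard handbook numbers assumed here for illustration (they are not taken from this paper): the acoustic `index' scales inversely with the sound velocity, so the slower silica plays the high-index role for acoustics while silicon does so for optics.

```python
# Illustrative material parameters (handbook values, assumed here):
n_opt = {"Si": 3.48, "SiO2": 1.44}        # refractive indices near 1550 nm
v_shear = {"Si": 5840.0, "SiO2": 3760.0}  # approximate shear sound speeds [m/s]

# Optics: silicon is the high-index (guiding) core material.
assert n_opt["Si"] > n_opt["SiO2"]

# Acoustics: the "index" goes as 1/velocity, so silica is the guiding material.
n_ac = {m: v_shear["Si"] / v for m, v in v_shear.items()}
assert n_ac["SiO2"] > n_ac["Si"]
print(f"relative acoustic index of SiO2: {n_ac['SiO2']:.2f}")  # → 1.55
```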
We choose to design an AO mux centered at 1550 nm which facilitates this non-reciprocal mode conversion using a silicon beam of width $w_0$ where the TE0 and TE1 optical modes are coupled by the in-plane shearing acoustic mode. This corresponds to a silicon beam width of $w_0=570$ nm where the optical modes have identical group indices of 4.11 and a phase-matched acoustic wavelength of $1.25$ {\textmu}m (frequency 3.12 GHz) \cite{dostart2018energy}. The silica beams have width $w_1=700$ nm and an angle $\theta=11^\circ$ relative to the silicon beam, chosen to maximize acoustic directionality ($|S_{32}|^2/|S_{12}|^2$), bandwidth, and transmission ($|S_{32}|^2$). Unfortunately, corners are strongly scattering for acoustics and cannot be avoided in our suspended geometry without adding a third, mutually low index material. In this design, where we have prioritized optical over acoustic transparency, these corners are the fundamental loss mechanism for the acoustic mode and excite undesired acoustic modes radiating into all ports. To ensure large acoustic directionality, i.e. no acoustic radiation into the optical input port, we added silica `siphon' beams which remove the acoustic radiation propagating into the optical input port (with an `internal' port denoted as Port 4). These siphons have the same width and angle relative to the silicon beam as the input silica beams to maximize the absorption of the undesired radiation through reciprocity. To further suppress excitation of undesired modes, we have arranged the AO mux to symmetrically excite (with two separate transducers) the desired transverse shear mode, and we have added fillets to the silica/silicon junctions to minimize the scattering due to edges. \section{Simulation and Results} \label{sec:results} \begin{figure}[t] \centering \includegraphics[width=.6\textwidth]{./figs/AcousticPerformance.pdf} \caption{Acoustic performance of the AO mux.
(a) Acoustic directivity and insertion loss for varying excitation frequency. (b) Acoustic simulation setup showing locations of ports, excitation planes, and PML locations. (c) Total displacement field at 3.12 GHz (left). Transverse displacement in the output port (gold) and excitation port (green) (right). Inset: 3D rendering of shear mode in the silicon beam. } \label{fig:acoustic_perf} \end{figure} \begin{figure}[t] \centering \includegraphics[width=.6\textwidth]{./figs/OpticalPerformance.pdf} \caption{Optical performance of the AO mux. (a) Transmission and reflection of the TE0 and TE1 optical modes, with the upper plot showing a zoomed view of the transmission. Mode crosstalk in both transmission and reflection is below $-70\,$dB. (b) Plots of the optical modes at 1550 nm propagating through the AO mux (left) and in cross-sectional view (right).} \label{fig:optical_perf} \end{figure} We demonstrate our AO mux design in simulation; in the acoustic domain, we use the COMSOL 3D Structural Mechanics module to implement the geometry and excite a single-frequency acoustic wave with the Frequency Domain study module to measure the acoustic S-parameters. The results are shown in Fig.~\ref{fig:acoustic_perf}(a) for varying acoustic frequency. A `resonance' in the directionality appears at the design frequency, arising from optimal matching between the incident mode in the silica beams and the excited mode in the silicon beam. This matching ensures the backwards-going mode (into Port 1) is strongly suppressed and the acoustic insertion loss is minimized. For the chosen geometry, the AO mux achieves a peak acoustic directionality of 60 dB and an insertion loss of 0.05 dB at the design frequency. It has a bandwidth of 500 MHz within which the directionality is at least 30 dB and the insertion loss varies between 0 and 1 dB.
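The acoustic figures of merit above follow from the simulated S-parameters by simple post-processing: each port's exiting power is normalized by the total, and the resulting $|S|^2$ ratios are converted to dB. A minimal sketch, with hypothetical port powers chosen only to reproduce the quoted peak numbers (Port 1: backwards radiation; Port 3: forward transmission; Port 4: siphon loss):

```python
import math

def s_params_from_powers(port_powers):
    """Estimate |S_i2|^2 by normalizing each port's exiting power by the
    total power leaving all measurement ports (assumes negligible loss)."""
    total = sum(port_powers.values())
    return {port: P / total for port, P in port_powers.items()}

# Hypothetical exiting powers (arbitrary units) at the design frequency:
powers = {1: 0.9886e-6, 3: 0.9886, 4: 0.0114}
S = s_params_from_powers(powers)

directionality_dB = 10 * math.log10(S[3] / S[1])  # |S32|^2 / |S12|^2
insertion_loss_dB = -10 * math.log10(S[3])

print(round(directionality_dB), round(insertion_loss_dB, 2))  # → 60 0.05
```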
The simulation setup is shown in Fig.~\ref{fig:acoustic_perf}(b), where the measurement planes are denoted in gold and the symmetric excitation port is shown in green. A symmetry plane is applied to the center of the geometry, so only half of the geometry needs to be simulated. Perfectly matched layers (PMLs) are used as absorbing layers after each measurement port to avoid artificial reflections. To excite the acoustic wave, a forced displacement is applied to the excitation plane. The total power exiting from all measurement ports is used to calculate the total input power, normalize the power measured at each port, and calculate the acoustic S-parameters. Cross-sectional plots of the acoustic displacement, the shear field in the excitation port, and the shear field at the output port at the center frequency are shown in Fig.~\ref{fig:acoustic_perf}(c), showing excitation of the desired mode into Port 3 with low insertion loss and high directionality. For simulations in the optical domain, we import the geometry of the COMSOL simulation into Lumerical 3D FDTD. In two separate simulations, we excite pulses of TE0 and TE1, respectively, and monitor Ports 1 and 3 as denoted in Fig.~\ref{fig:acoustic_perf}(b). Using a mode decomposition at both ports, the optical S-parameters are determined. The results are shown in Fig.~\ref{fig:optical_perf}(a), where the transmission and reflection coefficients are plotted across a 200 nm wavelength band centered at 1550 nm. The top plot is vertically scaled to show the insertion losses of both modes. Notably, TE0 has negligible insertion loss while TE1 has a slightly larger insertion loss, varying between 1.2 dB at the edges of the optical bandwidth and 0.25 dB at 1550 nm. Both modes experience reflection $<-18$ dB and extremely low cross-talk of less than $-70$ dB, demonstrating that this AO mux design is effectively optically transparent.
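The mode-decomposition step used to extract the optical S-parameters can be illustrated schematically: the field recorded at a monitor is projected onto an orthonormal mode basis, and the squared projection amplitudes give transmission and cross-talk. A pure-Python 1D sketch with hypothetical discretized mode profiles (this is a schematic of the technique, not Lumerical's actual implementation):

```python
def overlap(mode, field):
    """Discrete overlap integral <mode|field> on a common grid."""
    return sum(m.conjugate() * f for m, f in zip(mode, field))

# Two orthonormal "modes" on a four-point grid (illustrative only):
te0 = [0.5, 0.5, 0.5, 0.5]
te1 = [0.5, 0.5, -0.5, -0.5]

# A recorded "field" that is mostly TE0 with a small TE1 admixture:
field = [0.9 * a + 0.01 * b for a, b in zip(te0, te1)]

p_te0 = abs(overlap(te0, field)) ** 2  # transmitted power in TE0
p_te1 = abs(overlap(te1, field)) ** 2  # cross-talk power in TE1
print(p_te0, p_te1)  # ≈ 0.81 and 1e-4, up to rounding
```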
In Fig.~\ref{fig:optical_perf}(b) the modes are plotted in the device cross-section at 1550 nm, showing minimal perturbation by the silica beams. We emphasize that the presented design is not necessarily optimal, only a geometry which performs adequately and highlights the benefits and potential of an AO mux as well as some key considerations. For example, reflections in the TE1 optical mode and the corresponding insertion loss could be considered the principal limitation of the current design. This could be improved by introducing the silica beams more adiabatically or widening the silicon beam in advance of the AO mux to better confine the TE1 mode. It should also be noted that the $<1$ dB acoustic insertion loss quoted here is within the mux, whereas additional insertion loss on the order of several dB is to be expected from simply coupling the transducer to the input acoustic port in the silica beam. Further improvements to this design would focus on simulation of the full system, including transducers, to minimize acoustic insertion loss, as well as on more complex beam shapes to minimize optical reflections. Another potential improvement is to confine the optical and acoustic waves in the same cross-section without suspension, which has already been demonstrated in at least one SOI geometry using a `fin' waveguide \cite{sarabalis2017release}, a rib waveguide with a large height-to-width aspect ratio. Photonic-phononic crystals have, to our knowledge, required suspension for vertical acoustic confinement \cite{balram2016coherent}, but future improvements may allow for full confinement of both optics and acoustics without suspension using e.g. 1D vertical phononic crystals \cite{bahr2015theory}. Such non-suspended geometries could avoid the excess scattering due to corners by using a directional coupler configuration to evanescently couple an acoustic or optical mode from a second waveguide into the mux waveguide.
In the presence of multiple optical modes, as is the case for our demonstrated design, an acoustic directional coupler would need to be used to avoid introducing optical mode cross-talk. For non-CMOS-compatible material sets, such as chalcogenides, which can guide both optical and acoustic waves \cite{poulton2012design}, a cross-section where both optics and acoustics are guided can be easily designed and an AO mux should be well within reach. In this paper we introduced the concept of an acoustic-optical multiplexer, an AO mux, which combines acoustic and optical waves from two separate spatial ports into a single co-guided cross-section for optimal overlap of the acoustic and optical modes. The AO mux allows acoustic waves to be driven by a transducer and introduced to a co-guided waveguide without additional optical loss or the use of a resonator. As a proof-of-concept device, we proposed and demonstrated through simulations a simple AO mux design which multiplexes two optical modes in a silicon beam with an acoustic mode injected symmetrically from two silica beams. This basic geometry enables strong AO interactions without significant optical cross-talk or reflections while injecting the desired acoustic mode with a broad acoustic bandwidth of 500 MHz, $>30$ dB directionality, and at most 1 dB acoustic insertion loss. We expect that AO multiplexers will be a useful tool for designing high-efficiency transducer-based AO devices. \noindent \textbf{\large Funding Information} National Science Foundation Graduate Research Fellowship Grant (1144083); Packard Fellowship for Science and Engineering (2012-38222). \noindent \textbf{\large Acknowledgments} We thank Prof. Kelvin Wagner for discussions on previous demonstrations of AO devices. \bibliographystyle{ieeetr}
\section{Introduction} Since their discovery in 1991 \cite{Iijima}, carbon nanotubes (CNT) have been under close scrutiny due to their striking properties, which are of great interest not only for nanotechnology but also for fundamental physics. A carbon nanotube, which can be regarded as a tiny cylinder rolled up from a graphene sheet, is a good candidate to study electronic properties in one-dimensional (1D) systems, where electron-electron interactions are substantially important. \newline CNT can be synthesized as single-walled tubes (SWNT) or multiwalled tubes (MWNT) consisting of two or more concentric shells. SWNT can also be assembled to form ropes of ordered parallel tubes arranged in a triangular lattice \cite{Dressel,Jounet,Thess}. The nanotubes, which are nearly of the same diameter, can have different kinds of helicities, but in general $\frac 1 3$ of them are metallic \cite{Ferrier04,Ferrier06}.\newline The transport properties of the rope are found to be strongly dependent on the amount of disorder within the tubes \cite{Maarouf,Tunney}. It has been reported that the intertube electronic transfer is enhanced in the presence of disorder, leading to a charge carrier delocalization \cite{Ferrier}. This feature raises the question of whether such a disorder-induced intertube coupling can also be observed for the superconducting order in ropes of CNT.\ The first superconducting signature was observed, in 1998, as a proximity effect in isolated metallic bundled SWNT connected to superconducting leads \cite{Kasumov98,Morpurgo}. Later on, intrinsic superconductivity was reported in ropes of CNT with a transition temperature T$_c=0.55$ K \cite{Ferrier06,Kociak03,Kasumov03}.\ Ferrier {\it et al.} \cite{Ferrier06} studied the dependence of the superconducting transition temperature on the number of metallic tubes included in the rope and on the amount of disorder. They found that superconductivity arises only in ropes with more than 100 tubes.
However, the most striking result of Ref.~\cite{Ferrier06} is that disorder, contrary to what is expected, may induce superconductivity: the larger the amount of disorder, the stronger the superconducting correlations. Nevertheless, at a very large disorder amplitude, the superconducting order collapses as in other superconducting materials.\ Superconductivity at T$_c$= 15 K has also been reported in zeolite-inserted SWNT of small diameter (0.4 nm) \cite{Tang}. Takesue {\it et al.} \cite{Takesue} found a superconducting transition at T$_c\sim$ 12 K in MWNT encapsulated in zeolites. These relatively high critical temperatures raise the question of the origin of superconductivity in SWNT. How can a superconducting order develop in such low dimensional systems, where thermal fluctuations are expected to destroy any long range ordered state? The surprising observation of superconductivity in CNT has stimulated many theoretical studies seeking the underlying mechanism.\ The realization of a superconducting order in ropes of CNT has been ascribed by Gonzalez \cite{Gonzalez} to the presence of strong attractive electron-electron interactions mediated by phonon exchange. The latter prevails over the repulsive Coulomb interaction in ropes with a hundred or more metallic nanotubes.\newline Other models based on phonon mediated attractive mechanisms have also been proposed \cite{Ferrier04,Sediki,Martino}. In particular, the dependence of the superconducting transition temperature on the number of tubes was well understood in the framework of the model elaborated by Egger and De Martino \cite{Ferrier04,Martino}, who introduced the Josephson couplings between the tubes and the phase fluctuations of the superconducting order parameter.
However, a pronounced discrepancy with the experimental data emerges as the number of tubes embedded in the rope decreases \cite{Ferrier05,BouchiatP}.\ To explain the relatively high superconducting critical temperature reported in SWNT, Sasaki {\it et al.} \cite{Sasaki} have proposed a new mechanism where superconductivity originates from the edge states specific to graphene. The authors argued that superconductivity is due to a superconductor/normal/superconductor junction, where the superconducting phase is realized at the ends of the SWNT while the bulk part of the tube remains metallic.\ Another scenario has been proposed by Zhang {\it et al.} \cite{Zhang} to account for the occurrence of superconductivity in SWNT connected to superconducting or normal electrodes. The authors argued that the SWNT becomes superconducting in the range of 11-30 K due to the presence of van Hove singularities in the electron density of states of the nanotube. Karnaukhov and Diks \cite{Karnau} ruled out the electron-phonon interaction mechanism to explain the formation of the superconducting state in SWNT due to the relatively large value of the critical temperature. The authors suggested an alternative attractive electron-electron interaction originating from a strong hybridization interaction induced by the two-band electronic structure of SWNT.\ Recently, Belluci {\it et al.} \cite{Belluci} have theoretically argued that superconductivity can arise by a purely electronic mechanism in ultrasmall-diameter SWNT and end-bonded multiwalled ones due to the screening of the forward scattering processes.\ More recently, Le Hur {\it et al.} \cite{Lehur} have derived a theoretical model to study the possibility of a superconducting proximity effect in metallic SWNT in the presence of a superconducting substrate.
The authors showed that the latter induces an unconventional double superconducting gap in the tube.\ The outcome of the above-mentioned studies is that the origin of superconductivity in CNT based systems is still under debate, and many related issues are not yet fully resolved. In particular, the role of disorder on the stability of the superconducting phase has not been addressed in previous theoretical studies \cite{BouchiatP}. This is a key point which may shed light on the formation of the superconducting phase in low dimensional systems. \ In this paper, we theoretically investigate the effect of disorder on the superconducting state in a rope of CNT. The model is based on the time dependent Ginzburg-Landau theory, taking into account the superconducting fluctuations, which are substantially important in CNT given their low dimensionality. Ferrier {\it et al.} \cite{Ferrier05,FerrierPhD} have actually observed, in ropes of CNT, a large domain of superconducting fluctuations which extends up to 1 K, namely twice the transition temperature (T$_c$=0.5 K). In the following we present our model and discuss the obtained results in section III. Section IV is devoted to the concluding remarks. \section{The model} We consider a rope of identical SWNT arranged in a triangular lattice characterized by the basis ($\vec{a},\vec{b}$). For simplicity we assume that all the tubes are metallic, while experimentally $\frac 2 3$, on average, are semiconductors. This assumption does not affect the outcomes of the present model, which depends basically on the amount of disorder in the rope and on the intertube Josephson couplings. From the numerical point of view, one should expect that our calculated superconducting critical temperatures may be somewhat overestimated compared to the experimental ones, since we considered that all the neighboring tubes of a given one are metallic.
For a more realistic description, one could consider a random distribution of tubes with different helicities and diameters. Such a complication is actually irrelevant for the physics of superconductivity in ropes of CNT, since the nature of the electronic transport is essentially sensitive to the transverse coupling between the tubes, which depends on the intratube disorder \cite{Ferrier}.\ The superconducting order is stabilized in the rope via Cooper pair tunneling between tubes and inside a single tube. We denote by $J_1$ and $J_2$ the Josephson coupling parameters across the rope to the first and second neighboring tubes, respectively. We assume that the superconducting phase inside a tube is inhomogeneous, with superconducting domains separated by metallic regions. This inhomogeneous structure, which may arise in the presence of impurities, is consistent with the absence of bulk superconductivity in SWNT \cite{Sasaki}. The superconducting domains along the tube ($z$ axis) are coupled by Josephson tunneling parameterized by $J_0$.\\ Given the strong superconducting fluctuations, which extend over a large temperature range around the critical temperature T$_c$, the mean field theory breaks down and one should expect clear deviations from the mean field critical temperature T$_0$.
These fluctuations can be treated within the framework of the time dependent Ginzburg-Landau (TDGL) theory, which has proven to be a reliable tool to study the critical transition region, including superconducting fluctuations, in systems as different as high-T$_c$ superconductors \cite{Puica} and low dimensional organic superconductors \cite{EPL}.\ We start by writing the superconducting free energy $F_s$ of the rope relative to that of the normal state $F_{norm}$: \begin{widetext} \begin{eqnarray} F&=&F_s-F_{norm} =\sum_{i,j,n}\int_{r_1}^{r_2}dx\int_{r_1}^{r_2}dy\int_0^{l_{01}}dz \left[a|\psi_{n,i,j}|^2+\frac{{\hbar}^2}{2m^{\ast}}|\vec{\nabla}\psi_{n,i,j}|^2 \right.\nonumber\\ &+& J_0|\psi_{n,i,j}-\psi_{n+1,i,j}|^2 + J_1|\psi_{n,i,j}-\psi_{\langle n\,i\,j\rangle}|^2 +\left. J_2|\psi_{n,i,j}-\psi_{\langle\langle n,i,j\rangle\rangle}|^2 +\frac b 2 |\psi_{n,i,j}|^4\right], \label{free} \end{eqnarray} \end{widetext} where $i$ and $j$ denote the tube coordinates in the triangular basis ($\vec{a},\vec{b}$), whereas $n$ indicates the position of the superconducting domain along the tube direction $z$. $\psi_{nij}$ is the superconducting order parameter, and $\langle\, \rangle$ and $\langle\langle \; \rangle\rangle$ refer to the first and second neighboring tubes. The coefficients $a$ and $b$ are given by $a=a_0\epsilon$ and $b=\mu_0\kappa^2e_0^2{\hbar}^2/2m^2$, where $a_0={\hbar}^2/2m\xi^2_0$, $\xi_0$ being the superconducting coherence length, $\epsilon=\ln(T/T_0)$, and $\kappa=\frac{\lambda_{\parallel}}{\xi_{\parallel}}$ is the GL parameter. Here $\lambda_{\parallel}$ and $\xi_{\parallel}$ are, respectively, the London penetration depth and the coherence length in the ($\vec{a},\vec{b}$) plane transverse to the tube direction. We take for simplicity $\xi_0=\xi_{\parallel}$.
The Cooper pair is characterized by its electric charge $e_0=2e$ and its effective mass $m=2 m_e$, where $e$ is the unit charge and $m_e$ the electron mass.\newline The superconducting order is assumed to develop inside a tube over a thickness $r_2-r_1$ from the surface. The length of a superconducting domain is denoted $l_{01}$.\newline The Josephson parameters are written as: \begin{eqnarray} &&J_0=\frac{{\hbar}^2}{2m^{\ast} l^2_{02}}\,\exp\left(-\frac{l_e}{L}\right),\quad J_1=\frac{{\hbar}^2}{2m^{\ast}l^2_1}\,\exp\left(-\frac{l_e}{D}\right), \nonumber\\ &&\mathrm{and}\quad J_2=\frac{{\hbar}^2}{2m^{\ast}l^2_2}\,\exp\left(-\frac{l_e}{D}\right), \label{Joseph} \end{eqnarray} where $m^{\ast}$ is the effective pair mass in the superconducting domain and $l_e$ is the mean free path along the tube. $L$ and $D$ are, respectively, the length and the diameter of the rope. We assume for simplicity that all the tubes have the same diameter. $l_1$ ($l_2$) denotes the intertube distance, measured from the tube surface, between first (second) neighboring tubes, while $l_{02}$ is the distance between superconducting domains inside a single tube.\ The natural question that arises concerns the origin of these expressions for the Josephson couplings. The key point is the exponential terms, which enhance the Josephson tunneling as the amount of disorder increases, namely as the mean free path decreases.\newline This idea is based on previous studies dealing with Josephson coupled arrays of n-leg spin ladders \cite{Orgad} and correlated stripes in cuprate superconductors \cite{Kivelson}, which show clear evidence of the drastic effect of disorder on the superconducting state. Kivelson {\it et al.} \cite{Kivelson} have argued that the Josephson coupling between stripes is strongly enhanced by the transverse stripe fluctuations, which promotes the superconducting order.
These fluctuations bring neighboring stripes close together, enhancing the mean value of the Josephson coupling.\ Orgad \cite{Orgad} has shown that such geometrical fluctuations in coupled ladder systems can reduce the suppression of the superconducting correlations due to disorder by increasing the Josephson tunneling between ladders. The dynamics of the ladders reduces the effective disorder strength and makes the superconducting pairing more robust against disorder. The interladder Josephson coupling is found to increase exponentially with the square of the fluctuation amplitude, which enhances the superconducting transition temperature. Orgad \cite{Orgad} considered a Josephson tunneling amplitude depending on the interladder distance as $J_{ij}\sim J_0\exp[-(s+u_i-u_j)/\gamma]$, where $u_i$ and $u_j$ are the deviations of the i$^{th}$ and j$^{th}$ ladders from their static positions, $s$ is the mean spacing of the ladder array and $\gamma$ is a characteristic constant \cite{Orgad}.\newline The basic idea highlighted in Refs.\cite{Orgad,Kivelson} is that the interplay between disorder and the dynamics of the stripes or the ladders is essential for the stability of the superconducting order in cuprate and spin ladder superconductors.\ Keeping this result in mind, let us now return to the rope of CNT. The latter can be described, as proposed by Ferrier {\it et al.} \cite{Ferrier}, by an array of 1D atomic chains lying on a cylinder, where each chain corresponds to a SWNT. The hopping amplitudes along a chain are randomly distributed around a mean value $t_{\parallel}$ with a square distribution of width $\delta t_{\parallel}$. Such bond disorder along the chain may be induced by the dynamics of the tube, as in the case of arrays of spin ladders or stripes.
This leads to a competition between the geometrical fluctuations of the SWNT and the local disorder inside the tubes.\ By analogy with Ref.\cite{Orgad}, the Josephson tunneling between tubes can be written as $J\propto\exp[-d_{ij}/\gamma]$, where $d_{ij}$ is the separation distance between the i$^{th}$ and j$^{th}$ tubes. The exponential term expresses the Cooper pair tunneling probability, which can be averaged over the tubes as $\langle P_{\perp}\rangle=\exp[-\langle d_{\perp}\rangle /\gamma]$, where $\langle d_{\perp}\rangle$ is the average distance between the tubes. In diffusive superconductors, one should expect a dependence of the Josephson couplings on the mean free path, since the superconducting coherence length is governed by the amount of disorder and reads $\xi_c=\sqrt{\frac{\hbar v_F l_e}{\Delta}}$, where $\Delta$ is the superconducting gap and $v_F$ the Fermi velocity \cite{Varlamov}.\ A key question arises at this point concerning the relationship between $\langle d_{\perp}\rangle$ and the intratube mean free path $l_e$, which we address in the following.\ The plane transverse to the rope direction can be regarded as a dirty two dimensional superconductor of mesoscopic size, where the disorder, due to defects or impurities, is localized inside the tubes. In this plane, the tube sections form a sort of disordered clusters embedded in a disorder-free medium. The average distance $\langle d_{\perp}\rangle$ between these clusters is controlled by the dynamics of the tubes, which is strongly dependent on the amount of disorder inside the tubes. In the diffusive regime, the bond disorder due to the geometrical fluctuations of the tubes gives rise to an intertube one-particle hopping integral that increases with the site disorder amplitude originating from impurities and defects inside the tube \cite{Ferrier}.
This means that the intertube distance $\langle d_{\perp}\rangle$ decreases with decreasing intratube mean free path $l_e$. $\langle d_{\perp}\rangle$ is then expected to behave like $l_e$ and may be expressed as an increasing function of $l_e$. We do not claim that the present model provides the exact form of this function; a more detailed analysis based on a microscopic study is needed. Since $\langle d_{\perp}\rangle$, like $l_e$, is a free parameter in our model, we set for simplicity $\langle d_{\perp}\rangle =l_e$. This means that, in the diffusive regime, the mean free paths inside the tube and across the rope are of the same order. This is justified as long as $l_e$ is smaller than the rope diameter $D$, so that the transverse one-particle transport remains diffusive. Actually, this approximation does not affect the overall outcome of our model, but it may yield somewhat larger superconducting critical temperatures compared to the experimental ones. To characterize the electronic transport in disordered mesoscopic systems, one needs to compare the size of the system, which is the rope diameter in this case, to a characteristic mean free path. Given its dependence on the intratube disorder amplitude, $\langle d_{\perp}\rangle$ is a natural parameter to describe the transport regime across the rope. It follows that $\langle d_{\perp}\rangle$ and the rope diameter $D$, which depends on the tube number $N$, are the key parameters for the one-particle transport and for the Cooper pair tunneling across the rope in the diffusive regime. The tunneling probability can then be written as $\langle P_{\perp}\rangle=\exp[-\langle d_{\perp}\rangle /\gamma]= \exp[-l_e/D]$, where the constant $\gamma$, which accounts for the environment between the tubes, is replaced by the rope diameter $D$.
This is made possible since the tube environment is disorder free and depends only on the tube number, which enters through the rope diameter $D$.\ In the absence of site disorder and geometrical fluctuations, namely in a pure static rope, the Josephson couplings to the first and second neighboring tubes read, respectively: \begin{eqnarray} J_1=\frac{{\hbar}^2}{2m^{\ast}l^2_1}\quad\mathrm{and}\quad J_2=\frac{{\hbar}^2}{2m^{\ast}l^2_2}. \label{Joseph2} \end{eqnarray} Such couplings cannot describe the superconducting order in the rope, since they are independent of the rope characteristics, in particular the tube number.\newline In the presence of disorder and geometrical fluctuations of the tubes, the Josephson parameters $J_1$ and $J_2$ given by Eq.\ref{Joseph2} must be modified to account for the average pair tunneling probability across the rope, $\langle P_{\perp}\rangle=\exp[-l_e/D]$, which gives rise to the expressions introduced in Eq.\ref{Joseph}.\ Regarding the intratube Josephson tunneling $J_0$, one can define an average pair hopping probability along the tube, $\langle P_{\parallel}\rangle=\exp[-l_e/L]$, resulting from the geometrical fluctuations of the tube, which yields the expression given in Eq.\ref{Joseph}.\newline It is worth noting that the $J_0$ term is irrelevant for the stability of the superconducting phase, as we show below.\ It follows that the dynamics of the tubes in the rope mitigates the drastic effect of the local disorder on the superconducting order by enhancing the intertube Josephson tunneling amplitudes, which increase with the effective disorder. This is reminiscent of the disorder-induced electronic transverse delocalization in ropes of CNT proposed by Ferrier {\it et al.} \cite{Ferrier}.
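As a rough numerical illustration of the disorder-dressed couplings above, the sketch below evaluates $J_1$ of Eq.\ref{Joseph} for two values of the mean free path. The geometry ($l_1$, $D$) and the mean free paths are illustrative assumptions, not values fitted in this work.

```python
import math

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg; effective pair mass m* = 2 m_e assumed

def j1_diffusive(l_e, D, l1, m_star=2 * M_E):
    """Intertube coupling of Eq. (Joseph): bare scale hbar^2/(2 m* l1^2)
    dressed by the averaged pair tunnelling probability exp(-l_e/D)."""
    return HBAR**2 / (2 * m_star * l1**2) * math.exp(-l_e / D)

# Assumed geometry: intertube gap l1 = 0.35 nm, rope diameter D = 20 nm.
l1, D = 0.35e-9, 20e-9
J_dirty = j1_diffusive(l_e=2e-9, D=D, l1=l1)    # strong intratube disorder
J_clean = j1_diffusive(l_e=15e-9, D=D, l1=l1)   # weak intratube disorder
# In this delocalized regime, MORE disorder means a LARGER coupling:
assert J_dirty > J_clean
```

The sketch only encodes the monotonicity argued in the text: shortening $l_e$ raises $\exp(-l_e/D)$ and hence the intertube coupling.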
We suggest that this delocalization scenario holds for Cooper pairs as well, owing to the tube dynamics discussed above.\\ Let us now turn to the superconducting order parameter, whose critical dynamics satisfies the TDGL equation: \begin{eqnarray} \Gamma^{-1}_0\frac{\partial \psi_{nij}}{\partial t}=- \frac{\partial F}{\partial\psi^{\ast}_{nij}}+\zeta_{nij}(\vec{r},t) \end{eqnarray} Here $\Gamma^{-1}_0=\pi{\hbar}^3/16 m \xi^2_{\parallel} k_BT$ is the relaxation rate of the order parameter, whereas $\zeta_{nij}(\vec{r},t)$ are the Langevin forces describing the thermodynamical fluctuations, which obey the Gaussian white-noise law \cite{Puica}: \begin{eqnarray*} \langle \zeta_{nij}(\vec{r},t) \zeta^{\ast}_{n^{\prime}i^{\prime}j^{\prime}}(\vec{r}\;^{\prime},t^{\prime}) \rangle=2 \Gamma^{-1}_0k_BT\delta(\vec{r}-\vec{r}\;^{\prime})\delta(t-t^{\prime}) \end{eqnarray*} with $\vec{r}=(X+id,Y+jd,Z+nl_0)$ and $\vec{r}\;^{\prime}=(X+i^{\prime}d,Y+j^{\prime}d,Z+n^{\prime}l_0)$, where $d=l_1+d_0$ and $l_0=l_{01}+l_{02}$, $d_0$ being the tube diameter.
$X$, $Y$ and $Z$ are the coordinates of a point belonging to a superconducting domain of a SWNT, of length $l_{01}$ along the $z$ direction and of thickness $r_2-r_1$.\ Taking the derivative of the free energy (Eq.\ref{free}) with respect to $\psi^{\ast}_{nij}$, the TDGL equation becomes: \begin{widetext} \begin{eqnarray} &&\zeta_{nij}(\vec{r},t)= \Gamma^{-1}_0 \frac{\partial \psi_{n,i,j}}{\partial t}+ a\psi_{n,i,j}-\frac{{\hbar}^2}{2m^{\ast}} \Delta \psi_{n,i,j} +b\langle|\psi_{n,i,j}|^2\rangle \psi_{n,i,j} +6\,J_1\psi_{n,i,j}\nonumber\\ &-&J_1\left( \psi_{n,i+1,j}+\psi_{n,i-1,j} +\psi_{n,i,j+1}+\psi_{n,i,j-1}+\psi_{n,i+1,j-1}+\psi_{n,i-1,j+1}\right)\nonumber\\ &+& J_2\left(6\psi_{n,i,j}-\psi_{n,i+2,j-1}-\psi_{n,i-2,j+1} -\psi_{n,i+1,j+1}-\psi_{n,i-1,j-1}\right) \nonumber\\ &-& J_2\left(\psi_{n,i+1,j-2}+\psi_{n,i-1,j+2}\right) +J_0\left( 2\psi_{n,i,j}-\psi_{n+1,i,j}-\psi_{n-1,i,j}\right) \label{zeta} \end{eqnarray} \end{widetext} where we adopt the Hartree approximation for the quartic term, as in Ref.\onlinecite{Puica}, which amounts to replacing the term $b|\psi_{nij}|^2\psi_{nij}$ by $b\langle|\psi_{nij}|^2\rangle \psi_{nij}$. This approximation leads to a linear problem with a reduced temperature: \begin{equation} \tilde{\epsilon}=\epsilon+\frac b a \langle|\psi_{nij}|^2\rangle, \label{self} \end{equation} which is determined self-consistently together with $\langle|\psi_{nij}|^2\rangle$. The superconducting critical temperature is defined by $\tilde{\epsilon}(T=T_c)=0$ \cite{Puica}.\newline To solve this equation, we introduce the Fourier transform of $\psi_{nij}$ as \begin{eqnarray*} \psi_{nij}(\vec{r},t)&=&\int\frac{d^3\vec{k}}{(2\pi)^3}\psi(\vec{k},t) {\rm e}^{-i\vec{k}\cdot
\vec{r}} \end{eqnarray*} where \begin{eqnarray*} &&\psi(\vec{k},t)=\sum_{nij}\int_{X_1}^{X_2}dX\int_{Y_1}^{Y_2}dY \int_{0}^{l_{01}}dZ \times\nonumber\\ &&\psi_{nij}(X+id,Y+jd,Z+nl_0,t) {\rm e}^{i(X+id)k_x}\; {\rm e}^{i(Y+jd)k_y}\;{\rm e}^{i(Z+nl_0)k_z}, \end{eqnarray*} where $X_1$ and $X_2$ ($Y_1$ and $Y_2$) are the limiting values of $X$ ($Y$) in a superconducting domain in the ($\vec{a},\vec{b}$) plane. Taking the Fourier transform of Eq.\ref{zeta}, we obtain \begin{widetext} \begin{eqnarray} \zeta(\vec{k},t)= &&\left\{\Gamma^{-1}_0 \frac{\partial}{\partial t}+ \frac{{\hbar}^2k^2}{2m^{\ast}} +\tilde{a}+2J_0\left( 1-\cos(k_zl_0)\right) +2J_1\left[3-\cos(dk_x)-\cos(dk_y)-\cos\left(d(k_x-k_y)\right)\right]\right.\nonumber\\ &+&\left.2J_2\left[3-\cos\left(d(2k_x-k_y)\right)-\cos\left(d(k_x+k_y)\right)-\cos\left(d(k_x-2k_y)\right)\right]\right\}\psi(\vec{k},t), \label{zetak} \end{eqnarray} \end{widetext} with $\tilde{a}=a+b\langle|\psi_{nij}|^2\rangle$, and the correlation relation satisfied by $\zeta(\vec{k},t)$: \[ \langle \zeta(\vec{k},t)\zeta^{\ast}(\vec{k}^{\prime},t^{\prime})\rangle=2\Gamma^{-1}_0k_BT(2\pi)^3\delta(\vec{k}-\vec{k}^{\prime})\delta(t-t^{\prime}) \] Equation \ref{zetak} can be solved using the Green function method proposed by Puica and Lang \cite{Puica} for layered superconductors. We define the Green function $R(\vec{k},t,k^{\prime}_z,t^{\prime})$ through the relation: \begin{eqnarray} &&\left[\Gamma^{-1}_0 \frac{\partial}{\partial t}+ \frac{{\hbar}^2k^2_z}{2m^{\ast}} +2J_0\left(1-\cos(k_zl_0)\right)+a_1\right]\times\nonumber\\ &&R(\vec{k},t,k^{\prime}_z,t^{\prime})=\delta(k_z-k^{\prime}_z)\delta(t-t^{\prime}), \label{Rk} \end{eqnarray} where \begin{widetext} \begin{eqnarray} a_1&=&\tilde{a}+\frac{{\hbar}^2(k^2_x+k^2_y)}{2m^{\ast}} +2J_1\left[3-\cos(dk_x)-\cos(dk_y)-\cos\left(d(k_x-k_y)\right)\right]\nonumber\\ &+&2J_2\left[3-\cos\left(d(2k_x-k_y)\right)-\cos\left(d(k_x+k_y)\right)-\cos\left(d(k_x-2k_y)\right)\right].
\end{eqnarray} \end{widetext} We also introduce the Fourier transform of $R(\vec{k},t,k^{\prime}_z,t^{\prime})$ with respect to time as: \begin{eqnarray} R(\vec{k},\omega,k^{\prime}_z,t^{\prime})=\int dt R(\vec{k},t,k^{\prime}_z,t^{\prime}) {\rm e}^{i\omega (t-t^{\prime})}, \end{eqnarray} which can be deduced from Eq.\ref{Rk} as: \begin{eqnarray} &&R(\vec{k},\omega,k^{\prime}_z,t^{\prime})=\delta(k_z-k^{\prime}_z)\times\nonumber\\ &&\left[-i\omega \Gamma^{-1}_0 + \frac{{\hbar}^2k^2_z}{2m^{\ast}} +2J_0\left(1-\cos(k_zl_0)\right)+a_1\right]^{-1}. \label{TFRk} \end{eqnarray} The solution $\psi(\vec{k},t)$ of Eq.\ref{zetak} can be expressed in terms of the Green function $R(\vec{k},t,k^{\prime}_z,t^{\prime})$ as \cite{Puica} \[ \psi(\vec{k},t)=\int dt^{\prime}\int dk^{\prime}_z R(\vec{k},t,k^{\prime}_z,t^{\prime}) \zeta(k_x,k_y,k^{\prime}_z,t^{\prime}). \] Given Eq.\ref{TFRk}, we obtain: \begin{eqnarray} &&\psi(\vec{k},t)=\int_0^{\infty}d\tau \zeta(\vec{k},t-\tau)\int d\omega {\rm e}^{i\omega \tau}\times\nonumber\\ &&\left[-i\omega \Gamma^{-1}_0 + \frac{{\hbar}^2k^2_z}{2m^{\ast}} +2J_0\left(1-\cos(k_zl_0)\right)+a_1\right]^{-1} \end{eqnarray} with $\tau=t-t^{\prime}$ and the following correlation relation: \begin{eqnarray} \langle\psi(\vec{k},t)\psi^{\ast}(\vec{k}^{\prime},t)\rangle\propto\delta(\vec{k}-\vec{k}^{\prime}) \label{correlation} \end{eqnarray} To solve Eq.\ref{self}, one needs to compute $\langle|\psi_{nij}|^2\rangle$, which, given Eq.\ref{correlation}, can be written as: \begin{eqnarray} &&\langle|\psi_{nij}|^2\rangle=4\pi \Gamma^{-1}_0k_BT\int d\omega\int d^3\vec{k}\times\nonumber\\ &&\left\{(\omega \Gamma^{-1}_0 )^2+ \left[\frac{{\hbar}^2k^2_z}{2m^{\ast}} +2J_0\left(1-\cos(k_zl_0)\right)+a_1\right]^{2}\right \}^{-1} \end{eqnarray} The critical temperature can now be deduced from Eq.\ref{self} by setting $\tilde{\epsilon}(T=T_c)=0$, which yields Eq.\ref{Tc} given in the Appendix. In the next section we discuss the numerical results.
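The structure of the Hartree self-consistency in Eq.\ref{self} can be illustrated with a deliberately simplified sketch. The fluctuation average $\frac{b}{a}\langle|\psi_{nij}|^2\rangle$ is replaced here by a made-up monotone kernel, so the numbers are purely illustrative, but the logic is the same: solve for $\tilde{\epsilon}$ at each $T$ and locate $T_c$ from $\tilde{\epsilon}(T_c)=0$.

```python
import math

def fluct_term(eps, c=0.3):
    """Toy stand-in for (b/a) <|psi|^2>: finite at eps = 0 and decreasing
    in eps, which is the only feature the sketch relies on."""
    return c / (1.0 + eps)

def eps_tilde(T, T0, c=0.3):
    """Solve eps = ln(T/T0) + fluct_term(eps) by bisection; this mimics
    the self-consistent reduced temperature of the Hartree approximation."""
    f = lambda e: e - math.log(T / T0) - fluct_term(e, c)
    lo, hi = -0.99, 10.0          # f is increasing on (-1, infinity)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# With this toy kernel, eps_tilde vanishes at Tc = T0 exp(-c) < T0:
# fluctuations push the transition below the mean-field temperature T0.
T0, c = 0.7, 0.3
Tc = T0 * math.exp(-c)
assert abs(eps_tilde(Tc, T0, c)) < 1e-6
assert eps_tilde(T0, T0, c) > 0.0
```

The actual computation uses the fluctuation integral above rather than this toy kernel, but the fixed-point structure is identical.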
\section{Results and discussion} We have solved Eq.\ref{Tc} numerically, and the results are depicted in Fig.1, which shows the superconducting transition temperature T$_c$ as a function of the number $N$ of tubes forming the rope. It is worth noting that $N$ enters through the rope diameter $D$ as $D=\sqrt{N}(d_0+e)$, where $d_0$ and $e$ are respectively the tube diameter and the intertube distance \cite{Kasumov03}.\ As shown in Fig.1, T$_c$ is strongly enhanced by increasing $N$, but this enhancement slows down for $N$ larger than 100, with a tendency to saturation, which is reminiscent of the experimental results \cite{Ferrier04,FerrierPhD}. This behavior reflects the dimensionality of the superconducting phase appearing in the rope. By increasing $N$, the 3D character of the superconducting state is enhanced, and T$_c$ likewise. However, for larger $N$ ($N\sim 200$), the rope can be regarded as a 3D system and a further increase of $N$ is irrelevant for the superconducting order, which explains the saturation of T$_c$ at large $N$.\ \begin{figure}[t] \begin{center} \includegraphics[width=7cm,height=5cm]{rope_Fig1.eps} \end{center} \caption{Superconducting transition temperature as a function of the number $N$ of tubes. The calculations are done in the one-particle delocalized regime and for $\lambda=0.6 \mu$m, $\xi_0=0.1\mu$m and $L=1.4\mu$m \cite{Ferrier05}. $\lambda$ and $\xi_0$ are respectively the penetration depth and the coherence length in the superconducting domain, while $L$ is the rope length.} \label{fig1} \end{figure} A noteworthy question concerns the interplay between superconductivity and the 1D character of a single SWNT: could superconductivity prevail over the low dimensionality of such a system? Answering it amounts to keeping only the $J_0$ term in our model.
In such a case, numerical calculations show that T$_c$ is at most of the order of 1 mK, which explains the difficulty of observing intrinsic superconductivity in SWNT, as reported in Refs.\onlinecite{Ferrier,Ferrier05}. The superconducting phase can actually develop in ropes containing about one hundred metallic tubes, as shown by earlier studies \cite{Gonzalez,Martino}. The limiting tube number in our model is then N=13 if one includes the first and second neighbors of a given tube.\ In Fig.2, we give, for different tube numbers, the dependence of T$_c$ on the inverse of the mean free path, which mimics the amount of local disorder inside the tube. Remarkably, Fig.2 shows that disorder promotes the superconducting order, as found experimentally \cite{Ferrier,Ferrier05}. This behavior is due to the intertube disorder-induced delocalization of the Cooper pairs. Actually, the intertube pair delocalization is expected to develop in the electronic diffusive regime, where disorder can induce transverse hopping processes across the rope \cite{Ferrier}. \ It is worth noting that the values of the critical temperature reported in Figs.1 and 2 may be somewhat overestimated, since we have considered all the tubes to be metallic. In a more realistic model, one should take, on average, two metallic neighboring tubes for each tube, since in most cases $\frac 1 3$ of the tubes within a rope are metallic. \begin{figure}[t] \begin{center} \includegraphics[width=7cm,height=5cm]{rope_Fig2.eps} \end{center} \caption{Superconducting transition temperature as a function of the inverse of the mean free path in a rope of SWNT for different tube numbers. The calculations are done in the one-particle delocalized regime and for the same data as in Fig.1.} \label{fig2} \end{figure} One should also emphasize the role of the Josephson tunneling $J_2$ between second neighboring tubes in the superconducting order.
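Both geometric statements above, the minimal tube number $N=13$ and the six second-neighbor terms carried by $J_2$ in Eq.\ref{zeta}, follow from counting neighbor shells on the triangular lattice. A quick check (an illustrative script, not part of the original numerics):

```python
import math

# Triangular lattice with unit lattice constant, basis vectors A and B.
A = (1.0, 0.0)
B = (0.5, math.sqrt(3.0) / 2.0)

def pos(i, j):
    """Cartesian position of lattice site i*A + j*B."""
    return (i * A[0] + j * B[0], i * A[1] + j * B[1])

def shell_count(dist, span=3, tol=1e-9):
    """Number of lattice sites at distance `dist` from the origin."""
    return sum(1 for i in range(-span, span + 1)
                 for j in range(-span, span + 1)
               if abs(math.hypot(*pos(i, j)) - dist) < tol)

n_first = shell_count(1.0)              # first-neighbor shell
n_second = shell_count(math.sqrt(3.0))  # second-neighbor shell
assert (n_first, n_second) == (6, 6)
# central tube + first shell + second shell = 13
assert 1 + n_first + n_second == 13
```

The six sites of the second shell are exactly the index offsets $(\pm2,\mp1)$, $(\pm1,\pm1)$, $(\pm1,\mp2)$ appearing in the $J_2$ terms of Eq.\ref{zeta}.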
Numerical results show that $T_c$ is reduced by $20\%$ if $J_2$ is neglected. Actually, the second neighboring tubes should be involved in the tunneling processes, since they are within the same range of reach as the first neighbors \cite{BouchiatP}. This is due to the geometry of the rope, characterized by a tube diameter of 3 nm and an intertube distance of 0.35 nm.\\ A question worth stressing regards the saturation of $T_c$ at large disorder amplitude in Fig.2. This feature, which is due to the expression of the intertube couplings given by Eq.\ref{Joseph}, does not agree with the experimental data, which rather show a collapse of the superconducting phase at a large enough amount of disorder \cite{Kasumov03,FerrierPhD}. This discrepancy originates from the nature of the electronic transport regime. Our results are derived within the delocalized diffusive regime, characterized by disorder-induced transverse electronic hopping \cite{Ferrier}. However, in the large disorder range, a localized regime develops where the electrons are confined within individual tubes. The Josephson couplings given by Eq.\ref{Joseph} are no longer reliable since, in this case, the intratube disorder overcomes the geometrical fluctuations of the tubes, leading to the suppression of the intertube pair tunneling. The latter is expected to be strongly reduced by the electron localization, which can be roughly described by an $\exp\left(-\frac{L}{\xi}\right)$ behavior for the intertube electron hopping, where $L$ is the rope length and $\xi=2Nl_e$ is the localization length \cite{Ferrier}, $N$ and $l_e$ being the number of metallic tubes and the mean free path inside the tube.
As a consequence, one can assume the following Josephson couplings: \begin{eqnarray} J_1=\frac{{\hbar}^2}{2m^{\ast}l^2_1}\,\exp\left(-\frac{L}{\xi}\right) \quad\mathrm{and}\quad J_2=\frac{{\hbar}^2}{2m^{\ast}l^2_2}\,\exp\left(-\frac{L}{\xi}\right), \label{Joseph3} \end{eqnarray} which express the disorder-induced Cooper pair localization resulting from the electronic localization.\ Fig.3 shows the superconducting transition temperature $T_c$ as a function of the inverse of the mean free path $l_e$, which is a measure of the disorder amplitude. The calculations are done using Eq.\ref{Joseph3}. In this localized regime, $T_c$ is reduced by increasing disorder, due to the suppression of the intertube tunneling. However, the tube number $N$ acts, as in the delocalized regime, to the benefit of the superconducting phase: increasing $N$ furthers the establishment of a 3D electronic transport regime by increasing the localization length $\xi$. The effect of disorder is particularly important in ropes with a small tube number, where the 1D character prevails over the formation of a 3D superconducting order.\ \begin{figure}[t] \begin{center} \includegraphics[width=7cm,height=5cm]{rope_Fig3.eps} \end{center} \caption{Superconducting transition temperature as a function of the inverse of the mean free path in ropes of N=30 and N=70 tubes. $T_c$ is calculated in the localized regime, where the Josephson couplings are given by Eq.\ref{Joseph3}. The data used are the same as in Figs.1 and 2.} \label{fig3} \end{figure} The superconducting behaviors in the delocalized and localized regimes (Figs.2 and 3) are reminiscent of those obtained in a 2D array of stripes \cite{Kivelson}. In such systems, the superconducting transition temperature is found to increase with the transverse stripe fluctuations up to a critical value above which it drops.
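A sketch of the localized-regime couplings of Eq.\ref{Joseph3}, with illustrative (assumed) numbers, makes the two competing trends explicit: disorder now suppresses the coupling through $\exp(-L/\xi)$, while the tube number partially compensates through $\xi=2Nl_e$.

```python
import math

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg; pair effective mass m* = 2 m_e assumed

def j1_localized(l_e, N, L, l1=0.35e-9, m_star=2 * M_E):
    """J1 of Eq. (Joseph3): bare scale suppressed by exp(-L/xi),
    with localization length xi = 2 N l_e."""
    xi = 2.0 * N * l_e
    return HBAR**2 / (2 * m_star * l1**2) * math.exp(-L / xi)

L = 1.4e-6   # rope length quoted with Fig.1
# In the localized regime, MORE disorder (smaller l_e) SUPPRESSES J1:
assert j1_localized(2e-9, N=70, L=L) < j1_localized(10e-9, N=70, L=L)
# while a larger tube number mitigates the suppression at fixed disorder:
assert j1_localized(2e-9, N=70, L=L) > j1_localized(2e-9, N=30, L=L)
```

This is the opposite monotonicity to the diffusive regime of Eq.\ref{Joseph}, which is what produces the non-monotonic $T_c(1/l_e)$ discussed below.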
This happens when the system undergoes a phase transition to an isotropic state where the stripe structure is lost.\\ A difficult question raised by Figs.2 and 3 concerns the extent of the delocalized regime: at which disorder amplitude is the dynamics of the tubes frozen, so that the intertube Josephson tunneling starts to collapse? A rough estimate may be deduced from the experimental results of Kasumov {\it et al.} \cite{Kasumov03}, showing that the key parameter governing the disorder in a suspended rope is the ratio $\frac{\xi_c}L$, where $L$ and $\xi_c$ are respectively the rope length and the coherence length. The latter depends on the mean free path $l_e$ as discussed above, $\xi_c=\sqrt{\frac{\hbar v_Fl_e}{\Delta}}$ \cite{Varlamov}.\ In Fig.4, we depict the behavior of the superconducting transition temperature in the localized and delocalized regimes for a rope of N=70 tubes, based on the results shown in Figs.2 and 3. According to Fig.4, the suppression of the superconducting order starts at a critical value $\frac{L}{l_{ec}}=$0.2. The smaller the tube number, the greater $l_{ec}$ and the frailer the superconducting order.\ \begin{figure}[t] \begin{center} \includegraphics[width=7cm,height=5cm]{rope_Fig4.eps} \end{center} \caption{Superconducting transition temperature as a function of the inverse of the mean free path in ropes of N=70 tubes. Regions (I) and (II) denote, respectively, the delocalized and localized regimes. The calculations are done with the same data as in Fig.1.} \label{fig4} \end{figure} A direct comparison of the numerical values of $l_{ec}$ with the experimental results of Ref.\onlinecite{Ferrier} is not obvious. More data are needed to accurately determine the critical disorder amplitude at which the superconducting transition temperature reaches its maximum before decreasing. Nevertheless, one can compare the extent of the disorder regime over which the superconducting order develops.
Let us characterize this disorder range by the ratio $\eta=\frac{l_{e1}} {l_{e2}}$, where $l_{e1}$ and $l_{e2}$ are respectively the mean free paths corresponding to the appearance and the collapse of the superconducting phase.\ According to the data of Ferrier {\it et al.}\cite{Ferrier,FerrierPhD}, superconductivity appears at $\left(\frac{\xi_c}L\right)_1=\frac 12$ and vanishes at $\left(\frac{\xi_c}L\right)_2=\frac 1{10}$. Assuming that $\xi_c\propto\sqrt{l_e}$ \cite{Ferrier,FerrierPhD} and a constant rope length $L$ gives $\eta=\frac{\xi_{c1}^2}{\xi_{c2}^2}=\frac{l_{e1}} {l_{e2}}=25$. From Fig.4, $\eta=\frac{1.6}{0.09}\sim 18$, where we consider that $T_c=1$ mK corresponds to the disappearance of the superconducting phase. This value is in fair agreement with the experimental one. Moreover, one can estimate from Fig.4 the range of the disorder-induced superconductivity regime, to which one may assign a ratio $\eta_d=\frac{l_{e1}}{l_{ec}} \sim 2.2$, namely $\frac 17$ of the total disorder regime over which superconductivity may be observed. Checking this value requires more experimental data. \section{Concluding remarks} In summary, using the TDGL theory we probed the role of the effective dimensionality and the amount of disorder in the stability of the superconducting order in ropes of CNT. We found that an increase of the dimensionality of the rope, achieved by increasing the tube number $N$, promotes the establishment of a 3D superconducting phase with an increasing superconducting critical temperature $T_c$. However, for large $N$, $T_c$ tends to saturate, indicating the formation of a well defined 3D superconducting order.\ The main result of our work concerns the disorder-induced superconductivity in the rope, which originates from the dynamics of the tubes. The latter enhances the intertube Josephson tunneling, which mitigates the suppression of the superconducting phase by disorder.
However, for larger disorder amplitudes, electronic localization prevails over intertube hopping, leading to the suppression of superconductivity, as found in other superconducting materials. \section*{Acknowledgment} We would like to acknowledge fruitful discussions with Prof. H. Bouchiat, Drs. M. Ferrier, S. Gu\'eron and K. Sasaki. We are grateful to Prof. H. Bouchiat for the critical reading of the manuscript. We warmly thank the staff of the Laboratoire de Physique des Solides, Orsay, for their kind hospitality.
\section{Introduction} \label{sec:intro} In~\cite{FJ09Large} the authors study the limiting behaviour of the implied volatility in the Heston model as maturity tends to infinity. The main aim of this note is to give a rigorous account of the relationship between the concept of essential smoothness and the large deviation principle for the family of random variables $(X_t/t\pm E_\lambda/t)_{t\geq1}$, where the process $X$ denotes the log-spot in Heston model~\eqref{eq:Heston_SDE} and $E_\lambda$ is an exponential random variable with parameter $\lambda>0$ independent of $X$. This note fills a gap in the proof of Corollary~2.4 in~\cite{FJ09Large} and hence completes the proof of the main result in~\cite{FJ09Large}, which describes the limiting behaviour of the implied volatility smile in the Heston model far from maturity. The note is organized as follows. Section~\ref{sec:trap} describes the relevant concepts of the large deviation theory and discusses how the effective domain changes when a family of random variables is perturbed by an independent exponential random variable. Section~\ref{sec:Trap} discusses the failure of essential smoothness when the Heston model is perturbed by an independent exponential, which is what causes the gap in the proof of Corollary~2.4 in~\cite{FJ09Large}. Section~\ref{sec:Trap} also proves Theorem~\ref{thm:Fix}, which fills the gap. \begin{comment} \section{The option pricing formulae} \label{sec:formula} Let a positive martingale $S=(S_t)_{t\geq0}$ be a model for a risky security under a pricing measure $\mathsf P$ (throughout this note the interest rates and dividend yields are assumed to be zero) on some filtered measurable space $(\Omega,\mathcal F,(\mathcal F_t)_{t\geq0})$. 
The computation of put and call option prices can in many such models be simplified by the following representation formulae \begin{eqnarray} \label{eq:Put_Formulae} \mathsf E\left[(K-S_t)^+\right] & = & K \mathsf P\left[\log(K)>\log(S_t)+E_1\right],\\ \label{eq:Call_Formulae} \mathsf E\left[(S_t-K)^+\right] & = & S_0 \widetilde \mathsf P\left[\log(S_t)-E_1>\log(K)\right], \end{eqnarray} for any maturity $t\geq0$ and strike $K>0$, where $E_1$ is an exponential random variable independent of $S$ with $\mathsf E[E_1]=1$ (as usual $x^+=\max\{x,0\}$ for any $x\in\mathbb R$). The probability measure $\widetilde \mathsf P$ is the so-called \textit{share} measure defined on the $\sigma$-field $\mathcal F_t$ by its Radon-Nikodym derivative $\dd \widetilde \mathsf P/\dd \mathsf P = S_t/S_0.$ Note that under $\widetilde \mathsf P$ the variable $E_1$ remains exponentially distributed with $\widetilde \mathsf E[E_1]=1$ and independent of $S_t$. The proof of~\eqref{eq:Put_Formulae} is a straightforward consequence of the independence of $E_1$ and $S_t$ and Fubini's theorem: $$ \mathsf P\left[\log\frac{K}{S_t}>E_1\right] = \mathsf E\left[I_{\left\{\log\frac{K}{S_t}>E_1\right\}}\right] = \mathsf E\left[I_{\left\{\log\frac{K}{S_t}>0\right\}}\int_0^{\log\frac{K}{S_t}}\mathrm{e}^{-x}\dd x\right] = \mathsf E\left[I_{\{K>S_t\}} \left(1-\frac{S_t}{K}\right)\right], $$ where $I_{\{\cdot\}}$ denotes the indicator function of the event $\{\cdot\}$. An analogous argument under $\widetilde\mathsf P$ proves~\eqref{eq:Call_Formulae}. These formulae were applied by Carr and Madan in~\cite{CM09} to obtain a fast algorithm for the efficient pricing of vanilla options in models where the characteristic function is known in closed form. In such a model the characteristic function of the variable $\log(S_t)+E_1$ is also known in closed form since $S_t$ and $E_1$ are independent. 
Therefore the representation of the put price as a single probability given in formula~\eqref{eq:Put_Formulae} is numerically convenient as it leads to faster, more stable and accurate pricing algorithms than the usual representation $\mathsf E\left[(K-S_t)^+\right] = K \mathsf P\left[\log\frac{K}{S_t}>0\right] - S_0 \widetilde \mathsf P\left[\log\frac{K}{S_t}>0\right]$, which requires two separate Fourier inversions for the two probabilities to be computed. \end{comment} \section{The large deviation principle for random variables in $\mathbb R$} \label{sec:trap} We briefly recall the basic facts of large deviation theory in $\mathbb R$ (see the monograph~\cite[Ch. 2]{DemboZeitouni} for more details). Let $(Z_t)_{t\geq1}$ be a family of random variables with $Z_t\in\mathbb R$. A function $J$ is a \textit{rate function} if it is lower semicontinuous and $J(\mathbb R)\subset[0,\infty]$ holds. The family $(Z_t)_{t\geq1}$ satisfies the \textit{large deviation principle (LDP)} with the \textit{rate function} $J$ if for every Borel set $B\subset\mathbb R$ we have \begin{equation} \label{eq:DefLDP} -\inf_{x\in B^\circ}J(x)\leq\liminf_{t\to\infty}\frac{1}{t}\log \mathsf P\left[Z_t\in B\right] \leq \limsup_{t\to\infty}\frac{1}{t}\log \mathsf P\left[Z_t\in B\right]\leq-\inf_{x\in \overline B}J(x), \end{equation} with the convention $\inf\emptyset=\infty$ (the interior $B^\circ$, closure $\overline B$ and boundary $\overline B\setminus B^\circ$ are taken in the topology of $\mathbb R$). The G\"artner-Ellis theorem (Theorem~\ref{thm:GartnerEllis} below) gives sufficient conditions for a family $(Z_t)_{t\geq1}$ to satisfy the LDP (see~\cite[Section 2.3]{DemboZeitouni} for details). Let $\Lambda_t(u):=\log \mathsf E\left[\mathrm{e}^{uZ_t}\right]\in(-\infty,\infty]$ be the cumulant generating function of $Z_t$.
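As a toy illustration of the decay rates in~\eqref{eq:DefLDP} (a standard Cram\'er-type example, not part of the note): if $Z_t$ is the sample mean of $t$ i.i.d. standard Gaussians, the LDP holds with rate function $J(x)=x^2/2$, and since $Z_t\sim N(0,1/t)$ the tail probabilities can be evaluated exactly.

```python
# Toy Cramer-type illustration (not from the note): for Z_t the sample mean
# of t i.i.d. N(0,1) variables the LDP holds with J(x) = x^2/2, so
# (1/t) log P[Z_t >= a] -> -a^2/2 for a > 0.
# Here Z_t ~ N(0, 1/t), hence P[Z_t >= a] = 0.5 * erfc(a * sqrt(t/2)).
import math

a = 1.0
for t in (10, 100, 1000):
    log_tail = math.log(0.5 * math.erfc(a * math.sqrt(t / 2.0)))
    print(t, log_tail / t)  # approaches -a**2/2 = -0.5 as t grows
```

The subexponential prefactor of the Gaussian tail explains why the convergence in $t$ is only of order $(\log t)/t$.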
Assume that for every $u\in\mathbb R$ \begin{eqnarray} \label{eq:LDP_Assumption} \Lambda(u) := \lim_{t\to \infty}\Lambda_t(tu)/t \quad\text{exists in } [-\infty,\infty] \qquad\text{and}\qquad 0 \in \mathcal D_\Lambda^\circ, \end{eqnarray} where $\mathcal D_\Lambda:=\{u\in\mathbb R:\Lambda(u)<\infty\}$ is the \textit{effective domain} of $\Lambda$ and $\mathcal D_\Lambda^\circ$ is its interior. The \textit{Fenchel-Legendre transform} $\Lambda^*$ of the convex function $\Lambda$ is defined by the formula \begin{eqnarray} \label{eq:DefFenchelLegendreTransf} \Lambda^*(x) &:=& \sup\{ux-\Lambda(u)\>:\>u\in\mathbb R\} \quad\text{for}\quad x\in\mathbb R. \end{eqnarray} Under the assumption in~\eqref{eq:LDP_Assumption}, $\Lambda^*$ is lower semicontinuous with compact level sets $\{x:\Lambda^*(x)\leq \alpha\}$ (see~\cite[Lemma 2.3.9(a)]{DemboZeitouni}) and satisfies $\Lambda^*(\mathbb R)\subset[0,\infty]$; hence it is a \textit{good rate function}. We now state the G\"artner-Ellis theorem (see~\cite[Section 2.3]{DemboZeitouni} for its proof). \begin{theorem} \label{thm:GartnerEllis} Let the random variables $(Z_t)_{t\geq1}$ satisfy the assumption in~\eqref{eq:LDP_Assumption}. If $\Lambda$ is essentially smooth and lower semicontinuous, then the LDP holds for $(Z_t)_{t\geq1}$ with the good rate function $\Lambda^*$. \end{theorem} The function $\Lambda:\mathbb R\to(-\infty,\infty]$ defined in~\eqref{eq:LDP_Assumption} is \textit{essentially smooth} if it is (a)~differentiable in $\mathcal D^\circ_\Lambda$ and (b)~\textit{steep}, i.e. $\lim_{n\to\infty}|\Lambda'(u_n)|=\infty$ for every sequence $(u_n)_{n\in\mathbb N}$ in $\mathcal D^\circ_\Lambda$ that converges to a boundary point of $\mathcal D^\circ_\Lambda$.
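The transform in~\eqref{eq:DefFenchelLegendreTransf} can be approximated by a crude grid search over $u$. As a sanity check (a toy example, not the Heston $\Lambda$ of this note), in the Gaussian case $\Lambda(u)=u^2/2$ the transform is known in closed form, $\Lambda^*(x)=x^2/2$:

```python
# Numerical Fenchel-Legendre transform Lambda*(x) = sup_u {u x - Lambda(u)},
# illustrated on the Gaussian toy case Lambda(u) = u^2/2, where the
# closed form Lambda*(x) = x^2/2 is available for comparison.

def legendre_transform(Lam, x, grid):
    """Crude sup over a grid; adequate for smooth convex Lambda."""
    return max(u * x - Lam(u) for u in grid)

us = [-10.0 + k * 1e-4 for k in range(200_001)]  # grid on [-10, 10]
Lam = lambda v: v * v / 2.0

for x in (-1.5, 0.0, 2.0):
    print(x, legendre_transform(Lam, x, us))  # close to x**2/2
```

The grid search is only a sketch; for the steep, boundary-dominated suprema that arise with a bounded effective domain one would restrict the grid to $\mathcal D_\Lambda$.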
If $\mathcal D_\Lambda^\circ$ is a strict subset of $\mathbb R$, which is the case in the setting of~\cite{FJ09Large} (see also Section~\ref{sec:Trap} below), essential smoothness, which plays a key role in the proof of Theorem~\ref{thm:GartnerEllis}, is not automatic. The following question is of central importance in~\cite{FJ09Large}: does the LDP persist if a family of random variables $(Z_t)_{t\geq1}$ is perturbed by an independent exponential random variable $E_1$? It is implicitly assumed in the proof of Corollary~2.4 in~\cite{FJ09Large} (see the last line on page~17 and lines~4 and~14 on page~18) that if $(Z_t)_{t\geq1}$ satisfies the assumptions of Theorem~\ref{thm:GartnerEllis}, then so do the families $(Y^{1+}_t)_{t\geq1}$ and $(Y^{1-}_t)_{t\geq1}$, where $Y_t^{1\pm}=Z_t\pm E_1/t$, and the LDP is applied. In particular the authors in~\cite{FJ09Large} assume that the limiting cumulant generating functions of $(Y^{1\pm}_t)_{t\geq1}$ are essentially smooth. However the following simple lemma holds. \begin{lemma} \label{lem:perturbation} Let $(Z_t)_{t\geq1}$ satisfy the assumption in~\eqref{eq:LDP_Assumption} with a limiting cumulant generating function $\Lambda$. Let $\lambda>0$, let $E_\lambda$ be an exponential random variable independent of $(Z_t)_{t\geq1}$ with $\mathsf E[E_\lambda]=1/\lambda$ and let $Y_t^{\lambda\pm}:=Z_t\pm E_\lambda/t$. Then the families of random variables $(Y^{\lambda\pm}_t)_{t\geq1}$ satisfy the assumption in~\eqref{eq:LDP_Assumption} and the corresponding limiting cumulant generating functions are given by \begin{align*} \Lambda^{\lambda+}(u) & = \left\{ \begin{array}{ll} \displaystyle \Lambda(u), & \text{if } u\in \mathcal D_\Lambda\cap(-\infty,\lambda),\\ \infty, & \text{otherwise}, \end{array} \right. \quad \text{and}\quad \Lambda^{\lambda-}(u) = \left\{ \begin{array}{ll} \displaystyle \Lambda(u), & \text{if } u\in \mathcal D_\Lambda\cap(-\lambda,\infty),\\ \infty, & \text{otherwise}. \end{array} \right.
\end{align*} \end{lemma} \begin{remarks}\noindent \textbf{(a)} Let $(Z_t)_{t\geq1}$ satisfy the assumption in~\eqref{eq:LDP_Assumption} and assume further that $\Lambda$ is differentiable in $\mathcal D_\Lambda^\circ$. If $1\in\mathcal D_\Lambda^\circ$, then the right-hand boundary point of the interior of the effective domain $\mathcal D_{\Lambda^{1+}}^\circ$ is equal to $1$ and Lemma~\ref{lem:perturbation} implies that the limiting cumulant generating function $\Lambda^{1+}$ of $(Y^{1+}_t)_{t\geq1}$ is \begin{itemize} \item neither essentially smooth, since $\Lambda^{1+}$ is not steep at $1$, \item nor lower semicontinuous at $1$, since $\Lambda^{1+}(1)=\infty$ while $\lim_{u\nearrow1}\Lambda^{1+}(u)=\Lambda(1)<\infty$ because $1\in\mathcal D_\Lambda^\circ$. \end{itemize} The same loss of steepness and lower semicontinuity occurs at $-1$ for $(Y^{1-}_t)_{t\geq1}$ in the case where $-1\in\mathcal D_\Lambda^\circ$. \smallskip \noindent \textbf{(b)} Lemma~\ref{lem:perturbation} implies that if $(Z_t)_{t\geq1}$ satisfies the assumptions of Theorem~\ref{thm:GartnerEllis} \textit{and} $\mathcal D_\Lambda$ is contained in $(-\infty,\lambda)$, for some $\lambda>0$, then $(Y_t^{\lambda+})_{t\geq1}$ also satisfies the assumptions of Theorem~\ref{thm:GartnerEllis} and hence the LDP with a good rate function $\Lambda^*$. An analogous statement holds for $(Y_t^{\lambda-})_{t\geq1}$. \end{remarks} \begin{proof} Note that $\log \mathsf E\left[\mathrm{e}^{uE_\lambda}\right]$ is finite and equal to $\log\left(\lambda/(\lambda-u)\right)$ if and only if $u\in(-\infty,\lambda)$.
For all large $t$ and $u\in\mathcal D_\Lambda\cap(-\infty,\lambda)$, the assumption in~\eqref{eq:LDP_Assumption} implies that $\Lambda^{\lambda+}_t(tu)=\log \mathsf E\left[\exp\left(tuY_t^{\lambda+}\right)\right]$ is finite and satisfies \begin{eqnarray} \label{eq:Perturbed_Cum_Gen_Fun} \Lambda^{\lambda+}_t(tu) = \Lambda_t(tu)+\log\frac{\lambda}{\lambda-u}, \qquad\text{where}\qquad \Lambda_t(tu) = \log \mathsf E\left[\exp\left(tuZ_t\right)\right]. \end{eqnarray} If $u\geq\lambda$, then, since $\Lambda_t(tu)>-\infty$, we have $\Lambda^{\lambda+}_t(tu)=\infty$ for all $t$ and hence $\Lambda^{\lambda+}(u)=\infty$. If $u\in(\mathbb R\setminus\mathcal D_\Lambda)\cap(-\infty,\lambda)$, then~\eqref{eq:Perturbed_Cum_Gen_Fun} yields $\Lambda^{\lambda+}(u)=\lim_{t\nearrow\infty}\Lambda^{\lambda+}_t(tu)/t = \infty$. This proves the lemma for $(Y^{\lambda+}_t)_{t\geq1}$. The case of $(Y^{\lambda-}_t)_{t\geq1}$ is analogous. \end{proof} \section{Essential smoothness can fail} \label{sec:Trap} The Heston model $S=\mathrm{e}^{X}$ is a stochastic volatility model with the log-stock process $X$ given by \begin{eqnarray} \label{eq:Heston_SDE} \dd X_t = -\frac{Y_t}{2} \dd t+\sqrt{Y_t}\dd W^1_t &\text{and} & \dd Y_t = \kappa(\theta-Y_t)\dd t+\sigma \sqrt{Y_t}\dd W^2_t, \end{eqnarray} where $\kappa,\theta,\sigma>0$, $Y_0=y_0>0$, $X_0=x_0\in\mathbb R$ and $W^1,W^2$ are standard Brownian motions with correlation $\rho\in(-1,1)$. The standing assumption \begin{eqnarray} \label{eq:assumption_chi} \rho\sigma-\kappa<0, \end{eqnarray} is made in~\cite{FJ09Large} (see equation~(2.2) in Theorem~2.1 on page 5 of~\cite{FJ09Large}). In particular the inequality in~\eqref{eq:assumption_chi} implies that $S$ is a strictly positive true martingale and allows the definition of the share measure $\widetilde \mathsf P$ via the Radon-Nikodym derivative $\dd \widetilde \mathsf P/\dd \mathsf P = \mathrm{e}^{X_t-x_0}$.
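Before specializing to the Heston model, note that the mechanism behind Lemma~\ref{lem:perturbation} is a single scalar term: the perturbation contributes $\log(\lambda/(\lambda-u))/t$ to the scaled cumulant generating function, which vanishes as $t\to\infty$ when $u<\lambda$ and is infinite otherwise. A minimal numerical sketch (the value $\lambda=2$ is an arbitrary illustrative choice):

```python
# Minimal sketch of the perturbation lemma (lambda = 2 is arbitrary):
# the independent term E_lambda/t contributes log(lambda/(lambda-u))/t to
# Lambda_t^{lambda+}(tu)/t, which vanishes as t -> infinity for u < lambda
# and is +infinity for u >= lambda, truncating the effective domain.
import math

lam = 2.0

def perturbation_term(u, t):
    """(1/t) * log E[exp(u * E_lam)] where E[E_lam] = 1/lam."""
    return math.log(lam / (lam - u)) / t if u < lam else math.inf

for t in (1, 10, 100, 1000):
    print(t, perturbation_term(1.5, t))   # tends to 0: the limit is unchanged
print(perturbation_term(2.5, 1000))       # inf: effective domain truncated at lambda
```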
The authors' aim in~\cite{FJ09Large} is to obtain the limiting implied volatility smile as maturity tends to infinity at the strike $K=S_0\mathrm{e}^{xt}$ for any $x\in\mathbb R$ in the Heston model. Their main formula is given in Corollary~3.1~of~\cite{FJ09Large}. A key step in the proof of~\cite[Corollary~3.1]{FJ09Large} is given by~\cite[Corollary~2.4]{FJ09Large}. In the proof of~\cite[Corollary~2.4]{FJ09Large} (see the last line on page 17 and lines 4 and 14 on page 18) it is implicitly assumed that the LDP for $(X_t/t)_{t\geq1}$ implies the LDP for the family $(X_t/t\pm E_1/t)_{t\geq1}$. However, as we have seen in Section~\ref{sec:trap} (see the remarks following Lemma~\ref{lem:perturbation}), Theorem~\ref{thm:GartnerEllis} cannot be applied directly to the family $(X_t/t\pm E_1/t)_{t\geq1}$, even if $(X_t/t)_{t\geq1}$ satisfies its assumptions. We start with a precise description of the problem and present the solution in Theorem~\ref{thm:Fix}. \begin{remarks} \noindent \textbf{(i)} Under~\eqref{eq:assumption_chi}, a simple calculation shows that $\Lambda$ and $\mathcal D_\Lambda$ for the family $(X_t/t)_{t\geq1}$ are: \begin{eqnarray} \label{eq:Lmabda} \Lambda(u) & = & -\frac{\theta\kappa}{\sigma^2}\left(u\rho\sigma-\kappa+\sqrt{\Delta(u)}\right)\quad \text{for}\quad u\in\mathcal D_\Lambda \qquad\text{and}\qquad \mathcal D_\Lambda = [u_-,u_+]\quad\text{where}\\ u_\pm & = & \left(1/2-\rho\kappa/\sigma \pm\sqrt{\left(\kappa/\sigma-\rho\right) \kappa/\sigma+1/4}\right)/\left(1-\rho^2\right) \quad\text{with}\quad u_-<0<1<u_+. \label{eq:roots} \end{eqnarray} In~\eqref{eq:Lmabda} the function $\Delta$ is the quadratic $\Delta(u)=(u\rho\sigma-\kappa)^2-\sigma^2(u^2-u)$ and the boundary points $u_+$ and $u_-$ of the effective domain $\mathcal D_\Lambda$ are its zeros. Elementary calculations show that $\Lambda$ is essentially smooth and that the unique minimum of $\Lambda^*$ is attained at $\Lambda'(0)=-\theta/2$.
Therefore $(X_t/t)_{t\geq1}$ satisfies the LDP with the good rate function $\Lambda^*$, defined in~\eqref{eq:DefFenchelLegendreTransf}, by Theorem~\ref{thm:GartnerEllis}. \smallskip \noindent \textbf{(ii)} Under the share measure $\widetilde \mathsf P$, given by $\dd \widetilde \mathsf P/\dd \mathsf P = \mathrm{e}^{X_t-x_0}$, we have $\widetilde \mathsf E\left[\mathrm{e}^{u X_t}\right]= \mathrm{e}^{-x_0}\mathsf E\left[\mathrm{e}^{(u+1) X_t}\right]$ for all $u\in\mathbb R$ and $t>0$, and hence the family $(X_t/t)_{t\geq1}$ under $\widetilde \mathsf P$ satisfies the assumption in~\eqref{eq:LDP_Assumption} with the limiting cumulant generating function $\widetilde \Lambda(u)=\Lambda(u+1)$, $\mathcal D_{\widetilde \Lambda}=[u_--1,u_+-1]$. As before, $(X_t/t)_{t\geq1}$ satisfies the LDP under $\widetilde \mathsf P$ with the strictly convex good rate function $\widetilde \Lambda^*$, which satisfies $\widetilde \Lambda^*(x)=\Lambda^*(x)-x$ for all $x\in\mathbb R$ and attains its unique minimum at $\widetilde \Lambda'(0)=\Lambda'(1)=\theta\kappa/(\kappa-\rho\sigma)$. \end{remarks} \begin{theorem} \label{thm:Fix} Let the process $X$ be given by~\eqref{eq:Heston_SDE} and assume that~\eqref{eq:assumption_chi} holds. Let $E_1$ be an exponential random variable with $\mathsf E[E_1]=1$, independent of $X$.
Then the following limits hold: \begin{eqnarray} \label{eq:lim_1} \lim_{t\nearrow\infty}\frac{1}{t} \log \mathsf P\left[X_t-x_0+E_1<xt\right] & = & -\Lambda^*(x) \qquad\text{for}\quad x\leq \Lambda'(0)=-\theta/2;\\ \label{eq:lim_2} \lim_{t\nearrow\infty}\frac{1}{t} \log \widetilde \mathsf P\left[X_t-x_0-E_1>xt\right] & = & x-\Lambda^*(x) \qquad\text{for}\quad x\geq \Lambda'(1)=\theta\kappa/(\kappa-\rho\sigma);\\ \label{eq:lim_3} \lim_{t\nearrow\infty}\frac{1}{t} \log \widetilde \mathsf P\left[X_t-x_0-E_1\leq xt\right] & = & x-\Lambda^*(x) \qquad\text{for}\quad x\in\left[\Lambda'(0),\Lambda'(1)\right]; \end{eqnarray} where $\Lambda$ is given in~\eqref{eq:Lmabda}, its Fenchel-Legendre transform $\Lambda^*$ is defined in~\eqref{eq:DefFenchelLegendreTransf} and $\dd \widetilde\mathsf P/\dd \mathsf P=\mathrm{e}^{X_t-x_0}$. \end{theorem} \begin{remark} \noindent The limits in Theorem~\ref{thm:Fix} are precisely the limits that arise in the proof of~\cite[Corollary~2.4]{FJ09Large} (see the last line on page~17 and lines~4 and~14 on page~18) and are claimed to hold since the family $(X_t/t)_{t\geq1}$ satisfies the LDP under $\mathsf P$ and $\widetilde\mathsf P$ by Remarks~(i) and~(ii) above and Theorem~\ref{thm:GartnerEllis}. However Lemma~\ref{lem:perturbation} implies that the limiting cumulant generating function $\Lambda^{1+}$ of the family of random variables $(Z_t+E_1/t)_{t\geq1}$, where $Z_t=(X_t-x_0)/t$, is neither lower semicontinuous nor essentially smooth. Hence Theorem~\ref{thm:GartnerEllis} cannot be applied to $(Z_t+E_1/t)_{t\geq1}$. An analogous issue arises under the measure $\widetilde\mathsf P$. \end{remark} \begin{proof} The basic idea of the proof is simple: for~\eqref{eq:lim_1} we sandwich the probability $\mathsf P\left[X_t-x_0+E_1<xt\right]$ between two tail probabilities of two families of random variables, which satisfy the LDP with the same rate function $\Lambda^*$ by Lemma~\ref{lem:perturbation} and Theorem~\ref{thm:GartnerEllis}.
The limits in \eqref{eq:lim_2} and~\eqref{eq:lim_3} follow similarly. For the given Heston parameter values, pick $\lambda>u_+$, where $u_+$ is defined in~\eqref{eq:roots}. Let $E_\lambda$ be an exponential random variable with $\mathsf E[E_\lambda]=1/\lambda$, defined on the same probability space as $X$ and $E_1$ and independent of both. Since $\lambda>u_+>1$, we have the elementary inequality \begin{eqnarray} \label{eq:ElemIneq} \mathsf P\left[E_\lambda<\alpha\right]= I_{\{\alpha>0\}}\left(1-\mathrm{e}^{-\lambda \alpha}\right)\leq I_{\{\alpha>0\}}\left(1-\mathrm{e}^{-\alpha}\right)= \mathsf P\left[E_1<\alpha\right]\qquad\text{for any}\quad\alpha\in\mathbb R. \end{eqnarray} The inequality \begin{eqnarray} \label{eq:MainInequality} \mathsf P\left[X_t-x_0+E_\lambda<xt\right] & \leq & \mathsf P\left[X_t-x_0+E_1<xt\right] \end{eqnarray} follows by conditioning on $X_t$ and applying~\eqref{eq:ElemIneq}. On the other hand, since $E_1>0$ a.s., we have \begin{eqnarray} \label{eq:SimpleMainInequality} \mathsf P\left[X_t-x_0+E_1<xt\right] & \leq & \mathsf P\left[X_t-x_0<xt\right]. \end{eqnarray} Lemma~\ref{lem:perturbation} implies that the families of random variables $(Z_t+E_\lambda/t)_{t\geq1}$ and $(Z_t)_{t\geq1}$, where $Z_t=(X_t-x_0)/t$, both have the limiting cumulant generating function equal to $\Lambda$ given in~\eqref{eq:Lmabda} with the effective domain $\mathcal D_\Lambda=[u_-,u_+]$. Since $\Lambda$ is essentially smooth and lower semicontinuous on $\mathcal D_\Lambda$ and the assumption in~\eqref{eq:LDP_Assumption} is satisfied, Theorem~\ref{thm:GartnerEllis} implies that $(Z_t+E_\lambda/t)_{t\geq1}$ and $(Z_t)_{t\geq1}$ satisfy the LDP with the good rate function $\Lambda^*$.
Since $x$ in~\eqref{eq:lim_1} is assumed to be less than or equal to the unique minimizer $\Lambda'(0)=-\theta/2$ of $\Lambda^*$ (see Remark~(i) above) and $\Lambda^*$ is non-negative and strictly convex, the LDP (see the inequalities in~\eqref{eq:DefLDP}) and the inequalities in~\eqref{eq:MainInequality} and~\eqref{eq:SimpleMainInequality} imply the limit in~\eqref{eq:lim_1}. To prove~\eqref{eq:lim_2} pick $\lambda>1-u_-$ and note that the inequality in~\eqref{eq:ElemIneq} and conditioning on $X_t$ yield \begin{eqnarray} \label{eq:TildeInequality} \widetilde \mathsf P\left[X_t-x_0>xt\right] \geq \widetilde \mathsf P\left[X_t-x_0-E_1>xt\right] \geq \widetilde \mathsf P\left[X_t-x_0-E_\lambda>xt\right]. \end{eqnarray} As before, Lemma~\ref{lem:perturbation} and Theorem~\ref{thm:GartnerEllis} imply that $(Z_t-E_\lambda/t)_{t\geq1}$ and $(Z_t)_{t\geq1}$ satisfy the LDP with the convex rate function $\widetilde \Lambda^*$, which by Remark~(ii) above attains its unique minimum at $\Lambda'(1)=\theta\kappa/(\kappa-\rho\sigma)$. Since $x\geq\Lambda'(1)$ in~\eqref{eq:lim_2}, the limit follows. A similar argument implies the limit in~\eqref{eq:lim_3} for all $x\in[\Lambda'(0),\Lambda'(1)]$, which concludes the proof. \end{proof} \begin{comment} \section{Conclusion} \label{sec:Conclusion} This note investigates the subtleties that arise with effective domains and a potential loss of essential smoothness when a family of random variables is perturbed by an independent exponential (the precise formulation is given in Section~\ref{sec:trap}). A gap in the proof of Corollary~2.4 in~\cite{FJ09Large}, which is key in deriving the main result of~\cite{FJ09Large}, is identified and circumvented by Theorem~\ref{thm:Fix}.
The proof of Theorem~\ref{thm:Fix} presented in Section~\ref{sec:Trap} has two main features: one, it is based on the relationship between the essential smoothness of large deviation theory and option pricing formulae~\eqref{eq:Put_Formulae} and~\eqref{eq:Call_Formulae} developed in this note and two, it stays well within the realm of~\cite{FJ09Large} without the need for the introduction of further mathematical concepts. \end{comment}
{ "redpajama_set_name": "RedPajamaArXiv" }