diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzagnm" "b/data_all_eng_slimpj/shuffled/split2/finalzzagnm" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzagnm" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nIn this paper following \\cite{Coa1999,CoaSuj1999}\nwe are interested in computing an Euler characteristic for finitely generated modules over Iwasawa algebras. Suppose that $G$ is a compact $p$-adic Lie group without $p$-torsion, and $M$ is a finitely generated $\\mathbb{Z}_p G$-module. If the homology groups $H_i(G,M)$ are finite for all $i$ then we say that the Euler characteristic is well-defined and takes the value \\[ \\chi(G,M):=\\prod_{i\\geqslant 0} |H_i(G,M)|^{(-1)^i}. \\] \n\nThe following result is classical:\n\n\\begin{thm} Suppose that $G\\cong\\mathbb{Z}_p^d$.\n\\begin{enumerate} \\item The Euler characteristic of a finitely generated $\\mathbb{Z}_p G$-module $M$ is well-defined if and only if $H_0(G,M)$ is finite.\n\\item Moreover, $\\chi(G,M)=1$ whenever $M$ is a pseudo-null module with well-defined Euler characteristic. \\end{enumerate}\n\\end{thm}\nBoth parts of this theorem are known not to be true for general compact $p$-adic Lie groups $G$ without $p$-torsion, see \\cite{CoaSchSuj2003\/2}. \n\nIn this paper we prove that (1) holds if and only if $G$ is finite-by-nilpotent. Both directions depend on the key fact that the set $S:=\\mathbb{Z}_p G\\backslash \\ker(\\mathbb{Z}_p G\\rightarrow \\mathbb{Z}_p)$ is an Ore set in $\\mathbb{Z}_p G$ precisely if $G$ is finite-by-nilpotent. \n\nIn addition, we prove that part (2) of the theorem holds whenever $G$ is finite-by-nilpotent. To prove this we prove analogous results for the Akashi series of \\cite{CoaSchSuj2003\/2} and \\cite{CFKSV} (see (\\ref{Akashi}) for the definition). In particular we show that if $G\\cong H\\rtimes\\Gamma$ then every finitely generated $\\mathbb{Z}_p G$-module $M$ such that $H_0(H,M)$ is $\\mathbb{Z}_p\\Gamma$-torsion has well-defined Akashi series if and only if $H$ is finite-by-nilpotent, and that if $G$ is finite-by-nilpotent then such a module that is also pseudo-null has trivial Akashi series. This time the first part depends on the fact that $T:=\\mathbb{Z}_p G\\backslash \\ker(\\mathbb{Z}_p G\\rightarrow \\mathbb{Z}_p\\Gamma)$ is an Ore set in $\\mathbb{Z}_p G$ precisely if $H$ is finite-by-nilpotent. We prove the second part using the Hochschild-Serre spectral sequence. We could prove part (2) of the Euler characteristics result directly using the same techniques but we do not because it follows immediately from the Akashi series result.\n\nWhenever $M$ is finitely generated and torsion for a subset of $S$ (resp. $T$) that is an Ore set of $\\mathbb{Z}_p G$, the Euler characteristic (resp. Akashi series) of $M$ is well-defined. This raises the interesting algebraic question of what the maximal Ore subsets of $S$ and $T$ are for general compact $p$-adic Lie groups $G$ (with $G\\cong H\\rtimes\\Gamma$ for the $T$ case). 
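For example, the central multiplicative set $\\{1,p,p^2,\\ldots\\}$ is contained in $S$ and trivially satisfies the Ore condition, and a finitely generated module is torsion for this set precisely when it is $p$-torsion; this already gives the well-definition of the Euler characteristic for the finitely generated $p$-torsion modules discussed below.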
Since the long exact sequence of $K$-theory used to formulate the main conjecture in \\cite{CFKSV} is defined for any such Ore set, and the connecting map will remain surjective whenever $G$ is pro-$p$, this question may well also have arithmetic implications, as the $\\mathcal{M}_H(G)$-conjecture (\\cite[Conjecture 5.1]{CFKSV}) could then be weakened without losing the means of defining a characteristic element that should be related to a $p$-adic $\\mathrm{L}$-function.\n \nFollowing work of Serre \\cite{Ser1998} in the case where $M$ is finite, Ardakov and the author \\cite{ArdWad2008} have described the Euler characteristic of any finitely generated $p$-torsion module in terms of a notion of graded Brauer character for $M$ that is supported on the $p$-regular elements of $G$. It followed from this description that if $d_G(M)<\\dim C_G(g)$ for every $p$-regular element $g$ of $G$ then $\\chi(G,M)$ must be $1$. Here $d_G(M)$ denotes the canonical dimension of $M$ as defined in section \\ref{can} and $C_G(g)$ denotes the centraliser of $g$ in $G$.\n\nTotaro had already extended Serre's work in a different direction. Instead of concentrating on $p$-torsion modules he computed Euler characteristics of modules that are finitely generated as $\\mathbb{Z}_p$-modules. Part of his main result was the following:\n\n\\begin{thm}[\\cite{Tot1999}, Theorem 0.1] Let $p$ be any prime number. Let $G$ be a compact $p$-adic Lie group of dimension at least 2, and let $M$ be a finitely generated $\\mathbb{Z}_p$-module with $G$-action. Suppose that the homology of the Lie algebra $\\mathfrak{g}_{\\mathbb{Q}_p}$ of $G$ acting on $M\\otimes\\mathbb{Q}_p$ is 0; this is equivalent to assuming that the homology of any sufficiently small open subgroup $G_0$ acting on $M$ is finite, so that the Euler characteristic $\\chi(G_0, M)$ is defined. Then the Euler characteristics $\\chi(G_0, M)$ are the same for all sufficiently small open subgroups $G_0$ of $G$. \n\nThe common value of these Euler characteristics is $1$ if every element of the Lie algebra $\\mathfrak{g}_{\\mathbb{Q}_p}$ has centraliser of dimension at least 2. Otherwise, there is an element of $\\mathfrak{g}_{\\mathbb{Q}_p}$ whose centraliser has dimension $1$, and then the common value is not $1$ for some choice of module $M$. \n\\end{thm}\n\nLooking at these results together, and recalling that $d_G(M)\\leq 1$ for any finitely generated $\\mathbb{Z}_p$-module, we might be tempted to make the following conjecture:\n\n\\begin{conj} If $G$ is a compact $p$-adic Lie group without $p$-torsion and $M$ is a finitely generated $\\mathbb{Z}_p G$-module with well-defined Euler characteristic such that $d_G(M)<\\dim C_G(g)$ for all $g\\in G$ then $\\chi(G,M)=1$. \n\\end{conj}\n\nIn fact our Theorem might indicate that an even stronger conjecture is true. Recall that every compact $p$-adic Lie group has an associated $\\mathbb{Z}_p$-Lie algebra $\\mathfrak{g}$ and each automorphism of $G$ induces an automorphism of $\\mathfrak{g}$. In this way for each $g\\in G$ conjugation by $g$ induces an element $\\theta(g)$ of $GL(\\mathfrak{g})$. Let $\\mathfrak{g}^0(g)$ denote the generalised eigenspace of $\\theta(g)$ for the eigenvalue $1$. That is \\[ \\mathfrak{g}^0(g):=\\{x\\in\\mathfrak{g}|(\\theta(g)-1)^n x=0\\mbox{ for some }n>0\\}. 
\\]\n\n\\begin{conj} If $G$ is a compact $p$-adic Lie group without $p$-torsion and $M$ is a finitely generated $\\mathbb{Z}_p G$-module with well-defined Euler characteristic such that $d_G(M)<\\dim\\mathfrak{g}^0(g)$ for each $g\\in G$ then $\\chi(G,M)=1$.\n\\end{conj}\n\nOf course, if either $g$ has finite order or $g$ has infinite order and $\\dim C_\\mathfrak{g}(g)=1$ then $\\mathfrak{g}^0(g)=C_\\mathfrak{g}(g)$. Moreover, if $G$ is finite-by-nilpotent then $\\dim\\mathfrak{g}^0(g)=\\dim G$ for every $g\\in G$. \n\n\n\\subsection{Acknowledgments} Much of this work was done whilst the author was an EPSRC postdoctoral fellow under research grant EP\/C527348\/1. He would like to thank Konstantin Ardakov for many helpful conversations.\n\n\\section{Preliminaries}\n\n\\subsection{Notation}\n\nLet $G$ be a compact $p$-adic Lie group. We define the Iwasawa algebra \\[ \\mathbb{Z}_p G:= \\lim\\limits_{\\longleftarrow}\\mathbb{Z}_p [G\/N] \\] where $N$ runs over all the open normal subgroups of $G$ and $\\mathbb{Z}_p[G\/N]$ denotes the usual algebraic group algebra. If $H$ is a closed normal subgroup of $G$, we write $I_{H,G}$ for the kernel of the augmentation map $\\mathbb{Z}_p G\\rightarrow \\mathbb{Z}_p G\/H$. \n\nGiven a profinite ring $R$, we write $\\lmod{R}$ for the category of profinite left $R$-modules and continuous $R$-module homomorphisms. Then $H_i(G,-)$ is the $i$th derived functor of the functor \\[ (-)_G\\colon \\lmod{\\mathbb{Z}_p G}\\rightarrow\\lmod{\\mathbb{Z}_p} \\] that sends a module $M$ to its $G$-coinvariants $M\/I_{G,G} M$. Since $(-)_G=\\mathbb{Z}_p\\otimes_{\\mathbb{Z}_p G}(-)$ as functors, it follows that $H_i(G,M)\\cong \\Tor_i^{\\mathbb{Z}_p G}(\\mathbb{Z}_p,M) \\mbox{ for each }i\\geqslant 0$ (see \\cite[section 6.3]{RibZal2000} for more details). \n \nIf $X$ is a subset of a profinite group $G$ we will write $\\langle X\\rangle$ for the closed subgroup of $G$ generated by $X$. We write $Z(G)$ for the centre of $G$.\n\nIf $S$ is a (left and right) Ore set in a ring $R$ and $M$ is an $R$-module, we write $R_S$ for the localisation of $R$ at $S$, and $M_S$ for the localisation of $M$ at $S$.\n\n$\\Gamma$ will always denote a group isomorphic to $\\mathbb{Z}_p$. \n \n\\subsection{A little group theory} \\label{group}\n\n\\begin{defn} Recall that a group $G$ is \\emph{finite-by-nilpotent} if it has a finite normal subgroup $N$ such that $G\/N$ is nilpotent.\n\\end{defn}\n\n\\begin{lem} Suppose that $G$ is a compact $p$-adic Lie group that is finite-by-nilpotent and has no elements of order $p$.\n\\begin{enumerate}\n\\item $G$ has a maximal finite normal subgroup $\\Delta^+(G)$;\n\\item $G\/\\Delta^+$ is a nilpotent pro-$p$ group without elements of order $p$.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nFor part (1) see \\cite[1.3]{ArdBro2007}. 
Part (2) follows from \\cite[Lemma 4.1]{Ard2006}.\n\\end{proof}\n\n\\subsection{Properties of $\\mathbb{Z}_p G$} \n\nWe record some standard properties of Iwasawa algebras that we will use without further comment; see \\cite{ArdBro2006} for references to proofs.\n\n\\begin{lem} Suppose that $G$ is a compact $p$-adic Lie group of dimension $d$.\n\\begin{enumerate}\n\\item $\\mathbb{Z}_p G$ is a left and right Noetherian ring;\n\\item $\\mathbb{Z}_p G$ is an Auslander-Gorenstein ring of dimension $d+1$;\n\\item $\\mathbb{Z}_p G$ has finite global dimension if and only if $G$ has no elements of order $p$;\n\\end{enumerate}\n\\end{lem}\n\n\\subsection{The Canonical dimension function} \\label{can}\nRecall that if $R$ is an Auslander-Gorenstein ring then there is a canonical dimension function $\\delta$ on the category of non-zero finitely generated $R$-modules given by $\\delta(M)=\\dim R-j_R(M)$ where $j_R(M)=\\inf\\{j|\\Ext^j_R(M,R)\\neq 0\\}$. \n\nWe say a module is \\emph{pseudo-null} if $\\delta(M)\\leqslant \\mathrm{inj.dim}(R)-2$.\n\nWhen $R=\\mathbb{Z}_p G$ for $G$ a compact $p$-adic Lie group we write $d_G(M)$ for $\\delta(M)$. Then $M$ is pseudo-null as a $\\mathbb{Z}_p G$-module when $d_G(M)\\leqslant\\dim G-1$. \n \n\\subsection{Homology and base change} \\label{basechange}\n\n\\begin{lem} Suppose we have rings $R$ and $S$, a ring homomorphism $R\\rightarrow S$, a right $S$-module $N$, and a left $R$-module $M$.\n\\begin{enumerate}\n\\item If $S$ is a flat as a right $R$-module then \\[ \\Tor_i^R(N,M)\\cong\\Tor_i^S(N,S\\otimes_R M) \\] for each $i\\geqslant 0$.\n\\item In general, there is a base change spectral sequence \\[E^2_{ij}=\\Tor_i^S(N,\\Tor_j^R(S,M)) \\Longrightarrow \\Tor_{i+j}^R(N,M).\\]\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nPart (1) follows immediately from part (2). Part (2) is \\cite[Theorem 5.6.6]{Wei1995}. \n\\end{proof}\n\n\\subsection{Computation of homology groups when $G\\cong\\mathbb{Z}_p$} \\label{homology}\n\\begin{lem} If $G=\\mathbb{Z}_p=\\langle z\\rangle$ and $M$ is a profinite $\\mathbb{Z}_p G$-module then \n\\begin{enumerate}\n\\item $H_i(G,M)=0$ unless $i=0,1$;\n\\item $H_0(G,M)=M\/(z-1)M$;\n\\item $H_1(G,M)=M^G=\\ker (z-1)\\colon M\\rightarrow M$. \n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof} The map $\\mathbb{Z}_p G\\rightarrow\\mathbb{Z}_p G$ sending $\\alpha$ to $(z-1)\\alpha$ defines a projective resolution of $\\mathbb{Z}_p$ as a $\\mathbb{Z}_p G$-module. All three parts follow easily.\n\\end{proof}\n\n\\subsection{Localisation at augmentation ideals} \\label{localisable}\n\nRecall that a semi-prime ideal $I$ in a ring $R$ is \\emph{localisable} if the set of elements of $r$ in $R$ such that $r+I$ is regular in $R\/I$ forms an Ore set in $R$. 
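For example, in a commutative ring every multiplicatively closed set automatically satisfies the Ore condition, so every semi-prime ideal is localisable; in particular the augmentation ideal $I_{G,G}$ is localisable whenever $G$ is abelian.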
The following result of Ardakov explains, at least in part, the importance of the condition that $G$ be finite-by-nilpotent.\n\n\\begin{thm}[Theorem A of \\cite{Ard2006}] If $G$ is a compact $p$-adic Lie group and $H$ is a closed normal subgroup then the kernel $I_{H,G}$ of the augmentation map $\\mathbb{Z}_p G\\rightarrow\\mathbb{Z}_p G\/H$ is localisable if and only if $H$ is a finite-by-nilpotent group.\n\\end{thm}\n\n\\subsection{Akashi series}\\label{Akashi}\n\nThis material in this section is largely taken from \\cite[section 4]{CoaSchSuj2003\/2}.\n\n\\begin{defn} If $M$ is a finitely generated torsion $\\mathbb{Z}_p\\Gamma$-module then there is an exact sequence of $\\mathbb{Z}_p\\Gamma$-modules \\[ 0\\rightarrow \\bigoplus_{i=1}^r \\mathbb{Z}_p\\Gamma\/\\mathbb{Z}_p\\Gamma f_i\\rightarrow M\\rightarrow D\\rightarrow 0 \\] with $D$ pseudo-null. The \\emph{characteristic element} of $M$ is defined by \\[ f_M=\\prod f_i\\] and is uniquely determined up to multiplication by a unit in $\\mathbb{Z}_p\\Gamma$.\n\\end{defn}\n\nNow write $Q(\\Gamma)$ for the field of fractions of $\\mathbb{Z}_p\\Gamma$. \n\n\\begin{defn} If $G$ is isomorphic to a semidirect product $G\\cong H\\rtimes\\Gamma$ and $M$ is a finitely generated $\\mathbb{Z}_p G$-module such that $H_j(H,M)$ is a torsion $\\mathbb{Z}_p\\Gamma$-module for each $j\\geq 0$, then the \\emph{Akashi series} of $M$ is given by \\[ Ak_H(M)=\\prod_{j\\geqslant 0}(f_{H_j(H,M)})^{(-1)^j}\\in Q(\\Gamma)\/(\\mathbb{Z}_p\\Gamma)^\\times.\\] In this case we say that the Akashi series of $M$ is well-defined. We will supress the subscript $H$ when no confusion will result.\n\\end{defn}\n\n\\begin{rmk} Strictly our definition of the Akashi series is a little more general than that in \\cite{CoaSchSuj2003\/2}; we extend their definition to some modules that need not be finitely generated over $\\mathbb{Z}_p H$. One consequence of this is that unlike in their version if a $p$-torsion module has well-defined Akashi series it need not necessarily be trivial. However the proofs in the lemma below are identical. \n\\end{rmk}\n\n\\begin{lem} Suppose $G$ is isomorphic to a semidirect product $G\\cong H\\rtimes\\Gamma$\n\\begin{enumerate} \n\\item If $0\\rightarrow L\\rightarrow M\\rightarrow N\\rightarrow 0$ is a short exact sequence of finitely generated $\\mathbb{Z}_p G$-modules with well-defined Akashi series then $Ak(M)=Ak(L).Ak(N)$. \n\\item If $M$ is a finitely generated $\\mathbb{Z}_p G$-module with well-defined Euler characteristic, then $M$ has well-defined Akashi series and \\[ \\chi(G,M)=|\\epsilon(Ak(M))|_p^{-1} \\] where $\\epsilon$ is the augmentation map $\\mathbb{Z}_p\\Gamma\\rightarrow \\mathbb{Z}_p$.\n\\end{enumerate}\n\\end{lem}\n\n\\section{Characterisation of when the nature of the zeroth homology group suffices to determine well-definition}\n\n\\subsection{Modules with well-defined Euler characteristic} \\label{Eulerdefd}\n\nSuppose that $G$ is any compact $p$-adic Lie group with no elements of order $p$. We want to study those finitely generated left $\\mathbb{Z}_p G$-modules $M$ with well-defined Euler characteristic. 
Since the groups $H_i(G,M)\\cong\\Tor_i^{\\mathbb{Z}_p G}(\\mathbb{Z}_p,M)$ are finitely generated $\\mathbb{Z}_p$-modules we just need to know when they are all $p$-torsion.\n\n\\begin{thm} If $G$ is a compact $p$-adic Lie group with no elements of order $p$, then $G$ is finite-by-nilpotent if and only if the following are equivalent for every finitely generated left $\\mathbb{Z}_p G$-module $M$:\n\\begin{enumerate} \\item $M$ has well-defined Euler characteristic; \\item $H_0(G,M)$ is finite. \\end{enumerate} \n \\end{thm}\n\n\\begin{proof} \nUsing Theorem \\ref{localisable}, we know that $G$ is finite-by-nilpotent if and only if $S:=\\mathbb{Z}_p G\\backslash I_{G,G}$ is an Ore set. \n\nSuppose first that $S$ is an Ore set, so we may form the localisation $\\mathbb{Z}_p G_S$. Then $\\mathbb{Z}_p G_S$ is a local ring with maximal ideal $(I_{G,G})_S$ and residue field $\\mathbb{Q}_p$. \n\nSince $\\mathbb{Z}_p$ is a $\\mathbb{Z}_p G$-bimodule, $\\mathbb{Z}_p\\otimes_{\\mathbb{Z}_p G}M$ is a left $\\mathbb{Z}_p G$-module. As we also have $(\\mathbb{Z}_p\\otimes_{\\mathbb{Z}_p G}P)_S\\cong \\mathbb{Q}_p\\otimes_{{\\mathbb{Z}_p G}_S}P_S$ for every finitely generated projective left $\\mathbb{Z}_p G$-module $P$, it follows that \\[ (\\Tor_i^{\\mathbb{Z}_p G}(\\mathbb{Z}_p,M))_S\\cong\\Tor_i^{\\mathbb{Z}_p G_S}(\\mathbb{Q}_p,M_S) \\mbox{ for each }i\\geqslant 0.\\] Moreover, since $G$ acts trivially on each group $\\Tor_i^{\\mathbb{Z}_p G}(\\mathbb{Z}_p,M)$, they are all $p$-torsion if and only if they are all $S$-torsion, if and only if $\\Tor_i^{\\mathbb{Z}_p G_S}(\\mathbb{Q}_p,M_S)=0$ for every $i\\geqslant 0$. \n\nAs $\\mathbb{Z}_p G_S$ is local with residue field $\\mathbb{Q}_p$, and $M_S$ is finitely generated over $\\mathbb{Z}_p G_S$, Nakayama's Lemma tells us that $\\mathbb{Q}_p\\otimes_{{\\mathbb{Z}_p G}_S} M_S=0$ if and only if $M_S=0$. Thus the Euler characteristic of $M$ is well-defined if and only if $M_S=0$, if and only if $\\mathbb{Z}_p\\otimes_{\\mathbb{Z}_p G}M$ is finite, as required. \n\nSuppose now that $S$ is not an Ore set. This means that we can find $r\\in\\mathbb{Z}_p G$ and $s\\in S$ such that $Sr\\cap \\mathbb{Z}_p Gs=\\emptyset$. We consider the cyclic left $\\mathbb{Z}_p G$-module $M=\\mathbb{Z}_p G\/\\mathbb{Z}_p G\\langle r,s\\rangle$ for this pair $r,s$. \n\nThere is a free resolution of $M$ that begins \\[ \\cdots\\rightarrow(\\mathbb{Z}_p G)^d\\stackrel{d_1}{\\rightarrow}(\\mathbb{Z}_p G)^2\\stackrel{d_0}{\\rightarrow}\\mathbb{Z}_p G\\rightarrow M\\rightarrow 0, \\] with $d_0(\\alpha,\\beta)=\\alpha r+\\beta s$.\n\nNow applying $\\mathbb{Z}_p\\otimes_{\\mathbb{Z}_p G}(-)$ to this resolution yields a complex \\[ \\mathbb{Z}_p^d\\stackrel{\\overline{d_1}}{\\rightarrow}\\mathbb{Z}_p^2\\stackrel{\\overline{d_0}}{\\rightarrow}\\mathbb{Z}_p\\rightarrow 0, \\] with homology $H_\\bullet(G,M)$.\n\nThe condition that $Sr\\cap \\mathbb{Z}_p Gs=\\emptyset$ means that if $d_0(\\alpha,\\beta)=0$ then $\\alpha\\in I_{G,G}$. Since $\\beta s=-\\alpha r\\in I_{G,G}$ and $S$ is multiplicatively closed, it follows also that $\\beta\\in I_{G,G}$. Thus \\[\\im(d_1)=\\ker(d_0)\\subseteq I_{G,G}(\\mathbb{Z}_p G)^2\\] and so $\\overline{d_1}=0$. \n \nWe also have $\\overline{d_0}(a,b)=a\\epsilon(r)+b\\epsilon(s)$ where $\\epsilon$ is the augmentation map $\\mathbb{Z}_p G\\rightarrow \\mathbb{Z}_p$. As $s\\in S$, $\\epsilon(s)\\neq 0$ and so $\\overline{d_0}$ is not the zero map. 
It follows that $H_0(G,M)\\cong\\mathbb{Z}_p\/(\\epsilon(r),\\epsilon(s))$ is finite and $H_1(G,M)\\cong\\ker\\overline{d_0}\\cong \\mathbb{Z}_p$ is infinite, so $H_0(G,M)$ is finite but the Euler characteristic of $M$ is not well-defined.\n\\end{proof}\n\n\\subsection{Multiplicativity of Euler Characteristic} \\label{AddEuler}\n\nBecause the $S$-torsion modules form an abelian subcategory of all finitely generated $\\mathbb{Z}_p G$-modules we may prove the following:\n\n\\begin{prop} Suppose that $G$ is a finite-by-nilpotent compact $p$-adic Lie group. If $0\\rightarrow L\\rightarrow M\\rightarrow N\\rightarrow 0$ is a short exact sequence of finitely generated left $\\mathbb{Z}_p G$-modules then $M$ has well-defined Euler characteristic if and only if both $L$ and $N$ have well-defined Euler characteristic. Moreover in this case \\[ \\chi(G,M)=\\chi(G,L)\\cdot\\chi(G,N).\\] \\end{prop}\n\n\\begin{proof} The first part follows from Proposition \\ref{Eulerdefd}. The second part can be read off from the long exact sequence of homology.\n\\end{proof}\n\n\\subsection{Modules with well-defined Akashi series} \\label{Akdefd}\nThere are analogous results for Akashi series; in particular:\n\n\\begin{thm} Suppose $G$ is isomorphic to a semi-direct product $H\\rtimes\\Gamma$ and define \\[ T:=\\mathbb{Z}_p G\\backslash \\ker(\\mathbb{Z}_p G\\rightarrow \\mathbb{Z}_p\\Gamma).\\] The Akashi series of $M$ is well-defined for every finitely generated left $\\mathbb{Z}_p G$-module $M$ such that $H_0(H,M)$ is a torsion $\\mathbb{Z}_p\\Gamma$-module if and only if $T$ is an Ore set in $\\mathbb{Z}_p G$, if and only if $H$ is finite-by-nilpotent.\n\\end{thm}\n\n\\begin{proof} The proof is nearly identical to that of Theorem \\ref{Eulerdefd}, so we only sketch it:\n\nOnce again, that $T$ is an Ore set if and only if $H$ is finite-by-nilpotent follows immediately from Theorem \\ref{localisable}. \n\nIf $T$ is an Ore set then $\\mathbb{Z}_p G_T$ is a local ring with maximal ideal $(I_{H,G})_T$ and $H_0(H,M)$ is a torsion $\\mathbb{Z}_p\\Gamma$-module if and only if \\[ H_0(H,M)_T\\cong M\\otimes_{\\mathbb{Z}_p G_T}\\mathbb{Z}_p G_T\/(I_{H,G})_T=0\\] if and only if $M_T=0$. Thus $H_j(H,M)_T=0$ for every $j\\geq 0$ if and only if $H_0(H,M)_T=0$; and, since $H$ acts trivially on each group $H_j(H,M)$, these all vanish after localisation at $T$ precisely when they are all $\\mathbb{Z}_p\\Gamma$-torsion, so the Akashi series of $M$ is well-defined if and only if $H_0(H,M)$ is $\\mathbb{Z}_p\\Gamma$-torsion.\n\nConversely if $T$ is not an Ore set then there are elements $r\\in\\mathbb{Z}_p G$ and $t\\in T$ such that there are no elements $r'\\in\\mathbb{Z}_p G$ and $t'\\in T$ with $r't=t'r$. Thus the kernel of the map \\[ (\\mathbb{Z}_p G)^2\\rightarrow \\mathbb{Z}_p G; (\\alpha,\\beta)\\mapsto \\alpha r+\\beta t\\] is contained in $(I_{H,G})^2$. Using this fact to compute the homology groups $H_j(H,M)$ for $M=\\mathbb{Z}_p G\/\\mathbb{Z}_p G\\langle r,t\\rangle$ we obtain that $H_0(H,M)$ is $\\mathbb{Z}_p\\Gamma$-torsion but $H_1(H,M)$ is not $\\mathbb{Z}_p\\Gamma$-torsion. \n\\end{proof}\n\n\\begin{cor} If $G$ is as above and $0\\rightarrow L\\rightarrow M\\rightarrow N\\rightarrow 0$ is a short exact sequence of finitely generated $\\mathbb{Z}_p G$-modules then $M$ has well-defined Akashi series if and only if $L$ and $N$ both have well-defined Akashi series.\\hfill $\\qed$\n\\end{cor}\n\n\\section{Triviality of Euler characteristic for pseudo-nulls}\n\n\\subsection{Reduction to torsion-free nilpotent $G$} \\label{reduction}\n\nOur main goal now is to prove the second part of our main result: that if $G$ is a finite-by-nilpotent compact $p$-adic Lie group without $p$-torsion and $M$ is a finitely generated pseudo-null $\\mathbb{Z}_p G$-module with well-defined Euler characteristic then $\\chi(G,M)=1$. 
\n\nIn fact we prove the apparently stronger result that if $G$ is a finite-by-nilpotent compact $p$-adic Lie group and $H$ is a closed normal subgroup such that $G\\cong H\\rtimes \\Gamma$ then whenever $M$ is a finitely generated pseudo-null $\\mathbb{Z}_p G$-module with well-defined Akashi series we have $Ak_H(M)=1$. To see that the Euler characteristic version follows from this, observe that (except in the trivial case where $G$ is finite) we may always find such a closed normal subgroup $H$ and apply Lemma \\ref{Akashi}(2). \n\n\n\nWe first reduce to the case that $G$ is nilpotent and pro-$p$.\n\n\\begin{lem} Suppose that $G\\cong H\\rtimes\\Gamma$ is a compact $p$-adic Lie group with finite normal subgroup $\\Delta$ such that $(|\\Delta|,p)=1$. If $M$ is a finitely generated $\\mathbb{Z}_p G$-module then \\[ H_i(H,M)\\cong H_i(H\/\\Delta,M_\\Delta)\\mbox{ for each }i\\geqslant 0,\\] as $\\mathbb{Z}_p\\Gamma$-modules and so $Ak_H(M)=Ak_{H\/\\Delta}(M_\\Delta)$ if either is well-defined --- of course $G\/\\Delta\\cong (H\/\\Delta)\\rtimes\\Gamma$. \n\nMoreover, $d_{G\/\\Delta}(M_\\Delta)\\leqslant d_G(M)$, and so $M_\\Delta$ is a pseudo-null $\\mathbb{Z}_p G\/\\Delta$-module if $M$ is a pseudonull $\\mathbb{Z}_p G$-module.\n\\end{lem}\n\n\\begin{proof} First recall that $\\mathbb{Z}_p$ with the trivial $\\Delta$-action is a projective right $\\mathbb{Z}_p\\Delta$ module as $|\\Delta|$ is a unit in $\\mathbb{Z}_p$. Since the induction functor $(-)\\hat{\\otimes}_{\\mathbb{Z}_p\\Delta}\\mathbb{Z}_p H$ from profinite right $\\mathbb{Z}_p\\Delta$-modules to profinite right $\\mathbb{Z}_p H$-modules is left-adjoint to the restriction functor, it sends projective modules to projective modules and so in particular $\\mathbb{Z}_p H\/\\Delta\\cong \\mathbb{Z}_p \\hat{\\otimes}_{\\mathbb{Z}_p \\Delta} \\mathbb{Z}_p H$ is a projective profinite right $\\mathbb{Z}_p H$-module and so flat as a right $\\mathbb{Z}_p H$-module. \n\nUsing Lemma \\ref{basechange}(1) we may conclude that $H_i(H,M)\\cong H_i(H\/\\Delta,H_0(\\Delta,M))$ for each $i\\geqslant 0$ as $\\mathbb{Z}_p\\Gamma$-modules and the first part follows.\n\nBy considering a finitely generated projective resolution of $M$ we can also show \\[ \\Ext^j_{\\mathbb{Z}_p G}(M,\\mathbb{Z}_p G)\\otimes_{\\mathbb{Z}_p \\Delta}\\mathbb{Z}_p\\cong \\Ext^j_{\\mathbb{Z}_p G\/\\Delta}(M_\\Delta,\\mathbb{Z}_p G\/\\Delta) \\] for each $j\\geqslant 0$ and the second part follows too as $\\dim G=\\dim G\/\\Delta$.\n\\end{proof}\n\nBy applying this Lemma in the case $\\Delta=\\Delta^+$ is the maximal finite normal subgroup of $G$ and using Lemma \\ref{group} we have reduced the calculation of Euler characteristics of finitely generated pseudo-null $\\mathbb{Z}_p G$-modules when $G$ is finite-by-nilpotent without elements of order $p$ to the case when $G$ is torsion-free nilpotent and pro-$p$. \n\n\\subsection{The torsion-free nilpotent case}\\label{final}\n\nSuppose now that $G$ is a finitely generated nilpotent pro-$p$ group without torsion and we have a fixed decomposition $G\\cong H\\rtimes\\Gamma$. \n \n\\begin{lem} Suppose that $M$ is a $\\mathbb{Z}_p G$-module with well-defined Akashi series and there exists $z\\in Z(G)\\cap H$ such that $Z=\\langle z\\rangle$ is an isolated subgroup of $G$ acting trivially on $M$. Then $Ak_H(M)=1$. 
\n\\end{lem}\n\n\\begin{proof}\nFirst notice that $H_0(H,M)=H_0(H\/Z,M)$ is a torsion $\\mathbb{Z}_p\\Gamma$-module, that $G\/Z\\cong H\/Z\\rtimes\\Gamma$ and that $H\/Z$ is also torsion-free and nilpotent so $Ak_{H\/Z}(M)$ is also well-defined by Proposition \\ref{Eulerdefd}. Now the proof is nearly identical to the proof of \\cite[Corollary 12.2]{ArdWad2008} but we'll sketch it here for the sake of the reader.\n\nBy Lemma \\ref{homology}, $H_i(Z,M)$ vanishes for $i>1$ and is isomorphic to $M$ as a left $\\mathbb{Z}_p G\/Z$-module for $i=0,1$ since $Z$ is central in $G$. Thus Lemma \\ref{basechange}(2) describes a spectral sequence of $\\mathbb{Z}_p\\Gamma$-modules with second page \\[ E_{ij}=H_i(H\/Z,H_j(Z,M))\\] that is concentrated in rows $j=0,1$. By \\cite[Exercise 5.2.2]{Wei1995}, for example, this yields a long exact sequence \\[ \\cdots\\rightarrow H_{n+1}(H,M)\\rightarrow H_{n+1}(H\/Z,M)\\rightarrow H_{n-1}(H\/Z,M)\\rightarrow H_n(H,M)\\rightarrow\\cdots \\] of $\\mathbb{Z}_p\\Gamma$-modules.\n\nThe result now follows from the multiplicativity of characteristic elements. \\end{proof}\n\n\\begin{thm} If $G\\cong H\\rtimes\\Gamma$ is a torsion-free nilpotent $p$-adic Lie group, and $M$ is a pseudo-null $\\mathbb{Z}_p G$-module with well-defined Akashi series then $Ak_H(M)=1$.\n\\end{thm}\n\n\\begin{proof} Since $M$ is Noetherian and $Ak_H(0)=1$ we may define $N$ to be a maximal submodule of $M$ such that $Ak_H(N)=1$. Using Lemma \\ref{Akashi}(1) and Corollary \\ref{Akdefd} we see that every non-zero submodule $L$ of $M\/N$ satisfies $Ak_H(L)\\neq 1$. Thus after replacing $M$ by $M\/N$ it suffices to prove that $M$ must have a non-zero submodule $L$ with $Ak_H(L)=1$. \n\nNext recall (\\cite[4.5]{Lev1992}) that whenever we have a finitely generated module $M$ over an Auslander-Gorenstein ring it has a critical submodule; i.e.\\ a submodule with the property that every proper quotient has strictly smaller canonical dimension. Using this fact and the remarks in the first paragraph we may assume that our module $M$ is critical. \n\nNow pick an isolated one-dimensional subgroup $Z$ of $Z(G)\\cap H$; then $M^Z$ is a $\\mathbb{Z}_p G$-submodule of $M$. By the Lemma above, $Ak_H(M^Z)=1$, and so we may assume (Lemma \\ref{homology}(3)) that $H_1(Z,M)=M^Z=0$. \n\nNow the homology spectral sequence \\[ E^2_{ij}=H_i(H\/Z,H_j(Z,M))\\Longrightarrow H_{i+j}(H,M) \\] has only one non-trivial row on the second page and so $H_i(H,M)=H_i(H\/Z,M_Z)$ as $\\mathbb{Z}_p\\Gamma$-modules. Thus $Ak_H(M)=Ak_{H\/Z}(M_Z)$. \n\nBut $d_G(M_Z)1$ for all $g\\in G$ or $G$ has dimension $1$;\n\\item if $G$ is isomorphic to a semidirect product $\\mathbb{Z}_p^d\\rtimes\\mathbb{Z}_p$ then $G$ is nilpotent;\n\\item if $G$ is split-reductive then it is abelian.\n\\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\nFor part (1), suppose $H$ is a closed subgroup of $G$ and $N$ is any finitely generated left $\\mathbb{Z}_p H$-module. 
By Shapiro's Lemma (see \\cite[Theorem 6.10.9]{RibZal2000}, for example) we have $H_i(G,\\mathbb{Z}_p G\\otimes_{\\mathbb{Z}_p H} N)\\cong H_i(H,N)$ for each $i\\geqslant 0$, so it suffices to prove that if $N$ is pseudo-null as a $\\mathbb{Z}_p H$-module then $\\mathbb{Z}_p G\\otimes_{\\mathbb{Z}_p H}N$ is pseudo-null as a $\\mathbb{Z}_p G$-module.\n\nBy inducing a finitely generated projective resolution of $N$ as a $\\mathbb{Z}_p H$-module to a projective resolution of $\\mathbb{Z}_p G\\otimes_{\\mathbb{Z}_p H}N$ as a $\\mathbb{Z}_p G$-module we see that \\[ \\Ext^j_{\\mathbb{Z}_p H}(N,\\mathbb{Z}_p H)\\otimes_{\\mathbb{Z}_p H}\\mathbb{Z}_p G\\cong \\Ext^j_{\\mathbb{Z}_p G}(\\mathbb{Z}_p G\\otimes_{\\mathbb{Z}_p H}N,\\mathbb{Z}_p G)\\] for each $j\\geqslant 0$ and we are done.\n\nPart (2) follows from \\cite[Lemma 7.4 \\& Theorem 11.5]{ArdWad2006}. In particular there is a pseudo-null $p$-torsion $\\mathbb{Z}_p G$-module $M$ with $\\chi(G,M)\\neq 1$.\n\nFor part (3), since all open subgroup are closed, part (1) tells us that it suffices to prove the result for an open subgroup of $G$. The result now follows immediately from Totaro's theorem quoted in the introduction.\n\nFor part (4), suppose $G\\cong \\mathbb{Z}_p^d\\rtimes \\mathbb{Z}_p$, let $H$ be the closed normal subgroup of $G$ isomorphic to $\\mathbb{Z}_p^d$ and let $g$ generate a complement to $H$ in $G$. Now the action of $g$ on $H$ has a minimal polynomial $p(t)$, say and we may write $p(t)=(t-1)^rq(t)$ with $q(t)$ and $(t-1)$ relatively prime. If $q$ is constant then $G$ is nilpotent so we may assume it is not. Let $K=\\langle\\ker q(g),g\\rangle$, a closed subgroup $G$ with $\\dim C_K(g)=1$. By part (3) pseudo-nulls are not $\\chi$-trivial for $K$ and the result follows by part (2). \n \nTo prove part (5) we first notice that parts (1) and (4) imply that it suffices to show that if $G$ is not abelian then it must have a non-nilpotent closed subgroup isomorphic to a semi-direct product $\\mathbb{Z}_p^d\\rtimes \\mathbb{Z}_p$. If the associated Lie algebra is split-reductive but not abelian then $G$ has a non-abelian subgroup isomorphic to a semi-direct product $\\mathbb{Z}_p\\rtimes\\mathbb{Z}_p$. \\end{proof} \n\n\\begin{rmks} \\hfill\n\n\\begin{enumerate}\n\\item It is easy to see that the proof of part (5) applies to a much wider class of groups than non-abelian split-reductive groups. \n\n\\item However, one class of examples that such arguments don't easily apply to includes groups that are four-dimensional and soluble and are isomorphic to a semi-direct product $H_3\\rtimes\\mathbb{Z}_p$ where $H_3$ is a $3$-dimensional Heisenberg pro-$p$ group and the complementary copy of $\\mathbb{Z}_p$ is generated by an element that induces a automorphism of infinite order on $H_3$ that acts trivially on the centre of $H_3$ and fixes no other one-dimensional subgroup. 
\n\n\n\\end{enumerate}\n\\end{rmks}\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{introduction}\n\nPrediction with expert advice is\na framework for online sequence prediction.\nPredictions are made step by step.\nThe quality of each prediction (the discrepancy between\nthe prediction and the actual outcome) is evaluated\nby a real number called loss.\nThe losses are accumulated over time.\nIn the standard framework\nfor prediction with expert advice\n(see the monograph~\\cite{Cesa-BianchiLugosi}\nfor a comprehensive review),\nthe losses from all steps are just summed.\nIn this paper, we consider a generalization\nwhere older losses can be devalued;\nin other words,\nwe use discounted cumulative loss.\n\nPredictions are made by Experts and Learner\naccording to Protocol~\\ref{prot:GenDisc}.\n\\begin{protocol\n \\caption{Prediction with expert advice under general discounting}\n \\label{prot:GenDisc}\n \\begin{algorithmic}\n \\STATE $\\dL_0:=0$.\n \\STATE $\\dL_0^\\theta:=0$, $\\theta \\in \\Theta$.\n \\FOR{$t=1,2,\\dots$}\n \\STATE Accountant announces $\\alpha_{t-1}\\in(0,1]$.\n \\STATE Experts announce $\\gamma_t^\\theta \\in\\Gamma$, $\\theta \\in \\Theta$.\n \\STATE Learner announces $\\gamma_t\\in\\Gamma$.\n \\STATE Reality announces $\\omega_t\\in\\Omega$.\n \\STATE $\\dL_t^\\theta:=\\alpha_{t-1}\\dL_{t-1}^\\theta\n +\\lambda(\\gamma_t^\\theta,\\omega_t)$, $\\theta \\in \\Theta$.\n \\STATE $\\dL_t:=\\alpha_{t-1}\\dL_{t-1}+\\lambda(\\gamma_t,\\omega_t)$.\n \\ENDFOR\n \\end{algorithmic}\n\\end{protocol}\nIn this protocol,\n$\\Omega$ is the set of possible outcomes and\n$\\omega_1,\\omega_2,\\omega_3\\ldots$ is the sequence to predict;\n$\\Gamma$ is the set of admissible predictions,\nand ${\\lambda\\colon\\Gamma\\times\\Omega\\to[0,\\infty]}$\nis the loss function.\nThe triple $(\\Omega,\\Gamma,\\lambda)$ specifies\nthe game of prediction.\nThe most common examples are\nthe binary square loss, log loss, and absolute loss games.\nThey have $\\Omega=\\{0,1\\}$ and $\\Gamma=[0,1]$,\nand their loss functions are\n$\\lambda^\\mathrm{sq}(\\gamma,\\omega)=(\\gamma-\\omega)^2$,\n$\\lambda^\\mathrm{log}(\\gamma,0)=-\\log (1-\\gamma)$ and\n$\\lambda^\\mathrm{log}(\\gamma,1)=-\\log \\gamma$,\n$\\lambda^\\mathrm{abs}(\\gamma,\\omega)=\\lvert\\gamma-\\omega\\rvert$,\nrespectively.\n\nThe players in the game of prediction\nare Experts $\\theta$ from some pool $\\Theta$,\nLearner, and also Accountant and Reality.\nWe are interested in (worst-case optimal) strategies for Learner,\nand thus the game can be regarded as a two-player game,\nwhere Learner opposes the other players.\nThe aim of Learner is to keep his total loss $\\dL_t$\nsmall as compared to the total losses $\\dL_t^\\theta$\nof all experts $\\theta\\in\\Theta$.\n\nThe standard protocol of prediction with expert advice\n(as described in~\\cite{VovkAS,Vovk:98})\nis a special case of Protocol~\\ref{prot:GenDisc}\nwhere Accountant always announces $\\alpha_t=1$, $t=0,1,2,\\ldots$.\nThe new setting gives some more freedom\nto Learner's opponents.\n\nAnother important special case is the exponential (geometric) discounting\n$\\alpha_t=\\alpha\\in(0,1)$.\nExponential discounting is widely used\nin finance and economics (see, e.\\,g.,~\\citet{Muth1960}),\ntime series analysis (see, e.\\,g.,~\\citealp{Gardner2006}),\nreinforcement learning~\\cite{BartoSutton},\nand other applications.\nIn the context of prediction with expert advice,\nFreund and Hsu~\\citet{Freund2008} 
noted that\nthe discounted loss provides an alternative to\n``tracking the best expert'' framework~\\citet{HerbsterTBE}.\nIndeed, an exponentially discounted sum depends \nalmost exclusively on the last $O(\\log (1\/\\alpha))$ terms.\nIf the expert with the best one-step performance\nchanges at this rate,\nthen Learner observing the $\\alpha$-dis\\-count\\-ed losses\nwill mostly follow predictions of the current best expert.\nUnder our more general discounting,\nmore subtle properties of best expert changes \nmay be specified by varying the discount factor.\nIn particular, one can cause Learner to ``restart mildly'' \ngiving $\\alpha_t=1$ (or $\\alpha_t\\approx 1$) \nmost of the time and $\\alpha_t\\ll 1$ at crucial moments.\n\\ifARXIV\n(We prohibit $\\alpha_t=0$ in the protocol,\nsince this is exactly the same as the stopping the current\ngame and starting a new, independent game;\non the other hand, the assumption $\\alpha_t\\ne 0$\nsimplifies some statements.)\n\\fi\n\nCesa-Bianchi and Lugosi~\\cite[\\S~2.11]{Cesa-BianchiLugosi}\ndiscuss another kind of discounting\n\\begin{equation}\\label{eq:CBLdiscount}\n L_T = \\sum_{t=1}^T \\beta_{T-t} l_t\\,,\n\\end{equation}\nwhere $l_t$ are one-step losses and $\\beta_t$ are some decreasing\ndiscount factors.\nTo see the difference, let us rewrite our definition in the same style:\n\\ifCONF\n\\begin{equation}\\label{eq:beta-discount}\nL_T=\\alpha_{T-1}L_{T-1}+ l_T\n=\n\\sum_{t=1}^T \\alpha_{t}\\cdots\\alpha_{T-1}l_t\n= \\frac{1}{\\beta_T}\\sum_{t=1}^T \\beta_{t}l_t\\,,\n\\end{equation}\n\\fi\n\\ifARXIV\n\\begin{multline}\\label{eq:beta-discount}\nL_T=\\alpha_{T-1}L_{T-1}+ l_T\n=\n\\alpha_{T-2}\\alpha_{T-1}L_{T-2}\n+\\alpha_{T-1}l_{T-1}\n+l_T\n= \\ldots\n\\\\\n=\n\\sum_{t=1}^T \\alpha_{t}\\cdots\\alpha_{T-1}l_t\n= \\frac{1}{\\beta_T}\\sum_{t=1}^T \\beta_{t}l_t\\,,\n\\end{multline}\n\\fi\nwhere $\\beta_t=1\/\\alpha_1\\cdots\\alpha_{t-1}$, $\\beta_1=1$.\nThe sequence $\\beta_t$ is \\emph{non-decreasing},\n$\\beta_1\\le \\beta_2\\le \\beta_3\\le \\ldots$;\nbut it is applied ``in the reverse order'' compared to~\\eqref{eq:CBLdiscount}.\nSo, in both definitions, the older losses are the less weight they\nare ascribed.\nHowever, according to~\\eqref{eq:CBLdiscount}, the losses $l_t$\nhave different relative weights in $L_T$, $L_{T+1}$ and so on,\nwhereas~\\eqref{eq:beta-discount} fixes\nthe relative weight of $l_t$ with respect to all previous losses\nforever starting from the moment~$t$.\nThe latter\nproperty allows us to get uniform algorithms for Learner with\nloss guarantees that hold for all $T=1,2,\\ldots$;\nin contrast, Theorem~2.8 in~\\cite{Cesa-BianchiLugosi}\ngives a guarantee only at one moment $T$ chosen in advance.\nThe only kind of discounting that can be expressed\nboth as~\\eqref{eq:CBLdiscount} and as~\\eqref{eq:beta-discount}\nis the exponential discounting\n$\\sum_{t=1}^T \\alpha^{T-t}l_t$.\nUnder this discounting, NormalHedge algorithm is\nanalysed in~\\cite{Freund2008};\nwe briefly compare the obtained bounds in Section~\\ref{sec:convex}.\n\n\\ifARXIV\nLet us say a few words about ``economical'' interpretation of discounting.\nRecall that $\\alpha_t\\le 1$ in Protocol~\\ref{prot:GenDisc},\nin other words, the previous cumulative loss cannot become\nmore important at later steps.\nIf the losses are interpreted as the lost money,\nit is more natural to assume that the old losses must be multiplied\nby something greater than~$1$.\nIndeed, the money could have been invested and have brought some\ninterest, so the current value of an 
ancient small loss can \nbe considerably large.\nNevertheless, there is \na not so artificial interpretation for our discounting model as well.\nAssume that the loss at each step is expressed \nas a quantity of some goods, and we pay for them in cash;\nsay, we pay for apples damaged because of \nour incorrect weather prediction.\nThe price of apples can increase but never decreases.\nThen $\\beta_t$ in~\\eqref{eq:beta-discount} is the current price,\n$\\sum_{t=1}^T\\beta_t l_t$ is the total sum of money we lost,\nand $L_T$ is the quantity of apples that we could have bought now\nif we had not lost so much money.\n(We must also assume that\nwe cannot hedge our risk by buying \na lot of cheap apples in advance---the apples \nwill rot---and that the bank interest is zero.)\n\nWe need the condition $\\alpha_t\\le 1$\nfor our algorithms and loss bounds.\nHowever, the case of $\\alpha_t\\ge 1$ is no less interesting.\nWe cannot say anything about it and leave it as an open problem,\nas well as the general case of arbitrary positive~$\\alpha_t$.\n\\fi\n\nThe rest of the paper is organized as follows.\nIn Section~\\ref{sec:linear},\nwe propose a generalization of\nthe Aggregating Algorithm~\\cite{Vovk:98}\nand prove the same bound as in~\\cite{Vovk:98}\nbut for the discounted loss.\nIn Section~\\ref{sec:convex},\nwe consider convex loss functions\nand propose an algorithm similar to\nthe Weak Aggregating Algotihm~\\cite{WAA} and\nthe exponentially weighted average forecaster\nwith time-varying learning rate~\\cite[\\S~2.3]{Cesa-BianchiLugosi},\nwith a similar loss bound.\nIn Section~\\ref{sec:regression},\nwe consider the use of prediction with expert advice\nfor the regression problem\nand adapt the\nAggregating Algorithm for Regression~\\cite{VovkCOS}\n(applied to spaces of linear functions and\nto reproducing kernel Hilbert spaces)\nto the discounted square loss.\n\\ifCONF\nAll our algorithms are inspired by the methodology\nof defensive forecasting~\\cite{Chernov2010}.\nWe do not explicitly use or refer to this technique.\nHowever,\nto illustrate these ideas we provide\nan alternative treatment of the regression task\nwith the help of defensive forecasting\nin the full version\nof the paper~\\cite{CZ2010} (see Appendix~A.2;\nAppendix~A.1 contains some proofs omitted here).\n\\fi\n\\ifARXIV\nAll our algorithms are inspired by the methodology\nof defensive forecasting~\\cite{Chernov2010}.\nWe do not explicitly use or refer to this technique\nin the main text.\nHowever,\nto illustrate these ideas we provide\nan alternative treatment of the regression task\nwith the help of defensive forecasting\nin Appendix~\\ref{append:DF}.\n\\fi\n\n\n\\section{Linear Bounds for Learner's Loss}\n\\label{sec:linear}\n\nIn this section, we assume that the set of experts is finite,\n$\\Theta=\\{1,\\ldots,K\\}$,\nand show\nhow Learner can achieve a bound\nof the form $\\dL_t\\le c\\dL_t^k + (c\\ln K)\/\\eta$\nfor all Experts $k$,\nwhere $c\\ge 1$ and $\\eta>0$ are constants.\nBounds of this kind were obtained in~\\cite{VovkAS}.\nLoosely speaking, such a bound holds for certain $c$ and $\\eta$\nif and only if\nthe game $(\\Omega,\\Gamma,\\lambda)$\nhas the following property:\n\\begin{equation}\\label{eq:realiz}\n\\exists \\gamma\\in\\Gamma \\:\\forall\\omega\\in\\Omega\\quad\n\\lambda(\\gamma,\\omega)\\le\n-\\frac{c}{\\eta}\n \\ln\\left(\\sum_{i\\in I}p_i\\e^{-\\eta \\lambda(\\gamma_i,\\omega)}\\right)\n\\end{equation}\nfor any finite index set $I$,\nfor any $\\gamma_i\\in\\Gamma$, $i\\in I$,\nand for any 
$p_i\\in[0,1]$ such that $\\sum_{i\\in I}p_i=1$.\nIt turns out that this property is sufficient\nfor the discounted case as well.\n\n\\begin{theorem}\\label{thm:AAD}\nSuppose that the game $(\\Omega,\\Gamma,\\lambda)$\nsatisfies condition~\\eqref{eq:realiz}\nfor certain $c\\ge 1$ and $\\eta>0$.\nIn the game played according to Protocol~\\ref{prot:GenDisc},\nLearner has a strategy guaranteeing\nthat, for any $T$ and for any $k\\in\\{1,\\ldots, K\\}$, it holds\n\\begin{equation}\\label{eq:AAbound}\n \\dL_T \\le c \\dL_T^k + \\frac{c\\ln K}{\\eta}\\,.\n\\end{equation}\n\\end{theorem}\n\\ifARXIV\nWe formulate the strategy for Learner in Subsection~\\ref{ssec:AAD}\nand prove the theorem in Subsection~\\ref{ssec:AAproof}.\n\\fi\n\nFor the standard undiscounted case\n(Accountant announces $\\alpha_t=1$ at each step $t$),\nthis theorem was proved by Vovk\nin~\\cite{VovkAS} with the help of the Aggregating Algorithm (AA)\nas Learner's strategy.\nIt is known (\\cite{Haussler1998,Vovk:98})\nthat this bound is asymptotically optimal\nfor large pools of Experts\n(for games satisfying some assumptions):\nif the game does not satisfy~\\eqref{eq:realiz}\nfor some $c\\ge 1$ and $\\eta>0$,\nthen, for sufficiently large $K$,\nthere is a strategy for Experts and Reality\n(recall that Accountant always says $\\alpha_t=1$)\nsuch that Learner cannot secure~\\eqref{eq:AAbound}.\nFor the special case of $c=1$,\nbound~\\eqref{eq:AAbound} is tight\nfor any fixed $K$ as well~\\cite{Vovk:1999derandomizing}.\nThese results imply optimality of Theorem~\\ref{thm:AAD}\nin the new setting with general discounting\n(when we allow arbitrary behaviour of Accountant\nwith the only requirement $\\alpha_t\\in(0,1]$).\nHowever,\nthey leave open the question of lower bounds\nunder different discounting assumptions\n(that is, when Accountant moves are fixed);\na particularly interesting case is\nthe exponential discounting\n$\\alpha_t=\\alpha\\in(0,1)$.\n\n\\ifCONF\n\\vspace*{-10ex}\n\\begin{proof}\n\\fi\n\\ifARXIV\n\\subsection{Learner's Strategy}\n\\label{ssec:AAD}\n\\fi\n\n\\ifARXIV\nTo prove Theorem~\\ref{thm:AAD},\nwe will exploit the AA with a minor modification.\n\n\\begin{algorithm\n \\caption{The Aggregating Algorithm}\\label{alg:AA}\n \\begin{algorithmic}[1]\n \\STATE Initialize weights of Experts $w_0^k:=1\/K$, $k=1,\\ldots,K$.\n \\FOR{$t=1,2,\\dots$}\n \\STATE Get Experts' predictions\n {$\\gamma_t^k \\in \\Gamma, k=1,\\ldots,K$}.\n \\STATE Calculate $g_t(\\omega)=\n -\\frac{c}{\\eta}\\ln\\left(\\sum_{k=1}^K w_{t-1}^k\n \\e^{-\\eta \\lambda(\\gamma_t^k,\\omega)}\\right)$,\n for all $\\omega \\in \\Omega$.\n \\STATE Output $\\gamma_t := \\sigma(g_t) \\in \\Gamma$.\n \\STATE Get $\\omega_t\\in\\Omega$.\n \\STATE\\label{AAline:update} Update the weights\n $\\tilde w_t^k := w_{t-1}^k \\e^{-\\eta \\lambda(\\gamma_t^k,\\omega_t)}$,\n $k=1,\\ldots,K$,\n \\STATE\\label{AAline:normal} and normalize them\n $w_t^k := \\tilde w_t^k \/ \\sum_{k=1}^K \\tilde w_t^k$, $k=1,\\ldots,K$.\n \\ENDFOR.\n \\end{algorithmic}\n\\end{algorithm}\nThe pseudocode of the AA is given as Algorithm~\\ref{alg:AA}.\nThe algorithm has three parameters,\nwhich depend on the game $(\\Omega,\\Gamma,\\lambda)$:\n$c\\ge 1$, $\\eta>0$, and a function $\\sigma\\colon\\R^\\Omega\\to\\Gamma$.\nThe function $\\sigma$ is called a \\emph{substitution function}\nand must have the following property:\n$\\lambda(\\sigma(g),\\omega)\\le g(\\omega)$ for all $\\omega\\in\\Omega$\nif for $g\\in\\R^\\Omega$ there exists any $\\gamma\\in\\Gamma$ such 
that\n$\\lambda(\\gamma,\\omega)\\le g(\\omega)$ for all $\\omega\\in\\Omega$.\nA natural example of substitution function is given by\n\\begin{equation}\\label{eq:subst}\n\\sigma(g)=\n \\arg\\min_{\\gamma\\in\\Gamma}\\bigl(\\lambda(\\gamma,\\omega) - g(\\omega)\\bigr)\n\\end{equation}\n(if the minimum is attained at several points,\none can take any of them).\nAn advantage of this $\\sigma$ is that the normalization step\nin line~\\ref{AAline:normal}\nis not necessary and one can take $w_t^k=\\tilde w_t^k$.\nIndeed, multiplying all $w_t^k$ by a constant (independent of $k$)\nwe add to all $g_t(\\omega)$ a constant (independent of $\\omega$),\nand $\\sigma(g_t)$ does not change.\n\nThe Aggregating Algorithm with Discounting (AAD)\ndiffers only by the use of the weights\nin the computation of $g_t$\nand the update of the weights.\n\nThe pseudocode of the AAD is given as Algorithm~\\ref{alg:AAD}.\n\\fi\n\\ifCONF\nAs Learner's strategy we exploit a minor modification of\nthe Aggregating Algorithm, the AA with Discounting (AAD).\nThe pseudocode is given as Algorithm~\\ref{alg:AAD}.\n\\fi\n\n\n\\begin{algorithm\n \\caption{The Aggregating Algorithm with Discounting}\\label{alg:AAD}\n \\begin{algorithmic}[1]\n \\STATE Initialize weights of Experts $w_0^k:=1$, $k=1,\\ldots,K$.\n \\FOR{$t=1,2,\\dots$}\n \\STATE Get discount $\\alpha_{t-1}\\in(0,1]$.\n \\STATE Get Experts' predictions\n {$\\gamma_t^k \\in \\Gamma, k=1,\\ldots,K$}.\n \\STATE Calculate $g_t(\\omega)=\n -\\frac{c}{\\eta}\\left(\\ln\\sum_{k=1}^K\n \\frac{1}{K}(w_{t-1}^k)^{\\alpha_{t-1}}\n \\e^{-\\eta \\lambda(\\gamma_t^k,\\omega)}\\right)$,\n for all $\\omega \\in \\Omega$.\n \\STATE Output $\\gamma_t := \\sigma(g_t) \\in \\Gamma$.\n \\STATE Get $\\omega_t\\in\\Omega$.\n \\STATE\\label{AADline:update} Update the weights\n $w_t^k := (w_{t-1}^k)^{\\alpha_{t-1}}\n \\e^{\\eta \\lambda(\\gamma_t,\\omega_t)\/c\n -\\eta \\lambda(\\gamma_t^k,\\omega_t)}$,\n $k=1,\\ldots,K$,\n \\ENDFOR.\n \\end{algorithmic}\n\\end{algorithm}\n\\ifCONF\nThe algorithm has three parameters,\nwhich depend on the game $(\\Omega,\\Gamma,\\lambda)$:\n$c\\ge 1$, $\\eta>0$, and a function $\\sigma\\colon\\R^\\Omega\\to\\Gamma$.\nThe function $\\sigma$ is called a \\emph{substitution function}\nand must have the following property:\n$\\lambda(\\sigma(g),\\omega)\\le g(\\omega)$ for all $\\omega\\in\\Omega$\nif for $g\\in\\R^\\Omega$ there exists any $\\gamma\\in\\Gamma$ such that\n$\\lambda(\\gamma,\\omega)\\le g(\\omega)$ for all $\\omega\\in\\Omega$.\nA natural example of substitution function is given by\n\\begin{equation}\\label{eq:subst}\n\\sigma(g)=\n \\arg\\min_{\\gamma\\in\\Gamma}\\bigl(\\lambda(\\gamma,\\omega) - g(\\omega)\\bigr)\n\\end{equation}\n(if the minimum is attained in several points,\none can take any of them).\nAn advantage of this $\\sigma$\nis that one can use in line~\\ref{AADline:update} of the algorithm\nthe update rule\n$w_t^k := (w_{t-1}^k)^{\\alpha_{t-1}}\n \\e^{-\\eta \\lambda(\\gamma_t^k,\\omega_t)}$,\nwhich does not contain Learner's losses.\nIndeed, multiplying all $w_t^k$ by a constant (independent of $k$)\nwe add to all $g_t(\\omega)$ a constant (independent of $\\omega$),\nand $\\sigma(g_t)$ does not change.\n\\fi\n\\ifARXIV\nFor a substitution function satisfying~\\eqref{eq:subst},\none can use in line~\\ref{AADline:update} the update rule\n$w_t^k := (w_{t-1}^k)^{\\alpha_{t-1}}\n \\e^{-\\eta \\lambda(\\gamma_t^k,\\omega_t)}$,\nwhich does not contain Learner's losses,\nin the same manner as\nthe normalization in 
Algorithm~\\ref{alg:AA}\ncan be omitted.\n\\fi\n\n\\ifARXIV\n\\subsection{Proof of the Bound}\n\\label{ssec:AAproof}\n\\fi\n\nAssume that $c$ and $\\eta$ are such that\ncondition~\\eqref{eq:realiz} holds for the game.\nLet us show that\nAlgorithm~\\ref{alg:AAD}\npreserves the following condition:\n\\begin{equation}\\label{eq:AADcond}\n\\sum_{k=1}^K \\frac{1}{K}w_t^k\\le 1\\,.\n\\end{equation}\nCondition~\\eqref{eq:AADcond} trivially holds for $t=0$.\nAssume that \\eqref{eq:AADcond}~holds for $t-1$,\nthat is, $\\sum_{k=1}^K w_{t-1}^k\/K\\le 1$.\nThus, we have\n\\ifCONF\n$\n\\sum_{k=1}^K (w_{t-1}^k)^{\\alpha_{t-1}}\/K\n\\le \\left(\\sum_{k=1}^K w_{t-1}^k\/K\\right)^{\\alpha_{t-1}}\n\\le 1\n$,\n\\fi\n\\ifARXIV\n$$\n\\sum_{k=1}^K \\frac{1}{K}(w_{t-1}^k)^{\\alpha_{t-1}}\n\\le \\left(\\sum_{k=1}^K \\frac{1}{K}w_{t-1}^k\\right)^{\\alpha_{t-1}}\n\\le 1\\,,\n$$\n\\fi\nsince the function $x\\mapsto x^\\alpha$ is concave\nfor $\\alpha\\in(0,1]$, $x\\ge 0$,\nand since $x\\le 1$ implies $x^\\alpha\\le 1$ for $\\alpha\\ge 0$\nand $x\\ge 0$.\n\nLet $\\tilde w^k$ be any reals such that\n$\\tilde w^k\\ge (w_{t-1}^k)^{\\alpha_{t-1}}\/K$\nand $\\sum_{k=1}^K\\tilde w^k = 1$.\nDue to condition~\\eqref{eq:realiz}\nthere exists $\\gamma\\in\\Gamma$\nsuch that for all $\\omega\\in\\Omega$\n\\begin{multline*}\n\\lambda(\\gamma,\\omega)\n\\le\n-\\frac{c}{\\eta}\n \\ln\\left(\\sum_{k=1}^K \\tilde w^k\\e^{-\\eta \\lambda(\\gamma_t^k,\\omega)}\\right)\n\\\\\n\\le\n-\\frac{c}{\\eta}\n \\ln\\left(\\sum_{k=1}^K \\frac{1}{K}(w_{t-1}^k)^{\\alpha_{t-1}}\n \\e^{-\\eta \\lambda(\\gamma_t^k,\\omega)}\\right)\n= g_t(\\omega)\n\\end{multline*}\n(the second inequality holds due to our choice of $\\tilde w^k$).\nThus, due to the property of $\\sigma$,\nwe have $\\lambda(\\gamma_t,\\omega)\\le g_t(\\omega)$ for all $\\omega\\in\\Omega$.\nIn particular, this holds for $\\omega=\\omega_t$,\nand we ge\n\\ifCONF~\\eqref{eq:AADcond}.\n\\fi\n\\ifARXIV\n$$\n\\lambda(\\gamma_t,\\omega_t)\n\\le\n-\\frac{c}{\\eta}\n \\ln\\left(\\sum_{k=1}^K \\frac{1}{K}(w_{t-1}^k)^{\\alpha_{t-1}}\n \\e^{-\\eta \\lambda(\\gamma_t^k,\\omega_t)}\\right)\\,,\n$$\nwhich is equivalent to~\\eqref{eq:AADcond}.\n\n\\fi\nTo get the loss bound~\\eqref{eq:AAbound}, it remains to note that\n$$\n\\ln w_t^k = \\eta\\left(\\dL_t\/c - \\dL_t^k\\right)\\,.\n$$\nIndeed, for $t=0$, this is trivial.\nIf this holds for $w_{t-1}^k$,\nthen\n\\ifCONF\n$\\ln w_t^k =\n{\\alpha_{t-1}}\\ln (w_{t-1}^k)\n+\\eta \\lambda(\\gamma_t,\\omega_t)\/c -\\eta \\lambda(\\gamma_t^k,\\omega_t)\n=\n\\alpha_{t-1}\\eta\\left(\\dL_{t-1}\/c - \\dL_{t-1}^k\\right)\n+\\eta \\lambda(\\gamma_t,\\omega_t)\/c -\\eta \\lambda(\\gamma_t^k,\\omega_t)\n=\n\\eta\\left((\\alpha_{t-1}\\dL_{t-1} + \\lambda(\\gamma_t,\\omega_t))\/c -\n (\\alpha_{t-1}\\dL_{t-1}^k + \\lambda(\\gamma_t^k,\\omega_t))\n \\right)\n=\\eta\\left(\\dL_t\/c - \\dL_t^k\\right)\n$\n\\fi\n\\ifARXIV\n\\begin{multline*}\n\\ln w_t^k =\n{\\alpha_{t-1}}\\ln (w_{t-1}^k)\n+\\eta \\lambda(\\gamma_t,\\omega_t)\/c -\\eta \\lambda(\\gamma_t^k,\\omega_t)\n\\\\\n=\n\\alpha_{t-1}\\eta\\left(\\dL_{t-1}\/c - \\dL_{t-1}^k\\right)\n+\\eta \\lambda(\\gamma_t,\\omega_t)\/c -\\eta \\lambda(\\gamma_t^k,\\omega_t)\n\\\\\n=\n\\eta\\left((\\alpha_{t-1}\\dL_{t-1} + \\lambda(\\gamma_t,\\omega_t))\/c -\n (\\alpha_{t-1}\\dL_{t-1}^k + \\lambda(\\gamma_t^k,\\omega_t))\n \\right)\n=\\eta\\left(\\dL_t\/c - \\dL_t^k\\right)\n\\end{multline*}\n\\fi\nand we get the equality for $w_t^k$.\nThus, condition~\\eqref{eq:AADcond} means 
that\n\\begin{equation}\\label{eq:AADFbound}\n\\sum_{k=1}^K \\frac{1}{K} \\e^{\\eta\\left(\\dL_t\/c - \\dL_t^k\\right)}\\le 1\\,,\n\\end{equation}\nand~\\eqref{eq:AAbound} follows\nby lower-bounding the sum by any of its terms.\n\\ifCONF\n\\EP\n\\fi\n\n\n\\begin{remark}\nEverything in this section remains valid,\nif we replace the equal initial Experts' weights $1\/K$\nby arbitrary non-negative weights $w^k$, $\\sum_{k=1}^K w^k=1$.\nThis leads to a variant of~\\eqref{eq:AAbound},\nwhere the last additive term is replaced by\n$\\frac{c}{\\eta}\\ln\\frac{1}{w^k}$.\nAdditionally, we can consider any measurable space $\\Theta$\nof Experts and a non-negative weight function $w(\\theta)$,\nand replace sums over $K$ by integrals over $\\Theta$.\nThen the algorithm and its analysis remain valid\n(if we impose natural integrability conditions\non Experts' predictions $\\gamma_t^\\theta$;\nsee~\\cite{VovkCOS} for more detailed discussion)---this will\nbe used in Section~\\ref{sec:regression}.\n\\end{remark}\n\n\n\n\\section{Learner's Loss in Bounded Convex Games}\n\\label{sec:convex}\n\nThe linear bounds of the form~\\eqref{eq:AAbound}\nare perfect when $c=1$.\nHowever, for many games (for example, the absolute loss game),\ncondition~\\eqref{eq:realiz}\ndoes not hold for $c=1$ (with any $\\eta>0$),\nand one cannot get a bound of the form $\\dL_t\\le \\dL_t^k + O(1)$.\nSince Experts' losses $\\dL_T^\\theta$ may grow\nas $T$ in the worst case,\nany bound with $c>1$ only guarantees that\nLearner's loss may exceed an Expert's loss\nby at most $O(T)$.\nHowever, for a large class of interesting games\n(including the absolute loss game),\none can obtain guarantees of the form\n$\\dL_T\\le \\dL_T^k + O(\\sqrt{T})$\nin the undiscounted case.\nIn this section, we prove an analogous result\nfor the discounted setting.\n\nA game $(\\Omega,\\Gamma,\\lambda)$ is non-empty\nif $\\Omega$ and $\\Gamma$ are non-empty.\nThe game is called \\emph{bounded}\nif $L=\\max_{\\omega,\\gamma} \\lambda(\\gamma,\\omega)<\\infty$.\nOne may assume that $L=1$\n(if not, consider the scaled loss function $\\lambda\/L$).\nThe game is called \\emph{convex} if\nfor any predictions $\\gamma_1,\\ldots,\\gamma_M\\in\\Gamma$ and\nfor any weights $p_1,\\ldots,p_M\\in[0,1]$, ${\\sum_{m=1}^M p_m=1}$,\n\\ifCONF\nthere exists $\\gamma\\in\\Gamma$ such that\n$\\lambda(\\gamma,\\omega)\\le\\sum_{m=1}^M p_m\\lambda(\\gamma_m,\\omega)$\nfor all $\\omega\\in\\Omega$.\n\\fi\n\\ifARXIV\n\\begin{equation}\\label{eq:convexity}\n\\exists\\gamma\\in\\Gamma\\:\\forall\\omega\\in\\Omega\\quad\n\\lambda(\\gamma,\\omega)\\le\\sum_{m=1}^M p_m\\lambda(\\gamma_m,\\omega)\\,.\n\\end{equation}\n\\fi\nNote that if $\\Gamma$ is a convex set (e.\\,g.,\\,$\\Gamma=[0,1]$)\nand $\\lambda(\\gamma,\\omega)$ is convex in $\\gamma$\n(e.\\,g.,\\,$\\lambda^\\mathrm{abs}$),\nthen the game is convex.\n\n\\begin{theorem}\\label{thm:convexbound}\nSuppose that $(\\Omega,\\Gamma,\\lambda)$\nis a non-empty convex game,\nand $\\lambda(\\gamma,\\omega)\\in[0,1]$\nfor all $\\gamma\\in\\Gamma$ and $\\omega\\in\\Omega$.\nIn the game played according to Protocol~\\ref{prot:GenDisc},\nLearner has a strategy guaranteeing\nthat, for any $T$ and for any $k\\in\\{1,\\ldots, K\\}$, it holds\n\\begin{equation}\\label{eq:sqrtbound}\n\\dL_T \\le \\dL_T^k + \\sqrt{\\ln K}\\sqrt{\\frac{B_T}{\\beta_T}}\\,,\n\\end{equation}\nwhere $\\beta_t=1\/(\\alpha_1\\cdots\\alpha_{t-1})$ and\n$B_T=\\sum_{t=1}^T \\beta_t$.\n\\end{theorem}\n\n\nNote that $B_T\/\\beta_T$ is the maximal predictors' loss,\nwhich 
incurs when the predictor suffers the maximal\npossible loss $l_t=1$ at each step.\n\\ifARXIV\n\n\\fi\nIn the undiscounted case, $\\alpha_t=1$,\nthus $\\beta_t=1$, $B_T=T$,\nand~\\eqref{eq:sqrtbound} becomes\n\\ifCONF$\\fi\n\\ifARXIV$$\\fi\n\\dL_T \\le \\dL_T^k + \\sqrt{T\\ln K\n\\ifCONF$. \\fi\n\\ifARXIV\\,.$$\\fi\nA similar bound\n(but with worse constant $\\sqrt{2}$ instead of $1$\nbefore $\\sqrt{T\\ln K}$) is\nobtained in~\\cite[Theorem~2.3]{Cesa-BianchiLugosi}:\n\\ifCONF$\\dL_T \\le \\dL_T^k + \\sqrt{2T\\ln K} + \\sqrt{(\\ln K)\/8}$. \\fi\n\\ifARXIV\n$$\n\\dL_T \\le \\dL_T^k + \\sqrt{2T\\ln K} + \\sqrt{\\frac{\\ln K}{8}}\\,.\n$$\n\n\\fi\nFor the exponential discounting $\\alpha_t=\\alpha$,\nwe have $\\beta_t=\\alpha^{-t+1}$ and $B_T=(1-\\alpha^{-T})\/(1-1\/\\alpha)$,\nand \\eqref{eq:sqrtbound} transforms into\n\\ifCONF\n$\n\\dL_T \\le \\dL_T^k + \\sqrt{\\ln K}\\sqrt{(1-\\alpha^T)\/(1-\\alpha)}\n\\le\n\\dL_T^k + \\sqrt{(\\ln K)\/(1-\\alpha)}\n$.\n\\fi\n\\ifARXIV\n$$\n\\dL_T \\le \\dL_T^k + \\sqrt{\\ln K}\\sqrt{\\frac{1-\\alpha^T}{1-\\alpha}}\n\\le\n\\dL_T^k + \\sqrt{\\frac{\\ln K}{1-\\alpha}}\\,.\n$$\n\\fi\nA similar bound (with worse constants) is obtained\nin~\\cite{Freund2008} for NormalHedge:\n\\ifCONF\n$\n\\dL_T\n\\le\n\\dL_T^k + \\sqrt{(8\\ln 2.32K)\/(1-\\alpha)}\n$.\n\\fi\n\\ifARXIV\n$$\n\\dL_T\n\\le\n\\dL_T^k + \\sqrt{\\frac{8\\ln 2.32K}{1-\\alpha}}\\,.\n$$\n\\fi\nThe NormalHedge algorithm has an important advantage:\nit can guarantee the last bound without knowledge\nof the number of experts~$K$\n(see~\\cite{CFH2009} for a precise definition).\nWe can achieve the same with the help of a more complicated\nalgorithm but at the price of a worse bound \n\\ifCONF (see also the remark after the proof).\\fi\n\\ifARXIV (Theorem~\\ref{thm:convexsuperbound}).\\fi\n\n\\ifCONF\n\\begin{proof}\n\\fi\n\\ifARXIV\n\\subsection{Learner's Strategy for Theorem~\\ref{thm:convexbound}}\n\\fi\nThe pseudocode of Learner's strategy\nis given as Algorithm~\\ref{alg:FDFD}.\nIt contains a constant $a>0$, which we will choose later in the proof.\n\n\\ifARXIV\nThe algorithm is not fully specified,\nsince lines~\\ref{FDFDline:find}--\\ref{FDFDline:gamma}\nof Algorithm~\\ref{alg:FDFD} \nallow arbitrary choice of $\\gamma$ satisfying the inequality.\nThe algorithm can be completed with the help of \na substitution function $\\sigma$\nas in Algorithm~\\ref{alg:AAD},\nso that lines~\\ref{FDFDline:find}--\\ref{FDFDline:output}\nare replaced by\n$$g_t(\\omega)=\n -\\frac{1}{\\eta_t}\\ln\\left(\n \\sum_{k=1}^K \\frac{1}{K}\n \\left(w_{t-1}^k\\right)^{\\alpha_{t-1}\\eta_t\/\\eta_{t-1}}\n \\e^{\n -\\eta_t\\lambda(\\gamma_t^k,\\omega)\n -\\eta_t^2\/8\n }\n \\right)\n$$\nand $\\gamma_t=\\sigma(g_t)$.\nHowever, the current form of Algorithm~\\ref{alg:FDFD}\nemphasizes the similarity to the Algorithm~\\ref{alg:fullFDFD},\nwhich is described later (Subsection~\\ref{ssec:epsilonbest})\nbut actually inspired our analysis.\n\n\n\\fi\n\n\\begin{algorithm}[ht]\n \\caption{Learner's Strategy for Convex Games}\n\\label{alg:FDFD}\n \\begin{algorithmic}[1]\n \\STATE Initialize weights of Experts $w_0^k:=1$, $k=1,\\ldots,K$.\\\\\n Set $\\beta_1=1$, $B_0=0$.\n \\FOR{$t=1,2,\\dots$}\n \\STATE Get discount $\\alpha_{t-1}\\in(0,1]$; \n update $\\beta_t=\\beta_{t-1}\/\\alpha_{t-1}$, $B_t=B_{t-1}+\\beta_t$.\n \\STATE Compute $\\eta_t=a\\sqrt{\\beta_t\/B_t}$.\n \\STATE Get Experts' predictions\n {$\\gamma_t^k \\in \\Gamma$, $k=1,\\ldots,K$}.\n \\STATE \\label{FDFDline:find}\n Find $\\gamma\\in\\Gamma$ s.t. 
for all $\\omega\\in\\Omega$\n \\STATE \\label{FDFDline:gamma} \\qquad\n $\\lambda(\\gamma,\\omega)\n \\le\n -\\frac{1}{\\eta_t}\\ln\\left(\n \\sum_{k=1}^K \\frac{1}{K}\n \\left(w_{t-1}^k\\right)^{\\alpha_{t-1}\\eta_t\/\\eta_{t-1}}\n \\e^{\n -\\eta_t\\lambda(\\gamma_t^k,\\omega)\n -\\eta_t^2\/8\n }\n \\right)\n $\n \\STATE \\label{FDFDline:output} Output $\\gamma_t := \\gamma$.\n \\STATE Get $\\omega_t\\in\\Omega$.\n \\STATE \\label{FDFDline:update}\n Update the weights\n $w_t^k := \\left(w_{t-1}^k\\right)^{\\alpha_{t-1}\\eta_t\/\\eta_{t-1}}\n \\e^{\n \\eta_t\\bigl(\\lambda(\\gamma_t,\\omega_t)\n -\\lambda(\\gamma_t^k,\\omega_t)\\bigr)\n -\\eta_t^2\/8\n }\n $,\n \\STATE \\qquad $k=1,\\ldots,K$,\n \\ENDFOR.\n \\end{algorithmic}\n\\ifCONF\n \\textbf{Remark:}\n\n If $\\lambda(\\gamma,\\omega)$ is convex in $\\gamma$,\n lines~\\ref{FDFDline:find}--\\ref{FDFDline:gamma}\n can be replaced by\n $\\gamma=\\sum_{k=1}^K\\tilde w^k \\gamma_t^k$, see~\\eqref{eq:gammaexists}.\n\\fi\n\\end{algorithm}\n\n\n\\ifARXIV\n\nLet us explain the relation of Algorithm~\\ref{alg:FDFD}\nto the Weak Aggregating Algorithm~\\cite{WAA}\nand the exponentially weighted average forecaster\nwith time-varying learning rate~\\cite[\\S~2.3]{Cesa-BianchiLugosi}.\nTo this end, consider Algorithm~\\ref{alg:WAAD}.\n\n\\begin{algorithm}[ht]\n \\caption{Weak Aggregating Algorithm with Discounting}\n\\label{alg:WAAD}\n \\begin{algorithmic}[1]\n \\STATE Initialize Experts' cumulative losses $\\dL_0^k:=0$, $k=1,\\ldots,K$.\\\\\n Set $\\beta_1=1$, $B_0=0$.\n \\FOR{$t=1,2,\\dots$}\n \\STATE Get discount $\\alpha_{t-1}\\in(0,1]$; \n update $\\beta_t=\\beta_{t-1}\/\\alpha_{t-1}$, $B_t=B_{t-1}+\\beta_t$.\n \\STATE Compute $\\eta_t=a\\sqrt{\\beta_t\/B_t}$.\n \\STATE Compute the weights \n $q_t^k=\\e^{-\\alpha_{t-1}\\eta_t \\dL_{t-1}^k}$, $k=1,\\ldots,K$.\n \\STATE Compute the normalized weights\n $\\tilde w_t^k = q_t^k\\left\/\\sum_{j=1}^K q_t^j\\right.$.\n \\STATE Get Experts' predictions\n {$\\gamma_t^k \\in \\Gamma$, $k=1,\\ldots,K$}.\n \\STATE \\label{WAADline:find}\n Find $\\gamma\\in\\Gamma$ s.t. 
for all $\\omega\\in\\Omega$\n \\quad \n $\\lambda(\\gamma,\\omega)\\le \n \\sum_{k=1}^K \\tilde w_t^k\\lambda(\\gamma_t^k,\\omega)\n $.\n \\STATE \\label{WAADline:output} Output $\\gamma_t := \\gamma$.\n \\STATE Get $\\omega_t\\in\\Omega$.\n \\STATE Update $\\dL_t^k:=\\alpha_{t-1}\\dL_{t-1}^k+\\lambda(\\gamma_t^k,\\omega_t)$,\n $k=1,\\ldots,K$.\n \\ENDFOR.\n \\end{algorithmic}\n\\end{algorithm}\nThe proof of Theorem~\\ref{thm:convexbound} \nimplies that Algorithm~\\ref{alg:WAAD} is \na special case of Algorithm~\\ref{alg:FDFD}.\nIndeed, \\eqref{eq:FDFDweights} implies that \n$w_{t-1}^k = \\e^{-\\eta_{t-1}\\dL_{t-1}^k + C}$,\nwhere $C$ does not depend on~$k$\nand $w_{t-1}^k$ are the weights from Algorithm~\\ref{alg:FDFD}.\nTherefore $q_t^k = C'(w_{t-1}^k)^{\\alpha_{t-1}\\eta_t\/\\eta_{t-1}}$,\nwhere $C'$ does not depend on~$k$,\nand one can take $\\tilde w_t^k$ for $\\tilde w^k$ in the proof\nof Theorem~\\ref{thm:convexbound}.\nThus, if Algorithm~\\ref{alg:WAAD} output some $\\gamma_t$\nthen Algorithm~\\ref{alg:FDFD} can output this $\\gamma_t$ as well.\n\nRecall that if $\\alpha_t=1$ for all $t$ (the undiscounted case),\n$\\beta_t=1$ and $B_t=t$, hence $\\eta_t=a\/\\sqrt{t}$.\nIn this case, Algorithm~\\ref{alg:WAAD} is just \nthe Weak Aggregating Algorithm as described in~\\cite{WAA}.\n\nConsider now the case when \n$\\Gamma$ is a convex set and \n$\\lambda(\\gamma,\\omega)$ is convex in $\\gamma$.\nThen one can take \n$\\gamma_t = \\sum_{k=1}^K \\tilde w_t^k \\gamma_t^k$\nin Algorithm~\\ref{alg:WAAD}.\nFor $\\alpha_t=1$, we get exactly\nthe exponentially weighted average forecaster\nwith time-varying learning rate~\\cite[\\S~2.3]{Cesa-BianchiLugosi}.\n\n\n\\subsection{Proof of Theorem~\\ref{thm:convexbound}}\n\\fi\n\nSimilarly to the case of the AAD,\nlet us show that Algorithm~\\ref{alg:FDFD}\nalways can find $\\gamma$ in lines~\\ref{FDFDline:find}--\\ref{FDFDline:gamma}\nand preserves the following condition:\n\\begin{equation}\\label{eq:FDFDcond}\n\\sum_{k=1}^K \\frac{1}{K}w_t^k\\le 1\\,.\n\\end{equation}\n\nFirst check that $\\alpha_{t-1}\\eta_t\/\\eta_{t-1}\\le 1$.\nIndeed, $\\alpha_{t-1}=\\beta_{t-1}\/\\beta_t$,\nand thus\n\\begin{equation}\\label{eq:alphaless1}\n\\alpha_{t-1}\\frac{\\eta_t}{\\eta_{t-1}}\n=\n \\frac{\\beta_{t-1}}{\\beta_t}\n \\frac{a\\sqrt{\\beta_t\/B_t}}{a\\sqrt{\\beta_{t-1}\/B_{t-1}}}\n=\\sqrt{\\frac{\\beta_{t-1}}{\\beta_t}\\frac{B_{t-1}}{B_t}}\n=\\sqrt{\\alpha_{t-1}}\\sqrt{\\frac{B_{t-1}}{B_{t-1}+\\beta_{t}}}\n\\le 1\\,.\n\\end{equation}\n\nCondition~\\eqref{eq:FDFDcond} trivially holds for $t=0$.\nAssume that \\eqref{eq:FDFDcond}~holds for $t-1$,\nthat is, $\\sum_{k=1}^K w_{t-1}^k\/K\\le 1$.\nThus, we have\n\\begin{equation}\\label{eq:FDFDweightsSeminormal}\n\\sum_{k=1}^K \\frac{1}{K}(w_{t-1}^k)^{\\alpha_{t-1}\\eta_t\/\\eta_{t-1}}\n\\le \\left(\\sum_{k=1}^K \\frac{1}{K}w_{t-1}^k\\right)^{\\alpha_{t-1}\\eta_t\/\\eta_{t-1}}\n\\le 1\\,,\n\\end{equation}\nsince the function $x\\mapsto x^\\alpha$ is concave\nfor $\\alpha\\in(0,1]$, $x\\ge 0$,\nand since $x\\le 1$ implies $x^\\alpha\\le 1$ for $\\alpha\\ge 0$\nand $x\\ge 0$.\n\nLet $\\tilde w^k$ be any reals such that\n$\\tilde w^k\\ge (w_{t-1}^k)^{\\alpha_{t-1}\\eta_t\/\\eta_{t-1}}\/K$\nand ${\\sum_{k=1}^K\\tilde w^k = 1}$.\n(For example, \n$\\tilde w^k = (w_{t-1}^k)^{\\alpha_{t-1}\\eta_t\/\\eta_{t-1}}\n \\left\/\n \\sum_{j=1}^K (w_{t-1}^j)^{\\alpha_{t-1}\\eta_t\/\\eta_{t-1}}\n \\right.\n$.)\nBy the Hoeffding inequality (see, e.\\,g., \\cite[Lemma~2.2]{Cesa-BianchiLugosi}),\nwe 
have\n\\begin{equation}\\label{eq:Hoeff}\n\\ln \\sum_{k=1}^K\\tilde w^k \\e^{-\\eta_t\\lambda(\\gamma_t^k,\\omega)}\n\\le\n-\\eta_t \\sum_{k=1}^K\\tilde w^k \\lambda(\\gamma_t^k,\\omega) + \\frac{\\eta_t^2}{8}\\,,\n\\end{equation}\nsince $\\lambda(\\gamma,\\omega)\\in[0,1]$\nfor any $\\gamma\\in\\Gamma$ and $\\omega\\in\\Omega$.\nSince the game is convex,\nthere exists $\\gamma\\in\\Gamma$ such that\n$\\lambda(\\gamma,\\omega)\\le \\sum_{k=1}^K\\tilde w^k \\lambda(\\gamma_t^k,\\omega)$\nfor all $\\omega\\in\\Omega$.\nFor this $\\gamma$ and for all $\\omega\\in\\Omega$ we have\n\\begin{multline}\\label{eq:gammaexists}\n\\lambda(\\gamma,\\omega)\n\\le\n\\sum_{k=1}^K\\tilde w^k \\lambda(\\gamma_t^k,\\omega)\n\\le\n-\\frac{1}{\\eta_t}\n \\ln\\left(\\sum_{k=1}^K \\tilde w^k\n \\e^{-\\eta \\lambda(\\gamma_t^k,\\omega) - \\eta_t^2\/8}\n \\right)\n\\\\\n\\le\n-\\frac{1}{\\eta_t}\n \\ln\\left(\\sum \\frac{1}{K}\n \\left(w_{t-1}^k\\right)^{\\alpha_{t-1}\\eta_t\/\\eta_{t-1}}\n \\e^{-\\eta_t\\lambda(\\gamma_t^k,\\omega)-\\eta_t^2\/8}\n \\right)\n\\end{multline}\n(the second inequality follows from~\\eqref{eq:Hoeff}, and\nthe third inequality holds due to our choice of $\\tilde w^k$).\nThus, one can always find\n$\\gamma$ in lines~\\ref{FDFDline:find}--\\ref{FDFDline:gamma}\nof Algorithm~\\ref{alg:FDFD}.\nIt remains to note that the inequality in line~\\ref{FDFDline:gamma}\nwith $\\gamma_t$ substituted for $\\gamma$\nand $\\omega_t$ substituted for $\\omega$\nis equivalent to\n$$\n1\\ge \\sum \\frac{1}{K}\n \\left(w_{t-1}^k\\right)^{\\alpha_{t-1}\\eta_t\/\\eta_{t-1}}\n \\e^{\\eta_t\\lambda(\\gamma_t,\\omega_t)\n -\\eta_t\\lambda(\\gamma_t^k,\\omega_t)-\\eta_t^2\/8}\n=\n \\sum \\frac{1}{K} w_{t}^k\\,.\n$$\n\nNow let us check that\n\\begin{equation}\\label{eq:FDFDweights}\n\\ln w_t^k = \\eta_t\\left(\\dL_t - \\dL_t^k\\right)\n -\\frac{\\eta_t}{8\\beta_t}\\sum_{\\tau=1}^t \\beta_\\tau\\eta_\\tau\n\\,.\n\\end{equation}\nIndeed, for $t=0$, this is trivial.\nAssume that it holds for $w_{t-1}^k$. 
Then,\ntaking the logarithm of the update expression\nin line~\\ref{FDFDline:update} \\ifARXIV of Algorithm~\\ref{alg:FDFD} \\fi\nand substituting $\\ln w_{t-1}^k$,\nwe get\n\\begin{multline*}\\allowdisplaybreaks\n\\ln w_t^k\n=\n\\frac{\\alpha_{t-1}\\eta_t}{\\eta_{t-1}}\\ln w_{t-1}^k\n+ \\eta_t\\bigl(\\lambda(\\gamma_t,\\omega_t)\n -\\lambda(\\gamma_t^k,\\omega_t)\\bigr)\n -\\frac{\\eta_t^2}{8}\n\\ifCONF=\\fi\n\\\\\n=\n\\frac{\\alpha_{t-1}\\eta_t}{\\eta_{t-1}}\n\\left(\n \\eta_{t-1}\\left(\\dL_{t-1} - \\dL_{t-1}^k\\right)\n -\\frac{\\eta_{t-1}}{8\\beta_{t-1}}\\sum_{\\tau=1}^{t-1} \\beta_\\tau\\eta_\\tau\n\\right)\n+ \\eta_t\\bigl(\\lambda(\\gamma_t,\\omega_t)\n -\\lambda(\\gamma_t^k,\\omega_t)\\bigr)\n -\\frac{\\eta_t^2}{8}\n\\\\\n=\n\\eta_t\\left(\\alpha_{t-1}\\dL_{t-1}+\\lambda(\\gamma_t,\\omega_t)\n -\\alpha_{t-1}\\dL_{t-1}^k-\\lambda(\\gamma_t^k,\\omega_t)\n \\right)\n-\\frac{\\eta_t}{8\\beta_t}\\sum_{\\tau=1}^{t-1} \\beta_\\tau\\eta_\\tau\n-\\frac{\\eta_t^2}{8}\n\\\\\n=\n\\eta_t\\left(\\dL_t - \\dL_t^k\\right)\n-\\frac{\\eta_t}{8\\beta_t}\\sum_{\\tau=1}^t \\beta_\\tau\\eta_\\tau\\,.\n\\end{multline*}\n\nCondition~\\eqref{eq:FDFDcond} implies that\n$w_T^k\\le K$ for all $k$ and $T$,\nhence we get a loss bound\n\\begin{equation}\\label{eq:preFDFDloss}\n\\dL_T \\le \\dL_T^k + \\frac{\\ln K}{\\eta_T} +\n \\frac{1}{8\\beta_T}\\sum_{t=1}^T \\beta_t\\eta_t\\,.\n\\end{equation}\n\nRecall that $\\eta_t=a\\sqrt{\\beta_t\/B_t}$.\nTo estimate $\\sum_{t=1}^T \\beta_t\\eta_t$,\nwe use the following inequality\n\\ifCONF\n(see~\\cite[Appendix~A.1]{CZ2010} for the proof).\n\\fi\n\\ifARXIV\n(see Appendix~\\ref{append:technical} for the proof).\n\\fi\n\n\\begin{lemma}\\label{lem:gen-sqrt-sum}\nLet $\\beta_t$ be any reals such that\n$1\\le\\beta_1\\le\\beta_2\\le\\ldots$.\nLet $B_T=\\sum_{t=1}^T\\beta_t$.\nThen, for any $T$, it holds\n$$\n \\frac{1}{\\beta_T}\\sum_{t=1}^T \\beta_t\\sqrt{\\frac{\\beta_t}{B_t}}\n \\le\n 2\\sqrt{\\frac{B_T}{\\beta_T}}\\,.\n$$\n\\end{lemma}\nThen~\\eqref{eq:preFDFDloss} implies\n$$\n\\dL_T \\le \\dL_T^k + \\frac{\\ln K}{a}\\sqrt{\\frac{B_T}{\\beta_T}} +\n \\frac{2a}{8}\\sqrt{\\frac{B_T}{\\beta_T}}\n=\n\\dL_T^k + \\left(\\frac{\\ln K}{a} + \\frac{a}{4}\n \\right)\\sqrt{\\frac{B_T}{\\beta_T}}\n\\,.\n$$\nChoosing $a=2\\sqrt{\\ln K}$,\nwe finally get\n\\ifCONF\nthe bound.\n\\fi\n\\ifARXIV\n$$\n\\dL_T \\le \\dL_T^k + \\sqrt{\\ln K}\\sqrt{\\frac{B_T}{\\beta_T}}\\,.\n$$\n\\fi\n\\ifCONF\n\\EP\n\n\\begin{remark}\nAlgorithm~\\ref{alg:FDFD} is a modification\nof the ``Fake Defensive Forecasting'' algorithm\nfrom~\\cite[Theorem~9]{CV2010}.\nThe algorithm is based on the ideas of defensive \nforecasting~\\cite{Chernov2010},\nin particular, Hoeffding supermartingales~\\cite{Vovk-Hoeffding},\ncombined with the ideas from an early version\nof the Weak Aggregating Algorithm~\\cite{WAAreport}.\nHowever, our analysis is completely different from~\\cite{CV2010},\nfollowing the lines of~\\cite[Theorem~2.2]{Cesa-BianchiLugosi}\nand~\\cite{WAAreport}.\nAlgorithm~\\ref{alg:FDFD} is quite similar to\nthe exponentially weighted average forecaster\nwith time-varying learning rate~\\cite[\\S~2.3]{Cesa-BianchiLugosi},\nbut it keeps the weights $w_k^t\/K$ semi-normalized\nbecause of a specific update rule in line~\\ref{FDFDline:update}\ninstead of normalizing them.\nA more involved version of Algorithm~\\ref{alg:FDFD}\ncan achieve a bound for $\\epsilon$-quantile regret~\\cite{CFH2009},\nbut the analysis becomes more complicated,\nrequires application of the supermartingale technique,\nand gives a worse 
bound.\n\\end{remark}\n\\fi\n\n\\ifARXIV\n\n\n\\subsection{A Bound with respect to $\\epsilon$-Best Expert}\n\\label{ssec:epsilonbest}\n\nAlgorithm~\\ref{alg:FDFD} originates in\nthe ``Fake Defensive Forecasting'' (FDF) algorithm\nfrom~\\cite[Theorem~9]{CV2010}.\nThat algorithm is based on the ideas of defensive \nforecasting~\\cite{Chernov2010},\nin particular, Hoeffding supermartingales~\\cite{Vovk-Hoeffding},\ncombined with the ideas from an early version\nof the Weak Aggregating Algorithm~\\cite{WAAreport}.\nHowever, our analysis in Theorem~\\ref{thm:convexbound} \nis completely different from~\\cite{CV2010},\nfollowing the lines of~\\cite[Theorem~2.2]{Cesa-BianchiLugosi}\nand~\\cite{WAAreport}.\n\nIn this subsection,\nwe consider a direct extension of\nthe FDF algorithm from~\\cite[Theorem~9]{CV2010}\nto the discounted case.\nAlgorithm~\\ref{alg:fullFDFD}\nbecomes the FDF algorithm when $\\alpha_t=1$.\n\n\\begin{algorithm}[ht]\n \\caption{Fake Defensive Forecasting Algorithm with Discounting}\n\\label{alg:fullFDFD}\n \\begin{algorithmic}[1]\n \\STATE Initialize cumulative losses $\\dL_0=0$, $\\dL_0^k:=0$, $k=1,\\ldots,K$.\\\\\n Set $\\beta_1=1$, $B_0=0$.\n \\FOR{$t=1,2,\\dots$}\n \\STATE Get discount $\\alpha_{t-1}\\in(0,1]$; \n update $\\beta_t=\\beta_{t-1}\/\\alpha_{t-1}$, $B_t=B_{t-1}+\\beta_t$.\n \\STATE Compute $\\eta_t=\\sqrt{\\beta_t\/B_t}$.\n \\STATE Get Experts' predictions\n {$\\gamma_t^k \\in \\Gamma$, $k=1,\\ldots,K$}.\n \\STATE \\label{fullFDFDline:find}\n Find $\\gamma\\in\\Gamma$ s.t. for all $\\omega\\in\\Omega$\n \\quad \n $f_t(\\gamma,\\omega) \\le C_t$,\\\\ \n where $f_t$ and $C_t$\n are defined by~\\eqref{eq:fakesuper} and~\\eqref{eq:fakeconstant},\n respectively.\n \\STATE \\label{fullFDFDline:output} Output $\\gamma_t := \\gamma$.\n \\STATE Get $\\omega_t\\in\\Omega$.\n \\STATE Update $\\dL_t:=\\alpha_{t-1}\\dL_{t-1}+\\lambda(\\gamma_t,\\omega_t)$.\n \\STATE Update $\\dL_t^k:=\\alpha_{t-1}\\dL_{t-1}^k+\\lambda(\\gamma_t^k,\\omega_t)$,\n $k=1,\\ldots,K$.\n \\ENDFOR.\n\n \\end{algorithmic}\n\\end{algorithm}\n\nAlgorithm~\\ref{alg:fullFDFD} in line~\\ref{fullFDFDline:find}\nuses the function\n\\begin{multline}\\label{eq:fakesuper}\nf_t(\\gamma,\\omega)=\n\\sum_{k=1}^K \\frac{1}{K} \\sum_{j=1}^\\infty\n\\frac{c}{j^2}\n \\exp\\left( j\\alpha_{t-1}\\eta_t(\\dL_{t-1}-\\dL_{t-1}^k)\n -\\frac{j^2\\eta_t}{2\\beta_t}\\sum_{\\tau=1}^{t-1}\\beta_\\tau\\eta_\\tau\n \\right)\n\\\\\n\\times\n\\exp\\left( j\\eta_t(\\lambda(\\gamma,\\omega)-\\lambda(\\gamma_{t}^k,\\omega))\n -\\frac{j^2\\eta_t^2}{2}\n \\right) \n\\end{multline}\nand the constant\n\\begin{equation}\\label{eq:fakeconstant}\nC_t = \n\\sum_{k=1}^K \\frac{1}{K} \\sum_{j=1}^\\infty\n\\frac{c}{j^2}\n\\exp\\left( j\\alpha_{t-1}\\eta_t(\\dL_{t-1}-\\dL_{t-1}^k)\n -\\frac{j^2\\eta_{t}}{2\\beta_{t}}\\sum_{\\tau=1}^{t-1}\\beta_\\tau\\eta_\\tau\n \\right)\\,,\n\\end{equation}\nwhere $1\/c=\\sum_{j=1}^\\infty \\frac{1}{j^2}$.\n\n\nAlgorithm~\\ref{alg:fullFDFD} is more complicated\nthan Algorithm~\\ref{alg:FDFD},\nand the loss bound we get is weaker and holds for a narrower class of games.\nHowever, this bound can be stated as \na bound for \\emph{$\\epsilon$-quantile regret}\nintroduced in~\\cite{CFH2009}.\nNamely, let $\\dL_t^\\epsilon$\nbe any value such that for at least $\\epsilon K$ Experts\ntheir loss $\\dL_t^k$ after step $t$\nis not greater than $\\dL_t^\\epsilon$.\nThe $\\epsilon$-quantile regret\nis the difference between $\\dL_t$ and $\\dL_t^\\epsilon$.\nFor $\\epsilon=1\/K$, we can choose 
\n$\\dL_t^\\epsilon=\\min_k\\dL_t^k\\le \\dL_t^k$ \nfor all $k=1,\\ldots,K$,\nand thus a bound in terms of the $\\epsilon$-quantile regret\nimplies a bound in terms of $\\dL_t^k$.\nThe value $1\/\\epsilon$ plays the role of the ``effective'' number\nof experts. \nAlgorithm~\\ref{alg:fullFDFD} guarantees a bound\nin terms of $\\dL_t^\\epsilon$ for any $\\epsilon>0$,\nwithout the prior knowledge of $\\epsilon$,\nand in this sense the algorithm works for \nthe unknown number of Experts\n(see~\\cite{CV2010} for a more detailed discussion).\n\nFor Algorithm~\\ref{alg:fullFDFD} we need to restrict the class\nof games we consider.\nThe game is called \\emph{compact} if\nthe set\n$\\Lambda=\n\\{\\lambda(\\gamma,\\cdot)\\in\\mathbb{R}^\\Omega\\givn \\gamma\\in\\Gamma\\}$\nis compact in the standard topology of $\\R^\\Omega$.\n\n\n\\begin{theorem}\\label{thm:convexsuperbound}\nSuppose that $(\\Omega,\\Gamma,\\lambda)$\nis a non-empty convex compact game,\n$\\Omega$ is finite,\nand $\\lambda(\\gamma,\\omega)\\in[0,1]$\nfor all $\\gamma\\in\\Gamma$ and $\\omega\\in\\Omega$.\nIn the game played according to Protocol~\\ref{prot:GenDisc},\nLearner has a strategy guaranteeing\nthat, for any $T$ and for any $\\epsilon>0$, it holds\n\\begin{equation}\\label{eq:sqrtsuperbound}\n\\dL_T \\le \\dL_T^\\epsilon + \n 2\\sqrt{\\frac{B_T}{\\beta_T}\\ln\\frac{1}{\\epsilon}}+\n 7\\sqrt{\\frac{B_T}{\\beta_T}}\\,,\n\\end{equation}\nwhere $\\beta_t=1\/(\\alpha_1\\cdots\\alpha_{t-1})$ and\n$B_T=\\sum_{t=1}^T \\beta_t$.\n\\end{theorem}\n\\begin{proof}\nThe most difficult part of the proof is to show\nthat one can find $\\gamma$ in line~\\ref{fullFDFDline:find}\nof Algorithm~\\ref{alg:fullFDFD}.\nWe do not do this here, but refer to~\\cite{CV2010};\nthe proof is literally the same as in~\\cite[Theorem~9]{CV2010}\nand is based on the supermartingale property of~$f_t$.\n(The rest of the proof below also follows~\\cite[Theorem~9]{CV2010};\nthe only difference is in the definition of $f_t$\nand $C_t$.)\n\nLet us check that $C_t\\le 1$ for all $t$.\nClearly, $C_1=1$.\nAssume that we have $C_t\\le 1$.\nThis implies $f_t(\\gamma_t,\\omega_t)\\le 1$ due to the choice of $\\gamma_t$,\nand thus $(f_t(\\gamma_t,\\omega_t))^{\\alpha_t\\eta_{t+1}\/\\eta_{t}}\\le 1$.\nSimilarly to~\\eqref{eq:alphaless1}, we have\n$\\alpha_t\\eta_{t+1}\/\\eta_{t}\\le 1$.\nSince the function $x\\mapsto x^\\alpha$ is concave\nfor $\\alpha\\in(0,1]$, $x\\ge 0$, we get\n\\begin{multline*}\n1\\ge \\bigl(f_t(\\gamma_t,\\omega_t)\\bigr)^{\\alpha_t\\eta_{t+1}\/\\eta_{t}}\n\\\\\n=\n\\left(\\sum_{k=1}^K \\frac{1}{K} \\sum_{j=1}^\\infty\n\\frac{c}{j^2}\n \\exp\\left( j\\eta_t(\\dL_{t}-\\dL_{t}^k)\n -\\frac{j^2\\eta_t}{2\\beta_t}\\sum_{\\tau=1}^{t}\\beta_\\tau\\eta_\\tau\n \\right)\n\\right)^{\\alpha_t\\eta_{t+1}\/\\eta_{t}}\n\\\\\n\\ge\n\\sum_{k=1}^K \\frac{1}{K} \\sum_{j=1}^\\infty\n\\frac{c}{j^2}\n \\left(\\exp\\left( j\\eta_t(\\dL_{t}-\\dL_{t}^k)\n -\\frac{j^2\\eta_t}{2\\beta_t}\\sum_{\\tau=1}^{t}\\beta_\\tau\\eta_\\tau\n \\right)\n \\right)^{\\alpha_t\\eta_{t+1}\/\\eta_{t}}\n\\\\\n=\n\\sum_{k=1}^K \\frac{1}{K} \\sum_{j=1}^\\infty\n\\frac{c}{j^2}\n \\exp\\left( j\\alpha_t\\eta_{t+1}(\\dL_{t}-\\dL_{t}^k)\n -\\frac{j^2\\eta_{t+1}}{2\\beta_{t+1}}\\sum_{\\tau=1}^{t}\\beta_\\tau\\eta_\\tau\n \\right)\n= C_{t+1}\\,.\n\\end{multline*}\n\nThus, for each $t$ we have $f_t(\\gamma_t,\\omega_t)\\le 1$,\nthat is,\n$$\n\\sum_{k=1}^K \\frac{1}{K} \\sum_{j=1}^\\infty\n\\frac{c}{j^2}\n \\exp\\left( j\\eta_t(\\dL_{t}-\\dL_{t}^k)\n 
-\\frac{j^2\\eta_t}{2\\beta_t}\\sum_{\\tau=1}^{t}\\beta_\\tau\\eta_\\tau\n \\right)\n\\le 1\\,.\n$$\nFor any $\\epsilon>0$, let us take any $\\dL_T^\\epsilon$\nsuch that\nfor at least $\\epsilon K$ Experts\ntheir losses $\\dL_T^k$ are smaller than or equal to $\\dL_T^\\epsilon$.\nThen we have \n\\begin{multline*}\\allowdisplaybreaks\n1\\ge\n\\sum_{k=1}^K \\frac{1}{K} \\sum_{j=1}^\\infty\n\\frac{c}{j^2}\n \\exp\\left( j\\eta_t(\\dL_{t}-\\dL_{t}^k)\n -\\frac{j^2\\eta_t}{2\\beta_t}\\sum_{\\tau=1}^{t}\\beta_\\tau\\eta_\\tau\n \\right)\n\\\\\n\\ge \n\\epsilon\n\\sum_{j=1}^\\infty\n\\frac{c}{j^2}\n \\exp\\left( j\\eta_t(\\dL_{t}-\\dL_{t}^\\epsilon)\n -\\frac{j^2\\eta_t}{2\\beta_t}\\sum_{\\tau=1}^{t}\\beta_\\tau\\eta_\\tau\n \\right)\n\\\\\n\\ge\n\\frac{c\\epsilon}{j^2}\n \\exp\\left( j\\eta_t(\\dL_{t}-\\dL_{t}^\\epsilon)\n -\\frac{j^2\\eta_t}{2\\beta_t}\\sum_{\\tau=1}^{t}\\beta_\\tau\\eta_\\tau\n \\right)\n\\end{multline*}\nfor any natural $j$.\nTaking the logarithm and rearranging,\nwe get\n$$\n\\dL_{t}\\le \\dL_{t}^\\epsilon\n + \\frac{j}{2\\beta_t}\\sum_{\\tau=1}^{t}\\beta_\\tau\\eta_\\tau\n + \\frac{1}{j\\eta_t} \\ln\\frac{j^2}{c\\epsilon}\\,.\n$$\nSubstituting $\\eta_t=\\sqrt{\\beta_t\/B_t}$\nand using Lemma~\\ref{lem:gen-sqrt-sum},\nwe get\n$$\n\\dL_{t}\\le \\dL_{t}^\\epsilon\n + \\left(j+\\frac{2}{j}\\ln j + \\frac{1}{j}\\ln\\frac{1}{\\epsilon}\n + \\frac{1}{j}\\ln\\frac{1}{c}\n \\right)\n \\sqrt{\\frac{B_t}{\\beta_t}}\\,.\n$$\nLetting $j=\\left\\lceil\\sqrt{\\ln(1\/\\epsilon)}\\right\\rceil+1$\nand using the estimates $j\\le \\sqrt{\\ln(1\/\\epsilon)}+2$,\n$(\\ln j)\/j\\le 2$, \n$(\\ln(1\/\\epsilon))\/j\\le \\sqrt{\\ln(1\/\\epsilon)}$,\n$1\/j\\le 1$,\nand $\\ln(1\/c)=\\ln(\\pi^2\/6)\\le 1$,\nwe obtain the final bound.\n\\EP \n\n\\fi\n\n\\section{Regression with Discounted Loss}\n\\label{sec:regression}\n\nIn this section we consider a task of regression,\nwhere Learner must predict ``labels'' $y_t\\in\\R$\nfor input instances $x_t\\in\\bX\\subseteq\\R^n$.\nThe predictions proceed according to Protocol~\\ref{prot:COP}.\n\\begin{protocol}[h]\n \\caption{Competitive online regression}\n \\label{prot:COP}\n \\begin{algorithmic}\n \\FOR{$t=1,2,\\dots$}\n \\STATE Reality announces $x_t\\in\\bX$.\n \\STATE Learner announces $\\gamma_t\\in\\Gamma$.\n \\STATE Reality announces $y_t\\in\\Omega$.\n \\ENDFOR\n \\end{algorithmic}\n\\end{protocol}\nThis task can be embedded into prediction with expert advice\nif Learner competes with all functions $x\\to y$\nfrom some large class serving as a pool of\n(imaginary) Experts.\n\n\\subsection{The Framework and Linear Functions as Experts}\n\nLet the input space be $\\bX\\subseteq\\R^n$,\nthe set of predictions be $\\Gamma = \\R$,\nand\nthe set of outcomes be $\\Omega = [Y_1,Y_2]$.\nIn this section we consider the square loss\n$\\lambda^\\mathrm{sq}(\\gamma,y) = (\\gamma-y)^2$.\nLearner competes with a pool of experts\n$\\Theta=\\R^n$ (treated as linear functionals on $\\R^n$).\nEach individual expert is denoted by~$\\theta\\in\\Theta$\nand predicts $\\theta'x_t$ at step $t$.\n\nLet us take any distribution over the experts $P(d\\theta)$.\nIt is known from \\cite{VovkAS} that~\\eqref{eq:realiz} holds for the square loss\nwith $c=1$, $\\eta = \\frac{2}{(Y_2-Y_1)^2}$:\n\\begin{equation}\\label{eq:linrealiz}\n\\exists \\gamma\\in\\Gamma \\:\\forall y\\in\\Omega=[Y_1,Y_2]\\quad\n(\\gamma - y)^2\\le\n-\\frac{1}{\\eta}\n \\ln\\left(\\int_\\Theta \\e^{-\\eta (\\theta'x_t - y)^2}P(d\\theta)\\right).\n\\end{equation}\n\n\nDenote by $X$ the matrix of size $T\\times 
n$\nconsisting of the rows of the input vectors $x_1',\\ldots,x_T'$.\nLet also $W_T = \\diag(\\beta_1\/\\beta_T,\\beta_2\/\\beta_T,\\ldots,\\beta_T\/\\beta_T)$, i.e.,\n$W_T$ is a diagonal matrix $T \\times T$.\nIn a manner similar to~\\cite{VovkCOS},\nwe prove the following upper bound for Learner's loss.\n\\begin{theorem}\\label{thm:linbound}\nFor any $a > 0$,\nthere exists a prediction strategy for Learner in Protocol~\\ref{prot:COP} \nachieving, for every $T$\nand for any linear predictor $\\theta \\in \\R^n$,\n \\begin{multline}\\label{eq:linbound}\n \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (\\gamma_t-y_t)^2\n \\le\n \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (\\theta'x_t-y_t)^2 \n\\\\\n + a\\|\\theta\\|^2\n + \\frac{(Y_2-Y_1)^2}{4}\\ln\\det\\left(\\frac{X'W_TX}{a} + I\\right)\\,.\n \\end{multline}\nIf, in addition, $\\|x_t\\|_\\infty \\le Z$ for all $t$, then\n \\begin{multline}\\label{eq:linboundT}\n \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (\\gamma_t-y_t)^2\n \\le\n \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (\\theta'x_t-y_t)^2 \n\\\\\n + a\\|\\theta\\|^2\n + \\frac{n(Y_2-Y_1)^2}{4}\n \\ln\\left(\\frac{Z^2}{a}\\frac{\\sum_{t=1}^T \\beta_t}{\\beta_T} + 1\n \\right).\n \\end{multline}\n\\end{theorem}\n\nIn the undiscounted case ($\\alpha_t=1$ for all $t$),\nthe bounds in the theorem coincide with the bounds\nfor the Aggregating Algorithm for Regression~\\cite[Theorem~1]{VovkCOS}\nwith $Y_2 = Y$ and $Y_1 = -Y$,\nsince, as remarked after Theorem~\\ref{thm:convexbound},\n$\\beta_t=1$ and\n$\\left(\\sum_{t=1}^T \\beta_t\\right)\/\\beta_T = T$\nin the undiscounted case. \nRecall also that in the case of the exponential discounting\n($\\alpha_t = \\alpha \\in(0,1)$)\nwe have $\\beta_t=\\alpha^{-t+1}$ and\n$\\left(\\sum_{t=1}^T \\beta_t\\right)\/\\beta_T = (1-\\alpha^{T-1})\/(1-\\alpha)\n\\le 1\/(1-\\alpha)$.\n\\ifARXIV\nThus, for the exponential discounting bound~\\eqref{eq:linboundT}\nbecomes\n\\begin{multline}\\label{eq:linboundconst}\n \\sum_{t=1}^T \\alpha^{T-t} (\\gamma_t-y_t)^2\n \\le\n \\sum_{t=1}^T \\alpha^{T-t} (\\theta'x_t-y_t)^2 \n\\\\\n + a\\|\\theta\\|^2\n + \\frac{n(Y_2-Y_1)^2}{4}\n \\ln\\left(\\frac{Z^2(1-\\alpha^{T-1})}{a(1-\\alpha)} + 1\\right)\\,.\n\\end{multline}\n\\fi\n\n\n\\subsection{Functions from an RKHS as Experts}\nIn this section we apply the kernel trick\nto the linear method\nto compete with wider sets of experts.\nEach expert $f\\in\\hs{F}$ predicts $f(x_t)$.\nHere $\\hs{F}$ is a reproducing kernel Hilbert space (RKHS)\nwith a positive definite kernel $k \\colon \\bX\\times\\bX \\to \\R$.\nFor the definition of RKHS and\nits connection to kernels see \\cite{Scholkopf2002}.\nEach kernel defines a unique RKHS.\nWe use the notation $\\mat{K}_T = \\{k(x_i,x_j)\\}_{i,j=1,\\ldots,T}$\nfor the kernel matrix\nfor the input vectors at step~$T$.\nIn a manner similar to~\\cite{KAAR},\nwe prove the following upper bound\non the discounted square loss of Learner.\n\\begin{theorem}\\label{thm:hilbound}\n For any $a > 0$,\n there exists a strategy for Learner\n in Protocol~\\ref{prot:COP}\n achieving,\n for every positive integer $T$\n and any predictor $f \\in \\hs{F}$,\n \\begin{multline}\\label{eq:hilbound}\n \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (\\gamma_t-y_t)^2\n \\le\n \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (f(x_t)-y_t)^2 \n\\\\\n + a\\|f\\|^2\n + \\frac{(Y_2-Y_1)^2}{4}\n \\ln\\det\\left(\\frac{\\sqrt{W_T}\\mat{K}_T\\sqrt{W_T}}{a} + I\n \\right)\\,.\n \\end{multline}\n\\end{theorem}\n\n\n\\begin{corollary}\\label{cor:hilboundconst}\nAssume that \n$c^2_\\hs{F} = 
\\sup_{x \\in \\bX} k(x,x) < \\infty$\nfor the RKHS $\\hs{F}$.\nUnder the conditions of Theorem~\\ref{thm:hilbound},\ngiven in advance any constant $\\mathcal{T}$ such that\n$\\left(\\sum_{t=1}^T \\beta_t\\right)\/\\beta_T \\le \\mathcal{T}$,\none can choose parameter $a$ such that\nthe strategy in Theorem~\\ref{thm:hilbound}\nachieves for any $f \\in \\hs{F}$\n \\begin{multline}\\label{eq:hilboundconst}\n \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (\\gamma_t-y_t)^2\n \\le\n \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (f(x_t)-y_t)^2\n +\n\\left(\\frac{(Y_2-Y_1)^2}{4}+\\|f\\|^2\\right)c_{\\hs{F}}\\sqrt{\\mathcal{T}}\\,.\n \\end{multline}\nwhere $c^2_\\hs{F} = \\sup_{x \\in \\bX} k(x,x) < \\infty$\ncharacterizes the RKHS $\\hs{F}$.\n\\end{corollary}\n\\BP\nThe determinant of a symmetric positive definite matrix\nis upper bounded by the product of its diagonal elements\n(see Chapter 2, Theorem 7 in \\cite{Beckenbach1961}),\nand thus we have\n \\begin{multline*}\n \\ln\\det \\left(I + \\frac{\\sqrt{W_T}\\mat{K}_T\\sqrt{W_T}}{a} \\right)\n \\le\n T \\ln \\left(1+\\frac{c^2_{\\hs{F}}\\left(\\prod_{t=1}^T \\frac{\\beta_t}{\\beta_T}\\right)^{1\/T}}{a}\\right)\n \\\\\n\\le\n T \\frac{c^2_{\\hs{F}}}{a}\\left(\\prod_{t=1}^T \\frac{\\beta_t}{\\beta_T}\\right)^{1\/T}\n\\le\n T \\frac{c^2_{\\hs{F}}}{a\\beta_T}\\frac{\\sum_{t=1}^T \\beta_t}{T}\n\\le\n\\frac{c^2_{\\hs{F}}\\mathcal{T}}{a}\n \\end{multline*}\n(we use $\\ln(1+x)\\le x$ and the inequality between the geometric\nand arithmetic means).\nChoosing $a=c_{\\hs{F}}\\sqrt{\\mathcal{T}}$,\nwe get bound~\\eqref{eq:hilboundconst} from~\\eqref{eq:hilbound}.\n\\EP\nRecall again that\n$\\left(\\sum_{t=1}^T \\beta_t\\right)\/\\beta_T = (1-\\alpha^{T-1})\/(1-\\alpha)\n\\le 1\/(1-\\alpha)$ in the case of the exponential discounting\n($\\alpha_t = \\alpha \\in(0,1)$),\nand we can take $\\mathcal{T}=1\/(1-\\alpha)$.\n\nIn the undiscounted case ($\\alpha_t=1$),\nwe have $\\left(\\sum_{t=1}^T \\beta_t\\right)\/\\beta_T=T$,\nso we need to know the number of steps in advance.\nThen, bound~\\eqref{eq:hilboundconst} matches the bound\nobtained in~\\cite[the displayed formula after~(33)]{VovkRKHSarXiv}.\nIf we do not know an upper bound $\\mathcal{T}$ in advance,\nit is still possible to achieve a bound similar to~\\eqref{eq:hilboundconst}\nusing the Aggregating Algorithm with Discounting\nto merge Learner's strategies from Theorem~\\ref{thm:hilbound}\nwith different values of parameter $a$,\nin the same manner as in~\\cite[Theorem~3]{VovkRKHSarXiv}.\n\n\\ifARXIV\n\\begin{corollary}\\label{cor:hilboundmixed}\nAssume that \n$c^2_\\hs{F} = \\sup_{x \\in \\bX} k(x,x) < \\infty$\nfor the RKHS $\\hs{F}$.\nUnder the conditions of Theorem~\\ref{thm:hilbound},\n there exists a strategy for Learner\n in Protocol~\\ref{prot:COP}\n achieving,\n for every positive integer $T$\n and any predictor $f \\in \\hs{F}$,\n \\begin{multline}\\label{eq:hilboundmixed}\n \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (\\gamma_t-y_t)^2\n \\le\n \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (f(x_t)-y_t)^2\n +\nc_{\\hs{F}}\\|f\\|(Y_2-Y_1)\\sqrt{\\frac{\\sum_{t=1}^T \\beta_t}{\\beta_T}}\n\\\\\n +\\frac{(Y_2-Y_1)^2}{2}\\ln\\frac{\\sum_{t=1}^T \\beta_t}{\\beta_T}\n + \\|f\\|^2\n + (Y_2-Y_1)^2\\ln\\left(\\frac{c_{\\hs{F}}(Y_2-Y_1)}{\\|f\\|}+2 \n \\right)\\,.\n \\end{multline}\n\\end{corollary}\n\\BP\nLet us take the strategies from Theorem~\\ref{thm:hilbound}\nfor $a=1,2,3,\\ldots$\nand provide them as Experts \nto the Aggregating Algorithm with Discounting,\nwith the square loss function,\n$\\eta=2\/(Y_2-Y_1)^2$ 
and initial Experts' weights proprotional\nto $1\/a^2$.\nThen Theorem~\\ref{thm:AAD} \n(extended as described in Remark at the end of Section~\\ref{sec:linear})\nguarantees that the extra loss of the aggregated\nstrategy (compared to the strategy from Theorem~\\ref{thm:hilbound}\nwith parameter $a$) is not greater than \n$\\frac{(Y_2-Y_1)^2}{2}\\ln\\frac{a^2}{c}$, where $c=\\sum_{k=1}^K 1\/k^2$.\nOn the other hand,\nfor the strategy from Theorem~\\ref{thm:hilbound}\nwith parameter $a$ similarly to the proof of Corollary~\\ref{cor:hilboundconst}\nwe get\n$$\n \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (\\gamma_t-y_t)^2\n \\le\n \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (f(x_t)-y_t)^2 \n + a\\|f\\|^2\n + \\frac{c_{\\hs{F}}^2(Y_2-Y_1)^2}{4a}\\frac{\\sum_{t=1}^T \\beta_t}{\\beta_T}\\,.\n$$\nAdding $\\frac{(Y_2-Y_1)^2}{2}\\ln\\frac{a^2}{c}$ to the right-hand side\nand choosing \n$$\na=\\left\\lceil \\frac{c_{\\hs{F}}(Y_2-Y_1)}{2\\|f\\|}\n \\sqrt{\\frac{\\sum_{t=1}^T \\beta_t}{\\beta_T}}\\,\n \\right\\rceil\\,,\n$$\nwe get the statement after simple estimations.\n\\EP\n\\fi\n\n\\subsection{Proofs of Theorems~\\ref{thm:linbound} and~\\ref{thm:hilbound}}\nLet us begin with several technical lemmas from linear algebra.\n\\ifCONF\nFor complete proofs of them see~\\cite[Appendix~A.1]{CZ2010}.\n\\fi\n\\ifARXIV\nThe proofs of some of these lemmas are moved to\nAppendix~\\ref{append:technical}.\n\\fi\n\n\\begin{lemma}\\label{lem:integraleval}\nLet $A$ be a symmetric positive definite matrix of size~$n \\times n$.\nLet $\\theta, b \\in \\R^n$, $c$ be a real number,\nand $Q(\\theta)=\\theta'A\\theta + b'\\theta + c$.\nThen\n \\begin{equation*}\n \\int_{\\R^n} e^{-Q(\\theta)} d\\theta = e^{-Q_0} \\frac{\\pi^{n\/2}}{\\sqrt{\\det A}},\n \\end{equation*}\n where $Q_0 = \\min_{\\theta \\in \\R^n} Q(\\theta)$.\n\\end{lemma}\n\\noindent\nThe proof of this lemma can be found\nin~\\cite[Theorem~15.12.1]{Harville1997}.\n\n\\begin{lemma}\\label{lem:Frepres}\nLet $A$ be a symmetric positive definite matrix of size~$n \\times n$.\nLet $b,z \\in \\R^n$, and\n \\begin{equation*}\n F(A,b,z) = \\min_{\\theta \\in \\R^n}(\\theta' A \\theta + b'\\theta + z'\\theta)\n - \\min_{\\theta \\in \\R^n}(\\theta' A \\theta + b'\\theta - z'\\theta)\\,.\n \\end{equation*}\nThen $F(A,b,z) = -b'A^{-1}z$.\n\\end{lemma}\n\n\\begin{lemma}\\label{lem:ratiointegr}\nLet $A$ be a symmetric positive definite matrix of size~$n \\times n$.\nLet $\\theta,b_1,b_2 \\in \\R^n$, $c_1, c_2$ be real numbers,\nand\n$Q_1(\\theta)=\\theta'A\\theta + b_1'\\theta + c_1$,\n$Q_2(\\theta)=\\theta'A\\theta + b_2'\\theta + c_2$.\nThen\n \\begin{equation*}\n \\frac{\\int_{\\R^n} e^{-Q_1(\\theta)} d\\theta}\n {\\int_{\\R^n} e^{-Q_2(\\theta)} d\\theta}\n = e^{c_2-c_1 - \\frac{1}{4}(b_2+b_1)'A^{-1}(b_2-b_1)}\\,.\n \\end{equation*}\n\\end{lemma}\n\nThe previous three lemmas\nwere implicitly used in \\cite{VovkCOS} to derive a bound\non the cumulative undiscounted square loss\nof the algorithm competing with linear experts.\n\n\\begin{lemma}\\label{lem:matrixequal}\nFor any matrix $B$ of size~$n\\times m$,\nany matrix $C$ of size~$m\\times n$,\nand any real number $a$\nsuch that the matrices $aI_m+CB$ and $aI_n+BC$ are nonsingular,\nit holds\n\\begin{equation}\\label{eq:matrixequal}\n B(aI_m+CB)^{-1}=(aI_n+BC)^{-1}B\\,,\n\\end{equation}\nwhere $I_n,I_m$ are the unit matrices of sizes~$n\\times n$ and~$m\\times m$,\nrespectively.\n\\end{lemma}\n\\BP\nNote that this is equivalent to $(aI_n+BC)B=B(aI_m+CB)$.\n\\EP\n\n\\begin{lemma}\\label{lem:matdetiden}\nFor matrix $B$ of 
size~$n\\times m$,\nany matrix $C$ of size~$m\\times n$,\nand any real number $a$, it holds\n \\begin{equation}\\label{eq:matdetiden}\n \\det (aI_n + BC)=\\det (aI_m + CB)\\,,\n \\end{equation}\nwhere $I_n,I_m$ are the unit matrices of sizes~$n\\times n$ and~$m\\times m$,\nrespectively.\n\\end{lemma}\n\n\\subsubsection{Proof of Theorem~\\ref{thm:linbound}.}\nWe take the Gaussian initial distribution over the experts\nwith a parameter $a>0$:\n\\begin{equation*}\nP_0(d\\theta) = \\left(\\frac{a\\eta}{\\pi}\\right)^{n\/2} e^{-a\\eta\\|\\theta\\|^2}d\\theta.\n\\end{equation*}\nand use ``Algorithm~\\ref{alg:AAD} with infinitely many Experts''.\nRepeating the derivations from\n\\ifCONF the proof of Theorem~\\ref{thm:AAD}, \\fi\n\\ifARXIV Subsection~\\ref{ssec:AAproof}, \\fi\nwe obtain the following analogue of~\\eqref{eq:AADFbound}:\n\\begin{equation*}\n\\left(\\frac{a\\eta}{\\pi}\\right)^{n\/2}\n\\int_\\Theta \\e^{\\eta\\left(\\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (\\gamma_t-y_t)^2\n-\n\\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (\\theta'x_t-y_t)^2\\right)} e^{-a\\eta\\|\\theta\\|^2}d\\theta \\le 1.\n\\end{equation*}\n\nThe simple equality\n\\begin{equation}\\label{eq:exlosstrans}\n\\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (\\theta'x_t-y_t)^2 + a\\|\\theta\\|^2\n=\n \\theta'(aI + X'W_TX)\\theta\n - 2\\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} y_t\\theta'x_t\n + \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} y_t^2\n\\end{equation}\nshows that\nthe integral can be evaluated with the help of Lemma~\\ref{lem:integraleval}:\n\\begin{multline*}\n\\left(\\frac{a\\eta}{\\pi}\\right)^{n\/2}\n\\int_\\Theta e^{-\\eta \\left(\\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (\\theta'x_t-y_t)^2 + a\\|\\theta\\|^2 \\right)} d\\theta\n\\\\ =\n\\left(\\frac{a}{\\pi}\\right)^{n\/2}\ne^{-\\eta \\min_{\\theta} \\left(\\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (\\theta'x_t-y_t)^2 + a\\|\\theta\\|^2 \\right)} \n \\frac{\\pi^{n\/2}}{\\sqrt{\\det(aI + X'W_TX)}}.\n\\end{multline*}\nWe take the natural logarithms of both parts of the bound and using the value $\\eta = \\frac{2}{(Y_2-Y_1)^2}$\nobtain~\\eqref{eq:linbound}.\nThe determinant of a symmetric positive definite matrix\nis upper bounded by the product of its diagonal elements\n(see Chapter 2, Theorem 7 in \\cite{Beckenbach1961}):\n\\begin{equation*}\n\\det\\left(\\frac{X'W_TX}{a} + I\\right)\n \\le \\left(\\frac{Z^2\\sum_{t=1}^T \\beta_t}{a\\beta_T} + 1\\right)^n,\n\\end{equation*}\nand thus we obtain~\\eqref{eq:linboundT}.\n\n\\subsubsection{Proof of Theorem~\\ref{thm:hilbound}.}\n We must prove that for each $T$\n and each sequence $(x_1,y_1,\\ldots,x_T,y_T)\\in(\\bX\\times\\R)^T$\n the guarantee~\\eqref{eq:hilbound} is satisfied.\n Fix $T$ and $(x_1,y_1,\\ldots,x_T,y_T)$.\n Fix an isomorphism between the linear span of $k_{x_1},\\ldots,k_{x_T}$\n obtained for the Riesz Representation theorem\n and $\\R^{\\tilde T}$,\n where $\\tilde T\\le T$ is the dimension of the linear span of $k_{x_1},\\ldots,k_{x_T}$.\n Let $\\tilde x_1,\\ldots,\\tilde x_T\\in\\R^{\\tilde T}$ be the images of $k_{x_1},\\ldots,k_{x_T}$,\n respectively,\n under this isomorphism.\n We have then $k(\\cdot,x_i) = \\langle \\cdot,\\tilde x_i \\rangle$ for any $x_i$.\n\n We apply the strategy from Theorem~\\ref{thm:linbound}\n to $\\tilde x_1,\\ldots,\\tilde x_T$.\n The predictions of the strategies are the same due to \n Proposition~\\ref{prop:hilpred} below.\n Any expert $\\theta\\in\\R^{\\tilde T}$ in bound~\\eqref{eq:linbound} \n can be represented as\n \\begin{equation*}\n \\theta = \\sum_{i=1}^T c_i \\tilde 
x_i = \\sum_{i=1}^T c_i k(\\cdot,x_i)\n \\end{equation*}\n for some $c_i\\in\\R$.\n Thus the experts' predictions are \n $\\theta' \\tilde x_t = \\sum_{i=1}^T c_i k(x_t,x_i)$,\n and the norm is \n $\\|\\theta\\|^2 = \\sum_{i,j=1}^T c_i c_j k(x_i,x_j)$.\n\n Denote by $\\tilde X$ the $T\\times \\tilde T$\n matrix consisting of the rows of the vectors $\\tilde x_1',\\ldots,\\tilde x_T'$.\n From Lemma~\\ref{lem:matdetiden} we have\n \\begin{equation*}\n \\det\\left(\\frac{\\tilde X'W_T\\tilde X}{a} + I\\right)\n =\n \\det\\left(\\frac{\\sqrt{W_T}\\tilde X \\tilde X'\\sqrt{W_T}}{a} + I\\right).\n \\end{equation*}\n Thus using $\\mat{K}_T = \\tilde X \\tilde X'$ we obtain the upper bound\n \\begin{multline*}\n \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (\\gamma_t-y_t)^2\n \\le\n \\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} \\left(\\sum_{i=1}^T c_i k(x_t,x_i)-y_t\\right)^2 \n\\\\\n + a\\sum_{i,j=1}^T c_i c_j k(x_i,x_j)\n + \\frac{(Y_2-Y_1)^2}{4}\\ln\\det\\left(\\frac{\\sqrt{W_T}\\mat{K}_T\\sqrt{W_T}}{a} + I\\right)\n \\end{multline*}\n for any $c_i\\in\\R$, $i=1,\\ldots,T$.\n By the Representer theorem (see Theorem 4.2 in \\cite{Scholkopf2002})\n the minimum of $\\sum_{t=1}^T \\frac{\\beta_t}{\\beta_T} (f(x_t)-y_t)^2 + a\\|f\\|^2$ over all $f\\in\\hs{F}$\n is achieved\n on one of the linear combinations from the bound obtained above.\n This concludes the proof.\n\n\\subsection{Regression Algorithms}\\label{sec:algreg}\nIn this subsection we derive explicit form of\nthe prediction strategies for Learner\nused in Theorems~\\ref{thm:linbound}\nand~\\ref{thm:hilbound}.\n\n\\subsubsection{Strategy for Theorem~\\ref{thm:linbound}.}\nIn~\\cite{VovkCOS} Vovk suggests for the square loss\nthe following substitution function\nsatisfying~\\eqref{eq:subst}:\n\\begin{equation}\\label{eq:substsquare}\n\\gamma_T = \\frac{Y_2+Y_1}{2} - \\frac{g_T(Y_2)-g_T(Y_1)}{2(Y_2-Y_1)}.\n\\end{equation}\nIt allows us to calculate $g_T$ with unnormalized weights:\n\\begin{equation*}\ng_T(y)=\n -\\frac{1}{\\eta}\\left(\\ln\\int_{\\Theta} e^{-\\eta\\left(\\theta'A_T\\theta -\n 2\\theta'\\left(b_{T-1} + yx_T\\right)\n +\n \\left(\\sum_{t=1}^{T-1} \\frac{\\beta_t}{\\beta_T} y_t^2 + y^2\\right)\\right)}d\\theta\\right)\n\\end{equation*}\nfor any $y\\in\\Omega=[Y_1,Y_2]$ \n(here we use the expansion~\\eqref{eq:exlosstrans}),\nwhere\n\\begin{equation*}\nA_T = aI + \\sum_{t=1}^{T-1} \\frac{\\beta_t}{\\beta_T}x_t x_t' + x_T x_T' = aI + X'W_TX,\n\\end{equation*}\nand $b_{T-1} = \\sum_{t=1}^{T-1} \\frac{\\beta_t}{\\beta_T} y_t x_t$.\nThe direct calculation of $g_T$\nis inefficient: it requires numerical integration.\nInstead, we notice that\n\\begin{multline}\n\\gamma_T = \\frac{Y_2+Y_1}{2} - \\frac{g_T(Y_2)-g_T(Y_1)}{2(Y_2-Y_1)}\n \\\\\n = \\frac{Y_2+Y_1}{2}\n - \\frac{1}{2(Y_2-Y_1)\\eta}\n \\ln \\frac{\\int_{\\Theta} e^{-\\eta\\left(\\theta'A_T\\theta -\n 2\\theta'\\left(b_{T-1} + Y_1x_T\\right)\n +\n \\left(\\sum_{t=1}^{T-1} \\frac{\\beta_t}{\\beta_T} y_t^2 + Y_1^2\\right)\\right)}d\\theta}\n {\\int_{\\Theta} e^{-\\eta\\left(\\theta'A_T\\theta -\n 2\\theta'\\left(b_{T-1} + Y_2x_T\\right)\n +\n \\left(\\sum_{t=1}^{T-1} \\frac{\\beta_t}{\\beta_T} y_t^2 + Y_2^2\\right)\\right)}d\\theta}\n \\\\[0.5ex]\n = \\frac{Y_2+Y_1}{2}\n - \\frac{1}{2(Y_2-Y_1)\\eta}\n \\ln e^{\\eta\\left(Y_2^2-Y_1^2\n - \\left(b_{T-1}+\\left(\\frac{Y_2+Y_1}{2}\\right)x_T\\right)'A_T^{-1}\n \\left(\\frac{Y_2-Y_1}{2}x_T\\right)\\right)}\n \\\\\n = \\left(b_{T-1}+\\left(\\frac{Y_2+Y_1}{2}\\right)x_T\\right)'A_T^{-1} x_T\\,,\n\\label{eq:linpred}\n\\end{multline}\nwhere the 
third equality follows from Lemma~\\ref{lem:ratiointegr}.\n\n\nThe strategy which predicts according to~\\eqref{eq:linpred} \nrequires $O(n^3)$ operations per step.\nThe most time-consuming operation is the inverse of the matrix $A_T$.\n\\ifARXIV\nNote that for the undiscounted case the inverse could be computed\nincrementally using the Sherman-Morrison formula,\nwhich leads to $O(n^2)$ operations per step.\n\\fi\n\n\\subsubsection{Strategy for Theorem~\\ref{thm:hilbound}.}\nWe use following notation.\nLet\n\\begin{equation}\\label{eq:hilnotation}\n\\begin{array}{lcl}\n\\vect{k}_T & \\text{be} & \\text{the last column of the matrix } \\mat{K}_T,\\vect{k}_T = \\{k(x_i,x_T)\\}_{i=1}^T,\\\\\n\\vect{Y}_T & \\text{be} & \\text{the column vector of the outcomes } \\vect{Y}_T = (y_1,\\ldots,y_T)'.\n\\end{array}\n\\end{equation}\nWhen we write $\\vect{Z} = (\\vect{V};\\vect{Y})$ or $\\vect{Z} = (\\vect{V}';\\vect{Y}')'$ we mean that\nthe column vector $\\vect{Z}$ is obtained by\nconcatenating two column vectors $\\vect{V},\\vect{Y}$ vertically or\n$\\vect{V}',\\vect{Y}'$ horizontally.\n\nAs it is clear from the proof of Theorem~\\ref{thm:hilbound},\nwe need to prove that the strategy for this theorem\nis the same as the strategy for Theorem~\\ref{thm:linbound}\nin the case when the kernel is the scalar product.\n\n\\begin{proposition}\\label{prop:hilpred}\n The predictions~\\eqref{eq:linpred} can be represented as\n \\begin{equation}\\label{eq:hilpred}\n \\gamma_T = \\left( \\vect{Y}_{T-1}; \\frac{Y_2+Y_1}{2} \\right)'\\sqrt{W_T}\n \\left(aI + \\sqrt{W_T}\\mat{K}_T\\sqrt{W_T}\\right)^{-1} \\sqrt{W_T}\\vect{k}_T\n \\end{equation}\n for the scalar product kernel $k(x,y) = \\langle x, y \\rangle$, the unit $T \\times T$ matrix $I$, and $a>0$.\n\\end{proposition}\n\\BP\n For the scalar product kernel we have we have $\\mat{K}_T = X'X$\n and $\\sqrt{W_T}\\vect{k}_T = \\sqrt{W_T} X x_T$.\n By Lemma~\\ref{lem:matrixequal} we obtain\n \\begin{equation*}\n \\left(aI + \\sqrt{W_T}XX'\\sqrt{W_T}\\right)^{-1} \\sqrt{W_T} X x_T\n =\n \\sqrt{W_T} X \\bigl(aI + X'W_TX\\bigr)^{-1}x_T\\,.\n \\end{equation*}\n It is easy to see that\n \\begin{equation*}\n \\left( \\vect{Y}_{T-1}; \\frac{Y_2+Y_1}{2}\\right)'W_T X\n =\n \\left(\\sum_{t=1}^{T-1} \\frac{\\beta_t}{\\beta_T} y_t x_t+\\left(\\frac{Y_2+Y_1}{2}\\right)x_T\\right)'\n \\end{equation*}\n and\n \\begin{equation*}\n X'W_TX = \\sum_{t=1}^{T-1} \\frac{\\beta_t}{\\beta_T} x_t x_t' + x_T x_T'\\,.\n \\end{equation*}\n Thus we obtain the formula~\\eqref{eq:linpred} from~\\eqref{eq:hilpred}.\n\\EP\n\n\n\\subsubsection*{Acknowledgements}\nWe are grateful to Yura Kalnishkan and Volodya Vovk\nfor numerous illuminating discussions.\nThis work was supported by EPSRC\n(grant EP\/F002998\/1).\n\n\\ifARXIV\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section*{Supplemental material}\n\\subsection*{Parameterizing our ignorance} \n\\label{sec:setup}\n\nIn the following we give a detailed description of the construction of\nthe EOS used in our work. In essence, for the low-density limit we use\nthe EOSs of Refs. \\cite{Baym71b_2, Negele73_2}, that describe the neutron\nstar's crust up until a number density of $n_{\\rm crust}=0.08\\,{\\rm\nfm}^{-3}$. Instead, for baryon number densities $n_{_{\\rm B}}$ found in\nthe outer core, i.e.,~ between $n_{\\rm crust}$ and $n_{_{\\rm\nB}}\\approx\\!1.3\\,n_{\\rm sat}\\!\\approx\\! 
0.21\\,{\\rm fm}^{-3}$, where\n$n_{\\rm sat}=0.16\\,{\\rm fm}^{-3}$ is the nuclear-saturation density, we\nuse an improved and consistent neutron-matter EOS based on the chiral\nexpansion at ${\\rm N}^3{\\rm LO}$ of the nucleon-nucleon and three-nucleon\n(3N) chiral interactions that takes into account also the subleading 3N\ninteractions, as well as 4N forces in the Hartree-Fock\napproximation \\cite{Drischler2016_2}. In practice, we take the lower and\nupper limits of the uncertainty band for this EOS and fit them with two\npolytropes. The first polytrope is used up until $\\approx\\!n_{\\rm sat}$\nand yields an adiabatic index $\\Gamma_1$ in the range $[1.31,1.58]$,\nwhile the second adiabatic index $\\Gamma_2$ is chosen in the range\n$[2.08,2.38]$; the limits of these ranges essentially establish the\nuncertainty in the description of the EOS in the outer core. We note that\ntaking just these very stiff and soft limits into account -- as done by\nRef. \\cite{Annala2017_2} -- only allows to give upper and lower bounds, as\nset by the softest and stiffest EOS possible. On the other hand, to\nunderstand common features and effects of the crust and outer core on the\nneutron-star models, we construct our EOSs from uniform distributions of\n$\\Gamma_1$ and $\\Gamma_2$, which will obviously include the stiffest and\nsoftest possible EOSs, but also all the other possibilities between these\ntwo limits. In addition, and to explore the sensitivity of our results on\nthis prescription of the EOS, we also investigate the impact of a further\nrefinement of the EOS obtained using a new many-body Monte-Carlo\nframework for perturbative calculations applied to a set of chiral\ninteractions \\cite{Drischler2017_2}. In this case, we use the range over six\nEOSs, each based on a different Hamiltonian \\cite{Hebeler2011_2}, as uncertainty band.\n\\begin{figure}[h!]\n \\includegraphics[width=0.9\\columnwidth]{.\/some_PTs.pdf}\n \\caption{\\footnotesize Representative sample of $(M,R)$ curves\n constructed from EOSs with a phase transition. While stable\n branches are shown as solid lines, the unstable ones are denoted by\n dashed ones. Also shown with thick dashed lines are the lower and\n upper constraint on the maximum mass.}\n \\label{fig:sup_1}\n\\end{figure}\n\n\\begin{figure}[h!]\n \\includegraphics[width=0.9\\columnwidth]{.\/EOS_params.jpg}\n \\caption{\\footnotesize\n Relative jump of the energy discontinuity $\\Delta e\/e_{\\rm trans}$ shown\n as a function of the normalised pressure at the phase transition\n $p_{\\rm trans}\/e_{\\rm trans}$. All EOSs represented posses a phase\n transition but some lead to twin stars (red dots), while others do not\n (green dots).} \\label{fig:sup_2}\n\n\\end{figure}\n\\begin{figure*}[t!]\n \\includegraphics[width=1.8\\columnwidth]{.\/2M_only.pdf}\n \\caption{\\footnotesize PDFs of stellar radii. Left panel: PDF with only\n the observational constraints on the observed maximum mass for pure\n hadronic EOSs; right: the same but for EOSs with a phase transition\n and where the PDF for $R<12 \\, \\rm km$ is for the twin-branch and the\n one for $R>12 \\, \\rm km$ for the hadronic branch. In both panels the\n solid and dashed lines indicate the $2$-$\\sigma$ and $3$-$\\sigma$\n confidence levels, respectively. This figure should also be\n contrasted with Fig. 
1 of the main text.}\n \label{fig:sup_3}\n\end{figure*}\n\nFor the high-density part of the EOS, on the other hand, we follow\n\cite{Annala2017_2} and \cite{Kurkela2014_2}, and use the cold quark-matter\nEOS derived by \cite{Fraga2014_2}, which is based on the perturbative QCD\ncalculation of \cite{Kurkela2010_2}. The uncertainty in this EOS is\nestimated by changing the renormalization scale parameter within a factor\nof two, $X\in[1,4]$, which is chosen from a uniform distribution, allowing\nus to match the last segment of the interpolating piecewise polytrope via\nits adiabatic index at the baryon chemical potential $\mu_{\rm b}=2.6\,\rm GeV$. Finally, for the unconstrained region above $\sim\n1.3\,n_{\rm sat}$, we follow \cite{Kurkela2014_2} and interpolate between\nthese limits using piecewise polytropes with four segments (tetratropes),\nwhose polytropic exponents, as well as the matching points between the\nsegments, are chosen at random assuming equal probabilities, ensuring\ncontinuity of energy and pressure at the matching points. We have also\nchecked that our results are robust when using five instead of four\npolytropic segments.\n\nAfter constructing an EOS with the procedure outlined above, we check\nthat it is physically plausible by ensuring that the sound speed is\nsubluminal everywhere inside the star, while the minimal thermodynamic\nstability criterion is automatically satisfied. For any EOS passing this\ntest, we then construct a sequence of neutron-star models by solving the\nTOV equations. Additionally, we check compatibility with observations of\nthe maximum mass of neutron stars \cite{Antoniadis2013_2} and ensure that\nthe maximum mass of any sequence exceeds $2.01\,M_\odot$. In this way, we compute\na million equilibrium sequences containing more than a billion stars,\nessentially covering the whole possible range of EOSs from very soft\nto very stiff.\n\nAs mentioned in the main text, in order to take into account the\npossibility that the neutron-star matter has a strong phase transition in\nits interior, we construct an equally sized set of $10^6$ EOSs using the\nabove setup, but adding a jump in energy density $\Delta e \in [0,1000]\,\n{\rm MeV}\/{\rm fm}^3$, while ensuring constancy of the chemical\npotential. This jump is introduced randomly between the polytropic\nsegments and its range is motivated by the results of\nRef. \cite{Christian2018_2}. The latter choice ensures that all four\ncategories of twin stars can be obtained (see \cite{Christian2018_2} for an\noverview). This is also evident from Fig. \ref{fig:sup_1}, which shows a\nrepresentative sample of the mass-radius curves obtained with the setup\nfor EOSs with a phase transition. In particular, it is possible to see that\nthe phase transition can set in at very low masses, i.e.,~ $<\!1\,M_\odot$,\nbut also at very high masses, i.e.,~ $>\ 3\ M_\odot$, where the two stable\nbranches can be up to $5\ \rm km$ apart. Overall, we find that with this\nprescription we can cover well the space of possible EOSs with a phase\ntransition.\n\nA more detailed representation of the occurrence and properties of the\nphase transitions in the EOSs constructed here can\nbe obtained by looking at Fig. \ref{fig:sup_2}, which shows the relative\nsize of the jump $\Delta e\/e_{\rm trans}$ as a function of $p_{\rm trans}\/e_{\rm trans}$, where $p_{\rm trans}$ is the pressure at the onset of the phase\ntransition and $e_{\rm trans}$ is the corresponding energy. 
The figure\nclearly shows that our sample of EOSs with phase transitions populates\nthe whole space necessary for obtaining all four categories of twin stars\n(cf.~ Fig. 3 of Ref. \\cite{Christian2018_2}). The red region also confirms that\ntwin stars can only be obtained for a small set of combinations of the\nparameters $\\Delta e$ and $e_{\\rm trans}$. Additionally, we can see that\nthe remaining parameter space is also covered well.\n\n\n\n\\subsection{Comparing EOSs with and without phase transitions}\n\nWhile purely hadronic EOSs yield neutron star-models with radii $\\gtrsim 10\\,\n\\rm km$, this is not contradicting the results of Ref. \\cite{Tews2018_2},\nwho also find models with radii as small as $\\sim 8\\,\\rm km$. From Fig.\n\\ref{fig:sup_3}, where we show all models without any upper constraint on\nthe maximum mass and the tidal deformabilities, it is evident that such\nsmall radii can only be obtained when a phase transition is present. At\nthe same time, it is evident that such compact stars are not likely and\nrequire some fine-tuning of the EOS. For instance, we find that\nsuch small radii are found for twin stars, whose total number, $N_{\\rm\ntwins}$, only corresponds to $\\sim 5\\%$ of the total number of \nall models build from EOS exhibiting a phase transition, $N_{\\rm total}$.\n\\begin{figure}[h!]\n \\includegraphics[width=1.0\\columnwidth]{.\/lambda2D_PTs.pdf}\n \\caption{\\footnotesize The same as Fig. 3 of the main text, but for\n the set of EOSs with phase transition.}\n \\label{fig:sup_4}\n\\end{figure}\n\n\nAnother striking difference between purely hadronic EOSs and the ones\nwith phase transitions is the occurrence of small tidal deformabilities\nin the latter case. For instance, for $M\\lesssim 1.4$ models with\n$\\tilde{\\Lambda}_{1.4} \\lesssim 100$ can be found, as is evident from\nFig. \\ref{fig:sup_4}, which is the same as Fig. 3 of the main text but\nin the case of EOSs with a phase transition. This finding resolves the\napparent conflict between the small lower limit for\n$\\tilde{\\Lambda}_{1.4}$ found in Ref. \\cite{De2018_2} (i.e.,~\n$\\tilde{\\Lambda}_{1.4} > 75$) and the stricter one obtained by\nRef. \\cite{Annala2017_2} (i.e.,~ $\\tilde{\\Lambda}_{1.4} > 120$) or the value\nof $\\tilde{\\Lambda}_{1.4} > 375$ discussed in the main text. In the case\nof Ref. \\cite{De2018_2}, in fact, no distinction was made between purely\nhadronic and EOSs with phase transitions. On the other hand, a\ncomparison between Figs. 3 of the main text and \\ref{fig:sup_4} clearly\nshows that the lower limit on $\\tilde{\\Lambda}_{1.4}$ becomes much\nstricter if one assumes a purely hadronic EOS. A similar conclusion has\nbeen drawn in Ref. \\cite{Tews2018_2}, where a model with phase transitions\n($\\tilde{\\Lambda}_{1.4} > 80$) and one without ($\\tilde{\\Lambda}_{1.4} >\n280$) was employed. In addition, we find that for $M \\gtrsim 1.6$ a\nstrict cut-off for the upper limit of $\\tilde{\\Lambda}$ exists if a phase\ntransition is present. As a result, deducing a value of $\\tilde{\\Lambda}$\nfrom a gravitational-wave measurement in this mass range could be used to\ndistinguish a purely hadronic EOS from one with a phase transition, as we\nhave further outlined in the main text.\n\n\n\\input{supplemental_material.bbl}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn doing spectroscopy, one often needs certain phonon modes to test selection rules. 
In our lab, a novel Raman scattering experiment is being conducted, and the \\(\\Gamma_{2}^{+}\\) phonon of the \\(O_{h}\\) space group is among the several modes expected to produce new features. However, searching for a \\(\\Gamma_{2}^{+}\\) phonon in \\(O_{h}\\) space groups was not successful. Many well-known crystals in \\(O_{h}\\) space groups, such as silicon and perovskite, do not have \\(\\Gamma_{2}^{+}\\) phonons. A careful examination of those crystals shows that not only \\(\\Gamma_{2}^{+}\\), but also all the one (\\(\\Gamma_{1}^{+}\\), \\(\\Gamma_{2}^{+}\\), \\(\\Gamma_{1}^{-}\\), \\(\\Gamma_{2}^{-}\\)) and two (\\(\\Gamma_{3}^{+}\\), \\(\\Gamma_{3}^{-}\\)) dimensional phonons rarely occur, while three dimensional phonons (\\(\\Gamma_{4}^{+}\\), \\(\\Gamma_{5}^{+}\\), \\(\\Gamma_{4}^{-}\\), \\(\\Gamma_{5}^{-}\\)) are always present.\n\nInstead of checking the phonon structures of available crystals one by one, we perform a systematic study of the zone center phonon structures of \\(O_{h}\\) space groups. All one dimensional phonons are listed for the different Wyckoff positions of the ten \\(O_{h}\\) space groups. The analysis shows that at least four inequivalent atoms in one set of Wyckoff positions are required to have one dimensional phonons. This explains the absence of the \\(\\Gamma_{2}^{+}\\) phonons in NaCl, diamond and other crystals with simple structures. The results are tabulated and, with the help of our tables, one can choose suitable crystals when a certain one or two dimensional phonon is needed. As far as we know, there is no similar analysis in the literature. The fact that crystals belonging to \\(O_{h}\\) space groups are the most common ones~\\cite{crystaldata} makes our work useful. A restriction relation between the number of atoms in one set of Wyckoff positions and the number of one dimensional phonons (magnons) is found in cubic lattice systems (\\(T\\), \\(T_{h}\\), \\(T_{d}\\), \\(O\\), \\(O_{h}\\)). All symmetry assignments follow the Koster notations~\\cite{KDWS}.\n\nThis note is arranged in the following order: Section II introduces the method to obtain one dimensional zone center phonons; the results are tabulated in section III for all ten \\(O_{h}\\) space groups at all Wyckoff positions; Section IV explores higher dimensional phonons: a certain number of two and three dimensional phonons accompany a single one dimensional phonon, and the restriction relation between the number of one dimensional phonons and the number of atoms in one Wyckoff set is presented; Section V applies the same argument to all cubic lattice systems (\\(T\\), \\(T_{h}\\), \\(T_{d}\\), \\(O\\), \\(O_{h}\\)) and the same restriction is found; we discuss the structures of magnons in section VI, where the same restrictions are found; discussions are in section VII, where the phonon structure of \\(A15\\) crystals is studied.\n\n\\section{Theoretical background}\n\n\\(O_{h}\\) space groups are space groups with \\(O_{h}\\) point group. They are \\(O_{h}^{1}\\) (\\(Pm3m\\)), \\(O_{h}^{2}\\) (\\(Pn3n\\)), \\(O_{h}^{3}\\) (\\(Pm3n\\)), \\(O_{h}^{4}\\) (\\(Pn3m\\)), \\(O_{h}^{5}\\) (\\(Fm3m\\)), \\(O_{h}^{6}\\) (\\(Fm3c\\)), \\(O_{h}^{7}\\) (\\(Fd3m\\)), \\(O_{h}^{8}\\) (\\(Fd3c\\)), \\(O_{h}^{9}\\) (\\(Im3m\\)) and \\(O_{h}^{10}\\) (\\(Ia3d\\)). In those space groups, 48 rotational operations are associated with simple cubic, face-centered-cubic or body-centered-cubic lattices. 
The general form of a symmetry operation is \\{\\(\\hat g|\\bi{R_{L}} + \\bi{\\tau_{g}}\\)\\}: \\(\\hat{g}\\) is the rotational operation, \\(\\bi{R_{L}}\\) is the lattice translation. \\(\\bi{\\tau_{g}}\\) is 0 for symmorphic space groups and some fractional translation(s) for nonsymmorphic space groups.\n\n\\begin{table}\n\\caption{The character table of \\(O_{h}\\) point group.and the direct product of \\(\\Gamma_{4}^{-}\\) with all representations of \\(O_{h}\\) point group~\\cite{KDWS}.} \\label{tab:charactertable}\n\\begin{center}\n\\begin{tabular}{|l|r|r|r|r|r|r|r|r|r|r|c|}\\hline\n& \\(E\\) & 8\\(C_{3}\\) &3\\(C_{2}\\) & 6\\(C_{4}\\) & 6\\(C_{2}\\) & \\(I\\) & 8\\(S_{6}\\) & 3\\(\\sigma_{h}\\) & 3\\(S_{4}\\) & 6\\(\\sigma_{d}\\)&\\(\\otimes\\) \\(\\Gamma_{4}^{-}\\) \\\\ \\hline\n\\(\\Gamma_{1}^{+}\\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 &\\(\\Gamma_{4}^{-}\\)\\\\ \\hline\n\\(\\Gamma_{2}^{+}\\) & 1 & 1 & 1 & -1 & -1 & 1 & 1 & 1 & -1 & -1 &\\(\\Gamma_{5}^{-}\\)\\\\ \\hline\n\\(\\Gamma_{3}^{+}\\) & 2 & -1 & 2 & 0 & 0 & 2 & -1 & 2 & 0 & 0&\\(\\Gamma_{4}^{-}\\) + \\(\\Gamma_{5}^{-}\\)\\\\ \\hline\n\\(\\Gamma_{4}^{+}\\) & 3 & 0 & -1 & 1 & -1 & 3 & 0 & -1 & 1 & -1 &\\(\\Gamma_{1}^{-}\\)+\\(\\Gamma_{3}^{-}\\)+\\(\\Gamma_{4}^{-}\\)+\\(\\Gamma_{5}^{-}\\) \\\\ \\hline\n\\(\\Gamma_{5}^{+}\\) & 3 & 0 & -1 & -1 & 1 & 3 & 0 & -1 & -1 & 1& \\(\\Gamma_{2}^{-}\\)+\\(\\Gamma_{3}^{-}\\)+\\(\\Gamma_{4}^{-}\\)+\\(\\Gamma_{5}^{-}\\) \\\\ \\hline\n\\(\\Gamma_{1}^{-}\\) & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 &\\(\\Gamma_{4}^{+}\\)\\\\ \\hline\n\\(\\Gamma_{2}^{-}\\) & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 & 1 & 1 &\\(\\Gamma_{5}^{+}\\)\\\\ \\hline\n\\(\\Gamma_{3}^{-}\\) & 2 & -1 & 2 & 0 & 0 & -2 & 1 & -2 & 0 & 0 &\\(\\Gamma_{4}^{+}\\) + \\(\\Gamma_{5}^{+}\\)\\\\ \\hline\n\\(\\Gamma_{4}^{-}\\) & 3 & 0 & -1 & 1 & -1 & -3 & 0 & 1 & -1 & 1& \\(\\Gamma_{1}^{+}\\)+\\(\\Gamma_{3}^{+}\\)+\\(\\Gamma_{4}^{+}\\)+\\(\\Gamma_{5}^{+}\\) \\\\ \\hline\n\\(\\Gamma_{5}^{-}\\) & 3 & 0 & -1 & -1 & 1 & -3 & 0 & 1 & 1 & -1& \\(\\Gamma_{2}^{+}\\)+\\(\\Gamma_{3}^{+}\\)+\\(\\Gamma_{4}^{+}\\)+\\(\\Gamma_{5}^{+}\\) \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nWe now proceed to determine the phonon modes. Let \\(\\gamma\\) denote any of the one dimensional phonons (\\(\\Gamma_{1}^{+}, \\Gamma_{2}^{+}, \\Gamma_{1}^{-}, \\Gamma_{2}^{-}\\)). \\(\\gamma\\) is carried by a set of \\(n\\) atoms labelled by their Wyckoff positions, such as \\(12(e)\\) in \\(O_{h}^{2}\\). Among the 48 rotational operations, \\(48\/n\\) of them, denoted by \\{\\(\\hat \\alpha|\\bi{\\tau_{\\alpha}}\\)\\}, do not move the atom or move the atom to its equvalent position: \\( \\{\\hat \\alpha | \\bi\\tau_{\\alpha}\\} \\bi{r_i}=\\bi{r_j}\\) and \\(\\bi{r_j} = \\bi{r_i} +\\bi{R_{L}}\\) where\\(\\bi{r_i}\\) is the position of the original atom and \\(\\bi{R_{L}}\\) is any lattice vector; the rest (\\(48-48\/n\\)) operations, denoted by \\(\\{\\hat \\beta | \\bi{\\tau_{\\beta}}\\}\\), would shift the original atom to its inequivalent positions: \\(\\{ \\hat \\beta | \\bi{\\tau_{\\beta}} \\} \\bi{r_i}=\\bi{r_j}\\) and \\(\\bi{r_j} \\neq \\bi{r_i} + \\bi{R_{L}}\\). If a set of \\(n\\) atoms carry \\(\\gamma\\) phonon, the atomic displacement vector \\(\\bi{V}\\) at atom \\(\\bi{r_{i}}\\) must satisfy: \\(\\hat {g} \\cdot \\bi{V} = \\chi^{\\gamma}(\\hat {g}) \\cdot \\bi{V} \\), where \\(\\hat g\\) is any of the 48 rotational operations. 
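To make this criterion concrete, the following short numerical sketch (our own illustration, not part of the original derivation; the function name and the \\(C_{2z}\\) example are hypothetical) stacks the conditions \\(\\hat{g} \\cdot \\bi{V} = \\chi^{\\gamma}(\\hat{g}) \\cdot \\bi{V}\\) for a supplied list of rotation matrices and characters and returns a basis of the allowed displacement vectors; an empty result means the mode is forbidden.\n\\begin{verbatim}\n# Minimal sketch: find displacement vectors V with R V = chi(R) V for\n# every supplied rotation R (3x3 matrix) and character chi = +/-1.\n# A non-empty basis means the one dimensional mode is allowed.\nimport numpy as np\n\ndef allowed_displacements(rotations, characters, tol=1e-8):\n    # stack the linear constraints (R - chi*I) V = 0 and take their\n    # common null space via a singular value decomposition\n    A = np.vstack([np.asarray(R, float) - chi * np.eye(3)\n                   for R, chi in zip(rotations, characters)])\n    _, s, vt = np.linalg.svd(A)\n    return vt[s < tol]   # rows form a basis of the solution space\n\n# Toy check with two operations: the identity (chi = +1) and a two-fold\n# rotation about z (chi = -1); the allowed displacements span the x-y plane.\nC2z = np.diag([-1.0, -1.0, 1.0])\nprint(allowed_displacements([np.eye(3), C2z], [1, -1]))\n\\end{verbatim}\nIn practice one would pass only the \\(48\/n\\) operations \\(\\hat{\\alpha}\\) defined above, which, as explained below, are the only ones that need to be checked. 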
In general, \\(\\bi{V}\\) has the form (\\(v_{x}, v_{y}, v_{z}\\)), and any solution for \\(\\bi{V} = (v_{x}, v_{y}, v_{z})\\) gives an allowed phonon mode. With the definition of \\(\\hat \\alpha\\), the set of 48 equations can be simplified to \\(\\hat \\alpha \\cdot \\bi{V} = \\chi^{\\gamma}(\\hat {\\alpha}) \\cdot \\bi{V}\\), which reduces the number of equations to \\(48\/n\\). That is, only operations that shift the atom to itself or its equivalent positions need to be considered. The existence of solutions for \\(\\bi{V}\\) is necessary and sufficient for the existence of one dimensional phonons.\n\n\n\n\\section{Results on one dimensional phonons}\n\nAll Wyckoff positions in \\(O_{h}\\) space groups are considered in this section. When dealing with phonons, the primitive cell is more suitable than the unit cell. Therefore, a slightly different notation is adopted, compared to the {\\em International Tables for Crystallography}~\\cite{Wyckoff}: although the letters of the Wyckoff positions remain the same, we list the number of atoms in one primitive cell instead of one unit cell. For example, in \\(O_{h}^{10}\\), \\(48(h)\\) in this note is the \\(96(h)\\) in the {\\em International Tables for Crystallography}. Directions of the phonon modes for the (\\(\\Gamma_{1}^{+}\\), \\(\\Gamma_{2}^{+}\\), \\(\\Gamma_{1}^{-}\\), \\(\\Gamma_{2}^{-}\\)) phonons are listed in table~\\ref{tab:Oh}.\n\n\\begin{center}\n\\begin{longtable}{|l|l|l|l|l|} \\caption{One dimensional phonon modes in \\(O_{h}\\) space groups. \\label{tab:Oh}}\\\\\n\\hline\nWyckoff positions & \\(\\Gamma_{1}^{+}\\) & \\(\\Gamma_{2}^{+}\\) & \\(\\Gamma_{1}^{-}\\) & \\(\\Gamma_{2}^{-}\\) \\endhead\n\\hline \\endlastfoot\n\\hline \n\\multicolumn{5}{l}{\\(O_{h}^{1}\\)} \\\\ \\hline\n48, (n), (x, y, z)&(x, y, z)&(x, y, z)&(x, y, z)&(x, y, z) \\\\ \\hline\n24, (m), (x, x, z)&(x, x, z)&(x, -x, 0)&(x, -x, 0)&(x, x, z) \\\\ \\hline\n24, (l), (1\/2, y, z)&(0, y, z)&(0, y, z)&(x, 0, 0)&(x, 0, 0) \\\\ \\hline\n24, (k), (0, y, z)&(0, y, z)&(0, y, z)&(x, 0, 0)&(x, 0, 0) \\\\ \\hline\n12, (j), (1\/2, y, y)&(0, y, y)&(0, y, -y)&&(x, 0, 0) \\\\ \\hline\n12, (i), (0, y, y)&(0, y, y)&(0, y, -y)&&(x, 0, 0) \\\\ \\hline\n12, (h), (x, 1\/2, 0)&(x, 0, 0)&(x, 0, 0)&& \\\\ \\hline\n 8, (g), (x, x, x)&(x, x, x)&&&(x, x, x)\\\\ \\hline\n 6, (f), (x, 1\/2, 1\/2)&(x, 0, 0)&&&\\\\ \\hline\n 6, (e), (x, 0, 0)&(x, 0, 0)&&&\\\\ \\hline\n 3, (d), (1\/2, 0, 0)& & & &\\\\ \\hline\n 3, (c), (0, 1\/2, 1\/2)& & & &\\\\ \\hline\n 1, (b), (1\/2, 1\/2, 1\/2)& & & &\\\\ \\hline\n 1, (a), (0, 0, 0)&&&&\\\\ \\hline\n\\multicolumn{5}{l}{\\(O_{h}^{2}\\)} \\\\ \\hline\n48, (i), (x, y, z)&(x, y, z)&(x, y, z)&(x, y, z)&(x, y, z) \\\\ \\hline\n24, (h), (0, y, y)&(0, y, y)&(x, y, -y)&(0, y, y)&(x, y, -y) \\\\ \\hline\n24, (g), (x, 0, 1\/2)&(x, 0, 0)&(x, 0, 0)&(x, 0, 0)&(x, 0, 0) \\\\ \\hline\n16, (f), (x, x, x)&(x, x, x)&(x, x, x)&(x, x, x)&(x, x, x) \\\\ \\hline\n12, (e), (x, 0, 0)&(x, 0, 0)&&(x, 0, 0)& \\\\ \\hline\n12, (d), (1\/4, 0, 1\/2)&&(x, 0, 0)&&(x, 0, 0) \\\\ \\hline\n8, (c), (1\/4, 1\/4, 1\/4)&&&(x, x, x)&(x, x, x) \\\\ \\hline\n6, (b), (0, 1\/2, 1\/2)&&&&\\\\ \\hline\n2, (a), (0, 0, 0)&&&&\\\\ \\hline\n\\multicolumn{5}{l}{\\(O_{h}^{3}\\)} \\\\ \\hline\n48, (l), (x, y, z)&(x, y, z)&(x, y, z)&(x, y, z)&(x, y, z) \\\\ \\hline\n24, (k), (0, y, z)&(0, y, z)&(0, y, z)&(x, 0, 0)&(x, 0, 0) \\\\ \\hline\n24, (j), (1\/4, y, y+1\/2)&(0, y, y)&(x, y, -y)&(0, y, y)&(x, y, -y) \\\\ \\hline\n16, (i), (x, x, x)&(x, x, x)&(x, x, x)&(x, x, x)&(x, x, x) \\\\ 
\\hline\n12, (h), (x, 1\/2, 0)&(x, 0, 0)&(x, 0, 0)&&\\\\ \\hline\n12, (g), (x, 0, 1\/2)&(x, 0, 0)&(x, 0, 0)&&\\\\ \\hline\n12, (f), (x, 0, 0)&(x, 0, 0)&(x, 0, 0)&&\\\\ \\hline\n8, (e), (1\/2, 1\/2, 1\/2)&&(x, x, x)&&(x, x, x)\\\\ \\hline\n6, (d), (1\/4, 1\/2, 0)&&(x, 0, 0)&&\\\\ \\hline\n6, (c), (1\/4, 0, 1\/2)&&(x, 0, 0)&&\\\\ \\hline\n6, (b), (0, 1\/2, 1\/2)&&&&\t\\\\ \\hline\n2, (a), (0, 0, 0)&&&&\t\\\\ \\hline\n\\multicolumn{5}{l}{\\(O_{h}^{4}\\)} \\\\ \\hline\n48, (l), (x, y, z)&(x, y, z)&(x, y, z)&(x, y, z)&(x, y, z) \\\\ \\hline\n24, (k), (x, x, z)&(x, x, z)&(x, -x, 0)&(x, -x, 0)&(x, x, z) \\\\ \\hline\n24, (j), (1\/4, y, y+1\/2)&(0, y, y)&(x, y, -y)&(0, y, y)&(x, y, -y) \\\\ \\hline\n24, (i), (1\/4, y, -y+1\/2)&(0, y, -y)&(x, y, y)&(0, y, -y)&(x, y, y) \\\\ \\hline\n24, (h), (x, 0, 1\/2)&(x, 0, 0)&(x, 0, 0)&(x, 0, 0)&(x, 0, 0) \\\\ \\hline\n12, (g), (x, 0, 0)&(x, 0, 0)&&&(x, 0, 0) \\\\ \\hline\n12, (f), (1\/4, 0, 1\/2)&&(x, 0, 0)&&(x, 0, 0) \\\\ \\hline\n8, (e), (x, x, x)&(x, x, x)&&&(x, x, x)\\\\ \\hline\n6, (d), (0, 1\/2, 1\/2)\t&&&& \\\\ \\hline\n4, (c), (3\/4, 3\/4, 3\/4)&&&&(x, x, x) \\\\ \\hline\n4, (b), (1\/4, 1\/4, 1\/4)&&&&(x, x, x) \\\\ \\hline\n2, (a), (0, 0, 0)&&&&\\\\ \\hline\n\\multicolumn{5}{l}{\\(O_{h}^{5}\\)} \\\\ \\hline\n48, (l), (x, y, z)&\t(x, y, z)&(x, y, z)&(x, y, z)&(x, y, z)\\\\ \\hline\n24, (k), (x, x, z)&(x, x, z)&(x, -x, 0)&(x, -x, 0)&(x, x, z)\\\\ \\hline\n24, (j), (0, y, z)&\t(0, y, z)&(0, y, z)&(x, 0, 0)&(x, 0, 0)\\\\ \\hline\n12, (i), (1\/2, y, y)&(0, y, y)&(0, y, -y)&&(x, 0, 0)\\\\ \\hline\n12, (h), (0, y, y)&(0, y, y)&(0, y, -y)&&(x, 0, 0)\\\\ \\hline\n12, (g), (x, 1\/4, 1\/4)&(x, 0, 0)&&&(x, 0, 0)\\\\ \\hline\n8, (f), (x, x, x)&(x, x, x)&&&(x, x, x)\\\\ \\hline\n6, (e), (x, 0, 0)&(x, 0, 0)&&&\\\\ \\hline\n6, (d), (0, 1\/4, 1\/4)&&&&(x, 0, 0)\\\\ \\hline\n2, (c), (1\/4, 1\/4, 1\/4)&&&&\\\\ \\hline\n1, (b), (1\/2, 1\/2, 1\/2)&&&&\\\\ \\hline\n1, (a), (0, 0, 0)&&&&\\\\ \\hline\n\\multicolumn{5}{l}{\\(O_{h}^{6}\\)} \\\\ \\hline\n48, (j), (x, y, z)&(x, y, z)&(x, y, z)&(x, y, z)&(x, y, z)\\\\ \\hline\n24, (i), (0, y, z)&(0, y, z)&(0, y, y)&(x, 0, 0)&(x, 0, 0)\\\\ \\hline\n24, (h), (1\/4, y, y)&(0, y, y)&(x, y, -y)&\t(0, y, y)&(x, y, -y)\\\\ \\hline\n16, (g), (x, x, x)&(x, x, x)&(x, x, x)&(x, x, x)&(x, x, x)\\\\ \\hline\n12, (f), (x, 1\/4, 1\/4)&(x, 0, 0)&(x, 0, 0)&&\\\\ \\hline\n12, (e), (x, 0, 0)&(x, 0, 0)&(x, 0, 0)&&\\\\ \\hline\n6, (d), (0, 1\/4, 1\/4)&&&(x, 0, 0)&\\\\ \\hline\n6, (c), (1\/4, 0, 0)&&(x, 0, 0)&&\\\\ \\hline\t\n2, (b), (0, 0, 0)&&&&\t\\\\ \\hline\n2, (a), (1\/4, 1\/4, 1\/4)&&&&\\\\ \\hline\n\\multicolumn{5}{l}{\\(O_{h}^{7}\\)} \\\\ \\hline\n48, (i), (x, y, z)&(x, y, z)&(x, y, z)&(x, y, z)&(x, y, z)\\\\ \\hline\n24, (h), (1\/8, y, -y+1\/4)&(0, y, -y)&(x, y, y)&(0, y, -y)&(x, y, y)\\\\ \\hline\n24, (g), (x, x, z)&(x, x, z)&(x, -x, 0)&(x, -x, 0)&\t(x, x, z)\\\\ \\hline\n12, (f), (x, 0, 0)&(x, 0, 0)\t&&&(x, 0, 0)\\\\ \\hline\n8, (e), (x, x, x)&(x, x, x)&&&(x, x, x)\\\\ \\hline\n4, (d), (5\/8, 5\/8, 5\/8)&&&&(x, x, x)\\\\ \\hline\n4, (c), (1\/8, 1\/8, 1\/8)&&&&(x, x, x)\\\\ \\hline\n2, (b), (1\/2, 1\/2, 1\/2)&&&&\\\\ \\hline\n2, (a), (0, 0, 0)&&&&\t\\\\ \\hline\n\\multicolumn{5}{l}{\\(O_{h}^{8}\\)} \\\\ \\hline\n48, (h), (x, y, z)&(x, y, z)&(x, y, z)&(x, y, z)\t&(x, y, z)\\\\ \\hline\n24, (g), (1\/8, y, -y+1\/4)&(0, y, -y)&(x, y, y)&(0, y, -y)&(x, y, y)\\\\ \\hline\n24, (f), (x, 0, 0)&(x, 0, 0)&(x, 0, 0)&(x, 0, 0)&(x, 0, 0)\\\\ \\hline\n16, (e), (x, x, x)&(x, x, x)&(x, x, x)&(x, x, x)&(x, x, x)\\\\ \\hline\n12, (d), (1\/4, 0, 0)&&(x, 0, 
0)&(x, 0, 0)&\\\\ \\hline\n8, (c), (3\/8, 3\/8, 3\/8)&&&(x, x, x)&(x, x, x)\\\\ \\hline\n8, (b), (1\/8, 1\/8, 1\/8)&&\t(x, x, x)&&(x, x, x)\\\\ \\hline\n4, (a), (0, 0, 0)&&&&\t\\\\ \\hline\n\\multicolumn{5}{l}{\\(O_{h}^{9}\\)} \\\\ \\hline\n48, (l), (x, y, z)&(x, y, z)&\t(x, y, z)&(x, y, z)&(x, y, z)\\\\ \\hline\n24, (k), (x, x, z)&(x, x, z)&(x, -x, 0)&(x, -x, 0)&(x, x, z)\\\\ \\hline\n24, (j), (0, y, z)&(0, y, z)&(0, y, y)&(x, 0, 0)&(x, 0, 0)\\\\ \\hline\n24, (i), (1\/4, y, -y+1\/2)&(0, y, -y)&(x, y, y)&(0, y, -y)&(x, y, y)\\\\ \\hline\n12, (h), (0, y, y)&(0, y, y)&(0, y, -y)&&(x, 0, 0)\\\\ \\hline\n12, (g), (x, 0, 1\/2)&(x, 0, 0)&(x, 0, 0)&&\\\\ \\hline\n8, (f), (x, x, x)&(x, x, x)&&&(x, x, x)\\\\ \\hline\n6, (e), (x, 0, 0)&(x, 0, 0)&&&\\\\ \\hline\n6, (d), (1\/4, 0, 1\/2)&&(x, 0, 0)&&\\\\ \\hline\n4, (c), (1\/4, 1\/4, 1\/4)&&&&(x, x, x)\\\\ \\hline\n3, (b), (0, 1\/2, 1\/2)&&&&\\\\ \\hline\n1, (a), (0, 0, 0)&&&&\\\\ \\hline\n\\multicolumn{5}{l}{\\(O_{h}^{10}\\)} \\\\ \\hline\n48, (h), (x, y, z)&(x, y, z)&(x, y, z)& (x, y, z)&(x, y, z) \\\\ \\hline\n24, (g), (1\/8, y, -y+1\/4)&\t(0, y, -y)&(x, y, y)&(0, y, -y)&(x, y, y) \\\\ \\hline\n24, (f), (x, 0, 1\/4)&(x, 0, 0)&(x, 0, 0)&(x, 0, 0)&(x, 0, 0) \\\\ \\hline\n16, (e), (x, x, x)&(x, x, x)&(x, x, x)&(x, x, x)&(x, x, x) \\\\ \\hline\n12, (d), (3\/8, 0, 1\/4)&&(x, 0, 0)&(x, 0, 0)& \\\\ \\hline\n12, (c), (1\/8, 0, 1\/4)&&(x, 0, 0) &&(x, 0, 0) \\\\ \\hline\n8, (b), (1\/8, 1\/8, 1\/8)&&(x, x, x)&&(x, x, x) \\\\ \\hline\n8, (a), (0, 0, 0)&&&(x, x, x)&(x, x, x) \\\\ \\hline\n\\end{longtable}\n\\end{center}\n\nThe meaning of the notations in table~\\ref{tab:Oh} is as follows: for a certain phonon mode belonging to a set of atoms, the number of free parameters in the phonon modes is the number of its appearances. For example, in \\(O_{h}^{10}\\), \\(\\Gamma_{2}^{+}\\) phonon of Wyckoff position \\(24(g)\\) \\((1\/8, y, -y+1\/4)\\) is labelled \\((x, y, y)\\). It means that \\(\\Gamma_{2}^{+}\\) phonon modes appear twice and the \\(\\Gamma_{2}^{+}\\) phonon modes on atom \\((1\/8, y, -y+1\/4)\\) are in the (1, 0, 0) direction and (0, 1, 1) direction. The actual atomic displacements are the linear combinations of the two phonon modes, the coefficients of which cannot be determined by symmetry alone. It is worth mentioning that the number of \\(\\Gamma_{1}^{+}\\) phonons equals the number of free parameters of the Wyckoff positions because totally symmetric distortions preserve the symmetry. For example, Wyckoff position \\(24(g)\\) \\((1\/8, y, -y+1\/4)\\) in \\(O_{h}^{10}\\) has only one \\(\\Gamma_{1}^{+}\\) phonon mode which is in the (0, 1, -1) direction. Also one can see that any one dimensional phonon does not happen more than three times in one set of Wyckoff positions.\n\n\\section{Two and three dimensional phonons}\n\nAdditional informations about two and three dimensional phonons can also be obtained from the analysis on one dimensional phonons. In general, the phonon structure is determined by the decomposition of mechanical representation, which is the direct product of permutation group and the vector representation~\\cite{Birman}. Permutation group is a group formed by taking atomic positions as basis functions. Its character under a certain symmetry operation equals the number of atoms unchanged or can be shifted back to itself by lattice vector. The vector representation is formed with basis functions (\\(x, y, z\\)) and it belongs to \\(\\Gamma_{4}^{-}\\) in \\(O_{h}\\) space groups. 
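As a numerical illustration of this decomposition, the short sketch below reduces the mechanical representation \\(\\chi_{mech}=\\chi_{perm}\\,\\chi_{vec}\\) over the ten classes of the \\(O_{h}\\) point group with the standard reduction formula \\(n_{i} = \\frac{1}{48}\\sum_{c} N_{c}\\,\\chi_{mech}(c)\\,\\chi_{i}(c)\\). It is a simplified, point-group-only sketch (fractional translations are ignored) and uses the trivial permutation character of a single atom at the origin, so it is meant only as a check of the bookkeeping rather than a substitute for the full space group analysis:
\\begin{verbatim}
import numpy as np

# Class sizes of the O_h point group (order 48), classes ordered as in the
# character table: E, 8C3, 3C2, 6C4, 6C2', I, 8S6, 3sigma_h, 6S4, 6sigma_d.
N_c = np.array([1, 8, 3, 6, 6, 1, 8, 3, 6, 6])
chars = {
    "G1+": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "G2+": [1, 1, 1, -1, -1, 1, 1, 1, -1, -1],
    "G3+": [2, -1, 2, 0, 0, 2, -1, 2, 0, 0],
    "G4+": [3, 0, -1, 1, -1, 3, 0, -1, 1, -1],
    "G5+": [3, 0, -1, -1, 1, 3, 0, -1, -1, 1],
    "G1-": [1, 1, 1, 1, 1, -1, -1, -1, -1, -1],
    "G2-": [1, 1, 1, -1, -1, -1, -1, -1, 1, 1],
    "G3-": [2, -1, 2, 0, 0, -2, 1, -2, 0, 0],
    "G4-": [3, 0, -1, 1, -1, -3, 0, 1, -1, 1],
    "G5-": [3, 0, -1, -1, 1, -3, 0, 1, 1, -1],
}

def reduce_rep(chi):
    """How many times each irrep appears in a representation with character chi."""
    chi = np.asarray(chi, dtype=float)
    return {name: int(round(np.sum(N_c * chi * np.array(c)) / 48))
            for name, c in chars.items()}

# Simplest possible site: a single atom at the origin.  Every operation leaves
# it in place, so the permutation character is 1 in every class and the
# mechanical representation is simply the vector representation Gamma_4^-.
chi_perm = np.ones(10)
chi_mech = chi_perm * np.array(chars["G4-"])
print(reduce_rep(chi_mech))   # only Gamma_4^- appears (the three acoustic modes)
\\end{verbatim}
For a single atom at the origin the mechanical representation reduces to \\(\\Gamma_{4}^{-}\\) alone, consistent with the empty one dimensional entries of the \\(1(a)\\) positions in table~\\ref{tab:Oh} and with the observation below that three dimensional phonons are always present.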
The direct products of \\(\\Gamma_{4}^{-}\\) with different representations are given in table~\\ref{tab:charactertable}. \n\n\nCareful inspection of the tables shows many interesting results. There will always be three dimensional phonons, no matter how simple the structure is. Each one dimensional phonon is accompanied by one two dimensional phonon and two three dimensional phonons. This places a restriction on the number of appearances of one dimensional phonons: \\(n \\times 3 \\geq \\sum_{\\gamma}{m \\times 9} \\label{eq:restriction1}\\), where \\(n\\) is the number of atoms in one set of Wyckoff positions, and \\(m\\) is the number of one dimensional phonons. Summation over \\(\\gamma\\) means summation over all the one dimensional phonons.\n\nThe characters of the permutation representation must be non-negative. One dimensional phonons correspond to \\(\\Gamma_{4}^{+}\\), \\(\\Gamma_{5}^{+}\\), \\(\\Gamma_{4}^{-}\\), \\(\\Gamma_{5}^{-}\\) in the permutation representation, and these all have negative characters under the \\(C_{2}\\) symmetry operation (see table~\\ref{tab:charactertable}). Other one or two dimensional representations, which have positive characters under the \\(C_{2}\\) operation, must be present in the permutation representation. This makes a further restriction on the number of appearances of one dimensional phonons:\n\\begin{equation}\nn \\times 3 \\geq \\sum_{\\gamma}{m \\times 12} \\label{eq:restriction2}\n\\end{equation}\nThis means that at least four atoms are needed to carry a single one dimensional phonon, and this rule is indeed observed in all ten \\(O_{h}\\) space groups (table~\\ref{tab:Oh}). The difference between \\(n \\times 3\\) and \\( {\\sum_{\\gamma} {m \\times 12}}\\) is filled entirely by three dimensional phonons.\n\n\\section{Cubic lattice system}\n\nThe same argument can be applied to the other cubic lattice systems: \\(T\\), \\(T_{h}\\), \\(T_{d}\\) and \\(O\\). The direct product tables of those point groups \\cite{KDWS} show that any one dimensional phonon is accompanied by one two dimensional phonon and two three dimensional phonons, giving a total dimension of 9. These nine phonon dimensions correspond to one three dimensional representation in the permutation representation, and all three dimensional representations have negative characters under one set of \\(C_{2}\\) operations (there are two sets of \\(C_{2}\\) operations in \\(O\\) and \\(O_{h}\\)). The permutation representation must have non-negative characters; therefore other one or two dimensional representations with positive characters under \\(C_{2}\\) must be present. This gives the restriction for the \\(T\\), \\(T_{h}\\), \\(T_{d}\\), \\(O\\) and \\(O_{h}\\) space groups: \\(n \\times 3 \\geq \\sum_{\\gamma}{m \\times 12} \\label{eq:restriction3}\\), where the summation over \\(\\gamma\\) is the summation over all one dimensional phonons in the \\(T\\), \\(T_{h}\\), \\(T_{d}\\), \\(O\\) and \\(O_{h}\\) space groups. It should be noticed that, for \\(T\\) and \\(T_{h}\\), the two dimensional representation is actually two one dimensional representations forced to stick together by time reversal symmetry.\n\nThe same analysis of one dimensional phonons that led to table~\\ref{tab:Oh} can be extended to the \\(T\\), \\(T_{h}\\), \\(T_{d}\\) and \\(O\\) space groups. However, due to limitations of space, they are not tabulated.\n\n\\section{Magnons for magnetic space groups with cubic unitary group}\n\nWithout an external magnetic field, the time reversal operator \\(\\theta\\) is also a symmetry operation of the system. 
The inclusion of the time reversal operator generates the magnetic space group \\(\\bf M\\), which contains an equal number of unitary and antiunitary elements: \\(\\bf M\\) = \\(\\bf H\\) + \\(\\bf A \\bf H\\). \\(\\bf H\\) is an ordinary space group and \\(\\bf A\\) is an antiunitary coset representative. The magnon symmetry is characterized within the space group \\(\\bf H\\)~\\cite{dimmock}. Representations of magnons are contained in the direct product of the permutation representation and the ``pseudo vector\" representation (\\(R_{x}\\), \\(R_{y}\\), \\(R_{z}\\)).\n\nFor those magnetic space groups with \\(\\bf H\\) being one of the cubic space groups (\\#195 -- \\#230), we find that the same analysis as for phonons can also be applied. The ``pseudo vector\" belongs to one of the three dimensional representations in the \\(T\\), \\(T_{h}\\), \\(T_{d}\\), \\(O\\) and \\(O_{h}\\) point groups, and any one or two dimensional representation must be contained in the direct product of the ``pseudo vector\" representation and a three dimensional representation of the permutation representation. As for phonons, these three dimensional representations have negative characters under the \\(C_{2}\\) operation; therefore some one or two dimensional representations must be present in the permutation representation, whose characters must be non-negative. This leads to the same restriction: at least four atoms are needed to have one or two dimensional magnons if the unitary group \\(\\bf H\\) of the magnetic space group \\(\\bf M\\) belongs to the cubic space groups (\\#195 -- \\#230).\n\n\\section{Discussion}\n\nWe are now in a position to solve the problem that initiated this study. Table~\\ref{tab:Oh} shows that the \\(\\Gamma_{2}^{+}\\) phonon requires 6 inequivalent atoms for \\(O_{h}^{3}\\), \\(O_{h}^{6}\\), \\(O_{h}^{9}\\), 8 inequivalent atoms for \\(O_{h}^{8}\\), \\(O_{h}^{10}\\), 12 inequivalent atoms for \\(O_{h}^{1}\\), \\(O_{h}^{2}\\), \\(O_{h}^{4}\\), \\(O_{h}^{5}\\) and 24 inequivalent atoms for \\(O_{h}^{7}\\). This explains the absence of \\(\\Gamma_{2}^{+}\\) phonons in common \\(O_{h}\\) space group crystals: diamond (\\#227; 2(\\(a\\)))~\\cite{warren}, niobium monoxide (\\#221; 3(\\(c\\)), 3(\\(d\\))), fluorite (\\#225; 1(\\(a\\)), 2(\\(c\\)))~\\cite{verstraete}, caesium chloride (\\#221; 1(\\(a\\)), 1(\\(b\\))), sodium chloride (\\#225; 1(\\(a\\)), 1(\\(b\\)))~\\cite{burstein}, Pt\\(_3\\)O\\(_4\\) (\\#229; 3(\\(b\\)), 4(\\(c\\))), cubic spinel (\\#227; 2(\\(a\\)), 4(\\(d\\)), 8(\\(e\\)))~\\cite{dewijs}, perovskite (\\#221; 1(\\(a\\)), 3(\\(c\\)))~\\cite{stirling} and others. Among all the \\(O_{h}\\) crystals that the authors are familiar with, \\(A15\\) is the only structure to have a \\(\\Gamma_{2}^{+}\\) phonon.\n\nThe \\(A15\\) structure has a simple cubic lattice. The general form for \\(A15\\) is A\\(_3\\)B. The primitive cell contains two formula units and the Wyckoff positions are 2(\\(a\\)) and 6(\\(c\\)) (see figure~\\ref{fig:A15}). The required \\(\\Gamma_{2}^{+}\\) phonon mode is shown in figure~\\ref{fig:A15}. Based on the discussion in section IV, one \\(\\Gamma_{3}^{+}\\), one \\(\\Gamma_{4}^{+}\\) and one \\(\\Gamma_{5}^{+}\\) accompany the \\(\\Gamma_{2}^{+}\\) phonon and the rest are three dimensional phonons. This prediction agrees with actual phonon calculations, which give the zone center phonons for \\(A15\\) crystals as \\(\\Gamma_{2}^{+}\\) + \\(\\Gamma_{3}^{+}\\) + \\(\\Gamma_{4}^{+}\\) + \\(\\Gamma_{5}^{+}\\) + 3 \\(\\Gamma_{4}^{-}\\) + 2 \\(\\Gamma_{5}^{-}\\)~\\cite{Tutuncu}. 
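The bookkeeping behind this assignment can be checked with a few lines (a simple counting sketch; the mode list is the one quoted above from~\\cite{Tutuncu}): the quoted irreps must account for the \\(3N=24\\) degrees of freedom of the eight atoms in the primitive cell, and the single \\(\\Gamma_{2}^{+}\\) phonon carried by the \\(6(c)\\) set satisfies the restriction of section IV.
\\begin{verbatim}
# Dimensions of the O_h irreps (Koster notation).
dim = {"G1+": 1, "G2+": 1, "G3+": 2, "G4+": 3, "G5+": 3,
       "G1-": 1, "G2-": 1, "G3-": 2, "G4-": 3, "G5-": 3}

# Zone centre modes quoted above for the A15 structure
# (the 3 Gamma_4^- include the acoustic branch).
a15_modes = {"G2+": 1, "G3+": 1, "G4+": 1, "G5+": 1, "G4-": 3, "G5-": 2}

n_atoms = 2 + 6                              # 2(a) + 6(c) atoms per primitive cell
total_dim = sum(n * dim[irrep] for irrep, n in a15_modes.items())
assert total_dim == 3 * n_atoms              # 1+2+3+3+9+6 = 24 degrees of freedom

# Restriction from the text, n*3 >= 12*m, applied to the 6(c) set that
# carries the single one dimensional (Gamma_2^+) phonon.
n_c, m_one_dim = 6, 1
assert n_c * 3 >= 12 * m_one_dim             # 18 >= 12
print(total_dim)
\\end{verbatim}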
It is our next objective to perform the novel Raman scattering experiments on single crystals with the \\(A15\\) structure, where the \\(\\Gamma_{2}^{+}\\) phonon of energy \\(\\sim\\) 268 cm\\(^{-1}\\) can be well separated from its nearest neighbor, the \\(\\Gamma_{3}^{+}\\) phonon of energy \\(\\sim\\) 284 cm\\(^{-1}\\), in the case of Mo\\(_3\\)Si~\\cite{Tutuncu}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=10cm]{A15.jpg}\n\\caption{The unit cell of Mo\\(_3\\)Si, a typical crystal of the \\(A15\\) structure. Mo and Si atoms are indicated by small (blue) and big (red) spheres. The \\(\\Gamma_{2}^{+}\\) phonon mode is shown by (red) arrows.} \\label{fig:A15}\n\\end{center}\n\\end{figure}\n\nCompared with lattice systems of lower symmetry, the cubic system is peculiar in that the vector representation (\\(x, y, z\\)) belongs to a single three dimensional representation. This is the reason that one and two dimensional phonons occur rarely in crystals with cubic lattices. Table~\\ref{tab:Oh} shows that some one dimensional phonons require as many as 24 atoms. Usually one does not go to such complicated structures before trying simpler ones. On the other hand, although any set of Wyckoff positions with 48 inequivalent atoms, say, \\(48(h)\\) of the \\(O_{h}^{10}\\) space group, gives all one dimensional phonons (\\(\\Gamma_{1}^{\\pm}\\), \\(\\Gamma_{2}^{\\pm}\\)), simpler crystal structures are better in that fewer phonon modes lead to larger separations in energy, and resolution is always an issue in spectroscopy experiments. The number of one dimensional phonons equals the number of two dimensional phonons; therefore the search for two dimensional phonon modes is as difficult as the search for one dimensional phonon modes. This is also the case for magnetic space groups with cubic unitary groups. Following our tables, one can choose a crystal that is complicated enough to have the required (one or two dimensional) phonon mode, while keeping the structure as simple as possible.\\\\\n\nTo summarize, four rules are obtained for the phonon structure in the \\(T\\), \\(T_{h}\\), \\(T_{d}\\), \\(O\\) and \\(O_{h}\\) space groups and for the magnon structure of magnetic space groups whose unitary space group belongs to the \\(T\\), \\(T_{h}\\), \\(T_{d}\\), \\(O\\) and \\(O_{h}\\) space groups:\n\\begin{enumerate}\n\\item At least four inequivalent atoms are needed to produce a one dimensional phonon (magnon).\n\\item The number of one dimensional phonons (magnons) equals the number of two dimensional phonons (magnons); therefore at least four inequivalent atoms are also needed to produce two dimensional phonons (magnons).\n\\item Three dimensional phonons (magnons) always exist, no matter how simple the crystal is.\n\\item A restriction rule is obtained: \\(n \\times 3 \\geq {\\sum_{\\gamma}{m \\times 12}}\\), with \\(n\\) being the number of atoms in one Wyckoff set and \\(m\\) the number of one dimensional phonons (magnons).\n\\end{enumerate}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nThe Solar System contains a wonderful variety of planetary atmospheres. Each planet has its own unique formation and evolutionary history resulting in brilliant rings, icy moons, and complex atmospheric compositions. In order to better understand the physical processes taking place within these celestial bodies, we extract data across almost the entire electromagnetic spectrum, primarily in the visible and infrared as these are the wavelengths where light is best reflected and thermally emitted. 
Radiative transfer models allow us to simulate the physical processes on these bodies, producing spectra that can be compared to our data sets. Usually these radiative transfer models are coupled to an inverse method, which iteratively explores parameter space to retrieve the most likely set of parameters describing the atmosphere of the (exo)planet or brown dwarf (e.g., \\citet{nemesis08, waldmann15, line15}). \n\nThis method of determining the atmospheric structure and composition is reliant on the accuracy of our radiative transfer models. For Bayesian retrieval algorithms such as Nested Sampling (e.g. MultiNest \\citet{feroz2013}) and Markov Chain Monte Carlo (MCMC, e.g. EMCEE \\citet{foreman-mackey13}), the radiative transfer forward model will be executed millions of times while it explores a large parameter space. Because of the nature of these highly dimensional retrieval calculations, small differences in forward models could potentially produce non-trivial changes in the retrieved probability distributions. For example, if one model produces an extra, erroneous absorption band for, say, \\ce{NH3}, the retrieval would try to compensate for this error during the fitting procedure. While the `true' value of \\ce{NH3} should fit the spectrum (assuming the rest of the model is perfect), the extra absorption band would mean that the total amount of retrieved \\ce{NH3} will be smaller than the `true' value, as it finds a medium-ground answer which best fits all (true + erroneous) absorption bands. If we do not assume the rest of the model is perfect, then the problem is even graver - the retrieval will try to fit the spectrum by adjusting the model in a non-trivial way, such as adjusting the surface gravity or the temperature profile. In this way, one forward model error propagates through to all other retrieved posterior probability distribution functions (PDFs).\n\nA key difference between a number of radiative transfer models is how the gaseous absorption is calculated - this is because it is a computationally expensive step. A line-by-line calculation, where the absorption coefficient is calculated for the exact temperature and pressure for each individual line, is the most accurate and correct as it resolves each individual line exactly. However, this method is far too slow to be used in sophisticated retrieval procedures because of the enormous number of lines involved, especially at high temperatures (>1000K). Instead, the absorption coefficient is usually pre-calculated at a range of temperatures and pressures for a variety of gases, and interpolated to the necessary values. There are multiple methods of calculating the gaseous absorption, but here we limit ourselves to discussion of just two methods: individual gas cross-sections and correlated-$k$ \\citep[e.g.,][]{lacis1991}. We also briefly consider the opacity sampling method \\citep[see e.g.][]{hubeny2014}, though not as extensively as the previous two as this would require a major overhaul of our code. We note that, as long as the pressure and temperature resolution in the cross-section look-up table is high enough, there is no distinction between a high-resolution ($\\Delta\\nu = 0.01$ cm$^{-1}$) cross-section and a line-by-line calculation.\nIndividual gas cross-sections have been used to solve the radiative transfer equation \\citep{macdonald2017}. 
The cross-sections are absorption coefficients calculated from a line-by-line calculation on a grid of pressures and temperatures for a given gas, then integrated preserving area to a (typical) resolution of $\\Delta \\nu = 1$cm$^{-1}$. Other authors calculate a high-resolution cross-section and sample it at a resolution of $\\Delta \\nu = 1$cm$^{-1}$ \\citep{line15, sharp07}. \\citet{hedges2016} found that these individual gas cross-sections could have median differences of <1\\% for low-resolution ($R\\sim$ 100), up to 40\\% for medium-resolution ($R\\lesssim$ 5000), and over 100\\% to 1000\\% for high-resolution cross-sections ($R\\sim$ \\num{1e5}), introduced by various aspects of\npressure broadening. This is before the non-trivial task of quantifying these differences over a whole atmosphere.\n\nAnother approach to calculating the spectra of brown dwarfs and exoplanets is using premixed $k$-coefficients \\citep{saumon08} or on-the-fly mixing for the correlated-$k$ method \\citep{nemesis08, barstow14, jaemin14}. Note that for the rest of the paper, `cross-sections' refers to cross-sections which are not premixed, i.e. individual cross-sections for individual gases. $k$-tables are produced by performing line-by-line calculations of the absorption coefficient then rewriting the absorption coefficient strength distribution in terms of a cumulative frequency distribution over bins of specified wavenumber\/wavelength width by ranking and sampling the distribution according to absorption coefficient strength. The inverse of this distribution is known as the $k$-distribution \\citep{lacis1991}. The $k$-distribution is a smooth, monotonically increasing function and so can be sampled with only 10-20 points, compared with $\\sim$ $10^3$ to $10^6$ for the cross-section or line-by-line methods. Within a single atmospheric layer, we may simply combine the $k$-distributions of different gases by assuming that the lines are randomly overlapping. Note that there are many ways of combining $k$-coefficients for different gases. Among brown dwarf and gas giant atmosphere models, ATMO \\citep{tremblin2015,drummond2016} and PETIT \\citep{molliere2015} also use random overlap, HELIOS \\citep{malik2017} assumes perfect correlation, and \\citet{amundsen2016} use equivalent extinction. For further information see \\citet{amundsen2017}. Then, by assuming that the wavenumbers at which the (total) cross section takes a certain value are vertically correlated these $k$-distributions may be used to calculate the transmission, thermal emission or scattering of an atmosphere using the correlated-$k$ method. From previous studies \\citep{nemesis08}, we have found the correlated-$k$ approximation to be accurate to better than 5\\%. This is why we primarily use this as our benchmark absorption method for the paper. For more information on our correlated-$k$ method, refer to \\citep{nemesis08} and references therein. \n\nIn this paper, we argue that it is inaccurate to calculate the gaseous absorption in both single- and mixed-gas atmospheres by combining individual gas cross-sections with large- and moderately-sized bins ($\\Delta\\nu = 25, 1$ cm$^{-1}$), or inefficient with high-resolution bins ($\\Delta\\nu = 0.01$ cm$^{-1}$). Throughout the paper we compare our calculations to the correlated-$k$ method, and verify our methods with a line-by-line calculation. We also consider the effects of including \\ce{H2}-He pressure-broadening in the spectral calculations. 
\n\nSection one describes the methods and ingredients used in the correlated-$k$ tables (`$k$-tables') and cross-sections such as the pressure-broadening parameters and line lists used.\n\nSection two verifies our methods with simple single-layer atmospheres and a line-by-line calculation. More realistic atmospheres are then used to contrast the spectra of the two methods (correlated-$k$ and cross-sections) and the effect of introducing \\ce{H2}-He pressure broadening. The example cases include a simple Hot Jupiter, a typical late-T dwarf, and HD189733b in primary transit and secondary eclipse geometries.\n\nSection three summarizes our findings.\n\n\\section{Methods}\n\n\\subsection{Line Lists}\n\nA wide range of line list databases exists to provide the relevant molecular information for calculating spectra in the atmospheres of exoplanets and brown dwarfs. The key factors in deciding which line lists are most appropriate are the wavelength and temperature ranges for which they are valid. For example, a widely used line list database is the High Resolution Transmission (HITRAN) database \\citep{rothman13}, which is collated from multiple experimental and theoretical sources. This database is, however, used mainly for representing the spectrum of the Earth, and is therefore only reliable for temperatures up to $\\sim$400K, as it removes all insignificant line intensities at this temperature. Unfortunately, these insignificant line intensities become more significant with increasing temperature, meaning the relevant number of lines goes from thousands to billions as the temperature increases above 1500K. \n\nThe ExoMol project \\citep{tennyson12} contains a much more valid temperature (up to 1500-3000K) and wavelength range for the relevant gases involved in brown dwarf and exoplanet atmospheres. However, ExoMol line data are predicted solely from \\textit{ab initio} calculations rather than measured in a laboratory and as such line positions and intensities may contain larger errors than those found experimentally in, e.g. HITRAN, for overlapping validity ranges. We note that HITEMP, a sister database to HITRAN, contain line lists appropriate for higher temperatures, and similarly contains a mixture of experimental and \\textit{ab initio} data.\n\nIn Table~\\ref{tbl:linelists}, we present a summary of the line lists that we selected according to the criteria above, where $Q(T)$ lists the chosen source of the partition function data, necessary to calculate the line intensities at the temperature of interest. We also use \\ce{H2}-\\ce{H2} and \\ce{H2}-He collision-induced absorption from HITRAN 2012 \\citep{richard2012} for any calculation containing \\ce{H2}-He.\n\n\\subsection{Pressure-Broadening Coefficients}\n\nAnother measure of appropriateness for each line list is the validity of its pressure-broadening coefficients. Unfortunately, line list databases which are suitable for the pressure broadening found in \\ce{H2}-He-dominated atmospheres are scarce. HITRAN's Earth-centric lists exhibit broadening parameters suited to air-broadening (\\ce{N2} and \\ce{O2}). ExoMol now provide \\ce{H2} and He pressure broadening parameters for most of their gases. For all molecules ExoMol provides cross-sections with only Doppler (i.e. 
thermal) broadening, but no pressure broadening.\n\nIn order to correctly estimate the pressure broadening induced on spectral lines in brown dwarf and exoplanet atmospheres, we performed a literature search and found that \\citet{amundsen14} had openly discussed their sources for \\ce{H2}-He pressure broadening. This work was performed prior to ExoMol providing pressure broadening parameters for \\ce{H2} and He, hence we have not used those values. Instead, most of the information found in Table~\\ref{tbl:presbroad} that we use for our pressure broadening parameters is from \\citet{amundsen14}, with a few additional sources added. Note that we have not chosen to implement the same procedure for Na and K, as they both produce massive broadening wings in the optical and are not experimentally well sampled for \\ce{H2} and He broadening; instead we arbitrarily set the air-broadened widths to 0.075cm\\textsuperscript{-1} atm$^{-1}$ based on experience with terrestrial radiative transfer studies. We also note that using Voigt lineshapes, as we have done for all gases, is especially dubious for the Na and K resonance lines \\citet{burrows2001}. We intend on updating the Na and K $k$-tables in the future for more realistic lineshapes; however this will not drastically change the outcomes of this paper. For methods taken by other groups, see e.g. \\cite{tremblin2015, baudino2015}.\n\nIn many cases these sources only provided broadening parameters for the lower rotational quantum number, $J_{low}$, up to 8-20 for the maximum $J_{low}$ value depending on the gas, while our line lists contain data up to $J_{low}$ = 300. We implemented the broadening parameters into our line lists shown in Table~\\ref{tbl:presbroad} by first converting them into a single `foreign broadening parameter' $\\gamma_{0}$ assuming an atmospheric ratio of 85:15 for \\ce{H2}:He, using the weighted sum $\\gamma_{0} = \\gamma_{H_2} \\text{VMR}_{H_2} + \\gamma_{He} \\text{VMR}_{He}$, where VMR represents the volume mixing ratio. We then fitted this foreign broadening coefficient with a fourth order polynomial given by $\\gamma_{0}(J_{low}) = \\sum_{i=0}^{i=3} \\alpha_i J_{low}^i$, where $\\alpha_i$ represents each order's constant, up until the available data, then using the last available broadening coefficient for any $J_{low}$ higher than the maximum available. Note that \\citet{amundsen14} used a linear approach up to the maximum $J_{low}$, and then a constant value as we have. \n\nThis is of course not ideal, but does not introduce any complex error propagation which might be the case with a more sophisticated modelling that extrapolates to higher $J_{low}$. An empirical approach was considered, but while many gases show a gradual flattening of broadening coefficient with increasing $J_{low}$ \\citep[see e.g.,][]{buldyreva2011collisional}, there is no clear or simple relationship that allows us to model all of the gases after this maximum. In general, the constant value appears to be a good first-order approximation. An example for the molecule CO is presented in Figure~\\ref{fig:COpoly}, where we compare our new \\ce{H2}-He foreign broadening to that provided for air by HITRAN. \n\nThe pressure-broadened line half-width (cm\\textsuperscript{-1}), is calculated from $\\gamma = \\gamma_0 \\left(\\frac{P}{P_0}\\right) \\left(\\frac{T_0}{T}\\right)^n\n$ where the half-width at half-maximum of the Lorentzian profile $\\gamma_0$ is determined at a standard temperature $T_0$ and pressure $P_0$ (i.e. 
296K and 1 atm), $n$ is an empirically derived pressure-broadening temperature exponent found in Table~\\ref{tbl:presbroad}, and $T$ and $P$ are the desired temperature and pressure of the line respectively. The temperature exponent is constant over all quantum rotational numbers.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{broadening-co.pdf}\n \\caption{Pressure broadening coefficients for CO. Pink is air-broadening, blue circles are data points, green lines are fourth-order polynomial fits. Top panel: \\ce{H2} broadening coefficients. Middle panel: He broadening coefficients. Bottom: \\ce{H2}-He broadening coefficients with 85:15 ratio. }\n \\label{fig:COpoly}\n\\end{figure}\n\n\n\\subsection{Cross-section Comparison}\n\nIn this subsection we investigate two methods of solving the radiative transfer equation regarding optical depth, and show how they affect the spectra produced by forward models of brown dwarfs and exoplanets. As our retrievals are entirely dependent on our forward models being `correct', this is in an important point to address.\n\nWe now describe the exact details of the $k$-tables we use in our radiative transfer calculations.\n\nFor each gas in Table~\\ref{tbl:linelists}, we must calculate $k$-tables to be used in the correlated-$k$ method of solving the radiative transfer equations. We developed a method to remove insignificant lines in the line databases at specified temperatures, as this will shorten the length of computation time of the $k$-tables. The method first calculates the line intensities at a particular temperature, and orders the line intensities from smallest to largest. It then creates a cumulative sum from the smallest to largest line intensities, and removes the smallest n\\% of contributions to the total line intensities. We use n = \\num{1e-10}, a very small percentage, so that we do not underestimate any continuum effects. We do this for our entire temperature range, i.e. 100 - 2950K, over 20 equally spaced (150K) temperature points as different lines become important at different temperatures. At the lower temperatures, a majority of the lines are stripped away (leaving $\\sim$ \\num{1e4} lines), while at higher temperatures the line lists are essentially identical (up to $\\sim$ \\num{1e11}).\n\nFor the $k$-tables, we first calculate the underlying absorption spectrum monochromatically (at least 1\/6 Voigt line width) over the entire spectral region (0.3-30\\textmu m). We calculate the absorption and hence cumulative $k$-distributions at output wavelengths having separation $\\Delta \\lambda$=0.001\\textmu m, with square bins of width double that of the separation (i.e. a resolution of $\\lambda$=0.002\\textmu m). We use Gauss-Lobatto quadrature to sample the distributions. To include the contribution of wings of lines centred outside the spectral region of interest, the total wavelength range considered is defined as a range between $\\nu_{min} - \\nu_{cut}$ to $\\nu_{max} + \\nu_{cut}$. The total spectral interval is then subdivided into 1.5cm$^{-1}$ bins where line data is stored. Absorption at a particular output wavelength is then calculated by considering lines stored in the adjacent bins and the bin in the middle. Wing contributions from the lines centred outside these bins are calculated at the middle and end bins using a quadratic polynomial (because the wing shape follows a Lorentzian), and added on to the absorption at the output wavelength. 
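To make the construction explicit, the following toy sketch (synthetic numbers for a single spectral bin; it is not the code used in this work and omits the quadrature, bin book-keeping and line-wing treatment described above) shows the core of the $k$-distribution calculation: the high-resolution absorption coefficients are sorted into a cumulative distribution in $g$ and then interpolated onto a handful of $g$-ordinates.
\\begin{verbatim}
import numpy as np

def k_distribution(k_hi_res, g_points):
    """Build k(g) for one spectral bin: sort the high-resolution absorption
    coefficients, attach the cumulative fraction g of the bin that they
    occupy, and interpolate onto the requested g-ordinates."""
    k_sorted = np.sort(k_hi_res)
    g = (np.arange(k_sorted.size) + 0.5) / k_sorted.size
    return np.interp(g_points, g, k_sorted)

# Toy high-resolution opacity for one bin: a weak continuum plus one strong
# Lorentzian-shaped line (purely illustrative numbers).
nu = np.linspace(0.0, 1.5, 3000)          # wavenumber offset within the bin
k_hi = 1e-26 + 1e-22 * 0.01**2 / ((nu - 0.75)**2 + 0.01**2)

g_ord = np.array([0.05, 0.25, 0.5, 0.75, 0.95, 0.99])   # illustrative g-ordinates
print(k_distribution(k_hi, g_ord))        # smooth, monotonically increasing k(g)

# Integrating the full k(g) over g recovers the bin-mean opacity of the
# original fine grid, which is what makes the distribution a faithful summary.
g_full = (np.arange(k_hi.size) + 0.5) / k_hi.size
print(np.trapz(np.sort(k_hi), g_full), k_hi.mean())     # nearly identical
\\end{verbatim}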
For each line calculation, a line wing outside the cutoff (25cm$^{-1}$ from centre) is ignored.\n\nWe calculate a grid of spectral opacities with the 20 aforementioned temperature points, and 20 pressure levels equally spaced in logspace from $\\sim$ \\num{1e-7} to 100 atm, using 20 g-ordinates for Gauss-Lobatto quadrature. The lineshapes are given by a Voigt profile, where the line-wing cut-off ($\\nu_{cut}$) is at 25cm$^{-1}$ for all gases but alkali, where the cut-off is at 6000cm$^{-1}$.\n\nThe pressure-temperature grid is linearly interpolated in temperature-log-pressure space to find the desired spectral opacity. The $k$-tables may also be resampled into lower resolutions to increase calculation speed, if necessary. For a more detailed description of how these calculations are done, the reader is referred to \\citet{nemesis08}. These $k$-tables calculated here will be adopted in future works.\n\nFor the cross-section calculations in the literature, ExoMol \\citep{hill2013} calculate a high-resolution (<0.01cm$^{-1}$) cross-section and the integral of the cross-section is preserved for all requested resolutions. \\cite{macdonald2017} calculate their cross-sections at 0.01cm$^{-1}$ (from \\cite{hedges2016}), and bin them down (presumably preserving area) to 1cm$^{-1}$ resolution before using them. \\cite{line2016b} use a 1cm$^{-1}$ resolution opacity sampling method on pre-computed cross sections which have a variable resolution wavenumber grid that samples the lines at 1\/4 of their Voigt half widths from \\cite{freedman08, freedman2014}.\n\nOne of the major issues with low- and medium-resolution cross-sections is that they cannot combine gases effectively due to their insufficient resolution. The usual approach is to sum the individual contributions of the various gases and weight them by their volume mixing ratio \\citep{sharp07, waldmann15}. The multiplication property of transmission is only valid when these calculations are done monochromatically. To illustrate this issue for low- and medium-resolution cross-sections, consider the simple two-gas problem in a single layer where each gas has an identical transmission spectrum over a given interval which is larger than the bin size for the cross-section, $\\Delta \\nu$, shown in Figure~\\ref{fig:trans}. In this extreme example, the multiplication using these two methods produces mean transmissions that are a factor of two different. Correlated-$k$ preserves this multiplication property of transmissions \\citep{goody1989atmospheric}, i.e. the calculations are the same as for monochromaticity. Using cross-sections invokes using a mean transmission over the interval and hence the multiplication can produce fallacious results. \n\nThe opacity sampling method requires a sufficient number of samples within a specified bin to correctly estimate the area of the cross-section, though this number is ill-defined and depends on spectral resolution. \\cite{line2016b} show that the 1cm$^{-1}$ resolution opacity sampling is sufficient for their purposes (1.0-2.5\\textmu m); however, this may not be a good resolution for either higher spectral resolutions or longer wavelengths. 
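To illustrate why the sampling resolution matters, consider the following toy sketch (synthetic Lorentzian-shaped lines with made-up strengths and widths; it is not based on a real line list): sampling a narrow-lined cross-section on a 1cm$^{-1}$ grid either misses the line cores or lands on them by chance, so the band-mean opacity fluctuates strongly with the choice of grid, whereas an area-preserving average over the fine grid does not.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cross-section: 40 narrow Lorentzian-shaped lines (HWHM 0.02 cm-1,
# i.e. roughly Doppler-width-sized at long wavelengths) on a fine grid.
nu_fine = np.arange(0.0, 100.0, 0.001)                # cm-1, 0.001 cm-1 spacing
centres = rng.uniform(0.0, 100.0, 40)
strengths = 10.0**rng.uniform(-24, -21, 40)
hwhm = 0.02
k_fine = sum(s * (hwhm / np.pi) / ((nu_fine - c)**2 + hwhm**2)
             for s, c in zip(strengths, centres))

# "Opacity sampling": keep only the values on a 1 cm-1 grid.
k_sampled = k_fine[::1000]

# Area-preserving mean over the same interval.
print("band-mean, integrated    :", k_fine.mean())
print("band-mean, 1 cm-1 samples:", k_sampled.mean())
\\end{verbatim}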
We briefly explore the effects of using opacity sampling as a means of computing the gaseous absorption in the next section, but do not include it in our main results section as it would require a large overhaul of our code.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{trans.pdf}\n \\caption{An illustrative example of the effects on transmission when multiplying monochromatic and non-monochromatic wavelength regions. The Correlated-$k$ transmission (pink) squared is identical to the non-squared, as the first half of the interval is 1 squared, and the second half is 0 squared. The cross-section transmission (green) is averaged over the bin, such that when squared it becomes a yet smaller value (blue).}\n \\label{fig:trans}\n\\end{figure}\n\nThere are two key issues with using the Doppler-broadened cross-sections provided by the ExoMol project \\citep{hill2013}. One is that they fundamentally should not be used to solve the radiative transfer equation in mixed-gas atmospheres by combining single gas cross-sections as they are either ineffective (low- and medium-resolution) or computationally expensive (high-resolution). The second is that pressure-broadening is a key property in calculating atmospheric spectra that should not be ignored. These points will be addressed in the next section.\n\nAt low pressures, where all of the pressure-broadening has ceased and the lineshape is dominated by Doppler broadening, the mean opacity (i.e. $\\bar{k} = \\sum_{i=1}^{NG} k_i \\Delta g_i $) of our $k$-tables should be equivalent to the ExoMol cross-sections. We show this to be true for \\ce{CH4} at 1600K in Figure~\\ref{fig:exmkch4-1}, where the cross-sections were taken from www.exomol.com at a resolution $\\Delta \\nu = 1$cm$^{-1}$. The spectra are binned to $\\Delta \\lambda = $0.005\\textmu m resolution. This is also the case for the other relevant gases.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{ch4-3.pdf}\n \\caption{The \\ce{CH4} $k$-table opacity at low-pressure (blue) is compared to the Doppler-broadened cross-sections at a resolution of $\\Delta \\nu = 1$cm$^{-1}$ from the ExoMol project (pink) for 1600K. The spectra are binned to $\\Delta \\lambda = $0.005\\textmu m resolution.}\n \\label{fig:exmkch4-1}\n\\end{figure}\n\nBy flattening our $k$-distribution, i.e. replacing $k_i$ for each g-ordinate with $\\bar{k}$ so that the $k$-distribution $k(g)$ is now a flat (constant) distribution, our $k$-tables at these low pressures are mathematically identical to using cross-sections, and so will fall foul to the aforementioned gas-mixing issues.\n\nInstead of using the ExoMol cross-sections directly where we are limited to the temperatures ranges supplied online, we carry out our comparison using only the lowest pressure points of our flattened $k$-tables as a proxy. In the next section we show first that this is a good approximation to using cross-sections of resolution $\\Delta \\nu = 1$cm$^{-1}$.\n\n\n\\section{Results}\n\nFirst we would like to show the inability to effectively mix gases with cross-sections. We create a simple one-layer model at low pressure (and therefore only Doppler-broadened), with a temperature of 1000K, and containing 50\\% \\ce{H2O} and 50\\% \\ce{CH4}. 
We calculate the mean transmission for the correlated-$k$ using:\n\n\\begin{equation}\\label{eqn:trank}\n\\bar{T} = \\sum_{i=1}^{NG} \\sum_{j=1}^{NG} e^{- (k_i m_a + k_j m_b) } \\Delta g_i \\Delta g_j\n\\end{equation}\n\nand the cross-sections:\n\n\\begin{equation}\\label{eqn:tranxsec}\n\\bar{T} = e^{- (k_a m_a + k_b m_b) } \n\\end{equation}\n\nwhere $i$ and $j$ represent the index of the weights and $a$ and $b$ are labels for the first and second gas respectively, $k$ is the absorption coefficient (cm$^{2}$ molecule$^{-1}$), and $m$ is the absorber amount (molecule cm$^{-2}$). Absorption (1 - $\\bar{T}$) is calculated with five different methods: 1) using our $k$-tables described earlier; the online ExoMol cross-sections directly at two different resolutions - 2) 1cm$^{-1}$ and 3) 25cm$^{-1}$; 4) our cross-section proxy $k$-tables described in the previous section; and 5) the opacity-sampling method at 1cm$^{-1}$ resolution using the 0.01cm$^{-1}$ ExoMol cross-sections. Methods 1 and 4 are combined using equation~\\ref{eqn:trank}, while methods 2, 3 and 5 are combined using equation~\\ref{eqn:tranxsec}. The spectra are then binned to $\\Delta \\lambda = $0.005\\textmu m resolution, and shown in Figure~\\ref{fig:abs}. We choose these two resolutions for the cross-sections because the former is the usual resolution used by other modellers, and the latter is closer to the typical resolution that we present in Figure~\\ref{fig:mixedgases} and subsequent figures in the visible ($\\sim$0.6\\textmu m). Beyond $\\sim$3\\textmu m, our $k$-tables are in fact higher resolution than $\\Delta \\nu = 1$cm$^{-1}$. This means that we expect the cross-sections (and opacity sampling method) to become relatively less accurate at longer wavelengths. This is also generally true because the Doppler width is proportional to the wavenumber, i.e. at long wavelengths lines become increasingly narrow and consequently require a higher resolution to be resolved.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{abs.pdf}\n \\caption{Absorption calculated for a layer consisting of 50\\% \\ce{CH4}, 50\\% \\ce{H2O} at 1000K and path amount of \\num{1e20} molecule cm$^{-2}$. Pink is opacity-sampling at 1cm$^{-1}$ resolution using the 0.01cm$^{-1}$ resolution ExoMol cross-sections. Gold and blue are the cross-section absorption spectra at 1cm$^{-1}$ and 25cm$^{-1}$ resolution respectively, the black line is the correlated-$k$ method with a resolution of 0.002\\textmu m, and the green line is the cross-section derived from our correlated-$k$ method.}\n \\label{fig:abs}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{zoomabs.pdf}\n \\caption{Same as Figure~\\ref{fig:abs} but zoomed in to 2-3\\textmu m. The purple dots are a line-by-line calculation at $\\Delta \\nu = 0.01$cm$^{-1}$.}\n \\label{fig:abszoom}\n\\end{figure}\n\nIn Figures~\\ref{fig:abs} and~\\ref{fig:abszoom} we see that the opacity sampling method produces absorption that is approximately of the same order as the correlated-$k$ and line-by-line methods. However, it also exhibits a slight underestimation of the absorption and a large increase in noise as we go to longer, less-sampled wavelengths. The effect is especially pronounced in certain bands such as the 2.5-3\\textmu m band in Figure~\\ref{fig:abszoom}. While these effects are interesting, it would require a major overhaul of our code to investigate them further, and hence we do not include opacity sampling in further analysis. 
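The consequence of using equation~\\ref{eqn:tranxsec} in place of equation~\\ref{eqn:trank} can also be reproduced with a toy two-gas calculation (synthetic, statistically independent opacities and equal-weight $g$-intervals; illustrative numbers only, not our production code): the random-overlap double sum stays close to the line-by-line band average, while the exponential of the bin-averaged opacities badly overestimates the absorption.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_fine, NG = 20000, 20            # fine grid points per bin, g-ordinates per gas
m_a = m_b = 1.0e23                # absorber amounts (molecule cm-2)

# Two gases with statistically independent (log-normal) fine-structure opacities.
k_a = 10.0**rng.normal(-23.0, 1.0, n_fine)
k_b = 10.0**rng.normal(-23.0, 1.0, n_fine)

def k_dist(k_fine, ng=NG):
    """Equal-weight k-distribution: sort the fine-grid opacities and average
    them within ng equal g-intervals (the real code uses Gauss-Lobatto points)."""
    return np.sort(k_fine).reshape(ng, -1).mean(axis=1)

# 1) Line-by-line truth: band average of the monochromatic mixture transmission.
T_lbl = np.mean(np.exp(-(k_a * m_a + k_b * m_b)))

# 2) Random overlap (the correlated-k expression in the text): double sum
#    over the two k-distributions with weights Delta g = 1/NG.
kga, kgb = k_dist(k_a), k_dist(k_b)
dg = 1.0 / NG
T_ck = sum(np.exp(-(ka * m_a + kb * m_b)) * dg * dg
           for ka in kga for kb in kgb)

# 3) Bin-averaged cross-sections (the cross-section expression in the text):
#    exponential of the mean opacities.
T_xs = np.exp(-(k_a.mean() * m_a + k_b.mean() * m_b))

print(T_lbl, T_ck, T_xs)   # T_ck tracks T_lbl; T_xs vastly overestimates absorption
\\end{verbatim}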
Figure~\\ref{fig:abs} also shows that the overlapping bands can cause up to $\\sim$2 orders of magnitude change in absorption when comparing the correlated-$k$ and cross-section methods. We believe these `ghost features' are a consequence of one cross-section being more accurate than the other, so that in a mixture, when the inaccuracy dominates the total absorption, a ghost feature is formed. This is because the average transmission computed from equation~\\ref{eqn:tranxsec} does not capture the non-linear relationship between transmission and opacity. As expected, beyond 10\\textmu m the absorption varies enormously due to an increase in the number of binned lines per resolution element in the cross-sections. It also shows that our $k$-table proxy cross-sections exhibit approximately the same (if not smaller) effects as the two different ExoMol cross-sections, hence we conclude that these proxy cross-sections are reasonable to use for the rest of this paper. As shown in Figure~\\ref{fig:abszoom}, the line-by-line calculation between 2-3\\textmu m agrees very well with our correlated-$k$ method and shows that there is a real concern for combining cross-sections at a resolution of $\\Delta \\nu = 1$cm$^{-1}$, especially at longer wavelengths. By comparing our correlated-$k$ method to a high-resolution cross-section (0.01cm$^{-1}$) in Figure~\\ref{fig:abshigh}, it is evident that they now share a similar spectral morphology; however, the increase in resolution of the cross-section now causes a large increase in computation time. Figure~\\ref{fig:abshigh} also shows a medium-resolution case (0.1cm$^{-1}$) in which inaccuracies begin to occur in a non-negligible way. For the vast majority of real-life data resolutions, using the correlated-$k$ method is faster or more accurate than using these high- and medium-resolution cross-sections because the correlated-$k$ method can be precalculated at exactly the resolution of the data, whereas the cross-section method must always be at a much higher resolution than that of the data.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{abshigh.pdf}\n \\caption{Similar to Figure~\\ref{fig:abs}, except pink is the cross-section absorption spectra at 0.1cm$^{-1}$ resolution, gold is the cross-section at 0.01cm$^{-1}$ resolution, and black is the same correlated-$k$ calculation.}\n \\label{fig:abshigh}\n\\end{figure}\n\n\n\\subsection{Primary Transit}\n\nIn this example we take a simplified but more realistic atmosphere found in exoplanetary science, and compare its resulting spectra using the two different gaseous absorption methods.\nWe do this using an isothermal (2000K) Hot Jupiter-esque primary transit as our example ($1.06M_{Sat}$, $1.4 R_{Jup}$ around a Solar-size star), which we refer to as PT1. \n\nHere we limit the $k$-tables to have only Doppler broadening by using the lowest pressure point in our grid (\\num{1e-7}atm). This means we are directly comparing $k$-tables with cross-sections, i.e. the effect of flattening over the $g$-distribution. From this section onwards, we also include collision-induced absorption from \\ce{H2}-\\ce{H2} and \\ce{H2}-He in our realistic atmosphere calculations where appropriate.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{singlegases.pdf}\n \\caption{Spectra for two PT1 (defined in manuscript) exoplanets consisting of a single gas (100\\% \\ce{CH4} in pink and black, 100\\% \\ce{H2O} in gold and green) calculated using two methods. 
Legend labels X-Sec and Corr-$k$ represent cross-sections and correlated-$k$ (Doppler-broadened only).}\n \\label{fig:singlegas}\n\\end{figure}\n\nIn Figure~\\ref{fig:singlegas}, we have two different atmospheres, each with a set of spectra calculated by the correlated-$k$ method and the cross-section method. We have a 100\\% \\ce{CH4} atmosphere, and a 100\\% \\ce{H2O} atmosphere. Here, no mixing of the gases is required, and therefore the two different methods produce almost identical results, with the differences being more pronounced for \\ce{H2O}. These results are relatively similar because the propagating error in transmission multiplication throughout the pressure-varying path due to bin-averaging is small compared to those when averaging with overlapping gaseous bands. Note that the changes are not due to pressure-broadening effects, as the $k$-tables only contain Doppler broadening. \n\nWe introduce a third atmosphere, composed of 50\\% \\ce{CH4} and 50\\% \\ce{H2O}. In Figure~\\ref{fig:mixedgases}, we compare the spectra produced by the Doppler-broadened correlated-$k$ and the cross-section methods. We find that not only do the cross-sections overestimate the transit depth, but indeed they change the morphology of the spectrum itself, similar to the results for in Figure~\\ref{fig:abs}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{multigases.pdf}\n \\caption{Spectra for one PT1 exoplanet consisting of 50\\% \\ce{CH4}, 50\\% \\ce{H2O}. Pink line is the correlated-$k$ method (Doppler-broadened only), blue is the cross-section method.}\n \\label{fig:mixedgases}\n\\end{figure}\n\n\n\\subsection{Brown Dwarfs}\n\nIn the previous two subsections we verified our correlated-$k$ method via line-by-line calculations, and showed the impact of various resolutions of cross-sections on the morphology of the spectra and absorption profiles of simple exoplanet atmospheres. In this subsection and the next we intend to show two things for multiple types of realistic \\ce{H2}-He-dominated atmospheres: 1) the effects of using cross-sections to incorrectly mix gases, and 2) the effects of including pressure-broadening on the resulting spectra. \n\nTo illustrate the effects of using cross-sections, correlated-$k$ with no pressure-broadening (smallest pressure level available at our $k$-tables, $\\sim$1e-7 atm), and correlated-$k$ with pressure-broadening to calculate the spectra of brown dwarf atmospheres, we use a typical late-T dwarf as an example, as it contains significant amounts of \\ce{H2O}, \\ce{CH4}, and \\ce{NH3} (\\num{3.5e-4}, \\num{4.0e-4}, \\num{2.3e-4} respectively for their volume mixing ratios, which are representative values that are constant in height). The spectrum also contains Na and K with VMRs of \\num{3.5e-5} and \\num{3.5e-6}. The mass and radius are 41.5M$_{Jup}$ and 1.38R$_{Jup}$ respectively, and the temperature profile is a 673.5K grid model from \\citet{saumon08} with the appropriate surface gravity. The temperature profile, along with those from the next two sections, is shown in Figure~\\ref{fig:tpprofs}. The exact numbers for these parameters do not significantly change the outcome of the results, and are the best fit parameters of a previous unpublished retrieval performed on the object GL570D. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{gl570dsub.pdf}\n \\caption{Top plot: Spectra calculated from best fit parameters for GL570D using various methods of calculation. 
The spectra are calculated for cross-sections with (translucent green) and without pressure-broadening (translucent black), and correlated-$k$ with (red) and without pressure broadening (blue). Bottom plot: the transmission calculated for the three major absorbing species in the atmosphere in a single layer of atmosphere at 700K, where each gas has been weighted by its volume mixing ratio for a given path amount.}\n \\label{fig:TD}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{TPPROFS.pdf}\n \\caption{Temperature profiles for the brown dwarf (`T dwarf' in pink), the primary transit of HD189733b (`Primary' in blue) and secondary eclipse of HD189733b (`Eclipse' in green).}\n \\label{fig:tpprofs}\n\\end{figure}\n\nFigure~\\ref{fig:TD} contains two plots. In the upper plot, we have spectra calculated using the cross-section method with and without pressure broadening, and the correlated-$k$ method with and without pressure broadening. In the bottom plot, we have calculated the transmission in a single layer at 700K and 1 atm, weighted by the volume mixing ratios with a given path amount (\\num{1e23} molecules\/cm$^{-2}$) for \\ce{NH3}, \\ce{CH4}, \\ce{H2O}, Na and K, so that it can be easily seen which spectral features belong to which species.\n\nIn all of the large bands we find a discrepancy between the cross-section method and the correlated-$k$ method, usually of multiple orders of magnitude. For example, the 2.8\\textmu m water band produces luminosities that vary by up to $\\sim$3-4 orders of magnitude. The same can be said for the 3.2\\textmu m \\ce{CH4} band. In molecular bands, the addition of pressure broadening can produce an additional order of magnitude change. These are huge effects that would certainly be reflected in parameter estimation during retrievals. \n\n\\subsection{Primary Transit and Secondary Eclipse}\n\nIn Figure~\\ref{fig:PTHD} we present a fiducial model from \\citet{barstow2014} of the primary transit spectrum of HD189733b, calculated using the three methods as described before for the brown dwarfs. The atmosphere primarily consists of the usual \\ce{H2}, He, \\ce{H2O}, Na and K gases, along with a haze to cover the observed Rayleigh slope. \n\nSimilar to the brown dwarf cases, we find that the total opacity and morphology of the spectra differ greatly between the methods. The transit depth can vary by up to 1\\%, as seen at the 2.8\\textmu m water band. Contrary to the brown dwarfs, we note that the pressure broadening effects are subdued for primary transit. This is because we are probing much higher in the atmosphere than in the brown dwarf case, where Doppler broadening is the main agent of broadening. The Na and K lines are changed vastly, although we must note that the visible region is especially subject to large changes in $\\Delta \\nu$ (a constant $\\Delta \\lambda$, as we have, produces larger $\\Delta \\nu$ in the visible), and thus the effects are more pronounced than for the infrared. \n\nFor secondary eclipse, the VMRs used are slightly different than for primary transit. \\ce{H2O}, \\ce{CO2}, \\ce{CO} and \\ce{CH4} all have VMRs of \\num{1e-4}. \\ce{H2}, He, Na, and K have VMRs of 0.9, 0.1, \\num{5e-6}, and \\num{1e-7} respectively. In Figure~\\ref{fig:SECHD} we see a familiar increase in opacity and change in morphology for the cross-section method. The pressure-broadening is also slightly more significant compared to primary transit, as we are probing lower in the atmosphere. 
For secondary eclipse, the VMRs used are slightly different from those used for primary transit. \\ce{H2O}, \\ce{CO2}, \\ce{CO} and \\ce{CH4} all have VMRs of \\num{1e-4}; \\ce{H2}, He, Na and K have VMRs of 0.9, 0.1, \\num{5e-6} and \\num{1e-7} respectively. In Figure~\\ref{fig:SECHD} we see a familiar increase in opacity and change in morphology for the cross-section method. Pressure broadening is also slightly more significant than in primary transit, as we are probing lower in the atmosphere. The effects here are smaller than for the brown dwarf: a change in the method can increase the flux by half an order of magnitude in the mid- and far-infrared, or change it by an order of magnitude in the near-IR. The effect of pressure broadening in secondary eclipse is smaller than for the brown dwarf (because the contribution functions peak at a lower pressure in the atmosphere) but is still large enough to become apparent in the spectra. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{PT.pdf}\n \\caption{A fiducial HD189733b atmosphere in primary transit. The spectra are calculated for cross-sections with (translucent green) and without pressure broadening (translucent black), and correlated-$k$ with (red) and without pressure broadening (blue).}\n \\label{fig:PTHD}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\columnwidth]{SEC-comb.pdf}\n \\caption{A fiducial HD189733b atmosphere in secondary eclipse. The spectra are calculated for cross-sections with (translucent green) and without pressure broadening (translucent black), and correlated-$k$ with (red) and without pressure broadening (blue).}\n \\label{fig:SECHD}\n\\end{figure}\n\n\\section{Conclusions}\n\nA major source of discrepancy between different radiative transfer models is how gaseous absorption is calculated. We investigated the effects of using two different gaseous absorption methods (correlated-$k$ and cross-sections) to calculate spectra for a variety of atmospheres of varying complexity. We also investigated the effects of including \\ce{H2}-He pressure broadening in the more complex atmospheres. These investigations are important because radiative transfer models are often coupled to inverse methods, which can iterate over millions of forward models in a large parameter space; a single forward-model error, such as a `ghost feature', would propagate through to all retrieved PDFs and influence them in a non-trivial way. For example, an error in the absorption spectrum of \\ce{NH3} could influence the retrieved temperature profile.\n\nWe first showed that for test cases with resolutions of $\\Delta \\nu = 1$\\,cm$^{-1}$ the cross-section method overestimates the amount of absorption present in the atmosphere and should therefore be used with caution. The morphology of the spectra also changes, producing `ghost' features for mixed-gas atmospheres. When considering our flattened $k$-table cross-sections, the effect can produce changes of multiple orders of magnitude in the flux received from brown dwarfs in certain wavelength regions. The effect is similar but smaller for primary transit, corresponding to a change in transit depth of up to $\\sim$1\\%. The flux ratio in secondary eclipse can change by an order of magnitude in the near-IR, with the effect becoming smaller at longer wavelengths. Correlated-$k$ can produce results similar to line-by-line and very high-resolution cross-section calculations, but is much less computationally expensive.\nThe inclusion of \\ce{H2}-He pressure broadening similarly changes the total flux in the spectra of brown dwarfs and of exoplanets in secondary eclipse by up to an order of magnitude, while making only slight changes to primary transit spectra. \n\n
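The bias of the cross-section method towards extra absorption has a simple origin in the case of a single homogeneous path: because $e^{-x}$ is convex, Jensen's inequality gives \\[ \\overline{e^{-\\sigma u}} \\geq e^{-\\bar{\\sigma} u}, \\] where $\\sigma$ is the cross-section, $u$ is the path amount and the bar denotes an average over the spectral bin, so transmitting through the bin-averaged cross-section underestimates the bin-mean transmission and hence overestimates the absorption. Further errors arise when bin-averaged transmissions are multiplied along an inhomogeneous path and when gases with overlapping bands are mixed, as discussed above.\n\n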
If we also take into account the sizeable discrepancies found by \\citet{hedges2016} between different aspects of pressure broadening at medium and higher resolutions, the issues discussed in this paper might be even more serious than suggested. However, at higher resolutions the differences we present here will become less significant, and so the main sources of error will shift from those discussed here to those discussed by \\citet{hedges2016}.\n\nWe conclude that the inaccurate use of cross-sections and the omission of pressure broadening can be key sources of error in the modelling of brown dwarf and exoplanet atmospheres. These forward-model errors may produce strong biases in the probability distribution functions of retrieved parameters. \n\n\\section*{Acknowledgements}\n\nR.G. thanks and acknowledges the support of the Science and Technology Facilities Council. P.G.J.I. also receives funding from the Science and Technology Facilities Council (ST\/K00106X\/1). \n\n\\bibliographystyle{mnras}\n