\begin{document} \maketitle \begin{abstract} In this paper we study the selection principle of closed discrete selection, first researched by Tkachuk in \cite{Tkachuk2018} and strengthened by Clontz and Holshouser in \cite{ClontzHolshouser}, in set-open topologies on the space of continuous real-valued functions. Adapting the techniques involving point-picking games on \(X\) and \(C_p(X)\), the current authors showed similar equivalences in \cite{CaruvanaHolshouser} involving the compact subsets of \(X\) and \(C_k(X)\). By pursuing a bitopological setting, we have touched upon a unifying framework which involves three basic techniques: general game duality via reflections (Clontz), general game equivalence via topological connections, and strengthening of strategies (Pawlikowski and Tkachuk). Moreover, we develop a framework which identifies topological notions to match with generalized versions of the point-open game. \end{abstract} \section{Introduction} The closed discrete selection principle was first studied by Tkachuk in 2017. This property occurs naturally in the course of studying functional analysis. Tkachuk connected this selection principle on \(C_p(X)\) with topological properties of \(X\). He then went on to consider the corresponding selection game, creating a partial characterization of winning strategies in that game and finding connections between it, the point-open game on \(X\), and Gruenhage's \(W\)-game on \(C_p(X)\) \cite{TkachukGame}. In 2019, Clontz and Holshouser \cite{ClontzHolshouser} finished this characterization, showing that the discrete selection game on \(C_p(X)\) is equivalent to a modification of the point-open game on \(X\). Clontz and Holshouser established this not only for full-information strategies but also for limited-information strategies. The current authors continued this work, studying the closed discrete game on \(C_k(X)\), the space of real-valued continuous functions with the compact-open topology \cite{CaruvanaHolshouser}. 
They showed that similar connections exist in this setting, with the point-open game on \(X\) replaced by the compact-open game. They also isolated general techniques which have use beyond the study of closed discrete selections. In this paper, we study the problem of closed discrete selection in the general setting of set-open topologies on the space of continuous functions. We use closed discrete selection as a tool not only for comparing \(X\) to its space of continuous functions, but also for comparing different set-open topologies to each other. To establish these connections, we prove general statements in three categories: \begin{enumerate} \item strengthening the strategies in games, \item criteria for games to be dual, and \item characterizations of strong strategies in abstract point-open games, \end{enumerate} and use work of Clontz \cite{ClontzDuality} to show that some general classes of games are equivalent. In this version, we have \begin{itemize} \item identified that Lemmas \ref{lemma:PawlikowskiA} and \ref{lemma:Pawlikowski}, as stated, are not yet known to be true, and we reference \href{https://arxiv.org/abs/2102.00296}{arXiv:2102.00296} for a revised proof in the case of \(k\)-covers, and \item corrected a slight error in the statement of Theorem \ref{MD2} which relied on Lemma \ref{lemma:Pawlikowski}. \end{itemize} \section{Definitions and Preliminaries} \begin{definition} Let \(X\) be a space and \(\mathcal A \subseteq \wp(X)\). We say that \(\mathcal A\) is an \textbf{ideal-base} if, for \(A_1, A_2 \in \mathcal A\), there exists \(A_3 \in \mathcal A\) so that \(A_1 \cup A_2 \subseteq A_3\). \end{definition} \begin{definition} For a topological space \(X\) and a collection \(\mathcal A \subseteq \wp(X)\), we let \(\bar{\mathcal A} = \{ \text{cl}_X(A) : A \in \mathcal A \}\). \end{definition} \begin{definition} Fix a topological space \(X\) and a collection \(\mathcal A \subseteq \wp(X)\). 
Then \begin{itemize} \item we let \(C_p(X)\) denote the set of all continuous functions \(X \to \mathbb R\) endowed with the topology of point-wise convergence; we also let \(\mathbf 0\) be the function which is identically zero, \item we let \(C_k(X)\) denote the set of all continuous functions \(X \to \mathbb R\) endowed with the topology of uniform convergence on compact subsets of \(X\); we will write \[ [f;K,\varepsilon] = \left\{ g \in C_k(X) : \sup\{ |f(x)-g(x)| : x \in K \} < \varepsilon \right\} \] for \(f \in C_k(X)\), \(K \subseteq X\) compact, and \(\varepsilon > 0\), and \item in general, we let \(C_{\mathcal A}(X)\) denote the set of all continuous functions \(X \to \mathbb R\) endowed with the \(\mathcal A\)-open topology; we will write \[ [f;A,\varepsilon] = \left\{ g \in C_{\mathcal A}(X) : \sup\{ |f(x)-g(x)| : x \in A \} < \varepsilon \right\} \] for \(f \in C_{\mathcal A}(X)\), \(A \in \mathcal A\), and \(\varepsilon > 0\). \end{itemize} Notice that, for the sets of the form \([f;A,\varepsilon]\) to form a base for the topology of \(C_{\mathcal A}(X)\), \(\mathcal A\) must be an ideal-base. \end{definition} \begin{definition} For a topological space \(X\), we let \(K(X)\) denote the family of all non-empty compact subsets of \(X\). \end{definition} \begin{definition} Let \(X\) be a topological space. We say that \(A \subseteq X\) is \textbf{\(\mathbb R\)-bounded} if, for every continuous \(f : X \to \mathbb R\), \(f[A]\) is bounded. \end{definition} In this paper, we will be concerned with selection principles and related games. For classical results, basic tools, and notation, the authors recommend \cite{SakaiScheppers} and \cite{KocinacSelectedResults}. \begin{definition} Consider collections \(\mathcal A\) and \(\mathcal B\) and an ordinal \(\alpha\). 
The corresponding selection principles are defined as follows: \begin{itemize} \item \(S_{\text{fin}}^\alpha(\mathcal A, \mathcal B)\) is the assertion that, given any \(\{A_\xi : \xi \in \alpha\} \subseteq \mathcal A\), there exists \(\{ \mathcal F_\xi : \xi \in \alpha \}\) so that, for each \(\xi \in \alpha\), \(\mathcal F_\xi\) is a finite subset of \(A_\xi\) (denoted as \(\mathcal F_\xi \in [A_\xi]^{<\omega}\) hereinafter) and \(\bigcup\{ \mathcal F_\xi : \xi \in \alpha \} \in \mathcal B\), and \item \(S_1^\alpha(\mathcal A, \mathcal B)\) is the assertion that, given any \(\{A_\xi : \xi \in \alpha\} \subseteq \mathcal A\), there exists \(\{ x_\xi : \xi \in \alpha \}\) so that, for each \(\xi \in \alpha\), \( x_\xi \in A_\xi\) and \(\{ x_\xi : \xi \in \alpha \} \in \mathcal B\). \end{itemize} We suppress the superscript when \(\alpha = \omega\); i.e., \(S_1(\mathcal A, \mathcal B) = S^\omega_1(\mathcal A, \mathcal B)\). \end{definition} \begin{definition} Let \(X\) be a topological space and \(\mathscr U\) be an open cover of \(X\) with \(X \notin \mathscr U\). 
Recall that \begin{itemize} \item \(\mathscr U\) is said to be a \textbf{\(\Lambda\)-cover} if, for every \(x \in X\), \(\{ U \in \mathscr U : x \in U \}\) is infinite, \item \(\mathscr U\) is an \textbf{\(\omega\)-cover} of \(X\) provided that, given any finite subset \(F\) of \(X\), there exists some \(U \in \mathscr U\) so that \(F \subseteq U\), \item \(\mathscr U\) is said to be a \textbf{\(\gamma\)-cover} if \(\mathscr U\) is an infinite \(\omega\)-cover and, for every finite subset \(F \subseteq X\), \(\{ U \in \mathscr U : F \not\subseteq U \}\) is finite, \item \(\mathscr U\) is a \textbf{\(k\)-cover} of \(X\) provided that, given any compact subset \(K\) of \(X\), there exists some \(U \in \mathscr U\) so that \(K \subseteq U\), and \item \(\mathscr U\) is said to be a \textbf{\(\gamma_k\)-cover} if \(\mathscr U\) is an infinite \(k\)-cover and, for every compact \(K \subseteq X\), \(\{ U \in \mathscr U : K \not\subseteq U \}\) is finite. \end{itemize} Note that if \(\mathscr U = \{U_n : n \in \omega\}\), then \(\mathscr U\) is a \(\gamma_k\)-cover if and only if every cofinal subsequence of the \(U_n\) forms a \(k\)-cover. For a family of sets \(\mathcal A\), let \begin{itemize} \item \(\mathcal O(X, \mathcal A)\) be the collection of all open covers \(\mathscr U\) so that \(X\not\in\mathscr U\) and, for every \(A \in \mathcal{A}\), there is an open set \(U \in \mathscr U\) which contains \(A\), \item \(\Lambda(X, \mathcal A)\) be the collection of all open covers \(\mathscr U\) so that \(X\not\in\mathscr U\) and, for all \(A \in \mathcal A\), there are infinitely many \(U \in \mathscr U\) so that \(A \subseteq U\), and \item \(\Gamma(X, \mathcal A)\) be the collection of all infinite open covers \(\mathscr U\) so that \(X\not\in\mathscr U\) and, for every \(A \in \mathcal{A}\), \(\{ U \in \mathscr U : A \not\subseteq U\}\) is finite. \end{itemize} \end{definition} \begin{remark} Note that \begin{itemize} \item \(\mathcal O(X,[X]^{<\omega}) = \Omega_X\) denotes the collection of all \(\omega\)-covers of \(X\). 
\item \(\mathcal O(X, K(X)) = \mathcal K_X\) denotes the collection of all \(k\)-covers of \(X\). \item \(\Gamma(X, K(X)) = \Gamma_k(X)\) denotes the collection of all \(\gamma_k\)-covers of \(X\). \end{itemize} \end{remark} \begin{notn} We let \begin{itemize} \item \(\neg \mathcal A\) denote the complement of \(\mathcal A\), for any collection \(\mathcal A\). \item \(\mathscr T_X\) denote the set of all non-empty open subsets of \(X\). \item \(\Omega_{X,x}\) denote the set of all \(A \subseteq X\) with \(x \in \text{cl}_X(A)\). We also call \(A \in \Omega_{X,x}\) a \textbf{blade} of \(x\). \item \(\Gamma_{X,x}\) denote the set of all sequences \(\{x_n : n \in \omega\} \subseteq X\) with \(x_n \to x\). \item \(\mathcal D_X\) denote the collection of all dense subsets of \(X\). \item \(\text{CD}_X\) denote the collection of all closed and discrete subsets of \(X\). \item \(\mathcal O_X\) denote the collection of all open covers of \(X\). \item \(\Lambda_X\) denote the collection of all \(\Lambda\)-covers of \(X\). \item \(\Gamma_X\) denote the collection of all \(\gamma\)-covers of \(X\). \end{itemize} \end{notn} We can create variations of selection principles and their negations by looking at selection games. \begin{definition} Given collections \(\mathcal A\) and \(\mathcal B\) and an ordinal \(\alpha\), we define the \textbf{finite selection game} \(G^\alpha_{\text{fin}}(\mathcal A, \mathcal B)\) for \(\mathcal A\) and \(\mathcal B\) as follows: \[ \begin{array}{c|cccccc} \text{I} & A_0 & A_1 & A_2 & \cdots & A_\xi & \cdots\\ \hline \text{II} & \mathcal F_0 & \mathcal F_1 & \mathcal F_2 & \cdots & \mathcal F_\xi & \cdots \end{array} \] where \(A_\xi \in \mathcal A\) and \(\mathcal F_\xi \in [A_\xi]^{<\omega}\) for all \(\xi < \alpha\). We declare Two the winner if \(\bigcup\{ \mathcal F_\xi : \xi < \alpha \} \in \mathcal B\). Otherwise, One wins. We let \(G_{\text{fin}}(\mathcal A, \mathcal B)\) denote \(G^\omega_{\text{fin}}(\mathcal A, \mathcal B)\). 
\end{definition} \begin{definition} Similarly, we define the \textbf{single selection game} \(G^\alpha_1(\mathcal A, \mathcal B)\) as follows: \[ \begin{array}{c|cccccc} \text{I} & A_0 & A_1 & A_2 & \cdots & A_\xi & \cdots\\ \hline \text{II} & x_0 & x_1 & x_2 & \cdots & x_\xi & \cdots \end{array} \] where each \(A_\xi \in \mathcal A\) and \(x_\xi \in A_\xi\). We declare Two the winner if \(\{ x_\xi : \xi \in \alpha \} \in \mathcal B\). Otherwise, One wins. We let \(G_{1}(\mathcal A, \mathcal B)\) denote \(G^\omega_{1}(\mathcal A, \mathcal B)\). \end{definition} \begin{definition} We define strategies of various strengths below. \begin{itemize} \item A \textbf{strategy for player One} in \(G^\alpha_1(\mathcal A, \mathcal B)\) is a function \(\sigma:(\bigcup \mathcal A)^{<\alpha} \to \mathcal A\). A strategy \(\sigma\) for One is called \textbf{winning} if, whenever \(x_\xi \in \sigma(\langle x_\zeta : \zeta < \xi \rangle)\) for all \(\xi < \alpha\), \(\{x_\xi:\xi \in \alpha\} \not\in \mathcal B\). If player One has a winning strategy, we write \(\One \uparrow G^\alpha_1(\mathcal A, \mathcal B)\). \item A \textbf{strategy for player Two} in \(G^\alpha_1(\mathcal A, \mathcal B)\) is a function \(\tau:\mathcal A^{<\alpha} \to \bigcup \mathcal A\). A strategy \(\tau\) for Two is \textbf{winning} if, whenever \(A_\xi \in \mathcal A\) for all \(\xi < \alpha\), \(\{\tau(A_0,\cdots,A_\xi) : \xi < \alpha\} \in \mathcal B\). If player Two has a winning strategy, we write \(\Two \uparrow G^\alpha_1(\mathcal A, \mathcal B)\). \item A \textbf{predetermined strategy} for One is a strategy which only considers the current turn number. We call this kind of strategy predetermined because One is not reacting to Two's moves; they are just running through a pre-planned script. Formally, it is a function \(\sigma:\alpha \to \mathcal A\). If One has a winning predetermined strategy, we write \(\One \underset{\text{pre}}{\uparrow} G^\alpha_1(\mathcal A, \mathcal B)\). 
\item A \textbf{Markov strategy} for Two is a strategy which only considers the most recent move of player One and the current turn number. Formally it is a function \(\tau:\mathcal A \times \alpha \to \bigcup \mathcal A\). If Two has a winning Markov strategy, we write \(\Two \underset{\text{mark}}{\uparrow} G^\alpha_1(\mathcal A, \mathcal B)\). \end{itemize} \end{definition} \begin{definition} Two games \(\mathcal G_1\) and \(\mathcal G_2\) are said to be \textbf{strategically dual} provided that the following two hold: \begin{itemize} \item \(\text{I} \uparrow \mathcal G_1 \text{ iff } \text{II} \uparrow \mathcal G_2\) \item \(\text{I} \uparrow \mathcal G_2 \text{ iff } \text{II} \uparrow \mathcal G_1\) \end{itemize} Two games \(\mathcal G_1\) and \(\mathcal G_2\) are said to be \textbf{Markov dual} provided that the following two hold: \begin{itemize} \item \(\text{I} \underset{\text{pre}}{\uparrow} \mathcal G_1 \text{ iff } \text{II} \underset{\text{mark}}{\uparrow} \mathcal G_2\) \item \(\text{I} \underset{\text{pre}}{\uparrow} \mathcal G_2 \text{ iff } \text{II} \underset{\text{mark}}{\uparrow} \mathcal G_1\) \end{itemize} Two games \(\mathcal G_1\) and \(\mathcal G_2\) are said to be \textbf{dual} provided that they are both strategically dual and Markov dual. \end{definition} \begin{remark} In general, \(S_1^\alpha(\mathcal A, \mathcal B)\) holds if and only if \(\text{I} \underset{\text{pre}}{\not\uparrow} G_1^\alpha(\mathcal{A},\mathcal{B})\). See \cite[Prop. 13]{ClontzHolshouser}. \end{remark} \begin{remark} The game \(G_{\text{fin}}(\mathcal O_X,\mathcal O_X)\) is the well-known Menger game and the game \(G_1(\mathcal O_X, \mathcal O_X)\) is the well-known Rothberger game. \end{remark} \begin{notn} For \(A \subseteq X\), let \(\mathscr N(A)\) be all open sets \(U\) so that \(A \subseteq U\). 
Set \(\mathscr N[X] = \{ \mathscr N(x) : x \in X \}\), where we write \(\mathscr N(x)\) for \(\mathscr N(\{x\})\), and, in general, if \(\mathcal A\) is a collection of subsets of \(X\), then \(\mathscr N[\mathcal A] = \{\mathscr N(A) :A \in \mathcal{A}\}\). In the case when \(X\) and \(X^\prime\) represent two topologies on the same underlying set, we will use the notation \(\mathscr N_X(A)\) to denote the collection of open sets relative to the topology of \(X\) that contain \(A\). \end{notn} \begin{remark} The game \(G_1(\mathscr N[X],\neg \mathcal O_X)\) is the well-known point-open game first appearing in \cite{Galvin1978}: player One is trying to build an open cover and player Two is trying to avoid building an open cover. The game \(G_1(\mathscr N[K(X)], \neg \mathcal O_X)\) is the compact-open game. Generally, when \(\mathscr N[\mathcal A]\) is being used in a game, we will use the identification of \(A\) with \(\mathscr N(A)\) to simplify notation. Particularly, One picks \(A \in \mathcal A\) and Two's response will be an open set \(U\) so that \(A \subseteq U\). \end{remark} \begin{definition} A topological space \(X\) is called \textbf{discretely selective} if, for any sequence \(\{ U_n : n \in \omega \}\) of non-empty open sets, there exists a closed discrete set \(\{x_n : n \in \omega\} \subseteq X\) so that \(x_n \in U_n\) for each \(n \in \omega\); i.e., \(S_1(\mathscr T_X, {\text{CD}}_X)\) holds. This notion was first isolated by Tkachuk in \cite{Tkachuk2018}. \end{definition} \begin{definition} \label{definition:ClosedDiscrete} For a topological space \(X\), the \textbf{closed discrete selection game} on \(X\) is \(G_1(\mathscr T_X, {\text{CD}}_X)\). Tkachuk studies this game in \cite{TkachukGame}. \end{definition} Note that \(X\) is discretely selective if and only if \(\text{I} \underset{\text{pre}}{\not\uparrow} G_1(\mathscr T_X, {\text{CD}}_X)\). 
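As a simple illustration of these notions, we record the following folklore observation; it is included only to orient the reader and is not drawn from the cited papers. \begin{remark} The real line \(\mathbb R\) is not discretely selective. Indeed, consider the predetermined strategy for One in \(G_1(\mathscr T_{\mathbb R}, {\text{CD}}_{\mathbb R})\) given by \(\sigma(n) = \left( \frac{1}{n+2}, \frac{1}{n+1} \right)\). If \(x_n \in \sigma(n)\) for each \(n \in \omega\), then the \(x_n\) are pairwise distinct and \(x_n \to 0\), while \(0 \notin \{ x_n : n \in \omega \}\). Hence \(\{ x_n : n \in \omega \}\) is an infinite set which is not closed, and so certainly not closed and discrete. Thus \(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr T_{\mathbb R}, {\text{CD}}_{\mathbb R})\); that is, \(S_1(\mathscr T_{\mathbb R}, {\text{CD}}_{\mathbb R})\) fails. \end{remark}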
\begin{remark} \label{definition:GruenhageGame} For a topological space \(X\) and \(x\in X\), \textbf{Gruenhage's \(W\)-game} for \(X\) at \(x\) is \(G_1(\mathscr N(x), \neg \Gamma_{X,x})\) and \textbf{Gruenhage's clustering game} for \(X\) at \(x\) is \(G_1(\mathscr N(x), \neg \Omega_{X,x})\). \end{remark} \begin{definition} Suppose \((P, \leq)\) is a partially ordered set and \(\mathcal A, \mathcal B \subseteq P\). Then \textbf{\(\mathcal A\) has cofinality \(\kappa\) relative to \(\mathcal B\)}, denoted \[ \mbox{cof}(\mathcal A; \mathcal B, \leq) = \kappa, \] if \(\kappa\) is the minimum cardinal so that there is a collection \(\{A_\alpha : \alpha < \kappa\} \subseteq \mathcal A\) with the property that whenever \(B \in \mathcal B\), there is an \(\alpha\) so that \(B \leq A_\alpha\). If no such cardinal exists, we leave the cofinality undefined. \end{definition} \begin{definition} Suppose \((P, \leq)\) and \((Q, \leq^*)\) are partial orders and \(\mathcal A, \mathcal B \subseteq P\), \(\mathcal C, \mathcal D \subseteq Q\). Then \[ (\mathcal A; \mathcal B, \leq) \geq_T (\mathcal C; \mathcal D, \leq^*) \] if there is a map \(\varphi:\mathcal A \to \mathcal C\) so that whenever \(\mathcal F \subseteq \mathcal A\) is cofinal relative to \(\mathcal B\), then \(\varphi[\mathcal F]\) is cofinal relative to \(\mathcal D\). This definition is inspired by Paul Gartside and Ana Mamatelashvili's work on the Tukey order \cite{Gartside}. \end{definition} Suppose \((P , \leq)\) is a partially ordered set. We define \(\leq\) on \(P \times \omega\) by \[ (p , n) \leq (q, m) \Longleftrightarrow (p \leq q \text{ and } n \leq m). \] \begin{lemma} For any partially ordered set \((P, \leq)\) and any \(Q \subseteq P\), \((P \times \omega; Q \times \omega, \leq) \geq_T (P; Q, \leq)\). \end{lemma} \begin{proof} Let \(\phi : P \times \omega \to P\) be defined by \(\phi(p,n) = p\). Suppose \(A \subseteq P \times \omega\) is cofinal for \(Q \times \omega\) and let \(q \in Q\) be arbitrary. 
By the cofinality of \(A\), we can find \((r,m) \in A\) so that \((q,0) \leq (r,m)\). It follows that \(q \leq r = \phi(r,m)\) which demonstrates that \(\phi[A]\) is cofinal for \(Q\). \end{proof} \begin{lemma} Suppose \((P, \leq)\) and \((Q, \leq^*)\) are partial orders, \(\mathcal A, \mathcal B \subseteq P\), and \(\mathcal C, \mathcal D \subseteq Q\). Suppose further that \((\mathcal A; \mathcal B, \leq) =_T (\mathcal C; \mathcal D, \leq^*)\) and \(\cof(\mathcal A; \mathcal B, \leq) = \kappa\). Then \(\cof(\mathcal C; \mathcal D, \leq^*) = \kappa\). \end{lemma} \begin{proof} Let \(\varphi:\mathcal A \to \mathcal C\) be so that whenever \(\mathcal F \subseteq \mathcal A\) is cofinal for \(\mathcal B\), then \(\varphi[\mathcal F]\) is cofinal for \(\mathcal D\). Also let \(\mathcal F = \{A_\alpha : \alpha < \kappa\} \subseteq \mathcal A\) be cofinal for \(\mathcal B\). Then \(\varphi[\mathcal F]\) is a subset of \(\mathcal C\) and is cofinal for \(\mathcal D\). Thus \(\cof(\mathcal C; \mathcal D, \leq^*) \leq \kappa\). Suppose towards a contradiction that \(\cof(\mathcal C; \mathcal D, \leq^*) = \lambda < \kappa\). Then we can find a collection \(\mathcal G = \{C_\alpha : \alpha < \lambda\} \subseteq \mathcal C\) which is cofinal for \(\mathcal D\). Now let \(\psi:\mathcal C \to \mathcal A\) witness that \((\mathcal C; \mathcal D, \leq^*) \geq_T (\mathcal A; \mathcal B, \leq)\). Then \(\psi[\mathcal G] \subseteq \mathcal A\) and is cofinal for \(\mathcal B\). But this would imply that \(\cof(\mathcal A; \mathcal B, \leq) < \kappa\), a contradiction. \end{proof} \begin{lemma} \label{lemma:CofinalityBetweenGroundAndFunctions} Suppose \(X\) is a Tychonoff space. Assume \(\mathcal A, \mathcal B \subseteq \wp(X)\). 
Then \[ (\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0); \mathscr N_{C_{\mathcal{B}}(X)}(\mathbf 0), \supseteq) \leq_T (\mathcal A \times \omega; \mathcal B \times \omega, \subseteq) \] and \[ (\mathscr N_{C_{\bar{\mathcal A}}(X)}(\mathbf 0); \mathscr N_{C_{\mathcal{B}}(X)}(\mathbf 0), \supseteq) =_T (\bar{\mathcal A} \times \omega; \mathcal B \times \omega, \subseteq). \] \end{lemma} \begin{proof} To address \((\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0); \mathscr N_{C_{\mathcal{B}}(X)}(\mathbf 0), \supseteq) \leq_T (\mathcal A \times \omega; \mathcal B \times \omega, \subseteq)\), define \(\psi : \mathcal A \times \omega \to \mathscr N_{C_{\mathcal A}(X)}(\mathbf 0)\) by \[ \psi(A,n) = [\mathbf 0; A, 2^{-n}]. \] Suppose \(\mathcal F \subseteq \mathcal A \times \omega\) is cofinal for \(\mathcal B \times \omega\) and let \(U \in \mathscr N_{C_{\mathcal B}(X)}(\mathbf 0)\) be arbitrary. We can find \(B \in \mathcal B\) and \(n \in \omega\) so that \[ [\mathbf 0; B, 2^{-n}] \subseteq U. \] By the cofinality of \(\mathcal F\) relative to \(\mathcal B \times \omega\), we can find \((A, m) \in \mathcal F\) so that \(B \subseteq A\) and \(n \leq m\). It follows that \[ \psi(A, m) = [\mathbf 0; A , 2^{-m}] \subseteq [\mathbf 0; B, 2^{-n}] \subseteq U. \] That is, \(\psi[\mathcal F]\) is cofinal for \(\mathscr N_{C_{\mathcal B}(X)}(\mathbf 0)\). Without loss of generality, suppose \(\mathcal A = \bar{\mathcal A}\). To address \[ (\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0); \mathscr N_{C_{\mathcal{B}}(X)}(\mathbf 0), \supseteq) \geq_T (\mathcal A \times \omega; \mathcal B \times \omega, \subseteq), \] let \(\phi : \mathscr N_{C_{\mathcal{A}}(X)}(\mathbf 0) \to \mathcal A \times \omega\) be defined in the following way. For any \(U \in \mathscr N_{C_{\mathcal A}(X)}(\mathbf 0)\), let \(A_U \in \mathcal A\) and \(\varepsilon_U > 0\) be so that \[ [\mathbf 0; A_U , \varepsilon_U] \subseteq U. \] Choose \(n_U \in \omega\) so that \(2^{-n_U} < \varepsilon_U\). 
Then define \(\phi(U) = \langle A_U, n_U \rangle\). Suppose \(\mathcal F \subseteq \mathscr N_{C_{\mathcal A}(X)}(\mathbf 0)\) is cofinal for \(\mathscr N_{C_{\mathcal B}(X)}(\mathbf 0)\). To see that \(\phi[\mathcal F]\) is cofinal for \(\mathcal B \times \omega\), let \(B \in \mathcal B\) and \(n \in \omega\). Then \([\mathbf 0; B, 2^{-n}] \in \mathscr N_{C_{\mathcal B}(X)}(\mathbf 0)\), which means there exists some \(U \in \mathcal F\) so that \(U \subseteq [\mathbf 0; B, 2^{-n}]\). Moreover, \[ [\mathbf 0; A_U , 2^{-n_U}] \subseteq U \subseteq [\mathbf 0; B, 2^{-n}]. \] Suppose toward a contradiction that \(B \not\subseteq A_U\). Then, for \(x \in B \setminus A_U\), we can find a continuous function \(f : X \to [0,1]\) so that \(f(x) = 1\) and \(f \restriction_{A_U} \equiv 0\). But then \(f \in [\mathbf 0; A_U , 2^{-n_U}] \setminus [\mathbf 0; B, 2^{-n}]\), a contradiction. If \(n > n_U\), then the constant function \(f \equiv 2^{-n}\) belongs to \([\mathbf 0; A_U , 2^{-n_U}]\) but not to \([\mathbf 0; B, 2^{-n}]\), contradicting \([\mathbf 0; A_U , 2^{-n_U}] \subseteq [\mathbf 0; B, 2^{-n}]\); so \(n \leq n_U\). Since \(B \subseteq A_U\) and \(n \leq n_U\), we see that \(\phi[\mathcal F]\) is cofinal for \(\mathcal B \times \omega\). \end{proof} \section{Strengthening Strategies} \begin{lemma} Suppose \(\mathcal A\) is an ideal-base, \(X = \bigcup \mathcal A\), and let \(\mathscr U \in \mathcal O(X,\mathcal A)\). Then, for each \(A\in \mathcal A\), \(\{U \in \mathscr U : A \subseteq U \}\) is infinite. That is, \(\mathcal O(X,\mathcal A) = \Lambda(X,\mathcal A)\). \end{lemma} \begin{proof} Let \(A \in \mathcal A\) be arbitrary and let \(U_0 \in \mathscr U\) be so that \(A \subseteq U_0\). Since \(X \setminus U_0 \neq \emptyset\) (recall that \(X \not\in \mathscr U\)), let \(x_1 \in X \setminus U_0\) and let \(A_1^\ast \in \mathcal A\) be so that \(x_1 \in A_1^\ast\). Let \(A_1 \in \mathcal A\) be so that \(A \cup A_1^\ast \subseteq A_1\) and let \(U_1 \in \mathscr U\) be so that \(A_1 \subseteq U_1\). 
Since \(A_1 \cap (X \setminus U_0) \neq\emptyset\), we know that \(U_0 \neq U_1\). Inductively continue in this way. \end{proof} \begin{corollary}\label{lemma:Open=Large} Suppose \(\mathcal A\) and \(\mathcal B\) are ideal-bases. Then \(G_1(\mathscr N[\mathcal A], \neg\mathcal O(X,\mathcal B))\) is equivalent to \(G_1(\mathscr N[\mathcal A], \neg\Lambda(X,\mathcal B))\). \end{corollary} \begin{definition} For collections \(\mathcal A\) and \(\mathcal B\), recall that \(\mathcal A\) \textbf{refines} \(\mathcal B\), denoted \(\mathcal A \prec \mathcal B\), provided that, for every \(A \in \mathcal A\), there exists \(B \in \mathcal B\) so that \(A \subseteq B\). \end{definition} \begin{lemma} \(\mathcal A \prec \mathcal B\) if and only if \(\mathcal O(X,\mathcal B) \subseteq \mathcal O(X,\mathcal A)\). \end{lemma} \begin{proof} Suppose \(\mathcal A \prec \mathcal B\). Let \(\mathscr U \in \mathcal O(X,\mathcal B)\) and \(A \in \mathcal A\). Let \(B \in \mathcal B\) be so that \(A \subseteq B\) and let \(U \in \mathscr U\) be so that \(B \subseteq U\). Then \(A \subseteq B \subseteq U\), so \(\mathscr U \in \mathcal O(X,\mathcal A)\). Now, suppose \(\mathcal A \not\prec \mathcal B\). Let \(A \in \mathcal A\) be so that, for all \(B\in\mathcal B\), \(A \not\subseteq B\). Then choose \(x_B \in A \setminus B\) and set \(U_B = X \setminus \{x_B\}\) for each \(B\in \mathcal B\). Notice that \(B \subseteq U_B\), so \(\{ U_B : B \in \mathcal B \} \in \mathcal O(X,\mathcal B)\). Clearly, \(\{ U_B : B \in \mathcal B \} \not\in \mathcal O(X,\mathcal A)\). \end{proof} In \cite{Pawlikowski1994}, Pawlikowski showed that \(S_{\text{fin}}(\mathcal O_X,\mathcal O_X)\) holds if and only if \(\One \not\uparrow G_{\text{fin}}(\mathcal O_X,\Lambda_X)\), and also that \(S_{1}(\mathcal O_X,\mathcal O_X)\) holds if and only if \(\One \not\uparrow G_{1}(\mathcal O_X,\Lambda_X)\). The authors generalized this in a previous paper. The following lemmas are slightly more general than what is proved there, but the proofs are the same as in \cite{CaruvanaHolshouser}. 
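For a concrete instance of the lemma relating refinement to cover inclusion, consider the families of finite and compact subsets; the following standard observation is recorded here only for illustration. \begin{remark} Since every finite subset of \(X\) is contained in a compact subset of \(X\), we have \([X]^{<\omega} \prec K(X)\). Hence \(\mathcal O(X, K(X)) \subseteq \mathcal O(X, [X]^{<\omega})\); that is, every \(k\)-cover of \(X\) is an \(\omega\)-cover of \(X\), so \(\mathcal K_X \subseteq \Omega_X\). \end{remark}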
{\color{red} Lemmas \ref{lemma:PawlikowskiA} and \ref{lemma:Pawlikowski} are only known to be true if both cover types are \(\omega\)-covers or if both cover types are \(k\)-covers. See \href{https://arxiv.org/abs/2102.00296}{arXiv:2102.00296} for a proof of the \(k\)-covers case. \begin{lemma} \label{lemma:PawlikowskiA} Assume \(\mathcal A \prec \mathcal B\) and \(S_{\text{fin}}(\mathcal O(X,\mathcal A),\mathcal O(X,\mathcal B))\). Then \(\One \not\uparrow G_{\text{fin}}(\mathcal O(X,\mathcal A),\Lambda(X,\mathcal B))\). Moreover, \(\One \uparrow G_{\text{fin}}(\mathcal O(X, \mathcal A), \mathcal O(X, \mathcal B))\) if and only if \(\One \underset{\text{pre}}{\uparrow} G_{\text{fin}}(\mathcal O(X, \mathcal A), \mathcal O(X, \mathcal B))\). \end{lemma} \begin{lemma}\label{lemma:Pawlikowski} Assume \(\mathcal A \prec \mathcal B\) and \(S_{1}(\mathcal O(X,\mathcal A),\mathcal O(X,\mathcal B))\). Then \(\One \not\uparrow G_{1}(\mathcal O(X,\mathcal A),\Lambda(X,\mathcal B))\). Moreover, \(\One \uparrow G_{1}(\mathcal O(X, \mathcal A), \mathcal O(X, \mathcal B))\) if and only if \(\One \underset{\text{pre}}{\uparrow} G_{1}(\mathcal O(X, \mathcal A), \mathcal O(X, \mathcal B))\). \end{lemma} } In \cite{TkachukFE}, Tkachuk showed that \(\One \uparrow G_1([X]^{<\omega}, \neg\mathcal O_X)\) if and only if \(\One \uparrow G_1([X]^{<\omega},\neg\Gamma_X)\). The authors generalized this result to \(\mathcal O(X,\mathcal A)\) in \cite{CaruvanaHolshouser}, assuming that \(\mathcal A\) is an ideal. Here we show that one only needs to assume that \(\mathcal A\) is an ideal base. 
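To see that replacing ``ideal'' with ``ideal-base'' is a genuine weakening, we record a standard example. \begin{remark} The family \(K(X)\) is always an ideal-base, since the union of two non-empty compact sets is again a non-empty compact set. However, \(K(X)\) is generally not an ideal, as it need not be closed under non-empty subsets: for instance, \((0,1) \subseteq [0,1]\) in \(\mathbb R\), yet \((0,1)\) is not compact. \end{remark}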
\begin{lemma} \label{lem:PreviousLemma} For any strategy \(\sigma\) for One in \(G_1(\mathcal A, \mathcal B)\) where \(\mathcal A\) and \(\mathcal B\) are collections, define \[ \text{play}_\sigma = \left\{ \langle x_0 , x_1 , \ldots , x_n \rangle : (n \in \omega) \wedge (\forall \ell < n)\left[ x_\ell \in \sigma(\langle x_j : j < \ell \rangle) \right] \right\} \subseteq \left(\bigcup \mathcal A\right)^{<\omega} \] and \[ \text{play}_\sigma^\omega = \left\{ \langle x_n : n \in \omega \rangle : (\forall n \in \omega)\left[ \langle x_\ell : \ell \leq n \rangle \in \text{play}_\sigma \right] \right\} \subseteq \left( \bigcup \mathcal A \right)^\omega \] If \(\sigma\) is a winning strategy, then for any \(\langle x_n : n \in \omega \rangle \in \text{play}_\sigma^\omega\), \(\{ x_n : n \in \omega \} \not\in \mathcal B\). \end{lemma} \begin{proof} Let \(\langle x_n : n \in \omega \rangle \in \text{play}_\sigma^\omega\). Let \(A_0 = \sigma(\emptyset)\) and notice that \(x_0 \in A_0\) since \(\langle x_0 \rangle \in \text{play}_\sigma\). Now suppose we have \(A_0 , A_1 , \ldots , A_n \in \mathcal A\) defined so that \(x_\ell \in A_\ell = \sigma(\langle x_j : j < \ell \rangle)\). Let \(A_{n+1} = \sigma(\langle x_0,x_1, \ldots , x_n \rangle )\). We claim that \(x_{n+1} \in A_{n+1}\). To see this, we know that \(\langle x_0 , x_1 , \ldots , x_{n+1} \rangle \in \text{play}_\sigma\) so \(x_{n+1} \in \sigma(\langle x_j : j < n+1 \rangle) = A_{n+1}\). Hence, the \(x_n\) arise from a single run of the game according to \(\sigma\). Since \(\sigma\) is winning for One, \(\{x_n:n\in\omega\} \not\in\mathcal B\). \end{proof} \begin{proposition} Let \(\mathcal A\) and \(\mathcal B\) be collections. Set \[ \mathcal B_\Gamma = \{B \in \mathcal B : (\mbox{for all infinite } B' \subseteq B)[B' \in \mathcal B]\} \] If \(\mathcal A\) is a filter base, then \(\One \uparrow G_1(\mathcal A, \neg \mathcal B)\) if and only if \(\One \uparrow G_1(\mathcal A, \neg \mathcal B_\Gamma)\). 
\end{proposition} \begin{proof} Let \(s\) be a winning strategy for One in \(G_1(\mathcal A, \neg\mathcal B)\). For \(\langle x_0,\cdots,x_n \rangle \in \mbox{play}_s\), define \(\gamma(x_0,\cdots,x_n) \in \mathcal A\) to be so that \[ \gamma(x_0,\cdots,x_n) \subseteq \bigcap_{j=0}^n s(x_0,\cdots,x_j). \] Now we will define a winning strategy \(\sigma\) for One in \(G_1(\mathcal A, \neg\mathcal B_\Gamma)\). First set \(\sigma(\emptyset) = s(\emptyset) = A_0\). Now suppose we have defined \(\sigma(x_0,\cdots,x_{n-1})\) for all \(x_0,\cdots,x_{n-1}\) satisfying \(x_0 \in \sigma(\emptyset)\), \(x_1 \in \sigma(x_0)\), and so on. Suppose also that \(\sigma\) has been defined in such a way that for a fixed \(x_n \in \sigma(x_0,\cdots,x_{n-1})\), \begin{enumerate}[label=(\roman*)] \item for any \(0 \leq j_0 < j_1 < \cdots < j_k \leq n\), \(\langle x_{j_0}, x_{j_1}, \cdots, x_{j_k} \rangle \in \mbox{play}_s\), and \item for any \(0 \leq j_0 < j_1 < \cdots < j_k \leq \ell < n\), \(x_{\ell+1} \in \gamma(x_{j_0},x_{j_1},\cdots,x_{j_k})\). \end{enumerate} Define \(A_{n+1} \in \mathcal A\) to be so that \[ A_{n+1} \subseteq \bigcap \{\gamma(x_{j_0},x_{j_1},\cdots,x_{j_k}) : 0 \leq j_0 < j_1 < \cdots < j_k \leq n\} \] Then set \(\sigma(x_0,\cdots,x_n) = A_{n+1}\). We check that this definition satisfies the two properties relative to \(n+1\). Fix \(x_{n+1} \in A_{n+1}\). Let \(0 \leq j_0 < j_1 < \cdots < j_k \leq n+1\). Notice that \(\langle x_{j_0},x_{j_1},\cdots,x_{j_{k-1}} \rangle \in \mbox{play}_s\) by the inductive hypothesis. So let \(A^*_{j_m} = s(x_{j_0},\cdots,x_{j_m})\) for \(0 \leq m < k\) and \[ A^*_{j_k} = s(x_{j_0},x_{j_1},\cdots,x_{j_{k-1}}). \] It follows that \(A_{n+1} \subseteq A^*_{j_k}\) and that \(x_{n+1} \in A^*_{j_k}\). Hence, \[ A^*_{j_0}, x_{j_0}, \cdots, A^*_{j_k}, x_{j_k} \] is a play according to \(s\). The second property holds by the definition of \(\sigma\). This completes the definition of \(\sigma\). 
We now show that \(\sigma\) is a winning strategy. Suppose \(A_0, x_0, A_1, x_1, \cdots\) is a full run of the game \(G_1(\mathcal A, \neg\mathcal B_\Gamma)\) played according to \(\sigma\). Suppose, by way of contradiction, that there is an infinite \(B' \subseteq \{x_n : n \in \omega\}\) so that \(B' \notin \mathcal B\). Say \(B' = \{x_{j_n} : n \in \omega\}\). Then, by the construction of \(\sigma\), \(\langle x_{j_0}, \cdots, x_{j_n} \rangle \in \mbox{play}_s\) for all \(n \in \omega\). Hence, \(\langle x_{j_n} : n \in \omega \rangle \in \mbox{play}^\omega_s\), and so by Lemma \ref{lem:PreviousLemma}, \(\{x_{j_n} : n \in \omega\} = B' \in \mathcal B\), a contradiction. Thus \(\{x_n : n \in \omega\} \in \mathcal B_\Gamma\), and \(\sigma\) is a winning strategy. The other direction of the proof is obvious. \end{proof} \begin{corollary} \label{lem:Open=Gamma} Let \(\mathcal A\) be an ideal-base. Then One has a winning strategy for the game \(G_1(\mathscr N[\mathcal A], \neg\mathcal O(X,\mathcal B))\) if and only if One has a winning strategy for \(G_1(\mathscr N[\mathcal A],\neg\Gamma(X,\mathcal B))\). The same is true for predetermined strategies. \end{corollary} \begin{proof} Notice that if \(\mathcal A\) is an ideal-base, then \(\mathscr N[\mathcal A]\) is a filter base. Also notice that \(\mathcal O(X,\mathcal B)_\Gamma\) is the same as \(\Gamma(X, \mathcal B)\). This shows that \(\One \uparrow G_1(\mathscr N[\mathcal A], \neg\mathcal O(X,\mathcal B)) \iff \One \uparrow G_1(\mathscr N[\mathcal A],\neg\Gamma(X,\mathcal B))\). The fact that the results hold for predetermined strategies follows from a modification of the proof of the proposition. Simply choose \(\sigma(n) \in \mathscr N[\mathcal A]\) so that \[ \sigma(n) \subseteq s(0) \cap \cdots \cap s(n) \] and check that this works. \end{proof} \section{An Order on Single Selection Games} \begin{definition} Let \(\mathcal A\), \(\mathcal B\), \(\mathcal C\), and \(\mathcal D\) be collections and \(\alpha\) be an ordinal. 
Say that \(G^\alpha_1(\mathcal A, \mathcal C) \leq_{\Two} G^\alpha_1(\mathcal B, \mathcal D)\) if \begin{itemize} \item \(\Two \underset{\text{mark}}{\uparrow} G^\alpha_1(\mathcal A, \mathcal C) \implies \Two \underset{\text{mark}}{\uparrow} G^\alpha_1(\mathcal B, \mathcal D)\), \item \(\Two \uparrow G^\alpha_1(\mathcal A, \mathcal C) \implies \Two \uparrow G^\alpha_1(\mathcal B, \mathcal D)\), \item \(\One \not\uparrow G^\alpha_1(\mathcal A, \mathcal C) \implies \One \not\uparrow G^\alpha_1(\mathcal B, \mathcal D)\), and \item \(\One \not\underset{\text{pre}}{\uparrow} G^\alpha_1(\mathcal A, \mathcal C) \implies \One \not\underset{\text{pre}}{\uparrow} G^\alpha_1(\mathcal B, \mathcal D)\). \end{itemize} \end{definition} Notice that if \(G^\alpha_1(\mathcal A, \mathcal C) \leq_{\Two} G^\alpha_1(\mathcal B, \mathcal D)\) and \(G^\alpha_1(\mathcal B, \mathcal D) \leq_{\Two} G^\alpha_1(\mathcal A, \mathcal C)\), then the games are equivalent. Also notice that \(\leq_{\Two}\) is transitive. \begin{theorem}\label{Translation} Let \(\mathcal A\), \(\mathcal B\), \(\mathcal C\), and \(\mathcal D\) be collections and \(\alpha\) be an ordinal. Suppose there are functions \begin{itemize} \item \(\overleftarrow{T}_{\One,\xi}:\mathcal B \to \mathcal A\) and \item \(\overrightarrow{T}_{\Two,\xi}: \left(\bigcup \mathcal A\right) \times \mathcal B \to \bigcup \mathcal B\) \end{itemize} for each \(\xi \in \alpha\), so that \begin{enumerate}[label=(Tr\arabic*)] \item \label{TranslationA} if \(x \in \overleftarrow{T}_{\One,\xi}(B)\), then \(\overrightarrow{T}_{\Two,\xi}(x,B) \in B\), and \item \label{TranslationB} if \(x_\xi \in \overleftarrow{T}_{\One,\xi}(B_\xi)\) and \(\{x_\xi : \xi \in \alpha\} \in \mathcal C\), then \(\{\overrightarrow{T}_{\Two,\xi}(x_\xi,B_\xi) : \xi \in \alpha\} \in \mathcal D\). \end{enumerate} Then \(G^\alpha_1(\mathcal A,\mathcal C) \leq_{\Two} G^\alpha_1(\mathcal B, \mathcal D)\). 
\end{theorem} \begin{proof} Suppose \(\Two \underset{\text{mark}}{\uparrow} G^\alpha_1(\mathcal A, \mathcal C)\) and let \(\tau\) be a winning Markov strategy for Two. We define a winning Markov strategy for Two in \(G^\alpha_1(\mathcal B, \mathcal D)\). Toward this end, let \(\{B_\xi : \xi \in \alpha\} \subseteq \mathcal B\) be arbitrary and set \(A_\xi = \overleftarrow{T}_{\One,\xi}(B_\xi)\) and \(x_\xi = \tau(A_\xi , \xi)\). Define \(y_\xi = \overrightarrow{T}_{\Two,\xi}(x_\xi,B_\xi)\); by \ref{TranslationA}, \(y_\xi \in B_\xi\), so this defines a Markov strategy for Two. By \ref{TranslationB}, \[ \{ x_\xi : \xi \in \alpha\} \in \mathcal C \implies \{ y_\xi : \xi \in \alpha \} \in \mathcal D. \] Since \(\tau\) is winning, the antecedent holds, and so this Markov strategy is winning. Suppose \(\Two \uparrow G^\alpha_1(\mathcal A, \mathcal C)\) and let \(\tau\) be a winning strategy for Two. We define a strategy \(t\) for Two in \(G_1^\alpha(\mathcal B, \mathcal D)\) recursively. Suppose One plays \(B_0\). Then \(A_0 := \overleftarrow{T}_{\One,0}(B_0)\) is an initial play of \(G^\alpha_1(\mathcal A, \mathcal C)\). So \(x_0 := \tau(A_0) \in A_0\). Define \[ t(B_0) = y_0 = \overrightarrow{T}_{\Two,0}(x_0,B_0). \] For \(\beta \in \alpha\), suppose we have \(\{A_\xi: \xi < \beta\}\), \(\{B_\xi : \xi < \beta \}\), \(\{ x_\xi : \xi < \beta\}\), and \(\{ y_\xi : \xi < \beta\}\) defined. Given \(B_\beta \in \mathcal B\), let \(A_\beta = \overleftarrow{T}_{\One,\beta}(B_\beta)\) and \(x_\beta = \tau(A_0, \ldots , A_\beta) \in A_\beta\). Then set \[ t(B_0 , \ldots , B_\beta) = y_\beta = \overrightarrow{T}_{\Two,\beta}(x_\beta,B_\beta). \] This concludes the definition of \(t\). By \ref{TranslationA}, since \(x_\xi \in \overleftarrow{T}_{\One,\xi}(B_\xi)\), it follows that \(y_\xi \in B_\xi\). Using \ref{TranslationB}, we see that \[ \{ x_\xi : \xi \in \alpha \} \in \mathcal C \implies \{ y_\xi : \xi \in \alpha \} \in \mathcal D. \] As \(\tau\) is winning, \(t\) is a winning strategy. Suppose \(\One \uparrow G^\alpha_1(\mathcal B, \mathcal D)\) and let \(\sigma\) witness this. We will develop a strategy \(s\) for One in \(G^\alpha_1(\mathcal A, \mathcal C)\). 
Let \(B_0 = \sigma(\emptyset)\) and \(s(\emptyset) = A_0 = \overleftarrow{T}_{\One,0}(B_0)\). Then, for \(\beta \in \alpha\), suppose we have \(\{A_\xi:\xi \leq \beta\} \subseteq \mathcal A\), \(\{ B_\xi: \xi \leq \beta \} \subseteq \mathcal B\), \(\{x_\xi: \xi < \beta\}\), and \(\{y_\xi:\xi < \beta\}\) defined so that \(A_\xi = \overleftarrow{T}_{\One,\xi}(B_\xi)\), \(x_\xi \in A_\xi\), and \(y_\xi = \overrightarrow{T}_{\Two,\xi}(x_\xi,B_\xi)\). Suppose \(x_\beta \in A_\beta\). Then set \(y_\beta = \overrightarrow{T}_{\Two,\beta}(x_\beta,B_\beta) \in B_\beta\), \(B_{\beta+1} = \sigma( y_0 , \ldots , y_\beta)\), and \[ s( x_0 , \ldots , x_\beta) = A_{\beta+1} = \overleftarrow{T}_{\One,\beta+1}(B_{\beta+1}). \] After the run of the game is completed, let \(x_0 \in s(\emptyset)\) and \(x_{\xi+1} \in s(x_0,\cdots,x_\xi)\) for all \(\xi \in \alpha\). Then \ref{TranslationA} gives us that \(\overrightarrow{T}_{\Two,\xi}(x_\xi,B_\xi) = y_\xi \in B_\xi\). As \(\sigma\) is a winning strategy for One in \(G^\alpha_1(\mathcal B, \mathcal D)\), \ref{TranslationB} yields \[ \{ y_\xi : \xi \in \alpha \} \not\in \mathcal D \implies \{ x_\xi : \xi \in \alpha \} \not\in \mathcal C. \] Suppose \(\One \underset{\text{pre}}{\uparrow} G^\alpha_1(\mathcal B, \mathcal D)\) and let \(\{B_\xi : \xi \in \alpha \}\) be a winning pre-determined strategy for One. Let \(A_\xi = \overleftarrow{T}_{\One,\xi}(B_\xi)\) for each \(\xi\in\alpha\). We will show that \(\{A_\xi:\xi\in\alpha\}\) is a winning pre-determined strategy for One in \(G^\alpha_1(\mathcal A, \mathcal C)\). Let \(x_\xi \in A_\xi\) for all \(\xi \in \alpha\) and let \(y_\xi = \overrightarrow{T}_{\Two,\xi}(x_\xi,B_\xi)\). By \ref{TranslationA}, \(y_\xi \in B_\xi\) for all \(\xi\in\alpha\) and so \(\{y_\xi:\xi\in\alpha\} \not\in \mathcal D\). By \ref{TranslationB}, we see that \(\{x_\xi : \xi \in \alpha\} \not\in \mathcal C\). \end{proof} In some situations, the use of both maps is not necessary, as the translation of player One's moves simply comes from lifting the translation of player Two's selections. 
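To preview how such a lifting looks in practice, the translations of Corollary \ref{corollary:Roth=CFT=CDFT} below arise from the single map \(\phi(f,n) = f^{-1}[(-2^{-n},2^{-n})]\); in sketch form, the induced translation functions are \[ \overleftarrow{T}_{\One,n}(F) = \{ f^{-1}[(-2^{-n},2^{-n})] : f \in F \} \quad\text{and}\quad \overrightarrow{T}_{\Two,n}(U,F) = \text{some } f \in F \text{ with } f^{-1}[(-2^{-n},2^{-n})] = U, \] so that verifying \ref{TranslationA} and \ref{TranslationB} reduces to routine computations with the sets \(f^{-1}[(-2^{-n},2^{-n})]\).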
\begin{corollary}\label{corollary:EasyTranslate} Let \(\mathcal A\), \(\mathcal B\), \(\mathcal C\), and \(\mathcal D\) be collections. Suppose there is a map \(\phi : \left( \bigcup \mathcal B \right) \times \omega \to \left( \bigcup \mathcal A \right)\) so that \begin{itemize} \item for all \(B \in \mathcal B\) and all \(n \in \omega\), \(\{ \phi(y,n) : y \in B\} \in \mathcal A\), and \item if \(\{ \phi(y_n,n) : n \in \omega \} \in \mathcal C\), then \(\{ y_n : n \in \omega \} \in \mathcal D\). \end{itemize} Then \(G_1(\mathcal A, \mathcal C) \leq_\Two G_1(\mathcal B, \mathcal D)\). \end{corollary} \begin{proof} Define \(\overleftarrow{T}_{\One,n}:\mathcal B \to \mathcal A\) by \[ \overleftarrow{T}_{\One,n}(B) = \phi[B \times \{n\}]. \] From the first assumption on \(\phi\) we know that \(\overleftarrow{T}_{\One,n}\) really does produce objects in \(\mathcal A\). For \(x \in \phi[B \times \{n\}]\) and \(n \in \omega\), choose \(y_{x,n} \in B\) so that \(\phi(y_{x,n},n) = x\). Define \(\overrightarrow{T}_{\Two,n}:\left(\bigcup \mathcal A\right) \times \mathcal B \to \bigcup \mathcal B\) by \[ \overrightarrow{T}_{\Two,n}(x,B) = y_{x,n} \] if possible and otherwise set it to be an arbitrary element of \(\bigcup \mathcal B\). So if \(x \in \overleftarrow{T}_{\One,n}(B)\), then \(\overrightarrow{T}_{\Two,n}(x,B) = y_{x,n} \in B\). Now suppose \(x_n \in \phi[B_n \times \{n\}]\) and \(\{x_n : n \in \omega\} \in \mathcal C\). Then \(\{\phi(y_{x_n,n},n) : n \in \omega\} \in \mathcal C\). By the second assumption on \(\phi\), it follows that \(\{y_{x_n,n} : n \in \omega\} \in \mathcal D\). Thus \(\{\overrightarrow{T}_{\Two,n}(x_n,B_n) : n \in \omega\} \in \mathcal D\). By Theorem \ref{Translation}, this completes the proof. \end{proof} \section{Equivalent and Dual Classes of Games} \begin{corollary}\label{corollary:Roth=CFT=CDFT} Let \(X\) be a Tychonoff space and \(\mathcal A, \mathcal B \subseteq \wp(X)\). 
Then \begin{enumerate}[label=(\roman*)] \item \label{RothA} \(G_1(\mathcal O(X,\mathcal A), \Lambda(X, \mathcal B)) \leq_{\Two} G_1(\Omega_{C_{\mathcal A}(X), \mathbf{0}}, \Omega_{C_{\mathcal B}(X),\mathbf{0}})\), \item \label{RothB} \(G_1(\Omega_{C_{\mathcal A}(X), \mathbf{0}}, \Omega_{C_{\mathcal B}(X),\mathbf{0}}) \leq_{\Two} G_1(\mathcal D_{C_{\mathcal A}(X)}, \Omega_{C_{\mathcal B}(X),\mathbf{0}})\), and \item \label{RothC} if \(\mathcal A\) consists of closed sets and \(X\) is \(\mathcal A\)-normal, then \[ G_1(\mathcal D_{C_{\mathcal A}(X)}, \Omega_{C_{\mathcal B}(X),\mathbf{0}}) \leq_{\Two} G_1(\mathcal O(X,\mathcal A), \Lambda(X, \mathcal B)). \] \end{enumerate} Thus if \(\mathcal A\) consists of closed sets and \(X\) is \(\mathcal A\)-normal, then the three games are equivalent. \end{corollary} \begin{proof} Let \(\phi : C_{\mathcal A}(X) \times \omega \to \mathscr T_X\) be defined by \(\phi(f,n) = f^{-1}[(-2^{-n} , 2^{-n})]\). Suppose \(F \in \Omega_{C_{\mathcal A}(X) , \mathbf 0}\) and let both \(A \in \mathcal A\) and \(n \in \omega\) be arbitrary. Choose \(f \in F\) so that \(f \in [\mathbf 0; A , 2^{-n}]\) and notice that \(A \subseteq f^{-1}[(-2^{-n},2^{-n})]\). Hence, \(\{ \phi(f,n) : f \in F\} \in \mathcal O(X,\mathcal A)\). Next, suppose \(\{ \phi(f_n , n) : n \in \omega\} \in \Lambda(X,\mathcal B)\). Let \(B \in \mathcal B\) and \(\varepsilon > 0\) be arbitrary. Then, there is \(n \in \omega\) large enough so that \(B \subseteq f_n^{-1}[(-2^{-n} , 2^{-n})]\) and \(2^{-n} < \varepsilon\). It follows that \(f_n \in [\mathbf 0 ; B , \varepsilon]\), and so \(\{f_n : n \in \omega\} \in \Omega_{C_{\mathcal B}(X),\mathbf 0}\). By Corollary \ref{corollary:EasyTranslate}, this completes \ref{RothA}. Next we check that \(G_1(\Omega_{C_{\mathcal A}(X), \mathbf{0}}, \Omega_{C_{\mathcal B}(X),\mathbf{0}}) \leq_{\Two} G_1(\mathcal D_{C_{\mathcal A}(X)}, \Omega_{C_{\mathcal B}(X),\mathbf{0}})\). As \(\mathcal D_{C_{\mathcal A}(X)} \subseteq \Omega_{C_{\mathcal A}(X), \mathbf{0}}\), this is true. 
Simply have Two use the exact same counter-play or strategy. For \ref{RothC}, define \begin{itemize} \item \(\overleftarrow{T}_{\One,n}:\mathcal O(X,\mathcal A) \to \mathcal D_{C_{\mathcal A}(X)}\) by \[ \overleftarrow{T}_{\One,n}(\mathcal U) = \{f \in C_{\mathcal A}(X) : (\exists U \in \mathcal U)[f[X \smallsetminus U] = \{1\}]\} \] \item \(\overrightarrow{T}_{\Two,n}:C_{\mathcal A}(X) \times \mathcal O(X,\mathcal A) \to \mathscr{T}_X\) by \(\overrightarrow{T}_{\Two,n}(f,\mathcal U) = U\), where \(U \in \mathcal U\) is such that \(f[X \smallsetminus U] = \{1\}\) (if possible, otherwise set \(\overrightarrow{T}_{\Two,n}(f,\mathcal U) = X\)). \end{itemize} First check that the functions are well-defined. To see that \(\overleftarrow{T}_{\One,n}(\mathcal U)\) is a dense set in \(C_{\mathcal A}(X)\), consider a basic open set \([f; A, \varepsilon]\). Since \(\mathcal U \in \mathcal O(X, \mathcal A)\), there is a \(U \in \mathcal U\) so that \(A \subseteq U\). Since \(X\) is \(\mathcal A\)-normal, we can find a continuous function \(g:X \to [0,1]\) so that \(g[A] = \{0\}\) and \(g[X \smallsetminus U] = \{1\}\). Define \(h = f(1-g) + g\). Then \(h\restriction_A = f\restriction_A\) and \(h[X \smallsetminus U] = \{1\}\). So \(h \in [f;A,\varepsilon] \cap \overleftarrow{T}_{\One,n}(\mathcal U)\). This shows that \(\overleftarrow{T}_{\One,n}(\mathcal U)\) is dense. It is clear that \(\overrightarrow{T}_{\Two,n}\) maps into the appropriate space. We next check \ref{TranslationA}. Suppose \(f \in \overleftarrow{T}_{\One,n}(\mathcal U)\). We need to check that \(\overrightarrow{T}_{\Two,n}(f,\mathcal U) \in \mathcal U\). Because \(f \in \overleftarrow{T}_{\One,n}(\mathcal U)\), we can find a \(U \in \mathcal U\) so that \(f[X \smallsetminus U] = \{1\}\). Thus \(\overrightarrow{T}_{\Two,n}(f,\mathcal U) = U \in \mathcal U\). Now we check \ref{TranslationB}, that is, that the \(\overrightarrow{T}_{\Two,n}\) translate from \(\Omega_{C_{\mathcal B}(X),\mathbf{0}}\) to \(\Lambda(X,\mathcal B)\). 
Suppose \(f_n \in \overleftarrow{T}_{\One,n}(\mathcal U_n)\) and \[ \{f_n : n \in \omega\} \in \Omega_{C_{\mathcal B}(X),\mathbf{0}}. \] We need to see that \(\{\overrightarrow{T}_{\Two,n}(f_n, \mathcal U_n) : n \in \omega\} \in \Lambda(X,\mathcal B)\). Notice \(\overrightarrow{T}_{\Two,n}(f_n,\mathcal U_n) = U_n \in \mathcal U_n\) with the property that \(f_n[X \smallsetminus U_n] = \{1\}\). Let \(B \in \mathcal B\). Then there is an \(n_0\) so that \(f_{n_0} \in [\mathbf 0;B,1]\). Thus \(B \subseteq f_{n_0}^{-1}[(-1,1)]\), and so \(B \cap (X \smallsetminus U_{n_0}) = \emptyset\). Therefore \(B \subseteq U_{n_0}\). There is an \(n_1 > n_0\) so that \(f_{n_1} \in [\mathbf 0;B,1] \smallsetminus \{f_k : k \leq n_0\}\), and so \(B \subseteq U_{n_1}\). Continuing this process inductively, we see that \(B\) is covered infinitely many times and that \(\{U_n : n \in \omega\} \in \Lambda(X,\mathcal B)\). \end{proof} \begin{corollary}\label{corollary:PO=Gru=CD} Let \(X\) be a Tychonoff space and \(\mathcal A, \mathcal B \subseteq \wp(X)\). Then \begin{enumerate}[label=(\roman*)] \item \label{POA} \(G_1(\mathscr{N}_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X),\mathbf 0}) \leq_{\Two} G_1(\mathscr{N}[\mathcal A], \neg \Lambda(X,\mathcal B))\), \item \label{POB} \(G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \neg \Omega_{C_{\mathcal B}(X),\mathbf 0}) \leq_{\Two} G_1(\mathscr{N}_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\), \item \label{POC} \(G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \mbox{CD}_{C_{\mathcal B}(X)}) \leq_{\Two} G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\), and \item \label{POD} if \(\mathcal A\) consists of closed sets, \(X\) is \(\mathcal A\)-normal, and \(\mathcal B\) consists of \(\mathbb R\)-bounded sets, then \[ G_1(\mathscr{N}[\mathcal A], \neg \Lambda(X,\mathcal B)) \leq_{\Two} G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \mbox{CD}_{C_{\mathcal B}(X)}). 
\] \end{enumerate} Thus if \(\mathcal A\) consists of closed sets, \(X\) is \(\mathcal A\)-normal, and \(\mathcal B\) consists of \(\mathbb R\)-bounded sets, then all these games are equivalent. \end{corollary} \begin{proof} First we check that \(G_1(\mathscr{N}_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X),\mathbf 0}) \leq_{\Two} G_1(\mathscr{N}[\mathcal A], \neg \Lambda(X,\mathcal B))\). Define \begin{itemize} \item \(\overleftarrow{T}_{\One,n}:\mathscr{N}[\mathcal A] \to \mathscr{N}_{C_{\mathcal A}(X)}(\mathbf 0)\) by \[ \overleftarrow{T}_{\One,n}(\mathscr{N}(A)) = [\mathbf 0; A, 2^{-n}] \] \item \(\overrightarrow{T}_{\Two,n}:C_{\mathcal A}(X) \times \mathscr{N}[\mathcal A] \to \mathscr{T}_X\) by \(\overrightarrow{T}_{\Two,n}(f,\mathscr{N}(A)) = f^{-1}[(-2^{-n},2^{-n})]\). \end{itemize} The maps are well-defined since the continuous pre-image of an open set is open. We check \ref{TranslationA}. Suppose \(f \in \overleftarrow{T}_{\One,n}(\mathscr{N}(A))\). We need to check that \(\overrightarrow{T}_{\Two,n}(f,\mathscr{N}(A)) \in \mathscr{N}(A)\), i.e.\ that \(A \subseteq f^{-1}[(-2^{-n},2^{-n})]\). Since \(f \in \overleftarrow{T}_{\One,n}(\mathscr{N}(A)) = [\mathbf 0; A, 2^{-n}]\), \(f[A] \subseteq (-2^{-n},2^{-n})\). Thus \(A \subseteq f^{-1}[(-2^{-n},2^{-n})]\). We check \ref{TranslationB}. Suppose \(f_n \in \overleftarrow{T}_{\One,n}(\mathscr{N}(A_n))\) and that \(\{f_n : n \in \omega\} \notin \Omega_{C_{\mathcal B}(X),\mathbf 0}\). Then \(f_n \in [\mathbf 0; A_n, 2^{-n}]\) and there is a \(B \in \mathcal B\), an \(\varepsilon > 0\), and an \(N \in \omega\) so that for all \(n \geq N\), \(f_n \notin [\mathbf 0; B, \varepsilon]\). We need to show that \(\{f_n^{-1}[(-2^{-n},2^{-n})] : n \in \omega\} \notin \Lambda(X,\mathcal B)\). We proceed by way of contradiction: if this set were in \(\Lambda(X,\mathcal B)\), then \(B\) would be contained in infinitely many of its members, so in particular there is an \(n \geq N\) so that \(2^{-n} < \varepsilon\) and \(B \subseteq f_n^{-1}[(-2^{-n},2^{-n})]\). 
Then \(f_n \in [\mathbf 0; B, 2^{-n}] \subseteq [\mathbf 0; B, \varepsilon]\). This is a contradiction. Next, \(G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \neg \Omega_{C_{\mathcal B}(X),\mathbf 0}) \leq_{\Two} G_1(\mathscr{N}_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\) is true as \(\mathscr{N}_{C_{\mathcal A}(X)}(\mathbf 0) \subseteq \mathscr{T}_{C_{\mathcal A}(X)}\). To see that \(G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \mbox{CD}_{C_{\mathcal B}(X)}) \leq_{\Two} G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\), observe that if Two can create a closed discrete set in response to player One, then Two has avoided having \(\mathbf 0\) as a cluster point. Suppose \(X\) is \(\mathcal A\)-normal and \(\mathcal B\) consists of \(\mathbb R\)-bounded sets. For \(U \in \mathscr{T}_{C_{\mathcal A}(X)}\), \(V \in \mathscr N(A_U)\), and \(n \in \omega\), identify a function \(f_{U,V,n}:X \to \mathbb R\) with the property that \(f_{U,V,n}\restriction_{A_U} = f_U\) and \(f_{U,V,n}[X \smallsetminus V] = \{n\}\). Such a function exists for the following reason. Since \(X\) is \(\mathcal A\)-normal, there is a continuous \(g : X \to [0,1]\) so that \(g[A_U] = \{0\}\) and \(g[X \smallsetminus V] = \{1\}\). Let \(f_{U,V,n} = f_U \cdot (1 - g) + n \cdot g\) and notice that \(f_{U,V,n}\) is as required. Define \begin{itemize} \item \(\overleftarrow{T}_{\One,n}:\mathscr{T}_{C_{\mathcal A}(X)} \to \mathscr{N}[\mathcal A]\) by \(\overleftarrow{T}_{\One,n}(U) = \mathscr N(A_U)\) \item \(\overrightarrow{T}_{\Two,n}:\mathscr{T}_X \times \mathscr{T}_{C_{\mathcal A}(X)} \to C_{\mathcal A}(X)\) by \(\overrightarrow{T}_{\Two,n}(V,U) = f_{U,V,n}\) (if possible, otherwise declare \(\overrightarrow{T}_{\Two,n}(V,U) = \mathbf 0\)). \end{itemize} We check \ref{TranslationA}. Suppose \(V \in \overleftarrow{T}_{\One,n}(U) = \mathscr N(A_U)\). We need to check that \(\overrightarrow{T}_{\Two,n}(V,U) = f_{U,V,n} \in U\). 
Since \(V \in \mathscr N(A_U)\), \(f_{U,V,n}\) was chosen so that \(f_{U,V,n}\restriction_{A_U} = f_U\), which implies that \(f_{U,V,n} \in U\). We check \ref{TranslationB}. Suppose \(V_n \in \overleftarrow{T}_{\One,n}(U_n) = \mathscr N(A_n)\), where \(A_n = A_{U_n}\), and \(\{V_n : n \in \omega\} \notin \Lambda (X,\mathcal B)\). Then there is a \(B \in \mathcal B\) and \(N\) so that for all \(n \geq N\), \(B \not \subseteq V_n\). Say \(\overrightarrow{T}_{\Two,n}(V_n,U_n) = g_n\) and that \(f_{U_n} = f_n\). Then \(g_n\restriction_{A_n} = f_n\) and \(g_n[X \smallsetminus V_n] = \{n\}\). We proceed by way of contradiction: suppose \(\{g_n : n \in \omega\}\) is not closed and discrete in \(C_{\mathcal B}(X)\), and let \(f \in C_{\mathcal B}(X)\) be so that for all \(n\), there is a \(k \geq \max\{N,n\}\) so that \(g_k \in [f;B,2^{-n}]\). Since \(B \not\subseteq V_k\), there is an \(x_k \in B \smallsetminus V_k\). Thus \(|g_k(x_k) - f(x_k)| \leq 2^{-n}\), and since \(g_k(x_k) = k\), \(f(x_k) \geq k-1\). Proceeding in this way, we can produce an unbounded sequence \(k_n\) and a collection of points \(x_{k_n} \in B\) so that \(f(x_{k_n}) \geq k_n - 1\). But then \(f\) is a continuous function where \(f[B]\) is unbounded. So \(B\) is not \(\mathbb R\)-bounded, which is a contradiction. \end{proof} \begin{corollary}\label{corollary:Gru<PointGamma} Let \(X\) be a Tychonoff space and \(\mathcal A, \mathcal B \subseteq \wp(X)\). Then \begin{enumerate}[label=(\roman*)] \item \label{gruA} \(G_1(\mathscr{N}_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Gamma_{C_{\mathcal B}(X),\mathbf 0}) \leq_{\Two} G_1(\mathscr{N}[\mathcal A], \neg \Gamma(X,\mathcal B))\), and \item \label{gruB} \(G_1(\mathscr{N}_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X),\mathbf 0}) \leq_{\Two} G_1(\mathscr{N}_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Gamma_{C_{\mathcal B}(X),\mathbf 0})\). \end{enumerate} \end{corollary} \begin{proof} Part \ref{gruA} of this corollary is essentially the same as \ref{POA} of Corollary \ref{corollary:PO=Gru=CD}. 
To see that \(G_1(\mathscr{N}_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X),\mathbf 0}) \leq_{\Two} G_1(\mathscr{N}_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Gamma_{C_{\mathcal B}(X),\mathbf 0})\), simply notice that if Two can avoid clustering around \(\mathbf 0\), then Two can certainly avoid converging to \(\mathbf 0\). \end{proof} \begin{definition} For a collection \(\mathcal A\), we say that \(\mathcal B \subseteq \mathcal A\) is a \textbf{selection basis} for \(\mathcal A\) if \[ (\forall A \in \mathcal A)(\exists B \in \mathcal B)(B \subseteq A). \] \end{definition} \begin{definition} For collections \(\mathcal A\) and \(\mathcal R\), we say that \(\mathcal R\) is a \textbf{reflection} of \(\mathcal A\) if \[ \{ \text{ran}(f) : f \in \text{choice}(\mathcal R) \} \] is a selection basis for \(\mathcal A\). \end{definition} \begin{theorem}\cite[Corollary 17]{ClontzDuality} \label{thm:ClontzDuality} If \(\mathcal R\) is a reflection of \(\mathcal A\), then \(G_1(\mathcal A, \mathcal B)\) and \(G_1(\mathcal R, \neg\mathcal B)\) are dual. \end{theorem} \begin{corollary}\cite[Corollary 21]{CaruvanaHolshouser} For any collection \(\mathcal A\) of subsets of a space \(X\) and any collection \(\mathcal B\), the games \(G_1(\mathcal O(X,\mathcal A),\mathcal B)\) and \(G_1(\mathscr N[\mathcal A] , \neg\mathcal B)\) are dual. \end{corollary} \begin{proposition} Suppose \(X\) is a topological space, \(x \in X\), and \(\mathcal B \subseteq \wp(X)\). Then \(G_1(\Omega_{X,x}, \mathcal B)\) and \(G_1(\mathscr N(x), \neg \mathcal B)\) are dual. \end{proposition} \begin{proof} It suffices to show that \[ \{\mbox{ran}(C) : C \in \mbox{choice}(\mathscr N(x))\} \subseteq \Omega_{X,x} \] and is a selection basis for \(\Omega_{X,x}\). Clearly, each \(\mbox{ran}(C) \in \Omega_{X,x}\). Now let \(F \in \Omega_{X,x}\). Then for each \(U \in \mathscr N(x)\), there is an \(x_U \in F \cap U\). Define a choice function \(C\) for \(\mathscr N(x)\) by \(C(U) = x_U\). 
Notice that \[ \mbox{ran}(C) = \{x_U : U \in \mathscr N(x)\} \subseteq F. \] \end{proof} \begin{corollary}\label{corollary:CFTDual} \(G_1(\Omega_{C_{\mathcal A}(X),\mathbf 0}, \Omega_{C_{\mathcal B}(X),\mathbf 0})\) and \(G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\) are dual. \end{corollary} \begin{proposition}\label{prop:CD->CDFT} \(G_1(\mathcal D_{C_{\mathcal A}(X)}, \Omega_{C_{\mathcal B}(X),\mathbf 0})\) and \(G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\) are dual. Therefore whenever \(\Two \uparrow G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \mbox{CD}_{C_{\mathcal B}(X)})\), we have that \(\One \uparrow G_1(\mathcal D_{C_{\mathcal A}(X)}, \Omega_{C_{\mathcal B}(X),\mathbf 0})\). This is also true for going from Markov strategies to pre-determined strategies. \end{proposition} \begin{proof} We can use reflection to show that \(G_1(\mathcal D_{C_{\mathcal A}(X)}, \Omega_{C_{\mathcal B}(X),\mathbf 0})\) and \(G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \neg\Omega_{C_{\mathcal B}(X),\mathbf 0})\) are dual and that \(G_1(\mathcal D_{C_{\mathcal A}(X)}, \neg \mbox{CD}_{C_{\mathcal B}(X)})\) and \(G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \mbox{CD}_{C_{\mathcal B}(X)})\) are dual as well. First check that \[ \{\mbox{ran}(C) : C \in \mbox{choice}(\mathscr{T}_{C_{\mathcal A}(X)})\} \subseteq \mathcal D_{C_{\mathcal A}(X)} \] and is a selection basis for \(\mathcal D_{C_{\mathcal A}(X)}\). Clearly, each \(\mbox{ran}(C) \in \mathcal D_{C_{\mathcal A}(X)}\). Now let \(D \in \mathcal D_{C_{\mathcal A}(X)}\). Then for each \(U \in \mathscr{T}_{C_{\mathcal A}(X)}\), there is an \(f_U \in D \cap U\). Define a choice function \(C\) for \(\mathscr{T}_{C_{\mathcal A}(X)}\) by \(C(U) = f_U\). Notice that \[ \mbox{ran}(C) = \{f_U : U \in \mathscr{T}_{C_{\mathcal A}(X)}\} \subseteq D. 
\] Thus \(G_1(\mathcal D_{C_{\mathcal A}(X)}, \mathcal C)\) and \(G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \neg \mathcal C)\) are dual for any \(\mathcal C \subseteq \wp(C_{\mathcal B}(X))\). Therefore, \[ \Two \uparrow G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \mbox{CD}_{C_{\mathcal B}(X)}) \implies \Two \uparrow G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \neg\Omega_{C_{\mathcal B}(X),\mathbf 0}) \iff \One \uparrow G_1(\mathcal D_{C_{\mathcal A}(X)}, \Omega_{C_{\mathcal B}(X),\mathbf 0}). \] The analogous results hold for Markov and pre-determined strategies. \end{proof} \section{Covering Properties} \begin{lemma}\label{lemma:Gru=Cof} Suppose \(X\) is a Tychonoff space and \(\mathcal A, \mathcal B \subseteq \wp(X)\). Then the following are equivalent: \begin{enumerate}[label=(\roman*)] \item \label{OnePre_Gruenhage} \(\One \underset{\text{pre}}{\uparrow} G_1( \mathscr N_{C_{\mathcal A}(X)}(\mathbf 0) , \neg \Gamma_{C_{\mathcal B}(X), \mathbf 0})\), \item \label{OnePre_GruenhageCluster} \(\One \underset{\text{pre}}{\uparrow} G_1( \mathscr N_{C_{\mathcal A}(X)}(\mathbf 0) , \neg \Omega_{C_{\mathcal B}(X), \mathbf 0})\), \item \label{GruenhageCofinal} \(\cof(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0); \mathscr N_{C_{\mathcal{B}}(X)}(\mathbf 0), \supseteq) = \omega\). \end{enumerate} \end{lemma} \begin{proof} Clearly, \ref{OnePre_Gruenhage} implies \ref{OnePre_GruenhageCluster}. Suppose \(\One \underset{\text{pre}}{\uparrow} G_1( \mathscr N_{C_{\mathcal A}(X)}(\mathbf 0) , \neg \Omega_{C_{\mathcal B}(X), \mathbf 0})\). Without loss of generality, One's winning pre-determined strategy plays basic neighborhoods \([\mathbf 0; A_n, \varepsilon_n]\). Now, let \(B \in \mathcal B\), \(\varepsilon > 0\), and consider \([\mathbf 0; B, \varepsilon]\). Suppose, for every \(n\), that \([\mathbf 0; A_n, \varepsilon_n] \not \subseteq [\mathbf 0; B, \varepsilon]\). Then we have functions \(f_n \in [\mathbf 0; A_n, \varepsilon_n] \setminus [\mathbf 0; B, \varepsilon]\). 
Consider the play \([\mathbf 0; A_0, \varepsilon_0], f_0, \cdots\) of the game \(G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X), \mathbf 0})\) according to the winning pre-determined strategy. Because none of the \(f_n\) are in \([\mathbf 0; B, \varepsilon]\), the \(f_n\) fail to accumulate to \(\mathbf 0\) in \(C_{\mathcal B}(X)\). This is a contradiction. So \ref{OnePre_GruenhageCluster} implies \ref{GruenhageCofinal}. Now let \(U_n\) be a sequence of \(C_{\mathcal A}(X)\) neighborhoods of \(\mathbf 0\) which is cofinal in the \(C_{\mathcal B}(X)\) neighborhoods. We can assume without loss of generality that the \(U_n\) are descending. Define a pre-determined strategy \(\sigma\) for player One in \(G_1( \mathscr N_{C_{\mathcal A}(X)}(\mathbf 0) , \neg \Gamma_{C_{\mathcal B}(X), \mathbf 0})\) by \(\sigma(n) = U_n\). Suppose that \(f_n \in U_n\) for all \(n\). Let \([\mathbf 0; B, \varepsilon]\) be an arbitrary \(C_{\mathcal B}(X)\)-neighborhood of \(\mathbf 0\). Then there is an \(N\) so that for all \(n \geq N\), \(U_n \subseteq [\mathbf 0; B, \varepsilon]\). Thus for all \(n \geq N\), \(f_n \in [\mathbf 0; B, \varepsilon]\). So \(f_n \to \mathbf 0\) in \(C_{\mathcal B}(X)\). Therefore \ref{GruenhageCofinal} implies \ref{OnePre_Gruenhage}. \end{proof} \begin{comment} \begin{lemma}\label{lemma:Cof=CD} Suppose \(X\) is a Tychonoff space. Then \begin{enumerate}[label=(\roman*)] \item \label{cof_Space} \(\cof(\mathcal A \times \omega; \mathcal B \times \omega, \subseteq) = \omega\) implies \item \label{cof_Function} \(\cof(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0); \mathscr N_{C_{\mathcal{B}}(X)}(\mathbf 0), \supseteq) = \omega\) implies \item \label{cof_PreOne} \(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \text{CD}_{C_{\mathcal B}(X)})\) \end{enumerate} If \(\mathcal B\) consists of bounded sets and \(\mathcal A\) consists of closed sets, then \ref{cof_Space} - \ref{cof_PreOne} are equivalent. 
\end{lemma} \begin{proof} \ref{cof_Space} \(\implies\) \ref{cof_Function} follows immediately from Lemma \ref{lemma:CofinalityBetweenGroundAndFunctions}. \ref{cof_Function} \(\implies\) \ref{cof_PreOne} follows from Lemma \ref{lemma:Gru=Cof}. \end{proof} \end{comment} The following generalizes V.416 from \cite[p. 460]{TkachukFE}. Moreover, if we replace \(\mathcal A\) with the collection of singletons of \(X\), we obtain Theorem 1 of Gerlits and Nagy, \cite{GerlitsNagy}. \begin{lemma} \label{lemma:PreDeterminedClosed} Assume \(\mathcal A, \mathcal B \subseteq \wp(X)\). Then \(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr N[\mathcal A], \neg\mathcal O(X, \mathcal B))\) if and only if \(\cof(\mathcal A; \mathcal B, \subseteq) \leq \omega\). \end{lemma} \begin{proof} Suppose \(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr N[\mathcal A], \neg \mathcal O(X, \mathcal B))\). Let \(\sigma\) be a winning pre-determined strategy for One in this game. Say \(\mbox{ran}(\sigma) = \{\mathscr N(A_n) : n \in \omega\}\). We claim that \(\{A_n : n \in \omega\}\) is cofinal for \(\mathcal B\). Towards a contradiction, suppose that there is a \(B \in \mathcal B\) so that \(B \not \subseteq A_n\) for all \(n\). Then for each \(n\), we can choose \(x_n \in B \setminus A_n\). Then the sequence \(\mathscr N(A_0), X \setminus \{x_0\}, \cdots\) is a run of \(G_1(\mathscr N[\mathcal A], \neg\mathcal O(X, \mathcal B))\) played according to \(\sigma\). Since \(\sigma\) is winning, \(B \subseteq X \setminus \{x_n\}\) for some \(n\). But then \(x_n \in X \setminus \{x_n\}\), a contradiction. Therefore \(\{A_n : n \in \omega\}\) is cofinal for \(\mathcal B\) and \(\cof(\mathcal A; \mathcal B, \subseteq) \leq \omega\). Suppose \(\cof(\mathcal A; \mathcal B, \subseteq) \leq \omega\). Let \(\{A_n : n \in \omega\}\) witness this. Define a pre-determined strategy \(\sigma\) by \(\sigma(n) = \mathscr N(A_n)\). 
Now suppose \(\sigma(0), U_0, \cdots\) is a play of \(G_1(\mathscr N[\mathcal A], \neg \mathcal O(X, \mathcal B))\) according to \(\sigma\). Let \(B \in \mathcal B\). Then there is an \(n\) so that \(B \subseteq A_n \subseteq U_n\). Thus \(\{U_n : n \in \omega\} \in \mathcal O(X, \mathcal B)\) and \(\sigma\) is winning. \end{proof} The following generalizes a result of Telg{\'{a}}rsky \cite{Telgarsky} and extends Theorem 27 of \cite{CaruvanaHolshouser}. \begin{lemma} \label{lemma:CoveringPropertyPre} Assume \(\mathcal A, \mathcal B \subseteq \wp(X)\), and \(\mathcal A\) is a collection of \(G_\delta\) sets. Then the following are equivalent: \begin{enumerate}[label=(\roman*)] \item \(\One \uparrow G_1(\mathscr N[\mathcal A], \neg\mathcal O(X, \mathcal B))\) \item \(\cof(\mathcal A; \mathcal B, \subseteq) \leq \omega\) \item \(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr N[\mathcal A], \neg\mathcal O(X, \mathcal B))\) \end{enumerate} \end{lemma} \begin{proof} Suppose \(\One \uparrow G_1(\mathscr N[\mathcal A], \neg\mathcal O(X, \mathcal B))\) and let \(\sigma\) be a winning strategy for One. Without loss of generality, One is playing sets from \(\mathcal A\) and Two plays open sets which contain One's play. For every \(A \in \mathcal A\), let \(\mathcal U_A\) be a countable collection of open sets so that \(A = \bigcap \mathcal U_A\). Define a tree in the following way. Let \(T_0 = \{\emptyset\}\). For \(n \in \omega\), we define \[ T_{n+1} = \{ w \concat \langle \sigma(w), U \rangle : w\in T_n \text{ and } U \in \mathcal U_{\sigma(w)} \}. \] Observe that each \(T_n\) is countable as each \(\mathcal U_A\) is countable. Hence, \[ \mathscr F := \bigcup_{n\in\omega} \{ \sigma(w) : w \in T_n \} \] is a countable subset of \(\mathcal A\). By way of contradiction, suppose there is some \(B \in \mathcal B\) so that \(B \not\subseteq A\) for all \(A \in \mathscr F\). Let \(A_0 = \sigma(\emptyset)\). Since \(B \not\subseteq A_0\), there must be some \(x_0 \in B\setminus A_0\). 
As \(A_0 = \bigcap \mathcal U_{A_0}\), there is some \(U_0 \in \mathcal U_{A_0}\) so that \(A_0 \subseteq U_0\) and \(x_0 \not\in U_0\). In general, having chosen \(U_n\), let \(A_{n+1} = \sigma(\langle A_0, U_0, \cdots, A_n, U_n \rangle) \in \mathscr F\), choose \(x_{n+1} \in B \smallsetminus A_{n+1}\), and then choose \(U_{n+1} \in \mathcal U_{A_{n+1}}\) so that \(x_{n+1} \not\in U_{n+1}\). This recursion defines a run of the game \(A_0, U_0, A_1 , U_1 , \ldots\) according to \(\sigma\). So we can conclude that \(\{ U_n : n \in \omega \} \in \mathcal O(X,\mathcal B)\). Thus, \(B \subseteq U_n\) for some \(n \in \omega\), but then \(x_n \in U_n\), a contradiction. Therefore, \(\cof(\mathcal A; \mathcal B, \subseteq) \leq \omega\). The remaining implications are immediate: \(\cof(\mathcal A; \mathcal B, \subseteq) \leq \omega\) gives \(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr N[\mathcal A], \neg\mathcal O(X, \mathcal B))\) by Lemma \ref{lemma:PreDeterminedClosed}, and a winning pre-determined strategy is, in particular, a winning strategy. \end{proof} \begin{note} Let \(X\) be the one-point Lindel\"{o}fication of \(\omega_1\) and consider \(G_1(\mathscr N[[X]^{<\omega}], \neg \mathcal O(X, [X]^{<\omega}))\). In \(X\), \(\{\omega_1\}\) is closed, but not a \(G_\delta\). One has a winning strategy in \(G_1(\mathscr N[[X]^{<\omega}], \neg \mathcal O(X, [X]^{<\omega}))\), but \(\cof([X]^{<\omega}; [X]^{<\omega} , \subseteq) = \omega_1\). Now consider \(X = \mathbb{R}\). Let \(\mathcal M\) be the meager subsets of \(\mathbb{R}\). Then player One has a winning tactic (in two moves) for \(G_1(\mathscr N[\mathcal M], \neg \mathcal O_X)\), but \(\cof(\mathcal M; X, \subseteq) = \mbox{cov}(\mathcal M) > \omega\). \end{note} \section{The Main Theorems} \begin{theorem}\label{MD1} Suppose \(X\) is a Tychonoff space and \(\mathcal A, \mathcal B \subseteq \wp(X)\). Suppose \(\mathcal A\) and \(\mathcal B\) are ideal bases and that \(\mathcal A\) consists of closed sets. Then the implications in the following diagrams hold, where dashed arrows require the assumption that \(X\) is \(\mathcal A\)-normal and dotted arrows require the assumption that \(\mathcal B\) consists of \(\mathbb R\)-bounded sets. If \(X\) is \(\mathcal A\)-normal, \(\mathcal B\) consists of \(\mathbb R\)-bounded sets, and \(\mathcal A\) consists of \(G_\delta\) sets, then all of the statements across both diagrams are equivalent. 
\noindent \framebox[\textwidth]{ \begin{tikzpicture}[scale=.975, auto, transform shape] \node (pointOpen) at (-4,0) {\(\One \uparrow G_1(\mathscr N[\mathcal A], \neg \mathcal O(X,\mathcal B))\)}; \node (pointLambda) at (-4,-1.5) {\(\One \uparrow G_1(\mathscr N[\mathcal A], \neg \Lambda(X,\mathcal B))\)}; \node (pointGamma) at (-4,-3) {\(\One \uparrow G_1(\mathscr N[\mathcal A], \neg \Gamma(X,\mathcal B))\)}; \node (GruenhageLim) at (-4,-4.5) {\(\One \uparrow G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Gamma_{C_{\mathcal B}(X),\mathbf 0})\)}; \node (GruenhageCluster) at (-4,-6) {\(\One \uparrow G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \node (CL) at (-4,-7.5) {\(\One \uparrow G_1(\mathscr{T}_{C_{\mathcal A}(X)},\neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \node (CD) at (-4,-9) {\(\One \uparrow G_1(\mathscr{T}_{C_{\mathcal A}(X)},\mbox{CD}_{C_{\mathcal B}(X)})\)}; \node (Rothberger) at (4,0) {\(\Two \uparrow G_1(\mathcal O(X, \mathcal A), \mathcal O(X,\mathcal B))\)}; \node (RothbergerLambda) at (4,-1.5) {\(\Two \uparrow G_1(\mathcal O(X, \mathcal A), \Lambda(X,\mathcal B))\)}; \node (RothbergerGamma) at (4,-3) {\(\Two \uparrow G_1(\mathcal O(X, \mathcal A), \Gamma(X,\mathcal B))\)}; \node (spacer) at (4,-4.5) {\phantom{\(\One \uparrow G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Gamma_{C_{\mathcal B}(X),\mathbf 0})\)}}; \node (CFT) at (4,-6) {\(\Two \uparrow G_1(\Omega_{C_{\mathcal A}(X),\mathbf 0},\Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \node (CDFT) at (4,-7.5) {\(\Two \uparrow G_1(\mathcal D_{C_{\mathcal A}(X)},\Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \draw[<->] (pointOpen.east) -- (Rothberger.west); \draw[<->] (pointOpen.south) -- (pointLambda.north); \draw[<->] (Rothberger.south) -- (RothbergerLambda.north); \draw[<->] (pointLambda.east) -- (RothbergerLambda.west); \draw[<->] (pointLambda.south) -- (pointGamma.north); \draw[<->] (RothbergerLambda.south) -- (RothbergerGamma.north); 
\draw[<->] (pointGamma.east) -- (RothbergerGamma.west); \draw[->] ([xshift=.2cm]pointGamma.south) -- ([xshift=.2cm]GruenhageLim.north); \draw[->, dashed] ([xshift=-.2cm]GruenhageLim.north) -- ([xshift=-.2cm]pointGamma.south); \draw[->] ([xshift=.2cm]RothbergerGamma.south) -- ([xshift=.2cm]CFT.north); \draw[->, dashed] ([xshift=-.2cm]CFT.north) -- ([xshift=-.2cm]RothbergerGamma.south); \draw[->] ([xshift=.2cm]GruenhageLim.south) -- ([xshift=.2cm]GruenhageCluster.north); \draw[->, dashed] ([xshift=-.2cm]GruenhageCluster.north) -- ([xshift=-.2cm]GruenhageLim.south); \draw[<->] (GruenhageCluster.east) -- (CFT.west); \draw[->] ([xshift=.2cm]GruenhageCluster.south) -- ([xshift=.2cm]CL.north); \draw[->, dashed] ([xshift=-.2cm]CL.north) -- ([xshift=-.2cm]GruenhageCluster.south); \draw[->] ([xshift=.2cm]CFT.south) -- ([xshift=.2cm]CDFT.north); \draw[->, dashed] ([xshift=-.2cm]CDFT.north) -- ([xshift=-.2cm]CFT.south); \draw[<->] (CL.east) -- (CDFT.west); \draw[->] ([xshift=.2cm]CL.south) -- ([xshift=.2cm]CD.north); \draw[->, densely dotted] ([xshift=-.2cm]CD.north) -- ([xshift=-.2cm]CL.south); \end{tikzpicture} } \noindent \framebox[\textwidth]{ \begin{tikzpicture}[scale=.975, auto, transform shape] \node (XCof) at (0,0) {\(\cof(\mathcal A \times \omega; \mathcal B \times \omega, \subseteq) = \omega\)}; \node (FunCof) at (0,-1.5) {\(\cof(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0); \mathscr N_{C_{\mathcal B}(X)}(\mathbf 0), \supseteq) = \omega\)}; \node (pointGamma) at (-4,1.5) {\(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr N[\mathcal A], \neg \Gamma(X,\mathcal B))\)}; \node (pointLambda) at (-4,3) {\(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr N[\mathcal A], \neg \Lambda(X,\mathcal B))\)}; \node (pointOpen) at (-4,4.5) {\(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr N[\mathcal A], \neg \mathcal O(X,\mathcal B))\)}; \node (RothbergerGamma) at (4,1.5) {\(\Two \underset{\text{mark}}{\uparrow} G_1(\mathcal O(X, \mathcal A), \Gamma(X,\mathcal B))\)}; \node 
(RothbergerLambda) at (4,3) {\(\Two \underset{\text{mark}}{\uparrow} G_1(\mathcal O(X, \mathcal A), \Lambda(X,\mathcal B))\)}; \node (Rothberger) at (4,4.5) {\(\Two \underset{\text{mark}}{\uparrow} G_1(\mathcal O(X, \mathcal A), \mathcal O(X,\mathcal B))\)}; \node (GruenhageLim) at (-4,-3) {\(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Gamma_{C_{\mathcal B}(X),\mathbf 0})\)}; \node (GruenhageCluster) at (-4,-4.5) {\(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \node (CL) at (-4,-6) {\(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr{T}_{C_{\mathcal A}(X)},\neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \node (CD) at (-4,-7.5) {\(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr{T}_{C_{\mathcal A}(X)},\mbox{CD}_{C_{\mathcal B}(X)})\)}; \node (spacer) at (4,-3) {\phantom{\(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Gamma_{C_{\mathcal B}(X),\mathbf 0})\)}}; \node (CFT) at (4,-4.5) {\(\Two \underset{\text{mark}}{\uparrow} G_1(\Omega_{C_{\mathcal A}(X),\mathbf 0},\Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \node (CDFT) at (4,-6) {\(\Two \underset{\text{mark}}{\uparrow} G_1(\mathcal D_{C_{\mathcal A}(X)},\Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \draw[<->] (pointOpen.east) -- (Rothberger.west); \draw[<->] (pointOpen.south) -- (pointLambda.north); \draw[<->] (Rothberger.south) -- (RothbergerLambda.north); \draw[<->] (pointLambda.east) -- (RothbergerLambda.west); \draw[<->] (pointLambda.south) -- (pointGamma.north); \draw[<->] (RothbergerLambda.south) -- (RothbergerGamma.north); \draw[<->] (pointGamma.east) -- (RothbergerGamma.west); \draw[<->] (pointGamma.south) -- (XCof.west); \draw[<->] (RothbergerGamma.south) -- (XCof.east); \draw[<->] (XCof.south) -- (FunCof.north); \draw[<->] (FunCof.west) -- (GruenhageLim.north); \draw[<->] (FunCof.east) -- (CFT.north); \draw[<->] (GruenhageLim.south) -- 
(GruenhageCluster.north); \draw[<->] (GruenhageCluster.east) -- (CFT.west); \draw[->] ([xshift=.2cm]GruenhageCluster.south) -- ([xshift=.2cm]CL.north); \draw[->, dashed] ([xshift=-.2cm]CL.north) -- ([xshift=-.2cm]GruenhageCluster.south); \draw[->] ([xshift=.2cm]CFT.south) -- ([xshift=.2cm]CDFT.north); \draw[->, dashed] ([xshift=-.2cm]CDFT.north) -- ([xshift=-.2cm]CFT.south); \draw[<->] (CL.east) -- (CDFT.west); \draw[->] ([xshift=.2cm]CL.south) -- ([xshift=.2cm]CD.north); \draw[->, densely dotted] ([xshift=-.2cm]CD.north) -- ([xshift=-.2cm]CL.south); \end{tikzpicture} } \end{theorem} \begin{proof} Since we have assumed that \(\mathcal A\) and \(\mathcal B\) are ideal-bases, Lemma \ref{lem:Open=Gamma} implies that all three versions of the generalized point-open game are equivalent for player One. This applies for full strategies and pre-determined strategies. The fact that \(\One \uparrow G_1(\mathscr N[\mathcal A], \neg \Psi(X,\mathcal B))\) is equivalent to \(\Two \uparrow G_1(\mathcal O(X, \mathcal A), \Psi(X,\mathcal B))\) (where \(\Psi\) is \(\mathcal O\), \(\Lambda\), or \(\Gamma\)) comes from the general reflection result from Clontz, Theorem \ref{thm:ClontzDuality}. This also implies the analogous statements for pre-determined and Markov strategies. Since all of the versions of the generalized point-open game are equivalent for player One, we can conclude that all of the versions of the generalized Rothberger game are equivalent for player Two. By Corollary \ref{corollary:CFTDual}, \(\Two \uparrow G_1(\Omega_{C_{\mathcal A}(X),\mathbf 0},\Omega_{C_{\mathcal B}(X),\mathbf 0})\) if and only if \(\One \uparrow G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\), and also at the level of Markov/pre-determined strategies. 
Likewise, Proposition \ref{prop:CD->CDFT} implies that \(\Two \uparrow G_1(\mathcal D_{C_{\mathcal A}(X)},\Omega_{C_{\mathcal B}(X),\mathbf 0})\) is equivalent to \(\One \uparrow G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\), and also at the level of Markov/pre-determined strategies. Corollary \ref{corollary:Roth=CFT=CDFT} yields the implications between \(G_1(\mathcal O(X, \mathcal A), \Lambda(X,\mathcal B))\), \(G_1(\Omega_{C_{\mathcal A}(X),\mathbf 0},\Omega_{C_{\mathcal B}(X),\mathbf 0})\), and \(G_1(\mathcal D_{C_{\mathcal A}(X)},\Omega_{C_{\mathcal B}(X),\mathbf 0})\). Then Corollaries \ref{corollary:PO=Gru=CD} and \ref{corollary:Gru<PointGamma} provide the arrows between games for the rest of the left side of the diagram. We now check the improved implications in the second diagram. By Lemma \ref{lemma:PreDeterminedClosed}, we have that \(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr N[\mathcal A], \neg \mathcal O(X,\mathcal B))\) if and only if \(\cof(\mathcal A \times \omega; \mathcal B \times \omega, \subseteq) = \omega\). Then Lemma \ref{lemma:CofinalityBetweenGroundAndFunctions} implies that \(\cof(\mathcal A \times \omega; \mathcal B \times \omega, \subseteq) = \omega\) if and only if \(\cof(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0); \mathscr N_{C_{\mathcal B}(X)}(\mathbf 0), \supseteq) = \omega\). Finally, using Lemma \ref{lemma:Gru=Cof} we see that \(\cof(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0); \mathscr N_{C_{\mathcal B}(X)}(\mathbf 0), \supseteq) = \omega\) if and only if \(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\), which is in turn equivalent to \(\One \underset{\text{pre}}{\uparrow} G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Gamma_{C_{\mathcal B}(X),\mathbf 0})\). This suffices to improve the arrows from the first diagram and finishes the second diagram. 
\end{proof} \begin{theorem} \label{MD2} Suppose \(X\) is a Tychonoff space and \(\mathcal A, \mathcal B \subseteq \wp(X)\). Suppose \(\mathcal A\) and \(\mathcal B\) are ideal-bases and that \(\mathcal A\) consists of closed sets. Then the following diagrams are true, where dashed arrows require the assumption that \(X\) is \(\mathcal A\)-normal and dotted lines require the assumption that \(X\) is \(\mathcal A\)-normal and \(\mathcal B\) consists of \(\mathbb R\)-bounded sets. {\color{red} If \(X\) is \(\mathcal A\)-normal, \(\mathcal B\) consists of \(\mathbb R\)-bounded sets, and \(\mathcal A \prec \mathcal B\), then all of the statements across both diagrams are equivalent. } Correction: If \(\mathcal A = \mathcal B\) are the finite subsets of \(X\), or \(\mathcal A = \mathcal B\) are the compact subsets of \(X\), then all of the statements across both diagrams are equivalent. \noindent \framebox[\textwidth]{ \begin{tikzpicture} \node (pointOpen) at (-4,0) {\(\Two \uparrow G_1(\mathscr N[\mathcal A], \neg \mathcal O(X,\mathcal B))\)}; \node (pointLambda) at (-4,-1.5) {\(\Two \uparrow G_1(\mathscr N[\mathcal A], \neg \Lambda(X,\mathcal B))\)}; \node (GruenhageCluster) at (-4,-3) {\(\Two \uparrow G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \node (CL) at (-4,-4.5) {\(\Two \uparrow G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \node (CD) at (-4,-6) {\(\Two \uparrow G_1(\mathscr{T}_{C_{\mathcal A}(X)},\mbox{CD}_{C_{\mathcal B}(X)})\)}; \node (Rothberger) at (4,0) {\(\One \uparrow G_1(\mathcal O(X, \mathcal A), \mathcal O(X,\mathcal B))\)}; \node (RothbergerLambda) at (4,-1.5) {\(\One \uparrow G_1(\mathcal O(X, \mathcal A), \Lambda(X,\mathcal B))\)}; \node (CFT) at (4,-3) {\(\One \uparrow G_1(\Omega_{C_{\mathcal A}(X),\mathbf 0},\Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \node (CDFT) at (4,-4.5) {\(\One \uparrow G_1(\mathcal D_{C_{\mathcal A}(X)},\Omega_{C_{\mathcal B}(X),\mathbf 
0})\)}; \draw [<->] (pointOpen.east) -- (Rothberger.west); \draw[<->] (pointOpen.south) -- (pointLambda.north); \draw[<->] (Rothberger.south) -- (RothbergerLambda.north); \draw[<->] (pointLambda.east) -- (RothbergerLambda.west); \draw[<->] (pointLambda.south) -- (GruenhageCluster.north); \draw[<->] (RothbergerLambda.south) -- (CFT.north); \draw[<->] (GruenhageCluster.south) -- (CL.north); \draw[<->] (GruenhageCluster.east) -- (CFT.west); \draw[->, densely dotted] ([xshift=.2cm]CL.south) -- ([xshift=.2cm]CD.north); \draw[->] ([xshift=-.2cm]CD.north) -- ([xshift=-.2cm]CL.south); \draw[<->] (CFT.south) -- (CDFT.north); \draw[<->] (CDFT.west) -- (CL.east); \draw[->] (CD.east) -- (CDFT.south); \end{tikzpicture} } \noindent \framebox[\textwidth]{ \begin{tikzpicture} \node (pointOpen) at (-4,0) {\(\Two \underset{\text{mark}}{\uparrow} G_1(\mathscr N[\mathcal A], \neg \mathcal O(X,\mathcal B))\)}; \node (pointLambda) at (-4,-1.5) {\(\Two \underset{\text{mark}}{\uparrow} G_1(\mathscr N[\mathcal A], \neg \Lambda(X,\mathcal B))\)}; \node (GruenhageCluster) at (-4,-3) {\(\Two \underset{\text{mark}}{\uparrow} G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \node (CL) at (-4,-4.5) {\(\Two \underset{\text{mark}}{\uparrow} G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \node (CD) at (-4,-6) {\(\Two \underset{\text{mark}}{\uparrow} G_1(\mathscr{T}_{C_{\mathcal A}(X)},\mbox{CD}_{C_{\mathcal B}(X)})\)}; \node (Rothberger) at (4,0) {\(\One \underset{\text{pre}}{\uparrow} G_1(\mathcal O(X, \mathcal A), \mathcal O(X,\mathcal B))\)}; \node (RothbergerLambda) at (4,-1.5) {\(\One \underset{\text{pre}}{\uparrow} G_1(\mathcal O(X, \mathcal A), \Lambda(X,\mathcal B))\)}; \node (CFT) at (4,-3) {\(\One \underset{\text{pre}}{\uparrow} G_1(\Omega_{C_{\mathcal A}(X),\mathbf 0},\Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \node (CDFT) at (4,-4.5) {\(\One \underset{\text{pre}}{\uparrow} G_1(\mathcal D_{C_{\mathcal 
A}(X)},\Omega_{C_{\mathcal B}(X),\mathbf 0})\)}; \draw [<->] (pointOpen.east) -- (Rothberger.west); \draw[<->] (pointOpen.south) -- (pointLambda.north); \draw[<->] (Rothberger.south) -- (RothbergerLambda.north); \draw[<->] (pointLambda.east) -- (RothbergerLambda.west); \draw[<->] (pointLambda.south) -- (GruenhageCluster.north); \draw[<->] (RothbergerLambda.south) -- (CFT.north); \draw[<->] (GruenhageCluster.south) -- (CL.north); \draw[<->] (GruenhageCluster.east) -- (CFT.west); \draw[->, densely dotted] ([xshift=.2cm]CL.south) -- ([xshift=.2cm]CD.north); \draw[->] ([xshift=-.2cm]CD.north) -- ([xshift=-.2cm]CL.south); \draw[<->] (CFT.south) -- (CDFT.north); \draw[<->] (CDFT.west) -- (CL.east); \draw[->] (CD.east) -- (CDFT.south); \end{tikzpicture} } \end{proof} is not here \end{theorem} \begin{proof} Since \(\mathcal A\) and \(\mathcal B\) are ideal-bases, the versions of the point-open game are equivalent. The arrows between the versions of the point-open game and the versions of the Rothberger game come from the duality of the point-open and Rothberger games. From this, the versions of the Rothberger game are equivalent. Corollaries \ref{corollary:PO=Gru=CD} and \ref{corollary:Gru<PointGamma} generate the arrows on the left side of the diagram. Similarly, Corollary \ref{corollary:Roth=CFT=CDFT} provides the arrows on the right side of the diagram. By Corollary \ref{corollary:CFTDual}, \(\One \underset{\text{pre}}{\uparrow} G_1(\Omega_{C_{\mathcal A}(X),\mathbf 0},\Omega_{C_{\mathcal B}(X),\mathbf 0})\) if and only if \(\Two \underset{\text{mark}}{\uparrow} G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\). 
Proposition \ref{prop:CD->CDFT} adds the implications from the statement \(\Two \underset{\text{mark}}{\uparrow} G_1(\mathscr{T}_{C_{\mathcal A}(X)},\mbox{CD}_{C_{\mathcal B}(X)})\) to \(\One \underset{\text{pre}}{\uparrow} G_1(\mathcal D_{C_{\mathcal A}(X)},\Omega_{C_{\mathcal B}(X),\mathbf 0})\) and then to \(\Two \underset{\text{mark}}{\uparrow} G_1(\mathscr{T}_{C_{\mathcal A}(X)}, \neg \Omega_{C_{\mathcal B}(X),\mathbf 0})\). With these connections, the main block of the diagram becomes equivalent without any extra assumptions needed. {\color{red}If \(\mathcal A \prec \mathcal B\), Lemma \ref{lemma:Pawlikowski} applies and all of the statements across the two diagrams are equivalent.} Correction: If \(\mathcal A = \mathcal B\) are either the finite or compact subsets of \(X\), Lemma \ref{lemma:Pawlikowski}'s revision in \href{https://arxiv.org/abs/2102.00296}{arXiv:2102.00296} applies and all of the statements across the two diagrams are equivalent. \end{proof} \begin{note} Suppose \(\mathcal A = \mathcal B = [\mathbb R]^\omega\). Define a strategy for One in \(G_1(\mathcal O(X, \mathcal A), \mathcal O(X, \mathcal B))\) as follows: In the \(n^{\text{th}}\) inning, for any countable set \(A \subseteq \mathbb R\), choose \(U_{A,n}\) to be an open set so that \(A \subseteq U_{A,n}\) and \(U_{A,n}\) has Lebesgue measure \(< 2^{-n}\). Then \(\sigma(n) = \{ U_{A,n} : A \in \mathcal A\}\). This is a pre-determined winning strategy for One. Consider a strategy for Two in \(G_1(\mathcal D_{C_{\mathcal A}(X)}, \Omega_{C_{\mathcal B}(X), \mathbf 0})\) defined as follows: In the \(n^{\text{th}}\) inning, One's play must have non-trivial intersection with \([\mathbf 0 ; \mathbb Q, 2^{-n}]\). Let Two choose \(f_n\) in this intersection. Then as in the previous example, \(f_n \to \mathbf 0\). This shows that if \(\mathcal A\) does not consist of closed sets, then the properties do not have to be equivalent. 
\end{note} \begin{note} If we do not require that \(\mathcal A\) be an ideal-base, then the statements \begin{itemize} \item \(\Two \uparrow G_1(\mathscr N[\mathcal A], \neg \Gamma(X,\mathcal B))\), \item \(\One \uparrow G_1(\mathcal O(X, \mathcal A), \Gamma(X,\mathcal B))\), and \item \(\Two \uparrow G_1(\mathscr N_{C_{\mathcal A}(X)}(\mathbf 0), \neg \Gamma_{C_{\mathcal B}(X),\mathbf 0})\) \end{itemize} are all strictly weaker than any of those present in the first diagram of the previous theorem. This is also true for Markov/pre-determined strategies. The counterexample of \(X = \mathbb Z\), with \(\mathcal A\) and \(\mathcal B\) both taken to be the singleton subsets of \(\mathbb Z\), demonstrates this. Assuming that \(\mathcal A\) is an ideal-base makes the situation more complicated. In that situation, \(\One \uparrow G_1(\mathscr N[\mathcal A], \neg \mathcal O(X,\mathcal B))\) implies that \(\One \uparrow G_1(\mathscr N[\mathcal A], \neg \Gamma(X,\mathcal B))\). So to find a space \(X\) where \(\Two \not\uparrow G_1(\mathscr N[\mathcal A], \neg \mathcal O(X,\mathcal B))\) and \(\Two \uparrow G_1(\mathscr N[\mathcal A], \neg \Gamma(X,\mathcal B))\), we need \(G_1(\mathscr N[\mathcal A], \neg \mathcal O(X,\mathcal B))\) to be undetermined and \(X\) not to be a \(\gamma\)-set. These are necessary but not sufficient conditions. We do not currently know of any counterexamples, but we also do not know a good reason why the games should be equivalent for player Two. \end{note} \section{Applications} Corollaries \ref{meagerGame} and \ref{nullGame} are direct applications of Lemma \ref{lemma:CoveringPropertyPre}. \begin{corollary} \label{meagerGame} Suppose \(X\) is a space where all closed sets are \(G_\delta\) sets, \(\mathcal A\) consists of the closed nowhere dense sets, and \(\mathcal B\) is the set of all singleton subsets of \(X\). Then One has a winning strategy in \(G_1(\mathscr N[\mathcal A], \neg \mathcal O(X,\mathcal B))\) if and only if \(X\) is meager. 
\end{corollary} \begin{corollary} \label{nullGame} Suppose \(X\) is a space, \(\mathcal A\) consists of the \(G_\delta\) \(\mu\)-null sets with respect to a Borel measure \(\mu\), and \(\mathcal B\) is the set of all singleton subsets of \(X\). Then One has a winning strategy in \(G_1(\mathscr N[\mathcal A], \neg \mathcal O(X,\mathcal B))\) if and only if \(X\) is \(\mu\)-null; i.e., \(\mu\) is the trivial zero measure. \end{corollary} The following summarizes a majority of the results from \cite{ClontzHolshouser}. \begin{theorem} \label{CH1} Suppose \(X\) is a Tychonoff space. Then \begin{enumerate}[label=(\roman*)] \item \label{Group1CH1} \(G_1(\mathscr N[[X]^{<\omega}], \neg\Omega_X)\), \(G_1(\mathscr N_{C_p(X)}(\mathbf 0), \neg \Omega_{C_p(X),\mathbf 0})\), and \(G_1(\mathscr T_{C_p(X)}, \text{CD}_{C_p(X)})\) are equivalent, \item \label{Group2CH1} \(G_1(\Omega_X, \Omega_X)\), \(G_1(\Omega_{C_p(X), \mathbf 0}, \Omega_{C_p(X), \mathbf 0})\), and \(G_1(\mathcal D_{C_p(X)}, \Omega_{C_p(X), \mathbf 0})\) are equivalent, \item The two groups of games in \ref{Group1CH1} and \ref{Group2CH1} are dual to each other, \item \(\text{I} \underset{\text{pre}}{\uparrow} G_1(\mathscr T_{C_p(X)}, \text{CD}_{C_p(X)})\) iff \(X\) is countable iff \(C_p(X)\) is first countable, \item For player One, the games \(G_1(\mathscr N[[X]^{<\omega}], \neg\Gamma_X)\) and \(G_1(\mathscr N_{C_p(X)}(\mathbf 0), \neg \Gamma_{C_p(X),\mathbf 0})\) are equivalent to \(G_1(\mathscr N[[X]^{<\omega}], \neg\Omega_X)\) and \(G_1(\mathscr N_{C_p(X)}(\mathbf 0), \neg \Omega_{C_p(X),\mathbf 0})\), \item For player Two, \(G_1(\Omega_X, \Omega_X)\) and \(G_1(\Omega_X, \Gamma_X)\) are equivalent, \item \(\text{I} \underset{\text{pre}}{\uparrow} G_1(\Omega_X, \Omega_X)\) if and only if \(\text{I} \uparrow G_1(\Omega_X, \Omega_X)\). \end{enumerate} \end{theorem} The following summarizes a majority of the results from \cite{CaruvanaHolshouser}. \begin{theorem} \label{CH2} Suppose \(X\) is a Tychonoff space. 
Then \begin{enumerate}[label=(\roman*)] \item \label{Group1CH2} \(G_1(\mathscr N[K(X)], \neg\mathcal K_X)\), \(G_1(\mathscr N_{C_k(X)}(\mathbf 0), \neg \Omega_{C_k(X),\mathbf 0})\), and \(G_1(\mathscr T_{C_k(X)}, \text{CD}_{C_k(X)})\) are equivalent, \item \label{Group2CH2} \(G_1(\mathcal K_X, \mathcal K_X)\), \(G_1(\Omega_{C_k(X), \mathbf 0}, \Omega_{C_k(X), \mathbf 0})\), and \(G_1(\mathcal D_{C_k(X)}, \Omega_{C_k(X), \mathbf 0})\) are equivalent, \item The two groups of games in \ref{Group1CH2} and \ref{Group2CH2} are dual to each other, \item \(\text{I} \underset{\text{pre}}{\uparrow} G_1(\mathscr T_{C_k(X)}, \text{CD}_{C_k(X)})\) iff \(X\) is hemicompact iff \(C_k(X)\) is first countable, \item For player One, \(G_1(\mathscr N[K(X)], \neg\Gamma_k(X))\) and \(G_1(\mathscr N_{C_k(X)}(\mathbf 0), \neg \Gamma_{C_k(X),\mathbf 0})\) are equivalent to \(G_1(\mathscr N[K(X)], \neg\mathcal K_X)\) and \(G_1(\mathscr N_{C_k(X)}(\mathbf 0), \neg \Omega_{C_k(X),\mathbf 0})\), \item For player Two, \(G_1(\mathcal K_X, \mathcal K_X)\) and \(G_1(\mathcal K_X, \Gamma_k(X))\) are equivalent, \item \(\text{I} \underset{\text{pre}}{\uparrow} G_1(\mathcal K_X, \mathcal K_X)\) if and only if \(\text{I} \uparrow G_1(\mathcal K_X, \mathcal K_X)\). \end{enumerate} \end{theorem} Notice that the property of being \(\sigma\)-compact lies in between being countable and being hemicompact. Using the fact that Theorems \ref{MD1} and \ref{MD2} apply to pairs \(\mathcal A\) and \(\mathcal B\) with \(\mathcal A \neq \mathcal B\), we can generate a setup which characterizes \(\sigma\)-compactness in a way that is similar to Theorems \ref{CH1} and \ref{CH2}. \begin{theorem} Suppose \(X\) is a Tychonoff space. 
Then \begin{enumerate}[label=(\roman*)] \item \label{Group1} \(G_1(\mathscr N[K(X)], \neg\Omega_X)\), \(G_1(\mathscr N_{C_k(X)}(\mathbf 0), \neg \Omega_{C_p(X),\mathbf 0})\), and \(G_1(\mathscr T_{C_k(X)}, \text{CD}_{C_p(X)})\) are equivalent, \item \label{Group2} \(G_1(\mathcal K_X, \Omega_X)\), \(G_1(\Omega_{C_k(X), \mathbf 0}, \Omega_{C_p(X), \mathbf 0})\), and \(G_1(\mathcal D_{C_k(X)}, \Omega_{C_p(X), \mathbf 0})\) are equivalent, \item The two groups of games in \ref{Group1} and \ref{Group2} are dual to each other, \item \(\text{I} \underset{\text{pre}}{\uparrow} G_1(\mathscr T_{C_k(X)}, \text{CD}_{C_p(X)})\) iff \(X\) is \(\sigma\)-compact iff \(\cof(\mathscr N_{C_k(X)}(\mathbf 0); \mathscr N_{C_p(X)}(\mathbf 0), \supseteq) = \omega\), \item For player One, the games \(G_1(\mathscr N[K(X)], \neg\Gamma_X)\) and \(G_1(\mathscr N_{C_k(X)}(\mathbf 0), \neg \Gamma_{C_p(X),\mathbf 0})\) are equivalent to \(G_1(\mathscr N[K(X)], \neg\Omega_X)\) and \(G_1(\mathscr N_{C_k(X)}(\mathbf 0), \neg \Omega_{C_p(X),\mathbf 0})\), and \item For player Two, \(G_1(\mathcal K_X, \Omega_X)\) and \(G_1(\mathcal K_X, \Gamma_X)\) are equivalent. \end{enumerate} \end{theorem} \section{Open Questions} \begin{itemize} \item Is there a topological characterization of the statement \(\cof(\mathcal A; \mathcal B, \leq) \leq_T \omega^\omega\)? \item Does \(\text{I} \uparrow G_1(\mathcal K_X, \Omega_X)\) imply \(\text{I} \underset{\text{pre}}{\uparrow} G_1(\mathcal K_X, \Omega_X)\)? \item More broadly, to what extent can the Pawlikowski generalization presented here be further generalized? \item If \(\mathcal A\) is an ideal base, are \(G_1(\mathscr N[\mathcal A], \neg \Gamma(X,\mathcal B))\) and \(G_1(\mathscr N[\mathcal A], \neg \mathcal O(X,\mathcal B))\) equivalent for player Two? \item Can the assumption that \(\mathcal B\) consists of \(\mathbb R\)-bounded sets be removed from Theorems \ref{MD1} and \ref{MD2}? 
\item To what extent can the techniques in this paper be used to study more complex selection principles like the Hurewicz property or the \(\alpha\)-Fr{\'{e}}chet properties? \end{itemize}
\section{Summary} Nonparametric confidence sequences are particularly useful in sequential estimation because they enable valid inference at arbitrary stopping times, but they are underappreciated as powerful tools to provide accurate inference even at fixed times. Recent work~\citep{howard_exponential_2018,howard_uniform_2019} has developed several time-uniform generalizations of the Cram\'er--Chernoff technique, utilizing ``line-crossing'' inequalities and Robbins' method of mixtures to convert them into ``curve-crossing'' inequalities. This work adds several new techniques to the toolkit: to complement the methods of discrete mixtures, conjugate mixtures and stitching, we develop a ``predictable mixture'' approach. When coupled with existing nonparametric supermartingales, it yields computationally efficient empirical-Bernstein confidence sequences. One of our major contributions is to thoroughly develop a new martingale approach to estimating means of bounded random variables in both with- and without-replacement settings, and to explore its connections to predictably-mixed supermartingales. Our methods are particularly easy to interpret in terms of evolving capital processes and sequential testing by betting~\citep{shafer2019language}, but we go much further by developing powerful and efficient betting strategies that lead to state-of-the-art variance-adaptive confidence sets that are significantly tighter than past work in all considered settings. In particular, Shafer espouses \emph{complementary} benefits of such approaches, ranging from improved scientific communication and ties to historical advances in probability, to reproducibility via continued experimentation (also see~\cite{grunwald_safe_2019}); our focus here, however, has been on developing a new state of the art for a set of classical, fundamental problems. 
The connections to online learning theory~\citep{kumon2011sequential,rakhlin2017equivalence,orabona2017training,cutkosky2018black}, and to empirical and dual likelihoods (Section~\ref{section:EL}), can possibly be exploited to prove further properties of our confidence sequences, which we conjecture are admissible in some formal and general sense~\citep{ramdas2020admissible}. We did not require these connections in this paper, but exploring such connections may be fruitful in unexpected ways. It is clear to us, and hopefully to the reader as well, that the ideas behind this work (``adaptive statistical inference by betting'') form the tip of the iceberg---they lead to powerful, efficient, nonasymptotic, nonparametric inference and can be adapted to a range of other problems. As just one example, let $\Pcal^{p,q}$ represent the set of all continuous distributions such that the $p$-quantile of $X_t$, conditional on the past, is equal to $q$. This is also a nonparametric, convex set of distributions with no common reference measure. Nevertheless, for any predictable sequence $\{\lambda_i\}$, it is easy to check that \[ M_t = \prod_{i=1}^t (1 + \lambda_i (\mathbf{1}_{X_i \leq q} - p)) \] is a $\Pcal^{p,q}$-NM. Setting $p=1/2$ and $q=0$, for example, we can sequentially test if the median of the underlying data distribution is the origin. The continuity assumption can be relaxed, and this test can be inverted to get a confidence sequence for any quantile. We do not pursue this idea further in the current paper because the recent martingale methods of~\citet{howard2019sequential} already provide a challenging benchmark. Typically two different martingale-based methods do not uniformly dominate each other, and the powerful gains in this paper were made possible because previous approaches implicitly or explicitly employed supermartingales. We are actively pursuing these and several other nonparametric extensions in ongoing work.
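As a concrete illustration of the betting interpretation in the last paragraph (the helper below is our own sketch, not code from the paper): for a predictable sequence of bets \(\lambda_i\), each factor of \(M_t\) has conditional mean one under the null that \(q\) is the \(p\)-quantile, and the capital stays positive whenever \(\lambda_i \in (-1/(1-p),\, 1/p)\); by Ville's inequality, crossing \(1/\alpha\) justifies rejection at level \(\alpha\).

```python
def quantile_capital(xs, lambdas, p=0.5, q=0.0):
    """Capital process M_t = prod_i (1 + lambda_i * (1{x_i <= q} - p)).

    Under the null that q is the p-quantile of each X_i given the past,
    each factor has conditional mean 1, so (M_t) is a test martingale.
    The lambdas must be predictable: chosen before X_i is revealed.
    """
    m, path = 1.0, []
    for x, lam in zip(xs, lambdas):
        # positivity of the capital requires lam in (-1/(1-p), 1/p)
        assert -1.0 / (1.0 - p) < lam < 1.0 / p, "bet too aggressive"
        m *= 1.0 + lam * ((1.0 if x <= q else 0.0) - p)
        path.append(m)
    return path

# Testing H0: median = 0 with constant bets lambda_i = 0.5.
data = [-1.0, 2.0, -3.0, 4.0]                  # balanced around 0
path = quantile_capital(data, [0.5] * len(data))
# Factors are 1.25, 0.75, 1.25, 0.75, so M_4 = (1.25 * 0.75)**2 = 0.87890625;
# rejecting when M_t >= 1/alpha gives a level-alpha sequential test.
```

On balanced data the capital hovers near (in fact below) one, as it should under the null; a data stream whose median is far from the origin would drive the capital up exponentially.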
TITLE: If given matrix $A$ and $AX$, find $X$. QUESTION [0 upvotes]: Given matrices: $$A= \begin{pmatrix}2&0&1\\1&-1&0\\0&3&-2\end{pmatrix}$$ $$A\,X =\begin{pmatrix}3&-1\\2&5\\0&4\end{pmatrix}$$ Find $X$ My method was a very slow method as I had to solve two systems of equations with 3 variables each. Does anyone have any ideas of a better method to do this? I am thinking something with inverse matrices. REPLY [0 votes]: If $A$ is invertible, then you can left-multiply by $A^{-1}$ to recover $X$. Note that you don’t have to invert $A$ explicitly. Since the RREF of any invertible square matrix is the identity, row-reducing it is equivalent to multiplying it by its inverse. So, if you row-reduce the augmented matrix $[A\mid AX]$ you’ll end up with $[I\mid A^{-1}AX]=[I\mid X]$ when you’re done.
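To make the accepted suggestion concrete, here is a small exact-arithmetic sketch (the function name and implementation are illustrative, not from the answer): row-reducing the augmented matrix $[A \mid AX]$ to $[I \mid X]$ recovers $X$ in one pass, with both right-hand-side columns handled simultaneously.

```python
from fractions import Fraction

def solve_via_rref(A, B):
    """Row-reduce [A | B] to [I | X], so that A X = B (A must be invertible)."""
    n = len(A)
    # augmented matrix with exact rational entries
    M = [[Fraction(v) for v in A[i]] + [Fraction(v) for v in B[i]] for i in range(n)]
    for col in range(n):
        # pivot: find a row at or below the diagonal with a nonzero entry here
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        pivval = M[col][col]
        M[col] = [v / pivval for v in M[col]]            # scale pivot row to 1
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]                        # the block that was B is now X

A = [[2, 0, 1], [1, -1, 0], [0, 3, -2]]
B = [[3, -1], [2, 5], [0, 4]]
X = solve_via_rref(A, B)
# X has rows [12/7, 17/7], [-2/7, -18/7], [-3/7, -41/7], and A X = B checks out.
```

Using `Fraction` avoids the rounding noise a floating-point row reduction would introduce, which matters here because every entry of $X$ has denominator $7 = \det A$.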
\begin{document} \vspace*{-1\baselineskip} \begin{abstract} We prove that for any infinite right-angled Coxeter or Artin group, its spherical and geodesic growth rates (with respect to the standard generating set) either take values in the set of Perron numbers, or equal $1$. Also, we compute the average number of geodesics representing an element of given word-length in such groups.\\ \noindent \textit{Key words: } Coxeter group, Artin group, graphs of groups, RACG, RAAG, Perron number, algebraic integer, finite automaton.\\ \noindent \textit{2010 AMS Classification: } 57R90, 57M50, 20F55, 37F20. \\ \end{abstract} \maketitle \vspace*{-2\baselineskip} \section{Introduction} A classical formula obtained by Steinberg in 1968, cf.\ \cite{Steinberg}, shows that the growth series of a Coxeter group (with respect to its standard generating set consisting of involutions) is a rational function, hence the growth rates of these groups are algebraic numbers. In the 1980s, Cannon discovered a remarkable connection between Salem polynomials and growth functions of surface groups and some cocompact Coxeter groups of ranks $3$ and $4$. Even though these results were published much later in the paper \cite{CaWa}, the initial preprint spawned studies by other authors, who established that in many cases the growth rates of cocompact and cofinite Coxeter groups are either Salem or Pisot numbers, with the most notable results obtained in \cite{Floyd, Parry}. However, the classes of Salem and Pisot numbers appear to be somewhat narrow, since the growth rates of many cocompact and cofinite hyperbolic Coxeter groups do not belong to them. As shown in \cite[Theorem~4.1]{KePe}, in many cases these growth rates reside in the wider class of Perron numbers, and it was conjectured \cite[p.~1301]{KePe} that this is the case for all Coxeter groups acting cocompactly on hyperbolic spaces as reflection groups. 
The actual conjecture describes a detailed distribution of the poles of the associated growth series, and it implies that the growth rate is a Perron number. Several results confirming the latter fact have appeared recently in \cite{Kolpakov, KoYu, NoKe, Umemoto, Yu1, Yu2}. The geodesic growth functions of Coxeter groups have also attracted some attention in the recent works \cite{AC, CK}. However, there are fewer methods available for computing them, e.g. there is no analogue of such a convenient tool as Steinberg's formula. Thus, the number-theoretic properties of geodesic growth rates still remain less understood. In the present work we show that the spherical and geodesic exponential growth rates of infinite right-angled Coxeter groups (RACGs) and right-angled Artin groups (RAAGs) are Perron numbers, besides the cases when they equal $1$. Namely, in the case of RACGs the following theorems hold. \begin{letteredthm}[A]\label{thm-A} Let $G$ be an infinite right-angled Coxeter group with defining graph $\Gamma$. Then the spherical exponential growth rate $\alpha(G)$ of $G$ with respect to its standard set of generators determined by $\Gamma$ is either $1$, or a Perron number. \end{letteredthm} \begin{letteredthm}[B]\label{thm-B} Let $G$ be an infinite right-angled Coxeter group with defining graph $\Gamma$. Then the geodesic exponential growth rate $\beta(G)$ of $G$ with respect to its standard set of generators determined by $\Gamma$ is either $1$, or a Perron number. \end{letteredthm} Analogous results hold for RAAGs and their growth rates. \begin{letteredthm}[C]\label{thm-C} Let $G$ be a right-angled Artin group with defining graph $\Gamma$. Then the spherical exponential growth rate $\alpha(G)$ of $G$ with respect to its standard symmetric set of generators determined by $\Gamma$ is either $1$, or a Perron number. \end{letteredthm} \begin{letteredthm}[D]\label{thm-D} Let $G$ be a right-angled Artin group with defining graph $\Gamma$. 
Then the geodesic exponential growth rate $\beta(G)$ of $G$ with respect to its standard symmetric set of generators determined by $\Gamma$ is either $1$ or a Perron number. \end{letteredthm} The original conjecture by Kellerhals and Perren has been confirmed in several cases \cite{Kolpakov, KoYu, NoKe, Umemoto} by applying Steinberg's formula \cite{Steinberg} and with extensive use of hyperbolic geometry, notably Andreev's theorem \cite{Andreev-1, Andreev-2}. Recently, it was established in \cite{Yu1, Yu2} that the growth rates of all $3$-dimensional hyperbolic Coxeter groups are Perron numbers. In the present paper, we prove that the spherical and geodesic growth rates of RACGs and RAAGs are also Perron numbers, even when there is no cocompact or finite covolume action. Moreover, hyperbolic right-angled polytopes are scarce: they are completely classified in dimensions $n = 2$ \cite[Theorem 7.16.2]{Beardon} and $n = 3$ \cite{Pogorelov}, while no such polytopes exist in dimensions $n \geq 5$ \cite{PV}. The only known right-angled hyperbolic polytopes in dimension $n = 4$ are the Coxeter $120$-cell \cite{Kellerhals} and its ``garlands'' obtained by gluing several such polytopes along appropriate facets. Hence, our methods of proof do not rely on the geometry of the group action; rather, they use the structure of the group considered as a formal language: namely, we consider the corresponding finite state automaton, following the works by Brink and Howlett \cite{BH} and Loeffler, Meier, and Worthington \cite{LMW}. Also, we would like to mention that Theorem A and Theorem C can be deduced from the results of Sections~10--11 in \cite{GTT}, where a different automaton, essentially due to Hermiller and Meier \cite{HM}, is considered. Much earlier, similar results for the spherical growth rates of partially commutative monoids were obtained in \cite{LR}.
The properties of the automata used in the present work allow us to show the following fact that describes how many geodesics ``on average'' represent an element of word-length $n$. Let us write $a_n \sim b_n$ for a pair of sequences of positive real numbers indexed by integers if $\lim_{n\to \infty} \frac{a_n}{b_n} = 1$. \begin{letteredthm}[E]\label{thm-E} Let $G$ be either an infinite right-angled Coxeter group with defining graph $\Gamma$ whose complement $\overline{\Gamma}$ is \textit{not} a union of a complete graph and an empty graph\footnote{Here and further, an empty graph means a graph with some (possibly none) vertices and no edges.}, or a right-angled Artin group with defining graph $\Gamma$ that is not empty. Let $a_n$ be the number of elements in $G$ of word-length $n$ with respect to $\Gamma$, and let $b_n$ be the number of length-$n$ geodesics issuing from the origin in the Cayley graph of $G$ with respect to $\Gamma$. Then, $b_n \sim C\,\, \delta^n \,\, a_n$, as $n\rightarrow \infty$, where $\delta = \delta(G) > 1$ is a ratio of two Perron numbers, and $C = C(G) >0$ is a constant. In particular, this implies that $\beta(G) > \alpha(G)$. \end{letteredthm} As evidenced by our examples in the sequel, geodesic growth rates may fail to be Perron numbers outside the class of right-angled Coxeter groups. If we consider the automatic growth rate, cf. \cite{GS}, which is notably associated with a non-standard generating set, then this quantity is not necessarily a Perron number even in the right-angled case. We refer the reader to the monograph \cite{BB} for a comprehensive exposition of the combinatorics of Coxeter groups and to \cite{LM} for more information on the general dynamical properties of finite state automata. \section{Preliminaries}\label{section:preliminaries} In this section we briefly recall all the necessary notions and facts that are used in the sequel.
A \emph{Perron number} is a real algebraic integer exceeding $1$ which is greater in absolute value than any of its other Galois conjugates. Perron numbers constitute an important class of numbers that appear, in particular, in connection with dynamics, cf. \cite{LM}. \smallskip Let $M$ be a square $n \times n$ ($n \geq 1$) matrix with real entries. Then $M$ is called \emph{positive} if $M_{ij} > 0$, for all $1 \leq i, j \leq n$, and \emph{non-negative} if $M_{ij} \geq 0$, for all $1 \leq i, j \leq n$. A non-negative matrix $M$ is called \emph{reducible (or decomposable)} if there exists a permutation matrix $P$ such that $PMP^{-1}$ has a block upper-triangular form. Otherwise, $M$ is called \textit{irreducible (or indecomposable)}. It is well-known that if $M$ is the adjacency matrix of a directed graph $D$, then $M$ is irreducible if and only if $D$ is strongly connected (i.e. there is a directed path between any two distinct vertices of $D$). The \emph{$i$-th period} ($1 \leq i \leq n$) of a non-negative matrix $M$ is the greatest common divisor of all natural numbers $d$ such that $(M^d)_{ii} > 0$. If $M$ is irreducible, then all periods of $M$ coincide and equal \emph{the period of} $M$. A non-negative matrix is called \emph{aperiodic} if it has period $1$. A non-negative matrix that is irreducible and aperiodic is called \emph{primitive}. The classical Perron-Frobenius theorem implies that the largest real eigenvalue of a square $n\times n$ ($n \geq 2$) non-negative primitive integral matrix is a Perron number, cf. \cite[Theorem 4.5.11]{LM}. \smallskip In our case, the matrix $M$ is the transfer matrix of a finite-state automaton $\mathcal{A}$ (or of its part), which can be viewed as a directed graph. Let $a_l = |\{$words of length $l$ accepted by $\mathcal{A}\}|$. Then the \textit{exponential growth rate} of the regular language $L = L(\mathcal{A})$ accepted by $\mathcal{A}$ is defined as $\gamma(L) = \limsup_{l\to \infty} \sqrt[l]{a_l}$.
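To illustrate these notions, the following sketch (Python; an illustrative addition, not part of the original text) takes the golden-mean shift matrix, a standard primitive example from symbolic dynamics, computes a state's period as the gcd of its return times, and estimates the growth rate of the path-counting sequence via the ratio of successive counts, which for a primitive matrix converges to the Perron-Frobenius eigenvalue, here the golden ratio.

```python
from math import gcd

def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def period(M, i, max_d=50):
    """The i-th period: gcd of all d <= max_d with (M^d)_{ii} > 0."""
    P, g = M, 0
    for d in range(1, max_d + 1):
        if P[i][i] > 0:
            g = gcd(g, d)
        P = mat_mul(P, M)
    return g

# Golden-mean shift: a primitive non-negative integer matrix whose
# Perron-Frobenius eigenvalue is the golden ratio, a Perron number.
M = [[1, 1],
     [1, 0]]

# a_l = number of length-l paths in the underlying directed graph,
# i.e. the sum of the entries of M^l.
P = [row[:] for row in M]
counts = []
for _ in range(41):
    counts.append(sum(sum(row) for row in P))
    P = mat_mul(P, M)

# For a primitive matrix, a_{l+1}/a_l converges to the spectral radius
# (much faster than the root test a_l^(1/l) does).
rate = counts[-1] / counts[-2]

assert period(M, 0) == 1                        # aperiodic
assert abs(rate - (1 + 5 ** 0.5) / 2) < 1e-9    # golden ratio
```

Note the use of the ratio test rather than the root test: for a primitive matrix the ratio of consecutive path counts converges geometrically to the Perron-Frobenius eigenvalue, while $\sqrt[l]{a_l}$ converges only slowly.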
The spectral radius of $M$ equals exactly $\gamma(L)$ provided that the latter is bigger than $1$, cf. \cite[Proposition 4.2.1]{LM}. If $G$ is a group with a generating set $S$, let $S^{-1}$ be the set of inverses of the elements in $S$. The word-length of an element $g \in G$ is the minimum length of a word over the alphabet $S \cup S^{-1}$ needed to write $g$ as a product. Then we define the \textit{spherical exponential growth rate} of $G$ with respect to $S$ as $\alpha(G, S) = \limsup_{l\to \infty}\sqrt[l]{a_l}$, where $a_l$ is the number of elements in $G$ of word-length $l$. The \textit{geodesic exponential growth rate} of the group $G$ with respect to a generating set $S$ is defined as $\beta(G, S) = \limsup_{l\to \infty}\sqrt[l]{b_l}$, where $b_l$ is the number of geodesic paths in the Cayley graph of $G$ with respect to $S$ starting at the identity and having length $l$. Here a geodesic path is a path joining two given vertices and having the minimal number of edges; hence it is simple (i.e. without backtracking or self-intersections). If $\mathrm{ShortLex}$ is the shortlex language for $G$ and $\mathrm{Geo}$ is the geodesic language for $G$, in each case with respect to $S$, then $\alpha(G, S) = \gamma(\mathrm{ShortLex})$ and $\beta(G, S) = \gamma(\mathrm{Geo})$. The right-angled Coxeter group (or RACG, for short) $G$ defined by a simple graph $\Gamma = (V, E)$ with vertices $V = V\Gamma$ and edges $E = E\Gamma$, is the group with standard presentation \begin{equation*} G = \langle v \in V\Gamma \mid v^2=1,\, \mbox{ for all } v \in V\Gamma,\quad [u, v]=1, \mbox{ if } (u, v) \in E\Gamma \rangle, \end{equation*} while the right-angled Artin group (or RAAG) $G$ defined by $\Gamma$ has standard presentation \begin{equation*} G = \langle v \in V\Gamma \mid [u, v]=1, \mbox{ if } (u, v) \in E\Gamma \rangle.
\end{equation*} It is known that the $\mathrm{ShortLex}$ and $\mathrm{Geo}$ languages are regular for RACGs and RAAGs with their standard symmetric generating sets, cf. \cite{BH, LMW}. In the sequel, for a RACG or RAAG $G$ we shall write simply $\alpha(G)$, resp. $\beta(G)$, for the spherical, resp. geodesic, growth rate of $G$ with respect to its standard symmetric generating set. As the complement $\overline{\Gamma}$ of the defining graph $\Gamma$ splits into connected components, the corresponding RACG or RAAG splits into a direct product of the respective irreducible RACGs or RAAGs. If $\overline{\Gamma}$ has a connected component with three or more vertices, then the growth rate (spherical or geodesic) of the associated RACG is strictly greater than $1$. An analogous statement holds for a RAAG defined by a graph $\Gamma$ such that $\overline{\Gamma}$ has a connected component with two or more vertices. Thus, apart from easily classifiable exceptions, the growth rates (spherical and geodesic) of RACGs and RAAGs are strictly greater than $1$. We would like to stress that the geodesic growth rate of a Coxeter group (not a RACG) does not have to be a Perron number (even if it is greater than $1$), as the example of the affine reflection group $\widetilde{A}_2$ shows (its spherical growth rate is, however, equal to $1$). The automaton $\mathcal{A}$ recognising the geodesic language $\mathrm{Geo}(\widetilde{A}_2)$ can be found in the book by Bj\"orner and Brenti \cite{BB} on page~118 (Figure~4.9), and is depicted in Figure~\ref{fig:automatonA2tilde} below for the reader's convenience. \begin{figure}[ht] \centering \includegraphics[width=13 cm]{automatonA2tilde} \caption{\footnotesize The geodesic automaton for $\widetilde{A}_2 = \langle v_1, v_2, v_3 \mid v^2_i=1, i\in \{1,2,3\};\,\,\, (v_i v_j)^3=1, (i,j\in \{1,2,3\},\, i\ne j) \rangle$. The generators labelling its arrows are indicated by their indices. The start state is marked by a double circle.
The fail state and the corresponding arrows are omitted. The attracting component has vertices $\{a, b, c, d, e, f\}$.}\label{fig:automatonA2tilde} \end{figure} Observe that the automaton $\mathcal{A}$ has a single attracting component spanned by the vertices labelled $\{a, b, c, d, e, f\}$, while the period of $a$ equals $\mathrm{gcd}(4,6) = 2$. By \cite[Exercise 4.5.13]{LM}, this is enough to conclude that the growth rate of $\mathcal{A}$ is not a Perron number. Thus, neither is the geodesic growth rate of $\widetilde{A}_2$. A direct computation shows that it equals $\sqrt{2}$, whose only other Galois conjugate is its negative. We would like to note that we do not know any example of an infinite Coxeter group whose spherical growth rate with respect to the standard generating set is neither a Perron number nor equal to~$1$. However, one can find a counterexample even for a RACG, when one considers a non-standard generating set. It is not known in general whether the growth series of Coxeter groups are rational for all generating sets, but for a RACG $G$ a natural generating set with this property was introduced in the paper \cite{GS}. This generating set, called the \textit{automatic} generating set, consists of all words $b_1 b_2 \ldots b_k$ where $\{b_1,b_2,\ldots,b_k\}$ is a clique in the defining graph of the group. The automaton described in \cite[Remark~5]{GS} accepts the shortlex language of normal forms with respect to the aforementioned alphabet, so that the corresponding growth series is rational and the spherical growth rate is an algebraic number. Let us then consider the group $\mathbb{Z}_2*(\mathbb{Z}_2 \times \mathbb{Z}_2)$ that is defined by the graph on the set of vertices $S=\{a, b, c\}$ having a single edge joining $b$ and $c$. The spherical growth rate with respect to the standard set $S$ is easily computed: it equals the golden ratio $(1+\sqrt5)/2$, which is a Pisot number (and thus a Perron number).
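This value is easy to confirm by brute force. The sketch below (Python; an illustration added here, with the rewriting rules hard-coded for this particular group) reduces words over $\{a,b,c\}$ to normal form using the complete rewriting system $xx \to 1$, $cb \to bc$, runs a breadth-first search of the Cayley graph, and checks that the ratio of successive sphere sizes approaches the golden ratio.

```python
GENS = "abc"
SWAPS = {("c", "b")}   # bc = cb, so rewrite "cb" -> "bc"

def reduce_word(w):
    """Shortlex normal form in Z2 * (Z2 x Z2): a^2 = b^2 = c^2 = 1, bc = cb."""
    w = list(w)
    changed = True
    while changed:
        changed = False
        i = 0
        while i < len(w) - 1:
            if w[i] == w[i + 1]:                 # x x -> 1
                del w[i:i + 2]
                changed = True
                i = max(i - 1, 0)
            elif (w[i], w[i + 1]) in SWAPS:      # c b -> b c
                w[i], w[i + 1] = w[i + 1], w[i]
                changed = True
                i += 1
            else:
                i += 1
    return "".join(w)

# Breadth-first search of the Cayley graph, recording the sphere sizes a_n.
sphere = {""}
seen = {""}
counts = []
for _ in range(18):
    nxt = set()
    for g in sphere:
        for x in GENS:
            h = reduce_word(g + x)
            if h not in seen:
                seen.add(h)
                nxt.add(h)
    counts.append(len(nxt))
    sphere = nxt

ratio = counts[-1] / counts[-2]
assert abs(ratio - (1 + 5 ** 0.5) / 2) < 1e-6   # spherical growth ~ golden ratio
```

The sphere sizes come out as $3, 5, 8, 13, \dots$, a shifted Fibonacci sequence, in line with the golden-ratio growth.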
However, the automatic generating set $\{a,b,c,d\}$ from \cite{GS} provides normal forms where the letter $a$ alternates with the other three letters $b$, $c$, and $d = bc$, so that the corresponding growth rate equals $\sqrt3$, which is not a Perron number. \section{Finite automata for shortlex and geodesic words}\label{section:automata} We begin by introducing more of the general set-up and describing the structure of finite automata for the shortlex and geodesic languages associated with a RACG or RAAG, say $G$. Then we outline the ideas of the subsequent proofs. We start with the case of RACGs and then proceed to that of RAAGs, since the latter can be deduced from the former. First of all, certain assumptions can be made about the defining graph $\Gamma = (V, E)$ of the RACG $G$ according to our observations in the previous section. We suppose that the complement $\overline{\Gamma}$ is connected and has three or more vertices. Otherwise, either $G \cong D_{\infty}$ or $G$ splits as a direct product of two RACGs $G_1$ and $G_2$, and for the growth rates we have $\alpha(G) = \max\{ \alpha(G_1), \alpha(G_2) \}$ \cite[\S VI.C.59]{H} and $\beta(G) = \beta(G_1) + \beta(G_2)$ \cite[Theorem 2.2]{Br}. If either of the $G_i$'s is a finite group, then its defining graph is a complete graph and its spherical and geodesic growth rates are equal to $0$. Otherwise, both of its growth rates are at least $1$. Thus, we take either the maximum or the sum of two numbers, each of which is, by assumption, either $0$, $1$, or a Perron number. The resulting value is then also either $0$, $1$, or a Perron number \cite{Lind}. In fact, $0$ occurs as a growth rate only for finite RACGs. For a RAAG $G$ with defining graph $\Gamma$, we assume that the complement $\overline{\Gamma}$ is connected and has two or more vertices.
Otherwise, either $G \cong \mathbb{Z}$, or $G$ splits as a direct product of two RAAGs $G_1$ and $G_2$, and the previous argument for RACGs applies verbatim. Each $G_i$ has spherical and geodesic growth rates at least $1$. Now we describe two automata, which are the main objects of our further consideration. The first automaton, called $\mathcal{A}$, accepts the shortlex language of words for the RACG $G$ with respect to its standard generating set, and the second one, called $\mathcal{B}$, accepts the geodesic words for $G$ (with respect to the standard generating set). We start by describing the automaton $\mathcal{B}$, which is introduced in \cite{LMW}, since it has a simpler structure. For a simple graph $\Gamma$, and a vertex $v\in V\Gamma$, let the star of $v$ be the set $\mathrm{st}(v) = \{ u \in V\Gamma\, |\, u \mbox{ is adjacent to } v \mbox{ in } \Gamma \}$. Then, $\mathcal{B}$ has the following set of states $\mathcal{S}$ and transition function $\delta$: \begin{itemize} \item[a)] $\mathcal{S} = \{ s \subseteq V\Gamma \mid s \mbox{ spans a clique in } \Gamma \} \cup \{\emptyset\} \cup \{\star\}$, \item[b)] the start state is $\emptyset$, the only fail state is $\star$, and all other states are accept states, \item[c)] for each $s \in \mathcal{S}$ and $v \in V\Gamma$ we have $\delta(s, v) = \{v\} \cup (\mathrm{st}(v) \cap s)$, provided $v \notin s$, and $\delta(s, v) = \star$ otherwise. \end{itemize} Next, we order the vertices of $\Gamma$ with respect to some total order $\{ v_{i_1} < v_{i_2} < \dots < v_{i_n} \}$ and consider the shortlex automaton $\mathcal{A}$ for $G$ which is obtained from $\mathcal{B}$ simply by deleting all the transitions which violate the shortlex order.\footnote{The automaton under consideration is actually accepting the \emph{reverse shortlex} language, where the significance of letters reduces from right to left, with ``smaller'' letters considered more significant.
However, this language has the same growth function as the standard shortlex language, and thus there is no difference for the purposes of our proof.} Thus, we modify $\delta$ as follows: \begin{itemize} \item[a)] $\delta(s, v) = \star$, if $v \in s$\, or\, $v > \mathrm{min}(\mathrm{st}(v) \cap s)$, when $\mathrm{st}(v) \cap s \neq \emptyset$, \item[b)] $\delta(s, v) = \{v\} \cup (\mathrm{st}(v) \cap s)$, otherwise. \end{itemize} For the sake of convenience, we shall omit the fail state $\star$ and the corresponding arrows in all our automata, similar to Figure~\ref{fig:automatonA2tilde}. It is worth noting that the automata $\mathcal A$ and $\mathcal B$ can be built using two different approaches: via the combinatorics of words, where a state describes the set of possible last letters in the normal form of a given word, cf. \cite{LMW}, or using the geometry of short roots of a given Coxeter group, cf. \cite{BH} (note that the latter is much more powerful, since it works for all Coxeter groups). In what follows, we shall prove that the transfer matrix $M = M(\mathcal{A} \setminus \{\emptyset\})$ is primitive. We need to consider such a pruned automaton since the start state $\emptyset$ has no incoming arrows, and thus $\mathcal{A}$ itself is not strongly connected. However, we need only the rest of $\mathcal{A}$ in order to count non-trivial words, and may instead suppose that we have several start states, while the set of accepted words is partitioned by their first letters. Then we show that $\mathcal{A} \setminus \{\emptyset\}$ is strongly connected by first showing that the subset of the so-called \textit{singleton states} is strongly connected (Lemma~\ref{singletons-connected}). Then we prove that for any other state there is always a directed path in $\mathcal{A} \setminus \{\emptyset\}$ leading to a singleton state (Lemma~\ref{level}) and vice versa (an easy observation). This is equivalent to saying that $M$ is irreducible.
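The definitions of $\mathcal{B}$ and $\mathcal{A}$ above are easy to prototype. In the sketch below (Python; an illustrative addition, with the defining graph of the group $\mathbb{Z}_2*(\mathbb{Z}_2\times\mathbb{Z}_2)$ from Section~\ref{section:preliminaries} hard-coded as an example), the fail state $\star$ is represented by \texttt{None}, and accepted words are counted level by level from the start state; the shortlex counts agree with the number of group elements of word-lengths $1$ and $2$, while the geodesic automaton additionally accepts the word $cb$.

```python
from itertools import combinations

# Defining graph of Z2 * (Z2 x Z2): vertices ordered a < b < c,
# single edge (b, c).  An illustrative choice; any simple graph works.
V = ["a", "b", "c"]
E = {frozenset({"b", "c"})}
st = {v: {u for u in V if frozenset({u, v}) in E} for v in V}

# Accept states: nonempty cliques of the graph; the start state is frozenset().
cliques = [frozenset(c) for k in range(1, len(V) + 1)
           for c in combinations(V, k)
           if all(frozenset(p) in E for p in combinations(c, 2))]

def delta_B(s, v):
    """Geodesic transition; None plays the role of the fail state."""
    if v in s:
        return None
    return frozenset({v}) | (st[v] & s)

def delta_A(s, v):
    """Shortlex transition: additionally fail if v > min(st(v) & s)."""
    t = delta_B(s, v)
    meet = st[v] & s
    if t is None or (meet and v > min(meet)):
        return None
    return t

def count_words(delta, length):
    """Number of words of the given length accepted from the start state."""
    level = {frozenset(): 1}
    for _ in range(length):
        nxt = {}
        for s, mult in level.items():
            for v in V:
                t = delta(s, v)
                if t is not None:
                    nxt[t] = nxt.get(t, 0) + mult
        level = nxt
    return sum(level.values())

assert len(cliques) == 4                       # {a}, {b}, {c}, {b, c}
assert count_words(delta_A, 1) == 3            # elements a, b, c
assert count_words(delta_A, 2) == 5            # ab, ac, ba, ca, bc
assert count_words(delta_B, 2) == 6            # geodesics also include cb
```

The level-by-level count is exactly multiplication by the transfer matrix, so iterating it long enough would also recover the growth rates discussed below.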
Furthermore, at least one of the singleton states belongs simultaneously to a $2$-cycle and a $3$-cycle of directed edges in $\mathcal{A}$ (Lemma \ref{period-one}). This will imply that $M$ is aperiodic. Then the Perron-Frobenius theorem, as stated in \cite[Theorem 4.5.11]{LM}, applied to $M$ guarantees that $\alpha(G)$ is a Perron number (Theorem A). By applying analogous reasoning to the automaton $\mathcal{B}$, we obtain that $\beta(G)$ is also a Perron number (Theorem B). In order to proceed to RAAGs, we apply \cite[Lemma 2]{DS}, which states that for a RAAG $G$ there exists an associated RACG $G^\pm$ whose spherical and geodesic growth rates coincide with those of $G$. Thus, the result for RAAGs follows (Theorems C and D). Finally, by using the notion of matrix domination \cite[Definition A.7]{B}, we show that the Perron-Frobenius eigenvalue of the transfer matrix of $\mathcal{B}$ strictly dominates that of $\mathcal{A}$ under certain simple conditions on the defining graph $\Gamma$, from which the required inequality for the growth rates immediately follows (Theorem E). \section{Proof of Theorem A}\label{section:proof-A} Let $G$ be an infinite right-angled Coxeter group with defining graph $\Gamma$. We show that \emph{the spherical exponential growth rate $\alpha(G)$ of $G$ with respect to its standard set of generators determined by $\Gamma$ is either $1$ or a Perron number.} In the sequel, we assume that $\overline{\Gamma}$ is connected; otherwise, we pass to its connected components, as discussed in the previous section. Also, let $\Gamma$ have at least $3$ vertices; otherwise $G \cong D_\infty$ and the proof is finished. The following definition describes a useful class of states of the shortlex automaton $\mathcal{A}$ introduced in the previous section. \begin{defn} Let $s \in \mathcal{S}$ be a state of the automaton $\mathcal{A}$. We call $s$ a \textit{singleton} if $s = \{v\}$ for a vertex $v \in V\Gamma$.
\end{defn} Next, we show a crucial, albeit almost evident, property of singleton states. \begin{lemma}\label{singletons-connected} The set of singleton states of $\mathcal{A}$ is strongly connected. \end{lemma} \begin{proof} If two vertices $u$ and $v$ are adjacent in $\overline{\Gamma}$, then $\delta(\{u\}, v) = \{v\}$ and $\delta(\{v\}, u) = \{u\}$. By connectivity of $\overline{\Gamma}$, the claim follows. \end{proof} With the above lemma in hand, one can prove that the whole of $\mathcal{A}\setminus \{\emptyset\}$ is strongly connected. To this end, let us partition the states of $\mathcal{A}$ by cardinality: $\mathcal{S} = \bigsqcup^{m}_{k = 0} \{ s \in \mathcal{S} \,:\, |s| = k \}$, and say that a state $s$ belongs to level $k$ if $|s| = k$ $(0\leq k \leq m)$, where $m$ is the maximal clique size in $\Gamma$. \begin{figure}[ht] \centering \includegraphics[width=9 cm]{tree} \caption{\footnotesize A state $s = \{ 2, 10, 12 \}$ of level $3$ is represented by highlighted vertices in the spanning tree $T$ for $\overline{\Gamma}$. Here, following the proof notation, $u = 2$, $v = 10$, and $w = 1$.}\label{fig:tree} \end{figure} \begin{lemma}\label{level} Any state of $\mathcal{A}$ of level $k > 1$ is connected by a directed path to a state of strictly smaller level $l < k$. \end{lemma} \begin{proof} Let us choose a spanning tree $T$ in $\overline{\Gamma}$ and suspend it from a chosen root. We can assume that the order on the vertices of $\Gamma$ is defined by assigning a unique integer label in the set $\{1, \dots, n\}$ and then comparing the labels in the usual way. We label the root $1$, and the lower levels of successor vertices of $T$ are labelled left-to-right in increasing order. An example of such a labelling is shown in Figure~\ref{fig:tree}. Let $s \in \mathcal{S}$ be a state of $\mathcal{A}$ (represented by a clique in $\Gamma$) that is not a singleton. Let $u, v \in s$ be the vertices such that $u = \min(s)$ and $v = \min(s \setminus \{ u \})$.
Then there exists a path $p$ in $T$ that connects $u$ and $v$. Necessarily, the length of $p$ is $|p| \geq 2$, since $u$ and $v$ commute and hence are not adjacent in $\overline{\Gamma}$. Note that for the vertex $w$ adjacent to $u$ in $p$ we have $w < v$ by construction of $T$ and its labelling. Since $u \notin \mathrm{st}(w)$, we have that $w < \min( \mathrm{st}(w) \cap s )$, and therefore $s' = \delta(s, w) = \{ w \} \cup (\mathrm{st}(w) \cap s) \neq \star$. Thus, we find a new state $s'$ which is not a fail state. If $l = |s'| < |s| = k$, then the proof is finished. Note that the inequality $l < k$ always holds if $|p| = 2$, since in this case $s' = (s \setminus \{ u,v \})\cup \{ w \}$. Let us suppose that $l = k$ and $|p| > 2$. Then $s' = (s \setminus \{ u \}) \cup \{ w \}$, and $\min(s') = w$, $\min(s'\setminus \{w\}) = v$, while the path $p'$ joining $w$ to $v$ in $T$ has length $|p'| < |p|$. Hence, we conclude the proof by induction on the length of this path. \end{proof} Let $s, s' \in \mathcal{S}$ be two states of $\mathcal{A}$ of the respective levels $l$ and $m$, with $l, m \geq 1$. Then we can apply Lemma~\ref{level} repeatedly in order to move from $s$ to some singleton state $\{u\}$, while by construction of $\mathcal{A}$ there exist a singleton $\{v\}$ and a directed path from $\{v\}$ to $s'$. Due to Lemma~\ref{singletons-connected} one can then move among the singletons from $\{u\}$ to $\{v\}$, and thus connect $s$ to $s'$ by a directed path in $\mathcal{A}\setminus \{ \emptyset \}$. Hence $\mathcal{A}\setminus \{ \emptyset \}$ is strongly connected, and its transfer matrix $M$ is irreducible. \begin{lemma}\label{period-one} At least one singleton state of the automaton $\mathcal{A}$ belongs simultaneously to a $2$-cycle and a $3$-cycle of directed edges in $\mathcal{A}$. \end{lemma} \begin{proof} Since $\overline{\Gamma}$ is connected and has at least $3$ vertices, it contains a path subgraph with vertices $u$, $v$, and $w$, such that $uv$ and $vw$ are edges, and $u > w$ in the chosen order on the vertices.
Then we have the following cycles by applying $\delta$: \begin{itemize} \item[a)] $ \{u\} \rightarrow \delta(\{u\}, v) = \{v\} \rightarrow \delta(\{v\}, u) = \{u\}$, \item[b)] $\{u\} \rightarrow \delta(\{u\}, w) = \{ u,w \} \rightarrow \delta(\{ u,w \}, v) = \{v\} \rightarrow \delta(\{v\}, u) = \{u\}$, \\ if $u$ and $w$ commute, \item[c)] $\{u\} \rightarrow \delta(\{u\}, w) = \{ w \} \rightarrow \delta(\{w \}, v) = \{v\} \rightarrow \delta(\{v\}, u) = \{u\}$, \\ if the element $uw$ has infinite order. \qedhere \end{itemize} \end{proof} The above statement implies that the transfer matrix $M = M(\mathcal{A}\setminus \{ \emptyset \})$ is aperiodic. Taking into account that $M$ is also irreducible, we obtain that $M$ is primitive, and its Perron-Frobenius eigenvalue is thus a Perron number by \cite[Theorem 4.5.11]{LM}. In other words, the spherical growth rate $\alpha(G)$ is a Perron number. \section{Proof of Theorem B}\label{section:proof-B} Let $G$ be an infinite right-angled Coxeter group with defining graph $\Gamma$. We show that \emph{the geodesic exponential growth rate $\beta(G)$ of $G$ with respect to its standard set of generators determined by $\Gamma$ is either $1$ or a Perron number.} As indicated in Section~\ref{section:automata}, we may suppose that $\overline{\Gamma}$ is connected and has at least three vertices. Let $\mathcal{B}$ be the geodesic automaton for $G$. Then $\mathcal{B}\setminus \{ \emptyset \}$ is strongly connected, since in order to obtain $\mathcal{B}$ from $\mathcal{A}$ we only add directed edges and never remove any. Also, the argument of Lemma~\ref{period-one} applies verbatim to $\mathcal{B}$. Thus, the growth rate of the language accepted by $\mathcal{B}$ is a Perron number. \section{Proof of Theorems C and D}\label{section:proof-C-D} Let $G$ be a RAAG with defining graph $\Gamma$ and symmetric generating set $S = \{ v\, :\, v \in V\Gamma \} \cup \{ v^{-1}\, :\, v \in V\Gamma \}$.
Let $\alpha(G)$ and $\beta(G)$ be, respectively, the spherical exponential growth rate and the geodesic exponential growth rate of $G$ with respect to $S$. Below we show that \emph{each of $\alpha(G)$ and $\beta(G)$ is either $1$ or a Perron number.} According to our observation about the behaviour of growth rates of RAAGs with respect to direct products, we may assume that $\overline{\Gamma}$ is connected. By assuming that $\Gamma$ has two or more vertices we guarantee that the spherical and geodesic growth rates of $G$ are strictly greater than $1$. It is well-known, e.g. by \cite[Lemma 2]{DS}, that there exists a RACG $G^\pm$ with generating set $S^\pm$ such that its elements of length $k$ map injectively into the elements of length $k$ in the group $G$ with respect to the generating set $S$. Indeed, let $G^\pm$ be the associated RACG with defining graph $\Gamma^\pm$, which is the double of $\Gamma$. That is, $\Gamma^\pm$ has a pair of vertices $v^+$ and $v^-$ for each vertex $v$ of $\Gamma$, and if $(u, v)\in E\Gamma$, then $(u^+, v^+)$, $(u^-, v^-)$, $(u^+, v^-)$, $(u^-, v^+)$ are edges of $\Gamma^\pm$. The generating set for $G^\pm$ is $S^\pm = V\Gamma^\pm$. Given a word $w = v^{r_1}_{i_1} v^{r_2}_{i_2} \ldots v^{r_s}_{i_s}$ in $G$, consider the corresponding word $\sigma(w) = \prod^{s}_{j=1} \sigma(v^{r_j}_{i_j})$, where each $\sigma(v^{r_j}_{i_j})$ has length $|r_j|$ and alternating form $v^+_{i_j} v^-_{i_j}v^+_{i_j} \ldots v^{\varepsilon}_{i_j}$, if $r_j>0$, or $v^-_{i_j} v^+_{i_j}v^-_{i_j}\ldots v^{-\varepsilon}_{i_j}$, if $r_j<0$, where $\varepsilon = \pm 1$, as appropriate. It is easy to check that the correspondence $\sigma$ between the sets of words in $\mathrm{Geo}(G)$ and $\mathrm{Geo}(G^\pm)$ is one-to-one and length-preserving. Define a lexicographic order on the symmetric generating set $S$ of $G$ in which generators with positive exponents always dominate, i.e.
$u > v^{-1}$ for all $u, v \in V\Gamma$, and generators whose exponents have the same sign are compared with respect to some total order such that $u < v$ if and only if $u^{-1} > v^{-1}$, for all $u \neq v \in V\Gamma$. Let the corresponding lexicographic order on the generating set $S^\pm$ of $G^\pm$ be defined by $u^+ > v^-$ for all the corresponding vertices of $\Gamma^\pm$, and $u^+ < v^+$, resp. $u^- > v^-$, whenever $u < v$ in the total order on the generating set $S$. Then $\sigma$ becomes compatible with the corresponding shortlex orders on $G$ and $G^\pm$. That is, we have a one-to-one correspondence between the sets of words of any given length in $\mathrm{Geo}(G)$ and $\mathrm{Geo}(G^\pm)$, as well as in $\mathrm{ShortLex}(G)$ and $\mathrm{ShortLex}(G^\pm)$. This fact implies that $\alpha(G) = \alpha(G^\pm)$ and $\beta(G) = \beta(G^\pm)$, and thus the spherical growth rate $\alpha(G)$ of $G$ and its geodesic growth rate $\beta(G)$ are Perron numbers, by Theorem A and Theorem B for RACGs. \section{Proof of Theorem E}\label{section:proof-E} Let $G$ be an infinite right-angled Coxeter group with defining graph $\Gamma$ such that $\overline{\Gamma}$ is not a union of a complete graph and an empty graph, or let $G$ be a right-angled Artin group with $\Gamma$ non-empty. Then we show that \textit{the geodesic growth rate $\beta(G)$ strictly dominates the spherical growth rate $\alpha(G)$.} In fact, this statement takes a more quantitative form, as can be seen below. To this end, let $G$ be a RACG with defining graph $\Gamma$. If $\overline{\Gamma}$ has $k \geq 1$ connected components $\overline{\Gamma}_i$, $i=1,\dots, k$, then $G$ splits as a direct product $G_1 \times \ldots \times G_k$, where $G_i$ is a subgroup of $G$ determined by the subgraph $\Gamma_i$ spanned in $\Gamma$ by the vertices of $\overline{\Gamma_i}$.
As mentioned in Section~\ref{section:automata}, the following equalities hold for the spherical and geodesic growth rates of a direct product: \begin{equation*} \alpha(G) = \max_{i=1,\dots, k} \alpha(G_i), \end{equation*} while \begin{equation*} \beta(G) = \sum^k_{i=1} \beta(G_i). \end{equation*} Note that if $\overline{\Gamma_i}$ is an isolated vertex, then $\alpha(G_i) = \beta(G_i) = 0$; otherwise $\alpha(G_i), \beta(G_i) \geq 1$. \smallskip Thus, if more than one connected component of $\overline{\Gamma}$ is not a vertex, then $\alpha(G) < \beta(G)$. The equality clearly takes place when $\overline{\Gamma}$ is a union of a complete graph and an empty graph. Now suppose that $\overline{\Gamma}$ is a union of several isolated vertices $v_i$, $i=1, \dots, k$, for $k\geq 0$, and a single connected graph $\overline{\Gamma_0}$ on two or more vertices. Since in this case the non-zero growth rate comes from the latter, the initial group $G$ can be replaced by its subgroup determined by $\Gamma_0$. Thus we continue by setting $\Gamma := \Gamma_0$, and let $G$ be the corresponding RACG. Let $M$ be the transfer matrix of the automaton $\mathcal{A}$ (the shortlex automaton for $G$), and $N$ be the transfer matrix of the automaton $\mathcal{B}$ (the geodesic automaton for $G$) constructed in Section~\ref{section:automata}. Since $\mathcal{A}$ is a subgraph of $\mathcal{B}$, if both are considered as labelled directed graphs, $M$ is dominated by $N$ in the sense of \cite[Definition A.7]{B}. The spherical growth rate $\alpha = \alpha(G)$ and the geodesic growth rate $\beta = \beta(G)$ are the Perron-Frobenius eigenvalues (or, which is the same, spectral radii) of $M$ and $N$, respectively, cf. \cite[Proposition 4.2.1]{LM}. As we know from Sections \ref{section:proof-A} and \ref{section:proof-B}, both matrices $M$ and $N$ are irreducible. Moreover, $M$ and $N$ can coincide if and only if there are no commutation relations between the generators of $G$ (i.e.
$G$ is a free product of two or more copies of $\mathbb{Z}_2$), which is not the case. Then, by \cite[Corollary A.9]{B}, we obtain the inequality $\alpha < \beta$. Let $a_n$ be the number of elements in $G$ of word-length $n$ with respect to $\Gamma$, and let $b_n$ be the number of length-$n$ geodesics issuing from the origin in the Cayley graph of $G$ with respect to $\Gamma$. Then, since the Perron-Frobenius eigenvalue is simple, the quantities $a_n$ and $b_n$ asymptotically satisfy $a_n \sim C_1\,\,\alpha^n$ and $b_n \sim C_2\,\, \beta^n$, as $n\rightarrow \infty$, for some constants $C_1, C_2 > 0$. Hence $b_n \sim (C_2/C_1)\, (\beta/\alpha)^n\, a_n$, and the claim for RACGs follows with $\delta = \beta/\alpha > 1$ and $C = C_2/C_1$. \smallskip The case of a RAAG $G$ with defining graph $\Gamma$ such that $\overline{\Gamma}$ is connected can be treated similarly, in view of the discussion of growth rates in Section~\ref{section:proof-C-D} and the fact that the corresponding RACG $G^{\pm}$ has empty defining graph if and only if one starts with the empty graph $\Gamma$ for $G$. Otherwise, if $\overline{\Gamma}$ is disconnected, then the geodesic growth rate $\beta = \beta(G)$ is a sum of two or more numbers greater than or equal to $1$ (since the minimal possible spherical or geodesic exponential growth rate equals $1$ for a RAAG), while $\alpha = \alpha(G)$ is the maximum of those, which implies $\alpha < \beta$, as required. \section*{Acknowledgements} \noindent {\small The authors gratefully acknowledge the support that they received from the Swiss National Science Foundation, project no.~PP00P2-170560 (for A.K.), and the Russian Foundation for Basic Research, projects no.~18-01-00822 and no.~18-51-05006 (for A.T.). A.K. would like to thank Laura Ciobanu (Heriot--Watt University, UK) and A.T. would like to thank Fedor M. Malyshev (Steklov Mathematical Institute of RAS) for stimulating discussions. Both authors are indebted to Denis~Osin (Vanderbilt University, USA) for his criticism that improved the earlier version of Theorem~E. Also, A.T.
would like to thank the University of Neuch\^{a}tel for hospitality during his visits in August 2018 and May 2019. Both authors express their gratitude to the anonymous referees for their careful reading of the manuscript and numerous comments that invaluably helped to improve the quality of exposition.}
{"config": "arxiv", "file": "1809.09591/RACG_Perron_numbers_6.tex"}
TITLE: Size of KL-divergence neighbourhoods QUESTION [5 upvotes]: I am new here. I was reading another post here and it got me wondering what can be said about the size of the following KL-divergence neighbourhoods. Consider these two KL-divergence neighbourhoods for a fixed distribution $P'$ and some $\alpha \geq 0$ $$ \mathbf{P} = \{P : D_{KL}(P||P') < \alpha\}\\ \mathbf{Q} = \{Q : D_{KL}(P'||Q) < \alpha\} $$ I am wondering whether anything can be said about $|\mathbf{P}|$ and $|\mathbf{Q}|$. Because the KL-divergence is asymmetric, I doubt they are equal, and the answer seems to depend on $P'$. When $\alpha=0$, the problem is trivial. I am curious about the case $\alpha > 0$. I hope this question makes sense. Thanks. REPLY [1 votes]: Perhaps more a question for math.stackexchange.com. Unless your probability space $(\Omega, \mathcal F)$ is trivial (i.e. $\mathcal F = \{ \emptyset, \Omega \})$, the sets $\mathbf P$ and $\mathbf Q$ will contain a continuum of probability distributions. (Consider probability measures of the form $\frac{d P}{dP'} =\exp(-X)/E [\exp(-X)]$ for random variables $X$. Not all random variables $X$ will work, but uncountably many do.)
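A quick numerical sketch of the answer's construction (the base distribution $P'$ and the random variable $X$ below are made up for illustration): tilting a finite $P'$ by $\exp(-tX)$ gives a continuous one-parameter family of measures whose divergence from $P'$ vanishes at $t=0$, so for any $\alpha>0$ a continuum of these tilts lands inside $\mathbf P$.

```python
import numpy as np

def kl(p, q):
    """D_KL(p || q) for discrete distributions with full support."""
    return float(np.sum(p * np.log(p / q)))

p_prime = np.array([0.2, 0.3, 0.5])   # an arbitrary fixed P'
x_vals = np.array([0.0, 1.0, 2.0])    # an arbitrary random variable X

def tilt(t):
    """Exponentially tilted measure dP_t/dP' = exp(-t*X) / E[exp(-t*X)]."""
    w = p_prime * np.exp(-t * x_vals)
    return w / w.sum()

# D_KL(P_t || P') varies continuously in t and equals 0 at t = 0, so every
# neighbourhood {P : D_KL(P||P') < alpha} with alpha > 0 contains a continuum of P_t.
divs = [kl(tilt(t), p_prime) for t in np.linspace(0.0, 1.0, 11)]
assert abs(divs[0]) < 1e-12           # t = 0 recovers P' itself
assert all(d < 0.5 for d in divs)     # these tilts all lie in P for alpha = 0.5
```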
{"set_name": "stack_exchange", "score": 5, "question_id": 147178}
\section{TD$(\lambda)$} \subsection{Properties of the TD$(\lambda)$ Algorithm} To utilize our SA results, we begin by rewriting the update equation of the TD$(\lambda)$ algorithm (\ref{algo:TDlambda}) in the form of the stochastic iterative algorithm (\ref{algo:sa}). For ease of exposition, we consider only a constant stepsize in the TD$(\lambda)$ algorithm, i.e., $\epsilon_k=\epsilon$ for all $k\geq 0$. For any $k\geq 0$, let $Y_k=(S_0,...,S_k,A_k,S_{k+1})$ (which takes value in $\mathcal{Y}_k:=\mathcal{S}^{k+2}\times\mathcal{A}$), and define a time-varying operator $F_k:\mathbb{R}^{|\mathcal{S}|}\times\mathcal{Y}_k \mapsto\mathbb{R}^{|\mathcal{S}|}$ by $[F_k(V,y)](s)=[F_k(V,s_0,...,s_k,a_k,s_{k+1})](s) =\Gamma_4(V,s_k,a_k,s_{k+1})\sum_{i=0}^{k}(\beta \lambda)^{k-i}\mathbbm{1}_{\{s_{i}=s\}}+ V(s)$ for all $s\in\mathcal{S}$. Note that the sequence $\{Y_k\}$ is not a Markov chain since it has a time-varying state-space. Using the notation of $\{Y_k\}$ and $F_k(\cdot,\cdot)$, we can rewrite the update equation of the TD$(\lambda)$ algorithm as \begin{align}\label{algo:TDlambda_update} V_{k+1}=V_k+\epsilon \left(F_k(V_k,Y_k)-V_k\right). \end{align} Although Eq. (\ref{algo:TDlambda_update}) is similar to the update equation of the SA algorithm (\ref{algo:sa}), since the sequence $\{Y_k\}$ is not a Markov chain and the operator $F_k(\cdot,\cdot)$ is time-varying, our Theorem \ref{thm:sa} is not directly applicable. To overcome this difficulty, let us look carefully at the operator $F_k(\cdot,\cdot)$. Although $F_k(V_k,Y_k)$ depends on the whole trajectory of states visited before (through the term $\sum_{i=0}^{k}(\beta\lambda)^{k-i}\mathbbm{1}_{\{S_i=s\}}$), due to the geometric factor $(\beta\lambda)^{k-i}$, the states visited during the early stage of the iteration are not important.
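As a quick sanity check (the values of $\beta$, $\lambda$, $k$, and $\tau$ below are illustrative only), the total weight that the geometric factor assigns to visits before time $k-\tau$ is a tail of a geometric series, bounded by $(\beta\lambda)^{\tau+1}/(1-\beta\lambda)$:

```python
# In [F_k(V, y)](s), the indicator of a visit at time i carries weight (beta*lam)**(k - i).
# Truncating the sum at i = k - tau therefore drops at most a geometric tail.
beta, lam = 0.9, 0.8
k, tau = 50, 10

dropped = sum((beta * lam) ** (k - i) for i in range(k - tau))  # i = 0, ..., k - tau - 1
tail_bound = (beta * lam) ** (tau + 1) / (1 - beta * lam)
assert dropped <= tail_bound
```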
Inspired by this observation, we define the truncated sequence $\{Y_k^\tau\}$ of $\{Y_k\}$ by $Y_k^\tau=(S_{k-\tau},...,S_k,A_k,S_{k+1})$ for all $k\geq \tau$, where $\tau$ is a \textit{fixed} non-negative integer. Note that the random process $\mathcal{M}_Y=\{Y_k^\tau\}$ is now a Markov chain, whose state-space is denoted by $\mathcal{Y}_\tau$. Similarly, we define the truncated operator $F_k^\tau:\mathbb{R}^{|\mathcal{S}|}\times\mathcal{Y}_\tau\mapsto\mathbb{R}^{|\mathcal{S}|}$ of $F_k(\cdot,\cdot)$ by $[F_k^\tau(V,s_{k-\tau},...,s_k,a_k,s_{k+1})](s) =\Gamma_4(V,s_k,a_k,s_{k+1})\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\mathbbm{1}_{\{s_{i}=s\}}+ V(s)$ for all $s\in\mathcal{S}$. Using the above notations, we can further rewrite the update equation (\ref{algo:TDlambda_update}) by \begin{align}\label{algo:TDlambda_new} V_{k+1}=\;&V_k+\epsilon \left(F_k^\tau(V_k,Y_k^\tau)-V_k\right)+\underbrace{\epsilon \left(F_k(V_k,Y_k)-F_k^\tau(V_k,Y_k^\tau)\right)}_{\text{The Error Term}}. \end{align} Now, we want to argue that when the truncation level $\tau$ is large enough, the last term on the RHS of the previous equation is negligible compared to the other two terms. In fact, we have the following result. \begin{lemma}\label{le:truncation} For all $k\geq 0$ and $\tau\in [0,k]$, denote $y=(s_0,...,s_k,a_k,s_{k+1})$ and $y_\tau=(s_{k-\tau},...,s_k,a_k,s_{k+1})$. Then the following inequality holds for all $V\in\mathbb{R}^{|\mathcal{S}|}$: $\|F_k^\tau(V,y_\tau)-F_k(V,y)\|_2\leq \frac{(\beta\lambda)^{\tau+1}}{1-\beta\lambda}(1+2\|V\|_2)$. 
\end{lemma} \begin{proof}[Proof of Lemma \ref{le:truncation}] For any $V\in\mathbb{R}^{|\mathcal{S}|}$ and $(s_0,...,s_k,a_k,s_{k+1})$, we have by definition of the operators $F_k^\tau(\cdot,\cdot)$ and $F_k(\cdot,\cdot)$ that \begin{align*} &\|F_k^\tau(V,s_{k-\tau},...,s_k,a_k,s_{k+1})-F_k(V,s_0,...,s_k,a_k,s_{k+1})\|_2^2\\ =\;&\sum_{s\in\mathcal{S}}\left[\left(\mathcal{R}(s_k,a_k)+\beta V(s_{k+1})-V(s_k)\right)\sum_{i=0}^{k-\tau-1}(\beta \lambda)^{k-i}\mathbbm{1}_{\{s_i=s\}}\right]^2\\ \leq \;&(1+2\|V\|_2)^2\sum_{s\in\mathcal{S}}\left[\sum_{i=0}^{k-\tau-1}(\beta \lambda)^{k-i}\mathbbm{1}_{\{s_i=s\}}\right]^2\\ = \;&(\beta\lambda)^{2(\tau+1)}(1+2\|V\|_2)^2\sum_{s\in\mathcal{S}}\left[\sum_{i=0}^{k-\tau-1}(\beta \lambda)^{k-\tau-1-i}\mathbbm{1}_{\{s_i=s\}}\right]^2\\ \leq \;&\frac{(\beta\lambda)^{2(\tau+1)}}{1-\beta\lambda}(1+2\|V\|_2)^2\sum_{s\in\mathcal{S}}\sum_{i=0}^{k-\tau-1}(\beta \lambda)^{k-\tau-1-i}\mathbbm{1}_{\{s_i=s\}}\tag{Cauchy Schwarz inequality}\\ =\;&\frac{(\beta\lambda)^{2(\tau+1)}}{1-\beta\lambda}(1+2\|V\|_2)^2\sum_{i=0}^{k-\tau-1}(\beta \lambda)^{k-\tau-1-i}\sum_{s\in\mathcal{S}}\mathbbm{1}_{\{s_i=s\}}\\ =\;&\frac{(\beta\lambda)^{2(\tau+1)}}{(1-\beta\lambda)^2}(1+2\|V\|_2)^2. \end{align*} The result follows by taking the square root on both sides of the previous inequality. \end{proof} Lemma \ref{le:truncation} indicates that the error term in Eq. (\ref{algo:TDlambda_new}) is indeed geometrically small. If we ignore that error term, the update equation becomes $V_{k+1}\approx V_k+\epsilon(F_k^\tau(V_k,Y_k^\tau)-V_k)$. Since the random process $\mathcal{M}_{Y}=\{Y_k^\tau\}$ is a Markov chain, once we establish the required properties for the truncated operator $F_k^\tau(\cdot,\cdot)$, our SA results become applicable. From now on, we will choose $\tau=\min\{k\geq 0:(\beta\lambda)^{k+1}\leq \epsilon\}\leq \frac{\log(1/\epsilon)}{\log(1/(\beta\lambda))}$, where $\epsilon$ is the constant stepsize we use. This implies that the error term in Eq.
(\ref{algo:TDlambda_new}) is of the order $O(\epsilon^2)$. Under this choice of $\tau$, we next investigate the properties of the operator $F_k^\tau(\cdot,\cdot)$ and the random process $\{Y_k^\tau\}$ in the following proposition. Let $\mathcal{K}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{S}|}$ be a diagonal matrix with diagonal entries $\{\kappa(s)\}_{s\in\mathcal{S}}$, and let $\kappa_{\min}=\min_{s\in\mathcal{S}}\kappa(s)$. \begin{proposition}\label{prop:TDlambda} Suppose Assumption \ref{as:TDn} is satisfied. Then we have the following results. \begin{enumerate}[topsep=-1ex,itemsep= -1ex,partopsep=1ex,parsep=1ex,label=(\arabic*)] \item For any $k\geq \tau$, the operator $F_k^\tau(\cdot,\cdot)$ satisfies $\|F_k^\tau(V_1,y)-F_k^\tau(V_2,y)\|_2\leq \frac{3}{1-\beta\lambda}\|V_1-V_2\|_2$, and $\|F_k^\tau(\bm{0},y)\|_2\leq \frac{1}{1-\beta\lambda}$ for any $V_1,V_2\in\mathbb{R}^{|\mathcal{S}|}$ and $y\in\mathcal{Y}_\tau$. \item The Markov chain $\{Y_k^\tau\}_{k\geq \tau}$ has a unique stationary distribution, denoted by $\mu$. Moreover, there exist $C_4>0$ and $\sigma_4\in (0,1)$ such that $\max_{y\in\mathcal{Y}_\tau}\|P^{k+\tau+1}(y,\cdot)-\mu(\cdot)\|_{\text{TV}}\leq C_4\sigma_4^k$ for all $k\geq 0$. \item For any $k\geq \tau$, define the expected operator $\bar{F}_k^\tau:\mathbb{R}^{|\mathcal{S}|}\mapsto\mathbb{R}^{|\mathcal{S}|}$ by $\bar{F}_k^\tau(V)=\mathbb{E}_{Y\sim \mu}[F_k^\tau(V,Y)]$. Then \begin{enumerate}[topsep=-1ex,itemsep= -1ex,partopsep=1ex,parsep=1ex,label=(\alph*)] \item $\bar{F}_k^\tau$ is a linear operator given by $\bar{F}_k^\tau(V)=\left(I-\mathcal{K}\sum_{i=0}^{\tau}(\beta \lambda P_{\pi})^{i}(I-\beta P_{\pi})\right) V+\mathcal{K}\sum_{i=0}^{\tau}(\beta \lambda P_{\pi})^{i}R_{\pi}$. \item $\bar{F}_k^\tau$ is a contraction mapping with respect to $\|\cdot\|_p$ for any $p\in [1,\infty]$, with a common contraction factor $\gamma_4=1-\kappa_{\min}\frac{(1-\beta)(1-(\beta\lambda)^{\tau+1})}{1-\beta\lambda}$.
\item $\bar{F}_k^\tau$ has a unique fixed-point $V_{\pi}$. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof}[Proof of Proposition \ref{prop:TDlambda}] \begin{enumerate}[topsep=-1ex,itemsep= -1ex,partopsep=1ex,parsep=1ex,label=(\arabic*)] \item For any $V_1,V_2\in\mathbb{R}^{|\mathcal{S}|}$ and $y\in \mathcal{Y}_\tau$, we have \begin{align*} &\|F_k^\tau(V_1,y)-F_k^\tau(V_2,y)\|_2^2\\ =\;&\sum_{s\in\mathcal{S}}\left[\left(\beta (V_1(s_{k+1})-V_2(s_{k+1}))-(V_1(s_k)-V_2(s_k))\right)\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\mathbbm{1}_{\{s_i=s\}}+V_1(s)-V_2(s)\right]^2\\ = \;&\sum_{s\in\mathcal{S}}(V_1(s)-V_2(s))^2+\sum_{s\in\mathcal{S}}\left[\left(\beta (V_1(s_{k+1})-V_2(s_{k+1}))-(V_1(s_k)-V_2(s_k))\right)\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\mathbbm{1}_{\{s_i=s\}}\right]^2\\ &+2\sum_{s\in\mathcal{S}}\left[\left(\beta (V_1(s_{k+1})-V_2(s_{k+1}))-(V_1(s_k)-V_2(s_k))\right)\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\mathbbm{1}_{\{s_i=s\}}\right](V_1(s)-V_2(s))\\ \leq \;&\|V_1-V_2\|_2^2+4\|V_1-V_2\|_2^2\sum_{s\in\mathcal{S}}\left[\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\mathbbm{1}_{\{s_i=s\}}\right]^2+2\|V_1-V_2\|_2^2\sum_{s\in\mathcal{S}}\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\mathbbm{1}_{\{s_i=s\}}\\ \leq \;&\|V_1-V_2\|_2^2+4\|V_1-V_2\|_2^2\left(\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\right)\sum_{s\in\mathcal{S}}\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\mathbbm{1}_{\{s_i=s\}}+2\left(\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\right)\|V_1-V_2\|_2^2\\ = \;&\|V_1-V_2\|_2^2+4\|V_1-V_2\|_2^2\left(\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\right)^2+2\left(\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\right)\|V_1-V_2\|_2^2\\ = \;&\|V_1-V_2\|_2^2\left[1+2\left(\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\right)\right]^2\\ \leq \;&\frac{9}{(1-\beta\lambda)^2}\|V_1-V_2\|_2^2. \end{align*} It follows that $\|F_k^\tau(V_1,y)-F_k^\tau(V_2,y)\|_2\leq \frac{3}{1-\beta\lambda}\|V_1-V_2\|_2$. 
Similarly, for any $y\in\mathcal{Y}_\tau$, we have \begin{align*} \|F_k^\tau(\bm{0},y)\|_2^2&=\sum_{s\in\mathcal{S}}\left[\mathcal{R}(s_k,a_k)\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\mathbbm{1}_{\{s_i=s\}}\right]^2\\ &\leq \sum_{s\in\mathcal{S}}\left[\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\mathbbm{1}_{\{s_i=s\}}\right]^2\\ &\leq \left(\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\right)^2\tag{Cauchy Schwarz}\\ &\leq \frac{1}{(1-\beta\lambda)^2}. \end{align*} It follows that $\|F_k^\tau(\bm{0},y)\|_2\leq \frac{1}{1-\beta\lambda}$. \item The proof is identical to that of Proposition \ref{prop:Q-learning} (2). \item \begin{enumerate}[topsep=-1ex,itemsep= -1ex,partopsep=1ex,parsep=1ex,label=(\alph*)] \item For any $V\in\mathbb{R}^{|\mathcal{S}|}$ and $s\in\mathcal{S}$, we have \begin{align*} [\bar{F}_k^\tau(V)](s)=\;&\mathbb{E}_{Y\sim \mu}\left[[F_k^\tau(V,Y)](s)\right]\\ =\;&\mathbb{E}_{Y\sim \mu}\left[\left(\mathcal{R}(S_k,A_k)+\beta V(S_{k+1})-V(S_k)\right)\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\mathbbm{1}_{\{S_i=s\}}\right]+V(s)\\ =\;&\mathbb{E}_{Y\sim \mu}\left[\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\mathbbm{1}_{\{S_i=s\}}\mathbb{E}\left[\left(\mathcal{R}(S_k,A_k)+\beta V(S_{k+1})-V(S_k)\right)\;\middle|\;S_k,S_{k-1},...,S_0\right]\right]+V(s)\\ =\;&\mathbb{E}_{Y\sim \mu}\left[\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\mathbbm{1}_{\{S_i=s\}}(R_{\pi}(S_k)+\beta [P_{\pi} V](S_k)-V(S_k))\right]+V(s)\\ =\;&\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\sum_{s_0\in\mathcal{S}}\kappa(s_0)P_{\pi}^i(s_0,s)\sum_{s'\in\mathcal{S}}P_{\pi}^{k-i}(s,s')(R_{\pi}(s')+\beta [P_{\pi} V](s')-V(s')) +V(s)\\ =\;&\kappa(s)\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}\sum_{s'\in\mathcal{S}}P_{\pi}^{k-i}(s,s')(R_{\pi}(s')+\beta [P_{\pi} V](s')-V(s')) +V(s)\\ =\;&\kappa(s)\sum_{i=k-\tau}^{k}(\beta \lambda)^{k-i}[P_{\pi}^{k-i}(R_{\pi}+\beta P_{\pi} V-V)](s)+V(s).
\end{align*} It follows that \begin{align*} \bar{F}_k^\tau(V)&=\mathcal{K}\sum_{i=k-\tau}^{k}(\beta \lambda P_{\pi})^{k-i}(R_{\pi}+\beta P_{\pi} V-V)+V\\ &=\mathcal{K}\sum_{i=0}^{\tau}(\beta \lambda P_{\pi})^{i}(R_{\pi}+\beta P_{\pi} V-V)+V\\ &=\left[I-\mathcal{K}\sum_{i=0}^{\tau}(\beta \lambda P_{\pi})^{i}( I-\beta P_{\pi})\right]V+\mathcal{K}\sum_{i=0}^{\tau}(\beta \lambda P_{\pi})^{i}R_{\pi}. \end{align*} \item For any $V_1,V_2\in\mathbb{R}^{|\mathcal{S}|}$ and $p\in [1,\infty]$, we have \begin{align*} \|\bar{F}_k^\tau(V_1)-\bar{F}_k^\tau(V_2)\|_p&=\left\|\left[I-\mathcal{K}\sum_{i=0}^{\tau}(\beta \lambda P_{\pi})^{i}( I-\beta P_{\pi})\right](V_1-V_2)\right\|_p\\ &\leq \left\|I-\mathcal{K}\sum_{i=0}^{\tau}(\beta \lambda P_{\pi})^{i}( I-\beta P_{\pi})\right\|_p\|V_1-V_2\|_p. \end{align*} Denote $G=I-\mathcal{K}\sum_{i=0}^{\tau}(\beta \lambda P_{\pi})^{i}( I-\beta P_{\pi})$. It remains to provide an upper bound on $\|G\|_p$. Since \begin{align*} G&=I-\mathcal{K}\sum_{i=0}^{\tau}(\beta \lambda P_{\pi})^{i}+\mathcal{K}\sum_{i=0}^{\tau}(\beta \lambda P_{\pi})^{i}\beta P_{\pi}\\ &=I-\mathcal{K}-\mathcal{K}\sum_{i=1}^{\tau}(\beta \lambda P_{\pi})^{i}+\mathcal{K}\sum_{i=0}^{\tau}(\beta \lambda P_{\pi})^{i}\beta P_{\pi}\\ &=I-\mathcal{K}-\mathcal{K}\sum_{i=0}^{\tau-1}(\beta \lambda P_{\pi})^{i+1}+\mathcal{K}\sum_{i=0}^{\tau}(\beta \lambda P_{\pi})^{i}\beta P_{\pi}\\ &=I-\mathcal{K}+\mathcal{K}\sum_{i=0}^{\tau-1}(\beta \lambda P_{\pi})^{i}\beta P_{\pi}(1-\lambda)+\mathcal{K}(\beta \lambda P_{\pi})^{\tau}\beta P_{\pi}, \end{align*} the matrix $G$ has non-negative entries.
Therefore, we have \begin{align*} \|G\|_\infty=\|G\bm{1}\|_\infty=\left\|\bm{1}-\kappa\frac{(1-\beta)(1-(\beta\lambda)^{\tau+1})}{1-\beta\lambda}\right\|_\infty=1-\kappa_{\min}\frac{(1-\beta)(1-(\beta\lambda)^{\tau+1})}{1-\beta\lambda} \end{align*} and \begin{align*} \|G\|_1=\|\bm{1}^\top G\|_\infty=\left\|\bm{1}^\top-\kappa^\top\frac{(1-\beta)(1-(\beta\lambda)^{\tau+1})}{1-\beta\lambda}\right\|_\infty=1-\kappa_{\min}\frac{(1-\beta)(1-(\beta\lambda)^{\tau+1})}{1-\beta\lambda}. \end{align*} It then follows from Lemma \ref{le:matrix} that \begin{align*} \|G\|_p\leq \|G\|_1^{1/p}\|G\|_\infty^{1-1/p}\leq 1-\kappa_{\min}\frac{(1-\beta)(1-(\beta\lambda)^{\tau+1})}{1-\beta\lambda}. \end{align*} Hence the operator $\bar{F}_k^\tau(\cdot)$ is a contraction with respect to $\|\cdot\|_p$, with a common contraction factor $\gamma_4= 1-\kappa_{\min}\frac{(1-\beta)(1-(\beta\lambda)^{\tau+1})}{1-\beta\lambda}$. \item It is enough to show that $V_{\pi}$ is a fixed-point of $\bar{F}_k^\tau(\cdot)$; the uniqueness follows from $\bar{F}_k^\tau(\cdot)$ being a contraction. Using the Bellman equation $R_{\pi}+\beta P_{\pi} V_{\pi}-V_{\pi}=0$, we have \begin{align*} \bar{F}_k^\tau(V_{\pi})=\mathcal{K}\sum_{i=0}^{\tau}(\beta \lambda P_{\pi})^{i}(R_{\pi}+\beta P_{\pi} V_{\pi}-V_{\pi})+V_{\pi}=V_{\pi}. \end{align*} \end{enumerate} \end{enumerate} \end{proof} \subsection{Proof of Theorem \ref{thm:TDlambda}}\label{pf:thm:TDlambda} We will exploit the $\|\cdot\|_2$-contraction property of the operator $\bar{F}_k^\tau(\cdot)$ provided in Proposition \ref{prop:TDlambda}. Let $M(x)=\|x\|_2^2$ be our Lyapunov function.
Using the update equation (\ref{algo:TDlambda_new}), we have for all $k\geq \tau$: \begin{align} &\|V_{k+1}-V_{\pi}\|_2^2\nonumber\\ =\;&\|V_k-V_\pi\|_2^2+\underbrace{2\epsilon(V_k-V_\pi)^\top \left(\bar{F}_k^\tau(V_k)-V_k\right)}_{\circled{1}}+\underbrace{2\epsilon(V_k-V_\pi)^\top \left(F_k^\tau(V_k,Y_k^\tau)-\bar{F}_k^\tau(V_k)\right)}_{\circled{2}}\nonumber\\ &+\underbrace{\epsilon^2\|F_k^\tau(V_k,Y_k^\tau)-V_k\|_2^2}_{\circled{3}}+\underbrace{\epsilon^2\|F_k(V_k,Y_k)-F_k^\tau(V_k,Y_k^\tau)\|_2^2}_{\circled{4}}\nonumber\\ &+\underbrace{2\epsilon(V_k-V_\pi)^\top \left(F_k(V_k,Y_k)-F_k^\tau(V_k,Y_k^\tau)\right)}_{\circled{5}}+ \underbrace{2\epsilon\left(F_k^\tau(V_k,Y_k^\tau)-V_k\right)^\top\left(F_k(V_k,Y_k)-F_k^\tau(V_k,Y_k^\tau)\right)}_{\circled{6}}.\label{eq:TDlambda:composition} \end{align} The terms $\circled{1}$, $\circled{2}$, and $\circled{3}$ correspond to the terms $T_1$, $T_3$, and $T_4$ in Eq. (\ref{eq:composition1}), and hence can be controlled in the same way as provided in Lemmas \ref{le:T1}, \ref{le:T3}, and \ref{le:T4}. The proof is omitted. As for the terms $\circled{4}$, $\circled{5}$, and $\circled{6}$, we can use Lemma \ref{le:truncation} along with the Cauchy Schwarz inequality to bound them, which gives the following result. \begin{lemma}\label{le:TDlambda} The following inequalities hold: \begin{enumerate}[topsep=-1ex,itemsep= -1ex,partopsep=1ex,parsep=1ex,label=(\arabic*)] \item $\circled{4}\leq \frac{8\epsilon^2}{(1-\beta\lambda)^2}\|V_k-V_\pi\|_2^2+\frac{2\epsilon^2}{(1-\beta\lambda)^2}(4\|V_\pi\|_2+1)^2$ for all $k\geq \tau$. \item $\circled{5}\leq \frac{16\epsilon ^2}{(1-\beta\lambda)}\|V_k-V_\pi\|_2^2+\frac{4\epsilon ^2}{(1-\beta\lambda)}(4\|V_\pi\|_2+1)^2$ for all $k\geq \tau$. \item $\circled{6}\leq \frac{64\epsilon^2}{(1-\beta\lambda)^2}\|V_k-V_\pi\|_2^2+\frac{4\epsilon^2}{(1-\beta\lambda)^2}(4\|V_\pi\|_2+1)^2$ for all $k\geq \tau$.
\end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma \ref{le:TDlambda}] \begin{enumerate}[topsep=-1ex,itemsep= -1ex,partopsep=1ex,parsep=1ex,label=(\arabic*)] \item For all $k\geq \tau$, we have \begin{align*} \circled{4}&=\epsilon^2\|F_k(V_k,Y_k)-F_k^\tau(V_k,Y_k^\tau)\|_2^2\\ &\leq \frac{\epsilon^2(\beta\lambda)^{2(\tau+1)}}{(1-\beta\lambda)^2}(2\|V_k\|_2+1)^2\tag{Lemma \ref{le:truncation}}\\ &\leq \frac{\epsilon^4}{(1-\beta\lambda)^2}(2\|V_k-V_\pi\|_2+2\|V_\pi\|_2+1)^2\\ &\leq \frac{8\epsilon^2}{(1-\beta\lambda)^2}\|V_k-V_\pi\|_2^2+\frac{2\epsilon^2}{(1-\beta\lambda)^2}(4\|V_\pi\|_2+1)^2. \end{align*} \item For all $k\geq \tau$, we have \begin{align*} \circled{5}&=2\epsilon(V_k-V_\pi)^\top \left(F_k(V_k,Y_k)-F_k^\tau(V_k,Y_k^\tau)\right)\\ &\leq 2\epsilon \|V_k-V_\pi\|_2\|F_k(V_k,Y_k)-F_k^\tau(V_k,Y_k^\tau)\|_2\\ &\leq \frac{2\epsilon (\beta\lambda)^{\tau+1}}{(1-\beta\lambda)}\|V_k-V_\pi\|_2(2\|V_k\|_2+1)\tag{Lemma \ref{le:truncation}}\\ &\leq \frac{2\epsilon (\beta\lambda)^{\tau+1}}{(1-\beta\lambda)}(2\|V_k-V_\pi\|_2+2\|V_\pi\|_2+1)^2\\ &\leq \frac{16\epsilon (\beta\lambda)^{\tau+1}}{(1-\beta\lambda)}\|V_k-V_\pi\|_2^2+\frac{4\epsilon (\beta\lambda)^{\tau+1}}{(1-\beta\lambda)}(4\|V_\pi\|_2+1)^2\\ &\leq \frac{16\epsilon ^2}{(1-\beta\lambda)}\|V_k-V_\pi\|_2^2+\frac{4\epsilon ^2}{(1-\beta\lambda)}(4\|V_\pi\|_2+1)^2.\tag{The choice of $\tau$}
\end{align*} \item For all $k\geq \tau$, we have \begin{align*} \circled{6}&=2\epsilon\left(F_k^\tau(V_k,Y_k^\tau)-V_k\right)^\top\left(F_k(V_k,Y_k)-F_k^\tau(V_k,Y_k^\tau)\right)\\ &\leq 2\epsilon\|F_k^\tau(V_k,Y_k^\tau)-V_k\|_2\|F_k(V_k,Y_k)-F_k^\tau(V_k,Y_k^\tau)\|_2\\ &\leq \frac{2\epsilon(\beta\lambda)^{\tau+1}}{1-\beta\lambda}\left(\frac{3}{1-\beta\lambda}\|V_k\|_2+\frac{1}{1-\beta\lambda}+\|V_k\|_2\right)\left(2\|V_k\|_2+1\right)\\ &\leq \frac{2\epsilon(\beta\lambda)^{\tau+1}}{(1-\beta\lambda)^2}(4\|V_k\|_2+1)(2\|V_k\|_2+1)\\ &\leq \frac{2\epsilon(\beta\lambda)^{\tau+1}}{(1-\beta\lambda)^2}(4\|V_k-V_\pi\|_2+4\|V_\pi\|_2+1)^2\\ &\leq \frac{64\epsilon(\beta\lambda)^{\tau+1}}{(1-\beta\lambda)^2}\|V_k-V_\pi\|_2^2+\frac{4\epsilon(\beta\lambda)^{\tau+1}}{(1-\beta\lambda)^2}(4\|V_\pi\|_2+1)^2\\ &\leq \frac{64\epsilon^2}{(1-\beta\lambda)^2}\|V_k-V_\pi\|_2^2+\frac{4\epsilon^2}{(1-\beta\lambda)^2}(4\|V_\pi\|_2+1)^2.\tag{The choice of $\tau$} \end{align*} \end{enumerate} \end{proof} Substituting the upper bounds for the terms $\circled{1}$ through $\circled{6}$ into Eq. (\ref{eq:TDlambda:composition}) yields the one-step contractive inequality for the TD$(\lambda)$ algorithm. Repeatedly applying that inequality gives the desired finite-sample convergence bounds.
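The norm computation for $G$ can be checked numerically on a toy chain (the transition matrix, the diagonal entries $\kappa$, and the parameter values below are made up for illustration):

```python
import numpy as np

# Hypothetical 3-state check that G = I - K * sum_{i=0}^{tau} (beta*lam*P)^i (I - beta*P)
# has non-negative entries and max row sum equal to
# gamma_4 = 1 - kappa_min * (1-beta) * (1-(beta*lam)**(tau+1)) / (1-beta*lam).
beta, lam, tau = 0.9, 0.5, 8
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])      # row-stochastic P_pi (made up)
kappa = np.array([0.5, 0.3, 0.2])    # diagonal of K (made up)

I = np.eye(3)
S = sum(np.linalg.matrix_power(beta * lam * P, i) for i in range(tau + 1))
G = I - np.diag(kappa) @ S @ (I - beta * P)
gamma4 = 1 - kappa.min() * (1 - beta) * (1 - (beta * lam) ** (tau + 1)) / (1 - beta * lam)

assert np.all(G >= -1e-12)                                # non-negative entries
assert abs(np.abs(G).sum(axis=1).max() - gamma4) < 1e-10  # ||G||_inf equals gamma_4
```

Since $G$ is entrywise non-negative, its induced $\infty$-norm is its maximum row sum, which the identity $G\bm{1}=\bm{1}-\mathcal{K}S(1-\beta)\bm{1}$ pins down exactly.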
{"config": "arxiv", "file": "2102.01567/AppendixE.tex"}
TITLE: When does a topological group embed topologically in its group of homeomorphisms? QUESTION [3 upvotes]: Let $X$ be a topological group. $X$ acts freely on itself by left multiplication; this gives us an injective group homomorphism $\Phi: X\rightarrow \operatorname{Homeo} X$. Under what conditions is $\Phi$ also a homeomorphism to its image, in the compact-open topology on $\operatorname{Homeo} X$? (Is this always true? True if $X$ is locally compact? Is it at least true if $X$ is a Lie group? Etc. Ideally, what's the most general condition and what's the argument?) REPLY [2 votes]: The map $\Phi:G\to C(G,G)$ is always continuous. Indeed, if $K,U$ are compact and open in $G$ respectively, and $g\in\Phi^{-1}([K,U])$, where $[K,U]$ is the set of all continuous maps $f:G\to G$ such that $f(K)\subset U$, then for any $k\in K$ we can choose open neighborhoods $V_k(g)$ and $V(k)$ of $g$ and $k$ respectively such that $V_k(g)\cdot V(k)\subset U$. We then extract a finite open cover $V(k_1),\dots,V(k_n)$ of $K$, and set $$V=\bigcap_{i=1}^n V_{k_i}(g)$$ This is an open neighborhood of $g$, and for any $g'\in V$, $g'K\subset U$, i.e. $V\subset\Phi^{-1}([K,U])$. Suppose $G$ is a locally compact and locally connected topological group, so that the homeomorphisms of $G$ equipped with the compact open topology form a topological group. By a homogeneity argument using the fact that $\mathrm{Homeo}(G)$ is a topological group, it is enough to show that $\Phi$ is an open mapping at $1$. Let $V$ be an open neighborhood of $1$ in $G$. It is enough to show that there are a compact set $K$ and an open set $U$ such that $$\Phi(1)=\mathrm{id}_G\in\Phi(G)\cap[K,U]\subset\Phi(V)$$ Just choose $U$ and $K$ such that $1\in K\subset U$ and $UU^{-1}\subset V$. Then $\Phi(G)\cap[K,U]$ is open in the induced topology on $\Phi(G)$, and if $\Phi(g)\in\Phi(G)\cap[K,U]$ for some $g\in G$, then $gK\subset U$ so that $g\in UK^{-1}\subset V$.
{"set_name": "stack_exchange", "score": 3, "question_id": 1089711}
\begin{document} \title[Hausdorff dimensions of level sets related to moving digit means]{Hausdorff dimensions of level sets related to moving digit means} \author{Haibo Chen} \address{School of Statistics and Mathematics, Zhongnan University of Economics and Law, Wu\-han, Hubei, 430073, China} \email{hiboo\_chen@sohu.com} \begin{abstract} In this paper, we will introduce and study the lower moving digit mean $\b{\it M}(x)$ and the upper moving digit mean $\bar{M}(x)$ of $x\in[0,1]$ in $p$-adic expansion, where $p\geq2$ is an integer. Moreover, the Hausdorff dimension of the level set \[B(\alpha,\beta)=\left\{x\in [0,1]\colon \b{\it M}(x)=\alpha,\bar{M}(x)=\beta\right\}\] is determined for each pair of numbers $\alpha$ and $\beta$ satisfying $0\leq\alpha\leq\beta\leq p-1$. \end{abstract} \subjclass[2010]{Primary 11K55; Secondary 28A80.} \keywords{moving digit mean, Hausdorff dimension, entropy function, Moran set.} \maketitle \section{Introduction} To determine the Hausdorff dimensions of sets of numbers in which the distributions of the digits have specific characteristics for some representation is a fundamental and important problem in number theory and multifractal analysis. It has a long history and there are a great many classic works on this topic, such as Besicovitch~\cite{Be}, Eggleston~\cite{E}, Billingsley~\cite{Bi}, Barreira et al.~\cite{BSS} and Fan et al.~\cite{FF,FFW}, in which the sets are associated with frequencies of digits of numbers in different expansions, long-term time averages and ergodic limits, etc. Actually, the expressions in these sets share a common feature, i.e., they are usually described by the arithmetic mean. Different from this, in this paper we would like to study the moving digit means of numbers and investigate the corresponding level sets related to them in the unit interval. The moving digit mean is derived from the concept of tangential dimension at a point of a measure studied by Guido and Isola~\cite{GI1,GI2}.
To be precise, if the measure is a Bernoulli measure, then the tangential dimension is linearly related to the moving digit mean (see~\cite{CWX}). More significantly, when we explore the multifractal behavior at a point of a measure, the tangential dimension is more sensitive than the local dimensions of measures, which enables the tangential dimension to provide more information than the local dimension. Thus, compared with the arithmetic mean, the moving digit mean used in the description of level sets can also provide more information on the distributions of digits of numbers. In the following, we introduce the corresponding concepts and notations. Let $p\geq 2$ be an integer and $A=\{0,1,\ldots p-1\}$ be the alphabet with $p$ elements. It is known that each number $x\in I= [0,1]$ can be expanded into an infinite non-terminating expression \[\sum_{n=1}^\infty\frac{x_n}{p^n}=0.x_1x_2x_3\ldots,\quad\text{where}\ x_n\in A,\ n\geq1,\] which is called the $p$-adic expansion of $x$. Denote by $S_n(x)=\sum_{i=1}^nx_i$, $n\geq1$, the $n$-th partial sum of the digits of $x$. Let $T\colon I\to I$ be the shift operator defined by \[Tx=0.x_2x_3x_4\ldots,\quad\text{for any}\ x=0.x_1x_2x_3\ldots\in I.\] Let $x\in I$, we call \begin{align} \b{\it M}(x)=\lim_{n\to\infty}\varliminf_{m\to\infty}\frac{S_n(T^mx)}{n}\quad\text{and}\quad\bar{M}(x)=\lim_{n\to\infty}\varlimsup_{m\to\infty}\frac{S_n(T^mx)}{n} \end{align} \emph{the lower and upper moving digit means} of $x$ respectively. \begin{rem} Let $n\geq1$. Denote by \[\b{\it M}_n(x)=\varliminf_{m\to\infty}\frac{S_n(T^mx)}{n}\quad\text{and}\quad\bar{M}_n(x)=\varlimsup_{m\to\infty}\frac{S_n(T^mx)}{n}\] the $n$-th lower and upper moving digit means respectively.
It is easy to check that, for any $x\in I$, the two sequences $\{-n\b{\it M}_n(x)\}_{n\geq1}$ and $\{n\bar{M}_n(x)\}_{n\geq1}$ are both subadditive since they satisfy the inequalities: \[-(m+n)\b{\it M}_{m+n}(x)\leq-m\b{\it M}_m(x)-n\b{\it M}_n(x)\quad\text{and}\quad(m+n)\bar{M}_{m+n}(x)\leq m\bar{M}_m(x)+n\bar{M}_n(x),\quad m,n\geq1.\] Thus, by Fekete's subadditive lemma, the limits of $\b{\it M}_n(x)$ and $\bar{M}_n(x)$ always exist as $n\to\infty$. So, $\b{\it M}(x)$ and $\bar{M}(x)$ are both well defined. \end{rem} To investigate the influence of the lower and upper moving digit means on the distribution of digits of numbers, define the level set \begin{align} B(\alpha,\beta)=\{x\in I\colon \b{\it M}(x)=\alpha,\bar{M}(x)=\beta\},\quad\text{where}\ 0\leq\alpha\leq\beta\leq p-1, \end{align} which may be called the \emph{Banach set} with lower level $\alpha$ and upper level $\beta$. In this paper, we would like to determine the Hausdorff dimension of the set $B(\alpha,\beta)$ for any $p\geq2$, which is a non-trivial and meaningful generalization of the work in \cite{CWX}. First, we give the definition of the $p$-adic entropy function below. Let $0\leq\alpha\leq p-1$ and $\delta>0$. Denote \[H(\alpha,n,\delta)=\left\{x_1x_2\cdots x_n\in A^n\colon n(\alpha-\delta)<\sum_{i=1}^nx_i<n(\alpha+\delta)\right\}\] and $h(\alpha,n,\delta)=\card H(\alpha,n,\delta)$. Here and in the sequel, the symbol $\card$ denotes the cardinality of a finite set. Furthermore, define the $p$-adic entropy function as \begin{align}\label{definiton h alpha} h(\alpha)=\lim_{\delta\to0}\varlimsup_{n\to\infty}\frac{\log h(\alpha,n,\delta)}{(\log p)n},\quad 0\leq\alpha\leq p-1. \end{align} It is evident that $0\leq h(\alpha)\leq1$ for any $0\leq\alpha\leq p-1$. Denote by $\dim_H$ the Hausdorff dimension of a set. Then we have \begin{thm}\label{theorem main theorem} For any $0\leq\alpha\leq\beta\leq p-1$, we have \begin{align}\label{formula main} \dim_HB(\alpha,\beta)=\sup_{\alpha\leq t\leq\beta}h(t).
\end{align} \end{thm} In particular, let $0\leq\alpha\leq p-1$ and write \begin{align} B(\alpha):=B(\alpha,\alpha)=\{x\in I\colon \b{\it M}(x)=\bar{M}(x)=\alpha\}. \end{align} Then we immediately obtain \begin{cor}\label{corollary main corollary} For any $0\leq\alpha\leq p-1$, we have $\dim_HB(\alpha)=h(\alpha)$. \end{cor} Note that we will discuss another set $B^\ast(\alpha)$ which is related to the moving digit mean $M(x)$ at the end of this paper. As a result, $B(\alpha)$ is different from, and larger than, $B^\ast(\alpha)$ in the sense of Hausdorff dimension. Moreover, in the case $p=2$, we can give an explicit expression for the binary entropy function $h_2$ by the following calculation: \begin{align} h_2(\alpha)=\lim_{\delta\to0}\varlimsup_{n\to\infty}\frac{\log \sum_{i=[n(\alpha-\delta)]+1}^{[n(\alpha+\delta)]}\binom{n}{i}}{(\log 2)n}=\frac{-\alpha\log\alpha-(1-\alpha)\log(1-\alpha)}{\log2}, \end{align} where $\binom ni$ is the binomial coefficient. Thus, the following conclusion can be easily obtained from Theorem \ref{theorem main theorem} and Corollary \ref{corollary main corollary}. \begin{cor} Let $p=2$. If $0\leq\alpha\leq\beta\leq 1$, then $\dim_HB(\alpha,\beta)=\sup_{\alpha\leq t\leq\beta}h_2(t)$. If $0\leq\alpha\leq1$, then $\dim_HB(\alpha)=h_2(\alpha)$. \end{cor} In the present paper, the reader is assumed to be familiar with the definitions and basic properties of Hausdorff dimension and Hausdorff measure. For this and related theory, one can refer to Falconer's book~\cite{F97}. The structure of this paper is as follows. In the next section, the concepts of $p$-adic lower and upper entropy functions are introduced, and their relations to the $p$-adic entropy function are also shown. In Section 3, the concepts of Besicovitch sets in $p$-adic expansion are introduced. Moreover, their Hausdorff dimensions are also determined, which generalizes an early work of Besicovitch.
In Section 4, we will introduce some special Moran sets and determine their Hausdorff dimension for later use. The last section is devoted to the proof of Theorem \ref{theorem main theorem}; some further discussion is also presented there. \section{$p$-adic entropy function} In this section, we will introduce the definitions of the $p$-adic lower entropy function $\b{\it h}(\alpha)$ and the $p$-adic upper entropy function $\bar{h}(\alpha)$, and then present some properties of the two functions and the $p$-adic entropy function $h(\alpha)$. Let $0\leq\alpha\leq p-1$, $n\geq1$ and $\delta>0$. Denote \[\bar{H}(\alpha,n,\delta)=\left\{x_1x_2\cdots x_n\in A^n\colon\sum_{i=1}^nx_i<n(\alpha+\delta)\right\}\] and $\bar{h}(\alpha,n,\delta)=\card\bar{H}(\alpha,n,\delta)$. Define the $p$-adic upper entropy function \begin{align}\label{definition bar h} \bar{h}(\alpha)=\lim_{\delta\to0}\varlimsup_{n\to\infty}\frac{\log\bar{h}(\alpha,n,\delta)}{(\log p)n}. \end{align} Conversely, denote \[\b{\it H}(\alpha,n,\delta)=\left\{x_1x_2\cdots x_n\in A^n\colon\sum_{i=1}^nx_i>n(\alpha-\delta)\right\}\] and $\b{\it h}(\alpha,n,\delta)=\card\b{\it H}(\alpha,n,\delta)$. Define the $p$-adic lower entropy function \begin{align}\label{definition b h} \b{\it h}(\alpha)=\lim_{\delta\to0}\varliminf_{n\to\infty}\frac{\log\b{\it h}(\alpha,n,\delta)}{(\log p)n}. \end{align} Note that we have $0\leq\b{\it h}(\alpha),\bar{h}(\alpha)\leq1$ and the limits in \eqref{definition bar h} and \eqref{definition b h} both exist since $\bar{h}(\alpha,n,\delta)$ and $\b{\it h}(\alpha,n,\delta)$ are increasing in $\delta>0$. Moreover, for the two functions $\b{\it h}(\alpha)$ and $\bar{h}(\alpha)$, we have the following relation between them. \begin{thm}\label{theorem barh bh} $\bar{h}(p-1-\alpha)=\b{\it h}(\alpha)$.
\end{thm} \begin{proof} For any $n$-word $x_1\cdots x_n\in A^n$, it is easy to check that the word $(p-1-x_1)\cdots(p-1-x_n)\in\b{\it H}(\alpha,n,\delta)$ if and only if the word $x_1\cdots x_n\in\bar{H}(p-1-\alpha,n,\delta)$. So, there is a one-to-one correspondence between the two sets $\b{\it H}(\alpha,n,\delta)$ and $\bar{H}(p-1-\alpha,n,\delta)$. \end{proof} Recall the definition of the $p$-adic entropy function $h(\alpha)$ in \eqref{definiton h alpha}. In fact, we have \begin{align}\label{definiton h alpha ==} h(\alpha)=\lim_{\delta\to0}\varliminf_{n\to\infty}\frac{\log h(\alpha,n,\delta)}{(\log p)n}=\lim_{\delta\to0}\varlimsup_{n\to\infty}\frac{\log h(\alpha,n,\delta)}{(\log p)n} \end{align} according to Proposition 4.2 in~\cite{TWWX}. Moreover, the $p$-adic entropy function enjoys the following properties. \begin{thm}\label{theorem enumerate 1} For the function $h(\alpha)$ defined on $[0,p-1]$, we have \noindent \begin{enumerate} \item\label{halpha 1} $h(0)=h(p-1)=0$; \item\label{halpha 2} $h(\alpha)$ is concave and continuous on $[0,p-1]$; \item\label{halpha 3} $h(\alpha)$ is symmetric with respect to the line $\alpha=(p-1)/2$. That is, we have $h(\alpha)=h(p-1-\alpha)$ for any $0\leq\alpha\leq p-1$. It follows that $h(\alpha)$ is increasing on $[0,(p-1)/2]$ and decreasing on $[(p-1)/2,p-1]$; \item\label{halpha 4} If $0\leq\alpha\leq(p-1)/2$, then $\bar{h}(\alpha)=h(\alpha)$; if $(p-1)/2\leq\alpha\leq p-1$, then $\b{\it h}(\alpha)=h(\alpha)$. \end{enumerate} \end{thm} \begin{proof} (1) The conclusion $h(0)=0$ follows from the estimation \begin{align}\label{inequality h zero} h(0)\leq\lim_{\delta\to0}\varliminf_{n\to\infty}\frac{\log\frac{n^{[n\delta]}}{[n\delta]!}}{(\log p)n}\leq\lim_{\delta\to0}\varliminf_{n\to\infty}\frac{\log\frac{n^{n\delta}}{(n\delta-1)!}}{(\log p)n} =\lim_{\delta\to0}\frac{\delta\log\frac{e}{\delta}}{\log p}=0. \end{align} Here, the third relation follows from the well-known Stirling approximation, where we write $y!=y(y-1)\cdots(y-[y])$ for $y>0$.
The other conclusion $h(p-1)=0$ can be deduced similarly. (2) Let $m\geq1$ and take $m$ $n$-words $X_1,X_2,\ldots,X_m\in H(\alpha,n,\delta)$. It is obvious that the concatenation of these words satisfies $X_1X_2\cdots X_m\in H(\alpha,nm,\delta)$. Thus, \[\big(h(\alpha,n,\delta)\big)^m\leq h(\alpha,nm,\delta).\] Let $\alpha,\beta\in (0,p-1)$ and $s$, $t$ be two positive integers. Then \[\big(h(\alpha,n,\delta)\big)^s\big(h(\beta,n,\delta)\big)^t\leq h(\alpha,ns,\delta)h(\beta,nt,\delta)\leq h\left(\frac{s\alpha+t\beta}{s+t},n(s+t),\delta\right).\] Hence, \[\frac{s}{s+t}h(\alpha)+\frac{t}{s+t}h(\beta)\leq h\left(\frac{s}{s+t}\alpha+\frac{t}{s+t}\beta\right).\] Since $s$ and $t$ are arbitrary positive integers, we have \[\lambda h(\alpha)+(1-\lambda)h(\beta)\leq h(\lambda\alpha+(1-\lambda)\beta)\] for any rational $0<\lambda<1$. That is, the function $h(\alpha)$ is concave over rational convex combinations. By the definition of $h(\alpha)$, for any $\eta>0$, there exists $\delta_0>0$ such that \begin{align}\label{inequality h alpha} \varlimsup_{n\to\infty}\frac{\log h(\alpha,n,\delta)}{(\log p)n}<h(\alpha)+\frac{\eta}{2} \end{align} for any $0<\delta<\delta_0$. Take a number $\gamma$ which satisfies $|\alpha-\gamma|<\delta/2$. Then, we have $H(\gamma,n,\delta/2)\subset H(\alpha,n,\delta)$. It yields that $h(\gamma,n,\delta/2)\leq h(\alpha,n,\delta)$. Moreover, by the definition of $h(\gamma)$, there exists some $\delta_1$ satisfying $0<\delta_1<\delta_0$ such that \[h(\gamma)\leq \varlimsup_{n\to\infty}\frac{\log h(\gamma,n,\delta/2)}{(\log p)n}+\frac{\eta}{2}\leq\varlimsup_{n\to\infty}\frac{\log h(\alpha,n,\delta)}{(\log p)n}+\frac{\eta}{2}\] for any $0<\delta<\delta_1$. This, together with \eqref{inequality h alpha}, yields that $h(\gamma)<h(\alpha)+\eta$ if $|\alpha-\gamma|<\delta/2$, where $0<\delta<\delta_1$. It implies that the function $h(\alpha)$ is upper semi-continuous. So, $h(\alpha)$ is concave, and hence continuous, on $(0,p-1)$.
Next, we show that $h(\alpha)$ is continuous at $\alpha=0$ and $\alpha=p-1$. Since $h(\alpha,n,\delta)\leq\bar{h}(\alpha,n,\delta)$, similar to the estimation \eqref{inequality h zero} we have \begin{align*} \lim_{\alpha\to0^+}h(\alpha)&\leq\lim_{\alpha\to0^+}\lim_{\delta\to0}\varliminf_{n\to\infty}\frac{\log\big(n^{[n(\alpha+\delta)]}/[n(\alpha+\delta)]!\big)}{(\log p)n}\\ &\leq\lim_{\alpha\to0^+}\lim_{\delta\to0}\frac{(\alpha+\delta)\log\frac{e}{(\alpha+\delta)}}{\log p}=\lim_{\alpha\to0^+}\frac{\alpha\log\frac{e}{\alpha}}{\log p}=0. \end{align*} It follows that $\lim_{\alpha\to0^+}h(\alpha)=0=h(0)$. Thus, $h(\alpha)$ is continuous at $\alpha=0$. Similarly, the continuity of $h(\alpha)$ at $\alpha=p-1$ holds as well. Thus, $h(\alpha)$ is concave and continuous on $[0,p-1]$. (3) The proof of the property $h(\alpha)=h(p-1-\alpha)$ is similar to the discussion for Theorem~\ref{theorem barh bh}. The monotonicity of $h(\alpha)$ follows from the concavity of $h(\alpha)$ in property \eqref{halpha 2}. (4) The first part follows from the increasing property of $h(\alpha)$ in \eqref{halpha 3} and the inequality \[h(\alpha,n,\delta)\leq\bar{h}(\alpha,n,\delta)\leq2\left(\left[\frac{\alpha+\delta}{2\delta}\right]+1\right)h(\alpha,n,\delta),\] where $\alpha\leq(p-1)/2$. The second part can be dealt with in a similar way. \end{proof} \section{Besicovitch sets} In this section, we will determine the Hausdorff dimensions of the Besicovitch sets $\b{\it E}(\alpha)$ and $\bar{E}(\alpha)$, and present a further property of the $p$-adic entropy function for later use. Let $x\in I$. Denote by \[\b{\it A}(x)=\varliminf_{n\to\infty}\frac{S_n(x)}{n}\quad\text{and}\quad \bar{A}(x)=\varlimsup_{n\to\infty}\frac{S_n(x)}{n}\] the lower and upper digit means of $x$, respectively. Let $0\leq\alpha\leq p-1$.
Define the level sets \begin{align}\label{E alpha} \b{\it E}(\alpha)=\left\{x\in I\colon \b{\it A}(x)\geq\alpha\right\}\quad\text{and}\quad\bar{E}(\alpha)=\left\{x\in I\colon \bar{A}(x)\leq\alpha\right\}, \end{align} which are called the \emph{Besicovitch sets} in this paper. Note that the Hausdorff dimensions of $\b{\it E}(\alpha)$ and $\bar{E}(\alpha)$ were determined by Besicovitch~\cite{Be} in the case of the binary expansion. For the present general case, we have \begin{thm}\label{theorem general} Let $0\leq\alpha\leq p-1$. Then \begin{equation}\label{equation E alpha 1} \dim_H\b{\it E}(\alpha)=\begin{cases} 1,\indent &0\leq\alpha<(p-1)/2;\\ h(\alpha),\indent &(p-1)/2\leq\alpha\leq p-1,\end{cases} \end{equation} and \begin{equation}\label{equation E alpha 2} \dim_H\bar{E}(\alpha)=\begin{cases} h(\alpha),\indent &0\leq\alpha<(p-1)/2;\\ 1,\indent &(p-1)/2\leq\alpha\leq p-1.\end{cases} \end{equation} \end{thm} To prove this theorem, we introduce a lemma about the Hausdorff dimensions of homogeneous Moran sets. Here, it is assumed that the reader is familiar with the definition and structure of homogeneous Moran sets; see \cite{FWW} for more details. Let $\{N_k\}_{k\geq1}$ be a sequence of integers and $\{c_k\}_{k\geq1}$ be a sequence of positive numbers satisfying $N_k\geq2$, $0<c_k<1$ and $N_kc_k\leq1$. Let $\mathcal{M}=\mathcal{M}\big(I,\{N_k\}_{k\geq1},\{c_k\}_{k\geq1}\big)$ be the homogeneous Moran set determined by the sequences $\{N_k\}_{k\geq1}$ and $\{c_k\}_{k\geq1}$. Denote \[s=\varliminf_{k\to\infty}\frac{\log(N_1N_2\cdots N_k)}{-\log(c_1c_2\cdots c_{k+1}N_{k+1})}.\] Then we have \begin{lem}[See Theorem 2.1 and Corollary 2.1 in \cite{FWW}]\label{lemma FWW} Let $\mathcal{M}$ be a homogeneous Moran set. Then $\dim_H\mathcal{M}\geq s$. Moreover, if $\inf_{k\geq1}c_k>0$, then $\dim_H\mathcal{M}=s$.
\end{lem} \begin{proof}[Proof of Theorem \ref{theorem general}] We will give only the proof of \eqref{equation E alpha 2}, since the conclusion \eqref{equation E alpha 1} can be dealt with in a similar way. For the second part of \eqref{equation E alpha 2}, it is obvious that $\{x\in I\colon\lim_{n\to\infty}S_n(x)/n=(p-1)/2\}\subset\bar{E}(\alpha)$. Since $\lim_{n\to\infty}S_n(x)/n=(p-1)/2$ for almost all $x\in I$ by the ergodic theorem, we have \[\dim_H\bar{E}(\alpha)\geq\dim_H\left\{x\in I\colon\lim_{n\to\infty}\frac{S_n(x)}{n}=\frac{p-1}{2}\right\}=1.\] It follows that $\dim_H\bar{E}(\alpha)=1$ as $(p-1)/2\leq\alpha\leq p-1$. For the first part of \eqref{equation E alpha 2}, we first show that $h(\alpha)$ is an upper bound for the Hausdorff dimension of $\bar{E}(\alpha)$ as $0\leq\alpha<(p-1)/2$. For any $\delta>0$, we have \[\bar{E}(\alpha)\subset\bigcap_{l=1}^\infty\bigcup_{n=l}^\infty\bigcup_{x_1\cdots x_n\in\bar{H}(\alpha,n,\delta)}I(x_1\cdots x_n),\] where the cylinder $I(x_1\cdots x_n)=\{y=0.y_1y_2\ldots\in I\colon y_1=x_1,\ldots,y_n=x_n\}$. By the definition of $\bar{h}(\alpha)$, for any $\eta>0$ we may choose $\delta$ small enough and an integer $N$ such that \[\bar{h}(\alpha,n,\delta)<p^{n\left(\bar{h}(\alpha)+\frac{\eta}{2}\right)},\quad\forall n>N.\] Then, for any $l>N$, the $(\bar{h}(\alpha)+\eta)$-Hausdorff measure of $\bar{E}(\alpha)$ satisfies \[\mathbb{H}_{p^{-l}}^{\bar{h}(\alpha)+\eta}\big(\bar{E}(\alpha)\big)\leq\sum_{n=l}^{\infty}\bar{h}(\alpha,n,\delta)(p^{-n})^{\bar{h}(\alpha)+\eta}<\sum_{n=l}^{\infty}(p^{-\frac{\eta}{2}})^n<\infty.\] This implies that $\dim_H\bar{E}(\alpha)\leq\bar{h}(\alpha)+\eta$. Thus, $\dim_H\bar{E}(\alpha)\leq\bar{h}(\alpha)=h(\alpha)$ by the arbitrariness of $\eta$ and \eqref{halpha 4} of Theorem \ref{theorem enumerate 1}. Next, we turn to show that the lower bound of the Hausdorff dimension of $\bar{E}(\alpha)$ is $h(\alpha)$. For this, we will prove $\dim_H\bar{E}(\alpha)\geq\tau$ for any $0<\tau<h(\alpha)$.
Since $\tau<h(\alpha)$, we can take two sequences, an increasing integer sequence $\{n_j\}_{j\geq1}$ and a decreasing positive sequence $\{\delta_j\}_{j\geq1}$ with $\lim_{j\to\infty}\delta_j=0$, such that \[h(\alpha,n_j,\delta_j)>p^{n_j\tau}.\] Let $j\geq1$ and write \[F_j(\alpha)=\left\{x_1x_2\cdots x_{n_j}\in A^{n_j}\colon\left|\frac{\sum_{i=1}^{n_j}x_i}{n_j}-\alpha\right|<\delta_j\right\}.\] Take a positive integer sequence $\{m_i\}_{i\geq1}$ satisfying \[\lim_{j\to\infty}\frac{n_{j+1}}{\sum_{i=1}^{j}m_in_i}=0.\] Write \[q_j=m_1n_1+m_2n_2+\ldots+m_jn_j,\quad j\geq1.\] Based on the sequence of sets $\{F_j(\alpha)\}_{j\geq1}$, construct the Moran set \begin{align*} \begin{split} \mathcal{F}(\alpha)&=\left\{0.x_1x_2\ldots\in I\colon x_{q_i+1}\cdots x_{q_{i+1}}\in F_i(\alpha)^{m_i},\forall i\geq1\right\}\\ &=:0.\prod_{i=1}^{\infty}F_i(\alpha)^{m_i}. \end{split} \end{align*} Here and in the sequel, if $F$ is a set of words of equal length and $m$ is a positive integer, then we use the notation $F^m$ to denote the set in which every word is the concatenation of $m$ words from $F$, and $F^\infty$ the set in which every sequence is the concatenation of infinitely many words from $F$. For a sequence of sets of words $\{F_i\}_{i\geq1}$, $\prod_{i=1}^\infty F_i$ denotes the set in which every sequence is the successive concatenation of words from the sets $F_i$ in the natural order. It is easy to see that, for any $x\in\mathcal{F}(\alpha)$, \[\frac{m_1n_1(\alpha-\delta_1)+\cdots+m_jn_j(\alpha-\delta_j)}{m_1n_1+\cdots+m_jn_j}\leq\frac{S_{q_j}(x)}{q_j}\leq\frac{m_1n_1(\alpha+\delta_1)+\cdots+m_jn_j(\alpha+\delta_j)}{m_1n_1+\cdots+m_jn_j}.\] Since $m_jn_j\to\infty$ and $\delta_j\to0$ as $j\to\infty$, we have \[\lim_{j\to\infty}\frac{S_{q_j}(x)}{q_j}=\alpha\] by the squeeze theorem. It follows that the upper limit of $S_n(x)/n$ is $\alpha$, since the condition on $\{m_i\}_{i\geq1}$ ensures that an incomplete block, of length at most $n_{j+1}$, contributes negligibly compared with $q_j$.
So, we have $\mathcal{F}(\alpha)\subset\bar{E}(\alpha)$ and then $\dim_H\bar{E}(\alpha)\geq\dim_H\mathcal{F}(\alpha)$. For any sufficiently large integer $n$, there exist integers $j\geq1$ and $b$ such that \[0\leq b<m_{j+1}\quad\text{and}\quad\sum_{i=1}^jm_in_i+bn_{j+1}\leq n<\sum_{i=1}^jm_in_i+(b+1)n_{j+1}.\] Then, by the first assertion of Lemma \ref{lemma FWW} we have \[\dim_H\mathcal{F}(\alpha)\geq\varliminf_{j\to\infty}\frac{\big(\sum_{i=1}^jm_in_i+bn_{j+1}\big)\tau\log p}{\big(\sum_{i=1}^jm_in_i+(b+1)n_{j+1}\big)\log p-n_{j+1}\tau\log p}=\tau.\] Thus, we obtain that $\dim_H\bar{E}(\alpha)\geq\tau$, which shows the first part of \eqref{equation E alpha 2}. The proof is completed now. \end{proof} Moreover, denote by \[A(x)=\lim_{n\to\infty}\frac{S_n(x)}{n},\quad x\in I,\] \emph{the arithmetic digit mean} of $x$ if the limit exists, and define the level set related to it as \begin{align}\label{E = alpha} E(\alpha)=\left\{x\in I\colon A(x)=\alpha\right\}. \end{align} Then, by the same technique used in the proof of Theorem~\ref{theorem general}, we may get \begin{thm}\label{theorem = general} For any $0\leq\alpha\leq p-1$, we have that $\dim_HE(\alpha)=h(\alpha)$. \end{thm} \begin{cor}\label{corollary p-1 2} $h\big((p-1)/2\big)=1$. \end{cor} \begin{proof} Since $A(x)=(p-1)/2$ for almost all $x\in I$, by Theorem~\ref{theorem = general} we have \begin{align*} \begin{split} 1=\dim_H\left\{x\in I\colon A(x)=\frac{p-1}{2}\right\} =\dim_HE\left(\frac{p-1}{2}\right)=h\left(\frac{p-1}{2}\right). \end{split} \end{align*} This ends the proof. \end{proof} \begin{rem}\label{remark b alpha beta} By Corollary \ref{corollary p-1 2}, Theorem \ref{theorem main theorem} can be restated in detail as follows: if $0\leq\alpha\leq\beta<(p-1)/2$, then $\dim_HB(\alpha,\beta)=h(\beta)$; if $0\leq\alpha\leq(p-1)/2\leq\beta\leq p-1$, then $\dim_HB(\alpha,\beta)=1$; if $(p-1)/2<\alpha\leq\beta\leq p-1$, then $\dim_HB(\alpha,\beta)=h(\alpha)$.
\end{rem} \section{Some Moran sets} In this section, we will introduce some Moran sets constructed from sets of words with bounded digit sums and then determine their Hausdorff dimensions. Based on them, we will construct suitable subsets to achieve the lower bound of the Hausdorff dimension of $B(\alpha,\beta)$ in the last section. Let $M\geq1$ be an integer. Take two integers $P$ and $Q$ satisfying $0\leq P\leq Q\leq (p-1)M$. Write \[W([P,Q],M):=\left\{x_1x_2\cdots x_M\in A^M\colon P\leq\sum_{i=1}^Mx_i\leq Q\right\}.\] Then define the Moran set \begin{align*} \mathcal {W}([P,Q],M)&:={}0.W([P,Q],M)^\infty. \end{align*} For the size of the set $\mathcal {W}([P,Q],M)$, by the second assertion in Lemma \ref{lemma FWW}, we can get immediately that \begin{lem}\label{lemma pqm} Let $0\leq P\leq Q\leq (p-1)M$ and $M\geq1$. Then \[\dim_H\mathcal {W}([P,Q],M)=\frac{\log\card W([P,Q],M)}{(\log p)M}.\] \end{lem} Here and in the sequel, if $P=Q$, then we write $W([P,Q],M)$ as $W(P,M)$ and $\mathcal {W}([P,Q],M)$ as $\mathcal {W}(P,M)$, respectively, for simplicity. Let $\alpha$ be a real number with $0\leq\alpha\leq p-1$, and let $n\geq1$. Define the function \[\omega(\alpha,n)=\card W([\alpha n],n).\] Then the corresponding properties in the following lemma are evident. \begin{lem}\label{theorem enumerate 2} Let $0\leq\alpha\leq p-1$ and $n\geq1$. Then \begin{enumerate} \item\label{omega 1} For each $\alpha$, $\omega(\alpha,n)$ is increasing with respect to $n$; \item\label{omega 2} For each $n$, $\omega(\alpha,n)$ is constant on $[(k-1)/n,k/n)$, where $1\leq k\leq(p-1)n$, with respect to $\alpha$; \item\label{omega 3} For each $n$, $\omega(\alpha,n)$ is increasing on $[0,(p-1)/2+1/n)$ and decreasing on $[(p-1)/2+1/n,p-1]$ with respect to $\alpha$. \end{enumerate} \end{lem} Moreover, we have \begin{lem}\label{lemma logcard} Let $0\leq\alpha\leq p-1$.
Then \[\lim_{n\to\infty}\frac{\log\card W([\alpha n],n)}{(\log p)n}=h(\alpha).\] \end{lem} \begin{proof} We first show that \begin{align}\label{equality lemma logcard} \varliminf_{n\to\infty}\frac{\log\card W([\alpha n],n)}{(\log p)n}=h(\alpha). \end{align} The proof is divided into three cases: $0\leq\alpha<(p-1)/2$, $\alpha=(p-1)/2$ and $(p-1)/2<\alpha\leq p-1$. Here, we give only the proof of the first case. Take $\delta>0$ such that $\alpha+\delta<(p-1)/2$. Since \[\card W([(\alpha+\delta)n],n)\leq\frac{n^{n\delta}}{(n\delta-1)!}\card W([\alpha n],n),\] by Stirling's approximation we have \begin{align*} \begin{split} \varliminf_{n\to\infty}\frac{\log\card W([(\alpha+\delta)n],n)}{(\log p)n}&\leq\varliminf_{n\to\infty}\frac{\log\frac{n^{n\delta}}{(n\delta-1)!}}{(\log p)n}+\varliminf_{n\to\infty}\frac{\log\card W([\alpha n],n)}{(\log p)n}\\ &=\frac{\delta\log\frac{e}{\delta}}{\log p}+\varliminf_{n\to\infty}\frac{\log\card W([\alpha n],n)}{(\log p)n}. \end{split} \end{align*} Letting $\delta\to0$ on both sides, since $\lim_{\delta\to0}(\delta\log\frac{e}{\delta})/\log p=0$, we have \begin{align}\label{inequality log halpha} \lim_{\delta\to0}\varliminf_{n\to\infty}\frac{\log\card W([(\alpha+\delta)n],n)}{(\log p)n}\leq\varliminf_{n\to\infty}\frac{\log\card W([\alpha n],n)}{(\log p)n}. \end{align} Moreover, by the properties of the function $\omega(\alpha,n)$ in Lemma~\ref{theorem enumerate 2}, we have \begin{align*} \begin{split} \card W([(\alpha+\delta)n],n)&\leq\card W\big(\big[[(\alpha-\delta)n]+1,[(\alpha+\delta)n]\big],n\big)\\ &=h(\alpha,n,\delta)\leq([2\delta n]+1)\card W([(\alpha+\delta)n],n).
\end{split} \end{align*} It follows that \[\lim_{\delta\to0}\varliminf_{n\to\infty}\frac{\log\card W([(\alpha+\delta)n],n)}{(\log p)n}=\lim_{\delta\to0}\varliminf_{n\to\infty}\frac{\log h(\alpha,n,\delta)}{(\log p)n}=h(\alpha).\] This, together with \eqref{inequality log halpha}, yields that \[h(\alpha)\leq\varliminf_{n\to\infty}\frac{\log\card W([\alpha n],n)}{(\log p)n}.\] On the other hand, the reverse inequality clearly holds. So, the equality \eqref{equality lemma logcard} is established. Since the equality $\varlimsup_{n\to\infty}\log\card W([\alpha n],n)/\big((\log p)n\big)=h(\alpha)$ can be proved in the same way, the proof of this lemma is complete. \end{proof} In the sequel, we will construct a Moran set $\mathcal{W}_M(\alpha)$, where $0\leq\alpha<p-1$, to obtain the lower bound of the Hausdorff dimension of $B(\alpha,\beta)$ in Theorem~\ref{theorem main theorem}. First, we recursively construct two sequences of sets of words $\{W_n(\alpha,M)\}_{n=1}^\infty$ and $\{V_n(\alpha,M)\}_{n=1}^\infty$ as follows. Let $M$ be sufficiently large so that $[\alpha M]+1<(p-1)M$. For brevity, write \[W_1(\alpha,M)=W([\alpha M],M),\quad V_1(\alpha,M)=W([\alpha M]+1,M).\] Suppose that the sets $W_i(\alpha,M)$ and $V_i(\alpha,M)$ are well-defined for all $1\leq i\leq n$; then define \begin{align*} \begin{split} W_{n+1}(\alpha,M)=\big\{&x_1\cdots x_{2^nM}\in W([\alpha 2^nM],2^nM)\colon\\ &x_{2^{n-1}Mi+1}\cdots x_{2^{n-1}M(i+1)}\in W_n(\alpha,M)\cup V_n(\alpha,M),i=0,1\big\}, \end{split} \end{align*} \begin{align*} \begin{split} V_{n+1}(\alpha,M)=\big\{&x_1\cdots x_{2^nM}\in W([\alpha 2^nM]+1,2^nM) \colon \\&x_{2^{n-1}Mi+1}\cdots x_{2^{n-1}M(i+1)}\in W_n(\alpha,M)\cup V_n(\alpha,M),i=0,1\big\}. \end{split} \end{align*} The above definitions are valid since the estimate \begin{equation}\label{cc5} 2[\alpha2^kM]<[\alpha2^{k+1}M]+1\leq 2\left([\alpha2^kM]+1\right) \end{equation} holds for any $0\leq\alpha<p-1$ and $k\geq0$.
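The estimate \eqref{cc5} is purely arithmetic: it reduces to the floor-function fact $2[y]\leq[2y]\leq 2[y]+1$ for real $y\geq0$, and so can be checked numerically. The following sketch is illustrative only and not part of the paper; the base $p=5$ and the sampling ranges are arbitrary choices.

```python
import math
import random

# Sanity check of the estimate (cc5):
#   2[alpha*2^k*M] < [alpha*2^(k+1)*M] + 1 <= 2([alpha*2^k*M] + 1),
# which reduces to 2[y] <= [2y] <= 2[y] + 1 for y = alpha*2^k*M >= 0.
random.seed(1)
p = 5  # illustrative choice of base
for _ in range(10_000):
    alpha = random.uniform(0, p - 1)
    k = random.randrange(10)
    M = random.randrange(1, 50)
    y = alpha * (2 ** k) * M
    assert 2 * math.floor(y) < math.floor(2 * y) + 1 <= 2 * (math.floor(y) + 1)
print("estimate (cc5) verified on 10000 random samples")
```

Since doubling a binary floating-point number is exact, the check is not affected by rounding.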
With this construction, we know that for each $n\geq1$, every word in $W_n(\alpha,M)$ is of length $2^{n-1}M$ and the sum of its elements is $[\alpha2^{n-1}M]$. Similarly, every word in $V_n(\alpha,M)$ is of length $2^{n-1}M$ and the sum of its elements is $[\alpha2^{n-1}M]+1$. Moreover, we have \begin{rem}\label{remark decompose} For any $0\leq i\leq n-1$, we can decompose uniquely each word in $W_n(\alpha,M)$ and $V_n(\alpha,M)$ into successive concatenations of $2^iM$-words, where the sum of elements in each $2^iM$-word is $[\alpha 2^iM]$ or $[\alpha 2^iM]+1$. \end{rem} Based on the family of sets of words $\{W_n(\alpha,M)\}_{n=1}^\infty$, define the Moran set \begin{align*} \mathcal{W}_M(\alpha):=0.\prod_{n=1}^\infty W_n(\alpha,M). \end{align*} Then we have \begin{lem}\label{lemma alpha M} Let $0\leq\alpha<p-1$, then \begin{align}\label{equantion alpha M} \lim_{M\to\infty}\dim_H\mathcal{W}_M(\alpha)=h(\alpha). \end{align} \end{lem} \begin{proof} For the case $0\leq\alpha<(p-1)/2$, take $M$ large enough such that $[\alpha M]+1<[(p-1)M/2]$. By the monotonicity of the function $\omega$ in \eqref{omega 3} of Lemma~\ref{theorem enumerate 2}, we have \[\card W([\alpha M],M)=\omega(\alpha,M)\leq \omega(\alpha+1/M,M)=\card W([\alpha M]+1,M).\] From this and the structure of the words in $\mathcal{W}_M(\alpha)$ described in Remark~\ref{remark decompose}, we know that \begin{align}\label{inequality alpha M} \dim_H\mathcal{W}([\alpha M],M)\leq\dim_H\mathcal{W}_M(\alpha)\leq\dim_H\mathcal{W}\big(\big[[\alpha M],[\alpha M]+1\big],M\big). \end{align} Moreover, by Lemma \ref{lemma pqm}, we have \[\dim_H\mathcal{W}([\alpha M],M)=\frac{\log\card W([\alpha M],M)}{(\log p)M}\] and \begin{align*} \begin{split} \dim_H\mathcal{W}\big(\big[[\alpha M],[\alpha M]+1\big],M\big) =\frac{\log\big(\card W([\alpha M],M)+\card W([\alpha M]+1,M)\big)}{(\log p)M}.
\end{split} \end{align*} Thus, by Lemma~\ref{lemma logcard} we obtain that \[\lim_{M\to\infty}\dim_H\mathcal{W}([\alpha M],M)=\lim_{M\to\infty}\dim_H\mathcal{W}\big(\big[[\alpha M],[\alpha M]+1\big],M\big)=h(\alpha).\] This, together with \eqref{inequality alpha M}, leads to the conclusion~\eqref{equantion alpha M}. On the other hand, for the case $(p-1)/2\leq\alpha<p-1$, we can get similarly that \[\dim_H\mathcal{W}([\alpha M]+1,M)\leq\dim_H\mathcal{W}_M(\alpha)\leq\dim_H\mathcal{W}\big(\big[[\alpha M],[\alpha M]+1\big],M\big)\] for sufficiently large $M$. The rest of the proof is similar to the first case; we omit the details. \end{proof} \section{Proof of Theorem \ref{theorem main theorem}} In this section, we give the proof of Theorem~\ref{theorem main theorem}. First, we present the following lemma, which reveals the relations between the lower and upper digit means and the lower and upper moving digit means of $x\in I$; it will be used to obtain the upper bound of the Hausdorff dimension of $B(\alpha,\beta)$. \begin{lem}\label{lemma relation} For any $x\in I$, we have \begin{equation} \b{\it M}(x)\leq\b{\it A}(x)\leq\bar{A}(x)\leq\bar{M}(x). \end{equation} \end{lem} \begin{proof} It suffices to show that $\bar{A}(x)\leq\bar{M}(x)$. Write $\bar{M}(x)=\beta\in[0,p-1]$. Then, for any $\epsilon>0$, there exists an integer $N>0$ such that \[\varlimsup\limits_{m\to\infty}\frac{S_n(T^mx)}{n}<\beta+\epsilon,\quad\forall n\geq N.\] Furthermore, there exists an integer $Q=Q(n)\geq1$ such that \[\frac{S_n(T^mx)}{n}<\beta+\epsilon,\quad \mbox{i.e.},\quad \sum_{i=1}^nx_{m+i}\leq[n(\beta+\epsilon)]\] for any $m\geq Q$.
Suppose that $t\geq Q$ and write $t=Q+r+kn$, where $0\leq r\leq n-1$. Then \[\frac{S_t(x)}{t}\leq\frac{Q+r+k[n(\beta+\epsilon)]}{Q+r+kn}.\] Let $t\to\infty$; then $k\to\infty$ and \[\varlimsup_{t\to\infty}\frac{S_t(x)}{t}\leq\frac{[n(\beta+\epsilon)]}{n}\leq\beta+\epsilon.\] Since $\epsilon$ is arbitrary, we have $\bar{A}(x)\leq\beta=\bar{M}(x)$. \end{proof} Next, we present a lemma for the computation of the lower bound of the Hausdorff dimension of $B(\alpha,\beta)$. Let $\mathbb{M}$ be a subset of $\mathbb{N}$. We say the set $\mathbb{M}$ is of density $\rho\in[0,1]$ in $\mathbb{N}$ if \[\lim_{n\to\infty}\frac{\card\{i\in\mathbb{M}\colon i\leq n\}}{n}=\rho.\] Write $\mathbb{N}\backslash\mathbb{M}=\{n_i\}_{i\geq1}$ where $n_i<n_{i+1}$ for all $i\geq1$. Define a mapping $\varphi_\mathbb{M}\colon I\to I$ by \[0.x_1x_2\ldots\mapsto 0.x_{n_1}x_{n_2}\ldots.\] Under the mapping $\varphi_\mathbb{M}$, for any given subset $D\subset I$, we may obtain another set $\varphi_\mathbb{M}(D)=\{\varphi_\mathbb{M}(x)\colon x\in D\}$. Moreover, we have \begin{lem}[See Lemma 2.3 in~\cite{CT}]\label{lemma invariance} Suppose that the set $\mathbb{M}$ is of density zero in $\mathbb{N}$. Then for any set $D\subset I$ we have $\dim_HD=\dim_H\varphi_\mathbb{M}(D)$. \end{lem} Lemma~\ref{lemma invariance} implies that the Hausdorff dimension of a set $D$ is invariant under deleting, from the expansions of the numbers in $D$, the digits whose positions form a set of density zero in $\mathbb{N}$. Now, we are ready to give the proof of Theorem~\ref{theorem main theorem}. \begin{proof}[Proof of Theorem~\ref{theorem main theorem}] According to Remark \ref{remark b alpha beta}, the proof is divided into three parts: \begin{enumerate} \item $\dim_HB(\alpha,\beta)=h(\beta)$ if $0\leq\alpha\leq\beta<(p-1)/2$; \item $\dim_HB(\alpha,\beta)=1$ if $\alpha\leq(p-1)/2\leq\beta$; \item $\dim_HB(\alpha,\beta)=h(\alpha)$ if $(p-1)/2<\alpha\leq\beta\leq p-1$.
\end{enumerate} In the following, we prove these three parts respectively. (1) For the upper bound, by Lemma~\ref{lemma relation}, we have $B(\alpha,\beta)\subset\bar{E}(\beta)$. It follows that $\dim_HB(\alpha,\beta)\leq\dim_H\bar{E}(\beta)=h(\beta)$. For the lower bound, construct the set \[\mathcal{W}_M(\alpha,\beta)=0.\prod_{n=1}^{\infty}\big(W_n(\alpha,M)\times W_n(\beta,M)^n\big).\] Then we have: (a) $\mathcal{W}_M(\alpha,\beta)\subset B(\alpha,\beta)$; (b) $\dim_H\mathcal{W}_M(\alpha,\beta)=\dim_H\mathcal{W}_M(\beta)$. For the proof of (a), note that for any $i\geq1$, each word in the set \[\prod_{n=1}^i\big(W_n(\alpha,M)\times W_n(\beta,M)^n\big)\] is of length $i2^iM$, and the words in $W_n(\alpha,M)$ and $W_n(\beta,M)$ with $n>i$ have common length $2^{n-1}M$. So, we may decompose every number $x\in\mathcal{W}_M(\alpha,\beta)$ into successive concatenations of $2^iM$-words. Take $n$ to be sufficiently large and write $n=k2^iM+r$, $0\leq r\leq 2^iM-1$. Then \[\frac{(k-1)[\alpha2^iM]}{k2^iM+r}\leq\b{\it M}_n(x)\leq\frac{(k+2)([\alpha2^iM]+1)}{k2^iM+r}.\] Let $n\to\infty$; then $k\to\infty$. It yields that \[\frac{[\alpha2^iM]}{2^iM}\leq\b{\it M}(x)\leq\frac{[\alpha2^iM]+1}{2^iM}.\] Since this inequality holds for all $i\geq1$, we have $\b{\it M}(x)=\alpha$ by letting $i\to\infty$. The other conclusion $\bar{M}(x)=\beta$ can be deduced in a similar manner. So, we have $\mathcal{W}_M(\alpha,\beta)\subset B(\alpha,\beta)$. For the second assertion (b), for any $x\in\mathcal{W}_M(\alpha,\beta)$, the set of positions occupied by the words from the sets $W_n(\alpha,M)$, $n\geq1$, is of density zero in $\mathbb{N}$. By deleting all these words from the expansions of the numbers in $\mathcal{W}_M(\alpha,\beta)$, we obtain the set $\mathcal{W}_M(\beta)$. Then, (b) is established by Lemma~\ref{lemma invariance}.
By (a) and (b), we obtain that \[\dim_HB(\alpha,\beta)\geq\dim_H\mathcal{W}_M(\alpha,\beta)=\dim_H\mathcal{W}_M(\beta).\] Letting $M\to\infty$ yields $\dim_HB(\alpha,\beta)\geq h(\beta)$ according to Lemma~\ref{lemma alpha M}. The proof of this part is finished. (2) This part is split into four cases: i) $\alpha< (p-1)/2<\beta$; ii) $\alpha=(p-1)/2<\beta$; iii) $\alpha<\beta=(p-1)/2$; iv) $\alpha=\beta=(p-1)/2$. Case i): $\alpha< (p-1)/2<\beta$. Since $h((p-1)/2)=1$, for $\epsilon$ small enough, there exist $\delta_0>0$ and $n_0>0$ such that \[\alpha<\frac{p-1}{2}-\delta_0<\frac{p-1}{2}+\delta_0<\beta\quad\text{and}\quad\frac{\log h\left(\frac{p-1}{2},n_0,\delta_0\right)}{(\log p)n_0}>1-\epsilon.\] Based on the set \[H\left(\frac{p-1}{2},n_0,\delta_0\right)=\bigg\{x_1\cdots x_{n_0}\in A^{n_0}\colon n_0(\frac{p-1}{2}-\delta_0)<\sum_{i=1}^{n_0}x_i<n_0(\frac{p-1}{2}+\delta_0)\bigg\},\] define \begin{align}\label{definition u n0} \mathcal{H}\left(\frac{p-1}{2},n_0,\delta_0\right)=0.H\left(\frac{p-1}{2},n_0,\delta_0\right)^\infty. \end{align} Then, by Lemma \ref{lemma pqm}, we have \[\dim_H\mathcal{H}\left(\frac{p-1}{2},n_0,\delta_0\right)=\frac{\log\card H\left(\frac{p-1}{2},n_0,\delta_0\right)}{(\log p)n_0}=\frac{\log h\left(\frac{p-1}{2},n_0,\delta_0\right)}{(\log p)n_0}>1-\epsilon.\] Now, construct the set \begin{align}\label{definition w n0} \mathcal{W}_{M,n_0,\delta_0}(\alpha,\beta)=0.\prod_{n=1}^{\infty}\left(W_n(\alpha,M)\times H\left(\frac{p-1}{2},n_0,\delta_0\right)^{3^{n-1}}\times W_n(\beta,M)\right). \end{align} Similar to the proof of the foregoing part, we can also deduce that \[\mathcal{W}_{M,n_0,\delta_0}(\alpha,\beta)\subset B(\alpha,\beta)\quad\text{and}\quad\dim_H\mathcal{W}_{M,n_0,\delta_0}(\alpha,\beta)=\dim_H\mathcal{H}\left(\frac{p-1}{2},n_0,\delta_0\right).\] Thus, we have \[\dim_HB(\alpha,\beta)\geq\dim_H\mathcal{H}\left(\frac{p-1}{2},n_0,\delta_0\right)>1-\epsilon.\] This proves the case, since $\epsilon$ is arbitrary.
Case ii): $\alpha=(p-1)/2<\beta$. In this case, take $\delta_0$ satisfying $(p-1)/2<(p-1)/2+\delta_0<\beta$. Put \[H'\left(\frac{p-1}{2},n_0,\delta_0\right)=\bigg\{x_1\cdots x_{n_0}\in A^{n_0}\colon \frac{p-1}{2}n_0<\sum_{i=1}^{n_0}x_i<(\frac{p-1}{2}+\delta_0)n_0\bigg\}\] and \[\mathcal{H}'\left(\frac{p-1}{2},n_0,\delta_0\right)=0.H'\left(\frac{p-1}{2},n_0,\delta_0\right)^\infty.\] Then, similar to the proof of Lemma \ref{lemma logcard}, we have \[\lim_{\delta_0\to0}\varliminf_{n_0\to\infty}\frac{\log\card H'\left(\frac{p-1}{2},n_0,\delta_0\right)}{(\log p)n_0}=\lim_{\delta_0\to0}\varliminf_{n_0\to\infty}\frac{\log\card H\left(\frac{p-1}{2},n_0,\delta_0\right)}{(\log p)n_0}=1.\] So, for any $\epsilon>0$, there exist $n_0$ and $\delta_0$ such that \[\dim_H\mathcal{H}'\left(\frac{p-1}{2},n_0,\delta_0\right)>1-\epsilon.\] Next, construct the Moran set $\mathcal{W}_{M,n_0,\delta_0}'\big((p-1)/2,\beta\big)$ as $\mathcal{W}_{M,n_0,\delta_0}(\alpha,\beta)$ in \eqref{definition w n0} by replacing $H\big((p-1)/2,n_0,\delta_0\big)$ with $H'\left((p-1)/2,n_0,\delta_0\right)$. The rest of the proof of this case is similar to the discussion in Case i). Case iii): $\alpha<\beta=(p-1)/2$. This can be proved in the same way as Case ii). Case iv): $\alpha=\beta=(p-1)/2$. In this case, take $n_0$ to be even and consider the set \[\mathcal{W}_{n_0}\left(\frac{p-1}{2}\right)=0.W\left(\frac{p-1}{2},n_0\right)^\infty,\] where \[W\left(\frac{p-1}{2},n_0\right)=\bigg\{x_1\cdots x_{n_0}\in A^{n_0}\colon \sum_{i=1}^{n_0}x_i=\frac{p-1}{2}n_0\bigg\}.\] Then we have \[\lim_{n_0\to\infty}\dim_H\mathcal{W}_{n_0}\left(\frac{p-1}{2}\right)=h\left(\frac{p-1}{2}\right)=1\] by Lemma \ref{lemma logcard}. It is evident that \[\mathcal{W}_{n_0}\left(\frac{p-1}{2}\right)\subset B\left(\frac{p-1}{2},\frac{p-1}{2}\right)=B\left(\frac{p-1}{2}\right).\] Thus, $\dim_HB\big((p-1)/2\big)\geq\dim_H\mathcal{W}_{n_0}\big((p-1)/2\big)$. Letting $n_0\to\infty$, we obtain $\dim_HB\big((p-1)/2\big)=1$.
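The cases above all rest on $h\big((p-1)/2\big)=1$ (Corollary \ref{corollary p-1 2}), which in turn reflects that the digit mean of Lebesgue-almost every $x\in I$ is $(p-1)/2$. The following is a numerical illustration only, not part of the proof; the base $p=5$ and the sample size are arbitrary choices.

```python
import random

# The digit mean S_n(x)/n of a randomly chosen x (i.i.d. uniform digits,
# which models Lebesgue measure on I) concentrates at (p-1)/2, here 2.
random.seed(0)
p, n = 5, 200_000
digits = [random.randrange(p) for _ in range(n)]
print(sum(digits) / n)  # close to (p - 1) / 2 = 2
```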
(3) Three cases will be considered in this part. Case 1): $(p-1)/2<\alpha\leq\beta<p-1$. This case can be proved in the same way as part (1); we omit the details. Case 2): $(p-1)/2<\alpha<\beta=p-1$. First, by Lemma~\ref{lemma relation}, we may obtain that $B(\alpha,p-1)\subset\b{\it E}(\alpha)$. So, $\dim_HB(\alpha,p-1)\leq\dim_H\b{\it E}(\alpha)=h(\alpha)$. Second, construct the set \[\mathcal{W}_M(\alpha,p-1)=0.\prod_{n=1}^{\infty}\big(W_n(\alpha,M)\times(p-1)^n\big),\] where $(p-1)^n$ means the word $(p-1)\cdots(p-1)$ of length $n$. Then we can deduce similarly that \[\mathcal{W}_M(\alpha,p-1)\subset B(\alpha,p-1)\quad\text{and}\quad \dim_H\mathcal{W}_M(\alpha,p-1)=\dim_H\mathcal{W}_M(\alpha).\] Thus, $\dim_HB(\alpha,p-1)\geq\dim_H\mathcal{W}_M(\alpha)=h(\alpha)$. The above two assertions imply that $\dim_HB(\alpha,p-1)=h(\alpha)$. Case 3): $\alpha=\beta=p-1$. Since $B(p-1,p-1)\subset\b{\it E}(p-1)$ and $\dim_H\b{\it E}(p-1)=h(p-1)=0$, we obtain that $\dim_HB(p-1,p-1)=0=h(p-1)$. The proof is finished now. \end{proof} Finally, it should be pointed out that one can also study the \emph{moving digit mean} of $x$: \begin{align} M(x)=\lim_{n\to\infty}\lim_{m\to\infty}\frac{S_n(T^mx)}{n},\quad x\in I. \end{align} Moreover, define the level sets related to it as \begin{align} B^\ast(\alpha)=\left\{x\in I\colon M(x)=\alpha\right\},\quad0\leq\alpha\leq p-1. \end{align} In this situation, the set $B^\ast(\alpha)$ is somewhat trivial, because we have \begin{thm} Let $0\leq\alpha\leq p-1$. If $\alpha\in\{0,1,\ldots,p-1\}$, then $B^\ast(\alpha)$ is a countable set. Otherwise, $B^\ast(\alpha)$ is empty. Hence, we always have \begin{align} \dim_HB^\ast(\alpha)=0 \end{align} for any $0\leq\alpha\leq p-1$. \end{thm} \begin{proof} Let $\alpha=i$, where $i\in\{0,1,\ldots, p-1\}$. Then each number in $B^\ast(\alpha)$ is ultimately periodic with period $1$, ending with $i^\infty$; that is, \[B^\ast(\alpha)=\{x\in I\colon x=0.x_1x_2\ldots x_niii\ldots,n\geq 1\}.\] Thus, $B^\ast(\alpha)$ is countable.
On the other hand, if $i<\alpha<i+1$ for some $i\in\{0,1,\ldots,p-2\}$, then for any $x\in B^\ast(\alpha)$ and $n\geq1$ the limit $\lim_{m\to\infty}S_n(T^mx)/n$ cannot exist; this follows by contradiction from Cauchy's criterion for the convergence of sequences (for $n=1$ the sequence $S_1(T^mx)$ is an integer sequence, so if it converged it would eventually be constant, forcing $M(x)$ to be an integer). It follows that $B^\ast(\alpha)=\emptyset$. \end{proof} \subsection*{Acknowledgment} This work was finished while the author visited the Laboratoire d'Analyse et de Math\'{e}matiques Appliqu\'{e}es, Universit\'{e} Paris-Est Cr\'{e}teil Val de Marne, France. The author is grateful for the great encouragement of his collaborator and for the assistance provided by the laboratory.
TITLE: Why don't we solve the two-dimensional wave equation directly, without using the method of descent? QUESTION [1 upvotes]: Why don't we solve the two-dimensional wave equation directly, without using the method of descent? Is there any problem with the two-dimensional wave equation? To solve the two-dimensional wave equation, we use the three-dimensional solution to the wave equation. Why do we do this? Thank you for your help. REPLY [1 votes]: Indeed, there is no reason we can't solve it directly by Fourier analysis. I prefer to answer you through an example: consider a thin elastic membrane stretched tightly over a rectangular frame. Suppose the dimensions of the frame are $a \times b$ and that we keep the edges of the membrane fixed to the frame. 1) Perturbing the membrane from equilibrium results in some sort of vibration of the surface. 2) Our goal is to mathematically model the vibrations of the membrane surface. We let $u(x, y, t)$ be the deflection of the membrane from equilibrium at position $(x, y)$ and at time $t$. For a fixed $t$, the surface $z = u(x,y,t)$ gives the shape of the membrane at time $t$. Under ideal assumptions (e.g. uniform membrane density, uniform tension, no resistance to motion, small deflection, etc.) one can show that $u$ satisfies the two dimensional wave equation: $$\frac{\partial^2 u}{\partial t^2} = c^2\nabla^2 u$$ for $0<x<a$ and $0<y<b$. As in the one dimensional situation, the constant $c$ has the units of velocity. It is given by $$c^2 = \frac{\tau}{\rho},$$ where $\tau$ is the tension per unit length and $\rho$ is the mass density. The fact that we are keeping the edges of the membrane fixed is expressed by the boundary conditions: $$u(0, y, t) = u(a, y, t) = 0$$ $$u(x, 0, t) = u(x, b, t) = 0$$ We must also specify how the membrane is initially deformed and set into motion. This is done via the initial conditions $$u(x,y,0) = f (x,y)$$ $$u_t(x,y,0) = g(x,y)$$ where $u_t$ is the derivative of $u$ with respect to $t$. 
The goal is now to solve the equation, and we will use the separation of variables and the superposition principle. Let's start with $$u(x,y,t) = X(x)Y(y)T(t)$$ Plugging this into the wave equation we get $$XYT'' = c^2\left(X''YT + XY''T\right)$$ If we divide both sides by $c^2XYT$ this becomes $$\frac{T''}{c^2 T} = \frac{X''}{X} + \frac{Y''}{Y}$$ Because the two sides are functions of different independent variables, they must be constant: $$\frac{T''}{c^2 T} = \frac{X''}{X} + \frac{Y''}{Y} = A$$ That is, for the first equality: $$T'' - c^2AT = 0$$ and for the second $$\frac{X''}{X} = -\frac{Y''}{Y} + A$$ Once again, the two sides involve unrelated variables, so both are constant: $$\frac{X''}{X} = -\frac{Y''}{Y} + A = B$$ If we now let $C = A-B$ we get $$X'' - BX = 0$$ $$Y'' - CY = 0$$ By the first boundary condition we notice that, since we want nontrivial solutions only, we can cancel $Y$ and $T$, yielding $$X(0) = 0$$ When we perform similar computations with the other three boundary conditions we also get $$X(a) = 0$$ $$Y(0) = Y(b) = 0$$ And there are no boundary conditions on $T$. You can easily solve the two boundary value problems for $X$ and $Y$, obtaining $$X_m (x) = \sin\mu x ~~~~~~~~~~~ \mu = \frac{m\pi}{a}$$ $$Y_n(y) = \sin\nu y ~~~~~~~~~~~ \nu = \frac{n\pi}{b}$$ for $n$ and $m$ natural numbers. The separation constants are then $B = -\mu^2$ and $C = -\nu^2$. Recalling that $T$ must satisfy $T'' - c^2 AT = 0$ with $A = B + C = -(\mu^2 + \nu^2) < 0$, then for any choice of $n$ and $m$ we have $$T_{nm} (t) = B_{nm}\cos\lambda_{nm}t + B^*_{nm}\sin\lambda_{nm} t$$ where $$\lambda_{nm} = c\sqrt{\mu^2 + \nu^2} = c\pi \sqrt{\frac{m^2}{a^2} + \frac{n^2}{b^2}}$$ These are the characteristic frequencies of the membrane. 
Remarks: Note that the normal modes: 1) oscillate spatially with frequency $\mu$ in the $x$-direction 2) oscillate spatially with frequency $\nu$ in the $y$-direction 3) oscillate in time with frequency $\lambda_{nm}$ Eventually, according to the principle of superposition, we may add them to obtain the general solution: $$u(x, y, t) = \sum_{n = 1}^{+\infty}\sum_{m = 1}^{+\infty} \sin\mu x \sin\nu y (B_{nm}\cos\lambda_{nm}t + B^*_{nm}\sin\lambda_{nm} t)$$ P.S. We must use a double series since the indices $m$ and $n$ vary independently over the natural numbers. Finally, we must determine the values of the coefficients $B_{nm}$ and $B^*_{nm}$ that are required so that our solution also satisfies the initial conditions. We easily get $$f(x, y) = u(x, y, 0) = \sum_{n = 1}^{+\infty}\sum_{m = 1}^{+\infty} B_{nm}\sin\frac{m\pi}{a}x \sin\frac{n\pi}{b} y$$ and $$g(x, y) = u_t(x, y, 0) = \sum_{n = 1}^{+\infty}\sum_{m = 1}^{+\infty}\lambda_{nm}B^*_{nm}\sin\frac{m\pi}{a}x \sin\frac{n\pi}{b} y$$ And by the way, these are examples of double Fourier series. There are ways to determine the Fourier coefficients, though it is a bit long to do. If you know Fourier analysis you can try it yourself, using the fact that the functions $$Z_{nm}(x, y) = \sin\frac{m\pi}{a}x \sin\frac{n\pi}{b} y$$ are pairwise orthogonal relative to the inner product $\langle f, g\rangle$. The calculation is not difficult but quite tedious. For a particular choice of $a$, $b$, $f$ and $g$ you will get, for example, a solution like $$u(x, y, t) = \frac{576}{\pi^6} \sum_{n = 1}^{+\infty}\sum_{m = 1}^{+\infty} \left(\frac{(1 + (-1)^{m+1})(1 + (-1)^{n+1})}{m^3n^3}\sin\frac{m\pi}{2}x \sin\frac{n\pi}{3}y\cos \left(\pi\sqrt{9m^2 + 4n^2}\,t\right)\right)$$ Hence no method of descent or other techniques have been used.
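To make the normal-mode expansion concrete, here is a small numerical sketch. It is not from the original answer: the dimensions $a=b=1$, the speed $c=1$, the truncation order, and the coefficients $B_{nm}=1/(nm)^2$ (with $B^*_{nm}=0$) are made-up illustrative values. It evaluates the truncated double series and confirms that the boundary conditions on the frame hold at any time $t$:

```python
import math

# Illustrative (made-up) membrane data: unit square frame, unit wave speed.
a, b, c = 1.0, 1.0, 1.0
N = 4  # truncation order for both indices

def lam(n, m):
    # characteristic frequency lambda_nm = c*pi*sqrt(m^2/a^2 + n^2/b^2)
    return c * math.pi * math.sqrt((m / a) ** 2 + (n / b) ** 2)

def u(x, y, t):
    # truncated double series with illustrative coefficients B_nm = 1/(n*m)^2
    total = 0.0
    for n in range(1, N + 1):
        for m in range(1, N + 1):
            B_nm = 1.0 / (n * m) ** 2
            total += (B_nm * math.sin(m * math.pi * x / a)
                           * math.sin(n * math.pi * y / b)
                           * math.cos(lam(n, m) * t))
    return total

# u vanishes on the four edges of the frame, for any t:
print(u(0.0, 0.3, 0.7), u(a, 0.3, 0.7), u(0.4, 0.0, 0.7), u(0.4, b, 0.7))
```

Matching prescribed initial data $f$ and $g$ would additionally require computing the coefficients $B_{nm}$ and $B^*_{nm}$ from the double Fourier orthogonality relations.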
TITLE: Projections onto closed and convex sets QUESTION [0 upvotes]: I have to prove that if $A$ is a closed and convex set, then $z=P_A(x)$ if and only if $\langle x-z, z-y\rangle \geq 0$ for all $y\in A$. I have the following proof, which is not very complicated, but there are a few things I don't understand. The function $g(\theta)=||x-((1-\theta)z+\theta y)||^2$, $\theta \in R$, $z=P_A(x)$, $y\in A$, is a quadratic function of the variable $\theta$ and it has its minimum at $\theta =-\frac{\langle x-z,z-y\rangle}{||z-y||^2}$. Now there is a part that I don't understand: For $z=P_A(x)$, from convexity of the set $A$, we get $g(0)\leq g(\theta)$ for all $\theta \in [0,1]$, so $\theta_{min} \leq 0$. I know why $g(0)\leq g(\theta)$ (I can see it by simply plugging $0$ into the function), but I don't know how the convexity of $A$ caused that and why we took $\theta$ from $[0,1]$. The rest of the proof is OK. Would anybody try to make this clear to me? REPLY [2 votes]: If $z$ is the projection of $x$ to $A$, then it's the closest point to $x$ in $A$. If $y$ is any other point in $A$ and $0\leq\theta\leq 1$, then $(1-\theta)z+\theta y$ is in $A$ because $A$ is convex. (Notice that this would not follow if $\theta$ were outside $[0,1]$.) So the distance from $x$ to this point $(1-\theta)z+\theta y$ in $A$ must be at least the distance from $x$ to $z$. Square both sides of that inequality (because squared distances are algebraically nicer than distances in Euclidean space), and you get $g(\theta)\geq g(0)$. So $g(\theta)$, with $\theta$ restricted to $[0,1]$, takes its minimum value at $\theta=0$.
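As a concrete sanity check of the variational inequality (not part of the proof: the set, the point, and the grid below are arbitrary choices), take $A=[0,1]^2$, where the projection is coordinatewise clamping, and verify $\langle x-z, z-y\rangle \geq 0$ on a grid of points $y\in A$:

```python
import itertools

# Project onto the closed convex box A = [0,1]^2 by clamping each coordinate,
# then check <x - z, z - y> >= 0 for a grid of points y in A.
def project_box(x):
    return tuple(min(max(xi, 0.0), 1.0) for xi in x)

def inner(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x = (1.7, -0.4)          # a point outside the box
z = project_box(x)       # its projection onto A

violations = 0
for y in itertools.product([i / 10 for i in range(11)], repeat=2):
    d = inner((x[0] - z[0], x[1] - z[1]), (z[0] - y[0], z[1] - y[1]))
    if d < -1e-12:
        violations += 1
print(z, violations)
```

The inequality says geometrically that $A$ lies entirely in the half-space on the far side of the hyperplane through $z$ with normal $x-z$, which is exactly what the clamped projection produces here.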
TITLE: Conversion from state space back to transfer function in Octave QUESTION [1 upvotes]: I'm having a problem converting a transfer function to state space and then going back to the same transfer function. I did a little experiment: [A, B, C, D] = tf2ss(tf([50, 10, 1], [1, 0])); [b, a] = ss2tf(A, B, C, D); results in b: [-9, -1], a: [1, -1, ~0] Why is this? Shouldn't I get back the exact same transfer function that I put into tf2ss(), i.e. b = [50, 10, 1] and a = [1, 0]? I have also tried c2d(pid(10, 1, 50), 1) but got this: error: ss: dss2ss: this descriptor system cannot be converted to regular state-space form Why? REPLY [2 votes]: State space models of that form can only represent transfer functions which are proper, so the order of the denominator polynomial has to be greater than or equal to that of the numerator. This is not the case for your example. To see why, one can use the formula that converts a state space model of the form $$ \dot{x} = A\,x + B\,u \\ y = C\,x + D\,u $$ into an equivalent transfer function $$ G(s) = C\,(s\,I - A)^{-1} B + D. $$ The inverse of $s\,I - A$, with $A,I\in\mathbb{R}^{n\times n}$, will be in $\mathbb{R}^{n\times n}$ as well, with each element having the determinant of $s\,I - A$ as its denominator, while the numerator of the element in the $i$th row and $j$th column is the determinant of the submatrix of $s\,I - A$ obtained by removing the $i$th row and $j$th column (this is also known as a minor). It can be shown that the order of each denominator (ignoring pole-zero cancellation for now) will always be equal to $n$, while the order of each minor can be at most $n-1$. If pole-zero cancellation occurs, then both orders decrease, so the order of the numerator will always be less than that of the denominator. Pre- and post-multiplying this by $C$ and $B$ does not change this fact, and when adding $D$ one can multiply it by $\det(s\,I - A)/\det(s\,I - A)$ so that the fractions can be combined. 
This last step can at most make the order of the numerator equal to that of the denominator, but not greater. It is possible to represent improper transfer functions with state space models; however, for this the time derivative has to be redefined as $$ E\,\dot{x} = A\,x + B\,u. $$ In order to use this I think you have to use the command dss (both in MATLAB and Octave). If the matrix $E$ is not of full rank, then the resulting transfer function can be improper. As far as I know, MATLAB (and maybe therefore Octave as well) does not support this for the conversion functions between transfer functions and state space models.
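A quick numerical illustration of the properness argument (a sketch with assumed example matrices, not tied to the question's PID system): since $G(s) = C(sI-A)^{-1}B + D$, we should see $G(s)\to D$ as $|s|\to\infty$ for any regular state-space model. Here with a hand-coded $2\times 2$ inverse in plain Python:

```python
# Evaluate G(s) = C (sI - A)^{-1} B + D for a 2-state SISO example and
# observe that G(s) approaches D for large |s|, i.e. G is proper.
def transfer(A, B, C, D, s):
    # entries of (sI - A) for a 2x2 A
    m11, m12 = s - A[0][0], -A[0][1]
    m21, m22 = -A[1][0], s - A[1][1]
    det = m11 * m22 - m12 * m21
    # inverse via the 2x2 adjugate formula
    i11, i12 = m22 / det, -m12 / det
    i21, i22 = -m21 / det, m11 / det
    # C is 1x2, B is 2x1, D is scalar
    v0 = i11 * B[0] + i12 * B[1]
    v1 = i21 * B[0] + i22 * B[1]
    return C[0] * v0 + C[1] * v1 + D

A = [[0.0, 1.0], [-2.0, -3.0]]   # arbitrary stable example
B = [0.0, 1.0]
C = [1.0, 0.0]
D = 5.0

print(transfer(A, B, C, D, 1.0))   # a finite value
print(transfer(A, B, C, D, 1e9))   # approaches D = 5.0
```

The improper transfer function in the question (numerator degree 2, denominator degree 1) cannot behave this way at infinity, which is why a regular state-space realization of it cannot exist.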
TITLE: Verify Stokes's Theorem for the given surface and vector field QUESTION [0 upvotes]: $S$ is parametrized by $X(s,t) = (s\cos(t), s\sin(t), t)$, $0 \leq s \leq 1$ and $0 \leq t \leq \frac{\pi}{2}$ $$\mathbf{F} = z \mathbf{i} + x \mathbf{j} + z \mathbf{k}$$ I have two things preventing me from solving this: first, I do not know how to find $d\mathbf{S}$ for $X(s,t) = (s\cos(t), s\sin(t), t)$, and second, I am not sure how to use the parametrization $X$ to set up the double integral over $S$ of $\nabla \times \mathbf{F} \cdot d\mathbf{S}$. REPLY [1 votes]: How comfortable are you with surface integrals? $X(s,t)$, as defined in your post, is a parametrization of $S$. Computing $\int_S \nabla \times \mathbf{F} \cdot d\mathbf{S}$ seems like a standard pre-Stokes homework problem about surface integrals. To compute the line integral side of Stokes' theorem, you'll need to parametrize the boundary of $S$. Notice that the domain of $X$ in the $st$-plane is a rectangle (with sides 1 and $\pi/2$). $X$ sends each point of the rectangle to a point on $S$, and it sends the boundary of the rectangle to the boundary of $S$. Can you see how to parametrize the boundary now? (Make sure it's oriented correctly!) Now all you have to do is compute an ordinary line integral.
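For reference, the surface-integral side can be checked numerically (this is not part of the answer; it is a midpoint-rule sketch). With $\mathbf{F}=(z,x,z)$ one gets $\nabla\times\mathbf{F}=(0,1,1)$, and for this parametrization $X_s\times X_t=(\sin t,\,-\cos t,\,s)$, so the integrand reduces to $s-\cos t$ and the exact value is $\pi/4-1$, which the boundary line integral must match:

```python
import math

# Midpoint-rule approximation of the flux of curl F through S.
# curl F . (X_s x X_t) = 0*sin(t) + 1*(-cos(t)) + 1*s = s - cos(t)
def surface_integral(ns=400, nt=400):
    total = 0.0
    ds, dt = 1.0 / ns, (math.pi / 2) / nt
    for i in range(ns):
        s = (i + 0.5) * ds
        for j in range(nt):
            t = (j + 0.5) * dt
            total += (s - math.cos(t)) * ds * dt
    return total

print(surface_integral(), math.pi / 4 - 1)
```

Working the line integral around the four boundary pieces of the parameter rectangle and getting the same number is a good way to confirm the orientation was chosen consistently.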
\begin{document} \title{Exponential renormalization} \date{June 12th, 2010} \begin{abstract} Moving beyond the classical additive and multiplicative approaches, we present an ``exponential'' method for perturbative renormalization. Using Dyson's identity for Green's functions as well as the link between the Fa\`a di Bruno Hopf algebra and the Hopf algebras of Feynman graphs, its relation to the composition of formal power series is analyzed. Eventually, we argue that the new method has several attractive features and encompasses the BPHZ method. The latter can be seen as a special case of the new procedure for renormalization scheme maps with the Rota--Baxter property. To the best of our knowledge, several ideas introduced in the present paper, although very natural from group-theoretical and physical points of view, seem to be new (besides the exponential method, let us mention the notions of counterfactors and of order $n$ bare coupling constants). \end{abstract} \maketitle \tableofcontents \section{Introduction} Renormalization theory \cite{CasKen,Collins,Del,IZ1980} plays a major role in the perturbative approach to quantum field theory (QFT). Since its inception in the late 1930s \cite{Brown} it has evolved from a highly technical and difficult set of tools, mainly used in precision calculations in high energy particle physics, into a fundamental physical principle encoded by the modern notion of the renormalization group. Recently, Alain Connes, Dirk Kreimer, Matilde Marcolli and collaborators developed a compelling mathematical setting capturing essential parts of the algebraic and combinatorial structure underlying the so-called BPHZ renormalization procedure in perturbative QFT \cite{CKI,CKII,CKIII,CM2008,kreimer2}. The essential notion appearing in this approach is that of a combinatorial Hopf algebra. 
The latter typically consists of a graded vector space where the homogeneous components are spanned by finite sets of combinatorial objects, such as planar or non-planar rooted trees, or Feynman graphs, and the Hopf algebraic structures are given by particular constructions on those objects. For a particular QFT the set of Feynman rules corresponds to a multiplicative map from such a combinatorial Hopf algebra, generated, say, by one-particle irreducible (1PI) ultraviolet (UV) superficially divergent diagrams, into a commutative unital target algebra. This target algebra essentially reflects the regularization scheme. The process of renormalization in perturbative QFT can be performed in many different ways~\cite{Collins,IZ1980}. A convenient framework is provided by dimensional regularization (DR). It implies a target algebra of regularized probability amplitudes equipped with a natural Rota--Baxter (RB) algebra structure. The latter encodes nothing but minimal subtraction (MS). Introducing a combinatorial Hopf algebra of Feynman graphs in the context of $\phi^3$-theory (in 6 dimensions) makes it possible, for example, to reformulate the BPHZ renormalization method for Feynman graphs in terms of a Birkhoff--Wiener--Hopf (BWH) decomposition inside the group of dimensionally regularized characters \cite{kreimer2,CKII}. As it turns out, Bogoliubov's recursive renormalization process is then best encoded by Atkinson's recursion for noncommutative Rota--Baxter algebras, the solution of which was obtained in the form of a closed formula in \cite{EFMP}. Following Kreimer \cite{kreimer1}, Walter van Suijlekom extended the Hopf algebra approach to perturbative renormalization of gauge theories \cite{vanSu1,vanSu2}. The Connes--Kreimer approach focussed originally on DR+MS but can actually be extended to other regularization schemes, provided the subtraction method corresponds to a Rota--Baxter algebra structure. 
It applies, for example, to zero momentum subtraction as shown in \cite{EFGP}. However, essential parts of this algebraic machinery are no longer available once the RB property is lost. More precisely, the remarkable result that Bogoliubov's classical renormalization formulae give rise to Hopf algebra characters and are essentially equivalent to the BWH decomposition of Hopf algebra characters is lost if the renormalization scheme map is not RB \cite{CKI,CKII}. Two remarks are in order. First, more insights from an algebraic point of view are needed in this particular direction. As a contribution to the subject, we propose and study in the last section of the present paper a non-MS scheme within DR which is not of Rota--Baxter type. Second, the characterization of the BPHZ method in terms of BWH decomposition might be too restrictive, as it excludes possible subtraction schemes that do not fall into the class of Rota--Baxter type ones. In this paper we present an exponential algorithm to perform perturbative renormalization (the term ``exponential'' refers to the way the algorithm is constructed and was also chosen for its similarity with the classical ``additive'' and ``multiplicative'' terminologies). One advantage of this method, besides its group-theoretical naturality, is that it does not rely on the Rota--Baxter property. Indeed, the exponential method is less restrictive than the BPHZ method in the Hopf algebraic picture. It only requires a projector $P_-$ (used to isolate the divergences of regularized amplitudes) such that the image of the associated orthogonal projector, $P_+:=id-P_-$, forms a subalgebra. This constraint on the image of $P_+$ reflects the natural assumption that products of finite regularized amplitudes are supposed to be finite. Let us mention that the very process of exponential renormalization leads to the introduction of new objects and ideas in the algebro-combinatorial approach to perturbative QFT. 
Particularly promising are those of counterfactors and order $n$ bare coupling constants, which fit particularly well with some widespread ideas that do not always come with a rigorous mathematical foundation, such as the idea that ``in the end everything boils down in perturbative QFT to power series substitutions''. The notion of order $n$ bare coupling constants makes such a statement very precise from the algebraic point of view. Let us also mention that the exponential method is a further development of ideas sketched in our earlier paper~\cite{KEFPatras} that pointed at a natural link between renormalization techniques and fine properties of Lie idempotents, with a particular emphasis on the family of Zassenhaus Lie idempotents. Here we do not further develop such aspects from the theory of free Lie algebras \cite{reutenauer1993}, and refer to the aforementioned article for details on the subject. \medskip The paper is organized as follows. The next section briefly recalls some general properties of graded Hopf algebras including the BWH decomposition of regularized Feynman rules viewed as Hopf algebra characters. We also dwell on the Fa\`a di Bruno Hopf algebra and prove an elementary but useful Lemma that allows the translation of the Dyson formula (relating bare and renormalized Green's functions) into the language of combinatorial Hopf algebras. In Section \ref{sect:Exp} we introduce the notion of $n$-regular characters and present an exponential recursion used to construct $m$-regular characters from $(m-1)$-regular ones. We conclude the article by introducing and studying a toy-model non-Rota--Baxter renormalization scheme on which the exponential recursion can be performed. We prove in particular that locality properties are preserved by this renormalization process. \section{From Dyson to Fa\`a di Bruno} \label{sect:D2F} \subsection{Preliminaries} \label{ssect:prelim} In this section we introduce some mathematical structures to be used in the sequel. 
We also recall the BWH decomposition of Hopf algebra characters. Complementary details can be found, e.g. in \cite{EFGP, FGB, Manchon}. Let us consider a graded, connected and commutative Hopf algebra $H=\bigoplus_{n \geq 0} H_n$ over the field $k$, or its pro-unipotent completion $\prod_{n \geq 0} H_n$. Recall that, since the pioneering work of Pierre Cartier on formal groups \cite{CartierHGF}, it is well known that the two types of Hopf algebras behave identically, allowing one to deal similarly with finite sums $\sum_{n \leq N}h_n$, $h_n \in H_n$ and formal series $\sum_{n \in \NN}h_n$, $h_n \in H_n$. The unit in $H$ is denoted by $\un$. Natural candidates are the Hopf algebras of rooted trees and Feynman graphs~\cite{CKI,CKII} related to non-commutative geometry and pQFT, respectively. We remark here that graduation phenomena are essential for all our forthcoming computations, since in the examples of physical interest they incorporate information such as the number of loops (or vertices) in Feynman graphs, relevant for perturbative renormalization. The action of the grading operator $Y: H \to H$ is given by: $$ Y(h) = \sum\limits_{n\in \mathbb{N}} n h_n \quad {\rm{for}} \quad h = \sum\limits_{n\in \mathbb{N}}h_n\in\prod\limits_{n\in\mathbb{N}} H_n. $$ We write $\epsilon$ for the augmentation map from $H$ to $H_0 = k \subset H$ and $H^+:=\bigoplus_{n=1}^{\infty} H_n$ for the augmentation ideal of $H$. The identity map of $H$ is denoted $id$. The product in $H$ is written $m_H$ and its action on elements simply by concatenation. The coproduct is written $\Delta$; we use Sweedler's notation and write $h^{(1)}\otimes h^{(2)}$ or $\sum_{j = 0}^n h_j^{(1)}\otimes h_{n-j}^{(2)}$ for $\Delta (h) \in \bigoplus_{j=0}^{n}H_j\otimes H_{n-j}$, $h\in H_n$. 
The space of $k$-linear maps from $H$ to $k$, $\Lin(H,k):=\prod_{n\in\mathbb{N}} \Lin(H_n,k)$, is naturally endowed with an associative unital algebra structure by the convolution product: \allowdisplaybreaks{ \begin{equation*} f\ast g := m_k\circ(f\otimes g)\circ\Delta : \qquad H \xrightarrow{\Delta} H \otimes H \xrightarrow{f \otimes g} k \otimes k \xrightarrow{m_{k}} k. \end{equation*}} The unit for the convolution product is precisely~the counit~$\epsilon : H \to k$. Recall that a character is a linear map $\gamma$ of unital algebras from $H$ to the base field $k$: $$ \gamma (hh') = \gamma(h)\gamma(h'). $$ The group of characters is denoted by $G$. With $\pi_n$, $n \in \mathbb{N}$, denoting the projection from $H$ to $H_n$ we write $\gamma_{(n)}=\gamma \circ \pi_n$. An infinitesimal character is a linear map $\alpha$ from $H$ to $k$ such that: $$ \alpha (h h') = \alpha(h) \epsilon(h') + \epsilon (h) \alpha (h'). $$ As for characters, we write $\alpha(h) = \sum_{n \in\mathbb{N}} \alpha_{(n)} (h_n)$. We remark that, by the definitions of characters and infinitesimal characters, $\gamma_{(0)}(\un)=1$, that is, $\gamma_{(0)} = \epsilon$, whereas $\alpha_{(0)}(\un) = 0$. Recall that the graded vector space $\frak{g}$ of infinitesimal characters is a Lie subalgebra of $\Lin(H,k)$ for the Lie bracket induced on the latter by the convolution product. Let $A$ be a commutative $k$-algebra, with unit~$1_A=\eta_A(1)$, $\eta_A : k \to A$ and with product~$m_A$, which we sometimes denote by a dot, i.e. $m_A(u\otimes v)=:u\cdot v$ or simply by concatenation. The main examples we have in mind are $A=\CC,\,A=\CC[\varepsilon^{-1},\varepsilon]]$ and~$A=H$. We now extend the definition of characters and call an ($A$-valued) character of $H$ any algebra map from~$H$ to~$A$. In particular $H$-valued characters are simply algebra endomorphisms of $H$. 
We extend as well the notion of infinitesimal characters to maps from~$H$ to the commutative $k$-algebra $A$, that is: $$ \alpha(hh') = \alpha(h) \cdot e(h') + e(h) \cdot \alpha (h'), $$ where $e:=\eta_A\circ\epsilon$ is now the unit in the convolution algebra $\Lin(H,A)$. Observe that infinitesimal characters can be alternatively defined as $k$-linear maps from $H$ to $A$ with $\alpha \circ \pi_0=0$ that vanish on the square of the augmentation ideal of $H$. The group (Lie algebra) of $A$-valued characters (infinitesimal characters) is denoted $G(A)$ ($\frak{g}(A)$) or $G_H(A)$ when we want to emphasize the underlying Hopf algebra. \subsection{Birkhoff--Wiener--Hopf decomposition of $G(A)$} \label{ssect:BWH} In the introduction we already mentioned one of Connes--Kreimer's seminal insights into the algebro-combinatorial structure underlying the process of perturbative renormalization in QFT. In the context of DR+MS, they reformulated the BPHZ-method as a Birkhoff--Wiener--Hopf decomposition of regularized Feynman rules, where the latter are seen as an element in the group $G(A)$. A pivotal role in this approach is played by a Rota--Baxter algebra structure on the target algebra $A=\CC[\varepsilon^{-1},\varepsilon]]$. In general, let us assume that the commutative algebra~$A = A_+ \oplus A_-$ splits directly into the subalgebras $A_\pm=T_\pm(A)$ with $1_A \in A_+$, defined in terms of the projectors $T_-$ and $T_+:=id-T_-$. The pair $(A,T_-)$ is a special case of a (weight one) Rota--Baxter algebra \cite{EFGP2} since $T_-$, and similarly $T_+$, satisfies the (weight one RB) relation: \begin{equation} T_-(x)\cdot T_-(y) + T_-(x\cdot y) = T_-\bigl(T_-(x)\cdot y + x\cdot T_-(y)\bigr), \qquad x,y \in A. \label{eq:RBR} \end{equation} One easily shows that $\Lin(H,A)$ with an idempotent operator $\mathcal{T}_-$ defined by $\mathcal{T}_-(f)=T_- \circ f$, for $f\in \Lin(H,A)$, is a (in general non-commutative) unital Rota--Baxter algebra (of weight one). 
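For readers who wish to experiment, the minimal subtraction projector on Laurent polynomials in $\varepsilon$ provides a concrete instance of \eqref{eq:RBR}. The following Python sketch (not part of the paper; the coefficients are arbitrary test data, and a Laurent polynomial is modelled as a dictionary mapping powers to coefficients) verifies the weight one Rota--Baxter relation on an example:

```python
# Laurent polynomials in eps as dicts {power: coefficient}; T_minus keeps
# the pole part (strictly negative powers), mimicking minimal subtraction.
def mul(x, y):
    out = {}
    for p, a in x.items():
        for q, b in y.items():
            out[p + q] = out.get(p + q, 0) + a * b
    return out

def add(x, y):
    out = dict(x)
    for p, a in y.items():
        out[p] = out.get(p, 0) + a
    return {p: a for p, a in out.items() if a != 0}

def T_minus(x):
    return {p: a for p, a in x.items() if p < 0}

x = {-2: 3, -1: 1, 0: 4, 2: 5}   # arbitrary test elements
y = {-1: 2, 0: 7, 1: 1}

# weight-one Rota-Baxter relation:
# T_-(x)T_-(y) + T_-(xy) == T_-(T_-(x)y + x T_-(y))
lhs = add(mul(T_minus(x), T_minus(y)), T_minus(mul(x, y)))
rhs = T_minus(add(mul(T_minus(x), y), mul(x, T_minus(y))))
print(lhs == rhs)
```

The check works because the pole parts and the power-series parts each form a subalgebra, which is exactly the splitting $A = A_+\oplus A_-$ assumed in the text.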
The Rota--Baxter property~\eqref{eq:RBR} implies that $G(A)$ decomposes, as a set, as the product of two subgroups: $$ G(A) = G_-(A)\ast G_+(A), \quad{\rm{where}}\quad G_\pm(A) = \exp^*(\mathcal{T}_\pm(\frak{g}(A))). $$ \begin{cor} \cite{CKII,EFGP} \label{cor:ck-Birkhoff} For any $\gamma \in G(A)$ the unique characters $\gamma_+\in G_{+}(A)$ and $\gamma_-^{-1}\in G_{-}(A)$ in the decomposition of $G(A) = G_-(A)\ast G_+(A)$ solve the equations: \begin{equation} \gamma_{\pm} = e \pm \mathcal{T}_{\pm}(\gamma_{-} \ast (\gamma - e)). \label{eq:BogoliubovFormulae} \end{equation} That is, we have Connes--Kreimer's Birkhoff--Wiener--Hopf decomposition: \begin{equation} \gamma = \gamma_-^{-1} \ast \gamma_+. \label{eq:BCHbirkhoff} \end{equation} \end{cor} Note that this corollary is true if and only if the operator $T_-$ on $A$ is of Rota--Baxter type. That is, uniqueness of the decomposition follows from the idempotence of the map $T_-$. In fact, in the sequel we will show that this result is a special case of a more general decomposition of characters. \subsection{The Fa\`a di Bruno Hopf algebra and a key lemma} \label{ssect:FdBlemma} Another example of combinatorial Hopf algebra, i.e. a graded, connected, commutative bialgebra with a basis indexed by combinatorial objects, which will turn out to be crucially important in the sequel, is the famous Fa\`a di Bruno Hopf algebra $F$; for details see e.g.~\cite{BFFK,FGB,JR}. Recall that for series, say of a real variable $x$: $$ f(x) = \sum_{n=0}^\infty a_n(f)\,x^{n+1}, \quad h(x) = \sum_{n=0}^\infty a_n(h)\,x^{n+1}, \ {\rm{with}}\ a_0(f) = a_0(h) = 1, $$ the composition is given by: $$ f\bigl(h(x)\bigr) = \sum_{n=0}^\infty a_n\bigl(f\circ h\bigr)\,x^{n+1}=\sum\limits_{n=0}^\infty a_n(f)(h(x))^{n+1}. $$ It defines the group structure on: $$ G_F:=\Bigl\{f(x) = \sum_{n=0}^\infty a_n(f)\,x^{n+1} \ | \ a_n(f) \in \mathbb{C}, a_0(f) = 1\Bigr\}. 
$$ One may interpret the functions $a_n$ as normalized derivatives evaluated at $x = 0$: $$ a_n(f) =\frac{1}{(n+1)!}\frac{d^{n+1}f}{dx^{n+1}}(0). $$ The coefficients $a_n\bigl(f \circ h\bigr)$ are given by: $$ a_n(f \circ h) = \sum_{k=0}^n a_k(f) \sum_{l_0 + \cdots + l_k=n-k \atop l_i \ge 0, i=0,\ldots,k} a_{l_0}(h)\cdots a_{l_k}(h). $$ For instance, with an obvious notation, the coefficient of~$x^4$ in the composed series is given by $$ f_3 + 3f_2h_1 + f_1(h_1^2 + 2h_2) + h_3. $$ The action of these coefficient functions on the elements of the group $G_F$ induces a pairing: $$ \langle a_n , f \rangle := a_n(f). $$ The group structure on $G_F$ allows one to define the structure of a commutative Hopf algebra on the polynomial ring spanned by the $a_n$, denoted by $F$, with coproduct: $$ \Delta_F(a_n) = \sum_{k=0}^n \sum_{l_0 + \cdots + l_k=n-k \atop l_i \ge 0, i=0,\ldots,k} a_{l_0} \cdots a_{l_k} \otimes a_k . $$ Notice that, using the pairing, an element $f$ of $G_F$ can be viewed as the $\RR[x]$-valued character $\hat{f}$ on $F$ characterized by: $\hat{f}(a_n):=a_n(f)x^n.$ The composition of formal power series then translates into the convolution product of characters: $\widehat{f(h)}=\hat{h} * \hat{f}$. Let us condense this into what we call the Fa\`a di Bruno formula, that is, define $a:=\sum_{n \ge 0} a_n$. Then $a$ satisfies: \begin{equation} \label{FdBformula} \Delta_F(a) = \sum_{n \ge 0} a^{n+1} \otimes a_n. \end{equation} Note that subindices indicate the graduation degree. We now prove a technical lemma, important in view of applications to perturbative renormalization. As we will see further below, it allows us to translate the Dyson formulas for renormalized and bare 1PI Green's functions into the language of Hopf algebras. \begin{lem} \label{lem:FaBlemma} Let $H=\prod_{n \geq 0} H_n$ be a complete graded commutative Hopf algebra, which is an algebra of formal power series containing the free variables $f_1,\ldots,f_n,\ldots$. 
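The displayed coefficient of $x^4$ can be checked mechanically. The following Python sketch (not part of the paper; the truncation order and coefficient values are arbitrary test choices) composes two series in the convention $f(x)=\sum_{n\ge 0}a_n(f)\,x^{n+1}$ and compares the coefficient of $x^4$ of $f\circ h$ with $f_3 + 3f_2h_1 + f_1(h_1^2+2h_2) + h_3$:

```python
N = 6  # truncation: keep coefficients of x^1 .. x^N

def compose(f, h):
    # f, h are lists [a_0, a_1, ...] with a_0 = 1, encoding sum a_n x^{n+1};
    # returns the plain coefficient list of f(h(x)) up to degree N.
    hx = [0.0] * (N + 1)
    for n, b in enumerate(h):
        if n + 1 <= N:
            hx[n + 1] = b           # h(x) as a polynomial in x
    power = hx[:]                   # running truncated power h(x)^{n+1}
    out = [0.0] * (N + 1)
    for n, a in enumerate(f):
        for k in range(N + 1):
            out[k] += a * power[k]  # add a_n(f) * h(x)^{n+1}
        nxt = [0.0] * (N + 1)       # next truncated power of h(x)
        for i in range(N + 1):
            for j in range(N + 1 - i):
                nxt[i + j] += power[i] * hx[j]
        power = nxt
    return out

f = [1.0, 2.0, 3.0, 5.0, 0.0, 0.0]    # a_0..a_5, arbitrary test values
h = [1.0, 7.0, 11.0, 13.0, 0.0, 0.0]
coeff_x4 = compose(f, h)[4]
f1, f2, f3 = f[1], f[2], f[3]
h1, h2, h3 = h[1], h[2], h[3]
formula = f3 + 3 * f2 * h1 + f1 * (h1 ** 2 + 2 * h2) + h3
print(coeff_x4, formula)
```

The agreement of the two numbers for arbitrary coefficients is exactly the content of the closed formula for $a_n(f\circ h)$ in the case $n=3$.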
We assume that $f_i$ has degree $i$ and write $f = \un + \sum_{k>0} f_k$ (so that, in particular, $f$ is invertible). If $\Delta(f)=\sum_n f \alpha^n\otimes f_n$, where $\alpha=\sum_{n \ge 0} \alpha_n$ and the $\alpha_n$, $n>0$, are algebraically independent as well as algebraically independent from the $f_i$, then $\alpha$ satisfies the Fa\`a di Bruno formula: $$ \Delta(\alpha)=\sum\limits_{n \ge 0} \alpha^{n+1}\otimes \alpha_n. $$ \end{lem} \begin{proof} Indeed, let us make explicit the coassociativity of the coproduct, $(\Delta \otimes id ) \circ \Delta=(id \otimes\Delta )\circ \Delta$. First: $$ \sum\limits_{n \ge 0} \Delta(f\alpha^n) \otimes f_n = \sum\limits_{n \ge 0} f\alpha^n \otimes \Delta(f_n) = \sum\limits_{n\geq 0,\, p \leq n}f\alpha^n\otimes (f\alpha^{n-p})_p \otimes f_{n-p}. $$ Now we look at the component of this identity that lies in the subspace $H \otimes H \otimes H_1$ and get: $$ \Delta(f\alpha)=\sum\limits_{n \ge 0}f\alpha^n \otimes (f\alpha)_{n-1}, $$ that is: $$ \sum\limits_{n \ge 0} f\alpha^n\alpha_{(1)}\otimes f_n\alpha_{(2)} =\sum\limits_{n\geq 0,\, p<n}f\alpha^n\otimes f_p\alpha_{n-p-1}. $$ Since $f$ is invertible: $$ \sum\limits_{n \ge 0}\alpha^n\alpha_{(1)}\otimes f_n\alpha_{(2)} =\sum\limits_{n\geq 0,\, p<n}\alpha^n\otimes f_p\alpha_{n-p-1}. $$ From the assumption of algebraic independence among the $\alpha_i$ and $f_j$, we get, looking at the component associated to $f_0=1$ on the right-hand side of the above tensor product: $$ \Delta(\alpha) = \alpha_{(1)}\otimes \alpha_{(2)} = \sum\limits_{n \ge 0}\alpha^{n+1}\otimes \alpha_{n}. $$ \end{proof} \begin{cor} Under the hypotheses of the Lemma, the map $\chi$ from $F$ to $H$, $a_n\longmapsto \alpha_n$, is a Hopf algebra map. 
In particular, if $f$ and $g$ are in $G_H(\RR)$, $f\circ\chi$ and $g\circ\chi$ belong to $G_F(\RR)$ and: $$ \sum\limits_{n \ge 0} g\ast f(\alpha_n)x^{n+1} = \sum\limits_{n \ge 0} (g\circ\chi) \ast (f\circ\chi) (a_n)x^{n+1} = f\circ\chi(g\circ\chi), $$ where in the last equality we used the identification of $G_F(\RR)$ with $x+x\RR[x]$ to view $f\circ\chi$ and $g\circ \chi$ as formal power series. \end{cor} In other words, properties of $H$ can be translated into the language of formal power series and their compositions. \section{The exponential method} \label{sect:Exp} Let $H=\bigoplus_{n \geq 0} H_n$ be an arbitrary graded connected commutative Hopf algebra and $A$ a commutative $k$-algebra with unit $1_A=\eta_A(1)$. Recall that $\pi_n$ stands for the projection on $H_n$ orthogonally to the other graded components of $H$. As before, the group of characters with image in $A$ is denoted by $G(A)$, with unit $e:=\eta_A \circ \epsilon$. We assume in this section that the target algebra $A$ contains a subalgebra $A_+$, and that there is a linear projection map $P_+$ from $A$ onto $A_+$. We write $P_-:=id - P_+$. The purpose of the present section is to construct a map from $G(A)$ to $G(A_+)$. In the particular case of a multiplicatively renormalizable perturbative QFT, where $H$ is a Hopf algebra of Feynman diagrams and $A$ the target algebra of regularized Feynman rules, this map should send the corresponding Feynman rule character $\psi \in G(A)$ to a renormalized, but still regularized, Feynman rule character $R$. The assumption that $A_+ \subset A$ is a subalgebra implies that $G(A_+)$ is a subgroup. This reflects the natural assumption, motivated by physics, that the resulting (renormalized) character $R \in G(A_+)$ maps products of graphs into $A_+$, i.e. $R(\Gamma_1\Gamma_2)=R(\Gamma_1)R(\Gamma_2) \in A_+$. Equivalently, products of finite regularized amplitudes are still finite. 
In the case where the target algebra has the Rota--Baxter property, the map from $G(A)$ to $G(A_+)$ should be induced by the BWH decomposition of characters. \subsection{An algorithm for constructing regular characters} \label{ssect:algorithm} We first introduce the notion of {$n$-regular} characters. Later we identify them with characters renormalized up to degree $n$. \begin{defn} \label{def:regular} A character $\varphi \in G(A)$ is said to be regular up to order $n$, or $n$-regular, if ${P_+} \circ \varphi_{(l)} = \varphi_{(l)}$ for all $l \leq n$. A character is called regular if it is $n$-regular for all $n$. \end{defn} In the next proposition we outline an iterative method to construct a regular character in $G(A_+)$ starting with an arbitrary one in $G(A)$. The iteration proceeds in terms of the grading of $H$. \begin{prop} \label{prop:Exp1} Let $\varphi \in G(A)$ be regular up to order $n$. Define $\mu^{\varphi}_{n+1}$ to be the linear map which is zero on $H_i$ for $i \not= n+1$ and: $$ \mu^{\varphi}_{n+1}:=P_- \circ \varphi \circ \pi_{n+1}=P_- \circ \varphi_{(n+1)}. $$ Then \begin{enumerate} \item $\mu^{\varphi}_{n+1}$ is an infinitesimal character. \item The convolution exponential $\Upsilon^-_{n+1} := \exp^*{(-\mu^{\varphi}_{n+1})}$ is therefore a character. \item The product $\varphi_{n+1}^+:=\Upsilon^-_{n+1} \ast \varphi$ is a regular character up to order $n+1$. \end{enumerate} \end{prop} Note that we use the same notation for the projectors $P_\pm$ on $A$ and the ones defined on $\Lin(H,A)$. \begin{remark} Let us emphasize two crucial points. First, note the algebraic naturalness of the assumption that $A_+$ is a subalgebra. Indeed, it allows for a simple construction of infinitesimal characters from characters in terms of the projector $P_-$. Second, at each order in the presented process we stay strictly inside the group $G(A)$. This property does not hold for other recursive renormalization algorithms.
For example, in the BPHZ case, the recursion takes place in the larger algebra $Lin(H,A)$, see e.g. \cite{EFMP}. \end{remark} \begin{proof} Let us start by showing that $\mu^{\varphi}_{n+1}$ is an infinitesimal character. That is, its value is zero on any nontrivial product of elements in $H$. In fact, for $y=xz \in H_{n+1}$, $x,z\notin H_0$, $$ \mu^{\varphi}_{n+1} (y) = P_-(\varphi (y)) = P_-(\varphi (x)\varphi(z)) = P_-({P_+}\varphi (x)\, {P_+}\varphi (z)), $$ since $\varphi$ is $n$-regular by assumption. This implies that $\mu^{\varphi}_{n+1}(y)=P_-\circ P_+(P_+\varphi (x)P_+\varphi(z))=0$ as the image of $P_+$ is a subalgebra in $A$. The second assertion is true for any infinitesimal character, see e.g. \cite{EFGP}. The third one follows from the following observations: \begin{itemize} \item For degree reasons (since $\mu^{\varphi}_{n+1}= 0$ on $H_k, \ k\leq n$), $\varphi_{n+1}^+=\exp^*(-\mu^{\varphi}_{n+1}) \ast \varphi = \varphi $ on $H_k, \ k\leq n$, so that $\exp^*(-\mu^{\varphi}_{n+1})\ast \varphi$ is regular up to order $n$. \item In degree $n+1$: let $y \in H_{n+1}$. With a Sweedler-type notation for the reduced coproduct $\Delta (y) - y \otimes 1-1\otimes y = y'_{(1)}\otimes y'_{(2)}$, we get: \allowdisplaybreaks{ \begin{eqnarray} \exp^*(-\mu^{\varphi}_{n+1})\ast \varphi (y)&=& \exp^*(-\mu^{\varphi}_{n+1})(y)+\varphi(y) + \exp^*(-\mu^{\varphi}_{n+1})(y'_{(1)})\varphi(y'_{(2)}) \nonumber\\ &=&-\mu^{\varphi}_{n+1}(y)+\varphi (y) = P_+\varphi(y), \label{almostreg} \end{eqnarray}} which follows from $\exp^*(-\mu^{\varphi}_{n+1})$ being zero on $H_i$, $1 \leq i \leq n$ and $\exp^*(-\mu^{\varphi}_{n+1})=-\mu^{\varphi}_{n+1}$ on $H_{n+1}$. This implies immediately: \allowdisplaybreaks{ \begin{eqnarray*} P_+((\exp^*(-\mu^{\varphi}_{n+1})\ast \varphi )(y))&=& P_+(P_+\varphi(y))\\ &=& P_+\varphi(y) = (\exp^*(-\mu^{\varphi}_{n+1})\ast \varphi)(y). \end{eqnarray*}} \end{itemize} \end{proof} Note the following particular fact.
When iterating the above construction of regular characters, say, by going from an $(n-1)$-regular character $\varphi_{n-1}^+$ to the $n$-regular character $\varphi_{n}^+$, the $(n-1)$-regular character is by construction {\it{almost regular}} at order $n$. By this we mean that $\varphi_{n}^+(H_n)$ is given by applying $P_+$ to $\varphi_{n-1}^+(H_n)$, see (\ref{almostreg}). This amounts to a simple subtraction, i.e. for $y \in H_n$: $$ \varphi_{n}^+(y) = P_+(\varphi_{n-1}^+(y)) = \varphi_{n-1}^+(y) - P_-(\varphi_{n-1}^+(y)). $$ Observe that by construction for $y \in H_{n}$: \begin{equation} \label{preparation} \varphi_{n-1}^+(y)=\varphi_{n-2}^+(y) - P_-(\varphi_{n-2}^+(y^{(1)}_{n-1}))\varphi_{n-2}^+(y^{(2)}_1), \end{equation} where the reader should recall the notation $\Delta(y)=\sum_{i=0}^n y_i^{(1)}\otimes y_{n-i}^{(2)}$ making the grading explicit in the coproduct. Further below we will interpret these results in the context of perturbative renormalization of Feynman graphs: for example, when $\Gamma$ is a UV divergent 1PI diagram of loop order $n$, the order one graph $\Gamma^{(2)}_1$ on the right-hand side of the formula consists of the unique one loop primitive cograph. That is, $\Gamma^{(2)}_1$ follows from $\Gamma$ with all its 1PI UV divergent subgraphs reduced to points. In the literature this is denoted as $res(\Gamma)=\Gamma^{(2)}_1$. The following proposition captures the basic construction of a character regular to all orders from an arbitrary character. We call it the exponential method. \begin{prop} (Exponential method) \label{prop:Exp2} We consider the recursion: $\Upsilon_{0}^- := e$, $\varphi_{0}^+:=\varphi$, and: $$ \varphi^+_{n+1}:= \Upsilon^-_{n+1} * \varphi_{n}^+, $$ where $ \Upsilon_{n+1}^- :=\exp^*(- P_- \circ \varphi_{n}^+ \circ \pi_{n+1})$. Then, we have that $\varphi^+:=\varphi^+_{\infty} := \lim\limits_\rightarrow\varphi^+_{n}$ is regular to all orders.
Moreover, $\Upsilon_{\infty}^- \ast \varphi= \varphi^+$, where: $$ \Upsilon_{\infty}^-:=\lim\limits_\rightarrow \Upsilon(n) $$ and: $$ \Upsilon(n):= \Upsilon^-_{n} \ast \cdots \ast \Upsilon_{1}^-. $$ \end{prop} \begin{remark} In light of the application of the exponential method to perturbative renormalization in QFT, we introduce some useful terminology. We call $\Upsilon_{l}^- :=\exp^*(- P_- \circ \varphi_{l-1}^+ \circ \pi_{l})$ the counterfactor of order $l$ and the product $\Upsilon(n):= \Upsilon^-_{n} \ast \cdots \ast \Upsilon_{1}^- = \Upsilon^-_{n} \ast \Upsilon(n-1)$ the counterterm of order $n$. \end{remark} \subsection{On the construction of bare coupling constants} The following two propositions will be of interest in the sequel when we dwell on the physical interpretation of the exponential method. Let $A$ be as in Proposition \ref{prop:Exp1}. We introduce a formal parameter $g$ commuting with all elements in $A$ and extend $A$ to the filtered complete algebra $A[[g]]$ (think of $g$ as the renormalized, i.e. finite, coupling constant of a QFT). The character $\varphi \in G(A)$ is extended to $\tilde\varphi \in G(A[[g]])$ so as to map $f = \un + \sum_{k>0} f_k \in H$ to: $$ \tilde\varphi(f)(g) = 1 + \sum_{k>0} \varphi(f_k)g^k \in A[[g]]. $$ Notice that we emphasize the functional dependency of $\tilde\varphi(f)$ on $g$ for reasons that will become clear in our forthcoming developments. Recall Lemma \ref{lem:FaBlemma}. We assume that $\Delta(f)=\sum_{n \ge 0} f \alpha^n \otimes f_n$ where $\alpha=\sum_{n \ge 0} \alpha_n \in H$ satisfies the Fa\`a di Bruno formula: $$ \Delta(\alpha)=\sum\limits_{n \ge 0} \alpha^{n+1} \otimes \alpha_n. $$ Let $\tilde\varphi^+_{n+1}:= \tilde\Upsilon^-_{n+1} * \tilde\varphi_{n}^+ =\tilde\Upsilon(n+1) * \tilde\varphi \in G(A[[g]])$ be the $(n+1)$-regular character constructed via the exponential method from $\tilde\varphi \in G(A[[g]])$.
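Before turning to bare coupling constants, let us illustrate the recursion of Proposition \ref{prop:Exp2} on a toy model. The following Python sketch is entirely our own illustrative setup, not an object from the text: it takes for $H$ the polynomial Hopf algebra on primitive generators $x_1,\dots,x_N$ with $\deg x_i = i$, for $A$ the Laurent polynomials in $\varepsilon$, and for $P_-$ the projection onto the pole part; the toy Feynman-rules-like character assigns a simple pole to each generator. Since this $P_-$ is Rota--Baxter, the sketch also compares the accumulated counterterm with Bogoliubov's recursion (cf. Proposition \ref{prop:ZassenBogo}).

```python
from fractions import Fraction
from itertools import product as iproduct
from math import comb, factorial

N = 4  # truncation degree of the toy Hopf algebra

# Target algebra A: Laurent polynomials in eps, stored as {power: Fraction}.
def aadd(a, b):
    out = dict(a)
    for p, c in b.items():
        out[p] = out.get(p, Fraction(0)) + c
    return {p: c for p, c in out.items() if c != 0}

def amul(a, b):
    out = {}
    for p, x in a.items():
        for q, y in b.items():
            out[p + q] = out.get(p + q, Fraction(0)) + x * y
    return {p: c for p, c in out.items() if c != 0}

def ascale(a, s):
    return {p: c * s for p, c in a.items() if c * s != 0}

def P_minus(a):  # projection onto the pole part; a Rota-Baxter projector here
    return {p: c for p, c in a.items() if p < 0}

# Toy Hopf algebra H: polynomials in primitive generators x_1..x_N, deg x_i = i.
# A monomial is an exponent tuple (e_1, ..., e_N); UNIT is the empty product.
def deg(m):
    return sum((i + 1) * e for i, e in enumerate(m))

MONOS = [m for m in iproduct(*(range(N + 1),) * N) if deg(m) <= N]
UNIT = (0,) * N
E = {m: ({0: Fraction(1)} if m == UNIT else {}) for m in MONOS}  # counit e

def splits(m):  # Delta(m) = sum coeff * m1 (x) m2, the generators being primitive
    parts = [[(k, e - k, comb(e, k)) for k in range(e + 1)] for e in m]
    for choice in iproduct(*parts):
        coeff = 1
        for c in choice:
            coeff *= c[2]
        yield tuple(c[0] for c in choice), tuple(c[1] for c in choice), coeff

def conv(f, g):  # convolution product on Lin(H, A)
    out = {}
    for m in MONOS:
        acc = {}
        for m1, m2, c in splits(m):
            acc = aadd(acc, ascale(amul(f[m1], g[m2]), c))
        out[m] = acc
    return out

def conv_exp(mu):  # exp^*(mu); the series terminates on the truncated H
    out, term = dict(E), dict(E)
    for j in range(1, N + 1):
        term = conv(term, mu)
        out = {m: aadd(out[m], ascale(term[m], Fraction(1, factorial(j))))
               for m in MONOS}
    return out

# A multiplicative toy character with a pole on each generator: phi(x_i) = 1/eps + i.
vals = [{-1: Fraction(1), 0: Fraction(i + 1)} for i in range(N)]
phi = {}
for m in MONOS:
    a = {0: Fraction(1)}
    for i, e in enumerate(m):
        for _ in range(e):
            a = amul(a, vals[i])
    phi[m] = a

# Exponential method: multiply by a counterfactor in each degree.
phi_plus, counterterm = dict(phi), dict(E)
for n in range(1, N + 1):
    mu = {m: (P_minus(phi_plus[m]) if deg(m) == n else {}) for m in MONOS}
    upsilon = conv_exp({m: ascale(mu[m], Fraction(-1)) for m in MONOS})
    phi_plus = conv(upsilon, phi_plus)
    counterterm = conv(upsilon, counterterm)

# Bogoliubov's recursion psi_- = e - P_-(psi_- * (phi - e)), degree by degree.
psi_minus = dict(E)
phi_minus_e = {m: aadd(phi[m], ascale(E[m], Fraction(-1))) for m in MONOS}
for n in range(1, N + 1):
    b = conv(psi_minus, phi_minus_e)
    for m in MONOS:
        if deg(m) == n:
            psi_minus[m] = P_minus(ascale(b[m], Fraction(-1)))
```

On this toy, the limit character $\varphi^+$ is pole-free on all monomials of degree at most $N$, and the accumulated counterterm coincides with the Bogoliubov counterterm $\psi_-$, as the Rota--Baxter setting predicts.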
Now we define for each counterfactor $\tilde\Upsilon^-_{l}$, $l\ge 0$, a formal power series in $g$, which we call the order $l$ bare coupling constant: $$ g_{(l)}(g) := \tilde\Upsilon^-_{l}(g\alpha)=g+\sum_{n > 0} a^{(l)}_n g^{n+1}, $$ where $a^{(l)}_n:=\Upsilon^-_{l}( \alpha_n)$. Observe that $\tilde\Upsilon^-_{0}(g\alpha)=g\varepsilon(\un)=g$ and by construction $a^{(l)}_n=0$ for $n<l$. \begin{prop} (Exponential counterterm and composition) \label{prop:ExpFaa1} With the aforementioned assumptions, we find that applying the order $n$ counterterm $\tilde\Upsilon(n)$ to the series $g\alpha \in H$ equals the $n$-fold composition of the bare coupling constants $g_{(1)}(g), \cdots, g_{(n)}(g)$: \allowdisplaybreaks{ \begin{eqnarray*} \tilde\Upsilon(n)(g\alpha)(g)&=& \tilde\Upsilon^-_{n} \ast \cdots \ast \tilde\Upsilon_{1}^-(g\alpha)(g)\\ &=&g_{(1)}\circ \cdots \circ g_{(n)}(g). \end{eqnarray*}} \end{prop} \begin{proof} The proof follows by induction together with the Fa\`a di Bruno formula. \allowdisplaybreaks{ \begin{eqnarray*} \tilde\Upsilon(2)(g\alpha)(g)&=& \tilde\Upsilon^-_{2} \ast \tilde\Upsilon_{1}^-(g\alpha)(g)\\ &=&\sum\limits_{n \ge 0} (\tilde\Upsilon^-_{2}(\alpha)(g))^{n+1}a^{(1)}_ng^{n+1}\\ &=&\sum\limits_{n \ge 0} a^{(1)}_n (\tilde\Upsilon^-_{2}(g\alpha)(g))^{n+1} = g_{(1)}\circ g_{(2)}(g). \end{eqnarray*}} Similarly: \allowdisplaybreaks{ \begin{eqnarray*} \tilde\Upsilon(m)(g\alpha)(g) &=& \tilde\Upsilon^-_{m} \ast \cdots \ast \tilde\Upsilon_{2}^- \ast \tilde\Upsilon_{1}^-(g\alpha)(g)\\ &=&\sum\limits_{n \ge 0} (\tilde\Upsilon^-_{m} \ast \cdots \ast \tilde\Upsilon_{2}^- (\alpha)(g))^{n+1} a^{(1)}_ng^{n+1}\\ &=&\sum\limits_{n \ge 0} a_n^{(1)} \big((g_{(2)}\circ \cdots \circ g_{(m)})(g)\big)^{n+1} = g_{(1)}\circ (g_{(2)}\circ \cdots \circ g_{(m)})(g).
\end{eqnarray*}} \end{proof} \begin{prop} (Exponential method and composition) \label{prop:ExpFaa2} With the assumptions of the foregoing proposition we find that: \allowdisplaybreaks{ \begin{eqnarray*} \tilde\varphi^+_n(f)(g) &=& \tilde\Upsilon(n)(f)(g)\cdot \tilde\varphi(f)\circ g_{(1)}\circ \cdots \circ g_{(n)} (g). \end{eqnarray*}} \end{prop} \begin{proof} The proof follows from the coproduct $\Delta(f)=\sum_n f \alpha^n\otimes f_n$ by a simple calculation. \allowdisplaybreaks{ \begin{eqnarray*} \tilde\varphi^+_n(f)(g) &=& (\tilde\Upsilon(n) * \tilde\varphi)(f)(g)\\ &=& \sum_{m\ge 0} \tilde\Upsilon(n)(f \alpha^m)(g) \varphi(f_m)g^m\\ &=& \tilde\Upsilon(n)(f)(g) \sum_{m\ge 0}\varphi(f_m)(\tilde\Upsilon(n)(g\alpha)(g))^m \end{eqnarray*}} from which we derive the above formula using Proposition \ref{prop:ExpFaa1}. \end{proof} The reader may recognize a familiar structure in this formula: it is an elaboration on the Dyson formula; we shall return to this point later on. \subsection{The BWH-decomposition as a special case} \label{ssect:BWHexp} The decomposition $\Upsilon^-_{\infty}\ast\varphi= \varphi^+$ in Proposition~\ref{prop:Exp2} may be interpreted as a generalized BWH decomposition. Indeed, under the Rota--Baxter assumption, that is, if $P_-$ is a proper idempotent Rota--Baxter map (i.e. if the image of $P_-$ is a subalgebra, denoted $A_-$), $G(A)=G(A_-)\ast G(A_+)$ and the decomposition of a character $\varphi$ into the convolution product of an element in $G(A_-)$ and in $G(A_+)$ is necessarily unique (see \cite{EFGP2,EFMP} to which we refer for details on the Bogoliubov recursion in the context of Rota--Baxter algebras). In particular, $\Upsilon_\infty^-$ identifies with the counterterm $\varphi_-$ of the BWH decomposition. Let us briefly detail this link with the BPHZ method under the Rota--Baxter assumption for the projection maps $P_-$ and $P_+$.
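Since the Rota--Baxter property of $P_-$ is what drives the BWH factorization, it may be worth recording the identity concretely: for the idempotent projection onto the pole part of Laurent series, the weight $-1$ Rota--Baxter relation $P_-(a)P_-(b)=P_-\big(P_-(a)b+aP_-(b)-ab\big)$ holds because the pole parts and the pole-free parts both form subalgebras. The following sketch, using our own toy encoding of Laurent polynomials rather than anything from the text, verifies this exhaustively on a small family of examples.

```python
from fractions import Fraction
from itertools import product

# Laurent polynomials in eps as {power: Fraction}; P_- keeps the pole part.
def amul(a, b):
    out = {}
    for p, x in a.items():
        for q, y in b.items():
            out[p + q] = out.get(p + q, Fraction(0)) + x * y
    return {k: v for k, v in out.items() if v != 0}

def aadd(*terms):
    out = {}
    for t in terms:
        for p, c in t.items():
            out[p] = out.get(p, Fraction(0)) + c
    return {k: v for k, v in out.items() if v != 0}

def neg(a):
    return {p: -c for p, c in a.items()}

def P_minus(a):
    return {p: c for p, c in a.items() if p < 0}

def rb_defect(a, b):
    # P_-(a)P_-(b) - P_-(P_-(a)b + aP_-(b) - ab); this vanishes exactly when
    # the weight -1 Rota-Baxter identity holds for the pair (a, b).
    lhs = amul(P_minus(a), P_minus(b))
    rhs = P_minus(aadd(amul(P_minus(a), b), amul(a, P_minus(b)), neg(amul(a, b))))
    return aadd(lhs, neg(rhs))

# Exhaustive check on Laurent polynomials supported on eps^-2 .. eps^2 with
# coefficients drawn from a small set.
COEFFS = [Fraction(0), Fraction(1), Fraction(-3)]
SAMPLE = [dict(zip(range(-2, 3), c)) for c in product(COEFFS, repeat=5)]
defects = [rb_defect(a, b) for a in SAMPLE[:40] for b in SAMPLE[:40]]
```

The defect is zero on every tested pair, which is the splitting-into-subalgebras argument in computational form.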
Proposition \ref{prop:Exp2} in the foregoing subsection leads to the following important remark (that holds independently of the RB assumption). Observe that, by construction, for $y \in H_{k}$, $k<n+1$: $$ \Upsilon(n+1)(y)=\Upsilon(k)(y). $$ Using $\varphi^+_{k-1}=\Upsilon(k-1)*\varphi$ we see with $y \in H_{k}$ that: \allowdisplaybreaks{ \begin{eqnarray} \Upsilon(k)(y) &=& \Upsilon^-_{k} \ast \cdots \ast \Upsilon_{1}^-(y) \nonumber\\ &=& -P_-(\varphi^+_{k-1}(y)) + \Upsilon(k-1)(y) \nonumber\\ &=& -P_-(\varphi(y)) - P_-( \Upsilon(k-1)(y) )- P_-( \Upsilon(k-1)(y_{(1)}') \varphi(y_{(2)}') ) + \Upsilon(k-1)(y)\nonumber\\ &=& -P_-(\varphi(y) + \Upsilon(k-1)(y'_{(1)}) \varphi(y'_{(2)}) ) + P_+( \Upsilon(k-1)(y) ) \nonumber\\ &=& -P_-(\Upsilon(k-1)*(\varphi-e) (y))+ P_+( \Upsilon(k-1)(y) ).\label{BWHrb} \end{eqnarray}} Now, note that for all $n > 0$, the RB property implies that $\Upsilon(n)(y)$ is in $A_-$ for $y \in H^+$. Hence, returning to equation (\ref{BWHrb}), we see that $ P_+( \Upsilon(k-1)(y) ) = 0$. \begin{prop}\label{prop:ZassenBogo} For $n>0$ the characters $\varphi^+_n$ and $\Upsilon(n)$ restricted to $H^n:=\bigoplus_{i=0}^nH_i$ solve Bogoliubov's renormalization recursion. \end{prop} \begin{proof} Let $x \in H^n$. From our previous discussion: \allowdisplaybreaks{ \begin{eqnarray*} e(x) - P_- \circ (\Upsilon(n) * (\varphi - e))(x) &=& \Upsilon(n)(x). \end{eqnarray*}} Similarly: \allowdisplaybreaks{ \begin{eqnarray*} e(x) + P_+ \circ (\Upsilon(n) * (\varphi - e))(x) &=& e(x) + P_+ \circ (\Upsilon(n) * \varphi - \Upsilon(n))(x)\\ &=& e(x) + P_+ \circ (\varphi^+_n - \Upsilon(n))(x)\\ &=& \varphi^+_n(x). \end{eqnarray*}} When going to the last line we used $P_+ \circ P_-= P_- \circ P_+= 0$ as well as the Rota--Baxter property of $P_-$ and $P_+$.
This implies that, on $H^n$, $\varphi^+_n= e + P_+\circ(\Upsilon(n) * (\varphi - e))$ and $\Upsilon(n)=e - P_-\circ(\Upsilon(n) * (\varphi - e))$, which are Bogoliubov's renormalization equations for the counterterm and the renormalized character, respectively, see e.g. \cite{EFGP2,EFMP}. \end{proof} \subsection{On counterterms in the BWH decomposition} \label{ssect:Bogo} Let us briefly recall how these results translate into the language of renormalization in perturbative QFT. This section also introduces several notations that will be useful later on. The reader is referred to the textbooks \cite{Collins,IZ1980} and the articles \cite{CKII,CKIII} for more details. As often in the literature, the massless $\phi^4$ Lagrangian $L=L(\partial_\mu \phi,\phi, g)$ in four space-time dimensions shall serve as a paradigm: \begin{equation} \label{Lphi4} L := \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - \frac{g}{4!}\phi^4. \end{equation} This Lagrangian is certainly too simple to account for all the combinatorial subtleties of perturbative QFT, but its basic properties are quite enough for our present purpose. The quadratic part is called the free Lagrangian, denoted by $L_0$. The rest is called the interaction part, and is denoted by $L_i$. The parameter $g$ appearing in $L=L_0+L_i$ is the so-called renormalized, that is, finite coupling constant. Perturbation theory is most effectively expressed using Feynman graphs. Recall that from the above Lagrangian we can derive Feynman rules. Then any Feynman graph~$\Gamma$ corresponds by these Feynman rules to a Feynman amplitude. By $|\Gamma|$ we denote the number of loops in the diagram. Recall that in any given theory there exists a rigid relation between the number of loops and the number of vertices, for each given $m$-point function. In $\phi^4$ theory, for graphs associated to the $2$-point function the number of vertices equals the number of loops.
For graphs associated to the $4$-point function the number of vertices is equal to the number of loops plus one. A Feynman amplitude consists of the Feynman integral, i.e. a multiple $d(=4)$-dimensional momentum space integral: \begin{equation} \Gamma \mapsto \bigg[\int\prod_{l=1}^{|\Gamma|}\,d^dk_l \bigg]I_\Gamma(p,k), \label{eq:stone-of-contention} \end{equation} multiplied by a proper power of the coupling constant, i.e. $g^{|\Gamma|+1}$ for $4$-point graphs and $g^{|\Gamma|}$ for $2$-point graphs. Here, $k=(k_1,\ldots,k_{|\Gamma|})$ are the $|\Gamma|$ independent internal (loop) momenta, that is, each independent loop yields one integration, and $p=(p_1,\ldots,p_N)$, with $\sum_{k=1}^N p_k=0$, denotes the~$N$ external momenta. Feynman integrals are most often divergent and need to be properly regularized and renormalized to acquire physical meaning. A regularization method is a prescription that parameterizes the divergences appearing in Feynman amplitudes upon introducing non-physical parameters, denoted $\varepsilon$, thereby rendering them formally finite. Let us write $g\tilde\psi(\Gamma;\varepsilon)=g^{|\Gamma|+1}{\psi}(\Gamma;\varepsilon)$ for the regularized Feynman amplitude (for example in DR; the notation $\tilde\psi$ is introduced for later use). Of pivotal interest are Green's functions, in particular 1PI $n$-point (regularized) Green's functions, denoted $G^{(n)}(g,\varepsilon):=G^{(n)}(p_1,\ldots,p_n;g,\varepsilon)$. In the following we will ignore the external momenta and omit the regularization parameter. Recall that for the renormalization of the Lagrangian (\ref{Lphi4}), the $4$- and $2$-leg 1PI Feynman graphs, respectively the corresponding amplitudes, beyond tree level are of particular interest. As guiding examples we therefore use from now on the regularized momentum space 1PI $4$- and $2$-point Green's functions.
These are power series in the coupling $g$ with Feynman amplitudes as coefficients: $$ G^{(4)}(g)= \tilde\psi(gz_g) \quad {\rm{and}} \quad G^{(2)}(g)= \tilde\psi(z_\phi), $$ where $z_g$ and $z_\phi$ stand for the formal coupling constant $z$-factors in the corresponding Hopf algebra of Feynman graphs $H$: \begin{equation} \label{z} z_g = \un + \sum_{k>0} \Gamma^{(4)}_k \quad {\rm{and}} \quad z_\phi = \un - \sum_{k>0} \Gamma^{(2)}_k. \end{equation} Here, $\un$ is the empty graph in $H$ and: $$ \Gamma^{(4)}_k := \sum_{m=1}^{N^{(4)}_k} \frac{\Gamma^{(4)}_{k,m}} {sym(\Gamma^{(4)}_{k,m})} \quad {\rm{and}} \quad \Gamma^{(2)}_k :=\sum_{n=1}^{N^{(2)}_k} \frac{\Gamma^{(2)}_{k,n}}{sym(\Gamma^{(2)}_{k,n})} $$ denote the sums of the $N^{(4)}_k$ 1PI 4-point and $N^{(2)}_k$ 2-point graphs of loop order $k$, divided by their symmetry factors, respectively. To deal with the polynomial dependency of the Green's functions on the coupling constant $g$, we write: $$ G^{(4)}(g)=g + \sum\limits_{k=1}^{\infty} g^{k+1}G^{(4)}_k \quad {\rm{and}} \quad G^{(2)}(g)=1 - \sum\limits_{k=1}^{\infty} g^{k}G^{(2)}_k, $$ so that $G^{(r)}_k=\psi (\Gamma^{(r)}_k)$, for $r=2,4$. Hence, as perturbative 1PI Green's functions are power series with individual, UV divergent, 1PI Feynman amplitudes as coefficients, one way to render them finite is to renormalize graph by graph. This is the purpose of the Bogoliubov recursion, which, in the context of DR+MS, was nicely encoded in the group-theoretical language by Connes and Kreimer \cite{CKII}. Indeed, let $H$ be the graded connected commutative Hopf algebra of 1PI Feynman graphs associated to the Lagrangian (\ref{Lphi4}) and let us choose the RB algebra of Laurent series $A=\CC[\varepsilon^{-1},\varepsilon]]$ as a target algebra for the regularized amplitudes (the natural choice in DR). Then, the correspondence $\Gamma\mapsto\tilde\psi(\Gamma;\varepsilon)$ extends uniquely to a character on $H$.
That is, the regularized Feynman rules, $\tilde\psi$, can be interpreted as an element of $G(A[[g]])$. Recall now that in the case of DR the underlying RB structure, i.e. the MS scheme, implies the unique BWH decomposition $\tilde\psi=\tilde\psi_-^{-1}\ast\tilde\psi_+$. This allows one to recover Bogoliubov's classical counterterm map $C$ and the renormalized Feynman rules map $R$. Indeed, for an arbitrary 1PI graph $\Gamma \in H$, one gets: $$ C(\Gamma ) =\tilde\psi_-(\Gamma) \quad {\rm{and}} \quad R(\Gamma )=\tilde\psi_+(\Gamma ). $$ The linearity of $R$ then leads to renormalized 1PI Green's functions: $G_R^{(4)}(g)=R(gz_g)$, $G_R^{(2)}(g)=R(z_\phi)$. We refer to \cite{CKII} for further details. \subsection{The Lagrangian picture} \label{ssect:LagrangePic} The counterterms $C(\Gamma )$ figure in the renormalization of the Lagrangian $L$. Indeed, for a multiplicatively renormalizable QFT, it can be shown that the BPHZ method is equivalent to the method of additive, and hence multiplicative, renormalization. Therefore, let us remind ourselves briefly of the additive method, characterized by adding order-by-order counterterms to the Lagrangian $L$. Ultimately, this amounts to multiplying each term in the Lagrangian by particular renormalization factors. Details can be found in standard textbooks on perturbative QFT, such as \cite{Collins,IZ1980}. In general the additive renormalization prescription is defined as follows. The Lagrangian $L$ is modified by adding the so-called counterterm Lagrangian, $L_{ct}$, resulting in the renormalized Lagrangian: $$ L_{ren} :=L + L_{ct}, $$ where $L_{ct}:=\sum_{s>0} L_{ct}^{(s)}$ is defined by: \begin{equation} \label{renormalized0} L_{ct}:= C_1(g)\frac{1}{2} \partial_\mu \phi \partial^\mu \phi - C_2(g)\frac{g}{4!}\phi^4, \end{equation} with $C_n(g):=\sum_{s>0}g^sC_n^{(s)}$, $n=1,2$, being power series in $g$.
The $C_n^{(s)}$, $n=1,2$, $s>0$ are functions of the regularization parameter $\varepsilon$ to be defined iteratively as follows. To obtain the $1$-loop counterterm $L^{(1)}_{ct}$ one starts with $L=L_0+L_i$, computes the propagators and vertices, and generates all one-loop diagrams, that is, graphs of order $g^2$. Among those one isolates the UV divergent 1PI Feynman diagrams and chooses the $1$-loop counterterm part $L_{ct}^{(1)}$, that is, $C_n^{(1)}$, $n=1,2$, so as to cancel these divergences. Now, use the $1$-loop renormalized Lagrangian $L_{ren} ^{(1)}:=L + L_{ct}^{(1)} + \sum_{s>1} L_{ct}^{(s)}$ to generate all graphs up to $2$-loops, that is, all graphs of order $g^3$. Note that this includes for instance graphs with one loop where one of the vertices is multiplied by $g^2C_2^{(1)}$ and the other one by $g$, leading to an order $g^3$ contribution. Again, as before, isolate the UV divergent 1PI ones and choose the $2$-loop counterterm part $L_{ct}^{(2)}$, which is now of order $g^3$, again so as to cancel these divergences. Proceed with the $2$-loop renormalized Lagrangian $L_{ren} ^{(2)}:=L + L_{ct}^{(1)}+L_{ct}^{(2)} + \sum_{s>2} L_{ct}^{(s)}$, and so on. The 2-point graphs contribute to the wave function counterterm, whereas 4-point graphs contribute to the coupling constant counterterm (see e.g. \cite[Chap. 5]{Collins}). Note that after $j$ steps in the iterative prescription one obtains the resulting $j$th-loop renormalized Lagrangian: \begin{equation} \label{renormalized1} L^{(j)}_{ren} := L_0+L_i + L^{(1)}_{ct} + \cdots + L^{(j)}_{ct}+ \sum_{s>j} L_{ct}^{(s)} \end{equation} with counterterms $C_n^{(s)}$, $n=1,2$ fixed up to order $j$, such that it gives finite expressions up to loop order $j$. The part $L_{ct}^{(s)}$, $s>j$, remains undetermined. In fact, later we will see that, in our terminology, some associated Feynman rules are $j$-regular.
The multiplicative renormalizability of $L$ implies that we may absorb the counterterms into the coupling constant and wave function $Z$-factors: $$ Z_g:=1+C_2(g),\ \ Z_\phi:=1+C_1(g), $$ where $C_n(g)=\sum_{s\ge 1} g^s C_n^{(s)}$, $n=1,2$. We get: \begin{equation} \label{renormalized2a} L^{(j)}_{ren} = \frac{1}{2} Z_\phi \partial_\mu \phi \partial^\mu \phi - \frac{1}{4!}gZ_g\phi^4. \end{equation} As it turns out, Bogoliubov's counterterm map seen as $C \in G(A[[g]])$ gives: $$ Z_g(g)=C(z_g)(g) \quad {\rm{and}} \quad Z_\phi(g)=C(z_\phi)(g), $$ where we made the $g$ dependence explicit. Now we define the bare, or unrenormalized, field $\phi_{(0)}:= \sqrt{Z_{\phi}} \phi$ as well as the bare coupling constant: $$ g^B(g):=\frac{gZ_g(g)}{{Z^2_{\phi}(g)}}, $$ and as $C \in G(A[[g]])$: $$ g^B(g)=g C(z_B)(g) $$ where $ z_B:=z_g/z_{\phi}^2 \in H$ is the formal bare coupling. Up to the rescaling of the wave functions, the locality of the counterterms allows for the following renormalized Lagrangian: \begin{equation} \label{renormalized2b} L_{ren} =\frac{1}{2} \partial_\mu \phi_{(0)} \partial^\mu \phi_{(0)} - \frac{1}{4!}g^B(g)\phi^4_{(0)}. \end{equation} \subsection{Dyson's formula revisited} \label{ssect:Dysonformula} Let us denote once again by $H$ and $F$ the Hopf algebra of 1PI Feynman graphs of the massless $\phi^4$ theory in four space-time dimensions and the Fa\`a di Bruno Hopf algebra, respectively. The purpose of the present section is to show how Dyson's formula, relating renormalized and (regularized) bare Green's functions, allows for a refined interpretation of the exponential method for constructing regular characters in the context of renormalization. We write $R$ and $C$ for the regularized renormalized Feynman rules and counterterm character, respectively. Recall the universal bare coupling constant: $$ z_B:= z_g z_\phi^{-2}.
$$ It can be expanded as a formal series in $H$: \begin{equation} \label{univcoup} z_B = \un + \sum_{k>0} \Gamma_k \in H, \end{equation} where $\Gamma_k \in H_k$ is a homogeneous polynomial of loop order $k$ in 1PI 2- and 4-point graphs with a linear part $\Gamma_k^{(4)} + 2 \Gamma_k^{(2)}$. Notice that, as $H$ is a polynomial algebra over Feynman graphs and since the families of the $\Gamma_k^{(4)}$ and of the $\Gamma_k^{(2)}$ are algebraically independent in $H$, the families of $\Gamma_k$ and $\Gamma_k^{(r)}$, $r=2,4$, are also algebraically independent in $H$. Coming back to 1PI Green's functions, Dyson \cite{Dyson49} showed in the 1940s, in the context of QED (though the result holds in general \cite[Chap.8]{IZ1980}), that the bare and renormalized 1PI $n$-point Green's functions satisfy the following simple identity: \begin{equation} \label{dyson} G_R^{(n)}(g) =Z^{n/2}_\phi G^{(n)}(g^B). \end{equation} Recall that the renormalized as well as the bare 2- and 4-point Green's functions and the $Z$-factors, $Z_\phi$ and $Z_g$, are obtained by applying the renormalized Feynman rules map $R$, the Feynman rules $\tilde\psi$, and the counterterm $C$ to the formal $z$-factors introduced in (\ref{z}), respectively. When translated into the language of Hopf algebras, the Dyson equation reads, say, in the case of the $4$-point function: $$ G_R^{(4)}(g)=R(gz_g) = C(z^{2}_\phi) \sum_{j=0}^{\infty}C(z_B)^{j+1}\tilde\psi(g\Gamma_j^{(4)}) = \sum_{j=0}^{\infty}C(z_B)^{j}C(z_g) \tilde\psi(g\Gamma_j^{(4)}). $$ This can be rewritten: \begin{eqnarray} \label{E1} R(z_g)&=&m_A(C\otimes \tilde\psi)\sum_{j=0}^{\infty}z_B^{j}z_g\otimes\Gamma_j^{(4)}, \end{eqnarray} where we recognize the convolution expression $R=C\ast \tilde\psi$ of the BWH decomposition, with: \begin{eqnarray} \label{E2} \Delta(z_g)=\sum_{k \ge 0} z_B^k z_g \otimes \Gamma^{(4)}_k.
\end{eqnarray} Similarly, the study of the 2-point function yields: $$ \Delta(z_\phi)= z_\phi \otimes \un - \sum_{k > 0} z_B^k z_\phi \otimes \Gamma^{(2)}_k. $$ The equivalence between the two formulas (\ref{E1}) and (\ref{E2}) follows from the observation that the BWH decomposition of characters holds for arbitrary counterterms and renormalized characters, ${\tilde\psi}_-$ and ${\tilde\psi}_+$, respectively. Choosing, e.g. ${\tilde\psi}_-=C$ and ${\tilde\psi}_+=R$ in such a way that their values on Feynman diagrams form a family of algebraically independent elements (over the rationals) in $\CC$ shows that (\ref{E1}) implies (\ref{E2}) (the converse being obvious). Notice that the coproduct formulas can also be obtained directly from the combinatorics of Feynman graphs. We refer to \cite{Bel,CKIII,vanSu1,vanSu2} for complementary approaches and a self-contained study of coproduct formulas for the various formal $z$-factors. Now, Lemma \ref{lem:FaBlemma} immediately implies the Fa\`a di Bruno formula for $z_B$: \begin{prop} \label{thething2} $$ \Delta(z_B)=\sum_{k \ge 0} z_B^{k+1} \otimes \Gamma_k. $$ \end{prop} \begin{cor} There exists a natural Hopf algebra homomorphism $\Phi$ from $F$ to $H$: \begin{equation} \label{FdB} a_n \mapsto \Phi(a_n) := \Gamma_n. \end{equation} \end{cor} Equivalently, there exists a natural group homomorphism $\rho$ from $G(A)$, the $A$-valued character group of $H$, to the $A$-valued character group $G_{F}$ of $F$: $$ G(A) \ni \varphi \mapsto \rho(\varphi):=\varphi \circ \Phi : { F} \to A. $$ \subsection{Dyson's formula and the exponential method} \label{ssect:DysonExpo} Let us briefly make explicit the exponential method for perturbative renormalization in the particular context of the Hopf algebras of renormalization.
We denote by $H:=\bigoplus_{n\ge 0}H_n$ the Connes--Kreimer Hopf algebra of UV-divergent 1PI Feynman graphs and by $G(A[[g]])$ the group of regularized characters from $H$ to the commutative unital algebra $A$ over $\mathbb{C}$, equipped with a $\mathbb{C}$-linear projector $P_-$ such that the image of $P_+:=id -P_-$ is a subalgebra. The algebra $A$ and the projector $P_-$ reflect the regularization method and the renormalization scheme, respectively. The unit in $G(A[[g]])$ is denoted by $e$. The corresponding graded Lie algebra of infinitesimal characters is denoted by $\frak{g}(A[[g]])=\bigoplus_{n>0}\frak{g}_n(A[[g]])$. Let $\tilde\psi \in G(A[[g]])$ be the character corresponding to the regularized Feynman rules, derived from a Lagrangian of a multiplicatively renormalizable perturbative quantum field theory, say $\phi^4$ in four space-time dimensions. Hence any $l$-loop graph $\Gamma \in H_l$ is mapped to: \begin{equation} \label{FeynChar} \Gamma \xrightarrow{ \tilde\psi} \tilde\psi(\Gamma):=g^{|\Gamma|}\psi(\Gamma)=g^{l}\psi(\Gamma). \end{equation} Note that the character $\psi$ associates with a Feynman graph the corresponding Feynman integral whereas the character $\tilde\psi$ maps any graph with $|\Gamma|$ loops to its regularized Feynman integral multiplied by the $|\Gamma|$th power of the coupling constant. Recall that the exponential method of renormalization proceeds order-by-order in the number of loops. At one-loop order, one starts by considering the infinitesimal character of order one from $H$ to $A[[g]]$: $$ \tilde\tau_{1}:= P_- \circ \tilde\psi \circ \pi_1 \in \frak{g}_1(A[[g]]). $$ The corresponding exponential {\it{counterfactor}} from $H$ to $A[[g]]$ is given by: $$ \tilde\Upsilon^-_{1}:=\exp^*(- \tilde\tau_{1}).
$$ From the definition of the Feynman rules character (\ref{FeynChar}) we get: \allowdisplaybreaks{ \begin{eqnarray*} \tilde\Upsilon^-_{1}(\Gamma_k)&=&\exp^*(-P_-\circ \tilde\psi\circ\pi_1)(\Gamma_k)\\ &=& g^{k}\exp^*(-P_-\circ \psi \circ\pi_1)(\Gamma_k)\\ &=& g^{k} \Upsilon^-_1(\Gamma_k). \end{eqnarray*}} The character: $$ \tilde\psi^+_{1} := \tilde\Upsilon^-_{1} * \tilde\psi $$ is $1$-regular, i.e. it maps $H_1$ to $A_+[[g]]$. Indeed, as $h \in H_1$ is primitive we find $ \tilde\psi^+_{1}(h)= \tilde\psi(h) + \tilde\Upsilon^-_{1}(h)= \tilde\psi(h)-P_-( \tilde\psi(h))=P_+( \tilde\psi(h))$. In general, by multiplying the $(n-1)$-regular character by the counterfactor $ \tilde\Upsilon^-_{n}$ we obtain the $n$-regular character: $$ \tilde\psi^+_{n} := \tilde\Upsilon^-_{n} * \tilde\psi^+_{n-1} = \tilde\Upsilon(n) * \tilde\psi, $$ with the exponential order $n$ counterterm $ \tilde\Upsilon(n):= \tilde\Upsilon^-_{n} * \cdots * \tilde\Upsilon^-_{1}$. Hence, in the Hopf algebra context the exponential method of iterative renormalization consists of a successive multiplicative construction of higher order regular characters from lower order regular characters, obtained by multiplication with counterfactors. Next, we define the $n$th-order bare coupling constant: $$ g_n(g) = \tilde\Upsilon^-_{n}(gz_B)(g) = g + \sum_{k > 0} g^{k+1} \Upsilon^-_n(\Gamma_k) \in gA[[g]]. $$ Recall that $\Upsilon^-_n(\Gamma_k) = 0$ for $k<n$. We denote the $m$-fold iteration: $$ g_1 \circ \cdots \circ g_m (g) =: g^{ \circ}_{m}(g), $$ where by Proposition~\ref{thething2} and from the general properties of Fa\`a di Bruno formulas, we have: $g_m^{\circ}(g)=\tilde\Upsilon(m)(gz_B)(g)$.
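The iterated bare couplings $g^{\circ}_m$ live in the group $x+x\RR[[x]]$ of formal diffeomorphisms tangent to the identity, truncated at any finite order. As a computational aside, the following sketch (with illustrative coefficients of our own choosing, not taken from the text) implements truncated composition and the order-by-order compositional inverse; it is one concrete way to organize compositions such as $g_{(1)}\circ\cdots\circ g_{(m)}$.

```python
from fractions import Fraction

N = 6  # work modulo x^(N+1); index k of a list holds the coefficient of x^k

def mul(a, b):  # truncated product of coefficient lists
    out = [Fraction(0)] * (N + 1)
    for i, x in enumerate(a):
        if x:
            for j, y in enumerate(b):
                if y and i + j <= N:
                    out[i + j] += x * y
    return out

def compose(f, g):  # f(g(x)) mod x^(N+1), for series with zero constant term
    out = [Fraction(0)] * (N + 1)
    gp = [Fraction(0)] * (N + 1)
    gp[0] = Fraction(1)  # g^0 = 1
    for i in range(1, N + 1):
        gp = mul(gp, g)
        if f[i]:
            for k in range(N + 1):
                out[k] += f[i] * gp[k]
    return out

def inverse(f):  # compositional inverse of f = x + ..., solved order by order
    h = [Fraction(0)] * (N + 1)
    h[1] = Fraction(1) / f[1]
    for n in range(2, N + 1):
        # the coefficient of x^n in f(h) is f[1]*h[n] plus terms in h[2..n-1]
        h[n] -= compose(f, h)[n] / f[1]
    return h

def series(*tail):  # x + c_2 x^2 + c_3 x^3 + ... from the tail coefficients
    c = [Fraction(0), Fraction(1)] + [Fraction(t) for t in tail]
    return (c + [Fraction(0)] * (N + 1))[: N + 1]

ident = series()                                        # the series x
g1, g2, g3 = series(1), series(0, 2), series(0, 0, -1)  # illustrative choices
iterated = compose(g1, compose(g2, g3))                 # g_(1) o g_(2) o g_(3)
```

Composition is associative, and each tangent-to-identity series has a two-sided compositional inverse, so the truncated series indeed form a group.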
We also introduce the $n$th-order Z-factors: $$ Z^{(n)}_g(g):= \tilde\Upsilon(n)(z_g)(g) \quad {\rm{and}} \quad Z^{(n)}_\phi(g):= \tilde\Upsilon(n)(z_\phi)(g), $$ so that the $n$th-order renormalized 2- and 4-point 1PI Green's functions are: \allowdisplaybreaks{ \begin{eqnarray*} G_{R,n}^{(4)}(g):=g\tilde\psi^+_{n}(z_g)(g) &=& g\tilde\Upsilon(n) * \tilde\psi(z_g)(g) = \sum_{l \ge 0} \tilde\Upsilon(n)(z_B^l z_g) g^{l+1}\psi(\Gamma^{(4)}_l) \\ &=& \tilde\Upsilon(n)( z_\phi)^2 \sum_{l \ge 0} (\tilde\Upsilon(n)(gz_B)(g) )^{l+1}\psi(\Gamma^{(4)}_l)\\ &=& (Z^{(n)}_\phi(g))^2 \sum_{l \ge 0} \big(g_n^\circ(g)\big)^{l+1}\psi(\Gamma^{(4)}_l), \end{eqnarray*}} or, $G_{R,n}^{(4)}(g)=(Z_\phi^{(n)}(g))^2G^{(4)}(g_n^\circ(g))$. Similarly, $\tilde\psi^+_{n}(z_\phi)(g)= Z^{(n)}_\phi(g) \sum_{l \ge 0} ( \tilde\Upsilon(n)(gz_B )(g))^{l}\psi(\Gamma^{(2)}_l)$ and $G_{R,n}^{(2)}(g)=Z_\phi^{(n)}(g)G^{(2)}(g_n^\circ(g))$. This corresponds to a Lagrangian multiplicatively renormalized up to order $n$: $$ L^{(n)}_{ren} :=\frac{1}{2} Z^{(n)}_\phi(g) \partial_\mu \phi \partial^\mu \phi - \frac{gZ^{(n)}_g(g)}{4!}\phi^4. $$ However, using Propositions \ref{prop:ExpFaa1} and \ref{prop:ExpFaa2}, we may also rescale the wave function and write: $$ L^{(n)}_{ren} :=\frac{1}{2} \partial_\mu \phi_{n,0} \partial^\mu \phi_{n,0} - \frac{g^{ \circ}_{n}(g)}{4!}\phi_{n,0}^4, $$ where $\phi_{n,0}:=\sqrt{Z^{(n)}_\phi(g)} \phi$. Physically, on the level of the Lagrangian, the exponential renormalization method corresponds therefore to successive reparametrizations of the bare coupling constant. \section{On locality and non Rota--Baxter type subtraction schemes} \label{sect:nonRB} In this last section we present a class of non-Rota--Baxter type subtraction schemes combining the idea of fixing the values of Feynman rules at given values of the parameters with the minimal subtraction scheme in dimensional regularization.
The latter is known to be local \cite{CasKen,Collins} and we will use this fact to prove that the new class of non-Rota--Baxter type schemes is local as well. We first introduce some terminology. Let $\psi$ denote a dimensionally regularized Feynman rules character corresponding to a perturbatively renormalizable (massless, for greater tractability) quantum field theory. It maps the graded connected Hopf algebra $H=\bigoplus_{n \ge 0}H_n$ of 1PI Feynman graphs into the algebra $A$ of Laurent series with finite pole part. In fact, to be more precise, the coefficients of such a Laurent series are functions of the external parameters. In this setting, Eq.~(\ref{eq:stone-of-contention}) specializes to (see e.g. \cite{Collins}): $$ H \ni \Gamma \mapsto \psi (\Gamma;\mu,g,s) = \sum_{n = -N}^{\infty} a^\mu_n(\Gamma;g,s) \varepsilon^{n}. $$ Here, $\mu$ denotes 't~Hooft's mass, $\varepsilon$ the dimensional regularization parameter and $s$ the set of external parameters other than the coupling constant $g$. The algebra $A$ is equipped with a natural Rota--Baxter projector $T_-$ mapping any Laurent series to its pole part: $$ T_-( \psi (\Gamma;\mu,g,s) ) := \sum_{n = -N}^{-1} a^\mu_n(\Gamma,g,s) \varepsilon^{n}. $$ This is equivalent to a direct decomposition of $A$ into the subalgebras $A_-:=T_-(A)$ and $A_+:=T_+(A)$. In this setting, recall that the BWH decomposition gives rise to a unique factorization: $ \psi=\psi^{-1}_-*\psi_+ $ into a counterterm map $\psi_-$ and the renormalized Feynman rules map $\psi_+$. Both maps are characterised by Bogoliubov's renormalization recursions: $ \psi_\pm = e \pm T_\pm \circ (\psi_- *(\psi - e)). $ The Rota--Baxter property of $T_-$ ensures that both $\psi_-$ and $\psi_+$ are characters. Recall the notion of locality \cite{CasKen,Collins}.
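As a concrete aside (not part of the original text), the Rota--Baxter property of $T_-$ is easy to verify on Laurent polynomials, where the pole-part projection and its complement both have subalgebras as images; an idempotent projection of this kind satisfies the weight $-1$ Rota--Baxter identity $T_-(a)T_-(b)=T_-\bigl(T_-(a)\,b+a\,T_-(b)-ab\bigr)$. The Python sketch below uses a dictionary representation of Laurent polynomials, chosen here purely for illustration.

```python
from fractions import Fraction
import random

# A Laurent polynomial in eps is a dict {exponent: coefficient}.
def mul(a, b):
    out = {}
    for i, x in a.items():
        for j, y in b.items():
            out[i + j] = out.get(i + j, 0) + x * y
    return {k: v for k, v in out.items() if v != 0}

def add(*terms):
    out = {}
    for t in terms:
        for k, v in t.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def neg(a):
    return {k: -v for k, v in a.items()}

def T_minus(a):
    # minimal subtraction: keep the pole part only
    return {k: v for k, v in a.items() if k < 0}

random.seed(1)
def rand_series():
    return {k: Fraction(random.randint(-5, 5)) for k in range(-3, 4)}

# Rota-Baxter identity of weight -1:
#   T_-(a) T_-(b) = T_-( T_-(a) b + a T_-(b) - a b )
for _ in range(50):
    a, b = rand_series(), rand_series()
    lhs = mul(T_minus(a), T_minus(b))
    rhs = T_minus(add(mul(T_minus(a), b), mul(a, T_minus(b)), neg(mul(a, b))))
    assert lhs == rhs
print("Rota-Baxter identity verified")
```

Exact rational coefficients keep the check free of floating-point noise; the identity fails for projectors whose image is not a subalgebra.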
We call a character $\psi$ (and, more generally, a linear form on $H$) strongly local if the coefficients in the Laurent series which it associates to graphs are polynomials in the external parameters. Notice that the convolution product of two strongly local characters is strongly local: strongly local characters form a subgroup of the group of characters. On the other hand, a character $\psi$ is local if its counterterm $\psi_-$ is strongly local. Notice that strong locality implies locality. Indeed, by the Bogoliubov formula $\psi_-=e-T_-(\psi_-*(\psi-e))$ and its recursive nature, $\psi_-$ is strongly local whenever $\psi$ is strongly local. It is well-known that for a multiplicatively renormalizable perturbative QFT with dimensionally regularized Feynman rules character $\psi$, the counterterm $\psi_-$ following from Bogoliubov's recursion is strongly local. Moreover, as the Birkhoff decomposition is unique, recall that comparing with the exponential method we get: $ \psi_- = \Upsilon_{\infty}^-:=\lim\limits_\rightarrow \Upsilon(n) $ with: $ \Upsilon(n):= \Upsilon^-_{n} \ast \cdots \ast \Upsilon_{1}^-. $ Hence, in the particular case of a Rota--Baxter type subtraction scheme the exponential method provides a decomposition of Bogoliubov's counterterm character with respect to the grading of the Hopf algebra. The following Proposition shows that the exponential counterfactors inherit the strong locality property of Bogoliubov's counterterm character. \begin{prop} In the context of minimal subtraction, the exponential counterfactors $\Upsilon^-_{i}$ and hence the exponential counterterms $\Upsilon(n)$ are strongly local iff $\psi_-= \Upsilon_{\infty}^-$ is strongly local. \end{prop} \begin{proof} One direction is evident as strong locality of the counterfactors implies strong locality of $ \Upsilon_{\infty}^-$. The proof of the opposite direction follows by induction.
For any $\Gamma \in H_1$ we find: $$ \psi_-(\Gamma) = -T_-\circ \psi\circ\pi_1(\Gamma), $$ which implies that $-T_-\circ \psi\circ\pi_1$ is strongly local. The strong locality of $ \Upsilon_{1}^-:=\exp^*(-T_- \circ \psi \circ \pi_1)$ follows from the usual properties of the exponential map in a graded algebra. Let us assume that strong locality holds for $\Upsilon_{1}^-,\ldots,\Upsilon_n^-$. For $\Gamma \in H_{n+1}$ we find (for degree reasons): $$ \psi_-*\Upsilon^{-1}(n)(\Gamma) = \cdots * \Upsilon^-_{n+2}\ast\Upsilon_{n+1}^-(\Gamma)= \Upsilon_{n+1}^-(\Gamma) = -T_- \circ \psi^+_n \circ \pi_{n+1}(\Gamma). $$ Strong locality of $-T_- \circ \psi^+_n \circ \pi_{n+1}$ follows, as well as strong locality of $\Upsilon_{n+1}^-=\exp^*(-T_- \circ \psi^+_n \circ \pi_{n+1})$. \end{proof} The next result will be useful later. \begin{lem} \label{lem:help1} For a strongly local character $\phi$ in the context of a proper projector $P_-$ on $A$, the exponential method leads to a decomposition $\phi= \Upsilon_{\infty}^- * \phi^+$ into a strongly local counterterm $ \Upsilon_{\infty}^-$ as well as a strongly local regular character $\phi^+$. \end{lem} \begin{proof} The proof follows once again from the definition of the recursion. Indeed, the first order counterfactor in the exponential method is: $$ \Upsilon_{1}^- = \exp^*(-P_- \circ \phi \circ \pi_1), $$ which is clearly strongly local, since $\phi$ is strongly local. Then $\phi^+_1= \Upsilon_{1}^- * \phi$ is strongly local as a product of strongly local characters. The same reasoning then applies at each order. \end{proof} \subsection{A non-Rota--Baxter subtraction scheme} \label{ssect:nonRB} We now introduce another projection, denoted $T^q_-$.
It is a projector defined on $A$ in terms of the RB map $T_-$: \begin{equation} \label{Taylor} T_-^q:=T_- + \delta^n_{\varepsilon,q}, \end{equation} where the linear map $\delta^n_{\varepsilon,q}$ is the Taylor jet operator up to $n$th order with respect to the variable $\varepsilon$ at zero, which evaluates the coefficient functions at all orders between $1$ and $n$ at the fixed value $q$: $$ \delta_{\varepsilon,q}^n(\sum_{m=-N}^{\infty} a_m(s)\varepsilon^m):=\sum\limits_{i=1}^na_i(q)\varepsilon^i. $$ Note the condensed notation, where $q$ stands for a fixed set of values of parameters. The choice of the projection amounts, from the point of view of the renormalized quantities, to fixing the coefficient functions to $0$ at given values of the parameters (e.g. external momenta). One verifies that $T_-^q$ defines a linear projection. Moreover, the image of $T_+^q:=id - T_-^q$ forms a subalgebra in $A$ (the algebra of formal power series in $\varepsilon$ whose coefficient functions of orders between $1$ and $n$ vanish at the chosen particular values $q$ of parameters), but the image of $T_-^q$ does not. This implies immediately that the projector $T_-^q$ is not of Rota--Baxter type. Hence, we have in general: $$ T^q_-( \psi (\Gamma;\mu,g,s) ) = \sum_{l = -N}^{-1} a^\mu_l(\Gamma,g,s) \varepsilon^{l} + \sum_{i=1}^na^\mu_i(\Gamma,g,q)\varepsilon^i $$ and $$ T^q_+( \psi (\Gamma;\mu,g,s) ) = \sum_{l = 0}^{\infty} a^\mu_l(\Gamma, g,s) \varepsilon^{l} - \sum_{i=1}^na^\mu_i(\Gamma,g,q)\varepsilon^i. $$ We find: \begin{prop} Using the subtraction scheme defined in terms of the projector $T_-^q$ on $A$, the exponential method applied to the Feynman rules character $\psi$ gives a regular character: $$ \psi_{q}^+=\Upsilon_{\infty,q}^- \ast \psi , $$ where we use a self-explanatory notation for the counterterm $\Upsilon_{\infty,q}^- $ and the renormalized character $\psi_{q}^+$.
\end{prop} Now we would like to prove that the exponential method using the projector $T_-^q$ on $A$ gives local counterterms. That is, we want to prove that the counterfactor $\Upsilon_{n,q}^-$ for all $n$, and hence $\Upsilon_{\infty,q}^-$, are strongly local. In the following: $$ \psi_-= \Upsilon_{\infty}^- = \cdots *\Upsilon_{n}^- * \cdots *\Upsilon_{2}^- *\Upsilon_{1}^- $$ stands for the multiplicative decomposition of Bogoliubov's strongly local counterterm character following from the exponential method using the minimal subtraction scheme $T_-$, whereas: $$ \Upsilon_{\infty,q}^- = \cdots *\Upsilon_{n,q}^- * \cdots * \Upsilon_{2,q}^- * \Upsilon_{1,q}^- $$ stands for the counterterm character following from the exponential method according to the modified subtraction scheme $T_-^q$. The following Lemma is instrumental in this section. \begin{lem}\label{lemtech} For a subtraction scheme such that the image of $P_+$ is a subalgebra, let $\phi$ be an $n$-regular character and $\xi$ be a regular character, then: $$ P_-\circ (\phi\ast \xi)_{n+1}=P_-\circ \phi_{n+1}. $$ In particular, the counterfactor $\Upsilon^-_{n+1}$ associated to $\phi$ is equal to the counterfactor associated to $\phi\ast \xi$. It follows that, if the exponential decomposition of a character $\psi$ is given by: $\psi=\Upsilon^-_\infty\ast\psi^+$, the exponential decomposition of the convolution product of $\psi$ with a regular character $\xi$ is given by: $\psi\ast\xi=\Upsilon_\infty^-\ast(\psi^+\ast \xi)$. \end{lem} \begin{proof} Indeed, for an $(n+1)$-loop graph $\Gamma$, $\phi\ast\xi(\Gamma)=\phi(\Gamma)+\xi(\Gamma) + c$, where $c$ is a linear combination of products of the image by $\phi$ and $\xi$ of graphs of loop-order strictly less than $n+1$. The regularity hypothesis and the hypothesis that the image of $P_+$ is a subalgebra imply $P_-(\xi(\Gamma)+c)=0$, hence the first assertion of the Lemma. The others follow from the definition of the exponential method by recursion.
\end{proof} \begin{lem}\label{truc} Let $\psi$ be a regular character for the minimal subtraction scheme ($T_-\circ \psi=0$). Using the subtraction scheme defined in terms of the projector $T_-^q$ on $A$, the exponential method applied to $\psi$ gives $ \psi_{q}^+=\Upsilon_{\infty,q}^- \ast \psi , $ where, for each graph $\Gamma$, $\Upsilon_{\infty,q}^-(\Gamma)$ is a polynomial with constant coefficients in the perturbation parameter $\varepsilon$. In particular, $\Upsilon_{\infty,q}^-$ is strongly local. \end{lem} The Lemma follows from the definition of the subtraction map $T_-^q$: by its very definition, since $\psi(\Gamma)$ is a formal power series in the parameter $\varepsilon$ (without singular part), $T_-^q\circ\psi (\Gamma)$ is a polynomial (of degree less than or equal to $n$) with constant coefficients in the perturbation parameter $\varepsilon$. As usual, this behaviour is preserved by convolution exponentials, and goes therefore recursively over to the $\Upsilon_{i,q}^-$ and to $\Upsilon_{\infty,q}^-$. \begin{prop} \label{prop:strongloc} With the above hypothesis, i.e. a dimensionally regularized Feynman rules character $\psi$ which is local with respect to the minimal subtraction scheme, the counterfactors and counterterm of the exponential method, $\Upsilon_{i,q}^-$ and $\Upsilon_{\infty,q}^-$ respectively, obtained using the subtraction scheme defined in terms of the projector $T_-^q$ are strongly local. \end{prop} \begin{proof} Indeed, we have, using the MS scheme, the BWH decomposition $\psi=\psi_-^{-1}\ast\psi_+$, where $\psi_-^{-1}$ is strongly local. Applying the exponential method with respect to the projector $T_-^q$ to $\psi^+$ we get, according to Lemma~\ref{truc}, a decomposition $\psi^+=\Upsilon_+^-\ast\psi_{++}$, where we write $\Upsilon_+^-$ (resp. $\psi_{++}$) for the counterterm and renormalized character and where $\Upsilon_+^-$ is strongly local.
We get: $\psi=\psi_-^{-1}\ast\Upsilon_+^-\ast\psi_{++}$, where $\psi_{++}$ is regular with respect to $T_-^q$. From Lemma~\ref{lemtech}, we know that the counterfactors and counterterm for $\psi$ in the exponential method for $T_-^q$ are equal to the counterfactors and counterterm for $\psi_-^{-1}\ast\Upsilon_+^-$, which is a product of strongly local characters, and therefore is strongly local. The Proposition follows then from Lemma~\ref{lem:help1} and its proof. \end{proof} \subsection{A Toy-model calculation} \label{ssect:nonRBToy} In the following example we apply the above introduced local non-Rota--Baxter type subtraction scheme within dimensional regularization. We exemplify it by means of a simple toy model calculation. We work with the bicommutative Hopf algebra $H^{lad} =\bigoplus_{k\ge 0} H^{lad}_k$ of rooted ladder trees. Let us recall the general coproduct of the tree $t_n$ with $n$ vertices: $$ \Delta(t_n)= t_n \otimes \un + \un \otimes t_n + \sum_{k=1}^{n-1} t_{n-k} \otimes t_k. $$ The regularized toy model is defined by a character $\psi \in G(A)$ mapping the tree $t_n$ to an $n$-fold iterated Riemann integral with values in $A:=\CC[\varepsilon^{-1},\varepsilon]]$: \begin{equation} \label{example} \psi(p;\varepsilon,\mu)(t_n) := \mu^{\varepsilon} \int_p^\infty \psi(x;\varepsilon,\mu)(t_{n-1}) \frac{dx}{x^{1+\varepsilon}} = \frac{1}{n! \varepsilon^n} \exp\bigl(-n\varepsilon \log(\frac{p}{\mu})\bigr), \end{equation} with $\psi(p;\varepsilon,\mu)(t_1):=\mu^{\varepsilon}\int_{p}^\infty \frac{dx}{x^{1+\varepsilon}}$, with $\mu,\varepsilon > 0$, and where $p$ denotes an external momentum. Recall that $\mu$ ('t~Hooft's mass) has been introduced for dimensional reasons, so as to make the ratio $\frac{p}{\mu}$ a dimensionless scalar. In the following we will write $a:=\log(\frac{p}{\mu})$ and $b:=\log(\frac{q}{\mu})$, where $q$ is fixed.
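The closed form in the toy model can be checked symbolically. The SymPy sketch below (ours, purely illustrative; the symbol names are our choices) verifies the differentiated form of the iterated-integral recursion, $\partial_p\,\psi(t_n)=-\mu^{\varepsilon}\,\psi(t_{n-1})\,p^{-1-\varepsilon}$, and that the $\varepsilon$-expansion reproduces the Laurent coefficients written out next.

```python
import sympy as sp

eps = sp.symbols('varepsilon', positive=True)
p, mu = sp.symbols('p mu', positive=True)
a = sp.log(p/mu)

def psi(n):
    # closed form psi(p; eps, mu)(t_n) = exp(-n eps log(p/mu)) / (n! eps^n)
    return (p/mu)**(-n*eps)/(sp.factorial(n)*eps**n)

def check_zero(expr):
    # split composite bases (valid: all symbols positive), then simplify
    return sp.simplify(sp.powsimp(sp.expand_power_base(expr, force=True))) == 0

# the recursion psi(t_n) = mu^eps int_p^oo psi(t_{n-1})(x) dx/x^{1+eps} is,
# for eps > 0 (the integrand decays at infinity), equivalent to
#   d/dp psi(t_n)(p) = -mu^eps psi(t_{n-1})(p) p^(-1-eps)
for n in range(2, 5):
    assert check_zero(sp.diff(psi(n), p) + mu**eps*psi(n-1)*p**(-1-eps))

# the eps-expansion of psi(t_2): 1/(2 eps^2) - a/eps + a^2 - (2/3) eps a^3 + ...
s2 = sp.expand(sp.series(psi(2), eps, 0, 2).removeO())
assert s2.coeff(eps, -2) == sp.Rational(1, 2)
assert sp.simplify(s2.coeff(eps, -1) + a) == 0
assert sp.simplify(s2.coeff(eps, 0) - a**2) == 0
assert sp.simplify(s2.coeff(eps, 1) + sp.Rational(2, 3)*a**3) == 0
```

The same expansion check applies verbatim to $\psi(t_3)$ against the third series listed in the text.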
For later use we write out the first three values: \allowdisplaybreaks{ \begin{eqnarray*} \psi(p;\varepsilon,\mu)(t_1) &=& \frac{1}{\varepsilon} - a + \frac{1}{2}\varepsilon a^2 - \frac{1}{3!}\varepsilon^2 a^3 + \frac{1}{4!}\varepsilon^3 a^4 - O(\varepsilon^4) \\ \psi(p;\varepsilon,\mu)(t_2) &=& \frac{1}{2\varepsilon^2} - \frac{1}{\varepsilon} a + a^2 - \frac{2}{3}\varepsilon a^3 + \frac{1}{3}\varepsilon^2 a^4 - \frac{2}{15}\varepsilon^3 a^5 + O(\varepsilon^4) \\ \psi(p;\varepsilon,\mu)(t_3) &=& \frac{1}{3!\varepsilon^3} - \frac{1}{2\varepsilon^2} a + \frac{3}{4\varepsilon} a^2 - \frac{3}{4} a^3 + \frac{9}{16}\varepsilon a^4 - \frac{27}{80}\varepsilon^2 a^5 + O(\varepsilon^3). \end{eqnarray*}} Now, for a Laurent series $\alpha(p/\mu):=\sum_{n=-N}^{\infty} \alpha_n(p/\mu)\varepsilon^n$, where the coefficients $\alpha_n=\alpha_n(p/\mu)$ are functions of $p/\mu$, we define the following projector $P_-$: \begin{equation} \label{nonRB1} P_-\bigl(\sum_{n=-N}^{\infty}\alpha_n(p/\mu)\varepsilon^n\bigr) := \sum_{n=-N}^{-1}\alpha_n(p/\mu)\varepsilon^n + \alpha_1(q/\mu)\varepsilon, \end{equation} where $q$ is fixed and chosen appropriately. We get: $$ P_+\bigl(\sum_{n = -N}^\infty \alpha_n(p/\mu)\varepsilon^n\bigr) = \alpha_0 + (\alpha_1(p/\mu) - \alpha_1(q/\mu))\varepsilon + \sum_{n = 2}^\infty \alpha_n(p/\mu)\varepsilon^n \in \CC[[\varepsilon]]. $$ One verifies that: $$ P_\pm^2=P_\pm \quad {\rm{and}}\quad P_\pm \circ P_\mp = P_\mp \circ P_\pm = 0. $$ Let us emphasize that $P_-$ is not a Rota--Baxter map. This implies that we are not allowed to apply formulae (\ref{eq:BogoliubovFormulae}) in Corollary \ref{cor:ck-Birkhoff} for the renormalization of $\psi(p;\varepsilon,\mu)$. However, we will show explicitly that the exponential method applies in this case, giving at each order a local counterterm(-factor) character as well as a finite renormalized character. 
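Before carrying out the order-by-order calculation, the two claims just made about $P_-$, namely that it is a projector but not a Rota--Baxter map, can be checked on explicit Laurent polynomials. The SymPy sketch below is ours and purely illustrative: coefficient functions are modelled as polynomials in $a=\log(p/\mu)$, so that evaluation at $q$ becomes the substitution $a\to b$.

```python
import sympy as sp

eps, a, b = sp.symbols('varepsilon a b')

def P_minus(expr):
    # pole part plus the order-epsilon coefficient evaluated at the
    # reference point: alpha_1(q/mu) is modelled by substituting a -> b
    expr = sp.expand(expr)
    poles = sum(expr.coeff(eps, k)*eps**k for k in range(-10, 0))
    return poles + expr.coeff(eps, 1).subs(a, b)*eps

f = 1/eps - a + a**2*eps/2 - a**3*eps**2/6   # truncation of psi(t_1)
g = 1/eps + a*eps                            # another toy Laurent polynomial

# P_- is a projector ...
assert sp.expand(P_minus(P_minus(f)) - P_minus(f)) == 0
assert sp.expand(P_minus(f) - (1/eps + b**2*eps/2)) == 0
# ... but the weight -1 Rota-Baxter identity fails:
lhs = sp.expand(P_minus(f)*P_minus(g))
rhs = P_minus(sp.expand(P_minus(f)*g + f*P_minus(g) - f*g))
assert sp.expand(lhs - rhs) != 0
print(sp.expand(lhs - rhs))   # a nonzero remainder
```

The nonzero remainder is exactly the obstruction that forbids the use of Bogoliubov's formulae and motivates the exponential method in this setting.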
At first order we apply the $1$-regular character, $\psi^+_1$, to the one vertex tree: \allowdisplaybreaks{ \begin{eqnarray*} \psi^+_1(t_1) =\Upsilon(1)\ast \psi(t_1) &=& \bigl(\exp^*(-P_- \circ \psi \circ \pi_1) \ast \psi\bigr) (t_1)\\ &=& -P_- \circ \psi \circ \pi_1(t_1) + \psi (t_1)\\ &=& -P_-(\psi(t_1)) + \psi (t_1)\\ &=& -\bigl(\frac{1}{\varepsilon} + \frac{1}{2} \varepsilon b^2 \bigr) + \frac{1}{\varepsilon} - a + \frac{1}{2}\varepsilon a^2 - \frac{1}{3!}\varepsilon^2 a^3 + O(\varepsilon^3) \\ &=& - a + \frac{1}{2}\varepsilon (a^2 - b^2) + O(\varepsilon^2). \end{eqnarray*}} Observe that the counterfactor, and hence counterterm at order one is: $$ \Upsilon(1)(t_1) = \Upsilon^-_1(t_1) = \exp^*(-P_-\circ \psi \circ \pi_1) (t_1) = - \frac{1}{\varepsilon} - \frac{1}{2} \varepsilon b^2 = - \frac{1}{\varepsilon} (1 + \frac{1}{2} \varepsilon^2 b^2), $$ which is local, i.e. does not contain any $\log(p/\mu)$ terms. Let us define $f=f(\varepsilon;q):=1 + \frac{1}{2}\varepsilon^2 b^2$. Now, calculate the $2$-regular character, $\psi^+_2$, on the two vertex tree: \allowdisplaybreaks{ \begin{eqnarray*} \psi^+_2(t_2) = \Upsilon(2)\ast \psi(t_2) &=& \bigl(\exp^*(-P_- \circ \psi^+_1 \circ \pi_2) \ast \exp^*(-P_- \circ \psi \circ \pi_1) \ast \psi \bigr)(t_2)\\ &=& \psi(t_2) + \Upsilon(1)(t_1)\psi (t_1) + \Upsilon(2)(t_2) \\ &=& \psi(t_2) - P_- (\psi(t_1))\psi (t_1) -P_- (\psi^+_1(t_2)) + \frac{1}{2}P_- (\psi(t_1))P_- (\psi(t_1))\\ &=& \psi(t_2) - P_- (\psi(t_1))\psi (t_1) -P_-\bigl(\psi(t_2) - P_- (\psi(t_1))\psi (t_1) \bigr) \\ & & \quad\ -P_-\bigl(\frac{1}{2}P_- (\psi(t_1))P_- (\psi(t_1))\bigr)+ \frac{1}{2}P_- (\psi(t_1))P_- (\psi(t_1))\\ &=& P_+\bigl( \psi(t_2) - P_- (\psi(t_1))\psi (t_1) \bigr) + \frac{1}{2}P_+\bigl(P_- (\psi(t_1))P_- (\psi(t_1))\bigr). 
\end{eqnarray*}} We first calculate the counterterm: \allowdisplaybreaks{ \begin{eqnarray*} \Upsilon(2)(t_2) &=& \exp^*(-P_- \circ \psi^+_1 \circ \pi_2) \ast \exp^*(-P_- \circ \psi \circ \pi_1)(t_2)\\ &=& -P_- \circ \psi^+_1(t_2) + \frac{1}{2}P_- (\psi(t_1))P_- (\psi(t_1))\\ &=& -P_- ( \Upsilon^-_1*\psi(t_2)) + \frac{f^2}{2\varepsilon^2} \end{eqnarray*}} Now observe that: \allowdisplaybreaks{ \begin{eqnarray*} \Upsilon^-_1*\psi(t_2) &=& \psi(t_2) - P_- (\psi(t_1))\psi (t_1) + \frac{1}{2}P_- (\psi(t_1))P_- (\psi(t_1))\\ &=& \frac{1}{2\varepsilon^2} - \frac{1}{\varepsilon} a + a^2 - \frac{2}{3}\varepsilon a^3 + O(\varepsilon^2) \\ & & \qquad -\bigl(\frac{1}{\varepsilon} + \frac{1}{2} \varepsilon b^2 \bigr) \bigl( \frac{1}{\varepsilon} - a + \frac{1}{2}\varepsilon a^2 - \frac{1}{3!}\varepsilon^2 a^3 + O(\varepsilon^3) \bigr) + \frac{1}{2}\bigl(\frac{1}{\varepsilon} + \frac{1}{2} \varepsilon b^2 \bigr) \bigl(\frac{1}{\varepsilon} + \frac{1}{2} \varepsilon b^2 \bigr)\\ &=& \frac{1}{2} a^2 - \frac{1}{2}\varepsilon\bigl(a^3 - ab^2\bigr) + O(\varepsilon^2). \end{eqnarray*}} We get: $$ -P_- ( \psi(t_2) - P_- (\psi(t_1))\psi (t_1) + \Upsilon^-_1(t_2)) = \Upsilon^-_2(t_2)=0. $$ Hence, we find: \allowdisplaybreaks{ \begin{eqnarray*} \Upsilon(2)(t_2) &=& \Upsilon^-_2\ast\Upsilon^-_1(t_2)=\Upsilon^-_1(t_2) = \frac{1}{2\varepsilon^2} f^2, \end{eqnarray*}} which is local, and: \allowdisplaybreaks{ \begin{eqnarray*} \psi^+_2(t_2) = \psi^+_1(t_2) &=& \frac{1}{2} a^2 - \frac{1}{2}\varepsilon\bigl(a^3 - ab^2\bigr) + O(\varepsilon^2) \\ &=& \frac{1}{2}\bigl( - a + \frac{1}{2}\varepsilon (a^2 - b^2) + O(\varepsilon^2) \bigr)^2. 
\end{eqnarray*}} At third order, using $\Upsilon^-_2(t_2)= -P_-(\psi^+_1(t_2))=0$ and $\Upsilon^-_2(t_1)=0$, a direct computation shows that similarly the order 3 counterfactor, $\Upsilon^-_3$, evaluated on the order 3 tree, $t_3$, is zero: $$ \Upsilon^-_3(t_3) = \exp^*(-P_-\circ \psi^+_2 \circ \pi_3) (t_3) = -P_-\bigl(\psi^+_2 (t_3) \bigr) =0 $$ whereas \allowdisplaybreaks{ \begin{eqnarray*} \psi^+_3(t_3) &=& \psi^+_2(t_3) =\frac{1}{3!}\bigl( - a + \frac{1}{2}\varepsilon (a^2 - b^2) + O(\varepsilon^2) \bigr)^3, \end{eqnarray*}} and the counterterm at order 3 is: \allowdisplaybreaks{ \begin{eqnarray*} \Upsilon(3)(t_3) &=& -\frac{1}{3!\varepsilon^3}f^3. \end{eqnarray*}} This pattern is general and encoded in the following proposition. \begin{prop} The renormalization of the toy-model (\ref{example}) via the exponential method, in the context of DR together with the general non-RB scheme (\ref{Taylor}), gives the $n$th-order counterfactor $\Upsilon_n^-(t_n) = 0$ for $n \ge 2$ and counterterm: $$ \Upsilon(n)(t_n) =\frac{1}{n!\varepsilon^n}(-f)^n, $$ with: $$ f=f(\varepsilon;q):=1 + \frac{1}{2}\varepsilon^2 b^2 - \frac{1}{3!}\varepsilon^3 b^3 + \cdots + \frac{(-1)^{m+1}}{(m+1)!}\varepsilon^{m+1} b^{m+1} $$ corresponding to the Taylor jet operator (\ref{Taylor}), $\delta^m_{\varepsilon,q}$, say, of fixed order $m \in \mathbb{N}_+$. The $n$-regular, i.e. renormalized, character is given by: $$ \psi^+_n(t_n)= \frac{1}{n!}\bigl( - a + \frac{1}{2}\varepsilon (a^2 - b^2) - \frac{1}{3!}\varepsilon^2 (a^3 - b^3) + \cdots + \frac{(-1)^m}{(m+1)!}\varepsilon^{m} (a^{m+1} - b^{m+1}) - O(\varepsilon^{m+1})\bigr)^n. $$ \end{prop} \begin{proof} Let us write $T:=\un + \sum\limits_{n=1}^\infty t_n$ for the formal sum of all rooted ladder trees. This sum is a group-like element ($\Delta(T)=T\otimes T$).
It follows that $$ \psi(T)=\sum_n\frac{1}{n!}(\frac{1}{\varepsilon}(\frac{p}{\mu})^{-\varepsilon})^n =\exp(\frac{1}{\varepsilon}(\frac{p}{\mu})^{-\varepsilon}) $$ can be rewritten as the convolution exponential of the infinitesimal character $\eta$: $$ \eta (t_n):= \begin{cases} \frac{1}{\varepsilon}(\frac{p}{\mu})^{-\varepsilon}, & n = 1,\\ 0, & \text{otherwise.} \end{cases} $$ Then: $$ \psi(T) =\exp^\ast(\eta)(T). $$ Let us write $\eta_-:=P_-(\eta)$ and $\eta_+:=P_+(\eta)$, so that, in particular, $\eta_-(t_1)=-\Upsilon(1)(t_1)=\frac{f}{\varepsilon}$ and $\eta_+(t_1)= - a + \frac{1}{2}\varepsilon (a^2 - b^2) - \frac{1}{3!}\varepsilon^2 (a^3 - b^3) + \cdots + \frac{(-1)^m}{(m+1)!}\varepsilon^{m} (a^{m+1} - b^{m+1}) - O(\varepsilon^{m+1})$. We get finally (recall that the convolution product of linear endomorphisms of a bicommutative Hopf algebra is commutative): $$ \psi=\exp^\ast(\eta)=\exp^\ast(\eta_-)\ast\exp^\ast(\eta_+), $$ where $\Upsilon(1)^{-1}=\exp^\ast(\eta_-)$ and where (by direct inspection) $\exp^\ast(\eta_+)$ is regular. It follows that $\psi$ is renormalized already at the first order of the exponential algorithm, that is: $\Upsilon^-_\infty=\exp^\ast(-\eta_-)=\Upsilon(1)$ and $\psi^+=\exp^\ast(\eta_+)$. The Proposition follows from the group-like structure of $T$ which implies that: $$ \Upsilon^-_\infty(t_n)=\exp^\ast(-\eta_-)(t_n)=\frac{1}{n!}(-\eta_-(t_1))^n, $$ and similarly for $\psi^+(t_n)$. \end{proof} Notice that in the classical MS scheme, one gets simply $f=1$ in the above formulas. One recovers then by the same arguments the well-known result following from the BPHZ method in DR and MS. \vspace{0.5cm} \subsection*{Acknowledgments} The first named author is supported by a de la Cierva grant from the Spanish government. We thank warmly J.~Gracia-Bond\'\i a. Long joint discussions on QFT in Nice and Zaragoza were seminal to the present work, which is part of a common long-term project.
TITLE: $p$-Sylow subgroups of $SL(3, \mathbb{Z}_p)$ QUESTION [3 upvotes]: I wonder how many $p$-Sylow subgroups of $SL(3, \mathbb{Z}_p)$ there are ($p$ is any prime). Rather than finding generators, I used the fact that $|GL(3, \mathbb{Z}_p)| = ({p}^3 - 1)({p}^3 - p)({p}^3 - {p}^2)$. Since $SL(3, \mathbb{Z}_p)$ is the kernel of $ \phi : GL(3, \mathbb{Z}_p) \to \mathbb{Z}_p^* $, $ \phi (A) = \det(A) $, I got $|SL(3, \mathbb{Z}_p)| = {(p - 1)}^2 ({p}^2 + p + 1) {p}^3 (p + 1)$. Now, by the third Sylow theorem, the number of $p$-Sylow subgroups is of the form $1 + pk$ ($k$ a nonnegative integer) and it must divide $|SL(3, \mathbb{Z}_p)|$. So there are several possibilities due to the factorization of $|SL(3, \mathbb{Z}_p)|$. But after that, I can't determine the exact number of $p$-Sylow subgroups among them. Is there any helpful fact that I can apply to make progress? REPLY [5 votes]: Hint: The number of $p$-Sylow subgroups of $SL(3,F_p)$ is $$\frac{\vert SL(3,F_p) \vert}{\vert N(P) \vert}$$ where $P$ is any $p$-Sylow subgroup and $N(P)$ is its normalizer. Take, in particular, $$P=\left\{\begin{pmatrix} 1& x & y\\ 0& 1 & z\\ 0&0&1\end{pmatrix}: x,y,z \in F_p \right\}$$ Try to find $N(P)$ and complete your problem!
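A brute-force sanity check (ours, not part of the original exchange) confirms the count suggested by the hint in the smallest case $p=2$: conjugating the unitriangular subgroup $P$ by every element of $SL(3,\mathbb{F}_2)$ yields $|SL(3,\mathbb{F}_2)|/|N(P)| = 168/8 = 21 = (p^2+p+1)(p+1)$ distinct Sylow $2$-subgroups.

```python
import itertools

p = 2
I = ((1, 0, 0), (0, 1, 0), (0, 0, 1))

def det(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0])) % p

def mul(x, y):
    return tuple(tuple(sum(x[i][k]*y[k][j] for k in range(3)) % p
                       for j in range(3)) for i in range(3))

# all of SL(3, F_2): 3x3 matrices over F_2 with determinant 1
SL = [m for m in (tuple((e[0:3], e[3:6], e[6:9]))
                  for e in itertools.product(range(p), repeat=9))
      if det(m) == 1]

def inv(x):                  # brute force is fine for a group of order 168
    return next(y for y in SL if mul(x, y) == I)

# the upper unitriangular Sylow p-subgroup from the hint
P = frozenset(((1, x, y), (0, 1, z), (0, 0, 1))
              for x in range(p) for y in range(p) for z in range(p))

conjugates = {frozenset(mul(mul(g, h), inv(g)) for h in P) for g in SL}
print(len(SL), len(conjugates))   # 168 21
```

Since all Sylow $p$-subgroups are conjugate, counting the distinct conjugates of $P$ is the same as computing $|SL(3,F_p)|/|N(P)|$.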
\begin{document} \title[The improved isoperimetric inequality and the Wigner caustic] {The improved isoperimetric inequality\linebreak and the Wigner caustic of planar ovals} \author{ Micha\l{} Zwierzy\'nski} \address{Warsaw University of Technology\\ Faculty of Mathematics and Information Science\\ Plac Politechniki 1\\ 00-661 Warsaw\\ Poland\\} \email{zwierzynskim@mini.pw.edu.pl} \thanks{The work of M. Zwierzy\'nski was partially supported by NCN grant no. DEC-2013/11/B/ST1/03080. } \subjclass[2010]{52A38, 52A40, 58K70} \keywords{affine equidistants, convex curve, Wigner caustic, constant width, isoperimetric inequality, singularities} \begin{abstract} The classical isoperimetric inequality in the Euclidean plane $\mathbb{R}^2$ states that for a simple closed curve $\M$ of the length $L_{\M}$, enclosing a region of the area $A_{\M}$, one gets \begin{align*} L_{\M}^2\geqslant 4\pi A_{\M}. \end{align*} In this paper we present the \textit{improved isoperimetric inequality}, which states that if $\M$ is a closed regular simple convex curve, then \begin{align*} L_{\M}^2\geqslant 4\pi A_{\M}+8\pi\left|\widetilde{A}_{E_{\frac{1}{2}}(\M)}\right|, \end{align*} where $\widetilde{A}_{E_{\frac{1}{2}}(\M)}$ is an oriented area of the Wigner caustic of $\M$, and the equality holds if and only if $\M$ is a curve of constant width. Furthermore we also present a stability property of the improved isoperimetric inequality (near equality implies curve nearly of constant width). The Wigner caustic is an example of an affine $\lambda$-equidistant (for $\displaystyle\lambda=\frac{1}{2}$) and the improved isoperimetric inequality is a consequence of certain bounds of oriented areas of affine equidistants. 
\end{abstract} \maketitle \section{Introduction} The \textit{classical isoperimetric inequality} in the Euclidean plane $\mathbb{R}^2$ states that: \begin{thm}(Isoperimetric inequality) Let $\M$ be a simple closed curve of length $L_{\M}$, enclosing a region of area $A_{\M}$, then \begin{align}\label{IsoperimetricIneq} L_{\M}^2\geqslant 4\pi A_{\M}, \end{align} and the equality (\ref{IsoperimetricIneq}) holds if and only if $\M$ is a circle. \end{thm} This fact was already known in ancient Greece. The first mathematical proof was given in the nineteenth century by Steiner \cite{S1}. After that, there have been many new proofs, generalizations, and applications of this famous theorem, see for instance \cite{C1, G1, G4, H3, L1, PX1, R1, S1}, and the literature therein. In 1902 Hurwitz \cite{H3} and later Gao \cite{G1} showed the \textit{reverse isoperimetric inequality}. \begin{thm}(Reverse isoperimetric inequality) Let $K$ be a strictly convex domain whose support function $p$ has the property that $p''$ exists and is absolutely continuous, and let $\widetilde{A}$ denote the oriented area of the evolute of the boundary curve of $K$. Let $L_{K}$ be the perimeter of $K$ and $A_{K}$ be the area of $K$. Then \begin{align}\label{ReverseIsoperimetricIneq} L_{K}^2\leqslant 4\pi A_{K}+\pi|\widetilde{A}|. \end{align} Equality holds if and only if $p(\theta)=a_0+a_1\cos\theta+b_1\sin\theta+a_2\cos 2\theta+b_2\sin 2\theta$. \end{thm} In this paper we present bounds of oriented areas of affine equidistants and thanks to them we will prove the \textit{improved isoperimetric inequality}, which states that if $\M$ is a closed regular simple convex curve, then \begin{align*} L_{\M}^2\geqslant 4\pi A_{\M}+8\pi\left|\widetilde{A}_{E_{\frac{1}{2}}(\M)}\right|, \end{align*} where $\widetilde{A}_{E_{\frac{1}{2}}(\M)}$ is an oriented area of the Wigner caustic of $\M$, and the equality holds if and only if $\M$ is a curve of constant width.
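The improved inequality can be probed numerically with the support-function formulas recalled later in the paper: Cauchy's formula $L_{\M}=\int_0^{2\pi}p\,d\theta$, Blaschke's formula $A_{\M}=\frac{1}{2}\int_0^{2\pi}(p^2-p'^2)\,d\theta$, and the fact that the Wigner caustic is traced by $P_{\frac{1}{2}}(\theta)=\frac{1}{2}\big(p(\theta)-p(\theta+\pi)\big)$, double covered as $\theta$ runs over $[0,2\pi]$. The Python sketch below is ours and purely illustrative; the two sample support functions are arbitrary choices.

```python
import numpy as np

# uniform periodic grid: the trapezoidal rule is spectrally accurate here
N = 4096
th = np.linspace(0.0, 2*np.pi, N, endpoint=False)
w = 2*np.pi/N

def length(p):
    # Cauchy's formula: L = int_0^{2pi} p(theta) dtheta
    return np.sum(p(th))*w

def blaschke_area(p, dp):
    # Blaschke's formula: 1/2 int (p^2 - p'^2) dtheta; for any smooth
    # 2pi-periodic p it gives the oriented area of the curve
    # (p cos - p' sin, p sin + p' cos)
    return 0.5*np.sum(p(th)**2 - dp(th)**2)*w

def wigner_area(p, dp):
    # midpoint "support function" P(th) = (p(th) - p(th+pi))/2; theta in
    # [0,2pi) traverses the Wigner caustic twice, hence the extra 1/2
    P = lambda t: 0.5*(p(t) - p(t + np.pi))
    dP = lambda t: 0.5*(dp(t) - dp(t + np.pi))
    return 0.5*blaschke_area(P, dP)

def gap(p, dp):
    # L^2 - 4 pi A - 8 pi |A_wigner|, nonnegative by the improved inequality
    L, A, Aw = length(p), blaschke_area(p, dp), wigner_area(p, dp)
    return L**2 - 4*np.pi*A - 8*np.pi*abs(Aw)

# constant-width oval (only an odd harmonic): equality case
p1 = lambda t: 10 + 0.2*np.cos(3*t)
dp1 = lambda t: -0.6*np.sin(3*t)
# generic oval (an even harmonic breaks constant width): strict inequality
p2 = lambda t: 10 + 0.2*np.cos(2*t) + 0.2*np.cos(3*t)
dp2 = lambda t: -0.4*np.sin(2*t) - 0.6*np.sin(3*t)

print(gap(p1, dp1))   # ~ 0
print(gap(p2, dp2))   # ~ 0.24*pi^2 > 0
```

For the constant-width oval the gap vanishes up to quadrature error, while the even harmonic of the second oval produces a strictly positive gap, in line with the equality case of the theorem.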
It is very interesting that the absolute value of the oriented area of the Wigner caustic improves the classical isoperimetric inequality and also gives the exact link between the area and the length of constant width curves. The family of affine $\lambda$-equidistants arises as the counterpart of parallels or offsets in Euclidean geometry. For us, an affine equidistant is the set of points on chords connecting points of $\M$ at which the tangent lines to $\M$ are parallel, dividing the chord segments between the base points in a fixed ratio $\lambda$, also called the \textit{affine time}. When the ratio $\lambda$ is equal to $\displaystyle\frac{1}{2}$, this set is also known as the \textit{Wigner caustic}. The Wigner caustic of a smooth convex closed curve on the affine symplectic plane was first introduced by Berry, in his celebrated 1977 paper \cite{B1} on the semiclassical limit of Wigner's phase-space representation of quantum states. There are many papers considering affine equidistants, see for instance \cite{C2, CDR1, DMR1, DR1, DRS1, DZ1, G3, GWZ1, RZ1, Z1}, and the literature therein. The Wigner caustic is also known as the \textit{area evolute}, see \cite{C2, G3}. \section{Geometric quantities, affine equidistants and Fourier series} Let $\M$ be a smooth planar curve, i.e. the image of a $C^{\infty}$ smooth map from an interval to $\mathbb{R}^2$. A smooth curve is \textit{closed} if it is the image of a $C^{\infty}$ smooth map from $S^1$ to $\mathbb{R}^2$. A smooth curve is \textit{regular} if its velocity does not vanish. A regular closed curve is \textit{convex} if its signed curvature has a constant sign. An \textit{oval} is a smooth closed convex curve which is simple, i.e. it has no self-intersections. In our case it is enough to consider $C^2$-smooth curves. \begin{defn}\label{parallelpair} A pair $a,b\in\M$ ($a\neq b$) is called a \textit{parallel pair} if the tangent lines to $\M$ at the points $a,b$ are parallel.
\end{defn} \begin{defn}\label{chord} A \textit{chord} passing through a pair $a,b\in\M$ is the line: $$l(a,b)=\left\{\lambda a+(1-\lambda)b\ \big| \lambda\in\mathbb{R}\right\}.$$ \end{defn} \begin{defn}\label{equidistantSet} An affine $\lambda$-equidistant is the following set. $$\Eq_{\lambda}(\M)=\left\{\lambda a+(1-\lambda)b\ \big|\ a,b \text{ is a parallel pair of } \M\right\}.$$ The set $\Eq_{\frac{1}{2}}(\M)$ will be called the \textit{Wigner caustic} of $\M$. \end{defn} Note that, for any given $\lambda\in\mathbb{R}$ we have an equality $\Eq_{\lambda}(\M)=\Eq_{1-\lambda}(\M)$. Thus, the case $\displaystyle\lambda=\frac{1}{2}$ is special. In particular we have also equalities $\Eq_0(\M)=\Eq_1(\M)=\M$. It is well known that if $\M$ is a generic oval, then $\Eq_{\lambda}(\M)$ are smooth closed curves with cusp singularities only \cite{B1, GZ1}, the number of cusps of $\Eq_{\frac{1}{2}}(\M)$ is odd and not smaller than $3$ \cite{B1, G3} and the number of cusps of $\Eq_{\lambda}(\M)$ for a generic value of $\displaystyle\lambda\neq\frac{1}{2}$ is even \cite{DZ1}. \begin{defn} An oval is said to have \textit{constant width} if the distance between every pair of parallel tangent lines is constant. This constant is called the \textit{width} of the curve. \end{defn} \begin{figure}[h] \centering \includegraphics[scale=0.35]{curve_wc.png} \includegraphics[scale=0.35]{curve_eq040.png} \caption{An oval $\M$ and (i) $\Eq_{\frac{1}{2}}(\M)$, (ii) $\Eq_{\frac{2}{5}}(\M)$.} \label{Picture1} \end{figure} Let us recall some basic facts about plane ovals which will be used later. The details can be found in the classical literature \cite{G4, H2}. Let $\M$ be a positively oriented oval. Take a point $O$ inside $\M$ as the origin of our frame. Let $p$ be the oriented perpendicular distance from $O$ to the tangent line at a point on $\M$, and $\theta$ the oriented angle from the positive $x_1$-axis to this perpendicular ray. 
Clearly, $p$ is a single-valued periodic function of $\theta$ with period $2\pi$ and the parameterization of $\M$ in terms of $\theta$ and $p(\theta)$ is as follows \begin{align}\label{ParameterizationM} \gamma(\theta)=\big(\gamma_1(\theta),\gamma_2(\theta)\big)=\big(p(\theta)\cos\theta-p'(\theta)\sin\theta, p(\theta)\sin\theta+p'(\theta)\cos\theta\big). \end{align} The couple $\big(\theta, p(\theta)\big)$ is usually called the \textit{polar tangential coordinate} on $\M$, and $p(\theta)$ its \textit{Minkowski support function}. Then the curvature $\kappa$ of $\M$ is given by \begin{align}\label{CurvatureM} \displaystyle \kappa(\theta)=\frac{d\theta}{ds}=\frac{1}{p(\theta)+p''(\theta)}>0, \end{align} or equivalently, the radius of curvature $\rho$ of $\M$ is given by \begin{align*} \rho(\theta)=\frac{ds}{d\theta}=p(\theta)+p''(\theta). \end{align*} Let $L_{\M}$ and $A_{\M}$ be the length of $\M$ and the area it bounds, respectively. Then one can get that \begin{align}\label{CauchyFormula} L_{\M}=\int_{\M}ds=\int_0^{2\pi}\rho(\theta)d\theta=\int_0^{2\pi}p(\theta)d\theta, \end{align} and \begin{align}\label{BlaschkeFormula} A_{\M} & =\frac{1}{2}\int_{\M}p(\theta)ds\\ \nonumber &=\frac{1}{2}\int_0^{2\pi}p(\theta)\left[p(\theta)+p''(\theta)\right]d\theta=\frac{1}{2}\int_0^{2\pi}\left[p^2(\theta)-p'^2(\theta)\right]d\theta. \end{align} Formulas (\ref{CauchyFormula}) and (\ref{BlaschkeFormula}) are known as \textit{Cauchy's formula} and \textit{Blaschke's formula}, respectively. Since the Minkowski support function of $\M$ is smooth, bounded and $2\pi$-periodic, its Fourier series is of the form \begin{align}\label{Fourierofp} p(\theta)=a_0+\sum_{n=1}^{\infty}\big(a_n\cos n\theta+b_n\sin n\theta\big). \end{align} Differentiation of (\ref{Fourierofp}) with respect to $\theta$ gives \begin{align}\label{Fourierofpprime} p'(\theta)=\sum_{n=1}^{\infty}n\big(-a_n\sin n\theta+b_n\cos n\theta\big).
\end{align} By (\ref{Fourierofp}), (\ref{Fourierofpprime}) and the Parseval equality one can express $L_{\M}$ and $A_{\M}$ in terms of the Fourier coefficients of $p(\theta)$ in the following way: \begin{align} \label{Lengthofmfourier} L_{\M} &=2\pi a_0,\\ \label{Areaofmfourier} A_{\M} &=\pi a_0^2-\frac{\pi}{2}\sum_{n=2}^{\infty}(n^2-1)(a_n^2+b_n^2). \end{align} One can notice that $\gamma(\theta),\gamma(\theta+\pi)$ is a parallel pair of $\M$; hence the parameterization $\gamma_{\lambda}$ of $\Eq_{\lambda}(\M)$ is as follows: \begin{align}\label{ParameterizationEqM} \gamma_{\lambda}(\theta) &=\big(\gamma_{\lambda, 1}(\theta), \gamma_{\lambda, 2}(\theta)\big)\\ \nonumber &=\lambda\gamma(\theta)+(1-\lambda)\gamma(\theta+\pi) \\ \nonumber &=\left(P_{\lambda}(\theta)\cos\theta-P'_{\lambda}(\theta)\sin\theta, P_{\lambda}(\theta)\sin\theta+P'_{\lambda}(\theta)\cos\theta\right), \end{align} where $P_{\lambda}(\theta)=\lambda p(\theta)-(1-\lambda)p(\theta+\pi)$, $\theta\in[0,2\pi]$. Furthermore, if $\displaystyle\lambda=\frac{1}{2}$, then the map $\M\ni\gamma(\theta)\mapsto\gamma_{\frac{1}{2}}(\theta)\in\Eq_{\frac{1}{2}}(\M)$ for $\theta\in[0,2\pi]$ is the double covering of the Wigner caustic of $\M$. \section{Oriented areas of equidistants and the improved isoperimetric inequality} Let $L_{\Eq_{\lambda}(\M)}$ and $\widetilde{A}_{E_{\lambda}(\M)}$ denote the length of $\Eq_{\lambda}(\M)$ and the oriented area of $\Eq_{\lambda}(\M)$, respectively. As in \cite{DZ1}, one can show the following proposition. \begin{prop}\cite{DZ1} Let $\M$ be an oval. Then \begin{enumerate}[(i)] \item if $\displaystyle\lambda\neq\frac{1}{2}$, then $L_{\Eq_{\lambda}(\M)}\leqslant\big(|\lambda|+|1-\lambda|\big)L_{\M}$. In particular, if $\displaystyle\lambda\in\left(0,\frac{1}{2}\right)\cup\left(\frac{1}{2},1\right)$, then $L_{\Eq_{\lambda}(\M)}\leqslant L_{\M}$. \item $2L_{\Eq_{\frac{1}{2}}(\M)}\leqslant L_{\M}$.
\end{enumerate} \end{prop} \begin{proof} The parameterizations of $\M$ and $\Eq_{\lambda}(\M)$ are as in (\ref{ParameterizationM}) and (\ref{ParameterizationEqM}), respectively. Then \begin{align*} L_{\Eq_{\lambda}(\M)} &=\int_0^{2\pi}|\gamma'_{\lambda}(\theta)|d\theta\\ &=\int_0^{2\pi}\big|\lambda\gamma'(\theta)+(1-\lambda)\gamma'(\theta+\pi)\big|d\theta\\ &\leqslant |\lambda|\int_0^{2\pi}|\gamma'(\theta)|d\theta+|1-\lambda|\int_0^{2\pi}|\gamma'(\theta+\pi)|d\theta\\ &=\big(|\lambda|+|1-\lambda|\big)L_{\M}. \end{align*} If $\displaystyle\lambda=\frac{1}{2}$, then the map $\M\ni\gamma(\theta)\mapsto\gamma_{\frac{1}{2}}(\theta)\in\Eq_{\frac{1}{2}}(\M)$ for $\theta\in[0,2\pi]$ is the double covering of the Wigner caustic of $\M$. Thus $2L_{\Eq_{\frac{1}{2}}(\M)}\leqslant L_{\M}$. \end{proof} \begin{thm}\label{ThmBoundedAreas} Let $\M$ be a positively oriented oval of length $L_{\M}$, enclosing a region of area $A_{\M}$. Let $\widetilde{A}_{E_{\lambda}(\M)}$ denote the oriented area of $E_{\lambda}(\M)$. Then \begin{enumerate}[(i)] \item if $\displaystyle\lambda\in\left(0,\frac{1}{2}\right)\cup\left(\frac{1}{2},1\right)$, then \begin{align}\label{ThmA} A_{\M}-\frac{\lambda(1-\lambda)}{\pi}L_{\M}^2\leqslant\widetilde{A}_{E_{\lambda}(\M)}\leqslant (2\lambda-1)^2A_{\M}. \end{align} \item if $\displaystyle\lambda=\frac{1}{2}$, then \begin{align}\label{ThmB} A_{\M}-\frac{L_{\M}^2}{4\pi}\leqslant 2\widetilde{A}_{E_{\frac{1}{2}}(\M)}\leqslant 0. \end{align} \item if $\lambda\in(-\infty,0)\cup(1,\infty)$, then \begin{align}\label{ThmC} (2\lambda-1)^2A_{\M}\leqslant\widetilde{A}_{E_{\lambda}(\M)}\leqslant A_{\M}-\frac{\lambda(1-\lambda)}{\pi}L_{\M}^2. \end{align} \item for all $\displaystyle\lambda\neq 0, \frac{1}{2}, 1$, $\M$ is a curve of constant width if and only if \begin{align}\label{ThmD} \widetilde{A}_{E_{\lambda}(\M)}=A_{\M}-\frac{\lambda(1-\lambda)}{\pi}L_{\M}^2.
\end{align} \item $\M$ is a curve of constant width if and only if \begin{align}\label{ThmE} 2\widetilde{A}_{E_{\frac{1}{2}}(\M)}=A_{\M}-\frac{L_{\M}^2}{4\pi}. \end{align} \end{enumerate} \end{thm} \begin{proof} Let (\ref{ParameterizationEqM}) be the parameterization of $\Eq_{\lambda}(\M)$. Then the oriented area of $\Eq_{\lambda}(\M)$ is equal to \begin{align*} \widetilde{A}_{E_{\lambda}(\M)} &=\frac{1}{2}\int_{\Eq_{\lambda}(\M)}\gamma_{\lambda, 1}d\gamma_{\lambda, 2}-\gamma_{\lambda, 2}d\gamma_{\lambda, 1} \\ &=\frac{1}{2}\int_{0}^{2\pi}\Big[\left(P_{\lambda}(\theta)\cos\theta-P'_{\lambda}(\theta)\sin\theta\right)\left(P_{\lambda}(\theta)+P''_{\lambda}(\theta)\right)\cos\theta \\ &\qquad\qquad\quad +\left(P_{\lambda}(\theta)\sin\theta+P'_{\lambda}(\theta)\cos\theta\right)\left(P_{\lambda}(\theta)+P''_{\lambda}(\theta)\right)\sin\theta\Big]d\theta\\ &=\frac{1}{2}\int_0^{2\pi}\Big[P_{\lambda}^2(\theta)+P_{\lambda}(\theta)P''_{\lambda}(\theta)\Big]d\theta\\ &=\frac{1}{2}\int_0^{2\pi}\Big[P_{\lambda}^2(\theta)-P'^2_{\lambda}(\theta)\Big]d\theta\\ &=\frac{1}{2}\int_0^{2\pi}\Big[\big(\lambda p(\theta)-(1-\lambda)p(\theta+\pi)\big)^2-\big(\lambda p'(\theta)-(1-\lambda)p'(\theta+\pi)\big)^2\Big]d\theta\\ &=\lambda^2\cdot\frac{1}{2}\int_{0}^{2\pi}\left[p^2(\theta)-p'^2(\theta)\right]d\theta+(1-\lambda)^2\cdot\frac{1}{2}\int_0^{2\pi}\left[p^2(\theta+\pi)-p'^2(\theta+\pi)\right]d\theta\\ &\qquad -2\lambda(1-\lambda)\cdot\frac{1}{2}\int_0^{2\pi}\left[p(\theta)p(\theta+\pi)-p'(\theta)p'(\theta+\pi)\right]d\theta\\ &=\left(2\lambda^2-2\lambda+1\right)A_{\M}-2\lambda(1-\lambda)\cdot\frac{1}{2}\int_0^{2\pi}\left[p(\theta)p(\theta+\pi)-p'(\theta)p'(\theta+\pi)\right]d\theta, \end{align*} where the fourth equality follows by integration by parts. Let $\displaystyle \Psi_{\M}=\frac{1}{2}\int_0^{2\pi}\left[p(\theta)p(\theta+\pi)-p'(\theta)p'(\theta+\pi)\right]d\theta$; then the oriented area of $\Eq_{\lambda}(\M)$ takes the following form.
\begin{align} \label{OrientedAreaFormula}\widetilde{A}_{E_{\lambda}(\M)}=(2\lambda^2-2\lambda+1)A_{\M}-2\lambda(1-\lambda)\Psi_{\M}. \end{align} Let us find a formula for $\Psi_{\M}$ in terms of the Fourier coefficients of the Minkowski support function $p(\theta)$. By (\ref{Fourierofp}) and (\ref{Fourierofpprime}) we also have the equalities: \begin{align} \nonumber p(\theta+\pi) &=a_0+\sum_{n=1}^{\infty}(-1)^n\big(a_n\cos(n\theta)+b_n\sin(n\theta)\big), \\ \nonumber p'(\theta+\pi) &=\sum_{n=1}^{\infty}(-1)^nn\big(-a_n\sin(n\theta)+b_n\cos(n\theta)\big),\\ \label{Phithetadef}\Psi_{\M} &=\pi a_0^2-\frac{\pi}{2}\sum_{n=2}^{\infty}(-1)^n(n^2-1)(a_n^2+b_n^2). \end{align} This yields the following bounds on $\Psi_{\M}$: \begin{align}\label{IneqPhitheta} \pi a_0^2-\frac{\pi}{2}\sum_{n=2}^{\infty}(n^2-1)(a_n^2+b_n^2)\leqslant \Psi_{\M} &\leqslant \pi a_0^2+\frac{\pi}{2}\sum_{n=2}^{\infty}(n^2-1)(a_n^2+b_n^2). \end{align} By (\ref{Lengthofmfourier}) and (\ref{Areaofmfourier}) one can rewrite (\ref{IneqPhitheta}) as \begin{align}\label{InePhithetaBetter} A_{\M}\leqslant \Psi_{\M} &\leqslant \frac{L_{\M}^2}{2\pi}-A_{\M}. \end{align} Applying (\ref{InePhithetaBetter}) in (\ref{OrientedAreaFormula}), and using the fact that the map $\M\ni\gamma(\theta)\mapsto\gamma_{\frac{1}{2}}(\theta)\in\Eq_{\frac{1}{2}}(\M)$ for $\theta\in[0,2\pi]$ is the double covering of the Wigner caustic of $\M$, one obtains (\ref{ThmA}), (\ref{ThmB}) and (\ref{ThmC}). To prove (\ref{ThmD}) and (\ref{ThmE}), let us notice that $\M$ is a curve of constant width if and only if the coefficients $a_{2n}, b_{2n}$ for $n\geqslant 1$ in the Fourier series of $p(\theta)$ are all equal to zero \cite{F1, G4}; in that case the formula (\ref{Phithetadef}) for $\Psi_{\M}$ becomes: \begin{align*} \Psi_{\M} &=\pi a_0^2-\frac{\pi}{2}\sum_{n=2}^{\infty}(-1)^n(n^2-1)(a_n^2+b_n^2)\\ &=\pi a_0^2+\frac{\pi}{2}\sum_{n=3,\ n\text{ odd}}^{\infty}(n^2-1)(a_n^2+b_n^2)\\ &=2\pi a_0^2-A_{\M}\\ &=\frac{L_{\M}^2}{2\pi}-A_{\M}.
\end{align*} \end{proof} A simple consequence of the above theorem is the following remark, and also the main result: the improved isoperimetric inequality for planar ovals. \begin{rem} Let $\M$ be a positively oriented oval. Then Theorem \ref{ThmBoundedAreas} gives us that $\widetilde{A}_{E_{\frac{1}{2}}(\M)}\leqslant 0$, which means that the Wigner caustic of $\M$ has the reversed orientation with respect to that of the original curve $\M$. \end{rem} \begin{thm}(The improved isoperimetric inequality)\label{ImprovedIsoperimetricInequalityThm} If $\M$ is an oval of length $L_{\M}$, enclosing a region of area $A_{\M}$, then \begin{align}\label{ImprovedIsoperimetricInequality} L_{\M}^2\geqslant 4\pi A_{\M}+8\pi\big|\widetilde{A}_{E_{\frac{1}{2}}(\M)}\big|, \end{align} where $\widetilde{A}_{E_{\frac{1}{2}}(\M)}$ is the oriented area of the Wigner caustic of $\M$, and equality in (\ref{ImprovedIsoperimetricInequality}) holds if and only if $\M$ is a curve of constant width. \end{thm} \begin{rem} The improved isoperimetric inequality reduces to the classical isoperimetric inequality if and only if $\widetilde{A}_{E_{\frac{1}{2}}(\M)}=0$, that is, when $\M$ has a center of symmetry. \end{rem} \begin{figure}[h] \centering \includegraphics[scale=0.35]{curve_constant_w.png} \caption{An oval $\M_3$ of constant width and the Wigner caustic of $\M_3$. The Minkowski support function of $\M_3$ is $p_3(\theta)=\cos 3\theta+11$.} \label{PictureConstantWidth} \end{figure} \begin{thm}(Barbier's Theorem)\cite{B2} Let $\M$ be a curve of constant width $w$. Then the length of $\M$ is equal to $\pi w$. \end{thm} By Barbier's Theorem and Theorem \ref{ImprovedIsoperimetricInequalityThm} one can get the following corollary. \begin{cor} Let $\M$ be an oval of constant width $w$, enclosing a region of area $A_{\M}$.
Then \begin{align*} A_{\M}=\frac{\pi w^2}{4}-2\left|\widetilde{A}_{E_{\frac{1}{2}}(\M)}\right|, \end{align*} where $\widetilde{A}_{E_{\frac{1}{2}}(\M)}$ is the oriented area of the Wigner caustic of $\M$. \end{cor} One can check that the curve $\M$ with the Minkowski support function $p(\theta)=\cos 3\theta+11$ (see Fig. \ref{PictureConstantWidth}) is an oval of constant width $w=22$, enclosing a region of area $A_{\M}=117\pi$, and that the oriented area of the Wigner caustic of $\M$ is equal to $\widetilde{A}_{E_{\frac{1}{2}}(\M)}=-2\pi$. In Proposition \ref{PropCusps} we present explicit curves for which the Wigner caustic has exactly $2n+1$ cusps (this number must be odd and not smaller than $3$ \cite{B1, G3}). The curve $\M_7$ and $\Eq_{\frac{1}{2}}(\M_7)$ are shown in Fig. \ref{PictureConstantWidth7cusps}. \begin{prop}\label{PropCusps} Let $n$ be a positive integer and let $\M_{2n+1}$ be the curve whose Minkowski support function is $p_{2n+1}(\theta)=\cos\big[(2n+1)\theta\big]+(2n+1)^2+2$. Then $\M_{2n+1}$ is an oval of constant width and $\Eq_{\frac{1}{2}}(\M_{2n+1})$ has exactly $2n+1$ cusps. \end{prop} \begin{proof} The curve $\M_{2n+1}$ is singular if and only if $p_{2n+1}(\theta)+p''_{2n+1}(\theta)=0$ for some $\theta$; since $p_{2n+1}(\theta)+p''_{2n+1}(\theta)=\big(1-(2n+1)^2\big)\cos\big[(2n+1)\theta\big]+(2n+1)^2+2\geqslant 3$, this is impossible. Moreover, $\M_{2n+1}$ is a curve of constant width, because $p_{2n+1}(\theta)+p_{2n+1}(\theta+\pi)=2(2n+1)^2+4$ is constant. A cusp of the Wigner caustic of an oval appears exactly when the curvatures of the original curve at the two points of a parallel pair are equal \cite{B1}. One can check that the equation $\kappa(\theta)=\kappa(\theta+\pi)$ for $\theta\in\left[0,\pi\right)$ holds if and only if $\displaystyle\theta=\frac{\pi+2k\pi}{4n+2}$ for $k\in\{0,1,2,\ldots,2n\}$, which gives exactly $2n+1$ cusps. \end{proof} \begin{figure}[h] \centering \includegraphics[scale=0.35]{curve_constant_w3.png} \includegraphics[scale=0.35]{curve_constant_w4.png} \caption{ An oval $\M_7$ of constant width and $\Eq_{\frac{1}{2}}(\M_7)$.
The Minkowski support function of $\M_7$ is $p_7(\theta)=\cos 7\theta+51$.} \label{PictureConstantWidth7cusps} \end{figure} \section{The stability of the improved isoperimetric inequality} A bounded convex subset of $\mathbb{R}^n$ is said to be an \textit{$n$-dimensional convex body} if it is closed and has interior points. Let $\mathcal{C}^n$ denote the set of all $n$-dimensional convex bodies. There are many important inequalities in convex geometry and differential geometry, such as the isoperimetric inequality, the Brunn--Minkowski inequality, and the Aleksandrov--Fenchel inequality. Their stability properties are of great interest in geometric analysis; see \cite{F2, G2, H3, PX1, S2} and the literature therein. An inequality in convex geometry can be written as \begin{align}\label{IneqConvexGeometry} \Phi(K)\geqslant 0, \end{align} where $\Phi:\mathcal{C}^n\to\mathbb{R}$ is a real-valued function and (\ref{IneqConvexGeometry}) holds for all $K\in\mathcal{C}^n$. Let $\mathcal{C}^n_{\Phi}$ be the subset of $\mathcal{C}^n$ for which equality in (\ref{IneqConvexGeometry}) holds. For example, let $n=2$ and, as in the previous sections, let $L_{\partial K}$ denote the length of the boundary of $K$ and $A_{\partial K}$ the area enclosed by $\partial K$ (i.e. the area of $K$), and set $\Phi(K)=L_{\partial K}^2-4\pi A_{\partial K}$. Then the inequality $\Phi(K)\geqslant 0$ is the classical isoperimetric inequality in $\mathbb{R}^2$, and in this case $\mathcal{C}^2_{\Phi}$ is the set of disks. In this section we will study stability properties associated with (\ref{IneqConvexGeometry}): we ask whether $K$ must be close to a member of $\mathcal{C}^n_{\Phi}$ whenever $\Phi(K)$ is close to zero. Let $d:\mathcal{C}^n\times\mathcal{C}^n\to\mathbb{R}$ be a function which measures the deviation between two convex bodies.
Such a function $d$ should satisfy the following two conditions: \begin{enumerate}[(i)] \item $d(K,L)\geqslant 0$ for all $K,L\in\mathcal{C}^n$, \item $d(K,L)=0$ if and only if $K=L$. \end{enumerate} If $\Phi, \mathcal{C}^n_{\Phi}$ and $d$ are given, then the \textit{stability problem} associated with (\ref{IneqConvexGeometry}) is as follows. \textit{Find positive constants $c,\alpha$ such that for each $K\in\mathcal{C}^n$, there exists $N\in \mathcal{C}^n_{\Phi}$ such that \begin{align}\label{StabilityIneq} \Phi(K)\geqslant c\,d^{\alpha}(K,N). \end{align}} From this point on let us assume that $n=2$ and, motivated by Theorem \ref{ImprovedIsoperimetricInequalityThm}, let \begin{align}\label{IneqConvexProblem} \Phi(K)=L_{\partial K}^2-4\pi A_{\partial K}-8\pi\left|\widetilde{A}_{\Eq_{\frac{1}{2}}(\partial K)}\right|\geqslant 0. \end{align} From Theorem \ref{ImprovedIsoperimetricInequalityThm} one can see that $\mathcal{C}^2_{\Phi}$ consists of the bodies of constant width. Let us recall two such deviation measures. Let $K$ and $N$ be two convex bodies with respective support functions $p_{\partial K}$ and $p_{\partial N}$. To measure the deviation between $K$ and $N$ one usually uses the \textit{Hausdorff distance}, \begin{align}\label{HausdorffDistance} d_{\infty}(K,N)=\max_{\theta}\Big|p_{\partial K}(\theta)-p_{\partial N}(\theta)\Big|. \end{align} Another such measure corresponds to the $L_2$-metric in the function space; it is defined by \begin{align}\label{LTwoDistance} d_2(K,N)=\left(\int_0^{2\pi}\Big|p_{\partial K}(\theta)-p_{\partial N}(\theta)\Big|^2d\theta\right)^{\frac{1}{2}}. \end{align} It is obvious that $d_{\infty}(K,N)=0$ (or $d_2(K,N)=0$) if and only if $K=N$. \begin{lem}\label{LemmaTrig} Let $c_k, d_k\in\mathbb{R}$ for $k\in\{1,2,\ldots,n\}$. Then \begin{align*} \max_{\theta}\left|\sum_{k=1,\ k\text{ odd}}^n\big(c_k\cos k\theta+d_k\sin k\theta\big)\right|\leqslant\max_{\theta}\left|\sum_{k=1}^n\big(c_k\cos k\theta+d_k\sin k\theta\big)\right|.
\end{align*} \end{lem} \begin{proof} Let \begin{align*} f_{odd}(\theta) &=\sum_{k=1,\ k\text{ odd}}^n\big(c_k\cos k\theta+d_k\sin k\theta\big),\\ f_{even}(\theta) &=\sum_{k=1,\ k\text{ even}}^n\big(c_k\cos k\theta+d_k\sin k\theta\big),\\ f(\theta) & =f_{odd}(\theta)+f_{even}(\theta). \end{align*} One can see that $f_{odd}$ is bounded, $2\pi$-periodic and satisfies $f_{odd}(\theta)=-f_{odd}(\theta+\pi)$. Let $\theta_0$ be an argument for which $\displaystyle f_{odd}(\theta_0)=\max_{\theta}f_{odd}(\theta)$; then $\displaystyle f_{odd}(\theta_0+\pi)=\min_{\theta}f_{odd}(\theta)=-f_{odd}(\theta_0)$. Because $f_{even}$ is $\pi$-periodic, $f_{even}(\theta_0)=f_{even}(\theta_0+\pi)$. One can see that: \begin{itemize} \item if $f_{even}(\theta_0)\geqslant 0$, then $\displaystyle\max_{\theta}|f(\theta)|\geqslant |f(\theta_0)|=f(\theta_0)\geqslant f_{odd}(\theta_0)=\max_{\theta}|f_{odd}(\theta)|$; \item if $f_{even}(\theta_0)<0$, then $\displaystyle\max_{\theta}|f(\theta)|\geqslant |f(\theta_0+\pi)|=-f(\theta_0+\pi)\geqslant -f_{odd}(\theta_0+\pi)=\max_{\theta}|f_{odd}(\theta)|$. \end{itemize} \end{proof} \begin{defn} Let $p_{\M}$ be the Minkowski support function of a positively oriented oval $\M$ of length $L_{\M}$. Then \begin{align}\label{SupportWM} p_{W_{\M}}(\theta)=\frac{L_{\M}}{2\pi}+\frac{p_{\M}(\theta)-p_{\M}(\theta+\pi)}{2} \end{align} is the support function of a curve $W_{\M}$, which will be called the \textit{Wigner caustic type curve associated with $\M$}. \end{defn} \begin{prop} Let $W_{\M}$ be the Wigner caustic type curve associated with an oval $\M$. Then $W_{\M}$ has the following properties: \begin{enumerate}[(i)] \item $W_{\M}$ is an oval of constant width, \item $L_{W_{\M}}=L_{\M}$, \item $\Eq_{\frac{1}{2}}(W_{\M})=\Eq_{\frac{1}{2}}(\M)$, \item $A_{W_{\M}}\geqslant A_{\M}$ and the equality holds if and only if $\M$ is a curve of constant width, \item $W_{\M}=\M$ if and only if $\M$ is a curve of constant width.
\end{enumerate} \end{prop} \begin{proof} By (\ref{SupportWM}), to prove that $W_{\M}$ is an oval it is enough to show that $\rho_{W_{\M}}(\theta)>0$. By (\ref{Fourierofp}) and (\ref{CurvatureM}) the radius of curvature of $W_{\M}$ is equal to \begin{align} \rho_{W_{\M}}(\theta) &=p_{W_{\M}}(\theta)+p''_{W_{\M}}(\theta)\\ \label{CurvatureWM} &=a_0+\sum_{n=1,\ n\text{ odd}}^{\infty}(-n^2+1)(a_n\cos n\theta+b_n\sin n\theta). \end{align} Because $\displaystyle\rho_{\M}(\theta)=a_0+\sum_{n=1}^{\infty}(-n^2+1)(a_n\cos n\theta+b_n\sin n\theta)>0$, the inequality $\rho_{W_{\M}}>0$ also holds by (\ref{CurvatureWM}). This is a consequence of the fact that $\displaystyle\max_{\theta}\left|\sum_{n\text{ odd}}(-n^2+1)(a_n\cos n\theta+b_n\sin n\theta)\right|$ does not exceed $\displaystyle\max_{\theta}\left|\sum_n(-n^2+1)(a_n\cos n\theta+b_n\sin n\theta)\right|$; see Lemma \ref{LemmaTrig}. To check that $W_{\M}$ is a curve of constant width, notice that $p_{W_{\M}}(\theta)+p_{W_{\M}}(\theta+\pi)=\frac{L_{\M}}{\pi}$ is constant. By (\ref{CauchyFormula}) one can get \begin{align*} L_{W_{\M}} &=\int_0^{2\pi}p_{W_{\M}}(\theta)d\theta=\int_0^{2\pi}\left(\frac{L_{\M}}{2\pi}+\frac{p_{\M}(\theta)-p_{\M}(\theta+\pi)}{2}\right)d\theta=L_{\M}. \end{align*} By (\ref{ParameterizationEqM}) the support function of the Wigner caustic of $\M$ is $\displaystyle\frac{p(\theta)-p(\theta+\pi)}{2}$, and one can check that this is also the support function of the Wigner caustic of $W_{\M}$; hence $\Eq_{\frac{1}{2}}(W_{\M})=\Eq_{\frac{1}{2}}(\M)$. Finally, using the improved isoperimetric inequality it is easy to show that $A_{W_{\M}}\geqslant A_{\M}$, with equality if and only if $\M$ is a curve of constant width. \end{proof} \begin{thm}\label{ThmStabIneqMax} Let $K$ be a strictly convex domain of area $A_{\partial K}$ and perimeter $L_{\partial K}$, and let $\widetilde{A}_{\Eq_{\frac{1}{2}}(\partial K)}$ denote the oriented area of the Wigner caustic of $\partial K$. Let $W_{K}$ denote the convex body for which $\partial W_{K}$ is the Wigner caustic type curve associated with $\partial K$.
Then \begin{align}\label{StabIneqMax} L_{\partial K}^2-4\pi A_{\partial K}-8\pi\left|\widetilde{A}_{\Eq_{\frac{1}{2}}(\partial K)}\right|\geqslant 4\pi^2 d_{\infty}^2(K, W_K), \end{align} where equality holds if and only if $\partial K$ is a curve of constant width. \end{thm} \begin{proof} By (\ref{Fourierofp}), (\ref{Fourierofpprime}), (\ref{CauchyFormula}) and (\ref{BlaschkeFormula}), the support functions $p_{\partial K}$ and $p_{\partial W_K}$ have the following Fourier series: \begin{align}\label{FourierOfMWM} p_{\partial K}(\theta) &=a_0+\sum_{n=1}^{\infty}(a_n\cos n\theta+b_n\sin n\theta),\\ \nonumber p_{\partial W_K}(\theta) &=a_0+\sum_{n=1,\ n\text{ odd}}^{\infty}(a_n\cos n\theta+b_n\sin n\theta). \end{align} One can also compute the Fourier expansion of $\Phi$ (see (\ref{IneqConvexProblem})): \begin{align}\label{FourierOfPhi} \Phi(K) &=L_{\partial K}^2-4\pi A_{\partial K}-8\pi\left|\widetilde{A}_{\Eq_{\frac{1}{2}}(\partial K)}\right|\\ \nonumber &=2\pi^2\sum_{n=2,\ n\text{ even}}^{\infty}(n^2-1)(a_n^2+b_n^2). \end{align} One can check that $|a_n\cos n\theta+b_n\sin n\theta|\leqslant\sqrt{a_n^2+b_n^2}$, and then by (\ref{HausdorffDistance}) and H\"older's inequality: \begin{align*} d_{\infty}(K, W_K) &= \max_{\theta}\Big|p_{\partial K}(\theta)-p_{\partial W_K}(\theta)\Big| \\ &=\max_{\theta}\left|\sum_{n=2,\ n\text{ even}}^{\infty}(a_n\cos n\theta+b_n\sin n\theta)\right|\\ &\leqslant\max_{\theta}\left(\sum_{n=2,\ n\text{ even}}^{\infty}|a_n\cos n\theta+b_n\sin n\theta|\right)\\ &\leqslant\sum_{n=2,\ n\text{ even}}^{\infty}\frac{1}{\sqrt{n^2-1}}\cdot \sqrt{n^2-1}\sqrt{a_n^2+b_n^2}\\ &\leqslant\sqrt{\sum_{n=2,\ n\text{ even}}^{\infty}\frac{1}{n^2-1}}\cdot \sqrt{\sum_{n=2,\ n\text{ even}}^{\infty}(n^2-1)(a_n^2+b_n^2)}\\ &=\sqrt{\frac{1}{2}}\cdot\sqrt{\frac{\Phi(K)}{2\pi^2}}, \end{align*} where we used the telescoping identity $\displaystyle\sum_{n=2,\ n\text{ even}}^{\infty}\frac{1}{n^2-1}=\frac{1}{2}$. The equality holds if and only if $a_{2m}=b_{2m}=0$ for all $m\in\mathbb{N}$, that is, if and only if $\partial K$ is a curve of constant width.
\end{proof} \begin{thm}\label{ThmStabIneqL2} Under the assumptions of Theorem \ref{ThmStabIneqMax}, one gets \begin{align}\label{StabIneqL2} L_{\partial K}^2-4\pi A_{\partial K}-8\pi\left|\widetilde{A}_{\Eq_{\frac{1}{2}}(\partial K)}\right|\geqslant 6\pi d_2^2(K, W_K), \end{align} where equality holds if and only if $\partial K$ is a curve of constant width, or the Minkowski support function of $\partial K$ is of the form \begin{align*} p_{\partial K}(\theta)=a_0+a_2\cos 2\theta+b_2\sin 2\theta+\sum_{n=1,\ n\text{ odd}}^{\infty}(a_{n}\cos n\theta+b_{n}\sin n\theta). \end{align*} \end{thm} \begin{proof} By (\ref{FourierOfMWM}) and (\ref{FourierOfPhi}), \begin{align*} d_2^2(K,W_K) &=\int_0^{2\pi}\Big|p_{\partial K}(\theta)-p_{\partial W_K}(\theta)\Big|^2d\theta\\ &=\int_0^{2\pi}\left|\sum_{n=2,\ n\text{ even}}^{\infty}(a_n\cos n\theta+b_n\sin n\theta)\right|^2d\theta\\ &=\pi\sum_{n=2,\ n\text{ even}}^{\infty}(a_n^2+b_n^2)\\ &\leqslant\frac{1}{6\pi}\cdot 2\pi^2\sum_{n=2,\ n\text{ even}}^{\infty}(n^2-1)(a_n^2+b_n^2)\\ &=\frac{1}{6\pi}\Phi(K), \end{align*} where we used the fact that $n^2-1\geqslant 3$ for $n\geqslant 2$. The equality holds if and only if $a_{2m}=b_{2m}=0$ for all $m\geqslant 2$, that is, if and only if $\partial K$ is a curve of constant width or $\displaystyle p_{\partial K}(\theta)=a_0+a_2\cos 2\theta+b_2\sin 2\theta+\sum_{n=1,\ n\text{ odd}}^{\infty}(a_{n}\cos n\theta+b_{n}\sin n\theta)$. \end{proof} Let us consider the convex body $K$ whose Minkowski support function is given by (\ref{ExampleSupportFun}); see Fig. \ref{FigExampleStabIneq}. We will check how close the right-hand sides of the stability inequalities (\ref{StabIneqMax}) and (\ref{StabIneqL2}) come to being optimal. \begin{align}\label{ExampleSupportFun} p_{\partial K}(\theta)=10+2\cos 2\theta-\frac{1}{3}\sin 3\theta-\frac{1}{4}\cos 4\theta.
\end{align} \begin{figure}[h] \centering \includegraphics[scale=0.4]{curve_ex_stab_ineq.PNG} \caption{A convex body $K$ for which $p_{\partial K}(\theta)=10+2\cos 2\theta-\frac{1}{3}\sin 3\theta-\frac{1}{4}\cos 4\theta$ is its Minkowski support function, the Wigner caustic type curve associated with $\partial K$ (dashed) and the Wigner caustic of $\partial K$.} \label{FigExampleStabIneq} \end{figure} Then \begin{align} p_{\partial W_{K}}(\theta) & =10-\frac{1}{3}\sin 3\theta,\\ p_{\Eq_{\frac{1}{2}}(\partial K)}(\theta) &=-\frac{1}{3}\sin 3\theta. \end{align} One can check that \begin{align} L_{\partial K} &= 20\pi,\\ A_{\partial K} &=\frac{26809}{288}\pi, \\ \widetilde{A}_{\Eq_{\frac{1}{2}}(\partial K)} &=-\frac{2\pi}{9},\\ \label{lhs} L_{\partial K}^2-4\pi A_{\partial K}-8\pi\left|\widetilde{A}_{\Eq_{\frac{1}{2}}(\partial K)}\right| &=25.875\pi^2, \end{align} \begin{align} \label{rhs1} 4\pi^2d_{\infty}^2(K, W_K) =4\pi^2\left(\max_{\theta}\left|2\cos 2\theta-\frac{1}{4}\cos 4\theta\right|\right)^2 &=20.25\pi^2,\\ \label{rhs2} 6\pi d_2^2(K,W_K) =6\pi\int_0^{2\pi}\left(2\cos 2\theta-\frac{1}{4}\cos 4\theta\right)^2d\theta &=24.375\pi^2. \end{align} Then by (\ref{lhs}) and (\ref{rhs1}) the stability inequality (\ref{StabIneqMax}) in Theorem \ref{ThmStabIneqMax} takes the form \begin{align} 25.875\pi^2\geqslant 20.25\pi^2, \end{align} and by (\ref{lhs}) and (\ref{rhs2}) the stability inequality (\ref{StabIneqL2}) in Theorem \ref{ThmStabIneqL2} takes the form \begin{align} 25.875\pi^2\geqslant 24.375\pi^2. \end{align} \section*{Acknowledgements} The author would like to thank Professor Wojciech Domitrz for the helpful discussions. \bibliographystyle{amsalpha}
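As an independent sanity check outside the original text, the values computed for the example $p_{\partial K}(\theta)=10+2\cos 2\theta-\frac{1}{3}\sin 3\theta-\frac{1}{4}\cos 4\theta$ can be reproduced numerically from Cauchy's and Blaschke's formulas alone; the sketch below (plain NumPy, rectangle rule on a uniform periodic grid, which is spectrally accurate for trigonometric polynomials) is not part of the paper:

```python
import numpy as np

# Support function of the example and its derivative on a periodic grid.
N = 1 << 16
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dt = 2.0 * np.pi / N
p  = 10 + 2*np.cos(2*t) - np.sin(3*t)/3 - np.cos(4*t)/4
dp = -4*np.sin(2*t) - np.cos(3*t) + np.sin(4*t)

L = np.sum(p) * dt                       # Cauchy's formula
A = 0.5 * np.sum(p**2 - dp**2) * dt      # Blaschke's formula

# Support function of the (double-covered) Wigner caustic: (p(t) - p(t+pi))/2.
P  = (p  - np.roll(p,  N // 2)) / 2
dP = (dp - np.roll(dp, N // 2)) / 2
A_wc = 0.5 * np.sum(P**2 - dP**2) * dt / 2   # halved: the curve is doubly covered

pi = np.pi
assert abs(L - 20*pi) < 1e-6
assert abs(A - 26809*pi/288) < 1e-6
assert abs(A_wc - (-2*pi/9)) < 1e-6

# Left-hand side of the improved isoperimetric inequality, and both
# right-hand sides of the stability inequalities.
Phi = L**2 - 4*pi*A - 8*pi*abs(A_wc)
diff = p - (10 - np.sin(3*t)/3)              # p_{dK} - p_{dW_K}
d_inf = np.max(np.abs(diff))                 # Hausdorff distance d_inf(K, W_K)
d2_sq = np.sum(diff**2) * dt                 # d_2(K, W_K)^2

assert abs(Phi - 25.875*pi**2) < 1e-6
assert abs(4*pi**2*d_inf**2 - 20.25*pi**2) < 1e-6
assert abs(6*pi*d2_sq - 24.375*pi**2) < 1e-6
```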
TITLE: definition of distribution function of random variable QUESTION [0 upvotes]: please help me to understand fully following definition : i am using this book http://www.math.harvard.edu/~knill/books/KnillProbability.pdf page 79,i can't understand some part,in spite of this fact that on next page there are explanations of these,for example part C,does it means that as $h$ approaches 0 then distribution function converges to actual function?then why not it is continuous from left?about non decreasing,i have read that follows from ${X < x } \subset {X < y }$ for $x < y$ for part $b$ $ P[{X < -n}] -> 0$ and $P[{X < n}] -> 1.$ what does this part means?thanks in advance REPLY [2 votes]: For C. it only means that for every $x_0\in\mathbb{R}$, $$ F_X(x_0+h) \xrightarrow[h\to0^+]{} F_X(x_0) $$ Why not from the left? Consider a random variable $X$ which has probability $1$ of having taking value $78$ (for instance). That is, $X$ is almost surely equal to $78$, there is not much randomness there... then, $$ F_X(x) = \begin{cases} 0 & \text{if } x < 78\\ 1 & \text{if } x \geq 78\\ \end{cases} $$ which is certainly not left-continuous (but is right-continuous). For non-decreasing, well: the probability that $X \leq 10$ is definitely not more than the probability that $X \leq 11$ (since if $X \leq 10$, you also have $X \leq 11$). This holds for any $a\leq b$ instead of 10 and 11, and is equivalent to saying $F_X$ is non-decreasing (by definition of $F_X(x)=\mathbb{P}\{X\leq x\}$). Finally, when $n$ goes to $-\infty$, the probability that $\mathbb{P}\{X\leq n\}$ does go to $0$ (the smaller the value of $n$, the smaller the probability that the random value taken by $X$ will be below $n$).
{"set_name": "stack_exchange", "score": 0, "question_id": 793531}
TITLE: Matrix calculation QUESTION [4 upvotes]: The matrix is $ M= \frac{d}{d\theta} e^{A+\theta B} \mid _{\theta = 0} $ where $A$ and $B$ are both $n\times n$ matrices. I was thinking solving it by introducing the equations: $\dot x = (A + \theta B)x,\ x(0) = I$ with solution $x = X(t,\theta) $, where $M = \frac{dX(1,0)}{d\theta}$, I was stuck then, thanks for any suggestions : ) REPLY [1 votes]: To flesh out 5PMs comment: $$\exp(A + \theta B) = \sum_{k \geqslant 0} \frac{(A + \theta B)^k}{k!} = \sum_{k \geqslant 0} \frac{A^k + k A^{k-1}(\theta B) + \theta^2( \ldots )}{k!} $$Differentiating term by term, we get $$\sum_{k \geqslant 0} \frac{k A^{k-1} B + \theta( \ldots)}{k!}$$So when we evaluate at zero, we obtain: $$\sum_{k \geqslant 0} \frac{A^{k-1}}{(k-1)!} B = \exp(A)B$$
{"set_name": "stack_exchange", "score": 4, "question_id": 284667}
TITLE: Balls in spaces of operators QUESTION [12 upvotes]: I am interested in some geometrical aspects of spaces $L(E)$, of bounded operators on a given Banach space $E$. I am unable to estimate if my problem deserves to be asked at MO, but let me try. Is there an infinite-dimensional Banach space (non-separable preferably) $E$ such that for some non-zero $T\in L(E)$ the set $$\{S\in L(E)\colon \|S-T\|=\|S+T\|\}$$ contains an open ball? In fact, I am more interested in the negation: Is there a Banach space such that for none non-zero $T\in L(E)$ this can happen? I cannot (dis)prove it even if $E$ is a Hilbert space. REPLY [9 votes]: In what follows I show that such an operator exists if $E$ can be written (isometrically) as the $\ell_\infty$-direct sum of two (nonzero) subspaces (I have not tried the Hilbert space case, but I started writing my answer before the edits were made to the question.) Let $E = X\oplus_\infty Y$, where $X$ and $Y$ are nonzero (infinite dimensional, if you like). Each $V\in L(E)$ satisfies $\Vert V \Vert = \max ( \Vert P_X V \Vert, \Vert P_Y V\Vert )$, where $P_X$ and $P_Y$ denote the projections onto the complemented subspaces $X$ and $Y$. Let $T= P_X$ and $S=3P_Y$, so that $\Vert T-S\Vert =3=\Vert T+S\Vert $. To construct the desired example, we show that if $\Vert R-S\Vert <1$, then $\Vert T-R\Vert = \Vert T+R\Vert $. So take such $R$ and note that then $\Vert P_YR \Vert >2$ and $\Vert P_XR\Vert<1$. It follows that $$ \Vert T-R\Vert = \max (\Vert P_X(T-R) \Vert ,\Vert P_Y(T-R)\Vert ) = \max (\Vert T-P_XR \Vert ,\Vert P_YR\Vert ) = \Vert P_Y R\Vert $$ (since $\Vert T-P_XR \Vert \leq \Vert T\Vert + \Vert P_XR \Vert$<2 and $\Vert P_Y R\Vert >2$). Similarly, we conclude that $\Vert T+R\Vert = \Vert P_Y R\Vert$, hence $\Vert T+R\Vert = \Vert T-R\Vert $. 
Edit: Note that since each $U\in L(X\oplus_1 Y)$ satisfies $\Vert U\Vert = \max (\Vert UP_X\Vert , \Vert UP_Y\Vert )$, a similar construction gives an example of such a ball for spaces isometrically isomorphic to $X\oplus_1 Y$ for nonzero $X$ and $Y$.
{"set_name": "stack_exchange", "score": 12, "question_id": 77527}
TITLE: Proving the decomposition $N = a^a b^b$ is unique or not QUESTION [3 upvotes]: Suppose $a,b,c,d$ be natural numbers. If $a \ge b > 0$, $c \ge d > 0$, $a^a b^b = c^c d^d$, then $a = c$ and $b = d$. For example, can we find another decomposition of $N$ when $N = 8^8 4^4$ ? Perhaps this decomposition is unique, I think. But it's difficult to prove. REPLY [0 votes]: Well, apparently, the decomposition of $8^84^4$ is unique - one may show that by a pretty short exhaustive search. The general question, however, is much harder to crack... Take a look at this. $$\begin{align}a&=4\,478\,976&=2^{11}\cdot3^7\\ b&=1 \\ c&=2\,985\,984&=2^{12}\cdot3^6\\ d&=1\,679\,616&=2^8\cdot3^8 \end{align}$$ In short, $a=24g,\;c=16g,\;d=9g$, where $g=GCD(a,c,d)=2^8\cdot3^6$. Now $a^ab^b=\left(2^{11}\cdot3^7\right)^{24g}=2^{11\cdot24g}\cdot3^{7\cdot24g}=2^{264g}\cdot3^{168g}$. On the other hand, $c^cd^d=\left(2^{12}\cdot3^6\right)^{16g}\cdot\left(2^8\cdot3^8\right)^{9g}=\left(2^{192g}\cdot3^{96g}\right)\left(2^{72g}\cdot3^{72g}\right)=2^{264g}\cdot3^{168g}$. Which pretty much answers the titular question with a solid "no". I don't know whether this is the "smallest" example in any meaningful sense. On a side note, the similar-looking but completely unrelated equation $a!\cdot b!=c!\cdot d!$ has solutions galore.
{"set_name": "stack_exchange", "score": 3, "question_id": 1626814}
TITLE: Integrating with respect to different variables QUESTION [10 upvotes]: I have started reading a book on differential equations and it says something like: $$\frac{dx}{x} = k \, dt$$ Integrating both sides gives $$\log x = kt + c$$ How is it that I can 'integrate both sides here' when I am integrating one side with respect to $x$ yet I am integrating the other side with respect to $t$? REPLY [0 votes]: You could also view integration as a linear operator you apply to both sides of the equation. Applying the same operator to both sides still gives you a valid equation.
{"set_name": "stack_exchange", "score": 10, "question_id": 161868}
TITLE: Show $0-1$ Knapsack is polynomially reducible to this problem QUESTION [1 upvotes]: I have already posted this question here but have not received an answer so I am cross-posting with hope to reach a larger amount of mathematicians: Let $T=\{1,\cdots,n\}$ and consider the following mixed integer linear problem $P$: $$ \max \sum_{t\in T} x_{t} $$ subject to $$ I_t=I_{t-1}+a_{t}-x_{t}\quad t\in T\\ I_t \le M_t(1-y_t)\quad t\in T\\ x_t \le M_ty_t \quad t\in T\\ I_t,x_t \ge 0 \quad t\in T\\ y_t \in \{0,1\}\quad t\in T $$ $I_0$ and $a_t$ are given parameters in $\mathbb{R}^+$ and $M_t$ is a large constant. This problem is quite similar to a lot-sizing problem with capacities $M_t$, production variables $x_t$, demands $a_t$, in which we have constraints $I_t=I_{t-1}\color{red}{-}a_{t}\color{red}{+}x_{t}$ (but no constraints $I_t \le M_t(1-y_t)$). I would like to show that $P$ is NP-hard. The decision version of $P$ is clearly in NP. I am trying to show that the binary knapsack problem reduces to $P$ in polynomial time. 
My try: I consider the following knapsack problem: $$ \min\{ \sum_{t \in T}c_t y_t\;|\;\sum_{t\in T}v_t y_t \ge b, \; y_t\in \{0,1\} \} $$ and solve an instance of $P$ with: $M_t = v_t\quad \forall t \in T$ $a_t =0 \quad \forall t=1,...,n-1$ $a_n = b$ $I_0 = 0$ Variables $I_t$ can be eliminated by noting that $$ I_t = I_0 + \sum_{j=1}^t(a_j-x_j) $$ and so the decision version of $P$ is equivalent to finding a feasible solution of $$ I_0 + \sum_{j=1}^t(a_j-x_j) \ge 0 \quad t \in T\\ I_0 + \sum_{j=1}^t(a_j-x_j) \le M_t(1-y_t) \quad t \in T \\ x_t \le M_ty_t \quad t\in T\\ x_t \ge 0 \quad t\in T\\ y_t \in \{0,1\}\quad t\in T $$ With the chosen data, we get: $$ b-\sum_{j=1}^t x_j \ge 0 \quad t \in T\\ b- \sum_{j=1}^t x_j \le v_t(1-y_t) \quad t \in T \\ x_t \le v_t y_t \quad t\in T\\ x_t \ge 0 \quad t\in T\\ y_t \in \{0,1\}\quad t\in T $$ Now, if I were able to show that $b-\sum_{j=1}^t x_j = 0 \; \forall t \in T$, by combining it with $x_t \le v_t y_t \; \forall t \in T$, I would get $$ b \le \sum_{t\in T} v_t y_t $$ which would finish the proof. For every $t\in T$ such that $y_t =1$ this holds, since the equations become $b-\sum_{j=1}^t x_j \ge 0$ and $b-\sum_{j=1}^t x_j \le 0$. Is there a way to make this argument work? How can I deal with the cases for which $y_t =0$? I am pretty sure that if $M_t=M\; \; \forall t \in T$, then $P$ is no longer NP-hard. Can anyone confirm this? Another intuitive approach would be to reduce the lot-sizing problem to $P$, but I did not get very far. Any help is appreciated! REPLY [2 votes]: Your constraints imply that \begin{align*} y_t=0 &\implies x_t=0\text{ and }I_t=I_{t-1}+a_t\\ y_t=1 &\implies I_t=0\text{ and }x_t=I_{t-1}+a_t \end{align*} As a consequence, if the problem is feasible then the objective value is $a_0+a_1+\cdots+a_T$ for every feasible solution. Feasibility can be decided by looking for a path from $0$ to $T$ in the following digraph $G=(V,A)$.
The node set is $V=\{0,1,\ldots,T\}$ and the arc set is \begin{multline*} A=\{(0,j)\ :\ I_0+a_1+\cdots+a_t\leqslant M_t\text{ for all }t\in\{1,\ldots,j\}\}\\ \cup\{(i,j)\ :\ 1\leqslant i<j\leqslant T,\ a_{i+1}+\cdots+a_t\leqslant M_t\text{ for all }t\in\{i+1,\ldots,j\}\} \end{multline*} There is a one-to-one correspondence between feasible solutions for your problem and paths $(i_0=0,i_1,\ldots,i_k=T)$ given by \[y_t=1\iff t\in\{i_1,\ldots,i_k\}.\]
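The reachability test described above can be sketched in a few lines of Python; the instance data ($I_0$, the $a_t$ and the $M_t$) below are made up purely for illustration, and `arc` encodes the two arc families of $A$:

```python
from collections import deque

# Sketch of the feasibility test from the answer: P is feasible iff
# there is a 0 -> T path in the digraph G = (V, A) defined above.
# I_0, a_t and M_t here are made-up illustrative data.
I0 = 1.0
a = [0.0, 2.0, 1.0, 3.0, 0.5]      # a_1, ..., a_T  (a[0] is a_1)
M = [4.0, 4.0, 2.0, 5.0, 5.0]      # M_1, ..., M_T
T = len(a)

def accumulated(i, t):
    """Inventory built up over cycles i+1, ..., t if it is never emptied."""
    return (I0 if i == 0 else 0.0) + sum(a[i:t])

def arc(i, j):
    """Arc (i, j) exists iff capacity M_t is respected for t = i+1, ..., j."""
    return all(accumulated(i, t) <= M[t - 1] for t in range(i + 1, j + 1))

# Breadth-first search from node 0.
seen, queue = {0}, deque([0])
while queue:
    i = queue.popleft()
    for j in range(i + 1, T + 1):
        if j not in seen and arc(i, j):
            seen.add(j)
            queue.append(j)
print("feasible:", T in seen)       # prints: feasible: True for this data
```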
{"set_name": "stack_exchange", "score": 1, "question_id": 261510}
\begin{document} \title[Work and Entropy Production in Information--Driven Engines] {Relations Between Work and Entropy Production for General Information--Driven, Finite--State Engines} \author{Neri Merhav} \address{The Andrew \& Erna Viterbi Faculty of Electrical Engineering, Technion, Haifa 32000, Israel.\\ E--mail: merhav@ee.technion.ac.il} \begin{abstract} We consider a system model of a general finite--state machine (ratchet) that simultaneously interacts with three kinds of reservoirs: a heat reservoir, a work reservoir, and an information reservoir, the latter being taken to be a running digital tape whose symbols interact sequentially with the machine. As has been shown in earlier work, this finite--state machine can act as a demon (with memory), which creates a net flow of energy from the heat reservoir into the work reservoir (thus extracting useful work) at the price of increasing the entropy of the information reservoir. Under very few assumptions, we propose a simple derivation of a family of inequalities that relate the work extraction to the entropy production. These inequalities can be seen as either upper bounds on the extractable work or as lower bounds on the entropy production, depending on the point of view. Many of these bounds are relatively easy to calculate and they are tight in the sense that equality can be approached arbitrarily closely. In their basic forms, these inequalities are applicable to any finite number of cycles (and not only asymptotically), and to a general input information sequence (possibly correlated), which is not even assumed to be stationary. Several known results are obtained as special cases. \end{abstract} \indent{\bf Keywords}: information exchange, second law, entropy production, Maxwell demon, work extraction, finite--state machine.
\maketitle \section{Introduction} The fact that information processing plays a very interesting role in thermodynamics was already recognized in the second half of the nineteenth century, when Maxwell proposed his celebrated gedanken experiment, known as Maxwell's demon \cite{maxwelldemon}. In this experiment, a demon with access to information on the momenta and positions of the particles in a gas, at every given time, is capable of separating fast--moving particles from slower ones, thus forming a temperature difference without supplying external energy, which appears to contradict the second law of thermodynamics. A few decades later, Szilard \cite{szilardeng} pointed out that it is possible to convert heat into work when considering a box with a single particle. In particular, using a certain protocol of measurement and control, one may be able to produce work in each cycle of the system, which is, again, in apparent contradiction with the second law, since no external energy is injected. These intriguing observations created considerable dispute and controversy in the scientific community. Several additional thought--provoking gedanken experiments have ultimately formed the basis for a vast amount of theoretical work on the role of informational ingredients in thermodynamics. An incomplete list of modern articles along these lines includes \cite{AJM09}, \cite{BS13}, \cite{BS14a}, \cite{BS14b}, \cite{BBS14}, \cite{BMC16a}, \cite{BMC16b}, \cite{BMC16c}, \cite{CGQ15}, \cite{Deffner13}, \cite{DJ13}, \cite{EV11}, \cite{GDC13}, \cite{HBS14}, \cite{HA14}, \cite{HE14}, \cite{HS14}, \cite{MJ12}, \cite{MQJ13}, \cite{Merhav15}, \cite{PHS15}, \cite{SU12}, and \cite{SU13}. These articles can be divided into two main categories.
In the first category, the informational ingredient is in the form of measurement and feedback control (just like in Maxwell's demon and Szilard's engine), whereas the second category is about physical systems that include, beyond the traditional heat reservoir (heat bath), also a work reservoir and an {\it information reservoir}, which interacts with the system entropically, but with no energy exchange. The information reservoir, which is a relatively new concept in physics \cite{BS14a}, \cite{BS14b}, \cite{DJ13}, may be, for instance, a large memory register or a digital tape carrying a long sequence of bits, which interact sequentially with the system and may change during this interaction. Basically, the main results in all these articles are generalized forms of the second law of thermodynamics, where the entropy increase contains an extra term that accounts for information exchange, such as mutual information (for systems with measurement and feedback control) or Shannon entropy increase (for systems with an information reservoir). In contrast to the early thought experiments, which were typically described in general terms of an ``intelligent agent'' and not quite in full detail, Mandal and Jarzynski \cite{MJ12} were the first to devise a concrete model of a system that behaves basically like a demon. Specifically, they described and analyzed a simple autonomous system, based on a finite--state Markov process, which, when operating as an engine, converts heat into mechanical work and, at the same time, writes bits serially on a tape, which plays the role of an information reservoir. Here, the word ``writes'' refers to a situation where the entropy of the output bits recorded on the tape (after the interaction) is larger than the entropy of the input bits (before the interaction).
It can also act as an eraser, which performs the reverse process of losing energy while ``deleting'' information, that is, decreasing the entropy. Several variants of this physical model, based on quite similar ideas, were offered in later articles. These include: \cite{BS13} -- where the running tape can move both back and forth, \cite{BS14a} -- where the interaction time with each bit is a random variable rather than a fixed parameter, \cite{BS14b} -- with three different points of view on information--driven systems, \cite{BBS14} -- with the upper energy level being time--varying, \cite{CGQ15} -- with a model based on enzyme kinetics, \cite{Deffner13} -- with a quantum model, \cite{HA14} -- with a thermal tape, and \cite{MQJ13} -- which concerns an information--driven refrigerator, where, instead of work, heat is transferred from a cold reservoir into a hotter one. In a recent series of interesting papers, \cite{BMC16a}, \cite{BMC16b}, \cite{BMC16c}, Boyd, Mandal and Crutchfield considered a system model of a demon (ratchet) that is implemented by a general finite--state machine (FSM), which simultaneously interacts with a heat reservoir (a heat bath at fixed temperature), a work reservoir (i.e., a given mass that may be lifted by the machine), and an information reservoir (a digital tape, as described above). The state variable of the FSM, which embodies the ratchet's memory of past input and output information, interacts with the current bit of the information reservoir during one unit of time, a.k.a.\ the interaction interval (or cycle), and then the machine produces the next state and the output bit, before it turns to process the next input bit, etc. The operation of the ratchet during one cycle is then characterized by the joint probability distribution of the next state and the output bit given the current state and the input bit.
Perhaps the most important result in \cite{BMC16a}, \cite{BMC16b} and \cite{BMC16c} is that for a stationary input process (i.e., the incoming sequence of tape bits), the work extraction per cycle is asymptotically upper bounded by $kT$ times the difference between the Shannon entropy rate of the tape output process and that of the input process (both in units of nats\footnote{$1~\mbox{nat}=\log_2\mbox{e}$ bits. Entropy defined using the natural base logarithm has units of nats.} per cycle), i.e., eq.\ (5) of \cite{BMC16a} (here $k$ is the Boltzmann constant and $T$ is the temperature). In addition to this general result, various conclusions are drawn in those papers. For example, the uselessness of ratchet memory when the input process is memoryless (i.i.d.), as well as its usefulness (for maximizing work extraction) when the input process is correlated, are both discussed in depth, and several interesting examples are demonstrated. While the above mentioned upper bound on the work extraction, \cite[eq.\ (5)]{BMC16a}, seems reasonable and interesting, some concerns arise upon reading its derivation in \cite[Appendix A]{BMC16a}, and these concerns are discussed in some detail in the Appendix. In this paper, we consider a similar setup, but our focus is on the derivation of a family of alternative inequalities that relate work extraction to entropy production. The newly proposed inequalities have the following advantages. \begin{enumerate} \item The approach taken and the derivation are very simple. \item The underlying assumptions about the input process, the ratchet, and the other parts of the system, are rather mild. \item The inequalities apply to any finite number of cycles. \item For a stationary input process, the inequalities are simple and the resulting bounds are relatively easy to calculate. \item The inequalities are tight in the sense that equality can be approached arbitrarily closely. \item Some known results are obtained as special cases.
\end{enumerate} The remaining part of the paper is organized as follows. In Section 2, we establish some notation conventions. In Section 3, we describe the physical system model. In Section 4, we derive our basic work/entropy--production inequality. In Section 5, we discuss this inequality and explore it from various points of view. Finally, in Section 6, we derive a more general family of inequalities, which have the flavor of fluctuation theorems. \section{Notation Conventions} Throughout the paper, random variables will be denoted by capital letters, specific values they may take will be denoted by the corresponding lower case letters, and their alphabets will be denoted by calligraphic letters. Random vectors, their realizations and their alphabets will be denoted, respectively, by capital letters, the corresponding lower case letters, and the corresponding calligraphic letters, all superscripted by their dimension. For example, the random vector $X^n=(X_1,\ldots,X_n)$, ($n$ -- positive integer) may take a specific vector value $x^n=(x_1,\ldots,x_n)$ in $\calX^n$, which is the $n$--th order Cartesian power of $\calX$, the alphabet of each component of this vector. The probability of an event $\calE$ will be denoted by $P[\calE]$. The indicator function of an event $\calE$ will be denoted by $\calI[\calE]$. The Shannon entropy of a discrete random variable $X$ will be denoted\footnote{Following the customary notation conventions in information theory, $H(X)$ should not be understood as a function $H$ of the random outcome of $X$, but as a functional of the probability distribution of $X$.} by $H(X)$, that is, \begin{equation} \label{entropydef} H(X)=-\sum_{x\in\calX}P(x)\ln P(x), \end{equation} where $\{P(x),~x\in\calX\}$ is the probability distribution of $X$. When we wish to emphasize the dependence of the entropy on the underlying distribution $P$, we denote it by $\calH(P)$. 
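As a quick numerical illustration of this definition (a Python sketch; the example distributions are arbitrary), $H(X)$ ranges from $0$ for a deterministic distribution to $\ln|\calX|$ for the uniform one:

```python
import math

# Numeric illustration of the entropy definition above (in nats):
# H(X) is maximized by the uniform distribution and vanishes for a
# deterministic one.  The distributions below are arbitrary examples.
def H(P):
    """Shannon entropy of a probability vector, in nats."""
    return -sum(p * math.log(p) for p in P if p > 0)

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
point = [1.0, 0.0, 0.0, 0.0]

assert abs(H(uniform) - math.log(4)) < 1e-12   # H = ln|X| for uniform
assert 0 < H(skewed) < math.log(4)             # strictly in between
assert H(point) == 0.0                         # no uncertainty
print(f"H(uniform) = {H(uniform):.4f} nats, H(skewed) = {H(skewed):.4f} nats")
```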
The binary entropy function will be defined as \begin{equation} h(p)=-p\ln p-(1-p)\ln(1-p),~~~~0\le p\le 1. \end{equation} Similarly, for a discrete random vector $X^n=(X_1,\ldots,X_n)$, the joint entropy is denoted by $H(X^n)$ (or by $H(X_1,\ldots,X_n)$), and defined as \begin{equation} \label{jointentropydef} H(X^n)=-\sum_{x^n\in\calX^n}P(x^n)\ln P(x^n). \end{equation} The conditional entropy of a generic random variable $U$ over a discrete alphabet $\calU$, given another generic random variable $V\in\calV$, is defined as \begin{equation} \label{condentropydef1} H(U|V)=-\sum_{u\in\calU}\sum_{v\in\calV}P(u,v)\ln P(u|v), \end{equation} which should not be confused with the conditional entropy given a {\it specific realization} of $V$, i.e., \begin{equation} \label{condentropydef2} H(U|V=v)=-\sum_{u\in\calU}P(u|v)\ln P(u|v). \end{equation} The mutual information between $U$ and $V$ is \begin{eqnarray} I(U;V)&=&H(U)-H(U|V)\nonumber\\ &=&H(V)-H(V|U)\nonumber\\ &=&H(U)+H(V)-H(U,V), \end{eqnarray} where it should be kept in mind that in all three definitions, $U$ and $V$ can themselves be random vectors. Similarly, the conditional mutual information between $U$ and $V$ given $W$ is \begin{eqnarray} I(U;V|W)&=&H(U|W)-H(U|V,W)\nonumber\\ &=&H(V|W)-H(V|U,W)\nonumber\\ &=&H(U|W)+H(V|W)-H(U,V|W). \end{eqnarray} The Kullback--Leibler divergence (a.k.a.\ relative entropy or cross-entropy) between two distributions $P$ and $Q$ on the same alphabet $\calX$, is defined as \begin{equation} D(P\|Q)=\sum_{x\in\calX}P(x)\ln\frac{P(x)}{Q(x)}. 
\end{equation} \section{System Model Description} As in the previous articles on models of physical systems with an information reservoir, our system consists of the following ingredients: a heat bath at temperature $T$, a work reservoir, here designated by a wheel loaded by a mass $m$, an information reservoir in the form of a digital input tape, a corresponding output tape, and a certain device, which is the demon, or ratchet, in the terminology of \cite{BMC16a}, \cite{BMC16b}, \cite{BMC16c}. The ratchet interacts (separately) with each one of the other parts of the system (see Fig.\ 1). \begin{figure}[ht] \hspace*{3cm}\input{mjs.pstex_t} \caption{The physical system model.} \end{figure} The input tape consists of a sequence of symbols, $x_1,x_2,\ldots$, from a finite alphabet $\calX$ (say, binary symbols where $\calX=\{0,1\}$), that are serially fed into the ratchet, which in turn processes these symbols sequentially, while going through a sequence of internal states, $s_1, s_2, \ldots$, taking values in a finite set $\calS$. The ratchet outputs another sequence of symbols, $y_1, y_2,\ldots$, which are elements of the same alphabet, $\calX$, as the input symbols. The state of the ratchet is an internal variable that encodes the memory that the ratchet has with regard to its history. In the $n$--th cycle of the process ($n = 1,2,\ldots$), while the ratchet is at state $s_n$, it is fed by the input symbol $x_n$ and it produces the pair $(y_n,s_{n+1})$ in a stochastic manner, according to a given conditional distribution, $P(y_n,s_{n+1}|x_n,s_n)$, where $y_n$ is the output symbol at the $n$--th cycle and $s_{n+1}$ is the next state. We now describe the mechanism that dictates this conditional distribution, along with the concurrent interactions among the ratchet, the heat bath and the work reservoir.
The $n$--th cycle of the process occurs during the time interval, $(n-1)\tau\le t < n\tau$, in other words, the duration of each cycle is $\tau$ seconds, where $\tau > 0$ is a given parameter. During each such interval, the symbol and the state form together a Markov jump process, $(\xi_t,\sigma_t)$, whose state\footnote{Note that from this point and onward, there are two different notions of ``state'', one of which is the state of the ratchet, which is just $s_n$ (or $\sigma_t$), and the other one is the state of the Markov process, which is the pair $(x_n,s_n)$ (or $(\xi_t,\sigma_t)$). To avoid confusion, we will use the terms ``ratchet state'' and ``Markov state'' correspondingly, whenever there is room for ambiguity.} set is the product set $\calX\times\calS$ and whose matrix of Markov--state transition rates is $M[(\xi,\sigma)\to(\xi^\prime,\sigma^\prime)]$, $\xi,\xi^\prime\in\calX$, $\sigma,\sigma^\prime\in\calS$. The random Markov--state transitions of this process are caused by spontaneous thermal fluctuations that result from the interaction with the heat bath. The Markov process is initialized at time $t=(n-1)\tau$ according to $(\xi_{(n-1)\tau},\sigma_{(n-1)\tau})=(x_n,s_n)$. At the end of this interaction interval, i.e., at time $t=n\tau - 0$, when the process is in its final state $(\xi_{n\tau-0},\sigma_{n\tau-0})$, the ratchet records the output symbol as $y_n=\xi_{n\tau-0}$ and the next ratchet state becomes $s_{n+1}=\sigma_{n\tau-0}$, and then the $(n+1)$--st cycle begins in the same manner, etc.
Denoting by $\Pi_t(\xi,\sigma)$ the probability of finding the Markov process in state $(\xi,\sigma)$ at time $t$, it is clear from the above description, that the conditional distribution $P(y_n,s_{n+1}|x_n,s_n)$, that was mentioned before, is the solution $\{\Pi_{n\tau-0}(y,s)\}$ to the master equations (see, e.g., \cite[Chap.\ 5]{vanKampen}), $$\frac{\mbox{d}\Pi_t(\xi,\sigma)}{\mbox{d}t}=\sum_{\xi^\prime,\sigma^\prime} \{\Pi_t(\xi^\prime,\sigma^\prime)M[(\xi^\prime,\sigma^\prime)\to(\xi,\sigma)] -\Pi_t(\xi,\sigma)M[(\xi,\sigma)\to(\xi^\prime,\sigma^\prime)]\},$$ when the initial condition is $\Pi_{(n-1)\tau}(\xi,\sigma)=\calI\{(\xi,\sigma)= (x_n,s_n)\}$. Associated with each state, $(\xi,\sigma)$, of the Markov process, there is a given energy $E(\xi,\sigma)=mg\cdot \Delta(\xi,\sigma)$, $\Delta(\xi,\sigma)$ being the height level of the mass $m$ (relative to some reference height associated with an arbitrary Markov state). As the Markov process jumps from $(\xi,\sigma)$ to $(\xi^\prime,\sigma^\prime)$, the ratchet lifts the mass by $\Delta(\xi^\prime,\sigma^\prime)-\Delta(\xi,\sigma)$, thus performing an amount of work given by $E(\xi^\prime,\sigma^\prime)-E(\xi,\sigma)$, whose origin is heat extracted from the heat bath (of course, the direction of the flow of energy between the heat bath and the work reservoir is reversed when these energy differences change their sign). It should be pointed out that the input tape does not supply energy to the ratchet, in other words, at the switching times, $t=n\tau$, although the state of the Markov process changes from $(\xi_{n\tau-0},\sigma_{n\tau-0})=(y_n,s_{n+1})$ to $(\xi_{n\tau},\sigma_{n\tau})=(x_{n+1},s_{n+1})$, this switching is not assumed to be accompanied by a change in energy (the mass is neither raised nor lowered). 
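The cycle mechanism described above can be sketched numerically: integrating the master equation over one interaction interval yields the conditional distribution $P(y_n,s_{n+1}|x_n,s_n)$. The following Python sketch uses a small, randomly generated rate matrix (illustrative only, not a model from this paper):

```python
import numpy as np

# Minimal numerical sketch of one interaction interval: integrate the
# master equation above to obtain P(y_n, s_{n+1} | x_n, s_n).  The four
# Markov states and the rate matrix are illustrative, randomly drawn.
states = [("0", "A"), ("1", "A"), ("0", "B"), ("1", "B")]
n = len(states)

rng = np.random.default_rng(1)
M = rng.uniform(0.1, 1.0, size=(n, n))      # off-diagonal rates
np.fill_diagonal(M, 0.0)

def propagate(pi0, tau=2.0, steps=20000):
    """Euler integration of dPi/dt = Pi*M - Pi*(total escape rates)."""
    pi = pi0.copy()
    dt = tau / steps
    out_rate = M.sum(axis=1)                # rate of leaving each state
    for _ in range(steps):
        pi = pi + dt * (pi @ M - pi * out_rate)
    return pi

# Deterministic initial condition (x_n, s_n) = ("0", "A"):
pi0 = np.zeros(n)
pi0[0] = 1.0
pi_tau = propagate(pi0)
assert abs(pi_tau.sum() - 1.0) < 1e-6       # probability is conserved
for (y, s_next), prob in zip(states, pi_tau):
    print(f"P(y_n={y}, s_next={s_next} | x_n=0, s_n=A) = {prob:.4f}")
```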
In other words, the various energy levels, $E(\xi,\sigma)$, have only a relative meaning, and so, after $N$ cycles, the total amount of work carried out by the ratchet is given by \begin{equation} W_N=\sum_{n=1}^N[E(y_n,s_{n+1})-E(x_n,s_n)]. \end{equation} It will be assumed that the sequence of input symbols is governed by a stochastic process, which is designated by $X_1,X_2,\ldots$, and which obeys a given probability law $P$, that is, \begin{equation} \mbox{Pr}\{X_1=x_1,~X_2=x_2,\ldots, X_n=x_n\}=P(x_1,x_2,\ldots,x_n), \end{equation} for every positive integer $n$ and every $(x_1,x_2,\ldots,x_n)\in\calX^n$, where $P(x_1,x_2,\ldots,x_n)$ is the probability distribution function. No special assumptions will be made concerning the process (not even stationarity) unless specified explicitly. Following the notation conventions described in Section 2, the notation of the input sequence using capital $X$ emphasizes that this is a random process. By the same token, when we wish to emphasize the induced randomness of the ratchet state sequence and the output sequence, we denote them by $\{S_n\}$ and $\{Y_n\}$, respectively. To summarize, our model consists of two sets of stochastic processes on two different levels: one level lies in the larger time scale, which is discrete (indexed by the integer $n$), and this is where the processes $\{X_n\}$, $\{Y_n\}$ and $\{S_n\}$ take place. The probability distributions of these processes are denoted by the letter $P$. The other level is in the smaller time scale, which is continuous, and this is where the Markov--jump pair process $\{(\xi_t,\sigma_t)\}$ takes place during each interaction interval of length $\tau$. The joint probability distribution of $(\xi_t,\sigma_t)$ is denoted by $\Pi_t$. The connection between the two kinds of processes is that at times $t=(n-1)\tau$, $n=1,2,\ldots$, $(\xi_t,\sigma_t)$ is set to $(X_n,S_n)$, and at times $t=n\tau-0$, $(Y_n,S_{n+1})$ is set to $(\xi_t,\sigma_t)$.
\section{The Basic Work/Entropy--Production Inequality} As said, we are assuming that within each interaction interval, $(n-1)\tau\le t < n\tau$, the pair $(\xi_t,\sigma_t)$ is a Markov jump process. For convenience of the exposition, let us temporarily shift the origin and redefine this time interval to be $0\le t < \tau$. Since each Markov state $(\xi,\sigma)$, is associated with energy level $E(\xi,\sigma)$, the equilibrium distribution is the canonical distribution, \begin{equation} \Pi_{\mbox{\tiny eq}}(\xi,\sigma)=\frac{e^{-\beta E(\xi,\sigma)}}{Z(\beta)}, \end{equation} where $\beta=\frac{1}{kT}$ is the inverse temperature and \begin{equation} Z(\beta)=\sum_{(\xi,\sigma)\in\calX\times\calS} e^{-\beta E(\xi,\sigma)}. \end{equation} The Markovity of the process implies that $D(\Pi_t\|\Pi_{\mbox{\tiny eq}})$ is monotonically non--increasing in $t$ (see, e.g., \cite[Chap.\ V.5]{vanKampen}, \cite[Theorem 1.6]{Kelly79}, \cite[Section 4.4]{CT06}), and so, \begin{equation} \label{htheorem} D(\Pi_\tau\|\Pi_{\mbox{\tiny eq}})\le D(\Pi_0\|\Pi_{\mbox{\tiny eq}}), \end{equation} which is clearly equivalent to \begin{equation} \label{div} \sum_{(\xi,\sigma)\in\calX\times\calS} [\Pi_{\tau}(\xi,\sigma)-\Pi_0(\xi,\sigma)]\cdot \ln\frac{1}{\Pi_{\mbox{\tiny eq}}(\xi,\sigma)}\le \calH(\Pi_\tau)-\calH(\Pi_0). \end{equation} Since \begin{equation} \label{p2w} \ln\frac{1}{\Pi_{\mbox{\tiny eq}}(\xi,\sigma)}=\ln Z(\beta)+\beta E(\xi,\sigma) \equiv\ln Z(\beta)+\beta mg\Delta(\xi,\sigma), \end{equation} the left--hand side (l.h.s.) of (\ref{div}) gives the average work per cycle (in units of $kT$), and the right--hand side (r.h.s.) is the difference between the entropy of the final Markov state within the cycle, $(\xi_\tau,\sigma_\tau)$, and the entropy of the initial Markov state, $(\xi_0,\sigma_0)$. 
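The monotonicity of $D(\Pi_t\|\Pi_{\mbox{\tiny eq}})$ invoked above is easy to verify numerically. The following Python sketch integrates the master equation for an arbitrary four--state example with Metropolis rates (which satisfy detailed balance with respect to the canonical distribution; the energy values are illustrative):

```python
import numpy as np

# Numerical check of the monotonicity used above: for transition rates
# obeying detailed balance w.r.t. the canonical distribution, the
# divergence D(Pi_t || Pi_eq) is non-increasing in t.  The four energy
# levels below are illustrative, not taken from the paper.
beta = 1.0
E = np.array([0.0, 0.5, 1.2, 2.0])              # E(xi, sigma)
pi_eq = np.exp(-beta * E)
pi_eq /= pi_eq.sum()                            # canonical distribution

n = len(E)
M = np.zeros((n, n))                            # transition rates
for i in range(n):
    for j in range(n):
        if i != j:
            # Metropolis rates satisfy detailed balance w.r.t. pi_eq
            M[i, j] = min(1.0, np.exp(-beta * (E[j] - E[i])))

def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

pi = np.array([1.0, 0.0, 0.0, 0.0])             # start far from equilibrium
out = M.sum(axis=1)                             # total escape rates
dt = 1e-3
prev = kl(pi, pi_eq)
for _ in range(5000):                           # Euler steps of the master eq.
    pi = pi + dt * (pi @ M - pi * out)
    cur = kl(pi, pi_eq)
    assert cur <= prev + 1e-9                   # monotone non-increasing
    prev = cur
print("final D(Pi_t || Pi_eq) =", round(prev, 8))
```

Each Euler step here is multiplication by the stochastic matrix $I+\mbox{d}t\,Q$, which preserves $\Pi_{\mbox{\tiny eq}}$, so the divergence decrease holds step by step and not only in the continuous-time limit.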
Returning to the notation of the discrete time processes (indexed by $n$), we have then just shown that $$\left<\Delta W_n\right>\equiv \left<E(Y_n,S_{n+1})\right>-\left<E(X_n,S_n)\right>\le kT\cdot[H(Y_n,S_{n+1})-H(X_n,S_n)],$$ and so, the total average work after $N$ cycles is upper bounded by \begin{equation} \label{basic} \left<W_N\right>\equiv \sum_{n=1}^N\left<\Delta W_n\right>\le kT\cdot\sum_{n=1}^N [H(Y_n,S_{n+1})-H(X_n,S_n)]. \end{equation} Eq.\ (\ref{basic}) serves as our basic work/entropy--production inequality. A slightly different form is the following: \begin{eqnarray} \label{alternative} \frac{\left<W_N\right>}{kT}&\le& \sum_{n=1}^N [H(Y_n|S_{n+1})-H(X_n|S_n)]+ \sum_{n=1}^N [H(S_{n+1})-H(S_n)]\nonumber\\ &=&\sum_{n=1}^N [H(Y_n|S_{n+1})-H(X_n|S_n)]+ H(S_{N+1})-H(S_1). \end{eqnarray} The first sum in the last expression is the (conditional) entropy production associated with the input--output relation of the system, whereas the term $H(S_{N+1})-H(S_1)$ can be understood as the contribution of the ratchet state to the net entropy production throughout the entire process of $N$ cycles. If the ratchet has many states and $N$ is not too large, the latter contribution might be significant, but if the number of ratchet states, $|\calS|$, is fixed, then the relative contribution of the ratchet--state entropy production term, which cannot exceed $\ln|\calS|$, becomes negligible compared to the input--output entropy production term for large $N$. In particular, if we divide both sides of the inequality by $N$, then as $N\to\infty$, the term $\frac{\ln|\calS|}{N}$ tends to zero, and so, the average work per cycle is asymptotically upper bounded by $\frac{kT}{N}\sum_{n=1}^N [H(Y_n|S_{n+1})-H(X_n|S_n)]$.
This expression is different from the general upper bound of \cite{BMC16a}, \cite{BMC16b}, \cite{BMC16c}, where it was argued that $\left<W_N\right>/NkT$ is asymptotically upper bounded by \begin{equation} \label{bmc} \frac{1}{N}[H(Y^N)-H(X^N)]= \frac{1}{N}\sum_{n=1}^N[H(Y_n|Y^{n-1})-H(X_n|X^{n-1})]. \end{equation} While both the first term in (\ref{alternative}) and (\ref{bmc}) involve sums of differences between conditional output and input entropies, the conditionings being used in the two bounds are substantially different. Our bound suggests that the relevant information ``memorized'' by both the input process and the output process, is simply the ratchet state that is coupled to it, rather than its own past, as in (\ref{bmc}). These conditionings on the states can be understood to be the residual input--output entropy production that is {\it not} part of the entropy production of the ratchet state (which is in general, correlated to the input and output). Moreover, the last line of (\ref{alternative}) is typically easier to calculate than (\ref{bmc}), as will be discussed and demonstrated in the sequel. Yet another variant of (\ref{basic}) is obtained when the chain rule of the entropy is applied in the opposite manner, i.e., \begin{equation} \frac{\left<W_N\right>}{kT}\le \sum_{n=1}^N [H(Y_n)-H(X_n)]+ \sum_{n=1}^N [H(S_{n+1}|Y_n)-H(S_n|X_n)]. \end{equation} Here the first term is the input--output entropy production and the second term is the conditional entropy production of the ratchet state. However, this form is less useful than (\ref{alternative}). \section{Discussion on the Bounds and Their Variants} In this section, we discuss eqs.\ (\ref{basic}) and (\ref{alternative}) as well as several additional variants of these inequalities. 
\subsection{Tightness and Achievability} The first important point concerning inequality (\ref{basic}) is that it is potentially tight in the sense that the ratio between the two sides of eq.\ (\ref{basic}) may approach unity arbitrarily closely. To see this, consider first the case where $\Pi_0(\xi,\sigma)$ is close to $\Pi_{\mbox{\tiny eq}}(\xi,\sigma)$ in the sense that \begin{equation} \Pi_0(\xi,\sigma)=\Pi_{\mbox{\tiny eq}}(\xi,\sigma)\cdot[1+\epsilon(\xi,\sigma)],~~~(\xi,\sigma)\in\calX\times\calS \end{equation} where $\epsilon\equiv\max_{\xi,\sigma}|\epsilon(\xi,\sigma)| \ll 1$ and obviously, \begin{equation} \label{avgeps} \sum_{\xi,\sigma}\Pi_{\mbox{\tiny eq}}(\xi,\sigma)\epsilon(\xi,\sigma)=0 \end{equation} since $\{\Pi_0(\xi,\sigma)\}$ must sum up to unity. Assume also that $\Pi_\tau(\xi,\sigma)$ is even much closer to $\Pi_{\mbox{\tiny eq}}(\xi,\sigma)$ in the sense that the ratio $\Pi_\tau(\xi,\sigma)/\Pi_{\mbox{\tiny eq}}(\xi,\sigma)$ is between $1-\epsilon^2$ and $1+\epsilon^2$. Now, the work per cycle is given by \begin{eqnarray} \left<\Delta W\right>&=&\sum_{\xi,\sigma}\Pi_\tau(\xi,\sigma)E(\xi,\sigma)- \sum_{\xi,\sigma}\Pi_0(\xi,\sigma)E(\xi,\sigma)\nonumber\\ &=& \sum_{\xi,\sigma}\Pi_{\mbox{\tiny eq}}(\xi,\sigma)E(\xi,\sigma)+O(\epsilon^2)- \sum_{\xi,\sigma}\Pi_{\mbox{\tiny eq}}(\xi,\sigma)[1+\epsilon(\xi,\sigma)]E(\xi,\sigma)\nonumber\\ &=&-\sum_{\xi,\sigma}\Pi_{\mbox{\tiny eq}}(\xi,\sigma)\epsilon(\xi,\sigma)E(\xi,\sigma)+O(\epsilon^2). 
\end{eqnarray} On the other hand, the entropy production per cycle is given by \begin{eqnarray} \Delta\calH&\equiv&\calH(\Pi_\tau)-\calH(\Pi_0)\\ &=& \sum_{\xi,\sigma}\Pi_0(\xi,\sigma)\ln \Pi_0(\xi,\sigma)- \sum_{\xi,\sigma}\Pi_\tau(\xi,\sigma)\ln \Pi_\tau(\xi,\sigma)\\ &=& \sum_{\xi,\sigma}\Pi_{\mbox{\tiny eq}}(\xi,\sigma)[1+\epsilon(\xi,\sigma)] \ln\{\Pi_{\mbox{\tiny eq}}(\xi,\sigma)[1+\epsilon(\xi,\sigma)]\}-\nonumber\\ & &\sum_{\xi,\sigma}\Pi_{\mbox{\tiny eq}}(\xi,\sigma)\ln \Pi_{\mbox{\tiny eq}}(\xi,\sigma)+O(\epsilon^2)\\ &=&\sum_{\xi,\sigma}\Pi_{\mbox{\tiny eq}}(\xi,\sigma)\epsilon(\xi,\sigma)\ln \Pi_{\mbox{\tiny eq}}(\xi,\sigma)+O(\epsilon^2), \end{eqnarray} where the last line is obtained using (\ref{avgeps}). Now, the difference $kT\Delta\calH-\left<\Delta W\right>$ is given by $kT\cdot[D(\Pi_0\|\Pi_{\mbox{\tiny eq}})-D(\Pi_\tau\|\Pi_{\mbox{\tiny eq}})]$. But, \begin{eqnarray} D(\Pi_0\|\Pi_{\mbox{\tiny eq}})&=&\sum_{\xi,\sigma}\Pi_0(\xi,\sigma)\ln[1+\epsilon(\xi,\sigma)]\\ &=&\sum_{\xi,\sigma}\Pi_{\mbox{\tiny eq}}(\xi,\sigma) [1+\epsilon(\xi,\sigma)]\ln[1+\epsilon(\xi,\sigma)]\\ &=&\frac{1}{2}\sum_{\xi,\sigma}\Pi_{\mbox{\tiny eq}}(\xi,\sigma)\epsilon^2(\xi,\sigma)+o(\epsilon^2)\\ &=&O(\epsilon^2) \end{eqnarray} and similarly, $D(\Pi_\tau\|\Pi_{\mbox{\tiny eq}})=O(\epsilon^4)$. We have seen then that while both $kT\Delta\calH$ and $\left<\Delta W\right>$ scale linearly with $\{\epsilon(\xi,\sigma)\}$ (for small $\epsilon(\xi,\sigma)$), the difference between them scales with $\{\epsilon^2(\xi,\sigma)\}$. Thus, if both $\left<\Delta W\right>$ and $kT\Delta\calH$ are positive, the ratio between them may be arbitrarily close to unity, provided that $\{\epsilon(\xi,\sigma)\}$ are sufficiently small. 
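The $O(\epsilon^2)$ scaling of the divergence, which drives the tightness argument above, can be checked numerically; in the Python sketch below the equilibrium distribution and the perturbation direction are drawn at random (illustrative only):

```python
import numpy as np

# Check the second-order scaling behind the tightness argument: for
# Pi_0 = Pi_eq * (1 + eps*f) with sum_i Pi_eq(i)*eps*f(i) = 0, the
# divergence D(Pi_0 || Pi_eq) behaves like (1/2)*sum_i Pi_eq(i)*
# (eps*f(i))^2, i.e. it is O(eps^2).  Pi_eq and f are drawn at random.
rng = np.random.default_rng(0)
pi_eq = rng.dirichlet(np.ones(6))
f = rng.normal(size=6)
f -= np.sum(pi_eq * f)               # enforce the zero-mean condition

for eps in (1e-1, 1e-2, 1e-3):
    pi0 = pi_eq * (1 + eps * f)
    D = float(np.sum(pi0 * np.log(pi0 / pi_eq)))
    quad = 0.5 * float(np.sum(pi_eq * (eps * f) ** 2))
    print(f"eps={eps:g}: D={D:.3e}, quadratic approx={quad:.3e}, "
          f"ratio={D / quad:.4f}")   # ratio -> 1 as eps -> 0
```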
Even if $\Pi_0$ and $\Pi_{\mbox{\tiny eq}}$ differ considerably, it is still possible to approach the entropy production bound, but this may require many small steps (in the spirit of quasi--static processes in classical thermodynamics), i.e., a chain of many systems of the type of Fig.\ 1, where the output bit--stream of each one of them serves as the input bit--stream to the next one. This approach was already hinted at in \cite{Merhav15} and later also in \cite{BMC16c}. If we think of $\Pi_0$ as the canonical distribution with respect to some Hamiltonian $E_0(\xi,\sigma)$ (which is always possible, say, by defining $E_0(\xi,\sigma)=-kT\ln \Pi_0(\xi,\sigma)$), then we can design a long sequence of distributions, $\Pi^{(1)}, \Pi^{(2)}, \ldots, \Pi^{(L)}=\Pi_{\mbox{\tiny eq}}$ ($L$ -- large positive integer), such that $\Pi^{(i)}$ has ``Hamiltonian'' $(1-i/L)E_0(\xi,\sigma)+(i/L)E(\xi,\sigma)$, $i=1,2,\ldots,L$. The distance between every two consecutive distributions (in the above sense) is then of the order of $\epsilon=1/L$, and hence the gap between the entropy production and the incremental work, pertaining to the passage from $\Pi^{(i)}$ to $\Pi^{(i+1)}$, is of the order of $\epsilon^2=1/L^2$. Thus, even if we sum up all these gaps, the total cumulative gap is of the order of $L$ steps times $1/L^2$, which is $1/L$, and hence can still be made arbitrarily small by selecting $L$ large enough. \subsection{Memoryless and Markov Input Processes} Most of the earlier works on systems with information reservoirs assumed that the input process $\{X_n\}$ is memoryless, i.e., that $P(x_1,\ldots,x_N)$ admits a product form for all $N$. In this case, $S_n$, which is generated by $X_1,\ldots,X_{n-1}$, must be statistically independent of $X_n$, and so, in eq.\ (\ref{alternative}), $H(X_n|S_n)=H(X_n)$.
We therefore obtain from (\ref{alternative}) the following: \begin{eqnarray} \frac{\left<W_N\right>}{kT}&\le&\sum_{n=1}^N[H(Y_n|S_{n+1})-H(X_n)]+H(S_{N+1})-H(S_1)\\ &=&\sum_{n=1}^N[H(Y_n)-H(X_n)]-\sum_{n=1}^NI(S_{n+1};Y_n)+H(S_{N+1})-H(S_1). \end{eqnarray} As already mentioned in the context of (\ref{alternative}), if we divide both sides by $N$ and take the limit $N\to\infty$, the term $\frac{1}{N}[H(S_{N+1})-H(S_1)]\le\frac{1}{N}\ln|\calS|$ vanishes as $N\to\infty$, and if we also drop the negative contribution of the mutual information terms, we further enlarge the expression to obtain the familiar bound that the asymptotic work per cycle cannot exceed the limit of $kT\cdot\frac{1}{N}\sum_{n=1}^N[H(Y_n)-H(X_n)]$. As discussed also in \cite{BMC16a}, \cite{BMC16b}, \cite{BMC16c}, this bound is valid (and can be approached, following the discussion in the previous subsection) also by a memoryless ratchet, namely, a ratchet with one internal state only. Moreover, it is not only that there is nothing to lose from using a memoryless ratchet, but on the contrary -- there is, in fact, a lot to lose if the ratchet uses memory in a non--trivial manner: this loss is expressed in the negative term $-\sum_{n=1}^NI(S_{n+1};Y_n)$. The loss can, of course, be avoided if we make sure that at the end of each cycle, the two components of the Markov state, namely, $S_{n+1}$ and $Y_n$, are statistically independent, and so, $I(S_{n+1};Y_n)=0$ for all $n$. If $\tau$ is large enough so that $\Pi_{\mbox{\tiny eq}}$ is approached, and if $E(\xi,\sigma)$ is additive (namely, $E(\xi,\sigma)=E_1(\xi)+E_2(\sigma)$), then $\Pi_{\mbox{\tiny eq}}(\xi,\sigma)= \Pi_{\mbox{\tiny eq}}(\xi)\Pi_{\mbox{\tiny eq}}(\sigma)$, and this is the case.
Indeed, in \cite{MJ12}, for example, this is the case, as there are six Markov states ($|\calX|=2$ times $|\calS|=3$) and $\Pi_{\mbox{\tiny eq}}(\xi,\sigma)=e^{-\beta mgh\xi}/[3(1+e^{-\beta mgh})]$, $\xi\in\{0,1\}, \sigma\in\{A,B,C\}$.\\ \vspace{0.1cm} \noindent {\it Example.} Consider a binary memoryless source with $\mbox{Pr}\{X_n=1\}=1-\mbox{Pr}\{X_n=0\}=p$, and a two--state ratchet, with a state set $\calS=\{A,B\}$. The joint process $\{(X_n,S_n)\}$ (as well as $\{(\xi_t,\sigma_t)\}$ within each interaction interval) is therefore a four--state process with state set $\{A0,B0,A1,B1\}$. Let the energy levels be $E(A0)=0$, $E(B0)=\epsilon$, $E(A1)=2\epsilon$ and $E(B1)=3\epsilon$, where $\epsilon > 0$ is a given energy quantum. The Markov jump process $\{(\xi_t,\sigma_t)\}$ has transition rates, $M[A0\to B0]= M[B0\to A1]= M[A1\to B1]=e^{-\beta\epsilon}$, $M[B1\to A1]= M[A1\to B0]=M[B0\to A0]=1$ (in some units of frequency) and all other transition rates are zero (see Fig.\ 2). This process obeys detailed balance and its equilibrium distribution is given by $\Pi_{\mbox{\tiny eq}}[A0]=1/Z$, $\Pi_{\mbox{\tiny eq}}[B0]=e^{-\beta\epsilon}/Z$, $\Pi_{\mbox{\tiny eq}}[A1]=e^{-2\beta\epsilon}/Z$, and $\Pi_{\mbox{\tiny eq}}[B1]=e^{-3\beta\epsilon}/Z$, where $Z=1+e^{-\beta\epsilon}+e^{-2\beta\epsilon}+e^{-3\beta\epsilon}$. \begin{figure}[ht] \hspace*{1cm}\input{xmp.pstex_t} \caption{Example of the Markov jump process.} \end{figure} Suppose that $\tau$ is very large compared to the time constants of the process, so that $\Pi_\tau(\xi,\sigma)$ can be well approximated by the equilibrium distribution. Then, it is straightforward to see that \begin{equation} P(Y_n=0|S_{n+1}=A)=\frac{\Pi_{\mbox{\tiny eq}}[A0]}{\Pi_{\mbox{\tiny eq}}[A0]+\Pi_{\mbox{\tiny eq}}[A1]}=\frac{1}{1+e^{-2\beta\epsilon}} \end{equation} and similarly for $P[Y_n=0|S_{n+1}=B]$. 
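The closed-form expressions of this example are easy to verify numerically. The following sketch (with hypothetical values $\beta=1$, $\epsilon=0.7$, used only for checking) confirms detailed balance of the stated rates and the conditional probability $P(Y_n=0|S_{n+1}=A)$:

```python
import numpy as np

beta, eps = 1.0, 0.7    # hypothetical parameter values, for checking only
E = {'A0': 0.0, 'B0': eps, 'A1': 2 * eps, 'B1': 3 * eps}
Z = sum(np.exp(-beta * e) for e in E.values())
pi = {s: np.exp(-beta * E[s]) / Z for s in E}   # equilibrium distribution

# The nonzero transition rates of the example (Fig. 2).
rate = {('A0', 'B0'): np.exp(-beta * eps), ('B0', 'A0'): 1.0,
        ('B0', 'A1'): np.exp(-beta * eps), ('A1', 'B0'): 1.0,
        ('A1', 'B1'): np.exp(-beta * eps), ('B1', 'A1'): 1.0}

# Detailed balance: pi[s] * M[s -> t] == pi[t] * M[t -> s] on every link.
balanced = all(abs(pi[s] * rate[(s, t)] - pi[t] * rate[(t, s)]) < 1e-12
               for (s, t) in rate)

# P(Y_n = 0 | S_{n+1} = A) computed from the equilibrium distribution.
p_y0_A = pi['A0'] / (pi['A0'] + pi['A1'])
closed_form = 1.0 / (1.0 + np.exp(-2 * beta * eps))
```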
Therefore, \begin{equation} H(Y_n|S_{n+1})=h\left(\frac{1}{1+e^{-2\beta\epsilon}}\right), \end{equation} where $h(\cdot)$ is the binary entropy function, defined in Section 2. As for the input entropy, we have $H(X_n|S_n)=H(X_n)=h(p)$. Therefore, the upper bound on the work per cycle is \begin{equation} \left<\Delta W_n\right>\le h\left(\frac{1}{1+e^{-2\beta\epsilon}}\right)-h(p). \end{equation} It follows that a necessary condition for the ratchet to operate as an engine (rather than as an eraser) is $p < 1/(1+e^{2\beta\epsilon})$ or $p > 1/(1+e^{-2\beta\epsilon})$. Using similar considerations, the exact work extraction is also easy to calculate in this example, but we will not delve into it any further. This concludes the example. Consider next the case where the input process is a stationary first order Markov process, i.e., \begin{equation} P(x^N)=P(x_1)\prod_{n=1}^{N-1}P(x_{n+1}|x_n). \end{equation} As described above, in the discrete time scale, the ratchet is characterized by the input--output transition probability distribution $P(y,s^\prime|x,s)=\mbox{Pr}\{Y_n=y,S_{n+1}=s^\prime|X_n=x,S_n=s\}$. Consider the corresponding marginal conditional distribution \begin{equation} P(s^\prime|x,s)=\sum_{y\in\calX}P(y,s^\prime|x,s). \end{equation} Then, assuming that the initial ratchet state, $S_1$, is independent of the initial input symbol, $X_1$, we have \begin{equation} P(x^N,s^N)=P(x_1)P(s_1)\prod_{n=1}^{N-1}[P(x_{n+1}|x_n)P(s_{n+1}|x_n,s_n)], \end{equation} which means that the pair process $\{(X_n,S_n)\}$ is a first order Markov process as well. Let us assume that the transition matrix of this Markov pair process is such that there exists a unique stationary distribution $P(x,s)=\mbox{Pr}\{X_n=x,S_n=s\}$. 
Once the stationary distribution $P(x,s)$ is found, the input--output--state joint distribution is dictated by the ratchet input--output transition probability distribution $\{P(y,s^\prime|x,s)\}$, according to \begin{equation} P(x,s,y,s^\prime)=P(x,s)P(y,s^\prime|x,s), \end{equation} which is the joint distribution of the quadruple $(X_n,S_n,Y_n,S_{n+1})$ in the stationary regime. Once this joint distribution is found, one can (relatively) easily compute the stationary average work extraction per cycle, $\left<\Delta W_n\right>=\left<E(Y_n,S_{n+1})\right>-\left<E(X_n,S_n)\right>$, as well as the stationary joint entropies $H(X_n,S_n)$ and $H(Y_n,S_{n+1})$ (or $H(X_n|S_n)$ and $H(Y_n|S_{n+1})$) in order to calculate the entropy--production bound. This should be contrasted with the bound in \cite{BMC16a} (see also \cite{BMC16b}, \cite{BMC16c}), where, as mentioned earlier, $\left<W_N/NkT\right>$ is asymptotically upper bounded by $\lim_{N\to\infty}\frac{1}{N}[H(Y^N)-H(X^N)]$, whose calculation is not trivial, as $Y^N$ is a hidden Markov process, for which there is no closed--form expression for the entropy rate. A good ratchet design would seek the transition distribution $\{P(y,s^\prime|x,s)\}$ that maximizes the work extraction (or its entropy production bound) for the given Markov input process. This is an optimization problem with a finite (and fixed) number of parameters. If, in addition, one has the freedom to control the parameters of the Markov input process, say, by transducing a given source of randomness, e.g., a random bit--stream, then of course, the optimization will also include the induced joint distribution $\{P(x,s)\}$. If such a transducer is a one--to--one mapping, then its operation does not consume energy.
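The recursive computation of the stationary distribution described above amounts to a small fixed-point problem. A minimal sketch, with hypothetical randomly generated transition tables for $|\calX|=2$ input symbols and $|\calS|=3$ ratchet states, obtains $P(x,s)$ by power iteration on the pair chain:

```python
import numpy as np

rng = np.random.default_rng(2)
nx, ns = 2, 3

# P(x'|x): row-stochastic input Markov chain (hypothetical values).
Px = rng.random((nx, nx)); Px /= Px.sum(axis=1, keepdims=True)
# P(s'|x,s): ratchet-state transition given current input and state.
Ps = rng.random((nx, ns, ns)); Ps /= Ps.sum(axis=2, keepdims=True)

# Transition matrix of the pair chain (X_n, S_n), state index x*ns + s:
# P((x',s')|(x,s)) = P(x'|x) * P(s'|x,s).
T = np.zeros((nx * ns, nx * ns))
for x in range(nx):
    for s in range(ns):
        for x2 in range(nx):
            for s2 in range(ns):
                T[x * ns + s, x2 * ns + s2] = Px[x, x2] * Ps[x, s, s2]

# Stationary distribution P(x,s) by power iteration.
p = np.full(nx * ns, 1.0 / (nx * ns))
for _ in range(5000):
    p = p @ T
```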
For example, if the raw input stream is a sequence of independent fair coin tosses (i.e., a purely random bit--stream), this transducer can be chosen to be the decoder of an optimal lossless data compression scheme for the desired input process $P$. \subsection{Conditional Entropy Bounds} We now return to the case of a general input process. For a given $n=1,2,\ldots$, let us denote $u_n=(x^{n-1},y^{n-1},s^n)$, which is the full input--output--state history available at time $n$, and define $v_n=f_n(u_n)$, where $f_n$ is an arbitrary function. If $f_n$ is a many--to--one function, then $v_n$ designates some partial history information, for example, $v_n=x^{n-1}$, or $v_n=y^{n-1}$. Once again, when we wish to emphasize the randomness of all these variables, we use capital letters: $U_n=(X^{n-1},Y^{n-1},S^n)$, $V_n=f_n(U_n)$, etc. Now consider the application of the H--theorem (eq.\ (\ref{htheorem})) with $\Pi_0(\xi,\sigma)= P(X_n=\xi,S_n=\sigma|V_n=v_n)$, instead of the unconditional distribution as before. Then, using the Markovity of the dynamics within each interaction interval, the same derivation as in Section 3 would now yield \begin{eqnarray} \left<\Delta W_n|V_n=v_n\right>&\equiv&\left<E(Y_n,S_{n+1})|V_n=v_n\right>- \left<E(X_n,S_n)|V_n=v_n\right>\nonumber\\ &\le& kT[H(Y_n,S_{n+1}|V_n=v_n)-H(X_n,S_n|V_n=v_n)], \end{eqnarray} where the notation $\left<\cdot|V_n=v_n\right>$ designates conditional expectation given $V_n=v_n$. Averaging both sides with respect to (w.r.t.) 
the randomness of $V_n$, we get \begin{eqnarray} \left<\Delta W_n\right>&\equiv&\left<E(Y_n,S_{n+1})\right>- \left<E(X_n,S_n)\right>\nonumber\\ &\le& kT[H(Y_n,S_{n+1}|V_n)-H(X_n,S_n|V_n)], \end{eqnarray} and summing all inequalities from $n=1$ to $n=N$, we obtain the family of bounds, \begin{eqnarray} \label{conditional} \left<W_N\right>&\equiv&\sum_{n=1}^N[\left<E(Y_n,S_{n+1})\right>- \left<E(X_n,S_n)\right>]\nonumber\\ &\le& kT\sum_{n=1}^N[H(Y_n,S_{n+1}|V_n)-H(X_n,S_n|V_n)], \end{eqnarray} with a freedom in the choice of $V_n$ (or, equivalently, the choice of the function $f_n$). Now, one may wonder what is the best choice that would yield the tightest bound in this family. Conditioning reduces entropy, but it reduces both the entropy of $(Y_n,S_{n+1})$ and that of $(X_n,S_n)$, so it may not be immediately clear what happens to the difference. A little thought, however, shows that the best choice of $V_n$ is null, namely, the unconditional entropy bound of Section 3 is no worse than any bound of the form (\ref{conditional}). To see why this is true, observe that \begin{eqnarray} & &H(Y_n,S_{n+1}|V_n)-H(X_n,S_n|V_n)\nonumber\\ &=&H(Y_n,S_{n+1})-H(X_n,S_n)+I(V_n;X_n,S_n)-I(V_n;Y_n,S_{n+1})\\ &\ge&H(Y_n,S_{n+1})-H(X_n,S_n), \end{eqnarray} where the inequality follows from the data processing inequality \cite[Sect.\ 2.8]{CT06}, as $V_n$ and $(Y_n,S_{n+1})$ are statistically independent given $(X_n,S_n)$, owing to the Markov property of the process $\{(\xi_t,\sigma_t)\}$. Consequently, $I(V_n;X_n,S_n)\ge I(V_n;Y_n,S_{n+1})$, and the inequality is achieved when $V_n$ is degenerate. Thus, for the purpose of upper bounding the work, the conditioning on any partial history $V_n$ turns out to be completely useless. However, the family of inequalities (\ref{conditional}) may be more interesting when we consider them as lower bounds on entropy production rather than upper bounds on extractable work. Specifically, consider the case $V_n=(X^{n-1},Y^{n-1})$. 
Then, the work/entropy-production inequality reads \begin{eqnarray} \label{entropybound} \frac{\left<W_N\right>}{kT}&\le& \sum_{n=1}^N[H(Y_n,S_{n+1}|X^{n-1},Y^{n-1})-H(X_n,S_n|X^{n-1},Y^{n-1})]\nonumber\\ &\le&\sum_{n=1}^N[H(Y_n,S_{n+1}|Y^{n-1})-H(X_n|X^{n-1},Y^{n-1})-\nonumber\\ & &H(S_n|X^n,Y^{n-1})]\nonumber\\ &=&\sum_{n=1}^N[H(Y_n|Y^{n-1})+H(S_{n+1}|Y^n)-\nonumber\\ & &H(X_n|X^{n-1})-H(S_n|X^n,Y^{n-1})]\nonumber\\ &=&H(Y^N)-H(X^N)+\sum_{n=1}^N[H(S_{n+1}|Y^n)-H(S_n|X^n,Y^{n-1})], \end{eqnarray} where the first equality holds because $Y^{n-1}$ is independent of $X_n$ given $X^{n-1}$. Now, the second term in the last line of eq.\ (\ref{entropybound}) equals \begin{eqnarray} & &H(S_{N+1}|Y^N)-H(S_1|X_1)+\sum_{n=2}^N[H(S_n|Y^{n-1})-H(S_n|X^n,Y^{n-1})]\nonumber\\ &=&H(S_{N+1}|Y^N)-H(S_1|X_1)+\sum_{n=2}^NI(S_n;X^n|Y^{n-1}). \end{eqnarray} In general, this expression can always be upper bounded by $N\ln|\calS|$, and so, we obtain the following lower bound on the output entropy \begin{equation} H(Y^N)\ge H(X^N)+\frac{\left<W_N\right>}{kT}-N\ln|\calS|. \end{equation} Suppose now that $\{X_n\}$ is a memoryless process, or even a Markov process. Then, as mentioned earlier (see also \cite{BMC16a,BMC16b,BMC16c}), $\{Y_n\}$ is a hidden Markov process, and as already explained before, the joint entropy of $Y^N$ is difficult to compute and it does not have a simple closed--form expression. On the other hand, the above lower bound on $H(Y^N)$ is relatively easy to calculate, as $P(x^N)$ has a simple product form and $\left<W_N\right>$ depends only on the marginals of $(X_n,S_n)$ and $(Y_n,S_{n+1})$, which can be calculated recursively from the transition probabilities $\{P(y_n,s_{n+1}|x_n,s_n)\}$, for $n=1,2,\ldots,N$, and if in addition, $\{X_n\}$ is stationary, then $(X_n,S_n)$ and $(Y_n,S_{n+1})$ have stationary distributions too, as described before.
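The data-processing step invoked above (the Markov chain $V_n\to(X_n,S_n)\to(Y_n,S_{n+1})$) can also be checked numerically on randomly generated distributions; the following sketch uses hypothetical alphabet sizes and verifies $I(V_n;X_n,S_n)\ge I(V_n;Y_n,S_{n+1})$:

```python
import numpy as np

def mutual_information(pab):
    """I(A;B) in nats, from a joint distribution matrix pab[a, b]."""
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    mask = pab > 0
    return float(np.sum(pab[mask] * np.log((pab / (pa * pb))[mask])))

rng = np.random.default_rng(3)
nv, nxs, nys = 3, 4, 4          # hypothetical alphabet sizes

pv = rng.random(nv); pv /= pv.sum()
p_xs_given_v = rng.random((nv, nxs))
p_xs_given_v /= p_xs_given_v.sum(axis=1, keepdims=True)
p_ys_given_xs = rng.random((nxs, nys))
p_ys_given_xs /= p_ys_given_xs.sum(axis=1, keepdims=True)

p_v_xs = pv[:, None] * p_xs_given_v     # joint of (V, (X,S))
# Joint of (V, (Y,S')) under the Markov chain V -> (X,S) -> (Y,S').
p_v_ys = p_v_xs @ p_ys_given_xs

i_in, i_out = mutual_information(p_v_xs), mutual_information(p_v_ys)
```

By the data processing inequality, `i_in` cannot be smaller than `i_out`.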
While one may suspect that $N\ln|\calS|$ might be a loose bound for the second term on the right--most side of (\ref{entropybound}), there are, nevertheless, situations where it is quite a reasonable bound, especially when $\ln|\calS|$ is small (compared to $H(X^N)/N+\left<W_N\right>/NkT$). Moreover, if the marginal entropy of $S_n$ is known to be upper bounded by some constant $H_0 < \ln|\calS|$, then $\ln|\calS|$ can be replaced by $H_0$ in the above lower bound. \section{More General Inequalities} An equivalent form of the basic result of Section 3 is the following: \begin{equation} \calH(\Pi_0)-\beta\left<E(\xi_0,\sigma_0)\right>\le \calH(\Pi_\tau)-\beta\left<E(\xi_\tau,\sigma_\tau)\right>. \end{equation} The l.h.s.\ can be thought of as the negative free energy of the Markov state at time $t=0$ (multiplied by a factor of $\beta$), and the r.h.s.\ is the same quantity at time $t=\tau$. In other words, if we define the random variable \begin{equation} \phi_t(\xi,\sigma)=-\ln \Pi_t(\xi,\sigma)-\beta E(\xi,\sigma), \end{equation} then what we have seen in Section 3 is that \begin{equation} \left<\phi_0(\xi,\sigma)\right>_0\le \left<\phi_t(\xi,\sigma)\right>_t, \end{equation} where $\left<\cdot\right>_t$ denotes expectation w.r.t.\ $\Pi_t$. Equivalently, if we denote $\phi(X_n,S_n)=-\ln P(X_n,S_n)-\beta E(X_n,S_n)$, $\phi(Y_n,S_{n+1})=-\ln P(Y_n,S_{n+1})-\beta E(Y_n,S_{n+1})$, and we take $t=\tau$, this becomes \begin{equation} \label{phinequality} \left<\phi(X_n,S_n)\right>\le \left<\phi(Y_n,S_{n+1})\right>, \end{equation} where the expectations at both sides are w.r.t.\ the randomness of the relevant random variables. In this section, we show that this form of the inequality relation extends to more general moments of the random variables $\phi(X_n,S_n)$ and $\phi(Y_n,S_{n+1})$.
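Inequality (\ref{phinequality}) can be illustrated on the smallest possible toy model: a hypothetical two-state chain with detailed balance, where the interaction interval is replaced by exact exponential relaxation toward equilibrium. Choosing $\beta E=-\ln \Pi_{\mbox{\tiny eq}}$ gives $Z=1$ and $\left<\phi_t\right>_t=-D(\Pi_t\|\Pi_{\mbox{\tiny eq}})$, so the inequality reduces to the H--theorem:

```python
import numpy as np

p_eq = np.array([0.3, 0.7])   # hypothetical equilibrium distribution
p0   = np.array([0.9, 0.1])   # hypothetical initial (input) distribution

def phi_mean(pt):
    """<phi>_t = sum_i pt[i] * ln(p_eq[i] / pt[i]) = -D(pt || p_eq),
    for the choice beta*E = -ln p_eq (so that Z = 1)."""
    return float(np.sum(pt * np.log(p_eq / pt)))

# Exact two-state master-equation solution with unit relaxation rate,
# evaluated at the end of an interaction interval tau = 2.
p_tau = p_eq + np.exp(-2.0) * (p0 - p_eq)

lhs, rhs = phi_mean(p0), phi_mean(p_tau)   # <phi(X,S)> vs <phi(Y,S')>
```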
As is well known, the H--theorem applies to generalized divergence functionals and not only to the Kullback--Leibler divergence $D(\Pi_t\|\Pi_{\mbox{\tiny eq}})$, see \cite[Theorem 1.6]{Kelly79}, \cite[Chap.\ V.5]{vanKampen}. Let $Q$ be any convex function and suppose that $\Pi_{\mbox{\tiny eq}}(x,s)> 0$ for every $(x,s)$. Then according to the generalized H--theorem, \begin{equation} D_Q(\Pi_t\|\Pi_{\mbox{\tiny eq}})=\sum_{x,s} \Pi_{\mbox{\tiny eq}}(x,s)Q\left(\frac{\Pi_t(x,s)}{\Pi_{\mbox{\tiny eq}}(x,s)}\right) \end{equation} decreases monotonically as a function of $t$, and so, \begin{equation} D_Q(\Pi_\tau\|\Pi_{\mbox{\tiny eq}})\le D_Q(\Pi_0\|\Pi_{\mbox{\tiny eq}}). \end{equation} Now, \begin{eqnarray} D_Q(\Pi_t\|\Pi_{\mbox{\tiny eq}})&=&\left<\frac{\Pi_{\mbox{\tiny eq}}(\xi,\sigma)}{\Pi_t(\xi,\sigma)}\cdot Q\left(\frac{\Pi_t(\xi,\sigma)}{\Pi_{\mbox{\tiny eq}}(\xi,\sigma)}\right)\right>_t\nonumber\\ &=&\frac{1}{Z}\cdot\left<e^{\phi_t(\xi,\sigma)} \cdot Q\left(Z\cdot e^{-\phi_t(\xi,\sigma)}\right)\right>_t. \end{eqnarray} In the corresponding inequality between $D_Q(\Pi_\tau\|\Pi_{\mbox{\tiny eq}})$ and $D_Q(\Pi_0\|\Pi_{\mbox{\tiny eq}})$, the external factor of $1/Z$ obviously cancels out. Also, since $Q(u)$ is convex iff $Q(Z\cdot u)$ ($Z$ -- constant) is convex, we can re--define the latter as our convex function $Q$ to begin with, and so, by the generalized H--theorem\footnote{Note that the classical H--theorem is obtained as a special case by the choice $Q(u)=u\ln u$.} \begin{equation} \Lambda(t)\equiv\left<e^{\phi_t(\xi,\sigma)}\cdot Q\left( e^{-\phi_t(\xi,\sigma)}\right)\right>_t \end{equation} is monotonically decreasing for any convex function $Q$. It now follows that \begin{equation} \left<e^{\phi(X_n,S_n)}\cdot Q\left(e^{-\phi(X_n,S_n)}\right)\right>\ge \left<e^{\phi(Y_n,S_{n+1})}\cdot Q\left(e^{-\phi(Y_n,S_{n+1})}\right)\right>.
\end{equation} This class of inequalities has the flavor of fluctuation theorems concerning $\phi(X_n,S_n)$ and $\phi(Y_n,S_{n+1})$. We observe that unlike the classical H--theorem, which makes a claim only about the first moments of $\phi(X_n,S_n)$ and $\phi(Y_n,S_{n+1})$, here we have a more general statement concerning the monotonicity of moments of a considerably wide family of functions of these random variables. For example, choosing $Q(u)=-\ln u$ gives \begin{equation} \left<\phi(X_n,S_n)e^{\phi(X_n,S_n)}\right> \ge\left<\phi(Y_n,S_{n+1})e^{\phi(Y_n,S_{n+1})}\right>, \end{equation} which is somewhat counter--intuitive, in view of (\ref{phinequality}), as the function $f(u)=ue^u$ is monotonically increasing. An interesting family of functions $\{Q\}$ is the family of power functions, defined as $Q_z(u)=u^{1-z}$ for $z\le 0$ and $z\ge 1$ and $Q_z(u)=-u^{1-z}$ for $z\in[0,1]$. Here we obtain that \begin{equation} \left<\exp\{z\phi(X_n,S_n)\}\right>\le \left<\exp\{z\phi(Y_n,S_{n+1})\}\right>~~~~~~~~~\mbox{for}~~z\in[0,1] \end{equation} and \begin{equation} \left<\exp\{z\phi(X_n,S_n)\}\right>\ge \left<\exp\{z\phi(Y_n,S_{n+1})\}\right>~~~~~~~~~\mbox{for}~~z\notin[0,1]. \end{equation} Note that for $z > 1$, $P(X_n=x,S_n=s)$ must be strictly positive for all $(x,s)$ with $E(x,s) < \infty$, for otherwise, there is a singularity. We have therefore obtained inequalities that involve the characteristic functions of $\phi(X_n,S_n)$ and $\phi(Y_n,S_{n+1})$. It is interesting to observe that the direction of the inequality is reversed when the parameter $z$ crosses either of the values $z=0$ and $z=1$.
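Both the monotonicity of $\Lambda(t)$ and the reversal of the exponential-moment inequality outside $z\in[0,1]$ can be checked numerically on a hypothetical two-state relaxation toward equilibrium (a stand-in for the interaction interval; all numbers are arbitrary):

```python
import numpy as np

p_eq = np.array([0.3, 0.7])   # hypothetical equilibrium distribution
p0   = np.array([0.9, 0.1])   # hypothetical initial distribution

def p_t(t):
    # Exact two-state master-equation solution, unit relaxation rate.
    return p_eq + np.exp(-t) * (p0 - p_eq)

def exp_moment(pt, z):
    # <exp(z*phi)> under pt, with beta*E = -ln p_eq (so Z = 1 and
    # exp(phi_t) = p_eq / p_t); equals sum_i pt[i]**(1-z) * p_eq[i]**z.
    return float(np.sum(pt * (p_eq / pt) ** z))

zs = (-0.5, 0.25, 0.5, 0.75, 1.5)
m0  = {z: exp_moment(p_t(0.0), z) for z in zs}   # at t = 0
mta = {z: exp_moment(p_t(2.0), z) for z in zs}   # at t = tau = 2

# Generalized H-theorem for one convex Q: D_Q(t) is non-increasing.
Q = lambda u: (u - 1.0) ** 2
d_q = [float(np.sum(p_eq * Q(p_t(t) / p_eq))) for t in np.linspace(0, 5, 30)]
```

The moments at $t=0$ lie below those at $t=\tau$ for $z$ inside $[0,1]$ and above them outside, as claimed.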
\section*{Appendix A} \noindent {\it Some Concerns About the Derivation of Eq.\ (5) of \cite{BMC16a}.} In Appendix A of \cite{BMC16a}, eq.\ (5) of that paper is derived, namely, the inequality that upper bounds the work extraction per cycle by $kT$ times the difference between the Shannon entropy rate of the output process and that of the input process, as mentioned in the Introduction. The derivation in \cite[Appendix A]{BMC16a} begins from the second law of thermodynamics, and on the basis of the second law, it states that the joint {\it Shannon entropy} of the entire system, consisting of the ratchet state, the input tape, the output tape, and the heat bath, must not decrease with time (eq.\ (A2) in \cite{BMC16a}). The first concern is that while the second law is an assertion about the increase of the thermodynamic entropy (which is, strictly speaking, defined for equilibrium), some more care should be exercised when addressing the increase of the Shannon entropy. To be specific, we are familiar with two situations (in classical statistical physics) where the Shannon entropy is known to be non--decreasing. The first is associated with Hamiltonian dynamics, where the total Shannon entropy simply remains fixed, due to the Liouville theorem, as argued, for example, in \cite[Section III]{DJ13}, and indeed, ref.\ \cite{DJ13} is cited in \cite{BMC16a} (in the context of eq.\ (2) therein), but there is no assumption in \cite{BMC16a} about Hamiltonian dynamics, and it is not even clear that Hamiltonian dynamics can be assumed in this model setting, in the first place, due to the discrete nature of the input and output information streams, as well as the ratchet state. 
Two additional assumptions made in \cite{DJ13}, but not in \cite{BMC16a}, are that the system is initially prepared in a product state (i.e., the states of the different parts of the system are statistically independent) \cite[eq.\ (27)]{DJ13} and that the heat bath is initially in equilibrium \cite[eq.\ (28)]{DJ13}. By contrast, the only assumption made in \cite{BMC16a} is that the ratchet has a finite number of states (see first sentence in \cite[Appendix A]{BMC16a}). The second situation where the Shannon entropy is known to be non--decreasing is when the state of the system is a Markov process, which has a uniform stationary state distribution, owing to the H--Theorem (see, for example, \cite[Chap.\ V, Sect.\ 5]{vanKampen}). However, it is not clear that the total system under discussion obeys Markov dynamics with a stationary distribution (let alone, the uniform distribution), because the tape moves in one direction only, so states accessible at a given time instant are no longer accessible at later times (after $n$ cycles, the machine has converted $n$ input bits to output bits, so the position of the tape relative to the ratchet, indexed by $n$, should be part of the Markovian state). Another concern is that in Appendix A of \cite{BMC16a}, it is argued that the state of the heat bath is independent of the states of the ratchet and the tape at all times, with the somewhat vague explanation that ``they have no memory of the environment'' (see the text immediately after eq.\ (A4) of \cite{BMC16a}). While this independence argument may make sense with regard to the initial preparation (at time $t=0$) of the system (again, as assumed also in \cite{DJ13}), it is less clear why this remains true also at later times, after the systems have interacted for a while. Note that indeed, in \cite{DJ13}, the various components of the system are not assumed independent at positive times. 
To summarize, there seems to be some room for concern that more assumptions may be needed in \cite{BMC16a} beyond the assumption of a finite number of ratchet states.
TITLE: Rank-complement subgroup existence QUESTION [1 upvotes]: Let $G$ be a finitely generated Abelian group. For each subgroup $H$ of $G$, does there exist another subgroup $K$ of $G$ such that $\text{rank}(G)=\text{rank}(H)+\text{rank}(K)$ and $\text{rank}(H\cap K) = 0$? Edit: For background, given two subgroups $H$ and $K$ of $G$, we say that $K$ is a complement of $H$ in $G$ if $G=H+K$ and $H\cap K = \{ 0 \}$; generally, given $H$, a complement of $H$ in $G$ may not exist, e.g., take the subgroup $H=2\mathbb{Z}$ of the group $G=\mathbb{Z}$. My question concerns a weaker notion, a 'rank-complement'. REPLY [0 votes]: Yes. The structure theory of finitely generated Abelian groups tells us that $G$ can be written as a sum of subgroups $G = M + T$, where $M$ is torsion-free and $T$ is finite; it follows that $\text{rank}(M)=\text{rank}(G)$. Now $U = H \cap M$ is also torsion-free with $\text{rank}(U)=\text{rank}(H)$ since the intersection only removes torsion elements. Thus, $U$ admits a basis $B_U$ which can be augmented to obtain a basis $B_V$ of some subgroup $V$ of $M$ with $\text{rank}(V)=\text{rank}(M)$; therefore, the subgroup $K$ spanned by $B_K = B_V \setminus B_U$ is a complement of $U$ in $V$, so $\text{rank}(K)=\text{rank}(V)-\text{rank}(U)$. Putting this all together, $\text{rank}(G) = \text{rank}(V)=\text{rank}(U)+\text{rank}(K)=\text{rank}(H)+\text{rank}(K)$, and $H\cap K = U \cap K = \{0\}$, so $\text{rank}(H \cap K)=0$.
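For a torsion-free illustration of the augmentation step (a hypothetical example with $G=\mathbb{Z}^3$; subgroups are represented by integer matrices whose rows are generators, and rank is computed over $\mathbb{Q}$):

```python
import numpy as np

# Rows generate subgroups of G = Z^3 (rank(G) = 3, no torsion part).
# Hypothetical example: H = <(2,0,0), (0,3,0)> has rank 2.
H = np.array([[2, 0, 0],
              [0, 3, 0]])
# Augmenting a basis of H's span to full rank yields a generator of K.
K = np.array([[0, 0, 1]])

rank = np.linalg.matrix_rank
rank_G, rank_H, rank_K = 3, rank(H), rank(K)

# rank(H) + rank(K) = rank(G); since the stacked generators have full
# rank, the spans of H and K meet trivially, so rank(H \cap K) = 0.
full = rank(np.vstack([H, K]))
```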
\subsection{Geometric cross section}\label{sec_geomcross} Recall the set $\BS$ from Section~\ref{sec_base}. \begin{lemma}\label{propsSH} Let $\wt\ch\in\wt\fch_{\choices,\shmap}$. Then $\pr(\wt\ch)$ is a convex polyhedron and $\partial\pr(\wt\ch)$ consists of complete geodesic segments. Moreover, we have that $\pr(\wt\ch)^\circ\cap \BS=\emptyset$ and $\partial\pr(\wt\ch) \subseteq\BS$ and $\pr(\wt\ch)\cap\BS=b(\wt\ch)$ and that $b(\wt\ch)$ is a connected component of $\BS$. \end{lemma} \begin{proof} Let $\wt\ch_1$ be the (unique) element in $\wt\fch_\choices$ such that $\wt\ch = \shmap(\wt\ch_1)\wt\ch_1$. Corollary~\ref{iscellH} states that $\ch_1\sceq\cl(\pr(\wt\ch_1))$ is a cell in $H$ and that $b(\wt\ch_1)$ is a side of $\ch_1$. Moreover, $\pr(\wt\ch_1) = \ch_1^\circ \cup b(\wt\ch_1)$ and hence $\pr(\wt\ch_1)^\circ = \ch_1^\circ$. Thus, $\partial\pr(\wt\ch_1) = \partial\ch_1$ consists of complete geodesic segments, $\pr(\wt\ch_1)^\circ\cap \BS=\emptyset$, $\partial\pr(\wt\ch_1) \subseteq\BS$ and $\pr(\wt\ch_1)\cap\BS = b(\wt\ch_1)$. Now the statements of the lemma follow from $\pr(\wt\ch) = \shmap(\wt\ch_1)\pr(\wt\ch_1)$ and $b(\wt\ch) = \shmap(\wt\ch_1) b(\wt\ch_1)$ and the $\Gamma$-invariance of $\BS$. \end{proof} \begin{prop}\label{CS=CShat} We have $\wh\CS = \wh\CS(\wt\fch_{\choices,\shmap})$. Moreover, the union \[ \CS'\rueck\big(\wt\fch_{\choices,\shmap}\big) = \bigcup\big\{ \CS'\rueck\big(\wt\ch\big) \ \big\vert\ \wt\ch\in\wt\fch_{\choices,\shmap}\big\} \] is disjoint and $\CS'(\wt\fch_{\choices,\shmap})$ is a set of representatives for $\wh\CS$. \end{prop} \begin{proof} We start by showing that $\wh\CS(\wt\fch_{\choices,\shmap})\subseteq\wh\CS$. Let $\wt\ch\in \wt\fch_{\choices,\shmap}$. Then there exists a (unique) $\wt\ch_1\in\wt\fch_{\choices}$ such that $\wt\ch=\shmap(\wt\ch_1)\wt\ch_1$. Lemma~\ref{propsSH} shows that $b(\wt\ch_1)$ is a connected component of $\BS$. 
The set $\CS'(\wt\ch_1)$ consists of unit tangent vectors based on $b(\wt\ch_1)$ which are not tangent to it. Therefore, $\CS'(\wt\ch_1)\subseteq \CS$. Now $b(\wt\ch)=\shmap(\wt\ch_1)b(\wt\ch_1)$ and $\CS'(\wt\ch) = \shmap(\wt\ch_1)\CS'(\wt\ch_1)$ with $\shmap(\wt\ch_1)\in\Gamma$. Thus, we see that $\pi(b(\wt\ch)) \subseteq\pi(\BS)=\wh\BS$ and $\pi(\CS'(\wt\ch))\subseteq\pi(\CS)=\wh\CS$. This shows that $\wh\CS(\wt\fch_{\choices,\shmap})\subseteq\wh\CS$. Conversely, let $\wh v\in\wh\CS$. We will show that there is a unique $\wt\ch\in\wt\fch_{\choices,\shmap}$ and a unique $v\in \CS'(\wt\ch)$ such that $\pi(v)=\wh v$. Pick any $w\in\pi^{-1}(\wh v)$. Remark~\ref{remain} shows that the set $\mc P\sceq \bigcup\{\wt\ch\mid \wt\ch\in\wt\fch_{\choices,\shmap}\}$ is a fundamental set for $\Gamma$ in $SH$. Hence there exists a unique pair $(\wt\ch, g)\in \wt\fch_{\choices,\shmap}\times\Gamma$ such that $v\sceq gw\in \wt\ch$. Note that $\pi^{-1}(\wh\CS)=\CS$. Thus, $v\in\CS$ and hence $\pr(v)\in\pr(\wt\ch)\cap \BS$. Lemma~\ref{propsSH} shows that $\pr(v)\in b(\wt\ch)$. Therefore, $v\in \pi^{-1}(b(\wt\ch))\cap \wt\ch$. Since $v\in \CS$, it does not point along $b(\wt\ch)$. Hence $v$ does not point along $\partial\pr(\wt\ch)$, which shows that $v\in \CS'(\wt\ch)$. This proves that $\wh\CS\subseteq\wh\CS(\wt\fch_{\choices,\shmap})$. To see the uniqueness of $\wt\ch$ and $v$ suppose that $w_1\in\pi^{-1}(\wh v)$. Let $(\wt\ch_1,g_1)\in\wt\fch_{\choices,\shmap}\times\Gamma$ be the unique pair such that $g_1w_1\in\wt\ch_1$. There exists a unique element $h\in\Gamma$ such that $hw=w_1$. Then $g_1hg^{-1}v=g_1w_1$ and $v,g_1hg^{-1}v\in\mc P$. Now $\mc P$ being a fundamental set shows that $g_1hg^{-1}=\id$, which proves that $g_1w_1=g_1hw=gw=v$ and $\wt\ch_1=\wt\ch$. This completes the proof. \end{proof} \begin{cor} Let $\wh\gamma$ be a geodesic on $Y$ which intersects $\wh\CS$ in $t$.
Then there is a unique geodesic $\gamma$ on $H$ which intersects $\CS'(\wt\fch_{\choices,\shmap})$ in $t$ such that $\pi(\gamma)=\wh\gamma$. \end{cor} \begin{defi} Let $\wh\gamma$ be a geodesic on $Y$ which intersects $\wh\CS$ in $\wh\gamma'(t_0)$. If \[ s\sceq \min \big\{ t>t_0 \ \big\vert\ \wh\gamma'(t) \in \wh\CS \big\} \] exists, we call $s$ the \textit{first return time} of $\wh\gamma'(t_0)$ and $\wh\gamma'(s)$ the \textit{next point of intersection of $\wh\gamma$ and $\wh\CS$}. \index{first return time} \index{next point of intersection} Let $\gamma$ be a geodesic on $H$. If $\gamma'(t)\in \CS$, then we say that $\gamma$ \textit{intersects $\CS$ in $t$}. \index{intersects} If there is a sequence $(t_n)_{n\in\N}$ with $\lim_{n\to\infty} t_n = \infty$ and $\gamma'(t_n)\in \CS$ for all $n\in\N$, then $\gamma$ is said to \textit{intersect $\CS$ infinitely often in future}. Analogously, if we find a sequence $(t_n)_{n\in\N}$ with $\lim_{n\to\infty} t_n = -\infty$ and $\gamma'(t_n)\in \CS$ for all $n\in\N$, then $\gamma$ is said to \textit{intersect $\CS$ infinitely often in past}. \index{intersects infinitely often in future}\index{intersects infinitely often in past} Suppose that $\gamma$ intersects $\CS$ in $t_0$. If \[ s \sceq \min\big\{t>t_0 \ \big\vert\ \gamma'(t) \in\CS \big\} \] exists, we call $s$ the \textit{first return time} of $\gamma'(t_0)$ and $\gamma'(s)$ the \textit{next point of intersection of $\gamma$ and $\CS$}. Analogously, we define the \textit{previous point of intersection} of $\wh\gamma$ and $\wh\CS$ resp.\@ of $\gamma$ and $\CS$. \index{previous point of intersection} \end{defi} \begin{remark}\label{charinter} A geodesic $\widehat \gamma$ on $Y$ intersects $\widehat\CS$ if and only if some (and hence any) representative of $\widehat\gamma$ on $H$ intersects $\pi^{-1}(\widehat\CS)$. Recall that $\CS=\pi^{-1}(\widehat \CS)$, and that $\CS$ is the set of unit tangent vectors based on $\BS$ but which are not tangent to $\BS$. 
Since $\BS$ is a totally geodesic submanifold of $H$ (see Proposition~\ref{BStotgeod}), a geodesic $\gamma$ on $H$ intersects $\CS$ if and only if $\gamma$ intersects $\BS$ transversely. Again because $\BS$ is totally geodesic, the geodesic $\gamma$ intersects $\BS$ transversely if and only if $\gamma$ intersects $\BS$ and is not contained in $\BS$. Therefore, a geodesic $\widehat\gamma$ on $Y$ intersects $\widehat\CS$ if and only if some (and thus any) representative $\gamma$ of $\widehat\gamma$ on $H$ intersects $\BS$ and $\gamma(\R) \not\subseteq \BS$. A similar argument simplifies the search for previous and next points of intersection. To make this precise, suppose that $\widehat\gamma$ is a geodesic on $Y$ which intersects $\widehat\CS$ in $\widehat\gamma'(t_0)$ and that $\gamma$ is a representative of $\widehat\gamma$ on $H$. Then $\gamma'(t_0) \in \CS$. There is a next point of intersection of $\widehat\gamma$ and $\widehat\CS$ if and only if there is a next point of intersection of $\gamma$ and $\CS$. The hypothesis that $\gamma'(t_0)\in\CS$ implies that $\gamma(\R)$ is not contained in $\BS$. Hence each intersection of $\gamma$ and $\BS$ is transversal. Then there is a next point of intersection of $\gamma$ and $\CS$ if and only if $\gamma( (t_0,\infty) )$ intersects $\BS$. Suppose that there is a next point of intersection. Then $\gamma'(s)$ is the next point of intersection of $\gamma$ and $\CS$ if and only if $\widehat\gamma'(s)$ is the next point of intersection of $\widehat\gamma$ and $\widehat\CS$. In this case, $s=\min\{ t>t_0 \mid \gamma(t)\in\BS\}$. Likewise, there is a previous point of intersection of $\widehat\gamma$ and $\widehat\CS$ if and only if there is a previous point of intersection of $\gamma$ and $\CS$. Further, there is a previous point of intersection of $\gamma$ and $\CS$ if and only if $\gamma( (-\infty, t_0) )$ intersects $\BS$.
If there is a previous point of intersection, then $\gamma'(s)$ is the previous point of intersection of $\gamma$ and $\CS$ if and only if $\widehat\gamma'(s)$ is the previous point of intersection of $\widehat\gamma$ and $\widehat\CS$. Moreover, $s=\max\{ t<t_0\mid \gamma(t)\in\BS\}$. \end{remark} Proposition~\ref{CS1} provides a characterization of the geodesics on $Y$ which intersect $\wh\CS$ at all. Its proof needs the following lemma. \begin{lemma}\label{convex} Let $U$ be a convex polyhedron in $H$ and $\gamma$ a geodesic on $H$. \begin{enumerate}[{\rm (i)}] \item\label{convexi} Suppose that $t\in \R$ is such that $\gamma(t)\in \partial U$. If $\gamma( (t,\infty) ) \subseteq U$, then either there is a unique side $S$ of $U$ such that $\gamma( (t,\infty) ) \subseteq S$ or $\gamma( (t,\infty) )\subseteq U^\circ$. \item\label{convexii} Suppose that $t_1,t_2,t_3\in \R$ are such that $t_1<t_2<t_3$ and $\gamma(t_1), \gamma(t_2), \gamma(t_3) \in \partial U$. Then there is a side $S$ of $U$ such that $S\subseteq \gamma(\R)$. \item\label{convexiii} If $\gamma(\pm\infty)\in\bhg U$, then either $\gamma(\R)\subseteq U^\circ$ or $\gamma(\R)\subseteq \partial U$. If $\gamma(t)\in U$ and $\gamma(\infty)\in\bhg U$, then either $\gamma( (t,\infty) )\subseteq U^\circ$ or $\gamma( [t,\infty) ) \subseteq \partial U$. \end{enumerate} \end{lemma} \begin{proof} We will use the following specialization of \cite[Theorem~6.3.8]{Ratcliffe}: Suppose that $s$ is a non-trivial geodesic segment with endpoints $a,b$ (possibly in $\bhg H$) which is contained in $U$. If there is a side $S$ of $U$ such that $s\mminus\{a,b\}$ intersects $S$, then $s\subseteq S$. For \eqref{convexi} suppose that there exists $t_1\in (t,\infty)$ such that $\gamma(t_1)\in \partial U$. If $\gamma(t_1)$ is an endpoint of some side of $U$, then there are two sides $S_1,S_2$ of $U$ which have $\gamma(t_1)$ as an endpoint. Assume for contradiction that $S_1,S_2\subseteq\gamma(\R)$.
Since $\gamma(t_1)\in S_1\cap S_2$, the union $T\sceq S_1\cup S_2$ is a geodesic segment in $\partial U$ and hence $T$ is contained in a side of $U$. This contradicts $\gamma(t_1)$ being an endpoint of the sides $S_1$ and $S_2$. Suppose that $S_1\not\subseteq \gamma(\R)$. Let $\langle S_1\rangle$ be the complete geodesic segment which contains $S_1$. Then $\langle S_1\rangle$ divides $H$ into two closed halfplanes $H_1$ and $H_2$ (with $H_1\cap H_2=\langle S_1\rangle$) one of which contains $\gamma(t)$, say $H_1$. Now $\gamma(\R)$ intersects $\langle S_1\rangle$ transversely in $\gamma(t_1)$. Since $t_1>t$, the segment $\gamma( (t_1,\infty) )$ is contained in $H_2$. This contradicts $\gamma( (t,\infty) )\subseteq U$. Hence $\gamma(t_1)$ is not an endpoint of any side of $U$. Let $S$ be the unique side of $U$ with $\gamma(t_1)\in S$. Then $\gamma( (t,\infty) )$ intersects $S$. The previous argument shows that $\gamma( (t,\infty) )$ does not contain an endpoint of $S$, hence $\gamma( (t,\infty) )\subseteq S$. Finally, since $S$ is closed, $\gamma( [t,\infty) )\subseteq S$. For \eqref{convexii} let $s\sceq [\gamma(t_1),\gamma(t_3)]$. Since $\gamma(t_1)$ and $\gamma(t_3)$ are in $U$, the convexity of $U$ shows that $s\subseteq U$. Now $\gamma(t_2) \in (\gamma(t_1),\gamma(t_3))\cap\partial U$. As in the proof of \eqref{convexi} it follows that $\gamma(t_2)$ is not an endpoint of any side of $U$. Let $S$ be the unique side of $U$ with $\gamma(t_2)\in S$. Then $s\subseteq S$. Since the geodesic segment $S$ contains (at least) two points of the complete geodesic segment $\gamma(\R)$, it follows that $S\subseteq\gamma(\R)$. For \eqref{convexiii} it suffices to show that $\gamma(\R)\subseteq U$ resp.\@ that $\gamma((t,\infty))\subseteq U$. This follows from \cite[Theorem~6.4.2]{Ratcliffe}. \end{proof} \begin{prop}\label{CS1} Let $\widehat\gamma$ be a geodesic on $Y$. Then $\widehat\gamma$ intersects $\widehat\CS$ if and only if $\widehat\gamma \notin \NC$.
\end{prop} \begin{proof} Let $\fch$ be the family of cells in $H$ assigned to $\fpch$. Recall from Proposition~\ref{choiceinvariant} that $\NC=\NC(\fch)$. Suppose first that $\wh\gamma\in\NC$. Then we find $\ch\in\fch$ and a representative $\gamma$ of $\wh\gamma$ on $H$ such that $\gamma(\pm\infty)\in \bd(\ch)$. Since $\ch$ is a convex polyhedron and $\gamma(\pm\infty)\in\bhg\ch$, Lemma~\ref{convex}\eqref{convexiii} states that either $\gamma(\R)\subseteq \ch^\circ$ or $\gamma(\R)\subseteq \partial\ch$. Corollary~\ref{cellsHtess} shows that $\ch^\circ\cap \BS=\emptyset$ and $\partial\ch\subseteq\BS$. Thus, either $\gamma(\R)$ does not intersect $\BS$ or $\gamma(\R)\subseteq \BS$. Remark~\ref{charinter} shows that in both cases $\gamma$ does not intersect $\CS$, and therefore $\wh\gamma$ does not intersect $\wh\CS$. Suppose now that $\wh\gamma$ does not intersect $\wh\CS$. Then no representative of $\wh\gamma$ on $H$ intersects $\CS$. Let $\gamma$ be any representative of $\wh\gamma$ on $H$. We will show that there is a cell $\ch$ in $H$ and a geodesic $\eta$ equivalent to $\gamma$ such that $\eta(\pm\infty)\in\bd(\ch)$. Pick a unit tangent vector $v$ to $\gamma$. Recall from Proposition~\ref{BfundsetSH} that $\bigcup\{\wt\ch\mid \wt\ch\in\wt\fch_{\choices}\}$ is a fundamental set for $\Gamma$ in $SH$. Thus, there is a pair $(\wt\ch,g)\in\wt\fch_\choices\times\Gamma$ such that $gv\in\wt\ch$. Set $\eta\sceq g\gamma$. Lemma~\ref{propsSH} states that $\partial\pr(\wt\ch)$ consists of complete geodesic segments and $\partial\pr(\wt\ch)\subseteq\BS$. By assumption, $\eta$ does not intersect $\BS$ transversely, which implies that $\eta$ does not intersect $\partial\pr(\wt\ch)$ transversely. Because $\eta(\R)\cap \cl(\pr(\wt\ch))\not=\emptyset$, it follows that $\eta(\R)\subseteq \cl(\pr(\wt\ch))$. Thus, $\eta(\pm\infty)\in \bhg\cl(\pr(\wt\ch))$. By Corollary~\ref{iscellH}, $\ch\sceq \cl(\pr(\wt\ch))$ is a cell in $H$.
Therefore $\eta(\pm\infty)\in \bd(\ch)$, which shows that $\wh\gamma=\wh\eta\in \NC(\ch)\subseteq\NC$. \end{proof} Suppose that we are given a geodesic $\wh\gamma$ on $Y$ which intersects $\wh\CS$ in $\wh\gamma'(t_0)$ and suppose that $\gamma$ is the unique geodesic on $H$ which intersects $\CS'(\wt\fch_{\choices,\shmap})$ in $\gamma'(t_0)$ and which satisfies $\pi(\gamma)=\wh\gamma$. Our next goal is to characterize when there is a next point of intersection of $\wh\gamma$ and $\wh\CS$ resp.\@ of $\gamma$ and $\CS$, and, if there is one, where this point is located. Further, we investigate analogously the existence and location of previous points of intersection. To this end we need the following preparations. \begin{defi}\label{def_codint} Let $\wt\ch\in\wt\fch_{\choices,\shmap}$ and suppose that $b(\wt\ch)$ is the complete geodesic segment $(a,\infty)$ with $a\in\R$. We assign to $\wt\ch$ two intervals $I(\wt\ch)$ and $J(\wt\ch)$ which are given as follows: \begin{align*} I\big(\wt\ch\big) & \sceq \begin{cases} (a,\infty) & \text{if $\pr(\wt\ch)\subseteq \{ z\in H\mid \Rea z\geq a\}$,} \\ (-\infty, a) & \text{if $\pr(\wt\ch) \subseteq \{ z\in H \mid \Rea z \leq a\}$,} \end{cases} \intertext{and} J\big(\wt\ch\big) & \sceq \begin{cases} (-\infty, a) & \text{if $\pr(\wt\ch)\subseteq \{ z\in H\mid \Rea z \geq a\}$,} \\ (a,\infty) & \text{if $\pr(\wt\ch) \subseteq \{ z\in H \mid \Rea z \leq a\}$.} \end{cases} \end{align*} \end{defi} Note that the combination of Remark~\ref{remain} with Propositions~\ref{nccells}\eqref{ncc1} and \ref{ncSH} resp.\@ with Propositions~\ref{ccells}\eqref{cc1} and \ref{ccSH} resp.\@ with Remark~\ref{just_def} and Proposition~\ref{stripSH} shows that each $\wt\ch\in\wt\fch_{\choices,\shmap}$ is indeed assigned a pair $\big( I(\wt\ch), J(\wt\ch) \big)$ of intervals. \begin{lemma}\label{char_intervals} Let $\wt\ch\in\wt\fch_{\choices,\shmap}$.
For each $v\in\CS'(\wt\ch)$ let $\gamma_v$ denote the geodesic on $H$ determined by $v$. If $v\in \CS'(\wt\ch)$, then $(\gamma_v(\infty), \gamma_v(-\infty))\in I(\wt\ch)\times J(\wt\ch)$. Conversely, if $(x,y)\in I(\wt\ch)\times J(\wt\ch)$, then there exists a unique element $v$ in $\CS'(\wt\ch)$ such that $(\gamma_v(\infty), \gamma_v(-\infty)) = (x,y)$. \end{lemma} \begin{proof} Let $v\in \CS'(\wt\ch)$. By Proposition~\ref{ncSH} resp.\@ \ref{ccSH} resp.\@ \ref{stripSH} (recall Remark~\ref{remain}), the unit tangent vector $v$ points into $\pr(\wt\ch)^\circ$ and $\pr(v)\in b(\wt\ch)$. By definition we find $\eps > 0$ such that $\gamma_v( (0,\eps) )\subseteq \pr(\wt\ch)^\circ$. Then $\gamma_v(\R)$ intersects $b(\wt\ch)$ in $\gamma_v(0) = \pr(v)$. From $\gamma_v(\eps/2) \in \pr(\wt\ch)^\circ$ and hence $\gamma_v(\eps/2)\notin b(\wt\ch)$, it follows that $\gamma_v(\R)\not= b(\wt\ch)$. Since $\gamma_v(\R)$ and $b(\wt\ch)$ are both complete geodesic segments, this shows that $\pr(v)$ is the only intersection point of $\gamma_v(\R)$ and $b(\wt\ch)$. Now $b(\wt\ch)$ divides $H$ into two closed half-spaces $H_1$ and $H_2$ (with $H_1\cap H_2=b(\wt\ch)$) one of which contains $\pr(\wt\ch)$, say $\pr(\wt\ch)\subseteq H_1$. Then $\gamma_v( (0,\infty) ) \subseteq H_1$ and $\gamma_v( (-\infty, 0) )\subseteq H_2$. The definition of $I(\wt\ch)$ and $J(\wt\ch)$ shows that $\big( \gamma_v(\infty), \gamma_v(-\infty) \big) \in I(\wt\ch)\times J(\wt\ch)$. Conversely, let $(x,y)\in I(\wt\ch)\times J(\wt\ch)$. Suppose that $b(\wt\ch)$ is the geodesic segment $(a,\infty)$ and suppose w.l.o.g.\@ that $I(\wt\ch)$ is the interval $(a,\infty)$ and $J(\wt\ch)$ the interval $(-\infty, a)$. Let $c$ denote the complete geodesic segment $[x,y]$. Since $x>a>y$, the geodesic segment $c$ intersects $b(\wt\ch)$ in a (unique) point $z$. There are exactly two unit tangent vectors $w_j$, $j=1,2$, to $c$ at $z$. For $j\in\{1,2\}$ let $\gamma_{w_j}$ denote the geodesic on $H$ determined by $w_j$. 
Then $\gamma_{w_j}(\R) = c$ and \[ \big( \gamma_{w_1}(\infty),\gamma_{w_1}(-\infty) \big) = \big( \gamma_{w_2}(-\infty), \gamma_{w_2}(\infty) \big) \] with \[ \big( \gamma_{w_1}(\infty), \gamma_{w_1}(-\infty) \big) = (x,y)\quad\text{or}\quad \big( \gamma_{w_1}(\infty), \gamma_{w_1}(-\infty) \big) = (y,x). \] W.l.o.g.\@ suppose that $\big( \gamma_{w_1}(\infty),\gamma_{w_1}(-\infty) \big) = (x,y)$ and set $v\sceq w_1$. We will show that $v$ points into $\pr(\wt\ch)^\circ$. The set $b(\wt\ch)$ is a side of $\cl(\pr(\wt\ch))$ and, since $\cl(\pr(\wt\ch))$ is a convex polyhedron with non-empty interior, $b(\wt\ch)$ is a side of $\pr(\wt\ch)^\circ$, hence $b(\wt\ch)\subseteq \partial\pr(\wt\ch)^\circ$. Since $z$ is not an endpoint of $b(\wt\ch)$, there exists $\eps>0$ such that \[ B_\eps(z) \cap \pr\big(\wt\ch\big)^\circ = B_\eps(z) \cap \{ z\in H \mid \Rea z > a \}. \] Now $\gamma_v( (0,\infty) )\subseteq \{ z\in H \mid \Rea z > a\}$ with $\gamma_v(0) = z$. Hence there is $\delta>0$ such that \[ \gamma_v\big( ( 0,\delta) \big)\subseteq B_\eps(z) \cap \{ z\in H \mid \Rea z > a\}. \] Thus $\gamma_v( (0,\delta) ) \subseteq \pr(\wt\ch)^\circ$, which means that $v$ points into $\pr(\wt\ch)^\circ$. Then Proposition~\ref{ncSH} resp.\@ \ref{ccSH} resp.\@ \ref{stripSH} states that $v\in \CS'(\wt\ch)$. This completes the proof. \end{proof} Let $\wt\ch\in\wt\fch_{\choices,\shmap}$ and $g\in\Gamma$. Suppose that $I(\wt\ch) =(a,\infty)$. Then \begin{align*} gI\big(\wt\ch\big) & = \begin{cases} (ga,g\infty) & \text{if $ga<g\infty$,} \\ (ga,\infty] \cup (-\infty, g\infty) & \text{if $g\infty < ga$,} \end{cases} \intertext{and} gJ\big(\wt\ch\big) & = \begin{cases} (ga,\infty] \cup (-\infty, g\infty) & \text{if $ga < g\infty$,} \\ (ga,g\infty) &\text{if $g\infty < ga$.} \end{cases} \end{align*} Here, the interval $(b,\infty]$ denotes the union of the interval $(b,\infty)$ with the point $\infty\in \bhg H$.
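For a concrete illustration of this case distinction, take the purely hypothetical data $g \sceq \textmat{1}{0}{1}{1}$, so that $gz = \frac{z}{z+1}$, and $I(\wt\ch) = (2,\infty)$, $J(\wt\ch) = (-\infty, 2)$, that is, $a=2$ (the element $g$ is not claimed to belong to any particular group $\Gamma$; it merely serves as an example). Then
\[
 ga = \tfrac{2}{3}, \qquad g\infty = 1, \qquad ga < g\infty,
\]
so the first case applies: $gI(\wt\ch)$ is the interval $\big(\tfrac23, 1\big)$, and $gJ(\wt\ch)$ is the complementary connected subset of $\bhg H$ which contains $\infty$ and has the same endpoints $\tfrac23$ and $1$.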
\label{def_bint} Hence, the set $I\sceq (b,\infty]\cup (-\infty, c)$ is connected as a subset of $\bhg H$. The interpretation of $I$ becomes clearer in the ball model: via the Cayley transform $\mc C$ the set $\bhg H$ is homeomorphic to the unit sphere $S^1$. Let $b'\sceq \mc C(b)$, $c'\sceq \mc C(c)$ and $I'\sceq \mc C(I)$. Then $I'$ is the connected component of $S^1\mminus\{b',c'\}$ which contains $\mc C(\infty)$. Suppose now that $I(\wt\ch) = (-\infty, a)$. Then \begin{align*} gI\big(\wt\ch\big) & = \begin{cases} (-\infty, ga) \cup (g(-\infty), \infty] & \text{if $ga<g(-\infty)$,} \\ (g(-\infty), ga) & \text{if $g(-\infty)<ga$,} \end{cases} \intertext{and} gJ\big(\wt\ch\big) & = \begin{cases} ( g(-\infty), ga) & \text{if $ga < g(-\infty)$,} \\ (-\infty, ga) \cup (g(-\infty), \infty] & \text{if $g(-\infty)<ga$.} \end{cases} \end{align*} Note that for $g=\textmat{\alpha}{\beta}{\gamma}{\delta}$ we have \[ g(-\infty) = \lim_{t\searrow -\infty} \frac{\alpha t + \beta}{\gamma t + \delta} = \lim_{s\nearrow 0} \frac{\alpha + \beta s}{\gamma + \delta s} = \lim_{s\searrow 0} \frac{\alpha + \beta s}{\gamma + \delta s} = g\infty. \] In particular, $\id(-\infty) = \infty$. Let $a,b\in\overline \R$. For abbreviation we set $(a,b)_+\sceq (\min(a,b),\max(a,b))$ and $(a,b)_-\sceq (\max(a,b),\infty] \cup (-\infty, \min(a,b))$. \label{def_pmint} \begin{prop}\label{adjacentSH} Let $\wt\ch\in\wt\fch_{\choices,\shmap}$ and suppose that $S$ is a side of $\pr(\wt\ch)$. Then there exist exactly two pairs $(\wt\ch_1,g_1),(\wt\ch_2,g_2)\in \wt\fch_{\choices,\shmap}\times\Gamma$ such that $S=g_jb(\wt\ch_j)$. Moreover, $g_1 \cl( \pr(\wt\ch_1) ) = \cl(\pr(\wt\ch))$ and $g_2\cl(\pr(\wt\ch_2))\cap \cl(\pr(\wt\ch)) = S$ or vice versa. The union $g_1\CS'(\wt\ch_1) \cup g_2\CS'(\wt\ch_2)$ is disjoint and equals the set of all unit tangent vectors in $\CS$ that are based on $S$. Let $a,b\in\bhg H$ be the endpoints of $S$.
Then $g_1I(\wt\ch_1)\times g_1J(\wt\ch_1) = (a,b)_+ \times (a,b)_-$ and $g_2I(\wt\ch_2)\times g_2J(\wt\ch_2) = (a,b)_-\times (a,b)_+$ or vice versa. \end{prop} \begin{proof} Let $D'$ denote the set of unit tangent vectors in $\CS$ that are based on $S$. By Lemma~\ref{propsSH}, $S$ is a connected component of $\BS$. Hence $D'$ is the set of unit tangent vectors based on $S$ but not tangent to $S$. The complete geodesic segment $S$ divides $H$ into two open half-spaces $H_1,H_2$ such that $H$ is the disjoint union $H_1\cup S\cup H_2$. Moreover, $\pr(\wt\ch)^\circ$ is contained in $H_1$ or $H_2$, say $\pr(\wt\ch)^\circ\subseteq H_1$. Then $D'$ decomposes into the disjoint union $D'_1\cup D'_2$ where $D'_j$ denotes the non-empty set of elements in $D'$ that point into $H_j$. For $j=1,2$ pick $v_j\in D'_j$. Since $\CS'(\wt\fch_{\choices,\shmap})$ is a set of representatives for $\wh\CS=\pi(\CS)$ (see Proposition~\ref{CS=CShat}), there exists a uniquely determined pair $(\wt\ch_j, g_j)\in\wt\fch_{\choices,\shmap}\times\Gamma$ such that $v_j\in g_j\CS'(\wt\ch_j)$. We will show that $S=g_jb(\wt\ch_j)$. Assume for contradiction that $S\not=g_jb(\wt\ch_j)$. Since $S$ and $g_jb(\wt\ch_j)$ are complete geodesic segments, the intersection of $S$ and $g_jb(\wt\ch_j)$ in $\pr(v_j)$ is transversal. Recall that $S\subseteq\partial\pr(\wt\ch)$ and $b(\wt\ch_j)\subseteq\partial\pr(\wt\ch_j)$ and that $\partial\pr(\wt\ch')^\circ = \partial\pr(\wt\ch')$ for each $\wt\ch'\in\wt\fch_{\choices,\shmap}$. Then there exists $\eps>0$ such that $B_\eps(\pr(v_j))\cap \pr(\wt\ch)^\circ = B_\eps(\pr(v_j))\cap H_1$ and \[ B_\eps\big(\pr(v_j)\big) \cap g_j\pr\big(\wt\ch_j\big)^\circ \cap H_1 \not=\emptyset. \] Hence $\pr(\wt\ch)^\circ\cap g_j\pr(\wt\ch_j)^\circ \not=\emptyset$. Proposition~\ref{glue_nccells} resp.\@ \ref{glue_ccells} resp.\@ \ref{glue_stripcells} in combination with Remark~\ref{remain} shows that $\cl(\pr(\wt\ch)) = g_j\cl(\pr(\wt\ch_j))$. 
But then \[ \partial\pr(\wt\ch) = g_j\partial\pr(\wt\ch_j), \] which implies that $S=g_jb(\wt\ch_j)$. This contradicts the assumption that $S\not=g_jb(\wt\ch_j)$. Therefore $S=g_jb(\wt\ch_j)$. Then Lemma~\ref{char_intervals} implies that $g_jI(\wt\ch_j)\times g_jJ(\wt\ch_j)$ equals $(a,b)_+\times (a,b)_-$ or $(a,b)_-\times (a,b)_+$. On the other hand \[ \bhg H_1 \times \bhg H_2 = \big\{ \big(\gamma_v(\infty), \gamma_v(-\infty)\big) \ \big\vert\ v\in D'_1\big\} = \big\{ \big(\gamma_v(-\infty),\gamma_v(\infty)\big) \ \big\vert\ v\in D'_2\big\} \] equals $(a,b)_+\times (a,b)_-$ or $(a,b)_-\times (a,b)_+$. Therefore, again by Lemma~\ref{char_intervals}, $g_j\CS'(\wt\ch_j)=D'_j$. This shows that the union $g_1\CS'(\wt\ch_1)\cup g_2\CS'(\wt\ch_2)$ is disjoint and equals $D'$. We have $\cl(\pr(\wt\ch))\subseteq \overline H_1$ and $g_1\cl(\pr(\wt\ch_1))\subseteq \overline H_1$ with $S\subseteq \partial\pr(\wt\ch) \cap g_1 \partial \pr(\wt\ch_1)$. Let $z\in S$. Then there exists $\eps>0$ such that \[ B_\eps(z) \cap \pr\big(\wt\ch\big)^\circ = B_\eps(z) \cap H_1 = B_\eps(z) \cap g_1\pr\big(\wt\ch_1\big)^\circ. \] Hence $\pr(\wt\ch)^\circ \cap g_1\pr(\wt\ch_1)^\circ\not=\emptyset$. As above we find that $\cl(\pr(\wt\ch)) = g_1\cl(\pr(\wt\ch_1))$. Finally, $g_2\cl(\pr(\wt\ch_2))\subseteq \overline H_2$ with \[ S\subseteq g_2\cl\big(\pr\big(\wt\ch_2\big)\big) \cap \overline H_1 \subseteq \overline H_2 \cap \overline H_1 = S. \] Hence $\cl(\pr(\wt\ch))\cap g_2\cl(\pr(\wt\ch_2)) = S$. \end{proof} Let $\wt\ch\in\wt\fch_{\choices,\shmap}$ and suppose that $S$ is a side of $\pr(\wt\ch)$. Let $(\wt\ch_1,g_1), (\wt\ch_2,g_2)$ be the two elements in $\wt\fch_{\choices,\shmap}\times\Gamma$ such that $S=g_jb(\wt\ch_j)$ and $g_1\cl(\pr(\wt\ch_1)) = \cl(\pr(\wt\ch))$ and $g_2\cl(\pr(\wt\ch_2))\cap \cl(\pr(\wt\ch))=S$. Then we define \[ p\big(\wt\ch,S\big) \sceq \big(\wt\ch_1,g_1\big)\quad\text{and}\quad n\big(\wt\ch,S\big)\sceq \big(\wt\ch_2,g_2\big).
\] \begin{remark}\label{sides_eff} Let $\wt\ch\in\wt\fch_{\choices,\shmap}$ and suppose that $S$ is a side of $\pr(\wt\ch)$. We will show how one effectively finds the elements $p(\wt\ch,S)$ and $n(\wt\ch,S)$. Let \[ \big(\wt\ch_1,k_1\big)\sceq p\big(\wt\ch,S\big) \quad\text{and}\quad \big(\wt\ch_2,k_2\big)\sceq n\big(\wt\ch,S\big). \] Suppose that $\wt\ch'$ is the (unique) element in $\wt\fch_\choices$ such that $\shmap(\wt\ch')\wt\ch'=\wt\ch$ and suppose further that $\wt\ch'_j\in\wt\fch_\choices$ such that $\shmap(\wt\ch'_j)\wt\ch'_j = \wt\ch_j$ for $j=1,2$. Set $S'\sceq \shmap(\wt\ch')^{-1}S$. Then $S'$ is a side of $\pr(\wt\ch')$. For $j=1,2$ we have \[ S'=\shmap\big(\wt\ch'\big)^{-1}S = \shmap\big(\wt\ch'\big)^{-1} k_j b\big(\wt\ch_j\big) = \shmap\big(\wt\ch'\big)^{-1} k_j \shmap\big(\wt\ch'_j\big) b\big(\wt\ch'_j\big) \] and \[ k_j\cl\big(\pr\big(\wt\ch_j\big)\big) = k_j\shmap\big(\wt\ch'_j\big)\cl\big(\pr\big(\wt\ch'_j\big)\big). \] Moreover, $\cl(\pr(\wt\ch)) = \shmap(\wt\ch')\cl(\pr(\wt\ch'))$. Then $k_1\cl(\pr(\wt\ch_1)) = \cl(\pr(\wt\ch))$ is equivalent to \[ \shmap\big(\wt\ch'\big)^{-1}k_1\shmap\big(\wt\ch'_1\big)\cl\big(\pr\big(\wt\ch'_1\big)\big) = \cl\big(\pr\big(\wt\ch'\big)\big), \] and $k_2\cl(\pr(\wt\ch_2))\cap\cl(\pr(\wt\ch)) = S$ is equivalent to \[ \shmap\big(\wt\ch'\big)^{-1}k_2\shmap\big(\wt\ch'_2\big)\cl\big(\pr\big(\wt\ch'_2\big)\big) \cap \cl\big(\pr\big(\wt\ch'\big)\big) = S'. \] Therefore, $(\wt\ch_1,k_1) = p(\wt\ch,S)$ if and only if $(\wt\ch'_1,\shmap(\wt\ch')^{-1}k_1\shmap(\wt\ch'_1)) = p(\wt\ch',S')$, and $(\wt\ch_2,k_2) = n(\wt\ch,S)$ if and only if $(\wt\ch'_2,\shmap(\wt\ch')^{-1}k_2\shmap(\wt\ch'_2)) = n(\wt\ch',S')$. By Corollary~\ref{iscellH}, the sets $\ch'\sceq \cl(\pr(\wt\ch'))$ and $\ch'_j\sceq \cl(\pr(\wt\ch'_j))$ are $\fpch$-cells in $H$. Suppose first that $\ch'$ arises from the non-cuspidal basal precell $\pch'$ in $H$. 
Then there is a unique element $(\pch, h_\pch)\in \choices$ such that for some $h\in\Gamma$ the pair $(\pch',h)$ is contained in the cycle in $\fpch\times\Gamma$ determined by $(\pch, h_\pch)$. Necessarily, $\pch$ is non-cuspidal. Let $\big( (\pch_j, h_j) \big)_{j=1,\ldots, k}$ be the cycle in $\fpch\times\Gamma$ determined by $(\pch,h_\pch)$. Then $\pch'=\pch_m$ for some $m\in\{1,\ldots, \cyl(\pch)\}$ and hence $\ch'=\ch(\pch_m)$ and $\wt\ch' = \wt\ch_m(\pch,h_\pch)$. For $j=1,\ldots, k$ set $g_1\sceq\id$ and $g_{j+1}\sceq h_jg_j$. Proposition~\ref{nccells}\eqref{ncc3} states that $\ch(\pch_m)=g_m\ch(\pch)$ and Proposition~\ref{nccells}\eqref{ncc1} shows that $S'$ is the geodesic segment $[g_mg_j^{-1}\infty, g_mg_{j+1}^{-1}\infty]$ for some $j\in \{1,\ldots, k\}$. Then $g_jg_m^{-1}S'=[\infty, h_j^{-1}\infty]$. Let $n\in\{1,\ldots, \cyl(\pch)\}$ such that $n\equiv j \mod \cyl(\pch)$. Then $h_n=h_j$ by Lemma~\ref{cyclic}. Proposition~\ref{ncSH} shows that $b(\wt\ch_n(\pch, h_\pch)) = [\infty, h_n^{-1}\infty] = g_jg_m^{-1}S'$. We claim that $(\wt\ch_j(\pch,h_\pch), g_mg_j^{-1}) = p(\wt\ch',S')$. For this it remains to show that $g_mg_j^{-1}\cl(\pr(\wt\ch_j(\pch,h_\pch))) = \cl(\pr(\wt\ch'))$. Proposition~\ref{ncSH} shows that $\cl(\pr(\wt\ch_j(\pch, h_\pch))) = \ch(\pch_n)$ and Lemma~\ref{cyclic} implies that $\ch(\pch_n) = \ch(\pch_j)$. Let $v$ be the vertex of $\mc K$ to which $\pch$ is attached. Then $g_jv\in \ch(\pch_j)^\circ$ and $g_mg_j^{-1}g_jv=g_mv\in\ch(\pch_m)^\circ$. Therefore $g_mg_j^{-1}\ch(\pch_j)^\circ \cap \ch(\pch_m)^\circ \not=\emptyset$. From Proposition~\ref{glue_nccells} it follows that $g_mg_j^{-1}\ch(\pch_j) = \ch(\pch_m)$. Recall that $\ch(\pch_m) = \cl(\pr(\wt\ch'))$. Hence $( \wt\ch_j(\pch,h_\pch), g_mg_j^{-1}) = p(\wt\ch',S')$ and \[ \big( \shmap\big(\wt\ch_j(\pch,h_\pch)\big) \wt\ch_j\big(\pch, h_\pch\big), \shmap\big(\wt\ch'\big) g_mg_j^{-1} \shmap\big( \wt\ch_j(\pch, h_\pch)\big)^{-1} \big) = p\big(\wt\ch, S\big).
\] Analogously one proceeds if $\pch'$ is cuspidal or a strip precell. Now we show how one determines $(\wt\ch_2,k_2)$. Suppose again that $\ch'$ arises from the non-cuspidal basal precell $\pch'$ in $H$. We use the notation from the determination of $p(\wt\ch',S')$. By Corollary~\ref{props_preH} there is a unique pair $(\wh\pch,s)\in\fpch\times\Z$ such that $b(\wt\ch_n(\pch, h_\pch)) \cap t_\lambda^s\wh\pch \not=\emptyset$ and $t_\lambda^s\wh\pch\not=\pch_n$. Then $t_\lambda^{-s}g_jg_m^{-1}S'$ is a side of the cell $\ch(\wh\pch)$ in $H$. As before, we determine $(\wt\ch_3,k_3)\in\wt\fch_\choices\times\Gamma$ such that $k_3b(\wt\ch_3) = t_\lambda^{-s}g_jg_m^{-1}S'$ and $k_3\cl(\pr(\wt\ch_3)) = \ch(\wh\pch)$. Recall that $g_jg_m^{-1}S'$ is a side of $\ch(\pch_j) = \ch(\pch_n)$. We have \begin{align*} g_mg_j^{-1}t_\lambda^s k_3 \cl\big(\pr\big(\wt\ch_3\big)\big) \cap \cl\big(\pr\big(\wt\ch'\big)\big) & = g_mg_j^{-1}t_\lambda^s \ch(\wh\pch) \cap \ch(\pch_m) \\ & = g_mg_j^{-1}t_\lambda^s \ch(\wh\pch) \cap g_mg_j^{-1}\ch(\pch_j) \\ & = g_mg_j^{-1} \big( t_\lambda^s \ch(\wh\pch) \cap \ch(\pch_j) \big) \\ & = g_mg_j^{-1} \big( t_\lambda^s \ch(\wh\pch) \cap \ch(\pch_n) \big) \\ & = g_mg_j^{-1}g_jg_m^{-1}S' \\ & = S'. \end{align*} Thus $n(\wt\ch',S') = (\wt\ch_3, g_mg_j^{-1}t_\lambda^s k_3)$ and \[ n\big(\wt\ch, S\big) = \big( \shmap\big(\wt\ch_3\big)\wt\ch_3, \shmap\big(\wt\ch'\big)g_mg_j^{-1}t_\lambda^s k_3 \shmap\big(\wt\ch_3\big)^{-1}\big). \] If $\ch'$ arises from a cuspidal or strip precell in $H$, then the construction of $n(\wt\ch,S)$ is analogous. \end{remark} \begin{prop}\label{CS2} Let $\wh\gamma$ be a geodesic on $Y$ and suppose that $\wh\gamma$ intersects $\wh\CS$ in $\wh\gamma'(t_0)$. Let $\gamma$ be the unique geodesic on $H$ such that $\gamma'(t_0)\in \CS'(\wt\fch_{\choices,\shmap})$ and $\pi(\gamma'(t_0))=\wh\gamma'(t_0)$. Let $\wt\ch\in\wt\fch_{\choices,\shmap}$ be the unique shifted cell in $SH$ for which we have $\gamma'(t_0)\in \CS'(\wt\ch)$. 
\begin{enumerate}[{\rm (i)}] \item\label{CS2i} There is a next point of intersection of $\gamma$ and $\CS$ if and only if $\gamma(\infty)$ does not belong to $\bhg\pr(\wt\ch)$. \item\label{CS2ii} Suppose that $\gamma(\infty)\notin\bhg\pr(\wt\ch)$. Then there is a unique side $S$ of $\pr(\wt\ch)$ intersected by $\gamma( (t_0,\infty) )$. Suppose that $(\wt\ch_1,g) = n(\wt\ch,S)$. The next point of intersection is on $g\CS'(\wt\ch_1)$. \item\label{CS2iii} Let $(\wt\ch',h) = n(\wt\ch, b(\wt\ch))$. Then there is a previous point of intersection of $\gamma$ and $\CS$ if and only if $\gamma(-\infty)\notin h\bhg\pr(\wt\ch')$. \item\label{CS2iv} Suppose that $\gamma(-\infty)\notin h\bhg\pr(\wt\ch')$. Then there is a unique side $S$ of $h\pr(\wt\ch')$ intersected by $\gamma( (-\infty, t_0) )$. Let $(\wt\ch_2, h^{-1}k) = p(\wt\ch',h^{-1}S)$. Then the previous point of intersection is on $k\CS'(\wt\ch_2)$. \end{enumerate} \end{prop} \begin{proof} We start by proving \eqref{CS2i}. Recall from Remark~\ref{charinter} that there is a next point of intersection of $\gamma$ and $\CS$ if and only if $\gamma( (t_0,\infty) )$ intersects $\BS$. Since $\gamma'(t_0)\in \CS'(\wt\ch)$, Proposition~\ref{ncSH} resp.\@ \ref{ccSH} resp.\@ \ref{stripSH} in combination with Remark~\ref{remain} shows that $\gamma'(t_0)$ points into $\pr(\wt\ch)^\circ$. Lemma~\ref{propsSH} states that $\pr(\wt\ch)^\circ \cap \BS =\emptyset$ and $\partial\pr(\wt\ch)\subseteq \BS$. Hence $\gamma( (t_0,\infty))$ does not intersect $\BS$ if and only if $\gamma( (t_0,\infty) )\subseteq \pr(\wt\ch)^\circ$. In this case, \[ \gamma(\infty)\in \chg( \pr(\wt\ch)) \cap \bhg H = \bhg \pr(\wt\ch). \] Conversely, if $\gamma(\infty) \in \bhg\pr(\wt\ch)$, then Lemma~\ref{convex}\eqref{convexiii} states that $\gamma( (t_0,\infty) )\subseteq \pr(\wt\ch)^\circ$ or $\gamma( (t_0,\infty) )\subseteq \partial\pr(\wt\ch)$. In the latter case, Lemma~\ref{propsSH} shows that $\gamma( (t_0,\infty) )\subseteq\BS$.
Hence, if $\gamma(\infty)\in\bhg\pr(\wt\ch)$, then $\gamma( (t_0,\infty) )\subseteq \pr(\wt\ch)^\circ$. Suppose now that $\gamma(\infty)\notin\bhg\pr(\wt\ch)$. The previous argument shows that the geodesic segment $\gamma( (t_0,\infty) )$ intersects $\partial\pr(\wt\ch)$, say $\gamma(t_1)\in\partial\pr(\wt\ch)$ with $t_1\in (t_0,\infty)$. If there were an element $t_2\in (t_0,\infty)\mminus\{t_1\}$ with $\gamma(t_2)\in\partial\pr(\wt\ch)$, then Lemma~\ref{convex}\eqref{convexii} would imply that there is a side $S$ of $\pr(\wt\ch)$ such that $\gamma(\R)=S$, where the equality follows from the fact that $S$ is a complete geodesic segment (see Lemma~\ref{propsSH}). But then, by Lemma~\ref{propsSH}, $\gamma(\R)\subseteq \BS$, which contradicts $\gamma'(t_0)\in \CS$. Thus, $\gamma(t_1)$ is the only intersection point of $\partial\pr(\wt\ch)$ and $\gamma( (t_0,\infty) )$. Since $\gamma( (t_0,t_1) )\subseteq \pr(\wt\ch)^\circ$, $\gamma'(t_1)$ is the next point of intersection of $\gamma$ and $\CS$. Moreover, $\gamma'(t_1)$ points out of $\pr(\wt\ch)$, since otherwise $\gamma( (t_1,\infty) )$ would intersect $\partial\pr(\wt\ch)$ which would lead to a contradiction as before. Proposition~\ref{CS=CShat} states that there is a unique pair $(\wt\ch_1,g)\in\wt\fch_{\choices,\shmap}\times\Gamma$ such that $\gamma'(t_1)\in g\CS'(\wt\ch_1)$. Then $\gamma'(t_1)$ points into $g\pr(\wt\ch_1)^\circ$. Let $S$ be the side of $\pr(\wt\ch)$ with $\gamma(t_1)\in S$. By Proposition~\ref{adjacentSH}, either $g\cl(\pr(\wt\ch_1)) = \cl(\pr(\wt\ch))$ or $g\cl(\pr(\wt\ch_1))\cap \cl(\pr(\wt\ch)) = S$. In the first case, $\gamma'(t_1)$ points into $\pr(\wt\ch)^\circ$, which is a contradiction. Therefore \[ g\cl(\pr(\wt\ch_1))\cap \cl(\pr(\wt\ch)) = S, \] which shows that $(\wt\ch_1,g) = n(\wt\ch,S)$. This completes the proof of \eqref{CS2ii}. Let $(\wt\ch',h)=n(\wt\ch, b(\wt\ch))$.
Since $\gamma(t_0) \in b(\wt\ch)$ and $\gamma'(t_0)\in \CS'(\wt\ch)$, Proposition~\ref{adjacentSH} implies that $\gamma(t_0)\in hb(\wt\ch')$ and $\gamma'(t_0) \notin h\CS'(\wt\ch')$. Since $\gamma(\R)\not\subseteq h\partial\pr(\wt\ch')$, the unit tangent vector $\gamma'(t_0)$ points out of $h\pr(\wt\ch')$. Because the intersection of $\gamma(\R)$ and $hb(\wt\ch')$ is transversal and $\pr(\wt\ch')$ is a convex polyhedron with non-empty interior, $\gamma( (t_0-\eps,t_0) )\cap h\pr(\wt\ch')^\circ \not= \emptyset$ for each $\eps>0$. As before we find that there is a previous point of intersection of $\gamma$ and $\CS$ if and only if $\gamma( (-\infty, t_0) )$ intersects $h\partial\pr(\wt\ch')$ and that this is the case if and only if $\gamma(-\infty) \notin h\bhg\pr(\wt\ch')$. Suppose that $\gamma(-\infty)\notin h\bhg\pr(\wt\ch')$. As before, there is a unique $t_{-1}\in (-\infty, t_0)$ such that $\gamma(t_{-1})\in h\partial\pr(\wt\ch')$. Let $S$ be the side of $h\pr(\wt\ch')$ with $\gamma(t_{-1})\in S$. Necessarily, $\gamma( (t_{-1},t_0) ) \subseteq h\pr(\wt\ch')^\circ$, which shows that $\gamma'(t_{-1})$ points into $h\pr(\wt\ch')^\circ$ and that $\gamma'(t_{-1})$ is the previous point of intersection. Let $(\wt\ch_2, k) \in \wt\fch_{\choices,\shmap}\times\Gamma$ be the unique pair such that $\gamma'(t_{-1})\in k\CS'(\wt\ch_2)$ (see Proposition~\ref{CS=CShat}). By Proposition~\ref{adjacentSH}, we have either $k\cl(\pr(\wt\ch_2)) = h\cl(\pr(\wt\ch'))$ or $k\cl(\pr(\wt\ch_2)) \cap h\cl(\pr(\wt\ch')) = S$. In the latter case, $\gamma'(t_{-1})$ points out of $h\pr(\wt\ch')^\circ$, which is a contradiction. Hence $h^{-1}k\cl(\pr(\wt\ch_2)) = \cl(\pr(\wt\ch'))$, which shows that $(\wt\ch_2,h^{-1}k) = p(\wt\ch',h^{-1}S)$. \end{proof} \begin{cor}\label{lastinter} Let $\wh\gamma$ be a geodesic on $Y$ and suppose that $\wh\gamma$ does not intersect $\wh\CS$ infinitely often in future.
If $\wh\gamma$ intersects $\wh\CS$ at all, then there exists $t\in\R$ such that $\wh\gamma'(t)\in\wh\CS$ and $\wh\gamma( (t,\infty) )\cap \wh\BS =\emptyset$. Analogously, suppose that $\wh\eta$ is a geodesic on $Y$ which does not intersect $\wh\CS$ infinitely often in past. If $\wh\eta$ intersects $\wh\CS$ at all, then there exists $t\in\R$ such that $\wh\eta'(t)\in\wh\CS$ and $\wh\eta( (-\infty, t) ) \cap \wh\BS = \emptyset$. \end{cor} \begin{proof} Since $\wh\gamma$ does not intersect $\wh\CS$ infinitely often in future, we find $s\in\R$ such that $\wh\gamma'( (s,\infty) )\cap \wh\CS = \emptyset$. Suppose that $\wh\gamma$ intersects $\wh\CS$. Remark~\ref{charinter} shows that then $\wh\gamma'( (s,\infty) )\cap \wh\CS = \emptyset$ is equivalent to $\wh\gamma( (s,\infty) )\cap\wh\BS = \emptyset$. Pick $r\in (s,\infty)$ and let $\gamma$ be any representative of $\wh\gamma$ on $H$. Then $\gamma(r)\notin\BS$. Hence there is a pair $(B,g)\in\fch\times\Gamma$ such that $\gamma(r)\in gB^\circ$. Since $g\partial B \subseteq \BS$ by the definition of $\BS$, we have $\gamma( (s,\infty) ) \subseteq gB^\circ$. Since $\wh\gamma$ intersects $\wh\CS$, $\gamma(\R)$ intersects $g\partial B$ transversely. Because $gB$ is convex, this intersection is unique, say $\{ \gamma(t) \} = \gamma(\R) \cap g\partial B$. Then $\gamma( (t,\infty) )\subseteq gB^\circ$. Hence $\gamma'(t)\in \CS$. Thus $\wh\gamma'(t)\in \wh\CS$ and $\wh\gamma( (t,\infty) ) \cap \wh\BS = \emptyset$. The proof of the claims on $\wh\eta$ is analogous. \end{proof} \begin{example}\label{intersectionHecke} For the Hecke triangle group $G_5$ with $\fpch = \{\pch\}$, $\choices = \{ (\pch, U_5)\}$ (see Example~\ref{HeckecellSH}) and $\shmap\equiv \id$, Figure~\ref{nextlastHecke} shows the translates of $\CS'\sceq \CS'(\wt\ch)$ which are necessary to determine the location of next and previous points of intersection. 
\begin{figure}[h] \begin{center} \includegraphics*{Hecke5.15} \end{center} \caption{The shaded parts are translates of $\CS'$ (in the unit tangent bundle) as indicated.}\label{nextlastHecke} \end{figure} \end{example} \begin{example}\label{intersectionGamma05} Recall the setting of Example~\ref{choicesGamma05}. We consider the two shift maps $\shmap_1 \equiv \id$ and \[ \shmap_2\big(\wt\ch_1\big) \sceq \mat{1}{-1}{0}{1}\quad\text{and}\quad \shmap_2\big(\wt\ch_j\big) \sceq \id \quad\text{for $j=2,\ldots, 6$.} \] For simplicity set $\wt\ch_{-1}\sceq \shmap_2\big(\wt\ch_1\big) \wt\ch_1$ and $\CS'_{-1}\sceq \shmap_2\big(\wt\ch_1\big)\CS'_1$. Further we set \begin{align*} g_1 & \sceq \mat{1}{0}{5}{1}, & g_2 & \sceq \mat{2}{-1}{5}{-2}, & g_3 & \sceq \mat{3}{-2}{5}{-3}, & g_4 & \sceq \mat{4}{-1}{5}{-1}, \\ g_5 & \sceq \mat{4}{-5}{5}{-6}, & g_6 & \sceq \mat{1}{1}{0}{1}, & g_7 & \sceq \mat{-1}{0}{5}{-1}. \end{align*} Figure~\ref{forward1} shows the translates of the sets $\CS'_j$ which are necessary to determine the location of the next point of intersection if the shift map is $\shmap_1$, and Figure~\ref{forward2} those if $\shmap_2$ is the chosen shift map. \begin{figure}[h] \begin{center} \includegraphics*{Gamma05.10} \end{center} \caption{The translates of $\CS'$ relevant for the determination of the location of the next point of intersection for the shift map $\shmap_1$.}\label{forward1} \end{figure} \begin{figure}[h] \begin{center} \includegraphics*{Gamma05.5} \end{center} \caption{The translates of $\CS'$ relevant for the determination of the location of the next point of intersection for the shift map $\shmap_2$.}\label{forward2} \end{figure} \end{example} Recall the set $\bd$ from Section~\ref{sec_base}. \begin{prop}\label{CS3} Let $\widehat\gamma$ be a geodesic on $Y$. \begin{enumerate}[{\rm (i)}] \item\label{CS3i} $\widehat\gamma$ intersects $\widehat\CS$ infinitely often in future if and only if $\widehat\gamma(\infty) \notin \pi(\bd)$.
\item\label{CS3ii} $\widehat\gamma$ intersects $\widehat\CS$ infinitely often in past if and only if $\widehat\gamma(-\infty) \notin \pi(\bd)$. \end{enumerate} \end{prop} \begin{proof} We will only show \eqref{CS3i}. The proof of \eqref{CS3ii} is analogous. Suppose first that $\wh\gamma$ does not intersect $\wh\CS$ infinitely often in future. If $\wh\gamma$ does not intersect $\wh\CS$ at all, then Proposition~\ref{CS1} states that $\wh\gamma\in\NC$. Recall from Proposition~\ref{choiceinvariant} that $\NC=\NC(\fch)$. Hence there is $\ch\in\fch$ and a representative $\gamma$ of $\wh\gamma$ on $H$ such that $\gamma(\pm\infty)\in\bd(\ch)$. Thus $\wh\gamma(\infty)\in \pi(\bd(\ch)) \subseteq \pi(\bd)$. Suppose now that $\wh\gamma$ intersects $\wh\CS$. Corollary~\ref{lastinter} shows that there is $t\in\R$ such that $\wh\gamma'(t)\in\wh\CS$ and $\wh\gamma( (t,\infty) )\cap\wh\BS =\emptyset$. Let $\gamma$ be the representative of $\wh\gamma$ on $H$ such that $\gamma'(t)\in \CS'(\wt\fch_{\choices,\shmap})$. Let $\wt\ch\in \wt\fch_{\choices,\shmap}$ be the unique shifted cell in $SH$ such that $\gamma'(t)\in \CS'(\wt\ch)$. From $\wh\gamma( (t,\infty) )\cap \wh\BS = \emptyset$ it follows that $\gamma( (t,\infty) )\cap \BS = \emptyset$. Since $\partial \pr(\wt\ch) \subseteq \BS$ by Lemma~\ref{propsSH}, $\gamma( (t,\infty) )\subseteq \pr(\wt\ch)^\circ$. Hence $\gamma(\infty) \in \bhg\pr(\wt\ch)$. Let $\wt\ch'\in\wt\fch_\choices$ such that $\shmap(\wt\ch')\wt\ch'=\wt\ch$. Corollary~\ref{iscellH} shows that $\ch'\sceq \cl(\pr(\wt\ch'))\in\fch$. Hence \[ \bhg\pr(\wt\ch) = \bhg\cl(\pr(\wt\ch)) = \shmap(\wt\ch') \bhg\ch' = \shmap(\wt\ch') \bd(\ch') \subseteq \bd(\fch). \] Recall from Proposition~\ref{choiceinvariant} that $\bd=\bd(\fch)$. Therefore $\gamma(\infty)\in \bd$ and $\wh\gamma(\infty) \in \pi(\bd)$. Suppose now that $\wh\gamma(\infty) \in \pi(\bd)$. We will show that $\wh\gamma$ does not intersect $\wh\CS$ infinitely often in future.
Suppose first that $\wh\gamma(\infty) = \pi(\infty)$. Choose a representative $\gamma$ of $\wh\gamma$ on $H$ such that $\gamma(\infty) = \infty$. Lemma~\ref{KM} shows that $\gamma(\R)\cap\mc K \not=\emptyset$. Pick $z\in\gamma(\R)\cap\mc K$, say $\gamma(t) = z$. By Corollary~\ref{props_preH} we find a (not necessarily unique) pair $(\pch,m)\in\fpch\times\Z$ such that $t_\lambda^m z\in \pch$. The geodesic $\eta\sceq t_\lambda^m\gamma$ is a representative of $\wh\gamma$ on $H$ with $\eta(\infty) = \infty \in \bhg\pch$ and $\eta(t)\in\pch$. Since $\pch$ is convex, the geodesic segment $\eta( [t,\infty) )$ is contained in $\pch$ and therefore in $\ch(\pch)$ with $\eta(\infty)\in \bhg\ch(\pch)$. Because $\ch(\pch)$ is convex, Lemma~\ref{convex}\eqref{convexiii} states that either $\eta([t,\infty) )\subseteq \ch(\pch)^\circ$ or $\eta([t,\infty))\subseteq\partial\ch(\pch)$. Since $\partial\ch(\pch)$ consists of complete geodesic segments, Lemma~\ref{convex} implies that either $\eta(\R)\subseteq \ch(\pch)^\circ$ or $\eta(\R)\subseteq\partial\ch(\pch)$ or $\eta(\R)$ intersects $\partial\ch(\pch)$ in a unique point which is not an endpoint of some side. In the first two cases, $\eta(-\infty)\in \bhg\ch(\pch)$ and therefore $\wh\gamma=\wh\eta\in \NC(\ch(\pch))$. Proposition~\ref{CS1} shows that $\wh\gamma$ does not intersect $\wh\CS$. In the latter case, there is a unique side $S$ of $\ch(\pch)$ intersected by $\eta(\R)$ and this intersection is transversal. Suppose that $\{\eta(s)\} = S\cap \eta(\R)$ and let $v\sceq \eta'(s)$. Since $\eta( (s,\infty) ) \subseteq \ch(\pch)^\circ$, the unit tangent vector $v$ points into $\ch(\pch)^\circ$. Note that $v\in \CS$. By Proposition~\ref{adjacentSH}, there exists a (unique) pair $(\wt\ch, g)\in\wt\fch_{\choices,\shmap}\times\Gamma$ such that $v\in g\CS'(\wt\ch)$. Moreover, $g\cl(\pr(\wt\ch)) = \ch(\pch)$.
Then $\alpha\sceq g^{-1}\eta$ is a representative of $\wh\gamma$ on $H$ such that $\alpha'(s) = g^{-1}v \in \CS'(\wt\ch)$ and $\alpha(\infty) \in \bhg\pr(\wt\ch)$. Proposition~\ref{CS2}\eqref{CS2i} shows that there is no next point of intersection of $\alpha$ and $\CS$. Hence $\wh\gamma$ does not intersect $\wh\CS$ infinitely often in future. Suppose now that $\wh\gamma(\infty)\neq\pi(\infty)$. We find a representative $\gamma$ of $\wh\gamma$ on $H$ and a cell $\ch\in\fch$ in $H$ such that $\gamma(\infty)\in\bhg\ch \cap \R$. Assume for contradiction that $\gamma$ intersects $\CS$ infinitely often in future. Let $(t_n)_{n\in\N}$ be an increasing sequence in $\R$ such that $\gamma'(t_n)\in \CS$ for each $n\in \N$ and $\lim_{n\to\infty} t_n = \infty$. For $n\in\N$ let $S_n$ be the connected component of $\BS$ such that $\gamma(t_n)\in S_n$. Note that $S_n$ is a complete geodesic segment. We will show that there exists $n_0\in\N$ such that both endpoints of $S_{n_0}$ are in $\R$. Assume for contradiction that each $S_n$ is vertical, hence $S_n=[a_n,\infty]$ with $a_n\in\R$. Then either $a_1<a_2<\ldots$ or $a_1>a_2>\ldots$. Theorem~\ref{precellsH} shows that $\fpch$ is finite. Therefore $\fch$ is finite as well by Corollary~\ref{ABbij}. Recall that each $S_n$ is a vertical side of some $\Gamma_\infty$-translate of some element in $\fch$. Hence there is $r>0$ such that $|a_{n+1}-a_n|\geq r$ for each $n\in\N$. W.l.o.g.\@ suppose that $a_1<a_2<\ldots$. Then $\lim_{n\to\infty} a_n =\infty$. For each $n\in\N$, $\gamma(\infty)$ is contained in the interval $(a_n,\infty)$. Hence $\gamma(\infty)\in \bigcap_{n\in\N} (a_n,\infty) = \emptyset$. This is a contradiction. Therefore we find $k\in\N$ such that $S_k=[a_k,b_k]$ with $a_k,b_k\in\R$. W.l.o.g.\@ $a_k<b_k$. Let $(\wt\ch, g)\in\wt\fch_{\choices,\shmap}\times\Gamma$ such that $\gamma'(t_k)\in g\CS'(\wt\ch)$.
Proposition~\ref{adjacentSH} states that $gb(\wt\ch) = S_k$ and $\gamma(\infty) \in (a_k,b_k)_+$ or $\gamma(\infty)\in (a_k,b_k)_-$. In each case $a_k<\gamma(\infty)<b_k$. Lemma~\ref{convex}\eqref{convexiii} shows that the complete geodesic segment $S\sceq [\gamma(\infty),\infty]$ is contained in $\ch$. It divides $H$ into the two open half-spaces \[ H_1 \sceq \{ z\in H \mid \Rea z < \gamma(\infty) \}\quad\text{and}\quad H_2 \sceq \{ z\in H \mid \Rea z > \gamma(\infty) \} \] such that $H$ is the disjoint union $H_1\cup S\cup H_2$. Neither $a_k$ nor $b_k$ is an endpoint of $S$, but $(a_k,b_k) \in \bhg H_1\times \bhg H_2$ or $(a_k,b_k)\in \bhg H_2\times \bhg H_1$. In each case, $S_k$ intersects $S$ transversely. Then $S_k$ intersects $\ch^\circ$. Since $S_k$ is the side of some $\Gamma$-translate of some cell in $H$, this is a contradiction to Corollary~\ref{cellsHtess}. This shows that $\gamma$ does not intersect $\CS$ infinitely often in future and hence $\wh\gamma$ does not intersect $\wh\CS$ infinitely often in future. This completes the proof of \eqref{CS3i}. \end{proof} Recall the set $\NIC$ from Remark~\ref{outlook}. \begin{thm}\label{geomcross} Let $\mu$ be a measure on the space of geodesics on $Y$. Then $\wh\CS$ is a cross section \wrt $\mu$ for the geodesic flow on $Y$ if and only if $\mu(\NIC) = 0$. \end{thm} \begin{proof} Proposition~\ref{gcs1} shows that $\wh\CS$ satisfies (\apref{C}{C2}{}). Let $\wh\gamma$ be a geodesic on $Y$. Then Proposition~\ref{CS3} implies that $\wh\gamma$ intersects $\wh\CS$ infinitely often in past and future if and only if $\wh\gamma\notin\NIC$. This completes the proof. \end{proof} Let $\mc E$ denote the set of unit tangent vectors to the geodesics in $\NIC$ and set $\wh\CS_\st \sceq \wh\CS \mminus \mc E$. \label{def_mcE} \begin{cor} Let $\mu$ be a measure on the space of geodesics on $Y$ such that $\mu(\NIC) = 0$. Then $\wh\CS_\st$ is the maximal strong cross section \wrt $\mu$ contained in $\wh\CS$. \end{cor}
TITLE: Measure preserving transformation that makes two partitions independent QUESTION [4 upvotes]: I am looking for a reference for the following result. I think it is pretty well known but I haven't found it written down anywhere. Let $(X, \mathcal{B}, \mu)$ be a standard nonatomic measure space and let $\mathcal{P}, \mathcal{P'}$ be two finite measurable partitions of $X$. Then there is a map $\varphi: X \to X$ which preserves the measure $\mu$ and such that the partitions $\mathcal{P}$ and $\varphi^{-1}\mathcal{P'}$ are independent with respect to $\mu$. (Two partitions $\mathcal{P}$ and $\mathcal{Q}$ are independent with respect to $\mu$ if for any two cells $A \in \mathcal{P}$, $B \in \mathcal{Q}$, $\mu(A \cap B) = \mu(A)\mu(B)$.) REPLY [0 votes]: Atomic counterexample: I think it is false. Take $X= \{a,b\}$, a space with just two points, $\mu(\{a\})=1/2$, and $\mathcal{P}=\mathcal{P'}=\{\{a\},\{b\} \}$ the full partition. There are two maps preserving the measure: the identity $id$ and $\phi$, which permutes the two elements. For these two, $id^{-1} \mathcal{P'}=\phi^{-1} \mathcal{P'}=\mathcal{P'}=\mathcal{P}$. You can check that $\mathcal{P}$ is not independent of itself, since $$ \mu(\{a\} \cap \{a\})=1/2 \ne \mu(\{a\})^2 =1/4. $$ Non-atomic counterexample: Let's recall that a non-atomic measure satisfies the following: for every measurable $A$ with $\mu(A)>0$ there is a measurable $B \subset A$ with $0< \mu(B) < \mu(A)$. I think this property is still not strong enough for this result to be true. The following property, let's call it $\star$, is stronger: for every measurable $A$ with $\mu(A)>0$ and every $x \in [0,\mu(A)]$, there is a measurable $B \subset A$ with $\mu(B)=x$. Let's check that there exist non-atomic measures which do not have property $\star$. Take $X=\{0,1\}^{\mathbb{N}}$ and take a transcendental number $x \in (0,1)$. Take the measure which, for every $i \in \mathbb{N}$, gives mass $x$ to the set of sequences having $0$ at position $i$.
Then, every measurable set being built from intersections and complements of such sets, their measures are polynomials in $x$ and cannot be $1/2$. Then we can create a counterexample: take $Y=X_1 \cup X_2$, two copies of the previous space, and $\mathcal{P}=\mathcal{P'}=\{ X_1 , X_2\}$. There should be a measurable set of measure $1/4$ inside $X_1$, which is not the case. I hope I didn't make a mistake here. I think property $\star$ should be strong enough for your property to be true, but I haven't demonstrated it yet.
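The two-point counterexample can be checked mechanically. The sketch below (plain Python with exact rational arithmetic; the naming is ours, not from the post) enumerates both measure-preserving bijections of the two-point space and confirms that independence fails for each:

```python
from fractions import Fraction

# Two-point space X = {a, b} with mu({a}) = mu({b}) = 1/2.
mu = {frozenset({'a'}): Fraction(1, 2),
      frozenset({'b'}): Fraction(1, 2)}

def measure(s):
    # Measure of an arbitrary subset, by additivity over points.
    return sum(mu[frozenset({p})] for p in s)

# The only measure-preserving bijections: identity and the swap.
identity = {'a': 'a', 'b': 'b'}
swap = {'a': 'b', 'b': 'a'}

P = [frozenset({'a'}), frozenset({'b'})]  # partition P = P'

def preimage(phi, cell):
    return frozenset(x for x in 'ab' if phi[x] in cell)

def independent(phi):
    # Check mu(A ∩ phi^{-1} B) == mu(A) * mu(B) for all cells A, B.
    return all(measure(A & preimage(phi, B)) == measure(A) * measure(B)
               for A in P for B in P)

# Independence fails for both maps: mu({a} ∩ {a}) = 1/2 != 1/4.
assert not independent(identity) and not independent(swap)
```

Exact `Fraction` arithmetic avoids any floating-point ambiguity in the independence test.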
TITLE: Is the spin-rotation symmetry of Kitaev model $D_2$ or $Q_8$? QUESTION [8 upvotes]: It is known that the Kitaev Hamiltonian and its spin-liquid ground state both break the $SU(2)$ spin-rotation symmetry. So what's the spin-rotation-symmetry group for the Kitaev model? It's obvious that the Kitaev Hamiltonian is invariant under $\pi$ rotation about the three spin axes, and in some recent papers, the authors give the "group" (see the Comments at the end) $G=\left \{1,e^{i\pi S_x}, e^{i\pi S_y},e^{i\pi S_z} \right \}$, where $(e^{i\pi S_x}, e^{i\pi S_y},e^{i\pi S_z})=(i\sigma_x,i\sigma_y,i\sigma_z )$, with $\mathbf{S}=\frac{1}{2}\mathbf{\sigma}$ and $\mathbf{\sigma}$ being the Pauli matrices. But how about the quaternion group $Q_8=\left \{1,-1,e^{i\pi S_x}, e^{-i\pi S_x},e^{i\pi S_y},e^{-i\pi S_y},e^{i\pi S_z}, e^{-i\pi S_z}\right \}$, with $-1$ representing the $2\pi$ spin-rotation operator? On the other hand, consider the dihedral group $D_2=\left \{ \begin{pmatrix}1 & 0 &0 \\ 0& 1 & 0\\ 0&0 &1 \end{pmatrix},\begin{pmatrix}1 & 0 &0 \\ 0& -1 & 0\\ 0&0 &-1 \end{pmatrix},\begin{pmatrix}-1 & 0 &0 \\ 0& 1 & 0\\ 0&0 &-1 \end{pmatrix},\begin{pmatrix}-1 & 0 &0 \\ 0& -1 & 0\\ 0&0 &1 \end{pmatrix} \right \}$; these $SO(3)$ matrices can also implement the $\pi$ spin rotations. So, which one do you choose: $G$, $Q_8$, or $D_2$? Notice that $Q_8$ is a subgroup of $SU(2)$, while $D_2$ is a subgroup of $SO(3)$. Furthermore, $D_2\cong Q_8/Z_2$, just like $SO(3)\cong SU(2)/Z_2$, where $Z_2=\left \{ \begin{pmatrix}1 & 0 \\ 0 &1\end{pmatrix} ,\begin{pmatrix}-1 & 0 \\ 0 &-1 \end{pmatrix} \right \}$. Comments: The $G$ defined above is not even a group, since, e.g., $(e^{i\pi S_z})^2=-1\notin G$. Remarks: Notice here that $D_2$ cannot be viewed as a subgroup of $Q_8$, just like $SO(3)$ cannot be viewed as a subgroup of $SU(2)$. Supplementary: As an example, consider a system of two spin-1/2 particles.
We want to gain some insight into what kinds of wavefunctions preserve the $Q_8$ spin-rotation symmetry in this simplest model. For convenience, let $R_\alpha =e^{\pm i\pi S_\alpha}=-4S_1^\alpha S_2^\alpha$ represent the $\pi$ spin-rotation operators around the spin axes $\alpha=x,y,z$, where $S_\alpha=S_1^\alpha+ S_2^\alpha$. Therefore, by saying a wavefunction $\psi$ has $Q_8$ spin-rotation symmetry, we mean $R_\alpha\psi=\lambda_\alpha \psi$, with $\left |\lambda_\alpha \right |^2=1$. After a simple calculation, we find that a $Q_8$ spin-rotation symmetric wavefunction $\psi$ could only take one of the following 4 possible forms: $(1) \left | \uparrow \downarrow \right \rangle-\left | \downarrow \uparrow \right \rangle$, with $(\lambda_x,\lambda_y,\lambda_z)=(1,1,1)$ (singlet state with full $SU(2)$ spin-rotation symmetry), which is annihilated by $S_x,S_y,$ and $S_z$, $(2) \left | \uparrow \downarrow \right \rangle+\left | \downarrow \uparrow \right \rangle$, with $(\lambda_x,\lambda_y,\lambda_z)=(-1,-1,1)$, which is annihilated by $S_z$, $(3) \left | \uparrow \uparrow \right \rangle-\left | \downarrow \downarrow \right \rangle$, with $(\lambda_x,\lambda_y,\lambda_z)=(1,-1,-1)$, which is annihilated by $S_x$, $(4) \left | \uparrow \uparrow \right \rangle+\left | \downarrow \downarrow \right \rangle$, with $(\lambda_x,\lambda_y,\lambda_z)=(-1,1,-1)$, which is annihilated by $S_y$. Note that any superposition of the above states would no longer be an eigenfunction of $R_\alpha$ and hence would break the $Q_8$ spin-rotation symmetry. REPLY [6 votes]: The set $G$ gives the representation of the identity and generators of the abstract group of quaternions as elements in $SL(2,\mathbb C)$ which are also in $SU(2)$. Taking the closure of this set under multiplication yields the representation $Q_8$ of the quaternions presented in the question.
From the description of how the symmetry group arises, consider the composition of two $\pi$ rotations about the $\hat x$, $\hat y$, or $\hat z$ axis. This operation is not the identity operation on spins (that requires a $4\pi$ rotation). However, all elements of $D_2$ given above are of order 2. This indicates that the symmetry group of the system should be isomorphic to the quaternion group, and $Q_8$ is the appropriate representation acting on spin states. The notation $D_2$ arising there is probably from the dicyclic group of order $4\times 2=8$, which is isomorphic to the quaternions.
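The closure argument is easy to verify with explicit $2\times 2$ matrices. The sketch below (plain Python complex arithmetic, no external libraries; an illustration of the answer's point, not taken from it) checks that $(e^{i\pi S_z})^2 = -1 \notin G$, so $G$ is not closed under composition, while the eight-element quaternion set is:

```python
# 2x2 complex matrices as tuples of rows; enough to check group closure.
I2 = ((1, 0), (0, 1))
SX = ((0, 1), (1, 0))      # Pauli sigma_x
SY = ((0, -1j), (1j, 0))   # Pauli sigma_y
SZ = ((1, 0), (0, -1))     # Pauli sigma_z

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def scale(c, A):
    return tuple(tuple(c * x for x in row) for row in A)

# pi rotations about the spin axes: e^{i pi S_alpha} = i sigma_alpha.
Rx, Ry, Rz = scale(1j, SX), scale(1j, SY), scale(1j, SZ)

G = [I2, Rx, Ry, Rz]
Q8 = [I2, scale(-1, I2), Rx, scale(-1, Rx), Ry, scale(-1, Ry),
      Rz, scale(-1, Rz)]

# (i sigma_z)^2 = -1, which is not in G: G is not closed under products.
assert mul(Rz, Rz) == scale(-1, I2)
assert mul(Rz, Rz) not in G

# Q8, by contrast, is closed under multiplication.
assert all(mul(A, B) in Q8 for A in Q8 for B in Q8)
```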
TITLE: How do I count given way of distribution of items QUESTION [0 upvotes]: I am struggling with finding a closed form, or even a non-closed form, of the following count: the number of ways of distributing $n$ distinguishable items to $r$ distinguishable groups, where

- the order of distribution does not matter,
- each group gets at least one item,
- an item can be distributed to multiple groups (i.e. repetition is allowed),
- no item should be left undistributed.

I am able to come up with the count that disregards the last point, i.e. that no item should be left undistributed: each group can get items in $\binom{n}{1}+\binom{n}{2}+...+\binom{n}{n}=2^n-1$ ways, and there are $r$ groups, so the count will be $(2^n-1)^r$. But how can I accommodate the last criterion: "no item should be left undistributed"? REPLY [0 votes]: You deal with that by inclusion-exclusion. Start with your answer of $(2^n-1)^r$ and subtract the cases where an item is not distributed. If a particular item is not distributed, there are $(2^{n-1}-1)^r$ ways to distribute the rest, so subtract those out for each of the $n$ items. Unfortunately, you have then subtracted twice the cases where two items are not distributed, so you need to add them back in once. Now you will have counted the ones where three are not distributed once in the initial count, subtracted them three times, and added them three times, so subtract them once. Keep going.
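Inclusion-exclusion over which items are left out gives the closed form $\sum_{k=0}^{n} (-1)^k \binom{n}{k} (2^{n-k}-1)^r$. The sketch below (plain Python; function names are ours) checks this formula against brute-force enumeration for small $n$ and $r$:

```python
from itertools import product
from math import comb

def by_inclusion_exclusion(n, r):
    # Count r-tuples of nonempty subsets of an n-set whose union is everything.
    return sum((-1) ** k * comb(n, k) * (2 ** (n - k) - 1) ** r
               for k in range(n + 1))

def by_brute_force(n, r):
    # Enumerate all r-tuples of nonempty subsets (as 0/1 incidence vectors)
    # and keep those covering every item.
    nonempty = [s for s in product([0, 1], repeat=n) if any(s)]
    count = 0
    for groups in product(nonempty, repeat=r):
        count += all(any(g[i] for g in groups) for i in range(n))
    return count

for n in range(1, 4):
    for r in range(1, 4):
        assert by_inclusion_exclusion(n, r) == by_brute_force(n, r)
```

For instance, $n = r = 2$ gives $3^2 - 2\cdot 1^2 + 0 = 7$, matching the enumeration.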
\section{A randomized construction} \label{sec:unweighted_logn_case} In this section we introduce our strategy and ideas for finding heavy cycles and show how this helps find a cycle decomposition with few cycles. We prove neither Theorem~\ref{thm:main_result} nor Theorem~\ref{thm:weighted_digraph_heavy_cycle_existence} here. Those theorems are proved in Section~\ref{sec:weighted_case}, where we refine our ideas and make them more technical. We believe that the core idea is nonetheless simple and adaptable to other problems. \begin{theorem} \label{thm:main_theorem_weak_version} Every Eulerian digraph on $n$ vertices can be decomposed into $O(n \log n)$ edge-disjoint cycles. \end{theorem} It is easy to see that every Eulerian digraph can be decomposed into edge-disjoint cycles by sequentially taking out cycles. The challenge is to choose these cycles cleverly so that we do not need too many. To prove Theorem~\ref{thm:main_theorem_weak_version}, we proceed as follows: first we add a weighting to the digraph such that every edge $e = (u, v)$ receives weight $\weight(e) = 1/\dout(u)$. We then show that, regardless of how we decompose the digraph, the sum of the weights of the cycles is bounded; if all cycles in our decomposition have sufficiently large weight, then there cannot be too many of them. The key ingredient is therefore to find heavy cycles. Proposition~\ref{prop:weight_cycle_existence} below shows that this is indeed possible. \begin{proposition} \label{prop:weight_cycle_existence} There exist positive constants $K_0$, $K_1$ such that every digraph $G$ of order $n$ with minimum degree $\delta(G) \geq K_0 \cdot \log n$ contains a cycle $C$ such that \begin{align*} \sum_{v \in C}{ \frac{1}{\dout(v)} } \geq K_1. \end{align*} \end{proposition} The proof of Theorem~\ref{thm:main_theorem_weak_version} follows directly from Proposition~\ref{prop:weight_cycle_existence} and Lemma~\ref{lem:small_decomp_existence}.
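Note that the weighting described above gives the whole edge set total weight $\sum_u \dout(u)\cdot 1/\dout(u) = n$, so a decomposition whose cycles each have weight at least $K_1$ contains at most $n/K_1$ cycles. A quick numerical check of this bookkeeping (plain Python; the random digraph model and naming are our own illustration, not from the paper):

```python
import random
from fractions import Fraction

random.seed(1)
n = 50
# A random digraph, patched to be sinkless (every vertex has an out-edge).
adj = {u: {v for v in range(n) if v != u and random.random() < 0.2}
       for u in range(n)}
for u in adj:
    if not adj[u]:
        adj[u].add((u + 1) % n)

# Edge weight w(u, v) = 1 / dout(u).  Summed over all edges this gives
# sum_u dout(u) * (1 / dout(u)) = n, independently of the edge set.
total = sum(Fraction(1, len(adj[u])) for u in adj for _ in adj[u])
assert total == n
```

Exact rationals make the identity hold with equality rather than up to floating-point error.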
In the rest of this section we prove Proposition~\ref{prop:weight_cycle_existence}. The central idea of the proof is to consider the following random walk. \begin{definition} \label{def:easy_case_random_walk} Given a sinkless digraph $G$, we produce a \emph{random path} $(x_t)_{t \geq 0}$ on $G$ as follows: \begin{itemize} \item The first vertex $x_0$ is chosen arbitrarily. \item At step $t \geq 0$, if $\drem_t(x_t) \geq \frac{1}{2} \dout(x_t)$, we choose $x_{t+1}$ \uar among the unvisited neighbours of $x_t$. \item At step $t \geq 0$, if $\drem_t(x_t) < \frac{1}{2} \dout(x_t)$, we stop the path. We name $T$ the time at which the path stops. \end{itemize} \end{definition} No vertex is visited twice, so it is justified to call it a path. In particular, this implies that $T < n$. To construct a cycle, we connect the last vertex $x_T$ of this path to its first neighbour $x_s$ in the path. In what follows, we show that, for suitable digraphs $G$, the cycle $C = (x_s, \ldots, x_T)$ produced in this manner satisfies the conclusion of Proposition~\ref{prop:weight_cycle_existence} w.h.p.\footnote{We say that a sequence of events $E_1, E_2, \dots$ holds \emph{with high probability} (or w.h.p.\ for short) if $\Pr\left[ E_n\right] \rightarrow 1$ as $n\rightarrow\infty$.} For this, we first prove that the visited out-neighbourhood of any vertex between two steps cannot be too large w.h.p. \begin{lemma} \label{lem:visited_neighbours_bound} For a sinkless digraph $G$, let $(x_0, \ldots, x_T)$ be a random path and let $\lambda>0$. With probability at least $1-1/n$, it holds for all $v \in G$ and all $s < T$ that \begin{align*} \left| \outneighb(v) \cap \{x_s, \dots, x_T\} \right| \leq 1+\frac{2e^\lambda-2}{\lambda} \cdot \dout(v) \cdot \sum_{t=s}^{T-1}{\frac{1}{\dout(x_t)}} + \frac{3}\lambda \cdot \log n. \end{align*} \end{lemma} \begin{proof} Before formally proving it, we give an intuition of why this is true.
Let $v$ be a fixed vertex in $G$, and consider the number of out-neighbours of $v$ that are contained in the path. If at time $t$ the random path has visited $x_0, \dots, x_{t}$ and does not stop yet, \ie $\drem_t(x_t) \geq \frac{1}{2} \dout(x_t)$, then we have\[\Pr[ x_{t+1}\in \outneighb( v)] = \frac{\left| \outneighb(v) \cap \outneighb(x_t) \setminus \{ x_0, \dots, x_{t} \} \right|}{\left| \outneighb(x_t) \setminus \{ x_0, \dots, x_t \} \right|} \leq 2\frac{\dout(v)}{\dout(x_t)}.\] Because we select a random path, we expect the number of visited vertices in the out-neighbourhood of $v$ to be concentrated around its expectation, that is, not too different from $\sum_t{ {\dout(v)}/{\dout(x_t)}}$. Thus, for a typical vertex $v$ we would expect the random path to enter its out-neighbourhood not much more than $2\dout(v)\sum_{t=s}^{T-1} {1}/{\dout(x_t)}$ times after time $s$. Formally, for a given $v \in G$ and for $1 \leq t \leq n$ we define the random variables \[X^{(v)}_t \eqdef \indicator{t \leq T \land x_t \in \outneighb(v)}\text{ and }p^{(v)}_t \eqdef \indicator{t \leq T} \cdot \Pr \left[ x_t \in \outneighb(v) \mid x_0, \dots, x_{t-1} \right].\] In what follows we wish to apply Lemma~\ref{lem:adversarial_probabilities} to these quantities. Hence, we need to check that the conditions of the lemma, in particular \eqref{eq:wizardcondition}, are satisfied. 
To do so, we observe that, for any $t$, we have \begin{align*} &\E \left[ X^{(v)}_t \mid X^{(v)}_1, \dots, X^{(v)}_{t-1}, p^{(v)}_1, \dots, p^{(v)}_t \right] \\ &\quad = \E \biggl[ \E \bigl[ X^{(v)}_t \bigm| X^{(v)}_1, \dots, X^{(v)}_{t-1}, p^{(v)}_1, \dots, p^{(v)}_t, x_0, \dots, x_{t-1} \bigr] \biggm| X^{(v)}_1, \dots, X^{(v)}_{t-1}, p^{(v)}_1, \dots, p^{(v)}_t \biggr] \\ &\quad = \E \biggl[ \E \bigl[ X^{(v)}_t \mid x_0, \dots, x_{t-1} \bigr] \biggm| X^{(v)}_1, \dots, X^{(v)}_{t-1}, p^{(v)}_1, \dots, p^{(v)}_t \biggr] \\ &\quad = \E \biggl[ p^{(v)}_t \biggm| X^{(v)}_1, \dots, X^{(v)}_{t-1}, p^{(v)}_1, \dots, p^{(v)}_t \biggr] \\ &\quad = p^{(v)}_t, \end{align*} as desired. To apply Lemma~\ref{lem:adversarial_probabilities}, we first note that, for any vertex $v$ and any $s \geq 0$, we have on the one hand that \begin{equation}\label{eq:visited_neighbours_bound_proof_2} \left| \outneighb(v) \cap \{x_s, \dots, x_T\} \right| \leq 1 + \left| \outneighb(v) \cap \{x_{s+1}, \dots, x_T\} \right| = 1+\sum_{t=s+1}^{n}{X^{(v)}_t} \end{equation} where we interpret $\{x_s, \dots, x_T\}$ as the empty set when $s > T$. On the other hand, as we noted above, $p^{(v)}_t \leq 2{\dout(v)}/{\dout(x_{t-1})}$ for any $1\leq t \leq T$ and, by definition, $p^{(v)}_t=0$ for $t>T$. It follows that, for any $s\geq 0$ we have \begin{equation}\label{eq:visited_neighbours_bound_proof_1} \sum_{t=s+1}^{n}{ p_t^{(v)} } \leq 2 \sum_{t=s}^{T-1}{ \frac{\dout(v)}{\dout(x_t)} }. \end{equation} By Lemma~\ref{lem:adversarial_probabilities}, we have for any vertex $v$ and any $s \geq 0$ that \begin{align*} \Pr \left[ \sum_{t=s+1}^n{ X^{(v)}_t } > \frac{e^\lambda-1}\lambda \sum_{t=s+1}^n{p^{(v)}_t} + \frac{3}{\lambda} \log n \right] \leq n^{-3}. 
\end{align*} By combining \eqref{eq:visited_neighbours_bound_proof_2} and \eqref{eq:visited_neighbours_bound_proof_1}, it follows that \begin{align*} \Pr \left[ \left| \outneighb(v) \cap \{x_{s}, \dots, x_T\} \right| > 1 + \frac{2e^\lambda-2}\lambda \dout(v) \sum_{t=s}^{T-1}{\frac{1}{\dout(x_t)}} + \frac{3}{\lambda} \log n \right] \leq n^{-3}. \end{align*} By a union bound over all $v$, $s$, we obtain the claim. \end{proof} Recall that Proposition~\ref{prop:weight_cycle_existence} states that, provided a digraph has large enough minimum degree, we can find a heavy enough cycle. We are now well equipped to prove this proposition. \begin{proof}[Proof of Proposition~\ref{prop:weight_cycle_existence}] Note that $\delta(G)>0$ ensures that $G$ is sinkless. In this proof we use Lemma~\ref{lem:visited_neighbours_bound} to show that, with positive probability, the weight of the created cycle is high enough. Let $G$ be a digraph of order $n$ with $\delta(G) \geq K_0 \cdot \log n$. Let $(x_0, \ldots, x_T)$ be a random path and let $x_s$ be the first neighbour of $x_T$ in the path, so that $(x_s, \ldots, x_T)$ is a cycle. By definition, $x_T$ has at least $\dout(x_T) / 2$ out-neighbours in the path, $x_s$ being the first one. Hence \begin{align} \frac{1}{2} \dout(x_T) \leq \left| \outneighb(x_T) \cap \{x_s, \dots, x_T\} \right| \label{eq:weight_cycle_existence_proof_2}. \end{align} Moreover, applying Lemma~\ref{lem:visited_neighbours_bound} with $\lambda=1$, $v=x_T$ and $s$ as above, we have, with positive probability, that \begin{equation}\label{eq:weight_cycle_existence_proof_3}\begin{split} \left| \outneighb(x_T) \cap \{x_s, \dots, x_T\} \right| &\leq 1 + (2e-2) \cdot \dout(x_T) \cdot \sum_{t=s}^{T-1}{ \frac{1}{\dout(x_t)}} + 3 \cdot \log n \\ &\leq 4 \cdot \dout(x_T) \cdot \sum_{t=s}^{T-1}{ \frac{1}{\dout(x_t)}} + 4 \log n.
\end{split}\end{equation} Combining \eqref{eq:weight_cycle_existence_proof_2} and \eqref{eq:weight_cycle_existence_proof_3}, we obtain \begin{align*} \frac{1}{2} \dout(x_T) \leq 4 \dout(x_T) \cdot \sum_{t=s}^{T-1}{ \frac{1}{\dout(x_t)} } + 4 \cdot \log n. \end{align*} Rearranging the terms gives \begin{align*} \sum_{t=s}^T{ \frac{1}{\dout(x_t)} } \geq \frac{1}{8} - \frac{\log n}{\dout(x_T)} \geq \frac{1}{8} - \frac{1}{K_0}, \end{align*} since $\dout(x_T) \geq \delta(G) \geq K_0 \log n$. The proposition follows by choosing $K_0>8$ and $K_1:=1/8-1/K_0.$ \end{proof} To conclude this section, we show that it is possible to use the random path in Definition~\ref{def:easy_case_random_walk} to prove a weaker version of Theorem~\ref{thm:weighted_digraph_heavy_cycle_existence} in the case of uniform out-weights. Extending this to a full proof of the theorem requires some additional ideas, which will be discussed in the next section. \begin{proposition} Let $G$ be a sinkless digraph on $n$ vertices. Consider a random path $(x_0, \dots, x_T)$ on $G$ and let $C$ be the corresponding cycle. Then, with high probability as $n\rightarrow\infty$ we have \begin{equation*}\label{eq:rwgengraph}\sum_{v\in C} \frac{1}{\dout(v)} \geq \frac{\log\log n}{8 \log n}.\end{equation*} \end{proposition} \begin{proof} Let \(s\) be the first index such that \(x_s \in \outneighb(x_T)\). Applying Lemma~\ref{lem:visited_neighbours_bound} with $\lambda=\log \log n$, it follows that, with high probability, \begin{equation} \label{eq:weight_cycle_existence_proof_4}\begin{split} \left| \outneighb(x_T) \cap \{x_s, \dots, x_T\} \right| &\leq 1 + \frac{2\log n-2}{\log \log n} \cdot \dout(x_T) \cdot \sum_{t=s}^{T-1}{ \frac{1}{\dout(x_t)}} + 3 \cdot \frac{\log n}{\log \log n} \\ &\leq \frac{2 \log n}{\log \log n} \cdot \dout(x_T) \cdot \sum_{t=s}^{T-1}{ \frac{1}{\dout(x_t)}} + 4 \frac{\log n}{\log \log n}.
\end{split} \end{equation} By Definition~\ref{def:easy_case_random_walk}, the random path ends when \(\drem_T(x_T) < \dout(x_T) / 2 \), which implies that \[ \dout(x_T) / 2 < \dout(x_T)- \drem_T(x_T) = \left| \outneighb(x_T) \cap \{x_s, \ldots, x_T\} \right|.\]By combining this together with \eqref{eq:weight_cycle_existence_proof_4} and rearranging the terms, it follows that \begin{equation*} \sum_{t=s}^{T-1} \frac{1}{\dout(x_t)} \geq \frac{1}{4}\frac{\log \log n}{\log n} - \frac{2}{\dout(x_T)}. \end{equation*} Recall that the cycle corresponding to the random path was described as \(C = (x_s, \ldots, x_T)\), obtained by taking the edge from the last vertex \(x_T\) to its first neighbour \(x_s\) in the path. We can conclude that \begin{equation*} \sum_{v\in C} \frac{1}{\dout(v)} = \frac{1}{\dout(x_T)} + \sum_{t=s}^{T-1} \frac{1}{\dout(x_t)} \geq \max\left\{\frac{1}{\dout(x_T)}, \frac{1}{4}\frac{\log \log n}{\log n} - \frac{1}{\dout(x_T)}\right\}, \end{equation*} and the proposition follows by minimizing the right-hand side over $\dout(x_T)$. \end{proof}
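The random path of Definition~\ref{def:easy_case_random_walk} is straightforward to simulate. The sketch below (plain Python with a fixed seed; the digraph model and naming are our own, not from the paper) runs the walk on a random sinkless digraph, closes the cycle at the last vertex's first out-neighbour on the path, and computes the cycle weight $\sum_{v \in C} 1/\dout(v)$:

```python
import random

def random_path_cycle(adj):
    """Run the random path on a sinkless digraph given as
    {v: set of out-neighbours}; return the resulting cycle (x_s, ..., x_T)."""
    x = random.choice(list(adj))
    path, seen = [x], {x}
    while True:
        unvisited = [w for w in adj[x] if w not in seen]
        # Stop once fewer than half of x's out-neighbours are unvisited.
        if len(unvisited) < len(adj[x]) / 2:
            break
        x = random.choice(unvisited)
        path.append(x)
        seen.add(x)
    last = path[-1]
    # Close the cycle at the first out-neighbour of x_T on the path; at
    # least one exists because the walk stopped.
    s = min(i for i, v in enumerate(path) if v in adj[last])
    return path[s:]

random.seed(0)
n, p = 60, 0.3
adj = {u: {v for v in range(n) if v != u and random.random() < p}
       for u in range(n)}
for u in adj:                  # patch any sinks so the digraph is sinkless
    if not adj[u]:
        adj[u].add((u + 1) % n)

cycle = random_path_cycle(adj)
weight = sum(1 / len(adj[v]) for v in cycle)
assert len(set(cycle)) == len(cycle) and cycle[0] in adj[cycle[-1]]
assert weight > 0
```

Repeating the experiment over many seeds gives an empirical picture of how the cycle weight compares with the $\Theta(\log\log n/\log n)$ bound above.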
TITLE: Inverse function of $2^{x(x-1)}$? QUESTION [3 upvotes]: I tried taking the log to the base $2$ of both sides and solving using the quadratic formula: $$y = 2^{x(x-1)}$$ Taking the log to the base $2$ of both sides: $$\log_2(y) = x(x-1)$$ $$x^2 - x -\log_2(y) = 0$$ Solving the above equation for $x$: $$x=\frac{1\pm\sqrt{1+4\log_2(y)}}2$$ However, the answer given to this question is: $$x = \frac{\log_2(y)}{\log_2(y) - 1}$$ I would appreciate it if someone could clear this up for me; it has been bugging me for a while. By the way, before posting this question here I tried to find whether someone had already asked it, but no one has. REPLY [2 votes]: It might be a typo in the book; indeed, if we try to obtain the inverse function of $$y=2^{\frac x{x-1}}\,$$ we get the stated answer. By taking the logarithm to the base $2$ of both sides, we get: $$\log_2(y)=\dfrac x{x-1}$$ $$x\log_2(y)-\log_2(y)=x$$ and by solving the above equation for $x$: $$x\log_2(y)-x=\log_2(y)$$ $$x\big(\log_2(y)-1\big)=\log_2(y)\;\,.$$ Hence, the inverse function is: $$x=\frac{\log_2(y)}{\log_2(y)-1}\;\,.$$
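Both computations are easy to sanity-check numerically. The sketch below (plain Python; function names are ours) confirms that the quadratic-formula expression inverts $y = 2^{x(x-1)}$, while the book's stated answer inverts $y = 2^{x/(x-1)}$:

```python
import math

def f_as_printed(x):      # y = 2^(x(x-1)), the function as printed
    return 2 ** (x * (x - 1))

def inv_quadratic(y):     # x = (1 + sqrt(1 + 4 log2 y)) / 2, the "+" branch
    return (1 + math.sqrt(1 + 4 * math.log2(y))) / 2

def f_likely_intended(x): # y = 2^(x/(x-1)), the conjectured typo-free version
    return 2 ** (x / (x - 1))

def inv_book(y):          # x = log2(y) / (log2(y) - 1), the book's answer
    return math.log2(y) / (math.log2(y) - 1)

for x in [1.5, 2.0, 3.7]:
    assert math.isclose(inv_quadratic(f_as_printed(x)), x)
    assert math.isclose(inv_book(f_likely_intended(x)), x)
```

The `+` branch of the quadratic formula is the right inverse on $x \geq 1/2$; the `-` branch covers $x \leq 1/2$.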
\begin{document} \maketitle \begin{abstract} \textbf{Abstract}: We give the exact solution to the connection problem of the \textit{confluent} Heun equation, by making use of the explicit expression of irregular Virasoro conformal blocks as sums over partitions via the AGT correspondence. Under the appropriate dictionary, this provides the exact connection coefficients of the radial and angular parts of the Teukolsky equation for the Kerr black hole, which we use to extract the finite-frequency greybody factor, quasinormal modes and Love numbers. In the relevant approximation limits our results are in agreement with existing literature. The method we use can be extended to solve the linearized Einstein equation in other interesting gravitational backgrounds. \end{abstract} \newpage \tableofcontents \section{Introduction and outlook} The recent experimental detection of gravitational waves has renewed interest in theoretical studies of General Relativity and black hole physics. A particularly interesting aspect is the development of exact computational techniques to produce high-precision tests of General Relativity equations. From this perspective, the study of exact solutions of differential equations rather than their approximate or numerical solutions is of paramount importance both to deepen our comprehension of physical phenomena and to reveal possible physical fine-structure effects. On the other hand, recent developments in the study of two-dimensional conformal field theories, their relation with supersymmetric gauge theories, equivariant localisation and duality in quantum field theory have produced new tools which are very effective for studying long-standing classical problems in the theory of differential equations.
Indeed, it has been known for a long time that the study of two-dimensional Conformal Field Theories \cite{Belavin:1984vu} and of the representations of their infinite-dimensional symmetry algebra provides exact solutions to partial differential equations in terms of conformal blocks and the appropriate fusion coefficients. The prototypical example is the null-state equation at level 2 for primary operators of the Virasoro algebra, which reduces, in the large central charge limit, to a Schr\"odinger-like equation with regular singularities, corresponding to a potential term with at most quadratic poles. In this way one can engineer solutions of second-order linear differential equations of Fuchsian type by making use of the appropriate two-dimensional CFT\footnote{ Our analysis is here limited - for the sake of presenting the general method - to second order linear differential equations, but all we say can be generalized to higher order equations by considering higher level degenerate field insertions, as already considered in \cite{Belavin:1984vu}.}. While under the operator/state correspondence the vertex operators in the above construction correspond to primary (highest weight) states, one can insert more general irregular vertex operators corresponding to universal Whittaker states. The latter generate irregular singularities in the corresponding null-state equation and therefore allow engineering more general potentials with singularities of order higher than two.
Schematically, given a multi-vertex operator ${\cal O}_V(z_1,\ldots,z_N)$ satisfying the OPE \begin{equation}\label{OPE} T(z){\cal O}_V(z_1,\ldots,z_N) \sim V(z;z_i){\cal O}_V(z_1,\ldots,z_N) \quad \mathrm{as} \quad z\sim z_i \end{equation} one finds the corresponding level 2 null-state equation \begin{equation}\label{scro} [b^{-2}\partial_z^2+\sum_i V(z;z_i)]\Psi(z)=0 , \qquad \Psi(z)=\langle\Phi_{2,1}(z) {\cal O}_V(z_1,\ldots,z_N)\rangle \end{equation} satisfied by the correlation function of the multi-vertex and the level $2$ degenerate field $\Phi_{2,1}(z)$. If the multi-vertex contains primary operators only, the OPE \eqref{OPE} and the potential in \eqref{scro} contain at most quadratic poles, while the insertions of irregular vertices generate higher order singularities in $\sum_i V(z;z_i)$. Actually, $V(z;z_i)$ is a function of $z$ and of differential operators with respect to the $z_i$. The dependence on the latter is specified by the semiclassical limit $b\to 0$ of Liouville CFT\footnote{This is not to be confused with the semiclassical approximation of the Schr\"odinger equation.}, corresponding to large Virasoro central charge $c\to\infty$. In this way, one finds a Schr\"odinger-like equation \begin{equation} \epsilon_1^2 \frac{d^2 \Psi(z)}{dz^2} + V_{CFT}(z) \Psi(z) = 0 \, , \label{eq:schroedinger} \end{equation} where $\epsilon_1$ is a parameter which stays finite in the large central charge limit and plays the r\^ole of the Planck constant. The advantage of this approach is that the explicit solution of the connection problem on the $z$-plane for equation \eqref{eq:schroedinger} can be derived from the explicit computation of the full CFT$_2$ correlator \eqref{eq:full} and from its expansions in different intermediate channels. A crucial ingredient to accomplish this program is a deep control on the analytic structure of regular and irregular Virasoro conformal blocks.
This has been recently obtained after the seminal AGT paper \cite{Alday_2010}, where conformal blocks of the Virasoro algebra have been identified with concrete combinatorial formulae arising from equivariant instanton counting in the context of ${\cal N}=2$ four-dimensional supersymmetric gauge theories \cite{Nekrasov:2002qd,Nekrasov:2003rj}. The explicit solution of the instanton counting problem has been decoded in the CFT language in terms of overlaps of universal Whittaker states in \cite{Gaiotto:2009ma,Marshakov_2010,Bonelli_2012,Gaiotto:2012sf}. More precisely, the wave function $\Psi(z)$ corresponds to the insertion of a BPS surface observable in the gauge theory path integral \cite{Alday_2010a}. The specific case studied in this paper corresponds to a surface observable in the $SU(2)$ ${\cal N}=2$ gauge theory with $N_f=3$ fundamental hypermultiplets. The relevant gauge theory moduli space in these cases is the one of \textit{ramified} instantons \cite{Kanno:2011fw}, with vortices localised on the surface defect, the $z$-variable providing the fugacity for the vortex counting. In the simplest cases the latter is indeed captured by hypergeometric functions \cite{Bonelli:2011fq}. An important consequence of the AGT correspondence between CFT correlation functions and exact BPS partition functions in ${\cal N}=2$ four dimensional gauge theories has been the discovery of the so-called ``Kiev formula'' in the theory of Painlev\'e transcendents \cite{Gamayun_2013}, which established the latter as a further class of special functions with an explicit combinatorial expression in terms of equivariant volumes of instanton moduli spaces \cite{Nekrasov:2002qd,Bruzzo_2003}.
This correspondence between Painlev\'e and gauge theory has been extended to the full Painlev\'e confluence diagram in \cite{Bonelli_2017}, used in \cite{bonelli2021instantons} to produce recurrence relations for instanton counting for general gauge groups and studied in terms of blow-up equations in \cite{Grassi:2016nnt,Bershtein:2018zcz,Nekrasov:2020qcq}. These results are related via the AGT correspondence to the $c=1$ limit of Liouville conformal field theory. On the other hand, it is well-known that a direct relation exists between the linear system associated to the Painlev\'e VI equation and the Heun equation \cite{Slavyanov:2000:SF}. Further studies on this subject appeared recently in \cite{Lisovyy:2021bkm}. This perspective has been analyzed in the context of black hole physics in \cite{Carneiro_da_Cunha_2016,Carneiro_da_Cunha_2020,amado2021remarks}, where it was suggested that some physical properties of black holes, such as their greybody factor and quasinormal modes, can be studied in a particular regime in terms of Painlev\'e equations. A decisive step forward on the quasinormal mode problem has been taken in \cite{aminov2020black}, where a different approach, making use of the Seiberg-Witten quantum curve of an appropriate supersymmetric gauge theory, has been advocated to determine their spectrum; its predictions were also supported by comparison with numerical analysis of the gravitational equation (see also \cite{Hatsuda:2020sbn,Hatsuda:2020iql} for further developments). This viewpoint has been further analysed in \cite{chico2021}, where the context is widely generalized to D-branes and other types of gravitational backgrounds in various dimensions. From the CFT$_2$ viewpoint, the gauge theoretical approach corresponds to the large Virasoro central charge limit recalled above. It would be interesting to explore the relation between the $c=1$ and $c=\infty$ approaches (see \cite{Bershtein:2021uts} for recent interesting developments).
Let us remark that in our view the CFT$_2$ framework is the suitable one to provide a physical explanation of the above described relations among black hole physics and supersymmetric gauge theories. In this paper, for the sake of concreteness and with a specific application to the Kerr black hole problem in mind, we study equation \eqref{scro} for $N_f=3$ in the case of two regular and one irregular singularity of fourth order. In Sect.\ref{two} we review the relativistic massless wave equation in the Kerr black hole background, giving rise to the Teukolsky equation, whose solution can be obtained by separation of variables. In Sect.\ref{three} we recall how both the radial and angular parts reduce, under an appropriate dictionary, to \eqref{scro} with an irregular singularity of order four at infinity and two regular singularities, which is the \textit{confluent} Heun equation \cite{ronveaux1995heun}. We provide the explicit exact solution of the connection coefficients in Sect.\ref{four}. The efficiency of the instanton expansion in the exact solution against the numerical integration is demonstrated by a detailed quantitative analysis in Subsect.\ref{plots}. In Sect.\ref{five} we apply these results to Kerr black hole physics. We study the greybody factor of the Kerr black hole at finite frequency, for which we give an exact formula. This reduces to the well-known result of Maldacena and Strominger \cite{Maldacena_1997} in the zero frequency limit, and in the semiclassical regime it reproduces the results computed via standard WKB approximation in \cite{dumlu2020stokes}. By using the explicit solution of the connection problem, we also provide a proof of the exact quantization of Kerr black hole quasinormal modes as proposed in \cite{aminov2020black}. By solving the angular Teukolsky equation, we also prove the analogous dual quantization condition on the corresponding parameters of the spin-weighted spheroidal harmonics.
Finally, we discuss the use of the precise asymptotics of our solution to determine the tidal deformation profile in the far away region of the Kerr black hole and compare it to recent results on the associated Love numbers in the static \cite{Le_Tiec_2021} and quasi-static \cite{chia2020tidal,charalambous2021vanishing} regimes. \vspace{1cm} Let us discuss some selected open points and possible further developments. \begin{itemize} \item From the CFT$_2$ perspective, the equation \eqref{scro} arises in the semiclassical limit of Liouville field theory. An intriguing question to investigate is whether the quantum corrections in CFT$_2$ can have a physical interpretation in the black hole description. In principle, this could be related to quantum gravitational corrections or more generally to some deviations from General Relativity, which will affect the physical properties of the black hole's gravitational field. \item Although in a very different circle of ideas, a link of holographic type between CFT$_2$ and Kerr black hole physics has emerged in recent years, starting with \cite{Guica_2009}. It would be very interesting to find whether the mathematical structure behind the solution of the Kerr black hole radiation problem we present in this paper could have a clear interpretation in the context of the Kerr/CFT correspondence. \item A further possible application of the method presented in this paper is the study of the physics of the last stages of coalescence of compact objects described by the Zerilli function \cite{Zerilli:1971wd}, see \cite{Annulli:2021dkw} for recent developments. The corresponding potential displays a fifth order singularity which can be engineered with a higher irregular state, corresponding to Argyres-Douglas SCFT in gauge theory \cite{Argyres:1995xn}. Let us remark that the CFT$_2$ methods extend beyond the equivariant localisation results in gauge theory, making it possible to quantitatively study higher order singularities \cite{Bonelli_2012}.
\item Other black hole backgrounds can be analysed with methods similar to the ones used in this paper. An important example is given by Kerr black hole solutions which asymptote to the (Anti-)de Sitter metric at infinity. These correspond to the Heun equation, which has four regular singularities on the Riemann sphere, and can be engineered from five-point correlators in Liouville CFT with four primary operator insertions and one level 2 degenerate field. This will provide explicit formulae for the corresponding connection problem and wave functions, allowing us, for example, to give an exact expression for the greybody factor studied in \cite{Gregory:2021ozs}. \item Our method can be extended to other gravitational potentials studied to analyse possible deviations from GR with a modified quasinormal mode spectrum \cite{Ikeda:2021uvc} and Love numbers \cite{Brustein:2021bnw}. \item The results we present are given as a perturbative series in the instanton counting parameter $\Lambda$, which, as we show from comparison with the numerical solution in Subsect.\ref{plots}, actually converges very efficiently. From the gauge theory reader's viewpoint, let us note that understanding how to extend our approach to the connection problem on the $\Lambda$ plane \cite{Lisovyy:2018mnj} would improve our understanding of strong coupling effects in gauge theory. Moreover, it could prove useful for other applications in gravitational problems. \end{itemize} \section{Perturbations of Kerr black holes}\label{two} The Kerr metric describes the spacetime outside of a stationary, rotating black hole in asymptotically flat space.
In Boyer-Lindquist coordinates it reads: \begin{equation} \begin{aligned} ds^2= & -\left(\frac{\Delta - \text{a}^2 \sin^2 \theta}{\Sigma} \right) dt^{2} + \frac{\Sigma}{\Delta} dr^{2} + \Sigma d\theta^{2} + \left(\frac{(r^2+ \text{a}^2)^2 - \Delta \text{a}^2 \sin^2 \theta}{\Sigma} \right) \sin^{2}\theta \ d\phi^{2} \\ & - \frac{2\text{a} \sin^{2} \theta (r^2 + \text{a}^2 -\Delta)}{\Sigma} dt \, d\phi \,, \end{aligned} \end{equation} where \begin{equation} \Sigma = r^2 + \text{a}^2 \cos^2 \theta \, , \quad \Delta = r^2 - 2Mr + \text{a}^2 \,. \end{equation} The horizons are given by the roots of $\Delta$: \begin{equation} r_\pm = M \pm \sqrt{M^2 - \text{a}^2} \,. \end{equation} Two other relevant quantities are the Hawking temperature and the angular velocity at the horizon: \begin{equation} T_H = \frac{r_+-r_-}{8 \pi M r_+}\, , \quad \Omega = \frac{\text{a}}{2Mr_+} \,. \end{equation} Perturbations of the Kerr metric by fields of spin $s=0,1,2$ are described by the Teukolsky equation \cite{Teukolsky:1972my}. Teukolsky found that an Ansatz of the form \begin{equation} \Phi_s = e^{im\phi - i\omega t} S_{\lambda, s} (\theta, \text{a} \omega) R_s (r) \end{equation} permits a separation of variables of the partial differential equation. One gets\footnote{Dropping the ${}_s$ subscript to ease the notation} the following equations for the radial and the angular part (see for example eq.~(25) of \cite{Berti_2009}): \begin{equation} \begin{aligned} &\Delta \frac{d^2R}{dr^2}+(s+1)\frac{d\Delta}{dr}\frac{dR}{dr} + \left(\frac{K^2-2is(r-M)K}{\Delta} - \Lambda_{\lambda, s} + 4 i s \omega r\right)R = 0 \,, \\ &\partial_x (1 - x^2) \partial_x S_\lambda + \left[ (cx)^2 + \lambda + s - \frac{(m+sx)^2}{1-x^2} - 2 c s x \right] S_\lambda = 0 \,. \end{aligned} \end{equation} Here $x=\cos\theta$, $c = \text{a} \omega$ and \begin{equation} K=(r^2+\text{a}^2)\omega - \text{a}m, \quad \Lambda_{\lambda, s} = \lambda + \text{a}^2 \omega^2 - 2 \text{a} m \omega \,.
\end{equation} $\lambda$ has to be determined as the eigenvalue of the angular equation with suitable boundary conditions imposing regularity at $\theta=0,\pi$. In general no closed-form expression is known, but for small $\text{a}\omega$ it is given by $\lambda = \ell(\ell+1) - s(s+1) + \mathcal{O}(\text{a}\omega)$ (see Appendix \ref{Appendix:SWSH}). We give a way to calculate it to arbitrary order in $\text{a}\omega$ in subsection \ref{angular_quantization}. \newline For later purposes it is convenient to write both equations in the form of a Schr\"odinger equation. For the radial equation we define \begin{equation} \begin{aligned} & z = \frac{r-r_-}{r_+ - r_-}\, , \quad \psi(z) = \Delta(r)^{\frac{s+1}{2}} R(r) \,. \end{aligned} \end{equation} With this change of variables the inner and outer horizons are at $z=0$ and $z=1$, respectively, and $r\rightarrow \infty$ corresponds to $z \rightarrow \infty$. We obtain the differential equation \begin{equation} \frac{d^2 \psi(z)}{dz^2} + V_r(z)\psi(z) = 0 \end{equation} with potential \begin{equation}\label{radial_potential} V_r(z) = \frac{1}{z^2 (z-1)^2}\sum_{i=0}^4 \hat{A}^r_i z^i\,. \end{equation} The coefficients $\hat{A}^r_i$ depend on the parameters of the black hole and the frequency, spin and angular momentum of the perturbation. Their explicit expression is given in Appendix \ref{coefficients_potential}.\newline For the angular part instead we define \begin{equation} z = \frac{1+x}{2} \,, \quad y(z) = \sqrt{1-x^2} \frac{S_\lambda}{2} \,. \label{eq:angchangeofvar} \end{equation} After this change of variables, $\theta = 0$ corresponds to $z = 1$, and $\theta = \pi$ to $z = 0$. The equation now reads \begin{equation} \frac{d^2 y(z)}{dz^2} + V_{ang} (z) y(z) = 0 \,, \end{equation} with potential \begin{equation} V_{ang} (z) = \frac{1}{z^2 (z-1)^2} \sum_{i=0}^4 \hat{A}_i^\theta z^i\,. \end{equation} Again, we give the explicit expressions of the coefficients $\hat{A}_i^\theta$ in Appendix \ref{coefficients_potential}.
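As a quick numerical sanity check of ours (not part of the derivation), the horizon data and the coordinate map above can be evaluated for hypothetical sample values $M=1$, $\text{a}=0.6$:

```python
import math

# hypothetical sample values: M = 1, a = 0.6 (a <= M for a non-extremal hole)
M, a = 1.0, 0.6

r_p = M + math.sqrt(M**2 - a**2)
r_m = M - math.sqrt(M**2 - a**2)

# r_+ and r_- are the roots of Delta = r^2 - 2 M r + a^2
assert abs(r_p*r_m - a**2) < 1e-12 and abs(r_p + r_m - 2*M) < 1e-12

# Hawking temperature and horizon angular velocity
T_H = (r_p - r_m)/(8*math.pi*M*r_p)
Omega = a/(2*M*r_p)
assert T_H > 0 and Omega > 0

# z = (r - r_-)/(r_+ - r_-) maps the inner/outer horizons to z = 0 and z = 1
z = lambda r: (r - r_m)/(r_p - r_m)
assert z(r_m) == 0.0 and z(r_p) == 1.0
```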
When written as Schr\"odinger equations, it is evident that the radial and angular equations share the same singularity structure. They both have two regular singular points at $z = 0, 1$ and an irregular singular point of Poincar\'e rank one at $z=\infty$. Such a differential equation is well-known in the mathematics literature as the confluent Heun equation \cite{ronveaux1995heun}. \section{The confluent Heun equation and conformal field theory}\label{three} \subsection{The confluent Heun equation in standard form} The confluent Heun equation (CHE) is a linear differential equation of second order with regular singularities at $z=0$ and $z=1$, and an irregular singularity of rank 1 at $z=\infty$. In its standard form it is written as \begin{equation}\label{CHE_standard} \frac{d^2w}{dz^2}+\left(\frac{\gamma}{z}+\frac{\delta}{z-1}+\epsilon\right)\frac{dw}{dz}+\frac{\alpha z-q}{z(z-1)}w=0\,. \end{equation} By defining $w(z)=P(z)^{-1/2}\psi(z)$ with $P(z)=e^{\epsilon z} z^\gamma (z-1)^\delta$, we can bring the standard form of the CHE into the form of a Schr\"odinger equation: \begin{equation}\label{CHE_Schrodinger} \frac{d^2\psi(z)}{dz^2} + V_{Heun}(z) \psi(z) = 0 \end{equation} where the potential is \begin{equation} V_{Heun}(z) = \frac{1}{z^2 (z-1)^2}\sum_{i=0}^4 A^H_i z^i \end{equation} with coefficients $A^H_i$ given in terms of the parameters of the standard form of the CHE by \begin{equation}\label{HeunAs} \begin{aligned} & A^H_0 = \frac{\gamma(2-\gamma)}{4}\\ & A^H_1 = q+\frac{\gamma}{2}(\gamma+\delta-\epsilon-2)\\ & A^H_2 = -q-\alpha-\frac{\gamma^2}{4}+\frac{\delta}{2}-\frac{(\delta-\epsilon)^2}{4}+\frac{\gamma}{2}(1-\delta+2\epsilon) \\ & A^H_3 = \alpha-\frac{\epsilon}{2}(\gamma+\delta-\epsilon) \\ & A^H_4 = - \frac{\epsilon^2}{4}\,. \end{aligned} \end{equation} \subsection{The confluent Heun equation as a BPZ equation}\label{HeunasBPZ} In this section we work at the level of chiral conformal field theory/conformal blocks, which are fixed completely by the Virasoro
algebra. Throughout this paper we work with conformal momenta related to the conformal weight by $\Delta = \frac{Q^2}{4}-\alpha^2$. The representation theory of the Virasoro algebra contains degenerate Verma modules of weight $\Delta_{r,s} = \frac{Q^2}{4}-\alpha_{r,s}^2$ with $\alpha_{r,s}=-\frac{br}{2}-\frac{s}{2b}$, where $Q=b+\frac{1}{b}$ and $b$ is related to the central charge as $c=1+6Q^2$. At level 2, the degenerate field $\Phi_{2,1}$ has weight $\Delta_{2,1}=-\frac{1}{2}-\frac{3}{4}b^2$ and satisfies the null-state equation \begin{equation} (b^{-2}L_{-1}^2 + L_{-2})\cdot \Phi_{2,1}(z) = 0\,. \label{eq:degenerateannihilation} \end{equation} When this field is inserted in correlation functions, equation (\ref{eq:degenerateannihilation}) translates into a differential equation for the correlator called the BPZ equation \cite{Belavin:1984vu}. Consider then the following chiral correlator with a degenerate field insertion: \begin{equation}\label{correlator_unnormalized} \Psi (z) := \langle \Delta, \Lambda_0, m_0 | \Phi_{2,1}(z) V_2(1) | \Delta_1 \rangle\,. \end{equation} $\Phi_{2,1}$ is the degenerate field mentioned above, $V_2(1)$ is a primary operator of weight $\Delta_2=\frac{Q^2}{4}-\alpha_2^2$ inserted at $z=1$ and $|\Delta_1\rangle$ is a primary state of weight $\Delta_1=\frac{Q^2}{4}-\alpha_1^2$ corresponding via the state-operator correspondence to the insertion of $V_1(0)$. The state $\langle \Delta, \Lambda_0, m_0 |$, called an irregular state of rank 1, is a more exotic kind of state, defined in \cite{Marshakov_2009} as: \begin{equation}\label{irregular_state} \langle \Delta, \Lambda_0, m_0| = \sum_Y \sum_p \langle\Delta|L_{Y} m_0^{|Y|-2p} \Lambda_0^{|Y|} Q_{\Delta}^{-1}\big([2^p,1^{|Y|-2p}],Y\big)\,. \end{equation} The first sum runs over Young tableaux $Y$, $|Y|$ denotes the total number of boxes in the tableau and $Q$ is the Shapovalov form $Q_\Delta(Y,Y')=\langle\Delta|L_Y L_{-Y'}|\Delta\rangle$.
The notation $[2^p,1^{|Y|-2p}]$ refers to a Young tableau with $p$ columns of two boxes and $|Y|-2p$ columns of single boxes. $p$ then runs from $0$ to $|Y|/2$. All in all this implies the following relations, derived in \cite{Marshakov_2009}, which are all that we will need: \begin{equation} \begin{aligned} & \langle \Delta, \Lambda_0, m_0|L_0 = \bigg( \Delta + \Lambda_0 \frac{\partial}{\partial \Lambda_0} \bigg)\langle \Delta, \Lambda_0, m_0| \\ & \langle \Delta, \Lambda_0, m_0|L_{-1} = m_0 \Lambda_0 \langle \Delta, \Lambda_0, m_0| \\ & \langle \Delta, \Lambda_0, m_0|L_{-2} = \Lambda_0^2 \langle \Delta, \Lambda_0, m_0|\\ & \langle \Delta, \Lambda_0, m_0|L_{-n} = 0 \quad \mathrm{for} \,\, n \geq 3\,, \end{aligned} \end{equation} so it is a kind of coherent state for the Virasoro algebra. The investigation of these kinds of states in CFT was motivated by the AGT conjecture \cite{Alday_2010}, according to which they are related to asymptotically free gauge theories \cite{Gaiotto:2009ma,Bonelli_2012,Gaiotto:2012sf}. This correlator satisfies the following BPZ equation (see Appendix \ref{CFT_calculations} for details): \begin{equation} \begin{aligned} 0 =& \langle \Delta, \Lambda_0, m_0 | \big(b^{-2} \partial_z^2 + L_{-2}\cdot \big) \Phi_{2,1}(z) V_2(1) | \Delta_1 \rangle = \\ = & \bigg(b^{-2} \partial_z^2 - \frac{1}{z}\partial_z - \frac{1}{z} \frac{1}{z-1} \big(z\partial_z - \Lambda_0 \partial_{\Lambda_0} + \Delta_{2,1} + \Delta_2 + \Delta_1 - \Delta \big) + \frac{\Delta_2}{(z-1)^2} + \frac{\Delta_1}{z^2} + \frac{ m_0 \Lambda_0}{z} + \Lambda_0^2\bigg) \Psi(z)\,. \end{aligned} \end{equation} We now take a double-scaling limit known as the Nekrasov-Shatashvili (NS) limit in the AGT dual gauge theory \cite{NEKRASOV_2010}, which corresponds to the semiclassical limit of large Virasoro central charge in the CFT.
This amounts to introducing a new parameter $\hbar$, and sending $\epsilon_2 = \hbar b \to 0$, while keeping $\epsilon_1 = \hbar/b$, $\hat{\Delta}=\hbar^2 \Delta$, $\hat{\Delta}_1=\hbar^2 \Delta_1$, $\hat{\Delta}_2=\hbar^2 \Delta_2$, $\Lambda = 2i\hbar \Lambda_0$ and $m_3 = \frac{i}{2}\hbar m_0$ fixed. Furthermore, arguments from CFT \cite{Zamolodchikov_1996} and the AGT conjecture tell us that in this limit the correlator exponentiates and the $z$-dependence appears only at subleading order: \begin{equation} \Psi (z) \propto \exp{ \frac{1}{\epsilon_1 \epsilon_2} \left( \mathcal{F}^{\mathrm{inst}}(\epsilon_1)+ \epsilon_2 \mathcal{W}(z;\epsilon_1) + \mathcal{O}(\epsilon_2^2) \right)}\,. \end{equation} Introducing the normalized wavefunction $\psi(z) = \lim_{\epsilon_2 \rightarrow 0} \Psi(z) /\langle \Delta, \Lambda_0, m_0 | V_2(1) | \Delta_1 \rangle$ and multiplying everything by $\hbar^2$, the BPZ equation in the NS limit becomes \begin{equation} \begin{aligned} & 0 = \bigg(\epsilon_1^2\partial_z^2 - \frac{1}{z} \frac{1}{z-1} \big(- \Lambda \partial_{\Lambda} \mathcal{F}^{\mathrm{inst}} + \hat{\Delta}_2 + \hat{\Delta}_1 -\hat{\Delta} \big) + \frac{\hat{\Delta}_2}{(z-1)^2} + \frac{\hat{\Delta}_1}{z^2} - \frac{m_3 \Lambda}{z} - \frac{\Lambda^2}{4}\bigg) \psi(z) \,. \end{aligned} \label{eq:schroedingerexp} \end{equation} All other terms vanish in the limit. It takes the form of a Schr\"odinger equation: \begin{equation} \epsilon_1^2 \frac{d^2 \psi(z)}{dz^2} + V_{CFT}(z) \psi(z) = 0 \end{equation} with potential \begin{equation} V_{CFT}(z) = \frac{1}{z^2 (z-1)^2}\sum_{i=0}^4 A_i z^i\,. \end{equation} Written in this form it is clear that the BPZ equation for this correlation function takes the form of the confluent Heun equation.
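As a consistency check of ours (not part of the original derivation), the coefficients $A_i$ in (\ref{eq:CFTA's}) can be recovered symbolically by expanding the potential appearing in \eqref{eq:schroedingerexp}; a minimal sketch, assuming \texttt{sympy} is available:

```python
import sympy as sp

z, E, a1, a2, m3, L, e1 = sp.symbols('z E a1 a2 m3 Lambda epsilon1')

# hatted weights in terms of momenta (epsilon_2 = 0, so hbar*Q = epsilon_1)
D1 = e1**2/4 - a1**2   # \hat{Delta}_1
D2 = e1**2/4 - a2**2   # \hat{Delta}_2
# with E = a^2 - Lambda*dF/dLambda and \hat{Delta} = epsilon_1^2/4 - a^2:
u = E - e1**2/4 + D1 + D2   # = -Lambda*dF + \hat{Delta}_2 + \hat{Delta}_1 - \hat{Delta}

# potential read off from the NS-limit BPZ equation
V = -u/(z*(z - 1)) + D2/(z - 1)**2 + D1/z**2 - m3*L/z - L**2/4

A = [e1**2/4 - a1**2,
     -e1**2/4 + E + a1**2 - a2**2 - m3*L,
     e1**2/4 - E + 2*m3*L - L**2/4,
     -m3*L + L**2/2,
     -L**2/4]

# z^2 (z-1)^2 V(z) must be the polynomial sum_i A_i z^i
diff = sp.cancel(V*z**2*(z - 1)**2) - sum(A[i]*z**i for i in range(5))
assert sp.expand(diff) == 0
```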
Using conformal momenta instead of dimensions we write $\hat{\Delta}_i=\frac{\epsilon_1^2}{4}-a_i^2$, where we have used $\hat{\Delta}_i=\hbar^2\Delta_i$, $\hbar Q=\epsilon_1+\epsilon_2=\epsilon_1$ and defined $a_i := \hbar \alpha_i$. Defining furthermore $E:=a^2-\Lambda \partial_{\Lambda} \mathcal{F}^{\mathrm{inst}}$, the coefficients of the potential are \begin{equation} \begin{aligned} & A_0 = \frac{\epsilon_1^2}{4}-a_1^2\\ & A_1 = -\frac{\epsilon_1^2}{4} + E + a_1^2 - a_2^2 -m_3 \Lambda \\ & A_2 = \frac{\epsilon_1^2}{4} -E + 2 m_3 \Lambda - \frac{\Lambda^2}{4} \\ & A_3 = -m_3 \Lambda +\frac{\Lambda^2}{2} \\ & A_4 = - \frac{\Lambda^2}{4}\,. \end{aligned} \label{eq:CFTA's} \end{equation} Comparing with the coefficients $A^H_i$ of the CHE in (\ref{HeunAs}) and setting $\epsilon_1=1$ to match the coefficient of the second derivative, we can identify the parameters of the standard form with the parameters of the CFT as: \begin{equation}\label{CHE_dictionary} \boxed{\begin{aligned} & \alpha = \theta'' \Lambda(1+\theta a_1+\theta' a_2-\theta'' m_3)\\ & \gamma = 1+2\theta a_1\\ & \delta = 1+2\theta' a_2\\ &\epsilon = \theta'' \Lambda\\ & q = E-\frac{1}{4} - (\theta a_1+\theta' a_2)^2 - (\theta a_1+\theta' a_2) + \theta'' \Lambda\left(\frac{1}{2}+\theta a_1-\theta'' m_3\right) \end{aligned}} \end{equation} for any choice of signs $\theta,\theta',\theta'' = \pm 1$. These $8=2^3$ dictionaries reflect the symmetries of the equation, which is invariant independently under $a_1\to -a_1$, $a_2\to-a_2$ and $(m_3,\Lambda)\to-(m_3,\Lambda)$. \subsection{The radial dictionary} We see that the BPZ equation takes the same form as the radial and angular equations of the black hole perturbation problem if we set $\epsilon_1 = 1$. We will do this from now on. This implies $b=\hbar$.
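As a further consistency check of ours, the reduction of the standard CHE \eqref{CHE_standard} to the Schr\"odinger form \eqref{CHE_Schrodinger} with coefficients \eqref{HeunAs} can be verified symbolically; a sketch, assuming \texttt{sympy} is available:

```python
import sympy as sp

z, al, q, ga, de, ep = sp.symbols('z alpha q gamma delta epsilon')

# standard confluent Heun equation: w'' + p(z) w' + q0(z) w = 0
p = ga/z + de/(z - 1) + ep
q0 = (al*z - q)/(z*(z - 1))

# w = P^{-1/2} psi with P = exp(eps*z) z^gamma (z-1)^delta removes the
# first-derivative term: psi'' + V psi = 0 with V = q0 - p'/2 - p^2/4
V = q0 - sp.diff(p, z)/2 - p**2/4

AH = [ga*(2 - ga)/4,
      q + ga*(ga + de - ep - 2)/2,
      -q - al - ga**2/4 + de/2 - (de - ep)**2/4 + ga*(1 - de + 2*ep)/2,
      al - ep*(ga + de - ep)/2,
      -ep**2/4]

# z^2 (z-1)^2 V(z) must equal the polynomial sum_i A^H_i z^i
diff = sp.cancel(V*z**2*(z - 1)**2) - sum(AH[i]*z**i for i in range(5))
assert sp.expand(diff) == 0
```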
Comparing with the coefficients $\hat{A}^r_i$ we find the following eight dictionaries between the parameters of the radial equation in the black hole problem and the CFT: \begin{equation} \begin{aligned} & E = \frac{1}{4} + \lambda + s(s+1)+ \text{a}^2 \omega^2 - 8M^2 \omega^2 - \left( 2M\omega^2 + i s \omega \right) (r_+-r_-)\\ & a_1 = \theta \left(-i \frac{\omega - m \Omega}{4 \pi T_H} + 2 \mathrm{i} M \omega + \frac{s}{2} \right) \\ & a_2 = \theta' \left( -i\frac{\omega - m \Omega}{4 \pi T_H} - \frac{s}{2} \right)\\ & m_3 = \theta'' \left(-2 i M \omega + s \right)\\ & \Lambda = -2 i \theta '' \omega (r_+ - r_-) \end{aligned} \end{equation} where $\theta,\theta',\theta'' = \pm 1$. We will make the following choice for the dictionary from now on: \begin{equation}\label{radial_dictionary} \boxed{\begin{aligned} & E = \frac{1}{4} + \lambda + s(s+1) + \text{a}^2 \omega^2 - 8M^2 \omega^2 - \left(2M\omega^2 + i s \omega \right) (r_+-r_-)\\ & a_1 = -i \frac{\omega - m \Omega}{4 \pi T_H} + 2 \mathrm{i} M \omega + \frac{s}{2} \\ & a_2 = -i \frac{\omega - m \Omega}{4 \pi T_H} - \frac{s}{2}\\ & m_3 = -2 i M \omega + s \\ & \Lambda = -2 i \omega (r_+ - r_-)\,. \end{aligned}} \end{equation} which corresponds to $\theta = \theta' = \theta'' = +1$. Using AGT this dictionary gives the following masses in the gauge theory (see Appendix \ref{AppendixNekrasov} for details): \begin{equation} \begin{aligned} &m_1 = a_1+a_2= -i \frac{\omega - m \Omega}{2 \pi T_H} + 2i M \omega \,, \\ &m_2 = a_2-a_1= -2 i M \omega - s \,, \\ & m_3 = -2 i M \omega + s \,. \end{aligned} \end{equation} This is the same result as the one found in \cite{aminov2020black} except for a shift in $E$, which is due to a different definition of the $U(1)$-factor. 
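The dictionary \eqref{radial_dictionary} and the quoted mass combinations are straightforward to evaluate numerically; a sketch of ours with hypothetical sample values ($M=1$, $\text{a}=0.6$, $s=-2$, $m=2$, $\omega=0.35$):

```python
import math

# hypothetical sample values for the black hole and the perturbation
M, a, s, m, omega = 1.0, 0.6, -2, 2, 0.35

r_p = M + math.sqrt(M**2 - a**2)
r_m = M - math.sqrt(M**2 - a**2)
T_H = (r_p - r_m)/(8*math.pi*M*r_p)
Om = a/(2*M*r_p)

# radial dictionary with theta = theta' = theta'' = +1
a1 = -1j*(omega - m*Om)/(4*math.pi*T_H) + 2j*M*omega + s/2
a2 = -1j*(omega - m*Om)/(4*math.pi*T_H) - s/2
m3 = -2j*M*omega + s
Lam = -2j*omega*(r_p - r_m)

# gauge-theory masses as the quoted combinations of a1, a2
m1, m2 = a1 + a2, a2 - a1
assert abs(m1 - (-1j*(omega - m*Om)/(2*math.pi*T_H) + 2j*M*omega)) < 1e-12
assert abs(m2 - (-2j*M*omega - s)) < 1e-12
```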
\subsection{The angular dictionary} Comparing instead (\ref{eq:CFTA's}) with the $\hat{A}_i^\theta$ in (\ref{eq:angkerrA's}) we find the following eight dictionaries between the parameters of the angular equation in the black hole problem and the CFT: \begin{equation} \begin{aligned} &E = \frac{1}{4} + c^2 + s(s+1) - 2 c s + \lambda \,, \\ &a_1 = \theta \left(-\frac{m-s}{2} \right) \,, \\ &a_2 = \theta' \left( -\frac{m+s}{2} \right) \,, \\ &m_3 = -\theta'' s \,, \\ &\Lambda = \theta'' 4 c \,, \end{aligned} \end{equation} where again $\theta,\theta',\theta'' = \pm 1$ and our choice from here on will be $\theta = \theta' = \theta'' = +1$, i.e.: \begin{equation} \boxed{\begin{aligned} & E = \frac{1}{4} + c^2 + s(s+1) - 2 c s + \lambda \,, \\ & a_1 = -\frac{m-s}{2} \,, \\ & a_2 = -\frac{m+s}{2}\,, \\ & m_3 = - s \,, \\ & \Lambda = 4 c \,. \end{aligned}} \label{eq:angulardictionaryy} \end{equation} Using AGT this dictionary gives the following masses in the gauge theory (see Appendix \ref{AppendixNekrasov} for details): \begin{equation} \begin{aligned} &m_1 =a_1+a_2= - m \,, \\ &m_2 =a_2-a_1= - s \,, \\ &m_3 = - s \,. \end{aligned} \end{equation} Again we note the discrepancy with \cite{aminov2020black} due to the different $U(1)$-factor. \section{The connection problem}\label{section:ConnectionProblem} \label{four} Exploiting crossing symmetry of Liouville correlation functions we can connect different asymptotic expansions of the solutions of BPZ equations around different field insertion points. Asymptotic expansions are computed via OPEs with regular and irregular insertions. 
To this end, we recall that the OPE of the degenerate field of our interest and a primary field reads \cite{Belavin:1984vu}: \begin{equation} \Phi_{2,1} (z, \bar{z}) V_{\alpha_i} (w, \bar{w}) = \sum_{\pm} \mathcal{C}_{\alpha_{2, 1}, \alpha_i}^{\alpha_{i \pm}} |z-w|^{2 k_{\pm}} \left( V_{\alpha_{i \pm}} (w, \bar{w}) + \mathcal{O}(|z - w|^2 ) \right) \,, \label{eq:regularOPE} \end{equation} where $\alpha_{i \pm} := \alpha_i \pm \frac{-b}{2}$, and $k_\pm = \Delta_{\alpha_{i \pm}} - \Delta_{\alpha_i} - \Delta_{2,1}$ is fixed by the $L_0$ action. The OPE coefficient $\mathcal{C}_{\alpha_{2, 1}, \alpha_i}^{\alpha_{i \pm}}$ is computed in terms of DOZZ factors \cite{Dorn:1994xn} \cite{Zamolodchikov:1995aa} (see Appendix \ref{DOZZfactors}), namely \begin{equation} \mathcal{C}_{\alpha_{2, 1}, \alpha_i}^{\alpha_{i \pm}} = G^{-1}(\alpha_{i \pm}) C(\alpha_{i \pm}, \frac{-b-Q}{2}, \alpha_i) \,. \end{equation} The OPE with the irregular state is constrained by conformal symmetry, and the leading behavior is fixed by the action of $L_0, L_1, L_2$ instead of just $L_0$. The overall factors are again given in terms of DOZZ factors (see Appendix \ref{IrregularOPE}). One finds \begin{equation} \begin{aligned} \langle \Delta_\alpha, \Lambda_0, \bar{\Lambda}_0, m_0 | \Phi_{2, 1} (z, \bar{z}) &= \mathcal{C}^{\alpha_+}_{\alpha, \alpha_{2,1}} \displaystyle\left\lvert \sum_{\pm, k} \mathcal{A}_{\alpha_+, m_{0 \pm}} (\pm \Lambda)^{-\frac{1}{2} \pm m_3 + b \alpha_+} z^{\frac{1}{2} (bQ - 1 \pm 2 m_3)} e^{\pm \Lambda z/2} z^{-k} \langle \Delta_{\alpha_+}, \Lambda_0, m_{0 \pm}; k | \right\rvert^2 + \\ &+ \mathcal{C}^{\alpha_-}_{\alpha, \alpha_{2,1}} \displaystyle\left\lvert \sum_{\pm, k} \mathcal{A}_{\alpha_-, m_{0 \pm}} (\pm \Lambda)^{-\frac{1}{2} \pm m_3 - b \alpha_-} z^{\frac{1}{2} (bQ - 1 \pm 2 m_3)} e^{\pm \Lambda z/2} z^{-k} \langle \Delta_{\alpha_-}, \Lambda_0, m_{0 \pm}; k | \right\rvert^2\,. 
\end{aligned} \label{eq:ierrgope} \end{equation} Here the irregular state depending on $\Lambda_0, \bar{\Lambda}_0$ denotes the full (chiral$\otimes$antichiral) state, and the modulus squared of the chiral states (depending only on $\Lambda_0$) also has to be understood as a tensor product. The coefficients $\mathcal{A}$ are given by \begin{equation} \begin{aligned} &\mathcal{A}_{\alpha_+, m_{0 +}} = \frac{\Gamma (1 - 2 b \alpha_+)}{\Gamma (\frac{1}{2} + m_3 - b \alpha_+)} \,, \, \, \mathcal{A}_{\alpha_+, m_{0 -}} = \frac{\Gamma (1 - 2 b \alpha_+)}{\Gamma (\frac{1}{2} - m_3 - b \alpha_+)} \,, \\ &\mathcal{A}_{\alpha_-, m_{0 +}} = \frac{\Gamma (1 + 2 b \alpha_-)}{\Gamma (\frac{1}{2} + m_3 + b \alpha_-)} \,, \, \, \mathcal{A}_{\alpha_-, m_{0 -}} = \frac{\Gamma (1 + 2 b \alpha_-)}{\Gamma (\frac{1}{2} - m_3 + b \alpha_-)} \,. \end{aligned} \label{eq:irrOPEcoeffreal} \end{equation} Since the results presented in this section are formulated purely in a CFT context, they will be written for finite $b$ unless otherwise specified. \subsection{Connection formulae for the irregular 4 point function} Let us consider the irregular correlator \begin{equation} \Psi (z, \bar{z}) = \langle \Delta_\alpha, \Lambda_0, \bar{\Lambda}_0, m_0 | \Phi_{2, 1} (z, \bar{z}) V_{\alpha_2} (1, \bar{1}) | \Delta_{\alpha_1} \rangle \,. \label{eq:full} \end{equation} The asymptotics of $\Psi$ for $z \sim 1, \infty$, corresponding respectively to the $t$- and $u$-channels, are given by the OPEs. Due to crossing symmetry, the two expansions have to agree, therefore \begin{equation}\label{eq:asymptotics} \Psi(z, \bar{z}) = K_{\alpha_{2+}, \alpha_{2+}}^{(t)} | f_{\alpha_{2+}}^{(t)} (z) |^2 + K_{\alpha_{2-}, \alpha_{2-}}^{(t)} | f_{\alpha_{2-}}^{(t)} (z) |^2 = K_{\alpha_+, \alpha_+}^{(u)} | f_{\alpha_+}^{(u)} (z) |^2 + K_{\alpha_-, \alpha_-}^{(u)} | f_{\alpha_-}^{(u)} (z) |^2 \,.
\end{equation} where \begin{equation} \begin{aligned} &K_{\alpha_{2+}, \alpha_{2+}}^{(t)} = \mathcal{C}_{\alpha_{2,1} \alpha_2}^{\alpha_{2+}} C(\alpha, \alpha_{2+}, \alpha_1) \,, \, K_{\alpha_{2-}, \alpha_{2-}}^{(t)} = \mathcal{C}_{\alpha_{2,1} \alpha_2}^{\alpha_{2-}} C(\alpha, \alpha_{2-}, \alpha_1) \,, \\ &K_{\alpha_{+}, \alpha_{+}}^{(u)} = \mathcal{C}_{\alpha_{2,1} \alpha}^{\alpha_+} C(\alpha_+, \alpha_2, \alpha_1) \,, \, K_{\alpha_{-}, \alpha_{-}}^{(u)} = \mathcal{C}_{\alpha_{2,1} \alpha}^{\alpha_-} C(\alpha_-, \alpha_2, \alpha_1) \,, \end{aligned} \end{equation} are the OPE coefficients for the two fusion channels in the $t$ and $u$-channel OPEs and \begin{equation} \begin{aligned} &f_{\alpha_{2+}}^{(t)} (z) = \langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle (z-1)^{\frac{b Q + 2 b \alpha_2}{2}} \left(1+\mathcal{O}(z-1)\right)\,, \\ &f_{\alpha_{2-}}^{(t)} (z) = \langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2-}} (1) | \Delta_{\alpha_1} \rangle (z-1)^{\frac{b Q - 2 b \alpha_2}{2}}\left(1+\mathcal{O}(z-1)\right) \,, \\ &f_{\alpha_+}^{(u)} (z) = \sum_{\pm} \langle \Delta_{\alpha_+}, \Lambda_0, m_{0 \pm} | V_{\alpha_2} (1) | \Delta_{\alpha_1} \rangle \mathcal{A}_{\alpha_+, m_{0 \pm}} e^{\pm \frac{\Lambda z}{2}} (\pm \Lambda)^{-\frac{1}{2} \pm m_3 + b \alpha_+} z^{\frac{1}{2} \left( bQ - 1 \pm 2 m_3 \right)}\left(1+\mathcal{O}(z^{-1})\right) \,, \\ &f_{\alpha_-}^{(u)} (z) =\sum_{\pm} \langle \Delta_{\alpha_-}, \Lambda_0, m_{0 \pm} | V_{\alpha_2} (1) | \Delta_{\alpha_1} \rangle \mathcal{A}_{\alpha_-, m_{0 \pm}} e^{\pm \frac{\Lambda z}{2}} (\pm \Lambda)^{-\frac{1}{2} \pm m_3 - b \alpha_-} z^{\frac{1}{2} \left( bQ - 1 \pm 2 m_3 \right)}\left(1+\mathcal{O}(z^{-1})\right) \,. \end{aligned} \label{HeunLambdaasymptotics} \end{equation} give the expansions of the correlator in the two fusion channels of the $t$ and $u$-channels.
Note that, in line with the definition (\ref{irregular_state}), the irregular state contributes to the DOZZ factor in the same way as a regular state. Here, as noted in section \ref{HeunasBPZ}, $f_{\pm}^{(t,u)}$ in the NS limit are (up to a rescaling by one of the correlators, to keep them finite) the two linearly independent confluent Heun functions expanded around $1$ and $\infty$, respectively. We remark that due to the presence of the irregular singularity the $\alpha_\pm$ channels at infinity contribute with two different irregular states each, corresponding to $m_{0 \pm}$. This is consistent with the fact that the irregular state comes from the collision of two primary operators \cite{Gaiotto:2012sf}. The two expansions are related via a connection matrix $M$ by \begin{equation} f_i^{(t)} (z) = M_{ij} f_j^{(u)} (z) \,, \, \, i = \alpha_{2 \pm} \,, \, j = \alpha_\pm \,. \label{eq:connectionansatz} \end{equation} This equation, combined with the requirement of crossing symmetry (\ref{eq:asymptotics}), gives the constraints \begin{equation} K_{ij}^{(t)} M_{ik} M_{jl} = K_{kl}^{(u)} \,. \label{eq:connectionconstraints} \end{equation} Equations (\ref{eq:connectionconstraints}) give 3 quadratic equations for the 4 entries $M_{ij}$. Other constraints come from noticing that the $M_{ij}$ have to respect the symmetry under reflection of the momenta. The sign ambiguity inherent in the quadratic constraints (\ref{eq:connectionconstraints}) is resolved by imposing that for $\Lambda \to 0$ they reduce to the known hypergeometric connection matrix, since \begin{equation} \langle \Delta_\alpha, \Lambda_0, \bar{\Lambda}_0, m_0 | \Phi_{2, 1} (z, \bar{z}) V_{\alpha_2} (1, \bar{1}) | \Delta_{\alpha_1} \rangle \to \langle \Delta_\alpha | \Phi_{2, 1} (z, \bar{z}) V_{\alpha_2} (1, \bar{1}) | \Delta_{\alpha_1} \rangle \,, \, \text{as} \, \Lambda \to 0 \,, \end{equation} and conformal blocks of the regular degenerate 4 point functions are hypergeometric functions.
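For reference, the hypergeometric connection matrices invoked in the $\Lambda \to 0$ limit rest on the classical Gauss connection formulae, whose entries are ratios of Gamma functions of the same type as the $M_{ij}$. The formula connecting $0$ and $1$ (DLMF 15.8.4) can be checked numerically; a sketch of ours with arbitrary sample parameters, assuming \texttt{mpmath} is available:

```python
import mpmath as mp

mp.mp.dps = 30  # working precision

# generic sample parameters (c - a - b must not be an integer)
a, b, c = mp.mpf('0.3'), mp.mpf('0.7'), mp.mpf('1.9')
z = mp.mpf('0.4')

lhs = mp.hyp2f1(a, b, c, z)

# connection coefficients: ratios of Gamma functions
A = mp.gamma(c)*mp.gamma(c - a - b)/(mp.gamma(c - a)*mp.gamma(c - b))
B = mp.gamma(c)*mp.gamma(a + b - c)/(mp.gamma(a)*mp.gamma(b))
rhs = (A*mp.hyp2f1(a, b, a + b - c + 1, 1 - z)
       + B*(1 - z)**(c - a - b)*mp.hyp2f1(c - a, c - b, c - a - b + 1, 1 - z))

assert abs(lhs - rhs) < mp.mpf('1e-25')
```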
This gives \begin{equation} \begin{aligned} &M_{\alpha_{2+}, \alpha_+} = \frac{\Gamma (- 2 b \alpha) \Gamma (1 + 2 b \alpha_2)}{\Gamma (\frac{1}{2} + b (\alpha_1 + \alpha_2 - \alpha)) \Gamma (\frac{1}{2} + b (-\alpha_1 + \alpha_2 - \alpha))} \,, \\ &M_{\alpha_{2-}, \alpha_-} = \frac{\Gamma (2 b \alpha) \Gamma (1 - 2 b \alpha_2)}{\Gamma (\frac{1}{2} + b (\alpha_1 - \alpha_2 + \alpha)) \Gamma (\frac{1}{2} + b (-\alpha_1 - \alpha_2 + \alpha))} \,, \\ &M_{\alpha_{2+}, \alpha_-} = \frac{\Gamma (2 b \alpha) \Gamma (1 + 2 b \alpha_2)}{\Gamma (\frac{1}{2} + b (\alpha_1 + \alpha_2 + \alpha)) \Gamma (\frac{1}{2} + b (-\alpha_1 + \alpha_2 + \alpha))} \,, \\ &M_{\alpha_{2-}, \alpha_+} = \frac{\Gamma (-2 b \alpha) \Gamma (1 - 2 b \alpha_2)}{\Gamma (\frac{1}{2} + b (\alpha_1 - \alpha_2 - \alpha)) \Gamma (\frac{1}{2} + b (-\alpha_1 - \alpha_2 - \alpha))} \,. \end{aligned} \label{eq:irregconnectionmatrix} \end{equation} Note that $M_{ij}$ is given by the hypergeometric connection matrix even for finite $\Lambda$, since all $\Lambda$ corrections are encoded in the asymptotics of the functions (\ref{HeunLambdaasymptotics}). Proceeding in the same way, we can find the connection coefficients between $0$ and $1$. Using crossing symmetry, we have \begin{equation} \Psi (z, \bar{z}) = K_{\alpha_{1+}, \alpha_{1+}}^{(s)} | f_{\alpha_{1+}}^{(s)} (z) |^2 + K_{\alpha_{1-}, \alpha_{1-}}^{(s)} | f_{\alpha_{1-}}^{(s)} (z) |^2 = K_{\alpha_{2+}, \alpha_{2+}}^{(t)} | f_{\alpha_{2+}}^{(t)} (z) |^2 + K_{\alpha_{2-}, \alpha_{2-}}^{(t)} | f_{\alpha_{2-}}^{(t)} (z) |^2 \,, \label{eq:01asymptotics} \end{equation} where \begin{equation} \begin{aligned} &f_{\alpha_{1+}}^{(s)} (z) \simeq \langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_2} (1) | \Delta_{\alpha_{1+}} \rangle z^{\frac{b Q + b \alpha_1}{2}} \,, \\ &f_{\alpha_{1-}}^{(s)} (z) \simeq \langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_2} (1) | \Delta_{\alpha_{1-}} \rangle z^{\frac{b Q - b \alpha_1}{2}} \,.
\end{aligned} \end{equation} Imposing again \begin{equation} f_i^{(s)} (z) = N_{ij} f_j^{(t)} (z) \,, \label{eq:connectionansatz01} \end{equation} substituting (\ref{eq:connectionansatz01}) in (\ref{eq:01asymptotics}) and imposing that $f^{(s,t)}$ reduce to hypergeometric functions as $\Lambda \to 0$ we find (see Appendix \ref{DOZZfactors}) \begin{equation} \begin{aligned} &N_{\alpha_{1+}, \alpha_{2+}} = \frac{\Gamma (-2 b \alpha_2) \Gamma (1 + 2 b \alpha_1)}{\Gamma (\frac{1}{2} + b (\alpha_1 - \alpha_2 + \alpha)) \Gamma (\frac{1}{2} + b (\alpha_1 - \alpha_2 - \alpha))} \,, \\ &N_{\alpha_{1-}, \alpha_{2-}} = \frac{\Gamma (2 b \alpha_2) \Gamma (1 - 2 b \alpha_1)}{\Gamma (\frac{1}{2} + b (-\alpha_1 + \alpha_2 - \alpha)) \Gamma (\frac{1}{2} + b (-\alpha_1 + \alpha_2 + \alpha))} \,, \\ &N_{\alpha_{1+}, \alpha_{2-}} = \frac{\Gamma (2 b \alpha_2) \Gamma (1 + 2 b \alpha_1)}{\Gamma (\frac{1}{2} + b (\alpha_1 + \alpha_2 - \alpha)) \Gamma (\frac{1}{2} + b (\alpha_1 + \alpha_2 + \alpha))} \,, \\ &N_{\alpha_{1-}, \alpha_{2+}} = \frac{\Gamma (-2 b \alpha_2) \Gamma (1 - 2 b \alpha_1)}{\Gamma (\frac{1}{2} + b (-\alpha_1 - \alpha_2 + \alpha)) \Gamma (\frac{1}{2} + b (-\alpha_1 - \alpha_2 - \alpha))} \,. \end{aligned} \end{equation} \subsection{AGT dual of irregular correlators and NS limit} The irregular correlators appearing in the asymptotics of the functions (\ref{HeunLambdaasymptotics}) can be efficiently computed as Nekrasov partition functions thanks to the AGT correspondence \cite{Alday_2010}. 
In particular, the chiral irregular three point function is identified with \cite{Marshakov_2009} \begin{equation} \langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_2} (1) | \Delta_{\alpha_1} \rangle = \mathcal{Z}^{\mathrm{inst}} (\Lambda, a, m_1, m_2, m_3) \,, \end{equation} where $\mathcal{Z}^{\mathrm{inst}} (\Lambda, a, m_1, m_2, m_3)$ is the Nekrasov instanton partition function of $SU(2)$ $\mathcal{N}=2$ gauge theory in the $\Omega$-background (see Appendix \ref{AppendixNekrasov}). While the analysis in the last section was completely general, in order to apply the obtained results to the Teukolsky equation, one needs to take the NS limit $\epsilon_2\to0$, $\epsilon_1=1$ as discussed in section \ref{HeunasBPZ}. In this limit the correlators diverge, but rescaling the functions in (\ref{HeunLambdaasymptotics}) by one of the correlators, the resulting ratios are finite. In a slight abuse of notation, we write the connection coefficients in the NS limit as \begin{equation} \begin{aligned} &M_{a_{2+}, a_+} = \frac{\Gamma (- 2 a) \Gamma (1 + 2 a_2)}{\Gamma (\frac{1}{2} + a_1 + a_2 - a) \Gamma (\frac{1}{2} - a_1 + a_2 - a)} \,, \\ &M_{a_{2-}, a_-} = \frac{\Gamma (2 a) \Gamma (1 - 2 a_2)}{\Gamma (\frac{1}{2} + a_1 - a_2 + a) \Gamma (\frac{1}{2} - a_1 - a_2 + a)} \,, \\ &M_{a_{2+}, a_-} = \frac{\Gamma (2 a) \Gamma (1 + 2 a_2)}{\Gamma (\frac{1}{2} + a_1 + a_2 + a) \Gamma (\frac{1}{2} - a_1 + a_2 + a)} \,, \\ &M_{a_{2-}, a_+} = \frac{\Gamma (-2 a) \Gamma (1 - 2 a_2)}{\Gamma (\frac{1}{2} + a_1 - a_2 - a) \Gamma (\frac{1}{2} - a_1 - a_2 - a)} \,, \end{aligned} \label{eq:irregconnectionmatrixnsoinf} \end{equation} and similarly \begin{equation} \begin{aligned} &N_{a_{1+}, a_{2+}} = \frac{\Gamma (-2 a_2) \Gamma (1 + 2 a_1)}{\Gamma (\frac{1}{2} + a_1 - a_2 + a) \Gamma (\frac{1}{2} + a_1 - a_2 - a)} \,, \\ &N_{a_{1-}, a_{2-}} = \frac{\Gamma (2 a_2) \Gamma (1 - 2 a_1)}{\Gamma (\frac{1}{2} - a_1 + a_2 - a) \Gamma (\frac{1}{2} - a_1 + a_2 + a)} \,, \\ &N_{a_{1+}, a_{2-}} = 
\frac{\Gamma (2 a_2) \Gamma (1 + 2 a_1)}{\Gamma (\frac{1}{2} + a_1 + a_2 - a) \Gamma (\frac{1}{2} + a_1 + a_2 + a)} \,, \\ &N_{a_{1-}, a_{2+}} = \frac{\Gamma (-2 a_2) \Gamma (1 - 2 a_1)}{\Gamma (\frac{1}{2} - a_1 - a_2 + a) \Gamma (\frac{1}{2} - a_1 - a_2 - a)} \,, \end{aligned} \label{eq:irregconnectionmatrixnsoone} \end{equation} where $a_i=\hbar \alpha_i=b \alpha_i$ for $\epsilon_1=\hbar/b=1$. \subsection{Plots of the connection coefficients}\label{plots} In the following we illustrate the power of the connection coefficients obtained above by comparing our analytical solution to the numerical one; this also shows how the connection coefficients are evaluated in practice. For simplicity we focus on the connection problem between $z=0$ and $1$. The confluent Heun function $w(z)$ solving the CHE in standard form (\ref{CHE_standard}) can be expanded as a power series near $z=0$ as \begin{equation} w(z)=1-\frac{q}{\gamma}z+\frac{\alpha\gamma + q(q-\gamma-\delta+\epsilon)}{2\gamma(\gamma+1)}z^2+\mathcal{O}(z^3)\,. \end{equation} We are interested in analytically continuing this series toward the other singular point at $z=1$. This problem is solved by our connection coefficients; we just need to identify the functions and parameters: in terms of the function $\psi(z)$ solving the CHE in Schr\"odinger form (\ref{CHE_Schrodinger}), we have around $z=0$: \begin{equation} \psi(z) = e^{\epsilon z/2} z^{\gamma/2}(z-1)^{\delta/2}w(z) = z^{\frac{1}{2}+\theta a_1}\left(1+\mathcal{O}(z)\right) = \hat{f}^{(s)}_{\alpha_{1\theta}}(z) \,, \end{equation} where we have introduced the normalized $s$-channel function, related to the $s$-channel function defined before by $f^{(s)}_{\alpha_{1\theta}}(z)=\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2}} (1) | \Delta_{\alpha_{1\theta}} \rangle \hat{f}^{(s)}_{\alpha_{1\theta}}(z)$.
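The truncated series just quoted is straightforward to evaluate numerically; the following sketch (with our own helper names; the truncation is at $\mathcal{O}(z^3)$) builds the Schr\"odinger-form combination $e^{\epsilon z/2} z^{\gamma/2}(z-1)^{\delta/2}w(z)$ from the three quoted coefficients:

```python
import cmath

def w_series(z, q, alpha, gamma_, delta, eps):
    """Three-term expansion of the confluent Heun function near z = 0,
    transcribed from the series quoted above; valid up to O(z^3)."""
    c1 = -q / gamma_
    c2 = (alpha*gamma_ + q*(q - gamma_ - delta + eps)) / (2*gamma_*(gamma_ + 1))
    return 1 + c1*z + c2*z**2

def psi(z, q, alpha, gamma_, delta, eps):
    """Schrodinger-form solution psi = e^{eps z/2} z^{gamma/2} (z-1)^{delta/2} w(z);
    complex() conversions handle the fractional power of the negative factor (z-1)."""
    return (cmath.exp(eps*z/2) * complex(z)**(gamma_/2)
            * complex(z - 1)**(delta/2) * w_series(z, q, alpha, gamma_, delta, eps))
```

This truncation is only accurate in a neighborhood of $z=0$; it is the object that gets continued to $z=1$ through the connection coefficients.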
Similarly, we define the normalized $t$-channel function, related to the one defined before by $f^{(t)}_{\alpha_{2\theta'}}(z)=\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2\theta'}} (1) | \Delta_{\alpha_1} \rangle \hat{f}^{(t)}_{\alpha_{2\theta'}}(z)$. It is a solution to the CHE given as a power series around the singular point $z=1$, which can be obtained by the Frobenius method: \begin{equation} \hat{f}^{(t)}_{\alpha_{2\theta'}}(z) = (1-z)^{\frac{1}{2}+\theta'a_2}\left(1 - \frac{1/4 - a_1^2 -a_2^2+E}{1+2\theta'a_2}(1-z)+\mathcal{O}((1-z)^2)\right) \,. \end{equation} The $s$- and $t$-channel solutions are related by $f_i^{(s)} = N_{ij} f_j^{(t)} $, with the coefficients $N_{ij}$ given before, which we now write more explicitly: \begin{equation} \boxed{\begin{aligned} \hat{f}^{(s)}_{\alpha_{1\theta}}(z) & = \frac{\Gamma (-2 a_2) \Gamma (1 + 2 \theta a_1)}{\Gamma (\frac{1}{2} + \theta a_1 - a_2 + a) \Gamma (\frac{1}{2} + \theta a_1 - a_2 - a)} \frac{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2}} (1) | \Delta_{\alpha_{1\theta}} \rangle} \hat{f}^{(t)}_{\alpha_{2+}}(z) + \\ &+ \frac{\Gamma (2 a_2) \Gamma (1 + 2 \theta a_1)}{\Gamma (\frac{1}{2} + \theta a_1 + a_2 + a) \Gamma (\frac{1}{2} + \theta a_1 + a_2 - a)} \frac{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2-}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2}} (1) | \Delta_{\alpha_{1\theta}} \rangle} \hat{f}^{(t)}_{\alpha_{2-}}(z) \end{aligned}} \end{equation} for $\theta=\pm$. A further complication arises from the fact that the parameter appearing in the CHE is $E$, whereas the connection formula involves $a$, which is related to $E$ in a nontrivial way and has to be obtained by inverting the Matone relation (see Appendix \ref{AppendixNekrasov}) \cite{Flume_2004}: \begin{equation} E=a^2-\Lambda\partial_\Lambda \mathcal{F}^{\mathrm{inst}}\,.
\label{eq:matone} \end{equation} Everything has to be computed for general $\epsilon_1,\epsilon_2$ using Nekrasov formulae and then specialized to the NS limit by setting $\epsilon_1=1$ and taking the limit $\epsilon_2\to0$ in the end. To work consistently at one instanton, one also needs to expand the Gamma functions, since they contain $a$, which is itself given as an instanton expansion. We get \begin{equation} \boxed{\begin{aligned} &\hat{f}^{(s)}_{\alpha_{1\theta}}(z) = \frac{\Gamma (-2 a_2) \Gamma (1 + 2 \theta a_1)}{\Gamma (\frac{1}{2} + \theta a_1 - a_2 + \sqrt{E}) \Gamma (\frac{1}{2} + \theta a_1 - a_2 - \sqrt{E})} \hat{f}^{(t)}_{\alpha_{2+}}(z)\times \\ & \times \left[1-\left(\frac{\theta a_1 + a_2}{\frac{1}{2}-2E} + \frac{\frac{1}{4}-E+a_1^2-a_2^2}{\sqrt{E}\left(1-4E\right)}\big[\psi^{(0)}\big(\frac{1}{2}-\sqrt{E}+\theta a_1 -a_2\big)-\psi^{(0)}\big(\frac{1}{2}+\sqrt{E}+\theta a_1 -a_2\big)\big]\right) m_3 \Lambda \right] + \\ &+ \frac{\Gamma (2 a_2) \Gamma (1 + 2 \theta a_1)}{\Gamma (\frac{1}{2} + \theta a_1 + a_2 + \sqrt{E}) \Gamma (\frac{1}{2} + \theta a_1 + a_2 - \sqrt{E})} \hat{f}^{(t)}_{\alpha_{2-}}(z)\times \\ & \times \left[1-\left(\frac{\theta a_1 - a_2}{\frac{1}{2}-2E}+ \frac{\frac{1}{4}-E+a_1^2-a_2^2}{\sqrt{E}\left(1-4E\right)}\big[\psi^{(0)}\big(\frac{1}{2}-\sqrt{E}+\theta a_1 +a_2\big)-\psi^{(0)}\big(\frac{1}{2}+\sqrt{E}+\theta a_1+a_2\big)\big] \right)m_3 \Lambda \right]\\ &+ \mathcal{O}(\Lambda^2). \end{aligned}} \end{equation} Here $\psi^{(0)}(z)=\frac{d}{dz}\log \Gamma(z)$ is the digamma function. The higher instanton corrections to the connection coefficients can be computed in an analogous way. We have identified $\hat{f}^{(s)}_{\alpha_{1\theta}}(z) = e^{\epsilon z/2} z^{\gamma/2}(z-1)^{\delta/2}w(z)$ by using the power series expansion near $z=0$. We can then use the connection formula given above to obtain the power series expansion near $z=1$ in terms of $\hat{f}^{(t)}_{\alpha_{2\pm}}(z)$, and compare it to the numerical solution.
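The order-by-order inversion of the Matone relation (\ref{eq:matone}) can be sketched concretely. In the snippet below the one-instanton coefficient `c1` is a deliberately hypothetical placeholder (not the actual Nekrasov coefficient) and the helper names are ours; it only illustrates the structure $a(E) = \sqrt{E} + \frac{c_1(\sqrt{E})}{2\sqrt{E}}\Lambda + \mathcal{O}(\Lambda^2)$:

```python
import math

def a_of_E(E, Lam, c1):
    """First-order inversion of the Matone relation E = a^2 - Lam*dF/dLam,
    truncated at F^inst = c1(a)*Lam + O(Lam^2):
        a(E) = sqrt(E) + c1(sqrt(E))/(2*sqrt(E)) * Lam + O(Lam^2)."""
    r = math.sqrt(E)
    return r + c1(r) / (2*r) * Lam

c1 = lambda x: 0.8*x + 0.3          # placeholder coefficient, illustration only
def E_of_a(a, Lam):                  # truncated Matone relation itself
    return a*a - Lam*c1(a)

E0, Lam = 2.7, 1e-3
a_inv = a_of_E(E0, Lam, c1)
# The round trip closes up to the neglected O(Lam^2) terms
assert abs(E_of_a(a_inv, Lam) - E0) < 10*Lam**2
```

Higher orders work the same way: each new power of $\Lambda$ gives one linear equation for the next correction to $a(E)$.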
In the following we illustrate the power of the connection formula by giving random values (in a suitable range) to the various parameters and plotting the confluent Heun function numerically versus the three-term power expansion at $z=1$, computed analytically by using the connection formula from $0$ to $1$. Here we use the dictionary between the parameters of the CHE in standard form and the CFT parameters given in (\ref{CHE_dictionary}), with $\theta = +1,\theta'=-1,\theta''=-1$. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{0inst_1.png} \caption{{\small Real and imaginary parts of the rescaled confluent Heun function $e^{\epsilon z/2} z^{\gamma/2}(z-1)^{\delta/2}w(z)$ (blue, dashed), computed numerically, and of the three-term power expansion near $z=1$ (solid, orange), obtained analytically using the connection coefficients computed at zero instantons. The validity of the series expansion around $z=1$ (orange) is limited to a neighborhood of $z=1$, but going to higher orders in the expansion to extend the validity is straightforward. The values of the parameters are: $a_1 = 0.970123 + 1.36981i,\, a_2 = -0.386424 - 2.99783i,\, E = 5.41627 + 6.40871i,\, m_3 = 1.68707 - 0.707722i,\,\Lambda = 1.96772 + 1.80414i$.}} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{1inst_1.png} \caption{{\small Real and imaginary parts of the rescaled confluent Heun function $e^{\epsilon z/2} z^{\gamma/2}(z-1)^{\delta/2}w(z)$ (blue, dashed), computed numerically, and of the three-term power expansion near $z=1$ (solid, orange), obtained analytically using the connection coefficients computed at one instanton. The validity of the series expansion around $z=1$ (orange) is limited to a neighborhood of $z=1$, but going to higher orders in the expansion to extend the validity is straightforward. 
The values of the parameters are: $a_1 = 0.970123 + 1.36981i,\, a_2 = -0.386424 - 2.99783i,\, E = 5.41627 + 6.40871i,\, m_3 = 1.68707 - 0.707722i,\,\Lambda = 1.96772 + 1.80414i$.}} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{2inst_1.png} \caption{{\small Real and imaginary parts of the rescaled confluent Heun function $e^{\epsilon z/2} z^{\gamma/2}(z-1)^{\delta/2}w(z)$ (blue, dashed), computed numerically, and of the three-term power expansion near $z=1$ (solid, orange), obtained analytically using the connection coefficients computed at two instantons. The validity of the series expansion around $z=1$ (orange) is limited to a neighborhood of $z=1$, but going to higher orders in the expansion to extend the validity is straightforward. The values of the parameters are: $a_1 = 0.970123 + 1.36981i,\, a_2 = -0.386424 - 2.99783i,\, E = 5.41627 + 6.40871i,\, m_3 = 1.68707 - 0.707722i,\,\Lambda = 1.96772 + 1.80414i$.}} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{0inst_2.png} \caption{{\small Real and imaginary parts of the rescaled confluent Heun function $e^{\epsilon z/2} z^{\gamma/2}(z-1)^{\delta/2}w(z)$ (blue, dashed), computed numerically, and of the three-term power expansion near $z=1$ (solid, orange), obtained analytically using the connection coefficients computed at zero instantons. The validity of the series expansion around $z=1$ (orange) is limited to a neighborhood of $z=1$, but going to higher orders in the expansion to extend the validity is straightforward. 
The values of the parameters are: $a_1 = 0.5 + 1.24031i,\,a_2 = -0.5 + 1.55419i,\,E = 5.52396,\,m_3 = 0.92039 + 1.36765i,\, \Lambda=1.60238 + 1.25941i$.}} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{1inst_2.png} \caption{{\small Real and imaginary parts of the rescaled confluent Heun function $e^{\epsilon z/2} z^{\gamma/2}(z-1)^{\delta/2}w(z)$ (blue, dashed), computed numerically, and of the three-term power expansion near $z=1$ (solid, orange), obtained analytically using the connection coefficients computed at one instanton. The validity of the series expansion around $z=1$ (orange) is limited to a neighborhood of $z=1$, but going to higher orders in the expansion to extend the validity is straightforward. The values of the parameters are: $a_1 = 0.5 + 1.24031i,\,a_2 = -0.5 + 1.55419i,\,E = 5.52396,\,m_3 = 0.92039 + 1.36765i,\, \Lambda=1.60238 + 1.25941i$.}} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{2inst_2.png} \caption{{\small Real and imaginary parts of the rescaled confluent Heun function $e^{\epsilon z/2} z^{\gamma/2}(z-1)^{\delta/2}w(z)$ (blue, dashed), computed numerically, and of the three-term power expansion near $z=1$ (solid, orange), obtained analytically using the connection coefficients computed at two instantons. The validity of the series expansion around $z=1$ (orange) is limited to a neighborhood of $z=1$, but going to higher orders in the expansion to extend the validity is straightforward. The values of the parameters are: $a_1 = 0.5 + 1.24031i,\,a_2 = -0.5 + 1.55419i,\,E = 5.52396,\,m_3 = 0.92039 + 1.36765i,\, \Lambda=1.60238 + 1.25941i$.}} \end{figure} We see that already the first instanton correction significantly improves the approximation. \section{Applications to the black hole problem}\label{five} There are several interesting physical quantities in the black hole problem which are governed by the Teukolsky equation. 
Having the explicit expression for the connection coefficients allows us to compute them exactly. We turn to this now. \subsection{The greybody factor} While all our analysis has been for classical black holes, it is known that quantum black holes emit thermal radiation from their horizons \cite{Hawking:1974sw}. However, the spacetime outside of the black hole acts as a potential barrier for the emitted particles, so that the emission spectrum as measured by an observer at infinity is no longer thermal, but is given by $\frac{\sigma(\omega)}{\exp\left(\frac{\omega-m\Omega}{T_H}\right)-1}$, where $\sigma(\omega)$ is the so-called greybody factor. Incidentally, it is the same as the absorption coefficient of the black hole, which gives the fraction of a flux of particles incoming from infinity that penetrates the potential barrier and is absorbed by the black hole \cite{Hawking:1974sw, dong2015greybody}. More precisely, the radial equation with $s=0$ has a conserved flux, given by the ``probability flux'' when written as a Schr\"odinger equation: $\phi = \mathrm{Im}\,\psi^\dagger(z)\partial_z\psi(z)$ for $z$ on the real line. The absorption coefficient is then defined as the ratio between the flux $\phi_{abs}$ absorbed by the black hole (ingoing at the horizon) and the flux $\phi_{in}$ incoming from infinity. For non-zero spin, the potential (\ref{radial_potential}) becomes complex, and the flux is no longer conserved. In that case the absorption coefficient can be computed using energy fluxes \cite{Brito_2020}, but for simplicity we stick here to $s=0$.
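The conserved flux just defined is easy to evaluate numerically. A minimal illustration (our helper name `flux`, with a central finite-difference derivative) on a plane wave, whose flux equals the wavenumber at every point:

```python
import cmath

def flux(psi, z, h=1e-6):
    """Probability flux phi = Im[psi(z)^* d/dz psi(z)], with the derivative
    approximated by a central finite difference of step h."""
    dpsi = (psi(z + h) - psi(z - h)) / (2*h)
    return (psi(z).conjugate() * dpsi).imag

k = 1.7
psi = lambda z: cmath.exp(1j*k*z)    # plane wave: the flux should be k everywhere
assert abs(flux(psi, 0.3) - k) < 1e-6
assert abs(flux(psi, 5.0) - k) < 1e-6
```

For the actual radial problem one would insert a numerically integrated $\psi(z)$ and evaluate the same quantity near the horizon and near infinity.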
\subsubsection{The exact result} On physical grounds we impose the boundary condition that there is only an ingoing wave at the horizon: \begin{equation} R(r\to r_+) \sim (r-r_+)^{-i\frac{\omega-m\Omega}{4\pi T_H}} \end{equation} so the wavefunction near the horizon is given by \begin{equation} \psi(z) = \hat{f}_{\alpha_{2+}}^{(t)} (z) = (z-1)^{\frac{1}{2}+a_2} \left(1+\mathcal{O}(z-1)\right) \end{equation} with $a_2 = -i \frac{\omega - m \Omega}{4 \pi T_H}$ and recall that the time-dependent part goes like $e^{-i\omega t}$. This boundary condition is independent of whether $\omega - m \Omega$ is positive or negative: an observer near the horizon always sees an ingoing flux into the horizon, but when $\omega - m \Omega<0$ it is outgoing according to an observer at infinity. This phenomenon is known as superradiance \cite{Iyer:1986np}. In any case, this gives the flux \begin{equation} \phi_{abs} = \mathrm{Im}a_2 \end{equation} ingoing at the horizon. Using our connection formula, we find that near infinity the wavefunction behaves as \begin{equation}\label{psi_nearinfty} \begin{aligned} \psi(z) & = \frac{M_{\alpha_{2+}, \alpha_-} f_{\alpha_-}^{(u)} (z)}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle} + \frac{ M_{\alpha_{2+}, \alpha_+}f_{\alpha_+}^{(u)} (z)}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle} = \\ & = M_{\alpha_{2+}, \alpha_-} \Lambda^{-\frac{1}{2} - a} \sum_{\pm} \mathcal{A}_{\alpha_-, m_{3 \pm}} e^{\pm \frac{\Lambda z}{2}} \left( \Lambda z \right)^{ \pm m_3} \frac{\langle \Delta_{\alpha-}, \Lambda_0, m_{0\pm} | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle} \left(1+\mathcal{O}(z^{-1})\right) + (\alpha \rightarrow -\alpha)\,. 
\end{aligned} \end{equation} At infinity, the ingoing part of the wave is easy to identify: recalling that $\Lambda = -2 i \omega (r_+ - r_-)$, it corresponds to the positive sign in the exponential. So the flux incoming from infinity is \begin{equation} \begin{aligned} \phi_{in} & = \mathrm{Im}\,\frac{\Lambda}{2} \left| M_{\alpha_{2+}, \alpha_-} \mathcal{A}_{\alpha_-, m_{3+}} \Lambda^{-\frac{1}{2} - a + m_3} \frac{\langle \Delta_{\alpha-}, \Lambda_0, m_{0+} | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle} + (\alpha \rightarrow -\alpha) \right|^2 = \\ & = -\frac{1}{2} \left|\frac{\Gamma (1 + 2a)\Gamma (2a) \Gamma (1 + 2a_2)\Lambda^{ -a + m_3}}{\Gamma\left(\frac{1}{2} + m_3 + a\right)\prod_\pm \Gamma\left(\frac{1}{2} \pm a_1 + a_2 +a\right)} \frac{\langle \Delta_{\alpha-}, \Lambda_0, m_{0+} | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle} + (a \rightarrow -a) \right|^2 \,. \end{aligned} \end{equation} The minus sign comes from the fact that we have canceled a factor of $\Lambda$ and $\mathrm{Im}\,\Lambda = - |\Lambda|$. Note that the flux at the horizon is also negative (for non-superradiant modes). So the full absorption coefficient/greybody factor, defined as the flux going into the horizon normalized by the flux coming in from infinity, is: \begin{equation}\label{exact_sigma} \sigma = \frac{\phi_{abs}}{\phi_{in}} = \frac{\displaystyle{-2\,\mathrm{Im}\,a_2}}{\displaystyle{\left|\frac{\Gamma (1 +2a)\Gamma (2a) \Gamma (1 + 2a_2)\Lambda^{ -a + m_3}}{\Gamma\left(\frac{1}{2} + m_3 + a\right)\prod_\pm \Gamma\left(\frac{1}{2} \pm a_1 + a_2 +a\right)} \frac{\langle \Delta_{\alpha-}, \Lambda_0, m_{0+} | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_{\alpha}, \Lambda_0, m_{0} | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle} + (a \rightarrow -a) \right|^2}}\,.
\end{equation} This is the exact result, given as a power series in $\Lambda$. The correlators have to be understood as computed in the NS limit with $\epsilon_1=1$. The ratio of correlators can be written in terms of the NS free energy (see Appendix \ref{appendix_semiclassical}), and substituting the dictionary (\ref{radial_dictionary}) we get \begin{equation}\label{exact_sigma_gravity} \boxed{\begin{aligned} & \sigma = \frac{\phi_{abs}}{\phi_{in}} = \frac{\omega-m\Omega}{2\pi T_H}\times \\ \times & \left|\frac{\Gamma (1+2a)\Gamma (2a) \Gamma (1-i\frac{\omega-m\Omega}{2\pi T_H}) (-2i\omega(r_+-r_-))^{-a -2iM\omega}e^{-i\omega(r_+-r_-)}\exp{ \left( \frac{\partial\mathcal{F}^{\mathrm{inst}}}{\partial a_1}\right)}|_{a_1=a,a_2=-a} }{\Gamma\left(\frac{1}{2} -2iM\omega + a\right)\Gamma\left(\frac{1}{2}-i\frac{\omega-m\Omega}{2\pi T_H} +2iM\omega +a\right)\Gamma\left(\frac{1}{2}-2iM\omega +a\right)} + (a \rightarrow -a) \right|^{-2}\,. \end{aligned}} \end{equation} Here $\mathcal{F}^{\mathrm{inst}}(\Lambda,a_1,a_2,m_1,m_2,m_3)$ is the instanton part of the NS free energy as defined in Appendix \ref{AppendixNekrasov} computed for general $\Vec{a}=(a_1,a_2)$ and after taking the derivative one substitutes the values $\Vec{a}=(a,-a)$ appropriate for $SU(2)$. The same holds for the second summand but one substitutes $\Vec{a}=(-a,a)$ in the end. To write this result fully in terms of the parameters of the black hole problem using the dictionary (\ref{radial_dictionary}), one has to invert the relation $E=a^2-\Lambda \partial_{\Lambda} \mathcal{F}^{\mathrm{inst}}$ to obtain $a(E)$, which can be done order by order in $\Lambda$. In the literature, the absorption coefficient for Kerr black holes has been calculated using various approximations. As a consistency check, we show that our result reproduces the known results in the appropriate regimes. 
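One Gamma-function identity used in the low-frequency comparison is $\frac{\Gamma(\ell+1)}{\Gamma(2\ell+2)} = \frac{\sqrt{\pi}}{2^{2\ell+1}\Gamma(\ell+\frac{3}{2})}$, a special case of the Legendre duplication formula; a quick numerical verification (helper name ours):

```python
import math

def duplication_gap(ell):
    """Relative difference between Gamma(l+1)/Gamma(2l+2) and
    sqrt(pi)/(2^(2l+1) Gamma(l+3/2)), a special case of the
    Legendre duplication formula (should vanish to machine precision)."""
    lhs = math.gamma(ell + 1) / math.gamma(2*ell + 2)
    rhs = math.sqrt(math.pi) / (2**(2*ell + 1) * math.gamma(ell + 1.5))
    return abs(lhs - rhs) / rhs

# Holds for integer and non-integer ell alike
for ell in (0, 1, 2, 2.5, 7):
    assert duplication_gap(ell) < 1e-12
```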
\subsubsection{Comparison with asymptotic matching} In \cite{Maldacena_1997}, the absorption coefficient is calculated via an asymptotic matching procedure. They work in a regime in which $\text{a}\omega \ll 1$ such that the angular eigenvalue $\lambda \approx \ell(\ell+1)$, and solve the Teukolsky equation for $s=0$ asymptotically in the regions near and far from the outer horizon. Then one also takes $M\omega \ll 1$ such that there exists an overlap between the far and near regions and one can match the asymptotic solutions. For us, these limits imply that $|\Lambda| = 4 \omega \sqrt{M^2-\text{a}^2} \ll 1$ as well, so we expand our exact transmission coefficient to lowest order in $\text{a}\omega$, $M\omega$ and $\Lambda$. Since from the dictionary (\ref{radial_dictionary}) $E = a^2 + \mathcal{O}(\Lambda) = \frac{1}{4}+\ell(\ell+1)+\mathcal{O}(\text{a}\omega,M\omega)$, in this limit we have $a = \ell+\frac{1}{2}$. Then the second term in the denominator of (\ref{exact_sigma}), which contains $\Lambda^a$, vanishes for $\Lambda \rightarrow 0$, while the first one survives and passes to the numerator. The instanton part of the NS free energy also vanishes, $\mathcal{F}^{\mathrm{inst}}(\Lambda\to0)=0$. Equation (\ref{exact_sigma_gravity}) then becomes \begin{equation} \begin{aligned} & \sigma \approx \frac{\omega-m\Omega}{2\pi T_H}(2\omega(r_+-r_-))^{2\ell+1} \left|\frac{\Gamma\left(\ell+1\right)\Gamma\left(\ell+1-i\frac{\omega-m\Omega}{2\pi T_H}\right)\Gamma\left(\ell+1\right)}{\Gamma (2\ell+2)\Gamma (2\ell+1) \Gamma (1-i\frac{\omega-m\Omega}{2\pi T_H}) }\right|^{2}\,. \end{aligned} \end{equation} Using the relation $\frac{\Gamma(\ell+1)}{\Gamma( 2 \ell+2)} = \frac{\sqrt{\pi} }{2^{2 \ell+1}\Gamma(\ell+\frac{3}{2})}$ (and sending $i\to-i$ inside the modulus squared) we reduce precisely to the result of \cite{Maldacena_1997} (eq.
2.29): \begin{equation} \boxed{\sigma \approx \frac{\omega - m \Omega}{2 T_H} \frac{ (r_+ - r_-)^{2\ell+1}\omega^{2\ell+1} }{2^{2\ell+1}}\left| \frac{\Gamma(\ell+1)\Gamma\left(\ell+1+i \frac{\omega - m \Omega}{2 \pi T_H}\right)}{\Gamma\left(\ell+\frac{3}{2}\right)\Gamma(2\ell+1) \Gamma \left(1 + i \frac{\omega - m \Omega}{2 \pi T_H}\right)} \right|^2} \end{equation} which is valid for $M\omega,\text{a}\omega \ll 1$. \subsubsection{Comparison with semiclassics} We now show that the exact absorption coefficient reduces to the semiclassical result obtained via a standard WKB analysis of the equation \begin{equation} \epsilon_1^2 \partial_z^2\psi(z)+V(z)\psi(z) = 0 \,, \end{equation} where we have reintroduced the small parameter $\epsilon_1$, which plays the role of the Planck constant to keep track of the orders in the expansion. For the Teukolsky equation (which has $\epsilon_1=1$) the semiclassical regime is the regime in which $\ell \gg 1$. Following \cite{dumlu2020stokes}, we also take $M\omega \ll 1$ and $s=0$ such that there are two zeroes of the potential between the outer horizon and infinity for real values of $z$, which we denote by $z_1$ and $z_2$ with $z_2 > z_1$; between them there is a potential barrier for the particle ($V(z)$ becomes negative; notice the ``wrong sign'' in front of the second derivative). Without these extra conditions, the potential generically becomes complex, or does not form a barrier. The main difference with the regime used for the asymptotic matching procedure in the previous section is that there we worked to leading order in $M\omega, \text{a}\omega$. Now we still assume them to be small but keep all orders, while working to first subleading order in $\epsilon_1$. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{semiclassical_potentials.png} \caption{{\small Forms of the potential $-V(z)$ for $M=1,\,\text{a}=0.5,\, \lambda=10,\,m=0,\,s=0$, and $\omega=0.01$ (left) and $\omega=1$ (right).
We see that for $M\omega$ not small enough, the potential does not form a barrier.}} \end{figure} The standard WKB solutions are \begin{equation} \psi(z) \propto V(z)^{-\frac{1}{4}}\exp \left( \pm \frac{i}{\epsilon_1}\int_{z_*}^z \sqrt{V(z')}dz' \right) \end{equation} where $z_*$ is some arbitrary reference point, usually taken to be a turning point of the potential, here corresponding to a zero. The absorption coefficient is given by the transmission coefficient from infinity to the horizon and captures the tunneling amplitude through this potential barrier. It is simply given by \begin{equation} \sigma \approx \exp \left( \frac{2i}{\epsilon_1} \int_{z_1}^{z_2} \sqrt{V(z')}dz' \right) =\exp \left(-\frac{2}{\epsilon_1} \int_{z_1}^{z_2} \sqrt{|V(z')|}dz'\right) . \end{equation} On the other hand it is known that in the semiclassical limit the potential of the BPZ equation reduces to the Seiberg-Witten differential of the AGT dual gauge theory \cite{Alday_2010}, which for us is $SU(2)$ gauge theory with $N_f=3$: $V(z) \rightarrow - \phi^2_{SW}(z)$. The integral between the two zeroes then corresponds to half a B-cycle, so we identify \begin{equation} \boxed{\sigma \approx \exp \left( -\frac{2}{\epsilon_1} \int_{z_1}^{z_2} \phi_{SW}(z')dz'\right) = \exp \left( -\frac{1}{\epsilon_1} \oint_B \phi_{SW}(z')dz' \right) =: \exp \left( -\frac{a_D}{\epsilon_1} \right) \,,} \end{equation} where we have chosen an orientation of the B-cycle. Our exact absorption coefficient reduces to this expression in the semiclassical limit $\epsilon_1 \rightarrow 0$. The detailed calculation is a bit lengthy and is deferred to Appendix \ref{appendix_semiclassical}. \subsection{Quantization of quasinormal modes} With the explicit expression of the connection matrix (\ref{eq:irregconnectionmatrix}) in our hands we can extract the quantization condition for the quasinormal modes. 
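At $\epsilon_1 = 1$ and zero instantons, the quantization condition worked out below involves a pure ratio of Gamma functions; transcribing it (the function name is ours, and the complex values of $a$ and the masses $m_i$ are illustrative), one can check the inversion property $\mathrm{ratio}(a)\,\mathrm{ratio}(-a) = 1$, which reflects the exchange of the $\alpha_\pm$ channels under $a \to -a$:

```python
from mpmath import gamma, mpc

def channel_ratio(a, masses, eps1=1):
    """Gamma-function part of the ratio M_{2+,-} A_- / (M_{2+,+} A_+) entering
    the quasinormal-mode quantization condition (zero-instanton truncation)."""
    pref = (gamma(2*a/eps1) * gamma(1 + 2*a/eps1)
            / (gamma(-2*a/eps1) * gamma(1 - 2*a/eps1)))
    for m in masses:
        pref *= gamma(0.5 + (m - a)/eps1) / gamma(0.5 + (m + a)/eps1)
    return pref

a = mpc(0.27, 0.63)                                        # illustrative value
masses = (mpc(0.1, -0.4), mpc(-0.3, 0.2), mpc(0.5, 0.7))   # illustrative m_i
# a -> -a inverts the ratio (exchange of the alpha_+ and alpha_- channels)
assert abs(channel_ratio(a, masses) * channel_ratio(-a, masses) - 1) < 1e-9
```

In the full condition this ratio multiplies $\Lambda^{-2a}$ and the exponentiated derivative of the instanton free energy, so the modes are determined by a transcendental equation in $\omega$.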
The correct boundary conditions for quasinormal modes are only an ingoing wave at the horizon and only an outgoing one at infinity (see e.g. \cite{Berti_2009}, eq. (80)), that is \begin{equation} \begin{aligned} &R_{\mathrm{QNM}}(r\to r_+) \sim (r-r_+)^{-i\frac{\omega-m\Omega}{4\pi T_H}-s}\\ &R_{\mathrm{QNM}}(r\to \infty) \sim r^{-1-2s+2iM\omega} e^{i\omega r}\,. \end{aligned} \end{equation} In terms of the function $\psi(z)$ satisfying the Teukolsky equation in Schr\"odinger form, these read: \begin{equation} \begin{aligned} &\psi_{\mathrm{QNM}} (z \to 1) \sim (z-1)^{\frac{1}{2}+a_2} \,, \\ &\psi_{\mathrm{QNM}} (z \to \infty) \sim e^{- \Lambda z /2} \left( \Lambda z \right)^{-m_3} \,. \end{aligned} \label{eq:QNMboundary} \end{equation} However, imposing the ingoing boundary condition at the horizon and using the connection formula, we get that near infinity \begin{equation} \begin{aligned} & \psi_{\mathrm{QNM}} (z \to \infty) \sim \\ \sim & \bigg( \Lambda^{a} M_{\alpha_{2+}, \alpha_+} \mathcal{A}_{\alpha_+ m_{0 -}} \frac{\langle \Delta_{\alpha+}, \Lambda_0, m_{0-} | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_{\alpha}, \Lambda_0, m_{0} | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle}+ \Lambda^{- a} M_{\alpha_{2+}, \alpha_-} \mathcal{A}_{\alpha_- m_{0 -}} \frac{\langle \Delta_{\alpha-}, \Lambda_0, m_{0-} | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_{\alpha}, \Lambda_0, m_{0} | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle} \bigg)\times\\&\times e^{- \Lambda z /2} \left( \Lambda z \right)^{-m_3} + \\ +& \bigg( \Lambda^{a} M_{\alpha_{2+}, \alpha_+} \mathcal{A}_{\alpha_+ m_{0 +}} \frac{\langle \Delta_{\alpha+}, \Lambda_0, m_{0+} | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_{\alpha}, \Lambda_0, m_{0} | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle} + \Lambda^{- a} M_{\alpha_{2+}, \alpha_-} \mathcal{A}_{\alpha_- m_{0 +}} \frac{\langle \Delta_{\alpha-}, \Lambda_0, m_{0+} | V_{\alpha_{2}} (1) | \Delta_{\alpha_1}
\rangle}{\langle \Delta_{\alpha}, \Lambda_0, m_{0} | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle} \bigg)\times\\&\times e^{\Lambda z /2} \left( \Lambda z \right)^{m_3} \end{aligned} \end{equation} which contains both an ingoing and an outgoing wave at infinity. In order to satisfy the correct boundary condition (\ref{eq:QNMboundary}), we need to impose that the coefficient of the ingoing wave vanishes: \begin{equation} \begin{aligned} &\Lambda^{a} M_{\alpha_{2+}, \alpha_+} \mathcal{A}_{\alpha_+ m_{0 +}} \frac{\langle \Delta_{\alpha+}, \Lambda_0, m_{0+} | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_{\alpha}, \Lambda_0, m_{0} | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle} + \Lambda^{- a} M_{\alpha_{2+}, \alpha_-} \mathcal{A}_{\alpha_- m_{0 +}} \frac{\langle \Delta_{\alpha-}, \Lambda_0, m_{0+} | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_{\alpha}, \Lambda_0, m_{0} | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle}=0 \\ &\Longrightarrow 1 + \Lambda^{-2a} \frac{M_{\alpha_{2+}, \alpha_-} \mathcal{A}_{\alpha_- m_{0 +}} \langle \Delta_{\alpha-}, \Lambda_0, m_{0+} | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle}{M_{\alpha_{2+}, \alpha_+} \mathcal{A}_{\alpha_+ m_{0 +}} \langle \Delta_{\alpha+}, \Lambda_0, m_{0+} | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle} = 0 \,.
\end{aligned} \label{eq:quantizcft} \end{equation} Identifying in the NS limit \begin{equation} \begin{aligned} & \frac{\langle \Delta_{\alpha-}, \Lambda_0, m_{0+} | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_{\alpha+}, \Lambda_0, m_{0+} | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle} = \frac{\mathcal{Z} (\Lambda, a+\frac{\epsilon_2}{2}, m_1, m_2, m_3+\frac{\epsilon_2}{2})}{\mathcal{Z} (\Lambda, a-\frac{\epsilon_2}{2}, m_1, m_2, m_3+\frac{\epsilon_2}{2})} =\\ = & \exp \frac{1}{\epsilon_1 \epsilon_2}\left(\mathcal{F}^{\mathrm{inst}} (\Lambda, a + \frac{\epsilon_2}{2}, m_1, m_2, m_3+\frac{\epsilon_2}{2}) - \mathcal{F}^{\mathrm{inst}} (\Lambda, a - \frac{\epsilon_2}{2}, m_1, m_2, m_3+\frac{\epsilon_2}{2}) \right) \to \\ \to &\exp \frac{\partial_a \mathcal{F}^{\mathrm{inst}}(\Lambda, a, m_1, m_2, m_3)}{\epsilon_1} \,. \label{eq:prepinstqnms} \end{aligned} \end{equation} Moreover, \begin{equation} \begin{aligned} \frac{M_{\alpha_{2+}, \alpha_-} \mathcal{A}_{\alpha_- m_{0 +}}}{M_{\alpha_{2+}, \alpha_+} \mathcal{A}_{\alpha_+ m_{0 +}}} &= \frac{\Gamma \left( \frac{2a}{\epsilon_1} \right) \Gamma \left(1 + \frac{2a}{\epsilon_1} \right)}{\Gamma \left(- \frac{2a}{\epsilon_1} \right) \Gamma \left(1 - \frac{2a}{\epsilon_1} \right)} \frac{\Gamma \left( \frac{1}{2} + \frac{a_2 + a_1 - a}{\epsilon_1} \right)\Gamma \left( \frac{1}{2} + \frac{a_2 - a_1 - a}{\epsilon_1} \right)}{\Gamma \left( \frac{1}{2} + \frac{a_2 + a_1 + a}{\epsilon_1} \right)\Gamma \left( \frac{1}{2} + \frac{a_2 - a_1 + a}{\epsilon_1} \right)} \frac{\Gamma \left( \frac{1}{2} + \frac{m_3 - a}{\epsilon_1} \right)}{\Gamma \left( \frac{1}{2} + \frac{m_3 + a}{\epsilon_1} \right)} = \\ &= \frac{\Gamma \left( \frac{2a}{\epsilon_1} \right) \Gamma \left(1 + \frac{2a}{\epsilon_1} \right)}{\Gamma \left(- \frac{2a}{\epsilon_1} \right) \Gamma \left(1 - \frac{2a}{\epsilon_1} \right)} \prod_{i=1}^3 \frac{\Gamma \left( \frac{1}{2} + \frac{m_i - a}{\epsilon_1} \right)}{\Gamma \left( \frac{1}{2} + \frac{m_i + 
a}{\epsilon_1} \right)} = e^{-i \pi} \left( \frac{\Gamma \left(1 + \frac{2a}{\epsilon_1} \right) }{\Gamma \left(1 - \frac{2a}{\epsilon_1} \right)} \right)^2 \prod_{i=1}^3 \frac{\Gamma \left( \frac{1}{2} + \frac{m_i - a}{\epsilon_1} \right)}{\Gamma \left( \frac{1}{2} + \frac{m_i + a}{\epsilon_1} \right)} = \\ &= \exp \left[- i \pi + 2 \log \frac{\Gamma \left(1 + \frac{2a}{\epsilon_1} \right) }{\Gamma \left(1 - \frac{2a}{\epsilon_1} \right)} + \sum_{i=1}^3 \log \frac{\Gamma \left( \frac{1}{2} + \frac{m_i - a}{\epsilon_1} \right)}{\Gamma \left( \frac{1}{2} + \frac{m_i + a}{\epsilon_1} \right)} \right]\,. \end{aligned} \label{eq:ratio1coefficients} \end{equation} Including also the $\Lambda$ factor (restoring the factor of $\epsilon_1$), we identify the exponent with (see Appendix \ref{AppendixNekrasov}) \begin{equation} \frac{1}{\epsilon_1} \left[ -i \pi \epsilon_1 - 2 a \log \frac{\Lambda}{\epsilon_1} + 2 \epsilon_1 \log \frac{\Gamma \left(1 + \frac{2a}{\epsilon_1} \right) }{\Gamma \left(1 - \frac{2a}{\epsilon_1} \right)} + \epsilon_1 \sum_{i=1}^3 \log \frac{\Gamma \left( \frac{1}{2} + \frac{m_i - a}{\epsilon_1} \right)}{\Gamma \left( \frac{1}{2} + \frac{m_i + a}{\epsilon_1} \right)}\right] = -i \pi + \frac{1}{\epsilon_1}\partial_a \mathcal{F}^{\mathrm{1-loop}} \,. \label{eq:tobematchedGrassi} \end{equation} The instanton and one loop part combine to give the full NS free energy, and hence (\ref{eq:quantizcft}) can be conveniently rewritten for $\epsilon_1 = 1$ (as required by the dictionary), as \begin{equation} 1 - e^{\partial_a \mathcal{F}} = 0 \Rightarrow \partial_a \mathcal{F} = 2 \pi i n \,, n \in \mathbb{Z}\,. \end{equation} To solve for the quasinormal mode frequencies, we need to invert the relation $E=a^2-\Lambda \partial_\Lambda \mathcal{F}^{\mathrm{inst}}$ to obtain $a(E)$. 
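The order-by-order inversion of $E = a^2 - \Lambda\partial_\Lambda\mathcal{F}^{\mathrm{inst}}$ can be illustrated with a short symbolic computation. The instanton coefficients below are toy placeholders (the actual Nekrasov coefficients are considerably more involved); only the procedure is meant to carry over:

```python
# Sketch of inverting E = a^2 - Lambda * dF_inst/dLambda order by order in Lambda.
# f1 and f2 are TOY stand-ins for the instanton coefficients (assumption for
# illustration only); the order-by-order procedure is the point.
import sympy as sp

a, E, L = sp.symbols('a E Lambda', positive=True)
f1, f2 = a**2, a                        # toy instanton coefficients

F_inst = f1*L + f2*L**2
E_of_a = a**2 - L*sp.diff(F_inst, L)    # E = a^2 - Lambda * dF_inst/dLambda

# Ansatz a(E) = sqrt(E) + c1*Lambda + c2*Lambda^2, fixed order by order
c1, c2 = sp.symbols('c1 c2')
ansatz = sp.sqrt(E) + c1*L + c2*L**2
residual = sp.expand(E_of_a.subs(a, ansatz) - E)

sol = {}
for order, c in ((1, c1), (2, c2)):
    sol[c] = sp.solve(residual.coeff(L, order).subs(sol), c)[0]
a_of_E = ansatz.subs(sol)               # a(E) to second order in Lambda
```

Solving the $\mathcal{O}(\Lambda)$ coefficient first and feeding the result into the $\mathcal{O}(\Lambda^2)$ one is exactly the order-by-order structure used in the text.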
Then the quantization condition for the quasinormal mode frequencies that we have derived reads \begin{equation} \boxed{\partial_a \mathcal{F}\left(-2i\omega(r_+-r_-),a(E),-i \frac{\omega - m \Omega}{2 \pi T_H} + 2i M \omega,-2iM\omega-s,-2iM\omega+s,1\right)=2\pi i n\,, n \in \mathbb{Z}\,,} \end{equation} with $E=\frac{1}{4} + \lambda + s(s+1) + \text{a}^2 \omega^2 - 8M^2 \omega^2 - \left(2M\omega^2 + i s \omega \right) (r_+-r_-)$. This gives an equation that is solved for a discrete set of $\omega_n$, in agreement with \cite{aminov2020black}\footnote{In order to match with \cite{aminov2020black}, it is important to notice that they use the variable $-ia$ instead of $a$, have a different $U(1)$ factor as previously noticed, and a sign difference in the definition of the free energy $\mathcal{F}$. Moreover, their $\partial_a \mathcal{F}$ is shifted by a factor of $- i \pi$ with respect to ours.}. \subsection{Angular quantization}\label{angular_quantization} Yet another application of the connection formulae is the computation of the angular eigenvalue $\lambda$. To this end, we impose that the angular eigenfunctions be regular at $z = 0, 1$. According to the angular dictionary (\ref{eq:angulardictionaryy}), \begin{equation} \frac{1 \pm 2 a_1}{2} = \frac{1}{2} \mp \frac{m-s}{2} \,, \, \frac{1 \pm 2 a_2}{2} = \frac{1}{2} \mp \frac{m+s}{2} \,, \end{equation} therefore, according to (\ref{eq:angchangeofvar}) the behavior of $S_\lambda$ as $z \to 0$ is given by \begin{equation} S_\lambda (z \to 0) \propto z^{\mp \frac{m-s}{2}} \,. \end{equation} Since $\lambda_{s,m} = \lambda^*_{s,-m}$, $\lambda_{-s,m} = \lambda_{s,m} + 2 s$ \cite{Berti:2005gp}, we can restrict without loss of generality to the case $s, m>0$. Assuming $m>s$, regularity of $S_\lambda$ as $z \to 0$ requires the boundary condition \begin{equation} y_{m>s}(z \to 0) = \hat{f}_{\alpha_{1-}}^{(s)} (z) \simeq z^{\frac{1}{2} + \frac{m-s}{2}}\,. 
\end{equation} Therefore near $z \to 1$, \begin{equation} \begin{aligned} &y_{m>s}(z \to 1) = \\ &= N_{a_{1-}, a_{2-}} \frac{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2-}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2}} (1) | \Delta_{\alpha_{1-}} \rangle} \hat{f}_{\alpha_{2-}}^{(t)} (z) + N_{a_{1-}, a_{2+}} \frac{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2}} (1) | \Delta_{\alpha_{1-}} \rangle} \hat{f}_{\alpha_{2+}}^{(t)} (z) \simeq \\ &\simeq \frac{\Gamma (-m-s) \Gamma (1+m-s)}{\Gamma (\frac{1}{2} -a -s) \Gamma (\frac{1}{2} +a -s)} \frac{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2-}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2}} (1) | \Delta_{\alpha_{1-}} \rangle} (1-z)^{\frac{1}{2} + \frac{m+s}{2}} + \\ &+ \frac{\Gamma(m+s) \Gamma(1+m-s)}{\Gamma(\frac{1}{2} - a + m) \Gamma (\frac{1}{2} + a + m)} \frac{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2+}} (1) | \Delta_{\alpha_1} \rangle}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2}} (1) | \Delta_{\alpha_{1-}} \rangle} (1-z)^{\frac{1}{2} - \frac{m+s}{2}} \,. \end{aligned} \end{equation} For generic values of $a$, the coefficient of the first term in the previous equation is divergent as it stands. Moreover, the second term blows up for $z\to1$ and is expected to have some singular coefficients in the higher powers of $1-z$ (see Appendix \ref{appendix:angularq}). Both divergences are cured if we take \begin{equation} a = \ell + \frac{1}{2} \end{equation} for some integer $\ell \ge m$. 
On the other hand for $m<s$ the regular solution at zero is \begin{equation} y_{m<s}(z \to 0) \simeq \hat{f}_{\alpha_{1+}}^{(s)} (z) = z^{\frac{1}{2} + \frac{s-m}{2}}\,, \end{equation} therefore near 1 \begin{equation} \begin{aligned} &y_{m<s}(z \to 1) = \\ &= N_{a_{1+}, a_{2-}} \frac{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2-}} (1) | \Delta_{\alpha_{1}} \rangle}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2}} (1) | \Delta_{\alpha_{1+}} \rangle} \hat{f}_{\alpha_{2-}}^{(t)} (z) + N_{a_{1+}, a_{2+}} \frac{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2+}} (1) | \Delta_{\alpha_{1}} \rangle}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2}} (1) | \Delta_{\alpha_{1+}} \rangle} \hat{f}_{\alpha_{2+}}^{(t)} (z) \simeq \\ &\simeq \frac{\Gamma (-m-s) \Gamma (1+s-m)}{\Gamma (\frac{1}{2} -a -m) \Gamma (\frac{1}{2} +a -m)} \frac{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2-}} (1) | \Delta_{\alpha_{1}} \rangle}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2}} (1) | \Delta_{\alpha_{1+}} \rangle} (1-z)^{\frac{1}{2} + \frac{m+s}{2}} + \\ &+ \frac{\Gamma(m+s) \Gamma(1+s-m)}{\Gamma(\frac{1}{2} - a + s) \Gamma (\frac{1}{2} + a + s)} \frac{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2+}} (1) | \Delta_{\alpha_{1}} \rangle}{\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2}} (1) | \Delta_{\alpha_{1+}} \rangle} (1-z)^{\frac{1}{2} - \frac{m+s}{2}} \,. \end{aligned} \end{equation} Again, we have to impose the same quantization relation where now $s \,, m$ are exchanged: \begin{equation} a = \ell + \frac{1}{2} \end{equation} for some integer $\ell \ge s$ in order for the wave function to be regular at $z = 0, 1$. The case $m=s$ is trivial since the two channels at zero coincide. Therefore the quantization condition for the angular eigenvalue is \begin{equation} a(\Lambda,E,m_1,m_2,m_3) = \ell + \frac{1}{2} \,, \, \ell \ge \text{max} (m \,, s) \,. 
\end{equation} where, as always, $a$ is obtained by inverting the expression $E=a^2-\Lambda \partial_\Lambda \mathcal{F}^{\mathrm{inst}}$ order by order in $\Lambda$. Let us denote by \begin{equation} \lambda_0 = \lambda (\Lambda = 0) = \ell (\ell + 1) - s(s+1) \,. \end{equation} Then the above quantization condition for the angular eigenvalue $\lambda$ can be more conveniently written as \begin{equation} \lambda - \lambda_0 = 2cs-c^2-\Lambda\partial_\Lambda \mathcal{F}^{\mathrm{inst}}\left(\Lambda,\ell+\frac{1}{2},-m,-s,-s\right)\bigg|_{\Lambda=4c} \,, \end{equation} which is the result already obtained in \cite{aminov2020black}. \subsection{Love numbers} Applying an external gravitational field to a self-gravitating body generically causes it to deform, much in the same way as an external electric field polarizes a dielectric material. The response of the body to the external gravitational tidal field is captured by the so-called tidal response coefficients or Love numbers, named after A. E. H. Love, who first studied them in the context of the Earth's response to the tides \cite{Love_1909}. In general relativity, the tidal response coefficients are generally complex: the real part captures the conservative response of the body, whereas the imaginary part captures dissipative effects. There is some naming ambiguity: sometimes only the real, conservative part is called the Love number, and sometimes the name refers to the full complex response coefficient. For us the Love number will be the full complex response coefficient. For four-dimensional Kerr black holes, the conservative part (the real part) of the response coefficient to static external perturbations has been found to vanish \cite{Le_Tiec_2021, charalambous2021vanishing}. Moreover, Love numbers are measurable quantities that can be probed with gravitational wave observations \cite{Flanagan_2008, Cardoso_2017}. 
Using our conformal field theory approach to the Teukolsky equation we compute the static ($\omega =0$) Love number, which agrees with that of \cite{charalambous2021vanishing}. We then compute the non-static Love number as an expansion to first order in $\omega$, in a way that extends straightforwardly to arbitrary order in $\omega$. \subsubsection{Definition of Love number and the intermediate region} For the definition of Love numbers we follow \cite{charalambous2021vanishing} and \cite{Le_Tiec_2021}, to which we refer for a more complete introduction. In the case of a static external perturbation ($\omega=0$), one imposes the ingoing boundary condition on the radial part of the perturbing field at the horizon, which then behaves near infinity as \begin{equation} \begin{aligned}\label{staticR} R(r\to\infty) &= A r^{\ell-s}(1+\mathcal{O}(r^{-1})) + B r^{-\ell-s-1}(1+\mathcal{O}(r^{-1})) \\ &= A r^{\ell-s}\left[(1+\mathcal{O}(r^{-1})) + k_{\ell m}^{(s)} \left(\frac{r}{r_+-r_-}\right)^{-2\ell-1}(1+\mathcal{O}(r^{-1}))\right] \end{aligned} \end{equation} for some constants $A$ and $B$. The Love number $k_{\ell m}^{(s)}$ is then defined as the coefficient of $(r/(r_+-r_-))^{-2\ell-1}$. (This differs from the definition in \cite{charalambous2021vanishing}, where the coefficient of $(r/2M)^{-2\ell-1}$ is used instead.) In the non-static case, however, the definition of the Love number is less clear, since the behaviour of the radial function at infinity is now qualitatively different from (\ref{staticR}): it is oscillatory (cf. (\ref{psi_nearinfty})) due to the term $\propto \omega^2$ in the potential (\ref{radial_As}). We can, however, define an intermediate region in which the multipole expansion (\ref{staticR}) remains valid even in the non-static case, and there we can read off the Love numbers in the same way as in the static case. 
Recall the Teukolsky equation written as a Schr\"odinger equation: \begin{equation} \frac{d^2 \psi(z)}{dz^2} + V_{CFT}(z) \psi(z) = 0 \end{equation} with \begin{equation} V_{CFT}(z) = \frac{1}{z^2 (z-1)^2}\sum_{i=0}^4 A_i z^i\,. \end{equation} For $z \gg 1$ we can expand the denominator in powers of $1/z$. Doing so and substituting the coefficients $A_i$ we get \begin{equation} V_{CFT}(z) = - \frac{\Lambda^2}{4}-\frac{m_3 \Lambda}{z}+\frac{1/4-a^2+u}{z^2}+\frac{1/4-a^2-m_1 m_2 +u}{z^3}+\mathcal{O}\left(\frac{1}{z^4}\right)\,. \end{equation} In order to be in the regime where the multipole expansion is valid to leading order, we need the dominant term in the potential to be the one proportional to $1/z^2$. So we have the conditions \begin{equation}\label{intermediate_condition} \begin{aligned} & \left|\frac{1/4-a^2+u}{z^2}\right| \gg \left|\frac{1/4-a^2-m_1m_2+u}{z^3}\right| \\ & \left|\frac{1/4-a^2+u}{z^2}\right| \gg \left|- \frac{\Lambda^2}{4}-\frac{m_3 \Lambda}{z}\right|. \end{aligned} \end{equation} Writing these conditions in terms of the gravitational parameters and expanding in $M\omega$ we get \begin{equation} \begin{aligned} & z-1\gg \left|\frac{(-i \frac{\omega - m \Omega}{2 \pi T_H} + 2 \mathrm{i} M \omega)(-2 i M \omega - s)}{\ell(\ell+1)+\mathcal{O}(M\omega)}\right|=\frac{sm\Omega}{2\pi T_H \ell(\ell+1)}+\mathcal{O}(M\omega)\\ & \left| \omega^2 (r_+ - r_-)^2 z^2 + 2 i \omega (r_+ - r_-)(s-2 i M \omega)z \right| \ll \ell(\ell+1)+\mathcal{O}(M\omega) \end{aligned} \end{equation} and in particular for $s=0$ and $\ell>0$ we have approximately \begin{equation} \frac{\ell}{M\omega} \gg z \gg 1. \end{equation} In a sense we are taking $z$ to be big enough to be far from the horizon, but not so far as to reach the oscillatory region at infinity, as already mentioned in \cite{chia2020tidal}. In the static case this intermediate region where the multipole expansion is valid extends all the way to infinity. 
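The pattern of this $1/z$ expansion can be checked keeping the coefficients $A_i$ symbolic; matching the generic result against the displayed expansion then fixes the identifications, e.g. $A_4 = -\Lambda^2/4$:

```python
# Generic large-z expansion of V(z) = (sum_i A_i z^i) / (z^2 (z-1)^2):
# in terms of w = 1/z one finds A4 + (A3+2A4) w + (A2+2A3+3A4) w^2 + ...
import sympy as sp

w = sp.symbols('w')                 # w = 1/z
A = sp.symbols('A0:5')              # A_0 ... A_4 kept symbolic

# V(z) rewritten in w: sum_i A_i w^(4-i) / (1-w)^2
V_in_w = sum(A[i] * w**(4 - i) for i in range(5)) / (1 - w)**2

series = sp.series(V_in_w, w, 0, 3).removeO().expand()
coeffs = [series.coeff(w, k) for k in range(3)]   # coefficients of 1, 1/z, 1/z^2
```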
\subsubsection{The static case} Since $\Lambda \propto \omega$, for the static case our correlator reduces to a regular 4 pt function: $\langle \Delta|\Phi_{2,1}(z)V_2(1)|\Delta_1\rangle$, which is given in terms of hypergeometric functions. Imposing the ingoing boundary condition at the horizon and using the connection coefficients, we get the following behaviour of the wavefunction near infinity: \begin{equation} \begin{aligned} \psi(z) = & \frac{\Gamma (- 2 a) \Gamma (1 + 2a_2)}{\Gamma (\frac{1}{2} + a_1 + a_2 - a) \Gamma (\frac{1}{2} -a_1 + a_2 - a)} z^{\frac{1}{2}-a}\left(1+\mathcal{O}(z^{-1})\right) + \\ + & \frac{\Gamma (2 a) \Gamma (1 + 2a_2)}{\Gamma (\frac{1}{2} + a_1 + a_2 + a) \Gamma (\frac{1}{2} -a_1 + a_2 + a)} z^{\frac{1}{2}+a}\left(1+\mathcal{O}(z^{-1})\right)\,. \end{aligned} \end{equation} Now using that $z = \frac{r-r_-}{r_+ - r_-}$, $\psi(z) = \Delta(r)^\frac{s+1}{2} R(r)$ and the dictionary with $\omega = 0$ we find that the radial function behaves as \begin{equation} \begin{aligned} R(r) &= \frac{\Gamma (- 2\ell -1 ) \Gamma \left(1-s+\frac{2iam}{r_+-r_-}\right)}{\Gamma \left(-\ell+ \frac{2iam}{r_+-r_-} \right) \Gamma (-\ell -s)} \left(\frac{r}{r_+-r_-}\right)^{-\ell-s-1}\left(1+\mathcal{O}(r^{-1})\right)+\\ &+ \frac{\Gamma (2\ell+1) \Gamma \left(1-s+\frac{2iam}{r_+-r_-}\right)}{\Gamma \left(\ell+1+\frac{2iam}{r_+-r_-}\right) \Gamma (\ell+1-s)} \left(\frac{r}{r_+-r_-}\right)^{\ell-s}\left(1+\mathcal{O}(r^{-1})\right) \\ &\propto r^{\ell-s}\left( 1 + \frac{\Gamma (- 2\ell -1 ) \Gamma\left(\ell+1+\frac{2iam}{r_+-r_-}\right)\Gamma(\ell+1-s)}{\Gamma(2\ell+1) \Gamma \left(-\ell+ \frac{2iam}{r_+-r_-} \right) \Gamma (-\ell -s)} \left(\frac{r}{r_+-r_-}\right)^{-2\ell-1} \right) \end{aligned} \end{equation} from where we can read off the static Love number as the coefficient of $\left(r/(r_+-r_-)\right)^{-2\ell-1}$: \begin{equation} \begin{aligned} k_{\ell m}^{(s)} &= \frac{\Gamma (- 2\ell -1 ) 
\Gamma\left(\ell+1+\frac{2iam}{r_+-r_-}\right)\Gamma(\ell+1-s)}{\Gamma(2\ell+1) \Gamma \left(-\ell+ \frac{2iam}{r_+-r_-} \right) \Gamma (-\ell -s)} \\ & = (-1)^{s+1} \frac{iam}{r_+-r_-} \frac{(\ell+s)!(\ell-s)!}{(2\ell+1)!(2\ell)!}\prod_{n=1}^\ell \left( n^2+ \left(\frac{2am}{r_+-r_-}\right)^2 \right) \end{aligned} \end{equation} which is the same result as obtained in \cite{charalambous2021vanishing} up to the trivial redefinition of Love number mentioned before that produces a factor of $\left(\frac{r_+-r_-}{2M}\right)^{2\ell+1}$. \subsubsection{The non-static case} In the non-static case $\Lambda$ is non-zero, so we need to consider the irregular correlator $\langle \Delta,\Lambda_0,m_0|\Phi_{2,1}(z)V_2(1)|\Delta_1\rangle$. Going to the intermediate region, where the multipole expansion is valid and we can read off the Love number as the coefficient of the appropriate power term, amounts to expanding the irregular state as a series and doing the OPE with each term in the series separately. Note that to do this consistently we consider higher powers of $\Lambda z$ to be subleading, as implied by the condition (\ref{intermediate_condition}) and that this is different from the irregular OPE (\ref{eq:ierrgope}) where $\Lambda z\to\infty$ and we expand in powers of $1/\Lambda z$. This is how the difference between the far and intermediate regions manifests itself at the level of the CFT. At the level of conformal blocks, neglecting the DOZZ factors, we can work with the chiral correlator and write \begin{equation} \begin{aligned} \langle \Delta,\Lambda_0,m_0|\Phi_{2,1}(z)V_2(1)|\Delta_1\rangle = \langle \Delta|\left(1+\frac{m_0\Lambda_0}{2\Delta}L_1+\mathcal{O}(\Lambda_0^2)\right)\Phi_{2,1}(z)V_2(1)|\Delta_1\rangle \end{aligned} \end{equation} The first term gives just the regular 4-point function and therefore the correct power-law behaviour for $z\to\infty$. 
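As an aside, the Gamma-function simplification entering the static Love number $k_{\ell m}^{(s)}$ above can be cross-checked numerically. In the sketch below the Gamma poles at integer $\ell$ are resolved by evaluating at $\ell + \varepsilon$ with high precision; the values of $s$, $m$, $\ell$ and of $P = 2am/(r_+-r_-)$ are arbitrary examples:

```python
# Numerical check of the static Love number identity: the ratio of Gammas
# (poles resolved via l -> l + eps) against the closed product formula.
# P stands for 2*a*m/(r_+ - r_-); all parameter values are examples.
from mpmath import mp, mpf, gamma, factorial

mp.dps = 40
s, m, l = 2, 1, 3                 # example values with l >= s
P = mpf('0.7')                    # example value of 2*a*m/(r_+ - r_-)

eps = mpf(10)**-20
le = l + eps                      # small shift off the integer resolves the poles
lhs = (gamma(-2*le - 1) * gamma(le + 1 + 1j*P) * gamma(le + 1 - s)
       / (gamma(2*le + 1) * gamma(-le + 1j*P) * gamma(-le - s)))

prod = mpf(1)
for n in range(1, l + 1):
    prod *= n**2 + P**2
rhs = ((-1)**(s + 1) * 1j*P/2 * factorial(l + s) * factorial(l - s)
       / (factorial(2*l + 1) * factorial(2*l)) * prod)
```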
We compute the second term (neglecting factors of $\mathcal{O}(b^2)$ since we work in the NS limit) using the OPE for the degenerate field against the field at infinity up to the first descendant: \begin{equation} \langle \Delta|\Phi_{2,1}(z) = \langle \Delta_\pm|\left(1+\frac{-1/2\pm a}{2\Delta_\pm z}L_{1}+ \mathcal{O}(z^{-2})\right) \end{equation} where the $\pm$ signs depend on the fusion channel which is chosen. Then \begin{equation} \begin{aligned} & \langle \Delta|L_1\Phi_{2,1}(z)V_2(w)|\Delta_1\rangle|_{w=1} = (z^2\partial_z + 2 \Delta_{2,1}z + w^2\partial_w + 2 \Delta_2 w)\langle \Delta|\Phi_{2,1}(z)V_2(w)|\Delta_1\rangle|_{w=1} \\ & = z^{\frac{1}{2}\mp a}\left( -(\frac{1}{2}\pm a)z + \left( 1+ \frac{(-\frac{3}{2}\mp a)(-\frac{1}{2}\pm a)}{2\Delta_\pm }\right)( \Delta_\pm+\Delta_2-\Delta_1) + \mathcal{O}(z^{-1}) \right)\,. \end{aligned} \end{equation} Imposing the ingoing boundary condition at the horizon, the behaviour in the intermediate region (now taking into account the hypergeometric connection coefficients from $1$ to $\infty$) is: \begin{equation} \begin{aligned} \psi(z) & = \frac{\Gamma (- 2 a) \Gamma (1 + 2a_2)}{\Gamma (\frac{1}{2} + a_1 + a_2 - a) \Gamma (\frac{1}{2} -a_1 + a_2 - a)} z^{\frac{1}{2}-a}\times \\ & \times \left[1+\frac{m_0\Lambda_0}{2\Delta}\left( -(\frac{1}{2}+ a)z + \left( 1+ \frac{(-\frac{3}{2}- a)(-\frac{1}{2}+ a)}{2\Delta_+ }\right)( \Delta_+ +\Delta_2-\Delta_1) \right)+ \mathcal{O}(z^{-1},\Lambda_0^2 z^2)\right] + \\ & + \frac{\Gamma (2 a) \Gamma (1 + 2a_2)}{\Gamma (\frac{1}{2} + a_1 + a_2 + a) \Gamma (\frac{1}{2} -a_1 + a_2 + a)} z^{\frac{1}{2}+a}\times \\ & \times \left[1+\frac{m_0\Lambda_0}{2\Delta}\left( -(\frac{1}{2}-a)z + \left( 1+ \frac{(-\frac{3}{2}+ a)(-\frac{1}{2}- a)}{2\Delta_- }\right)( \Delta_- +\Delta_2-\Delta_1) \right)+ \mathcal{O}(z^{-1},\Lambda_0^2 z^2)\right] \,. 
\end{aligned} \end{equation} To read off the Love number as the coefficient in front of the corresponding power of $r$ we need to perform the change of variables, since powers of $z$ do not correspond directly to powers of $r$ and cancellations can and do occur: $z^p=\left(\frac{r-r_-}{r_+-r_-}\right)^p=\left(\frac{r}{r_+-r_-}\right)^p\left(1-p\frac{r_-}{r}+\mathcal{O}(r^{-2})\right)$ so we have for the radial wavefunction: \begin{equation} \begin{aligned} &R(r) = \, \frac{\Gamma (- 2 a) \Gamma (1 + 2a_2)}{\Gamma (\frac{1}{2} + a_1 + a_2 - a) \Gamma (\frac{1}{2} -a_1 + a_2 - a)} \left(\frac{r}{r_+-r_-}\right)^{-a-s-\frac{1}{2}}\left(1+(a-\frac{1}{2}) \frac{r_-}{r}+\mathcal{O}(r^{-2})\right)\times \\ & \times \left[1+\frac{m_0\Lambda_0}{2\Delta}\left( -(\frac{1}{2}+ a)\left(\frac{r-r_-}{r_+-r_-}\right) + \left( 1+ \frac{(-\frac{3}{2}- a)(-\frac{1}{2}+ a)}{2\Delta_+ }\right)( \Delta_+ +\Delta_2-\Delta_1) \right)+ \mathcal{O}(r^{-1},\Lambda_0^2 r^2)\right] + \\ & + \frac{\Gamma (2 a) \Gamma (1 + 2a_2)}{\Gamma (\frac{1}{2} + a_1 + a_2 + a) \Gamma (\frac{1}{2} -a_1 + a_2 + a)} \left(\frac{r}{r_+-r_-}\right)^{a-s-\frac{1}{2}}\left(1-(a+\frac{1}{2}) \frac{r_-}{r}+\mathcal{O}(r^{-2})\right)\times \\ & \times \left[1+\frac{m_0\Lambda_0}{2\Delta}\left( -(\frac{1}{2}-a)\left(\frac{r-r_-}{r_+-r_-}\right) + \left( 1+ \frac{(-\frac{3}{2}+ a)(-\frac{1}{2}- a)}{2\Delta_- }\right)( \Delta_- +\Delta_2-\Delta_1) \right)+ \mathcal{O}(r^{-1},\Lambda_0^2 r^2)\right]\\[10pt] \supset &\, \frac{\Gamma (- 2 a) \Gamma (1 + 2a_2)}{\Gamma (\frac{1}{2} + a_1 + a_2 - a) \Gamma (\frac{1}{2} -a_1 + a_2 - a)} \left(\frac{r}{r_+-r_-}\right)^{-a-s-\frac{1}{2}}\times \\ & \times \left[1+\frac{m_0\Lambda_0}{2\Delta}\left( -(\frac{1}{2}+ a)(a-\frac{3}{2})\frac{r_-}{r_+-r_-} + \left( 1+ \frac{(-\frac{3}{2}- a)(-\frac{1}{2}+ a)}{2\Delta_+ }\right)( \Delta_+ +\Delta_2-\Delta_1) \right)+ \mathcal{O}(\Lambda_0^2)\right] + \\ & + \frac{\Gamma (2 a) \Gamma (1 + 2a_2)}{\Gamma (\frac{1}{2} + a_1 + a_2 + a) 
\Gamma (\frac{1}{2} -a_1 + a_2 + a)} \left(\frac{r}{r_+-r_-}\right)^{a-s-\frac{1}{2}}\times \\ & \times \left[1+\frac{m_0\Lambda_0}{2\Delta}\left( (\frac{1}{2}-a)(a+\frac{3}{2})\frac{r_-}{r_+-r_-} + \left( 1+ \frac{(-\frac{3}{2}+ a)(-\frac{1}{2}- a)}{2\Delta_- }\right)( \Delta_- +\Delta_2-\Delta_1) \right)+ \mathcal{O}(\Lambda_0^2)\right] \\[10pt] \propto & \,\left(\frac{r}{r_+-r_-}\right)^{a-s-\frac{1}{2}}\bigg\{ 1 + \frac{\Gamma (- 2 a) \Gamma (\frac{1}{2} + a_1 + a_2 + a) \Gamma (\frac{1}{2} -a_1 + a_2 + a)}{\Gamma(2a) \Gamma (\frac{1}{2} + a_1 + a_2 - a) \Gamma (\frac{1}{2} -a_1 + a_2 - a)} \left(\frac{r}{r_+-r_-}\right)^{-2a} \times \\ & \times \bigg[1+\frac{m_0\Lambda_0}{2\Delta}\left( -(\frac{1}{2}+ a)(a-\frac{3}{2})\frac{r_-}{r_+-r_-} + \left( 1+ \frac{(-\frac{3}{2}- a)(-\frac{1}{2}+ a)}{2\Delta_+ }\right)( \Delta_+ +\Delta_2-\Delta_1) \right)+ \\ & -\frac{m_0\Lambda_0}{2\Delta}\left( (\frac{1}{2}-a)(a+\frac{3}{2})\frac{r_-}{r_+-r_-} + \left( 1+ \frac{(-\frac{3}{2}+ a)(-\frac{1}{2}- a)}{2\Delta_- }\right)( \Delta_- +\Delta_2-\Delta_1) \right)\bigg]+ \mathcal{O}(\Lambda_0^2)\bigg\}\\[10pt] = & \,\left(\frac{r}{r_+-r_-}\right)^{a-s-\frac{1}{2}}\bigg\{ 1 + \frac{\Gamma (- 2 a) \Gamma (\frac{1}{2} + a_1 + a_2 + a) \Gamma (\frac{1}{2} -a_1 + a_2 + a)}{\Gamma(2a) \Gamma (\frac{1}{2} + a_1 + a_2 - a) \Gamma (\frac{1}{2} -a_1 + a_2 - a)} \left(\frac{r}{r_+-r_-}\right)^{-2a} \times \\ & \times \left[ 1 - \frac{2a m_3 \Lambda}{\frac{1}{2}-2a^2} \left(\frac{r_-}{r_+-r_-} - \frac{\frac{1}{4}-a^2+a_1^2-a_2^2}{\frac{1}{2}-2a^2} \right) +\mathcal{O}(\Lambda^2) \right]\bigg\}\,. \end{aligned} \end{equation} In the first step we have kept only the powers $r^{a-s-\frac{1}{2}}$ and $r^{-a-s-\frac{1}{2}}$ relevant for reading off the Love number, in the second step we pulled out the factor of $r^{a-s-\frac{1}{2}}$ with its coefficient as in (\ref{staticR}) and re-expanded the denominator to first order in $\Lambda_0$, and in the last step we simplified the expression. 
We can read off the non-static Love number: \begin{equation} \begin{aligned} k_{a m}^{(s)}(\Lambda)= \tilde{k}_{a m}^{(s)} \left[ 1 - \frac{2a m_3 \Lambda}{\frac{1}{2}-2a^2} \left(\frac{r_-}{r_+-r_-} - \frac{\frac{1}{4}-a^2+a_1^2-a_2^2}{\frac{1}{2}-2a^2} \right) +\mathcal{O}(\Lambda^2) \right] \end{aligned} \end{equation} where we have defined \begin{equation} \tilde{k}_{a m}^{(s)} := \frac{\Gamma (- 2 a) \Gamma (\frac{1}{2} + a_1 + a_2 + a) \Gamma (\frac{1}{2} -a_1 + a_2 + a)}{\Gamma(2a) \Gamma (\frac{1}{2} + a_1 + a_2 - a) \Gamma (\frac{1}{2} -a_1 + a_2 - a)}\,. \end{equation} To compare with other results in the literature we now identify \begin{equation} a|_{\text{a}=0} = \nu + \frac{1}{2} \end{equation} where $\text{a}$ is the reduced angular momentum of the black hole and $\nu$ is the "renormalized angular momentum" introduced in \cite{Mano_1996}. In particular one can check that $a$ receives no corrections at $\mathcal{O}(M\omega)$ and that at $\mathcal{O}(M^2\omega^2)$ it agrees with the expression given in \cite{Mano_1996} if one sets $\text{a}=0$. For non-zero $\text{a}$, we seem to have a more general "renormalized angular momentum" adapted to a spinning black hole. In any case, we can write \begin{equation}\label{Chia} \tilde{k}_{a m}^{(s)} = \frac{\Gamma (- 2 a) \Gamma (\frac{1}{2} -i \frac{\omega - m \Omega}{2 \pi T_H} + 2 \mathrm{i} M \omega + a) \Gamma (\frac{1}{2} -2 i M \omega - s + a)}{\Gamma(2a) \Gamma (\frac{1}{2} -i \frac{\omega - m \Omega}{2 \pi T_H} + 2 \mathrm{i} M \omega - a) \Gamma (\frac{1}{2} -2 i M \omega - s - a)}\,. 
\end{equation} In terms of black hole data, the non-static Love number to first order in $\omega$ then reads \begin{equation}\label{Love_final} \boxed{\begin{aligned} k_{a m}^{(s)}(\omega)= \tilde{k}_{a m}^{(s)}\left[ 1 - \frac{2\ell+1}{\ell(\ell+1)}is\omega\left(\frac{3r_- - r_+}{2} - \frac{im\text{a}s}{2\ell(\ell+1)} \right) +\mathcal{O}(\omega^2) \right]\,, \end{aligned}} \end{equation} where $\tilde{k}_{a m}^{(s)}$ has to be understood as expanded to first order in $\omega$ in order to be consistent with the instanton expansion. We note that (\ref{Chia}) agrees with the result in \cite{chia2020tidal} (eq. 16) if one substitutes $a\to\ell+1/2$, uses some Gamma function identities as in the static case and neglects the $2iM\omega$ terms in the Gamma functions. However, that reference seems to be missing the instanton corrections present in (\ref{Love_final}). Although we have computed the non-static Love number by going to an intermediate region, this procedure seems somewhat unnatural from the point of view of the CFT and the differential equation itself. It should be possible to read off the Love number directly from the waves at infinity, as mentioned also in \cite{chia2020tidal}. \vspace{1cm} \textbf{Acknowledgements}: We would like to thank M. Bianchi, E. Franzin, A. Grassi, O. Lisovyy and J.F. Morales for fruitful discussions. This research is partially supported by the INFN Research Projects GAST and ST\&FI, by PRIN "Geometria delle varietà algebriche" and by PRIN "Non-perturbative Aspects Of Gauge Theories And Strings". \appendix \section{The radial and angular potentials}\label{coefficients_potential} Both the radial and angular parts of the Teukolsky equation can be written as a Schr\"odinger equation: \begin{equation} \frac{d^2 \psi(z)}{dz^2} + V(z)\psi(z) = 0 \end{equation} with potential \begin{equation} V(z) = \frac{1}{z^2 (z-1)^2}\sum_{i=0}^4 \hat{A}_i z^i\,. 
\end{equation} For the radial part, the coefficients are given by \begin{equation}\label{radial_As} \begin{aligned} &\hat{A}^r_0 = \frac{\text{a}^2(1-m^2) - M^2 + 4\text{a} m M \omega (M - \sqrt{ M^2 -\text{a}^2 }) + 4 M^2 \omega^2 (a^2 - 2 M^2) + 8 M^3 \sqrt{M^2-\text{a}^2} \omega^2}{4(\text{a}^2-M^2)} + \\&+ (i s) \frac{\text{a} m \sqrt{M^2-\text{a}^2} - 2 \text{a}^2 M \omega + 2 M^2 \omega (M - \sqrt{M^2 - \text{a}^2})}{2 (\text{a}^2 - M^2)} - \frac{s^2}{4} \,, \\ & \hat{A}^r_1 = \frac{4\text{a}^2 \lambda - 4 M^2 \lambda + (8\text{a} m M \omega+ 16\text{a}^2 M \omega^2 - 32 M^3 \omega^2) \sqrt{M^2-\text{a}^2} + 4\text{a}^4 \omega^2 - 36\text{a}^2 M^2 \omega^2 + 32 M^4 \omega^2 }{4(\text{a}^2-M^2)} + \\ &+ (is) \left( - i + \frac{(2 \text{a}^2 \omega- \text{a} m )\sqrt{M^2 - \text{a}^2} }{\text{a}^2 - M^2} \right) + s^2 \,, \\ & \hat{A}^r_2 = -\lambda - 5\text{a}^2 \omega^2 + 12 M^2 \omega^2 - 12 M \omega^2 \sqrt{M^2-\text{a}^2} + (is) (i - 6 \omega \sqrt{M^2-\text{a}^2}) - s^2 \,, \\ & \hat{A}^r_3 = 8\text{a}^2 \omega^2 - 8 M^2 \omega^2 + 8 M \omega^2 \sqrt{M^2-\text{a}^2} + (is) 4 \omega \sqrt{M^2 - \text{a}^2} \,, \\ & \hat{A}^r_4 = 4 (M^2-\text{a}^2) \omega^2 \,, \end{aligned} \end{equation} while for the angular part they are \begin{equation} \begin{aligned} &\hat{A}^\theta_0 = - \frac{1}{4} (-1+m-s) (1+m-s) \,, \\ &\hat{A}^\theta_1 = c^2 + s + 2 c s - m s + s^2 + \lambda \,, \\ &\hat{A}^\theta_2 = - s - (c+s)(5c+s) - \lambda \,, \\ &\hat{A}^\theta_3 = 4c (2c+s) \,, \\ &\hat{A}^\theta_4 = - 4 c^2 \,. 
\end{aligned} \label{eq:angkerrA's} \end{equation} \section{CFT calculations}\label{CFT_calculations} \subsection{The BPZ equation}\label{TheBPZequation} To calculate the BPZ equation for the correlator (\ref{correlator_unnormalized}) we first evaluate the correlator with an extra insertion of the energy-momentum tensor: \begin{equation} \begin{aligned} & \langle \Delta, \Lambda_0, m_0 |T(w) \Phi_{2,1}(z) V_2(y) | V_1 \rangle = \\ & = \sum_{n\geq0} \frac{1}{w^{n+2}} \langle \Delta, \Lambda_0, m_0 | [L_n,\Phi_{2,1}(z) V_2(y)] | V_1 \rangle + \bigg(\frac{\Delta_1}{w^2} + \frac{m_0 \Lambda_0}{w} + \Lambda_0^2\bigg) \langle \Delta, \Lambda_0, m_0 | \Phi_{2,1}(z) V_2(y) | V_1 \rangle = \\ & = \bigg( \frac{z}{w} \frac{1}{w-z} \partial_z + \frac{\Delta_{2,1}}{(w-z)^2} + \frac{y}{w} \frac{1}{w-y} \partial_y + \frac{\Delta_2}{(w-y)^2} + \frac{\Delta_1}{w^2} + \frac{m_0 \Lambda_0}{w} + \Lambda_0^2\bigg) \langle \Delta, \Lambda_0, m_0 | \Phi_{2,1}(z) V_2(y) | V_1 \rangle\,. \end{aligned} \end{equation} Now we can simply compute \begin{equation} \begin{aligned} & \langle \Delta, \Lambda_0, m_0 | L_{-2} \cdot \Phi_{2,1}(z) V_2(y) | V_1 \rangle = \oint_{C_z} \frac{dw}{w-z} \langle \Delta, \Lambda_0, m_0 | T(w) \Phi_{2,1}(z) V_2(y) | V_1 \rangle = \\ = & \bigg( - \frac{1}{z}\partial_z + \frac{y}{z} \frac{1}{z-y} \partial_y + \frac{\Delta_2}{(z-y)^2} + \frac{\Delta_1}{z^2} + \frac{m_0 \Lambda_0}{z} + \Lambda_0^2\bigg) \langle \Delta, \Lambda_0, m_0 | \Phi_{2,1}(z) V_2(y) | V_1 \rangle\,. \end{aligned} \end{equation} Using the Ward identity for $L_0$: \begin{equation} \big( z\partial_z + y\partial_y - \Lambda_0 \partial_{\Lambda_0} + \Delta_{2,1} + \Delta_2 + \Delta_1 - \Delta \big) \langle \Delta, \Lambda_0, m_0 | \Phi_{2,1}(z) V_2(y) | V_1 \rangle = 0 \end{equation} we can eliminate $\partial_y$. 
Then setting $y=1$ we obtain \begin{equation} \begin{aligned}\label{L-2action} & \langle \Delta, \Lambda_0, m_0 | L_{-2} \cdot \Phi_{2,1}(z) V_2(1) | V_1 \rangle = \\ = & \bigg( - \frac{1}{z}\partial_z - \frac{1}{z} \frac{1}{z-1} \big(z\partial_z - \Lambda_0 \partial_{\Lambda_0} + \Delta_{2,1} + \Delta_2 + \Delta_1 - \Delta \big) + \frac{\Delta_2}{(z-1)^2} + \frac{\Delta_1}{z^2} + \frac{m_0 \Lambda_0}{z} + \Lambda_0^2\bigg) \Psi(z) \end{aligned} \end{equation} which gives the BPZ equation \begin{equation} \begin{aligned} 0 =& \langle \Delta, \Lambda_0, m_0 | \big(b^{-2} \partial_z^2 + L_{-2}\cdot \big) \Phi_{2,1}(z) V_2(1) | V_1 \rangle = \\ = & \bigg(b^{-2} \partial_z^2 - \frac{1}{z}\partial_z - \frac{1}{z} \frac{1}{z-1} \big(z\partial_z - \Lambda_0 \partial_{\Lambda_0} + \Delta_{2,1} + \Delta_2 + \Delta_1 - \Delta \big) + \frac{\Delta_2}{(z-1)^2} + \frac{\Delta_1}{z^2} + \frac{ m_0 \Lambda_0}{z} + \Lambda_0^2\bigg) \Psi(z) \,. \end{aligned} \end{equation} \subsection{DOZZ factors}\label{DOZZfactors} We normalize vertex operators so that the DOZZ three-point function \cite{Dorn:1994xn, Zamolodchikov:1995aa} reads \begin{equation} C\left(\alpha_1,\alpha_2,\alpha_3\right) = \frac{1}{\Upsilon_b(\alpha_1+\alpha_2+\alpha_3 + \frac{Q}{2}) \Upsilon_b(\alpha_1+\alpha_2-\alpha_3 + \frac{Q}{2})\Upsilon_b(\alpha_2+\alpha_3-\alpha_1 + \frac{Q}{2})\Upsilon_b(\alpha_3+\alpha_1-\alpha_2 + \frac{Q}{2})} \,, \label{eq:DOZZ} \end{equation} where \begin{equation} \begin{aligned} &\Upsilon_b (x) = \frac{1}{\Gamma_b(x) \Gamma_b(Q-x)} \,, \\ &\Gamma_b(x) = \frac{\Gamma_2(x | b, b^{-1})}{\Gamma_2(\frac{Q}{2} | b, b^{-1})} \,, \end{aligned} \end{equation} and $\Gamma_2$ is the double gamma function. $\Upsilon_b$ satisfies the shift relation \begin{equation} \Upsilon_b (x + b) = \gamma (bx) b^{1- 2 b x} \Upsilon_b (x) \,. 
\label{eq:shiftU} \end{equation} Here $\gamma(x) = \Gamma (x) / \Gamma (1-x)$, which satisfies the relations \begin{equation} \begin{aligned} &\gamma (- x) \gamma (x) = -\frac{1}{x^2} \,, \\ &\gamma (x + 1) = -x^2 \gamma (x) \,, \\ &\gamma(x) = \frac{1}{\gamma(1-x)} \,. \end{aligned} \label{eq:sgammaprop} \end{equation} The two-point function normalization is given in terms of the DOZZ factors, that is \begin{equation} \langle \Delta_\alpha | \Delta_\alpha \rangle = G(\alpha) = C(\alpha, - \frac{Q}{2}, \alpha) = \frac{1}{\Upsilon_b(0) \Upsilon_b(0)\Upsilon_b(2 \alpha)\Upsilon_b(2 \alpha + Q)} \,. \end{equation} The regular OPE coefficient appearing in section \ref{section:ConnectionProblem} can be explicitly computed in terms of DOZZ factors, that is \begin{equation} \mathcal{C}_{\alpha_{2, 1}, \alpha_i}^{\alpha_{i \pm}} = G^{-1}(\alpha_{i \pm}) C(\alpha_{i \pm}, \frac{-b-Q}{2}, \alpha_i) = \gamma (- b^2) \gamma(\mp 2 b \alpha_i) b^{2 b (\pm 2 \alpha_i + Q)} \,. \end{equation} Another relevant ratio is \begin{equation} \frac{C (\alpha_1, \alpha_2, \alpha_{3+})}{C (\alpha_1, \alpha_2, \alpha_{3-})} = b^{- 8 b \alpha_3} \prod_{\pm, \pm} \gamma(\frac{1}{2} + b (\pm \alpha_1 \pm \alpha_2 + \alpha_3)) \,, \end{equation} which is readily computed from the shift relation (\ref{eq:shiftU}). With these relations at our disposal, we can evaluate ratios of the $K$s appearing in equations (\ref{eq:connectionconstraints}). 
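As a quick numerical sanity check, the sign-sensitive identities (\ref{eq:sgammaprop}) can be verified with a few lines of standard-library Python (the helper name \texttt{little\_gamma} is ours, purely illustrative):

```python
from math import gamma as Gamma

def little_gamma(x):
    # gamma(x) = Gamma(x) / Gamma(1 - x), the ratio entering the shift relation
    return Gamma(x) / Gamma(1 - x)

x = 0.3  # any non-integer value away from the poles works
# gamma(-x) gamma(x) = -1/x^2
assert abs(little_gamma(-x) * little_gamma(x) + 1 / x**2) < 1e-9
# gamma(x + 1) = -x^2 gamma(x)
assert abs(little_gamma(x + 1) + x**2 * little_gamma(x)) < 1e-9
# gamma(x) = 1 / gamma(1 - x)
assert abs(little_gamma(x) * little_gamma(1 - x) - 1) < 1e-9
```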
In particular, \begin{equation} \frac{K^{(t)}_{\alpha_{2-}, \alpha_{2-}}}{K^{(t)}_{\alpha_{2+}, \alpha_{2+}}} = \frac{G^{-1}(\alpha_{2-}) C(\alpha_{2-}, \frac{-b-Q}{2}, \alpha_2) C(\alpha, \alpha_{2-}, \alpha_1)}{G^{-1}(\alpha_{2+}) C(\alpha_{2+}, \frac{-b-Q}{2}, \alpha_2) C(\alpha, \alpha_{2+}, \alpha_1)} = \frac{\gamma(2 b \alpha_2)}{\gamma(-2b\alpha_2)} \prod_{\pm, \pm} \gamma(\frac{1}{2} + b (\pm \alpha \pm \alpha_1 - \alpha_2)) \,, \end{equation} and similarly \begin{equation} \frac{K^{(u)}_{\alpha_+, \alpha_+}}{K^{(u)}_{\alpha_-, \alpha_-}} = \frac{\gamma(-2 b \alpha)}{\gamma(2b\alpha)} \prod_{\pm, \pm} \gamma(\frac{1}{2} + b ( \alpha \pm \alpha_1 \pm \alpha_2)) \,. \end{equation} \subsection{Irregular OPE}\label{IrregularOPE} Following \cite{Gaiotto:2012sf}, let us make the following Ansatz for the OPE with the irregular state \begin{equation}\label{irrAnsatz} \langle \Delta_\alpha, \Lambda_0, \bar{\Lambda}_0, m_0 | \Phi_{2, 1} (z, \bar{z}) = \sum_{\beta} \tilde{\mathcal{C}}^{\beta}_{\alpha, \alpha_{2,1}} \displaystyle\left\lvert \sum_{\mu_0, k} \mathcal{A}_{\beta, \mu_0} z^{\zeta} \Lambda_0^{\lambda} e^{\gamma \Lambda_0 z} z^{-k} \langle \Delta_\beta, \Lambda_0, \mu_0; k | \right\rvert^2 \,, \end{equation} with all the parameters to be determined. Here $\langle \Delta_\beta, \Lambda_0, \mu_0; k |$ is the $k$-th irregular descendant, which has the form \begin{equation} | \Delta_\beta, \Lambda_0, \mu_0; k \rangle \sim L_{-J} \Lambda_0^{-k''} \partial_{\Lambda_0}^{k'} | \Delta_\beta, \Lambda_0, \mu_0 \rangle \end{equation} with $k' + k'' + | J | = k$. Note that in principle the parameters $\zeta, \lambda, \gamma$ depend both on $\beta$ and $\mu_0$. 
The first constraint comes from comparing with the regular OPE, namely \begin{equation} \langle \Delta_\alpha, \Lambda_0, \bar{\Lambda}_0, m_0 | \Phi_{2, 1} (z, \bar{z}) | \Delta_\beta \rangle \sim \langle \Delta_\alpha, \Lambda_0, \bar{\Lambda}_0, \mu_0 | \Delta_{\beta_\pm} \rangle = \langle \Delta_\alpha | \Delta_{\beta_\pm} \rangle \Rightarrow \beta_\pm = \alpha\,. \label{eq:IRRfirstconstraint} \end{equation} The other coefficients can be fixed by acting with the Virasoro generators on the left- and right-hand sides of the Ansatz (\ref{irrAnsatz}). Focusing on the chiral correlator and comparing powers of $\Lambda$ and $z$ we have \begin{equation} \begin{aligned} \langle \Delta_\alpha, \Lambda_0, m_0 | \Phi_{2, 1} (z) L_0 &= (\Delta_{\alpha} + \Lambda_0 \partial_{\Lambda_0} - \Delta_{2, 1} - z \partial_z) \langle \Delta_\alpha, \Lambda_0, m_0 | \Phi_{2, 1} (z) = \\ &= \sum_{k} z^{\zeta - k} \Lambda_0^\lambda e^{\gamma \Lambda_0 z} (\Delta_\alpha - \Delta_{2,1} - \zeta + k + \lambda + \Lambda_0 \partial_{\Lambda_0}) \langle \Delta_\beta, \Lambda_0, \mu_0; k | = \\ &= \sum_{k} z^{\zeta - k} \Lambda_0^\lambda e^{\gamma \Lambda_0 z} (\Delta_\beta + k + \Lambda_0 \partial_{\Lambda_0}) \langle \Delta_\beta, \Lambda_0, \mu_0; k | \,, \end{aligned} \end{equation} which gives the constraint \begin{equation} \lambda - \zeta = \Delta_\beta - \Delta_\alpha + \Delta_{2, 1} \,. \end{equation} Now let us consider the action of $L_{-1}$. 
We have \begin{equation} \begin{aligned} \langle \Delta_\alpha, \Lambda_0, m_0 | \Phi_{2, 1} (z) L_{-1} &= (m_0 \Lambda_0 - \partial_z) \langle \Delta_\alpha, \Lambda_0, m_0 | \Phi_{2, 1} (z) = \\ &= \sum_k z^{\zeta} \Lambda_0^\lambda e^{\gamma \Lambda_0 z} ((m_0 - \gamma )\Lambda_0z^{-k} + (k - \zeta)z^{-k-1} ) \langle \Delta_\beta, \Lambda_0, \mu_0; k | = \\ &= z^{\zeta} \Lambda_0^\lambda e^{\gamma \Lambda_0 z} \left( \langle \Delta_\beta, \Lambda_0, \mu_0| \mu_0 \Lambda_0 + z^{-1} \langle \Delta_\beta, \Lambda_0, \mu_0; 1| L_{-1} + \dots \right)\,. \end{aligned} \end{equation} Comparing powers, \begin{equation} \begin{aligned} &\mathcal{O} (z^\zeta) \Rightarrow m_0 - \gamma = \mu_0 \,, \\ &\mathcal{O} (z^{\zeta-1}) \Rightarrow \mu_0 \Lambda_0 \langle \Delta_\beta, \Lambda_0, \mu_0; 1| - \zeta \langle \Delta_\beta, \Lambda_0, \mu_0| = \langle \Delta_\beta, \Lambda_0, \mu_0; 1| L_{-1} \,. \end{aligned} \label{eq:IRRsecondconstraint} \end{equation} The first irregular descendant is of the form\footnote{The term $\sim \Lambda^{-1}$ cannot be determined at this order. Luckily, it doesn't play any role in the following discussion.} \begin{equation} \langle \Delta_\beta, \Lambda_0, \mu_0; 1| = A \langle \Delta_\beta, \Lambda_0, \mu_0 | L_{1} + B \partial_{\Lambda_0} \langle \Delta_\beta, \Lambda_0, \mu_0 | \,, \end{equation} therefore equation (\ref{eq:IRRsecondconstraint}) gives \begin{equation} \begin{aligned} \mu_0 \Lambda_0 \left( A \langle \Delta_\beta, \Lambda_0, \mu_0 | L_{1} + B \partial_{\Lambda_0} \langle \Delta_\beta, \Lambda_0, \mu_0 | \right) - \zeta \langle \Delta_\beta, \Lambda_0, \mu_0 | = A \langle \Delta_\beta, \Lambda_0, \mu_0 | L_{1} L_{-1}+ B \partial_{\Lambda_0} \langle \Delta_\beta, \Lambda_0, \mu_0 | L_{-1} \,. 
\end{aligned} \end{equation} The RHS gives \begin{equation} \begin{aligned} &A (2 \Delta_\beta + 2 \Lambda_0 \partial_{\Lambda_0}) \langle \Delta_\beta, \Lambda_0, \mu_0| + A \mu_0 \Lambda_0 \langle \Delta_\beta, \Lambda_0, \mu_0| L_1 + B \mu_0 \langle \Delta_\beta, \Lambda_0, \mu_0| + B \mu_0 \Lambda_0 \partial_{\Lambda_0} \langle \Delta_\beta, \Lambda_0, \mu_0| = \\ &= (2 A \Delta_\beta + B \mu_0) \langle \Delta_\beta, \Lambda_0, \mu_0| + (2 A \Lambda_0 + B \mu_0 \Lambda_0) \partial_{\Lambda_0} \langle \Delta_\beta, \Lambda_0, \mu_0| + A \mu_0 \Lambda_0 \langle \Delta_\beta, \Lambda_0, \mu_0| L_1 \,. \end{aligned} \end{equation} Comparing term by term, we obtain equations for $A$ and $B$: \begin{equation} \begin{aligned} &2 A \Delta_\beta + B \mu_0 = - \zeta \,, \\ &2 A \Lambda_0 + B \mu_0 \Lambda_0 = B \mu_0 \Lambda_0 \,, \\ &\Rightarrow A = 0 \,, \, B = - \frac{\zeta}{\mu_0} \,. \end{aligned} \end{equation} Another constraint comes from the action of $L_{-2}$. We have \begin{equation} \begin{aligned} \langle \Delta_\alpha, \Lambda_0, m_0 | \Phi_{2, 1} (z) L_{-2} &= (\Lambda_0^2 - z^{-1} \partial_z + \Delta_{2, 1} z^{-2}) \langle \Delta_\alpha, \Lambda_0, m_0 | \Phi_{2, 1} (z) = \\ &= \sum_k z^{\zeta} \Lambda_0^\lambda e^{\gamma \Lambda_0 z} (\Lambda_0^2 z^{-k} - \gamma \Lambda_0 z^{-k-1} + (k - \zeta + \Delta_{2,1})z^{-k-2} ) \langle \Delta_\beta, \Lambda_0, \mu_0; k | = \\ &= z^{\zeta} \Lambda_0^\lambda e^{\gamma \Lambda_0 z} \left( \langle \Delta_\beta, \Lambda_0, \mu_0| \Lambda_0^2 + z^{-1} \langle \Delta_\beta, \Lambda_0, \mu_0; 1| L_{-2} + \dots \right) = \\ &= z^{\zeta} \Lambda_0^\lambda e^{\gamma \Lambda_0 z} \left( \langle \Delta_\beta, \Lambda_0, \mu_0| \Lambda_0^2 - z^{-1} \frac{\zeta}{\mu_0} (2 \Lambda_0 + \Lambda_0^2 \partial_{\Lambda_0}) \langle \Delta_\beta, \Lambda_0, \mu_0| + \dots \right) \,. \end{aligned} \end{equation} The previous equation is trivially satisfied at order $\mathcal{O}(z^\zeta)$, and comparing at order $\mathcal{O}(z^{\zeta-1})$ gives 
\begin{equation} (- \Lambda_0^2 \frac{\zeta}{\mu_0} \partial_{\Lambda_0} - \gamma \Lambda_0 ) \langle \Delta_\beta, \Lambda_0, \mu_0| = -\frac{\zeta}{\mu_0} (2 \Lambda_0 + \Lambda_0^2 \partial_{\Lambda_0}) \langle \Delta_\beta, \Lambda_0, \mu_0| \,, \end{equation} which finally gives \begin{equation} \gamma = 2 \frac{\zeta}{\mu_0} \,. \end{equation} The last constraint we need is most easily obtained by looking at the null-state equation satisfied by the irregular three-point function (\ref{eq:IRRfirstconstraint}). We have \begin{equation} \langle \Delta_\alpha, \Lambda_0, m_0 | T(w) \Phi_{2, 1} (z) | \Delta_{\alpha_\pm} \rangle = \langle \Delta_\alpha, \Lambda_0, m_0 | \left( \frac{m_0 \Lambda_0}{w} + \Lambda_0^2 + \frac{\Delta_{\alpha_\pm}}{w^2} + \frac{\Delta_{2,1}}{(w-z)^2} + \frac{z/w}{w-z} \partial_z \right) \Phi_{2, 1} (z) | \Delta_{\alpha_\pm} \rangle \,, \end{equation} therefore \begin{equation} \left( b^{-2} \partial_z^2 - \frac{1}{z} \partial_z + \frac{\Delta_{\alpha \pm}}{z^2} + \frac{m_0 \Lambda_0}{z} + \Lambda_0^2 \right) \langle \Delta_\alpha, \Lambda_0, m_0 | \Phi_{2, 1} (z) | \Delta_{\alpha_\pm} \rangle = 0 \,. \end{equation} Substituting the irregular OPE and looking at the leading term as $z \to \infty$ gives \begin{equation} \left( \frac{\gamma \Lambda_0}{b} \right)^2 + \Lambda_0^2 = 0 \Rightarrow \gamma = \pm i b \,. \end{equation} Putting all the constraints together yields, for a fixed channel $\beta = \alpha_\theta$, $\theta = \pm$, \begin{equation} \begin{aligned} &\gamma = \pm i b \,, \\ &\zeta = \frac{1}{2} \left( b^2 \pm i b m_0 \right) = \frac{1}{2} \left( bQ - 1 \pm 2 m_3 \right) \,, \\ &\lambda - \zeta = - \frac{1}{2} bQ + \theta b \alpha_\theta \,, \\ &\mu_0 = m_{0 \pm} = m_0 \mp i b \,. 
\end{aligned} \end{equation} Finally, the irregular OPE reads \begin{equation} \begin{aligned} \langle \Delta_\alpha, \Lambda_0, \bar{\Lambda}_0, m_0 | \Phi_{2, 1} (z, \bar{z}) &= \tilde{\mathcal{C}}^{\alpha_+}_{\alpha, \alpha_{2,1}} \displaystyle\left\lvert \sum_{\pm, k} \mathcal{A}_{\alpha_+, m_{0 \pm}} \Lambda^{-\frac{1}{2} b Q + b \alpha_+} (\Lambda z)^{\frac{1}{2} (bQ - 1 \pm 2 m_3)} e^{\pm \Lambda z/2} z^{-k} \langle \Delta_{\alpha_+}, \Lambda_0, m_{0 \pm}; k | \right\rvert^2 + \\ &+ \tilde{\mathcal{C}}^{\alpha_-}_{\alpha, \alpha_{2,1}} \displaystyle\left\lvert \sum_{\pm, k} \mathcal{A}_{\alpha_-, m_{0 \pm}} \Lambda^{-\frac{1}{2} b Q - b \alpha_-} (\Lambda z)^{\frac{1}{2} (bQ - 1 \pm 2 m_3)} e^{\pm \Lambda z/2} z^{-k} \langle \Delta_{\alpha_-}, \Lambda_0, m_{0 \pm}; k | \right\rvert^2\,, \end{aligned} \end{equation} where we have absorbed a factor of $2 i b$ in the OPE coefficients for later convenience, and we have set $\Lambda = 2ib \Lambda_0$ and $m_3 = \frac{i}{2}b m_0$. Here the irregular state depending on $\Lambda_0, \bar{\Lambda}_0$ denotes the full (chiral$\otimes$antichiral) state, and the modulus squared of the chiral states (depending only on $\Lambda_0$) has to be understood as a tensor product. Now we can fix the OPE coefficients $\tilde{\mathcal{C}}^{\alpha_\pm}_{\alpha, \alpha_{2,1}}, \mathcal{A}_{\alpha_\pm, m_{0 \pm}}$ by making use of the null-state equation (NSE) for the full irregular three-point function. Namely, \begin{equation} \left( b^{-2} \partial_z^2 - \frac{1}{z} \partial_z + \frac{\Delta_{\alpha \pm}}{z^2} + \frac{m_0 \Lambda_0}{z} + \Lambda_0^2 \right) \langle \Delta_\alpha, \Lambda_0, \bar{\Lambda}_0, m_0 | \Phi_{2, 1} (z, \bar{z}) | \Delta_{\alpha_\pm} \rangle = 0 \,. 
\end{equation} If we define $\langle \Delta_\alpha, \Lambda_0, \bar{\Lambda}_0, m_0 | \Phi_{2, 1} (z, \bar{z}) | \Delta_{\alpha_\pm} \rangle = \displaystyle\left\lvert e^{- \frac{\Lambda z}{2}} (\Lambda z)^{\frac{1}{2} (bQ + 2 b \alpha_\pm)} \right\rvert^2 G_\pm (z, \bar{z})$, then $G_\pm (z, \bar{z})$ satisfies \begin{equation} \left( z \partial_z^2 + (1 + 2 b \alpha_\pm - \Lambda z ) \partial_z - \frac{\Lambda}{2} (1 + 2 m_3 + 2 b \alpha_\pm) \right) G_\pm (z, \bar{z}) = 0 \,. \end{equation} Note that we can rewrite the previous equation using the natural variable $w = \Lambda z$, and obtain \begin{equation} \left( w \partial_w^2 + (1 + 2 b \alpha_\pm - w ) \partial_w - \frac{1}{2} (1 + 2 m_3 + 2 b \alpha_\pm) \right) G_\pm (w, \bar{w}) = 0 \,. \label{eq:GNSE} \end{equation} Equation (\ref{eq:GNSE}) is the confluent hypergeometric equation, therefore\footnote{Note that in principle also mixed terms could appear. However, such terms are excluded by the matching of the behavior near zero.} \begin{equation} G_\pm (w, \bar{w}) = K_\pm^{(1)} \displaystyle\left\lvert {}_1 F_1 \left( \frac{1}{2} + m_3 + b \alpha_\pm, 1 + 2 b \alpha_\pm, w \right) \right\rvert^2 + K_\pm^{(2)} \displaystyle\left\lvert w^{-2 b \alpha_\pm} {}_1 F_1 \left( \frac{1}{2} + m_3 - b \alpha_\pm, 1 - 2 b \alpha_\pm, w \right) \right\rvert^2 \,. 
\label{eq:GNSEsol} \end{equation} Expanding the correlator near zero and comparing the solution (\ref{eq:GNSEsol}) with the regular OPE, \begin{equation} K_\pm^{(1)} \displaystyle\left\lvert (\Lambda z )^{\frac{1}{2} b Q + b \alpha_\pm} \right\rvert^2 + K_\pm^{(2)} \displaystyle\left\lvert (\Lambda z )^{\frac{1}{2} b Q - b \alpha_\pm} \right\rvert^2 = G(\alpha) \mathcal{C}^{\alpha}_{\alpha_{2,1} \alpha_{\pm}} \displaystyle\left\lvert z^{\frac{1}{2} b Q \mp b \alpha_\pm} \right\rvert^2 \,, \end{equation} and hence \begin{equation} \begin{aligned} &K_+^{(1)} = 0 \,, \, K_+^{(2)} = G(\alpha) \mathcal{C}^{\alpha}_{\alpha_{2,1} \alpha_+} \displaystyle\left\lvert \Lambda^{-\frac{1}{2} b Q + b \alpha_+} \right\rvert^2 \,, \\ &K_-^{(1)} = G(\alpha) \mathcal{C}^{\alpha}_{\alpha_{2,1} \alpha_-} \displaystyle\left\lvert \Lambda^{-\frac{1}{2} b Q - b \alpha_-} \right\rvert^2 \,, \, K_-^{(2)} = 0\,. \end{aligned} \end{equation} Now expanding the confluent hypergeometric functions near infinity and matching with the OPE we can finally fix all the coefficients. Recall that as $w \to \infty$ \begin{equation} \begin{aligned} {}_1 F_1 \left( \frac{1}{2} + m_3 + b \alpha_\pm, 1 + 2 b \alpha_\pm, w \right) &\simeq \frac{\Gamma (1 + 2 b \alpha_\pm)}{\Gamma (\frac{1}{2} + m_3 + b \alpha_\pm)} e^w w^{-\frac{1}{2} + m_3 - b \alpha_\pm} + \\ &+ \frac{\Gamma (1 + 2 b \alpha_\pm)}{\Gamma (\frac{1}{2} - m_3 + b \alpha_\pm)} (- 1)^{-\frac{1}{2} - m_3 - b \alpha_\pm} (w )^{-\frac{1}{2} - m_3 - b \alpha_\pm} \,, \\ w^{-2 b \alpha_\pm} {}_1 F_1 \left( \frac{1}{2} + m_3 - b \alpha_\pm, 1 - 2 b \alpha_\pm, w \right) &\simeq \frac{\Gamma (1 - 2 b \alpha_\pm)}{\Gamma (\frac{1}{2} + m_3 - b \alpha_\pm)} e^w w^{-\frac{1}{2} + m_3 - b \alpha_\pm} + \\ &+ \frac{\Gamma (1 - 2 b \alpha_\pm)}{\Gamma (\frac{1}{2} - m_3 - b \alpha_\pm)} (- 1)^{-\frac{1}{2} - m_3 + b \alpha_\pm} (w )^{-\frac{1}{2} - m_3 - b \alpha_\pm} \,. \end{aligned} \end{equation} Let us concentrate on the $\alpha_+$ channel. 
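As an aside, both the identification of (\ref{eq:GNSE}) with Kummer's equation and the leading large-$w$ behavior above can be checked numerically with a truncated ${}_1 F_1$ series; the following standard-library Python sketch uses illustrative stand-in values for $\frac{1}{2}+m_3+b\alpha_\pm$ and $1+2b\alpha_\pm$:

```python
from math import gamma, exp

def hyp1f1(a, c, w, terms=200):
    # truncated power series for the confluent hypergeometric function 1F1(a; c; w)
    s = t = 1.0
    for n in range(terms):
        t *= (a + n) * w / ((c + n) * (n + 1))
        s += t
    return s

# stand-in values for the CFT parameters (illustrative only)
a, c = 0.8, 1.9

# Kummer equation  w G'' + (c - w) G' - a G = 0, tested by finite differences
w, h = 1.7, 1e-4
G0 = hyp1f1(a, c, w)
G1 = (hyp1f1(a, c, w + h) - hyp1f1(a, c, w - h)) / (2 * h)
G2 = (hyp1f1(a, c, w + h) - 2 * G0 + hyp1f1(a, c, w - h)) / h**2
assert abs(w * G2 + (c - w) * G1 - a * G0) < 1e-4

# leading large-w behaviour: 1F1(a; c; w) ~ Gamma(c)/Gamma(a) e^w w^(a-c)
w = 40.0
lead = gamma(c) / gamma(a) * exp(w) * w ** (a - c)
assert abs(hyp1f1(a, c, w) / lead - 1) < 0.05  # 1/w corrections remain
```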
Expanding the full correlator and matching $z$ powers gives \begin{equation} \begin{aligned} &G(\alpha) \mathcal{C}^{\alpha}_{\alpha_{2,1} \alpha_+} \displaystyle\left\lvert \Lambda^{-\frac{1}{2} b Q + b \alpha_+} \right\rvert^2 \times \\ & \times \displaystyle\left\lvert \frac{\Gamma (1 - 2 b \alpha_+)}{\Gamma (\frac{1}{2} + m_3 - b \alpha_+)} e^{w/2} w^{\frac{bQ}{2}-\frac{1}{2} + m_3} + \frac{\Gamma (1 - 2 b \alpha_+)}{\Gamma (\frac{1}{2} - m_3 - b \alpha_+)} (- 1)^{-\frac{1}{2} - m_3 + b \alpha_+} e^{-w/2} (w )^{\frac{bQ}{2}-\frac{1}{2} - m_3} \right\rvert^2 = \\ &= G(\alpha_+) \tilde{\mathcal{C}}^{\alpha_+}_{\alpha, \alpha_{2,1}} \displaystyle\left\lvert \sum_{\pm} \mathcal{A}_{\alpha_+, m_{0 \pm}} \Lambda^{-\frac{1}{2} b Q + b \alpha_+} (\Lambda z)^{\frac{1}{2} (bQ - 1 \pm 2 m_3)} e^{\pm \Lambda z/2} \right\rvert^2 \,. \end{aligned} \label{eq:IRROPEMatching} \end{equation} Finally, from equation (\ref{eq:IRROPEMatching}) we can read off the coefficients (the coefficients for the $\alpha_-$ channel are obtained simply by sending $\alpha_+ \to - \alpha_-$) \begin{equation} \begin{aligned} &\tilde{\mathcal{C}}^{\alpha_\pm}_{\alpha, \alpha_{2,1}} = \mathcal{C}^{\alpha_\pm}_{\alpha_{2,1} \alpha} \,, \\ &\mathcal{A}_{\alpha_+, m_{0 +}} = \frac{\Gamma (1 - 2 b \alpha_+)}{\Gamma (\frac{1}{2} + m_3 - b \alpha_+)} \,, \\ &\mathcal{A}_{\alpha_+, m_{0 -}} = \frac{\Gamma (1 - 2 b \alpha_+)}{\Gamma (\frac{1}{2} - m_3 - b \alpha_+)} (- 1)^{-\frac{1}{2} - m_3 + b \alpha_+} \,, \\ &\mathcal{A}_{\alpha_-, m_{0 +}} = \frac{\Gamma (1 + 2 b \alpha_-)}{\Gamma (\frac{1}{2} + m_3 + b \alpha_-)} \,, \\ &\mathcal{A}_{\alpha_-, m_{0 -}} = \frac{\Gamma (1 + 2 b \alpha_-)}{\Gamma (\frac{1}{2} - m_3 + b \alpha_-)} (- 1)^{-\frac{1}{2} - m_3 - b \alpha_-} \,. \end{aligned} \label{eq:irrOPEcoeff} \end{equation} Two remarks about equations (\ref{eq:irrOPEcoeff}): first of all, the OPE is symmetric in $\alpha \to - \alpha$, as it should be. 
Moreover, we expect the full irregular three-point correlator to be symmetric under the simultaneous transformation $\Lambda \to - \Lambda, m_3 \to - m_3$. Under this transformation \begin{equation} \begin{aligned} &\mathcal{A}_{\alpha_+, m_{3+}} \Lambda^{-\frac{1}{2} (bQ - 2 b \alpha_+)} e^{\frac{\Lambda z}{2}} \left( \Lambda z \right)^{\frac{1}{2} \left( bQ - 1 + 2 m_3 \right)} \to \\ & \to (-1)^{-\frac{1}{2} - m_3 + b\alpha_+} \frac{\Gamma (1 - 2 b \alpha_+)}{\Gamma (\frac{1}{2} - m_3 - b \alpha_+)} \Lambda^{-\frac{1}{2} (bQ - 2 b \alpha_+)} e^{-\frac{\Lambda z}{2}} \left( \Lambda z \right)^{\frac{1}{2} \left( bQ - 1 - 2 m_3 \right)} = \\ &= \mathcal{A}_{\alpha_+, m_{3-}} \Lambda^{-\frac{1}{2} (bQ - 2 b \alpha_+)} e^{-\frac{\Lambda z}{2}} \left( \Lambda z \right)^{\frac{1}{2} \left( bQ - 1 - 2 m_3 \right)} \,, \end{aligned} \end{equation} and the same happens for the other channel. This suggests that the $(-1)^{-\frac{1}{2} - m_3 \pm b \alpha_\pm}$ factor naturally multiplies $\Lambda$ in the irregular OPE. Therefore, after this minor change we obtain formulae (\ref{eq:ierrgope}), (\ref{eq:irrOPEcoeffreal}). \section{Nekrasov formulae}\label{AppendixNekrasov} \subsection{The AGT dictionary} The irregular three-point correlation functions of the form $\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle$ can be efficiently computed as a gauge theory partition function thanks to the AGT correspondence \cite{Alday_2010}. 
Concretely, we have \begin{equation} \langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle = \mathcal{Z}^{\mathrm{inst}}_{SU(2)}(\Lambda,a,m_1,m_2,m_3) \end{equation} where the chiral correlation function has to be understood as a conformal block (without any DOZZ factors) with the irregular state given by the expansion (\ref{irregular_state}), and $\mathcal{Z}^{\mathrm{inst}}$ is the Nekrasov instanton partition function of $\mathcal{N}=2$ $SU(2)$ gauge theory with three hypermultiplets. The Nekrasov partition function contains a fundamental mass scale $\hbar = \sqrt{\epsilon_1\epsilon_2}$ which sets the units in which everything is measured. The mapping of parameters between CFT and gauge theory is then \begin{equation} \begin{aligned} &\epsilon_1 = \frac{\hbar}{b}\, , \quad \epsilon_2 = \hbar b\, , \quad \epsilon=\epsilon_1+\epsilon_2 \quad \longrightarrow\quad Q=\frac{\epsilon}{\hbar}\,, \\ &\Delta_i = \frac{Q^2}{4}-\alpha_i^2 = \frac{\frac{\epsilon^2}{4}-a_i^2}{\hbar^2} \, , \quad a_i := \hbar\alpha_i\,, \\ &\Lambda = 2i\hbar \Lambda_0 \, , \quad m_3 = \frac{i}{2}\hbar m_0\,,\\ &m_1 = a_1 + a_2 \, , \quad m_2 = -a_1 + a_2\,. \end{aligned} \end{equation} The factors of $i\hbar$ in $\Lambda$ and $m_3$ do not appear in \cite{Marshakov_2009}, where the irregular state is defined, because there terms of the form $\sqrt{-\epsilon_1\epsilon_2}$ are dropped. \subsection{The instanton partition function} The $SU(2)$ partition function is given by the $U(2)$ partition function divided by the $U(1)$-factor: \begin{equation} \mathcal{Z}^{\mathrm{inst}}_{SU(2)}(\Lambda,a,m_1,m_2,m_3,\epsilon_1,\epsilon_2)= \mathcal{Z}_{U(1)}^{-1}(\Lambda,m_1,m_2,\epsilon_1,\epsilon_2)\mathcal{Z}_{U(2)}^{\mathrm{inst}}(\Lambda,a,m_1,m_2,m_3,\epsilon_1,\epsilon_2) \end{equation} where the $U(2)$ partition function is given by a combinatorial formula which we review now. We often suppress the dependence on $\epsilon_1,\epsilon_2$. 
We mostly follow the notation of \cite{Alday_2010}. Let $Y=(\lambda_1 \geq \lambda_2 \geq ...)$ be a Young tableau where $\lambda_i$ is the height of the $i$-th column and we set $\lambda_i=0$ when $i$ is larger than the width of the tableau. Its transpose is denoted by $Y^T=(\lambda_1' \geq \lambda_2' \geq ...)$. For a box $s$ at the coordinate $(i,j)$ we define the arm-length $A_Y(s)$ and the leg-length $L_Y(s)$ with respect to the tableau $Y$ as \begin{equation} A_Y(s)=\lambda_i-j\,,\quad L_Y(s) = \lambda_j'-i\,. \end{equation} Note that they can be negative when $s$ is outside the tableau. Define a function $E$ by \begin{equation} E(a,Y_1,Y_2,s) = a-\epsilon_1 L_{Y_2}(s)+\epsilon_2(A_{Y_1}(s)+1)\,. \end{equation} Using the notation $\Vec{a}=(a_1,a_2)$ with $a_1=-a_2=a$ and $\Vec{Y}=(Y_1,Y_2)$ the contribution of a vector multiplet is \begin{equation} z^{\mathrm{inst}}_{\mathrm{vector}}(\Vec{a},\Vec{Y})=\prod_{i,j=1}^2 \prod_{s \in Y_i}\frac{1}{E(a_i-a_j,Y_i,Y_j,s)}\prod_{t \in Y_j}\frac{1}{\epsilon_1+\epsilon_2-E(a_j-a_i,Y_j,Y_i,t)} \end{equation} and that of an (antifundamental) hypermultiplet \begin{equation} z^{\mathrm{inst}}_{\mathrm{matter}}(\Vec{a},\Vec{Y},m)=\prod_{i=1}^2 \prod_{(p,q) \in Y_i} \left(a_i+m+\epsilon_1\left(p-\frac{1}{2}\right)+\epsilon_2\left(q-\frac{1}{2}\right)\right)\,, \end{equation} where $(p,q)$ runs over the coordinates of the boxes of $Y_i$. This is different from the formula given in \cite{Alday_2010} because our masses are shifted with respect to theirs by $\epsilon/2$. Finally, the $U(2)$ partition function is given by \begin{equation} \mathcal{Z}_{U(2)}^{\mathrm{inst}}(\Lambda,a,m_1,m_2,m_3) = \sum_{\Vec{Y}}\Lambda^{|\Vec{Y}|} z^{\mathrm{inst}}_{\mathrm{vector}}(\Vec{a},\Vec{Y}) \prod_{n=1}^3 z^{\mathrm{inst}}_{\mathrm{matter}}(\Vec{a},\Vec{Y},m_n)\,, \end{equation} where $|\Vec{Y}|$ denotes the total number of boxes in $Y_1$ and $Y_2$. The $U(1)$-factor on the other hand can be obtained by decoupling one mass from the $U(1)$-factor for $N_f=4$. 
Before decoupling, the third and fourth masses are given by \begin{equation} m_3 = a_3+a_4 \, , \quad m_4 = a_3 - a_4 \end{equation} where $a_3$ and $a_4$ are related to the momenta of the two vertex operators that collide to form the irregular state. The $U(1)$-factor is \begin{equation} \mathcal{Z}_{U(1)}^{N_f=4}=(1-q)^{2(a_2+\epsilon/2)(a_3+\epsilon/2)/\epsilon_1\epsilon_2} \,. \end{equation} The decoupling limit is given by $q \rightarrow 0 ,\, m_4 \rightarrow \infty$ with $q m_4 \equiv \Lambda$ finite. This gives the $N_f=3$ $U(1)$-factor \begin{equation} \mathcal{Z}_{U(1)}=e^{-(m_1+m_2+\epsilon) \Lambda/2\epsilon_1\epsilon_2}\,. \end{equation} For reference, we give the one-instanton partition functions: \begin{equation} \begin{aligned} & \mathcal{Z}_{U(2)}^{\mathrm{inst}}(\Lambda,a,m_1,m_2,m_3) = 1 + \frac{\prod_{i=1}^3\left(-a+m_i+\frac{\epsilon}{2}\right)}{2a\epsilon_1\epsilon_2(-2a+\epsilon)}\Lambda - \frac{\prod_{i=1}^3\left(a+m_i+\frac{\epsilon}{2}\right)}{2a\epsilon_1\epsilon_2(2a+\epsilon)}\Lambda + \mathcal{O}(\Lambda^2) \,, \\ & \mathcal{Z}_{SU(2)}^{\mathrm{inst}}(\Lambda,a,m_1,m_2,m_3) = 1 - \frac{\epsilon^2-4a^2-4m_1 m_2}{2\epsilon_1\epsilon_2(\epsilon+2a)(\epsilon-2a)}m_3\Lambda + \mathcal{O}(\Lambda^2) \,. \end{aligned} \end{equation} \subsection{The Nekrasov-Shatashvili limit} While the above formulae are valid for arbitrary $\epsilon_1,\epsilon_2$, in the context of the black hole we work in the Nekrasov-Shatashvili (NS) limit which is defined by $\epsilon_2\to0$ while keeping $\epsilon_1$ finite \cite{NEKRASOV_2010}. Furthermore we set $\epsilon_1=1$. The correlators $\langle \Delta_\alpha, \Lambda_0, m_0 | V_{\alpha_{2}} (1) | \Delta_{\alpha_1} \rangle$ then need to be understood as being computed as a partition function in the NS limit. 
This is done by computing them for arbitrary $\epsilon_1,\epsilon_2$ and taking $\epsilon_1=1$ and $\epsilon_2\to0$ only at the end: the partition function itself diverges in this limit, while the ratios appearing e.g. in the connection formulas remain finite. Furthermore, we define the instanton part of the NS free energy as \begin{equation} \mathcal{F}^{\mathrm{inst}}(\Lambda,a,m_1,m_2,m_3,\epsilon_1)=\epsilon_1 \lim_{\epsilon_2\to0}\epsilon_2 \log \mathcal{Z}^{\mathrm{inst}}_{SU(2)}(\Lambda,a,m_1,m_2,m_3,\epsilon_1,\epsilon_2)\,. \end{equation} One also uses the Matone relation \cite{Matone_1995} \begin{equation} E=a^2-\Lambda \partial_\Lambda \mathcal{F}^{\mathrm{inst}}\,, \end{equation} which can be inverted order by order in $\Lambda$ to obtain $a(E)$. For reference, we give some relevant quantities computed up to one instanton, with $\epsilon_1=1$ and keeping only the leading power of $\epsilon_2$. \begin{equation} \begin{aligned} & \mathcal{Z}^{\mathrm{inst}}_{SU(2)}(\Lambda,a,m_1,m_2,m_3) = 1-\frac{\frac{1}{4}-a^2-m_1 m_2}{\frac{1}{2}-2a^2}\frac{m_3 \Lambda}{\epsilon_2}+\mathcal{O}(\Lambda^2) \,, \\ & \mathcal{F}^{\mathrm{inst}}(\Lambda,a,m_1,m_2,m_3) = -\frac{\frac{1}{4}-a^2-m_1 m_2}{\frac{1}{2}-2a^2} m_3 \Lambda+\mathcal{O}(\Lambda^2) \,, \\ & a(E) = \sqrt{E} - \frac{\frac{1}{4}-E+a_1^2-a_2^2}{\sqrt{E}\left(1-4E\right)}m_3\Lambda+\mathcal{O}(\Lambda^2) \,, \\ & \frac{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}(\Lambda,a,m_1-\frac{\theta'\epsilon_2}{2},m_2-\frac{\theta'\epsilon_2}{2},m_3)}{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}(\Lambda,a,m_1-\frac{\theta\epsilon_2}{2},m_2+\frac{\theta\epsilon_2}{2},m_3)} = 1-\frac{\theta(m_1-m_2)+\theta'(m_1+m_2)}{1-4a^2}m_3 \Lambda+\mathcal{O}(\Lambda^2) \,. 
\end{aligned} \end{equation} Finally, we define the full NS free energy, including the classical and one-loop parts, by \begin{equation} \begin{aligned} \partial_a\mathcal{F}(\Lambda,a,m_1,m_2,m_3,\epsilon_1)=& - 2 a \log \frac{\Lambda}{\epsilon_1} + 2 \epsilon_1 \log \frac{\Gamma \left(1 + \frac{2a}{\epsilon_1} \right) }{\Gamma \left(1 - \frac{2a}{\epsilon_1} \right)} + \epsilon_1 \sum_{i=1}^3 \log \frac{\Gamma \left( \frac{1}{2} + \frac{m_i - a}{\epsilon_1} \right)}{\Gamma \left( \frac{1}{2} + \frac{m_i + a}{\epsilon_1} \right)} +\\ &+\partial_a\mathcal{F}^{\mathrm{inst}}(\Lambda,a,m_1,m_2,m_3,\epsilon_1) \,. \end{aligned} \end{equation} \section{The semiclassical absorption coefficient}\label{appendix_semiclassical} We give the detailed reduction of the full absorption coefficient in the semiclassical regime to the final result $\sigma = e^{-a_D/\epsilon_1}$, with \begin{equation} a_D := \oint_B \phi_{SW}(z)dz = \lim_{\epsilon_1\to0} \partial_a\mathcal{F} \end{equation} where $\phi_{SW}(z)$ is the Seiberg-Witten differential of the $\mathcal{N}=2$ $SU(2)$ gauge theory with three flavours and $\mathcal{F}$ is the full NS free energy introduced in the previous section. 
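Before turning to the reduction, the combinatorial formulae of Appendix \ref{AppendixNekrasov} can be cross-checked numerically against the one-instanton expressions quoted there; a minimal standard-library Python sketch (function names and sample values are ours, purely illustrative):

```python
from itertools import product
from math import prod

eps1, eps2 = 0.7, 1.1
eps = eps1 + eps2

def transpose(lam):
    # column heights -> transposed tableau
    return [sum(1 for h in lam if h >= j) for j in range(1, (lam[0] if lam else 0) + 1)]

def boxes(lam):
    return [(p, q) for p, h in enumerate(lam, 1) for q in range(1, h + 1)]

def arm(lam, p, q):
    # A_Y(s) = lambda_p - q, possibly negative when s lies outside Y
    return (lam[p - 1] if p <= len(lam) else 0) - q

def leg(lam, p, q):
    lt = transpose(lam)
    return (lt[q - 1] if q <= len(lt) else 0) - p

def E(x, Y1, Y2, s):
    p, q = s
    return x - eps1 * leg(Y2, p, q) + eps2 * (arm(Y1, p, q) + 1)

def z_vector(avec, Y):
    z = 1.0
    for i, j in product(range(2), repeat=2):
        for s in boxes(Y[i]):
            z /= E(avec[i] - avec[j], Y[i], Y[j], s)
        for t in boxes(Y[j]):
            z /= eps - E(avec[j] - avec[i], Y[j], Y[i], t)
    return z

def z_matter(avec, Y, m):
    return prod(avec[i] + m + eps1 * (p - 0.5) + eps2 * (q - 0.5)
                for i in range(2) for (p, q) in boxes(Y[i]))

a, masses = 0.31, (0.17, 0.23, 0.41)
avec = (a, -a)
# sum over the two |Y| = 1 pairs of tableaux
coeff = sum(z_vector(avec, Y) * prod(z_matter(avec, Y, m) for m in masses)
            for Y in (([1], []), ([], [1])))
# closed-form one-instanton coefficient of the U(2) partition function
closed = (prod(-a + m + eps / 2 for m in masses) / (2 * a * eps1 * eps2 * (eps - 2 * a))
          - prod(a + m + eps / 2 for m in masses) / (2 * a * eps1 * eps2 * (eps + 2 * a)))
assert abs(coeff - closed) < 1e-12
```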
First we restore the powers of $\epsilon_1$ which were previously set to one in the exact absorption coefficient and substitute the AGT dictionary (see Appendix \ref{AppendixNekrasov}): \begin{equation} \sigma = \frac{\displaystyle{-\mathrm{Im}\frac{m_1+m_2}{\epsilon_1}}}{\displaystyle{\left|\frac{\Gamma\left(1 + \frac{2a}{\epsilon_1}\right)\Gamma\left(\frac{2a}{\epsilon_1}\right) \Gamma\left(1 + \frac{m_1+m_2}{\epsilon_1}\right)\left(\frac{\Lambda}{\epsilon_1}\right)^{\frac{-a + m_3}{\epsilon_1}}}{\prod_{i=1}^3 \Gamma\left(\frac{1}{2}+\frac{m_i +a}{\epsilon_1}\right)} \frac{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,a+\frac{\epsilon_2}{2},m_1 ,m_2,m_3+\frac{\epsilon_2}{2}\right)}{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,a,m_1 -\frac{\epsilon_2}{2},m_2 -\frac{\epsilon_2}{2},m_3\right)} + (a \rightarrow -a) \right|^2}} \end{equation} In the regime with two real turning points, where we obtained the semiclassical transmission coefficient, we have $|\Lambda|\ll 1$. Then $a$ can be obtained order by order in an expansion in $\Lambda$, starting from $a = \ell + \frac{1}{2}+\mathcal{O}(\Lambda)$, by using the relation $E=a^2-\Lambda\partial_\Lambda \mathcal{F}^{\mathrm{inst}}$. Since $\Lambda\partial_\Lambda \mathcal{F}^{\mathrm{inst}}$ is real for real $a$, we see that all terms in the expansion, and therefore $a$ itself, are real. We anticipate that in $\sigma$ the term surviving in the semiclassical limit will be the first term in the denominator. This can be seen quickly by approximating $\Gamma(z)\approx e^{z\log z}$ for large $z$. In the semiclassical limit $a \gg m_i$ and the contribution of the five Gamma functions containing $a$ in the first term goes like $e^{\frac{a}{\epsilon_1}\log\frac{a}{\epsilon_1}}$. Extracting the term $\epsilon_1^{-\frac{a}{\epsilon_1}}$ cancels the explicit power of $\epsilon_1$ outside, and the rest of the exponential blows up. 
On the other hand the behaviour of the second term in the denominator of the transmission coefficient can be obtained by sending $a\rightarrow -a$, so we see that in this case the exponential vanishes, and indeed the dominant term is the first one. The Gamma functions give the correct semiclassical one-loop contributions using Stirling's formula, and the ratio of partition functions gives the correct instanton contribution to $a_D$. In more detail, we can split the contributions to $a_D$ as \begin{equation} a_D=a_D^{\mathrm{1-loop}} + a_D^{\mathrm{inst}} = a_{D,\mathrm{vector}}^{\mathrm{1-loop}}(a) + \sum_{i=1}^3 a_{D,\mathrm{matter}}^{\mathrm{1-loop}}(a,m_i) + a_{D,\mathrm{vector}}^{\mathrm{inst}}+ \sum_{i=1}^3 a_{D,\mathrm{matter}}^{\mathrm{inst}}\,. \end{equation} We take all matter multiplets to be in the antifundamental representation of $SU(2)$. The vector and matter multiplet one-loop contributions to $a_D$ are \begin{equation} \begin{aligned} & a_{D,\mathrm{vector}}^{\mathrm{1-loop}}(a) = -8a+4a\log\frac{2a}{\Lambda}+4a\log\frac{-2a}{\Lambda}\,,\\ & a_{D,\mathrm{matter}}^{\mathrm{1-loop}}(a,m) = \left(a-m\right)\left[1-\log\left(\frac{-a+m}{\Lambda}\right)\right] +\left(a+m\right)\left[1-\log\left(\frac{a+m}{\Lambda}\right)\right] \,. \end{aligned} \end{equation} These are antisymmetric under $a\rightarrow -a$ as they should be. On the other hand, in the absorption coefficient we have several Gamma functions, which we can expand in the semiclassical limit using Stirling's approximation $\log\Gamma(z) = (z-1/2)\log z -z+ \frac{1}{2}\log 2\pi + \mathcal{O}(z^{-1})$. We neglect the constant factors of $2\pi$ since we have the same number of Gamma functions in the numerator and in the denominator, so they cancel. 
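The accuracy of the truncated Stirling expansion is easy to quantify; the short sketch below compares it with \texttt{math.lgamma}, keeping the constant $\frac{1}{2}\log 2\pi$ that cancels between numerator and denominator:

```python
from math import lgamma, log, pi

def stirling(z):
    # log Gamma(z) = (z - 1/2) log z - z + (1/2) log(2 pi) + O(1/z)
    return (z - 0.5) * log(z) - z + 0.5 * log(2 * pi)

for z in (10.0, 50.0, 200.0):
    # the error of the truncated expansion is ~ 1/(12 z)
    assert abs(lgamma(z) - stirling(z)) < 1 / (10 * z)
```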
For the vector multiplet we have \begin{equation} \begin{aligned} \epsilon_1 \log\left|\Gamma\left(\frac{2a}{\epsilon_1}\right)\Gamma\left(1+\frac{2a}{\epsilon_1}\right)\right|^{-2} & \to 8a- 8a\log2a+8a\log\epsilon_1\\ & = -a_{D,\mathrm{vector}}^{\mathrm{1-loop}}(a) - 4 \pi i a - 8a\log\frac{\Lambda}{\epsilon_1}\,, \end{aligned} \end{equation} for the matter multiplets \begin{equation} \begin{aligned} \epsilon_1 \log\left|\Gamma\left(\frac{1}{2}+\frac{m+a}{\epsilon_1}\right)\right|^2 & \to (a-m)\log(a-m) + (a+m)\log(a+m)-2a(1+\log\epsilon_1)\\ & = -a_{D,\mathrm{matter}}^{\mathrm{1-loop}}(a,m) + i\pi(a-m) + 2a \log\frac{\Lambda}{\epsilon_1} \end{aligned} \end{equation} and there is one more Gamma function: \begin{equation} \left|\Gamma\left(1+\frac{m_1+m_2}{\epsilon_1}\right)\right|^2 \to i\frac{m_1+m_2}{\epsilon_1}e^{-i\pi (m_1+m_2)/\epsilon_1}. \end{equation} The last contribution is \begin{equation} \left|\frac{\Lambda}{\epsilon_1}\right|^{2\frac{a - m_3}{\epsilon_1}} = \left(\frac{\Lambda}{\epsilon_1}\right)^{\frac{2a}{\epsilon_1}}e^{i\pi\frac{a + m_3}{\epsilon_1}}. \end{equation} Putting it all together we have \begin{equation} -\mathrm{Im}\frac{m_1+m_2}{\epsilon_1}\left|\frac{\prod_{i=1}^3 \Gamma\left(\frac{1}{2}+\frac{m_i +a}{\epsilon_1}\right)\left(\frac{\Lambda}{\epsilon_1}\right)^{\frac{a - m_3}{\epsilon_1}}}{\Gamma\left(1+ \frac{2a}{\epsilon_1}\right)\Gamma\left(\frac{ 2a}{\epsilon_1}\right) \Gamma\left(1 + \frac{m_1+m_2}{\epsilon_1}\right)}\right|^2 \to e^{-a_D^{\mathrm{1-loop}}/\epsilon_1}\,. 
\end{equation} Now let us look at the instanton partition functions: \begin{equation} \begin{aligned} \frac{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,a,m_1 -\frac{\epsilon_2}{2},m_2 -\frac{\epsilon_2}{2},m_3\right)}{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,a+\frac{\epsilon_2}{2},m_1 ,m_2,m_3+\frac{\epsilon_2}{2}\right)} = \frac{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,\tilde{a}-\frac{\epsilon_2}{2},m_1 -\frac{\epsilon_2}{2},m_2 -\frac{\epsilon_2}{2},\tilde{m}_3-\frac{\epsilon_2}{2}\right)}{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,\tilde{a},m_1 ,m_2,\tilde{m}_3\right)} \end{aligned} \end{equation} where we have defined $\tilde{a} = a+\frac{\epsilon_2}{2}$ and $\tilde{m}_3 = m_3+ \frac{\epsilon_2}{2}$. Now, looking at the explicit expressions for the $U(2)$ Nekrasov partition functions, we see that the part corresponding to the gauge field depends only on $a_1 - a_2$, and the part corresponding to the hypermultiplets only on $a_1 + m_i$ and $a_2 + m_i$. So we see that \begin{equation} \begin{aligned} & \mathcal{Z}^{\mathrm{inst}}_{U(2)}\left(\Lambda,a_1=\tilde{a}-\frac{\epsilon_2}{2},a_2 = -\tilde{a}+\frac{\epsilon_2}{2},m_1 -\frac{\epsilon_2}{2},m_2 -\frac{\epsilon_2}{2},\tilde{m}_3-\frac{\epsilon_2}{2}\right) = \\ = \,& \mathcal{Z}^{\mathrm{inst}}_{U(2)}\left(\Lambda,a_1=\tilde{a}-\epsilon_2,a_2 = -\tilde{a},m_1 ,m_2 ,\tilde{m}_3\right)\,.
\end{aligned} \end{equation} The $U(1)$ part behaves as \begin{equation} \begin{aligned} \mathcal{Z}^{-1}_{U(1)}(\Lambda,m_1-\frac{\epsilon_2}{2},m_2-\frac{\epsilon_2}{2}) = e^{(m_1+m_2-\epsilon_2)\Lambda/2\epsilon_1\epsilon_2} =e^{-\Lambda/2\epsilon_1} \mathcal{Z}^{-1}_{U(1)}(\Lambda,m_1,m_2)\,, \end{aligned} \end{equation} therefore \begin{equation} \begin{aligned} &\frac{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,a,m_1 -\frac{\epsilon_2}{2},m_2 -\frac{\epsilon_2}{2},m_3\right)}{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,a+\frac{\epsilon_2}{2},m_1 ,m_2,m_3+\frac{\epsilon_2}{2}\right)} =e^{-\Lambda/2\epsilon_1} \frac{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,\tilde{a}-\epsilon_2,-\tilde{a},m_1 ,m_2 ,\tilde{m}_3\right)}{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,\tilde{a},-\tilde{a},m_1,m_2,\tilde{m}_3\right)} =\\ & =e^{-\Lambda/2\epsilon_1} \exp\frac{1}{\epsilon_1 \epsilon_2}\left\{\mathcal{F}^{\mathrm{inst}}\left(\Lambda,\tilde{a}-\epsilon_2,-\tilde{a},m_1 ,m_2 ,\tilde{m}_3\right) - \mathcal{F}^{\mathrm{inst}}\left(\Lambda,\tilde{a},-\tilde{a},m_1,m_2,\tilde{m}_3\right)\right\} = \\ & = e^{-\Lambda/2\epsilon_1}\exp -\frac{1}{\epsilon_1} \frac{\partial}{\partial a_1} \mathcal{F}^{\mathrm{inst}}\left(\Lambda,a_1,a_2,m_1,m_2,\tilde{m}_3\right)|_{a_1 = \tilde{a},\,a_2=-\tilde{a}}\,. \end{aligned} \end{equation} Now there are no more factors of $1/\epsilon_2$ so we can safely drop the tildes. 
On the other hand, by symmetry considerations which are most easily seen in the expression as a conformal block, and using the fact that $\Lambda$ and the three masses are purely imaginary while $a$ is real, we have \begin{equation} \begin{aligned} &\frac{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,a,m_1 -\frac{\epsilon_2}{2},m_2 -\frac{\epsilon_2}{2},m_3\right)}{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,a+\frac{\epsilon_2}{2},m_1 ,m_2,m_3+\frac{\epsilon_2}{2}\right)} = \frac{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(-\Lambda,a,-m_1 +\frac{\epsilon_2}{2},-m_2 +\frac{\epsilon_2}{2},-m_3\right)}{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(-\Lambda,a+\frac{\epsilon_2}{2},-m_1 ,-m_2,-m_3-\frac{\epsilon_2}{2}\right)} =\\ & =\frac{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda^*,a^*,m_1^* +\frac{\epsilon_2}{2},m_2^* +\frac{\epsilon_2}{2},m_3^*\right)}{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda^*,a^*+\frac{\epsilon_2}{2},m_1^*,m_2^*,m_3^*-\frac{\epsilon_2}{2}\right)} \end{aligned} \end{equation} And therefore, repeating the same steps as above, \begin{equation} \left(\frac{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,a,m_1 -\frac{\epsilon_2}{2},m_2 -\frac{\epsilon_2}{2},m_3\right)}{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,a+\frac{\epsilon_2}{2},m_1 ,m_2,m_3+\frac{\epsilon_2}{2}\right)}\right)^* = e^{\Lambda/2\epsilon_1}\exp \frac{1}{\epsilon_1} \frac{\partial}{\partial a_2} \mathcal{F}^{\mathrm{inst}}\left(\Lambda,a_1,a_2,m_1,m_2,m_3\right)|_{a_1 =a,\,a_2=-a} \end{equation} Now using $\partial_a \mathcal{F}=\partial_{a_1} \mathcal{F}-\partial_{a_2} \mathcal{F}$ we have: \begin{equation} \left|\frac{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,a,m_1 -\frac{\epsilon_2}{2},m_2 -\frac{\epsilon_2}{2},m_3\right)}{\mathcal{Z}^{\mathrm{inst}}_{SU(2)}\left(\Lambda,a+\frac{\epsilon_2}{2},m_1 ,m_2,m_3+\frac{\epsilon_2}{2}\right)}\right|^2 = e^{-a_D^{\mathrm{inst}}/\epsilon_1} \end{equation} which combined with the one-loop part finally gives 
\begin{equation} \resizebox{0.14\hsize}{!}{\boxed{\sigma \approx e^{-a_D/\epsilon_1}}}\,. \end{equation} This result is valid for $\ell \gg 1$ and $M\omega,\text{a}\omega \ll 1$, while keeping all orders in $M\omega,\text{a}\omega$. \section{Angular quantization - warm-up examples}\label{appendix:angularq} The angular connection problem is to be handled with care, since according to the angular dictionary (\ref{eq:angulardictionaryy}) the momenta have to be set at degenerate values at which singularities appear. In order to understand how things work, it is useful to discuss simpler examples that are fully under control. \subsection{The spherical harmonics} Let us consider the spherical harmonics example, that is, the case $c = s = 0$. The dictionary then gives \begin{equation} \begin{aligned} &E = \frac{1}{4} + \lambda \,, \\ &a_1 = a_2 = - \frac{m}{2} \,, \\ &\Lambda = m_3 = 0 \,. \end{aligned} \end{equation} Restricting without loss of generality to $m>0$, the regular solution as $z \to 0$ is given by \begin{equation} y(z \to 0) \simeq z^{\frac{1}{2}+\frac{m}{2}} = z^{\frac{1}{2}+\frac{m}{2}} {}_2 F_1 \left( \frac{1}{2} - a, \frac{1}{2} + a, 1 + m, z \right) \,.
\end{equation} According to the hypergeometric connection formulae, around $z=1$ we have \begin{equation} \begin{aligned} y(z \to 1) =& \frac{\Gamma(-m) \Gamma(1+m)}{\Gamma(\frac{1}{2}+a) \Gamma(\frac{1}{2}-a)} (1-z)^{\frac{1}{2}+\frac{m}{2}} {}_2 F_1 \left( \frac{1}{2} + a, \frac{1}{2} - a, 1 + m, 1-z \right) + \\ &+ \frac{\Gamma(m) \Gamma(1+m)}{\Gamma(\frac{1}{2} + m +a) \Gamma(\frac{1}{2} + m -a)} (1-z)^{\frac{1}{2}-\frac{m}{2}} {}_2 F_1 \left( \frac{1}{2} + a - m, \frac{1}{2} - a - m, 1 - m, 1-z \right) \,. \end{aligned} \label{eq:spherarm1} \end{equation} The second hypergeometric is singular for generic values of $a$, since \begin{equation} \begin{aligned} &{}_2 F_1 \left( \frac{1}{2} + a - m, \frac{1}{2} - a - m, 1 - m, 1-z \right) = 1 + \frac{(\frac{1}{2} + a - m)( \frac{1}{2} - a - m)}{1 - m} (1-z) + \dots + \\ &+ \frac{(\frac{1}{2} + a - m)(\frac{1}{2} + a - m + 1) \dots (\frac{1}{2} + a - m + n)(\frac{1}{2} - a - m)(\frac{1}{2} - a - m + 1) \dots (\frac{1}{2} - a - m + n)}{(1 - m)(1-m+1) \dots (1-m+n)} \frac{(1-z)^n}{n!} + \dots \,. \end{aligned} \end{equation} For generic values of $a$, every term starting from $n = m - 1$ will be infinite. Moreover, the first term is divergent as it stands, since $\Gamma (-m + \epsilon) \sim \epsilon^{-1}$ for $m \in \mathbb{N} \,, \epsilon \to 0$. However, let us consider $a = \ell + \frac{1}{2}$, for $\ell \in \mathbb{N}$ and $\ell \ge m$. Then the second term gives, for $m \to m + \delta$, \begin{equation} \sim \lim_{\delta \to 0} \frac{(1-z)^{\frac{1-m}{2}}}{\Gamma(m - \ell + \delta)} {}_2 F_1 \left(1+ \ell-m - \delta, - \ell - m - \delta, 1-m-\delta, 1-z\right) \sim \mathcal{O} ((1-z)^{\frac{m-1}{2}}) \,, \end{equation} which is finite as $\delta \to 0$ and as $z \to 1$ for $m>0$. Also, the first term in (\ref{eq:spherarm1}) remains finite since $\Gamma (-m) /\Gamma(-\ell)$ is a finite ratio. Therefore, quantizing our variable $a$, we obtain a finite result for the wave function, which is also regular as $\theta \to 0, \pi$.
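For $m=0$ the quantized solution can be checked against classical special functions: with $a=\ell+\tfrac12$ the first hypergeometric parameter becomes $-\ell$, the series terminates, and ${}_2F_1(-\ell,\ell+1;1;z)=P_\ell(1-2z)$, the Legendre polynomial. The following numerical sketch (an illustration, not part of the text) verifies this identity.

```python
from math import isclose

def hyp2f1_terminating(a, b, c, z, nmax=60):
    # Evaluate 2F1(a, b; c; z) as a power series; with a a negative
    # integer the series terminates (the running term hits zero).
    term, total = 1.0, 1.0
    for n in range(nmax):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
        if term == 0.0:
            break
    return total

def legendre(l, x):
    # Legendre polynomial P_l(x) via the three-term recurrence.
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

for l in range(6):
    for z in (0.1, 0.3, 0.7):
        lhs = hyp2f1_terminating(-l, l + 1, 1, z)
        rhs = legendre(l, 1 - 2 * z)
        assert isclose(lhs, rhs, rel_tol=0, abs_tol=1e-10)
```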
Note also that since $\ell$ is a positive integer, all the hypergeometrics have a finite number of terms. Solving $E=a^2$ for $\lambda$ gives \begin{equation} \lambda = \ell (\ell + 1) \,. \end{equation} \subsection{The spin weighted spherical harmonics example}\label{Appendix:SWSH} Let us slightly complicate the problem and consider $s \ne 0$, but $\Lambda = 0$. Again, the solution is given in terms of hypergeometrics. The dictionary gives \begin{equation} \begin{aligned} &E = \frac{1}{4} + \lambda + s(s+1) \,, \\ &a_1 = \frac{s-m}{2} \,, \\ &a_2 = \frac{-s-m}{2} \,, \\ &\Lambda = 0 \,. \end{aligned} \end{equation} Let us start by assuming $m>s>0$. Then the regular solution expanded near zero is given by \begin{equation} y_{m>s} (z \to 0) = z^{\frac{1+m-s}{2}} {}_2 F_1 \left( \frac{1}{2} - a - s, \frac{1}{2} + a - s, 1 + m - s, z \right) \,. \end{equation} Expanding near one, \begin{equation} \begin{aligned} y_{m>s} (z \to 1) =& \frac{\Gamma(m+s) \Gamma(1+m-s)}{\Gamma(\frac{1}{2}+a+m) \Gamma(\frac{1}{2}-a+m)} (1-z)^{\frac{1}{2}-\frac{m+s}{2}} {}_2 F_1 \left( \frac{1}{2} + a - m ,\frac{1}{2} - a - m, 1-m-s, 1-z \right) + \\ +& \frac{\Gamma(-m-s) \Gamma(1+m-s)}{\Gamma(\frac{1}{2}+a-s) \Gamma(\frac{1}{2}-a-s)} (1-z)^{\frac{1}{2}+\frac{m+s}{2}} {}_2 F_1 \left( \frac{1}{2} + a + s, \frac{1}{2} - a + s, 1+m+s, 1-z \right) \,. \end{aligned} \end{equation} Again, everything is divergent for generic values of $a$. Following the previous example, let us take $a = \ell + \frac{1}{2}$, $\ell \in \mathbb{N}$. Assuming again $\ell \ge m$, the first term will go like (as before, consider $m \to m + \delta$ for $\delta \to 0$) \begin{equation} \frac{(1-z)^{\frac{1-m-s}{2}}}{\Gamma(- \ell + m + \delta)} {}_2 F_1 \left( 1+ \ell - m - \delta , - \ell - m - \delta, 1-m-\delta -s, 1-z \right) \to \mathcal{O} ((1-z)^{\frac{m+s-1}{2}}) \,. 
\end{equation} In order to keep the second term finite we need \begin{equation} - \ell - s \le 0 \,, \, 1 + \ell - s > 0 \Rightarrow \ell \ge s \,, \end{equation} which is trivially satisfied since $\ell \ge m > s$. Let us now consider the case $s > m > 0$. Near zero \begin{equation} \begin{aligned} y_{m<s} (z) =& z^{\frac{1+s-m}{2}} {}_2 F_1 \left( \frac{1}{2} + a - m, \frac{1}{2} - a - m, 1 + s - m, z \right) = \\=& \frac{\Gamma(m+s) \Gamma(1+s-m)}{\Gamma(\frac{1}{2}+a+s) \Gamma(\frac{1}{2}-a+s)} (1-z)^{\frac{1}{2}-\frac{m+s}{2}} {}_2 F_1 \left( \frac{1}{2} + a - m ,\frac{1}{2} - a - m, 1-m-s, 1-z \right) + \\ +& \frac{\Gamma(-m-s) \Gamma(1+s-m)}{\Gamma(\frac{1}{2}+a-m) \Gamma(\frac{1}{2}-a-m)} (1-z)^{\frac{1}{2}+\frac{m+s}{2}} {}_2 F_1 \left( \frac{1}{2} + a + s, \frac{1}{2} - a + s, 1+m+s, 1-z \right) \,. \end{aligned} \end{equation} Again, we need $a = \ell + \frac{1}{2}$, where now $\ell \ge s > m$, in order for the solution to be regular near one. Overall, we have nonzero regular solutions only for $\ell \ge \max (m, s)$. Finally, let us solve for $\lambda$. We have \begin{equation} \ell^2 + \ell = \lambda + s(1+s) \Rightarrow \lambda = \ell (\ell + 1) - s(s+1) \,. \end{equation} \printbibliography[heading=bibintoc] \end{document}
{"config": "arxiv", "file": "2105.04483/main.tex"}
TITLE: What is the next number in the last group QUESTION [1 upvotes]: Hi there. I came across a number pattern problem like the one in the following picture. Does anyone happen to know what the ? stands for? Or how can we calculate the number? Thank you. REPLY [1 votes]: The third number in each sequence is the sum of the squares of the previous two. $$2^2+6^2=4+36=40$$ $$3^2+7^2=9+49=58$$ $$6^2+5^2=36+25=61$$ Using this rule, we can do it for the missing number: $$2^2+3^2=4+9=13$$ So the correct answer is (A) 13
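The rule is easy to verify mechanically; a minimal sketch using the triples from the answer above:

```python
def third(a, b):
    # Third number in each group: sum of the squares of the first two.
    return a * a + b * b

# The three given groups obey the rule.
assert third(2, 6) == 40
assert third(3, 7) == 58
assert third(6, 5) == 61
# The missing number, answer (A).
assert third(2, 3) == 13
```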
{"set_name": "stack_exchange", "score": 1, "question_id": 3355920}
TITLE: Prove that the trigonometric function is uniformly continuous QUESTION [2 upvotes]: In my assignment I have to prove that the following function is uniformly continuous in $(0,\frac{\pi}{2})$: $$f(x)=\frac {1-\sin x}{\cos x}$$ Here is my suggestion for solution. Please let me know if I'm wrong somewhere: I have to prove that if $|x_1-x_2|<\delta $ then $|f(x_1)-f(x_2)|<\epsilon$: $$\left|f(x_1)-f(x_2)\right|=\left|\frac {1-\sin x_1}{\cos x_1}-\frac {1-\sin x_2}{\cos x_2}\right|$$ $$=\left|\frac{1}{\cos x_1}-\frac{\sin x_1}{\cos x_1}-\frac{1}{\cos x_2}+\frac{\sin x_2}{\cos x_2}\right|$$ Since |$\sin x|\le 1$ we can write the following, since the following term is bigger: $$=\left|\frac{1}{\cos x_1}-\frac{\sin x_1}{\cos x_1}-\frac{1}{\cos x_2}+\frac{1}{\cos x_2}\right|$$ $$=\left|\frac{1}{\cos x_1}-\frac{\sin x_1}{\cos x_1}\right|$$ Since $\frac{\sin x}{\cos x}=\tan x$: $$=\left|\frac{1}{\cos x_1}-\tan x_1\right|$$ Now since the interval which the function is the defined in this particular question is $(0,\frac{\pi}{2})$, $\tan x_1>0$. Therefore, if we write the following term, we'll make it bigger: $$=\left|\frac{1}{\cos x_1}-\tan x_1\right|<\left|\frac{1}{\cos x_1}-0\right|$$ Now we will choose $\delta=\frac{\cos x_1}{\epsilon}$: $$\left|\frac{1}{\cos x_1}-0\right|<\frac{\cos x_1}{\epsilon}$$ Divide by $\cos x_1$ which is positive in the open interval $(0,\frac{\pi}{2})$: $$\left|\frac{1}{1}\right|<\frac{1}{\epsilon}<\epsilon$$ Did I get it right? Thanks, Alan REPLY [2 votes]: I assume you want to use $\epsilon$-$\delta$ (there are much easier ways). Multiply top and bottom by $1+\sin x$. We get $\frac{\cos x}{1+\sin x}$. We want to make $|f(x)-f(y)|\lt \epsilon$. We have $$f(x)-f(y)=\frac{\cos x}{1+\sin x}-\frac{\cos y}{1+\sin y}=\frac{\cos x+\cos x\sin y-\cos y-\sin x\cos y}{(1+\sin x)(1+\sin y)}.$$ This has absolute value less than the absolute value of the numerator. 
So we want to bound $$|\cos x+\cos x\sin y-\cos y-\sin x\cos y|,$$ which is less than or equal to $$|\cos x-\cos y|+|\sin(x-y)|.$$
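A quick numerical check of the final bound (an illustration, not part of the proof): since $(1+\sin x)(1+\sin y)\ge 1$ on $(0,\pi/2)$, we should find $|f(x)-f(y)|\le|\cos x-\cos y|+|\sin(x-y)|$ at randomly sampled points of the interval.

```python
import math
import random

def f(x):
    # The function in question; equals cos(x)/(1 + sin(x)) on (0, pi/2).
    return (1 - math.sin(x)) / math.cos(x)

rng = random.Random(0)
for _ in range(10000):
    # Sample slightly inside the open interval to avoid rounding trouble
    # at the endpoints.
    x = rng.uniform(0.01, math.pi / 2 - 0.01)
    y = rng.uniform(0.01, math.pi / 2 - 0.01)
    bound = abs(math.cos(x) - math.cos(y)) + abs(math.sin(x - y))
    assert abs(f(x) - f(y)) <= bound + 1e-12
```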
{"set_name": "stack_exchange", "score": 2, "question_id": 1306786}
\subsection{Local Mass of Balls Estimates} To prove the properties of the statistical query routines, we will need the following two geometric results about manifolds with bounded reach. \begin{proposition}[{\cite[Proposition 8.2]{Aamari18}}] \label{prop:ball_projection} Let $M \in \manifolds{n}{d}{\rch_{\min}}$, $x \in \R^n$ such that $\dd(x,M) \le \rch_{\min}/8$, and $h \le \rch_{\min}/8$. Then, \begin{align*} \B\left(\pi_M(x), r_h^- \right )\cap M \subseteq \B(x,h) \cap M \subseteq \B \left(\pi_M(x), r_h^+ \right ) \cap M, \end{align*} where $r_h = (h^2- \dd(x,M)^2)_+^{1/2}$, $(r_h^-)^2 = \left (1-\frac{\dd(x,M)}{\rch_{\min}}\right )r_h^2$, and $(r_h^+)^2 = \left (1+\frac{ 2 \dd(x,M)}{\rch_{\min}}\right )r_h^2$. \end{proposition} As a result, one may show that any ball has large mass with respect to a measure $D \in \distributions{n}{d}{\rch_{\min}}{f_{\min}}{f_{\max}}{L}$. \begin{lemma} \label{lem:intrinsic_ball_mass} Let $D \in \distributions{n}{d}{\rch_{\min}}{f_{\min}}{f_{\max}}{L}$ have support $M = \supp(D)$. \begin{itemize} \item For all $p \in M$ and $h \le \rch_{\min}/4$, \[ a_d f_{\min} h^d \le D\bigl(\B(p,h) \bigr) \le A_d f_{\max} h^d, \] where $a_d = 2^{-d} \omega_d$ and $A_d = 2^d \omega_d$. \item For all $x_0 \in \R^n$ and $h \le \rch_{\min}/8$, \[ a'_d f_{\min} (h^2-\dd(x_0,M)^2)_+^{d/2} \le D\bigl(\B(x_0,h) \bigr) \le A'_d f_{\max} (h^2-\dd(x_0,M)^2)_+^{d/2}, \] where $a'_d = (7/8)^{d/2}a_d$ and $A'_d = (5/4)^{d/2}A_d$. \end{itemize} \end{lemma} \begin{proof}[Proof of \Cref{lem:intrinsic_ball_mass}] The first statement is a direct consequence of \cite[Propositions 8.6 \& 8.7]{Aamari18}. The second one follows by combining the previous point with \Cref{prop:ball_projection}. \end{proof} \subsection{Euclidean Packing and Covering Estimates} \label{subsec:euclidean-packing-and-covering} For sake of completeness, we include in this section some standard packing and covering bounds that are used in our analysis. We recall the following definitions. 
An $r$-covering of $K \subseteq \R^n$ is a subset $\mathcal{X}= \set{ x_1,\ldots,x_k } \subseteq K$ such that for all $x \in K$, $\dd(x,\mathcal{X}) \leq r$. An $r$-packing of $K$ is a subset $\mathcal{Y} = \left\lbrace y_1,\ldots,y_k \right\rbrace \subseteq K$ such that for all $y,y' \in \mathcal{Y}$, $\B(y,r) \cap \B(y',r) = \emptyset$ (or equivalently $\norm{y'-y}>2r$). \begin{definition}[Covering and Packing numbers]\label{def:packing_covering} For $K \subseteq \R^n$ and $r>0$, the covering number $\CV_K(r)$ of $K$ is the minimum number of balls of radius $r$ that are necessary to cover $K$: \begin{align*} \CV_K(r) &= \min \set{ {k > 0}~| \text{ there exists a } r\text{-covering of cardinality } k } . \end{align*} The packing number $\PK_K(r)$ of $K$ is the maximum number of disjoint balls of radius $r$ that can be packed in $K$: \begin{align*} \PK_K(r) &= \max \set{ {k > 0}~| \text{ there exists a } r\text{-packing of cardinality } k } . \end{align*} \end{definition} Packing and covering numbers are tightly related, as shown by the following statement. \begin{proposition} \label{prop:packing_covering_link} For every subset $K \subseteq \R^n$ and $r>0$, \begin{align*} \PK_K(2r) \leq \CV_K(2r) \leq \PK_K(r). \end{align*} \end{proposition} \begin{proof}[Proof of \Cref{prop:packing_covering_link}] For the left-hand side inequality, notice that if $K$ is covered by a family of balls of radius $2r$, each of these balls contains at most one point of a maximal $2r$-packing. Conversely, the right-hand side inequality follows from the fact that a maximal $r$-packing is always a $2r$-covering. Indeed, if it were not the case, one could add a point $x_0 \in K$ that is $2r$-away from all of the $r$-packing elements, which would contradict the maximality of this packing. \end{proof} We then bound the packing and covering numbers of the submanifolds with reach bounded below. Note that these bounds depend only on the intrinsic dimension and volumes, but not on the ambient dimension.
\begin{proposition} \label{prop:packing_covering_manifold} For all $M \in \manifolds{n}{d}{\rch_{\min}}$ and $r \leq \rch_{\min} / 8$, \[ \PK_M(r) \geq \frac{\Haus^d(M)}{\omega_d (4r)^d} , \] and \[ \CV_M(r) \leq \frac{\Haus^d(M)}{\omega_d (r/4)^d} . \] \end{proposition} \begin{proof}[Proof of \Cref{prop:packing_covering_manifold}] First, we have $\PK_{M}(r) \geq \CV_{M}(2r)$ from \Cref{prop:packing_covering_link}. In addition, if $\set{p_i}_{1 \leq i \leq N} \subseteq M$ is a minimal $(2r)$-covering of $M$, then by considering the uniform distribution $D_M = \indicator{M} \Haus^d /\Haus^d(M)$ over $M$, using a union bound and applying \Cref{lem:intrinsic_ball_mass}, we get \begin{align*} 1 = D_M\left( \cup_{i = 1}^N \B(p_i,2r) \right) \leq \sum_{i = 1}^N D_M(\B(p_i,2r)) \leq N 2^d \omega_d (2r)^d / \Haus^d(M) . \end{align*} As a result, $ \PK_{M}(r) \geq \CV_{M}(2r) = N \geq \frac{\Haus^d(M)}{\omega_d (4r)^d} . $ For the second bound, use again \Cref{prop:packing_covering_link} to get $\CV_{M}(r) \leq \PK_{M}(r/2)$. Now, by definition, a maximal $(r/2)$-packing $\set{q_j}_{1 \leq j \leq N'}\subseteq M$ of $M$ provides us with a family of disjoint balls of radii $r/2$. Hence, from \Cref{lem:intrinsic_ball_mass}, we get \begin{align*} 1 \geq D_M\left( \cup_{i = j}^{N'} \B(q_j,r/2) \right) = \sum_{j = 1}^{N'} D_M(\B(q_j,r/2)) \geq N' 2^{-d} \omega_d (r/2)^d / \Haus^d(M) , \end{align*} so that $ \CV_{M}(r) \leq \PK_{M}(r/2) = N' \leq \frac{\Haus^d(M)}{\omega_d (r/4)^d} . $ \end{proof} Bounds on the same discretization-related quantities computed on the Euclidean $n$-balls and $k$-spheres will also be useful. \begin{proposition} \label{prop:packing_covering_ball_sphere} \begin{itemize} \item For all $r > 0$, \[ \PK_{\B(0,R)}(r) \geq \left(\frac{R}{2r}\right)^n \text{ and } \CV_{\B(0,R)}(r) \leq \left(1+\frac{2R}{r}\right)^n . \] \item For all integer $1 \leq k < n$ and $r \leq 1/8$, \[ \PK_{\Sphere^{k}(0,1)}(r) \geq 2 \left(\frac{1}{4r}\right)^k . 
\] \end{itemize} \end{proposition} \begin{proof}[Proof of \Cref{prop:packing_covering_ball_sphere}] \begin{itemize}[leftmargin=*] \item From \Cref{prop:packing_covering_link}, we have $\PK_{\B(0,R)}(r) \geq \CV_{\B(0,R)}(2r)$. Furthermore, if $\cup_{i = 1}^N \B(x_i,2r) \supseteq \B(0,R)$ is a minimal $2r$-covering of $\B(0,R)$, then by a union bound, $ \omega_n R^n = \Haus^n(\B(0,R)) \leq N \omega_n (2r)^n , $ so that $\PK_{\B(0,R)}(r) \geq \CV_{\B(0,R)}(2r) = N \geq (R/(2r))^n$. For the second bound, we use again \Cref{prop:packing_covering_link} to get $\CV_{\B(0,R)}(r) \leq \PK_{\B(0,R)}(r/2)$, and we notice that any maximal $(r/2)$-packing of $\B(0,R)$ with cardinality $N'$ provides us with a family of disjoint balls of radii $r/2$, all contained in $\B(0,R)^{r/2} = \B(0,R+r/2)$. A union bound hence yields $ \omega_n (R+r/2)^n = \Haus^n(\B(0,R+r/2)) \geq N' \Haus^n(\B(0,r/2)) = N' \omega_n (r/2)^n $, yielding $\CV_{\B(0,R)}(r) \leq \PK_{\B(0,R)}(r/2) = N' \leq (1+2R/r)^n$. \item Notice that $\Sphere^{k}(0,1) \subseteq \R^n$ is a compact $k$-dimensional submanifold without boundary, with reach $\rch_{\Sphere^{k}(0,1)} = 1$ and volume $\Haus^k(\Sphere^k(0,1)) = \sigma_k$. Applying \Cref{prop:packing_covering_manifold} together with elementary calculations hence yields \begin{align*} \PK_{\Sphere^k(0,1)}(r) &\geq \frac{\sigma_k}{\omega_k} \left(\frac{1}{4r}\right)^k \\ &= \left( \dfrac{ 2\pi^{(k+1)/2} }{ \Gamma\left( \frac{k+1}{2} \right) } \right) \left( \dfrac{ \pi^{k/2} }{ \Gamma\left( \frac{k}{2} +1 \right) } \right)^{-1} \left(\frac{1}{4r}\right)^k \\ &= 2\sqrt{\pi} \frac{\Gamma\left( \frac{k}{2} +1 \right)}{\Gamma\left( \frac{k+1}{2} \right)} \left(\frac{1}{4r}\right)^k \\ &\geq 2 \left(\frac{1}{4r}\right)^k .
\qedhere \end{align*} \end{itemize} \end{proof} \subsection{Global Volume Estimates} The following bounds on the volume and diameter of low-dimensional submanifolds of $\R^n$ with positive reach are at the core of \Cref{subsubsec:implicit-bounds-on-parameters}. They exhibit some implicit constraints on the parameters for the statistical models not to be degenerate. \begin{proposition} \label{prop:volume_bounds_under_reach_constraint} For all $M \in \manifolds{n}{d}{\rch_{\min}}$, \[\Haus^d(M) \geq \sigma_d \rch_{\min}^d, \] with equality if and only if $M$ is a $d$-dimensional sphere of radius $\rch_{\min}$. Furthermore, if $M \subseteq \B(0,R)$ then $\rch_{\min} \leq \sqrt{2} R$ and \[ \Haus^d(M) \leq \left(\frac{18R}{\rch_{\min}} \right)^n \omega_d \left( \frac{\rch_{\min}}{2} \right)^d . \] \end{proposition} \begin{proof}[Proof of \Cref{prop:volume_bounds_under_reach_constraint}] For the first bound, note that the operator norm of the second fundamental form of $M$ is everywhere bounded above by $1/\rch_{\min}$ \cite[Proposition 6.1]{Niyogi08}, so that \cite[(3)]{Almgren86} applies and yields the result. For the next statement, note that \cite[Theorem 3.26]{Hatcher02} ensures that $M$ is not homotopy equivalent to a point. As a result, \cite[Lemma A.3]{Aamari19} applies and yields \begin{align*} \rch_{\min} &\leq \rch_M \\ &\leq \diam(M)/\sqrt{2} \\ &\leq \diam(\B(0,R))/\sqrt{2} \\ &= \sqrt{2} R. \end{align*} For the last bound, consider a $(\rch_{\min}/8)$-covering $\set{z_i}_{1 \leq i \leq N}$ of $\B(0,R)$, which can be chosen so that $N \leq \left( 1 + \frac{2R}{\rch_{\min}/8} \right)^n \leq \left(\frac{18R}{\rch_{\min}} \right)^n$ from \Cref{prop:packing_covering_ball_sphere}. 
Applying \Cref{lem:intrinsic_ball_mass} with $h=\rch_{\min}/8$, we obtain \begin{align*} \Haus^d(M \cap \B(z_i,\rch_{\min}/8)) &\leq (5/4)^{d/2} \times 2^d \omega_d ((\rch_{\min}/8)^2 - \dd(z_i,M)^2)_+^{d/2} \\ &\leq \omega_d \left( \frac{\rch_{\min}}{2} \right)^d , \end{align*} for all $i \in \set{1,\ldots,N}$. A union bound then yields \begin{align*} \Haus^d(M) &= \Haus^d\left( \cup_{i=1}^N M \cap \B(z_i,\rch_{\min}/8) \right) \\ &\leq N \omega_d \left( \frac{\rch_{\min}}{2} \right)^d \\ &\leq \left(\frac{18R}{\rch_{\min}} \right)^n \omega_d \left( \frac{\rch_{\min}}{2} \right)^d , \end{align*} which concludes the proof. \end{proof}
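As a concrete illustration of the packing lower bound of \Cref{prop:packing_covering_manifold} (a numerical sketch, not part of the proofs), take $M=\Sphere^1(0,1)\subseteq\R^2$, for which $\Haus^1(M)=2\pi$, $\omega_1=2$ and $\rch_M=1$. Any explicit $r$-packing witnesses a lower bound on $\PK_M(r)$, and a greedy construction already exceeds the guaranteed $\frac{2\pi}{\omega_1(4r)}=\frac{\pi}{4r}$ for $r\le 1/8$.

```python
import math

def greedy_packing_circle(r, ncand=20000):
    # Greedily select points on the unit circle S^1 in R^2 that are
    # pairwise more than 2r apart, i.e. an r-packing in the sense above.
    pts = []
    for k in range(ncand):
        t = 2 * math.pi * k / ncand
        p = (math.cos(t), math.sin(t))
        if all(math.dist(p, q) > 2 * r for q in pts):
            pts.append(p)
    return pts

r = 0.05
packing = greedy_packing_circle(r)
# Proposition: PK_M(r) >= H^1(M) / (omega_1 * (4r)) = 2*pi/(8r) = pi/(4r).
assert len(packing) >= math.pi / (4 * r)
```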
{"config": "arxiv", "file": "2011.04259/miscellaneous.tex"}
TITLE: Dirichlet distribution, sum of Beta distributions QUESTION [0 upvotes]: I currently have a problem about Dirichlet distributed Variables. In one of the papers I am currently reading it says: Let $S=(S_1,...,S_m)\sim Dir(\delta\omega_1,..., \delta \omega_m)$, with $\sum_{j=1}^m \omega_j=1$ and $\delta >0$ and let $Z=(Z_1,...,Z_m)$, with $Z_j= \sum_{i =1}^j S_i$. Establish that: $Z_j \sim Beta(\delta \zeta_j,\delta (1- \zeta_j))$ with $\zeta_j= \sum_{i =1}^j \omega_i$. What I know: I know that the marginal distribution of $S_j$ is a beta distribution with: $ S_j \sim Beta(\delta\omega_j,\delta\sum_{i=1}^m \omega_i-\delta\omega_j)=Beta(\delta\omega_j,\delta(1-\omega_j)) $ So it looks like there is an additive characteristic. How can this be established? REPLY [0 votes]: The "additive characteristic" you speak of isn't over independent Beta variables. It's conditional on the sum being less than 1. That is to say, $S_1$ and $S_2$ are not independent: although their support are on $[0,1]$, their sum cannot exceed $1$ because they are drawn from the Dirichlet distribution whose support is on $\boldsymbol S \in \{ \boldsymbol s \in [0,1]^m : \sum s_i \le 1 \}$.
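A Monte Carlo sanity check of the aggregation property (a sketch; the parameter values are arbitrary): sampling the Dirichlet via normalized independent Gamma variables, the partial sum $Z_2=S_1+S_2$ should have the $Beta(\delta\zeta_2,\delta(1-\zeta_2))$ mean $\zeta_2$ and variance $\zeta_2(1-\zeta_2)/(\delta+1)$.

```python
import random

rng = random.Random(42)
delta = 2.0
omega = [0.2, 0.3, 0.5]
zeta2 = omega[0] + omega[1]  # = 0.5

N = 20000
samples = []
for _ in range(N):
    # Dirichlet(delta*omega) via normalized independent Gamma draws.
    g = [rng.gammavariate(delta * w, 1.0) for w in omega]
    s = sum(g)
    samples.append((g[0] + g[1]) / s)  # Z_2 = S_1 + S_2

mean = sum(samples) / N
var = sum((z - mean) ** 2 for z in samples) / N
# Beta(delta*zeta2, delta*(1 - zeta2)) moments, within Monte Carlo error.
assert abs(mean - zeta2) < 0.015
assert abs(var - zeta2 * (1 - zeta2) / (delta + 1)) < 0.01
```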
{"set_name": "stack_exchange", "score": 0, "question_id": 965781}
TITLE: Is a certain property of a continuous map preserved under "surjectification"? QUESTION [2 upvotes]: Let $X$ and $Y$ be compact Hausdorff spaces and let $\varphi:X\to Y$ be continuous with the property that if $A$ is a nowhere dense zero-set in $Y$, then $\varphi^{-1}(A)$ is nowhere dense in $X$. Let $Z=\varphi(X)$. Does $\varphi$ still have the analogous property as a map into $Z$? Note that the condition implies that the set $J_{Z}$ of $f\in C(Y)$ which vanish on $Z$ is a $\sigma$-ideal in $C(Y)$, which means that it contains all existing supremums of countable sets in $J_{Z}$. Such $Z$'s could be considered countable analogues of regular closed sets, because $Z$ is regular if and only if $J_{Z}$ contains existing supremums of all sets. REPLY [2 votes]: The answer here is negative: for $Y$ take the remainder $\beta\omega\setminus\omega$ of the Stone-Cech compactification of the discrete space $\omega$ of finite ordinals. In the space $Y$ take any countable discrete subspace $D$ and let $Z$ be the closure of $D$. Since $Y$ has no isolated points, the space $D$ is nowhere dense in $Y$ and so is its closure $Z$. Since $D$ is countable and discrete in the compact space $Z$, the remainder $R=Z\setminus D$ is a nonempty functionally closed nowhere dense set in $Z$. Consider the space $X=(Z\times\{0\})\cup (R\times\{1\})$ and the natural projection $\varphi:X\to Z\subseteq Y$. Observe that the set $R$ is functionally closed and nowhere dense in $Z$ and its preimage $\varphi^{-1}[R]$ contains the nonempty clopen subset $R\times\{1\}$ of $X$. On the other hand, each nonempty $G_\delta$-subset of the space $Y=\beta\omega\setminus\omega$ has nonempty interior in $Y$. So, $Y$ contains no functionally closed nowhere dense subsets and hence the function $\varphi:X\to Y$ has the desired property: for every nowhere dense functionally closed set $A$ in $Y$ the preimage $\varphi^{-1}(A)$ has any desired property, in particular is nowhere dense in $X$.
{"set_name": "stack_exchange", "score": 2, "question_id": 415867}
TITLE: Find the area of region QUESTION [0 upvotes]: Find the area of the region bounded by the curves $y=2x^2-6x+5$ and $y=x^2+6x-15$. I found the critical points $2$ and $10$. My trouble is making the integral work. My teacher's answer key says that the area is $85.33$ but I keep getting something different. REPLY [1 votes]: As you observed, the quadratic polynomials intersect at $x=2,10$. On $[2,10]$, we have $x^2+6x-15\geq 2x^2-6x+5$ since $$ (x^2+6x-15)-(2x^2-6x+5)=-x^2+12x-20=-(x-2)(x-10) $$ is positive between the roots. So the area is $$ \int_2^{10}((x^2+6x-15)-(2x^2-6x+5))dx=\int_2^{10}(-x^2+12x-20)dx $$ $$ =-\frac{x^3}{3}+6x^2-20x \Big|_2^{10}=-\frac{1000}{3}+600-200+\frac{8}{3}-24+40 $$ $$ =-\frac{992}{3}+416=\frac{256}{3}=85.3333\ldots $$
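The arithmetic can be double-checked exactly with rational arithmetic (a small sketch):

```python
from fractions import Fraction

def F(x):
    # Antiderivative of -x^2 + 12x - 20.
    x = Fraction(x)
    return -x**3 / 3 + 6 * x**2 - 20 * x

area = F(10) - F(2)
assert area == Fraction(256, 3)
print(float(area))  # 85.33333333333333
```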
{"set_name": "stack_exchange", "score": 0, "question_id": 378969}
\begin{document} \title[Positivity of the renormalized volume of almost-Fuchsian manifolds]{Positivity of the renormalized volume of almost-Fuchsian hyperbolic $3$-manifolds} \author{Corina Ciobotaru} \thanks{C.~C. was supported by the FRIA} \address{Corina Ciobotaru, Universit\'e de Gen\`eve, Section de math\'ematiques, 2-4 rue du Li\`{e}vre, CP 64, 1211 Gen\`eve 4, Switzerland} \email{corina.ciobotaru@unige.ch} \author{Sergiu Moroianu} \thanks{S.~M. was partially supported by the CNCS project PN-II-RU-TE-2011-3-0053} \address{Sergiu Moroianu, Institutul de Matematic\u{a} al Academiei Rom\^{a}ne\\ P.O. Box 1-764\\ RO-014700 Bucharest\\ Romania} \email{moroianu@alum.mit.edu} \date{\today} \begin{abstract} We prove that the renormalized volume of almost-Fuchsian hyperbolic $3$-ma\-ni\-folds is non-negative, with equality only for Fuchsian manifolds. \end{abstract} \maketitle \section{Introduction} The renormalized volume $\Volr$ is a numerical invariant associated to an infinite-volume Riemannian manifold with some special structure near infinity, extracted from the divergent integral of the volume form. Early instances of renormalized volumes appear in Henningson--Skenderis~\cite{hs} for asymptotically hyperbolic Einstein metrics, and in Krasnov~\cite{Kr} for Schottky hyperbolic $3$-manifolds. In Takhtajan--Teo~\cite{TaTe} the renormalized volume is identified to the so-called Liouville action functional, a cohomological quantity known since the pioneering work of Takhtajan--Zograf \cite{TaZo} to be a K\"ahler potential for the Weil--Petersson symplectic form on the deformation space of certain Kleinian manifolds: \begin{equation} \partial\overline{\partial} \Volr= \frac{1}{8i}\omega_{\mathrm{WP}}. \label{kp}\end{equation} Krasnov--Schlenker~\cite{KS08} studied the renormalized volume using a geometric description in terms of foliations by equidistant surfaces. 
In the context of quasi-Fuchsian hyperbolic $3$-manifolds they computed the Hessian of $\Volr$ at the Fuchsian locus. They also gave a direct proof of the identity \eqref{kp} in that setting. Recently, Guillarmou--Moroianu~\cite{CS} studied the renormalized volume $\Volr$ in a general context, for geometrically finite hyperbolic $3$-manifolds without rank-$1$ cusps. There, $\Volr$ appears as the log-norm of a holomorphic section in the Chern--Simons line bundle over the Teichm\"uller space. Huang--Wang~\cite{huangwang} looked at renormalized volumes in their study of almost-Fuchsian hyperbolic $3$-manifolds. However, their renormalization procedure does not involve uniformization of the surfaces at infinity, hence the invariant $RV$ thus obtained is constant (and negative) on the moduli space of almost-Fuchsian metrics. There is a superficial analogy between $\Volr$ and the mass of asymptotically Euclidean mani\-folds. Like in the positive mass conjecture, one may ask if $\Volr$ is positive for all convex co-compact hyperbolic $3$-manifolds, or at least for quasi-Fuchsian manifolds. One piece of supporting evidence follows from the computation by Takhtajan--Teo~\cite{TaTe} of the variation of $\Volr$ (or equivalently, of the Liouville action functional) on deformation spaces. In the setting of quasi-Fuchsian manifolds, Krasnov--Schlenker~\cite{KS08} noted that the functional $\Volr$ vanishes at the Fuchsian locus. When one component of the boundary is kept fixed, the only critical point of $\Volr$ is at the unique Fuchsian metric. Moreover, this point is a local minimum because the Hessian of $\Volr$ is positive definite there as it coincides with the Weil-Petersson metric. Therefore, at least in a neighborhood of the Fuchsian locus, we do have positivity. 
We emphasize that to ensure vanishing of the renormalized volume for Fuchsian manifolds, the renormalization procedure used in Krasnov--Schlenker~\cite{KS08} differs from Guillarmou--Moroianu~\cite{CS} or from Huang--Wang~\cite{huangwang} by the universal constant $2\pi(1-g)$, where $g\geq 2$ is the genus. It is the definition from Krasnov--Schlenker~\cite{KS08} that we use below. These results are not sufficient to conclude that $\Volr$ is positive since the Teichm\"uller space is not compact and $\Volr$ is not proper (by combining the results in Schlenker~\cite{Sch} and Brock~\cite{Brock}, one sees that the difference between $\Volr$ and the Teichm\"uller distance is bounded, while the Teichm\"uller metric is incomplete). Another piece of evidence towards positivity was recently found by Schlenker~\cite{Sch}, who proved that $\Volr$ is bounded from below by some explicit (negative) constant. In this note we prove the positivity of $\Volr$ on the almost-Fuchsian space, which is an explicit open subset of the space of quasi-Fuchsian metrics. While this improves the local positivity result of Krasnov--Schlenker~\cite{KS08}, it does not, of course, prove positivity for every quasi-Fuchsian metric, which is therefore left for further study. \section{Almost-Fuchsian hyperbolic $3$-manifolds} \begin{definition} \label{def::quasi-Fuchsian} A quasi-Fuchsian hyperbolic 3-manifold $X$ is the quotient of $\mathbb{H}^3$ by a quasi-Fuchsian group, i.e., a Kleinian subgroup $\Gamma$ of $\mathrm{PSL}_2(\bC)$ whose limit set is a Jordan curve. \end{definition} When the group $\Gamma$ is a co-compact Fuchsian group (a subgroup of $\mathrm{PSL}_2(\bR)$), the Jordan curve in question is the $1$-point compactification of the real line, and $\Gamma\backslash \mathbb{H}^3$ is called a Fuchsian hyperbolic 3-manifold.
Equivalently, a quasi-Fuchsian manifold $(X,g)$ is a complete hyperbolic $3$-manifold diffeomorphic to $\bR \times \Sigma_0$, where $\Sigma_0$ is a compact Riemann surface of genus $\geq 2$ and with the hyperbolic Riemannian metric $g$ on $X$ described as follows. There exist $t_{0}^{-} \leq t_{0}^{+} \in \bR$ such that the metric $g$ on $[t_{0}^+,\infty) \times \Sigma_0$, respectively on $(-\infty,t_{0}^-] \times \Sigma_0$, is given by \begin{align} \label{metricg} g=dt^2+ g_t^{\pm},&& g_t^{\pm}=g_0^{\pm}((\cosh(t)+A^{\pm}\sinh(t))^2\cdot,\cdot), \end{align} where $t \in[t_{0}^+,\infty)$, respectively, $t \in (-\infty,t_{0}^-]$, $g_0^{\pm}$ is a metric on $\Sigma_{0}^{\pm} = \{t_{0}^{\pm}\} \times \Sigma_0$ and $A^{\pm}$ is a symmetric endomorphism of $T\Sigma_0^{\pm}$ satisfying the Gauss and Codazzi--Mainardi equations \begin{align} \det(A^{\pm})= {}&\kappa^{\pm}+1,\label{hGe}\\ d^\nabla \II^{\pm} ={}&0.\nonumber \end{align} Here, $\kappa^{\pm}$ is the Gaussian curvature of $(\Sigma_0^{\pm},g_0^{\pm})$ and $d^\nabla$ represents the de Rham differential twisted by the Levi--Civita connection acting on $1$-forms with values in $T^*\Sigma_0^{\pm}$. By definition, $\II^{\pm}:=g_0^{\pm}(A^{\pm}\cdot,\cdot)$, called the second fundamental form of the embedding $\Sigma_0^{\pm}\hookrightarrow X$, is the bilinear form associated to $A^{\pm}$. Notice that the eigenvalues of $A^{\pm}$ should be less than $1$ in absolute value for the expression~(\ref{metricg}) to be a well-defined metric for all $t \in \bR$. \begin{definition}[Uhlenbeck~\cite{Uhl}] \label{def::almost-Fuch} An almost-Fuchsian hyperbolic $3$-manifold $(X,g)$ is a quasi-Fuchsian hyperbolic $3$-manifold containing a closed minimal surface $\Sigma$ whose principal curvatures belong to $(-1,1)$. 
\end{definition} Roughly speaking, an almost-Fuchsian manifold is obtained as a small deformation of a Fuchsian manifold, which, by definition, is the quotient of $\mathbb{H}^3$ by the action of a co-compact Fuchsian group. In particular, Fuchsian manifolds are almost-Fuchsian. \begin{remark} \label{rem::almost-F_all_t} By Uhlenbeck~\cite[Theorem~3.3]{Uhl}, an almost-Fuchsian hyperbolic $3$-manifold $X$ admits a {\it unique} minimally embedded surface $\Sigma$, whose principal curvatures are thus in $(-1,1)$. By taking $\Sigma_0^{\pm}= \Sigma$, the expression~(\ref{metricg}) is well-defined for all $t \in \bR$. \end{remark} \subsection{Funnel ends} \label{subsec::funnel_ends} Let $(X,g)=\Gamma\backslash \mathbb{H}^3$ be a quasi-Fuchsian manifold. Recall that the infinity of $X$ is defined as the space of geodesic rays escaping from every compact set, modulo the equivalence relation of being asymptotically close to each other. By the Jordan separation theorem, the complement of the limit set of $\Gamma$ consists of two disjoint topological disks. The infinity of a quasi-Fuchsian manifold is thus a disjoint union of two `ends', corresponding to geodesics in $\mathbb{H}^3$ pointing towards one or the other of these two connected components. For an end of $X$, a \emph{funnel} is a cylinder $[t_0,\infty)\times \Sigma \hookrightarrow X$ isometrically embedded in $X$ so that the pullback of the hyperbolic metric $g$ of $X$ is of the form~(\ref{metricg}) for $t \in [t_0,\infty)$, where $A$ satisfies the Gauss and Codazzi--Mainardi equations as above. Notice that the gradient of the function $t$ on the funnel $ [t_0,\infty)\times \Sigma$ is a geodesic vector field of length $1$; thus $\{\infty\} \times \Sigma$ is in bijection with the corresponding end of $X$. A funnel has an obvious smooth compactification to a manifold with boundary, namely, $[t_0,\infty]\times \Sigma$.
On this compactification, the Riemannian metric $e^{-2t}g$ is smooth in the variable $e^{-t}\in [0,e^{-t_0})$. Define $h_0:=\lim\limits_{t\to\infty} e^{-2t}g$ to be the metric induced on the surface at infinity $\{\infty\}\times \Sigma$. Explicitly, \[h_0=\tfrac14 g_0((1+A)^2\cdot,\cdot).\] In this way, one obtains a smooth compactification $\overline{X}$ of $X$, together with a metric at infinity, both depending at first sight on the funnels chosen inside each of the two ends of $X$. We emphasize however that each end of $X$ admits several funnel structures. Consider another cylinder $[t_0',\infty)\times\Sigma'\hookrightarrow X$ isometrically embedded in $X$, for a different function $t'$ with respect to which the metric $g$ takes the form~(\ref{metricg}). Then the gradient flow of $t'$ defines another foliation $[t_0',\infty)\times\Sigma'$. If this funnel determines the same end of $X$ as $[t_0,\infty)\times \Sigma$, then the two funnels intersect near infinity. Up to increasing $t_0'$ if necessary, we can assume that $[t_0',\infty)\times\Sigma' \hookrightarrow [t_0,\infty)\times \Sigma$. Moreover, for $t_0'$ large enough, the complement of the funnel $[t_0',\infty)\times\Sigma'$ in $X$ is geodesically convex, hence its boundary surface $\{t_0'\}\times \Sigma'$ intersects each half-geodesic along the $t$ flow in a unique point. Thus $\Sigma'$ is diffeomorphic to $\Sigma$. The identity map of $X$ extends smoothly to the corresponding compactifications induced by the chosen foliation structures of each of the two ends; so the smooth compactification of $X$ is canonical (i.e., independent of the choice of the funnels). Moreover, the induced metrics $h_0,h_0'$ with respect to the two foliations are conformal to each other. It follows that the metric $g$ induces a conformal class $[h_0]$ on $\{\infty\}\times\Sigma \subset \partial_\infty X$.
Conversely, we recall that for a quasi-Fuchsian manifold $(X,g)$, every metric $h_0^\pm$ in the associated conformal class on each of the two ends of $X$ is realized (near infinity) by a unique funnel, using a special function $t$ that decomposes the funnel as presented above. \section{The renormalized volume} \label{subsec::renorm_vol} Let $(X,g)$ be a quasi-Fuchsian hyperbolic $3$-mani\-fold. For each of the two ends of $X$ we choose a funnel with foliation structure $[t_j,\infty)\times \Sigma_j$, where $j \in \{1,2\}$. Let $h_0^1,h_0^2$ be the corresponding metrics on the boundary at infinity of $X$. Choose $t_0:=\max\{t_1,t_2\}$ and set $\Sigma=\Sigma_1\sqcup\Sigma_2$, so that $[t_0,\infty)\times \Sigma$ is isometrically embedded in $X$. Denote by $h_0$ the metric $(h_0^1,h_0^2)$ on the disconnected surface $\Sigma$. For $t\geq t_0$, denote by $K_t$ the complement in $X$ of the funnels $[t,\infty)\times \Sigma$, which is a compact manifold with boundary $\{t\}\times \Sigma=:\Sigma_t$. Let $\II^t$, $H^t:\Sigma_t \to \bR$ be the second fundamental form, respectively the mean curvature function of the boundary surfaces $\Sigma_t=\partial K_t$. The renormalized volume of $X$ with respect to the metrics $h_0$ (or equivalently, with respect to the corresponding functions $t$) is defined via the so-called Riesz regularization. \begin{definition} \label{def::renorm_vol} Let $(X,g)$ be a quasi-Fuchsian hyperbolic $3$-manifold which is decomposed into a finite-volume open set $K$ and two funnels. As explained above, let $h_0$ be the metric in the induced conformal class at infinity of $X$ corresponding to $g$ and the chosen funnels. The renormalized volume with respect to $h_0$ is defined by \[\Volr(X,g;h_0):=\Vol(K)+\FP_{z=0} \int_{X\setminus K} e^{-z|t|} dg,\] where by $\FP$ we denote the finite part of a meromorphic function. 
\end{definition} In Definition~\ref{def::renorm_vol}, we have implicitly used the fact, which follows from the proof of Proposition~\ref{propks} below, that the integral on the right-hand side is meromorphic in $z$. In Krasnov--Schlenker~\cite{KS08}, the renormalized volume is defined by integrating the volume form on increasingly large bounded domains and discarding some explicit terms which are divergent in the limit. We refer, for example, to Albin~\cite{albin} for a discussion of the link between these two types of renormalizations. For the sake of completeness, we include here a proof of the equality between these two definitions. Some care is needed, since the addition to the definition of a universal constant, harmless in Guillarmou--Moroianu~\cite{CS} or Huang--Wang~\cite{huangwang}, drastically alters the positivity properties of $\Volr$. \begin{prop} \label{propks} The quantity \[ \Volks(X,g;h_0):=\Vol(K_t)-\tfrac{1}{4}\int_{\Sigma_t} H^t dg_t +t\pi \chi(\Sigma), \] called the (Krasnov--Schlenker) renormalized volume $\Volks$, is independent of $t\in[t_0,\infty)$, and coincides with the renormalized volume $\Volr(X,g;h_0)$. \end{prop} The definition of $\Volks$ and the independence of $t$ are due to Krasnov--Schlenker~\cite{KS08}, see also Schlenker~\cite[Lemma 3.6]{Sch}. \begin{proof} We use the notation from the beginning of this section. Let \begin{align*} g=dt^2+ g_t,&& g_t=g_0((\cosh(t)+A\sinh(t))^2\cdot,\cdot) \end{align*} be the expression of the metric $g$ in the fixed product decomposition of the funnels $[t_0,\infty)\times \Sigma$, as in \eqref{metricg}. Recall that $\II^t=g_t(A_t \cdot, \cdot)=\frac{1}{2}g_t'$ and $H^t=\Tr(A_t)$.
For every $t \in [t_0,\infty)$, one obtains \begin{align*} dg_t={}&[\cosh^2t+\det(A)\sinh^2(t)+ \Tr(A)\cosh(t)\sinh(t)]dg_0\\ ={}&[\cosh^2t+(\kappa_{g_0}+1)\sinh^2(t)+ H^0\cosh(t)\sinh(t)]dg_0 \end{align*} (in the second line we have used \eqref{hGe} and the definition of $H^0$), and \begin{align*} \tfrac{1}{2}g_t'={}&g_0((\cosh(t)+A\sinh(t))(\cosh(t)+A\sinh(t))'\cdot,\cdot)\\ ={}&g_t((\cosh(t)+A\sinh(t))^{-2}(\cosh(t)+A\sinh(t))(\cosh(t)+A\sinh(t))'\cdot,\cdot)\\ ={}&g_t((\cosh(t)+A\sinh(t))^{-1}(\sinh(t)+A\cosh(t))\cdot,\cdot),\\ \intertext{so} A_t={}&(\cosh(t)+A\sinh(t))^{-1}(\sinh(t)+A\cosh(t)). \end{align*} Denote by $\lambda_1,\lambda_2$ the eigenvalues of the symmetric endomorphism $A$, so $\lambda_1+\lambda_2=H^0$ and $\lambda_1\lambda_2 = \kappa_{g_0}+1$. We deduce \begin{align*} H^tdg_t={}& (\cosh(2t)H^0+\sinh(2t)(\kappa_{g_0}+2))dg_0. \end{align*} The independence of $t$ is a straightforward consequence of the above formulas; we omit the details, since the argument coincides with Lemma~3.6 from Schlenker~\cite{Sch}. Let us prove the second part of the proposition. Fix $t \in (t_0, \infty)$ and use as a new variable $x \in [t,\infty)$. The above equations are of course valid with $t$ replaced by $x$.
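Two steps of this proof lend themselves to a quick symbolic check (our verification aid, not part of the original argument): the identity for $H^tdg_t$ just derived, and the finite parts at $z=0$ of the basic integrals $\int_t^{\infty}e^{-zx}e^{\pm 2x}dx$ and $\int_t^{\infty}e^{-zx}dx$ entering the Riesz regularization.

```python
import sympy as sp

t, z = sp.symbols('t z', positive=True)
l1, l2 = sp.symbols('lambda1 lambda2', real=True)
ch, sh = sp.cosh(t), sp.sinh(t)

# --- the identity H^t dg_t = (cosh(2t) H^0 + sinh(2t)(kappa_{g_0}+2)) dg_0 ---
mu1 = (sh + l1*ch) / (ch + l1*sh)        # eigenvalues of A_t
mu2 = (sh + l2*ch) / (ch + l2*sh)
H_t = mu1 + mu2                          # H^t = Tr(A_t)
density = (ch + l1*sh) * (ch + l2*sh)    # dg_t / dg_0 = det(cosh t + A sinh t)
H0, kappa0 = l1 + l2, l1*l2 - 1          # Tr(A), and Gauss: det(A) = kappa_{g_0} + 1

lhs = sp.cancel(H_t * density)
rhs = sp.expand_trig(sp.cosh(2*t)*H0 + sp.sinh(2*t)*(kappa0 + 2))
assert sp.simplify(lhs - rhs) == 0

# --- finite parts (z^0 Laurent coefficients at z = 0) of the model integrals ---
def finite_part(expr):
    return sp.series(expr, z, 0, 1).removeO().expand().coeff(z, 0)

I_plus  = sp.exp((2 - z)*t) / (z - 2)    # int_t^oo e^{-zx} e^{2x} dx  (Re z > 2)
I_minus = sp.exp(-(z + 2)*t) / (z + 2)   # int_t^oo e^{-zx} e^{-2x} dx
I_zero  = sp.exp(-z*t) / z               # int_t^oo e^{-zx} dx, simple pole at z = 0

assert sp.simplify(finite_part(I_plus) + sp.exp(2*t)/2) == 0    # FP = -e^{2t}/2
assert sp.simplify(finite_part(I_minus) - sp.exp(-2*t)/2) == 0  # FP =  e^{-2t}/2
assert sp.simplify(finite_part(I_zero) + t) == 0                # FP = -t
```

The last line is the source of the linear term $t\pi\chi(\Sigma)$ once Gauss--Bonnet is applied to the curvature integral.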
By the Gauss equation and the expression of $dg_x$ with respect to $dg_0$ in the variable $x$, we have: \begin{align*} \lefteqn{\FP_{z=0} \int_{X \setminus K_{t}} e^{-z|x|} dg}\\ &= \FP_{z=0} \int_{t}^{\infty} \int_{\Sigma_x}e^{-z|x|} dg_xdx\\ &=\FP_{z=0} \int_{t}^{\infty} \int_{\Sigma}e^{-z|x|}[(\cosh(x))^{2}+(\kappa_{g_0}+1)(\sinh(x))^2+ H^0\cosh(x)\sinh(x)]dg_0dx\\ &=\FP_{z=0} \int_{t}^{\infty} \int_{\Sigma}e^{-z|x|}\left(e^{2x}\left(\frac{\kappa_{g_0}}{4} + \frac{1}{2}+ \frac{H^{0}}{4}\right)+e^{-2x}\left(\frac{\kappa_{g_0}}{4}+ \frac{1}{2}- \frac{H^{0}}{4}\right) -\frac{\kappa_{g_0}}{2}\right)dg_0dx\\ &=\FP_{z=0}\left[\frac{1}{z-2}e^{(2-z)t}\int_{\Sigma}\left(\frac{\kappa_{g_0}}{4}+ \frac{1}{2} + \frac{H^{0}}{4}\right)dg_0\right.\\ &\hspace{1.7cm} + \left.\frac{1}{2+z}e^{-(z+2)t}\int_{\Sigma}\left(\frac{\kappa_{g_0}}{4} + \frac{1}{2}-\frac{H^{0}}{4}\right)dg_0-\frac{e^{-zt}}{z}\int_{\Sigma}\frac{\kappa_{g_0}}{2} dg_0\right]\\ &=-\frac{1}{2}e^{2t}\int_{\Sigma}\left(\frac{\kappa_{g_0}}{4}+ \frac{1}{2}+ \frac{H^{0}}{4}\right)dg_0 + \frac{1}{2}e^{-2t}\int_{\Sigma}\left(\frac{\kappa_{g_0}}{4}+ \frac{1}{2}-\frac{H^{0}}{4}\right)dg_0+t\pi \chi(\Sigma)\\ &=-\sinh(2t)\int_{\Sigma}\frac{\kappa_{g_0}+2}{4}dg_0 -\cosh(2t)\int_{\Sigma}\frac{H^{0}}{4}dg_0+t\pi \chi(\Sigma)\\ &=-\frac{1}{4}\int_{\Sigma_t} H^t dg_t +t\pi \chi(\Sigma). \end{align*} When evaluating the finite part, the term $-\frac{e^{-zt}}{z}\int_{\Sigma}\frac{\kappa_{g_0}}{2} dg_0$ contributes $t\pi \chi(\Sigma)$, since $\FP_{z=0}\frac{e^{-zt}}{z}=-t$ and $\int_{\Sigma}\kappa_{g_0}dg_0=2\pi\chi(\Sigma)$ by Gauss--Bonnet; the last equality uses the expression of $H^tdg_t$ in terms of $dg_0$ obtained above. \end{proof} Given a quasi-Fuchsian manifold $(X,g)$, one would like to have a canonical definition of the renormalized volume, which does not depend on the additional choices of the metrics at infinity of $X$. \begin{definition} \label{def::renorm_vol_canonical} The renormalized volume $\Volr(X,g)$ is defined as $\Volr(X,g;h_\cF)$, where the metrics $h_\cF$ at infinity of $X$ that are used for the renormalization procedure are the unique metrics in the conformal class $[h_0]$ having constant Gaussian curvature $-4$.
\end{definition} This type of ``canonical'' renormalization first appeared in Krasnov \cite{Kr}. Notice that, by the Gauss--Bonnet formula, the area, with respect to $h_\cF$, of the boundary at infinity $\{\infty\}\times\Sigma$ of each funnel of $X$ equals $-\frac{\pi\chi(\Sigma)}{2}$. The following lemma appears in Krasnov--Schlenker~\cite[Section 7]{KS08}; for the sake of completeness we include below a (new) proof using our current definition of renormalized volume. \begin{lemma} \label{lem::maxvol} Let $(X,g)$ be a quasi-Fuchsian hyperbolic $3$-manifold. Among all metrics $h_0\in[h_0]$ of area equal to $-\frac{\pi\chi(\Sigma)}{2}$, the renormalized volume $\Volr(X,g;h_0)$ attains its maximum for $h_0=h_\cF$. \end{lemma} The lemma evidently holds for every $\kappa<0$ when we maximize $\Volr$ among metrics of area $-\frac{2\pi\chi(\Sigma)}{\kappa}$ in a fixed conformal class, the maximizer being the unique metric with constant Gaussian curvature $\kappa<0$ in that conformal class. \begin{proof} From Guillarmou--Moroianu--Schlenker~\cite{GMS}, recall the conformal change formula of the renormalized volume. Let $h$ be a metric at infinity of $(X,g)$ and multiply $h$ by $e^{2\omega}$, for some smooth function $\omega:\{\infty\} \times \Sigma\to\bR$. We have that \begin{align}\label{confchrenv} \Volr(X,g;e^{2\omega}h)=\Volr(X,g;h)-\tfrac14 \int_\Sigma (|d\omega|^2_h+2\kappa_h\omega)dh. \end{align} In particular, for $h=h_\cF$ we obtain \begin{equation}\label{ineq1} \Volr(X,g;e^{2\omega}h_\cF)-\Volr(X,g;h_\cF)\leq 2\int_\Sigma \omega dh_\cF. \end{equation} Now, we assume that $e^{2\omega}h_\cF$ has the same area as $h_\cF$; so $\int_\Sigma e^{2\omega}dh_\cF=\int_\Sigma dh_\cF$. Write $\omega=c+\omega^\perp$, with $c$ being a constant and $ \int_\Sigma \omega^\perp dh_\cF=0$ (this is Hodge decomposition for $0$-forms on $\Sigma$).
Using the inequality $e^x\geq 1+x$, valid for all real numbers $x$, we get \begin{align*} \int_\Sigma dh_\cF={}&\int_\Sigma e^{2\omega} dh_\cF= e^{2c}\int_\Sigma e^{2\omega^\perp} dh_\cF \geq e^{2c}\int_\Sigma (1+2\omega^\perp)dh_\cF=e^{2c}\int_\Sigma dh_\cF, \end{align*}implying that $c\leq 0$. Hence $\int_\Sigma \omega dh_\cF=\int_\Sigma c dh_\cF\leq 0$, proving the assertion of the lemma, in light of~(\ref{ineq1}). \end{proof} Moreover, when dilating $h_0$ by a constant greater than $1$, the renormalized volume increases. More precisely, we have: \begin{lemma} \label{lem::ineqdilat} Let $(X,g)$ be a quasi-Fuchsian hyperbolic $3$-manifold. Let $c>0$ and let $[h_0]$ be the induced conformal class on the boundary at infinity of $X$, which by abuse of notation is denoted $\Sigma$. Let $h_0$ be a metric in $[h_0]$. Then \[\Volr(X,g;c^2h_0)=\Volr(X,g;h_0)-\pi\chi(\Sigma) \ln c . \] \end{lemma} \begin{proof} This is a particular case of the formula~(\ref{confchrenv}) for the conformal change of the renormalized volume, in which $\omega$ is constant: \[\Volr(X,g;e^{2\omega}h_0)=\Volr(X,g;h_0)-\tfrac14 \omega\int_\Sigma 2\kappa_{h_0}dh_0=\Volr(X,g;h_0)-\omega \pi \chi(\Sigma) \](in the last equality we have used the Gauss--Bonnet formula). \end{proof} \section{Proof of the main result} \label{subsec::renorm_vol_A-F} Let $(X,g)$ be an almost-Fuchsian manifold. By Uhlenbeck~\cite{Uhl}, recall that $X$ contains a unique embedded minimal surface, which we denote $\Sigma$ in what follows. 
By considering the global decomposition $X=\bR \times \Sigma$ (see Remark~\ref{rem::almost-F_all_t}) we obtain two metrics $h_0^+,h_0^-$ in the corresponding conformal classes at $\pm \infty$ of $X$, defined by $h_0^\pm:= (e^{-2|t|}g)_{|t=\pm\infty}.$ Using the Krasnov--Schlenker definition of the renormalized volume from Proposition~\ref{propks}, it is evident that, with respect to the globally defined function $t$ on the almost-Fuchsian manifold $X$, we have \[\Volr(X,g;h_0^\pm)=0.\] This quantity is therefore not very interesting, but it will prove helpful when examining $\Volr(X,g)$. \begin{remark} The vanishing of $\Volr(X,g;h_0^\pm)$ is essentially the content of Proposition~3.7 in Huang--Wang~\cite{huangwang}, where a slightly different definition is used for the renormalized volume. In loc.\ cit.\ the renormalized volume $RV(X,g;h_0^\pm)$ equals $\pi\chi(\Sigma)$ independently of the metric on $X$, and its sign is interpreted as some sort of ``negativity of the mass''. We defend here the view that the Krasnov--Schlenker definition seems to be the most meaningful, as opposed to Guillarmou--Moroianu--Schlenker~\cite{GMS} or Huang--Wang~\cite{huangwang}, and that with this definition the sign of the volume appears to be positive, at least near the Fuchsian locus. \end{remark} Our goal is to control the renormalized volume of $(X,g)$ when the metric at $\pm \infty$ is $h_{\cF}^\pm$, the unique metrics of Gaussian curvature $-4$ inside the corresponding conformal class $[h_0^{\pm}]$ at infinity of $X$. 
Recall from Definition~\ref{def::renorm_vol_canonical} that, for this canonical choice (with non-standard constant $-4$) we obtain ``the'' renormalized volume of the almost-Fuchsian manifold $(X,g)$: \[\Volr(X,g)=\Volr(X,g;h_\cF^\pm).\] \begin{theorem}\label{thm::pos_renorm_vol} The renormalized volume $\Volr(X,g)$ of an almost-Fuchsian hyperbolic $3$-ma\-ni\-fold $(X,g)$ is non-negative, being zero only at the Fuchsian locus, i.e., for $g$ as in Definition~\ref{def::quasi-Fuchsian} with $A=0$ and $g_0$ hyperbolic. \end{theorem} \begin{proof} Denote the principal curvatures of the unique embedded minimal surface $\Sigma$ of $X$ by $\pm\lambda$ for some continuous function $\lambda:\Sigma\to [0,\infty)$. Recall that $\sup\limits_{x \in \Sigma}|\lambda(x)| <1$ and that the decomposition of the metric $g$ takes the form~(\ref{metricg}), for all $t \in \bR$. \begin{lemma}\label{lem::lemh0} The Gaussian curvature of $h_0^\pm$ is bounded above by $-4$, with equality if and only if $X$ is Fuchsian. \end{lemma} \begin{proof} Let $\Sigma_t$ be the leaf of the foliation at time $t$. We compute the Gaussian curvature $\kappa_{h_0^{+}}$ as the limit of the curvature of $e^{-2 t}g_t$ as $t\to +\infty$. From the proof of Proposition~\ref{propks}, the shape operator of $\Sigma_t$ is $A_t=\tfrac12 g_t^{-1}{g}_t' =(\cosh t+A\sinh t)^{-1}(A\cosh t +\sinh t)$. By the Gauss equation~(\ref{hGe}), we get \begin{align*} \kappa_{g_t}={}&\det A_t-1\\ ={}&\det[(1+A+e^{-2t}(1-A))^{-1}(1+A -e^{-2t}(1-A))]-1\\ ={}&\frac{\left(1-e^{-2t}\frac{1-\lambda}{1+\lambda}\right)\left(1-e^{-2t}\frac{1+\lambda}{1-\lambda}\right)} {\left(1+e^{-2t}\frac{1-\lambda}{1+\lambda}\right)\left(1+e^{-2t}\frac{1+\lambda}{1-\lambda}\right)}-1 \end{align*} so $\kappa_{e^{-2t}g_t}=e^{2t}\kappa_{g_t}$ converges to $-2\left(\frac{1-\lambda}{1+\lambda}+\frac{1+\lambda}{1-\lambda} \right)\leq -4$ as $t\to\infty$. The inequality for $h_0^-$ is proved similarly. 
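The limit computation and the resulting bound can also be confirmed with a short SymPy check (ours, not part of the original proof); here $s$ stands for $e^{-2t}$ and $u$ for $\frac{1-\lambda}{1+\lambda}$.

```python
import sympy as sp

lam = sp.Symbol('lambda', positive=True)   # principal curvature, assumed 0 < lambda < 1
s, u = sp.symbols('s u', positive=True)    # s stands for e^{-2t}

U = (1 - lam) / (1 + lam)
# kappa_{g_t} written in the variable s = e^{-2t}, as in the display above:
kappa_t = ((1 - s*U)*(1 - s/U)) / ((1 + s*U)*(1 + s/U)) - 1

# e^{2t} kappa_{g_t} = kappa_t / s  tends to  -2(u + 1/u)  as t -> oo (s -> 0+)
lim = sp.limit(kappa_t/s, s, 0, '+')
assert sp.simplify(lim + 2*(U + 1/U)) == 0

# 2(u + 1/u) >= 4 for u > 0, with equality iff u = 1, i.e. lambda = 0:
assert sp.simplify(2*(u + 1/u) - 4 - 2*(u - 1)**2/u) == 0
```

The factorization in the last line is the elementary reason for the bound $\kappa_{h_0^\pm}\leq -4$ and for its equality case.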
Clearly the equality holds if and only if $\lambda=0$, i.e., $A= 0$. \end{proof} Using the Gauss--Bonnet formula, Lemma~\ref{lem::lemh0} implies that the area of $(\{\pm\infty\}\times\Sigma, h_0^{\pm})$ is at most equal to $-\pi\chi(\Sigma)/2$, which, again by Gauss--Bonnet, is the area of $h_\cF^\pm$: \begin{align*} -4\int_\Sigma dh_0^\pm \geq \int_\Sigma \kappa_{h_0^\pm} dh_0^\pm =2\pi\chi(\Sigma)=-4 \int_\Sigma dh_\cF^\pm. \end{align*} So \begin{equation}\label{ineqvol} \Vol(\Sigma, h_0^\pm)\leq \Vol(\Sigma, h_\cF^\pm)=-\pi\chi(\Sigma)/2, \end{equation} with equality if and only if $\kappa_{h_0^\pm}=-4$, which is equivalent to $\lambda=0$. Let $c^2:=\Vol(\Sigma, h_\cF^\pm)/\Vol(\Sigma, h_0^\pm)$. By~(\ref{ineqvol}), $c\geq 1$. Applying Lemma~\ref{lem::ineqdilat} we obtain \[\Volr(X,g;h_0^\pm) \leq \Volr(X,g;c^2 h_0^\pm).\] Since, by definition $\Vol(\Sigma, c^2h_0^\pm)= \Vol(\Sigma, h_\cF^\pm)$, Lemma~\ref{lem::maxvol} implies \[\Volr(X,g;c^2h_0^\pm) \leq \Volr(X,g; h_\cF^\pm)=\Volr(X,g).\] These inequalities are enough to conclude that $\Volr(X,g)\geq \Volr(X,g;h_0^\pm)=0$. Let us now analyze the equality case. If $(X,g)$ is Fuchsian, then the unique embedded minimal surface $\Sigma$ of $X$ has vanishing shape operator $A$, therefore $\lambda=-\lambda=0$. By the proof of Lemma~\ref{lem::lemh0}, we obtain that $\kappa_{h_0^{\pm}}=-4$, thus $\Volr(X,g)=0$. Conversely, assume that $(X,g)$ is almost-Fuchsian and that $\Volr(X,g)=0$. This implies that $\Volr(X,g;h_0^\pm)=\Volr(X,g;c^2 h_0^\pm)=\Volr(X,g)= \Volr(X,g;h_\cF^\pm)=0$, where $c^2$ was defined above as $\Vol(\Sigma, h_\cF^\pm)/\Vol(\Sigma, h_0^\pm)$. Thus, $c=1$ implying, by using the equality case in the inequality \eqref{ineqvol}, that $\kappa_{h_0^\pm} \equiv-4$ and that $\lambda=-\lambda=0$. Thus, the minimal surface $\Sigma$ is in fact totally geodesic, hence $(X,g)$ must be Fuchsian. 
\end{proof} \subsection*{Acknowledgments} We are indebted to Andy Sanders and Jean-Marc Schlenker, whom we consulted about almost-Fuchsian metrics and renormalized volumes. We thank the anonymous referee for several remarks improving the presentation of the manuscript. Colin Guillarmou pointed out to one of us (S.~M.) why the question of positivity for the renormalized volume is still open; his explanations are kindly acknowledged.
TITLE: a norm that does not arise from an inner product QUESTION [2 upvotes]: In Pugh’s Real Mathematical Analysis, in order to give an example showing that norms do not necessarily come from inner products, it is stated that the unit sphere for every norm induced by an inner product is smooth but for the maximum norm $||.||_{max}$ the unit sphere is not smooth. I know that, intuitively, by smooth the author means having no corners, but what is the mathematical definition of smooth in this context? REPLY [2 votes]: Since the author seems to make the statement so early in his book, it's probably safe to say that your intuition is good enough. However, here is a way to make it precise: I would interpret "$S$ is smooth" as "$S$ is a smooth submanifold". If $S$ is the unit sphere with respect to a norm, then this would be equivalent to the following statement: ($\star$) There is a smooth function $F:\mathbb{R}^n \rightarrow \mathbb{R}$ with $S=\{x\vert F(x) = 1\}$ and $\nabla F(x)\neq 0$ for all $x\in S$. Now if $S$ is the unit sphere with respect to an inner product $\langle \cdot , \cdot \rangle$, then you can take $F(x) := \langle x , x \rangle$. It's easy to see that this is smooth (at least away from $0$, which is enough) and satisfies $\nabla F(x) = 2x$, hence ($\star$) is true. If $S$ is the unit sphere with respect to the maximum norm, then ($\star$) is not satisfied. First note that $F(x) = \Vert x \Vert_{\max}^2$ is not a smooth function. It might however be possible that another choice for $F$ works. But this is not possible: Note that $S$ is now a cube: Let $x_0$ be one of the corners and take two sequences $a_n,b_n\in S$ which approach $x_0$ from different faces. Since $\nabla F(x) \perp S$ (with respect to the usual dot-product) for all $x\in S$, we get $$\nabla F(x_0)= \lim \nabla F(a_n) \perp \lim \nabla F(b_n) = \nabla F(x_0)$$ and hence $\nabla F(x_0)= 0$, a contradiction.
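One can also see the failure of ($\star$) numerically (my own illustration, not from the book): take $F(x)=\Vert x\Vert_{\max}^2$ in the plane and compare finite-difference gradients on the two faces adjacent to the corner $(1,1)$.

```python
import numpy as np

def F(p):
    """Squared maximum norm."""
    return np.max(np.abs(p))**2

def grad(p, h=1e-6):
    """Central finite-difference gradient of F at p."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (F(p + e) - F(p - e)) / (2*h)
    return g

# Approach the corner (1, 1) of the unit "sphere" from its two adjacent faces:
a = grad(np.array([1.0, 0.99]))   # |x| realizes the max here -> gradient ~ (2, 0)
b = grad(np.array([0.99, 1.0]))   # |y| realizes the max here -> gradient ~ (0, 2)
print(a, b)
```

The two normals stay orthogonal however close we come to the corner, so they cannot converge to a common nonzero $\nabla F(x_0)$.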
\begin{definition}[Definition:Conic Section/Focus] The [[Definition:Point|point]] $F$ is known as the '''focus''' of the [[Definition:Conic Section|conic section]]. \end{definition}
TITLE: Formula for the Maxwell Stress tensor in arbitrary coordinates QUESTION [0 upvotes]: This question is nearly identical to my last, except this time it's the Maxwell stress tensor, not the Cauchy stress tensor. I often see its components written as $$\sigma_{ij}=\varepsilon_0E_iE_j+\frac{1}{\mu_0}B_iB_j+\frac{\delta_{ij}}{2}\left(\varepsilon_0|\boldsymbol E|^2+\frac{1}{\mu_0}|\boldsymbol{B}|^2\right)$$ With $E_i$ being understood as the components of the vector $\boldsymbol E$. But I thought, "hang on", $\boldsymbol E$ is a vector, and thus is contravariant, and its components should be written $E^i$, and similarly for $\boldsymbol B$. But, I know that the stress tensor should be fully covariant, since it measures force (a vector) per unit area (which can be represented as a normal vector). I.e, it takes in two vector inputs and outputs a scalar, so it should be second order covariant. So I thought, we should replace $E_i$ with $g_{ki}E^k$. Similarly, the Kronecker delta bugs me as well - it is defined as a $(1,1)$ tensor, with components $$\delta^i_j=1 \text{ if }i=j, ~\delta^i_j=0\text{ if }i\neq j$$ So $$\delta_{ij}=g_{ki}\delta^k_j=g_{ij}$$ So, the "correct" formula, written out in all its glory, should really be $$\sigma_{ij}=\varepsilon_0(g_{ki}E^k)(g_{lj}E^l)+\frac{1}{\mu_0}(g_{ki}B^k)(g_{lj}B^l)+\frac{g_{ij}}{2}\left(\varepsilon_0|\boldsymbol E|^2+\frac{1}{\mu_0}|\boldsymbol{B}|^2\right)$$ Or of course, in shorter form $$\sigma_{ij}=\varepsilon_0E_iE_j+\frac{1}{\mu_0}B_iB_j+\frac{g_{ij}}{2}\left(\varepsilon_0|\boldsymbol E|^2+\frac{1}{\mu_0}|\boldsymbol{B}|^2\right)$$ Where $E_i$ are recognized not as the components of $\boldsymbol E$, but rather as the components of $\boldsymbol E^{\flat}$, its dual. And of course $|\boldsymbol E|^2=g_{ab}E^aE^b$. Am I right? REPLY [0 votes]: It is actually better if we define the Lorentz invariant Maxwell stress tensor.
It is a second order contravariant tensor: $$T^{\alpha\beta}=\frac{1}{\mu_0}\left(F^{\alpha\gamma}F^\beta{}_{\gamma}-\frac{1}{4}\eta^{\alpha\beta}F^{\gamma\delta}F_{\gamma\delta}\right)$$ Here $F^{\mu\nu}$ is the $\mu,\nu$ component of the field strength tensor and $\eta^{\mu\nu}$ is the $\mu,\nu$ component of the inverse spacetime metric. This formula works no matter which coordinate system we use (but, the components of $\boldsymbol{\eta}$, $\mathbf{F}$ might be very complicated). We use the well known Lorentz scalar $$F^{\mu\nu}F_{\mu\nu}=2|\boldsymbol B|^2-2|\boldsymbol E|^2/c^2$$ Let $i,j,k,l,m\in\{1,2,3\}$. Let's compute the spatial components of the MST: $$T^{ij}=\frac{1}{\mu_0}\left(F^{i\gamma}F^j{}_{\gamma}-\frac{\eta^{ij}}{2}(|\boldsymbol B|^2-|\boldsymbol E|^2/c^2)\right)$$ Working for example in Cartesian spatial coordinates (i.e.\ $\boldsymbol\eta=\operatorname{diag}(-1,1,1,1)$), we have the following identities: $$F^{i~0}=\frac{1}{c}E^i~~;~~F^{ij}=\varepsilon_{ijk}B^k \\ F^j{}_\gamma=\eta_{\gamma\lambda}F^{j\lambda} \\ \implies F^j{}_0=-F^{j~0}=\frac{-1}{c}E^j \\ \text{and}~~F^j{}_k=\eta_{k\lambda}F^{j\lambda}=\eta_{kk}F^{jk}=\varepsilon_{jkl}B^l$$ Hence $$T^{ij}=\frac{1}{\mu_0}\left(F^{i~0}F^j{}_0+F^{ik}F^j{}_k-\frac{\delta^i_j}{2}(|\boldsymbol B|^2-|\boldsymbol E|^2/c^2)\right) \\ =\frac{1}{\mu_0}\left(-\frac{1}{c^2}E^iE^j+\varepsilon_{ikl}B^l\varepsilon_{jkm}B^m-\frac{\delta^i_j}{2}(|\boldsymbol B|^2-|\boldsymbol E|^2/c^2)\right)$$ Now, $$\varepsilon_{ikl}\varepsilon_{jkm}=\varepsilon_{ilk}\varepsilon_{jmk}=\delta^i_j\delta^l_m-\delta^i_m\delta^l_j$$ So $$\varepsilon_{ikl}B^l\varepsilon_{jkm}B^m=(\delta^i_j\delta^l_m-\delta^i_m\delta^l_j)B^lB^m=\delta^i_jB^lB^l-B^iB^j$$ Hence, in Cartesian coords, $$\boxed{T^{ij}=\frac{1}{\mu_0}\left(-\frac{1}{c^2}E^iE^j-B^iB^j+\frac{\delta^i_j}{2}\left(|\boldsymbol B|^2+\frac{1}{c^2}|\boldsymbol E|^2\right)\right)}$$
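As a cross-check of the boxed formula, one can build $F^{\mu\nu}$ numerically for random fields and compare the spatial block of $T^{\alpha\beta}$ with the $\boldsymbol E,\boldsymbol B$ expression. This NumPy sketch (mine) uses the sign conventions of this answer ($\eta=\operatorname{diag}(-1,1,1,1)$, $F^{i0}=E^i/c$) and arbitrary illustrative values for $c$ and $\mu_0$.

```python
import numpy as np

rng = np.random.default_rng(42)
c, mu0 = 2.0, 1.5                     # illustrative units, not SI constants
E, B = rng.normal(size=3), rng.normal(size=3)

eps = np.zeros((3, 3, 3))             # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # mostly-plus signature

# Field strength with the conventions used here: F^{i0} = E^i/c, F^{ij} = eps_{ijk} B^k
F = np.zeros((4, 4))
F[1:, 0] = E / c
F[0, 1:] = -E / c
F[1:, 1:] = np.einsum('ijk,k->ij', eps, B)

invariant = np.einsum('ab,ab->', F, eta @ F @ eta)   # F^{ab} F_{ab}
assert np.isclose(invariant, 2*(B @ B) - 2*(E @ E)/c**2)

# T^{ab} = (1/mu0) (F^{ac} F^b_c - (1/4) eta^{ab} F^{cd} F_{cd})
T = (F @ eta @ F.T - 0.25*eta*invariant) / mu0

expected = (-np.outer(E, E)/c**2 - np.outer(B, B)
            + 0.5*np.eye(3)*(B @ B + (E @ E)/c**2)) / mu0
assert np.allclose(T[1:, 1:], expected)   # spatial block matches the boxed formula
```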
TITLE: Subspaces $\mathcal{C_k}\subset(\mathbb{Z}/2\mathbb{Z})^3$ with $\dim(\mathcal{C_k})=2$ QUESTION [1 upvotes]: Enumerate all two-dimensional subspaces of the space $(\mathbb Z/2\mathbb{Z})^3$. Obviously we have $|(\mathbb Z/2\mathbb{Z})^3|=2^3=8$ with $$(\mathbb Z/2\mathbb{Z})^3=\{v_0,\ldots,v_7\}=\left\{ \begin{pmatrix}0\\0\\0\end{pmatrix}, \begin{pmatrix}1\\0\\0\end{pmatrix}, \begin{pmatrix}0\\1\\0\end{pmatrix}, \begin{pmatrix}1\\1\\0\end{pmatrix}, \begin{pmatrix}0\\0\\1\end{pmatrix}, \begin{pmatrix}1\\0\\1\end{pmatrix}, \begin{pmatrix}0\\1\\1\end{pmatrix}, \begin{pmatrix}1\\1\\1\end{pmatrix}\right\}.$$ However, the solution to the question asserts that there are seven different subspaces $\mathcal{C}_k\subset(\mathbb{Z}/2\mathbb{Z})^3,k\in\mathbb N,$ with $\dim(\mathcal{C_k})=2$: $$ \mathcal C_1:=\operatorname{span}\{v_1,v_2\}=\operatorname{span}\{v_1,v_3\}=\operatorname{span}\{v_2,v_3\}\\ \mathcal C_2:=\operatorname{span}\{v_1,v_4\}=\operatorname{span}\{v_1,v_5\}=\operatorname{span}\{v_4,v_5\}\\ \mathcal C_3:=\operatorname{span}\{v_1,v_6\}=\operatorname{span}\{v_1,v_7\}=\operatorname{span}\{v_6,v_7\}\\ \mathcal C_4:=\operatorname{span}\{v_2,v_4\}=\operatorname{span}\{v_2,v_6\}=\operatorname{span}\{v_4,v_6\}\\ \mathcal C_5:=\operatorname{span}\{v_2,v_5\}=\operatorname{span}\{v_2,v_7\}=\operatorname{span}\{v_5,v_7\}\\ \mathcal C_6:=\operatorname{span}\{v_3,v_4\}=\operatorname{span}\{v_3,v_7\}=\operatorname{span}\{v_4,v_7\}\\ \mathcal C_7:=\operatorname{span}\{v_3,v_5\}=\operatorname{span}\{v_3,v_6\}=\operatorname{span}\{v_5,v_6\} $$ I did recognize the pattern that $\operatorname{span}\{v_a,v_b\}=\operatorname{span}\{v_a,v_c\}=\operatorname{span}\{v_b,v_c\}$ and I understand why this is true, but how do I get both the number of all subspaces AND the subspaces, too? My first thought was that the nullspace can be ignored if one wants to span spaces with two vectors - is that the reason why we have exactly seven subspaces?
REPLY [2 votes]: Although you can count subspaces by counting their bases, as you did, and then grouping together those bases that define the same subspace, you can list the subspaces more efficiently as follows (a technique that works well because you are looking at subspaces of dimension one less than the whole space, which are called hyperplanes in general). Every subspace is determined by one nontrivial linear homogeneous equation. Moreover two nontrivial linear homogeneous equations define the same subspace if and only if one is obtained from the other by multiplication by a nonzero scalar. But here there is only one nonzero scalar, so different equations give different subspaces. So you can just enumerate the nontrivial linear homogeneous equations. There are $8$ possible equations (do you see which?) one of which is trivial.
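Carrying out this count by brute force (a small illustration of mine): each of the $2^3-1=7$ nontrivial equations $a\cdot x\equiv 0\pmod 2$ cuts out a distinct $4$-element, i.e. two-dimensional, subspace.

```python
from itertools import product

points = list(product((0, 1), repeat=3))   # all 8 vectors of (Z/2Z)^3

hyperplanes = set()
for a in points:
    if any(a):   # skip the trivial equation 0 = 0
        kernel = frozenset(x for x in points
                           if sum(ai*xi for ai, xi in zip(a, x)) % 2 == 0)
        hyperplanes.add(kernel)

print(len(hyperplanes))                # -> 7
print({len(h) for h in hyperplanes})   # -> {4}
```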
TITLE: Positive definite matrices QUESTION [0 upvotes]: Let $A$ be a positive definite matrix such that $\|A\|>1$. Can we say that $A-I$ also is a positive matrix? We can generalize this question for a unital $C^*$-algebra: Let ${\cal A}$ be a unital $C^*$-algebra with unit $1_{\cal A}$. If $a\in {\cal A}^+$ is such that $\|a\|>1$, can we say that $a -1_{\cal A}$ is positive? REPLY [1 votes]: Don't think so... Let A be $\begin{pmatrix} 1 & 0 \\\\ 2 & 2 \end{pmatrix}$, Its determinant is 2 and for any $z = \begin{pmatrix} x \\\\ y \end{pmatrix}$, $z^TAz = (x+y)^2 + y^2$. So it seems to satisfy the conditions. However, $A-I$ is $\begin{pmatrix} 0 & 0 \\\\ 2 & 1 \end{pmatrix}$ and so fails the condition with $z = \begin{pmatrix} -1 \\\\ 1 \end{pmatrix}$
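A quick numerical check of this counterexample (my snippet, using the quadratic-form notion of positivity from the answer):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 2.0]])

assert np.linalg.norm(A, 2) > 1   # the operator norm condition ||A|| > 1

# z^T A z = (x + y)^2 + y^2 > 0 for every z != 0:
rng = np.random.default_rng(0)
for z in rng.normal(size=(1000, 2)):
    assert z @ A @ z > 0

# ...yet A - I fails the same condition at z = (-1, 1):
z = np.array([-1.0, 1.0])
print(z @ (A - np.eye(2)) @ z)   # -> -1.0
```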
\begin{document} \maketitle \begin{abstract} We present a convexity-type result concerning simple quasi-states on closed manifolds. As a corollary, an inequality emerges which relates the Poisson bracket and the measure of non-additivity of a simple quasi-state on a closed surface equipped with an area form. In addition, we prove that the uniform norm of the Poisson bracket of two functions on a surface is stable from below under \(C^0\)-perturbations. \end{abstract} \section{Introduction and results} Let \(X\) be a compact Hausdorff space. We write \(C(X)\) for the Banach algebra of all real-valued continuous functions on \(X\), taken with the supremum norm \(\|\cdot\|\). For \(F \in C(X)\) we denote by \(C(F)\) the closed subalgebra of \(C(X)\) generated by \(F\) and the constant function \(1\), that is \[C(F) = \{\varphi \circ F\,|\, \varphi \in C(\im F)\}.\] A \textsl{quasi-state} \(\zeta\) on \(X\) is a functional \(\zeta \fc C(X) \to \R\) which satisfies: (i) \(\zeta(F) \geq 0\) for \(F \geq 0\); (ii) \(\zeta\) is linear on any \(C(F)\); (iii) \(\zeta(1) = 1\).\\ A quasi-state \(\zeta\) on \(X\) is \ts{simple} if it is multiplicative on each \(C(F)\). Quasi-states (as defined here) were introduced and studied by Aarnes, see \cite{quasi-states} and references therein. Any positive continuous linear functional of norm \(1\) (in other words, a Borel probability measure) furnishes an example of a quasi-state\footnote{Such a quasi-state is simple if and only if it is the \(\delta\)-measure at some point of \(X\).}. However, on many spaces there exist genuine, that is nonlinear, quasi-states (see examples below). The extent of nonlinearity of a quasi-state \(\zeta\) is measured by the functional \(\Pi(F,G):= \big| \zeta(F+G) - \zeta(F) - \zeta(G) \big|\). It will be clear from the context which quasi-state is meant. 
Our first result concerns simple quasi-states on manifolds, namely \begin{thm}\label{moment_map} Let \(M\) be a closed manifold, and let \(F,G\) be two continuous functions on \(M\). Let \(\zeta\) be a simple quasi-state on \(M\). Then the image of the moment map \(\Phi \fc M \to \R^2\), \(x \mapsto (F(x), G(x))\), contains the convex hull of the three points \((\zeta(F), \zeta(G))\), \((\zeta(F), \zeta(F+G) - \zeta(F))\), \((\zeta(F+G) - \zeta(G), \zeta(G))\). This is an isosceles right triangle whose legs are of length \(\Pi(F,G)\). \end{thm} \begin{rem} This triangle is in general the largest subset of \(\im \Phi\) we can hope for. For example, if \(M = S^2\) is the round sphere in \(\R^3(x,y,z)\), \(\zeta\) is the (unique) symplectically invariant simple quasi-state on \(M\) (see example \ref{qm_simply_conn} below), and \(F(x,y,z) = x^2\), \(G(x,y,z) = y^2\), then the image of the moment map is the triangle with vertices \((0,0),(0,1),(1,0)\). Here \(\zeta(F) = \zeta(G) = 0\) and \(\Pi(F,G) = \zeta(F+G) - \zeta(F) - \zeta(G) = \zeta(F + G) = 1\). \end{rem} \begin{rem} We do not know whether this result is true for the so-called \ts{pure} quasi-states, that is, quasi-states at the extremal boundary of the convex set of all quasi-states \cite{pure_quasi}. \end{rem} This theorem has an interesting corollary in the symplectic context: \begin{thm}\label{simple_qs_surf} If \(\zeta\) is a simple quasi-state on a closed surface \(M\) endowed with an area form \(\omega\), then for any \(F,G \in C^\infty(M)\) we have \begin{equation}\label{simple_qs_surf_ineq}\Pi(F,G)^2 \leq \area(M)\|\{F,G\}\|.\end{equation} Here \(\area(M) = \int_M \omega\), \(\{\cdot,\cdot\}\) is the Poisson bracket, and \(\|\cdot\|\) stands for the supremum norm. \end{thm} \begin{rem} The theorem easily extends to all representable quasi-states, that is, quasi-states that are elements of the closed convex hull of the set of simple quasi-states, see \cite{pure_quasi}.
Indeed, it is clear that \eqref{simple_qs_surf_ineq} holds if \(\zeta\) is a finite convex combination of simple quasi-states. Moreover, let \(\zeta_\nu \to \zeta\) be a net of quasi-states converging to \(\zeta\), such that every \(\zeta_\nu\) satisfies \eqref{simple_qs_surf_ineq}. The topology on the space of quasi-states is such that \(\zeta_\nu(H) \to \zeta(H)\) for any \(H \in C(M)\). Then, since \[\big|\zeta_\nu(F+G) - \zeta_\nu(F) - \zeta_\nu(G)\big| \leq \area(M) \|\{F,G\}\|,\] for every \(\nu\), we obtain the same inequality for the limit \(\zeta\). \end{rem} \begin{rem} Note that if \(F,G\) are only required to be of class \(C^1\), the inequality remains valid, for we can choose sequences of \(C^\infty\) functions, \(F_n\), \(G_n\), which tend to \(F,G\), respectively, in the \(C^1\) norm. The claim follows because both sides are continuous with respect to the \(C^1\) topology on \(C^\infty(M)\). In \cite{qs_sympl} it is proved that if \(\zeta\) is a quasi-state on a surface and \(F,G\) are two \(C^\infty\) functions, then \(\{F,G\} \equiv 0 \Rightarrow \Pi(F,G)=0\). In problem 8.2 (ibid.) the authors ask if it is possible to relax the smoothness assumption on \(F,G\), for example to show that if \(\zeta\) is a quasi-state then \(\zeta(F+G) = \zeta(F)+\zeta(G)\) for any \(F,G \in C^1(M)\) with \(\{F,G\} \equiv 0\). The present considerations show that if \(\zeta\) is a representable quasi-state on a closed surface, then it satisfies this property. \end{rem} In order to put this result in the proper context, we need the following definition. Let \((M,\omega)\) be a closed symplectic manifold. A \ts{symplectic quasi-state} \(\zeta\) on \(M\) is a functional \(\zeta \fc C(M) \to \R\) which satisfies: (i) \(\zeta(F) \geq 0\) for \(F \geq 0\); (ii) \(\zeta\) is linear on Poisson-commutative subalgebras of \(C^\infty(M)\); (iii) \(\zeta(1) = 1\).
Note that if we require that a functional \(\zeta \fc C(M) \to \R\) satisfy these properties, it automatically becomes linear on every singly generated subalgebra \(C(F)\) of \(C(M)\), therefore every symplectic quasi-state is in particular a quasi-state, and so the terminology is consistent. In dimension two any quasi-state is symplectic, as is proved in \cite{qs_sympl}, and therefore in this case the two notions coincide. The following result appears in \cite{quasimorphism}: \begin{thm}\label{ineq_EPZ} On certain closed symplectic manifolds \((M,\omega)\) there exist symplectic quasi-states \(\zeta\) which satisfy the following inequality: \begin{equation}\label{ineq_EPZ_ineq}\Pi(F,G)^2 \leq K(M,\omega)\|\{F,G\}\|,\end{equation} for any \(F,G \in C^\infty(M)\). Here \(K(M,\omega)\) is a constant depending only on the symplectic manifold. \end{thm} A quasi-state is symplectic if \(\{F,G\} = 0 \Rightarrow \Pi(F,G) = 0\). Theorems \ref{simple_qs_surf} and \ref{ineq_EPZ} assert that in case the quantity \(\Pi(F,G)\) is nonzero, it can still be controlled via the Poisson bracket. One of the manifolds for which the conclusion of theorem \ref{ineq_EPZ} is valid is the standard symplectic sphere, while the corresponding symplectic quasi-state \(\zeta\) is simple. Theorem \ref{simple_qs_surf} then can be viewed as an extension of theorem \ref{ineq_EPZ}, in that it shows that \eqref{ineq_EPZ_ineq} holds (with an appropriate constant) for any closed surface with an area form and any simple quasi-state on it. Also, in the case of the sphere the symplectic quasi-state can be described in elementary terms. However, its origin, as well as the proof of the inequality, both lie in Floer theory, and are very indirect. One of the motivations for theorem \ref{simple_qs_surf} was to find an elementary proof for \eqref{ineq_EPZ_ineq}. Such a proof has indeed been found, and so this answers part of the question raised in \cite[Section 5]{quasimorphism}.
Another aspect of these inequalities lies in the fact that a quasi-state is Lipschitz, that is \(|\zeta(F) - \zeta(G)| \leq \|F - G\|\) (see \cite{quasi-states}), and so is \(\Pi\): \(|\Pi(F,G) - \Pi(F',G')| \leq 2\big(\|F - F'\|+\|G - G'\|\big)\). Hence the left-hand sides of the inequalities are stable with respect to \(C^0\)-perturbations, while the Poisson bracket, which contains derivatives in its definition, can go wild as a result of such perturbations. But the inequalities tell us that if \(\zeta\) is not additive on a pair of functions, then arbitrarily small \(C^0\)-perturbations cannot make their Poisson bracket vanish. In fact, more can be said. Let us define the following quantity for a pair of smooth functions \(F,G\) on a symplectic manifold \(M\): \[\Upsilon(F,G) = \liminf_{\ve \to 0} \big\{\|\{F',G'\}\|\,\big|\, F',G' \in C^\infty(M):\, \|F-F'\|,\|G-G'\| < \ve\big\}.\] A theorem due to Cardin and Viterbo \cite{MTHJ} states that \(\{F,G\} \neq 0\) if and only if \(\Upsilon(F,G) \neq 0\). Inequality \eqref{ineq_EPZ_ineq} then provides an explicit lower bound on \(\Upsilon(F,G)\) in terms of \(\zeta\) for certain \((M,\omega)\), see \cite{quasimorphism}. Also, in \cite{quasimorphism} the following question was posed: is it true that \(\Upsilon(F,G) = \|\{F,G\}\|\) for any smooth \(F,G\)? As is shown here, in case the manifold is \ts{two-dimensional}, the answer is affirmative: \begin{thm}\label{eq_Poisson} Let \((M,\omega)\) be a two-dimensional symplectic manifold (not necessarily closed). For \(F,G \in C^\infty(M)\) we have \[\Upsilon(F,G) = \|\{F,G\}\|.\] \end{thm} \begin{rem} When \(\{F,G\}\) is an unbounded function, its ``supremum norm'' \(\|\{F,G\}\| = \infty\). It will be clear from the proof that in this case \(\Upsilon(F,G) = \infty\) as well. \end{rem} Actually, in the two-dimensional case the Poisson bracket is ``locally stable from below'', see proposition \ref{local_stability} for the precise statement. 
\begin{acknow} I would like to thank my advisor Prof. Leonid Polte\-ro\-vich for arousing my interest in quasi-states, and for his suggestions, incisive comments and advice. Thanks also to Judy Kupferman for constant supervision of my English, and for general support and interest. I would like to thank Egor Shelukhin for a suggestion which enabled me to simplify the proof of lemma \ref{surj}. And finally, I wish to thank Rami Aizenbud for his ongoing curiosity and encouragement of my work on this topic.\end{acknow} \section{Definitions and examples} \subsection{The Poisson bracket}\label{Poisson_br} We shall employ the following sign convention in the definition of the Poisson bracket of \(F,G \in C^\infty(M)\), where \((M,\omega)\) is a symplectic manifold of dimension \(2n\): \[-dF \wedge dG \wedge \omega^{n-1} = \textstyle \frac 1 n \{F,G\} \omega^n.\] In what follows we shall mostly use the case \(n=1\), and then the formula simplifies to \[-dF \wedge dG = \{F,G\} \omega.\] This can be rewritten as follows. If \(\Phi \fc M \to \R^2(x,y)\) is defined by \(\Phi(z) = (G(z),F(z))\) and \(\omega_0 = dx \wedge dy\), then \(\Phi^*\omega_0 = dG \wedge dF = \{F,G\}\omega\). \subsection{Quasi-states and quasi-measures} A \ts{space} will always refer to a compact Hausdorff space, unless otherwise mentioned. We have already defined quasi-states. Let us write \(\cQ(X)\) for the collection of quasi-states on \(X\). We now turn to another type of objects, called quasi-measures. These are related in a special way to quasi-states, and play a significant role in the present document. Let \(X\) be a space and let \(\cC\) and \(\cO\) be the collections of closed and open sets in \(X\), respectively. Let \(\cA = \cC \cup \cO\). 
A \ts{quasi-measure} \(\tau\) on \(X\) is a function \(\tau \fc \cA \to [0,1]\) which satisfies: (i) If \(\{A_i\}_i \subset \cA\) is a finite collection of pairwise disjoint subsets of \(X\) such that \(\biguplus_i A_i \in \cA\), then \(\tau \big(\biguplus_i A_i \big) = \sum_i \tau(A_i)\). (ii) \(\tau(X) = 1\); (iii) \(\tau(A) \leq \tau(B)\) for \(A,B \in \cA\) such that \(A \subset B\); (iv) \(\tau(U) = \sup\{\tau(K) \, | \, K \in \cC:\, K \subset U\}\) for \(U \in \cO\).\\ Write \(\cM(X)\) for the collection of quasi-measures on \(X\). A quasi-measure is \ts{simple} if it only takes the values \(0\) and \(1\). To each quasi-state \(\zeta \in \cQ(X)\) there corresponds a unique quasi-measure \(\tau \in \cM(X)\), defined by the following formula: \[\tau(K) = \inf\{\zeta(F) \, | \, F \in C(X): F \geq \1_K\}\] for \(K \in \cC\), and \(\tau(U) = 1 - \tau(X - U)\) for \(U \in \cO\). Here \(\1_K\) stands for the indicator function of the set \(K\). Conversely, to each quasi-measure \(\tau\) there corresponds a unique quasi-state \(\zeta\), obtained through integration with respect to \(\tau\): if \(F \in C(X)\), then the function \(b_F(x) = \tau(\{F < x\})\) is nondecreasing, and takes values in \([0,1]\). Hence it is Riemann integrable, and \[\zeta(F) \equiv \int_X F\, d\tau = \max_X F - \int\limits_{\min_X F}^{\max_X F} b_F(x)\, dx\,.\] The described procedures constitute the Aarnes representation theorem, which sets up a bijection \(\cQ(X) \leftrightarrow \cM(X)\) for a given space \(X\). This representation theorem is an extension of the Riesz representation theorem, in the sense that if \(\tau\) is the restriction to \(\cA\) of a Borel probability measure \(\mu\), then the corresponding quasi-state is the integral with respect to \(\mu\). We refer the reader to \cite{quasi-states} for details. Another property of this representation theorem is that simple quasi-states correspond to simple quasi-measures \cite{pure_quasi}.
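As a simple illustration of the integration formula, consider the case where \(\tau\) is the restriction to \(\cA\) of the \(\delta\)-measure at a point \(x_0 \in X\). Then \(b_F(x) = \tau(\{F < x\}) = 1\) precisely when \(F(x_0) < x\), and \(b_F(x) = 0\) otherwise, hence \[\zeta(F) = \max_X F - \int\limits_{\min_X F}^{\max_X F} b_F(x)\, dx = \max_X F - \big(\max_X F - F(x_0)\big) = F(x_0),\] so that integration recovers the evaluation functional at \(x_0\), in accordance with the footnote above identifying simple linear quasi-states with \(\delta\)-measures.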
\subsection{Examples of simple quasi-states} Since simple quasi-states are in bijection with simple quasi-measures, we shall list here examples of the latter. \begin{exam}\label{qs_diff_inv} Let \(X\) be a space which is connected and locally connected, and moreover has Aarnes genus \(g = 0\). This last condition is somewhat technical, and since we shall not use it anywhere in the present document, we refer the reader to \cite{qm_construct} for the definition of \(g\) and further details. It suffices to note that in case \(X\) is a compact CW-complex, it has \(g=0\) whenever \(H^1(X;\Z) = 0\) \cite{extreme_qm}. Let us call a subset of \(X\) \ts{solid} if it is connected and has a connected complement. Let \(\mu\) be a Borel probability measure on \(X\), which has the property that whenever \(K,K'\) are two closed solid subsets with \(\mu(K) = \mu(K') = \frac 1 2\), then \(K \cap K' \neq \varnothing\). In this case the collection \(\cS\) of all closed solid subsets of \(X\) having \(\mu\)-measure \(< \frac 1 2\) is a co-basis in the terminology of \cite{qm_construct}, and so it defines a unique simple quasi-measure \(\tau\), which satisfies \[\tau(K) = \left\{\begin{array}{ll}0, & \text{if } \mu(K) < \frac 1 2 \\ 1, & \text{otherwise}\end{array}\right. ,\] for a closed solid \(K\). We shall mention two particular cases of this construction. \begin{exam}\label{qm_simply_conn} Take a simply connected closed manifold \(M\) with a volume form \(\Omega\) satisfying \(\int_M \Omega = 1\), and let \(\mu\) be the Lebesgue measure defined by \(\Omega\). Then the quasi-measure \(\tau\) constructed as above is \(\text{Diff}\,(M,\Omega)\)-invariant. In case \(M = S^2\) with the standard area form, the resulting simple quasi-state is precisely the one theorem \ref{ineq_EPZ} speaks about. \end{exam} \begin{exam} Take a space \(X\) as in the above example, and let \(\{z_i\}_{i=1}^{2n+1}\) be an odd number of distinct points on \(X\). 
Let \(\mu = \frac 1 {2n+1} \sum_i \delta_{z_i}\) be the discrete probability measure uniformly distributed among these points. The corresponding quasi-measure may be viewed as a generalization of the \(\delta\)-measure. See \cite{extreme_qm}. \end{exam} \end{exam} \begin{exam} Examples of simple quasi-measures on the \(2\)-torus have been constructed by Knudsen \cite{extreme_qm}, \cite{qm_torus}. \end{exam} \subsection{The median of a Morse function}\label{median} Let \(M\) be a closed manifold, \(\zeta \in \cQ(M)\) be a simple quasi-state and \(\tau \in \cM(M)\) be the simple quasi-measure corresponding to \(\zeta\). Let \(F\) be a generic Morse function on \(M\), that is a Morse function with distinct critical values. The unique component of a level set of \(F\), whose measure with respect to \(\tau\) is \(1\), is called the \textsl{median of \(F\) relative to \(\zeta\)}, or briefly the median, and is denoted by \(m_F\). Usually the quasi-state is fixed and so this notation is unambiguous. The median satisfies: \(\zeta(F) = F(m_F)\). That the median exists can be seen as follows. It is proved in \cite{pure_quasi} that given a continuous function \(G\), the level set \(l_G = G^{-1}(\zeta(G))\) satisfies \(\tau(l_G) = 1\), and is the unique such level set. Now since a level set of a Morse function on a closed manifold consists of a finite number of connected components, the finite additivity of \(\tau\) implies the existence of a unique connected component having \(\tau\)-measure \(1\). The notations introduced here will be used below. The reason for the different notations (\(l\) and \(m\)) is that a continuous function \(G\) need not have a median, and for such a function the only meaningful object is the level set \(l_G\) having quasi-measure \(1\).
For simplicity assume that \(\zeta(F) = \zeta(G) = 0\), and that \(a := \zeta(F+G) > 0\), in which case \(\Pi(F,G) = \zeta(F+G) = a\). The general case follows easily from this particular one. Denote \(\kappa(t) = \zeta(F+tG)\). This is a continuous function, and \(\kappa(0) = 0\), \(\kappa(1) = a\). We claim that if \(c,d > 0\) are two numbers such that \(c+d < a\), then the equation \(c+td = \kappa(t)\) has a solution \(t \in (0,1)\). Indeed, this equation can be rewritten as \(c = \kappa(t) - td\). The function on the right side is continuous, and takes values \(0 < c\) for \(t = 0\) and \(a - d > c\) for \(t = 1\). The intermediate value theorem yields the required existence. Now take \(c,d > 0\) such that \(c+d < a\), that is \((c,d) \in \text{Int}\,\Delta\). Fix \(t \in (0,1)\) as above. We shall show that given \(\ve > 0\) there exists a point \(s \in M\) such that \(\|\Phi(s) - (c,d)\| < \ve/t\). Once this is proved, it follows that the image of \(\Phi\) is dense in the triangle \(\Delta\); but \(M\) is compact, and so is its image under the continuous map \(\Phi\), hence \(\im \Phi\) contains the whole of \(\Delta\). We use the following notation (see subsection \ref{median}): for a continuous function \(E\) on \(M\) let \(l_E := E^{-1}(\zeta(E))\). It follows that \(l_E\) is a set of quasi-measure \(1\). Let \(\ve > 0\) be so small that \(\ve < td\). Put \(H = F+tG\), and let \(K\) be a generic Morse function satisfying \(\|H - K\| < \ve/2\). Denote by \(m_K\) the median of \(K\), as above. Since any two closed sets of quasi-measure \(1\) must intersect, there are points \(p \in m_K \cap l_F\), \(q \in m_K \cap l_G\), \(r \in m_K \cap l_H\). Note that \(\zeta(H) = \kappa(t)\). We have \begin{align*} |F(q) - \kappa(t)| &= |(F(q)+tG(q)) - \kappa(t)| \quad &&\text{since } G(q) = \zeta(G) = 0\\ &= |H(q) - H(r)| &&H(r) = \zeta(H) = \kappa(t)\\ &= |H(q) - K(q) + K(r) - H(r)| &&\text{since } q,r \in m_K\\ & \leq |H(q) - K(q)| + |H(r) - K(r)|\\ &< \ve. 
\end{align*} In particular, \(c \in (0, F(q))\), since \(c = \kappa(t) - td < F(q) + \ve - td < F(q)\). Now the points \((0,G(p))\) and \((F(q),0)\) lie in the set \(\Phi(m_K)\) by construction. But \(m_K\) is connected, hence if we denote by \(\pi\) the projection \(\R^2 \to \R,\, (x,y) \mapsto x\), then \(\pi(\Phi(m_K))\), as a connected subset of the real line containing the points \(0\) and \(F(q)\), must contain the entire segment \([0,F(q)]\). There is then a point \(s \in m_K\) such that \(F(s) = c\). We obtain: \begin{align*} t|G(s) - d| &= |(F(s)+tG(s)) - (c+td)| \quad &&\\ &= |H(s) - H(r)| &&H(r) = \kappa(t) = c+td\\ & \leq |H(s) - K(s)| + |H(r) - K(r)| &&r,s \in m_K \\ &< \ve. \end{align*} Thus \[\|\Phi(s) - (c,d)\| = \big\|\big(F(s) - c, G(s) - d\big)\big\| = |G(s) - d| < \frac \ve t\,,\] as required. The proof is thus completed. \qed \subsection{Proof of theorem \ref{simple_qs_surf}} The so-called area formula is proved in \cite[theorem 3.2.3]{geom_meas}. We shall make use of the following corollary of it: let \(M\) and \(N\) be two smooth manifolds of dimension \(n\) with \(M\) compact, let \(\Phi \fc M \to N\) be a smooth map, and let \(\Omega\) be a smooth \(n\)-density on \(N\). Then the function \(n_\Phi(z) = \# \Phi^{-1}(z)\), defined on \(N\), is almost everywhere real-valued, and \[\int_M \Phi^*\Omega = \int_N n_\Phi\Omega.\] In our case \(M\) is the given surface, \(N = \R^2(x,y)\) with the standard density \(\Omega = dx\,dy\), and \(\Phi \fc M \to \R^2\) is \(\Phi(z) = (F(z),G(z))\). It follows from theorem \ref{moment_map} that the image of \(\Phi\) contains a triangle \(\Delta\) of area \(\int_\Delta \Omega = \frac 1 2 \Pi(F,G)^2\). It is true that \(n_\Phi(z) \geq 2\) for almost every \(z \in \Delta\). Indeed, \(M\) is closed, and \(\Phi\) is not onto. Consequently the degree modulo \(2\) of \(\Phi\) is zero, and therefore any regular value must be of even multiplicity.
If a regular value is actually attained by \(\Phi\), then its multiplicity is at least two. Note that \(|\{F,G\}||\omega| = |dF \wedge dG| = \Phi^*\Omega\). Putting all this together, and noting that \(\int_M f\omega = \int_M f|\omega|\) for a continuous \(f\), we obtain finally \begin{multline*} \Pi(F,G)^2 = 2\int_{\Delta}\Omega \leq \int_{\R^2}n_\Phi \Omega = \int_M \Phi^*\Omega = \int_M|\{F,G\}|\omega \leq \\ \leq \|\{F,G\}\|\cdot\int_M\omega = \area(M)\|\{F,G\}\|. \end{multline*}\qed \subsection{Proof of theorem \ref{eq_Poisson}} We shall need two auxiliary results, which are presented below. Fix a positive integer \(n\). Denote \(B(r) = \{z \in \R^n \, | \, \|z\| < r\}\) for \(r > 0\), where \(\|z\|\) is the Euclidean length of a vector \(z \in \R^n\). \begin{lemma} \label{surj} Let \(0 < \delta < r\). Consider \(U = B(r)\). Then if \(\Phi \fc \overline U \to \R^n\) is a continuous map which is a \(\delta\)-perturbation in the \(C^0\) norm of the identity map \(\id_{\overline U}\), meaning that\/ \(\sup_{\|z\| \leq r} \|\Phi(z) - z\| < \delta\), then \(\Phi(U)\) contains the ball \(B(r -\delta)\), and if moreover \(\Phi\) is smooth and \(z \in \im \Phi\) is a regular value, then \(\deg_z \Phi = 1\). \end{lemma} \begin{prf} For \(z \in \R^n\) such that \(\|z\| \in [r,r+\delta]\) define \[t(z) = \frac{\|z\| - r}{\delta}\,,\quad z_0 = \frac {z}{\|z\|}\,r\,,\quad z_1 = \frac {z}{\|z\|}(r+\delta)\,.\] Clearly \(t(z) = 0\) for \(\|z\| = r\), \(t(z) = 1\) for \(\|z\| = r + \delta\), and \(t(z) \in [0,1]\) for \(\|z\| \in [r,r+\delta]\). Also, \(\|z_0\| = r\), \(\|z_1\| = r+\delta\), and if \(\|z\| = r\) or \(\|z\| = r+\delta\), then \(z = z_0\) or \(z = z_1\), respectively. Extend the definition of \(\Phi\) to the whole of \(\R^n\) by the formula: \[\Phi(z) = \left\{\begin{array}{ll} \Phi(z), & \|z\| \leq r \\ (1-t(z))\Phi(z_0) + t(z)z_1, & r \leq \|z\| \leq r+\delta \\ z, & \|z\| \geq r + \delta \\ \end{array}\right. 
.\] This extension is clearly continuous, \(\Phi|_{\R^n - B(r+\delta)} = \id\), and \(\|\Phi(z) - z\| < \delta\) for all \(z \in \R^n\). For \(\|z\| \geq r + \delta\) and \(\|z\| \leq r\) this is obvious. For \(\|z\| \in [r,r+\delta]\) we have \(z = (1-t(z))z_0 + t(z)z_1\), and hence \begin{align*} \|\Phi(z) - z\| &= \big\|\big[(1-t(z))\Phi(z_0) + t(z)z_1\big] - \big[(1-t(z))z_0 + t(z)z_1\big] \big\|\\ &= (1-t(z))\|\Phi(z_0) - z_0\|\\ &<1\cdot \delta = \delta, \end{align*} since \(\|\Phi(z_0) - z_0\| < \delta\) by assumption. Finally, extend \(\Phi\) to a map \(\Phi \fc S^n \to S^n\) by adding \(\infty\) to \(\R^n\) and setting \(\Phi(\infty) = \infty\). This map is continuous and clearly homotopic to \(\id_{S^n}\). Therefore its degree is \(1\), and in particular it is surjective. We have \[\Phi(S^n - U) \cap B(r-\delta) = \varnothing,\] since \(\|\Phi(z) - z\| < \delta\) for any \(z \in \R^n\) and \(\Phi(\infty) = \infty\). Hence all the points in \(B(r - \delta)\) must come from points of \(U\). The last assertion follows from the equality of the degree of a smooth map at a regular value and the degree of the map. \qed \end{prf} \begin{prop}\label{local_stability} Let \(V\) be an open neighborhood of \(0 \in \R^2(x,y)\), endowed with the standard area form \(\omega_0 = dx \wedge dy\). Let \(F_0,G_0 \in C^\infty(\overline V)\), and suppose that \(\{F_0,G_0\}(0) = 1\). Then for any \(\ve > 0\) there exists \(\delta > 0\) and an open neighborhood \(U\) of \(0\) such that if \(F,G \in C^\infty(\overline V)\) satisfy \(\|F-F_0\|_{\overline V} < \delta, \linebreak \|G-G_0\|_{\overline V} < \delta\), then there exists \(z \in U\) such that \(\{F,G\}(z) > 1- \ve\). \end{prop} \begin{prf} Let \(\Phi_0 \fc V \to \R^2\) be defined by\footnote{This order of coordinates is explained by our sign convention, see subsection \ref{Poisson_br}.
With this order the map \(\Phi_0\) is orientation-preserving on a neighborhood of the zero, and the function \(\varphi\) introduced here is indeed positive.} \(\Phi_0(z) = (G_0(z),F_0(z))\). There exists \(r > 0\) and a neighborhood \(W\) of \(0\) such that \(\Phi_0 \fc \overline W \to \overline{B(r)}\) is a diffeomorphism. Moreover, if we define the symplectic form \(\omega\) on \(\overline{B(r)}\) by \(\omega = (\Phi_0^{-1})^*\omega_0\), then \(\Phi_0 \fc (\overline W, \omega_0) \to (\overline{B(r)},\omega)\) is a symplectomorphism. There exists a smooth \ts{positive} function \(\varphi\) such that \(\omega = \varphi\, \omega_0\) throughout \(\overline{B(r)}\). We may assume \(r\) to be so small that \(\varphi < 1 + \ve/2\). Every differential object on \(\overline W\) can be transferred to \(\overline{B(r)}\) by pushing it forward with \(\Phi_0\). In particular, the functions \(G_0\) and \(F_0\) become the coordinates \(x\) and \(y\), the map \(\Phi_0\) becomes the identity map, and if \(F,G\) are smooth functions satisfying the conditions of the proposition, then the map \(\Phi(z) = (G(z),F(z))\) becomes a \(\delta\)-perturbation of the identity map. Therefore we may apply lemma \ref{surj} and conclude that the image of \(\Phi\) contains \(B(r-\delta)\), and moreover, at a regular value \(z\) we have \(\deg_z \Phi = 1\). Then we can write \[\int_{B(r-\delta)} \omega_0 \leq \int_{\Phi(B(r))}\omega_0.\] Now the regular values of \(\Phi\) form an open dense subset of \(\Phi(B(r))\). Let \(z\) be such a regular value. By the so-called stack-of-records theorem, there is a small disk \(Y \ni z\) such that \(\Phi^{-1}(Y)\) falls into a finite number of connected components \(\{Y_i\}\), each carried diffeomorphically by \(\Phi\) onto \(Y\). Then \[\int_Y \omega_0 = \ve_i \int_{Y_i} \Phi^*\omega_0,\] where \(\ve_i\) is the sign of the Jacobian of \(\Phi\) on \(Y_i\). 
Since \(\sum \ve_i = \deg_z\Phi = 1\), this implies \[\int_Y \omega_0 = \deg_z\Phi \int_Y \omega_0 = \sum_i \ve_i \int_{Y} \omega_0 = \sum_i \int_{Y_i} \Phi^*\omega_0 = \int_{\Phi^{-1}(Y)} \Phi^*\omega_0.\] It then follows that \[ \int_{\Phi(B(r))}\omega_0= \int_{B(r)} \Phi^*\omega_0 = \int_{B(r)} \{F,G\} \omega \leq \max \{F,G\} \int_{B(r)}\omega.\] But the last integral is \[\int_{B(r)}\omega = \int_{B(r)}\varphi\,\omega_0 < (1 + \ve/2)\int_{B(r)}\omega_0 = \pi r^2(1 + \ve/2).\] Now \(\int_{B(r-\delta)} \omega_0 = \pi (r-\delta)^2\), and hence \[\max\{F,G\} \geq \frac{\pi (r-\delta)^2}{\pi r^2(1 + \ve/2)} > (1 - \ve/2)(1 - 2\delta/r),\] and if we choose \(\delta < \ve r/4\), we shall obtain the desired inequality. \qed \end{prf} Returning to the proof of theorem \ref{eq_Poisson}, let \(F,G \in C^\infty(M)\), \(z \in M\), \(\{F,G\}(z) = a\) and \(\ve > 0\). Rescaling one of the functions appropriately and applying the proposition, we can conclude that there is \(\delta > 0\) such that if \(F',G' \in C^\infty(M)\) with \(\|F - F'\|,\|G-G'\| < \delta\), then there exists a point \(z' \in M\) such that \(|\{F',G'\}(z')| > |a| - \ve\). Therefore \[\|\{F',G'\}\| > |a| - \ve,\] whence \[\Upsilon(F,G) \geq |a| - \ve.\] But since \(\ve\) is arbitrary, we obtain \(\Upsilon(F,G) \geq |a| = |\{F,G\}(z)|\) for any \(z \in M\), and taking the supremum over \(M\), \[\Upsilon(F,G) \geq \|\{F,G\}\|.\] Since the reverse inequality holds trivially, we have the desired result. \qed \section{Discussion} There are several directions in which the presented results could be generalized. First of all, theorem \ref{moment_map} speaks about closed manifolds, but the only thing used in the proof is the fact that any continuous function can be approximated by a function whose level sets have only countably many connected components. The countable additivity of quasi-measures, which is established in \cite{additivity_qm}, allows us to define the median of such a function, and then proceed as above.
It would be interesting to find spaces other than manifolds with this property. The second direction is to try to generalize theorem \ref{simple_qs_surf} to arbitrary closed symplectic manifolds and symplectic quasi-states on them. The methods presented here fail even in the case of a non-representable quasi-state on a closed surface. And finally, the question posed in \cite{quasimorphism}, namely whether it is true that \(\Upsilon(F,G) = \|\{F,G\}\|\), is still open in higher dimensions. Apparently some new methods are needed.
TITLE: Application of Kodaira Embedding Theorem QUESTION [4 upvotes]: I am going to give a talk on Kähler manifolds. In particular, I will outline a proof of the Kodaira embedding theorem. I also wish to give some applications of the theorem. One of the applications would be the Riemann bilinear relations on a complex torus. I am searching for other applications. Does anyone have a good suggestion? REPLY [2 votes]: Let me mention some important theorems related to the Kodaira embedding theorem. Let $X$ be a compact complex manifold, and $L$ be a holomorphic line bundle over $X$ equipped with a smooth Hermitian metric $h$ whose curvature form (locally given by $-\frac{i}{2\pi}\partial\bar\partial\log h$) is a positive definite real $(1,1)$-form, and so defines a Kähler metric $\omega$ on $X$. Then the Kodaira embedding theorem states that there is a positive integer $k$ such that $L^k$ is globally generated (i.e. for every $x\in X$ there is a global holomorphic section $s\in H^0(X,L^k)$ with $s(x)\neq 0$) and the natural map $X\to\mathbb P(H^0(X,L^k)^*)$, which sends a point $x$ to the hyperplane of sections which vanish at $x$, is an embedding. In particular, $X$ is a projective manifold. Theorem 1.1 of this paper extends this theorem of Gang Tian to the case of $X$ not necessarily compact, with a compact analytic subvariety $\Sigma$ and a holomorphic Hermitian line bundle $(L,h)$ such that $h$ is continuous on $X\setminus\Sigma$ and has semi-positive curvature current $\gamma=c_1(L,h)$. In this context the authors consider the spaces of $L^2$-holomorphic sections of the tensor powers $L^p|_{X\setminus\Sigma}$, the Bergman density functions $P_p$ associated with orthonormal bases, and the Fubini-Study $(1,1)$-currents $\gamma_p$ for which the $P_p$ serve as potentials. Under these conditions, it is shown in Theorem 1.1 that each $\gamma_p$ extends to a closed positive current on $X$, and that $\frac{1}{p}\gamma_p$ approaches $\gamma$ weakly if $\frac{1}{p}\log P_p\to 0$ locally uniformly on $X\setminus\Sigma$, as $p\to\infty$.
We also have the following theorem: if $X$ is a normal compact Kähler variety with isolated singularities that admits a holomorphic line bundle $L$ that is positive when restricted to the regular part of $X$, then $X$ is biholomorphic to a projective-algebraic variety.
TITLE: In quadratic interpolation of 3 points - is the minimum guaranteed to be within the left and right points? QUESTION [0 upvotes]: I have 3 ordered points, $(x_1,y_1),(x_2,y_2),(x_3,y_3),$ with $x_1<x_2<x_3$, of which I calculate the interpolating quadratic function. In my case, $(x_2,y_2)$ is a local minimum or maximum in this list (i.e. ($y_2>y_1$ and $y_2>y_3$) or ($y_2<y_1$ and $y_2<y_3$)). My question: is the parabola's minimum or maximum guaranteed to be within the support of these points, so that $x_1<x_\text{min}<x_3$? I thought about just working out the formula for $x_\text{min}$, but surely there is an easier argument? REPLY [0 votes]: I suppose something like this works: Assume $x_\text{min} < x_1$. The parabola is monotonic to the right of $x_\text{min}$, so since $x_1<x_2<x_3$, the sequence $y_1,y_2,y_3$ would be monotonic, contradicting the assumption that $y_2$ is a strict local extremum of the list. The case $x_\text{min}>x_3$ is symmetric. Hence, $x_1<x_\text{min}<x_3$.
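For what it's worth, the explicit formula is short enough to write down anyway. Here is a hedged sketch (the function name and sample points are mine) that computes the vertex of the interpolating parabola from the Lagrange-form coefficients and confirms it lands strictly between $x_1$ and $x_3$:

```python
def parabola_vertex(p1, p2, p3):
    """x-coordinate of the vertex of the parabola through three points
    with distinct x-values, via the Lagrange interpolation coefficients."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d1 = (x1 - x2) * (x1 - x3)
    d2 = (x2 - x1) * (x2 - x3)
    d3 = (x3 - x1) * (x3 - x2)
    a = y1 / d1 + y2 / d2 + y3 / d3                                   # coefficient of x^2
    b = -(y1 * (x2 + x3) / d1 + y2 * (x1 + x3) / d2 + y3 * (x1 + x2) / d3)
    return -b / (2 * a)                                               # vertex of a x^2 + b x + c

# y2 = 0 is a strict local minimum of the list, so the vertex stays in (x1, x3):
x_min = parabola_vertex((0, 2), (1, 0), (3, 1))
print(x_min)   # ≈ 1.7, strictly inside (0, 3)
```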
TITLE: Boundary flux maximizing drift (velocity) vector fields for 2D heat equation QUESTION [4 upvotes]: Looking for literature / known results on the following class of problems: Consider a bounded, open domain $\Omega\subset \mathbb R^2$ with smooth boundary, a divergence-free drift $u=u(x,t)$, and a scalar field $T=T(x,t)$ with no-slip and steady Dirichlet conditions: $u(\partial\Omega,t)=0\:\forall t,\: T(\partial\Omega,t)=T_0$. The scalar field evolves as: $\frac{\partial T}{\partial t}+u\cdot\nabla{T}=k\nabla^2{T}$. Now consider the functional: $F(u)=\int_0^{t_1}\int_{\partial\Omega} \frac{\partial T}{\partial \hat n}\,ds\, dt$, where $\hat n$ is the outward normal on the boundary. This functional measures the total flux out of the domain in some finite time $t_1$. The aim is to maximize this functional within a class of permissible $u$ with some bounded norm (energy), say for example divergence-free $u\in L^2(\Omega)$ with $\|u\|_{L^2}=1$. My question is whether this problem has been looked at in the PDE literature before, and if yes, what are the known results. For example, what is the relation between the shape of $\Omega$ and the optimal $u$? REPLY [2 votes]: This may not be exactly what you are asking, but it may have some related material: http://www.math.cmu.edu/~gautam/research/papers/200911-bad-mixing-2d.pdf
{"set_name": "stack_exchange", "score": 4, "question_id": 141171}
TITLE: Confused About Trigonometric Substitution QUESTION [2 upvotes]: I'm learning Trigonometric Substitutions, they gave us the following example in the book: I'm confused about how exactly we make the substitution $x= a\sin(\theta)$ In regular substitution we have to take something from the integrand and substitute it for u. Why is it in this case we can use $a\sin(\theta)$ when I do not see it in the integrand? REPLY [1 votes]: When I teach trigonometric substitution I emphasize that it is an implicit substitution. In contrast, the usual $u$ substitution is an explicit substitution. An implicit substitution introduces a new variable through some equation linking it to the given integration variables. In contrast, an explicit substitution, which you are more familiar with writes the new variable in terms of the given. In both cases, we have two or three things to do: change the integrand to the new variable change the measure to the new variable change the bounds to the new variable The details of 1 and 2 look different for implicit vs. explicit, but it's the same concept just like differentiation and implicit differentiation are really the same concept. Pragmatically, the trig. substitution is easier to find $dx$ in terms of the new measure $d\theta$ because you just take $x=a \sin \theta$ and find $dx = a \cos \theta d\theta$. So, how to change $dx$ to the corresponding expression with $\theta$. The answer to the preceding question is obvious; $dx = a \cos \theta d\theta$. Finally, how to change the integrand? Well, your post already shows how the Pythagorean identity for sine and cosine obliterate the root. It is important to notice the new variable $\theta$ is the analog of usual $u$-substitution you've studied previously. By the way, there are also $u$-substitution problems where the $u$ is not in the original integrand, those are just unusual problems. For example, $\int \sec \theta d\theta = \int du/u$ for $u=\sec \theta+\tan \theta$. 
(see, no $u$ in the original problem)
{"set_name": "stack_exchange", "score": 2, "question_id": 966113}
\section{Static formulations of Wasserstein-1-type discrepancies} \label{sec:W1Extensions} In this section we consider two different extensions of the classical $W_1$ metric and subsequently demonstrate their equivalence. \subsection{Wasserstein-1-type discrepancies} One may ask how the $W_1$ metric \eqref{eqn:W1predual} can be generalized without giving up its efficiency, particularly the convexity of the problem and its low-dimensional variables and constraints. This motivates the following definition of a discrepancy between two nonnegative measures $\rho_0$ and $\rho_1$. \begin{definition}[$W_{\h,\g,B}$-discrepancy] \label{def:GeneralizedPrimalProblem} Consider a convex set $B\subset{\R^2}$ and concave, upper semi-continuous functions $\h,\g:\R\to\R\cup\{-\infty\}$. For $\rho_0,\rho_1 \in \measp(\Omega)$ we define \begin{align} E_{\h,\g,B}^{\rho_0,\rho_1}(\alpha,\beta)&=\begin{cases}\int_\Omega \h(\alpha(x))\,\d\rho_0(x)\\ \quad+ \int_\Omega \g(\beta(x))\,\d\rho_1(x)&\text{if }\alpha, \beta \in \Lip(\Omega)\text{ with }(\alpha(x),\beta(x)) \in B \ \forall\, x \in\Omega\,,\\-\infty&\text{else,}\end{cases}\nonumber\\ W_{\h,\g,B}(\rho_0,\rho_1)&=\sup_{\alpha,\beta\in C(\Omega)} E_{\h,\g,B}^{\rho_0,\rho_1}(\alpha,\beta)\,.\label{eq:GeneralizedPrimalProblem} \end{align} \end{definition} \begin{remark}[Inhomogeneous versions] In principle one could also allow $\h$, $\g$, and $B$ to have spatially varying, inhomogeneous forms $\h,\g:\R\times\Omega\to\R\cup\{-\infty\}$ and $B:\Omega\to2^{\R\times\R}$. To avoid technicalities we shall for now only consider the spatially homogeneous case. The generalization to the inhomogeneous case will be discussed in Section~\ref{sec:Inhomogeneous}.
\end{remark} \begin{remark}[Complexity] Just like \eqref{eqn:W1predual}, definition \eqref{eq:GeneralizedPrimalProblem} represents a convex optimization problem whose variables $\alpha$ and $\beta$ are functions on the low-dimensional domain $\Omega$ and satisfy three local constraints everywhere on $\Omega$, two Lipschitz constraints as well as $(\alpha(x),\beta(x))\in B$. \end{remark} \begin{remark}[Convexity] As the pointwise supremum of linear functionals in $\rho_0$ and $\rho_1$, the discrepancy $W_{\h,\g,B}(\rho_0,\rho_1)$ is jointly convex in $\rho_0$ and $\rho_1$. \end{remark} \begin{remark}[Reduction to $W_1$ case]\label{rem:W1case} The Wasserstein-1-distance is obviously retrieved for the choice \begin{equation*} \h(\alpha)=\alpha\,,\qquad \g(\beta)=\beta\,,\qquad B=\{(\alpha,\beta)\in\R^2\,|\,\alpha+\beta\leq0\}\,. \end{equation*} \end{remark} \begin{definition}[Admissible $(\h,\g,B)$]\label{def:admissibility} We call $(\h,\g,B)$ admissible if there exist functions $\hB,\gB:\R\to\R\cup\{-\infty\}$ such that \begin{align} \label{eqn:BIntersection} B & = B_{01} \cap (B_0 \times B_1)\qquad\text{with} \\ \label{eqn:BHypograph} B_{01} & =\left\{(\alpha,\beta)\in\R^2\,\right|\left.\alpha\leq\hB(-\beta)\right\} =\left\{(\alpha,\beta)\in\R^2\,\right|\left.\beta\leq\gB(-\alpha)\right\},\hspace*{-\linewidth}\\ \label{eqn:AlphaBetaMinA} B_0 & = [\alphamin,+\infty), & \alphamin & = \inf \{ \alpha \in \R : \h(\alpha) > -\infty \}, \\ \label{eqn:AlphaBetaMinB} B_1 & = [\betamin,+\infty), & \betamin & = \inf \{ \beta \in \R : \g(\beta) > -\infty \}, \end{align} and $\h$, $\g$, $\hB$ or equivalently $\gB$ satisfy the following conditions (where we drop the indices): \begin{enumerate}[1.] 
\item\label{enm:convexity} $h$ is concave, \item\label{enm:wellposedness} $h$ is upper semi-continuous, \item\label{enm:positivity1} $h(s)\leq s$ for all $s\in\R$ and $h(0)=0$, \item\label{enm:positivity} $h$ is differentiable at $0$ and $h'(0)=1$, \item\label{enm:negativeMeasures} $h$ is monotonically increasing. \end{enumerate} Note that on their respective domains, $\hB=-\gB^{-1}(-\cdot)$ and $\gB=-\hB^{-1}(-\cdot)$. \end{definition} \begin{remark}[On the conditions] The admissibility conditions are chosen as to make $W_{\h,\g,B}$ a reasonable discrepancy on $\measp(\Omega)$, especially if two of $\h$, $\g$, and $B$ are taken as in Remark~\ref{rem:W1case}. In particular, we ask for the following properties. \begin{enumerate}[a.] \item\label{enm:propUSC} $E_{\h,\g,B}^{\rho_0,\rho_1}$ should be upper semi-continuous (a natural requirement for well-posedness of optimization problem\,\eqref{eq:GeneralizedPrimalProblem}), \item\label{enm:propNonNeg} $W_{\h,\g,B}(\rho_0,\rho_1)\geq0$ for all $\rho_0,\rho_1\in\measp(\Omega)$ and $W_{\h,\g,B}(\rho_0,\rho_1)=0$ if $\rho_0=\rho_1$, \item\label{enm:propPos} $W_{\h,\g,B}(\rho_0,\rho_1)>0$ for $\rho_0\neq\rho_1$, \item\label{enm:propNonNegMeas} $W_{\h,\g,B}(\rho_0,\rho_1)=\infty$ whenever $\rho_0$ or $\rho_1$ are negative, \item\label{enm:propSWLSC} $W_{\h,\g,B}(\rho_0,\rho_1)$ should be sequentially weakly-* lower semi-continuous in $(\rho_0,\rho_1)$. \end{enumerate} Now to obtain corresponding conditions on $B_{01}$ we first consider the case $\h=\g=\id$ (then $B = B_{01}$). Property~\ref{enm:propUSC} requires closedness of $B$, while property~\ref{enm:propNonNegMeas} implies $(-\infty,a]^2\subset B$ for some finite $a \in \R$. Together with the convexity of $B$ it follows that $B_{01}$ can be expressed in the form \eqref{eqn:BHypograph} for an upper semi-continuous, concave, monotonically increasing $\hB$. Next, we set any two of $\h$, $\g$, and $\hB$ to the identity. 
For the remaining one it is not difficult to see that condition~\ref{enm:convexity} is equivalent to the convexity of optimization problem\,\eqref{eq:GeneralizedPrimalProblem}. Likewise, condition~\ref{enm:wellposedness} is necessary for property~\ref{enm:propUSC} (it is also needed to make sense of the integrals in $E_{\h,\g,B}^{\rho_0,\rho_1}$) and will turn out in the proof of Proposition~\ref{prop:upperSemicontinuity} to be sufficient as well. It is furthermore a simple exercise to show the equivalence between condition~\ref{enm:positivity1} and property~\ref{enm:propNonNeg}. Indeed, assume $\g=\hB=\id$ (the other cases follow analogously), then condition~\ref{enm:positivity1} implies $W_{\h,\g,B}(\rho_0,\rho_1)\geq E_{\h,\g,B}^{\rho_0,\rho_1}(0,0)=0$ as well as $W_{\h,\g,B}(\rho,\rho)=\sup_{\alpha\in C(\Omega)}\int_\Omega\h(\alpha)-\alpha\,\d\rho\leq0$ for $\rho\in\measp(\Omega)$. Vice versa, taking $\rho=\delta_x$ for some $x\in\Omega$, $W_{\h,\g,B}(\rho,\rho)=0$ implies $\sup_{\alpha\in\R}\h(\alpha)-\alpha=0$ and thus in particular $\h\leq\id$. Furthermore, for a contradiction assume $\h(0)<0$; then by virtue of the hyperplane separation theorem and the concavity and upper semi-continuity of $\h$ there exist $s\in\R$ and $\veps>0$ with $\h(\alpha)<s\alpha-\veps$ for all $\alpha\in\R$. Due to $\h\leq\id$ we may assume $s\geq0$ and thus obtain $0\leq W_{\h,\g,B}(\rho,s\rho)=\sup_{\alpha\in\R}\h(\alpha)-s\alpha<0$. Assuming now conditions~\ref{enm:convexity} to \ref{enm:positivity1} one can easily derive the equivalence of condition~\ref{enm:positivity} and property~\ref{enm:propPos}. Indeed, taking again $\g=\hB=\id$, condition~\ref{enm:positivity} implies that $E_{\h,\g,B}^{\rho_0,\rho_1}$ is differentiable at $(\alpha,\beta)=(0,0)$ in any direction $(\varphi,-\varphi)\in \Lip(\Omega)^2$ with $\partial_{(\alpha,\beta)}E_{\h,\g,B}^{\rho_0,\rho_1}(\alpha,\beta)(\varphi,-\varphi)=\int_\Omega\varphi\,\d(\rho_0-\rho_1)$.
Thus, $E_{\h,\g,B}^{\rho_0,\rho_1}(0,0)=0$ can only be a maximum if $\rho_0 = \rho_1$. On the other hand, $-\h$ is convex with subgradient $\partial(-\h)(0)\supset\{-1\}$ due to condition~\ref{enm:positivity1}. Now taking $\rho_0\in\measp(\Omega)$ and $\rho_1=s\rho_0$ for some $s\geq0$ with $s\neq1$, property~\ref{enm:propPos} implies the existence of some $\alpha\in \Lip(\Omega)$ with $0<E_{\h,\g,B}^{\rho_0,\rho_1}(\alpha,-\alpha)\leq-\int_\Omega(\partial(-\h)(0)+s)\alpha\,\d\rho_0$. Thus, $-s\notin\partial(-\h)(0)$ and therefore $\partial(-\h)(0)=\{-1\}$, from which condition~\ref{enm:positivity} follows. Note further that condition~\ref{enm:positivity1} automatically implies property~\ref{enm:propNonNegMeas} since $\h$ and $\g$ are unbounded from below, while condition~\ref{enm:negativeMeasures} may simply be assumed for $\h$ and $\g$ without loss of generality: indeed, suppose for instance that $\h$ is nonmonotone; then conditions~\ref{enm:convexity} and \ref{enm:positivity1} imply the existence of a unique maximum value $\h(\bar\alpha)\geq0$. Therefore, $E_{\h,\g,B}^{\rho_0,\rho_1}(\alpha,\beta)\leq E_{\h,\g,B}^{\rho_0,\rho_1}(\min(\alpha,\bar\alpha),\beta)=E_{h,\g,B}^{\rho_0,\rho_1}(\alpha,\beta)$ and thus $W_{\h,\g,B}=W_{h,\g,B}$ for the monotonically increasing $h(\alpha)=\h(\min(\alpha,\bar\alpha))$. Finally, the structure \eqref{eqn:BIntersection} of the set $B$ is necessary for property~\ref{enm:propSWLSC} (that construction~\eqref{eqn:BIntersection} actually implies property~\ref{enm:propSWLSC} will later follow from Corollary \ref{cor:equivalenceStatic}). For instance, take $\g=\id$ and let $\rho_1=\delta_x$ and $\rho_0^n=\frac1n\rho_1\to0$ as $n\to\infty$.
If $\gB(-\bar\alpha)>\gB(-\alphamin)$ for some $\bar\alpha < \alphamin$, then \begin{multline*} W_{\h,\g,B_{01}}(0,\rho_1) \geq E_{\h,\g,B_{01}}^{0,\rho_1}(\bar\alpha,\gB(-\bar\alpha)) =\gB(-\bar\alpha)\\ >\gB(-\alphamin) \geq\liminf_{n\to\infty}\sup_{\alpha\geq\alphamin}\tfrac{\h(\alpha)}n+\gB(-\alpha) =\liminf_{n\to\infty}W_{\h,\g,B_{01}}(\rho_0^n,\rho_1)\,. \end{multline*} \end{remark} In the following we will always assume, without explicit mention, that $(\h,\g,B)$ is admissible. The class of $W_{\h,\g,B}$-discrepancies is natural to consider and allows us to extend the classical $W_1$ distance to unbalanced measures, as we will see. Several previously introduced extensions of the $W_1$-distance (as well as $W_1$ itself) can be shown to fall into this category. In Section \ref{sec:Examples} some examples, both well-known ones and new variants, will be discussed in more detail. \begin{remark}[Discrepancy bounds] The conditions on $\h$, $\g$, and $B$ imply $W_{\h,\g,B}(\rho_0,\rho_1)\leq W_1(\rho_0,\rho_1)$ for all $\rho_0,\rho_1\in\measp(\Omega)$. \end{remark} \begin{remark}[Non-existence of optimizers] Unfortunately, maximizers of $E_{\h,\g,B}^{\rho_0,\rho_1}$ do not exist in general. For instance, consider the relevant special case of $\h(\alpha)=\frac{\alpha}{1+\alpha}$ for $\alpha>-1$ and $\h(\alpha)=-\infty$ else (see the Hellinger distance in Section~\ref{sec:Examples}) and set $\g=\hB=\id$ for simplicity. For $\rho_0=\delta_x$ and $\rho_1=0$ it is easily seen that $W_{\h,\g,B}(\rho_0,\rho_1)=1$ but that $E_{\h,\g,B}^{\rho_0,\rho_1}(\alpha,\beta)<1$ for all $\alpha,\beta\in C(\Omega)$. \end{remark} Due to the potential non-existence of maximizers we will later also examine a dual problem formulation. However, non-existence only occurs in rather special cases. As shown in Proposition~\ref{prop:existencePrimal}, those cases can be characterized by conditions which are simple to check.
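For illustration, the value $W_{\h,\g,B}(\rho_0,\rho_1)=1$ claimed in the non-existence remark above can be verified by a direct computation (a sketch, using only Definition~\ref{def:GeneralizedPrimalProblem} with $\g=\hB=\id$, $\rho_0=\delta_x$, and $\rho_1=0$). Choosing the constant functions $\alpha\equiv a$ and $\beta\equiv-a$ for $a>-1$, which are Lipschitz and satisfy $(\alpha,\beta)\in B$ pointwise, the $\beta$-integral vanishes since $\rho_1=0$, so that
\begin{equation*}
E_{\h,\g,B}^{\delta_x,0}(a,-a)=\h(a)=\frac{a}{1+a}\longrightarrow1\qquad\text{as }a\to\infty\,,
\end{equation*}
while for arbitrary admissible $(\alpha,\beta)$ one has $E_{\h,\g,B}^{\delta_x,0}(\alpha,\beta)=\h(\alpha(x))<1$. Hence the supremum equals $1$, but it is not attained by any pair of continuous functions.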
\begin{proposition}[Upper semi-continuity] \label{prop:upperSemicontinuity} The energy $E_{\h,\g,B}^{\rho_0,\rho_1}$ is upper semi-continuous on $C(\Omega)^2$. \end{proposition} \begin{proof} Let $(\alpha_n,\beta_n)\to(\alpha,\beta)$ in $C(\Omega)^2$ with $E_{\h,\g,B}^{\rho_0,\rho_1}(\alpha_n,\beta_n)>-\infty$ for all $n$ sufficiently large (else there is nothing to show). Due to the closedness of $\Lip(\Omega)$ and $B$ we have $(\alpha,\beta)\in\Lip(\Omega)^2$ as well as $(\alpha(x),\beta(x))\in B$ for all $x\in\Omega$. Finally, \begin{multline*} \limsup_{n\to\infty}\int_\Omega \h(\alpha_n)\,\d\rho_0 =\limsup_{n\to\infty}\int_\Omega \h(\alpha_n)-\alpha_n\,\d\rho_0+\int_\Omega\alpha_n\,\d\rho_0\\ \leq\int_\Omega\limsup_{n\to\infty}\h(\alpha_n)-\alpha_n\,\d\rho_0+\lim_{n\to\infty}\int_\Omega\alpha_n\,\d\rho_0 \leq\int_\Omega \h(\alpha)-\alpha\,\d\rho_0+\int_\Omega\alpha\,\d\rho_0 =\int_\Omega \h(\alpha)\,\d\rho_0\,, \end{multline*} where we have used Fatou's lemma (noting that $\h(\alpha_n)-\alpha_n\leq0$), the upper semi-continuity of $\h$, as well as the continuity of the dual pairing between $\measp(\Omega)$ and $C(\Omega)$. Analogously, $\limsup_{n\to\infty}\int_\Omega \g(\beta_n)\,\d\rho_1\leq\int_\Omega \g(\beta)\,\d\rho_1$, concluding the proof. \end{proof} \begin{proposition}[Existence of optimizers] \label{prop:existencePrimal} Let $\rho_0,\rho_1\in\measp(\Omega)$. \begin{itemize} \item $W_{\h,\g,B}(\rho_0,\rho_1)=\infty$ if and only if $\sup_{(\alpha,\beta)\in B}\measnrm{\rho_0}\,\h(\alpha)+\measnrm{\rho_1}\,\g(\beta)=\infty$ (in which case there are no maximizers of $E_{\h,\g,B}^{\rho_0,\rho_1}$).
\item A maximizer of $E_{\h,\g,B}^{\rho_0,\rho_1}$ exists if and only if there exist $\alpha^*,\beta^*\in(-\infty,0]$ with \begin{align*} \measnrm{\rho_0}\,(-\h\circ\hB)'(-\beta^*)&\geq\measnrm{\rho_1}\,(-\g)'(\beta^*)\,\\ \text{or}\quad \measnrm{\rho_1}\,(-\g\circ\gB)'(-\alpha^*)&\geq\measnrm{\rho_0}\,(-\h)'(\alpha^*)\,, \end{align*} where the prime refers to a favorably chosen element of the subgradient. \item A maximizer of $E_{\h,\g,B}^{\rho_0,\rho_1}$ exists if and only if $\sup_{(\alpha,\beta)\in B}\h(\alpha)\measnrm{\rho_0}+\g(\beta)\measnrm{\rho_1}$ has a maximizer. \end{itemize} \end{proposition} \begin{proof} For a function $f \in C(\Omega)$ we abbreviate $\hat f=\max_{x\in\Omega} f(x)$, $\check f=\min_{x\in\Omega} f(x)$. Throughout the following, let $\alpha_n,\beta_n\in C(\Omega)$ denote a maximizing sequence and assume without loss of generality that $\hat\alpha_n\geq\hat\beta_n$ for $n$ large enough (else we may simply swap the roles of $\h,\alpha$ and $\g,\beta$). As for the first statement, let us show that $W_{\h,\g,B}(\rho_0,\rho_1)=\infty$ implies the divergence of $\sup_{\alpha,\beta\text{ constant}}E_{\h,\g,B}^{\rho_0,\rho_1}(\alpha,\beta)$ (the converse implication is trivial). Indeed, we must have $\h(\hat \alpha_n)\to\infty$ and thus $\alpha_n(x)\to\infty$ for all $x \in \Omega$ due to the Lipschitz constraint. Then for $n$ sufficiently large, \begin{multline*} \int_\Omega \h(\alpha_n(x))\,\d\rho_0(x)+\int_\Omega \g(\beta_n(x))\,\d\rho_1(x) \\ \leq \h(\check\alpha_n)\measnrm{\rho_0}+\g(\hat\beta_n)\measnrm{\rho_1}+(\h(\hat\alpha_n)-\h(\check\alpha_n))\measnrm{\rho_0}\,. \end{multline*} Using $(\check\alpha_n,\hat\beta_n)\in B$ and $\h(\hat\alpha_n)-\h(\check\alpha_n)\leq\hat\alpha_n-\check\alpha_n\leq\diam\Omega$ (note that $\h$ is a contraction on $[0,\infty)$) we indeed obtain $E_{\h,\g,B}^{\rho_0,\rho_1}(\check\alpha_n,\hat\beta_n)\geq E_{\h,\g,B}^{\rho_0,\rho_1}(\alpha_n,\beta_n)-\diam\Omega\measnrm{\rho_0}\to\infty$.
As for the second statement, note first that $-\h \circ \hB$ is convex and that $-(-\h)'(\hB(\beta)) \cdot (-\hB)'(\beta) \in \partial (-\h \circ \hB)(\beta)$, where the prime indicates an arbitrary subgradient element. Assume now the existence of a suitable $\beta^*$. For a contradiction, suppose $\check\alpha_n\to\infty$ and $\hat\beta_n\to-\infty$ (both are equivalent due to $\alpha+\beta\leq0$ for all $(\alpha,\beta)\in B$). Then for any $\Delta>0$ and $n$ large enough, $\hat\beta_n<\beta^*-\Delta$. Now define $\tilde\beta_n(x)=\beta_n(x)+\Delta$ and $\tilde\alpha_n(x)=\hB(-\tilde\beta_n(x))$. Note that $\tilde\alpha_n\in\Lip(\Omega)$ (since $\tilde\beta_n < \beta^* \leq 0$ and $\hB$ is a contraction on $[0,\infty)$) with $\tilde\alpha_n\geq\hB(-\beta_n)+\Delta(-\hB)'(-\tilde\beta_n)\geq\alpha_n+\Delta(-\hB)'(-\beta^*)$. We have \begin{align*} E_{\h,\g,B}^{\rho_0,\rho_1}(\alpha_n,\beta_n) &=\int_\Omega \h(\alpha_n(x))\,\d\rho_0(x)+\int_\Omega \g(\beta_n(x))\,\d\rho_1(x)\\ &\leq\int_\Omega \h(\tilde\alpha_n)+(-\h)'(\tilde\alpha_n)\Delta(-\hB)'(-\beta^*)\,\d\rho_0 +\int_\Omega \g(\tilde\beta_n)+(-\g)'(\tilde\beta_n)\Delta\,\d\rho_1\\ &\leq\int_\Omega \h(\tilde\alpha_n)\,\d\rho_0+(-\h)'(\hB(-\beta^*))\Delta(-\hB)'(-\beta^*)\measnrm{\rho_0}\\ &\qquad+\int_\Omega \g(\tilde\beta_n)\,\d\rho_1+(-\g)'(\beta^*)\Delta\measnrm{\rho_1}\\ &=\int_\Omega \h(\tilde\alpha_n)\,\d\rho_0+\int_\Omega \g(\tilde\beta_n)\,\d\rho_1 \\ & \qquad -\Delta\left[(-\h\circ\hB)'(-\beta^*)\measnrm{\rho_0}-(-\g)'(\beta^*)\measnrm{\rho_1}\right]\\ &\leq E_{\h,\g,B}^{\rho_0,\rho_1}(\tilde\alpha_n,\tilde\beta_n)\,. \end{align*} Thus, $(\tilde\alpha_n,\tilde\beta_n)$ is an even better maximizing sequence, so that we may assume the maximizing sequence $\alpha_n,\beta_n\in C(\Omega)$ to be uniformly bounded with $\beta_n\geq\hat\beta_n-\diam\Omega\geq\beta^*-\diam\Omega$ and $\alpha_n\leq-\beta_n$.
Since $\alpha_n,\beta_n\in\Lip(\Omega)$, the sequence is equicontinuous and converges (up to a subsequence) to some $(\alpha,\beta)\in C(\Omega)^2$. By the upper semi-continuity of the energy, this must be a maximizer. The argument for a suitable $\alpha^*$ is analogous. For the converse implication assume $\measnrm{\rho_0}\,(-\h\circ\hB)'(-\beta^*)<\measnrm{\rho_1}\,(-\g)'(\beta^*)$ for all $\beta^*\in(-\infty,0]$ (the proof is analogous if the other condition is violated). Taking $\beta^*=0$, this implies $\measnrm{\rho_0}>\measnrm{\rho_1}$. Let $(\alpha,\beta)\in C(\Omega)^2$ be a maximizer and choose $\Delta>\max\{\hat\beta,0\}$. We now set $\tilde\beta=\beta-\Delta$, $\tilde\alpha(x)=\hB(-\tilde\beta(x))$ (note that again $\tilde\alpha\in\Lip(\Omega)$) and obtain as before \begin{multline*} E_{\h,\g,B}^{\rho_0,\rho_1}(\alpha,\beta) =\int_\Omega \h(\alpha)\,\d\rho_0+\int_\Omega \g(\beta)\,\d\rho_1\\ \leq\int_\Omega \h(\tilde\alpha)\,\d\rho_0-(-\h)'(\hB(-\check{\tilde\beta}))\Delta(-\hB)'(-\check{\tilde\beta})\measnrm{\rho_0} +\int_\Omega \g(\tilde\beta)\,\d\rho_1-(-\g)'(\check{\tilde\beta})\Delta\measnrm{\rho_1}\\ =\int_\Omega \h(\tilde\alpha)\,\d\rho_0+\int_\Omega \g(\tilde\beta)\,\d\rho_1 +\Delta\left[(-\h\circ\hB)'(-\check{\tilde\beta})\measnrm{\rho_0}-(-\g)'(\check{\tilde\beta})\measnrm{\rho_1}\right] <E_{\h,\g,B}^{\rho_0,\rho_1}(\tilde\alpha,\tilde\beta)\,, \end{multline*} contradicting the optimality of $(\alpha,\beta)$. By repeating the arguments for the second statement under restriction to spatially constant $(\alpha,\beta) \in \Lip(\Omega)^2$, we find that the existence conditions of the second statement are equivalent to the existence of maximizers for $\sup_{(\alpha,\beta)\in B}\h(\alpha)\measnrm{\rho_0}+\g(\beta)\measnrm{\rho_1}$. \end{proof} There may be some redundancy in the choice of $\h$, $\g$, and $B$. Specifically, it turns out that in certain cases the model can be simplified by eliminating the constraint set $B$ and the variable $\beta$.
Later, this will also allow us to simplify some infimal convolution-type discrepancy measures (cf.~Corollary \ref{cor:reductionInfimalConv} and Section \ref{sec:unbalancedExamples}). \begin{proposition}[Model reduction] \label{prop:modelReduction} Let $\gamma>0$ (possibly $\gamma = +\infty$), $\rho_0,\rho_1\in\measp(\Omega)$, and abbreviate \begin{equation*} B(a,b)=\{(\alpha,\beta)\in\R^2\,|\,\alpha+\beta\leq0\}\cap \left( [a,+\infty)\times [b,+\infty) \right) \,. \end{equation*} \begin{itemize} \item If $\gB(\alpha)=\min \{ \alpha,\gamma \}$ for all $\alpha>0$ (or equivalently $\hB(\beta)=\beta-\iota_{[-\gamma,\infty)}(\beta)$ for $\beta<0$), then $W_{\h,\g,B}=W_{\h\circ\hB,\g,B(\tildealphamin,\betamin)}$ with \begin{equation*} \tildealphamin = -\gB(-\alphamin) = \max\{\alphamin,-\gamma\}\,. \end{equation*} \item If $\hB(\beta)=\min \{ \beta,\gamma \}$ for all $\beta>0$ (or equivalently $\gB(\alpha)=\alpha-\iota_{[-\gamma,\infty)}(\alpha)$ for $\alpha<0$), then $W_{\h,\g,B}=W_{\h,\g\circ\gB,B(\alphamin,\tildebetamin)}$ with \begin{equation*} \tildebetamin = -\hB(-\betamin) = \max\{\betamin,-\gamma\}\,. \end{equation*} \item $W_{\h,\g,B(a,b)}=\sup\left\{\int_\Omega\h(\alpha)\,\d\rho_0+\int_\Omega\g(-\alpha)\,\d\rho_1\right|\left.\vphantom{\int_\Omega}\alpha\in\Lip(\Omega),\,a\leq\alpha\leq -b\right\}$. \end{itemize} \end{proposition} \begin{proof} In the first case, notice that for any $\beta\in\Lip(\Omega)$ with $\beta \leq \gamma$ we also have $\tilde\alpha=\hB\circ(-\beta)\in\Lip(\Omega)$, since $\hB(-\cdot)$ is a contraction on $(-\infty,\gamma]$. Note that if $\beta(x)> \gamma$ for some $x \in \Omega$, both energies are $-\infty$ for any $\alpha$. Moreover, for $\tilde\alpha$ to be feasible, we need $\tilde\alpha(x) = \hB(-\beta(x)) \geq \alphamin$ for all $x \in \Omega$ (see \eqref{eqn:AlphaBetaMinA}-\eqref{eqn:AlphaBetaMinB}), which is equivalent to $-\beta(x) \geq -\gB(-\alphamin) = \tildealphamin$ (see \eqref{eqn:BHypograph}).
Thus, \begin{multline*} \sup_{\alpha\in C(\Omega)}E_{\h,\g,B}^{\rho_0,\rho_1}(\alpha,\beta) =E_{\h,\g,B}^{\rho_0,\rho_1}(\tilde\alpha,\beta)\\ =E_{\h\circ\hB,\g,B(\tildealphamin,\betamin)}^{\rho_0,\rho_1}(-\beta,\beta) =\sup_{\alpha\in C(\Omega)}E_{\h\circ\hB,\g,B(\tildealphamin,\betamin)}^{\rho_0,\rho_1}(\alpha,\beta) \end{multline*} from which the statement follows. The second case follows analogously. Finally, \begin{align*} W_{\h,\g,B(a,b)} & =\sup\left\{\int_\Omega\h(\alpha)\,\d\rho_0+\int_\Omega\g(\beta)\,\d\rho_1\right|\left.\vphantom{\int_\Omega}\alpha,\beta\in\Lip(\Omega),\,\alpha+\beta\leq0,\,a \leq \alpha,\,b \leq \beta\right\} \\ & =\sup\left\{\int_\Omega\h(\alpha)\,\d\rho_0+\int_\Omega\g(-\alpha)\,\d\rho_1\right|\left.\vphantom{\int_\Omega}\alpha\in\Lip(\Omega),\,a\leq\alpha\leq -b\right\}\,. \qedhere \end{align*} \end{proof} \begin{remark}[Wasserstein-1 metric]\label{rem:W1reduction} For standard $W_1$, where $\h = \g = \hB = \id$, one has $\alphamin = \betamin = -\infty$, $B_0\times B_1=\R^2$, and $B=B(\alphamin,\betamin)$. Consequently, by virtue of Proposition \ref{prop:modelReduction}, one can eliminate one dual variable by setting $\alpha=-\beta$, as is common practice (cf.~Section \ref{sec:IntroOverview}). \end{remark} \subsection[Infimal convolution-type extensions of W1]{Infimal convolution-type extensions of $W_1$}\label{sec:infConvW} In the literature, a different approach is typically taken to achieve convex and efficient generalizations of the $W_1$ metric, namely an infimal convolution-type combination of non-transport-type metrics with the Wasserstein metric. To introduce a general class of such discrepancies we now fix a suitable family of local, non-transport-type discrepancies.
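Before formalizing this, a small worked example may indicate the effect of such combinations (a sketch, anticipating the total variation discrepancy $D^{TV}(\mu,\nu)=\|\mu-\nu\|_\meas$ introduced below): for $\rho_0=2\delta_x$ and $\rho_1=\delta_y$, for which $W_1$ alone is infinite, one checks that it suffices to consider intermediate measures of the form $\rho=a\delta_x+(1-a)\delta_y$ with $a\in[0,1]$, whence
\begin{equation*}
\inf_{\rho\in\measp(\Omega)}D^{TV}(2\delta_x,\rho)+W_1(\rho,\delta_y)
=\inf_{a\in[0,1]}\,(3-2a)+a\,d(x,y)
=1+\min\{d(x,y),2\}\,.
\end{equation*}
The excess mass always pays the total variation penalty $1$, and for $d(x,y)>2$ it is cheaper to annihilate the remaining unit of mass at $x$ and recreate it at $y$ than to transport it.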
\begin{definition}[Local discrepancy] \label{def:LocalSimilarityMeasure} Let $c : \R \times \R \to [0,\infty]$ satisfy the following assumptions: \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item\label{enm:cConv} $c$ is convex, positively 1-homogeneous, and lower semi-continuous jointly in both arguments, \item\label{enm:strictPos} $c(m,m)=0$ for all $m\geq0$ and $c(m_0,m_1)>0$ if $m_0\neq m_1$, \item\label{enm:domain} $c(m_0,m_1) = \infty$ whenever $m_0<0$ or $m_1 < 0$. \end{enumerate} Then $c$ induces a discrepancy $D$ on $\measp(\Omega)$ (extended to the rest of $\meas(\Omega)$ by infinity) via \begin{align} D & : \meas(\Omega)^2 \rightarrow [0,\infty]\,, & (\rho_0,\rho_1) & \mapsto \int_\Omega c(\RadNik{\rho_0}{\rho},\RadNik{\rho_1}{\rho})\,\d\rho\,, \end{align} where $\rho$ is any measure in $\measp(\Omega)$ with $\rho_0$, $\rho_1 \ll \rho$ (for instance $\rho=|\rho_0|+|\rho_1|$ with $|\cdot|$ indicating the total variation measure). Note that due to the 1-homogeneity of $c$ this definition does not depend on the choice of the reference measure $\rho$ (see e.\,g.\ \cite[Thm.\,2]{GoffmanSerrin64}), for which reason we shall also use the shorter notation \begin{equation*} D(\rho_0,\rho_1)=\int_\Omega c(\rho_0,\rho_1)\,. \end{equation*} \end{definition} Several examples of local discrepancies will be provided in Section~\ref{sec:Examples}, including the squared Hellinger distance and the metric induced by the total variation, \begin{equation*} D^{FR}(\rho_0,\rho_1)=\int_\Omega\left(\sqrt{\tfrac{\d\rho_0}{\d\rho}}-\sqrt{\tfrac{\d\rho_1}{\d\rho}}\right)^2\,\d\rho \quad\text{and}\quad D^{TV}(\rho_0,\rho_1)=\|\rho_0-\rho_1\|_\meas\,. \end{equation*} Below we discuss a few basic properties. \begin{remark}[Metric properties] It is straightforward to check that a local discrepancy with integrand $c$ is a metric on $\measp(\Omega)$ if $c$ is a metric on $[0,\infty)$.
\end{remark} \begin{remark}[Seminorm properties] Obviously, a local discrepancy $D$ is positively homogeneous and convex on $\meas(\Omega)^2$. \end{remark} \begin{proposition}[Lower semi-continuity] \label{prop:LSCLocDis} A local discrepancy $D$ is weakly-* (and thus sequentially weakly-*) lower semi-continuous on $\meas(\Omega)^2$. \end{proposition} Our proof follows \cite{BoBu90}, who have shown sequential lower semi-continuity for more general functionals on $\meas(\Omega)^2$. In case of a finite integrand $c$, sequential weak-* lower semi-continuity also directly follows from \cite[Thm.\,3]{GoffmanSerrin64}. \begin{proof} Let us define the sets \begin{align*} B&=\{u\in\R^2\,|\,u\cdot m\leq c(m_0,m_1)\text{ for all }m = (m_0,m_1) \in\R^2\}\\ \text{and}\quad H&=\{u\in C(\Omega,\R^2)\,|\,u(x)\in B\text{ for all }x\in\Omega\}\,. \end{align*} Note that the set $B$ is also characterized by $c^\ast = \iota_B$. Now consider a net $\rho^a=(\rho_0^a,\rho_1^a)$ which converges weakly-* to $\rho=(\rho_0,\rho_1)$ in $\meas(\Omega)^2$. For an arbitrary $u\in H$ we have \begin{equation*} D(\rho_0^a,\rho_1^a) =\int_\Omega c\left(\RadNik{\rho_0^a}{|\rho^a|},\RadNik{\rho_1^a}{|\rho^a|}\right)\,\d|\rho^a| \geq\int_\Omega u(x)\cdot\d\rho^a(x) \to\int_\Omega u(x)\cdot\d\rho\,, \end{equation*} where $|\cdot|$ indicates the total variation measure. Introducing the indicator function $\iota_H:L^\infty(\Omega,|\rho|)^2 \allowbreak \to \allowbreak \{0,\infty\}$, $\iota_H(u)=0$ if $u\in H$ and $\iota_H(u)=\infty$ else, we now have \begin{equation*} \sup_{u\in H}\int_\Omega u(x)\cdot\d\rho =\preconj{\iota_H}(\RadNik{\rho}{|\rho|}) =\int_\Omega c\left(\RadNik{\rho_0}{|\rho|},\RadNik{\rho_1}{|\rho|}\right)\,\d|\rho| =D(\rho_0,\rho_1)\,. 
\end{equation*} Indeed, the first and last equality hold by definition of the preconjugate and $D$, while the middle one is due to \cite[Thm.\,2]{BouchitteValadier88} (taking their $J\equiv0$, in which case their $k(x,(m_0,m_1))=c(m_0,m_1)$), where the conjugation is with respect to the dual pair $(L^1(\Omega,|\rho|)^2,L^\infty(\Omega,|\rho|)^2)$. \end{proof} \begin{remark}[Coercivity]\label{rem:coercivity} The convexity and positive homogeneity of $c$ imply \begin{equation*} D(\rho_0,\rho_1)\geq c(\measnrm{\rho_0},\measnrm{\rho_1}) \end{equation*} via Jensen's inequality. Furthermore, for any $\varepsilon\in(0,1)$ we have \begin{equation*} D(\tilde\rho,\rho)\geq c(\varepsilon,1)\measnrm{\rho} \quad\text{and}\quad D(\rho,\tilde\rho)\geq c(1,\varepsilon)\measnrm{\rho} \qquad\text{for all }\rho,\tilde\rho\text{ with }\measnrm{\rho} \geq \measnrm{\tilde\rho}/\varepsilon \end{equation*} due to $D(\tilde\rho,\rho)\geq\measnrm{\rho} c(\measnrm{\tilde\rho}/\measnrm{\rho},1)$ and $D(\rho,\tilde\rho)\geq\measnrm{\rho} c(1,\measnrm{\tilde\rho}/\measnrm{\rho})$. \end{remark} \begin{definition}[Infimal convolution of discrepancies] Let $X$ be a space and $d_1,d_2:X\times X\to[0,\infty]$ with $d_i(x_1,x_2)=0$, $i=1,2$, if and only if $x_1=x_2$. We define a new function $d_1\diamond d_2:X\times X\to[0,\infty]$ by \begin{equation*} (d_1\diamond d_2)(x_1,x_2)=\inf_{x\in X}d_1(x_1,x)+d_2(x,x_2)\,. \end{equation*} \end{definition} \begin{remark}[Properties of $d_1\diamond d_2$]\label{rem:propertiesInfConv}\ \begin{enumerate} \item The operation $\diamond$ is associative, that is, $d_1\diamond(d_2\diamond d_3)=(d_1\diamond d_2)\diamond d_3$, but not commutative. \item For $X$ a vector space, the operation $\diamond$ can be viewed as an infimal convolution $\square$ via \begin{equation*} (d_1\diamond d_2)(x_1,x_2) =(d_1(x_1,\cdot) \, \square \, d_2(y-\cdot,x_2))(y) =(d_1(x_1,y-\cdot) \, \square \, d_2(\cdot,x_2))(y) \end{equation*} for an arbitrary $y\in X$. 
\item We have $(d_1\diamond d_2)(x_1,x_2)\leq d_i(x_1,x_2)$ for $i=1,2$. \item The nonnegativity of $d_1$ and $d_2$ is inherited by $d_1\diamond d_2$. Furthermore, $(d_1\diamond d_2)(x_1,x_2)=0$ if and only if $x_1=x_2$. \item If $X$ is a vector space and $d_1$ and $d_2$ are jointly convex and positively 1-homogeneous in both arguments, then $d_1\diamond d_2$ is so as well, as is straightforward to check. If in addition $X$ has a topology and $d_1$ and $d_2$ are lower semi-continuous, then so is $d_1\diamond d_2$. \item\label{enm:metricInfConv} If $d_1=d_2=d$ for a metric $d$, then $d_1\diamond d_2=d$. \end{enumerate} \end{remark} We are now in a position to introduce a new class of generalizations to the Wasserstein-1 metric. The idea is simply to combine $W_1$ with local discrepancies via the above infimal convolution-type approach. There are multiple possibilities to do so: for instance, one might append the local discrepancy before or after the Wasserstein metric, or one might allow mass change in between two mass transports. The definition below is kept general enough to encompass all these possibilities. \begin{definition}[$W_{\Dl,\Dm,\Dr}$-discrepancy]\label{def:SandwichProblem} For three local discrepancies $\Dl$, $\Dm$, $\Dr$ we define the following discrepancy on $\measp(\Omega)$, \begin{multline} \label{eq:SandwichProblem} W_{\Dl,\Dm,\Dr}(\rho_0,\rho_1) =(\Dl\diamond W_1\diamond\Dm\diamond W_1\diamond \Dr)(\rho_0,\rho_1)\\ =\inf \left\{ \Dl(\rho_0,\rho_0') + W_1(\rho_0',\rho_0'') + \Dm(\rho_0'',\rho_1'') + W_1(\rho_1'',\rho_1') + \Dr(\rho_1',\rho_1)\, \right| \\ \left. (\rho_0',\rho_0'',\rho_1'',\rho_1') \in \measp(\Omega)^4 \right\}\,. \end{multline} \end{definition} \begin{remark}[Unbalanced measures] \label{rem:UnbalancedMeasures} The above-defined discrepancy can be used to extend $W_1$ to unbalanced measures.
Indeed, the mass can change in three places, before the first $W_1$ transport (penalized by $\Dl$), after the second $W_1$ transport (in $\Dr$), and in between (in $\Dm$). This not only accommodates mass differences between $\rho_0$ and $\rho_1$, but also has the effect that mass may be decreased slightly before transport and then increased again afterwards. By choosing the extended discrete metric \begin{equation}\label{eqn:discreteMetric} D^d(\rho_0,\rho_1)=\begin{cases}0&\text{if }\rho_0=\rho_1\in\measp(\Omega)\,,\\\infty&\text{else}\end{cases} \end{equation} for some of the $\Dl,\Dm,\Dr$, mass change can be prohibited, and thus the level of `local flexibility' in \eqref{eq:SandwichProblem} can be varied. Note that the model can then be simplified due to \begin{equation*} W\diamond D^d=D^d\diamond W=W \end{equation*} for any (local or nonlocal) discrepancy $W$ on $\measp(\Omega)$. Furthermore, due to Remark \ref{rem:propertiesInfConv}\eqref{enm:metricInfConv} a further model reduction results from \begin{equation*} W_1\diamond W_1=W_1\,. \end{equation*} \end{remark} The existence of minimizers $\rho_0',\rho_1',\rho_0'',\rho_1''$ will follow automatically from the proof of Proposition~\ref{prop:PrimalDualSandwich} below, but can also easily be proven directly, using only the sequential weak-* lower semi-continuity and coercivity of the discrepancies. \begin{proposition}[Existence] Given $\rho_0,\rho_1\in\measp(\Omega)$ such that $W_{\Dl,\Dm,\Dr}(\rho_0,\rho_1)$ is finite, problem\,\eqref{eq:SandwichProblem} admits minimizers $\rho_0',\rho_1',\rho_0'',\rho_1''$. \end{proposition} \begin{proof} Consider a minimizing sequence $(\rho_{0,n}',\rho_{1,n}',\rho_{0,n}'',\rho_{1,n}'')$, $n=1,2,\ldots$. By the coercivity of $\Dl$ and $\Dr$ we may assume $(\measnrm{\rho_{0,n}'},\measnrm{\rho_{1,n}'})$ and thus also $(\measnrm{\rho_{0,n}''},\measnrm{\rho_{1,n}''})$ to be uniformly bounded.
Therefore, a subsequence of $(\rho_{0,n}',\rho_{1,n}',\rho_{0,n}'',\rho_{1,n}'')$ converges weakly-* to some $(\rho_0',\rho_1',\rho_0'',\rho_1'')\in\measp(\Omega)^4$, which must be a minimizer due to the sequential weak-* lower semi-continuity of $\Dl$, $\Dr$, $\Dm$, and $W_1$. \end{proof} \begin{proposition}[Sequential lower semi-continuity] \label{prop:seqLSCW} The discrepancy $W_{\Dl,\Dm,\Dr}$ is sequentially weakly-* lower semi-continuous on $\meas(\Omega)^2$. \end{proposition} \begin{proof} Indeed, let $\rho_i^n\to_{n\to\infty}\rho_i$ weakly-*, $i=0,1$, and assume (potentially by restricting to a subsequence) $W_{\Dl,\Dm,\Dr}(\rho_0^n,\rho_1^n) < C$ for some $C>0$ and all $n$ (otherwise there is nothing to show). Furthermore, let $(\rho_{0,n}',\rho_{1,n}',\rho_{0,n}'',\rho_{1,n}'')$ be the corresponding minimizers in \eqref{eq:SandwichProblem}. The coercivity of $\Dl$, $\Dr$, and $W_1$ now implies the uniform boundedness of $(\rho_{0,n}',\rho_{1,n}',\rho_{0,n}'',\rho_{1,n}'')$ in $\measp(\Omega)^4$. Therefore we have weak-* convergence to some $(\rho_0',\rho_1',\rho_0'',\rho_1'')$ up to taking a subsequence and thus \begin{align*} W_{\Dl,\Dm,\Dr}(\rho_0,\rho_1) &\leq \Dl(\rho_0,\rho_0') + W_1(\rho_0',\rho_0'') + \Dm(\rho_0'',\rho_1'') + W_1(\rho_1'',\rho_1') + \Dr(\rho_1',\rho_1)\\ &\leq\liminf_{n\to\infty} \Dl(\rho_{0}^n,\rho_{0,n}') + W_{1}(\rho_{0,n}',\rho_{0,n}'') + \Dm(\rho_{0,n}'',\rho_{1,n}'') \\ & \qquad \qquad + W_{1}(\rho_{1,n}'',\rho_{1,n}') + \Dr(\rho_{1,n}',\rho_{1}^n)\\ &=\liminf_{n\to\infty}W_{\Dl,\Dm,\Dr}(\rho_0^n,\rho_1^n) \end{align*} due to the weak-* lower semi-continuity of $\Dl$, $\Dr$, $\Dm$, and $W_1$. \end{proof} \begin{remark}[Semimetric properties] If $\Dl(\mu,\nu)=\Dr(\nu,\mu)$ for all $\mu,\nu\in\measp(\Omega)$ and $\Dm$ is symmetric, then $W_{\Dl,\Dm,\Dr}$ will also be symmetric. Thus, $W_{\Dl,\Dm,\Dr}$ is a semimetric in that it satisfies all metric axioms except for possibly the triangle inequality.
\end{remark} \subsection{Model equivalence}\label{sec:modelEquivalence} Here we prove that the two previously introduced model families $W_{\h,\g,B}$ and $W_{\Dl,\Dm,\Dr}$ are actually equivalent. As a byproduct we arrive at a more intuitive interpretation of the quantities from Definition\,\ref{def:GeneralizedPrimalProblem}, which so far was just derived as the most general extension of the predual $W_1$ formulation. \begin{proposition}[Primal and predual formulation] \label{prop:PrimalDualSandwich} Let the local discrepancies $\Dl$, $\Dm$, and $\Dr$ be induced by the integrands $\Cl$, $\Cm$, and $\Cr$, and let $\rho_0,\rho_1\in\measp(\Omega)$ have finite mass. The following primal and predual formulations hold. \begin{align} W_{\Dl,\Dm,\Dr}(\rho_0,\rho_1) &=\inf_{\pi_1,\pi_2\in\measp(\Omega^2)} \int_\Omega \Cl(\rho_0,{\Proj_1}_\sharp\pi_1) +\int_{\Omega\times\Omega}d(x,y)\,\d\pi_1(x,y)\nonumber\\ &\qquad+\int_\Omega \Cm({\Proj_2}_\sharp\pi_1,{\Proj_1}_\sharp\pi_2) +\int_{\Omega\times\Omega}d(x,y)\,\d\pi_2(x,y) +\int_\Omega \Cr({\Proj_2}_\sharp\pi_2,\rho_1)\,,\label{eqn:primal}\\ W_{\Dl,\Dm,\Dr}(\rho_0,\rho_1) &=\sup_{\substack{\alpha,\beta\in\Lip(\Omega)\\(\alpha(x),\beta(x))\in B_{01}\cap(B_0\times B_1)\,\forall x\in\Omega}} \int_\Omega \h(\alpha)\,\d\rho_0+\int_\Omega \g(\beta)\,\d\rho_1\,,\label{eqn:predual} \end{align} where \begin{gather} \label{eqn:predualInducedH} \hB(\beta)=-[\Cm(1,\cdot)]^\ast(-\beta)\,,\qquad \h(\alpha)=-[\Cl(1,\cdot)]^\ast(-\alpha)\,,\qquad \g(\beta)=-[\Cr(\cdot,1)]^\ast(-\beta)\,,\\ B_{01}=\{(\alpha,\beta)\in\R^2\,|\,\alpha\leq \hB(-\beta)\}\,, \nonumber \\ B_0=\cl\{\alpha\in\R\,|\,\h(\alpha)>-\infty\}\,,\qquad B_1=\cl\{\beta\in\R\,|\,\g(\beta)>-\infty\}\,, \nonumber \end{gather} and $\cl$ denotes the closure. If it is finite, the infimum in the primal formulation is achieved.
\end{proposition} \begin{remark}[Relation to $W_{\h,\g,B}$] Problem \eqref{eqn:predual} looks very similar to a $W_{\h,\g,B}$-type problem as specified in \eqref{eq:GeneralizedPrimalProblem}, where partial conjugates of $(\Cl,\Cr,\Cm)$ take the roles of $(\h,\g,\hB)$. We will make this correspondence more precise at the end of this section. Note that the set $B_{01}$ can also be characterized by $\Cm^\ast = \iota_{B_{01}}$. However, the characterization via the function $\hB$ emphasises the symmetry of the roles of $\h$, $\g$ and $B$, as counterparts of $\Cl$, $\Cr$ and $\Cm$. \end{remark} \begin{remark}[Relation to entropy-transport]\label{rem:entropyTransport} With $\Dm$ the extended discrete metric \eqref{eqn:discreteMetric} (in which case one of $\pi_1$ and $\pi_2$ can be eliminated) and the metric $d$ replaced by a more general cost $c$, the above primal problem \eqref{eqn:primal} has for instance also been considered by Liero et al.\,\cite[(1.6)]{LieroMielkeSavare-HellingerKantorovich-2015a}. \end{remark} The proof of Proposition \ref{prop:PrimalDualSandwich} requires a few preparatory lemmas. The first one is the analogue of the well-known Fenchel--Moreau theorem, only stated for functionals on the dual space. \begin{lemma} \label{lem:FenchelMoreau} Let $X$ be a Banach space with topological dual $X^*$ and $w:X^*\to(-\infty,\infty]$ be proper convex and weakly-* lower semi-continuous. Then $w=(\preconj w)^*$. \end{lemma} \begin{proof} Let $Y=X^*$ equipped with the weak-* topology, then its dual space, the space of continuous linear functionals on $Y$, is $Y^*=X$. Now define $v:Y\to(-\infty,\infty]$ by $v(y)=w(y)$ for all $y\in Y$, then $v$ is proper convex lower semi-continuous on $Y$. By the Fenchel--Moreau theorem, $v=\preconj(v^*)$, however, $v^*=\preconj w$ and $\preconj(v^*)=(\preconj w)^*$ by definition of the Legendre--Fenchel conjugate. 
\end{proof} Next, recall that any discrepancy $D$ acts on all of $\meas(\Omega)^2$ with $D(\rho_0,\rho_1)=\infty$ as soon as $\rho_0\notin\measp(\Omega)$ or $\rho_1\notin\measp(\Omega)$. \begin{lemma} \label{lem:conjugateD} Let $D(\rho_0,\rho_1)=\int_\Omega c(\rho_0,\rho_1)$ be a local discrepancy on $\measp(\Omega)^2$ and $\rho\in\measp(\Omega)$. We have \begin{equation*} \preconj[D(\rho,\cdot)](\alpha)=\int_\Omega h(\alpha)\,\d\rho+\iota_{\cl[h(\alpha)<\infty]}(\alpha) \qquad\text{and}\qquad \preconj D(\alpha,\beta)=\iota_{[\alpha\leq-h(\beta)]}(\alpha,\beta) \end{equation*} for all $\alpha,\beta\in C(\Omega)$ and $h(\alpha)=[c(1,\cdot)]^\ast(\alpha)$, where $\cl$ denotes the closure and $[h(\alpha)<\infty]\subset C(\Omega)$ and $[\alpha\leq-h(\beta)]\subset C(\Omega)^2$ denote the sets of functions $\alpha$ and $\beta$ such that the respective conditions hold pointwise. Moreover we have \begin{equation} \label{eqn:CConjugateAlphaMin} \cl\{\alpha\in\R\,|\,h(\alpha)<\infty\}=(-\infty,c(0,1)]\,. \end{equation} \end{lemma} \begin{proof} We have \begin{multline*} \preconj[D(\rho,\cdot)](\alpha) =\sup_{\hat\rho\in\meas(\Omega)}\int_\Omega\alpha\,\d\hat\rho-D(\rho,\hat\rho)\\ =\sup_{\substack{\hat\rho,\mu\in\measp(\Omega)\,:\\\rho,\hat\rho\ll\mu}}\int_\Omega\alpha\RadNik{\hat\rho}{\mu}-c\left(\RadNik{\rho}{\mu},\RadNik{\hat\rho}{\mu}\right)\,\d\mu =\sup_{\substack{\mu\in\measp(\Omega)\,:\,\rho\ll\mu\\g\geq0\text{ measurable}}}\int_\Omega\alpha g-c\left(\RadNik{\rho}\mu,g\right)\,\d\mu\,. \end{multline*} Now take as a test case $\mu=\delta_{\hat x}+\rho$ for $\hat x\in\Omega$ and $g(\hat x)=n\in\N$, $g(x)=1$ else.
We obtain \begin{align*} \preconj[D(\rho,\cdot)](\alpha) & \geq\int_\Omega\alpha g-c\left(\RadNik{\rho}\mu,g\right)\,\d\mu \\ & = (1+\rho(\{\hat x\})) \cdot \left[ n \cdot \alpha(\hat x) - n \cdot c \left(\RadNik{\rho}\mu(\hat x)/n,1\right) \right] + \int_{\Omega \setminus \{\hat x\}} \left( \alpha -c(1,1) \right)\,\d \rho \end{align*} which, due to $c(z/n,1)\to c(0,1)$ for any $z\geq0$, diverges to infinity as $n\to\infty$ if $\alpha(\hat x)>c(0,1)$. Thus \begin{equation*} \preconj[D(\rho,\cdot)](\alpha)=\infty\qquad\text{if }\alpha(x)>c(0,1)\text{ for any }x\in\Omega\,. \end{equation*} Furthermore, for any $\mu\in\measp(\Omega)$ let us denote the Lebesgue decomposition by $\mu=g_\mu\rho+\mu^\perp$, where $g_\mu$ is a density and $\mu^\perp$ is the singular part with respect to $\rho$. If $\alpha(x)\leq c(0,1)$ for all $x\in\Omega$, we have \begin{align*} \preconj[D(\rho,\cdot)](\alpha) &=\sup_{\substack{\mu\in\measp(\Omega),\,\rho\ll\mu\\g\geq0\text{ measurable}}}\int_\Omega\alpha g-c(0,g)\,\d\mu^\perp+\int_\Omega\left(\alpha g-c(\tfrac1{g_\mu},g)\right)g_\mu\,\d\rho\\ &=\sup_{\tilde g\text{ measurable}}\int_\Omega\alpha\tilde g-c(1,\tilde g)\,\d\rho\,. \end{align*} Since $c$ is a normal integrand and $\alpha\in L^\infty(\Omega,\rho)$ we have (see e.\,g.\ \cite[Thm.\,VII-7]{CaVa77}) \begin{equation*} \preconj[D(\rho,\cdot)](\alpha) \geq\sup_{\tilde g\in L^1(\Omega,\rho)}\int_\Omega\alpha\tilde g-c(1,\tilde g)\,\d\rho =\int_\Omega[c(1,\cdot)]^\ast(\alpha)\,\d\rho\,, \end{equation*} while on the other hand also \begin{equation*} \preconj[D(\rho,\cdot)](\alpha) \leq\int_\Omega\sup_{\tilde g}\left(\alpha\tilde g-c(1,\tilde g)\right)\,\d\rho =\int_\Omega[c(1,\cdot)]^\ast(\alpha)\,\d\rho\,. \end{equation*} Finally, it is straightforward to check $$\cl\{\alpha\in\R\,|\,[c(1,\cdot)]^\ast(\alpha)<\infty\}=(-\infty,c(0,1)]$$ so that summarising, we arrive at $\preconj[D(\rho,\cdot)](\alpha)=\int_\Omega h(\alpha)\,\d\rho+\iota_{\cl[h(\alpha)<\infty]}(\alpha)$. 
As for $\preconj D$ we can finally compute \begin{equation*} \preconj D(\beta,\alpha) =\sup_{\rho\in\measp(\Omega)}\int_\Omega\beta\,\d\rho+\preconj[D(\rho,\cdot)](\alpha) =\sup_{\rho\in\measp(\Omega)}\int_\Omega\beta+h(\alpha)\,\d\rho+\iota_{\cl[h(\alpha)<\infty]}(\alpha)\,, \end{equation*} where it is straightforward to identify the right-hand side with $\iota_{[\beta\leq-h(\alpha)]}(\beta,\alpha)$. \end{proof} \begin{lemma}[Support function of $\Lip(\Omega)$] \label{lem:SupLip} For $\rho\in\meas(\Omega)$ we have $$\iota_{\Lip(\Omega)}^\ast(\rho)=\inf_{\mu\in\meas(\Omega)}W_1(\mu,\rho+\mu)\,,$$ where the infimum is achieved if it is finite. \end{lemma} \begin{proof} Let $\mu\in\meas(\Omega)$ be any measure with $\mu,\rho+\mu\in\measp(\Omega)$. By Remark \ref{rem:W1reduction} we have \begin{equation*} W_1(\mu,\rho+\mu) =\sup_{\beta\in\Lip(\Omega)}\int_\Omega-\beta\,\d\mu+\int_\Omega\beta\,\d(\rho+\mu) =\sup_{\beta\in\Lip(\Omega)}\int_\Omega\beta\,\d\rho =\iota_{\Lip(\Omega)}^\ast(\rho)\,. \end{equation*} Taking the infimum over $\mu\in\meas(\Omega)$ yields the result (recall that $W_1(\mu,\rho+\mu)=\infty$ for $\mu\notin\measp(\Omega)$ or $\rho+\mu\notin\measp(\Omega)$). Obviously, the infimum is achieved by any $\mu$ with $\mu,\rho+\mu\in\measp(\Omega)$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:PrimalDualSandwich}] To obtain the primal formulation of $W_{\Dl,\Dm,\Dr}$ it is sufficient to replace any occurrence of $W_1$ by its definition \eqref{eqn:W1primal}, so let us now consider the predual formulation. Let us abbreviate \begin{equation*} K=\{(\alpha,\beta)\in C(\Omega)^2\,|\,(\alpha(x),\beta(x))\in B_{01}\,\forall x\in\Omega\}\,. \end{equation*} By Lemmas~\ref{lem:FenchelMoreau} and \ref{lem:conjugateD} we have \begin{equation*} \Dm=(\preconj{\Dm})^\ast=\iota_K^\ast \end{equation*} (note that Lemma~\ref{lem:FenchelMoreau} can be applied to $\Dm$ due to Proposition~\ref{prop:LSCLocDis}). 
Next consider the indicator function $H:C(\Omega)^2\to\{0,\infty\}$, \begin{equation*} H(\alpha,\beta)=\iota_{K}(\alpha,\beta)+\iota_{\Lip(\Omega)}(\alpha)+\iota_{\Lip(\Omega)}(\beta)\,. \end{equation*} Denoting by $\square$ the infimal convolution and by $\cl$ the closure of functions (also known as lower semi-continuous envelope), for $\mu,\nu\in\meas(\Omega)$ we have \begin{align*} H^\ast(\mu,\nu) &=\cl\left[\iota_K^\ast\square(\iota_{\Lip(\Omega)},\iota_{\Lip(\Omega)})^\ast\right](\mu,\nu)\\ &=\cl\left[\inf_{\hat\eta,\hat\theta\in\meas(\Omega)}\Dm(\mu-\hat\eta,\nu-\hat\theta)+\iota_{\Lip(\Omega)}^\ast(\hat\eta)+\iota_{\Lip(\Omega)}^\ast(\hat\theta)\right]\\ &=\cl\bigg[\inf_{\hat\eta,\hat\theta\in\meas(\Omega)}\min_{\eta,\theta\in\meas(\Omega)}\Dm(\mu-\hat\eta,\nu-\hat\theta)+W_1(\hat\eta+\eta,\eta)+W_1(\hat\theta+\theta,\theta)\bigg] \end{align*} by Lemma~\ref{lem:SupLip}. Substituting $\zeta=\hat\eta+\eta$ and $\xi=\hat\theta+\theta$ we arrive at \begin{align*} H^\ast(\mu,\nu) &=\cl\bigg[\inf_{\zeta,\xi\in\meas(\Omega)}\min_{\eta,\theta\in\meas(\Omega)}\Dm(\mu-\zeta+\eta,\nu-\xi+\theta)+W_1(\zeta,\eta)+W_1(\xi,\theta)\bigg]\\ &=\min_{\zeta,\xi,\eta,\theta\in\meas(\Omega)}\Dm(\mu-\zeta+\eta,\nu-\xi+\theta)+W_1(\zeta,\eta)+W_1(\xi,\theta)\,, \end{align*} where the closedness of the right-hand side follows as in Proposition~\ref{prop:seqLSCW} and the existence of minimizers follows via the direct method from the sequential weak-* lower semi-continuity of $W_1$ and $\Dm$ as well as the fact that the minimization may be restricted to $\|\zeta\|_\meas=\|\eta\|_\meas\leq\|\mu\|_\meas$ and $\|\xi\|_\meas=\|\theta\|_\meas\leq\|\nu\|_\meas$. Indeed, we may assume $\eta$ and $\zeta$ to be nonnegative (else the functional would be infinite) and to be singular, $\eta\perp\zeta$, since otherwise we can subtract their common part from both $\eta$ and $\zeta$ without changing the functional value. 
Then, however, we require $\zeta\leq\mu$, for otherwise we would have $\mu-\zeta+\eta\notin\measp(\Omega)$ and the functional would be infinite. The analogous statement holds for $\theta$ and $\xi$. Now abbreviate for $i=0,1$, with $h_0=\h$ and $h_1=\g$, \begin{equation*} F_i:C(\Omega)\to(-\infty,\infty],\quad F_i(\alpha)=-\int_\Omega h_i(\alpha)\,\d\rho_i+\iota_{\cl[h_i(\alpha)>-\infty]}(\alpha)\,. \end{equation*} By Lemma~\ref{lem:conjugateD} we have \begin{equation*} F_0^\ast(\rho)=\Dl(\rho_0,-\rho)\,, \qquad F_1^\ast(\rho)=\Dr(-\rho,\rho_1)\,. \end{equation*} Furthermore, by the conditions in Definition~\ref{def:LocalSimilarityMeasure} there exist $\alpha,\beta\in\Lip(\Omega)$ with $(\alpha(x),\beta(x))\in\mathrm{int}(B_{01}\cap(\dom \h\times\dom \g))$ for all $x\in\Omega$ (for instance take $\alpha\equiv\beta\equiv-\delta$ for $\delta>0$ small enough). Thus we have strong Fenchel duality \cite[p.\,201, Thm.\,1]{Luenberger69}, that is, \begin{align*} &\sup_{\alpha,\beta\in C(\Omega)}-\left[(F_0,F_1)(\alpha,\beta)+H(\alpha,\beta)\right]\\ =&\min_{\rho_0',\rho_1'\in\meas(\Omega)}\left[(F_0^\ast,F_1^\ast)(-\rho_0',-\rho_1')+H^\ast(\rho_0',\rho_1')\right]\\ =&\min_{\rho_0',\rho_1',\eta,\theta,\zeta,\xi\in\meas(\Omega)}\Dl(\rho_0,\rho_0')+\Dr(\rho_1',\rho_1) +\Dm(\rho_0'-\zeta+\eta,\rho_1'-\xi+\theta)+W_1(\zeta,\eta)+W_1(\xi,\theta)\\ =&\min_{\rho_0',\rho_1',\rho_0'',\rho_1'',\eta,\theta\in\measp(\Omega)} \Dl(\rho_0,\rho_0')+W_1(\rho_0'+\eta-\rho_0'',\eta)+\Dm(\rho_0'',\rho_1'')\\ & \qquad \qquad +W_1(\theta,\rho_1'+\theta-\rho_1'')+\Dr(\rho_1',\rho_1)\\ =&W_{\Dl,\Dm,\Dr}(\rho_0,\rho_1)\,, \end{align*} where in the last step we used $W_1(\rho_0'+\eta-\rho_0'',\eta)=W_1(\rho_0',\rho_0'')$ and $W_1(\theta,\rho_1'+\theta-\rho_1'')=W_1(\rho_1'',\rho_1')$ due to the fact that $W_1(\mu,\nu)=W_1(\mu+\rho,\nu+\rho)$ for any $\mu,\nu\in\measp(\Omega)$ and $\rho\in\meas(\Omega)$ such that $\mu+\rho,\nu+\rho\in\measp(\Omega)$.
Note that in the above calculation we have assumed the value of the optimization problem to be finite (else the $\min$ would have to be replaced by $\inf$) so that Fenchel duality automatically yields the existence of optimal $\rho_0',\rho_0'',\rho_1'',\rho_1'$. \end{proof} The predual formulation \eqref{eqn:predual} of the $W_{\Dl,\Dm,\Dr}$-discrepancy \eqref{eq:SandwichProblem} already looks very similar to a $W_{\h,\g,B}$-type formulation, \eqref{eq:GeneralizedPrimalProblem}. For the equivalence it remains to establish that the functions $\hB$, $\h$ and $\g$ as defined in \eqref{eqn:predualInducedH} are admissible in the sense of Definition \ref{def:admissibility} and that conversely, admissible choices $\hB$, $\h$ and $\g$ can indeed be induced via \eqref{eqn:predualInducedH} from local discrepancy integrands $\Cl$, $\Cm$ and $\Cr$ in the sense of Definition \ref{def:LocalSimilarityMeasure}. \begin{lemma}[Conversion of $c$ and $h$] \label{lem:chConversion} Let $c$ be a local discrepancy integrand in the sense of Definition \ref{def:LocalSimilarityMeasure}. Then $$h : \alpha \mapsto -[c(1,\cdot)]^\ast(-\alpha)$$ is an admissible function in the sense of Definition \ref{def:admissibility}, and $\alphamin = -c(0,1)$ (see \eqref{eqn:AlphaBetaMinA}-\eqref{eqn:AlphaBetaMinB}). Conversely, if $h$ is admissible, then \begin{equation} \label{eqn:cInduced} c : (m_0,m_1) \mapsto \begin{cases} m_0\,(-h)^\ast(-\frac{m_1}{m_0}) & \tn{ if } m_0 > 0,\, m_1 \geq 0, \\ m_1\,\lim_{z \to \infty} \big((-h)^\ast(-z)/z \big) & \tn{ if } m_0 = 0,\,m_1 > 0, \\ 0 & \tn{ if } m_0 = 0,\, m_1 = 0, \\ + \infty & \tn{ else,} \end{cases} \end{equation} is a local discrepancy integrand and $h = -[c(1,\cdot)]^\ast(-\cdot)$. \end{lemma} \begin{proof} For a given $c$, the induced $h$ is concave and upper semi-continuous due to the properties of the Legendre--Fenchel conjugate (conditions \ref{enm:convexity} and \ref{enm:wellposedness}).
Furthermore, condition~\ref{enm:positivity1} is a consequence of $c\geq0$ and $c(1,1)=0$ for all local discrepancy integrands $c$, while condition~\ref{enm:positivity} then follows from the strict positivity property~\ref{enm:strictPos} of Definition~\ref{def:LocalSimilarityMeasure} due to the conjugate subgradient theorem (see e.\,g.\ \cite[Prop.\,16.13]{BauschkeCombettes2011}). Finally, condition~\ref{enm:negativeMeasures} from Definition~\ref{def:admissibility} is implied by property~\ref{enm:domain} of Definition~\ref{def:LocalSimilarityMeasure}. The value of $\alphamin = -c(0,1)$ is given by Lemma \ref{lem:conjugateD}. Now let an admissible $h$ be given and consider the induced $c$. We need to show that it satisfies the properties in Definition~\ref{def:LocalSimilarityMeasure}. It is a straightforward exercise to show that property~\ref{enm:strictPos} is implied by conditions~\ref{enm:positivity1} and \ref{enm:positivity} on $h$. The positive one-homogeneity of $c$ follows by definition. Convexity is implied by this one-homogeneity together with the subadditivity \begin{multline*} c(m_0+n_0,m_1+n_1) =(m_0+n_0)(-h)^\ast\left(-\tfrac{m_1+n_1}{m_0+n_0}\right)\\ \leq(m_0+n_0)\left[\tfrac{m_0}{m_0+n_0}(-h)^\ast\left(-\tfrac{m_1}{m_0}\right)+\tfrac{n_0}{m_0+n_0}(-h)^\ast\left(-\tfrac{n_1}{n_0}\right)\right] =c(m_0,m_1)+c(n_0,n_1)\,, \end{multline*} where we have used convexity of $(-h)^\ast(-\cdot)$. The lower semi-continuity of $c$ on $(0,+\infty) \times [0,+\infty)$ is a direct consequence of the lower semi-continuity of $(-h)^\ast(-\cdot)$ and the continuity of $(m_0,m_1)\mapsto(m_0,\frac{m_1}{m_0})$. The function for $m_0=0$ and $m_1>0$ is just defined as the limit $m_0 \to 0$ for $m_1 > 0$, and lower semi-continuity in $(m_0,m_1) = (0,0)$ follows since $0$ is the global minimum of $c$ (this establishes property \ref{enm:cConv}). Property~\ref{enm:domain} is satisfied by definition of $c$.
Finally, it is a simple exercise to show that the $h$ induced by $c$ is in fact the original $h$. \end{proof} We can now state the equivalence relation between \eqref{eq:GeneralizedPrimalProblem} and \eqref{eq:SandwichProblem}. \begin{corollary}[Equivalence of formulations] \label{cor:equivalenceStatic} Let the local discrepancies $\Dl$, $\Dm$, and $\Dr$ be induced by the integrands $\Cl$, $\Cm$, and $\Cr$. Then $W_{\Dl,\Dm,\Dr}=W_{\h,\g,B}$ for \begin{gather*} \h(\alpha)=-[\Cl(1,\cdot)]^\ast(-\alpha)\,,\qquad \g(\beta)=-[\Cr(\cdot,1)]^\ast(-\beta)\,,\\ \gB(\alpha)=-[\Cm(\cdot,1)]^\ast(-\alpha)\,,\qquad \hB(\beta)=-[\Cm(1,\cdot)]^\ast(-\beta)\,,\\ B_{01}=\{(\alpha,\beta)\in\R^2\,|\,\alpha\leq\hB(-\beta)\} = \{(\alpha,\beta)\in\R^2\,|\,\beta\leq\gB(-\alpha)\} \,,\\ B_0 = [\alphamin,+\infty)\,, \qquad \alphamin = -\Cl(0,1)\,, \\ B_1 = [\betamin,+\infty)\,, \qquad \betamin=-\Cr(1,0)\,,\\ B = B_{01} \cap (B_0 \times B_1)\,, \end{gather*} and the triple $(\h,\g,B)$ is admissible in the sense of Definition \ref{def:admissibility}. Conversely, given admissible $(\h,\g,B)$ and $\hB$ describing $B$ we have $W_{\h,\g,B}=W_{\Dl,\Dm,\Dr}$ with $\Cl$, $\Cr$ and $\Cm$ being induced by $\h$, $\g$ and $\hB$ as given in \eqref{eqn:cInduced} (for the conversion of $\g$ into $\Cr$ the arguments $m_0$ and $m_1$ have to be swapped). $(\Dl,\Dm,\Dr)$ are local discrepancy measures in the sense of Definition \ref{def:LocalSimilarityMeasure}. \end{corollary} \begin{proof} The claim follows directly from Proposition \ref{prop:PrimalDualSandwich} and Lemma \ref{lem:chConversion}. \end{proof} The above identification of the different formulations allows one always to choose the more convenient one for analytical and numerical purposes. The following corollary makes use of this fact and proves a reduction of particular infimal convolution-type discrepancies via Proposition \ref{prop:modelReduction}, which is far from obvious from Definition~\ref{def:SandwichProblem}.
It can be stated more elegantly with an auxiliary Lemma. \begin{lemma}[Infimal convolution of local discrepancy measures] \label{lem:LocalConvolution} Let $D_0$, $D_1$ be two local discrepancy measures in the sense of Definition \ref{def:LocalSimilarityMeasure} with integrands $c_0$, $c_1$, and let $h_i= -[c_i(1,\cdot)]^*(-\cdot)$ for $i\in\{0,1\}$. Further, let $c$ be the discrepancy integrand induced by $h=h_0 \circ h_1$ via \eqref{eqn:cInduced} and $D$ be the corresponding discrepancy measure. Then, $D=D_0 \diamond D_1$. \end{lemma} \begin{proof} It is easy to see that $h=h_0 \circ h_1$ is indeed admissible in the sense of Definition \ref{def:admissibility} and therefore, by virtue of Lemma \ref{lem:chConversion} $c$ is an admissible local discrepancy integrand. With arguments as in Lemma \ref{lem:conjugateD} one can show that $D=D_0 \diamond D_1$ is equivalent to $c=c_0 \diamond c_1$. Consider now $m_0$, $m_1 > 0$. Introducing $g_1= -[c_1(\cdot,1)]^*(-\cdot)$, we find \begin{align*} (c_0 \diamond c_1)(m_0,m_1) & = \inf_{m \in \R} c_0(m_0,m) + c_1(m,m_1) = \sup_{\alpha \in \R} \underbrace{-[c_0(m_0,\cdot)]^*(-\alpha)}_{m_0 \cdot h_0(\alpha)} \underbrace{-[c_1(\cdot,m_1)]^*(\alpha)}_{m_1 \cdot g_1(-\alpha)} \\ & = \sup_{\substack{(\alpha,\beta) \in \R^2 \colon\\ \beta \leq g_1(-\alpha)}} m_0 \cdot h_0(\alpha) + m_1 \cdot \beta = \sup_{\substack{(\alpha,\beta) \in \R^2 \colon\\ \alpha \leq h_1(-\beta)}} m_0 \cdot h_0(\alpha) + m_1 \cdot \beta \\ & = \sup_{\substack{(\alpha,\beta) \in \R^2 \colon\\ \alpha \leq h_0(h_1(-\beta))}} m_0 \cdot \alpha + m_1 \cdot \beta = c(m_0,m_1)\,. \end{align*} In the second equality we have used the Fenchel--Rockafellar Theorem and that $h_0(0)=g_1(0)=0$ and both functions are continuous at $0$ (see Definition \ref{def:admissibility}). For $m_0<0$ or $m_1<0$ it is easy to verify that $(c_0 \diamond c_1)(m_0,m_1)=\infty$, and so is $c(m_0,m_1)$ since it is admissible. 
Finally, since $c$, $c_0$, and $c_1$ (and thus also $(c_0 \diamond c_1)$ by Remark \ref{rem:propertiesInfConv}\eqref{enm:metricInfConv}) are convex and lower semi-continuous, they must all be right-continuous in $m_0=0$ or $m_1=0$. Since $c$ and $(c_0 \diamond c_1)$ coincide on $(0,+\infty)^2$, this implies their equality also on $\{0\} \times [0,\infty) \cup [0,\infty) \times \{0\}$. \end{proof} \begin{corollary}[Reduction of infimal convolution formulation] \label{cor:reductionInfimalConv} Let the local discrepancies $\Dl$, $\Dm$, and $\Dr$ be induced by the integrands $\Cl$, $\Cm$, and $\Cr$, and let $\h$, $\g$, $\hB$, and $\gB$ be as in Corollary \ref{cor:equivalenceStatic}. Furthermore let $z\in(0,\infty]$, and let the extended discrete metric be defined by \eqref{eqn:discreteMetric}. \begin{itemize} \item If $\Cm(m_0,m_1)=z|m_0-m_1|$ for all $m_1>m_0 \geq 0$, then \begin{equation*} W_{\Dl,\Dm,\Dr} =W_{\Dl \diamond \Dm,D^d,\Dr} =\Dl \diamond \Dm\diamond W_1\diamond\Dr\,. \end{equation*} \item If $\Cm(m_0,m_1)=z|m_0-m_1|$ for all $0 \leq m_1<m_0$, then \begin{equation*} W_{\Dl,\Dm,\Dr} =W_{\Dl,D^d,\Dm \diamond \Dr} =\Dl\diamond W_1\diamond \Dm \diamond \Dr\,. \end{equation*} \end{itemize} \end{corollary} \begin{proof} Consider the first statement (the second follows analogously). The second equality follows from $W_1\diamond D^d\diamond W_1=W_1\diamond W_1=W_1$ (Remark \ref{rem:UnbalancedMeasures}). As for the first equality, by Corollary \ref{cor:equivalenceStatic} and Proposition \ref{prop:modelReduction} we have \begin{equation*} W_{\Dl,\Dm,\Dr} =W_{\h,\g,B} =W_{\h\circ\hB,\g,B(\tildealphamin,\betamin)}\,, \end{equation*} where we have used the notation from Corollary \ref{cor:equivalenceStatic} and Proposition \ref{prop:modelReduction} as well as the fact $\gB(\alpha)=\min\{\alpha,z\}$ for all $\alpha\geq0$. 
Now it is straightforward to check via Lemma \ref{lem:LocalConvolution} and Corollary \ref{cor:equivalenceStatic} that the equivalent formulation of $W_{\Dl \diamond \Dm,D^d,\Dr}$ is also $W_{\h\circ\hB,\g,B(\tildealphamin,\betamin)}$. \end{proof} Let us close by giving yet another, flow-based formulation. \begin{remark}[Flow formulation]\label{rem:flowFormulation} In Euclidean space $\R^n$, the following relation is known as Beckmann's problem \cite[Thm.\,4.6]{Santambrogio15}, \begin{equation*} W_1(\rho_0,\rho_1)=\min_{\phi\in\meas(\Omega)^n,\,\div \phi=\rho_1-\rho_0}\||\phi|\|_{\meas}\,, \end{equation*} where the divergence is taken in any open superset of $\Omega$ (cf.\ also \cite{LellmannKantorovichRubinstein2014}) in the distributional sense. As a consequence we have the alternative flow formulation \begin{multline*} W_{\Dl,\Dm,\Dr}(\rho_0,\rho_1)= \min_{\rho_0'',\rho_1''\in\measp(\Omega),\,\phi,\psi\in\meas(\Omega)^n}\Dl(\rho_0,\rho_0''-\div \phi)+\||\phi|\|_\meas\\+\Dm(\rho_0'',\rho_1'')+\||\psi|\|_\meas+\Dr(\rho_1''-\div \psi,\rho_1)\,, \end{multline*} where $\phi$ and $\psi$ have the interpretation of a mass flow field and can thus be used in applications to extract flow information. The divergence essentially arises via dualization of the Lipschitz constraint in \eqref{eqn:W1predual}, interpreted as a local constraint $|\nabla\alpha|\leq1$ almost everywhere (see Section~\ref{sec:variableSplitting}). \end{remark} \subsection{The inhomogeneous case} \label{sec:Inhomogeneous} For ease of exposition we restricted ourselves to the spatially homogeneous case in the previous sections. For the sake of completeness we shall here briefly comment on what changes if all integrands become space-dependent. 
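As a brief aside, the flow formulation of Remark~\ref{rem:flowFormulation} lends itself to a quick numerical sanity check (our own illustration, not part of the model): in one spatial dimension the divergence constraint forces $\phi=F_0-F_1$ with $F_i$ the cumulative distribution function of $\rho_i$, so for balanced measures $W_1$ is simply the integral of $|F_0-F_1|$.

```python
# One-dimensional sanity check of Beckmann's problem (our own sketch):
# for balanced measures on a uniform grid, the optimal flow is
# phi = F_0 - F_1, the difference of the cumulative distribution
# functions, and W_1 = int |F_0 - F_1| dt.

def w1_on_grid(rho0, rho1, dx):
    """W_1 between two balanced densities sampled on a uniform 1D grid."""
    assert abs(sum(rho0) - sum(rho1)) * dx < 1e-12, "needs equal total mass"
    flow, cost = 0.0, 0.0
    for m0, m1 in zip(rho0, rho1):
        flow += (m0 - m1) * dx   # running value of phi = F_0 - F_1
        cost += abs(flow) * dx   # accumulates ||phi||_meas = int |phi| dt
    return cost

# two unit Dirac-like masses at x = 0.25 and x = 0.75 on [0, 1]
n = 1000
dx = 1.0 / n
rho0, rho1 = [0.0] * n, [0.0] * n
rho0[250] = 1.0 / dx
rho1[750] = 1.0 / dx
print(round(w1_on_grid(rho0, rho1, dx), 6))  # 0.5, the distance between the masses
```

For unbalanced inputs the assertion fails, which is exactly the gap the discrepancies $\Dl$, $\Dm$, $\Dr$ are designed to close.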
In more detail, $\h$, $\g$, and $B$ in Definition~\ref{def:GeneralizedPrimalProblem} are now also allowed to depend on $x\in\Omega$, and the energy becomes \begin{equation*} E_{\h,\g,B}^{\rho_0,\rho_1}(\alpha,\beta) = \begin{cases} \int_\Omega \h(\alpha(x),x)\,\d\rho_0(x)\\ \;+ \int_\Omega \g(\beta(x),x)\,\d\rho_1(x)& \text{if }\alpha, \beta \in \Lip(\Omega)\text{ with }(\alpha(x),\beta(x)) \in B(x)\ \forall\, x \in\Omega\,,\\-\infty&\text{else.}\end{cases} \end{equation*} The spatial dependence demands two additional admissibility conditions in Definition~\ref{def:admissibility}: $\h$, $\g$, and $B$ must depend measurably on $x$ so that $E_{\h,\g,B}^{\rho_0,\rho_1}$ is well-defined, where $B(x)=B_{01}(x)\cap(B_0(x)\times B_1(x))$ for \begin{gather*} B_{01}(x)=\left\{(\alpha,\beta)\in\R^2\,\right|\left.\alpha\leq\hB(-\beta,x)\right\}=\left\{(\alpha,\beta)\in\R^2\,\right|\left.\beta\leq\gB(-\alpha,x)\right\}\,,\\ B_0(x)=\cl\{\alpha\in\R\,|\,\h(\alpha,x)>-\infty\}\,,\qquad B_1(x)=\cl\{\beta\in\R\,|\,\g(\beta,x)>-\infty\}\,. \end{gather*} Furthermore, \begin{equation*} \tilde{B}_0(x)=\{(\alpha,\beta)\in\R^2\,|\,\alpha\leq\h(-\beta,x)\}\,,\quad \tilde{B}_1(x)=\{(\alpha,\beta)\in\R^2\,|\,\beta\leq\g(-\alpha,x)\}\,,\text{ and }B(x) \end{equation*} have to be lower semi-continuous in $x$ (a multifunction $x\mapsto B(x)$ is said to be lower semi-continuous if the set $\{x\in\Omega\,|\,U\cap B(x)\neq\emptyset\}$ is open for any open set $U$) to ensure sequential \mbox{weak-*} lower semi-continuity of $W_{\h,\g,B}$. At first glance one might expect the stronger requirement of $\h$ and $\g$ being upper semi-continuous in $x$; however, due to the restriction of $\alpha$ and $\beta$ to the domains of $\h$ and $\g$ we essentially only need upper semi-continuity where $\h$ and $\g$ are finite, resulting in lower semi-continuity of $\tilde{B}_0$ and $\tilde{B}_1$. That this heuristic intuition is correct will become apparent from the model equivalence below.
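For illustration, one admissible inhomogeneous choice (our own example, consistent with the identity $\gB(\alpha)=\min\{\alpha,z\}$ used in the proof of Corollary~\ref{cor:reductionInfimalConv}) is a spatially varying total variation penalty:

```latex
% spatially varying TV-type integrand with weight \lambda
c(m_0,m_1,x)=\lambda(x)\,\lvert m_0-m_1\rvert\,,\qquad
\lambda:\Omega\to(0,\infty)\text{ lower semi-continuous.}
% A direct computation of the partial conjugate yields the truncation
h(\alpha,x)=-[c(1,\cdot\,,x)]^\ast(-\alpha)
=\begin{cases}\min\{\alpha,\lambda(x)\}&\text{if }\alpha\geq-\lambda(x)\,,\\
-\infty&\text{else,}\end{cases}
% so that \tilde B_0(x)=\{(\alpha,\beta)\,|\,\beta\leq\lambda(x),\
% \alpha\leq\min\{-\beta,\lambda(x)\}\}.
```

In this example the lower semi-continuity of $\lambda$ can be checked to translate directly into the lower semi-continuity of the multifunction $\tilde B_0$ required above.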
Likewise, the local discrepancy integrand in Definition~\ref{def:LocalSimilarityMeasure} may depend on $x$ as well, inducing a discrepancy \begin{equation*} D(\rho_0,\rho_1) = \int_\Omega c(\RadNik{\rho_0}{\rho}(x),\RadNik{\rho_1}{\rho}(x),x)\,\d\rho(x)\,. \end{equation*} Of course, in addition to the properties from Definition~\ref{def:LocalSimilarityMeasure}, $c$ is required to be measurable also in $x$. Furthermore it needs to be jointly lower semi-continuous in all its arguments, which can easily be seen to be a necessary condition for sequential weak-* lower semi-continuity of $D$ (its sufficiency is shown below). Under the above natural conditions, the equivalence between both models stays valid with slight proof variations. In particular, the proof of weak-* lower semi-continuity, Proposition~\ref{prop:LSCLocDis}, becomes more complicated, as more work would be required to apply \cite[Thm.\,2]{BouchitteValadier88} in the last step. Instead we can make use of \cite[Lemma\,3.7]{BoBu90} (which performs that additional work inside its proof). First, however, we require a small lemma which takes the role of \cite[Lemma\,3.6]{BoBu90}. \begin{lemma} $F:\meas(\Omega)^2\to(-\infty,\infty]$ is weakly-* lower semi-continuous if and only if $F_\varepsilon(\rho)=F(\rho)+\varepsilon\int_\Omega \binom{1}{1} \cdot\d\rho$ is so for any $\varepsilon>0$. \end{lemma} \begin{proof} This follows directly from the fact that $\varepsilon \int_\Omega \binom{1}{1} \cdot\d\rho$ is a weak-* continuous perturbation. \end{proof} \begin{proposition}[Lower semi-continuity] $D(\rho_0,\rho_1)= \int_\Omega c(\RadNik{\rho_0}{\rho}(x),\RadNik{\rho_1}{\rho}(x),x)\,\d\rho(x)$ is weakly-* lower semi-continuous on $\meas(\Omega)^2$.
\end{proposition} \begin{proof} \newcommand{\auxSet}{A} Due to the previous lemma it is sufficient to show weak-* lower semi-continuity of $D_\varepsilon(\rho_0,\rho_1) \allowbreak=\allowbreak \int_\Omega c_\varepsilon(\RadNik{\rho_0}{\rho}(x),\RadNik{\rho_1}{\rho}(x),x)\,\d\rho(x)$ with $c_\varepsilon(m_0,m_1,x)=c(m_0,m_1,x)+\varepsilon(m_0+m_1)$. Define the sets \begin{align*} \auxSet(x) & = \{u\in\R^2\,|\,u\cdot m\leq c_\varepsilon(m_0,m_1,x)\text{ for all }m=(m_0,m_1)\in\R^2\}\,, \\ \tn{and} \quad H & =\{u\in C(\Omega)^2\,|\,u(x)\in\auxSet(x)\text{ for all }x\in\Omega\} \end{align*} and consider a net $\rho^a=(\rho_0^a,\rho_1^a)$ converging weakly-* to $\rho=(\rho_0,\rho_1)$ in $\meas(\Omega)^2$. For an arbitrary $u\in H$ we have \begin{equation*} D_\varepsilon(\rho_0^a,\rho_1^a) =\int_\Omega c_\varepsilon\left(\RadNik{\rho_0^a}{|\rho^a|},\RadNik{\rho_1^a}{|\rho^a|},x\right)\,\d|\rho^a| \geq\int_\Omega u(x)\cdot\d\rho^a(x) \to\int_\Omega u(x)\cdot\d\rho\,. \end{equation*} Now note that $\auxSet(x)$ is lower semi-continuous in $x$ due to the lower semi-continuity of $c_\varepsilon$ and $c_\varepsilon(m_0,m_1,x)=\sup_{u\in\auxSet(x)}u_0m_0+u_1m_1$ \cite[Thm.\,17]{BoVa89} and that $\{u\in\R^2\,|\,|u|\leq\varepsilon\}\subset\auxSet(x)$ for all $x\in\Omega$. Thus we may apply \cite[Lemma\,3.7]{BoBu90} with $\mu=0$, $\lambda=\rho$, and $h=c_\varepsilon$, yielding \begin{equation*} \sup_{u\in H}\int_\Omega u(x)\cdot\d\rho =\int_\Omega c_\varepsilon\left(\RadNik{\rho_0}{|\rho|},\RadNik{\rho_1}{|\rho|},x\right)\,\d|\rho| =D_\varepsilon(\rho_0,\rho_1) \end{equation*} which concludes the proof. \end{proof} Another change is required in Remark~\ref{rem:coercivity}, where the coercivity estimate now turns into $D(\rho_0,\rho_1)\geq\tilde c(\measnrm{\rho_0},\measnrm{\rho_1})$ with $\tilde c(m_0,m_1)$ the convex envelope of $\min_{x\in\Omega}c(m_0,m_1,x)$.
Note that $\tilde c(m_0,m_1)$ is well-defined (due to the lower semi-continuity of $c$ in $x$ and the compactness of $\Omega$) and strictly positive for $m_0\neq m_1$. The final addition concerns the proof of Corollary \ref{cor:equivalenceStatic}, where we need to show that the lower semi-continuity of $\tilde{B}_0$, $\tilde{B}_1$, and $B$ in $x$ is equivalent to the lower semi-continuity of $\Cl$, $\Cm$, and $\Cr$ in all their arguments. This is true by \cite[Thm.\,17]{BoVa89}.
TITLE: Question regarding dividing group of people into ordered pair QUESTION [2 upvotes]: This is a very basic counting problem, but I couldn't quite understand the answer. From "A First Course in Probability" by Sheldon Ross: Example A football team consists of 20 offensive and 20 defensive players. The players are paired in groups of 2 for the purpose of determining roommates.... There are $\dfrac{40!}{(2!)^{20}}$ ways of dividing 40 players into 20 "ordered pairs" of two each. I tried a smaller set, say: Offensive = $\{a_1, a_2\}$, Defensive = $\{b_1, b_2\}$, so all ordered pairs are: $$a_1a_2, a_2a_1, b_1b_2, b_2b_1$$ $$a_1b_1, b_1a_1, a_1b_2, b_2a_1$$ $$a_2b_1, b_1a_2, a_2b_2, b_2a_2$$ So there are 12 pairs. Now if I use the formula given above, I get: $$\dfrac{4!}{(2!)^2} = \dfrac{4\cdot 3\cdot 2}{4} = 6$$ which is clearly incorrect. So my question is, should the formula given in the Example be $\dfrac{40!}{{2!}^{19}}$? REPLY [1 votes]: I think you are misunderstanding the statement "There are $\frac{40!}{(2!)^{20}}$ ways of dividing 40 players into 20 "ordered pairs" of two each." It means that the ordering of the pairs within a particular combination of pairs matters, not that the two players inside a pair are ordered (if they were, the factor $2!$ would not be divided out); i.e. in your case the possibilities would be $$(a_1,a_2),(b_1,b_2)$$ $$(b_1,b_2),(a_1,a_2)$$ $$(a_1,b_2),(b_1,a_2)$$ $$(b_1,a_2),(a_1,b_2)$$ $$(a_1,b_1),(a_2,b_2)$$ $$(a_2,b_2),(a_1,b_1)$$ If instead the ordering within each particular pair mattered, but not the ordering of the pairs in the set of pairs, then the formula would be $$\frac{40!}{20!}$$ where the denominator accounts for the ways of ordering the $20$ pairs. I hope this helps.
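To double-check the count on small cases, here is a short brute-force enumeration (our own sketch, not from the thread). It treats each division as an ordered list of unordered pairs, which is exactly what the factor $(2!)^{20}$ divides out:

```python
from itertools import permutations

def ordered_pairings(players):
    """All ways to split players into an ordered sequence of unordered
    pairs (pair 1 = first roommate pair, pair 2 = second, ...)."""
    seen = set()
    for p in permutations(players):
        # consecutive entries form a pair; frozenset makes each pair unordered
        seen.add(tuple(frozenset(p[i:i + 2]) for i in range(0, len(p), 2)))
    return seen

# 4 players: 4!/(2!)^2 = 6 ordered lists of unordered pairs
print(len(ordered_pairings(["a1", "a2", "b1", "b2"])))  # -> 6
```

For 6 players the same enumeration gives $6!/(2!)^3 = 90$, matching the general formula.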
TITLE: Set notation for specific cartesian product set? QUESTION [0 upvotes]: If I have a set $\{(1, H), (2, C), (3, F), (4, Z), (5, S), (6, L) \}$ is there any way to express this with set builder notation? If not, is there any other way to express this mathematically? REPLY [1 votes]: $\{(1,H),(2,C),(3,F),(4,Z),(5,S),(6,L)\} = \{\, x : x = (1,H) \text{ or } x = (2,C) \text{ or } x = (3,F) \text{ or } x = (4,Z) \text{ or } x = (5,S) \text{ or } x = (6,L) \,\}$
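Since the set is the graph of a function from $\{1,\dots,6\}$ to letters, one can also write it as $\{(n, f(n)) : n \in \{1,\dots,6\}\}$. A quick sketch of the same idea in Python (an illustration we add; the letter assignment is taken from the question):

```python
# The set from the question as a literal, and rebuilt from the
# underlying rule n -> f(n) via the graph-of-a-function view.
explicit = {(1, "H"), (2, "C"), (3, "F"), (4, "Z"), (5, "S"), (6, "L")}
built = set(zip(range(1, 7), "HCFZSL"))
print(explicit == built)  # -> True
```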
TITLE: Does it have bounded variation on $\mathbb{R}$? QUESTION [1 upvotes]: Suppose $$f(x)= x+\sin x.$$ Does this function have bounded variation on $\mathbb{R}$? Please help me, thank you. REPLY [2 votes]: Hint: The function is monotonic. REPLY [2 votes]: Yes; its derivative is bounded on the entire real line, $0\le f'\le 2$, so $f$ is of bounded variation on every bounded interval.
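As a numerical sanity check (our own sketch, not part of the thread): since $f' = 1+\cos x \ge 0$, $f$ is nondecreasing, so its variation over any bounded interval $[a,b]$ telescopes to $f(b)-f(a)$:

```python
import math

def variation(a, b, n=100_000):
    """Grid estimate of the total variation of f(x) = x + sin x
    over [a, b]: sum of |f(x_{i+1}) - f(x_i)| on a fine partition."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    fs = [x + math.sin(x) for x in xs]
    return sum(abs(fs[i + 1] - fs[i]) for i in range(n))

# monotonicity makes the sum telescope to f(b) - f(a)
a, b = -10.0, 10.0
print(abs(variation(a, b) - ((b + math.sin(b)) - (a + math.sin(a)))) < 1e-6)  # -> True
```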
TITLE: How can a "bridge path" in an undirected graph be defined correctly/mathematically precisely? QUESTION [1 upvotes]: How would you define an undirected graph containing a path solely of bridges like: Connected Subgraph<->*<->*<->*<->* (* node; <-> undirected edge) Alternatively: Connected Subgraph<->*<->*<->*<->Connected Subgraph Here a path of bridges is a sequence of nodes of degree two, starting at a node of higher degree and ending at a node of either degree > 2 or degree one. All edges "inside" (imprecise) the path are bridges. Somehow it will also be necessary to ensure that all nodes in a bridge path belong to the same path... Is there a known precise definition of such a structure, in preferably fewer words? REPLY [1 votes]: I do not think there is existing terminology for a "chain of bridges". There is a related notion of ears (quoted here from Wikipedia): In graph theory, an ear of an undirected graph $G$ is a path $P$ where the two endpoints of the path may coincide, but where otherwise no repetition of edges or vertices is allowed, so every internal vertex of $P$ has degree two in $G$. We can take inspiration from this definition, but probably shouldn't build on it, since ears are in practice only used in cases where the two endpoints are in the same connected component. One possible wording of a definition is: A $u$-$v$ path $P$ in a graph $G$ is a bridge path if $P$ is the unique $u$-$v$ path in $G$, every internal vertex of $P$ has degree $2$ in $G$, and $P$ is not contained in any longer path with these properties.
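The degree and bridge conditions in such a definition are easy to check mechanically. Below is a small self-contained Python sketch (ours; the toy graph and names are made up for illustration) that tests whether the internal vertices of a candidate path have degree $2$ and whether every edge on it is a bridge, using the fact that an edge is a bridge iff removing it disconnects its endpoints:

```python
from collections import deque

def connected(adj, s, t, banned=None):
    """BFS reachability from s to t, optionally ignoring one edge."""
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if banned is not None and {u, v} == banned:
                continue  # pretend this edge has been removed
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

def is_bridge(adj, u, v):
    """An edge is a bridge iff removing it disconnects its endpoints."""
    return not connected(adj, u, v, banned={u, v})

# toy graph: triangle a-b-c, chain c-d-e-f, triangle f-g-h
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("d", "e"),
         ("e", "f"), ("f", "g"), ("g", "h"), ("f", "h")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

path = ["c", "d", "e", "f"]  # candidate bridge path
internal_deg_2 = all(len(adj[x]) == 2 for x in path[1:-1])
all_bridges = all(is_bridge(adj, path[i], path[i + 1]) for i in range(len(path) - 1))
print(internal_deg_2, all_bridges)  # -> True True
```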
TITLE: How to correctly understand submanifolds? QUESTION [5 upvotes]: First a disclaimer: the particular interest that led me to this is the use of submanifolds in General Relativity, in particular spacelike hypersurfaces. This idea, used a lot in GR, led me to seek a better understanding of submanifolds. So, the point is that up to now my understanding of submanifolds has been quite "informal", mostly based on examples, and most examples are something like: pick a manifold $M$, a chart $(x,U)$, and hold some coordinate functions constant. This defines a submanifold. Now this is not a good understanding of the subject. So searching for a formal understanding I've tried Spivak's book. He actually defines four things: Definition 1: A differentiable function $f : M\to N$ is called an immersion if the rank of $f$ is $\dim M$. Definition 2: A subset $M_1\subset M$ together with a differentiable structure that need not be inherited from $M$ is called an immersed submanifold if the inclusion $i : M_1\to M$ is an immersion. Definition 3: An embedding is an immersion $f: M\to N$ which is injective and a homeomorphism over its image. Definition 4: A submanifold of $M$ is an immersed submanifold $M_1$ such that the inclusion $i : M_1\to M$ is an embedding. Other authors do things differently. For example, Sean Carroll in his Spacetime and Geometry says: Consider an $n$-dimensional manifold $M$ and an $m$-dimensional manifold $S$, with $m\leq n$, and a map $\phi : S\to M$. If the map $\phi$ is both $C^\infty$ and one-to-one, and the inverse $\phi^{-1} : \phi(S)\to S$ is also $C^\infty$, then we say that the image $\phi(S)$ is an embedded submanifold of $M$. If $\phi$ is one-to-one locally but not necessarily globally, then we say that $\phi(S)$ is an immersed submanifold of $M$. When we speak of "submanifolds" without any particular modifier, we are imagining that they are embedded.
The major difference here is: for Spivak, submanifolds are subsets obeying certain properties (that is what I would have expected), while for Carroll they are other manifolds which we map into subsets, which seems quite odd. Strangely, I've read some authors saying that this is one concept that is really done differently by each author. What I find confusing is that the idea of a "sub-structure" in general is quite straightforward. In Algebra, defining subgroups, vector subspaces and so forth is quite easily done: it is just a subset with a straightforward property that makes it "inherit" the structure of the first set. The same happens with topology when defining topological subspaces with the relative topology. Submanifolds seem different. One author resorts to "external" structure, namely, another manifold. Spivak also says that the "submanifolds" might have another differentiable structure. This all confuses me. For example: what if we have a subset and want to inherit the structure of the original manifold? After all that is what is done in all those examples I've mentioned. How to correctly understand submanifolds, the various definitions and the relations between them? Why is this a good definition of submanifold? How to truly understand why this is the correct definition of a sub-structure for manifolds? REPLY [9 votes]: First of all, your definition 1 is a bit faulty: you should say that $rank(df)_x$ is constant, equal to $m=\dim(M)$, at every point $x\in M$. With this in mind, these are equivalent definitions. However, I do not like either one of these definitions, and for several reasons. Spivak's definition - because it depends on a nontrivial theorem (the immersion theorem), while a definition this basic should not depend on anything nontrivial. Also, for the reason that you stated. More importantly, I do not like both definitions - because they utterly fail in other, closely related situations.
For instance, if I were to define the notion of a topological submanifold in a topological manifold along these lines, Spivak's definition will fail immediately (what is the rank of the derivative if I do not have any derivatives to work with?); Carroll's definition will fail because it will yield in some cases rather unsavory objects, like Alexander's horned sphere in the 3-space. The same happens if I were to use triangulated manifolds and triangulated submanifolds, algebraic (sub)varieties and analytic (sub)varieties. Here is the definition that I prefer. First of all, what are we looking for in an $n$-dimensional manifold $N$ (smooth or not): We want something which is locally isomorphic (in whatever sense of the word isomorphism) to an $n$-dimensional real vector space (no need for particular coordinates, but if you like, just $R^n$). Then an $m$-dimensional submanifold should be a subset which locally looks like an $m$-dimensional vector subspace in an $n$-dimensional vector space. This is our intuition of a submanifold in any category (smooth, topological, piecewise-linear, holomorphic, symplectic, etc) we work with. Once you accept this premise, the actual definition is almost immediate: Definition. Let $N$ be a smooth $n$-dimensional manifold. A subset $M\subset N$ is called a smooth $m$-dimensional submanifold if for every $x\in M$ there exists an (open) neighborhood $U$ of $x$ in $N$ and a diffeomorphism $\phi: U\to V\subset R^n$ ($V$ is open) such that $\phi(M\cap U)= L\cap V$, where $L$ is an $m$-dimensional linear subspace in $R^n$. (If you like coordinates, assume that $L$ is given by the system of equations $y_1=\dots=y_{n-m}=0$.) This is completely intrinsic. Next, you prove a lemma which says that such $M$ has a natural structure of an $m$-dimensional smooth manifold with topology equal to the subspace topology and local coordinates near points $x\in M$ given by the restrictions $\phi|(U\cap M)$.
Then you prove that with this structure, $M$ satisfies the other two definitions that you know. Remark. Note that this definition will work almost verbatim if I were to deal with topological manifolds: I would just replace "a diffeomorphism" with "a homeomorphism". If I were to work with, say, complex (i.e. holomorphic) manifolds, I would replace $R^n$ with $C^n$ (of course), use complex vector subspaces and replace "diffeomorphism" with "a biholomorphic map". And so on. Now, to the question of why it is so much more complicated than the concept of a subgroup, a submodule, or any other algebraic concept you can think of. This is because manifolds have a much richer structure. To begin with, they are topological spaces. (Notice that every submanifold is equipped with the subspace topology, so this has to be built in.) Then, the notion of vector spaces has to be used at some point. Next, there is the "local" thing (local charts)....
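To see the preferred definition in action (an illustration added here, not part of the original answer), take $N = R^2$ and $M = S^1$ the unit circle. Near the point $(1,0)$ choose $U = \{(x,y) : x > 0\}$ and the polar-type chart $$\phi : U \to V, \qquad \phi(x,y) = \left(\sqrt{x^2+y^2}-1,\ \arctan(y/x)\right),$$ which is a diffeomorphism onto its image, with smooth inverse $(s,\theta)\mapsto((1+s)\cos\theta,(1+s)\sin\theta)$, and $$\phi(M\cap U) = (\{0\}\times R)\cap V = L\cap V$$ for the line $L = \{0\}\times R$. Repeating this at every point of $S^1$ (rotating the chart) exhibits $S^1$ as a smooth $1$-dimensional submanifold of $R^2$ in exactly this sense.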
TITLE: Prove the following integral identity. QUESTION [1 upvotes]: Consider the integrable functions $f, g: [a,b] \rightarrow \mathbb{R}$. Prove that if $$\hspace{5cm} g(x) \ge 0 \hspace{3cm} \forall x \in [a, b]$$ and $$m = \inf(f)$$ $$M = \sup(f)$$ there exists a $c \in [m, M]$ such that: $$\int_a^b f(t)g(t) dt = c \cdot \int_a^b g(t)dt$$ When I think about it, it doesn't seem so far-fetched. If we have two functions on the interval $[a, b]$, then it obviously seems reasonable that there exists a $c_1 \in [m, M]$ such that $$f(x) = c_1,$$ with $x \in [a,b]$. So we would have: $$f(x)g(x) = c_1 g(x)$$ We could find a $c_k$ for every point $x \in [a, b]$, so that means that we can integrate the above relation and get: $$\int_a^b f(t) g(t)dt = c \cdot \int_a^b g(t)dt$$ where $c$ is some sort of linear combination (I think, I'm not sure) of all of the $c_k$'s that I used for every point in the interval $[a, b]$. What's a better, more concise proof of this identity than the weird and incomplete one that I came up with? REPLY [1 votes]: The easiest path to my mind is to say that since $mg(x)\le f(x)g(x)\le Mg(x)$ for $x\in(a,b)$ then $$m\int_a^bg(x)dx\le\int_a^bf(x)g(x)dx\le M\int_a^bg(x)dx$$ On division we get $$m\le\frac{\int_a^bf(x)g(x)dx}{\int_a^bg(x)dx}=c\le M$$ Multiplying back we get $$\int_a^bf(x)g(x)dx=c\int_a^bg(x)dx$$ Notice that if $\int_a^bg(x)dx=0$ this method of proof is invalid, but we can see from the first inequality that $\int_a^bf(x)g(x)dx=0$ anyhow, so any $c\in[m,M]$ would satisfy the conclusion of the theorem.
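A quick numerical illustration of this argument (our own sketch, not from the thread), with $f(x)=\sin x$ and $g(x)=x^2\ge 0$ on $[0,2]$: the ratio $c=\int_a^b fg \,/ \int_a^b g$ indeed lands in $[m,M]$:

```python
import math

# f(x) = sin x, g(x) = x^2 >= 0 on [0, 2]; midpoint-rule integrals
a, b, n = 0.0, 2.0, 200_000
dx = (b - a) / n
xs = [a + (i + 0.5) * dx for i in range(n)]
f = [math.sin(x) for x in xs]
g = [x * x for x in xs]
int_fg = sum(fi * gi for fi, gi in zip(f, g)) * dx
int_g = sum(g) * dx

c = int_fg / int_g          # the g-weighted mean value of f
m, M = min(f), max(f)       # inf/sup of f on the grid
print(m <= c <= M)  # -> True
```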
\chapter{Conclusions and Outreach} \label{Chapter4} In this chapter we want to highlight the important features of this work, as well as give some insights into possible future work and research lines. \lhead{Chapter 4. \emph{Conclusions and Outreach}} \section{Outreach: four qubits} Throughout this work we have calculated the asymptotic rates for the distribution $p(\alpha,\beta,\gamma|\psi)$ for different entanglement classes. To that end we have used two approaches: the computation via the Schur transform and the Louck polynomials, which is rather complicated in the general case, and the \emph{covariant-to-state} approach, which gives us an explicit formula for the ratio $p(\alpha,\beta,\gamma|\psi_1)/p(\alpha,\beta,\gamma|\psi_2)$ by calculating the ratio $\braket{\Phi_{\psi_1}}{\Phi_{\psi_1}}/\braket{\Phi_{\psi_2}}{\Phi_{\psi_2}}$. The asymptotics are not easily calculated and involve non-trivial maximization problems. These two approaches can in principle be extended to four or more qubit systems. The drawback is that, as one may expect, the complexity increases vastly if we add another qubit; for example, the entanglement classes are now 9 \emph{families}, and it is not clear how to organize them into a hierarchy (separable and entangled states can belong to the same family) \cite{Verstraete_2002}. Additionally, the generators of the algebra of covariants, which are given in \cite{Briand4}, are 170 covariants. They can be calculated by means of a computer search using \emph{transvectants} and guided by the corresponding Hilbert multivariate series (with $u_1=u_2=u_3=u_4=u$ for simplicity) \begin{equation} \dfrac{P(u,t)}{(1-tu^2)(1-tu^4)(1-t^2)(1-t^2u^2)^2(1-t^2u^4)^3(1-t^4)(1-t^4u^2)(1-t^4u^4)(1-t^6)}, \end{equation} where $P(u,t)$ is a polynomial of degree 20 in $t$ and $u$ which we will not write here but which can be found in \cite{Briand4}. 
Nevertheless, as we have mentioned before, not all the covariants are necessary for an entanglement classification. Advances in this classification can be seen in \cite{Zimmerman}, where the authors give a smaller subset of \emph{covariant vectors} and attempt to classify what they called \emph{nilpotent orbits}, which are nothing more than orbits in which all the invariants vanish. See also \cite{Chen2,Luque} and a review \cite{Borsten_2012} with interesting relations to black-hole entropy. \\ \\ Regarding proposition \ref{prop1} and theorem \ref{teo2} for four qubits, we have so far verified them numerically for the first 47 of the 170 covariants in \cite{Briand4}, taking into account some additional restrictions given by the syzygies. The complete set is ongoing work. This has led us to think that proposition \ref{prop1} may be general for any $n$ qubit system. However, to make this assertion into a formal proof seems rather impractical, as does the \emph{covariant-to-state} approach in this case. For \emph{qudit} systems it is not yet clear how the relation between \emph{fundamental tuples} of Young diagrams can be constructed from the covariants. \section{Conclusions} In this work we have calculated the asymptotic rates $\phi(\bar{\alpha},\bar{\beta},\bar{\gamma}|\psi)$ of the probability $p(\alpha,\beta,\gamma|\psi)$ in different regions of the compatibility polytope. To calculate the probability $p(\alpha,\beta,\gamma|\psi)$ we employed two techniques. The first technique involves taking the $n$-th tensor product of the state, writing it explicitly, and then applying the Schur transformation. This approach involves heavy algebra with Louck and Han-Eberlein polynomials, which we adapt to fit our problem. 
The second technique is what we called the \emph{covariant-to-state} approach and involves $\vec{n}$-th powers of the covariants to describe a state $\ket{\Phi_{\psi}}\in V_\lambda^{d}$ in the Wedderburn decomposition of $(\mathbb{C}^{2}\otimes\mathbb{C}^{2}\otimes\mathbb{C}^{2})^{\otimes n}$, a technique we consider a novelty and one that can bring more geometrical/algebraic insight into entanglement. With this approach we were able to calculate explicitly the inner product ratio $\braket{\Phi_{\psi_1}}{\Phi_{\psi_1}}/\braket{\Phi_{\psi_2}}{\Phi_{\psi_2}}$ for the GHZ and W entanglement classes on the facets of the polytope. For the W class we calculated its asymptotics in the $\alpha+\beta+\gamma=n$ plane as well as an expression for general $\alpha,\beta,\gamma$, and for the GHZ class the rates at the facet defined by $\bar{\alpha}=1/2$. On the facets of the polytope where $g_{\alpha\beta\gamma}=1$, we show that the rate (relative rate) can be written as a convex combination of the rates at the vertices of the polytope that lie in the facet. Furthermore, for the W class the rates at the vertices are given in terms of LU invariants relative to each vertex. For the GHZ class we found that the rates at the vertices are obtained as the asymptotic expression of LU invariants, which may not be LU invariant themselves, for example \begin{equation} \braket{B_{200}}{B_{200}}\neq\lim\limits_{n\to\infty}\dfrac{\braket{B_{200}^{n/2}}{B_{200}^{n/2}}^2}{n}. \end{equation} However, this result coincides with the asymptotic rate calculated by making the Schur transform directly and expressing the probability $p(\alpha,\beta,\gamma|\psi)$ as a non-trivial sum over Louck polynomials, whose linearisation coefficients are so far unknown, making the general problem practically intractable. 
We have also found a relation in the case of three qubits (and probably extendible to $n$ qubit systems) between the Kronecker coefficients and a set of fundamental Young frames constructed from the covariants. This result is summarized in theorem \ref{teo2} and basically consists in counting the number of solutions to a given set of equations (see proposition \ref{prop1}). This relation also showed us, using the Keyl-Werner theorem, a correspondence in the asymptotic limit between the convex regions in the entanglement polytope and the regions where $g_{\alpha,\beta,\gamma}\neq0$ using the restrictions from theorem \ref{teo2}. We successfully verified our results by comparing to what is known about Kronecker coefficients in the literature. \\ \\ For future work we would like to see whether these results hold in the case of four or more qubits. Although, as mentioned, the dimension of the problem grows exponentially with the number of qubits, we should still be able to obtain some insight into the entanglement distribution in these systems. We would also like to see how to extend our results to values of the Kronecker coefficient larger than one. That is, we would like to investigate the bulk of the polytope, especially the GHZ entanglement class, in order to generalize our results. We hope that by doing this, the asymptotic rate will be written as a convex combination of the rates at the vertices of the polytope. Another interesting variation of the problem would be to study identical particle systems, where now the Hilbert space changes and thus the covariants must change. Finally, one could also study systems of qudits, for example two qubits and a qutrit, and see how this asymmetry in the dimensions of the Hilbert space affects entanglement classification.
TITLE: Given $X×Y$ Hausdorff. Show that $X$ is Hausdorff. QUESTION [0 upvotes]: Given $X×Y$ Hausdorff. Show that $X$ is Hausdorff. Assume $x_1≠x_2$ in $X$. Then $(x_1,y_0)≠(x_2,y_0)$ for some $y_0∈Y$. Then there exist disjoint open neighborhoods in $X×Y$. As those neighborhoods are open there exist (disjoint) basis elements of $X×Y$ such that: $(x_1,y_0)∈U_1×V_1$ and $(x_2,y_0)∈U_2×V_2$ where $U_1,U_2$ are open. If I can show that $U_1,U_2$ are disjoint, then I'm done, but I don't see how. Any hints? REPLY [4 votes]: You need to require that $Y$ be nonempty to draw any conclusions about $X$ from properties of $X\times Y$. If $Y\neq \varnothing$, it is frequently useful to note that one can embed $X$ as a subspace in the product: for any $y_0 \in Y$, the map $$\iota \colon X \to X\times Y;\quad \iota(x) = (x,y_0)$$ is continuous, since both of its components are, and its inverse is the restriction of a coordinate projection, hence also continuous. Thus $\iota$ is an embedding. Hence $X$ has all properties that $X\times Y$ has and that are inherited by subspaces, such as being $T_0$, $T_1$, Hausdorff, $T_3$, $T_{3\frac{1}{2}}$ and more. If $Y$ is a $T_1$ space (more generally, if $Y$ contains a point $y_1$ such that $\{y_1\}$ is closed), then $X$ embeds as a closed subspace into $X\times Y$, and hence also has all properties of $X\times Y$ that are inherited by closed subspaces, such as being $T_4$ or complete (in case of uniform spaces). Of course we can also see that $X$ must be Hausdorff if $X\times Y$ is and $Y\neq \varnothing$ in the way you went: $$\varnothing = (U_1 \times V_1) \cap (U_2 \times V_2) = (U_1\cap U_2) \times (V_1 \cap V_2),$$ and since $y_0 \in V_1 \cap V_2$, it follows that $U_1 \cap U_2 = \varnothing$.
\begin{document} \maketitle \begin{abstract} When the body gets infected by a pathogen or receives a vaccine dose, the immune system develops pathogen-specific immunity. Induced immunity decays in time and years after recovery/vaccination the host might become susceptible again. Exposure to the pathogen in the environment boosts the immune system thus prolonging the duration of the protection. Such an interplay of within host and population level dynamics poses significant challenges in rigorous mathematical modeling of immuno-epidemiology. The aim of this paper is twofold. First, we provide an overview of existing models for waning of disease/vaccine-induced immunity and immune system boosting. Then a new modeling approach is proposed for SIRVS dynamics, monitoring the immune status of individuals and including both waning immunity and immune system boosting. We show that some previous models can be considered as special cases or approximations of our framework.\\ \ \\ \textit{KEYWORDS:} {Immuno-epidemiology;\, Waning immunity;\, Immune status;\, Boosting;\, Physiological structure;\, Reinfection;\, Delay equations;\, Vaccination}\\ \ \\ \textit{AMS Classification:} {92D30;\, 35Q91;\, 34K17} \end{abstract} \section{Introduction} Models of SIRS type are a traditional topic in mathematical epidemiology. Classical approaches present a population divided into susceptibles (S), infectives (I) and recovered (R), and consider interactions and transitions among these compartments \cite{Brauer2001}. Susceptibles are those hosts who either did not contract the disease in the past or lost immunity against the disease-causing pathogen. When a susceptible host gets in contact with an infective one, the pathogen can be transmitted from the infective to the susceptible and with a certain probability the susceptible host becomes infective himself. 
After pathogen clearance the infective host recovers and becomes immune for some time, afterward he possibly becomes susceptible again (in certain cases one can talk of life-long immunity). The model can be extended by adding vaccination. Vaccinees (V) are protected from infection for some time, usually shorter than naturally infected hosts.\\ \ \\ From the in-host point of view, immunity to a pathogen is the result of either active or passive immunization. The latter is a transient protection due to the transmission of antibodies from the mother to the fetus through the placenta. The newborn is thus immune for several months after birth \cite{McLean1988a}. Active immunization is either induced by natural infection or can be achieved by vaccine administration \cite{Siegrist2008,KubyImmBook}. \indent Let us first consider the case of natural infection. A susceptible host, also called \textit{naive host}, has a very low level of specific immune cells for a pathogen (mostly a virus or a bacterium). The first response to a pathogen is nonspecific, as the innate immune system cannot recognize the physical structure of the pathogen. The innate immune response slows down the initial growth of the pathogen, while the adaptive (pathogen-specific) immune response is activated. Clonal expansion of specific immune cells (mostly antibodies or CTL cells) and pathogen clearance follow. The population of pathogen-specific immune cells is maintained for long time at a level that is much higher than in a naive host. These are the so-called \textit{memory cells} and are activated in case of secondary infection (see Figure \ref{Fig:introfig1}, adapted from \cite{BarbarossaRostJoMB}.). Memory cells rapidly activate the immune response and the host mostly shows mild or no symptoms \cite{Antia2005}. \indent Each exposure to the pathogen might have a boosting effect on the population of specific memory cells. 
Indeed, the immune system reacts to a new exposure as it did during primary infection, thus yielding an increased level of memory cells. Though persisting for a long time after pathogen clearance, the memory cell population slowly decays and in the long run the host might lose his pathogen-specific immunity \cite{Wodarz2007}. \indent Vaccine-induced immunity works in a similar way to immunity induced by natural infection. Agents contained in vaccines resemble, in a weaker form, the disease-causing pathogen and force a specific immune reaction without leading to the disease. If the vaccine is successful, the host is immunized for some time. Vaccinees experience immune system boosting and waning immunity, just as hosts recovered from natural infection do. In general, however, disease-induced immunity induces a much longer lasting protection than vaccine-induced immunity does \cite{Siegrist2008}. \begin{figure}[!] \centering \includegraphics[width=0.9\columnwidth]{memory_cells_vaccine.eps} \caption{Level of pathogen-specific immune cells with respect to time. The solid line represents the case of natural infection, the dotted line represents the immune status of a vaccinated host. Generation of memory cells takes a few weeks: once primary infection (respectively, vaccination) has occurred, the adaptive immune system produces a high number of specific immune cells (clonal expansion). After pathogen clearance, specific immune cells (memory cells) are maintained for years at a level that is much higher than in a naive host. Memory cells are activated in case of secondary infection.} \label{Fig:introfig1} \end{figure} \noindent Waning immunity might be one of the factors which cause recurrent outbreaks of infectious diseases such as measles, chickenpox and pertussis, even in highly developed regions. On the other hand, immune system boosting due to contact with infectives prolongs the protection duration. 
In a highly vaccinated population there are a lot of individuals with vaccine-induced immunity and few infection cases, as well as many individuals with a low level of immunity. In other words, if a large portion of the population gets the vaccine, there are very few chances for exposure to the pathogen and consequently for immune system boosting in protected individuals.\\ \ \\ In order to understand the role played by waning immunity and immune system boosting in epidemic outbreaks, several mathematical models have been proposed in the recent past. Few of these models describe only in-host processes during and after the infection \cite{Wodarz2007,Heffernan2008}. Many more models, formulated in terms of ordinary differential equations (ODEs), consider the problem only at population level, defining compartments for individuals with different levels of immunity and introducing transitions between these compartments \cite{Dafilis2012,Heffernan2009}. Vaccinated hosts or newborns with passive immunity are often included in the model equations, and waning of vaccine-induced or passive immunity is observed \cite{Rouderfer1994,Mossong1999,Glass2003b,Grenfell2012,Lavine2011,Arino2006,Mossong2003}. \indent To describe the waning immunity process alone, authors have sometimes chosen delay differential equation (DDE) models with constant or distributed delays \cite{Kyrychko2005,Taylor2009,Blyuss2010,Bhat2012,Belair2013}. The delay represents the average duration of the disease-induced immunity. However, neither a constant nor a distributed delay allows for the description of immune system boosting. \indent Models which include partial differential equations (PDEs) mostly describe an age-structured population \cite{mclean1988,Katzmann1984,Rouderfer1994} and consider pathogen transmission among the different age groups (newborns, children, pupils, adults, \ldots). 
Rare examples suggest a physiologically structured approach with populations structured by the level of immunity, coupling within-host and between-hosts dynamics \cite{Martcheva2006,BarbarossaRostJoMB}.\\ \ \\ The goal of the present book chapter is twofold. On the one hand, we found it necessary to provide a comprehensive overview of previously published models for waning of disease/vaccine-induced immunity and immune system boosting (Sect.~\ref{sec:overview}). On the other hand, in Sect.~\ref{sec:framework} we propose a new modeling framework for SIRVS dynamics, monitoring the immune status of individuals and including both waning immunity and immune system boosting. \section{Mathematical models for waning immunity and immune system boosting} \label{sec:overview} In the following we provide an overview of previous mathematical models for waning immunity and immune system boosting. We shall classify these models according to their mathematical structure (systems of ODEs, PDEs or DDEs). \subsection{Systems of ODEs} Mossong and coauthors were among the first to suggest the inclusion of individuals with waning immunity in classical SIRS systems \cite{Mossong1999}. Motivated by the observation that measles epidemics can occur even in highly vaccinated populations, the authors set up a model to study the waning of vaccine-induced immunity and failure of seroconversion as possible causes for recurrent outbreaks. Their compartmental model includes hosts with the so-called ``vaccine-modified measles infection'' (VMMI) which can occur in people with some degree of passive immunity to the virus, including those previously vaccinated. Assuming that not all vaccinees are protected from developing VMMI, the authors classify vaccinees into three groups: immediately susceptible to VMMI (weak response), temporarily protected hosts who become susceptible to VMMI due to waning of vaccine-induced immunity (intermediate response), and permanently protected from VMMI (strong response). 
Infection occurs due to contact with infectious individuals (both regular measles infection and VMMI). The resulting compartmental model includes waning of vaccine-induced immunity but not of disease-induced immunity, nor immune system boosting. Similar to McLean and Blower \cite{McLean1993}, Mossong et~al. define a parameter $\phi$ to describe the impact of the vaccine: if $\phi<1$, then vaccine failure is possible. Analytical results in \cite{Mossong1999} show that the main effect of VMMI is to increase the overall reproduction number of the infection.\\ \ \\ Inspired by Mossong's work, in 2003-2004 Glass, Grenfell and coauthors \cite{Glass2003,Glass2004b,Glass2004} proposed modifications and extensions of the system in \cite{Mossong1999}. The basic model is similar to the ODE system in \cite{Mossong1999}, with a group of subclinical cases which carry the pathogen without showing symptoms \cite{Glass2003b}. In addition, the distribution of antibody levels in immune hosts (included in the ODEs coefficients) and immune system boosting are introduced: the average antibody level in an immune host increases due to contact with infective or subclinical hosts. This model was used to fit measles data in England \cite{Glass2004}. In \cite{Glass2004b} the basic model was extended to consider measles transmission in a meta-population with $N$ patches.\\ \indent Immune system boosting in vaccinees was further studied in \cite{Grenfell2012}. In this work two models are introduced. In the first one vaccinees are separated from non-vaccinated hosts. Both groups of individuals are classified into susceptible, infective and immune, but in contrast to the models in \cite{Glass2003,Glass2004b,Glass2004,Mossong1999}, there is no compartment for subclinical cases. Non-vaccinated hosts do not undergo immune system boosting. 
For vaccinated hosts the authors include a so-called ``self-boosting'' of the vaccine, so that contact with infectives moves susceptible vaccinees to the immune vaccinated compartment. The second model extends the first one with a new compartment for hosts with waning immunity (W). These can receive immune system boosting due to contact with infectives or move back to the susceptible compartment due to immunity loss. Numerical simulations show possible sustained oscillations. The SIRWS system was partially analyzed by Dafilis et~al. \cite{Dafilis2012}.\\ \ \\ Heffernan and Keeling \cite{Heffernan2008} proposed an in-host model to understand the behavior of the immune system during and after an infection. Activation of immune system effectors and production of memory cells depend on the virus load. When not stimulated by the virus, the number of activated cells decays (waning immunity). Vaccination is simulated by changing the initial conditions for the virus load. Numerical simulations show that the number of infected immune system cells in a vaccinated patient reaches approximately half of what is reached in a patient who undergoes natural infection. Correspondingly, the level of immunity gained after one dose of vaccine is the same as the level observed in a measles patient 4 years after natural infection. The in-host model in \cite{Heffernan2008} was extended by the same authors to a population model (SEIRS) with waning immunity and immune system boosting \cite{Heffernan2009}. In contrast to classical SEIRS models, the class R refers here to individuals protected by short-term immune memory, while the class S refers to those individuals who have lost this short-term protection and may experience immune system boosting. Each compartment is classified according to the level of immunity, which can be related to the number of memory cells. Newborns are recruited into the susceptible class $S_0$ (lowest level of immunity).
During exposure and infection the host does not change its level of immunity, that is, transition occurs from $S_j$ to $E_j$ to $I_j$ for each $j\in \N$. Hosts in $S$ and $R$ experience waning immunity and transit from $S_j$ to $S_{j-1}$ (respectively from $R_j$ to $R_{j-1}$). Immune system boosting is due to recovery from infection and is incorporated into the equations with transition terms from $I_j$ to $R_k$, with $k\geq j$. The resulting large system of ODEs, with a very high number of parameters, is quite hard to approach from an analytical point of view, hence the authors make use of numerical simulations to investigate the long-term behavior. A somewhat simplified version of the ODE system in \cite{Heffernan2009} was proposed by Reluga et~al. in \cite{Reluga2008}. A similar large system of ODEs was introduced by Lavine et~al. \cite{Lavine2011}, extending the SIRWS model in \cite{Mossong1999,Glass2003b} by including several levels of immunity for immune hosts (R) and hosts with waning immunity (W), as well as age classes for all compartments. The authors claim that the model can explain several observed features of pertussis in the US, in particular a shift in the age-specific incidence and the re-emergence of the disease in a highly vaccinated population. \subsection{Systems of DDEs} Delay models with constant or distributed delay have been introduced to describe waning of disease-induced or vaccine-induced immunity. A simple SIRS system with constant delay is given by \begin{equation} \label{sys:SIRSdelay} \begin{aligned} \dot S(t)& = \mu(1-S(t))-\phi S(t)f(I(t))+\gamma I(t-\tau)e^{-\mu \tau}\\ \dot I(t)& = \phi S(t)f(I(t))-(\mu+\gamma) I(t)\\ \dot R(t)& = \gamma I(t) -\mu R(t) -\gamma I(t-\tau)e^{-\mu \tau}.
\end{aligned} \end{equation} This model was studied by Kyrychko and Blyuss \cite{Kyrychko2005}, who provided results on existence, uniqueness and non-negativity of solutions, linear and global stability of the disease-free equilibrium, as well as global stability of the unique endemic equilibrium. A special case of \eqref{sys:SIRSdelay} was considered some years later by Taylor and Carr \cite{Taylor2009}. An extension of system \eqref{sys:SIRSdelay} with distributed delay was proposed in \cite{Blyuss2010} and shortly after in \cite{Bhat2012}.\\ \indent A more general model with distributed delay and vaccination was proposed by Arino et~al. in \cite{Arino2004}. Their system includes three compartments (susceptible, infective and vaccinated hosts) in a population which remains constant in time. Vaccine-induced immunity might be only partial, resulting in vaccinated individuals becoming infective. Systems of ODEs or DDEs can be obtained from the general model by a proper choice of the kernel (see also \cite{Hethcote2000,Hethcote1981}).\\ \indent Recently, Yuan and B\'elair proposed a SEIRS model with integro-differential equations which resembles the systems in \cite{Arino2004,Hethcote2000}. The probability that an individual stays in the exposed class (E) for $t$ units of time is $P(t)$, hence, \begin{displaymath} E(t) = \int_0^t \beta \frac{S(u)I(u)}{N}e^{-b(t-u)}P(t-u)\,du. \end{displaymath} Similarly, $Q(t)$ is the probability that an individual is immune $t$ units of time after recovery, thus \begin{displaymath} R(t) = \int_0^t \gamma I(u)e^{-b(t-u)}Q(t-u)\,du. \end{displaymath} For a certain choice of the probabilities $P$ and $Q$, the problem can be reduced to a system with one or two constant delays. The authors show existence of an endemic equilibrium and boundedness of solutions in a positive simplex. 
For the system with one constant delay, results for existence of a global attractor as well as the proof of persistence of the disease in case $R_0>1$ are provided. \subsection{Systems of PDEs} Structured populations in the context of waning immunity and immune system boosting have been motivated in different ways. Often the population is structured by biological age \cite{mclean1988,McLean1988a,Katzmann1984,Rouderfer1994}, which is used to observe disease transmission among babies, children, adults and seniors. Only a few works suggest models for physiologically structured populations \cite{Martcheva2006,BarbarossaRostJoMB}.\\ \ \\ McLean and Anderson \cite{mclean1988,McLean1988a} proposed a model for measles transmission which includes a compartment for babies protected by maternal antibodies. Indeed, mothers who have had measles or have been vaccinated transfer measles immunity to the baby through the placenta. For several months after birth (ca. 2 months if the mother was vaccinated, ca. 4 months if she had the disease \cite{McLean1988a}) the baby is still protected by maternal antibodies and should not be vaccinated. The model by McLean and Anderson \cite{mclean1988} considers only waning of maternally induced immunity in the context of measles infection. A few years before McLean, Katzmann and Dietz \cite{Katzmann1984} proposed a slightly more general model, which also includes waning of vaccine-induced immunity. In both cases, the age structure was used to determine the optimal age for vaccination. A compartment for adult hosts with waning immunity who can also receive immune system boosting was introduced only years later by Rouderfer et~al. \cite{Rouderfer1994}. A further deterministic system of ODEs for maternally induced immunity in measles was proposed in \cite{Moghadas2008}. \\ \ \\ The approach is different when physiologically structured populations are considered.
Martcheva and Pilyugin \cite{Martcheva2006} suggest an SIRS model in which infective and recovered hosts are structured by their immune status. In infective hosts the immune status increases over the course of infection, while in recovered hosts the immune status decays at some non-constant rate. When the immune status has reached a critical level, recovered hosts transit from the immune to the susceptible compartment.\\ \indent A general framework for SIRS systems, modeling waning immunity and immune system boosting, and combining the in-host perspective with the population dynamics, was proposed in \cite{BarbarossaRostJoMB}. \section{A general modeling framework} \label{sec:framework} In this section we extend the model in \cite{BarbarossaRostJoMB} to include vaccine-induced immunity. As in \cite{Martcheva2006,BarbarossaRostJoMB}, we couple the in-host with the between-hosts dynamics, focusing on the effects of waning immunity and immune system boosting on the population dynamics. In contrast to the models proposed in \cite{Heffernan2009,Lavine2011}, we shall keep the number of equations as low as possible. The resulting model (V1) is a system of ODEs coupled with two PDEs. The ODE systems in \cite{Mossong1999,Glass2003b,Grenfell2012,Arino2006,Mossong2003}, as well as extensions of the DDE systems in \cite{Taylor2009,Belair2013}, can be recovered from our modeling framework.\\ \ \\ In setting up our model we do not restrict ourselves to a particular pathogen. The model (V1) can be adapted to several epidemic outbreaks (e.g. measles, chickenpox, rubella, pertussis) by estimating coefficients ad hoc from available experimental data \cite{Luo2012,Amanna2007,Li2013}. \subsection{Model ingredients} \subsubsection{\textit{Originally susceptible} and infective hosts} Let $S(t)$ denote the total population of \textit{originally susceptible} hosts. These are susceptible individuals who have neither been vaccinated nor infected before.
Newborns enter the susceptible population at rate $b(N)$, dependent on the total population size $N$. For simplicity we assume that the natural death rate $d>0$ does not depend on $N$. Assume that $b:[0,\infty)\to [0, b_+],\, N\mapsto b(N),$ with $0< b_+ <\infty$, is a nonnegative function, with $b(0)=0$. Finally, assume that in the absence of disease-induced death there exists an equilibrium $N^*$ such that $b(N^*)=d\,N^*$.\\ \indent Let $I(t)$ denote the total infective population at time $t$. Infection of susceptible individuals occurs by contact, at rate $\beta I/N$. Infected hosts recover at rate $\gamma>0$. When we include disease-induced death at rate $d_I>0$, the equilibrium $N^*$ satisfies \begin{equation*} \label{eq:equil_Nstart_dI} b(N^*)=d\,N^*+d_I I^*. \end{equation*} \subsubsection{Immune individuals} Let us denote by $r(t,z)$ the density of recovered individuals with disease-induced immunity level $z \in [\zm, \zM]$ at time $t$. The total population of recovered hosts is given by \begin{displaymath} R(t) = \int_{\zm}^{\zM} r(t,z)\, dz. \end{displaymath} \noindent The parameter $z$ describes the immune status and can be related to the number of specific immune cells of the host. The value $\zM$ corresponds to maximal immunity, whereas $\zm$ corresponds to a low level of immunity. Individuals who recover at time $t$ enter the immune compartment with the maximal level of immunity $\zM$. The level of immunity tends to decay in time and when it reaches the minimal value $\zm$, the host becomes susceptible again. However, exposure to the pathogen can boost the immune system from $z\in [\zm,\zM]$ to any higher status. It is not straightforward to determine how this kind of immune system boosting works, as no experimental data are available.
Nevertheless, laboratory analyses of vaccines tested on animals or humans suggest that the boosting efficacy might depend on several factors, including the current immune status of the recovered host and the amount of pathogen received \cite{Amanna2007,Luo2012}. Possibly, exposure to the pathogen can restore the maximal level of immunity, just as natural infection does \cite{BarbarossaRostJoMB}.\\ \indent Let $p(z,\tilde z),\, z\geq \tilde z,\,z,\tilde z \in \R$ denote the probability that an individual with immunity level $\tilde z$ moves to immunity level $z$, when exposed to the pathogen. Due to the definition of $p(z,\tilde z)$, we have $p(z,\tilde z)\in [0,1],\, z\geq \tilde z$ and \begin{displaymath} p(z,\tilde z)= 0, \quad \mbox{for all} \quad z < \tilde z. \end{displaymath} As we effectively consider only immunity levels in the interval $[\zm,\zM]$, we set \begin{displaymath} p(z,\tilde z)= 0, \quad \mbox{for all} \quad \tilde z \in (-\infty,\zm) \cup (\zM,\infty). \end{displaymath} Then we have $$\int_{-\infty}^{\infty}p(z,\tilde z)\, dz\,=\,\int_{\tilde z}^{\zM}p(z,\tilde z)\, dz\,=\,1,\quad \mbox{for all}\quad \tilde z \in [\zm,\zM].$$ Exposure to the pathogen might restore exactly the immunity level induced by the disease ($\zM$). In order to capture this particular aspect of immune system boosting, we write the probability $p(z,\tilde z)$ as a combination of a continuous density ($p_0$) and atomic measures (Dirac deltas): \begin{displaymath} p(z,\tilde z)= c_{max}(\tilde z)\delta (z-\zM) + c_0(\tilde z)p_0(z,\tilde z) + c_1(\tilde z)\delta(z-\tilde z), \end{displaymath} where \begin{itemize} \item \textbf{$c_{max}:[\zm,\zM ]\to [0,1],\;y\mapsto c_{max}(y)$}, is a continuously differentiable function and describes the probability that, due to contact with infectives, a host with immunity level $y$ boosts to the maximal level of immunity $\zM$.
\item \textbf{$c_{0}:[\zm,\zM]\to [0,1],\;y\mapsto c_{0}(y)$}, is a continuously differentiable function and describes the probability that, due to contact with infectives, a host with immunity level $y$ boosts to any other level $z \in (y,\zM)$, according to the continuous probability $p_0(z,y)$. \item \textbf{$c_{1}(y)=1-c_{max}(y)-c_0(y)$} describes the probability that, upon contact with infectives, a host with immunity level $y\in [\zm,\zM]$ does not experience immune system boosting. \end{itemize} \indent The immunity level decays in time at some rate $g(z)$ which is the same for all recovered individuals with immunity level $z$. In other words, the immunity level $z$ decays according to \begin{equation*} \frac{d}{dt}z(t)=-g(z), \end{equation*} with $g:[\zm,\zM]\to (0,K_g],\; K_g<\infty$ continuously differentiable. The positivity of $g(z)$ is required by the biological motivation. Indeed, if $g(\tilde z)=0$ for some value $\tilde z \in [\zm,\zM]$, there would be no change of the immunity level at $\tilde z$, contradicting the hypothesis of natural decay of immune status. In the absence of immune system boosting, $$ \int_{\zm}^{\zM} \frac{1}{g(x)}\,dx$$ is the time a recovered host remains immune (see \cite{BarbarossaRostJoMB}). \subsubsection{Vaccination} We structure the vaccinated population by the level of immunity as well. Let $v(t,z)$ be the density of vaccinees with immunity level $z \in [\zm, \zM]$ at time $t$. The total population of vaccinated hosts is given by \begin{displaymath} V(t) = \int_{\zm}^{\zM} v(t,z)\, dz. \end{displaymath} Vaccination confers a level of immunity $\zV$, which is lower than the level of immunity after natural infection: $\zM>\zV>\zm$ \cite{Siegrist2008}. As in recovered individuals, the level of immunity of a vaccinated host tends to decay in time, and when it reaches the minimal value $\zm$, the host becomes susceptible again.
However, also in vaccinated hosts, exposure to the pathogen can boost the immunity level $z\in [\zm,\zV]$ to any higher value in $[\zm,\zM]$. Immune system boosting is described by the probability $p(z,\tilde z)$, as in recovered hosts. We consider the possibility that exposure to the pathogen boosts the immune system of a vaccinated individual to $z \in (\zV,\zM]$. Vaccinated hosts with $z \in (\zV,\zM]$ have an immune status which can be compared to the one of hosts who recovered from natural infection.\\ \indent It is reasonable to assume that in vaccinated individuals the immunity level decays in time at the same rate $g$ as in hosts who underwent natural infection. In the absence of exposure to the pathogen (hence in the absence of immune system boosting), the time that a vaccinee remains immune is shorter than the time a recovered host does: $$ \int_{\zm}^{\zV} \frac{1}{g(x)}\,dx<\int_{\zm}^{\zM} \frac{1}{g(x)}\,dx.$$ Let us define the vaccination rate at birth $\alpha>0$. We assume that originally susceptible (adult) individuals get vaccinated at rate $\phi\geq 0$. \subsubsection{Becoming susceptible again} In the absence of immune system boosting, both disease-induced and vaccine-induced immunity fade away. Individuals who lose immunity either after recovery from infection or after vaccination enter the class $S_2$ of susceptible individuals who shall not get a new dose of vaccine. A host who has had the disease or has been vaccinated relies on the induced immunity and is not aware that his level of immunity might have dropped below the critical threshold.\\ \ \\ We denote by $S_2(t)$ the population at time $t$ of susceptible hosts who are not going to receive vaccination. \subsection{Model equations} In view of all of the above, we can write down the equations for the compartments $S,\,I$ and $S_2$. Let initial values $S(0)=S^0\geq 0$, $I(0)=I^0\geq 0$ and $S_2(0)=S_2^0\geq 0$ be given.
The population of originally susceptible individuals is governed by \begin{equation} \dot S(t) = \underbrace{b(N(t))(1-\alpha)}_{\mbox{birth}} -\underbrace{\phi S(t)}_{\mbox{vaccination}}-\underbrace{\beta \frac{S(t)I(t)}{N(t)}}_{\mbox{infection}}-\underbrace{dS(t)}_{\mbox{death}}, \label{eq:S_vacc} \end{equation} whereas hosts who become susceptible due to immunity loss follow \begin{equation*} \dot S_2(t) = -\underbrace{\beta \frac{S_2(t)I(t)}{N(t)}}_{\mbox{infection}}-\underbrace{dS_2(t)}_{\mbox{death}} +\underbrace{\Lambda_R}_{\substack{\text{immunity loss}\\ \text{after recovery}}}+\underbrace{\Lambda_V}_{\substack{\text{immunity loss}\\ \text{after vaccination}}}. \end{equation*} The term $\Lambda_R$ (respectively $\Lambda_V$), which represents transitions from the immune (respectively, the vaccinated) compartment to the susceptible one, will be specified below together with the dynamics of the recovered (respectively, vaccinated) population.\\ \ \\ Both kinds of susceptible hosts can become infective due to contact with infective hosts: \begin{equation} \dot I(t) = \underbrace{\beta \frac{S(t)I(t)}{N(t)}}_{\mbox{infection of }S}+\underbrace{\beta \frac{S_2(t)I(t)}{N(t)}}_{\mbox{infection of }S_2} -\underbrace{\gamma I(t)}_{\mbox{recovery}} -\underbrace{d I(t)}_{\substack{\text{natural}\\ \text{death}}}-\underbrace{d_I I(t)}_{\substack{\text{disease-induced}\\ \text{death}}}. \label{eq:I} \end{equation} To obtain an equation for the recovered individuals, structured by their levels of immunity, one can proceed similarly to size-structured models, or as was done for the immune population in \cite{BarbarossaRostJoMB}. The result is the following PDE. Let a nonnegative initial distribution $r(0,z)=\psi(z),\, z \in [\zm,\zM]$ be given.
For $t>0, z\in[\zm,\zM]$ we have \begin{equation} \begin{aligned} \frac{\partial }{\partial t}r(t,z)-\frac{\partial }{\partial z}\left(g(z)r(t,z)\right) & = -dr(t,z)+ \beta \frac{I(t)}{N(t)}\int_{\zm}^{z} p(z,x)r(t,x)\,dx\\[0.5em] & \phantom{==} - r(t,z)\beta\frac{I(t)}{N(t)}, \label{eq:pdeR_zmzV} \end{aligned} \end{equation} with the boundary condition \begin{equation} \label{eq:BC_mod1_pde_R} g(\zM)r(t,\zM) = \gamma I(t) + \beta \frac{I(t)}{N(t)}\int_{\zm}^{\zM} p(\zM,x)r(t,x)\,dx. \end{equation} Equation \eqref{eq:pdeR_zmzV} expresses the rate of change in the density of recovered individuals according to immune level due to natural waning, mortality, and boosting. The boundary condition \eqref{eq:BC_mod1_pde_R} includes newly recovered individuals as well as those recovered individuals who have just received a boost elevating their immune status to the maximal level. Next we shall consider the vaccinated population. Again, by structuring this group according to immunity level, one has the PDE \begin{equation} \begin{aligned} \frac{\partial }{\partial t}v(t,z) & = \frac{\partial }{\partial z}\left(g(z)v(t,z)\right) -dv(t,z)+ \beta \frac{I(t)}{N(t)}\int_{\zm}^{z} p(z,x)v(t,x)\,dx\\[0.5em] & \phantom{==} - v(t,z)\beta\frac{I(t)}{N(t)}+\delta(z-\zV)\left(\phi S(t) + \alpha b(N(t))\right), \label{eq:mod1_pde_V} \end{aligned} \end{equation} and \begin{equation} \label{eq:BC_mod1_pde_V} g(\zM)v(t,\zM) = \beta \frac{I(t)}{N(t)}\int_{\zm}^{\zM} p(\zM,x)v(t,x)\,dx, \end{equation} provided with a nonnegative initial distribution $v(0,z)=\psi_v(z),\, z \in [\zm,\zM]$. Observe that newly vaccinated hosts do not enter the vaccinated population at $\zM$, but at the lower value $\zV$, which is expressed in equation \eqref{eq:mod1_pde_V} as an impulse at $z=\zV$ by the term with the Dirac delta $\delta(z-\zV)$.
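As a quick numerical consistency check of the transport structure in \eqref{eq:pdeR_zmzV} (not carried out in the text; all parameter values below are hypothetical), one can freeze $I$ and $N$, switch off boosting (so that only waning, mortality and the recovery influx remain), and take a constant waning speed $g$. The analytic steady state is then $r(z)=\frac{\gamma I}{g}e^{-\frac{d}{g}(\zM-z)}$, so the total immune population tends to $R^\infty=\frac{\gamma I}{d}\bigl(1-e^{-d\tau}\bigr)$ with $\tau=(\zM-\zm)/g$. A minimal first-order upwind/explicit-Euler sketch reproduces this limit:

```python
import math

# Hypothetical parameters (illustration only, not fitted to any disease)
gamma, d, g = 1.0, 0.05, 0.5      # recovery rate, death rate, constant waning speed
I_fix = 100.0                     # infective population frozen in time
z_min, z_max = 0.0, 1.0
n = 100
h = (z_max - z_min) / n
tau = (z_max - z_min) / g         # time immune in the absence of boosting

r = [0.0] * (n + 1)               # r[i] approximates r(t, z_min + i*h)
dt = 0.5 * h / g                  # CFL-stable time step for the upwind scheme
for _ in range(int(20.0 / dt)):
    r[n] = gamma * I_fix / g      # boundary condition g(zM) r(t, zM) = gamma I
    new = r[:]
    for i in range(n):
        # waning transports the density toward z_min; natural death removes it
        new[i] = r[i] + dt * ((g * r[i + 1] - g * r[i]) / h - d * r[i])
    r = new

R_num = h * sum(r[:n])                                     # total immune population
R_exact = gamma * I_fix / d * (1.0 - math.exp(-d * tau))   # analytic steady state
```

With the parameters above the two values agree to well under one percent; the outflux $g(\zm)r(t,\zm)$ at the left boundary is exactly the term $\Lambda_R$ feeding $S_2$.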
It becomes evident that the quantity $\Lambda_R$, initially introduced in the $S_2$ equation to represent the number of hosts who experienced immunity loss, is given by the number $g(\zm)r(t,\zm)$ of immune hosts who reached the minimal level of immunity after recovery from natural infection. Similarly, $\Lambda_V$ is the number $g(\zm)v(t,\zm)$ of vaccinated hosts who reached the minimal level of immunity. Hence we have \begin{equation} \dot S_2(t) = -\underbrace{\beta \frac{S_2(t)I(t)}{N(t)}}_{\mbox{infection}}-\underbrace{dS_2(t)}_{\mbox{death}} +\underbrace{g(\zm)r(t,\zm)}_{\Lambda_R}+\underbrace{g(\zm)v(t,\zm)}_{\Lambda_V}. \label{eq:S2_vacc} \end{equation} \noindent In the following we refer to the complete system \eqref{eq:S_vacc} -- \eqref{eq:S2_vacc} as \textbf{model (V1)}. \section{Connection to other mathematical models} \subsection{Connection to ODE models} As was shown in \cite{BarbarossaRostJoMB} for a simpler problem, model (V1) can be reduced to a system of ODEs analogous to those proposed in \cite{Mossong1999,Glass2003b,Grenfell2012,Lavine2011,Heffernan2009,Mossong2003}. The connection between model (V1) and the ODE system is given by the \textit{method of lines}, a technique in which all but one dimension are discretized \cite{MOLbook}. In our case, we shall discretize the immunity level ($z$) and obtain a system of ODEs in the time variable.\\ \ \\ Let us define a sequence $\left\{z_j\right\}_{j\in \N}$, with $h_j:=z_{j+1}-z_j>0$, for all $j \in \N$. To keep the demonstration as simple as possible, we choose a grid with only a few points, $z_1:=\zm<\zw:=\zV<\zf<\zM$, and for simplicity (or possibly after a rescaling) assume that $h_j=1$ for all $j$. We define the following subclasses of the immune/vaccinated population: \begin{itemize}\itemsep0.5cm \item $R_F(t):= r(t,\zf)$, immune hosts with high level of immunity at time $t$. As their immunity level is quite high, these individuals do not experience immune system boosting.
Immunity level decays at rate $\mu:=g(\zf)>0$. \item $R_W(t):= r(t,\zw)$, immune hosts with intermediate level of immunity at time $t$. These individuals can get immune system boosting and move to $R_F$. Immunity level decays at rate $\nu:=g(\zw)>0$. \item $R_C(t):= r(t,\zm)$, immune hosts with critically low level of immunity at time $t$. With probability $\theta$ boosting moves $R_C$ individuals to $R_W$ (respectively, with probability $(1-\theta)$ to $R_F$). Immunity level decays at rate $\sigma:=g(\zm)>0$. If they do not get immune system boosting, these hosts move to the class $S_2$ (become susceptible again). \item $V_R(t):= v(t,\zf)$, vaccinated hosts who, thanks to immune system boosting, have gained a very high level of immunity at time $t$. These individuals do not experience immune system boosting. Immunity level decays at rate $\mu$. \item $V_0(t):= v(t,\zw)$, vaccinated individuals at time $t$ with maximal vaccine-induced immunity. This class includes new vaccinees. If their immune system gets boosted, these hosts move to $V_R$. Immunity level decays at rate $\nu$. \item $V_C(t):= v(t,\zm)$, vaccinees with critically low level of immunity at time $t$. With probability $\xi$ boosting moves $V_C$ hosts to $V_0$ and with probability $(1-\xi)$ to $V_R$. Immunity level decays at rate $\sigma$. If they do not receive immune system boosting, $V_C$ hosts move to $S_2$. \end{itemize} To show how the PDE system can be reduced to a system of ODEs by means of the method of lines, we consider a simple example. Let us neglect immune system boosting for a moment. Then the PDE for $r(t,z)$ in model (V1) becomes \begin{equation} \label{eq:PDEr_MOL_noboost} \frac{\partial}{\partial t} r(t,z) = \frac{\partial}{\partial z} \bigl(g(z) r(t,z)\bigr) -d r(t,z), \qquad z \in [\zm,\zM], \end{equation} with boundary condition $R_{\zM}(t):=r(t,\zM)=\gamma I(t) / g(\zM)$.
Using a forward approximation for the $z$-derivative in \eqref{eq:PDEr_MOL_noboost}, we obtain, e.g., for $R_F(t)$ the following differential equation: \begin{align*} \dot R_F(t) & =\frac{\partial}{\partial t} r(t,\zf)\\ & = \frac{\partial}{\partial z} \bigl(g(\zf) r(t,\zf)\bigr) -d r(t,\zf)\\ & \approx \frac{g(\zM) r(t,\zM)- g(\zf) r(t,\zf)}{\underbrace{\zM-\zf}_{=1}} -d r(t,\zf)\\ & = g(\zM) R_{\zM}(t)- \mu R_F(t) -d R_F(t)\\ & = \gamma I(t)- (\mu+d) R_F(t). \end{align*} Analogously one can find equations for $R_W,\,R_C,\,V_R,\,V_0$ and $V_C$. Altogether we obtain a system of ordinary differential equations in which a linear chain of ODEs replaces the PDEs for the immune and the vaccinated class: \begin{align*} \dot S(t) & = (1-\alpha)b(N(t)) -\phi S(t)-\beta \frac{S(t)I(t)}{N(t)}-dS(t)\\ \dot I(t) & = \beta \frac{I(t)}{N(t)}(S(t)+S_2(t))-(\gamma+d+d_I)I(t)\\ \dot R_F(t) & = \gamma I(t)-\mu R_F(t) -dR_F(t)\\ \dot R_W(t) & =\mu R_F(t)-\nu R_W(t)-d R_W(t)\\ \dot R_C(t) & = \nu R_W(t)-\sigma R_C(t) -d R_C(t)\\ \dot V_R(t) & = -\mu V_R(t) -d V_R(t)\\ \dot V_0(t) & = \phi S(t)+\alpha b (N(t)) +\mu V_R(t)-\nu V_0(t) -dV_0(t)\\ \dot V_C(t) & =\nu V_0(t)-\sigma V_C(t) -d V_C(t)\\ \dot S_2(t) &= -\beta \frac{S_2(t)I(t)}{N(t)}-dS_2(t)+\sigma (R_C(t)+ V_C(t)). \end{align*} The method of lines can be applied to the full model (V1) as well \cite{BarbarossaRostJoMB}. For this purpose it is necessary to discretize the boosting probability $p(z,\tilde z)$ (this is expressed by the parameters $\xi$ and $\theta$ below). Incorporating the boosting effect, the result is the following system of ODEs.
\begin{align*} \dot S(t) & = (1-\alpha)b(N(t)) -\phi S(t)-\beta \frac{S(t)I(t)}{N(t)}-dS(t)\\ \dot I(t) & = \beta \frac{I(t)}{N(t)}\left(S(t)+S_2(t)\right)-(\gamma+d+d_I)I(t)\\ \dot R_F(t) & = \gamma I(t)-\mu R_F(t) -dR_F(t)+\beta \frac{I(t)}{N(t)} \left(R_W(t) +(1-\theta)R_C(t)\right)\\ \dot R_W(t) & =\mu R_F(t)-\nu R_W(t)-d R_W(t)+ \beta \frac{I(t)}{N(t)}(\theta R_C(t)-R_W(t))\\ \dot R_C(t) & = \nu R_W(t)-\sigma R_C(t) -d R_C(t)-\beta \frac{I(t)}{N(t)} R_C(t)\\ \dot V_R(t) & = \beta \frac{I(t)}{N(t)} \left(V_0(t) +(1-\xi)V_C(t) \right)-\mu V_R(t) -dV_R(t)\\ \dot V_0(t) & = \phi S(t)+\alpha b (N(t)) +\mu V_R(t)-\nu V_0(t) -dV_0(t) +\beta \frac{I(t)}{N(t)}\left(\xi V_C(t)-V_0(t)\right)\\ \dot V_C(t) & = \nu V_0(t)-\sigma V_C(t) -d V_C(t)-\beta \frac{V_C(t)I(t)}{N(t)}\\ \dot S_2(t) &= -\beta \frac{S_2(t)I(t)}{N(t)}-dS_2(t)+\sigma (R_C(t)+ V_C(t)). \end{align*} The linear chain of ODEs provides a rough approximation of the PDEs in model (V1). Indeed, with the method of lines we approximate the PDE dynamics considering only changes at the grid points ($\zm,\;\zw,\;\zf$), whereas the dynamics remains unchanged in each immunity interval $[z_{j},z_{j+1}]$. We take the lower boundary $z_j$ as the representative point of each interval; for this reason we do not have a differential equation for $R_{\zM}(t)$ or $V_{\zM}(t)$. \subsection{Connection to DDE models} \label{sec:connDDEs} Delay models with constant delay can be recovered from special cases of model (V1). We show here how to obtain the classical SIRS model with delay studied in \cite{Taylor2009}, or extensions thereof.\\ \ \\ In the following we neglect boosting effects and vaccination. Further, we do not distinguish between originally susceptible hosts and hosts who have lost immunity; hence, with respect to model (V1), we identify the classes $S$ and $S_2$.
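A useful numerical sanity check of the boosted chain above: summing all right-hand sides, every transition term cancels, so the total population $N=S+I+R_F+R_W+R_C+V_R+V_0+V_C+S_2$ satisfies $\dot N = b(N)-dN-d_I I$. Hence, choosing $b(N)=dN$ and $d_I=0$, any consistent integration must conserve $N$. A minimal forward-Euler sketch (all parameter values purely hypothetical, chosen only to exercise the equations):

```python
# Hypothetical rates: transmission, recovery, death, disease-induced death
beta, gamma, d, dI = 0.8, 0.2, 0.02, 0.0
alpha, phi = 0.3, 0.05            # vaccination at birth / of adults
mu, nu, sigma = 0.1, 0.1, 0.1     # waning rates g(zf), g(zw), g(zm)
theta, xi = 0.7, 0.7              # discretized boosting probabilities
b = lambda N: d * N               # births balance natural deaths, so N is conserved

# state: S, I, RF, RW, RC, VR, V0, VC, S2
y = [900.0, 10.0, 20.0, 10.0, 5.0, 0.0, 40.0, 10.0, 5.0]

def rhs(y):
    S, I, RF, RW, RC, VR, V0, VC, S2 = y
    N = sum(y)
    f = beta * I / N              # force of infection
    return [
        (1 - alpha) * b(N) - phi * S - f * S - d * S,
        f * (S + S2) - (gamma + d + dI) * I,
        gamma * I - (mu + d) * RF + f * (RW + (1 - theta) * RC),
        mu * RF - (nu + d) * RW + f * (theta * RC - RW),
        nu * RW - (sigma + d) * RC - f * RC,
        f * (V0 + (1 - xi) * VC) - (mu + d) * VR,
        phi * S + alpha * b(N) + mu * VR - (nu + d) * V0 + f * (xi * VC - V0),
        nu * V0 - (sigma + d) * VC - f * VC,
        -f * S2 - d * S2 + sigma * (RC + VC),
    ]

dt, T = 0.01, 50.0
N0 = sum(y)
for _ in range(int(T / dt)):      # forward Euler
    dy = rhs(y)
    y = [yi + dt * dyi for yi, dyi in zip(y, dy)]
```

After the run, $\sum_i y_i$ agrees with $N_0$ up to floating-point roundoff and all compartments stay nonnegative.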
From our assumptions, the disease-induced immunity lasts for a fixed time $\tau>0$, given by $$ \int_{\zm}^{\zM}\frac{1}{g(x)}\,dx=\tau.$$ We can express the total immune population at time $t$ as the number of individuals who recovered in the time interval $[t-\tau,t]$, \begin{equation*} R(t) = \gamma \int_{t-\tau}^{t} I(y)e^{-d(t-y)}\,dy = \gamma \int_0^{\tau} I(t-x)e^{-dx}\,dx. \label{eq:defR_integr} \end{equation*} Differentiation with respect to $t$ yields \begin{equation} \dot R(t) = \gamma I(t)-\gamma I(t-\tau)e^{-d\tau}-dR(t). \label{eq:ddeR1} \end{equation} On the other hand, we have the definition in terms of the distribution of immune individuals, \begin{equation*} R(t) =\int_{\zm}^{\zM} r(t,z)\,dz. \end{equation*} Differentiating the last relation and comparing with \eqref{eq:ddeR1} yields \begin{equation*} g(\zM)r(t,\zM)=\gamma I(t), \qquad g(\zm)r(t,\zm)=\gamma I(t-\tau)e^{-d\tau}. \label{eq:ddeR_relations} \end{equation*} This means that individuals with the maximal level of immunity are those who recover from infection. If a host who recovers at time $t_1$ survives up to time $t_1+\tau$, he exits the $R$ class and enters $S$. Thus, we find a delay term in the equation for $S$ too, and obtain a classical SIRS model with constant delay \begin{align*} \dot S(t) & = b(N(t)) -\beta \frac{S(t)I(t)}{N(t)}-dS(t)+\gamma I(t-\tau)e^{-d\tau}\\[0.3em] \dot I(t) & = \beta \frac{S(t)I(t)}{N(t)}-(\gamma+d+d_I)I(t)\\[0.3em] \dot R(t) & =\gamma I(t)-\gamma I(t-\tau)e^{-d\tau} -dR(t), \end{align*} which was studied by Taylor and Carr \cite{Taylor2009}.\\ \ \\ Now we can again include vaccination and the class $S_2$ as in the general model (V1).
We assume that vaccine-induced immunity lasts for a time $\tau_v>0$, $$ \tau_v:=\int_{\zm}^{\zV}\frac{1}{g(x)}\,dx\;<\;\int_{\zm}^{\zM}\frac{1}{g(x)}\,dx=:\tau.$$ With similar arguments as for the immune population, we obtain the relations \begin{align*} g(\zV)v(t,\zV) & =\alpha b(N(t))+\phi S(t), \\ g(\zm)v(t,\zm) & =\left(\alpha b(N(t-\tau_v))+\phi S(t-\tau_v)\right)e^{-d\tau_v}, \end{align*} and find a system with two constant delays \begin{equation*} \begin{aligned} \dot S(t) & = (1-\alpha)b(N(t)) -\phi S(t) -\beta \frac{S(t)I(t)}{N(t)}-dS(t)\\[0.3em] \dot I(t) & = \beta \frac{I(t)}{N(t)}(S(t)+S_2(t))-(\gamma+d+d_I)I(t)\\[0.3em] \dot R(t) & =\gamma I(t)-\gamma I(t-\tau)e^{-d\tau} -dR(t)\\[0.3em] \dot V(t) & = \alpha b(N(t))+\phi S(t)-\left(\alpha b(N(t-\tau_v))+\phi S(t-\tau_v)\right)e^{-d\tau_v} -d V(t)\\ \dot S_2(t) &= -\beta \frac{S_2(t)I(t)}{N(t)}-dS_2(t)+\gamma I(t-\tau)e^{-d\tau}\\ & \phantom{=} +\left(\alpha b(N(t-\tau_v))+\phi S(t-\tau_v)\right)e^{-d\tau_v}. \end{aligned} \end{equation*} \section*{Acknowledgments} The authors were supported by the ERC Starting Grant No.\ 259559. MVB was supported by the European Union and the State of Hungary, co-financed by the European Social Fund in the framework of T\'AMOP-4.2.4. A/2-11-1-2012-0001 National Excellence Program. GR was supported by Hungarian Scientific Research Fund OTKA K109782 and T\'AMOP-4.2.2.A-11/1/KONV-2012-0073 ``Telemedicine focused research activities on the field of Mathematics, Informatics and Medical sciences''.
TITLE: Prove that two linear maps are the same --- from ``An Invitation to 3-D Vision'' QUESTION [1 upvotes]: I found a question in the book asking to prove the following identity: $A^T\hat{\omega}A=\widehat{A^{-1}\omega}$, where $\hat{}$ means turning a vector $(x_1, x_2, x_3)$ into the skew-symmetric matrix $ \left[\begin{matrix} 0 & -x_3 & x_2 \\ x_3 & 0 & -x_1 \\ -x_2 & x_1 & 0 \end{matrix}\right] $ and $A$ is a rotation matrix. How can one prove it? One hint the book gives is to prove that the two linear maps, $A^T\hat{(\cdot)}A$ and $\widehat{A^{-1}(\cdot)}$, are the same. Thank you for reading the question. REPLY [0 votes]: Thanks to user1551 for the answer; with that help I came up with a way. But I don't use the scalar triple product formula (maybe I haven't fully understood that proof). My proof is: for the equation $A^T\hat{\omega}A=\widehat{A^{-1}\omega}$, multiplying both sides on the right by a vector $u$, we need to prove: $$ A^T(\omega \times(Au)) = (A^{-1}\omega)\times u.$$ Since $A^T$ is a rotation matrix, and rotations preserve the cross product ($R(a\times b)=(Ra)\times(Rb)$), we get $$ A^T(\omega \times(Au)) = (A^T\omega) \times (A^TAu) = (A^{-1}\omega)\times u. $$ So it is proved.
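For anyone who wants to double-check the identity numerically before proving it, here is a small self-contained sketch in pure Python (a generic rotation built by composing elementary rotations; using $A^{-1}=A^T$ for rotation matrices):

```python
import math

def hat(v):
    """Skew-symmetric matrix of a 3-vector, as in the question."""
    x1, x2, x3 = v
    return [[0.0, -x3, x2],
            [x3, 0.0, -x1],
            [-x2, x1, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

A = matmul(rot_z(0.7), rot_x(-1.2))   # a generic rotation matrix
w = [0.3, -1.1, 2.5]

lhs = matmul(transpose(A), matmul(hat(w), A))   # A^T w^ A
rhs = hat(matvec(transpose(A), w))              # (A^{-1} w)^, since A^{-1} = A^T
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(3) for j in range(3))
```

The maximum entrywise difference `err` comes out at floating-point roundoff level, as the identity predicts.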
TITLE: Correct method of integration involving two exponential terms QUESTION [2 upvotes]: I have an integrand involving two exponential terms: $$ \int_{0}^{\infty} \frac{\exp(x^2)}{(1+\exp(x^2))^2} dx $$ I would like to know the best way to integrate such a function without it blowing up. What if $x^2$ is replaced by two variables $(x^2 + y^2)$ and we have a double integral? Will the method of integration remain the same? I use python and matlab for calculations. Thanks. REPLY [0 votes]: $$ \begin{align} \int_0^\infty\frac{e^{x^2}}{\left(1+e^{x^2}\right)^2}\,\mathrm{d}x &=\int_0^\infty\frac{e^{-x^2}}{\left(1+e^{-x^2}\right)^2}\,\mathrm{d}x\\ &=\frac12\int_0^\infty\frac{e^{-x}}{(1+e^{-x})^2}\frac{\mathrm{d}x}{\sqrt{x}}\tag{1} \end{align} $$ Consider $$ \begin{align} &\int_0^\infty\frac{e^{-x}}{(1+e^{-x})^2}x^{\alpha-1}\mathrm{d}x\tag{2}\\ &=\int_0^\infty\left(e^{-x}-2e^{-2x}+3e^{-3x}-\dots\right)x^{\alpha-1}\,\mathrm{d}x\tag{3}\\ &=\Gamma(\alpha)\left(1-2^{1-\alpha}+3^{1-\alpha}-\dots\right)\tag{4}\\[6pt] &=\Gamma(\alpha)\zeta(\alpha-1)\left(1-2^{2-\alpha}\right)\tag{5} \end{align} $$ The integral in $(2)$ is analytic for $\mathrm{Re}(\alpha)\gt0$. For $\mathrm{Re}(\alpha)\gt1$ the sum in $(4)$ converges and takes the value in $(5)$. Since the functions in $(2)$ and $(5)$ are analytic and equal for $\mathrm{Re}(\alpha)\gt1$, they must be equal for $\mathrm{Re}(\alpha)\gt0$. Therefore, with $\alpha=\frac12$, we get $$ \begin{align} \int_0^\infty\frac{e^{x^2}}{\left(1+e^{x^2}\right)^2}\,\mathrm{d}x &=\frac{\sqrt\pi}2\zeta\left(-\tfrac12\right)\left(1-\sqrt8\right)\\ &\doteq0.336859119428876991346\tag{6} \end{align} $$ A note on computing $\boldsymbol{\zeta\!\left(-\frac12\right)}$ We can't use the standard series $$ \zeta(s)=\sum_{n=1}^\infty n^{-s}\tag{7} $$ to compute $\zeta\!\left(-\tfrac12\right)$ because the series in $(7)$ only converges for $s\gt1$.
In this answer, analytic continuation is used to show that $$ \zeta\!\left(-\tfrac12\right)=\lim_{n\to\infty}\left(\sum_{k=1}^n\sqrt{k}\,-\tfrac23n^{3/2}-\tfrac12n^{1/2}\right)\tag{8} $$ The convergence of $(8)$ is very slow; the error is approximately $\frac1{24\sqrt{n}}$. However, by using $8$ terms from the Euler-Maclaurin Sum Formula, the error is reduced to $\frac{52003}{100663296}n^{-25/2}$. Thus, using $n=1000$, we get $$ \zeta\!\left(-\tfrac12\right)=-0.2078862249773545660173067253970493022262\dots\tag{9} $$
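As a numerical cross-check of $(1)$ and $(6)$, here is a sketch using only the Python standard library; composite Simpson's rule and the truncation point $x=10$ are ad-hoc choices of mine, and the value of $\zeta(-\frac12)$ is taken from $(9)$:

```python
import math

def f(x):
    # integrand rewritten as e^{-x^2}/(1+e^{-x^2})^2, the first equality in (1);
    # unlike e^{x^2}/(1+e^{x^2})^2, this form cannot overflow for large x
    e = math.exp(-x * x)
    return e / (1.0 + e) ** 2

def simpson(g, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

# the tail beyond x = 10 is below e^{-100}, so truncating there is harmless
approx = simpson(f, 0.0, 10.0, 10_000)

zeta_mhalf = -0.2078862249773546          # from (9)
closed_form = 0.5 * math.sqrt(math.pi) * zeta_mhalf * (1 - math.sqrt(8))   # (6)
```

For the two-variable version with $x^2+y^2$, switching to polar coordinates reduces the double integral to a one-dimensional radial integral that the same routine handles; the key point in either case is to rewrite the integrand in the decaying form first.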
TITLE: Find a point on line $AB$ such that its distance from point $C$ is $r$ QUESTION [0 upvotes]: I have a line $AB$ and a point $C$. I need to find a point $D$ on line $AB$ at distance $r$ from point $C$. How can I find point $D$, and is there a general formula for this? The coordinates of points $A, B, C$ are known. REPLY [1 votes]: First, you have to realize that there are three cases: Either the distance $r$ is less than the distance between $C$ and $AB$, in which case there is no such point $D$ Or the distance $r$ is equal to the distance between $C$ and $AB$. In that case you have one solution Or $r$ is greater than the distance between $C$ and $AB$ and then you have two solutions Already you can see that this will amount to solving a polynomial of second degree. Let's say $C(x_C,y_C)$ and $AB$ has the following equation $y=ax+b$. Then $D(x_D,y_D)$, being on $AB$, satisfies $y_D=ax_D+b$ (equation 1) Distance $CD$ is $CD=\sqrt{(x_C-x_D)^2+(y_C-y_D)^2}=r$ (equation 2) Or $(x_C-x_D)^2+(y_C-y_D)^2=r^2$ $(x_C-x_D)^2+(y_C-ax_D-b)^2=r^2$ (after using equation 1) which is a quadratic equation for $x_D$... I think you can take it from here
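The computation can be carried out mechanically. Here is a small Python sketch (the function name and test points are my own); it uses the parametric form $D=A+t(B-A)$ rather than $y=ax+b$, which also covers vertical lines, and the three cases above correspond to the sign of the discriminant:

```python
import math

def points_at_distance(A, B, C, r):
    """Points D = A + t*(B - A) on line AB with |D - C| = r.

    Expands |A + t*(B - A) - C|^2 = r^2 into a quadratic in t;
    returns 0, 1 or 2 points depending on the discriminant.
    """
    ax, ay = A; bx, by = B; cx, cy = C
    dx, dy = bx - ax, by - ay          # direction vector of AB
    fx, fy = ax - cx, ay - cy          # vector from C to A
    a = dx * dx + dy * dy
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r * r
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                      # r smaller than dist(C, line AB): no solution
    roots = {(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)}
    return [(ax + t * dx, ay + t * dy) for t in sorted(roots)]

sols = points_at_distance((0, 0), (4, 0), (2, 1), 2)
```

Using a set for the roots collapses the tangent case (zero discriminant) to a single point automatically.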
TITLE: Matrix times its transpose equals original matrix QUESTION [1 upvotes]: I have a 6x6 matrix that equals the original matrix when multiplied by its transpose. What does this say about this matrix? What unique conditions does this matrix satisfy, since this property doesn't seem to hold in general? REPLY [3 votes]: If I understand the comments correctly, the matrices you're interested in have the following two properties: They are symmetric: $M^T = M$. They are idempotent: $M^2 = M$. Let me further assume that your matrices are real. Then these matrices are precisely the orthogonal projections onto some subspace (namely their image).
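To make the answer concrete, here is a small pure-Python sanity check (the particular projection, onto $\operatorname{span}\{(1,1,0)\}\subset\mathbb{R}^3$, is an arbitrary choice of mine): for a real symmetric idempotent $P$ one has $PP^T=P^2=P$, which is exactly the property asked about.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(A):
    n = len(A)
    return [[A[j][i] for j in range(n)] for i in range(n)]

# Orthogonal projection onto span{(1, 1, 0)}: P = v v^T with v = (1,1,0)/sqrt(2).
P = [[0.5, 0.5, 0.0],
     [0.5, 0.5, 0.0],
     [0.0, 0.0, 0.0]]

PPt = matmul(P, transpose(P))   # equals P^2 = P since P is symmetric and idempotent
err = max(abs(PPt[i][j] - P[i][j]) for i in range(3) for j in range(3))
```

Conversely, $MM^T=M$ forces $M$ to be symmetric (since $MM^T$ always is) and hence idempotent, so this family of examples is exhaustive over the reals.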
\begin{document} \title{$A$-homology, $A$-homotopy and spectral sequences} \author[E.M. Ottina]{Enzo Miguel Ottina} \address{Enzo Miguel Ottina. Instituto de Ciencias B\'asicas \\ Universidad Nacional de Cuyo \\ Mendoza, Argentina.} \email{emottina@uncu.edu.ar} \begin{abstract} \noindent Given a CW-complex $A$ we define an `$A$-shaped' homology theory which behaves nicely towards $A$-homotopy groups allowing the generalization of many classical results. We also develop a relative version of the Federer spectral sequence for computing $A$-homotopy groups. As an application we derive a generalization of the Hopf-Whitney theorem. \end{abstract} \subjclass[2000]{55N35, 55Q05, 55T05.} \keywords{CW-complexes, Homology Theories, Homotopy Groups, Spectral Sequences.} \maketitle \section{Introduction} \label{intro} Given pointed topological spaces $A$ and $X$, the $A$-homotopy groups of $X$ are defined as $\pi_n^A(X)=[\Sigma^n A,X]$, that is, the homotopy classes of pointed maps from the reduced $n$-th suspension of $A$ to $X$. These groups appear naturally in different situations, for example in Quillen's model categories \cite{Qui}, and generalize the homotopy groups with coefficients \cite{Nei,Pet}. The $A$-homotopy groups have also been studied indirectly by some authors as homotopy groups of function spaces. Among them, we mention M. Barratt, R. Brown and H. Federer. The first one proves in \cite{Bar1} that homotopy groups of function spaces can be described as a central extension of certain homology groups, and computes in \cite{Bar2} these homotopy groups in several cases using Whitney's tube systems \cite{Whn}. In \cite{Br}, Brown works in a simplicial setting obtaining results which are used to study homotopy types of function spaces. As an application, he obtains different proofs for some of Barratt's results. In \cite{Fed}, Federer introduces a spectral sequence which converges to the homotopy groups of function spaces. 
Clearly, the Federer spectral sequence may also be understood as a tool to compute $A$-homotopy groups when $A$ is locally compact and Hausdorff. In the first part of this article we delve deeply into this spectral sequence taking a different approach from Federer's: we focus our attention on $A$-homotopy groups of spaces rather than on homotopy groups of function spaces. We develop a relative version of the Federer spectral sequence and obtain as a first application a generalization of the Hopf-Whitney theorem (\ref{Hopf-Whitney}). The homotopy groups and the homology groups of a topological space are related, for example, by the Hurewicz theorem, or more generally, by the Whitehead exact sequence \cite{Whi}. Therefore, it is natural to think that the $A$-homotopy groups should also have their homological counterpart. The main objective of the second part of this article is to define a suitable `$A$-shaped' homology theory and give results which show the relationship between this homology theory and the $A$-homotopy groups. This is achieved in section \ref{section_A_homology} where, given a CW-complex $A$, we define the $A$-homology groups of a CW-complex $X$ generalizing singular homology groups. We obtain many generalizations of classical results, among the most important of which we mention a Hurewicz-type theorem relating the $A$-homotopy groups with the $A$-homology groups (\ref{Hurewicz_for_A-homology}) and a homological version of the Whitehead theorem which states that, under certain hypotheses, a map between CW-complexes which induces isomorphisms in the $A$-homology groups is a homotopy equivalence (\ref{A-homology_isomorphism}). Finally, we define a Hurewicz-type map between the $A$-homotopy groups and the $A$-homology groups and embed it in a long exact sequence generalizing the exact sequence constructed by Whitehead in \cite{Whi}. \medskip Throughout this article, all spaces are supposed to be pointed and path-connected.
Homology and cohomology will mean reduced homology and reduced cohomology, respectively. Also, if $X$ is a pointed topological space with base point $x_0$, we will simply write $\pi_n(X)$ instead of $\pi_n(X,x_0)$. \medskip I would like to thank G. Minian for many valuable comments and suggestions on this article. \section{A relative version of the Federer spectral sequence} \label{section_Federer} In this section we introduce a relative version of the Federer spectral sequence which will be used later. As one of its first applications, we will obtain a relative version of the Hopf-Whitney theorem. Recall that the $A$-homotopy groups of a (pointed) topological space $X$ are defined by $\pi_n^A(X)=[\Sigma^n A,X]$, that is, the (pointed) homotopy classes of maps from $\Sigma^n A$ to $X$. Similarly, the relative $A$-homotopy groups of a (pointed) topological pair $(Y,B)$ are defined by $\pi_n^A(Y,B)=[(\cn \Sigma^{n-1} A,\Sigma^{n-1} A),(Y,B)]$. Now we state and prove the main result of this section. \begin{theo} \label{theo_relative_Federer_Spectral_Sequence} Let $(Y,B)$ be a topological pair such that $\pi_2(Y,B)$ is an abelian group and let $A$ be a finite-dimensional (and path-connected) CW-complex. Then there exists a homological spectral sequence $\{E^a_{p,q}\}_{a\geq 1}$, with $E^2_{p,q}$ satisfying \begin{itemize} \item $E^2_{p,q}\cong H^{-p}(A;\pi_q(Y,B))$ for $p+q\geq 2$ and $p\leq -1$. \item $E^2_{p,q}$ is isomorphic to a subgroup of $H^{-p}(A;\pi_{q}(Y,B))$ if $p+q=1$ and $p\leq -1$. \item $E^2_{p,q}=0$ if $p+q\leq 0$ or $p\geq 0$. \end{itemize} which converges to $\pi_{p+q}^A(Y,B)$ for $p+q\geq 2$. \end{theo} We will call $\{E^a_{p,q}\}_{a\geq 1}$ the \emph{relative Federer spectral sequence associated to $A$ and $(Y,B)$}. \begin{proof} We may suppose that $A$ has only one $0$-cell. For $r\leq -1$, let $A^r=\ast$, and for $r\in\N$, let $J_r$ be an index set for the $r$-cells of $A$.
For $\alpha \in J_r$ let $g^r_\alpha$ be the attaching map of the cell $e^r_\alpha$. For $r\in\mathbb{N}$ let $\displaystyle Z_r= \bigvee_{J_r}S^r \cong A^r/A^{r-1}$. The long exact sequences associated to the cofiber sequences $\displaystyle A^{r-1}\to A^r \overset{\overline{q}_r}{\to} Z_r$, $r\in\mathbb{N}$, may be extended as follows \begin{displaymath} \xymatrix@C=12pt{\ldots \ar[r] & \pi^{A^{r-1}}_2(Y,B) \ar[r]^(.53){\partial_r} & \pi^{Z_r}_1(Y,B) \ar[r]^(.48){\eta} & \dfrac{\pi^{Z_r}_1(Y,B)}{\im\partial_r} \ar[r]^(.45){0} & \dfrac{\pi^{Z_{r-1}}_1(Y,B)}{\im\partial_{r-1}} \ar[r]^{\textrm{id}} & \dfrac{\pi^{Z_{r-1}}_1(Y,B)}{\im\partial_{r-1}} \ar[r] & 0} \end{displaymath} where $\eta$ is the quotient map. These extended exact sequences yield an exact couple $(A_0,E_0,i,j,k)$ where the bigraded groups $\displaystyle A_0 = \bigoplus_{p,q \in \Z} A^1_{p,q}$ and $\displaystyle E_0 = \bigoplus_{p,q \in \Z} E^1_{p,q}$ are defined by \begin{displaymath} A^1_{p,q}=\left\{\begin{array}{ll}\pi_{p+q+1}^{A^{-p-1}}(Y,B) & \textrm{if $p+q\geq 1$} \\ \pi_1^{Z_{-p-1}}(Y,B)/\im \partial_{-p-1} & \textrm{if $p+q=0$} \\ 0 & \textrm{if $p+q\leq -1$} \end{array}\right. \end{displaymath} and \begin{displaymath} E^1_{p,q}=\left\{\begin{array}{ll}\pi_{p+q}^{Z_{-p}}(Y,B) & \textrm{if $p+q\geq 1$} \\ \pi_1^{Z_{-p-1}}(Y,B)/\im \partial_{-p-1} & \textrm{if $p+q=0$} \\ 0 & \textrm{if $p+q\leq -1$} \end{array}\right. \end{displaymath} Note that all these groups are abelian, except perhaps for $\pi_2^{A^r}(Y,B)$, $r\in\N$. Hence, $E_0$ is an abelian group. Therefore, the exact couple $(A_0,E_0,i,j,k)$ induces a spectral sequence $(E^a_{p,q})_{p,q}$, $a\geq 1$, which converges to $\pi_n^A(Y,B)$ for $n\geq 2$ since $A$ is finite-dimensional. 
Note also that $$\displaystyle E^1_{p,q} = \pi_{p+q}^{Z_{-p}}(Y,B) = \prod_{J_{-p}} \pi_q(Y,B) \cong C^{-p}(A;\pi_q(Y,B))$$ for $p+q\geq 1$ and $p\leq -1$, where $C^{\ast}(A;\pi_q(Y,B))$ denotes the cellular cohomology complex of $A$ with coefficients in $\pi_q(Y,B)$. The isomorphism $\gamma:E^1_{p,q}=\pi_{p+q}^{Z_{-p}}(Y,B)\to C^{-p}(A;\pi_q(Y,B))$ is given by $$\gamma([f])(e_\alpha^{-p})=[f\circ\cn\Sigma^{p+q-1}i_\alpha]$$ where $i_\alpha:S^{-p}\to Z_{-p}$ denotes the inclusion in the $\alpha$-th copy of $S^{-p}$. Note also that $E^2_{p,q}=0$ if $p+q\leq 0$ or $p\geq 0$. We wish to prove now that $E^2_{p,q}\cong H^{-p}(A;\pi_q(Y,B))$ for $p+q\geq 2$ and $p\leq -1$. We consider the morphism $\delta:E^1_{p,q}\cong C^{-p}(A;\pi_q(Y,B))\to E^1_{p-1,q}\cong C^{-p+1}(A;\pi_q(Y,B))$ coming from the spectral sequence. We will prove that $\delta=d^\ast$ for $n= p+q\geq 2$ and $p\leq -1$, where $d^\ast$ is the cellular boundary map. This is equivalent to saying that the following diagram commutes \begin{displaymath} \xymatrix{\pi^{Z_{p'}}_{n}(Y,B) \ar[r]^{(\overline{q}_{p'})^\ast} \ar[d]^\cong_\gamma & \pi^{A^{p'}}_{n}(Y,B) \ar[r]^(.45){(\underset{J_{p'+1}}{+} g_\beta^{p'+1})^\ast } & \pi^{Z_{p'+1}}_{n-1}(Y,B) \ar[d]^\cong_\gamma \\ C^{p'}(A;\pi_{n+p'}(Y,B)) \ar[rr]^{d^\ast} & & C^{p'+1}(A;\pi_{n+p'}(Y,B)) } \end{displaymath} where $p'=-p$. If $[h]\in \pi^{Z_{p'}}_{n}(Y,B)$ and $e_\alpha^{p'+1}$ is a $(p'+1)$-cell of $A$, then \begin{displaymath} \begin{array}{rcl} \left(\gamma(\underset{\beta\in J_{p'+1}}{+} g_\beta^{p'+1})^\ast q^\ast(h)\right)(e_\alpha^{p'+1}) & = & \gamma(h\cn\Sigma^{n-1} q(\underset{\beta\in J_{p'+1}}{+} \cn\Sigma^{n-1} g_\beta^{p'+1}))(e_\alpha^{p'+1}) = \\ & = & [h\cn\Sigma^{n-1} q\cn\Sigma^{n-1} g_\alpha^{p'+1}]. 
\end{array} \end{displaymath} On the other hand, \begin{displaymath} \begin{array}{rcl} \displaystyle d^{\ast}(\gamma([h]))(e_\alpha^{p'+1}) & = & \displaystyle (\gamma([h]))(d(e_\alpha^{p'+1})) = \sum_{\beta\in J_{p'}}\deg(q_\beta g^{p'+1}_\alpha)(\gamma([h]))(e_\beta^{p'})= \\ & = & \displaystyle \sum_{\beta\in J_{p'}}\deg(q_\beta g^{p'+1}_\alpha)[h\cn \Sigma^{n-1} i_\beta] \end{array} \end{displaymath} where $q_\beta:A^{p'}\to S^{p'}$ is the map that collapses $A^{p'}-e^{p'}_\beta$ to a point. Let $r=n-1+p'$. Since the morphism $\displaystyle \bigoplus_{\beta\in J_{p'}} (\Sigma^{n-1} q_\beta)_\ast$ is the inverse of the isomorphism $\displaystyle \bigoplus_{\beta\in J_{p'}} (\Sigma^{n-1} i_\beta)_\ast:\bigoplus_{\beta\in J_{p'}}\pi_r(S^{r})\to\pi_r(\Sigma^{n-1}Z_{p'})$ we obtain that \begin{displaymath} \begin{array}{lcl} [\Sigma^{n-1}\overline{q}_{p'}\Sigma^{n-1}g_{\alpha}^{p'+1}] & = & \displaystyle \bigoplus_{\beta\in J_{p'}} (\Sigma^{n-1} i_\beta)_\ast ( \bigoplus_{\beta\in J_{p'}} (\Sigma^{n-1} q_\beta)_\ast([\Sigma^{n-1}\overline{q}_{p'}\Sigma^{n-1}g_{\alpha}^{p'+1}])) = \\ & = & \displaystyle \bigoplus_{\beta\in J_{p'}} (\Sigma^{n-1} i_\beta)_\ast (\{[\Sigma^{n-1} q_\beta \Sigma^{n-1} g_{\alpha}^{p'+1}]\}_{\beta\in J_{p'}}) = \\ & = & \displaystyle \sum_{\beta\in J_{p'}}[\Sigma^{n-1} i_\beta \Sigma^{n-1} q_\beta \Sigma^{n-1} g_{\alpha}^{p'+1}]. 
\end{array} \end{displaymath} Hence, \begin{displaymath} \begin{array}{rcl} [h\cn \Sigma^{n-1} \overline{q}_{p'}\cn \Sigma^{n-1} g_\alpha^{p'+1}] & = & \displaystyle h_\ast([\cn \Sigma^{n-1} \overline{q}_{p'}g_\alpha^{p'+1}]) = h_\ast(\cn \Sigma^{n-1}(\sum_{\beta\in J_{p'}}[i_\beta q_\beta g_\alpha^{p'+1}])) = \\ & = & \displaystyle \sum_{\beta\in J_{p'}}[h \cn \Sigma^{n-1} (i_\beta q_\beta g_\alpha^{p'+1})] = \\ & = & \displaystyle \sum_{\beta\in J_{p'}}[h \cn \Sigma^{n-1} i_\beta][\cn \Sigma^{n-1} (q_\beta g^{p'+1}_\alpha)] = \\ & = & \displaystyle \sum_{\beta\in J_{p'}}\deg(q_\beta g^{p'+1}_\alpha)[h\cn \Sigma^{n-1}i_\beta]. \end{array} \end{displaymath} It follows that $E^2_{p,q}\cong H^{-p}(A;\pi_q(Y,B))$ for $p+q\geq 2$ and $p\leq -1$. The same argument works for the case $p+q=1$, $p\leq -2$, and we obtain a commutative diagram \begin{displaymath} \xymatrix@C=40pt{\pi^{Z_{p'}}_{1}(Y,B) \ar[r]^{(\overline{q}_{p'})^\ast} \ar[d]^\cong_\gamma & \pi^{A^{p'}}_{1}(Y,B) \ar[r] ^(.45){(\underset{J_{p'+1}}{+} g_\beta^{p'+1})^\ast } & \pi^{\underset{J_{p'+1}}{\bigvee}S^{p'}}_{1}\!\!\!(Y,B)_{\phantom{\underset{J_{p'}}{\bigvee}S^{p'}}} \ar[d]^\cong \\ C^{p'}(A;\pi_{p'+1}(Y,B)) \ar[rr]^{d^\ast} & & C^{p'+1}(A;\pi_{p'+1}(Y,B)) \ar@{-}[u]+<0 pt,-9 pt>} \end{displaymath} Then $E^2_{p,q}=\ker d^1_{p,q}/\im d^1_{p+1,q}=\im \partial_{-p}/\im d^1_{p+1,q}$. By exactness, $\im \partial_{-p}=\ker q^\ast$. Thus, $\im \partial_{-p}\subseteq \ker d^\ast$ since the previous diagram commutes. Moreover, if $p\leq -2$, by the previous case, the map $d^1_{p+1,q}$ coincides, up to isomorphisms, with the map $d^\ast:C^{p'-1}(A;\pi_{p'+1}(Y,B)) \to C^{p'}(A;\pi_{p'+1}(Y,B))$, and in case $p=-1$, both maps are trivial. Therefore, $E^2_{p,q}$ is isomorphic to a subgroup of $H^{-p}(A;\pi_{q}(Y,B))$ if $p+q=1$ and $p\leq -1$. \end{proof} Of course, applying this theorem to the topological pair $(CY,Y)$ we obtain the following absolute version, which is similar to Federer's result.
\begin{coro} \label{theo_Federer_Spectral_Sequence} Let $Y$ be a topological space with abelian fundamental group and let $A$ be a finite-dimensional (and path-connected) CW-complex. Then there exists a homological spectral sequence $\{E^a_{p,q}\}_{a\geq 1}$, with $E^2_{p,q}$ satisfying \begin{itemize} \item $E^2_{p,q}\cong H^{-p}(A;\pi_q(Y))$ for $p+q\geq 1$ and $p\leq -1$. \item $E^2_{p,q}$ is isomorphic to a subgroup of $H^{-p}(A;\pi_{q}(Y))$ if $p+q=0$ and $p\leq -1$. \item $E^2_{p,q}=0$ if $p+q<0$ or $p\geq 0$. \end{itemize} which converges to $\pi_{p+q}^A(Y)$ for $p+q\geq 1$. \end{coro} We will call $\{E^a_{p,q}\}_{a\geq 1}$ the \emph{Federer spectral sequence associated to $A$ and $Y$}. \bigskip Note that the relative version of the Federer spectral sequence is natural in the following sense. If $A$ is a finite-dimensional CW-complex, $(Y,B)$ and $(Y',B')$ are topological pairs such that the groups $\pi_2(Y,B)$ and $\pi_2(Y',B')$ are abelian and $f:(Y,B)\to (Y',B')$ is a continuous map, then $f$ induces a morphism between the relative Federer spectral sequence associated to $A$ and $(Y,B)$ and the one associated to $A$ and $(Y',B')$. Indeed, $f$ induces morphisms between the extended long exact sequences of the proof above and hence a morphism between the exact couples involved, which gives rise to the morphism between the spectral sequences mentioned above. Moreover, if $A'$ is another finite-dimensional CW-complex and $g:A\to A'$ is a cellular map, then $g$ also induces morphisms between the extended long exact sequences mentioned before and therefore, a morphism between the relative Federer spectral sequence associated to $A$ and $(Y,B)$ and the one associated to $A'$ and $(Y,B)$. If $g$ is not cellular we may replace it by a homotopic cellular map to obtain the induced morphism. Of course, by the description of the second page of our spectral sequence, the map $g$ itself will also induce the same morphism from page two onwards. 
Clearly, the same holds for the absolute version. \begin{rem} \ \noindent (1) Looking at the extended exact sequences of the proof of \ref{theo_relative_Federer_Spectral_Sequence} we obtain that the relative Federer spectral sequence converges to the trivial group in degree $1$. Thus, the groups $E^2_{p,q}$, with $p+q=1$, become all trivial in $E^\infty$. \noindent (2) As we have mentioned above, the spectral sequence given by Federer in \cite{Fed} is similar to our absolute version. But since we work with pointed topological spaces our version enables us to compute homotopy groups of function spaces only when the base point is the constant map. However, the hypothesis we require on the space $Y$ ($\pi_1(Y)$ is abelian) is weaker than Federer's ($\pi_1(Y)$ acts trivially on $\pi_n(Y)$ for all $n\in\mathbb{N}$). Moreover, our approach in terms of $A$-homotopy groups admits the relative version given before. \end{rem} As a simple example of application of the Federer spectral sequence, consider the following, which is a reformulation of a well-known result for homotopy groups with coefficients. \begin{ex} \label{ex_A_homotopy} If $A$ is a Moore space of type $(G,m)$ (with $G$ finitely generated) and $X$ is a path-connected topological space with abelian fundamental group, in the Federer spectral sequence we get $$E^2_{-p,q}=\left\{\begin{array}{cl}\hom(G,\pi_q(X)) & \textrm{if $p=m$} \\ \ext(G,\pi_{q}(X)) & \textrm{if $p=m+1$} \\ 0 & \textrm{otherwise} \end{array} \right. \qquad \qquad \textrm{for $-p+q\geq 1$.}$$ Hence, from the corresponding filtrations, we deduce that, for $n\geq 1$, there are short exact sequences of groups \begin{displaymath} \xymatrix{0 \ar[r] & \ext(G,\pi_{n+m+1}(X)) \ar[r] & \pi_n^A(X) \ar[r] & \hom(G,\pi_{n+m}(X)) \ar[r] & 0} \end{displaymath} As a corollary, if $G$ is a finite group of exponent $r$, then $\alpha^{r^2}=0$ for every $\alpha\in\pi_n^A(X)$.
For example, if $X$ is a path-connected topological space with abelian fundamental group, then every element in $\pi_n^{\mathbb{P}^2}(X)$ ($n\geq 1$) has order 1, 2 or 4. \end{ex} We will now apply \ref{theo_Federer_Spectral_Sequence} to obtain an extension of the Hopf-Whitney theorem. \begin{theo} \label{Hopf-Whitney} Let $K$ be a path-connected CW-complex of dimension $n\geq 2$ and let $Y$ be $(n-1)$-connected. Then there exists a bijection $[K,Y]\leftrightarrow H^n(K;\pi_n(Y))$. In addition, if $K$ is the suspension of a path-connected CW-complex (or if $Y$ is a loop space), then the groups $[K,Y]$ and $H^n(K;\pi_n(Y))$ are isomorphic. Moreover, this isomorphism is natural in $K$ and in $Y$. \end{theo} \begin{proof} The first part is the Hopf-Whitney theorem (cf. \cite{MT}). The second part can be proved easily by means of the Federer spectral sequence. Concretely, suppose that $K=\Sigma K'$ with $K'$ path-connected. Let $\{E^a_{p,q}\}$ denote the Federer spectral sequence associated to $K'$ and $Y$. Then $E^2_{p,q}=0$ for $q\leq n-1$ since $Y$ is $(n-1)$-connected, and $E^2_{p,q}=0$ for $p\leq -n$ since $\dim K'=n-1$. Hence, $E^2_{-(n-1),n}\cong H^{n-1}(K';\pi_n(Y))$ survives to $E^\infty$. As it is the only nonzero entry in the diagonal $p+q=1$ of $E^2$ it follows that $$[K,Y]\cong \pi_1^{K'}(Y) \cong E^2_{-(n-1),n} \cong H^{n-1}(K';\pi_n(Y)) \cong H^n(K;\pi_n(Y)).$$ Finally, naturality follows from naturality of the Federer spectral sequence. \end{proof} In a similar way, from theorem \ref{theo_relative_Federer_Spectral_Sequence} we obtain the following relative version of the Hopf-Whitney theorem, which is not only interesting for its own sake but will also be important for our purposes. \begin{theo} \label{relative_Hopf-Whitney} Let $K$ be the suspension of a path-connected CW-complex of dimension $n-1\geq 1$ and let $(Y,B)$ be an $n$-connected topological pair.
Then there exists an isomorphism of groups $$[(\cn K,K);(Y,B)]\leftrightarrow H^n(K;\pi_{n+1}(Y,B))$$ which is natural in $K$ and in $(Y,B)$. \end{theo} \begin{proof} Suppose that $K=\Sigma K'$ with $K'$ path-connected. Let $\{E^a_{p,q}\}$ denote the relative Federer spectral sequence associated to $K'$ and $(Y,B)$. Then $E^2_{p,q}=0$ for $q\leq n$ since $(Y,B)$ is $n$-connected, and $E^2_{p,q}=0$ for $p\leq -n$ since $\dim K'=n-1$. Hence, $E^2_{-(n-1),n+1}=H^{n-1}(K';\pi_{n+1}(Y,B))$ survives to $E^\infty$. As it is the only nonzero entry in the diagonal $p+q=2$ of $E^2$ it follows that \begin{displaymath} \begin{array}{rcl} [(CK,K);(Y,B)] & = & \pi_2^{K'}(Y,B) \cong E^2_{-(n-1),n+1} \cong H^{n-1}(K';\pi_{n+1}(Y,B)) \cong \\ & \cong & H^n(K;\pi_{n+1}(Y,B)). \end{array} \end{displaymath} Naturality follows again from naturality of the Federer spectral sequence. \end{proof} We will now give another application of \ref{theo_Federer_Spectral_Sequence}. We will denote by $\mathcal{T}_{\mathcal{P}}$ the class of torsion abelian groups whose elements have orders which are divisible only by primes in a set $\mathcal{P}$ of prime numbers. \begin{prop} \label{A-homotopy} Let $A$ be a finite-dimensional CW-complex such that $H_n(A)$ is finitely generated for all $n\in\N$ and let $X$ be a path-connected topological space such that $\pi_1(X)$ is abelian. If $H_n(A)\in \mathcal{T}_{\mathcal{P}}$ for all $n\in\mathbb{N}$ then $\pi_n^A(X) \in \mathcal{T}_{\mathcal{P}}$ for all $n\in\mathbb{N}$. \end{prop} \begin{proof} By \ref{theo_Federer_Spectral_Sequence}, it suffices to prove that $H^{-p}(A;\pi_q(X))\in \mathcal{T}_{\mathcal{P}}$ for all $p,q\in \mathbb{Z}$ such that $p+q\geq 0$ and $p\leq -1$.
By the universal coefficient theorem $$H^{-p}(A;\pi_q(X))\cong \hom (H_{-p}(A),\pi_q(X)) \oplus \ext (H_{-p-1}(A),\pi_q(X)).$$ Since $A$ is $\mathcal{T}_{\mathcal{P}}$-acyclic and $H_n(A)$ is finitely generated for all $n\in\N$ it follows that $\hom (H_{-p}(A),\pi_q(X))\in \mathcal{T}_{\mathcal{P}}$ and $\ext (H_{-p-1}(A),\pi_q(X))\in \mathcal{T}_{\mathcal{P}}$ for all $p\leq -1$ and $q\geq 0$. Thus, $H^{-p}(A;\pi_q(X)) \in \mathcal{T}_{\mathcal{P}}$ for all $p\leq -1$ and $q\geq 0$. \end{proof} \section{$A$-homology} \label{section_A_homology} In this section we define an `\emph{$A$-shaped}' reduced homology theory, which we call $A$-homology and which coincides with the singular homology theory in case $A=S^0$. Our definition enables us to obtain generalizations of several classical results. For example, we prove a Hurewicz-type theorem (\ref{Hurewicz_for_A-homology}) relating the $A$-homotopy groups with the $A$-homology groups. We also give a homological version of the Whitehead theorem which states that, under certain hypotheses, a map between CW-complexes which induces isomorphisms in the $A$-homology groups is a homotopy equivalence (\ref{A-homology_isomorphism}). Finally, we define a Hurewicz-type map between the $A$-homotopy groups and the $A$-homology groups and embed it in a long exact sequence \ref{A-Whitehead_exact_sequence} generalizing the Whitehead exact sequence \cite{Whi}. \medskip We begin with a simple remark which will be used later. \begin{rem} Let $p:(E,e_0)\to (B,b_0)$ be a quasifibration, let $F=p^{-1}(b_0)$ and let $A$ be a CW-complex. Since $p$ induces isomorphisms $p_\ast:\pi_i(E,F,e_0)\to \pi_i(B,b_0)$ for all $i\in\mathbb{N}$ and $\pi_i(E,F,e_0)\cong \pi_{i-1}(P(E,e_0,F),c_{e_0})$ and $\pi_i(B,b_0)\cong \pi_{i-1}(\Omega B,c_{b_0})$ it follows that the induced map $\hat{p}:(P(E,e_0,F),c_{e_0})\to (\Omega B,c_{b_0})$ is a weak equivalence. 
Thus, $\hat{p}$ induces isomorphisms $\hat{p}_\ast:\pi_i^A(P(E,e_0,F),c_{e_0})\to \pi_i^A(\Omega B,c_{b_0})$. Since $\pi_i^A(E,F,e_0)\cong \pi_{i-1}^A(P(E,e_0,F),c_{e_0})$ and $\pi_i^A(B,b_0)\cong \pi_{i-1}^A(\Omega B,c_{b_0})$ we obtain that $p_\ast:\pi_i^A(E,F,e_0)\to \pi_i^A(B,b_0)$ is an isomorphism for all $i\in\mathbb{N}$. \end{rem} Our definition of $A$-homology groups is inspired by the Dold-Thom theorem. \begin{definition} Let $A$ be a CW-complex and let $X$ be a topological space. For $n\in\mathbb{N}_0$ we define the \emph{$n$-th $A$-homology group of $X$} as $$H_n^A(X)=\pi_n^A(SP(X))$$ where $SP(X)$ denotes the infinite symmetric product of $X$. \end{definition} \begin{theo} The functor $H_\ast^A(\_)$ defines a reduced homology theory on the category of (path-connected) CW-complexes. \end{theo} \begin{proof} It is clear that $H_\ast^A(\_)$ is a homotopy functor. If $(X,B,x_0)$ is a pointed CW-pair, then by the Dold-Thom theorem, the quotient map $q:X\to X/B$ induces a quasifibration $\hat{q}:SP(X)\to SP(X/B)$ whose fiber is homotopy equivalent to $SP(B)$. Since $A$ is a CW-complex there is a long exact sequence \begin{displaymath} \xymatrix@C=20pt{\ldots \ar[r] & \pi_n^A(SP(B)) \ar[r] & \pi_n^A(SP(X)) \ar[r] & \pi_n^A(SP(X/B)) \ar[r] & \pi_{n-1}^A(SP(B)) \ar[r] & \ldots} \end{displaymath} It remains to show that there exists a natural isomorphism $H_n^A(X)\cong H_{n+1}^A(\Sigma X)$ and that $H_n^A(X)$ are abelian groups for $n=0,1$. The natural isomorphism follows from the long exact sequence above, applied to the CW-pair $(\cn X,X)$. Note that $H_n^A(\cn X)=0$ since $\cn X$ is contractible and $H_n^A$ is a homotopy functor. The second part follows immediately, since $H_0^A(X)\cong H_{1}^A(\Sigma X)\cong H_{2}^A(\Sigma^2 X)$. The group structure on $H_0^A(X)$ is induced from the one on $H_1^A(\Sigma X)$ by the corresponding natural isomorphism.
\end{proof} The proof above encourages us to define the relative $A$-homology groups of a CW-pair $(X,B)$ by $H^A_n(X,B)=\pi^A_n(SP(X/B))$ for $n\geq 1$. As shown before, there exist long exact sequences of $A$-homology groups associated to a CW-pair $(X,B)$. Federer's spectral sequence can be applied as a first method of computation of $A$-homology groups. Indeed, given a finite CW-complex $A$ and a CW-complex $X$, the associated Federer spectral sequence $\{E^a_{p,q}\}$ converges to the $A$-homotopy groups of $SP(X)$ (note that $\pi_1(SP(X))$ is abelian). In this case we obtain that $E^2_{p,q}=H^{-p}(A,\pi_q(SP(X)))=H^{-p}(A,H_q(X))$ if $p+q\geq 1$ and $p\leq -1$. Moreover, we will later give an explicit formula to compute $A$-homology groups of CW-complexes. We now exhibit some examples. \begin{ex} \label{ex_1} If $A$ is a finite-dimensional CW-complex and $X$ is a Moore space of type $(G,n)$ then $SP(X)$ is an Eilenberg-Mac Lane space of the same type. Hence, by the Federer spectral sequence $$H_r^A(X)=\pi_r^A(SP(X))=H^{n-r}(A,\pi_n(SP(X)))=H^{n-r}(A,G) \qquad \textrm{for $r\geq 1$}.$$ In particular, $H_r^A(S^n)=H^{n-r}(A,\Z)$. We also deduce that if $X$ is a Moore space of type $(G,n)$ and $A$ is $(n-1)$-connected, then $H_r^A(X)=0$ for all $r\geq 1$. \end{ex} \begin{ex} \label{ex_2} Let $A$ be a Moore space of type $(G,m)$ (with $G$ finitely generated) and let $X$ be a path-connected CW-complex. As in example \ref{ex_A_homotopy}, for $n\geq 1$, there are short exact sequences of abelian groups \begin{displaymath} \xymatrix{0 \ar[r] & \ext(G,H_{n+m+1}(X)) \ar[r] & H_n^A(X) \ar[r] & \hom(G,H_{n+m}(X)) \ar[r] & 0} \end{displaymath} As a consequence, if $G$ is a finite group of exponent $r$, then $\alpha^{r^2}=0$ for every $\alpha\in H_n^A(X)$. \end{ex} It is well known that if a CW-complex does not have cells of a certain dimension $j$, then its $j$-th homology group vanishes. As one should expect, a similar result holds for the $A$-homology groups.
Concretely, if $A$ is an $l$-connected CW-complex of dimension $k$ and $X$ is a CW-complex, applying the Federer spectral sequence to the space $SP(X)$ one can obtain that: \begin{enumerate} \item If $\dim(X)=m$, then $H_r^A(X)=0$ for $r\geq m-l$. \item If $X$ does not have cells of dimension less than $m'$, then $H_r^A(X)=0$ for $r\leq m'+l-k$. \end{enumerate} Following the idea of example \ref{ex_1} we will now show an explicit formula to compute $A$-homology groups. \begin{prop} Let $A$ be a finite-dimensional CW-complex and let $X$ be a connected CW-complex. Then for every $n\in\N_0$, $\displaystyle H^A_n(X)=\bigoplus_{j \in \mathbb{N}} H^{j-n} (A,H_j(X))$. \end{prop} \begin{proof} Since $SP(X)$ has the weak homotopy type of $\displaystyle \prod_{n \in \mathbb{N}} K(H_n(X),n)$ and $A$ is a CW-complex we obtain that \begin{displaymath} \begin{array}{lcl} H^A_n(X) & = & \displaystyle \pi^A_n(SP(X)) \cong \pi^A_n(\prod_{j \in \mathbb{N}} K(H_j(X),j)) \cong \prod_{j \in \mathbb{N}} \pi^A_n( K(H_j(X),j)) \cong \\ & \cong & \displaystyle \prod_{j \in \mathbb{N}} H^{j-n} (A,H_j(X)) = \bigoplus_{j \in \mathbb{N}} H^{j-n} (A,H_j(X)) \end{array} \end{displaymath} where the last isomorphism follows from the Federer spectral sequence. \end{proof} Now we show that, in case $A$ is compact, $H_\ast^A$ satisfies the wedge axiom. This can be proved in two different ways: using the definition of $A$-homotopy groups or using the above formula. We choose the first one. \begin{prop} Let $A$ be a finite CW-complex, and let $\{X_i\}_{i\in I}$ be a collection of CW-complexes. Then $$H_n^A\left(\bigvee_{i\in I} X_i \right)=\bigoplus_{i\in I}H_n^A(X_i).$$ \end{prop} \begin{proof} The space $SP(\underset{i \in I}{\bigvee} X_i)$ is homeomorphic to $\underset{i \in I}{\prod}^w SP(X_i)$ with the weak product topology, i.e. $\underset{i \in I}{\prod}^w SP(X_i)$ is the colimit of the products of finitely many factors.
Since $A$ is compact, $\pi_n^A(\underset{i \in I}{\prod}^w SP(X_i))\cong \underset{i \in I}{\bigoplus} \pi_n^A(SP(X_i))$ and the result follows. \end{proof} We now prove some of the main results of this article. We begin with a Hurewicz-type theorem relating the $A$-homology groups with the $A$-homotopy groups. \begin{theo} \label{relative_Hurewicz_for_A-homology} Let $A$ be the suspension of a path-connected CW-complex of dimension $k-1\geq 1$ and let $(X,B)$ be an $n$-connected CW-pair with $n\geq k$. Suppose, in addition, that $B$ is simply-connected and non-empty. Then $H_r^A(X,B)=0$ for $r\leq n-k$ and $\pi_{n-k+1}^A(X,B)\cong H_{n-k+1}^A(X,B)$. \end{theo} \begin{proof} By the Hurewicz theorem, $H_r(X,B)=0$ for $r\leq n$ and $H_{n+1}(X,B)\cong\pi_{n+1}(X,B)$. Since $(X,B)$ is a CW-pair, by the Dold-Thom theorem we obtain that $\pi_r(SP(X/B))\cong H_r(X/B) \cong H_r(X,B) = 0$ for $r\leq n$. Since $A$ is a CW-complex of dimension $k \leq n$, it follows that $H^A_r(X,B)=\pi^A_r(SP(X/B))=0$ for $r\leq n-k$. Also, \begin{displaymath} \begin{array}{lcl} \pi_{n-k+1}^A(X,B) & = & [(\cn\Sigma^{n-k}A,\Sigma^{n-k}A);(X,B)] \cong H^{n}(\Sigma^{n-k}A,\pi_{n+1}(X,B)) \cong \\ & \cong & H^{n}(\Sigma^{n-k}A,H_{n+1}(X,B)) \cong H^{n+1}(\Sigma^{n-k+1}A,\pi_{n+1}(SP(X/B))) \cong \\ & \cong & [\Sigma^{n-k+1}A,SP(X/B)] = \pi_{n-k+1}^A(SP(X/B)) = H_{n-k+1}^A(X,B) \end{array} \end{displaymath} where the first and fourth isomorphisms hold by \ref{relative_Hopf-Whitney} and \ref{Hopf-Whitney} respectively. \end{proof} Moreover, by naturality of \ref{Hopf-Whitney} and \ref{relative_Hopf-Whitney} it follows that the isomorphism above is the morphism induced in $\pi_n^A$ by the map which is the composition of the quotient map $(X,B)\to(X/B,\ast)$ with the inclusion map $(X/B,\ast) \to (SP(X/B),\ast)$. \medskip Clearly, from this relative $A$-Hurewicz theorem we can deduce the following absolute version.
\begin{theo} \label{Hurewicz_for_A-homology} Let $A$ be the suspension of a path-connected CW-complex with $\dim A = k\geq 2$ and let $X$ be an $n$-connected CW-complex with $n\geq k$. Then $H_r^A(X)=0$ for $r\leq n-k$ and $\pi_{n-k+1}^A(X)\cong H_{n-k+1}^A(X)$. Moreover, the morphism $i_\ast:\pi_{n-k+1}^A(X)\to \pi_{n-k+1}^A(SP(X))=H_{n-k+1}^A(X)$ induced by the inclusion map $i:X\to SP(X)$ is an isomorphism. \end{theo} Thus, the morphism $i_\ast:\pi_n^A(X)\to \pi_n^A(SP(X))$ can be thought of as a Hurewicz-type map and will be called the \emph{$A$-Hurewicz homomorphism}. Not only is it natural, but it can also be embedded in a long exact sequence, as we will show later (\ref{A-Whitehead_exact_sequence}). \medskip We now give a homological version of the Whitehead theorem, which states that, under certain hypotheses, a continuous map between CW-complexes inducing isomorphisms in $A$-homology groups is a homotopy equivalence. \begin{theo} \label{A-homology_isomorphism} Let $A'$ be a path-connected and locally compact CW-complex of dimension $k-1\geq 0$ such that $H_{k-1}(A')\neq 0$ and let $A=\Sigma A'$. Let $f:X\to Y$ be a continuous map between simply-connected CW-complexes which induces isomorphisms $f_\ast:H^A_r(X)\to H^A_r(Y)$ for all $r\in \N$ and $f_\ast:\pi_i(X)\to \pi_i(Y)$ for all $i\leq k+1$. Then $f$ is a homotopy equivalence. \end{theo} \begin{proof} Replacing $Y$ by the mapping cylinder of $f$, we may suppose that $f$ is an inclusion map and hence $(Y,X)$ is $(k+1)$-connected. We will prove by induction that $(Y,X)$ is $n$-connected for all $n\in\mathbb{N}$. Suppose that $(Y,X)$ is $n$-connected for some $n\geq k+1$. Then $$\pi_{n-k}^A(Y,X)=[(\cn \Sigma^{n-k-1}A,\Sigma^{n-k-1}A),(Y,X)]=0$$ because $\dim (\cn \Sigma^{n-k-1}A)=n$.
Moreover, by \ref{relative_Hurewicz_for_A-homology} we obtain that $$\pi_{n-k+1}^A(Y,X)\cong H_{n-k+1}^A(Y,X)=0.$$ Now, by \ref{relative_Hopf-Whitney}, \begin{displaymath} \begin{array}{lcl} 0 & = & \pi_{n-k+1}^A(Y,X)=H^{n}(\Sigma^{n-k} A,\pi_{n+1}(Y,X)) = H^{k}(A,\pi_{n+1}(Y,X))= \\ & = & \hom(H_{k}(A),\pi_{n+1}(Y,X))\oplus \ext(H_{k-1}(A),\pi_{n+1}(Y,X)). \end{array} \end{displaymath} Then $\hom(H_k(A),\pi_{n+1}(Y,X))=0$. By the hypotheses on $A'$, the group $H_{k-1}(A')\cong H_k(A)$ is non-trivial, and it is free since it is the top-dimensional homology group of the $(k-1)$-dimensional complex $A'$; hence it has $\mathbb{Z}$ as a direct summand. Therefore $\pi_{n+1}(Y,X)=0$ and thus $(Y,X)$ is $(n+1)$-connected. Consequently, $(Y,X)$ is $n$-connected for all $n\in\mathbb{N}$. Then the inclusion map $f:X\to Y$ is a weak equivalence and since $X$ and $Y$ are CW-complexes it follows that $f$ is a homotopy equivalence. \end{proof} To finish, we will make use of a modern construction of the exact sequence Whitehead introduced in \cite{Whi} to embed the $A$-Hurewicz homomorphism defined above in a long exact sequence. As a corollary we will obtain another proof of theorem \ref{Hurewicz_for_A-homology} together with an extension of it. A different way to obtain the Whitehead exact sequence is given in \cite{BH}. Let $X$ be a CW-complex and let $\Gamma X$ be the homotopy fiber of the inclusion $i:X\to SP(X)$. Hence, there is a long exact sequence \begin{displaymath} \xymatrix{\ldots \ar[r] & \pi_n^A(\Gamma X) \ar[r] & \pi_n^A(X) \ar[r]^(.42){i_\ast} & \pi_n^A(SP(X)) \ar[r] & \pi_{n-1}^A(\Gamma X) \ar[r] & \ldots } \end{displaymath} and by definition $\pi_n^A(SP(X))=H_n^A(X)$ and $i_\ast$ is the $A$-Hurewicz homomorphism. Thus, we have proved the following. \begin{prop} \label{A-Whitehead_exact_sequence} Let $A$ and $X$ be CW-complexes.
Then, there is a long exact sequence \begin{displaymath} \xymatrix{\ldots \ar[r] & \pi_n^A(\Gamma X) \ar[r] & \pi_n^A(X) \ar[r]^(.5){i_\ast} & H_n^A(X) \ar[r] & \pi_{n-1}^A(\Gamma X) \ar[r] & \ldots } \end{displaymath} \end{prop} Using this long exact sequence we will give another proof of theorem \ref{Hurewicz_for_A-homology}. Recall that in \cite{Whi}, given a CW-complex $Z$, Whitehead defines the group $\Gamma_n(Z)$ as the kernel of the canonical morphism $\pi_n(Z^n)\to \pi_n(Z^n,Z^{n-1})$ which by exactness coincides with the image of the morphism $j_\ast:\pi_n(Z^{n-1})\to \pi_n(Z^n)$, where $j:Z^{n-1}\to Z^n$ is the inclusion map. It is known that $\Gamma_n(Z)\cong\pi_n(\Gamma Z)$. Now let $A$ be a CW-complex of dimension $k\geq 2$ and let $X$ be an $n$-connected topological space with $n\geq k$. Replacing $X$ by a homotopy equivalent CW-complex $Y$ with $Y^n=\ast$, it follows that $\Gamma_r(X)=0$ for $r\leq n+1$. Hence, $\Gamma X$ is $(n+1)$-connected. Therefore, $\pi_r^A(\Gamma X)=0$ for $r\leq n-k+1$. Thus, from the exact sequence above we obtain that $i_\ast:\pi_{n-k+1}^A(X)\to H_{n-k+1}^A(X)$ is an isomorphism. Moreover, $i_\ast:\pi_{n-k+2}^A(X)\to H_{n-k+2}^A(X)$ is an epimorphism. Summing up, we have proved the following. \begin{theo} \label{Hurewicz_for_A-homology_2} Let $A$ be a path-connected CW-complex with $\dim A = k\geq 2$ and let $X$ be an $n$-connected CW-complex with $n\geq k$. Let $i:X\to SP(X)$ be the inclusion map. Then $H_r^A(X)=0$ for $r\leq n-k$, $i_\ast:\pi_{n-k+1}^A(X)\to H_{n-k+1}^A(X)$ is an isomorphism and $i_\ast:\pi_{n-k+2}^A(X)\to H_{n-k+2}^A(X)$ is an epimorphism. \end{theo}
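As an illustrative specialization, take $A=S^{k}$ in \ref{Hurewicz_for_A-homology_2}. In this case $\pi_{r}^{S^{k}}(X)=\pi_{r+k}(X)$ and, by the Dold-Thom theorem, $H_{r}^{S^{k}}(X)=\pi_{r+k}(SP(X))=H_{r+k}(X)$, so the theorem recovers the classical Hurewicz theorem for an $n$-connected CW-complex $X$: the morphism $\pi_{n+1}(X)\to H_{n+1}(X)$ is an isomorphism and $\pi_{n+2}(X)\to H_{n+2}(X)$ is an epimorphism.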
TITLE: Integration - substitution that introduces $i$ into the integrand QUESTION [1 upvotes]: This may turn out to be a trivial question, but is it valid to make a change of variables when calculating an indefinite, real integral that introduces the imaginary unit into the integrand? For example, if I'm trying to evaluate $$\int\frac{1}{\sqrt{y^2-1}}dy,$$ is making the substitution $y=\cos(\theta)$, leading to the integral $$-i\int d\theta,$$ valid? Of course, following the calculation of the above $\theta$ integral we would replace $\theta$ by $\arccos(y)$. REPLY [0 votes]: Yes, you should however be able to inter-convert between circular / hyperbolic / inverse hyperbolic functions and log functions. Continuing from where you left off, let $$ u = -i \cos^{-1} y $$ $$ \cos iu = y = \cosh u$$ $$ u = \cosh^{-1}y =\pm \log(y + \sqrt{y^2-1}) $$ However, it may be more convenient to stick either to the circular or to the hyperbolic regime.
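For what it's worth, the end result is easy to sanity-check numerically. Here is a quick standard-library sketch; `F` below is the candidate antiderivative from the answer, dropping the $\pm$ and the constant of integration:

```python
import math

def integrand(y):
    return 1 / math.sqrt(y * y - 1)

def F(y):
    # candidate antiderivative: log(y + sqrt(y^2 - 1)), i.e. arcosh(y)
    return math.log(y + math.sqrt(y * y - 1))

# the central-difference derivative of F should match the integrand for y > 1
h = 1e-6
for y in (1.5, 2.0, 5.0):
    print(y, (F(y + h) - F(y - h)) / (2 * h), integrand(y))
```

The two columns agree to many decimal places, and `F` coincides with `math.acosh`, matching the hyperbolic route above.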
TITLE: Suppose $2^n$ and $5^n$ start with the same digit $d$, for some $n\ge 1$. Find $d$. QUESTION [0 upvotes]: Suppose $2^n$ and $5^n$ start with the same digit $d$, for some $n\ge 1$. Find $d$. My work: I can manually see that for $n=5$, $2^5$ and $5^5$ start with the same digit $3$, but I could do that as $n$ was small. But how would I do it if I could not find such an $n$ easily? Please help with a mathematical technique and not a manual one. REPLY [0 votes]: The only possible digit is $d=3$. Suppose $2^n$ and $5^n$ both have leading digit $d$. Then there are integers $a,b\ge 0$ with $$d\cdot 10^a \le 2^n < (d+1)\cdot 10^a \quad\text{and}\quad d\cdot 10^b \le 5^n < (d+1)\cdot 10^b.$$ Multiplying these and using $2^n\cdot 5^n=10^n$ gives $$d^2 \le 10^{n-a-b} < (d+1)^2,$$ so the interval $[d^2,(d+1)^2)$ must contain a power of $10$. Checking $d=1,\dots,9$, the only candidates are $[1,4)\ni 1$ and $[9,16)\ni 10$. The case $d=1$ is impossible: $10^{n-a-b}=1$ would force $2^n=10^a$ and $5^n=10^b$ (write $2^n=s\cdot 10^a$ and $5^n=t\cdot 10^b$ with $s,t\in[1,2)$; then $st=1$ forces $s=t=1$), which fails for every $n\ge 1$. Hence $d=3$, attained for example at $n=5$: $2^5=32$ and $5^5=3125$.
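A brute-force scan (standard library only, exact integer arithmetic) is consistent with this: among the first couple of thousand exponents, every $n$ for which $2^n$ and $5^n$ share a leading digit shares the digit $3$:

```python
def leading_digit(m: int) -> int:
    return int(str(m)[0])

matching_digits = set()
matching_n = []
for n in range(1, 2000):
    d2, d5 = leading_digit(2 ** n), leading_digit(5 ** n)
    if d2 == d5:
        matching_digits.add(d2)
        matching_n.append(n)

# n = 5 is the smallest case: 2^5 = 32 and 5^5 = 3125
print(matching_digits, matching_n[:5])
```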
\section{Concluding remarks} In this survey we have covered all known variants of signature-based \grobner{} basis algorithms. We gave a complete classification based on a generic algorithmic framework called \rba{} which can be implemented in various different ways. The variations are based on $3$ different orders: \begin{enumerate} \item $<$ denotes the monomial order as well as the compatible module monomial order. We have seen in Section~\ref{sec:available-implementations} that this order has the biggest impact. \item $\rleq$ denotes the rewrite order. If \rba{} handles various elements of the same signature, only one needs to be further \sreduced{}. The rewrite order gives a unique choice of which element is chosen and which are removed. In Section~\ref{sec:available-implementations} we have seen that the outcomes of using different implementations of $\rleq$, namely $\rleqff$ and $\rleqsb$, are nearly equivalent when it comes to the number of operations. \item $\pleq$ denotes the order in which S-pairs are handled in \rba{}. Nearly all known efficient implementations use $\pleqs$, so S-pairs are handled by increasing signature. \end{enumerate} Thus any known algorithm, like \ff{} or \gvw{}, can be implemented with any of the above $3$ choices, so the differences are rather small. Even though some of those algorithms are presented in a restricted setting, for example \ggv{} for $\potl$ only, they can all be seen as different, specialized implementations of \rba{} and thus are just slight variants of each other, not completely new algorithms as one might assume. We have covered all known variants and given a dictionary for translating the different notations used in the corresponding publications. Thus this survey can also be used as a reference for researchers interested in this topic.
Important aspects when optimizing \rba{} and further open questions are the following: \begin{enumerate} \item Ensuring termination algorithmically as presented in Section~\ref{sec:f5-termination-algorithmically} can lead to earlier termination and thus improved behaviour of the algorithm by using different techniques to detect the completeness of \basis{}. \item Exploiting algebraic structures is a very active area of research at the moment (Section~\ref{sec:exploit-algebraic-structures}). Developments in this direction might have a huge impact on the computation of (signature) \grobner{} bases in the near future and are promising for decreasing the complexity of computations. \item Using linear algebra for the reduction process as illustrated in Section~\ref{sec:f4-f5} is another field where a lot more optimizations can be expected. At the moment, restrictions to \sreductions{} lead to restrictions on swapping rows during the Gaussian elimination. Getting more flexible and possibly being able to use (at least some of) the ideas from~\cite{FL10b} is still an open problem. \item If we are only interested in computing a \grobner{} basis for some input system, can one generalize the usage of signatures and find an intermediate representation between sig-poly pairs $(\sig\alpha,\proj\alpha) \in \module \times \ring$ and full module representations $\alpha \in \module$? Where is the breaking point of using more terms from the module representation in order to interreduce the syzygy elements even further without adding too much overhead in time and memory? \end{enumerate} Even though quite different notations are used by researchers, the algorithms are very much of a kind; mostly they are even just the same. We hope that this survey helps to give a better understanding of signature-based \grobner{} basis algorithms.
Moreover, we would like to give researchers new to this area a guide for finding their way through the enormous number of publications that have been released on this topic over the last years. Furthermore, we hope that this survey encourages experts to collaborate and to push the field of \grobner{} basis computations even further.
TITLE: Let $M:=\{(x,y,z)\in\mathbb R^3 :x^2+y^2=2z^2,z>0\}$ and $f(x,y,z):=(x+y+z)^2e^{-z}, \forall(x,y,z)\in \mathbb R^3$. Find... QUESTION [0 upvotes]: Let $M:=\{(x,y,z)\in\mathbb R^3 :x^2+y^2=2z^2,z>0\}$ and $f(x,y,z):=(x+y+z)^2e^{-z}, \forall(x,y,z)\in \mathbb R^3$. i) Prove that $f$ has an absolute maximum and minimum on $M$. ii) Prove that $M$ is a differentiable manifold in $\mathbb R^3$ and find the absolute and relative extrema of $f$ on $M$. I didn't know how to start with i), so I tried jumping straight into ii), and I had real trouble solving the equations obtained by differentiating $\Phi=(x+y+z)^2e^{-z}-\lambda(x^2+y^2-2z^2)$ with respect to $x,y,z$. I tried starting with the case $\lambda=0$ but I'm unable to finish it. Edit: I only need help with the case $\lambda=0$; I managed to get the case $\lambda \neq 0$. REPLY [0 votes]: The minimum value of $f$ is zero. This should be easy enough to show, as $(x+y+z)^2 \ge 0$ and $e^{-z} > 0$, and the point $\left(\tfrac{\sqrt3-1}{2},\tfrac{-\sqrt3-1}{2},1\right)$ lies in $M$ and satisfies $x+y+z=0$. As for the maximum: My best idea is to choose some value $z = n$, find the maximal value of $f$ under this restriction, call it $g(n)$, and then find the $n$ that maximizes $g$. I suppose we can be more abstract about it. For some $n$ there is an upper bound on $f$ when $(x,y,z) \in M$ and $z > n$. Let $M' = \{(x,y,z): x^2+y^2 = 2z^2, 0\le z\le n\}$. On the domain $M'$ we have a continuous function on a compact (closed and bounded) domain, which must achieve a maximum value by the extreme value theorem. Then show that there is a point on $M'$ where $f(x,y,z)$ is greater than the upper bound you found earlier.
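Not a proof, but a quick numerical scan (standard library only; the parametrization $x=\sqrt2\,z\cos t$, $y=\sqrt2\,z\sin t$ of $M$ is my own bookkeeping here, not part of the question) supports the strategy above and suggests the value of the maximum:

```python
import math

def f(x, y, z):
    return (x + y + z) ** 2 * math.exp(-z)

# points of M can be written as (sqrt(2)*z*cos(t), sqrt(2)*z*sin(t), z), z > 0
best = 0.0
for i in range(1, 2001):          # z in (0, 20]
    z = 0.01 * i
    for j in range(629):          # t in [0, 2*pi)
        t = 0.01 * j
        x = math.sqrt(2) * z * math.cos(t)
        y = math.sqrt(2) * z * math.sin(t)
        best = max(best, f(x, y, z))

print(best, 36 * math.exp(-2))
```

On $M$ one gets $f=z^2\big(\sqrt2(\cos t+\sin t)+1\big)^2e^{-z}\le 9z^2e^{-z}$, which peaks at $z=2$, so the scan lands essentially on $36e^{-2}\approx 4.872$; the minimum $0$ is attained where $x+y+z=0$.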
TITLE: Repeated addition producing $0$ in a finite field QUESTION [2 upvotes]: I'm stuck on the first part of a problem from Topics in Algebra by Herstein: Suppose that $F$ is a field having a finite number of elements. Prove that there is a prime number $p$ such that $\underbrace{a + a + \cdots + a}_{p\text{-times}} = 0$ for all $a \in F$. Any hints to get me started? (Please don't give away the answer.) REPLY [1 votes]: Show that there is some number $p$ such that $\underbrace{1_F+\cdots+1_F}_p=0_F$, where $1_F$ and $0_F$ are respectively the multiplicative and additive identity of $F$. As a hint for this part, consider $F$ under addition to be a finite abelian group. Letting $p$ be the "additive torsion" of $1_F$ in $F$ (called $F$'s characteristic), show that it is prime by deriving a contradiction from the opposite hypothesis. That is, assume $p=ab$ with $a,b\ne1$, and deduce that there are two nonzero elements of $F$ that multiply to zero (clearly you want to construct these two elements using $a$, $b$ and $1_F$ somehow...)
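To see the statement in action, here is an illustrative sketch using $\mathbb{Z}/p\mathbb{Z}$ with mod-$p$ arithmetic, the simplest finite field (the choice $p=7$ is arbitrary):

```python
p = 7  # a prime; the integers mod p form a finite field

# a + a + ... + a  (p times) equals 0 for every element a
for a in range(p):
    assert sum(a for _ in range(p)) % p == 0

# why primality matters: if p = m*k with 1 < m, k < p, then m*1 and k*1
# would be nonzero elements whose product is p*1 = 0, contradicting the
# fact that a field has no zero divisors
print("every a in Z/%dZ satisfies p*a = 0" % p)
```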
\begin{document} \title{Monotonic Distributive Semilattices} \thanks{This paper has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 689176, and the support of the grant PIP 11220150100412CO of CONICET (Argentina).} \author{Sergio A. Celani \and Ma. Paula Mench\'{o}n} \email{scelani@exa.unicen.edu.ar} \email{mpmenchon@exa.unicen.edu.ar} \address{CONICET and Departamento de Matem\'{a}ticas, Facultad de Ciencias Exactas, Univ. Nac. del Centro, Pinto 399, 7000 Tandil, Argentina\\ } \maketitle \subjclass{ } \begin{abstract} In the study of algebras related to non-classical logics, (distributive) semilattices are always present in the background. For example, the algebraic semantics of the $\{\rightarrow,\wedge,\top\}$-fragment of intuitionistic logic is the variety of implicative meet-semilattices \cite{CelaniImplicative} \cite{ChajdaHalasKuhr}. In this paper we introduce and study the class of distributive meet-semilattices endowed with a monotonic modal operator $m$. We study the representation theory of these algebras using the theory of canonical extensions and we give a topological duality for them. Also, we show how our new duality extends to some particular subclasses. \end{abstract} \keywords{Distributive meet semilattices, monotonic modal logics, $DS$-spaces, modal operators.} \section{Introduction} Boolean algebras with modal operators are the algebraic semantics of classical modal logics. By Stone\textquoteright s topological representation of Boolean algebras, every Boolean algebra with a modal operator can be represented as a relational structure \cite{Chagrov-Zakharyaschev}. This representation plays an important role in the study of many extensions of normal modal logics \cite{Chagrov-Zakharyaschev} and monotone modal logics \cite{Chellas} \cite{Hansen}.
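To make this representation concrete, recall the standard construction: any binary relation $R$ on a set $W$ induces an operator $m_{R}$ on the powerset Boolean algebra $\mathcal{P}(W)$ by $$m_{R}(U)=\{w\in W: R(w)\subseteq U\},$$ where $R(w)=\{v\in W:(w,v)\in R\}$. This operator preserves finite intersections, and in particular it is monotonic: $m_{R}(U)\subseteq m_{R}(V)$ whenever $U\subseteq V$.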
Recall that monotone modal logics are a generalization of normal modal logics in which the axiom $m(\varphi\rightarrow\phi)\rightarrow(m\varphi\rightarrow m\phi)$ has been weakened, leading to a monotonicity condition which can be expressed either as an axiom ($m(\varphi\wedge\phi)\rightarrow m\varphi$) or as a rule (from $\varphi\rightarrow\phi$ derive $m\varphi\rightarrow m\phi$). Thus, it is possible to study monotone modal logics with a language containing only the connectives $\wedge$ and $m$, or $\rightarrow$ and $m$. Classical monotone modal logics are interpreted semantically by means of neighborhood frames \cite{Chellas} \cite{Hansen} \cite{HansenKupkePacuit}. This class of structures provides a generalization of Kripke semantics. Every neighborhood frame produces a Boolean algebra endowed with a monotonic operator, called a monotonic algebra. Conversely, every monotonic algebra defines a neighborhood frame (see \cite{Hansen} or \cite{Celaniboole}). Also, it is possible to consider monotone modal logics defined on non-classical logics. For example, in \cite{Kojima}, Kojima considered neighborhood semantics for intuitionistic modal logic, and he defined a neighborhood frame as a triple $\langle W,\leq,N\rangle$ where $N$ is a neighborhood function, which is a mapping from $W$ to $\mathcal{P}(\mathcal{P}(W))$ that satisfies the decreasing condition, i.e., $N(x)\supseteq N(y)$ whenever $x\leq y$ (see Definition 3.1 of \cite{Kojima}). Monotonic logics based on intuitionistic logic are also studied in \cite{Sotirov}. In the study of algebras related to non-classical logics, semilattices are always present in the background.
For example, the algebraic semantics of the $\{\rightarrow,\wedge,\top\}$-fragment of intuitionistic logic is the variety of implicative meet-semilattices \cite{CelaniImplicative} \cite{ChajdaHalasKuhr}, and it is well known that the meet-semilattice reduct of an implicative meet-semilattice is distributive in the sense of \cite{Gratzer} or \cite{CelaniTopological}. In \cite{Gratzer} G. Gr\"{a}tzer gave a topological representation for distributive semilattices using sober spaces. This representation was extended to a topological duality in \cite{CelaniTopological} and \cite{CelaniCalomino}. The principal novelty of \cite{CelaniTopological} was the characterization of meet-semilattice homomorphisms preserving top by means of certain binary relations. For implicative semilattices there exists a similar representation in \cite{CelaniImplicative}. The main objective of this paper is to study a full Stone-style duality for distributive meet-semilattices endowed with a monotonic operator. So, most of the results given in this paper are applicable, with minor modifications, to the study of bounded distributive lattices, implicative semilattices, Heyting algebras, and Boolean algebras with monotonic operators. We note that in the particular case of Boolean algebras our duality yields the duality given in \cite{Celaniboole} and \cite{Hansen}. Canonical extensions were introduced by J\'{o}nsson and Tarski to study Boolean algebras with operators. The main purpose was to make it easier to identify what form the dual of an additional operation on a lattice should take. Since their seminal work, the theory of canonical extensions has been simplified and generalized \cite{Gehrke -Jonsson2000,PalmigianoDunn}, leading to a theory widely applicable beyond the original Boolean setting. We will use canonical extensions as a tool for the development of a theory of relational methods, in an algebraic way. The paper is organized as follows.
In Section 2 we recall the definitions and some basic properties of distributive semilattices and canonical extensions. We recall the topological representation and duality developed in \cite{CelaniTopological} and \cite{CelaniCalomino}. In Section 3, we introduce a special class of saturated sets of a $DS$-space that is dual to the family of order ideals of a distributive semilattice. In Section 4 we present the class of distributive semilattices endowed with a monotonic operator, and we extend the results on representation using canonical extensions. In Section 5 we consider some important applications of the duality. We show how our new duality extends to some particular subclasses. \section{Preliminaries} We include some elementary properties of distributive semilattices that are necessary to read this paper. For more details see \cite{CelaniTopological}, \cite{ChajdaHalasKuhr} and \cite{CelaniCalomino}. Let $\langle X,\leq\rangle$ be a poset. For each $Y\subseteq X$, let $[Y)=\{x\in X:\exists y\in Y(y\leq x)\}$ and $(Y]=\{x\in X:\exists y\in Y(x\leq y)\}$. If $Y=\{y\}$, then we will write $[y)$ and $(y]$ instead of $[\{y\})$ and $(\{y\}]$, respectively. We call $Y$ an \emph{upset} (resp. \emph{downset}) if $Y=[Y)$ (resp. $Y=(Y]$). The set of all upsets of $X$ will be denoted by $\mathrm{Up}(X)$. The complement of a subset $Y\subseteq X$ will be denoted by $Y^{c}$ or $X-Y$. \begin{definition} A \emph{meet-semilattice with greatest element}, a semilattice for short, is an algebra $\mathbf{A}=\langle A,\wedge,1\rangle$ of type $(2,0)$ such that the operation $\wedge$ is idempotent, commutative, associative, and $a\wedge1=a$ for all $a\in A$. \end{definition} It is clear that for each poset $\langle X,\leq\rangle$ the structure $\langle\mathrm{Up}(X),\cap,X\rangle$ is a semilattice. Let $\mathbf{A}$ be a semilattice. As usual, we can define a partial order on $\mathbf{A}$, called the natural order, as $a\leq b$ iff $a\wedge b=a$.
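For example, in the semilattice $\langle\mathrm{Up}(X),\cap,X\rangle$ of upsets of a poset, the natural order is just inclusion: $U\leq V$ iff $U\cap V=U$ iff $U\subseteq V$.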
It is easy to see that $1$ is the greatest element of $A$. A subset $F\subseteq A$ is a \emph{filter} of $\mathbf{A}$ if it is an upset, $1\in F$ and if $a,b\in F$, then $a\wedge b\in F$. We will denote the set of all filters of $\mathbf{A}$ by $\Fi({\mathbf{A}})$. It is easy to see that $\Fi({\mathbf{A}})$ is closed under arbitrary intersections. The filter generated by the subset $X\subseteq A$ will be denoted by $F(X)$. If $X=\{a\}$, $F(\{a\})=F(a)=[a)$. We shall say that a proper filter is \emph{irreducible} or \emph{prime} if for any pair of filters $F_{1},F_{2}$ such that $F=F_{1}\cap F_{2}$, it follows that $F=F_{1}$ or $F=F_{2}$. We will denote the set of all irreducible filters of a semilattice $\mathbf{A}$ by $X({\mathbf{A}})$. A subset $I\subseteq A$ is called an \emph{order ideal} if it is a downset and for every $a,b\in I$ we have that there exists $c\in I$ such that $a,b\leq c$. We will denote the set of all order ideals of $\mathbf{A}$ by $\mathrm{Id}({\mathbf{A}})$. \begin{theorem}\cite{CelaniTopological} \label{sep}Let $\mathbf{A}$ be a semilattice. Let $F\in\Fi({\mathbf{A}})$ and let $I\in\mathrm{Id}({\mathbf{A}})$ such that $F\cap I=\emptyset$. Then there exists $P\in X({\mathbf{A}})$ such that $F\subseteq P$ and $P\cap I=\emptyset$. \end{theorem} A semilattice $\mathbf{A}$ is \emph{distributive} if for all $a,b,c\in A$ such that $a\wedge b\leq c$ there exist $a_{1},b_{1}\in A$ such that $a\leq a_{1}$, $b\leq b_{1}$ and $c=a_{1}\wedge b_{1}$. We will denote by $\mathcal{DS}$ the class of distributive semilattices. We recall (see \cite{CelaniCalomino,CelaniTopological}) that in a distributive semilattice $\mathbf{A}$, if $F$ is a proper filter then the following conditions are equivalent: \begin{enumerate} \item $F$ is irreducible, \item for every $a,b\in A$ such that $a,b\notin F$, there exist $c\notin F$ and $f\in F$ such that $a\wedge f\leq c$ and $b\wedge f\leq c$, \item $A-F=F^{c}$ is an order ideal. 
\end{enumerate} Let $\mathbf{A},\mathbf{B}\in \mathcal{DS}$. A mapping $h\colon A\rightarrow B$ is called a \emph{semilattice homomorphism} if \begin{enumerate} \item $h(a)\wedge h(b)=h(a\wedge b)$ for every $a,b\in A$ and \item $h(1)=1$. \end{enumerate} Let $\mathbf{A}\in\mathcal{DS}$. Let us consider the poset $\langle X({\mathbf{A}}),\subseteq\rangle$ and the mapping $\beta_{\mathbf{A}}\colon A\rightarrow\mathrm{Up}(X({\mathbf{A}}))$ defined by $\beta_{\mathbf{A}}(a)=\{P\in X({\mathbf{A}}):a\in P\}$. For convenience, we omit the subscript of $\beta_{\mathbf{A}}$ when no confusion can arise. \begin{theorem} \label{rep Hilbert}Let $\mathbf{A}\in\mathcal{DS}$. Then $\mathbf{A}$ is isomorphic to the subalgebra $\beta[A]=\{\beta(a):a\in A\}$ of $\langle\mathrm{Up}(X({\mathbf{A}})),\cap,X({\mathbf{A}})\rangle$. \end{theorem} \subsection{$\mathcal{DS}$-spaces} In this subsection we recall the duality for distributive semilattices given in \cite{CelaniCalomino} and \cite{CelaniTopological} based on a Stone-style duality, and we give some definitions that we will need to extend it. Let $\langle X,\mathcal{T}\rangle$ be a topological space. We will denote by $\mathcal{KO}(X)$ the set of all compact and open subsets of $X$ and let $D(X)$ be the set $D(X)=\{U:U^{c}\in\mathcal{KO}(X)\}$. We will denote by $\mathcal{C}(X)$ the set of all non-empty closed subsets of $X$. The closure of a subset $Y\subseteq X$ will be denoted by $\Cl(Y)$. A subset $Y\subseteq X$ is \emph{saturated} if it is an intersection of open sets. The smallest saturated set containing $Y$ is the \emph{saturation} of $Y$ and will be denoted by $\sat(Y)$. We recall that the \emph{specialization order} of $\langle X,\mathcal{T}\rangle$ is defined by $x\preceq y$ if $x\in\Cl(\{y\})=\Cl(y)$. It is easy to see that $\preceq$ is a reflexive and transitive relation. If $X$ is $T_{0}$ then the relation $\preceq$ is a partial order.
The dual order of $\preceq$ will be denoted by $\leq$, i.e., $x\leq y$ if $y\in\Cl(x)$. Moreover, if $X$ is $T_{0}$ then $\Cl(x)=[x)$, $\sat(Y)=(Y]$, and every open (resp. closed) subset is a downset (resp. upset) respect to $\leq$. Recall that a non-empty subset $Y\subseteq X$ of a topological space $\langle X,\mathcal{T}\rangle$ is \emph{irreducible} if $Y\subseteq Z\cup W$ for any closed subsets $Z$ and $W$, implies $Y\subseteq Z$ or $Y\subseteq W$. A topological space $\langle X,\mathcal{T}\rangle$ is \emph{sober} if for every irreducible closed set $Y\subseteq X$, there exists a unique $x\in X$ such that $\Cl(x)=Y$. Each sober space is $T_{0}$. The following definition is equivalent to the definition given by G. Gr\"{a}tzer in \cite{Gratzer}. \begin{definition}\cite{CelaniCalomino} A \emph{$DS$-space} is a topological space $\langle X,\mathcal{T}\rangle$ such that: \begin{enumerate} \item The set of all compact and open subsets $\mathcal{KO}(X)$ forms a basis for the topology $\mathcal{T}$ on $X$. \item $\langle X,\mathcal{T}\rangle$ is sober. \end{enumerate} \end{definition} If $\langle X,\mathcal{T}\rangle$ is a $DS$-space, then $\langle D(X),\cap,X\rangle$ is a distributive semilattice (see \cite{Gratzer}). Let $\langle X,\leq\rangle$ be a poset. Recall that a subset $K\subseteq X$ is called \emph{dually} \emph{directed} if for any $x,y\in K$ there exists $z\in K$ such that $z\leq x$ and $z\leq y$. A subset $K\subseteq X$ is called \emph{directed} if for any $x,y\in K$ there exists $z\in K$ such that $x\leq z$ and $y\leq z$. \begin{theorem} Let $\langle X,\mathcal{T}\rangle$ be a topological space with basis $\mathcal{K}$ of open and compact subsets for $\mathcal{T}$. 
Then, the following conditions are equivalent: \begin{enumerate} \item $\langle X,\mathcal{T}\rangle$ is sober, \item $\langle X,\mathcal{T}\rangle$ is $T_{0}$ and $\bigcap\{U:U\in\mathcal{L}\}\cap Y\neq\emptyset$ for each closed subset $Y$ and for any dually directed subset $\mathcal{L}\subseteq\mathcal{K}$ such that $Y\cap U\neq\emptyset$ for all $U\in\mathcal{L}$. \end{enumerate} \end{theorem} Let $\mathbf{A}\in\mathcal{DS}$. Consider $\mathcal{K}_{\mathbf{A}}=\{\beta(a)^{c}:a\in A\}$ and let $\mathcal{T}_{\mathbf{A}}$ be the topology generated by the basis $\mathcal{K}_{\mathbf{A}}$. Then, $\langle X({\mathbf{A}}),\mathcal{T}_{\mathbf{A}}\rangle$ is a $DS$-space, called the \emph{dual space} of $\mathbf{A}$ (see \cite{CelaniTopological} and \cite{CelaniCalomino}). Recall that $Q\in\Cl(P)$ iff $P\subseteq Q$, i.e., the specialization dual order of $\langle X({\mathbf{A}}),\mathcal{T}_{\mathbf{A}}\rangle$ is the inclusion relation $\subseteq$. Also, recall that the lattices $\Fi(\mathbf{A})$ and $\mathcal{C}(X(\mathbf{A}))$ are dually isomorphic under the maps $F\mapsto\hat{F}$, where $\hat{F}=\{P\in X(\mathbf{A}):F\subseteq P\}=\bigcap\{\beta(a):a\in F\}$ for each $F\in\Fi(\mathbf{A})$, and $Y\mapsto F_{Y}$, where $F_{Y}=\{a\in A:Y\subseteq\beta(a)\}$ for each $Y\in\mathcal{C}(X(\mathbf{A}))$. \subsection{Canonical extension} Here we will give the basic definitions of the theory of canonical extensions focused on (distributive) meet semilattices. The following is an adaptation of the definition given in \cite{PalmigianoDunn} for posets. This definition agrees with the definition of canonical extensions for bounded distributive lattices and Boolean algebras \cite{Gehrke -Jonsson2000,Jonsson y Tarski}. \begin{definition}Let $\mathbf{A}$ be a semilattice. A $\mathit{completion}$ of $\mathbf{A}$ is a semilattice embedding $e\colon A\rightarrow X$ where $X$ is a complete lattice.
From now on, we will suppress $e$ and call $X$ a completion of $\mathbf{A}$ and assume that $\mathbf{A}$ is a subalgebra of $X$. \end{definition} \begin{definition}Let $\mathbf{A}$ be a semilattice. Given a completion $X$ of $\mathbf{A}$, an element of $X$ is called $\mathit{closed}$ provided it is the infimum in $X$ of some filter $F$ of $\mathbf{A}$. We denote the set of all closed elements of $X$ by $K(X)$. Dually, an element of $X$ is called $\mathit{open}$ provided it is the supremum in $X$ of some order ideal $I$ of $\mathbf{A}$. We denote the set of all open elements of $X$ by $O(X)$. A completion $X$ of $\mathbf{A}$ is said to be $\mathit{dense}$ provided each element of $X$ is both the supremum of all the closed elements below it and the infimum of all the open elements above it. A completion $X$ of $\mathbf{A}$ is said to be $\mathit{compact}$ provided that whenever $D$ is a non-empty dually directed subset of $A$, $U$ is a non-empty directed subset of $A$, and $\bigwedge_{X}D\leq\bigvee_{X}U$, then there exist $x\in D$ and $y\in U$ such that $x\leq y$. \end{definition} \begin{definition}Let $\mathbf{A}$ be a semilattice. A $\textit{canonical extension}$ of $\mathbf{A}$ is a dense and compact completion of $\mathbf{A}$. \end{definition} \begin{theorem}Let $\mathbf{A}$ be a semilattice. Then $\mathbf{A}$ has a canonical extension, and it is unique up to an isomorphism that fixes $\mathbf{A}$. \end{theorem} \begin{lemma}Let us consider a distributive semilattice with greatest element $\mathbf{A}=\langle A,\wedge,1\rangle$. Then $\langle\mathrm{Up}(X(\mathbf{A})),\cap,\cup,X(\mathbf{A}),\emptyset\rangle$ is a canonical extension of $\mathbf{A}$, where $A\cong\beta[A]\subseteq\mathrm{Up}(X(\mathbf{A}))$. We will call it `the' canonical extension. \end{lemma} \section{Ideals and saturated subsets} \label{section: Ideals} In this section we present a particular family of saturated sets in a $DS$-space, dual to the family of order ideals of a semilattice.
\begin{definition} Let $\langle X,\mathcal{T}\rangle$ be a $DS$-space. $Z\subseteq X$ is a \emph{special basic saturated subset} if $Z=\bigcap\{W:W\in\mathcal{L}\}$ for some dually directed family $\mathcal{L}\subseteq\mathcal{KO}(X)$. \end{definition} We denote by $\mathcal{S}(X)$ the set of all special basic saturated subsets of a $DS$-space $\langle X,\mathcal{T}\rangle$. Note that every special basic saturated subset is a saturated set. Let $\mathbf{A}\in\mathcal{DS}$. Let $I\in\mathrm{Id}({\mathbf{A}})$. We consider the following subset of $X({\mathbf{A}}):$ \[ \alpha(I)=\bigcap\{\beta(a)^{c}:a\in I\}=\{P\in X({\mathbf{A}}):I\cap P=\emptyset\}. \] It is clear that $\alpha(I)$ is a special basic saturated set of $\langle X({\mathbf{A}}),\mathcal{T}_{\mathbf{A}}\rangle$. Let $Z\subseteq X({\mathbf{A}})$ be a special basic saturated set of $X({\mathbf{A}})$. Consider the subset \[ I_{\mathbf{A}}(Z)=\{a\in A:\beta(a)\cap Z=\emptyset\}. \] It is easy to see that $I_{\mathbf{A}}(Z)$ is a downset of $\mathbf{A}$. \begin{remark}The special basic saturated subsets of a $DS$-space $\langle X,\mathcal{T}\rangle$ are precisely the compact saturated subsets of the topology. \end{remark} Given two posets $\langle X,\leq_{X}\rangle$ and $\langle Y,\leq_{Y}\rangle$, a \emph{surjective order-isomorphism} from $\langle X,\leq_{X}\rangle$ to $\langle Y,\leq_{Y}\rangle$ is a surjective function $f\colon X\rightarrow Y$ with the property that for every $x$ and $y$ in $X$, $x\leq_{X}y$ if and only if $f(x)\leq_{Y}f(y)$. We say that the posets $\langle X,\leq_{X}\rangle$ and $\langle Y,\leq_{Y}\rangle$ are isomorphic if there exists a surjective order-isomorphism $f\colon X\rightarrow Y$. In the following result we prove that order ideals are in bijective correspondence with the family of special basic saturated subsets of $X(\mathbf{A})$. \begin{theorem} Let $\mathbf{A}\in\mathcal{DS}$.
Then the posets $\langle\mathrm{Id}({\mathbf{A}}),\subseteq\rangle$ and $\langle\mathcal{S}(X(\mathbf{A})),\subseteq\rangle$ are dually isomorphic. \end{theorem} \begin{proof} Let $Z\subseteq X({\mathbf{A}})$ be a special basic saturated subset of $X({\mathbf{A}})$. We prove that $I_{\mathbf{A}}(Z)$ is an order ideal of $\mathbf{A}$ and $Z=\alpha(I_{\mathbf{A}}(Z))$. Moreover, if $I$ is any order ideal of $\mathbf{A}$, then we prove that $I=I_{\mathbf{A}}(\alpha(I))$. It is clear that $I_{\mathbf{A}}(Z)$ is a downset of $\mathbf{A}$. Let $a,b\in I_{\mathbf{A}}(Z)$. So, we have that $Z\cap(\beta_{\mathbf{A}}(a)\cup\beta_{\mathbf{A}}(b))=\emptyset$. Since $Z=\bigcap\{\beta(a)^{c}:\beta(a)^{c}\in\mathcal{L}\}$ for some dually directed family $\mathcal{L}\subseteq\mathcal{K}_{\mathbf{A}}$ and $\beta(a)\cup\beta(b)$ is a closed subset, there exists $\beta_{\mathbf{A}}(c)^{c}\in\mathcal{L}$ such that $\beta(c)^{c}\cap(\beta(a)\cup\beta(b))=\emptyset$. Thus, $\beta(a)\cup\beta(b)\subseteq\beta(c)$ and $Z\cap\beta_{\mathbf{A}}(c)=\emptyset$, i.e., $a,b\leq c$ and $c\in I_{\mathbf{A}}(Z)$. Therefore, $I_{\mathbf{A}}(Z)$ is an order ideal of $\mathbf{A}$ and we have that $\alpha(I_{\mathbf{A}}(Z))=\bigcap\{\beta(a)^{c}:Z\subseteq\beta(a)^{c}\}\subseteq\bigcap\{\beta(a)^{c}:\beta(a)^{c}\in\mathcal{L}\}=Z$. The other inclusion is immediate. Now, let $I$ be an order ideal. Let $b\in I_{\mathbf{A}}(\alpha(I))$. Then $\beta(b)\cap\alpha(I)=\beta(b)\cap\bigcap\{\beta(a)^{c}:a\in I\}=\emptyset$. Since $\beta_{\mathbf{A}}(b)$ is a closed subset, and the family $\{\beta(a)^{c}:a\in I\}$ is dually directed, we get that there exists $a\in I$ such that $\beta(b)\subseteq\beta(a)$. So, $b\leq a$, and as $I$ is a downset, we have that $b\in I$. The other inclusion is immediate. 
Thus, we have a surjective function $\alpha\colon\mathrm{Id}({\mathbf{A}})\rightarrow\mathcal{S}(X(\mathbf{A}))$ with inverse function $I_{\mathbf{A}}\colon\mathcal{S}(X(\mathbf{A}))\rightarrow\mathrm{Id}({\mathbf{A}})$. We prove that $\alpha$ is a dual order-isomorphism. Let $I_{1}$ and $I_{2}$ be two ideals of $\mathbf{A}$. Assume that $I_{1}\subseteq I_{2}$. Let $P\in\alpha(I_{2})$. Then, $P\cap I_{2}=\emptyset$. It follows that $P\cap I_{1}=\emptyset$, i.e., $P\in\alpha(I_{1})$. Assume that $\alpha(I_{1})\subseteq\alpha(I_{2})$. Let $a\in I_{2}$ and suppose that $a\notin I_{1}$. Then $I_{1}\cap[a)=\emptyset$, so there exists $P\in X({\mathbf{A}})$ such that $[a)\subseteq P$ and $P\cap I_{1}=\emptyset$. It follows that $P\in\alpha(I_{1})$ but $P\notin\alpha(I_{2})$, which is a contradiction. Therefore, $a\in I_{1}$.\end{proof} \begin{remark} We note that for any $a\in A$, $\alpha((a])=\beta(a)^{c}$. \end{remark} For simplicity we will write $\alpha(a)$ instead of $\alpha((a])$. \begin{proposition}\label{Sat y cerr}Let $\mathbf{A}\in\mathcal{DS}$, let $Y\in\mathcal{C}(X(\mathbf{A}))$ and $Z\in\mathcal{S}(X(\mathbf{A}))$. Then, \[F_{Y}\cap I_{\mathbf{A}}(Z)=\emptyset\text{ iff }Y\cap Z\neq\emptyset. \] \end{proposition} \begin{proof} Suppose that $F_{Y}\cap I_{\mathbf{A}}(Z)=\emptyset$. Then, there exists $P\in X(\mathbf{A})$ such that $F_{Y}\subseteq P$ and $P\cap I_{\mathbf{A}}(Z)=\emptyset$, i.e., $P\in Y$ and $P\in Z$. Thus, $Y\cap Z\neq\emptyset$. The rest of the proof is straightforward.\end{proof} Now we are able to identify the closed and open elements of the canonical extension of a distributive semilattice with topological structures. \begin{lemma}Let $\mathbf{A}$ be a distributive semilattice. Let us consider the canonical extension $\langle\mathrm{Up}(X(\mathbf{A})),\cap,\cup,X(\mathbf{A}),\emptyset\rangle$ and the $DS$-space $\langle X(\mathbf{A}),\mathcal{T}_{\mathbf{A}}\rangle$.
Then, $K(\mathrm{Up}(X(\mathbf{A})))=\mathcal{C}(X(\mathbf{A}))$ and $O(\mathrm{Up}(X(\mathbf{A})))=\{Z^{c}:Z\in\mathcal{S}(X(\mathbf{A}))\}$, i.e., the closed elements of the canonical extension are exactly the closed sets of the topology and the open elements of the canonical extension are the complements of the special basic saturated sets of the topology. \end{lemma} \begin{remark}Given a complete lattice $C$, we denote the set of completely join prime elements by $J^{\infty}(C)$ and the set of completely meet prime elements by $M^{\infty}(C)$. Every element of $\mathrm{Up}(X(\mathbf{A}))$ is a join of completely join prime elements and a meet of completely meet prime elements, where $J^{\infty}(\mathrm{Up}(X(\mathbf{A})))=\{\hat{P}=[P):P\in X(\mathbf{A})\}$ and $M^{\infty}(\mathrm{Up}(X(\mathbf{A})))=\{\alpha(P^{c})^{c}=(P]^{c}:P\in X(\mathbf{A})\}$. \end{remark} \section{Representation and duality of monotonic distributive semilattices} \begin{definition} Let $\mathbf{A}=\langle A,\wedge,1\rangle$ be a distributive semilattice. A \textit{monotonic operator} is an operator $m:A\rightarrow A$ that satisfies the following condition: \[ \text{If }a\leq b\text{, then }ma\leq mb\text{ for all }a,b\in A. \] \end{definition} The following result is immediate. \begin{proposition} Let $\mathbf{A}\in\mathcal{DS}$ and let $m:A\rightarrow A$ be a unary function. Then the following conditions are equivalent: \begin{enumerate} \item For all $a,b\in A$, if $a\leq b$ then $ma\leq mb$, \item $m(a\wedge b)\leq ma\wedge mb$ for all $a,b\in A$. \end{enumerate} \end{proposition} \begin{definition} Let $\mathbf{A}\in\mathcal{DS}$. The pair $\langle\mathbf{A},m\rangle$, where $m$ is a monotonic operator on $\mathbf{A}$, is called a \emph{monotonic distributive semilattice}. \end{definition} The class of all monotonic distributive semilattices will be denoted by $\mathcal{MDS}$. Let $\langle\mathbf{A},m\rangle,\langle\mathbf{B},m\rangle\in\mathcal{MDS}$.
We say that a homomorphism $h\colon A\rightarrow B$ is a \emph{homomorphism of monotonic distributive semilattices} if $h$ commutes with $m$, i.e., if $h(ma)=mh(a)$ for all $a\in A$. We denote by $\mathcal{MDSH}$ the category of monotonic distributive semilattices and monotonic distributive semilattice homomorphisms. We will give two examples of monotonic distributive semilattices constructed from certain relational systems. We shall use these examples in the theory of representation and topological duality for monotonic distributive semilattices. Let $X$ be a set. A \emph{multirelation} on $X$ is a subset of the Cartesian product $X\times\mathcal{P}(X)$, that is, a set of ordered pairs $(x,Y)$ where $x\in X$ and $Y\subseteq X$ \cite{Duntsch-Orlowska-RewitzkyMultirelations2010,Rewitzky2003}. We recall that in classical monotone modal logic a neighborhood frame is a pair $\langle X,R\rangle$ where $X$ is a set and $R\subseteq X\times\mathcal{P}(X)$, i.e., $R$ is a multirelation (see \cite{Chellas,Hansen}). Now we give a generalization of this notion. \begin{definition} \label{SCneighborhood}An $S$\emph{-neighborhood} frame is a triple $\langle X,\leq,R\rangle$ where $\langle X,\leq\rangle$ is a poset and $R$ is a subset of $X\times\mathcal{P}(X)$ such that if $x\leq y$, then $R(y)\subseteq R(x)$ for all $x,y\in X$. For each $U\in\mathrm{Up}(X)$ we define the set \begin{equation} m_{R}(U)=\{x\in X:\forall Z\in R(x)~(Z\cap U\neq\emptyset)\}.\label{eq:op1} \end{equation} A $C$\emph{-neighborhood} frame is a triple $\langle X,\leq,G\rangle$ where $\langle X,\leq\rangle$ is a poset and $G$ is a subset of $X\times\mathcal{P}(X)$ such that if $x\leq y$, then $G(x)\subseteq G(y)$ for all $x,y\in X$.
For each $U\in\mathrm{Up}(X)$ we define the set \begin{equation} \mathbf{m}_{G}(U)=\{x\in X:\exists Y\in G(x)~(Y\subseteq U)\}.\label{eq:op2} \end{equation} \end{definition} \begin{lemma} Let $\langle X,\leq,R\rangle$ be an $S$-neighborhood frame and $\langle X,\leq,G\rangle$ be a $C$-neighborhood frame. Then $\langle\mathrm{Up}(X),\cap,m_{R},X\rangle$ and $\langle\mathrm{Up}(X),\cap,\mathbf{m}_{G},X\rangle$ are monotonic distributive semilattices. \end{lemma} We will represent the monotonic operator $m$ on a distributive semilattice $\mathbf{A}$ as a multirelation on the dual space of $\mathbf{A}$, where the canonical extension offers an advantageous point of view. We consider two different ways of extending maps that agree with the ones given in \cite{PalmigianoDunn} for posets, bounded distributive lattices and Boolean algebras. \begin{definition}Let $\mathbf{A}$ be a distributive semilattice. Given a monotonic operation $m\colon A\rightarrow A$, we define the maps \[ m^{\sigma},m^{\pi}:\mathrm{Up}(X(\mathbf{A}))\rightarrow\mathrm{Up}(X(\mathbf{A})) \] by \[ m^{\sigma}(X)=\bigcup\{\bigcap\{\beta(ma):Y\subseteq\beta(a)\}:X\supseteq Y\in\mathcal{C}(X(\mathbf{A}))\} \] and \[ m^{\pi}(X)=\bigcap\{\bigcup\{\beta(ma):Z\subseteq\beta(a)^{c}\}:X^{c}\supseteq Z\in\mathcal{S}(X(\mathbf{A}))\}. \] \end{definition} The two extensions of a map $m$ shown above are not always equal. Whether we want to extend a particular additional operation using the $\sigma$- or the $\pi$-extension depends on the properties of the particular operation to be extended. The following lemma is a consequence of Lemma 3.4 of \cite{PalmigianoDunn}. \begin{lemma}\label{lemaim}Let $\langle\mathbf{A},m\rangle\in\mathcal{MDS}$. The maps $m^{\sigma},m^{\pi}$ are monotonic extensions of $m$, i.e., $\langle\mathrm{Up}(X(\mathbf{A})),m^{\sigma}\rangle,\langle\mathrm{Up}(X(\mathbf{A})),m^{\pi}\rangle\in\mathcal{MDS}$ and $m^{\sigma}(\beta(a))=m^{\pi}(\beta(a))=\beta(ma)$ for all $a\in A$. 
In addition, $m^{\sigma}\leq m^{\pi}$ with equality holding on $K(\mathrm{Up}(X(\mathbf{A})))\cup O(\mathrm{Up}(X(\mathbf{A})))$. For $X\in\mathrm{Up}(X(\mathbf{A}))$, $Y\in\mathcal{C}(X(\mathbf{A}))$ and $Z\in\mathcal{S}(X(\mathbf{A}))$ \[\begin{array}{lll} m^{\sigma}(X)&=&\bigcup\{m^{\sigma}(Y):X\supseteq Y\in\mathcal{C}(X(\mathbf{A}))\},\\ m^{\sigma}(Y)&=&\bigcap\{\beta(ma):Y\subseteq\beta(a)\},\\ m^{\pi}(X)&=&\bigcap\{m^{\pi}(Z^{c}):X^{c}\supseteq Z\in\mathcal{S}(X(\mathbf{A}))\},\\ m^{\pi}(Z^{c})&=&\bigcup\{\beta(ma):Z\subseteq\beta(a)^{c}\}. \end{array}\] So, $m^{\sigma}$ and $m^{\pi}$ send closed sets to closed sets and complements of special saturated sets to complements of special saturated sets. \end{lemma} Now we show how, using the $\sigma$-extension and the $\pi$-extension, it is possible to define two multirelations on the dual space of $\mathbf{A}$. Let $\langle\mathbf{A},m\rangle\in\mathcal{MDS}$. Note that by definition of $m^{\pi}$, for every $Z\in\mathcal{S}(X(\mathbf{A}))$ we have: \begin{center} \begin{tabular}{lll} $P\in m^{\pi}(Z^{c})$ & $\Leftrightarrow$ & $\exists a\in A$ such that $P\in\beta(ma)$ and $Z\subseteq\beta(a)^{c}$ \tabularnewline & $\Leftrightarrow$ & $\exists a\in A$ such that $ma\in P$ and $a\in I_{\mathbf{A}}(Z)$ \tabularnewline & $\Leftrightarrow$ & $m^{-1}(P)\cap I_{\mathbf{A}}(Z)\neq\emptyset$.\tabularnewline \end{tabular} \end{center} So, for every $X\in\mathrm{Up}(X(\mathbf{A}))$ we get: \begin{center} \begin{tabular}{lll} $P\in m^{\pi}(X)$ & $\Leftrightarrow$ & $\forall Z\in\mathcal{S}(X(\mathbf{A}))$ such that $Z\subseteq X^{c}$, we have $P\in m^{\pi}(Z^{c})$\tabularnewline & $\Leftrightarrow$ & $\forall Z\in\mathcal{S}(X(\mathbf{A}))$ such that $Z\subseteq X^{c}$, we have $m^{-1}(P)\cap I_{\mathbf{A}}(Z)\neq\emptyset$. 
\tabularnewline \end{tabular} \end{center} We define the relation \[ R_{m}\subseteq X(\mathbf{A})\times\mathcal{S}(X(\mathbf{A})) \] by \begin{equation} (P,Z)\in R_{m}\text{~iff~}m^{-1}(P)\cap I_{\mathbf{A}}(Z)=\emptyset.\label{eq:Relation sat} \end{equation} Consequently, the operation $m^{\pi}$ on $\mathrm{Up}(X(\mathbf{A}))$ can be defined in terms of the relation $R_{m}$ as: \[ P\in m^{\pi}(X)\text{~iff~}\forall Z\in R_{m}(P)[Z\cap X\neq\emptyset]. \] On the other hand, we can define another multirelation using the operation $m^{\sigma}$. Note that for each $Y\in\mathcal{C}(X(\mathbf{A}))$ we have: \begin{center} \begin{tabular}{lcl} $P\in m^{\sigma}(Y)$ & $\Leftrightarrow$ & $\forall a\in A$ such that $Y\subseteq\beta(a)$ we get $P\in\beta(ma)$\tabularnewline & $\Leftrightarrow$ & $\forall a\in A$ such that $a\in F_{Y}$ we get $ma\in P$\tabularnewline & $\Leftrightarrow$ & $F_{Y}\subseteq m^{-1}(P)$.\tabularnewline \end{tabular} \end{center} So, for each $X\in\mathrm{Up}(X(\mathbf{A}))$ we obtain: \begin{center} \begin{tabular}{lll} $P\in m^{\sigma}(X)$ & $\Leftrightarrow$ & $\exists Y\in\mathcal{C}(X(\mathbf{A}))$ such that $Y\subseteq X$ and $P\in m^{\sigma}(Y)$\tabularnewline & $\Leftrightarrow$ & $\exists Y\in\mathcal{C}(X(\mathbf{A}))$ such that $Y\subseteq X$ and $F_{Y}\subseteq m^{-1}(P)$.\tabularnewline \end{tabular} \end{center} Thus, we can define another relation $G_{m}\subseteq X(\mathbf{A})\times\mathcal{C}(X(\mathbf{A}))$ as \[ (P,Y)\in G_{m}\text{~iff~}F_{Y}\subseteq m^{-1}(P). \] Consequently, the operation $m^{\sigma}$ on $\mathrm{Up}(X(\mathbf{A}))$ can be defined in terms of the relation $G_{m}$ as: \begin{equation} P\in m^{\sigma}(X)\text{~iff~}\exists Y\in G_{m}(P)[Y\subseteq X].\label{eq:Relation clo} \end{equation} \begin{remark}Let $\langle\mathbf{A},m\rangle\in\mathcal{MDS}$.
Note that $\langle X(\mathbf{A}),\subseteq,R_{m}\rangle$ is an $S$-neighborhood frame and $\langle X(\mathbf{A}),\subseteq,G_{m}\rangle$ is a $C$-neighborhood frame, where $m_{R_{m}}=m^{\pi}$ and $\mathbf{m}_{G_{m}}=m^{\sigma}$. \end{remark} Now, we are able to define the dual spaces of monotonic distributive semilattices. Depending on the way we define the relation on the dual space, there are two possible constructions of relational systems. However, we will show that both systems are interdefinable. For each monotonic operator, we can choose to work with either of them based on its behavior. In the next section we will see how some additional conditions affect the associated relations. Let $\langle X,\mathcal{T}\rangle$ be a $DS$-space. For each $U\in D(X)$ we define the subsets $L_{U}$ of $\mathcal{S}(X)$ and $D_{U}$ of $\mathcal{C}(X)$ as follows: \[ L_{U}=\{Z\in\mathcal{S}(X):Z\cap U\neq\emptyset\} \] and \[ D_{U}=\{Y\in\mathcal{C}(X):Y\subseteq U\}. \] \begin{definition} \label{t esp mon} An \textit{$\mathcal{S}$-monotonic }$DS$\textit{-space} is a structure $\langle X,\mathcal{T},R\rangle$, where $\langle X,\mathcal{T}\rangle$ is a $DS$-space, and $R\subseteq X\times\mathcal{S}(X)$ is a multirelation such that \begin{enumerate} \item $m_{R}(U)=\{x\in X:\forall Z\in R(x)[Z\cap U\neq\emptyset]\}\in D(X)$, for all $U\in D(X)$ and \item $R(x)=\bigcap\{L_{U}:U\in D(X)\text{ and }x\in m_{R}(U)\}$, for all $x\in X$.
\end{enumerate} We can also give an analogous definition of $\mathcal{C}$-\textit{monotonic }$DS$\textit{-space} as a structure $\left\langle X,\mathcal{T},G\right\rangle $, where $\left\langle X,\mathcal{T}\right\rangle $ is a $DS$-space and $G\subseteq X\times\mathcal{C}(X)$ is a multirelation such that \begin{enumerate} \setcounter{enumi}{2} \item $\mathbf{m}_{G}\left(U\right)=\{x\in X:\exists Y\in G\left(x\right)\left[Y\subseteq U\right]\}\in D(X)$ for all $U\in D(X)$, and \item $G\left(x\right)={\textstyle \bigcap}\{(D_{U})^{c}:U\in D(X)\text{ and }x\in \mathbf{m}_{G}\left(U\right)^{c}\}$ for all $x\in X$. \end{enumerate} \end{definition} \begin{lemma} Let $\langle X,\mathcal{T},R\rangle$ and $\left\langle X,\mathcal{T},G\right\rangle $ be an \textit{$\mathcal{S}$}-monotonic $DS$-space and a\textit{ $\mathcal{C}$}-monotonic $DS$-space respectively. Then, \begin{enumerate} \item $R(y)\subseteq R(x)$ for all $x,y\in X$ such that $x\leq y$ and \item $G(x)\subseteq G(y)$ for all $x,y\in X$ such that $x\leq y$. \end{enumerate} \end{lemma} \begin{proof}1. Suppose that $x\leq y$ and let $Z\in R(y)$. Let $U\in D(X)$ such that $x\in m_{R}(U)$. By (1) of Definition \ref{t esp mon}, $m_{R}(U)$ is an upset, so $y\in m_{R}(U)$. By (2) of Definition \ref{t esp mon} we have that $Z\cap U\neq\emptyset$. Then, $Z\in\bigcap\{L_{U}:U\in D(X)\text{ and }x\in m_{R}(U)\}=R(x)$. 2. Suppose that $x\leq y$. Let $Y\in G(x)$. Let $U\in D(X)$ such that $y\in \mathbf{m}_{G}(U)^{c}$. By (3) of Definition \ref{t esp mon}, $\mathbf{m}_{G}(U)^{c}$ is a downset, so $x\in \mathbf{m}_{G}(U)^{c}$. By (4) of Definition \ref{t esp mon} we have that $Y\cap U^{c}\neq\emptyset$. Then, $Y\in\bigcap\{(D_{U})^{c}:U\in D(X)\text{ and }y\in \mathbf{m}_{G}(U)^{c}\}=G(y)$.\end{proof} As a corollary we have that $\langle X,\leq,R\rangle$ is an S-neighborhood frame and $\langle X,\leq,G\rangle$ is a C-neighborhood frame.
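In particular, the frame conditions yield the monotonicity of the operators directly: if $x\leq y$ and $x\in m_{R}(U)$, then $R(y)\subseteq R(x)$, so every $Z\in R(y)$ satisfies $Z\cap U\neq\emptyset$ and hence $y\in m_{R}(U)$; thus $m_{R}(U)$ is an upset of $\langle X,\leq\rangle$. Moreover, if $U\subseteq V$ in $\mathrm{Up}(X)$, then every $Z\in R(x)$ meeting $U$ also meets $V$, so
\[
U\subseteq V\text{ implies }m_{R}(U)\subseteq m_{R}(V),
\]
and the analogous computations for $G$ show that $\mathbf{m}_{G}(U)$ is an upset and $\mathbf{m}_{G}$ is monotone.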
From Definition \ref{SCneighborhood}, we get that the algebras $\langle\mathrm{Up}(X),m_{R}\rangle$ and $\langle\mathrm{Up}(X),\mathbf{m}_{G}\rangle$, considering the operators defined by (\ref{eq:op1}) and (\ref{eq:op2}), are monotonic distributive semilattices. Consequently, by (1) and (3) of Definition \ref{t esp mon}, $\langle D(X),m_{R}\rangle$ and $\langle D(X),\mathbf{m}_{G}\rangle$ are monotonic distributive semilattices considering the operators restricted to $D(X)$. Now we will see how to obtain each kind of space from the other. \begin{definition} Let $\left\langle X,\mathcal{T}\right\rangle $ be a $DS$-space. Let $\phi_{X}\colon\mathcal{P}(\mathcal{S}(X))\rightarrow\mathcal{P}(\mathcal{C}(X))$ be the function defined by \[ \phi_{X}(S)=\{Y\in\mathcal{C}(X):\forall Z\in S\ [Y\cap Z\neq\emptyset]\} \] and let $\psi_{X}\colon\mathcal{P}(\mathcal{C}(X))\rightarrow\mathcal{P}(\mathcal{S}(X))$ be the function defined by \[ \psi_{X}(C)=\{Z\in\mathcal{S}(X):\forall Y\in C\ [Y\cap Z\neq\emptyset]\}. \] \end{definition} It is easy to see that \[ C\subseteq\phi_{X}(S)\text{ iff }S\subseteq\psi_{X}(C), \] for all $S\subseteq\mathcal{S}(X)$ and $C\subseteq\mathcal{C}(X)$. It follows that the pair $(\phi_{X},\psi_{X})$ is a Galois connection. \begin{proposition} \begin{enumerate} \item Given an $\mathcal{S}$-monotonic $DS$-space $\langle X,\mathcal{T},R\rangle$, the relation $G_{R}\subseteq X\times\mathcal{C}(X)$ defined as \[ \left(x,Y\right)\in G_{R}\text{ iff }Y\in\phi_{X}(R(x)) \] is such that $\langle X,\mathcal{T},G_{R}\rangle$ is a $\mathcal{C}$-monotonic $DS$-space and $m_{R}(U)=\mathbf{m}_{G_{R}}(U)$ for all $U\in D(X)$.
\item Given a $\mathcal{C}$-monotonic $DS$-space $\langle X,\mathcal{T},G\rangle$, the relation $R_{G}\subseteq X\times\mathcal{S}(X)$ defined as \[ \left(x,Z\right)\in R_{G}\text{ iff }Z\in\psi_{X}(G(x)) \] is such that $\langle X,\mathcal{T},R_{G}\rangle$ is an $\mathcal{S}$-monotonic $DS$-space and $\mathbf{m}_{G}(U)=m_{R_{G}}(U)$ for all $U\in D(X)$. \end{enumerate} \end{proposition} \begin{proof} 1. Let $\langle X,\mathcal{T},R\rangle$ be an $\mathcal{S}$-monotonic $DS$-space. We will see that $m_{R}(U)=\mathbf{m}_{G_{R}}(U)$ for all $U\in D(X)$. Let $U\in D(X)$ and $x\in m_{R}(U)$. Then, for all $Z\in R(x)$ we have that $Z\cap U\neq\emptyset$ and since $U\in\mathcal{C}(X)$, $U\in G_{R}(x)$. From $U\subseteq U$, we get that $x\in \mathbf{m}_{G_{R}}(U)$. Now, suppose that $x\in \mathbf{m}_{G_{R}}(U)$. So, there exists $Y\in G_{R}(x)$ such that $Y\subseteq U$ and since for all $Z\in R(x)$ we have that $Y\cap Z\neq\emptyset$, then $Z\cap U\neq\emptyset$ for all $Z\in R(x)$ and thus $x\in m_{R}(U)$. We have proved that $m_{R}(U)=\mathbf{m}_{G_{R}}(U)$ and since $\langle X,\mathcal{T},R\rangle$ is an $\mathcal{S}$-monotonic $DS$-space, $\mathbf{m}_{G_{R}}(U)\in D(X)$. Now, we will see that $G_{R}\left(x\right)=\bigcap\{(D_{U})^{c}:U\in D(X)\text{ and }x\in \mathbf{m}_{G_{R}}\left(U\right)^{c}\}$ for all $x\in X$. Let $x\in X$. It is easy to prove the inclusion $G_{R}\left(x\right)\subseteq\bigcap\{(D_{U})^{c}:U\in D(X)\text{ and }x\in \mathbf{m}_{G_{R}}\left(U\right)^{c}\}$. To prove the other inclusion, let $Y\in\bigcap\{(D_{U})^{c}:U\in D(X)\text{ and }x\in \mathbf{m}_{G_{R}}\left(U\right)^{c}\}$ and suppose that $Y\notin G_{R}(x)$. So, there exists $Z\in R(x)$ such that $Z\cap Y=\emptyset$. Since $Z\in\mathcal{S}(X)$ and $Y\in\mathcal{C}(X)$, there exists $U\in D(X)$ such that $Z\subseteq U^{c}$ and $Y\cap U^{c}=\emptyset$. Then $Z\cap U=\emptyset$, $x\notin m_{R}(U)=\mathbf{m}_{G_{R}}(U)$ and $Y\subseteq U$, i.e., $Y\in D_{U}$, which is a contradiction. 2.
Let $\langle X,\mathcal{T},G\rangle$ be a $\mathcal{C}$-monotonic $DS$-space. We will see that $\mathbf{m}_{G}(U)=m_{R_{G}}(U)$ for all $U\in D(X)$. Let $U\in D(X)$ and $x\in m_{R_{G}}(U)$. Suppose that $x\notin \mathbf{m}_{G}(U)$. Then, for all $Y\in G(x)$, $Y\cap U^{c}\neq\emptyset$. So, $U^{c}\in R_{G}(x)$, which contradicts the fact that $x\in m_{R_{G}}(U)$. Now, suppose that $x\in \mathbf{m}_{G}(U)$. Then there exists $Y\in G(x)$ such that $Y\subseteq U$. Let $Z\in R_{G}(x)$. So, we have that $Y\cap Z\neq\emptyset$, then $Z\cap U\neq\emptyset$. Thus $x\in m_{R_{G}}(U)$. We have proved that $\mathbf{m}_{G}(U)=m_{R_{G}}(U)$ and since $\langle X,\mathcal{T},G\rangle$ is a $\mathcal{C}$-monotonic $DS$-space, $m_{R_{G}}(U)\in D(X)$. Now, we will see that $R_{G}\left(x\right)=\bigcap\{L_{U}:U\in D(X)\text{ such that }x\in m_{R_{G}}(U)\}$ for all $x\in X$. Let $x\in X$. The proof of the inclusion $R_{G}(x)\subseteq\bigcap\{L_{U}:U\in D(X)\text{ such that }x\in m_{R_{G}}(U)\}$ is easy. Let $Z\in\bigcap\{L_{U}:U\in D(X)\text{ such that }x\in m_{R_{G}}(U)\}$ and suppose that $Z\notin R_{G}(x)$. So, there exists $Y\in G(x)$ such that $Z\cap Y=\emptyset$. Since $Z\in\mathcal{S}(X)$ and $Y\in\mathcal{C}(X)$, there exists $U\in D(X)$ such that $Z\subseteq U^{c}$ and $Y\cap U^{c}=\emptyset$. Then, $Y\subseteq U$, $x\in \mathbf{m}_{G}(U)=m_{R_{G}}(U)$ and $Z\cap U=\emptyset$, i.e., $Z\notin L_{U}$, which is a contradiction. \end{proof} \begin{proposition} \label{prop dual Hilbert space} Let $\langle\mathbf{A},m\rangle\in\mathcal{MDS}$. Then $\langle X({\mathbf{A}}),\mathcal{T}_{\mathbf{A}},R_{m}\rangle$ is an $\mathcal{S}$-monotonic $DS$-space and $\langle X({\mathbf{A}}),\mathcal{T}_{\mathbf{A}},G_{m}\rangle$ is a $\mathcal{C}$-monotonic $DS$-space. \end{proposition} \begin{proof} Let $U\in D(X({\mathbf{A}}))$. By definition, $U=\beta(a)$ for some $a\in A$.
By Lemma \ref{lemaim} we have that $m_{R_{m}}(\beta(a))=\mathbf{m}_{G_{m}}(\beta(a))=\beta(ma)\in D(X({\mathbf{A}}))$, i.e., $m_{R_{m}}(U),\mathbf{m}_{G_{m}}(U)\in D(X({\mathbf{A}}))$ for all $U\in D(X({\mathbf{A}}))$. Now we will show that for all $P\in X({\mathbf{A}})$ \[ R_{m}(P)=\bigcap\{L_{\beta(a)}:ma\in P\}. \] Let $P\in X({\mathbf{A}})$. It is clear that $R_{m}(P)\subseteq\bigcap\{L_{\beta(a)}:ma\in P\}$. On the other hand, let $Z\in\bigcap\{L_{\beta(a)}:ma\in P\}$; we will prove that $Z\in R_{m}(P)$. Suppose, contrary to our claim, that $Z\notin R_{m}(P)$. Then, there exists $a\in m^{-1}(P)$ such that $Z\cap\beta(a)=\emptyset$. By assumption, $Z\in L_{\beta(a)}$, i.e., $Z\cap\beta(a)\neq\emptyset$, which is a contradiction. Therefore, $Z\in R_{m}(P)$. The identity $G_{m}\left(P\right)={\textstyle \bigcap}\{(D_{\beta(a)})^{c}:ma\notin P\}$ is proved similarly. \end{proof} \begin{lemma}\label{rels} Let $\langle\mathbf{A},m\rangle\in\mathcal{MDS}$. Then $R_{m}(P)=\psi_{X(\mathbf{A})}(G_{m}(P))$ and $G_{m}(P)=\phi_{X(\mathbf{A})}(R_{m}(P))$. Therefore the sets $R_{m}(P)$ and $G_{m}(P)$ are closed sets of the Galois connection $(\phi_{X(\mathbf{A})},\psi_{X(\mathbf{A})})$. \end{lemma} \begin{proof} First, we will prove that $R_{m}(P)=\psi_{X(\mathbf{A})}(G_{m}(P))$. Let $Z\in R_{m}(P)$. Then, $m^{-1}(P)\cap I_{\mathbf{A}}(Z)=\emptyset$. Let $Y\in G_{m}(P)$. By definition, $F_{Y}\subseteq m^{-1}(P)$ and we get that $F_{Y}\cap I_{\mathbf{A}}(Z)=\emptyset$. From Proposition \ref{Sat y cerr}, $Y\cap Z\neq\emptyset$. Now, let $Z\in\mathcal{S}(X(\mathbf{A}))$ and suppose that for all $Y\in G_{m}(P)$, $Z\cap Y\neq\emptyset$. Suppose that $m^{-1}(P)\cap I_{\mathbf{A}}(Z)\neq\emptyset$ and let $a\in m^{-1}(P)\cap I_{\mathbf{A}}(Z)$. So, $[a)\subseteq m^{-1}(P)$, i.e., $\beta(a)\in G_{m}(P)$, and by hypothesis $\widehat{[a)}\cap Z=\beta(a)\cap Z\neq\emptyset$. Then $a\notin I_{\mathbf{A}}(Z)$, which is a contradiction. The other equality is proved analogously.\end{proof} From now on, we consider a monotonic $DS$-space as an $\mathcal{S}$-monotonic $DS$-space.
It is clear how to construct one kind of space from the other; we have chosen $\mathcal{S}$-monotonic $DS$-spaces as our default simply to keep the presentation short and to avoid repeating similar theorems and propositions. \begin{definition}Given $\langle\mathbf{A},m\rangle\in\mathcal{MDS}$, the structure $\langle X({\mathbf{A}}),\mathcal{T}_{\mathbf{A}},R_{m}\rangle$ \emph{is the monotonic $DS$-space associated to} $\langle\mathbf{A},m\rangle$. \end{definition} \begin{definition} The algebra $\langle D(X),m_{R}\rangle$ is the \emph{monotonic distributive semilattice associated to}\textit{ }the monotonic $DS$-space $\langle X,\mathcal{T},R\rangle$. \end{definition} Now, we are able to state the representation theorem. \begin{theorem}[Representation]\label{reo mon}Let $\langle\mathbf{A},m\rangle\in\mathcal{MDS}$. Then, the structure \linebreak{} $\langle\mathrm{Up}(X({\mathbf{A}})),\cap,m_{R_{m}},X({\mathbf{A}})\rangle$ is a monotonic distributive semilattice and the map $\beta\colon A\rightarrow\mathrm{Up}(X({\mathbf{A}}))$ defined by \[ \beta(a)=\{P\in X({\mathbf{A}}):a\in P\} \] is an injective homomorphism of monotonic distributive semilattices. \end{theorem} \begin{proof} It follows from Theorem \ref{rep Hilbert} and the fact that for all $a\in A$, $m_{R_{m}}(\beta(a))=\beta(ma)$. \end{proof} \begin{corollary} \label{rep cor}Let $\langle\mathbf{A},m\rangle\in\mathcal{MDS}$. Then, the map $\beta\colon A\rightarrow D(X(\mathbf{A}))$ defined by \[ \beta(a)=\{P\in X({\mathbf{A}}):a\in P\} \] is an isomorphism of monotonic distributive semilattices. \end{corollary} We note that if $\langle X,\mathcal{T},R\rangle$ is a monotonic $DS$-space, then $\langle X(D(X)),\mathcal{T}_{D(X)},R_{m_{R}}\rangle$ is the monotonic $DS$-space associated to $\langle D(X),m_{R}\rangle$.
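For later use, we unwind the definition of $R_{m_{R}}$ in this case: by (\ref{eq:Relation sat}), for $\mathcal{P}\in X(D(X))$ and $\mathcal{Z}\in\mathcal{S}(X(D(X)))$,
\[
(\mathcal{P},\mathcal{Z})\in R_{m_{R}}\text{ iff }m_{R}^{-1}(\mathcal{P})\cap I_{D(X)}(\mathcal{Z})=\emptyset,
\]
i.e., iff there is no $U\in D(X)$ such that $m_{R}(U)\in\mathcal{P}$ and $\beta_{D(X)}(U)\cap\mathcal{Z}=\emptyset$.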
In \cite{CelaniTopological} Celani proved that the map \[ H_{X}:X\rightarrow X(D(X)) \] defined by \[ H_{X}(x)=\{U\in D(X):x\in U\}, \] is a homeomorphism between $DS$-spaces and an order isomorphism with respect to $\leq$. Now we introduce the following definition. \begin{definition} Let $\langle X_{1},\mathcal{T}_{1},R_{1}\rangle$ and $\langle X_{2},\mathcal{T}_{2},R_{2}\rangle$ be two monotonic $DS$-spaces. A map $f\colon X_{1}\rightarrow X_{2}$ is an \emph{isomorphism of monotonic }$DS$\emph{-spaces} if it satisfies: \begin{enumerate} \item $f$ is a homeomorphism, \item $(x,Z)\in R_{1}$ if and only if $(f(x),f[Z])\in R_{2}$, for all $x\in X_{1}$ and for each $Z\in\mathcal{S}(X_{1})$, \end{enumerate} where $f[Z]=\{f(z):z\in Z\}$. \end{definition} \begin{proposition}Let $\langle X_{1},\mathcal{T}_{1}\rangle$ and $\langle X_{2},\mathcal{T}_{2}\rangle$ be two $DS$-spaces and let $f\colon X_{1}\rightarrow X_{2}$ be a homeomorphism. Then, $f[Z]\in\mathcal{S}(X_{2})$ for all $Z\in\mathcal{S}(X_{1})$ and for all $S\in\mathcal{S}(X_{2})$ there exists $Z\in\mathcal{S}(X_{1})$ such that $S=f[Z]$. \end{proposition} \begin{remark} \label{prop Hx} Let $\langle X,\mathcal{T}\rangle$ be a $DS$-space. Then, $H_{X}[Z]\in\mathcal{S}(X(D(X)))$ for all $Z\in\mathcal{S}(X)$ and for all $S\in\mathcal{S}(X(D(X)))$ there exists $Z\in\mathcal{S}(X)$ such that $S=H_{X}[Z]$. Also, we have that $H_{X}[U]=\{H_{X}(u):u\in U\}=\beta_{D(X)}(U)$ for all $U\in D(X)$. Then, \[ Z\cap U=\emptyset\Leftrightarrow H_{X}[Z]\cap\beta_{D(X)}(U)=\emptyset \] for all $Z\in\mathcal{S}(X)$ and $U\in D(X)$. \end{remark} \begin{theorem} \label{HX}Let $\langle X,\mathcal{T},R\rangle$ be a monotonic $DS$-space. Then, the map $H_{X}\colon X\rightarrow X(D(X))$ defined by \[ H_{X}(x)=\{U\in D(X):x\in U\} \] is an isomorphism of monotonic $DS$-spaces. \end{theorem} \begin{proof} By \cite{CelaniTopological} and \cite{CelaniCalomino}, it only remains to prove that $(x,Z)\in R$ iff $(H_{X}(x),H_{X}[Z])\in R_{m_{R}}$.
$\Rightarrow)$ Let $Z\in\mathcal{S}(X)$ such that $(x,Z)\in R$. We will see that $H_{X}[Z]\cap\beta_{D(X)}(U)\neq\emptyset$ for all $U\in m_{R}^{-1}(H_{X}(x))$. Let $U\in D(X)$ such that $U\in m_{R}^{-1}(H_{X}(x))$, i.e., $m_{R}(U)\in H_{X}(x)$. Then, $x\in m_{R}(U)$. Since $Z\in R(x)$, we get that $Z\cap U\neq\emptyset$ and by Remark \ref{prop Hx}, $H_{X}[Z]\cap\beta_{D(X)}(U)\neq\emptyset$. Therefore, $U\notin I_{D(X)}(H_{X}[Z])$. Hence $m_{R}^{-1}(H_{X}(x))\cap I_{D(X)}(H_{X}[Z])=\emptyset$, i.e., $(H_{X}(x),H_{X}[Z])\in R_{m_{R}}$. $\Leftarrow)$ Suppose that $(H_{X}(x),H_{X}[Z])\in R_{m_{R}}$. Then, $H_{X}[Z]\cap\beta_{D(X)}(U)\neq\emptyset$ for all $U\in m_{R}^{-1}(H_{X}(x))$, i.e., for all $U\in D(X)$ such that $x\in m_{R}(U)$. We will prove that $(x,Z)\in R$. To do so, suppose that $Z\notin R(x)$. From condition (2) of Definition \ref{t esp mon} we have that there exists $U\in D(X)$ such that $x\in m_{R}(U)$ and $Z\cap U=\emptyset$. Then, by Remark \ref{prop Hx}, $H_{X}[Z]\cap\beta_{D(X)}(U)=\emptyset$, which is a contradiction. Therefore, $Z\in R(x)$. \end{proof} By the following result we get that the dual spaces of monotonic distributive semilattices are exactly those triples $\langle X,\mathcal{T},R\rangle$, where $\langle X,\mathcal{T}\rangle$ is a $DS$-space, $R\subseteq X\times\mathcal{S}(X)$, $m_{R}(U)\in D(X)$, for all $U\in D(X)$, and $\langle X,\mathcal{T},R\rangle$ satisfies any of the equivalent conditions of Theorem \ref{equivalent}. \begin{lemma}\label{Rupset} Let $\langle X,\mathcal{T},R\rangle$ be a monotonic $DS$-space. Then $R(x)$ is an upset of \linebreak{} $\langle\mathcal{S}(X),\subseteq\rangle$, i.e., for all $S,Z\in\mathcal{S}(X)$, and for all $x\in X$, if $S\subseteq Z$ and $S\in R(x)$, then $Z\in R(x)$. \end{lemma} \begin{proof} Let $S,Z\in\mathcal{S}(X)$, and $x\in X$, such that $S\subseteq Z$ and $S\in R(x)$. If $Z\notin R(x)$, then by condition (2) of Definition \ref{t esp mon}, there exists $U\in D(X)$ such that $Z\cap U=\emptyset$ and $x\in m_{R}(U)$.
But this implies that $S\cap U=\emptyset$ and $x\in m_{R}(U)$, which is impossible because $S\in R(x)$. Thus, $R(x)$ is an upset of $\langle\mathcal{S}(X),\subseteq\rangle$.\end{proof} \begin{theorem} \label{equivalent} Let $\langle X,\mathcal{T}\rangle$ be a $DS$-space. Consider a relation $R\subseteq X\times\mathcal{S}(X)$ such that $m_{R}(U)=\{x\in X:\forall Z\in R(x)[Z\cap U\neq\emptyset]\}\in D(X)$ for all $U\in D(X)$. Then, the following conditions are equivalent: \begin{enumerate} \item $R(x)=\bigcap\{L_{U}:x\in m_{R}(U)\text{ and }U\in D(X)\}$ for all $x\in X$, \item For all $x\in X$ and for all $Z\in\mathcal{S}(X)$, if $(H_{X}(x),H_{X}[Z])\in R_{m_{R}}$ then $(x,Z)\in R$, \item $m_{R}(Z^{c})=\bigcup\{m_{R}(U):Z\subseteq U^{c}\text{ and }U\in D(X)\}$ for all $Z\in\mathcal{S}(X)$, and $R(x)$ is an upset of $\langle\mathcal{S}(X),\subseteq\rangle$ for all $x\in X$. \end{enumerate} \end{theorem} \begin{proof} $1.\Rightarrow2$. It was proved in the previous theorem. $2.\Rightarrow1$. Let $x\in X$. The inclusion $R(x)\subseteq\bigcap\{L_{U}:x\in m_{R}(U)\}$ is clear. Let $Z\in\mathcal{S}(X)$ such that $Z\in\bigcap\{L_{U}:x\in m_{R}(U)\}$. We will prove that $(H_{X}(x),H_{X}[Z])\in R_{m_{R}}$. Let $U\in D(X)$ such that $x\in m_{R}(U)$. Then, $Z\in L_{U}$, i.e., $Z\cap U\neq\emptyset$. By Remark \ref{prop Hx}, we have that $H_{X}[Z]\cap\beta_{D(X)}(U)\neq\emptyset$. Thus, we have that for all $U\in D(X)$ such that $U\in m_{R}^{-1}(H_{X}(x))$, $H_{X}[Z]\cap\beta_{D(X)}(U)\neq\emptyset$, i.e., $U\notin I_{D(X)}(H_{X}[Z])$. Therefore, $(H_{X}(x),H_{X}[Z])\in R_{m_{R}}$; by assumption, $Z\in R(x)$, and it follows that $\bigcap\{L_{U}:x\in m_{R}(U)\}\subseteq R(x)$. $1.\Rightarrow3$. Let $x\in m_{R}(Z^{c})$. Then, for all $S\in R(x)$ we have that $S\cap Z^{c}\neq\emptyset$. So, $Z\notin R(x)$. By assumption, $Z\notin\bigcap\{L_{U}:x\in m_{R}(U)\}$, i.e., there exists $U\in D(X)$ such that $x\in m_{R}(U)$ and $Z\cap U=\emptyset$.
Thus, $x\in\bigcup\{m_{R}(U):Z\subseteq U^{c}\text{ and }U\in D(X)\}$. The other inclusion is trivial. The last part is a consequence of Lemma \ref{Rupset}. $3.\Rightarrow1$. Let $x\in X$ and $Z\in\bigcap\{L_{U}:x\in m_{R}(U)\text{ and }U\in D(X)\}$. Suppose that $Z\notin R(x)$. We will see that $x\in m_{R}(Z^{c})$. On the contrary, suppose that $x\notin m_{R}(Z^{c})$. Then, there exists $S\in R(x)$ such that $S\cap Z^{c}=\emptyset$. So, $S\subseteq Z$ and, by assumption, $Z\in R(x)$, which is a contradiction. Thus, $x\in m_{R}(Z^{c})=\bigcup\{m_{R}(U):Z\subseteq U^{c}\text{ and }U\in D(X)\}$, i.e., there exists $U\in D(X)$ such that $x\in m_{R}(U)$ and $Z\cap U=\emptyset$, a contradiction. Therefore, $Z\in R(x)$. The other inclusion is trivial.\end{proof} \subsection{Representation of homomorphisms} In \cite{CelaniTopological} and \cite{CelaniCalomino} it was shown that there exists a duality between homomorphisms of distributive semilattices and certain binary relations called meet-relations. It is also known that $DS$-spaces with meet-relations form a category. Now, we shall study the representation of homomorphisms of monotonic distributive semilattices. Let $S\subseteq X_{1}\times X_{2}$ be a binary relation. Consider the mapping $h_{S}\colon\mathcal{P}(X_{2})\rightarrow\mathcal{P}(X_{1})$ defined by \[ h_{S}(U)=\{x\in X_{1}:S(x)\subseteq U\}. \] A \emph{meet-relation} between two $DS$-spaces $\langle X_{1},\mathcal{T}_{1}\rangle$ and $\langle X_{2},\mathcal{T}_{2}\rangle$ was defined as a subset $S\subseteq X_{1}\times X_{2}$ satisfying the following conditions: \begin{enumerate} \item For every $U\in D(X_{2})$, $h_{S}(U)\in D(X_{1})$, and \item $S(x)=\bigcap\{U\in D(X_{2}):S(x)\subseteq U\}$ for all $x\in X_{1}$. \end{enumerate} If $S$ is a meet-relation, then $h_{S}$ is a homomorphism between distributive semilattices. On the other hand, let $\mathbf{A},\mathbf{B}\in\mathcal{DS}$. Let $h\colon A\rightarrow B$ be a homomorphism.
The binary relation $S_{h}\subseteq X({\mathbf{B}})\times X({\mathbf{A}})$ defined by \[ (P,Q)\in S_{h}\text{ iff }h^{-1}[P]\subseteq Q \] is a meet-relation, where $h^{-1}[P]=\{a\in A:h(a)\in P\}$. \begin{definition} \label{cond rel}Let $\langle X_{1},\mathcal{T}_{1},R_{1}\rangle$ and $\langle X_{2},\mathcal{T}_{2},R_{2}\rangle$ be two monotonic $DS$-spaces. Let us consider a meet-relation $S\subseteq X_{1}\times X_{2}$. We say that $S$ is a \emph{monotonic meet-relation} if for all $x\in X_{1}$ and every $U\in D(X_{2})$ it satisfies \begin{equation} U^{c}\in R_{2}[S(x)]\text{~iff~}S^{-1}[U^{c}]\in R_{1}(x)\label{eq:homo} \end{equation} where $R_{2}[S(x)]=\{Z\in\mathcal{S}(X_{2}):\exists y\in S(x)\ [(y,Z)\in R_{2}]\}$. \end{definition} \begin{remark}Note that if $S\subseteq X_{1}\times X_{2}$ is a meet-relation between two $DS$-spaces $\langle X_{1},\mathcal{T}_{1}\rangle$ and $\langle X_{2},\mathcal{T}_{2}\rangle$, then $S^{-1}[U^{c}]=h_{S}(U)^{c}\in\mathcal{S}(X_{1})$. \end{remark} \begin{proposition} \label{equivalence homo}The condition (\ref{eq:homo}) is equivalent to the condition \[ h_{S}(m_{R_{2}}(U))=m_{R_{1}}(h_{S}(U)) \] for all $U\in D(X_{2})$, i.e., the mapping $h_{S}\colon D(X_{2})\rightarrow D(X_{1})$ is a homomorphism of monotonic distributive semilattices.\end{proposition} \begin{proof} $\Rightarrow)$ Suppose that for all $x\in X_{1}$ and every $U\in D(X_{2})$, $U^{c}\in R_{2}[S(x)]$ if and only if $S^{-1}[U^{c}]\in R_{1}(x)$. Let $x\in h_{S}(m_{R_{2}}(U))$, i.e., $S(x)\subseteq m_{R_{2}}(U)$. Then, for all $y\in S(x)$ we have that $y\in m_{R_{2}}(U)$. So, for all $y\in S(x)$ and for all $Z\in R_{2}(y)$ we have that $Z\cap U\neq\emptyset$. Then, for all $y\in S(x)$, $U^{c}\notin R_{2}(y)$. Thus, $U^{c}\notin R_{2}[S(x)]$. By hypothesis, $S^{-1}[U^{c}]\notin R_{1}(x)$ and, since $R_{1}(x)$ is an upset by Lemma \ref{Rupset}, no $Z\in R_{1}(x)$ satisfies $Z\subseteq S^{-1}[U^{c}]$. Therefore, $x\in m_{R_{1}}(S^{-1}[U^{c}]^{c})=m_{R_{1}}(h_{S}(U))$. The other inclusion is obtained by reversing the implications.
$\Leftarrow)$ Suppose that $h_{S}$ is a homomorphism. Let $U^{c}\in R_{2}[S(x)]$. Then, there exists $y\in S(x)$ such that $U^{c}\in R_{2}(y)$. So, $y\notin m_{R_{2}}(U)$. Thus, $S(x)\nsubseteq m_{R_{2}}(U)$, i.e., $x\notin h_{S}(m_{R_{2}}(U))$. By hypothesis, $x\notin m_{R_{1}}(h_{S}(U))$, i.e., there exists $Z\in R_{1}(x)$ such that $Z\cap h_{S}(U)=\emptyset$. We have that $Z\subseteq h_{S}(U)^{c}$ and, since $h_{S}(U)^{c}\in\mathcal{S}(X_{1})$, by Lemma \ref{Rupset} we have that $S^{-1}[U^{c}]=h_{S}(U)^{c}\in R_{1}(x)$. The other implication is obtained similarly.\end{proof} Now, we will study the composition of monotonic meet-relations. Let $X_{1}$, $X_{2}$ and $X_{3}$ be sets. Let us consider two relations $S_{1}\subseteq X_{1}\times X_{2}$ and $S_{2}\subseteq X_{2}\times X_{3}$. Then, the composition of $S_{1}$ and $S_{2}$ is the relation $S_{2}\circ S_{1}\subseteq X_{1}\times X_{3}$ defined by \[ S_{2}\circ S_{1}=\{(x,z)\in X_{1}\times X_{3}:\exists y\in X_{2}[(x,y)\in S_{1}\text{ and }(y,z)\in S_{2}]\}. \] \begin{proposition} Let $\langle X_{1},\mathcal{T}_{1},R_{1}\rangle$, $\langle X_{2},\mathcal{T}_{2},R_{2}\rangle$ and $\langle X_{3},\mathcal{T}_{3},R_{3}\rangle$ be three monotonic $DS$-spaces. Let us consider two monotonic meet-relations $S_{1}\subseteq X_{1}\times X_{2}$ and $S_{2}\subseteq X_{2}\times X_{3}$. Then, $S_{3}=S_{2}\circ S_{1}\subseteq X_{1}\times X_{3}$ is a monotonic meet-relation. \end{proposition} \begin{proof} It follows from the fact that $h_{S_{3}}(U)=h_{S_{2}\circ S_{1}}(U)=h_{S_{1}}\circ h_{S_{2}}(U)$ for all $U\in D(X_{3})$, together with Definition \ref{cond rel} and Proposition \ref{equivalence homo}. \end{proof} \begin{proposition} Let $\langle X,\mathcal{T},R\rangle$ be a monotonic $DS$-space. The specialization dual order $\leq\subseteq X\times X$ is a monotonic meet-relation. \end{proposition} \begin{proof} $\Rightarrow)$ Let $U\in D(X)$ and suppose that $U^{c}\in R([x))$.
Then, there exists $y\geq x$ such that $U^{c}\in R(y)$ and, since $R(y)\subseteq R(x)$, we have that $U^{c}\in R(x)$. As $U^{c}$ is a downset, $\leq^{-1}[U^{c}]=U^{c}$. The other implication is trivial. \end{proof} So, monotonic $DS$-spaces with monotonic meet-relations form a category where the identity arrow is the specialization dual order. We will denote this category by $\mathcal{MDSR}$. \begin{proposition}\label{homo cond} Let $\langle\mathbf{A},m_{\mathbf{A}}\rangle$, $\langle\mathbf{B},m_{\mathbf{B}}\rangle\in\mathcal{MDS}$. \begin{enumerate} \item Let $h\colon A\rightarrow B$ be a monotonic homomorphism. Then, the meet-relation $S_{h}$ satisfies condition (\ref{eq:homo}). \item Let $h\colon A\rightarrow B$ be a homomorphism and suppose that the meet-relation $S_{h}$ satisfies condition (\ref{eq:homo}). Then, $h$ is monotonic. \end{enumerate} \end{proposition} \begin{proof} 1. Suppose that $h$ is a monotonic homomorphism. It is easy to see that $h_{S_{h}}(\beta_{\mathbf{A}}(a))=\beta_{\mathbf{B}}(h(a))$ for all $a\in A$. Then, we have \begin{align*} h_{S_{h}}(m_{R_{m_{\mathbf{A}}}}\beta_{\mathbf{A}}(a)) & =h_{S_{h}}(\beta_{\mathbf{A}}(m_{\mathbf{A}}a))=\beta_{\mathbf{B}}(h(m_{\mathbf{A}}a))\\ & =\beta_{\mathbf{B}}(m_{\mathbf{B}}h(a))=m_{R_{m_{\mathbf{B}}}}(\beta_{\mathbf{B}}(h(a)))\\ & =m_{R_{m_{\mathbf{B}}}}(h_{S_{h}}(\beta_{\mathbf{A}}(a))) \end{align*} for all $a\in A$. 2. Suppose that $h$ is a homomorphism and that $S_{h}$ satisfies condition (\ref{eq:homo}). Then, $h_{S_{h}}(\beta_{\mathbf{A}}(a))=\beta_{\mathbf{B}}(h(a))$ for all $a\in A$.
So, we have \begin{align*} \beta_{\mathbf{B}}(h(m_{\mathbf{A}}a)) & =h_{S_{h}}(\beta_{\mathbf{A}}(m_{\mathbf{A}}a))=h_{S_{h}}(m_{R_{m_{\mathbf{A}}}}\beta_{\mathbf{A}}(a))\\ & =m_{R_{m_{\mathbf{B}}}}(h_{S_{h}}(\beta_{\mathbf{A}}(a)))=m_{R_{m_{\mathbf{B}}}}(\beta_{\mathbf{B}}(h(a)))\\ & =\beta_{\mathbf{B}}(m_{\mathbf{B}}h(a)) \end{align*} and since $\beta_{\mathbf{B}}$ is an injective function, we get that $h(m_{\mathbf{A}}a)=m_{\mathbf{B}}h(a)$ for all $a\in A$. \end{proof} From Theorem \ref{HX} and Proposition \ref{equivalence homo}, we conclude that the functor $\mathbb{D}:\mathcal{MDSR}\rightarrow\mathcal{MDSH}$ defined by \begin{enumerate} \item $\mathbb{D}(X)=\langle D(X),m_{R}\rangle$ if $\langle X,\mathcal{T},R\rangle$ is a monotonic $DS$-space, \item $\mathbb{D}(S)=h_{S}$ if $S$ is a monotonic meet-relation \end{enumerate} is a contravariant functor. By Theorem \ref{reo mon}, Corollary \ref{rep cor} and Proposition \ref{homo cond}, we conclude that the functor $\mathbb{X}:\mathcal{MDSH}\rightarrow \mathcal{MDSR}$ defined by \begin{enumerate} \item $\mathbb{X}(\mathbf{A})=\langle X(\mathbf{A});\mathcal{T}_{A},R_{m}\rangle$ if $\langle\mathbf{A},m\rangle$ is a monotonic distributive semilattice, \item $\mathbb{X}(h)=S_{h}$ if $h$ is a homomorphism of monotonic distributive semilattices \end{enumerate} is a contravariant functor. Therefore, we obtain the following result. \begin{corollary} The categories $\mathcal{MDSH}$ and $\mathcal{MDSR}$ are dually equivalent. \end{corollary} \section{Applications of the duality} In this section we consider some applications of the duality. We will study some important subclasses and show how our new duality extends the one developed in \cite{Celaniboole} for Boolean algebras. \subsection{Additional conditions} Now we will see how some additional conditions affect the relations associated with the monotonic operator.
The following formulas are $\pi$- and $\sigma$-canonical, i.e., their validity is preserved under taking $\pi$- and $\sigma$-canonical extensions. \begin{proposition} Let $\langle A,m\rangle\in\mathcal{MDS}$. Then, \begin{enumerate} \item $m1=1$ iff $\forall P\in X(\mathbf{A})\ [\emptyset\notin R_{m}(P)]$ iff $\forall P\in X(\mathbf{A})\ [X(\mathbf{A})\in G_{m}(P)]$; \item $m0=0$ iff $\forall P\in X(\mathbf{A})\ [X(\mathbf{A})\in R_{m}(P)]$ iff $\forall P\in X(\mathbf{A})\ [\emptyset\notin G_{m}(P)]$; \item $\forall a\in A\ [ma\leq a]$ iff $\forall P\in X(\mathbf{A})\ [\alpha(P^{c})=(P]\in R_{m}(P)]$ iff\\ $\forall P\in X(\mathbf{A})\forall Y\in G_{m}(P)\ [P\in Y]$; \item $\forall a\in A\ [a\leq ma]$ iff $\forall P\in X(\mathbf{A})\forall Z\in R_{m}(P)\ [P\in Z]$ iff \\ $\forall P\in X(\mathbf{A})\ [\hat{P}=[P)\in G_{m}(P)]$. \end{enumerate} \end{proposition} \begin{proof} 1. Suppose that $m1=1$ and that there exists $P\in X(\mathbf{A})$ such that $\emptyset\in R_{m}(P)$. Then, $m1=1\in P$ and $m^{-1}(P)\cap I_{\mathbf{A}}(\emptyset)=m^{-1}(P)\cap A=\emptyset$ and it follows that $m^{-1}(P)=\emptyset$, which is a contradiction. Now, suppose that for all $P\in X(\mathbf{A})$ we have $\emptyset\notin R_{m}(P)$ and that there exists $P\in X(\mathbf{A})$ such that $X(\mathbf{A})\notin G_{m}(P)$. Then, $F_{X(\mathbf{A})}=\{1\}\nsubseteq m^{-1}(P)$, i.e., $1\notin m^{-1}(P)$. Since $m^{-1}(P)$ is an upset and $1\notin m^{-1}(P)$, we get $m^{-1}(P)=\emptyset$. So, we have that $m^{-1}(P)\cap A=m^{-1}(P)\cap I_{\mathbf{A}}(\emptyset)=\emptyset$ and by definition $\emptyset\in R_{m}(P)$, which is a contradiction. Suppose that for all $P\in X(\mathbf{A})$ we have $X(\mathbf{A})\in G_{m}(P)$ and that $m1\neq1$. Then, there exists $P\in X(\mathbf{A})$ such that $m1\notin P$. So, we have that $F_{X(\mathbf{A})}=\{1\}\nsubseteq m^{-1}(P)$, which is a contradiction. 2. The proof is similar to 1. 3.
Suppose that $ma\leq a$ for all $a\in A$ and that there exists $P\in X(\mathbf{A})$ such that $(P]\notin R_{m}(P)$. Then, $m^{-1}(P)\cap P^{c}\neq\emptyset$. So, there exists $a\in A$ such that $ma\in P$ and $a\notin P$, which is a contradiction. Now, suppose that for all $P\in X(\mathbf{A})$ we have $(P]\in R_{m}(P)$ and suppose that there exists $P\in X(\mathbf{A})$ and there exists $Y\in G_{m}(P)$ such that $P\notin Y$. Then, $F_{Y}\subseteq m^{-1}(P)$ and there exists $a\in F_{Y}$ such that $a\notin P$. So, $a\in m^{-1}(P)\cap P^{c}$ and it follows that $(P]\notin R_{m}(P)$ which is a contradiction. Suppose that for all $P\in X(\mathbf{A})$ and for all $Y\in G_{m}(P)$ we have $P\in Y$ and suppose that $ma\nleq a$. Then, there exists $P\in X(\mathbf{A})$ such that $ma\in P$ and $a\notin P$. So, we have that $[a)\subseteq m^{-1}(P)$ but $P\notin\widehat{[a)}$ which is a contradiction. 4. The proof is similar to 3.\end{proof} Now, we will characterize the dual spaces of monotonic distributive meet-semilattices satisfying condition ($\mathbf{4}_{\square}$) $ma\leq m^{2}a$ or condition ($\mathbf{4}_{\Diamond}$) $m^{2}a\leq ma$ for every element $a$. We will see that condition $\mathbf{4}_{\square}$ is $\sigma$-canonical and that condition $\boldsymbol{4}_{\Diamond}$ is $\pi$-canonical. Let $\langle X,\mathcal{T},R\rangle$ be a monotonic $DS$-space. For any $U\in\mathrm{Up}(X)$, we define the operator $m_{R}^{2}\colon\mathrm{Up}(X)\rightarrow\mathrm{Up}(X)$ by $m_{R}^{2}(U)=m_{R}(m_{R}(U))$. \begin{remark}Let $\langle X,\mathcal{T},R\rangle$ be a monotonic $DS$-space and let $Z\in\mathcal{S}(X)$. Then recall that $m_{R}(Z^{c})^{c}=\bigcap\{m_{R}(U)^{c}:U\in D(X)\text{ and }Z\subseteq U^{c}\}$. \end{remark} \begin{proposition}Let $\langle\mathbf{A},m\rangle\in\mathcal{MDS}$ such that $m^{2}a\leq ma$ for all $a\in A$. Then, $m_{R_{m}}^{2}(U)\subseteq m_{R_{m}}(U)$ for all $U\in\mathrm{Up}(X(\mathbf{A}))$, i.e., $\boldsymbol{4}_{\Diamond}$ is $\pi$-canonical. 
\end{proposition} \begin{proof}Let $A\in\mathcal{MDS}$ such that $m^{2}a\leq ma$ for all $a\in A$. Then, for all $U\in D(X(\mathbf{A}))$ we have that $m_{R_{m}}^{2}(U)\subseteq m_{R_{m}}(U)$. First, we will see that $m_{R_{m}}^{2}(Z^{c})\subseteq m_{R_{m}}(Z^{c})$ for all $Z\in\mathcal{S}(X(\mathbf{A}))$. Let $P\in m_{R_{m}}^{2}(Z^{c})$. So, we get that $m_{R_{m}}(Z^{c})^{c}\notin R_{m}(P)$. Suppose that $P\notin m_{R_{m}}(Z^{c})$. Then, we get that $Z\in R_{m}(P)$. Since $R_{m}(P)=\bigcap\{L_{U}:U\in D(X)\text{ and }P\in m_{R_{m}}(U)\}$, there exists $U\in D(X)$ such that $P\in m_{R_{m}}(U)$ and $m_{R_{m}}(Z^{c})^{c}\cap U=\emptyset$. By the previous remark, there exists $V\in D(X)$ such that $Z\subseteq V^{c}$ and $m_{R_{m}}(V)^{c}\cap U=\emptyset$. Thus, $U\subseteq m_{R_{m}}(V)$ and by hypothesis $P\in m_{R_{m}}(U)\subseteq m_{R_{m}}^{2}(V)\subseteq m_{R_{m}}(V)$. Since $P\in m_{R_{m}}(V)$ and $Z\in R_{m}(P)$ we get that $Z\cap V\neq\emptyset$ which is a contradiction. Now, we will see that $m_{R_{m}}^{2}(U)\subseteq m_{R_{m}}(U)$ for all $U\in\mathrm{Up}(X(\mathbf{A}))$. Let $P\in m_{R_{m}}^{2}(U)$ and suppose that $P\notin m_{R_{m}}(U)$. Then, there exists $Z\in R_{m}(P)$ such that $Z\cap U=\emptyset$. So, $U\subseteq Z^{c}$ and we get that $m_{R_{m}}(U)\subseteq m_{R_{m}}(Z^{c})$. Thus, $m_{R_{m}}(U)\cap m_{R_{m}}(Z^{c})^{c}=\emptyset$ and since $P\in m_{R_{m}}^{2}(U)$ we get that $m_{R_{m}}(Z^{c})^{c}\notin R_{m}(P)$. Therefore $P\in m_{R_{m}}^{2}(Z^{c})\subseteq m_{R_{m}}(Z^{c})$ which is a contradiction because $Z\in R_{m}(P)$.\end{proof} Let $\langle X,\mathcal{T},R\rangle$ be a monotonic $DS$-space. We will define a relation $\bar{R}\subseteq\mathcal{S}(X)\times\mathcal{S}(X)$ by \[ (S,Z)\in\bar{R}\Leftrightarrow\forall x\in S\ (x,Z)\in R. \] We define $R^{2}\subseteq X\times\mathcal{S}(X)$ as follows \[ (x,Z)\in R^{2}\Leftrightarrow\exists S\in\mathcal{S}(X)\text{ such that }(x,S)\in R\text{ and }(S,Z)\in\bar{R}. 
\] \begin{definition}Let $\langle X,\mathcal{T},R\rangle$ be a monotonic $DS$-space. The relation $R$ is \emph{transitive} if and only if for all $x\in X$ and for all $Z\in\mathcal{S}(X)$, if $(x,Z)\in R^{2}$ then $(x,Z)\in R$. \end{definition} \begin{definition}Let $\langle X,\mathcal{T},R\rangle$ be a monotonic $DS$-space. The relation $R$ is \emph{weakly dense} if and only if for all $x\in X$ and for all $Z\in\mathcal{S}(X)$, if $(x,Z)\in R$ then $(x,Z)\in R^{2}$. \end{definition} \begin{corollary}Let $\langle A,m\rangle\in\mathcal{MDS}$. Then, $I_{\mathbf{A}}(m_{R_{m}}(\alpha(I)^{c})^{c})=(m(I)]$ where $m(I)=\{ma:a\in I\}$. \end{corollary} \begin{lemma} \label{Lemma ideal}Let $A\in\mathcal{MDS}$. Then for all $P\in X(\mathbf{A})$ and $I\in\mathrm{Id}(\mathbf{A})$, $(P,\alpha(I))\in R_{m}^{2}$ if and only if $I\subseteq\{a\in A:m^{2}a\in P^{c}\}$. \end{lemma} \begin{proof}$\Rightarrow)$ Suppose that $(P,\alpha(I))\in R_{m}^{2}$ and let $a\in I$. Suppose that $m^{2}a\in P$. Then, there exists $J\in\mathrm{Id}(\mathbf{A})$ such that $(P,\alpha(J))\in R_{m}$ and $(\alpha(J),\alpha(I))\in\bar{R}_{m}$. So, $m^{-1}(P)\cap J=\emptyset$ and $ma\notin J$. Thus, there exists $Q\in\alpha(J)$ such that $ma\in Q$. Therefore, $(Q,\alpha(I))\in R_{m}$ and $a\notin I$, which contradicts $a\in I$. $\Leftarrow)$ Suppose that $I\subseteq\{a\in A:m^{2}a\in P^{c}\}$. We will prove that $m_{R_{m}}(\alpha(I)^{c})^{c}\in R_{m}(P)$. Since $I_{\mathbf{A}}(m_{R_{m}}(\alpha(I)^{c})^{c})=(m(I)]$, suppose that there exists $a\in A$ such that $a\in m^{-1}(P)\cap(m(I)]$. So, $ma\in P$ and there exists $b\in I$ such that $a\leq mb$. Then, $ma\leq m^{2}b$ and we get that $m^{2}b\in P\cap P^{c}$, which is a contradiction. Therefore, $m_{R_{m}}(\alpha(I)^{c})^{c}\in R_{m}(P)$ and $(m_{R_{m}}(\alpha(I)^{c})^{c},\alpha(I))\in\bar{R}_{m}$, i.e., $(P,\alpha(I))\in R_{m}^{2}$.\end{proof} \begin{proposition} Let $A\in\mathcal{MDS}$. Then $ma\leq m^{2}a$ for all $a\in A$ if and only if $R_{m}$ is transitive.
\end{proposition} \begin{proof}$\Rightarrow)$ Suppose that $ma\leq m^{2}a$ for all $a\in A$ and that $(P,Z)\in R_{m}^{2}$. Then, $I_{\mathbf{A}}(Z)\subseteq\{a\in A:m^{2}a\in P^{c}\}$. Suppose that $m^{-1}(P)\cap I_{\mathbf{A}}(Z)\neq\emptyset$. We get that there exists $a\in A$ such that $ma\in P$ and $a\in I_{\mathbf{A}}(Z)$. Therefore $m^{2}a\notin P$ and $m^{2}a\in P$ which is a contradiction. Since $m^{-1}(P)\cap I_{\mathbf{A}}(Z)=\emptyset$ we get that $(P,Z)\in R_{m}$. $\Leftarrow)$ Suppose that $R_{m}$ is transitive and suppose that there exists $a\in A$ such that $ma\nleq m^{2}a$. Then, there exists $P\in X(\mathbf{A})$ such that $ma\in P$ and $m^{2}a\notin P$. So, $(a]\subseteq\{a\in A:m^{2}a\in P^{c}\}$ and by Lemma \ref{Lemma ideal}, $(P,\alpha(a))\in R_{m}^{2}$. We get that $(P,\alpha(a))\in R_{m}$, i.e., $ma\notin P$, which is a contradiction.\end{proof} \begin{proposition} Let $A\in\mathcal{MDS}$. Then $m^{2}a\leq ma$ for all $a\in A$ if and only if $R_{m}$ is weakly dense. \end{proposition} \begin{proof}$\Rightarrow)$ Suppose that $m^{2}a\leq ma$ for all $a\in A$ and that $(P,Z)\in R_{m}$. We will prove that $I_{\mathbf{A}}(Z)\subseteq\{a\in A:m^{2}a\in P^{c}\}$. Let $a\in I_{\mathbf{A}}(Z)$. Since $m^{-1}(P)\cap I_{\mathbf{A}}(Z)=\emptyset$ we get that $ma\notin P$ and therefore $m^{2}a\notin P$. By Lemma \ref{Lemma ideal}, $(P,Z)\in R_{m}^{2}$. $\Leftarrow)$ Suppose that $R_{m}$ is weakly dense. Suppose that there exists $a\in A$ such that $m^{2}a\nleq ma$. Then, there exists $P\in X(\mathbf{A})$ such that $m^{2}a\in P$ and $ma\notin P$. Then, $(P,\alpha(a))\in R_{m}$. So, $(P,\alpha(a))\in R_{m}^{2}$ and, by Lemma \ref{Lemma ideal}, $(a]\subseteq\{a\in A:m^{2}a\in P^{c}\}$, i.e., $m^{2}a\notin P$, which is a contradiction.\end{proof} For the sake of completeness we add the corresponding definitions and theorems for $\mathcal{C}$-monotonic $DS$-spaces. Let $\langle X,\mathcal{T},G\rangle$ be a $\mathcal{C}$-monotonic $DS$-space. 
For any $U\in\mathrm{Up}(X)$ we define the operator $\mathbf{m}_{G}^{2}\colon\mathrm{Up}(X)\rightarrow\mathrm{Up}(X)$ by $\mathbf{m}_{G}^{2}(U)=\mathbf{m}_{G}(\mathbf{m}_{G}(U))$. \begin{remark}Let $\langle X,\mathcal{T},G\rangle$ be a $\mathcal{C}$-monotonic $DS$-space and $Y\in\mathcal{C}(X)$. Recall that $\mathbf{m}_{G}(Y)=\bigcap\{\mathbf{m}_{G}(U):U\in D(X)\text{ and }Y\subseteq U\}$. \end{remark} \begin{proposition}Let $\langle\mathbf{A},m\rangle\in\mathcal{MDS}$ such that $ma\leq m^{2}a$ for all $a\in A$. Then, $\mathbf{m}_{G_{m}}(U)\subseteq \mathbf{m}_{G_{m}}^{2}(U)$ for all $U\in\mathrm{Up}(X(\mathbf{A}))$, i.e., $\boldsymbol{4}_{\square}$ is $\sigma$-canonical. \end{proposition} \begin{proof}Let $A\in\mathcal{MDS}$ such that $ma\leq m^{2}a$ for all $a\in A$. Then, for all $U\in D(X(\mathbf{A}))$ we have that $\mathbf{m}_{G_{m}}(U)\subseteq \mathbf{m}_{G_{m}}^{2}(U)$. First, we will see that $\mathbf{m}_{G_{m}}(Y)\subseteq \mathbf{m}_{G_{m}}^{2}(Y)$ for all $Y\in\mathcal{C}(X(\mathbf{A}))$. Let $P\in \mathbf{m}_{G_{m}}(Y)$. So, we get that $Y\in G_{m}(P)$. Suppose that $P\notin \mathbf{m}_{G_{m}}^{2}(Y)$. Then, we get that $\mathbf{m}_{G_{m}}(Y)\notin G_{m}(P)$. Since $G_{m}(P)=\bigcap\{(D_{U})^{c}:U\in D(X)\text{ and }P\notin \mathbf{m}_{G_{m}}(U)\}$, there exists $U\in D(X)$ such that $P\notin \mathbf{m}_{G_{m}}(U)$ and $\mathbf{m}_{G_{m}}(Y)\subseteq U$. By the previous remark, there exists $V\in D(X)$ such that $Y\subseteq V$ and $\mathbf{m}_{G_{m}}(Y)\subseteq \mathbf{m}_{G_{m}}(V)\subseteq U$. Thus, $P\in \mathbf{m}_{G_{m}}(V)$ and by hypothesis $P\in \mathbf{m}_{G_{m}}^{2}(V)\subseteq \mathbf{m}_{G_{m}}(U)$ which is a contradiction. Now, we will see that $\mathbf{m}_{G_{m}}(U)\subseteq \mathbf{m}_{G_{m}}^{2}(U)$ for all $U\in\mathrm{Up}(X(\mathbf{A}))$. Let $P\in \mathbf{m}_{G_{m}}(U)$. Then, there exists $Y\in G_{m}(P)$ such that $Y\subseteq U$. 
So, $\mathbf{m}_{G_{m}}(Y)\subseteq \mathbf{m}_{G_{m}}(U)$ and we get that $P\in \mathbf{m}_{G_{m}}(Y)\subseteq \mathbf{m}_{G_{m}}^{2}(Y)\subseteq \mathbf{m}_{G_{m}}^{2}(U)$. Thus, $P\in \mathbf{m}_{G_{m}}^{2}(U)$.\end{proof} Let $\langle X,\mathcal{T},G\rangle$ be a $\mathcal{C}$-monotonic $DS$-space. We will define a relation $\bar{G}\subseteq\mathcal{C}(X)\times\mathcal{C}(X)$ by \[ (Y,C)\in\bar{G}\Leftrightarrow\forall x\in Y\ (x,C)\in G. \] We define $G^{2}\subseteq X\times\mathcal{C}(X)$ as follows \[ (x,Y)\in G^{2}\Leftrightarrow\exists C\in\mathcal{C}(X)\text{ such that }(x,C)\in G\text{ and }(C,Y)\in\bar{G}. \] \begin{definition}Let $\langle X,\mathcal{T},G\rangle$ be a $\mathcal{C}$-monotonic $DS$-space. The relation $G$ is \emph{transitive} if and only if for all $x\in X$ and for all $Y\in\mathcal{C}(X)$, if $(x,Y)\in G^{2}$ then $(x,Y)\in G$. \end{definition} \begin{definition}Let $\langle X,\mathcal{T},G\rangle$ be a $\mathcal{C}$-monotonic $DS$-space. The relation $G$ is \emph{weakly dense} if and only if for all $x\in X$ and for all $Y\in\mathcal{C}(X)$, if $(x,Y)\in G$ then $(x,Y)\in G^{2}$. \end{definition} \begin{lemma}Let $A\in\mathcal{MDS}$ and $F\in\mathrm{Fi}(\mathbf{A})$. Then, $F_{\mathbf{m}_{G_{m}}(\hat{F})}=[m(F))$ where $m(F)=\{ma:a\in F\}$. \end{lemma} \begin{lemma} Let $A\in\mathcal{MDS}$. Then for all $P\in X(\mathbf{A})$ and $F\in\mathrm{Fi}(\mathbf{A})$, $(P,\hat{F})\in G_{m}^{2}\Leftrightarrow F\subseteq\{a\in A:m^{2}a\in P\}$. \end{lemma} \begin{proposition} Let $A\in\mathcal{MDS}$. Then $ma\leq m^{2}a$ for all $a\in A$ if and only if $G_{m}$ is weakly dense. \end{proposition} \begin{proposition} Let $A\in\mathcal{MDS}$. Then $m^{2}a\leq ma$ for all $a\in A$ if and only if $G_{m}$ is transitive.
\end{proposition} \subsection{Modal distributive semilattices} \label{subsection: Modal} In this section we consider distributive semilattices endowed with a normal modal operator, i.e., a function that preserves the greatest element and finite meets. \begin{definition}A \emph{modal distributive semilattice} is an algebra $\langle\mathbf{A},m\rangle$ where $\mathbf{A}$ is a distributive semilattice and $m\colon A\rightarrow A$ is an operator that satisfies the following conditions: \begin{enumerate} \item $m1=1$, \item $m(a\wedge b)=ma\wedge mb$ for all $a,b\in A$. \end{enumerate} \end{definition} It is clear that $m$ is a homomorphism and that a modal distributive semilattice is a monotonic distributive semilattice. Thus, a modal operator $m$ can be represented by means of an adequate multirelation defined in the dual space and by a meet-relation. Now we are going to identify what additional conditions this multirelation must satisfy. \begin{remark} Let $\langle X,\mathcal{T},R\rangle$ be a monotonic $DS$-space. Since $(x]=\bigcap\{U\in\mathcal{KO}(X):x\in U\}$ and $\mathcal{KO}(X)$ is a basis, we have that $(x]\in\mathcal{S}(X)$ for each $x\in X$. \end{remark} \begin{remark} Given a modal distributive semilattice $\langle\mathbf{A},m\rangle$, we note that $m^{-1}(F)\in\Fi({\mathbf{A}})$ for all $F\in\Fi({\mathbf{A}})$. We also note that $I_{\mathbf{A}}((Q])=Q^{c}$ for all $Q\in X({\mathbf{A}})$. \end{remark} \begin{definition} A monotonic $DS$-space $\langle X,\mathcal{T},R\rangle$ is called \emph{normal} if for any $x\in X$ and for every $Z\in\mathcal{S}(X)$ such that $Z\in R(x)$ there exists $z\in Z$ such that $(z]\in R(x)$. \end{definition} Note that in every normal monotonic $DS$-space $\langle X,\mathcal{T},R\rangle$ we have $\emptyset\notin R(x)$ for all $x\in X$. \begin{proposition} Let $\langle\mathbf{A},m\rangle$ be a monotonic distributive semilattice.
Then $\langle\mathbf{A},m\rangle$ is a modal distributive semilattice iff $\langle X({\mathbf{A}}),\mathcal{T}_{\mathbf{A}},R_{m}\rangle$ is a normal monotonic $DS$-space. \end{proposition} \begin{proof} $\Rightarrow)$ Let $(P,\alpha(I))\in R_{m}$. Then, $m^{-1}(P)\cap I=\emptyset$. Since $\langle\mathbf{A},m\rangle$ is a modal distributive semilattice, we have that $m^{-1}(P)\in\Fi({\mathbf{A}})$. So, there exists $Q\in X({\mathbf{A}})$ such that $m^{-1}(P)\subseteq Q$ and $Q\cap I=\emptyset$. Thus, $Q\in\alpha(I)$, and it is easy to see that $m^{-1}(P)\cap Q^{c}=\emptyset$; therefore, $(P,(Q])\in R_{m}$. $\Leftarrow)$ Let $\langle X({\mathbf{A}}),\mathcal{T}_{\mathbf{A}},R_{m}\rangle$ be a normal monotonic $DS$-space. Suppose that there exist $a,b\in A$ such that $ma\wedge mb\nleq m(a\wedge b)$. Then, there exists $P\in X({\mathbf{A}})$ such that $ma\wedge mb\in P$ but $m(a\wedge b)\notin P$. Note that $ma\wedge mb\leq ma\in P$ and $ma\wedge mb\leq mb\in P$. So, we have that $(P,\alpha(a\wedge b))\in R_{m}$ and since $\langle X({\mathbf{A}}),\mathcal{T}_{\mathbf{A}},R_{m}\rangle$ is a normal monotonic $DS$-space, there exists $Q\in\alpha(a\wedge b)$ such that $(P,(Q])\in R_{m}$. From $I_{\mathbf{A}}((Q])=Q^{c}$ we get that $m^{-1}(P)\cap Q^{c}=\emptyset$. Thus, $a,b\in m^{-1}(P)\subseteq Q$ and, since $Q$ is a filter, we have that $a\wedge b\in Q$. Hence, $Q\cap(a\wedge b]\neq\emptyset$, which contradicts the fact that $Q\in\alpha(a\wedge b)$. Therefore $\langle\mathbf{A},m\rangle$ is a modal distributive semilattice.\end{proof} Since a normal modal operator is a homomorphism of meet-semilattices, we can also interpret it through a meet-relation in the dual space. We will show the relationship between the multirelation and the meet-relation associated with the same operator. Let $\langle X,\mathcal{T}\rangle$ be a $DS$-space and let $S\subseteq X\times X$ be a meet-relation.
Let $\mathrm{m}_{S}\colon\mathrm{Up}(X)\rightarrow\mathrm{Up}(X)$ be the operator defined by \[ \mathrm{m}_{S}(U)=\{x\in X:S(x)\subseteq U\} \] where $S(x)=\{y\in X:(x,y)\in S\}$. We define a multirelation $R_{S}\subseteq X\times\mathcal{S}(X)$ by \[ (x,Z)\in R_{S}\Leftrightarrow S(x)\cap Z\neq\emptyset. \] On the other hand, let $\langle X,\mathcal{T},R\rangle$ be a normal monotonic $DS$-space. We define a relation $S_{R}\subseteq X\times X$ by \[ (x,z)\in S_{R}\Leftrightarrow(x,(z])\in R. \] \begin{proposition} \begin{enumerate} \item Let $\langle X,\mathcal{T},R\rangle$ be a normal monotonic $DS$-space. Then, the structure $\langle X,\mathcal{T},S_{R}\rangle$ is a $DS$-space with a meet-relation $S_R$ such that $m_{R}(U)=\mathrm{m}_{S_{R}}(U)$ for all $U\in\mathrm{Up}(X)$ and $R=R_{S_{R}}$. \item Let $\langle X,\mathcal{T},S\rangle$ be a $DS$-space with a meet-relation $S\subseteq X\times X$. Then, the structure $\langle X,\mathcal{T},R_{S}\rangle$ is a normal monotonic $DS$-space such that $m_{R_{S}}(U)=\mathrm{m}_{S}(U)$ for all $U\in\mathrm{Up}(X)$, and $S=S_{R_{S}}$. \end{enumerate} \end{proposition} \begin{proof} 1. Let $U\in\mathrm{Up}(X)$. We will prove that $m_{R}(U)=\mathrm{m}_{S_{R}}(U)$. Let $x\in m_{R}(U)$ and $z\in S_{R}(x)$. Then, $(x,(z])\in R$ and we get that $(z]\cap U\neq\emptyset$. Since $U$ is an upset, we have that $z\in U$. Thus, $S_{R}(x)\subseteq U$ and $x\in \mathrm{m}_{S_{R}}(U)$. Let $x\in \mathrm{m}_{S_{R}}(U)$ and $Z\in R(x)$. Then, there exists $z\in Z$ such that $(z]\in R(x)$. By definition, $z\in S_{R}(x)$. So, $z\in U$ and we get that $Z\cap U\neq\emptyset$. Thus, $x\in m_{R}(U)$. So, we have that $m_{R}(U)=\mathrm{m}_{S_{R}}(U)\in D(X)$ for all $U\in D(X)$. We will see that $S_{R}(x)=\bigcap\{U\in D(X):S_{R}(x)\subseteq U\}$ for all $x\in X$. Let $x,z\in X$ such that $z\in\bigcap\{U\in D(X):S_{R}(x)\subseteq U\}$. Then, $z\in U$ for all $U\in D(X)$ such that $x\in \mathrm{m}_{S_{R}}(U)=m_{R}(U)$.
By condition (4) of Definition \ref{t esp mon}, $(z]\in R(x)$. Therefore, $z\in S_{R}(x)$. Now, let $(x,Z)\in R$. We will see that $S_{R}(x)\cap Z\neq\emptyset$. Since $\langle X,\mathcal{T},R\rangle$ is a normal monotonic $DS$-space, there exists $z\in Z$ such that $(z]\in R(x)$. By definition, $z\in S_{R}(x)\cap Z$. Let $(x,Z)\in R_{S_{R}}$. Then, $S_{R}(x)\cap Z\neq\emptyset$. Let $z\in Z$ such that $(x,z)\in S_{R}$. By definition of $S_{R}$, $(x,(z])\in R$. So, $(z]\subseteq Z$ and by Lemma \ref{Rupset}, $(x,Z)\in R$. Therefore $R=R_{S_{R}}$. 2. Let $U\in\mathrm{Up}(X)$. We will prove that $m_{R_{S}}(U)=\mathrm{m}_{S}(U)$. Let $x\in m_{R_{S}}(U)$ and $z\in S(x)$. Then, $S(x)\cap(z]\neq\emptyset$. By definition of $R_{S}$, $(x,(z])\in R_{S}$. So, $(z]\cap U\neq\emptyset$ and since $U$ is an upset, we have that $z\in U$. Thus, $S(x)\subseteq U$ and $x\in \mathrm{m}_{S}(U)$. Let $x\in \mathrm{m}_{S}(U)$ and let $Z\in R_{S}(x)$. Then, there exists $z\in Z$ such that $z\in S(x)$. So, $z\in U$ and we get that $Z\cap U\neq\emptyset$. Thus, $x\in m_{R_{S}}(U)$. So, we have that $\mathrm{m}_{S}(U)=m_{R_{S}}(U)\in D(X)$ for all $U\in D(X)$. We will see that $\langle X,\mathcal{T},R_{S}\rangle$ is a normal monotonic $DS$-space. Let $x\in X$, $Z\in\bigcap\{L_{U}:x\in m_{R_{S}}(U)\}$ and suppose that $Z\notin R_{S}(x)$. By definition, $Z\cap S(x)=\emptyset$ and since $S(x)$ is a closed subset, there exists $U\in D(X)$ such that $Z\subseteq U^{c}$ and $S(x)\cap U^{c}=\emptyset$, i.e., $x\in \mathrm{m}_{S}(U)=m_{R_{S}}(U)$ and $Z\cap U=\emptyset$, which is a contradiction. Therefore, $R_{S}(x)=\bigcap\{L_{U}:x\in m_{R_{S}}(U)\}$. Now, let $x\in X$ and $Z\in R_{S}(x)$. Then, there exists $z\in S(x)\cap Z$. So, $S(x)\cap(z]\neq\emptyset$ and by definition of $R_{S}$, $(x,(z])\in R_{S}$. Finally, let $(x,z)\in S$. Then $z\in S(x)\cap(z]$ and, by definition, $(x,(z])\in R_{S}$. Therefore, $(x,z)\in S_{R_{S}}$. On the other hand, let $(x,z)\in S_{R_{S}}$.
Then, $(x,(z])\in R_{S}$. So, $S(x)\cap(z]\neq\emptyset$. Since $S$ is a meet-relation, $S(x)$ is an upset. Thus, $z\in S(x)$. Therefore $S=S_{R_{S}}$.\end{proof} \begin{remark}Note that as a particular case we get the relation defined in \cite{Gehrke} by Mai Gehrke, where she gave an algebraic derivation of the space associated with a bounded distributive lattice with a modality $\square$ that preserves 1 and $\wedge$ based on the canonical extension. Given the modality $\square:A\rightarrow A$, since the extension $\square^{\sigma}=\square^{\pi}$ is completely meet-preserving, it is completely determined by its action on the completely meet prime elements of the canonical extension. Working with Stone spaces, the family $\mathcal{S}(X(\mathbf{A}))$ is the family of all basic saturated sets and recall that $M^{\infty}(\mathrm{Up}(X(\mathbf{A})))=\{\alpha(P^{c})^{c}=(P]^{c}:P\in X(\mathbf{A})\}$. The relation $S\subseteq X(\mathbf{A})\times X(\mathbf{A})$ defined in \cite{Gehrke} is: \[ (P,Q)\in S\Leftrightarrow\square^{\pi}((Q]^{c})\subseteq(P]^{c}. \] It is easy to see that $\square^{\pi}((Q]^{c})\subseteq(P]^{c}\Leftrightarrow\square^{-1}(P)\cap Q^{c}=\emptyset$. \end{remark} \subsection{Boolean Algebras with a monotonic operator} \label{subsection: Boolean} In \cite{Celaniboole} (see also \cite{Hansen} and \cite{HansenKupkePacuit}) S. Celani developed a topological duality between monotonic Boolean algebras and descriptive monotonic frames. These frames are actually monotonic $DS$-frames. Recall that a Boolean algebra with a normal monotonic operator is a pair $\langle\mathbf{A},\square\rangle$ such that $\mathbf{A}$ is a Boolean algebra and $\square$ is an operator defined on $A$ such that \begin{enumerate} \item $\square(a\wedge b)\leq\square a\wedge\square b$ for all $a,b\in A$, \item $\square1=1$.
\end{enumerate} Also, recall that a \emph{Stone space} is a topological space $X=\langle X,\tau\rangle$ that is \textit{compact and totally disconnected}, i.e., given distinct points $x,y\in X$, there exists a clopen (closed and open) subset $U$ of $X$ such that $x\in U$ and $y\notin U$. Let $\mathrm{Clop}(X)$ be the family of closed and open subsets of a Stone space $\langle X,\tau\rangle$. A \emph{descriptive} $m$-frame \cite{Celaniboole}, or \emph{monotonic modal space}, is a triple $\langle X,R,\tau\rangle$ such that \begin{enumerate} \item $\langle X,\tau\rangle$ is a Stone space, \item $R\subseteq X\times\mathcal{C}_{0}(X)$, where $\mathcal{C}_{0}(X)=\mathcal{C}(X)-\{\emptyset\}$, \item $\square_{R}(U)=\{x\in X:\forall Y\in R(x)\ (Y\cap U\neq\emptyset)\}\in\mathrm{Clop}(X)$ for all $U\in\mathrm{Clop}(X)$, \item $R(x)=\bigcap\{L_{U}:x\in\square_{R}(U)\}$, for all $x\in X$. \end{enumerate} \begin{remark} It is well known that if $\langle X,\tau\rangle$ is a Stone space, then $\mathrm{Clop}(X)$ is a basis for the topology and $\mathcal{S}(X)=\mathcal{C}(X)$. As $X$ is Hausdorff, the only irreducible closed sets are singletons, so $X$ is sober. Then, it is easy to see that any descriptive $m$-frame is a monotonic $DS$-space.\end{remark} \begin{remark} Let $\mathbf{A}=\langle A,\vee,\wedge,\lnot,0,1\rangle$ be a Boolean algebra. Note that if $F$ is a filter of $\mathbf{A}$, then the set $I_{F}=\{\lnot a:a\in F\}$ is an ideal of $\mathbf{A}$, and thus $\hat{F}=\alpha(I_{F})$. Similarly, if $I$ is an ideal of $\mathbf{A}$, then the set $F_{I}=\{\lnot a:a\in I\}$ is a filter of $\mathbf{A}$, and $\alpha(I)=\hat{F_{I}}$. \end{remark} Let $\langle\mathbf{A},\square\rangle$ be a Boolean algebra endowed with a normal monotonic operator $\square$. Let $\Diamond\colon A\rightarrow A$ be the dual operator defined by $\Diamond a=\lnot\square\lnot a$, for each $a\in A$. 
Following the construction of the $\mathcal{S}$- and $\mathcal{C}$-monotonic spaces, we have four relations: $G_{\Diamond}$, $G_{\square}$, $R_{\Diamond}$ and $R_{\square}$. The following proposition shows the relationships between them. \begin{proposition} Let $\langle\mathbf{A},\square\rangle$ be a Boolean algebra endowed with a monotonic operator. Then, $G_{\Diamond}=R_{\square}$ and $G_{\square}=R_{\Diamond}$.\end{proposition} \begin{proof} We will prove that $G_{\Diamond}=R_{\square}$. Let $F\in\mathrm{Fi}(\mathbf{A})$ be such that $(P,\hat{F})\in G_{\Diamond}$. Then, $F\subseteq\Diamond^{-1}(P)$, i.e., for all $a\in F,$ $\lnot\square(\lnot a)\in P$. So, $\square(\lnot a)\notin P$, i.e., $\lnot a\notin\square^{-1}(P)$. Thus, $\square^{-1}(P)\cap I_{F}=\emptyset$ and $(P,\alpha(I_{F}))\in R_{\square}$. By the previous remark, $\hat{F}=\alpha(I_{F})$. The proof of the other inclusion is similar. The other equality is proved analogously.\end{proof}
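\begin{remark} As a quick sanity check (a standard derivation, not spelled out above), the defining properties of $\square$ dualize to $\Diamond$: \begin{align*} \Diamond 0 &= \lnot\square\lnot 0=\lnot\square 1=\lnot 1=0,\\ \Diamond(a\vee b) &= \lnot\square\lnot(a\vee b)=\lnot\square(\lnot a\wedge\lnot b)\geq\lnot(\square\lnot a\wedge\square\lnot b)=\Diamond a\vee\Diamond b, \end{align*} using $\square(\lnot a\wedge\lnot b)\leq\square\lnot a\wedge\square\lnot b$ and $\square1=1$, together with the fact that negation is order-reversing. \end{remark}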
TITLE: Inverse function of isomorphism is also isomorphism QUESTION [18 upvotes]: Let $G$ be a group, and let $p:G\rightarrow G$ be an isomorphism. Why is $p^{-1}$ also an isomorphism? We know that $p(a)p(b)=p(ab)$ for any elements $a,b\in G$. We also know $p(a^{-1})=p(a)^{-1}$ for any element $a\in G$ (follows from the first statement.) How would one show $p^{-1}(ab)=p^{-1}(a)p^{-1}(b)$? REPLY [3 votes]: Because the inverse is strictly related to the function itself. Writing $y=p(x)$ and $\tilde{y}=p(\tilde{x})$, this gives $p^{-1}(y\tilde{y})=p^{-1}(p(x)p(\tilde{x}))=p^{-1}(p(x\tilde{x}))=x\tilde{x}=p^{-1}(y)p^{-1}(\tilde{y})$. But this relation is really just a "happy accident". Most properties are not inherited, e.g. the inverse of a continuous function is not necessarily continuous, and likewise for differentiability. Moreover, I'd like to stress that -- despite the fact that most textbooks define an isomorphism of groups to be a bijective homomorphism -- what one really desires is a homomorphism whose inverse is a homomorphism as well, which luckily comes for free ;-)
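A concrete illustration of the continuity remark (an added example, not part of the original answer): the map $f\colon[0,2\pi)\rightarrow S^{1}$, $f(t)=(\cos t,\sin t)$, is a continuous bijection, but $f^{-1}$ is not continuous at $(1,0)$, since points on the circle approaching $(1,0)$ from below are sent near $2\pi$ while $f^{-1}(1,0)=0$. So bijectivity plus structure preservation does not in general yield a structure-preserving inverse; the computation above shows that groups are the happy exception.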
\begin{document} \author{Pavel \v{C}oupek} \address[Pavel \v{C}oupek]{Department of Mathematics, Purdue University} \email{pcoupek@purdue.edu} \date{} \title{Crystalline condition for $\boldsymbol{\ainf}$--cohomology and ramification bounds} \begin{abstract} \noindent For a prime $p>2$ and a smooth proper $p$--adic formal scheme $\mathscr{X}$ over $\oh_K$ where $K$ is a $p$--adic field, we study a series of conditions \Crs, $s\geq 0$ that partially control the $G_K$--action on the image of the associated Breuil--Kisin prismatic cohomology $\H^i_{\Prism}(\mathscr{X}/ \Es)$ inside the $\ainf$--prismatic cohomology $\H^i_{\Prism}(\mathscr{X}_{\ainf}/ \ainf)$. The condition \Cr{0} is a criterion for a Breuil--Kisin--Fargues $G_K$--module to induce a crystalline representation used by Gee and Liu in \cite[Appendix F]{EmertonGee1}, and thus leads to a proof of crystallinity of $\H^i_{\et}(\mathscr{X}_{\overline{\eta}}, \mathbb{Q}_p)$ that avoids the crystalline comparison. The higher conditions $\Crs$ are used to adapt the strategy of Caruso and Liu from \cite{CarusoLiu} to establish ramification bounds for the mod $p$ representations $\H^{i}_{\et}(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p\mathbb{Z}),$ for arbitrary $e$ and $i$, which extend or improve existing bounds in various situations. \end{abstract} \maketitle \tableofcontents \section{Introduction} Let $k$ be a perfect field of characteristic $p>2$ and $K_0=W(k)[1/p]$ the associated absolutely unramified field. Let $K/K_0$ be a totally ramified finite extension with ramification index $e$, and denote by $G_K$ its absolute Galois group. 
The goal of the present paper is to provide new bounds for ramification of the mod $p$ representations of $G_K$ that arise as the \'{e}tale cohomology groups $\H^i_{\et}(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p\mathbb{Z})$ in terms of $p, i$ and $e$, where $\mathscr{X}$ is a smooth proper $p$--adic formal scheme over $\oh_K$ (and $\mathscr{X}_{\overline{\eta}}$ is the geometric adic generic fiber). Concretely, let us denote by $G_K^{\mu}$ the $\mu$--th ramification group of $G_K$ in the upper numbering (in the standard convention, e.g. \cite{SerreLocalFields}) and $G_K^{(\mu)}=G_K^{\mu-1}$. The main result is as follows. \begin{thm}[Theorem~\ref{thm:FinalRamification}]\label{thm:IntroMain} Set $$\alpha=\left\lfloor\mathrm{log}_p\left( \mathrm{max} \left\{\frac{ip}{p-1}, \frac{(i-1)e}{p-1}\right\}\right)\right\rfloor+1, \;\;\; \beta=\frac{1}{p^\alpha}\left(\frac{iep}{p-1}-1\right).$$ Then: \begin{enumerate}[(1)] \item{The group $G_K^{(\mu)}$ acts trivially on $\H^i_{\et}(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p\mathbb{Z})$ when $\mu>1+e\alpha+\mathrm{max}\left\{\beta, \frac{e}{p-1}\right\}.$} \item{Denote by $L$ the field $\overline{K}^H$ where $H$ is the kernel of the $G_K$--representation $\rho$ given by $\H^i_{\et}(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p\mathbb{Z})$. Then $$v_K(\mathcal{D}_{L/K})<1+e\alpha+\beta,$$ where $\mathcal{D}_{L/K}$ denotes the different of the extension $L/K$ and $v_K$ denotes the additive valuation on $K$ normalized so that $v_K(K^{\times})=\mathbb{Z}.$} \end{enumerate} \end{thm} \noindent In particular, there are no restrictions on the size of $e$ and $i$ with respect to $p$. \begin{rem} As the constants $\alpha, \beta$ appearing in Theorem~\ref{thm:IntroMain} are quite complicated, let us draw some non--optimal, but more tractable consequences. 
The group $G_K^{(\mu)}$ acts trivially on $\H^i_{\et}(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p\mathbb{Z})$ when one of the following occurs: \begin{enumerate}[(1)] \item{when $e \leq p$ and $\mu>1+e\left(\left\lfloor \mathrm{log}_p\left(\frac{ip}{p-1}\right)\right\rfloor+1\right)+e,$} \item{when $e>p$ and $\mu>1+e\left(\left\lfloor \mathrm{log}_p\left(\frac{ie}{p-1}\right)\right\rfloor+1\right)+p,$\footnote{Strictly speaking, to obtain this precise form one has to replace $(i-1)e$ in $\alpha$ from Theorem~\ref{thm:IntroMain} by $ie,$ and modify $\beta$ appropriately; one can show that such form of Theorem~\ref{thm:IntroMain} is still valid.}} \item{when $i=1$ ($e, p$ are arbitrary) and $\mu>1+e\left(1+\frac{1}{p-1}\right).$} \end{enumerate} \end{rem} \vspace{1em} Let us briefly summarize the history of related results. Questions of this type originate in Fontaine's paper \cite{Fontaine}, where he proved that for a finite flat group scheme $\Gamma$ over $\oh_K$ that is annihilated by $p^n$, $G_K^{(\mu)}$ acts trivially on $\Gamma(\overline{K})$ when $\mu>e(n+1/(p-1))$; this is used as an important step in proving that there are no non--trivial abelian schemes over $\mathbb{Z}$. In the same paper, Fontaine conjectured that general $p^n$--torsion cohomology would follow the same pattern: given a proper smooth variety $X$ over $K$ with good reduction, $G_K^{(\mu)}$ should act trivially on $\H^i_{\et}(X_{\overline{K}}, \mathbb{Z}/p^n\mathbb{Z})$ when $\mu>e(n+i/(p-1))$. This conjecture has been subsequently proved by Fontaine himself (\cite{Fontaine2}) in the case when $e=n=1, i<p-1$ and by Abrashkin (\cite{Abrashkin}; see also \cite{Abrashkin2}) when $e=1, i<p-1$ and $n$ is arbitrary. 
This is achieved by using the torsion Fontaine--Laffaille modules (introduced in \cite{FontaineLaffaille}), which parametrize quotients of pairs of $G_K$--stable lattices in crystalline representations with Hodge--Tate weights in $[0, i]$ (such as $\H^i_{\et}({X}_{\overline{K}}, \mathbb{Q}_p)^{\vee}$). The (duals of the) representations $\H^i_{\et}({X}_{\overline{K}}, \mathbb{Z}/p^n\mathbb{Z})$ are included among these thanks to a comparison theorem of Fontaine--Messing (\cite{FontaineMessing}). Similarly to the original application, these ramification bounds lead to a scarcity result for the existence of smooth proper $\mathbb{Z}$--schemes. Various extensions to the semistable case subsequently followed. Under the assumption $i<p-1$ (and arbitrary $e$), Hattori proved in \cite{Hattori} a ramification bound for $p^n$--torsion quotients of lattices in semistable representations with Hodge--Tate weights in the range $[0, i],$ using (a variant of) Breuil's filtered $(\phi_r, N)$--modules. Thanks to a comparison result between log--crystalline and \'{e}tale cohomology by Caruso (\cite{CarusoLogCris}), this results in a ramification bound for $\H^i_{\et}({X}_{\overline{K}}, \mathbb{Z}/p^n\mathbb{Z})$ when ${X}$ is proper with semistable reduction, assuming $ie<p-1$ when $n=1$ and $(i+1)e<p-1$ when $n \geq 2$\footnote{Recently, in \cite{LiLiu} Li and Liu extended Caruso's result to the range $ie<p-1$ regardless of $n$, for a proper and smooth (formal) scheme $\mathscr{X}/\oh_K$. In view of this, results of \cite{Hattori} should apply in these situations as well.}. These results were further extended by Caruso and Liu in \cite{CarusoLiu} for all $p^n$--torsion quotients of pairs of semistable lattices with Hodge--Tate weights in $[0, i]$, without any restriction on $i$ or $e$. The proof uses the theory of $(\varphi , \widehat{G})$--modules, which are objects suitable for the description of lattices in semistable representations. 
Roughly speaking, a $(\varphi , \widehat{G})$--module consists of a Breuil--Kisin module $M$ and the datum of an action of $\widehat{G}=\mathrm{Gal}(K(\mu_{p^{\infty}}, \pi^{1/p^\infty})/K)$ on $\widehat{M}=M \otimes_{\Es, \varphi} \widehat{\mathcal{R}}$ where $\widehat{\mathcal{R}}$ is a suitable subring of Fontaine's period ring $\ainf=W(\oh_{\mathbb{C}_{K}^\flat})$ (and $\pi\in K$ is a fixed choice of a uniformizer). An obstacle to applying the results of \cite{CarusoLiu} to the torsion \'{e}tale cohomology groups $\H^i_{\et}(X_{\overline{K}}, \mathbb{Z}/p\mathbb{Z})$ is that it is not quite clear when (duals of) such representations come as a quotient of two semistable lattices with Hodge--Tate weights in $[0, i].$ This is indeed the case in the situation when $e=1$, $i<p-1$ and $X$ has good reduction by the aforementioned Fontaine--Messing theorem, and it was also shown in the case $i=1$ (no restriction on $e, p$) for $X$ with semistable reduction by Emerton and Gee in \cite{EmertonGee1}, but in general the question seems open. \vspace{1em} Nevertheless, the idea of the proof of Theorem~\ref{thm:IntroMain} is to follow the general strategy of Caruso and Liu. While one does not necessarily have semistable lattices and the associated $(\varphi, \widehat{G})$--modules to work with, a suitable replacement comes from the recently developed cohomology theories of Bhatt--Morrow--Scholze and Bhatt--Scholze (\cite{BMS1, BMS2, BhattScholze}). 
Concretely, to a smooth $p$--adic formal scheme $\mathscr{X}$ one can associate the ``$p^n$--torsion prismatic cohomologies'' $$\R\Gamma_{\Prism, n}(\mathscr{X}/ \Es)=\R\Gamma_{\Prism}(\mathscr{X}/ \Es)\stackrel{{\mathsf{L}}}{\otimes}\mathbb{Z}/p^n\mathbb{Z}, \;\;\;\;\;\;\R\Gamma_{\Prism, n}(\mathscr{X}_{\ainf}/ \ainf)=\R\Gamma_{\Prism}(\mathscr{X}_{\ainf}/ \ainf)\stackrel{{\mathsf{L}}}{\otimes}\mathbb{Z}/p^n\mathbb{Z}$$ where $\R\Gamma_{\Prism}(\mathscr{X}_{\ainf}/ \ainf), \R\Gamma_{\Prism}(\mathscr{X}/ \Es)$ are the prismatic avatars of the $\ainf$-- and Breuil--Kisin cohomologies from \cite{BMS1} and \cite{BMS2}, resp. Taking $M_{\BK}=\H^i_{\Prism, 1}(\mathscr{X}/ \Es)$ and $M_{\inf}=\H^i_{\Prism, 1}(\mathscr{X}_{\ainf}/ \ainf),$ Li and Liu showed in \cite{LiLiu} that $M_{\BK}$ is a $p$--torsion Breuil--Kisin module, $M_{\inf}$ is a $p$--torsion Breuil--Kisin--Fargues $G_K$--module, and that these modules recover the \'{e}tale cohomology group $\H^i_{\et}(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p\mathbb{Z})$ essentially due to the \'{e}tale comparison theorem for prismatic cohomology from \cite{BhattScholze}. The pair $(M_{\BK}, M_{\inf})$ then serves as a suitable replacement of a $(\varphi, \widehat{G})$--module in our context. One key technical input in \cite{CarusoLiu} is to establish a partial control on the Galois action on $M$ inside $\widehat{M}.$ Namely, for any $g \in G_{K(\pi^{1/p^s})},$ one has \begin{equation}\label{IntroPhiGiHatCondition} \forall x \in M:\;\;g(x)-x \in (J_{n,s}+p^n\ainf) (\widehat{M}\otimes_{\widehat{\mathcal{R}}}\ainf), \end{equation} where $J_{n, s} \subseteq \ainf$ are certain ideals (that are shrinking with growing $s$). This is a ``rational'' fact, in the sense that this claim is a consequence of the description of the Galois action in terms of the monodromy operator on the associated Breuil module $\mathcal{D}(\widehat{M})$ (cf. \cite{Breuil}, \cite[\S 3.2]{LiuLatticesNew}) which is a $K_0$--vector space. 
In particular, this technique does not adapt to a $p^n$--torsion setting. To replace this fact in our situation, we turn to a result by Gee and Liu in \cite[Appendix F]{EmertonGee2} (see also \cite[Theorem~3.8]{Ozeki}). Given a finite free Breuil--Kisin module $M_{\BK}$ (of finite height) and a compatible structure of Breuil--Kisin--Fargues $G_K$--module on $M_{\inf}=M_{\BK}\otimes_{\Es}\ainf,$ such that the image of $M_{\BK}$ under the natural map lands in $(M_{\inf})^{G_{K(\pi^{1/p^{\infty}})}}$, the \'{e}tale realization of $M_{\inf}$ is crystalline if and only if \begin{equation}\tag{$\mathrm{Cr}_0$}\label{IntroCrysCondition} \forall g \in G_K,\;\; \forall x \in M_{\BK}: g(x)-x \in \varphi^{-1}([\underline{\varepsilon}]-1)[\underline{\pi}]M_{\inf}. \end{equation} Here $[-]$ denotes the Teichm\"{u}ller lift and $\underline{\varepsilon}, \underline{\pi}$ are the elements of $\oh_{\mathbb{C}_K^\flat}$ given by a collection $(\zeta_{p^n})_n$ of (compatible) $p^n$--th roots of unity and a collection $(\pi^{1/p^n})_n$ of $p^n$--th roots of the chosen uniformizer $\pi$, resp. We call condition~(\ref{IntroCrysCondition}) the \emph{crystalline condition}. As the considered formal scheme $\mathscr{X}$ is assumed to be smooth over $\oh_K$, it is reasonable to expect that the same condition applies to the pair $M_{\BK}=\H^i_{\Prism}(\mathscr{X}/ \Es)$ and $M_{\inf}=\H^i_{\Prism}(\mathscr{X}_{\ainf}/ \ainf)$, despite the fact that the Breuil--Kisin and Breuil--Kisin--Fargues modules coming from prismatic cohomology are not necessarily free. This is indeed the case and, moreover, it can be shown that the crystalline condition even applies to the embedding of the chain complexes $\R\Gamma_{\Prism}(\mathscr{X}/ \Es)\rightarrow \R\Gamma_{\Prism}(\mathscr{X}_{\ainf}/ \ainf)$: to make sense of this claim, we model the cohomology theories by their associated \v{C}ech--Alexander complexes. 
These were introduced in \cite{BhattScholze} in the case that $\mathscr{X}$ is affine, but can be extended to (at least) arbitrary separated smooth $p$--adic formal schemes. We are then able to verify the condition termwise for this pair of complexes. More generally, we introduce a decreasing series of ideals $I_s$, $s \geq 0$ where $I_0=\varphi^{-1}([\underline{\varepsilon}]-1)[\underline{\pi}]\ainf,$ and then formulate and prove the analogue of (\ref{IntroCrysCondition}) for $I_s$ and the action of $G_{K(\pi^{1/p^s})}.$ As a consequence, we obtain: \begin{thm}[Theorem~\ref{thm:CrsForCechComplex}, Corollary~\ref{cor:CrystallineForCohomologyGrps}, Proposition~\ref{prop:CrystallineCohMopPN}]\label{thm:IntroCrysCondition} Let $\mathscr{X}$ be a smooth separated $p$--adic formal scheme over $\oh_K$. \begin{enumerate}[(1)] \item{For all $s \geq 0$, the \v{C}ech--Alexander complexes $\check{C}_{\BK}^{\bullet},\check{C}_{\inf}^{\bullet}$ that compute $\R\Gamma_{\Prism}(\mathscr{X}/ \Es)$ and $\R\Gamma_{\Prism}(\mathscr{X}_{\ainf}/ \ainf)$, resp., satisfy (termwise) the condition \begin{equation}\tag{$\mathrm{Cr}_s$} \forall g \in G_{K(\pi^{1/p^s})}, \;\; \forall x \in \check{C}_{\BK}^{\bullet}: g(x)-x \in I_s \check{C}_{\inf}^{\bullet}. \end{equation}} \item{The associated prismatic cohomology groups satisfy the crystalline condition, that is, the condition $$ \forall g \in G_{K}, \;\; \forall x \in \H^i_{\Prism}(\mathscr{X}/ \Es): \;\; g(x)-x \in \varphi^{-1}([\underline{\varepsilon}]-1)[\underline{\pi}] \H^i_{\Prism}(\mathscr{X}_{\ainf}/ \ainf). $$} \item{For all pairs of integers $s, n$ with $s+1\geq n \geq 1$, the $p^n$--torsion prismatic cohomology groups satisfy the condition $$ \forall g \in G_{K(\pi^{1/p^s})}, \;\; \forall x \in \H^i_{\Prism, n}(\mathscr{X}/ \Es): \;\; g(x)-x \in \varphi^{-1}([\underline{\varepsilon}]-1)[\underline{\pi}]^{p^{s+1-n}} \H^i_{\Prism, n}(\mathscr{X}_{\ainf}/ \ainf). 
$$} \end{enumerate} \end{thm} Theorem~\ref{thm:IntroCrysCondition}~(3) specialized to $n=1$ provides the desired analogue of the property~(\ref{IntroPhiGiHatCondition}) of $(\varphi, \widehat{G})$--modules and allows us to carry out the proof of Theorem~\ref{thm:IntroMain}. As a consequence of Theorem~\ref{thm:IntroCrysCondition} (2), we obtain a proof of crystallinity of the cohomology groups $\H^i_{\et}(\mathscr{X}_{\overline{\eta}}, \mathbb{Q}_p)$ in the proper case partially by means of ``formal'' $p$--adic Hodge theory (Corollary~\ref{cor:EtaleCohomologyCrystalline}). This fact in this generality is originally due to Bhatt, Morrow and Scholze (\cite{BMS1}). Of course, since our setup relies on the machinery of prismatic cohomology and especially \'{e}tale comparison, the proof can be considered independent of the one from \cite{BMS1} only in that it avoids the crystalline comparison theorem for (prismatic) $\ainf$--cohomology. \vspace{1em} The bounds of Theorem~\ref{thm:IntroMain} compare to the above--mentioned bounds as follows. Whenever the bounds of ``semistable type'' are known to apply to the situation of $\H^i_{\et}(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p\mathbb{Z})$ (e.g. \cite{CarusoLiu} when $i=1$, \cite{Hattori} when $ie<p-1$ and $\mathscr{X}$ is a scheme), the bounds from Theorem~\ref{thm:IntroMain} agree with those bounds. The bounds tailored to crystalline representations (\cite{Fontaine2, Abrashkin}) are slightly better but their applicability is quite limited ($e=1$ and $i<p-1$). The fact that the cohomology groups $\H^i_{\et}(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p^n\mathbb{Z})$ have an associated Breuil--Kisin module yields one more source of ramification estimates: in \cite{Caruso}, Caruso provides a very general bound for $p^n$--torsion $G_K$--modules based on their restriction to $G_{K(\pi^{1/p^\infty})}$ via Fontaine's theory of \'{e}tale $\oh_{\mathcal{E}}$--modules. 
Using the Breuil--Kisin module $\H^i_{\Prism, n}(\mathscr{X}/ \Es)$ attached to $\H^i_{\et}(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p^n\mathbb{Z})$, this bound becomes explicit (as discussed in more detail in Remark~\ref{rem:CarusoBound}). Comparing this result to Theorem~\ref{thm:IntroMain} is more ambiguous due to somewhat different shapes of the estimates, but roughly speaking, the estimate of Theorem~\ref{thm:IntroMain} is approximately the same for $e \leq p$, becomes worse when $K$ is absolutely tamely ramified with large ramification degree, and is expected to outperform Caruso's bound in the case of large wild absolute ramification (relative to the tame part of the ramification). In future work, we intend to extend the result of Theorem~\ref{thm:IntroMain} to the case of arbitrary $n$. This seems plausible thanks to the full statement of Theorem~\ref{thm:IntroCrysCondition} (3). In a different direction, we plan to extend the results of the present paper to the case of semistable reduction, using the log--prismatic cohomology developed by Koshikawa in \cite{Koshikawa}. Important facts in this regard are that the $\ainf$--log--prismatic cohomology groups are still Breuil--Kisin--Fargues $G_K$--modules by a result of \v{C}esnavi\v{c}ius and Koshikawa (\cite{CesnaviciusKoshikawa}) and that by results of Gao, a variant of the condition \Cr{0} might exist in the semistable case (\cite{GaoBKGK}; see Remark~\ref{rem:CrystConditionProof} (3) below for details). \vspace{1em} The outline of the paper is as follows. In \S\ref{sec:prelim} we establish some necessary technical results. Namely, we discuss non--zero divisors and regular sequences on derived complete and completely flat modules with respect to the weak topology of $\ainf$, and establish \v{C}ech--Alexander complexes in the case of a separated and smooth formal scheme. 
Next, \S\ref{sec:crs} introduces the conditions \Crs, studies their basic algebraic properties and discusses in particular the crystalline condition \Cr{0} in the case of Breuil--Kisin--Fargues $G_K$--modules. In \S\ref{sec:CrsCohomology} we prove the conditions \Crs for the \v{C}ech--Alexander complexes of a separated smooth $p$--adic formal scheme $\mathscr{X}$ over $\Es$ and $\ainf$, and draw some consequences for the individual cohomology groups (especially when $\mathscr{X}$ is proper), proving Theorem~\ref{thm:IntroCrysCondition}. Finally, in \S\ref{sec:bounds} we establish the ramification bounds for mod $p$ \'{e}tale cohomology, proving Theorem~\ref{thm:IntroMain}. Subsequently, we discuss in more detail how the bounds from Theorem~\ref{thm:IntroMain} compare to the various bounds from the literature discussed above. \vspace{1em} Let us set up some basic notation used throughout the paper. We fix a perfect field $k$ of characteristic $p>0$ and a finite totally ramified extension $K / K_0$ of degree $e$ where $K_0=W(k)[1/p]$. We fix a uniformizer $\pi \in \mathcal{O}_K$, and a compatible system $(\pi_n)_n$ of $p^n$--th roots of $\pi$ in $\mathbb{C}_K$, the completion of an algebraic closure of $K$. Setting $\Es=W(k)[[u]]$, the choice of $\pi$ determines a surjective map $\Es \rightarrow \mathcal{O}_{K}$ by setting $u \mapsto \pi$; the kernel of this map is generated by an Eisenstein polynomial $E(u)$ of degree $e$. $\Es$ is endowed with a Frobenius lift (hence a $\delta$--structure) extending the one on $W(k)$ by $u \mapsto u^p$. Denote $\ainf=\Ainf{\mathcal{O}_{\mathbb{C}_K}}=W(\mathcal{O}_{\mathbb{C}_K^{\flat}})$ where $W(-)$ denotes the Witt vectors construction and $\mathcal{O}_{\mathbb{C}_K^{\flat}}=\mathcal{O}_{\mathbb{C}_K}^{\flat}$ is the tilt of $\oh_{\mathbb{C}_K}$, $\mathcal{O}_{\mathbb{C}_K}^{\flat}=\varprojlim_{x \mapsto x^p}\mathcal{O}_{\mathbb{C}_K}/p$. 
The choice of the system $(\pi_n)_n$ describes an element $\underline{\pi} \in \oh_{\mathbb{C}_K^\flat}\simeq\varprojlim_{x \mapsto x^p}\oh_{\mathbb{C}_K}$, and hence an embedding of $\Es$ into $\ainf$ via $u \mapsto [\underline{\pi}]$ where $[-]$ denotes the Teichm\"{u}ller lift. Under this embedding, $E(u)$ is sent to a generator $\xi$ of the kernel of the canonical map $\theta: \ainf\rightarrow \mathcal{O}_{\mathbb{C}_K}$ that lifts the first\footnote{Meaning ``zeroth'', i.e. no twists by Frobenius.} projection $\mathrm{pr}_0:\mathcal{O}_{\mathbb{C}_K}^{\flat}\rightarrow \mathcal{O}_{\mathbb{C}_K}/p.$ Consequently, $(\Es, (E(u)))\rightarrow (\ainf, \mathrm{Ker}\,\theta)$ is a map of prisms. It is known that under such embedding, $\ainf$ is faithfully flat over $\Es$ (see e.g. \cite[Proposition~2.2.13]{EmertonGee2}). Similarly, we fix a choice of a compatible system of primitive $p^{n}$--th roots of unity $(\zeta_{p^n})_{n \geq 0}$. This defines an element $\underline{\varepsilon}$ of $\oh_{\mathbb{C}_K^\flat}$ in an analogous manner, and the embedding $\Es\hookrightarrow \ainf$ extends to a map (actually still an embedding by \cite[Proposition~1.14]{Caruso}) $W(k)[[u, v]] \rightarrow \ainf$ by additionally setting $v \mapsto [\underline{\varepsilon}]-1$. Additionally, we denote by $\omega$ the element $([\underline{\varepsilon}]-1)/([\underline{\varepsilon}^{1/p}]-1)=[\underline{\varepsilon}^{1/p}]^{p-1}+\dots+[\underline{\varepsilon}^{1/p}]+1$. It is well--known that this is another generator of $\mathrm{Ker}\,\theta$, therefore $\omega/\xi$ is a unit in $\ainf$. The choices of $\pi, \pi_n$ and $\zeta_{p^n}$ remain fixed throughout, hence so do the embeddings $\Es\hookrightarrow \ainf$ and $W(k)[[u,v]]\hookrightarrow \ainf$. For this reason, we often (almost always) refer to $[\underline{\pi}], [\underline{\varepsilon}]-1, \xi$ as $u, v$ and $E(u),$ resp., and so on. 
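As a quick verification of the membership $\omega\in\mathrm{Ker}\,\theta$ (standard, and implicit in the text): since $\theta$ sends a Teichm\"{u}ller lift $[x]$ to the zeroth component of the untilt of $x$, we have $\theta([\underline{\varepsilon}^{1/p}])=\zeta_p$, and hence $$\theta(\omega)=\theta\left([\underline{\varepsilon}^{1/p}]^{p-1}+\dots+[\underline{\varepsilon}^{1/p}]+1\right)=\zeta_p^{p-1}+\dots+\zeta_p+1=0,$$ as $\zeta_p$ is a primitive $p$--th root of unity; that $\omega$ in fact generates $\mathrm{Ker}\,\theta$ is the well--known fact recalled above.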
\vspace{1em} \textbf{Acknowledgements.} I would like to express my gratitude to my PhD advisor Tong Liu for suggesting the topic of this paper, his constant encouragement and many comments, suggestions and valuable insights. Many thanks go to Deepam Patel for organizing the prismatic cohomology learning seminar at Purdue University in Fall 2019, and to Donu Arapura for a useful discussion of \v{C}ech theory. The present paper is part of the author's forthcoming PhD thesis at Purdue University. During the preparation of the paper, the author was partially supported by the Ross Fellowship of Purdue University as well as Graduate School Summer Research Grants of Purdue University during summers 2020 and 2021. \section{Preparations}\label{sec:prelim} \subsection{Regularity on $(p, E(u))$--completely flat modules}\label{subsec:regularity} The goal of this section is to prove that every $(p, E(u))$--complete and $(p, E(u))$--completely flat $\ainf$--module is torsion--free, and that any sequence $p, x$ with $x \in \ainf\setminus(\ainf^\times \cup p\ainf)$ is regular on such modules. Regarding completions and complete flatness, we adopt the terminology of \cite[091N]{stacks}, \cite{BhattScholze}, but since we apply these notions mostly to modules as opposed to objects of derived categories, our treatment is closer in spirit to \cite{Positselski}, \cite{Rezk} and \cite{YekutieliFlatness}. Given a ring $A$ and a finitely generated ideal $I=(f_1, f_2, \dots f_n)$, the derived $I$--completion\footnote{That is, this is derived $I$--completion of $M$ as a module. This will be sufficient to consider for our purposes.} of an $A$--module $M$ is \begin{equation}\label{eqn:completion} \widehat{M}=M[[X_1, X_2, \dots X_n]]/(X_1-f_1, X_2-f_2, \dots, X_n-f_n)M[[X_1, X_2, \dots X_n]]. \end{equation} $M$ is said to be \emph{derived $I$--complete} if the natural map $M \rightarrow \widehat{M}$ is an isomorphism. 
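As a minimal illustration of the formula (\ref{eqn:completion}) (an example of ours, not taken from the main text), let $A=M=\mathbb{Z}$ and $I=(p)$. Then $$\widehat{\mathbb{Z}}=\mathbb{Z}[[X]]/(X-p)\mathbb{Z}[[X]]\simeq\mathbb{Z}_p$$ via $X\mapsto p$, so the derived and classical $p$--adic completions agree in this case; in general (e.g. over the non--noetherian ring $\ainf$) the two notions can differ, which is why the derived version is used throughout.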
An important characterization of derived $I$--completeness is the condition that $\mathrm{Ext}^i_A(A_f, M)=0$ for $i=0, 1$ and all $f \in I$ (equivalently, for $f=f_j$ for all $j$). As a consequence, the category of derived $I$--complete modules forms a full abelian subcategory of the category of all $A$--modules with exact inclusion functor (and the derived $I$--completion is its left adjoint; in particular, derived $I$--completion is right exact as a functor on $A$--modules). Another consequence is that derived $I$--completeness is equivalent to derived $J$--completeness when $I, J$ are two finitely generated ideals and $\sqrt{I}=\sqrt{J}$. There is always a natural surjection $\widehat{M}\rightarrow {\widehat{M}}^{\mathrm{cl}}$ where $\widehat{(-)}^{\mathrm{cl}}$ stands for $I$--adic completion, which will be referred to as classical $I$--completion for the rest of the paper. Just like for classically $I$--complete modules, if $M$ is derived $I$--complete, then $M/IM=0$ implies $M=0$ (this is referred to as the \emph{derived Nakayama lemma}). A convenient consequence of the completion formula (\ref{eqn:completion}) is that in the case when $M=R$ is a derived $I$--complete $A$--algebra, the isomorphism $R \rightarrow R[[X_1, \dots, X_n]]/(X_1-f_1, \dots, X_n-f_n)$ picks a preferred representative in $R$ for the power series symbol $\sum_{j_1, \dots, j_n}a_{j_1, \dots, j_n}f_1^{j_1}\dots f_n^{j_n}$ as the preimage of the class represented by $\sum_{j_1, \dots, j_n}a_{j_1, \dots, j_n}X_1^{j_1}\dots X_n^{j_n}$. This gives an algebraically well--behaved notion of power series summation despite the fact that $R$ is not necessarily $I$--adically separated\footnote{This operation further leads to the notion of contramodules, discussed e.g. in \cite{Positselski}.}. An $A$--module $M$ is said to be \emph{$I$--completely (faithfully) flat} if $\mathrm{Tor}_i^A(M, A/I)=0$ for all $i>0$ and $M/IM$ is a (faithfully) flat $A/I$--module. 
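For instance (a standard example, not stated in the text), any flat $A$--module $M$ is $I$--completely flat: $\mathrm{Tor}_i^A(M, A/I)=0$ for $i>0$ by flatness, and $M/IM\simeq M\otimes_A A/I$ is flat over $A/I$ by base change. The point of the completed notion is that it also covers modules, such as $I$--adic completions of free modules, that need not be flat in the ordinary sense.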
Just like for derived completeness, $I$--complete flatness is equivalent to $J$--complete flatness when $J$ is another finitely generated ideal with $\sqrt{I}=\sqrt{J}$\footnote{However, note that while (derived) $I$--completeness more generally implies (derived) $I'$--completeness when $I'$ is a finitely generated ideal contained in $\sqrt{I}$, the ``opposite'' works for flatness, i.e. $I$--complete flatness implies $I''$--complete flatness when $I''$ is a finitely generated ideal with $I\subseteq \sqrt{I''}$.}. Let us start with a brief discussion of regular sequences on derived complete modules in general. For that purpose, given an $A$--module $M$ and $\underline{f}=f_1, \dots, f_n \in A$, denote by $\mathrm{Kos}(M; \underline{f})$ the usual Koszul complex and let $H_m(M; \underline{f})$ denote the $m$-th Koszul homology of $M$ with respect to $f_1, f_2, \dots, f_n$. The first lemma is a straightforward generalization of standard facts about Koszul homology (e.g. \cite[Theorem~16.5]{Matsumura}) and regularity on finitely generated modules. \begin{lem}\label{regKoszul} Let $A$ be a ring, $I \subseteq A$ a finitely generated ideal and let $M$ be a nonzero derived $I$--complete module. Let $\underline{f}=f_1, f_2, \dots, f_n \in I$. Then \begin{enumerate} \item{$\underline{f}$ forms a regular sequence on $M$ if and only if $H_m(M; \underline{f})=0$ for all $m \geq 1$ if and only if $H_1(M; \underline{f})=0$.} \item{In this situation, any permutation of $f_1, f_2, \dots, f_n$ is also a regular sequence on $M$.} \end{enumerate} \end{lem} \begin{proof} As Koszul homology is insensitive to the order of the elements $f_1, f_2, \dots, f_n$, part (2) follows immediately from (1). To prove (1), the forward implications are standard and hold in full generality (see e.g. \cite[Theorem~16.5]{Matsumura}). It remains to prove that the sequence $f_1, f_2, \dots, f_n$ is regular on $M$ if $H_1(M; f_1, f_2, \dots, f_n)=0$. We proceed by induction on $n$. 
The case $n=1$ is clear ($H_1(M; x)=M[x]$ by definition, and $M/xM\neq 0$ follows by derived Nakayama). Let $n \geq 2$, and denote by $\underline{f}'$ the truncated sequence $f_1, f_2, \dots, f_{n-1}$. Then we have $\mathrm{Kos}(M; \underline{f})\simeq \mathrm{Kos}(M; \underline{f'})\otimes\mathrm{Kos}(A; f_n),$ which produces a short exact sequence $$0 \longrightarrow \mathrm{Kos}(M; \underline{f'})\longrightarrow \mathrm{Kos}(M; \underline{f})\longrightarrow \mathrm{Kos}(M; \underline{f'})[-1]\longrightarrow 0$$ of chain complexes which upon taking homologies results in a long exact sequence $$\cdots \rightarrow H_1(M; \underline{f}') \stackrel{\pm f_n}{\longrightarrow} H_1(M; \underline{f}')\longrightarrow H_1(M; \underline{f})\longrightarrow M/(\underline{f}')M \stackrel{\pm f_n}{\rightarrow} M/(\underline{f}')M\longrightarrow M/(\underline{f})M\rightarrow 0$$ (as in \cite[Theorem~7.4]{Matsumura}). By assumption, $H_1(M; \underline{f})=0$ and thus, $f_n H_1(M; \underline{f}')=H_1(M; \underline{f}')$ where $f_n \in I$. Since $H_1(M; \underline{f}')$ is obtained from a finite direct sum of copies of $M$ by repeatedly taking kernels and cokernels, it is derived $I$--complete. Thus, derived Nakayama implies that $H_1(M; \underline{f}')=0$ as well, and by the induction hypothesis, $\underline{f}'$ is a regular sequence on $M$. Finally, the above exact sequence also implies that $f_n$ is injective on $M/(\underline{f}')M,$ and $M/(\underline{f})M\neq 0$ is satisfied thanks to derived Nakayama again. This finishes the proof. \end{proof} \begin{cor}\label{FlatReg} Let $A$ be a derived $I$--complete ring for an ideal $I=(\underline{f})$ where $\underline{f}=f_1, f_2, \dots, f_n$ is a regular sequence on $A$, and let $F$ be a nonzero derived $I$--complete $A$--module that is $I$--completely flat. Then $\underline{f}$ is a regular sequence on $F$ and consequently, each $f_i$ is a non--zero divisor on $F$. 
\end{cor} \begin{proof} By Lemma~\ref{regKoszul} (1), $H_m(A; \underline{f})=0$ for all $m \geq 1$, hence $\mathrm{Kos}(A; \underline{f})$ is a free resolution of $A/I$. Thus, on one hand, the complex $F\otimes_A\mathrm{Kos}(A; \underline{f})$ computes $\mathrm{Tor}^A_*(F, A/I)$, hence is acyclic in positive degrees by $I$--complete flatness; on the other hand, this complex is by definition $\mathrm{Kos}(F; \underline{f})$. We may thus conclude that $H_i(F; \underline{f})=0$ for all $i \geq 1$. By Lemma~\ref{regKoszul}, $\underline{f}$ is a regular sequence on $F$, and it remains regular on $F$ after an arbitrary permutation. This proves the claim. \end{proof} Now we specialize to the case at hand, that is, $A= \ainf$. Recall that this is a domain and so is $\ainf/p=\oh_{\mathbb{C}_K^\flat}$ (which is a rank $1$ valuation ring). \begin{lem}\label{disjointness} For any element $x \in\ainf\setminus(\ainf^{\times} \cup p\ainf)$ and all $k, l \geq 1,$ we have $p^k\ainf \cap x^l\ainf =p^kx^l\ainf,$ and $p, x$ is a regular sequence. Furthermore, we have that $\sqrt{(p, x)}=(p,W(\mathfrak{m}_{\mathbb{C}_K^{\flat}}))$ is the unique maximal ideal of $\ainf$. In particular, given two choices $x, x'$ as above, we have $\sqrt{(p, x)}=\sqrt{(p, x')}$. \end{lem} In particular, the equalities ``$\sqrt{(p, x)}=\sqrt{(p, x')}$'' imply that all the $(p, x)$--adic topologies (for $x$ as above) are equivalent to each other; this is the so--called weak topology on $\ainf$ (usually defined as the $(p, u)$--adic topology in our notation), and it is standard that $\ainf$ is complete with respect to this topology. \begin{proof} By assumption, the image $\overline{x}$ of $x$ in $\ainf/p=\mathcal{O}_{\mathbb{C}_K^{\flat}}$ is non--zero and a non--unit in $\ainf/p$ (a non--unit since $x \notin \ainf^{\times}$ and $p \in \mathrm{rad}(\ainf)$). Thus, $x^l$ is a non--zero divisor both on $\ainf$ and on $\ainf/p$, hence the claim that $p\ainf \cap x^l\ainf=px^l\ainf$ follows for every $l$.
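Explicitly: if $pa=x^lb$ with $a, b \in \ainf$, then $\overline{x}^{\,l}\,\overline{b}=0$ in $\ainf/p$, so $b=pc$ for some $c \in \ainf$ since $\overline{x}$ is a non--zero divisor on $\ainf/p$; cancelling the non--zero divisor $p$ in $pa=px^lc$ gives $a=x^lc$, and hence $pa=px^lc \in px^l\ainf$ (the reverse inclusion being obvious).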
The element $p$ is itself a non--zero divisor on $\ainf$ and thus, $p, x$ is a regular sequence. To obtain $p^k\ainf \cap x^l\ainf=p^kx^l\ainf$ for general $k$, one can e.g. use induction on $k$ using the fact that $p$ is a non--zero divisor on $\ainf$ (or simply note that one may replace elements in regular sequences by arbitrary positive powers). To prove the second assertion, note that $\sqrt{(\overline{x})}=\mathfrak{m}_{\mathbb{C}_K^{\flat}}$ since $\ainf/p=\mathcal{O}_{\mathbb{C}_K^{\flat}}$ is a rank $1$ valuation ring. It follows that $(p,W(\mathfrak{m}_{\mathbb{C}_K^{\flat}}))$ is the unique maximal ideal of $\ainf$ above $(p)$, hence the unique maximal ideal since $p \in \mathrm{rad}(\ainf)$, and that $\sqrt{(p, x)}$ is equal to this ideal. \end{proof} We are ready to prove the claim mentioned at the beginning of the section. \begin{cor}\label{FlatTorFree} Let $F$ be a derived $(p, E(u))$--complete and $(p, E(u))$--completely flat $\ainf$--module, and let $x \in \ainf \setminus (\ainf^\times \cup p\ainf)$. Then $p, x$ is a regular sequence on $F$. In particular, for each $k, l >0$, we have $p^kF \cap x^l F=p^kx^l F$. Consequently, $F$ is a torsion--free $\ainf$--module. \end{cor} \begin{proof} By Lemma~\ref{disjointness}, $\ainf$ and $F$ are derived $(p, x)$--complete and $F$ is $(p, x)$--completely flat over $\ainf$, and $p, x$ is a regular sequence on $\ainf$. Corollary~\ref{FlatReg} then proves the claim about the regular sequence. The sequence $p^k, x^l$ is then also regular on $F$, and the claim $p^kF \cap x^lF=p^kx^lF$ follows. To prove the ``consequently'' part, let $y$ be a non--zero and non--unit element of $\ainf$. Since $\ainf$ is classically $p$--complete, we have $\bigcap_n p^n \ainf = 0$, and so there exists $n$ such that $y=p^nx$ with $x \notin p\ainf$. If $x$ is a unit, then $y$ is a non--zero divisor on $F$ since so is $p^n$.
Otherwise $x \in \ainf \setminus (\ainf^\times \cup p\ainf)$, so $p, x$ is a regular sequence on $F$, and so is $x, p$ (e.g. by Lemma~\ref{regKoszul}). In particular, $p$ and $x$ are both non--zero divisors on $F$, and hence so is $y=p^nx$. \end{proof} Finally, we record the following consequence concerning flatness of $(p, E(u))$--completely flat modules modulo powers of $p$ that seems interesting in its own right. \begin{cor}\label{cor:FlatModp} Let $x \in \ainf \setminus (\ainf^{\times} \cup p\ainf)$, and let $F$ be a derived $(p, x)$--complete and $(p, x)$--completely (faithfully) flat $\ainf$--module. Then $F$ is derived $p$--complete and $p$--completely (faithfully) flat. In particular, $F/p^nF$ is a flat $\ainf/p^n$--module for every $n>0$. \end{cor} \begin{proof} The fact that $F$ is derived $p$--complete is clear since it is derived $(p, x)$--complete. We need to show that $F/pF$ is a flat $\ainf/p$--module and that $\mathrm{Tor}_i^{\ainf}(F, \ainf/p)=0$ for all $i>0$. The second claim is a consequence of the fact that $p$ is a non--zero divisor on both $\ainf$ and $F$ by Corollary~\ref{FlatTorFree}. For the first claim, note that $\ainf/p=\oh_{\mathbb{C}_K^\flat}$ is a valuation ring and therefore it is enough to show that $F/pF$ is a torsion--free $\oh_{\mathbb{C}_K^\flat}$--module. This follows again from Corollary~\ref{FlatTorFree}. For the `faithful' version, note that both the statements that $F/pF$ is faithfully flat over $\ainf/p$ and that $F/(p, x)F$ is faithfully flat over $\ainf/(p, x)$ are now equivalent to the statement $F/\mathfrak{m}F \neq 0$ where $\mathfrak{m}=(p, W(\mathfrak{m}_{\mathbb{C}_K^\flat}))$ is the unique maximal ideal of $\ainf$. \end{proof} \subsection{\v{C}ech--Alexander complex}\label{sec:CAComplex} Next, we discuss the \v{C}ech--Alexander complexes for computing prismatic cohomology in the global situation.
For the sake of being explicit, as well as to restate the construction in order to avoid a subtle issue in \cite[Construction~4.16]{BhattScholze} (see \cite[\S 3.1]{LiLiu} for details), we present the construction in a very \v{C}ech--theoretic way\footnote{A possibly shorter way to do this might be to combine Proposition~\ref{prop:ProductsInPrismaticSite} below with the \v{C}ech--Alexander complexes in the affine case, using the fact that prismatic cohomology satisfies Zariski descent.}. Throughout this section, let $(A, I)$ be a fixed bounded base prism, and let $\mathscr{X}$ be a smooth separated $p$--adic formal scheme over $A/I.$ Recall that $(\mathscr{X}/A)_{\Prism}$ denotes the site whose underlying category is the opposite of the category of bounded prisms $(B, IB)$ over $(A, I)$ together with a map of formal schemes $\spf(B/IB)\rightarrow \mathscr{X}$ over $A/I$. Covers in $(\mathscr{X}/A)_{\Prism}$ are given by the opposites of faithfully flat maps $(B, IB)\rightarrow (C, IC)$ of prisms, meaning that $C$ is $(p, I)$--completely faithfully flat over $B$. The prismatic cohomology $\R\Gamma_{\Prism}(\mathscr{X}, A)$ is then defined as the sheaf cohomology $\R\Gamma((\mathscr{X}/A)_{\Prism}, \mathcal{O})$ ($=\R\Gamma(*, \mathcal{O})$ where $*$ is the terminal sheaf) for the sheaf $\mathcal{O}=\mathcal{O}_{\Prism}$ on the prismatic site $(\mathscr{X}/A)_{\Prism}$ defined by $(B, IB)\mapsto B$. Additionally, let us denote by $\Prism$ the site of all bounded prisms, i.e.\ the opposite of the category of all bounded prisms and their maps, with the topology given by faithfully flat maps of prisms. In order to discuss the \v{C}ech--Alexander complex in a non--affine situation, a slight modification of the topology on $(\mathscr{X}/A)_{\Prism}$ is convenient. The following proposition motivates the change. \begin{prop}\label{prop:DisjointUnions} Let $(A, I)$ be a bounded prism.
\begin{enumerate} \item{Given a collection of maps of (bounded) prisms $(A, I)\rightarrow (B_i, IB_i),$ $i=1, 2, \dots, n,$ the canonical map $(A, I)\rightarrow (C, IC)=\left(\prod_iB_i, I\prod_iB_i\right)$ is a map of (bounded) prisms.} \item{$(C, IC)$ is flat over $(A, I)$ if and only if each $(B_i, IB_i)$ is flat over $(A, I)$. In that situation, $(C,IC)$ is a faithfully flat prism over $(A, I)$ if and only if the family of maps of formal spectra $\spf(B_i/IB_i)\rightarrow \spf(A/I)$ is jointly surjective.} \item{Let $f \in A$ be an element. Then $(\widehat{A_{f}}, I\widehat{A_f})$, where $\widehat{(-)}$ stands for the derived (equivalently, classical) $(p, I)$--completion, is a bounded prism\footnote{We do consider the zero ring with its zero ideal to be a prism, hence allow the possibility of $\widehat{A_f}=0$, which occurs e.g. when $f \in (p, I).$ Whether the zero ring satisfies Definition~3.2 of \cite{BhattScholze} depends on whether the inclusion of the empty scheme to itself is considered an effective Cartier divisor; following the usual definitions pedantically, this indeed seems to be the case. Also some related claims, such as \cite[Lemma~3.7 (3)]{BhattScholze} or \cite[Lecture 5, Corollary 5.2]{BhattNotes}, suggest that the zero ring is allowed as a prism.}, and the map $(A, I) \rightarrow (\widehat{A_{f}}, I\widehat{A_f})$ is a flat map of prisms.} \item{Let $f_1, \dots, f_n \in A$ be a collection of elements generating the unit ideal. Then the canonical map $(A, I)\rightarrow \left(\prod_i\widehat{A_{f_i}}, I\prod_i\widehat{A_{f_i}}\right)$ is a faithfully flat map of (bounded) prisms.} \end{enumerate} \end{prop} \begin{proof} The proof of (1) is more or less formal. The ring $C=\prod_i B_i$ has a unique $A$--$\delta$--algebra structure since the forgetful functor from $\delta$--rings to rings preserves limits, and $C$, being a product of $(p, I)$--complete rings, is itself $(p, I)$--complete. Clearly $IC=\prod_i (IB_i)$ is an invertible ideal since each $IB_i$ is.
In particular, $C[I]=0$, hence $(C, IC)$ is a prism by \cite[Lemma~3.5]{BhattScholze}. Assuming that all $(B_i, IB_i)$ are bounded, from $C/IC = \prod_i B_i/IB_i$ we have $(C/IC)[p^\infty]=(C/IC)[p^k]$ for $k$ big enough so that $(B_i/IB_i)[p^{\infty}]=(B_i/IB_i)[p^{k}]$ for all $i$, showing that $(C, IC)$ is bounded. The ($(p, I)$--complete) flatness part of (2) is clear. For the faithful flatness statement, note that $C/(p, I)C=\prod_i B_i/(p,I)B_i$, hence $A/(p,I)\rightarrow C/(p, I)C$ is faithfully flat if and only if the map of spectra $\coprod_i \spec({B_i/(p, I)B_i})=\spec({C/(p, I)C})\rightarrow \spec({A/(p,I)})$ is surjective. Let us prove (3). Since $p\in \mathrm{rad}(\widehat{A_f}),$ the equality $\varphi^n(f^k)=f^{kp^n}+p(\dots)$ shows that $\varphi^n(f^k)$ is a unit in $\widehat{A_f}$ for each $n, k \geq 0$. Consequently, as in \cite[Remark~2.16]{BhattScholze}, $\widehat{A_f}=\widehat{S^{-1}A}$ for $S=\{\varphi^n(f^k)\;|\; n, k \geq 0 \}$, and the latter has a unique $\delta$--structure extending that of $A$ by \cite[Lemmas~2.15 and 2.17]{BhattScholze}. In particular, $\widehat{A_f}$ is a $(p, I)$--completely flat $A$--$\delta$--algebra, hence $(\widehat{A_f}, I\widehat{A_f})$ is a flat prism over $(A, I)$ by \cite[Lemma~3.7 (3)]{BhattScholze}. Part (4) follows formally from parts (1)--(3). \end{proof} \begin{constr} Denote by $(\mathscr{X}/A)_{\Prism}^\amalg$ the site whose underlying category is $(\mathscr{X}/A)_{\Prism}$. The covers on $(\mathscr{X}/A)_{\Prism}^\amalg$ are given by the opposites of finite families $\{(B, IB) \rightarrow (C_i, IC_i)\}_{i}$ of flat maps of prisms such that the associated maps $\{\spf(C_i/IC_i)\rightarrow \spf(B/IB)\}$ are jointly surjective. Let us call these ``faithfully flat families'' for short. The covers of the initial object $\varnothing$\footnote{That is, $\varnothing$ corresponds to the zero ring, which we consider to be a prism as per the previous footnote.} are the empty cover and the identity.
We similarly extend $\Prism$ to $\Prism^{\amalg}$, that is, we proclaim the identity cover and the empty cover to be covers of $\varnothing$, and generally proclaim (finite) faithfully flat families to be covers. Clearly isomorphisms as well as compositions of covers are covers in both cases. To check that $(\mathscr{X}/A)_{\Prism}^\amalg$ and $\Prism^\amalg$ are sites, it thus remains to check the base change axiom. This is trivial for situations involving $\varnothing,$ so one needs to check that given a faithfully flat family $\{(B, IB)\rightarrow (C_i, IC_i)\}_i$ and a map of prisms $(B, IB) \rightarrow (D, ID)$, the fibre products\footnote{Here we mean fibre products in the variance of the site, i.e. ``pushouts of prisms''. We use the symbol $\boxtimes$ to denote this operation.} $(C_i, IC_i)\boxtimes_{(B, IB)}(D, ID)$ in $\Prism^{\amalg}$ exist and the collection $\{(D, ID)\rightarrow (C_i, IC_i)\boxtimes_{(B, IB)}(D, ID)\}_i$ is a faithfully flat family; the existence and $(p, I)$--complete flatness follow by the same proof as in \cite[Corollary~3.12]{BhattScholze}, only with ``$(p, I)$--completely faithfully flat'' replaced by ``$(p, I)$--completely flat'' throughout, and the fact that the family is faithfully flat follows as well, since $\left(\prod_i(C_i, IC_i)\right)\boxtimes_{(B, IB)}(D, ID)=\prod_i \left( (C_i, IC_i)\boxtimes_{(B, IB)}(D, ID)\right)$ (and using Remark~\ref{rem:CompareSites} (1) below). \end{constr} \begin{rem}\label{rem:CompareSites} \begin{enumerate}[(1)] \item{Note that for a finite family of objects $(C_i, IC_i)$ in $(\mathscr{X}/A)_{\Prism},$ the structure map of the product $(A, I)\rightarrow \prod_i(C_i, IC_i)$ together with the map of formal spectra (induced from the maps for the individual $i$'s) $$\spf(\prod_i C_i/IC_i)=\coprod_i \spf(C_i/IC_i)\rightarrow \mathscr{X}$$ makes $(\prod_i C_i, I\prod_i C_i)$ into an object of $(\mathscr{X}/A)_{\Prism}$ that is easily seen to be the coproduct of the $(C_i, IC_i)$'s.
In view of Proposition~\ref{prop:DisjointUnions} (2), one thus arrives at the equivalent formulation $$ \{Y_i \rightarrow Z\}_{i}\text{ is a }(\mathscr{X}/A)_{\Prism}^\amalg\text{--cover }\Leftrightarrow \coprod_iY_i \rightarrow Z\text{ is a }(\mathscr{X}/A)_{\Prism}\text{--cover.} $$ That is, $(\mathscr{X}/A)_{\Prism}^{\amalg}$ is the (finitely) superextensive site having covers of $(\mathscr{X}/A)_{\Prism}$ as singleton covers. (Similar considerations apply to $\Prism$ and $\Prism^{\amalg}$.)} \item{The two sites are honestly different in that they define different categories of sheaves. Namely, for every finite coproduct $Y=\coprod_i Y_i$, the collection of canonical maps $\{Y_i \rightarrow \coprod_i Y_i\}_i$ forms a $(\mathscr{X}/A)_{\Prism}^\amalg$--cover, and the sheaf axiom forces upon $\mathcal{F}\in \shv((\mathscr{X}/A)_{\Prism}^\amalg)$ the identity $\mathcal{F}\left(\coprod_i Y_i\right)=\prod_i \mathcal{F}(Y_i),$ which is not automatic\footnote{For example, every constant presheaf is a sheaf for a topology given by singleton covers only, which is not the case for $(\mathscr{X}/A)_{\Prism}^{\amalg}.$}. In fact, $ \shv((\mathscr{X}/A)_{\Prism}^\amalg)$ can be identified with the full subcategory of $\shv((\mathscr{X}/A)_{\Prism})$ consisting of all sheaves compatible with finite disjoint unions in the sense above. In particular, the structure sheaf $\mathcal{O}=\mathcal{O}_{\Prism}: (B, IB)\mapsto B$ is a sheaf for the $(\mathscr{X}/A)_{\Prism}^\amalg$--topology. (Again, the same is true for $\Prism$ and $\Prism^{\amalg}$, including the fact that $\mathcal{O}: (B, IB)\mapsto B$ is a sheaf.)} \end{enumerate}\end{rem} Despite the above fine distinction, for the purposes of prismatic cohomology, the two topologies are interchangeable. This is a consequence of the following lemma. \begin{lem}\label{VanishingObjects} Given an object $(B, IB) \in (\mathscr{X}/A)_{\Prism}^\amalg,$ one has $\H^i((B, IB), \mathcal{O})=0$ for $i>0$.
\end{lem} \begin{proof} The sheaf $\mathcal{O}: (B, I) \mapsto B$ on $\Prism^\amalg$ has vanishing positive \v{C}ech cohomology essentially by the proof of \cite[Corollary~3.12]{BhattScholze}: one needs to show acyclicity of the \v{C}ech complex for any $\Prism^\amalg$--cover $\{(B, I)\rightarrow (C_i, IC_i)\}_i,$ but the resulting \v{C}ech complex is identical to that for the $\Prism$--cover $(B, I)\rightarrow \prod_i(C_i, IC_i)$, for which the acyclicity is proved in \cite[Corollary~3.12]{BhattScholze}. By a general result (e.g. \cite[03F9]{stacks}), this implies vanishing of $\H^i_{\Prism^\amalg}((B, I), \mathcal{O})$ for all bounded prisms $(B, I)$ and all $i>0$. Now we make use of the fact that the cohomology of an object can be computed as the cohomology of the corresponding slice site, \cite[03F3]{stacks}. Let $(B, IB)\in (\mathscr{X}/A)_{\Prism}^\amalg.$ After forgetting structure, we may view $(B, IB)$ as an object of $\Prism^\amalg$ as well, and then \cite[03F3]{stacks} implies that for every $i,$ we have the isomorphisms \begin{align*} \H^i_{(\mathscr{X}/A)_{\Prism}^\amalg}((B, IB), \mathcal{O}) & \simeq \H^i((\mathscr{X}/A)_{\Prism}^\amalg/(B, IB), \mathcal{O}|_{(B, IB)}), \\ \H^i_{\Prism^\amalg}((B, IB), \mathcal{O}) & \simeq \H^i(\Prism^\amalg/(B, IB), \mathcal{O}|_{(B, IB)}) \end{align*} (where $\mathcal{C}/c$ for a site $\mathcal{C}$ and $c \in \mathcal{C}$ denotes the slice site). Upon noting that the slice sites $(\mathscr{X}/A)_{\Prism}^\amalg/(B, IB),$ $\Prism^\amalg/(B, IB)$ are equivalent sites (in a manner that identifies the two versions of the sheaf $\mathcal{O}|_{(B, IB)}$), the claim follows.
\end{proof} \begin{cor}\label{cor:CohomologySame} One has $$\R\Gamma((\mathscr{X}/A)_{\Prism}, \mathcal{O}) = \R\Gamma((\mathscr{X}/A)_{\Prism}^\amalg, \mathcal{O}).$$ \end{cor} \begin{proof} The coverings of $(\mathscr{X}/A)_{\Prism}^\amalg$ contain the coverings of $(\mathscr{X}/A)_{\Prism},$ so we are in the situation of \cite[0EWK]{stacks}, namely, there is a morphism of sites $\varepsilon:(\mathscr{X}/A)_{\Prism}^\amalg \rightarrow (\mathscr{X}/A)_{\Prism}$ given by the identity functor of the underlying categories, with the pushforward functor $\varepsilon_*: \shv((\mathscr{X}/A)_{\Prism}^\amalg) \rightarrow \shv((\mathscr{X}/A)_{\Prism})$ being the natural inclusion and the (exact) inverse image functor $\varepsilon^{-1}: \shv((\mathscr{X}/A)_{\Prism}) \rightarrow \shv((\mathscr{X}/A)_{\Prism}^\amalg)$ being the sheafification with respect to the ``$^\amalg$''-topology. One has $$\Gamma((\mathscr{X}/A)^\amalg,-)=\Gamma((\mathscr{X}/A),-)\circ \varepsilon_*$$ (where $\varepsilon_*$ denotes the inclusion of abelian sheaves in this context), hence $$\R\Gamma((\mathscr{X}/A)^\amalg,\mathcal{O})=\R\Gamma((\mathscr{X}/A),\R\varepsilon_*\mathcal{O}),$$ and to conclude it is enough to show that $\R^i\varepsilon_* \mathcal{O}=0$ for all $i>0$. But $\R^i\varepsilon_* \mathcal{O}$ is the sheafification of the presheaf given by $(B, IB) \mapsto \H^i((B, IB), \mathcal{O})$ (\cite[072W]{stacks}), which is $0$ by Lemma~\ref{VanishingObjects}. Thus, $\R^i\varepsilon_* \mathcal{O}=0$, which proves the claim.
\end{proof} For an open $p$--adic formal subscheme $\mathscr{V} \subseteq \mathscr{X}$, denote by $h_{\mathscr{V}}$ the functor sending $(B, IB) \in (\mathscr{X}/A)_{\Prism}$ to the set of factorizations of the implicit map $\spf(B/IB) \rightarrow \mathscr{X}$ through $\mathscr{V} \hookrightarrow \mathscr{X};$ that is, $$h_\mathscr{V}((B, IB))=\begin{cases}*\;\; \text{ if the image of }\spf(B/IB) \rightarrow \mathscr{X} \text{ is contained in }\mathscr{V},\\ \emptyset\;\; \text{ otherwise.}\end{cases}$$ Let $(B, IB)\rightarrow (C, IC)$ correspond to a morphism in $(\mathscr{X}/A)_{\Prism}$. If $\spf(B/IB)\rightarrow \mathscr{X}$ factors through $\mathscr{V},$ then so does $\spf(C/IC)\rightarrow \spf(B/IB)\rightarrow \mathscr{X}$. It follows that $h_\mathscr{V}$ forms a presheaf on $(\mathscr{X}/A)_{\Prism}$ (with transition maps $h_\mathscr{V}((B, IB))\rightarrow h_\mathscr{V}((C, IC))$ given by $* \mapsto *$ when $h_\mathscr{V}((B, IB)) \neq \emptyset$, and the empty map otherwise). Note that $h_{\mathscr{X}}$ is the terminal sheaf. \begin{prop} $h_\mathscr{V}$ is a sheaf on $(\mathscr{X}/A)_{\Prism}^\amalg$. \end{prop} \begin{proof} Consider a cover in $(\mathscr{X}/A)_{\Prism}^{\amalg},$ which is given by a faithfully flat family $\{(B, IB)\rightarrow (C_i, IC_i)\}_i$. One needs to check that the sequence $$h_\mathscr{V}((B, IB))\rightarrow \prod_i h_\mathscr{V}((C_i, IC_i)) \rightrightarrows \prod_{i,j} h_\mathscr{V}((C_i, IC_i)\boxtimes_{(B, IB)}(C_j, IC_j))$$ is an equalizer sequence. All the terms have at most one element; consequently, there are just two cases to consider, depending on whether the middle term is empty or not. In both cases, the pair of maps on the right necessarily agree, and so one needs to see that the map on the left is an isomorphism. This is clear when the middle term is empty: then some $h_\mathscr{V}((C_i, IC_i))$ is empty, hence so is $h_\mathscr{V}((B, IB))$ (factorizations through $\mathscr{V}$ are inherited by the $(C_i, IC_i)$'s), and the map on the left is the identity of the empty set.
It remains to consider the case when the middle term is nonempty, which means that $h_\mathscr{V}((C_i, IC_i))=*$ for all $i$. In this case we need to show that $h_{\mathscr{V}}((B, IB)) =*$. Since the maps $\spf(C_i/IC_i)\rightarrow \spf(B/IB) $ are jointly surjective and each $\spf(C_i/IC_i) \rightarrow \mathscr{X}$ lands in $\mathscr{V}$, it follows that so does the map $\spf(B/IB) \rightarrow \mathscr{X}$. Thus, $h_\mathscr{V}((B, IB))=*$, which finishes the proof. \end{proof} \begin{constr}[\v{C}ech--Alexander cover of $\mathscr{V}$]\label{constACcover} Let us now assume additionally that $\mathscr{V}=\spf R$ is affine, and choose a surjection $P_{\mathscr{V}} \rightarrow R$ where $P_{\mathscr{V}}=\widehat{A[\underline{X}]}$ is a $p$--completed free $A$--algebra. Note that, upon fixing a smooth scheme $V=\spec(R_0)$ of which $\mathscr{V}$ is the formal $p$--completion (which exists by \cite[Th\'{e}or\`{e}me 7]{Elkik}), $R_0$ is of finite $A/I$--presentation and, consequently, $P_{\mathscr{V}}$ can be taken as the $p$--completion of a finite--type free $A$--algebra $P_0$, and the map $P_{\mathscr{V}}\rightarrow R$ can be taken with a finitely generated kernel $J_{\mathscr{V}} \subseteq P_{\mathscr{V}}$. Then there is a commutative diagram with exact rows \begin{center} \begin{tikzcd} 0 \ar[r] & J_{\mathscr{V}} \ar[r] \ar[d] & P_{\mathscr{V}} \ar[d] \ar[r] & R\ar[d] \ar[r] & 0 \\ 0 \ar[r] & J_{\mathscr{V}}\widehat{P_{\mathscr{V}}^{\delta}} \ar[r] & \widehat{P_{\mathscr{V}}^{\delta}} \ar[r] & \widehat{R\otimes_{P_{\mathscr{V}}}P_{\mathscr{V}}^{\delta}} \ar[r] & 0, \end{tikzcd} \end{center} where $P_{\mathscr{V}}^{\delta}$ is the universal $\delta$--algebra enveloping $P_{\mathscr{V}}$, and $\widehat{(-)}$ stands for derived $(p, I)$--completion.
To see the exactness of the second row, note that it agrees with the last row of the diagram \begin{center} \begin{tikzcd} 0 \ar[r] & J_0 \ar[r] \ar[d] & P_0 \ar[d] \ar[r] & R_0\ar[d] \ar[r] & 0 \\ 0 \ar[r] & J_{0}P_{0}^{\delta} \ar[r] \ar[d] & P_{0}^{\delta} \ar[r] \ar[d] & R_0\otimes_{P_{0}}P_{0}^{\delta} \ar[d] \ar[r] & 0 \\ 0 \ar[r] & J_{0}\widehat{P_{0}^{\delta}} \ar[r] & \widehat{P_{0}^{\delta}} \ar[r] & \widehat{R_0\otimes_{P_{0}}P_{0}^{\delta}} \ar[r] & 0. \end{tikzcd} \end{center} Here the middle row is obtained from the upper row by base--change, hence it is exact because $P_0^{\delta}$ is flat over $P_0$. The third row is obtained by considering a right exact sequence $(P_{0}^{\delta})^n \stackrel{\alpha}\rightarrow P_{0}^{\delta} \rightarrow R_0\otimes_{P_{0}}P_{0}^{\delta} \rightarrow 0$ where $\alpha $ is determined by a set of generators of $J_{0}$, applying the derived $(p, I)$-completion, and truncating on the left. In particular, since $P_{0}^{\delta}$ is $A$--flat and $R_0\otimes_{P_{0}}P_{0}^{\delta}$ is $A/I$--flat, we have that $\widehat{ P_{0}^{\delta}}$ agrees with the classical $(p, I)$--completion of $ P_{0}^{\delta}$, and similarly $\widehat{R_0\otimes_{ P_{0}} P_{0}^{\delta}}$ is classically $p$--complete. Let $(\check{C}_\mathscr{V}, I\check{C}_\mathscr{V})$ be the prismatic envelope of $(\widehat{P^{\delta}_\mathscr{V}}, J_\mathscr{V}\widehat{P^{\delta}_\mathscr{V}})$. Then the map $$R\rightarrow \widehat{R\otimes_{P_{\mathscr{V}}}P^{\delta}_\mathscr{V}} \rightarrow \check{C}_{\mathscr{V}}/I\check{C}_{\mathscr{V}}$$ corresponds to the map of formal schemes $\spf(\check{C}_{\mathscr{V}}/I\check{C}_{\mathscr{V}}) \rightarrow \mathscr{V}\hookrightarrow \mathscr{X}$. This defines an object of $(\mathscr{X}/A)_{\Prism}^\amalg,$ which we call a \emph{\v{C}ech--Alexander cover of $\mathscr{V}$}. 
It follows from the proof of \cite[Corollary~3.14]{BhattScholze} that $(\check{C}_{\mathscr{V}}, I\check{C}_{\mathscr{V}})$ is a flat prism over $(A, I)$. \end{constr} The following proposition justifies the name. \begin{prop}\label{ACechCovers} Denote by $h_{\check{C}_{\mathscr{V}}}$ the sheaf represented by the object $(\check{C}_\mathscr{V}, I\check{C}_\mathscr{V})\in (\mathscr{X}/A)_{\Prism}^\amalg$. There exists a unique map of sheaves $h_{\check{C}_{\mathscr{V}}} \rightarrow h_\mathscr{V}$, and it is an epimorphism. \end{prop} \begin{proof} If $(B, IB) \in (\mathscr{X}/A)_{\Prism}$ with $h_{\check{C}_{\mathscr{V}}}((B, IB))\neq \emptyset,$ this means that $\spf(B/IB)\rightarrow \mathscr{X}$ factors through $\mathscr{V}$ since it factors through $\spf(\check{C}_{\mathscr{V}}/I\check{C}_{\mathscr{V}}).$ Thus, we also have $h_{\mathscr{V}}((B, IB))=*$, and so the (necessarily unique) map $h_{\check{C}_{\mathscr{V}}}((B, IB))\rightarrow h_{\mathscr{V}}((B, IB))$ is defined. When $h_{\check{C}_{\mathscr{V}}}((B, IB))$ is empty, the map $h_{\check{C}_{\mathscr{V}}}((B, IB))\rightarrow h_{\mathscr{V}}((B, IB))$ is still defined and unique, namely given by the empty map. Thus, the claimed morphism of sheaves exists and is unique. We show that this map is an epimorphism. Let $(B, IB)\in (\mathscr{X}/A)_{\Prism}$ be such that $h_{\mathscr{V}}((B, IB)) = *$, i.e. $\spf(B/IB) \rightarrow \mathscr{X}$ factors through $\mathscr{V}$, and consider the map $R \rightarrow B/IB$ associated to the map $\spf(B/IB) \rightarrow \mathscr{V}$. Since $P_{\mathscr{V}}$ is a $p$--completed free $A$--algebra surjecting onto $R$ and $B$ is $(p, I)$--complete, the map $R \rightarrow B/IB$ admits a lift $P_{\mathscr{V}} \rightarrow B$.
This induces an $A$--$\delta$--algebra map $\widehat{P_{\mathscr{V}}^{\delta}}\rightarrow B$ which gives a morphism of $\delta$--pairs $(\widehat{P_{\mathscr{V}}^{\delta}}, J_{\mathscr{V}}\widehat{P_{\mathscr{V}}^{\delta}})\rightarrow (B, IB)$, and further the map of prisms $(\check{C}_\mathscr{V}, I\check{C}_\mathscr{V}) \rightarrow (B, IB)$ using the universal properties of the objects involved. It is easy to see that this is indeed (the opposite of) a morphism in $(\mathscr{X}/A)_{\Prism}$. This shows that $h_{\check{C}_{\mathscr{V}}}((B, IB))$ is nonempty whenever $h_{\mathscr{V}}((B, IB))$ is. Thus, the map is an epimorphism. \end{proof} Let $\mathfrak{V}=\{\mathscr{V}_j\}_{j \in J}$ be an affine open cover of $\mathscr{X}$. For $n \geq 1$ and a multi--index $(j_1, j_2, \dots, j_n) \in J^n,$ denote by $\mathscr{V}_{j_1, \dots, j_n}$ the intersection $\mathscr{V}_{j_1}\cap \dots \cap \mathscr{V}_{j_n}$. As $\mathscr{X}$ is assumed to be separated, each $\mathscr{V}_{j_1, \dots, j_n}$ is affine and we write $\mathscr{V}_{j_1, \dots, j_n}=\mathrm{Spf}(R_{j_1, \dots, j_n})$. \begin{rem}[Binary products in $(\mathscr{X}/A)_{\Prism}$] \label{products} For $(B,IB), (C, IC)\in (\mathscr{X}/A)_{\Prism}$, let us denote their binary product by $(B, IB)\boxtimes (C, IC)$. Let us describe it explicitly at least under the additional assumptions that \begin{itemize} \item{$(B,IB), (C, IC)$ are $(p, I)$--completely flat over $(A, I),$} \item{there are affine opens $\mathscr{U}, \mathscr{V} \subseteq \mathscr{X}$ such that $h_{\mathscr{U}}((B, IB))=*=h_{\mathscr{V}}((C, IC))$.} \end{itemize} Set $\mathscr{W}=\mathscr{U} \cap \mathscr{V}$ and denote the rings corresponding to the affine open sets $\mathscr{U}, \mathscr{V}$ and $\mathscr{W}$ by $R, S$ and $T$, respectively. Then any object $(D, ID)$ with maps to both $(B, IB)$ and $(C, IC)$ lives over $\mathscr{W},$ i.e. satisfies $h_{\mathscr{W}}((D, ID))=*$ (indeed, the map $\spf(D/ID)\rightarrow \mathscr{X}$ then factors through both $\mathscr{U}$ and $\mathscr{V}$, hence through $\mathscr{W}$). This justifies the following construction.
Consider the following commutative diagram, where $\widehat{\otimes}$ denotes the $(p, I)$--completed tensor product everywhere, and the corner symbols mark the squares that are (completed) pushouts: \begin{center} \begin{tikzcd} && B \widehat{\otimes}_A C \ar[dd] && \\ &&&&\\ B \ar[uurr]\ar[dd]&& B/IB\widehat{\otimes}_{A/I}C/IC\ar[d, "\alpha"]&& \ar[uull] C\ar[dd]\\ && (B/IB\widehat{\otimes}_{R}T)\widehat{\otimes}_{T}(C/IC\widehat{\otimes}_{S}T) \ar[d, phantom, "\cornerdown", near start] && \\ B/IB \ar[r] \ar[uurr]& B/IB\widehat{\otimes}_{R}T\ar[ur] & {\color{white} A} &\ar[ul] C/IC\widehat{\otimes}_{S}T & \ar[l]\ar[uull]C/IC \\ R\ar[u]\ar[r] \ar[ur, phantom, "\llcorner", very near end] & T\ar[u] \ar[rr, equal] & & T \ar[u]& \ar[l] \ar[ul, phantom, "\lrcorner", very near end] S \ar[u] \end{tikzcd} \end{center} Let $J \subseteq B \widehat{\otimes}_A C$ be the kernel of the map $$B \widehat{\otimes}_A C \rightarrow B/IB \widehat{\otimes}_{A/I} C/IC \stackrel{\alpha}{\rightarrow} (B/IB\widehat{\otimes}_{R} T) \widehat{\otimes}_{T} (C/IC\widehat{\otimes}_{S} T).$$ Then $(B, IB)\boxtimes (C, IC)$ is given by the prismatic envelope of the $\delta$--pair $(B \widehat{\otimes}_A C, J)$. \end{rem} \begin{prop}\label{prop:ProductsInPrismaticSite} The \v{C}ech--Alexander covers can be chosen so that for all indices $j_1, \dots, j_n$ we have $$(\check{C}_{\mathscr{V}_{j_1, \dots, j_n}}, I\check{C}_{\mathscr{V}_{j_1, \dots, j_n}})=(\check{C}_{\mathscr{V}_{j_1}}, I\check{C}_{\mathscr{V}_{j_1}})\boxtimes (\check{C}_{\mathscr{V}_{j_2}}, I\check{C}_{\mathscr{V}_{j_2}}) \boxtimes \dots \boxtimes (\check{C}_{\mathscr{V}_{j_n}}, I\check{C}_{\mathscr{V}_{j_n}}).$$ \end{prop} \begin{proof} Clearly it is enough to show the statement for binary products.
More precisely, given two affine opens $\mathscr{V}_1, \mathscr{V}_2 \subseteq \mathscr{X}$ and an arbitrary initial choice of $(\check{C}_{\mathscr{V}_{1}}, I\check{C}_{\mathscr{V}_{1}})$ and $(\check{C}_{\mathscr{V}_{2}}, I\check{C}_{\mathscr{V}_{2}}),$ we show that $P_{\mathscr{V}_{12}} \rightarrow R_{12}$ can be chosen so that the resulting \v{C}ech--Alexander cover $(\check{C}_{\mathscr{V}_{12}}, I\check{C}_{\mathscr{V}_{12}})$ of $\mathscr{V}_{12}$ is equal to $(\check{C}_{\mathscr{V}_{1}}, I\check{C}_{\mathscr{V}_{1}}) \boxtimes (\check{C}_{\mathscr{V}_{2}}, I\check{C}_{\mathscr{V}_{2}})$. For the purposes of this proof, let us also refer to a prismatic envelope of a $\delta$--pair $(S, J)$ as ``the prismatic envelope of the arrow $S \rightarrow S/J$''. Consider $\alpha_i:P_{\mathscr{V}_i}\twoheadrightarrow R_i,\; i=1, 2$ as in Construction~\ref{constACcover}, and set $P_{\mathscr{V}_{12}}=P_{\mathscr{V}_1}\widehat{\otimes}_A P_{\mathscr{V}_2}$. Then one has the induced surjection $\alpha_1 \otimes \alpha_2:P_{\mathscr{V}_{12}} \rightarrow R_1\widehat{\otimes}_{A/I} R_2$, which can be followed by the induced map $R_1\widehat{\otimes}_{A/I} R_2\rightarrow R_{12}$. This latter map is surjective as well since $\mathscr{X}$ is separated, and therefore the composition of these two maps $\alpha_{12}:P_{\mathscr{V}_{12}}\rightarrow R_{12}$ is surjective, with kernel $J_{\mathscr{V}_{12}}$ containing $(J_{\mathscr{V}_1}, J_{\mathscr{V}_2})P_{\mathscr{V}_{12}}$.
We may construct a diagram analogous to the one from Remark~\ref{products}, which becomes the diagram \begin{center} \begin{tikzcd}[column sep = small] && \widehat{P_{\mathscr{V}_{12}}^{\delta}} \ar[dd] && \\ &&&&\\ \widehat{P_{\mathscr{V}_1}^{\delta}} \ar[uurr]\ar[dd]&& (R_1\widehat{\otimes}_{A/I}R_2)\widehat{\otimes}_{P_{\mathscr{V}_{12}}}(\widehat{P_{\mathscr{V}_{12}}^{\delta}})\ar[d, "\alpha"]&& \ar[uull] \widehat{P_{\mathscr{V}_2}^{\delta}}\ar[dd]\\ && R_{12}\widehat{\otimes}_{P_{\mathscr{V}_{12}}}(\widehat{P_{\mathscr{V}_{12}}^{\delta}}) \ar[d, phantom, "\cornerdown", near start] && \\ R_1\widehat{\otimes}_{P_{\mathscr{V}_1}}\widehat{P_{\mathscr{V}_1}^{\delta}} \ar[uurr]\ar[r]& R_{12}\widehat{\otimes}_{P_{\mathscr{V}_1}}\widehat{P_{\mathscr{V}_1}^{\delta}}\ar[ur] & {\color{white} A} &\ar[ul] R_{12}\widehat{\otimes}_{P_{\mathscr{V}_2}}\widehat{P_{\mathscr{V}_2}^{\delta}} & \ar[l] \ar[uull]R_2\widehat{\otimes}_{P_{\mathscr{V}_2}}\widehat{P_{\mathscr{V}_2}^{\delta}} \\ R_1\ar[u]\ar[r] \ar[ur, phantom, "\llcorner", very near end] & R_{12}\ar[u] \ar[rr, equal] & & R_{12} \ar[u]& \ar[l] \ar[ul, phantom, "\lrcorner", very near end] R_2, \ar[u] \end{tikzcd} \end{center} (after replacing the expected terms in the center column with isomorphic ones). The composition of the two central arrows is the arrow obtained from the surjection $P_{\mathscr{V}_{12}} \rightarrow R_{12}$ by the procedure as in Construction~\ref{constACcover}. Now $(\check{C}_{\mathscr{V}_{12}}, I\check{C}_{\mathscr{V}_{12}})$ is obtained as the prismatic envelope of this composed central arrow, while $(\check{C}_{\mathscr{V}_{1}}, I\check{C}_{\mathscr{V}_{1}})\boxtimes (\check{C}_{\mathscr{V}_{2}}, I\check{C}_{\mathscr{V}_{2}})$ is obtained the same way, but only after replacing the downward arrows on the left and right by their prismatic envelopes. Comparing universal properties, one easily sees that the resulting central prismatic envelope remains unchanged, proving the claim. 
\end{proof} \begin{prop}\label{prop:Cover} The map $\coprod_j h_{\mathscr{V}_j} \rightarrow h_\mathscr{X}=*$ (where $\coprod$ denotes the coproduct in $\shv((\mathscr{X}/A)_{\Prism}^\amalg)$) to the final object is an epimorphism, hence so is the map $\coprod_j h_{\check{C}_{\mathscr{V}_j}} \rightarrow *$. \end{prop} \begin{proof} It is enough to show that for a given object $(B, IB)\in (\mathscr{X}/A)_{\Prism}^\amalg,$ there is a faithfully flat family $(B, IB) \rightarrow (C_i, IC_i)$ in $ (\mathscr{X}/A)_{\Prism}^{\amalg, \mathrm{op}}$ such that $\coprod^{\mathrm{pre}}_j h_{\mathscr{V}_j}((C_i, IC_i)) \neq \emptyset$ for all $i$, where $\coprod^{\mathrm{pre}}$ denotes the coproduct of presheaves. With that aim, let us first consider the preimages $\mathscr{W}_j \subseteq \spf(B/IB)$ of each $\mathscr{V}_j $ under the map $\spf(B/IB)\rightarrow \mathscr{X}$. These preimages form an open cover of $\spf(B/IB)$ that corresponds to an open cover of $\spec B/(p, I)B$. One can then choose $f_1, f_2, \dots, f_m \in B$ such that $\{\spec (B/(p, I)B)_{f_i}\}_i$ refines this cover, i.e. every $\spec (B/(p, I)B)_{f_i}$ corresponds to an open subset of $\mathscr{W}_{j(i)}$ for some index $j(i)$. The elements $f_1, \dots, f_m$ generate the unit ideal of $B$ since they do so modulo $(p, I)$, which is contained in $\mathrm{rad}(B).$ Thus, the family $$(B, IB) \rightarrow (C_i, IC_i):=(\widehat{B_{f_i}}, I \widehat{B_{f_i}}),\;\; i=1, 2, \dots, m$$ is easily seen to give the desired faithfully flat family, with each $\coprod^{\mathrm{pre}}_j h_{\mathscr{V}_j}((C_i, IC_i))$ nonempty, since each $\spf(C_i/IC_i) \rightarrow \mathscr{X}$ factors through $\mathscr{V}_{j(i)}$ by construction. \end{proof} \begin{rem} The proof of Proposition~\ref{prop:Cover} is the one step where we used the relaxation of the topology, namely the fact that the faithfully flat cover $(B, IB)\rightarrow \prod_i(C_i, IC_i)$ can be replaced by the family $\{(B, IB)\rightarrow (C_i, IC_i)\}_i$.
\end{rem} Finally, we obtain the \v{C}ech--Alexander complexes in the global case. \begin{prop} \label{prop:CechComplex} $\R\Gamma((\mathscr{X}/A)_{\Prism}, \mathcal{O})$ is modelled by the \v{C}ech--Alexander complex \begin{equation}\tag{$\check{C}^\bullet_{\mathfrak{V}}$}\label{eqn:CechComplex} 0 \longrightarrow \prod_j \check{C}_{\mathscr{V}_j}\longrightarrow \prod_{j_1, j_2} \check{C}_{\mathscr{V}_{j_1, j_2}} \longrightarrow \prod_{j_1, j_2, j_3} \check{C}_{\mathscr{V}_{j_1, j_2, j_3}} \longrightarrow\dots \end{equation} \end{prop} \begin{proof} By \cite[079Z]{stacks}, the epimorphism of sheaves $\coprod_j h_{\check{C}_{\mathscr{V}_j}}\rightarrow *$ from Proposition~\ref{prop:Cover} implies that there is a spectral sequence with $E_1$-page $$E_1^{p, q}=H^q\Big(\big(\coprod_j h_{\check{C}_{\mathscr{V}_j}}\big)^{\times p}, \mathcal{O}\Big)=H^q\Big(\coprod_{j_1, j_2, \dots, j_p} h_{\check{C}_{\mathscr{V}_{j_1,\dots, j_p}}}, \mathcal{O}\Big)=\prod_{j_1, \dots, j_p}H^q((\check{C}_{\mathscr{V}_{j_1, \dots, j_p}}, I\check{C}_{\mathscr{V}_{j_1, \dots, j_p}}), \mathcal{O})$$ converging to $H^{p+q}(*, \mathcal{O})=H^{p+q}((\mathscr{X}/A)_{\Prism}^\amalg, \mathcal{O})=H^{p+q}((\mathscr{X}/A)_{\Prism}, \mathcal{O}),$ where we implicitly used Corollary~\ref{cor:CohomologySame} and the fact that $h_{\check{C}_{\mathscr{V}_{j_1}}}\times h_{\check{C}_{\mathscr{V}_{j_2}}}=h_{\check{C}_{\mathscr{V}_{j_1}}\boxtimes\check{C}_{\mathscr{V}_{j_2}}}=h_{\check{C}_{\mathscr{V}_{j_1, j_2}}}$ as in Proposition~\ref{prop:ProductsInPrismaticSite}, and similarly for higher multi--indices. By Lemma~\ref{VanishingObjects}, $H^q((\check{C}_{\mathscr{V}_{j_1, \dots, j_n}}, I\check{C}_{\mathscr{V}_{j_1, \dots, j_n}}), \mathcal{O})=0$ for every $q>0$ and every multi--index $j_1, \dots, j_n$. The first page is therefore concentrated in a single row of the form $\check{C}^\bullet_{\mathfrak{V}}$, and thus the spectral sequence collapses on the second page.
This proves that the cohomologies of $\R\Gamma((\mathscr{X}/A)_{\Prism}, \mathcal{O})$ are computed as cohomologies of $\check{C}^\bullet_{\mathfrak{V}}$, but in fact, this yields a quasi--isomorphism of the complexes themselves. (For example, analyzing the proof of \cite[079Z]{stacks} via \cite[03OW]{stacks}, the double complex $E_0^{\bullet \bullet}$ of the above spectral sequence comes with a natural map $\alpha:\check{C}^\bullet_{\mathfrak{V}}\rightarrow \mathrm{Tot}(E_0^{\bullet\bullet}),$ and a natural quasi--isomorphism $\beta: \R\Gamma((\mathscr{X}/A)_{\Prism}, \mathcal{O}) \rightarrow \mathrm{Tot}(E_0^{\bullet\bullet});$ when the spectral sequence collapses as above, $\alpha$ is also a quasi--isomorphism.) \end{proof} \begin{rems}\label{rem:CechBaseChange} \begin{enumerate} \item{Just as in the affine case, the formation of \v{C}ech--Alexander complexes is compatible with flat base--change on the base prism essentially by \cite[Proposition~3.13]{BhattScholze}.} \item{Now let $(A, I)$ be the prism $(\ainf, \mathrm{Ker}\,\theta)$ and let $\mathscr{X}$ be of the form $\mathscr{X}=\mathscr{X}^0\times_{\oh_K}\oh_{\mathbb{C}_K}$ where $\mathscr{X}^0$ is a smooth separated formal $\oh_K$--scheme. A convenient way to describe the $G_K$--action on $\R\Gamma_{\Prism}(\mathscr{X}/\ainf)$ is via base--change: given $g \in G_K,$ the action of $g$ on $\ainf$ gives a map of prisms $g:(\ainf, (E(u)))\rightarrow (\ainf, (E(u)))$, and $g^*\mathscr{X}=\mathscr{X}$ since $\mathscr{X}$ comes from $\oh_K$. The base--change theorem for prismatic cohomology \cite[Theorem~1.8 (5)]{BhattScholze} then gives an $\ainf$--linear map $g^*\R\Gamma_{\Prism}(\mathscr{X}/\ainf)\rightarrow \R\Gamma_{\Prism}(\mathscr{X}/\ainf);$ untwisting by $g$ on the left, this gives an $\ainf$--$g$--semilinear action map $g: \R\Gamma_{\Prism}(\mathscr{X}/\ainf)\rightarrow \R\Gamma_{\Prism}(\mathscr{X}/\ainf)$.
The exact same procedure defines the $G_K$--action on the \v{C}ech--Alexander complexes modelling the cohomology theories since they are base--change compatible.} \end{enumerate} \end{rems} \section{The conditions \Crs}\label{sec:crs} \subsection{Definition and basic properties} Let us fix some more notation. For a natural number $s$, denote by $K_s$ the field $K(\pi_s)$ (where $(\pi_n)_n$ is the compatible chain of $p^n$--th roots of $\pi$ chosen before, i.e. so that $u=[(\pi_n)_n]$ in $\ainf$), and set $K_\infty=\bigcup_s K_s$. Further set $K_{p^{\infty}}=\bigcup_m K(\zeta_{p^m})$ and for $s \in \mathbb{N}\cup \{\infty\}$, set $K_{p^{\infty},s}=K_{p^{\infty}}K_{s}$. Note that the field $K_{p^{\infty}, \infty}$ is the Galois closure of $K_{\infty}$. Denote by $\widehat{G}$ the Galois group $\mathrm{Gal}(K_{p^\infty, \infty}/K)$ and by $G_s$ the group $\mathrm{Gal}(\overline{K}/K_s),$ for $s \in \mathbb{N}\cup\{\infty\}$. The group $\widehat{G}$ is generated by its two subgroups $\mathrm{Gal}(K_{p^\infty, \infty}/K_{p^{\infty}})$ and $\mathrm{Gal}(K_{p^\infty, \infty}/K_{\infty})$ (by \cite[Lemma~5.1.2]{LiuBreuilConjecture}). The subgroup $\mathrm{Gal}(K_{p^\infty, \infty}/K_{p^{\infty}})$ is normal, and each of its elements $g$ is uniquely determined by its action on the elements $(\pi_s)_s$, which takes the form $g(\pi_s)=\zeta_{p^s}^{a_s} \pi_s$, with the integers $a_s$ unique modulo $p^s$ and compatible with each other as $s$ increases. It follows that $\mathrm{Gal}(K_{p^{\infty},\infty}/K_{p^{\infty}})\simeq \mathbb{Z}_p$, with a topological generator $\tau$ given by $\tau (\pi_n)=\zeta_{p^n}\pi_n$ (where, again, the $\zeta_{p^n}$'s are chosen as before, so that $v=[(\zeta_{p^n})_n]-1$). Similarly, the image of $G_s$ in $\widehat{G}$ is the subgroup $\widehat{G}_s=\mathrm{Gal}(K_{p^\infty, \infty}/K_s)$.
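For the reader's convenience, the lattice of fields just fixed can be summarized in a diagram (an illustrative summary assembled from the definitions above; all symbols are exactly those introduced in this section):

```latex
% Summary of the field extensions fixed above: K_\infty is generated by the
% compatible p-power roots \pi_s of \pi, K_{p^\infty} by the p-power roots of
% unity, and K_{p^\infty,\infty}=K_{p^\infty}K_\infty is the Galois closure
% of K_\infty over K.
\[
\begin{array}{ccccc}
 & & K_{p^\infty, \infty}=K_{p^\infty}K_\infty & & \\
 & \nearrow & & \nwarrow & \\
K_{p^\infty}=\bigcup_m K(\zeta_{p^m}) & & & & K_\infty=\bigcup_s K(\pi_s) \\
 & \nwarrow & & \nearrow & \\
 & & K & &
\end{array}
\]
```

Here the left leg corresponds to the normal subgroup $\mathrm{Gal}(K_{p^\infty, \infty}/K_{p^{\infty}})\simeq \mathbb{Z}_p$ topologically generated by $\tau$, and $\widehat{G}$ is generated by the Galois groups attached to the two legs.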
Clearly $\widehat{G}_s$ contains $\mathrm{Gal}(K_{p^\infty, \infty}/K_{\infty})$ and the intersection of $\widehat{G}_s$ with $\mathrm{Gal}(K_{p^\infty, \infty}/K_{p^\infty})$ is $\mathrm{Gal}(K_{p^\infty, \infty}/K_{p^\infty, s}).$ Just as in the $s=0$ case, $\widehat{G}_s$ is generated by these two subgroups, with the subgroup $\mathrm{Gal}(K_{p^\infty, \infty}/K_{p^\infty, s})$ normal and topologically generated by the element $\tau^{p^s}$. There is a natural $G_K$--action on $\ainf=W(\oh_{\mathbb{C}_K}^{\flat}),$ extended functorially from the natural action on $\oh_{\mathbb{C}_K}^{\flat}$. This action makes the map $\theta: \ainf \rightarrow \oh_{\mathbb{C}_K}$ $G_K$--equivariant, in particular, the kernel $E(u)\ainf$ is $G_K$--stable. The $G_K$--action on the $G_K$--closure of $\Es$ in $\ainf$ factors through $\widehat{G}$. Note that the subgroup $\mathrm{Gal}(K_{p^{\infty},\infty}/K_{{\infty}})$ of $\widehat{G}$ acts trivially on elements of $\Es$, and the action of the subgroup $\mathrm{Gal}(K_{p^{\infty},\infty}/K_{p^{\infty}})$ is determined by the equality $\tau (u)=(v+1)u$. For an integer $s \geq 0$ and $i$ between $0$ and $s$, denote by $\xi_{s, i}$ the element $$\xi_{s, i}=\frac{\varphi^{s}(v)}{\omega \varphi(\omega) \dots \varphi^i(\omega)}=\varphi^{-1}(v)\varphi^{i+1}(\omega)\varphi^{i+2}(\omega)\dots \varphi^{s}(\omega)$$ (recall that $\omega=v/\varphi^{-1}(v)$), and set $$I_s=\left(\xi_{s, 0}u, \xi_{s, 1}u^p, \dots, \xi_{s, s}u^{p^s}\right).$$ We are concerned with the following conditions. \begin{deff}\label{Def:Crys} Let $M_{\inf}$ be an $\ainf$--module endowed with a $G_K$--$\ainf$--semilinear action, let $M_{\BK}$ be an $\Es$--module and let $M_{\BK}\rightarrow M_{\inf}$ be an $\Es$--linear map. Let $s \geq 0$ be an integer. 
\begin{enumerate} \item{An element $x \in M_{\inf}$ is called a \emph{{\Crs}--element} if for every $g \in G_s$, $$g(x)-x\in I_sM_{\inf}.$$} \item{We say that the pair $M_{\BK} \rightarrow M_{\inf}$ \emph{satisfies the condition \Crs} if for every element $x \in M_{\BK}$, the image of $x$ in $M_{\inf}$ is \Crs.} \item{An element $x \in M_{\inf}$ is called a \emph{{\Crrs}--element} if for every $g \in G_s$, there is an element $y \in M_{\inf}$ such that $$g(x)-x=\varphi^{s}(v)uy.$$} \item{We say that the pair $M_{\BK} \rightarrow M_{\inf}$ \emph{satisfies the condition \Crrs} if for every element $x \in M_{\BK}$, the image of $x$ in $M_{\inf}$ is \Crrs.} \item{Additionally, we call \Cr{0}--elements \emph{crystalline elements} and we call the condition \Cr{0} \emph{the crystalline condition}.} \end{enumerate} \end{deff} \begin{rems} \begin{enumerate}[(1)] \item{Since $I_0=\varphi^{-1}(v)u\ainf,$ the crystalline condition equivalently states that for all $g \in G_K$ and all $x$ in the image of $M_{\BK},$ $$g(x)-x \in \varphi^{-1}(v)uM_{\inf}.$$ The reason for the extra terminology in the case $s=0$ is that the condition is connected with a criterion for certain representations to be crystalline, as discussed below. The higher conditions \Crs will on the other hand find application in computing bounds on ramification of $p^n$--torsion \'{e}tale cohomology. The conditions \Crrs serve an auxiliary purpose. Clearly \Crrs implies \Crs. } \item{Strictly speaking, one should talk about the crystalline condition (or \Crs) for the map $f$, but we choose to talk about the crystalline condition (or \Crs) for the pair $(M_{\BK}, M_{\inf})$ instead, leaving the datum of the map $f$ implicit. This is because typically we consider the situation in which $M_{\BK}$ is an $\Es$--submodule of $M_{\inf}^{G_{\infty}}$ and $M_{\BK}\otimes_\Es \ainf \simeq M_{\inf}$ via the natural map (or the derived $(p, E(u))$--completed variant, $M_{\BK}\widehat{\otimes}_\Es \ainf \simeq M_{\inf}$).
Also note that $f:M_{\BK}\rightarrow M_{\inf}$ satisfies the condition \Crs if and only if $f(M_{\BK})\subseteq M_{\inf}$ does.} \end{enumerate} \end{rems} \begin{lem}\label{lem:GkStableIdeals} For any integer $s$, the ideals $\varphi^s(v)u\ainf$ and $I_s$ are $G_K$--stable. \end{lem} \begin{proof} It is enough to prove that the ideals $u\ainf$ and $v\ainf$ are $G_K$--stable. Note that the $G_K$--stability of $v\ainf$ implies $G_K$--stability of $\varphi^s(v)\ainf$ for any $s \in \mathbb{Z}$ since $\varphi$ is a $G_K$--equivariant automorphism of $\ainf$. Once we know this, we know that $g(\varphi^s(v))$ equals $\varphi^s(v)$ times a unit for every $g$ and $s$; the same is then true of $\varphi^{i}(\omega)=\varphi^i(v)/\varphi^{i-1}(v)$, hence also of all the elements $\xi_{s, i}$, and it follows that $I_s$ is $G_K$--stable. Given $g \in G_K$, $g(\pi_n)=\zeta_{p^n}^{a_n}\pi_n$ for an integer $a_n$ unique modulo $p^n$ and such that $a_{n+1}\equiv a_n \pmod{p^n}$. It follows that $g(u)=[\underline{\varepsilon}]^a u$ for a $p$--adic integer $a$ ($=\lim_n a_n$). (The $\mathbb{Z}_p$--exponentiation used here is defined by $[\underline{\varepsilon}]^a =\lim_n [\underline{\varepsilon}]^{a_n} $ and the considered limit is with respect to the weak topology.) Thus, $u\ainf$ is $G_K$--stable. Similarly, we have $g(\zeta_{p^n})=\zeta_{p^n}^{b_n},$ for integers $b_n$ coprime to $p$, unique modulo $p^n$ and compatible with each other as $n$ grows. It follows that $g([\underline{\varepsilon}])=[\underline{\varepsilon}]^b$ for $b=\lim_n b_n$, and so $g(v)=(v+1)^b-1=\lim_n((v+1)^{b_n}-1).$ The resulting expression is still divisible by $v$. To see that, choose the representatives $b_n$ to be positive. Then the claim follows from the formula $$(v+1)^{b_n}-1=v((v+1)^{b_n-1}+(v+1)^{b_n-2}+\dots +1),$$ upon noting that the sequence of elements $((v+1)^{b_n-1}+(v+1)^{b_n-2}+\dots +1)=((v+1)^{b_n}-1)/v$ is still $(p, v)$--adically (i.e.
weakly) convergent thanks to Lemma~\ref{disjointness}. \end{proof} In view of the above lemma, the following is a convenient restatement of the conditions \Crs, \Crrs. \begin{lem}\label{lem:RestatementCrs} Given $f: M_{\BK} \rightarrow M_{\inf}$ as in Definition~\ref{Def:Crys}, the pair $(M_{\BK}, M_{\inf})$ satisfies the condition \Crs (\Crrs, resp.) if and only if the image of $M_{\BK}$ in $\overline{M_{\inf}}:=M_{\inf}/I_sM_{\inf}$ ($\overline{M_{\inf}}:=M_{\inf}/\varphi^s(v)uM_{\inf}$, resp.) lands in $\overline{M_{\inf}}^{G_s}$. \end{lem} \begin{proof} Upon noting that the $G_K$--action is well--defined on $\overline{M_{\inf}}$ thanks to Lemma~\ref{lem:GkStableIdeals}, this is just a direct reformulation of the conditions \Crs or \Crrs. \end{proof} In the case of the above--mentioned condition $f(M_{\BK})\subseteq M_{\inf}^{G_{\infty}},$ the $G_K$--closure of $f(M_{\BK})$ in $M_{\inf}$ is contained in the $G_K$--submodule $M_{\inf}^{G_{K_{p^\infty,\infty}}}$, and thus the $G_K$--action on it factors through $\widehat{G}$. Under mild assumptions on $M_{\inf}$, the $G_s$--action on the elements of $f(M_{\BK})$ is ultimately determined by $\tau^{p^s}$, the topological generator of $\mathrm{Gal}(K_{p^\infty, \infty}/ K_{p^\infty, s})$. Consequently, the conditions \Crrs are also determined by the action of this single element: \begin{lem}\label{lem:TauIsEnough} Let $f: M_{\BK} \rightarrow M_{\inf}$ be as in Definition~\ref{Def:Crys}. Additionally assume that $M_{\inf}$ is classically $(p, E(u))$--complete and $(p, E(u))$--completely flat, and that the $G_K$--action on $M_{\inf}$ is continuous with respect to this topology. Also assume that $f(M_{\BK})$ is contained in $M_{\inf}^{G_{\infty}}$.
Then the action of $\widehat{G}$ on elements of $f(M_{\BK})$ makes sense, and the pair $(M_{\BK}, M_{\inf})$ satisfies the condition \Crrs if and only if $$\forall x \in f(M_{\BK}): \tau^{p^s}(x)-x\in \varphi^s(v)uM_{\inf}.$$ \end{lem} \begin{proof} Clearly the stated condition is necessary. To prove sufficiency, assume the above condition for $\tau^{p^s}$. By the fixed--point interpretation of the condition \Crrs as in Lemma~\ref{lem:RestatementCrs}, it is clear that the analogous condition holds for every element $g \in \langle \tau^{p^s}\rangle $. Next, assume that $g \in \mathrm{Gal}(K_{p^\infty, \infty}/K_{p^\infty, s})=\overline{\langle \tau^{p^s} \rangle}$. This means that $g=\lim_n \tau^{p^s a_n}$ with the sequence of integers $(a_n)$ $p$--adically convergent. Then, for $x \in f(M_{\BK}),$ by continuity we have $g(x)-x=\lim_n (\tau^{p^s a_n}(x)-x)$, which equals $\lim_n \varphi^s(v)u y_n$ with $y_n \in M_{\inf}$. Upon noting that the sequence $(y_n)$ is still convergent (using the fact that the $(p, E(u))$--adic topology is the $(p, \varphi^s(v)u)$--adic topology, and that $p, \varphi^s(v)u$ is a regular sequence on $M_{\inf}$), we have that $g(x)-x=\varphi^s(v)u y$ where $y=\lim_n y_n$. To conclude, note that a general element of $\widehat{G}_s$ is of the form $g_1g_2$ where $g_1 \in \mathrm{Gal}(K_{p^\infty, \infty}/ K_{p^\infty, s})$ and $g_2 \in \mathrm{Gal}(K_{p^\infty, \infty}/ K_{\infty}).$ Then for $x \in f(M_{\BK})$, by the assumption $f(M_{\BK})\subseteq M_{\inf}^{G_{\infty}}$ we have $g_1g_2(x)-x=g_1(x)-x$, and so the condition \Crrs follows from the previous part.
\end{lem} \begin{proof} Note that $\Es\hookrightarrow \ainf$ satisfies the assumptions of Lemma~\ref{lem:TauIsEnough}, so it is enough to consider the action of the element $\tau^{p^s} \in \widehat{G}_s$. For an element $f = \sum_i a_i u^i \in \Es$ we have $$\tau^{p^s}(f)-f=\sum_{i\geq 0} a_i ((v+1)^{p^s}u)^{i}-\sum_{i\geq 0} a_iu^i=\sum_{i\geq 1}a_i((v+1)^{p^si}-1)u^i,$$ and thus, $$\frac{\tau^{p^s}(f)-f}{\varphi^s(v)u}=\sum_{i \geq 1}a_i\frac{(v+1)^{p^si}-1}{\varphi^s(v)}u^{i-1}=\sum_{i \geq 1}a_i\frac{(v+1)^{p^si}-1}{(v+1)^{p^s}-1}u^{i-1}.$$ Since $\varphi^s(v)=(v+1)^{p^s}-1$ divides $(v+1)^{p^si}-1$ for each $i$, the obtained series has coefficients in $\ainf$, showing that $\tau^{p^s}(f)-f \in \varphi^s(v)u\ainf$ as desired. \end{proof} The following lemma shows that in various contexts, it is often sufficient to verify the conditions \Crs, \Crrs on generators. \begin{lem}\label{generators} Fix an integer $s \geq 0$. Let \textnormal{(C)} be either the condition \Crs or \Crrs. \begin{enumerate} \item{Let $M_{\inf}$ be an $\ainf$--module with a $G_K$--$\ainf$--semilinear action. The set of all \textnormal{(C)}--elements forms an $\Es$--submodule of $M_{\inf}$.} \item{Let $C_{\inf}$ be an $\ainf$--algebra endowed with a $G_K$--semilinear action. The set of \textnormal{(C)}--elements of $C_{\inf}$ forms an $\Es$--subalgebra of $C_{\inf}$.} \item{If the algebra $C_{\inf}$ from (2) is additionally an $\ainf$--$\delta$--algebra such that $G_K$ acts by $\delta$--maps (i.e.
$\delta g=g \delta$ for all $g \in G_K$) then the set of all \textnormal{(C)}--elements forms an $\Es$--$\delta$--subalgebra of $C_{\inf}$.} \item{If the algebra $C_{\inf}$ as in (2) is additionally derived $(p, E(u))$--complete, the $G_K$--action on it is continuous with respect to the $(p, E(u))$--adic topology and $C_{\BK}\rightarrow C_{\inf}$ is a map of $\Es$--algebras that satisfies the condition \textnormal{(C)}, then so does $\widehat{C_{\BK}} \rightarrow C_{\inf},$ where $\widehat{C_{\BK}}$ is the derived $(p, E(u))$--completion of $C_{\BK}$. In particular, the set of all \textnormal{(C)}--elements in $C_{\inf}$ forms a derived $(p, E(u))$--complete $\Es$--subalgebra of $C_{\inf}$.} \end{enumerate} \end{lem} \begin{proof} Let $J$ be the ideal $I_s$ if (C)=\Crs and the ideal $\varphi^s(v)u\ainf$ if (C)=\Crrs. In view of Lemma~\ref{lem:RestatementCrs}, the sets described in (1) and (2) are obtained as the preimages of $\left(M_{\inf}/J M_{\inf}\right)^{G_s}$ (of the ring $\left(C_{\inf}/J C_{\inf}\right)^{G_s}$, resp.) under the canonical projection $M_{\inf} \rightarrow M_{\inf}/J M_{\inf}$ ($C_{\inf} \rightarrow C_{\inf}/J C_{\inf},$ resp.). As these $G_s$--fixed points form an $\Es$--module ($\Es$--algebra, resp.) by Lemma~\ref{coeff}, this proves (1) and (2). Similarly, to prove (3) we only need to prove that the ideal $J C_{\inf}$ is a $\delta$--ideal, so that the canonical projection $C_{\inf} \rightarrow C_{\inf}/J C_{\inf}$ is a map of $\delta$--rings. Let us argue first in the case \Crrs. As $\delta(u)=0$, we have $$\delta(\varphi^{s}(v)u)=\delta(\varphi^{s}(v))u^{p}=\frac{\varphi(\varphi^{s}(v))-(\varphi^{s}(v))^p}{p}u^{p}=\frac{\varphi^{s+1}(v)-(\varphi^{s}(v))^p}{p}u^{p}.$$ Recall that $\varphi^s(v)=[\underline{\varepsilon}]^{p^s}-1$ divides $\varphi^{s+1}(v)=([\underline{\varepsilon}]^{p^s})^p-1$.
The numerator of the last fraction is thus divisible by $\varphi^{s}(v)$ and since $\varphi^{s}(v)\ainf \cap p\ainf=\varphi^{s}(v)p\ainf$ by Lemma~\ref{disjointness}, $\varphi^{s}(v)$ divides the whole fraction ${(\varphi^{s+1}(v)-(\varphi^{s}(v))^p)/p}$ in $\ainf$. (We note that this is true for \textit{every} integer $s$, in particular $s=-1$, as well.) Let us now prove that the ideal $J=I_s$ is a $\delta$--ideal. For any $i$ between $0$ and $s-1$, we have $$\delta\left(\xi_{s, i}\right)=\delta(\varphi^{-1}(v)\varphi^{i+1}(\omega)\dots \varphi^s(\omega))=\frac{\varphi^{-1}(v)\omega \varphi^{i+2}(\omega)\dots \varphi^{s+1}(\omega)-\varphi^{-1}(v)^p\varphi^{i+1}(\omega)^p\dots \varphi^s(\omega)^p}{p}.$$ The numerator is divisible by $\xi_{s, i+1}$, hence so is the fraction, again by Lemma~\ref{disjointness}. Thus, $\delta(\xi_{s, i}u^{p^i})=\delta(\xi_{s, i})u^{p^{i+1}}$ is a multiple of $\xi_{s, i+1}u^{p^{i+1}}$. When $i=s$, we have $\xi_{s, s}=\varphi^{-1}(v)$, and $\delta(\xi_{s, s})$ is thus a multiple of $\xi_{s, s}$ by the previous observation (applied with $s=-1$). Consequently, $\delta(\xi_{s,s}u^{p^s})=\delta(\xi_{s,s})u^{p^{s+1}}$ is divisible by $\xi_{s,s}u^{p^s}$. This shows that $I_s$ (hence also $I_sC_{\inf}$) is a $\delta$--ideal. Finally, let us prove (4). Note that $E(u)\equiv u^e \pmod{p\Es}$, hence $\sqrt{(p, E(u))}=\sqrt{(p, u^e)}=\sqrt{(p, u)}$ even as ideals of $\Es$; consequently, the derived $(p, E(u))$--completion agrees with the derived $(p, u)$--completion both for $\Es$-- and $\ainf$--modules. We may therefore replace $(p, E(u))$--completions with $(p, u)$--completions throughout.
Since $C_{\inf}$ is derived $(p, u)$--complete, any power series of the form $$f=\sum_{i,j}c_{i, j}p^iu^j$$ with $c_{i, j}\in C_{\inf}$ defines a unique\footnote{Here we are using the preferred representatives of power series as mentioned at the beginning of \S\ref{subsec:regularity}.} element $f \in C_{\inf}$, and $f$ comes from $\widehat{C_{\BK}}$ if and only if the coefficients $c_{i, j}$ may be chosen in the image of the map $C_{\BK}\rightarrow C_{\inf}$. Assuming this, for $g \in G_s$ we have $$g(f)-f=\sum_{i, j}g(c_{i, j})p^i(\gamma u)^j-\sum_{i, j}c_{i, j}p^iu^j=$$ $$=\sum_{i, j}\left(g(c_{i, j})\gamma^j-g(c_{i, j})+g(c_{i, j})-c_{i, j}\right)p^iu^j,$$ where $\gamma$ is the $\ainf$--unit such that $g(u)=\gamma u$. Thus, it is clearly enough to show, assuming the condition \textnormal{(C)} for $(C_{\BK}, C_{\inf})$, that the terms $\left(g(c_{i, j})\gamma^{j}-g(c_{i, j})\right)p^iu^j$ and $\left(g(c_{i, j})-c_{i, j}\right)p^iu^j$ are in $J C_{\inf}$ when $g \in G_{{s}}$. (Note that here we rely on the fact that an element $d=\sum_{i, j} d_{i,j}p^i u^j$ with $d_{i, j}\in JC_{\inf}$ is itself in $JC_{\inf}$, a fact that holds thanks to $J$ being finitely generated.) We have $g(c_{i, j})-c_{i, j} \in J C_{\inf}$ by assumption, so it remains to treat the term $g(c_{i, j})(\gamma^j-1)$. Note that $(\gamma^{j}-1)$ is divisible by $\gamma-1$, which is divisible by $\varphi^s(v)$ by Lemma~\ref{coeff}, and so the terms $g(c_{i, j})(\gamma^j-1)p^i u^j$ are divisible by $\varphi^s(v)u$ when $j\geq 1$; thus, they belong to $JC_{\inf}$ in both considered cases. When $j=0,$ these terms become $0$ and there is nothing to prove. To prove the second assertion of (4), now let $C_{\BK} \subseteq C_{\inf}$ be the $\Es$--subalgebra of all \textnormal{(C)}--elements. By the above, the map $\widehat{C_{\BK}}\rightarrow C_{\inf}$ satisfies \textnormal{(C)}, and hence the image $C_{\BK}^+$ of this map consists of \textnormal{(C)}--elements.
Thus, we have $C_{\BK} \subseteq C_{\BK}^+\subseteq C_{\BK}$, i.e. $C_{\BK}^+=C_{\BK}$, and hence $C_{\BK}$ is derived $(p, E(u))$--complete since so is $C_{\BK}^+$. \end{proof} \begin{rem} One consequence of Lemma~\ref{generators} is that the $\Es$--subalgebra $\mathfrak{C}$ of $\ainf$ formed by all crystalline elements (or even \Crr{0}--elements) forms a prism, with the distinguished invertible ideal $I=E(u)\mathfrak{C}$. As Lemma~\ref{coeff} works for any choice of Breuil--Kisin prism associated to $K/K_0$ in $\ainf$, $\mathfrak{C}$ contains all of these (in particular, it contains all $G_K$--translates of $\Es$). \end{rem} For future use in applications to $p^n$--torsion modules, we consider the following simplification of the ideals $I_s$ appearing in the conditions \Crs. \begin{lem}\label{lem:IsModPn} Consider a pair of integers $n, s$ with $s\geq 0, n \geq 1$. Set $t=\mathrm{max}\left\{0, s+1-n\right\}$. Then the image of the ideal $I_s$ in the ring $W_n(\oh_{\mathbb{C}_K^\flat})=\ainf/p^n$ is contained in the ideal $\varphi^{-1}(v)u^{p^{t}}W_n(\oh_{\mathbb{C}_K^\flat})$. That is, we have $I_s+p^n\ainf \subseteq \varphi^{-1}(v)u^{p^t}\ainf+p^n\ainf.$ \end{lem} \begin{proof} When $t=0$ there is nothing to prove; therefore we may assume that $t=s+1-n>0$. In the definition of $I_s$, we may replace the elements $$\xi_{s, i}=\varphi^{-1}(v)\varphi^{i+1}(\omega)\varphi^{i+2}(\omega)\dots \varphi^{s}(\omega)$$ by the elements $$\xi'_{s, i}=\varphi^{-1}(v)\varphi^{i+1}(E(u))\varphi^{i+2}(E(u))\dots \varphi^{s}(E(u)),$$ since the quotients $\xi_{s, i}/\xi'_{s, i}$ are $\ainf$--units. It is thus enough to show that for every $i$ with $0 \leq i \leq s,$ the element $$\vartheta_{s, i}=\frac{\xi'_{s, i}u^{p^i}}{\varphi^{-1}(v)}=\varphi^{i+1}(E(u))\varphi^{i+2}(E(u))\dots \varphi^{s}(E(u))u^{p^i}$$ taken modulo $p^n$ is divisible by $u^{p^{s+1-n}}$.
This is clear when $i\geq s+1-n$, and so it remains to discuss the cases when $i \leq s-n.$ Write $\varphi^{j}(E(u))=(u^e)^{p^j}+px_j$ (with $x_j \in \Es$). Then it is enough to show that \begin{equation}\tag{$*$}\label{eqn:ProductPhiEu}\frac{\vartheta_{s, i}}{u^{p^i}}=((u^e)^{p^{i+1}}+px_{i+1})((u^e)^{p^{i+2}}+px_{i+2})\dots((u^e)^{p^s}+px_s) \end{equation} taken modulo $p^n$ is divisible by $$u^{p^{s+1-n}-p^{i}}=u^{p^i(p-1)(1+p+\dots +p^{s-n-i})}.$$ Since we are interested in the product (\ref{eqn:ProductPhiEu}) only modulo $p^n$, in expanding the brackets we may ignore the terms that use the expressions of the form $px_j$ at least $n$ times. Each of the remaining terms contains the product of at least $s-i-n+1$ distinct terms from the following list: $$(u^e)^{p^{i+1}}, (u^e)^{p^{i+2}}, \dots, (u^e)^{p^{s}}.$$ Thus, each of the remaining terms is divisible by (at least) $$(u^e)^{p^{i+1}+p^{i+2}+\dots+p^{s-n+1}}=(u^e)^{p^{i+1}(1+p+\dots +p^{s-n-i})},$$ which is more than needed. This finishes the proof. \end{proof} \subsection{Crystalline condition for Breuil--Kisin--Fargues $G_K$--modules} The situation of central interest regarding the crystalline condition is the inclusion $M_{\BK} \rightarrow M_{\inf}^{G_{\infty}}$ such that $\ainf\otimes_{\Es}M_{\BK} \rightarrow M_{\inf}$ is an isomorphism, where $M_{\BK}$ is a Breuil--Kisin module and $M_{\inf}$ is a Breuil--Kisin--Fargues $G_K$--module. The version of these notions used in this paper is tailored to the context of prismatic cohomology.
Namely, we have: \begin{deff} \begin{enumerate}[(1)] \item{A \emph{Breuil--Kisin module} is a finitely generated $\Es$--module $M$ together with an $\Es[1/E(u)]$--linear isomorphism $$\varphi=\varphi_{M[1/E]}:(\varphi_{\Es}^*M)[1/E(u)]\stackrel{\sim}{\rightarrow} M[1/E(u)].$$ For a positive integer $i$, the Breuil--Kisin module $M$ is said to be \textit{of height $\leq i$} if $\varphi_{M[1/E]}$ is induced (by linearization and localization) by a $\varphi$--$\Es$--semilinear map $\varphi_M: M \rightarrow M$ such that, denoting by $\varphi_{\lin}: \varphi^*M \rightarrow M$ its linearization, there exists an $\Es$--linear map $\psi: M\rightarrow \varphi^*M$ such that both the compositions $\psi \circ \varphi_{\lin}$ and $ \varphi_{\lin}\circ \psi $ are multiplication by $E(u)^i$. A Breuil--Kisin module is \emph{of finite height} if it is of height $\leq i$ for some $i$.} \item{A \emph{Breuil--Kisin--Fargues module} is a finitely presented $\ainf$--module $M$ such that $M[1/p]$ is a free $\ainf[1/p]$--module, together with an $\ainf[1/E(u)]$--linear isomorphism $$\varphi=\varphi_{M[1/E]}:(\varphi_{\ainf}^*M)[1/E(u)]\stackrel{\sim}{\rightarrow} M[1/E(u)].$$ Similarly, the Breuil--Kisin--Fargues module is called \textit{of height $\leq i$} if $\varphi_{M[1/E]}$ comes from a semilinear map $\varphi_M: M \rightarrow M$ such that there exists an $\ainf$--linear map $\psi: M \rightarrow \varphi^*M$ such that $\psi \circ \varphi_{\lin}$ and $ \varphi_{\lin}\circ \psi $ are multiplication maps by $E(u)^i$, where $\varphi_{\lin}$ is the linearization of $\varphi_M$. A Breuil--Kisin--Fargues module is \emph{of finite height} if it is of height $\leq i$ for some $i$.} \item{A \emph{Breuil--Kisin--Fargues $G_K$--module} (of height $\leq i$, of finite height, resp.) is a Breuil--Kisin--Fargues module (of height $\leq i$, of finite height, resp.)
that is additionally endowed with an $\ainf$--semilinear $G_K$--action that makes $\varphi_{M[1/E]}$ $G_K$--equivariant (that makes also $\varphi_M$ $G_K$--equivariant in the finite height cases).} \end{enumerate} \end{deff} That is, the definition of a Breuil--Kisin module agrees with the one in \cite{BMS1}, and $M_{\inf}$ is a Breuil--Kisin--Fargues module in the sense of the above definition if and only if $\varphi_{\ainf}^*M_{\inf}$ is a Breuil--Kisin--Fargues module in the sense of \cite{BMS1}\footnote{This is to account for the fact that while Breuil--Kisin--Fargues modules in the sense of \cite{BMS1} appear as $\ainf$--cohomology groups of smooth proper formal schemes, Breuil--Kisin--Fargues modules in the above sense appear as \textit{prismatic} $\ainf$--cohomology groups of smooth proper formal schemes.}. The notion of Breuil--Kisin module of height $\leq i$ agrees with what is called ``(generalized) Kisin modules of height $i$'' in \cite{LiLiu}. The above notion of finite height Breuil--Kisin--Fargues modules agrees with the one from \cite[Appendix~F]{EmertonGee2} except that the modules are not assumed to be free. Also note that under these definitions, for a Breuil--Kisin module $M_{\BK}$ (of height $\leq i,$ resp.), the $\ainf$--module $M_{\inf}=\ainf\otimes_{\Es}M_{\BK}$ is a Breuil--Kisin--Fargues module (of height $\leq i,$ resp.), without the need to twist the embedding $\Es \rightarrow \ainf$ by $\varphi$. The connection between Breuil--Kisin--, Breuil--Kisin--Fargues $G_K$--modules and the crystalline condition (that also justifies the name of the condition) is the following theorem. 
\begin{thm}[{\cite[Appendix F]{EmertonGee2}}, \cite{GaoBKGK}]\label{BKBKFCrystallineThm} Let $M_{\inf}$ be a free Breuil--Kisin--Fargues $G_K$--module which admits as an $\Es$--submodule a free Breuil--Kisin module $M_{\BK}\subseteq M_{\inf}^{G_{\infty}}$ of finite height, such that $\ainf\otimes_{\Es}M_{\BK} \stackrel{\sim}\rightarrow M_{\inf}$ (as Breuil--Kisin--Fargues modules) via the natural map, and such that the pair $(M_{\BK}, M_{\inf})$ satisfies the crystalline condition. Then the \'{e}tale realization of $M_{\inf}$, $$V(M_{\inf})=\left(W(\mathbb{C}_K^\flat)\otimes_{\ainf} M_{\inf}\right)^{\varphi=1}\left[\frac{1}{p}\right],$$ is a crystalline representation. \end{thm} \begin{rems}\label{rem:CrystConditionProof}\begin{enumerate}[(1)] \item{Theorem~\ref{BKBKFCrystallineThm} is actually an equivalence: If $V(M)$ is crystalline, it can be shown that the pair $(M_{\BK}, M_{\inf})$ satisfies the crystalline condition. We state the theorem in one direction only, since this is the direction that we use. However, the converse direction motivates why it is reasonable to expect the crystalline condition for the prismatic cohomology groups discussed in Section~\ref{sec:CrsCohomology}.} \item{Strictly speaking, in \cite[Appendix~F]{EmertonGee2} one assumes extra conditions on $M_{\inf}$ (``satisfying all descents''); however, these extra assumptions are used only for a semistable version of the statement. Theorem~\ref{BKBKFCrystallineThm} in its equivalence form is therefore only implicit in the proof of \cite[Theorem~F.11]{EmertonGee2}.
(See also \cite[Theorem~3.8]{Ozeki} for a closely related result.)} \item{On the other hand, Theorem~\ref{BKBKFCrystallineThm} in the one--sided form as above is a consequence of \cite[Proposition~7.11]{GaoBKGK} that essentially states that $V(M)$ is crystalline if and only if the much weaker\footnote{``Weaker'' for the purposes of controlling the $G_K$--action on the submodule $M_{\BK}$ inside $M_{\inf}$.} condition $$\forall g \in G_K:\;\; (g-1)M_{\BK} \subseteq \varphi^{-1}(v)W(\mathfrak{m}_{\oh_{\mathbb{C}_K^\flat}})M_{\inf}$$ is satisfied. We note a related result of \textit{loc. cit.}: $V(M)$ is semistable if and only if $$\forall g \in G_K:\;\; (g-1)M_{\BK} \subseteq W(\mathfrak{m}_{\oh_{\mathbb{C}_K^\flat}})M_{\inf}.$$ This is interesting for at least two reasons: Firstly, the proof of \cite[Theorem~F.11]{EmertonGee2} is based on arguments of \cite{Ozeki} that make heavy use of the fact that for any $r \geq 0$, the sequence $u^{p^n}/p^{nr}$ converges $p$--adically to $0$ in $A_{\cris}=\widehat{\ainf[(E(u)^n/n!)_{n}]}$. In particular, in this approach $u$ is crucial and $\varphi^{-1}(v)$ is essentially irrelevant, which is the complete opposite of the situation in \cite{GaoBKGK}. Secondly, the semistable criterion above might be a good starting point in generalizing the results of Sections~\ref{sec:CrsCohomology} and \ref{sec:bounds} of the present paper to the case of semistable reduction, using the log--prismatic cohomology developed in \cite{Koshikawa}. Thus, a natural question to ask is: Similarly to how the crystalline condition is a stronger version of the crystallinity criterion from \cite{GaoBKGK}, what is an analogous stronger (while still generally valid) version of the semistability criterion from \cite{GaoBKGK}?} \end{enumerate} \end{rems} It will be convenient later to have a version of Theorem~\ref{BKBKFCrystallineThm} that applies to not necessarily free Breuil--Kisin and Breuil--Kisin--Fargues modules.
Recall that, by \cite[Proposition 4.3]{BMS1}, any Breuil--Kisin module $M_{\BK}$ is related to a free Breuil--Kisin module $M_{\BK,\mathrm{free}}$ by a functorial exact sequence \begin{center} \begin{tikzcd} 0 \ar[r] & M_{\BK,\mathrm{tor}} \ar[r] & M_{\BK} \ar[r] & M_{\BK, \mathrm{free}} \ar[r] & \overline{M_{\BK}} \ar[r] & 0 \end{tikzcd} \end{center} where $M_{\BK,\mathrm{tor}}$ is a $p^n$--torsion module for some $n$ and $\overline{M_{\BK}}$ is supported at the maximal ideal $(p, u)$. Taking the base--change to $\ainf,$ one obtains an analogous exact sequence \begin{center} \begin{tikzcd} 0 \ar[r] & M_{\inf,\mathrm{tor}} \ar[r] & M_{\inf} \ar[r] & M_{\inf, \mathrm{free}} \ar[r] &\overline{M_{\inf}}\ar[r] & 0 \end{tikzcd} \end{center} (also described by \cite[Proposition 4.13]{BMS1}) where $M_{\inf, \mathrm{free}}$ is a free Breuil--Kisin--Fargues module. Clearly the maps $M_{\BK}\rightarrow M_{\BK, \mathrm{free}}$ and $M_{\inf}\rightarrow M_{\inf, \mathrm{free}}$ become isomorphisms after inverting $p$. Assume that $M_{\inf}$ is endowed with a $G_K$--action that makes it a Breuil--Kisin--Fargues $G_K$--module. The functoriality of the latter exact sequence implies that the $G_K$--action on $M_{\inf}$ induces a $G_K$--action on $M_{\inf, \mathrm{free}}$, endowing it with the structure of a free Breuil--Kisin--Fargues $G_K$--module. In more detail, given $\sigma \in G_K$, the semilinear action map $\sigma: M_{\inf}\rightarrow M_{\inf}$ induces an $\ainf$--linear map $\sigma_{\mathrm{lin}}:\sigma^*M_{\inf} \rightarrow M_{\inf}$ where $\sigma^*M=\ainf \otimes_{\sigma, \ainf} M$.
As $\sigma$ is an isomorphism that fixes $p$, fixes $E(u)$ up to a unit, and preserves the ideal $(p, u)\ainf,$ it is easy to see that $\sigma^*M_{\inf}$ is itself a Breuil--Kisin--Fargues module, and the exact sequence from \cite[Proposition 4.13]{BMS1} for $\sigma^*M_{\inf}$ can be identified with the upper row of the diagram \begin{center} \begin{tikzcd} 0 \ar[r] & \sigma^*M_{\inf,\mathrm{tor}} \ar[r] \ar[d, "\sigma_{\mathrm{lin}}"] & \sigma^*M_{\inf} \ar[d, "\sigma_{\mathrm{lin}}"] \ar[r] & \sigma^*M_{\inf, \mathrm{free}} \ar[d, "\sigma_{\mathrm{lin}}"] \ar[r] & \sigma^*\overline{M_{\inf}} \ar[d, "\sigma_{\mathrm{lin}}"] \ar[r] & 0 \\ 0 \ar[r] & M_{\inf,\mathrm{tor}} \ar[r] & M_{\inf} \ar[r] & M_{\inf, \mathrm{free}} \ar[r] &\overline{M_{\inf}}\ar[r] & 0, \end{tikzcd} \end{center} where the second vertical map is the linearization of $\sigma$ and the rest is induced by functoriality of the sequence. Finally, untwisting $\sigma^*M_{\inf, \mathrm{free}},$ the third vertical map $\sigma_{\mathrm{lin}}$ induces a semilinear map $\sigma: M_{\inf, \mathrm{free}} \rightarrow M_{\inf, \mathrm{free}}$. Note that the module $M_{\inf}[1/p]\simeq M_{\inf, \mathrm{free}}[1/p]$ inherits the $G_K$--action from $M_{\inf}$; it is easy to see that the $G_K$--action on $M_{\inf, \mathrm{free}}$ agrees with the one on $M_{\inf}[1/p]$ when viewing $M_{\inf, \mathrm{free}}$ as its submodule. \begin{prop}\label{CrystallineFree} Assume that the pair $M_{\BK} \hookrightarrow M_{\inf}$ satisfies the crystalline condition. Then so does the pair $M_{\BK,\mathrm{free}} \hookrightarrow M_{\inf, \mathrm{free}}.$ \end{prop} \begin{proof} Notice that the crystalline condition is satisfied for $M_{\BK}[1/p]\rightarrow M_{\inf}[1/p]$ and by \cite[Propositions 4.3, 4.13]{BMS1}, this map can be identified with $M_{\BK,\mathrm{free}}[1/p] \hookrightarrow M_{\inf, \mathrm{free}}[1/p]$. Thus, the following lemma finishes the proof.
\end{proof} \begin{lem} Let $F_{\inf}$ be a free $\ainf$--module endowed with an $\ainf$--semilinear $G_K$--action and $F_{\BK} \subseteq F_{\inf}$ a free $\Es$--submodule such that $F_{\BK}[1/p]\hookrightarrow F_{\inf}[1/p]$ satisfies the crystalline condition. Then the pair $F_{\BK}\hookrightarrow F_{\inf}$ satisfies the crystalline condition. \end{lem} \begin{proof} Fix an element $a\in F_{\BK}$ and $g \in G_K$. The crystalline condition holds after inverting $p$, and so $$b:=(g-1)a=\varphi^{-1}(v)u\frac{c}{p^k}$$ with $c \in F_{\inf}$. In other words (using that $p^k$ is a non--zero divisor on $F_{\inf}$), we have $$p^kb=\varphi^{-1}(v)uc\in p^k F_{\inf}\cap \varphi^{-1}(v)uF_{\inf}=p^k\varphi^{-1}(v)uF_{\inf},$$ where the last equality follows by Lemma~\ref{disjointness} since $F_{\inf}$ is a free module. In particular, $$p^kb=p^k\varphi^{-1}(v)ud$$ for yet another element $d \in F_{\inf}$. As $p^k$ is a non--zero divisor on $\ainf$, hence on $F_{\inf},$ we may cancel out to conclude $$(g-1)a=b=\varphi^{-1}(v)ud\in \varphi^{-1}(v)u F_{\inf},$$ as desired. \end{proof} Combining Theorem~\ref{BKBKFCrystallineThm} and Proposition~\ref{CrystallineFree}, we arrive at the following theorem. \begin{thm}\label{BKBKFCrystallineGeneralThm} The ``free'' assumption in Theorem~\ref{BKBKFCrystallineThm} is superfluous. That is, given a Breuil--Kisin--Fargues $G_K$--module $M_{\inf}$ together with its Breuil--Kisin--$\Es$--submodule $M_{\BK} \subseteq M_{\inf}^{G_{\infty}}$ of finite height such that $\ainf\otimes_{\Es}M_{\BK} \stackrel{\sim}\rightarrow M_{\inf}$ and such that the pair $(M_{\BK}, M_{\inf})$ satisfies the crystalline condition, the representation $$V(M_{\inf})=\left(W(\mathbb{C}_K^\flat)\otimes_{\ainf} M_{\inf}\right)^{\varphi=1}\left[\frac{1}{p}\right],$$ is crystalline. \end{thm} \begin{proof} With the notation as above, upon realizing that $V(M_{\inf})$ and $V(M_{\inf, \mathrm{free}})$ agree, the result is a direct consequence of Proposition~\ref{CrystallineFree}.
\end{proof} \section{Conditions \Crs for cohomology} \label{sec:CrsCohomology} \subsection{\Crs for \v{C}ech--Alexander complexes} Let $\mathscr{X}$ be a smooth separated $p$--adic formal scheme over $\oh_K$. Denote by $\check{C}_{\BK}^{\bullet}$ a \v{C}ech--Alexander complex that models $\R\Gamma_{\Prism}(\mathscr{X}/\Es)$ and set $\check{C}_{\inf}^{\bullet}=\check{C}_{\BK}^{\bullet}\widehat{\otimes}_{\Es}\ainf$ (computed termwise). The next goal is to prove the following theorem. \begin{thm}\label{thm:CrsForCechComplex} For every $m, s \geq 0$, the pair $\check{C}_{\BK}^m\rightarrow \check{C}_{\inf}^m$ satisfies the condition \Crs, and additionally, we have $\check{C}_{\BK}^m \subseteq \left(\check{C}_{\inf}^m\right)^{G_{\infty}}$. \end{thm} Let $\spf(R)=\mathscr{V} \subseteq \mathscr{X}$ be an affine open formal subscheme. Then it is enough to prove the content of Theorem~\ref{thm:CrsForCechComplex} for $\check{C}_{\BK}\rightarrow \check{C}_{\inf}$ where $\check{C}_{\BK}$ and $\check{C}_{\inf}=\check{C}_{\BK}\widehat{\otimes}_{\Es} \ainf$ are the \v{C}ech--Alexander covers of $\mathscr{V}$ and $\mathscr{V}'=\mathscr{V}\times_{\Es}\ainf$ with respect to the base prisms $\Es$ and $\ainf$, respectively, since the \v{C}ech--Alexander complexes termwise consist of products of such covers. Let $R'=R \widehat{\otimes}_{\oh_K}\oh_{\mathbb{C}_K}(=R \widehat{\otimes}_{\Es}\ainf)$. Let us fix a choice of the smooth $\oh_K$--algebra $R_0$ with $\widehat{R_0}=R$, a free $\Es$--algebra $P_0=\Es[X_1, \dots, X_m]$ and the surjection $P_0\rightarrow R_0$ as in Construction~\ref{constACcover}. Then the analogous choices $R_0'$ and $P_0'\rightarrow R_0'$ over $\ainf$ are obtained by the base change of the data above to $\ainf$.
It then follows that the associated ``$\delta$--envelopes'' are also related by the base change, \begin{equation}\label{CechDiagramGalois} \begin{tikzcd} 0 \ar[r] & J_0P_0^{\delta} \ar[r] \ar[d] & P_0^{\delta} \ar[r] \ar[d] & R_0\otimes_{P_0}P_0^{\delta} \ar[r] \ar[d] & 0 \ar[d, phantom, "-\otimes_{\Es}\ainf"]\\ 0 \ar[r] & J_0(P_0')^{\delta} \ar[r] & (P_0')^{\delta} \ar[r, "\alpha"] & R_0'\otimes_{P_0'}(P_0')^{\delta} \ar[r] & 0 \end{tikzcd} \end{equation} (and similarly for the $(p, E(u))$--completions as in Construction~\ref{constACcover}, only with $-\otimes_{\Es}\ainf$ replaced by the $(p, E(u))$--completed base change $-\widehat{\otimes}_{\Es}\ainf$). There is a natural $G_K$--action on $R_0'=R_0\otimes_{\Es} \ainf$ by acting on the second factor and, similarly, one has on $P_0'$ the action given by $g(X_i)=X_i$ and the action on $\ainf$--coefficients. This extends to an action by $\delta$--maps on $(P_0')^{\delta}$ by functoriality, and again this action takes simply the form $g(\delta^{i}(X_j))=\delta^{i}(X_j)$ for all $i$ and $j$. Finally, there is again an obvious action on $(R_0\otimes_{P_0}P_0^\delta)\otimes_{\Es}\ainf=R_0'\otimes_{P_0'}(P_0')^{\delta}$, compatible with the actions on $R_0'$ and $(P_0')^{\delta}$. In particular, the map $\alpha$ is $G_K$--equivariant, and the ideal $J_0(P_0')^{\delta}$ is $G_K$--stable. All the above actions continuously extend to the $(p, E(u))$--completions and, since $J_0\widehat{(P_0')^\delta}\subseteq \widehat{(P_0')^\delta}$ is still $G_K$--stable, to the prismatic envelope $(\check{C}_{\inf}, I\check{C}_{\inf})$, where the action obtained this way agrees with the one indicated in Remark~\ref{rem:CechBaseChange}.
Upon taking the prismatic envelope $(\check{C}_{\BK}, I\check{C}_{\BK})$ of the $\delta$--pair $(\widehat{P_0^{\delta}}, J_0\widehat{P_0^{\delta}})$, we arrive at the situation $\check{C}_{\BK}\hookrightarrow \check{C}_{\inf}=\check{C}_{\BK}\widehat{\otimes}_{\Es}\ainf$ for which we wish to verify the conditions \Crs. With the goal of understanding the $G_K$--action on $\check{C}_{\inf}$ even more explicitly, in a similar spirit to the proof of \cite[Proposition~3.13]{BhattScholze} we employ the following approximation of the prismatic envelope. \begin{deff} Let $B$ be a $\delta$--ring, $J \subseteq B$ an ideal with a fixed generating set $\underline{x}=\{x_i\}_{i \in \Lambda},$ and let $b \in J$ be an element. Denote by $K_0$ the kernel of the $B$--algebra map \begin{align*} B[\underline{T}]=B[\{T_i\}_{i \in \Lambda}] &\longrightarrow B \left[\frac{1}{b}\right]\\ T_i &\longmapsto \frac{x_i}{b}, \end{align*} and let $K$ be the $\delta$--ideal in $B\{\underline{T}\}$ generated by $K_0$. Then we denote by $B\{\frac{\underline{x}}{b}\}$ the $\delta$--ring $B\{\underline{T}\}/K$, and call it the \emph{weak $\delta$--blowup algebra of $\underline{x}$ and $b$}. \end{deff} That is, the above construction adjoins (in the $\delta$--sense) the fractions $x_i/b$ to $B$ together with all relations among them that exist in $B[1/b]$, making it possible to naturally compute with the fractions, as opposed to possibly simpler constructions such as $B\{\underline{T}\}/(T_ib-x_i)_{\delta}$. Note that if $C$ is a $B$--$\delta$--algebra such that $JC=bC$ and this ideal is invertible, the fact that the localization map $C\rightarrow C[\frac{1}{b}]$ is injective shows that there is a unique map of $B$--$\delta$--algebras $B\{\frac{\underline{x}}{b}\} \rightarrow C$. In fact, if $b$ happens to be a non--zero divisor on $B\{\frac{\underline{x}}{b}\}$, then $B\{\frac{\underline{x}}{b}\}$ is initial among all such $B$--$\delta$--algebras.
This justifies the name ``weak $\delta$--blowup algebra''. The purpose of the construction is the following. \begin{prop}\label{ApproxEnvelopes} Let $(A, I)$ be a bounded orientable prism with an orientation $d \in I$. Let $(A, I) \rightarrow (B, J)$ be a map of $\delta$--pairs and assume that $(C, IC)$ is a prismatic envelope for $(B, J)$ that is classically $(p, I)$--complete. Let $\underline{x}=\{x_i\}_{i \in \Lambda}$ be a system of generators of $J$. Then there is a surjective map of $\delta$--rings $\widehat{B\{\frac{\underline{x}}{d}\}}^{\mathrm{cl}} \rightarrow C$, where $\widehat{(-)}^{\mathrm{cl}}$ denotes the classical $(p, I)$--completion. \end{prop} Note that the assumptions apply to a \v{C}ech--Alexander cover in place of $(C, IC)$ since it is $(p, I)$--completely flat over the base prism, hence classically $(p, I)$--complete by \cite[Proposition~3.7]{BhattScholze}. \begin{proof} Since $JC=dC$ and $d$ is a non--zero divisor on $C$, there is an induced map $B\{\frac{\underline{x}}{d}\}\rightarrow C$ and hence a map of $\delta$--rings $\widehat{B\{\frac{\underline{x}}{d}\}}^{\mathrm{cl}} \rightarrow C$ (using \cite[Lemma~2.17]{BhattScholze}). To see that this map is surjective, let $C'$ denote its image in $C$, and denote by $\iota$ the inclusion of $C'$ into $C$. Then $C'$ is a (derived, and consequently classically) $(p, I)$--complete $A$--$\delta$--algebra with $C'[d]=0$. It follows that $(C', IC')=(C', (d))$ is a prism by \cite[Lemma~3.5]{BhattScholze} and thus, by the universal property of $C$, there is a map of $B$--$\delta$--algebras $r: C \rightarrow C'$ which is easily seen to be a right inverse to $\iota$. Hence, $\iota$ is surjective, proving the claim. \end{proof} Finally, we are ready to prove the following proposition which, as noted above, proves Theorem~\ref{thm:CrsForCechComplex}.
\begin{prop}\label{thm:CrsForACCoover} The pair $\check{C}_{\BK} \rightarrow \check{C}_{\inf}$ satisfies the conditions \Crs for every $s \geq 0$, and additionally, we have $\check{C}_{\BK}\subseteq \check{C}_{\inf}^{G_{\infty}}.$ \end{prop} \begin{proof} Fix a generating set $y_1, y_2, \dots, y_n$ of $J_0$. We obtain a commutative diagram \begin{equation}\label{postcompletion} \begin{tikzcd} \widehat{P_0^{\delta}\{\frac{\underline{y}}{E(u)}\}} \ar[r] \ar[d] & \widehat{(P_0')^{\delta}\{\frac{\underline{y}}{E(u)}\}} \ar[d] \\ \check{C}_{\BK} \ar[r] & \check{C}_{\inf}, \end{tikzcd} \end{equation} where the vertical maps are the surjective maps from Proposition~\ref{ApproxEnvelopes}, and the horizontal maps come from the (classically) $(p, E(u))$--completed base change $-\widehat{\otimes}_{\Es}\ainf$. The $G_K$--action on $(P_0')^{\delta}$ naturally extends to $(P_0')^{\delta}\{\frac{\underline{y}}{E(u)}\}$ by the rule on generators $$g\left(\frac{y_j}{E(u)}\right)=\frac{g(y_j)}{g(E(u))}=\gamma^{-1}\frac{g(y_j)}{E(u)}$$ where $\gamma$ is the $\ainf$--unit such that $g(E(u))=\gamma E(u)$ (note that the fraction on the right--hand side makes sense as $g(y_j)\in J_0P'_0$). Subsequently, the action can be again extended continuously to the $(p, E(u))$--adic completion. It is easy to see that this makes the right vertical map $G_K$--equivariant. It is therefore enough to prove the content of the proposition for the pair $(\widehat{P_0^{\delta}\{\frac{\underline{y}}{E(u)}\}}, \widehat{(P_0')^{\delta}\{\frac{\underline{y}}{E(u)}\}})$. The fact that the image of $P_0^{\delta}\{\frac{\underline{y}}{E(u)}\}$ lands in the $G_{\infty}$--fixed points of $\widehat{(P_0')^{\delta}\{\frac{\underline{y}}{E(u)}\}}$ is clear by the above description of the $G_K$--action, ultimately because $\Es \subseteq \ainf^{G_{\infty}}$. 
Thus, it remains to check the conditions \Crs, and by Lemma~\ref{generators} (3),(4), it is enough to check the conditions for the generators of $P_0^{\delta}\{\frac{\underline{y}}{E(u)}\}$ as an $\Es$--$\delta$--algebra, which are $X_1, X_2, \dots, X_m$ and $y_1/E(u), y_2/E(u), \dots, y_n/E(u).$ Fix an integer $s \geq 0$. Firstly, observe that the elements $X_1, X_2, \dots, X_m$ satisfy $g(X_i)-X_i=0$ for every $g \in G_s$; consequently, by Lemma~\ref{generators} (2) the pair $P_0 \rightarrow P_0'$ satisfies the stronger condition \Crrs. In particular, \Crrs holds for these generators, and since $y_1, y_2, \dots, y_n$ all come from $P_0$, it follows that these are {\Crrs}--elements as well. Thus, upon fixing an index $j$ and an element $g \in G_s$, we may write $g(y_j)-y_j=\varphi^{s}(v)u z_j$ for some $z_j \in P_0'$. Similarly, we have $g^{-1}(E(u))-E(u)=(\gamma^{-1}-1)E(u)=\varphi^{s}(v)u a$ with $a \in \ainf$ and an $\ainf$--unit $\gamma$ satisfying $g(E(u))=\gamma E(u)$. We may thus write $$g\left(\frac{y_j}{E(u)}\right)-\frac{y_j}{E(u)}=\frac{\gamma^{-1} g(y_j)-y_j}{E(u)}=\frac{\gamma^{-1} g(y_j)-\gamma^{-1}y_j+\gamma^{-1}y_j-y_j}{E(u)}=$$ $$=\gamma^{-1} \frac{g(y_j)-y_j}{E(u)}+(\gamma^{-1}-1)\frac{y_j}{E(u)}.$$ Now $g(y_j)-y_j=\varphi^{s}(v)uz_j$ and since $\omega$ and $E(u)$ are equal up to an $\ainf$--unit, we may write $g(y_j)-y_j=\xi_{s, 0}uE(u)\tilde{z}_j$ (where $\tilde{z}_j$ equals $z_j$ up to a unit). Similarly, we have, in $\ainf$ and hence in any $\ainf$--$\delta$--$G_K$--algebra, $(\gamma^{-1}-1)=\xi_{s, 0}u \tilde{a}$ (where $\tilde{a}$ equals $a$ up to a unit). Thus, we obtain $$g\left(\frac{y_j}{E(u)}\right)-\frac{y_j}{E(u)}=\xi_{s, 0}u \gamma^{-1}\tilde{z}_j+\xi_{s, 0}u\tilde{a}\frac{y_j}{E(u)} \in I_s (P_0')^{\delta},$$ and we are done. \end{proof} \subsection{Consequences for cohomology groups} Let us now use Theorem~\ref{thm:CrsForCechComplex} to draw some conclusions for individual cohomology groups.
The first is the crystalline condition for the prismatic cohomology groups and its consequence for $p$--adic \'{e}tale cohomology. As before, let $\mathscr{X}$ be a separated smooth $p$--adic formal scheme over $\oh_K$. Denote by $\mathscr{X}_{\ainf}$ the base change $\mathscr{X}\times_{\oh_K}\oh_{\mathbb{C}_K}=\mathscr{X}\times_{\Es}\ainf$, and by $\mathscr{X}_{\overline{\eta}}$ the geometric generic adic fiber. \begin{cor}\label{cor:CrystallineForCohomologyGrps} For any $i \geq 0,$ the pair $H_{\Prism}^i(\mathscr{X}/\Es) \rightarrow H_{\Prism}^i(\mathscr{X}_{\ainf}/\ainf)$ satisfies the crystalline condition, and the image of $H^i_{\Prism}(\mathscr{X}/\Es)$ is contained in $H_{\Prism}^i(\mathscr{X}_{\ainf}/\ainf)^{G_{\infty}}$. \end{cor} \begin{proof} By the results of Section~\ref{sec:CAComplex}, we may and do model the cohomology theories by the \v{C}ech--Alexander complexes $$\check{C}_{\BK}^{\bullet} \rightarrow \check{C}_{\inf}^{\bullet}=\check{C}_{\BK}^{\bullet}\widehat{\otimes}_{\Es}\ainf,$$ and by Theorem~\ref{thm:CrsForCechComplex} the crystalline condition as well as the claim about $G_{\infty}$--fixed points holds termwise for this pair. The claim that $H^i_{\Prism}(\mathscr{X}/\Es) \subseteq H^i_{\Prism}(\mathscr{X}_{\ainf}/\ainf)^{G_{\infty}}$ thus follows immediately. Each of the terms $\check{C}_{\inf}^{i}$ is $(p, E(u))$--completely flat over $\ainf$, which means in particular that the terms $\check{C}_{\inf}^{i}$ are torsion--free by Corollary~\ref{FlatTorFree}. Denote the differentials on $\check{C}_{\BK}^{\bullet}, \check{C}_{\inf}^{\bullet}$ by $\partial$ and $\partial'$, resp. To prove the crystalline condition for cohomology groups, it is clearly enough to verify the condition at the level of cocycles. Given $x \in Z^i(\check{C}_{\BK}^{\bullet}),$ denote by $x'$ its image in $Z^i(\check{C}_{\inf}^{\bullet})$. For $g \in G_K$ we have $g(x')-x'=\varphi^{-1}(v)uy'$ for some $y' \in \check{C}_{\inf}^{i}$.
As $g(x')-x' \in Z^i(\check{C}_{\inf}^{\bullet}),$ we have $$\varphi^{-1}(v)u \partial'(y')=\partial'(\varphi^{-1}(v)u y')=0,$$ and the torsion--freeness of $\check{C}_{\inf}^{i+1}$ implies that $\partial'(y')=0$. Thus, $y' \in Z^i(\check{C}_{\inf}^{\bullet})$ as well, showing that $g(x')-x' \in \varphi^{-1}(v)uZ^i(\check{C}_{\inf}^{\bullet}),$ as desired. \end{proof} When $\mathscr{X}$ is proper over $\oh_K$, we use the previous results to reprove the result of \cite{BMS1} that the \'{e}tale cohomology groups $H_{\et}^{i}(\mathscr{X}_{\overline{\eta}}, \mathbb{Q}_p)$ are in this case crystalline representations. \begin{cor}\label{cor:EtaleCohomologyCrystalline} Assume that $\mathscr{X}$ is additionally proper over $\oh_K.$ Then for any $i \geq 0,$ the $p$--adic \'{e}tale cohomology $H_{\et}^i(\mathscr{X}_{\overline{\eta}}, \mathbb{Q}_p)$ is a crystalline representation. \end{cor} \begin{proof} It follows from \cite[Theorem~1.8]{BhattScholze} (and the faithful flatness of $\ainf$ over $\Es$) that $M_{\BK}=H_{\Prism}^i(\mathscr{X}/ \Es)$ and $M_{\inf}=H_{\Prism}^i(\mathscr{X}_{\ainf}/ \ainf)$ are Breuil--Kisin and Breuil--Kisin--Fargues modules, resp., such that $M_{\inf}=M_{\BK}\otimes_{\Es}\ainf$. Moreover, $M_{\inf}$ has the structure of a Breuil--Kisin--Fargues $G_K$--module with $$V(M_{\inf}):=\left(W(\mathbb{C}_K^{\flat})\otimes_{\ainf} M_{\inf}\right)^{\varphi=1}\left[\frac{1}{p}\right]\simeq H_{\et}^i(\mathscr{X}_{\overline{\eta}}, \mathbb{Q}_p)$$ as $G_K$--representations. By Corollary~\ref{cor:CrystallineForCohomologyGrps}, the pair $(M_{\BK}, M_{\inf})$ satisfies all the assumptions of Theorem~\ref{BKBKFCrystallineGeneralThm}. The claim thus follows. \end{proof} For the purposes of obtaining a bound on ramification of $p$--torsion \'{e}tale cohomology in the next section, let us recall the notion of torsion prismatic cohomology as defined in \cite{LiLiu}, and discuss the consequences of the conditions \Crs in this context.
\begin{deff} Given a bounded prism $(A, I)$ and a smooth $p$--adic formal scheme $\mathscr{X}$ over $A/I$, the \textit{$p^n$--torsion prismatic cohomology} of $\mathscr{X}$ is defined as $$\R\Gamma_{\Prism, n}(\mathscr{X}/ A)=\R\Gamma_\Prism (\mathscr{X}/ A)\stackrel{{\mathsf{L}}}{\otimes}_{\mathbb{Z}}\mathbb{Z}/p^n\mathbb{Z}.$$ We denote the cohomology groups of $\R\Gamma_{\Prism, n}(\mathscr{X}/ A)$ by $\H^i_{\Prism, n}(\mathscr{X}/A)$ (and refer to them as $p^n$--torsion prismatic cohomology groups). \end{deff} \begin{prop}\label{prop:CrystallineCohMopPN} Let $s, n$ be integers satisfying $s\geq 0$ and $n \geq 1$. Set $t=\mathrm{max}\left\{0, s+1-n\right\}.$ Then the map of torsion prismatic cohomology groups $\H^i_{\Prism, n}(\mathscr{X}/\Es)\rightarrow \H^i_{\Prism, n}(\mathscr{X}_{\ainf}/\ainf)$ satisfies the following condition: $$\forall g \in G_s: \;\;\; (g-1)\H^i_{\Prism, n}(\mathscr{X}/\Es) \subseteq \varphi^{-1}(v)u^{p^{t}}\H^i_{\Prism, n}(\mathscr{X}_{\ainf}/\ainf).$$ \end{prop} \begin{proof} The proof is a slightly refined variant of the proof of Corollary~\ref{cor:CrystallineForCohomologyGrps}. Consider again the associated \v{C}ech--Alexander complexes over $\Es$ and $\ainf$, $$\check{C}_{\BK}^{\bullet} \rightarrow \check{C}_{\inf}^{\bullet}=\check{C}_{\BK}^{\bullet}\widehat{\otimes}_{\Es}\ainf.$$ Both of these complexes are given by torsion--free, hence $\mathbb{Z}$--flat, modules by Corollary~\ref{FlatTorFree}. Consequently, $\R\Gamma_{\Prism, n}(\mathscr{X}/\Es)$ is modelled by $\check{C}_{\BK, n}^{\bullet}:=\check{C}_{\BK}^{\bullet}/p^n\check{C}_{\BK}^{\bullet}$, and similarly for $\R\Gamma_{\Prism, n}(\mathscr{X}_{\ainf}/\ainf)$ and $\check{C}_{\inf, n}^{\bullet}=\check{C}_{\inf}^{\bullet}/p^n\check{C}_{\inf}^{\bullet}$.
That is, the considered maps between cohomology groups are obtained as the maps on cohomologies for the base--change map of chain complexes $$\check{C}_{\BK, n}^{\bullet}\rightarrow \check{C}_{\inf, n}^{\bullet}=\check{C}_{\BK, n}^{\bullet}\widehat{\otimes}_{\Es}\ainf,$$ and as in the proof of Corollary~\ref{cor:CrystallineForCohomologyGrps}, it is enough to establish the desired condition for the respective groups of cocycles. Set $\alpha=\varphi^{-1}(v)u^{p^{t}}$. Note that by Lemma~\ref{lem:IsModPn}, the condition \Crs for the pair of complexes $\check{C}_{\BK, n}^{\bullet}\rightarrow \check{C}_{\inf, n}^{\bullet}$ implies the condition $$\forall g \in G_{s}:\;\;\; (g-1)\check{C}_{\BK, n}^{\bullet} \subseteq \alpha\check{C}_{\inf, n}^{\bullet}$$ (meant termwise as usual), and since the terms of the complex $\check{C}_{\inf}^{\bullet}$ are $(p, E(u))$--complete and $(p, E(u))$--completely flat, $\alpha$ is a non--zero divisor on the terms of $\check{C}_{\inf, n}^{\bullet}$ by Corollary~\ref{FlatTorFree}. So pick any element $x \in Z^{i}(\check{C}_{\BK, n}^{\bullet})$. The image $x'$ of $x$ in $\check{C}_{\inf, n}^i$ lies in $Z^i(\check{C}_{\inf, n}^\bullet)$ and for any chosen $g \in G_s $ we have $g(x')-x'=\alpha y'$ for some $y'\in \check{C}_{\inf, n}^i$. Now $g(x')-x'$ lies in $Z^i(\check{C}_{\inf, n}^{\bullet}),$ so $\alpha y'=g(x')-x' $ satisfies $$0=\partial'(\alpha y')=\alpha\partial'(y').$$ Since $\alpha$ is a non--zero divisor on $\check{C}_{\inf, n}^{i+1}$, it follows that $\partial'(y')=0$, that is, $y'$ lies in $Z^i(\check{C}_{\inf, n}^{\bullet}).$ We thus infer that $g(x')-x'=\alpha y'\in\alpha Z^i(\check{C}_{\inf, n}^{\bullet}),$ as desired.
\end{proof} \section{Ramification bounds for $p$--torsion \'{e}tale cohomology}\label{sec:bounds} \subsection{Ramification bounds} We are ready to discuss the implications for the question of ramification bounds for $p$--torsion \'{e}tale cohomology groups $H_{\et}^i(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p\mathbb{Z})$ when $\mathscr{X}$ is a smooth and proper $p$--adic formal scheme over $\oh_K$. We define an additive valuation $v^{\flat}$ on $\oh_{\mathbb{C}_K}^{\flat}$ by $v^\flat(x)=v(x^{\sharp})$ where $v$ is the valuation on $\oh_{\mathbb{C}_K}$ normalized so that $v(\pi)=1$, and $(-)^{\sharp}:\oh_{\mathbb{C}_K}^{\flat}\rightarrow \oh_{\mathbb{C}_K}$ is the usual multiplicative lift. This way, we have $v^{\flat}(\underline{\pi})=1$ and $v^\flat(\underline{\varepsilon}-1)=pe/(p-1)$, the latter since $v^\flat(\underline{\varepsilon}-1)=\lim_{n} p^{n}\,v(\zeta_{p^{n}}-1)=\lim_{n} p^{n}\cdot e/(p^{n-1}(p-1))$. For a real number $c\geq 0$, denote by $\mathfrak{a}^{>c}$ ($\mathfrak{a}^{\geq c},$ resp.) the ideal of $\oh_{\mathbb{C}_K}^{\flat}$ formed by all elements $x$ with $v^{\flat}(x)>c$ ($v^{\flat}(x)\geq c$, resp.). Similarly, we fix an additive valuation $v_K$ of $K$ normalized by $v_K(\pi)=1$. Then for an algebraic extension $L/K$ and a real number $c \geq 0$, we denote by $\mathfrak{a}_L^{> c}$ the ideal consisting of all elements $x \in \oh_L$ with $v_K(x)>c$ (and similarly for `$\geq$'). Broadly speaking, the strategy for obtaining the bounds goes back to Fontaine's paper \cite{Fontaine}. For finite extensions $M/F/K$ and a real number $m \geq 0$, let us recall (a version of\footnote{Fontaine's original condition uses the ideals $\mathfrak{a}_E^{\geq m}$ instead.
Up to changing some inequalities from `$<$' to `$\leq$' and vice versa, the two versions of the condition are essentially equivalent.}) Fontaine's property $(P_m^{M/F})$: $$\begin{array}{cc} (P_m^{M/F}) & \begin{array}{l}\text{For any algebraic extension }E/F,\text{ the existence of an }\oh_F\text{--algebra map}\\ \oh_M\rightarrow \oh_E/\mathfrak{a}_E^{>m}\text{ implies the existence of an }F\text{--injection of fields }M\hookrightarrow E.\end{array} \end{array}$$ We also recall the upper ramification numbering in the convention used in \cite{CarusoLiu}. For $G=\mathrm{Gal}(M/F)$ and a non--negative real number $\lambda,$ set $$G_{(\lambda)}=\{ g \in G \;|\; v_M(g(x)-x)\geq \lambda \;\;\forall x \in \oh_M\},$$ where $v_M$ is again the additive valuation of $M$ normalized by $v_M(M^{\times})=\mathbb{Z}$. For $t\geq 0,$ set $$\phi_{M/F}(t)=\int_0^t \frac{\mathrm{d}\lambda}{[G_{(1)}:G_{(\lambda)}]}$$ (which makes sense as $G_{(\lambda)}\subseteq G_{(1)}$ for all $\lambda>1$). Then $\phi_{M/F}$ is a piecewise linear, increasing, continuous concave function. Denote by $\psi_{M/F}$ its inverse, and set $G^{(\mu)}=G_{(\psi_{M/F}(\mu))}.$ Denote by $\lambda_{M/F}$ the infimum of all $\lambda \geq 0$ such that $G_{(\lambda)}=\{\mathrm{id}\},$ and by $\mu_{M/F}$ the infimum of all $\mu \geq 0$ such that $G^{(\mu)}=\{\mathrm{id}\}.$ Clearly one has $\mu_{M/F}=\phi_{M/F}(\lambda_{M/F}).$ \begin{rem} Let us compare the indexing conventions with \cite{SerreLocalFields} and \cite{Fontaine}, as the results therein are (implicitly or explicitly) used. If $G^{\text{S-}(\mu)}, G^{\text{F-}(\mu)}$ are the upper--index ramification groups in \cite{SerreLocalFields} and \cite{Fontaine}, resp., and similarly we denote $G_{\text{S-}(\lambda)}$ and $G_{\text{F-}(\lambda)}$ for the lower--index ramification groups, then we have $$G^{(\mu)}=G^{\text{S-}(\mu-1)}=G^{\text{F-}(\mu)}, \;\;\;\; G_{(\lambda)}=G_{\text{S-}(\lambda-1)}=G_{\text{F-}(\lambda/\tilde{e})},$$ where $\tilde{e}=e_{M/F}$ is the ramification index of $M/F$.
In particular, since the enumeration differs from the one in \cite{SerreLocalFields} only by a shift by one, the claims that lower indexing is compatible with restrictions to subgroups and upper indexing is compatible with passing to quotients remain valid. Thus, it makes sense to set $$G_{F}^{(\mu)}=\varprojlim_{M'/F}\mathrm{Gal}(M'/F)^{(\mu)}$$ where $M'/F$ varies over finite Galois extensions $M'/F$ contained in a fixed algebraic closure $\overline{K}$ of $K$ (and $G_F=\varprojlim_{M'/F}\mathrm{Gal}(M'/F)$ is the absolute Galois group). \end{rem} Regarding $\mu$, the following transitivity formula is useful. \begin{lem}[{\cite[Lemma 4.3.1]{CarusoLiu}}]\label{lem:transitivity} Let $N/M/F$ be a tower of finite extensions with both $N/F$ and $M/F$ Galois. Then we have $\mu_{N/F}=\mathrm{max}(\mu_{M/F}, \phi_{M/F}(\mu_{N/M})).$ \end{lem} The property $(P^{M/F}_m)$ is connected with the ramification of the field extension $M/F$ as follows. \begin{prop}\label{prop:RamificationEngine} Let $M/F/K$ be finite extensions of fields with $M/F$ Galois and let $m>0$ be a real number. If the property $(P^{M/F}_m)$ holds, then: \begin{enumerate}[(1)] \item{{\normalfont(\cite[Proposition~3.3]{Yoshida})} $\mu_{M/F}\leq e_{F/K}m.$ In fact, $\mu_{M/F}/e_{F/K}$ is the infimum of all $m>0$ such that $(P^{M/F}_m)$ is valid.} \item{{\normalfont(\cite[Corollary~4.2.2]{CarusoLiu})} $v_K(\mathcal{D}_{M/F})<m,$ where $\mathcal{D}_{M/F}$ denotes the different of the field extension $M/F$.} \end{enumerate} \end{prop} \begin{cor}\label{cor:FontainePropertyWlogTotRamified} Both the assumptions and the conclusions of Proposition~\ref{prop:RamificationEngine} are insensitive to replacing $F$ by any unramified extension of $F$ contained in $M$. \end{cor} \begin{proof} Let $F'/F$ be an unramified extension such that $F' \subseteq M$. The fact that $(P^{M/F}_m)$ is equivalent to $(P^{M/F'}_m)$ is proved in \cite[Proposition~2.2]{Yoshida}.
To show that the conclusions are also the same for $F$ and $F'$, it is enough to observe that $e_{F'/K}=e_{F/K}, e_{M/F'}=e_{M/F},$ $v_{K}(\mathcal{D}_{M/F'})=v_{K}(\mathcal{D}_{M/F})$ and $\mu_{M/F'}=\mu_{M/F}$. The first two equalities are clear since $F'/F$ is unramified. The third equality follows from $\mathcal{D}_{M/F}=\mathcal{D}_{M/F'}\mathcal{D}_{F'/F}$ upon noting that $\mathcal{D}_{F'/F}$ is the unit ideal. Finally, by Lemma~\ref{lem:transitivity}, we have $\mu_{M/F}=\mathrm{max}(\mu_{F'/F}, \phi_{F'/F}(\mu_{M/F'})).$ As $F'/F$ is unramified, we have $\mu_{F'/F}=0$ and $\phi_{F'/F}(t)=t$ for all $t \geq 0$. The fourth equality thus follows as well. \end{proof} Let $\mathscr{X}$ be a proper and smooth $p$--adic formal scheme over $\oh_K$. Fix an integer $i$, and let $T'=\H^{i}_{\et}(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p\mathbb{Z})$. Let $L$ be the splitting field of $T'$, i.e. $L=\overline{K}^{\mathrm{Ker}\,\rho}$ where $\rho: G_K\rightarrow \mathrm{Aut}_{\mathbb{F}_p}(T')$ is the associated representation. The goal is to provide an upper bound on $v_K(\mathcal{D}_{L/K})$, and a constant $\mu_0=\mu_0(e, i, p)$ such that $G_K^{(\mu)}$ acts trivially on $T'$ for all $\mu>\mu_0$. To achieve this, we follow rather closely the strategy of \cite{CarusoLiu}. The main difference is that the input of $(\varphi, \widehat{G})$--modules attached to the discussed $G_K$--representations in \cite{CarusoLiu} is in our situation replaced by a $p$--torsion Breuil--Kisin module and a Breuil--Kisin--Fargues $G_K$--module that arise as the $p$--torsion prismatic $\Es$-- and $\ainf$--cohomology, resp. Let us therefore lay out the strategy, referring to proofs in \cite{CarusoLiu} whenever possible, and describe the needed modifications where necessary. To facilitate this approach further, the notation used will usually reflect the notation of \cite{CarusoLiu}, except for mostly omitting the index $n$ throughout (which in our situation is always equal to $1$).
The relation of the above--mentioned $p$--torsion prismatic cohomologies to the $p$--torsion \'{e}tale cohomology is as follows. \begin{prop}[{\cite[Proposition~7.2, Corollary~7.4, Remark~7.5]{LiLiu}}] \label{prop:TorsionSetup} Let $\mathscr{X}$ be a smooth and proper $p$--adic formal scheme over $\oh_K$. Then \begin{enumerate}[(1)] \item{$M_{\BK}=\H^i_{\Prism, n}(\mathscr{X}/\Es)$ is a $p^n$--torsion Breuil--Kisin module of height $\leq i$, and we have $$H_{\et}^i(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p^n\mathbb{Z})\simeq T_n(M_{\BK}):=\left(M_{\BK}\otimes_{W_n(k)[[u]]} W_n(\mathbb{C}_K^{\flat})\right)^{\varphi=1}$$ as $\mathbb{Z}/p^n\mathbb{Z}[G_\infty]$--modules.} \item{$M_{\inf}=\H^i_{\Prism, n}(\mathscr{X}_{\ainf}/\ainf)$ is a $p^n$--torsion Breuil--Kisin--Fargues $G_K$--module of height $\leq i$, and we have $$H_{\et}^i(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p^n\mathbb{Z})\simeq T_n(M_{\inf}):=\left(M_{\inf}\otimes_{W_n(\oh_{\mathbb{C}_K^{\flat}})} W_n(\mathbb{C}_K^{\flat})\right)^{\varphi=1}$$ as $\mathbb{Z}/p^n\mathbb{Z}[G_K]$--modules.} \item{We have $M_{\BK}\otimes_{\Es}\ainf=M_{\BK}\otimes_{W_n(k)[[u]]}W_n(\mathcal{O}_{\mathbb{C}_K^{\flat}})\simeq M_{\inf},$ and the natural map $M_{\BK} \hookrightarrow M_{\inf}$ has the image contained in $M_{\inf}^{G_{\infty}}$.} \end{enumerate} \end{prop} So let $M^0_{\BK}=\H^i_{\Prism, 1}(\mathscr{X}/\Es)$ and $M_{\inf}^0=\H^i_{\Prism, 1}(\mathscr{X}_{\ainf}/\ainf)$, so that $T_1(M_{\BK}^0)=T_1^{\inf}(M_{\inf}^0)=H_{\et}^i(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p\mathbb{Z}).$ Observe further that, since $u$ is a unit of $W_1(\mathbb{C}_{K}^{\flat})=\mathbb{C}_{K}^{\flat},$ we have $T_1(M_{\BK}^0)=T_1(M_{\BK})$ and $T_1^{\inf}(M_{\inf}^0)=T_1^{\inf}(M_{\inf}),$ where $M_{\BK}=M_{\BK}^0/M_{\BK}^0[u^{\infty}]$ and $M_{\inf}=M_{\inf}^0/M_{\inf}^0[u^{\infty}]$ are again a Breuil--Kisin module and a Breuil--Kisin--Fargues $G_K$--module, resp., of height $\leq i$. 
Since $\Es\hookrightarrow \ainf$ is faithfully flat, it is easy to see that the isomorphism $M_{\inf}\simeq M_{\BK}\otimes_{\Es}\ainf$ remains true. Furthermore, the pair $(M_{\BK}, M_{\inf})$ satisfies the conditions \begin{equation}\label{eqn:CrsModP} \forall g \in G_s\;\;\forall x\in M_{\BK}:\;\; g(x)-x \in \varphi^{-1}(v)u^{p^s}M_{\inf} \end{equation} for all $s \geq 0$, since the pair $(M_{\BK}^0, M_{\inf}^0)$ satisfies the analogous conditions by Proposition~\ref{prop:CrystallineCohMopPN}. Finally, the module $M_{\BK}$ is a finitely generated $u$--torsion--free $k[[u]]$--module, hence a finite free $k[[u]]$--module (and, consequently, $M_{\inf}$ is a finite free $\oh_{\mathbb{C}_K^{\flat}}$--module). Instead of referring to $T_1(M_{\inf})=H_{\et}^i(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p\mathbb{Z})$ directly, we will discuss the ramification bound for $T:=T^{*, \inf}_1(M_{\inf})=\mathrm{Hom}_{\ainf, \varphi}(M_{\inf}, \oh_{\mathbb{C}_K^{\flat}})\simeq H_{\et}^i(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p\mathbb{Z})^{\vee}$, which is equivalent, as the splitting field of $T$ is still $L$. Also note that $T\simeq T^*_1(M_{\BK})=\mathrm{Hom}_{\Es, \varphi}(M_{\BK}, \oh_{\mathbb{C}_K^{\flat}})$ as a $\mathbb{Z}/p\mathbb{Z}[G_\infty]$--module. \begin{rem}[Ramification bounds of \cite{Caruso}]\label{rem:CarusoBound} Similarly to the discussion above we may take, for any $n \geq 1,$ $M^0_{\BK}=\H^i_{\Prism, n}(\mathscr{X}/\Es)$ and $M_{\BK}=M^0_{\BK}/M^0_{\BK}[u^\infty]$. Then the $G_\infty$--module $T:=T^*_n(M_\BK)=\mathrm{Hom}_{\Es, \varphi}(M_{\BK}, W_n(\oh_{\mathbb{C}_K^{\flat}}))$ is the restriction of $\H^i_{\et}(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p^n\mathbb{Z})^{\vee}$ to $G_\infty$.
Denoting by $\oh_{\mathcal{E}}$ the $p$--adic completion of $\Es[1/u]$, $M_{\mathcal{E}}:=M_{\BK}\otimes_{\Es}\oh_{\mathcal{E}}$ then becomes an \'{e}tale $\varphi$--module over $\oh_{\mathcal{E}}$ in the sense of \cite[\S A]{Fontaine3}, with the natural map $M_\BK \rightarrow M_{\mathcal{E}}$ injective; thus, in the terminology of \cite{Caruso}, $M_{\BK}$ serves as a $\varphi$--lattice of height dividing $E(u)^i$. Upon observing that $T$ is the $G_\infty$--representation associated with $M_{\mathcal{E}}$ (see e.g. \cite[\S 2.1.3]{Caruso}), Theorem~2 of \cite{Caruso} implies the ramification bound $$\mu_{L/K}\leq 1+c_0(K)+e\left(s_0(K)+\mathrm{log}_p(ip)\right)+\frac{e}{p-1}.$$ Here $c_0(K), s_0(K)$ are constants that depend on the field $K$ and that generally grow with increasing $e$. (Their precise meaning is described in \S~\ref{subsec:Comparisons}.) \end{rem} We employ the following approximations of the functors $T_1^{*}$ and $T_1^{*, \inf}$. \begin{nott} For a real number $c\geq 0$, we define $$J_c(M_{\BK})=\mathrm{Hom}_{\Es, \varphi}(M_{\BK}, \oh_{\mathbb{C}_K^{\flat}}/\mathfrak{a}^{>c}),$$ $$J^{\inf}_c(M_{\inf})=\mathrm{Hom}_{\ainf, \varphi}(M_{\inf}, \oh_{\mathbb{C}_K^{\flat}}/\mathfrak{a}^{>c}).$$ We further set $J_\infty(M_{\BK})=T_1^{*}(M_{\BK})$ and $J^{\inf}_\infty(M_{\inf})=T_1^{*, \inf}(M_{\inf})$. Given $c, d \in \mathbb{R}^{\geq 0}\cup \{\infty\}$ with $c \geq d,$ we denote by $\rho_{c, d}: J_c(M_{\BK})\rightarrow J_d(M_{\BK})$ ($\rho^{\inf}_{c, d}: J^{\inf}_c(M_{\inf})\rightarrow J^{\inf}_d(M_{\inf}),$ resp.) the map induced by the quotient map $\oh_{\mathbb{C}_K^{\flat}}/\mathfrak{a}^{>c}\rightarrow \oh_{\mathbb{C}_K^{\flat}}/\mathfrak{a}^{>d}$.
\end{nott} Since $M_{\inf}\simeq M_{\BK}\otimes_{\Es}\ainf$ as $\varphi$--modules, it is easy to see that for every $c \in \mathbb{R}^{\geq 0}\cup \{\infty\},$ we have a natural isomorphism $\theta_c:J_c(M_{\BK}) \stackrel{\simeq}{\rightarrow} J_c^{\inf}(M_{\inf})$ of abelian groups; the biggest point of distinction between the two is that $J_c^{\inf}(M_{\inf})$ naturally attains the action of $G_K$ from the one on $M_{\inf}$, by the usual rule $$g(f)(x):=g(f(g^{-1}(x))),\;\; g \in G_K,\; f \in J_c^{\inf}(M_{\inf}),\;x \in M_{\inf}.$$ As for $J_c(M_{\BK}),$ there is a natural action given similarly by the formula $g(f)(x):=g(f(x))$ where $f \in J_c(M_{\BK})$ and $x \in M_{\BK}$. However, in order for this action to make sense, one needs that each $g(f)$ defined this way is still an $\Es$--linear map, which boils down to the requirement that $g(u)=u$ (that is, $g(\underline{\pi})=\underline{\pi}$) in the ring $ \oh_{\mathbb{C}_K^{\flat}}/\mathfrak{a}^{>c}$. This is certainly true for $g \in G_{\infty},$ but also for possibly bigger subgroups of $G_K$, depending on $c$. The concrete result is the following. \begin{prop}[{\cite[Proposition~2.5.3]{CarusoLiu}}]\label{prop:GsActionOnJc} Let $s$ be a non--negative integer with $s>\mathrm{log}_p(\frac{c(p-1)}{ep})$. Then the natural action of $G_s$ on $\oh_{\mathbb{C}_K^{\flat}}/\mathfrak{a}^{>c}$ induces an action of $G_s$ on $J_c(M_{\BK}).$ Furthermore, when $ d \leq c$, the map $\rho_{c, d}:J_c(M_{\BK}) \rightarrow J_d(M_{\BK})$ is $G_s$--equivariant, and when $s' \geq s$, the $G_{s'}$--action on $J_c(M_{\BK})$ defined in this manner is the restriction of the $G_s$--action to $G_{s'}$. \end{prop} The crucial connection between the actions on $J_c(M_{\BK})$ and $J_c^{\inf}(M_{\inf})$ is established using (the consequences of) the conditions \Crs. 
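Before stating it, note that for the specific values $c=a=iep/(p-1)$ and $c=b=ie/(p-1)$ introduced below, the thresholds on $s$ appearing in Proposition~\ref{prop:GsActionOnJc} and in the next proposition simplify to the constants $\mathrm{log}_p(i)$ and $\mathrm{log}_p((i-1)e/(p-1))$ quoted later in Proposition~\ref{prop:ActionProlongationJc} and Theorem~\ref{thm:Fuel}. A quick numeric check of these algebraic simplifications (our own sketch, not part of the paper):

```python
import math

p, e, i = 5, 3, 4  # sample parameters with i > 1 (an assumption of this check)
a = i * e * p / (p - 1)
b = i * e / (p - 1)

# Threshold of Prop. GsActionOnJc at c = a: log_p(a(p-1)/(ep)) = log_p(i).
assert abs(math.log(a * (p - 1) / (e * p), p) - math.log(i, p)) < 1e-12

# Threshold of Prop. GsEquivarianceOfJc at c = b: b - e/(p-1) = (i-1)e/(p-1).
assert abs((b - e / (p - 1)) - (i - 1) * e / (p - 1)) < 1e-12

# Hence M_0 = max{log_p(ip/(p-1)), log_p((i-1)e/(p-1))}, as in Theorem Fuel.
assert abs(math.log(a / e, p) - math.log(i * p / (p - 1), p)) < 1e-12
```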
\begin{prop}\label{prop:GsEquivarianceOfJc} For $$s > \mathrm{max}\left\{\mathrm{log}_p\left(\frac{c(p-1)}{ep}\right),\mathrm{log}_p\left(c-\frac{e}{p-1}\right)\right\},$$ the natural isomorphism $\theta_c:J_c(M_{\BK})\stackrel{\simeq}\rightarrow J_c^{\inf}(M_{\inf})$ is $G_s$--equivariant. \end{prop} \begin{proof} Identifying $M_{\inf}$ with $M_{\BK}\otimes_{\Es}\ainf,$ $\theta_c$ takes the form $f \mapsto \widetilde{f}$ where $\widetilde{f}(x \otimes a):=af(x)$ for $x \in M_{\BK}$ and $a \in \ainf$. Note that we have $\varphi^{-1}(v)u^{p^s}\oh_{\mathbb{C}_K^{\flat}}=\mathfrak{a}^{\geq p^s + e/(p-1)}$. The condition (\ref{eqn:CrsModP}) then states that for all $x \in M_{\BK}$ and all $g \in G_s, $ $g(x\otimes 1)-x\otimes 1$ lies in $\mathfrak{a}^{\geq p^s + e/(p-1)}M_{\inf}$ and therefore in $\mathfrak{a}^{>c}M_{\inf}$ thanks to the assumption on $s$. It then follows that for every $\widetilde{f}\in J_c^{\inf}(M_{\inf}),$ $\widetilde{f}(g(x\otimes 1))=\widetilde{f}(x\otimes 1),$ and hence $$g(\widetilde{f})(x \otimes a)=g\left(\widetilde{f}(g^{-1}(x \otimes a))\right)=g\left(g^{-1}(a)\widetilde{f}(g^{-1}(x \otimes 1))\right)=ag\left(\widetilde{f}(x \otimes 1)\right)=ag(f(x))$$ for every $g \in G_s,$ $x \in M_{\BK}$ and $a \in \ainf$. Thus, we have that $g(\widetilde{f})=\widetilde{g(f)}$ for every $g \in G_s$ and $f \in J_c(M_{\BK}),$ proving the equivariance of $\theta_c$. \end{proof} From now on, set $b:=ie/(p-1)$ and $a:=iep/(p-1)$. Then $T$ is determined by $J_a(M_{\BK})$ and $J_b(M_{\BK})$ in the following sense.
\begin{prop}\label{prop:ActionProlongationJc} \begin{enumerate}[(1)] \item{The map $\rho_{\infty, b}: T^{*}_1(M_{\BK})\rightarrow J_b(M_{\BK})$ is injective, and $\rho_{\infty, b}(T^{*}_1(M_{\BK}))=\rho_{a, b}(J_{a}(M_{\BK}))$.} \item{The map $\rho_{\infty, b}^{\inf}: T^{*, \inf}_1(M_{\inf})\rightarrow J^{\inf}_b(M_{\inf})$ is injective, and $\rho^{\inf}_{\infty, b}(T^{*, \inf}_1(M_{\inf}))=\rho^{\inf}_{a, b}(J^{\inf}_{a}(M_{\inf}))$.} \item{For $s>\mathrm{log}_p(i) $, $T^{*}_1(M_{\BK})$ has a natural action of $G_s$ that extends the usual $G_\infty$--action.} \item{For $s> \mathrm{max}\left( \mathrm{log}_p(i),\mathrm{log}_p((i-1)e/(p-1)) \right)$, the action from (3) agrees with $T|_{G_s}$.} \end{enumerate} \end{prop} \begin{proof} Part (1) is proved in \cite[Proposition~2.3.3]{CarusoLiu}. Then $T^{*}_1(M_{\BK})$ attains the action of $G_s$ with $s>\mathrm{log}_p(i)$ by identification with $\rho_{a, b}(J_{a}(M_{\BK}))$ and using Proposition~\ref{prop:GsActionOnJc} (see also \cite[Theorem~2.5.5]{CarusoLiu}), which proves (3). Finally, the proof of (2),(4) is analogous to \cite[Corollary~3.3.3]{CarusoLiu} and \cite[Theorem~3.3.4]{CarusoLiu}. Explicitly, consider the commutative diagram \begin{center} \begin{tikzcd} T_1^{*}(M_{\BK}) \ar[r, "\rho_{\infty,a}"] \ar[d, "\sim"', "\theta_{\infty}"] & J_a(M_{\BK}) \ar[d, "\sim"', "\theta_{a}"] \ar[r, "\rho_{a, b}"] & J_b(M_{\BK}) \ar[d, "\sim"', "\theta_b"] \\ T_1^{*, \inf}(M_{\inf}) \ar[r, "\rho^{\inf}_{\infty,a}"] & J^{\inf}_a(M_{\inf}) \ar[r, "\rho^{\inf}_{a, b}"] & J^{\inf}_b(M_{\inf}) , \end{tikzcd} \end{center} where the compositions of the rows are $\rho_{\infty, b}$ and $\rho^{\inf}_{\infty, b},$ resp. This immediately proves (2) using (1). Moreover, the map $\rho^{\inf}_{\infty, b}$ is $G_K$--equivariant, the map $\rho_{\infty, b}$ is tautologically $G_s$--equivariant for $s > \mathrm{log}_p(i)$ by the proof of (3), and both maps are injective.
Since $\theta_b$ is $G_s$--equivariant when $s > \mathrm{log}_p((i-1)e/(p-1))$ by Proposition~\ref{prop:GsEquivarianceOfJc}, it follows that so is $\theta_{\infty}$, which proves (4). \end{proof} We employ further approximations of $J_c(M_{\BK})$ defined as follows. \begin{nott} Let $s$ be a non--negative integer, let $c \in [0, ep^s)$ be a real number and let $E/K_s$ be an algebraic extension. We consider the ring $(\varphi_k^s)^*\oh_{E}/\mathfrak{a}_E^{> c/p^s}=k\otimes_{\varphi_k^s, k}\oh_{E}/\mathfrak{a}_E^{> c/p^s}$ (note that the condition on $c$ implies that $p \in \mathfrak{a}_E^{>c/p^s}$, making $\oh_{E}/\mathfrak{a}_E^{>c/p^s}$ a $k$--algebra). We endow this ring with an $\Es$--algebra structure via $\Es\stackrel{\mathrm{mod}\,p}{\rightarrow}k[[u]]\stackrel{\alpha}{\rightarrow} (\varphi_k^s)^*\oh_{E}/\mathfrak{a}_E^{> c/p^s}$ where $\alpha$ extends the $k$--algebra structure map by the rule $u \mapsto 1\otimes{\pi_s}.$ Then we set $$J_c^{(s), E}(M_{\BK})=\mathrm{Hom}_{\Es,\varphi}(M_{\BK}, (\varphi_k^s)^*\oh_{E}/\mathfrak{a}_E^{>c/p^s}).$$ Note that the fact that $g(\pi_s)=\pi_s$ for all $g \in G_s$ implies that $J_c^{(s), E}(M_{\BK})$ attains a $G_s$--action induced by the $G_s$--action on $\oh_{E}/\mathfrak{a}_E^{>c/p^s}$. When $c, d$ are two real numbers satisfying $ep^s> c \geq d \geq 0,$ there is a transition map $\rho_{c, d}^{(s), E}:J_c^{(s), E}(M_{\BK}) \rightarrow J_d^{(s), E}(M_{\BK})$ which is $G_s$--equivariant. \end{nott} The relation to $J_c(M_{\BK})$ is the following. \begin{prop} \label{prop:ApproximateJc} Let $s, c$ be as above.
Then \begin{enumerate}[(1)] \item{Given an algebraic extension $E/K_s$, $J_c^{(s), E}(M_{\BK})$ naturally embeds into $J_c(M_{\BK})$ as a $G_s$--submodule.} \item{Given a tower of algebraic extensions $F/E/K_s$, $J_c^{(s), E}(M_{\BK})$ naturally embeds into $J_c^{(s), F}(M_{\BK})$ as a $G_s$--submodule.} \item{$J_c^{(s), \overline{K}}(M_{\BK})$ is naturally isomorphic to $J_c(M_{\BK})$ as a $G_s$--module.} \end{enumerate} \end{prop} \begin{proof} Part (2) is immediate upon observing that the inclusion $\oh_{E} \hookrightarrow \oh_{F}$ induces the map $\oh_{E}/\mathfrak{a}_E^{>c/p^s} \rightarrow \oh_{F}/\mathfrak{a}_F^{>c/p^s}$ which is still injective (and clearly $G_s$--equivariant). Similarly, part (3) follows from the fact that the map $\mathrm{pr}_s: \mathcal{O}_{\mathbb{C}_K^{\flat}}=\varprojlim_{s, \varphi}\oh_{\overline{K}}/p \rightarrow \oh_{\overline{K}}/p$ induces a ($G_s$--equivariant) isomorphism $ \oh_{\mathbb{C}_K^{\flat}}/\mathfrak{a}^{>c} \rightarrow (\varphi_k^s)^*\oh_{\overline{K}}/\mathfrak{a}_{\overline{K}}^{>c/p^s}$ when $s > \mathrm{log}_p(c/e)$ (so a fortiori when $s>\mathrm{log}_p(c)$), which is proved in \cite[Lemma~2.5.1]{CarusoLiu}. Part (1) is then obtained as a direct combination of (2) and (3). \end{proof} For a non--negative integer $s$, denote by $L_s$ the composite of the fields $K_s$ and $L$. The following adaptation of Theorem~4.1.1 of \cite{CarusoLiu} plays a key role in establishing the ramification bound. \begin{thm}\label{thm:Fuel} Let $s$ be an integer satisfying $$s> M_0:=\mathrm{max}\left\{\mathrm{log}_p\left(\frac{a}{e}\right),\mathrm{log}_p\left(b-\frac{e}{p-1}\right)\right\}=\mathrm{max}\left\{\mathrm{log}_p\left(\frac{ip}{p-1}\right), \mathrm{log}_p\left(\frac{(i-1)e}{(p-1)}\right)\right\},$$ and let $E/K_s$ be an algebraic extension. 
Then the inclusion $\rho_{a, b}^{(s),E}(J_a^{(s), E}(M_{\BK}))\hookrightarrow \rho_{a, b}(J_a(M_{\BK})),$ facilitated by the inclusions $J_a^{(s), E}(M_{\BK}) \hookrightarrow J_a(M_{\BK})$ and $J_b^{(s), E}(M_{\BK}) \hookrightarrow J_b(M_{\BK})$ from Proposition~\ref{prop:ApproximateJc}, is an isomorphism if and only if $L_s \subseteq E$. \end{thm} \begin{proof} The proof of \cite[Theorem~4.1.1]{CarusoLiu} applies in our context as well, as we now explain. Using just the fact that $M_{\BK}$ is a Breuil--Kisin module that is free over $k[[u]]$ together with the assumption $s>\mathrm{log}_p(a/e)$, for every algebraic extension $F/K_s$, an auxiliary set $\widetilde{J}_1^{(s), F}(M_{\BK})$ is constructed, together with maps of sets $\widetilde{\rho}_c^{(s), F}: \widetilde{J}_1^{(s), F}(M_{\BK}) \rightarrow J_c^{(s), F}(M_{\BK})$ for every $c \in (0, ep^s).$ When $F$ is Galois over $K$, this set is naturally a $G_s$--set and the maps are $G_s$--equivariant. Moreover, the sets have the property that $\left(\widetilde{J}_1^{(s), F}(M_{\BK})\right)^{G_{F'}}=\widetilde{J}_1^{(s), F'}(M_{\BK})$ when $F/F'/K_s$ is a tower of extensions. Subsequently, it is shown in \cite[Lemma~4.1.4]{CarusoLiu} that \begin{equation}\tag{$*$}\label{LiftedJ} \widetilde{\rho}_b^{(s), F}\text{ is injective and its image is }\rho_{a, b}^{(s), F}(J_a^{(s), F}(M_{\BK})), \end{equation} where the only restriction on $s$ is again $s> \mathrm{log}_p(a/e)$.
Finally, one obtains a series of $G_s$--equivariant bijections: \begin{center} {\renewcommand{\arraystretch}{1.75} \begin{tabular}{rclr} $\widetilde{J}^{(s),\overline{K}}_1(M_{\BK}) $ & $ \simeq $ & $ \rho_{a, b}^{(s), \overline{K}}(J_a^{(s), \overline{K}}(M_{\BK}))$ & (by (\ref{LiftedJ}))\\ & $ \simeq $ & $ \rho_{a, b}(J_a(M_{\BK}))$ & (Proposition~\ref{prop:ApproximateJc} (3))\\ & $ \simeq $ & $ \rho_{a, b}^{\inf}(J_a^{\inf}(M_{\inf}))$ & (Proposition~\ref{prop:GsEquivarianceOfJc})\\ & $ \simeq $ & $ T$ & (Proposition~\ref{prop:ActionProlongationJc} (2)) \end{tabular} } \end{center} (where the step that uses Proposition~\ref{prop:GsEquivarianceOfJc} relies on the assumption $s>\mathrm{log}_p(b-e/(p-1))$ ). Applying $(-)^{G_E}$ to both sides and using $(*)$ again then yields $$\rho_{a, b}^{(s), E}(J_a^{(s), E}(M_{\BK})) \simeq T^{G_E}.$$ Therefore, we may replace the inclusion from the statement of the theorem by the inclusion $T^{G_E} \subseteq T,$ and the claim now easily follows. \end{proof} Finally, we are ready to establish the desired ramification bound. Let $N_s=K_s(\zeta_{p^s})$ be the Galois closure of $K_s$ over $K$, and set $M_s=L_sN_s$. Then we have \begin{prop}\label{prop:FontainePropertyForLs} Let $s$ be as in Theorem~\ref{thm:Fuel}, and set $m=a/p^s$. Then the properties $(P_{m}^{L_s/K_s})$ and $(P_{m}^{M_s/N_s})$ hold. \end{prop} \begin{proof} The proof of $(P_{m}^{L_s/K_s})$ is the same as in \cite{CarusoLiu}, which refers to an older version of \cite{Hattori} for parts of the proof. Let us therefore reproduce the argument for convenience. By Corollary~\ref{cor:FontainePropertyWlogTotRamified}, it is enough to prove $(P_{m}^{L_s/K_s^{un}})$ where $K_s^{un}$ denotes the maximal unramified extension of $K_s$ in $L_s$. Let $E/K_s^{un}$ be an algebraic extension and $f: \oh_{L_s}\rightarrow \oh_{E}/\mathfrak{a}_K^{>m}$ be an $\oh_{K_s^{un}}$--algebra map. 
Setting $c=a$ or $c=b$, it makes sense to consider an induced map $f_c: \oh_{L_s}/\mathfrak{a}_{L_s}^{>c/p^s}\rightarrow \oh_{E}/\mathfrak{a}_K^{>c/p^s}$, and we claim that this map is well--defined and injective. Indeed, let $\varpi$ be a uniformizer of $L_s$, satisfying the relation $$\varpi^{e'}=c_1\varpi^{e'-1}+c_2\varpi^{e'-2}+\dots + c_{e'-1}\varpi+c_{e'},$$ where $P(T)=T^{e'}-\sum_{i}c_{i}T^{e'-i}$ is an Eisenstein polynomial over $K_{s}^{un}.$ Applying $f$, one thus obtains $t^{e'}=\sum_{i}c_{i}t^{e'-i}$ in $\oh_{E}/\mathfrak{a}_K^{>1}$ where $t=f(\varpi),$ and thus, lifting $t$ to $\widetilde{t}\in \oh_E,$ we obtain the equality $$\widetilde{t}^{e'}=c_1\widetilde{t}^{e'-1}+c_2\widetilde{t}^{e'-2}+\dots + c_{e'-1}\widetilde{t}+c_{e'}+r$$ with $v_K(r)>m>1/p^s$. It follows that $v_K(\widetilde{t})=v_K(\varpi)=1/(p^se'),$ and so $\varpi^n \in \mathfrak{a}_{L_s}^{>c/p^s}$ if and only if $\widetilde{t}^n \in \mathfrak{a}_{E}^{>c/p^s},$ proving that $f_c$ is both well--defined and injective.
The map $f_c$ induces an injection of $k$--algebras $(\varphi_k^s)^*\oh_{L_s}/\mathfrak{a}_{L_s}^{>c/p^s}\hookrightarrow (\varphi_k^s)^*\oh_{E}/\mathfrak{a}_{E}^{>c/p^s}$ which in turn gives an injection $J_c^{(s), L_s}(M_{\BK})\rightarrow J_c^{(s), E}(M_{\BK})$, where $c=a$ or $c=b$; consequently, we obtain an injection $$\rho_{a, b}^{(s), L_s}(J_a^{(s), L_s}(M_{\BK}))\hookrightarrow \rho_{a, b}^{(s), E}(J_a^{(s), E}(M_{\BK})).$$ Combining this with Propositions~\ref{prop:ActionProlongationJc} and \ref{prop:ApproximateJc}, we have the series of injections $$\rho_{a, b}^{(s), L_s}(J_a^{(s), L_s}(M_{\BK}))\hookrightarrow \rho_{a, b}^{(s), E}(J_a^{(s), E}(M_{\BK}))\hookrightarrow \rho_{a, b}^{(s), \overline{K}}(J_a^{(s), \overline{K}}(M_{\BK}))\hookrightarrow \rho_{a, b}(J_a(M_{\BK}))\simeq T.$$ Since $\rho_{a, b}^{(s), L_s}(J_a^{(s), L_s}(M_{\BK}))\simeq T$ by Theorem~\ref{thm:Fuel}, this is actually an injection $T \hookrightarrow T$ and therefore an isomorphism since $T$ is finite. In particular, the natural morphism $\rho_{a, b}^{(s), E}(J_a^{(s), E}(M_{\BK}))\hookrightarrow \rho_{a, b}(J_a(M_{\BK}))$ is an isomorphism, and Theorem~\ref{thm:Fuel} thus implies that $L_s \subseteq E$. This proves $(P_{m}^{L_s/K_s})$. As in \cite{CarusoLiu}, the property $(P_{m}^{M_s/N_s})$ is deduced from $(P_{m}^{L_s/K_s})$ as follows. Given an algebraic extension $E/N_s$ and an $\oh_{N_s}$--algebra morphism $\oh_{M_s} \rightarrow \oh_{E}/\mathfrak{a}_E^{>m},$ by restriction we obtain an $\oh_{K_s}$--algebra morphism $\oh_{L_s} \rightarrow \oh_{E}/\mathfrak{a}_E^{>m},$ hence there is a $K_s$--injection $L_s\rightarrow E$. Since $N_s \subseteq E$, this can be extended to a $K_s$--injection $M_s\rightarrow E$, and upon noting that the extension $M_s/K_s$ is Galois, one obtains an $N_s$--injection $M_s\rightarrow E$ by precomposing with a suitable automorphism of $M_s$. This proves $(P_{m}^{M_s/N_s})$.
\end{proof} \begin{thm}\label{thm:FinalRamification} Let $$\alpha=\lfloor M_0\rfloor+1=\left\lfloor \mathrm{log}_p\left(\mathrm{max}\left\{\frac{ip}{p-1}, \frac{(i-1)e}{p-1}\right\}\right)\right\rfloor+1.$$ Then \begin{enumerate}[(1)] \item{ $$v_K(\mathcal{D}_{L/K})<1+e\alpha+\frac{iep}{p^{\alpha}(p-1)}-\frac{1}{p^{\alpha}}.$$} \item{For any $\mu$ satisfying $$\mu>1+e\alpha+\mathrm{max}\left\{\frac{iep}{p^{\alpha}(p-1)}-\frac{1}{p^{\alpha}}, \frac{e}{p-1}\right\},$$ $G_K^{(\mu)}$ acts trivially on $T$.} \end{enumerate} \end{thm} \begin{proof} We may set $s=\alpha$ as the condition $s>M_0$ is then satisfied. Propositions~\ref{prop:RamificationEngine} and \ref{prop:FontainePropertyForLs} then imply that $v_K(\mathcal{D}_{L_s/K_s})<a/p^s$ (where $a=iep/(p-1)$ ) and thus \begin{align*} v_K(\mathcal{D}_{L_s/K})&=v_K(\mathcal{D}_{K_s/K})+v_K(\mathcal{D}_{L_s/K_s})<1+es-\frac{1}{p^s}+\frac{a}{p^s}=1+e\alpha+\frac{a-1}{p^{\alpha}}. \end{align*} Similarly, we have $v_K(\mathcal{D}_{L/K})=v_K(\mathcal{D}_{L_s/K})-v_K(\mathcal{D}_{L_s/L})\leq v_K(\mathcal{D}_{L_s/K}),$ and the claim (1) thus follows. To prove (2), let $M_s$ and $N_s$ be as in Proposition~\ref{prop:FontainePropertyForLs}. The fields $N_s$ and $M_s=LN_s$ are both Galois over $K$, hence Lemma~\ref{lem:transitivity} applies and we thus have $$\mu_{M_s/K}=\mathrm{max}\left\{\mu_{N_s/K}, \phi_{N_s/K}(\mu_{M_s/N_s})\right\}.$$ By \cite[Remark~5.5]{Hattori}, we have $$\mu_{N_s/K}=1+es+\frac{e}{p-1}.$$ As for the second argument, Proposition~\ref{prop:RamificationEngine} gives the estimate $$\mu_{M_s/N_s}\leq e_{N_s/K}m=\frac{e_{N_s/K}}{p^s}a.$$ The function $\phi_{N_s/K}(t)$ is concave and has a constant slope $1/e_{N_s/K}$ beyond $t=\lambda_{N_s/K},$ where it attains the value $\phi_{N_s/K}(\lambda_{N_s/K})=\mu_{N_s/K}=1+es+e/(p-1)$. 
Thus, $\phi_{N_s/K}(t)$ can be estimated linearly from above as follows: $$\phi_{N_s/K}(t)\leq 1+es+\frac{e}{p-1}+\frac{1}{e_{N_s/K}}\left(t-\lambda_{N_s/K}\right)=1+es+\frac{t}{e_{N_s/K}}-\frac{\lambda_{N_s/K}}{e_{N_s/K}}+\frac{e}{p-1}.$$ There is an automorphism $\sigma \in \mathrm{Gal}(N_s/K)$ with $\sigma(\pi_s)=\zeta_{p}\pi_s$. Hence $v_K(\sigma(\pi_s)-\pi_s)=e/(p-1)+1/p^s,$ showing that $$\lambda_{N_s/K}\geq e_{N_s/K} \left( \frac{e}{p-1}+\frac{1}{p^s}\right),$$ and combining this with the estimate of $\phi_{N_s/K}(t),$ we obtain $$\phi_{N_s/K}(t)\leq 1+es+\frac{t}{e_{N_s/K}}-\frac{1}{p^s}.$$ Plugging in the estimate for $\mu_{M_s/N_s}$ then yields \begin{align*} \phi_{N_s/K}(\mu_{M_s/N_s})&\leq 1+es+\frac{a}{p^s}-\frac{1}{p^s}=1+es+\frac{\frac{iep}{p-1}-1}{p^s}. \end{align*} Thus, we have $$\mu_{L/K}\leq \mu_{M_s/K}\leq 1+e\alpha+\mathrm{max}\left\{\frac{iep}{p^\alpha(p-1)}-\frac{1}{p^{\alpha}}, \frac{e}{p-1} \right\},$$ which finishes the proof of part (2). \end{proof} \subsection{Comparisons of bounds}\label{subsec:Comparisons} Finally, let us compare the bounds obtained in Theorem~\ref{thm:FinalRamification} with other results from the literature. These are summarized in the table below. \begin{table}[h] \noindent\begin{tabular}{| Sc| Sc Sl |} \hline & $\mu_{L/K}\leq \cdots$ & \\ \hline Theorem~\ref{thm:FinalRamification} & $1+e\left(\left\lfloor \mathrm{log}_p\left(\mathrm{max}\left\{\frac{ip}{p-1}, \frac{(i-1)e}{p-1}\right\}\right)\right\rfloor+1\right)+\mathrm{max}\left\{\beta, \frac{e}{p-1}\right\},$ & $\beta<\mathrm{min}\left(e, 2p\right)$ \tablefootnote{More precisely: When $i=1$, it is easy to see that $\beta=(eip/(p-1)-1)/p^{\alpha}$ is smaller than $e/(p-1)$, and hence does not have any effect.
When $i>1$, one can easily show using $p^\alpha>ip/(p-1), p^\alpha>(i-1)e/(p-1)$ that $\beta<e$ and $\beta< pi/(i-1)\leq 2p$.} \\ \hline \cite{CarusoLiu} & $1+e\left(\left\lfloor \mathrm{log}_p\left(\frac{ip}{p-1}\right)\right\rfloor+1\right)+\mathrm{max}\left\{\beta, \frac{e}{p-1}\right\},$ & $\beta<e$ \tablefootnote{The number $\beta$ here has a different meaning than the number $\beta$ of \cite[Theorem~1.1]{CarusoLiu}.}\\ \hline \cite{Caruso} & $1+c_0(K)+e\left(s_0(K)+\mathrm{log}_p(ip)\right)+\frac{e}{p-1}$ & \\ \hline \cite{Hattori} & $\begin{cases} 1+e+\frac{e}{p-1}, \;\;\;\;\;\;\;\;\;\;i=1,\\ 1+e+\frac{ei}{p-1}-\frac{1}{p}, \;\;\;i>1, \end{cases}$ & under $ie<p-1$ \\ \hline \cite{Fontaine2}, \cite{Abrashkin} & $1+\frac{i}{p-1}$ & under $\begin{matrix}e=1,\\ i< p-1\end{matrix}$ \\ \hline \end{tabular} \caption{Comparisons of estimates of $\mu_{L/K}$}\label{table} \end{table} \vspace{0.5cm} \noindent\textbf{Comparison with \cite{Hattori}.} If we assume $ie<p-1$, then the first maximum in the estimate of $\mu_{L/K}$ is realized by $ip/(p-1) \in (1, p)$; that is, in Theorem~\ref{thm:FinalRamification} one has $\alpha=1$ and thus $$\mu_{L/K}\leq 1+e+\mathrm{max}\left\{\frac{ei}{p-1}-\frac{1}{p},\frac{e}{p-1}\right\},$$ which agrees precisely with the estimate of \cite{Hattori}. \noindent\textbf{Comparison with \cite{Fontaine2}, \cite{Abrashkin}.} Specializing to $e=1$ in the previous case, the bound becomes $$\mu_{L/K}\leq \begin{cases} 2+\frac{1}{p-1}, \;\;\;\;\;\;\;\;i=1, \\ 2-\frac{1}{p} + \frac{i}{p-1}, \;\;\;i>1. \end{cases}$$ This is clearly a slightly worse bound than that of \cite{Fontaine2} and \cite{Abrashkin} (by $1$ and $(p-1)/p$, respectively).
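The comparisons above can be checked mechanically. The following sketch (our own helper; the bound of Theorem~\ref{thm:FinalRamification}(2) is transcribed verbatim) confirms that in the regime $ie<p-1$ one indeed gets $\alpha=1$ and the displayed Hattori-type estimate:

```python
import math

def mu_bound(p, e, i):
    # Bound of Theorem (FinalRamification)(2):
    # mu <= 1 + e*alpha + max{iep/(p^alpha (p-1)) - 1/p^alpha, e/(p-1)},
    # with alpha = floor(log_p(max{ip/(p-1), (i-1)e/(p-1)})) + 1.
    alpha = math.floor(math.log(max(i * p / (p - 1), (i - 1) * e / (p - 1)), p)) + 1
    beta = i * e * p / (p**alpha * (p - 1)) - 1 / p**alpha
    return alpha, 1 + e * alpha + max(beta, e / (p - 1))

# Sample parameters in the regime ie < p - 1 (here p = 7, e = 2, i = 2):
# alpha = 1 and the bound is 1 + e + max{ei/(p-1) - 1/p, e/(p-1)}.
p, e, i = 7, 2, 2
alpha, bound = mu_bound(p, e, i)
assert alpha == 1
assert abs(bound - (1 + e + max(e * i / (p - 1) - 1 / p, e / (p - 1)))) < 1e-12
```

(When $ip/(p-1)$ is an exact power of $p$, floating-point $\mathrm{log}_p$ may misplace the floor; the snippet is only meant as a sanity check away from such edge cases.)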
\noindent\textbf{Comparison with \cite{CarusoLiu}.} From the shape of the bounds it is clear that they are equivalent when $(i-1)e \leq ip,$ that is, when $e \leq p$, together with some ``extra'' cases that include the case $i=1$ (more precisely, these extra cases are when $e>p$ and $i\leq e/(e-p)$); in fact, the terms $\beta$ agree in this situation. In the remaining case when $(i-1)e > ip,$ our estimate becomes gradually worse compared to \cite{CarusoLiu}. \begin{rems} \begin{enumerate}[(1)] \item{It should be noted that the bounds from \cite{CarusoLiu} do not necessarily apply to our situation, as it is not clear when $\H^i_{\et}(\mathscr{X}_{\overline{\eta}}, \mathbb{Z}/p\mathbb{Z})$ (or rather their duals) can be obtained as a quotient of two lattices in a semi--stable representation with Hodge--Tate weights in $[0, i]$. To our knowledge the only result along these lines is \cite[Theorem~1.3.1]{EmertonGee1}, which states that this is indeed the case when $i=1$ (and $X$ is a proper smooth variety over $K$ with semistable reduction). Interestingly, in this case the bound from Theorem~\ref{thm:FinalRamification} always agrees with the one from \cite{CarusoLiu}.} \item{Let us also point out that the verbatim reading of the bound from \cite{CarusoLiu} as described in Theorem~1.1 of \textit{loc. cit.} would have the term $\left\lceil \mathrm{log}_p(ip/(p-1)) \right\rceil$ (i.e. the upper integer part) instead of the term $\left\lfloor \mathrm{log}_p(ip/(p-1)) \right\rfloor+1$ as in Table~\ref{table}, but we believe this version to be correct. Indeed, the proof of Theorem~1.1 in \cite{CarusoLiu} (in the case $n=1$) ultimately relies on the objects $J_{1,a}^{(s), E}(\mathfrak{M})$ that are analogous to $J_{a}^{(s), E}(M_{\BK})$, where $s=\left\lceil \mathrm{log}_p(ip/(p-1)) \right\rceil$. In particular, Lemma~4.2.3 of \textit{loc. cit.} needs to be applied with $c=a$, and the implicitly used fact that the ring $\oh_{E}/\mathfrak{a}_E^{>a/p^s}$ is a $k$--algebra (i.e.
of characteristic $p$) relies on the \emph{strict} inequality $e>a/p^s$, equivalently $s>\mathrm{log}_p(ip/(p-1))$. In the case that $ip/(p-1)$ happens to be equal to $p^t$ for some integer $t$, one therefore needs to take $s=t+1$ rather than $s=t$. This precisely corresponds to the indicated change.} \end{enumerate} \end{rems} \noindent\textbf{Comparison with \cite{Caruso}.} Let us explain the constants $s_0(K), c_0(K)$ that appear in the estimate. The integer $s_0(K)$ is the smallest integer $s$ such that $1+p^s\mathbb{Z}_p \subseteq \chi(\mathrm{Gal}(K_{p^{\infty}}/K))$ where $\chi$ denotes the cyclotomic character. The rational number $c_{0}(K) \geq 0$ is the smallest constant $c$ such that $\psi_{K/K_0}(1+t) \geq 1+et-c$ for all $t \geq 0$ (this exists since the last slope of $\psi_{K/K_0}(t)$ is $e$)\footnote{To make sense of this in general, one needs to extend the definition of the functions $\psi_{L/M}, \varphi_{L/M}$ to the case when the extension $L/M$ is not necessarily Galois. This is done e.g. in \cite[\S 2.2.1]{Caruso}.}. In the case when $K/K_0$ is tamely ramified, the estimate from \cite{Caruso} becomes $$\mu_{L/K}\leq 1+e\left(\mathrm{log}_p(ip)+1\right)+\frac{e}{p-1},$$ which is essentially equivalent to the bound from Theorem~\ref{thm:FinalRamification} when $e < p$ (and again also in some extra cases, e.g. when $i=1$ for any $e$ and $p$), with the difference of estimates being approximately $$e\left(\mathrm{log}_p\left(\frac{p}{p-1}\right)-\frac{1}{p-1}\right) \in \left(-\frac{e}{4\sqrt{p}},\, 0\right).$$ In general, when $e$ is big and coprime to $p$, the bound in \cite{Caruso} becomes gradually better unless, for example, $i=1$.
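The claimed size of the tame-case difference, $e\left(\mathrm{log}_p(p/(p-1))-1/(p-1)\right) \in (-e/(4\sqrt{p}),\, 0)$, can be verified numerically for small primes. This is our own check, not part of the paper; note that for $p=2$ the expression degenerates to exactly $0$, so we test odd primes:

```python
import math

def diff_per_e(p):
    # (1/e) times the approximate difference of the two tame-case bounds:
    # log_p(p/(p-1)) - 1/(p-1).
    return math.log(p / (p - 1), p) - 1 / (p - 1)

# Membership in the open interval (-1/(4*sqrt(p)), 0) for small odd primes.
for p in [3, 5, 7, 11, 13, 17, 19, 23]:
    d = diff_per_e(p)
    assert -1 / (4 * math.sqrt(p)) < d < 0

assert diff_per_e(2) == 0.0  # degenerate boundary case p = 2
```

The case $p=5$ is the tightest one among small primes (about $-0.1114$ against the lower endpoint $-0.1118$), which is why the stated interval cannot be improved much.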
In the case when $K$ has relatively large wild absolute ramification, we expect that the bound from Theorem~\ref{thm:FinalRamification} generally becomes stronger, especially if $K$ contains $p^n$--th roots of unity for large $n$, as can be seen in the following examples (where we assume $i>1$; for $i=1$, our estimate retains the shape of the tame ramification case and hence the difference between the estimates becomes even larger). \begin{pr} \begin{enumerate}[(1)] \item{When $K=\mathbb{Q}_p(\zeta_{p^n})$ for $n\geq 2$, one has $e=(p-1)p^{n-1}$, $s_0(K)=n$ and from the classical computation of $\psi_{K/\mathbb{Q}_p}$ (e.g. as in \cite[IV \S4]{SerreLocalFields}), one obtains $c_0(K)=[(n-1)(p-1)-1]p^{n-1}+1$. Then the difference between the two estimates is approximately $ne-p^{n-1}+1 >(n-1)e$.} \item{When $K=\mathbb{Q}_p(p^{1/p^n})$ for $n\geq 3$, one has $e=p^{n}$ and $s_0(K)=1$. The description of $\psi_{K/\mathbb{Q}_p}$ in \cite[\S 4.3]{CarusoLiu} implies that $c_0(K)=np^n=ne$. The difference between the two estimates is thus approximately $$e\left(1+\mathrm{log}_p(i)-\mathrm{log}_p(i-1)+\mathrm{log}_p(p-1)\right)\approx 2e.$$ (In the initial cases $n=1, 2$, one can check that the difference is still positive, in both cases bigger than $p$.)} \end{enumerate} \end{pr} \addcontentsline{toc}{section}{References} \bibliography{references} \bibliographystyle{amsalpha} \end{document}
\subsection{Norms and products} We denote the set of non-negative real numbers by $\R_{+}$. For any subset $A \subseteq \R$, we let $\chi_{A} \colon \R \to \{0, 1\}$ be the indicator function of $A$. Let $X$ be a normed space with ground set $\R^d$. We denote by $B_X$ the unit ball of $X$, and by $\|\cdot \|_{X}$ the norm of $X$. We denote by $X^*$ the dual space of $X$ with respect to the standard dot product $\langle \cdot, \cdot \rangle$, i.e.~$\|x\|_{X^*} = \sup\{|\langle x, y\rangle|: y \in B_X\}$. For a vector $x \in \Rbb^d$ we define $|x| = (|x_1|, |x_2|, \ldots, |x_d|)$ to be the vector of the absolute values of the coordinates of $x$. For a positive integer $d$ and $1 \leq p \leq \infty$, we denote by $\ell_p^d$ the space $\Rbb^d$ equipped with the standard $\ell_p$ norm, which we denote by $\|\cdot\|_p$. \begin{definition} \label{def:ordered} For any vector $x \in \R^d$, we let $x^* = P|x|$ be the vector obtained by applying a permutation matrix $P$ to $|x|$, chosen so that the coordinates of $x^*$ are sorted in non-increasing order. \end{definition} \begin{definition}[Symmetric norm] A norm $\|\cdot\|_{X} \colon \R^d \to \R$ is \emph{symmetric} if for every $x \in \R^d$, $\|x\|_X = \Bigl\||x|\Bigr\|_X = \|x^*\|_{X}$. \end{definition} See the introduction for examples of symmetric norms. We note once again that the dual norm of a symmetric norm is also symmetric. A natural way to combine norms is via {\em product spaces}, which we will heavily exploit in this paper. \begin{definition}[Product space] Let $1 \leq p \leq \infty$. Let $(X_1, d_{X_1})$, $(X_2, d_{X_2})$, \ldots, $(X_k, d_{X_k})$ be metric spaces.
We define the {\em $\ell_p$-product space}, denoted $\bigoplus_{\ell_p} X_i$, to be a metric space whose ground set is $X_1 \times X_2 \times \ldots \times X_k$, and whose distance function is defined as follows: the distance between $(x_1, x_2, \ldots, x_k)$ and $(x_1', x_2', \ldots, x_k')$ is defined as the $\ell_p$~norm of the vector $\bigl(d_{X_1}(x_1, x_1'), d_{X_2}(x_2, x_2'), \ldots, d_{X_k}(x_k, x_k')\bigr)$. \end{definition} Next we define the top-$k$ norm: \begin{definition} For any $k \in [d]$, the \emph{top-$k$ norm}, $\| \cdot \|_{T(k)} \colon \R^d \to \R$, is the sum of the absolute values of the top $k$ coordinates. In other words, \[ \| x\|_{T(k)} = \sum_{i=1}^k |x_i^*|, \] where $x^*$ is the vector obtained in Definition~\ref{def:ordered}. \end{definition} \begin{definition} Given vectors $x, y \in \R^d$, we say $x$ \emph{weakly majorizes} $y$ if for all $k \in [d]$, \[ \sum_{i=1}^k |x^*_i| \geq \sum_{i=1}^k |y^*_i|. \] \end{definition} \begin{lemma}[Theorem B.2 in \cite{MOA11}] If $x, y \in \R^d$ where $x$ weakly majorizes $y$, then for any symmetric norm $\|\cdot\|_{X}$, \[ \|x\|_{X} \geq \|y\|_{X}. \] \end{lemma} \begin{definition} For $i \in [d]$, let $\xi^{(i)} \in \R^d$ be the vector \[ \xi^{(i)} = (\underbrace{1, \dots, 1}_{i}, \underbrace{0, \dots, 0}_{d-i}) \] consisting of exactly $i$ 1's, and $d-i$ 0's. \end{definition} \subsection{ANN for $\ell_{\infty}$ and $\ell_{\infty}$-products} We will crucially use the following two powerful results of Indyk. The first result is for the standard $d$-dimensional $\ell_\infty$ space. \begin{theorem}[{\cite[Theorem~1]{I01}}] \label{thm:l-infty-ds} For any $\eps \in (0, 1/2)$, there exists a data structure for ANN for $n$-point datasets in the $\ell_{\infty}^d$ space with approximation $O\left(\frac{\log \log d}{\eps}\right)$, space $O(d\cdot n^{1 + \eps})$, and query time $O(d \cdot \log n)$.
\end{theorem} The second is a generalization of the above theorem, which applies to an $\ell_{\infty}$-product of $k$ metrics $X_1, \ldots, X_k$, and achieves approximation $O(\log \log n)$. It only needs black-box ANN schemes for each metric $X_i$. \begin{theorem}[{\cite[Theorem~1]{I02}}] \label{thm:maxProduct} Let $X_1, X_2, \ldots, X_k$ be metric spaces, and let $c > 1$ be a real number. Suppose that for every $1 \leq i \leq k$ and every $n$ there exists a data structure for ANN for $n$-point datasets from $X_i$ with approximation $c$, space $S(n)\ge n$, query time $Q(n)$, and probability of success $0.99$. Then, for every $\eps>0$, there exists ANN under $\bigoplus_{\ell_\infty} X_i$ with: \begin{itemize} \item $O(\eps^{-1}\log\log n)$ approximation, \item $O(Q(n)\log n+dk\log n)$ query time, where $d$ is the time to compute distances in each $X_i$, and \item $S(n)\cdot O(kn^{\eps})$ space/preprocessing. \end{itemize} \end{theorem} Strictly speaking, we need to impose a technical condition on the ANN for each $X_i$ --- that it reports the point with the smallest {\em priority} --- which is satisfied in all our scenarios; see~\cite[Section 2]{I02} for details. Also, the original statement of \cite{I02} gave a somewhat worse space bound. The better space bound results simply from a better analysis of the algorithm, as was observed in \cite{AIK09}; we include a proof in Appendix \ref{apx:space}.
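Returning to the definitions above, the top-$k$ norm and the weak-majorization lemma are easy to illustrate numerically. The sketch below (hypothetical helper names) computes top-$k$ norms and spot-checks the lemma against the $\ell_2$ norm, which is symmetric, on random pairs where weak majorization happens to hold:

```python
import math
import random

def topk_norm(x, k):
    # top-k norm ||x||_{T(k)}: sum of the k largest absolute values
    return sum(sorted((abs(v) for v in x), reverse=True)[:k])

def weakly_majorizes(x, y):
    # x weakly majorizes y iff every top-k partial sum of |x| dominates y's
    return all(topk_norm(x, k) >= topk_norm(y, k) for k in range(1, len(x) + 1))

def l2(x):
    return math.sqrt(sum(v * v for v in x))

rng = random.Random(0)
checked = 0
for _ in range(2000):
    x = [rng.uniform(-1, 1) for _ in range(5)]
    y = [rng.uniform(-1, 1) for _ in range(5)]
    if weakly_majorizes(x, y):
        # Theorem B.2: a symmetric norm is monotone under weak majorization
        assert l2(x) >= l2(y) - 1e-12
        checked += 1
```

The same loop can be repeated with any other symmetric norm (e.g. $\ell_p$ for $1\le p\le\infty$) in place of `l2`.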
TITLE: Orthocenter and vectors QUESTION [1 upvotes]: Given three vectors $a,b,c$ which are linearly independent, and since there's a dot product $\langle \cdot,\cdot\rangle$, I have to prove that the orthocenter $h$ exists and is unique. I really have no idea how to do it. The professor gave us a hint about drawing in $\mathbb{R}^{3}$, which I did, but for me, every triangle I draw with these vectors will make them linearly dependent, which goes against the hypothesis. REPLY [2 votes]: Three points determine a plane, so it suffices to prove this for $\mathbb{R}^2$. The set of vectors $x$ such that $(a-x)\cdot(b-c)=0$ determines a line. Likewise, $(b-x)\cdot(c-a)=0$ determines another line. If $a$, $b$ and $c$ are not collinear, these lines are not parallel, so they determine a unique intersection point, $h$. To prove our original claim, it thus suffices to prove that $(c-h)\cdot(a-b)=0$. And we can do this by noticing the following chain of equalities: $$(c-h)\cdot(a-b)=$$ $$a\cdot c-b\cdot c-a\cdot h+b\cdot h=$$ $$-((a\cdot b-a\cdot c-b\cdot h+c\cdot h)+(b\cdot c-a\cdot b-c\cdot h+a\cdot h))=$$ $$-\bigl[(a-h)\cdot(b-c)+(b-h)\cdot(c-a)\bigr]=0.$$
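The construction in the reply is easy to carry out numerically: intersect the two altitude lines via a 2D Cramer's rule and observe that the third orthogonality condition comes out for free. A minimal sketch (hypothetical helper names):

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1])

def orthocenter(a, b, c):
    # solve (b-c)·x = (b-c)·a  and  (c-a)·x = (c-a)·b  by Cramer's rule;
    # these are the altitudes through a and through b
    d1, d2 = sub(b, c), sub(c, a)
    r1, r2 = dot(d1, a), dot(d2, b)
    det = d1[0] * d2[1] - d1[1] * d2[0]  # nonzero iff a, b, c not collinear
    return ((r1 * d2[1] - r2 * d1[1]) / det,
            (d1[0] * r2 - d2[0] * r1) / det)

a, b, c = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
h = orthocenter(a, b, c)   # (1.0, 1.0) for this triangle
# third altitude condition (c - h)·(a - b) = 0 holds automatically
```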
TITLE: Combinatorics with colored beans QUESTION [0 upvotes]: I have some difficulties with the following exercise in combinations: There are $8$ beans in the box: $6$ white beans, $2$ green beans. Two players one by one pick $2$ beans; first player one picks $2$ beans, after that player two picks $2$ beans. For every green bean that a player picks he gets $5$ points. What's the expected number of points for player one? What's the probability that player two picks only one green bean? Solution: $$E(\text{points of player one}) = 4 \cdot 5 \cdot \frac{2}{8} \cdot \frac{6}{7} + 10 \cdot 2 \cdot \frac{2}{8} \cdot \frac{1}{7}$$ Unfortunately I didn't find any good way to fit the binomial distribution here. I don't get any idea how to answer the second question. REPLY [0 votes]: For question 1 you have to take the sum of the value of each outcome multiplied by the probability of the outcome. This is how I would do it: 2 green beans: $\underbrace{\frac{2\cdot1}{8\cdot7}}_\text{probability}\cdot\underbrace{10}_{\text{value of outcome}}=\frac{10}{28}$ exactly 1 green bean: $\frac{2\cdot2\cdot6}{8\cdot7}\cdot5=\frac{3}{7}\cdot5=\frac{15}{7}$ 0 green beans: value is 0 so the probability won't matter. Add them all up to get $\frac{10}{28}+\frac{60}{28}=\frac{5}{2}=2.5$ (as a sanity check, each of player one's two picks is green with probability $\frac{2}{8}$, so by linearity of expectation the answer is $5\cdot2\cdot\frac{2}{8}=\frac{5}{2}$). For the second question break into cases: case 1: player 1 took exactly 1 green bean. Probability: $\frac{2\cdot2\cdot6}{8\cdot7}=\frac{3}{7}$ (two orders, two choices of green, six choices of white). Assuming this happens, one green is left among the remaining $6$ beans, and the probability player 2 picks it is $\frac{2\cdot1\cdot5}{6\cdot5}=\frac{1}{3}$. So the probability of both happening is $\frac{3}{7}\cdot\frac{1}{3}=\frac{1}{7}$. case 2: player 1 took no green beans: probability $\frac{6\cdot5}{8\cdot7}=\frac{15}{28}$. Assuming this happens, both greens are among the remaining $6$ beans, and the probability player 2 takes exactly 1 green bean is $\frac{2\cdot2\cdot4}{6\cdot5}=\frac{8}{15}$. 
So the probability both happen is $\frac{15}{28}\cdot\frac{8}{15}=\frac{2}{7}\approx 0.286$. (If player 1 took both greens, player 2 cannot take any.) So the total probability is the sum of the probabilities of the two cases, which is $\frac{1}{7}+\frac{2}{7}=\frac{3}{7}\approx 0.4286$.
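Both questions can also be settled directly: by linearity of expectation, player one scores $5\cdot2\cdot\frac{2}{8}=\frac{5}{2}$ on average, and since player two's pair is, by symmetry, a uniformly random $2$-subset of all $8$ beans, the answer to question 2 is $\binom{2}{1}\binom{6}{1}/\binom{8}{2}=\frac{3}{7}$. A short Monte-Carlo check (hypothetical helper name):

```python
import random

def simulate(trials=200_000, seed=1):
    rng = random.Random(seed)
    beans = ["G"] * 2 + ["W"] * 6
    points = 0      # player one's accumulated score
    one_green = 0   # trials where player two got exactly one green
    for _ in range(trials):
        rng.shuffle(beans)
        points += 5 * beans[:2].count("G")       # player one's two beans
        one_green += beans[2:4].count("G") == 1  # player two's two beans
    return points / trials, one_green / trials

mean_points, p_one_green = simulate()
# mean_points is close to 2.5 and p_one_green close to 3/7 ≈ 0.4286
```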
\begin{document} \author{Alexander E Patkowski} \title{On certain Fourier expansions for the Riemann zeta function} \maketitle \begin{abstract} We build on a recent paper on Fourier expansions for the Riemann zeta function. It is shown that a new criterion for the Riemann Hypothesis follows from a theorem of Wiener. We establish Fourier expansions for certain $L$-functions, and offer series representations involving the Whittaker function $W_{\gamma,\mu}(z)$ for the coefficients. A new expansion for the Riemann xi function is presented in the third section by constructing an integral formula using Mellin transforms for its Fourier coefficients. \end{abstract} \keywords{\it Keywords: \rm Riemann zeta function; Riemann Hypothesis; Fourier series} \subjclass{ \it 2010 Mathematics Subject Classification 11L20, 11M06.} \section{Introduction and Main Results} The measure $$\mu(B):=\frac{1}{2\pi}\int_{B}\frac{dy}{\frac{1}{4}+y^2},$$ for each $B$ in the Borel $\sigma$-algebra $\mathfrak{B},$ has been applied in the work of [7] as well as Coffey [3], providing interesting applications in analytic number theory. For the measure space $(\mathbb{R},\mathfrak{B}, \mu),$ \begin{equation} \left\lVert g \right\rVert_2^2:=\int_{\mathbb{R}}|g(t)|^2d\mu,\end{equation} is the $L^2(\mu)$ norm of $g.$ Here (1.1) is finite, and $g$ is measurable [10, pg.326, Definition 11.34]. In a recent paper by Elaissaoui and Guennoun [7], an interesting Fourier expansion was presented which states that, if $f(x)\in L^2(\mu),$ then \begin{equation} f(x)=\sum_{n\in\mathbb{Z}}a_ne^{-2in\tan^{-1}(2x)},\end{equation} where \begin{equation}a_n=\frac{1}{2\pi}\int_{\mathbb{R}}f(y)e^{2in\tan^{-1}(2y)}\frac{dy}{\frac{1}{4}+y^2}.\end{equation} By selecting $x=\frac{1}{2}\tan(\phi),$ we return to the classical Fourier expansion, since $f(\frac{1}{2}\tan(\phi))$ is periodic in $\pi.$ The main method applied in their paper to compute the constants $a_n$ is the Cauchy residue theorem.
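The expansion (1.2)--(1.3) rests on the fact that $\{e^{-2in\tan^{-1}(2x)}\}_{n\in\mathbb{Z}}$ is an orthonormal system in $L^2(\mu)$, which is immediate after the tangent substitution. This can also be checked numerically with a crude truncated trapezoidal rule; in the sketch below (hypothetical helper name) only the cosine part of the integrand is kept, since the sine part vanishes by symmetry:

```python
import math

def inner_product(n, m, L=1000.0, steps=200_000):
    # (1/2π) ∫ e^{-2in·atan(2y)} · conj(e^{-2im·atan(2y)}) dy/(1/4 + y²)
    # truncated to [-L, L]; the imaginary part cancels, so integrate cos
    k = m - n
    h = 2 * L / steps
    total = 0.0
    for j in range(steps + 1):
        y = -L + j * h
        w = 0.5 if j in (0, steps) else 1.0   # trapezoidal end weights
        total += w * math.cos(2 * k * math.atan(2 * y)) / (0.25 + y * y)
    return total * h / (2 * math.pi)

g_11 = inner_product(1, 1)   # close to 1 (up to the truncation tail ~ 1/(2πL))
g_01 = inner_product(0, 1)   # close to 0
```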
However, it is possible (as noted therein) to directly work with the integral \begin{equation}a_n=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\frac{1}{2}\tan(\frac{\phi}{2}))e^{in\phi}d\phi.\end{equation} Many remarkable results were extracted from the Fourier expansion (1.2)--(1.3), including a criterion for the Lindel\"{o}f Hypothesis [7, Theorem 4.6]. \par Let $\rho$ denote the nontrivial zeros of $\zeta(s)$ in the critical strip $0<\Re(s)<1,$ and write $\Re(\rho)=\alpha,$ $\Im(\rho)=\beta.$ The goal of this paper is to offer some more applications of (1.2)--(1.3), and state a criterion for the Riemann Hypothesis by applying a theorem of Wiener [15]. Recall that the Riemann Hypothesis is the statement that $\alpha\notin(\frac{1}{2},1).$ \begin{theorem}\label{thm:thm1} Assuming the Riemann Hypothesis, we have that there exists a Fourier expansion for $1/\zeta(\sigma+ix)$ for $\frac{1}{2}<\sigma<1.$ Namely, $$\frac{1}{\zeta(\sigma+ix)}=\frac{1}{\zeta(\sigma+\frac{1}{2})}+\sum_{n\ge1}\bar{a}_ne^{-2in\tan^{-1}(2x)},$$ where $$\bar{a}_n=\frac{1}{n!}\sum_{n> k\ge0}\binom{n}{k}\frac{(-1)^{n}(n-1)!}{(k-1)!}\lim_{s\rightarrow0}\frac{\partial^{k}}{\partial s^k}\frac{1}{\zeta(\sigma+\frac{1}{2}-s)}.$$ Moreover, if the zeros of $\zeta(s)$ are simple, we have $$\frac{1}{\zeta(\sigma-ix)}=\sum_{n\in\mathbb{Z}}\hat{a}_ne^{-2in\tan^{-1}(2x)},$$ where for $n\ge1,$ $$\hat{a}_n=\frac{1}{n!}\sum_{n\ge k\ge0}\binom{n}{k}\frac{(-1)^{n}(n-1)!}{(k-1)!}\lim_{s\rightarrow0}\frac{\partial^{k}}{\partial s^k}\frac{1}{\zeta(\sigma-\frac{1}{2}+s)}-S(n,\sigma),$$ where $$S(n,\sigma)=\sum_{\beta: \zeta(\rho)=0}\left(\frac{\sigma-i\beta}{1-\sigma+i\beta}\right)^n\frac{1}{\zeta'(\rho)(1-\sigma+i\beta)(\sigma-i\beta)}$$ $$+\sum_{k\ge1}\left(\frac{\frac{1}{2}+\sigma+2k}{\frac{1}{2}-\sigma-2k}\right)^n\frac{1}{\zeta'(-2k)(\frac{1}{2}-\sigma-2k)(\frac{1}{2}+\sigma+2k)},$$ and $\hat{a}_n=-S(n,\sigma)$ for $n<0,$ $\hat{a}_0=1/\zeta(\sigma+\frac{1}{2}).$ \end{theorem} \begin{corollary}\label{corollary:Cor1} Assuming
the Riemann Hypothesis, for $\frac{1}{2}<\sigma<1,$ $$\frac{1}{2\pi}\int_{\mathbb{R}}\frac{d\mu}{|\zeta(\sigma+iy)|^2}=\frac{1}{\zeta^2(\sigma+\frac{1}{2})}+\sum_{k\ge1}|\bar{a}_k|^2,$$ where the $\bar{a}_n$ are as defined in the previous theorem. \end{corollary} Next we consider a Fourier expansion with coefficients expressed as a series involving the Whittaker function $W_{\gamma, \mu}(z),$ which is a solution to the differential equation [8, pg.1024, eq.(9.220)] $$\frac{d^2W}{dz^2}+\left(-\frac{1}{4}+\frac{\gamma}{z}+\frac{1-4\mu^2}{4z^2}\right)W=0.$$ This function also has the representation [8, pg.1024, eq.(9.220)] $$W_{\gamma,\mu}(z)=\frac{\Gamma(-2\mu)}{\Gamma(\frac{1}{2}-\mu-\gamma)}M_{\gamma,\mu}(z)+\frac{\Gamma(2\mu)}{\Gamma(\frac{1}{2}+\mu-\gamma)}M_{\gamma,-\mu}(z).$$ Here the other Whittaker function $M_{\gamma,\mu}(z)$ is given by $$M_{\gamma,\mu}(z)=z^{\mu+\frac{1}{2}}e^{-z/2}{}_1F_1(\mu-\gamma+\frac{1}{2};2\mu+1;z),$$ where $_1F_1(a;b;z)$ is the well-known confluent hypergeometric function. \begin{theorem}\label{thm:thm2} Let $v$ be a complex number which may not be an even integer. Then for $1>\sigma>\frac{1}{2},$ we have the expansion $$\zeta(\sigma+ix)\cos^{v}(\tan^{-1}(2x))=\frac{1}{2}\zeta(\sigma+\frac{1}{2})+\sum_{n\in\mathbb{Z}}\tilde{a}_ne^{-2in\tan^{-1}(2x)},$$ where $\tilde{a}_n=\frac{(2\sigma^2-4\sigma+\frac{5}{2})}{2(\sigma-\frac{1}{2})^2(\frac{3}{2}-\sigma)^2}\left(\frac{\frac{3}{2}-\sigma}{\sigma-\frac{1}{2}}\right)^n$ for $n<0,$ and for $n\ge1,$ $$\tilde{a}_n=\frac{2\Gamma(v+2)}{\Gamma(\frac{v}{2}+n+1)\Gamma(\frac{v}{2}-n+1)}+\frac{\pi }{2^{v/2+1}}\sum_{k>1} k^{-\sigma}\left(\frac{\log(k)}{2}\right)^{v/2}\frac{W_{n,-\frac{v+1}{2}}(\log(k))}{\Gamma(1+\frac{v}{2}+n)}.$$ \end{theorem} \section{Proof of Main Theorems} In our proof of our condition for the Riemann Hypothesis, we will require an application of Wiener's theorem [15, pg.14, Lemma IIe]. 
\begin{lemma}(Wiener [15]) Suppose $f(x)$ has an absolutely convergent Fourier series and $f(x)\neq0$ for all $x\in \mathbb{R}.$ Then its reciprocal $1/f(x)$ also has an absolutely convergent Fourier series. \end{lemma} In our proof of Corollary~\ref{corollary:Cor1}, we will require a well known result [13, pg.331, Theorem 11.45] on functions in $L^2(\mu).$ \begin{lemma} Suppose that $f(x)=\sum_{k\in\mathbb{Z}}a_k\kappa_k,$ where $\{\kappa_n\}$ is a complete orthonormal set and $f(x)\in L^2(\mu).$ Then $$\int_{X}|f(x)|^2d\mu=\sum_{k\in\mathbb{Z}}|a_k|^2.$$ \end{lemma} \begin{proof}[Proof of Theorem~\ref{thm:thm1}] The proof of this theorem consists of applying Lemma 2.1 together with the residue theorem. First we assume the Riemann Hypothesis, so that there are no zeros in the region $\frac{1}{2}<\sigma<1,$ and rewrite the integral as \begin{equation}\bar{a}_n=\frac{1}{2\pi}\int_{\mathbb{R}}e^{2in\tan^{-1}(2y)}\frac{dy}{\zeta(\sigma+iy)(\frac{1}{4}+y^2)}=\frac{1}{2\pi i}\int_{(\frac{1}{2})}\left(\frac{s}{1-s}\right)^n\frac{ds}{\zeta(\sigma-\frac{1}{2}+s)s(1-s)}. \end{equation} We replace $s$ by $1-s$ and apply the residue theorem by moving the line of integration to the left. By the Leibniz rule, we compute the residue at the pole $s=0$ of order $n+1,$ $n\ge0,$ as \begin{equation}\begin{aligned} &\frac{1}{n!}\lim_{s\rightarrow0}\frac{d^n}{ds^n}s^{n+1}\left(\left(\frac{1-s}{s}\right)^n\frac{1}{\zeta(\sigma+\frac{1}{2}-s)s(1-s)}\right) \\ &=\frac{1}{n!}\lim_{s\rightarrow0}\frac{d^n}{ds^n}\frac{(1-s)^{n-1}}{\zeta(\sigma+\frac{1}{2}-s)} \\ &=\frac{1}{n!}\sum_{n\ge k\ge0}\binom{n}{k}\frac{(-1)^n(n-1)!}{(k-1)!}\lim_{s\rightarrow0}\frac{\partial^{k}}{\partial s^k}\frac{1}{\zeta(\sigma+\frac{1}{2}-s)}. \end{aligned}\end{equation} The residue at $s=0$ for $n=0$ is $-1/\zeta(\sigma+\frac{1}{2}).$ There are no additional poles when $n<0.$ Since the sum in (2.2) is zero for $k=n$ it reduces to the one stated in the theorem. \par Next we consider the second statement.
The integrand in \begin{equation}\frac{1}{2\pi i}\int_{(\frac{1}{2})}\left(\frac{1-s}{s}\right)^n\frac{ds}{\zeta(\sigma-\frac{1}{2}+s)s(1-s)} \end{equation} has simple poles at $s=1-\sigma+i\beta,$ where $\Im(\rho)=\beta.$ The integrand in (2.3) also has simple poles at $s=\frac{1}{2}-\sigma-2k,$ and a pole of order $n+1,$ $n>0,$ at $s=0.$ We compute $$\begin{aligned} &\frac{1}{n!}\lim_{s\rightarrow0}\frac{d^n}{ds^n}s^{n+1}\left(\left(\frac{1-s}{s}\right)^n\frac{1}{\zeta(\sigma-\frac{1}{2}+s)s(1-s)}\right) \\ &=\frac{1}{n!}\lim_{s\rightarrow0}\frac{d^n}{ds^n}\frac{(1-s)^{n-1}}{\zeta(\sigma-\frac{1}{2}+s)} \\ &=\frac{1}{n!}\sum_{n\ge k\ge0}\binom{n}{k}\frac{(-1)^n(n-1)!}{(k-1)!}\lim_{s\rightarrow0}\frac{\partial^{k}}{\partial s^k}\frac{1}{\zeta(\sigma-\frac{1}{2}+s)}. \end{aligned}$$ The sum of the residues at the poles $s=1-\sigma+i\beta$ is $$\sum_{\beta: \zeta(\rho)=0}\left(\frac{\sigma-i\beta}{1-\sigma+i\beta}\right)^n\frac{1}{\zeta'(\rho)(1-\sigma+i\beta)(\sigma-i\beta)},$$ and at the poles $s=\frac{1}{2}-\sigma-2k$ is $$\sum_{k\ge1}\left(\frac{\frac{1}{2}+\sigma+2k}{\frac{1}{2}-\sigma-2k}\right)^n\frac{1}{\zeta'(-2k)(\frac{1}{2}-\sigma-2k)(\frac{1}{2}+\sigma+2k)}.$$ For $n=0,$ the residue at $s=0$ is $-1/\zeta(\sigma-\frac{1}{2}).$ \end{proof} \begin{proof}[Proof of Corollary~\ref{corollary:Cor1}] This result readily follows from applying Theorem~\ref{thm:thm1} to Lemma 2.2 with $X=\mathbb{R}.$ The convergence of the series $\sum_{n}|\bar{a}_n|^2$ follows immediately from [10, pg.580, Lemma 12.6].\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:thm2}] It is clear that $$\cos^{v}(2\tan^{-1}(2y))=\left(\frac{1-4y^2}{1+4y^2}\right)^v=O(1).$$ Comparing with [7, Theorem 1.2] we see our function belongs to $L^2(\mu).$ We compute that $$ \begin{aligned} &\tilde{a}_n=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\frac{1}{2}\tan(\frac{\phi}{2}))e^{in\phi}d\phi \\ &=\frac{1}{2\pi}\int_{-\pi}^{\pi}\zeta(\sigma+\frac{i}{2}\tan(\frac{\phi}{2}))\cos^v(\frac{\phi}{2}) e^{in\phi}d\phi \\ 
&=\frac{1}{2\pi}\left(\int_{0}^{\pi}\zeta(\sigma+\frac{i}{2}\tan(\frac{\phi}{2})) \cos^v(\frac{\phi}{2}) e^{in\phi}d\phi +\int_{-\pi}^{0}\zeta(\sigma+\frac{i}{2}\tan(\frac{\phi}{2}))\cos^v(\frac{\phi}{2})e^{in\phi}d\phi\right)\\ &=\frac{1}{2\pi}\left(\int_{0}^{\pi}\zeta(\sigma+\frac{i}{2}\tan(\frac{\phi}{2})) \cos^v(\frac{\phi}{2}) e^{in\phi}d\phi +\int_{0}^{\pi}\zeta(\sigma-\frac{i}{2}\tan(\frac{\phi}{2}))\cos^v(\frac{\phi}{2})e^{-in\phi}d\phi\right)\\ &=\frac{1}{\pi}\int_{0}^{\pi} \cos^v(\frac{\phi}{2}) \sum_{k\ge1}k^{-\sigma}\cos\left(\frac{1}{2}\tan(\frac{\phi}{2})\log(k)-n\phi\right)d\phi\\ &=\frac{1}{\pi}\int_{0}^{\pi} \cos^v(\frac{\phi}{2}) \cos\left(n\phi\right) d\phi +\frac{1}{\pi}\int_{0}^{\pi} \cos^v(\frac{\phi}{2}) \sum_{k>1}k^{-\sigma}\cos\left(\frac{1}{2}\tan(\frac{\phi}{2})\log(k)-n\phi\right) d\phi \\ &=\frac{2}{\pi}\int_{0}^{\pi/2} \cos^v(\phi) \cos\left(n2\phi\right) d\phi +\frac{2}{\pi}\int_{0}^{\pi/2} \cos^v(\phi) \sum_{k>1}k^{-\sigma}\cos\left(\frac{1}{2}\tan(\phi)\log(k)-n2\phi\right) d\phi. \\ \end{aligned}$$ Now by [8, pg.397] for $\Re(v)>0,$ we have \begin{equation}\int_{0}^{\pi/2}\cos^{v-1}(y)\cos(by)dy=\frac{\pi\Gamma(v+1)}{\Gamma(\frac{v+b+1}{2})\Gamma(\frac{v-b+1}{2})}.\end{equation} Let $\mathbb{Z}^{-}$ denote the set of negative integers. 
Then, by [8, pg.423] with $a>0,$ $\Re(v)>-1,$ $\frac{v+\gamma}{2}\notin\mathbb{Z}^{-},$ \begin{equation}\int_{0}^{\pi/2}\cos^v(y)\cos(a\tan(y)-\gamma y)dy=\frac{\pi a^{v/2}}{2^{v/2+1}}\frac{W_{\gamma/2,-\frac{v+1}{2}}(2a)}{\Gamma(1+\frac{v+\gamma}{2})}.\end{equation} Hence, if we put $b=2n$ and replace $v$ by $v+1$ in (2.4), and select $a=\frac{1}{2}\log(k)$ and $\gamma=2n$ in (2.5), we find \begin{equation}\tilde{a}_n=\frac{2\Gamma(v+2)}{\Gamma(\frac{v}{2}+n+1)\Gamma(\frac{v}{2}-n+1)}+\frac{\pi }{2^{v/2+1}}\sum_{k>1} k^{-\sigma}\left(\frac{\log(k)}{2}\right)^{v/2}\frac{W_{n,-\frac{v+1}{2}}(\log(k))}{\Gamma(1+\frac{v}{2}+n)}.\end{equation} Hence $v$ cannot be a negative even integer. The interchange of the series and integral is justified by absolute convergence for $\sigma>\frac{1}{2}.$ To see this, note that [8, pg.1026, eq.(9.227), eq.(9.229)] $$W_{\gamma,\mu}(z)\sim e^{-z/2}z^{\gamma},$$ as $|z|\rightarrow\infty,$ and $$W_{\gamma,\mu}(z)\sim (\frac{4z}{\gamma})^{1/4}e^{-\gamma+\gamma\log(\gamma)}\sin(2\sqrt{\gamma z}-\gamma\pi-\frac{\pi}{4}),$$ as $|\gamma|\rightarrow\infty.$ Using (2.6) as coefficients for $n<0$ is inadmissible, due to the resulting sum over $n$ being divergent. On the other hand, it can be seen that $$\begin{aligned} &\tilde{a}_n=\frac{1}{2\pi}\int_{\mathbb{R}}e^{2in\tan^{-1}(2y)}\frac{\zeta(\sigma+iy)\cos^{v}(\tan^{-1}(2y))dy}{(\frac{1}{4}+y^2)}\\ &=\frac{1}{2\pi i}\int_{(\frac{1}{2})}\frac{\zeta(\sigma-\frac{1}{2}+s)2(2s^2-2s+1)}{(2s(1-s))^2}\left(\frac{s}{1-s}\right)^nds\\ &=\frac{1}{2\pi i}\int_{(\frac{1}{2})}\frac{\zeta(\sigma+\frac{1}{2}-s)2(2s^2-2s+1)}{(2s(1-s))^2}\left(\frac{1-s}{s}\right)^nds . 
\end{aligned}$$ We will only use the residues at the poles $s=0$ (for $n<0$) and $s=\sigma-\frac{1}{2},$ and outline the details to obtain an alternative expression for the $\tilde{a}_n$ for $n\ge0.$ The integrand has a simple pole at $s=\sigma-\frac{1}{2},$ a pole of order $n+2$ at $s=0$ for $n\ge0,$ and for $n<0$ a pole at $s=0$ only when $n=-1,$ in which case it is simple. The residue at the pole $s=0$ for $n\ge0$ is computed as \begin{equation}\begin{aligned} &\frac{1}{(n+1)!}\lim_{s\rightarrow0}\frac{d^{n+1}}{ds^{n+1}}s^{n+2}\left(\frac{\zeta(\sigma+\frac{1}{2}-s)2(2s^2-2s+1)}{(2s(1-s))^2}\left(\frac{1-s}{s}\right)^n\right) \\ &=\frac{1}{2(n+1)!}\lim_{s\rightarrow0}\frac{d^{n+1}}{ds^{n+1}}\left(\zeta(\sigma+\frac{1}{2}-s)(2s^2-2s+1)(1-s)^{n-2}\right). \end{aligned}\end{equation} Because the resulting sum is a bit cumbersome, we omit this form in our stated theorem. The residue at the simple pole at $s=0$ when $n=-1$ is $\frac{1}{2}\zeta(\sigma+\frac{1}{2}).$ Collecting our observations tells us that if $n<0,$ $$\tilde{a}_n=\frac{(2\sigma^2-4\sigma+\frac{5}{2})}{2(\sigma-\frac{1}{2})^2(\frac{3}{2}-\sigma)^2}\left(\frac{\frac{3}{2}-\sigma}{\sigma-\frac{1}{2}}\right)^n.$$ \end{proof} \section{Riemann xi function} The Riemann xi function is given by $\xi(s):=\frac{1}{2}s(s-1)\pi^{-\frac{s}{2}}\Gamma(\frac{s}{2})\zeta(s),$ and $\Xi(y)=\xi(\frac{1}{2}+iy).$ In many recent works [4, 5], Riemann xi function integrals have been shown to have interesting evaluations. (See also [11] for an interesting expansion for the Riemann xi function.) The classical application is in the proof of Hardy's theorem that there are infinitely many non-trivial zeros on the line $\Re(s)=\frac{1}{2}.$ \par We will need to utilize Mellin transforms to prove our theorems. 
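One classical Mellin evaluation used below, $\int_{0}^{1}y^{s-1}\log^n(y)\,dy=(-1)^n n!/s^{n+1}$ for $\Re(s)>0$, is easy to sanity-check numerically; the sketch below (hypothetical helper name) uses a plain midpoint rule, which is adequate since the integrand tends to $0$ at the origin for the parameters tested:

```python
import math

def mellin_log_moment(s, n, steps=200_000):
    # midpoint-rule approximation of ∫_0^1 y^(s-1) * log(y)^n dy, Re(s) > 0
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        y = (k + 0.5) * h
        total += y ** (s - 1) * math.log(y) ** n
    return total * h

# closed form: (-1)^n * n! / s^(n+1)
val = mellin_log_moment(1.5, 2)   # ≈ 2/1.5³ ≈ 0.5926
```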
By Parseval's formula [12, pg.83, eq.(3.1.11)], we have \begin{equation}\int_{0}^{\infty}f(y)g(y)dy=\frac{1}{2\pi i}\int_{(r)}\mathfrak{M}(f(y))(s)\mathfrak{M}(g(y))(1-s)ds,\end{equation} provided that $r$ is chosen so that the integrand is analytic, and where $$\int_{0}^{\infty}y^{s-1}f(y)dy=:\mathfrak{M}(f(y))(s).$$ From [12, pg.405] \begin{equation}\int_{0}^{1}y^{s-1}\log^n(y)dy=\frac{(-1)^nn!}{s^{n+1}},\end{equation} if $\Re(s)>0.$ Now it is known [6, pg.207--208] that for any $\Re(s)=u\in \mathbb{R},$ \begin{equation}\Theta(y)=\frac{1}{2\pi i}\int_{(u)}\xi(s)y^{-s}ds,\end{equation} where \begin{equation}\Theta(y):=2y^2\sum_{n\ge1}(2\pi^2 n^4y^2-3\pi n^2)e^{-\pi n^2 y^2},\end{equation} for $y>0.$ Define the operator $\mathfrak{D}_{n, y}(f(y)):=\underbrace{y\frac{\partial }{\partial y}\dots y\frac{\partial }{\partial y}}_{n}(f(y)).$ \begin{theorem} For real numbers $x\in\mathbb{R},$ $$\Xi(x)=\frac{1}{(\frac{1}{4}+x^2)}\sum_{n\in\mathbb{Z}}\ddot{a}_ne^{-2in\tan^{-1}(2x)},$$ where $\ddot{a}_0=0,$ and for $n\ge1,$ $$\ddot{a}_n=-\frac{1}{n!}\int_{0}^{1}\log^n(y)\mathfrak{D}_{n, y}(\Theta(y))dy+\frac{(-1)^n}{(n-1)!}\sum_{n-1\ge k \ge0}\binom{n-1}{k}\frac{n!}{(k+1)!}\xi^{(k)}(1),$$ and $$\ddot{a}_{-n}=-\frac{1}{n!}\int_{0}^{1}\log^n(y)\mathfrak{D}_{n, y}(\Theta(y))dy-\frac{(-1)^n}{(n-1)!}\sum_{n-1\ge k \ge0}\binom{n-1}{k}\frac{n!}{(k+1)!}\xi^{(k)}(0).$$ \end{theorem} \begin{proof} Applying the operator $\mathfrak{D}_{n, y}$ to (3.3)--(3.4), then applying the resulting Mellin transform with (3.2) to (3.1), we have for $c>1,$ $n\ge1,$ \begin{equation}-\frac{1}{n!}\int_{0}^{1}\log^n(y)\mathfrak{D}_{n, y}(\Theta(y))dy=\frac{1}{2\pi i}\int_{(c)}\left(\frac{s}{1-s}\right)^n\xi(s)ds.\end{equation} On the other hand, \begin{equation}\begin{aligned} &\ddot{a}_n=\frac{1}{2\pi}\int_{\mathbb{R}}e^{2in\tan^{-1}(2y)}\frac{(\frac{1}{4}+y^2)\Xi(y)}{(\frac{1}{4}+y^2)}dy=\frac{1}{2\pi i}\int_{(\frac{1}{2})}\left(\frac{s}{1-s}\right)^n\xi(s)ds \\ &= \frac{1}{2\pi 
i}\int_{(\frac{1}{2})}\left(\frac{s}{1-s}\right)^n\pi^{-s/2}\frac{s}{2}(s-1)\zeta(s)\Gamma(\frac{s}{2})ds .\end{aligned}\end{equation} The integrand in (3.5) has a pole of order $n,$ $n\ge1,$ at $s=1.$ Now we can move the line of integration to the left for (3.5) to arrive at the line $\Re(s)=\frac{1}{2}$ by computing the residue at the pole $s=1.$ We compute this residue as \begin{equation}\begin{aligned} &\frac{1}{(n-1)!}\lim_{s\rightarrow1}\frac{d^{n-1}}{ds^{n-1}}(s-1)^{n}\left(\left(\frac{1}{1-s}\right)^{n}s^{n+1}2^{-1}(s-1)\pi^{-s/2}\Gamma(\frac{s}{2})\zeta(s)\right) \\ &=-\frac{1}{(n-1)!}\lim_{s\rightarrow1}\frac{d^{n-1}}{ds^{n-1}}(s-1)^{n}\left(\left(\frac{1}{1-s}\right)^{n-1}s^{n+1}2^{-1}\pi^{-s/2}\Gamma(\frac{s}{2})\zeta(s)\right) \\ &=\frac{(-1)^n}{(n-1)!}\lim_{s\rightarrow1}\frac{d^{n-1}}{ds^{n-1}}\left(s^{n+1}(s-1)2^{-1}\pi^{-s/2}\Gamma(\frac{s}{2})\zeta(s)\right)\\ &=\frac{(-1)^n}{(n-1)!}\sum_{n-1\ge k \ge0}\binom{n-1}{k}\frac{n!}{(k+1)!}\xi^{(k)}(1). \end{aligned}\end{equation} Hence for $n\ge1,$ \begin{equation}\begin{aligned}&\ddot{a}_n=\frac{1}{2\pi}\int_{\mathbb{R}}e^{2in\tan^{-1}(2y)}\frac{(\frac{1}{4}+y^2)\Xi(y)}{(\frac{1}{4}+y^2)}dy\\ &=\frac{1}{2\pi i}\int_{(\frac{1}{2})}\left(\frac{s}{1-s}\right)^n\xi(s)ds\\ &=-\frac{1}{n!}\int_{0}^{1}\log^n(y)\mathfrak{D}_{n, y}(\Theta(y))dy-\frac{(-1)^n}{(n-1)!}\sum_{n-1\ge k \ge0}\binom{n-1}{k}\frac{n!}{(k+1)!}\xi^{(k)}(1).\end{aligned} \end{equation} If we replace $n$ by $-n$ in the integrand of (3.8), we see that there is a pole of order $n,$ $n>0,$ at $s=0,$ with residue similar to the one given in (3.7). 
Hence, for $n>0,$ $-2<r'<0,$ \begin{equation}\begin{aligned}&\ddot{a}_{-n}=\frac{1}{2\pi i}\int_{(\frac{1}{2})}\left(\frac{1-s}{s}\right)^n\xi(s)ds\\ &=\frac{(-1)^n}{(n-1)!}\sum_{n-1\ge k \ge0}\binom{n-1}{k}\frac{n!}{(k+1)!}\xi^{(k)}(0)+\frac{1}{2\pi i}\int_{(r')}\left(\frac{1-s}{s}\right)^n\xi(s)ds\\ &=\frac{(-1)^n}{(n-1)!}\sum_{n-1\ge k \ge0}\binom{n-1}{k}\frac{n!}{(k+1)!}\xi^{(k)}(0)-\frac{1}{n!}\int_{0}^{1}\log^n(y)\mathfrak{D}_{n, y}(\Theta(y))dy .\end{aligned} \end{equation} In (3.9) we replaced $s$ by $1-s$ and applied the functional equation $\xi(s)=\xi(1-s)$ and (3.5) in the third line. \end{proof} Now according to Coffey [1, pg.527], $\xi^{(n)}(0)=(-1)^n\xi^{(n)}(1),$ which may be used to recast Theorem 3.1 in a slightly different form. The integral formulae obtained in [2, pg.1152, eq.(28)] (and another form in [9, pg.11106, eq.(12)]) bear some resemblance to the integral contained in (3.5). It would be interesting to obtain a relationship to the coefficients $\ddot{a}_n.$ Next we give a series evaluation for a Riemann xi function integral. \begin{corollary} If the coefficients $\ddot{a}_n$ are as defined in Theorem 3.1, then $$\int_{\mathbb{R}}(\frac{1}{4}+y^2)^2\Xi^2(y)d\mu=\sum_{n\in\mathbb{Z}}|\ddot{a}_n|^2.$$ \end{corollary} \begin{proof} This is an application of Theorem 3.1 to Lemma 2.2 with $X=\mathbb{R}.$ \end{proof} \section{On the partial Fourier series} Here we make note of some interesting consequences of our computations related to the partial sums of our Fourier series. 
First, we recall [10, pg.69] that \begin{equation}\sum_{n=-N}^{N}a_ne^{inx}=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x-y)D_{N}(y)dy,\end{equation} where $$D_{N}(x)=\frac{\sin((N+\frac{1}{2})x)}{\sin(\frac{x}{2})}.$$ Now making the change of variable $y=2\tan^{-1}(2t),$ we find (4.1) is equal to $$\frac{1}{2\pi}\int_{\mathbb{R}}f(x-2\tan^{-1}(2t))\frac{D_{N}(2\tan^{-1}(2t))}{\frac{1}{4}+t^2}dt.$$ Recall [10, pg.71] that $K_{N}(x)$ is the Fej\'er kernel if $$K_{N}(x)=\frac{1}{N+1}\sum_{n=0}^{N}D_{n}(x).$$ \begin{theorem} Let $K_{N}(x)$ denote the Fej\'er kernel. Then, assuming the Riemann Hypothesis, $$\lim_{N\rightarrow\infty}\frac{1}{2\pi}\int_{\mathbb{R}}\frac{K_{N}(x_0-2\tan^{-1}(2y))}{\zeta(\sigma+iy)(\frac{1}{4}+y^2)}dy=\frac{1}{\zeta(\sigma+\frac{i}{2}\tan(\frac{x_0}{2}))},$$ for $x_0\in(-\pi,\pi),$ $\frac{1}{2}<\sigma<1.$ \end{theorem} \begin{proof} Notice that $1/\zeta(\sigma+\frac{i}{2}\tan(\frac{y}{2}))$ is continuous for $y\in(-\pi,\pi)$ if there are no singularities for $\frac{1}{2}<\sigma<1.$ It is also $2\pi$-periodic. Applying Fej\'er's theorem [10, pg.73, Theorem 1.59] with $f(y)=1/\zeta(\sigma+\frac{i}{2}\tan(\frac{y}{2}))$ implies the result. \end{proof}
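For a trigonometric polynomial, the identity (4.1) can be verified to machine precision, since an equally spaced midpoint rule integrates trigonometric polynomials over a full period exactly. A small sketch (hypothetical helper names):

```python
import math

def dirichlet_kernel(N, x):
    s = math.sin(x / 2)
    if abs(s) < 1e-12:
        return 2 * N + 1                  # limiting value at x = 0 (mod 2π)
    return math.sin((N + 0.5) * x) / s

def partial_sum(f, N, x, M=4096):
    # right-hand side of (4.1): (1/2π) ∫_{-π}^{π} f(x - y) D_N(y) dy
    h = 2 * math.pi / M
    total = 0.0
    for j in range(M):
        y = -math.pi + (j + 0.5) * h      # midpoint nodes avoid y = 0 exactly
        total += f(x - y) * dirichlet_kernel(N, y)
    return total * h / (2 * math.pi)

f = lambda t: 1.0 + math.cos(t) + 0.5 * math.cos(3 * t)
x0 = 0.7
# S_1 f keeps only the harmonics |n| <= 1; S_3 f reproduces f exactly
```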
\begin{document} \begin{frontmatter} \title{A Fast and Memory Efficient Sparse Solver with Applications to Finite-Element Matrices} \author[Amir]{AmirHossein Aminfar\corref{cor1}\fnref{label1}} \ead{aminfar@stanford.edu} \author[Eric]{Eric Darve\fnref{label1}} \fntext[label1]{Mechanical Engineering Department, Stanford University} \cortext[cor1]{Corresponding author. +1 650-644-7624} \address[Amir]{496 Lomita Mall, Room 104, Stanford, CA, 94305} \address[Eric]{496 Lomita Mall, Room 209, Stanford, CA, 94305} \begin{abstract} In this article, we introduce a fast and memory efficient solver for sparse matrices arising from the finite element discretization of elliptic partial differential equations (PDEs). We use a fast direct (but approximate) multifrontal solver as a preconditioner, and use an iterative solver to achieve a desired accuracy. This approach combines the advantages of direct and iterative schemes to arrive at a fast, robust and accurate solver. We will show that this solver is faster ($\sim$ 2x) and more memory efficient ($\sim$ 2--3x) than a conventional direct multifrontal solver. Furthermore, we will demonstrate that the solver is both a faster and more effective preconditioner than other preconditioners such as the incomplete LU preconditioner. Specific speed-ups depend on the matrix size and improve as the size of the matrix increases. The solver can be applied to both structured and unstructured meshes in a similar manner. We build on our previous work and utilize the fact that dense frontal and update matrices, in the multifrontal algorithm, can be represented as hierarchically off-diagonal low-rank (HODLR) matrices. Using this idea, we replace all large dense matrix operations in the multifrontal elimination process with $O(N)$ HODLR operations to arrive at a faster and more memory efficient solver. 
\end{abstract} \begin{keyword} Fast direct solvers \sep Iterative solvers \sep Generalized minimal residual method (GMRES) \sep Numerical linear algebra \sep Hierarchically off-diagonal low-rank (HODLR) matrices \sep Multifrontal elimination \end{keyword} \end{frontmatter} \section{Introduction} \label {sec:intro} In many engineering applications, we are interested in solving a set of linear equations: \[ Ax = b \] where $A$ is a symmetric positive definite stiffness matrix arising from a finite element discretization of an elliptic PDE, and $b$ is a forcing vector, associated with the inhomogeneity in the PDE. Iterative methods are widely popular in solving such equations. However, the main difficulty with these methods is that they require a preconditioner and convergence is not guaranteed. Direct methods on the other hand are very robust but are generally slower and more memory demanding. In this article, we present an accelerated multifrontal solver that we use as a preconditioner to a generalized minimal residual (GMRES~\cite{saad1986gmres}) iterative scheme to achieve a desired accuracy. This approach combines the robustness of direct solvers with the speed of iterative solvers to arrive at a fast overall solver for sparse finite element matrices. Accelerating the multifrontal direct solve algorithm has been the subject of many recent research articles~\cite{xia2009superfast,MFGeneralMesh,GeneralizedMF,randomizedMF,BLRMF}. For a detailed summary and overview of such algorithms see~\cite{BlackBox_HODLR}. The general idea behind most of these methods is approximating dense frontal matrices arising in the multifrontal elimination process with an off-diagonal low-rank matrix structure. The off-diagonal low-rank property leads to more efficient factorization and storage compared to dense BLAS3 operations if the rank is sufficiently small. 
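The overall strategy — factor an inexpensive approximation $M\approx A$ once, then drive the residual down with an outer iteration — can be illustrated in miniature. The sketch below is *not* the paper's HODLR multifrontal solver: it stands in a toy 1D Laplacian for $A$, a deliberately perturbed matrix for the approximate factorization, and plain iterative refinement in place of GMRES (all names hypothetical):

```python
def gauss_solve(M, rhs):
    # dense Gaussian elimination without pivoting (adequate for these SPD
    # tridiagonal test matrices); stands in for a direct factorization
    n = len(rhs)
    A = [row[:] for row in M]
    x = rhs[:]
    for i in range(n):
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= f * A[i][k]
            x[j] -= f * x[i]
    for i in reversed(range(n)):
        x[i] = (x[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

n = 16
lap = lambda d: [[d if i == j else -1.0 if abs(i - j) == 1 else 0.0
                  for j in range(n)] for i in range(n)]
A = lap(2.0)   # "exact" operator (1D Laplacian stencil)
P = lap(2.2)   # perturbed stand-in for an approximate factorization
b = [1.0] * n
x = [0.0] * n
for _ in range(300):                       # outer iteration (refinement)
    r = [bi - vi for bi, vi in zip(b, matvec(A, x))]
    if max(abs(v) for v in r) < 1e-10:
        break
    dx = gauss_solve(P, r)                 # approximate direct solve
    x = [xi + di for xi, di in zip(x, dx)]

residual = max(abs(v) for v in [bi - vi for bi, vi in zip(b, matvec(A, x))])
```

The better the approximate factorization, the fewer outer iterations are needed — the same trade-off the paper exploits with HODLR-compressed frontal matrices.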
The methods described in~\cite{xia2009superfast,MFGeneralMesh,GeneralizedMF,randomizedMF} approximate the frontal matrix with a hierarchically semiseparable (HSS) matrix, while \cite{BLRMF} approximates the frontal matrix with a block low-rank (BLR) matrix. In this article, we accelerate the multifrontal algorithm by approximating dense frontal matrices as hierarchically off-diagonal low-rank (HODLR) matrices. Compared to HSS structures which have been widely used in approximating dense frontal matrices, HODLR matrices are much simpler as they lack the nested off-diagonal basis. For 3D PDEs, we find that the rank used to approximate the off-diagonal blocks increases with the size of the block with $r \approx O(\sqrt{n})$, where $r$ is the rank and $n$ the size of the block. This results in a geometric increase of the rank with the HODLR level. As a result of this increase, as we demonstrated in~\cite{BlackBox_HODLR}, the factorization cost is the same for both HODLR and HSS structures, namely $\mathcal{O}(r^2n)$, where $r$ is the rank at the top of the tree. This is despite the fact that HSS uses a more data-sparse format. The reason why the difference in the basis does not affect the asymptotic cost is because the cost is dominated by the computation at the root of the HODLR tree, for the largest block. In addition, HODLR is advantageous compared to HSS during the low-rank approximation phase, since it does not need to produce a nested basis, which simplifies many steps in the algorithm. Hence, in most practical applications, HSS may not have a clear advantage over HODLR. Furthermore, we will demonstrate that the combination of HODLR and the boundary distance low-rank approximation method (BDLR)~\cite{BlackBox_HODLR} leads to a very fast and simple extend-add algorithm, which results in an overall fast multifrontal solver. 
At the time of writing this article, only Xia~\cite{randomizedMF} has demonstrated a fast and memory-efficient multifrontal solver for general sparse matrices, with an asymptotic cost of $O(N^{4/3} \log N)$, where $N$ is the size of the {\bf sparse} matrix. In contrast, the method in this paper leads to an overall cost of $O(N^{4/3})$. This cost may be compared with an LU factorization with nested dissection, with cost $O(N^2)$ in 3D. In this article, we introduce a fast multifrontal solver that is much simpler than~\cite{randomizedMF}, and demonstrate its performance for large and complicated test cases. The method is shown to be advantageous compared to traditional preconditioners like ILU. \section{Review of Important Concepts} We now review two concepts that are central to the fast sparse solver algorithm: hierarchically off-diagonal low-rank (HODLR) matrices and the boundary distance low-rank approximation method (BDLR). \subsection{Hierarchically Off-Diagonal Low-Rank (HODLR) Matrices} Hierarchical matrices are data-sparse representations of a certain class of dense matrices. This representation relies on the fact that these matrices can be subdivided into a hierarchy of smaller block matrices, and certain sub-blocks can be efficiently represented as low-rank matrices. We refer the reader to~\cite{hackbusch1999sparse, hackbusch2000sparse, grasedyck2003construction, hackbusch2002data, borm2003hierarchical, ULV, chandrasekaran2006fast1,BlackBox_HODLR} for more details. Ambikasaran et al.~\cite{ambikasaran2013thesis} provide a detailed description of these different hierarchical structures. In this article, we use the simplest hierarchical structure, namely the hierarchically off-diagonal low-rank (HODLR) matrix, to approximate the dense frontal matrices that arise during the sparse elimination process.
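To make this structure concrete, the following toy sketch (our own illustration, not part of the solver described in this paper) builds a two-level HODLR approximation of a dense matrix by compressing each off-diagonal block with a truncated SVD, and reconstructs a dense matrix from the compressed form; all function names are ours:

```python
import numpy as np

def truncated_svd(A, r):
    """Rank-r factors U, V such that A is approximately U @ V.T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r] * s[:r], Vt[:r, :].T

def hodlr_build(K, r, levels):
    """Recursively compress off-diagonal blocks; keep diagonal blocks dense at the last level."""
    if levels == 0:
        return K
    m = K.shape[0] // 2
    U12, V12 = truncated_svd(K[:m, m:], r)
    U21, V21 = truncated_svd(K[m:, :m], r)
    return {"K1": hodlr_build(K[:m, :m], r, levels - 1),
            "K2": hodlr_build(K[m:, m:], r, levels - 1),
            "U12": U12, "V12": V12, "U21": U21, "V21": V21}

def hodlr_todense(node):
    """Reassemble a dense matrix from the HODLR tree (for checking the approximation)."""
    if isinstance(node, np.ndarray):
        return node
    K1, K2 = hodlr_todense(node["K1"]), hodlr_todense(node["K2"])
    return np.block([[K1, node["U12"] @ node["V12"].T],
                     [node["U21"] @ node["V21"].T, K2]])
```

For a kernel-type matrix, whose off-diagonal blocks are numerically low-rank, a small $r$ already yields a good approximation while storing only $\mathcal{O}(rn)$ entries per level.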
As shown in~\cite{BlackBox_HODLR}, the HODLR structure reduces the dense factorization and storage costs from $\mathcal{O}(n^3)$ and $\mathcal{O}(n^2)$ to $\mathcal{O}(r^2n)$ and $\mathcal{O}(rn)$ respectively, where $n$ is the size of the dense matrix and $r$ is the off-diagonal rank. An HODLR matrix has low-rank off-diagonal blocks at multiple levels. As described in~\cite{SivaFDS}, a 2-level HODLR matrix, $K\in \mathbb{R}^{n\times n}$, can be written as shown in Eq.~\eqref{eq:HODLR2}: \begin{align} K & = \begin{bmatrix} K_1^{(1)}&U_1^{(1)} (V_{1,2}^{(1)})^T \\ U_2^{(1)} (V_{2,1}^{(1)})^T&K_2^{(1)} \end{bmatrix} \notag \\ & = \begin{bmatrix} \begin{bmatrix} K_1^{(2)}&U_1^{(2)} (V_{1,2}^{(2)})^T \\ U_2^{(2)} (V_{2,1}^{(2)})^T&K_2^{(2)} \end{bmatrix}& U_1^{(1)} (V_{1,2}^{(1)})^T \\ U_2^{(1)} (V_{2,1}^{(1)})^T& \begin{bmatrix} K_3^{(2)}&U_3^{(2)} (V_{3,4}^{(2)})^T \\ U_4^{(2)} (V_{4,3}^{(2)})^T&K_4^{(2)} \end{bmatrix} \end{bmatrix} \label{eq:HODLR2} \end{align} where for a $p$-level HODLR matrix, $K_i^{(p)} \in \mathbb{R}^{n/2^p\times n/2^p}$, $U_{2i-1}^{(p)}$, $U_{2i}^{(p)}$, $V_{2i-1,2i}^{(p)}$, $V_{2i,2i-1}^{(p)} \in \mathbb{R}^{n/2^p\times r}$ and $r\ll n$. Further nested compression of the off-diagonal blocks will lead to an HSS structure~\cite{SivaFDS}. \subsection{Boundary Distance Low-Rank Approximation Method (BDLR)} \label{sec:BDLR} In order to take advantage of the off-diagonal low-rank property, we need a fast and robust low-rank approximation method. More precisely, we need a low-rank approximation method that has the following properties: \begin{itemize} \item We want our method to be applicable to general sparse matrices. Hence, we need a low-rank approximation scheme that is purely algebraic (black-box). That is, we cannot use analytical low-rank approximation methods like Chebyshev, multipole expansion, analytical interpolation, etc.
\item In order to obtain a speedup compared to conventional multifrontal solvers, we need a fast low-rank approximation scheme that has a computational cost of $\mathcal{O}(rn)$, where $n$ and $r$ are the size and rank of a dense low-rank matrix respectively. Hence, we cannot use traditional low-rank approximation methods like SVD, rank-revealing LU or rank-revealing QR, as they have a computational cost of $\mathcal{O}(n^3)$, $\mathcal{O}(n^2)$ and $\mathcal{O}(n^2)$ respectively. \item We need a robust and efficient low-rank approximation method that is applicable to a wide variety of problems. \end{itemize} One possible option is to use randomized algorithms~\cite{RndSummary,Rnd1,Rnd2,Rnd3} similar to Xia~\cite{randomizedMF}. However, such algorithms require the implementation of a fast matrix-vector product. For our purpose, as we demonstrated in~\cite{BlackBox_HODLR}, the boundary distance low-rank approximation method (BDLR) is a fast and robust scheme that results in very fast solvers for both structured and unstructured meshes. BDLR is a pseudoskeleton-like~\cite{pseudoSkeleton} low-rank approximation scheme that picks rows and columns based on the corresponding interaction graph of a dense matrix, which, in the case of frontal matrices, is the graph corresponding to the sparse separator. That is, for an off-diagonal block in the frontal matrix, it chooses a subset of rows and columns based on the corresponding separator graph. The criterion for choosing these rows and columns is the location of their respective nodes in the sparse separator graph. Figure~\ref{fig:BDLR_Full} shows an example of an interaction graph corresponding to the interaction of a set of row and column indices in an off-diagonal block of a sample frontal matrix. Figure~\ref{fig:BDLR_LR} shows that the BDLR method chooses row and column indices corresponding to nodes that are closer to the boundary (blue line).
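Once rows $I$ and columns $J$ have been selected, the compression itself is a pseudoskeleton construction, made precise in Eq.~\eqref{eq:Bhat} below. The following is a simplified sketch of that construction (our own illustration): the graph-based selection is replaced by a given index choice, and the full-pivoting LU is hand-rolled for clarity.

```python
import numpy as np

def full_pivot_lu(A):
    """LU with complete pivoting: P @ A @ Q = L @ U, with P, Q permutation matrices."""
    n, m = A.shape
    U = A.astype(float).copy()
    L, P, Q = np.eye(n), np.eye(n), np.eye(m)
    for k in range(min(n, m) - 1):
        # bring the largest remaining entry to the pivot position
        i, j = np.unravel_index(np.argmax(np.abs(U[k:, k:])), (n - k, m - k))
        i, j = i + k, j + k
        U[[k, i], :] = U[[i, k], :]
        P[[k, i], :] = P[[i, k], :]
        L[[k, i], :k] = L[[i, k], :k]
        U[:, [k, j]] = U[:, [j, k]]
        Q[:, [k, j]] = Q[:, [j, k]]
        if U[k, k] != 0.0:
            L[k + 1:, k] = U[k + 1:, k] / U[k, k]
            U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])
    return P, L, U, Q

def pseudoskeleton(B, I, J, r):
    """B ~ C_tilde @ R_tilde built from selected rows I and columns J."""
    R, C = B[I, :], B[:, J]
    P, L, U, Q = full_pivot_lu(B[np.ix_(I, J)])
    C_tilde = np.linalg.solve(U[:r, :r].T, (C @ Q)[:, :r].T).T  # (CQ)(:,1:r) U_r^{-1}
    R_tilde = np.linalg.solve(L[:r, :r], (P @ R)[:r, :])        # L_r^{-1} (PR)(1:r,:)
    return C_tilde, R_tilde
```

When the underlying matrix has exact rank $r$ and the selected rows and columns capture it, the product $\widetilde{C}\widetilde{R}$ reproduces $B$ up to rounding.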
\begin{figure}[htbp] \centering \subfigure[Full Matrix Representation]{ \includegraphics[scale=1]{boundary_1.pdf} \label{fig:BDLR_Full} } \subfigure[Low-Rank Matrix Representation]{ \includegraphics[scale=1]{boundary_2.pdf} \label{fig:BDLR_LR} } \caption{Classification of vertices based on distance from the other set.} \end{figure} Let $R$ and $C$ be the matrices containing all the selected rows and columns. In other words: \[ R = B(I,:), \qquad C = B(:,J) \] where $B$ is the off-diagonal low-rank matrix and $I$ and $J$ are the sets of row and column indices chosen by the BDLR algorithm, respectively. Defining $\widehat{B} = B(I,J)$, we perform a full-pivoting LU factorization: \begin{equation} \label{eq:Bhat} {\widehat{B}} = P^{-1}LUQ^{-1} \end{equation} where $P$ and $Q$ are permutation matrices. Let $r$ be the chosen rank for $\widehat{B}$. Define $\widetilde{R}$ and $\widetilde{C}$ as: \begin{align*} \widetilde{C}& = (CQ)(:,1:r) \; (U(1:r,1:r))^{-1}\\ \widetilde{R}& = (L(1:r,1:r))^{-1} \; (PR)(1:r,:) \end{align*} We then have: \begin{equation*} B\approx \widetilde{C} \; \widetilde{R} \end{equation*} Applying $(U(1:r,1:r))^{-1}$ and $(L(1:r,1:r))^{-1}$ amounts to triangular solves; the inverse matrices are never explicitly computed. The approximation rank $r$ is chosen based on the desired final accuracy such that $|u_{r+1,r+1} / u_{11}| < \epsilon$, where $u_{r+1,r+1}$ and $u_{11}$ are the $(r+1)$st and first pivots respectively, and $\epsilon$ is the desired accuracy. The final rank $r$ may be significantly smaller than the number of originally selected rows and columns. This higher compression results in higher efficiency both in terms of memory and runtime. \section{An Iterative Solver with Direct Solver Preconditioning} \label{sec:directIterative} In this article, we investigate using an accelerated multifrontal sparse direct solver as a preconditioner to the generalized minimal residual (GMRES)~\cite{saad1986gmres} method.
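As a minimal illustration of this idea (a sketch only: SciPy's incomplete LU stands in for our low-accuracy multifrontal factorization, and a 1D Laplacian stands in for a finite-element stiffness matrix):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1D Laplacian as a stand-in for a finite-element stiffness matrix
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# low-accuracy factorization used as a preconditioner M ~ A^{-1}
ilu = spla.spilu(A, drop_tol=1e-2)
M = spla.LinearOperator(A.shape, ilu.solve)

# GMRES with the approximate direct solve as preconditioner
x, info = spla.gmres(A, b, M=M)  # info == 0 on convergence
```

The better the approximate factorization, the fewer GMRES iterations are needed; the trade-off between factorization accuracy and iteration count is exactly the one explored in the numerical results below.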
In this case, we use a relatively low accuracy for the direct solver. We will show that this approach is much faster and more memory efficient than a conventional multifrontal sparse solver. We should also mention that this preconditioning method can be applied to other iterative solvers as well, such as the conjugate gradient (CG) method~\cite{hestenes1952methods}. \section{A Fast Multifrontal Solver} \label{sec:directSolver} \subsection{Overview of a Conventional Multifrontal Algorithm} \label{sec:conventional} We do not intend to give a detailed explanation of the multifrontal solve process in this article; we refer the reader to the available literature (see for example~\cite{MFReview}) for an in-depth explanation of the algorithm. The multifrontal method computes the Cholesky or LU factorization of a sparse matrix~\cite{MFReview}, with special optimizations that take advantage of the sparsity. The unknowns are eliminated following the ordering imposed by the elimination tree: each node of the elimination tree corresponds to a set of unknowns, and these unknowns cannot be eliminated until all the unknowns corresponding to the children of this node are eliminated. Moreover (and this is specific to a multifrontal elimination), during the elimination, information is propagated only from a child node to its parent (in the so-called elimination tree~\cite{duff1986direct}). This is what distinguishes, for example, a multifrontal elimination from a supernodal elimination. We note that in this paper we describe our method in the context of a multifrontal elimination; however, the same method can be applied to a supernodal elimination, and no fundamental change is required to our algorithm. \subsubsection{Factorization} Consider now a node $p$ in the elimination tree.
Let $I_p$ be the set of indices of unknowns associated with node $p$: \begin{equation*} I_p = \{i^{(p)}_{1}, \ldots, i^{(p)}_{n_p} \} \end{equation*} where $n_p$ is the number of unknowns corresponding to node $p$, and $i^{(p)}_j$ is the global index of the $j$th unknown associated with node $p$. We denote a specific child node of $p$ as $c_k$ ($c_k \in \mathcal{C}_p$, $k \in \{1, \ldots,n^c_p \}$, where $\mathcal{C}_p$ is the set of all children and $n^c_p$ is the number of children of node $p$). Define the set $S_p$ as the set of unknowns $j > i^{(p)}_{n_p}$ that are connected to any of the unknowns in $I_p$ in the graph of $A$. More precisely: \begin{equation} \label{eq:couplingSet} S_p = \{ j \, | \, \exists i \in I_p, j > i^{(p)}_{n_p}, a_{ij} \ne 0 \} \end{equation} where $a_{ij}$ is the entry at the $i$th row and $j$th column of the original sparse matrix $A$. Describing the details of a multifrontal elimination requires the matrix $U_{c_k}$, which we call the update matrix corresponding to the $k$th child of node $p$; it is defined by recurrence below. The set of indices corresponding to unknowns associated with $U_{c_k}$ (the update matrix of a child node) is denoted $I^U_{c_k}$. If $p$ is a leaf node in the elimination tree, the matrix $U_{c_k}$ is not defined and hence $I^U_{c_k}=\emptyset$. We define the set of frontal indices $I_p^f$ as follows: \begin{equation*} I_p^f = S_p \cup \{\cup_{k = 1}^{n^c_p} I^U_{c_k} \} \setminus I_p \end{equation*} We now define the matrix $\bar{F}_p$ as the sub-matrix of $A$ associated with $I_p \cup I_p^f$. \begin{equation} \label{eq:fbar} \includegraphics{frontalMatrix.pdf} \end{equation} The symbol $\times$ schematically denotes a nonzero entry in the matrix. In $\bar{F}_p$, we set the entries for the block $I_p^f \times I_p^f$ to 0.
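The index bookkeeping above can be sketched as follows (a toy illustration: a dense NumPy array stands in for the sparse $A$, and all function names are ours):

```python
import numpy as np

def coupling_set(A, I_p):
    """S_p of Eq. (couplingSet): unknowns j beyond the last index of I_p coupled to I_p."""
    last = max(I_p)
    return {j for i in I_p for j in range(A.shape[1])
            if j > last and A[i, j] != 0}

def frontal_indices(A, I_p, child_update_index_sets):
    """I_p^f = S_p, united with the children's update index sets, minus I_p."""
    S_p = coupling_set(A, I_p)
    return sorted(S_p.union(*child_update_index_sets) - set(I_p))
```

For example, with unknowns 0 and 1 at node $p$, the frontal indices collect every later unknown coupled to $\{0,1\}$ in the graph of $A$ together with the children's update indices.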
The frontal matrix for node $p$, $F_p$, is defined as follows: \begin{equation*} F_p = \bar{F}_p \oplus U_{c_1} \oplus \cdots \oplus U_{c_{n^c_p}} \end{equation*} The symbol $\oplus$ denotes the extend-add operation and $U_{c_k}$ denotes the update matrix corresponding to the $k$th child of node $p$. The extend-add is basically an addition; the ``extend'' part corresponds to the fact that there is a size mismatch between $U_{c_k}$ and $F_p$, so an index mapping from $U_{c_k}$ to $F_p$ must be used. In the special case where node $p$ is a leaf node in the elimination tree, $F_p = \bar{F}_p$. Note that, after the extend-add operations, the frontal matrix $F_p$ is (nearly) a fully dense matrix. We then divide $F_p$ into four sub-blocks: \begin{equation*} \includegraphics{frontalMatrix_SubBlock.pdf} \end{equation*} Factorizing $F_{pp}$, we are left with the Schur complement. This is by definition the update matrix associated with node $p$, which will be used in the extend-add operation of its parent: \begin{equation} \label{eq:update} U_p \overset{\text{def}}{=} F_{ff} - F_{fp}F_{pp}^{-1}F_{pf} \end{equation} Repeating the operations described in Eqns.~\eqref{eq:couplingSet} to~\eqref{eq:update} for all nodes in the elimination tree, starting from the leaf nodes and going up to the root node, constitutes the factorization phase of the conventional multifrontal algorithm. \subsubsection{Solve} \label{sec:convSolve} The solve phase consists of an upward pass (L solve) and a downward pass (U solve) in the elimination tree. In the upward (downward) pass, we traverse the elimination tree upward (downward), from leaves to root (root to leaves), and traverse the right hand side vector $b$ downwards (upwards). Hence, the upward and downward passes correspond to the L and U solve phases in a conventional LU solver respectively. In the upward pass (L solve) phase, we first construct the upward pass solution matrix $b_u$, which is initially equal to the right hand side $b$.
Then, moving upward in the elimination tree, we construct the upward solution $b_{u_p}$ for each node $p$, which consists of the entries of the upward pass solution $b_u$ corresponding to the unknowns in $I_p$ and $I_p^f$. \begin{equation} \label{eq:L_Soln} \includegraphics{L_Soln.pdf} \end{equation} Now, update the upward pass solution using: \begin{equation} \label{eq:upward} b_{u_{fp}} = b_{u_{fp}} - F_{fp}F_{pp}^{-1}b_{u_{pp}} \end{equation} After completing the upward pass (L solve), we must perform a downward pass (U solve) to arrive at our final solution. The final solution $x$ is initially an empty vector. We traverse the elimination tree from root to leaves (downward). For each node $p$, we construct the final solution vector (Eq.~\eqref{eq:U_Soln}). The corresponding solution for each node can be calculated as follows: \begin{gather} \label{eq:U_Soln} \includegraphics{U_Soln.pdf} \\ \label{eq:downward} x_{pp} = F_{pp}^{-1}(b_{u_{pp}} - F_{pf}x_{fp}) \end{gather} Note that since we are traversing the elimination tree downward (traversing $b_{u_p}$ upward), $x_{fp}$ has already been calculated by the time we reach $p$. \subsection{HODLR Accelerated Multifrontal Algorithm} Looking at the procedure described in Section~\ref{sec:conventional}, one can observe that dense BLAS3 operations like the one described by Eq.~\eqref{eq:update}, which involves both a factorization and an outer product update, can become time and memory consuming as the front size increases. In order to accelerate the multifrontal elimination process, we replace large dense matrices with HODLR structures. \subsubsection{Accelerated Factorization} In the factorization phase, we want to represent the frontal and update matrices for each node $p$ as HODLR matrices. In order to construct an HODLR structure, we need a suitable low-rank approximation method.
Our previous results~\cite{BlackBox_HODLR} show that the boundary distance low-rank approximation scheme (BDLR) is a suitable algorithm for our purposes. Furthermore, as we will show, the a priori knowledge of rows and columns provided by BDLR leads to a very fast extend-add operation. To construct the HODLR representation of the frontal and update matrices of $p$, we first assemble $\bar{F}_p$ as described by Eq.~\eqref{eq:fbar}. As described in~\cite{BlackBox_HODLR}, the BDLR algorithm requires an interaction graph that describes the interaction between the rows and columns of the matrix, which, in this case, is the graph constructed from the submatrix of the original matrix $A$ corresponding to the interaction of rows and columns with indices in the set $I_p \cup I_p^f$. Using the interaction graph for $I_p \cup I_p^f$, we create the HODLR representation $F_p^{HODLR}$ using the BDLR algorithm: \begin{equation} \label{eq:frontalMatrix_HODLR} \includegraphics[scale=1.1]{frontalMatrix_HODLR.pdf} \end{equation} Using the extend-add notation $\oplus$, $F_p^{HODLR}$ is given by: \begin{equation} \label{eq:HODLR_EA} F_p^{HODLR} = \bar{F}_p \oplus U_{c_1}^{HODLR} \oplus \cdots \oplus U_{c_{n_c^p}}^{HODLR} \end{equation} For simplicity, we have assumed that all the update matrices associated with node $p$ are HODLR matrices. In some cases, the update matrices might be small dense matrices, in which case the extend-add operations described for the child HODLR updates become almost trivial for the dense child updates. Looking at Eq.~\eqref{eq:update}, we notice that every HODLR update matrix is composed of two components: an HODLR matrix and an outer product update. \begin{equation} \label{eq:update_HODLR} U_{c_k}^{HODLR} = F_{{ff}_k}^{HODLR} - W_{{fp}_k}V_{{fp}_k}^{T}(F_{{pp}_k}^{HODLR})^{-1}W_{{pf}_k}V_{{pf}_k}^T=F_{{ff}_k}^{HODLR} - W_kV_k^T \end{equation} where the subscript $k$ denotes the update from the $k$th child of $p$.
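The collapse in Eq.~\eqref{eq:update_HODLR} of the triple product into a single outer product $W_kV_k^T$ can be sketched as follows (illustrative only; a dense solve stands in for the HODLR solve, and all names are ours):

```python
import numpy as np

def lowrank_schur_update(W_fp, V_fp, solve_pp, W_pf, V_pf):
    """Collapse (W_fp V_fp^T) F_pp^{-1} (W_pf V_pf^T) into a single outer product W V^T."""
    middle = V_fp.T @ solve_pp(W_pf)  # small r x r matrix
    return W_fp @ middle, V_pf        # W, V with W @ V.T equal to the triple product
```

The key point is that the $r \times r$ coupling matrix is formed first, so the outer product is never expanded to a dense $\hat{n}_p \times \hat{n}_p$ matrix during factorization.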
Since we only utilize $U_{c_k}^{HODLR}$ in the extend-add operation of its parent, we save the two contributions for the extend-add operation. That is, Eq.~\eqref{eq:HODLR_EA} now becomes: \begin{equation} \label{eq:HODLR_EA_Expand} F_p^{HODLR} = \bar{F}_p \oplus (F_{{ff}_1}^{HODLR} - W_1V_1^T) \oplus \cdots \oplus (F_{{ff}_{n_c^p}}^{HODLR} - W_{n_c^p}V_{n_c^p}^T) \end{equation} Equation~\eqref{eq:HODLR_EA_Expand} requires that we perform an extend-add operation into a target HODLR structure. Before going into the details of this operation, we should first emphasize its importance both in terms of computational cost and memory saving compared to a conventional extend-add operation. Consider the outer product $W_1V_1^T$ in Eq.~\eqref{eq:HODLR_EA_Expand}. Let $\hat{n}_p$ be the size of the matrix $F_p^{HODLR}$. In order to perform the extend-add operation, we must extend $W_1$ and $V_1$ to arrive at matrices $W_1^e$ and $V_1^e$, of size $\hat{n}_p\times r_p$. In the conventional algorithm, we would have to perform the outer product $W_1^eV_1^{e^T}$, which has a computational cost of $\mathcal{O}(\hat{n}_p^2r_p)$ and a storage cost of $\mathcal{O}(\hat{n}_p^2)$. For a 3D mesh with $N$ degrees of freedom, $\hat{n}_p$, corresponding to the root node in the elimination tree, grows as $\mathcal{O}(N^{2/3})$, and $r_p$ for this node roughly scales as $\mathcal{O}(N^{1/3})$. This results in a computational cost of $\mathcal{O}(N^2)$ and a storage cost of $\mathcal{O}(N^{4/3})$. Hence, in practice, the extend-add operation dominates the computational cost of the conventional multifrontal algorithm. As shown in Eq.~\eqref{eq:HODLR_EA_Expand}, the extend-add process involves two different operations. The first operation is updating the frontal matrix ($\bar{F}_p$) with an HODLR structure ($F_{{ff}_k}^{HODLR}$) to arrive at a target HODLR structure ($F_p^{HODLR}$).
What makes this operation difficult is the fact that $F_{{ff}_k}^{HODLR}$ and $F_p^{HODLR}$ typically have different structures. See Figure~\ref{fig:HODLR_EA} for an illustration. That is, the diagonal block sizes and the number of HODLR levels might differ between the two matrices. A key feature of the BDLR algorithm is that for each target HODLR structure ($F_p^{HODLR}$), we know a priori the rows and columns needed to construct the off-diagonal low-rank approximation. Hence, in order to perform the extend-add operation, we traverse the target HODLR structure and, for each off-diagonal block, we extract the rows and columns determined by the BDLR algorithm from the child update HODLR matrices ($F_{{ff}_k}^{HODLR}$). The second extend-add operation is adding a low-rank matrix ($W_kV_k^T$) to the frontal matrix ($\bar{F}_p$) to arrive at a target HODLR structure ($F_p^{HODLR}$). This process is very similar to the one described for adding $F_{{ff}_k}^{HODLR}$ to $\bar{F}_p$. The only difference is that instead of reconstructing rows and columns from an HODLR structure ($F_{{ff}_k}^{HODLR}$), we reconstruct the required rows and columns from the low-rank outer product ($W_kV_k^T$). After reconstructing and extracting the rows and columns selected by BDLR and adding them to the corresponding rows and columns in the target structure, we perform the full-pivoting LU factorization and, as described in Section~\ref{sec:BDLR}, arrive at a final rank $r$ which is much smaller than the number of originally selected rows and columns. Given that the factorization of a hierarchical matrix of size $n$ scales as $\mathcal{O}(r^2n)$, a reduction in the rank has a significant effect on the resulting speedup. Assuming the off-diagonal rank of the target HODLR structure corresponding to the node $p$ in the elimination tree is $r_p$, we need to extract $r_p$ rows and columns from the HODLR matrices and the outer products.
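For the outer-product contribution, the requested rows and columns can be reconstructed without ever forming the dense product (a sketch in our own notation; extracting $k$ rows of an $n \times n$ rank-$r$ product costs $\mathcal{O}(krn)$ rather than the $\mathcal{O}(n^2r)$ of densifying):

```python
import numpy as np

def rows_of_outer_product(W, V, rows):
    """Selected rows of W @ V.T, computed without forming the dense product."""
    return W[rows, :] @ V.T

def cols_of_outer_product(W, V, cols):
    """Selected columns of W @ V.T, computed without forming the dense product."""
    return W @ V[cols, :].T
```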
Hence, we need to perform $\mathcal{O}(r_p^2\hat{n}_p)$ operations in order to construct $r_p$ rows from the outer product updates. This translates to a computational cost of $\mathcal{O}(N^{4/3})$ for the root node in the elimination tree ($r_p$ scales as $\mathcal{O}(N^{1/3})$) and is much more efficient than the $\mathcal{O}(N^2)$ scaling of the conventional extend-add algorithm. Moreover, no additional memory is needed to perform the accelerated extend-add operation. Now that we have constructed the HODLR representation of the frontal matrix $F_p^{HODLR}$, we factorize $F_{{pp}}^{HODLR}$ using an HODLR solver (see~\cite{BlackBox_HODLR} for example). Next, we store the update matrix $U_p^{HODLR}$ as an HODLR matrix and an outer product update: \begin{equation} \label{eq:update_HODLR_Parent} U_p^{HODLR}\overset{\text{def}}{=} F_{{ff}}^{HODLR} - W_{{fp}}V_{{fp}}^{T}(F_{{pp}}^{HODLR})^{-1}W_{{pf}}V_{{pf}}^T=F_{{ff}}^{HODLR} - W_pV_p^T \end{equation} \begin{figure}[htbp] \centering \subfigure[Determine the required off-diagonal rows and columns using BDLR]{ \includegraphics[scale=0.6]{HODLR_EA2.pdf} } \subfigure[Extract the identified rows and columns from $\bar{F_p}$]{ \includegraphics[scale=0.6]{HODLR_EA3.pdf} } \subfigure[Extract the identified rows and columns from $F_{{ff}_k}^{HODLR}$]{ \includegraphics[scale=0.6]{HODLR_EA4.pdf} } \subfigure[Repeat the same procedure for all off-diagonal blocks of the target matrix]{ \includegraphics[scale=0.6]{HODLR_EA5.pdf} } \caption{Fast HODLR $\leftarrow$ HODLR $+$ HODLR operation using the BDLR low-rank approximation algorithm. Red: Dense matrix block, Cyan: Low-rank matrix block, White: Block of zeros. \label{fig:HODLR_EA}} \end{figure} \subsubsection{Accelerated Solve} The solve phase of the accelerated method is very similar to the solve phase of the conventional method described in Section~\ref{sec:convSolve}.
The only difference is that $F_{pp}^{-1}$ is now replaced by $(F_{pp}^{HODLR})^{-1}$, which simply represents an HODLR solve instead of a conventional solve. Furthermore, the matrices $F_{pf}$ and $F_{fp}$ are now represented as low-rank products, which results in more efficient matrix-vector multiplications in Eqns.~\eqref{eq:upward} and~\eqref{eq:downward}. \section{Application to Finite-Element Matrices} In order to demonstrate the effectiveness of our method, we benchmark our solver on two classes of problems. We first apply our solver to a finite-element stiffness matrix that arises from a complicated 3D geometry. Next, we benchmark the performance of our solver for sparse matrices arising from the FETI method~\cite{FETI_DP1,FETI_DP2}. \subsection{Elasticity Problem for a Cylinder Head Geometry} \label{sec:cylinderResults} We apply the iterative solver with the accelerated multifrontal preconditioner to a stiffness matrix corresponding to the finite-element discretization of the elasticity equation in a cylinder head geometry: \begin{equation} (\lambda + \mu)\nabla(\nabla \cdot \boldsymbol{u})+\mu \nabla^2\boldsymbol{u}+\boldsymbol{F} = 0 \label{eq:NavierCauchy} \end{equation} where $\boldsymbol{u}$ is the displacement vector and $\lambda$ and $\mu$ are the Lam\'e parameters. The cylinder head mesh consists of a mixture of 8-node hexahedral, 6-node pentahedral and 4-node tetrahedral solid elements, as well as 3-node shell elements. Figure~\ref{fig:cylinderMesh} shows a sample mesh for the cylinder head geometry.
\begin{figure}[htbp] \centering \includegraphics[width=200pt,height=165pt]{cylinderhead.png} \caption{A sample cylinder head mesh.} \label{fig:cylinderMesh} \end{figure} \subsection{FETI-DP Solver for a 3D Elasticity Problem} \label{sec:FETIResults} FETI methods~\cite{FETI_DP1,FETI_DP2} are a family of domain decomposition algorithms with Lagrange multipliers that have been developed for the fast sequential and parallel iterative solution of large-scale systems of equations arising from the finite-element discretization of partial differential equations~\cite{FETI_DP1}. In this article, we investigate the solution of sparse matrices arising from a FETI-DP solver applied to the elasticity equation, Eq.~\eqref{eq:NavierCauchy}. We consider two classes of problems within the FETI-DP framework. The first class of matrices, called local matrices, corresponds to solving the problem on a subdomain of the original mesh. The other class of matrices, called coarse problem matrices, corresponds to the corner DOFs of all the subdomains. We benchmark our code for FETI-DP local matrices on various mesh structures. We consider a structured and an unstructured mesh in a cube geometry. The structured cube mesh uses an 8-node hexahedral element while the unstructured cube mesh uses a 4-node tetrahedral element to discretize the elasticity equation, Eq.~\eqref{eq:NavierCauchy}. Figures~\ref{fig:structuredMesh} and~\ref{fig:unstructuredMesh} show a sample mesh for the structured and the unstructured cubes respectively. \begin{figure}[htbp] \centering \subfigure[structured cube]{ \includegraphics[width=190pt]{cube_Structured.png} \label{fig:structuredMesh} } \subfigure[unstructured cube]{ \includegraphics[width=190pt]{cube_UnStructured.png} \label{fig:unstructuredMesh} } \caption{Sample structured and unstructured cube meshes.
These meshes correspond to solving the local FETI-DP problem corresponding to the finite-element discretization of the elasticity equation, Eq.~\eqref{eq:NavierCauchy}.} \end{figure} We also apply our solver to FETI-DP coarse problem matrices arising from solving the elasticity equation in a unit cube geometry. Factorization of the coarse matrix for problems where the coarse matrix is large is expensive and might become the bottleneck of the FETI-DP solver. As a result, we are interested in accelerating the solve process and decreasing the memory footprint of factorizing such matrices. Figure~\ref{fig:subdomains} shows a typical subdomain configuration in the unit cube. \begin{figure}[htbp] \centering \includegraphics[width=200pt,height=200pt]{subdomains.pdf} \caption{A sample subdomain configuration in a cube geometry. Each colored block represents a subdomain.} \label{fig:subdomains} \end{figure} \section{Numerical Results} In this section, we show numerical benchmarks for the matrices described in Sections~\ref{sec:cylinderResults} and~\ref{sec:FETIResults} respectively. As described in Section~\ref{sec:directIterative}, we use the accelerated multifrontal solver at low accuracies as a preconditioner to the GMRES iterative method. We compare this approach to conventional preconditioners, namely the diagonal and incomplete LU (ILU) preconditioners, as well as to the conventional multifrontal algorithm. We implemented our code in C\verb|++| and used the Eigen C\verb|++| library for linear algebra operations. The incomplete LU algorithm is the incomplete LU with dual-thresholding implementation from the SPARSEKIT package~\cite{ILUCode}. \subsection{Elasticity Problem for a Cylinder Head Geometry} Figure~\ref{fig:cHeadIter} shows the convergence of the accelerated multifrontal preconditioner against the conventional diagonal and ILU preconditioners for the cylinder head geometry.
As can be seen, since the problem is relatively difficult, the diagonal preconditioner fails to converge, and one needs to tune the parameters of the ILU preconditioner in order to achieve convergence. Furthermore, the accelerated multifrontal preconditioner converges much faster than both the diagonal and ILU preconditioners. Figures~\ref{fig:cHeadTime} and~\ref{fig:cHeadMem} show the run time and memory consumption comparison between the conventional multifrontal, the accelerated multifrontal and the incomplete LU iterative schemes respectively. The accelerated multifrontal algorithm has a lower runtime than both the conventional multifrontal and the ILU algorithm. \subsection{FETI-DP Solver for a 3D Elasticity Problem} \subsubsection{FETI-DP Local Problems} Figure~\ref{fig:structCubeIter} compares the convergence of the accelerated multifrontal method with traditional preconditioners for the structured cube mesh local problem. As this problem is relatively simple, both the diagonal and ILU preconditioners converge without too many iterations. However, the accelerated multifrontal method still has the highest convergence rate amongst the benchmarked algorithms. Figures~\ref{fig:structCubeTime} and~\ref{fig:structCubeMem} show the runtime and memory consumption comparison for the structured cube local problem. The fast multifrontal algorithm has a significantly lower runtime and memory consumption than the conventional multifrontal algorithm. However, because of the relative simplicity of this problem, the ILU algorithm is competitive in terms of factorization time. Figure~\ref{fig:unstructCubeIter} shows the convergence rate of the accelerated multifrontal, diagonal and ILU preconditioners for the unstructured cube mesh local problem. As can be seen, this is the most difficult of the benchmarked problems.
Not only does the diagonal preconditioner fail to converge, but the input parameters of ILU need to be increased significantly in order to achieve convergence. Figure~\ref{fig:unstructCubeTime} shows that the ILU preconditioner is significantly slower than both the multifrontal and accelerated multifrontal solvers. The accelerated multifrontal solver is the fastest of all the benchmarked algorithms, and Figure~\ref{fig:unstructCubeMem} shows that it also reduces the memory requirements. \subsubsection{FETI-DP Coarse Problems} Figure~\ref{fig:elasticitySIter} shows the convergence rate of the accelerated multifrontal method and the conventional preconditioners for a coarse FETI-DP problem in a cube geometry. Figure~\ref{fig:elasticitySTime} shows that the accelerated multifrontal method is faster than both the conventional multifrontal and the ILU algorithms. Furthermore, as can be seen in Figure~\ref{fig:elasticitySMem}, the memory consumption of the accelerated multifrontal method is significantly lower than that of the conventional algorithm. Figure~\ref{fig:elasticityStIter} compares the convergence rates of the accelerated multifrontal method with the ILU and diagonal preconditioning schemes for a coarse FETI-DP problem that only includes translational degrees of freedom at the corners of the subdomains. As can be seen in Figure~\ref{fig:elasticityStTime}, the ILU algorithm is significantly slower than both the conventional and the accelerated multifrontal schemes. Figures~\ref{fig:elasticityStTime} and~\ref{fig:elasticityStMem} show that not only is the accelerated multifrontal method faster, but it also consumes much less memory than the conventional multifrontal algorithm. \subsection{Summary} \label{sec:summary} Table~\ref{table:summaryNum} shows a detailed summary of all the benchmark cases.
As can be seen, in all cases the GMRES solver with the accelerated multifrontal preconditioner converges after a few iterations, which shows the effectiveness of the developed algorithm. Furthermore, in almost all cases, we observe a speedup and memory saving of up to 3x compared to the conventional multifrontal solver. We were not able to benchmark larger cases due to memory limitations. However, one can observe that both the speedup and the memory saving become more significant as the matrix size grows; that is, very high speedups and memory savings can be achieved for very large matrices. Figures~\ref{fig:numIterStruct} and~\ref{fig:numIterUnstruct} compare the number of iterations for the ILU and the accelerated multifrontal preconditioners for the benchmarked structured and unstructured meshes respectively. As both figures show, not only does the accelerated multifrontal preconditioner require fewer iterations than ILU, but its number of iterations also does not grow significantly with matrix size. This is another important advantage of the accelerated multifrontal preconditioner that makes it more favorable for parallel implementations than ILU, since fewer iterations result in fewer matrix-vector products, which require less communication between the nodes and ultimately lead to a higher speedup. Figure~\ref{fig:numIterAcc} shows that one can significantly decrease the number of iterations required by the accelerated multifrontal preconditioner by simply decreasing the accuracy parameter (and, in turn, increasing the depth parameter). However, this results in an increase in the off-diagonal rank and a decrease in the speedup and memory savings. This shows that one can fine-tune the code parameters based on the available resources and the desired convergence rate.
\begin{table}[htbp] \centering \scalebox{0.7}{ \begin{tabular}{|c||c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{Matrix} & \multirow{3}{*}{Mesh} & \multirow{3}{*}{Matrix}& \multicolumn{2}{c}{Conventional}\vline & \multicolumn{6}{c}{Accelerated} \vline&\multicolumn{3}{c}{GMRES}\vline& \multirow{3}{*}{Speed} & \multirow{3}{*}{Mem} \\ \cline{12-14} & & & \multicolumn{2}{c}{Multifrontal}\vline & \multicolumn{6}{c}{Multifrontal} \vline&{D}&\multicolumn{2}{c}{ILU}\vline& \multirow{3}{*}{-up}& \\ \cline{4-14} Type& Type&{Size} & Fact & Mem& Fact & Mem& Num&\multicolumn{3}{c}{Parameters}\vline&Num& Num&\multirow{2}{*}{k}&&Saving\\ \cline{9-11} & & & (s) & (GB)& (s) & (GB) &Iter&$n_c$&$\epsilon$&$d$ &Iter&Iter&&&\\ \hline \multirow{2}{*}{Stiffness}&\multirow{2}{*}{C Head} &330K &1.08e2&4.92&4.78e1&3.39 &142&3K&1e-1&1 &x&1009&1&\bf2.26&1.645\\ &&2.30M*&6.33e3&66.34&3.58e3&37.42 &86&10K&1e-2&5&x&2709&2&\bf1.77&\bf1.77\\ \cline{1-16} \multirow{9}{*}{FETI }&\multirow{5}{*}{Str} &100K &4.32e1&2.16 &3.88e1&1.72&33&3K &1e-1&1&813&129&1&1.11&1.25\\ \multirow{9}{*}{Local} &\multirow{5}{*}{Cube}&200K &2.14e2&6.13 &1.94e2&3.38&80&3K &1e-1&1&2194&245&1&1.10&\bf1.81\\ &&320K &5.20e2&10.99&4.48e2&5.87&81&3K&1e-1&1&1059&216&1&1.16&\bf1.87\\ &&390K &8.69e2&14.30&7.74e2&6.99&104&3K&1e-1&1&2759&272&1&1.12&\bf2.05\\ &&530K &1.67e3&22.52&1.25e3&10.36&197&3K&1e-1&1&x&582&1&1.34&\bf2.17\\ &&1.57M&1.35e4&97.62&6.99e3&31.29&191&3K&1e-1&1&x&605&1&\bf1.93&\bf3.11\\\cline{2-16} &\multirow{3}{*}{Uns} &200K &1.74e2 &4.26 &9.93e1&2.84&189&4K&1e-1&1 &x&760&1&\bf1.75&1.47\\ &\multirow{3}{*}{Cube}&440K &6.55e2 &12.85&4.73e2&7.30&341&5K&1e-1&1&x&1084&3&1.38&\bf1.76\\ &&580K* &1.24e3 &18.24&7.95e2&9.84&884&6K&1e-1&1 &x&1209&3&\bf1.56&\bf1.85\\ &&1.73M*&1.35e4 &89.82&6.69e3&35.81&1010&10K&1e-1&1&x&3553&6&\bf2.02&\bf2.05\\ \cline{1-16} \multirow{5}{*}{FETI}&\multirow{3}{*}{Elasticity} &80K* &1.90e1 &1.51 &2.14e1&1.42&22&3K&1e-1&1 &1454&113&1&0.88&1.06\\ \multirow{5}{*}{Coarse}&&660K* &1.10e3&21.50 
&1.01e3&15.04&58&18K&1e-1&1 &2116&275&1&1.08&1.43\\ &&2.26M*&2.72e4 &166.35&1.61e4&82.89&125&16K&1e-1&1&x&563&1&\bf1.69&\bf2.00\\ \cline{2-16} &\multirow{2}{*}{Elasticity} &45K* &5.37e0 &0.63 &5.89e0&0.62&19&2K&1e-1&1 &1452&93&1&0.91&1.02\\ &\multirow{2}{*}{T}&375K* &3.04e2&9.21 &2.72e2&6.78&61&5K&1e-1&1 &x&219&1&1.12&1.36\\ &&1.30M*&3.92e3 &48.87&2.47e3&23.55&100&10K&1e-1&1&x&505&1&\bf1.59&\bf2.08\\ \hline \end{tabular}} \caption{Summary of solver accuracy, speed and memory consumption for various benchmark cases. All timings are measured in seconds and memory usage is measured in gigabytes (GB). {\bf FETI local:} FETI-DP local matrices. {\bf FETI coarse:} FETI-DP coarse problem matrices. `T' denotes a coarse FETI-DP matrix in which only the translational degrees of freedom are considered for the corner nodes. {\bf Stiffness:} regular finite-element stiffness matrices. `Str' refers to structured meshes and `Uns' to unstructured meshes. All results are obtained using the GMRES iterative method with a termination tolerance of $10^{-6}$. Columns `AM', `D' and `ILU' refer to the GMRES method with the accelerated multifrontal, diagonal and incomplete LU preconditioners, respectively. The letter `x' indicates that the respective iterative method did not converge within 4,000 iterations. The parameters column lists the code parameters used in obtaining the results. $n_{c}$: size threshold for converting from dense linear algebra to HODLR matrix operations. $\epsilon$: error tolerance used in the low-rank approximations. $d$: depth parameter in the BDLR low-rank approximation method. `k': parameter controlling the amount of fill-in in the ILU scheme; the fill-in in each row is set to $\frac{k \cdot NNZ}{N}+1$, where $N$ and $NNZ$ are the matrix size and the number of nonzeros in the matrix, respectively.
Speed-up denotes the speed-up in the numerical factorization phase, whereas memory saving denotes the overall memory saving compared to the conventional multifrontal method. The symbol `*' in front of a matrix name indicates that the matrix has been scaled such that the norm of the largest entry in each row is $1$.} \label{table:summaryNum} \end{table} \section{Conclusion} We have developed a fast and robust black-box linear solver for finite-element matrices arising from the discretization of elliptic PDEs. As we have shown, this solver is advantageous both in terms of running time and memory consumption compared to both the conventional multifrontal direct solver and the ILU preconditioner. Furthermore, our solver is not only faster in terms of factorization time but also requires fewer iterations than ILU. The examples presented here were run on a single-core machine, and are limited in size by the amount of memory available on a single computer node. A parallel implementation would have allowed us to run much larger test cases. Since the speed-up improves with $N$, this would have allowed us to demonstrate even greater speed-ups, in particular compared to ILU. This will, however, be done in a future publication. This solver can be used at low accuracy as a preconditioner, or as a standalone direct solver at high accuracy. The current scheme relies on the assumption that all off-diagonal blocks are low-rank. In practice, this implies that the rank required to reach the desired accuracy may become somewhat large, leading to a loss in efficiency. This can be remedied using more complex algorithms such as~\cite{ambikasaran2014ifmm}. Such algorithms are currently under development for the case of sparse matrices. Despite this limitation, the class of methods presented does yield significant improvements over the current state of the art. One advantage of the method presented here is its relative simplicity.
For example, by removing the requirement to form a nested low-rank basis across levels, we can simplify the implementation and algorithm significantly. This is in contrast with the HSS class of methods, for example~\cite{sheng2007algorithms}. Despite this simplification, the HODLR scheme has a computational cost in $O(N^{4/3})$, whereas HSS-based schemes scale like $O(N^{4/3} \log N)$~\cite{randomizedMF}. We finally point out that the algorithms presented here are very general and robust. They can be applied to a wide range of problems in a black-box manner. This was demonstrated in part in this manuscript. \section*{Acknowledgments} The authors would like to acknowledge Prof.\ Charbel Farhat, Dr.\ Philip Avery and Dr.\ Jari Toivanen for providing us with the FETI test matrices. We also want to thank Profs.\ Pierre Ramet and Mathieu Faverge for their useful input and suggestions in this project. Part of this research was done at Stanford University, and was supported in part by the U.S.\ Army Research Laboratory, through the Army High Performance Computing Research Center, Cooperative Agreement W911NF-07-0027. This material is also based upon work supported by the Department of Energy under Award Number DE-NA0002373-1. \FloatBarrier \begin{figure}[htbp] \centering \subfigure[Convergence Analysis]{ \includegraphics{sparseSolver-figure0.pdf} \label{fig:cHeadIter} } \subfigure[Run Time]{ \includegraphics{sparseSolver-figure1.pdf} \label{fig:cHeadTime} } \subfigure[Memory Consumption]{ \includegraphics{sparseSolver-figure2.pdf} \label{fig:cHeadMem} } \caption{Convergence, runtime and memory consumption analysis for the unstructured cylinder head mesh. AM stands for accelerated multifrontal preconditioner and D stands for the diagonal preconditioner.
For detailed code parameters see Table~\ref{table:summaryNum}.} \label{fig:cylinderResults} \end{figure} \begin{figure}[htbp] \centering \subfigure[Convergence Analysis]{ \includegraphics{sparseSolver-figure3.pdf} \label{fig:structCubeIter} } \subfigure[Run Time]{ \includegraphics{sparseSolver-figure4.pdf} \label{fig:structCubeTime} } \subfigure[Memory Consumption]{ \includegraphics{sparseSolver-figure5.pdf} \label{fig:structCubeMem} } \caption{Convergence, runtime and memory consumption analysis for FETI-DP local matrices arising from the structured cube mesh. AM stands for accelerated multifrontal preconditioner and D stands for the diagonal preconditioner. For detailed code parameters see Table~\ref{table:summaryNum}.} \label{fig:strcuturedResults} \end{figure} \begin{figure}[htbp] \centering \subfigure[Convergence Analysis]{ \includegraphics{sparseSolver-figure6.pdf} \label{fig:unstructCubeIter} } \subfigure[Run Time]{ \includegraphics{sparseSolver-figure7.pdf} \label{fig:unstructCubeTime} } \subfigure[Memory Consumption]{ \includegraphics{sparseSolver-figure8.pdf} \label{fig:unstructCubeMem} } \caption{Convergence, runtime and memory consumption analysis for FETI-DP local matrices arising from the unstructured cube mesh. AM stands for accelerated multifrontal preconditioner and D stands for the diagonal preconditioner. For detailed code parameters see Table~\ref{table:summaryNum}.} \label{fig:unstructuredResults} \end{figure} \begin{figure}[htbp] \centering \subfigure[Convergence Analysis]{ \includegraphics{sparseSolver-figure9.pdf} \label{fig:elasticitySIter} } \subfigure[Run Time]{ \includegraphics{sparseSolver-figure10.pdf} \label{fig:elasticitySTime} } \subfigure[Memory Consumption]{ \includegraphics{sparseSolver-figure11.pdf} \label{fig:elasticitySMem} } \caption{Convergence, runtime and memory consumption analysis for FETI-DP coarse matrices arising from the discretization of the elasticity equation in a structured cube mesh.
AM stands for accelerated multifrontal preconditioner and D stands for the diagonal preconditioner. For detailed code parameters see Table~\ref{table:summaryNum}. The benchmark matrices correspond to dividing the unit cube into $16^3$, $32^3$, and $48^3$ subdomains. The size of each subdomain is $8\times8\times8$ elements. The coarse matrix is based on the displacements of the corners of the subdomains and the average augmentation for displacements and rotations on the faces.} \label{fig:elasticitySResults} \end{figure} \begin{figure}[htbp] \centering \subfigure[Convergence Analysis]{ \includegraphics{sparseSolver-figure12.pdf} \label{fig:elasticityStIter} } \subfigure[Run Time]{ \includegraphics{sparseSolver-figure13.pdf} \label{fig:elasticityStTime} } \subfigure[Memory Consumption]{ \includegraphics{sparseSolver-figure14.pdf} \label{fig:elasticityStMem} } \caption{Convergence, runtime and memory consumption analysis for FETI-DP coarse matrices arising from the discretization of the elasticity equation in a structured cube mesh. AM stands for accelerated multifrontal preconditioner and D stands for the diagonal preconditioner. For detailed code parameters see Table~\ref{table:summaryNum}. The benchmark matrices correspond to dividing the unit cube into $16^3$, $32^3$, and $48^3$ subdomains. The size of each subdomain is $8\times8\times8$ elements.
The coarse matrix is based on the corners of the subdomains and the average augmentation for displacements without rotations on the faces.} \label{fig:elasticityStResults} \end{figure} \begin{figure}[htbp] \centering \subfigure[Number of Iterations for Structured Meshes]{ \includegraphics{sparseSolver-figure15.pdf} \label{fig:numIterStruct} } \subfigure[Number of Iterations for Unstructured Meshes]{ \includegraphics{sparseSolver-figure16.pdf} \label{fig:numIterUnstruct} } \subfigure[Convergence Analysis]{ \includegraphics{sparseSolver-figure17.pdf} \label{fig:numIterAcc} } \caption{Number of iterations vs.\ matrix size and solver accuracy for a variety of problems. a) Number of iterations vs.\ matrix size for problems with a structured cube mesh. b) Number of iterations vs.\ matrix size for problems with an unstructured mesh. c) Normalized number of iterations vs.\ fast solver accuracy. The number of iterations has been normalized by the number of iterations at an accuracy of $10^{-1}$.} \end{figure} \FloatBarrier \bibliographystyle{elsarticle-harv} \bibliography{sparseSolver} \end{document}
TITLE: Prove that given a partition $\mathcal{P}$ of a nonempty set $A$, there exists a unique equivalence relation on $A$ from which it is derived QUESTION [1 upvotes]: Prove that given a partition $\mathcal{P}$ of a nonempty set $A$, there exists a unique equivalence relation on $A$ from which it is derived sol: Let $\mathcal{P}$ be the partition $\{ A_{\alpha} \}_{\alpha}$ where $A_{\alpha} \cap A_{\beta} = \varnothing$ for $\alpha \neq \beta$. My idea is to create an equivalence relation $R$ on $A$ as follows: $$ (x,y) \in R \iff x,y \in A_{\alpha} \in \mathcal{P} \; \; \; \text{for some } \; \alpha $$ Since $x \in A_{\alpha}$ for some $\alpha$, $(x,x) \in R$ is clear. Now, suppose $(x,y) \in R$; that is, suppose $x,y$ are in $A_{\alpha}$. Then $y,x \in A_{\alpha}$, so $(y,x) \in R$. Is this that simple? Now, if $(x,y) \in R$ and $(y,z) \in R$, then we show that $(x,z) \in R$; that is, we show $x,z \in A_{\alpha}$. We already know $x \in A_{\alpha}$. If $z$ is ${\bf not}$ in there, then $z$ is in another $A_{\beta}$, and thus $y \in A_{\beta}$; but $y$ is in some $A_{\gamma}$, and so $A_{\beta} \cap A_{\gamma} \neq \varnothing$, which is a contradiction. As for uniqueness, how can I show this? Any hint would be appreciated. REPLY [1 votes]: You've already done it: the equivalence relation you've defined IS, in fact, given by the partition you began with. In other words: when I know the equivalence classes of an equivalence relation on some set, I already know that equivalence relation completely and uniquely, and since those equivalence classes are the partition's sets, we're done. Another way you could try: suppose there is another equivalence relation $\;S\;$ derived (in the way you showed) from the given partition. Then both equivalence relations $\;R,\,S\;$ have the very same equivalence classes (which are the sets of the partition!), and it is then a matter of simply checking that we have $\;aRb\iff aSb\;$, as then $\;a,b\;$ belong to the same set in the partition...
REPLY [1 votes]: Your solution for existence is good, but the transitivity argument can be simplified (no need to argue by contradiction). The proofs for both are not too complicated. Transitivity: Suppose $(x,y) \in R$ and $(y,z) \in R$, so $x,y \in A_{\alpha_1}$ and $y,z \in A_{\alpha_2}$ for some indices $\alpha_1,\alpha_2$. Since $\mathcal{P}$ is a partition, $y \in A_{\alpha_1} \cap A_{\alpha_2} \implies A_{\alpha_1} = A_{\alpha_2}$. Thus, $z \in A_{\alpha_1}$, and since $x,z \in A_{\alpha_1}$, $(x,z) \in R$. To prove uniqueness, let $R'$ be another relation such that the partition $\mathcal{P}$ forms the equivalence classes of $R'$ as well. First suppose $(x,y) \in R'$, so $x,y$ belong to the same equivalence class, say $x,y \in A_\alpha$. By definition of $R$, $(x,y) \in R$, so $R' \subseteq R$. On the other hand, suppose $(x,y) \in R$, so $x,y \in A_\alpha$ for some $\alpha$. Since $A_\alpha$ is an equivalence class of $R'$, we have that $x$ and $y$ belong to the same equivalence class of $R'$, so $(x,y) \in R'$.
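As a concrete sanity check on a finite set (an illustration only, not part of the proof), one can build $R$ from a partition in code and verify both the equivalence axioms and the fact that the equivalence classes of $R$ recover the partition:

```python
from itertools import product

# A finite set and a partition of it.
A = {1, 2, 3, 4, 5}
P = [{1, 3}, {2}, {4, 5}]

# R = { (x, y) : x and y lie in the same block of P }
R = {(x, y) for B in P for x, y in product(B, B)}

# R is reflexive, symmetric and transitive:
assert all((a, a) in R for a in A)
assert all((y, x) in R for (x, y) in R)
assert all((x, z) in R for (x, y) in R for (y2, z) in R if y == y2)

# The equivalence classes of R are exactly the blocks of P -- so any
# equivalence relation with these classes must coincide with R (uniqueness).
classes = {frozenset(y for y in A if (x, y) in R) for x in A}
assert classes == {frozenset(B) for B in P}
print(sorted(R))
```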
\begin{document} \newcommand{\hhom}{{\mathbb H}} \newcommand{\R}{{\mathbb R}} \newcommand{\N}{{\mathbb N}} \newcommand{\Z}{{\mathbb Z}} \newcommand{\C}{{\mathbb C}} \newcommand{\T}{{\mathbb T}} \newcommand{\rn}{{\mathbb R}^n} \newcommand{\cA}{{\mathcal A}} \newcommand{\cB}{{\mathcal B}} \newcommand{\cC}{{\mathcal C}} \newcommand{\cD}{{\mathcal D}} \newcommand{\cE}{{\mathcal E}} \newcommand{\cF}{{\mathcal F}} \newcommand{\cG}{{\mathcal G}} \newcommand{\cH}{{\mathcal H}} \newcommand{\cI}{{\mathcal I}} \newcommand{\cJ}{{\mathcal J}} \newcommand{\cK}{{\mathcal K}} \newcommand{\cN}{{\mathcal N}} \newcommand{\cL}{{\mathcal L}} \newcommand{\cP}{{\mathcal P}} \newcommand{\cQ}{{\mathcal Q}} \newcommand{\cS}{{\mathcal S}} \newcommand{\cT}{{\mathcal T}} \newcommand{\ga}{\alpha} \newcommand{\gb}{\beta} \renewcommand{\gg}{\gamma} \newcommand{\gG}{\Gamma} \newcommand{\gd}{\delta} \newcommand{\eps}{\varepsilon} \newcommand{\gve}{\varepsilon} \newcommand{\gk}{\kappa} \newcommand{\gl}{\lambda} \newcommand{\gL}{\Lambda} \newcommand{\go}{\omega} \newcommand{\gO}{\Omega} \newcommand{\gvp}{\varphi} \newcommand{\gt}{\theta} \newcommand{\gT}{\Theta} \renewcommand{\th}{\vartheta} \newcommand{\gs}{\sigma} \newcommand{\fT}{{\mathfrak T}} \newcommand{\ol}{\overline} \newcommand{\ul}{\underline} \newcommand{\Dbar}{D\hspace{-1.5ex}/\hspace{.4ex}} \newcommand{\Pf}{{\em Proof.}~} \newcommand{\Proof}{\noindent{\em Proof}} \newcommand{\pitensor}{\hat\otimes_\pi} \newcommand{\eproof}{{~\hfill$ \triangleleft$}} \renewcommand{\i}{\infty} \newcommand{\rand}[1]{\marginpar{\small #1}} \newcommand{\forget}[1]{} \newcommand{\cut}{C^\infty_{tc}(T^-X)} \newcommand{\Ctc}{C^\infty_{c}} \newtheorem{leer}{\hspace*{-.3em}}[section] \newenvironment{rem}[2] {\begin{leer} \label{#1} {\bf Remark. } {\rm #2 } \end{leer}}{} \newenvironment{lemma}[2] {\begin{leer} \label{#1} {\bf Lemma. } {\sl #2} \end{leer}}{} \newenvironment{thm}[2] {\begin{leer}\label{#1} {\bf Theorem. 
} {\sl #2} \end{leer}}{} \newenvironment{dfn}[2] {\begin{leer} \label{#1} {\bf Definition. } {\rm #2 } \end{leer}}{} \newenvironment{cor}[2] {\begin{leer} \label{#1} {\bf Corollary. } {\sl #2 } \end{leer}}{} \newenvironment{prop}[2] {\begin{leer} \label{#1} {\bf Proposition. } {\sl #2} \end{leer}}{} \newenvironment{extra}[3] {\begin{leer} \label{#1} {\bf #2. } {\rm #3 } \end{leer}}{} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\labelenumi}{{\rm (\roman{enumi})}} \newcounter{num} \newcommand{\bli}[1]{\begin{list}{{\rm(#1{num})}\hfill}{\usecounter{num}\labelwidth1cm \leftmargin1cm\labelsep0cm\rightmargin1pt\parsep0.5ex plus0.2ex minus0.1ex \itemsep0ex plus0.2ex\itemindent0cm}} \newcommand{\eli}{\end{list}} \def\Im{{\rm Im}\,} \def\lra{\longrightarrow} \def\Re{{\rm Re}\,} \def\rpb{\overline\R_+} \def\sumj#1{\sum_{j=0}^{#1}} \def\sumk#1{\sum_{k=0}^{#1}} \def\vect#1#2#3{\begin{array}{c}#1\\#2\\#3\end{array}} \def\vec#1#2{\begin{array}{c}#1\\#2\end{array}} \def\skp#1{\langle#1\rangle} \title{A Continuous Field of $C^*$-algebras and the Tangent Groupoid for Manifolds with Boundary} \author{\sc Johannes Aastrup, Ryszard Nest and Elmar Schrohe} \date{} \maketitle {\small {\bf Abstract.} For a smooth manifold $X$ with boundary we construct a semigroupoid $\cT^-X$ and a continuous field $C^*_r(\cT^-X)$ of $C^*$-algebras which extend Connes' construction of the tangent groupoid. We show the asymptotic multiplicativity of $\hbar$-scaled truncated pseudodifferential operators with smoothing symbols and compute the $K$-theory of the associated symbol algebra. {\bf Math.\ Subject Classification} 58J32, 58H05, 35S15, 46L80. {\bf Keywords:} Manifolds with boundary, continuous fields of $C^*$-algebras, tangent groupoid. 
} \tableofcontents \section*{Introduction} It is a central idea of semi-classical analysis to consider Planck's constant $\hbar$ as a small real variable and to study the relation between systems in classical mechanics and systems in quantum mechanics by associating to a function $f=f(x,\xi)$ on the cotangent bundle of a manifold the $\hbar$-scaled pseudodifferential operator $\op_\hbar(f)$ with symbol $f(x,\hbar\xi)$ and analyzing their relation as $\hbar\to0$. For $f \in \cS (T^*\R^n)$, for example, a basic estimate states that \begin{equation}\label{0.1} \lim_{\hbar \to 0}\| \op_\hbar (f )\|=\| f \|_{\rm sup} . \end{equation} Moreover, given a second symbol $g\in \cS (T^*\R^n)$ we have \begin{equation}\label{0.2} \lim_{\hbar \to 0}\| \op_\hbar (f)\op_\hbar (g)-\op_\hbar (fg)\|=0; \end{equation} in other words, the map $\op_\hbar $ is asymptotically multiplicative. As both statements concern the asymptotic behavior of pseudodifferential operators, it is somewhat surprising that they can be proven within the framework of continuous fields of $C^*$-algebras associated to amenable Lie groupoids, more precisely, the $C^*$-algebra of the so-called tangent groupoid $\cT M$, cf.\ Connes \cite[Section II.5]{Connes94}. For a boundaryless manifold $M$, $\cT M $ is constructed by gluing the tangent space $TM$ to the Cartesian product $M\times M\times ]0,1]$ via the map $TM\times [0,1]\ni (m,v,\hbar)\mapsto (m,\exp_m(-\hbar v),\hbar)$. It has the natural cross-sections $\cT M(\hbar)$, $0\le \hbar\le 1$, given by $TM$ for $\hbar =0$ and by $M\times M\times \{\hbar\}$ for $\hbar\not=0$.
The basic observation, establishing the link between $\hbar$-scaled pseudodifferential operators and the tangent groupoid, is the following: In the Fourier transformed picture, the $\hbar$-scaled pseudodifferential operator $\op_\hbar(f)$ becomes the convolution operator $\rho_\hbar(\hat f)$ acting by $$\rho_\hbar(\hat f)\xi(x) =\frac1{\hbar^n}\int \hat f\big(x,\frac{x-y}\hbar\big)\xi(y)dy,\quad\xi\in L^2(\R^n), $$ and for $\hbar \not=0$, the mappings $\rho_\hbar$ (or better their generalization to the manifold case) coincide with the natural representations of $C^\infty_c(\cT M(\hbar))$ by convolution operators. The $\rho_\hbar$, $\hbar\not=0$, are complemented by the representation $\pi_0$ of $C^\infty_c(TM)$ on $L^2(TM)$ via convolution in the fiber which in turn coincides with the natural representation of $C^\infty_c( \cT M(0)).$ Now the tangent groupoid is additionally amenable, so that, according to a theorem by Anantharaman-Delaroche and Renault \cite{AnDeRe}, the reduced $C^*$-algebra $C^*_r(\cT M)$, defined as the closure of $C^\infty_c(\cT M)$ with respect to the natural representations, and the full $C^*$-algebra $C^*(\cT M)$, i.e., the closure with respect to all involutive Hilbert space representations, are isomorphic. The crucial fact then is that $C_r^*(\cT M)$ is a continuous field of $C^*$-algebras over $[0,1]$; the fiber over $\hbar$ is $C^*_r(\cT M(\hbar))$. An elegant way to establish the continuity is to show upper semi-continuity and lower semi-continuity separately, noticing that upper semi-continuity is easily proven in $C^*(\cT M)$ while lower semi-continuity is not difficult to show in $C_r^*(\cT M)$. As both $C^*$-algebras are isomorphic, continuity follows. For a good account of these facts see \cite{LandsmanRamazan} by Landsman and Ramazan. The identities \eqref{0.1} and \eqref{0.2} are then an immediate consequence of the continuity of the field. In the present paper we consider manifolds with boundary. 
The analog of the usual pseudodifferential calculus here is Boutet de Monvel's calculus for boundary value problems \cite{MR53:11674}. In order to obtain an operator algebra, one cannot work with pseudodifferential operators alone, but has to introduce an additional class of operators, the so-called singular Green operators. The reason is the way pseudodifferential operators act on functions defined on a half space: One first extends the function (by zero) to the full space, then applies the pseudodifferential operator and finally restricts the result to the half space again -- one often speaks of truncated pseudodifferential operators. Given two pseudodifferential operators $P$ and $Q$, the `leftover operator' $L(P,Q)=(PQ)_+-P_+Q_+$, i.e.\ the difference between the truncated pseudodifferential operator $(PQ)_+$ associated to the composition $PQ$ and the composition of the truncated operators $P_+$ and $Q_+$ associated with $P$ and $Q$, is a typical example of such a singular Green operator. The singular Green operators `live' at the boundary. They are smoothing operators in the interior, while, close to the boundary, they can be viewed as operator-valued pseudodifferential operators along the boundary, acting like smoothing operators in the normal direction. In the full algebra which consists -- at least in the slightly simplified picture we have here -- of sums of (truncated) pseudodifferential operators and singular Green operators, the singular Green operators form an ideal. With this picture in mind, we construct an analog of Connes' tangent groupoid for a manifold $X$ with boundary. Our semigroupoid $\cT^- X$ consists of the groupoid $X\times X\times ]0,1]$ to which we glue, with the same map as above, the half-tangent space $T^-X$, which comprises all those tangent vectors to $X$ for which $\exp_m(-\hbar v)$ lies in $X$ for small $\hbar$ (note that this condition is only effective at the boundary of $X$).
As before, we have natural cross-sections $\cT^- X(\hbar)$, coinciding with $X\times X\times\{\hbar\}$ for $\hbar\not=0$ and with $T^-X$ for $\hbar=0$. For $\hbar\not=0$, the operators $\rho_\hbar$ (with integration now restricted to $X$) are the natural representations of the groupoid $\cT^-X(\hbar)$. At $\hbar=0$ we use two mappings. The first, $\pi_0$, is the analog of the above map $\pi_0$. It acts on the tangent space of $X$ by convolution. The second, $\pi_0^\partial$, acts on the half tangent space over the boundary by half-convolution: $\pi_0^\partial:C^\infty_c(T^-X)\to\cL(L^2(T^-X|_{\partial X}))$ is given by $$\pi_0^\partial(f)\xi(m,v)=\int _{T^-_mX}f(m,v-w)\xi(m,w)dw.$$ In order to avoid problems concerning the topology of $\cT^-X$, we denote by $C^\infty_c(\cT^-X)$ the space of all restrictions of functions in $C_c^\infty(\cT\widetilde X)$ to $\cT^-X$; here $\widetilde X$ is a boundaryless manifold containing $X$. The reduced $C^*$-algebra $C^*_r(\cT^-X)$ is then defined as the $C^*$-closure of $C^\infty_c(\cT^-X)$ with respect to the $\rho_\hbar$, $\hbar\not=0$, and $\pi_0,\pi_0^\partial$ for $\hbar=0$. For the full $C^*$-algebra we use all involutive representations. We show that $C^*_r(\cT^-X)$ is a continuous field of $C^*$-algebras over $[0,1]$, where the fiber over $\hbar\not=0$ is $C^*_r(\cT^-X(\hbar))$, and the fiber over $\hbar=0$ is the $C^*$-closure of $C^\infty_c(T^-X)$ with respect to $\pi_0$ and $\pi_0^\partial$. The proof of continuity is again split up into showing upper semi-continuity and lower semi-continuity. According to an idea by Rieffel \cite{Rieffel}, lower semi-continuity is established using strongly continuous representations. The basic idea for the proof of upper semi-continuity would be to infer an isomorphism between $C^*_r(\cT^-X)$ and $C^*(\cT^-X)$ from the amenability of $\cT^-X$.
However, as $T^-X$ is only a semigroupoid, we make a little detour: Using short exact sequences and the amenability of the tangent groupoids for boundaryless manifolds, we prove that $C^*_r(T^-X)$ is isomorphic to the closure of $C^\infty_c(T^-X)$ with respect to the upper semi-continuous norm. The present study should be seen as a step towards fitting Boutet de Monvel's calculus for boundary value problems into the framework of deformation quantization and groupoids, in the spirit of Connes \cite{Connes94}, Monthubert and Pierrot \cite{MonthuberPierrot}, Nest and Tsygan \cite{MR1350407}, \cite{MR1337107}, and Nistor, Weinstein and Xu \cite{MR1687747}. Eventually one could hope to develop an algebraic index theory for these deformations in the spirit of Nest and Tsygan. The structure of the paper is as follows: In the first section we review the case of boundaryless manifolds. We introduce the basic notions and show how \eqref{0.1} and \eqref{0.2} are derived with the help of the continuous field of $C^*$-algebras associated to the tangent groupoid. We then consider a manifold $X$ with boundary. In order to make the presentation more transparent, we first study the case where $X=\R^n_+=\{(x_1,\ldots,x_n)\ |\ x_n\ge0\}$. Here all relevant features show up, but computations are easier to perform. We then go over to the general case. In Section 3 we determine the $K$-theory of the symbol algebra $C^*_r(T^-X)$. Starting from the short exact sequence $$0\longrightarrow C^*_r(TX^\circ)\longrightarrow C^*_r(T^-X)\longrightarrow Q\longrightarrow 0$$ we show that the quotient $Q$ can be identified with $C_0(T^*\partial X)\otimes \cT_0$, where $\cT_0$ is an ideal in the Toeplitz algebra with vanishing $K$-theory. In particular, we obtain the isomorphism $$K_i(C^*_r(T^-X))\cong K_i(C_0(T^*X)),\quad i=0,1.$$ The appearance of the Toeplitz operators can be seen as a feature inherent in the geometry of the problem.
In fact, the construction of an algebra of pseudodifferential operators on a closed (Riemannian) manifold amounts to the construction of a suitably completed operator algebra, generated by multivariable functions of vector fields and the operators of multiplication by smooth functions. In the boundaryless case, one can localize to $\R^n$ and reduce the task essentially to defining $f(D)$ for a classical symbol $f$ and $D=(D_1,\ldots, D_n)$ with the vector fields $D_j=i\partial_{x_j}$. One convenient way of achieving this is to use the operator families $e^{itD_j}$ and to let $$ f(D)=(2\pi)^{-n}\int \widehat f(\xi)e^{i\xi D}\,d\xi $$ with the Fourier transform $\widehat f$ of $f$ and $\xi D=\xi_1D_1+\ldots +\xi_n D_n$. Note that the use of the $e^{i\xi D}$ is purely geometric and only relies on the fact that vector fields integrate to flows. On a manifold with boundary, one will have vector fields transversal to the boundary which do not integrate to flows. In this case, one has two possibilities: The first is to restrict the class of admissible vector fields to those which {\em do} integrate. This is a basic idea in the pseudodifferential calculi introduced by Melrose \cite{MelroseKyoto}, see also Ammann, Lauter, Nistor \cite{AmmannLauterNistor}. In Boutet de Monvel's calculus, on the other hand, transversal vector fields are admitted. After localization to $\ol \R^n_+$, we may focus on $D_n$. One of the functions one would certainly like to define is the Cayley transform (recall that the Cayley transform $C(A)$ of an operator $A$ is given by $C(A)= (A-i)(A+i)^{-1}=1-2i(A+i)^{-1}$). Now it is well known that, for symmetric $A$, the Cayley transform $C(A)$ is an isometry, and that it is a unitary if and only if $A$ is selfadjoint. As there is no selfadjoint extension of $D_n$, its Cayley transform will be a proper isometry. Hence by a theorem of Coburn \cite{Coburn,Coburn2}, the algebra generated by it (which becomes part of the calculus) is the Toeplitz algebra.
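The isometry claim admits a one-line verification, sketched here for the reader's convenience (inner products taken linear in the first variable, $u$ in the domain of the symmetric operator $A$):

```latex
\[
\|(A\pm i)u\|^2
  = \|Au\|^2 \pm i\bigl(\langle u,Au\rangle-\langle Au,u\rangle\bigr) + \|u\|^2
  = \|Au\|^2 + \|u\|^2,
\]
```

since $\langle Au,u\rangle=\langle u,Au\rangle$ for symmetric $A$. Thus $C(A)$ maps $(A+i)u$ to $(A-i)u$ isometrically, and it is unitary precisely when the ranges of $A\pm i$ fill the whole Hilbert space, i.e.\ when $A$ is selfadjoint.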
While the pseudodifferential calculus for closed manifolds is commutative modulo lower order terms, this calculus is not. From a geometric point of view, the resulting algebra can thus be seen as a noncommutative completion of the manifold with boundary. {\em Remark on the notation.} A variety of representations naturally comes up in this context. In order to distinguish their origin, we will apply the following rule. Representations related to the groupoid structure are denoted by $\pi$ (possibly indexed), asymptotic pseudodifferential operators by $\rho_\hbar$ and the asymptotic Green operators (introduced in Section 2) by $\kappa_\hbar$. \section{The Classical Case} \subsection*{Groupoids} A groupoid $G$ is a small category where all the morphisms are invertible. We will denote by $G^{(0)}$ the set of objects in $G$ and by $G^{(1)}$ the set of morphisms. We will also call $G^{(0)}$ the base and the elements in $G^{(1)}$ the arrows. On $G^{(1)}$ there are two maps $r,s$ into $G^{(0)}$. The first map, $r$, assigns to a morphism its range object, and the second, $s$, its source. For $x\in G^{(0)}$ we define $G^x=r^{-1}(x)$ and $G_x=s^{-1}(x)$. There is an embedding $\iota$ of $G^{(0)}$ into $G^{(1)}$ given by mapping an object to the identity morphism on this object. Furthermore we define $G^{(2)}$ to be the subset of composable morphisms of $G^{(1)}\times G^{(1)}$. \begin{dfn}{liegr}{A Lie groupoid $G$ is a groupoid together with a manifold structure on $G^{(0)}$ and $G^{(1)}$ such that the maps $r,s$ are submersions, and the map $\iota$ and the composition map $G^{(2)} \rightarrow G^{(1)}$ are smooth. } \end{dfn} To a given smooth manifold $M$ without boundary there are associated two canonical Lie groupoids. The first is the tangent bundle $TM$ of $M$.
The groupoid structure is given by \begin{eqnarray*} G^{(0)} =M, && G^{(1)}=TM \\ r(m,X)=m, && s(m,X)=m\\ (m,X)\circ (m,Y)&=&(m,X+Y) \end{eqnarray*} The second one is the pair groupoid $M\times M$ with \begin{eqnarray*} G^{(0)}=M, && G^{(1)}=M\times M \\ r(m_1,m_2)=m_1, &&s(m_1,m_2)=m_2 \\ (m_1,m_2)\circ (m_2,m_3)&=&(m_1 , m_3) \end{eqnarray*} Both are clearly Lie groupoids. \extra{Haar}{Haar systems}{A smooth left Haar system on a Lie groupoid is a family of measures $\{ \lambda^x \}_{ x\in G^{(0)}}$ on $G$ with $\hbox{supp}\lambda^x=G^x$ which is left invariant, i.e. $\gamma(\lambda^{s(\gamma )})=\lambda^{r(\gamma )}$, and for each $ f\in C^\infty_c (G^{(1)})$, the function on $G^{(0)}$ defined by $$x \mapsto \int f\,d\lambda^x$$ is smooth. In \cite[Proposition 3.4]{LandsmanRamazan}, it is proven that all Lie groupoids possess a smooth left Haar system. Similarly, a right Haar system $\{\gl_x\}$ is given by $\gl_x=(\gl^x)^{-1}$. } \begin{dfn}{ame}{A Lie groupoid $G$ with a smooth left Haar system $\lambda^x$ is called topologically amenable if there exists a net of nonnegative continuous functions $\{f_i\}$ on $G^{(1)}$ such that \begin{enumerate} \item For all $i$ and for $x\in G^{(0)}$, $\int f_i d\lambda^x=1$. \item The functions $\gamma \mapsto \int | f_i(\gamma^{-1}\gamma')-f_i(\gamma')|d\lambda^{r(\gamma)}(\gamma')$ converge uniformly to zero on compact subsets of $G^{(1)}$. \end{enumerate}} \end{dfn} It is easy to verify that the two groupoids $TM$ and $M\times M$ are topologically amenable. \extra{tg}{Connes' tangent groupoid}{Let $M$ be a smooth manifold. Connes' tangent groupoid $\cT M$ is a blow-up of the diagonal in $M\times M$. More specifically: Let $\cT M=TM \cup (M\times M\times ]0,1])$ as a set. The groupoid structure is just the fiberwise groupoid structure coming from the groupoid structure on $TM$ and $M\times M$. The manifold structure on $M\times M \times ]0,1]$ is obvious.
We next glue $TM$ to $M\times M \times ]0,1]$ to get a manifold structure on $\cT M$. To this end we choose a Riemannian metric on $M$ and glue with the charts $$TM\times [0,1]\supseteq U\ni (m,v,\hbar)\mapsto \left\{ \begin{array}{cc} (m,v) & \hbox{for }\hbar =0 \\ (m, \exp_m(-\hbar v),\hbar)& \hbox{for }\hbar \not= 0, \end{array} \right.$$ where $\exp_m$ denotes the exponential map and $U$ is an open neighborhood of $M\times \{0\}\subset TM\times \{0\}$; here, $M$ is embedded as the zero section. Here, $G^{(0)}=M\times [0,1]$. For $\tilde \hbar\not=0$ and $x=(\tilde m,\tilde \hbar)\in G^{(0)}$, we have $G^x=\{(\tilde m,m,\tilde \hbar): m\in M\}$; for $x=(\tilde m,0)$, $G^x=T_{\tilde m}M$. Fixing the measure $\mu$ on $M$ induced by the metric, we obtain a Haar system $\{\lambda^x\}_{x\in G^{(0)}}$ by $\lambda^{(\tilde m,\tilde \hbar)} =\tilde \hbar^{-n}\mu$, $\tilde \hbar \not=0$; for $\tilde \hbar=0$, we let $\lambda^{(\tilde m,0)}$ be the measure on $T_{\tilde m}M$ given by the metric. This makes $\cT M$ a Lie groupoid, see \cite{LandsmanRamazan}. } \extra{c*}{C*-algebras associated to groupoids}{Let $G$ be a Lie groupoid with a smooth left Haar system $\lambda$. On $C^\infty_c(G^{(1)})$ we define a $*$-algebra structure by \begin{eqnarray} (f*g)(\gamma)&=&\int_{G^{s(\gamma)}}f(\gamma \gamma_1)g(\gamma_1^{-1})\ d\lambda^{s(\gamma )}(\gamma_1) \\ f^*(\gamma )&=&\overline{f(\gamma^{-1})} \end{eqnarray} There are involutive representations $\pi_x$, $x\in G^{(0)}$, of this $*$-algebra on the Hilbert spaces $L^2(G_x,\lambda_x)$ given by \begin{equation}\label{repsl} \pi_x(f)\xi(\gamma )=\int_{G^{x}}f(\gg\gamma_1)\xi (\gamma_1^{-1})~d\lambda^{x}(\gamma_1), \qquad \xi \in L^2(G_x,\lambda_x). \end{equation} } \begin{dfn}{deffcg}{The full $C^*$-algebra $C^*(G)$ of a groupoid is the $C^*$-completion of the $*$-algebra $C^\infty_c(G^{(1)})$ with respect to all involutive Hilbert space representations.
The reduced $C^*$-algebra $C^*_r(G)$ of $G$ is the $C^*$-completion of $C_c^\infty(G^{(1)})$ with respect to the representations \eqref{repsl}.} \end{dfn} Note that, by universality, we have a quotient map from $C^*(G)$ to $C^*_r(G)$. \begin{rem}{1.7}{Although the construction of the $*$-algebra structure on $C_c^\infty (G^{(1)})$ and the representations \eqref{repsl} use a smooth Haar system, the algebra is independent of the choice. See \cite{LandsmanQTG} for a detailed exposition.} \end{rem} \extra{tanfour}{Example}{For the tangent bundle $TM$ of a manifold, the space $G_m$ is just $T_mM$ and the representation is $$\pi_m (f)\xi(v)=\int_{T_mM} f(m,v-w)\xi (w)\, dw,\quad \xi \in L^2(T_mM).$$ By Fourier transform in each fiber $T_mM$, the $C^*$-algebra $C^*_r(TM)$ becomes isomorphic to $C_0(T^*M)$, the continuous functions on $T^*M$ vanishing at infinity. } The importance of topological amenability lies in the following result from \cite{AnDeRe}: \begin{prop}{amec}{When $G$ is topologically amenable, the quotient map from $C^*(G)$ to $C^*_r(G)$ is an isomorphism.} \end{prop} \subsection*{Continuous Fields and $\hbar$-Scaled Pseudodifferential Operators} \dfn{cf}{A continuous field of $C^*$-algebras $(A,\{A(\hbar),\gvp_\hbar\}_{\hbar\in[0,1]})$ over $[0,1]$ consists of a $C^*$-algebra $A$, $C^*$-algebras $A(\hbar)$, $\hbar\in[0,1]$, with surjective homomorphisms $\gvp_\hbar:A\to A(\hbar)$ and an action of $C([0,1])$ on $A$ such that for all $a\in A$ \begin{enumerate} \item The function $\hbar\mapsto \|\gvp_\hbar(a)\|$ is continuous; \item $\|a\|=\sup_{\hbar\in[0,1]}\|\gvp_\hbar(a)\|$; \item For $f\in C([0,1])$, $\gvp_\hbar(fa)=f(\hbar)\gvp_\hbar(a)$. \end{enumerate}} \thm{cfprop}{For the tangent groupoid $\cT M$ we define $\cT M(0)=TM$ and $\cT M(\hbar)=M\times M\times \{\hbar\}$ for $\hbar\not=0$.
The pullback under the inclusion $\cT M(\hbar)\hookrightarrow \cT M$ induces a map $\gvp_{\hbar}:C^\infty_c(\cT M)\to C^\infty_c(\cT M(\hbar))$ which extends by continuity to a surjective $*$-homomorphism $\gvp_{\hbar}: C^*_r(\cT M)\to C^*_r(\cT M(\hbar))$. The $C^*$-algebras $A= C^*_r(\cT M)$ and $A(\hbar)=C^*_r(\cT M(\hbar))$ with the maps $\gvp_{\hbar}$ form a continuous field over $[0,1]$.} \Proof. Together with the amenability of $\cT M$ and Proposition \ref{amec} this is immediate from Theorem 6.4 in \cite{LandsmanRamazan}.\eproof \extra{1.12}{$\hbar$-scaled pseudodifferential operators}{For $0<\hbar\le 1$ define $\rho_\hbar: C^\infty_c(T\R^n)\to \cL(L^2(\R^n))$ by \begin{eqnarray} \rho_\hbar (f)\xi(x) &=& \int f(x,w)\xi(x-\hbar w)\,dw =\hbar^{-n}\int f\big(x,\frac{x-w}\hbar\big)\xi(w)\,dw, \quad \xi\in L^2(\R^n) \end{eqnarray} We complement this by the map $\pi_0: C^\infty_c(T\R^n)\to \cL(L^2(T\R^n))$ \begin{eqnarray}\label{tilderho} \pi_0(f)\xi(x,v)&=& \int f(x,w)\xi(x,v- w) dw. \end{eqnarray} } \rem{1.12a}{(a) We can define $\tilde\rho_\hbar: C^\infty_c(T\R^n)\to \cL(L^2(T\R^n))$, $\hbar\ge 0$, by $$\tilde \rho_\hbar(f)\xi(x,v)= \int f(x,w)\xi(x-\hbar w,v- w) dw$$ and then obtain a more consistent representation. Note that for $\hbar>0$ the representations $\rho_\hbar$ and $\tilde \rho_\hbar$ are unitarily equivalent. (b) On a smooth Riemannian manifold $M$ we define $\rho_\hbar$ by \begin{eqnarray} \rho_\hbar (f)\xi(x) &=& \int \psi (x,\exp_x(-\hbar w)) f(x,w)\xi(\exp_x(-\hbar w))dw\nonumber\\ &=&\hbar^{-n}\int \psi(x,y) f(x,-\exp^{-1}(x,y)/\hbar)\xi(y)\, dy.\label{eq1.8} \end{eqnarray} Here $\psi \in C^\infty (M\times M)$ is a function, which is one on a neighborhood of the diagonal, $0\leq \psi \leq 1$, and such that $$\exp :TM \rightarrow M\times M,\quad (m,v)\mapsto (m,\exp_mv),$$ maps a neighborhood of the zero section diffeomorphically to the support of $\psi$; a similar construction applies to $\tilde \rho$.
Note that for two representations $\rho^1_\hbar,\rho^2_\hbar$, defined with cut-off functions $\psi_1$ and $\psi_2$, the norm $\| \rho_\hbar^1 (f)-\rho_\hbar^2(f)\|$ tends to zero as $\hbar \to 0$.} \lemma{1.13}{To each $f\in C^\infty_c(TM)$ we associate a function $\tilde f\in C^\infty(\cT M)$ by $$ \begin{array}{ll}\tilde f(x,v,0)=f(x,v)&\text{ for } \hbar=0,x\in M,v\in T_xM;\\ \tilde f(x,y,\hbar)=\psi (x,y) f(x,-\exp^{-1} (x,y)/ \hbar)& \text{ for } \hbar\not=0,x,y\in M. \end{array}$$ By \eqref{eq1.8} $$ \pi_{(x,\hbar)}(\tilde f)=\rho_{\hbar}(f) \text{ and } \|\gvp_{\hbar}(\tilde f)\|_{\cT M(\hbar)}=\sup_x\|\pi_{(x,\hbar)}(\tilde f)\|_{\cL (L^2(G_{(x,\hbar)},\lambda_{(x,\hbar)}))} =\|\rho_\hbar(f)\|. $$ } \thm{1.14}{We denote by $\widehat f$ the Fourier transform of $f$ with respect to the covariable. Then \\ {\rm (a)} $\lim_{\hbar\to 0} \|\rho_\hbar(f)\|=\|\widehat f\|_{\sup}$.\\ {\rm (b)} $\lim_{\hbar\to 0} \|\rho_\hbar(f)\rho_\hbar(g)-\rho_\hbar(f*g)\|=0$. } \Proof. We have $$\lim_{\hbar\to 0} \|\rho_{\hbar}(f)\| =\lim_{\hbar\to 0}\|\gvp_{\hbar}(\tilde f)\|=\|\gvp_{0}(\tilde f)\| = \|\pi_0(f)\|=\|\widehat f\|_{\sup} $$ and, for arbitrary $x$, \begin{eqnarray*} \|\rho_\hbar(f)\rho_\hbar(g)-\rho_\hbar(f*g)\| &=& \|\pi_{(x,\hbar)}(\tilde f)\pi_{(x,\hbar)}(\tilde g)-\pi_{(x,\hbar)}(\widetilde{f*g})\|\\ &=&\|\pi_{(x,\hbar)}(\tilde f*\tilde g-\widetilde{f*g})\|\to \|\pi_0 (\tilde f*\tilde g-\widetilde{f*g})\|=0. \end{eqnarray*} \eproof \section{Manifolds with Boundary} \setcounter{equation}{0} In the following, we shall denote by $X$ a smooth $n$-dimensional manifold with boundary, $\partial X$. We assume that $X$ is embedded in a boundaryless manifold $\widetilde X$ and write $X^\circ$ for the interior of $X$. We also fix a Riemannian metric on $X$, so that we have $L^2$ spaces. We will show later on that the construction is independent of the choice of the metric.
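Before doing so, we record a direct check of Theorem \ref{1.14}(a) in a translation-invariant special case (a sketch; not needed in the sequel). If $f(x,w)=f(w)$ depends only on the covariable, with $f\in C^\infty_c(\R^n)$ (such a symbol is not compactly supported in the base variable, but the formula for $\rho_\hbar$ still makes sense), then taking the Fourier transform in $x$, normalized as $\widehat f(t)=\int e^{-i\langle w,t\rangle}f(w)\,dw$, gives
$$\widehat{\rho_\hbar(f)\xi}(t)=\int e^{-i\langle x,t\rangle}\!\int f(w)\,\xi(x-\hbar w)\,dw\,dx=\widehat f(\hbar t)\,\widehat\xi(t),$$
so $\rho_\hbar(f)$ is the Fourier multiplier with symbol $t\mapsto\widehat f(\hbar t)$ and
$$\|\rho_\hbar(f)\|=\sup_t|\widehat f(\hbar t)|=\|\widehat f\|_{\sup}\quad\text{for every fixed }\hbar>0,$$
consistent with Theorem \ref{1.14}(a).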
First of all, however, it is helpful to study the case where $X=\R^n_+=\{ (x_1,\ldots, x_n)| x_n\geq 0 \}$ (including $x_n=0$!). We adopt the usual notation by writing an element $x \in \R_+^n$ as $x=(x',x_n)$. \subsection*{Local Computation of the Asymptotic Green Term} We change the formula for the $\hbar$-scaled boundary pseudodifferential operators with Fourier transformed symbol $f\in C^\infty_c(T\R_+^n)$ to \begin{eqnarray} \rho_\hbar(f)\xi(x)&=&\int_{x_n\geq \hbar v_n}f(x,v)\xi(x-\hbar v)dv\nonumber\\ &=&\hbar^{-n}\int_{w_n\ge0}f\big(x,\frac{x-w}\hbar\big)\xi(w)\,dw\label{hscaled} ,\quad \xi\in L^2(\R^n_+). \end{eqnarray} A straightforward computation shows that \begin{eqnarray*} (\rho_\hbar(f)\rho_\hbar (g)\xi )(x)&=& \int_{x_n\geq \hbar w_n} \left( \int_{x_n\geq \hbar v_n}f(x,v)g(x-\hbar v,w-v)dv \right)\xi(x-\hbar w)dw\\ & =&\int_{x_n\geq \hbar w_n}\Big( \int f(x,v)g(x-\hbar v,w-v)dv \\ && - \int_{x_n \leq \hbar v_n } f(x,v)g(x-\hbar v,w-v)dv \Big) \xi(x-\hbar w)dw, \end{eqnarray*} where in the last line $f,g$ have to be understood as extended to functions in $C^\infty_c(T\R^n)$. The term \begin{eqnarray}\label{f*hg} (f*_\hbar g)(x,w)= \int f(x,v)g(x-\hbar v,w-v)\,dv \end{eqnarray} is just the usual composition of Fourier transformed symbols of pseudodifferential operators on manifolds without boundary. We call the remainder, i.e. the operator which maps $\xi$ to \begin{eqnarray} x&\mapsto&{-\int_{x_n\geq \hbar w_n} \int_{x_n\leq \hbar v_n} f(x,v)g(x-\hbar v,w-v)dv\, \xi(x-\hbar w)dw}\nonumber\\ &=&-\int_{y_n\ge0} \int_{x_n\leq \hbar v_n} f(x,v)g(x-\hbar v,y'-v',\frac{x_n}\hbar -y_n-v_n)\,dv\, \xi(x'-\hbar y',\hbar y_n)\, dy\mbox{\quad\quad } \end{eqnarray} the ``asymptotic Green'' term, because it corresponds to the leftover term in the composition of two truncated pseudodifferential operators in Boutet de Monvel's calculus, which is a singular Green operator, cf.\ \cite{MR53:11674}.
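For orientation (a classical aside, stated without proof): in dimension one, and for symbols independent of the base point, $\rho_\hbar(f)$ is a Wiener--Hopf (Toeplitz) operator, and the leftover term above plays the role of the Hankel product in the classical identity
$$T_\varphi T_\psi-T_{\varphi\psi}=-H_\varphi H_{\tilde\psi},\qquad \tilde\psi(z)=\psi(1/z),\quad \varphi,\psi\in C(S^1),$$
where $T_\varphi$ denotes the Toeplitz and $H_\varphi$ the Hankel operator with symbol $\varphi$; the precise relation between half-convolutions and Toeplitz operators is recalled in Lemma \ref{Toeplitz2} below.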
In order to analyze it, we introduce the following notation: \begin{dfn}{kappa} { For $0< \hbar \leq 1$ define $$\kappa_\hbar :C^\infty_c(T\R^{n-1}\times \R_+\times \R_+\times [0,1])\to\cL (L^2(\R^n_+))$$ by $$\kappa_\hbar (K)\xi (x)=\int_{y_n\geq 0} K(x',y',\frac{x_n}\hbar ,y_n,\hbar )\xi(x'-\hbar y',\hbar y_n)\,dy .$$ } \end{dfn} The asymptotic Green term is thus of the form $\gk_\hbar(l_\hbar (f,g))$ with \begin{eqnarray}\label{lfg} \lefteqn{l_\hbar (f,g)(x',y',x_n,y_n)}\nonumber\\ &=& -\int_{x_n\leq v_n}f(x',\hbar x_n,v)g(x'-\hbar v',\hbar(x_n-v_n),y'-v',x_n-y_n-v_n)dv. \end{eqnarray} As $\hbar\to0$ this tends to \begin{eqnarray} l(f,g)(x',y',x_n,y_n)=-\int_{x_n\leq v_n}f(x',0,y'-v',v_n)g(x',0,v',x_n-v_n-y_n)dv. \end{eqnarray} In fact, the difference $l_\hbar(f,g)-l(f,g)$ is an element of $C^\infty_c(T\R^{n-1}\times\R_+\times\R_+\times[0,1])$ which vanishes for $\hbar=0$. Similarly, the difference $f*_\hbar g-f*g\in C^\infty_c(T\R^n_+\times[0,1])$ vanishes for $\hbar=0$. \bigskip In order to extend Theorem \ref{1.14} to manifolds with boundary, the asymptotic Green term has to be taken into account. \dfn{pi0}{For $\hbar =0$ we introduce $$\pi_0^\partial :C^\infty_c(T\R^n_+\times [0,1])\oplus C^\infty_c(T\R^{n-1} \times \R_+\times \R_+\times [0,1]) \to \cL(L^2(T\R^{n-1}\times \R_+))$$ given by \begin{eqnarray*} \lefteqn{\pi_0^\partial (f\oplus K)\xi(x',v',v_n)}\\ &=&\int_{w_n\geq 0} \left(f(x',0,v'-w',v_n-w_n,0) + K(x',v'-w',v_n,w_n,0)\right)\xi (x',w',w_n)dw \end{eqnarray*} We complement $\pi_0^\partial$ by the map $\pi_0:C^\infty_c(T\R^n_+)\to \cL(L^2(T\R^n_+))$ in \eqref{tilderho}. } The crucial point is: \lemma{starprod}{The map $$(\pi_0 ,\pi_0^\partial ):C^\infty_c(T\R^n_+)\oplus C^\infty_c(T\R^{n-1}\times \R_+\times \R_+)\to \cL (L^2(T\R_+^n)\oplus L^2(T\R^{n-1}\times \R_+))$$ given by $$(\pi_0,\pi_0^\partial )(f\oplus K)=(\pi_0(f),\pi^\partial_0(f\oplus K))$$ turns $C^\infty_c(T\R_+^n)\oplus C^\infty_c(T\R^{n-1}\times \R_+\times \R_+)$ into an algebra.
We denote this product by $*'$. Note that $f*'g=f*g+l(f,g)$. } It is clear that Theorem \ref{1.14}(b) will not remain true literally. Instead we obtain: \thm{2.2}{For two symbols $f,g\in C^\infty_c(T\R^n_+)$ the following holds $$\lim_{\hbar\to0} \| \rho_\hbar (f)\rho_\hbar (g)-\rho_\hbar(f*g)-\kappa_\hbar(l(f,g)) \|=0.$$ } As in the case of manifolds without boundary, this will be related to the continuity of a field of $C^*$-algebras which we will now introduce. \begin{dfn}{field}{We denote by $A$ the $C^*$-completion of $$A^\infty=C^\infty_c(T\R^n_+\times [0,1])\oplus C^\infty_c(T\R^{n-1}\times \R_+\times \R_+\times [0,1])$$ in the representation $\rho_\hbar + \kappa_\hbar$, for $\hbar \not= 0$ and $(\pi_0 , \pi_0^\partial )$ for $\hbar =0$, i.e. under the mappings $$f\oplus K\mapsto \left\{ \begin{array}{ll} \rho_\hbar(f)+\gk_\hbar(K),&\hbar\not=0;\\\pi_0(f)\oplus \pi_0^\partial(f\oplus K),&\hbar=0. \end{array} \right. $$ There are obvious maps $$\varphi_\hbar :A \to A(\hbar ),$$ where $A (\hbar )$ is the completion of $C^\infty_c(T\R^n_+)\oplus C^\infty_c(T\R^{n-1}\times \R_+\times \R_+)$ with respect to the specific representation in $\hbar$.} \end{dfn} We will show: \thm{rncase}{The triple $(A, \{ A (\hbar ),\varphi_\hbar \}_{\hbar \in [0,1]})$ is a continuous field of $C^*$-algebras with $A(\hbar )$ isomorphic to the compact operators for $\hbar \not= 0$. } For fixed $\hbar \not=0$, the operators $\rho_\hbar(f)+\gk_\hbar(K)$ are compact, because they are integral operators with a square integrable kernel, so $A(\hbar)$ is isomorphic to the compact operators. We shall next analyze the field in more detail. We abbreviate $$T=T\R^n_+\quad\text{ and }\quad \cT_\partial=T\R^{n-1}\times \R_+\times \R_+$$ and start with the following observation: \begin{prop}{stjernealg}{As a subset of $A$, $A^\infty$ is a $*$-algebra.} \end{prop} \Proof. First we prove closure under multiplication.
The product of $K_1,K_2 \in C^\infty_c(\cT_\partial\times [0,1])$ is just the convolution product of the two functions on the groupoid $\cT \R^{n-1}\times \R_+\times \R_+$, thus again a function in $C^\infty_c(\cT_\partial\times [0,1])$. For $f,g\in C^\infty_c({T})$ we have already computed, cf.\ \eqref{f*hg}, \eqref{lfg}: $$\rho_\hbar (f)\rho_\hbar (g)=\rho_\hbar (\tilde{f}*_\hbar \tilde{g})+\kappa_\hbar (l_\hbar (\tilde{f},\tilde{g})),$$ where $\tilde{f},\tilde{g}$ are smooth extensions of $f,g$ to functions in $C^\infty_c(T\R^n\times [0,1])$. Since $$( \pi_0 ,\pi_0^\partial)(f)(\pi_0,\pi_0^\partial)(g)=(\pi_0(f*g),\pi_0^\partial(f*g)+\pi_0^\partial(l(f,g)))$$ we see the closure under products of $f,g$. Checking the closure under products of $f$'s with $K$'s is straightforward. The same is true for the closure under involution. \eproof \subsection*{The Algebra in 0} The algebra in zero, $A(0),$ is the completion of $$A(0)^\infty :=(C^\infty_c({T})\oplus C^\infty_c(\cT_\partial),*')$$ in the representation $(\pi_0 , \pi_0^\partial )$. The summand $C^\infty_c(\cT_\partial)$ becomes an ideal in $A(0)^\infty$. We thus get the short exact sequence \begin{equation} \label{glatkort} 0\to C^\infty_c(\cT_\partial) \to A(0)^\infty \stackrel{q}\to C^\infty_c({T})\to 0. \end{equation} As noted in the proof of Proposition \ref{stjernealg}, the algebra structure on $C^\infty_c(\cT_\partial)$ comes from the groupoid structure on $\cT_\partial$, where $\R_+\times \R_+$ carries the pair groupoid structure. Likewise, the algebra structure on $C^\infty_c({T})$ stems from the groupoid structure on ${T}$. Note that both groupoids are amenable. \lemma{sesforA0}{We have a short exact sequence of $C^*$-algebras \begin{equation}\label{stjernekort} 0\to C^*_r(\cT_\partial) \to A(0) \to C^*_r({T}) \to 0. \end{equation}} \Proof. In the short exact sequence \eqref{glatkort}, the projection $q$, mapping $f\oplus K$ to $f$, is a $*$-homomorphism.
The trivial estimate $$\|\pi_0(f)\|_{\cL(L^2({T}))}\le \| \pi_0(f)\oplus \pi_0^\partial(f\oplus K)\|_{\cL(L^2({T})\oplus L^2(T\R^{n-1}\times\R_+))},$$ shows that $q$ extends to a map $A(0)\to C^*_r({T})$ with $C^*_r(\cT_\partial)$ in its kernel. Since we may estimate the norm of $\pi_0^\partial(f)$ by the norm of $\pi_0(f)$, we obtain \eqref{stjernekort}.\eproof\medskip Alternatively, the lemma may be proven using only the amenability of the groupoids, similarly as in the proof of Theorem \ref{rn}, below. Note that, via the Fourier transform, $$ C^*_r(\cT_\partial)\simeq C_0(T^*\R^{n-1})\otimes \cK (L^2(\R_+))$$ and $$C^*_r({T})\simeq C_0(T^*\R_+^n).$$ \subsection*{Upper Semi-continuity} \dfn{as}{On $A$ we define $$\| a\|_{as}=\max (\limsup_{\hbar \to 0} \|\varphi_\hbar (a) \|,\|\varphi_0(a)\|).$$ This is a $C^*$-seminorm which is continuous with respect to the norm of $A$. The quotient $$A[0]=A/I, \quad \text{where}\quad I=\{ a\in A |\ \| a\|_{as}=0 \},$$ therefore carries two norms: the quotient norm and $\|\cdot\|_{as}$. Both are equivalent by \cite[Proposition 1.8.1]{Dixmier}, so that $A[0]$ is a $C^*$-algebra with norm $\|\cdot \|_{as}$. } Since $\|a\|_{as}\ge \|\gvp_0(a)\|$ we have a natural map $$\Phi:A[0]\longrightarrow A(0).$$ \begin{lemma}{vurdering} {Elements in $A^\infty$ which are $0$ for $\hbar=0$ belong to $I$.} \end{lemma} \Proof. For $f\oplus K \in A^\infty $ it is easy to estimate $$\|\rho_\hbar (f)\|\leq M_f\|f(\cdot , \hbar)\|_\infty \hbox{ and } \| \kappa_\hbar (K) \| \leq M_K\|K(\cdot,\hbar )\|_\infty,$$ where $M_f$ and $M_K$ are constants depending on $f$ and $K$, respectively, but not on $ \hbar$. \eproof \thm{rn}{The field $(A,\{ A(\hbar ),\varphi_\hbar \}_{\hbar \in [0,1]})$ is upper semi-continuous in $0$.} \Proof. We denote by $R$ the closure of the range of the natural map $\gg:C_c^\infty(\cT_\partial)\to A[0]$. This is an ideal in $A[0]$: Indeed, $C_c^\infty(\cT_\partial)$ is an ideal in $A(0)^\infty$, and the extension (e.g.
constant in $\hbar$) of functions in $A(0)^\infty$ to functions in $A^\infty$ furnishes an embedding of $A(0)^\infty$ into $A[0]$ with dense range. Since $\cT_\partial$ is amenable, the quotient map $C^*(\cT_\partial)\to C^*_r(\cT_\partial)$ is an isomorphism. It factorizes through $R$, since $R$ gives us a Hilbert space representation of $\cT_\partial$, while $\|a\|_{as}\ge\|\gvp_0(a)\|$. This leads to a commutative diagram of natural maps \begin{eqnarray*} &&C^*(\cT_\partial)\\ &\nearrow&~~~\downarrow~~~\\ C_c^\infty(\cT_\partial) & \stackrel{}{\hookrightarrow} &~~~R\subseteq A[0], \\ & \searrow &~~~ \downarrow~~\\ &&C_r^*(\cT_\partial) \end{eqnarray*} where the upper vertical arrow is surjective, since the inclusion has dense range. The invertibility of the quotient map implies that the lower vertical arrow is an isomorphism. Next we define a map $\tilde{q}:A[0]\to C^*_r(T)$: By definition, $A[0]$ is the set of equivalence classes of Cauchy sequences in $A^\infty$ with respect to $\|\cdot\|_{as}$. Given such a Cauchy sequence $a_k=(f_k\oplus K_k)$, we may evaluate at $\hbar=0$ and obtain a sequence $(f_k^0\oplus K_k^0)$ in $A(0)^\infty$. As $\|a_k\|_{as}\ge \|\gvp_0(a_k)\|$, the sequence $(f^0_k)$ is a Cauchy sequence in $C_r^*(T)$; moreover, the mapping $(a_k)\mapsto (f^0_k)$ is well-defined and continuous. In view of Lemma \ref{vurdering} its kernel is $R$. Combining this with the short exact sequence \eqref{stjernekort} we obtain the following commutative diagram of short exact sequences \begin{equation} \begin{array}{ccccccccc}\label{cd} 0 & \to & C_r^*(\cT_\partial) & \to & A[0] & \stackrel{\tilde{q}}\to &C_r^*({T})&\to &0 \\ \| &&\| &&\downarrow \Phi&&\|&&\| \\ 0 & \to & C_r^*(\cT_\partial) & \to & A(0) & \to &C_r^*({T})&\to &0 \end{array}. \end{equation} We conclude from the five lemma that $\Phi$ is an isomorphism, and therefore $$\limsup_{\hbar \rightarrow 0} \|\varphi_\hbar (a)\|\leq \|\varphi_0(a)\|,$$ i.e. 
the field is upper semi-continuous in $0$.\eproof\medskip What is still missing is the proof of the lower semi-continuity of the field $A$. It will be given at the end of Section 2, since there is no simplification for the half-space case. \subsection*{The Tangent Groupoid for a Manifold with Boundary} \dfn{2.1}{We denote by $T^-X$ the subset of $T\widetilde X$ formed by all vectors $(m,v)\in T\widetilde X|_{X}$ for which $\exp_m(-\gve v)\in X$ for sufficiently small $\gve>0$. This is a semi-groupoid with addition of vectors. Note that $T^-X=TX^\circ\cup T^-X|_{\partial X}$. We define $\cT^-X$ as the disjoint union $T^-X \cup (X\times X\times ]0,1])$, endowed with the fiberwise semi-groupoid structure induced by the semi-groupoid structure on $T^-X$ and the groupoid structure on $X\times X$. As in the boundaryless case, we glue $T^-X$ to $X\times X \times ]0,1]$ via the charts $$T^-X\times [0,1]\supseteq U\ni (m,v,\hbar)\mapsto \left\{ \begin{array}{cc} (m,v) & \hbox{for }\hbar =0 \\ (m, \exp_m(-\hbar v),\hbar)& \hbox{for }\hbar \not= 0 \end{array} \right.$$ and let $\cT^-X(0)=T^-X$ and $\cT^-X(\hbar)=X\times X\times \{\hbar\}$. In order to avoid problems with the topology of $\cT^-X$ (which is in general not a manifold with corners) we let $C^\infty_c(\cT^-X)=C^\infty_c(\cT\widetilde{X})|_{\cT^-X}$. } \subsection*{C*-algebras Associated to the Semi-groupoids $T^-X$ and $\cT^- X$} We start with $T^-X$. Let $ \Ctc ( T^-X)$ denote the smooth functions on $T^-X$ which have compact support in $T^-X$.
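For orientation, consider the flat half-space $X=\R^n_+$, where $\exp_m(-\gve v)=m-\gve v$. The condition in the definition of $T^-X$ then reads $m_n-\gve v_n\ge 0$ for small $\gve>0$, so that
$$T^-\R^n_+=T(\R^n_+)^\circ\;\cup\;\{((x',0),v)\,:\,x'\in\R^{n-1},\ v=(v',v_n)\in\R^n,\ v_n\le 0\};$$
over a boundary point the fiber is a closed half-space. Flipping the sign of the normal component identifies $L^2$ of this fiber with $L^2(\R^{n-1}\times\R_+)$, matching the half-space formulas above.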
In analogy with Definition \ref{kappa} we introduce \begin{eqnarray*} \pi_0&:&\Ctc(T^-X)\to \cL(L^2(T X^\circ ))\quad\text{and}\\ \pi_0^\partial&:&\Ctc(T^-X)\to \cL(L^2(T^-X|_{\partial X})) \end{eqnarray*} acting by \begin{eqnarray} \pi_0(f)\xi(m,v)=\int_{T_mX}f(m,v-w)\xi(m,w)\, dw,\\ \pi_0^\partial(f)\xi (m,v)=\int_{T^+_mX}f(m,v-w)\xi(m,w)\, dw.\label{kappaf} \end{eqnarray} Note that due to its compact support in $T^-X$, the function $f$ naturally extends (by zero) to $TX$. \dfn{Cr}{We denote by $C^*_r(T^-X)$ the $C^*$-algebra generated by $\pi_0$ and $\pi_0^\partial$, i.e.~by the map $\Ctc(T^-X)\ni f\mapsto (\pi_0(f),\pi_0^\partial(f))\in \cL(L^2(TX^\circ)\oplus L^2(T^-X|_{\partial X}))$.} At first glance, this definition seems to overlook the operators of the form $\pi_0^\partial(K)$ in \ref{kappa} and operators of the form $\pi_0(f)$ and $\pi_0^\partial (f)$, where $f\in C^\infty_c(T\tilde{X})|_{TX}$. In fact, this is not the case. The second type of operators belongs to $C^*_r(T^-X)$, because we take the closure under the adjoint operation and addition. The reason that the first type of operators is in $C^*_r(T^-X)$ is the well-known relation between operators of half-convolution and Toeplitz operators, which we recall below. We denote by $\fT$ the algebra of all Toeplitz operators on $L^2(S^1)$ and by $\fT_0$ the ideal of all operators whose symbol vanishes at $-1$. \lemma{Toeplitz2}{Let $f\in C^\infty_c(\R).$ Then the operator $$L^2(\R_+)\ni\xi\mapsto \left(s\mapsto \int_0^\infty f(s-w)\xi(w)\,dw\right)\in L^2(\R_+)$$ is unitarily equivalent to the Toeplitz operator $T_\gvp$ with symbol $\gvp(z)=\hat f(i(z-1)/(z+1)).$ Note that $\gvp(-1)=\hat f(\infty)=0$. The $C^*$-algebra generated by the operators in the image of $C^\infty_c(\R)$ under this map is precisely the ideal $\fT_0$, while the compact operators in $\fT$ are generated by their commutators.} \Proof.
Plancherel's theorem shows that the above operator of half convolution is the truncated pseudodifferential operator with symbol $\hat f$, mapping $\xi\in L^2(\R_+)$ to $\op(\hat f)_+\xi(s)= \int e^{ist}\hat f(t)\widehat{(e^+\xi)}(t)\,dt$, where $e^+\xi$ is the extension (by zero) of $\xi$ to $\R$. Now one observes that the unitary $U:L^2(S^1)\to L^2(\R)$ given by $Ug(t)=\frac{\sqrt 2}{1+it}~g\left(\frac{1-it}{1+it}\right)$ maps the Hardy space $H^2$ to $F(L^2(\R_+))$ with the Fourier transform $F$, and that $\op(\hat f)_+$ is $F^{-1}UT_\gvp U^{-1}F$. See \cite[Section 2]{RS} for details. For the second statement, one first notes that the $C^*$-algebra generated by these operators is a subalgebra of $\fT_0$. On the other hand, $\fT_0$ consists of the operators of the form $T_\gvp+C$, where $\gvp\in C(S^1)$ vanishes at $-1$, and $C$ is compact. According to \cite[Proposition 7.12]{Douglas}, the commutators of all $T_\gvp$, $ \gvp\in C(S^1)$, generate the compacts, hence so do the commutators of those $T_\gvp$, where $\gvp$ vanishes at $-1$. As these $T_\gvp$ can be approximated by elements in the image of $C^\infty_c(\R)$, the proof is complete. \eproof \lemma{2.12}{We have a representation $\pi_0^\partial$ of $C^\infty_c(T\partial X\times \R_+\times \R_+)$ on $L^2(T^-X|_{\partial X})$ via \begin{eqnarray}\label{kappaK} \pi_0^\partial(K)\xi(m,v',v_n)=\int K(m,v'-w',v_n,w_n) \xi(m,w',w_n)\,dw'dw_n. \end{eqnarray} The closure of its range is isomorphic to $$J=C_0 (T^*\partial X)\otimes \cK (L^2(\R_+)).$$ $J$ is an ideal in $C_r^*(T^-X)$ generated by commutators of elements of the form $\pi_0^\partial(f)$. } \Proof. The algebraic tensor product $C^\infty_c(T\partial X)\otimes C^\infty_c(\R_+\times \R_+)$ is dense in $C^\infty_c(T\partial X\times \R_+\times \R_+)$.
Due to the continuity of $$\pi_0^\partial :C^\infty_c(T\partial X\times \R_+\times \R_+) \to \cL(L^2(T^-X|_{\partial X}))$$ it is sufficient to determine the closure of $\pi_0^\partial ( C^\infty_c(T\partial X)\otimes C^\infty_c(\R_+\times \R_+))$. It is clear that $\pi_0^\partial ( C^\infty_c(T\partial X)\otimes C^\infty_c(\R_+\times \R_+))\subseteq J$. In fact, we have equality, since the Fourier transform gives an isomorphism $C^*_r(T\partial X)\to C_0(T^*\partial X)$ and since a compact operator on $L^2(\R_+)$ can be approximated by a Hilbert-Schmidt operator, thus by an integral operator with kernel in $C^\infty_c(\R_+\times \R_+)$. In order to see that $J$ is contained in $C^*_r(T^-X)$, it is sufficient to approximate both factors of a pure tensor $h\otimes c$, where $h\in C_0(T^*\partial X)$ and $c\in \cK(L^2(\R_+))$. For the first task we choose a function in $C^\infty_c(T\partial X)$ whose fiberwise Fourier transform is close to $h$ in sup-norm. For the second, we refer to Lemma \ref{Toeplitz2}. In particular, we see that $J$ is also generated by commutators. A direct computation shows that $J$ is an ideal in $C^*_r(T^-X)$. \eproof\medskip \dfn{kappa_h}{We let $$\cut = C^\infty_c(TX)\oplus C^\infty_c(T \partial X\times \R_+\times\R_+).$$ This is a dense $*$-subalgebra of $C^*_r( T^-X)$. We will denote the product in this subalgebra by $*'$. \\ For $\hbar \not= 0$ we obtain representations of $C^\infty_c(\cT^-X)=C^\infty_c(\cT \widetilde{X})|_{\cT^-X}$ in $\cL(L^2(X))$ by: \begin{equation} \label{repsg} \pi_\hbar (f)\xi (m)=\frac{1}{\hbar^n}\int f(m,\tilde m,\hbar)\xi (\tilde m)d\tilde m. \end{equation} Note that these are the natural groupoid representations for $X\times X\times ]0,1]$. We denote by $C_r^*(\cT^-X)$ the reduced $C^*$-algebra generated by $\pi_\hbar$, $0\le \hbar\le 1$, and $\pi_0^\partial$. } For $X=\R^n_+$ we have $\cut =A(0)^\infty$, $C^*_r(T^-X)=A(0)$ and $C^*_r(\cT^-X)=A$.
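For the half-space, Lemma \ref{sesforA0} together with the Fourier-transform identifications stated after it thus describes $C^*_r(T^-X)=A(0)$ as an extension
$$0\longrightarrow C_0(T^*\R^{n-1})\otimes\cK(L^2(\R_+))\longrightarrow C^*_r(T^-\R^n_+)\longrightarrow C_0(T^*\R^n_+)\longrightarrow 0,$$
with ideal the boundary kernels (cf.\ Lemma \ref{2.12}) and quotient the interior symbols.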
Also there are evaluation maps $$\varphi_\hbar :C^*_r(\cT^- X)\to C^*_r(\cT^- X(\hbar )). $$ \thm{hoved}{We have \begin{eqnarray*} C^*_r(\cT^-X(\hbar ))&=&\cK (L^2(X)),\quad \hbar \not= 0;\\ C^*_r(\cT^- X(0))&=&C^*_r(T^-X).\end{eqnarray*} Moreover: $( C^*_r(\cT^- X),\{ C^*_r(\cT^- X(\hbar )),\varphi_\hbar\}_{\hbar \in [0,1]})$ is a continuous field of $C^*$-algebras. } The first two statements are obvious. For the proof of upper semi-continuity, we will essentially follow the ideas for the half-space case. Our first task is the construction of a representation of $\cut$. To this end, we will simply extend $f\in C^\infty_c(TX)$ and $K\in C^\infty_c(T\partial X\times\R_+\times \R_+)$ to functions $\tilde f$ and $ \tilde K$ on $\cT^-X$ as described below, then apply \eqref{repsg}. Choose a function $\psi \in C^\infty (X\times X)$ which is one on a neighborhood of the diagonal, $0\leq \psi \leq 1$, such that $$\exp :T^-X \rightarrow X\times X$$ maps a neighborhood of the zero section diffeomorphically to the support of $\psi$. For $f\in C^\infty_c(TX)$ we define $\tilde{f} \in C^\infty_c(\cT^-X)$ by \begin{equation} \label{udvidf} \tilde{f}(m,\tilde m,\hbar)=\psi (m,\tilde m) f \left( m, -\frac{\exp^{-1}(m,\tilde m)}{\hbar } \right). \end{equation} We next identify a neighborhood $U$ of $\partial X$ in $X$ with $\partial X\times [0,1[$ and write $U\ni m =(m',m_n)$ with $m'\in\partial X $ and $m_n\ge 0$. We also choose a function $\chi \in C^\infty_c(X)$ supported in $U$ with $0 \leq \chi \leq 1$ and $\chi\equiv 1$ near $\partial X$. For $K \in C^\infty_c(T\partial X \times \R_+\times\R_+ )$ we then define $\tilde{K} \in C^\infty_c(X\times X\times]0,1])$ by \begin{eqnarray}\label{udvidk} \tilde K(m,\tilde m,\hbar)= \chi(m)\chi(\tilde m)\psi(m,\tilde m)~K\left(m',-\frac{ \exp^{-1}(m',\tilde m')}\hbar ,\frac{m_n}{\hbar},\frac{\tilde m_n}{\hbar }\right).
\end{eqnarray} \rem{halfspace}{In the half-space case with the flat metric we have, for fixed $f$ and $K$, $$\pi_\hbar(\tilde f)=\rho_\hbar(f)\quad\text{and}\quad \pi_\hbar (\widetilde{K}) =\kappa_\hbar(K)$$ provided $\hbar$ is sufficiently small.} \cor{Relation_to_(0.1)}{We then obtain the analog of Property \eqref{0.1}: \begin{eqnarray} \lim_{\hbar\to 0} \|\pi_\hbar(\widetilde f)+\pi_\hbar(\widetilde K) \| =\max\{\|\pi_0(f)\|,\|\pi_0^\partial(f\oplus K)\|\}. \end{eqnarray} } \extra{metric}{Metrics}{The construction of $C^*_r(\cT^-X)$ and the extensions (\ref{udvidf}), (\ref{udvidk}) used a metric, but $C^*_r(\cT^-X)$ is independent of the choice: Let $\nu_1, \nu_2$ be two different metrics on $X$, and denote by $\mu_1,\mu_2$ the associated measures on $X$ as well as the fiberwise measures in $TX$. Let $k \in C^\infty (X)$ be given by $$\mu_1=k\mu_2.$$ Multiplication by $\sqrt{k}$ yields a unitary $$U: (L^2(X),\mu_1)\rightarrow (L^2(X),\mu_2),$$ and multiplication by $\sqrt{k(m)}$ a family of unitaries $$U_m :(L^2(T_m^-X),\mu_1) \rightarrow (L^2(T_m^-X),\mu_2).$$ We define $$\phi:C^\infty_c(\cT^-X)\to C^\infty_c(\cT^-X)$$ taking $f(m,v,0)$ to $f(m,v,0)k(m)$ for $\hbar=0$ and $f(m,\tilde m,\hbar)$ to $f(m,\tilde m,\hbar)\sqrt{k(m)k(\tilde m)}$, $\hbar\not=0$. Then $\pi^1_\hbar(f)=U^{-1}\pi_\hbar^2(\phi(f))U$, where $\pi^{1}_\hbar$ and $\pi_\hbar^2$ are the representations induced by $\mu_1$ and $\mu_2$. A corresponding relation holds for $\pi_0^\partial$. Hence $C^*_r(\cT^-X)$ is independent of the metric. } The following lemma clarifies the influence of the extension by different metrics. \begin{lemma}{metrik}{ Let $f\in C^\infty_c(TX)$. Denote by $\tilde{f}^i$ the extension of $f$ with respect to the metric $\nu_i$, $i=1,2$. Then $$\| \pi_\hbar (\phi (\tilde{f}^1)) -\pi_\hbar (\widetilde{\phi (f)}^2)\| \rightarrow 0 \hbox{ for } \hbar \rightarrow 0.$$ Here $\pi_\hbar$ is understood with respect to $\mu_2$.} \end{lemma} \Proof. 
This follows from Lemma \ref{vurdering}, since $\phi (\tilde{f}^1)-\widetilde{\phi (f)}^2$ is a function in $C^\infty_c(\cT^-X)$ which is zero at $\hbar =0$. \eproof \medskip A similar statement holds if we start with $K \in C^\infty_c(T\partial X \times \R_+\times\R_+ )$. \subsection*{Upper Semi-continuity} We again use the seminorm $$\| a\|_{as}=\max \{ \| \varphi_0(a)\| ,\limsup_{\hbar \to 0}\| \varphi_\hbar (a)\|\}$$ for elements in $C^*_r(\cT^-X)$ and introduce the analog of $A[0]$: $$C^*_?(T^-X)=C^*_r (\cT^- X)/I,$$ where $$I=\{ a \in C^*_r( \cT^- X)|~ \| a\|_{as} =0 \}.$$ The notation $C^*_? (T^-X)$ is justified by the following: \begin{prop}{fundamental}{The mappings $f\mapsto\tilde f$ and $K\mapsto\tilde K$ induce a $*$-homomorphism $\Psi$ from $(\cut,*')$ to $C^*_?(T^-X)$ with dense range, and we have \begin{eqnarray}\label{AsMult} \lim_{\hbar \to 0}\|\pi_\hbar(\tilde f)\pi_\hbar(\tilde g)-\pi_\hbar(\widetilde{f*'g})\|=0,\quad f,g\in C^\infty_c(TX). \end{eqnarray} } \end{prop} \Proof. Choose an open covering $\{ U_i\}$ of $X$, where each $U_i$ can be identified with an open subset of $\R^n$ or $\R^n_+$. By possibly shrinking the $U_i$, we may assume that the function $\psi$ used in \eqref{udvidf} and \eqref{udvidk} equals $1$ on $U_i\times U_i$ and that the function $\chi$ is $\equiv 1$ on $U_i$ whenever $U_i$ intersects the boundary. We also fix a subordinate partition of unity $\{\psi_i\}\subset C^\infty_c(U_i)$. For $f,g\in C^\infty_c(TX)$ we have $(\psi_if)*'g=(\psi_if)*'(\eta_ig)$ for each $\eta_i\in C^\infty_c(U_i)$ with $\psi_i\eta_i=\psi_i$. Moreover, $\pi_ \hbar(\widetilde {\psi_if})\pi_\hbar (\tilde g)= \pi_ \hbar(\widetilde {\psi_if})\pi_\hbar (\widetilde{\gt_i g})$ for suitable $\gt_i\in C^\infty_c(U_i)$, provided $\hbar $ is small.
Hence \begin{eqnarray}\label{normi} \lefteqn{ \| \pi_\hbar ( \widetilde{f*'g} )- \pi_\hbar (\tilde{f}) \pi_\hbar (\tilde{g})\| \leq \sum\left\|\pi_\hbar ( \widetilde{(\psi_if)*'g} ) -\pi_\hbar (\widetilde{\psi_if}) \pi_\hbar (\tilde{g})\right\|}\nonumber\\ &=&\sum \| \pi_\hbar (\widetilde{\psi_if*'\eta_ig}) -\pi_\hbar (\widetilde{\psi_if}) \pi_\hbar (\widetilde{\gt_ig})\|. \end{eqnarray} For sufficiently small $\hbar$, all operators will have support in $U_i\times U_i\times [0,1]$ so that we are working on Euclidean space. According to Lemma \ref{metrik} we can also, modulo terms converging to zero as $\hbar \rightarrow 0$, use the Euclidean metric. So we are precisely in the situation considered at the beginning of the section. The explicit computation shows that \begin{eqnarray}\label{diff} \pi_\hbar(\widetilde{f*'g})-\pi_\hbar(\tilde f)\pi_\hbar (\tilde g)=\rho_\hbar(f*g-f*_\hbar g)+\gk_\hbar(l(f,g)-l_\hbar(f,g)). \end{eqnarray} As $f*g-f*_\hbar g\in C^\infty_c(T\R^n_+\times[0,1])$ and $l(f,g)-l_\hbar(f,g)\in C^\infty(T\R^{n-1}\times\R_+\times\R_+\times[0,1])$ vanish for $\hbar=0$, the difference \eqref{diff} is in $I$ by Lemma \ref{vurdering}. Hence \eqref{normi} tends to zero, and $\Psi (f*' g)=\Psi (f)\Psi (g)$. The remaining $*$-algebra properties are checked similarly. In order to see that the image of $\Psi$ is dense in $C^*_? (T^-X)$, we simply note that the evaluation at $\hbar=0$ associates to an element $F$ in $C^\infty_c(\cT^-X)$ an element in $\cut$ whose extension via \eqref{udvidf}, \eqref{udvidk} induces the same element in $C^*_?(T^-X)$ by Lemma \ref{vurdering}.\eproof \rem{AM}{Property \eqref{AsMult} is the analog of the asymptotic multiplicativity \eqref{0.2} in the case of manifolds with boundary. In particular, we have established Theorem \ref{2.2}. } With Proposition \ref{fundamental}, the proof of the following theorem is analogous to that of Theorem \ref{rn}.
\thm{uppergeo}{ $( C^*_r(\cT^- X),\{ C^*_r(\cT^- X)(\hbar ),\varphi_\hbar\}_{\hbar \in [0,1]})$ is upper semi-continuous at $0$.} \forget{\extra{AM}{Asymptotic multiplicativity}{Proposition \ref{fundamental} in connection with Lemma \ref{vurdering} shows that \begin{eqnarray*} \lim_{\hbar \to 0}\|\rho_\hbar(f)\rho_\hbar(g)-\rho_\hbar({f*'g)}\|=0,\quad f,g\in C^\infty_c(\cT^-X), \end{eqnarray*} where $f*'g$ denotes the product of $f$ and $g$ in $C_r^*(\cT^-X)$. For $f,g\in C^*_r(\cT^-X)$ we choose sequences $(f_k), (g_k)$ in $C^\infty_c(T^-X).$ Upper semi-continuity then establishes asymptotic multiplicativity for $f,g\in C^*_r(\cT^-X)$.} } \subsection*{Lower Semi-continuity} As in the classical case \cite{LandsmanRamazan}, lower semi-continuity is proven by introducing strongly continuous representations using the groupoid structure. We split the representations into two: one taking care of the contribution from the interior of the manifold, i.e. the convolution part, and one taking care of the boundary part, i.e. half convolution and kernels on the boundary. For the lemmata below, we note that -- by construction -- $\pi_0$ and $\pi_0^\partial$ extend to $C^*_r(\cT^-X)$. \begin{lemma}{lavet}{$\liminf_{\hbar \to 0}\| \varphi_\hbar ( a)\|\geq \| \pi_0 (a)\|$ for all $a\in C^*_r(\cT^-X)$. } \end{lemma} \Proof. According to Proposition \ref{fundamental} it is sufficient to show that \forget{, the families of the form $\rho_\hbar (\tilde f+\tilde{K})$ form a dense subset of It follows from Proposition \ref{fundamental} that the families of the form $\rho_\hbar (f+\tilde{K})$, $\hbar \not= 0$ and $(\pi_0(f),\pi_0^\partial(f)+\pi_0^\partial(K))$, where $f\in C_c^\infty (\cT^-X)$, and $\tilde{K}$ is constructed from $K\in C^\infty_c(\cT\partial X \times \R_+\times\R_+)$ as in (\ref{udvidk}), form a dense subset of $C^*_r(\cT^-X)$. Indeed, the class of $a\in C^*_r(\cT^-X)$ in $C^*_? (T^-X)$ can be approximated by elements of the form $\rho_\hbar (f)+\rho_\hbar(\tilde{K})$.
By an analog of Lemma \ref{vurdering}, elements in $I$ can be approximated by elements in $C^\infty_c(TX\times [0,1])$ which vanish at $\hbar=0$. Hence it suffices to show} \begin{equation} \label{in} \lim_{\hbar \to 0}\| \rho_\hbar (\tilde f+\tilde{K} )\| \geq \| \pi_0 (f)\| \quad\text{for}~~f\oplus K\in \cut. \end{equation} For $g \in C^\infty_c (\cT^- X)$ define $$\| g \|_{\infty ,\hbar}^2 =\sup_{m\in X} \left\{ \frac{1}{\hbar^n}\int_X | g(x,m,\hbar )|^2 dx \right\} \hbox{ for } \hbar \not= 0,$$ and $$\|g \|^2_{\infty , 0} = \sup_{m \in X} \left\{ \int_{T_mX}|g(m,v,0 )|^2dv\right\} , \hbox{ for } \hbar=0.$$ Set $$\|g\|_\infty =\sup_{\hbar \in [0,1]}\|g \|_{\infty , \hbar}.$$ It is easily checked that \begin{eqnarray}\label{norm1h} \| \pi_\hbar (\tilde f+\tilde{K})\| =\sup \Big\{ \Big\|\frac 1 {\hbar^n} \int (\tilde f( \cdot ,m,\hbar)+\tilde{K}(\cdot,m,\hbar ))g( m,\cdot,\hbar )\,dm\Big\|_{\infty ,\hbar } ~\Big|~ \|g\|_\infty \leq 1\Big\} \end{eqnarray} for $\hbar \not= 0$, and \begin{eqnarray}\label{norm0} \|\pi_0 (f)\| = \sup \Big\{ \Big\| \int f(\cdot,v,0)g(\cdot ,\cdot-v,0)dv\Big\|_{\infty ,0} ~\Big|~ \|g\|_\infty \leq 1 \Big\}. \end{eqnarray} In fact, for \eqref{norm1h} we note that ``$\ge$'' follows from the estimate \begin{eqnarray*} \lefteqn{\left\|\frac1{\hbar^n}\int\tilde f(m_1,m,\hbar)g(m,m_2,\hbar)\, dm\right\|_{\infty,\hbar}^2 = \left\|\pi_\hbar(\tilde f)g(\cdot,m_2,\hbar)\right\|_{\infty,\hbar}^2}\\ &=&\sup_{m_2\in X}\frac1{\hbar^n} \left\|\pi_\hbar(\tilde f)g(\cdot,m_2,\hbar)\right\|_{L^2(X)}^2 \le \left\|\pi_\hbar(\tilde f)\right\|^2 \sup_{m_2\in X}\frac1{\hbar^n} \left\|g(\cdot,m_2,\hbar)\right\|_{L^2(X)}^2\\ &=&\left\|\pi_\hbar(\tilde f)\right\|^2\|g\|_{\infty,\hbar}^2\le \left\|\pi_\hbar(\tilde f)\right\|^2\|g\|_{\infty}^2 .
\end{eqnarray*} For the reverse inequality we choose $g(x,m,\hbar)=s(m)\xi(x)\hbar^n\gvp(\hbar)$, where $s\in C^\infty_c (X)$, $s\le 1$, $\|\xi\|_{L^2(X)}=1$ with $\|\pi_\hbar(\tilde f)\xi\|\ge \|\pi_\hbar(\tilde f)\|-\gve$, and $\gvp\in C^\infty_c(]0,1])$ is equal to one outside a neighborhood of zero. Equation \eqref{norm0} follows by a similar argument. Now suppose that $g\in C^\infty_c(\cT^-X)$ and $g(x,m,\hbar)=0$ for $x\in\partial X$. Then the weak convergence of $\tilde{K}$ towards zero implies that \begin{eqnarray*} \lim_{\hbar \rightarrow 0} \Big\| \frac 1 {\hbar^n} \int (\tilde f( \cdot ,m,\hbar )+ \tilde{K}(\cdot, m,\hbar ))g( m , \cdot,\hbar )\,dm\Big\|_{\infty , \hbar} &=& \| \int f(\cdot , v,0)g(\cdot ,\cdot-v,0)dv\|_{\infty ,0}. \end{eqnarray*} As the set of these $g$ is dense in $\{g\in C^\infty_c(\cT^-X)\ |\ \|g\|_\infty\le 1\}$, \eqref{in} follows. \eproof \begin{lemma}{lavto}{$\liminf_{\hbar \to 0} \| \varphi_\hbar (a)\| \geq \| \pi_0^\partial(a)\|$ for all $a\in C^*_r(\cT^-X)$.} \end{lemma} \Proof. As in the proof of Lemma \ref{lavet} we only have to show that \begin{equation} \label{into} \liminf_{\hbar \to 0} \| \rho_\hbar (\tilde f+\tilde{K})\| \geq \| \pi_0^\partial (f\oplus K)\|, \end{equation} for $f \in C^\infty_c(TX)$ and $K\in C^\infty_c(T\partial X\times \R_+\times\R_+ )$. We let $P_\hbar$ be the projection in $L^2(X)$ given by multiplication by the characteristic function of $\partial X\times[0,a_\hbar[$, where $$a_\hbar \rightarrow 0 \hbox{ for } \hbar \rightarrow 0 \text{~ and ~} \frac{a_\hbar} \hbar \rightarrow \infty \hbox{ for } \hbar \rightarrow 0. $$ As $\|P_\hbar \pi_\hbar (\tilde{f}+\tilde{K}) P_\hbar \| \leq \| \pi_\hbar(\tilde{f}+\tilde{K}) \| $, it is enough to show that $$\liminf_{\hbar \rightarrow 0} \| P_\hbar \pi_\hbar(\tilde f+\tilde{K}) P_\hbar \|\geq \| \pi_0^\partial(f\oplus K)\|.$$ Since we are free to choose a metric, we fix a metric on $\partial X$ and the standard metric on $[ 0,a_\hbar [$.
As in the proof of Lemma \ref{lavet}, we equip the space $C^\infty_c(\cT \partial X \times [0,\infty [)$ with norms \mbox{$\| \cdot \|_{\infty ,\hbar}$}, \mbox{$\|\cdot\|_\infty$}, defined like the norms before, but on $\cT \partial X$ instead of $\cT^- X$, and combined with the $L^2$-norm on $[0,\infty[$. For $f\in C^\infty_c(\cT^-X)$ and $K\in C^\infty_c (T \partial X \times \R_+\times\R_+)$ we define representations on $C^\infty_c(\cT \partial X \times [0,\infty [)$ by \begin{eqnarray*} \eta_\hbar (f)g(m_1,m_2,\hbar, b)&=&\frac 1 {\hbar^{n-1}}\int_{a\in [0,\frac{a_\hbar}\hbar]} f(m_1,\hbar b,m,\hbar a,\hbar)g( m, m_2 , \hbar, a )dmda,\ \ b\in[0,\frac{a_\hbar}{\hbar}[,\ \hbar\not=0; \\ \eta_0 (f)g(m_1,v,0,b)&=&\int_{T_{m_1}\partial X\times \R_+}f(m_1,0,v-w,b-a,0)g(m_1,w,0,a)\,dwda;\\ \eta_0 (K)g(m_1,v,0,b)&=&\int_{T_{m_1}\partial X\times \R_+} K(m_1,v-w,b,a)g(m_1,w,0,a)\, dwda. \end{eqnarray*} Note that $\|P_\hbar \pi_\hbar(f) P_\hbar\|=\|D_{\hbar} P_\hbar \pi_\hbar (f)P_\hbar D_{\hbar^{-1}}\|=\sup \{ \|\eta_\hbar (f)g\|_{\infty ,\hbar}~|~\|g\|_\infty \leq 1 \}$, where $D_\hbar$ is the dilation operator in the normal direction, given by $D_\hbar f(x',x_n)=f(x',\hbar x_n)$.
As before, $$ \| \pi_0^\partial (f\oplus K)\| =\sup \{ \|\eta_0 (f\oplus K)g\|_{\infty,0} ~|~\|g\|_\infty \leq 1\}.$$ Plugging in the definitions of $\tilde{f}$ and $\tilde{K}$ (omitting the cut-off functions) we get $$\eta_\hbar (\tilde{f})g(m_1,m_2,\hbar, b)=\frac{1}{\hbar^{n-1}}\int_{[0,\frac{a_\hbar }{\hbar}]}f\left( m_1,\hbar b,-\frac{\exp^{-1} (m_1,m)}{\hbar} ,b-a\right) g(m,m_2,\hbar,a) dmda$$ and $$ \eta_\hbar (\tilde{K})g(m_1,m_2,\hbar,b)=\frac{1}{\hbar^{n-1}}\int_{[0,\frac{a_\hbar }{\hbar}]}K\left( m_1, b,-\frac{\exp^{-1} (m_1,m)}{\hbar} ,a\right) g(m,m_2,\hbar,a)dmda. $$ Using dominated convergence and the fact that $g$ for small $\hbar$ looks like $g_0(m,-\frac{\exp^{-1} (m,m_2)}{\hbar} ,\hbar ,a)$, $g_0\in C^\infty_c(T\partial X \times [0,1]\times [0,\infty [)$, we get $$\lim_{\hbar \to 0} \| \eta_\hbar (\tilde f+\tilde{K})g\|_{\infty,\hbar}=\|\eta_0 (f\oplus K)g\|_{\infty,0},$$ and (\ref{into}) follows. \eproof \\ Lemmas \ref{lavet} and \ref{lavto} imply that $\liminf_{\hbar \to 0} \| \varphi_\hbar (a)\|\geq \|\varphi_0 (a)\|,$ i.e. \thm{nedad}{ $( C^*_r(\cT^- X),\{ C^*_r(\cT^- X)(\hbar ),\varphi_\hbar\}_{\hbar \in [0,1]})$ is lower semi-continuous at $0$.} This finishes the proof of Theorem \ref{hoved}. \section{$K$-theory of the Symbol Algebra $C^*_r(T^-X)$} \setcounter{equation}{0} $C_c^\infty (TX^\circ)$ with the fiberwise convolution product is a $*$-ideal of $\cut$. After completion, $C^*_r(TX^\circ )$ becomes a closed ideal of $C^*_r(T^-X)$, and we have a short exact sequence \begin{equation} \label{foelge} 0 \to C^*_r(TX^\circ) \to C^*_r(T^-X) \to C^*_r(T^-X)/ C^*_r(TX^\circ)\to 0. \end{equation} \prop{Toeplitz}{ The quotient $Q= C^*_r(T^-X)/ C_r^*(TX^\circ)$ is naturally isomorphic to $C_0(T^*\partial X)\otimes \fT_0$ for the ideal $\fT_0$ of the Toeplitz algebra introduced before Lemma \ref{Toeplitz2}. } \Proof.
Define $$\Psi:C_c^\infty(T^-X)\to \cL(L^2(T^-X|_{\partial X}))\text{~ by ~}\Psi(f\oplus K)=\pi_0^\partial(f)+\pi_0^\partial(K)$$ with the maps in \eqref{kappaf} and \eqref{kappaK}. This is a $*$-homomorphism with respect to $*'$, and $C^\infty_c(TX^\circ)$ is in its kernel. We first show that $\ker\Psi=C^*_r(TX^\circ)$: Since $C_r^*(T^-X)$ is the closure of $C^\infty_c(T^-X)$ with respect to the norm $$\|f\oplus K\|=\max\{\|\pi_0(f)\|,\|\pi_0^\partial(f)+\pi_0^\partial(K)\|\},$$ and $C_r^*(TX^\circ)$ is the closure of $C^\infty_c(TX^\circ)$ with respect to $\|\pi_0(f)\|$, we have $C^*_r(TX^\circ)\subseteq \ker\Psi$. On the other hand, suppose that $a\in\ker\Psi$; i.e., $a$ is the equivalence class of a Cauchy sequence $(f_k\oplus K_k)\in C^\infty_c(T^-X)$ with $\pi_0^\partial(f_k)+\pi_0^\partial(K_k)\to 0$. We next note that \begin{eqnarray*} \|\pi_0(f_k)\|&=&\sup\{|\hat f_k(m,\gs)|~|~(m,\gs)\in T^*X\}\quad\text{and}\\ \|\pi_0^\partial(f_k)+\pi_0^\partial(K_k)\|&\ge& \sup\{|\hat f_k(m,\gs)|~|~(m,\gs)\in T^*X|_{\partial X}\}. \end{eqnarray*} Indeed, the first relation follows from the fact that, via fiberwise Fourier transform, $\pi_0(f_k)$ is equivalent to multiplication by $\hat f_k(m,\sigma)$. For the second, we observe first that $\|\pi_0^\partial(f_k)\|=\sup\{|\hat f_k|\}$ as a consequence of the fact that translation of $\xi=\xi(m,w)$ in the direction of $w_n$ preserves $\|\pi_0^\partial(f_k)\xi\|$ in $L^2(T^-X|_{\partial X})$. On the other hand, $\pi_0^\partial(K_k)\xi=0$ provided we translate sufficiently far. Hence $\|\pi_0^\partial(f_k)+\pi_0^\partial(K_k)\|\ge \|\pi_0^\partial(f_k)\|$. We conclude that the fiberwise Fourier transforms $\hat f_k$ tend to zero uniformly on $T^*X|_{\partial X}$. Hence the Cauchy sequence $(f_k)$ may be replaced by an equivalent Cauchy sequence $(g_k)$ with $g_k\in C^\infty_c(TX^\circ)$. We conclude that $\pi_0^\partial(K_k)\to 0$ so that $(K_k)\sim 0$, and therefore $\ker\Psi\subseteq C^*_r(TX^\circ)$.
Hence $\Psi$ descends to an injective $C^*$-morphism on $Q$; in particular, it has closed range. Now we observe that we have a natural identification of $TX|_{\partial X}$ with $T\partial X\times \R$ and consequently of $T^-X|_{\partial X}$ with $T\partial X\times \R_-$. Hence $\cL(L^2(T^-X|_{\partial X}))\cong \cL(L^2(T\partial X)\otimes L^2(\R_-)).$ Suppose that, at the boundary, $f\in C^\infty_c(TX)$ is of the form $f(x',0,v',v_n)= g(x',v')h(v_n)$ with $g\in C^\infty_c(T\partial X)$ and $h\in C_c^\infty(\R)$. Then $\pi_0^\partial(f)=\pi_0^{\partial,0} (g)\otimes\pi_0^{\partial,n} (h)$, where $\pi_0^{\partial,0} $ is the convolution operator by $g$, acting on $L^2(T\partial X)$, while $\pi_0^{\partial,n} (h)$ is the operator of half convolution acting on $L^2(\R_-)$ (note that $\R_-\cong T^-\R_+|_{\{0\}}$). Via Fourier transform, the operator $\pi_0^{\partial,0} (g)$ is unitarily equivalent to multiplication by $\hat g\in C_0(T^*\partial X)$, while, according to Lemma \ref{Toeplitz2}, $\pi_0^{\partial,n} (h)$ is unitarily equivalent to a Toeplitz operator in $\fT_0$. The closure of the image of the span of the pure tensors thus gives us $C_0(T^*\partial X)\otimes \fT_0$. We know already from Lemma \ref{2.12} that -- via the Fourier transform -- the image of $C^\infty_c(T\partial X\times\R_+\times\R_+)$ can also be identified with a subset of $C_0(T^*\partial X)\otimes \cK\subseteq C_0(T^*\partial X)\otimes \fT_0$. This completes the argument. \eproof \thm{K}{Via fiberwise Fourier transform $C^*_r(TX^\circ )$ can be identified with $C_0(T^*X^\circ)$ and the inclusion $ C_0(T^*X^\circ) \cong C^*_r(TX^\circ )\hookrightarrow C^*_r(T^-X)$ induces an isomorphism of K-groups $$K_i ( C^*_r(T^-X))\cong K_i ( C_0(T^*X^\circ)),\quad i=0,1.$$} \Proof. It is well-known (or easily checked) that $K_i(\fT_0 )=0$, $i=0,1$. Thus it follows from the Künneth formula that $K_i(C_0(T^*\partial X )\otimes \fT_0)=0$, $i=0,1$.
The result is now a consequence of \eqref{foelge} and the associated six-term exact sequence. \eproof
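For the reader's convenience, here is the routine computation behind the fiberwise Fourier transform identification used above. For $f\in C^\infty_c(TX^\circ)$ and $\xi\in L^2(T_mX)$, the formula for $\pi_0$ in \eqref{norm0} reads fiberwise $$(\pi_0(f)\xi)(v)=\int_{T_mX} f(m,w,0)\,\xi(v-w)\,dw, \qquad\text{hence}\qquad (\mathcal F\pi_0(f)\xi)(\sigma)=\hat f(m,\sigma)\,(\mathcal F\xi)(\sigma)$$ for a suitably normalised fiberwise Fourier transform $\mathcal F$. Since $\mathcal F$ is unitary, $\pi_0(f)$ is fiberwise unitarily equivalent to multiplication by $\hat f(m,\cdot\,)$, so that $\|\pi_0(f)\|=\sup\{|\hat f(m,\sigma)|~|~(m,\sigma)\in T^*X^\circ\}$, and the completion $C^*_r(TX^\circ)$ identifies with $C_0(T^*X^\circ)$.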
TITLE: Inclined Stationary Bike QUESTION [1 upvotes]: Why would it make you work harder if you incline a stationary exercise bike? What's the physics involved? Assuming all other factors remain constant and only the incline changes, why would you burn more calories? REPLY [2 votes]: A stationary exercise bicycle presents a load to your legs which is entirely frictional and is generated either by a drag brake acting on the wheel, a fan that stirs up air, or an electromagnet that induces eddy currents in the wheel rim. None of these friction sources has anything to do with gravity; they would all work just fine in deep space -- and therefore the load on your legs is independent of the direction of the gravity vector. This means that tilting the stationary exercise bike one way or another will have no effect at all on the difficulty of turning the pedals on it.
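The gravity-independence claim in this answer can be made concrete with a toy model. This is an illustrative sketch only: the constant drag-torque brake model and all numbers are made up, and `pedal_power` is a hypothetical helper, not a formula from any cited source.

```python
import math

def pedal_power(cadence_rpm, drag_torque_nm, incline_deg):
    """Power needed to spin a friction-braked flywheel at a given cadence.

    The flywheel is balanced, so gravity acts through the axle and exerts
    no torque about it at any tilt -- hence `incline_deg` is deliberately
    unused (it is kept only to mirror the question being asked).
    """
    omega = cadence_rpm * 2.0 * math.pi / 60.0   # cadence in rad/s
    return drag_torque_nm * omega                # P = tau * omega, in watts

flat = pedal_power(90, 20.0, incline_deg=0)
tilted = pedal_power(90, 20.0, incline_deg=15)
print(flat == tilted)  # True: the incline never enters the power balance
```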
TITLE: How do we get $\nabla \cdot \mathbf{B} = 0$ from $\nabla \times \mathbf{E} = - \dfrac{\partial}{\partial{t}}\mathbf{B}$? QUESTION [0 upvotes]: I am currently studying the textbook Physics of Photonic Devices, second edition, by Shun Lien Chuang. Section 2.1.1 Maxwell's Equations in MKS Units says the following: The well-known Maxwell's equations in MKS (meter, kilogram, and second) units are written as $$\nabla \times \mathbf{E} = - \dfrac{\partial}{\partial{t}}\mathbf{B} \ \ \ \ \text{Faraday's law} \tag{2.1.1}$$ $$\nabla \times \mathbf{H} = \mathbf{J} + \dfrac{\partial{\mathbf{D}}}{\partial{t}} \ \ \ \ \text{Ampère's law} \tag{2.1.2}$$ $$\nabla \cdot \mathbf{D} = \rho \ \ \ \ \text{Gauss's law} \tag{2.1.3}$$ $$\nabla \cdot \mathbf{B} = 0 \ \ \ \ \text{Gauss's law} \tag{2.1.4}$$ where $\mathbf{E}$ is the electric field (V/m), $\mathbf{H}$ is the magnetic field (A/m), $\mathbf{D}$ is the electric displacement flux density (C/m$^2$), and $\mathbf{B}$ is the magnetic flux density (Vs/m$^2$ or Webers/m$^2$). The two source terms, the charge density $\rho$(C/m$^3$) and the current density $\mathbf{J}$(A/m$^2$), are related by the continuity equation $$\nabla \cdot \mathbf{J} + \dfrac{\partial}{\partial{t}}\rho = 0 \tag{2.1.5}$$ where no net generation or recombination of electrons is assumed. In the study of electromagnetics, one usually assumes that the source terms $\rho$ and $\mathbf{J}$ are given quantities. It is noted that (2.1.4) is derivable from (2.1.1) by taking the divergence of (2.1.1) and noting that $\nabla \cdot (\nabla \times \mathbf{E}) = 0$ for any vector $\mathbf{E}$. So we are told that Gauss's law $\nabla \cdot \mathbf{B} = 0$ is derivable from Faraday's law $\nabla \times \mathbf{E} = - \dfrac{\partial}{\partial{t}}\mathbf{B}$ by taking the divergence and noting that $\nabla \cdot ( \nabla \times \mathbf{E}) = 0$ for any vector $\mathbf{E}$.
I did not understand why/how $\nabla \cdot ( \nabla \times \mathbf{E}) = 0$, nor did I understand how we then get $\nabla \cdot \mathbf{B} = 0$ from $\nabla \times \mathbf{E} = - \dfrac{\partial}{\partial{t}}\mathbf{B}$. So, since this seemed like a mathematical issue, I asked here. However, based on the comments, I am told that this actually isn't valid. So is the author just wrong here? From the above point, the author then continues as follows: Similarly, (2.1.3) is derivable from (2.1.2) using (2.1.5). Thus, we have only two independent vector equations (2.1.1) and (2.1.2), or six scalar equations as each vector has three components. However, there are $\mathbf{E}$, $\mathbf{H}$, $\mathbf{D}$, and $\mathbf{B}$, 12 scalar unknown components. Thus, we need six more scalar equations. These are the so-called constitutive relations that describe the properties of a medium. In isotropic media, they are given by $$\mathbf{D} = \epsilon \mathbf{E} \ \ \ \ \ \ \ \ \ \ \mathbf{B} = \mu \mathbf{H} \tag{2.1.6}$$ In anisotropic media, they may be given by $$\mathbf{D} = \epsilon \cdot \mathbf{E} \ \ \ \ \ \ \ \ \ \ \mathbf{B} = \mu \cdot \mathbf{H} \tag{2.1.7}$$ where $\epsilon$ is the permittivity tensor and $\mu$ is the permeability tensor: $$\epsilon = \begin{bmatrix} \epsilon_{xx} & \epsilon_{xy} & \epsilon_{xz} \\ \epsilon_{yx} & \epsilon_{yy} & \epsilon_{yz} \\ \epsilon_{zx} & \epsilon_{zy} & \epsilon_{zz} \end{bmatrix} \ \ \ \ \ \ \ \ \ \ \mu = \begin{bmatrix} \mu_{xx} & \mu_{xy} & \mu_{xz} \\ \mu_{yx} & \mu_{yy} & \mu_{yz} \\ \mu_{zx} & \mu_{zy} & \mu_{zz} \end{bmatrix} \tag{2.1.8}$$ For electromagnetic fields at optical frequencies, $\rho = 0$ and $\mathbf{J} = 0$. Is all of this correct? REPLY [2 votes]: Taking the divergence of both sides gives $$ \nabla \cdot (\nabla \times \vec{E}) = \nabla \cdot \frac{\partial \vec{B}}{\partial t}$$ The left hand side is zero. This is a vector calculus identity that you can check by writing out the derivatives.
On the right hand side you can rearrange the space and time derivatives to get $$ \frac{\partial (\nabla \cdot \vec{B})}{\partial t} = 0$$ If the result is integrated with respect to time and the constant of integration is taken to be 0, this is identical to Gauss's law $\nabla \cdot \vec{B} = 0$. In essence, Faraday's law states that there is no change in the divergence of $\vec{B}$. Gauss's law puts a stronger constraint: it states that the divergence of $\vec{B}$ is always 0. So while they are not exactly independent, I wouldn't go so far to say that all the information in Gauss's law is encapsulated in Faraday's law.
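Both steps of this answer can be checked symbolically for a fully generic field. The snippet below is a sketch using sympy (an assumption -- any computer algebra system works), with hand-rolled `curl`/`div` helpers rather than a particular vector module:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

def curl(F):
    Fx, Fy, Fz = F
    return (sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y))

def div(F):
    Fx, Fy, Fz = F
    return sp.diff(Fx, x) + sp.diff(Fy, y) + sp.diff(Fz, z)

# Step 1: div(curl E) = 0 for arbitrary smooth components E(x, y, z, t),
# because mixed partial derivatives commute.
E = tuple(sp.Function(name)(x, y, z, t) for name in ('Ex', 'Ey', 'Ez'))
identity = sp.simplify(div(curl(E)))
print(identity)  # 0

# Step 2: divergence commutes with d/dt, so taking div of Faraday's law
# yields d/dt (div B) = 0.
B = tuple(sp.Function(name)(x, y, z, t) for name in ('Bx', 'By', 'Bz'))
lhs = div(tuple(sp.diff(c, t) for c in B))   # div(dB/dt)
rhs = sp.diff(div(B), t)                     # d/dt (div B)
print(sp.simplify(lhs - rhs))  # 0
```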
TITLE: Can a moving magnetic field do work on a charged particle? QUESTION [0 upvotes]: Imagine a charged particle suspended between 2 horizontal magnetic plates, which create a uniform magnetic field. Now instantaneously, the particle is accelerated to velocity $v$. By my understanding, the particle will now start doing uniform circular motion, due to the Lorentz force. However, what if the magnetic field is moving, instead of the charged particle? Instead of the particle being accelerated, now the magnetic plates are instantly accelerated to velocity $v$. What happens to the charged particle? Does it now undergo that same uniform circular motion (in the magnetic plates' reference frame), and in doing so, does it "keep up" with the magnetic plates? Or does it get accelerated by the Lorentz force, but eventually falls out of the magnetic field? If 1 is what happens, how did the particle gain kinetic energy? It went from stationary to moving, but I was taught that magnetic field cannot do work on charged particles. So then what did the work on the particle? If 2 is what happens, then why is this scenario any different than accelerating the particle instead of the plates? Shouldn't those two scenarios be the same in the magnetic plates' reference frame? REPLY [0 votes]: Focus on a short time interval in which the particle moves with constant velocity. In the magnet frame there is a current component parallel to its velocity. In the rest frame of the particle a charge density appears due to this current component. A net electric field appears perpendicular to the velocity and the particle is deflected. It will move in an orbit that can be derived from its orbit in the magnet frame by directly transforming it. To answer the unrelated question in the title: no work is performed.
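A complementary route to the appearing electric field (different from, but consistent with, the charge-density argument in this answer) is the standard low-velocity field-transformation rule $\mathbf E' \approx \mathbf E + \mathbf u \times \mathbf B$, with $\mathbf u$ the relative frame velocity. A quick symbolic sketch -- sympy is assumed available, and the sign convention for which frame is "primed" is glossed over -- showing the resulting field is perpendicular to both $\mathbf u$ and $\mathbf B$:

```python
import sympy as sp

u_sym, B_sym = sp.symbols('u B', positive=True)

u = sp.Matrix([u_sym, 0, 0])     # relative velocity of the two frames, along x
B = sp.Matrix([0, 0, B_sym])     # uniform magnetic field, along z

# Low-velocity field transformation with E = 0 in the magnet frame:
# E' ~ u x B, valid to first order in u/c.
E_prime = u.cross(B)

print(E_prime.T)  # purely along y: perpendicular to both u and B
assert E_prime.dot(u) == 0 and E_prime.dot(B) == 0
```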
\begin{document} \title{Trivial source endo-trivial modules for finite groups with semi-dihedral Sylow $2$-subgroups} \date{\today} \author{{Shigeo Koshitani and Caroline Lassueur}} \address{{\sc Shigeo Koshitani}, Center for Frontier Science, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan.} \email{koshitan@math.s.chiba-u.ac.jp} \address{{\sc Caroline Lassueur}, FB Mathematik, TU Kaiserslautern, Postfach 3049, 67653 Kaiserslautern, Germany.} \email{lassueur@mathematik.uni-kl.de} \thanks{ The first author was partially supported by the Japan Society for Promotion of Science (JSPS), Grant-in-Aid for Scientific Research (C) 19K03416, 2019--2021. The second author acknowledges financial support by DFG SFB/TRR 195. This piece of work is part of Project A18 thereof. } \keywords{Endo-trivial modules, semi-dihedral 2-groups, Schur multiplier, trivial source modules, $p$-permutation modules, special linear and unitary groups} \subjclass[2010]{Primary: 20C20. Secondary: 20C25, 20C33, 20C34, 20J05} \begin{abstract} We finish off the classification of the endo-trivial modules of finite groups with Sylow $2$-subgroups isomorphic to a semi-dihedral $2$-group started by Carlson, Mazza and Th\'{e}venaz in their article \textit{Endotrivial modules over groups with quaternion or semi-dihedral Sylow $2$-subgroup} published in 2013. \end{abstract} \dedicatory{Dedicated to Jon Carlson on the occasion of his 80th Birthday and to Jacques Th\'{e}venaz on the occasion of his 70th Birthday.} \maketitle \pagestyle{myheadings} \markboth{S. Koshitani and C. Lassueur}{Endo-trivial modules for groups with semi-dihedral Sylow $2$-subgroups} \section{Introduction} Endo-trivial modules play an important r\^{o}le in the representation theory of finite groups, for instance in the description of different types of equivalences between block algebras. These modules were introduced by E. C. Dade in 1978 \cite{DADE78a,DADE78b}.
They have been intensively studied since the beginning of the century and were classified in a number of special cases: e.g. for groups with cyclic, generalised quaternion, Klein-four or dihedral Sylow subgroups, for $p$-soluble groups, for the symmetric and the alternating groups and their Schur covers, for the sporadic groups and their Schur covers, or for some infinite families of finite groups of Lie type. We refer the reader to the recent survey book \cite{MazzaBook} by N. Mazza and the references therein for a complete introduction to this theory.\par An \textit{endo-trivial} module over the group algebra $kG$ of a finite group $G$ over an algebraically closed field~$k$ of prime characteristic~$p$ is by definition a $kG$-module whose $k$-endomorphism algebra is trivial in the stable module category. The set of isomorphism classes of indecomposable endo-trivial $kG$-modules forms an abelian group under the tensor product $\otimes_k$, which is denoted $T(G)$, and this group is known to be finitely generated by work of Puig. One of the central questions in this theory is to understand the structure of the group $T(G)$, and, in particular, of its torsion subgroup~$TT(G)$.\par Now, letting $X(G)$ be the subgroup of $TT(G)$ consisting of all one-dimensional $kG$-modules and $K(G)$ be the subgroup of $T(G)$ consisting of all the indecomposable endo-trivial $kG$-modules which are at the same time trivial source $kG$-modules, we have $$\Hom(G,k^\times)\cong X(G)\subseteq K(G)\subseteq TT(G)$$ and $K(G)=TT(G)$ unless a Sylow subgroup is cyclic, generalised quaternion or semi-dihedral (see \cite[Chapter~5]{MazzaBook}). Although it often happens that $X(G)=K(G)$, in general $X(G)\lneq K(G)$. Furthermore, we emphasise that the determination of the structure of the endo-trivial modules lying in $K(G)\setminus X(G)$ is a very hard problem, to which, to date, no general solution is known.
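To fix ideas, the bottom layer $X(G)$ is easy to work out in small examples; the following standard toy computation is easily checked from the definitions. For $p=2$ and $G=\mathfrak{A}_4$, the derived subgroup $[\mathfrak{A}_4,\mathfrak{A}_4]$ is the Klein four group, so $$X(\mathfrak{A}_4)\cong \Hom(\mathfrak{A}_4,k^\times)\cong \big(\mathfrak{A}_4/[\mathfrak{A}_4,\mathfrak{A}_4]\big)_{2'}\cong \IZ/3\IZ\,,$$ since $k$, being algebraically closed of characteristic~$2$, contains three cube roots of unity; thus $\mathfrak{A}_4$ has exactly three one-dimensional $k\mathfrak{A}_4$-modules.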
Most of the work that has been done in previous articles provides case-by-case solutions to the determination of the abelian group structure of $K(G)$, but in the vast majority of the cases does not provide information about the structure of the modules in $K(G)\setminus X(G)$.\par In \cite{CMT11quat} Carlson-Mazza-Th\'{e}venaz essentially described the structure of the group $T(G)$ of endo-trivial modules for groups with a semi-dihedral Sylow $2$-subgroup. However, they left open the question of computing the trivial source endo-trivial modules, i.e. the determination of the structure of the subgroup $K(G)$ of $T(G)$. The purpose of the present article is to finish off the determination of $K(G)$ in this case. In order to reach this aim we use three main ingredients, two of which were not available when \cite{CMT11quat} was published: \begin{enumerate} \item[1.] The first one is a method we developed in \cite{KoLa15} in order to treat finite groups with dihedral Sylow $2$-subgroups, extended in \cite{LT17etcentralext} to a more general method to relate the structure of $T(G)$ to that of $T(G/O_{p'}(G))$, which allows us to reduce the problem to groups with $O_{2'}(G)=1$. \item[2.] The second one is the classification of finite groups with semi-dihedral Sylow $2$-subgroups modulo $O_{2'}(G)$ due to Alperin-Brauer-Gorenstein \cite{ABG}. \item[3.] The third main ingredient relies on major new results obtained by J. Grodal through the use of homotopy theory in \cite{GrodalET}, or more precisely on a slight extension of the main theorem of \cite{GrodalET} recently obtained by D. Craven in \cite{CravenET}, which provides us with purely group-theoretic techniques to deal with Grodal's description of $K(G)$ in \cite[Theorem 4.27]{GrodalET}. The latter results will in particular enable us to treat families of groups related to the finite groups of Lie type $\SL_3(q)$ with $q\equiv 3\pmod{4}$ and $\SU_3(q)$ with $q\equiv 1\pmod{4}$.
\end{enumerate} \noindent With these tools, our main result is a description of the structure of the group of endo-trivial modules for groups with a semi-dihedral Sylow $2$-subgroup as follows: \begin{thma}\label{thm:intro} Let $k$ be an algebraically closed field of characteristic~$2$ and let $G$ be a finite group with a Sylow $2$-subgroup $P\cong \mathrm{SD}_{2^m}$ of order $2^m$ with $m\geq 4$. Then the following assertions hold. \begin{enumerate} \item[\rm(a)] If $G/O_{2'}(G)\ncong \PGL^{\ast}_2(9)$, then $T(G)\cong X(G)\oplus \IZ/2\IZ\oplus \IZ$. \item[\rm(b)] If $G/O_{2'}(G)\cong \PGL^{\ast}_2(9) \cong \fA_6.2_3\cong\mathrm{M}_{10}$, then $T(G)\cong K(G)\oplus \IZ/2\IZ\oplus \IZ$ where $$K(G)/X(G)\leq \IZ/3\IZ\,.$$ Moreover, $K(G)/X(G)=1$ if $G=\PGL^{\ast}_2(9)$ and the bound $K(G)/X(G)\cong \IZ/3\IZ$ is reached by the group $G=3.\PGL^{\ast}_2(9)$. \end{enumerate} \end{thma} We point out that in Theorem~\ref{thm:intro} in both cases the summand $\IZ$ is generated by the first syzygy module of the trivial $kG$-module and the summand $\IZ/2\IZ$ is generated by a torsion endo-trivial module which is explicitly determined in \cite[Proposition~6.4]{CMT11quat}. \par The paper is built up as follows. In Section 2 we introduce our notation. In Section 3 we quote the main results on endo-trivial modules which we will use to prove Theorem~\ref{thm:intro}. In Section 4 we state and prove preliminary results on groups with semi-dihedral Sylow $2$-subgroups. In Section 5 we compute the trivial source endo-trivial modules for the special linear and special unitary groups and finally in Section 6 we prove Theorem~\ref{thm:intro}. \vspace{2mm} \section{Notation and definitions} \enlargethispage{5mm} Throughout this article, unless otherwise specified we adopt the following notation and conventions. All groups considered are assumed to be finite and all modules over finite group algebras are assumed to be finitely generated left modules. 
We let $k$ denote an algebraically closed field of prime characteristic $p$ and $G$ be a finite group of order divisible~by~$p$.\\ We denote by $\mathrm{SD}_{2^m}$ the semi-dihedral group of order $2^m$ with $m\geq 4$, by $C_a$ the cyclic group of order $a\geq 1$, by $\mathfrak A_a$ and $\mathfrak S_a$ the alternating and the symmetric groups on $a$ letters, and we refer to the ATLAS \cite{ATLAS} and \cite{ABG} for the definitions of further standard finite groups that occur in the statements of our main results. In particular, for a prime power~$q$ and $\varepsilon\in\{-1,1\}$, we set $\SL^{\pm1}_2(q):=\{A\in \GL_2(q)\mid \det(A)=\pm1\}$, $\SU^{\pm1}_2(q):=\{A\in \GU_2(q)\mid \det(A)=\pm1\}$, $\PSL^{\varepsilon}_n(q)=\PSL_n(q)$ (resp. $\GL_n^{\varepsilon}(q)=\GL_n(q)$, $\SL_n^{\varepsilon}(q)=\SL_n(q)$) if $\varepsilon=1$ and $\PSL_n^{\varepsilon}(q)=\PSU_n(q)$ (resp. $\GL_n^{\varepsilon}(q)=\GU_n(q)$, $\SL_n^{\varepsilon}(q)=\SU_n(q)$) if $\varepsilon=-1$. For $\emptyset \,{\not=}\,S \subseteq G\ni g,x$ we write $^g{\!}S:= \{ gsg^{-1} | s\in S\}$ and $x^g:= g^{-1}xg$. We let $O_{p'}(G)$, resp. $O_p(G)$, be the largest normal $p'$-subgroup, resp. $p$-subgroup, of $G$ and $O^{p'}(G)$ the smallest normal subgroup of $G$ whose quotient is a $p'$-group. Following \cite[Section~2]{CravenET}, if $P$ is a Sylow $p$-subgroup of $G$, then we define $K_G^{\circ}$ to be the normal subgroup of $N_G(P)$ generated by $N_G(P)\cap O^{p'}(N_G(Q))$ for all $N_G(P)$-conjugacy classes of subgroups $1<Q\leq P$. Clearly $K_G^{\circ}\unlhd N_G(P)$.\\ If $M$ is a $kG$-module, then we denote by $M^*$ the $k$-dual of $M$ and by $\End_k(M)$ its $k$-endomorphism algebra, both of which are endowed with the $kG$-module structure given by the conjugation action of $G$.
We recall that a $kG$-module $M$ is called \emph{endo-trivial} if there is an isomorphism of $kG$-modules $$\End_k(M)\cong k\oplus \proj\,$$ where $k$ denotes the trivial $kG$-module and $\proj$ denotes a projective $kG$-module or possibly the zero module. Any endo-trivial $kG$-module $M$ splits as a direct sum $M= M_{0}\oplus \proj$ where $M_{0}$, the projective-free part of~$M$, is indecomposable and endo-trivial. The set $T(G)$ of all isomorphism classes of indecomposable endo-trivial $kG$-modules endowed with the composition law $[M]+[L]:=[(M\otimes_k L)_{0}]$ is an abelian group called the \emph{group of endo-trivial modules of $G$}. The zero element is the class $[k]$ of the trivial module and $-[M]=[M^{*}]$. By a result of Puig, the group $T(G)$ is known to be a finitely generated abelian group (see \cite[Theorem 2.3]{MazzaBook}).\par We let $X(G)$ denote the group of one-dimensional $kG$-modules endowed with the tensor product $\otimes_k$\,. Clearly $X(G)\leq T(G)$ and we recall that $$X(G)\cong \Hom(G,k^{\times})\cong (G/[G,G])_{p'}\,.$$ In particular, it follows that $X(G)=\{[k]\}$ when $G$ is $p'$-perfect. Furthermore, we let $K(G)$ denote the subgroup of $T(G)$ consisting of the isomorphism classes of indecomposable endo-trivial $kG$-modules which are at the same time trivial source modules. It follows easily from the theory of vertices and sources that $K(G)$ is precisely the kernel of the restriction homomorphism $$ \Res_P^G : T(G) \longrightarrow T(P), [M]\mapsto[\Res^G_P(M)_0] $$ where $P$ is a Sylow $p$-subgroup of~$G$. Clearly $X(G)\subseteq K(G)$ and $K(G)\subseteq TT(G)$ since $X(P)=\{[k]\}$. Moreover, by the main result of~\cite{CT}, we have $K(G)=TT(G)$~unless $P$ is cyclic, generalised quaternion, or semi-dihedral.\\ \vspace{2mm} \section{Quoted results} To begin with, we quickly review the results about $T(G)$ in the semi-dihedral case obtained by Carlson-Th\'evenaz in \cite{CT} and Carlson-Mazza-Th\'{e}venaz in~\cite{CMT11quat}. 
\begin{thm}\label{thm:CMT} Let $p=2$ and let $G$ be a finite group with a Sylow $2$-subgroup $P\cong \mathrm{SD}_{2^m}$ of order $2^m$ with $m\geq 4$. Then the following assertions hold. \begin{enumerate} \item[\rm(a)] \cite[Theorem~7.1]{CT} $T(P)\cong \IZ/2\IZ\oplus \IZ$. \item[\rm(b)] \cite[Proposition~6.1]{CMT11quat} $T(G)\cong K(G)\oplus \Image(\Res^G_P)$. \item[\rm(c)] \cite[Proposition~6.4]{CMT11quat} $\Res^G_P:T(G)\lra T(P)$ is a split surjective group homomorphism. \item[\rm(d)] \cite[Proposition~6.4]{CMT11quat} $TT(G)\cong K(G)\oplus\IZ/2\IZ$, where the $\IZ/2\IZ$ summand is generated by a self-dual torsion endo-trivial module which is not a trivial source module. \item[\rm(e)] \cite[Corollary~6.5]{CMT11quat} If $P=N_G(P)$, then $K(G)=\ker(\Res^{G}_{P})= \{[k]\}$ and hence $T(G)\cong T(P)\cong \IZ/2\IZ\oplus \IZ$. \end{enumerate} \end{thm} \begin{ques} The remaining open question in the article~\cite{CMT11quat} by Carlson-Mazza-Th\'{e}venaz about finite groups $G$ with semi-dihedral Sylow $2$-subgroups is to compute the structure of the group $K(G)$ when the Sylow $2$-subgroups are not self-normalising. \end{ques} \noindent Next, we state below the main results which we will use in our proof of Theorem~\ref{thm:intro}. We note that they are all valid in arbitrary prime characteristic~$p$. \begin{lem}[{}{\cite[Lemma 2.6]{MT07}}]\label{lem:Op(G)} Let $G$ be a finite group and let $P$ be a Sylow $p$-subgroup of $G$. If $\lconj{x}{P}\cap P\neq 1$ for every $x\in G$, then $K(G)=X(G)$. In particular, if $O_p(G)>1$, then $K(G)=X(G)$. \end{lem} \noindent Then, the following lemma will be applied to semi-direct products of the form $G=N\rtimes H$ with $p\nmid |G:N|$. \begin{lem}[{}{\cite[Theorem~5.1(4)]{MazzaBook}}]\label{lem:normal} Let $G$ be a finite group and let $N\unlhd G$ such that $p\nmid |G:N|$. If $K(N)=X(N)$, then $K(G)=X(G)$. 
\end{lem} \noindent We will also use the following result from our previous paper on endo-trivial modules for groups with dihedral Sylow $2$-subgroups. \begin{thm}[{}{\cite[Theorem 1.1]{KoLa15}}]\label{thm:KoLa15} Let $G$ be a finite group with $p$-rank at least $2$ and which does not admit a strongly $p$-embedded subgroup. Let $H\unlhd G$ be a normal subgroup such that $p\nmid |H|$. If $H^2(G, k^{\times}) = 1$, then $$K(G) = X(G) + \Inf_{G/H}^{G}(K(G/H))\,.$$ \end{thm} \noindent In case $H^2(G, k^{\times}) \neq 1$, we may apply the following generalisation of the above result. \begin{thm}[{}{\cite[Theorem 1.1]{LT17etcentralext}}]\label{thm:LT17} Let $G$ be a finite group with $p$-rank at least $2$ and which does not admit a strongly $p$-embedded subgroup. Let $\wt{Q}$ be any $p'$-representation group of the group $Q:=G/O_{p'}(G)$. \begin{enumerate} \item[\rm(a)] There exists an injective group homomorphism $$\Phi_{G,\wt{Q}}: T(G)/X(G)\lra T(\wt{Q})/X(\wt{Q})\,.$$ In particular, $\Phi_{G,\wt{Q}}$ maps the class of $\Inf_Q^G(W)$ to the class of~$\Inf_Q^{\wt{Q}}(W)$, for any endo-trivial $kQ$-module~$W$. \item[\rm(b)] The map $\Phi_{G,\wt{Q}}$ induces by restriction an injective group homomorphism $$\Phi_{G,\wt{Q}}: K(G)/X(G)\lra K(\wt{Q})/X(\wt{Q})\,.$$ \item[\rm(c)] In particular, if $K(\wt{Q})= X(\wt{Q})$, then $K(G)= X(G)$.\\ \end{enumerate} \end{thm} \noindent For further details on the notion of a $p'$-representation group we refer the reader to the expository note \cite{LT17b}. We emphasise that if the Schur multiplier is a $p'$-group, then a $p'$-representation group is just a representation group in the usual sense.\\ \noindent Finally, we will apply a recent result obtained by David Craven in \cite{CravenET}, which provides us with a purely group-theoretic method in order to use Grodal's homotopy-theoretical description of $K(G)$ from \cite[Theorem 4.27]{GrodalET}. 
\begin{lem}[{}{\cite[Section~2]{CravenET}}]\label{lem:Craven} Let $G$ be a finite group and let $P$ be a Sylow $p$-subgroup of $G$. If $K_G^{\circ}=N_G(P)$, then $K(G)\cong \{[k]\}$. \end{lem} \begin{proof} Assuming $K_G^{\circ}=N_G(P)$, by \cite[Theorem~2.3 and the remark before Theorem~2.3]{CravenET} we have $$K(G)\cong \left(N_G(P)/K_G^{\circ}\right)^{\text{ab}}=1\,.$$ The claim follows. \end{proof} $$\boxed{\text{From now on and for the remainder of this article, we assume that $p=2$.}}$$ \vspace{4mm} \section{Some properties of groups with semi-dihedral Sylow $2$-subgroups} Groups with semi-dihedral Sylow $2$-subgroups are classified, as a byproduct of the results of Alperin-Brauer-Gorenstein in \cite{ABG}, as follows. This observation, essential to our analysis of the endo-trivial modules, is due to Benjamin Sambale. \begin{prop}[{}{\cite{SAMBALEunpub}}]\label{prop:classificSD} Let $G$ be a finite group with a Sylow $2$-subgroup isomorphic to ${\mathrm{SD}}_{2^m}$ ($m\geq 4$) and $O_{2'}(G) = 1$. Let $q:=r^n$ denote a power of a prime number $r$. 
Then one of the following holds: \begin{enumerate} \item[{\bf \rm(SD1)}] $G\cong {\mathrm{SD}}_{2^m}$; \item[{\bf \rm(SD2)}] $G\cong\mathrm{M}_{11}$ and $m=4$; \item[{\bf \rm(SD3)}] $G\cong \SL^{\pm1}_2(q)\rtimes C_d$ where $q\equiv -1\pmod{4}$, $d\mid n$ is odd and $4(q + 1)_2 = 2^m$; \item[{\bf \rm(SD4)}] $G\cong \SU^{\pm1}_2(q)\rtimes C_d$ where $q\equiv 1\pmod{4}$, $d\mid n$ is odd and $4(q - 1)_2 = 2^m$; \item[{\bf \rm(SD5)}] $G\cong\PGL^{\ast}_2(q^2)\rtimes C_d$ where $r$ is odd, $d\mid n$ is odd and $2(q^2 - 1)_2 = 2^m$; \item[{\bf \rm(SD6)}] $G\cong\PSL^{\varepsilon}_3(q).H$ where $q\equiv -\varepsilon\pmod{4}$, $H\leq C_{(3,q-\varepsilon)}\times C_n$ has odd order and $4(q + \varepsilon)_2 = 2^m$; \end{enumerate} \end{prop} \noindent In addition, two crucial results for this piece of work are given by the following lemma and proposition, which will allow us to apply Theorem~\ref{thm:KoLa15} and Theorem~\ref{thm:LT17}. \begin{lem}\label{lem:stpemb} A finite group with a semi-dihedral Sylow $2$-subgroup does not admit any strongly $2$-embedded subgroup. \end{lem} \begin{proof} The Bender-Suzuki theorem \cite[Satz~1]{Bender71} states that a finite group $G$ with a strongly $2$-embedded subgroup $H$ is of one of the following forms: \begin{itemize} \item[\rm1.] $G$ has cyclic or generalised quaternion Sylow 2-subgroups and $H$ contains the centraliser of an involution; or \item[\rm2.] $G/O_{2'}(G)$ has a normal subgroup of odd index isomorphic to one of the simple groups $\PSL_2(q)$, $\mathrm{Sz}(q)$ or $\PSU_3(q)$ where $q\geq 4$ is a power of $2$ and $H$ is $O_{2'}(G)N_G(P)$ for a Sylow $2$-subgroup $P$ of $G$. \end{itemize} Therefore, it follows from Proposition~\ref{prop:classificSD} that such a group cannot admit a semi-dihedral Sylow $2$-subgroup. \end{proof} \begin{prop}\label{prop:Smult} Let $G$ be a finite group with a semi-dihedral Sylow $2$-subgroup $P\cong {\mathrm{SD}}_{2^m}$ for some $m\geq 4$ and $O_{2'}(G)=1$. 
Let $q:=r^n$ denote a power of a prime number $r$. Then the following assertions hold. \begin{enumerate} \item[\rm(a)] If $G={\mathrm{SD}}_{2^m}$, then \smallskip $H^2(G, k^\times)=1$. \item[\rm(b)] If $G={\mathrm M}_{11}$, then \smallskip $H^2(G, k^\times)=1$. \item[\rm(c)] If $G=\SL^{\pm1}_2(q)\rtimes C_d$ where $q\equiv -1\pmod{4}$ and $d\mid n$ is odd, then \smallskip $H^2(G, k^\times)=1$. \item[\rm(d)] If $G=\SU^{\pm1}_2(q)\rtimes C_d$ where $9\neq q\equiv 1\!\pmod{4}$ and $d\mid n$ is odd, then \smallskip ${H^2(G, k^\times)\!=\!1}$. \item[\rm(e)] If $G=\SU^{\pm1}_2(9)$, then \smallskip $H^2(G, k^\times)\cong C_3$. \item[\rm(f)] If $G=\PGL_2^*(q^{2})\rtimes C_d$ where $q^{2}\,{\not=}\,9$ is odd and $d$ is an odd divisor of $n$, then \smallskip ${H^2(G, k^\times)=1}$. \item[\rm(g)] If $G=\PGL_2^*(9)$, then \smallskip $H^2(G, k^\times)\cong C_3$. \item[\rm(h)] If $G=\PSL^{\varepsilon}_3(q).H$ where $q\equiv -\varepsilon \pmod{4}$ and $H\leq C_{(3,q-\varepsilon)}\times C_n$ is cyclic of odd order, then \smallskip $|H^2(G,k^{\times})| \big| (3,q-\varepsilon)$\,. \item[\rm(i)] If $G=\PSL^{\varepsilon}_3(q).H$ where $q\equiv -{\varepsilon}\pmod{4}$ and $H\leq C_{(3,q-\varepsilon)}\times C_n$ is non-cyclic of odd order, then \smallskip $|H^2(G,k^{\times})| \big| 9$. \end{enumerate} \end{prop} \begin{proof} Let $M(G):=H^2(G, \mathbb C^\times)$ denote the Schur multiplier of $G$. Then it is well-known that $H^2(G, k^\times) \cong M(G)_{2'}$ (see e.g. \cite[Proposition 2.1.14]{Karp}) and we set $h:=|H^2(G, k^\times)|$. In order to compute $h$ we recall that if $N\unlhd G$ is such that $G/N$ is cyclic, then, by \cite[Theorem~3.1(i)]{JONES}, we have \[ |M(G)|\, \Big|\, |M(N)|\cdot |N/[N,N]| \,. 
\tag{$\ast$} \] We compute: \begin{enumerate} \item[\rm(a)] Because the Schur multiplier of a cyclic group is trivial, we obtain from \cite[Theorem 2.1.2(i)]{Karp} (or \cite[Corollary 5.4]{ISAACS}) that if $\ell$ is an odd prime divisor of $|M(G)|$, then a Sylow $\ell$-subgroup of $G$ must be noncyclic, hence \smallskip $|M(G)_{2'}|=1$. \item[\rm(b)] See the \smallskip ATLAS \cite[p.18]{ATLAS}. \item[\rm(c)] First, we have $\SL^{\pm1}_2(q)\cong\SL_2(q).2$ (see \cite[Chapter I, p.4]{ABG}). Thus ($\ast$) yields $$M( \SL^{\pm1}_2(q) )=1$$ since $\mathrm{SL}_2(q)$ is perfect and has a trivial Schur multiplier as $q\equiv 3 \pmod{4}$ (see \cite[7.1.1.Theorem]{Karp}). Therefore, we may apply ($\ast$) again to $G=N\rtimes C_d$ with $N= \SL^{\pm1}_2(q)$. Because $|N/[N,N]|=2$ we obtain $$|M(G)| \,\Big|\, |M(N)|{\cdot} |N/[N,N]|=2$$ and it follows that \smallskip $h=|M(G)_{2'}|=1$. \item[\rm(d)] We have ${\mathrm{SU}}^{\pm1}_2(q) = {\mathrm{SU}}_2(q).2$ (see \cite[Chapter I, p.4]{ABG}) and ${\mathrm{SL}}_2(q)\cong{\mathrm{SU}}_2(q)$. Therefore, as $q\neq 9$ we have $M(\SU_2(q))\cong M(\SL_2(q))=1$ (see \cite[7.1.1.Theorem]{Karp}) and the claim follows by the same argument as in~{\rm(c)}, applying ($\ast$) \smallskip twice. \item[(e)] We have $\SU^{\pm1}_2(9)\cong 2.\PGL_2(9)\cong 2.\fA_6.2_2$ (see \cite{ABG}), hence by \cite[2.1.15 Corollary]{Karp} and the ATLAS \cite[p.4]{ATLAS} we obtain that $$M(G)_{2'}\cong M(\fA_6.2_2)_{2'}\cong C_3\,.$$ \item[\rm(f)] We have ${\mathrm{PGL}}^*_2(q^{2})={\mathrm{PSL}}_2(q^{2}).2$ (see \cite[p. 335]{Gor69}). Now, if $q^{2}\neq 9$, then $\PSL_2(q^{2})$ is perfect and has a Schur multiplier of order $2$. Therefore applying ($\ast$) twice as in (c) it follows that \smallskip $h=|M(G)_{2'}|=1$. \item[\rm(g)] We read from the ATLAS \cite[p.4]{ATLAS} that $G={\mathrm{PGL}}^*_2 (9)\cong \mathrm{M}_{10}\cong \fA_6.2_3$ and \smallskip $h=3$. \item[\rm(h)] Write $G=N.H$ with $N:=\PSL^{\varepsilon}_3(q)$ ($q\equiv -\varepsilon \pmod{4}$). 
Because $N$ is perfect, by ($\ast$) we have that $$|M(G)|\, \Big|\, |M(N)|=(3,q-\varepsilon)\,$$ as $M(N)\cong C_{(3,q-\varepsilon)}$ \smallskip (see e.g. \cite{ATLAS}). Hence $h=|M(G)_{2'}| \big| (3,q-\varepsilon)$\,. \item[\rm(i)] We keep the same notation as in {\rm(h)}, that is $G=N.H$ with $N:=\PSL^{\varepsilon}_3(q)$ ($q\equiv -\varepsilon \pmod{4}$). Now, as $H$ is not cyclic, we have $H\cong C_3\times C_a$ with $3\mid a$. For $X:=N.C_3$ we obtain that $|M(X)|\big| |M(N)|=3$ by ($\ast$). Then applying ($\ast$) a second time, we get $$|M(G)|\, \Big|\, |M(X)|\cdot |X/[X,X]|=9\,.$$ Hence $h=|M(G)_{2'}| \big| 9$\,. \end{enumerate} \end{proof} \begin{rem} We note that if in case (SD6) of Proposition~\ref{prop:classificSD} the extension $G=\PSL_3^{\varepsilon}(q).H$ is split, then it follows from a general result of K. Tahara \cite[Theorem 2]{Tahara} on the second cohomology groups of semi-direct products that $H^2(G,k^{\times})\cong C_{(3,q-\varepsilon)}$ if $H$ is cyclic and $H^2(G,k^{\times})\cong C_3\times C_3$ if $H$ is non-cyclic. \end{rem} \vspace{2mm} \section{Endo-trivial modules for $\SL^{\varepsilon}_3(q)$ with $q\equiv -\varepsilon\pmod{4}$} In this section, we compute $K(G)$ for $G=\SL^{\varepsilon}_3(q)$ with $q\equiv -\varepsilon\pmod{4}$ over an algebraically closed field of characteristic~$2$. We note that for the special linear group $G=\SL_3(q)$ with $q\equiv -1\pmod{4}$ the structure of $K(G)$ is in principle given by \cite[Theorem 9.2(b)(ii)]{CMN16}, namely $K(G)=X(G)$. However, the proof of this fact given in \cite{CMN16} contains a minor error: the authors assumed that a Sylow $2$-subgroup $S$ of the simple group $H=\PSL_3(q)$ is always self-normalising, which is not correct in general, as $N_H(S)\cong S\times Z$ where $Z$ is a cyclic group of order $(q-1)_{2'}/(q-1,3)$. (See e.g. \cite[Corollary on p.2]{Kond05}). 
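For concreteness, the order of the odd factor $Z$ above is easy to tabulate. The following sketch (our illustration; the helper names are not from the paper) evaluates $|Z|=(q-1)_{2'}/(q-1,3)$ for a few primes $q\equiv 3\pmod{4}$ and confirms that the Sylow $2$-subgroup of $\PSL_3(q)$ need not be self-normalising:

```python
from math import gcd

# |Z| in N_H(S) = S x Z for H = PSL_3(q), q = 3 (mod 4),
# namely the odd part of q-1 divided by gcd(q-1, 3), as quoted from [Kond05].

def odd_part(n):
    # the 2'-part n_{2'}: divide out all factors of 2
    while n % 2 == 0:
        n //= 2
    return n

def z_order(q):
    return odd_part(q - 1) // gcd(q - 1, 3)

for q in (3, 7, 11, 31):          # sample primes with q = 3 (mod 4)
    print(q, z_order(q))
# q = 3, 7   -> |Z| = 1 : here S is indeed self-normalising
# q = 11, 31 -> |Z| = 5 : here it is not, contradicting the assumption in [CMN16]
```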
For this reason, we treat both $\SL_3(q)$ and $\SU_3(q)$ below.\\ Throughout this section we let $G:=\SL^{\varepsilon}_3(q)$ and $\widetilde{G}:=\GL_3^{\varepsilon}(q)$ with $q\equiv -\varepsilon\pmod{4}$ an odd prime power and we define $\overline{q}$ to be $q$ if $\varepsilon= 1$ and $q^2$ if $\varepsilon=-1$. Furthermore, we let $$\imath:\GL_2^{\varepsilon}(q)\lra \SL_3^{\varepsilon}(q),\, a\mapsto \left(\begin{matrix}a & 0 \\ 0 & \det(a)^{-1}\end{matrix}\right)$$ be the natural embedding. Now, in order to describe the normaliser of a Sylow $2$-subgroup $P$ of $G$, we follow the procedure described in \cite[Sections~7~and~8]{ST18} to obtain $N_G(P)$ from the normaliser $N_{\wt{G}}(\wt{P})$ of a Sylow $2$-subgroup $\wt{P}$ of $\wt{G}$ as given by Carter-Fong \cite{CarterFong}. Firstly, as the $2$-adic expansion of~$3$ is $3=2^{r_1}+2^{r_2}$ with $r_1=1$ and $r_2=0$, we have $\widetilde{P}\cong \prod_{i=1}^2 S^{\varepsilon}_{r_i}(q)$ where $S^{\varepsilon}_{r_i}(q)\in \Syl_2(\GL^{\varepsilon}_{2^{r_i}}(q))$ and $$N_{\wt{G}}(\wt{P})\cong \wt{P}\times C_{(q-\varepsilon)_{2'}}\times C_{(q-\varepsilon)_{2'}}\,.$$ Concretely, we may assume that $\wt{P}$ is realised by embedding $\prod_{i=1}^2 S^{\varepsilon}_{r_i}(q)\leq \prod_{i=1}^2 \GL^{\varepsilon}_{2^{r_i}}(q)$ block-diagonally in a natural way. Moreover, for $1\leq j\leq 2$ the corresponding factor $C_{(q-\varepsilon)_{2'}}$ is embedded as $O_{2'}(Z( \GL^{\varepsilon}_{2^{r_j}}(q) ))$, so that an arbitrary element of $N_{\wt{G}}(\wt{P})$ is of the form $xz$ with $x\in\wt{P}$ and $z$ a diagonal matrix of the form $z=\diag(\lambda_1 I_2, \lambda_2)$ with $\lambda_1,\lambda_2\in C_{(q-\varepsilon)_{2'}}\leq \IF^{\times}_{\overline{q}}$ and $I_2$ the identity matrix in $\GL_2(\overline{q})$. 
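As a quick sanity check, the embedding $\imath$ and its determinant condition can be verified numerically. The sketch below (ours, purely illustrative; we take $\varepsilon=1$ and $q=7$, so all arithmetic is modulo $7$) confirms that $\imath$ lands in $\SL_3(q)$ and is a group homomorphism:

```python
# Toy check over GF(7) of the embedding
#   i : GL_2(q) -> SL_3(q),  a |-> diag(a, det(a)^{-1}).
# Matrices are nested lists with entries reduced modulo q.

q = 7

def det2(a):
    return (a[0][0] * a[1][1] - a[0][1] * a[1][0]) % q

def embed(a):
    d_inv = pow(det2(a), -1, q)            # det(a)^{-1} in GF(q)
    return [[a[0][0], a[0][1], 0],
            [a[1][0], a[1][1], 0],
            [0,       0,       d_inv]]

def mul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) % q
             for j in range(2)] for i in range(2)]

def mul3(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(3)) % q
             for j in range(3)] for i in range(3)]

def det3(x):
    return (x[0][0] * (x[1][1] * x[2][2] - x[1][2] * x[2][1])
          - x[0][1] * (x[1][0] * x[2][2] - x[1][2] * x[2][0])
          + x[0][2] * (x[1][0] * x[2][1] - x[1][1] * x[2][0])) % q

a = [[1, 2], [3, 4]]                       # det = -2 = 5 in GF(7)
b = [[2, 0], [1, 3]]                       # det = 6 in GF(7)
print(det3(embed(a)))                                  # 1: image lies in SL_3(q)
print(embed(mul2(a, b)) == mul3(embed(a), embed(b)))   # True: homomorphism
```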
\begin{lem}\label{lem:NormaliserSL3eps} The following assertions hold: \begin{enumerate} \item[\rm(a)] $P:=\wt{P}\cap G=\{\imath(x)\mid x\in S_{1}^{\varepsilon}(q)\}$ is a Sylow $2$-subgroup of $G$ which is normal \smallskip in~$\wt{P}$\,; \item[\rm(b)] $N_G(P) = P\times O_{2'}(C_G(P))$ where $$O_{2'}(C_G(P))=\{\diag(\lambda_1 I_2,\lambda_1^{-2})\mid \lambda_1\in C_{(q-\varepsilon)_{2'}}\leq\IF_{\overline{q}}^{\times}\}\cong C_{(q-\varepsilon)_{2'}}\,.$$ \end{enumerate} \end{lem} \begin{proof} Part (a) is given by \cite[Section~8.1]{ST18}. For part (b), first, as $P$ is semi-dihedral its automorphism group $\Aut(P)$ is a $2$-group and it follows that $$N_G(P)=PC_G(P)=P\times Z\qquad \text{with }Z:=O_{2'}(C_G(P))\,.$$ Now by \cite[Section~8.1]{ST18}, we have $N_G(P) = N_{\widetilde{G}}(\widetilde{P})\cap G$ and the claim follows from the above description of $N_{\widetilde{G}}(\widetilde{P})$. \end{proof} \begin{lem}\label{prop:Q} Consider the diagonal matrices \[ u:=\begin{pmatrix} 1 & 0 & 0 \\ 0& -1&0 \\ 0&0&-1\end{pmatrix},\,\, v:=\begin{pmatrix} -1 & 0 & 0 \\ 0& 1&0 \\ 0&0&-1\end{pmatrix} \in G \] and set $Q:=\langle u,v\rangle\leq G$. Then the following assertions hold: \begin{enumerate} \item[\rm(a)] $Q= \langle u\rangle \times \langle v\rangle\cong C_2\times C_2$\,; \item[\rm(b)] the centraliser of $Q$ in $G$ is \begin{equation*} \begin{split} C_G(Q) & = \{ \diag(\eta_1,\eta_2,(\eta_1\eta_2)^{-1})\mid \eta_1,\eta_2\in C_{(q-\varepsilon)}\leq \IF^{\times}_{\overline{q}} \} \\ & \cong C_{q-\varepsilon}\times C_{q-\varepsilon}\cong Q\times C_{(q-\varepsilon)_{2'}}\times C_{(q-\varepsilon)_{2'}} \,; \end{split} \end{equation*} \item[\rm(c)] the normaliser of $Q$ in $G$ is $N_G(Q) \cong (Q\times C_{(q-\varepsilon)_{2'}} \times C_{(q-\varepsilon)_{2'}}).\mathfrak{S}_3$\,; and \item[\rm(d)] $O^{2'}(N_G(Q))=N_G(Q)$\,. 
\end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item[\rm(a)] It is straightforward to see that $u$ and $v$ have order~$2$ and commute with each other, hence $Q= \langle u,v\rangle = \langle u\rangle \times \langle v\rangle\cong C_2\times C_2$\,. \item[\rm(b)] It is clear that a matrix $c\in G$ such that $cu=uc$ and $cv=vc$ must be a diagonal matrix of the form $\diag(\eta_1,\eta_2,(\eta_1\eta_2)^{-1})$ with $\eta_1,\eta_2\in C_{(q-\varepsilon)}\leq \IF^{\times}_{\overline{q}}$. The claim follows. \item[\rm(c)] Let $\zeta$ be a generator of the subgroup $C_{(q-\varepsilon)_{2'}}\leq \IF_{\overline{q}}^{\times}$. Then for the centraliser $C_G(Q)\cong Q\times C_{(q-\varepsilon)_{2'}}\times C_{(q-\varepsilon)_{2'}}$ we may identify the first factor $C_{(q-\varepsilon)_{2'}}$ with $X:=\langle x\rangle$ where $x=\diag(\zeta,\zeta,\zeta^{-2})$ and the second factor $C_{(q-\varepsilon)_{2'}}$ with $Y:=\langle y\rangle$ where $y=\diag(\zeta,1,\zeta^{-1})$, namely $C_G(Q)=Q\times X\times Y$. Now, we consider the matrices $$ t:= \begin{pmatrix} 1&0&0\\ 0&0&1\\ 0&-1&0\end{pmatrix}\in G\,\text{ and } \,a:= \begin{pmatrix}0&1&0\\0&0&1\\1&0&0\end{pmatrix} \in G\,.$$ Clearly $t^2=u$, so that $t$ has order~$4$. Also $$ (uv)t(uv)^{-1}=(uv)t(uv)=t^{-1}, u^t=u, v^t=uv, \text{ and }(uv)^t=v, $$ so that $$G\geq D:=\langle uv,t\rangle\cong D_8\,.$$ Clearly $a$ has order $3$ and we compute that \begin{equation}\label{a-t} a^t = a^2 u v, \ u^a=v, \ v^a= uv, \ (uv)^a=u\,. \end{equation} It follows that $$ G\geq \mathcal S:= \langle u, v, t, a \rangle \cong\mathfrak S_4,\, Q\vartriangleleft \mathcal S \text{ and } \mathcal S/Q\cong \mathfrak S_3. $$ Moreover, \begin{equation}\label{x^a x^t y^a y^t} x^a = xy^{-3}, \ x^t= x^{-2}y^3, \ y^a=xy^{-2}, \ y^t=x^{-1}y^2\,. \end{equation} Hence we have $$ \mathcal S \leq N_G(Q\times X \times Y)\,\text{ but also }\, \mathcal S\leq N_G(Q)\,. 
$$ Therefore, \begin{equation}\label{N_G(Q)} (Q \times X\times Y).\mathfrak S_3 \text{ (non-split extension)}\cong (Q \times X\times Y)\mathcal S \leq N_G(Q) \end{equation} and it follows from \cite[Proposition 1 in Chapter II, Section 1]{ABG} that \begin{equation}\label{N/C} N_G(Q)/C_G(Q)\cong\mathfrak S_3\,. \end{equation} Finally it follows from (\ref{N_G(Q)}) and (\ref{N/C}) that $$N_G(Q) = (Q \times X\times Y)\mathcal S \cong (Q\times C_{(q-\varepsilon)_{2'}} \times C_{(q-\varepsilon)_{2'}}).\mathfrak S_3\,.$$ \item[\rm(d)] Set $N:=N_G(Q)$ and $H:=O^{2'}(N)$. By the proof of (c) we have $N=\langle X,Y,u,v,t,a\rangle$, therefore it suffices to prove that $u,v,t,a\in H$ and $X,Y\leq H$. To begin with, since $u,v,t$ are all $2$-elements in $N$, we have $$Q \leq D\leq H$$ and it is clear that $u,v,t\in H$. Now, since $t\in H \unlhd N\ni a$, we have $$ H \ni (t^{-1})^a = a^{-1} t^{-1} a = a^{-1} t^{-1} a \,t \,t^{-1} = a^{-1}\,a^t\,t^{-1} =a^{-1}\,a^2 uv \,t^{-1} = a uv t^{-1} $$ where the last-but-one equality holds by (\ref{a-t}), and it follows that $a\in H$\,. Next, since $t\in H \unlhd N\ni y$, it follows from (\ref{x^a x^t y^a y^t}) and the fact that $[x,y]=1$ that $$H \ni y^{-1}t^{-1}y\,t= y^{-1} y^t = y^{-1}\,x^{-1}y^2 = x^{-1} y\,.$$ Moreover, as $a\in H$ and $H\vartriangleleft N\ni y$, it also follows from (\ref{x^a x^t y^a y^t}) and the fact that $[x,y]=1$ that $$H\ni y^{-1}a^{-1}y\,a = y^{-1} y^a = y^{-1} \, x y^{-2} =x y^{-3}\,.$$ Therefore $H \ni x^{-1}y\,x y^{-3} = y^{-2}$, again as $[x,y]=1$. Now, as $Y=\langle y^{-2}\rangle\cong C_{(q-\varepsilon)_{2'}}$ is of odd order, we have proved that $Y \leq H$. As $x^{-1}y\in H$ it follows that $x^{-1}\in H$ and hence $X=\langle x^{-1}\rangle\leq H$. 
\end{enumerate} \end{proof} \begin{prop}\label{prop:SU3} If $G=\SL^{\varepsilon}_3(q)$ with $q\equiv -\varepsilon\pmod{4}$ an odd prime power, then $$K(G)=X(G)=\{[k]\}\,.$$ \end{prop} \begin{proof} Let $P$ be the Sylow $2$-subgroup of $G$ defined in Lemma~\ref{lem:NormaliserSL3eps}(a). Thanks to Lemma~\ref{lem:Craven} it is enough to prove that $N_G(P)=K_G^{\circ}$. We recall that it follows from the definition of $K_G^{\circ}$ that $K_G^{\circ}\unlhd N_G(P)$. Moreover, by Lemma~\ref{lem:NormaliserSL3eps}(b), we have $N_G(P) = P\times O_{2'}(C_G(P))$, hence it is clear that $$P=O^{2'}(N_G(P))=N_G(P)\cap O^{2'}(N_G(P)) \leq K_G^{\circ}\,.$$ Therefore, it remains to prove that $ O_{2'}(C_G(P)) \leq K_G^{\circ}$\,. Now, for $Q=\langle u,v\rangle\cong C_2\times C_2$ as defined in Proposition~\ref{prop:Q}, we have $$O_{2'}(C_G(P))\leq C_G(P)\leq C_G(Q)\leq N_G(Q)=O^{2'}(N_G(Q))$$ where the last equality holds by Lemma~\ref{prop:Q}(d). It follows that $$O_{2'}(C_G(P)) \leq N_G(P)\cap O^{2'}(N_G(Q))\leq K_G^{\circ}\,,$$ as required. \end{proof} \vspace{2mm} \section{Proof of Theorem~\ref{thm:intro}} Throughout this section we let $G$ denote a finite group with a semi-dihedral Sylow $2$-subgroup~$P$ of order $2^{m}$ for some $m\geq 4$. Moreover $q=r^n$ denotes a power of a prime number~$r$. We write $Q:=G/O_{2'}(G)$ and let $\wt{Q}$ be a $2'$-representation group of $Q$.\\ \noindent In order to prove Theorem~\ref{thm:intro}, we go through the possibilities for $Q$ given by Proposition~\ref{prop:classificSD}. We note that as the $2$-rank of $P$ is $2$ and $G$ does not have any strongly $2$-embedded subgroup by Lemma~\ref{lem:stpemb}, we may always apply Theorem~\ref{thm:KoLa15} and Theorem~\ref{thm:LT17}. \begin{lem}\label{lem:SD1} If $G/O_{2'}(G)\cong P$, then $K(G)=X(G)$. \end{lem} \begin{proof} Several approaches are possible in this case. First, we can use the fact that $G$ is $2$-nilpotent. 
Then \cite[Conjecture 3.6]{CMTpsol} together with \cite[Theorem]{NavRobET} proves that $K(G)=X(G)$.\\ Alternatively, clearly $K(P)=\{[k]\}$ and $H^2(P,k^{\times})=1$ by Proposition~\ref{prop:Smult}(a), hence $K(G)=X(G)$ by Theorem~\ref{thm:KoLa15}. (We note that both approaches rely on an argument using the classification of finite simple groups.) \end{proof} \begin{lem}\label{lem:SD2} If $G/O_{2'}(G)\cong \mathrm{M}_{11}$, then $K(G)=X(G)$. \end{lem} \begin{proof} As $Q=\mathrm{M}_{11}$ has a self-normalising Sylow $2$-subgroup it follows from Theorem~\ref{thm:CMT}(e) that $K(\mathrm{M}_{11})=\{[k]\}$. (See also \cite[\S 4.1]{LaMaz15}.) Now, as $H^2(\mathrm{M}_{11},k^{\times})=1$ by Proposition~\ref{prop:Smult}(b) it follows from Theorem~\ref{thm:KoLa15} that $K(G)=X(G)$ (or from Theorem~\ref{thm:LT17}(c) as $Q=\wt{Q}=\mathrm{M}_{11}$). \end{proof} \begin{lem}\label{lem:SLpm}\label{lem:SD3}{\ } \begin{enumerate} \item[\rm(a)] If $G= \SL^{\pm1}_2(q)$ with $q\equiv -1\pmod{4}$, then \smallskip $K(G)=X(G)$. \item[\rm(b)] If $G/O_{2'}(G)\cong \SL^{\pm1}_2(q)\rtimes C_d$ with $q\equiv -1\pmod{4}$ and $d\mid n$ is odd, then $K(G)=X(G)$. \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item[\rm(a)] For $G=\SL^{\pm1}_2(q)$ with $q\equiv -1\pmod{4}$ we obtain that $K(G)=X(G)$ by Lemma~\ref{lem:Op(G)} because $G$ admits a non-trivial central $2$-subgroup $Z=\{\pm I_2\}$, that is \smallskip $O_2(G)\neq 1$. \item[\rm(b)] Now, for $Q=\SL^{\pm1}_2(q)\rtimes C_d$ with $q\equiv -1\pmod{4}$ and $d\mid n$, Lemma~\ref{lem:normal} yields $K(Q)=X(Q)$, because $N:= \SL^{\pm1}_2(q)\unlhd Q$ with odd index and $K(N)=X(N)$ by (a). Furthermore, $H^2(Q,k^{\times})=1$ by Proposition~\ref{prop:Smult}(c). Hence $K(G)=X(G)$ by Theorem~\ref{thm:KoLa15}. \end{enumerate} \end{proof} \begin{lem}\label{lem:SD4a} If $G/O_{2'}(G)\cong \SU^{\pm1}_2(q)\rtimes C_d$ with $9\neq q\equiv 1\pmod{4}$ and $d\mid n$ is odd, then $K(G)=X(G)$. 
\end{lem} \begin{proof} Since $\SU^{\pm1}_2(q)$ admits a central subgroup $Z=\{\pm I_2\}$ of order $2$ and for $Q=\SU^{\pm1}_2(q)\rtimes C_d$ with $q\neq3^2$ we have $H^2(Q,k^{\times})=1$ by Proposition~\ref{prop:Smult}(d), the same arguments as in the proof of Lemma~\ref{lem:SLpm} yield the result. \end{proof} \medskip \begin{lem}\label{lem:SD4b} If $G/O_{2'}(G)\cong \SU^{\pm1}_2(9)$, then $K(G)=X(G)$. \end{lem} \begin{proof} First, recall that $Q:=\SU^{\pm1}_2(9)$ is isomorphic to $2.\PGL_2(9)\cong2.\fA_6.2_2$\,. (See e.g. \cite{ABG}.) By Proposition~\ref{prop:Smult}(e) we have $H^2(Q,k^{\times})\cong C_3$ and we may choose $\wt{Q}$ to be $6.\PGL_2(9)$. Therefore Lemma~\ref{lem:Op(G)} yields $K(\wt{Q})=X(\wt{Q})$ because $\wt{Q}$ admits a central subgroup of order~$2$, which is therefore a normal $2$-subgroup. Finally it follows from Theorem~\ref{thm:LT17}(c) that $K(G)=X(G)$. \end{proof} \begin{lem}\label{lem:SD5a} If $G/O_{2'}(G)\cong \PGL_2^{\ast}(q^{2})\rtimes C_d$ where $q^{2}\,{\not=}\,9$ is odd and $d$ is an odd divisor of $n$, then $K(G)=X(G)$. \end{lem} \begin{proof} We have $\PGL_2^{\ast}(q^{2})= \PSL_2(q^{2}).2$ (see \cite[p.17, proof of Lemma 1]{ABG}). As a Sylow $2$-subgroup of $\PSL_2(q^{2})$ is self-normalising, so is a Sylow $2$-subgroup of $\PGL_2^{\ast}(q^{2})$. Therefore $K(\PGL_2^{\ast}(q^{2}))=X(\PGL_2^{\ast}(q^{2}))$ by Theorem~\ref{thm:CMT}(e) and hence it follows from Lemma~\ref{lem:normal} that for $Q:= \PGL_2^{\ast}(q^{2})\rtimes C_d$ we have $K(Q)=X(Q)$. Now, as $H^2(G,k^{\times})=1$ by Proposition~\ref{prop:Smult}(f), we obtain from Theorem~\ref{thm:KoLa15} that $K(G)=X(G)$. \end{proof} \begin{lem}\label{lem:SD5b}{\ } \begin{enumerate} \item[\rm(a)] If $G=\PGL_2^{\ast}(9)$, then \smallskip $K(G)=X(G)$. \item[\rm(b)] If $G=3.\PGL_2^{\ast}(9)$, then $K(G)\cong \IZ/3\IZ$. 
Moreover, the indecomposable representatives of the two non-trivial elements of $K(G)$ lie in the two distinct faithful $2$-blocks of full defect $B$ and $B^{\ast}$ of $G$, which are dual to each other. Their Loewy and socle series are respectively $$ \boxed{\footnotesize \begin{matrix} 9 \\ 9 \ \ 6 \\ 9\end{matrix}} \quad\text{ and }\quad \boxed{\footnotesize \begin{matrix} 9^* \\ 9^* \ \ 6^* \\ 9^*\end{matrix}} $$ where $9$ (resp. $6$) denotes the unique $9$-dimensional (resp. $6$-dimensional) simple \smallskip $kB$-module. \item[\rm(c)] If $G/O_{2'}(G)\cong \PGL_2^{\ast}(9)$, then $K(G)/X(G)$ is isomorphic to a subgroup of $\IZ/3\IZ$. \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item[\rm(a)] The group $\PGL_2^{\ast}(9)$ has a semi-dihedral Sylow $2$-subgroup $P\cong \text{SD}_{16}$, which is self-normalising. Hence the claim follows from \smallskip Theorem~\ref{thm:CMT}(e). \item[\rm(b)] We treat the group $G=3.\PGL_2^{\ast}(9)\cong 3.\fA_6.2_3 \cong 3.\mathrm{M}_{10}$ entirely via computer algebra using MAGMA \cite{MAGMA}. First, we note that the block structure of $G$ can be found in the \emph{decomposition matrices} section at the Modular Atlas Homepage \cite[$A_6.2_3$]{ModAtlas}, where we also read that $B$ has exactly two simple modules, one of dimension~$6$ and one of dimension~$9$.\\ Now, in this case, $N_G(P)=P\times Z(G)\cong P\times C_3$, hence by the definition of $K(G)$ we need to show that the Green correspondents $X_1$ and $X_2$ of the two non-trivial $1$-dimensional $kN_G(P)$-modules, say $1a$ and $1b$, are endo-trivial $kG$-modules. 
However, as $X_1$ and $X_2$ must be dual to each other, it suffices to consider~$X_1$.\\ Loading the Mathieu group $\mathrm{M}_{10}$ we find out that its triple cover $G$ has a permutation representation on $36$ points and the group $G$ may be defined as \smallskip follows: {\footnotesize \begin{verbatim} > S36:=SymmetricGroup(36); > x1:=S36!(1,2,4,8,20,34,31,36,22,10,9,3)(5,13,15,6,14,28,27,35,21,18,25,11) (7,16,24,33,30,17)(12,26,23,29,32,19); > x2:=S36!(1,4,5)(2,6,7)(9,21,22)(10,11,24)(12,27,16)(13,28,18)(14,20,31) (15,33,23)(17,32,25)(19,26,29)(30,34,35); > x3:=S36!(2,6,8)(3,10,11)(4,5,12)(7,18,19)(9,21,23)(13,29,30)(14,32,31) (15,25,27)(16,17,33)(24,28,26)(34,35,36); > G:=sub<S36|x1,x2,x3>; \end{verbatim} } \smallskip \noindent We confirm the endo-triviality of $X_1$ with the following \smallskip code: {\footnotesize \begin{verbatim} > p:=2; > P:=SylowSubgroup(G,p); > N:=Normalizer(G,P); > k:=TrivialModule(P,GF(p^8)); > kN:=Induction(k, N); > SkN:=IndecomposableSummands(kN); > [IsIsomorphic(TrivialModule(N,GF(p^8)), SkN[i]) : i in [1..3]]; [ false, true, false ] > I1:=Induction(SkN[1], G); > S1:=IndecomposableSummands(I1);S1; [ GModule of dimension 12 over GF(2^8), GModule of dimension 33 over GF(2^8) ] \end{verbatim} } \smallskip \noindent Therefore by the Green correspondence $X_1$ has dimension $33$. Then using the function \verb+LeftProjectiveStrip+ from \cite[\S2]{CarlsonSupp}, which strips off projective summands for $p$-groups, we get: {\footnotesize \begin{verbatim} > X1:=S1[2]; > LeftProjectiveStrip(TensorProduct(Restriction(X1,P),Dual(Restriction(X1,P)))); GModule of dimension 1 over GF(2^8) \end{verbatim} } \smallskip \noindent Hence the restriction of $X_1$ to $P$ is an endo-trivial module and therefore so is $X_1$, as required. Finally, from this data we also easily obtain from MAGMA that the Loewy and socle series are as \smallskip given. 
\item[\rm(c)] For $Q\cong\PGL_2^{\ast}(9)$ we take $\widetilde{Q}=3.\PGL_2^{\ast}(9)$ and the claim follows from Theorem~\ref{thm:LT17}(b) together with {\rm(b)}. \end{enumerate} \end{proof} \noindent We note that the computation given in part (b) above can be run in less than 120 seconds and therefore be checked, for example, in the MAGMA online calculator. \\ \begin{lem}\label{lem:SD6a} If $G/O_{2'}(G)\cong \PSL^{\varepsilon}_3(q).H$ where $q\equiv -\varepsilon\pmod{4}$ and $H\leq C_{(3,q-\varepsilon)}\times C_n$ is cyclic of odd order, then $K(G)=X(G)$. \end{lem} \begin{proof} For $Q=\PSL^{\varepsilon}_3(q).H$ with $q\equiv -\varepsilon\pmod{4}$ and $H$ cyclic, by Proposition~\ref{prop:Smult}(h), we have $H^2(Q,k^{\times})\leq C_3$.\par If $H^2(Q,k^{\times})=1$, then we have $\wt{Q}=Q$. Now, as $K(\SL^{\varepsilon}_3(q))=X(\SL^{\varepsilon}_3(q))$ by Proposition~\ref{prop:SU3} and $\SL^{\varepsilon}_3(q)$ is a $2'$-representation group of $\PSL^{\varepsilon}_3(q)$ we also have that $K(\PSL^{\varepsilon}_3(q))=X( \PSL^{\varepsilon}_3(q))$ by Theorem~\ref{thm:LT17}(c). Therefore $K(\widetilde{Q})=X(\widetilde{Q})$ by Lemma~\ref{lem:normal} and it follows from Theorem~\ref{thm:KoLa15} that $K(G)=X(G)$.\par Next, if $H^2(Q,k^{\times})=C_3$, then we can take $\widetilde{Q}=N.H$ with $N=\SL^{\varepsilon}_3(q)$. Now, $K(N)=X(N)$ by Proposition~\ref{prop:SU3}, so that $K(\widetilde{Q})=X(\widetilde{Q})$ by Lemma~\ref{lem:normal} because $N$ is normal of odd index in $\widetilde{Q}$. Finally it follows from Theorem~\ref{thm:LT17}(c) that $K(G)=X(G)$. \end{proof} \begin{lem}\label{lem:SD6b} If $G/O_{2'}(G)\cong \PSL^{\varepsilon}_3(q).H$ where $q\equiv -\varepsilon\pmod{4}$ and $H\leq C_{(3,q-\varepsilon)}\times C_n$ is non-cyclic of odd order, then $K(G)=X(G)$. \end{lem} \begin{proof} For $Q=\PSL^{\varepsilon}_3(q).H$ with $q\equiv -\varepsilon\pmod{4}$ and $H$ non-cyclic, by Proposition~\ref{prop:Smult}(i), we have $|H^2(G,k^{\times})|\,\big|\, 9$. 
First, we note that if $|H^2(G,k^{\times})|\in\{1,3\}$, then the same arguments as in the proof of Lemma~\ref{lem:SD6a} yield the result. Assume now that $|H^2(G,k^{\times})|=9$. In this case a $2'$-representation group $\wt{Q}$ of $Q$ is a central extension of $Q_1:=\SL^{\varepsilon}_3(q).H$ with kernel $Z\cong C_3$. Since $\SL^{\varepsilon}_3(q)$ is normal in $Q_1$, there exists a normal subgroup $Y\unlhd \wt{Q}$ containing $Z$ and such that $Y/Z\cong \SL^{\varepsilon}_3(q)$ and as $\SL^{\varepsilon}_3(q)$ is its own $2'$-representation group, we have that $Y\cong Z\times \SL^{\varepsilon}_3(q)$. Therefore, as $K(\SL^{\varepsilon}_3(q))=X(\SL^{\varepsilon}_3(q))$ by Proposition~\ref{prop:SU3}, it follows from Lemma~\ref{lem:normal}, applied a first time, that $K(Y)=X(Y)$, and, applied a second time, that $K(\wt{Q})=X(\wt{Q})$. Finally, it follows from Theorem~\ref{thm:LT17}(c) that $K(G)=X(G)$. \end{proof} \noindent With these results, we can now prove our main Theorem. \begin{proof}[Proof of Theorem~\ref{thm:intro}] We know from Theorem~\ref{thm:CMT}(a)--(d) that $$T(G)=TT(G)\oplus TF(G)\cong K(G)\oplus\IZ/2\IZ\oplus\IZ\,,$$ so that it only remains to compute the group $K(G)$ of trivial source endo-trivial modules. As $G/O_{2'}(G)$ must be one of the groups listed in Proposition~\ref{prop:classificSD}, if $G/O_{2'}(G)\ncong \PGL_2^{\ast}(9)$, then it follows from Lemma~\ref{lem:SD1}, Lemma~\ref{lem:SD2}, Lemma~\ref{lem:SD3}(b), Lemma~\ref{lem:SD4a}, Lemma~\ref{lem:SD4b}, Lemma~\ref{lem:SD5a}, Lemma~\ref{lem:SD6a} and Lemma~\ref{lem:SD6b} that $K(G)=X(G)$, hence (a). If $G/O_{2'}(G)\cong \PGL_2^{\ast}(9)$, then the assertions in (b) follow directly from Lemma~\ref{lem:SD5b}(a),(b) and (c). 
\end{proof} \bigskip \section*{Acknowledgments} \noindent {The authors would like to thank Jesper Grodal, Gunter Malle, Nadia Mazza, Burkhard K\"ulshammer, Benjamin Sambale, Mandi Schaeffer Fry, Gernot Stroth, Jacques Th\'{e}venaz, and Rebecca Waldecker for helpful hints and discussions related to the content of this article.} \bigskip \bigskip \bigskip \bibliographystyle{amsalpha} \bibliography{biblio.bib}
TITLE: Looking to confirm inequality or learn where mistake is QUESTION [3 upvotes]: Hello, I am looking for some advice on the following. I want to show that $n^{\frac{1}{n}} \lt (1+\frac{1}{\sqrt{n}})^{2}$ for all $n \in \mathbb{N}$, and I thought I would try induction. The base case of $n=1$ is clear because $1 \lt 4$. Now I said suppose it holds that $$n^{\frac{1}{n}} \lt (1+\frac{1}{\sqrt{n}})^{2}$$ then I must show this implies the truth of $$(n+1)^{\frac{1}{n+1}} \lt (1+\frac{1}{\sqrt{n+1}})^{2}$$ I used Bernoulli to show that $(1+\frac{1}{\sqrt{n+1}})^{n+1} \gt \sqrt{n+1}$ for all $n$, and then I thought if I can just use that and show that $\sqrt{n+1} \gt (n+1)^{\frac{1}{n+1}}$ then my proof would be complete. Any advice guys? How does it look? I tried to do as much as I can on my own as I do want to learn, but I also don't want to be coming up with false proofs etc. Here is my new way of saying it. We already showed the base case; now suppose that $$n^{1/n} \lt (1+\frac{1}{\sqrt{n}})^{2}$$ Then we want to show $$(n+1)^{1/(n+1)} \lt (1+\frac{1}{\sqrt{n+1}})^{2}$$ I then use $$(1+\frac{1}{\sqrt{n+1}})^{2} \ge 1+2\sqrt{n+1} \gt \sqrt{n+1} \gt (n+1)^{\frac{1}{n+1}}$$ to show it holds, how does this seem? REPLY [1 votes]: Your approach is fine, except you have not written out the steps cleanly. As I understand, you want to use induction and hence the inductive step $$\left(1+\frac1{\sqrt{n+1}}\right)^2 > (n+1)^{\frac1{n+1}} \iff \left(1+\frac1{\sqrt{n+1}}\right)^{n+1} > \sqrt{n+1}$$ where the last step can be proved using Bernoulli's inequality. Of course it is simpler to use Bernoulli directly and avoid induction altogether. The original inequality, by raising both sides to the power $\dfrac{n}2$, is equivalent to $$\left(1+\frac1{\sqrt{n}}\right)^n> \sqrt{n}$$ and that is easily concluded by Bernoulli, which in fact gives the tighter $LHS > 1+\sqrt{n}$.
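A quick numerical sanity check of both the target inequality and the Bernoulli bound from the answer (my own addition, not part of the original exchange), sketched in Python:

```python
import math

# Check n^(1/n) < (1 + 1/sqrt(n))^2 and the Bernoulli step
# (1 + 1/sqrt(n))^n >= 1 + n*(1/sqrt(n)) = 1 + sqrt(n) for a range of n.
for n in range(1, 1000):
    assert n ** (1.0 / n) < (1 + 1 / math.sqrt(n)) ** 2
    assert (1 + 1 / math.sqrt(n)) ** n >= 1 + math.sqrt(n) - 1e-9
```

This is only a finite-range check, of course; the proof itself comes from Bernoulli's inequality as explained above.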
TITLE: Topology of submanifolds of manifolds with boundary QUESTION [0 upvotes]: Let $X$ be a compact $n$-manifold with boundary $\partial X$ and let $U$ be a submanifold of $X$ such that $\partial U=\partial X\cap U$. Why is $X-U-\partial U$ open in $X$? I know that the topological boundary $\partial A$ of a subset $A$ is always closed, since it is the intersection of the closure of $A$ and the closure of $X-A$, but here we are talking about the geometric boundary of a manifold, which is different from the topological boundary. I also think from the definition that $\partial X$ is an $(n-1)$-submanifold of $X$, but I don't know whether $\partial X$ is always closed. On the other hand, I know that an open subset of the manifold $X$ is a submanifold; is the converse true, I mean, is each submanifold of $X$ open in $X$? Thank you for your clarifications. REPLY [2 votes]: The geometric boundary of a manifold is always closed, since it is the complement of the open set of interior points. Also, not every submanifold of $X$ is open; just take an example where the submanifold has smaller dimension.
\subsection{The Problems} \label{sec:prob} Here we formally define the PCP (\cref{dfn:PCP}), PCR (\cref{dfn:PCR}), Squared Ridge Regression (\cref{dfn:square}), and Square-root Computation (\cref{dfn:square-root}) problems we consider throughout this paper. Throughout, we let $\A\in\R^{n\times d}$ $(n\ge d)$ denote a data matrix where each row $\aaa_i\in\R^d$ is viewed as a datapoint. Our algorithms typically manipulate the positive semidefinite (PSD) matrix $\A^\top\A$. We denote the eigenvalues of $\A^\top\A$ as $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_d\ge0$ and corresponding eigenvectors as $\nnu_1,\nnu_2,\cdots,\nnu_d\in \R^d$, i.e. $\A^\top\A=\V\LLambda\V^\top$ with $\V\defeq(\nnu_1,\cdots,\nnu_d)$ and $\LLambda\defeq\diag(\lambda_1,\cdots,\lambda_d)$. Given eigenvalue threshold $\lambda\in(0,\lambda_1)$ we define $\PP_\lambda\defeq(\nnu_1,\cdots,\nnu_k)(\nnu_1,\cdots,\nnu_k)^\top$ as the projection matrix projecting any vector onto the top-$k$ eigenvectors of $\A^\top\A$, i.e. $\mathrm{span}\{\nnu_1,\nnu_2,\cdots,\nnu_k\}$, where $\lambda_k$ is the minimum eigenvalue of $\A^\top\A$ no smaller than $\lambda$, i.e. $\lambda_k\ge\lambda>\lambda_{k+1}$. Unless otherwise specified, $\|\cdot\|$ denotes the standard $\ell_2$-norm of a vector or matrix. Given $\gamma\in(0,1)$, the goal of a PCP algorithm is to project any given vector $\vv=\sum_{i\in[d]}\alpha_i\nnu_i$ in the following way: map each eigenvector $\nnu_i$ of $\A^\top\A$ with eigenvalue $\lambda_i$ in $[\lambda(1+\gamma),\infty)$ to itself, each eigenvector $\nnu_i$ with eigenvalue $\lambda_i$ in $[0,\lambda(1-\gamma)]$ to $\0$, and each eigenvector $\nnu_i$ with eigenvalue $\lambda_i$ inside the gap to anywhere between $\0$ and $\nnu_i$. Formally, we define PCP as follows. \begin{definition}[Principal Component Projection] \label{dfn:PCP} The principal component projection (PCP) of $\vv\in\R^d$ at threshold $\lambda$ is $\vv_\lambda^*=\PP_\lambda \vv$.
Given threshold $\lambda$ and eigengap $\gamma$, an algorithm $\mathcal{A}_{\PCP}(\vv,\eps,\delta)$ is an $\epsilon$-approximate PCP algorithm if with probability $1-\delta$, its output satisfies the following: \begin{align} & \bullet \|\PP_{(1+\gamma)\lambda}(\mathcal{A}_{\PCP}(\vv)-\vv)\|\le\eps\|\vv\|;\nonumber\\ & \bullet \|(\I-\PP_{(1-\gamma)\lambda})\mathcal{A}_{\PCP}(\vv)\|\le\eps\|\vv\|; \label{cond:PCP}\\ & \bullet \|(\PP_{(1+\gamma)\lambda} - \PP_{(1-\gamma)\lambda}) (\mathcal{A}_{\PCP}(\vv) - \vv) \|\le \|(\PP_{(1+\gamma)\lambda} - \PP_{(1-\gamma)\lambda}) \vv \| + \eps\|\vv\| \nonumber \end{align} \end{definition} The goal of a PCR problem is to solve regression restricted to the particular eigenspace we are projecting onto in PCP. The resulting solution should have no correlation with eigenvectors $\nnu_i$ corresponding to $\lambda_i\le\lambda(1-\gamma)$, while being accurate for $\nnu_i$ corresponding to eigenvalues $\lambda_i\ge\lambda(1+\gamma)$. Nor should it have too large a correlation with $\nnu_i$ corresponding to eigenvalues in $(\lambda(1-\gamma),\lambda(1+\gamma))$. Formally, we define the PCR problem as follows. \begin{definition}[Principal Component Regression] \label{dfn:PCR} The principal component regression (PCR) of an arbitrary vector $\bb\in\R^n$ at threshold $\lambda$ is $\xx_\lambda^*=\arg\min_{\xx\in\R^d}\|\A\PP_\lambda \xx-\bb\|$. Given threshold $\lambda$ and eigengap $\gamma$, an algorithm $\mathcal{A}_{\PCR}(\bb,\eps,\delta)$ is an $\epsilon$-approximate PCR algorithm if with probability $1-\delta$, its output satisfies the following: \begin{equation} \|(\I-\PP_{(1-\gamma)\lambda})\mathcal{A}_{\PCR}(\bb,\eps)\|\le\epsilon\|\bb\| \enspace \text{ and } \enspace \|\A\mathcal{A}_{\PCR}(\bb,\eps)-\bb\|\le\|\A \xx_{(1+\gamma)\lambda}^*-\bb\|+\eps\|\bb\| ~. \label{cond:PCR} \end{equation} \end{definition} We reduce PCP and PCR to solving squared linear systems.
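As a concrete reference point for these definitions, the exact PCP and PCR can be computed by brute force from an eigendecomposition of $\A^\top\A$. The Python sketch below is our own illustration (all function names are ours); this dense computation is exactly what the fast algorithms of this paper avoid, but it is useful for testing approximate solvers on small instances.

```python
import numpy as np

def exact_pcp(A, v, lam):
    """Exact PCP: project v onto the span of the eigenvectors of A^T A
    whose eigenvalues are at least the threshold lam."""
    evals, evecs = np.linalg.eigh(A.T @ A)   # ascending eigenvalues
    V = evecs[:, evals >= lam]               # top eigenspace at threshold
    return V @ (V.T @ v)

def exact_pcr(A, b, lam):
    """Exact PCR: least squares restricted to the top eigenspace at lam."""
    evals, evecs = np.linalg.eigh(A.T @ A)
    V = evecs[:, evals >= lam]
    z, *_ = np.linalg.lstsq(A @ V, b, rcond=None)
    return V @ z                             # solution lying in span(V)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))
v = rng.standard_normal(8)
p = exact_pcp(A, v, lam=40.0)
assert np.allclose(exact_pcp(A, p, lam=40.0), p)   # projections are idempotent
```

Note that `exact_pcr` returns the canonical minimizer lying in the top eigenspace itself, which automatically satisfies the first condition of \cref{dfn:PCR} exactly.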
The solvers we develop for the squared regression problem defined below are, we believe, of independent interest. \begin{definition}[Squared Ridge Regression Solver] \label{dfn:square} Given $c\in[0,\lambda_1]$~\footnote{We remark that when $c<0$ (or $c>\lambda_1$), we can preprocess the problem by solving $(\A^\top\A-c\I)+\mu \I$ (or $(c\I-\A^\top\A)+\mu \I$) twice, which is known to have efficient solvers~\cite{FMMS16,GHJ+16} enjoying provably better runtime guarantees than what we've shown for the harder (non-PSD) case $c\in[0,\lambda_1]$. }, $\vv\in\R^d$, we consider a squared ridge regression problem whose exact solution is $\xx^*=((\A^\top \A-c\I)^2+\mu^2\I)^{-1}\vv$. We call an algorithm $\RidgeSquare(\A,c,\mu^2,\vv,\eps,\delta)$ an $\eps$-approximate squared ridge regression solver if with probability $1-\delta$ it returns a solution $\tilde{\xx}$ satisfying $\|\tilde{\xx}-\xx^*\|\le\eps\|\vv\|.$ \end{definition} Using a similar idea of rational polynomial approximation, we also reduce the problem of computing $\M^{1/2}\vv$ for an arbitrary PSD matrix $\M$ to solving PSD linear systems approximately. \begin{definition}[Square-root Computation] \label{dfn:square-root} Given a PSD matrix $\M\in\R^{n\times n}$ such that $\mu \I\preceq\M\preceq \lambda\I$ and $\vv\in\R^n$, an algorithm $\SR(\M,\vv,\eps,\delta)$ is an $\eps$-approximate square-root solver if with probability $1-\delta$ it returns a solution $\xx$ satisfying $\|\xx-\M^{1/2}\vv\|\le\eps\|\M^{1/2}\vv\|$. \end{definition} \subsection{Our Results} \label{sec:results} Here we present the main results of our paper, all proved in \cref{App:main}. For data matrix $\A\in\R^{n\times d}$, our running times are presented in terms of the following quantities.
\begin{itemize} \item Input sparsity: $\nnz(\A)\defeq\text{ number of nonzero entries in }\A$; \item Frobenius norm: $\|\A\|_\mathrm{F}^2\defeq \Tr(\A^\top\A)$; \item Stable rank: $\mathrm{sr}(\A)\defeq \|\A\|_\mathrm{F}^2 / \|\A\|_2^2 = \|\A\|_\mathrm{F}^2/\lambda_1$; \item Condition number of top-eigenspace: $\kappa\defeq\lambda_1/\lambda$. \end{itemize} When presenting running times we use $\tilde{O}$ to hide polylogarithmic factors in the input parameters $\lambda_1,\gamma,\vv,\bb$, error rates $\eps$, and success probability $\delta$. For $\A\in\R^{n\times d}$ ($n\ge d$), $\vv\in\R^d$, $\bb\in\R^n$, without loss of generality we assume $\lambda_1\in[1/2,1]$.\footnote{This can be achieved by computing a constant-factor overestimate $\tilde{\lambda}_1$ of $\A^\top\A$'s top eigenvalue $\lambda_1$ through the power method in $\tilde{O}(\nnz(\A))$ time, and considering $\A\leftarrow\A/\sqrt{\tilde{\lambda}_1},\lambda\leftarrow\lambda/\tilde{\lambda}_1,\bb\leftarrow\bb/\sqrt{\tilde{\lambda}_1}$ instead.} Given threshold $\lambda\in(0,\lambda_1)$ and eigengap $\gamma\in(0,2/3]$, the main results of this paper are the following new running times for solving these problems.
\begin{theorem}[Principal Component Projection] \label{thm:pcp_main} For any $\eps\in(0,1)$, there is an $\eps$-approximate PCP algorithm (see \cref{dfn:PCP}) $\ISPCP(\A,\vv,\lambda,\gamma,\eps,\delta)$ specified in \cref{alg:ISPCP} with runtime $$\tilde{O} \left(\nnz(\A)+\sqrt{\nnz(\A)\cdot d\cdot\mathrm{sr}(\A)}\kappa/\gamma \right).$$ \end{theorem} \begin{theorem}[Principal Component Regression] \label{thm:pcr_main} For any $\eps\in(0,1)$, there is an $\eps$-approximate PCR algorithm (see \cref{dfn:PCR}) $\ISPCR(\A,\bb,\lambda,\gamma,\eps,\delta)$ specified in \cref{alg:ISPCR} with runtime $$\tilde{O}\left(\nnz(\A)+\sqrt{\nnz(\A)\cdot d\cdot\mathrm{sr}(\A)}\kappa/\gamma\right).$$ \end{theorem} We achieve these results by introducing a technique we call \emph{asymmetric SVRG} to solve squared systems $[(\A^\top\A-c\I)^2+\mu^2\I]\xx=\vv$ with $c\in[0,\lambda_1]$. The resulting algorithm is closely related to the SVRG algorithm for monotone operators in~\citet{pal16}, but involves a more fine-grained error analysis. This analysis coupled with approximate proximal point~\cite{FGKS15} or Catalyst~\cite{LMH15} yields the following result (see \cref{sec:SVRG} for more details). \begin{theorem}[Squared Solver] \label{thm:square_solver_main} For any $\eps\in(0,1)$, there is an $\eps$-approximate squared ridge regression solver (see \cref{dfn:square}) using $\AsySVRG(\M,\hat{\vv},\zz_0,\eps\|\vv\|,\delta)$ that runs in time $$\tilde{O}\left(\nnz(\A)+\sqrt{\nnz(\A)d\cdot\mathrm{sr}(\A)}\lambda_1/\mu\right).$$ \end{theorem} When the eigenvalues of $\A^\top\A-c\I$ are bounded away from $0$, such a solver can be utilized to solve non-PSD linear systems of the form $(\A^\top\A-c\I)\xx=\vv$ through preconditioning and considering the corresponding problem $(\A^\top\A-c\I)^2\xx=(\A^\top\A-c\I)\vv$ (see \cref{cor:square_solver_main}).
\begin{corollary} \label{cor:square_solver_main} Given $c\in[0,\lambda_1]$ satisfying $(\A^\top\A-c\I)^2\succeq\mu^2\I$ for some $\mu>0$, a non-PSD system $(\A^\top\A-c\I)\xx=\vv$, and an initial point $\xx_0$, there is an algorithm that returns with probability $1-\delta$ a solution $\widetilde{\xx}$ such that $\|\widetilde{\xx}-(\A^\top\A-c\I)^{-1}\vv\|\le\epsilon\|\vv\|$, within runtime $\tilde{O}\bigl(\nnz(\A)+\sqrt{\nnz(\A)d\cdot\mathrm{sr}(\A)}\lambda_1/\mu\bigr)$. \end{corollary} Another byproduct of the rational approximation used in the paper is a nearly-linear runtime for computing an $\eps$-approximate square-root of a PSD matrix $\M\succeq \0$ applied to an arbitrary vector. \begin{theorem}[Square-root Computation] \label{thm:square_root_solver_main} For any $\eps\in(0,1)$, given $\mu\I\preceq\M\preceq\lambda\I$, there is an $\eps$-approximate square-root solver (see \cref{dfn:square-root}) $\SR(\M,\vv,\eps,\delta)$ that runs in time $$\tilde{O}(\nnz(\M)+\mathcal{T})$$ where $\mathcal{T}$ is the runtime for solving $(\M+\kappa\I)\xx=\vv$ for arbitrary $\ \vv\in\R^n$ and $\kappa\in[\tilde{\Omega}(\mu),\tilde{O}(\lambda)]$. \end{theorem}
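For small dense instances, the squared system of \cref{dfn:square} and the non-PSD reduction of \cref{cor:square_solver_main} can simply be solved directly. The following Python sketch is our own illustration (function names are ours), useful only as a correctness reference for fast solvers:

```python
import numpy as np

def ridge_square_direct(A, c, mu, v):
    """Dense O(d^3) reference solve of ((A^T A - c I)^2 + mu^2 I) x = v."""
    d = A.shape[1]
    M = A.T @ A - c * np.eye(d)
    return np.linalg.solve(M @ M + mu**2 * np.eye(d), v)

def non_psd_solve(A, c, v):
    """Solve the non-PSD system (A^T A - c I) x = v via the squared system
    with right-hand side (A^T A - c I) v, as in the corollary."""
    d = A.shape[1]
    M = A.T @ A - c * np.eye(d)
    return ridge_square_direct(A, c, 0.0, M @ v)  # mu = 0: exact inverse

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 6))
v = rng.standard_normal(6)
M = A.T @ A - 0.5 * np.eye(6)
x = ridge_square_direct(A, c=0.5, mu=0.1, v=v)
assert np.allclose((M @ M + 0.01 * np.eye(6)) @ x, v)
assert np.allclose(M @ non_psd_solve(A, 0.5, v), v)
```

The point of the paper's solvers, of course, is to match this output without ever forming $\A^\top\A$ or paying the dense $O(d^3)$ cost.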
\section{Inverse operator by a series expansion} There exists a basic theorem providing a perturbative method to obtain the inverse of continuous linear operators on a Banach space which are not too far from the identity operator. That theorem in its original form, however, does not apply to the case of convolution (or folding) operators. The main result of this paper is a generalization of that theorem to the case of convolution operators. Now we recall the series expansion (also called the Neumann series) for the inverse of an operator. Let $A$ be a continuous linear operator on a Banach space such that $\Vert I-A \Vert< 1$, where $I$ is the identity operator. Then the operator $A$ is one-to-one and onto and its inverse is continuous, and the series $N\mapsto\sum_{n=0}^{N}(I-A)^{n}$ is absolutely convergent to $A^{-1}$. The proof is simple and can be found in any textbook on functional analysis (e.g.\ \cite{Matolcsi}, \cite{Rudin}). It will be instructive, however, to cite the proof, as later we will strengthen this theorem. First, it is easily shown by induction that $\sum_{n=0}^{N}(I-A)^{n}A= A\sum_{n=0}^{N}(I-A)^{n}=I-(I-A)^{N+1}$. The condition $\Vert I-A \Vert< 1$ guarantees both that the sequence $N\mapsto(I-A)^{N+1}$ converges to zero in the operator norm and that the series $N\mapsto\sum_{n=0}^{N}(I-A)^{n}$ is absolutely convergent; thus $\left(\sum_{n=0}^{\infty}(I-A)^{n}\right)A= A\left(\sum_{n=0}^{\infty}(I-A)^{n}\right)=I$, i.e.\ $A^{-1}=\sum_{n=0}^{\infty}(I-A)^{n}$. As $A^{-1}$ is expressed as the limit of a series of continuous operators which is convergent in the operator norm, we infer that $A^{-1}$ is continuous. \begin{Rem} The conditions of the above series expansion theorem fail for any folding operator $A_{\rho}$. \begin{enumerate} \item We can observe that the series expansion is meaningful for a folding operator only when the spaces $X$ and $Y$ are the same. \item Let us assume that $Y=X$.
Then it is easily seen that a folding operator $A_{\rho}$ does not satisfy the required condition $\Vert I-A_{\rho} \Vert< 1$. It is trivial by the triangle inequality of norms that $\Vert I-A_{\rho} \Vert\leq 2$. We now show that this inequality can be saturated for a wide class of cpdfs. Let us choose an arbitrary point $y\in X$, and consider the sequence of pdfs $n\mapsto\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}$, where $K_{n}(y)$ are compact sets having non-zero Lebesgue measure $\lambda(K_{n}(y))$, such that $K_{n+1}(y)\subset K_{n}(y)$ for all $n\in \mathbb{N}$ and $\underset{n\in \mathbb{N}}{\cap}K_{n}(y)=\{y\}$. Then, \[\left\Vert(I-A_{\rho})\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}\right\Vert =\int_{z\not\in K_{n}(y)}\int\rho(z|x)\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(x)\;\mathrm{d}x\;\mathrm{d}z\] \[+\int_{z\in K_{n}(y)}\left\vert\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(z)-\int\rho(z|x)\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(x)\;\mathrm{d}x\right\vert\;\mathrm{d}z.\] By making use of the fact that the integral of any pdf is $1$, one can write \[\int_{z\not\in K_{n}(y)}\int\rho(z|x)\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(x)\;\mathrm{d}x\;\mathrm{d}z=\] \[1-\int\int\chi_{{}_{K_{n}(y)}}(z)\rho(z|x)\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(x)\;\mathrm{d}x\;\mathrm{d}z\] for the first term.
For the second term, one can use the monotonicity of integration: \[\int_{z\in K_{n}(y)}\left\vert\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(z)-\int\rho(z|x)\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(x)\;\mathrm{d}x\right\vert\;\mathrm{d}z\] \[\geq \left\vert\int_{z\in K_{n}(y)}\left(\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(z)-\int\rho(z|x)\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(x)\;\mathrm{d}x\right)\;\mathrm{d}z\right\vert\] \[=\left\vert\int\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(z)\;\mathrm{d}z-\int\int\chi_{{}_{K_{n}(y)}}(z)\rho(z|x)\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(x)\;\mathrm{d}x\;\mathrm{d}z\right\vert\] \[=\left\vert 1-\int\int\chi_{{}_{K_{n}(y)}}(z)\rho(z|x)\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(x)\;\mathrm{d}x\;\mathrm{d}z\right\vert\] \[=1-\int\int\chi_{{}_{K_{n}(y)}}(z)\rho(z|x)\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(x)\;\mathrm{d}x\;\mathrm{d}z\] Here, at the second equality $\int\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(z)\;\mathrm{d}z=1$ was used, and the fact that the integral of any pdf over a Borel set is smaller than or equal to $1$ was used at the third equality. Thus, we infer the inequality: \[\left\Vert(I-A_{\rho})\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}\right\Vert \geq 2\cdot\left(1-\int\int\chi_{{}_{K_{n}(y)}}(z)\rho(z|x)\frac{1}{\lambda(K_{n}(y))}\chi_{{}_{K_{n}(y)}}(x)\;\mathrm{d}x\;\mathrm{d}z\right).\] If the point $(y,y)\in X\times X$ is a Lebesgue point of $\rho$, then, as we now show, the integral term goes to zero as $n$ goes to infinity, thus saturating the inequality in question. If a function $g:X\rightarrow\mathbb{C}$ is locally integrable, then a point $y\in X$ is called a \emph{Lebesgue point of $g$} if $\underset{n\rightarrow\infty}{\mathrm{lim}}\;\frac{1}{\lambda(K_{n}(y))}\int_{K_{n}(y)} \vert g(x)-g(y)\vert\;\mathrm{d}x=0$.
If $y\in X$ is a Lebesgue point for $g$, then by the monotonicity of integration it also follows that $\underset{n\rightarrow\infty}{\mathrm{lim}}\;\frac{1}{\lambda(K_{n}(y))}\int_{K_{n}(y)} g(x)\;\mathrm{d}x=g(y)$. Applying this result to $\rho$ on the product space $X\times X$ (assuming that the point $(y,y)\in X\times X$ is a Lebesgue point of $\rho$), we have that the sequence $n\mapsto\frac{1}{\lambda(K_{n}(y))}\frac{1}{\lambda(K_{n}(y))}\int_{K_{n}(y)}\int_{K_{n}(y)}\rho(z|x)\;\mathrm{d}x\;\mathrm{d}z$ is convergent to $\rho(y|y)$. Multiplying this sequence by the sequence $n\mapsto \lambda(K_{n}(y))$ (which is convergent to zero), we infer that $\underset{n\rightarrow\infty}{\mathrm{lim}}\;\frac{1}{\lambda(K_{n}(y))}\int_{K_{n}(y)}\int_{K_{n}(y)}\rho(z|x)\;\mathrm{d}x\;\mathrm{d}z=0$. If $\rho$ is continuous, then every point in $X\times X$ is a Lebesgue point of $\rho$. Thus, we have shown that if the cpdf $\rho$ is continuous, then $\Vert I-A_{\rho} \Vert=2$ holds; therefore the original theorem of Neumann cannot be applied directly for a folding operator with continuous cpdf. \end{enumerate} \end{Rem} Apart from the above remark, the obstruction to inverting the convolution on the operator level is obvious: as convolution operators are not onto in general, one can only try to invert the operator on a function in the range of the operator. We modify the theorem for the case of convolution operators by requiring, instead of convergence in the operator norm, the convergence of the series $N\mapsto\sum_{n=0}^{N}(I-A)^{n}(Af)$ in some sense (equivalently, the convergence of the sequence $N\mapsto(I-A)^{N+1}f$ in the same sense), for any $f\in L^{1}(X)$. To obtain a convenient result, let us recall that the elements of $L^{1}(X)$ can be viewed as regular tempered distributions.
The Fourier transformations can be extended to the space of tempered distributions, where they are one-to-one and onto, continuous, and their inverse is also continuous (\cite{Matolcsi}, \cite{Rudin}). The proof of convergence will be performed on the Fourier transforms of the functions, then the result will be brought back by using the continuity of the inverse Fourier transformation on the space of tempered distributions. \begin{Thm} Let $A_{\eta}$ be a convolution operator for some $\eta\in L^{1}(X)$. Let $Z$ be the set of zeros of the function $F_{\pm}\eta$. If the inequality \[\left\vert 1-F_{\pm}\eta \right\vert < 1\] is satisfied everywhere outside $Z$, then for all $f\in L^{1}(X)$ the series \[N\mapsto\sum_{n=0}^{N}(I-A_{\eta})^{n}(A_{\eta}f)\] is convergent in the space of tempered distributions, and \[\sum_{n=0}^{\infty}(I-A_{\eta})^{n}(A_{\eta}f)=f-F_{\pm}^{-1}(\chi_{{}_{Z}}F_{\pm}f).\] \end{Thm} \begin{Prf} Assume that $\left\vert 1-F_{\pm}\eta \right\vert < 1$ holds everywhere outside $Z$. Let $V$ denote the subset of $X^{*}$ where $F_{\pm}\eta$ is nonzero. It is clear that $V$ and $Z$ are disjoint Lebesgue measurable sets and $X^{*}=V\cup Z$. Trivially, the sequence $N\mapsto \left\vert 1-F_{\pm}\eta \right\vert^{N+1}$ converges pointwise to $0$ on $V$, furthermore $\left\vert 1-F_{\pm}\eta \right\vert^{N+1}=1$ on $Z$ for all $N$. 
For every $f\in L^{1}(X)$ and rapidly decreasing test function $\varphi$ on $X^{*}$, we have \[\left\vert\int(1-F_{\pm}\eta(y))^{N+1}F_{\pm}f(y) \cdot\varphi(y)\;\mathrm{d}y-\int\chi_{{}_{Z}}\cdot F_{\pm}f(y) \cdot\varphi(y)\;\mathrm{d}y\right\vert=\] \[\left\vert\int_{V}(1-F_{\pm}\eta(y))^{N+1}F_{\pm}f(y) \cdot\varphi(y)\;\mathrm{d}y\right\vert\leq\] \[\int_{V}\left\vert1-F_{\pm}\eta(y)\right\vert^{N+1} \left\vert F_{\pm}f(y)\right\vert\cdot\left\vert\varphi(y)\right\vert\;\mathrm{d}y.\] The sequence of Lebesgue integrable functions $N\mapsto\left\vert1-F_{\pm}\eta\right\vert^{N+1} \left\vert F_{\pm}f\right\vert\cdot\left\vert\varphi\right\vert$ converges pointwise to zero on $V$, and $\left\vert1-F_{\pm}\eta\right\vert^{N+1} \left\vert F_{\pm}f\right\vert\cdot\left\vert\varphi\right\vert\leq \left\vert1-F_{\pm}\eta\right\vert^{1} \left\vert F_{\pm}f\right\vert\cdot\left\vert\varphi\right\vert$ for all $N$, thus by Lebesgue's theorem of dominated convergence the last term of the inequality tends to zero when $N$ goes to infinity. Therefore, the sequence of functions $N\mapsto(1-F_{\pm}\eta)^{N+1}(F_{\pm}f)$ is convergent in the space of tempered distributions to the function $\chi_{{}_{Z}}F_{\pm}f$. Applying the inverse Fourier transformation $F_{\pm}^{-1}$ and using the continuity of the inverse Fourier transformation in the space of tempered distributions, we get the desired result, as by the convolution theorem we have $F_{\pm}^{-1}\left((1-F_{\pm}\eta)^{N+1}(F_{\pm}f)\right)=(I-A_{\eta})^{N+1}f$, and because \[f-\sum_{n=0}^{N}(I-A_{\eta})^{n}(A_{\eta}f)=(I-A_{\eta})^{N+1}f\] for all $N$. \end{Prf} \begin{Rem} Let us assume that the condition of the theorem holds. Then it is quite evident that \begin{enumerate} \item If $Z$ has zero Lebesgue measure (which holds if and only if $A_{\eta}$ is one-to-one), then $F_{\pm}^{-1}(\chi_{{}_{Z}}F_{\pm}f)=0$.
This means that the series in question always restores the arbitrarily chosen original function $f$ if and only if $A_{\eta}$ is one-to-one, i.e.\ if and only if $F_{\pm}\eta$ is nonzero almost everywhere. \item If $Z$ has nonzero Lebesgue measure, our series also converges, and restores the maximum possible information about the original function $f$, namely the tempered distribution $f-F_{\pm}^{-1}(\chi_{{}_{Z}}F_{\pm}f)$. However, this tempered distribution may not be a function in general. If the function $\chi_{{}_{Z}}F_{\pm}f$ is not a continuous function which tends to zero at infinity, then $F_{\pm}^{-1}(\chi_{{}_{Z}}F_{\pm}f)$ cannot be an integrable function. As we shall see in the next section, if the function $\chi_{{}_{Z}}F_{\pm}f$ is not a continuous function which is bounded, then $F_{\pm}^{-1}(\chi_{{}_{Z}}F_{\pm}f)$ cannot even be a measure with finite variation. \item Let now $\eta$ and $f$ be pdfs, and suppose that $F_{\pm}^{-1}(\chi_{{}_{Z}}F_{\pm}f)=0$. Then our convergence result has the following meaning in probability theory: the series converges in the sense that the expectation values of all rapidly decreasing test functions on $X$ are restored. Namely, for any rapidly decreasing test function $\psi$ on $X$ we have that: \[\underset{N\rightarrow\infty}{\mathrm{lim}}\;\int\left(\sum_{n=0}^{N}(I-A_{\eta})^{n}(A_{\eta}f)\right)(x)\cdot\psi(x)\;\mathrm{d}x=\int f(x)\cdot\psi(x)\;\mathrm{d}x.\] \end{enumerate} \end{Rem} It can be easily observed that the condition of our previous theorem is not always satisfied for a pdf $\eta$. E.g.\ if $\eta$ is a Gaussian pdf centered at zero, then it is satisfied, but e.g.\ if $\eta$ is a uniform pdf on a rectangular domain centered at zero, then the condition is not satisfied. Therefore, one could think that the applicability of our deconvolution theorem is rather limited. This is not the case, however, as stated in the following theorem. \begin{Thm} Let $\eta$ be a pdf on $X$.
Then for any $f\in L^{1}(X)$ the series \[N\mapsto\sum_{n=0}^{N}(I-A_{P\eta}A_{\eta})^{n}A_{P\eta}(A_{\eta}f)\] is convergent in the space of tempered distributions, and \[\sum_{n=0}^{\infty}(I-A_{P\eta}A_{\eta})^{n}A_{P\eta}(A_{\eta}f)= f-F_{\pm}^{-1}(\chi_{{}_{Z}}F_{\pm}f),\] where $Z:=\{y\in X^{*}|F_{\pm}\eta(y)=0\}$. Here $P$ is the parity operator on $L^{1}(X)$, namely $Pf(x):=f(-x)$ for all $f\in L^{1}(X)$ and $x\in X$. \end{Thm} \begin{Prf} Let us observe that if $F_{\pm}\eta$ is real valued and nonnegative for a pdf $\eta$, then $\vert 1-F_{\pm}\eta\vert < 1$ is automatically satisfied outside $Z$. This is because \begin{enumerate} \item by our assumption $0 < F_{\pm}\eta$ outside $Z$, thus we conclude that $1-F_{\pm}\eta<1$ outside $Z$, and \item by the inequality $\vert F_{\pm}\eta\vert\leq \Vert \eta\Vert=1$, we conclude that $0\leq 1-\vert F_{\pm}\eta \vert = 1-F_{\pm}\eta$. \end{enumerate} It is easy to see that $F_{\pm}P\eta=\overline{F_{\pm}\eta}$ (where the bar denotes complex conjugation) for a pdf $\eta$, because $\eta$ is real valued. Thus, we have that $F_{\pm}(P\eta\star \eta)=\vert F_{\pm}\eta \vert^2$ is real valued and nonnegative; consequently, by our previous observation, the inequality $\vert 1-F_{\pm}(P\eta\star\eta)\vert<1$ holds outside $Z$, i.e.\ our previous theorem can be applied by replacing the convolution operator $A_\eta$ with the double convolution operator $A_{P\eta}A_{\eta}$. \end{Prf} When applying this theorem in practice, one should take into account that the measured pdf (which is obtained by histogramming in general) is not in the range of the convolution operator, but it can be viewed as the sum of a pdf in the range of the convolution operator (if our model is accurate enough) and a noise term.
By the above theorem, the series expansion will be convergent on the pdf in the range of the convolution operator, but will (most probably) be divergent on the noise term, as it is not in the range of the convolution operator in general. Thus, the problem is when to stop the series expansion: one should let the series go far enough to restore the original (unknown) pdf, but should stop the series expansion early enough to prevent the divergence arising from the noise term. This truncation procedure can be viewed as a very elegant way to perform high-frequency regularization. Note, however, that the regularization problem at finite frequencies (at the zeros of the Fourier transform of the convolver pdf) does not arise at all with this method. The only remaining question is at which index one should stop so as to keep the noise content below a given threshold. When working in practice, our density functions are discrete in general (e.g.\ histograms), thus we may view them as a vector of random variables (e.g.\ in the case of histogramming, these random variables are the numbers of entries in the histogram bins). Let us denote this vector by $v$. If $A$ is a linear operator (i.e.\ a matrix here), then we have that $\mathrm{E}(Av)= A \mathrm{E}(v)$ and $\mathrm{Covar}(Av)=A\mathrm{Covar}(v)A^{+}$, where we denote expectation value by $\mathrm{E}(\cdot)$, covariance matrix by $\mathrm{Covar}(\cdot)$, and the adjoint matrix by $(\cdot)^{+}$. Thus, in the $N$-th step of the series expansion, we have \[\mathrm{Covar}\left(\sum_{n=0}^{N}\left(I-A_{\eta}\right)^{n}v\right)= \left(\sum_{n=0}^{N}\left(I-A_{\eta}\right)^{n}\right) \mathrm{Covar}(v)\left(\sum_{n=0}^{N}\left(I-A_{\eta}\right)^{n}\right)^{+}.\] This means that if we have an initial estimate for the covariance matrix $\mathrm{Covar}(v)$, we can calculate the covariance matrix at each step, thus can calculate the propagated errors at each order.
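The covariance propagation rule $\mathrm{Covar}(Av)=A\,\mathrm{Covar}(v)A^{+}$ above can be made concrete with a small Python sketch (our own illustration; the circulant smoothing kernel, bin counts, and truncation order are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 16                                   # number of histogram bins
counts = rng.poisson(100.0, size=M)      # entries N_i, independent Poisson
cov0 = np.diag(counts.astype(float))     # Covar(v) ~ diag(N_1, ..., N_M)

# A toy discretized folding operator: a circulant smoothing matrix.
kernel = np.array([0.25, 0.5, 0.25])
A = sum(w * np.roll(np.eye(M), k - 1, axis=1) for k, w in enumerate(kernel))

# Partial sum S_N = sum_{n=0}^{N} (I - A)^n for N = 4, built as a matrix.
S, term = np.zeros((M, M)), np.eye(M)
for _ in range(5):
    S += term
    term = term @ (np.eye(M) - A)

cov_N = S @ cov0 @ S.T                   # Covar(S_N v) = S_N Covar(v) S_N^+
noise = np.sqrt(np.diag(cov_N)).sum() / counts.sum()
assert np.allclose(cov_N, cov_N.T) and noise > 0
```

Here `noise` is exactly the relative noise content estimate described below, ready to be compared against a stopping threshold.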
When using the method of histogramming, as the entries in the histogram bins are known to obey independent Poisson distributions, the initial undistorted estimates $\mathrm{E}(v_{i})\approx N_{i}$ ($i\in\{1,\dots,M\}$) and $\mathrm{Covar}(v)\approx\mathrm{diag} (N_{1},\dots,N_{M})$ will be valid, where we consider our histogram to be a mapping $H:\{1,\dots,M\}\rightarrow \mathbb{N}_{0},i\mapsto N_{i}$. The squared standard deviations are the diagonal elements of the covariance matrix, thus we can have an estimate on the $L^{1}$-norm of the noise term at each $N$-th order by taking $\frac{1}{\sum_{j=1}^{M}N_{j}}\sum_{i=1}^{M}\sqrt{\mathrm{Covar}_{ii} \left(\sum_{n=0}^{N}\left(I-A_{\eta}\right)^{n}v\right)}$. By stopping the series expansion when this noise content exceeds a certain predefined threshold, we get the desired truncation of the series expansion. \begin{Rem} We show another (iterative) form of our series expansion which may be more intuitive for physicists. Namely, take the initial conditions \[f_{0}:=A_{P\eta}H,\] \[\hat{C}_{0}:=A_{P\eta}\mathrm{diag}(H),\quad C_{0}:=\left(A_{P\eta}\hat{C}_{0}^{+}\right)^{+}.\] Then, perform the iteration steps \[f_{N+1}:=f_{N}+f_{0}-A_{P\eta}A_{\eta}f_{N},\] \[\hat{C}_{N+1}:=\hat{C}_{N}+\hat{C}_{0}-A_{P\eta}A_{\eta}\hat{C}_{N}, \quad C_{N+1}:=\left(\hat{C}_{N}^{+}+\hat{C}_{0}^{+}- A_{P\eta}A_{\eta}\hat{C}_{N}^{+}\right)^{+}.\] Here $H$ denotes the initial (measured) histogram, $f_{N}$ is the deconvolved histogram at the $N$-th step, and $A_{P\eta}A_{\eta}$ is the discrete version of the double convolution operator. The quantity $\hat{C}_{N}$ is a supplementary quantity, and $C_{N}$ is the covariance matrix at each step. The noise content can be written as $\frac{1}{\sum_{j=1}^{M}N_{j}}\sum_{i=1}^{M}\sqrt{\left(C_{N}\right)_{ii}}$, which should be kept under a certain predefined threshold. \end{Rem} \begin{Rem} As pointed out in the previous remark, one can exactly follow the error propagation during the iteration.
However, storing and processing the whole covariance matrix can cost a lot of memory and CPU time. Therefore, one may rely on a slightly more pessimistic but less costly approximation of the error propagation, namely on Gaussian error propagation. This means that at each step one assumes the covariance matrix to be approximately diagonal, i.e.\ the method is based on neglecting the correlations between entries (which, indeed, are absent initially); this will slightly overestimate the error content. Gaussian error propagation means that when calculating the action of the operators in question, we apply the following two rules: \begin{enumerate} \item if $v$ is a random variable (histogram entry), and $a$ is a number, then $\sigma(a\cdot v):=\vert a\vert\cdot\sigma(v)$ (this is exact, of course), and \item if $v_1$ and $v_2$ are random variables (histogram entries), then $\sigma^{2}(v_1+v_2):=\sigma^{2}(v_1)+\sigma^{2}(v_2)$ (which is exact only if $v_1$ and $v_2$ are uncorrelated). Here $\sigma$ means standard deviation. \end{enumerate} \end{Rem} \begin{Rem} Even if the convergence condition for the deconvolution by series expansion is satisfied for $A_{\eta}$, it is better to use the double deconvolution procedure by $A_{P\eta}A_{\eta}$, for the following reason. In practice the measured pdf corresponds to a pdf in the range of $A_{\eta}$ plus a noise term. When convolving the measured pdf by $P\eta$ before the iteration, the noise level is reduced by orders of magnitude (the convolution by $P\eta$ smooths out the statistical fluctuations). As a rule of thumb, one iteration step is lost with the convolution by $P\eta$, but several iteration steps are gained, as we start the iteration from a much lower noise level. \end{Rem}
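The initial conditions and iteration steps of the remarks above translate directly into code. The following Python sketch is our own illustration (the circulant test kernel, whose transpose plays the role of $A_{P\eta}$, and the stopping threshold are arbitrary choices, not prescribed by the text):

```python
import numpy as np

def deconvolve(H, A, max_steps=50, noise_threshold=0.1):
    """Series-expansion deconvolution of a histogram H:
    f_0 = A_{P eta} H,  f_{N+1} = f_N + f_0 - (A_{P eta} A_eta) f_N,
    stopped once the estimated relative noise content exceeds a threshold.
    For a circulant discretization A of A_eta, A.T plays the role of A_{P eta}.
    """
    M = len(H)
    B = A.T @ A                        # discrete double folding operator
    f0 = A.T @ H.astype(float)
    f, S, term = f0.copy(), np.eye(M), np.eye(M)
    for _ in range(max_steps):
        term = term @ (np.eye(M) - B)  # (I - B)^{N+1}
        S = S + term                   # running sum of (I - B)^n
        SA = S @ A.T
        cov = SA @ np.diag(H.astype(float)) @ SA.T   # propagated covariance
        if np.sqrt(np.diag(cov)).sum() / H.sum() > noise_threshold:
            break                      # noise content exceeds the threshold
        f = f + f0 - B @ f
    return f

# Toy check: fold a spiky histogram, then partially unfold it.
M = 32
kernel = np.array([0.25, 0.5, 0.25])   # symmetric folding kernel
A = sum(w * np.roll(np.eye(M), k - 1, axis=1) for k, w in enumerate(kernel))
f_true = np.zeros(M)
f_true[10], f_true[20] = 40000.0, 20000.0
H = A @ f_true
f_rec = deconvolve(H, A)
# The iterate fits H at least as well as the smoothed starting point f_0.
assert np.linalg.norm(A @ f_rec - H) <= np.linalg.norm(A @ (A.T @ H) - H) + 1e-6
```

In a real application one would replace the dense matrices by convolutions, the toy kernel by the measured response cpdf, and tune the threshold to the desired noise content, as discussed above.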
TITLE: What is the difference between a linear and a non-linear perturbation? QUESTION [1 upvotes]: Sometimes you will hear about the stability of certain solutions (black holes, solitons, etc) with respect to perturbations. Often they talk about linear vs. non-linear perturbations. What is the distinction between a linear and a non-linear perturbation? I assume it has to do with the response of the solution to the perturbation, but how is it determined whether a given perturbation is "non-linear" or "linear"? Or maybe it has to do with the specific method used? (e.g. applying a perturbation to a linearized version of the equations results in a "linear perturbation"?) REPLY [1 votes]: I would tend to agree with you that this (commonly used) language is somewhat misleading. A perturbation is just a perturbation; "linear" and "non-linear" are words that describe the methods used to understand the perturbation. (To paraphrase a famous physicist, also keep in mind that dividing the world between "linear" and "non-linear" is like dividing the world between "bananas" and "non-bananas".) Typically the physics of some system is described by some non-linear differential equation. Perhaps we are able to solve that equation in a special case, such as a situation with a lot of symmetry. Then a perturbation describes a deviation of the system from that ideal case we can solve. If the perturbation is "small" (in some sense that you need to make precise within the context you are working in), then you can usually linearize the full differential equation about the ideal solution, and the perturbation will be well described by solutions to this linearized equation (the linearized equation is a good approximation to the full dynamics). This would be a linear perturbation. However, if the perturbation is "large", then the linearized differential equation will give a poor description of the behavior of the perturbation (the linearized equations are a poor approximation).
So you need to make use of the full non-linear differential equation to understand what is going on, or at least you need to make a better approximation than linearizing. This would be a non-linear perturbation.
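A standard toy illustration of this distinction (my example, not the poster's) is the pendulum: the full equation is $\ddot\theta = -\sin\theta$, its linearization about the hanging solution is $\ddot\theta = -\theta$, and the linearized equation tracks the full one only while the perturbation stays small:

```python
import math

def pendulum(theta0, linear, dt=1e-3, t_end=10.0):
    """Integrate theta'' = -sin(theta), or its linearization theta'' = -theta,
    with semi-implicit Euler, starting at rest from angle theta0."""
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        acc = -(theta if linear else math.sin(theta))
        omega += acc * dt
        theta += omega * dt
    return theta

# small perturbation: the linearized equation is a good approximation
small_err = abs(pendulum(0.05, linear=True) - pendulum(0.05, linear=False))
# large perturbation: the linearization fails badly (the period shifts a lot)
large_err = abs(pendulum(2.5, linear=True) - pendulum(2.5, linear=False))
```

After 10 seconds the two descriptions differ by a tiny fraction of the amplitude for the small kick, but by order unity for the large one.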
TITLE: Find the point where the slope changes drastically QUESTION [2 upvotes]: I have a distribution for which I have to find the point where the slope changes drastically. In visual terms, I have to find this point: I thought I could use derivatives, but for the following equation: $$ y = -0.255\ln(x) + 1.6889 $$ it seems I can't. How can I get what I need? REPLY [2 votes]: Let the curve be $y=-a \ln x +b.$ [The values of $a,b$ from the equation are then $a=0.255,\ b=1.6889,$ but note these values do not agree with the diagram in the question.] Anyway, for these curves one does not want to maximize the second derivative, which here is $y''=a/x^2$ and goes to $+\infty$ as $x \to 0^+.$ The formula for the curvature $\kappa(x)$ of the curve $y=y(x)$, when squared for ease of maximizing it, is $$\kappa(x)^2=\frac{(y'')^2}{(1+y'^2)^3},$$ and when we take the derivative of this, insert $y'=-a/x, \ y''=a/x^2$ and factor, we get for the derivative of the squared curvature the expression $$\frac{-2a^2x(2x^2-a^2)}{(x^2+a^2)^4}.$$ So the maximal squared curvature (and thus the maximal curvature) occurs at $x=a/\sqrt{2}.$ For the values of $a,b$ noted at the top of this answer, taken from the displayed equation of the post, this gives the point of maximal curvature as about $(0.18031,2.12573).$ A more convincing answer would require the true values of $a,b$ for the equation which goes with the diagram included in the question.
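The maximizer can also be checked numerically; the sketch below grid-searches the squared curvature of $y=-a\ln x+b$ using the values from the displayed equation (which, as the answer notes, may not be the diagram's true values):

```python
import math

a, b = 0.255, 1.6889                    # values from the displayed equation
y    = lambda x: -a * math.log(x) + b
yp   = lambda x: -a / x                 # y'
ypp  = lambda x:  a / x**2              # y''
kappa_sq = lambda x: ypp(x)**2 / (1 + yp(x)**2)**3   # squared curvature

# grid search over (0, 2] for the point of maximal curvature
xs = [i / 100000 for i in range(1, 200001)]
x_star = max(xs, key=kappa_sq)          # should approximate a / sqrt(2)
```

The grid maximum lands at $x \approx 0.18031 \approx a/\sqrt 2$, and $y(x_\star) \approx 2.12573$, matching the closed-form answer.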
TITLE: If $x,y \in V$ are linearly independent, then there exists a transvection $\tau$ with $\tau(x)=y$ QUESTION [1 upvotes]: Let $V$ be an $n$-dimensional $K$-vector space. A $\tau \in \operatorname{GL}(V)$ is called a transvection if there exists an $(n-1)$-dimensional $\tau$-invariant subspace $W$ of $V$ with $$\tau|_{W} = \operatorname{id}_W \text{ and } \tau(v)-v \in W \quad \forall v \in V$$ Task: Prove: If $x,y \in V$ are linearly independent, then there exists a transvection $\tau$ with $\tau(x)=y$. All the candidates for $\tau$ that I can find either do not satisfy $\tau(x)=y$ or $\tau(v)-v \in W$. What I have so far: $x,y$ are linearly independent $\Rightarrow \dim V \geq 2$, so with $W=\langle y \rangle$ we get $\dim(V) > \dim(W) > 0$; hence such a $W$ exists. I have found a similar post on that task, but it does not seem to answer my question. If it does, please help me clarify that answer. REPLY [1 votes]: Since $x$ and $y$ are linearly independent, so are $x$ and $y-x$, so there is a linear functional $\varphi: V \to K$ with $$\varphi(x)=1 \quad \text{and} \quad \varphi(y-x)=0.$$ Define $$\tau(v) := v + \varphi(v)(y-x).$$ Then $\tau(x) = x + (y-x) = y$. Take $W := \ker\varphi$, which is $(n-1)$-dimensional. For $v \in W$ we have $\tau(v) = v$, so $\tau|_W = \operatorname{id}_W$; and for every $v \in V$ we have $\tau(v)-v = \varphi(v)(y-x) \in W$, because $\varphi(y-x)=0$. In particular $W$ is $\tau$-invariant. Finally, $\tau$ is invertible with $\tau^{-1}(v) = v - \varphi(v)(y-x)$ (compose the two maps and use $\varphi(y-x)=0$), so $\tau \in \operatorname{GL}(V)$ is the desired transvection. (A warning: the matrix swapping two basis vectors does not work here; it sends $x$ to $y$, but $\tau(v)-v \notin W$ for $v=x$ unless $\operatorname{char} K = 2$.)
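One standard construction of such a transvection is $\tau(v) = v + \varphi(v)(y-x)$ for a linear functional $\varphi$ with $\varphi(x)=1$ and $\varphi(y-x)=0$; a quick numerical sanity check, in coordinates where $x=e_1$ and $y=e_2$ (an illustrative choice of basis and of $\varphi$):

```python
import numpy as np

# tau(v) = v + phi(v) (y - x), with phi(x) = 1 and phi(y - x) = 0.
n = 4
x = np.array([1., 0., 0., 0.])
y = np.array([0., 1., 0., 0.])
phi = np.array([1., 1., 0., 0.])        # phi(x) = 1, phi(y - x) = 0

tau = np.eye(n) + np.outer(y - x, phi)  # matrix of tau in this basis

# tau maps x to y ...
assert np.allclose(tau @ x, y)
# ... fixes W = ker(phi) pointwise (a basis of the hyperplane W) ...
for w in [y - x, np.array([0., 0., 1., 0.]), np.array([0., 0., 0., 1.])]:
    assert np.allclose(tau @ w, w)
# ... and tau(v) - v lies in W = ker(phi) for an arbitrary v
v = np.array([2., -1., 3., 0.5])
assert np.isclose(phi @ (tau @ v - v), 0.)
```

As expected for a transvection, $\tau - \operatorname{id}$ has rank $1$ (so $\dim W = n-1$) and $\det\tau = 1$.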
\begin{document} \begin{abstract} For each irrational $\alpha\in[0,1)$ we construct a continuous function $f\: [0,1)\to \R$ such that the corresponding cylindrical transformation $[0,1)\times\R \ni (x,t) \mapsto (x+\alpha, t+ f(x)) \in [0,1)\times\R$ is transitive and the Hausdorff dimension of the set of points whose orbits are discrete is 2. Such cylindrical transformations are shown to display a certain chaotic behaviour of Devaney-like type. \end{abstract} \maketitle \section*{Introduction} Chaotic behaviour in dynamical systems has been of particular interest in topological dynamics since about the second half of the 20th century.\footnote{Let us recall that one of the most popular definitions of chaos, Devaney chaos, comprises dense orbits (transitivity), a dense set of periodic orbits and sensitivity to initial conditions (the last condition usually follows from the first two).} Few examples had been studied earlier; thus it must have been surprising to Abram Besicovitch to discover a homeomorphism of the cylinder $\T\times\R$ ($\simeq [0,1)\times\R$) with both dense and (closed) discrete orbits (\cite{bes2}, see also \cite{bes1}). It is an example of a class now called cylindrical transformations or, more generally, skew products. A \emph{cylindrical transformation}, or \emph{cylinder},\footnote{Also called a \emph{cylinder flow} or an \R-extension.} is a mapping of the form \[ T_f\:X\times\R\ni (x,t) \mapsto (Tx, t+f(x)) \in X\times\R, \] where $T\: X\to X$ is, in the most general setting, a homeomorphism of a topological space, and $f\: X\to \R$ is a continuous function. They arise naturally in ergodic theory, as their iterates are the products of the respective iterates of $T$ and the ergodic sums of $f$ over $T$. Formally, they were introduced (and even earned their own chapter) in a textbook on topological dynamics \cite[Chapter 14]{gohe}.
However, such transformations were already considered before: Besicovitch viewed his cylinder on $[0,1)\times\R$ as a homeomorphism of the punctured plane $\R^2\setminus\{0\}$ and expanded it to a homeomorphism of the plane with dense and discrete orbits. Also, cylinders are sections of the flows derived from some differential equations (studied in \cite{poi}, Chapitre XIX, pp. 202ff.; see also \cite[Section 8]{flem}). The result of Besicovitch concerned only some particular $Tx=x+\alpha$ on \T\ and $f\:\T\to\R$. Therefore, a few natural questions arise: which cylinders have both dense and discrete orbits? (Such cylinders are hereafter called \emph{Besicovitch cylinders}.) For which rotations do such cylinders exist and how common are they? How many discrete orbits do they have? What about other homeomorphisms $(X,T)$? These problems were studied, among others, by Fr\k aczek and Lema\'nczyk in \cite{flem} and by Kwiatkowski and Siemaszko in \cite{ksie}. In particular, in \cite{flem}, Besicovitch cylinders over every minimal rotation of tori $\T^d$ were constructed. As for the number of discrete orbits, it is known that the set of nonrecurrent points in these cases is small in both the topological and the measure-theoretic sense: it is of first category (albeit dense) and of measure zero. Thus, the authors of \cite{flem} used some finer means to analyse the set of points with discrete orbits. Firstly, for every minimal rotation of a torus $\T^d$ they found a Besicovitch cylinder with uncountably many discrete orbits. Secondly, for almost every minimal rotation there is a Besicovitch cylinder for which the set of points with discrete orbits has Hausdorff dimension at least $d + 1/2$ (that is, codimension at most $1/2$). Also, the authors discovered some classes of regular examples (in terms of H\"older continuity, Fourier coefficients or degree of smoothness). They left as an open problem whether higher Hausdorff dimensions can be achieved.
The (positive) solution of this problem is the main topic of the present paper: by enhancing the techniques from \cite{flem} we have constructed Besicovitch cylinders with full Hausdorff dimension of discrete orbits for every minimal rotation of $\T^d$. The present paper consists of five sections. Section 1 contains some preliminary facts on cylindrical transformations that are relevant to our quest for Besicovitch cylinders; in particular, we show that Besicovitch cocycles form a first category set within some relevant function space. In Sections 2 and 3, we present our construction, define some subsets of \T\ and prove that their elements have discrete orbits (although there may also exist other discrete orbits). The Hausdorff dimension of these sets is calculated in Section 4. The last section introduces a definition of chaos that some Besicovitch cylinders satisfy, which is also a possible generalization of the Devaney chaos to noncompact dynamical systems. \section{Cylindrical transformations} \label{ss/cyl} The cylindrical transformations are a special case of the concept of skew product (see \cite{foko}, Subsection 10.1.3) in ergodic theory, transferred in a~natural way to the topological setting. In general, they can be defined for a minimal homeomorphism $T$ of a compact metric space $X$ (the \emph{base}) with a $T$-invariant measure \m\ defined on the Borel $\sigma$-algebra, and a real continuous function $f\:X\too \R$ (which we will customarily call a \emph{cocycle}). In the next sections, we will confine ourselves to minimal rotations on tori with Lebesgue measure. \footnote{Minimal rotations on compact groups do not always exist -- groups possessing them are called \emph{monothetic}.
All tori $\T^n$ are monothetic, and a rotation on $\T^n$ is minimal precisely when its coordinates are irrational and $\mathbb{Q}$-linearly independent; moreover, these rotations are uniquely ergodic with respect to Lebesgue measure.} Now, $T$ and $f$ generate a \emph{cylindrical transformation} (or a \emph{cylinder}): \begin{align*} &T_f\:X\times\R\too X\times\R\\ &T_f(x,t) \df (Tx,t+f(x)) \end{align*} The iterates of $T_f$ are of the form $T_f^n(x,t) = (T^n x,t+f^{(n)}(x))$, where $f^{(n)}$ is given by the formula: \[ f^{(n)}(x)\df \begin{cases} \ f(x) + f(Tx) + \dots + f(T^{n-1}x),& \text{for } n>0,\\ \quad 0, & \text{for } n=0,\\ \ - f(T^{-1}x) - f(T^{-2}x) - \dots - f(T^n x),& \text{for } n<0. \end{cases} \] Observe that the dynamics of a point $(x,t)$ does not depend on $t$, because the mappings \[ \tau_{t_0}\: X\times\R \ni (x,t)\mapsto (x,t+t_0) \in X\times\R \] (for arbitrary $t_0\in \R$) are in the topological centralizer of $T_f$: \begin{multline*} T_f(\tau_{t_0}(x,t)) = T_f(x,t+t_0) = (Tx,t+t_0+f(x)) =\\ \tau_{t_0}(Tx,t+f(x)) = \tau_{t_0}(T_f(x,t)). \end{multline*} Unlike in the compact case, a homeomorphism of a locally compact space (as here $\xr$) need not have a minimal subset. The cylinder on a compact space is never minimal (as proved in \cite{bes2}), thus it is meaningful to study minimal subsets. From now on, we will usually assume that the base is a torus $X=\T^d$ or even the circle (with a minimal rotation). Then, it is well-known that there are two cases in which the minimal subsets can be easily described: \begin{enumerate} \def\theenumi{T\arabic{enumi}} \item when $\int_{\T^d} f\dm\mu \neq 0$, all points have closed discrete orbits, or, equivalently: for all $x\in X$: $|f^{(n)}(x)| \xrightarrow{n\to\infty} \infty$.
\label{tr1} \item when the cocycle is a \emph{coboundary}, i.e.\ of the form $f= g - g\circ T$ for a (continuous) \emph{transfer function} $g\:\T^d\too\R$, the minimal sets are vertically translated copies of the graph of $g$ in $\T^d\times \R$. Conversely, if some orbit under $T_f$ is bounded, then so are all of them, and the cocycle $f$ is a coboundary (Gottschalk-Hedlund Theorem, \cite[Theorem 14.11]{gohe}).\label{tr2} \end{enumerate} Notice that if $f$ is a coboundary and $T$ is measure-preserving, then $\int_{\T^d} f\dm\mu = 0$. Also, in both cases \ref{tr1} and \ref{tr2} the phase space decomposes into minimal sets. In what follows we will call cocycles that fulfil \ref{tr1} or \ref{tr2} \emph{trivial}. \begin{thm} [Lema\'nczyk, Mentzen] \label{t1} If $X=\T^d$, $T$ is a minimal rotation and a cocycle $f$ is not trivial, then the cylinder $T_f$ is transitive (see \cite{leme}, Lemmas 5.2, 5.3). Therefore, if $f$ is of average zero, but $T_f$ has a closed discrete orbit, then it is automatically transitive (since by \ref{tr2} coboundaries have only bounded orbits). \end{thm} As in \cite{flem}, we consider cylindrical transformations that display both transitive and discrete behaviour, called \emph{Besicovitch transformations} or \emph{Besicovitch cylinders}, because their first example was given in \cite{bes2}. Given a homeomorphism of the base, we will also call a cocycle which generates a Besicovitch cylinder a \emph{Besicovitch cocycle}. For brevity, we will also write `discrete' instead of `closed discrete'. By virtue of the condition \ref{tr1} and Theorem \ref{t1}, \emph{a cocycle is Besicovitch if and only if it has average zero and the resulting cylinder has a discrete orbit}. This characterisation will be used in our paper. Unfortunately, Besicovitch cylinders are not easy to find. 
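The parenthetical claim in Theorem~\ref{t1}, that coboundaries generate only bounded orbits, follows from a telescoping computation, which we record explicitly (a routine check from the definition of $f^{(n)}$): if $f = g - g\circ T$, then for $n>0$ \[ f^{(n)}(x) = \sum_{i=0}^{n-1}\left(g(T^i x) - g(T^{i+1}x)\right) = g(x) - g(T^n x), \] hence $\lvert f^{(n)}(x)\rvert \leq 2\sup_X \lvert g\rvert$ for all $n$ and $x$; the case $n<0$ is analogous.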
When $T$ is an irrational rotation of the circle, too regular cocycles yield no minimal sets at all, as has been proved by Matsumoto and Shishikuro, and, independently, by Mentzen and Siemaszko: \begin{thm}[{\cite[Theorem 1]{mash}, \cite[Theorem 2.4]{mesi}}] \label{bv-min} If the cocycle on \T\ is nontrivial and of bounded variation, then the cylinders which it generates have no minimal sets. In particular, they have no discrete orbits. \end{thm} Moreover, the set of Besicovitch cocycles for any minimal compact base is first category in the set of all cocycles with zero average -- for a proof, see Subsection \ref{ss1}. Given a cylinder $T_f$, we will denote \begin{align*} \mathcal D\df&\; \{x\in X\mathpunct{:} \text{ the $T_f$-orbit of }(x,t) \text{ is discrete for every } t\in\R\}\\ =&\; \{x\in X\mathpunct{:} \text{ the $T_f$-orbit of }(x,0) \text{ is discrete}\}. \end{align*} Then the set of points in \xr\ with discrete orbits equals $\mathcal D\times\R$. Clearly, if $T$ is minimal and $\mathcal D\neq\emptyset$, then both $\mathcal D$ and $\mathcal D\times\R$ are dense in ambient spaces, as $\mathcal D$ is $T$-invariant. In \cite{flem}, the authors construct a Besicovitch cocycle for any minimal rotation of a torus. They also find ones with some special properties, in particular with relatively large $\mathcal D$. \footnote{Recall also that the set of discrete orbits is of first category and of measure zero for minimal rotations of tori.} \begin{thm}[\cite{flem}] \label{fl1} For every irrational rotation of \T\ there exist Besicovitch cocycles \textup{(\cite[Section 2]{flem})}. The cocycles can be chosen in such a way that $\mathcal D$ is uncountable \textup{(\cite[Proposition 6]{flem})}. Moreover, for almost every irrational rotation one can find Besicovitch cocycles such that the Hausdorff dimension of $\mathcal D$ is at least $1/2$ \textup{(\cite[Theorem~9]{flem})}. 
\end{thm} It was left as an open problem whether the coefficient $1/2$ could be improved or not. We answer it by developing the techniques from \cite{flem}: for every irrational rotation of \T\ we have obtained Besicovitch cylinders with $\mathcal D$ of full Hausdorff dimension (Conclusion \ref{con}). This construction is presented in Section \ref{s2}. \subsection{Nonrecurrent cylinders are first category} \label{ss1} We aim to show that, given a uniquely ergodic homeomorphism of a compact metric space as the base, all cocycles admitting nonrecurrent orbits form a first category set in the space of zero-averaged cocycles (with the uniform topology). In particular, we will prove that Besicovitch cocycles are of first category. Note that a minimal rotation of a compact metric group is uniquely ergodic for the Haar measure. \begin{proof} Denote by $(X,\mu)$ the space, by $T$ a~uniquely ergodic homeomorphism thereof, and by $f\:X\too\R$ a cocycle. Also, $\lvert p - q\rvert $ will denote the distance between $p,\,q\in\xr$ in the taxicab metric. Recall that $p\in\xr$ is \emph{nonrecurrent} for \t\ if it is not recurrent, i.e. if its positive semi-orbit lies outside some neighbourhood of $p$: there exists $\e>0$ such that $\lvert p - \t^k(p)\rvert\geq \e$ for every $k>0$. Thus, all the functions in question are contained in the union (increasing as $\e\to 0$ or $n\to\infty$) $\bigcup_{\e>0} N_\e = \bigcup_{n=1}^\infty N_{1/n}$, where \begin{align*} N_\e \df \{f\:X\to\R\mathpunct{:}\ &f \text{ is continuous, }\int_X f\,\mathrm{d}\mu = 0,\\ &\lvert p-\t^k(p)\rvert\geq\e \text{ for some } p\in\xr \ \text{and all } k>0\} \end{align*} To finish the proof, we will show that every $N_\e$ is closed and has empty interior -- hence their union, by definition, is of first category. From now on, an $\e>0$ will be fixed. 
\paragraph{\textnormal{1}} The set $N_\e$ has empty interior because the set of coboundaries is dense (by the ergodic theorem for uniquely ergodic homeomorphisms), and the cylinders generated by coboundaries have only recurrent points, so all coboundaries lie outside $N_\e$. \paragraph{\textnormal{2}} To prove that $N_\e$ is closed, consider a uniformly convergent sequence $(f_j)_{j=1}^\infty\subset N_\e$, $f_j\rightrightarrows f$. Let $p_j\in\xr$ be chosen for $f_j$ as in the definition of $N_\e$. We may assume that all $p_j$ lie in $X\times\{0\}$, because the dynamic behaviour of a point with respect to \t\ does not depend on its second coordinate. Since $X$ is compact, $(p_j)$ has an accumulation point, say, $p_{j_n}\too p$. We will show that this point satisfies the condition from the definition of $N_\e$ for $f$. By the choice of $p_j$, the following holds for all $k>0$ and $n>0$: \begin{align*} \e &\leq \lvert p_{j_n} - \t[j_n]^k(p_{j_n})\rvert\\ &\leq \lvert p_{j_n} - p\rvert + \lvert p - \t^k(p)\rvert + \lvert\t^k(p) - \t^k(p_{j_n}) \rvert + \lvert\t^k(p_{j_n}) - \t[j_n]^k(p_{j_n}) \rvert. \end{align*} After passing to the limit as $n\to\infty$ all but the second of the summands vanish. Indeed, this is obvious for the first and the third one. As for the last summand, it follows from the convergence $f_j\rightrightarrows f$: one can easily check that the supremum distance between arbitrary $T_g^k$ and $T_{g'}^k$ equals the supremum distance $\|g^{(k)} - g'^{(k)}\|_{\sup}$, which is at most $k\|g - g'\|_{\sup}$, so $\lvert\t^k(p_{j_n}) - \t[j_n]^k(p_{j_n})\rvert \leq k\|f - f_{j_n}\|_{\sup} \to 0$. This finally proves that $\lvert p - \t^k(p)\rvert\geq \e$ for all $k>0$, and therefore $f\in N_\e$.
\end{proof} \subsubsection*{Remark} The proof remains valid for each Banach subspace $\mathcal F \subset \mathcal C (X)$, satisfying the ergodic theorem, whose norm is stronger than $\|\cdot\|_{\sup}$ and on which $T$ acts as an isometry (in particular, for the space of H\"older continuous functions and for $\mathcal{C}^k(\T^d) \subset \mathcal{C}(\T^d)$). \section{Construction of a Besicovitch cylinder} \label{s2} Let $\alpha$ be an irrational number in $[0,1)$ and $(p_n/q_n)_{n\geq 0}$ its sequence of convergents. Recall that then \begin{equation} \frac 1{2q_n q_{n+1}} < (-1)^n \left(\alpha-\frac{p_n}{q_n}\right) = \left\lvert\alpha-\frac{p_n}{q_n}\right\rvert < \frac 1{q_n q_{n+1}}; \label{e0} \end{equation} (by \cite{khi}, Theorems 9 and 13). Because $q_n\to\infty$, one can choose a subsequence $(\q)_{n\geq 1}$ that grows quickly enough: \begin{gather} \qqk[1] \geq 9, \label{e4}\\ \q[+1] \geq 5 \q, \ \qk[+1] \geq 5\qk, \label{e1}\\ \frac 1n \log\q \to \infty. \label{e3} \end{gather} We may also assume that \begin{equation} \text{all } k_n \text{ are even or all are odd.} \label{e17} \end{equation} For example, we can set $k_n\df 4 n^2 + 1$, because always $q_6 \geq \operatorname{Fib}_6 =13$, $\q[+1] \geq q_{4 + k_n} \geq \operatorname{Fib}_4 \q = 5 \q$ and the sequences $q_{4n^2 + 1} \geq \operatorname{Fib}_{4n^2 + 1}$ grow superexponentially. Put additionally \begin{equation} \A\df \lfloor (3/4)^n\qk \rfloor > (3/4)^n\qk -1 \text{\quad for } n\geq 1. 
\label{e2} \end{equation} It follows that for $n\geq 2$ on the one hand \begin{multline} \frac {\A\qk[-1]}{\A[-1]\qk} \stackrel{\eqref{e2}}>\frac {((3/4)^n\qk -1)\od\qk[-1]} {(3/4)^{n-1}\qk[-1]\qk} = \frac 34 - \frac{(4/3)^{n-1}}{\qk} \label{e5}\\ \stackrel{\eqref{e1}}\geq \frac 34 - \frac{(4/3)^{n-1}}{5^{n-2}\qqk[2]} \geq \frac 34 - \frac {4/3}{\qqk[2]} \stackrel{\eqref{e4},\, \eqref{e1}} > \frac{18}{25}, \end{multline} and on the other hand \begin{multline} \frac {\A[-1]\qk}{\A\qk[-1]} \stackrel{\eqref{e2}}>\frac {((3/4)^{n-1}\qk[-1]-1)\od \qk} {(3/4)^n\qk\qk[-1]} = \frac 43 - \frac{(4/3)^n}{\qk[-1]} \\ \stackrel{\eqref{e1}}\geq \frac 43 - \frac{(4/3)^n}{5^{n-2}\qqk[1]} \geq \frac 43 - \frac {16/9} {\qqk[1]} \stackrel{\eqref{e4}}> 1.1, \end{multline} hence altogether \begin{gather} 1.1 < \frac {\qk}{\A} : \frac{\qk[-1]}{\A[-1]} < 25/18. \label{e6} \intertext{This also proves that} \text{the sequence } \qk/\A \text{ rises exponentially.} \label{e11} \end{gather} For the sake of brevity, we will also denote \[ \L\df \q\qk/n^2, \quad \text{for } n\geq 1. \] We consider a modification of the example from \cite[Section 2]{flem}: we define $f_n$ to be \L-Lipschitz, $1/(\qa)$-pe\-ri\-od\-ic and even continuous function (hence also $f_n(\frac 1{\qa} - x) = f_n(x)$) by the formulas: \[ f_n(x)\df \begin{cases} \phantom{.}\ 0, &\text{for \ } 0\leq x \leq \dfrac 1{12\qa},\\ \phantom{\Bigg|}\ \L\left(x- \dfrac 1{12\qa} \right), &\text{for \ } \dfrac 1{12\qa} \leq x \leq \dfrac 5{12\qa},\\ \phantom{\Bigg|}\ \dfrac {\qk}{3\A n^2}, &\text{for \ } \dfrac 5{12\qa} \leq x \leq \dfrac 1{2\qa}. 
\end{cases} \] By periodicity: \begin{multline*} \lvert f_n(x+\alpha) - f_n(x)\rvert = \left\lvert f_n\left(x+\alpha - \frac{\A p_{k_n}} {\qa}\right) - f_n(x)\right\rvert \\ \stackrel{f_n \text{ is $\L$-Lipsch.}}\leq \L \left\lvert \alpha - \frac{p_{k_n}}{\q}\right\rvert \stackrel{\eqref{e0}}< \frac {\q\qk}{n^2} \frac 1{\q\qk} = 1/n^2, \end{multline*} so the series \begin{gather} \f(x) \df \sum_{l=1}^\infty (f_l(x+\alpha) - f_l(x))\nonumber \intertext{converges uniformly and yields a continuous cocycle of average zero. Moreover, it is easy to verify that for every $m\in\Z$:} \f^{(m)}(x) = \sum_{l=1}^\infty (f_l(x+m\alpha) - f_l(x)),\label{e8} \end{gather} where $\f^{(m)}(x)$ is the second coordinate of $T_\f^m (x,0)$ (we recall that $T_\f^m (x,t) = (T^m(x), t+ \f^{(m)}(x))$ for every $x$ and $t$). \section{Discrete orbits} Consider, for $n\geq 1$ and $j=0,\dotsc,\qa -1$: \begin{align*} &F^{++}_{n,j} \df \left[-\frac 1{12\qa} , \frac 1{12\qa}\right] + \frac j\qa, \displaybreak[0]\\ &F^{-+}_{n,j} \df \left[\frac 1{6\qa} , \frac 1{3\qa}\right] + \frac j\qa,\displaybreak[0]\\ &F^{--}_{n,j} \df \left[\frac 5{12\qa} , \frac 7{12\qa}\right] + \frac j\qa = F^{++}_{n,j} + \frac 1{2\qa},\displaybreak[0]\\ &F^{+-}_{n,j} \df \left[\frac 2{3\qa} , \frac 5{6\qa}\right] + \frac j\qa = F^{-+}_{n,j} + \frac 1{2\qa} \end{align*} and for arbitrary $s_-, s_+\in\{+,-\}$ \[ F^{s_- s_+} \df \bigcap_{n=1}^\infty \bigcup_{j=0}^{\qa -1} F^{s_- s_+}_{n,j}. 
\] The sets $F^{s_- s_+}$ are nonempty and uncountable; indeed, every interval $F^{s_- s_+}_{n-1,j}$ contains at least \begin{multline} \left\lfloor\frac{\lvert F^{s_- s_+}_{n-1,j} \rvert}{1/(\qa)}\right\rfloor - 1 = \left\lfloor\frac \qa{6\qa[-1]}\right\rfloor - 1 \\= \left\lfloor\frac 16 \frac {\A\qk[-1]} {\A[-1]\qk} \cdot \frac{\qk}{\qk[-1]} \cdot \frac{\q}{\q[-1]}\right\rfloor - 1 \stackrel{\eqref{e5},\, \eqref{e1}}> \lfloor \frac 16 \cdot \frac{18}{25} \cdot 25\rfloor - 1 = 2 \label{e18} \end{multline} of the intervals $F^{s_- s_+}_{n,j}$, since the intervals from the $n$-th union are uniformly distributed with period $1/(\qa)$; therefore, the intersections $F^{s_- s_+}$ are topological Cantor sets. We will now show that the products $F^{s_-s_+}\times\R$ consist of \emph{discrete points}, i.e.\ points with discrete orbits, which proves that $T_\f$ is a Besicovitch cylinder. More precisely, we will show that for every $x\in F^{s_- s_+}$: \begin{itemize} \item if $s_+ = s_-$, then \[ \f^{(m)}(x)\xrightarrow{m\to \pm\infty} s_+\,\infty, \] \item if $s_+ \neq s_-$, then \[ \f^{(m)}(x)\xrightarrow{m\to \pm\infty} (-1)^{k_n} s_{\pm}\,\infty, \] where the coefficient $(-1)^{k_n}$ is constant (cf.\ \eqref{e17}). \end{itemize} Later, in the next section, we will verify that these sets are of full Hausdorff dimension. \subsection{The case of \texorpdfstring{$F^{++}$}{F++}} Fix an element $x\in F^{++}$ and an integer $|m| > \qqk[1]/(3A_1)$. We wish to bound the summands $f_l(x+m\alpha) -f_l(x)$ from below. To this end, recall that $x$ determines a sequence $(j_l)^\infty_{l=1}$ such that $x\in F^{++}_{l,j_l}$ for every $l\in\N$ and let $x_l$ be given by $x_l \df x - j_l/(\qqa)$; then $|x_l| \leq 1/(12\qqa)$. 
Now, by the properties of $f_l$ \begin{multline} f_l(x+m\alpha) - f_l(x) = f_l(x_l+m\alpha) - f_l(x_l) = f_l(x_l+m\alpha) \\ = f_l\left(x_l+m\alpha - \frac{m A_l p_{k_l}}{\qqa}\right) = f_l\left(x_l+m\left(\alpha - \frac{p_{k_l}}{\qq}\right)\right), \label{e7} \end{multline} which implies that \begin{gather} \label{e10} f_l(x+m\alpha) - f_l(x)\geq 0. \end{gather} Because of \eqref{e11}, there exists a unique $n = n(m)$ which satisfies \[ \qk[-1]/(2\A[-1])\leq |m|< \qk/(2\A), \] and when $|m|$ tends to infinity, so does $n(m)$. This choice of $n$ enables us to estimate the $n$-th summand of $\f^{(m)}$: \begin{align*} &\bullet\ \left\lvert m\left(\alpha - \frac{p_{k_n}}{\q}\right) \right\rvert \stackrel{\eqref{e0}}< \frac \qk{2\A}\cdot \frac 1{\q\qk} = \frac 1{2\qa},\\ &\bullet\ \left\lvert m\left(\alpha - \frac{p_{k_n}}{\q}\right) \right\rvert \stackrel{\eqref{e0}}> \frac {\qk[-1]}{2\A[-1]} \cdot \frac 1{2\q\qk} = \frac 1{4\qa} \cdot \frac {\qk[-1]}{\A[-1]} \cdot \frac\A\qk\\ &\phantom{\bullet\ m\left(\alpha - \frac{p_{k_n}}{\q}\right)> \frac {\qk[-1]}{2\A[-1]} \cdot \frac 1{2\q\qk} =\quad } \stackrel{\eqref{e5}}> \frac 1{4\qa} \cdot \frac{18}{25} = \frac 9{50\qa}. \end{align*} Therefore, owing to the bound for $x_n$, \[ \cramped {\frac 1{12} + \frac 1{75} = \frac 9{50} - \frac 1{12} < \left\lvert\!\left(x_n + m\alpha - \frac{mp_{k_n}}{\q}\right) \! \qa \!\right\rvert < \frac 1{12} } + \frac 12 < 1 - \frac 1{12} - \frac 1{75}. \] This leads to the bound we seek, since $f_n$ is even, symmetrical and unimodal on $[0, 1/(\qa)]$: \begin{multline*} f_n\left( x_n+m\left(\alpha - \frac{p_{k_l}}{\qq}\right)\!
\right) = f_n\left(\left\lvert x_n+m\left(\alpha - \frac{p_{k_l}}{\qq}\right)\!\right\rvert \right) \\ > f_n\left(\frac 1{12\qa} + \frac 1{75\qa}\right) = L_n \cdot \frac 1{75\qa} = \frac \qk{75\A n^2} \xrightarrow{\eqref{e11}} \infty, \end{multline*} and finally proves the required divergence: \begin{multline*} \f^{(m)}(x) \stackrel{\eqref{e8}} = \sum_{l=1}^\infty (f_l(x+m\alpha) - f_l(x)) \stackrel{\eqref{e10}}\geq f_{n(m)}(x+m\alpha) - f_{n(m)}(x)\\ \stackrel{\eqref{e7}}= f_n\left(x_n+m\left(\alpha - \frac{p_{k_n}}{\q}\right)\!\right) \xrightarrow{|m|\to\infty} \infty. \end{multline*} \subsection{The case of \texorpdfstring{$F^{--}$}{F--}} The behaviour of functions $f_n$ on the set $F^{--}_{n,j}$ is symmetrical to the situation on $F^{++}_{n,j}$, and the calculations are analogous. \subsection{The case of \texorpdfstring{$F^{-+}$}{F-+} and \texorpdfstring{$F^{+-}$}{F+-}} Choose an $x\in F^{-+} \cup F^{+-}$. Again, there is $(j_l)^\infty_{l=1}$ such that $x\in F^{-+}_{l,j_l} \cup F^{+-}_{l,j_l}$ for every $l\in\N$, and we denote by $x_l$ the respective ``reductions'' $x - j_l/(\qqa)$; then \[ x_l \in \left[\frac 1{6\qqa}, \frac 1{3\qqa}\right]\ (s_+=+) \text{\quad or\quad} x_l\in \left[\frac 2{3\qqa}, \frac 5{6\qqa}\right]\ (s_+=-). \] Additionally, fix an integer $|m| > \qqk[1]/(12 A_1)$. It follows from the periodicity of $f_l$ that \begin{multline} f_l(x+m\alpha) - f_l(x) = f_l(x_l + m\alpha) - f_l(x_l)\\ = f_l(x_l + m(\alpha - p_{k_l}/\qq)) - f_l (x_l). \label{e9} \end{multline} We remind that $\operatorname{sign} (\alpha - p_{k_l}/\qq) \stackrel{\eqref{e0}}= (-1)^{k_l} \stackrel{\eqref{e17}}= (-1)^{k_1}$. Take now $n=n(m)\geq 1$ for which \begin{equation} \qk/(12\A)\leq |m|< \qk[+1]/(12\A[+1]). 
\label{e15} \end{equation} These constraints along with the inequalities \eqref{e0} imply that for $l>n$ \begin{align*} \left\lvert m\left(\alpha - \frac{p_{k_l}}{\qq}\right)\right\rvert \stackrel{\eqref{e0}}< \frac {\qk[+1]}{12\A[+1]} \cdot \frac 1{\qq\qqk} \stackrel{\eqref{e6}}\leq \frac {\qqk}{12A_l} \cdot \frac 1{\qq\qqk} = \frac 1{12\qqa}. \end{align*} Therefore, both arguments $x_l + m(\alpha - p_{k_l}/\qq)$ and $x_l$ lie in the same interval of linearity (and monotonicity) of $f_l$, so the sign of the difference \eqref{e9} equals $(-1)^{k_1} s_+\operatorname{sign} m$ (it does not depend on $l$) and the expression \eqref{e9} can be estimated: \begin{multline*} \left\lvert f_l\left(x_l + m\left(\alpha - \frac{p_{k_l}}{\qq}\right)\right) - f_l(x_l) \right \rvert = L_l \left\lvert m\left(\alpha - \frac{p_{k_l}}{\qq}\right)\right\rvert\\ \stackrel{\eqref{e0},\, \eqref{e15}}> \frac {\qq\qqk}{l^2} \cdot \frac {\qk}{12\A} \cdot \frac 1{2\qq\qqk} = \frac {\qk}{24\A l^2}. \end{multline*} Since all these differences are of the same sign, this yields an estimate for the part of the sum \eqref{e8} with $l>n$: \begin{equation} \cramped{ \left\lvert \sum_{l>n} \left(f_l\!\left(x_l + m\!\left(\alpha - \frac{p_{k_l}}{\qq}\right)\! \right) - f_l(x_l)\right) \right\rvert > \frac {\qk}{24\A} \sum_{l>n} \frac1{l^2} \stackrel{(\star)}> \frac {\qk}{25\A n}, } \label{e12} \end{equation} where the inequality $(\star)$ holds for $n$ large enough, which results from the fact that the remainder $\sum_{l>n} 1/l^2$ is asymptotically equivalent to $1/n$ (thus greater than $24/(25n)$ for large $n$).
\footnote{This follows from the termwise equivalence to a telescoping series of $1/n$:\\ \phantom{.}\hfill\( \sum_{l\geq n+1} \frac 1{l^2} - \sum_{l\geq n+2} \frac 1{l^2} = 1/(n+1)^2 \approx (1/n) - 1/(n+1), \)\hfill\hfill \\ and from an analogue of the Stolz-Ces\`aro Theorem.} As it turns out, we do not have to work hard to take the remaining summands into account -- it suffices to subtract the upper bounds of the functions $f_l$: \begin{equation} \left\lvert \sum_{l\leq n} (f_l(x_l + m\alpha) - f_l(x_l)) \right\rvert \leq \sum_{l\leq n} 2\max_{x\in\T} f_l = \frac 23\sum_{l\leq n} \frac {\qqk}{A_l l^2} \label{e14} \end{equation} Note that this sum behaves roughly like the sum of a finite geometric series: since $\qqk/A_l$ grows exponentially and, asymptotically, $l^2$ grows slower, the quotient for large $l$ also grows exponentially, say: \[ \frac {\qqk}{A_l l^2} \geq C \frac {\qqk[l-1]}{A_{l-1} (l-1)^2} \text{\quad for some } C>1 \text{ and } l \text{ large enough} \] (e.g.\ when $l^2/(l-1)^2 < 1.1/C$). Then, indeed, the sum \eqref{e14} is of the order of its largest term, and therefore we arrive at a satisfactory bound: \begin{equation} \frac 23 \sum_{l\leq n}\frac{\qqk}{A_l l^2} \leq\frac 23 \sum_{l\leq n}\frac 1{C^{n-l}} \cdot\frac{\qk}{\A n^2} < \frac 23 \cdot \frac C{C-1}\cdot \frac{\qk}{\A n^2} \stackrel{(\star\star)}< \frac{\qk}{50 \A n}, \label{e13} \end{equation} where the inequality $(\star\star)$ also holds for large $n$.
Combining the estimations \eqref{e12}, \eqref{e14} and \eqref{e13}, we eventually obtain the required divergence: \begin{multline*} |\f^{(m)}(x)| \stackrel{\eqref{e8},\, \eqref{e9}}{=\!=} \left\lvert \sum_{l\geq 1} \left(f_l\left(x_l + m\left(\alpha - {p_{k_l}}/{\qq}\right)\!\right) - f_l(x_l)\right) \right\rvert\\ \geq\left\lvert \sum_{l>n(m)} \cdots \right\rvert - \left\lvert \sum_{l\leq n(m)} \cdots \right\rvert \stackrel{\text{\scriptsize (\ref{e12},\ref{e14},\ref{e13})}}> \frac{\qk}{25 \A n} - \frac{\qk}{50 \A n} = \frac{\qk}{50 \A n} \mathop{\xrightarrow{m\to\pm\infty}}\limits_{\eqref{e11}} \infty. \end{multline*} Also, the sign of $\f^{(m)}$ is correct, because the prevailing part has the correct sign. \begin{rem} Observe that the calculations for $F^{+-}$ and $F^{-+}$ (in this and the previous section) do not require all the assumptions on $k_n$ and \A\ that we have made initially. Actually, we only need that $k_n$ are all of the same parity, $\qk/\A$ grows at least geometrically, and \( \qa \geq 18 \qa[-1]. \) In particular, the restriction on the growth of $\qk/\A$ (as in \eqref{e5}) is redundant -- for example, we may put $\A\df 1$ for every $n$ (then we have to ensure the inequality $\q\geq 18\q[-1]$). Moreover, we do not use the pieces of constant value of the functions $f_n$. Summarizing, the sets $F^{+-}$ and $F^{-+}$ also consist of discrete points in the following example from \cite[Section 2]{flem}: \begin{gather} \f(x) \df \sum_{n=1}^\infty (g_n(x+\alpha) - g_n(x)) \nonumber\\ \intertext{where $g_n$ are \L-Lipschitz, $1/\q$-pe\-ri\-od\-ic continuous functions:} g_n(x)\df \begin{cases} \L x, &\text{for } 0\leq x \leq \dfrac 1{2\q},\\ \L\left( \dfrac 1\q - x \right), &\text{for } \dfrac 1{2\q} \leq x \leq \dfrac 1\q. \end{cases} \end{gather} and $\q\geq 18\q[-1]$ (this coefficient can be decreased by widening $F^{s_-s_+}_{n,j}$ appropriately). 
\end{rem} \section{Hausdorff dimension of \texorpdfstring{$F^{s_- s_+}$}{F s-s+}} To compute the Hausdorff dimension of $F^{s_- s_+}$, we will use methods from \cite{fal} (Example 4.6 and Proposition 4.1):\\ {\itshape Consider a sequence of unions of a finite number of disjoint closed intervals in $[\n]0,1\n()$ \emph{(here: the sequence $(\bigcup_{j=0}^{\qa -1} F^{s_- s_+}_{n,j}) _{n\geq 1}$)}. Suppose that the intervals of the $n$-th union ($n\geq 1$) \begin{itemize} \item are of length at most $\delta_n$ and $\delta_n\to 0$, \item are separated by gaps of length at least $\e_n$ (with $\e_n> \e_{n+1}>0$), \item contain at least $m_{n+1}\geq 2$ and at most $\overline m_{n+1}$ intervals of the $(n+1)$-st union. \end{itemize} Then the Hausdorff dimension of the intersection of this sequence lies between the following two numbers: \[ \liminf _{n\to\infty} \frac {\log (m_2\cdots m_{n})}{-\log(m_{n+1}\e_{n+1})} \leq \liminf _{n\to\infty} \frac {\log (\overline m_2\cdots \overline m_{n})}{-\log \delta_{n+1}}. \] } First, note that $\delta_n = |F^{s_- s_+}_{n,j}| = 1/(6\qa)\to 0$. Next, observe that \[ \e_n = \frac 1{\qa} - |F^{s_- s_+}_{n,j}| > \frac 1{\qa} - \frac 1{6\qa} > \frac 1{2\qa}. \] As for $m_n$ and $\overline m_n$, we have already checked that $m_n\geq 2$ (see \eqref{e18}), but we need a more precise estimate. Using the inequality $\lfloor t \rfloor -1 > t/2$ for $t\geq 3$, we conclude that: \[ m_n \geq \left\lfloor\frac {|F_{n-1,j}^{s_- s_+}|}{1/\qa}\right\rfloor - 1 = \left\lfloor\frac \qa {6\qa[-1]}\right\rfloor - 1 \geq \frac \qa {12\qa[-1]}. \] On the other hand, at most one more interval can fit in: \[ \overline m_n \leq \left\lfloor\frac {|F_{n-1,j}^{s_- s_+}|}{1/\qa}\right\rfloor \leq \frac \qa {6\qa[-1]}. 
\] Consequently: \begin{align*} m_2 \cdots m_n &\geq \frac {\qqa[2]}{12\qqa[1]} \cdots \frac \qa{12\qa[-1]} = \frac \qa{12^{n-1}\qqa[1]},\\ m_{n+1}\e_{n+1} &\geq \frac 1{12} \frac{\qa[+1]}\qa \cdot \frac 1{2\qa[+1]} = \frac 1{24\qa},\\ \overline m_2 \cdots \overline m_n &\leq \frac \qa{6^{n-1}\qqa[1]}, \end{align*} hence eventually \begin{align*} \dim_H F^{s_-s_+} &\geq \liminf _{n\to\infty} \frac{\log \qa - (n-1)\log 12 - \log\qqa[1]} {\log\qa + \log 24}\\ &= 1 - \limsup_{n\to\infty} \frac n{\log\qa}\cdot \log 12,\\ \dim_H F^{s_-s_+} &\leq \liminf _{n\to\infty} \frac{\log \qa - (n-1)\log 6 - \log\qqa[1]} {\log\qa + \log 6} \\ &= 1 - \limsup_{n\to\infty} \frac n{\log\qa}\cdot \log 6. \end{align*} Let us remark that the coefficient $12$ can be lowered nearly to $6$, if $m_n$ are larger. Nevertheless, under the assumption \eqref{e3} the dimension equals $1$. \begin{con} \label{con} For every irrational rotation of \T\ there exists a Besicovitch cocycle such that the set $\mathcal{D}\times \R\ (\supset F^{s_-s_+}\times \R)$ of discrete points of the respective cylinder has Hausdorff dimension two. \end{con} \section{Discrete Devaney chaos} The sole property of transitivity is enough for some dynamicists to call a dynamical system chaotic. However, over the years multiple definitions for chaos have been proposed. Let us recall the notion of Devaney chaos, one of the most popular ones: a dynamical system $(X,T)$ on a metric space $(X,d)$ is \emph{chaotic in the sense of Devaney} if: \begin{enumerate} \item it is transitive, \item the set of periodic points is dense, \item the system is \emph{sensitive}, i.e.\ there are points around every point $x\in X$ (arbitrarily close) whose orbits at least once diverge far enough from the orbit of $x$: there is $\e>0$ such that for every $x\in X$ and $\delta>0$ there are $n>0$ and $y$ with $d(x,y)< \delta$ and $d(T^n(x), T^n(y))> \e$. 
\end{enumerate} We recall that the last condition follows from the remaining ones, if $X$ is infinite (\cite{bbcd}, main theorem, or \cite{gw}, Corollary 1.4). It turns out that the dynamical systems we consider in this article satisfy a slightly more general condition, namely, with ``periodic orbits'' replaced by ``discrete orbits'' (note that both notions are equivalent in compact spaces). We will call this property \emph{discrete Devaney chaos} and check this fact in a moment. A similar generalization was proposed in \cite{gw}, with ``almost periodic'' (that is, contained in a minimal set) instead of ``periodic'' and it was shown that this, combined with transitivity, implies sensitivity, if $X$ is compact. Note also that there are no periodic points in cylinders over minimal rotations, so they cannot be Devaney chaotic. Recall first that a space or a set is \emph{boundedly compact} if bounded closed subsets are always compact. \footnote{Such spaces are also given other names in the literature: they are called \emph{proper}, \emph{finitely compact}, \emph{totally complete}, \emph{Heine-Borel} or having the \emph{Heine-Borel property} (not to be confused with the Heine-Borel [covering] property, or precompactness, that is, ``every open cover has a finite subcover'').} In particular, closed subsets of Euclidean spaces are boundedly compact. All such spaces are complete and separable. Also, a system is called \emph{maximally sensitive}, if it is sensitive with every~\mbox{$\e<\operatorname{diam}(X)/2$}, and \emph{maximally chaotic}, if it is Devaney chaotic and maximally sensitive (definitions introduced in \cite{alp}). \begin{thm} Let $X$ be an infinite, boundedly compact space without isolated points, and let $T$ be transitive with a dense set of discrete points. Then the system is sensitive. If, moreover, the set of discrete nonperiodic points is dense, then the system is maximally sensitive. 
\end{thm} \begin{proof} Since $X$ is complete, separable and without isolated points, the system is even positively transitive (it has a dense semi-orbit -- see \cite{ox}, p.\ 70; the proof was recalled in \cite{dy1}, Proposition 2.1). The set of discrete points consists of periodic points and nonperiodic discrete points, both of which are invariant. Thus, one of these sets contains a positively transitive point in its closure, and so it is dense. If periodic points are dense, then, by \cite{bbcd} or \cite{gw}, the system is sensitive. The new result is the case when the second set is dense, which we assume henceforth. Any infinite (= nonperiodic) discrete orbit, by bounded compactness, has no bounded subsequence, so $\operatorname{diam}(X) = \infty$. Now fix any $x\in X$, $\e>0$ and $\delta>0$. In the $\delta$-neighbourhood of $x$ there is a point $y_1$ with a dense semi-orbit and a discrete nonperiodic point $y_2$. Then, for infinitely many $n>0$ the orbit of $y_1$ returns to $x$: $d(T^n(y_1),x)<\e$, and on the other hand, for $n$ large enough the orbit of $y_2$ stays far away from $x$: $d(T^n(y_2),x)>3\e$ (by bounded compactness again). Consequently, for some $n>0$: $d(T^n(y_1), T^n(y_2))>2\e$, and hence $d(T^n(x), T^n(y_1))>\e$ or $d(T^n(x), T^n(y_2))>\e$. \end{proof} \begin{rem} The Besicovitch cylinders that we consider are of course transitive and have a dense set of discrete points (we have found discrete points in $F^{s_-s_+}\times\R$, but their orbits are dense). Therefore, there are examples of maximally discretely chaotic systems with a full-dimensional set of relatively ``regular'' (almost periodic, discrete) points. This feature seems not to have been studied so far. However, there are results about full Hausdorff dimension of the set of points with nondense orbits, although they are rather concerned with bounded orbits -- see e.g.\ \cite{kl, ur}. \end{rem}
(* Title: Aodv_Message.thy License: BSD 2-Clause. See LICENSE. Author: Timothy Bourke, Inria *) section "AODV protocol messages" theory Aodv_Message imports Aodv_Basic begin datatype msg = Rreq nat rreqid ip sqn k ip sqn ip | Rrep nat ip sqn ip ip | Rerr "ip \<rightharpoonup> sqn" ip | Newpkt data ip | Pkt data ip ip instantiation msg :: msg begin definition newpkt_def [simp]: "newpkt \<equiv> \<lambda>(d, dip). Newpkt d dip" definition eq_newpkt_def: "eq_newpkt m \<equiv> case m of Newpkt d dip \<Rightarrow> True | _ \<Rightarrow> False" instance by intro_classes (simp add: eq_newpkt_def) end text \<open>The @{type msg} type models the different messages used within AODV. The instantiation as a @{class msg} is a technicality due to the special treatment of @{term newpkt} messages in the AWN SOS rules. This use of classes allows a clean separation of the AWN-specific definitions and these AODV-specific definitions.\<close> definition rreq :: "nat \<times> rreqid \<times> ip \<times> sqn \<times> k \<times> ip \<times> sqn \<times> ip \<Rightarrow> msg" where "rreq \<equiv> \<lambda>(hops, rreqid, dip, dsn, dsk, oip, osn, sip). Rreq hops rreqid dip dsn dsk oip osn sip" lemma rreq_simp [simp]: "rreq(hops, rreqid, dip, dsn, dsk, oip, osn, sip) = Rreq hops rreqid dip dsn dsk oip osn sip" unfolding rreq_def by simp definition rrep :: "nat \<times> ip \<times> sqn \<times> ip \<times> ip \<Rightarrow> msg" where "rrep \<equiv> \<lambda>(hops, dip, dsn, oip, sip). Rrep hops dip dsn oip sip" lemma rrep_simp [simp]: "rrep(hops, dip, dsn, oip, sip) = Rrep hops dip dsn oip sip" unfolding rrep_def by simp definition rerr :: "(ip \<rightharpoonup> sqn) \<times> ip \<Rightarrow> msg" where "rerr \<equiv> \<lambda>(dests, sip). 
Rerr dests sip" lemma rerr_simp [simp]: "rerr(dests, sip) = Rerr dests sip" unfolding rerr_def by simp lemma not_eq_newpkt_rreq [simp]: "\<not>eq_newpkt (Rreq hops rreqid dip dsn dsk oip osn sip)" unfolding eq_newpkt_def by simp lemma not_eq_newpkt_rrep [simp]: "\<not>eq_newpkt (Rrep hops dip dsn oip sip)" unfolding eq_newpkt_def by simp lemma not_eq_newpkt_rerr [simp]: "\<not>eq_newpkt (Rerr dests sip)" unfolding eq_newpkt_def by simp lemma not_eq_newpkt_pkt [simp]: "\<not>eq_newpkt (Pkt d dip sip)" unfolding eq_newpkt_def by simp definition pkt :: "data \<times> ip \<times> ip \<Rightarrow> msg" where "pkt \<equiv> \<lambda>(d, dip, sip). Pkt d dip sip" lemma pkt_simp [simp]: "pkt(d, dip, sip) = Pkt d dip sip" unfolding pkt_def by simp end
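As a rough cross-language illustration (not part of the Isabelle development; the abstract types `ip`, `sqn` and `data` are modelled here as plain integers and strings), the `msg` datatype and the `eq_newpkt` test can be mirrored as immutable tagged records:

```python
from dataclasses import dataclass

# Hypothetical Python mirror of two of the five `msg` constructors.
@dataclass(frozen=True)
class Newpkt:
    d: str     # data payload
    dip: int   # destination ip

@dataclass(frozen=True)
class Pkt:
    d: str
    dip: int
    sip: int   # source ip

def eq_newpkt(m) -> bool:
    # Mirrors: eq_newpkt m = (case m of Newpkt _ _ => True | _ => False)
    return isinstance(m, Newpkt)

print(eq_newpkt(Newpkt("payload", 7)))  # True
print(eq_newpkt(Pkt("payload", 7, 3)))  # False
```

As in the Isabelle text, `eq_newpkt` only inspects the constructor, not the fields.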
TITLE: Prove that equality occurs if and only if x = y QUESTION [1 upvotes]: Prove the arithmetic geometric mean inequality. That is, for two positive real numbers x,y we have sqrt(xy) is less than or equal to (x+y)/2. Furthermore, equality occurs if and only if x = y. I have proved the first part but I was wondering if someone could show me how to prove the second part. I can show how I proved the first part... if that is needed? REPLY [3 votes]: So you need to show $\sqrt{xy}=\frac{x+y}{2}\iff x=y$ proof of $\implies$: $\sqrt{xy}=\frac{x+y}{2}$. Squaring both sides gives $xy=\frac{(x+y)^2}{4}.$ After simplification, we have $x^2-2xy+y^2=0$ which factors into $(x-y)^2=0$. From this we see that $x=y$. proof of $\impliedby:$ Since $\sqrt{x^2}=x$, and $x$ can be written as $x=\frac{x+x}{2}$, then we have $\sqrt{xx}=\frac{x+x}{2}$. Since by assumption $x=y$, then we have that $\sqrt{xy}=\frac{x+y}{2}.\quad\quad\square$
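A quick numeric sanity check of both directions (not part of the original answer), using the identity $(x+y)/2-\sqrt{xy}=(\sqrt x-\sqrt y)^2/2$, which is nonnegative and vanishes exactly when $x=y$:

```python
import math

# gap(x, y) = (x+y)/2 - sqrt(x*y); AM-GM says gap >= 0, with gap == 0 iff x == y.
def gap(x: float, y: float) -> float:
    return (x + y) / 2 - math.sqrt(x * y)

print(gap(4, 9))  # 0.5 -> strict inequality when x != y
print(gap(7, 7))  # 0.0 -> equality when x == y
```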
TITLE: Notation for sets of unordered pairs QUESTION [3 upvotes]: Let $A$ be a finite set of unordered pairs, e.g., $$A = \{\{1, 2\}, \{1, 3\}, \{2, 3\}\} \enspace .$$ Which of the following is proper notation for "the element $\{1, 2\}$ belongs to $A$"? $\{1, 2\} \in A$ $\{1, 2\} \subsetneq A$ $\{\{1, 2\}\} \subsetneq A$ The second option makes no sense at all, but would the first and the third be equally appropriate? REPLY [0 votes]: Your first answer is correct. If an element (a single element or a set of elements) belongs to a set, it is represented using the element of operator; therefore: $$\{1, 2\} \in A$$ You would use the element of operator. Also, option 3 is correct because that element is a set itself, and it is a subset of the finite set $A$. So, also $$\{\{1, 2\}\} \subsetneq A$$ The first and third are therefore equally appropriate.
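The element-vs-subset distinction can also be checked in a language with set types; here is a small illustration (Python, with the inner pairs as `frozenset`s, since mutable sets cannot be elements of another set):

```python
# A = {{1,2}, {1,3}, {2,3}}, with unordered pairs modelled as frozensets.
A = {frozenset({1, 2}), frozenset({1, 3}), frozenset({2, 3})}
pair = frozenset({1, 2})

print(pair in A)   # True:  {1,2} is an *element* of A
print({pair} < A)  # True:  {{1,2}} is a *proper subset* of A (< is proper subset)
print(pair < A)    # False: {1,2} is not a subset of A (1 and 2 are not elements of A)
```

This matches the answer: option 1 is membership, option 3 is a proper subset, and the second option fails.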
\begin{document} \baselineskip=15pt \title[Chern classes of parabolic vector bundles]{A construction of Chern classes of parabolic vector bundles} \author[I. Biswas]{Indranil Biswas} \address{School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Bombay 400005, India} \email{indranil@math.tifr.res.in} \author[A. Dhillon]{Ajneet Dhillon} \address{Department of Mathematics, University of Western Ontario, London, Ontario N6A 5B7, Canada} \email{adhill3@uwo.ca} \subjclass[2000]{14F05, 14C15} \keywords{Parabolic bundle, Chern class, equivariant Chow group, ramified bundle} \date{} \begin{abstract} Given a parabolic vector bundle, we construct for it a projectivization and a tautological line bundle. These are analogs of the projectivization and the tautological line bundle for a usual vector bundle. Using these we give a construction of the parabolic Chern classes. \end{abstract} \maketitle \section{Introduction} Parabolic vector bundles on a smooth complex projective curve were introduced in \cite{Se}. In \cite{MY}, Maruyama and Yokogawa introduced parabolic vector bundles on higher dimensional complex projective varieties. The notion of Chern classes of a vector bundle extends to the context of parabolic vector bundles \cite{Bi-c}, \cite{IS}, \cite{Ta}. Take a vector bundle $V$ of rank $r$ on a variety $Z$. Let $\psi\, :\, {\mathbb P}(V)\,\longrightarrow\, Z$ be the projective bundle parametrizing hyperplanes in the fibers of the vector bundle $V$. The tautological line bundle on ${\mathbb P}(V)$ will be denoted by ${\mathcal O}_{{\mathbb P}(V)}(1)$. One of the standard ways of constructing Chern classes of $V$ is to use the identity $$ \sum_{i=0}^r (-1)^i c_1({\mathcal O}_{{\mathbb P}(V)}(1))^{r-i} \psi^*c_i(V)\,=\, 0 $$ with $c_0(V)\,=\, 1$. Our aim here is to give a construction of Chern classes of parabolic vector bundles along this line (see Theorem \ref{thm1}). N. 
Borne showed that parabolic vector bundles can be understood as vector bundles on root-stacks \cite{Bo1}, \cite{Bo2}. In terms of this correspondence, the $i$-th parabolic Chern class of a parabolic vector bundle is the usual $i$-th Chern class of the corresponding vector bundle on root-stack. It should be mentioned that this elegant correspondence in \cite{Bo1}, \cite{Bo2} between parabolic vector bundles and vector bundles on root-stacks is turning out to be very useful (see, for example, \cite{BD} for an application of this correspondence). \section{Preliminaries} \subsection{Parabolic vector bundles} Let $X$ be an irreducible smooth projective variety defined over $\mathbb C$. Let $D\, \subset\, X$ be an effective reduced divisor satisfying the condition that each irreducible component of $D$ is smooth, and the irreducible components of $D$ intersect transversally; divisors satisfying these conditions are called simple normal crossing ones. Let \begin{equation}\label{decomp.-D} D\, =\, \sum_{i=1}^\ell D_i \end{equation} be the decomposition of $D$ into irreducible components. Let $E_0$ be an algebraic vector bundle over $X$. For each $i\,\in\, [1\, , \ell]$, let \begin{equation}\label{divisor-filt.} E_0\vert_{D_i}\, =\, F^i_1 \,\supsetneq\, F^i_2 \,\supsetneq\, \cdots \,\supsetneq\, F^i_{m_i} \,\supsetneq\, F^i_{m_i+1}\, =\, 0 \end{equation} be a filtration by subbundles of the restriction of $E_0$ to $D_i$. A \textit{quasiparabolic} structure on $E_0$ over $D$ is a filtration as above of each $E_0\vert_{D_i}$ such that the system of filtrations is locally abelian (see \cite[p. 157, Definition 2.4.19]{Bo2} for the definition of a locally abelian structure). For a quasiparabolic structure as above, \textit{parabolic weights} are a collection of rational numbers $$ 0\, \leq\, \lambda^i_1\, < \, \lambda^i_2\, < \, \lambda^i_3 \, < \,\cdots \, < \, \lambda^i_{m_i} \, <\, 1\, , $$ where $i\,\in\, [1\, ,\ell]$. 
The parabolic weight $\lambda^i_j$ corresponds to the subbundle $F^i_j$ in \eqref{divisor-filt.}. A \textit{parabolic structure} on $E_0$ is a quasiparabolic structure on $E_0$ (defined as above) equipped with parabolic weights. A vector bundle over $X$ equipped with a parabolic structure on it is also called a \textit{parabolic vector bundle}. (See \cite{MY}, \cite{Se}.) For notational convenience, a parabolic vector bundle defined as above will be denoted by $E_*$. The divisor $D$ is called the \textit{parabolic divisor} for $E_*$. We fix $D$ once and for all. So the parabolic divisor of all parabolic vector bundles on $X$ will be $D$. The definitions of direct sum, tensor product and dual of vector bundles extend naturally to parabolic vector bundles; similarly, symmetric and exterior powers of parabolic vector bundles are also constructed (see \cite{MY}, \cite{Bi2}, \cite{Yo}). \subsection{Ramified principal bundles} The complement of $D$ in $X$ will be denoted by $X-D$. Let $$ \varphi\, :\, E_{\text{GL}(r, {\mathbb C})}\, \longrightarrow\, X $$ be a ramified principal $\text{GL}(r, {\mathbb C})$--bundle with ramification over $D$ (see \cite{BBN}, \cite{Bi2}, \cite{Bi3}). We briefly recall its defining properties. 
The total space $E_{\text{GL}(r, {\mathbb C})}$ is a smooth complex variety equipped with an algebraic right action of $\text{GL}(r, {\mathbb C})$ \begin{equation}\label{f} f\, :\,E_{\text{GL}(r, {\mathbb C})}\times \text{GL}(r, {\mathbb C})\, \longrightarrow\, E_{\text{GL}(r, {\mathbb C})}\, , \end{equation} and the following conditions hold: \begin{enumerate} \item{} $\varphi\circ f \, =\, \varphi\circ p_1$, where $p_1$ is the natural projection of $E_{\text{GL}(r, {\mathbb C})}\times \text{GL}(r, {\mathbb C})$ to $E_{\text{GL}(r, {\mathbb C})}$, \item{} for each point $x\, \in\, X$, the action of $\text{GL}(r, {\mathbb C})$ on the reduced fiber $\varphi^{-1}(x)_{\text{red}}$ is transitive, \item{} the restriction of $\varphi$ to $\varphi^{-1}(X - D)$ makes $\varphi^{-1}(X- D)$ a principal $\text{GL}(r, {\mathbb C})$--bundle over $X- D$, \item{} for each irreducible component $D_i\, \subset\, D$, the reduced inverse image $\varphi^{-1}(D_i)_{\text{red}}$ is a smooth divisor and $$ \widehat{D}\, :=\, \sum_{i=1}^\ell \varphi^{-1}(D_i)_{\text{red}} $$ is a normal crossing divisor on $E_{\text{GL}(r, {\mathbb C})}$, and \item{} for any point $x$ of $D$, and any point $z\, \in\, \varphi^{-1}(x)$, the isotropy group \begin{equation}\label{e8} G_z\, \subset\,\text{GL}(r, {\mathbb C}) \, , \end{equation} for the action of $\text{GL}(r, {\mathbb C})$ on $E_{\text{GL}(r, {\mathbb C})}$, is a finite group, and if $x$ is a smooth point of $D$, then the natural action of $G_z$ on the quotient line $T_zE_{\text{GL}(r, {\mathbb C})}/T_z\varphi^{-1}(D)_{\text{red}}$ is faithful. \end{enumerate} Let $$ D^{\rm sm}\, \subset\, D $$ be the smooth locus of the divisor. Take any point $x\, \in\,D^{\rm sm}$, and let $z\, \in\, \varphi^{-1}(x)$ be any point. Since $G_z$ acts faithfully on the line $T_zE_{\text{GL}(r, {\mathbb C})}/T_z\varphi^{-1}(D)_{\text{red}}$, it follows that $G_z$ is a cyclic group. 
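To spell out the last step (a standard argument, added here for completeness): since the action of $G_z$ on the quotient line is faithful, it gives an embedding
$$
G_z\, \hookrightarrow\, \text{GL}\left(T_zE_{\text{GL}(r, {\mathbb C})}/T_z\varphi^{-1}(D)_{\text{red}}\right)\, \cong\, {\mathbb C}^*\, ,
$$
and every finite subgroup of ${\mathbb C}^*$ is cyclic: by Lagrange's theorem its elements are roots of unity whose order divides the order of the subgroup, so the subgroup is contained in, hence equal to, a group of roots of unity.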
Take any $z'\,\in\, E_{\text{GL}(r, {\mathbb C})}$ such that $\varphi(z')\, =\, \varphi(z)$. There is an element $g\,\in\, \text{GL}(r, {\mathbb C})$ such that $f(z\, ,g)\,=\, z'$. Therefore, the subgroup $G_z$ is conjugate to the subgroup $G_{z'}$; more precisely, we have $g^{-1}G_zg\,=\, G_{z'}$. In particular, $G_z$ is isomorphic to $G_{z'}$. There is a natural bijective correspondence between the ramified principal $\text{GL}(r, {\mathbb C})$--bundles with ramification over $D$ and the parabolic vector bundles of rank $r$ (see \cite{BBN}, \cite{Bi2}). We first describe the steps in the construction of a ramified principal $\text{GL}(r, {\mathbb C})$--bundle from a parabolic vector bundle of rank $r$: \begin{itemize} \item Given a parabolic vector bundle $E_*$ of rank $r$ on $X$, there is a Galois covering \begin{equation}\label{e1} \gamma\, :\, Y\, \longrightarrow\, X\, , \end{equation} where $Y$ is an irreducible smooth projective variety, and a $\text{Gal}(\gamma)$--linearized vector bundle $F$ on $Y$ \cite{Bi1}, \cite{Bo1}, \cite{Bo2}. Let $F_{\text{GL}(r, {\mathbb C})}$ be the principal $\text{GL}(r, {\mathbb C})$--bundle on $Y$ defined by $F$. We recall that $F_{\text{GL}(r, {\mathbb C})}$ is the space of all linear isomorphisms from ${\mathbb C}^r$ to the fibers of $F$. \item The linearization action of $\text{Gal}(\gamma)$ on $F$ produces an action of $\text{Gal}(\gamma)$ on $F_{\text{GL}(r, {\mathbb C})}$. This action of $\text{Gal}(\gamma)$ on $F_{\text{GL}(r, {\mathbb C})}$ commutes with the action of $\text{GL}(r, {\mathbb C})$ on $F_{\text{GL}(r, {\mathbb C})}$ because it is given by a linearization action on $F$. \item The quotient $$\text{Gal}(\gamma)\backslash F_{\text{GL}(r, {\mathbb C})} \, \longrightarrow\, \text{Gal}(\gamma)\backslash Y\,=\, X$$ is a ramified principal $\text{GL}(r, {\mathbb C})$--bundle. 
\end{itemize} It is straightforward to check that $\text{Gal}(\gamma)\backslash F_{\text{GL}(r, {\mathbb C})}$ is a ramified principal $\text{GL}(r, {\mathbb C})$--bundle over $X$. We will now describe the construction of a parabolic vector bundle of rank $r$ from a ramified principal $\text{GL}(r, {\mathbb C})$--bundle. Let $$ \varphi \, :\, F_{\text{GL}(r, {\mathbb C})}\, \longrightarrow\, X $$ be a ramified principal $\text{GL}(r, {\mathbb C})$--bundle. Let $f$ be as in \eqref{f}. Consider the trivial vector bundle $$ W\, :=\, F_{\text{GL}(r, {\mathbb C})}\times {\mathbb C}^r\,\longrightarrow\, F_{\text{GL}(r, {\mathbb C})}\, . $$ The group $\text{GL}(r,{\mathbb C})$ acts on $F_{\text{GL}(r, {\mathbb C})} \times {\mathbb C}^r$ as follows: the action of any $g\, \in\, \text{GL}(r,{\mathbb C})$ sends any $(z\, ,v)\, \in\, F_{\text{GL}(r, {\mathbb C})} \times {\mathbb C}^r$ to $(f(z,g)\, ,g^{-1}(v))$. Note that this action on $W$ is a lift of the action of $\text{GL}(r,{\mathbb C})$ on $F_{\text{GL}(r, {\mathbb C})}$ defined by $f$. This action of $\text{GL}(r,{\mathbb C})$ on $W$ produces an action of $\text{GL}(r,{\mathbb C})$ on the quasicoherent sheaf $\varphi_*W$ on $X$. Note that this action commutes with the trivial action of $\text{GL}(r,{\mathbb C})$ on $X\,=\,F_{\text{GL}(r, {\mathbb C})}/\text{GL}(r,{\mathbb C})$. The vector bundle underlying the parabolic vector bundle corresponding to $F_{\text{GL}(r, {\mathbb C})}$ is $$ E_0\,:=\, (\varphi_*W)^{\text{GL}(r,{\mathbb C})}\, \subset\, \varphi_*W\, . $$ Here $(\varphi_*W)^{\text{GL}(r,{\mathbb C})}$ denotes the sheaf of invariants; from the given conditions on $F_{\text{GL}(r, {\mathbb C})}$ it follows that $E_0$ is a locally free coherent sheaf. We will construct a parabolic structure on $E_0$. 
For any $i\, \in\,[1\, ,\ell]$, the reduced divisor $\varphi^{-1}(D_i)_{\text{red}} \, \subset\, F_{\text{GL}(r, {\mathbb C})}$ is preserved by the action of $\text{GL}(r,{\mathbb C})$ on $F_{\text{GL}(r, {\mathbb C})}$. Therefore, the line bundle $$ {\mathcal O}_{F_{\text{GL}(r, {\mathbb C})}} (\varphi^{-1}(D_i)_{\text{red}}) \, \longrightarrow\, F_{\text{GL}(r, {\mathbb C})} $$ is equipped with a lift of the action of $\text{GL}(r, {\mathbb C})$ on $F_{\text{GL}(r, {\mathbb C})}$. For each $n\,\in\, \mathbb Z$, the action on $\text{GL}(r, {\mathbb C})$ on ${\mathcal O}_{F_{\text{GL}(r, {\mathbb C})}} (\varphi^{-1}(D_i)_{\text{red}})$ produces an action of $\text{GL}(r, {\mathbb C})$ on the line bundle ${\mathcal O}_{F_{\text{GL}(r, {\mathbb C})}} (n\cdot \varphi^{-1}(D_i)_{\text{red}})$. For any $i\, \in\,[1\, ,\ell]$, take any point $x_i\, \in\, D^{\rm sm}\bigcap D_i$, where $D^{\rm sm}$ as before is the smooth locus of $D$. Recall that the order of the cyclic isotropy subgroup $G_z\, \in\, \text{GL}(r, {\mathbb C})$, where $z\, \in\, \varphi^{-1}(x_i)$, is independent of the choices of both $x_i$ and $z$. Let $n_i$ be the order of $G_z$, where $z$ is as above. For any real number $\lambda$, by $[\lambda]$ we will denote the integral part of $\lambda$. So, $[\lambda]\, \in\, \mathbb Z$, and $0\, \leq\, \lambda-[\lambda]\, <\, 1$. For any $t\, \in\, \mathbb R$, consider the vector bundle $$ W_t\, :=\, W\otimes {\mathcal O}_{F_{\text{GL}(r, {\mathbb C})}} ( \sum_{i=1}^\ell [-tn_i]\cdot \varphi^{-1}(D_i)_{\text{red}}) \, \longrightarrow\, F_{\text{GL}(r, {\mathbb C})}\, , $$ where $n_i$ is defined above. The actions of $\text{GL}(r, {\mathbb C})$ on $W$ and ${\mathcal O}_{F_{\text{GL}(r, {\mathbb C})}} (\varphi^{-1}(D_i)_{\text{red}})$ together produce an action of $\text{GL}(r, {\mathbb C})$ on the vector bundle $W_t$ defined above. 
This action of $\text{GL}(r, {\mathbb C})$ on $W_t$ lifts the action of $\text{GL}(r, {\mathbb C})$ on $F_{\text{GL}(r, {\mathbb C})}$. Let $$ E_t\, :=\, (\varphi_*W_t)^{\text{GL}(r,{\mathbb C})}\, \subset\, \varphi_*W_t $$ be the invariant direct image. This $E_t$ is a locally free coherent sheaf on $X$. This filtration of coherent sheaves $\{E_t\}_{t\in \mathbb R}$ defines a parabolic vector bundle on $X$ with $E_0$ as the underlying vector bundle (see \cite{MY} for the description of a parabolic vector bundle as a filtration of sheaves). The proof is similar to the proofs in \cite{Bi1}, \cite{Bo1}, \cite{Bo2}. The above construction of a parabolic vector bundle of rank $r$ from a ramified principal ${\text{GL}(r, {\mathbb C})} $--bundle is the inverse of the earlier construction of a ramified principal ${\text{GL}(r, {\mathbb C})}$--bundle from a parabolic vector bundle. We note that the above construction of a parabolic vector bundle of rank $r$ from a ramified principal $\text{GL}(r, {\mathbb C})$--bundle coincides with the following construction (however we do not need this here for our purpose). As before, let $F_{\text{GL}(r, {\mathbb C})}\, \longrightarrow\, X$ be a ramified principal $\text{GL}(r, {\mathbb C})$--bundle. Then there is a finite (ramified) Galois covering $$ \gamma\, :\, Y\, \longrightarrow\, X $$ such that the normalization $\widetilde{F_{\text{GL}(r, {\mathbb C})} \times_X Y}$ of the fiber product $F_{\text{GL}(r, {\mathbb C})}\times_X Y$ is smooth. The projection $\widetilde{F_{\text{GL}(r, {\mathbb C})}\times_X Y} \,\longrightarrow\, Y$ is a principal $\text{GL}(r, {\mathbb C})$--bundle equipped with an action of the Galois group $\Gamma\,:=\, \text{Gal}(\gamma)$. 
Let $F_{V_0}\,:=\, \widetilde{F_{\text{GL}(r, {\mathbb C})}\times_X Y}(V_0)$ be the vector bundle over $Y$ associated to the principal $\text{GL}(r, {\mathbb C})$--bundle $\widetilde{F_{\text{GL}(r, {\mathbb C})} \times_X Y}$ for the standard $\text{GL}(r, {\mathbb C})$--module $V_0 \,:=\, {\mathbb C}^r$. The action of $\Gamma$ on $Y$ induces an action of $\Gamma$ on $\widetilde{F_{\text{GL}(r, {\mathbb C})}\times_X Y}$; this action of $\Gamma$ on $\widetilde{F_{\text{GL}(r, {\mathbb C})}\times_X Y}$ commutes with the action of ${\text{GL}(r, {\mathbb C})}$ on $\widetilde{F_{\text{GL}(r, {\mathbb C})}\times_X Y}$. Hence the action of $\Gamma$ on $\widetilde{F_{\text{GL}(r, {\mathbb C})}\times_X Y}$ induces an action of $\Gamma$ on the above defined associated bundle $F_{V_0}$ making $F_{V_0}$ a $\Gamma$--linearized vector bundle. Let $E_*$ be the parabolic vector bundle of rank $r$ over $X$ associated to this $\Gamma$--linearized vector bundle $F_{V_0}$. Take an irreducible component $D_i$ of the parabolic divisor $D$. Consider the parabolic vector bundle $E_*$ constructed above from the ramified principal $\text{GL}(r, {\mathbb C})$--bundle $F_{\text{GL}(r, {\mathbb C})}\, \longrightarrow\, X$. A rational number $0\, \leq\, \lambda\, <\, 1$ is a parabolic weight for the quasiparabolic filtration of $E_*$ over $D_i$ if and only if $\exp(2\pi\sqrt{-1} \lambda)$ is an eigenvalue of the isotropy subgroup $G_z$ for a general point $z$ of $D_i$; if $\lambda$ is a parabolic weight, then its multiplicity coincides with the multiplicity of the eigenvalue $\exp(2\pi\sqrt{-1} \lambda)$ of $G_z$. \section{Chern classes of parabolic vector bundles} \subsection{Projective bundle and the tautological line bundle} Let $E_*$ be a parabolic vector bundle over $X$ of rank $r$. Let \begin{equation}\label{ee} \varphi\, :\, E_{\text{GL}(r, {\mathbb C})}\,\longrightarrow\, X \end{equation} be the corresponding ramified principal $\text{GL}(r, {\mathbb C})$--bundle. 
Let ${\mathbb P}^{r-1}$ be the projective space parametrizing the hyperplanes in ${\mathbb C}^r$. The standard action of $\text{GL}(r, {\mathbb C})$ on ${\mathbb C}^r$ produces an action of $\text{GL}(r, {\mathbb C})$ on ${\mathbb P}^{r-1}$. Let \begin{equation}\label{rho} \rho\, :\, \text{GL}(r, {\mathbb C})\, \longrightarrow\, \text{Aut}({\mathbb P}^{r-1}) \end{equation} be the homomorphism defined by this action. Let \begin{equation}\label{e7} {\mathbb P}(E_*) \, = \, E_{\text{GL}(r, {\mathbb C})}({\mathbb P}^{r-1}) \, :=\, E_{\text{GL}(r, {\mathbb C})}\times^{\text{GL}(r, {\mathbb C})} {\mathbb P}^{r-1}\,\longrightarrow\, X \end{equation} be the associated (ramified) fiber bundle. We note that ${\mathbb P}(E_*)$ is a quotient of $E_{\text{GL}(r, {\mathbb C})}\times{\mathbb P}^{r-1}$; two points $(y_1\, ,z_1)$ and $(y_2\, ,z_2)$ of $E_{\text{GL}(r, {\mathbb C})}\times{\mathbb P}^{r-1}$ are identified in ${\mathbb P}(E_*)$ if there is an element $g\, \in\, \text{GL}(r, {\mathbb C})$ such that $y_2\,=\, y_1g$ and $z_2\,=\, \rho(g^{-1})(z_1)$, where $\rho$ is the homomorphism in \eqref{rho}. \begin{definition}\label{def1} {\rm We will call ${\mathbb P}(E_*)$ the} projective bundle {\rm associated to the parabolic vector bundle $E_*$.} \end{definition} Take a point $x\,\in\,D$; it need not be a smooth point of $D$. Take any $z\,\in\, \varphi^{-1}(x)$, where $\varphi$ is the morphism in \eqref{ee}. As in \eqref{e8}, let $G_z\,\subset\, \text{GL}(r, {\mathbb C})$ be the isotropy subgroup of $z$ for the action of $\text{GL}(r, {\mathbb C})$ on $E_{\text{GL}(r, {\mathbb C})}$. We recall that $G_z$ is a finite group. Let $n_x$ be the order of $G_z$; we note that $n_x$ is independent of the choice of $z$ in $\varphi^{-1}(x)$ because for any other $z'\, \in\, \varphi^{-1}(x)$, the two groups $G_{z'}$ and $G_z$ are isomorphic. The number of distinct integers $n_x$, $x\,\in\,D$, is finite. 
Indeed, this follows immediately from the fact that as $x$ moves over a fixed connected component of $D^{\rm sm}$, the conjugacy class of the subgroup $G_z\, \subset\, \text{GL}(r, {\mathbb C})$, $z\, \in\, \varphi^{-1}(x)$, remains unchanged. Let \begin{equation}\label{e9} N(E_*)\, :=\, \text{l.c.m.}\{n_x\}_{x\in D} \end{equation} be the least common multiple of all these integers $n_x$. As before, ${\mathbb P}^{r-1}$ is the projective space parametrizing the hyperplanes in ${\mathbb C}^r$. For any point $y\,\in\, {\mathbb P}^{r-1}$, let \begin{equation}\label{e10} H_y\,\subset\, \text{GL}(r, {\mathbb C}) \end{equation} be the isotropy subgroup for the action of $\text{GL}(r, {\mathbb C})$ on ${\mathbb P}^{r-1}$ constructed using $\rho$ in \eqref{rho}. So $H_y$ is a maximal parabolic subgroup of $\text{GL}(r, {\mathbb C})$. Let ${\mathcal O}_{{\mathbb P}^{r-1}}(1)\,\longrightarrow \, {\mathbb P}^{r-1}$ be the tautological quotient line bundle. The group $H_y$ in \eqref{e10} acts on the fiber ${\mathcal O}_{{\mathbb P}^{r-1}}(1)_y$ over the point $y$. For points $x\, \in\, D$, $z\, \in\, \varphi^{-1}(x)$ and $y\,\in\, {\mathbb P}^{r-1}$, the group $G_z\bigcap H_y\,\subset\, \text{GL}(r, {\mathbb C})$, where $G_z$ and $H_y$ are defined in \eqref{e8} and \eqref{e10} respectively, acts trivially on the fiber ${\mathcal O}_{{\mathbb P}^{r-1}}(n_x)_y$ of the line bundle ${\mathcal O}_{{\mathbb P}^{r-1}}(n_x)\,:=\, {\mathcal O}_{{\mathbb P}^{r-1}}(1)^{\otimes n_x}$ over $y$. Indeed, this follows from the fact that $n_x$ is the order of $G_z$. Therefore, from the definition of $N(E_*)$ in \eqref{e9} it follows immediately that for any $z\,\in\, \varphi^{-1}(D)$ and any $y\,\in\, {\mathbb P}^{r-1}$, the group $G_z\bigcap H_y\,\subset\, \text{GL}(r, {\mathbb C})$ acts trivially on the fiber of the line bundle $${\mathcal O}_{{\mathbb P}^{r-1}}(N(E_*))\, :=\, {\mathcal O}_{{\mathbb P}^{r-1}}(1)^{\otimes N(E_*)} $$ over the point $y$. 
Consider the action of $\text{GL}(r, {\mathbb C})$ on the total space of the line bundle ${\mathcal O}_{{\mathbb P}^{r-1}} (N(E_*))$ constructed using the standard action of $\text{GL}(r, {\mathbb C})$ on ${\mathbb C}^r$. Let $$ E_{\text{GL}(r, {\mathbb C})}({\mathcal O}_{{\mathbb P}^{r-1}} (N(E_*))) \, :=\, E_{\text{GL}(r, {\mathbb C})}\times^{\text{GL}(r, {\mathbb C})} {\mathcal O}_{{\mathbb P}^{r-1}} (N(E_*)) \,\longrightarrow\, X $$ be the associated fiber bundle. Since the natural projection $$ {\mathcal O}_{{\mathbb P}^{r-1}} (N(E_*))\,\longrightarrow\, {\mathbb P}^{r-1} $$ intertwines the actions of $\text{GL}(r, {\mathbb C})$ on ${\mathcal O}_{{\mathbb P}^{r-1}} (N(E_*))$ and ${\mathbb P}^{r-1}$, this natural projection produces a projection \begin{equation}\label{e11} E_{\text{GL}(r, {\mathbb C})}({\mathcal O}_{{\mathbb P}^{r-1}} (N(E_*))) \,\longrightarrow\, {\mathbb P}(E_*) \end{equation} between the associated bundles, where ${\mathbb P}(E_*)$ is constructed in \eqref{e7}. Using the above observation that $G_z\bigcap H_y$ acts trivially on the fiber of ${\mathcal O}_{{\mathbb P}^{r-1}} (N(E_*))$ over $y$ it follows immediately that the projection in \eqref{e11} makes $E_{\text{GL}(r, {\mathbb C})} ({\mathcal O}_{{\mathbb P}^{r-1}} (N(E_*)))$ an algebraic line bundle over the variety ${\mathbb P}(E_*)$. \begin{definition}\label{def2} {\rm The line bundle $E_{\text{GL}(r, {\mathbb C})}({\mathcal O}_{ {\mathbb P}^{r-1}}(N(E_*)))\,\longrightarrow\, {\mathbb P}(E_*)$ will be called the} tautological line bundle; {\rm this tautological line bundle will be denoted by ${\mathcal O}_{{\mathbb P}(E_*)}(1)$.} \end{definition} \subsection{Chern class of the tautological line bundle} For any nonnegative integer $i$, define the rational Chow group $\text{CH}^i(X)_{\mathbb Q}\,:=\, \text{CH}^i(X)\bigotimes_{\mathbb Z} \mathbb Q$. Let $E_*$ be a parabolic vector bundle over $X$ of rank $r$. 
The corresponding ramified principal $\text{GL}(r, {\mathbb C})$--bundle over $X$ will be denoted by $E_{\text{GL}(r, {\mathbb C})}$. Consider ${\mathbb P}(E_*)$ constructed as in \eqref{e7} from $E_{\text{GL}(r, {\mathbb C})}$. Let $$ \psi\, :\, {\mathbb P}(E_*)\, \longrightarrow\, X $$ be the natural projection. Let ${\mathcal O}_{{\mathbb P}(E_*)}(1)$ be the tautological line bundle over ${\mathbb P}(E_*)$ (see Definition \ref{def2}). \begin{theorem}\label{thm1} For each integer $i\, \in\, [0\, ,r]$, there is a unique element $$ \widetilde{C}_i(E_*)\, \in\, {\rm CH}^i(X)_{\mathbb Q} $$ such that \begin{equation}\label{b3} \sum_{i=0}^r (-1)^i c_1({\mathcal O}_{{\mathbb P}(E_*)} (1))^{r-i} \psi^*\widetilde{C}_i(E_*) \, =\, 0 \end{equation} with $\widetilde{C}_0(E_*)\, =\, 1/N(E_*)^r$, where $N(E_*)$ is the integer in \eqref{e9}. \end{theorem} \begin{proof} Let $\gamma\, :\, Y\,\longrightarrow\, X$ be the covering in \eqref{e1} (recall that it depends on $E_*$). Let $E'\,\longrightarrow\, Y$ be the $\Gamma$--linearized vector bundle over $Y$ corresponding to $E_*$, where $\Gamma\,=\, \text{Gal}(\gamma)$ is the Galois group of $\gamma$. Let ${\mathbb P}(E')$ be the projective bundle over $Y$ parametrizing the hyperplanes in the fibers of $E'$. The tautological line bundle over ${\mathbb P}(E')$ will be denoted by ${\mathcal O}_{{\mathbb P}(E')}(1)$. The action of $\Gamma$ on $E'$ produces an action of $\Gamma$ on ${\mathbb P}(E')$ lifting the action of $\Gamma$ on $Y$. It can be seen that the variety ${\mathbb P}(E_*)$ in \eqref{e7} is the quotient \begin{equation}\label{g1} \Gamma\backslash {\mathbb P}(E')\,=\, {\mathbb P}(E_*)\, . 
\end{equation} Indeed, this follows immediately from the fact that $\Gamma\backslash E'_{\text{GL}(r, {\mathbb C})}\,=\, E_{\text{GL}(r, {\mathbb C})}$, where $E_{\text{GL}(r, {\mathbb C})}$ is the ramified principal $\text{GL}(r, {\mathbb C})$--bundle corresponding to $E_*$, and $E'_{\text{GL}(r, {\mathbb C})}$ is the principal $\text{GL}(r, {\mathbb C})$--bundle corresponding to $E'$. For any point $y\, \in\, Y$, let $\Gamma_y\, \subset\, \Gamma$ be the isotropy subgroup that fixes $y$ for the action of $\Gamma$ on $Y$. The action of $\Gamma_y$ on the fiber of ${\mathcal O}_{{\mathbb P} (E')}(N(E_*))\, :=\, {\mathcal O}_{{\mathbb P}(E')}(1)^{\otimes N(E_*)}$ is trivial, where $N(E_*)$ is the integer in \eqref{e9}. Indeed, this follows immediately from the construction of $E_*$ for $E'$. Therefore, the quotient $\Gamma\backslash {\mathcal O}_{{\mathbb P}(E')}(N(E_*))$ defines a line bundle over $\Gamma\backslash{\mathbb P}(E')\,=\, {\mathbb P}(E_*)$. We have a natural isomorphism of line bundles \begin{equation}\label{g2} \Gamma\backslash{\mathcal O}_{{\mathbb P}(E')}(N(E_*))\,=\, {\mathcal O}_{{\mathbb P}(E_*)}(1)\, . \end{equation} Let $$ \psi_{E'}\, :\, {\mathbb P}(E')\, \longrightarrow\, Y $$ be the natural projection. For any $i\, \in\, [0\, ,r]$, let $$ c_i(E')\,\in\,\text{CH}^i(Y)_{\mathbb Q}\,:=\,\text{CH}^i(Y) \otimes_{\mathbb Z}{\mathbb Q} $$ be the $i$--th Chern class of $E'$. We have \begin{equation}\label{g3} \sum_{i=0}^r \frac{(-1)^i}{N(E_*)^{r-i}} c_1({\mathcal O}_{{\mathbb P}(E')}(N(E_*)))^{r-i} \psi^*_{E'}c_i(E') \,=\,\sum_{i=0}^r (-1)^i c_1({\mathcal O}_{{\mathbb P}(E')} (1))^{r-i} \psi^*_{E'}c_i(E')\,=\, 0 \end{equation} (see \cite[page 429]{Ha}). The identity in \eqref{g3} in fact uniquely determines the Chern classes of $E'$ provided it is given that $c_0(E') \,=\,1$. 
Since the vector bundle $E'$ is $\Gamma$--linearized, it follows immediately that \begin{equation}\label{g4} c_i(E')\,\in\, (\text{CH}^i(Y)_{\mathbb Q})^\Gamma\, , \end{equation} where $(\text{CH}^i(Y)_{\mathbb Q})^\Gamma$ is the invariant part of $\text{CH}^i(Y)_{\mathbb Q}$ for the action of $\Gamma$ on it. We also know that the pullback homomorphism $$ \gamma^*\, :\, \text{CH}^i(X)_{\mathbb Q}\, \longrightarrow\,(\text{CH}^i(Y)_{\mathbb Q})^\Gamma $$ is an isomorphism \cite[pages 20--21, Example 1.7.6]{Fu}. {}From \eqref{g1} we have the quotient map $$ \beta\, :\,{\mathbb P}(E')\,\longrightarrow \, {\mathbb P}(E_*) $$ for the action of $\Gamma$, and from \eqref{g2} it follows that \begin{equation}\label{b1} \beta^*{\mathcal O}_{{\mathbb P}(E_*)}(1)\,=\, {\mathcal O}_{{\mathbb P}(E')}(N(E_*))\, . \end{equation} Hence $$ \beta^*c_1({\mathcal O}_{{\mathbb P}(E_*)}(1))\,=\, c_1({\mathcal O}_{{\mathbb P}(E')}(N(E_*)))\, . $$ Therefore, from \eqref{g3} and \eqref{g4} we conclude that for each $i\, \in\, [0\, ,r]$, there is a unique element $\widetilde{C}_i(E_*)\, \in\, \text{CH}^i(X)_{\mathbb Q}$ such that $\widetilde{C}_0(E_*)\, =\, 1/N(E_*)^r$ and $$ \sum_{i=0}^r (-1)^i c_1({\mathcal O}_{{\mathbb P}(E_*)} (1))^{r-i} \psi^*\widetilde{C}_i(E_*) \, =\, 0\, . $$ This completes the proof of the theorem. \end{proof} \begin{definition}\label{def4} {\rm For any integer $i\, \in\, [0\, ,r]$, the} $i$--th Chern class $c_i(E_*)$ {\rm of a parabolic vector bundle $E_*$ is defined to be $$ c_i(E_*)\, :=\, N(E_*)^{r-i}\cdot \widetilde{C}_i(E_*)\, \in\, \text{CH}^i(X)_{\mathbb Q}\, , $$ where $\widetilde{C}_i(E_*)$ is the class in Theorem \ref{thm1}.} \end{definition} \begin{corollary}\label{cor1} Let $E_*$ be a parabolic vector bundle over $X$ of rank $r$. Let $E'\,\longrightarrow\, Y$ be the corresponding $\Gamma$--linearized vector bundle (see the proof of Theorem \ref{thm1}). Then $$ c_i(E')\, =\, \gamma^*c_i(E_*) $$ for all $i$.
\end{corollary} \begin{proof} {}From the construction of \eqref{b3} using \eqref{g3} it follows immediately that $\gamma^* \widetilde{C}_i(E_*) \,=\, c_i(E')/N(E_*)^{r-i}$. Therefore, the corollary follows from Definition \ref{def4}. \end{proof} Define the \textit{Chern polynomial} for $E_*$ to be $$ c_t(E_*)\,=\, \sum_{i=0}^r c_i(E_*)t^i\, , $$ where $r\, =\, \text{rank}(E_*)$, and $t$ is a formal variable. The \textit{Chern character} of $E_*$ is constructed from the Chern classes of $E_*$ in the following way: if $c_t(E_*)\,=\, \prod_{i=1}^r (1+\alpha_i t)$, then $$ {\rm ch}(E_*)\,:=\, \sum_{j=1}^r\exp(\alpha_j) \,\in\, \text{CH}^*(X)_{\mathbb Q}\, . $$ \begin{proposition}\label{prop1} Let $E_*$ and $F_*$ be parabolic vector bundles on $X$. \begin{enumerate} \item The Chern polynomial of the parabolic direct sum $E_*\oplus F_*$ satisfies the identity $c_t(E_*\oplus F_*)\,=\, c_{t}(E_*)\cdot c_{t}(F_*)$. \item The Chern polynomial of the parabolic dual $E^*_*$ satisfies the identity $c_t(E^*_*)\,=\, c_{-t}(E_*)$. \item The Chern character of the parabolic tensor product $E_*\otimes F_*$ satisfies the identity ${\rm ch}(E_*\otimes F_*)\,=\,{\rm ch}(E_*)\cdot{\rm ch}(F_*)$. \end{enumerate} \end{proposition} \begin{proof} The Chern classes of usual vector bundles satisfy the above relations. The correspondence between the parabolic vector bundles and the $\Gamma$--linearized vector bundles takes the tensor product (respectively, direct sum) of any two $\Gamma$--linearized vector bundles to the parabolic tensor product (respectively, parabolic direct sum) of the corresponding parabolic vector bundles. Similarly, the dual of a given parabolic vector bundle corresponds to the dual of the $\Gamma$--linearized vector bundle corresponding to the given parabolic vector bundle. In view of these facts, the proposition follows from Corollary \ref{cor1}.
\end{proof} \section{Comparison with equivariant Chern classes} Let us recall the basic construction of equivariant intersection theory as in \cite{EG}. Consider a smooth variety $Z$ equipped with an action of a finite group $G$. Let $V$ be a representation of $G$ such that there is an open subset $U$ of $V$ on which $G$ acts freely and the codimension of the complement $V-U$ is at least $\dim Z - i$. Following Edidin and Graham we write $$ Z_G \,= \,(Z \times U )/G\, . $$ The equivariant Chow groups are defined to be $$ A^{i}_G(Z) \,=\, A^{i}(Z_G)\otimes {\mathbb Q}\, . $$ It is shown in Proposition 1 of \cite{EG} that this definition does not depend on $V$ and $U$. Consider a parabolic vector bundle $E_*$ on $X$. Let $\gamma\, :\, Y\longrightarrow X$ be a Galois cover as in the proof of Theorem \ref{thm1}. The Galois group of $\gamma$ will be denoted by $G$. Let $E'$ be the $G$--linearized vector bundle on $Y$ associated to $E_*$. The vector bundle $E'$ has equivariant Chern classes $$ c^G_i(E') \,\in \, A^i_G(Y)\, . $$ We have a diagram $$ \xymatrix{ {\mathbb P}(E_*) \ar[d]^{\psi_X} & {\mathbb P}(E_*)_G = ({\mathbb P}(E_*) \times U)/G \ar[l]_(.7){\pi_X} \ar[d]^{\psi^G_X} & {\mathbb P}(E')_G \ar[l]_(.28){\beta^G} \ar[d]^{\psi_Y^G} \\ X & X_G = (X \times U)/G \ar[l]^(.7){p_X}& Y_G \ar[l]^(.3){f_G} } $$ Note that the morphisms $p_X$ and $\pi_X$ are flat. Further, the morphisms $\beta^G$ and $f_G$ are flat and proper. The scheme ${\mathbb P}(E')_G$ is a projective bundle over $Y_G$ as the action of $G$ on $Y\times U$ is free; see also \cite[Lemma 1]{EG}. All these can be deduced by using the fact that the group $G$ acts freely on $X\times U$ and $Y\times U$. \begin{proposition} We have the following relationship amongst Chern classes: $$ p_X^*(c_i(E_*)) \,= \,f_{G,*}(c^G_i(E'))\, . $$ \end{proposition} \begin{proof} By the projection formula it suffices to show that \begin{equation}\label{l1} f_G^* p_X^*(c_i(E_*)) \,=\, c^G_i(E')\, .
\end{equation} Flat pullback preserves intersection products so the equation obtained by pulling back \eqref{b3} to ${\mathbb P}(E')$ remains valid. As was observed in the proof of Theorem \ref{thm1} we have that $$ \beta^{G*}\pi_X^*(\mathcal{O}_{{\mathbb P}(E_*)}(1)) \,=\, \mathcal{O}_{{\mathbb P}(E')}(N(E_*)) $$ (see \eqref{b1}). Now using Definition \ref{def4} it is deduced that \eqref{l1} holds. \end{proof}
TITLE: Question about extending a solution of the Monge-Ampere equation QUESTION [1 upvotes]: I am interested in solutions to the Monge-Ampere equation for a smooth function $h(x,y)$ of two variables (though I suppose I could try to make do with $C^2$ solutions). The equation is: $$\det[\nabla^2(h(x,y))]=0 $$ Here $\det$ denotes the determinant and $\nabla^2$ denotes the Hessian. Geometrically this says that the graph $(x,y,h(x,y))$ is a smooth developable surface, i.e., one of zero Gaussian curvature. I want this solution to be defined on a half-plane where $x > R$ for some $R \gg 0$, and I want it to satisfy the following properties: $$h(x,y)=x \quad \text{when} \quad |y| < \epsilon $$ $$ h(x,y)=\sqrt{x^2 + y^2} \quad \text{when} \quad |y| > 2\epsilon $$ Do such solutions exist? REPLY [4 votes]: The answer in the smooth case is 'no', because the differential equations governing the second fundamental form of the graph (which has Gauss curvature $K\equiv0$) show that the entire part of the domain $y>0$ (and $x>R$) will have to belong to the non-planar locus, i.e., the open set where the second fundamental form of the graph $\bigl(x,y,h(x,y)\bigr)$ is nonzero. That rules out patching together the solutions in the way that you want. Here is the basic argument: Suppose that one had a smooth solution $h$ of the given equation defined on the half-plane $x>R\gg0$, agreeing with $\sqrt{x^2+y^2}$ when $|y|\ge2\epsilon$ and agreeing with $x$ when $|y|\le \epsilon$. The surface $$ S = \{ \bigl(x,y,h(x,y)\bigr)\ \bigm|\ x>R,\ y\in\mathbb{R} \}\subset\mathbb{R}^3 $$ is then a surface with Gauss curvature $K\equiv 0$, i.e., its first fundamental form $\mathsf{I}$ is flat. Now, the second fundamental form $\mathsf{I\!I}$ must have rank at most $1$ everywhere (by the Gauss Equation). Let $N\subset S$ denote the non-planar domain, i.e., the open subset where $\mathsf{I\!I}$ is nonvanishing, and let $P\subset S$ denote the planar subset, i.e., the closed subset of $S$ on which $\mathsf{I\!I}$ vanishes.
Note that $N$, because it is open, must properly contain the part of $S$ that lies above the two 'quadrants' $x>R$ while $|y|\ge2\epsilon$. Meanwhile $P$, which is closed, at least contains the part of $S$ that lies above the 'plank' $x>R$ while $|y|\le \epsilon$. Now, a classic fact from surface theory: Suppose that $U\subset\mathbb{R}^3$ is a connected and simply connected surface whose first fundamental form is flat while its second fundamental form is nowhere vanishing (and hence, after an orientation change if necessary, can be assumed to be nonnegative). Then there exist smooth functions $s$ and $t$ on $U$, unique up to additive constants, and smooth functions $a$ and $b$ on $U$ that satisfy $\mathrm{d}a\wedge\mathrm{d}t = \mathrm{d}b\wedge\mathrm{d}t = 0$ (i.e., locally, at least, $a$ and $b$ are functions of $t$) such that $a-bs>0$ on $U$ and $$ \mathsf{I} = \mathrm{d}s^2 + (a-bs)^2\,\mathrm{d}t^2 \qquad\text{and}\qquad \mathsf{I\!I} = (a-bs)\,\mathrm{d}t^2 $$ This is easily proved via the structure equations, so I won't go into that here, except to remark that the level sets of $t$ (which are the kernel of $\mathsf{I\!I}$) are the lines that rule the surface $U$. (Moreover, there is a converse, in the sense that, for any two smooth functions $s$ and $t$ on a simply-connected surface $U$ that satisfy $\mathrm{d}s\wedge\mathrm{d}t\not=0$ and any two functions $a$ and $b$ that satisfy $\mathrm{d}a\wedge\mathrm{d}t = \mathrm{d}b\wedge\mathrm{d}t = 0$ and $a-bs>0$, the above quadratic forms $\mathsf{I}$ and $\mathsf{I\!I}$ satisfy the Gauss and Codazzi equations and so define a flat, nonplanar surface in $\mathbb{R}^3$, unique up to rigid motion.) For example, when $U$ is the graph $z = \sqrt{x^2+y^2}$, one finds that $$ (x,y,z) = \left(\frac{s\ \cos(t\sqrt2)}{\sqrt2},\frac{s\ \sin(t\sqrt2)}{\sqrt2},\frac{s}{\sqrt2}\right) $$ with $s>0$ and $t$ can be regarded as periodic of period $\pi\sqrt2$ (after all, $U$ is not simply-connected).
One also finds, in this case, that $a\equiv0$ and $b\equiv -1$. I.e., in this case, $\mathsf{I} = \mathrm{d}s^2 + s^2\,\mathrm{d}t^2$ and $\mathsf{I\!I} = s\,\mathrm{d}t^2$. Now, go back to our putative example $S$, which, by hypothesis, agrees with the above cone above the quadrant where $x>R$ and $y\ge 2\epsilon$. It follows that in the connected component of the region $N\subset S$ that contains that part of the cone, we can extend the functions $s$ and $t$ to any $1$-connected open set $U$ that contains this conical part and that lies inside $N$. But this forces $U$ to also be part of the cone (since the first and second fundamental forms are the same). The point is that the ruling lines in $U$ will have to be extensions of the ruling lines over the conical part, and the issue is that the second fundamental form cannot go to zero along such a line until you get to $s=0$, which, because $x>R$, can never happen along such a line with $t$ in the range $0<t<\pi/(2\sqrt2)$. It follows that $N$ must contain the entire locus lying above the quadrant $x>R$ while $y>0$, but this is impossible, since, by hypothesis, $S$ contains the planar locus above the 'plank'. Thus, such a smooth solution joining the two partial solutions cannot exist. The argument would work just as well for $C^k$ when $k\ge 3$, but I'm not sure about $C^2$. Probably, it's OK, because the second fundamental form will still be continuous, but the argument for the above local coordinates would have to be examined carefully to see whether you could still establish it when you can't differentiate the coefficients of the second fundamental form. Note that this kind of argument would not work if, instead of trying to join the conical 'quadrant' solutions to the planar 'plank' solution, you had specified the domains as wedges centered on the origin, for then the boundaries of the domains would be ruling lines, which are characteristics for the PDE.
It just so happens that you are trying to modify the solution along a *non-*characteristic curve, and that doesn't work.
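As a quick numerical sanity check on the setup in the question (not on the obstruction argument above), one can verify directly that both prescribed pieces solve the equation, and that the ruling parametrization quoted for the cone really lies on it. This is a minimal sketch using the closed-form second derivatives of $h(x,y)=\sqrt{x^2+y^2}$; the helper name is ours, purely for illustration.

```python
import math

def cone_hess_det(x, y):
    """det of the Hessian of h(x, y) = sqrt(x^2 + y^2), away from the origin.

    Closed-form second derivatives: h_xx = y^2/r^3, h_yy = x^2/r^3,
    h_xy = -x*y/r^3, with r = sqrt(x^2 + y^2).
    """
    r = math.hypot(x, y)
    h_xx = y * y / r**3
    h_yy = x * x / r**3
    h_xy = -x * y / r**3
    return h_xx * h_yy - h_xy**2

# The plane h(x, y) = x has identically zero Hessian, so det = 0 trivially.
# For the cone, the determinant vanishes at every sample point:
for (x, y) in [(3.0, 4.0), (10.0, -2.0), (5.0, 0.1)]:
    assert abs(cone_hess_det(x, y)) < 1e-12

# The ruling parametrization of the cone,
# (x, y, z) = (s cos(t*sqrt(2))/sqrt(2), s sin(t*sqrt(2))/sqrt(2), s/sqrt(2)),
# indeed lies on z = sqrt(x^2 + y^2) for s > 0:
s, t = 2.7, 0.4
x = s * math.cos(t * math.sqrt(2)) / math.sqrt(2)
y = s * math.sin(t * math.sqrt(2)) / math.sqrt(2)
z = s / math.sqrt(2)
assert abs(z - math.hypot(x, y)) < 1e-12
```

Of course, this only confirms that each piece is a solution on its region; the answer's point is that no smooth solution can interpolate between them across the plank.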
TITLE: The bias of $\hat \sigma^2$ for the population variance $\sigma^2$ QUESTION [0 upvotes]: It is given that the bias of an estimator $\hat \theta$ of a parameter $\theta$ is defined as $\Bbb E(\hat \theta) -\theta$. How to find the bias of $\hat \sigma^2$? Anyone can help? Thanks. REPLY [0 votes]: Let $X_1, X_2, \cdots, X_n$ be iid with mean $\mu$ and variance $\sigma^2$. Consider the quantity $$Q=\sum_{i=1}^n(x_i-\bar x)^2$$ where $\bar x$ is the sample mean. Using the so-called "shortcut formula", we get $$Q = \sum_{i=1}^n(x_i-\bar x)^2 = \sum_{i=1}^n x_i^2 - n\bar x^2$$ We can find the expected value of $Q$ as follows. \begin{align*} E(Q) &= E\left(\sum_{i=1}^nx_i^2\right) - nE(\bar x^2) \\ &= \sum_{i=1}^nE(x_i^2) - nE(\bar x^2) && \text{linearity of expectation} \\ &= n[E(x_i)^2 + Var(x_i)] - n\left[E(\bar x)^2 + Var(\bar x)\right] && \text{decomposition of variance} \\[1.2ex] &= n(\mu^2 + \sigma^2) - n(\mu^2 + \sigma^2/n) \\[1.2ex] &= (n-1)\sigma^2 \end{align*} From here, you can consider estimators of the form $\hat\sigma_c^2 = cQ$. Which value of $c$ corresponds to the estimator you are interested in? Sometimes people take $c=\frac{1}{n}$, which leads to a biased estimate with bias $E(Q/n)-\sigma^2 = -\sigma^2/n$ (the "method of moments" estimator, and also the MLE when the data are normal). Sometimes $c=\frac{1}{n-1}$, which gives the unbiased sample variance. If you are interested in minimizing the MSE of the estimator (again for normal data), you would take $c=\frac{1}{n+1}$.
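A short simulation illustrates the computation above: $E(Q)=(n-1)\sigma^2$, the $-\sigma^2/n$ bias of the $c=1/n$ estimator, unbiasedness at $c=1/(n-1)$, and (for normal data, which is needed only for the MSE comparison) that $c=1/(n+1)$ gives the smallest MSE of the three. All names here are ours, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, trials = 10, 4.0, 200_000

# trials-many samples of size n from N(0, sigma2)
x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
Q = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

# E[Q] = (n - 1) * sigma^2
mean_Q = Q.mean()

# bias of c*Q is E[c*Q] - sigma^2
bias_mom = mean_Q / n - sigma2             # approx -sigma2/n
bias_unbiased = mean_Q / (n - 1) - sigma2  # approx 0

# empirical MSE of c*Q for the three candidate constants
mse = {c: ((c * Q - sigma2) ** 2).mean() for c in (1/(n-1), 1/n, 1/(n+1))}
assert mse[1/(n+1)] < mse[1/n] < mse[1/(n-1)]
```

The last assertion matches the closed-form MSE $c^2\,2(n-1)\sigma^4 + ((n-1)c-1)^2\sigma^4$, which is minimized at $c=1/(n+1)$.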
TITLE: $\triangle ABC;AB=c;AC=b;BC=a$ such that $a\geq b\geq c$. Prove : $\frac{a^2-b^2}{c}+\frac{b^2-c^2}{a}+\frac{c^2+2a^2}{b}\geq \frac{2ab-2bc+3ca}{b}$ QUESTION [3 upvotes]: $\triangle ABC;AB=c;AC=b;BC=a$ such that $a\geq b\geq c$. Prove : $$\frac{a^2-b^2}{c}+\frac{b^2-c^2}{a}+\frac{c^2+2a^2}{b}\geq \frac{2ab-2bc+3ca}{b}$$ I have tried that : $a\geq b\geq c\Rightarrow \frac{a^{2}-b^{2}}{c}\geq 0;\frac{b^{2}-c^{2}}{a}\geq 0;\frac{3a^{2}}{b}\geq \frac{3ac}{b}$ $\frac{a^2-b^2}{c}+\frac{b^2-c^2}{a}+\frac{c^2+2a^2}{b}\geq \frac{2ab-2bc+3ca}{b}\Leftrightarrow \frac{c^{2}-a^{2}}{b}+\frac{3a^{2}}{b}\geq 2(a-c)+\frac{3ac}{b}\Leftrightarrow \frac{(c-a)(c+a)}{b}\geq 2(a-c)\Leftrightarrow c+a\geq -2b$ !!?? REPLY [2 votes]: Another way to look at it would be: \begin{align} LHS &= \frac{(a - b)(a + b)}{c} + \frac{(b - c)(b + c)}{a} + \frac{c^2 + 2a^2}{b} \\ &> (a - b) + (b - c) + \frac{c^2 + 2a^2}{b} \\ &= a - c + \frac{c^2 + 2a^2}{b}. \end{align} And RHS $= 2a - 2c + \dfrac{3ca}{b}$. \begin{align} \text{So, }LHS > RHS &\iff \frac{c^2 + 2a^2 - 3ac}{b} > a - c \\ &\iff c^2 + 2a^2 - 3ac > ab - bc \\ &\iff (a - c)^2 + a(a - c) > b(a - c) \\ &\iff a - c + a > b \\ &\iff 2a > b + c \end{align} and this last inequality holds because $a \geq b \geq c$ gives $2a \geq a + b \geq b + c$, which is strict since $a > c$. (Each step above is valid whenever $a$, $b$, $c$ are not all equal, for then $a > c$ and we may divide by $a - c$; in the remaining case $a = b = c$, both sides of the original inequality equal $3a$, so the non-strict inequality holds with equality.)
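A brute-force numerical check over random admissible triangles (sides sorted so that $a\geq b\geq c$, keeping only triples satisfying the triangle inequality $b+c>a$) supports the argument above, including the equality case $a=b=c$. A small sketch; the function names are ours.

```python
import random

def lhs(a, b, c):
    return (a*a - b*b)/c + (b*b - c*c)/a + (c*c + 2*a*a)/b

def rhs(a, b, c):
    return (2*a*b - 2*b*c + 3*c*a)/b

random.seed(42)
checked = 0
for _ in range(20_000):
    c, b, a = sorted(random.uniform(0.5, 10.0) for _ in range(3))
    if b + c <= a:          # degenerate or not a triangle; skip
        continue
    assert lhs(a, b, c) >= rhs(a, b, c) - 1e-9
    checked += 1

# equality case a = b = c: both sides equal 3a
assert abs(lhs(2.0, 2.0, 2.0) - rhs(2.0, 2.0, 2.0)) < 1e-12
```

This is only evidence, not a proof, but it is a cheap way to catch a sign error before committing to an algebraic chain like the one above.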
\section{The space $\Mm\Ll(S)$}\label{ML(S)} This section is set entirely within the framework of (2-dimensional) {\it hyperbolic geometry}, and several facts that we are going to recall are well-known. However, we will later give new insight into (if not an outline of a foundation for) many constructions and concepts in terms of {\it Lorentzian geometry}. \medskip Let us fix once and for all some {\it base} surfaces that will support several geometric structures: $$(\hat S,V)$$ is a compact closed oriented surface of genus $g\geq 0$, with a set of $r\geq 0$ {\it marked points} $V=\{p_1,\cdots,p_r\}$; $$S= \hat S \setminus V \ .$$ $ \overline {\Sigma}$ is obtained by removing from $\hat S$ a small open disk around each point $p_j$. Hence $\overline \Sigma$ is compact with $r$ boundary components $C_1,\cdots,C_r$. We denote by $\Sigma$ the interior of $\overline \Sigma$. We also fix a continuous map $$\overline \phi: \overline \Sigma \to \Shat$$ such that for every $j$, $\overline \phi(C_j) = p_j$, and the restriction $\phi: \Sigma \to S$ is an oriented diffeomorphism that is the identity outside a regular neighbourhood of the boundary of $\overline \Sigma$. In this way, we will often tacitly confuse $S$ and $\Sigma$. We will also assume that $S$ is {\it not elementary}, that is, its fundamental group is {\it non-Abelian}; equivalently, $2-2g-r<0$. Such an $S$ is said to be {\it of finite type}. \subsection{The Teichm\"uller space $\widetilde \Tt(S)$}\label{teich} \smallskip We denote by $$\widetilde \Hh(S)$$ the space of not necessarily complete hyperbolic structures $F$ on $S$ whose completion $F^\Cc$ is a {\it complete hyperbolic surface with geodesic boundary}. Note that we do not require that the boundary components of $F^\Cc$ are closed geodesics. Denote by Diff$^0$ the group of diffeomorphisms of $S$ homotopic to the identity.
Set $$\widetilde \Tt(S) = \widetilde \Hh(S)/{\rm Diff}^0 \ ;$$ in other words, two hyperbolic structures in $\widetilde \Hh(S)$ are identified up to isometries homotopic to the identity. This is the ``full'' Teichm\"uller space we will deal with. \subsection{The convex-core map}\label{convex-core} \smallskip Let us point out some distinguished subspaces of $\widetilde \Tt(S)$. $$\Hh(S)\subset \widetilde \Hh(S)$$ denotes the space of {\it complete} hyperbolic structures on $S$ ({\it i.e.} $F=F^\Cc$). Hence every $F\in \Hh(S)$ can be realized as the quotient $\mh^2/\Gamma$ by a discrete, torsion free subgroup $\Gamma \subset {\rm Isom}^+(\mh^2)\cong PSL(2,\mr)$, isomorphic to $\pi_1(S)$. The corresponding quotient space $$\Tt(S)\subset \widetilde \Tt(S)$$ can be identified with the space of conjugacy classes of such subgroups of $PSL(2,\mr)$. $$\CG(S)\subset \widetilde \Hh(S)$$ denotes the set of $F$ of {\it finite area and such that all boundary components of $F^\Cc$ are closed geodesics}. $$\Tt_\cG(S) \subset \widetilde \Tt(S)$$ is the corresponding quotient space. Clearly if $S$ is compact ($V=\emptyset$) $$\Tt_g := \Tt_\cG(S) = \Tt(S) = \widetilde \Tt(S)$$ is the classical Teichm\"uller space. In general, set $$\Tt_{g,r} := \Tt(S)\cap \Tt_\cG(S) \ .$$ Via the Uniformization Theorem, $\Tt_{g,r}$ is isomorphic to the Teichm\"uller space of {\it conformal structures} on $\hat S$ ({\it i.e.} on $S$ that extend to $\hat S$) mod Diff$^0(\hat S,\ {\rm rel} \ V)$. $\Tt(S)$ is isomorphic to the Teichm\"uller space of arbitrary conformal structures on $S$. \medskip \begin{prop}\label{convcore-map} There is a natural isomorphism $$ \Kk: \Tt(S) \to \Tt_\cG(S) \ .$$ \end{prop} Basically $\Kk[F]$ coincides with $[\Kk(F)]$, where $\Kk(F)$ denotes the interior of the {\it convex core} $\ \overline \Kk(F)$ of $F$. Note that $\Kk(F)^\Cc = \overline \Kk(F)$. This is a bijection because the convex core determines the whole complete surface.
\begin{prop} There is a natural projection $$\beta: \widetilde \Tt(S)\to \Tt(S)$$ such that $\beta_{|\Tt(S)}={\rm Id}$. \end{prop} In fact the holonomy of any $[F]\in \widetilde \Tt(S)$ is the conjugacy class of a faithful representation of $\pi_1(S)$ onto a discrete, torsion free subgroup $\Gamma$ of $PSL(2,\R)$; hence $\beta([F])=[\hat F]$, $\hat F=\mh^2/\Gamma$. Finally we can lift the map of Proposition \ref{convcore-map} to define the {\it convex-core map} $$\Kk: \widetilde \Tt(S) \to \Tt_\cG(S), \ \ \Kk([F])= \Kk([\hat F]) \ . $$ In fact we can realize the representatives of the involved classes in such a way that $$ \overline\Kk(\hat F) \subset F^\Cc \subset \hat F$$ since $F^\Cc$ is a closed convex set in $\hat F$ homotopically equivalent to $S$, and $\overline \Kk(\hat F)$ is the minimal one with these properties. In what follows we will often commit the abuse of confusing the classes with their representatives. \medskip {\bf Partition by types.} \begin{prop}\label{typeF} For every complete surface $F\in \Tt(S)$ there is a partition $$V = V_\Pp \cup V_\Hh$$ such that $p$ belongs to $V_\Pp$ iff the following equivalent properties are satisfied: \smallskip (1) $F$ is of finite area at $p$ (that is, $F$ has a {\rm cusp} at $p$); (2) the holonomy of a circle in $S$ surrounding $p$ is of {\rm parabolic type}; \medskip \noindent $p$ belongs to $V_\Hh$ iff the following equivalent properties are satisfied: \smallskip (i) $p$ corresponds to a boundary component of the convex core $\overline \Kk(F)$; (ii) the holonomy of a circle in $S$ surrounding $p$ is of {\rm hyperbolic type}. \end{prop} The partition $V=V_\Pp \cup V_\Hh$, so that $r=r_\Pp + r_\Hh$, is called the {\it type $\theta (F)$} of $F$. More generally, for every $F\in \widetilde \Tt(S)$, set $\theta (F)= \theta (\hat F)$.
Any fixed type $\theta$ determines the subspace $\widetilde \Tt^\theta (S)$ of hyperbolic structures that share that type; varying $\theta$ we get the {\it partition by types} of $\widetilde \Tt(S)$. \medskip {\bf The fibers of the convex-core map.} We want to describe the fibers of the convex-core map $$\Kk: \widetilde \Tt (S) \to \Tt_\cG (S) \ .$$ \smallskip Let $h \in {\rm Isom}^+(\mh^2)$ be of hyperbolic type. Denote by $\gamma=\gamma_h$ its invariant geodesic. Let $P$ be the closed hyperbolic half-plane determined by $\gamma$ such that the orientation of $\gamma$ as boundary of $P$ is {\it opposite} to the sense of the translation $h_{|\gamma}$. \begin{defi}\label{fring-end} {\rm An {\it $h$-crown} is of the form $$\Ee = H/h$$ where $H$ is the convex hull in $P$ of an $h$-invariant closed subset, say $\ \Ee_\infty \subset S^1_\infty$, contained in the frontier at infinity of $P$. } \end{defi} An $h$-crown $\Ee$ is complete and has geodesic boundary made up of the union of the closed geodesic $\gamma/h$ and complete open geodesics. $\Ee\setminus \partial \Ee$ is homeomorphic to $S^1\times (0,+\infty)$. Now, let $F\in \widetilde \Tt^\theta (S)$ and $ \overline\Kk(\hat F) \subset F^\Cc \subset \hat F$ be as above. Then $F^\Cc$ is obtained by gluing a (possibly empty) crown at each boundary component $C$ of $\overline\Kk(\hat F)$, associated to some point $p\in V_\Hh$. This is possible iff, for every $C$, we take an $h$-crown $\Ee$ such that $h$ is in the same conjugacy class of the $\hat F$-holonomy of the loop $C$, endowed with the boundary orientation of $\overline\Kk(\hat F)$ (in other words, ${\rm length}(\gamma/h)= l(C)$ and both orientations of $\overline\Kk(\hat F)$ and $\Ee$ are induced by the one of $\hat F$). \begin{lem}\label{fin-area-fend} $F$ is of finite area iff all crowns are. A crown $\Ee$ is of finite area iff one of the following equivalent conditions is satisfied: (1) $\Ee_\infty/h$ is a finite set. (2) $\Ee$ has finitely many boundary components.
(3) For every boundary component $l$, the distance between each end of $l$ and $\partial \Ee \setminus l$ is $0$. Every $h$-crown $\Ee$ (every $F\in \widetilde \Tt(S)$) is the union of exhaustive sequences of increasing sub-crowns $\Ee_n\subset \Ee$ (sub-surfaces $F_n\subset F$) of finite area such that $\Ee_{n,\infty}\subset \Ee_\infty$. \end{lem} In fact if $\Ee_\infty$ is finite, then the area of $\Ee$ can be bounded by the sum of the areas of a finite set of ideal triangles. If $\Ee_\infty$ is not a finite set, then $\Ee$ contains an infinite family of disjoint ideal triangles. \medskip Finally, for every $F\in \Tt_\cG (S)$, the fiber $\Kk^{-1}(F)$ can be identified with the set of all possible patterns of $r_\Hh$ gluable crowns. \medskip {\bf Parameters for $\Tt_\cG(S)$.} The fibers of the convex-core map are in every sense ``infinite dimensional''. On the other hand, the base space $\Tt_\cG(S)$ is tame and admits nice parameter spaces, which we are going to recall. \medskip {\bf Length/twist parameters.} This is based on a fixed {\it pant decomposition} $\Dd$ of $\overline \Sigma$. $\Dd$ contains $2g+r-2$ pants obtained by cutting $\overline \Sigma$ open at $3g-3+r$ (ordered) disjoint essential simple closed curves $z_1,\cdots,z_{3g-3+r}$ in $\Sigma$, not isotopic to any boundary component. Every one of the $r$ boundary components $C_1,\cdots, C_r$ of $\overline \Sigma$ is in the boundary of some pant. For every boundary component of a pant $P_k$, corresponding to some $z_j$, we also fix the unique ``essential'' arc $\rho$ in $P_k$ (shown in Fig. \ref{pant}) that has its end-points on that component, and we select furthermore one among these end-points, say $e$. \begin{figure}[ht] \begin{center} \includegraphics[width= 4cm]{pant.eps} \caption{\label{pant} A pant and an arc $\rho$.} \end{center} \end{figure} Set $$\mr_+ = \{l\in \mr; \ l>0\}, \ \ \overline \mr_+ = \{l\in \mr; \ l\geq0\} \ .$$ Consider first the simplest case of $S$ having $(g,r)=(0,3)$.
In this case, set $$\Tt_\cG(S)=\Tt_\cG(0,3) \ .$$ We have just one pant. Let us vary the types. If $r_\Hh = 3$, every hyperbolic structure is determined by the $3$ lengths $(l_1,l_2,l_3)$ of the geodesic boundary components. If $r_\Hh = 2$, it is determined by the corresponding $2$ lengths, and it is natural to associate the value $0$ to the boundary component that corresponds to the cusp, and so on. Eventually the octant $$\overline \mr_+^3=\{(l_1,l_2,l_3); \ l_j\geq 0\}$$ is a natural parameter space for the whole $\Tt_\cG(0,3)$. The canonical stratification by open cells of this closed octant corresponds to the partition by types. In the general case, let $F\in \Tt_\cG(S)$; then every pant of the topological decomposition $\Dd$ is associated to a suitable hyperbolic pant $P_i=P_i(F)$ belonging to $\Tt_\cG(0,3)$. Pant geodesic boundary components corresponding to the same curve $z_j$ have the same length, so that $F^\Cc$ is obtained by isometrically gluing the hyperbolic pants at the curves $z_j$. Summing up, $F$ is of the form $$F=F(l,t)$$ $$(l,t)=(l_{C_1},\cdots,l_{C_r},l_{z_1},\cdots, l_{z_{3g-3+r}},t_{z_1},\cdots , t_{z_{3g-3+r}})$$ where $l_{C_i}$ ($l_{z_j}$) is the length of the geodesic boundary component (the simple closed geodesic) of $F^\Cc$ corresponding to $C_i$ ($z_j$). The {\it twist} parameter $t_{z_j}\in \mr$ specifies the isometric gluing at $z_j$ as follows. For every hyperbolic pant, an arc $\rho$ is uniquely realized by a geodesic arc orthogonal to the boundary. Then $F(l,0)$ is the unique hyperbolic structure such that the selected end-points $e$ of such geometric $\rho$-arcs match by gluing. A generic $F(l,t)$ is obtained from $F(l,0)$ by modifying the gluing as follows: if $t_{z_j}>0$, the two sides at any geodesic line $\widetilde z_j$ in $\mh^2$ over the closed geodesic $z_j$ of $F(l,0)$ translate by $t_{z_j}$ along $\widetilde z_j$ to the {\it left} of each other.
If $t_{z_j}<0$, they translate to the {\it right} by $|t_{z_j}|$ (``left'' and ``right'' are well defined and only depend on the orientation of $S$). We eventually realize in this way that $$\overline \mr_+^r\times \mr_+^{3g-3+r}\times \mr^{3g-3+r}$$ is a parameter space (depending on the choice of $\Dd$) for the whole $\Tt_\cG(S)$. The product by $\mr_+^{3g-3+r}\times \mr^{3g-3+r}$ of the natural stratification by open cells of $\overline \mr_+^r$ corresponds to the partition by types. Every cell has dimension $$ 6g-6+2r+r_\Hh $$ according to the type. The top-dimensional cell ($r_\Hh = r$) corresponds to the hyperbolic surfaces $F$ without cusps. $\Tt_{g,r}$ is the lowest dimensional one. Cells that share the same $r_\Hh$ are isomorphic, as are the corresponding $\Tt^\theta _\cG(S)$. By varying $\Dd$ we actually get an atlas for $\Tt_\cG(S)$ that gives it a {\it real analytic manifold with corners} structure. \medskip {\bf Marked length spectrum.} Length and twist parameters are of somewhat different nature; in fact we can deal with {\it length parameters only}. For every $j$, consider: the ``double pant'' obtained by gluing the two pants of $\Dd$ at $z_j$; the simple closed curve $z_j'$ obtained by gluing the respective two $\rho$ arcs, and $z''_j$ the curve obtained from $z'_j$ via a Dehn twist along $z_j$. Thus we have a further $6g-6 +2r$ simple closed curves on $S$, and for every $F$ we take the length of the corresponding simple closed geodesics. In this way we get an {\it embedding} $$ \Tt_\cG(S) \subset \overline \mr_+^r\times \mr_+^{9g-9+3r} \ .$$ This is the projection onto this finite set of factors of the {\it marked length spectrum} injection $$ {\rm L}: \Tt_\cG(S) \to \overline \mr_+^r \times \mr_+^{\SG'}$$ where $\SG'$ denotes the set of isotopy classes of essential simple closed curves in $S$, not isotopic to any boundary component. For more details about the length/twist parameters and the length spectrum see for instance \cite{F-L-P, Be-Pe}.
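As a consistency check, in the simplest case $(g,r)=(0,3)$ the cell dimension formula reduces to $$6g-6+2r+r_\Hh \;=\; -6+6+r_\Hh \;=\; r_\Hh \ ,$$ in accordance with the canonical stratification of the closed octant $\overline \mr_+^3$: the stratum where exactly $r_\Hh$ of the coordinates $(l_1,l_2,l_3)$ are strictly positive is an open cell of dimension $r_\Hh$.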
\medskip {\bf Shear parameters.} This is based on a fixed {\it topological ideal triangulation} $T$ of $(\hat S,V)$, and works only if $V\neq \emptyset$. By definition $T$ is a (possibly singular: multiple and self-adjacencies of triangles are allowed) triangulation of $\hat S$ such that $V$ coincides with the set of vertices of $T$. There are $6g-6+3r$ edges $E_1,\cdots, E_{6g-6+3r}$. The idea is to consider every triangle of $T$ as a hyperbolic ideal triangle and realize hyperbolic structures $F$ on $S$ by isometrically gluing them at the geodesic edges, according to the pattern of edge-identifications given by $T$. In this way, $T$ is converted into a {\it geometric} ideal triangulation $T_F$ of $F$. Let us decorate every edge $E$ of $T$ by a real number $s(E)$ and get $$s=(s(E_1),\cdots,s(E_{6g-6+3r}))\in \mr^{6g-6+3r} \ .$$ These {\it shear} parameters encode the isometric gluing at each $E_j$, and are of the same nature as the above twist parameters. Every edge of an ideal triangle has a distinguished point, say $e$, that is the intersection of the edge with the unique geodesic line emanating from the opposite ideal vertex and orthogonal to the edge. Then set $F=F(0)$ to be the unique hyperbolic structure such that the distinguished points match by gluing. A generic $F=F(s)$ is obtained from $F(0)$ by modifying the gluing according to the left/right moving rule as before. It turns out that all so obtained hyperbolic structures $F$ belong to $\Tt_\cG(S)$, and all elements of $\Tt_\cG(S)$ arise in this way. For every $s$ and every $p_i\in V$, set $$s(p_i)=\sum_{E_j\in {\rm Star}(p_i)}s(E_j) \ .$$ We realize that $$l_{C_i}(F(s))=|s(p_i)|$$ so that, in particular, $p_i\in V_\Pp$ iff $s(p_i)=0$ and this determines the type $\theta = \theta(F(s))$. This also shows that the map $$\Ss:\mr^{6g-6+3r} \to \Tt_\cG(S), \ \ \ \Ss(s)=F(s)$$ is {\it not} injective.
For every $p_i\in V_\Hh$, define the sign $\epsilon_s(p_i)$ by $$ |s(p_i)|= \epsilon_s(p_i)s(p_i) \ .$$ Then, the generic fiber $\Ss^{-1}(F)$ consists of $2^{r_\Hh}$ points, that is, $\Ss$ realizes all the possible {\it signatures} $V_\Hh \to \{\pm 1\}$. For the geometric meaning of these signs, see below. For more details about shear parameters see for instance \cite{Bon}(4). \medskip {\bf The enhanced $\Tt_\cG(S)^\#$}. Let us reflect a length/twist parameter space $$\overline \mr_+^r\times \mr_+^{3g-3+r}\times \mr^{3g-3+r}$$ of $\Tt_\cG(S)$ along its boundary components to get $$ \mr^r\times \mr_+^{3g-3+r}\times \mr^{3g-3+r} \ .$$ This can be considered as a parameter space of the {\it enhanced Teichm\"uller space} $\Tt_\cG(S)^\#$, obtained by decorating each $F$ with a signature $$\epsilon: V_\Hh \to \{\pm 1\} \ .$$ Moreover, we stipulate that the sign $\epsilon_i$ associated to $i$ has the meaning of selecting an orientation of the corresponding $C_i$, by the rule: {\it $\epsilon_i= +1$ ~iff $C_i$ is equipped with the boundary orientation.} \smallskip To make the notation simpler, it is convenient to extend the signature $\eps$ to the whole of $V$ by stating that $\eps_i = 1$ on $V_\Pp$. In this way an enhanced surface can be written as $(F,\eps_1,\ldots,\eps_r)$ with $\eps_i\in\{\pm 1\}$ and $\eps_i=1$ for $i$ corresponding to a cusp of $F$. In the same way one can show that the shear parameters are global coordinates on $\Tt_\cG(S)^\#$, namely the map \[ \Ss^\#:\mr^{6g-6+3r}\rightarrow\Tt_\cG^\#(S) \] defined by $\Ss^\#(s)=(F(s), \sign(s(p_1)),\ldots, \sign(s(p_r)))$, is a homeomorphism (see~\cite{F-G} for details).
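The count of $6g-6+3r$ shear parameters can also be recovered by an Euler characteristic computation: if $T$ has $E$ edges and $N$ triangles, then $3N=2E$ and $$\chi(\hat S)\;=\;2-2g\;=\; r-E+N \;=\; r-\frac{E}{3}\ ,$$ whence $E=6g-6+3r$. Note that this agrees with the dimension $r+2(3g-3+r)=6g-6+3r$ of the parameter space $\mr^r\times \mr_+^{3g-3+r}\times \mr^{3g-3+r}$ of $\Tt_\cG(S)^\#$, as it must, $\Ss^\#$ being a homeomorphism.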
\smallskip There is a natural {\it forgetting projection} $$ \phi^\#: \Tt_\cG(S)^\# \to \Tt_\cG(S) \ .$$ We can also define in a coherent way the {\it enhanced length spectrum} $${\rm L}^\#: \Tt_\cG(S)^\# \to \mr^r\times \mr_+^{\SG'}$$ by setting $$l^\#_{C_i}(F,\epsilon)=\eps_i l_{C_i}(F)$$ on the peripheral loops, and $l^\#_{\gamma}(F,\epsilon)= l_{\gamma}(F)$ elsewhere. This is an injection of $\Tt_\cG(S)^\#$, and already the projection onto the usual finite set of factors as above is an embedding. \begin{remark}\emph{ For each $C_i$, the enhanced length is a continuous function on $\Tt_\cG^\#(S)$. On the other hand notice that $\eps_i$ coincides with the $\sign$ of $l^\#_{C_i}$, with the rule that the sign of $0$ is $1$. }\end{remark} \subsection{The space of measured geodesic laminations} \label{lamination} \begin{defi}\label{lam}{\rm A {\it simple} (complete) geodesic in $F\in \widetilde \Tt(S)$ is a geodesic which admits an arc length parametrization defined on the whole real line $\R$ that either is injective (and we call its image a {\it geodesic line} of $F$), or whose image is a simple closed geodesic. A {\it geodesic lamination} $\Ll$ on $F$ consists of: \smallskip (1) A {\it closed} subset $L$ of $F$ (the {\it support}); \smallskip (2) A partition of $L$ by simple geodesics (the \emph{leaves}). \smallskip The leaves together with the connected components of $F\setminus L$ make a {\it stratification} of $F$.} \end{defi} \begin{defi}\label{measure}{\rm Given a geodesic lamination $\Ll$ on $F\in \widetilde \Tt(S)$, a rectifiable arc $k$ in $F$ is {\it transverse} to the lamination if for every point $p\in k$ there exists a neighbourhood $k'$ of $p$ in $k$ that intersects each leaf in at most one point and each $2$-stratum in a connected set.
A {\it transverse measure $\mu$} on $\Ll$ is the assignment of a positive measure $\mu_k$ on each rectifiable arc $k$ transverse to $\Ll$ (this means that $\mu_k$ assigns a non-negative {\it mass} $\mu_k(A)$ to every Borel subset $A$ of the arc, in a countably additive way) in such a way that: (1) The support of $\mu_k$ is $k\cap L$; \smallskip (2) If $k' \subset k$, then $\mu_{k'} = \mu_k|_{k'}$; \smallskip (3) If $k$ and $k'$ are homotopic through a family of arcs transverse to $\Ll$, then the homotopy sends the measure $\mu_k$ to $\mu_{k'}$. \smallskip } Notice that we allow an arc $k$ hitting the boundary of $F^\Cc$ to have infinite mass, that is $\mu_k(k)=+\infty$. \end{defi} \begin{defi}\label{MLS}{\rm A {\it measured geodesic lamination on $F$} is a pair $\lambda=(\Ll,\mu)$, where $\Ll$ is a geodesic lamination and $\mu$ is a transverse measure on $\Ll$. For every $F\in \widetilde \Tt(S)$, denote by $\Mm\Ll(F)$ the set of measured geodesic laminations on $F$. Finally, let us define $\Mm\Ll(S)$ to be the set of couples $(F,\lambda)$ such that $F\in \widetilde \Tt(S)$ and $\lambda \in \Mm\Ll(F)$. We have the natural projection $$\pG : \Mm\Ll(S)\to \widetilde \Tt(S) \ .$$ } \end{defi} \begin{defi}\label{W-S-parts}{\rm Given $(F,\lambda)\in \Mm\Ll(S)$, the {\it simplicial part} $\Ll_S$ of $\Ll$ consists of the union of the isolated leaves of $\Ll$. Hence $\Ll_S$ does not depend on the measure $\mu$. A leaf $l$ is called \emph{weighted} if there exists a transverse arc $k$ such that $k\cap l$ is an atom of $\mu_k$. The {\it weighted part} of $\lambda$ is the union of all weighted leaves. It depends on the measure and it is denoted by $\Ll_W=\Ll_W(\mu)$. } \end{defi} \begin{remark}\label{more rem}{\rm The word ``simplicial'' mostly refers to the ``dual'' geometry of the initial singularity of the spacetimes that we will associate to every $(F,\lambda)$, see Section~\ref{WR}.
\smallskip By property (3) of the definition of transverse measure, if $l$ is weighted then for every transverse arc $k$ the intersection of $k$ with $l$ consists of atoms of $\mu_k$ whose masses are equal to a positive number $A$ independent of $k$. We call this number the weight of $l$. Since every compact set $K\subset F$ intersects, for every $n$, only finitely many weighted leaves of weight bigger than $1/n$, it follows that $\Ll_W$ is the union of countably many leaves. Since $L$ is the support of $\mu$, we have the inclusion $\Ll_S\subset\Ll_W(\mu)$. } \end{remark} \begin{remark}\label{equi-def}{\rm There is a slightly different but equivalent definition of $\Mm\Ll(S)$ that runs as follows. We can consider measured geodesic laminations $\lambda = (\Ll,\mu)$ of $F^\Cc$ requiring furthermore that: \smallskip (1) The boundary components of $F^\Cc$ are leaves of $\Ll$; \smallskip (2) Every arc $k$ hitting the boundary of $F^\Cc$ {\it necessarily} has infinite mass ($\mu_k(k)=+\infty$). \smallskip If a boundary component of $F^\Cc$ is isolated in $\Ll$ we stipulate that it has {\it weight $+\infty$}. Notice that while a geodesic lamination on $F^\Cc$ can be regarded also as a particular lamination on the associated complete surface $\hat F$, condition (2) ensures that such a {\it measured} lamination cannot be extended beyond $F^\Cc$. On the other hand, a lamination on $F$ is not in general a lamination on $\hat F$. Given any $\lambda$ of $F$ we get a corresponding $\hat \lambda$ of $F^\Cc$ by adding the (possibly $+\infty$-weighted) boundary components to the lamination and keeping the same measure. Given $\hat \lambda$ in $F^\Cc$ we get $\lambda$ in $F$ by just forgetting the boundary leaves. In particular the empty lamination on $F$ corresponds to the lamination on $F^\Cc$ reduced to its boundary components. Clearly this establishes a canonical bijection, hence an equivalent definition of $\Mm\Ll(S)$.
This second definition would sound at present somewhat unmotivated, so in this section we prefer to deal with $F$ instead of $F^\Cc$. However, we will see in Section \ref{WR} that it is the suitable one when dealing with the Lorentzian ``materializations'' of $\Mm\Ll(S)$.} \end{remark} {\bf Marked measure spectrum.}\label{mark-spect} Similarly to the above length spectrum ${\rm L}$, for every $F\in \widetilde \Tt(S)$ the {\it marked measure spectrum} is defined: $${\rm I}: \Mm\Ll(F)\to \overline \mr_+^r\times \overline \mr_+^{\SG'}$$ where for every $\lambda \in \Mm\Ll(F)$ and for every isotopy class $\gamma$ of essential simple closed curves on $S$, ${\rm I}_\gamma(\lambda)$ is the minimum of the {\it total variation} $\mu(c)$ of the ``$\lambda$-transverse component'' of $c$, $c$ varying among the representatives of $\gamma$. The first $r$ factors correspond as usual to the curves parallel to the boundary components. \medskip {\bf Ray structure.} Every $\lambda = (\Ll,\mu)\in \Mm\Ll(F)$ determines the ray $$ R_\lambda= \{t\lambda = (\Ll, t\mu); \ t\in [0,+\infty)\}\subset \Mm\Ll(F)$$ where we stipulate that for $t=0$ we take the empty lamination of $F$. If ${\rm I}(\lambda) \neq 0$, then ${\rm I}(R_\lambda)= R_{{\rm I}(\lambda)}$, that is the corresponding ray in $\overline \mr_+^\SG$. \subsection {The sub-space $\Mm\Ll_\cG(S)$}\label{distinguish} Set $$\Mm\Ll_\cG(S)=\{(F,\lambda)\in \Mm\Ll(S);\ F\in \Tt_\cG(S)\}$$ and let $$ \pG_\cG:\Mm\Ll_\cG(S)\to \Tt_\cG(S)$$ be the natural restriction of $\pG$, with fibers $\Mm\Ll_\cG(F)$. For any $F\in\Tt_\cG(S)$ denote by $\Mm\Ll_\cG(F)^0$ the set of laminations on $F$ that do not enter any cusp (namely, the closure in $F^\Cc$ of the lamination support is compact). For a fixed type $\theta$ we set $$ \Mm\Ll_\cG(S)^\theta =\{(F,\lambda)| F\in\Tt^\theta_\cG(S)\,,\ \lambda\in\Mm\Ll_\cG(F)^0\} $$ and we still denote by $\pG_\cG$ the restriction of the projection to every $\Mm\Ll_\cG(S)^\theta$.
The spectrum $\rm I$ and the ray structure restrict naturally. In particular, if $\lambda \in \Mm\Ll_\cG(F)^0$ and $s$ surrounds a cusp of $F$, then ${\rm I}_s(\lambda)=0$. On the other hand, if $s$ is parallel to a boundary component of $F^\Cc$, then ${\rm I}_s(\lambda)=0$ iff the closure in $F^\Cc$ of the lamination support $L$ does not intersect that boundary component. The following Proposition summarizes some basic properties of the fibers of $\pG_\cG$. \begin{prop}\label{property} Let $\lambda \in \Mm\Ll_\cG(F)$. Then: \smallskip (1) $F\setminus L$ has a finite number of connected components, and each component belongs to some $\widetilde \Tt(S')$, provided that we drop the requirement that $S'$ is non-elementary. \smallskip (2) $\lambda$ is the disjoint union of a finite set of {\it minimal} {\rm [with respect to the inclusion]} measured sublaminations {\rm [recall that a lamination $\Ll$ is minimal iff every half-leaf is dense in $\Ll$]}. Every minimal sublamination either is compact or consists of a geodesic line such that each sub half-line either enters a cusp or spirals towards a boundary component of $F^\Cc$. (3) $\Ll_W=\Ll_S$. \smallskip (4) Every cusp and every boundary component has a neighbourhood $U$ such that $\Ll \cap U = \Ll_S \cap U$. \smallskip (5) For every arc $c$ in $F$ transverse to $\lambda$, $c\cap L$ is the union of a set of isolated points and of finitely many Cantor sets. \end{prop} For a proof when $F\in \Tt_{g,r}$ we refer for instance to the body and the references of \cite{Bon}(1). The details for the extension to the whole of $\Mm\Ll_\cG(S)$ are given for instance in \cite{BSK}.
\begin{remark}\label{equi-def2} {\rm If the lamination $\hat \lambda$ of $F^\Cc$ corresponds to $\lambda$ of $F$ as in Remark \ref{equi-def}, then a leaf spiraling towards a boundary component of $F^\Cc$ as in (2) is no longer a minimal sublamination of $\hat \lambda$.} \end{remark} \begin{exa}\label{exa-via-twist-shear} {\rm We refer to the above length/twist or shear parameters for $\Tt_\cG(S)$. \smallskip (a) Let $F=F(l,t)$. The union of the simple closed geodesics of $F$ corresponding to the curves $z_j$ is a geodesic lamination $\Ll = \Ll_S$ of $F$. By giving each $z_j$ an arbitrary real weight $w_j>0$, we get $\lambda(w) \in \Mm\Ll_\cG(F(l,t))^0$. \smallskip (b) Let $F=F(s)$. The 1-skeleton of the geometric ideal triangulation $T_F$ (which is made of geodesic lines) makes a geodesic lamination of $F$. Every geodesic line is a minimal sublamination. By giving each geodesic line an arbitrary weight $w_j>0$, we get $\lambda(w) \in \Mm\Ll_\cG(F(s))$. For such a $\lambda = \lambda(w)$ $$ {\rm I}_{C_i}(\lambda) = \sum_{E_j\in {\rm Star}(p_i)}w(E_j) \ .$$ } \end{exa} {\bf Lamination signatures.} Let $\lambda \in \Mm\Ll_\cG(F)$. Leaves of $\lambda$ can spiral around a boundary component $C_i$ in two different ways. On the other hand, two leaves that spiral around $C_i$ must spiral in the same way (otherwise they would meet each other). This determines a {\it signature} $$\sigma(\lambda) : V_\Hh \to \{\pm 1 \}$$ such that $\sigma_i(\lambda)=-1$ if and only if there are leaves of $\lambda$ spiraling around the corresponding geodesic boundary $C_i$ with a negative sense with respect to the boundary orientation. In other words, $\sigma_i(\lambda)$ is possibly equal to $-1$ only if $p_i \in V_\Hh$ and ${\rm I}_{C_i}(\lambda)\neq 0$; otherwise $\sigma_i(\lambda)=1$. The signature indeed depends only on the lamination $\Ll$, not on the measure.
\begin{remark} {\rm If $\lambda = \lambda(w)$ as in Example \ref{exa-via-twist-shear}(b), then $\sigma_\lambda$ recovers the signs $\epsilon_s(p_i)$ already defined at the end of Section \ref{convex-core}.} \end{remark} \subsection{ Enhanced bundle $\Mm\Ll_\cG(S)^\#$ and measure spectrum} Here we address the question of to what extent the (restricted) marked measure spectrum determines $\Mm\Ll_\cG(S)$. For example, this is known to be the case if we restrict to $\Mm\Ll_{g,r}^0$, {\it i.e.} to laminations over $\Tt_{g,r}$ that do not enter the cusps (see for instance \cite{Bon}(1)). We want to extend this known result. \smallskip We have seen in Proposition \ref{property} that a measured geodesic lamination $\lambda$ on $F\in\Tt_\cG(S)$ is the disjoint union of a compact part, say $\lambda_c$ (that is far away from the geodesic boundary of $F^\Cc$ and does not enter any cusps), with a part, say $\lambda_b$, made of a finite set of weighted geodesic lines $l_1,\ldots,l_n$ whose ends leave every compact subset of $F$. Notice that $\sigma(\lambda)=\sigma(\lambda_b)$. Let us take such a geodesic line $l$ on $F\in\Tt_\cG(S)$. We can select a compact closed interval $J$ in $l$ such that both components of $l \setminus J$ eventually stay either within a small $\eps$-neighbourhood of some boundary component of $F^\Cc$, or within some cusp. $J$ can be completed to a simple arc $c$ in $\hat S$ with end-points in $V$, just by going straight from each end-point of $J$ to the corresponding puncture. It is easy to see that the homotopy class with fixed end-points of the so obtained arc $c$ does not depend on the choice of $J$. For simplicity we refer to it as the ``homotopy class'' of $l$. We can also give the end-points of $c$ a sign $\pm 1$ in the very same way we have defined the signature of a lamination on $F$ (recall that the sign is always equal to $1$ at cusps).
We can prove \begin{lem} Given any $F\in \Tt_\cG(S)$, every homotopy class $\alpha$ of simple arcs on $\hat S$ with end-points on $V$, and every signature of the end-points (compatible with the type of $F$), can be realized by a unique geodesic line $l$ of $F$ whose ends leave every compact set of $F$. Moreover, the members of a finite family of such geodesic lines are pairwise disjoint iff the signs agree on every common end-point and there are disjoint representatives with end-points on $V$ of the respective homotopy classes. Analogously, they do not intersect a compact lamination $\lambda_c$ iff suitable representatives do not. \end{lem} By using the lemma, we can prove (see~\cite{BSK}) \begin{prop}\label{homotopy-arc} Let $\lambda \in \Mm\Ll_\cG(F)$. Then the support of $\lambda_b$ is determined by the homotopy classes of its geodesic lines $l_i$ and the signature of $\lambda$. More precisely, given any $\lambda_c$, every finite set of homotopy classes of simple weighted arcs on $\hat S$, with signed end-points in $V$ (provided the signature is compatible with the type of $F$), admitting representatives that are pairwise disjoint and do not intersect $\lambda_c$, is uniquely realized by a lamination $\lambda_b$ such that $\lambda = \lambda_b \cup \lambda_c \in \Mm\Ll_\cG(F)$. \end{prop} \begin{prop}\label{map-iota} Let $F, F' \in \Tt_\cG(S)$. Assume that $F$ is without cusps (that is, $F$ belongs to the top dimensional cell of $ \Tt_\cG(S)$). Then there is a natural map \[ \iota:\Mm\Ll_\cG(F)\rightarrow\Mm\Ll_\cG(F') \] such that for every (isotopy class of) simple closed curve $\gamma$ on $S$, we have \[ {\rm I}_\gamma(\lambda)={\rm I}_{\gamma}(\iota(\lambda))\,. \] \end{prop} \Dim Assume first that $\lambda = \lambda_c \in \Mm\Ll_\cG(F)$. Then there is a unique $\lambda' = \lambda'_c \in \Mm\Ll_\cG(F')$ with the same spectrum. Indeed, we can embed $F'$ in the double surface $DF'$ of $(F')^\Cc$, which is complete and of finite area.
The measure spectrum of $\lambda_c$ induces a measure spectrum of a unique lamination $\lambda''_c$ on $DF'$ (by applying the result on the spectrum in the special case recalled at the beginning of this Section). Finally we realize that the compact support of $\lambda''_c$ is contained in $F'$, giving us the required $\lambda'_c$. So the map $\iota$ can be defined for laminations with compact support. Given a general lamination $\lambda=\lambda_c\cup\lambda_b$, we can define $\lambda'_c$ as before, while $\lambda'_b$ is the unique lamination of $F'$ (accordingly with Proposition \ref{homotopy-arc}) that shares with $\lambda_b$ the same homotopy classes, weights and signs at $V_\Hh(F)\cap V_\Hh(F')$. Notice that $\lambda'_b$ is disjoint from $\lambda'_c$: in fact one can construct an isotopy of $S$ sending the supports of $\lambda_b$ and $\lambda_c$ to the supports of $\lambda'_b$ and $\lambda'_c$. Finally set $\iota(\lambda)=\lambda'_b\cup\lambda'_c$. \cvd \begin{cor}\label{orbit} If both $F$ and $F'$ are without cusps, then the map $\iota$ is bijective. More generally, for every $\lambda'\in\Mm\Ll_\cG(F')$, $\iota^{-1}(\lambda')$ consists of $2^k$ points, where $k$ is the number of cusps of $F'$ entered by $\lambda'$. \end{cor} In fact, for every $F\in\Tt_\cG(S)$ (not necessarily in the top dimensional cell), there is a natural action of $(\mz/2\mz)^r$ on $\Mm\Ll_\cG(F)$ determined as follows. Let $\rho_i = (0,\dots, 1, \dots,0)$, $i=1,\dots, r$, be the $i$th element of the standard basis of $(\mz/2\mz)^r$. Let $\lambda \in \Mm\Ll_\cG(F)$. First define the new signature $\rho_i \sigma(\lambda)$ by setting: \smallskip $\rho_i\sigma(\lambda)(p_j)= \sigma(\lambda)(p_j)$ if $i\neq j$; \smallskip $\rho_i\sigma(\lambda)(p_i)= \sigma(\lambda)(p_i)$ if either $p_i\in V_\Pp (F)$ or $p_i \in V_\Hh(F)$ and ${\rm I}_{C_i}(\lambda)=0$; \smallskip $\rho_i\sigma(\lambda)(p_i)= -\sigma(\lambda)(p_i)$, otherwise.
This naturally extends to every $\rho \in (\mz/2\mz)^r$, giving the signature $\rho\sigma(\lambda)$. Finally set $\rho(\lambda) = \rho(\lambda_b) \cup \lambda_c$ where (accordingly again with Proposition \ref{homotopy-arc}) $\rho(\lambda_b)$ is the unique lamination that shares with $\lambda_b$ the homotopy classes and the weights, while its signature is $\rho\sigma(\lambda)$. Clearly the orbit of $\lambda$ consists of $2^k$ points, where $k$ is the number of $p_i$ in $V_\Hh(F)$ such that ${\rm I}_{C_i}(\lambda)\neq 0$. Finally $\iota^{-1}(\lambda')$ in Corollary \ref{orbit} is just an orbit of such an action. We call the action on $\Mm\Ll_\cG(F)$ of the generator $\rho_i$ the {\it reflection along $C_i$} (even if this could be somewhat misleading, as in some cases it is just the identity). \smallskip If we restrict over the top-dimensional cell of $\Tt_\cG(S)$, $\pG_\cG$ is a bundle and we can use the first statement of the Corollary in order to fix a trivialization. The same fact holds for every restriction $\pG_\cG : \Mm\Ll_\cG(S)^\theta \to \Tt_\cG(S)^\theta$, type by type. On the other hand, because of the last statement of the Corollary, this is no longer true for the whole $\pG_\cG$. In order to overcome such a phenomenon, one can introduce the notion of {\it enhanced lamination}. An enhanced lamination on $F\in\Tt_\cG(S)$ is a couple $(\lambda, \eta)$ where $\lambda \in \Mm\Ll_\cG(F)$, and $\eta: V \to \{\pm 1\}$ is a {\it relaxed signature} such that: \smallskip $\eta_i=\sigma_i(\lambda)$ if either $p_i\in V_\Hh(F)$ or $p_i\in V_\Pp(F)$ and ${\rm I}_{C_i}(\lambda) = 0$; \smallskip $\eta_i$ is arbitrary otherwise. \smallskip Notice that there are exactly $2^k$ pairs $(\lambda,\eta)$ enhancing a given $\lambda \in \Mm\Ll_\cG(F)$, where $k$ is the number of cusps entered by $\lambda$.
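To illustrate the two counts on a concrete (hypothetical) configuration, let $r=3$, $p_1\in V_\Hh(F)$ with ${\rm I}_{C_1}(\lambda)\neq 0$, $p_2\in V_\Hh(F)$ with ${\rm I}_{C_2}(\lambda)= 0$, and $p_3\in V_\Pp(F)$ with ${\rm I}_{C_3}(\lambda)\neq 0$ (a cusp entered by $\lambda$). Then the orbit of $\lambda$ under $(\mz/2\mz)^3$ consists of $2^1=2$ points, as only $\rho_1$ acts non-trivially on $\sigma(\lambda)$, while $\lambda$ admits $2^1=2$ enhancements: $\eta_1=\sigma_1(\lambda)$ and $\eta_2=\sigma_2(\lambda)$ are forced, whereas $\eta_3\in\{\pm 1\}$ is arbitrary.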
Clearly the above action of $(\mz/2\mz)^r$ extends to enhanced laminations: $\rho(\lambda,\eta)= (\rho(\lambda), \rho(\eta))$, where $\rho(\eta)$ is uniquely determined by the above requirements and by the fact that $\rho\sigma(\lambda)$ possibly modifies $\sigma(\lambda)$ only on $V_\Hh$. In particular this holds for the generating reflections $\rho_i$. We denote by $\Mm\Ll^\#_\cG(F)$ the set of such $(\lambda,\eta)$ on $F$. Finally we can define the {\it enhanced measure spectrum} $$ {\rm I}^\#: \Mm\Ll^\#_\cG(F) \to \mr^r \times\overline \mr_+^{\SG'}$$ such that: \smallskip $${\rm I}^\#_{\gamma}(\lambda,\eta)={\rm I}_\gamma(\lambda)$$ for every $\gamma\in\SG'$; \smallskip $$ {\rm I}^\#_{C_i}(\lambda,\eta)=\eta_i {\rm I}_{C_i}(\lambda)$$ for every peripheral loop $C_i$. Here is the enhanced version of Proposition \ref{map-iota}. \begin{cor}\label{map-iota-en} Let $F, F' \in \Tt_\cG(S)$. Then there is a natural {\rm bijection} \[ \iota^\#:\Mm\Ll_\cG(F)^\# \rightarrow\Mm\Ll_\cG(F')^\# \] such that for every (isotopy class of) simple closed curve $\gamma$ on $S$, we have \[ {\rm I}^\#_\gamma((\lambda,\eta))={\rm I}^\#_{\gamma}(\iota^\#(\lambda,\eta))\,. \] \end{cor} \begin{prop}\label{m-s} (i) The enhanced spectrum ${\rm I}^\#$ realizes an embedding of every $\Mm\Ll_\cG(F)^\#$ into $\mr^r\times \overline \mr_+^{\SG'}$. Only the empty lamination goes to $0$. The image is homeomorphic to $\mr^{6g-6+3r}$. The image of $\Mm\Ll_\cG(F)^{\#,0}$ (that is, the set of enhanced laminations that do not enter any cusp) is homeomorphic to $\mr^{6g-6+2r+r_{\Hh}}$. \smallskip (ii) For every pant decomposition $\Dd$ of $\overline \Sigma$, consider the subset {\rm [already considered to deal with the length spectrum]} $$\SG_\Dd = \{C_1,\cdots,C_r,z_1,z_1',z_1'',\cdots,z_{3g-3+r},z_{3g-3+r}',z_{3g-3+r}''\} \subset \SG \ .$$ The projection onto this {\rm finite} set of factors is already an embedding of $\Mm\Ll_\cG(F)^\#$.
By varying $\Dd$ we get an atlas of a {\it PL structure} on $\Mm\Ll_\cG(F)^\#$ ({\it i.e.} on $\mr^{6g-6+3r}$). Similar facts hold for the restriction to $\Mm\Ll_\cG(F)^{\#,0}$. \smallskip (iii) Finite laminations are dense in $\Mm\Ll_\cG(F)^\#$ (respectively, in $\Mm\Ll_\cG(F)^{\#,0}$). \smallskip (iv) For every $F,F'\in \Tt_\cG(S)$, there is a canonical identification between the respective sets of finite enhanced measured geodesic laminations, and this extends to a canonical PL isomorphism between $\Mm\Ll_\cG(F)^\#$ and $\Mm\Ll_\cG(F')^\#$, which respects the ray structures. Similarly for $\Mm\Ll_\cG(\cdot)^{\#,0}$. \end{prop} \Dim We will sketch the proof of this proposition. We assume that the result is known when $S$ is compact (see \cite{Bon}(1), \cite{F-L-P}). Thanks to Proposition \ref{map-iota-en} it is enough to deal with $F$ without cusps. Then the double $DF$ of $F^\Cc$ is compact, and we consider on $DF$ the involution $\tau$ that exchanges the two copies of $F$. Let us denote by $ML(F)$ the set of $\tau$-invariant measured geodesic laminations on $DF$ that do not contain any component of $\partial F^\Cc$. The idea is to construct a map \[ T: \Mm\Ll_\cG(F)\rightarrow ML(F) \] that is surjective and such that (1) the fiber over a lamination $\lambda' \in ML(F)$ consists of $2^k$ laminations of $\Mm\Ll_\cG(F)$, where $k$ is the number of boundary components of $F^\Cc$ that intersect the support of $\lambda'$; (2) for every $\lambda \in \Mm\Ll_\cG(F)$, the restrictions to $\SG$ of the spectra of $T(\lambda)$ and of $\lambda$ coincide. \smallskip The existence of the map $T$ and the known results in the special cases recalled above will imply the Proposition. The construction of the map $T$ runs as follows. Let $\lambda = \lambda_b \cup \lambda_c \in \Mm\Ll_\cG(F)$ be decomposed as above. We define $T(\lambda_c)$ to be the double of $\lambda_c$ in $DF$.
For each leaf $l_i$ of $\lambda_b$, take a ``big'' segment $J_i\subset l_i$, and complete it to a simple arc $l'_i$ properly embedded in $(F^\Cc,\partial F^\Cc)$, obtained by going straight from each end-point of $J_i$ to the corresponding boundary component along an orthogonal segment. Clearly the double of $l'_i$ is a simple non-trivial curve in $DF$, so there is a geodesic representative, say $c_i$, that is $\tau$-invariant and simple. Since $l_i\cap l_j=\varnothing$ the same holds for the $c_i$'s. Moreover, since $l_i\cap\lambda_c=\varnothing$, the intersection of $c_i$ with $T(\lambda_c)$ is also empty. So we can define \[ T(\lambda)= T(\lambda_c)\cup (c_1,a_1)\cup (c_2,a_2)\cup\ldots\cup (c_n,a_n)\,. \] where $a_i$ is the original weight of $l_i$. This map satisfies (2) by construction; moreover, it follows from Corollary \ref{orbit} that (1) holds for every $\lambda'$ belonging to the image of $T$. The only point to check is that the map is surjective. The key remark is that for every $\lambda' \in ML(F)$, every leaf $l$ hitting $\partial F^\Cc$ is necessarily closed. Indeed, as $\lambda'$ is $\tau$-invariant, $l$ is orthogonal to $\partial F^\Cc$, and if $l$ intersects $\partial F^\Cc$ twice, then it is closed. Suppose then that $l$ is a geodesic line, so that $l$ meets $\partial F^\Cc$ exactly once. On the other hand, we know that the closure of $l$ is a minimal sublamination $\lambda''$, such that every leaf is dense in it. Thus if $l''\neq l$ is another leaf in $\lambda''$, then it intersects $\partial F^\Cc$ in a point $p$. Since $l$ is dense in $\lambda''$, there is a sequence of points in $l\cap\partial F^\Cc$ converging to $p$, and this contradicts the assumption that $l$ intersects $\partial F^\Cc$ exactly once. Thus a lamination in $ML(F)$ is given by the double of a compact lamination $\lambda_c$ in $F$ and of a finite number of weighted simple geodesic arcs in $F$ hitting $\partial F^\Cc$ orthogonally.
These arcs can be completed to give a family of simple arcs on $\hat S$ with end-points on $V$. Fix a signature on the end-points of such arcs. Finally we can apply Proposition \ref{homotopy-arc} to these data and we get a suitable $\lambda = \lambda_b \cup \lambda_c \in \Mm\Ll_\cG(F)$ such that $T(\lambda)=\lambda'$. \cvd Finally we can define the map $$\pG_\cG^\#: \Mm\Ll_\cG(S)^\# \rightarrow \Tt_\cG^\#(S) \ .$$ The total space is defined as the set of pairs $$((F,\eps), (\lambda,\eta))$$ such that \begin{enumerate} \item $(F,\eps)=(F,\eps_1,\ldots,\eps_r)\in\Tt_\cG(S)^\#$; \item $(\lambda,\eta)=(\lambda,\eta_1,\ldots,\eta_r)\in\Mm\Ll_\cG(F)^\#$. \end{enumerate} Clearly $$ \phi^\#\circ \pG^\#_\cG = \pG \circ \phi_{\Mm\Ll}^\# $$ where $\phi_{\Mm\Ll}^\#$ denotes the {\it forgetting projection} of $\Mm\Ll_\cG(S)^\#$ onto $\Mm\Ll_\cG(S)$. We are going to see that in fact $\pG^\#_\cG$ determines a {\it bundle} of enhanced laminations, which admits furthermore natural {\it trivializations} $\tG$. It follows from the previous discussion that the image of ${\rm I}^\#$ does not depend on the choice of $F$, hence ${\rm I}^\#(S)$ is well defined. We want to define a natural bijection $$ \tG: \Tt_\cG^\#(S)\times {\rm I}^\#(S) \rightarrow \Mm\Ll_\cG(S)^\# \ .$$ For every $\xi \in {\rm I}^\#(S)$ and $F\in\Tt_\cG(S)$ there is a unique $(\lambda(\xi), \eta(\xi)) \in \Mm\Ll_\cG(F)^\#$ that realizes $\xi$. So, let us put $$\tG(F,\eps,\xi)=(F,\eps,\rho_\eps(\lambda(\xi),\eta(\xi)))\ .$$ It follows from the previous discussion that $\tG$ is a bijection. We stipulate that it is a homeomorphism, thereby determining a topology on $\Mm\Ll_\cG(S)^\#$. Summing up, the map $$\pG_\cG^\#: \Mm\Ll_\cG(S)^\# \to \Tt_\cG(S)^\# $$ can be considered as a {\it canonically trivialized} fiber bundle having both the base space and the fiber (analytically or PL) isomorphic to $\mr^{6g-6+3r}$.
Different choices of the base surface $F_0$ lead to isomorphic trivializations, via isomorphisms that preserve all the structures. These trivializations respect the ray structures. When $S$ is compact this specializes to the trivialized bundle $ \Tt_g \times \Mm\Ll_g \to \Tt_g $ mentioned in the Introduction. \begin{remark}{\rm The definition of $\tG$ could appear a bit puzzling at first sight. However the geometric meaning is simple. Given a spectrum of positive numbers, the lamination is determined up to choosing the way of spiraling towards the boundary components. If we give a sign to the elements of the spectrum corresponding to the boundary components, this allows one to reconstruct the lamination by the rule: if the sign is positive the lamination spirals in the positive direction, if the sign is negative the lamination spirals in the negative direction, {\it with respect to a fixed orientation of the boundary component}. In the non-enhanced set-up, we have stipulated to use the boundary orientation induced by the orientation of the surface. Since the elements of an enhanced Teichm\"uller space can be regarded as hyperbolic surfaces equipped with an (arbitrary) orientation on each boundary component, it seems natural to reconstruct the lamination from the spectrum ${\rm I}^\#$ by means of such boundary component orientations. This choice is suitable in view of the {\it earthquake flow} that we are going to define on $\Tt_\cG(S)^\#$. } \end{remark} \subsection{Grafting, bending, earthquakes}\label{bend-quake} Let $(F,\lambda)\in \Mm\Ll(S)$. {\it Grafting} $(F,\lambda)$ produces a deformation $Gr_\lambda(F)$ of $F$ in $\Pp(S)$, the Teichm\"uller-like space of {\it complex projective structures} ({\it i.e.} $(S^2,PSL(2,\mc))$-{\it structures}) on $S$. \smallskip $3$-dimensional {\it hyperbolic bending} produces the $H$-{\it hull} of $Gr_\lambda(F)$, that is, in a sense, its ``holographic image'' in $\mh^3$.
\smallskip The {\it left (right) earthquake} produces (in particular) a new element $\beta^L_\lambda(F)$ ($\beta^R_\lambda(F)$) in $\widetilde \Tt(S)$. \smallskip We will see in Section \ref{WR} how these constructions are {\it materialized} within the {\it canonical Wick rotation-rescaling theory} for $MGH$ Einstein spacetimes. For example, the grafting is eventually realized by the {\it level surfaces of the cosmological times}; earthquakes are strictly related to the {\it Anti de Sitter bending} procedure. \smallskip Here we limit ourselves to recalling a few details about earthquakes, purely in terms of hyperbolic geometry. \medskip {\bf Features of arbitrary $(F,\lambda)$.} In such a general case, the leaves of $\lambda$ possibly enter the crowns of $F$. If $F$ is of finite area (see Lemma \ref{fin-area-fend}), basically the conclusions of Proposition \ref{property} still hold. The only new fact is that possibly there is a finite number of isolated geodesic lines of $\lambda$ having at least one end converging to a point of some $\Ee_\infty$. \smallskip The situation is quite different if $F$ is of infinite area. The set of isolated geodesic lines of $\lambda$ that are not entirely contained in one crown $\Ee$ is always finite. On the other hand, (1), (2), (3) and (5) of Proposition \ref{property} definitely fail. For example, the support of a lamination $\lambda$ could contain bands homeomorphic to $[0,1]\times \R$, such that every $\{t\}\times \R$ maps onto a geodesic line of $\lambda$. Both ends of every such line converge to some $\Ee_\infty$. We can also construct transverse measures such that $L_W$ is dense in such bands. This also shows that in general $\Ll_S$ is strictly contained in $\Ll_W$. In general the fibers of ${\rm I}$ are, in a suitable sense, infinite dimensional. For example we have: \begin{lem}\label{I=0} ${\rm I}^{-1}(0) \subset \Mm\Ll(F)$ consists of the laminations whose support is entirely contained in the union of crowns.
\end{lem} On the other hand, the image of ${\rm I}$ is tame; in fact: \begin{prop}\label{I=I} ${\rm I}(\Mm\Ll(F))= {\rm I}(\Mm\Ll_\cG(\Kk(F)))$. \end{prop} {\bf Earthquakes along finite laminations of $\Mm\Ll_\cG(F)$.} As finite laminations are dense, and arbitrary laminations $\lambda \in \Mm\Ll_\cG(F)$ look like finite ones at cusps and boundary components of $F^\Cc$, it is important (and easy) to understand earthquakes in the finite case. \begin{exa}\label{more-exa-via-twist-shear} {\rm Let us consider again the Examples \ref{exa-via-twist-shear}. Let $F(l,t)$ be such that all twist parameters are strictly positive. Then, {\it by definition} $(F(l,t),\lambda(t))$ is obtained from $(F(l,0),\lambda(t))$ via a {\it left earthquake (along the measured geodesic lamination $\lambda(t)$ on $F(l,0)$)}, while $(F(l,-t),\lambda(t))$ is obtained from $(F(l,0),\lambda(t))$ via a {\it right earthquake}. In the reverse direction, $(F(l,0),\lambda(t))$ is obtained from $(F(l,t),\lambda(t))$ via a {\it right earthquake}, and so on. This pattern of earthquakes does preserve the types. \smallskip Similarly, let $F(s)$ be such that all shear parameters are strictly positive. Then, by definition $(F(s),\lambda(s))$ is obtained from $(F(0),\lambda(s))$ via a {\it left earthquake (along the measured geodesic lamination $\lambda(s)$ on $F(0)$)}, while $(F(-s),\lambda(s))$ is obtained from $(F(0),\lambda(s))$ via a {\it right earthquake}. In the reverse direction, $(F(0),\lambda(s))$ is obtained from $(F(s),\lambda(s))$ via a {\it right earthquake}, and so on. This pattern does not preserve the types, since $F(0)\in \Tt_{g,r}$, while $F(s)$ is without cusps.
Moreover, $\lambda(s)$ has the following special property: \smallskip {\it For every boundary component $C_i$ of $F(s)^\Cc$, $$ l_{C_i}(F(s)) = {\rm I}_{C_i}(\lambda(s)) \ .$$} } \end{exa} For every $(F,\lambda) \in \Mm\Ll_\cG(S)$ with $\lambda$ finite, the definition of {\it $(F',\lambda')$ obtained from $(F,\lambda)$ via a left (right) earthquake} extends {\it verbatim} that of the above examples, so that $(F',\lambda')\in \Mm\Ll_\cG(S)$, $\lambda'$ is also a finite lamination, and $(F,\lambda)$ is obtained from $(F',\lambda')$ via the {\it inverse} right (left) earthquake. \smallskip {\bf Quake cocycles and general earthquakes.} It is convenient to describe earthquakes by lifting everything to the universal covering. Let us set as usual $$\overline \Kk(\hat F) \subset F^\Cc \subset \hat F = \mh^2/\Gamma \ .$$ Then $F^\Cc$ lifts to a $\Gamma$-invariant {\it straight convex set} $H$ of $\mh^2$ ({\it i.e.} $H$ is the closed convex hull of an ideal subset of $S^1_\infty$), and $\lambda$ lifts to a $\Gamma$-invariant measured geodesic lamination on $\mathring{H}$, that, for simplicity, we still denote by $\lambda$. If $F\in \Tt_\cG$, then $\overline \Kk(\hat F) = F^\Cc$. \begin{lem}\label{cocycle} Let $(F,\lambda)\in \Mm\Ll_\cG(S)$ be such that $\lambda$ is finite. Then there exists a {\rm left-quake cocycle} \[B^L_\lambda :\mathring H \times \mathring H \rightarrow PSL(2,\mr) \] such that \begin{enumerate} \item $B^L_\lambda(x,y)\circ B^L_\lambda(y,z)=B^L_\lambda(x,z)$ for every $x,y,z\in \mathring H$. \item $B^L_\lambda(x,x)=Id$ for every $x\in\mathring H$. \item $B^L_\lambda$ is constant on the strata of the stratification of $\mathring H$ determined by $\lambda$. \item $B^L_\lambda(\gamma x,\gamma y)= \gamma B^L_\lambda(x,y)\gamma^{-1}$, for every $\gamma \in \Gamma$. \item For every $x_0$ belonging to a $2$-stratum of $\mathring H$, \[ \mathring H\ni x\mapsto B^L_\lambda (x_0,x)x\in\mh^2 \] lifts the left earthquake $\beta^L_\lambda(F)$ to $\mathring H$.
\end{enumerate} This cocycle is essentially unique, and there exists a similar {\rm right-quake cocycle} $B^R_\lambda$. \end{lem} The proof is easy, and the earthquake is equivalently encoded by its cocycle. For a general $(F,\lambda)$ we look for (essentially unique) {\it quake-cocycles} that satisfy all the properties of the previous Lemma, with the exception of the last one, and that furthermore satisfy \medskip {(*) \it If $\lambda_n\rightarrow\lambda$ on an $\eps$-neighbourhood of the segment $[x,y]$ and $x,y \notin L_W$, then $B_{\lambda_n}(x,y)\rightarrow B_{\lambda}(x,y)$.} \medskip Given such cocycles, we can use the map of (5) in the previous Lemma as the {\it general definition of earthquakes}. \smallskip For example, if $(F,\lambda)\in \Mm\Ll_\cG(S)$, the cocycle can be derived by using Lemma \ref{cocycle}, the density of finite laminations and the fact that we require $(*)$. If $(F',\lambda')$ results from the left earthquake starting at $(F,\lambda)$, then the latter belongs to $\Mm\Ll_\cG(S)$ and $(F,\lambda)$ is obtained from it via the inverse right earthquake. \smallskip In fact, in \cite{Ep-M} Epstein-Marden defined these quake-cocycles in general (extending the construction via finite approximations). Strictly speaking they consider only the case of (arbitrary) measured geodesic laminations on $\mh^2$, but the same arguments hold for laminations on arbitrary straight convex sets $H$ -- see also \cite{Be-Bo} for more details. Hence general left (right) earthquakes $$ (F',\lambda') = \beta^L(F,\lambda) \ ,$$ so that $$(F,\lambda) = \beta^R(F',\lambda') \ ,$$ are eventually defined for arbitrary $(F,\lambda)\in \Mm\Ll(S)$. We will also write $F'= \beta^L_\lambda(F)$, $\lambda'= \beta^L_\lambda(\lambda)$. \medskip {\bf Earthquake flows on $\Mm\Ll_\cG(S)$.} Let $\lambda \in \Mm\Ll_\cG(F)$. Consider the ray $(F,t\lambda)$, $t\in [0,+\infty)$.
Then, for every $t>0$, set $$(F_t,\lambda_t) = \Big(\beta^L_{t\lambda}(F), \frac{1}{t}\beta^L_{t\lambda}(t\lambda)\Big) \ . $$ This continuously extends at $t=0$ by $$(F_0,\lambda_0)=(F,\lambda) \ .$$ We have $$((F_t)_s,(\lambda_t)_s)=(F_{t+s},\lambda_{t+s})\ ,$$ hence this defines the so-called {\it left-quake flow} on $\Mm\Ll_\cG(S)$. In particular this allows one to define a sort of ``exponential'' map $$\psi^L: \Mm\Ll_\cG(F) \to \Mm\Ll_\cG(S)$$ by evaluating the flow at $t=1$. We do similarly for the {\it right-quake flow}. Let $p_i\in V$ and $C_i$ be the curve surrounding it; as ${\rm I}_{C_i}(t\lambda) = t {\rm I}_{C_i}(\lambda)$, there is a unique ``critical value'' $t_i$ (see below) such that ${\rm I}_{C_i}(t_i\lambda)= l_{C_i}(F)$, provided that ${\rm I}_{C_i}(\lambda)\neq 0$. For every $t$, we denote by $l(t)$ the marked length spectrum of $F_t$, by $\theta(t)$ its type, by ${\rm I}(t)$ the marked measure spectrum of $\lambda_t$, by $\sigma_t: V \to \{\pm\}$ its signature, and so on. The following Lemma describes the behaviour of these objects along the flow. \begin{lem}\label{ray-quake} The marked measure spectrum is constant in $t$, that is \[ {\rm I}_\gamma(t)={\rm I}_\gamma(0)\ \ \textrm{for every }\gamma\in\SG\,. \] Let $p_i\in V$ and $C_i$ be the curve surrounding it. \smallskip If $p_i\in V_{\Hh}(0)$, then: $$ l_{C_i}(t)= |l_{C_i}(0)-t\sigma_i(\lambda) {\rm I}_{C_i}(0)| $$ and $$ \sigma_i(t)=\sign [l_{C_i}(0)-t\sigma_i(\lambda) {\rm I}_{C_i}(0)]\,\sigma_i(0) \ .$$ \medskip If $p_i\in V_\Pp(0)$, then: $$ l_{C_i}(t)=t\,{\rm I}_{C_i}(0) $$ and $$ \sigma_i(t)=-1 \ .$$ \end{lem} As every $\lambda \in \Mm\Ll_\cG(F)$ looks finite at cusps and boundary components of $F^\Cc$, it is enough (and fairly easy) to check the Lemma in the finite case, by using also Examples \ref{more-exa-via-twist-shear}.
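To make the critical value $t_i$ explicit (assuming $p_i\in V_\Hh(0)$, $\sigma_i(\lambda)=+1$ and ${\rm I}_{C_i}(0)>0$), the formulas of the Lemma give $$ l_{C_i}(t)= |l_{C_i}(0)-t\,{\rm I}_{C_i}(0)| = 0 \ \Longleftrightarrow \ t=t_i:=\frac{l_{C_i}(0)}{{\rm I}_{C_i}(0)} \ , $$ so the boundary length decreases linearly until the critical time $t_i$; for $t>t_i$ it equals $t\,{\rm I}_{C_i}(0)-l_{C_i}(0)$ and grows linearly again, the factor $\sign [l_{C_i}(0)-t\,{\rm I}_{C_i}(0)]$ reversing the signature $\sigma_i(t)$.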
\begin{remark}{\rm If $p_i\in V_\Pp$ and the lamination enters the corresponding cusp, then for $t>0$ the cusp opens on a geodesic boundary component whose length depends linearly on $t$ with slope equal to ${\rm I}_{C_i}(0)$. The way of spiraling of $\lambda_t$ around $p_i$ is always negative (positive for right earthquakes). Let us consider more carefully the case $p_i\in V_\Hh$. Notice that if $\lambda$ does not spiral around $C_i$, then the length of $C_i$ is constant. In the other cases let us distinguish two possibilities, according to the sense of spiraling of $\lambda$. (1) Case $\sigma_i(0)=-1$. Then for every $t>0$, $$\sigma_i(t)=-1 \ , \ \ l_{C_i}(t)= l_{C_i}(0)+t{\rm I}_{C_i}(0) \ . $$ Thus the length of $C_i$ increases linearly with slope $ {\rm I}_{C_i}(0)$, and the lamination continues to spiral in the negative direction. \smallskip (2) Case $\sigma_i(0)=1$. There is a critical time $t_i=l_{C_i}(0)/{\rm I}_{C_i}(0)$. Before $t_i$ the length of $C_i$ decreases linearly and the lamination spirals in the positive direction. At $t_i$, $C_i$ has become a cusp. After $t_i$, $C_i$ is again a boundary component, but the way of spiraling is now negative. }\end{remark} \begin{remark}\label{I=l}{\rm The above Lemma points out, in every $\Mm\Ll_\cG(F)$, the set $$ \Vv_\cG(F) = \{ \lambda; \ {\rm I}_{C_i}(\lambda)< l_{C_i}(F)\ \hbox{for every}\ p_i \in V_\Hh \} \ .$$ Note that this set is {\it not} preserved by the canonical bijections stated in Proposition \ref{m-s}(iv).} \end{remark} \begin{cor}\label{preserve-type} The restriction of the exponential-like map $\psi^L$ to $\Vv_\cG(F)\cap \Mm\Ll_\cG(F)^0$ preserves the type and the signatures. The restriction of this map to the whole of $ \Vv_\cG(F)$ has generic image over the top-dimensional cell of $\Tt_\cG(S)$. \end{cor} {\bf The quake-flow on $\Mm\Ll_\cG(S)^\#$.} We will define an earthquake flow on $\Mm\Ll_\cG(S)^\#$ that satisfies the following properties: (1) $\beta^\#_t\circ\beta^\#_s=\beta^\#_{t+s}$.
\smallskip (2) Every flow line $\{\beta^\#_t(F,\eps,\lambda,\eta)\,|\,t>0\}$ is {\it horizontal} with respect to the trivialization of $\Mm\Ll_\cG^\#(S)$. This means that the enhanced lamination is constant along the flow. \smallskip (3) If we include $\Mm\Ll_\cG(S)$ into $\Mm\Ll_\cG(S)^\#$ by sending $(F,\lambda)$ to $(F,\eps,\lambda,\eta)$ with $\eps_i=1$ for every $i$ and $\eta_i=1$ for every $i\in V_\Pp$, then $\beta=\phi_{\Mm\Ll}^\#\circ\beta^\#$ (where $\phi_{\Mm\Ll}^\#$ is the usual forgetting map). \begin{remark}{\rm Before giving the actual definition, we describe the qualitative idea. Earthquake paths on $\Tt_\cG(S)$ rebound when they reach a cusp. Since $\Tt_\cG(S)^\#$ is obtained by reflecting $\Tt_\cG(S)$ along its faces, it is natural to lift such paths to horizontal paths on $\Tt_\cG(S)^\#$. Instead of rebounding, the enhanced lamination after a cusp is obtained from the initial lamination by a reflection along the corresponding boundary component. These liftings are unique (up to the choice of an initial signature $\eps$) when $F$ does not contain cusps. When $F$ contains a cusp, there are many possible liftings, due to the possible choices of the signature of the cusp after the earthquake. Thus the data $(F,\eps,\lambda)$ are not sufficient to determine the lifting. On the other hand, the datum of a signature of $\lambda$ around the cusp resolves this ambiguity. }\end{remark} Let us come to the actual definition: \[ \beta_t^\#(F,\eps,\lambda,\eta)= (\overline F,\overline \eps,\overline\lambda, \overline\eta) \] where \smallskip (a) similarly to the definition of the map $\tG$, $(\overline F,\overline\lambda)= \beta(F, \rho_\eps(\lambda))$; \smallskip (b) $\overline\eps_i=\eps_i\,\sign (l_{C_i}(F)+t\eta_i {\rm I}_{C_i}(\lambda))$; \smallskip (c) $\overline\eta_i=\eta_i\,\sign (l_{C_i}(F)+t\eta_i {\rm I}_{C_i}(\lambda))$. \smallskip Property (1) follows from the fact that $\beta$ is a flow.
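A direct computation, using only (b) and (c) above, shows that the products $\eps_i\eta_i$ are invariant along the flow; this sign cancellation is the key point behind the horizontality property (2): $$ \overline\eps_i\,\overline\eta_i = \eps_i\eta_i\,\big(\sign (l_{C_i}(F)+t\eta_i {\rm I}_{C_i}(\lambda))\big)^2 = \eps_i\eta_i \ , $$ since the common sign factor squares to $1$ (away from the critical times at which the argument of $\sign$ vanishes, i.e. $C_i$ becomes a cusp).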
Point (2) depends on the fact that the spectrum of $\lambda_t$ is constant and that the products $\eps_i(t)\eta_i(t)$ are constant. Point (3) is straightforward. The only point to check is that $\beta^\#$ is continuous, as a map $\mr_{\geq 0}\times\Mm\Ll_\cG^\#(S)\rightarrow\Mm\Ll_\cG^\#(S)$. By the definition of the topology of $\Mm\Ll_\cG^\#(S)$ it is enough to show that for every $\gamma\in\SG$ the functions \[ (t,(F,\eps,\lambda,\eta))\mapsto l^\#_\gamma(\beta_t^\#( F,\eps,\lambda,\eta))\qquad (t,(F,\eps,\lambda,\eta))\mapsto {\rm I}^\#_\gamma(\beta_t^\#(F,\eps,\lambda,\eta)) \] are continuous. If $\gamma$ is not peripheral, then $ l^\#_\gamma(\beta_t^\#(F,\eps,\lambda,\eta))$ and ${\rm I}^\#_\gamma(\beta_t^\#(F,\eps,\lambda,\eta))$ depend only on $F$ and $\lambda$, so the continuity is a consequence of the continuity of $\beta$. If $\gamma$ is peripheral, then by Lemma~\ref{ray-quake} we have \[ \begin{array}{l} l^\#_\gamma(\beta_t^\#(F,\eps,\lambda,\eta))= l^\#_\gamma(F,\eps)-t {\rm I}^\#_\gamma(F,\eps,\lambda,\eta)\\ {\rm I}^\#_\gamma(\beta_t^\#(F,\eps,\lambda,\eta))={\rm I}^\#_\gamma(F,\eps,\lambda,\eta)\,. \end{array} \] For every $\xi\in {\rm I}^\#(S)$ let us consider the map $\mr_{\geq 0}\times\Tt_\cG^\#(S)\rightarrow\Tt_\cG^\#(S)$ that associates to $(t,(F,\eps))$ the projection on $\Tt_\cG^\#(S)$ of $\beta_t(F,\eps,\xi(F))$ (where $\xi(F)$ is the realization of $\xi$ with respect to the structure given by $F$). By (2) it is a flow on $\Tt_\cG(S)^\#$. We will denote by $\Ee^\#_\xi$ the homeomorphism of $\Tt_\cG(S)^\#$ corresponding to such a flow at time $1$ (notice that $\Ee^\#_\xi\circ\Ee^\#_\xi=\Ee^\#_{2\xi}$); it will be called the {\it enhanced earthquake along $\xi$}. {\bf Earthquake Theorem.} \begin{teo}\label{quake-teo}{\rm [Earthquake Theorem on $\Tt_\cG(S)$]} For every $F_0,\ F_1 \in \Tt_\cG(S)$, denote by $m$ the number of points in $V$ that correspond to cusps neither of $F_0$ nor of $F_1$. Then there exist exactly $2^m$ left earthquakes such that $F_1= \beta^L_\lambda(F_0)$.
A similar statement holds with respect to right-quakes. \end{teo} This is a consequence of the somewhat more precise \begin{teo}\label{quake-teo-bis}{\rm [Earthquake Theorem on $\Tt_\cG(S)^\#$]} For every $(F_0,\eps_0),\ (F_1,\eps_1) \in \Tt_\cG(S)^\#$, there is a unique $\xi\in {\rm I}^\#(S)$ such that $\Ee_\xi^\#(F_0,\eps_0)=(F_1,\eps_1)$. Similarly for the right quakes. \end{teo} Given two ``signed'' surfaces $(F_0,\sigma_0)$ and $(F_1,\sigma_1)$ in $\Tt_\cG(S)$, where the respective signatures are arbitrary maps $\sigma_j: V \to \{ \pm 1\}$, we say that they are {\it left-quake compatible} if there exists a left earthquake $(F_1,\lambda_1)=\beta^L(F_0,\lambda_0)$ such that $\sigma_j= \sigma_{\lambda_j}$. The following is an easy Corollary of Lemma \ref{ray-quake} and of Theorem \ref{quake-teo}. \begin{cor}\label{nec-comp} The signed surfaces $(F_0,\sigma_0)$ and $(F_1,\sigma_1)$ are left-quake compatible if and only if for every $i=1,\ldots,r$ the following condition is satisfied: \smallskip If $l_{C_i}(F_1)< l_{C_i}(F_0)$, then $\sigma_0(i)= 1 \ $. If $l_{C_i}(F_1)> l_{C_i}(F_0) \ $, then $\sigma_1(i)= 1 \ $. \smallskip Symmetric statements hold w.r.t. the right-quake compatibility. \end{cor} In Section \ref{moreAdS} we will outline an {\it AdS proof} of the Earthquake Theorem (following \cite{BSK}) that generalizes Mess's proof in the special case of compact $S$. \medskip {\bf $\Mm\Ll_\cG(S)$ as tangent bundle of $\Tt_\cG(S)$.} We have seen above that the bundle $$\pG^\#: \Mm\Ll_\cG(S)^\# \to \Tt_\cG(S)^\# $$ shares some properties with the {\it tangent bundle} $T\Tt_\cG(S)^\#$ of its base space. We are going to substantiate this fact by means of quake-flows. In fact we have associated to every $\xi\in {\rm I}^\#(S)$ a flow on $\Tt_\cG(S)^\#$, so we can consider the infinitesimal generator of such a flow, which is a vector field on $\Tt_\cG(S)^\#$, say $X_\xi$.
\begin{prop} The map \[ \Pi:\Tt_\cG(S)^\#\times {\rm I}^\#(S)\rightarrow T\Tt_\cG(S)^\# \] defined by $\Pi(F,\xi)=X_\xi(F)$ is a trivialization of $T\Tt_\cG(S)^\#$. \end{prop} As in the case of compact $S$, this is a consequence of the convexity of the length functions along earthquake paths. \begin{remark}{\rm The map $\Pi$ is only a {\it topological trivialization}. This means that the identifications between tangent spaces arising from $\Pi$ are not linear.} \end{remark} For a fixed type $\theta$, denote by ${\rm I}^\#(S)^\theta$ the set of points corresponding to laminations that do not enter any cusp. It is clear that for a point $F\in\Tt^\theta_\cG(S)^\#$ and $\xi\in{\rm I}^\#(S)^\theta$, the vector $X_\xi(F)$ is tangent to $\Tt^\theta_\cG(S)^\#$. So we get that the restriction of $\Pi$ to $\Tt^\theta_\cG(S)^\#\times {\rm I}^\#(S)^\theta$ is a trivialization of $T\Tt^\theta_\cG(S)^\#$.